Browsing by Subject "wavelet transform"
Now showing 1 - 5 of 5
Item Open Access
Directionally selective fractional wavelet transform using a 2-d non-separable unbalanced lifting structure (Springer, Berlin, Heidelberg, 2012)
Keskin, Furkan; Çetin, A. Enis
In this paper, we extend the recently introduced concept of the fractional wavelet transform to obtain directional subbands of an image. Fractional wavelet decomposition is based on two-channel unbalanced lifting structures, whereby it is possible to decompose a given discrete-time signal x[n], sampled with period T, into two sub-signals x_1[n] and x_2[n] whose average sampling periods are pT and qT, respectively. The fractions p and q are rational numbers satisfying the condition 1/p + 1/q = 1. The filters used in the lifting structure are designed using the Lagrange interpolation formula. 2-d separable and non-separable extensions of the proposed fractional wavelet transform are developed. Using a non-separable unbalanced lifting structure, directional subimages for five different directions are obtained. © 2012 Springer-Verlag.

Item Open Access
Fire detection algorithms using multimodal signal and image analysis (2009)
Töreyin, Behçet Uğur
Dynamic textures are common in natural scenes. Examples of dynamic textures in video include fire, smoke, clouds, volatile organic compound (VOC) plumes in infra-red (IR) videos, trees in the wind, and sea and ocean waves. Researchers have extensively studied 2-D textures and related problems in image processing and computer vision; on the other hand, there is very little research on dynamic texture detection in video. In this dissertation, signal and image processing methods developed for the detection of a specific set of dynamic textures are presented. Signal and image processing methods are developed for the detection of flames and smoke in open and large spaces, with a range of up to 30 m to the camera, in visible-range and infra-red (IR) video. Smoke is semi-transparent at the early stages of a fire.
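As background for the lifting-based decomposition described in the first result above, the following is a minimal sketch of a standard two-channel lifting step (the balanced Haar case). The paper's unbalanced fractional structure with Lagrange-designed filters generalizes this; the code is an illustration only, not the authors' implementation.

```python
def haar_lifting_forward(x):
    """One level of a two-channel lifting decomposition (Haar case):
    split -> predict -> update.  Assumes len(x) is even."""
    even = x[0::2]
    odd = x[1::2]
    # Predict: detail is the odd sample minus its prediction from the even sample.
    d = [o - e for o, e in zip(odd, even)]
    # Update: adjust the even samples so 's' is a coarse approximation signal.
    s = [e + di / 2 for e, di in zip(even, d)]
    return s, d

def haar_lifting_inverse(s, d):
    """Invert the lifting steps in reverse order for perfect reconstruction."""
    even = [si - di / 2 for si, di in zip(s, d)]
    odd = [di + e for di, e in zip(d, even)]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x
```

Because every lifting step is invertible by construction, perfect reconstruction holds regardless of the predict/update filters chosen, which is what makes the lifting framework attractive for the unbalanced designs in the paper.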
Edges present in image frames containing smoke start losing their sharpness, and this leads to an energy decrease in the high-band frequency content of the image. Local extrema in the wavelet domain correspond to the edges in an image, and the decrease in the energy content of these edges is an important indicator of smoke in the viewing range of the camera. Image regions containing flames appear as fire-colored moving regions in visible-range video and as bright moving regions in IR video. In addition to motion and color (brightness) clues, the flame flicker process is detected using a hidden Markov model (HMM) describing its temporal behavior. Image frames are also analyzed spatially: boundaries of flames are represented in the wavelet domain, and the high-frequency nature of the boundaries of fire regions is used as a further clue to model the flame flicker. Temporal and spatial clues extracted from the video are combined to reach a final decision.

Signal processing techniques for the detection of flames with pyroelectric (passive) infrared (PIR) sensors are also developed. The flame flicker process of an uncontrolled fire and the ordinary activity of human beings and other objects are modeled using a set of Markov models, which are trained on the wavelet transform of the PIR sensor signal. Whenever there is activity within the viewing range of the PIR sensor, the sensor signal is analyzed in the wavelet domain and the wavelet signals are fed to the set of Markov models. A fire or no-fire decision is made according to the Markov model producing the highest probability.

Smoke at far distances (> 100 m to the camera) exhibits different temporal and spatial characteristics than nearby smoke and fire. This demands methods explicitly developed for smoke detection at far distances rather than reusing nearby smoke detection methods. An algorithm for vision-based detection of smoke due to wildfires is developed.
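The smoke cue described above, edges losing sharpness and the resulting drop in high-band energy, can be sketched as follows. A first-difference filter stands in for the wavelet high band, and the drop threshold is an illustrative choice rather than a value from the thesis.

```python
def highband_energy(signal):
    """Energy of the high-frequency (detail) content of a 1-D signal,
    computed here with a simple first-difference filter as a stand-in
    for the wavelet high band."""
    return sum((signal[i + 1] - signal[i]) ** 2 for i in range(len(signal) - 1))

def smoke_suspected(bg_row, current_row, drop_ratio=0.5):
    """Flag smoke when the current frame's edge energy has dropped well below
    the background's, i.e. edges are blurring behind semi-transparent smoke.
    The drop_ratio threshold is illustrative, not taken from the thesis."""
    e_bg = highband_energy(bg_row)
    e_cur = highband_energy(current_row)
    return e_bg > 0 and e_cur < drop_ratio * e_bg
```

In practice the same comparison would be applied per block to 2-D wavelet subbands rather than to a single row, but the blurring-lowers-high-band-energy mechanism is the same.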
The main detection algorithm is composed of four sub-algorithms detecting (i) slow-moving objects, (ii) smoke-colored regions, (iii) rising regions, and (iv) shadows. Each sub-algorithm yields its own decision as a zero-mean real number representing the confidence level of that particular sub-algorithm, and the confidence values are linearly combined for the final decision.

Another contribution of this thesis is a framework for active fusion of sub-algorithm decisions. Most computer-vision-based detection algorithms consist of several sub-algorithms whose individual decisions are integrated to reach a final decision. The proposed adaptive fusion method is based on the least-mean-square (LMS) algorithm. The weights corresponding to the individual sub-algorithms are updated on-line using the adaptive method in the training (learning) stage. The error function of the adaptive training process is defined as the difference between the weighted sum of decision values and the decision of an oracle, who may be the user of the detector. The proposed decision fusion method is used in wildfire detection.

Item Open Access
Human face detection and eye location in video using wavelets (2006)
Türkan, Mehmet
Human face detection and eye localization have received significant attention during the past several years because of the wide range of commercial and law-enforcement applications. In this thesis, wavelet-domain human face detection and eye localization algorithms are developed. After all possible face-candidate regions are determined using color information in a given still image or video frame, each region is filtered by a high-pass filter of a wavelet transform. In this way, edge-highlighted, caricature-like representations of the candidate regions are obtained. Horizontal, vertical, and filter-like edge projections of the candidate regions are used as feature signals for classification with dynamic programming (DP) and support vector machines (SVMs).
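The projection features just described can be sketched as follows. Here "horizontal projection" sums each row of the edge-highlighted region and "vertical projection" sums each column; this is one common convention, and the thesis's third, filter-like projection is omitted. A sketch only, not the thesis implementation.

```python
def edge_projections(edge_map):
    """Horizontal and vertical projections of an edge-highlighted region:
    summing edge magnitudes along rows and along columns yields two 1-D
    feature signals that can be fed to a classifier."""
    rows = len(edge_map)
    cols = len(edge_map[0])
    # Horizontal projection: one value per row (sum across that row).
    horizontal = [sum(edge_map[r][c] for c in range(cols)) for r in range(rows)]
    # Vertical projection: one value per column (sum down that column).
    vertical = [sum(edge_map[r][c] for r in range(rows)) for c in range(cols)]
    return horizontal, vertical
```

Reducing a 2-D candidate region to short 1-D projection signals is what makes the downstream DP or SVM classification cheap compared with classifying raw pixels.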
It turns out that the proposed feature extraction method provides good detection rates with SVM-based classifiers. Furthermore, the positions of the eyes can be localized successfully using horizontal projections and profiles of horizontally and vertically cropped edge-image regions. After an approximate horizontal level is detected, each eye is first localized horizontally using the horizontal projections of the associated edge regions. Horizontal edge profiles are then calculated on the estimated horizontal levels. After eye-candidate points are determined by pairing the local-maximum locations in the horizontal profiles with the associated horizontal levels, verification is carried out by an SVM-based classifier. The localization results show that the proposed algorithm is robust to both illumination and scale changes.

Item Open Access
Moving object detection and tracking in wavelet compressed video (2003)
Töreyin, Behçet Uğur
In many surveillance systems the video is stored in wavelet-compressed form. An algorithm is developed for detecting moving objects and regions in video that is compressed using a wavelet transform (WT). The algorithm estimates the WT of the background scene from the WTs of past image frames of the video. The WT of the current image is compared with the WT of the background, and the moving objects are determined from the difference. The algorithm does not perform an inverse WT to obtain the actual pixels of the current image or of the estimated background, which leads to a computationally efficient method and system compared with existing motion estimation methods. In a second aspect, the sizes and locations of moving objects and regions are estimated from those wavelet coefficients of the current image that differ from the estimated background wavelet coefficients. This is possible because wavelet coefficients of an image carry both frequency and space information. In this way, we are able to track the detected objects in video.
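The wavelet-domain background comparison described above can be sketched as follows. The running-average weight and the difference threshold are illustrative choices, not the thesis's exact estimator, and the wavelet coefficients are shown as a flat list for simplicity.

```python
def update_background(bg_wt, frame_wt, a=0.95):
    """Recursively estimate the background's wavelet coefficients from past
    frames with an exponential running average.  The weight a=0.95 is an
    illustrative choice; stationary coefficients dominate the estimate."""
    return [a * b + (1 - a) * f for b, f in zip(bg_wt, frame_wt)]

def moving_coefficients(bg_wt, frame_wt, thresh=10.0):
    """Indices of wavelet coefficients that differ markedly from the
    background estimate.  Because wavelet coefficients carry spatial as
    well as frequency information, these indices localize moving regions
    without any inverse transform back to pixels."""
    return [i for i, (b, f) in enumerate(zip(bg_wt, frame_wt))
            if abs(f - b) > thresh]
```

Working directly on the stored coefficients is what yields the computational saving noted in the abstract: no inverse WT is ever computed for either the current frame or the background.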
Another feature of the algorithm is that it can detect objects that slow down in video, which is important in many practical applications, including highway monitoring and queue control.

Item Open Access
Pyroelectric infrared (PIR) sensor based event detection (2009)
Soyer, Emin Birey
Pyroelectric infra-red (PIR) sensors have been used extensively in indoor and outdoor applications, as they are low-cost, easy to use, and widely available. PIR sensors respond to IR-radiating objects moving within their viewing range. Current sensors output a logical one when they detect a hot object's motion and a logical zero when there is no moving hot object; with this method only moving objects can be detected, and the false-alarm rate is high. Newer types of PIR sensors are more sophisticated and more capable, with a lower false-alarm rate than classical ones. Although they can distinguish pets from humans, they can still only detect the motion of hot objects, owing to the simple comparator structure inside; this structure cannot be altered and is not suitable for implementing new algorithms. A new approach is developed that modifies the sensor circuitry: instead of directly using the output of a classical PIR sensor, an analog signal is extracted from the PIR output and sampled. As a result, intelligent signal processing algorithms can be developed on the discrete-time sensor signal. In this way, it is possible to develop human, pet, and flame detection methods, to find the direction of moving objects, and to estimate their distances from the sensor. Furthermore, the path of a moving target can be estimated using a PIR sensor array. We focus on object and event classification using sampled PIR sensor signals. Pet, human, and flame detection methods are comparatively investigated.
Different human motion events are modeled and classified using hidden Markov models (HMMs) and conditional Gaussian mixture models (CGMMs). The sampled data is wavelet-transformed for feature extraction and then fed into the HMMs for analysis. The final decision is reached according to the Markov model producing the highest probability. Experimental results demonstrate the reliability of the proposed HMM-based decision and event classification algorithm.
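The decision rule used in both PIR-based works above, choosing the Markov model that produces the highest probability, can be sketched with a toy discrete HMM. The model parameters below are illustrative placeholders, not trained values from the theses.

```python
def likelihood(obs, start, trans, emit):
    """Forward algorithm for a discrete HMM: P(observations | model).
    start[i]: initial state probabilities; trans[i][j]: transition
    probabilities; emit[i][k]: probability of symbol k in state i."""
    alpha = [start[i] * emit[i][obs[0]] for i in range(len(start))]
    for o in obs[1:]:
        alpha = [emit[j][o] * sum(alpha[i] * trans[i][j]
                                  for i in range(len(alpha)))
                 for j in range(len(start))]
    return sum(alpha)

def classify(obs, models):
    """Return the label of the model producing the highest probability,
    mirroring the decision rule described in the abstracts above."""
    return max(models, key=lambda label: likelihood(obs, *models[label]))

# Two hypothetical models over a binary feature stream: a "fire" model that
# favors rapid state switching (flicker) and a "no_fire" model that favors
# staying in one state.  Each entry is (start, trans, emit); the numbers
# are illustrative, not trained values.
MODELS = {
    "fire":    ([0.5, 0.5], [[0.1, 0.9], [0.9, 0.1]], [[0.9, 0.1], [0.1, 0.9]]),
    "no_fire": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.9, 0.1], [0.1, 0.9]]),
}
```

In the theses the observation streams are wavelet-transformed sensor signals rather than raw binary symbols, but the argmax-over-models decision has the same shape.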