Browsing by Subject "hidden Markov models"
Now showing 1 - 2 of 2
Item Open Access
Dynamic texture analysis in video with application to flame, smoke and volatile organic compound vapor detection (2009)
Günay, Osman
Dynamic textures are moving image sequences that exhibit stationary characteristics in time, such as fire, smoke, volatile organic compound (VOC) plumes, and waves. Most surveillance applications already have motion detection and recognition capability, but dynamic texture detection algorithms are not an integral part of these applications. In this thesis, image processing based algorithms for the detection of specific dynamic textures are developed. Our methods can be deployed in practical surveillance applications to detect VOC leaks, fire, and smoke. The method developed for VOC emission detection in infrared videos uses a change detection algorithm to find the rising VOC plume; the rising characteristic of the plume is detected using a hidden Markov model (HMM), and the dark regions that form on the leaking equipment are found using a background subtraction algorithm. Another method, based on an active learning algorithm, is developed to detect wildfires at night and close-range flames. The active learning algorithm is based on the Least-Mean-Square (LMS) method: decisions from the sub-algorithms, each of which characterizes a certain property of the texture to be detected, are combined using the LMS algorithm to reach a final decision. A third image processing method is developed to detect fire and smoke in video sequences from a moving camera. The global motion of the camera is compensated by finding an affine transformation between frames using optical flow and RANSAC. Three-frame change detection with motion compensation is used for fire detection with the moving camera, and a background subtraction algorithm with global motion estimation is developed for smoke detection.
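As an illustration of the HMM step above, here is a minimal sketch, assuming the change detector reports, for each frame, how much the candidate plume region has grown upward, quantized to three levels (0 = none, 1 = slight, 2 = strong). A "rising plume" model and a "background" model are scored with the forward algorithm and the more likely one is chosen; the two-state design and all parameter values are illustrative assumptions, not the trained models from the thesis.

```python
# Minimal sketch of HMM-based "rising plume" detection (illustrative parameters only).
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm to avoid underflow."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_lik += np.log(s)
        alpha /= s
    return log_lik

# Two-state models: state 0 = "inactive", state 1 = "rising" (assumed design).
pi = np.array([0.5, 0.5])
A_plume = np.array([[0.6, 0.4],      # a rising plume tends to stay in the rising state
                    [0.2, 0.8]])
B_plume = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.4, 0.5]])  # rising state mostly emits growth observations
A_bg = np.array([[0.9, 0.1],
                 [0.7, 0.3]])
B_bg = np.array([[0.8, 0.15, 0.05],
                 [0.6, 0.3, 0.1]])

def is_rising_plume(obs):
    """Maximum-likelihood decision between the plume model and the background model."""
    return forward_log_likelihood(obs, pi, A_plume, B_plume) > \
           forward_log_likelihood(obs, pi, A_bg, B_bg)

obs = [0, 1, 2, 2, 1, 2, 2, 2]   # upward growth observed in most frames
print(is_rising_plume(obs))      # True for this sequence
```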
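The LMS-based decision fusion can likewise be sketched in a few lines. The sketch below assumes each sub-algorithm (e.g. color, flicker, or motion analysis) emits a confidence value in [-1, 1] per frame and that an operator occasionally supplies the true label as active-learning feedback; the learning rate, the confidence scale, and the feedback scheme are assumptions for illustration, not the thesis's exact setup.

```python
# Minimal sketch of LMS-based fusion of sub-algorithm decisions.
import numpy as np

class LMSFusion:
    def __init__(self, n_sub, mu=0.05):
        self.w = np.full(n_sub, 1.0 / n_sub)  # start from uniform weights
        self.mu = mu                          # assumed learning rate

    def decide(self, d):
        """Fused decision from the vector d of sub-algorithm confidences."""
        return float(np.dot(self.w, np.asarray(d, dtype=float)))

    def update(self, d, y_true):
        """Standard LMS step: adjust the weights to reduce the squared error
        between the fused output and the operator's label."""
        d = np.asarray(d, dtype=float)
        e = y_true - np.dot(self.w, d)
        self.w += self.mu * e * d

fusion = LMSFusion(n_sub=3)
frame_confidences = [0.8, -0.2, 0.6]          # hypothetical per-frame sub-algorithm outputs
print(fusion.decide(frame_confidences))        # fused score; threshold at 0 for an alarm
fusion.update(frame_confidences, y_true=1.0)   # operator confirms there is a fire
```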
Item Open Access
Vision based behavior recognition of laboratory animals for drug analysis and testing (2009)
Sandıkcı, Selçuk
In pharmacological experiments, a popular way to discover the effects of psychotherapeutic drugs is to monitor the behavior of laboratory mice subjected to the drugs with vision sensors. Such surveillance is currently carried out by human observers for practical reasons; automating the behavior analysis of laboratory mice with vision-based methods saves both time and human labor. In this study, we focus on automated action recognition of laboratory mice from short video clips in which only one action is performed. A two-stage hierarchical recognition method is designed to address the problem. In the first stage, still actions such as sleeping are separated from the other action classes based on the amount of the motion area. The remaining action classes are discriminated in the second stage, for which we propose four alternative methods. In the first method, we project the 3D action volume onto 2D images by encoding the temporal variation of each pixel using the discrete wavelet transform (DWT); the resulting images are modeled and classified by hidden Markov models in the maximum-likelihood sense. The second method transforms the action recognition problem into a sequence matching problem by explicitly describing the pose of the subject in each frame; instead of segmenting the subject from the background, only the temporally active portions of the subject are taken into account in the pose description, and histograms of oriented gradients are employed to describe the poses. In the third method, actions are represented by a set of histograms of normalized spatio-temporal gradients computed from the entire action volume at different temporal resolutions. The last method assumes that actions are collections of known spatio-temporal templates and can be described by histograms of those templates; to locate and describe such templates, a multi-scale 3D Harris corner detector and histograms of oriented gradients and optical flow vectors are employed, respectively. We test the proposed action recognition framework on a publicly available mice action dataset and compare each method with well-known studies in the literature. We find that the second and fourth methods outperform both the related studies and the other two methods in our framework in overall recognition rate; however, these more successful methods come with a heavy computational cost. This study shows that representing actions as an ordered sequence of pose descriptors is quite effective for action recognition, and the success of the fourth method reveals that sparse spatio-temporal templates characterize the content of actions quite well.
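The second method's view of an action as an ordered sequence of per-frame pose descriptors can be illustrated with a small sequence matching sketch. The code below assumes the per-frame descriptors (e.g. HOG histograms) have already been computed; dynamic time warping with a Euclidean frame distance and a 1-nearest-neighbor rule are assumptions chosen for illustration and may differ from the matching scheme used in the thesis.

```python
# Minimal sketch of pose-sequence matching for action recognition.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic-time-warping distance between two descriptor sequences,
    each an (n_frames, descriptor_dim) array."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a frame of seq_a
                                 cost[i, j - 1],      # skip a frame of seq_b
                                 cost[i - 1, j - 1])  # match the two frames
    return cost[n, m]

def classify(query_seq, training_set):
    """1-nearest-neighbor classification; training_set is a list of
    (label, descriptor_sequence) pairs."""
    return min(training_set, key=lambda item: dtw_distance(query_seq, item[1]))[0]

# Toy usage with random 36-dimensional vectors standing in for HOG pose descriptors.
rng = np.random.default_rng(0)
train = [("groom", rng.random((20, 36))), ("rear", rng.random((25, 36)))]
query = rng.random((22, 36))
print(classify(query, train))
```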