Authors: Habiboğlu, Y. H.; Günay, O.; Çetin, A. Enis
Date accessioned/available: 2016-02-08
Date issued: 2011-09-17
ISSN: 0932-8092
URI: http://hdl.handle.net/11693/21275
Abstract: This paper proposes a video-based fire detection system that uses color, spatial, and temporal information. The system divides the video into spatio-temporal blocks and uses covariance-based features extracted from these blocks to detect fire. The feature vectors take advantage of both the spatial and the temporal characteristics of flame-colored regions. The extracted features are trained and tested using a support vector machine (SVM) classifier. The system does not use a background subtraction method to segment moving regions and can therefore be used, to some extent, with non-stationary cameras. The computationally efficient method can process 320×240 video frames at around 20 frames per second on an ordinary PC with a dual-core 2.2 GHz processor. In addition, it is shown to outperform a previous method in terms of detection performance.
Language: English
Keywords: Covariance descriptors; Fire detection; Support vector machines; Background subtraction method; Computationally efficient; Descriptors; Detection performance; Dual core; Feature vectors; Fire detection systems; Flame detection; Frames per second; Moving regions; Nonstationary; Spatio-temporal; Temporal characteristics; Temporal information; Video frame; Covariance matrix; Fire detectors
Title: Covariance matrix-based fire and flame detection method in video
Type: Article
DOI: 10.1007/s00138-011-0369-1
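
The following is a minimal sketch of the kind of covariance descriptor the abstract describes: per-pixel property vectors are collected over a spatio-temporal block and summarized by their covariance matrix, whose upper triangle serves as the feature vector. The specific property set (RGB values plus spatial and temporal derivatives of the red channel), the block shape, and the function name `block_covariance_feature` are assumptions for illustration, not the authors' exact implementation.

```python
# Hypothetical sketch of covariance-based feature extraction from a
# spatio-temporal video block; the property set is an assumption.
import numpy as np

def block_covariance_feature(block):
    """Compute a covariance descriptor for a spatio-temporal block.

    block: float ndarray of shape (T, H, W, 3) holding RGB values.
    Returns the upper-triangular entries of the covariance matrix of the
    per-pixel property vectors as a 1-D feature vector.
    """
    R, G, B = block[..., 0], block[..., 1], block[..., 2]

    # Spatial and temporal derivatives of the red channel (flame-colored
    # regions are typically red-dominated); using only the red channel
    # here is an illustrative choice.
    Ix = np.gradient(R, axis=2)   # horizontal derivative
    Iy = np.gradient(R, axis=1)   # vertical derivative
    It = np.gradient(R, axis=0)   # temporal derivative

    # One property vector per pixel in the block: (N, 6) matrix.
    props = np.stack([R, G, B, Ix, Iy, It], axis=-1).reshape(-1, 6)

    # Covariance of the property vectors; the descriptor is its upper triangle.
    cov = np.cov(props, rowvar=False)
    iu = np.triu_indices(cov.shape[0])
    return cov[iu]
```

In line with the abstract, such feature vectors extracted from flame-colored blocks of labeled fire and non-fire videos would then be used to train and test an SVM classifier (for example, a standard library implementation such as scikit-learn's SVC).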