
Bibliography

[1] B. W. Albers and A. K. Agrawal, ``Schlieren Analysis of an Oscillating Gas-jet Diffusion Flame,'' Combustion and Flame, Vol. 119, pp. 84-94, 1999.

[2] M. Bagci, Y. Yardimci, and A. E. Cetin, ``Moving Object Detection Using Adaptive Subband Decomposition and Fractional Lower Order Statistics in Video Sequences,'' Signal Processing, pp. 1941-1947, 2002.

[3] A. E. Cetin and R. Ansari, ``Signal Recovery from Wavelet Transform Maxima,'' IEEE Transactions on Signal Processing, Vol. 42, pp. 194-196, 1994.

[4] D. S. Chamberlin and A. Rose, The First Symposium (International) on Combustion, The Combustion Institute, Pittsburgh, pp. 27-32, 1965.

[5] T. Chen, P. Wu, and Y. Chiou, ``An Early Fire-Detection Method Based on Image Processing,'' in Proc. of the IEEE Int. Conf. on Image Processing (ICIP '04), pp. 1707-1710, 2004.

[6] R. T. Collins, A. J. Lipton, and T. Kanade, ``A System for Video Surveillance and Monitoring,'' in Proc. of the American Nuclear Society (ANS) Eighth International Topical Meeting on Robotics and Remote Systems, Pittsburgh, PA, 1999.

[7] J. W. Davis and A. F. Bobick, ``The Representation and Recognition of Action Using Temporal Templates,'' in Proc. of the IEEE Computer Vision and Pattern Recognition Conference (CVPR '97), pp. 928-934, 1997.

[8] Fastcom Technology SA, Boulevard de Grancy 19A, CH-1006 Lausanne, Switzerland, ``Method and Device for Detecting Fires Based on Image Analysis,'' Patent Cooperation Treaty Application No. PCT/CH02/00118, PCT Publication No. WO02/069292, 2002.

[9] O. N. Gerek and A. E. Cetin, ``Adaptive Polyphase Subband Decomposition Structures for Image Compression,'' IEEE Transactions on Image Processing, Vol. 9, No. 10, pp. 1649-1660, 2000.

[10] N. Haering, R. J. Qian, and M. I. Sezan, ``A Semantic Event-Detection Approach and Its Application to Detecting Hunts in Wildlife Video,'' IEEE Transactions on Circuits and Systems for Video Technology, Vol. 10, No. 6, pp. 857-868, 2000.

[11] G. Healey, D. Slater, T. Lin, B. Drda, and A. D. Goedeke, ``A System for Real-time Fire Detection,'' in Proc. of the IEEE Computer Vision and Pattern Recognition Conference (CVPR '93), pp. 605-606, 1993.

[12] F. van der Heijden, Image Based Measurement Systems: Object Recognition and Parameter Estimation, Wiley, 1996.

[13] O. Javed and M. Shah, ``Tracking and Object Classification for Automated Surveillance,'' in Proc. of the European Conference on Computer Vision (ECCV '02), pp. 343-357, 2002.

[14] C. W. Kim, R. Ansari, and A. E. Cetin, ``A Class of Linear-phase Regular Biorthogonal Wavelets,'' in Proc. of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP-92), Vol. 4, pp. 673-676, 1992.

[15] C. B. Liu and N. Ahuja, ``Vision Based Fire Detection,'' in Proc. of the International Conference on Pattern Recognition (ICPR '04), Vol. 4, pp. 134-137, 2004.

[16] S. Mallat and S. Zhong, ``Characterization of Signals from Multiscale Edges,'' IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 7, pp. 710-732, 1992.

[17] M. R. Naphade, T. Kristjansson, B. Frey, and T. S. Huang, ``Probabilistic Multimedia Objects (Multijects): A Novel Approach to Video Indexing and Retrieval in Multimedia Systems,'' in Proc. of the IEEE Int. Conf. on Image Processing (ICIP '98), pp. 536-540, 1998.

[18] B. Parhami, ``Voting Algorithms,'' IEEE Transactions on Reliability, Vol. 43, No. 4, pp. 617-629, 1994.

[19] W. Phillips III, M. Shah, and N. V. Lobo, ``Flame Recognition in Video,'' Pattern Recognition Letters, Vol. 23, No. 1-3, pp. 319-327, 2002.

[20] D. A. Reynolds and R. C. Rose, ``Robust Text-Independent Speaker Identification Using Gaussian Mixture Speaker Models,'' IEEE Transactions on Speech and Audio Processing, Vol. 3, No. 1, pp. 72-83, 1995.

[21] C. Stauffer and W. E. L. Grimson, ``Adaptive Background Mixture Models for Real-Time Tracking,'' in Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2, pp. 246-252, 1999.

Figure: (a) A sample fire color cloud in RGB space, and (b) the spheres centered at the means of the Gaussian distributions with radius twice the standard deviation.
Image fig1
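The color test illustrated by this figure — Gaussian distributions in RGB space, with a pixel accepted as fire-colored when it falls inside a sphere of radius twice the standard deviation around a mixture mean — can be sketched as follows. The mean vectors and standard deviations below are made-up placeholders for illustration, not the values estimated from the fire color cloud in the figure.

```python
import numpy as np

# Placeholder mixture parameters (NOT the values fitted in this work):
# each row of MEANS is an RGB mean, SIGMAS holds the per-Gaussian std.
MEANS = np.array([[255, 120, 20], [230, 160, 40], [200, 80, 10]], dtype=float)
SIGMAS = np.array([25.0, 20.0, 15.0])

def is_fire_colored(rgb):
    """True if rgb lies within 2*sigma of any Gaussian mean in RGB space."""
    dists = np.linalg.norm(MEANS - np.asarray(rgb, dtype=float), axis=1)
    return bool(np.any(dists <= 2.0 * SIGMAS))

print(is_fire_colored([250, 125, 25]))  # near the first mean -> True
print(is_fire_colored([0, 0, 255]))     # pure blue -> False
```

In practice the means and standard deviations would be estimated from a training set of fire pixels, e.g. with a Gaussian mixture model as in [20].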

Figure: A two-stage filter bank. HPF and LPF represent half-band high-pass and low-pass filters, with filter coefficients {-0.25, 0.5, -0.25} and {0.25, 0.5, 0.25}, respectively. This filter bank is used for wavelet analysis.
Image fig2
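The temporal analysis carried out by this filter bank can be sketched directly from the coefficients given in the caption. In this illustrative (non-subsampled) version, $d_n$ is taken as the first-stage high-pass output and $e_n$ as the high-pass output of the low-pass branch; whether the actual implementation subsamples between stages is not specified here, so that detail is an assumption of the sketch.

```python
import numpy as np

# Half-band filters from the figure caption.
HPF = np.array([-0.25, 0.5, -0.25])  # high-pass
LPF = np.array([0.25, 0.5, 0.25])    # low-pass

def analyze(x):
    """Two-stage analysis of a pixel's intensity history x_n (no subsampling)."""
    d = np.convolve(x, HPF, mode="same")     # first-stage detail signal d_n
    low = np.convolve(x, LPF, mode="same")   # low-pass branch
    e = np.convolve(low, HPF, mode="same")   # second-stage detail signal e_n
    return d, e

# A flickering (flame-like) pixel produces large |d_n|; a steady one does not.
flicker = np.array([10, 200, 20, 210, 15, 205, 25], dtype=float)
steady = np.full(7, 100.0)
d1, _ = analyze(flicker)
d2, _ = analyze(steady)
print(np.abs(d1).max() > np.abs(d2).max())  # True
```

This is the behavior exploited in the next two figures: flame pixels produce large fluctuations in $d_n$ and $e_n$, while background and steadily fire-colored pixels do not.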

Figure: (a) Temporal variation of the image pixel $x_n[111, 34]$. The pixel at [111, 34] is part of a flame in frames n = 1, 2, 3, 19, 23, 24, 41, and 50. It becomes part of the background for n = 12,..., 17, 20, 21, 26, 27, 31,..., 39, 45, 52,..., 60. Wavelet domain subsignals (b) $d_n$ and (c) $e_n$ reveal the fluctuations of the pixel at [111, 34].
Image fig3

Figure: (a) Temporal history of the pixel at [18, 34]. It is part of a fire-colored object for n = 4, 5, 6, 7, and 8, and becomes part of the background afterwards. The corresponding subsignals (b) $d_n$ and (c) $e_n$ exhibit stationary behavior for $n>8$.
Image fig4

Figure: (a) A child with a fire-colored t-shirt, and (b) the absolute sum of spatial wavelet transform coefficients, $\vert x_{lh}[k,l]\vert$+$\vert x_{hl}[k,l]\vert$+$\vert x_{hh}[k,l]\vert$, of the region bounded by the indicated rectangle.
Image fig5

Figure: (a) Fire, and (b) the absolute sum of spatial wavelet transform coefficients, $\vert x_{lh}[k,l]\vert$+$\vert x_{hl}[k,l]\vert$+$\vert x_{hh}[k,l]\vert$, of the region bounded by the indicated rectangle.
Image fig6
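The spatial clue shown in the two figures above — the absolute sum $\vert x_{lh}\vert + \vert x_{hl}\vert + \vert x_{hh}\vert$ of one-level 2-D wavelet detail coefficients — can be sketched with the same half-band filters applied separably along rows and columns. The synthetic "flame-like" and "shirt-like" regions below are stand-ins for the rectangles in the figures: a genuine flame region is spatially textured and yields much larger detail energy than a uniformly fire-colored object such as a t-shirt.

```python
import numpy as np

HPF = np.array([-0.25, 0.5, -0.25])  # half-band high-pass
LPF = np.array([0.25, 0.5, 0.25])    # half-band low-pass

def sep_filter(img, row_f, col_f):
    """Apply row_f along rows and col_f along columns (separable filtering)."""
    tmp = np.apply_along_axis(lambda r: np.convolve(r, row_f, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, col_f, mode="same"), 0, tmp)

def detail_energy(region):
    """Sum of |x_lh| + |x_hl| + |x_hh| over a region (one decomposition level)."""
    x_lh = sep_filter(region, HPF, LPF)  # horizontal details
    x_hl = sep_filter(region, LPF, HPF)  # vertical details
    x_hh = sep_filter(region, HPF, HPF)  # diagonal details
    return np.sum(np.abs(x_lh) + np.abs(x_hl) + np.abs(x_hh))

rng = np.random.default_rng(0)
flame_like = rng.uniform(0, 255, (16, 16))  # high spatial variation
shirt_like = np.full((16, 16), 180.0)       # nearly uniform color
print(detail_energy(flame_like) > detail_energy(shirt_like))  # True
```

Thresholding this quantity over a candidate region separates textured flame regions from flat fire-colored objects, which is the distinction the two figures illustrate.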

Figure: (a) With the method using color and temporal variation only (Method 2) [19], false alarms are issued for the fire colored line on the moving truck and the ground, (b) our method (Method 1) does not produce any false alarms.
Image fig7

Figure: Sample images (a) and (b) are from Movies 7 and 9, respectively. With the method using color and temporal variation only (Method 2) [19], false alarms are issued (c) for the arm of the man and (d) for the fire-colored parked car. Our method does not give any false alarms in such cases (see Table 1).
Image fig8


Figure: Sample images (a) and (b) are from Movies 2 and 4, respectively. Flames are successfully detected with our method (Method 1) in (c) and (d). In (c), although flames are partially occluded by the fence, a fire alarm is issued successfully. Fire pixels are painted in bright green.
Image fig9



Table: Comparison of the proposed method (Method 1), the method based on color and temporal variation clues only (Method 2) described in [19], and the method proposed in [5] (Method 3).
Video      Frames     Frames detected as fire   False positive frames   Description
sequence   with fire  Method 1 /   2 /   3      Method 1 /   2 /   3
Movie 1        0             0 /  46 /  13            0 /  46 /  13     A fire-colored moving truck
Movie 2        5             5 /   5 /   5            0 /   0 /   0     Fire in a garden
Movie 3        0             0 /   7 /   5            0 /   7 /   5     A car leaving a fire-colored parking lot
Movie 4       37            37 /  44 /  47            0 /   7 /  10     A burning box
Movie 5       64            64 /  88 /  84            0 /  24 /  20     A burning pile of wood
Movie 6       41            41 /  56 /  50            0 /  15 /   9     Fire behind a man with a fire-colored shirt
Movie 7        0             0 /  14 /   7            0 /  14 /   7     Four men walking in a room
Movie 8       18            18 /  18 /  18            0 /   0 /   0     Fire in a fireplace
Movie 9        0             0 /  15 /   5            0 /  15 /   5     A crowded parking lot
Movie 10       0             0 /   0 /   0            0 /   0 /   0     Traffic on a highway
Movie 11       0             9 / 107 /  86            9 / 107 /  86     A dancing man with a fire-colored shirt





Table: Time performance comparison of Methods 1, 2, and 3 for the movies in Table 1. The values are the processing times per frame in milliseconds.

Videos     Method 1   Method 2   Method 3
Movie 1       16         12         14
Movie 2       16         12         14
Movie 3       16         12         14
Movie 4       16         12         14
Movie 5       17         13         15
Movie 6       17         13         15
Movie 7       17         13         15
Movie 8       17         13         15
Movie 9       16         12         14
Movie 10      16         12         14
Movie 11      16         12         14


ugur toreyin 2005-11-27