Utilising Deep Convolutional Neural Networks for Classifying Fire Disasters Through Surveillance: An Indoor and Outdoor Perspective to Predict Man-Made or Natural Disaster


Shankar Ganesan*, Kalaiselvi Geetha Manoharan, Ezhumalai Periyathambi

Department of Computer Science and Engineering, FEAT, Annamalai University, Chidambaram 608002, Tamilnadu, India

Department of Computer Science and Engineering, R.M.D. Engineering College, Chennai 601206, India

Corresponding Author Email: gs.cse@rmd.ac.in
Page: 1323-1330 | DOI: https://doi.org/10.18280/ria.370525

Received: 12 July 2023 | Revised: 2 September 2023 | Accepted: 9 September 2023 | Available online: 31 October 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

Disasters, unpredictable events inflicting substantial harm to human lives and property, are categorized broadly into natural and man-made occurrences. Fires, in particular, pose significant threats due to their hazardous impact and the challenges associated with early detection and origin determination. This study narrows its focus to fires, aiming to predict their onset and distinguish between man-made and natural causes. Over recent decades, traditional algorithms have been employed to predict fire events; however, this work adopts a novel approach, utilizing deep neural networks in conjunction with surveillance systems. The proposed model not only predicts the onset of a fire but also identifies its likely cause and location, specifically differentiating between indoor and outdoor fires. Furthermore, the model maintains the integrity of sensitive details present in the original images, an essential consideration for privacy and safety. The model was trained and tested on real-time fire datasets, resulting in an impressive accuracy of 97.44% in predicting the nature of the fire and classifying its location. This work thus contributes significantly to disaster management efforts by enabling early fire detection, facilitating rapid response, and ultimately safeguarding human lives and property.

Keywords: 

fire detection, disasters, deep neural networks, manmade, natural, indoor, outdoor

1. Introduction

Disaster management systems have recently become an active research area. Many types of disasters occur in the real world, such as earthquakes, storms, and fires; fire is the disaster this work focuses on. Fire disaster prediction through surveillance offers several benefits, including flexible installation, high accuracy, and the ability to detect fires over large spaces and in complex building structures [1]. Images are acquired from surveillance cameras and processed by fire detection algorithms to identify the occurrence of a fire or fire-risk scenes. The performance of fire detection is therefore determined by the core algorithm.

In a rapidly developing economy, building a fire disaster prediction and classification system can be highly complex and costly, and it introduces significant challenges. Accordingly, high responsiveness and accuracy are essential for fire prediction systems to reduce fire losses. Many traditional fire detection approaches are implemented with or without sensors; sensor-based systems work well only for small-scale coverage and are unsuitable for large areas. Surveillance-based systems overcome this coverage limitation, but false detection, delayed detection, false alarms, and other issues still occur, making reliable fire warnings difficult to achieve.

The fire disaster prediction process comprises three main stages: image preprocessing, feature extraction, and detection. Among these, feature extraction is the most complex and challenging task and forms the core of the algorithm. Traditional approaches rely on manual selection of fire features and machine learning classification. Manual feature selection is often inadequate because it requires expert knowledge of fire characteristics. Researchers have studied simple features such as flame shape, color, edges, and texture, but these features do not help detect fire against complex backgrounds. Because complex backgrounds contain many distracting objects in practical applications, low- and mid-level image features struggle to distinguish fire from fire-like objects in a scene, which reduces accuracy and weakens generalization.

Traditional methods involve many stages to predict a fire event and consume considerable time. This work therefore adopts convolutional neural networks (CNNs), which extract and learn image features effectively even in complex scenes. Such algorithms have attracted enormous interest and achieved strong performance across many vision tasks. Accordingly, researchers have introduced CNNs into fire detection systems, enabling self-learned extraction of fire image features [1-3].

Traditional methods suffer from numerous issues: color-based models misclassify fire-colored objects, background subtraction achieves lower accuracy in complex scenes, pixel-based detection is slow, and feature selection and extraction remain significant problems. Recently, CNNs have become popular for detecting and classifying objects in real-time scenes captured by surveillance systems. A CNN has multiple stages and provides automatic feature extraction, so this work adopts a CNN to detect and classify fire objects in video sequences.

This study proposes a convolutional neural network for fire disaster prediction that also classifies whether the fire is a man-made or natural disaster. The model is developed and trained on a public fire dataset and a custom fire dataset. Video sequences are acquired from surveillance cameras and converted into images; the number of images retrieved depends on the video quality, such as frames per second. The raw images differ in size, and since a CNN does not accept inputs of varying size, all images are resized to a common resolution. The resized images are then fed to the CNN model, which extracts features through successive convolutional operations. Finally, the performance of the algorithm is evaluated. The results provide helpful information for refining detection algorithms that classify fires and predict whether they are man-made or natural. A minimal frame-extraction sketch is shown below.
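The paper does not publish its preprocessing code, so the following is a minimal sketch, assuming OpenCV, of how surveillance videos could be converted into equally sized frames as described above; the file paths and target size are illustrative.

```python
import cv2
import os

def extract_frames(video_path, out_dir, size=(224, 224)):
    """Read a surveillance video and save resized frames (sketch)."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        frame = cv2.resize(frame, size)  # CNN needs a fixed input size (illustrative value)
        cv2.imwrite(os.path.join(out_dir, f"frame_{idx:05d}.jpg"), frame)
        idx += 1
    cap.release()
    return idx

# Hypothetical usage on one of the 31 dataset videos
# n = extract_frames("Fire1.avi", "frames/fire1")
```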

Although CNN-based fire prediction and classification algorithms improve accuracy in complex scenes compared with traditional approaches, a few issues remain. First, machine learning algorithms often treat fire detection purely as classification and ignore the region proposal stage; the proposal region must be determined before image classification for the algorithm to localize the fire. Second, some algorithms generate proposal regions from manually selected features and then classify those regions with CNNs.

In this paper, we discuss ways to address these obstacles and strengthen the generalization of fire prediction in real-world conditions, determining whether a fire was caused by humans or not. A deep learning-based model is offered to detect fire automatically from color image segmentation information recorded by the camera, and a classification model identifies the type of fire. The summary and principal contributions of the paper are as follows:

We represent fire-like objects in a way that is invariant to changes in identity, illumination, and appearance within a scene. The results show that feeding this representation into the CNN model allows it to learn high-level features thoroughly and generalize to effective fire detection in real-world environments. The proposed model is an ensemble of CNN components that learns fire and segmentation information based on fire and human presence. With multimodal input data, the model benefits from both modality-specific and complementary information between the modalities, which substantially improves the accuracy of detecting fire and human objects.

2. Related Work

In the last decade, machine learning and deep learning approaches have made broad headway on several computer vision problems, including object detection [4], object segmentation [5], and road monitoring [6]. Deep learning algorithms can classify every pixel in an image efficiently and characterize the presence of objects accurately during object segmentation, performing more reliably than conventional machine learning models.

Traditional approaches to fire detection rely on manually designed features such as color [7], edges [8], texture [9], and motion [10]. Some researchers combine multiple features to improve performance. Emmy Prema et al. [11] proposed an efficient flame detection method based on static and dynamic texture analysis for forest fire detection, combining texture-based features with machine learning classifiers. Habiboğlu et al. [12] proposed a covariance matrix-based fire and flame detection method for video, using a covariance descriptor to extract features from flame and non-flame regions. Torabian et al. [13] proposed a fire detection method based on fractal analysis and spatio-temporal features, using fractal dimension and motion information to detect fire and smoke in video frames: dynamic texture is first detected in each frame with fractal analysis and thresholding, and a color probability model then separates the motion region.

The advancement of CNNs brought outstanding improvements in the performance of many computer vision tasks, in particular recognition and classification. LeCun et al. [14] introduced the convolutional neural network (CNN) architecture for document recognition and demonstrated its effectiveness in recognizing handwritten digits, training the network with a gradient-based learning method. Krizhevsky et al. [15] later developed a deep CNN architecture called AlexNet for the ImageNet Large-Scale Visual Recognition Challenge, which involved classifying images into 1000 categories; AlexNet achieved a top-5 error rate of 15.3%, significantly better than the previous state of the art.

However, the color distribution of fire is intricate because fire has no well-formed shape, and fire is susceptible to external factors such as background light and wind. In this scenario, manual feature extraction is arduous and hurts accuracy. CNNs, by contrast, have powerful feature extraction capability, and in recent years CNN-based deep learning has attained tremendous results and developed rapidly in image classification and object detection.

Consequently, various deep learning methods have been proposed to detect fire in video sequences. Popular object detectors such as R-CNN, SSD, and YOLO were applied to real-time forest fire detection in the study [16], which compared two YOLO versions, YOLOv2 and YOLOv3; experiments showed that YOLOv3 achieved higher accuracy than YOLOv2 and other detectors such as Faster R-CNN and the Single Shot MultiBox Detector (SSD). Models based on a pre-trained VGG-16 and ResNet50 are used in the study [17], which recommends a fire hotspot detection system for CCTV videos using the YOLO method and the Tiny YOLO model, starting from a model pre-trained on ImageNet and fine-tuning it on a dataset of CCTV videos. The original YOLO algorithm [18] performs real-time object detection with a unified architecture that predicts bounding boxes and class probabilities directly from full images in one evaluation, and was shown to be both faster and more accurate than previous object detection methods.

A multi-scale object detection algorithm was proposed in the study [19]. Fire detection for smart city environments was developed in the study [20] using the YOLOv4 algorithm on a Banana Pi M3 board; tested with only three layers, it gives high accuracy. An anti-fire surveillance system for real-time fire and smoke detection [21] was implemented with YOLOv2. A color classifier was combined with a novel image classification model based on the CoAtNet-4 architecture to detect fire [22]. Mixed learning of YOLOv4 and LiDAR [23] was introduced for forest fire detection and transcends traditional models. A UAV-based forest fire detection system was developed [24] using VGG-19-based transfer learning to achieve high accuracy, and deep separable convolutional neural networks were used to detect fire and smoke-like objects [25].

VGG-16 and ResNet50 are pre-trained models that have already been trained on huge datasets. If these models were used for the proposed work, they would give only standard results: accuracy may drop or the model may not fit the new dataset, and they require fine-tuning to produce better accuracy, which limits novelty. The SSD model fails to detect moving regions in video sequences. R-CNN is a multi-stage model with independent components that cannot be trained end-to-end. The YOLO model produces lower recall and higher localization error than R-CNN.

From the preceding discussion, it may be concluded that some methods are too simple: their execution is fast, but they compromise on accuracy and produce many false alarms. Conversely, other algorithms obtain excellent accuracy, but their execution time is too high, so they cannot be deployed in real-world environments in critical regions where even a minor delay can cause a large catastrophe. Therefore, to achieve high accuracy and to predict fires along with the type of disaster, a robust mechanism is needed that can detect fire under varying conditions and immediately send the important key frames and alerts to disaster management systems.

3. Proposed Methodology

This section describes the proposed fire detection system and discusses its working principles in detail.

Smoke detection through video surveillance is achieved using a DCNN method [2], in which flames are detected to predict early fire in the video sequence. Flames identified in moving regions using convolutional models are proposed in studies [3, 4]. Object detection achieved with segmentation over deep learning models, as surveyed in the study [5], provides a key to recognizing human objects in the scene. Abdollahi et al. [6] discussed object extraction from video sequences using deep learning models and sensors to detect object movements.

Image classification is accomplished with a CNN trained over ImageNet [15] to classify fire or non-fire. Multi-stage fire detection with a convolutional model is proposed in the study [19], where fire objects are detected and a warning alert is issued. Fire detection in a smart city environment through deep neural networks is implemented in the study [20] with high accuracy. Real-time fire and smoke detection in video surveillance is proposed in the study [21], focusing on flames in the video to analyze early fire warnings.

3.1 Deep convolutional neural network (DCNN)

This work proposes a novel deep learning-based architecture that predicts and classifies fire in video sequences. The architecture uses high-resolution RGB images to detect fire in a scene. If a fire is found, the model classifies whether it occurred indoors or outdoors, and then predicts whether the fire was man-made or a natural disaster based on human presence in the video. Figure 1 shows the overall CNN architecture, which comprises stages such as convolution layers, sub-sampling layers, and fully connected layers. The outline of the proposed Fire Disaster System (FDS) model is shown in Figure 2. FDSnet is a CNN model that uses visual representations based on the RGB image and segmentation and learns high-level embedding features for fire and human object recognition.

The proposed DCNN takes images as input. These images initially differ in size and quality, and the model cannot train on such raw input, so the images are first resized to 222 × 222 and the model then iterates over the resized inputs during training. The data passes through successive convolutional layers followed by sub-sampling; each convolution reduces the feature map size, and each sub-sampling step reduces it further. Finally, the features are forwarded to the fully connected layers to produce the predicted class.

The model takes input from six classes: indoor and outdoor fire environments with or without human presence, smoke, and non-fire images. Trained on these classes, it predicts a man-made disaster if the image contains both fire and a human object; otherwise it predicts a natural disaster.

Figure 1. Overall architecture of CNN

Figure 2. Fire and human detection (a. Indoor fire with manmade disaster, b. Indoor fire with natural disaster and c. Outdoor fire with natural disaster)

3.2 Fire and human detection

Fire detection for disaster management systems through surveillance in real-world applications reduces social, ecological, and economic damage. Nevertheless, fire detection is a difficult problem because of frequent changes in lighting, abnormal shadows, and objects with fire-like colors. There is therefore a need for an approach that achieves high accuracy under these conditions while minimizing false alarms. A further idea proposed in this approach is to predict the fire and classify whether it is a man-made or natural disaster. To accomplish this, deep CNNs are proposed and an architecture is designed for fire detection through surveillance in disaster management systems. Once a fire is detected in a video, the model immediately sends an alert signal and message to people near the fire area. The model then classifies the fire area as indoor or outdoor and checks whether any human is present. For human detection, frames containing a human together with fire are provided as one of the training classes, so when the model is trained on fire-with-human examples it can predict that the fire was caused by a human. Finally, if a human is found in the fire area of the video, the model concludes that the fire was man-made; otherwise it is treated as a natural disaster. A sketch of this decision logic follows.
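The paper describes this decision rule only in prose, so the following is a minimal sketch of the post-classification logic under the six-class setup described above; the class names and the alert callback are hypothetical.

```python
# Hypothetical class labels matching the six classes used in this work
CLASSES = [
    "indoor_fire_with_human", "indoor_fire_without_human",
    "outdoor_fire_with_human", "outdoor_fire_without_human",
    "smoke", "no_fire",
]

def interpret(pred_idx):
    """Map a predicted class index to location and cause, per the rule in the text."""
    label = CLASSES[pred_idx]
    if label == "no_fire":
        return {"event": "none"}
    if label == "smoke":
        return {"event": "smoke"}  # early warning only, no cause assigned
    location = "indoor" if label.startswith("indoor") else "outdoor"
    cause = "natural" if "without_human" in label else "man-made"
    return {"event": "fire", "location": location, "cause": cause}

def on_prediction(pred_idx, send_alert):
    """Trigger an alert for fire events; `send_alert` is any notification callback."""
    result = interpret(pred_idx)
    if result["event"] == "fire":
        send_alert(result)  # e.g., notify people near the fire area
    return result

# Hypothetical usage
# on_prediction(0, send_alert=print)  # -> fire, indoor, man-made
```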

Next, the proposed method integrates human and fire detection using Haar cascade classifiers; a human-detection sketch follows. This improves prediction of the correct fire object even when fire-like objects are present in a scene or segmentation errors occur, and produces higher accuracy than the segmentation of Minaee et al. [5], the dynamic texture approach of Emmy Prema et al. [11], and the covariance-matrix-based color model of Habiboğlu et al. [12] for identifying fire-like regions in a scene. The model also uses visual representations of fire objects and humans together with segmentation for feature learning. This framework generalizes successfully to unseen real-world environments, compared with Muhammad et al. [1] and Minaee et al. [5], in recognizing and detecting fire and human presence under significant scene changes.
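Since the human-detection step is described only at a high level, the following is a minimal sketch using OpenCV's bundled full-body Haar cascade; the detection thresholds and frame source are illustrative, not the authors' settings.

```python
import cv2

# OpenCV ships several pre-trained Haar cascades; the full-body cascade is one option
body_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_fullbody.xml")

def human_present(frame):
    """Return True if the Haar cascade finds at least one person in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    bodies = body_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    return len(bodies) > 0

# Hypothetical usage on a single extracted frame
# frame = cv2.imread("frames/fire1/frame_00000.jpg")
# print("man-made candidate" if human_present(frame) else "natural candidate")
```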

3.3 Proposed architecture

Although the literature proposes many sensor-based, non-sensor-based, and traditional machine learning algorithms, gaps remain. The proposed model is therefore designed to resolve the issues identified in the literature review, and it uses a DCNN to obtain better accuracy than traditional methods.

Figure 3 depicts the overall proposed architecture, which comprises three main components: convolution layers, sub-sampling layers, and fully connected layers. FDSnet (Fire Disaster System net) is a CNN-based model that uses visual representations derived from the RGB image and segmentation and learns high-level embedding features for fire and human recognition. A detailed description of every component of this structure is given below.

Figure 3. Architecture of FDSnet model

The proposed model can only predict whether the fire was caused by a human or occurred naturally; that is, it concludes that a fire was man-made based solely on human detection. In future work, human action recognition will be incorporated to better determine whether a disaster is human-made. The current model also does not combine multiple algorithms together.

4. Experimental Results and Analysis

The objective of this work is to predict fire, classify it as indoor or outdoor, and detect whether it is a man-made or natural fire disaster. The proposed model is designed to improve performance in predicting fire together with human presence. It also works well with complex background scenes, meaning scenes that contain fire-like objects, many other objects, or fire-colored objects; even in such backgrounds the model achieves high accuracy.

4.1 Dataset

Object detection, recognition, and classification with deep learning models require large image datasets to learn their many parameters. A suitable dataset was therefore identified from the literature survey. It contains 31 videos, each differing in duration, frames per second, and resolution, and consists of fire and smoke videos downloaded from https://mivia.unisa.it/datasets/video-analysis-datasets/fire-detection-dataset/.

The videos were then converted into frames, each of a different size; in total the dataset contains 62,690 frames across 6 classes. It includes fire frames of man-made and natural disasters in indoor and outdoor environments, smoke frames, and non-fire/non-smoke frames; sample frames are shown in Figure 4. The original dataset is summarized in Table 1:

Figure 4. Sample fire datasets

Table 1. Summary of dataset [26]

Video Name | Resolution | No. of Frames / FPS | Description
Fire1 | 320×240 | 705/15 | Outdoor fire with human walking around a bucket.
Fire2 | 320×240 | 116/29 | Outdoor fire without human presence; fire object located at a long distance from the surveillance location.
Fire3 | 400×256 | 255/15 | Outdoor fire, forest location, with human.
Fire4 | 400×256 | 240/15 | Outdoor fire, forest location, without human.
Fire5 | 400×256 | 195/15 | Outdoor fire, forest location, without human.
Fire6 | 320×240 | 1200/10 | Outdoor fire with human.
Fire7 | 400×256 | 195/15 | Outdoor fire without human.
Fire8 | 400×256 | 240/15 | Outdoor fire without human.
Fire9 | 400×256 | 240/15 | Outdoor fire without human.
Fire10 | 400×256 | 210/15 | Outdoor fire without human.
Fire11 | 400×256 | 210/15 | Outdoor fire without human.
Fire12 | 400×256 | 210/15 | Outdoor fire without human.
Fire13 | 320×240 | 1650/25 | Indoor fire with and without human presence.
Fire14 | 320×240 | 5535/15 | Outdoor fire with and without human presence.
Fire15 | 320×240 | 240/15 | Smoke
Fire16 | 320×240 | 900/10 | Smoke
Fire17 | 320×240 | 1725/25 | Smoke
Fire18 | 352×288 | 600/10 | Smoke
Fire19 | 320×240 | 630/10 | Smoke
Fire20 | 320×240 | 5958/9 | Smoke
Fire21 | 720×480 | 80/10 | Smoke
Fire22 | 480×272 | 22500/25 | Smoke
Fire23 | 720×576 | 6097/7 | Smoke
Fire24 | 320×240 | 342/10 | Smoke
Fire25 | 352×288 | 140/10 | Smoke
Fire26 | 720×576 | 847/7 | Smoke
Fire27 | 320×240 | 1400/10 | Smoke
Fire28 | 352×288 | 6025/25 | Smoke
Fire29 | 720×576 | 600/10 | Smoke
Fire30 | 800×600 | 1920/15 | No fire and smoke
Fire31 | 800×600 | 1485/15 | No fire and smoke

4.2 Data augmentation

Initially, the dataset contains 31 videos with fire and smoke. The proposed problem, however, is to detect fire and classify whether it occurred naturally or was man-made, in indoor or outdoor space. For this purpose the dataset was converted from videos to images and divided into six classes. Before augmentation the dataset has 62,690 frames, and with this imbalanced distribution the accuracy of the model is very poor. To balance the dataset, augmentation is performed by varying the following parameters: the image rotation range is set to 30, 45, 50, and 60 degrees; the height and width shift ranges to 0.2, 0.25, and 0.3; and the shear and zoom ranges to 0.2, 0.25, and 0.3. The dataset before and after augmentation is summarized per class in Table 2, augmented frames are shown in Figure 5, and a sketch of one augmentation configuration follows the table:

Table 2. Summary of dataset for before and after augmentation

Classes | Before Augmentation | After Augmentation
Indoor Fire with Human | 4,325 | 45,444
Indoor Fire without Human | 4,325 | 47,762
Outdoor Fire with Human | 7,695 | 47,971
Outdoor Fire without Human | 3,856 | 47,707
No Fire | 4,405 | 45,731
Smoke | 38,084 | 48,084
Total Frames | 62,690 | 282,699
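The augmentation parameters above map naturally onto Keras's ImageDataGenerator; the following is a minimal sketch assuming TensorFlow/Keras and an on-disk directory of class folders. The directory name, target size, and the particular parameter setting shown are illustrative, not the authors' exact configuration.

```python
import tensorflow as tf

# One of the parameter settings reported above (rotation 30, shifts 0.2, shear/zoom 0.2)
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=30,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    rescale=1.0 / 255,  # scale pixel values to [0, 1]
)

# Hypothetical directory layout: frames/<class_name>/*.jpg for the six classes
train_gen = augmenter.flow_from_directory(
    "frames", target_size=(224, 224), batch_size=32, class_mode="categorical")
```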

4.3 Training and implementation

The proposed model (outlined in Table 3) has five convolutional layers. The model weights are initialized layer by layer from a zero-mean Gaussian distribution (SD=0.1, bias=0), and the convolutional and embedding layers are trained end to end for 20 epochs. The model uses the Adam optimizer with a learning rate of 0.001 and a softmax activation on the final dense layer of 6 units; a Keras sketch of this configuration follows Table 3.

Figure 6. Sample feature map for the input image

Table 3. Outline of the FDSnet model

Layer Type | Filter Size & Stride | Details | Output Shape
Conv1 | 3×3, stride=1 | Conv1(16) | 222, 222, 16
Activation | | ReLU |
MaxPooling | | Pooling size (2, 2) | 111, 111, 16
Conv2 | 3×3, stride=1 | Conv2(32) | 109, 109, 32
Activation | | ReLU |
MaxPooling | | Pooling size (2, 2) | 54, 54, 32
Conv3 | 3×3, stride=1 | Conv3(32) | 52, 52, 32
Activation | | ReLU |
MaxPooling | | Pooling size (2, 2) | 26, 26, 32
Conv4 | 3×3, stride=1 | Conv4(64) | 24, 24, 64
Activation | | ReLU |
MaxPooling | | Pooling size (2, 2) | 12, 12, 64
Conv5 | 3×3, stride=1 | Conv5(64) | 10, 10, 64
Activation | | ReLU |
MaxPooling | | Pooling size (2, 2) | 5, 5, 64
Flatten | | | 1600
Dense1 | Dense input layer = 128 | | 128
Dense2 | Dense class layer = 6 | | 6
Activation Function | Softmax | | 6
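The layer listing in Table 3 can be expressed directly in Keras. The following is a minimal sketch, not the authors' code: it assumes a 224×224×3 input, which reproduces the output shapes listed above (the text reports resizing to 222×222), assumes ReLU on the first dense layer, and assumes categorical cross-entropy as the loss, which the paper does not state.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, initializers

init = initializers.RandomNormal(mean=0.0, stddev=0.1)  # zero-mean Gaussian, SD=0.1

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(16, 3, activation="relu", kernel_initializer=init),  # -> 222x222x16
    layers.MaxPooling2D(2),                                            # -> 111x111x16
    layers.Conv2D(32, 3, activation="relu", kernel_initializer=init),  # -> 109x109x32
    layers.MaxPooling2D(2),                                            # -> 54x54x32
    layers.Conv2D(32, 3, activation="relu", kernel_initializer=init),  # -> 52x52x32
    layers.MaxPooling2D(2),                                            # -> 26x26x32
    layers.Conv2D(64, 3, activation="relu", kernel_initializer=init),  # -> 24x24x64
    layers.MaxPooling2D(2),                                            # -> 12x12x64
    layers.Conv2D(64, 3, activation="relu", kernel_initializer=init),  # -> 10x10x64
    layers.MaxPooling2D(2),                                            # -> 5x5x64
    layers.Flatten(),                                                  # -> 1600
    layers.Dense(128, activation="relu", kernel_initializer=init),
    layers.Dense(6, activation="softmax"),                             # six classes
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_gen, epochs=20)  # 20 epochs, as reported in Section 4.3
```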

4.4 Training and testing

The dataset consists of 282,699 frames across all classes; the proposed model is trained on 70% of the frames (197,885 frames) and tested on the remaining 30% (84,814 frames). The details of the frames used to train and test the model are given in Table 4, feature map details are shown in Figure 6, and a sketch of the split follows the table:

Table 4. Training and testing datasets

Class Name | Total No. of Frames | No. of Frames Used for Training | No. of Frames Used for Testing
Indoor Fire with Human | 45,444 | 31,810 | 13,634
Indoor Fire without Human | 47,762 | 33,433 | 14,329
Outdoor Fire with Human | 47,971 | 33,579 | 14,392
Outdoor Fire without Human | 47,707 | 33,394 | 14,313
Smoke | 45,731 | 32,011 | 13,720
No Fire | 48,084 | 33,658 | 14,426
Total | 282,699 | 197,885 | 84,814
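The paper does not describe how the 70/30 split was carried out, so the following is a minimal sketch of a stratified split, assuming scikit-learn; the frame paths and labels below are toy stand-ins, not the actual dataset.

```python
from sklearn.model_selection import train_test_split

# Toy stand-ins: in practice these would be the 282,699 frame paths and class indices
frame_paths = [f"frames/class{c}/frame_{i}.jpg" for c in range(6) for i in range(100)]
labels = [c for c in range(6) for _ in range(100)]

# Stratifying keeps the 70/30 ratio within every class, as in Table 4
train_paths, test_paths, y_train, y_test = train_test_split(
    frame_paths, labels, test_size=0.30, stratify=labels, random_state=42)

print(len(train_paths), "training frames;", len(test_paths), "testing frames")
```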

4.5 Evaluation metrics

The evaluation metrics are derived from the confusion matrix, which comprises the true positive, true negative, false positive, and false negative counts used to measure fire prediction and the classification of man-made versus natural disasters. When the classifier identifies an instance correctly, this counts toward its success rate; otherwise it counts as a failure. The performance of the classifier is also reflected in the error rate, the proportion of errors over the set of instances. The confusion matrix layout for the problem is given in Table 5:

Table 5. Confusion matrix for fire prediction

Predicted \ Actual | Positive | Negative
Positive | TP | FP
Negative | FN | TN

Precision (P), or detection rate, is the ratio of correctly labelled positive instances to all instances labelled positive. Precision thus measures how reliable the model's positive (fire) predictions are, i.e., the share of true positives (TP) among predicted positives, as given below:

Precision $P=\frac{T P}{T P+F P}$

Recall (R), or sensitivity, is the ratio of correctly labelled positive instances to all actual positive instances. R is given below; Figure 7 shows sample predictions.

Recall $R=\frac{T P}{T P+F N}$

Figure 7. Sample frames for fire prediction with manmade or natural disaster

The F-score is the harmonic mean of precision and recall, and is expressed as:

$F_\beta=\frac{(1+\beta^2) \cdot TP}{(1+\beta^2) \cdot TP+\beta^2 \cdot FN+FP}$ or $F_1=2 \cdot \frac{P \cdot R}{P+R}$
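As a concrete illustration of these metrics, the following is a minimal sketch using scikit-learn on a toy set of predictions; the label vectors are invented for illustration and are not the paper's results.

```python
from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, f1_score)

# Toy ground truth and predictions for a binary fire / no-fire view of the task
y_true = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]   # 1 = fire, 0 = no fire
y_pred = [1, 1, 0, 0, 0, 1, 1, 1, 0, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP FP FN TN:", tp, fp, fn, tn)
print("Precision:", precision_score(y_true, y_pred))   # TP / (TP + FP)
print("Recall:   ", recall_score(y_true, y_pred))      # TP / (TP + FN)
print("F1:       ", f1_score(y_true, y_pred))          # 2PR / (P + R)
```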

Figure 8. Accuracy of the proposed model

Figure 9. Loss of proposed model

Hence, 70% of the dataset is used to train the model and the remaining 30% to test it. The efficiency of the model is evaluated with precision, recall, and F-score; the F-score is used to assess fire prediction and accuracy to assess fire classification [26]. Figures 8 and 9 show the accuracy and loss of the proposed model.

Figure 10. Confusion matrix for fire detection

Figure 11. Performance metrics of fire detection

Figure 10 gives the confusion matrix of the proposed model, which classifies all classes to predict man-made or natural disasters. Figure 11 shows the classification report for the performance metrics.

5. Conclusions

Fire detection and classification, indoor or outdoor and man-made or natural, plays a vital role in disaster management systems. Traditional algorithms have limitations in fire detection, even when used with sensor-based systems. Recently, deep learning models have been applied widely in computer vision, and DCNN models yield more significant results in fire detection than traditional methods. This work predicts fire disasters and classifies whether a fire is man-made or natural. A deep convolutional neural network is used to predict fire, and the backbone network accumulates convolution paths to improve feature extraction capability and performance. The proposed model classifies the fire source as a man-made or natural disaster in indoor and outdoor areas. Experiments on frames with complex backgrounds and fire-like objects, representing unseen fire environments, show that the proposed model achieves high accuracy for fire detection. The dataset contains variations in size, quality, resolution, and camera viewpoint. Future work will extend this strategy to other disaster events to improve its scale for fire prediction.

Traditional algorithms focus on detecting fire or issuing early warnings to people near the fire area, and some later work classified whether the fire occurred in an indoor or outdoor environment. The proposed model brings these together while also predicting whether the disaster is man-made or natural.

  References

[1] Muhammad, K., Ahmad, J., Mehmood, I., Rho, S., Baik, S.W. (2018). Convolutional neural networks based fire detection in surveillance videos. IEEE Access, 6: 18174-18183. https://doi.org/10.1109/ACCESS.2018.2812835

[2] Tao, C., Zhang, J., Wang, P. (2016). Smoke detection based on deep convolutional neural networks. In 2016 International Conference on Industrial Informatics-Computing Technology, Intelligent Technology, Industrial Information Integration (ICIICII), Wuhan, China, pp. 150-153. https://doi.org/10.1109/ICIICII.2016.0045

[3] Filonenko, A., Kurnianggoro, L., Jo, K.H. (2017). Comparative study of modern convolutional neural networks for smoke detection on image data. In 2017 10th international conference on human system interactions (HSI), Ulsan, Korea (South), pp. 64-68. https://doi.org/10.1109/HSI.2017.8004998

[4] Zhao, Z.Q., Zheng, P., Xu, S.T., Wu, X. (2019). Object detection with deep learning: A review. IEEE Transactions on Neural Networks and Learning Systems, 30(11): 3212-3232. https://doi.org/10.1109/TNNLS.2018.2876865

[5] Minaee, S., Boykov, Y., Porikli, F., Plaza, A., Kehtarnavaz, N., Terzopoulos, D. (2021). Image segmentation using deep learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(7): 3523-3542. https://doi.org/10.1109/TPAMI.2021.3059968

[6] Abdollahi, A., Pradhan, B., Shukla, N., Chakraborty, S., Alamri, A. (2020). Deep learning approaches applied to remote sensing datasets for road extraction: A state-of-the-art review. Remote Sensing, 12(9): 1444. https://doi.org/10.3390/rs12091444

[7] Ham, S., Ko, B.C., Nam, J.Y. (2011). Vision based forest smoke detection using analyzing of temporal patterns of smoke and their probability models. In Image Processing: Machine Vision Applications IV, 7877: 92-98. https://doi.org/10.1117/12.871995

[8] Töreyin, B.U., Dedeoğlu, Y., Cetin, A.E. (2005). Wavelet based real-time smoke detection in video. In 2005 13th European Signal Processing Conference, Antalya, Turkey, pp. 1-4.

[9] Yu, C.Y., Zhang, C.Y., Fang, J., Wang, J.J. (2009). Texture analysis of smoke for realtime fire detection. In: Second International Workshop on Computer Science and Engineering, Qingdao, China, pp. 511-515. https://doi.org/10.1109/WCSE.2009.864

[10] Han, D., Lee, B. (2009). Flame and smoke detection method for early real-time detection of a tunnel fire. Fire Safety Journal, 44(7): 951-961. https://doi.org/10.1016/j.firesaf.2009.05.007

[11] Emmy Prema, C., Vinsley, S.S., Suresh, S. (2018). Efficient flame detection based on static and dynamic texture analysis in forest fire detection. Fire Technology, 54: 255-288. https://doi.org/10.1007/s10694-017-0683-x

[12] Habiboğlu, Y.H., Günay, O., Çetin, A.E. (2012). Covariance matrix-based fire and flame detection method in video. Machine Vision and Applications, 23: 1103-1113. https://doi.org/10.1007/s00138-011-0369-1

[13] Torabian, M., Pourghassem, H., Mahdavi-Nasab, H. (2021). Fire detection based on fractal analysis and spatio-temporal features. Fire Technology, 57(5): 2583-2614. https://doi.org/10.1007/s10694-021-01129-7

[14] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11): 2278-2324. https://doi.org/10.1109/5.726791

[15] Krizhevsky, A., Sutskever, I., Hinton, G.E. (2012). Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25: 1097-1105. https://doi.org/10.1145/3065386

[16] Wu, S., Zhang, L. (2018). Using popular object detection methods for real time forest fire detection. In 2018 11th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, pp. 280-284. https://doi.org/10.1109/ISCID.2018.00070

[17] Lestari, D.P., Kosasih, R., Handhika, T., Sari, I., Fahrurozi, A. (2019). Fire hotspots detection system on CCTV videos using you only look once (YOLO) method and tiny YOLO model for high buildings evacuation. In 2019 2nd International Conference of Computer and Informatics Engineering (IC2IE), pp. 87-92. https://doi.org/10.1109/IC2IE47452.2019.8940842

[18] Redmon, J., Divvala, S., Girshick, R., Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779-788.

[19] Huo, Y., Zhang, Q., Jia, Y., Liu, D., Guan, J., Lin, G., Zhang, Y. (2022). A deep separable convolutional neural network for multiscale image-based smoke detection. Fire Technology, 1-24. https://doi.org/10.1007/s10694-021-01199-7

[20] Avazov, K., Mukhiddinov, M., Makhmudov, F., Cho, Y.I. (2021). Fire detection method in smart city environments using a deep-learning-based approach. Electronics, 11(1): 73. https://doi.org/10.3390/electronics11010073

[21] Saponara, S., Elhanashi, A., Gagliardi, A. (2021). Real-time video fire/smoke detection based on CNN in antifire surveillance systems. Journal of Real-Time Image Processing, 18: 889-900. https://doi.org/10.1007/s11554-020-01044-0

[22] Pereira, D.M., Vieira, M.B., Villela, S.M. (2022). Combining neural networks and a color classifier for fire detection. In Brazilian Conference on Intelligent Systems, pp. 139-153. https://doi.org/10.1007/978-3-031-21689-3_11

[23] Kasyap, V.L., Sumathi, D., Alluri, K., Reddy Ch, P., Thilakarathne, N., Shafi, R.M. (2022). Early detection of forest fire using mixed learning techniques and UAV. Computational Intelligence and Neuroscience, 2022: 3170244. https://doi.org/10.1155/2022/3170244

[24] Khan, A., Hassan, B., Khan, S., Ahmed, R., Abuassba, A. (2022). DeepFire: A novel dataset and deep transfer learning benchmark for forest fire detection. Mobile Information Systems, 2022: 5358359. https://doi.org/10.1155/2022/5358359

[25] Dai, P., Zhang, Q., Lin, G., Shafique, M. M., Huo, Y., Tu, R., Zhang, Y. (2022). Multi-scale video flame detection for early fire warning based on deep learning. Frontiers in Energy Research, 10: 848754. https://doi.org/10.3389/fenrg.2022.848754

[26] Muhammad, K., Ahmad, J., Baik, S.W. (2018). Early fire detection using convolutional neural networks during surveillance for effective disaster management. Neurocomputing, 288: 30-42. https://doi.org/10.1016/j.neucom.2017.04.083