Research of Fire Alarm System Based on Extension Neural Network

Tichun Wang, Hao Yan, Shisheng Zhong, Yongjian Zhang

College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China

School of Mechatronics Engineering, Harbin Institute of Technology, Harbin 150001, China

Corresponding Author Email: 
wangtichun2010@nuaa.edu.cn
Pages: 9-16 | DOI: http://dx.doi.org/10.18280/rces.020102
OPEN ACCESS

Abstract: 

Given the current state of fire detection and the demand for early fire detection, a fire alarm system is designed based on multi-sensor information fusion (MSIF). The system uses an extension neural network (ENN) as the fusion algorithm. The input parameters are temperature, smoke density and CO density, and the output parameters are the probabilities of open fire, smoldering fire and no fire. The design enhances the sensitivity and reliability of the alarm output and achieves the purpose of early warning.

Keywords: 

fire alarm, multi-sensor information fusion, extension neural network

1. Introduction

Detecting fires and raising the alarm automatically, as early, accurately and reliably as possible, is a goal that all countries in the world are pursuing. A fire alarm system is mainly composed of sensors and a data acquisition and processing system. Traditional fire alarm systems usually use three kinds of sensors to improve reliability and to reduce omissions and false positives [1-3]. Such a system alarms as soon as the monitored quantity of a single sensor, or a single kind of sensor, exceeds its threshold, which reduces the false positive rate and greatly improves reliability. Multi-sensor information fusion has become a research hotspot [4-6]. It has been widely studied and applied in artificial intelligence, target recognition, medical diagnosis, aerospace and military fields. A fusion-based system can alarm accurately and quickly by exploiting the differences and complementarity of multiple sensors. It can integrate information from various sensors, analyze the data with a fusion method, and filter jamming signals effectively. All of this makes the system more reliable [7].

This paper adopts MSIF and ENN to predict fire information in the early and smoldering stages of a fire, gaining time for extinguishing. The design uses the ENN algorithm for network training and establishes a fire alarm model. The network is trained under open fire, smoldering fire and no fire conditions. The system gives an early warning for slowly developing fires and a timely alarm for rapidly developing fires.

2. The Algorithm of Extension Neural Network (ENN)

2.1 Basic concept of Extenics

Extenics is an original transversal discipline put forward by Chinese scholars in 1983 [8]. It studies the possibility of extending matters, and the rules and methods of innovation, with formalized models that are used to solve contradictory problems. It has been preliminarily established that the core of extension theory consists of basic-element theory, extension set theory and extension logic. The logical cells of Extenics are the matter element, affair element and relation element. Some basic concepts of extension theory are given below [9-10].

Definition 1: An ordered triple composed of the matter $N$ as object, the characteristic $c$, and the measure $w$ of $N$ about $c$ is called a matter element:

$R=(N, c, w)$

If the value of the measure about $c$ is an interval, then the matter element is $R=(N, c, V)=\left[N, c,\left\langle w^{L}, w^{U}\right\rangle\right]$, wherein $w^{L}$ and $w^{U}$ are the lower and upper bounds of the measure about $c$.

Definition 2: The array composed of the matter $N$, the names of its characteristics $C=\left(c_{1}, c_{2}, \ldots, c_{n}\right)$ and the corresponding measures $w_{i}, i=1,2, \ldots, n$ of $N$ about $c_{i}$,

$R=(N, C, W)=\left[\begin{array}{ccc}N & c_{1} & w_{1} \\ & c_{2} & w_{2} \\ & \ldots & \ldots \\ & c_{n} & w_{n}\end{array}\right]$

is called an n-dimensional matter element. Wherein $R_{i}=\left(N, c_{i}, w_{i}\right), i=1,2, \ldots, n$ are the sub matter elements of $R$.
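As an illustration, an n-dimensional matter element with interval-valued measures can be represented by a small data structure. This is a sketch; the class and field names are ours, and the example intervals are the per-column ranges of the open fire rows of Table 1:

```python
from dataclasses import dataclass

@dataclass
class MatterElement:
    """n-dimensional matter element R = (N, C, W): an object N with
    characteristic names c_1..c_n and measures w_1..w_n (here intervals)."""
    name: str
    characteristics: list  # c_1..c_n
    measures: list         # w_1..w_n, each a (lower, upper) interval

# Example: an "open fire" matter element over the three sensed quantities
R = MatterElement(
    name="open fire",
    characteristics=["temperature", "smoke", "CO"],
    measures=[(0.63, 0.95), (0.15, 0.21), (0.01, 0.75)],
)
print(len(R.characteristics))  # 3
```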

Definition 3: Let $U$ be the universe of discourse and $k$ a map from $U$ to the real number field $(-\infty,+\infty)$. Then the extension set in the universe of discourse $U$ is $A=\{(u, y) \mid u \in U, y=k(u) \in(-\infty,+\infty)\}$, where $y=k(u)$ is the dependent function of $A$.

Definition 4: Let $X_{0}=\langle a, b\rangle$, $X=\langle c, d\rangle$, $X_{0} \subseteq X$, where $X_{0}$ is the classical domain and $X$ is the joint domain. Then the dependent function is

$K(x)=\frac{\rho\left(x, X_{0}\right)}{D\left(x, X_{0}, X\right)}$

Wherein

$\rho\left(x, X_{0}\right)=\left|x-\frac{(a+b)}{2}\right|-\frac{(b-a)}{2}$

$D\left(x, X_{0}, X\right)=\left\{\begin{array}{cc}\rho(x, X)-\rho\left(x, X_{0}\right) & x \notin X_{0} \\ -1 & x \in X_{0}\end{array}\right.$

When $K(x) \geq 0$, $x$ belongs to $X_{0}$; when $K(x)<0$, $x$ does not belong to $X_{0}$.
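The dependent function of Definition 4 can be sketched in a few lines of Python (a minimal illustration; the function names are ours):

```python
def rho(x, interval):
    """Extension distance rho(x, <a, b>) = |x - (a+b)/2| - (b-a)/2."""
    a, b = interval
    return abs(x - (a + b) / 2) - (b - a) / 2

def K(x, X0, X):
    """Dependent function K(x) = rho(x, X0) / D(x, X0, X),
    with D = -1 inside X0 and rho(x, X) - rho(x, X0) outside."""
    a, b = X0
    D = -1 if a <= x <= b else rho(x, X) - rho(x, X0)
    return rho(x, X0) / D

# Inside the classical domain K(x) >= 0; outside K(x) < 0
print(K(3, (2, 4), (0, 10)))  # 1.0
print(K(5, (2, 4), (0, 10)))  # negative
```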

2.2 The structure of ENN

ENN is constructed from the idea of comprehensive evaluation combined with the structure of an artificial neural network. It differs from a general neural network in that it only has an input layer and an output layer, and each input of every neuron carries two weights: an upper bound and a lower bound [11].

2.2.1 The extension neuron

Figure 1. Extension neuron model

The basic processing unit of an artificial neural network is called a neuron, and the neuron model of ENN is called an extension neuron. It is a simulation of a biological neuron, with a structure different from other neurons because it is built on the correlation function of Extenics as its quantitative analysis tool, as shown in Figure 1. The neuron unit is composed of multiple inputs $x_{i}, i=1,2, \ldots, n$ and one output $y$. For each input signal a triple is established according to matter-element theory, i.e. the input-signal matter element, as shown in Eq. (1):

$N_{i}=\left[s, x_{i}, v_{i}\right], i=1,2, \ldots, n$             (1)

Wherein $N_{i}$ represents the matter element of the ith input signal, $s$ represents the input signal, and $v_{i}$ represents the value of the ith input signal $x_{i}$. The intermediate state is represented by the classical domain $W_{i}=\left[w_{i}^{l}, w_{i}^{u}\right], i=1,2, \ldots, n$ of the input-signal matter element and a modification value. The output formula is:

$y=\sum_{j=1}^{n} \lambda_{j} k\left(x_{j}\right)$       (2)

Wherein:                   

$k\left(x_{i}\right)=E D\left(w_{i}^{l}, w_{i}^{u}, x_{i}\right)$

$k\left(x_{i}\right)=\left\{\begin{array}{ll}\dfrac{-\rho\left(x_{i}, W_{i}\right)}{\left|\left(w_{i}^{u}-w_{i}^{l}\right) / 2\right|} & x_{i} \in W_{i} \\ \dfrac{\rho\left(x_{i}, W_{i}\right)}{\rho\left(x_{i},\left\langle v_{i}^{L}-\eta v_{i}^{L}, v_{i}^{U}+\eta v_{i}^{U}\right\rangle\right)-\rho\left(x_{i}, W_{i}\right)} & x_{i} \notin W_{i}\end{array}\right.$

$\rho\left(x_{i}, W_{i}\right)=\rho\left(x_{i},\left\langle w_{i}^{l}, w_{i}^{u}\right\rangle\right)=\left|x_{i}-\left(w_{i}^{l}+w_{i}^{u}\right) / 2\right|-\left(w_{i}^{u}-w_{i}^{l}\right) / 2$

$\rho\left(x_{i},\left(v_{i}^{L}-\eta v_{i}^{L}, v_{i}^{U}+\eta v_{i}^{U}\right)\right)$$=\left|x_{i}-\frac{v_{i}^{L}-\eta v_{i}^{L}+v_{i}^{U}+\eta v_{i}^{U}}{2}\right|-\frac{v_{i}^{U}+\eta v_{i}^{U}-v_{i}^{L}+\eta v_{i}^{L}}{2}$

And $\lambda_{j}, j=1,2, \ldots, n$ are the weight coefficients of the corresponding characteristics.
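The correlation function and the neuron output of Eq. (2) can be sketched as follows. This is a hedged illustration: the function names are ours, and the joint domain is taken as $\langle v^{L}-\eta v^{L},\, v^{U}+\eta v^{U}\rangle$ as in the formulas above:

```python
def rho(x, lo, hi):
    """rho(x, <lo, hi>) = |x - (lo+hi)/2| - (hi-lo)/2."""
    return abs(x - (lo + hi) / 2) - (hi - lo) / 2

def k_corr(x, wl, wu, vL, vU, eta=0.1):
    """Correlation function of one extension-neuron input:
    classical domain <wl, wu>, joint domain widened from <vL, vU> by eta."""
    r_w = rho(x, wl, wu)
    if wl <= x <= wu:
        return -r_w / abs((wu - wl) / 2)
    r_v = rho(x, vL - eta * vL, vU + eta * vU)
    return r_w / (r_v - r_w)

def neuron_output(xs, domains, lams, eta=0.1):
    """Eq. (2): y = sum_j lambda_j * k(x_j).
    domains is a list of (wl, wu, vL, vU) tuples, one per input."""
    return sum(lam * k_corr(x, *dom, eta=eta)
               for x, dom, lam in zip(xs, domains, lams))

# At the centre of the classical domain the correlation reaches 1
print(k_corr(0.5, 0.4, 0.6, 0.0, 1.0))  # 1.0
```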

2.2.2 The network structure of extension neuron

Extenics can analyze things both qualitatively and quantitatively, but has no ability for parallel computing or learning, while a neural network can learn from samples. The extension neural network therefore integrates the two systems into a complementary structure that develops the advantages of both. On one hand, the network can be constructed with a formalized language, so that the weights of the network have an obvious meaning. On the other hand, a learning mechanism is introduced to improve the accuracy and practicality of the knowledge expression. The principle of ENN is shown in Figure 2.

Figure 2. The principle of ENN

The structure of ENN is shown in Figure 3 [12]. It is a two-layer neural network composed of an input layer, an output layer and the link weights connecting the input neurons to the output neurons. Each input-layer neuron corresponds to a different characteristic of the multidimensional matter element, and each output-layer neuron refers to the probability of fire at a different stage. There are two link weights between each input-layer neuron and each output-layer neuron: one is the upper bound of the value of the characteristic for the corresponding fire stage, and the other is its lower bound. The upper and lower bounds between the jth input neuron and the kth output neuron are denoted $W_{k j}^{U}$ and $W_{k j}^{L}$. The function of the output-layer neurons is the dependent function.

Figure 3. The structure diagram of ENN

2.3 The supervised learning algorithm of ENN

The training pattern set is $X=\left\{X_{1}, X_{2}, \ldots, X_{N_{p}}\right\}$, where $N_{p}$ is the number of training patterns. The ith pattern is denoted by $X_{i}^{p}=\left\{x_{i 1}^{p}, x_{i 2}^{p}, \ldots, x_{i n}^{p}\right\}$, where $n$ is the number of characteristics. The learning error rate is $E_{T}=\frac{N_{m}}{N_{p}}$, where $N_{m}$ is the total number of training errors. The learning algorithm proceeds as follows:

Step 1: Establish the matter-element model of the weights between the ENN input nodes and output nodes based on matter-element theory, as shown in Eq. (3).

$R_{k}=\left[\begin{array}{ccc}N_{k} & c_{1} & \left(w_{k 1}^{L}, w_{k 1}^{U}\right) \\ & c_{2} & \left(w_{k 2}^{L}, w_{k 2}^{U}\right) \\ & \cdots & \cdots \\ & c_{n} & \left(w_{k n}^{L}, w_{k n}^{U}\right)\end{array}\right], k=1,2, \ldots, n_{c}$     (3)

Wherein $w_{k i}^{L}$ and $w_{k i}^{U}$ represent the lower and upper bounds of the value of the ith characteristic $c_{i}$ for the kth cluster, $i=1,2, \ldots, n$. The classical domains can be obtained from the given training data set:

$w_{k j}^{L}=\min _{i \in N_{p}}\left\{x_{i j}^{k}\right\}$

$w_{k j}^{U}=\max _{i \in N_{p}}\left\{x_{i j}^{k}\right\}$

Step 2: Calculate the initial center of each cluster:

$Z_{k}=\left\{z_{k 1}, z_{k 2}, \ldots, z_{k n}\right\}$

$z_{k j}=\frac{w_{k j}^{L}+w_{k j}^{U}}{2}$      (4)

Wherein, $k=1,2, \ldots, n_{c}, j=1,2, \ldots, n$.

Step 3: Read in the ith training pattern, which belongs to class $p$:

$X_{i}^{p}=\left\{x_{i 1}^{p}, x_{i 2}^{p}, \ldots, x_{i n}^{p}\right\}, \quad p \in\left\{1,2, \ldots, n_{c}\right\}$         (5)

Step 4: Calculate the extension distance $E D_{i k}$ between the training sample $X_{i}^{p}$ and the kth cluster based on the extension distance function, for $k=1,2, \ldots, n_{c}$.

Step 5: Find $k^{*}$ such that $E D_{i k^{*}}=\min _{k}\left\{E D_{i k}\right\}$. If $k^{*}=p$, go to Step 7; otherwise go to Step 6.

Step 6: Update the weights and clustering center.

Step 6.1: Update the cluster centers of the pth class and the $k^{*}$th class:

$z_{p j}^{\text {new}}=z_{p j}^{\text {old}}+\eta\left(x_{i j}^{p}-z_{p j}^{\text {old}}\right)$         (6)

$z_{k^{*} j}^{n e w}=z_{k^{*} j}^{o l d}-\eta\left(x_{i j}^{p}-z_{k^{*} j}^{\text {old}}\right)$       (7)

Step 6.2: Update the weights of the pth class and the $k^{*}$th class:

$\left\{\begin{array}{l}w_{p j}^{L(n e w)}=w_{p j}^{L(o l d)}+\eta\left(x_{i j}^{p}-z_{p j}^{o l d}\right) \\ w_{p j}^{U(n e w)}=w_{p j}^{U(o l d)}+\eta\left(x_{i j}^{p}-z_{p j}^{o l d}\right)\end{array}\right.$          (8)

$\left\{\begin{array}{l}w_{k^{*} j}^{L(n e w)}=w_{k^{*} j}^{L(o l d)}-\eta\left(x_{i j}^{p}-z_{k^{*} j}^{o l d}\right) \\ w_{k^{*} j}^{U(n e w)}=w_{k^{*} j}^{U(o l d)}-\eta\left(x_{i j}^{p}-z_{k^{*} j}^{o l d}\right)\end{array}\right.$        (9)

Wherein, $\eta$ is the learning rate.

Step 7: Repeat Step 3 to Step 6 until all patterns have been trained; the learning epoch is then complete.

Step 8: If the clustering process has converged and the total error has reached the given value, training is complete; otherwise return to Step 3.
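Steps 1-8 can be sketched in Python with NumPy. Since the paper does not spell out the extension distance in Step 4, the form below is the one commonly used in the ENN literature; the variable names and the toy data are ours:

```python
import numpy as np

def extension_distance(x, wL, wU, z):
    """Extension distance of sample x to each cluster (common ENN form):
    ED_k = sum_j (|x_j - z_kj| - (wU_kj - wL_kj)/2) / |(wU_kj - wL_kj)/2| + 1."""
    half = np.abs(wU - wL) / 2 + 1e-9           # half-width of classical domain
    return ((np.abs(x - z) - half) / half + 1).sum(axis=1)

def train_enn(X, y, n_classes, eta=0.1, epochs=50):
    # Step 1: classical domains = per-class min/max of the training data
    wL = np.array([X[y == k].min(axis=0) for k in range(n_classes)])
    wU = np.array([X[y == k].max(axis=0) for k in range(n_classes)])
    z = (wL + wU) / 2                           # Step 2: initial centers
    for _ in range(epochs):                     # Step 8: repeat until convergent
        errors = 0
        for xi, p in zip(X, y):                 # Step 3: read in a pattern
            k_star = extension_distance(xi, wL, wU, z).argmin()  # Steps 4-5
            if k_star != p:                     # Step 6: misclassified -> update
                errors += 1
                d_p, d_k = eta * (xi - z[p]), eta * (xi - z[k_star])
                z[p] += d_p;      wL[p] += d_p;      wU[p] += d_p       # Eqs (6), (8)
                z[k_star] -= d_k; wL[k_star] -= d_k; wU[k_star] -= d_k  # Eqs (7), (9)
        if errors == 0:                         # Step 7: epoch finished
            break
    return wL, wU, z

# Toy data: two well-separated classes
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]])
y = np.array([0, 0, 1, 1])
wL, wU, z = train_enn(X, y, n_classes=2)
pred = [extension_distance(xi, wL, wU, z).argmin() for xi in X]
print(pred)  # [0, 0, 1, 1]
```

Classification after training simply assigns a sample to the cluster with the smallest extension distance, which is how the trained fire alarm model would decide among open fire, smoldering fire and no fire.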

3. Simulation and Analysis Based on Matlab

Table 1. The training samples of fire data (the three sensed quantities form the training sample; the three probabilities form the desired output)

No. | Temperature | Smoke | CO   | Open fire probability | Smoldering fire probability | No fire probability
1   | 0.95        | 0.21  | 0.75 | 0.85                  | 0.12                        | 0.03
2   | 0.88        | 0.20  | 0.01 | 0.78                  | 0.08                        | 0.14
3   | 0.75        | 0.15  | 0.75 | 0.70                  | 0.25                        | 0.05
4   | 0.63        | 0.16  | 0.30 | 0.65                  | 0.25                        | 0.10
5   | 0.22        | 0.75  | 0.80 | 0.35                  | 0.65                        | 0.00
6   | 0.31        | 0.37  | 0.68 | 0.07                  | 0.92                        | 0.01
7   | 0.41        | 0.67  | 0.75 | 0.03                  | 0.96                        | 0.01
8   | 0.24        | 0.53  | 0.68 | 0.30                  | 0.65                        | 0.05
9   | 0.20        | 0.30  | 0.10 | 0.08                  | 0.12                        | 0.80
10  | 0.15        | 0.08  | 0.23 | 0.04                  | 0.20                        | 0.76

In order to verify the validity and reliability of fire detection based on the extension neural network model, MATLAB is used for training and simulation. The training sample set is taken from the experimental parameters of the national standard test fires [13]. The inputs of the neural network are temperature, smoke concentration and CO concentration; the desired outputs are the probabilities of open fire, smoldering fire and no fire. The input physical quantities differ from one another, so the sample data must be normalized into [0, 1] before network training, to prevent small-valued information from being overwhelmed by large values [14]. Thirty groups of experimental data were taken, normalized, and used as the training samples. Part of the experimental data is shown in Table 1.
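The per-column normalization into [0, 1] described above can be sketched as a standard min-max scaling (a generic form; the raw temperature readings below are illustrative, not from the paper):

```python
import numpy as np

def minmax_normalize(col):
    """Scale a column of raw sensor readings linearly into [0, 1]."""
    col = np.asarray(col, dtype=float)
    return (col - col.min()) / (col.max() - col.min())

# Illustrative raw temperature readings in degrees Celsius
temps = minmax_normalize([22.0, 35.0, 48.0, 61.0, 87.0])
print(temps.min(), temps.max())  # 0.0 1.0
```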

The parameters of the neural network model are adjusted continuously during training, over repeated passes, until the error between the network output and the sample output values is less than a preset value. The resulting training error curve is shown in Figure 4.

Figure 4. The training error decline curve

The training error of the two-layer neural network model shows a clear convergence trend as the number of training iterations increases. Training is fast, completing in a little more than 20 iterations.

The trained neural network model was then verified with 20 groups of no fire test samples, 20 groups of smoldering fire test samples and 20 groups of open fire test samples. All test results were in accordance with the expected results.

4. The Hardware Design of Fire Alarm System

4.1 The overall design of the system

The hardware design must consider scalability and flexibility. This paper studies an intelligent fire alarm system that takes an STM32 development board and wireless communication modules as the main control unit. The fire signals are collected by the detectors, roughly pre-processed, and then sent to the STM32 development board through a ZigBee module. The STM32 board runs the extension neural network algorithm to process the signals. The board is connected to the Internet through an embedded web server to realize remote monitoring. At the same time, the main controller sends its judgment of the fire condition to the user through a GPRS module. The design diagram of the fire alarm system is shown in Figure 5.

Figure 5. The design diagram of fire alarm system

4.2 The choice of sensor

4.2.1 The choice of temperature sensor

The choice of temperature sensor directly affects the precision and accuracy of the system's temperature acquisition. The system selects the DS18B20 digital temperature sensor, from which the temperature can be read out directly; it converts the temperature to a 9-bit or 12-bit digital value in 93.75 ms or 750 ms respectively. This sensor gives the system a simpler structure and higher reliability. Some of its parameters are listed in Table 2.

Table 2. Performance parameters of DS18B20

Parameter name | Value range
Measuring range | -55℃ ~ +125℃
Voltage scope | 3~5 V
Test precision | 0.5℃ (-10℃ ~ +85℃)
Interface mode | Single-wire interface

4.2.2 The choice of smoke sensor

The NIS-09C smoke sensor made by NEMOTO of Japan is selected for the system. It is an ionization smoke sensor whose radiation element is Am-241, and it is a high-sensitivity sensor designed specifically for fire warning. Some of its parameters are listed in Table 3.

Table 3. Performance parameters of NIS-09C

Parameter name | Value range
Output voltage | 5.6±0.4 V
Current loss | 29±3 pA
Sensitivity | 0.6±0.1 V
Environment temperature | 0℃ ~ 50℃
Environment humidity | Below 95% RH

4.2.3 The choice of CO sensor

The TGS2442 CO sensor not only offers high precision and high sensitivity, but is also cheaper and consumes less power than products of comparable performance. Some of its parameters are listed in Table 4.

Table 4. Performance parameters of TGS2442

Parameter name | Value range
Measuring range | CO 0~1000 ppm
Heating resistor | 17±2.5 Ω
Heating current | Approx. 203 mA (in case of VHH)
Heating power consumption | Approx. 14 mW (ave)
Response time | <30 s
Environment temperature | -20℃ ~ 50℃
Environment humidity | 65±5% RH

4.3 The design of wireless network

The GPRS network is realized on top of the existing GSM network. It offers wide coverage, high data transmission speed and data error correction, so it can guarantee the reliability and real-time capability of data transmission. ZigBee is a newer wireless network technology mainly used for short-distance wireless connections. It is characterized by low power consumption, large network capacity, strong compatibility and high safety. GPRS, ZigBee and the sensors are combined to form a wireless sensor network in which local and remote communication complement each other.

4.3.1 GPRS module

The GPRS module uses the MC35i module produced by Siemens, which realizes data, speech and message transmission quickly and reliably. The MC35i mainly consists of a GSM baseband processor, GSM module, power supply module, flash memory, ZIF connector and antenna interface. The working voltage of the module is 3.3-4.8 V, and it can work in the 900 MHz and 1800 MHz frequency bands.

4.3.2 ZigBee communication module

The ZigBee communication module uses the CC2420, the first transceiver compliant with the 2.4 GHz IEEE 802.15.4 standard. It is based on SmartRF 03 technology in a 0.18 um CMOS process, requires only a few external components, and offers stable performance with low power consumption [15].

The CC2420 has 33 16-bit configuration registers, 15 command strobe registers, one 128-byte RX RAM and one 112-byte security information storage. It is operated through a 4-line SPI bus (SI, SO, SCLK, CSN), which sets the operating mode and provides read/write access to the buffers and status registers. When using the SPI in slave mode, the driver must also handle the FIFO, FIFOP, SFD, CCA, VREG_EN and RF_RESET lines.

4.4 The design of the information fusion processing unit

The information fusion processing unit is the core of the whole fire alarm system. It is mainly composed of an STM32 microprocessor, a ZigBee transceiver module, an RS232 serial transmit/receive circuit, an RJ45 interface circuit, a GSM module, an alarm module and a power supply module. The schematic diagram is shown in Figure 6.

Figure 6. The schematic diagram of information fusion processing unit

Among them, the STM32 microprocessor is the core unit, responsible for receiving sensor data, fusing it, making the overall judgment, distributing data remotely and performing output control. The processor adopts ARM's Cortex-M3 core, which meets the embedded field's demands for high integration, low power consumption, low cost and real-time response [16].

5. Conclusion

The fire alarm system based on the extension neural network can give full play to the advantages of multi-sensor information resources. The information is fused by the extension neural network algorithm through its redundancy and complementarity in space and time, yielding a more accurate and consistent description of the detected object and significantly improving the reliability of the system. The model performs better than traditional fire monitoring systems composed of only a single sensor.

Acknowledgements

This research was supported by the National Natural Science Foundation Youth Fund of China (No. 51005114); the Fundamental Research Funds for the Central Universities, China (No. NS2014050); the Research Fund for the Doctoral Program of Higher Education, China (No. 20112302130003); and the Jiangsu Planned Projects for Postdoctoral Research Funds (No. 1301162C).

References

1. Chen Tao, Yuan Shui-hong, Fan Wei-ceng. Prospect of the Fire Detection Technology [J], Fire Safety Science, 2001, 2(10): 108-110.

2. James A. Mike, Monitoring Multiple Aspects of Fire Signatures for Discriminating [J], Fire Detection Technology, 1999, 35(3):25-29.

3. Wang Shu, Fire Detection and Signal Processing Technology [M], Wuhan: Hua Zhong University of Science and Technology Press, 1998.

4. Liu Xiao-juan, Hou Xiao-yan, Multi Sensor Fusion Technology Becomes the Mainstream Development [J], Aerodynamic Missile Journal, 2010(8):86-90.

5. Wang Hui-qing, Han Yan-ling, Research Based on Multi Sensor and Data Fusion Technology [J], Computer and Modernization, 2002, 9: 33-36.

6. Lawrence A. Klein, Multi Sensor Data Fusion Theory and Its Application [M], Beijing Institute of Technology Press, 2000.

7. Daniel T. Gottuk, Michelle J. Peatross, Richard J. Roby, et al., Advanced Fire Detection Using Multi-Signature Alarm Algorithms [J], Fire Safety Journal, 2002, 37:381-394.

8. Cai Wen, Shi Yong, The Scientific Significance and Future Development of Extenics [J], Journal of Harbin Institute of Technology, 2006, 38 (7):1079-1086.

9. Cai Wen, Extension Set and Non-Compatible Problems [J], Chinese Journal of Scientific Exploration, 1983, (1).

10. Cai Wen, Extension Management Engineering and Application [J], International Journal of Operations and Quantitative Management, 1999, (1): 59-72.

11. Zhou Yu, Qian Xu, Zhang Jun-cai, et al., Survey and Research of Extension Neural Network [J], Application Research of Computers, 2010, (1): 1-5.

12. Sun Bai-qing, Xing Ai-guo, Zhang Ji-bin, et al., Design and Implementation Neural Network Model [J], Journal of Harbin Institute of Technology, 2006, (7):1156-1159.

13. The National Standard of the People’s Republic of China-Specification for Design of Automatic Fire Alarm System (GB50116-98), Beijing: Chinese Standard Press, 1998.

14. Zhou Li-kai, Kang Yao-hong, Neural Network Model and MATLAB Simulation Program Design [M], Beijing: Tsinghua University Press. 2011.

15. CC2420 Completely Manual [Z], 2010, (10):22-29.

16. Wang Yong-hong, et al., STM32 Series ARM Cortex-M3 Microcontroller Principle and Practice [M], Beijing: Beihang University Press, 2008.