Towards a Camera-Based System for the Detection and Characterization of Critical Weather Conditions for Road Safety

Nicolas Hautière, Jérémie Bossu, Erwan Bigorgne, Didier Aubert

Université Paris-Est, IFSTTAR, IM, LEPSIS, 58 boulevard Lefebvre, F-75015 Paris

Pages: 575-603
DOI: https://doi.org/10.3166/TS.28.575-603
Received: 4 January 2011
Accepted: 6 September 2011

OPEN ACCESS

Abstract: 

The presence of a reduced visibility distance on a road network (thick fog, heavy rain, etc.) affects its safety. We designed a roadside system which aims to detect critical situations, such as dense fog or heavy rain, with a simple CCTV camera. The different image processing algorithms are presented, in particular the estimation of the visibility distance, the detection of fog, and the detection of rain. Based on the principles underlying these algorithms, a camera is specified to meet the needs expressed by the NF P 99-320 standard on road meteorology. Experimental results are presented, as well as prospects for validation at a larger scale.

Extended Abstract

The occurrence of weather conditions that impair visibility on a road network alters traffic safety. In such conditions, drivers are instructed to slow down. In France, for example, the speed limit is reduced to 50 km/h whenever the visibility is less than 50 m. Unfortunately, weather centers are not able to accurately monitor low-visibility areas, because fog is a very local phenomenon. To do so, road operators may equip their networks with dedicated optical sensors. When located in areas where fog is recurrent (MacHutchon, Ryan, 1999), these road sensors provide information that can be relayed to vehicles or displayed on variable message signs. Unfortunately, these sensors are expensive and sensitive to the heterogeneity of fog, because their measurement volume is too small (Hautière, Labayrade, Aubert, 2006). To improve the detection of fog along roads, camera-based approaches are being developed (Bush, Debes, 1998; Hagiwara et al., 2006; Hallowell et al., 2007; Hautière et al., 2008; Lagorio et al., 2008), because cameras are multifunctional sensors that are already widely deployed along roads to monitor traffic and detect incidents.

Based on the NF P 99-320 standard on road weather (AFNOR, 1998), we present in this paper a camera-based system for detecting and characterizing weather conditions which are critical for road safety. It determines the visibility range and also detects the presence of fog or rain. It only requires an accurate geometric calibration of the camera, without a prior learning phase.

Visibility Estimation

The meteorological visibility distance is the greatest distance at which a black object of suitable dimensions can be seen against the sky at the horizon. The International Commission on Illumination adopted a value of 5% for the minimum visible contrast (CIE, 1987). Different methods to estimate the meteorological visibility distance have been developed in the field of transportation. One family of methods estimates the contrast of objects in the scene and assigns a distance to each of them, usually assuming the road is flat. Bush and Debes (1998) use a wavelet transform to detect the highest edge with a contrast above 5% in a region of interest which encompasses the road pavement. However, the presence of vertical objects, such as a truck, in their area of interest biases the estimation. Moreover, the accuracy of their method strongly depends on the characteristics of the camera. A system for estimating the meteorological visibility distance based on the same principles is proposed by Zhao-Zheng et al. (2009), except that it uses the algorithm proposed by Hautière, Aubert and Jourlin (2006) to estimate the contrast.
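For reference, the photometric model underlying this family of methods is Koschmieder's law (Middleton, 1952): the apparent contrast of a black object seen against the sky decays exponentially with distance, so the CIE 5% threshold converts the atmospheric extinction coefficient k into a visibility distance:

```latex
C(d) = e^{-kd}, \qquad
C(V_{\mathrm{met}}) = 0.05
\;\Longrightarrow\;
V_{\mathrm{met}} = -\frac{\ln 0.05}{k} \approx \frac{3}{k}
```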

Our approach belongs to this family of methods. Unlike Hallowell et al. (2007) and Hagiwara et al. (2006), it takes into account the 3-D structure of the scene thanks to the detection of the trafficked road area. Unlike Bush and Debes (1998) and Zhao-Zheng et al. (2009), moving objects in the region of interest are filtered out using a background subtraction method. The accuracy of the system is discussed with respect to the characteristics of the camera. In particular, a camera is specified to fulfill the requirements expressed in the NF P 99-320 standard (AFNOR, 1998).
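The paper does not prescribe a particular background model. As a purely illustrative sketch, a Gaussian-mixture subtractor in the spirit of KaewTraKulPong and Bowden (2001) and Stauffer and Grimson (2000), here through OpenCV, could mask moving vehicles before any contrast is measured (the function name and parameter values are ours, not the authors'):

```python
import cv2

# Gaussian-mixture background model; the history length and shadow
# handling are illustrative defaults, not values from the paper.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

def static_scene_mask(frame_bgr):
    """Return a boolean mask of the pixels belonging to the static scene.

    Moving vehicles are masked out so that the contrast measurements
    feeding the visibility estimate only use the stationary road scene.
    """
    fg = subtractor.apply(frame_bgr)  # 0 = background, 127 = shadow, 255 = moving object
    return fg == 0                    # keep background pixels only
```

Any of the background subtraction algorithms benchmarked by Dhome et al. (2010) could be substituted here.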

Detection of Fog and Rain

In the video surveillance literature, the detection of fog is not explicitly addressed. It is generally done indirectly, by applying a simple threshold to the visibility distance (see previous paragraph). In the area of road weather, a threshold of 400 m is the standard (AFNOR, 1998). In the field of meteorological observation, a threshold of 1000 m is commonly used. These distances correspond to the reaction time needed in case of an unexpected obstacle on the road and on the runway, respectively. To develop a fog detection method using a fixed camera, we propose to adapt a method originally dedicated to driver assistance (Hautière, Tarel et al., 2006). This method is based on a daytime fog model, namely Koschmieder's law, whose parameters are dynamically instantiated. To this end, a region of the image that displays minimal line-to-line gradient variation when browsed from bottom to top is identified by a region-growing process. A vertical band is then selected in the detected area. Finally, taking the median intensity of each row yields the vertical variation of the intensity of the image and the position of an inflection point, whose position relative to the horizon line is proportional to the fog density.
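As a rough sketch of this profile analysis (the interface and variable names are hypothetical, and the region-growing and band-selection steps are omitted), the inflection point can be located on the smoothed median-intensity profile and converted into a visibility distance under the flat-road model d = λ/(v − v_h) of Hautière, Tarel et al. (2006):

```python
import numpy as np

def visibility_from_band(band, v_h, lam):
    """Estimate the meteorological visibility from a vertical image band.

    band : grayscale band selected inside the low-gradient region
    v_h  : image row of the horizon line (from the geometric calibration)
    lam  : constant of the flat-road model d = lam / (v - v_h)
    """
    profile = np.median(band.astype(float), axis=1)                # median intensity per row
    profile = np.convolve(profile, np.ones(9) / 9.0, mode="same")  # light smoothing
    slope = np.gradient(profile)                                   # dI/dv along the rows
    # The inflection point is the extremum of dI/dv, searched below the horizon line.
    v_i = v_h + 1 + int(np.argmax(np.abs(slope[v_h + 1:])))
    k = 2.0 * (v_i - v_h) / lam   # fog density (cf. Hautière, Tarel et al., 2006)
    return 3.0 / k                # meteorological visibility, V_met = 3 / k
```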

The detection of rain is likewise rarely discussed in the literature. Existing works focus on the segmentation of raindrops for image rendering applications (Hase et al., 1999; Zhang et al., 2006; Garg, Nayar, 2007; Barnum et al., 2010). These methods generally lead to an over-segmentation of rain pixels, which causes false detections in the absence of rain. Therefore, they cannot be used alone to detect the presence of rain or snow. To circumvent these limitations, we developed a method based on robust local descriptors. Selection rules based on photometry and size are proposed in order to select potential rain streaks. A Histogram of Orientations of rain or snow Streaks (HOS), estimated with the method of geometric moments, is then computed; it is assumed to follow a Gaussian-uniform mixture model. The Gaussian distribution represents the orientation of the rain or snow, whereas the uniform distribution represents the orientation of the noise. An expectation-maximization algorithm is used to separate these two distributions. Following a goodness-of-fit test, the Gaussian distribution is temporally smoothed and its amplitude allows deciding on the presence of rain or snow. This approach can be seen as a generalization of the temporal median filter proposed by Hase et al. (1999), the temporal classification algorithm tested by Zhang et al. (2006), and the photometric constraints proposed by Garg and Nayar (2007).
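A minimal sketch of the Gaussian-plus-uniform separation step follows (names and defaults are ours; the streak extraction, goodness-of-fit test, and temporal smoothing are omitted). Orientations are assumed to lie in [0, 180) degrees, so the noise density is uniform at 1/180:

```python
import numpy as np

def em_gaussian_uniform(theta, n_iter=50):
    """Separate streak orientations into Gaussian (rain/snow) and uniform (noise) parts.

    theta : array of streak orientations in degrees, in [0, 180)
    Returns the weight, mean and standard deviation of the Gaussian component.
    """
    w, mu, sigma = 0.5, float(np.mean(theta)), float(np.std(theta)) + 1e-6
    u = 1.0 / 180.0  # uniform density of the noise over [0, 180)
    for _ in range(n_iter):
        # E-step: responsibility of the Gaussian component for each streak
        g = np.exp(-0.5 * ((theta - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        r = w * g / (w * g + (1.0 - w) * u + 1e-12)
        # M-step: re-estimate the mixture weight and the Gaussian parameters
        w = float(np.mean(r))
        mu = float(np.sum(r * theta) / (np.sum(r) + 1e-12))
        sigma = float(np.sqrt(np.sum(r * (theta - mu) ** 2) / (np.sum(r) + 1e-12))) + 1e-6
    return w, mu, sigma
```

The weight w plays the role of the amplitude mentioned above: after temporal smoothing, a high w indicates a dominant streak orientation, and hence the presence of rain or snow.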

RÉSUMÉ

The presence of a reduced visibility distance on a road network (thick fog, heavy rain, etc.) affects its safety. We have designed a roadside system which aims to detect critical situations such as dense fog or heavy rainfall using a video camera. The different image processing steps are presented, in particular the estimation of the visibility distance, the detection of fog, and the detection of rain. Based on the principles underlying these algorithms, a camera is then specified to meet the needs expressed by the NF P 99-320 standard on road meteorology. Experimental results are presented, as well as prospects for validation at a larger scale.

Keywords: 

machine vision, road safety, visual surveillance, visibility, fog, rain, camera, calibration, background subtraction.

MOTS-CLÉS

vision system, road safety, video surveillance, visibility, fog, rain, camera, calibration, background subtraction.

1. Introduction
2. A "Present Weather" Sensor Based on a Roadside Camera
3. Tools for Estimating the Road Visibility Class
4. Fog Detection
5. Detection of the Presence of Rain
6. Camera Specification and Calibration
7. Experimental Validation
8. Conclusion and Perspectives
  References

AFNOR. (1998). Road meteorology - gathering of meteorological and road data - terminology. NF P 99-320.

Barnum P., Narasimhan S., Kanade T. (2010). Analysis of rain and snow in frequency space. International Journal of Computer Vision, vol. 86, no 2-3, p. 256-274.

Bossu J., Hautière N., Tarel J.-P. (2009). Utilisation d’un modèle probabiliste d’orientation de segments pour détecter des hydrométéores dans des séquences vidéo. In XXIIe Colloque GRETSI, Dijon, France.

Brewer N., Liu N. (2008). Using the shape characteristics of rain to identify and remove rain from video. In Joint IAPR International Workshop SSPR & SPR, Orlando, USA, vol. 5342/2009, p. 451-458. Springer.

Bush C., Debes E. (1998). Wavelet transform for analyzing fog visibility. IEEE Intelligent Systems, vol. 13, no 6, p. 66-71.

Cheung S.-C., Kamath C. (2004). Robust techniques for background subtraction in urban traffic video. In Video Communications and Image Processing, SPIE Electronic Imaging, p. 881-892.

CIE. (1987). International lighting vocabulary, no 17.4.

Dean N., Raftery A. (2005). Normal uniform mixture differential gene expression detection for cDNA microarrays. BMC Bioinformatics, vol. 6, no 173.

Dhome Y., Tronson N., Vacavant A., Chateau T., Gabard C., et al. (2010). A benchmark for background subtraction algorithms in monocular vision: a comparative study. In IEEE International Conference on Image Processing Theory, Tools and Applications (IPTA’10), France.

Garg K., Nayar S. (2007). Vision and rain. International Journal of Computer Vision, vol. 75, no 1, p. 3-27.

Hagiwara T., Ota Y., Kaneda Y., Nagata Y., Araki K. (2006). A method of processing CCTV digital images for poor visibility identification. In 85th Transportation Research Board Annual Meeting.

Hallowell R., Matthews M., Pisano P. (2007, January). An automated visibility detection algorithm utilizing camera imagery. In 23rd Conference on IIPS, 87th AMS Annual Meeting, San Antonio, Texas, USA.

Hase H., Miyake K., Yoneda M. (1999). Real-time snowfall noise elimination. In IEEE International Conference on Image Processing, vol. 2, p. 406-409.

Hautière N., Aubert D., Jourlin M. (2006, September). Mesure du contraste local dans les images, application à la mesure de distance de visibilité par caméra embarquée. Traitement du Signal, vol. 23, no 2, p. 145-158.

Hautière N., Bigorgne E., Aubert D. (2008). Visibility range monitoring through use of a roadside camera. In IEEE Intelligent Vehicles Symposium, Eindhoven, Netherlands.

Hautière N., Bossu J., Brand C. (2009). Une solution d’acquisition d’images à multiples fonctions de réponse. In XXIIe Colloque GRETSI, Dijon, France.

Hautière N., Labayrade R., Aubert D. (2006). Estimation of the visibility distance by stereovision: a generic approach. IEICE Transactions on Information and Systems, vol. E89-D, no 7, p. 2084-2091.

Hautière N., Tarel J.-P., Lavenant J., Aubert D. (2006, April). Automatic fog detection and estimation of visibility distance through use of an onboard camera. Machine Vision and Applications Journal, vol. 17, no 1, p. 8-20.

KaewTraKulPong P., Bowden R. (2001). An improved adaptive background mixture model for real-time tracking with shadow detection. In 2nd European Workshop on Advanced Video-Based Surveillance Systems, Kingston, UK.

Kalman R. E., Bucy R. S. (1961). New results in linear filtering and prediction theory. Transactions of the ASME - Journal of Basic Engineering, vol. 83, p. 95-107.

Kohavi R., Provost F. (1998). Glossary of terms. Machine Learning, vol. 30, p. 271-274.

Kwon T. M. (2004, July). Atmospheric visibility measurements using video cameras: Relative visibility. Technical report, University of Minnesota Duluth.

Lagorio A., Grosso E., Tistarelli M. (2008). Automatic detection of adverse weather conditions in traffic scenes. In IEEE International Conference on Advanced Video and Signal based Surveillance, p. 273-279.

MacHutchon K., Ryan A. (1999). Fog detection and warning, a novel approach to sensor location. In IEEE AFRICON Conference, Cape Town, South Africa, vol. 1, p. 43-50.

Middleton W. (1952). Vision through the atmosphere. University of Toronto Press.

Mitsunaga T., Nayar S. (1999). Radiometric self calibration. In IEEE Conference on Computer Vision and Pattern Recognition.

Safee-Rad R., Smith K., Benhabib B., Tchoukanov I. (1992). Application of moment and Fourier descriptors to the accurate estimation of elliptical-shape parameters. Pattern Recognition Letters, vol. 13, no 7, p. 497-508.

Stauffer C., Grimson W. (2000). Learning patterns of activity using real-time tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no 8, p. 747-757.

Strobl K., Sepp W., Fuchs S., Paredes C., Arbter K. (n.d.). DLR CalLab and DLR CalDe, Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Oberpfaffenhofen, Germany. http://www.robotic.dlr.de/callab/

Swets J. (1988, June 3). Measuring the accuracy of diagnostic systems. Science, vol. 240, no 4857, p. 1285-1293.

Zhang X., Li H., Qi Y., Leow W. K., Ng T. K. (2006). Rain removal in video by combining temporal and chromatic properties. In IEEE International Conference on Multimedia & Expo.

Zhang Z. (2000). A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no 11, p. 1330-1334.

Zhao-Zheng C., Jia L., Qi-mei C. (2009). Real-time video detection of road visibility conditions. In WRI World Congress on Computer Science and Information Engineering, p. 472-476.