Exploration Autonome et Cartographie Topologique en Environnement Inconnu Référencées Vision Omnidirectionnelle

Romain Marie, Ouiddad Labbani-Igbida, El Mustapha Mouaddib

Laboratoire Modélisation, Information & Systèmes Université de Picardie Jules Verne, Amiens, France

Pages: 221-243
DOI: https://doi.org/10.3166/TS.31.221-243
Received: 27 September 2013
Accepted: 12 May 2014

Abstract: 

In this paper, we address the problem of exploration and topological map building in totally unknown environments for a mobile robot equipped with a sole catadioptric sensor. Multiple representations of spatial knowledge are built from visual information only: free space detection, local space topology extraction, place signatures, and topological links inferred from robot actions. We show how these spatial representations are nested together to incrementally build a spatial map of the explored robot space. The efficiency of the approach is evaluated in real-world experiments, giving new solutions to real-time simultaneous localization, navigation and topological map building.

Extended Abstract

This paper addresses the problem of autonomous exploration, navigation and topological map building in unknown environments. Considering a mobile robot equipped with an omnidirectional catadioptric sensor, we develop an incremental approach that merges successive local perceptions into a spatial model of the explored space. It is based on a set of local representations built from visual information only:

– The omnidirectional free space, extracted in the image using an active contour (Merveilleux et al., 2011) initialized on the robot projection. It segments the image into two regions with complementary properties: the navigable space, which carries the topological structure of the ground surroundings, and the obstacles, which contain most of the relevant photometric information (a minimal sketch of this step is given after this list).

– The local Generalized Voronoi Diagram (GVD), computed directly in the image by applying an omnidirectional version of the Delta Medial Axis (DMA) to the extracted navigable space. It is based on an almost linear-time skeletonization algorithm (Marie et al., 2013), able to deal with boundary noise through an original pruning process driven by a single parameter. To account for the distortions introduced by the catadioptric sensor, and thus compute the skeleton of the free space rather than the skeleton of its projection, we replace the Euclidean metric with an adapted version that considers both the projection model and the position of the sensor with respect to the ground (a simplified skeleton sketch follows this list).

– A visual signature built from the outer part of the image (the obstacles), which contains most of the relevant photometric information of the scene. We use the Haar integral invariant formalism to capture this information, and propose a combination of several kernel functions to produce a distribution-like signature with good discriminative properties (see the signature sketch below).
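
To make the first step concrete, here is a minimal Python sketch of free-space segmentation by an expanding contour. It uses scikit-image's morphological geodesic active contour as a stand-in for the method of Merveilleux et al. (2011); the array name omni, the seed radius, and the iteration count are illustrative assumptions, not the paper's settings.

    import numpy as np
    from skimage.segmentation import (inverse_gaussian_gradient,
                                      morphological_geodesic_active_contour)

    def segment_free_space(omni, seed_radius=10, num_iter=300):
        # Grow a contour from the robot's projection (assumed at the image
        # centre) outward until it locks onto strong gradients, i.e. the
        # boundary between navigable ground and obstacles.
        rows, cols = omni.shape
        yy, xx = np.mgrid[:rows, :cols]
        init = ((yy - rows / 2) ** 2 + (xx - cols / 2) ** 2
                <= seed_radius ** 2).astype(np.int8)
        gimg = inverse_gaussian_gradient(omni.astype(float))  # edges -> ~0
        # balloon=1 inflates the contour wherever the edge map is flat,
        # so the initial disk expands to fill the navigable region.
        free = morphological_geodesic_active_contour(
            gimg, num_iter, init, smoothing=2, balloon=1)
        return free.astype(bool)  # True = free space, False = obstacles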
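
The DMA itself is specified in Marie et al. (2013); the simplified sketch below only illustrates the underlying idea of a distance-transform ridge pruned by a single parameter delta. It keeps the plain Euclidean metric where the paper substitutes a catadioptric-aware one, and the pruning rule shown is a crude approximation, not the published algorithm.

    import numpy as np
    from scipy.ndimage import distance_transform_edt, maximum_filter

    def delta_pruned_skeleton(free_space, delta=3.0):
        # Euclidean distance to the nearest obstacle for every free pixel.
        # (The paper replaces this metric by one that compensates for the
        # catadioptric projection; we keep the plain EDT for brevity.)
        dist = distance_transform_edt(free_space)
        # Ridge points: local maxima of the distance map inside free space.
        ridge = (dist >= maximum_filter(dist, size=3)) & free_space
        # Single-parameter pruning: branches whose clearance stays below
        # delta are treated as boundary noise and discarded.
        return ridge & (dist >= delta)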
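
For the place signature, a Haar integral invariant averages a kernel function over a transformation group; on an unwrapped omnidirectional image, robot rotations reduce to horizontal cyclic shifts. The sketch below is a toy version under that assumption, with hypothetical kernels; the paper's kernel combination is richer.

    import numpy as np

    def haar_signature(panorama, kernels):
        # One rotation-invariant component per kernel: average the kernel
        # response over all cyclic shifts (the discretised rotation group).
        width = panorama.shape[1]
        sig = np.array([np.mean([k(np.roll(panorama, s, axis=1))
                                 for s in range(width)])
                        for k in kernels])
        return sig / (np.linalg.norm(sig) + 1e-12)  # unit norm for matching

    # Hypothetical kernels: compare two fixed columns of the panorama; the
    # Haar average over all shifts makes each component rotation invariant.
    kernels = [lambda f, a=a: np.mean(np.abs(f[:, 0] - f[:, 5])) ** a
               for a in (0.5, 1.0, 2.0)]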

Using a simple control law that follows the extracted skeleton, the robot navigates autonomously on the GVD, builds new signatures of discovered places, and identifies loop closures when needed. Each place is associated with a node in the topological map and defined by a visual signature. A two-threshold approach is used for localization, allowing both the identification of new places and loop-closure detection (a sketch follows). The entire process, including image acquisition, free space segmentation, skeleton extraction, signature computation and map update, is performed online by the robot in less than 200 ms. The efficiency of the approach is evaluated in real-world experiments, giving new solutions to real-time simultaneous localization, navigation and topological map building.
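
The two-threshold decision rule can be sketched as follows; the similarity measure (dot product of unit-norm signatures) and the threshold values are illustrative assumptions, not the paper's exact choices.

    import numpy as np

    def localize(signature, nodes, t_same=0.90, t_new=0.60):
        # nodes: dict mapping node id -> stored unit-norm place signature.
        if not nodes:
            return "new_place", None
        ids = list(nodes)
        sims = [float(np.dot(signature, nodes[i])) for i in ids]
        best = int(np.argmax(sims))
        if sims[best] >= t_same:        # confident match: loop closure
            return "loop_closure", ids[best]
        if sims[best] <= t_new:         # nothing similar: create a node
            return "new_place", None
        return "ambiguous", ids[best]   # in between: defer the decision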

RÉSUMÉ

In this paper, we present a new method for the autonomous exploration of unknown environments and the construction of a topological map by a ground mobile robot using a catadioptric vision sensor. We propose an incremental approach that allows the robot to extract and combine several spatial representations built from the acquired images: the navigable free space obtained by propagating an active contour, the local and global topologies determined from the skeleton of the free space, and a place signature based on the relevant photometric information. Experimental results in a large-scale outdoor environment demonstrate the efficiency of the approach and open new perspectives for vision-based autonomous navigation methods, which remain an open problem in robotics.

Keywords: 

autonomous exploration, topological mapping, place recognition.

MOTS-CLÉS

autonomous exploration, topological map, place recognition.

1. Introduction
2. State of the Art
3. Free Space Segmentation
4. Local Topology Extraction
5. Place Signature Construction
6. Autonomous Exploration and Map Building
7. Validation and Real-World Experiments
8. Conclusion
Acknowledgements
References

Angeli A., Doncieux S., Meyer J.-A., Filliat D. (2009). Visual topological SLAM and global localization. In International conference on robotics and automation, p. 2029–2034.

Barla A., Odone F., Verri A. (2003). Histogram intersection kernel for image classification. In International conference on image processing, p. 513–516. 

Barreto J. P., Araújo H. (2001). Issues on the geometry of central catadioptric image formation. In Computer vision and pattern recognition, p. 422–427.

Blum H. (1967). A transformation for extracting new descriptors of shape. In W. Wathen-Dunn (Ed.), Models for the perception of speech and visual form, p. 362–380. MIT Press.

Chapoulie A., Rives P., Filliat D. (2013). Appearance-based segmentation of indoors/outdoors sequences of spherical views. In IROS, p. 1946–1951.

Charron C., Labbani-Igbida O., Mouaddib E. M. (2006). On building omnidirectional image signatures using Haar invariant features: Application to the localization of robots. In ACIVS, p. 1099–1110.

Chaussard J., Couprie M., Talbot H. (2011). Robust skeletonization using the discrete λ-medial axis. Pattern Recognition Letters, vol. 32, no 9.

Choset H., Nagatani K. (2001). Topological simultaneous localization and mapping (SLAM): Toward exact localization without explicit localization. Transactions on Robotics and Automation, vol. 17, no 2, p. 125–137.

Couprie M., Coeurjolly D., Zrour R. (2007). Discrete bisector function and Euclidean skeleton in 2D and 3D. Image and Vision Computing, vol. 25, no 10, p. 1519–1698.

Cummins M., Newman P. (2011). Appearance-only SLAM at large scale with FAB-MAP 2.0. International Journal of Robotics Research, vol. 30, no 9, p. 1100–1123.

Danielsson P. (1980). Euclidean distance mapping. Computer Graphics and Image Processing, vol. 14, no 3, p. 227–248.

Davison A. J., Reid I. D., Molton N. D., Stasse O. (2007). MonoSLAM: Real-time single camera SLAM. PAMI Transactions, vol. 29, no 6, p. 1052–1067.

Dayoub F., Cielniak G., Duckett T. (2011). Long-term experiments with an adaptive spherical view representation for navigation in changing environments. Robotics and Autonomous Systems, vol. 59, no 5, p. 285–295. 

Durrant-Whyte H., Bailey T. (2006). Simultaneous localization and mapping (SLAM): Part I. Robotics Automation Magazine, vol. 13, no 2, p. 99–110.

Eade E., Drummond T. (2006). Scalable monocular SLAM. In Computer vision and pattern recognition, p. 469–476.

Geyer C., Daniilidis K. (2001). Catadioptric projective geometry. International Journal on Computer Vision, vol. 45, no 3, p. 223–243. 

Hesselink W. (2007). A linear-time algorithm for Euclidean feature transform sets. Information Processing Letters, vol. 102, no 5, p. 181–186.

Konolige K., Agrawal M. (2008). FrameSLAM: From bundle adjustment to real-time visual mapping. Transactions on Robotics, vol. 24, no 5, p. 1066–1077.

Korrapati H., Courbon J., Mezouar Y., Martinet P. (2012). Image sequence partitioning for outdoor mapping. In ICRA, p. 1650–1655.

Kuipers B., Modayil J., Beeson P., MacMahon M., Savelli F. (2004). Local metrical and global topological maps in the hybrid spatial semantic hierarchy. In International conference on robotics and automation, p. 4845–4851.

Labbani-Igbida O., Charron C., Mouaddib E. M. (2011). Haar invariant signatures and spatial recognition using omnidirectional visual information only. Autonomous Robots, vol. 30, no 3, p. 333–349.

Lim J., Frahm J.-M., Pollefeys M. (2012). Online environment mapping using metric-topological maps. International Journal of Robotics Research, vol. 31, no 12, p. 1394–1408.

Marie R., Labbani-Igbida O., Mouaddib E. M. (2013). The delta-medial axis: A robust and linear time algorithm for Euclidean skeleton computation. In International conference on image processing.

Merveilleux P., Labbani-Igbida O., Mouaddib E. M. (2011). Robust free space segmentation using active contours and monocular omnidirectional vision. In International conference on image processing, p. 2877–2880.

Milford M., Wyeth G. (2010). Persistent navigation and mapping using a biologically inspired SLAM system. International Journal of Robotics Research, vol. 29, no 9, p. 1131–1153.

Muhammad N., Fofi D., Ainouz S. (2009). Current state of the art of vision based SLAM. In SPIE image processing: Machine vision applications II, p. 72510F–72510F-12.

Murillo A. C., Kosecka J. (2009). Experiments in place recognition using gist panoramas. In IEEE workshop on omnidirectional vision, camera networks and non-classical cameras, held with ICCV, p. 2196–2203.

Se S., Lowe D., Little J. (2002). Mobile robot localization and mapping with uncertainty using scale-invariant visual landmarks. International Journal of Robotics Research, vol. 21, p. 735–758.

Sim R., Elinas P., Little J. J. (2007). A study of the Rao-Blackwellised particle filter for efficient and accurate vision-based SLAM. International Journal on Computer Vision, vol. 74, no 3, p. 303–318.

Singh G., Kosecka J. (2010). Visual loop closing using gist descriptors in Manhattan world. In International conference on robotics and automation.

Valgren C., Lilienthal A. J., Duckett T. (2006). Incremental topological mapping using omnidirectional vision. In International conference on intelligent robots and systems, p. 3441–3447.