Qualitative Visual Servoing for Navigation

Anthony Remazeilles, François Chaumette, Patrick Gros

IRISA, Campus de Beaulieu, 35042 Rennes Cedex, France

Pages: 191-209
Received: 12 January 2006
Accepted: N/A
Open Access

Abstract: 

We propose in this article a novel approach for the vision-based control of a robotic system during a navigation task. This technique is based on a topological representation of the environment, in which the scene is described directly in the sensor space by an image database acquired off-line. Before each navigation task, a preliminary step localizes the current position of the robotic system. This is achieved through an image retrieval scheme, by searching the database for the views most similar to the one currently provided by the camera. A classical shortest-path algorithm then extracts from the database a sequence of views that visually describes the environment the robot has to traverse in order to reach the desired position. This article focuses mainly on the control law used to drive the motions of the robotic system, which compares the visual information extracted from the current view with that of the image path. This control law requires neither a CAD model of the environment nor a temporal path planning step. Furthermore, the images of the path are not treated as successive desired positions that the camera must reach one after the other. The proposed qualitative visual servoing scheme, based on cost functions, ensures that the robotic system always remains able to observe some of the visual features initially detected along the image path. Experiments performed in simulation and on a real system demonstrate that this formalism makes it possible to control a camera moving in a 3D environment.
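
To make the two algorithmic ingredients of the abstract more concrete, here is a minimal Python sketch written for this summary, not taken from the paper: a standard Dijkstra search that extracts the image path from the topological database, and a simple penalty that is zero while a tracked feature stays well inside the image and grows near the border, in the spirit of the qualitative cost functions described above. All identifiers (shortest_image_path, visibility_cost, adjacency, margin) are illustrative assumptions; the authors' actual cost functions and control law are those of Section 3 of the paper.

```python
import heapq

def shortest_image_path(adjacency, start, goal):
    """Extract an image path from the topological database.

    adjacency maps an image id to a list of (neighbour id, edge cost)
    pairs; an edge is assumed to exist when two database views share
    enough matched visual features.  Plain Dijkstra search.
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in adjacency.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return None  # goal not reachable from start

def visibility_cost(x, x_min, x_max, margin):
    """Qualitative cost on one image coordinate of a tracked feature.

    The cost is exactly zero while the feature lies inside a 'comfort
    zone' of the image, and grows quadratically as the feature
    approaches the border, so minimising the total cost keeps the
    features of the image path visible without imposing a unique
    desired camera position.
    """
    low, high = x_min + margin, x_max - margin
    if x < low:
        return (low - x) ** 2
    if x > high:
        return (x - high) ** 2
    return 0.0

# Toy usage: a five-view database and one feature near the image border.
graph = {
    "I0": [("I1", 1.0)],
    "I1": [("I0", 1.0), ("I2", 1.0), ("I3", 2.5)],
    "I2": [("I1", 1.0), ("I4", 1.0)],
    "I3": [("I1", 2.5), ("I4", 1.0)],
    "I4": [("I2", 1.0), ("I3", 1.0)],
}
print(shortest_image_path(graph, "I0", "I4"))    # ['I0', 'I1', 'I2', 'I4']
print(visibility_cost(630.0, 0.0, 640.0, 20.0))  # 100.0 (close to the border)
```

Any graph search or border penalty would do here; the point is only that the path is a sequence of overlapping views rather than a metric trajectory, and that the control objective is an inequality (keep features visible) rather than an equality (reach a pose).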

Keywords: 

Robotics, visual servoing, computer vision.

1. Introduction
2. Robot Localization and Path Determination
3. Navigation by Qualitative Visual Servoing
4. Experimental Results
5. Conclusion
  References
