Title: Reinforcement Learning for Landmark-based Robot Navigation
Publication Type: Conference Paper
Year of Publication: 2002
Authors: Busquets D, López de Mántaras R, Sierra C, Dietterich TG
Conference Name: First International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-2002), Bologna, Italy
Volume: 2
Publisher: ACM
Pagination: 841-843
Abstract

In landmark-based navigation, a robot starts in an unknown location and must reach a desired target using visually acquired landmarks. In the scenario we study, the target is visible from the robot's initial location but may subsequently be occluded by intervening objects. The challenge for the robot is to acquire enough information about the environment that, even in that case, it can move from the starting location to the target position. In this paper, we build on our previously described multiagent system for outdoor landmark-based navigation. It comprises three systems: the Pilot, responsible for all motions of the robot; the Vision system, responsible for identifying and tracking landmarks and for detecting obstacles; and the Navigation system, responsible for choosing high-level robot motions.
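The abstract's three-way decomposition can be sketched as a minimal set of interacting components. This is an illustrative sketch only, not the paper's implementation: all class names, the `Landmark` record, and the fall-back rule (head toward any visible landmark when the target is occluded) are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Landmark:
    """A tracked landmark: illustrative structure, not from the paper."""
    name: str
    bearing: float   # direction from the robot, in radians
    visible: bool    # False when occluded by intervening objects

class Vision:
    """Identifies and tracks landmarks (obstacle detection omitted)."""
    def __init__(self, landmarks: List[Landmark]):
        self.landmarks = landmarks

    def visible_landmarks(self) -> List[Landmark]:
        return [lm for lm in self.landmarks if lm.visible]

class Pilot:
    """Executes all low-level robot motions."""
    def __init__(self):
        self.heading = 0.0

    def turn_to(self, bearing: float) -> None:
        self.heading = bearing

class Navigation:
    """Chooses high-level motions from what the Vision system reports."""
    def __init__(self, vision: Vision, pilot: Pilot, target_name: str):
        self.vision = vision
        self.pilot = pilot
        self.target = target_name

    def step(self) -> str:
        # If the target is visible, head toward it; otherwise head toward
        # some visible landmark to gather more information about the
        # environment (a hypothetical policy standing in for the
        # reinforcement-learned one described in the paper).
        visible = self.vision.visible_landmarks()
        target = next((lm for lm in visible if lm.name == self.target), None)
        goal = target or (visible[0] if visible else None)
        if goal is None:
            return "search"
        self.pilot.turn_to(goal.bearing)
        return f"move_toward:{goal.name}"
```

For example, when the target is occluded but a tree landmark is visible, `Navigation.step()` turns the Pilot toward the tree rather than stalling.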