2012, Vol. 30, No. 9, pp. 907-916
This paper presents an autonomous navigation system that traverses complex urban areas without prior environmental information. If large-scale information about the route (e.g., a scanned three-dimensional point cloud) were available in advance, autonomous navigation would be straightforward. However, when a robot must travel a long distance through an uncontrolled area, little information can be provided beforehand. In the ''Tsukuba Challenge'', participants are allowed to capture any kind of information about the route beforehand and use it on the challenge day. Since an autonomous navigation system should remain valid in realistic situations, we avoid using such prior information. We therefore propose an autonomous navigation system that relies only on sensory data available in real situations, e.g., positions from GPS, attitude from an IMU, visual images, and LRF (laser range finder) data. In particular, visual images and LRF data are used only for on-line processing, i.e., they are treated as generally available information rather than route-specific prior knowledge. Road boundary estimation and speed control were executed successfully in the presence of many pedestrians, other robots, and variations in road shape. The effectiveness of the proposed system is demonstrated by the final result of the ''Tsukuba Challenge 2010''.
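The abstract mentions on-line road boundary estimation from LRF data but does not describe the algorithm. As an illustrative sketch only (the function names, the jump threshold, and the discontinuity heuristic are assumptions, not the paper's method), boundary candidates in a planar LRF scan can be located where consecutive range readings jump sharply, which often corresponds to curbs or walls:

```python
import math

def detect_boundary_candidates(ranges, jump_threshold=0.5):
    """Return indices where consecutive LRF range readings differ by more
    than jump_threshold metres; such discontinuities are candidate road
    boundaries (curbs, walls). Heuristic sketch, not the paper's method."""
    candidates = []
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > jump_threshold:
            candidates.append(i)
    return candidates

def polar_to_cartesian(ranges, angle_min, angle_step):
    """Convert a planar LRF scan to (x, y) points in the sensor frame."""
    return [(r * math.cos(angle_min + i * angle_step),
             r * math.sin(angle_min + i * angle_step))
            for i, r in enumerate(ranges)]

# Synthetic scan: a wall on one side (~1.5 m) and open road ahead (~5 m).
scan = [1.5, 1.5, 1.6, 5.0, 5.1, 5.0, 4.9, 5.0]
print(detect_boundary_candidates(scan))  # -> [3]
```

A real system would additionally fit line segments to the Cartesian points and track the boundary over successive scans to reject pedestrians and other moving obstacles.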