An incremental-learning-by-navigation approach to vision-based autonomous land vehicle (ALV) guidance in indoor environments is proposed. The approach consists of three stages: initial learning, navigation, and model updating. In the initial learning stage, the ALV is driven manually while environment images and other status data are recorded automatically. An off-line procedure is then performed to build an initial environment model. In the navigation stage, the ALV moves through the learned environment automatically, locates itself by model matching, and records the information needed for model updating. In the
model updating stage, an off-line procedure is performed to refine the learned model, so a more precise model is obtained after each navigation-and-update iteration. The environment features used are vertical straight lines in camera views. A multiweighted generalized Hough transform is proposed for model matching. A real ALV was used as the testbed, and successful navigation experiments show the feasibility of the proposed approach.
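The abstract names a multiweighted generalized Hough transform for model matching but gives no detail. As a rough illustration only, not the paper's algorithm, the sketch below shows the underlying idea of Hough-style voting: each pairing of an observed vertical-line bearing with a model line casts a weighted vote for a candidate heading offset, and the offset bin with the most votes is taken as the match. All function names, parameters, and values are illustrative assumptions.

```python
# Illustrative sketch of Hough-style voting for model matching, NOT the
# paper's multiweighted generalized Hough transform. Bearings are in degrees.
from collections import Counter

def hough_match(model_angles, observed_angles, weights=None, bin_deg=5):
    """Vote for the heading offset that best aligns observed vertical-line
    bearings with model lines. Each (observed, model) pair casts a weighted
    vote for the offset between them; the peak bin is returned."""
    if weights is None:
        weights = [1.0] * len(observed_angles)  # uniform feature weights
    votes = Counter()
    for obs, w in zip(observed_angles, weights):
        for mod in model_angles:
            # Quantize the implied offset into a voting bin.
            offset = round((mod - obs) / bin_deg) * bin_deg
            votes[offset] += w
    best_offset, score = votes.most_common(1)[0]
    return best_offset, score

# Hypothetical usage: model lines at 10, 40, 75 degrees; the same lines
# observed after the vehicle heading shifted by 15 degrees.
model = [10.0, 40.0, 75.0]
observed = [a - 15.0 for a in model]
offset, score = hough_match(model, observed)
print(offset, score)  # peak vote at the 15-degree offset bin
```

Correct pairings reinforce a single bin while spurious pairings scatter, which is what makes the voting robust to missing or extra detected lines.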
Published in: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 28(5): 740-748