
BearNav: Simple and Reliable Visual Teach & Repeat System

Zdeněk Rozsypálek
Senior PhD student

BearNav is a visual teach-and-repeat navigation system robust to appearance changes induced by varying illumination and naturally-occurring environment changes.

STRoLL BearNav

BearNav is a simple teach-and-repeat visual navigation system robust to appearance changes induced by varying illumination and naturally-occurring environment changes. Its core method is computationally efficient, does not require camera calibration, and can learn and autonomously traverse arbitrarily-shaped paths. During the teaching phase, when the robot is driven by a human operator, it stores its velocities and the image features visible from its on-board camera. During autonomous navigation, the method does not perform explicit robot localisation in 2D/3D space; it simply replays the velocities learned during the teaching phase while correcting its heading relative to the path based on its camera data. The experiments performed indicate that the navigation system corrects position errors of the robot as it moves along the path, so the robot can repeatedly drive along the desired path previously taught by the human operator.

Early versions of the system proved their ability to reliably traverse polygonal trajectories indoors and outdoors under adverse illumination conditions, in environments undergoing drastic appearance changes, and on flying robots. The version presented here allows learning of arbitrary, smooth paths, is fully integrated into the Robot Operating System (ROS), and is available online in this repository.
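
The heading correction can be illustrated with a short sketch: features from the mapped image are matched against features in the current camera image, and the dominant horizontal shift between them is turned into a steering correction. The snippet below is only a minimal illustration of this idea using ORB features and histogram voting; the function and parameter names (e.g. heading_correction, GAIN_PX_TO_RAD) are illustrative assumptions and are not taken from the BearNav codebase.

```python
# Minimal sketch of the heading-correction idea: match features between the
# mapped image and the current camera image, take the dominant horizontal
# shift, and turn it into a steering correction. Names and the gain value
# are illustrative, not BearNav's actual implementation.
import cv2
import numpy as np

GAIN_PX_TO_RAD = 0.002  # assumed proportional gain from pixel shift to angular velocity

def heading_correction(map_img, live_img):
    orb = cv2.ORB_create(nfeatures=500)
    kp_map, des_map = orb.detectAndCompute(map_img, None)
    kp_live, des_live = orb.detectAndCompute(live_img, None)
    if des_map is None or des_live is None:
        return 0.0  # no features detected -> no correction

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_map, des_live)
    if not matches:
        return 0.0

    # Horizontal displacement of each matched feature pair (live minus map).
    shifts = np.array([kp_live[m.trainIdx].pt[0] - kp_map[m.queryIdx].pt[0]
                       for m in matches])

    # Histogram voting: the most common shift is treated as the heading error.
    hist, edges = np.histogram(shifts, bins=32)
    peak = np.argmax(hist)
    dominant_shift = 0.5 * (edges[peak] + edges[peak + 1])

    # Convert the pixel offset into an angular-velocity correction.
    return -GAIN_PX_TO_RAD * dominant_shift
```

In this sketch, a dominant shift of the features to one side of the image produces a turn in the opposite direction, steering the robot back towards the heading it had when the path was taught.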

System overview

The navigation system works in two steps: teach and repeat. During the teaching phase, a robot is guided by an operator along a path that the robot is later supposed to navigate autonomously in the repeat phase. While learning, the robot extracts salient features from its on-board camera image and stores its current traveled distance and velocity. During autonomous navigation, the robot sets its velocity according to the traveled distance and compares the currently detected features with the previously mapped ones to correct its heading.
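
To make the two phases concrete, the following sketch shows how such a teach-and-repeat loop could be structured: the teach step appends records of distance, velocity, and features to a map, and the repeat step looks up the record closest to the current traveled distance, replays its velocity, and adds a visual heading correction. The Record class and function names are hypothetical and only illustrate the data flow, not BearNav's actual ROS interfaces.

```python
# Illustrative sketch of a teach-and-repeat loop under the assumptions above.
# The "map" is a list of records indexed by traveled distance; during the
# repeat phase the stored velocity is replayed and the angular velocity is
# adjusted by a heading correction computed from the camera.
from dataclasses import dataclass
from typing import Callable, List
import bisect

@dataclass
class Record:
    distance: float   # traveled distance at which the record was taken [m]
    linear: float     # commanded forward velocity [m/s]
    angular: float    # commanded angular velocity [rad/s]
    features: object  # image features visible at this point of the path

def teach_step(path_map: List[Record], distance: float, linear: float,
               angular: float, features: object) -> None:
    """Teach phase: store the current command and camera features."""
    path_map.append(Record(distance, linear, angular, features))

def repeat_step(path_map: List[Record], traveled: float, live_features: object,
                match_correction: Callable[[object, object], float]):
    """Repeat phase: replay the mapped velocities while correcting the heading."""
    # Find the map record closest to the current traveled distance.
    idx = bisect.bisect_left([r.distance for r in path_map], traveled)
    rec = path_map[min(idx, len(path_map) - 1)]
    # Forward speed is replayed; heading is corrected from the visual comparison.
    omega = rec.angular + match_correction(rec.features, live_features)
    return rec.linear, omega
```

Because the repeat phase only replays velocities and nudges the heading, no metric localisation or camera calibration is needed, which is what keeps the method computationally cheap.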