ActiveSLAM
Funding Source: Max Planck Institute and the University of Stuttgart
Motivation: Robots capable of helping humans with everyday tasks, whether in workplaces or homes, are rapidly gaining popularity. To be fully functional companions, robots should be able to navigate and map unknown environments in a seamless and efficient way -- quickly, using little energy and without navigating unnecessarily. Simultaneous localization and mapping, popularly called SLAM, has been developed mainly as a passive process in which robots merely follow external control inputs, are controlled directly by humans, or exploit prior knowledge through predefined waypoints or landmarks. Active SLAM, on the other hand, refers to an approach in which robots exploit their sensor measurements to take control decisions that increase map information, while simultaneously performing other user-defined tasks in an energy-efficient way.
Goals and Objectives:
Methodology: In our most recent work on ActiveSLAM, we introduce an active visual SLAM approach for our omnidirectional robot 'Robotino' [1]. It focuses both on the exploration of unknown parts of the environment and on the re-observation of already mapped areas to improve the so-called 'coverage information' for better overall map quality. We employ activeness at two different levels -- the first acts on global planning, through informative path planning: using the information provided by the global occupancy map, it selects the best path and the heading at every waypoint along that path. The second influences only the short-term movement, using the real-time local distribution of 3D visual features. Inside the utility function, we use Shannon's entropy measure and balance between exploration and coverage behaviours, as sketched below. By exploiting all the available information to drive the camera direction (since our robot is omnidirectional), we maximize the amount of information gathered during the robot's movement between waypoints [2].
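To make the entropy-based utility concrete, the following is a minimal, illustrative sketch, not the implementation from [1] or [2]; all names (path_utility, coverage_map, the weight w_explore, and the toy data) are assumptions introduced here. It scores each candidate path by the Shannon entropy of the occupancy cells the camera would observe (exploration gain) plus a reward for re-observing poorly covered cells (coverage gain), then selects the best-scoring path:

    # Illustrative sketch only: names and weighting are assumptions,
    # not the implementation of [1] or [2].
    import numpy as np

    def cell_entropy(p_occ: np.ndarray) -> np.ndarray:
        """Shannon entropy (bits) of per-cell occupancy probabilities."""
        p = np.clip(p_occ, 1e-6, 1.0 - 1e-6)  # avoid log(0)
        return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

    def path_utility(visible_cells, occ_map, coverage_map, w_explore=0.6):
        """Utility of one candidate path.

        visible_cells : indices of map cells the camera would observe
        occ_map       : per-cell occupancy probabilities in [0, 1]
        coverage_map  : per-cell coverage score in [0, 1] (1 = well covered)
        w_explore     : balance between exploration and coverage (assumed)
        """
        explore = cell_entropy(occ_map[visible_cells]).sum()   # unknown-space gain
        coverage = (1.0 - coverage_map[visible_cells]).sum()   # re-observation gain
        return w_explore * explore + (1.0 - w_explore) * coverage

    def best_path(candidate_paths, occ_map, coverage_map):
        """Pick the candidate path (set of observed cells) with highest utility."""
        scores = [path_utility(cells, occ_map, coverage_map)
                  for cells in candidate_paths]
        return int(np.argmax(scores))

    # Toy usage: three candidate paths over a 100-cell occupancy map.
    rng = np.random.default_rng(0)
    occ_map = rng.uniform(0.0, 1.0, 100)       # cells near 0.5 have high entropy
    coverage_map = rng.uniform(0.0, 1.0, 100)
    candidates = [rng.choice(100, 20, replace=False) for _ in range(3)]
    print("chosen path:", best_path(candidates, occ_map, coverage_map))

In the actual system, the set of cells visible along a path and the balance between the two behaviours would be derived from the robot's camera field of view and the occupancy map maintained by the SLAM back end, rather than the toy values used here.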
Publications:
[1] Bonetto, E., Goldschmid, P., Pabst, M., Black, M. J., & Ahmad, A. (2022). iRotate: Active visual SLAM for omnidirectional robots. Robotics and Autonomous Systems. https://doi.org/10.1016/j.robot.2022.104102
[2] Bonetto, E., Goldschmid, P., Black, M. J., & Ahmad, A. (2021). Active Visual SLAM with Independently Rotating Camera. 2021 European Conference on Mobile Robots (ECMR), 1–8. https://doi.org/10.1109/ECMR50962.2021.9568791