Considering visual localization accuracy at planning time gives preference to robot motions that can be localized well and thus has the potential to improve vision-based navigation, especially in visually degraded environments. To integrate knowledge about localization accuracy into motion planning algorithms, a central task is to quantify the amount of information that an image taken at a 6-degree-of-freedom pose contributes to localization, which is often represented by the Fisher information. However, computing the Fisher information from a set of sparse landmarks (i.e., a point cloud), the most common map for visual localization, is inefficient: the cost scales linearly with the number of landmarks in the environment, and the computed Fisher information cannot be reused. To overcome these drawbacks, we propose the first dedicated map representation for evaluating the Fisher information of 6-degree-of-freedom visual localization for perception-aware motion planning. By carefully formulating the Fisher information and sensor visibility, we are able to separate the rotation-invariant component of the Fisher information and store it in a voxel grid, namely the Fisher information field. This step needs to be performed only once for a known environment. The Fisher information for an arbitrary pose can then be computed from the field in constant time, eliminating the costly iteration over all 3D landmarks at planning time. Experimental results show that the proposed Fisher information field can be used with different motion planning algorithms and is at least one order of magnitude faster than using the point cloud directly. Moreover, the proposed map representation is differentiable, which yields better performance than the point cloud when used in trajectory optimization algorithms.
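To illustrate the core trade-off the abstract describes, here is a minimal toy sketch (not the paper's exact 6-DoF factorization): the naive approach sums each landmark's Fisher information contribution at query time, which is O(N) in the number of landmarks, while a precomputed voxel grid answers queries in O(1). For simplicity, each bearing observation of a landmark contributes the standard positional information term (I - bb^T)/d^2 under unit-variance noise, and visibility/rotation handling is omitted; the class and function names are illustrative, not from the paper's code.

```python
import numpy as np

def fim_from_landmarks(t, landmarks):
    """Naive O(N) positional Fisher information at camera position t.

    Each bearing observation of a landmark p contributes
    (I - b b^T) / d^2, where b is the unit bearing from t to p and
    d the distance (toy model: unit-variance bearing noise, no
    visibility check, rotation-dependent terms omitted)."""
    F = np.zeros((3, 3))
    for p in landmarks:
        v = np.asarray(p, float) - np.asarray(t, float)
        d = np.linalg.norm(v)
        b = v / d
        F += (np.eye(3) - np.outer(b, b)) / d**2
    return F

class FisherInfoField:
    """Toy voxel grid caching the landmark sum so queries are O(1).

    Built once per known environment; afterwards the landmarks are
    no longer needed at planning time."""
    def __init__(self, landmarks, origin, voxel_size, dims):
        self.origin = np.asarray(origin, float)
        self.voxel_size = float(voxel_size)
        self.dims = tuple(dims)
        self.grid = np.zeros(self.dims + (3, 3))
        # One-time O(voxels * N) precomputation.
        for idx in np.ndindex(self.dims):
            center = self.origin + (np.array(idx) + 0.5) * self.voxel_size
            self.grid[idx] = fim_from_landmarks(center, landmarks)

    def query(self, t):
        """Constant-time lookup: return the cached FIM of the voxel
        containing position t (clamped to the grid bounds)."""
        rel = (np.asarray(t, float) - self.origin) // self.voxel_size
        idx = tuple(np.clip(rel.astype(int), 0, np.array(self.dims) - 1))
        return self.grid[idx]
```

A planner would then score candidate positions via `field.query(t)` (e.g., by the trace or smallest eigenvalue of the returned matrix) instead of re-iterating over the point cloud at every node expansion.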
0:00 Intro and main idea
0:33 Experiments overview
0:40 Simulation validation
1:16 Motion planning: experiment setup
1:30 Motion planning: RRT*
2:13 Motion planning: trajectory optimization
3:21 Building FIF incrementally from VIO output
Zichao Zhang, Davide Scaramuzza
Fisher Information Field: an Efficient and Differentiable Map for Perception-aware Planning. arXiv preprint, 2020.
Our research on active vision and exploration: http://rpg.ifi.uzh.ch/research_active_vision.html
Our research on visual-inertial odometry and SLAM: http://rpg.ifi.uzh.ch/research_vo.html
More about our research: http://rpg.ifi.uzh.ch/publications.html
Affiliations: The authors are with the Robotics and Perception Group, Dept. of Informatics, University of Zurich, and Dept. of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland. http://rpg.ifi.uzh.ch/