Conference: 18th IEEE/RAS International Conference on Humanoid Robots, Humanoids 2018.
Title: rxKinFu: Moving Volume KinectFusion for 3D Perception and Robotics
Authors: Dimitrios Kanoulas, Nikos G. Tsagarakis, and Marsette Vona
Abstract: KinectFusion is an impressive algorithm that was introduced in 2011 to simultaneously track the motion of a depth camera in 3D space and densely reconstruct the environment as a Truncated Signed Distance Function (TSDF) volume, in real time. In 2012, we introduced the Moving Volume KinectFusion method, which allows the volume/camera to move freely in space. In this work, we further develop the Moving Volume KinectFusion method (as rxKinFu) to better fit robotic and perception applications, especially locomotion and manipulation tasks. We describe methods to raycast point clouds from the volume using virtual cameras, and to use these point clouds for heightmap generation (e.g., useful for locomotion) or dense object point cloud extraction (e.g., useful for manipulation). Moreover, we present different methods for keeping the camera fixed with respect to the moving volume, also fusing IMU data and the camera heading/velocity estimates. Lastly, we integrate and demonstrate rxKinFu on the mini-bipedal robot RPBP, our wheeled quadrupedal robot CENTAURO, and the newly developed full-size humanoid robot COMAN+. We release the code as an open-source package using the Robot Operating System (ROS) and the Point Cloud Library (PCL).
Paper PDF: goo.gl/GuJH2p
Code: github.com/RoViL-Team/rxkinfu
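The abstract mentions turning raycast point clouds into heightmaps for locomotion. As a rough illustration of that idea only (this is not rxKinFu's actual implementation; the `Point` struct, grid parameters, and `heightmap` function are hypothetical), one can bin 3D points into a 2D grid over the ground plane and keep the highest z value per cell:

```cpp
#include <cmath>
#include <limits>
#include <vector>

// Hypothetical minimal heightmap builder: project points (x, y, z) onto an
// nx-by-ny grid of square cells over the x-y plane, keeping the maximum z
// seen in each cell. Empty cells are left as NaN.
struct Point { float x, y, z; };

std::vector<float> heightmap(const std::vector<Point>& pts,
                             float min_x, float min_y,
                             float cell, int nx, int ny) {
  std::vector<float> h(nx * ny, std::numeric_limits<float>::quiet_NaN());
  for (const Point& p : pts) {
    // Cell indices of this point in the grid.
    int i = static_cast<int>((p.x - min_x) / cell);
    int j = static_cast<int>((p.y - min_y) / cell);
    if (i < 0 || i >= nx || j < 0 || j >= ny) continue;  // outside the grid
    float& cur = h[j * nx + i];
    if (std::isnan(cur) || p.z > cur) cur = p.z;  // keep the highest point
  }
  return h;
}
```

In a robotics pipeline the input points would come from raycasting the TSDF volume with a downward-looking virtual camera, as described in the paper; the max-z reduction is just one simple choice of per-cell statistic.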