A. Amor-Martinez, A. Santamaria-Navarro, F. Herrero, A. Ruiz and A. Sanfeliu
IEEE International Symposium on Safety, Security and Rescue Robotics, pp. 15-20, Lausanne, Switzerland, 2016
We present a featureless pose estimation method that, in contrast to current Perspective-n-Point (PnP) approaches, does not require n point correspondences to obtain the camera pose, allowing pose estimation from natural shapes that do not necessarily have distinguishable features such as corners or intersecting edges. Instead of using n correspondences (e.g., extracted with a feature detector), we use the raw polygonal representation of the observed shape and estimate the pose directly in the pose-space of the camera. Compared with a general PnP method, our approach requires neither n point correspondences nor a priori knowledge of the object model (except its scale), which is registered with a picture taken from a known robot pose. Moreover, we achieve higher precision because all the information in the shape contour is used: the pose is found by minimizing the area between the projected and the observed shape contours. To emphasize that no point correspondences between the projected template and the observed contour are used, we call the method Planar P∅P. The method is demonstrated both in simulation and in a real application consisting of UAV localization, where comparisons with a precise ground truth are provided.
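The core idea, minimizing the area between the projected template contour and the observed contour over camera poses, can be illustrated with a small numerical sketch. This is not the authors' implementation: it assumes a convex planar template in the z = 0 plane, a pinhole camera with identity rotation and a made-up focal length, and it optimizes only the camera translation with a generic Nelder-Mead solver. The overlap area is computed exactly for convex contours via Sutherland-Hodgman clipping and the shoelace formula.

```python
# Minimal sketch of featureless pose estimation by contour-area
# minimization (an illustration of the Planar P0P idea, NOT the
# authors' code). Assumptions: convex planar template in z = 0,
# identity rotation, known focal length; only translation is solved.
import numpy as np
from scipy.optimize import minimize

FOCAL = 500.0  # assumed focal length in pixels (illustrative value)

def project(pts3d, t):
    """Pinhole projection of the planar template under translation t."""
    p = pts3d + t
    return FOCAL * p[:, :2] / p[:, 2:3]

def shoelace_area(poly):
    """Area of a simple polygon given as an (N, 2) array."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def clip_convex(subject, clipper):
    """Sutherland-Hodgman clipping: intersection of two CCW convex polygons."""
    def inside(p, a, b):  # p on the left of directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0.0
    def line_intersect(s, e, a, b):  # segment s-e with infinite line a-b
        dc = (a[0]-b[0], a[1]-b[1]); dp = (s[0]-e[0], s[1]-e[1])
        n1 = a[0]*b[1] - a[1]*b[0]; n2 = s[0]*e[1] - s[1]*e[0]
        n3 = 1.0 / (dc[0]*dp[1] - dc[1]*dp[0])
        return ((n1*dp[0] - n2*dc[0]) * n3, (n1*dp[1] - n2*dc[1]) * n3)
    out = [tuple(p) for p in subject]
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        inp, out = out, []
        if not inp:
            break
        s = inp[-1]
        for e in inp:
            if inside(e, a, b):
                if not inside(s, a, b):
                    out.append(line_intersect(s, e, a, b))
                out.append(e)
            elif inside(s, a, b):
                out.append(line_intersect(s, e, a, b))
            s = e
    return np.array(out) if out else np.empty((0, 2))

def area_between(p, q):
    """Symmetric-difference area between two convex contours."""
    inter = clip_convex(p, q)
    a_int = shoelace_area(inter) if len(inter) >= 3 else 0.0
    return shoelace_area(p) + shoelace_area(q) - 2.0 * a_int

# Planar template (unit square, CCW) and a synthetic "observed" contour
# rendered from a hidden ground-truth translation t_true.
template = np.array([[-0.5, -0.5, 0.0], [0.5, -0.5, 0.0],
                     [0.5, 0.5, 0.0], [-0.5, 0.5, 0.0]])
t_true = np.array([0.10, -0.05, 2.0])
observed = project(template, t_true)

# Recover the pose by minimizing the area between the projected template
# and the observed contour -- no point correspondences are used.
res = minimize(lambda t: area_between(project(template, t), observed),
               x0=np.array([0.0, 0.0, 2.1]), method="Nelder-Mead")
print(res.x)  # ends up close to t_true
```

Note the objective needs no per-vertex matching: only the two contours as regions enter the cost, which is what allows arbitrary closed natural shapes. A real implementation would parametrize full 6-DoF pose and handle non-convex contours with a general polygon-clipping routine.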
Natural shapes (first row) and a demonstration of an augmented reality application (second row). Because we do not use correspondences, we can test our method with any natural (closed) shape we might find. The method only requires an initial frontal view of the shapes.