AUTHOR=Ruiz Eduardo, Mayol-Cuevas Walterio TITLE=Geometric Affordance Perception: Leveraging Deep 3D Saliency With the Interaction Tensor JOURNAL=Frontiers in Neurorobotics VOLUME=14 YEAR=2020 URL=https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2020.00045 DOI=10.3389/fnbot.2020.00045 ISSN=1662-5218 ABSTRACT=
Agents that need to act on their surroundings can benefit significantly from perceiving their interaction possibilities, or affordances. In this paper we combine the benefits of the Interaction Tensor, a straightforward geometrical representation that captures multiple object-scene interactions, with deep learning saliency for fast parsing of affordances in the environment. Our approach works with visually perceived 3D point clouds and enables querying a 3D scene for locations that support affordances such as sitting or riding, as well as interactions with everyday objects, like where to hang an umbrella or place a mug. Crucially, the interaction description exhibits one-shot generalization. Experiments with numerous synthetic and real RGB-D scenes, validated by human subjects, show that the representation enables the prediction of affordance candidate locations in novel environments from a single training example. The approach also supports a highly parallelizable, multiple-affordance representation and runs at fast rates. The combination of the deep neural network that learns to estimate scene saliency with the one-shot geometric representation aligns well with the expectation that computational models for affordance estimation should be perceptually direct and economical.
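To make the abstract's pipeline concrete, the sketch below illustrates one way such a query could be organized: a per-point saliency score prunes the scene to promising locations, and a one-shot geometric descriptor built from a single example is then matched against the local geometry at each surviving point. This is a minimal illustration under assumptions, not the authors' implementation; the functions `query_affordance` and `match_descriptor`, the keypoint-offset descriptor standing in for the Interaction Tensor, and all parameter values are hypothetical.

```python
# Illustrative sketch only: a saliency score prunes the point cloud, then a
# one-shot descriptor (a simplified stand-in for the Interaction Tensor,
# represented here as keypoint offsets from a single training example) is
# matched against the local scene geometry. All names are hypothetical.
import numpy as np


def match_descriptor(scene: np.ndarray, test_point: np.ndarray,
                     keypoint_offsets: np.ndarray, tol: float = 0.05) -> float:
    """Score one candidate location as the fraction of descriptor keypoints
    that find a scene point within `tol` metres of their expected position."""
    expected = test_point + keypoint_offsets              # (K, 3) expected keypoint positions
    # Brute-force nearest-neighbour distance from each expected keypoint to the scene.
    dists = np.linalg.norm(scene[None, :, :] - expected[:, None, :], axis=2).min(axis=1)
    return float((dists < tol).mean())


def query_affordance(scene: np.ndarray, saliency: np.ndarray,
                     keypoint_offsets: np.ndarray,
                     saliency_thr: float = 0.5, score_thr: float = 0.5) -> np.ndarray:
    """Return candidate affordance locations: salient scene points whose local
    geometry agrees with the one-shot descriptor."""
    candidates = scene[saliency > saliency_thr]            # deep saliency narrows the search
    scores = np.array([match_descriptor(scene, p, keypoint_offsets) for p in candidates])
    return candidates[scores > score_thr]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.uniform(-1.0, 1.0, size=(2000, 3))         # stand-in for an RGB-D point cloud
    saliency = rng.uniform(0.0, 1.0, size=len(scene))       # stand-in for the network's per-point output
    offsets = rng.uniform(-0.1, 0.1, size=(32, 3))          # descriptor from a single training example
    locations = query_affordance(scene, saliency, offsets)
    print(f"{len(locations)} candidate affordance locations")
```

Because each candidate is scored independently against the descriptor, this kind of query parallelizes naturally across locations and across affordances, which is consistent with the multiple-affordance, fast-rate operation the abstract describes.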