ORIGINAL RESEARCH article

Front. Robot. AI
Sec. Robot Vision and Artificial Perception
Volume 11 - 2024 | doi: 10.3389/frobt.2024.1431826
This article is part of the Research Topic Computer Vision Mechanisms for Resource-Constrained Robotics Applications.

Minimal Perception: Enabling Autonomy in Resource-Constrained Robots

Provisionally accepted
Chahat Deep Singh*, Botao He, Cornelia Fermuller, Christopher Metzler, Yiannis Aloimonos
  • University of Maryland, College Park, College Park, United States

The final, formatted version of the article will be published soon.

    The rapidly increasing capabilities of autonomous mobile robots promise to make them ubiquitous in the coming decade. These robots will continue to enhance efficiency and safety in novel applications such as disaster management, environmental monitoring, bridge inspection, and agricultural inspection. To operate autonomously without constant human intervention, even in remote or hazardous areas, robots must sense, process, and interpret environmental data using only onboard sensing and computation. This capability is made possible by advancements in perception algorithms, allowing these robots to rely primarily on their perception capabilities for navigation tasks. However, the autonomy of tiny robots is limited mainly by their sensors, memory, and computing hardware, owing to strict size, weight, area, and power constraints; the core bottleneck is real-time perception under these resource constraints. To enable autonomy in robots less than 100 mm in body length, we draw inspiration from tiny organisms such as insects and hummingbirds, which exhibit sophisticated perception, navigation, and survival abilities despite their minimal sensory and neural systems. This work aims to provide insights into designing a compact and efficient minimal perception framework for tiny autonomous robots, spanning from the higher cognitive level down to the sensor level.

    Keywords: Frugal AI, Minimal AI, resource-constrained, autonomy, navigation, depth estimation, optical flow

    Received: 13 May 2024; Accepted: 15 Aug 2024.

    Copyright: © 2024 Singh, He, Fermuller, Metzler and Aloimonos. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Chahat Deep Singh, University of Maryland, College Park, College Park, United States

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.