About this Research Topic
Our Research Topic is dedicated to advancing data-efficient, computation-efficient, and knowledge-efficient AI approaches. We aim to disseminate recent research in these fields and to discuss future directions for resource-limited AI. In doing so, we hope to provide fresh insights into the challenge of limited resources in AI and to contribute to the development of real-world AI applications.
Our objective is to tackle the challenge of implementing machine learning effectively when data, model capacity, and prior knowledge are limited. This challenge has prompted researchers from diverse fields to explore efficient learning techniques along three main dimensions:
1) Data-Efficient Learning, involving methods like data compression.
2) Model-Efficient Learning, including model compression.
3) Knowledge-Efficient Learning, exemplified by zero/few-shot learning.
We invite experts from these and related fields to collaborate and propose innovative methodologies. We are particularly interested in ideas that combine several of the themes above, for example, integrated data- and model-efficient learning approaches that use compact networks to reduce the volume of training data required. Such methods have the potential to streamline machine learning pipelines and their associated tasks across areas such as computer vision and natural language processing. By addressing these challenges under limited resources, we aim to make meaningful contributions to the research community.
This Research Topic focuses on achieving flexible and efficient learning, a key aspect of artificial intelligence. Our main goal is to help AI models overcome their current reliance on extensive data and computational resources so that they can generalize effectively in varied and unpredictable situations. Authors are strongly encouraged to explore topics including, but not limited to, zero/few-shot learning, domain adaptation, resource-efficient architecture design, and data compression. We welcome contributions spanning novel methodologies, empirical studies, theoretical advances, benchmark proposals, and practical use cases.
Through these efforts, we aim to chart a new direction in AI research, fostering models that excel in generalization and performance even when confronted with resource constraints.
Keywords: Artificial intelligence, Deep learning, Computer vision, Model compression, Data compression, Domain adaptation
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.