About this Research Topic
This Research Topic welcomes manuscripts on the following themes:
● Annotation-efficient AI: Manual data annotation is a labor-intensive and time-consuming process that often poses challenges in terms of scalability, cost, and subjectivity. This topic aims to explore innovative approaches and techniques for developing annotation-efficient AI models that leverage minimal data annotation while maintaining high performance and generalization, such as active learning and semi-, weakly, un-, and self-supervised learning.
● Training-efficient AI: Training AI models can be computationally intensive and time-consuming, hindering their widespread adoption and scalability. This topic aims to explore innovative approaches and techniques for developing training-efficient AI models that optimize the training process, reduce computational requirements, and enhance overall performance and scalability, such as model compression, transfer learning, distributed training, and hardware acceleration.
● Inference-efficient AI: Inference efficiency focuses on optimizing the deployment and runtime performance of AI models. This involves techniques such as model quantization, model pruning, hardware acceleration, and algorithmic optimizations to ensure fast and resource-efficient inference. The objective is to reduce the computational requirements and latency of AI models, enabling real-time and efficient predictions.
● Communication-efficient AI: Communication efficiency focuses on reducing the communication overhead and bandwidth requirements in distributed or federated learning settings. Techniques such as model aggregation, compression, and decentralized learning aim to minimize the amount of data exchanged between devices while maintaining model accuracy and privacy. This enables efficient collaborative learning across distributed networks or in privacy-sensitive settings.
● Domain-efficient AI: Domain-efficient AI refers to the ability of a deep learning model to quickly adapt and generalize well to new domains or tasks. It is particularly important in scenarios where collecting large amounts of labeled data is time-consuming, expensive, or impractical. Relevant techniques include transfer learning, domain adaptation, meta-learning, few-shot learning, and active learning. These techniques are critical to the deployment of AI models for real-world ophthalmic disease screening and assessment, where acquiring labeled data is challenging.
Keywords: Ophthalmic imaging, optical coherence tomography, OCT angiography (OCTA), fundus photography, fluorescein angiography
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.