About this Research Topic
However, challenges remain in effectively integrating multi-modal data with AI and in extracting meaningful knowledge from such data. Recently, Large Language Models (LLMs) such as ChatGPT have shown great success in understanding textual data, yet few studies have examined how these LLMs can be effectively coupled with other AI models trained on different data types (e.g., images or videos).
This call for papers invites submissions on LLMs and multi-modal (cross-modal) learning (e.g., medical images and clinical reports), with a specific focus on their applications in medicine. We encourage authors to submit original research as well as high-quality reviews and perspectives covering topics including, but not limited to:
• Applications of LLMs and multi-modal (cross) learning in medicine
• Novel use of LLMs and multi-modal data for structuring and interpreting electronic health records
• Integrated cross-modal learning systems for comprehensive medical diagnostics
• Cross-modal learning AI approaches for radiology and pathology
• Novel in-context learning and prompt engineering for LLMs in medicine
• Advances in machine learning algorithms for medical imaging and language analysis
Keywords: Large Language Models, Cross-modal Learning, Medical Image Analysis, Multi-modal Learning, Prompt Engineering
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.