Generative Artificial Intelligence (GenAI) is a relatively new class of AI capable of generating text, images, videos, and other data. GenAI has recently gained widespread attention through the success of chatbots such as ChatGPT and image generators such as DALL-E, and it is already having a large impact on industries such as gaming, film, literature, and music. In healthcare, we are only at the start of the revolution that GenAI might bring. GenAI can help healthcare providers by writing summaries of radiology and pathology reports, powering patient-facing chatbots equipped with the latest medical knowledge, creating synthetic data based on privacy-sensitive real data, and much more.
In this Research Topic, we aim to create an overview of applications and evaluation/validation of GenAI in healthcare, using, for example, large language/vision models (LLMs/LVMs), generative adversarial networks (GANs), and retrieval-augmented generation (RAG). We aim to present examples of successful implementations and evaluations of GenAI in healthcare and its path towards deployment, but, since this area of research is developing fast, we also look ahead to its possibilities and requirements in the (near) future. As healthcare can be considered a "high-risk" industry (a patient's life may depend on the output of an AI algorithm), we also welcome manuscripts that discuss risks and pitfalls of GenAI in healthcare and possible solutions to these issues.
We welcome all papers related to the use, application, and evaluation of GenAI in healthcare, including, for example:
- Large Language Models (LLMs) fine-tuned for use in healthcare (e.g., chatbots, summarizing reports)
- Large Vision Models (LVMs) applied to medical images
- Generative Adversarial Networks (GANs) to create synthetic medical data
- Retrieval-Augmented Generation (RAG) to enable retrieval of information from medical databases
- Multimodal use of GenAI in healthcare
- Intrinsic and extrinsic evaluation of GenAI in healthcare, including automated and human evaluation/validation
- Responsible AI: discussing potential risks of GenAI in healthcare (related to ethics, privacy, bias, explainability, etc.) and how to overcome them
- Regulatory and legislative aspects of GenAI in healthcare
Keywords:
Generative Artificial Intelligence, Large Language Models, Large Vision Models, Generative Adversarial Networks, Retrieval-Augmented Generation, Multimodality, Healthcare, Medicine
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.