ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. Medicine and Public Health
Volume 7 - 2024
doi: 10.3389/frai.2024.1493716
This article is part of the Research Topic GenAI in Healthcare: Technologies, Applications and Evaluation.
Fine-Tuning a Local LLaMA-3 Large Language Model for Automated Privacy-Preserving Physician Letter Generation in Radiation Oncology
Provisionally accepted
1 Department of Radiation Oncology, University Hospital Erlangen, Erlangen, Germany
2 Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander University Erlangen-Nuremberg, Erlangen, Bavaria, Germany
Generating physician letters is a time-consuming task in daily clinical practice. This study investigates local fine-tuning of large language models (LLMs), specifically LLaMA models, for privacy-preserving physician letter generation in the field of radiation oncology. Our findings demonstrate that base LLaMA models, without fine-tuning, are inadequate for effectively generating physician letters. The QLoRA algorithm provides an efficient method for local, intra-institutional fine-tuning of LLMs with limited computational resources (i.e., a single 48 GB GPU workstation within the hospital). The fine-tuned LLM successfully learns radiation oncology-specific information and generates physician letters in an institution-specific style. ROUGE scores of the generated summary reports highlight the superiority of the 8B LLaMA-3 model over the 13B LLaMA-2 model. Further multidimensional physician evaluations of 10 cases reveal that, although the fine-tuned LLaMA-3 model has limited capacity to generate content beyond the provided input data, it successfully generates salutations, diagnoses and treatment histories, recommendations for further treatment, and planned schedules. Overall, the clinical experts rated the clinical benefit highly (average score of 3.4 on a 4-point scale). With careful physician review and correction, automated LLM-based physician letter generation therefore has significant practical value.
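The abstract reports ROUGE scores to compare the generated summary reports against physician-written references. As a minimal illustration of the underlying metric (not the authors' evaluation pipeline, which is not detailed here), the following pure-Python sketch computes a ROUGE-1 F1 score as the unigram overlap between a reference and a candidate text; the function name `rouge1_f1` and the example sentences are illustrative assumptions.

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall
    between a candidate summary and a reference summary.
    Illustrative only; published evaluations typically use a
    library such as rouge-score with stemming and tokenization."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: each unigram counts at most as often as in the reference.
    overlap = sum((ref_counts & cand_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical reference vs. generated sentence (not from the study data):
ref = "the patient received radiotherapy to the left breast"
cand = "the patient received radiotherapy of the left breast"
print(round(rouge1_f1(ref, cand), 3))  # 7 of 8 unigrams overlap -> 0.875
```

In practice, ROUGE-2 (bigram overlap) and ROUGE-L (longest common subsequence) are usually reported alongside ROUGE-1 when benchmarking summary generation, as in comparisons such as the 8B LLaMA-3 versus 13B LLaMA-2 result described above.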
Keywords: radiation oncology, ChatGPT, data privacy, parameter-efficient fine-tuning, LLaMA, fine-tuning, physician letter
Received: 09 Sep 2024; Accepted: 18 Dec 2024.
Copyright: © 2024 Hou, Bert, Gomaa, Lahmer, Höfler, Weissmann, Voigt, Schubert, Schmitter, Depardon, Semrau, Maier, Fietkau, Huang and Putz. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Yixing Huang, Department of Radiation Oncology, University Hospital Erlangen, Erlangen, Germany
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.