Large language models (LLMs), such as ChatGPT, have seen widespread adoption due to the public availability of both simple user interfaces and open-source models, as well as impressive performance growth across a wide variety of tasks, demonstrating surprising capabilities and flexibility. However, limitations and unanswered questions remain regarding the implementation of LLMs. For example, LLMs can produce hallucinations (i.e., incorrect responses that are presented as factual and accurate), which can mislead users and have unforeseen consequences, and more powerful models tend to require large, specialized computing infrastructure. In addition, LLMs may perform worse in languages other than the one on which they were primarily trained, limiting their applicability and equity. The availability of this technology to the public offers significant benefits, including decision support, upskilling, and individual growth. However, online LLMs controlled by private enterprises raise concerns regarding the privacy and security of sensitive user information. For instance, patient data provided in user prompts could be collected by the model's owner and potentially used for unintended purposes. There are also concerns regarding the governance and control of the data, the model, and its outputs. This is particularly critical when LLMs are used to provide health or medical advice without the oversight of trained professionals, which could put patients at risk.
The goal of this Research Topic is to foster a collaborative exploration of the ethical dimensions of deploying large language models (LLMs) across various digital health domains. Our objective is to proactively identify and address ethical challenges, promote equitable practices, and enhance the applicability of LLMs. By bringing together diverse perspectives, we aim to highlight current ethical issues, disseminate best practices, and develop actionable strategies that mitigate risks and empower stakeholders to harness the full potential of advanced LLM technologies.
Through this initiative, we seek to catalyze a constructive dialogue among researchers, practitioners, and policymakers. Our ultimate goal is to transform ethical considerations into opportunities for innovation and inclusive growth in the use of LLMs in digital health. We encourage contributions that propose viable solutions, share successful case studies, and outline forward-thinking policies that will enhance the ethical deployment of these technologies. We welcome different types of studies tackling ethical questions regarding the use of LLMs, including but not limited to: security and privacy of data, accessibility, equity, fairness, bias, hallucinations, model performance, and applications in specific domains (e.g., healthcare).
Keywords:
Large Language Models (LLMs), Artificial Intelligence, Natural Language, Ethics, ChatGPT, Fairness, Bias, Security, Privacy, Equity, Trustworthiness
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.