ORIGINAL RESEARCH article
Front. Med., 22 January 2025
Sec. Healthcare Professions Education
Volume 12 - 2025 | https://doi.org/10.3389/fmed.2025.1529305
This article is part of the Research Topic "Large Language Models for Medical Applications."
Background: Artificial Intelligence (AI) has made a strong entrance into fields such as healthcare, but medical degree curricula have not yet been adapted to the changes that adopting these tools entails. It is important to understand the future needs of students to provide the most comprehensive education possible.
Objective: The aim of this teaching improvement project is to describe the knowledge, attitudes, and perspectives of medical students regarding the application of AI and chatbots with patients, also considering their ethical perceptions.
Methods: Descriptive cross-sectional analysis in which the participants were students enrolled in the subject “Preventive Medicine, Public Health and Applied Statistics” during the second semester of the 2023/24 academic year, corresponding to the fifth year of the Degree in Medicine at the University of Barcelona. The students were invited to complete a specific questionnaire anonymously and voluntarily, responding on their mobile devices by scanning a QR code projected on the classroom screen; the survey was administered using Microsoft Forms.
Results: Out of the 61 students enrolled in the subject, 34 (56%) attended the seminar, of whom 29 (85%) completed the questionnaire correctly. Of those completing the questionnaire, 20 (69%) reported little to no prior use of chatbots for medical information, 19 (66%) expressed a strong interest in the practical applications of AI in medicine, 14 (48%) indicated elevated concern about the ethical aspects, 17 (59%) acknowledged potential biases in these tools, and 17 (59%) expressed at least moderate confidence in chatbot-provided information. Notably, 24 (83%) agreed that acquiring AI-related knowledge will be essential to effectively perform their future professional roles.
Conclusion: Surveyed medical students demonstrated limited exposure to AI-based tools and showed moderate awareness of ethical concerns, but they recognized the importance of AI knowledge for their careers, emphasizing the need for AI integration in medical education.
Digital transformation in the healthcare sector is driving a deep reconfiguration of medical practice, with Artificial Intelligence (AI) emerging as a key factor in addressing current and future healthcare challenges (1). AI-based tools, such as machine learning algorithms and large-scale data analysis, have already demonstrated their capacity to improve diagnostic accuracy and accelerate the early identification of diseases, resulting in more timely interventions and more favorable patient outcomes (2, 3). Additionally, the increasing digitization of information and the incorporation of decision support systems optimize workflows, reduce administrative burdens, and facilitate access to care, even in resource-limited settings (4).
Within this technological ecosystem, chatbots (AI-driven conversational assistants) have positioned themselves as promising tools to enhance interaction between healthcare professionals and patients (5, 6). These systems can provide immediate responses to basic inquiries, offer reliable information on symptoms and treatments, and promote health education, thereby expanding access to healthcare services beyond geographical and temporal limitations (7, 8). However, the implementation of these technologies is not without challenges, particularly concerning ethical issues and the quality of information provided (9).
AI in healthcare presents ethical dilemmas that encompass information integrity, data privacy, and accountability in algorithm-mediated clinical scenarios (10–12). Furthermore, the potential for biases, informational “hallucinations” (responses that appear valid but are unfounded), and the possible erosion of the doctor-patient relationship underscore the need to address these technologies with prudence and rigor (13–15). In this regard, medical education plays a central role: preparing future healthcare professionals to understand, adopt, and critically evaluate AI tools is essential to ensure their ethical, effective, and patient-centered integration into clinical practice (16, 17).
Although various studies have explored the general perceptions of students and healthcare professionals regarding AI, there remains a gap in the literature concerning the specific understanding that advanced medical students have about the use of chatbots in clinical settings (18). This population is at a critical juncture: on the brink of entering professional practice, their perceptions, concerns, and expectations provide valuable insights into how curricula and training strategies should be shaped to meet the demands of an imminent future marked by the gradual inclusion of AI in healthcare delivery (11, 14, 18, 19). Understanding their attitudes, knowledge levels, and ethical concerns offers a solid foundation for designing curricula that balance technical training with ethical reflection, promoting responsible and informed use of AI.
In this context, the present teaching improvement project aims to describe the knowledge, attitudes, and perspectives of medical students regarding the application of AI and the use of chatbots in the healthcare field, with particular attention to their ethical perceptions. This approach seeks to generate an initial framework to guide the future inclusion of AI-related content in medical education, ensuring that tomorrow’s physicians are better prepared to integrate these tools into their clinical practice competently and ethically.
A descriptive cross-sectional study was conducted with the aim of obtaining an initial understanding of medical students’ perceptions and attitudes regarding the integration of AI-based chatbots in the healthcare sector.
The target population comprised students enrolled in the course “Preventive Medicine, Public Health, and Applied Statistics,” corresponding to the fifth year of the Medicine Degree at the University of Barcelona, during the second semester of the 2023/24 academic year. A non-probabilistic sampling method was employed, selecting participants who attended a theoretical seminar on the use of chatbots in the medical field and who voluntarily agreed to complete the questionnaire.
The sample size was determined by seminar attendance and voluntary participation in the survey. Given the exploratory and preliminary nature of the study, no formal sample size calculation was performed. The sample included students who, prior to the seminar, scanned a QR code and completed the online questionnaire using the Microsoft Forms application.
A quantitative questionnaire was designed, structured into three main sections with a total of 14 items. The questionnaire was intended to be simple, providing an initial approximation of students’ perceptions. Each question featured a closed-response format (predefined options or Likert scales ranging from 1 to 5). The three dimensions investigated were:
• Attitudes and Prior Knowledge (3 items): Assesses previous familiarity with AI tools and chatbots in medicine.
• Ethical Perceptions (3 items): Explores ethical concerns and the level of trust in information provided by chatbots.
• Future Perspectives (8 items): Investigates the future relevance of AI knowledge for medical practice and professional training.
Q1. Have you ever used chatbots to obtain medical information? 1-Never, 5-Frequently.
Q2. Are you familiar with current artificial intelligence tools applied to medicine, such as AI-assisted diagnosis or therapeutic recommendations? 1 - not at all, 5 - very much.
Q11. I am interested in the practical aspects of AI in medicine. 1-very little, 5-a lot.
Q8. Are you concerned about the ethics of using chatbots in medicine? 1 “not very concerned” and 5 “very concerned.”
Q9. On a scale from 1 to 5, where 1 is “not very confident” and 5 is “very confident,” how much trust do you have in the information provided by chatbots on medical topics?
Q10. Are you concerned about potential biases in such tools? 1-very little, 5-a lot.
Q3. The use of AI in healthcare can positively change medicine. 1-Strongly disagree, 5-Strongly agree.
Q4. The use of AI can negatively affect the doctor-patient relationship. 1-Strongly disagree, 5-Strongly agree.
Q5. Doctors will need to know about AI-based tools to perform their jobs in the near future. 1-Strongly disagree, 5-Strongly agree.
Q6. AI should be part of medical education. 1-Strongly disagree, 5-Strongly agree.
Q7. Practical content on the use of AI-based tools in medicine should be introduced in medical degree programs. 1-Strongly disagree, 5-Strongly agree.
Q12. The use of such tools will lead to a dehumanization of medicine. 1-Strongly disagree, 5-Strongly agree.
Q13. The use of such tools will create dependency among medical staff. 1-Strongly disagree, 5-Strongly agree.
Q14. The imposition of these new technologies may influence the choice of specialization for medical personnel. 1-Strongly disagree, 5-Strongly agree.
This instrument was applied in its second iteration, following a pilot test conducted with 14 students in a workshop. This pilot allowed the questions to be refined and agreed upon with expert faculty members to enhance clarity and relevance.
Given the preliminary and exploratory nature of the study, comprehensive psychometric analyses (e.g., formal internal reliability tests or construct validity assessments) were not performed. However, the questionnaire underwent review by expert faculty in the fields of preventive medicine and public health, as well as medical education, to ensure clarity, internal consistency, and item relevance. Future research is recommended to formally validate the instrument, including conducting more extensive pilot tests and appropriate psychometric analyses to strengthen the questionnaire’s reliability and validity.
Prior to the commencement of the theoretical seminar, students were invited to complete the questionnaire anonymously and voluntarily. Participation involved scanning a QR code projected in the classroom and responding to the questionnaire on their personal mobile devices via Microsoft Forms. To prevent duplicate responses, a time limit was set for completing the questionnaire. No personal, health-related, or sensitive data were collected. Participants were informed about the confidentiality of their responses and their right to abstain from answering or to withdraw from the survey at any time without any consequences.
Data analysis was structured according to the three sections of the questionnaire: Attitudes and Prior Knowledge, Ethical Perceptions, and Future Perspectives. The following techniques were employed:
Relative frequency calculations were utilized to characterize responses within each section of the questionnaire. This provided a quantitative overview of participants’ knowledge and opinions on AI and chatbots prior to their exposure to the theoretical seminar.
Horizontal bar charts were employed to graphically represent the results, facilitating visual comparison of response distributions on a scale of 1 to 5. This type of visualization aids in quickly identifying trends and patterns within the collected data.
Results from each section were synthesized to present a comprehensive conclusion, analyzing medical students’ perspectives on the integration of AI-based chatbots in healthcare. This approach prioritized quantitative description while contextualizing participants’ views beyond the raw numbers.
As an exploratory study, complex inferential methods or systematic evidence synthesis were not employed, limiting the analysis to basic quantitative description and the identification of general patterns in students’ perceptions.
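As an illustration of the descriptive analysis outlined above, relative frequencies per Likert score and a horizontal bar chart can be produced with a few lines of code. This is a minimal sketch using invented example responses (not the study data), and it assumes pandas and matplotlib are available; the item names are shorthand for the questionnaire items:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend, no display required
import matplotlib.pyplot as plt

# Hypothetical Likert responses (1-5) for three items; NOT the study data.
responses = pd.DataFrame({
    "Q1_used_chatbots":  [1, 2, 1, 3, 2, 1, 4, 2, 1, 5],
    "Q8_ethics_concern": [4, 5, 3, 4, 2, 5, 4, 3, 4, 5],
    "Q11_interest":      [5, 4, 4, 3, 5, 4, 5, 2, 4, 5],
})

# Relative frequency of each score (1-5) per item, as proportions.
freq = (responses.apply(lambda col: col.value_counts(normalize=True))
                 .reindex(range(1, 6))   # ensure all five scores appear
                 .fillna(0.0))

# Horizontal stacked bar chart comparing score distributions across items.
freq.T.plot(kind="barh", stacked=True, figsize=(8, 3))
plt.xlabel("Relative frequency")
plt.tight_layout()
plt.savefig("likert_distribution.png")
```

Each row of `freq` is a score (1 to 5) and each column an item, so the stacked horizontal bars directly mirror the 1-to-5 response distributions described in the figures.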
A total of 61 students were enrolled in the course “Preventive Medicine, Public Health, and Applied Statistics” during the second semester of the 2023/24 academic year. Of these, 34 (56%) students attended the theoretical seminar on the use of chatbots in the medical field, and 29 (85%) of them fully completed the questionnaire. The results are organized according to the three dimensions outlined in the study’s objective: initial knowledge and attitudes, ethical perceptions, and future perspectives on the integration of AI in clinical practice.
This dimension aimed to describe the initial level of familiarity with AI tools and chatbots, as well as the interest in their application. The results indicated a low degree of prior exposure to these technologies:
Previous Use of Chatbots: 20 (69%) responses scored below 3 on a scale of 1 to 5, reflecting little to no experience in using chatbots to obtain medical information.
Knowledge of AI Tools in Clinical Settings: 23 (79%) responses also scored below 3, suggesting limited knowledge of specific AI applications in medicine.
Despite this lack of familiarity, a notable interest in the practical applications of AI in the medical field emerged, with 19 (66%) scores exceeding 3. This indicates a positive attitude towards acquiring knowledge and skills related to these tools (Figure 1).
Figure 1. Graphic representation of the answers obtained for the questions on “Attitudes and Prior knowledge” through relative frequencies (1, 2, 11).
This section explored concerns regarding the reliability, biases, and ethical implications of using AI-based chatbots in healthcare settings. The results revealed a significant level of concern:
Ethics of Using Chatbots: 14 (48%) participants rated above 3, indicating concerns about the moral and deontological implications of integrating these tools into medical practice.
Potential Biases: 17 (59%) expressed concern (scores >3) about the existence of biases, suggesting that students are aware of the risk of partiality in the recommendations or information provided by AI tools.
Trust in Information Provided by Chatbots: Regarding the accuracy of the content supplied by chatbots, 17 (59%) scored ≥3, revealing moderate trust that is nevertheless tempered by the previously mentioned ethical doubts (Figure 2).
Figure 2. Graphic representation of the answers obtained for the questions on “Ethical concern” through relative frequencies (8–10).
The final section focused on opinions about the long-term impact of AI in medicine, including its effect on clinical practice, the training of future doctors, and the doctor-patient relationship. The findings suggest that students anticipate a significant change in their professional practice:
Positive Impact on Medicine: 100% of respondents rated ≥3, believing that AI can favorably transform medicine.
Educational Needs: 24 (83%) believe that doctors will require knowledge of AI to perform their duties effectively (≥3), and 26 (90%) consider that the medical curriculum should include AI (≥3), as well as practical content on its use (≥3).
Concerns about the Doctor-Patient Relationship: 25 (86%) perceive that the use of AI could negatively affect this relationship (≥3), and 19 (66%) believe it could contribute to the dehumanization of healthcare (≥3). Additionally, 22 (76%) fear the development of dependence on these tools (≥3).
Influence on Specialty Choice: 24 (83%) consider that the imposition of new technologies, such as AI, could influence their future decisions regarding medical specialization (≥3).
Overall, these results demonstrate that while students have limited prior contact with AI tools, they show a growing interest in learning and integrating them. They recognize the potential of AI to transform medicine and medical education but remain cautious about the ethical and human implications of its implementation. These perceptions, aligned with the study’s objective, provide an initial perspective on the educational needs, ethical concerns, and expectations of future healthcare professionals in the face of the increasing presence of AI in the health sector (Figure 3).
Figure 3. Graphic representation of the answers obtained for the questions on “Future perspectives” through relative frequencies (3–7, 12–14).
The results obtained are consistent with the academic characteristics and formative stage of our sample of 29 medical students. These students, who have already completed Medical Ethics coursework and are concurrently engaging in clinical practices alongside theoretical subjects, represent an ideal profile for capturing how future healthcare professionals perceive the integration of Artificial Intelligence (AI) tools into their medical activities. Additionally, the non-mandatory nature of theoretical seminar attendance at this stage, combined with the documented absenteeism phenomenon in health sciences (20), reinforces the relevance of this sample as a study group.
This teaching improvement project aimed to explore the level of knowledge, ethical perceptions, and future perspectives of medical students regarding the use of AI tools in the healthcare field, specifically the employment of chatbots. Despite their limited direct experience with AI, the findings indicate that students are aware of the inherent ethical challenges of these technologies while recognizing the importance of acquiring competencies in this area for their future professional practice.
The limited prior use of chatbots to obtain medical information aligns with trends described in the literature (21), indicating that these tools have not yet been widely incorporated into students’ routine information-seeking practices. This lack of familiarity suggests the need for specific educational interventions that increase exposure to AI and enhance understanding of its applications (22). Nevertheless, the positive disposition towards learning these technologies reflects an open field for curricular development.
The identification of ethical concerns by the students constitutes one of the most significant findings of this study, highlighting an area that warrants deeper attention. Participants expressed concerns about the accuracy of information, the presence of biases, data confidentiality, and the moral implications of using chatbots in clinical practice. This sensitivity to ethical dilemmas aligns with literature that underscores the importance of addressing these issues in the integration of AI in healthcare (17, 23, 24).
Although students showed a certain degree of trust in the responses provided by chatbots, this trust is tempered by the previously mentioned ethical reservations. It is clear that the mere incorporation of AI tools is insufficient: it is imperative to establish solid ethical frameworks, well-defined guidelines, and training that goes beyond technical competencies. Including ethics modules focused on AI, case-based discussions, and dialogues with ethics and technology experts could foster a critical and responsible view of the use of these tools. In this way, future doctors can adopt balanced approaches, ensuring safe, equitable, and patient-centered applications.
The students’ perspectives suggest that AI could facilitate collaboration between healthcare professionals and chatbots, potentially optimizing care in an increasingly complex clinical environment (1). The nearly unanimous conviction that knowledge of these tools will be essential in their careers underscores the need to reform medical curricula, incorporating technological skills that prepare future professionals for a rapidly transforming care scenario (7, 25).
Furthermore, concerns about the risk of dehumanizing care, potential technological dependence, or the influence of AI on specialty choice should not be overlooked. These warnings highlight the importance of balancing technological literacy with the development of humanistic, ethical, and communication competencies. Extending these training strategies to other health science degrees will promote teamwork and a comprehensive approach to AI usage.
Although this project provides valuable preliminary findings, it is important to acknowledge several limitations that affect the generalizability and robustness of the results. Firstly, the sample size was small and participation was not mandatory, which limits representativeness of the broader population of medical students and introduces a potential non-response bias: students who chose not to participate might hold different perceptions or attitudes regarding AI in education. Secondly, the study was conducted within the specific context of a seminar focused on AI, so the perceptions gathered could be influenced by the educational intervention itself, introducing a potential acquiescence bias toward the presented content.
Additionally, although the questionnaire used underwent a second iteration following a pilot with 14 participants and was agreed upon with expert educators, it lacks a formal psychometric validation process. The absence of objective questions that assess the actual level of knowledge limits the ability to contrast subjective perceptions with more direct indicators, and the simplicity of the instrument may not capture the real complexity of the perceptions, attitudes, and contextual factors that influence the use of AI in medical training environments.
To address these limitations, future research should consider using larger, more diverse samples with higher response rates to enhance representativeness and statistical power. It would also be advisable to evaluate the effectiveness of AI educational initiatives in different training contexts and over longer periods, as well as to refine and validate the questionnaire through rigorous psychometric analyses, incorporate objective questions, and encompass broader contextual factors. In this way, the conclusions drawn would be more robust, applicable, and generalizable to a wider range of medical education settings.
This teaching improvement project, aimed at describing the knowledge, attitudes, and perspectives of medical students regarding the application of AI and the use of chatbots in the healthcare field, revealed that participants are not significantly exposed to these tools nor are they a regular part of their academic or clinical routines. Despite this limited familiarity, they demonstrated a moderate awareness of the ethical challenges involved in incorporating AI into medical practice, reflecting an emerging sensitivity to the moral and deontological implications of these technologies.
At the same time, a marked optimism regarding the future adoption of AI-based tools was evident, as all students recognized the need to acquire knowledge in this area to perform effectively as healthcare professionals. This combination of ethical concerns and positive expectations underscores the importance of integrating specific AI-related educational content into medical education, enabling future doctors to use these tools effectively, thoughtfully, and responsibly.
Ultimately, the need to strengthen AI training within the medical curriculum not only responds to the growing presence of these technologies in healthcare delivery but also addresses the urgency of preparing tomorrow’s physicians to leverage the opportunities offered by AI while resolving the complex ethical implications associated with its implementation.
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
This project was conducted in accordance with current ethical guidelines and focused exclusively on the collection of anonymous data regarding students’ perceptions and attitudes toward the described topic. No identifiable personal data or sensitive information related to participants’ health or academic performance were collected. Participation was entirely voluntary, with no economic or academic incentives offered. Therefore, no Ethics Board approval or informed consent was required.
JG-G: Conceptualization, Investigation, Methodology, Software, Visualization, Writing – original draft, Writing – review & editing. LB-M: Conceptualization, Investigation, Methodology, Writing – original draft, Writing – review & editing. MB: Supervision, Writing – review & editing. AV: Writing – review & editing. IT-R: Conceptualization, Investigation, Methodology, Supervision, Writing – review & editing. AP: Conceptualization, Investigation, Methodology, Supervision, Writing – review & editing.
The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.
We thank all the students who attended the seminar and actively participated in the questionnaire.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The authors declare that Gen AI was used in the creation of this manuscript. Generative AI tools have been used to translate and correct writing errors.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
1. Topol, EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. (2019) 25:44–56. doi: 10.1038/s41591-018-0300-7
2. Rajpurkar, P, Chen, E, Banerjee, O, and Topol, EJ. AI in health and medicine. Nat Med. (2022) 28:31–8. doi: 10.1038/s41591-021-01614-0
3. Garcia-Vidal, C, Sanjuan, G, Puerta-Alcalde, P, Moreno-García, E, and Soriano, A. Artificial intelligence to support clinical decision-making processes. EBioMedicine. (2019) 46:27–9. doi: 10.1016/j.ebiom.2019.07.019
4. Gomez, C, Smith, BL, Zayas, A, Unberath, M, and Canares, T. Explainable AI decision support improves accuracy during telehealth strep throat screening. Commun Med. (2024) 4:149. doi: 10.1038/s43856-024-00568-x
5. Laranjo, L, Dunn, AG, Tong, HL, Kocaballi, AB, Chen, J, Bashir, R, et al. Conversational agents in healthcare: a systematic review. J Am Med Inform Assoc. (2018) 25:1248–58. doi: 10.1093/JAMIA/OCY072
6. Milne-Ives, M, de Cock, C, Lim, E, Shehadeh, MH, de Pennington, N, Mole, G, et al. The effectiveness of artificial intelligence conversational agents in health care: systematic review. J Med Internet Res. (2020) 22:e20346. doi: 10.2196/20346
7. Masters, K. Artificial intelligence in medical education. Med Teach. (2019) 41:976–80. doi: 10.1080/0142159X.2019.1595557
8. Aggarwal, A, Tam, CC, Wu, D, Li, X, and Qiao, S. Artificial intelligence–based Chatbots for promoting health behavioral changes: systematic review. J Med Internet Res. (2023) 25:e40789. doi: 10.2196/40789
9. Mittelstadt, BD, Allo, P, Taddeo, M, Wachter, S, and Floridi, L. The ethics of algorithms: mapping the debate. Big Data Soc. (2016) 3:679. doi: 10.1177/2053951716679679
10. Grote, T, and Berens, P. On the ethics of algorithmic decision-making in healthcare. J Med Ethics. (2020) 46:205–11. doi: 10.1136/MEDETHICS-2019-105586
11. Rigby, MJ. Ethical dimensions of using artificial intelligence in health care. AMA J Ethics. (2019) 21:E121–4. doi: 10.1001/AMAJETHICS.2019.121
12. Wartman, SA, and Donald, CC. Medical education must move from the information age to the age of artificial intelligence. Acad Med. (2018) 93:1107–9. doi: 10.1097/ACM.0000000000002044
13. Sit, C, Srinivasan, R, Amlani, A, Muthuswamy, K, Azam, A, Monzon, L, et al. Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights Imaging. (2020) 11:830. doi: 10.1186/S13244-019-0830-7
14. Paranjape, K, Schinkel, M, Panday, RN, Car, J, and Nanayakkara, P. Introducing artificial intelligence training in medical education. JMIR Med Educ. (2019) 5:e16048. doi: 10.2196/16048
15. Chan, KS, and Zary, N. Applications and challenges of implementing artificial intelligence in medical education: integrative review. JMIR Med Educ. (2019) 5:e13930. doi: 10.2196/13930
16. Kolachalama, VB, and Garg, PS. Machine learning and medical education. NPJ Digit Med. (2018) 1:54. doi: 10.1038/S41746-018-0061-1
17. Morley, J, Machado, CCV, Burr, C, Cowls, J, Joshi, I, Taddeo, M, et al. The ethics of AI in health care: a mapping review. Soc Sci Med. (2020) 260:113172. doi: 10.1016/j.socscimed.2020.113172
18. Floridi, L, and Cowls, J. A unified framework of five principles for AI in society. Harv Data Sci Rev. (2019) 1:550. doi: 10.1162/99608F92.8CD550D1
19. Rosic, A. Legal implications of artificial intelligence in health care. Clin Dermatol. (2024) 42:451–9. doi: 10.1016/J.CLINDERMATOL.2024.06.014
20. Babakhanian, Z, Khorashadi, RR, Vakili, R, Abbasi, MA, Akhavan, H, and Saeidi, M. Analyzing the factors affecting students’ absenteeism in university classrooms; a systematic review. Syst Rev. (2022) 3:563–75. doi: 10.22034/MEB.2022.353007.1064
21. Buabbas, AJ, Miskin, B, Alnaqi, AA, Ayed, AK, Shehab, AA, Syed-Abdul, S, et al. Investigating students’ perceptions towards artificial intelligence in medical education. Healthcare. (2023) 11:1298. doi: 10.3390/healthcare11091298
22. Cherrez-Ojeda, I, Gallardo-Bastidas, JC, Robles-Velasco, K, Osorio, MF, Velez Leon, EM, Leon Velastegui, M, et al. Understanding health care students’ perceptions, beliefs, and attitudes toward AI-powered language models: cross-sectional study. JMIR Med Educ. (2024) 10:e51757. doi: 10.2196/51757
23. Mittelstadt, B. Principles alone cannot guarantee ethical AI. Nat Mach Intell. (2019) 1:501–7. doi: 10.1038/s42256-019-0114-4
24. Nundy, S, Montgomery, T, and Wachter, RM. Promoting trust between patients and physicians in the era of artificial intelligence. JAMA. (2019) 322:497–8. doi: 10.1001/jama.2018.20563
Keywords: medical education, artificial intelligence, chatbots, medical students, teaching improvement
Citation: Gualda-Gea JJ, Barón-Miras LE, Bertran MJ, Vilella A, Torá-Rocamora I and Prat A (2025) Perceptions and future perspectives of medical students on the use of artificial intelligence based chatbots: an exploratory analysis. Front. Med. 12:1529305. doi: 10.3389/fmed.2025.1529305
Received: 19 November 2024; Accepted: 07 January 2025;
Published: 22 January 2025.
Edited by:
Thomas F. Heston, University of Washington, United States
Reviewed by:
Ghulam Farid, Shalamar Medical and Dental College, Pakistan
Copyright © 2025 Gualda-Gea, Barón-Miras, Bertran, Vilella, Torá-Rocamora and Prat. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Juan José Gualda-Gea, jgualdge17@alumnes.ub.edu
†These authors share senior authorship