- 1 Institute of Digital Technology, Faculty of Informatics, Eszterhazy Karoly Catholic University, Eger, Hungary
- 2 Institute of Electronics and Communication Systems, Kandó Kálmán Faculty of Electrical Engineering, Óbuda University, Budapest, Hungary
- 3 Institute of Computer Science, University of Dunaujvaros, Dunaujvaros, Hungary
- 4 GAMF Faculty of Engineering and Computer Science, John von Neumann University, Kecskemét, Hungary
1 Introduction
The emergence of ChatGPT, a high-performance artificial intelligence language model developed by OpenAI, has generated both excitement and concern in academia (Li, 2024). Equipped with advanced natural language processing techniques, ChatGPT can generate human-like text, providing coherent and contextually relevant responses to a wide range of queries. This capability has raised both optimism and apprehension, as it could fundamentally change traditional practices in academia, industry, and everyday life (Cambra-Fierro et al., 2024).
A system whose basic promise is “ask me anything and I might have a good answer” is no longer a concern confined to a few fields. Scholarly journals are already grappling with the role such technology will play, including the question of whether it should, or even can, be credited as a co-author (Tang, 2024). Professors, in turn, face the immediate challenge of assessing students in the presence of such technology. These are practical and legitimate questions.
While ChatGPT offers many benefits in terms of student engagement, collaboration, and accessibility, it also has serious implications for academic integrity, chief among them plagiarism. This paper offers comprehensive strategies that educators can use to mitigate these risks by promoting ethical use and fairness in the academic use of AI tools.
2 Challenges and risks of ChatGPT and generative AI
ChatGPT was truly disruptive, which should have surprised no one. Such technologies are moving out of university labs and being adopted with remarkable speed: ChatGPT reached one million users in its first 5 days and now has over 180 million (Duarte, 2024). This rapid adoption reflects a remarkable property of generative AI: it consistently produces coherent and contextually relevant text.
One of the main problems with AI models like ChatGPT is the range of threats they pose, including black-box algorithms, discrimination, bias, vulgarity, copyright infringement, plagiarism, and the generation of fake text or fake media (Sloan et al., 2024). Organizations therefore need disciplined risk management approaches to address these threats effectively. Given the continuous evolution of AI algorithms and the rapid turnover of their data sources, periodic risk assessments should review heterogeneity and variability bias and weigh them against ethical considerations (Schwartz et al., 2022).
In practice, text generated by ChatGPT has often been found to lack an obvious logical structure, to contain speculative information, to gloss over critical data, and to offer no original contribution (Giuggioli and Pellegrini, 2023). An article produced this way tends to be conventional, short on logic and facts, and not critically engaging. In addition, ChatGPT references are frequently incorrect: titles, authors, and other publication details are misstated. Such inaccuracies require careful double-checking, especially in professional contexts such as journalism and software development.
The poor logical flow, factual inaccuracies, lack of critical analysis, and lack of originality of AI-generated content stem from the current state of the technology (Yang, 2024). These systems are deep learning models trained on very extensive datasets of prior information that may be outdated or of low quality. Although better training models and higher-quality data may improve AI performance, it is not clear that such technical improvements necessarily translate into significant gains in innovation (Dwivedi et al., 2023).
Recent applications of generative AI in text, film, and music production all indicate that these platforms will at best be partners in the innovation process, complementing rather than replacing human intelligence. For complex activities requiring creativity and emotional intelligence, a well-formulated prompt alone is not sufficient for AI to produce markedly different and original outputs; human oversight and collaboration remain essential (Liu, 2024). In an era of rapidly evolving AI technologies, researchers, practitioners, and policymakers must critically engage with these changes. Building on the strengths of AI, while remaining aware of its limitations and working seriously to address them, will foster an environment in which generative AI tools such as ChatGPT are used responsibly and effectively.
3 Addressing ChatGPT-induced plagiarism
Integrating ChatGPT into the academic environment is not without challenges. The primary concern is the possibility of plagiarism: students may come to rely on ChatGPT to produce essays and assignments, which they then submit as their own work. This undermines the educational process and devalues academic credentials. Another challenge is the potential for inequality. Students with access to ChatGPT can complete assignments in much less time and possibly better, giving them an unfair advantage over students without it and potentially widening existing disparities in educational outcomes. Compounding these issues, content created by students is hard to distinguish from content created by AI. Because ChatGPT generates coherent, human-like text, educators struggle to detect AI-assisted plagiarism.
While this work focuses on addressing the risks of plagiarism, ChatGPT and other AI tools hold great promise for improving learning outcomes and stimulating creativity. Through adaptive tutoring systems, these tools can improve personalized learning, provide immediate feedback and facilitate deeper interaction with course material. Furthermore, AI-driven creative applications allow students to experiment with problem-solving and critical thinking in new ways, ultimately resulting in a more dynamic and engaging learning environment.
3.1 Current educational strategies to counter unethical use of LLMs
The rise of large language models such as ChatGPT in education has led many educators and institutions to develop ways to prevent misuse. These approaches aim to protect academic integrity while adapting to AI-enhanced learning environments. Different strategies have been introduced in different educational settings with varying degrees of success.
3.1.1 Regulating AI usage within curricula
Many educational institutions have begun to establish clear policies on how and when AI tools such as ChatGPT may be used. These policies typically emphasize proper citation and attribution when AI-generated content appears in a student's work. For example, some universities require students to disclose which AI tools they used in an assignment, much as they would cite sources from the academic literature.
3.1.2 Enhancing plagiarism detection tools
A number of universities have implemented AI-detection tools that work alongside plagiarism-checking programs. Services such as Turnitin have recently introduced algorithms that flag submissions which are out of character for a student or contain unnatural language patterns. In addition, new software designed to detect AI-assisted content is being developed and deployed, further complicating student efforts to pass off AI-generated text as their own.
3.1.3 Promoting unique and creative assignments
Another effective strategy is to design assessments that require a high level of originality and creativity from the student, areas where AI tools are less effective. For example, assignments of a personal, reflective nature, or those requiring original research questions or specific local contexts, make it harder for students to rely solely on AI-generated content. This strategy not only minimizes the chances of AI misuse but also fosters deeper learning and critical thinking.
3.1.4 Incorporating oral examinations and presentations
Some educators have adopted oral examinations in which students present and defend ideas, assignments, and research projects. These face-to-face or virtual exchanges allow the instructor to engage directly with the student and gauge the depth of their understanding of the course material. Because oral exams demand real-time responses and justification, it is nearly impossible for students to lean on AI tools during them.
3.1.5 Collaborative group work and peer review
In contexts where group work is fostered, students must work in teams on elaborate projects, which makes it noticeably harder for AI-generated content to fit smoothly into the final product. Group-based assignments by their very nature require communication, coordination, and collaboration among team members, aspects that AI cannot easily imitate. Moreover, peer-review mechanisms require students to evaluate their colleagues' work, increasing the chances that inconsistencies or potential misuse of AI tools will be identified.
3.1.6 Reducing AI-assisted plagiarism through collaborative and reflective assessment
Empirical evidence supports the use of adaptive and reflective evaluation to reduce AI-related plagiarism. Moorhouse et al. (2023) highlight successful pilot programs at highly regarded universities that incorporate reflective and personalized tasks; these programs limit AI misuse by requiring individualized responses tailored to each student. Furthermore, Dempere et al. (2023) provide evidence for combined technology-based and ethics-based interventions, showing that ethical AI-use campaigns paired with AI-detection technologies greatly improve compliance with academic integrity norms. Taken together, these studies show that integrating educational awareness campaigns with adaptive assessment provides a strong foundation for preventing AI-enabled plagiarism.
3.2 Strategies to prevent plagiarism using ChatGPT
To address the challenges of generative AI in education, educators can employ a number of strategies to prevent ChatGPT-enabled plagiarism. Cotton et al. (2024) describe the dual nature of ChatGPT in academia, noting both the problems it poses for scholarly integrity and its prospects for increasing engagement. They call for proactive institutional measures such as integrating AI-detection technologies, educating students on the ethical use of AI, and creating explicit policies on the use of AI tools. By implementing these tactics, universities can protect academic integrity and encourage ethical use of AI. Zeb et al. (2024) similarly point to both the potential benefits of ChatGPT for student engagement in higher education and its risks for academic integrity. They recommend that institutions implement clear policies, create assessment tasks that require critical thinking, and provide training to guide ethical AI use. By integrating these measures, educators can harness the benefits of AI tools like ChatGPT while minimizing the risks of misuse.
Drawing on these opinions and suggestions, the following strategies can help prevent plagiarism:
Technological solutions
• Use plagiarism detectors that match text in student submissions against existing sources and flag overlaps as possible plagiarism. Educators can also invest in advanced technologies that detect AI-generated content through language patterns and stylistic anomalies.
• Use learning analytics to track learner progress and detect anomalous patterns in performance, such as a sudden, unexplained improvement or a shift in writing style, which is often a sign of AI-enabled plagiarism (a minimal sketch of such checks follows this list).
• Use adaptive testing methods in which questions are modified or reformulated based on a student's previous responses. Such dynamic approaches make it much harder to generate or predict correct answers with AI tools.
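As a concrete illustration of the first two points, the following Python sketch combines a simple n-gram overlap check against known sources with a crude stylometric drift check against a student's earlier writing. It is a minimal sketch, not a production detector: the features, thresholds, and function names are all illustrative assumptions, and any flag it raises should only ever route a submission to an instructor for manual review.

```python
# Illustrative sketch: flagging submissions for human review using
# (a) n-gram overlap with known sources and (b) a crude stylometric
# drift check against a student's earlier writing. All thresholds and
# names are hypothetical; real systems use far richer models.
from collections import Counter
import math
import re

def ngrams(text: str, n: int = 5) -> set:
    """Return the set of word n-grams in lowercased text."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, source: str, n: int = 5) -> float:
    """Fraction of the submission's n-grams that also appear in a source."""
    sub = ngrams(submission, n)
    return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

def style_vector(text: str) -> list:
    """Tiny stylometric profile: mean sentence length, mean word length,
    and type-token ratio."""
    words = re.findall(r"[a-zA-Z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return [0.0, 0.0, 0.0]
    return [len(words) / len(sentences),
            sum(map(len, words)) / len(words),
            len({w.lower() for w in words}) / len(words)]

def style_drift(new_text: str, past_texts: list) -> float:
    """Distance between the new text's style vector and the mean
    vector of the student's past work."""
    past = [style_vector(t) for t in past_texts]
    mean = [sum(col) / len(past) for col in zip(*past)]
    return math.dist(style_vector(new_text), mean)

def review_flags(submission, sources, past_texts,
                 overlap_thresh=0.15, drift_thresh=5.0):
    """Return human-readable reasons to route a submission for review."""
    flags = []
    for name, src in sources.items():
        score = overlap_score(submission, src)
        if score > overlap_thresh:
            flags.append(f"{score:.0%} 5-gram overlap with '{name}'")
    if past_texts and style_drift(submission, past_texts) > drift_thresh:
        flags.append("writing style deviates sharply from past work")
    return flags
```

Real learning-analytics platforms draw on far richer features (vocabulary distributions, syntax, submission timing), but the principle is the same: compare a new submission both against external sources and against the student's own history, and escalate to a human rather than penalize automatically.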
Pedagogical approaches
• Educating students about plagiarism is one of the most effective countermeasures. Students need to understand what plagiarism is and the damage it does both to their learning and to the academic integrity and reputation of their institutions. This can be achieved through teaching materials, classroom discussions, and clear communication of the consequences of plagiarism.
• Include reflective writing exercises in which learners should discuss the learning process, the challenges encountered, and the insights gained. This can help teachers to assess the credibility of students' work and understand their thinking processes.
• Peer assessment should be incorporated, where students are asked to evaluate each other's work. This both raises the quality of the work submitted and allows inconsistencies and possible plagiarism to be detected.
• Encourage projects in which students produce individual, creative outputs, such as multimedia presentations, podcasts, or other digital communication products that AI is unlikely to be able to replicate.
General assessment design
• Design assessments that link to personal experiences, local contexts, or specific curricula. General-purpose AI tools are far less effective at such personalized tasks.
• In addition to the written essay, encourage students to communicate what they have learned through a variety of media, such as slide shows, audio recordings, films, and portfolios. AI has difficulty replicating these alternative assessment methods, which encourage learners to develop more versatile skills.
Policy and institutional changes
• Setting clear guidelines for the use of artificial intelligence tools such as ChatGPT is essential. Students need to know how and in what contexts such tools may be used, including proper citation and attribution of AI-generated text.
• Requiring students to submit an outline of their work can help instructors identify potential AI-generated content early in the process. This approach allows for timely feedback and guidance, reducing the likelihood of students resorting to plagiarism.
• Regularly checking student submissions and work. This could include thorough reading of assignments, oral presentations to verify understanding, and the use of detection tools to flag suspicious content.
• Breaking large tasks down into smaller subtasks, structured around key milestones with appropriate deadlines. This approach ensures that students build up their work gradually, making it more difficult to complete a whole project with AI.
• Oral examinations are a strong test of originality: students must justify their arguments and defend their work in spoken answers, which makes it very difficult to pass off AI-generated content in this assessment scenario.
3.3 Designing assessments to minimize AI misuse
To further minimize the risk of AI-assisted plagiarism, educators can design assessments that are less prone to misuse. Some extended ways to minimize AI misuse:
Critical thinking and problem-solving tasks
• Tasks that require a high degree of critical thinking or problem solving are unlikely to be performed satisfactorily by AI. These may include group discussions, project presentations, and interactive activities that require individuals to apply their own knowledge and skills.
• Designing open-ended tasks that encourage originality and creativity can create conditions in which AI tools are less useful. For example, having students formulate their own research questions or arguments fosters independent thinking.
• Refine tasks to focus on areas where AI tools fall short, such as in-depth critical analysis and personalized responses.
Real-life applications and practical assessments
• Demonstrate practical applications: create assessments in which students apply theoretical knowledge to practical, real-world problems. Case studies, simulations, and project-based learning activities are contexts in which AI's ability to generate relevant content is limited.
• Design assessments that replicate real-life tasks and situations in authentic contexts, such as service-learning projects, internships, or community-based research. Such tasks require personal engagement and cannot be easily outsourced to AI.
• Develop role-playing exercises and simulations in which students take on designated roles or characters. This is a great way to increase creativity and critical thinking, elements that are difficult for AI to simulate.
Personalized and reflective assignments
• Create personalized tasks for each student or cohort that include dynamic elements such as current events, specific local problems, or personal reflections. Individualizing tasks minimizes the applicability of general AI responses.
• Provide more personalized feedback and require follow-up actions based on that feedback; this fosters deeper engagement with the material and reduces reliance on AI.
• In a portfolio-based assessment, the student collects work done over time. Portfolios show progress or improvement in learning, which is challenging for AI to simulate.
Collaborative and peer-based learning
• Group projects, in which learners must work together to create a final product, help ensure authentic input, since collaboration requires communication and coordination that AI cannot easily replicate.
• Peer-assisted learning activities, in which learners tutor or mentor classmates, reinforce knowledge and require explanation and justification that cannot be delegated to AI.
Timed and proctored assessments
• Real-time or proctored exams restrict students' ability to use AI during assessments. This approach greatly reduces plagiarism and helps ensure the work represents each student's own abilities.
• Conduct timed assessments, such as in-class essays or timed online tests, to limit students' use of AI tools. This format emphasizes students' ability to think and respond quickly based on their own knowledge.
Multimodal and mixed assessment formats
• Use mixed forms of assessment: written work, presentations, and practical demonstrations. Multimodal assessments require diverse skills, making it difficult for AI alone to handle all elements.
• Interactive and adaptive learning systems, which vary the difficulty and nature of questions based on student performance, provide personalization that challenges AI.
Frequent and ongoing assessments
• Use frequent, low-stakes assessments to monitor students' progress on an ongoing basis. This allows irregularities to be detected early and reduces the likelihood of last-minute reliance on AI.
3.4 Challenges in implementing anti-plagiarism strategies
Although the tactics outlined above offer a sound method for curbing AI-assisted plagiarism, applying them may present a number of ethical and practical difficulties.
Some universities, especially those with limited resources, may find the high costs of using sophisticated plagiarism detectors and learning analytics prohibitive. Furthermore, the effectiveness of these technologies depends on frequent updates to keep pace with rapidly evolving AI capabilities, further increasing operational costs.
Many technology solutions, including learning analytics and adaptive testing, require the collection of large amounts of student data. This raises questions about data security and privacy, especially when sensitive data is required. The scope of information that can be collected and examined may be limited by the fact that schools and other organizations must ensure compliance with data protection laws.
AI-based detection methods can mistake authentic student work for AI-generated text, especially when students use certain language patterns or have a distinctive writing style. This can lead to false accusations that undermine student confidence and require manual investigation by teachers, a time- and resource-intensive process.
Teachers must devote considerable time and energy to implementing pedagogical and policy-based measures, such as teaching about plagiarism, holding oral exams, and dividing large tasks into smaller ones. It can be difficult for institutions to provide teachers with the tools and support they need to integrate these changes into their daily routines.
A heavy reliance on technology detection techniques can divert attention from raising students' ethical awareness. While resources such as plagiarism detectors are helpful, a thorough awareness of academic integrity through education remains key to developing long-lasting moral behavior.
Since AI is constantly evolving, strategies must be regularly reviewed and revised. Institutions must adjust their approaches as generative AI technology advances, which may necessitate regulatory changes as well as ongoing teacher training. This continual adaptation places a further burden on administrative and faculty resources.
3.5 Comparing strategies and extracting recommendations
The strategies discussed in this paper coincide with a number of approaches that educators worldwide have already begun to adopt. The following subsections point out the similarities among these methods and make recommendations based on their relative success.
3.5.1 Educational awareness campaigns
Education is perhaps the most effective form of plagiarism prevention: nothing works better than awareness of the tools and of the consequences of their misuse. Institutions that genuinely invest in raising students' awareness of the ethical use of AI tools and the consequences of plagiarism tend to see better compliance. To build a culture of integrity, students must be taught how dishonest practices will affect their learning and future careers. For example, some universities have introduced workshops or online modules that teach ethical use of AI tools, with reminders about originality and proper attribution.
3.5.2 Dynamic assessments and continuous monitoring
Adaptive, continuously updated assessments of performance, such as real-time quizzes or personalized assignments, are important deterrents to the growing misuse of AI. Adaptive tests adjust their questions based on previous responses, which makes it difficult for AI models to anticipate the correct answers. Continuous assessment approaches, including frequent low-stakes assignments, help track students' progress and surface discrepancies indicative of AI misuse. When these approaches are put into practice, educators are better positioned to follow students' learning through iterations, and assessment becomes less vulnerable to last-minute AI-generated submissions.
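To make the mechanism concrete, here is a minimal Python sketch of an adaptive quiz. The question bank, difficulty levels, and up/down rule are all hypothetical; the point is only that each student's question sequence depends on their own previous answers, so there is no fixed set of questions for an AI tool to anticipate.

```python
# Minimal sketch of an adaptive quiz over a hypothetical question bank
# tagged by difficulty. Each correct answer raises the difficulty of
# the next question; each wrong answer lowers it.
import random

QUESTION_BANK = {  # difficulty level -> list of (prompt, answer) pairs
    1: [("2 + 2 = ?", "4"), ("5 - 3 = ?", "2")],
    2: [("12 * 12 = ?", "144"), ("7 * 8 = ?", "56")],
    3: [("Derivative of x^3 at x = 2?", "12"), ("Is 17 prime? (y/n)", "y")],
}

def run_adaptive_quiz(get_answer, num_questions: int = 5) -> int:
    """Ask num_questions, stepping difficulty up after a correct answer
    and down after a wrong one. get_answer(prompt) supplies responses,
    e.g. the built-in input() for an interactive session."""
    level, score = 1, 0
    for _ in range(num_questions):
        prompt, answer = random.choice(QUESTION_BANK[level])
        if get_answer(prompt).strip() == answer:
            score += 1
            level = min(level + 1, max(QUESTION_BANK))
        else:
            level = max(level - 1, min(QUESTION_BANK))
    return score

# Example interactive run: run_adaptive_quiz(input)
```

A production system would use a far richer difficulty model (for example, item response theory), but even this simple up/down rule gives each student an individualized question path.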
3.5.3 Diversified assessment formats
Multimodal assessment is becoming the preferred defense against AI-assisted academic dishonesty: written work, oral presentations, and practical demonstrations together require students to demonstrate a wider range of skills. Moreover, portfolio-based assessments, in which students collect and present a body of work over a semester, offer a more panoramic view of a student's development and make it easier to spot changes in quality or style.
3.5.4 AI-detection tools
Many institutions have already adopted, or are trialing, software that detects AI-generated text. Early data suggest these tools can often flag AI-generated content, and their accuracy is improving; even so, educators should consider blending AI detection with traditional plagiarism detection methods. The institutions that have applied these technologies so far recommend combining their use with instructor vigilance, since manual review of suspicious texts remains an indispensable part of the process.
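A hedged sketch of what such blending might look like follows: two independent 0-1 signals, a similarity score from a conventional plagiarism checker and a likelihood score from an AI-text detector, are combined into a single recommendation for manual review. The weights and thresholds are illustrative assumptions, not calibrated values, and the output is deliberately a routing decision for a human reviewer, never an automatic verdict.

```python
# Hedged sketch of blending two independent signals: a similarity
# score from a conventional plagiarism checker and a probability from
# an AI-text detector. Both upstream scores are placeholders; weights
# and thresholds are illustrative only. A flagged submission should
# always go to an instructor, never trigger an automatic penalty.
def blended_review_decision(similarity: float, ai_probability: float,
                            weight_sim: float = 0.5,
                            weight_ai: float = 0.5,
                            threshold: float = 0.6) -> dict:
    """Combine two 0-1 scores into one recommendation for manual review."""
    combined = weight_sim * similarity + weight_ai * ai_probability
    return {
        "combined_score": round(combined, 3),
        "needs_manual_review": combined >= threshold,
        "reasons": [
            msg for cond, msg in [
                (similarity >= 0.4, "high overlap with existing sources"),
                (ai_probability >= 0.7, "detector rates text as likely AI-generated"),
            ] if cond
        ],
    }

# Example: modest source overlap plus a high AI-likelihood score
print(blended_review_decision(similarity=0.35, ai_probability=0.85))
```

Keeping the two signals separate in the output also preserves the instructor's ability to see why a submission was flagged, which matters given the false-positive risks discussed in Section 3.4.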
4 Discussion
Despite all the benefits, the integration of ChatGPT into educational environments raises serious ethical concerns. One major concern is that it facilitates plagiarism and other forms of academic dishonesty. Students could use ChatGPT to write essays and complete assignments as if they had produced them themselves, a practice that both circumvents the learning process and devalues academic assessment. Above all, it challenges teachers to maintain high standards of academic integrity in their classroom practice. The problem is compounded by the difficulty of distinguishing student-generated content from content created by artificial intelligence: traditional plagiarism detection tools cannot reliably identify text written with advanced AI models such as ChatGPT, and therefore cannot alert instructors when AI-assisted plagiarism has occurred.
Many strategies aim to mitigate the risks associated with ChatGPT and to manage its ethical use in education. Students should be taught the ethical use of AI tools and the importance of avoiding academic dishonesty altogether. Assessments should be designed to make AI misuse less likely, further reducing the potential for AI-enabled plagiarism: tasks or tests that require critical thinking, problem solving, or creativity are difficult for AI to perform adequately.
While useful, generative AI tools such as ChatGPT have the very real potential to facilitate academic fraud. Countering this challenge requires a multi-faceted approach, from plagiarism detection to curriculum redesign. Educators, administrators, and policymakers need to stay ahead of the technology and update their policies on an ongoing basis so that institutional responses keep pace with advances in AI.
AI-detection techniques are essential to maintaining academic integrity and preserving trust in scholarly work. However, these tools also raise ethical issues, such as the possibility of miscategorizing genuine student work because of stylistic differences, which can lead to unfounded accusations. Furthermore, if detection technology is overused, attention may be diverted from promoting academic ethics through education. With a well-designed strategy combining ethical teaching and AI detection, integrity can be maintained without undermining individual responsibility for learning.
Those few institutions that have already taken such steps prove that success lies in blending technology-based solutions with educational efforts: awareness campaigns, adaptive testing, personalized assignments, and diversification of assessment formats top the list of effective measures to minimize the risk of AI misuse. It will be important going forward to create a culture of responsible use of AI, where students realize the risks but are also informed about how to deploy these tools responsibly to advance their learning.
As AI advances, its impact on education is likely to grow, enabling more personalized learning and adaptive feedback that can improve outcomes and access. However, increased reliance on AI creates difficulties, including privacy issues, algorithmic biases, and the changing role of teachers in AI-augmented classrooms. Institutions may need to continually adjust their rules to protect academic integrity as AI systems grow more capable. The ethical and successful integration of AI into education depends on addressing these long-term impacts.
Researchers, practitioners, and policymakers need to keep exploring the ever-changing landscape of ChatGPT and other generative AI technologies. This paper moves in that direction by providing strategies for integrating AI tools into the university environment.
Author contributions
AK: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.
Funding
The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.
Conflict of interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Cambra-Fierro, J. J., Blasco, M. F., López-Pérez, M. E. E., and Trifu, A. (2024). ChatGPT adoption and its influence on faculty well-being: an empirical research in higher education. Educ. Inf. Technol. 2024, 1–22. doi: 10.1007/s10639-024-12871-0
Cotton, D. R., Cotton, P. A., and Shipway, J. R. (2024). Chatting and cheating: ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int. 61, 228–239. doi: 10.1080/14703297.2023.2190148
Dempere, J., Modugu, K., Hesham, A., and Ramasamy, L. K. (2023). The impact of ChatGPT on higher education. Front. Educ. 8:1206936. doi: 10.3389/feduc.2023.1206936
Duarte, F. (2024). Number of ChatGPT Users (Jul 2024). Exploding Topics. Available at: https://explodingtopics.com/blog/chatgpt-users (accessed July 10, 2024).
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., et al. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 71:102642. doi: 10.1016/j.ijinfomgt.2023.102642
Giuggioli, G., and Pellegrini, M. M. (2023). Artificial intelligence as an enabler for entrepreneurs: a systematic literature review and an agenda for future research. Int. J. Entrepr. Behav. Res. 29, 816–837. doi: 10.1108/IJEBR-05-2021-0426
Li, M. (2024). “The impact of ChatGPT on teaching and learning in higher education: challenges, opportunities, and future scope,” in Encyclopedia of Information Science and Technology, Sixth Edition 1–20. doi: 10.4018/978-1-6684-7366-5.ch079
Liu, J. (2024). ChatGPT: perspectives from human–computer interaction and psychology. Front. Artif. Intel. 7:1418869. doi: 10.3389/frai.2024.1418869
Moorhouse, B. L., Yeo, M. A., and Wan, Y. (2023). Generative AI tools and assessment: guidelines of the world's top-ranking universities. Comput. Educ. Open 5:100151. doi: 10.1016/j.caeo.2023.100151
Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., et al. (2022). Towards a standard for identifying and managing bias in artificial intelligence. NIST Special Publication 1270. Available at: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf (accessed July 10, 2024).
Sloan, J., Powis, N., and Tan, J. (2024). Introduction to artificial intelligence: what are the key risks for insurers to consider over the short to medium term? Milliman white paper. Available at: https://www.milliman.com/-/media/milliman/pdfs/2024-articles/7-12-24_introduction-to-ai-briefing-note.ashx (accessed July 10, 2024).
Tang, B. L. (2024). Did ChatGPT ask or agree to be a (co) author? ChatGPT authorship reflects the wider problem of inappropriate authorship practices. Sci. Edit. 11, 93–95. doi: 10.6087/kcse.337
Yang, W. (2024). Beyond algorithms: The human touch machine-generated titles for enhancing click-through rates on social media. PLoS ONE 19:e0306639. doi: 10.1371/journal.pone.0306639
Keywords: ChatGPT, generative AI, plagiarism, ethical AI use, assessments, adaptive testing, creative assignments, personalized tasks
Citation: Kovari A (2025) Ethical use of ChatGPT in education—Best practices to combat AI-induced plagiarism. Front. Educ. 9:1465703. doi: 10.3389/feduc.2024.1465703
Received: 16 July 2024; Accepted: 20 November 2024;
Published: 09 January 2025.
Edited by: Xinyue Ren, Old Dominion University, United States
Reviewed by: Antonio Sarasa-Cabezuelo, Complutense University of Madrid, Spain
Copyright © 2025 Kovari. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Attila Kovari, kovari.attila@uni-eszterhazy.hu