
OPINION article

Front. Educ.
Sec. Higher Education
Volume 9 - 2024 | doi: 10.3389/feduc.2024.1465703
This article is part of the Research Topic AI's Impact on Higher Education: Transforming Research, Teaching, and Learning

Ethical Use of ChatGPT in Education: Best Practices to Combat AI-Induced Plagiarism

Provisionally accepted
Attila Kovari 1,2*
  • 1 Institute of Computer Science, University of Dunaújváros, Dunaújváros, Hungary
  • 2 Institute of Digital Technology, Eszterhazy Karoly Catholic University, Eger, Hungary

The final, formatted version of the article will be published soon.

The emergence of ChatGPT, a high-performance artificial intelligence language model developed by OpenAI, has generated both excitement and concern in academia (Li, 2024). Equipped with advanced natural language processing techniques, ChatGPT can generate human-like text, providing coherent and contextually relevant responses to a wide range of queries. This unprecedented capability has raised both optimism and concern, as it could fundamentally change traditional practices in academia, industry, and everyday life (Cambra-Fierro et al., 2024).

The basic promise of "ask me anything" and "I might have a good answer" is no longer a concern confined to a few fields. Scientific publishing is already grappling with the role such technology will play, including whether it can, and should, be credited as a coauthor (Tang, 2024). Professors who create knowledge face the immediate challenge of assessing students in the presence of such technology. These are practical and legitimate questions. While ChatGPT offers many benefits in terms of student engagement, collaboration, and accessibility, it also has serious academic integrity implications, chief among them plagiarism. This paper offers comprehensive strategies for how educators can mitigate these risks by promoting ethical use and fairness in the academic use of AI tools.

ChatGPT was truly disruptive, which should have surprised no one. Such technologies are adopted very quickly once they leave university labs: ChatGPT reached one million users in its first five days and now has over 180 million (Duarte, 2024). This rapid adoption reflects a remarkable property of generative AI: it consistently produces coherent and contextually relevant text.

One of the main problems with AI models like ChatGPT is the range of threats they pose, including black-box algorithms, discrimination, bias, vulgarity, copyright infringement (notably plagiarism), and the generation of fake text or fake media (Sloan, Powis & Tan, 2024). Organisations therefore need disciplined risk management approaches to address these threats effectively. Because AI algorithms evolve continuously alongside rapidly changing data sources, periodic risk assessments should review heterogeneity and variability bias and weigh them against ethical considerations (Schwartz et al., 2022).

Early experience showed that the resulting text often lacked an obvious logical structure, contained speculative information, did not elaborate on critical data, and provided no original contribution (Giuggioli & Pellegrini, 2023). An article generated on a given topic would be conventional, lack logic and facts, and fail to engage critically. In addition, ChatGPT references are frequently incorrect: titles, authors, and other publication details are misstated. Such inaccuracies require careful double-checking, especially in professional contexts such as journalism and software development. The inaccuracy, poor logical flow, factual errors, lack of critical analysis, and lack of originality of AI-generated content stem from the current state of the technology (Yang, 2024), which relies on deep learning models trained on very extensive datasets of prior information that may be outdated or of low quality.
Although improvements in training models and data quality may improve the performance of AI systems, it is not clear that technical gains alone lead to significant gains in innovation (Dwivedi et al., 2023). Recent applications of generative AI in text, film, and music production all indicate that these platforms will at best be partners in the innovation process, complementing rather than replacing human intelligence. For complex activities requiring creativity and emotional intelligence, a well-formulated prompt alone is not sufficient for AI to produce markedly different and original outputs; human oversight and collaboration remain essential (Liu, 2024). An era of rapidly evolving AI technologies therefore requires researchers, practitioners, and policymakers to engage critically with these changes. Building on the strengths of AI, while recognizing its limitations and working seriously to improve them, will foster an environment in which generative AI tools such as ChatGPT are used responsibly and effectively.

Integrating ChatGPT into the academic environment is not without its challenges. The primary concern is the possibility of plagiarism. Students may become accustomed to using ChatGPT to create essays and assignments, which they then submit as their own work. This undermines the educational process and devalues academic credentials. Another challenge is the potential for inequality: students who have access to ChatGPT can complete assignments in much less time, and possibly to a higher standard, giving them an unfair advantage over students who do not. This may further widen existing inequalities in educational outcomes. Finally, it is difficult to distinguish content created by students from content created by AI. Because ChatGPT generates human-like, coherent text, educators struggle to detect AI-assisted plagiarism.

While this work focuses on addressing the risks of plagiarism, ChatGPT and other AI tools hold great promise for improving learning outcomes and stimulating creativity. Through adaptive tutoring systems, these tools can personalize learning, provide immediate feedback, and facilitate deeper interaction with course material. AI-driven creative applications also allow students to experiment with problem-solving and critical thinking in new ways, ultimately producing a more dynamic and engaging learning environment.

The rise of large language models such as ChatGPT in education has led many educators and institutions to develop ways to prevent misuse. These approaches aim to protect academic integrity while adapting to AI-enhanced learning environments, and different strategies have been introduced in different educational settings with varying degrees of success. Many educational institutions have begun to establish clear policies on how and when AI tools such as ChatGPT may be employed. These policies often emphasize proper citation and attribution when AI-generated content is used in a student's work; for example, some universities require students to state which AI tool they used in an assignment, much as they would cite sources from the academic literature. A number of universities have also implemented AI-detection tools that work within plagiarism-checking programs.
Indeed, services such as Turnitin have recently introduced algorithms that detect AI-generated text by flagging submissions that are out of character for a student or contain unnatural patterns of language. In addition, new software designed to detect AI-assisted content is being developed and implemented, further complicating students' efforts to misrepresent AI-generated text as their own.
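Turnitin and similar vendors do not publish these algorithms, but the idea of flagging an "out of character" submission can be illustrated with a deliberately simplified stylometric baseline. The following Python sketch is a hypothetical illustration, not any vendor's method; the two features and the z-score threshold are assumptions chosen for brevity.

```python
# Illustrative sketch only, not Turnitin's algorithm: flags a submission whose
# simple stylometric profile deviates sharply from a student's own history.
import statistics

def style_features(text: str) -> dict:
    """Two crude style features: mean sentence length and lexical diversity."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = text.lower().split()
    return {
        "mean_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

def is_out_of_character(history: list[str], new_text: str, z: float = 2.5) -> bool:
    """True if any feature of new_text lies more than z standard deviations
    from the mean of that feature over the student's past submissions."""
    past = [style_features(t) for t in history]  # assumes a non-empty history
    new = style_features(new_text)
    for key, value in new.items():
        vals = [p[key] for p in past]
        mean = statistics.mean(vals)
        spread = statistics.pstdev(vals) or 1e-9  # avoid division by zero
        if abs(value - mean) / spread > z:
            return True
    return False
```

Real systems weigh many more signals, and, as discussed later in this paper, any such flag should trigger human review rather than an automatic accusation.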
Another effective strategy is to design assessments that require a high level of originality and creativity from the student, for which AI tools are less effective. For example, assignments of a personal, reflective nature, or those requiring original research questions or specific local contexts, make it harder for students to fall back on AI-generated content alone. This strategy not only minimizes the chances of AI misuse but also fosters deeper learning and critical thinking skills.

Some educators have adopted oral examinations in which students present and defend ideas, assignments, and research projects. These face-to-face or virtual exchanges allow the instructor to engage directly with the student and gauge the depth of their understanding of the course material. Because oral exams involve real-time response and justification, it is almost impossible for students to rely on AI tools.

In contexts where group work is fostered, students often work in teams on elaborate projects, which already raises noticeable obstacles to AI-generated content fitting smoothly into the final product. Group-based assignments by their very nature require communication, coordination, and collaboration among team members, aspects that AI cannot imitate. Moreover, peer review mechanisms make students evaluate each other's work, increasing the chances that inconsistencies or potential misuse of AI tools will be identified.

Empirical evidence supports the importance of adaptive and reflective evaluation in reducing AI-related plagiarism. Moorhouse et al. (2023) highlight successful pilot programs at highly regarded colleges that incorporate reflective and personalized tasks; these programs limit AI misuse by requiring individualized responses tailored to each student. Dempere et al. (2023) provide evidence in favor of combined technology-based and ethics-based interventions, showing that ethical AI use campaigns together with AI-recognition technologies greatly improve compliance with academic integrity. Taken together, these studies show that integrating educational awareness campaigns with adaptive assessment provides a strong foundation for preventing AI-enabled plagiarism.

To address the challenges of using generative AI in education, educators can employ a number of strategies to prevent ChatGPT plagiarism. Cotton et al. (2024) describe the dual nature of ChatGPT in academia, noting both the problems it poses for scientific integrity and its prospects for increased engagement. They call for proactive institutional measures such as integrating AI-recognition technologies, educating students on the ethical use of AI, and creating explicit policies on the use of AI tools. By implementing these tactics, universities can protect academic integrity and encourage ethical use of AI.

Zeb et al. (2024) likewise highlight the dual nature of ChatGPT in higher education, pointing to both its potential benefits for student engagement and its risks to academic integrity. They recommend that institutions implement clear policies, create assessment tasks that require critical thinking, and provide training to guide ethical AI use. By integrating these measures, educators can harness the benefits of AI tools like ChatGPT while minimizing the risks of misuse.

Taking these observations and suggestions into account, strategies for preventing plagiarism include the following technological, pedagogical, and policy measures.

Technological Solutions

• Various plagiarism detectors can identify copied content: when text in a student submission matches existing sources, a possible case of plagiarism is flagged (a minimal sketch of this matching step follows this list). Educators can also invest in more advanced technologies that detect AI-generated content through language patterns and stylistic anomalies.
• Use learning analytics to track learner progress and detect unexplained patterns in performance, such as a sudden, unexplained improvement or a change in writing style, which is often a sign of AI-enabled plagiarism.
• Use adaptive testing methods in which questions are modified or reformulated based on previous student responses. Such dynamic approaches make it very difficult for AI tools to generate or predict correct answers.
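The matching step mentioned in the first bullet above can be sketched, under simplifying assumptions, as word n-gram overlap against a corpus of known sources. The function names, the n-gram length, and the flagging threshold below are illustrative; production detectors rely on large indexed corpora and fuzzier matching.

```python
# Minimal sketch of the matching step behind a conventional plagiarism
# detector: score a submission by its word n-gram overlap with known sources.

def ngrams(text: str, n: int = 5) -> set:
    """All consecutive word n-grams in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, source: str, n: int = 5) -> float:
    """Fraction of the submission's n-grams that also occur in the source."""
    sub = ngrams(submission, n)
    return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

def flag_matches(submission: str, corpus: dict, threshold: float = 0.15) -> list:
    """Names of corpus documents whose overlap exceeds the threshold."""
    return [name for name, text in corpus.items()
            if overlap_score(submission, text) >= threshold]
```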
Pedagogical and Policy Measures

• Educating students about plagiarism is one of the most effective countermeasures. Students need to understand what plagiarism is and the damage it does to learning and to the academic integrity of educational institutions. This can be achieved through teaching materials, classroom discussions, and clear communication of the consequences of plagiarism.
• Include reflective writing exercises in which learners discuss their learning process, the challenges they encountered, and the insights they gained. This helps teachers assess the credibility of students' work and understand their thinking processes.
• Incorporate peer assessment, in which students evaluate each other's work. This both raises the quality of submitted work and helps uncover inconsistencies and possible plagiarism.
• Encourage projects in which students produce individual, creative outputs, such as multimedia presentations, podcasts, or other digital formats that AI is unlikely to replicate.
• Design assessments that link to personal experiences, local contexts, or specific curricula. Generic AI tools are less effective on such personalized tasks.
• In addition to written essays, encourage students to communicate what they have learned through a variety of media, such as slide shows, audio recordings, films, and portfolios. AI has difficulty replicating these alternative assessment methods, which also encourage learners to develop more versatile skills.
• Set clear guidelines for the use of AI tools such as ChatGPT. Students need to know how and in what contexts such tools may be used, including proper citation and attribution of AI-generated text.
• Require students to submit an outline of their work; this can help instructors identify potential AI-generated content early in the process and allows timely feedback and guidance, reducing the likelihood of students resorting to plagiarism.
• Regularly check student submissions and work, through thorough reading of assignments, oral presentations that check understanding, and detection tools that flag suspicious content.
• Break large tasks into smaller tasks structured around key points, with appropriate deadlines. This ensures that students build up their work gradually, making it more difficult to complete a whole project with AI.
• Oral examinations can be a reliable test of originality: students must justify their arguments and defend their work in spoken answers, which makes it nearly impossible to pass off AI-generated content in this assessment scenario.

To further minimize the risk of AI-assisted plagiarism, educators can design assessments that are less prone to misuse. Some extended ways to minimize AI misuse follow.

Critical Thinking and Problem-Solving Tasks

• Tasks that require deep critical thinking or problem solving are unlikely to be performed satisfactorily by AI. These may include group discussions, project presentations, and interactive activities that require individuals to apply their own knowledge and skills.
• Design open-ended tasks that encourage originality and creativity, creating conditions in which AI tools are less useful. For example, having students formulate their own research questions or arguments fosters independent thinking.
• Refine tasks to focus on areas where AI tools fall short, such as in-depth critical analysis and personalized responses.
• Emphasize practical applications: create assessments in which students apply theoretical knowledge to practical, real-world problems. Case studies, simulations, and project-based learning activities are contexts in which AI's ability to generate relevant content is limited.
• Design assessments that replicate real-life tasks and situations in authentic contexts, such as service-learning projects, internships, or community-based research. Such tasks require personal engagement and cannot easily be outsourced to AI.
• Develop role-playing exercises and simulations in which students take on designated roles or characters. These build creativity and critical thinking, elements that are difficult for AI to simulate.
• Create personalized tasks for each student or cohort that include dynamic elements such as current events, specific local problems, or personal reflections. Individualizing tasks minimizes the applicability of generic AI responses.
• Provide more personalized feedback and require follow-up actions based on that feedback, fostering deeper engagement with the material and reducing reliance on AI.
• Use portfolio-based assessment, in which students collect work produced over time. Portfolios show progress and improvement in learning, which is challenging for AI to simulate.

Collaborative and Peer-Based Learning

• Group projects, in which learners must work together to create a final product, ensure authentic input, because collaboration requires communication and coordination that AI cannot replicate.
• Peer-assisted learning activities, in which learners tutor or mentor their classmates, reinforce knowledge and require explanation and justification that AI cannot provide.
• Real-time or proctored exams prevent students from using AI during assessments. This approach greatly reduces plagiarism and ensures that the work represents each student's own abilities.
• Conduct timed assessments, such as in-class essays or timed online tests, to limit students' use of AI tools. This format emphasizes students' ability to think and respond quickly based on their own knowledge.
• Use mixed forms of assessment: written work, presentations, and practical demonstrations. Multimodal assessments require diverse skills, making it difficult for AI alone to handle all elements.
• Interactive and adaptive learning systems, which vary the difficulty and nature of questions based on student performance, provide a level of personalization that challenges AI (a minimal sketch of such a loop follows this list).
• Use frequent, low-stakes assessments to monitor students' progress on an ongoing basis. This allows early detection of irregularities and reduces the likelihood of last-minute reliance on AI.
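The adaptive mechanisms mentioned above can be illustrated, under simplifying assumptions, with a minimal "staircase" loop; the function names and parameters below are hypothetical, and operational systems use richer models such as item response theory.

```python
# Hedged sketch of adaptive questioning: difficulty rises after a correct
# answer and falls after an incorrect one, so each student sees a different,
# response-dependent sequence of questions.
import random

def run_adaptive_quiz(question_bank: dict, ask, num_questions: int = 10) -> int:
    """question_bank maps difficulty levels (1..max) to lists of questions;
    ask(question) must return True if the student answers correctly."""
    level, score = 1, 0
    max_level = max(question_bank)
    for _ in range(num_questions):
        question = random.choice(question_bank[level])
        if ask(question):
            score += 1
            level = min(level + 1, max_level)  # harder after a correct answer
        else:
            level = max(level - 1, 1)          # easier after a mistake
    return score
```

Because the sequence depends on each student's own answer history, a generic AI tool cannot precompute a path through the question bank.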
Although the tactics described above offer a sound method for curbing AI-assisted plagiarism, their application may present a number of ethical and practical difficulties.

Some universities, especially those with limited resources, may find the high costs of sophisticated plagiarism detectors and learning-analytics platforms prohibitive. Furthermore, the effectiveness of these technologies depends on frequent updates to keep pace with rapidly evolving AI capabilities, further increasing operational costs.

Many technological solutions, including learning analytics and adaptive testing, require the collection of large amounts of student data. This raises questions about data security and privacy, especially when sensitive data is involved. Because schools and other organizations must comply with data protection laws, the scope of information that can be collected and examined may be limited.

AI-based detection methods can mistake authentic student work for AI-generated content, especially when students use certain language patterns or have a distinctive writing style. Such false positives undermine student confidence and require manual investigation by teachers, a time- and resource-intensive process.

Teachers must devote considerable time and energy to pedagogical and policy-based measures, such as teaching about plagiarism, conducting oral exams, and dividing large tasks into smaller ones. It can be difficult for institutions to give teachers the tools and support they need to integrate these changes successfully into their daily routines.

Heavy reliance on detection technology can also divert attention from raising students' ethical awareness. While resources such as plagiarism detectors are helpful, a thorough grounding in academic integrity through education remains key to developing lasting ethical behavior.

Because AI is constantly changing, strategies must be continually revised and reviewed. Institutions must adjust their approaches as generative AI technology evolves, which may necessitate regulatory changes as well as ongoing teacher training. This continual adaptation may further burden administrative and faculty resources.

The strategies discussed in this paper coincide with a number of approaches that educators around the world have already begun to adopt. The next section points out the similarities between these methods and makes recommendations based on their relative success.
Education is perhaps the most effective form of plagiarism prevention; nothing works better than awareness of the tools and of the consequences of their misuse. Institutions that genuinely commit to raising students' awareness of the ethical use of AI tools and the consequences of plagiarism tend to see better compliance. To build a culture of integrity, students must be taught how dishonest practices will affect their learning and future careers. For example, some universities have introduced workshops or online modules that teach the ethical use of AI tools, with reminders about originality and proper attribution.

Adaptive, regularly updated assessments of performance, such as real-time quizzes or personalized work, are important deterrents to the growing misuse of AI. Adaptive tests adjust questions based on previous responses, making it difficult for AI models to supply the correct answers. Continuous assessment approaches, including ongoing low-stakes assignments, help track students' progress and highlight discrepancies indicative of AI misuse. As these approaches are put into practice, educators are better positioned to follow students' learning through its iterations and become less vulnerable to last-minute AI-generated submissions.

Multimodal assessments are becoming a preferred defence against AI-assisted academic dishonesty: written work, oral presentations, and practical demonstrations together require students to demonstrate a wider range of skills. Moreover, portfolio-based assessments, in which students collect and present a body of work over a semester, offer a more panoramic view of a student's development and make changes in quality or style easier to spot.

Many institutions have already adopted, or are trialling, AI-detection software. Early experience suggests these tools can often flag AI-generated content, although their accuracy must continually improve to protect academic integrity as AI systems advance. The ethical and successful integration of AI into education depends on addressing these long-term impacts. Researchers, practitioners, and policy makers need to keep exploring the ever-changing face of ChatGPT and other generative AI technologies. This paper moves in that direction by providing strategies for integrating AI tools into the university environment.

    Keywords: ChatGPT, generative AI, plagiarism, ethical AI use, assessments, adaptive testing, creative assignments, personalized tasks

    Received: 16 Jul 2024; Accepted: 20 Nov 2024.

    Copyright: © 2024 Kovari. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    * Correspondence: Attila Kovari, Institute of Computer Science, University of Dunaújváros, Dunaújváros, Hungary

    Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.