
BRIEF RESEARCH REPORT article

Front. Educ., 25 March 2025

Sec. Leadership in Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1417642

This article is part of the Research Topic Continuing Engineering Education for a Sustainable Future.

Who is solving the challenge? The use of ChatGPT in mathematics and biology courses using challenge-based learning

  • 1Directorate of Educational Innovation and Digital Learning, Vice Presidency of Educational Innovation and Academic Regulations, Tecnológico de Monterrey, Monterrey, Mexico
  • 2School of Engineering and Sciences, Tecnológico de Monterrey, Monterrey, Mexico
  • 3Institute for the Future of Education, Tecnológico de Monterrey, Monterrey, Mexico

The advent of artificial intelligence has revolutionized how students can solve academic assignments. In particular, the Chat Generative Pre-trained Transformer, ChatGPT, has become a powerful tool for generating quick solutions to academic assignments in higher education. However, we are still at the beginning of its use and do not yet know the scope or consequences it will have for the development of both disciplinary and transversal graduation competencies. Here, we report a pilot study in two digital subjects in higher education in which activities were solved using ChatGPT. Students completed these assignments individually and then verified the quality of their work against traditional sources of high academic quality. When surveyed about the experience, some students declared that this was their first time using ChatGPT, while others had already used the tool. The tool offers students many advantages, such as immediacy of information, ease of use, and availability. However, many concerns arose about the veracity and depth of the topics covered, as well as unease about whether the tool would supplant the teacher or whether the development of skills and competencies would be affected. The need for urgent modifications to academic integrity codes and for new ethics governing the use of AI is clear. Our results indicate that teachers should be prepared to use AI effectively and that detectors of AI-generated text should be available when evaluating work produced with this powerful tool.

1 Introduction

One of the significant challenges facing higher education today is that teachers must teach a generation of students who are digital natives, which puts intense pressure on teachers to stay up to date with new technologies and teaching strategies. Technological advances have also made education evolve at a dizzying speed. Keeping teaching practice current is a constant challenge; at the end of 2022, however, in the aftermath of COVID-19, the advent of artificial intelligence (AI) attracted great attention and discussion about its usefulness and risks. Educational models such as Project-based, Practice-based, Challenge-based, and Problem-based learning (Membrillo-Hernández et al., 2021) are being adapted to a new reality (Akgun and Greenhow, 2022). Among notable advances in AI, the Chat Generative Pre-trained Transformer, ChatGPT, has emerged as a prominent development. Developed by OpenAI, ChatGPT is an advanced natural language processing (NLP) model trained on a massive amount of data, including billions of web pages and documents, making it capable of generating human-like text responses to prompts (Stojanov, 2023). Since its launch, it has quickly become one of the fastest-growing consumer applications in history, with an estimated 100 million monthly active users. ChatGPT is a language model that uses deep learning techniques to generate text responses that resemble human language.

Integrating ChatGPT into education can revolutionize traditional teaching approaches by providing students with personalized and interactive learning experiences (Crawford et al., 2023). There are known implications for teaching, learning, academic research, epistemology, the digital transformation of educational institutions, and even ethics (García-Martínez et al., 2023; García-Peñalvo, 2023; Stokel-Walker, 2022). Several investigations have shown that incorporating GPT in education could provide personalized feedback and interactive learning experiences (García-Martínez et al., 2023). When applied to STEM topics, it can help students understand complex concepts and theories by offering real-time explanations and illustrative examples. However, there is a paucity of research on the effects of ChatGPT on academic performance (Anderson et al., 2023). Along these lines, UNESCO differentiates three dimensions of the link between AI and education: (i) learning to use AI tools in the classroom, (ii) learning to understand AI and its technical possibilities, and (iii) raising public awareness about the impact of AI on people’s lives. The impact of ChatGPT on education is still poorly understood; proof of this is that some universities, such as those in Hong Kong and some in France and Italy, have prohibited its use or, in some cases, established severe sanctions. Other universities are updating their academic integrity policies and adapting their exams to prevent the misuse of ChatGPT by students (Tlili et al., 2023).

The main objective of this research is to explore the impact of ChatGPT on the academic performance of higher education students in digital distance learning courses, specifically in Mathematics and Biology, in an educational environment governed by challenge-based learning. In this first study, we focus on the effect of adopting ChatGPT. The study is part of a pilot program to test different applications of ChatGPT and evaluate its suitability in higher education courses in specific STEM disciplines. The results are part of a broader study that evaluated how students and teachers use AI. More specifically, by using the Tec21 educational model, a challenge-based learning model (Membrillo-Hernández et al., 2021), we were able to examine whether AI could improve this type of teaching. Assessment methods, exercise design, and student feedback were collected to comprehensively analyze the possibilities of using an AI chatbot like ChatGPT.

1.1 Literature review

1.1.1 The role of ChatGPT in higher education

ChatGPT has emerged as a transformative tool in higher education, enabling enhanced learning experiences for students and educators. Studies highlight its ability to provide instant feedback, clarify doubts, and assist in personalized learning, contributing to student success and reducing cognitive overload (Kasneci et al., 2023; Wang et al., 2024). AI-driven tools like ChatGPT are particularly valuable in facilitating accessibility, enabling students from diverse backgrounds to bridge learning gaps. For instance, ChatGPT can simplify complex topics, draft essay outlines, and support collaborative learning (Dwivedi et al., 2021). However, researchers also emphasize the need for human oversight to ensure its effective integration into curricula, as over-reliance on AI may hinder critical thinking and creativity.

1.1.2 Ethical challenges in the use of AI tools

The integration of AI tools into education raises critical ethical concerns. One primary issue is the potential misuse of tools like ChatGPT for plagiarism and academic dishonesty, undermining the principles of academic integrity (Lee et al., 2024). Moreover, AI algorithms are prone to biases that may influence the content generated, inadvertently reinforcing stereotypes or delivering inaccurate information (Birhane, 2021). To address these concerns, universities must adopt ethical guidelines and promote responsible use of AI among students and faculty (Dwivedi et al., 2021).

1.1.3 Balancing innovation with ethical responsibility

The challenge lies in balancing the transformative potential of ChatGPT with ethical responsibilities. Researchers suggest implementing AI literacy programs to help students critically evaluate AI outputs and use them as supplements rather than replacements for original thought (Kasneci et al., 2023). Furthermore, ethical AI governance, including transparency in AI design and the integration of fairness principles, is crucial for fostering trust and ensuring equity in education (Farooqi et al., 2024). Educators, developers, and policymakers must collaborate to ensure that AI tools like ChatGPT promote inclusivity and integrity.

1.1.4 The future of AI ethics in education

The growing influence of AI in higher education necessitates a proactive approach to ethics. Institutions must develop comprehensive frameworks prioritizing accountability, privacy, and inclusivity in AI-driven learning environments (Soori et al., 2023). Future research should explore the long-term implications of AI on student learning outcomes and cognitive development. By aligning ethical practices with innovative uses of tools like ChatGPT, higher education can unlock the full potential of AI while upholding its commitment to equitable and responsible learning.

This literature review provides a holistic view of ChatGPT’s role in higher education, focusing on its benefits and the ethical considerations for responsible implementation. In this report, we assess students’ perceptions of adopting ChatGPT in learning activities in Digital Education courses, considering three dimensions: acceptance of the tool, instructional design of the activity, and development of critical thinking. In addition, we aim to identify relevant findings to guide future AI-based pedagogical implementations.

2 Methodology

Throughout the February–June 2023 semester, Tecnologico de Monterrey in Mexico ran a pilot program to evaluate the integration of ChatGPT within digital higher education courses. This pilot program was conducted in “Fundamentals of Biological Systems” and “Mathematics and Data Science for Decision Making” from the School of Engineering and Sciences of the Tecnologico de Monterrey. These courses share several distinctive attributes: both are university-level, national in scope, delivered online, and enroll a significant number of students. These courses are accessible to students of various majors and are included in the general education category in the academic curriculum. The purpose of general education courses is to provide fundamental knowledge and basic methodologies in specific areas of knowledge, offer a broad perspective, and improve students’ cognitive skills.

The “Fundamentals of Biological Systems” course was made up of 94 students from six different campuses (Tecnologico de Monterrey has 26 campuses spread throughout Mexico): State of Mexico, Guadalajara, Monterrey, Puebla, Querétaro and Toluca. In this course, ChatGPT was used within an individual activity titled “Feed your microbiota: Exploring the impact of your favorite foods on gut health.” The main objective of this activity was to investigate the influence of personal food preferences on the intestinal microbiota and to develop dietary adjustments to improve health and well-being. The students used ChatGPT as a starting point for their research, providing nutritional information and ingredients for commonly consumed products. By consulting ChatGPT, the students gained quick, example-based insight into how these products could affect their gut microbiota. After this, students were tasked with validating the information derived from ChatGPT by cross-checking it with at least two academic, reliable, verifiable, and current sources.

On the other hand, the course “Mathematics and Data Science for Decision Making” was made up of 392 students from various campuses throughout Mexico, including Monterrey, Querétaro, Guadalajara, Saltillo, Tampico, Toluca, Mexico City, León, Chihuahua, San Luis Potosí, Aguascalientes, Hermosillo, Morelia, Laguna, Hidalgo, Puebla, Santa Fe, Chiapas, Irapuato, and Cuernavaca. ChatGPT was integrated into the activity titled “Machine Learning Research with ChatGPT.” The main objective of this activity was to investigate practical applications of the scikit-learn library in Python for machine learning. The focus was cultivating a comprehensive understanding of standard algorithms and data science methodologies across various academic disciplines. In the context of this course, ChatGPT acted as a virtual research assistant to help the student with topics that may require more attention, especially those that can be addressed through data science, emphasizing scikit-learn algorithms. In addition, it allowed for further exploration of these algorithms and helped to obtain relevant Python code examples tailored to each student’s area of interest. Throughout the activity, students were provided with sample prompts designed to guide their research process.
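For illustration, the following minimal sketch shows the kind of scikit-learn example a student might have obtained from ChatGPT during this activity; the dataset, algorithm, and parameters are illustrative assumptions, not the exact code produced in the course.

```python
# Illustrative sketch only: the kind of scikit-learn example students asked
# ChatGPT to produce. The dataset and algorithm are assumptions, not the
# exact code used in the course activity.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load a small built-in dataset and split it into training and test sets
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Train a standard algorithm covered in the activity and report its accuracy
model = DecisionTreeClassifier(max_depth=3, random_state=42)
model.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```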

2.1 Data collection

In both courses, upon completion of the ChatGPT-assisted activity, a concluding survey was conducted. The primary purpose of this survey was to assess students’ overall perception of the learning activity supported by ChatGPT and to gauge their acceptance of this tool within the educational process. The survey included fifteen items: thirteen items evaluated on a Likert scale spanning from 1 (strongly disagree) to 10 (strongly agree) and two open questions (see Table 1). The Likert-scale items were inspired by the Unified Theory of Acceptance and Use of Technology (UTAUT2) model (Joshi et al., 2015) and aligned with critical thinking, instructional design, and technology acceptance and usage. Furthermore, the survey included open-ended questions about the ongoing enhancement of the learning experience and an inquiry about awareness of the institutional stance on academic integrity concerning the use of ChatGPT. Although some survey questions may seem repetitive, they were filtered and reviewed with experts in psychology and with pedagogical architects, who recommended keeping them because they build sequentially in analyzing students’ perceptions of using ChatGPT.


Table 1. Dimensions and corresponding survey items (Likert scale: 1–10) used to assess students’ perceptions of ChatGPT integration in learning activities.

2.2 Data analysis

2.2.1 Likert-scale items

The responses to the Likert-scale items from the concluding survey were analyzed as continuous variables for comparison purposes. For each item, means and standard deviations (SD) were calculated. To facilitate the interpretation of the results, the Likert-scale questions were grouped into three dimensions: Acceptance of the AI Tool, Instructional Design of the Learning Activity, and Critical Thinking (as shown in Table 1). The averages for each dimension were calculated by averaging the scores of the items corresponding to that dimension.

To compare the responses between the two courses, independent t-tests were conducted at a significance level of 0.05 to determine statistical differences. All analyses, including the calculation of means, SD, t-tests, and the creation of bar graphs, were performed in Microsoft Excel.
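For reference, the same comparison can be expressed outside Excel; the following minimal sketch assumes the per-student dimension averages are available as two lists (the scores shown are placeholders, not the study data) and applies an independent t-test at the 0.05 level with SciPy.

```python
# Minimal sketch of the independent t-test used to compare the two courses.
# The analyses reported in this study were run in Microsoft Excel; this is an
# equivalent formulation in Python. The score lists below are placeholders.
from scipy import stats

biology_scores = [9.1, 8.7, 9.5, 7.8, 9.0]      # per-student dimension averages (placeholder)
mathematics_scores = [9.6, 9.8, 9.4, 9.7, 9.5]  # per-student dimension averages (placeholder)

t_stat, p_value = stats.ttest_ind(biology_scores, mathematics_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value <= 0.05:
    print("Significant difference between courses at the 0.05 level")
```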

2.2.2 Open-ended questions

To gain insight into students’ perceptions of the ChatGPT-assisted activities, the open-ended responses to the survey question “What I liked most about the ChatGPT activity” were analyzed using two complementary approaches: sentiment polarity and topic modeling. These methods offered a comprehensive understanding of the feedback’s emotional tone and thematic structure.

The emotional tone of each response was evaluated through sentiment polarity analysis using Python’s TextBlob library. This method assigned a numerical value ranging from −1 (indicating negative sentiment) to 1 (indicating positive sentiment) to each response, with scores of 0 classified as neutral. This analysis provided an overview of students’ emotional reactions to the activity by categorizing responses into positive, neutral, or negative sentiments.
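The following minimal sketch illustrates this classification step with TextBlob; the example responses and the exact handling of the Spanish originals are assumptions made for illustration.

```python
# Minimal sketch of the sentiment-polarity step with TextBlob.
# The thresholds and example responses are illustrative; how the Spanish
# originals were handled (e.g., prior translation) is an assumption.
from textblob import TextBlob

responses = [
    "The tool made finding information very easy",
    "I am not sure the answers are reliable",
]

for text in responses:
    polarity = TextBlob(text).sentiment.polarity  # value in [-1, 1]
    if polarity > 0:
        label = "positive"
    elif polarity < 0:
        label = "negative"
    else:
        label = "neutral"
    print(f"{polarity:+.2f}  {label}  {text}")
```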

Topic-modeling techniques were applied to the text responses to uncover recurring themes. The responses, originally written in Spanish, were preprocessed to ensure consistency and accuracy in the analysis. This process included splitting the text into individual words, removing stop words, and applying lemmatization to standardize terms, as sketched below.
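A minimal sketch of this preprocessing step follows; it uses spaCy, as noted in the next paragraph, and the specific Spanish model is an assumption.

```python
# Minimal sketch of the preprocessing applied to the Spanish responses:
# tokenization, stop-word removal, and lemmatization with spaCy.
# The specific model is an assumption; install it first with
#   python -m spacy download es_core_news_sm
import spacy

nlp = spacy.load("es_core_news_sm")

def preprocess(text: str) -> list[str]:
    doc = nlp(text.lower())
    return [
        token.lemma_
        for token in doc
        if token.is_alpha and not token.is_stop  # drop punctuation, numbers, stop words
    ]

print(preprocess("Me gustó la rapidez y la facilidad de la herramienta"))
```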

Two topic-modeling methods were used: Latent Dirichlet Allocation (LDA) and Latent Semantic Analysis (LSA). LDA, a probabilistic technique for identifying topics in a collection of documents, was implemented following the approach described by Blei et al. (2003). Based on Singular Value Decomposition (SVD), LSA was applied to extract latent semantic structures in the text, as described by Deerwester et al. (1990). Preprocessing steps, including lemmatization of the Spanish text, were carried out using spaCy (http://spacy.io).
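The following minimal sketch illustrates both steps on already-preprocessed responses; the implementation library (scikit-learn), the number of components, and the placeholder responses are assumptions, since only the methods themselves (LDA and LSA/SVD) are specified here.

```python
# Minimal sketch of the two topic-modeling steps on the preprocessed responses.
# The library (scikit-learn) and the placeholder documents are assumptions.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation, TruncatedSVD

# Already-preprocessed (lemmatized, stop-word-free) responses; placeholders here
docs = [
    "informacion rapida facil herramienta",
    "aprendizaje dinamico tecnologia nueva",
    "respuesta rapida investigacion especifica",
    "herramienta apoyo inteligencia artificial",
    "facilidad velocidad uso chatgpt",
    "informacion concisa clara ejemplo",
]

# LDA on raw term counts (probabilistic topic model; Blei et al., 2003)
count_vec = CountVectorizer()
counts = count_vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(counts)
terms = count_vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-3:]]  # three most weighted terms
    print(f"LDA topic {i + 1}: {', '.join(top)}")

# LSA via truncated SVD on TF-IDF weights (Deerwester et al., 1990)
tfidf = TfidfVectorizer().fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0).fit(tfidf)
print("LSA explained variance:", lsa.explained_variance_ratio_)
```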

Additionally, a word cloud was generated to visually represent the most frequently used words in students’ responses. To ensure accuracy, the text was tokenized and cleaned by removing common words (such as articles and pronouns) using the nltk library in Python. This preprocessing allowed the analysis to focus on nouns, verbs, and descriptive words that captured students’ experiences rather than grammatical elements. The word cloud provided a quick and intuitive visualization of the key themes emerging from the open-ended responses, complementing the structured insights obtained from topic modeling.
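A minimal sketch of this step is shown below; nltk provides the Spanish stop words as described, while the wordcloud package, the placeholder text, and the output filename are assumptions.

```python
# Minimal sketch of the word-cloud step. nltk supplies the Spanish stop words,
# as in the analysis; the `wordcloud` package, placeholder text, and output
# filename are assumptions.
import nltk
from nltk.corpus import stopwords
from wordcloud import WordCloud

nltk.download("stopwords", quiet=True)
spanish_stopwords = set(stopwords.words("spanish"))

# Placeholder for the concatenated open-ended responses
all_responses = "informacion sencillez nueva herramienta tecnologia informacion rapida"

cloud = WordCloud(stopwords=spanish_stopwords, background_color="white",
                  width=800, height=400).generate(all_responses)
cloud.to_file("word_cloud.png")
```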

These analyses provided structured insights into students’ perceptions. Sentiment polarity highlighted the overall emotional tone of the answers, while topic modeling and the word cloud revealed key themes, offering a comprehensive understanding of the qualitative data.

3 Results

This study involved two exploratory courses from the School of Engineering and Sciences: “Fundamentals of Biological Systems” and “Mathematics and Data Science for Decision Making”. Both courses integrated ChatGPT into their learning activities, which required students to use the tool and validate the information provided by ChatGPT using formal academic sources, such as peer-reviewed articles, books, and verified websites. After completing the activities, students evaluated their experiences through a survey, with items grouped into three dimensions: Acceptance of the AI Tool (Q1–Q5), Instructional Design of the Learning Activity (Q6–Q7), and Critical Thinking (Q8–Q13) (see Table 1).

3.1 Comparative results between courses

The survey responses revealed significant differences between the two courses, particularly in the Acceptance of the AI Tool dimension. As shown in Figure 1, students in the “Mathematics and Data Science for Decision Making” course rated this dimension higher on average than those in the “Fundamentals of Biological Systems” course. This suggests that students in the mathematics course found ChatGPT more user-friendly and effective for their tasks. However, the data also show greater variation in the “Fundamentals of Biological Systems” course, as evidenced by larger standard deviations in the survey responses for this dimension (see Table 2).


Figure 1. Average Likert scores (0–10) for the dimensions of Acceptance of the AI Tool, Instructional Design of the Learning Activity, and Critical Thinking across both courses. The asterisk indicates a significant difference between courses as determined by a t-test (p ≤ 0.05). Error bars represent standard deviations (SD).


Table 2. Mean scores and standard deviations of survey items evaluating students' perceptions of ChatGPT integration in learning activities (Likert scale: 1–10).

The greater variation in the “Fundamentals of Biological Systems” course could be attributed to its smaller sample size compared to the “Mathematics and Data Science for Decision Making” course. The smaller group size may amplify individual differences in students’ perceptions, leading to higher response variability.

While both courses showed similar scores in the Instructional Design of the Learning Activity and Critical Thinking dimensions, it is important to highlight the consistently lower scores for items Q11, Q12, and Q13 related to critical evaluation and validation of information provided by ChatGPT.

3.2 Survey results by item

The survey items were analyzed to identify specific trends within each dimension. For Acceptance of the AI Tool (Q1–Q5), students in both courses rated the tool highly for its usability and efficiency. For example, Q1, which evaluated whether ChatGPT allowed students to complete activities more quickly, received some of the highest scores across both courses (9.36 ± 2.01 in “Fundamentals of Biological Systems” and 9.76 ± 0.76 in “Mathematics and Data Science for Decision Making”). The high scores in this dimension suggest that students perceived ChatGPT as a helpful and user-friendly tool for completing tasks.

The Instructional Design of the Learning Activity (Q6–Q7) dimension also received positive feedback. Q6, which assessed whether the professor’s instructions were clear, was rated highly in both courses (9.59 ± 0.88 and 9.26 ± 1.44), indicating that the activities were well-structured and communicated. Students’ engagement with the activities was also reflected in their responses to Q7, which asked whether ChatGPT helped them stay focused.

In contrast, the Critical Thinking (Q8–Q13) dimension revealed areas where students faced challenges. Items Q11, Q12, and Q13, which specifically addressed students’ ability to evaluate and validate information provided by ChatGPT, received the lowest scores across both courses. For instance, Q12, which asked whether the activity encouraged students to look for information from other sources, scored 8.62 ± 2.20 in “Fundamentals of Biological Systems” and 8.43 ± 2.31 in “Mathematics and Data Science for Decision Making”. These scores are notable, given that students were explicitly instructed to validate ChatGPT’s outputs using formal sources. Similarly, Q11 and Q13, which evaluated the use of independent judgment and skepticism regarding ChatGPT’s outputs, also scored lower. These results suggest that while students found ChatGPT helpful, its integration into the activities did not strongly foster critical evaluation or validation skills.

3.3 Perception of students on the use of ChatGPT

To explore students’ perceptions and feelings regarding the integration of ChatGPT in their academic activities, we employed a data-driven approach to analyzing responses to open-ended survey questions. Students were asked, “What did you like most about the activity that integrates ChatGPT?” The collected responses were analyzed using Python with natural language processing libraries, as detailed in the methodology.

Initially, sentiment polarity was calculated for each response, ranging from −1 (negative) to 1 (positive). Neutral responses scored at 0. The sentiment distribution revealed that 90% of the responses were neutral, while only 2% were positive and 8% negative (Figure 2A). This high percentage of neutral responses indicates that many students might still lack familiarity with ChatGPT’s full capabilities or harbor uncertainties about its potential. Additionally, a word cloud was generated to visually represent the most frequently used words in students’ responses. As described in the methodology, the text was preprocessed using the nltk library to remove common words and highlight key terms. As shown in Figure 2B, the most frequently used words were information, simplicity, new, tool, and technology.


Figure 2. (A) Distribution of sentiment analysis results for responses to the open question, “What did I like most about the activity with ChatGPT?” The analysis includes responses from the courses “Fundamentals of Biological Systems” and “Mathematics and Data Science for Decision Making.” (B) Word cloud analysis of the responses to the same open question.

To gain deeper insights and address the limitations of sentiment analysis, we employed advanced natural language processing techniques, including Latent Dirichlet Allocation (LDA) and Latent Semantic Analysis (LSA), to perform topic modeling on the open-ended responses. This approach enabled us to uncover underlying themes in the feedback and better understand the nuances of student perceptions.

The LDA topic modeling revealed five key themes. These themes included: (1) Use of tools for information retrieval, emphasizing simplicity and efficiency; (2) Rapid and dynamic learning with technology, showcasing students’ appreciation for ChatGPT’s innovative capabilities; (3) Ease and speed of ChatGPT usage, highlighting its accessibility; (4) Research and specific responses, reflecting its role in supporting precise academic inquiries; and (5) Technology and artificial intelligence as support tools, underlining the broader relevance of AI in learning contexts.

Complementing these findings, LSA grouped responses into related categories, further emphasizing ChatGPT’s practical utility and role in facilitating innovative and efficient learning experiences. Responses described ChatGPT as a tool that “simplifies research,” “saves time,” and “provides clear and concise explanations.” However, some students expressed concerns about the reliability of ChatGPT’s outputs and emphasized the importance of validating its responses with credible sources.

Transitioning from sentiment polarity to topic modeling underscores the importance of employing advanced techniques to interpret complex qualitative data. While the sentiment analysis highlighted the prevalence of neutral opinions, the topic modeling illuminated how students viewed ChatGPT as both a facilitator of efficient learning and a tool requiring responsible use and validation. These findings offer a nuanced perspective on student perceptions, bridging initial neutrality with deeper thematic insights.

4 Discussion

ChatGPT and similar AI tools have rapidly gained significance in higher education, reshaping how students and educators interact with information and learn. These tools offer instant access to vast knowledge repositories, enabling students to explore topics in-depth, generate ideas, and receive personalized assistance. For educators, AI provides innovative ways to design interactive learning experiences and streamline administrative tasks, such as grading or creating lesson plans. The availability of such technology enhances accessibility, allowing students from diverse backgrounds to learn effectively at their own pace.

However, with great power comes great responsibility, and the ethical implications of using AI tools in education cannot be overlooked. One of the primary concerns is ensuring academic integrity. Tools like ChatGPT can inadvertently facilitate plagiarism or undermine critical thinking when misused. Educational institutions must prioritize teaching students how to use AI responsibly—encouraging them to view it as a supplement to their efforts rather than a replacement. Establishing guidelines for ethical AI use in academia can help maintain the quality and credibility of education.

This study describes a pilot experiment to analyze students’ perceptions about the use of the artificial intelligence tool ChatGPT in two science classes at the Tecnologico de Monterrey. The students’ responses clearly show that we are still at the beginning of using this tool and remain ignorant of its potential. ChatGPT has been used in various academic activities (Stokel-Walker, 2022), but the impact on the future is still unknown.

One advantage of using ChatGPT is instant access to information in real time. In addition, ChatGPT can encourage personalized learning, as it can adapt to students’ individual needs, offering explanations and examples that fit their level of knowledge and learning style. It can also promote self-directed learning, motivating students to explore topics of interest at their own pace and level.

However, some student responses pointed out that ChatGPT may create excessive dependency and discourage consultation of documents of high academic quality. This would strongly impact the development of critical thinking, reasoning for complexity, and problem-solving skills. An interesting observation was that, in the future, ChatGPT could lead us to lose interaction with humans: ChatGPT could become the teacher, the teammate, and the one who answers questions of all kinds. Another student commented on the lack of precision in the data used by ChatGPT, which is often not up to date or requires a paid version to access updated content. Many other potential dangers were mentioned, such as response bias, the possible failure to secure student data, and the limited development of STEM graduation competencies. We are still at the beginning of the use of ChatGPT and do not yet know the consequences of its use, but we must take into account the new rules already imposed by academic authorities on the use of these tools. Several universities have even banned its use.

Another ethical aspect lies in the transparency and fairness of AI tools. Biases in AI models could reinforce stereotypes or propagate misinformation, leading to unintended consequences in learning environments. Institutions should advocate for using ethical AI systems built with inclusivity in mind. At the same time, educators must emphasize the importance of evaluating AI outputs critically to ensure the information aligns with reliable, factual sources.

ChatGPT in higher education can significantly benefit student learning if implemented carefully and thoughtfully. However, it is important to recognize and address the potential risks and limitations associated with its use, thereby ensuring a practical and ethical educational experience. Academic integrity codes at many universities, including our own, now include the responsible use of artificial intelligence. When students were questioned, to our surprise, a third of the population surveyed did not know about these modifications to the regulations.

There is still much to learn about using artificial intelligence tools, but at least from these first experiences in higher education, we can say that they can be useful when used responsibly and can help solve many academic problems across many areas of knowledge.

Ultimately, integrating AI tools like ChatGPT in higher education presents an incredible opportunity to enhance learning but must be guided by ethical principles. These include promoting responsible use, fostering critical engagement with AI outputs, and ensuring fairness and transparency in their deployment. By addressing these ethical concerns, higher education can harness the benefits of AI while upholding its commitment to nurturing informed, thoughtful, and socially responsible learners.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent from the participants or the participants’ legal guardians/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements.

Author contributions

ME-G: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Validation, Writing – original draft, Writing – review & editing. HH-D: Conceptualization, Data curation, Methodology, Software, Supervision, Validation, Writing – original draft. IB-G: Conceptualization, Data curation, Funding acquisition, Investigation, Methodology, Project administration, Writing – original draft. PC: Conceptualization, Methodology, Supervision, Validation, Writing – original draft, Writing – review & editing, Project administration. JM-H: Conceptualization, Data curation, Funding acquisition, Investigation, Methodology, Resources, Supervision, Validation, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This study was financially supported by the Writing Lab, Institute for the Future of Education, Tecnologico de Monterrey, Mexico.

Acknowledgments

The authors would like to acknowledge the financial support of the Writing Lab, Institute for the Future of Education, Tecnologico de Monterrey, Mexico, in producing this work. The authors also acknowledge the Educational Innovation and Digital Education area at Tecnologico de Monterrey, particularly Julian Urrutia for configuring the measuring instrument and processing quantitative data, Cynthia Enciso, Mariano Garay, and Olaf Román for their pedagogical guidance, and the academic coordinators of the Directorate of Digital Education for their support in managing this pilot study.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.


References

Akgun, S., and Greenhow, C. (2022). Artificial intelligence in education: addressing ethical challenges in K-12 settings. AI Ethics 2, 431–440. doi: 10.1007/s43681-021-00096-7


Anderson, N., Belavy, D. L., Perle, S. M., Hendricks, S., Hespanhol, L., Verhagen, E., et al. (2023). AI did not write this manuscript, or did it? Can we trick the AI text detector into generating texts? The potential future of ChatGPT and AI in sports & exercise medicine manuscript generation. BMJ Open Sport Exerc. Med. 9:e001568. doi: 10.1136/bmjsem-2023-001568


Birhane, A. (2021). Algorithmic injustice: a relational ethics approach. Patterns 2:100205. doi: 10.1016/j.patter.2021.100205


Blei, D. M., Ng, A. Y., and Jordan, M. I. (2003). Latent Dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022.


Crawford, J., Cowling, M., and Allen, K.-A. (2023). Leadership is needed for ethical ChatGPT: character, assessment, and learning using artificial intelligence (AI). J. Univ. Teach. Learn. Pract. 20. doi: 10.53761/1.20.3.02


Deerwester, S., Dumais, S. T., Furnas, G. W., Landauer, T. K., and Harshman, R. (1990). Indexing by latent semantic analysis. J. Am. Soc. Inf. Sci. 41, 391–407. doi: 10.1002/(SICI)1097-4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9


Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., et al. (2021). Artificial intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice, and policy. Int. J. Inf. Manag. 57:101994. doi: 10.1016/j.ijinfomgt.2019.08.002


Farooqi, M. T. K., Amanat, I., and Awan, S. M. (2024). Ethical considerations and challenges in the integration of artificial intelligence in education: a systematic review. J. Excell. Manag. Sci. 3, 35–50. doi: 10.69565/jems.v3i4.314


García-Martínez, I., Fernández-Batanero, J. M., Fernández-Cerero, J., and León, S. P. (2023). Analysing the impact of artificial intelligence and computational sciences on student performance: systematic review and Meta-analysis. J. New Approach. Educ. Res. 12, 171–197. doi: 10.7821/naer.2023.1.1240


García-Peñalvo, F. J. (2023). La percepción de la Inteligencia Artificial en contextos educativos tras el lanzamiento de ChatGPT: Disrupción o pánico. Educ. Knowled. Soc. 24:e31279. doi: 10.14201/eks.31279


Joshi, A., Kale, S., Chandel, S., and Pal, D. (2015). Likert scale: explored and explained. British J. Appl. Sci. Technol. 7, 396–403. doi: 10.9734/BJAST/2015/14975


Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., et al. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 103:102274. doi: 10.1016/j.lindif.2023.102274


Lee, D., Arnold, M., Srivastava, A., Plastow, K., Strelan, P., Ploeckl, F., et al. (2024). The impact of generative AI on higher education learning and teaching: a study of educators’ perspectives. Comput. Educ. Artif. Intell. 6:100221. doi: 10.1016/j.caeai.2024.100221


Membrillo-Hernández, J., De Jesús Ramírez-Cadena, M., Ramírez-Medrano, A., García-Castelán, R. M. G., and García-García, R. (2021). Implementation of the challenge-based learning approach in academic engineering programs. Int. J. Interact. Des. Manuf. (IJIDeM) 15, 287–298. doi: 10.1007/s12008-021-00755-3


Soori, M., Arezoo, B., and Dastres, R. (2023). Artificial intelligence, machine learning and deep learning in advanced robotics, a review. Cogn. Robot. 3, 54–70. doi: 10.1016/j.cogr.2023.04.001


Stojanov, A. (2023). Learning with ChatGPT 3.5 as a more knowledgeable other: an autoethnographic study. Int. J. Educ. Technol. High. Educ. 20:35. doi: 10.1186/s41239-023-00404-7


Stokel-Walker, C. (2022). AI bot ChatGPT writes smart essays—should professors worry? Nature. doi: 10.1038/d41586-022-04397-7


Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., et al. (2023). What if the devil is my guardian angel? ChatGPT is a case study of using chatbots in education. Smart Learn. Environ. 10:15. doi: 10.1186/s40561-023-00237-x


Wang, S., Wang, F., Zhu, Z., Wang, J., Tran, T., and Du, Z. (2024). Artificial intelligence in education: a systematic literature review. Expert Syst. Appl. 252:124167. doi: 10.1016/j.eswa.2024.124167


Keywords: educational innovation, artificial intelligence, higher education, ChatGPT, challenge-based learning

Citation: Elizondo-García ME, Hernández-De la Cerda H, Benavides-García IG, Caratozzolo P and Membrillo-Hernández J (2025) Who is solving the challenge? The use of ChatGPT in mathematics and biology courses using challenge-based learning. Front. Educ. 10:1417642. doi: 10.3389/feduc.2025.1417642

Received: 15 April 2024; Accepted: 13 February 2025;
Published: 25 March 2025.

Edited by:

Yousef Wardat, Yarmouk University, Jordan

Reviewed by:

Alba Meça, University of Padua, Italy
Amnon Meir, Southern Methodist University, United States

Copyright © 2025 Elizondo-García, Hernández-De la Cerda, Benavides-García, Caratozzolo and Membrillo-Hernández. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jorge Membrillo-Hernández, jmembrillo@tec.mx

