
OPINION article

Front. Educ., 05 March 2025

Sec. Digital Learning Innovations

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1534582

This article is part of the Research Topic AI Innovations in Education: Adaptive Learning and Beyond.

The power duo: unleashing cognitive potential through human-AI synergy in STEM and non-STEM education

  • 1Mount Carmel College of Teacher Education, Kottayam, Kerala, India
  • 2Department of Health and Wellness, Marian College Kuttikkanam Autonomous, Kuttikkanam, India
  • 3St. Joseph College of Teacher Education for Women, Ernakulam, India
  • 4Department of Mathematics, Karmela Rani Training College, Kollam, Kerala, India

Introduction

AI-driven education tools are expected to impact over 2 billion learners worldwide in the coming years, transforming both STEM and non-STEM disciplines in unprecedented ways (Louly, 2024; Sandhu et al., 2024; World Economic Forum, 2024). Artificial Intelligence (AI) is revolutionizing education through personalized tutoring, real-time feedback, and adaptive learning experiences (Akavova et al., 2023). AI enables teachers to create individualized development plans tailored to students' needs. Its impact on intellectual tasks such as critical thinking, emotional intelligence, and moral reasoning, however, remains a matter of debate (Çela et al., 2024). Greater dependence on AI-driven tools raises concerns about surface learning and minimal engagement with complex problem-solving and debate (Çela et al., 2024).

While AI enhances education across all subjects, it does so unevenly between STEM and non-STEM fields, particularly in its engagement with structured, logic-based learning vs. interpretative, abstract reasoning (Nagaraj et al., 2023; Singer et al., 2023). Within STEM education, AI's analytical, logic-driven nature provides substantial benefits in problem-solving, simulation, and the automation of complex calculations. Non-STEM fields, such as the humanities and social sciences, however, require more interpretative, ethical, and creative engagement that AI is less equipped to provide. This paper explores these differences while advocating for a balanced integration of AI that augments, rather than replaces, human teaching.

Bridging intelligence: a literature review on AI-human synergy in education

Theoretical perspectives on AI in education

Learning theories offer a framework for understanding AI's application in education. In the context of Bloom's Taxonomy, AI can support lower-order thinking skills (knowledge, comprehension, application) in STEM education but lags in supporting higher-order skills like evaluation and synthesis (Essien et al., 2024). Within Vygotsky's Zone of Proximal Development, AI in non-STEM contexts can be seen as a scaffold that assists students with guided learning tasks but still requires close human mediation to foster abstract thought and creativity (Xue, 2023). Both theories emphasize AI's ability to guide learners through solvable problems, yet both suggest the need for human intervention to achieve deeper learning outcomes.

AI in STEM education: enhancing efficiency and problem-solving

Artificial Intelligence in Education (AIEd) encompasses a broad range of tasks, from adaptive learning systems to automated grading. In STEM disciplines, AI is used to accelerate problem-solving and grading; for example, AI tools provide personalized tutoring that improves engagement and course performance in computational subjects such as mathematics and engineering (Gupta et al., 2024; Mustafa, 2024). AI-assisted grading has reduced grading inconsistencies by up to 44% compared to human grading (Gobrecht et al., 2024), suggesting less bias across STEM exams. Notwithstanding these prospects, there are concerns about bias and about AI failure on tasks that require subjective judgment, such as grading engineering ethics responses (Orchard and Radke, 2023).

AI is also being utilized as a teaching assistant in STEM classrooms. Squirrel AI, an adaptive learning system in China, has been used to provide real-time feedback to students in mathematics and physics, significantly improving their problem-solving efficiency and retention rates (Luo, 2023). AI-powered teaching assistants like Jill Watson, developed at Georgia Tech, have demonstrated the ability to answer student queries efficiently in online courses, reducing educators' workload while increasing student engagement (Taneja et al., 2024). Such applications highlight AI's potential to serve as a valuable assistant in structured, logic-driven subjects.

Beyond the classroom, AI is transforming STEM research via predictive analysis and automation. In biomedical engineering, for example, AI-powered tools analyze medical imaging and predict disease patterns (Burri and Mukku, 2024). In engineering, AI supports computational modeling and rapid prototyping, allowing researchers to conduct complicated simulations that would otherwise take months (Subramonyam et al., 2021). AI-powered research labs at MIT and Stanford have already begun utilizing AI-driven robotic experiments, which reduce the time needed for hypothesis testing and enhance the precision of experimental outcomes (Gower et al., 2024). However, studies have found that while AI supports efficiency in STEM learning, overreliance on automated solutions erodes students' ability to analyze and solve problems on their own, raising concerns about the long-term cognitive impact (Çela et al., 2024).

AI in non-STEM education: challenges in interpretation and creativity

AI application in non-STEM education is contentious. Although AI-assisted writing software enhances grammatical accuracy, studies show that it does not enhance creative and critical thinking (Rahmi et al., 2024). Furthermore, AI-powered software that analyzes literature or historical texts cannot grasp cultural context or ethical sensitivity (Shehu, 2024). Unlike STEM's structured problem-solving, non-STEM disciplines entail subjective interpretation, and AI is less effective when applied to independent learning (McIntosh et al., 2024).

Recent studies have also examined AI's role as a teaching assistant in the humanities and social sciences. AI-driven discussion facilitators have been utilized in some institutions to assist online philosophy and ethics courses by summarizing discussion points and proposing avenues for further investigation (Aleynikova and Yarotskaya, 2024; Baiburin et al., 2024). However, such systems lack the depth required for abstract discussion. Similarly, AI-assisted legal research platforms, e.g., ROSS Intelligence, have enabled more efficient case law analysis but cannot provide contextual insight, which continues to be essential in legal education (Migliorini and Moreira, 2024; Mohamed et al., 2024). These studies highlight that while AI might assist with procedural aspects of non-STEM disciplines, its failure to comprehend context and perform interpretative reasoning necessitates continued human intervention to ensure the depth and complexity of learning.

Discussion: the diverging roles of AI in STEM and non-STEM education

AI in education presents a sharp contrast between STEM and non-STEM disciplines. In STEM education, AI boosts structured problem-solving, simulation, and automatic grading, with measurable gains in efficiency and accuracy. AI teaching assistants such as Jill Watson and Squirrel AI have already succeeded in STEM subjects by demonstrating accurate response capability, adaptive feedback, and real-time performance analysis. Such structured environments are exactly where AI excels: pattern recognition, data processing, and structured problem-solving. There is, however, concern that over-reliance on AI tools will erode students' capacity for independent critical thinking (Çela et al., 2024; Jia and Tu, 2024).

On the other hand, AI is unable to fully replicate human reasoning in non-STEM fields, where subjective interpretation, morality, and creativity take center stage. While AI-powered discussion facilitators and legal research tools (ROSS Intelligence) make information more accessible and automate citation, they fall short of deep interpretation and contextual understanding (Migliorini and Moreira, 2024). The need for high-level, human-led critical engagement in fields such as literature, law, and philosophy underscores AI's limitations in abstract reasoning and moral decision-making (Jia and Tu, 2024). These asymmetries dictate that AI can only be used as a complementary tool and not a substitute for human instruction in non-STEM fields.

From an ethical perspective, AI's role in education raises questions of algorithmic bias and fairness. AI-augmented grading systems in STEM have been shown to be less subjective, yet the literature also reports racial and gender biases in AI-based grading (Chinta et al., 2024; Mangal and Pardos, 2024). Similar problems arise in the humanities and other non-STEM work, where AI-generated historical analyses and text-based responses have included fabricated citations or cultural mis-contextualization (Mandal et al., 2024; Papadopoulos et al., 2024). These results call for transparency, fairness, and human involvement in AI-assisted learning.

However, as AI evolves, interdisciplinary collaboration will be necessary to advance its use in education. The field must find a middle-ground solution that combines automation with human-led insight to close the divide between STEM and non-STEM uptake of AI. It is essential that educators, policymakers, and AI technologists collaborate to integrate AI tools responsibly and equitably into education, focusing on critical thinking, ethical decision-making, and equitable access to technology.

Future implementations and recommendations

We must balance efficiency with cognitive engagement. In STEM fields, where AI augments problem-solving, hybrid learning environments can mitigate overreliance by coupling AI-powered support with instructor-facilitated reasoning exercises. AI-based digital tools should not present students with answers without insight into the underlying logic that leads to those answers (Bhat and Long, 2024), ensuring that students engage critically rather than passively. Self-regulated learning strategies, such as asking students to reflect on AI-generated solutions, can strengthen independent problem-solving skills.

In non-STEM disciplines, AI's limitations in interpretative reasoning necessitate context-aware tools that incorporate cultural, ethical, and historical nuances (John-Mathews, 2021). AI discussion facilitators should evolve beyond summarization, using Socratic-style questioning to encourage deeper analysis. AI-assisted peer learning can also enhance engagement by allowing AI to structure discussions while instructors guide critical debates.

Algorithmic transparency and fairness are essential across disciplines. AI systems should be interpretable and should provide insight into how a particular conclusion has been reached. Universities can also establish AI oversight committees empowered to independently review AI-based assessments and detect bias. Such oversight, alongside AI literacy integrated within curricula, will not only cultivate responsible AI use but also sharpen students' critical faculties for evaluating AI-generated content; ultimately, this is vital to maintaining a proper equilibrium in the human-AI partnership.

Improving the integration of AI will allow education to benefit from AI without sacrificing critical thinking, creativity, and ethical judgment. Further studies should prioritize improving AI interpretability so that AI's supportive role is preserved alongside, rather than in place of, human intelligence.

Conclusion: toward responsible AI integration

Artificial intelligence has transformed education, streamlining structured problem-solving while leaving critical interpretation and moral analysis unresolved. AI has found a comfortable home in the increasingly data-heavy and systematic STEM disciplines, but it falls short in interpretative and creative learning, which highlights the need for human authority over how such technology is used in education. Responsible AI deployment means ensuring that AI remains a tool that enables collaboration, as opposed to a human replacement.

To counterbalance these challenges, institutions can promote equitable and effective AI-influenced education through AI literacy initiatives that enable students and educators to evaluate AI-generated output critically. AI governance frameworks will be crucial here too, ensuring fairness and transparency and removing bias from grading and evaluation. Moreover, tackling the digital divide and guaranteeing that AI-powered learning tools are available to all pupils is key to avoiding educational inequalities.

Going forward, collaboration between educators, AI developers, and policymakers will be essential to ensure that AI applications in education are refined and applied responsibly. In addition, future research should focus on improving the interpretative capabilities of AI, enabling it to support both structured and creative dimensions of learning.

Ultimately, AI's success in education depends on ethical governance, human-centered design, and pedagogical innovation. By maintaining a balanced AI-human synergy, education can harness AI's benefits without compromising critical thinking, creativity, and ethical judgment.

Author contributions

NV: Writing – original draft, Writing – review & editing, Supervision. BJ: Conceptualization, Investigation, Methodology, Writing – original draft, Writing – review & editing. TB: Methodology, Writing – original draft, Writing – review & editing. AC: Validation, Writing – original draft, Writing – review & editing. SN: Supervision, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Akavova, A., Temirkhanova, Z., and Lorsanova, Z. (2023). Adaptive learning and artificial intelligence in the educational space. E3S Web Conf. 451:6011. doi: 10.1051/e3sconf/202345106011

Aleynikova, D., and Yarotskaya, L. (2024). AI Implications for Vocational Foreign Language Teaching and Learning: New Meaning. Tambov University Review. Series: Humanities.

Baiburin, A., Berezkin, Y., Boitsova, O., Gromov, A., Kovalenko, K., Kovalyova, N., et al. (2024). Forum 60: AI in the social sciences and humanities. Anthropol. Forum 20, 11–68. doi: 10.31250/1815-8870-2024-20-60-11-68

Bhat, M., and Long, D. (2024). Designing interactive explainable AI tools for algorithmic literacy and transparency. Proceedings of the 2024 ACM Designing Interactive Systems Conference (Hamburg), 939–957.

Burri, V., and Mukku, L. (2024). Predictive analytics in healthcare: harnessing AI for early disease detection. Glob. J. Res. Anal. 13. doi: 10.36106/gjra/5803759

Çela, E., Fonkam, M. M., and Potluri, R. (2024). Risks of AI-assisted learning on student critical thinking. Int. J. Risk Contingency Manag. 12, 1–19. doi: 10.4018/IJRCM.350185

Chinta, S. V., Wang, Z., Yin, Z., Hoang, N., Gonzalez, M., Quy, T. L., et al. (2024). FairAIED: navigating fairness, bias, and ethics in educational AI applications. ArXiv, abs/2407.18745. doi: 10.48550/arXiv.2407.18745

Essien, A., Bukoye, O., O'Dea, C., and Kremantzis, M. (2024). The influence of AI text generators on critical thinking skills in UK business schools. Stud. High. Educ. 49, 865–882. doi: 10.1080/03075079.2024.2316881

Gobrecht, A., Tuma, F., Möller, M., Zöller, T., Zakhvatkin, M., Wuttig, A., et al. (2024). Beyond human subjectivity and error: a novel AI grading system. ArXiv, abs/2405.04323. doi: 10.48550/arXiv.2405.04323

Gower, A., Korovin, K., Brunnsåker, D., Kronström, F., Reder, G., Tiukova, I., et al. (2024). The use of AI-robotic systems for scientific discovery. ArXiv, abs/2406.17835. doi: 10.48550/arXiv.2406.17835

Gupta, S., Dharamshi, R., and Kakde, V. (2024). An impactful and revolutionized educational ecosystem using generative AI to assist and assess the teaching and learning benefits, fostering the post-pandemic requirements. 2024 Second International Conference on Emerging Trends in Information Technology and Engineering (ICETITE) (Vellore), 1–4.

Jia, X.-H., and Tu, J.-C. (2024). Towards a new conceptual model of AI-enhanced learning for college students: the roles of artificial intelligence capabilities, general self-efficacy, learning motivation, and critical thinking awareness. Systems 12:74. doi: 10.3390/systems12030074

John-Mathews, J.-M. (2021). Some critical and ethical perspectives on the empirical turn of AI interpretability. ArXiv, abs/2109.09586. doi: 10.48550/arXiv.2109.09586

Louly, N. (2024). Application of AI Tools in Education - A Conceptual Framework. Palakkadu: Recent Trends in Management and Commerce.

Luo, Q. (2023). The influence of AI-powered adaptive learning platforms on student performance in Chinese classrooms. J. Educ. 6, 660–678. doi: 10.53819/81018102t4181

Mandal, B., Li, X., and Ouyang, J. (2024). Contextualizing generated citation texts. ArXiv, abs/2402.18054. doi: 10.48550/arXiv.2402.18054

Mangal, M., and Pardos, Z. (2024). Implementing equitable and intersectionality-aware ML in education: a practical guide. Br. J. Educ. Technol. 55, 2003–2038. doi: 10.1111/bjet.13484

McIntosh, T., Liu, T., Susnjak, T., Watters, P., and Halgamuge, M. (2024). A reasoning and value alignment test to assess advanced GPT reasoning. ACM Trans. Interact. Intell. Syst. 41:37. doi: 10.1145/3670691

Migliorini, S., and Moreira, J. I. (2024). The case for nurturing AI literacy in law schools. Asian J. Leg. Educ. 12, 2–10. doi: 10.1177/23220058241265613

Mohamed, E., Quteishat, A., Qtaishat, A., and Mohammad, A. (2024). Exploring the role of AI in modern legal practice: opportunities, challenges, and ethical implications. J. Electr. Syst. doi: 10.52783/jes.3320

Mustafa, A. N. (2024). The future of mathematics education: adaptive learning technologies and artificial intelligence. Int. J. Sci. Res. Arch. 12, 2594–2599. doi: 10.30574/ijsra.2024.12.1.1134

Nagaraj, B. K., A, K., R, S. B., S, A., Sachdev, H. K. N., et al. (2023). The emerging role of artificial intelligence in STEM higher education: a critical review. Int. Res. J. Multidiscip. Technov. doi: 10.54392/irjmt2351

Orchard, A., and Radke, D. (2023). An Analysis of Engineering Students' Responses to an AI Ethics Scenario (Washington, DC: Association for the Advancement of Artificial Intelligence), 15834–15842.

Papadopoulos, S.-I., Koutlis, C., Papadopoulos, S., and Petrantonakis, P. (2024). Similarity over factuality: are we making progress on multimodal out-of-context misinformation detection? ArXiv, abs/2407.13488. doi: 10.48550/arXiv.2407.13488

Rahmi, R., Amalina, Z., Andriansyah, A., and Rodgers, A. (2024). Does it really help? Exploring the impact of AI-generated writing assistants on students' English writing. Stud. Engl. Lang. Educ. 11, 998–1012. doi: 10.24815/siele.v11i2.35875

Sandhu, R., Channi, H., Ghai, D., Cheema, G., and Kaur, M. (2024). "An introduction to generative AI tools for education 2030," in Integrating Generative AI in Education to Achieve Sustainable Development Goals (Washington, DC: Association for the Advancement of Artificial Intelligence), 1–28.

Shehu, Y. (2024). The impact of artificial intelligence (AI) on the evolution of Hausa literature, language, and culture. Dundaye J. Hausa Stud. 3, 111–116. doi: 10.36349/djhs.2024.v03i01.013

Singer, S., Haines, B., and Roopaei, M. (2023). Adventure alongside AI into STEM education. 2023 IEEE Frontiers in Education Conference (FIE) (College Station, TX), 1–4.

Subramonyam, H., Seifert, C., and Adar, E. (2021). ProtoAI: model-informed prototyping for AI-powered interfaces. Proceedings of the 26th International Conference on Intelligent User Interfaces (Ann Arbor, MI).

Taneja, K., Maiti, P., Kakar, S., Guruprasad, P., Rao, S., and Goel, A. (2024). Jill Watson: a virtual teaching assistant powered by ChatGPT. ArXiv, abs/2405.11070. doi: 10.1007/978-3-031-64302-6_23

World Economic Forum (2024). Shaping the Future of Learning: The Role of AI in Education 4.0, 28. Available online at: https://www3.weforum.org/docs/WEF_Shaping_the_Future_of_Learning_2024.pdf

Xue, Z. (2023). Exploring Vygotsky's zone of proximal development in pedagogy: a critique of a learning event in the business/economics classroom. Int. J. Educ. Humanities 9, 4–17. doi: 10.54097/ijeh.v9i3.10506

Keywords: STEM and non-STEM education, AI integration in education, critical thinking, human-AI collaboration, algorithmic transparency

Citation: Varghese NN, Jose B, Bindhumol T, Cleetus A and Nair SB (2025) The power duo: unleashing cognitive potential through human-AI synergy in STEM and non-STEM education. Front. Educ. 10:1534582. doi: 10.3389/feduc.2025.1534582

Received: 26 November 2024; Accepted: 20 February 2025;
Published: 05 March 2025.

Edited by:

Stefano Triberti, Pegaso University, Italy

Reviewed by:

Ozden Sengul, Bogaziçi University, Türkiye

Copyright © 2025 Varghese, Jose, Bindhumol, Cleetus and Nair. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Binny Jose, mkayani83@gmail.com

