HYPOTHESIS AND THEORY article

Front. Comput. Sci., 27 July 2023
Sec. Digital Education
This article is part of the Research Topic "Impact and implications of AI methods and tools for the future of education".

How are AI assistants changing higher education?

Eva-Maria Schön1*, Michael Neumann2, Christina Hofmann-Stölting3, Ricardo Baeza-Yates4 and Maria Rauschenberger5
  • 1Faculty of Business Studies, University of Applied Sciences Emden/Leer, Emden, Germany
  • 2Faculty of Business and Computer Science, Department of Business Information Systems, University of Applied Sciences and Arts Hannover, Hannover, Germany
  • 3Faculty of Business and Social Sciences, Department of Business, University of Applied Sciences (HAW) Hamburg, Hamburg, Germany
  • 4Institute for Experiential AI, Northeastern University, Boston, MA, United States
  • 5Faculty of Technology, University of Applied Sciences Emden/Leer, Emden, Germany

Context: Higher education is changing at an accelerating pace due to the widespread use of digital teaching and emerging technologies. In particular, AI assistants such as ChatGPT pose significant challenges for higher education institutions because they bring change to several areas, such as learning assessments or learning experiences.

Objective: Our objective is to discuss the impact of AI assistants in the context of higher education, outline possible changes to the context, and present recommendations for adapting to change.

Method: We review related work and develop a conceptual structure that visualizes the role of AI assistants in higher education.

Results: The conceptual structure distinguishes between humans, learning, organization, and disruptor, which guides our discussion regarding the implications of AI assistant usage in higher education. The discussion is based on evidence from related literature.

Conclusion: AI assistants will change the context of higher education in a disruptive manner, and the tipping point for this transformation has already been reached. It is in our hands to shape this transformation.

1. Introduction

The context of higher education is changing at an accelerating pace. During the COVID-19 pandemic, digital teaching became the new method of teaching. In addition, new learning concepts have evolved, and collaborative technologies have spread. Innovative teaching concepts have been explored, such as gamification frameworks in learning environments (Rauschenberger et al., 2019), agile approaches (Neumann and Baumann, 2021; Schön et al., 2022), or the use of emerging technologies (e.g., a robot that serves as a teaching assistant and scrum master; Buchem and Baecker, 2022). Nowadays, emerging technologies, which include AI tools such as ChatGPT, are changing the context of higher education in a disruptive manner (Haque et al., 2022). ChatGPT is a large language model based on GPT-3 and was released by the company OpenAI in November 2022 (OpenAI, 2022). The AI chatbot responds to users' prompts in real time. The quality of ChatGPT's natural-sounding answers marks a major change in how we will use AI-generated information in our day-to-day lives and has the potential to completely alter our interaction with technology (Aljanabi et al., 2023). More recently, Google launched Bard (currently available only to a small group of external testers), an AI chatbot competitor to ChatGPT that is tailored to search tasks and can even use current information on the web to answer questions (The decoder, 2023).

There have been many discussions regarding the potential impact of ChatGPT, with some viewing it as a disruptive technology (Haque et al., 2022; Rudolph et al., 2023). Some even believe it should be prohibited due to the change it may bring. However, ChatGPT is still in its infancy and makes mistakes (Gao et al., 2022). At the same time, it has great potential for the future. For instance, it could be used for voice user interfaces to overcome issues with response behavior or response quality (Klein et al., 2021). AI assistants can support the creation of new ideas or help automate tasks. Hence, AI assistants will change the way people work.

This development also has implications for the context of higher education. On the one hand, it poses challenges, such as uncertainty about how students will handle these tools, more time-consuming assessments, and as-yet-unknown potential. On the other hand, it offers opportunities, such as individualized intelligent tutoring systems (ITS) and increased creative engagement (Neumann et al., 2023). One of the challenges with the output of ChatGPT in the context of higher education is that established control structures (such as plagiarism checkers or AI detection tools) are not able to detect whether a text was generated by an AI or a human (Gao et al., 2022). As a result, the way we conduct and evaluate exams will change. In addition, we must consider creating good practices for using ChatGPT in a responsible and ethical manner, e.g., Atlas (2023). These will not be the only changes. Equally important is a change at the cultural level toward a student-centered approach and value-based learning (Schön et al., 2022). In any case, it is evident that the context of higher education is changing. Thus, we need discussion and guidance on how to deal with such emerging technologies in order to actively shape this transformation.

This paper examines the following research questions (RQ):

RQ1: What is the impact of AI assistants in the context of higher education?

RQ2: How can higher education institutions adapt to the changes brought by AI assistants?

To answer RQ1, we developed a conceptual structure that highlights how AI assistants will change the context of higher education. The aim of our conceptual structure is to formalize ongoing activities in terms of the changes brought by generative AI assistants such as ChatGPT. The conceptual structure distinguishes between humans, learning, organization, and disruptor. It allows us to have a guided discussion regarding the implications of AI assistant usage in higher education. We enriched the discussion with evidence from related literature and our own experience with different use cases using the chatbot ChatGPT (Schön et al., 2023). To answer RQ2, we present lessons learned from the agile community that allow us to outline good practices for adapting to change.

This paper is structured as follows: Section 2 briefly summarizes the related work. Section 3 presents our conceptual structure for the usage of AI assistants in the context of higher education. Section 4 outlines the implications of AI assistant usage in higher education and discusses the changes in terms of humans, learning, and organizational aspects. Section 5 explores ethical issues regarding the use of AI assistants in the context of higher education and presents lessons learned from the agile community concerning transformation toward value-based working. This paper closes with a conclusion and future work in Section 6.

2. Related work

AI assistants have become more relevant in recent years and have reached the context of higher education, which is shown by the increasing number of publications and literature reviews conducted on this topic (Ouyang et al., 2022). Thus, we looked for secondary studies related to AI in higher education using Google Scholar. Secondary studies examine all primary studies related to a particular research question or topic, with the goal of synthesizing the evidence related to that question. The latest reviews concerning AI in higher education are briefly summarized below.

Alam and Mohanty (2022) surveyed existing literature in a systematic manner with the objective of identifying and examining the ethical considerations, challenges, and potential threats associated with using AI in higher education as well as exploring the potential uses of AI. They grouped their results into four categories: intelligent tutoring systems, personalization and adaptive systems, evaluation and assessment, and prediction and profiling. The authors state that their research reveals a lack of critical thinking regarding the challenges and potential threats of using AI in higher education.

Another systematic review by Ouyang et al. (2022) focuses on AI in online higher education. Their research aims to examine the various purposes for which AI is applied, the AI algorithms utilized, and the outcomes produced by AI techniques in online higher education. In terms of teaching, the authors found that AI applications are used for predicting learning status, performance, or satisfaction; recommending resources; automating assessment; and improving the learning experience. The authors claim that AI has been a crucial aspect of education from the perspectives of instructors, learners, and administrators, with the ability to create both opportunities and challenges in the transformation of higher education.

Since the release of ChatGPT in November 2022, there have already been several publications on the AI assistant. Rudolph et al. (2023) surveyed the existing literature regarding ChatGPT and higher education and found some peer-reviewed articles and preprints, which they included in their review. The authors also performed queries with ChatGPT. Their article presents the strengths and limitations of ChatGPT and discusses the implications of ChatGPT for higher education concerning student-facing AI applications, teacher-facing AI applications, and system-facing AI applications. Moreover, they offer recommendations for handling ChatGPT in higher education. The authors categorize ChatGPT as an AI-powered writing assistant. They conclude that ChatGPT can be beneficial for providing conceptual explanations and applications but cannot create content that requires higher-order thinking (such as critical or analytical thinking).

The related work provides evidence that allows us to create our conceptual structure and discuss the impact of AI assistants in higher education. Compared to our research, the paper by Rudolph et al. (2023) presents a good overview of current developments regarding ChatGPT. However, our conceptual structure goes one step further and allows us to structure the knowledge and discussion on a meta-level. Summarizing our findings, we can say that the quality of AI tools is rapidly improving. As outlined in the introduction, the quality of the text generated by ChatGPT is impressive and will change the context of higher education in a disruptive manner. Thus, we have reached a tipping point at which change has been initiated on several levels, marking a major transformation. The question now is how we want to deal with this transformation. Hence, the following section presents a conceptual structure that outlines the changes that AI assistants will bring to the context of higher education, followed by a discussion of the implications.

3. Developing a conceptual structure for AI assistants in higher education

A conceptual structure is a way to describe the organization and connections between various components of a specific system, similar to a meta-model, which goes beyond this and creates the basis for a language used for creating models (Escalona and Koch, 2007; Schön et al., 2019). Moreover, a conceptual structure provides a type of classification and allows us to make inferences and predictions based on selected information (Medin, 1989). It facilitates a shared understanding of a particular problem area and provides an abstract perspective on the problem. We want to answer RQ1 (What is the impact of AI assistants in the context of higher education?) with this conceptual structure (see Figure 1) and the discussion of implications in Section 4.

Figure 1. Conceptual structure for AI assistants in higher education.

3.1. Developing the conceptual structure

We used a formalized approach to develop our conceptual structure. We started with a review of gray literature related to ChatGPT (Neumann et al., 2023). Moreover, we tested different use cases using the chatbot ChatGPT (Schön et al., 2023). Next, we had several discussions with lecturers and researchers to better understand how AI assistants will change higher education. We used a Miro board to visualize and capture our discussions and thoughts. We then created the first version of our conceptual structure based on this preliminary work and related literature. We used UML (Object Management Group, 2017) notation since it provides a formal representation method that is commonly used. The conceptual structure was refined over eight iterations through discussions among the authors. It is presented in Figure 1.

3.2. Conceptual structure for AI assistants in higher education

This section presents our conceptual structure for AI assistants in higher education and discusses the concepts and relations between the classes. Our conceptual structure (see Figure 1) comprises four color-coded areas related to the change in higher education due to AI assistants:

Humans (yellow): classes Lecturer, Student

Learning (green): classes Learning Experience, Learning Assessment, Module

Organization (blue): classes University, Regulation

Disruptor (red): class AI Assistant

The conceptual structure contains the class Lecturer with the following attributes: topic, skill, competency, mindset, and way of working. Lecturers interact with Students, who are described by a degree program, skills, competency, mindset, and learning style. Both classes are related to the concept of Humans. Moreover, the concept of Learning comprises the classes Learning Experience, Learning Assessment, and Module. Students have a Learning Experience, which differs in terms of prior knowledge, mindset/values, condition, skills, and tools. Lecturers are responsible for Learning Assessments, which vary in terms of type, evaluation, and grading. Learning Assessments are part of Modules covering didactic goals, teaching concepts, didactic methods, learning assessments, and course material. Modules are taught by Lecturers and influenced by Regulations. Regulations are determined by examination regulations, degree program regulations, or unwritten rules. Together with the class University, they represent the concept of Organization. The class University comprises faculty, department, institution, degree program, and university committee. In the middle of Figure 1, the concept of Disruptor is placed as the class AI Assistant. It is highlighted in red because it changes the whole context of higher education. As outlined above, when Lecturers and Students use AI Assistants, this will change the Learning Experience, Learning Assessment, Module, and Regulation. We discuss these changes in the next section and outline the impact of using AI Assistants in higher education.
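
To make these classes and relations tangible, the structure can also be sketched in code. The following is a minimal illustration using Python dataclasses (our own rendering; Figure 1 itself uses UML). The attribute names follow the description above, while the types and identifier choices are assumptions for readability:

```python
from dataclasses import dataclass

@dataclass
class AIAssistant:                  # Disruptor (red)
    name: str                       # e.g., "ChatGPT"
    assistant_type: str             # e.g., text, image, or speech generation

@dataclass
class Lecturer:                     # Humans (yellow)
    topic: str
    skills: list[str]
    competencies: list[str]
    mindset: str
    way_of_working: str

@dataclass
class Student:                      # Humans (yellow)
    degree_program: str
    skills: list[str]
    competencies: list[str]
    mindset: str
    learning_style: str

@dataclass
class LearningExperience:           # Learning (green)
    prior_knowledge: str
    mindset_values: str
    condition: str                  # e.g., classroom-based or online
    skills: list[str]
    tools: list[str]

@dataclass
class LearningAssessment:           # Learning (green)
    assessment_type: str
    evaluation: str
    grading: str

@dataclass
class Module:                       # Learning (green)
    didactic_goals: list[str]
    teaching_concept: str
    didactic_methods: list[str]
    learning_assessments: list[LearningAssessment]
    course_material: list[str]

@dataclass
class Regulation:                   # Organization (blue)
    examination_regulations: str
    degree_program_regulations: str
    unwritten_rules: list[str]

@dataclass
class University:                   # Organization (blue)
    faculty: str
    department: str
    institution: str
    degree_program: str
    university_committee: str
```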

In the following, we present an example of how our conceptual structure can guide a discussion of the implications of AI assistant usage in higher education in practice:

Human: The lecturer in this example is mainly responsible for the topic of artificial intelligence. Her skill is programming and her competencies are focused on research in machine learning. Her mindset is innovative and adaptable to new circumstances and she engages in collaborative teaching methods. Usually, the lecturer interacts with students in computer science, who have special skills in problem-solving and coding. The students' competencies are in understanding algorithms. Their mindset is curious and they are eager to learn. Most of them are visual learners.

Learning: The students' learning experience differs in terms of prior knowledge, e.g., their understanding of basic programming concepts. They learn in a mix of classroom-based and online learning conditions. In doing so, they use programming software and online resources to train their logic and critical thinking. The lecturer needs to adapt her learning assessments to the new opportunities that AI assistants allow in terms of efficiency. For instance, traditional coding projects need to be adapted to the new possibilities in terms of evaluation and grading. In addition, a module on mastering programming concepts could be designed so that AI assistants are explicitly used for hands-on coding tasks. Students can use AI assistants as tutoring systems that support them in analyzing bugs and understanding error messages.

Organization: Where the use of AI assistants is explicitly allowed, examination guidelines and grading criteria will need to be further developed to take account of the extent to which AI assistants have been used and of the student's own contribution. In addition, unwritten rules, such as informal expectations and norms within the university, should be clarified among stakeholders such as the computer science faculty and the department of artificial intelligence with regard to the bachelor's degree program in computer science.
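
Expressed with the dataclass sketch from Section 3.2, this running example can be written down directly (all values are the illustrative ones from this section):

```python
# Instances for the running example; reuses Lecturer, Student, and
# LearningAssessment from the sketch in Section 3.2.
lecturer = Lecturer(
    topic="artificial intelligence",
    skills=["programming"],
    competencies=["research in machine learning"],
    mindset="innovative, adaptable to new circumstances",
    way_of_working="collaborative teaching methods",
)

student = Student(
    degree_program="bachelor in computer science",
    skills=["problem-solving", "coding"],
    competencies=["understanding algorithms"],
    mindset="curious, eager to learn",
    learning_style="visual",
)

coding_task = LearningAssessment(
    assessment_type="hands-on coding task (AI assistant explicitly allowed)",
    evaluation="extent of AI use vs. student's own contribution",
    grading="criteria extended for AI-assisted work",
)
```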

This example shows that applying our conceptual structure makes a structured representation of the changes triggered by AI assistants possible. Finally, it can also be applied to other domains and study programs.

4. Implications of AI assistants in higher education

This section discusses the impact of AI assistants according to the four previously defined concepts (see Section 3): Humans, Learning, Organization, and Disruptor. The conceptual structure (see Figure 1) uses the relationships highlighted in red to show the extent to which AI assistants may change the context of higher education. We focus on how areas of higher education are being transformed by AI assistants using the example of ChatGPT. Therefore, we enrich the following discussion of implications with related work and with our own test queries with ChatGPT.1

4.1. Humans

Students can use AI assistants to identify strengths or gaps in their knowledge and to receive personalized feedback on their learning progress or work results (Zawacki-Richter et al., 2019). Thus, they are individually supported in the development of their competencies. ChatGPT may help students improve their academic research and writing skills. It can summarize papers, extract key facts, and even provide citations and references. The tool can also assist with (not replace) academic writing by generating essays or parts of essays for papers, dissertations, or similar work. Furthermore, ChatGPT can give feedback and correct text passages (Aljanabi et al., 2023). There are various research papers in which ChatGPT is used to write literature review articles with promising results (e.g., Aydin, 2022). This implies that AI assistants will play an essential role in the field of research and writing to support academics.
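
As a concrete illustration of this kind of support, the following minimal sketch asks a ChatGPT-style model to summarize an abstract via the OpenAI Python SDK. The model name, prompt, and setup (the `openai` package in version >= 1.0 and an API key in the environment) are assumptions for illustration, not part of our study protocol:

```python
# Minimal sketch: asking a ChatGPT-style model to summarize an abstract.
# Assumes the `openai` package (>= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

abstract = "..."  # paste the abstract of the paper to be summarized

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are an academic writing assistant."},
        {"role": "user", "content": "Summarize the following abstract in "
                                    "three sentences and list its key claims:\n"
                                    + abstract},
    ],
)
print(response.choices[0].message.content)
```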

Lecturers may use AI assistants to reduce their workload by automating assessment, administration, and feedback mechanisms (Rudolph et al., 2023). In particular, the time saved by automating assessment allows lecturers to focus on empathetic human teaching (Zawacki-Richter et al., 2019). In addition, lecturers can use AI assistants for lesson planning by having them create a course syllabus with short descriptions of the topics (Kasneci et al., 2023). Moreover, AI assistants can help lecturers create materials for different learning levels. They may also transfer solution examples (e.g., from one programming language to another) to save time. Test queries show that different examples can be generated and easily transformed by ChatGPT to suit all levels, from beginners to experts (Schön et al., 2023).

Both students and lecturers need to develop competencies so that they can use AI assistants effectively. At the same time, it is important to establish a proper mindset and raise awareness about the ethical aspects of AI assistant usage. Because ChatGPT generates near-perfect natural-sounding answers, a human may think, "That must be correct." ChatGPT also has other limitations: generated answers can be too short, misinterpreted, not understandable for students, or wrong (Gao et al., 2022; Qadir, 2022; Rudolph et al., 2023; Schön et al., 2023). Thus, humans must always evaluate the quality of ChatGPT's answers. To avoid misinterpretation of queries in the future, AI assistants could learn the individual wording of students and lecturers (like predictive text on a smartphone). A solution could be for lecturers to provide videos or audio recordings of their own lectures and then query exam questions from the AI assistants. We should keep in mind that AI assistants should be controlled by humans and that even the best AI can make mistakes (Zawacki-Richter et al., 2019; Perry et al., 2023).

4.2. Learning

AI assistants are revolutionizing higher education, creating both opportunities and challenges for enhancing the quality of higher education and improving the learning experience (Ouyang et al., 2022).

In particular, since AI assistants can serve as intelligent tutoring systems (ITS), they are changing how students learn (Zawacki-Richter et al., 2019; Fauzi et al., 2023; Rudolph et al., 2023). For instance, ChatGPT can serve as an AI-powered writing assistant and will therefore bring innovation to certain types of tasks. Lecturers can provide students with learning material that the students can then work through at their own pace since AI assistants can give them feedback. This would change the learning experience. Students can also use AI assistants to create individualized learning paths and personalized learning instructions according to their prior knowledge, conditions, and pace. Hence, students can interact with AI assistants such as ChatGPT and engage with content that is new to them and fits their needs. Especially with regard to large-scale lectures and massive open online courses, AI assistants have the potential to create individual learning experiences (Winkler and Soellner, 2018).

For exam preparation, AI assistants can also be used as intelligent tutoring systems (ITS). For example, questions about texts and other learning materials can be generated as a mock exam. ChatGPT is able to generate multiple-choice questions, quizzes, open questions, and much more. It is also conceivable that the AI assistant can not only provide the solution to a complex problem (e.g., a math problem) but also an individualized explanation of the solution process.
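
As a small sketch of what such generated exam material could look like once parsed into a program, consider a multiple-choice question held as plain data. The question content, field names, and feedback function are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class MultipleChoiceQuestion:
    question: str
    options: list[str]
    correct_index: int
    explanation: str  # individualized explanation of the solution process

q = MultipleChoiceQuestion(
    question="Which complexity class describes binary search on a sorted array?",
    options=["O(n)", "O(log n)", "O(n log n)", "O(1)"],
    correct_index=1,
    explanation="Each comparison halves the remaining interval, "
                "so about log2(n) steps are needed.",
)

def give_feedback(q: MultipleChoiceQuestion, answer_index: int) -> str:
    """Return tutoring-style feedback instead of a bare right/wrong."""
    if answer_index == q.correct_index:
        return "Correct! " + q.explanation
    return f"Not quite: the answer is {q.options[q.correct_index]}. " + q.explanation
```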

When implementing software code, computer science students can use AI-generated tests (e.g., unit tests) to test their own program code. However, there are some limitations, since AI-generated code is significantly less secure (Perry et al., 2023). Still, when asked for code, ChatGPT sometimes suggests security improvements unprompted (Schön et al., 2023). In addition, ChatGPT could potentially supplement or replace Google search or communities such as Stackoverflow.2 This is because ChatGPT responds in a matter of seconds, whereas communities of actual humans need hours, days, or even weeks to answer.
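
For instance, an AI-generated test suite for a small student function might look like the following pytest sketch (the function and the tests are invented for illustration; as noted above, such generated tests still need human review):

```python
# Student's own code (to be tested)
def median(values: list[float]) -> float:
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# AI-generated unit tests (pytest) -- to be reviewed before trusting them
import pytest

def test_median_odd_length():
    assert median([3, 1, 2]) == 2

def test_median_even_length():
    assert median([4, 1, 3, 2]) == 2.5

def test_median_empty_list_raises():
    with pytest.raises(IndexError):
        median([])
```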

AI assistants are already changing learning assessments. Assessment types that are generic and could be created by a human or an AI assistant should be avoided. Instead, assessments should be designed to develop students' creative and critical thinking skills (Rudolph et al., 2023).

These assessment types could cover presentations as well as multimedia content (such as videos, websites, or animations). Another type could be a stealth assessment: a continuous, integrated, and inconspicuous method of evaluation that takes place in various forms (such as serious games, simulations, virtual labs, or forums). It involves collecting data on student performance while they engage in tasks (Caspari-Sadeghi, 2023).
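
In its simplest form, a stealth assessment pipeline could log performance events while students work in a simulation or virtual lab and aggregate them later. The following is a minimal sketch, assuming a JSON-lines event log; the field names and values are invented:

```python
import json
import time
from pathlib import Path

LOG = Path("stealth_events.jsonl")

def log_event(student_id: str, activity: str, outcome: str) -> None:
    """Append one unobtrusively collected performance event to the log."""
    event = {
        "ts": time.time(),
        "student": student_id,
        "activity": activity,   # e.g., "virtual_lab:step_3"
        "outcome": outcome,     # e.g., "solved", "hint_used", "failed"
    }
    with LOG.open("a") as f:
        f.write(json.dumps(event) + "\n")

log_event("s-001", "simulation:titration", "solved")
```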

When a large number of students need to be assessed, automated assessment plays an important role. It also allows students to take an assessment whenever they are ready to do so; such flexibly timed exams may benefit students in a variety of subjects, such as math.

Tasks that are comparable and coordinated according to content, difficulty, and level of competence can be generated automatically. Lecturers can also use AI assistants to grade exams, including giving individual feedback regarding strengths and weaknesses for various types of assessments, such as essays, research papers, and written exams (Kasneci et al., 2023).

The use of AI assistants affects instructional design and implementation through various educational perspectives, thus having a significant impact (Ouyang et al., 2022). Increasing digitalization requires skills such as working in an interdisciplinary team and self-organization. Modules should cover those aspects in terms of didactic goals and didactic methods. An example of how those competencies can be acquired is the experience of working in an agile team; to this end, low-code platforms in combination with agile methods and practices can be used (Lebens and Finnegan, 2021). AI assistants can support students who are not primarily studying computer science in gaining this experience. With the increasing number of low-code platforms (e.g., Salesforce3) and ChatGPT, more and more people are able to create source code. This will bring more change to all degree programs since other competencies will be in demand. For example, business students may soon automate their work themselves instead of having a programmer for each task. This is a controversial societal issue that needs to be addressed.

Using an AI assistant requires certain competencies, just as using the web requires media competencies. Therefore, students need to learn how to use AI assistants and develop competence for different tasks according to their study program (e.g., informatics vs. business vs. social pedagogy). The current question is how and what we want to teach students, as we do not yet have much experience with AI assistants in daily use. Thus, research needs to be done. In addition, students will move on into industry; therefore, we need to determine what companies expect from their future employees regarding competencies in AI assistant usage.

What holds for the concept of Humans is equally valid for the concept of Learning: we need to be aware of the kinds of mistakes ChatGPT or any other AI assistant makes. The competence for ethical use needs to be conveyed. In addition, students must develop a mindset of wanting to learn out of the intrinsic motivation of mastery and purpose.

4.3. Organization

As outlined above, AI assistants will have an impact on regulations. For instance, they will change learning assessments in terms of types and evaluations, thus impacting examination regulations and degree program regulations. Automatic assessment, for example, is one of the ways AI is already being used in higher education (Zawacki-Richter et al., 2019; Ouyang et al., 2022). Lecturers are concerned that ChatGPT will change the process of writing as we know it and that traditional assessment types (such as essays and take-home exams) must be reformed (Rudolph et al., 2023). ChatGPT is capable of creating essays in just a few seconds. Established control mechanisms, such as plagiarism checkers or AI output detectors, cannot reliably and consistently identify generated texts (Gao et al., 2022).

If lecturers use AI assistants to generate exams and students use AI assistants to answer exam questions, then AI assistant usage will have reached a point of absurdity. Since this would mean the AI generates and completes the exams, it would no longer represent the learning level of students. Hence, we need to discuss limits regarding AI assistant usage or other approaches to assess the learning levels of students. Given the possibilities, one must consider whether certain forms of learning assessments, such as term papers and online exams, still make sense in the modern day (Susnjak, 2022). Additionally, new forms of learning assessments in which ChatGPT is explicitly used or does not provide any added value should be developed. For example, examinations could be designed to assess higher levels of competence, such as critical thinking or problem-solving skills (Cotton et al., 2023).

Despite the implications of AI assistants for regulations, these tools can deliver value-added services to other areas of universities. AI assistants can be used for profiling and prediction. Such AI assistants rely on learner profiles or models to make predictions, such as the risk of a student dropping out of a course or the likelihood of their admission to a program. This information can then be used to provide timely support, feedback, and guidance in content-related matters during the learning process (Zawacki-Richter et al., 2019). Another example of value-added services is that AI assistants can support course guidance services, such as answering questions about specific courses at a university. They can also provide individual advice for courses of study based on individual skills and prior experiences. AI assistants can also support the office of student affairs or other offices concerned with study organization, matriculation, certificates, and FAQs, although many legal and ethical questions remain.
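
A minimal sketch of such a prediction model, assuming scikit-learn and an invented toy feature set (credits earned, average grade on the German scale, logins per week); any real deployment would first have to answer the legal and ethical questions raised here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy learner profiles: [credits earned, average grade (1.0 best), logins/week]
X = np.array([
    [30, 1.7, 5], [10, 3.5, 1], [25, 2.3, 4],
    [5,  4.0, 0], [28, 2.0, 6], [12, 3.2, 2],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = dropped out

model = LogisticRegression().fit(X, y)

new_student = np.array([[15, 3.0, 1]])
risk = model.predict_proba(new_student)[0, 1]
print(f"Estimated dropout risk: {risk:.0%}")  # could trigger timely support
```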

When proven control mechanisms no longer work and types of learning assessments change, new ways of dealing with deception and evaluation must be found. In this context, it is important to develop rules for dealing with AI assistants that all parties involved can understand and follow. There are still many open questions, such as "Is it plagiarism if an AI writes an essay?" or "Who is the author of AI-generated texts?" Because of this, a university should coordinate legal opinions and provide clear information to students and staff (Ruhr University Bochum, 2023). The legal opinion cited there is intended to provide guidance to universities in North Rhine-Westphalia (Germany) on how issues related to ChatGPT and similar programs should be handled regarding copyright and examination laws. However, some copyright issues can only be resolved with the providers of the AI assistants, since they train the models with data, and it is not always transparent whether all training data was used legally (Kasneci et al., 2023).

5. Discussion

So far, we have outlined the impact of AI assistants in the context of higher education. However, there is another important topic that needs to be addressed regarding the use of AI assistants. As with other emerging technologies, we need to be aware of ethical issues, especially when the technology is used to support decision-making. Therefore, we present our concerns below. In addition, this section presents lessons learned from the agile community that are relevant to transformation in the context of higher education and can support leaders.

5.1. Ethical issues

There are several ethical issues related to ChatGPT, which is based on large language models (LLMs). The main ones, in order of importance, are:

- Discrimination: LLMs learn the human biases that appear in the training texts, so they can sometimes generate biased, unfair, or discriminatory answers that may psychologically harm some readers. In addition, since answers depend on the prompt language, there is also a quality bias according to the language used (that is, for the same prompt, the answer may be different for different languages).

- No attribution: Current LLMs cannot give attribution to the texts that are used to generate an answer. Hence, ownership, copyrights, and other intellectual rights are not protected.

- Weak and arrogant character: ChatGPT trusts humans and, in that sense, is very naive and can be easily manipulated (e.g., "tell me why drinking chlorine is good"). On the other hand, it always answers with confidence, even when it is wrong. So, people may believe it unless they can easily check the facts.

- Consent and privacy: LLMs are mainly trained using web documents that may have usage restrictions that were not respected. Even if there are no restrictions, there is never explicit consent to allow these documents to be used as training data. As a result, the generated answers may reveal private information, violating personal or institutional privacy.

The ownership and copyright of text generated by ChatGPT are unclear, and one could argue that a text belongs to the person who prompted it. Alternatively, one could argue that the AI-generated output belongs to the AI and needs to be cited. Either way, we as a society or as institutions need to decide how to handle AI-generated output.

5.2. Lessons learned from the agile community

As described above, several AI assistants exist and are currently used by students and lecturers for various use cases. With the release of ChatGPT and the immense interest in the topic, a disruptive change has already begun. Thus, one may predict that tools such as ChatGPT will not disappear. Furthermore, we expect the release of further AI tools [for a list of AI-based tool examples, see Schön et al. (2023)] and their integration into the existing tool landscape (e.g., the Microsoft Office Suite). From a change management perspective, this situation may be described as a tipping point.

Disruptive changes are not a new phenomenon that occurs only with the release of AI tools. Rather, we know such situations from various examples: the rising dynamics of markets, including the challenge of meeting customer needs at an accelerating pace; navigating uncertain and fast-paced environments; or the fundamental changes in work organization due to the COVID-19 pandemic, to name just a few. The agile community is used to disruptive changes, and we know from several areas (e.g., agile transformation) that such rapid transitions come with various challenges (e.g., Dikert et al., 2016; Schön et al., 2017; Karvonen et al., 2018; Strode et al., 2022). So, what can we learn from the agile community to support this transformation in the context of higher education? The objective of the following discussion is to answer our RQ2: How can higher education institutions adapt to the changes brought by AI assistants?

First, we want to point to one major challenge, which is well-known in the area of agile transformation and has gained more and more research interest in recent years: the need for cultural change (Sidky et al., 2007; Kuchel et al., 2023). We know that the interplay between technical and cultural agility (also known as being vs. doing agile) is of high importance when using or introducing agile methods and practices (Diebold et al., 2015; Küpper et al., 2017). Hence, achieving a fit between the cultural values of an organization and the underlying values and principles of agile (Schwaber and Sutherland, 2020; Beck et al., 2021) requires cultural change.

In-depth knowledge from the area of agile transformation, particularly regarding cultural aspects, may support us in tackling the upcoming challenges and promoting the potential of AI tools for higher education. From our point of view, it is important to establish a culture of trust, especially between lecturers and students. Established control mechanisms such as plagiarism scanners are now useless. Currently, there are no reliable technical methods for determining whether a text or other content was generated by an AI assistant such as ChatGPT or by a human being. Thus, we argue that there is a need for a cultural shift toward a value-based learning approach that focuses on a trustful learning environment. This requires new competencies of students and lecturers (e.g., self-organization or adaptation of the learning process according to continuous feedback; Schön et al., 2022) as well as new learning assessments. We also see the need for a defined set of values and principles surrounding this topic. This set would provide a foundation for a value-based learning approach regarding the upcoming aspects of AI assistant usage.

Summarizing this discussion, we point to the aspect of being vs. doing agile with regard to the integration of AI assistants into the higher education context. As AI assistants are available and already used by both students and lecturers, the technical facet (doing) is covered. However, as described above, the facet of values and principles (being) needs our focus, e.g., through discussion of how we can enable a new mindset in the higher education context. By finding comprehensive solutions that address both facets, we can create a future-oriented, sustainable, and resilient educational environment, which may be described as an agile higher education context.

5.3. Limitations

Although our study was designed and conducted according to established guidelines, it is important to consider certain limitations.

5.3.1. Construct validity

We developed our conceptual structure for AI in higher education inductively in several iterations based on existing literature and discussions among researchers. Since this research concerns an emergent technology, it cannot cover deeply unresolved issues such as the ethical implications of the use of AI assistants in the context of higher education. However, with our conceptual structure, we want to outline the open issues that need to be addressed in future research.

5.3.2. Internal validity

We outlined the connection between the concepts and relationships of our conceptual structure and the existing literature in Section 4. One potential concern is the possibility of confounding variables, which could have influenced the observed relationships between the concepts. We tried to avoid this through intensive, structured discussions among the group of authors.

5.3.3. External validity

The aim of our conceptual structure is to formalize ongoing activities in terms of the changes brought about by generative AI assistants such as ChatGPT. Since this is an emerging technology, we cannot be sure that we have considered all aspects. Therefore, we need to conduct case studies in the future to observe and better understand the changes. Our findings are currently applicable to similar AI assistants, such as chatbots from other providers, and also to other areas such as image generation or speech generation (see Figure 1, type of AI Assistant).

6. Conclusion and future work

This paper presents a conceptual structure that highlights the changes that AI assistants bring to the context of higher education. Our conceptual structure was developed by means of a literature review and extensive discussion among researchers and lecturers. The conceptual structure covers the following concepts: Humans, Learning, and Organization. Furthermore, we discuss the implications that AI assistants have for the context of higher education. These implications comprise changes addressing: (1) how lecturers and students teach and learn, (2) the competencies that both need for the ethical and technical usage of AI assistants, (3) the learning experience, (4) the evolution of learning assessments and grading, (5) changes to regulations, and (6) topics that must be addressed by the organization. Moreover, we present critical aspects such as ethical issues as well as changes to the value system and mindset that must be guided by the people involved in the transformation process (e.g., students, lecturers). To this end, we present lessons learned from the agile community.

Technological innovations, such as the digital calculator or search engines, have challenged common practices in research and teaching, as AI assistants are doing today. However, the urgency with which we must act is greater for AI assistants, as these tools have already been used by many people in a short amount of time. We need to examine our processes and adapt them to new circumstances, addressing the concepts of Humans, Learning, and Organization. At the same time, we need to consider a shift in value systems toward a value-based learning approach that requires new competencies of lecturers and students. Only then can we fully take advantage of AI assistants in higher education.

In future work, we want to investigate the impact of AI assistants in higher education through empirical studies. For instance, we have started case studies that investigate the changes brought by AI assistants to higher education. Since this research concerns an emergent technology, we have to observe and analyze this phenomenon in the long term to cover deeply unresolved issues such as the ethical implications of the use of AI assistants in higher education. In addition, we want to shape the transformation process and clarify what should be discussed regarding the ethical use of AI assistants and the change toward a value-based learning approach.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://figshare.com/s/5a3297b38934470b6c69.

Author contributions

E-MS, MN, CH–S, RB–Y, and MR: conceptualization, methodology, writing—original draft, review, and editing. All authors contributed to the article and approved the submitted version.

Funding

The article processing charges for this article are supported by the University of Applied Sciences Emden/Leer.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1. ^Examples of our test prompts can be found in Schön et al. (2023).

2. ^Available online at: https://stackoverflow.com/ (accessed February 10, 2023).

3. ^Available online at: https://www.salesforce.com/eu/ (accessed February 10, 2023).

References

Alam, A., and Mohanty, A. (2022). “Foundation for the future of higher education or 'misplaced optimism'? Being human in the age of artificial intelligence,” in Communications in Computer and Information Science, eds M. Panda, S. Dehuri, M. R. Patra, P. K. Behera, G. A. Tsihrintzis, S. -B. Cho, and C. A. Coello (Springer Science and Business Media Deutschland GmbH), 17–29. doi: 10.1007/978-3-031-23233-6_2

Aljanabi, M., Ghazi, M., Ali, A. H., and Abed, A. (2023). ChatGPT: open possibilities. Iraqi J. Comput. Sci. Math. 4, 62–64. doi: 10.52866/20ijcsm.2023.01.01.0018

Atlas, S. (2023). ChatGPT for Higher Education and Professional Development: A Guide to Conversational AI. DigitalCommon@URI.

Aydin, Ö. (2022). OpenAI ChatGPT Generated Literature Review: Digital Twin in Healthcare. Available online at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4308687 (accessed February 10, 2023).

Beck, K., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., et al. (2021). Agile Manifesto. Available online at: https://agilemanifesto.org/ (accessed February 10, 2023).

Buchem, I., and Baecker, N. (2022). “Nao robot as scrum master: Results from a scenario-based study on building rapport with a humanoid robot in hybrid higher education settings,” in Training, Education, and Learning Sciences, Vol. 59 (AHFE International), 65–73.

Caspari-Sadeghi, S. (2023). Learning assessment in the age of big data: learning analytics in higher education. Cogent Educ. 10, 2162697. doi: 10.1080/2331186X.2022.2162697

Cotton, D. R., Cotton, P. A., and Shipway, J. R. (2023). Chatting and Cheating: Ensuring academic integrity in the era of ChatGPT. Available online at: https://edarxiv.org/mrz8h/download/?format=pdf (accessed February 10, 2023).

Diebold, P., Küpper, S., and Zehler, T. (2015). “Nachhaltige agile transition: Symbiose von technischer und kultureller agilität (sustainable agile transition: Symbiosis of technical and cultural agility),” in Projektmanagement und Vorgehensmodelle 2015, eds M. Engstler, M. Fazal-Baqaie, E. Hanser, M. Mikusz, and A. Volland (Bonn: Gesellschaft für Informatik e.V), 121–126.

Dikert, K., Paasivaara, M., and Lassenius, C. (2016). Challenges and success factors for large-scale agile transformations: a systematic literature review. J. Syst. Softw. 119, 87–108. doi: 10.1016/j.jss.2016.06.013

Escalona, M. J., and Koch, N. (2007). “Metamodeling the requirements of web systems,” in Web Information Systems and Technologies, eds J. Filipe, J. Cordeiro, and V. Pedrosa (Berlin; Heidelberg: Springer), 267–280.

Fauzi, F., Tuhuteru, L., Sampe, F., Ausat, A. M. A., and Hatta, H. R. (2023). Analysing the role of chatGPT in improving student productivity in higher education. J. Educ. 5, 14886–14891. doi: 10.31004/joe.v5i4.2563

Gao, C. A., Howard, F. M., Markov, N. S., Dyer, E. C., Ramesh, S., Luo, Y., et al. (2022). Comparing scientific abstracts generated by chatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers. bioRxiv. doi: 10.1101/2022.12.23.521610

Haque, M. U., Dharmadasa, I., Sworna, Z. T., Rajapakse, R. N., and Ahmad, H. (2022). I think this is the most disruptive technology: Exploring sentiments of ChatGPT early adopters using twitter data. arXiv[Preprint]. arXiv: 2212.05856. doi: 10.48550/arXiv.2212.05856

Karvonen, T., Sharp, H., and Barroca, L. (2018). “Enterprise agility: why is transformation so hard?” in Agile Processes in Software Engineering and Extreme Programming, eds J. Garbajosa, X. Wang, and A. Aguiar (Cham: Springer International Publishing), 131–145.

Kasneci, E., Sessler, K., Kuechemann, S., Bannert, M., Dementieva, D., Fischer, F., et al. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Diff. 103, 102274. doi: 10.1016/j.lindif.2023.102274

Klein, A. M., Rauschenberger, M., Thomaschweski, J., and Escalona, M. J. (2021). Comparing voice assistant risks and potential with technology-based users: a study from Germany and Spain. J. Web Eng. 7, 1991–2016. doi: 10.13052/jwe1540-9589.2071

Kuchel, T., Neumann, M., Diebold, P., and Schön, E. -M. (2023). “Which challenges do exist with agile culture in practice?,” in SAC '23: Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing (Tallinn), 1018–1025. doi: 10.1145/3555776.3578726

Küpper, S., Kuhrmann, M., Wiatrok, M., Andelfinger, U., and Rausch, A. (2017). “Is there a blueprint for building an agile culture?” in Projektmanagement und Vorgehensmodelle, eds A. Volland, M. Engstler, M. Fazal-Baqaie, E. Hanser, O. Linssen, and M. Mikusz (Bonn: Gesellschaft für Informatik), 111–128.

Lebens, M., and Finnegan, R. (2021). “Using a low code development environment to teach the agile methodology,” in Agile Processes in Software Engineering and Extreme Programming, eds P. Gregory, C. Lassenius, X. Wang, and P. Kruchten (Cham: Springer International Publishing), 191–199.

Medin, D. L. (1989). Concepts and conceptual structure. Am. Psychol. 44, 1469–1481.

Neumann, M., and Baumann, L. (2021). “Agile methods in higher education: Adapting and using eduScrum with real world projects,” in Proceedings of the 2021 IEEE Frontiers in Education Conference (FIE) (Lincoln, NE: IEEE), 1–8. doi: 10.1109/FIE49875.2021.9637344

Neumann, M., Rauschenberger, M., and Schön, E.-M. (2023). “We Need to Talk About ChatGPT:” The Future of AI and Higher Education. Hochschule Hannover.

Object Management Group (2017). Unified Modeling Language Version 2.5.1. Technical report, Object Management Group.

OpenAI (2022). ChatGPT: Optimizing Language Models for Dialogue. Available online at: https://openai.com/blog/chatgpt/ (accessed February 10, 2023).

Ouyang, F., Zheng, L., and Jiao, P. (2022). Artificial intelligence in online higher education: a systematic review of empirical research from 2011 to 2020. Educ. Inform. Technol. 27, 7893–7925. doi: 10.1007/s10639-022-10925-9

Perry, N., Srivastava, M., Kumar, D., and Boneh, D. (2023). Do Users Write More Insecure Code With AI Assistants? Available online at: https://anonymous.4open.science/r/ui (accessed February 10, 2023).

Qadir, J. (2022). Engineering Education in the Era of ChatGPT: Promise and Pitfalls of Generative AI for Education. doi: 10.36227/techrxiv.21789434.v1

Rauschenberger, M., Willems, A., Ternieden, M., and Thomaschewksi, J. (2019). Towards the use of gamification frameworks in learning environments. J. Interact. Learn. Res. 2019, 147–165.

Rudolph, J., Tan, S., and Tan, S. (2023). ChatGPT: bullshit spewer or the end of traditional assessments in higher education? J. Appl. Learn. Teach. 6, 343–363. doi: 10.37074/jalt.2023.6.1.9

Ruhr University Bochum (2023). Bochumer Projekt schafft Klarheit zu KI-Tools für NRW-Hochschulen (Bochum Project Provides Clarity on AI Tools for NRW Universities).

Schön, E.-M., Buchem, I., and Sostak, S. (2022). “Agile in higher education: how can value-based learning be implemented in higher education?” in Proceedings of the 18th International Conference on Web Information Systems and Technologies (WEBIST 2022) (Valletta), 45–53. doi: 10.5220/0011537100003318

Schön, E.-M., Neumann, M., Hofmann-Stölting, C., Baeza-Yates, R., and Rauschenberger, M. (2023). Research protocol Chat GPT XP. Available online at: https://figshare.com/s/5a3297b38934470b6c69 (accessed July 17, 2023).

Schön, E.-M., Sedeño, J., Mejías, M., Thomaschewski, J., and Escalona, M. J. (2019). A metamodel for agile requirements engineering. J. Comput. Commun. 7, 1–22. doi: 10.4236/jcc.2019.72001

Schön, E.-M., Thomaschewski, J., and Escalona, M. (2017). Agile requirements engineering: a systematic literature review. Comput. Standards Interfaces 49, 79–91. doi: 10.1016/j.csi.2016.08.011

Schwaber, K., and Sutherland, J. (2020). The Scrum Guide. Available online at: https://www.scrumguides.org/scrum-guide.html (accessed February 10, 2023).

Sidky, A., Arthur, J., and Bohner, S. (2007). A disciplined approach to adopting agile practices: the agile adoption framework. Innovat. Syst. Softw. Eng. 3, 203–216. doi: 10.1007/s11334-007-0026-z

Strode, D. E., Sharp, H., Barroca, L., Gregory, P., and Taylor, K. (2022). Tensions in organizations transforming to agility. IEEE Trans. Eng. Manage. 69, 3572–3583. doi: 10.1109/TEM.2022.3160415

Susnjak, T. (2022). ChatGPT: The end of online exam integrity?. arXiv[Preprint]. arXiv: 2212.09292. doi: 10.48550/ARXIV.2212.09292

The decoder. (2023). Google launches ChatGPT competitor “Bard” and more. Available online at: https://the-decoder.com/google-launches-chatgpt-competitor-bard (accessed February 9, 2023).

Winkler, R., and Soellner, M. (2018). Unleashing the potential of chatbots in education: a state-of-the-art analysis. Acad. Manage. Proc. 2018, 15903. doi: 10.5465/AMBPP.2018.15903abstract

Zawacki-Richter, O., Marin, V. I., Bond, M., and Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education–where are the educators? Int. J. Educ. Technol. Higher Educ. 16, 1–27. doi: 10.1186/s41239-019-0171-0

Keywords: AI, ChatGPT, higher education, disruption, agile values

Citation: Schön E-M, Neumann M, Hofmann-Stölting C, Baeza-Yates R and Rauschenberger M (2023) How are AI assistants changing higher education? Front. Comput. Sci. 5:1208550. doi: 10.3389/fcomp.2023.1208550

Received: 19 April 2023; Accepted: 12 July 2023;
Published: 27 July 2023.

Edited by:

Kamal Kant Hiran, Symbiosis University of Applied Sciences, India

Reviewed by:

Sandra Sanchez-Gordon, Escuela Politécnica Nacional, Ecuador
Abdallah Qusef, Princess Sumaya University for Technology, Jordan

Copyright © 2023 Schön, Neumann, Hofmann-Stölting, Baeza-Yates and Rauschenberger. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Eva-Maria Schön, eva-maria.schoen@hs-emden-leer.de
