POLICY AND PRACTICE REVIEWS article

Front. Educ., 20 February 2025

Sec. Higher Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1505370

This article is part of the Research Topic: AI's Impact on Higher Education: Transforming Research, Teaching, and Learning.

Higher Education Act for AI (HEAT-AI): a framework to regulate the usage of AI in higher education institutions

  • 1Department Computer Science and Security, St. Pölten University of Applied Sciences, St. Pölten, Lower Austria, Austria
  • 2Service and Competence Centre for Teaching/Learning Development and Educational Offers, St. Pölten University of Applied Sciences, St. Pölten, Lower Austria, Austria

The introduction of artificial intelligence (AI) into educational institutions is part of a global trend shaped by the capabilities of this technology. Due to their disruptive nature, however, AI technologies greatly affect the way teaching and learning are done. It is therefore essential to establish clear guidelines that not only ensure that all competencies required by the curricula are still effectively taught, but also empower students to use the new technology in a productive manner. Developing such guidelines for emerging and dynamic technologies is a very challenging task, as rules often struggle to keep pace with rapidly evolving advancements. The European Union found a good way to tackle this problem in its AI Act by introducing a risk-based approach to regulating organizations' AI applications. Depending on the level of risk, applications might be prohibited, require extensive analysis and safeguards, have transparency obligations, or need no further action. This paper adapts the core structure of the AI Act to the education sector to provide teachers and students with a structured framework for dealing with AI. Various use cases, based on teaching and learning life cycles, are presented to illustrate the versatility of AI in the teaching and learning process. By establishing such a framework, we not only promote competence development in dealing with AI but also contribute to an ethical and responsible use of AI in education.

1 Introduction

Although artificial intelligence (AI) is widely used in research across all domains (Xu et al., 2021), the advancements of generative AI have led to many discussions about the right way to integrate this new technology into teaching and learning activities.

Higher Education Institutions (HEI) all over the world reacted in different ways to this new development. While some universities designed guidelines and policies on the usage of AI in courses, others tried to ban it. Recently, some universities have therefore even decided to change the process for bachelor theses.

As this rapidly developing technology is also going to change the world of work, it is vital that universities adapt their practices to this new situation and its disruptive impact on education. It is indisputable that artificial intelligence offers numerous new applications for higher education institutions, both for educators and learners.

Knowledge workers have been shown to be much more productive with AI support (Dell'Acqua et al., 2023), for example when publishing (research) texts (Kitamura, 2023) or reducing administrative time (Bond et al., 2024). Another crucial benefit of AI in education is that with the help of generative AI, people with special educational needs can also be integrated into educational settings, allowing inclusive education (Khazanchi and Khazanchi, 2024).

Furthermore, the use of AI enables teachers to provide individual learning materials and learning pathways (Bond et al., 2024). Support for developing tailored educational content increases student engagement and learning outcomes (Holmes et al., 2019). These developments could lead to broader social impacts by increasing equality of opportunity for students.

The support of generative AI may also have economic effects, as the workload of faculty could be reduced. On the other hand, significant investments in data-protected and safe AI infrastructure are required, which may strain budgets (Saidakhror, 2024).

The use of artificial intelligence also presents new challenges for academic organizations. Since the release of ChatGPT, numerous articles have pointed out that it performs well on some assignments and exams. Various studies highlight that generative AI is already used by students to write assignments or essays (e.g., Oravec, 2023; Sweeney, 2023).

While generative AI tools have the potential to enhance personalized learning and engagement, there are concerns about their potential to undermine critical thinking and perpetuate misinformation. A recent study examining the relationship between students' use of generative AI and their exam performance reveals that students who use generative AI tend to score lower in their assessments (Wecks et al., 2024).

Further challenges such as data privacy, bias, and the need for ethical frameworks must be addressed to fully leverage AI's benefits in teaching and learning (Baek and Wilson, 2024).

As technology develops further and generative AI is integrated more and more into our daily routines and applications (e.g., Microsoft Copilot), this challenge is going to increase.

If HEIs cannot ensure that AI is used in a responsible manner, severe consequences could follow. Improper use of the technology can lead to incorrect content (i.e., hallucinations). Therefore, it is crucial to establish rules that, on the one hand, encourage the use of AI and, on the other hand, demand transparency and critical assessment of obtained results.

Therefore, in this paper we examine the following research questions:

• What do students and teachers need in order to deal responsibly with artificial intelligence?

• How can a framework for higher education institutions regulate the use of artificial intelligence?

The major contribution of this paper is the introduction of a flexible framework that regulates AI usage in HEIs and at the same time shows the consequences of non-compliance.

Inspired by the AI Act of the European Union, the framework takes a risk-based approach (e.g., risk to privacy, risk to academic integrity). The term risk is defined by the EU AI Act (European Commission, 2024) as “…the combination of the probability of an occurrence of harm and the severity of that harm.”
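
Read as a formula, and under the common risk-matrix interpretation (the Act itself prescribes no calculation method), this definition can be written as

risk = probability of harm × severity of harm,

so a use case can land in a higher category either because harm is likely or because the potential harm, however unlikely, would be severe.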

This paper focuses mainly on generative AI, addressing AI systems capable of creating text, images, and videos. However, the framework introduced can be further extended to encompass other approaches to artificial intelligence, such as machine learning techniques that facilitate decision-making, predictions, and recommendations. A pertinent example is personalized learning, where educational content is recommended based on student training data.

The remainder of this paper is structured as follows. Section 2 outlines our research methodology. Section 3 provides an overview of how AI technologies are currently used in HEIs and what rules have been established to regulate their usage. Furthermore, we briefly highlight those aspects of the European AI Act that have been used to derive and develop our proposed Higher Education Act for AI (HEAT-AI). Section 4 introduces our novel approach to governing the usage of artificial intelligence, especially generative AI, in educational institutions. To clarify how the proposed framework can be used, we also include example use cases. In Section 5 we discuss and interpret our findings, before presenting our main conclusions in Section 6. Section 7 presents future work.

2 Methodology

In this section, we outline our research methodology to develop a framework to regulate AI technologies in higher education institutions.

Figure 1 depicts the main steps of our research methodology, combining theoretical (dark blue) and empirical (blue-green) steps.

Figure 1. HEAT-AI research methodology.

Our first step was a collaborative analysis of the problems, challenges, and opportunities with key stakeholders at the St. Pölten University of Applied Sciences, which offers bachelor and master programs in the fields of technology, business, social affairs, and health.

• Open space with 23 bachelor and master program directors (March 2023).

• Round table within smaller groups (April 2023–March 2024).

The insights of these stakeholders shaped our understanding of AI usage, concerns, and opportunities in higher education.

To ensure the robustness of our findings and to get a broader view on the topic, we conducted a comprehensive literature review on the use and potential of artificial intelligence. This review, which included an exploration of AI's benefits and drawbacks, served as a solid foundation for our subsequent work.

Building on our literature review, we conducted a comparative study of the rules and regulations of leading higher education institutions. In addition, we analyzed the AI Act, the first comprehensive AI regulation worldwide, to build knowledge for the development of a future-proof and flexible AI regulation for universities.

With the knowledge gained in the previous steps, we started to design an approach and asked key stakeholders, such as the committee for quality development in teaching or the committee for study law at the St. Pölten University of Applied Sciences, for feedback.

Based on our initial design and the feedback received, we began the development of a pilot version of HEAT-AI. The first draft was completed in June 2024. An iterative process with members of the University Board (one iteration also involving students) helped finalize the framework. HEAT-AI was approved by the University Board and went live in September 2024.

As AI is a highly dynamic field, a broad evaluation of the framework's effectiveness and usability has already started. Currently, we are collecting testimonials from all departments regarding the use of the regulations within the supervision process of scientific theses. In addition, we actively collect questions from lecturers and students regarding the usage of the framework within teaching and learning processes. The focus groups with lecturers and students began in December 2024.

The framework will be evaluated at the end of the academic year 2025.

3 Related work

In this section, we highlight use cases of AI in HEIs as well as their policies and guidelines. Furthermore, we provide a short overview of the relevant parts of the AI Act, which build the foundation of our HEAT-AI approach.

3.1 Artificial intelligence in education

The use of artificial intelligence has made its way into various contexts of teaching and learning activities at universities. AI is both a part of digitalization and an independent field. The fundamental insights on digitalization in teaching, research, open science, and university administration can also be applied to changes brought about by AI. Generative AI in particular has brought a disruptive change to the way of teaching and learning. Text-to-image AI generators assist teachers in implementing new art teaching concepts (Dehouche and Dehouche, 2023). Text-to-text AI generators provide personalized learning support and help teachers prepare lectures, support students, and evaluate their work. A systematic categorization has been developed based on a broad meta study (Zawacki-Richter et al., 2019). The researchers in this study related their use cases for higher education to the student life cycle (Schulmeister, 2007), starting from guidance on study choices through to graduation. Their results led to the following categories:

Profiling and prediction address, for example, the likelihood of students dropping out of a program. This category focuses on admission decisions, course scheduling, dropout, and retention, as well as student models and academic achievement. By applying machine learning methods, AI is used to recognize and classify patterns as well as to model predictive student profiles.

Intelligent tutoring systems focus on the teaching and learning level. This includes teaching and learning course content, where students and teachers use chatbots to help achieve learning outcomes. Furthermore, AI helps identify students' problems in achieving the intended learning outcomes and provides automated feedback and learning material. Another use case is the facilitation of collaboration between learners by supporting online discussions or fostering collaborative writing.

Assessment and evaluation include automated grading, the provision of feedback, the evaluation of students' progress and academic integrity, and the evaluation of teaching effectiveness.

Adaptive systems and personalization aim at individual course content delivery and learning pathways as well as teaching design. This includes monitoring and guiding students using academic data (Zawacki-Richter et al., 2019).

The above-mentioned categorization highlights how broadly AI can be implemented at different levels of a student's life cycle. Each of these categories involves various risks, such as unfairness in admission processes (Marcinkowski et al., 2020) or inaccuracy in the prediction of students' performance (Hemachandran et al., 2022).

As the category system of Zawacki-Richter et al. (2019) was developed before the rise of broad access to generative AI tools, the corresponding use cases were not included. To identify specific use cases for teaching and learning, we first refer to the policies of the Top 5 universities of the Times Higher Education World University Ranking 2024 (Times Higher Education, 2024).

Secondly, we analyze the typical lifecycles of teaching design and learning, cf. Sections 3.1.2, 3.1.3. Student-related use cases are defined as specific interactions in which generative AI is used to enable a specific learning process or to complete tasks. Use cases for teachers refer to all activities in which teachers use generative AI to design lessons, teach, examine, or adapt the curriculum. To provide a deeper understanding of the topic, in the following we highlight a selection of use cases divided by target group.

3.1.1 Use cases according to the Top 5 Universities

According to the Top 5 universities of the Times Higher Education World University Ranking 2024 [i.e., University of Oxford, Stanford University, Massachusetts Institute of Technology (MIT), Harvard University, and University of Cambridge], teaching-centered use cases include:

• giving formative feedback,

• evaluating students' work,

• developing a grading rubric,

• providing questions for reflection on a specific topic,

• developing scenarios and cases,

• anticipating students' questions,

• planning learning activities and specifying assignments,

• designing individual learning pathways,

• designing cognitive retrieval practice quizzes.

Student-centered use cases include:

• using generative AI to find (new or alternative) learning techniques and study habits (e.g., asking generative AI to give examples of theories or to create a test on a specific topic),

• accessing information through different senses (sight, sound, etc.).

Use cases that are both student-centered and teacher-centered, as they apply to many everyday tasks, include:

• translation of text,

• transcription of audio data,

• writing and brainstorming assistance,

• generating ideas and specific examples,

• synthesizing information,

• summarizing large amounts of text or other data,

• research and analysis capabilities,

• project planning,

• generating visual summaries.

For an all-encompassing picture, the student lifecycle and the teacher lifecycle were used in the next step to identify possible blind spots.

3.1.2 Teacher's lifecycle: planning and teaching a course

The teaching design lifecycle (see Figure 2) is a systematic approach to planning, delivering, and continuously improving courses in higher education. This lifecycle ensures that courses are effective, engaging, and aligned with both student needs and institutional goals.

Figure 2. Teacher's life cycle.

The first step in the teaching design lifecycle is to conduct a needs analysis of the target group. This involves identifying the learning needs and the learners' prior knowledge through analyzing the current curriculum. Once the learning needs have been identified, the next step is to define clear, measurable learning objectives. These objectives should be aligned with the curriculum's goals and should be competency-oriented, student-centered, and achievable.

A teacher then develops the course content as well as the course materials. According to the learning outcomes and the content, the teacher chooses instructional strategies that facilitate learning. Appropriate learning and teaching methods include lectures, discussions, exercises, feedback, etc. in different group forms (i.e., group work, plenary work, and individual work) and different learning spaces (on premises, online, in the field, on the job, etc.). The last design step is the assessment. Both formative and summative assessment techniques are useful tools for evaluating and grading the learning outcomes. Assessment design should be aligned with the learning process, the corresponding instructional methods, and the learning outcomes. The actual teaching situations involve communication between lessons as well as the organization of learning materials (e.g., via a learning management system).

Course effectiveness and satisfaction should be surveyed by collecting feedback and analyzing assessment outcomes. Teachers can then identify the areas of the course design to be revised and the areas that should be maintained. The results of these reflections influence the next course planning (Lehner, 2019; Osterroth, 2018).

3.1.3 Learner's lifecycle: being a learner in a course

The first stage of the learner's lifecycle (see Figure 3) is the introduction to the course structure, intended learning outcomes, and expectations. Students familiarize themselves with the learning management system (LMS) and course materials. In addition, they engage in initial activities to build community and rapport among students and with teachers. Students actively participate in lectures, discussions, group work, and other learning activities. They interact with peers and instructors asynchronously through forums and collaborative tools. Students also engage with readings, multimedia, and lectures to understand the material and foster their learning and comprehension. They apply their knowledge through exercises, case studies, and practical tasks, which help reinforce learning. Office hours, tutoring, and study groups provide additional support.

Figure 3. Learner's lifecycle.

(Peer-)feedback and self-assessment help students identify gaps in their learning outcomes. They revisit and revise course materials and seek additional resources or support for challenging topics. Students participate in formative assessment techniques, such as quizzes and assignments, to foster their understanding of the material. They complete their courses through summative assessments, such as continuous assignments, exams, and projects, and demonstrate the achievement of the intended learning outcomes.

Ideally, students also reflect on their learning experiences and outcomes, assessing their progress toward learning objectives and their learning techniques. At the end of the course, they give feedback and/or evaluate the course on its efficiency and their satisfaction with the learning and teaching process (Biggs et al., 2022).

These models in teaching and learning show which specific use cases should be addressed by policies on the usage of generative AI in higher education.

3.2 Policies and guidance documents

In this section, we will take a closer look at the major aspects of the selected AI policies that are currently in use.

The European Commission highlights in its ethical guidelines for the use of artificial intelligence and data in teaching and learning the importance of human agency, fairness, humanity, and justified choice (European Commission, Directorate-General for Education, Youth, Sport and Culture, 2022). The office for educational technology in the United States of America (USA) emphasizes “keeping humans in the loop” and stresses the importance of informing, training, and involving educators in policy making processes (Cardona et al., 2023).

Both the European and the US policies address the same topic areas for using AI in general:

• security and privacy (e.g., data protection),

• equity and access,

• transparency,

• ethical considerations (e.g., human agency, environmental impact, bias, exploitation...),

• academic integrity (e.g., fairness, respect, honesty, ...),

• accountability.

Security and privacy are paramount, with a focus on protecting sensitive data, exemplified by regulations like the upcoming European AI Act. Equity and access underscore efforts to ensure fair distribution and utilization of AI tools across diverse student populations, advocating for inclusive access to educational resources and opportunities.

Transparency is emphasized, calling for clarity and openness in the development and deployment of AI technologies within educational settings. This involves revealing the inner workings of AI systems to foster trust and understanding. Ethical considerations are central, addressing concerns regarding human agency, environmental impact, bias, and exploitation in AI applications.

The guidelines aim to mitigate these risks, ensuring that AI in education upholds ethical standards and respects the dignity of all individuals.

Academic integrity is upheld through a commitment to fairness and honesty in research and educational practices involving AI. Collaboration and integrity are promoted to maintain the credibility and integrity of academic pursuits in the realm of artificial intelligence.

Accountability is emphasized, holding institutions and individuals responsible for the ethical and equitable use of AI in higher education. HEI need to ensure that stakeholders are accountable for their actions and decisions related to AI implementation.

Additionally, policies address understanding, identifying, and preventing academic misconduct and the corresponding rethinking of assessment methods. Along those lines of thought, guidelines on AI should include how to correctly attribute the work of generative AI in students' assignments (Chan and Hu, 2023).

For policy making in higher education, there must be a clear distinction between teaching with AI and teaching for AI. Teaching with AI leverages existing AI tools to enhance teaching practices, while teaching for AI equips students with the knowledge and skills needed to navigate the AI-driven world effectively. One research area dedicates its work to building curricula and offering electives that include the development of AI competencies (Chan, 2023).

For teachers to use AI tools with a high level of awareness, they should also be equipped with a certain level of AI literacy (European Commission, 2023). AI literacy should be established prior to teaching with AI tools and should focus on fundamental concepts related to computer systems, programming, machine learning, and data science. AI literacy ensures that teachers and students can navigate AI-driven environments confidently.

The Top 5 universities of the Times Higher Education World University Ranking 2024 include these elements of policy making. However, their approaches differ:

While Harvard, Cambridge, and Oxford focus on specific guidelines related to legal provisions regarding studies, MIT and Stanford also aim to sensitize educators and students as well as provide training for responsible use. None of the guidelines explicitly forbid the use of AI tools for teaching and learning. Some of these policies provide specific guidance on the overall institutional stance, positioning Artificial Intelligence as a future competence and integral part of the university's strategy.

All policies on AI in higher education should mitigate risks for students, teachers, and the institution itself while supporting the advantages and opportunities of using AI tools for education.

3.3 EU artificial intelligence act

As our proposed HEAT-AI framework has been strongly inspired by the structure of the EU's AI Act, in this section, we briefly introduce the cornerstones of the new regulation.

The European Commission's proposal (European Commission, 2024) for an AI Act aims to regulate the emerging developments in the AI sector and establishes, as one of the first large economies, harmonized rules for the development and usage of AI.

Similar to the ongoing debates regarding the use of AI in academia, the development of the AI Act was marked by numerous discussions and thorough reviews. The first proposal by the European Commission was made public in 2021. After public consultations and many rounds of discussions among various stakeholders in the European Union (e.g., the European Parliament), a provisional agreement was finally reached in December 2023.

The key provisions of the upcoming regulation include the classification of AI systems according to their risks, thereby establishing obligations and responsibilities for providers and users of AI.

The AI Act defines four risk categories: unacceptable risk, high risk, limited risk, and minimal risk.

Unacceptable risk refers to AI systems that violate fundamental rights or values of the European Union. Examples could be systems that compromise human dignity or make decisions that violate human rights. The category of high-risk AI systems refers to AI systems that pose a high risk to the safety, fundamental rights, or health of EU citizens. Examples include AI used in critical infrastructure, transportation, or healthcare. AI systems with limited risk pose a certain risk, but less than high-risk systems. These can be AI applications in the area of customer management or recruitment, for example. AI systems with minimal risk are considered safe and therefore require less regulation. These include, for example, simple chatbots or voice recognition systems.

4 Higher Education Act for AI

As the AI Act provides a flexible framework for regulating the use of AI, the risk-based concept outlined in the regulation can serve as a blueprint for defining a flexible set of rules for higher education institutions.

In this section, we therefore present our developed Higher Education Act for AI (HEAT-AI), which is a framework for the secure usage of AI in teaching and research.

The development of HEAT-AI was based on the following principles:

• Students and faculty members shall be encouraged to make use of the new technology.

• Academic integrity shall not be impacted by the usage of AI.

• The new technology shall be used in an ethical and lawful manner.

• The use of AI shall not violate privacy.

To provide a better understanding of how HEAT-AI could be used in a university setting, we provide a detailed description of all risk categories, followed by sample use cases for the individual categories, in the following subsections.

As the general framework of HEAT-AI is flexible, different higher education institutions may tailor the use case categorization according to their requirements, their AI risk appetite, and the principles of the organization. The sketch below illustrates what such a tailorable categorization could look like in practice.
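
The following minimal Python sketch is our own illustration, not part of the official HEAT-AI policy; the concrete category assignments are assumptions that an institution would adapt. It encodes a use-case-to-risk-tier mapping together with the obligation each tier implies:

from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "prohibited; sanctions apply"
    HIGH = "allowed with citation, documentation, and critical review"
    LIMITED = "allowed with a transparency statement such as 'AI generated'"
    MINIMAL = "allowed without restrictions"

# Illustrative categorization only; each institution tailors this mapping
# to its own requirements, risk appetite, and principles.
USE_CASE_RISK = {
    "pass personal data to an AI tool without consent": Risk.UNACCEPTABLE,
    "present AI-generated content as one's own work": Risk.UNACCEPTABLE,
    "adopt AI-generated content that is essential to an assessment": Risk.HIGH,
    "publish an AI-generated illustration in course material": Risk.LIMITED,
    "brainstorm ideas for an ungraded exercise": Risk.MINIMAL,
}

def required_action(use_case: str) -> str:
    """Return the obligation attached to a use case's risk tier."""
    risk = USE_CASE_RISK.get(use_case)
    if risk is None:
        # Unknown use cases are not implicitly allowed: they first need
        # a risk assessment (probability and severity of harm).
        return "unclassified: assess probability and severity of harm first"
    return risk.value

for case in USE_CASE_RISK:
    print(f"{USE_CASE_RISK[case].name:12} {case}")

Encoding the categorization as data rather than logic mirrors the intent of the framework: the four tiers and their obligations stay stable, while the use case table is the part each department refines.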

4.1 Unacceptable risks of usage

Areas that pose an unacceptable risk are prohibited for both faculty members and students. As indicated in the principles of HEAT-AI, lawfulness and academic integrity have to be preserved. In the following, we provide more detailed information on specific unacceptable risks and risk categories.

Unacceptable risk includes the usage of (generative) AI in a way that violates legal requirements. An example of such a violation would be the transfer of personal data to an AI system without the consent of the concerned person (data subject), and thus a violation of the General Data Protection Regulation (GDPR1) (European Union, 2016).

The EU defines personal data [or personally identifiable information (PII)] as anything that could identify a person, including surname and first name, a private address, an e-mail address (e.g., firstname.surname@company.com), an ID number, location data (e.g., the location function on cell phones), an IP address, a cookie identifier, the advertising identifier of a telephone, and data held by a hospital or doctor that could lead to the unique identification of a natural person.

According to the GDPR, personal data that has been anonymized in such a way that the data subject cannot or can no longer be identified is not considered personally identifiable information and thus can be used in any way. It is important to mention that the data has to be truly anonymized and the anonymization must be irreversible. We are aware that there are also AI tools that do not violate the GDPR. Nevertheless, awareness should be created for the correct and lawful handling of personal data. The number of AI tools is growing and not every one of them is GDPR-compliant, so the transfer of such data to an AI system in an educational setting without the explicit consent of the data subject falls into the prohibited category.
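
Purely as an illustration of this prohibition (our own sketch, not mandated by HEAT-AI or the GDPR; the patterns below catch only a few obvious identifiers and are in no way a substitute for proper anonymization), a pre-submission check could warn users before a prompt containing common personal data is sent to an external AI tool:

import re

# Assumed, deliberately simple patterns for obvious identifiers only.
PII_PATTERNS = {
    "e-mail address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IPv4 address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def flag_obvious_pii(prompt: str) -> list[str]:
    """Return the kinds of obvious personal data found in a prompt."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

hits = flag_obvious_pii("Summarize the complaint sent by firstname.surname@company.com")
if hits:
    print("Stop: prompt contains", ", ".join(hits))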

Furthermore, taking AI-generated content (text, images, program code, etc.) and presenting it as one's own work would violate academic integrity and is therefore also strictly prohibited.

Another unacceptable use of AI involves situations where students' rights are undermined. Quality in teaching and research is an important asset. If students are assessed with the help of AI, the decision of the AI cannot always be retraced. It is therefore essential to ensure that grading is not carried out automatically by AI systems but remains the responsibility of the teachers.

Furthermore, all attempts to use artificial intelligence to cheat are strictly forbidden. Example use cases include the usage of large language models as an unauthorized aid to answer exam questions or to rephrase work in order to fool plagiarism detection.

For the effective implementation of the regulation, it is essential to introduce sanctions. If unauthorized use of AI is discovered, this can lead to far-reaching consequences. Teachers can be withdrawn from courses or receive warnings, while students can expect negative evaluations. Furthermore, any violation of the regulation is documented and reported.

In order to clarify use cases of this category, Table 1 highlights unacceptable use cases.

Table 1. Use cases—Unacceptable risk of usage.

4.2 High risks of usage

The use of AI in teaching that is considered high-risk is strictly regulated. This category includes all areas of application where the integrity of science and knowledge transfer is at risk or where the above-mentioned principles could be violated.

In education, it is important to convey correct content, build knowledge, guarantee the networking of knowledge and train students to become critical and inquisitive experts. To this end, it is also important to promote a scientific approach.

Therefore, if AI-generated content is used, it must be carefully checked and documented. It should be noted at this point that generative models in particular are not suitable for generating knowledge; large language models tend to hallucinate. They have been trained to create texts, images, etc., and are not expert systems. AI should only be used in the right context. To prevent incorrectly generated learning content, teachers and students should search for scientific publications or use search engines to find valid and verified sources; if the intent is to prepare texts linguistically, generative language models are suitable.

If AI is used by students or faculty members, it is important to consider what the requirements are. The focus here is on the teaching and learning objectives. If the content is essential for the course or the assessed performance, the adopted AI content must be documented. It is also essential to provide full details on how and which AI tool was used.

In addition, it is crucial to take a critical look at how AI is used in high-risk areas. Questions such as the following help to critically examine the application of AI in high-risk areas:

• Are the results trustworthy?

• Is there a possible bias?

• Are the answers valid?

• Are the results distorted with the help of AI?

If HEI stakeholders (e.g., students, faculty members) decide to use the output of generative AI, they should adhere to the following procedure:

1. As usual in science, the source (in this case, the generative AI tool) must be cited as the original reference.

2. In addition, the content of the statement must be substantiated by citing original, traceable, and verifiable sources.

3. The prompts and the generated output have to be provided in the appendix of the student work. The following example shows how this can look with a direct quote.

Of course, there are challenges for teachers when the main source is suddenly generative AI. It has to be judged at what point a text is no longer considered the student's own, original work. Here, it is important to clearly communicate the rules and the learning objective of the course. For example, if the learning objective is to learn the English language, it must be clearly communicated that generative AI is not permitted.

Sample Citation

“Linear regression is a statistical technique used to model and analyze the relationship between a dependent variable (also called the target variable or outcome) and one or more independent variables (also called predictors or features). The main objective of linear regression is to find the best-fitting linear equation that describes the relationship between the dependent variable and the independent variables, allowing for predictions of the dependent variable based on new data.” (ChatGPT 4o validated through [1])
The original sources are listed in the list of references:

1. Weisberg, Sanford. Applied linear regression. Vol. 528. John Wiley & Sons, 2005.

There are many cases that can be considered high risk. Table 2 shows common use cases that in our opinion should be categorized as high risk.

Table 2. Use cases—High risk of usage.

4.3 Limited risks of usage

The concept of limited risk in the use of AI in teaching refers to the potential risks associated with insufficient transparency in the use of AI.

A transparency statement serves to protect faculty and students. It ensures that people are informed when AI is used, which strengthens trust. A declaration such as "AI generated" is sufficient.

Figure 4 depicts an example of an AI-generated image.

Figure 4. An AI-generated image of Albert Einstein, created using Midjourney.

Use cases considered limited risk are described in Table 3.

Table 3. Use cases—Limited risk of usage.

4.4 Minimal risks of usage

If the use of AI falls within the minimal risk of usage category, unrestricted use of AI is permitted. This is the case if AI is used as a support and is not part of an examination modality.

Nevertheless, it is strongly recommended to check the content again afterward. It must also be reiterated here that the use of AI is only allowed if the output does not contribute to the grade.

A counterexample is a language course where the learning objective is to learn a specific language. In this case, AI shall of course not be used for translations, and the output of an AI shall not be counted as the student's own work.

In this case, transparency is also particularly important to ensure fairness, as it might have effects on grades. Many grading schemes for submissions of assignments and essays also still consider style and wording as an important factor. However, with the advent of generative artificial intelligence, more and more students are using AI to correct and rewrite texts. This could lead to a situation where students who do not use this new technology face a serious disadvantage. Therefore, it is crucial to know where and how AI is used.

Use cases posing minimal risk are shown in Table 4.

Table 4. Use cases—Minimal risk of usage.

5 Discussion

5.1 Result summary

We are currently implementing the approach at our University of Applied Sciences and gaining initial experience with it. To this end, we held several workshops with key internal and external stakeholders, such as academic directors, program directors, heads of research institutes, researchers, lecturers, and students. Care was taken to involve stakeholders from different domains such as technology, business, health, and social sciences, as different fields of study prefer different didactic concepts and examination modalities.

Curricula were reviewed, teaching and learning requirements were identified, and our framework was incorporated. In addition, we learned what program directors and lecturers need to implement HEAT-AI, such as explanatory slides and specific use cases.

From November 2024 until April 2025, the University Board is working on a process to deal with cases of misconduct, thus ensuring that the HEAT-AI guidelines are followed.

The development of the framework was driven by a comprehensive comparison of existing policies and self-collected teaching and learning concepts with stakeholders in our university. By analyzing these sources, we identified key elements that could inform the appropriate use of generative AI in higher education.

The categorization of the use cases of learning and teaching in the four distinct categories of our framework emerged through expert interviews, which provided valuable insights and ensured that the structure was grounded in practical experience. However, this categorization is not static; it is subject to regular evaluation and refinement based on continuous feedback and real-world experiences. This iterative approach allows the framework to remain flexible and responsive to evolving needs in the educational landscape, ensuring its ongoing relevance and effectiveness.

In the following, we briefly summarize our key findings for HEAT-AI:

Broad target audience: artificial intelligence affects almost all disciplines at universities.

Harmonized rules with departmental flexibility: the policy establishes harmonized rules throughout the university.

Encouraging innovation: innovation in teaching and learning using AI is strongly encouraged.

Rapid technological development: a flexible approach is essential to address the challenges posed by rapidly emerging AI technologies.

Risk-based approach: the priorities of individual higher education institutions can be established using the four different risk categories.

Transparency requirements: clear transparency requirements are established to ensure that the use of AI in teaching and learning is open and understandable to all stakeholders.

5.2 Interpretation

This section highlights the interpretation of the key findings mentioned above.

Broad target audience: during the development of the framework, when gathering requirements and meeting key stakeholders, it became clear that all university study programs were affected by the rapid development in the field of artificial intelligence. Therefore, it was crucial to have an approach that is suitable for a broad, heterogeneous target audience. For the development of the university AI regulation, it was important to use as little jargon as possible and to ensure that all stakeholders can quickly understand the new rules. The development of rules around the risks to academic integrity and privacy supported the acceptance of the new rules.

Harmonized rules with departmental flexibility: an important requirement of the development was that departments could adapt or refine the university's AI regulations to eliminate ambiguities among lecturers and students in their field and tailor the regulations to their specific needs. Using use cases to tailor the harmonized rules to the specifics of a certain discipline has proven to be very useful and well suited for this purpose.

Encouraging innovation: as a higher education institution, one objective was to support the use of useful, innovative artificial intelligence technologies. In addition, it was found that teaching students the skill of using AI responsibly and ethically could become a critical competence in the near future. Therefore, an approach that requires an assessment of the risks received broad support.

Rapid technological development: HEAT-AI provides a stable framework, particularly for high-risk scenarios, that can adapt to new developments in AI. Although the advantage definitely lies in the technology-neutral definition, it leaves more room for interpretation and, in contrast to very specific rules, can sometimes require more effort to estimate the risk of using AI in a use case that has not been defined.

Risk-based approach: having a risk-based approach to the use of AI raises awareness. We observed that communicating that risks have to be assessed when using AI technology already creates a certain degree of awareness among all stakeholders that the impacts have to be considered and must not be neglected. The risk-based approach also ensures that appropriate measures are taken depending on the level of risk.

Transparency requirements: being transparent about the use of AI is a key requirement. This is critical to be able to grade the competences of the students. In addition, technologies and applications that are used by some stakeholders might also be of interest to others. Transparently highlighting what and how AI was used therefore helps to better support all stakeholders in efficiently and effectively using the technology.

It should be mentioned that the introduction of the approach also requires training and support for all target groups. Since the introduction of the rules, we have observed broad support for the approach. However, more extensive evaluations in the future will further reveal the practical implications of this novel approach to regulating AI in universities and also show its limitations.

5.3 Comparison with existing research

In this section, we set our findings in relation to other research in the area.

In the past year, research worldwide has emphasized the need for clear, concise, and audience-oriented policies for higher education (Moore and Lookadoo, 2024). Studies highlight various areas that policies need to address. For example, while policies in the US, Japan, China, and Mongolia stress the importance of diversity, equity, and inclusion, they often lack clear discussions or actionable measures to address the digital divide.

This gap indicates a need for more focused efforts to ensure equitable access to generative AI technologies in education (Xie et al., 2024).

A survey in Australia revealed divided perspectives among institutions regarding the existence of guidelines and policies related to AI and data governance. This indicates that while some institutions have established frameworks, others are still in the early stages of developing such policies. The urgency of effective governance of AI in higher education is increasingly highlighted (Selvaratnam and Venaruzzo, 2024).

In African higher education, challenges include not only a lack of ethics and policies to govern AI use but also resource constraints and skill shortages (Maina and Kuria, 2024). On a global level, institutional policies regulate the accountability of learning outcomes, while human beings retain moral and legal responsibility for AI-related misconduct. Instructors have the freedom to decide how to incorporate generative AI tools in their courses, allowing personalized teaching methods (Dabis and Csáki, 2024).

Adopting a human-centered approach in AI ensures that stakeholder concerns about privacy and data control are adequately addressed (Alade and Aduwape, 2024).

The literature also shows that generative AI can support both teachers and learners in many areas, but only if they use it correctly (Wecks et al., 2024). Due to the easy availability of generative AI, its usage cannot be forbidden, but as with all technical aids, it is possible to determine when and how it may be used. In addition, it is difficult to estimate which new tools the rapid development of AI will produce.

There is therefore a need for a highly flexible and adaptive policy framework in the rapidly evolving landscape of generative AI technology (Ghimire and Edwards, 2024).

Our results and the policy idea of HEAT-AI respond to various issues presented in current research. An institutionalized policy that is as clear and concise as possible (e.g., concerning data protection), but still allows teachers to find their own way of teaching their respective disciplines, seems like a good answer to the ambiguity concerning the regulation of AI usage in higher education.

5.4 Implications of the findings

Universities offer different study programs in a wide variety of fields, such as technology, health, media, and the natural sciences, to name just a few.

No matter the field, we have found that the use of AI tools has conquered all disciplines. Teachers and students alike use generative AI in particular. As with any technological advancement, the easy availability of tools and a possible lack of technical knowledge can lead to misapplication.

One of our top priorities in university education is academic integrity. It is important that well-grounded content is taught, but also learned.

Learners must show that they can solve tasks independently and learn to think in a networked way within their domain. To achieve this, it is necessary to educate all stakeholders about the use of AI and to point out its limitations. Of course, not everyone needs to learn the technical details behind AI, but a basic understanding is nonetheless necessary when using technical tools. HEAT-AI uses a risk-based approach and specifies use cases to determine where AI may and may not be used.

The framework also provides information on labeling requirements. Regulatory aspects are also included without everyone having to read the legislation in its entirety. All students should have the same chances of graduating successfully, not just the students who have easy access to the right tools. To do this, awareness has to be created that targeted support is allowed, but the learning process is the most important thing.

In the following paragraphs, we would like to briefly share our initial learnings from the application of HEAT-AI.

The brevity of the rules and the clear structure of HEAT-AI led to feedback from university lecturers and students that the rules are transparent and understandable.

Communication is very important. The early involvement of stakeholders (e.g., academic directors, student representatives, researchers) led to broad support. Active communication with students is also essential in order to answer open questions before introducing the approach.

Defining the use cases in a way that supports assignments in a sound manner initially requires some effort. However, we observed that, after a while, stakeholders get used to the framework.

What still needs to be investigated is the analysis of access to certain AI applications. The transparency requirements provide the opportunity to see which AI applications are being used. This is important to ensure fairness, for example, if paid versions of AI applications provide significantly better results but are not accessible to all students.

6 Conclusion

Due to the numerous advantages of using artificial intelligence, the increasing use of this new technology in higher education institutions is irreversible. The opportunities and versatile benefits of using artificial intelligence for teaching and research are undisputed. In this work, we therefore presented selected use cases at the time of writing to highlight the current state of practice.

However, as with almost any new technology that has a major impact on the way we research, teach, or learn, the risks of using it need to be carefully assessed by universities to mitigate any emerging negative effects. It is already clearly recognizable that in order to ensure academic integrity and ethical use, it is essential to establish clear regulations governing the use of AI.

The major contribution of this article is the introduction of a future-proof and flexible framework for usage in academia, which on the one hand encourages the usage of artificial intelligence technologies in order to provide a modern education and on the other hand establishes clear rules that anticipate the rapid changes of this technology. To achieve this flexibility, the structure of our HEAT-AI policy adapts the risk-based governance approach of the European AI Act.

The presented approach should serve as a reference for other higher education institutions that currently face the pressing need to define a framework for regulating the usage of artificial intelligence.

In line with European legislation, our introduced HEAT-AI categorizes the usage of artificial intelligence into four risk categories (according to their impacts on the core values of the institution, academic integrity, ethics, and privacy) that determine the different measures to be taken if AI is used in higher education institutions.

Based on the results of this article, St. Pölten University of Applied Sciences has already established its rules for teaching and learning, which came into force this semester.

Although the effects on teaching and learning cannot be fully anticipated at the time of writing, many relevant stakeholders are supporting the approach and actively participating in its improvement by providing new use cases or experiences that can be incorporated in future versions.

An important factor that has been identified is the development of a new skill set for both teachers and students (e.g., prompting, limitations, and risk of using AI), which poses a substantial challenge due to the large number of individuals that must be trained in a relatively short period.

We are aware that the pace of AI advances and the pervasive nature of technology will require some changes in the future.

However, we are confident that the flexible structure will allow new requirements to be integrated in an efficient manner. The flexibility of the approach also allows other higher education institutions to follow our introduced approach and tailor it to their specific needs and use cases.

7 Future work

As stated in the conclusion, St. Pölten University of Applied Sciences has already introduced its AI guidelines, based on the approach outlined in this article. In order to further improve the approach, we established various evaluation and feedback mechanisms with relevant target groups (e.g., lecturers, academic directors, students, didactic specialists), which can be also used for a more in-depth analysis of the effects on teaching, learning, and usage of AI.

The initial feedback from both students and lecturers is promising, suggesting that the approach introduced herein facilitates the use of AI in the academic field while also providing clear rules. However, since the rules came into force quite recently, more data and feedback have to be collected over a longer period of time to perform a rigorous evaluation. A round table meeting to align HEAT-AI with the requirements of the Ethics Advisory Board is scheduled for December 2024. During this meeting, issues of ethical compliance, among other topics, will be discussed.

Another area of research that we plan to tackle in the future focuses on the support that is needed by higher education institutions. In order to embed new rules in an organizational setting and to facilitate the adoption of HEAT-AI in other higher education institutions, we are currently working on the definition of a holistic governance and management framework, which incorporates our recent experiences and is based on the seven components of the widely adopted COBIT framework (i.e., principles, policies, and frameworks; processes; organizational structures; culture, ethics, and behavior; information; services, infrastructure, and applications; and people, skills, and competencies). A first activity, which has already started, is the development of a training concept for internal and external lecturers.

The overall aim of our future research is the development of a holistic reference model for AI governance and management in higher education institutions, which can be tailored to specific requirements of universities and research institutions.

This article solely concentrates on the usage of AI, especially in the context of teaching and learning. As compliance requirements of higher educational institutions in Europe are constantly increasing (e.g., General Data Protection Regulation, Cyber Resilience Act, NIS2), future research activities could extend HEAT-AI to support further requirements (e.g., privacy, security, and resilience).

Author contributions

MT: Conceptualization, Methodology, Visualization, Writing – original draft, Writing – review & editing. ST: Conceptualization, Methodology, Visualization, Writing – original draft, Writing – review & editing. LD: Methodology, Visualization, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Generative AI was used in the creation of this manuscript.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1. Harmonizes the data protection laws and regulates privacy requirements within the European Union.

References

Alade, A., and Aduwape, M. (2024). Artificial intelligence (AI) in higher education: a threat or helping hand in improving student-instructor communication. Int. J. Commun. Public Relat. 9, 45–61. doi: 10.47604/ijcpr.2987


Baek, E. O., and Wilson, R. V. (2024). An inquiry into the use of generative AI and its implications in education: boon or bane. Int. J. Adult Educ. Technol. 15, 1–14. doi: 10.4018/IJAET.349233


Biggs, J., Tang, C., and Kennedy, G. (2022). Teaching for Quality Learning at University 5e. London: McGraw-hill Education (UK).


Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., et al. (2024). A meta systematic review of artificial intelligence in higher education: a call for increased ethics, collaboration, and rigour. Int. J. Educ. Technol. High. Educ. 21:4. doi: 10.1186/s41239-023-00436-z


Cardona, M., Rodríguez, R., and Ishmael, K. (2023). Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. Office of Educational Technology. Retrieved from: https://coilink.org/20.500.12592/rh21zz (accessed February 11, 2025).


Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. Int. J. Educ. Technol. High. Educ. 20:38. doi: 10.1186/s41239-023-00408-3


Chan, C. K. Y., and Hu, W. (2023). Students' voices on generative AI: perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 20:43. doi: 10.1186/s41239-023-00411-8


Dabis, A., and Csáki, C. (2024). AI and ethics: investigating the first policy responses of higher education institutions to the challenge of generative AI. Human. Soc. Sci. Commun. 11:1006. doi: 10.1057/s41599-024-03526-z

Dehouche, N., and Dehouche, K. (2023). What's in a text-to-image prompt? The potential of stable diffusion in visual arts education. Heliyon 9:e16757. doi: 10.1016/j.heliyon.2023.e16757

Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., et al. (2023). Navigating the jagged technological frontier: field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013. doi: 10.2139/ssrn.4573321

European Commission (2023). Teachers' competences. Briefing report 1 by the European Digital Education Hub's squad on artificial intelligence in education. Online.

European Commission (2024). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Online.

European Commission Directorate-General for Education, Youth, Sport and Culture. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Publications Office of the European Union.

European Union (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32016R0679 (accessed June 2023).

Ghimire, A., and Edwards, J. (2024). “From guidelines to governance: a study of AI policies in education,” in International Conference on Artificial Intelligence in Education (Springer), 299–307. doi: 10.1007/978-3-031-64312-5_36

Hemachandran, K., Verma, P., Pareek, P., Arora, N., Kumar, K. V. R., Ahanger, T. A., et al. (2022). Artificial intelligence: a universal virtual tool to augment tutoring in higher education. Comput. Intell. Neurosci. 2022:1410448. doi: 10.1155/2022/1410448

Holmes, W., Bialik, M., and Fadel, C. (2019). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign.

Khazanchi, R., and Khazanchi, P. (2024). “Generative AI to improve special education teacher preparation for inclusive classrooms,” in Exploring New Horizons: Generative Artificial Intelligence and Teacher Education, 159.

Kitamura, F. C. (2023). ChatGPT is shaping the future of medical writing but still requires human judgment. Radiology 307:e230171. doi: 10.1148/radiol.230171

Lehner, M. (2019). Didaktik. Dordrecht: utb GmbH. doi: 10.36198/9783838552088

Maina, A. M., and Kuria, J. (2024). "Building an AI future: research and policy directions for Africa's higher education," in 2024 IST-Africa Conference (IST-Africa) (IEEE). doi: 10.23919/IST-Africa63983.2024.10569692

Marcinkowski, F., Kieslich, K., Starke, C., and Lünich, M. (2020). "Implications of AI (un-)fairness in higher education admissions: the effects of perceived AI (un-)fairness on exit, voice and organizational reputation," in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 122–130. doi: 10.1145/3351095.3372867

Moore, S., and Lookadoo, K. L. (2024). Communicating clear guidance: advice for generative AI policy development in higher education. Bus. Prof. Commun. Quart. 87, 610–629. doi: 10.1177/23294906241254786

Oravec, J. A. (2023). Artificial intelligence implications for academic cheating: expanding the dimensions of responsible human-AI collaboration with ChatGPT. J. Interact. Learn. Res. 34, 213–237. Available at: https://www.learntechlib.org/primary/p/222340/

Osterroth, A. (2018). Lehren an der Hochschule. Cham: Springer-Verlag. doi: 10.1007/978-3-476-04549-2

Saidakhror, G. (2024). The impact of artificial intelligence on higher education and the economics of information technology. Int. J. Law Policy 2, 1–6. doi: 10.59022/ijlp.125

Schulmeister, R. (2007). "Der 'Student Lifecycle' als Organisationsprinzip für eLearning," in eUniversity - Update Bologna (Münster: Waxmann), 45–77.

Selvaratnam, R., and Venaruzzo, L. (2024). Governance of artificial intelligence and data in Australasian higher education: a snapshot of policy and practice. ACODE Whitepapers. doi: 10.14742/apubs.2023.717

Sweeney, S. (2023). Who wrote this? Essay mills and assessment – considerations regarding contract cheating and AI in higher education. Int. J. Manag. Educ. 21:100818. doi: 10.1016/j.ijme.2023.100818

Times Higher Education (2024). World University Rankings 2024. Available at: https://www.timeshighereducation.com/world-university-rankings/2024/world-ranking (accessed May 2024).

Wecks, J. O., Voshaar, J., Plate, B. J., and Zimmermann, J. (2024). Generative AI usage and academic performance. arXiv preprint arXiv:2404.19699.

Xie, Q., Li, M., and Enkhtur, A. (2024). Exploring generative AI policies in higher education: a comparative perspective from China, Japan, Mongolia, and the USA. arXiv preprint arXiv:2407.08986.

Xu, Y., Liu, X., Cao, X., Huang, C., Liu, E., Qian, S., et al. (2021). Artificial intelligence: a powerful paradigm for scientific research. Innovation 2:100179. doi: 10.1016/j.xinn.2021.100179

Zawacki-Richter, O., Marín, V. I., Bond, M., and Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? Int. J. Educ. Technol. High. Educ. 16, 1–27. doi: 10.1186/s41239-019-0171-0

Keywords: higher education institutions, artificial intelligence, education, large language models, rules (regulations), guidelines and recommendations, teaching

Citation: Temper M, Tjoa S and David L (2025) Higher Education Act for AI (HEAT-AI): a framework to regulate the usage of AI in higher education institutions. Front. Educ. 10:1505370. doi: 10.3389/feduc.2025.1505370

Received: 02 October 2024; Accepted: 31 January 2025;
Published: 20 February 2025.

Edited by:

Oscar Robayo-Pinzon, Rosario University, Colombia

Reviewed by:

Maila Pentucci, University of Studies G. d'Annunzio Chieti and Pescara, Italy
Bianca Ifeoma Chigbu, Walter Sisulu University, South Africa
Rocsana Bucea-Manea-Tonis, National University of Physical Education and Sport, Romania

Copyright © 2025 Temper, Tjoa and David. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Simon Tjoa, simon.tjoa@fhstp.ac.at
