BRIEF RESEARCH REPORT article

Front. Artif. Intell., 16 December 2024
Sec. AI for Human Learning and Behavior Change
This article is part of the Research Topic “The Role of Conversational AI in Higher Education.”

Measuring the effects of pedagogical agent cognitive and affective feedback on students’ academic performance

  • 1IN3-Department of Computer Science, Multimedia and Telecommunications, Open University of Catalonia, Barcelona, Spain
  • 2Escuelas Universitarias Gimbernat (EUG), adscrita a la Universitat Autònoma de Barcelona, Sant Cugat, Spain
  • 3Department of Cultural Technology and Communication, University of the Aegean, University Hill, Mytilini, Greece

There is still debate about the influence and effectiveness of pedagogical agents in learning environments, especially regarding the means these agents employ to enhance students’ academic performance. The current study aims to measure the effectiveness of the cognitive and affective feedback (CaAF) types that a human teacher and a virtual Affective Pedagogical Tutor (APT) used with their groups of students (control and experimental groups, respectively) in an authentic long-term learning situation. Participants were a sample of 115 students carrying out collaborative activities in a “web design” course. Our findings showed that APT cognitive feedback (CF) significantly increased students’ learning outcomes compared to the human teacher’s feedback, whereas APT affective feedback (AF) achieved only partial success. Nevertheless, the study has some limitations: it is based on a single course and a specific academic context, limiting the generalizability of its findings. Additionally, while cognitive feedback demonstrated a clear impact, the analysis of affective feedback was less conclusive, and its design requires further refinement. Finally, the cross-sectional design of the study restricts the ability to assess whether improvements in learning outcomes persist over time. Future research directions include exploring the generalizability of the results across diverse disciplines, deepening the analysis of affective feedback, and incorporating longitudinal studies to evaluate the durability of the observed effects.

1 Introduction

Emotions are critical for motivation, self-regulated learning, and performance, playing a vital role in cognitive development (Arguedas and Daradoumis, 2021). Research in affective learning focuses on emotion awareness, affective feedback, and emotional education to enhance learners’ skills in identifying, managing, and understanding emotions—both their own and others’. Effective emotional education promotes self-motivation, conflict recovery, and social–emotional connection to studies, peers, and instructors (D’Mello et al., 2011; Pérez-Marín, 2021). Tools that provide group awareness and facilitate collaboration are also key, aiming to empower students to complete their learning journey successfully (Devis-Rozental et al., 2017).

2 Literature review on affective PAs, cognitive and affective feedback

A pedagogical agent (PA) is designed to guide learners through an educational environment, aiming to create an interesting, pleasant, safe and creative setting for learning. It also assists learners in coping with learning difficulties, accomplishing their learning objectives, and enhancing their self-reflection about what and how they learned during the learning process, producing important changes in their learning and motivation (Arguedas and Daradoumis, 2021; Norman, 2004).

According to Kim et al. (2017), Multiple Intelligent Pedagogical Agents (MIPAs) are a group of intelligent agents integrated into an educational system designed to collaboratively interact with learners to support their learning processes. Each agent in the system typically has distinct roles, expertise, or characteristics that contribute uniquely to the instructional objectives. The agents can embody various personas, such as tutor, motivator, peer, or facilitator, providing diverse perspectives and fostering a rich and engaging learning environment. Some approaches used MIPAs to promote more flexible and dynamic affective communication. These systems were designed to adapt to the cognitive and affective needs of the learner (Ammar et al., 2010). They also aimed to detect and process users’ emotions, enabling real-time responses to user needs. This allowed the systems to provide more complex motivational feedback (Scholten et al., 2017). Such feedback was perceived positively by students because it supported their learning and motivation (Kim et al., 2017). More recent reviews have found that the presence of PAs can improve learning outcomes. However, the effectiveness of different feature combinations and outcome variables has not been systematically studied. As a result, it remains unclear which features work best or under what circumstances (Martha and Santoso, 2019; Arguedas and Daradoumis, 2021).

To better understand how students perceive the quality of the cognitive feedback (CF) they receive, several researchers have highlighted the importance of the content of feedback (Kornell and Rhodes, 2013; Van der Kleij et al., 2015): global versus elaborated. Global feedback may allow students to verify the correctness of their answers or indicate whether an answer is correct or incorrect. In contrast, elaborated feedback (e.g., providing additional information, extra study material, or an explanation, or giving a hint or an example) can offer detailed and constructive information that engages students in more effective cognitive processes, enabling them to perform better in subsequent tasks (Finn et al., 2018; Wang et al., 2019). The CF in our PA uses elaborated feedback of different types, which are presented in detail in Section 4.1. The effectiveness of CF depends on task difficulty, learners’ characteristics (e.g., age, prior knowledge), and feedback type and format (Attali et al., 2016; Lin et al., 2020). Therefore, more work is needed to explore students’ perception of CF quality and how it affects learning development in a computer-based environment.

Early e-learning systems began integrating affective feedback to improve learner motivation and mood (Mao and Li, 2009). Different strategies have been used, such as empathetic responses or task-based adjustments, to align with learners’ emotions (Robison et al., 2009). D’Mello et al. (2011) introduced AutoTutor, an agent that adapts feedback based on learners’ cognitive and emotional states, promoting engagement. Bevacqua et al. (2012) describe systems that select and analyze feedback based on verbal (such as tone of voice and word choice) and non-verbal (such as facial expressions, gestures, and posture) cues by leveraging emotion recognition techniques and embodied conversational agents. For instance, if a learner shows signs of frustration (e.g., frowning, slumped posture, or negative language), the system might adapt its response by offering empathetic and supportive feedback. By dynamically interpreting these cues, the system aims to adjust its interactions in real time, providing feedback that aligns with the user’s emotional state, thereby fostering a more engaging and supportive learning environment.

Studies show that affective feedback can boost motivation and enjoyment but depends on the believability of the agent (Guo and Goh, 2016). Existing studies, however, mainly focus on motivation and satisfaction, lacking a comprehensive exploration of other emotional responses (Lin et al., 2020).

Other emotional responses are critical in shaping effective learning experiences. On the one hand, frustration can hinder persistence and problem-solving, but personalized feedback strategies help mitigate its effects and improve engagement (Rajendran et al., 2019). Likewise, anxiety negatively impacts cognitive performance, yet embodied agents can reduce anxiety and foster a supportive learning environment (Kim et al., 2017). On the other hand, empathy plays a vital role in enhancing collaborative learning by promoting emotional connections among peers (D'Mello and Graesser, 2012), and resilience enables learners to view mistakes as growth opportunities, fostering a mindset focused on continuous improvement (Yeager and Dweck, 2012). These emotional responses highlight the importance of tailored affective feedback in education. Research suggests that emotion regulation strategies, such as reappraisal, can help learners manage negative emotions, increasing engagement (Malekzadeh et al., 2015).

Text-based feedback remains popular due to its accessibility, and its effectiveness is influenced by clarity and timely delivery (Howard, 2021). Affective support helps reduce off-task behavior and boredom, contributing to improved learning outcomes (Grawemeyer et al., 2017). For instance, systems can provide motivational prompts or empathetic messages when disengagement is detected, helping students refocus and stay productive. Gamification elements also enhance engagement through personalized feedback, particularly when addressing frustration (Rajendran et al., 2019). Features like rewards and adaptive challenges turn frustration into motivation, promoting persistence and a sense of achievement. Based on the review of the literature we adapted various feedback types to the context of our study, the age of participants and learning situation (group activity on “web design” conducted in the class laboratory). The resulting list of CaAF types is presented in Table 1, Section 3.


Table 1. The cognitive and affective feedback types provided in the teaching sessions.

3 Research aims

3.1 Aim

The aim of this study has been to examine whether the APT’s CaAF significantly increased students’ learning outcomes compared to the human teacher’s feedback.

To achieve this goal, we embedded an experiment in an existing class in a classroom setting. In particular, together with the class teacher, we designed a scenario that involved an authentic learning experience through problem-based learning coupled with collaborative learning.

In this context, the APT is a specific agent whose design has followed the Activity Theory Framework (Engeström et al., 1999), and forms part of a larger project and framework which includes several components. This framework involves an emotion analysis model which first analyzes text and conversation (wiki, chats and forum debates) generated by students involved in collaborative learning activities. Then, it proceeds to identify and represent the students’ emotions that take place during these activities in a non-intrusive way. This information is shown to both the human teacher and the APT, thus providing emotion awareness with regard to the way students’ emotions appear and evolve over time. This enables both the teacher and the APT to offer students cognitive and affective feedback that influences students’ motivation, engagement, self-regulation and learning outcome. Details of how the APT and feedback work are fully described in the aforementioned research articles (Arguedas and Daradoumis, 2021).

Since the distinction between cognitive and affective feedback is central to our research, we would like to make clear how the APT treats each feedback type. As students work in the Moodle environment, they may raise cognitive doubts about the topic or the activity to be carried out. The APT then responds through CF, using spoken and/or written language that provides the student with the necessary information about the question at hand.

If a student’s questions do not correspond to the topic or the activity being carried out, or if they have an impolite, inappropriate, or distracting tone, the APT responds with AF that aims to redirect the student’s behavior and attention back to the activity. We set the following research questions for this specific learning situation.
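The CF/AF routing just described can be sketched as a simple dispatcher. This is purely illustrative, not the study’s actual implementation: the keyword sets and reply strings are hypothetical placeholders, and the real APT relies on its emotion analysis model rather than keyword matching.

```python
# Illustrative sketch of CF/AF routing: on-topic questions get cognitive
# feedback (CF); off-topic or impolite messages get affective feedback (AF).
# All keyword lists and replies below are hypothetical placeholders.
ON_TOPIC = {"html", "css", "layout", "design", "page", "style"}
IMPOLITE = {"stupid", "boring", "hate"}

def route_feedback(message: str) -> str:
    words = set(message.lower().split())
    if words & IMPOLITE or not (words & ON_TOPIC):
        # Affective feedback: redirect behavior and attention to the task.
        return "AF: Let's refocus on the activity - which part of the task can I help with?"
    # Cognitive feedback: answer the content question with elaborated feedback.
    return "CF: Here is an explanation and an example related to your question."
```

In the actual system, the decision would of course be driven by the fuzzy-logic emotion analysis of the student’s text rather than a word list; the sketch only captures the branching between the two feedback channels.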

3.2 Research questions

The main research questions we address in this work are the following:

1. Has the APT CF significantly increased students’ learning outcomes compared to the human teacher’s feedback?

2. Has the APT AF significantly increased students’ learning outcomes compared to the human teacher’s feedback?

3.3 Definition of variables for the learning situation

The learning situation represents the space where the main teaching and learning processes occur. There are two independent variables (IVs) relevant to the study: Affective Feedback (A) and Cognitive Feedback (C). As such, the weight of each variable is the same, that is, both variables A and C are equally important for the learning situation at hand. The IVs include the types of cognitive and affective feedback (CaAF) provided by the APT (Affective Pedagogical Tutor) and the human teacher. These variables are qualitative but are indirectly measured through their implementation in the experimental design (e.g., elaborated feedback, motivational feedback).

The dependent variables (DVs) are the students’ learning outcomes, which are measured using a 5-point Likert-type scale, ranging from 1 (Almost never) to 5 (Almost always). The scale captures students’ perceptions of the effectiveness of the feedback they received, including its impact on their learning experience.

The study uses a questionnaire to quantify the dependent variable (learning outcomes) by associating specific types of CaAF with student feedback ratings. Statistical analyses, including t-tests, are applied to compare the means of these responses across the control and experimental groups.

4 Method

4.1 Participants and procedure

Participants were a sample of 115 students attending the course “Web Design.” We randomly divided the students into two large groups, a control group and an experimental group, with 57 and 58 students, respectively. Each large group was then further divided into smaller teams. We wanted the teams to be big enough to promote more interaction in the team forums; for this reason, we created teams of between 4 and 7 members. More specifically, in the control group (supervised by the human teacher), 12 teams were formed randomly: nine teams of four members and three teams of seven members. In the experimental group (where the APT was acting), 12 teams were also formed randomly: three teams of seven members, one team of five members and eight teams of four members. Thus, the groups were randomly distributed; the two samples were independent and normally distributed.

As mentioned in Section 2, our APT is a specifically designed agent based on a larger project and framework. This framework involves a cognition–emotion analysis model composed of two tools. One of them, the fuzzy logic tool, analyses students’ text to infer a dimensional and categorical emotional state of the students during their learning process. The other, the APT, is a client–server web application. A client installed on each student’s computer connects to the server; when running, it displays an environment with the APT on the left side of the screen, an embedded Moodle LMS on the right side, and a text edit box at the bottom that allows the student to contact the APT textually. The APT is characterized by a specific voice, the emotional expressions it can display, and the dialogs in which it can engage. The student works on the LMS, carries out his/her tasks, and collaborates with peers, while at the same time he/she can interact with the APT textually through the edit box at the bottom of the screen. The APT responds to the student with audible and gestural signals scheduled in advance, while providing the student with the information previously requested (Arguedas and Daradoumis, 2021). Two examples of these responses are shown in Figure 1.


Figure 1. Example of APT cognitive feedback (A) and APT affective feedback (B).

Students’ emotional states were detected by our emotion analysis model after each student intervention (message) in the group forum. This information was used to define both the teacher’s and the APT’s reaction to each student, giving them CaAF. The types of CaAF provided are described in Table 1; they represent generic feedback types. Since the human tutor and the APT act independently, each provides its own particular feedback in its own wording and expression, i.e., feedback articulation differs between the control and experimental groups. However, each particular feedback utterance had to adhere to the generic feedback type to which it refers: every instance of feedback given during the study was designed to fit within one of the predefined categories of cognitive or affective feedback. This consistency ensures that both the human teacher and the Affective Pedagogical Tutor (APT) adhered to the same framework for feedback delivery, maintaining the reliability of comparisons between the experimental and control groups, and it was central to evaluating the distinct impacts of cognitive and affective feedback types on student learning outcomes. The human teacher was aware of this condition, so the support the teacher gave to students had to be associated with a specific feedback type. For illustration, Table 1 shows examples of all CaAF types provided by the APT in our learning situation.

4.2 Data collection

The questionnaire was composed of questions associated with the 19 CaAF types (one question per feedback type) presented in Table 1. For all questions, we used a five-point Likert-type scale ranging from 1 (Almost never) to 5 (Almost always), requiring a quantitative answer. The aim of the questionnaire was to measure the dependent variable, ‘students’ learning outcomes.’ To do so, we examine how successful the human teacher’s and the APT’s CaAF have been. Comparing the mean values of this feedback provides this information, complemented by a Student’s t-test [as shown in Section 5.1, Table 2(a), and Section 5.2, Table 2(b)].


Table 2. Mean values of students’ learning outcomes and T-test related to (a) cognitive and (b) affective feedback.

In addition, students’ academic achievement, a qualitative outcome obtained from different evaluation techniques such as observation or oral examinations, was also consulted.

4.3 Research analyses

Apart from descriptive statistics, differences in students’ learning outcomes were examined through t-tests for independent groups according to the CaAF provided by the human teacher (control group, CG) and the APT (experimental group, EG).

Due to space restrictions, we provide a compact version of the reliability statistics and multivariate normality measures instead of presenting them for each subscale. To ensure the reliability of data collection, Cronbach’s alpha was computed for both groups, CG and EG, yielding values higher than 0.70, which reinforces the reliability of our indicators.
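For readers unfamiliar with the reliability check, Cronbach’s alpha for a (respondents × items) matrix can be computed as below. The response matrix here is synthetic (correlated Likert ratings generated for illustration), not the study’s data; only the formula reflects the procedure described above.

```python
# Minimal Cronbach's alpha computation over a (respondents x items) matrix
# of Likert responses. The example matrix is synthetic, for illustration only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Rows = respondents, columns = questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Synthetic correlated responses: a shared baseline per student plus noise,
# clipped to the 1-5 Likert range (57 students x 19 items, as in the CG).
rng = np.random.default_rng(1)
base = rng.integers(1, 6, size=(57, 1))
responses = np.clip(base + rng.integers(-1, 2, size=(57, 19)), 1, 5)
alpha = cronbach_alpha(responses)
```

Because the synthetic items share a common per-student baseline, the resulting alpha lands well above the 0.70 threshold mentioned in the text; with uncorrelated items it would not.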

As the variables under study are quantitative (specifically, on a Likert scale of 1 to 5), Student’s t-test for independent samples was applied to analyze whether there are differences between the control and experimental groups in the variables involved in the study, namely the CaAF types. To this end, the necessary assumptions (normality of the data and homogeneity of variances) were previously verified, with a confidence level of 95% for all tests. The Kolmogorov–Smirnov (KS) test was used to check the normality of the different variables in each group; the KS test was not significant, so normality was met. In addition, the skewness and kurtosis of each variable were examined to check for multivariate normality. Critical values of all test statistics were calculated, and the results showed that the data were normally distributed, as the absolute values of skewness and kurtosis did not exceed the allowed maximums (2.0 for univariate skewness and 7.0 for univariate kurtosis). Levene’s test for equality of variances determined which t-test outcome to consider: if the probability associated with the Levene statistic is >0.05 we assume equal variances, and if <0.05 we assume different variances. We then established the null hypothesis, H0: “The APT feedback did not significantly enhance students’ learning outcomes compared to the human teacher’s feedback.” Based on the t-test for independent groups, if Sig. (p-value) ≤ 0.05, H0 is rejected.
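The three-step procedure above (normality check, Levene’s test, then the independent-samples t-test) can be sketched with scipy. The Likert-scale samples below are synthetic stand-ins for one questionnaire item, sized like the study’s groups (57 CG, 58 EG); the decision logic mirrors the thresholds stated in the text.

```python
# Sketch of the analysis pipeline: KS normality check, Levene's test to
# choose the t-test variant, then the independent-samples t-test itself.
# The Likert data below are synthetic placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.integers(1, 6, size=57).astype(float)       # CG: human teacher
experimental = rng.integers(2, 6, size=58).astype(float)  # EG: APT

# 1. Normality: KS test against a normal with each sample's mean and sd.
ks_c = stats.kstest(control, "norm",
                    args=(control.mean(), control.std(ddof=1)))
ks_e = stats.kstest(experimental, "norm",
                    args=(experimental.mean(), experimental.std(ddof=1)))

# 2. Levene's test: p > 0.05 -> assume equal variances.
levene = stats.levene(control, experimental)
equal_var = levene.pvalue > 0.05

# 3. Independent-samples t-test; reject H0 when p <= 0.05.
t = stats.ttest_ind(control, experimental, equal_var=equal_var)
reject_h0 = t.pvalue <= 0.05
```

Note that strictly discrete 5-point Likert responses only approximate normality; the KS check and the skewness/kurtosis bounds in the text are what justify treating them as such.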

5 Results

Below, we present the results that address our research questions. For each question, we present a table with the mean, median and mode values of students’ learning outcomes related to the human teacher’s feedback (control group) and the Affective Pedagogical Tutor’s feedback (experimental group). In the same table, we show the results of the t-test, to analyze in more depth the differences between the CG and the EG.

5.1 Differences in learning outcomes according to CF (RQ1)

In Table 2 (a), we can observe that the means in both groups are higher than 3, which seems to indicate that students were satisfied with the CF received from both the teacher and the animated agent (APT). Nevertheless, the mode provided more information about which CF was more significant in each group (CG and EG). In this sense, we can observe that item 3.1 was more important in the CG, while items 3.2, 3.3, 3.4, 3.6 and 3.10 were more important in the EG.

5.2 Differences in learning outcomes according to AF (RQ2)

In Table 2 (b), we can observe that the means in both groups are higher than 3, which seems to indicate that students were satisfied with the AF received from both the teacher and the animated agent (APT). Nevertheless, the mode provided more information about which AF was more significant in each group (CG and EG). In this sense, we can observe that only items 3.11 and 3.16 were more significant for the EG, while the others were equally significant for both groups.

The results of the t-test presented in Table 2 (p ≤ 0.05) showed significant differences between the CG and the EG in items 3.12, 3.13, 3.17 and 3.19. Thus, certain types of APT AF significantly increased EG students’ learning outcomes compared to the human teacher’s feedback.

All the other items, with p > 0.05, did not show significant differences between the two groups (highlighted in bold and marked with an “*”). In this sense, our study should carefully reconsider these items and continue working to improve the APT AF design for supporting students’ learning outcomes.

In addition, we present the distribution of responses according to the Likert scale used in the CG and the EG, respectively, as shown in Figures 2A,B.


Figure 2. The distribution of responses according to the Likert scale used, presented as a bar chart with one bar per response category, for (A) the CG and (B) the EG. The Y axis shows the results obtained for each measure using a 5-point Likert-type scale, ranging from 1 (Strongly Disagree) to 5 (Strongly Agree). The X axis shows all items about cognitive and affective feedback.

6 Discussion

The study investigates whether the APT CaAF improves student learning outcomes more effectively than human teacher feedback. The first research objective (RQ1) examined the impact of APT CF. The findings show that students who interacted with the animated agent perceived its CF as significantly more effective compared to those who received feedback from the teacher. The only exception was feedback related to working in small groups (3.6), where no significant difference was observed.

Unlike previous studies that primarily focused on performance metrics like scores and learning gains (Martha and Santoso, 2019), this study emphasizes students’ perceptions of their learning outcomes. Research indicates that the effectiveness of pedagogical agents depends on specific conditions and features (Schroeder et al., 2017). Challenges in emotion detection mechanisms can limit the agent’s effectiveness (Scholten et al., 2017), but the study’s system aims to offer reliable emotional awareness, enhancing feedback quality. Lin et al. (2020) also highlighted that elaborate feedback leads to higher learning scores, supporting the design choice for APT’s detailed feedback, which proved effective except for group work facilitation.

The learning situation in this study involved students in a long-term “Web Design” activity. Both the duration and the specificity of the activity acted as crucial factors that influenced the feedback types, which were specifically designed and adapted to this context. This is in line with previous research, such as Dinçer and Doganay (2017), who emphasized the importance of customizing agents to fit specific learning environments.

The second objective (RQ2) focused on APT’s AF. Results identified four types of AF (3.12, 3.13, 3.17, and 3.19) that significantly enhanced learning outcomes compared to the teacher’s feedback. These types included guidance for group communication, task completion, addressing difficulties, and maintaining interest. This aligns with previous studies that demonstrate the effectiveness of concise, supportive feedback in reducing negative behaviors and maintaining engagement (Cabestrero et al., 2018).

Several of the AF types used in this study were consistent with empathetic or task-based strategies identified in other research (D’Mello et al., 2011). Text-based AF also proved valuable, taking forms such as prompts, hints, and motivational messages, which positively influenced learning outcomes (Grawemeyer et al., 2017).

7 Conclusion

Our findings indicate that the APT has an important effect on learning situations that depend on students’ collaborative activities. Although the learning activities were similar in both groups, students who interacted with the APT perceived their learning as significantly more enhanced than the learning reported by the students who interacted with the teacher.

First, the majority of CF was perceived by the students who interacted with the animated agent as significantly more effective for their learning outcomes compared to students who interacted with the teacher. Second, although students seemed satisfied with the AF received from both the teacher and the animated agent (APT), four (out of nine) AF types that students received from the animated agent were perceived as significantly more conducive to their learning outcomes than the AF received from the teacher. Finally, cognitive and affective feedback need to act together in order to significantly enhance students’ learning outcomes.

Many agent-based studies have been laboratory-based, and the participants were often college students, usually from a university subject pool (Cabestrero et al., 2018). Unlike other studies, ours constitutes an in-situ study that integrated specific CaAF strategies into an APT design, aiming at enhancing students’ learning outcomes.

The results of our experiments on APT effectiveness are drawn from users’ perceptions, by means of questionnaires. However, a recent review on conversational pedagogical agents (CPAs) offers a set of CPA design recommendations to promote their use in different learning situations (Pérez-Marín, 2021). These include instructional methods embedded in the agent, new interaction modalities and domains (which may change the type of agent used), and Human-Computer Interaction guidelines; in addition, real-time user signals can be captured by sensors. The analysis of such data can be fed into the APT, endowing it with adaptive and social behavior according to users’ needs and task requirements. This information can also be used to cross-check the students’ learning outcomes provided by the questionnaire.

To further enhance the generalizability and impact of the findings, future studies should expand the testing of APT across diverse academic subjects and learning environments. This would allow a more comprehensive evaluation of how cognitive and affective feedback (CaAF) strategies perform in varied contexts. Moreover, a deeper exploration of the reasons behind the differing effectiveness of specific affective feedback types is recommended. Including case examples and connecting findings to established theories in educational psychology could provide practical insights. Finally, introducing a longitudinal component to assess the durability of the observed improvements in learning outcomes over time would significantly strengthen the study’s contributions. This approach would not only validate the long-term benefits of APT but also highlight its potential for sustained educational enhancement.

Data availability statement

Requests to access the datasets should be directed to martaarg@uoc.edu.

Author contributions

MA: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. TD: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. SC: Funding acquisition, Supervision, Writing – review & editing, Writing – original draft.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This work was partially supported by the following projects: ‘CARUOC: Conversational Agents and Recommenders for UOC’, under the Research Accelerator program of the Universitat Oberta de Catalunya; ‘REGRANAPIA: Repositorios gestionados con Gramáticas: Navegación, Personalización e Inteligencia’, funded by the Spanish Ministry of Science and Innovation (PID2021-123048NB-I00); ‘LExDigTeach: Uso de Analíticas de Aprendizaje en Entornos Digitales Universitarios: Impacto en la Mejora del Desempeño Docente’, funded by the Spanish Ministry of Science and Innovation (PID2020-115115GB-100).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Ammar, M. B., Neji, M., Alimi, A. M., and Gouarderes, G. (2010). The affective tutoring system. Expert Syst. Appl. 37, 3013–3023. doi: 10.1016/j.eswa.2009.09.031

Arguedas, M., and Daradoumis, T. (2021). Analysing the role of a pedagogical agent in psychological and cognitive preparatory activities. J. Comput. Assist. Learn. 37, 1167–1180. doi: 10.1111/jcal.12556

Attali, Y., Laitusis, C., and Stone, E. (2016). Differences in reaction to immediate feedback and opportunity to revise answers for multiple-choice and open-ended questions. Educ. Psychol. Meas. 76, 787–802. doi: 10.1177/0013164415612548

Bevacqua, E., Eyben, F., Heylen, D., ter Maat, M., Pammi, S., and Pelachaud, C. (2012). “Interacting with emotional virtual agents” in Intelligent technologies for interactive entertainment. eds. A. Camurri and C. Costa (Berlin Heidelberg: Springer), 243–245.

Cabestrero, R., Quirós, P., Santos, O. C., Salmeron-Majadas, S., Uria-Rivas, R., Boticario, J. G., et al. (2018). Some insights into the impact of affective information when delivering feedback to students. Behav. Inform. Technol. 37, 1252–1263. doi: 10.1080/0144929X.2018.1499803

D’Mello, S. K., Lehman, B., and Graesser, A. (2011). “A motivationally supportive affect-sensitive AutoTutor” in New perspectives on affect and learning technologies, vol. 3. New York: Springer, 113–126.

Devis-Rozental, C., Eccles, S., and Mayer, M. (2017). Developing socio-emotional intelligence in first year HE students through one-to-one learning development tutorials. J. Learn. Develop. High. Educ. 12:389. doi: 10.47408/jldhe.v0i12.389

Dinçer, S., and Doganay, A. (2017). The effects of multiple-pedagogical agents on learners’ academic success, motivation, and cognitive load. Comput. Educ. 111, 74–100. doi: 10.1016/j.compedu.2017.04.005

D'Mello, S., and Graesser, A. (2012). Dynamics of affective states during complex learning. Learn. Instr. 22, 145–157. doi: 10.1016/j.learninstruc.2011.10.001

Engeström, Y., Miettinen, R., and Punamäki, R. L. (1999). Perspectives on activity theory. Cambridge, UK: Cambridge University Press.

Finn, B., Thomas, R., and Rawson, K. A. (2018). Learning more from feedback: elaborating feedback with examples enhances concept learning. Learn. Instr. 54, 104–113. doi: 10.1016/j.learninstruc.2017.08.007

Grawemeyer, B., Mavrikis, M., Holmes, W., Gutiérrez-Santos, S., Wiedmann, M., and Rummel, N. (2017). Affective learning: improving engagement and enhancing learning with affect-aware feedback. User Model. User-Adap. Inter. 27, 119–158. doi: 10.1007/s11257-017-9188-z

Guo, Y. R., and Goh, D. H.-L. (2016). Evaluation of affective embodied agents in an information literacy game. Comput. Educ. 103, 59–75. doi: 10.1016/j.compedu.2016.09.013

Howard, N. R. (2021). How did I do?: giving learners effective and affective feedback. Educ. Technol. Res. Dev. 69, 123–126. doi: 10.1007/s11423-020-09874-2

Kim, Y., Thayne, J., and Wei, Q. (2017). An embodied agent helps anxious students in mathematics learning. Educ. Technol. Res. Dev. 65, 219–235. doi: 10.1007/s11423-016-9476-z

Kornell, N., and Rhodes, M. G. (2013). Feedback reduces the metacognitive benefit of tests. J. Exp. Psychol. Appl. 19, 1–13. doi: 10.1037/a0032147

Lin, L., Ginns, P., Wang, T., and Zhang, P. (2020). Using a pedagogical agent to deliver conversational style instruction: what benefits can you obtain? Comput. Educ. 143:103658. doi: 10.1016/j.compedu.2019.103658

Malekzadeh, M., Mustafa, M. B., and Lahsasna, A. (2015). A review of emotion regulation in intelligent tutoring systems. Educ. Technol. Soc. 18, 435–445.

Mao, X., and Li, Z. (2009). “Implementing emotion-based user-aware e-learning” in CHI’09 extended abstracts on human factors in computing systems. New York, NY, USA: ACM, 3787–3792.

Martha, A. S. D., and Santoso, H. B. (2019). The design and impact of the pedagogical agent: a systematic literature review. J. Educ. Online 16:n1. doi: 10.9743/jeo.2019.16.1.8

Norman, D. (2004). Emotional design: why we love (or hate) everyday things. New York: Basic Books.

Pérez-Marín, D. (2021). Review of the practical applications of pedagogic conversational agents to be used in school and university classrooms. Digital 1, 18–33. doi: 10.3390/digital1010002

Rajendran, R., Iyer, S., and Murthy, S. (2019). Personalized affective feedback to address students’ frustration in ITS. IEEE Trans. Learn. Technol. 12, 87–97. doi: 10.1109/TLT.2018.2807447

Robison, J., McQuiggan, S., and Lester, J. (2009). “Evaluating the consequences of affective feedback in intelligent tutoring systems” in Proceedings of the 3rd international conference on affective computing and intelligent interaction.

Scholten, M. R., Kelders, S. M., and Van Gemert-Pijnen, J. E. (2017). Self-guided web-based interventions: scoping review on user needs and the potential of embodied conversational agents to address them. J. Med. Internet Res. 19:e383. doi: 10.2196/jmir.7351

Schroeder, N. L., Romine, W. L., and Craig, S. D. (2017). Measuring pedagogical agent persona and the influence of agent persona on learning. Comput. Educ. 109, 176–186. doi: 10.1016/j.compedu.2017.02.015

Van der Kleij, F. M., Feskens, R. C., and Eggen, T. J. (2015). Effects of feedback in a computer-based learning environment on students’ learning outcomes: a meta-analysis. Rev. Educ. Res. 85, 475–511. doi: 10.3102/0034654314564881

Wang, Z., Gong, S.-Y., Xu, S., and Hu, X.-E. (2019). Elaborated feedback and learning: examining cognitive and motivational influences. Comput. Educ. 136, 130–140. doi: 10.1016/j.compedu.2019.04.003

Yeager, D. S., and Dweck, C. S. (2012). Mindsets that promote resilience: when students believe that personal characteristics can be developed. Educ. Psychol. 47, 302–314. doi: 10.1080/00461520.2012.722805

Keywords: intelligent tutoring systems, affective tutor, pedagogical agent, cognitive feedback, affective feedback

Citation: Arguedas M, Daradoumis T and Caballé S (2024) Measuring the effects of pedagogical agent cognitive and affective feedback on students’ academic performance. Front. Artif. Intell. 7:1495342. doi: 10.3389/frai.2024.1495342

Received: 12 September 2024; Accepted: 27 November 2024;
Published: 16 December 2024.

Edited by:

Pauldy Otermans, Brunel University London, United Kingdom

Reviewed by:

Kostas Karpouzis, Panteion University, Greece
Beverley Pickard-Jones, Bangor University, United Kingdom

Copyright © 2024 Arguedas, Daradoumis and Caballé. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Marta Arguedas, martaarg@uoc.edu
