- 1Department of Industrial Economics and Technology Management, Norwegian University of Science and Technology, Trondheim, Norway
- 2Department of Training and Education Sciences, University of Antwerp, Antwerp, Belgium
The ability to give, receive and process feedback is essential for higher education students not only during their studies, but also for their future work life. Despite the extensive amount of research on feedback in education, there is limited research on feedback skills as collaborative skills and on what might influence these skills. Through surveying a large sample of 2,907 university students who worked in self-managed project teams, this study explores how individual characteristics as antecedents relate to students’ perceived feedback skills. We use a person-oriented approach to examine how these antecedents combine into profiles within students and how these profiles relate to perceived feedback skills. Confirmatory factor analysis provided evidence for the structural validity of a newly developed feedback skills instrument, and its scales were found to be reliable. Five scales were used from existing feedback instruments as antecedents of feedback skills, and these were also found to be valid and reliable. Applying this person-oriented approach, we used hierarchical and k-means cluster analyses to create student profiles or groups based on the feedback antecedents. We identified five distinct groups of students with common feedback antecedents. The results indicate that the five groups also had different levels of perceived feedback skills. The study contributes to the limited research on the dynamics of giving and receiving feedback from the perspective of students in the context of collaborative learning. It has implications for researchers and practitioners to better understand individual differences and to consider these differences when designing collaborative learning activities and facilitating student teams.
1 Introduction
Developing the ability to give, receive and process feedback is essential for higher education (HE) students. In light of the demand that HE must prepare students for a work life that increasingly expects graduates with teamwork skills (O’Neill et al., 2020), feedback skills are crucial. Research demonstrates that feedback is a critical component of teamwork regulation (London and Sessa, 2006) and that it affects team functioning and performance (e.g., Gabelica et al., 2012). Awareness, or knowing how one is perceived by others through giving and receiving feedback, is also linked to successful team outcomes (Hulse-Killacky et al., 2006). From a learning perspective, feedback is fundamental for reflection (Aoun et al., 2018) and positively relates to learning outcomes (Black and William, 1998; Fong and Schallert, 2023). Therefore, placing the development of feedback skills in HE curricula has a double purpose: it is important for developing teamwork skills and key to student learning.
Within education, research on feedback has largely focused on feedback on tasks or performances for assessment purposes (Evans, 2013; Winstone and Boud, 2022) and students’ development of feedback literacy (e.g., Carless and Boud, 2018; Dawson et al., 2023). In this line of research, feedback is seen as a cornerstone of effective learning and teaching (Black and William, 1998) and is often studied as a tool for student assessment (e.g., Small and Attree, 2016; Ashenafi, 2017). In contrast to the significant amount of research focusing on feedback for learning, there is only limited research on feedback skills as collaborative and interpersonal skills, i.e., giving and receiving feedback between team members as part of regulating and improving their collaboration and for reflection on learning.
There is also limited research on what feedback skills are (Johnsen et al., 2023) and on what might influence individuals’ feedback skills, from a situational perspective, as well as from the perspective of individual student characteristics (i.e., antecedents of students’ feedback skills). Within the context of student teams or teamwork more generally, the focus has mainly been placed on feedback on team effectiveness or on how to give effective feedback, with the source often coming from outside the group (e.g., Gabelica et al., 2014)—in most cases, an instructor. Researchers have paid little attention to receiving feedback and to how and why the same feedback can be perceived and processed very differently by different members of a team (Gabelica and Popov, 2020). Individual differences might play an important role in these processes, though according to Gabelica and Popov (2020), much research on feedback in teams has been undertaken with a ‘one-size-fits-all’ approach, which does not consider the individual characteristics of the members within a team. In the same team, some people might, for example, perceive the feedback, be it from the teacher or from a team member, as blunt and inappropriate, leading to a feeling of failure or a refusal to accept it, while others might see it as constructive, interesting and an opportunity to learn. This variety of reactions to the same feedback illustrates that a ‘one-size-fits-all approach’ is not in line with the complex reality and interplay of factors within and between the actors. It is also unclear if and how the different personal feedback characteristics relate within an individual person. This underscores the need for a person-oriented approach when studying this area.
This paper first validates the concepts that are used in this study and then explores how individual characteristics as antecedents relate to the students’ perceived feedback skills. The study uses a large sample of 2,907 university students who worked in self-managed project teams. On the basis of subsamples, we use a person-oriented approach and examine how these antecedents relate within students to be able to distinguish different profiles. We then consider how these different profiles relate to perceived feedback skills. The results of the study have implications for researchers and practitioners to better understand individual differences and to take these differences into account when designing learning activities, in particular those aiming to develop students’ feedback skills.
2 Frames of reference
Feedback can be understood as the transmission of evaluative or corrective information about actions, events or processes (London and Sessa, 2006). It is also widely acknowledged as a sense-making process, including both the givers and receivers of the feedback (de Kleijn, 2023). In team settings, feedback can be given to individuals, a subset of members, or the team as a whole. It is typically aimed at regulating actions to achieve the team’s goal or promoting team performance. Research recognizes the importance of continuous feedback mechanisms for team performance and learning (Gabelica et al., 2012). Feedback can guide, motivate and reinforce effective behaviors and reduce or stop ineffective behaviors (London, 2003). London and Sessa (2006) argued that without feedback, teams can change but not learn, as they depend on feedback to monitor and regulate themselves. For feedback to have these positive effects, however, team members need the skills to give, receive and process feedback in ways that help facilitate open and productive communication in the team (Hulse-Killacky et al., 2006). Training feedback skills has been identified as an important element in course designs wherein students work in teams (e.g., Sjølie et al., 2021, 2022).
2.1 Feedback skills—what are they?
While there might be agreement on the notion that feedback skills are important, there is ambiguity as to what constitutes such skills. First, there are differences in terms of which situations or contexts have been studied. For example, the skills that are required for a student who receives feedback from a teacher on an academic task might be different from those used for giving feedback on a peer student’s behavior in a team where the students work together toward a common goal. Regarding team settings, research on feedback has highlighted four core characteristics of feedback that can be related to the situation or context in which feedback is exchanged (London and Sessa, 2006; Gabelica and Popov, 2020). One characteristic is the source (e.g., an instructor, a team member) of the feedback. Feedback can be objective (e.g., based on measured data) or subjective, which comes from a person inside or outside the team. Another characteristic is the feedback level, as feedback can target the team as a whole, individual team members or both. Another characteristic is the type of feedback, which is often divided into describing either performance (often related to a task or product) or behaviors and processes (Gabelica and Popov, 2020). The last characteristic is feedback valence, that is, if the feedback contains a positive or negative evaluation of what the feedback is about.
Second, while many frameworks and instruments for measuring skill development include feedback, the operationalizations of feedback skills vary significantly. In some instruments, feedback is related to a task or product (e.g., Cumming et al., 2015; Muukkonen et al., 2020). In other instruments, feedback is related to interpersonal skills, often under different labels (social, communication, collaboration or teamwork skills), and appears within other scales in the form of one or two questions (e.g., MacDonald et al., 2010). In some instruments, feedback is used as a more general term, including several aspects in one and the same item (e.g., Alpay and Walsh, 2008). A final observation in reviewing the feedback literature is that existing frameworks and instruments only or primarily focus on negative feedback, often called ‘corrective feedback’, and not on positive feedback. On one hand, the large variation in the operationalization of feedback skills can be considered functional because of variations in the contexts and purposes of feedback. On the other hand, this conceptual ambiguity becomes problematic when we conduct research on feedback skills, as it challenges content validity and may lead to misinterpretations of the findings.
Feedback skills, as operationalized in this paper, include two components: the valence of the feedback and feedback actions. As a main characteristic of the feedback, we focus on its valence (positive- or negative-oriented feedback) as part of the model developed by London and Sessa (2006). Actions refer to giving and receiving feedback. Giving feedback is expressed by first formulating the feedback and then actually giving it, while receiving feedback is expressed by first receiving feedback and then making changes (or not) based on this feedback.
2.2 Antecedents of feedback skills
A number of conditions influence how feedback can be given and received (and processed) differently by different people. Some of these conditions are situational (Fong and Schallert, 2023), such as what is expected of a person at a certain moment and place, the relationship between the giver and the receiver, or the dynamic in the team in which the feedback takes place. Other conditions are related to individual differences and can be seen as antecedents of feedback skills. For example, London and Smither (2002) found that differences among people can be noticed in terms of the sensitivity to others’ views of oneself. Feedback from others can be used to become fully aware of others’ views, but not everybody shows this need to the same extent. Linderbaum and Levy (2010) also found that the capacity to cope with feedback differs among people. People are more or less able to handle feedback adequately; in other words, feedback self-efficacy differs. Consequently, there are individual differences at play as antecedents for feedback skills, and these antecedents can impact students’ (perception and development of) feedback skills. For the remainder of this paper, we will focus not on the situational conditions but only on the individual differences.
In this study, we aim to understand how perceived feedback skills relate to individual differences as antecedents of these feedback skills. The context of the study is an interdisciplinary project-based course that has the team members give each other feedback as an explicit part of the course design. The students work in self-managed teams on open-ended, real-world problems. Employing a person-oriented approach, we explore the interplay of the students’ individual factors that might impact the giving and receiving of feedback. For the purposes of our study, we have chosen to focus on two areas of individual differences, one related to corrective feedback and the other to feedback orientation.
2.2.1 Corrective feedback
Corrective feedback is a parallel term to negative feedback. According to Hulse-Killacky et al. (2006), corrective feedback is ‘intended to encourage thoughtful self-examination and/or to express the feedback giver’s perception of the need for change on the part of the perceiver’ (p. 264). Both receiving and giving this type of feedback can be uncomfortable. There are also individual differences as to how difficult a person finds it to ask the giver of the feedback for clarification (Hulse-Killacky et al., 2006). Overall, students’ reactions to negative feedback are far more complex than their reactions to positive feedback (Jussim et al., 1989). We therefore focus on this valence in our choice of antecedents.
Hulse-Killacky et al. (2006) distinguished three factors in their model and instrument: a feelings factor, an evaluative factor and a clarifying factor. The feelings factor relates to concerns about being negatively evaluated because of corrective feedback. The evaluative factor is related to criticism and judgment. The clarifying factor describes the reluctance to ask for clarification. The underlying assumption is that these factors might impact perceived feedback skills.
2.2.2 Feedback orientation
Feedback orientation was developed as a new concept in a theoretical contribution by London and Smither (2002). They described this as a person’s overall receptivity to feedback. King et al. (2009) defined feedback orientation as the ‘individual response bias that students possess toward feedback in instructional settings’ (p. 236).
Linderbaum and Levy (2010) distinguished two dimensions within feedback orientation: social awareness and feedback self-efficacy. Social awareness refers to an individual’s tendency to use feedback so as to be aware of others’ views of oneself and to be sensitive to these views (London and Smither, 2002). In addition, it refers to external pressure to be aware of and respond to feedback. The dimension draws specifically on the construct of public self-consciousness, the extent to which individuals see themselves as social objects and are aware of being observed by others in a public context (Fenigstein et al., 1975). Individuals with higher public self-consciousness have a greater desire for feedback and more initial feedback-seeking intentions. Therefore, we assume that social awareness is related to the students’ perceived feedback skills. Feedback self-efficacy, on the other hand, refers to an individual’s perceived competence to interpret and respond to feedback appropriately. It concerns an individual’s self-efficacy as it relates specifically to feedback.
2.3 Aims and research questions of this study
The purpose of this study was to explore the relationship between students’ perceived feedback skills and individual factors as antecedents to how feedback is given and received within the context of an interdisciplinary project-based course with self-managed student teams. Considering the conceptual ambiguity of feedback skills and the lack of instruments in the literature to measure them (Johnsen et al., 2023), an instrument for measuring student feedback skills was developed and validated. Given the novel nature of this instrument, we looked at descriptive results related to student background characteristics, in this case gender. Then, we applied a person-oriented analysis approach with existing, already validated instruments, with the aim of exploring whether we could distinguish clusters or groups of students whose characteristics can impact feedback skills. Finally, we aimed to determine the extent to which the distinguished groups of students are related to differences in feedback skills.
This paper addresses the following research questions:
• RQ1a: To what extent does the students’ feedback skills instrument have a clear construct, and can it be measured in a reliable way?
• RQ1b: To what extent do the students’ feedback characteristics scales have a clear construct, and can these be measured in a reliable way?
• RQ2: To what extent are the students’ feedback skills different according to gender?
• RQ3: What clusters of students can be distinguished based on their feedback characteristics?
• RQ4: How do the clusters relate to student feedback skills?
The following section explains the context of the study and describes the participants, instruments, data collection and statistical analyses.
3 Materials and methods
3.1 Context
This study was conducted among students enrolled in an interdisciplinary project-based course at a large Norwegian university. The course includes around 3,000 students from all faculties at the university, divided into 110 classes of 25–35 students and teams of 5–7 students. The teams worked on real-world problems and defined their own projects. No specific guidelines were provided regarding the distribution of team roles and tasks. The teaching staff for each class consisted of one faculty member and two learning assistants who were trained in team facilitation. One of the goals of the course was to develop the students’ teamwork skills, and the course design contained feedback exercises and ‘real-time’ facilitation (Sjølie et al., 2021). One of these exercises was a “2 + 1 exercise,” in which each team member gives two pieces of positive feedback and one piece of negative feedback to each of the other members on how they contribute and act in the team. Each member also formulates the feedback they would have given themselves.
3.2 Participants
The data for this study were gathered from students who were enrolled in the course in the spring semester of 2022. The study sample consisted of 2,907 students (40.9% women, 56.3% men, 0.1% other, 1.2% preferred not to say). The students included in the study were from eight different faculties (see Table 1).
Given the length of the questionnaire, we split it so that not every student had to fill in all parts. All students answered the feedback skills items, but only a subsample answered the feedback characteristics items. To enable this, we created subsets in the survey software Nettskjema that were randomly assigned to the students. As a result, we collected data from 647 students who answered the five scales of feedback characteristics that were the focus of this paper.
To summarize, for RQ1a, 2,907 students were involved, and for RQ1b, RQ2, RQ3 and RQ4, a random sample of the initial group of 2,907 was asked to answer the feedback characteristics items. This subsample consisted of 647 students (44.5% women, 54.3% men, 1.2% preferred not to say). Table 2 describes the distribution of this subsample of students according to their faculty.
3.3 Instruments
3.3.1 Feedback skills
The student feedback skills scale was developed based on a combination of existing and newly formulated items. The instrument has 8 items, 4 focusing on positive feedback (e.g., formulating positive feedback for other students) and 4 on negative feedback (e.g., receiving negative feedback from other students). Possible answers ranged from 1 = strongly disagree to 5 = strongly agree (the items can be found in Appendix A). We expected that the instrument would represent two distinct dimensions, depending on valence (positive or negative feedback).
3.3.2 Feedback antecedents
For this study, we were interested in how students deal with corrective feedback and their feedback orientation. The Corrective Feedback Instrument (Hulse-Killacky et al., 2006) is an instrument originally used in the context of group work for counselor training to explore student reactions to giving and receiving corrective feedback in group settings. We used three dimensions from this instrument: feelings (5 items, e.g., I try to avoid being in conflict with others whenever possible), evaluative (5 items, e.g., It is hard for me not to interpret corrective feedback as a criticism of my personal competence), and clarifying (3 items, e.g., I am usually too uncomfortable to ask someone to clarify corrective feedback delivered to me). Possible answers ranged from 1 = strongly disagree to 6 = strongly agree.
The Feedback Orientation Scale (Linderbaum and Levy, 2010) is an instrument that aims to grasp the individual’s overall receptivity to feedback. We used two concepts, social awareness and feedback self-efficacy, with each containing 5 items. Social awareness refers to an individual’s tendency to use feedback to become aware of others’ views on oneself and to be sensitive to these views (e.g., ‘Feedback helps me manage the impression I make on others’). Feedback self-efficacy describes an individual’s tendency to have confidence in dealing with feedback situations and feedback (e.g., ‘I believe I have the ability to deal with feedback effectively’). Possible answers ranged from 1 = strongly disagree to 5 = strongly agree.
3.4 Procedure and data collection
This study follows the guidelines for research ethics (NESH, 2021) and general data protection (GDPR), and approval was provided by the Norwegian Centre for Research Data (NSD). An electronic questionnaire was distributed to all registered students of the course via e-mail. The participants gave their consent to participate after being informed about the aims of the study. They were told that they could withdraw from the study at any time and for any reason.
3.5 Statistical analyses
A few preliminary analyses were run before validating the different concepts: descriptive statistics, Kolmogorov–Smirnov tests of normality and inter-item correlation matrices were computed using SPSS v.28.
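As a minimal illustration of these preliminary checks, the sketch below assumes the item responses are stored in a pandas DataFrame named `items` (a placeholder name, not part of the original analysis files); the study itself ran these checks in SPSS.

```python
import pandas as pd
from scipy import stats

# Descriptive statistics and the inter-item correlation matrix.
print(items.describe().T[["mean", "std", "min", "max"]])
print(items.corr().round(2))

# Kolmogorov-Smirnov test of normality for each item, against a normal
# distribution with the item's own mean and standard deviation.
for col in items.columns:
    ks, p = stats.kstest(items[col], "norm", args=(items[col].mean(), items[col].std()))
    print(f"{col}: KS = {ks:.3f}, p = {p:.4f}")
```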
The feedback skills items are partly newly developed; we therefore applied a different statistical approach from that used for the concepts taken from existing instruments. To test the construct validity of this new instrument, we first ran a principal component analysis on one half of the sample (randomly split). We expected two components: one for the items related to positive feedback and one for the items related to negative feedback. Next, we performed a confirmatory factor analysis with the second half of the sample (RQ1a). The robust maximum likelihood estimation method was used given the non-normal distribution of the items. The following fit indices were used to assess the model fit (cut-off scores are provided in parentheses): the root mean square error of approximation (RMSEA, <0.05 to 0.10), the comparative fit index (CFI, >0.90), the standardized root mean square residual (SRMR, <0.08), and the Tucker–Lewis index (TLI, >0.90) (Hu and Bentler, 1999; Hooper et al., 2008). The χ2 index was not used, given its sensitivity to sample size.
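The split-half strategy could be approximated with open-source tools roughly as follows. This is a hedged sketch rather than the authors’ SPSS/Mplus workflow: `items` is again a placeholder DataFrame, the item names (`pos1`, …, `neg3`) are hypothetical, the factor_analyzer package stands in for SPSS’s principal component analysis, and semopy’s standard maximum likelihood estimator only approximates the robust estimator used in Mplus.

```python
import numpy as np
import pandas as pd
import semopy
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Random split of the sample: one half for the exploratory step,
# the other half for the confirmatory step.
rng = np.random.default_rng(seed=1)
mask = rng.random(len(items)) < 0.5
pca_half, cfa_half = items[mask], items[~mask]

# Exploratory step: sampling adequacy diagnostics plus a varimax-rotated
# principal component solution with two expected components.
chi2, p_bartlett = calculate_bartlett_sphericity(pca_half)
_, kmo = calculate_kmo(pca_half)
pca = FactorAnalyzer(n_factors=2, rotation="varimax", method="principal")
pca.fit(pca_half)
eigenvalues, _ = pca.get_eigenvalues()   # scree plot / Kaiser criterion
print(pd.DataFrame(pca.loadings_, index=pca_half.columns).round(2))

# Confirmatory step on the second half: two correlated latent variables,
# positive and negative feedback (item names are hypothetical).
model_desc = """
positive =~ pos1 + pos2 + pos3 + pos4
negative =~ neg1 + neg2 + neg3
"""
cfa = semopy.Model(model_desc)
cfa.fit(cfa_half)
print(semopy.calc_stats(cfa).T)   # fit statistics, including RMSEA, CFI and TLI
```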
Given their previous validation results, we tested the structure of the retained scales of the Corrective Feedback Instrument and the Feedback Orientation Scale directly via confirmatory factor analysis (RQ1b). The same fit indices as for the feedback skills items were used. The internal consistency among the items of all scales was calculated using Cronbach’s α and McDonald’s Ω (RQ1a and RQ1b) (Cronbach, 1951).
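For the reliability coefficients, the helper functions below show one way to obtain Cronbach’s α from the raw items and McDonald’s Ω (total) from standardized loadings of a single-factor solution. This is an illustrative sketch under the assumption of a unidimensional scale; `scale_items` and `positive_items` are placeholder DataFrames, and factor_analyzer again stands in for the software used in the study.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

def cronbach_alpha(scale_items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    k = scale_items.shape[1]
    item_variances = scale_items.var(axis=0, ddof=1).sum()
    total_variance = scale_items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def mcdonald_omega(scale_items: pd.DataFrame) -> float:
    """omega_total = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses),
    with standardized loadings from a one-factor solution."""
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(scale_items)
    loadings = fa.loadings_.flatten()
    uniqueness = 1.0 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + uniqueness.sum())

# Hypothetical usage, with positive_items holding the four positive feedback items:
# print(cronbach_alpha(positive_items), mcdonald_omega(positive_items))
```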
To create a comprehensive description of the students’ feedback characteristics, we combined the five feedback characteristics and applied a person-oriented approach. This approach identifies homogeneous groups of students based on their responses to variables, instead of the usual variable-centered approach that typically groups variables on common underlying dimensions or factors (Laursen and Hoff, 2006). The advantage of such a person-oriented approach is that individuals are seen as organized wholes with interconnected components (Bergman and Lundh, 2015), i.e., the retained concepts of the existing feedback instruments. This approach allowed us to allocate students to clusters or groups characterized by a particular feedback profile. No theory or prior research was found that combined the concepts of the Corrective Feedback Instrument and the Feedback Orientation Scale using this person-oriented approach; therefore, the number of expected clusters or groups was unknown. To determine the number of clusters, we applied hierarchical cluster analysis and conducted a visual inspection of the resulting dendrogram. In the next phase, we used a k-means cluster analysis with Ward’s method and Bonferroni testing (RQ3).
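A sketch of this two-step clustering with standard Python libraries is shown below; `profiles` is a placeholder DataFrame containing the five standardized sum scores per student, and the code illustrates the general procedure (Ward-linkage hierarchy, visual inspection of the dendrogram, then k-means) rather than reproducing the SPSS implementation used in the study.

```python
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.cluster import KMeans

# Standardize the five antecedent sum scores (placeholder DataFrame `profiles`).
z = (profiles - profiles.mean()) / profiles.std(ddof=0)

# Step 1: hierarchical cluster analysis with Ward linkage; the dendrogram
# is inspected visually to decide how many clusters to retain.
Z = linkage(z.values, method="ward")
dendrogram(Z, truncate_mode="level", p=5)
plt.show()

# Step 2: k-means with the retained number of clusters (five in this study).
kmeans = KMeans(n_clusters=5, n_init=25, random_state=1)
profiles["cluster"] = kmeans.fit_predict(z)

# Final cluster centers on the standardized scales (analogous to Figure 3).
print(pd.DataFrame(kmeans.cluster_centers_, columns=z.columns).round(2))
```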
A t-test was used to look for differences in students’ feedback skills according to gender (RQ2), and ANOVA was used to look for differences among the resulting cluster groups (RQ4). All analyses were carried out with SPSS v.28, except for the confirmatory factor analyses, for which we used Mplus v.8.6 (Muthén and Muthén, 1998–2017).
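The group comparisons can be sketched as follows, assuming a DataFrame `df` with placeholder columns `gender`, `cluster` and `negative_feedback` (one of the scale scores); the corresponding tests were run in SPSS, so this is an illustration only, and the pairwise Bonferroni correction below approximates SPSS’s post hoc procedure.

```python
from itertools import combinations
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Standardized mean difference with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# RQ2: independent-samples t-test for gender differences on a feedback skills scale.
women = df.loc[df["gender"] == "female", "negative_feedback"]
men = df.loc[df["gender"] == "male", "negative_feedback"]
t, p = stats.ttest_ind(women, men)
print(f"t = {t:.3f}, p = {p:.4f}, d = {cohens_d(women, men):.2f}")

# RQ4: one-way ANOVA across the cluster groups, followed by Bonferroni-adjusted
# pairwise t-tests (p-values multiplied by the number of comparisons).
groups = {c: g["negative_feedback"] for c, g in df.groupby("cluster")}
F, p_anova = stats.f_oneway(*groups.values())
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p_ab = stats.ttest_ind(groups[a], groups[b])
    print(a, b, min(p_ab * len(pairs), 1.0))
```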
4 Results
4.1 Construct validity and reliability of feedback skills
Given the novel character of the feedback skills instrument, we first conducted a principal component analysis using the first half of the sample. Visual inspection of the scree plot supported the expected two-factor structure (see Figure 1). Table 3 shows the rotated component matrix, including all loadings. Both components had eigenvalues above 1.0 (Kaiser criterion) and together explained 55.18% of the variance.
All items loaded highly on one of the components except for the item ‘Making changes based on negative feedback you receive from other students’, which loaded moderately on both components. To avoid ambiguity, we decided to remove this item and rerun the analysis.
Table 4 shows the rotated component matrix, including all loadings of the remaining seven items. The two components explained 59.42% of the variance. For this second analysis, the KMO value (0.71) was satisfactory and Bartlett’s test of sphericity was significant (χ2 = 1495.196, df = 21, p < 0.001). All factor loadings higher than 0.40 are shown in Table 4. The first component contains the items related to positive feedback, and the second component represents the items related to negative feedback.
In the second phase, we checked whether the structure found could be confirmed with a confirmatory factor analysis approach, using the second half of the sample.
The structure we tested had two latent variables: positive feedback (with four items as observed variables) and negative feedback (with three items as observed variables). The fit indices confirm the factor structure: RMSEA = 0.085 [0.07 < 95% C.I. < 0.10, p = 0.000], CFI = 0.93, SRMR = 0.04 and TLI = 0.88. The correlation between the two latent variables was 0.49. All standardized factor loadings were above 0.40. The positive feedback scale showed good reliability (Cronbach’s α = 0.73; McDonald’s Ω = 0.73), as did the negative feedback scale (Cronbach’s α = 0.72; McDonald’s Ω = 0.74).
As an answer to RQ1a, we can conclude that the student feedback skills items are represented by a two-factor model, with one scale of four items related to positive feedback and one scale of three items related to negative feedback. The internal consistency of both scales was good.
4.2 Relationship between feedback skills and gender
By means of a t-test, we checked for differences in positive and negative feedback skills according to gender (RQ2). Positive feedback skills did not differ according to gender (M female = 3.78; M male = 3.73; t = 1.835, df = 1758, p = 0.07). The negative feedback skills scale, however, revealed differences according to gender: female students scored significantly lower than male students (M female = 3.01; M male = 3.22; t = −5.951, df = 1757, p < 0.001). This difference corresponds to a medium effect size (Cohen’s d = 0.56).
4.3 Construct validity and reliability of antecedents of feedback skills
In the third phase, we conducted confirmatory factor analyses of the scales and items we used from existing instruments (RQ1b). From the Corrective Feedback Instrument, we used three scales: a feelings factor, an evaluative factor and a clarifying factor. We tested this structure by means of confirmatory factor analysis, allowing the three latent variables to correlate and using a robust maximum likelihood estimation method, given the non-normal distribution of the items. The structure with three underlying latent variables was confirmed (RMSEA = 0.037 [0.032 < 95% C.I. < 0.042, p = 0.000], CFI = 0.97, SRMR = 0.034 and TLI = 0.97). The correlations between the latent variables varied between 0.55 and 0.67. All standardized factor loadings were above 0.40.
From the Feedback Orientation Instrument, we used two scales: social awareness and feedback self-efficacy. A similar approach was used. The structure with two underlying latent variables was confirmed (RMSEA = 0.050 [0.043 < 95% C.I. < 0.056, p = 0.000], CFI = 0.95, SRMR = 0.043 and TLI = 0.93). The correlation between the two latent variables was 0.27. All standardized factor loadings but one (item LI_SA_01) were above 0.40.
The internal consistency coefficients of the retained scales from the Corrective Feedback Instrument and the Feedback Orientation Instrument are shown in Table 5. All scales exhibited good to very good reliability indices for both Cronbach’s α and McDonald’s Ω (RQ1b).
4.4 Cluster analysis of the antecedents of feedback skills
In the next phase, we used the five feedback characteristics to create clusters or groups of students with comparable characteristics. As input, we used the standardized sum scores.
We created a dendrogram by means of hierarchical cluster analysis to determine the number of clusters that should be retained. A visual inspection of this dendrogram (see Figure 2) revealed a solution with five clusters.
Afterwards, we used the result of the hierarchical cluster analysis, i.e., the five a priori distinguished groups, as input for a k-means analysis using Ward’s method. A visual representation of the final cluster centers of the five-group solution can be found in Figure 3.
As an answer to RQ3, we found five different groups or clusters among the students with comparable feedback characteristics.
Cluster 1 contains 117 students who have scores below average on the Feelings, Evaluative and Clarifying factors of the Corrective Feedback Instrument. These students were less concerned about being negatively evaluated, criticized or judged, and they were less hesitant to ask for clarifications. They showed a very low level of social awareness, meaning that they were much less sensitive to others’ views of themselves. The group members of Cluster 1 showed a moderate level of feedback self-efficacy.
Cluster 2 contains 89 students with high scores on the corrective feedback factors Feelings, Evaluative and Clarifying, meaning that they were highly concerned about being negatively evaluated, criticized or judged and were reluctant to ask for clarifications. They scored average on social awareness and very low on feedback self-efficacy, the latter meaning that they judged their competence to interpret and respond to feedback appropriately as very low.
Cluster 3 contained 120 students with a less pronounced profile: they scored above average on Feelings, well above average on Clarifying and below average on Social Awareness.
Cluster 4 has the least pronounced profile, with 154 students who scored slightly above average for Feelings, Evaluative and Social Awareness.
Cluster 5 contained 179 students who scored low on the three factors of the Corrective Feedback Instrument, meaning that they were less concerned about being negatively evaluated, being criticized or judged, and were less hesitant to ask for clarifications. At the same time, they scored high on both dimensions of the Feedback Orientation Scale, social awareness, and feedback self-efficacy.
4.5 The relation between the five clusters and feedback skills
As a final step in the analyses (RQ4), we explored whether the distinct cluster groups performed differently on the positive and negative feedback skills scales. The results with the means and standard deviations of the feedback skills scale for the five groups can be found in Table 6.
The results of the ANOVA reveal that there are significant differences between the groups for both the positive (p < 0.001) and negative (p < 0.001) feedback skills.
The Bonferroni post hoc tests for positive feedback skills show that differences between groups 1 and 5 (p < 0.001), groups 2 and 5 (p < 0.001), groups 3 and 5 (p < 0.001), and groups 4 and 5 (p < 0.05) are significant.
The Bonferroni post hoc tests for the negative feedback skills show significant differences between groups 1 and 2 (p < 0.001), groups 1 and 3 (p < 0.001), groups 2 and 4 (p < 0.01), groups 2 and 5 (p < 0.001), groups 3 and 5 (p < 0.001), and groups 4 and 5 (p < 0.001).
5 Discussion
Much has been written about the importance of and quality criteria for feedback given to HE students. Research has, however, largely focused on formative, task-related feedback for assessment purposes and employing a ‘one-size-fits-all’ approach (Gabelica and Popov, 2020). Less is known about how feedback is perceived and processed by students, in particular within the context of giving and receiving peer feedback on behavior and performance in a team. Considering the increased demand to integrate soft skills, such as feedback skills, into HE curricula, this study demonstrates that more refined treatment and more in-depth insights regarding feedback in student teams are needed.
First, this study provides an instrument to measure how skilled students perceive themselves to be when they give and receive feedback. Despite the importance of feedback skills in HE and the increased use of team-based learning activities, there are no existing instruments for feedback skills. We developed and validated an instrument with two scales related to positive and negative feedback. The results of the exploratory and confirmatory factor analyses confirmed the structural validity of this instrument, and the reliability of both scales (Cronbach’s α and McDonald’s Ω) was found to be adequate (RQ1a). Differences according to gender were found for the negative feedback scale (female students scored lower than male students), while no gender differences were found for the positive feedback scale (RQ2).
To investigate individual differences in how feedback is perceived and processed, we selected a distinct set of scales coming from existing feedback questionnaires from other contexts, developed by Hulse-Killacky et al. (2006) and Linderbaum and Levy (2010). Five scales were used for our research purposes: Feelings, Evaluative, Clarifying, Social Awareness and Feedback Self-Efficacy. Since these scales were now to be used in a different context (university students working in interdisciplinary project teams), we conducted a series of confirmatory factor analyses. These analyses confirmed the structural validity of each of the five scales. All scales showed good to very good reliability scores (Cronbach’s α and McDonald’s Ω) (RQ1b).
To explore how individual characteristics as antecedents relate to the students’ perceived feedback skills, we adopted a person-oriented approach and examined how these antecedents relate within students. We first applied a hierarchical cluster analysis, followed by a non-hierarchical cluster analysis (k-means). The output of the hierarchical cluster analysis supported the conclusion of five distinguishable clusters or groups of students. The subsequent k-means analysis provided cluster centers for these five clusters, which allows for a qualitative description based on the shared characteristics of these distinct clusters (RQ3). The observed characteristics show striking differences between the five clusters. For instance, clusters 1 and 2 show nearly opposite patterns, while clusters 3 and 4 have less distinct profiles. Cluster 5 shows a highly distinct pattern, as all corrective feedback variables score below average (meaning these students were less concerned about being criticized) and both Feedback Orientation Scale variables score above average (meaning that they report high social awareness and feedback self-efficacy).
It is furthermore interesting to note that there were significant differences between the clusters regarding the scores on perceived feedback skills (RQ4). Cluster 5 shows the highest scores for both positive and negative feedback skills. For positive feedback skills, cluster 5 scores significantly higher than all the other clusters, and for negative feedback skills the differences are significant compared to clusters 2, 3 and 4. The most substantial difference is between clusters 5 and 2 regarding how they perceived their negative feedback skills (Table 6). These significant differences between the clusters show that to support students in developing feedback skills, individual differences need to be taken into consideration to “tailor” the students’ learning process (Gabelica and Popov, 2020; de Kleijn, 2023). As such, this study reveals some relevant antecedents of feedback skills for teachers in higher education to consider. These are related to two areas of individual differences, one related to corrective feedback (Hulse-Killacky et al., 2006) and the other to feedback orientation (Linderbaum and Levy, 2010). The area of corrective feedback includes feelings related to concerns about being negatively evaluated, criticized and judged, and reluctance to ask for clarification. Feedback orientation is about a person’s overall receptivity to feedback, operationalized as social awareness and feedback self-efficacy.
5.1 Limitations
This study has several limitations. First, it does not allow for studying the influence of the context and situational factors, such as how the relationship between the members in the team might influence giving and receiving feedback or how the course design might contribute to how the students perceived the feedback situation. The students in this study filled out the questionnaire as part of a course where there was an explicit focus on feedback with compulsory feedback exercises. In these exercises, the students’ task was to give feedback on team members’ behavior, although the questions in the questionnaire did not specifically distinguish between feedback on performance or behavior. This context might have influenced how the students perceived their skills and how they answered questions about their individual preferences and characteristics regarding feedback.
A second limitation concerns the technique used. Cluster analysis is an exploratory technique that has no strong a priori indices or benchmarks, which implies that some decisions or choices, although informed, must be made by the researcher. We chose a five-cluster solution based on visual inspection of the dendrogram, although a two-cluster solution could have been a viable option as well. However, the two-cluster solution was not very informative for our purposes in terms of the resulting cluster characteristics. As there existed no previous theory pointing in a particular direction, we chose the five-cluster solution that represents a finer-grained picture of the different clusters of students present in our sample.
The third limitation is related to the sampling methodology: all participants in our study came from a single research-oriented university with a strong technical tradition, located within one country. This implies that the sample on which the clustering into groups is based is not a random sample of the whole population of university students. Thus, we cannot guarantee that all student groups were represented in our results. Finally, we relied on self-reporting, which ideally should be complemented with qualitative data, such as observations and interviews.
5.2 Implications and future research
This study has implications for teachers in higher education and other practitioners. Most importantly, the study is a reminder for teachers in higher education that there is no ‘one-size-fits-all’ way to give and receive feedback. This is particularly important to consider in collaborative learning activities, in which individual differences can play out differently in different student groups. Furthermore, the newly developed feedback instrument seems to be a promising self-assessment tool for students to describe their perceived feedback skills. It can also be used to evaluate students’ development during courses that are intended to foster feedback skills.
For future research, we encourage the use of the feedback instrument in other educational contexts to allow replication of its validation. It would also be of interest to employ pre-post measurements and mixed-methods designs that allow a description of the development of student feedback skills during a course and a more in-depth understanding of the differences that can play out in a group. Finally, it is relevant to investigate whether there are connections between the students’ personalities, feedback antecedents, and other collaborative skills.
6 Conclusion
This study contributes to the limited research on the dynamics of giving and receiving feedback from the perspective of students in the context of collaborative learning. We constructed a feedback skills questionnaire with two distinct dimensions (for positive and negative feedback) that meet robust psychometric requirements. This short instrument (7 items) can easily be administered to assess the students’ feedback skills or monitor their development during a course. On the basis of a well-judged selection of feedback characteristics and applying hierarchical and k-means cluster analysis, we were able to distinguish five distinct groups of students with common characteristics. The person-oriented approach allowed us to disentangle the interplay of the feedback characteristics within the person and their corresponding cluster or group. The results provided evidence that an approach that goes beyond ‘one-size-fits-all’ is necessary for studying feedback in student groups and for facilitating student feedback in collaborative learning activities.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving humans were approved by Norwegian Agency for Shared Services in Education and Research (SIKT). The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
ES: Conceptualization, Investigation, Methodology, Project administration, Writing – original draft. MJ: Conceptualization, Writing – review & editing. PP: Conceptualization, Formal analysis, Investigation, Methodology, Validation, Writing – original draft.
Funding
The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2024.1475944/full#supplementary-material
References
Alpay, E., and Walsh, E. (2008). A skills perception inventory for evaluating postgraduate transferable skills development. Assess. Eval. High. Educ. 33, 581–598. doi: 10.1080/02602930701772804
Aoun, C., Vatanasakdakul, S., and Ang, K. (2018). Feedback for thought: examining the influence of feedback constituents on learning experience. Stud. High. Educ. 43, 72–95. doi: 10.1080/03075079.2016.1156665
Ashenafi, M. M. (2017). Peer-assessment in higher education – twenty-first century practices, challenges, and the way forward. Assess. Eval. High. Educ. 42, 226–251. doi: 10.1080/02602938.2015.1100711
Bergman, L. R., and Lundh, L. G. (2015). The person-oriented approach: roots and roads to the future. J. Person Oriented Res. 1, 1–6. doi: 10.17505/jpor.2015.01
Black, P., and William, D. (1998). Assessment and classroom learning. Assessment Educ. 5, 7–74. doi: 10.1080/0969595980050102
Carless, D., and Boud, D. (2018). The development of student feedback literacy: enabling uptake of feedback. Assess. Eval. High. Educ. 43, 1315–1325. doi: 10.1080/02602938.2018.1463354
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika 16, 297–334. doi: 10.1007/BF02310555
Cumming, J., Woodcock, C., Cooley, S. J., Holland, M. J. G., and Burns, V. E. (2015). Development and validation of the groupwork skills questionnaire (GSQ) for higher education. Assess. Eval. High. Educ. 40, 988–1001. doi: 10.1080/02602938.2014.957642
Dawson, P., Yan, Z., Lipnevich, A., Tai, J., Boud, D., and Mahoney, P. (2023). Measuring what learners do in feedback: the feedback literacy behaviour scale. Assess. Eval. High. Educ. 49, 348–362. doi: 10.1080/02602938.2023.2240983
de Kleijn, R. A. M. (2023). Supporting student and teacher feedback literacy: an instructional model for student feedback processes. Assess. Eval. High. Educ. 48, 186–200. doi: 10.1080/02602938.2021.1967283
Evans, C. (2013). Making sense of assessment feedback in higher education. Rev. Educ. Res. 83, 70–120. doi: 10.3102/0034654312474350
Fenigstein, A., Scheier, M. F., and Buss, A. H. (1975). Public and private self-consciousness: assessment and theory. J. Consult. Clin. Psychol. 43, 522–527. doi: 10.1037/h0076760
Fong, C. J., and Schallert, D. L. (2023). “Feedback to the future”: advancing motivational and emotional perspectives in feedback research. Educ. Psychol. 58, 146–161. doi: 10.1080/00461520.2022.2134135
Gabelica, C., and Popov, V. (2020). “One size does not fit all”: revisiting team feedback theories from a cultural dimensions perspective. Group Org. Manag. 45, 252–309. doi: 10.1177/1059601120910859
Gabelica, C., Van den Bossche, P., De Maeyer, S., Segers, M., and Gijselaers, W. (2014). The effect of team feedback and guided reflexivity on team performance change. Learn. Instr. 34, 86–96. doi: 10.1016/j.learninstruc.2014.09.001
Gabelica, C., Van den Bossche, P., Segers, M., and Gijselaers, W. (2012). Feedback, a powerful lever in teams: a review. Educ. Res. Rev. 7, 123–144. doi: 10.1016/j.edurev.2011.11.003
Hooper, D., Coughlan, J., and Mullen, M. R. (2008). Structural equation modelling: guidelines for determining model fit. Electron. J. Bus. Res. Methods 6, 53–60.
Hu, L.-T., and Bentler, P. M. (1999). Cut-off criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct. Equ. Model. Multidiscip. J. 6, 1–55. doi: 10.1080/10705519909540118
Hulse-Killacky, D., Orr, J. J., and Paradise, L. V. (2006). The corrective feedback instrument–revised. J. Specialists Group Work 31, 263–281. doi: 10.1080/01933920600777758
Johnsen, M. M. W., Sjølie, E., and Johansen, V. (2023). Learning to collaborate in a project-based graduate course: a multilevel study of student outcomes. Res. High. Educ. 65, 439–462. doi: 10.1007/s11162-023-09754-7
Jussim, L., Coleman, L. M., and Nassau, S. R. (1989). Reactions to interpersonal evaluative feedback. J. Appl. Soc. Psychol. 19, 862–884. doi: 10.1111/j.1559-1816.1989.tb01226.x
King, P. E., Schrodt, P., and Weisel, J. J. (2009). The instructional feedback orientation scale: conceptualizing and validating a new measure for assessing perceptions of instructional feedback. Commun. Educ. 58, 235–261. doi: 10.1080/03634520802515705
Laursen, B., and Hoff, E. (2006). Person-centered and variable-centered approaches to longitudinal data. Merrill-Palmer Q. 52, 377–389. doi: 10.1353/mpq.2006.0029
Linderbaum, B. A., and Levy, P. E. (2010). The development and validation of the feedback orientation scale (FOS). J. Manag. 36, 1372–1405. doi: 10.1177/0149206310373145
London, M. (2003). Job feedback: Giving, seeking, and using feedback for performance improvement. Mahwah, NJ: Lawrence Erlbaum.
London, M., and Sessa, V. I. (2006). Group feedback for continuous learning. Hum. Resour. Dev. Rev. 5, 303–329. doi: 10.1177/1534484306290226
London, M., and Smither, W. (2002). Feedback orientation, feedback culture, and the longitudinal performance management process. Group Org. Manag. 24, 162–184.
MacDonald, C. J., Archibald, D., Trumpower, D., Casimiro, L., Cragg, B., and Jelley, W. (2010). Designing and operationalizing a toolkit of bilingual Interprofessional education assessment instruments. J. Res. Interprofess. Pract. Educ. 1, 304–316. doi: 10.22230/jripe.2010v1n3a36
Muukkonen, H., Lakkala, M., Lahti-Nuuttila, P., Ilomaki, L., Karlgren, K., and Toom, A. (2020). Assessing the development of collaborative knowledge work competence: scales for higher education course contexts. Scand. J. Educ. Res. 64, 1071–1089. doi: 10.1080/00313831.2019.1647284
NESH. (2021). Guidelines for research ethics in the social sciences, humanities, law and theology. Available at: https://www.forskningsetikk.no/retningslinjer/hum-sam/forskningsetiske-retningslinjer-for-samfunnsvitenskap-og-humaniora/ (Accessed May 5, 2024).
O’Neill, T. A., Pezer, L., Solis, L., Larson, N., Maynard, N., Dolphin, R., et al. (2020). Team dynamics feedback for post-secondary student learning teams: introducing the “bare CARE” assessment and report. Assess. Eval. High. Educ. 45, 1121–1135. doi: 10.1080/02602938.2020.1727412
Sjølie, E., Espenes, T. C., and Buø, R. (2022). Social interaction and agency in self-organizing student teams during their transition from face-to-face to online learning. Comput. Educ. 189:104580. doi: 10.1016/j.compedu.2022.104580
Sjølie, E., Strømme, A., and Boks-Vlemmix, J. (2021). Team-skills training and real-time facilitation as a means for developing student teachers’ learning of collaboration. Teach. Teach. Educ. 107:103477.
Small, F., and Attree, K. (2016). Undergraduate student responses to feedback: expectations and experiences. Stud. High. Educ. 41, 2078–2094. doi: 10.1080/03075079.2015.1007944
Keywords: feedback skills, feedback characteristics, validation, higher education, person-oriented perspective, project-based learning, student teams
Citation: Sjølie E, Johnsen MMW and van Petegem P (2024) Profiling feedback antecedents in higher education students: a person-oriented perspective. Front. Educ. 9:1475944. doi: 10.3389/feduc.2024.1475944
Edited by: Amelia Manuti, University of Bari Aldo Moro, Italy
Reviewed by: Katrin Herget, University of Aveiro, Portugal; Tiago Tempera, Instituto Politécnico de Lisboa, Portugal
Copyright © 2024 Sjølie, Johnsen and van Petegem. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Ela Sjølie, ela.sjolie@ntnu.no
†These authors have contributed equally to this work