
MINI REVIEW article

Front. Educ., 08 January 2024
Sec. Assessment, Testing and Applied Measurement

Assessing class participation in physical and virtual spaces: current approaches and issues

  • 1Faculty of Education, Teaching and Learning Innovation Centre, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
  • 2Department of Psychology, De La Salle University, Manila, Philippines
  • 3Seinan Gakuin University, Fukuoka, Japan

Learning occurs best when students are given opportunities to be active participants in the learning process. As assessment strategies are forced to change in the era of generative AI, and as digital technologies continue to integrate with education, it becomes imperative to gather information on current approaches to evaluating student participation. This mini-review aimed to identify existing methods used by higher education teachers to assess participation in both physical and virtual classrooms. It also aimed to identify common issues that are anticipated to impact future developments in this area. To achieve these objectives, articles were retrieved from the ERIC database using the search phrase “assessment of class participation.” The search was limited to peer-reviewed articles written in English, with the educational level limited to “higher education” and “postsecondary education.” From the 2,320 articles retrieved, titles and abstracts were screened and 65 articles were retained. After reading the full texts, a total of 45 articles remained for analysis, all published between 2005 and 2023. Using thematic analysis, the following categories were formed: innovations in assessing class participation, criteria-related issues, and the issue of fairness in assessing class participation. As education becomes more reliant on technology, we need to be cognizant of the issues raised in this review regarding inequity of educational access and opportunity, and to develop solutions that promote equitable learning. We therefore call for more equity-focused innovation, policymaking, and pedagogy for more inclusive classroom environments. Further implications and potential directions for research are discussed.

1 Introduction

The assessment of classroom participation is a critical topic for both longstanding and rapidly emerging reasons. First, it is widely acknowledged that students learn best when they are active participants in the learning process (Petress, 2006; Ryan et al., 2007; Campbell et al., 2022). Class participation has therefore become a valued standard for student learning. As graded participation is associated with effective preparation for class, frequency of participation, and comfort with class participation (Dallimore et al., 2006), assessing classroom participation can be an effective means to promote student engagement. Second, there is urgency in reviving interest in assessing classroom participation as a vital tool for assessing student learning, especially as educators face challenges related to the rise of generative language models.

However, there are barriers to objectively assessing class participation. A seminal article by Armstrong and Boud (1983) discussed the problems commonly encountered in grading participation, and evidence suggests that most of these problems in assessing participation persist at present (Márquez et al., 2023). Some of the challenges include subjectivity, reliability, and the unintended detrimental effect of some methods of assessment used by instructors on the quality of class discussions (Armstrong and Boud, 1983; Bean and Peterson, 1998; Xu and Qiu, 2022). These issues have resulted in some arguments against grading participation, such as students mistaking the quantity of participation for quality and students ramping up oral participation just to score points toward their participation grade (Arnold, 2021).

Traditionally, attending classes and orally communicating one’s thoughts during lectures and class discussions have been considered the main behavioral indicators of participation. But with the advent of online learning and emerging educational technologies, there is a need to expand our view of what constitutes participation. For example, in large in-person classes, there is little chance to reliably evaluate each student’s participation in oral discussions given the limited time in lectures (Penn, 2008). Having to assess class participation while actively facilitating a learning session can be burdensome for teachers, especially in these large classes. Furthermore, vocal students are not the only engaged learners (Shi and Tan, 2020). Students may remain silent for various reasons, such as fear of making mistakes and embarrassing themselves in public, even while they are mentally engaged during class. Thus, more inclusive strategies are needed for evaluating class participation.

Creating rubrics that explicitly indicate expected competencies, and communicating these to students, is one approach to addressing the subjective assessment of participation. One example of a scoring rubric is the six-point scale with descriptors developed by Bean and Peterson (1998). Craven and Hogan (2001) also developed a rubric for higher education classrooms, assigning points for different levels of participation (exceeds, meets, or fails to meet expectations). While teachers tend to differ in what they emphasize when assessing class participation, what matters most is having clear and consistent standards in the evaluation process. Teachers also have to ensure that the approach is aligned with their course goals and pedagogical methods.

Various methods of assessing class participation exist, the most common being observation, self-assessment, peer assessment, and teacher assessment of in-class activities. While teacher observations are useful in evaluating the quality and quantity of student interaction and participation, they are sometimes considered a subjective approach, especially if no clear criteria are provided (Armstrong and Boud, 1983). Self-assessment, on the other hand, gives students the opportunity to evaluate their own performance. This method is often questioned because students tend to inflate the grades they assign to themselves relative to teachers’ ratings (Gopinath, 1999; Ryan et al., 2007). The third approach, peer assessment, gives students a chance to evaluate their peers’ participation. While heralded as a reasonable alternative to teacher ratings (Arnold, 2021), this method is also not free from bias. Similar to self-assessments, students’ ratings of their peers tend to be significantly higher than teachers’ grades (Gopinath, 1999) and are not predictive of teacher evaluations (Ryan et al., 2007). Lastly, while teacher assessment of students’ participation in in-person activities can be useful in guiding students and providing immediate feedback on their work, it may be a limited means of assessing participation, especially in online environments where there are no physical interactions.

A scoping review (Czekanski and Wolf, 2013) drew attention to clicker technology as a way to engage learners in digital environments (Hunter Revell and McCurry, 2010). While this promoted participation, the anonymity of responses did not allow teachers to identify students and provide marks for participation. One study using an electronic bulletin board system found a medium, positive correlation between the number of posts graduate students made and their score on the course examination (Siegel et al., 2001). This suggests an association between mastery of material and participation in web-based discussions, highlighting the significance of assessing class participation despite arguments against it.

2 The present review

Assessment of participation has been and continues to be a complex issue that vexes many higher education teachers (Flaherty et al., 2008). With the increased prevalence of online learning and improved access to educational technologies, it becomes imperative to review the literature on assessment of class participation and reflect on how practices have changed over the years. Furthermore, the growing importance of developing alternative ways of assessing student learning and understanding of content follows directly from the assessment challenges (e.g., cheating, plagiarism) that educators currently face in the age of generative language models (Dwivedi et al., 2023).

To meet the objectives of this mini-review, we began by searching for articles through ERIC, known to be the largest database in education, using “assessment of class participation” as the search term. The search was conducted in July, limited to “peer reviewed only” articles, and came up with 8,315 articles. The search was then refined by limiting the educational level to “higher education” and “postsecondary education,” which yielded 2,320 articles. Articles with no full text available in ERIC were retrieved through the authors’ institutional subscriptions. Only full-text articles written in English were retained after title and abstract screening. A total of 65 records were read in full to determine whether they were about assessing class participation, and 45 articles were retained in the final roster. Two of these were reflexive pieces, included in recognition that valuable insights can also be gained from educators’ personal reports. The articles were coded, and thematic analysis was applied to develop categories that highlight existing issues in assessing class participation; these themes are discussed in subsequent sections. Using the same search term, another round of searching in ERIC was conducted during the first week of December 2023 to capture recently published articles on the topic that were not in the original roster. No new articles emerged from this search, highlighting the scarcity of studies on assessing class participation.

3 Findings

3.1 Studies’ characteristics

The highest annual number of articles published on assessment of class participation (N = 11) was in 2021. The journal Assessment and Evaluation in Higher Education published the highest number of articles on the topic (N = 5). Thirty-five studies focused on undergraduate students, four on post-graduate students, and another four on both educational levels. Three articles did not specify an educational level focus.

No specific timeframe was set in the ERIC database during the search. The oldest text considered in this mini-review was published in 2005 (without a specific publication date), while the most recent article was published in April 2023. Table 1 displays information on these articles.

Table 1. Details from oldest and most recent text reviewed.

3.2 Indicators of classroom participation

Indicators of participation were classified into three categories: general, in-person, and online indicators. Attendance was the most commonly used general indicator, mentioned in 16 articles. The most commonly used in-person indicators were peer interactions and collaborations (N = 24), oral participation and discussions (N = 19), and work completion (N = 7). The most commonly mentioned online indicators were forum or discussion posts (N = 7) and access to and engagement with online materials (N = 5).

Findings were grouped under three themes: innovations in assessing class participation, criteria-related issues, and the issue of fairness in assessing class participation.

3.3 Innovations in assessing class participation

To keep up with the digitalization of education, technology-based innovations in assessing class participation have steadily grown. One example is technology-enabled formative assessment with an organized system of point collection to evaluate both the effort and achievement of each student after each session (Kereković, 2021; Lu and Cutumisu, 2022). A visual-based measure, the graphical self-assessment manikin (SAM), assisted in measuring students’ emotional responses and participation in an online peer assessment activity (Cheng et al., 2012). There is also a classroom data visualization tool that enables tracking of individuals during group-centered instruction (Makowski and Lubienski, 2023). Another innovation for monitoring participation is the activation of a Zoom function that automates attendance recording (Bekkering and Ward, 2020); the same researchers also used focus on the application (i.e., whether the videoconferencing window remained active) to measure attentiveness. Other innovations included the use of an online chat box (Huang, 2022), electroencephalogram (EEG) measurement of attention levels (Sezer et al., 2017), threaded online discussions with peers (Lai, 2011; Jin, 2021), learning analytics (e.g., log file data), and natural language processing (NLP) techniques for crediting forum posts (Bihani and Paepcke, 2018).

3.4 Criteria-related issues

Articles mentioned the importance of aligning criteria and expectations with the nature of the course and the activities to be accomplished (Smith, 2008; Barlow et al., 2020; Orzolek, 2020). It is important to communicate these expectations to students clearly, and to provide a detailed rubric to avoid a subjective marking process (Flaherty et al., 2008; Baghurst, 2014; Holloman et al., 2021). Communicating expectations will also prevent students from expecting grades that are disproportionate to their performance (Alshakhi, 2021). One recommendation is to involve students in establishing criteria and to negotiate expectations so that grade policies can be aligned with the forms of participation that they value (Quesada et al., 2019; Chessey, 2021). However, learners’ overreliance on some guidelines for participation can at times lead to superficial participation (Koehler and Meech, 2021), raising the issue of quantity vs. quality where students attempt to fill a quota just to receive high participation grades without necessarily contributing quality answers (Flaherty et al., 2008; Lai, 2011; Yildirim, 2017). Numerically capping credit for participation in online discussion boards was one of the ways teachers prevented a minority of students from dominating the discussion thread (Galyon et al., 2015).
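One concrete way to read the capping strategy mentioned above is as a simple scoring rule. The sketch below is purely illustrative, with a hypothetical per-post point value and cap rather than figures from the reviewed studies:

```python
def participation_credit(num_posts: int, points_per_post: int = 2, cap: int = 5) -> int:
    """Credit discussion-board posts only up to a fixed cap.

    Posts beyond the cap earn no further credit, removing the
    incentive for a minority of students to dominate the thread.
    """
    return min(num_posts, cap) * points_per_post
```

Under this rule, a student with three posts earns 6 points, while a student with twelve posts is credited only for the first five (10 points), so flooding the board yields no grading advantage.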

3.5 Issue of fairness in assessing class participation

Fairness has been a long-standing issue in discussions about assessing class participation. The practice of using the frequency of students’ oral participation as the sole basis for participation grades has been questioned. For instance, a highly skilled student who feels little need to publicly demonstrate proficiency could be graded unfairly even though the student has mastered the course content (Baghurst, 2014). Silent learners (Baghurst, 2014; Macfarlane, 2016; Theriault, 2019), shy learners (Macfarlane, 2016; Akpur, 2021), and individual-activity-oriented (IAO) students (Crosthwaite et al., 2015) have also started to gain attention. While online dashboards offer a possible solution for students who are uncomfortable speaking in public, online methods are not without their limitations. Even when teachers apply ad hoc formulas to the participation statistics provided by online discussion forums, fairly crediting forum participation remains difficult in very large classes (Bihani and Paepcke, 2018). Instructors noted instances where students were caught “gaming” the system by simply copying a peer’s forum posts, adding spaces or characters to cheat the automated system (Flaherty et al., 2008).
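The “gaming” behavior noted above (copying a peer’s post and padding it with extra spaces or characters) can often be caught by normalizing posts before comparing them. The following sketch illustrates this general idea only; it is not a reconstruction of any system discussed in the reviewed studies:

```python
import re

def normalize(post: str) -> str:
    """Lowercase, drop punctuation, and collapse whitespace so that
    padded copies of a post reduce to the same canonical string."""
    text = re.sub(r"[^a-z0-9\s]", "", post.lower())
    return re.sub(r"\s+", " ", text).strip()

def flag_duplicates(posts: dict[str, str]) -> list[tuple[str, str]]:
    """Return (earlier_student, later_student) pairs whose
    normalized posts are identical."""
    seen: dict[str, str] = {}  # normalized text -> first student who posted it
    flags: list[tuple[str, str]] = []
    for student, post in posts.items():
        key = normalize(post)
        if key in seen:
            flags.append((seen[key], student))
        else:
            seen[key] = student
    return flags
```

For example, `flag_duplicates({"s1": "Quality matters, not quantity.", "s2": "  quality MATTERS,, not quantity!! "})` returns `[("s1", "s2")]`, since both posts normalize to the same string. A production system would need fuzzier matching (e.g., edit distance) to catch paraphrased copies as well.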

A gender equity issue was also raised, with evidence favoring men (Ernest et al., 2019). A previous study found that student gender influenced teachers’ assessment of participation, as both male and female teachers interacted more with, and gave more attention to, their male students than their female students (Spender, 1982). This demonstrates how teachers’ biases can influence the assessment process (Flaherty et al., 2008). Ethnic and cultural considerations were also raised, such as the relative disadvantage experienced by students who are non-native speakers or who are from minority groups (Yildirim, 2017; Chessey, 2021). In a study comparing participation patterns of students from developing and developed countries, socio-economic inequities were also highlighted, with the gap in participation patterns attributed to unequal access to technology and to the training required to use it adeptly (Giannini-Gachago and Seleka, 2005).

4 Discussion

Assessing class participation continues to be a contentious issue. An important question remains whether the benefits of assessing participation outweigh the costs and effort it requires from stakeholders. Easing teachers’ burden without compromising the quality of learning should be the goal of most innovations. Fortunately, technology-enabled formative assessment and data visualization tools now allow individuals’ performance within a group to be monitored; these are less subjective and less effortful ways of assessing participation. Automating functions such as attendance recording in online classes also saves teachers’ time and effort. While focus on the videoconferencing application was recommended as an indicator of attentiveness, it was found not to be a reliable measure. Given this, online chat boxes and discussion boards can be considered more reliable tools to aid in assessing participation. The online chat box (Huang, 2022) provides a more comfortable classroom climate and facilitates broader participation than the limited contributions of a small number of students observed in studies focused on oral participation (Rocca, 2010; Theriault, 2019). The caveat is that there must be a way not just to automatically quantify responses, but also to evaluate the quality of their content. Learning analytics and advanced NLP techniques are thus promising tools, provided they allow detection of cheating to prevent students from gaming the system (Flaherty et al., 2008).

Web-based student response systems enable students to respond to teachers’ questions, receive feedback, and view their classmates’ responses in real time, thereby enhancing interactivity in the classroom and giving students room to self-assess (Heaslip et al., 2013; Persaud and Persaud, 2019). This is beneficial since researchers have highlighted the critical role of timely feedback in assessing class participation (Márquez et al., 2023), which allows students to engage in corrective actions (Macfarlane, 2016). These web-based systems are also ideal for use in large classes where time constraints are often an issue. But while the anonymity of the software largely encourages participation, it makes it impossible for the teacher to assign marks unless students’ responses can be attached to their names. The challenge is that most students prefer to remain anonymous and are more likely to participate if they know they will not be identified (Heaslip et al., 2013; Persaud and Persaud, 2019). This brings to light ethical concerns such as students’ possible resistance to “intrusive technology” (Bekkering and Ward, 2020). To address such concerns, teachers are obliged to inform students (e.g., through the syllabus) that they are being monitored, in what manner, and for what purpose. In automated systems that count responses, students should be given the freedom to choose whether they wish to be identified, to avoid breaches of privacy. In another study, the use of EEG to monitor attention levels was mentioned (Sezer et al., 2017). While appealing for being language- and culture-free, physiological measures are currently not suitable for application in natural learning contexts and are only well-suited for research purposes.

With regard to innovations, the limited access to technology in low- and middle-income countries (LMICs) is a problem that deserves attention. Socio-economic inequalities in technology access and computer literacy were brought to the forefront during the COVID-19 pandemic, when learning shifted online (Czerniewicz et al., 2020). Teachers have crucial roles to play as the implementers of innovations. Given the inequities found in this review with regard to gender, ethnicity, and other factors, teachers also need to be aware of their own biases and of the impact their behaviors have on students. Lastly, teachers are responsible for ensuring that rubrics are designed on the basis of the classroom context, with criteria targeted at promoting engagement as well as assessing students’ learning.

4.1 Conclusion and recommendation

With educational technology (EdTech) tools, class participation is no longer limited to oral contributions. Criteria and tasks should therefore account for individual differences among students, including their cultural backgrounds. Aligning and clearly communicating expectations through a detailed rubric remains one of the best practices in assessing class participation, in line with the call for standards-based and criteria-based assessment practice (Sadler, 2005; Alonzo et al., 2019). We highlight the importance of class participation assessment as a viable alternative means of assessing student learning, given the challenges encountered by teachers in this age of generative language models. This review gathered technological solutions meant to ease teachers’ burden in assessing participation. The use of AI is one viable solution, notwithstanding its inherent challenges. However, AI applications would require significant investment from institutions to introduce innovations and to provide training opportunities to teachers, as well as flexibility and willingness from teachers to change their practices. We recommend revisiting current criteria to evaluate whether standards for assessing classroom participation are aligned with the current educational climate, especially in online contexts.

Given the scarcity of studies in this area, researchers are encouraged to gather the current practices of teachers and the challenges they encounter in assessing class participation in both online and offline learning environments. Another important question to address is how technology-enabled assessment impacts teaching, and consequently, students’ learning. It would also be interesting to examine whether the available technologies and their functions and affordances are aligned with intended learning outcomes.

Educational technologies can aid learning and instruction by making the assessment of class participation manageable for teachers. Going forward, researchers and practitioners are called on to develop and advocate for more equity-focused innovation, policymaking, and pedagogy.

Author contributions

PS: Conceptualization, Formal analysis, Writing – original draft. LF: Conceptualization, Writing – review & editing. KN: Formal analysis, Writing – review & editing.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. Funding was received from Seinan Gakuin University.

Acknowledgments

The authors would like to thank Yijin Li for her assistance with this project.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2023.1306568/full#supplementary-material

References

Akpur, U. (2021). Does class participation predict academic achievement? A mixed-method study. English Lang. Teach. Educ. J. 4, 148–160. doi: 10.12928/eltej.v4i2.3551

CrossRef Full Text | Google Scholar

Alonzo, D., Mirriahi, N., and Davison, C. (2019). The standards for academics’ standards-based assessment practices. Assess. Eval. High. Educ. 44, 636–652. doi: 10.1080/02602938.2018.1521373

CrossRef Full Text | Google Scholar

Alshakhi, A. (2021). EFL teachers’ assessment practices of students’ interactions in online classes: an activity theory Lens. TESOL Int. J. 16, 148–176.

Google Scholar

Armstrong, M., and Boud, D. (1983). Assessing participation in discussion: an exploration of the issues. Stud. High. Educ. 8, 33–44. doi: 10.1080/03075078312331379101

CrossRef Full Text | Google Scholar

Arnold, S. L. (2021). Replacing “the holy grail”: use peer assessment instead of class participation grades! Int. J. Manag. Educ. 19:100546. doi: 10.1016/j.ijme.2021.100546

CrossRef Full Text | Google Scholar

Baghurst, T. (2014). Assessment of effort and participation in physical education. Phys. Educ. 71, 505–513.

Google Scholar

Barlow, A., Brown, S., Lutz, B., Pitterson, N., Hunsu, N., and Adesope, O. (2020). Development of the student course cognitive engagement instrument (SCCEI) for college engineering courses. Int. J. STEM Educ. 7, 1–20. doi: 10.1186/s40594-020-00220-9

CrossRef Full Text | Google Scholar

Bean, J. C., and Peterson, D. (1998). Grading classroom participation. New Dir. Teach. Learn. 1998, 33–40. doi: 10.1002/tl.7403

CrossRef Full Text | Google Scholar

Bekkering, E., and Ward, T. (2020). Class participation and student performance: a tale of two courses. Inf. Syst. Educ. J. 18, 86–98.

Google Scholar

Bihani, A., and Paepcke, A. (2018). QuanTyler: apportioning credit for student forum participation. Proceedings of the 11th international conference on educational data mining, 106–115.

Google Scholar

Campbell, L. O., Heller, S., and Pulse, L. (2022). Student-created video: an active learning approach in online environments. Interact. Learn. Environ. 30, 1145–1154. doi: 10.1080/10494820.2020.1711777

CrossRef Full Text | Google Scholar

Cheng, K.-H., Hou, H.-T., and Wu, S.-Y. (2012). Exploring students’ emotional responses and participation in an online peer assessment activity: a case study. Interact. Learn. Environ. 22, 271–287. doi: 10.1080/10494820.2011.649766

CrossRef Full Text | Google Scholar

Chessey, M. K. (2021). Shifting participation grade policies to value more kinds of student engagement. Phys. Teach. 59, 16–18. doi: 10.1119/10.0003008

CrossRef Full Text | Google Scholar

Craven, J. A., and Hogan, T. (2001). Assessing student participation in the classroom. Sci. Scope 25:36.

Google Scholar

Crosthwaite, P. R., Bailey, D. R., and Meeker, A. (2015). Assessing in-class participation for EFL: considerations of effectiveness and fairness for different learning styles. Lang. Test. Asia 5, 1–19. doi: 10.1186/s40468-015-0017-1

CrossRef Full Text | Google Scholar

Czekanski, K. E., and Wolf, Z. R. (2013). Encouraging and evaluating class participation. J. Univ. Teach. Learn. Pract. 10, 83–96. doi: 10.53761/1.10.1.7

CrossRef Full Text | Google Scholar

Czerniewicz, L., Agherdien, N., Badenhorst, J., Belluigi, D., Chambers, T., Chili, M., et al. (2020). A wake-up call: equity, inequality and Covid-19 emergency remote teaching and learning. Postdigital Sci. Educ. 2, 946–967. doi: 10.1007/s42438-020-00187-4

CrossRef Full Text | Google Scholar

Dallimore, E. J., Platt, M. B., and Hertenstein, J. H. (2006). Nonvoluntary class participation in graduate discussion courses: effects of grading and cold calling. J. Manag. Educ. 30, 354–377. doi: 10.1177/1052562905277031

CrossRef Full Text | Google Scholar

Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., et al. (2023). “So what if ChatGPT wrote it?” multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 71:102642. doi: 10.1016/j.ijinfomgt.2023.102642

CrossRef Full Text | Google Scholar

Ernest, J. B., Reinholz, D. L., and Shah, N. (2019). Hidden competence: Women’s mathematical participation in public and private classroom spaces. Educ. Stud. Math. 102, 153–172. doi: 10.1007/s10649-019-09910-w

CrossRef Full Text | Google Scholar

Flaherty, J., Choi, H. C., and Johan, N. (2008). A research-based approach to participation assessment: evolving beyond problems to possibilities. Collected Essays Learn. Teach. 1, 110–114. doi: 10.22329/celt.v1i0.3188

CrossRef Full Text | Google Scholar

Galyon, C. E., Heaton, E. C., Best, T. L., and Williams, R. L. (2015). Comparison of group cohesion, class participation, and exam performance in live and online classes. Soc. Psychol. Educ. 19, 61–76. doi: 10.1007/s11218-015-9321-y

CrossRef Full Text | Google Scholar

Giannini-Gachago, D., and Seleka, G. (2005). Experiences with international online discussions: participation patterns of Botswana and American students in an adult education and development course at the University of Botswana. Int. J. Educ. Dev. Inf. Commun. Technol. 1, 163–184.

Google Scholar

Gopinath, C. (1999). Alternatives to instructor assessment of class participation. J. Educ. Bus. 75, 10–14. doi: 10.1080/08832329909598983

CrossRef Full Text | Google Scholar

Heaslip, G., Donovan, P., and Cullen, J. G. (2013). Student response systems and learner engagement in large classes. Act. Learn. High. Educ. 15, 11–24. doi: 10.1177/1469787413514648

CrossRef Full Text | Google Scholar

Holloman, T. K., Lee, W. C., London, J. S., Hawkins Ash, C. D., and Watford, B. A. (2021). The assessment cycle: insights from a systematic literature review on broadening participation in engineering and computer science. J. Eng. Educ. 110, 1027–1048. doi: 10.1002/jee.20425

CrossRef Full Text | Google Scholar

Huang, Q. (2022). Does learning happen? A mixed study of online chat data as an indicator of student participation in an online English course. Educ. Inf. Technol. 27, 7973–7992. doi: 10.1007/s10639-022-10963-3

PubMed Abstract | CrossRef Full Text | Google Scholar

Hunter Revell, S. M. H., and McCurry, M. K. (2010). Engaging millennial learners: effectiveness of personal response system technology with nursing students in small and large classrooms. J. Nurs. Educ. 49, 272–275. doi: 10.3928/01484834-20091217-07

PubMed Abstract | CrossRef Full Text | Google Scholar

Jin, S.-H. (2021). Educational effects on the transparency of peer participation levels in asynchronous online discussion activities. IEEE Trans. Learn. Technol. 14, 604–612. doi: 10.1109/tlt.2021.3126388

CrossRef Full Text | Google Scholar

Kereković, S. (2021). Formative assessment and motivation in ESP: a case study. Lang. Teach. Res. Q. 23, 64–79. doi: 10.32038/ltrq.2021.23.06

CrossRef Full Text | Google Scholar

Koehler, A. A., and Meech, S. (2021). Ungrading learner participation in a student-centered learning experience. TechTrends 66, 78–89. doi: 10.1007/s11528-021-00682-w

CrossRef Full Text | Google Scholar

Lai, K. (2011). Assessing participation skills: online discussions with peers. Assess. Eval. High. Educ. 37, 933–947. doi: 10.1080/02602938.2011.590878

CrossRef Full Text | Google Scholar

Lu, C., and Cutumisu, M. (2022). Online engagement and performance on formative assessments mediate the relationship between attendance and course performance. Int. J. Educ. Technol. High. Educ. 19:2. doi: 10.1186/s41239-021-00307-5

PubMed Abstract | CrossRef Full Text | Google Scholar

Macfarlane, B. (2016). The performative turn in the assessment of student learning: a rights perspective. Teach. High. Educ. 21, 839–853. doi: 10.1080/13562517.2016.1183623

Makowski, M. B., and Lubienski, S. T. (2023). Classroom data visualization: tracking individuals during group-centered instruction. Educ. Res. 52, 164–169. doi: 10.3102/0013189x231158374

Márquez, J., Lazcano, L., Bada, C., and Arroyo-Barrigüete, J. L. (2023). Class participation and feedback as enablers of student academic performance. SAGE Open 13:215824402311772. doi: 10.1177/21582440231177298

Orzolek, D. C. (2020). Effective and engaged followership: assessing student participation in ensembles. Music. Educ. J. 106, 47–53. doi: 10.1177/0027432119892057

Penn, B. K. (2008). Mastering the teaching role. Philadelphia, PA: F.A. Davis.

Persaud, V., and Persaud, R. (2019). Increasing student interactivity using a think-pair-share model with a web-based student response system in a large lecture class in Guyana. Int. J. Educ. Dev. Inf. Commun. Technol. 15, 117–131.

Petress, K. (2006). An operational definition of class participation. Coll. Stud. J. 40, 821–823.

Quesada, V., Gómez Ruiz, M. Á., Gallego Noche, M. B., and Cubero-Ibáñez, J. (2019). Should I use co-assessment in higher education? Pros and cons from teachers and students’ perspectives. Assess. Eval. High. Educ. 44, 987–1002. doi: 10.1080/02602938.2018.1531970

Rocca, K. A. (2010). Student participation in the college classroom: an extended multidisciplinary literature review. Commun. Educ. 59, 185–213. doi: 10.1080/03634520903505936

Ryan, G. J., Marshall, L. L., Porter, K., and Jia, H. (2007). Peer, professor and self-evaluation of class participation. Act. Learn. High. Educ. 8, 49–61. doi: 10.1177/1469787407074049

Sadler, D. R. (2005). Interpretations of criteria-based assessment and grading in higher education. Assess. Eval. High. Educ. 30, 175–194. doi: 10.1080/0260293042000264262

Sezer, A., İnel, Y., Seçkin, A. Ç., and Uluçınar, U. (2017). The relationship between attention levels and class participation of first-year students in classroom teaching departments. Int. J. Instr. 10, 55–68. doi: 10.12973/iji.2017.1024a

Shi, M., and Tan, C. Y. (2020). Beyond oral participation: a typology of student engagement in classroom discussions. N. Z. J. Educ. Stud. 55, 247–265. doi: 10.1007/s40841-020-00166-0

Siegel, D., Ward, L., and McCoach, D. B. (2001). Instructional WebBoard strategies in secondary education and university teaching. Paper presented at the Annual Meeting of the American Educational Research Association, Seattle, WA, April 10–14, 2001. ERIC Document ED 452 271.

Smith, H. (2008). Assessing student contributions to online discussion boards. Pract. Res. High. Educ. 2, 22–28.

Spender, D. (1982). Invisible women: the schooling scandal. London: Writers and Readers.

Theriault, J. C. (2019). Exploring college students’ classroom participation: a case study of a developmental literacy classroom. J. Coll. Read. Learn. 49, 206–222. doi: 10.1080/10790195.2019.1638219

Xu, Y., and Qiu, X. (2022). Necessary but problematic: Chinese university English teachers’ perceptions and practices of assessing class participation. Teach. High. Educ. 27, 841–858. doi: 10.1080/13562517.2020.1747424

Yildirim, O. (2017). Class participation of international students in the U.S.A. Int. J. High. Educ. 6:94. doi: 10.5430/ijhe.v6n4p94

Keywords: assessing class participation, student engagement, online teaching and learning, technology-enabled assessment, higher education

Citation: Simon PD, Fryer LK and Nakao K (2024) Assessing class participation in physical and virtual spaces: current approaches and issues. Front. Educ. 8:1306568. doi: 10.3389/feduc.2023.1306568

Received: 04 October 2023; Accepted: 13 December 2023;
Published: 08 January 2024.

Edited by:

Diana Pereira, University of Minho, Portugal

Reviewed by:

Eva Fernandes, University of Minho, Portugal

Copyright © 2024 Simon, Fryer and Nakao. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Patricia D. Simon, psimon@hku.hk

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.