CURRICULUM, INSTRUCTION, AND PEDAGOGY article

Front. Educ., 22 September 2021
Sec. Educational Psychology
This article is part of the Research Topic Pedagogical Practices that Promote Student Communication, Problem-Solving and Learning in a Digital Age.

Digital Hardware for Peer Assessment in K-12 Schools and Universities

Keith J. Topping, School of Education, University of Dundee, Dundee, United Kingdom

Digital peer assessment (PA) is an arrangement for learners to consider and specify the level, value, or quality of a product or the performance of other equal-status learners, using computers, tablets, mobile phones or other devices, and the internet. Digital PA is of increasing relevance as more educational establishments move toward online or blended learning. It has been widely used for some time, not only in elementary (primary) and high (secondary) schools but also in higher education. This article first considers the purposes of PA and briefly discusses questions of effectiveness. The majority of the article then describes in general terms how to do it. A review is offered of variations in types of PA and the underpinning theory, both of which have practical implications irrespective of whether the PA is digital or face-to-face. The use of different kinds of digital hardware in different kinds of PA is then considered, followed by the social and emotional aspects of digital PA. As the contexts are so different, differences between primary school, high school, and higher education are reviewed. A conclusion summarises the strengths and weaknesses of digital PA, which can certainly be effective as a teaching and learning method and can enhance student communication, problem-solving, and self-confidence.

Introduction

Digital peer assessment is becoming much more common as more educational establishments switch to online and blended learning. However, many articles do not discriminate between primary school, secondary school, and university contexts, a distinction that is drawn here. The operation of face-to-face peer assessment is described and then related to the varieties of digital environments, which is again a new contribution to the literature. The social aspects of digital peer assessment can be lacking and need special care, and this is emphasised; this emphasis is a further novel contribution. Overall, digital peer assessment appears more effective than face-to-face peer assessment, but a number of operational issues must be addressed if it is to be effective, and these are outlined here.

Peer assessment has been widely used for some time, not only in elementary (primary) schools, high (secondary) schools, and higher education but also in a wide range of workplace scenarios. Here, we will use the abbreviation PA for peer assessment. This article is concerned with digital PA, taking place through computers, tablets, and mobile phones, which is of increasing relevance as more educational establishments move towards online or blended learning. First, a definition of PA is offered and its purposes are considered. Questions of effectiveness are then briefly discussed, after which the majority of the article addresses the question: how do you do it? A review is offered of variations in types of PA and the practical implications of the underpinning theory, irrespective of whether PA is digital or face-to-face. The use of different kinds of digital hardware in different kinds of PA is then considered, followed by the social and emotional aspects of these two formats. As the contexts are so different, differences between primary school, high school, and higher education are reviewed. A conclusion summarises the strengths and weaknesses of digital PA.

Definition of Peer Assessment

A widely quoted definition of peer assessment refers to an arrangement for learners to consider and specify the level, value, or quality of a product, or the performance of other equal-status learners (O’Donnell and Topping, 1998). Other similar terms (synonyms) are in the literature (e.g., peer grading/marking—giving a score to a peer product/performance; peer feedback—peers giving elaborated feedback; peer evaluation—happens more usually in workplaces regarding skill and knowledge; or peer review—happens more usually in academic institutions regarding the assessment of written articles). Of course, it is entirely possible to include both peer grading and peer feedback in peer assessment.

When we turn to digital PA, we mean an arrangement for learners to consider and specify the level, value, or quality of a product, or the performance of other equal-status learners using computers, tablets, and mobiles or other digital devices to store work, allocate peer assessors, store peer assessments, average multiple peer assessments of the same piece of work, manage communication between assessors and assessees, and possibly manage the whole procedure. Of course, digital PA might be wholly online or it might be blended, the latter involving some face-to-face contact (especially useful at the start to establish positive relationships and some sense of trust).

Purposes of Peer Assessment

PA is usually a type of formative assessment intended to improve the quality of student work, which is why many teachers encourage its use. This is particularly true when PA incorporates elaborated feedback from peers, from which the assessee must choose which aspects to implement in the final version that is submitted for summative assessment. For the teacher, PA is a relatively cost-effective way of improving the final products. For the assessor, the intellectual demands of reflecting, making a balanced assessment, and formulating and delivering feedback can all lead to learning gains (Yu, 2011). For the assessee, the intellectual demands of receiving and evaluating the feedback, deciding what aspects to implement and what not to implement, and reflecting on other issues prompted by the feedback but not contained within it can all lead to learning gains (Li et al., 2012).

In addition to the use of PA as an assessment tool generating learning gains, at least three other goals are evident: the active and interactive participation of all students in the classroom; practice and preparation for self-monitoring and self-regulation in education and, indeed, in subsequent employment; and a change in the nature of social control in the classroom (Gielen et al., 2011a).

PA is not just a means of managing the assessment burden of teachers by diverting some of that burden onto students. Teachers may be concerned about the reliability of PA but, in fact, when properly conducted, PA proves to be more reliable than teacher assessment, although teacher assessment itself is not very reliable (e.g., Harlen, 2005; Johnson, 2013). Thus, using correlation with instructor scores as an indicator of reliability is not advised; such correlation is better referred to as "consistency" between teacher and peer assessor scores (Domínguez et al., 2016).
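
To make the "consistency" indicator concrete, here is a minimal Python sketch. The scores are hypothetical (not drawn from any study cited here); it simply computes the correlation between teacher marks and averaged peer marks for the same pieces of work:

from statistics import correlation  # Pearson's r; available in Python 3.10+

# Hypothetical marks (0-100) for six assignments
teacher_scores = [72, 65, 88, 54, 91, 77]
peer_scores = [70, 68, 85, 58, 94, 73]  # each an average of several peer assessors

r = correlation(teacher_scores, peer_scores)
print(f"Teacher-peer consistency: r = {r:.2f}")

Note that a high r indicates only agreement with the teacher, not that either score is "correct", which is why "consistency" is the safer term.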

The Empirical Literature on Effectiveness

Peer assessment has similar effects at primary school, secondary school, and in higher education (Topping, 2017), but that does not mean it can be implemented in the same way in all these contexts. There have been many literature reviews of peer assessment, all of them positive, from the earliest reviews of face-to-face PA (e.g., Topping, 1998, on peer grades and feedback; Falchikov and Goldfinch, 2000, on peer grades) to the latest meta-analyses (Double et al., 2020; Li et al., 2020).

Li et al. (2020) found an overall effect size (ES) of 0.29 across 58 studies, which would be described as small to moderate (Cohen, 1992). However, the ES was larger when PA was computer-mediated (0.45) than when face-to-face (0.24), although it is not clear how many studies were computer-based. Training and the online/digital mode were significant moderator variables: more training led to greater effectiveness, and digital PA was more effective than face-to-face PA. Double et al. (2020) found an overall ES of 0.31 across 54 studies (again a small to moderate value), but no significant moderator variables. Unfortunately, in both these meta-analyses, studies from primary schools, secondary schools, and higher education were muddled together without any consideration of their radically different contexts. Additionally, studies which used digital technology were muddled with studies which did not.
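
For reference, the effect sizes quoted here are standardised mean differences (Cohen's d), for which Cohen (1992) offered the benchmarks 0.2 (small), 0.5 (medium), and 0.8 (large). In LaTeX notation, the standard formulation is:

d = \frac{\bar{X}_{PA} - \bar{X}_{control}}{s_{pooled}},
\qquad
s_{pooled} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}

On these benchmarks, the overall values of 0.29 and 0.31 fall between "small" and "medium", as described above.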

Zheng et al. (2020) meta-analysed 37 studies of computerised peer assessment from 1998 to 2018, again suggesting that digital PA was more effective than face-to-face PA, although problems remained regarding the educational sectors (only 8 of the 37 studies were conducted in schools). Technology-facilitated peer assessment had a significant and moderate mean ES (0.58) on learning achievement. The use of extra supporting strategies had a similar ES (0.54). Moderator variables such as training for assessors, duration, and grouping types were related to effect sizes.

Thus, it appeared that, overall, digital PA was more effective than face-to-face PA. Why should this be? Digital PA has the disadvantage that it does not usually allow the easy development of social trust between participants, although anonymity can be an advantage, especially early in the process. Issues of social trust might be temporarily suspended, pending the assessee's benchmarking of the quality of the PA. This could be enhanced by having more than one assessor, which becomes more feasible given the time saved in digital PA. Digital PA also permits asynchronous working, which participants may prefer and which may mean that they do PA work when they are more motivated and focused. However, we do not really know why digital PA seems to be more effective than face-to-face PA.

We will now turn from the overall measures of effectiveness to more subtle issues of the differences between different kinds of PA.

A Typology of Peer Assessment

Several previous studies compare two or three types of PA, but the variety of types of PA goes far beyond that. O’Donnell and Topping (1998) described a typology of relevant variables. Gielen et al. (2011b) offered a more developed inventory. Further developments came from Topping (2018), outlining 45 variables. These are listed in Table 1, and many of them relate to all types of PA, not just digital PA. Learning how to do PA in a face-to-face environment can be a valuable precursor to extending into the digital environment. Once you can answer all the questions implied in Table 1, you will have a good plan for your PA project. More details of these issues will be found in Topping (2018). But what processes should operate while PA is proceeding?

TABLE 1. Variations in peer assessment.

A Theoretical Framework of Peer Assessment

A description of many of the key PA processes can be found in the comprehensive and integrated theoretical framework proposed by Topping (2021) (see Figure 1). Obviously, PA needs to be well planned and organised. As assessors and assessees work together, they will experience some cognitive conflict (disagreement) (Piaget, 1926) and some scaffolding and support (Vygotsky, 1978), the balance between which leads to negotiated meanings through co-construction. Beyond this, PA offers greater individualisation and differentiation, leading to greater engagement. In addition to cognitive gains, social and emotional factors are activated, which might enhance motivation and positive feelings toward partners.

FIGURE 1. Theoretical model of peer assessment.

Certainly, the communication skills of all participants will be developed, and this will lead the assessor into prompting error management as they seek to help the assessee improve the work. PA also gives more practice (more than the teacher could ever offer), and this helps develop fluency with the task and with similar tasks. The assessor will give feedback, pointing out which parts are good and which parts might need improvement, although not all of this feedback will be accepted. From a single task, assessees learn to generalise to other similar tasks, and this aids their metacognition: their ability to think about how they think. This metacognition enables them to begin to self-monitor how they operate and to develop self-regulation (the ability to monitor and manage your energy, emotions, thoughts, and behaviours in ways that are acceptable and produce positive results, such as in learning). All these processes help develop self-confidence (or self-efficacy), not only in the assessee but also in the assessor. This leads to the application of these processes (whether implicitly or explicitly) at higher levels of learning and, indeed, in more distant types of learning.

Of course, not all these processes will occur, especially at the beginning of PA. However, with more practice, all these should develop, and the teacher will be monitoring to ensure they do emerge, prompting as necessary to aid the process. So, how do these processes operate in PA in digital environments? Now, let us consider the varieties of digital PA.

Varieties of Digital Peer Assessment

Access to devices and the internet is key to digital PA. In higher education, most students will have access to a computer and the internet, although for some this is only on campus, and when the campus is closed difficulties ensue. In primary and secondary schools, access to devices and an internet connection is more difficult within the school, and for many (particularly socioeconomically disadvantaged) students it is impossible outside it. Devices might include computers, tablets, mobile phones, and gaming systems. If the school is closed, internet availability might only be in a local library, other community centres, or in coffee shops or other commercial establishments. Mobile phones are more readily available, but internet access is costly and the screen size is very small. Teachers need to survey student access to devices and the internet in order to determine what is possible.

Computers might not be necessary if students have iPads or other tablets at home. However, problems can arise in the transition from a PC to a Mac, and vice versa, and in the transition from a computer in school to a tablet outside it. Another kind of device is the personal digital assistant (PDA), although usually this would need to be provided to the students. This kind of device is very small and easily portable and lends itself to being taken on field trips (perhaps where one scout does the trip and reports back to others in the classroom). Mobile phones can also be used for this purpose and offer the possibility of sending video back. Beyond this, there is considerable variety in the forms of PA.

World Wide Web-Based. Having peer assessors and assessees meet online in web applications at a time of their choice (whether synchronously or asynchronously) gets round the problems of personal availability. However, asynchronous access can lead to procrastination in under-motivated or anxious assessees. Another advantage of web-based PA is that it can easily be made anonymous, so students will not know who is assessing them, and this may encourage responses that are somewhat more critical than they would otherwise be, at least in the early stages. Of course, web-based PA can be done locally (within one institution) or much more remotely (as in connecting students from different countries learning each other’s language). The kind of work assessed can vary: research proposals, teaching materials for peer tutoring, web-based case conferencing, and project work of various kinds, as well as the more standard written assignments. However, developing trust with someone you may never have met and who remains anonymous can be a challenge, so social- and trust-building activities are often built into initial training, and/or anonymous peer assessment is used only for the first stages of the project (Castle and McGuire, 2010). Web-based PA can be sustained over time with the same partners, or the partners can be alternated to create a wider social nexus and give them broader experience.

Digital Software to Organise/Structure PA. When applied to large classes, as in higher education or even secondary education, PA can become difficult to manage. Technological software tools are available to organise, structure, and support PA (Luxton-Reilly, 2009), such as Expertiza (http://wiki.expertiza.ncsu.edu), PeerScholar (https://doc.peerscholar.com), and PeerGrade (https://www.peergrade.io). Such systems for managing PA on a large scale allocate assessors to assessees, collect the assessments, and average out assessments in small groups of students who mutually assess each other.
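
The core allocation-and-averaging logic that such tools automate can be illustrated with a minimal Python sketch. The round-robin scheme and all names below are illustrative assumptions, not the algorithm of any particular tool:

import random

def allocate_assessors(students, assessors_per_work=3, seed=42):
    """Round-robin allocation over a shuffled roster: each piece of work
    receives `assessors_per_work` peer assessors, each student assesses
    the same number of works, and nobody is assigned their own work.
    Assumes len(students) > assessors_per_work."""
    roster = list(students)
    random.Random(seed).shuffle(roster)  # shuffle so pairings are unpredictable
    n = len(roster)
    return {assessee: [roster[(i + k) % n] for k in range(1, assessors_per_work + 1)]
            for i, assessee in enumerate(roster)}

def average_peer_scores(scores_by_assessee):
    """Average the multiple peer scores received for each piece of work."""
    return {assessee: sum(scores) / len(scores)
            for assessee, scores in scores_by_assessee.items()}

# Hypothetical class of five students, three assessors per piece of work
allocation = allocate_assessors(["Ana", "Ben", "Caro", "Dev", "Ema"])
print(allocation)

# Hypothetical collected marks (0-100) from each assessee's three assessors
print(average_peer_scores({"Ana": [78, 82, 75], "Ben": [64, 70, 66]}))

Real systems add refinements (e.g., balancing by topic or handling dropouts), but the core idea is the one described above: several assessors per piece of work, with their scores averaged to improve reliability.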

Video. Much PA uses video: for example, students may make self-videos of consecutive rehearsal attempts at delivering a presentation and have them formatively peer assessed, before delivering the presentation in real time and having it finally peer assessed. This can, of course, easily be done on a mobile phone. Alternatively, videos can be imported and critiqued (e.g., from YouTube), and systems can be made available for tagging such videos, asking leading questions at key points and requiring a response.

E-Portfolios. When students are encouraged to keep their work in e-portfolios, it creates an ideal opportunity for PA. It is very easy for a teacher to select a developing piece of work, arrange PA, and then see if the work has been improved as a result. This can be done with all kinds of work kept in an e-portfolio.

Social Media. Facebook and other social media can be used as effective platforms for sharing and PA of written work, videos, or pictures. Of course, students may use these platforms whether or not the teacher suggests it. Social media platforms which permit verbal conversation can also be used to develop foreign language skills. A problem schools face is that social media may be forbidden in school under local policies; outside of school, however, it is another story, and in that context teachers may still encourage students to use tools with which they are more familiar.

Wikis. Wikis are hypertext publication websites which are collaboratively edited and managed by their own audience. Where these are used to co-construct a piece of writing (perhaps with illustrations), they lend themselves to multiple iterations of PA. An example would be an older group constructing an illustrated reading book for a younger group; done multiple times, this would create a mini-library for the younger group and considerable project activity for the older group. It would also be possible to peer assess blogs.

Massive Open Online Courses (MOOCs). MOOCs are courses freely available over the internet, usually from universities, although some are relevant to younger students. A problem with MOOCs is that student retention is very low, so only a small proportion of enrolees finish the course. Another problem is that of assessment, which very often takes some form of PA. However, even students who finish the course do not always participate in PA. Some MOOCs have tried to build in supports and prompts for students regarding PA, with varying success. Anat et al. (2020) turned this on its head and investigated student creation of parts of a MOOC, with positive effects.

Whatever kind of digital PA is used, one of the problems that arises is the establishment of trustworthy relationships prior to the activity.

Social Aspects of Digital PA

Van Gennip et al. (2009) conducted a systematic literature review on the social aspects of PA, noting that it was a highly interpersonal and interactional process. However, only four studies investigated interpersonal variables, and these variables were not used to explain the resulting learning gains; there seemed to be no relation to the occurrence of learning benefits.

The social context of online learning is, indeed, very different, especially when participants have no previous face-to-face experience. Van Popta et al. (2017) argued that while PA cognitive processes may be somewhat similar online and offline, social processes are likely to differ. McLuckie and Topping (2004) compared the social, organisational, and cognitive characteristics of effective peer learning interactions in face-to-face and online environments. In online PA, Cheng and Tsai (2012) found that higher psychological safety, lower value diversity for goals, more trust in the self as an assessor, and more positive social interdependence yielded deeper approaches to learning.

One feature of online PA is its affordance of anonymity, which has both advantages and disadvantages (Li, 2017). Anonymity may be more important as PA starts, when social insecurity is at its height, but later it may be less desirable. Further, in the online mode it may be easier to build in more consistent methods of scaffolding (Hou et al., 2020), although whether students use these is another issue. Cultural differences regarding the acceptability of PA are another problem (Yu and Lee, 2016): students in countries where the predominant form of education is teacher-directed and does not encourage independent thought may not like PA.

So, how do primary and secondary schools and universities and colleges differ with regard to digital PA? Even primary and secondary schools are very different from each other, let alone from universities and colleges (Michaelowa, 2007; Blatchford et al., 2011).

Differences Between Sectors

Primary. Primary schools (especially smaller ones) tend to have a few computers (of various vintages) in each classroom, so at least these are geographically available in principle, even if students have to queue for them. Primary teachers may indeed be generalists used to teaching many subjects, but relatively few would regard themselves as experts in digital technology. Where a large primary school has an information technology “expert” teacher, it is in a much stronger position, but only large schools will have this facility. In any case, the digital domain is expanding so rapidly that even the most expert teacher will have difficulty keeping up. Turning to devices and internet accessibility at home: while small and relatively advantaged families are likely to have a computer or tablet with an internet connection and time to support children in using these, larger and less advantaged families are likely to have problems. The government has tried to supply devices to such families, but the supply is much less than the need, and internet connectivity remains a major problem. Students might try to use the local library or other community centre facilities, but this might not be pleasant in the winter.

Secondary. Secondary schools tend to have computer laboratories which need to be booked, so there are more computers but not necessarily available at the point of need. Many of the same constraints regarding digital access at home apply here also. Secondary teachers will usually be subject specialists, expert in their subject but not that well trained in pedagogical methods, let alone digital technology. However, secondary schools are likely to have at least one digital “expert”, although, here again, the constraints of maintaining expertise in a rapidly changing world are considerable.

University and College. Here we are dealing with a more selected and more advantaged population, who while on campus will have easier access to many computer stations and internet connectivity, especially as university libraries have become more digital. In their term-time residences this might not be so easy, and disadvantaged students might not have devices or connectivity. At home, disadvantaged students will again have more problems, and, of course, disadvantaged students are more likely to continue to live at home and attend the local university. The digital revolution might increasingly militate against disadvantaged students. University teachers are again subject specialists, often without significant training in pedagogy, let alone digital technology. In-service training in universities tends to be centralised, with limited support for in-department and individual needs, so departments that have their own “experts” are lucky.

So, the contexts are indeed very different in terms of the availability of devices and the internet. Beyond this is the issue of the teacher's familiarity with digital technology, both hardware and software, which is likely to vary between institutions and (especially) between teachers. Even teachers who regard themselves as “up to date” can easily be overwhelmed by the torrent of new devices and applications, while students who regard themselves as “digital natives” might be shocked that their knowledge of social media does not automatically generalise to other applications. Indeed, they might need to learn a different form of language and netiquette for the purpose of PA.

Conclusion

Interest in digital PA is certainly growing as a necessary parallel to more traditional means of assessment. Digital PA has advantages such as ease of operation with very large classes and the possibility of anonymous PA for students who are initially concerned about giving negative feedback to known associates. Of course, digital PA does not include the face-to-face contact that develops trust between assessors and assessees, so efforts have to be made to inject activities that promote social and emotional bonding. Additionally, applications go far beyond written products and presentations in basic academic skills, extending to physical and artistic skills such as football, art appreciation, learning to play a musical instrument, and even music composition. The present article has reviewed the state of the art in primary schools, secondary schools, and higher education. Obviously, these are very different contexts for implementation, but as primary and secondary schools and higher education increasingly need to switch to online learning as a result of pandemic lockdowns, digital PA becomes highly relevant to all of them.

PA is used in a wide variety of subjects. Two factors are strongly emphasised: 1) the need to co-design explicit criteria with students, and 2) the important role of self-efficacy. Beyond this, there is much emphasis on the importance of training, which could include modelling or observation and should include practice. Rubrics are frequently mentioned (closely connected to co-designed criteria). The number of assessors is important: several are better than one, but not too many, so as to moderate student workload. Psychological safety is important. Trust has to be developed between students and teachers and between students and students, and this is obviously linked to the issue of self-efficacy.

PA emphasises the importance of pupils becoming critical and creative thinkers, effective communicators, thoughtful problem-solvers, and collaborative team workers: all essential transferable skills for future employment and life. Although the literature is now quite extensive, PA is not as widely used as one might expect, given that it appears to transfer the burden of assessment, at least partly, to the student and thus lighten the load of the teacher. Some schools have a whole-school policy on peer learning, which may well include PA, but these are in the minority. In the university sector, while PA might feature in a few departments or subjects in many universities, whole-university approaches are much less common.

Now that we are more aware of the conditions necessary to make these interventions work, we are in a stronger position to increase their take-up. The framework of the theory (above) gives many clues as to how this might be done, both in terms of ensuring that the required conditions are met, and in terms of ensuring that the correct planning decisions (of the options available) are made. A more detailed account of how to implement digital peer assessment is available in Topping (2018), with freely available resources at www.routledge.com/9780815367659.

Data Availability Statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Anat, K., Einav, K., and Shirley, R. (2020). Development of mathematics trainee teachers' knowledge while creating a MOOC. Int. J. Math. Edu. Sci. Tech. 51 (6), 939–953. doi:10.1080/0020739X.2019.1688402

Blatchford, P., Bassett, P., and Brown, P. (2011). Examining the effect of class size on classroom engagement and teacher-pupil interaction: Differences in relation to pupil prior attainment and primary vs. secondary schools. Learn. Instruction 21 (6), 715–730. doi:10.1016/j.learninstruc.2011.04.001

Castle, S. R., and McGuire, C. J. (2010). An analysis of student self-assessment of online, blended, and face-to-face learning environments: Implications for sustainable education delivery. Int. Edu. Stud. 3 (3), 36–40. doi:10.5539/ies.v3n3p36

Cheng, K. H., and Tsai, C. C. (2012). Students’ interpersonal perspectives on, conceptions of and approaches to learning in online peer assessment. Australas. J. Educ. Tech. 28 (4). doi:10.14742/ajet.830

Cohen, J. (1992). A power primer. Psychol. Bull. 112 (1), 155–159. doi:10.1037//0033-2909.112.1.155

Domínguez, C., Jaime, A., Sánchez, A., Blanco, J. M., and Heras, J. (2016). A comparative analysis of the consistency and difference among online self-, peer-, external- and instructor-assessments: The competitive effect. Comput. Hum. Behav. 60, 112–120. doi:10.1016/j.chb.2016.02.061

Double, K. S., McGrane, J. A., and Hopfenbeck, T. N. (2020). The impact of peer assessment on academic performance: A meta-analysis of control Group studies. Educ. Psychol. Rev. 32, 481–509. doi:10.1007/s10648-019-09510-3

Falchikov, N., and Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Rev. Educ. Res. 70 (3), 287–322. doi:10.3102/00346543070003287

Gielen, S., Dochy, F., Onghena, P., Struyven, K., and Smeets, S. (2011a). Goals of peer assessment and their associated quality concepts. Stud. Higher Edu. 36 (6), 719–735. doi:10.1080/03075071003759037

Gielen, S., Dochy, F., and Onghena, P. (2011b). An inventory of peer assessment diversity. Assess. Eval. Higher Edu. 36 (2), 137–155. doi:10.1080/02602930903221444

Harlen, W. (2005). Trusting teachers' judgement: research evidence of the reliability and validity of teachers' assessment used for summative purposes. Res. Pap. Edu. 20 (3), 245–270. doi:10.1080/02671520500193744

Hou, H.-T., Yu, T.-F., Chiang, F.-D., Lin, Y.-H., Chang, K.-E., and Kuo, C.-C. (2020). Development and Evaluation of Mindtool-Based Blogs to Promote Learners' Higher Order Cognitive Thinking in Online Discussions: An Analysis of Learning Effects and Cognitive Process. J. Educ. Comput. Res. 58 (2), 343–363. doi:10.1177/0735633119830735

Johnson, S. (2013). On the reliability of high-stakes teacher assessment. Res. Pap. Edu. 28 (1), 91–105. doi:10.1080/02671522.2012.754229

Li, H., Xiong, Y., Hunter, C. V., Guo, X., and Tywoniw, R. (2020). Does peer assessment promote student learning? A meta-analysis. Assess. Eval. Higher Edu. 45 (2), 193–211. doi:10.1080/02602938.2019.1620679

Li, L., Liu, X., and Zhou, Y. (2012). Give and take: A re-analysis of assessor and assessee's roles in technology-facilitated peer assessment. Br. J. Educ. Tech. 43 (3), 376–384. doi:10.1111/j.1467-8535.2011.01180.x

Li, L. (2017). The role of anonymity in peer assessment. Assess. Eval. Higher Edu. 42 (4), 645–656. doi:10.1080/02602938.2016.1174766

Luxton-Reilly, A. (2009). A systematic review of tools that support peer assessment. Comp. Sci. Edu. 19 (4), 209–232. doi:10.1080/08993400903384844

McLuckie, J., and Topping, K. J. (2004). Transferable skills for online peer learning. Assess. Eval. Higher Edu. 29 (5), 563–584. doi:10.1080/02602930410001689144

Michaelowa, K. (2007). The impact of primary and secondary education on higher education quality. Qual. Assur. Edu. 15 (2), 215–236. doi:10.1108/09684880710748956

O’Donnell, A. M., and Topping, K. J. (1998). “Peers assessing peers: Possibilities and problems,” in Peer-assisted learning. Editors K. Topping, and S. Ehly (Mahwah, NJ: Lawrence Erlbaum).

Piaget, J. (1926). The language and thought of the child. San Diego, CA: Harcourt Brace.

Topping, K. J. (2018). Using peer assessment to inspire reflection and learning. Student assessment for educators series, Editor J. H. MacMillan. New York & London: Routledge. www.routledge.com/9780815367659

Topping, K. J. (1998). Peer assessment between students in college and university. Rev. Educ. Res. 68 (3), 249–276. doi:10.3102/00346543068003249

Topping, K. J. (2021). Peer assessment: Channels of operation. Education Sciences, special issue on Cooperative/Collaborative Learning, Guest Editor R. M. Gillies (in press).

Topping, K. J. (2017). Peer assessment: Learning by judging and discussing the work of other learners. J. Interdiscip. Edu. Psychol. 1 (1), 7. Retrieved from http://riverapublications.com/assets/files/pdf_files/peer-assessment-learning-by-judging-and-discussing-the-work-of-other-learners.pdf.

Van Gennip, N. A. E., Segers, M. S. R., and Tillema, H. H. (2009). Peer assessment for learning from a social perspective: The influence of interpersonal variables and structural features. Educ. Res. Rev. 4, 41–54. doi:10.1016/j.edurev.2008.11.002

Van Popta, E., Kral, M., Camp, G., Martens, R. L., and Simons, P. R.-J. (2017). Exploring the value of peer feedback in online learning for the provider. Educ. Res. Rev. 20, 24–34. doi:10.1016/j.edurev.2016.10.003

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Editors M. Cole, V. John-Steiner, S. Scribner, and E. Souberman. Cambridge, MA: MIT Press.

Yu, F.-Y. (2011). Multiple peer-assessment modes to augment online student question-Generation processes. Comput. Edu. 56 (2), 484–494. doi:10.1016/j.compedu.2010.08.025

Yu, S., and Lee, I. (2016). Peer feedback in second language writing (2005-2014). Lang. Teach. 49 (4), 461–493. doi:10.1017/S0261444816000161

Zheng, L., Zhang, X., and Cui, P. (2020). The role of technology-facilitated peer assessment and supporting strategies: A meta-analysis. Assess. Eval. Higher Edu. 45 (3), 372–386. doi:10.1080/02602938.2019.1644603

Keywords: peer assessment, digital, primary school, secondary school, university

Citation: Topping KJ (2021) Digital Hardware for Peer Assessment in K-12 Schools and Universities. Front. Educ. 6:666538. doi: 10.3389/feduc.2021.666538

Received: 10 February 2021; Accepted: 05 August 2021;
Published: 22 September 2021.

Edited by:

Robyn M. Gillies, The University of Queensland, Australia

Reviewed by:

Frans Prins, Utrecht University, Netherlands
Dmytro Babik, James Madison University, United States
Chun-Ping Wu, National University of Tainan, Taiwan

Copyright © 2021 Topping. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Keith James Topping, k.j.topping@dundee.ac.uk
