
MINI REVIEW article

Front. Educ., 03 March 2022
Sec. Assessment, Testing and Applied Measurement
This article is part of the Research Topic Validity, Reliability and Efficiency of Comparative Judgement to Assess Student Work

A Review of the Valid Methodological Use of Adaptive Comparative Judgment in Technology Education Research

Jeffrey Buckley1,2*, Niall Seery1 and Richard Kimbell3
  • 1Faculty of Engineering and Informatics, Technological University of the Shannon: Midlands Midwest, Athlone, Ireland
  • 2Department of Learning, KTH Royal Institute of Technology, Stockholm, Sweden
  • 3Goldsmiths, University of London, London, United Kingdom

There is a continuing rise in studies examining the impact that adaptive comparative judgment (ACJ) can have on practice in technology education. This appears to stem from ACJ being seen to offer a solution to the difficulties faced in the assessment of designerly activity which is prominent in contemporary technology education internationally. Central research questions to date have focused on whether ACJ was feasible, reliable, and offered broad educational merit. With exploratory evidence indicating this to be the case, there is now a need to progress this research agenda in a more systematic fashion. To support this, a critical review of how ACJ has been used and studied in prior work was conducted. The findings are presented thematically and suggest the existence of internal validity threats in prior research, the need for a theoretical framework and the consideration of falsifiability, and the need to justify and make transparent methodological and analytical procedures. Research questions now of pertinent importance are presented, and it is envisioned that the observations made through this review will support the design of future inquiry.

Introduction

Technology education is relatively new to national curricula at primary and secondary levels in comparison to subjects such as mathematics, the natural sciences, and modern and classical languages. Broadly, technology education relates to subjects focused on thinking and teaching about technology (de Vries, 2016), with subjects taking different formats internationally (cf., Buckley et al., 2020b). For example, in Ireland there are four technology subjects at lower secondary level and four at upper secondary level. In contrast, in England the single subject of Design and Technology is offered at Key Stages 1, 2, and 3, spanning primary and lower secondary education. A central feature of contemporary technology education is an emphasis on “nurturing the designerly” (Stables, 2008; Milne, 2013). Design tasks are therefore prominent within the technology classroom, the outcome of which is usually a portfolio of work and an accompanying artifact which evidence the process and product of learning. While these portfolios, in response to the same activity, can vary widely in length, content, and content type, it would be typical to see progression from initial sketches and notes representing “hazy ideas,” through stages of idea refinement, to the technical presentation of a final proposed solution (e.g., Kimbell et al., 2009; Seery et al., 2012).

With pedagogical approaches in technology education growing in empirical support (cf., McLain, 2018, 2021), the integration of design has been problematized from the perspective of constructive alignment (Buckley et al., 2020b). A critical challenge remains in how, given the variety of ways through which technology learners can demonstrate capability (Kimbell, 2011), such as through varied portfolios, educators can validly and reliably assess open-ended, designerly outputs without imposing an assessment architecture which infringes on the validity and meaningfulness of the associated learning processes. Comparative judgment (CJ), particularly adaptive comparative judgment (ACJ), is presented within the pertinent literature as a promising solution to this disciplinary problem. The process of ACJ is described in detail by Hartell and Buckley (2021), but in brief it involves a cohort of assessors, typically referred to as “judges,” who individually make holistic pairwise comparisons of digital or digitized representations of the student work being assessed, i.e., portfolios (Kimbell, 2012; Pollitt, 2012a,b). Over a series of rounds, judges make value-laden, binary judgments on portfolios which are selected for comparison by an adaptive sorting algorithm (Canty, 2012). Ultimately, this results in a rank order from “best” to “worst” with relative differences presented as parameter values. The attributes which position ACJ as a solution to the assessment of designerly outputs are that the rank order is derived through a consensus of the judging cohort, which has been shown to be highly reliable, and that it mitigates issues with traditional criterion-referenced assessment stemming from rubrics, which can lack content validity and be difficult to implement reliably (Sadler, 2009).
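To make the scaling step concrete, the following is a minimal sketch, assuming a simple Bradley-Terry model fitted with an iterative minorization-maximization update, of how parameter values and a rank order can be estimated from a set of binary pairwise judgments. It is an illustrative toy rather than the ACJ engine used in the cited studies, and it omits the adaptive pair selection and misfit statistics described above; all function and variable names are invented for illustration.

```python
import numpy as np

def pairwise_parameter_values(judgments, n_portfolios, n_iter=500, tol=1e-9):
    """Toy Bradley-Terry scaling of binary pairwise judgments.

    judgments: iterable of (winner_index, loser_index) tuples.
    Returns logit-scale parameter values and a rank order (best first).
    """
    # Count how often portfolio i was preferred over portfolio j, with a small
    # symmetric pseudo-count so portfolios that never win (or never lose) keep
    # finite estimates.
    wins = np.full((n_portfolios, n_portfolios), 0.1)
    np.fill_diagonal(wins, 0.0)
    for winner, loser in judgments:
        wins[winner, loser] += 1

    strengths = np.ones(n_portfolios)
    for _ in range(n_iter):
        updated = np.empty(n_portfolios)
        for i in range(n_portfolios):
            n_met = wins[i, :] + wins[:, i]               # comparisons between i and each j
            terms = n_met / (strengths[i] + strengths)    # MM update denominator terms
            terms[i] = 0.0
            updated[i] = wins[i].sum() / terms.sum()
        updated /= updated.sum()                          # fix the arbitrary scale
        if np.max(np.abs(updated - strengths)) < tol:
            strengths = updated
            break
        strengths = updated

    parameter_values = np.log(strengths)                  # relative, logit-scale values
    rank_order = np.argsort(-parameter_values)            # best to worst
    return parameter_values, rank_order

# Hypothetical judgments on four portfolios (0-3); portfolio 0 is mostly preferred.
example_judgments = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (0, 1), (2, 1)]
params, rank = pairwise_parameter_values(example_judgments, n_portfolios=4)
print(rank, np.round(params, 2))
```

In an adaptive system the pairs would not be drawn at random but selected each round to maximize the information gained; the scaling idea, however, remains the same.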

Research on the use of ACJ in technology education is rising continuously (Bartholomew and Jones, 2021). However, the research questions which are investigated tend to be broad and relate to whether ACJ is feasible and whether it is appropriate and reliable in the assessment of designerly outputs. The resounding answer to these questions is “yes.” ACJ has been shown to be highly reliable in each relevant study which presents reliability statistics (Kimbell, 2012; Bartholomew and Yoshikawa-Ruesch, 2018; Bartholomew and Jones, 2021) and its validity can be seen as tied to the assessors (Buckley et al., 2020a; Hartell and Buckley, 2021), with outputted misfit statistics being useful to audit or gain insight into outlying judges or portfolios (Canty, 2012). While many of the conducted studies have taken the form of mechanistic, efficacy, and effectiveness studies through the use of correlational and experimental designs, the research has largely been exploratory due to the lack of a theoretical framing regarding the place of ACJ within the technology classroom. Further, while in this research ACJ is examined as an assessment instrument, it is simultaneously used as a research instrument in the collection of original data. This overlap in purpose has introduced noteworthy limitations and validity threats: because ACJ is a complex system, it is difficult to attribute any improvement in educational outcomes to a specific element of a study. Given that ACJ can be used to assess designerly learning, and that the existing exploratory evidence indicates educational benefit, there is now a need to progress this research agenda in a more rigorous and systematic fashion.

With a view toward advancing this agenda, this article presents a review of existing ACJ studies relating to technology education. The intent is to highlight aspects of this area of scholarship which require methodological refinement, to guide the design of future studies, and to pose critical research questions stemming from existing evidence which are of immediate importance. This is of particular significance to technology education as ACJ has developed technologically to the point where it is becoming more frequently adopted in research and practice for both formative (Dewit et al., 2021) and summative purposes (Newhouse, 2014). Further, the agenda to “evolve” the use of ACJ for national assessment in technology education has been laid out by Kimbell (2012), and if this is to be successful the underpinning evidence base needs to be robust.

Two useful systematized reviews have already been conducted by Bartholomew and Yoshikawa-Ruesch (2018) and Bartholomew and Jones (2021) with the aim of consolidating the pertinent evidence. Using the search outcomes of these two reviews, a combined total of 38 articles (see Supplementary Table 1 for details), a qualitative review and synthesis was conducted, the outcomes of which are presented thematically in the following sections. Unlike the prior reviews, which have been valuable in summarizing the outcomes of ACJ investigations, this paper presents a critical review of limitations in how ACJ has been investigated (cf., Grant and Booth, 2009). A critical review does not necessarily include a systematic search process, although the articles reviewed here result from two (Bartholomew and Yoshikawa-Ruesch, 2018; Bartholomew and Jones, 2021). The intent of a critical review is to “take stock” of the value of prior contributions through critique. Critical reviews do not intend to provide solutions, but rather questions and guidance which may “provide a ‘launch pad’ for a new phase of conceptual development and subsequent ‘testing’” (Grant and Booth, 2009, p. 93). The review process involved a thorough examination of each sampled article in terms of the alignment and appropriateness of the presented aims and/or research questions, methodological approaches, data analysis, and conclusions drawn. Any limitations identified were then conceptually grouped into “themes” through a process of pattern coding (Saldaña, 2013). The themes are presented, not as an exhaustive critique of each reviewed article, but as summaries with descriptions and exemplars.

Themes Relating to Areas for Improvement in ACJ Scholarship in Technology Education Research

Theme 1: Validity Threats Through Making Inference Beyond What the Generated Evidence Can Support

The validity of ACJ as an assessment instrument is frequently commented on. What is often not discussed is the validity of the use of ACJ in research studies and the associated validity threats. Due to the ethical implications of randomized control trials in denying students access to what researchers believe to be impactful for their learning (De Maeyer, 2021), much ACJ research in technology education is quasi-experimental. Inferences, however, are often made from this research which such a methodology cannot support. To take one example, Bartholomew et al. (2019a) present a quasi-experiment where, at the mid-way point of a design project, each student in an experimental group made 17 judgments using ACJ on their peers’ work, while a control group engaged with a peer-sharing activity reflective of traditional practice. At the end of the study, all portfolios were combined into a single ACJ assessment session, but only the teacher and the experimental group students acted as judges. The authors observed a significant difference in that the experimental group on average outperformed the control group and concluded that “our analysis suggests that students who participate in ACJ in the midst of a design assignment reach significantly better levels of achievement than students who do not” (p. 375). However, the inference that ACJ could be causal is not supported. The effect, for example, could have come from the experimental group simply being exposed at the mid-way point to a greater volume of examples (an exposure effect), from having to judge the quality of or critique peer work (a judgment effect), or, as only the experimental group assessed all work at the end, from judges favoring familiar work (a recognition effect). Subsequent work addressed many of these limitations by mitigating the possible recognition effect (Bartholomew et al., 2020a), and the on-going “Learning by Evaluating” project (Bartholomew and Mentzer, 2021) is actively pursuing the identification of the specific effects which can stem from ACJ, a need commented on further in Theme 2. A related issue comes from Newhouse (2014), where a cohort of judges noted that the digitized work presented in the ACJ session was a poor representation of the actual student work. One assessor commented on how the poor quality of some photographs made it more difficult to see faults which were easier to see in real life. This comment raises an important issue which is not regularly discussed: the use of ACJ may be valid from a process perspective, but if the portfolios are not accurate representations of the students’ learning or capability, the outcome of the ACJ session may be invalid. Through the review there were multiple examples where authors made inferences or suggestions which they could not support based on the described study. This is not to say that the studies themselves lacked value or contribution, but it is important not to infer beyond what an implemented methodology can substantiate.

Theme 2: Theoretical Framing to Define the Many Elements of Adaptive Comparative Judgment

Extending on the previous theme, nearly all studies where ACJ was used as an intervention and a positive effect was reported attributed the effect to ACJ as a whole. In these studies, ACJ is often used by students in a way that supports their learning (e.g., Bartholomew et al., 2019a; Seery et al., 2019). There is a need to move beyond this broad inference. The use of ACJ could offer educational benefit when learners act as judges through exposure to the work of peers, through having to critique and compare the quality of work, through having to explicate comments justifying a decision, or through a combination of these. The research needs to move to a stage of identifying the activity which carries the educational benefit if it is to make a more significant contribution to knowledge. Further, all of these activities can be conducted without an ACJ system in a classroom. Educators could organize activities where learners are exposed to, compare, and constructively critique the work of their peers outside of an ACJ software solution. The pedagogical benefits of the activities inherent to ACJ could be more easily transferred to classrooms if the focus of ACJ research was on defining the important processes rather than the broad benefit of the system holistically when used for learning.

The need to investigate the nuances of ACJ makes the need for a theoretical framework for ACJ apparent, and such a framework would need to consider the intended purpose of ACJ, i.e., assessment as, for, or of learning. Related concepts merit further definition, in particular “time” and “criteria.” Many studies examine the efficiency of ACJ in comparison to traditional assessment practices (Rowsome et al., 2013; Bartholomew et al., 2018a, 2020b; Zhang, 2019); however, for ACJ, time is usually considered only in terms of total or average judging time. There is a need to consider any set-up or training time to give a truer reflection of the impact ACJ could have on practice, and any comparison would equally need to consider the time educators put into developing rubrics and the effect of repeat usage, as illustrated in the sketch below. Similarly, many studies aim to determine judging criteria (Rowsome et al., 2013; Buckley et al., 2020a), but to understand the implications of such work, a theoretical framework which identifies whether criteria operate at the level of a topic, a task, or an individual judgment needs to be developed.
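As an illustration of the fuller time accounting argued for here, the following sketch compares total assessor time for a single ACJ session against a rubric-based alternative, including set-up, judge training, and rubric development. All function names and figures are hypothetical placeholders rather than empirical estimates; in repeated use, the one-off overheads on both sides would be amortized across sessions, shifting the balance.

```python
def total_acj_minutes(n_judges, judgments_per_judge, minutes_per_judgment,
                      setup_minutes, training_minutes_per_judge):
    """Total assessor time for one ACJ session, including set-up and judge training."""
    judging = n_judges * judgments_per_judge * minutes_per_judgment
    training = n_judges * training_minutes_per_judge
    return setup_minutes + training + judging

def total_rubric_minutes(n_portfolios, minutes_per_portfolio, rubric_development_minutes):
    """Total assessor time for rubric-based marking, including rubric development."""
    return rubric_development_minutes + n_portfolios * minutes_per_portfolio

# Hypothetical scenario: 30 portfolios, 6 judges each making 25 judgments.
acj_total = total_acj_minutes(n_judges=6, judgments_per_judge=25, minutes_per_judgment=3,
                              setup_minutes=60, training_minutes_per_judge=20)
rubric_total = total_rubric_minutes(n_portfolios=30, minutes_per_portfolio=12,
                                    rubric_development_minutes=120)
print(acj_total, rubric_total)  # 630 vs. 480 minutes in this invented scenario
```

The point of such an accounting is not the particular numbers, which are invented, but that efficiency claims change depending on which costs are counted and over how many uses they are spread.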

Theme 3: Validity in the Determination of Validity

The need for a theoretical framework for ACJ also encompasses the need to determine how claims can be falsified. Given the strength of evidence illustrating that ACJ is reliable, many efforts have turned to the valid use of ACJ. Specifically, the question is presented as to whether ACJ is a more valid alternative to traditional criterion-referenced assessment in the assessment of designerly student work. The validity of the rank can be assumed if (1) the cohort of judges is determined to be appropriate, i.e., the rank is a valid representation of their consensus, and (2) judgments are based on reasoned decisions, i.e., judges take the task seriously and there are no technical errors (Buckley et al., 2020a). The first assumption is a matter of judge selection. For the second, Canty (2012) describes how misfit statistics can be used to identify outlier judges who, importantly, could have made reasoned judgments but hold a different view of capability or learning than the majority of the cohort. Multiple studies use correlations between an ACJ rank and grades generated through the use of traditional rubrics as a measure of validity (Canty, 2012; Seery et al., 2012; Bartholomew et al., 2018a,b, 2019b; Strimel et al., 2021). While not explicit in these studies, the implicit suggestion is that the hypothesis that ACJ offers a valid measure of assessment could be falsified if non-significant or negative correlations were observed. Where a study begins with a critique of rubrics, the issue is that the validity of ACJ is then being determined by how closely it can reproduce the grades of the very tool to which it is presented as the better alternative (e.g., Seery et al., 2012). This is further compounded by concerns regarding the content validity of rubrics for the assessment of design learning and regarding who the assessors are. For example, the correlation between ACJ and rubric generated ranks when both are generated by experts has a very different meaning than if one rank comes from students. If the rubrics used are not subject to such critique and are determined to be valid, this application is not necessarily problematic.
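In practice, the validity evidence described above typically takes the form of a rank-order correlation between the two sets of scores. A minimal sketch, using invented data, is shown below; a high coefficient only indicates agreement with the rubric-generated grades, which is informative only to the extent that those grades are themselves defensible.

```python
from scipy.stats import spearmanr

# Invented data: ACJ parameter values and rubric grades for the same ten portfolios.
acj_parameter_values = [2.1, 1.4, 1.1, 0.6, 0.2, -0.1, -0.5, -0.9, -1.6, -2.3]
rubric_grades = [88, 92, 75, 70, 74, 65, 60, 62, 48, 40]

rho, p_value = spearmanr(acj_parameter_values, rubric_grades)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```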

Theme 4: There Is a Need to Justify Approaches to Statistical Data Analysis

A pedagogically useful attribute of ACJ stems from the parameter values within the final rank of portfolios. These follow a cubic function (Kimbell et al., 2007; Kimbell, 2012) and offer insight into relative performance between portfolios. This is commonly noted as a significant benefit of ACJ (Bartholomew et al., 2020b; Buckley et al., 2020a) and its potential was demonstrated by Seery et al. (2019), where parameter values were transposed into student grades. However, despite articles claiming the benefit of parameter values over the rank order, which is linear and thus does not present relative differences, much of the data analysis does not utilize these values. Importantly, it may not be appropriate to use parameter values if model assumptions for parametric tests are violated. However, none of the reviewed articles which presented a formal statistical analysis provided any details of the model assumptions which were tested. The statistical tests used have been both parametric and non-parametric, but this selection appears arbitrary. Where non-parametric tests are used, it may be that authors are choosing tests which do not require certain assumptions to be met and which are more robust to outliers, but such a reason is not provided. Further, there was evidence of important information such as test statistics and/or degrees of freedom not being reported (e.g., Bartholomew et al., 2019b, p. 13) and of only statistically significant results being reported, with a note that there were non-significant results which were not presented (e.g., Bartholomew et al., 2017, p. 10). This is common in technology education research more generally (Buckley et al., 2021b), and is suggestive of the need for further transparency in data analysis.
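The following sketch illustrates the kind of reporting transparency argued for here, using invented parameter values for two hypothetical groups: the assumptions of an independent-samples t-test are checked first (normality via Shapiro-Wilk, homogeneity of variance via Levene's test), a non-parametric alternative is used if they appear untenable, and the test statistic, degrees of freedom where applicable, and p-value are reported in full. The 0.05 threshold and all data are placeholders.

```python
import numpy as np
from scipy.stats import shapiro, levene, ttest_ind, mannwhitneyu

# Invented ACJ parameter values for an experimental and a control group.
experimental = np.array([1.8, 1.2, 0.9, 0.7, 0.4, 0.1, -0.2, -0.6, -1.0, -1.3])
control = np.array([1.1, 0.8, 0.3, 0.0, -0.1, -0.4, -0.7, -0.9, -1.5, -2.0])

# Check the parametric assumptions and report whichever test is chosen in full.
normality_ps = [shapiro(group)[1] for group in (experimental, control)]
_, levene_p = levene(experimental, control)

if all(p > 0.05 for p in normality_ps) and levene_p > 0.05:
    t_stat, p_val = ttest_ind(experimental, control)
    df = len(experimental) + len(control) - 2
    print(f"t({df}) = {t_stat:.2f}, p = {p_val:.3f}")
else:
    u_stat, p_val = mannwhitneyu(experimental, control, alternative="two-sided")
    print(f"U = {u_stat:.1f}, p = {p_val:.3f}")
```

Whichever branch is taken, stating which assumptions were checked and why a given test was chosen is the transparency the reviewed articles generally lacked.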

Theme 5: Transparency in Adaptive Comparative Judgment Research

A final theme, which extends on the occasional missing information in reported statistical tests, relates more broadly to levels of transparency in the reporting of ACJ studies. There is a general need to improve levels of transparency in technology education research (Buckley et al., 2021a), and it was notable, particularly in conference publications, that methodology sections were not comprehensive enough for readers to fully understand the nature of the investigations (e.g., Canty et al., 2017, 2019). The information which tended to be omitted was detail on the design tasks that students engaged with, the outcomes of which were assessed through ACJ. It is probable that this relates to the space limitations of conference papers and that the authors provided this information during the conference presentation, but it would be useful to provide such information as an appendix, perhaps through an open access repository where space limitations are the issue. Finally, making research transparent relates not just to describing in detail how a study was conducted, but also to providing rationales for the decisions which were made (Closa, 2021). No reviewed study offered a clear justification of its sample size. Study populations and sampling procedures were explained, but authors, to date, have not considered either the empirical or the ethical implications of sample sizes which are too small or excessively large. It would be appropriate if, as this research progresses, decision making around sampling were made more apparent.

Discussion

Research using, and on the use of, ACJ in technology education has to date been useful in demonstrating that student work generated through the ill-defined and open-ended activities reflective of contemporary technology education can be reliably assessed. It is also clear that the validity of ACJ can be qualified in many ways, such as through the careful design of the judging cohort and by making use of misfit statistics. ACJ has repeatedly been observed as capable of providing reliable ranks and positive educational effects when used for learning, and the research to date has identified many important considerations, such as that portfolios need to be accurate representations of the objects of assessment. Given how often these outcomes have been observed, it is questionable whether further inquiry into these broad research questions would lead to any further insight. Instead, as an outcome of this review, it is recommended that ACJ research becomes more systematic, nuanced, and explicit. Foremost, there is a need for appropriately designed methodologies, and caution needs to be exercised when making inferential claims, but there are also ethical considerations associated with investing further resources into studies examining outcomes which have been repeatedly observed. For example, ACJ is consistently observed to be reliable; however, no studies have been conducted which examine a core proposition underlying this, namely that the reliability stems from the aggregation of judgments from cohorts of assessors with individual biases. It would be useful, in an attempt to falsify this claim, to examine the reliability of ACJ when the judging cohort is purposefully selected to include people with differing opinions, or who are provided with different criteria on which to make judgments; one possible form such a study could take is sketched below. Further, on this point and extending on the need for a theoretical framework outlined in Theme 2, there is a need to consider how reliable ACJ needs to be depending on its intended use, e.g., summative vs. formative, and what the associated educational implications of different reliability thresholds are (cf., Benton and Gallacher, 2018).
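As one concrete way to probe this proposition, the toy simulation below generates pairwise judgments from a cohort in which each judge weights two quality dimensions differently, splits the cohort in half, and correlates the two resulting aggregate ranks as a crude split-half reliability estimate. It is a sketch of a possible study design under invented assumptions (the quality model, judge weights, judgment counts, and the use of win proportions rather than fitted parameter values are all simplifications), not a reproduction of any reported ACJ analysis.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_portfolios, n_judges, judgments_per_judge = 40, 12, 30

# Each portfolio has two latent quality dimensions (e.g., technical vs. creative merit).
quality = rng.normal(size=(n_portfolios, 2))
# Each judge weights the two dimensions differently: a deliberately heterogeneous cohort.
judge_weights = rng.dirichlet([1.0, 1.0], size=n_judges)

def win_proportions(judge_indices):
    """Proportion of comparisons won per portfolio, aggregated over the given judges."""
    wins = np.zeros(n_portfolios)
    comparisons = np.zeros(n_portfolios)
    for j in judge_indices:
        perceived = quality @ judge_weights[j]            # judge j's view of each portfolio
        for _ in range(judgments_per_judge):
            a, b = rng.choice(n_portfolios, size=2, replace=False)
            # Probabilistic binary judgment: the better-perceived portfolio is more likely to win.
            p_a_wins = 1.0 / (1.0 + np.exp(perceived[b] - perceived[a]))
            winner = a if rng.random() < p_a_wins else b
            wins[winner] += 1
            comparisons[a] += 1
            comparisons[b] += 1
    return wins / np.maximum(comparisons, 1)

# Split the heterogeneous cohort in half and compare the two aggregate ranks.
half_a, half_b = np.arange(0, n_judges, 2), np.arange(1, n_judges, 2)
rho, _ = spearmanr(win_proportions(half_a), win_proportions(half_b))
print(f"Split-half rank correlation with a heterogeneous cohort: {rho:.2f}")
```

An empirical analogue, with real judges purposefully sampled for differing views or supplied with different criteria, would provide a more direct test of whether ACJ's reliability does indeed stem from aggregation across individual biases.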

This need for more systematic inquiry creates the need for ACJ researchers to develop a theoretical framework. The current question is not whether ACJ used for learning (typically involving students as judges) has educational merit, but why ACJ could have, and why it has been observed to have, a positive effect. It is paramount that central concepts such as time/efficiency and criteria are adequately defined, and recognition must be given that at present it can be difficult for teachers to use ACJ due to, for example, cost and training implications. However, the nature of activity within the ACJ process, such as making comparative judgments or being exposed to large variation in student work, is immediately accessible to teachers as pedagogical approaches. There is significant potential for research to be conducted, with or without ACJ software, which provides insight into the value of ACJ and which is immediately transferable into practice. The next phase of ACJ research should focus less on broad questions of feasibility and potential holistic benefit, and more on refining the use of ACJ for practice and on identifying the components of the ACJ process which have positive effects on learning and the student experience.

Author’s Note

This is a critical review, which included a self-review of the authors’ own published works.

Author Contributions

JB conceptualized the study and wrote the first draft. All authors then reviewed and edited the manuscript.

Funding

The authors received no direct financial support for the conduct of this research. The publication of this article was supported by the Faculty of Engineering and Informatics at the Technological University of the Shannon: Midlands Midwest, Ireland.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2022.787926/full#supplementary-material

References

Bartholomew, S., and Jones, M. (2021). A systematized review of research with adaptive comparative judgment (ACJ) in higher education. Int. J. Technol. Des. Educ. 1–32. doi: 10.1007/s10798-020-09642-6

Bartholomew, S., and Mentzer, N. (2021). Learning by Evaluating: Engaging Students in Evaluation as a Pedagogical Strategy to Improve Design Thinking. Community for Advancing Discovery Research in Education. Available online at: https://cadrek12.org/projects/learning-evaluating-engaging-students-evaluation-pedagogical-strategy-improve-design (accessed November 17, 2021).

Bartholomew, S., Mentzer, N., Jones, M., Sherman, D., and Baniya, S. (2020a). Learning by evaluating (LbE) through adaptive comparative judgment. Int. J. Technol. Des. Educ. doi: 10.1007/s10798-020-09639-1

Bartholomew, S., Reeve, E., Veon, R., Goodridge, W., Lee, V., and Nadelson, L. (2017). Relationships between access to mobile devices, student self-directed learning, and achievement. J. Technol. Educ. 29, 2–24. doi: 10.21061/jte.v29i1.a.1

Bartholomew, S., Strimel, G., and Jackson, A. (2018a). A comparison of traditional and adaptive comparative judgment assessment techniques for freshmen engineering design projects. Int. J. Eng. Educ. 34, 20–33.

Bartholomew, S., Strimel, G., and Yoshikawa, E. (2019a). Using adaptive comparative judgment for student formative feedback and learning during a middle school design project. Int. J. Technol. Des. Educ. 29, 363–385. doi: 10.1007/s10798-018-9442-7

Bartholomew, S., Strimel, G., and Zhang, L. (2018b). Examining the potential of adaptive comparative judgment for elementary STEM design assessment. J. Technol. Stud. 44, 58–75. doi: 10.2307/26730731

Bartholomew, S., Yoshikawa, E., Hartell, E., and Strimel, G. (2020b). Identifying design values across countries through adaptive comparative judgment. Int. J. Technol. Des. Educ. 30, 321–347. doi: 10.1007/s10798-019-09506-8

Bartholomew, S., and Yoshikawa-Ruesch, E. (2018). “A systematic review of research around adaptive comparative judgement (ACJ) in K-16 education,” in CTETE - Research Monograph Series, ed. J. Wells (Virginia: Council on Technology and Engineering Teacher Education), 6–28. doi: 10.21061/ctete-rms.v1.c.1

Bartholomew, S., Zhang, L., Bravo, E. G., and Strimel, G. (2019b). A tool for formative assessment and learning in a graphics design course: adaptive comparative judgement. Des. J. 22, 73–95. doi: 10.1080/14606925.2018.1560876

Benton, T., and Gallacher, T. (2018). Is comparative judgement just a quick form of multiple marking? Res. Matters 26, 22–28.

Buckley, J., Adams, L., Aribilola, I., Arshad, I., Azeem, M., Bracken, L., et al. (2021a). An assessment of the transparency of contemporary technology education research employing interview-based methodologies. Int. J. Technol. Des. Educ. doi: 10.1007/s10798-021-09695-1

Buckley, J., Canty, D., and Seery, N. (2020a). An exploration into the criteria used in assessing design activities with adaptive comparative judgment in technology education. Irish Educ. Stud. doi: 10.1080/03323315.2020.1814838

Buckley, J., Hyland, T., and Seery, N. (2021b). Examining the replicability of contemporary technology education research. Tech. Series 28, 1–9. doi: 10.1016/j.jgg.2018.07.009

Buckley, J., Seery, N., Gumaelius, L., Canty, D., Doyle, A., and Pears, A. (2020b). Framing the constructive alignment of design within technology subjects in general education. Int. J. Technol. Des. Educ. 31, 867–883. doi: 10.1007/s10798-020-09585-y

Canty, D. (2012). The Impact of Holistic Assessment Using Adaptive Comparative Judgement on Student Learning. Doctoral Thesis. Limerick: University of Limerick.

Canty, D., Buckley, J., and Seery, N. (2019). “Inducting ITE students in assessment practices through the use of comparative judgment,” in Proceedings of the 37th International Pupils’ Attitudes Towards Technology Conference, eds S. Pule and M. de Vries (Msida, Malta: PATT), 117–124.

Canty, D., Seery, N., Hartell, E., and Doyle, A. (2017). Integrating Peer Assessment in Technology Education Through Adaptive Comparative Judgment. Philadelphia: Millersville University, 1–8.

Closa, C. (2021). Planning, implementing and reporting: increasing transparency, replicability and credibility in qualitative political science research. Eur. Political Sci. 20, 270–280. doi: 10.1057/s41304-020-00299-2

De Maeyer, S. (2021). Reproducible Stats in Education Sciences: Time to Switch? Reproducible Stats in Education Sciences. Available online at: https://svendemaeyer.netlify.app/posts/2021-03-24_Time-to-Switch/ (accessed April 26, 2021).

de Vries, M. (2016). Teaching About Technology: an Introduction to the Philosophy of Technology for Non-philosophers. Switzerland: Springer.

Dewit, I., Rohaert, S., and Corradi, D. (2021). How can comparative judgement become an effective means toward providing clear formative feedback to students to improve their learning process during their product-service-system design project? Des. Technol. Educ. 26, 276–293.

Grant, M. J., and Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info. Libr. J. 26, 91–108. doi: 10.1111/j.1471-1842.2009.00848.x

Hartell, E., and Buckley, J. (2021). “Comparative judgement: An overview,” in Handbook for Online Learning Contexts: Digital, Mobile and Open, eds A. Marcus Quinn and T. Hourigan (Switzerland: Springer International Publishing), 289–307.

Kimbell, R. (2011). Wrong but right enough. Des. Technol. Educ. 16, 6–7.

Kimbell, R. (2012). Evolving project e-scape for national assessment. Int. J. Technol. Des. Educ. 22, 135–155. doi: 10.1007/s10798-011-9190-4

Kimbell, R., Wheeler, T., Miller, S., and Pollitt, A. (2007). E-scape Portfolio Assessment: Phase 2 Report. London: Goldsmiths, University of London.

Kimbell, R., Wheeler, T., Stables, K., Shepard, T., Martin, F., Davies, D., et al. (2009). E-scape Portfolio Assessment: Phase 3 Report. London: Goldsmiths, University of London.

McLain, M. (2018). Emerging perspectives on “the demonstration” as a signature pedagogy in design and technology education. Int. J. Technol. Des. Educ. 28, 985–1000. doi: 10.1007/s10798-017-9425-0

McLain, M. (2021). Developing perspectives on ‘the demonstration’ as a signature pedagogy in design and technology education. Int. J. Technol. Des. Educ. 31, 3–26. doi: 10.1007/s10798-019-09545-1

Milne, L. (2013). Nurturing the designerly thinking and design capabilities of five-year-olds: technology in the new entrant classroom. Int. J. Technol. Des. Educ. 23, 349–360. doi: 10.1007/s10798-011-9182-4

Newhouse, C. P. (2014). Using digital representations of practical production work for summative assessment. Assess. Educ. 21, 205–220. doi: 10.1080/0969594X.2013.868341

Pollitt, A. (2012a). Comparative judgement for assessment. Int. J. Technol. Des. Educ. 22, 157–170. doi: 10.1007/s10798-011-9189-x

Pollitt, A. (2012b). The method of adaptive comparative judgement. Assess. Educ. 19, 281–300. doi: 10.1080/0969594X.2012.665354

Rowsome, P., Seery, N., Lane, D., and Gordon, S. (2013). “The development of pre-service design educator’s capacity to make professional judgments on design capability using adaptive comparative judgment,” in Paper Presented at 2013 ASEE Annual Conference & Exposition Proceedings, (Atlanta: ASEE).

Sadler, D. R. (2009). “Transforming holistic assessment and grading into a vehicle for complex learning,” in Assessment, Learning and Judgement in Higher Education, ed. G. Joughin (Netherlands: Springer), 45–63.

Saldaña, J. (2013). The Coding Manual for Qualitative Researchers, 2nd Edn. Los Angeles: SAGE.

Seery, N., Buckley, J., Delahunty, T., and Canty, D. (2019). Integrating learners into the assessment process using adaptive comparative judgement with an ipsative approach to identifying competence based gains relative to student ability levels. Int. J. Technol. Des. Educ. 29, 701–715. doi: 10.1007/s10798-018-9468-x

Seery, N., Canty, D., and Phelan, P. (2012). The validity and value of peer assessment using adaptive comparative judgement in design driven practical education. Int. J. Technol. Des. Educ. 22, 205–226. doi: 10.1007/s10798-011-9194-0

Stables, K. (2008). Designing matters; designing minds: the importance of nurturing the designerly in young people. Des. Technol. Educ. 13, 8–18.

Strimel, G. J., Bartholomew, S. R., Purzer, S., Zhang, L., and Ruesch, E. Y. (2021). Informing engineering design through adaptive comparative judgment. Eur. J. Eng. Educ. 46, 227–246. doi: 10.1080/03043797.2020.1718614

Zhang, L. (2019). Investigating Differences in Formative Critiquing Between Instructors and Students in Graphic Design. West Lafayette: Purdue University.

Keywords: comparative judgment, technology education, design, validity, methodology, assessment

Citation: Buckley J, Seery N and Kimbell R (2022) A Review of the Valid Methodological Use of Adaptive Comparative Judgment in Technology Education Research. Front. Educ. 7:787926. doi: 10.3389/feduc.2022.787926

Received: 01 October 2021; Accepted: 24 January 2022;
Published: 03 March 2022.

Edited by:

Renske Bouwer, Utrecht University, Netherlands

Reviewed by:

Christian Bokhove, University of Southampton, United Kingdom
Wei Shin Leong, Ministry of Education, Singapore
Scott Bartholomew, Brigham Young University, United States

Copyright © 2022 Buckley, Seery and Kimbell. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jeffrey Buckley, jeffrey.buckley@tus.ie, jbuckley@kth.se
