POLICY BRIEF article

Front. Educ., 18 January 2023
Sec. Higher Education

Evaluation of access and participation plans: Understanding what works

  • 1College of Health and Life Sciences, Aston University, Birmingham, United Kingdom
  • 2Strategic Planning Office, University of Wolverhampton, Wolverhampton, United Kingdom
  • 3Government Relations and Policy, Aston University, Birmingham, United Kingdom
  • 4Directorate of Student Engagement Evaluation and Research, Sheffield Hallam University, Sheffield, United Kingdom

We present an analysis of two current policy options to improve evaluation of access and participation work: independent external evaluation vs. in-house evaluation. Evaluation of access and participation work needs to be well-conducted, objective and widely disseminated, regardless of the outcome. Independent external evaluation is likely to provide objectivity and the right skills, but providing effective and timely feedback may be prohibitively expensive. Without support, in-house practitioner teams risk a lack of objectivity and skills. Neither external nor in-house evaluation is likely to solve issues of publication bias; use of open science principles could help. Working with academics and other experts internal to the institution could provide the skills to work well under the open science framework. Working as a sector to avoid duplication of effort is likely to get us further, faster.

Introduction

Inequality of educational opportunity can have a long-term impact on later life chances (e.g., James et al., 2008; Education Policy Institute, 2018). In the UK, successive governments have attempted to address such inequalities through various agendas to improve social mobility – for example through the establishment of the Social Mobility and Child Poverty Commission in 2010 (now called the Social Mobility Commission), and latterly through the ‘levelling up’ programme, with its particular focus on skills development as a pathway toward securing rewarding employment (e.g., HM Government, 2022). Participation in Higher Education (HE) has often played a role in such agendas, with widening participation programmes employed to encourage those who might otherwise not have considered HE to do so and to support the raising of attainment in schools. In England, university outreach teams – often working in collaboration with schools, colleges, employers and third sector organisations – have driven such initiatives under requirements and regulations set out from 2018 by the Office for Students (the English HE regulator) and from 2006 to 2018 by the Office for Fair Access (OFFA). Resource allocations to these initiatives are large and – in the main – funded from tuition-fee income, so the stakes are high; the UK Government anticipated spend on widening participation by the English HE sector in 2020–2021 to reach around £860 million (Secretary of State for Education, 2018). However, given the resources allocated and the recent drives for improvement, knowledge of what interventions work seems to be remarkably sparse (see, e.g., Skilbeck, 2000; Gorard and Smith, 2006; Gorard et al., 2006, 2012; Younger et al., 2019; Robinson and Salvestrini, 2020; Austen et al., 2021).

Programme interventions to widen access to HE are typically delivered longitudinally, over at least one academic year and often more, some beginning at primary school age – although shorter interventions such as campus visits and taster classes are also offered. Such programmes usually comprise information, advice and guidance, application support, subject taster sessions, and campus visits; some interventions include residential summer schools and mentoring by current undergraduates. Successful evaluation of a programme is embedded from the design stage and commonly rests on a comprehensive ‘theory of change’ (see, e.g., Barkat, 2019; Dent et al., 2022). A ‘theory of change’ is a model which hypothesises how and why any given intervention should work, mapping the expected outputs of the programme of activities and the outcomes that can be measured to evaluate success. For example, the outputs of a programme of activities might be self-reports of increased knowledge and confidence in the ability to apply to HE, whereas the outcomes could be receiving an offer or eventual enrolment. Additionally, implementation and process evaluation should be carried out to understand how well the delivery of the intervention has gone and to help determine what parts of the programme have contributed toward its overall success (or lack thereof). This allows improvements to be made, often rapidly.

Although initially policy makers were more interested in tracking and monitoring spend (e.g., Office for Fair Access [OFFA], 2004), improving evaluation of access and participation work has been on the English policy agenda for some time. As early as 2008, the Higher Education Funding Council for England (the body that was responsible for oversight of English Higher Education prior to the creation of the Office for Students) outlined that its ‘Aimhigher’ partnerships (outreach consortia) needed to evaluate their own work (Higher Education Funding Council, 2008). Professor Chris Millward, who followed Professor Sir Les Ebdon (Director of Fair Access at the Office for Fair Access, OFFA) and served as Director of Fair Access and Participation at the Office for Students (OfS) from 2018 to 2021, continued efforts to improve evaluation of access and participation work and encouraged practitioners to evaluate rigorously and objectively (Office for Students, 2018). Higher Education providers and collaborative programmes, such as UniConnect, were strongly encouraged to produce theories of change, and tools and resources were produced to support practitioners. These included, for example, a financial support evaluation toolkit, an evaluation self-assessment tool and – in 2019 – the creation of a ‘what works’ centre (now known as TASO: Centre for Transforming Access and Student Outcomes). The approach taken was therefore to upskill HE provider teams and work together as a sector to better understand what works, i.e., an ‘in-house’ approach.

The new Director of Fair Access and Participation, John Blake, appointed in November 2021, came in with strong intentions to further improve evaluation of access and participation work, observing that for 20 years or more of this work, we have nowhere near 20 years’ worth of evidence about what works. Critically, Blake said “But we expect the projects committed to in access and participation plans to be evaluated, for those evaluations to be independent, and for them to be published” (Office for Students, 2022a). It is assumed here that independent evaluation means evaluation by a third party not directly employed by the education provider (i.e., an external approach), although details of how this might work have not thus far been provided. John Blake (TASO International Conference, 2022) did acknowledge that this “needs thought about doing it correctly, so that we do not end up incurring vast additional cost,” as well as being keen to avoid the appearance of “institutions marking their own homework.” This has raised questions over the previous direction taken by many universities and colleges of upskilling, employing evaluation specialists, setting up specialist in-house units, and partnering with TASO to improve evaluation. It has also elicited some concerns that the change of direction may be premature, having not given the previous policy time to work in terms of evaluation of projects where the outcome data takes longer than 1 year to collect. For example, university enrolment data from HESA (the Higher Education Statistics Agency) is typically not released until 15–18 months after a student begins their course, internal student retention data would be available no sooner than 12 months after a student begins their course, and final attainment data in terms of degree classification could take 5 years or more. Below we consider the advantages and disadvantages of each policy and propose a possible alternative way forward.

Policy options and implications

During the period of access regulation to date, the setting of a clear regulatory direction has been continually hampered by an unresolved ambiguity in the espoused purpose of this evaluation. Regulatory guidance has emphasised the need both for value for money / return on investment assessments (particularly following the 2008 financial crash and the imposition of an austerity regime) and for the identification and sharing of best practice. This dual approach is typified by Professor Sir Les Ebdon’s suggestion that there was an increased need for evidence and evaluation to ‘improve understanding of what works best, share best practice across the sector and demonstrate to Government the value of investment in this area’ (Office for Fair Access [OFFA], 2013). Yoking these two objectives together obscured a fundamental distinction between ‘black box’ evaluation approaches (quasi-scientific and trial-based designs) intended to identify the ‘effects of causes’ (Dawid, 2007) and produce robust evidence to support decision-making (e.g., about value for money), and theory-driven evaluation focused on exploring the ‘causes of effects’ and understanding how and why change happened, the better to support practice development (Dawid, 2007; TASO, 2022). The different approaches necessarily invoke different methodologies and philosophical commitments.

Irrespective of the purpose of evaluation, to improve sector-wide evaluation and knowledge about what works, two main policy options have thus far been espoused; these can be divided into an ‘internal’ and an ‘external’ approach. The first is upskilling ‘in-house’ practitioners and providing sector-wide support – the road Les Ebdon and Chris Millward pioneered. The second is independently generated and published evidence – the future envisioned by John Blake. Whichever option is chosen, good evaluation needs to be conducted by people with the appropriate skills for the methodology used, be objective, and – to avoid duplication of effort – be widely disseminated, either in academic journals or through sector bodies such as TASO.

Skills

Arguably, many practitioners lack the opportunity to develop the level of research skills necessary to produce an evaluation of publishable quality (Crawford et al., 2017; Harrison et al., 2018). There can also be ambiguity over whose responsibility evaluation is, alongside the ubiquitous pressure on available time; many HE-based evaluators have roles split between delivery and evaluation. Upskilling all members of a team to a proficient level – certainly if an academic type of publication is required – would take time, although a formal report made available in a repository would be attainable for most, and perhaps more accessible for the sector. By contrast, external evaluators could be selected on the basis of high proficiency in the particular methodology used for each individual project. However, as above, evaluation should take place at many different stages of an intervention, and good evaluation would usually be embedded within the design and development of the intervention itself. For many methodologies, particularly those based on a theory of change approach, external evaluators would also need a sophisticated understanding of the delivery practice. This means that an external evaluator would have to be involved from as early as the design stage of the intervention (identifying suitable control groups, for example), throughout the intervention, and at the end. This may prove challenging for a completely independent consultant, or prohibitively expensive for the commissioning institution. Certainly, there are also advantages to practitioners being involved in the evaluation design and process, both to further their understanding and practice and to draw on their professional experience to inform evaluation design.

Objectivity

As practitioners tend to be responsible for the development and delivery of interventions, it has been argued that they may not be best positioned to provide an objective and independent evaluation (Gorard and Smith, 2006; Loughlin, 2008; John Blake: TASO International Conference, 2022). Practitioners will have spent significant time and resource in designing and delivering the intervention and may therefore be seen as having a vested interest (for additional challenges faced by practitioner evaluation, see also Harrison and Waller, 2017a,b). At the same time, being closer to the practice, they will be more able to draw on experience and observation to construct a theory of change (see, e.g., Austen, 2021). Conversely, independent evaluation has the advantage of separating the evaluation from those heavily invested in it being successful. However, independence via ‘outsourcing’ is certainly no guarantee of quality or objectivity; where collaborations are long term, external consultants may also be under pressure to produce results which reflect positively on the intervention, particularly if they perceive that success may govern whether they are awarded their next contract (see Morris and Jacobs, 2000; Markiewicz, 2008). A lack of familiarity with the complexity of delivery may also encourage the use of ‘cookie cutter’ evaluation approaches or insufficiently nuanced conclusions (see for example Nutt, 1980; Pringle, 1998). Involving stakeholders in the evaluation process may also prove more challenging for external evaluators. Both options therefore have flaws.

Dissemination

Sharing good practice – what works – makes perfect sense. Sharing what does not work also makes sense, to save others from repeating unsuccessful interventions. Dissemination invariably furthers progress, at least if it is assumed that good practice can be generalised across a range of different contexts. Whatever the reason for dissemination, the most frequent methods of academic dissemination are publication in journals and presentation at conferences, whereas practitioners may be more likely to use informal networks and memberships. Whether dissemination is more likely when evaluation is conducted externally, or by consultants who may have moved on to the next project, is unclear. Writing for peer-reviewed academic journals is a time-consuming and skilled process, likely to be avoided by anyone other than academics; for other evaluators, the time costs are likely to outweigh the benefits. Unfortunately, whether evaluation is conducted internally or by external collaborators, interventions shown to work are far more likely to be widely disseminated than those that do not (the well-known ‘file-drawer problem’, Rosenthal, 1979), and there is no mechanism proposed to remedy this in either approach.

In summary, neither approach adequately resolves the issues of objectivity or dissemination, however skilfully the evaluation is conducted. We therefore propose some alternative recommendations for consideration and discussion below.

Actionable recommendations

Adoption of an open science approach

Independence is neither a necessary nor a sufficient condition for objectivity and would not necessarily improve dissemination. Instead, Open Science principles could provide a means of ensuring objectivity and transparency at both the research and publication stages. Firstly, registering the principal activities that are going to be evaluated and their expected completion dates (perhaps in the HE provider’s access and participation plan1 and then merged into a central repository) would enable the sector to see what types of activities are being evaluated, avoid excessive duplication of effort, and provide opportunities for collaboration and for the expansion of studies across different partners. Secondly, pre-registering a trial protocol on a centralised public database managed by a suitable organisation (e.g., TASO), with expected completion dates, would allow scrutiny of the proposed evaluation to ensure quality and objectivity (preventing hypothesising after the results are known – or ‘HARKing’). Finally, the results of the evaluation should be summarised in the same central registry as the trial protocol. Those researchers who want to disseminate their results in academic journals would be free to do so – perhaps even as a registered report – by submitting their trial protocol to a suitable journal prior to the evaluation taking place. A central registry of proposed evaluations and their eventual outputs mitigates the risk that activities judged unsuccessful will languish as hard-to-locate brief reports on a university server.
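To make this concrete, the sketch below shows one possible shape for an entry in such a central registry, expressed as a small Python data structure. The field names and example values are illustrative assumptions only; they are not a specification of any existing TASO or Office for Students system.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class EvaluationRegistration:
    """One entry in a hypothetical central registry of planned evaluations."""
    provider: str                       # HE provider registering the evaluation
    intervention: str                   # e.g., "Year 12 residential summer school"
    primary_outcome: str                # outcome specified in the theory of change
    design: str                         # e.g., "matched comparison group"
    planned_sample_size: int
    expected_completion: date           # lets the sector see when results are due
    results_summary: Optional[str] = None  # completed later, whatever the outcome

# Registering before delivery fixes the hypothesis and outcome measure in advance
# (guarding against HARKing); the summary is added afterwards whether or not the
# intervention worked, mitigating the file-drawer problem.
registry = [
    EvaluationRegistration(
        provider="Example University",
        intervention="Undergraduate mentoring for Year 12 students",
        primary_outcome="Applications to selective providers",
        design="Waiting-list comparison group",
        planned_sample_size=400,
        expected_completion=date(2025, 9, 30),
    )
]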

Partnership working

Professional services staff delivering activities can sometimes be left isolated, without the resource and expertise to conduct robust causal evaluations, but – as discussed above – external evaluators may not always be able to provide thorough and timely support. Support from appropriate academic departments or central directorates within institutions could provide an effective and efficient compromise. Where those trained in research and evaluation lead on evaluation in collaboration with practitioners, this could support a much more robust and objective approach. Evaluation experts would have less of a vested interest in the intervention (removing some sources of bias) and more interest in establishing what does and does not work in improving student outcomes. They could be encouraged to disseminate this work widely at conferences and in peer reviewed journals in collaboration with their practitioner partners. Although the process of academic publication includes peer review, and therefore cannot be viewed as ‘marking your own homework’, it is not infallible, is subject to publication bias, and can be slow; we address this by recommending that the sector additionally follow Open Science principles, as above.

Working together as a sector

As well as being objective, research needs to be generalisable and replicable. For more efficient progress we need to be wary of excessive duplication and consider the benefits of working together as a sector to answer the bigger questions. In most cases, some general guidance for the sector would be more helpful than a – potentially expensive or wasteful – trial and error approach by several institutions simultaneously. Practitioners tend to spread their efforts thinly across many evaluations, whereas a more focused and rigorous evaluation could occur if the burden were divided across several providers. To a large extent, the Centre for Transforming Access and Student Outcomes in Higher Education (TASO) has started the sector on this journey already, albeit on a relatively small scale, by identifying the important questions and working with a number of different providers to answer them. This would also potentially address the generalisability challenges, by building in opportunities to test interventions across a range of contexts. Challenges surrounding the public sharing of data due to GDPR concerns can be overcome by universities and colleges using higher education access trackers (e.g., AimHigher, HEAT) to record their activities and associated participants, allowing researchers from these tracking services to conduct large-scale evaluations. This type of approach could also serve to avoid potentially under-powered studies (e.g., those with insufficient sample sizes to detect effects even when they are present). It would be beneficial to have a central body overseeing sector efforts and ensuring quality.
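As a brief illustration of why pooling across providers matters for statistical power, the short calculation below estimates the sample size needed to detect a small standardised effect in a simple two-group comparison, using the statsmodels power module. The chosen effect size (Cohen’s d = 0.2), significance level and power are illustrative assumptions rather than sector benchmarks.

# Minimal power calculation for a two-group comparison (illustrative values only).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8,
                                    alternative="two-sided")
print(f"Participants needed per group: {n_per_group:.0f}")
# Around 390-400 participants per group - often more than a single provider recruits
# to one intervention in a year, which is one argument for sector-level pooling.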

Another aspect for consideration is how ‘evidence’ is defined and disseminated. At its simplest level, a ‘what works’ approach tends to imply a binary outcome: either an intervention works, or it does not. This closes off the possibility of identifying partial successes or fragmentary learning. Realist evaluation, for example, is founded in the identification and assessment of configurations of the contexts, the mechanisms causing change and the outcomes that result (Pawson and Tilley, 1997). This complexity opens the possibility of learning more about the conditions and approaches required to deliver successful outcomes, allows for a more nuanced definition of what ‘working’ means, and provides a more detailed understanding of the conditions that might be required if a particular aspect of the intervention is to be transferred to other contexts. Although realist evaluation is often undertaken by ‘external’ evaluators, the building of programme theories relies on internal practitioner expertise (for a discussion of this in the context of organisational interventions, see Nielsen and Miraglia, 2017).

Conclusion

Without appropriate resource and support, practitioner-only evaluation may not deliver the rigour and objectivity required to fully move forward. Independent evaluation seems unlikely to overcome objectivity issues, if indeed they exist; the perceived problems with current approaches to evaluation in higher education have not been clearly articulated, and only solutions have been offered. However, an opportunity exists to reframe the notion of independence, to focus on developing criticality and challenge both within and beyond organisations, and to support all stakeholders to be active critical thinkers, which is perhaps the real gap that needs to be addressed. Quality should be assessed using notions of criticality (objectivity), additionality (contribution to knowledge), timeliness (informing decision making) and materiality (relevance and importance), rather than independence (Picciotto, 2013). Working together as a sector – in partnership with academics and other experts as outlined in Austen (2022) – and, most importantly, following open science principles could provide the key to improving sector knowledge of what works faster. We have a timely opportunity to develop a new system, with new Access and Participation Plans required for English HE providers from 2024.

Author contributions

EM coordinated the paper and initial draft. RJS, MH, LW, LA, and JC wrote particular sections and provided information, suggestions, or comments.

Funding

This work was supported by an award to EM from Aston University’s Teaching Research Fund.

Conflict of interest

Subsequent to the submission of this paper RJS began working for TASO. TASO had no input and are not associated with the views or recommendations expressed in this paper.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1. ^This aspect has been included in a recent Office for Students consultation (2022b), with the suggestion that HE providers should bolster their access and participation targets with an ‘intervention strategy’, which includes details of when evaluation outcomes are to be published.

References

Austen, L. (2021). Supporting the evaluation of academic practices: reflections for institutional change and professional development. J. Perspect. Appl. Acad. Pract. 9, 3–6. doi: 10.14297/jpaap.v9i2.470

Austen, L. (2022). Working together on access and participation evaluation. WonkHE Blog. https://wonkhe.com/blogs/working-together-to-take-evaluation-seriously/ (Accessed July 19, 2022).

Austen, L., Hodgson, R., Heaton, C., Pickering, N., and Dickinson, J. (2021). Access, retention, attainment and progression: an integrative review of demonstrable impact on student outcomes. Available at: https://www.advance-he.ac.uk/knowledge-hub/access-retention-attainment-and-progression-review-literature-2016-2021 (Accessed July 19, 2022).

Barkat, S. (2019). Evaluating the impact of the academic enrichment programme on widening access to selective universities: application of the theory of change framework. Br. Educ. Res. J. 45, 1160–1185. doi: 10.1002/berj.3556

Crawford, C., Dytham, S., and Naylor, R. (2017). Improving the Evaluation of Outreach: Interview Report. Bristol: Office for Fair Access.

Dawid, A. P. (2007). Fundamentals of Statistical Causality. London: UCL.

Dent, S., Mountford-Zimdars, A., and Burke, C. (2022). Theory of Change: Debates and Applications to Access and Participation in Higher Education. Bingley: Emerald Publishing Limited.

Education Policy Institute (2018). Key Drivers of the Disadvantage gap. Literature Review. Education in England: Annual Report. Available at: https://www.basw.co.uk/system/files/resources/EPI-Annual-Report-2018-Lit-review.pdf (Accessed July 16, 2022).

Gorard, S., See, B. H., and Davies, S. (2012). The impact of attitudes and aspirations on educational attainment and participation. Available at: http://www.jrf.org.uk/sites/files/jrf/education-young-people-parents-full.pdf (Accessed May 29, 2020).

Gorard, S., and Smith, E. (2006). Beyond the ‘learning society’: what have we learnt from widening participation research? Int. J. Lifelong Educ. 25, 575–594. doi: 10.1080/02601370600989269

Gorard, S., Smith, E., May, H., Thomas, L., Adnett, N., and Slack, K. (2006). Review of Widening Participation Research: Addressing the Barriers to Participation in Higher Education. Bristol: Higher Education Funding Council for England (HEFCE).

Harrison, N., Vigurs, K., Crockford, J., McCaig, C., Squire, R., and Clark, L. (2018). Understanding the evaluation of access and participation outreach interventions for under 16 year olds, Office for Students. Available at: https://www.officeforstudents.org.uk/publications/understanding-the-evaluation-of-access-and-participation-outreach-interventions-for-under-16-year-olds/ (Accessed December 12, 2022).

Harrison, N., and Waller, R. (2017a). Evaluating outreach activities: overcoming challenges through a realist ‘small steps’ approach. Perspectives 21, 81–87. doi: 10.1080/13603108.2016.1256353

Harrison, N., and Waller, R. (2017b). Success and impact in widening participation policy; what works and how do we know? High Educ. Pol. 30, 141–160. doi: 10.1057/s41307-016-0020-x

Higher Education Funding Council (2008). Guidance for AimHigher partnerships: updated for the 2008-2011 programme. Available at: https://dera.ioe.ac.uk/7539/1/08_05.pdf (Accessed July 16, 2022).

HM Government (2022). Levelling Up the United Kingdom. Available at: www.gov.uk (Accessed July 16, 2022).

James, R., Bexley, E., Anderson, A., Devlin, M., Garnett, R., Marginson, S., et al. (2008). Participation and Equity: A Review of the Participation in Higher Education of People from low Socioeconomic Backgrounds and Indigenous People, Centre for the Study of Higher Education, Melbourne, VIC.

Loughlin, M. (2008). Reason, reality and objectivity – shared dogmas and distortions in the way both ‘scientistic’ and ‘postmodern’ commentators frame the EBM debate. J. Eval. Clin. Pract. 14, 665–671. doi: 10.1111/j.1365-2753.2008.01075.x

Markiewicz, A. (2008). The political context of evaluation: what does this mean for independence and objectivity? Evaluat. J. Australas. 8, 35–41. doi: 10.1177/1035719X0800800205

Morris, M., and Jacobs, L. (2000). You got a problem with that? Exploring evaluators’ disagreements about ethics. Eval. Rev. 24, 384–406. doi: 10.1177/0193841X0002400403

Nielsen, K., and Miraglia, M. (2017). What works for whom in which circumstances? On the need to move beyond the ‘what works?’ Question in organizational intervention research. Hum. Relat. 70, 40–62. doi: 10.1177/0018726716670226

Nutt, P. (1980). On managed evaluation processes. Technol. Forecast. Soc. Chang. 17, 313–328. doi: 10.1016/0040-1625(80)90104-3

Office for Fair Access [OFFA]. (2004). Producing Access Agreements: OFFA Guidance to Institutions. Bristol: Office for Fair Access.

Office for Fair Access [OFFA]. (2013). How to Produce an Access Agreement for 2014–2015. Bristol: Office for Fair Access.

Office for Students (2018). Regulatory notice 1: Access and participation guidance from 2019–2020. Available at: officeforstudents.org.uk (Accessed July 19, 2022).

Office for Students (2022a). Next steps in access and participation. Available at: https://www.officeforstudents.org.uk/news-blog-and-events/press-and-media/next-steps-in-access-and-participation/ (Accessed July 19, 2022).

Office for Students (2022b). Consultation on a new approach to regulating equality of opportunity in English higher education. Available at: https://www.officeforstudents.org.uk/publications/consultation-on-a-new-approach-to-regulating-equality-of-opportunity-in-english-higher-education/ (Accessed December 12, 2022).

Pawson, R., and Tilley, N. (1997). Realistic Evaluation. London: Sage.

Picciotto, R. (2013). Evaluation Independence in organizations. J. MultiDisciplinary Educat. 9, 18–32.

Pringle, E. (1998). Do proprietary tools lead to cookie cutter consulting? J. Manag. Consult. 10, 3–7.

Robinson, D., and Salvestrini, V. (2020). The impact of interventions for widening access to higher education: a review of the evidence. Report to TASO: Transforming Access and Student Outcomes in Higher Education. Available at: https://taso.org.uk/wp-content/uploads/Widening_participation-review_EPI-TASO_2020.pdf (Accessed October 14, 2020).

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychol. Bull. 86, 638–641. doi: 10.1037/0033-2909.86.3.638

Secretary of State for Education (2018). Access and Participation: Secretary of State for Education Guidance to the Office for Students (OfS). Available at: https://www.officeforstudents.org.uk/media/1112/access-and-participation-guidance.pdf (Accessed July 15, 2022).

Skilbeck, M. (2000). Access and Equity in Higher Education: An International Perspective on Issues and Strategies. Dublin: The Higher Education Authority.

TASO (2022). Impact Evaluation with Small Cohorts: Methodological Guidance. Bristol: TASO.

TASO International Conference (2022). TASO International Conference – Part 1. YouTube. TASO. May 13, 2022. Available at: https://www.youtube.com/watch?v=PS7imCzzKZQ (Accessed July 19, 2022).

Younger, K., Gascoine, L., Menzies, V., and Torgerson, C. (2019). A systematic review of evidence on the effectiveness of interventions and strategies for widening participation in higher education. J. Furth. High. Educ. 43, 742–773. doi: 10.1080/0309877X.2017.1404558

Keywords: evaluation, policy, access and participation, what works, widening access and participation

Citation: Moores E, Summers RJ, Horton M, Woodfield L, Austen L and Crockford J (2023) Evaluation of access and participation plans: Understanding what works. Front. Educ. 8:1002934. doi: 10.3389/feduc.2023.1002934

Received: 25 July 2022; Accepted: 04 January 2023;
Published: 18 January 2023.

Edited by:

Chris Millward, University of Birmingham, United Kingdom

Reviewed by:

Kos Saccone, Central Queensland University, Australia
Angela Gayton, University of Glasgow, United Kingdom

Copyright © 2023 Moores, Summers, Horton, Woodfield, Austen and Crockford. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Elisabeth Moores, ✉ e.j.moores@aston.ac.uk
