- Seton Hall University, South Orange, NJ, United States
Quality P-12 student learning begins with quality educator preparation. An integral part of ensuring quality academic programs is ongoing programmatic assessment. Faculty and administrators tasked with overseeing assessment processes in higher education institutions faced an added challenge during the COVID-19 pandemic, when campuses across the country pivoted to virtual and hybrid learning. This transition not only meant that faculty and students were now teaching and learning in an online environment; it also meant that assessment coordinators needed to find new ways to keep their quality assurance systems operating well while working in a pandemic or post-pandemic reality. This chapter details how one assessment coordinator navigated the challenges and successes of supporting faculty engagement in a fully online programmatic assessment process in a college of education and human services at a private university in the northeast.
Introduction
Many factors contribute to successful P-12 student learning. Teachers, and how they are trained to teach, are an essential piece of the equation when looking at how today's students are learning (Cochran-Smith et al., 2020). Higher education plays a crucial role in the educational system by designing and delivering academic programs to prepare our future teachers and school leaders for their roles in classrooms and schools. It is my perspective that to do this important work of training our future educators well, higher education must engage in a continuous cycle of accreditation, assessment, and program evaluation of its educator preparation programs. The pandemic brought new complications to our ability to assess the quality of our teacher preparation programs, as well as some unexpected benefits and improvements to our processes.
Ongoing assessment of student learning and academic program effectiveness takes organization, consistency, and time if it is to improve candidate performance and program outcomes. In this chapter, I share my experience in my role as assessment coordinator for a college of education and human services, and how I engaged faculty and administrative staff in effective assessment processes in a virtual environment. The chapter examines the challenges of working with faculty on program assessment in an online format, as well as some of the unexpected benefits that ultimately helped us create a highly functioning and collaborative assessment system.
I will begin with a look at the national standards for educator preparation and whether they remained reasonable given the shift to virtual and hybrid learning over the past year. I will then outline the meeting structures I developed to support faculty engagement with assessment, such as monthly committee meetings and biannual faculty assessment retreats, and the challenges and unexpected benefits of the virtual format.
Included in this review will be examples of successful data sharing for internal use and for public-facing accountability, as well as examples of involving our school partners through online advisory committee meetings. Examples of program improvements based on data shared virtually will also be provided.
Integral to program assessment work is attention to data quality. I will describe our process for establishing validity and reliability virtually, as well as innovative practices we tried for improving response rates and data quality, such as creating a study group and conducting targeted outreach to our alumni and employers. I will then revisit the accreditation demands on educator preparation by describing our virtual site visit and how we used the online format as a strength for encouraging participation from external stakeholders.
I conclude with my perspective on the importance and personal socio-emotional value of online professional networking for assessment coordinators, in support of successful assessment processes that help ensure quality educator preparation for our future teachers and school leaders.
National Standards for Educator Preparation
Educator preparation programs in the United States must report to multiple sets of standards, and this requirement did not waver during the COVID-19 pandemic. The Council for Higher Education Accreditation (CHEA) oversees accreditation for institutions of higher education, as well as recognition of program-specific accreditors. Educator preparation programs currently have two choices of accreditor: the Council for the Accreditation of Educator Preparation (CAEP), recognized by CHEA in 2014, and the Association for Advancing Quality in Educator Preparation (AAQEP), recognized by CHEA in 2021. Both accreditors base their standards, mission, and vision on a philosophy of continuous review and improvement of educator preparation programs. The focus of this chapter is on an experience with CAEP standards, as AAQEP has only recently become an option.
Early in the pandemic, in-person learning became scarce as P-12 schools and higher education programs alike shut down their physical buildings. For an educator preparation program, this raised questions about whether CAEP's demands were reasonable given the shift to a virtual format. Although many faculty and administrators expressed anxiety over meeting the CAEP standards during the pivot to virtual learning, I would argue that the standards were in fact reasonable and provided enough flexibility to be met even during a temporary shift to fully online teaching and learning. The shift meant that we needed to review the national CAEP standards to be sure we remained positioned to address them.
The Council for the Accreditation of Educator Preparation defines its 2022 Revised Initial Standards as follows:
Standard 1: Content and Pedagogical Knowledge
The provider ensures that candidates develop an understanding of the critical concepts and principles of their discipline and facilitates candidates’ reflection of their personal biases to increase their understanding and practice of equity, diversity, and inclusion. The provider is intentional in the development of their curriculum and clinical experiences for candidates to demonstrate their ability to effectively work with diverse P-12 students and their families.
Standard 2: Clinical Partnerships and Practice
The provider ensures effective partnerships and high-quality clinical practice are central to candidate preparation. These experiences should be designed to develop a candidate’s knowledge, skills, and professional dispositions to demonstrate positive impact on diverse students’ learning and development. High quality clinical practice offers candidates experiences in different settings and modalities, as well as with diverse P-12 students, schools, families, and communities. Partners share responsibility to identify and address real problems of practice candidates experience in their engagement with P-12 students.
Standard 3: Candidate Recruitment, Progression, and Support
The provider demonstrates the quality of candidates is a continuous and purposeful focus from recruitment through completion. The provider demonstrates that development of candidate quality is the goal of educator preparation and that the EPP provides support services (such as advising, remediation, and mentoring) in all phases of the program so candidates will be successful.
Standard 4: Program Impact
The provider demonstrates the effectiveness of its completers’ instruction on P-12 student learning and development, and completer and employer satisfaction with the relevance and effectiveness of preparation.
Standard 5: Quality Assurance System and Continuous Improvement
The provider maintains a quality assurance system that consists of valid data from multiple measures and supports continuous improvement that is sustained and evidence-based. The system is developed and maintained with input from internal and external stakeholders. The provider uses the results of inquiry and data collection to establish priorities, enhance program elements, and highlight innovations.
After close consideration, we concluded that CAEP's standards for educator preparation do not hinge on in-person learning or in-person administrative meetings. In my role as assessment coordinator, I found this quite helpful, as it gave me leverage to continue convening faculty for our regular data reviews and empowered me to explore new technologies. We jumped into new options such as Microsoft Teams virtual meetings and online chat functions; although not fully confident, we were knowledgeable enough to get started.
Meeting Structures for Quality Assurance
It has been my experience that assessment work in higher education has the greatest impact when it is done in collaboration with faculty members. This has held true even in the virtual environment. To support faculty engagement with assessment, I rely on meeting structures such as monthly CAEP Committee meetings and our biannual faculty assessment retreat. These organizational structures help facilitate data discussions and set the expectation that assessment and accreditation are a shared endeavor for us to participate in together.
Our CAEP Committee is composed of the program directors of each of our CAEP-accredited programs, our assessment administrator, and me as facilitator. By having regularly scheduled meetings on the calendar, the message is clear that we will be engaging in data-informed discussions on a regular basis. This group serves as a resource for my work as an administrator, and it also supports the program directors' professional development as they become more comfortable interpreting data and understanding accreditation standards.
With the pandemic, the meetings shifted to be completely online. The regular meeting structures already in place on the academic calendar kept their slots and simply moved to Microsoft Teams. These meetings are integral to data sharing and assessment, so I was determined to keep the pace even with the new platform. They provide regular feedback loops for faculty to review data for program improvements. What was unexpected was that the new format supported even stronger democratic meeting structures by enabling wider participation. Those who had family, teaching, or commuting conflicts were often now able to log in from home. It also meant that faculty teaching in our online School Counseling program, who live out of state, were able to attend without traveling to campus each month.
In these meetings, faculty and staff are given updates on our collective accreditation timeline leading to our next report, as well as reminders about the annual calendar of data collection and analysis. Data can come in the form of completer, alumni, and employer surveys, course-embedded rubrics, clinical placement evaluations, and any number of other formats.
Data Quality
A requirement of CAEP accreditation is that all assessments must achieve validity and inter-rater reliability. Keeping track of validity and reliability studies for multiple programs can be an overwhelming task, depending on how many teacher preparation programs a college or university offers. Any time there are substantive revisions to an assessment tool, the process must start over, with a validity study first and a reliability study after that.
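To make the inter-rater reliability requirement concrete, one widely used agreement statistic (offered here purely as an illustration; this chapter does not specify which measure our quality assurance system relies on) is Cohen's kappa, which compares the observed agreement between two raters with the agreement expected by chance:

\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]

where \(p_o\) is the proportion of candidate work samples on which the two raters assign the same rubric level and \(p_e\) is the proportion of agreement expected if the ratings were independent. Values near 1 indicate strong inter-rater reliability, while values near 0 indicate agreement no better than chance.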
Pre-pandemic, our approach had been to engage faculty and school partners in this work at in-person meetings. Our most common choice was an advisory committee meeting or a department meeting, where we would distribute paper copies of an assessment accompanied by a printed questionnaire to be completed in real time during the meeting. Working together in person created casual opportunities to learn from each other. For example, if someone raised a question, the other participants would hear the response, which helped raise awareness for the whole group. This format made the teaching and learning a shared experience among the validity study participants and highlighted the expertise each of us brought to the work. Face-to-face interaction presented a welcome break from the usual routine, with a more relaxed atmosphere, and allowed us to come together as a community to have a shared experience.
With in-person meetings no longer an option for the time being, I had no choice but to shift the process to a virtual format. I converted our paper forms to a Qualtrics survey, worked with our program directors to develop contact lists of qualified individuals for each assessment, and hoped for the best. My original plan was to conduct a minimum of one or two studies per program, per semester, but I was not sure we could accomplish this without our tried-and-true in-person meetings.
A few unexpected outcomes came out of this initial approach. At first, it seemed a great opportunity to tap into a wider panel of experts who did not need to be tied to an existing committee. It was now possible to include a mix of stakeholders who may not previously have been available for an in-person meeting because of scheduling conflicts or geographic distance. However, two problems became apparent: response rates and comprehension.
Response rates: With so much work taking place online, I found that it was not always easy to get a response. People reported being fatigued by the sheer volume of email messages in their inboxes, and response rates suffered as a result.
Comprehension: Left to interpret the instructions on their own, some participants misunderstood what was expected of them, and in some cases misunderstood the assessment tool itself. Whereas the previous in-person format always included time for questions and answers to build shared understanding, the email format did little to help those who did not feel comfortable speaking up or did not realize they had misunderstood. With online studies continuing for the foreseeable future, I knew I needed to address both issues.
Our Graduate Special Education program found that its validity respondents did not understand what they were supposed to do and, more importantly, did not understand the assessment tool they were supposed to be evaluating. Unfortunately, we did not realize this until the first study concluded and the responses indicated confusion and misunderstanding. To prepare for a second study, I worked closely with the program director to make sure that all instructions were explicit and clear. We updated and elaborated the introductory text, including the descriptions of the response choices. I made sure to include the assignment instructions, along with clear directions to the panel as to what they were reading and what their task was for the validity study. Perhaps equally important, I spent time revising the email invitation to explain the study more fully. I had been hesitant to add more text to the email, given the email fatigue I suspected we were all under, but in keeping it brief I had neglected to give the panel enough detail to do their job. The results of our second study were much more meaningful, and the responses demonstrated that the participants understood the assignment at hand.
In another example, our School Psychology program struggled with response rate. Our panel of experts seemed overloaded by other responsibilities, and response rates dropped precipitously. The first study resulted in 18 responses, but the assessment did not achieve validity and therefore needed to be revised and sent out again. The second study resulted in only two responses, even after repeated reminders. At that point I knew we needed a different approach. In this case, instead of revising the instructions or the email messaging, we focused on strategies to increase the response rate. We first created a “validity study group” of individuals who were invited to participate in this work to help the program meet its accreditation demands. In my messaging to the group, I made sure to emphasize what a special group they were and how instrumental their expertise would be to the life of the program. To further sweeten the situation, we were able to offer small gifts left over from student events, such as t-shirts and padfolios. We also chose to send four separate assessments at one time and presented them as our spring validity study. With these changes, the response rate skyrocketed to 100%, and we were able to collect the feedback we needed to achieve validity.
Limitations and Unexpected Benefits of the Virtual Format
The shift to working with faculty in a virtual format was sudden and unavoidable, and with that change came concerns. The most noticeable and immediate limitation of the virtual implementation was the lack of real-time interaction to help participants understand the validity study. The studies were now being conducted exclusively by Qualtrics survey link, asynchronously. An additional caveat was that the participant groups were expanded during the pandemic, meaning that people unfamiliar with the process were often left on their own to complete the task. I was naturally concerned about the impact on the quality and richness of the collected responses.
In the past, because the studies were conducted in a group setting, there was ample time to discuss the assignment for all to hear. Even the quietest people in the room benefited from listening to the questions of others. Now we had to rely on the written word to provide instructions. The first couple of studies were fraught with low ratings, and I began to wonder about the data quality. Working closely with the program director, we developed a new set of instructions that elaborated on the expectations for the assignment. I suspected that people may not have understood the distinction between the different response choices, given the overall low ratings collected. I wrote the instructions in such a detailed way that I feared the participants might be offended. To my relief, responses were much more favorable in the second study, and the comments affirmed that folks were grateful for the guidance. I was now able to determine which areas were genuinely of concern and which were merely a product of confusion. The virtual format certainly had its limitations and presented concerns for data quality, but with diligence and patience in refining the approach, I believe it is possible to collect reliable data.
As time went on and we adjusted to the new processes, I began to witness several unexpected benefits of the virtual format. Let's look at our biannual faculty assessment retreats as an example. Twice a year, once in the fall semester and once in the spring semester, we convene our college faculty and staff in an assessment retreat to share data and discuss ideas for program improvements. In past years, these retreats could be quite loud, as we gathered physically around tables, usually in one large room. Paper folders were provided with numerous printed handouts displaying the assessment data for each academic program.
With the shift to a virtual format, we no longer had that same in-person social scene. We were not able to share a meal together or have the impromptu side conversations that used to happen as we walked past each other on the way to our seats. However, we found many unexpected benefits to conducting a large-scale meeting in an online format. For example, it became easier for group work to take place simultaneously, without the noise factor that was present in person. As one faculty member shared, they preferred online meetings because they could finally hear each speaker, since they could control the volume on their computer. We also found it easier to look at the same data sets: rather than shuffling through multiple packets of paper, one presenter could share their screen, enabling the entire group to focus on the same data at the same time. With the aid of the computer screen, we could even enlarge the type and point to it using various software tools.
As part of our quality assurance system, and to meet accreditation requirements, we also share data with program advisory groups to connect with external stakeholders such as internship supervisors, employers, and alumni. These folks do not work at our university and usually hold full-time jobs outside of higher education, which can make convening in-person meetings a challenge. With the virtual format, it became easier to convene these meetings, as participants no longer had to travel to our physical campus. Although we were concerned at first that virtual meetings might feel impersonal, they actually proved more inclusive of people who live geographically farther away or who have childcare responsibilities and need to be home.
Socio-Emotional Support for Administrators
The research on assessment and accreditation all too often focuses on the data: quality, assessment design, analysis, and reporting. The pandemic brought to light the importance of paying attention to the wellbeing of everyone involved in education (Eadie et al., 2021; Van der Bijl-Brouwer and Price, 2021), but little research has examined assessment coordinators and other higher education administrators tasked with assessment and accreditation. Just as the wellbeing of students and educators has been identified as essential to positive learning outcomes (Hill et al., 2021), it is my perspective that the wellbeing of education administrators is similarly important. After all, administrators contribute to the culture of their work environment, and if they are struggling, it is only natural that their work relationships will struggle too.
During the pandemic, when interactions took place mostly on virtual platforms such as Zoom or Microsoft Teams, it became even more important for assessment coordinators to connect with colleagues to break the isolation. Research has shown the socio-emotional value of connecting with others through professional development and other professional connections (Worrall et al., 2021). Professional groups, such as state affiliate chapters of the American Association of Colleges for Teacher Education or special interest groups within national associations such as the American Educational Research Association, provide a place for sharing successes, challenges, and questions, and for camaraderie among assessment coordinators through regular meetings and committee work.
During the pandemic, all regular activities within my professional networks kept pace, including monthly meetings, committee work, conference planning, and other opportunities for professional development and volunteering, which created a much-needed connection and a break from the isolation of working from home. Pre-pandemic meetings may have had a social element to them, depending on the group, but not always. Meetings during the pandemic almost always took on dual roles: they not only enabled ongoing learning and support, but also provided a venue (1) to cope with the stress of working during a pandemic, and (2) to get to know professional peers on a more personal level, as the conversation would often touch on pets, houseplants, cooking, or other hobbies. This socio-emotional support is always a benefit, but it became particularly important as we became more closed off from the world during times of quarantining and office closures.
Discussion
I believe that to support quality P-12 educational outcomes, we need to support the quality preparation of teachers and school building leaders. Quality educator preparation is rooted in assessment and accreditation, as these are the continuous improvement mechanisms that encourage ongoing conversations informed by data (Darling-Hammond, 2020; Tolo et al., 2020). How educators are engaged, educated, and taught to think deeply about data becomes the foundation for how they approach their work with students.
It is my perspective that higher education administrators have an ethical responsibility to continuously assess how effective our educator preparation programs are, because we are all part of the larger educational system. I believe that education is the basis on which all change can take place (Schofer et al., 2021). The shift to virtual meeting structures and learning does not have to slow us down. Virtual platforms provide a more equitable way for people to participate without traveling. They can be easier for those who need auditory assistance, because without an in-person group they can listen to others talk with little or no noise interference. Enlarging screen images makes it easier for those who need visual assistance.
Programmatic assessment offers all involved a chance to look at data and think deeply about the effectiveness of the program. It is in this time of self-reflection and revisiting of educational goals that we allow ourselves to ponder what continuous improvement is needed to support educator preparation programs and the faculty, program directors, and department chairs who are tasked with engaging with the curriculum. I would argue that it is not only possible to continue assessment and accreditation work in a virtual learning environment; it is essential, as it helps faculty and staff continue to grow and change their programs in new and innovative ways, which is the very definition of continuous improvement.
Data Availability Statement
The raw data supporting the conclusions of this article will be made available by the author, without undue reservation.
Author Contributions
The author confirms being the sole contributor of this work and has approved it for publication.
Conflict of Interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s Note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Cochran-Smith, M., Grudnoff, L., Orland-Barak, L., and Smith, K. (2020). Educating teacher educators: international perspectives. New Educ. 16, 5–24. doi: 10.1080/1547688X.2019.1670309
Darling-Hammond, L. (2020). Accountability in teacher education. Action Teach. Educ. 42, 60–71. doi: 10.1080/01626620.2019.1704464
Eadie, P., Levickis, P., Murray, L., Page, J., Elek, C., and Church, A. (2021). Early childhood educators’ wellbeing during the COVID-19 pandemic. Early Child. Educ. J. 49, 903–913. doi: 10.1007/s10643-021-01203-3
Hill, J., Healey, R. L., West, H., and Déry, C. (2021). Pedagogic partnership in higher education: encountering emotion in learning and enhancing student wellbeing. J. Geogr. High. Educ. 45, 167–185. doi: 10.1080/03098265.2019.1661366
Schofer, E., Ramirez, F. O., and Meyer, J. W. (2021). The societal consequences of higher education. Sociol. Educ. 94, 1–19. doi: 10.1177/0038040720942912
Tolo, A., Lillejord, S., Florez Petour, M., and Hopfenbeck, T. (2020). Intelligent accountability in schools: a study of how school leaders work with the implementation of assessment for learning. J. Educ. Change 21, 59–82. doi: 10.1007/s10833-019-09359-x
Van der Bijl-Brouwer, M., and Price, R. (2021). An adaptive and strategic human-centred design approach to shaping pandemic design education that promotes wellbeing. Strateg. Des. Res. J. 14, 102–113. doi: 10.4013/sdrj.2021.141.09
Keywords: assessment and education, accreditation, faculty engagement, virtual learning and education environment (VLE), higher education, teacher education and development
Citation: Kline A (2022) Programmatic Assessment in a Virtual Learning Environment: Supporting Faculty Engagement for a Successful Quality Assurance System. Front. Educ. 7:821123. doi: 10.3389/feduc.2022.821123
Received: 23 November 2021; Accepted: 14 March 2022;
Published: 07 April 2022.
Edited by:
Vicki S. Napper, Weber State University, United States
Reviewed by:
Jessica To, Nanyang Technological University, Singapore
Copyright © 2022 Kline. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Amy Kline, amy.kline@shu.edu