- 1College of Staten Island, Staten Island, NY, United States
- 2The Graduate Center, The City University of New York, New York, NY, United States
- 3Tech Kids Unlimited, New York, NY, United States
- 4Education Development Center, Waltham, MA, United States
- 5Hunter College (CUNY), New York, NY, United States
- 6University of California, Davis, Davis, CA, United States
- 7Cornell University, Ithaca, NY, United States
- 8New York University, New York, NY, United States
- 9Carnegie Mellon University, Pittsburgh, PA, United States
- 10The Child Study Center, Yale School of Medicine, Yale University, New Haven, CT, United States
- 11Pace University, New York, NY, United States
Introduction: Autistic people face systemic barriers to fair employment. Informal learning may promote the self-determination that transition-age autistic youth need to overcome and/or transform these barriers. This report focuses on the iterative process of developing video game design workshops guided by feedback from autistic students about the instructional strategies they found engaging. This study is part of a three-year NSF-funded program of research that seeks to empower autistic youth to move toward successful careers by teaching educators how to guide them more effectively.
Methods: In the summer of 2021, educators at Tech Kids Unlimited (TKU), an award-winning, NYC-based, not-for-profit education program, collaborated with researchers, including autistic students, to iteratively develop and assess two online game design workshops for transition-age autistic youth. Participants selected which workshop they were available for (Workshop 1: n = 18; M age = 16.72 years; Workshop 2: n = 16; M age = 16.56 years). Students in Workshop 2 had more varied support needs and were less motivated to learn video game design than students in Workshop 1. Students completed assessments before and after each workshop and rated their interest in specific workshop activities after each activity. Guided by data from Workshop 1, we revised instructional strategies before conducting Workshop 2.
Results: We found little evidence for our hypothesis that attentional style would impact educational engagement. However, video game design self-efficacy and self-determination were often positively associated with engagement. Talks by two industry speakers, one of whom was autistic, were among the highest-rated activities. As hypothesized, video game design self-efficacy and self-determination (and, unexpectedly, spatial planning) improved from pre- to post-test following Workshop 1. Despite our efforts to use what we learned in Workshop 1 to improve Workshop 2, Workshop 2 did not lead to significant improvements in outcomes. However, students highlighted instructional strategies as a strength of Workshop 2 more often than they had for Workshop 1. Educators highlighted the importance of group “temperature checks,” individualized check-ins, social–emotional support for students and educators, and fostering a positive atmosphere.
Discussion: Findings suggest that interactive multimodal activities, stimulating discussions, and opportunities to engage with neurodivergent industry professionals may engage and empower diverse autistic youth.
Introduction
Autistic people around the world face systemic barriers to obtaining educational opportunities and jobs that are well-matched to their skills and interests (e.g., Shattuck et al., 2012; Hedley et al., 2017; Frank et al., 2018; Black et al., 2020; Lallukka et al., 2020). Rather than empowering autistic young people to be agents of change in systems that are clearly broken, existing interventions for transition-age autistic youth often fail to provide opportunities for them to critique and shape even their own educational experiences (McDonald and Machalicek, 2013). Interventions tend to prioritize remediating difficulties, often overlooking the strengths of autistic young people, and rarely focus on helping autistic youth develop self-determination skills (Bottema-Beutel, 2023). This is surprising as self-determination, or the ability to act as a causal agent in one’s life, is a key predictor of educational and employment success for people with and without disabilities (Wehmeyer, 1992; Shogren et al., 2015; Burke et al., 2020).
While often conceptualized, as it is above, as a characteristic of individuals, the term self-determination was first used to describe collective advocacy for the right to shape community destinies, including by indigenous people and by people with disabilities (Ward and Meyer, 1999; Kuokkanen, 2019). In the late 1990s, an autistic-led self-determination movement, the neurodiversity movement, emerged (Ward and Meyer, 1999; Kapp, 2020). The neurodiversity movement challenges deficit-oriented conceptualizations of autism and reframes autism and other forms of neurodivergence as valuable minority identities that need no normalization. The neurodiversity movement initially spread largely online, fueled by a common, although certainly not universal, autistic proclivity for computing (Murray and Lesser, 1999; Kapp et al., 2013; Gillespie-Lynch et al., 2014). In the current paper, we explore informal, technology-focused education as a strategy to foster self-determination and other skills and attitudes that could help autistic young people transform themselves, their learning environments, and, potentially, society.
Consistent with the tenets of the neurodiversity movement, autism is associated with both difficulties (e.g., bidirectional miscommunications with non-autistic people; Milton, 2012) and strengths, including honesty, attention to detail, and the ability to recognize and create patterns (e.g., Mottron et al., 2006; Baron‐Cohen, 2009; Cope and Remington, 2022).1 Even characteristics defined as difficulties by the diagnostic criteria for autism can confer strengths. For example, focused interests have long been recognized as powerful motivators that can help autistic people develop expertise and achieve meaningful roles in society (Kanner, 1971; Grandin and Duffy, 2008; Boven, 2018; Zeldovich, 2018). Opportunities to engage with one’s special interests may help autistic youth develop self-determination and other skills (Chen et al., 2022; Jones et al., 2023). However, formal education often provides insufficient opportunities for autistic people to engage with their interests (Patten Koenig and Hough Williams, 2017).
Mirroring the importance of the Internet for the early spread of the neurodiversity movement, informal, technology-focused educational programs have emerged as key spaces where autistic youth can learn self-determination and other skills that will help them succeed in adulthood (Dunn et al., 2015; Deiner et al., 2016; Lee et al., 2020; Martin et al., 2020; Begel et al., 2021; Chen et al., 2022; Moster et al., 2022; Jones et al., 2023). Emerging evidence suggests that such programs can promote technology-related self-efficacy, or autistic students’ beliefs in their ability to master technology-related goals. Whether programs can impact not only beliefs, but also tech and employment-related skills, such as computational thinking, remains an open question. Computational thinking (CT) is the ability to solve problems using abstraction, e.g., by creating models to solve problems in a way that a computer could carry out, running these models (with or without a computer), and revising the models to improve them (Ulrich Hoppe and Werneburg, 2019). CT has been identified as a key skill for autistic (and indeed all) students to learn as it provides a foundation for adaptation to diverse contexts, effective collaboration, and even social–emotional development (Oswald et al., 2023). However, clearly defined assessments of the potential impacts of informal, technology programs on the CT skills of autistic youth have, to our knowledge, not been conducted. Evidence that existing programs promote self-determination also remains weak, e.g., improvements in only one subdomain of a broader self-determination scale or isolated quotes suggesting improvements (Chen et al., 2022; Jones et al., 2023). Existing research has also neglected to attend to the original definition of self-determination described above, collaborative advocacy for community rights, as programs have either been developed without any input from autistic people or have been unconvincingly described as participatory (without providing any details about participatory processes).
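For illustration only, the model–run–revise cycle at the heart of CT can be sketched in a few lines of code. This is a hypothetical example of ours, not drawn from the cited sources or from our workshop curriculum:

```python
import math

# Computational thinking in miniature: abstract a problem into a model,
# run the model, inspect the result, and revise the model to improve it.

def estimate_tiles_v1(room_w, room_l, tile_size):
    """First model: how many square tiles cover a rectangular room?"""
    return (room_w * room_l) / (tile_size ** 2)

def estimate_tiles_v2(room_w, room_l, tile_size):
    """Revised model: partial tiles must still be purchased whole."""
    return math.ceil(room_w / tile_size) * math.ceil(room_l / tile_size)

print(estimate_tiles_v1(10, 12, 3))  # 13.33... -> running the model exposes a flaw
print(estimate_tiles_v2(10, 12, 3))  # 16 -> the revised model corrects it
```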
In the research described in this report, researchers at CUNY and NYU, a participatory team of autistic students, and educators at an award-winning NYC-based, not-for-profit, informal, technology education program, Tech Kids Unlimited (TKU), sought to learn from autistic young people how to better engage them in informal, technology-focused learning opportunities.2 Since 2009, TKU has provided hands-on, project-based learning opportunities to youth with diverse disabilities in an informal, out-of-school-time environment. The research described in this report represents the first set of summer workshops in a three-year NSF-funded program of research that seeks to help autistic youth move toward successful careers by helping educators guide them more effectively.
Informal learning and universal design: opportunities and challenges
Informal learning opportunities, like those offered at TKU, are increasingly recognized as invaluable for helping people with disabilities overcome pronounced underrepresentation in both Science, Technology, Engineering, and Math (STEM) and non-STEM fields (e.g., Melber and Brown, 2008; Burgstahler and Chang, 2014; Fisher, 2017; Wehman et al., 2020). Extracurricular programming may help youth develop employment-related skills through active exploration. Active planning is needed to ensure that extracurricular activities are engaging and accessible for all learners (Melber and Brown, 2008). Therefore, it is not surprising that Universal Design (UD), a theoretical framework that guides active planning of instruction, has been recommended to help students with disabilities overcome pronounced underrepresentation in STEM fields (Melber and Brown, 2008; Dunn et al., 2012; Moon et al., 2012).
The central insight of UD is that the burden of adaptation can be placed on curricula rather than on learners by developing flexible curricula that include multiple paths to represent content (options for perception, communication, and comprehension), multiple ways to act upon ideas (options for action, communication, and executive functions) and multiple ways to engage (options for recruiting/sustaining interest and self-regulation; Rao and Meo, 2016; CAST, 2018). The goal of Universal Design for Learning (UDL), a branch of UD, is to develop “expert learners” who are knowledgeable, strategic, and motivated. CAST provides 31 checkpoints to help instructors build curricula aligned with the principles of UDL.
Although UD is increasingly described as effective in publications and legislation, it has been neither sufficiently well-defined nor sufficiently well-evaluated to merit this claim (Smith et al., 2019). For example, limited research has examined a central claim of UD, that adaptations to support one type of learner (e.g., autistic students) are also beneficial for a different type of learner (e.g., students with ADHD; King-Sears et al., 2015; Ok et al., 2017). While one study showed that a UD curriculum promoting vocabulary acquisition using multimedia podcasts was beneficial for high school students with and without learning disabilities (Kennedy et al., 2014), another study found that middle school students with learning disabilities did not demonstrate heightened learning from video games and text-based curricular supplements that were designed to match UD checkpoints (Marino et al., 2014). Importantly, the design of Kennedy and colleagues’ effective multimedia curriculum was guided by principles of UD and Mayer (2008) design principles for multimedia instruction.
Mayer’s design principles, rooted in systematic assessments of students without known disabilities, are sometimes counterintuitive. For example, Mayer found that pairing animation with spoken narration is more effective than either alone, but that adding redundant on-screen text decreases effectiveness by distracting learners (the redundancy principle). This finding suggests that the UD principle of providing multiple paths to representation might be tempered based on how modalities interact with one another and the attentional needs of specific learners. Early research in the field of computer-assisted instruction revealed that motivationally-adapted instructional strategies, wherein the number of supports was aligned with the needs of students, were more effective at eliciting attention and motivation than instruction paired with all available supports or no extra supports (Song and Keller, 2001). Like Mayer’s work, this suggests that providing all available supports (as some interpretations of UD suggest) disadvantages some learners by distracting and/or annoying them. Our research expands upon these findings from youth without disabilities by examining whether instructional redundancy increases engagement for autistic youth who experience difficulties focusing.
Autistic people vary greatly in their attentional skills. While autism is often associated with heightened focus, which can manifest as challenges shifting attention (Murray et al., 2005; Taurines et al., 2012; Lawson et al., 2015; Bradshaw et al., 2022), autism also commonly co-occurs with ADHD, which is associated with difficulties focusing attention (Rong et al., 2021). These associations are further complicated by interest, as hyperfocus (or becoming completely absorbed in tasks one is interested in) is common in both autism and ADHD (Hupfeld et al., 2019; Ashinoff and Abu-Akel, 2021).
Emerging research suggests that attentional differences among autistic youth can impact their learning outcomes (e.g., May et al., 2013; McDougal et al., 2020). Researchers have speculated that attentional differences may intersect with instructional practices to impact how autistic youth engage with learning opportunities (Mallory and Keehn, 2021). However, very little research has examined how environmental factors, including instructional practices, contribute to autistic students’ engagement (Keen et al., 2016, 2021; McDougal et al., 2020). No research, to our knowledge, has examined our hypothesis that the degree to which different instructional practices are engaging for autistic students varies as a function of their attentional skills.
Our approach to evaluating this hypothesis is consistent with Mayer’s emphasis on a “two-way street” for designing curricula, wherein adaptation is guided by both theory and students’ responses. Although innovative, Mayer’s educational principles were derived without a focus on students with disabilities. When the lead author (KGL) wrote to Mayer to ask for access to his assessments, he informed us that his materials do not “run on today’s computers” but added, “I have been interested in seeing how the principles apply to learners with ADHD or autism, so I am glad to see that you are taking up that challenge” (email correspondence, 6/2019).
By examining how autistic youth respond to varied instructional modalities in our video game design workshops, we seek to develop “diversity blueprints,” or instructional strategies designed to engage people who vary in their ability to focus. Although “diversity blueprints” prepare curricula to be truly accessible for varied learners, they are often missing from existing curricula. Indeed, Edyburn (2010; p. 36) stated, “I fear that the promise of UDL will not be achieved until we begin to focus on diversity blueprints…when designers assume that everyone is like them…the products they create will meet the needs of a narrow range of users.”
Given that it is difficult for people to understand the needs and perspectives of people who are different from them (Milton, 2012), participatory approaches, like the approach described in this report, may be essential for creating effective diversity blueprints for autistic students. Crucially, we are not proposing to “retrofit” existing instructional strategies to “accommodate” youth with specific disabilities. Instead, we use student feedback to make instructional strategies increasingly engaging for youth with diverse attentional profiles and interests. This approach of assessing instructional strategies and iteratively learning from student feedback is consistent with recommendations from UD experts that UD be recognized as an iterative process rather than a checklist of options (Meo, 2008; Smith et al., 2019). To learn from students, we must create conditions wherein they feel empowered to teach us by promoting beliefs about their abilities, including self-determination (discussed above) and self-efficacy (explored below).
Emotional engagement as a path to self-efficacy
Self-efficacy, or the belief that one can demonstrate mastery and attain one’s goals in specific domains, is a core aspect of human agency that gives people the “staying power” to advocate for themselves and overcome discrimination and other obstacles (Bandura, 1989). According to social cognitive career theory (SCCT), self-efficacy shapes career outcomes (Nauta and Epperson, 2003; Lent et al., 2010, 2015; Navarro et al., 2014). For example, self-efficacy is critical to the persistence of underrepresented youth in the STEM pipeline (Tellhed et al., 2017; Falco and Summers, 2019). Attempts to correct inequalities in the STEM pipeline often focus on providing under-represented students (typically racial/ethnic or gender minorities) with access to experiences that promote self-efficacy (e.g., mentors like them; Chemers et al., 2011).
Bandura theorized that people acquire self-efficacy beliefs from four sources: emotional arousal, performance outcomes, vicarious experiences, and social persuasion. In our research, we seek to identify strategies educators can use to tailor their instructional strategies to the needs of their students to promote optimal emotional arousal. Calls to broaden and enhance engagement with STEM opportunities highlight that emotions shape engagement and achievement (Murphy et al., 2019). However, research focused on STEM engagement often overlooks students’ positive emotional responses, like their interest in activities, which is the primary engagement measure in the current study. A study using text-based experience sampling revealed that adolescents without known disabilities express more positive emotions about academic activities when educators help them pay attention, understand, and visualize ideas (Goetz et al., 2013). Research tends to rely on purely text-based assessments of emotion, which may limit accessibility for diverse youth. In the current study, we adapted a picture-based engagement metric, initially developed in collaboration with autistic scholars (Riccio et al., 2020), to learn from autistic youth which instructional strategies they find engaging.
Aims and hypotheses
Strategies to help autistic youth engage with learning opportunities as empowered learners are sorely needed to prepare them to face the systemic barriers that make it difficult for autistic people to achieve their educational and career goals (Shattuck et al., 2012; Black et al., 2020). This report focuses on the first two in a series of workshops that we have been developing iteratively over three years. In Summer 2021, we first conducted Workshop 1 and then Workshop 2 a few weeks later. Students completed pre- and post-tests and rated their interest in specific workshop activities as soon as each activity concluded. In between Workshop 1 and Workshop 2, we revised instructional strategies based on responses to Workshop 1.3
The core hypothesis motivating the Summer 2021 research was that students’ engagement with activities in our game design workshops would be enhanced if instructional strategies were flexibly designed for youth with different attentional skills. Attention is a vital foundation for learning that effective instructional design can enhance (Song and Keller, 2001; Fredricks et al., 2004). Our research is rooted in, and seeks to improve, both UD and Mayer’s (2008) principles for effective multimedia instruction. Although UD is often endorsed to promote STEM learning among students with disabilities (Moon et al., 2012), our research is unique in its iterative approach to adapting instructional strategies to better engage neurodivergent students, guided by analysis of how interested students are in specific activities.
Our research aims and hypotheses were to:
1. Identify strategies to engage autistic youth in informal STEM learning opportunities that are well matched to their diverse attentional profiles.
Hypothesis 1: People with more focused attention will prefer unimodal instruction, and people with less focused attention will prefer multimodal instruction.
2. Examine if engagement with game design workshops is associated with increased STEM self-efficacy.
Hypothesis 2: Engagement with our game design workshops will be associated with increased STEM self-efficacy (i.e., video game design and technology self-efficacy).
3. Examine if engagement with game design workshops is associated with increased self-determination.4
Hypothesis 3: Engagement with our game design workshops will be associated with increased self-determination.5
Methods
Participatory processes
We received funding for this project in September 2020. We pre-registered the hypotheses in our funded NSF grant proposal on June 17th, 2021 (Open Science Framework: https://osf.io/4pvq7/). In between obtaining our grant and pre-registering the hypotheses in the grant, we collaborated with a participatory team of neurodivergent high school, college, and graduate students to develop additional hypotheses and to refine assessments and curricular strategies. Our study has become increasingly participatory over time as we have iteratively improved our practices to make them more equitable and transparent. These efforts have been guided by an autistic member of our advisory board who is also a member of AASPIRE, the first research collective to conduct truly participatory autism research. We have adapted guidelines developed by AASPIRE to address power dynamics and make space for everyone to be heard and respected (e.g., using multimodal communication, distributing a record of meetings that includes accessible summaries as well as more complete notes, and utilizing a number-based method for voting and documenting consensus; Nicolaidis et al., 2019). The scope of decisions guided by the participatory group has expanded over time. The group typically meets monthly, often via two meetings that cover the same content to ensure all interested members can attend. The number of participatory group members varies as students can join and leave as they wish. Participatory group members receive a $25 gift card for each meeting they attend. Six co-authors of this report are members of our participatory group.
We held four sets of participatory meetings ahead of Workshop 1. Our first participatory meeting, in March of 2021, included eight neurodivergent students: three who had previously attended TKU, four who were part of a participatory mentorship program at CUNY, and one who was a Ph.D. student. The first participatory meetings focused on instructional strategies in the workshop and strategies for measuring engagement. The April meeting focused on improving our participatory practices and reviewing pilot measures. In the May meeting, we used pilot data to improve measures, with a focus on providing guidance to the autistic artist and co-author, JDS, who drew our engagement measure. Concerns were raised about engagement looking different for people with different neurotypes, which led us to add questions about what engagement looks like for each student. We also discussed the limited time planned in Workshop 1 for career exploration. Participatory group members were in favor of including guest talks by autistic professionals in the curriculum. For example, JDS said, “The presence of a successful autistic game dev says far more for visibility, self-advocacy, and the sheer possibility of that career than words can truly capture.” We would learn over time that this was a very prescient insight. The June meeting focused on further revisions of our engagement measure and discussions of how to teach game design in the workshop, e.g., whether the instructors should use both an image-based (Flowlab) and a text-based (Twine) platform. While some group members did not feel knowledgeable enough about the platforms to comment, others recommended both.
After Workshop 1 concluded, the participatory group met to review data and provide recommendations for Workshop 2. We brainstormed how to incorporate self-advocacy and peer learning more effectively in Workshop 2. In meetings after Workshop 2, we continued to reflect on what we learned. Members have been invited to help with qualitative coding and/or to co-author presentations and papers if interested. Participatory meetings are ongoing.
Pilot assessments
We conducted online pilot assessments of survey measures with 16 currently-enrolled TKU students, who received a $25 gift card for their work. After each set of measures, pilot participants were asked to rate whether they understood what the questions were asking on a 5-point scale and to share, “If you wanted to change any of the questions you were asked, what would you change?” We later adapted this approach to provide study participants with opportunities to critique questions too. After completing the survey, pilot participants completed Miro boards6 to share how they would improve the survey. The completed boards were later used to guide discussions about revisions with the participatory group.
The boards were structured in terms of strengths, weaknesses, and other comments about our measures of engagement, Universal Design-aligned instructional practices, and computational thinking, as well as more general comments. Some students enjoyed the engagement-related questions (e.g., “The questions were questions that were nicely phrased (in my opinion)”), while others found them superficial (“I found them two dimensional, not much depth. There wasn’t an option for the midpoint of attention and inattention.”) or questioned what they could teach us (“I felt like even though I knew why I was answering the questions, I didn’t really understand why, if that makes sense.”). We found the first critique very insightful but did not add an intermediate prompt, as our survey was already quite long. In response to the latter point, we added an explanation of why we were asking about engagement: “so we know what interest looks like for you.” The computational thinking measure also elicited mixed feedback, e.g., “They were challenging, a good exercise for my puzzle loving brain.” versus “Personally, they were too easy” and “Answering too many questions.” To address the latter point, we reduced the number of questions, as described below. In retrospect, we should also have addressed the former critique, but did not due to limitations in the range of the measure. Questions about Universal Design-aligned teaching practices also elicited mixed feedback, e.g., “I like questions I can input my own perspective into.” versus “It was a bit challenging directly after a bunch of brain challenges.” and “I don’t understand the purpose of these specific questions.” In response to the critique about ordering, we moved the brain challenges (computational thinking) to the end of the workshop surveys. Recommendations to improve phrasing, provided in the Miro boards and/or individual surveys, were used to improve accessibility.
Participant recruitment and selection
Workshop participants and their families selected which workshop they were available for. They were not randomly assigned to a workshop. We had not planned to randomize participants to different workshops in our grant proposal as doing so would have imposed substantial recruitment challenges.
We distributed recruitment fliers inviting autistic participants between 14 and 21 years of age to participate in one of two free online game design workshops at TKU as part of an NSF-funded research study. Students in this age range, in high school or college, were eligible to participate. Fliers were distributed widely to (1) education, disability and/or technology listservs, (2) organizations across the United States that focus on supporting autistic students and their families, and (3) our personal networks. The workshops were held from 1 to 4:30 p.m. on July 12–23 (Workshop 1) and August 9–20 (Workshop 2). Additional inclusion criteria were that participants must be able to access the workshop via a computer rather than a tablet and must not have attended TKU previously (so past experiences at TKU would not impact engagement). Participants were also required to be available for 2-h assessments before and after the workshop and during the entirety of either Workshop 1 or 2. Potential participants were informed that they would receive one $50 gift card for attending the workshop and completing pre- and post-test assessments. Recruitment materials targeted parents because most potential participants were minors and TKU, which was leading recruitment, has had success targeting parents.
Screening process
Sixty-two parents filled out a parent screener. Three were not eligible because their child was not autistic. Five were excluded because they were unsure if they had attended TKU before. We prioritized accepting participants from families who reported lower annual income and/or who were racial/ethnic minorities.
Learning objectives
The learning objectives of our workshops were for students to:
(1) Learn game design concepts and practice game design tasks,
(2) Learn about ways that games can spread awareness and address social issues,
(3) Practice social and emotional skills throughout the duration of the workshop,
(4) Learn about careers in the field of game design.
Participants
Student participants completed pre- and post-test assessments before and after their workshop, consisting of a survey (via Qualtrics), an interview, and computerized assessments of cognition and attention. During each workshop, they rated their engagement with pre-selected (probed) activities, chosen to represent key domains that might influence engagement, also via Qualtrics. Parent participants completed pre- and post-test surveys.
Workshop 1
Twenty potential student participants (all male) initially enrolled in Workshop 1. One decided not to continue after attending the first two days; he felt that he did not have the needed skills. His mother had expressed concerns that he might need additional help during the pre-test interview, wherein he had difficulties answering open-ended questions. Another potential participant attended the workshop but did not complete the post-tests due to an unexpected family issue. Neither was included in the final sample, described below.
Eighteen student participants, average age 16.72 years (SD = 2.74; range: 13–21; we included one student who was younger than our planned minimum cut-off in an effort to prioritize intersectional representation), participated in Workshop 1. Two said they did not know their race/ethnicity, one of whose parents indicated that he was White/Caucasian. Based on combined student and parent responses, 10 students were White/Caucasian, two were Asian, two were Latine, two were Black/African-American, one was mixed race, and one did not know (no parent report). Nine parents reported an annual family income of $100,000 or higher (Table 1). Two families reported earning less than $25,000 a year, one reported earning $25,000–$49,000 a year, and one reported an annual income of $50,000–$74,000. The families who reported lower incomes were also racial/ethnic minorities.
Workshop 2
Twenty-one potential student participants (2 female; 19 male) initially enrolled in Workshop 2. Two potential participants required a caregiver. One, who answered “strongly agree” to every closed-ended survey question, attended the workshop every day. She refused the post-test interview but answered the question about how the workshop could be improved, “To do it in person.” The mother of the other student who required a caregiver filled out the survey on his behalf, reporting that he “does not have a lot of language.” He missed half of Workshop 2. During post-test, his mother reported that he would benefit from an in-person workshop with more individualized support.
Another potential participant, who used an app for help reading, attended only the first day and decided not to continue. Another potential participant, who was building his own computer and had a specific interest in electrical engineering, attended the first two days and a few minutes of the third day before deciding not to continue. He completed a post-test interview where he explained his decision, “…what I do is actual hardware and stuff like that. But they were just working on software for like modern computers and whatnot. And like, storytelling games, I’m not really into that sort of thing.” Another potential participant attended the first four days of the workshop and was then offered a job at his new university, which conflicted with the timing of the workshop. He did not continue with the workshop but did complete the post-test interview, where he shared, “the fact that my job started on the following week. Definitely took out like a huge chunk of what I felt I could have learned…. it was just bad timing. It’s nothing you guys could do about it.” When asked what he liked about the workshop, he said, “Well, learning how to use Twine was fun…I had something big I wanted to do, but it kind of got shot down. When I realized the site couldn’t really save it properly.” When asked what he did not like about the workshop, he said, “The fact that my games wouldn’t save really affected my motivation to be honest.” Therefore, it is possible that the job was not the only factor leading him not to continue. The two potential participants who required a caregiver and the three who did not complete the workshop were excluded from the final sample described below.
Sixteen participants completed Workshop 2. Their average age was 16.56 years (SD = 2.34; range: 14–21). Nine students were White/Caucasian, three were Latinx, two were Black/African-American, and two were mixed race. Seven of sixteen families reported an annual income of $100,000 or higher. One family reported less than $25,000 a year, two reported $25,000–$49,000, and three reported $50,000–$74,000. Again, families with lower incomes were also racial/ethnic minorities.
Missing and delayed data
Pre-test computerized cognition and attention data are not available for one participant due to a network error on the Cambridge Brain Sciences website. Two participants in Workshop 1 were accidentally administered the pre-test survey at post-test. This did not impact the scales used to assess change over time, as these were the same at pre- and post-test. However, it did impact ratings of what they felt the workshop helped them with and of the pedagogical supports available, as these questions were not included in the pre-test survey they received at post-test. Both were contacted to see if they would complete the missing questions for an additional gift card; one did so, but the other did not. Post-test interview data were not uploaded correctly for two participants, one in Workshop 1 and one in Workshop 2, and so are not available for these participants.
Pre-tests occurred within a week of the start of the workshop as planned. However, some delays occurred when scheduling post-tests, particularly after Workshop 2. All post-test interviews for Workshop 1 occurred within a week of the end of the workshop, except one that occurred eight days and one that occurred nine days after the workshop ended. Three post-test assessments for Workshop 2 were delayed, with one at 19 days and two at 21 days after Workshop 2 ended.
Student measures
Measures are described in the order that they were typically administered, although there was some variability in whether the interview, Cambridge Brain Science tasks, or survey occurred first based on participant preferences. Survey measures were always administered in the order below.
Interviews
After completing the consent, participants completed a pre-test interview, via Zoom, with one of four researchers, who would also be present in the workshops. We sought to have the same interviewer at pre- and post-test and were successful in all but five cases. Participants could choose if they wished to answer via typing and/or speaking. One decided to type, four used a combination of typing and speaking, and the rest chose to speak.
Pre-test interview questions asked about motivations for enrolling in the workshop, career goals, self-advocacy, and how participants see themselves, the world, and autism. The post-test interview began by asking participants to reflect on their experiences in the workshop, followed by the same questions about themselves they had answered at pre-test. See Supplementary Appendix A for the full list of interview questions.
Responses to the following post-test interview questions were qualitatively coded by two coders who were not involved in the workshops and coded unaware of which workshop responses were from:
1. What did you like about this game design workshop?
2. What did you not like about this workshop?
Cognitive and attentional measures
We originally planned to use the Delis et al. (2001) Color Word Interference task to measure inhibition, as we had in an in-person pilot prior to obtaining funding. However, our study transitioned online due to COVID-19, so we pivoted to online measures of attention and cognition. We selected six constructs to assess using Cambridge Brain Sciences (now known as Creyos)7 game-based online assessments: double trouble (a Stroop task used to measure response inhibition; our primary attentional measure), odd one out (a deductive reasoning task), spatial planning (assesses sequencing to reach goals), feature match (assesses focusing attention to notice differences), spatial span (assesses short-term memory for visual relationships between objects in space), and grammatical reasoning (examines conclusions drawn from word combinations).
In Workshop 1, we administered double trouble, odd one out, and spatial planning at pre-test. We planned to administer the other three tasks only at post-test, as we believed, based on descriptions of the tasks (e.g., Hampshire et al., 2012), that they measured relatively stable individual differences. However, spatial planning improved unexpectedly in Workshop 1. Therefore, we administered all six Cambridge Brain Sciences measures at post-test for Workshop 1 and at both pre- and post-test for Workshop 2. Given that only three measures were administered at both pre- and post-test for Workshop 1, we focus on these measures in this manuscript. The first score for each game that was deemed valid by the scoring software is included in analyses (for more information about tasks, see Supplementary Appendix A).
Engagement rating scales
During each workshop, participants rated their engagement with probed activities using a multimodal scale of engagement comprised of four domains: (1) Happy to Sad, (2) Calm to Anxious, (3) Bored to Interested, and (4) Understand to Confused. Each of the four scales had five response options which consisted of visuals coupled with text-based labels (Supplementary Appendix A). This scale was adapted from a scale assessing Affect, Anxiety, Pride, and Energy (AAPE) developed by the lead evaluator for this study in collaboration with a participatory team (Riccio et al., 2020). The scale was adapted for this study by the same artist (JDS), a member of our participatory team, who had drawn the original AAPE. He revised the Affect and Anxiety dimensions and replaced the Pride and Energy dimensions with dimensions deemed more relevant for engagement by the participatory group: Bored to Interested and Understand to Confused. The Bored to Interested dimension was selected a priori as the most relevant domain for assessing engagement with workshop activities and is the focus of reported analyses.
To evaluate the validity of these scales, participants were presented with each of the scales without its text-based labels during the pre-test and asked to write in: “What emotions do you think this is showing?” They were then presented with each scale with its text-based labels and asked “Please rate how much you agree that these pictures show (the dimension depicted, e.g., boredom to interest.)” using a 5-point scale (strongly disagree, disagree, neither agree nor disagree, agree, strongly agree). For all uses of this response scale, strongly disagree was assigned a numerical score of −2 and strongly agree was assigned a numerical score of 2.
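For readers implementing a similar scoring scheme, a minimal sketch of this numeric recoding follows (Python, with hypothetical data; our illustration rather than the study's actual scoring script):

```python
import pandas as pd

# Numeric recoding of the 5-point agreement scale described above:
# strongly disagree = -2 through strongly agree = 2.
AGREEMENT = {
    "strongly disagree": -2,
    "disagree": -1,
    "neither agree nor disagree": 0,
    "agree": 1,
    "strongly agree": 2,
}

validity_ratings = pd.Series(
    ["agree", "strongly agree", "neither agree nor disagree"]
)
print(validity_ratings.map(AGREEMENT).tolist())  # [1, 2, 0]
```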
Interests and indicators of engagement
In the pre-test survey, participants were asked to share their favorite interests and/or hobbies and to rate 14 potential things they might do when engaged (e.g., look at the screen, listen to music, draw) using the above 5-point scale.
Motivations for enrolling in workshop and perceived gains
The pre-test survey asked, “Please share why you joined this workshop: I joined this workshop to:” make friends, learn more computer skills, learn more video game design skills, learn skills that will help me get a job, get better at working with other people, build my self-confidence, and have fun, rated on the above 5-point scale (strongly disagree—strongly agree). Perceived workshop gains were assessed by asking, “Please share how much you agree or disagree that this workshop helped you” followed by the same domains they rated when sharing their motivations for joining.
Video game design self-efficacy
Given that no existing measures of video game design self-efficacy were available, we adapted items from the STEM Career Interest Survey (STEM-CIS; Kier et al., 2014) to focus on video game design self-efficacy (Supplementary Appendix A). This three-item scale exhibited borderline internal consistency (α = 0.69).
Career decision-making self-efficacy (CDMSE)
We adapted 8 of the 12 original items from the CDMSE subscale of the Middle School Self-Efficacy Scale (Summers and Falco, 2018) for accessibility guided by pilot and participatory feedback (Supplementary Appendix A). The adapted version of this scale exhibited good internal consistency (α = 0.88). We did not hypothesize that this measure would change in the current study. We included the measure in Year 1 to better understand its psychometric properties. We only expected changes in this measure in Year 2, when the workshops would be expanded to focus on employment.
Technology self-efficacy
The STEM Career Interest Survey (STEM-CIS; Kier et al., 2014) assesses self-efficacy and career interests across four domains: science, technology, engineering, and math; the technology subscale was used here to assess technology-related self-efficacy. The STEM-CIS is designed to assess key aspects of social cognitive career theory: perception of one’s abilities, beliefs about the consequences of behaviors, personal characteristics and backgrounds, and contextual supports and barriers. A prior investigation of its psychometric properties revealed evidence that it is unidimensional and that the subscales are sufficiently distinct to be administered separately. Following pilot feedback, some items were modified to improve accessibility and to reduce the original measure’s emphasis on schoolwork, given that our workshops are part of an out-of-school program. The adapted version of this scale exhibited good internal consistency (α = 0.86; Supplementary Appendix A).
Instructional strategies that students liked and received
During the pre-test, participants were asked to rate how much they agreed with statements beginning “In classes, I like it when…,” followed by 27 practices, using the same 5-point scale as above (Supplementary Appendix A). At post-test, they were asked to reflect on whether the above teaching practices were apparent in the workshop, using the same rating scale.
Self-determination
We adapted a widely used measure of self-determination, the Self-Determination Inventory-Student Report (SDI-SR; Shogren et al., 2020), which has 21 items assessing three domains: autonomy and self-initiation, self-direction toward one’s goals, and empowerment and self-realization. Guided by pilot feedback, the participatory group selected 11 items to reduce redundancy. Although the measure typically uses a slider scale, we used the same 5-point rating scale as above for consistency. Our adaptation exhibited good internal consistency (α = 0.89).
Hyperfocus
The Adult Hyperfocus Questionnaire (Hupfeld et al., 2019) assesses the degree to which people experience long bouts of highly focused attention, often when engaged in activities that interest them. The full scale assesses hyperfocus in different contexts as well as the dispositional (or general) tendency toward hyperfocus. In our study, we focused on dispositional hyperfocus. We modified ten questions from the scale by reducing and simplifying words to increase accessibility. The response scale was changed from a frequency scale to the same 5-point scale above. The adapted version of the subscale exhibited acceptable internal consistency (α = 0.74).
Computational thinking
Computational thinking was assessed using an adapted version of the Computational Thinking Assessment for Middle Grades (CTA-M; Wiebe et al., 2019), which contains items from the Computational Thinking test (CTt; Román-González et al., 2017) and from the Bebras Computing Challenge (Blokhuis et al., 2016). We modified the CTA-M to create two assessments, each containing six items from the CTt and two Bebras items, due to concerns about practice effects if we administered the full CTA-M at both pre- and post-test, as researchers often do. Each version of the assessment contained one question from each of the computational thinking domains outlined within the CTt, including basic directions and sequences, loops, simple conditionals, complex conditionals, and while conditionals. Students were randomized to “A” and “B” groups to alternate which version of the assessment was given at each time point. The modified version of this computational thinking assessment achieved variable internal consistency, including an unacceptable level (α = 0.57) for pre-test A and post-test B and a marginally acceptable level for pre-test B (α = 0.68) and post-test A (α = 0.70). The low internal consistency appeared attributable to some items being too easy (i.e., ceiling effects). Due to this unacceptable internal consistency, we do not report potential CT changes in the results, but they were not significant.
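The internal consistency values reported throughout this section are Cronbach's alpha. As a reference, a minimal sketch of the standard computation (our illustration, assuming a respondents-by-items DataFrame of numeric item scores):

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents-by-items DataFrame:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Hypothetical responses: 5 respondents x 3 items
demo = pd.DataFrame({"q1": [1, 2, 3, 4, 5],
                     "q2": [2, 2, 3, 5, 5],
                     "q3": [1, 3, 3, 4, 4]})
print(round(cronbach_alpha(demo), 2))
```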
Parent measures
Although any guardian could have completed the parent assessments, only mothers (including one stepmother) completed pre- and post-test surveys. The pre-test asked about the teen’s demographic characteristics, interests, strengths and challenges, diagnoses, aspirations, and instructional strategies that could help them learn the skills needed to attain their dream jobs. Parents also completed measures assessing their motivations for encouraging their child to enroll in the workshop and rated their child’s autistic characteristics and traits of ADHD.
Despite reminders, only 26 of the parent participants completed pre-tests. Therefore, we created a combined pre- and post-test parent survey, which focused on the essential questions, for parents who had not completed pre-tests. Three parents completed this composite survey, leading to pre-test data from 29 mothers and post-test data from only 18 mothers.
Motivations for encouraging student to enroll in workshop and perceived gains
During their pre-test survey, parents were presented with the prompt “I encouraged my child to enroll in this workshop to…” followed by the same domains as the students used to rate their own motivations (described above). At the post-test, parents were asked to “Please share how much you agree that this game design workshop helped your child…” using the same domains.
Attentional strengths and difficulties
The Strengths and Weaknesses of ADHD-symptoms and Normal-behavior (SWAN) scale was used to assess characteristics of ADHD in a manner that allows both strengths and challenges to be reported dimensionally (Swanson et al., 2012). Parents were asked to compare their child to other children based on behaviors observed in the past month and using a 7-option response scale ranging from “far below” to “far above.” Higher scores indicate greater attentional skills. We selected 18 items from this scale that focused on attentional difficulties. The adapted scale exhibited excellent internal consistency (α = 0.91).
Autistic characteristics
The Social Responsiveness Scale-Brief, a 16-item measure assessing autistic traits in five domains (autistic mannerisms, social awareness, social cognition, social communication, and social motivation; Swanson et al., 2012), exhibited good internal consistency (α = 0.84).8
Educator characteristics and training
The instructional and support staff included one lead teacher, an award-winning Twine game designer, an openly neurodivergent assistant teacher who had high-level programming and game design skills, three tech counselors, a social worker, and an occupational therapist. Most of the staff had extensive experience working at TKU and many had attended numerous training sessions in their past roles (Table 2).
For this study, educational staff attended a 1-h research orientation training, which explained what participatory research is, how it differs from dominant research methods, and why it is important, and completed a participatory online training about Autism and Universal Design that has been associated with improved autism understanding and acceptance and more positive attitudes toward UD (Waisman et al., 2022). They also received training about Zoom management and in the half hour before and after the workshop attended preparation and debrief sessions led by TKU’s then Education Director. These sessions provided opportunities to collaboratively strategize how to support individual students, brainstorm curricular improvements, and deliver targeted mini trainings, as needed. In response to evidence that some students and staff struggled with navigating the many discussions about social justice in the workshop, TKU’s Education Director and the Lead Teacher developed a training to help educators discuss sensitive topics with youth. This training emphasized the importance of recognizing and celebrating meaning-making, clarifying why some statements are offensive without assuming negative intent, but also naming and denouncing incidents of bias swiftly when they arise. The training encouraged staff to create safer spaces by educating one another, while noting that it is impossible to guarantee a completely safe space as everyone’s experiences are different. It highlighted the importance of being a learner as a teacher, of connecting to and building on what students already know, and of discussing sensitive topics openly but redirecting students to the social worker when topics may cause undue distress.
Staff also received diversity profiles: 1–2-page Google Docs that summarized characteristics of each student derived from the pre-tests, including age, information they wanted their teachers to know, motivations for joining, interests and how they expressed engagement, preferred teaching strategies, experience with video game design, and pre-test self-efficacy, self-determination, hyperfocus, and attentional/cognitive scores. Due to the timing of pre-tests, these profiles did not become available to educators until the beginning of the workshop, thus limiting their utility.
Educator measures
Perceived student engagement
After probed activities, instructors rated their perceptions of the degree to which each student they observed was interested in the activity using the same bored-to-interested scale students used. They were also invited to provide semi-anonymous feedback on activities they felt were or were not particularly engaging via an optional survey at the end of each day.
Perceived student learning
At the end of each week, educators were asked to share the degree to which they felt students learned each of the four learning objectives of the workshop (game design concepts, games and social justice, social emotional skills, and careers in game design). For each learning goal, educators rated what proportion of the students they believed achieved that goal (almost none, between 15–30%, between ~35 and 50%, between ~55 and 75%, about 75%, or almost all). Then they used the same 5-point response scale as students to rate “Do you believe that activities and instructional methods were effective in teaching Learning Goal X for most students?” and “Do you believe the training you received prepared you to achieve this learning goal with students? (“Training” here refers to both your initial pre-workshop training and daily staff debrief sessions).” They were then asked to provide support for their ratings (for additional educator questions, see Supplementary Appendix B).
Analytic approach
We used a prespecified alpha level of ≤0.05 due to the limited power imposed by our small sample size. We examined whether continuous summed outcome variables were approximately normally distributed in each workshop by first examining kurtosis and skew and then following up with Shapiro–Wilk tests. For any variables that were not normally distributed, we note in footnotes if a finding is no longer apparent with non-parametric tests.
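A minimal sketch of this screening step (Python/scipy; our illustration, not the study's analysis script):

```python
from scipy import stats

def screen_normality(scores, label, alpha=0.05):
    """Inspect skew and kurtosis, then follow up with a Shapiro-Wilk test,
    mirroring the screening procedure described above."""
    skew = stats.skew(scores)
    kurt = stats.kurtosis(scores)  # excess kurtosis; 0 for a normal distribution
    w, p = stats.shapiro(scores)
    print(f"{label}: skew = {skew:.2f}, kurtosis = {kurt:.2f}, "
          f"Shapiro-Wilk p = {p:.3f}")
    return p > alpha  # True if approximate normality is not rejected

# Hypothetical summed outcome scores for one workshop
screen_normality([12, 15, 14, 10, 13, 17, 11, 14, 16, 12], "demo outcome")
```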
We first examined if participants in the two workshops differed using Mann–Whitney tests (for independent ordinal data), independent samples t-tests (for continuous data), and chi-square tests (for categorical data). Given that the motivation data was ordinal and from related samples, we used Wilcoxon Signed-Rank tests to compare students’ and parents’ motivations for participating.
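Sketches of the corresponding tests follow (scipy, with hypothetical stand-in data; shown only to make the analytic choices concrete):

```python
from scipy import stats

w1 = [4, 5, 3, 5, 4, 4, 5]  # hypothetical Workshop 1 scores
w2 = [3, 4, 3, 4, 2, 3, 4]  # hypothetical Workshop 2 scores

# Independent ordinal data: Mann-Whitney U test
u, p_u = stats.mannwhitneyu(w1, w2, alternative="two-sided")

# Continuous data: independent-samples t-test
t, p_t = stats.ttest_ind(w1, w2)

# Categorical data: chi-square test on a contingency table
chi2, p_c, dof, expected = stats.chi2_contingency([[10, 8], [6, 10]])

# Related ordinal data (e.g., student vs. parent motivations):
# Wilcoxon signed-rank test
z, p_w = stats.wilcoxon([4, 5, 3, 4, 2, 5], [5, 5, 4, 5, 3, 4])
```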
Since Workshop 1 was used to modify instruction for Workshop 2, we ran the following analyses for Workshop 1: Kendall’s tau correlations to examine associations between individual differences and engagement ratings, and repeated-measures ANOVAs to examine differences in ratings due to characteristics of the rated activity. We had pre-registered our plan to use repeated-measures ANOVAs, so we used these parametric analyses despite the ordinal nature of the data.
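A sketch of these two analyses (Kendall's tau via scipy; repeated-measures ANOVA via statsmodels; all data hypothetical):

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Kendall's tau between an individual-difference score and an engagement rating
tau, p = stats.kendalltau([10, 14, 9, 17, 12], [1, 2, 0, 2, 1])

# Repeated-measures ANOVA on interest ratings across activity conditions
# (long-format data: one row per student per condition)
long = pd.DataFrame({
    "student": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "condition": ["voice", "voice+video", "voice+video+text"] * 4,
    "interest": [0, 2, 1, 1, 2, 1, 0, 1, 1, 1, 2, 0],
})
print(AnovaRM(long, depvar="interest", subject="student",
              within=["condition"]).fit())
```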
We used paired samples t-tests to examine potential improvements from pre- to post-test in Workshop 1. After revising the curriculum based on feedback for Workshop 1, we ran Workshop 2.
Two coding pairs, each of which contained at least one neurodivergent coder, qualitatively coded open-ended interview responses from students and survey responses from educators using primarily inductive content analysis (see Supplementary Appendix C for coding schemes). They obtained reliability of 85% or higher on 20% or more of the sample. We compared qualitative ratings across workshops using chi-square tests.
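A minimal sketch of the percent-agreement reliability check (our illustration, with hypothetical code labels):

```python
def percent_agreement(codes_a, codes_b):
    """Simple percent agreement between two coders over the same responses."""
    assert len(codes_a) == len(codes_b)
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100 * matches / len(codes_a)

coder1 = ["fun", "speakers", "fun", "platform", "speakers"]
coder2 = ["fun", "speakers", "fun", "platform", "fun"]
print(percent_agreement(coder1, coder2))  # 80.0 (below the 85% criterion)
```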
Results
Student characteristics across workshops
Participants in the two workshops did not differ in gender, age, or parent-reported ADHD diagnoses. However, participants in Workshop 2 were significantly less motivated by an interest in learning video game design than students in Workshop 1 and reported numerically lower pre-test video game design self-efficacy than their peers in Workshop 1 (Table 1).
Nine participants in Workshop 1 and ten in Workshop 2 were diagnosed with ADHD, in addition to autism, according to parent report. Six students had other additional diagnoses, most commonly co-occurring anxiety (n = 5). Unexpectedly, a parent-reported student ADHD diagnosis was not associated with student hyperfocus (p = 0.48) or with the Cambridge Brain Sciences measures of inhibition (double trouble; p = 0.58) or spatial planning (p = 0.32). However, student participants with ADHD had less parent-reported ability to regulate their attention, r(24) = −0.53, p = 0.005. We used correlations to examine the pre-/post-test reliability of the Cambridge Brain Sciences measures (Table 3). Double trouble at pre-test exhibited a strong correlation with itself at post-test. Moderate pre-test to post-test correlations were observed for the other two variables.
Students’ and parents’ motivations for participating across workshops
Wilcoxon signed-rank tests comparing students’ motivations for enrolling in the workshop with their parents’ motivations for encouraging them to enroll were conducted separately for participants in each workshop. In Workshop 1 (Z = 2.21, p = 0.03) and Workshop 2 (Z = −2.00, p = 0.046), parents were more motivated by a desire to help their child develop self-confidence than their child was (see Table 4). In Workshop 1, parents were also more motivated by a desire for their child to get better at working with other people than their child was (Z = 2.00, p = 0.046); a similar trend was not statistically significant in Workshop 2 (p = 0.096). In Workshop 1, students were more motivated by a desire to learn computer skills than their parents were (Z = −2.45, p = 0.014).
The top two goals that motivated students to enroll were learning more computer skills and having fun. In contrast, parents’ primary motivations were helping their teen build self-confidence, get better at working with others, and learn video game design skills. In Workshop 2, a non-significant trend was observed toward parents being more interested in their child learning video game design skills than their child was (p = 0.058). Neither parents nor teens appeared particularly motivated by a desire for the teen to make friends in the workshop.
Initial evidence for the validity of the interest engagement rating
Across workshops, students’ pre-test ratings of the perceived meaning of the engagement dimensions revealed that our primary measure, boredom-to-interest, was rated as most representative of its target meaning (M = 1.53; SD = 0.71; scale from −2 to 2). Understand-to-confused received the lowest rating (M = 1.09; SD = 0.93).
Workshop 1
Attention and engagement
We hypothesized that students with more focused attention would prefer unimodal instructional strategies while students with less focused attention would prefer multimodal instructional practices. However, student-reported hyperfocus (ps > 0.12) and parent-reported attentional difficulties (ps > 0.08) were unrelated to student-reported engagement with any workshop activities. Parent-reported ADHD was positively associated with engagement with a demo of variables using math, r(13) = 0.49, p = 0.045, and negatively associated with interest in playing a text-based game in small groups, r(14) = −0.53, p = 0.028. Attentional inhibition was associated with engagement ratings for only one activity, and in the direction opposite to our prediction: it was negatively associated with interest in a unimodal activity (a whole-group voice-only role play), r(14) = −0.39, p = 0.049.
A repeated-measures analysis comparing engagement ratings for the first whole-group explanation activity that students rated in Workshop 1 (explanations delivered by voice only without video, voice plus video, and voice plus video plus Zoom text transcriptions) was significant, F(2, 30) = 4.81, p = 0.015; η² = 0.24. Post-hoc tests revealed that the voice plus video explanation (M = 1.56; SE = 0.27) was rated higher than both voice only (M = 0.63; SE = 0.30; p = 0.011) and voice plus video plus text (M = 1.00; SE = 0.32; p = 0.023), which did not differ from each other (p = 0.30). When ADHD was entered into this model, the pattern did not change and the interaction term was not significant (p = 0.49). Similarly, parent-reported attentional differences did not alter findings or generate a significant interaction (p = 0.22). The same pattern was observed for the second set of explanation activities that students experienced, though this time it was not significant (p = 0.15). Although these patterns initially appeared to provide some evidence for the generality of Mayer’s Redundancy Principle, the first voice plus video activity was a highly rated industry speaker who was also autistic, and the second was another highly rated industry professional. Given that the two industry speakers were among the highest-rated activities in the workshop, the speaker, rather than the modality, is the likelier explanation of this pattern. See Table 5 for average engagement ratings for probed Workshop 1 activities.
The most highly rated activities in Workshop 1 included group games (e.g., Werewolf) and individual games (e.g., Game Blast). Students appeared highly interested in multimodal activities across social structures. For example, working on one’s own to make characters move, playtesting Flowlab games in small groups, and a whole-group map-making activity were all highly rated. Some social justice discussions were rated as highly interesting (e.g., a whole-group discussion of race in games and a small-group discussion of games and cultural sensitivity), while other discussions were rated as far less interesting (e.g., queer tropes in games and UD for diversity). Although engagement ratings varied substantially within each grouping of activities (by group size, broad type of activity, and modality), interactive multimodal activities, stimulating discussions, and opportunities to engage with industry professionals were consistently rated highly.
The above findings provide no clear support for our hypothesis that instructional modality or attention was a particularly relevant factor contributing to autistic teens’ engagement with workshop activities. However, video game design self-efficacy and self-determination were positively associated with engagement with 10 of 29 and 7 of 29 activities, respectively (ps < 0.05; see Supplementary Appendix D for specific associations).
Examining changes from pre-test to post-test in Workshop 1
Consistent with our hypotheses, video game design self-efficacy and self-determination improved from pre-test to post-test following Workshop 1 (ps < 0.045; Table 6).9 We had not hypothesized that career decision-making self-efficacy would improve as the workshops in 2021 focused primarily on game design rather than job skills. It did not improve and even became numerically, albeit not significantly, lower following Workshop 1. Unexpectedly, spatial planning improved following Workshop 1 (p = 0.005).
Learning from students’ and educators’ feedback after Workshop 1
Student evaluations of Workshop 1
Students rated Workshop 1 as most helpful in supporting video game design skills (M = 1.59 out of 2), followed by computer skills and having fun (Table 7). They found Workshop 1 least helpful in terms of helping them make friends (M = 0.41).
Table 7. Students’ ratings of the degree to which their workshop helped them in each of the following domains (possible range -2 to 2).
When asked “What did you like about the workshop?”, most students focused on the content, particularly opportunities to learn about game design (Table 8). One student said, “I kind of liked the fact that I designed my own game…I don’t know how to work out the settings. The settings are a little bit hard, but I think I can get it done a little bit.” Six students specified that they found the workshop engaging: “It was fun. I learned a lot of interesting things as well as websites that I can use later on in the future…I met some new people. I met some teachers who I also like. I had something in common with some people.” Four students particularly liked the social justice discussions. For example, a student who later joined our participatory research team said, “I learned tropes and how to avoid some tropes… I learned how to make my game accessible.”
When asked “What did you not like about the workshop?”, three students mentioned the duration (two thought it was too long; Table 9). One thought it should be longer: “that’s probably my only complaint that the whole thing wasn’t long enough.” Three students critiqued instructional strategies: “Because like, there’s a bunch of different people, they all have to catch up on stuff. So I suppose that’s part of the reason why it dragged at times.”
Four critiqued the content. One felt it was too simple. “What I liked most about it was meeting new people um with different perspectives, backgrounds…And I’m gonna say up front… they’re going to find out anyways, especially because it’s being recorded…. I wasn’t crazy about the workshop overall. The reason is because… I expected it to be a bit more advanced. I expected it to be a bit more interesting…maybe I was higher functioning than many of the people there.”
Another student who liked that the workshop “looked at multiple perspectives” said that the discussions sometimes made him uncomfortable. “There are incredibly few things I didn’t, I didn’t like about the workshop, but one of them… I’m going to level with you here. I have an um. I have an extremely selective fear of nudity and sexual stuff. And remember, it’s selective. So sometimes it happens. Sometimes it doesn’t…. I want to learn I want knowledge, knowledge, even if even if it means I have to go through hell hell to get it. Trust me. I’ve been through worse.”
One student noted Internet problems, “I think I had nothing to complain about…. Because there’s, like, the teachers themselves are fine. It’s just the connection problem and how some classmates had their audio problems and all that stuff. So it was still good.”
Educators’ feedback
When asked what proportion of the students in Workshop 1 had learned each of the learning objectives, educators provided the highest ratings for careers in game design, followed by game design concepts (Table 10). However, they also reported feeling least prepared to teach about careers in game design. Qualitative coding of their rating explanations revealed that many (42.9%) felt students had learned about careers in game design from the industry speakers. However, many (42.9%) recommended devoting more time to careers in the future.
Their open-ended explanations of their ratings of students’ social–emotional learning (SEL) indicated that many (85.7%) felt students were socially engaged. However, many (42.9%) also indicated a lack of clear evidence of SEL. Many (42.9%) referenced “temperature checks” as helpful.
When asked how they helped students achieve learning objectives, many emphasized the importance of individualized check-ins (71.4%) and social–emotional support (71.4%), with a particular focus on fostering a positive classroom atmosphere (42.9%). They more often indicated that they advocated for students (42.9%) than that they fostered self-advocacy in students (28.6%). When asked how they could better support students, educators suggested more student check-ins (57.1%) and more pre-workshop preparation (28.6%).
Methods continued: iterative changes in instructional approaches between Workshops 1 and 2
In response to data from Workshop 1, study leaders encouraged instructors to provide more time-management supports, more choices for students, and more opportunities for differentiation in Workshop 2 (e.g., choices of whether to attend a breakout room to continue current work, move on to an advanced topic, or review), as well as more opportunities for students to share their work. We also included more breaks and asked staff to share career spotlights about their own career experiences and goals throughout the workshop. We modified the engagement probe matrix to focus on dimensions that appeared important in Workshop 1.
Interim staff training
After initial analysis of data from Workshop 1, we conducted a 1-h professional development session with staff on August 5th, at the end of the two-week period between the two workshops. In this session, we discussed activities that students had rated as particularly engaging and unengaging, and we highlighted the importance of early check-ins with students and of offering choices to promote self-determination. We provided the following recommendations:
1. More project planning, e.g., presenting learning objectives, vocabulary, schedule, career spotlights, and options in terms of expression up front;
2. Giving students more time to plan/prepare presentations;
3. Providing more check-in points (e.g., students sharing their screens) and more visuals/demos;
4. Building more opportunities for collaboration/interaction between peers;
5. Including more discussions of jobs;
6. Including more breaks, as well as activities for students who are bored during breaks;
7. Including opportunities for differentiation;
8. Calling on less vocal students more/giving students time to respond;
9. Considering reading diversity profiles (however, diversity profiles did not become available until the Sunday before Workshop 2 started, due to the timing of pre-tests).
This training also included tips from the social worker for supporting student confidence, creativity, and engagement, such as giving specific feedback, asking step-by-step questions, encouraging communication, labeling SEL so students can feel pride, reframing negative comments, and differentiating between honesty and disrespect. We also returned to the mini-training about sensitive topics, which we had only had time to partially discuss during Workshop 1.
Results continued: did we observe evidence of improvements from Workshop 1 to Workshop 2?
Students’ feedback
No significant improvements were observed following Workshop 2 (Table 11). However, the reduction in career decision-making self-efficacy observed in Workshop 1 reversed numerically, though changes in this measure remained non-significant. Independent-samples t-tests revealed no differences between Workshops 1 and 2 in the degree to which students felt that the workshop had helped them with specific skills (Table 7). Neither engagement ratings nor students’ perceptions of instructional practices changed noticeably between workshops (ps > 0.06).
A trend was observed toward students being less likely to report that instructors asked them to work on their own in Workshop 2 relative to Workshop 1 (p = 0.065). In their open-ended feedback, students highlighted specific instructional strategies more often in Workshop 2 than in Workshop 1, particularly advanced learning and facilitation. For example, one student said, “I really enjoyed being able to create the games and the fact that I had guidance the whole way, but it wasn’t like but I could still do things on my own if I felt confident enough in them…And I also liked how I was able to contribute to discussions a lot… And I love… having the little games in society things. …Especially when I I would give my own little contributions on this to things, like, how to fix these problems in games.” A student who was wrapping up his undergraduate degree in game design, and who later joined our participatory group, said, “I really liked…the videos on on just what’s wrong with the gamosphere right now and what we can do to improve it. I feel like that’s very important for people to think about as they’re designing their games.” Another student who later joined our participatory group said that he liked, “Making the games and learning how to code.”
Twine, the text-based game design platform, was frequently noted as a strength of Workshop 2: “I liked how I could um create something like, like create games and how it would actually be used in the real world?… Like Twine I could use in the real world.” Another student felt Twine supported creativity. The aforementioned student who was finishing his degree said, “I loved Twine a lot… Having a very simple text based game maker…I liked that there was always an opportunity to learn more. So that even though a lot of other people may not have caught on to how Twine works as quickly as I did, there was never a point where I had to stop for them.” However, a few students wished that “We could have used high-level programming languages.”
Although some students noted social benefits of Workshop 2, a need for greater social opportunities remained apparent, “I like the workshop. I like that I met. I met like some new people…. Actually, they’re not my friends, but most likely, like, helpers.” One student said, “I guess I didn’t like that we didn’t talk with the other kids, like interact with them sometimes.”
Educators’ feedback
Educators felt that Workshop 2 was more impactful in teaching students about the social justice potential of games than Workshop 1 (Table 10). Educators were also numerically more likely to indicate that students had learned about careers in game design in Workshop 2 relative to Workshop 1. When asked how they could better support students after Workshop 2, educators no longer emphasized student check-ins (0% of responses in Workshop 2 vs. 57.1% in Workshop 1). When asked how staff training/ongoing support helped them support students, daily debrief sessions emerged as particularly important in Workshop 2 (42.9% in Workshop 2; 28.6% in Workshop 1), as did check-ins with individual students (57.1% in Workshop 2; 28.6% in Workshop 1). However, educators more often highlighted the need for additional pre-workshop preparation following Workshop 2 (57.1%) than following Workshop 1 (28.6%). Although no educators had suggested this following Workshop 1, 28.6% noted a need for more help planning for the diversity of students after Workshop 2.
Discussion
Findings suggest that interactive multimodal activities, intellectually stimulating discussions, and opportunities to engage with industry professionals are engaging for varied autistic youth. Contrary to our first hypothesis, we saw no clear evidence that autistic students’ engagement with different instructional practices was associated with their attentional skills. The lack of support for this hypothesis complicated our initial efforts to develop “diversity blueprints” by making it difficult to determine which student characteristics are important for helping their instructors prepare to teach them more effectively. In Summer 2023, we will ask students to co-create their diversity profiles with us by selecting which information about themselves they believe their educators should know in order to teach them more effectively.
Aligning with similar findings from another technology program (Jones et al., 2023) and with our hypothesis, participation in our first game design workshop was associated with improvements in video game design self-efficacy. Participation in Workshop 1 was also associated with hypothesized improvements in self-determination and unexpected improvements in spatial planning. Together, these findings support a central premise of this work: that interest-based workshops can empower autistic students.
Despite our collaborative efforts to use what we learned in Workshop 1 to improve Workshop 2, Workshop 2 did not lead to statistically significant improvements in any of the outcomes. Students in Workshop 2 exhibited greater variability in support needs than those in Workshop 1. Indeed, it was only after Workshop 2 that educators noted the need for more training to help them prepare for student diversity. The students in Workshop 2 also entered with less interest in learning video game design, and it was harder to schedule post-tests after Workshop 2, partially due to the proximity of Workshop 2 to the beginning of a new school year. Echoing a large body of research indicating that the quality of research about supports for autistic people is improved by random assignment (e.g., Bottema-Beutel et al., 2022), if we had randomly assigned students to either Workshop 1 or 2, our workshops would have been more comparable, improving our ability to determine whether the changes we made led to improvements.
Mixed methods research provides educational insights that quantitative data cannot provide on its own
Our findings highlight the value of a mixed-methods approach to assessing supports for autistic youth. If we had focused solely on pre- to post-test quantitative findings, we might have assumed that we had somehow made Workshop 2 worse by trying to improve it. However, students’ engagement ratings and ratings of what they learned revealed no evidence that Workshop 2 was worse. In addition, students’ open-ended feedback suggested that our efforts to improve teaching approaches had a positive impact; six students in Workshop 2 noted instructional strategies as key strengths of the workshop vs. only one student in Workshop 1. These student reports align with evidence that the games students made in Workshop 2 were more sophisticated (e.g., had more levels) than games produced in Workshop 1 (Hayes, 2023).
Educators’ open-ended reflections revealed remarkable insights about their students, despite difficulties they noted in determining how engaged students were in online workshops where students were not required to have their cameras on. For example, educators’ ratings of the degree to which Workshop 2 helped students understand careers in games were numerically higher than their ratings for Workshop 1, aligning with the shift toward more positive changes in career decision-making self-efficacy from Workshop 1 to 2.
Qualitative coding also revealed deep insights about the types of support that students and educators themselves needed, insights that were not visible in closed-ended ratings. For example, educators noted that their students could learn a great deal about game design careers from industry speakers even though they personally felt unprepared to teach their students about careers. Educators also highlighted the importance of group “temperature checks,” individualized check-ins, and social–emotional support for students, with a particular emphasis on fostering a positive classroom atmosphere. They noted the team dynamic among the educators, and the space that daily debriefs provided for them to share insights and support one another emotionally, as key to successfully supporting students. Aligning with reminders that UD involves both a priori planning and iterative adaptation (Smith et al., 2019), TKU educators highlighted the need for more advance planning to prepare them to deliver the curriculum effectively to diverse students, as well as the benefits of frequent check-ins with students and other staff. Despite notable strengths, educators sometimes struggled to encourage students to self-advocate and to promote social interactions and collaboration.
Students’ belief in themselves: more central to engagement than attention?
As noted above, attentional differences were only rarely associated with students’ engagement ratings. Instead, students’ beliefs about their ability to shape both games (i.e., video game design self-efficacy) and their lives (i.e., self-determination) were much more consistently linked to engagement ratings than attentional/cognitive skills were. This aligns with evidence, briefly touched upon in the introduction, that attention is not a static characteristic of an individual: the same person can be highly distracted when faced with tedious tasks and fully focused when presented with activities that interest them (Ashinoff and Abu-Akel, 2021). Given that students’ motivations were central to engagement and often differed from their mothers’, autistic teens should be given space to choose which interest-based communities they wish to join. Findings also align with evidence for bidirectional relationships between emotions about STEM learning and STEM self-efficacy (Simon et al., 2015; Pekrun et al., 2017).
Multimodal measures of engagement, like the interest rating scale developed for this study (available open access), could be used to help students with varied language abilities shape their educational experiences. However, as was evident with the two students who could not engage with Workshop 2 without a caregiver, students with higher support needs might require in-person support to meaningfully express their interests.
Limitations and future directions
Key limitations of this study include: insufficient racial/ethnic, gender, and socioeconomic diversity; incomparable groups due to the lack of random assignment; interviewers who were present during the workshop, which could contribute to both interviewer bias and demand characteristics; missing and delayed data; technological difficulties; relatively small sample sizes (albeit larger than in most studies of similar programs); some measures originally developed for different age groups; difficulties operationalizing instructional activities without confounds; unacceptable internal consistency of the computational thinking measure and borderline internal consistency of the video game design self-efficacy measure; and potential response fatigue due to the large number of assessments.
Most of the educators who delivered these workshops had extensive prior experience. The amount of support that less seasoned educators would need to demonstrate similar levels of insight and skill is unknown; prior experience was an unexamined variable. Even these skilled educators struggled to foster social and self-advocacy opportunities for students online. Similar difficulties have been documented in other programs (e.g., Moster et al., 2022).
Although students’ ratings of their interest are an invaluable tool for educators, the lack of credible evidence that adapting instruction to students’ self-perceived learning styles improves learning (e.g., Pashler et al., 2008; Cuevas, 2015; Rogowsky et al., 2020) raises questions about whether student-reported engagement will be directly related to learning. However, our strategy of inviting students to rate specific activities after they happen, rather than abstract learning preferences, may offer more grounded insights than learning-style assessments typically provide. Indeed, pilot participants in the current study reported that it was hard to answer how they learn best when asked abstractly; similar difficulties were not reported when rating interest after specific activities.
The measures of attention/cognition used in this study were not as reliable across time as we had anticipated. Indeed, while Cambridge Brain Sciences assessments have demonstrated reasonable psychometric properties in some samples (e.g., Hampshire et al., 2012; Laureys et al., 2022), they have shown weak psychometric properties in others (Kochan et al., 2022). Future research is needed to determine whether associations between attention and engagement emerge with stronger measures. Indeed, a response by one autistic student with ADHD in this study suggests that such work is needed, as the student’s response aligned with a central unsupported hypothesis of this study: “I really like the fact that there were closed captions….It feels like there are, there are all 3 of them….That you see… there is hearing. Because, like, you get to hear what they do. And then there’s tactile which….interacts….I like interacting.”
Looking back to look forward as we finalize the randomized design for the final set of workshops in this program of research (Summer 2023), we recommend that future researchers learn from our iterative process by: (1) using random assignment and masked assessors whenever possible, (2) developing robust educator training strategies in collaboration with (i.e., participatory) or led by autistic people, (3) providing structured opportunities for students to develop their own diversity profiles, (4) using theoretical frameworks and iterative data collection and analyses to focus measures, and (5) pre-registering hypotheses and research plans and making de-identified data freely available. By recommending a participatory approach, we do not intend to imply that participatory approaches are easy. They are challenging and require a highly iterative process wherein collaborators collectively learn from the difficulties of enacting truly participatory work. Challenges we have experienced in making our approach increasingly participatory (e.g., by more fully involving our participatory team in curriculum and training development) will be documented in an upcoming paper (O’Brien et al., in prep). However, the challenges we have faced have only strengthened our belief that participatory work is essential to improving autism research and practice.
Conclusion
Findings provide preliminary evidence that interest-based informal learning opportunities can begin to provide a foundation for success for neurodivergent youth, particularly when they include opportunities to engage with successful neurodivergent role models. Students’ beliefs about their ability to shape technology and the world were often linked with their engagement with educational opportunities, while their cognitive and attentional skills generally were not. Although this suggests that diverse students can learn together if they share common interests, students with greater support needs were unable to take full advantage of our online workshops. A diversity of learning opportunities for autistic youth is needed, some online, some in-person, and some hybrid, providing spaces for students to explore varied interests. Technologies like Twine, which require little knowledge to begin using but can be used in very sophisticated ways, may be uniquely well suited to helping diverse students learn together. However, similar technologies that are not language-based are also needed.
Findings align with a large body of research by indicating that autistic students’ interests are an essential path to learning (e.g., Murray et al., 2005; Dawson et al., 2008) that can help them obtain meaningful jobs, at least when congruent with other factors such as labor market demands (Goldfarb et al., 2019). However, what other characteristics besides interests are important to highlight in diversity profiles remains an open question that we hope students this year will help us answer. To learn from neurodivergent students, we must create conditions wherein they feel empowered to teach us, including evidence that we are learning as we go and collaborating with neurodivergent people to do so. Returning to the two definitions of self-determination introduced at the beginning of this paper, we would like to leave you with two key take-home points: (1) Employment-related interventions for transition-age autistic youth should, whenever possible, target both individual and collective self-determination as broken systems do not change without collective advocacy, and (2) Informal computer-mediated learning environments have a radical potential that has only begun to be realized.
Data availability statement
De-identified data supporting the conclusions of this article are available via our project on the OSF: https://osf.io/4pvq7/.
Ethics statement
The studies involving human participants were reviewed and approved by the College of Staten Island Institutional Review Board. Parents and students who were 18 years of age or older completed Internet-based consent forms. Students below 18 years of age completed Internet-based assent forms.
Author contributions
KG-L wrote the research proposal, sparked participatory processes, hired some staff, conducted and transcribed interviews, provided feedback on coding schemes, led data analysis, and wrote and revised drafts of this manuscript. EG led educator coding, scaffolded participatory meetings, recruited, conducted interviews, collected daily engagement probes during the workshop, and edited this manuscript. JH attended participatory meetings, participated in daily debriefs with staff during workshop facilitation, and worked closely with the team to craft curriculum and assessments. AR led collection of staff feedback and student CT data for Summer 1 in collaboration with WBM, provided feedback on the educator coding schemes, and edited this manuscript. JDS provided feedback on study design as a participatory group member and drew and revised the engagement rating scales. SB, the lead teacher of the workshops, led curriculum and training development in collaboration with JH. BK attended participatory meetings, offered feedback and ideas for the workshop, coded educator data with EG, provided feedback on coding schemes, and edited this manuscript. PD attended monthly participatory meetings, provided guidance on assessments and curriculum, and edited this manuscript. BR made curriculum suggestions, participated in daily debriefs, attended participatory meetings, and edited this manuscript. AH contributed to study processes and led hiring of study staff through NYU. LH-G, SSH, and SD attended participatory meetings and provided feedback on learning objectives, workshop plans, and survey methods. SO’B and EG coded student and parent data and edited this manuscript. All authors contributed to the article and approved the submitted version.
Funding
This project was funded by a collaborative National Science Foundation AISL grant to PIs KG-L and AH (CSI Award Number: 2005772 | NYU Award Number 2005729).
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2023.1179548/full#supplementary-material
Footnotes
1. ^Of course, each autistic person is different, so it is important to not assume that a given autistic person will have a specific strength or difficulty (see Taylor et al., 2023, for further discussion of this issue).
2. ^Although our research seeks to build from some autistic students’ interest in video games to create engaging educational opportunities wherein autistic youth can develop employment-related attitudes and skills, we do not wish for our work to contribute to the misconception that all autistic people are drawn to STEM fields. Like all people, autistic people vary in their career interests; many autistic people are particularly drawn to the arts, education, and research (Cheriyan et al., 2021; Vincent and Ralston, 2023).
3. ^Although the overarching goal of this research is to help autistic youth learn both game design and employment skills, we focused the workshops described in this report primarily on game design so that we could identify effective instructional strategies before expanding the scope of our learning objectives during 2022 and 2023.
4. ^Although hypotheses 1 and 2 were described in the proposal that led to funding for this project, we developed hypothesis 3 after obtaining funding but before conducting this research.
5. ^As this research is part of a National Science Foundation-funded study, we also had evaluation questions, including: 1. Does participation in our game design workshops lead to improvements in computational thinking? 2. Do instructors believe diversity profiles are effective for engaging diverse students?
7. ^https://creyos.com/features/tasks
8. ^This scale is not a focus of analysis so additional detail is provided here. Parents are asked to indicate if descriptions were consistent with their teen’s behavior over the past six months using four response options: not true, sometimes true, often true, almost always true. There are three reverse-scored items. Higher scores indicate more autistic traits.
9. ^Self-determination was not normally distributed at post-test so we conducted a confirmatory Wilcoxon signed-rank test. The improvement in self-determination remained significant (p = 0.021).
References
Ashinoff, B. K., and Abu-Akel, A. (2021). Hyperfocus: the forgotten frontier of attention. Psychol. Res. 85, 1–19. doi: 10.1007/s00426-019-01245-8
Bandura, A. (1989). Human agency in social cognitive theory. Am. Psychol. 44, 1175–1184. doi: 10.1037/0003-066X.44.9.1175
Baron‐Cohen, S. (2009). Autism: the empathizing–systemizing (E‐S) theory. Ann. N. Y. Acad. Sci. 1156, 68–80. doi: 10.1111/j.1749-6632.2009.04467.x
Begel, A., Dominic, J., Phillis, C., Beeson, T., and Rodeghero, P. (2021). How a remote video game coding camp improved autistic college students’ self-efficacy in communication. The 52nd ACM Technical Symposium on Computer Science Education (SIGCSE ’21), Virtual Event, USA. ACM, New York, NY, USA, 142–148.
Black, M. H., Mahdi, S., Milbourn, B., Scott, M., Gerber, A., Esposito, C., et al. (2020). Multi-informant international perspectives on the facilitators and barriers to employment for autistic adults. Autism Res. 13, 1195–1214. doi: 10.1002/aur.2288
Blokhuis, D., Millican, P., Roffey, C., Schrijvers, E., and Sentance, S. (2016). UK Bebras computational thinking challenge. Oxford, UK: University of Oxford.
Bottema-Beutel, K., LaPoint, S. C., Kim, S. Y., Mohiuddin, S., Yu, Q., and McKinnon, R. (2022). An evaluation of intervention research for transition-age autistic youth. Autism 27, 890–904. doi: 10.1177/13623613221128761
Bottema-Beutel, K. (2023). We must improve the low standards underlying “evidence-based practice”. Autism, 13623613221146440.
Boven, F. (2018). Special interests and inclusive academic learning: an autistic perspective. Adv. Autism 4, 155–164. doi: 10.1108/AIA-05-2018-0020
Bradshaw, J., Schwichtenberg, A. J., and Iverson, J. M. (2022). Capturing the complexity of autism: applying a developmental cascades framework. Child Dev. Perspect. 16, 18–26. doi: 10.1111/cdep.12439
Burgstahler, S., and Chang, C. (2014). Promising interventions for promoting STEM fields to students who have disabilities. Rev. Disabil. Stud. 5, 29–47.
Burke, K. M., Raley, S. K., Shogren, K. A., Hagiwara, M., Mumbardó-Adam, C., Uyanik, H., et al. (2020). A meta-analysis of interventions to promote self-determination for students with disabilities. Remedial Spec. Educ. 41, 176–188. doi: 10.1177/0741932518802274
CAST (2018). Universal design for learning guidelines, version 2.2. Wakefield, MA: Author. Available at: https://udlguidelines.cast.org/more/downloads
Chemers, M. M., Zurbriggen, E. L., Syed, M., Goza, B. K., and Bearman, S. (2011). The role of efficacy and identity in science career commitment among underrepresented minority students. J. Soc. Issues 67, 469–491. doi: 10.1111/j.1540-4560.2011.01710.x
Chen, Y. L., Murthi, K., Martin, W., Vidiksis, R., Riccio, A., and Patten, K. (2022). Experiences of students, teachers, and parents participating in an inclusive, school-based informal engineering education program. J. Autism Dev. Disord. 52, 3574–3585. doi: 10.1007/s10803-021-05230-2
Cheriyan, C., Shevchuk-Hill, S., Riccio, A., Vincent, J., Kapp, S. K., Cage, E., et al. (2021). Exploring the career motivations, strengths, and challenges of autistic and non-autistic university students: insights from a participatory study. Front. Psychol. 12:719827. doi: 10.3389/fpsyg.2021.719827
Cope, R., and Remington, A. (2022). The strengths and abilities of autistic people in the workplace. Autism Adulthood 4, 22–31. doi: 10.1089/aut.2021.0037
Cuevas, J. (2015). Is learning styles-based instruction effective? A comprehensive analysis of recent research on learning styles. Theory Res. Educ. 13, 308–333. doi: 10.1177/1477878515606621
Dawson, M., Mottron, L., and Gernsbacher, M. A. (2008). Learning in autism, learning and memory: a comprehensive reference. Cogn. Psychol. 2, 759–772. doi: 10.1016/B978-012370509-9.00152-2
Delis, D. C., Kaplan, E., and Kramer, J. H. (2001). Delis–Kaplan Executive Function System. San Antonio, TX: The Psychological Corporation. doi: 10.1037/t15082-000
Diener, M. L., Wright, C. A., Wright, S. D., and Anderson, L. L. (2016). “Tapping into technical talent: using technology to facilitate personal, social, and vocational skills in youth with autism spectrum disorder (ASD),” in Technology and the Treatment of Children with Autism Spectrum Disorder, 97–112.
Dunn, L., Diener, M., Wright, C., Wright, S., and Narumanchi, A. (2015). Vocational exploration in an extracurricular technology program for youth with autism. Work 52, 457–468. doi: 10.3233/WOR-152160
Dunn, C., Rabren, K. S., Taylor, S. L., and Dotson, C. K. (2012). Assisting students with high-incidence disabilities to pursue careers in science, technology, engineering, and mathematics. Interv. Sch. Clin. 48, 47–54. doi: 10.1177/1053451212443151
Edyburn, D. L. (2010). Would you recognize universal design for learning if you saw it? Ten propositions for new directions for the second decade of UDL. Learn. Disabil. Q. 33, 33–41. doi: 10.1177/073194871003300103
Falco, L. D., and Summers, J. J. (2019). Improving career decision self-efficacy and STEM self-efficacy in high school girls: evaluation of an intervention. J. Career Dev. 46, 62–76. doi: 10.1177/0894845317721651
Fisher, K. (2017). The importance of extracurricular STEM activities for students with disabilities. Proc. Interdiscipl. STEM Teach. Learn. Conf. 1, 4–18. doi: 10.20429/stem.2017.010103
Frank, F., Jablotschkin, M., Arthen, T., Riedel, A., Fangmeier, T., Hölzel, L. P., et al. (2018). Education and employment status of adults with autism spectrum disorders in Germany–a cross-sectional-survey. BMC Psychiatry 18, 1–10. doi: 10.1186/s12888-018-1645-7
Fredricks, J. A., Blumenfeld, P. C., and Paris, A. H. (2004). School engagement: potential of the concept, state of the evidence. Rev. Educ. Res. 74, 59–109. doi: 10.3102/00346543074001059
Gillespie-Lynch, K., Kapp, S. K., Shane-Simpson, C., Smith, D. S., and Hutman, T. (2014). Intersections between the autism spectrum and the internet: perceived benefits and preferred functions of computer-mediated communication. Intellect. Dev. Disabil. 52, 456–469. doi: 10.1352/1934-9556-52.6.456
Goetz, T., Lüdtke, O., Nett, U. E., Keller, M. M., and Lipnevich, A. A. (2013). Characteristics of teaching and students’ emotions in the classroom: investigating differences across domains. Contemp. Educ. Psychol. 38, 383–394. doi: 10.1016/j.cedpsych.2013.08.001
Goldfarb, Y., Gal, E., and Golan, O. (2019). A conflict of interests: a motivational perspective on special interests and employment success of adults with ASD. J. Autism Dev. Disord. 49, 3915–3923. doi: 10.1007/s10803-019-04098-7
Grandin, T., and Duffy, K. (2008). Developing talents: careers for individuals with asperger syndrome and high-functioning autism, Shawnee Mission, KS, USA: AAPC Publishing.
Hampshire, A., Highfield, R. R., Parkin, B. L., and Owen, A. M. (2012). Fractionating human intelligence. Neuron 76, 1225–1237. doi: 10.1016/j.neuron.2012.06.022
Hayes, L. (2023). Examining relationships between reported engagement, games created by autistic youths in a game design workshop, and individual differences. Undergraduate honors thesis in Psychology. Staten Island, NY: College of Staten Island.
Hedley, D., Uljarević, M., Cameron, L., Halder, S., Richdale, A., and Dissanayake, C. (2017). Employment programmes and interventions targeting adults with autism spectrum disorder: a systematic review of the literature. Autism 21, 929–941. doi: 10.1177/1362361316661855
Hupfeld, K. E., Abagis, T. R., and Shah, P. (2019). Living “in the zone”: hyperfocus in adult ADHD. ADHD Attent. Deficit Hyperacti. Disord. 11, 191–208. doi: 10.1007/s12402-018-0272-y
Jones, M., Milbourn, B., Falkmer, M., Vinci, B., Tan, T., Bölte, S., et al. (2023). A practical framework for delivering strength-based technology clubs for autistic adolescents. Autism Adulthood. doi: 10.1089/aut.2022.0038
Kanner, L. (1971). Follow-up study of eleven autistic children originally reported in 1943. J. Autism Child. Schizophr. 1, 119–145. doi: 10.1007/BF01537953
Kapp, S. K. (2020). Autistic community and the neurodiversity movement: stories from the frontline. Singapore: Palgrave MacMillan.
Kapp, S. K., Gillespie-Lynch, K., Sherman, L. E., and Hutman, T. (2013). Deficit, difference, or both? Autism and neurodiversity. Dev. Psychol. 49, 59–71. doi: 10.1037/a0028353
Keen, D., Webster, A., and Ridley, G. (2016). How well are children with autism spectrum disorder doing academically at school? An overview of the literature. Autism 20, 276–294. doi: 10.1177/1362361315580962
Keen, D., Adams, D., and Simpson, K. (2021). Teacher ratings of academic skills and academic enablers of children on the autism spectrum. International Journal of Inclusive Education 1–17. doi: 10.1080/13603116.2021.1881626
Kennedy, M. J., Thomas, C. N., Meyer, J. P., Alves, K. D., and Lloyd, J. W. (2014). Using evidence-based multimedia to improve vocabulary performance of adolescents with LD: a UDL approach. Learn. Disabil. Q. 37, 71–86. doi: 10.1177/0731948713507262
Kier, M. W., Blanchard, M. R., Osborne, J. W., and Albert, J. L. (2014). The development of the STEM career interest survey (STEM-CIS). Res. Sci. Educ. 44, 461–481. doi: 10.1007/s11165-013-9389-3
King-Sears, M. E., Johnson, T. M., Berkeley, S., Weiss, M. P., Peters-Burton, E. E., Evmenova, A. S., et al. (2015). An exploratory study of universal design for teaching chemistry to students with and without disabilities. Learn. Disabil. Q. 38, 84–96. doi: 10.1177/0731948714564575
Kochan, N. A., Croot, K., Crawford, J. D., Allison, K. C., Rossie, M., Brodaty, H., et al. (2022). Computer-administered neuropsychological assessment batteries: validity, reliability, and user experience in an Australian sample of community-living older adults in the CogSCAN Study. Alzheimers Dement. 18:e067031. doi: 10.1002/alz.067031
Kuokkanen, R. (2019). Restructuring relations: indigenous self-determination, governance, and gender, New York: Oxford Academic.
Lallukka, T., Mittendorfer-Rutz, E., Ervasti, J., Alexanderson, K., and Virtanen, M. (2020). Unemployment trajectories and the early risk of disability pension among young people with and without autism spectrum disorder: a nationwide study in Sweden. Int. J. Environ. Res. Public Health 17:2486. doi: 10.3390/ijerph17072486
Laureys, F., De Waelle, S., Barendse, M. T., Lenoir, M., and Deconinck, F. J. (2022). The factor structure of executive function in childhood and adolescence. Intelligence 90:101600. doi: 10.1016/j.intell.2021.101600
Lawson, R. A., Papadakis, A. A., Higginson, C. I., Barnett, J. E., Wills, M. C., Strang, J. F., et al. (2015). Everyday executive function impairments predict comorbid psychopathology in autism spectrum and attention deficit hyperactivity disorders. Neuropsychology 29, 445–453. doi: 10.1037/neu0000145
Lee, E. A. L., Black, M. H., Falkmer, M., Tan, T., Sheehy, L., Bölte, S., et al. (2020). “We can see a bright future”: parents’ perceptions of the outcomes of participating in a strengths-based program for adolescents with autism spectrum disorder. J. Autism Dev. Disord. 50, 3179–3194. doi: 10.1007/s10803-020-04411-9
Lent, R. W., Miller, M. J., Smith, P. E., Watford, B. A., Hui, K., and Lim, R. H. (2015). Social cognitive model of adjustment to engineering majors: longitudinal test across gender and race/ethnicity. J. Vocat. Behav. 86, 77–85. doi: 10.1016/j.jvb.2014.11.004
Lent, R. W., Sheu, H. B., Gloster, C. S., and Wilkins, G. (2010). Longitudinal test of the social cognitive model of choice in engineering students at historically black universities. J. Vocat. Behav. 76, 387–394. doi: 10.1016/j.jvb.2009.09.002
Mallory, C., and Keehn, B. (2021). Implications of sensory processing and attentional differences associated with autism in academic settings: an integrative review. Front. Psych. 12:695825. doi: 10.3389/fpsyt.2021.695825
Marino, M. T., Gotch, C. M., Israel, M., Vasquez, E. III, Basham, J. D., and Becht, K. (2014). UDL in the middle school science classroom: Can video games and alternative text heighten engagement and learning for students with learning disabilities? Learning Disability Quarterly 37, 87–99.
Martin, W. B., Yu, J., Wei, X., Vidiksis, R., Patten, K. K., and Riccio, A. (2020). Promoting science, technology, and engineering self-efficacy and knowledge for all with an autism inclusion maker program. Front. Educ. 5:75. doi: 10.3389/feduc.2020.00075
May, T., Rinehart, N., Wilding, J., and Cornish, K. (2013). The role of attention in the academic attainment of children with autism spectrum disorder. J. Autism Dev. Disord. 43, 2147–2158. doi: 10.1007/s10803-013-1766-2
Mayer, R. E. (2008). Applying the science of learning: evidence-based principles for the design of multimedia instruction. Am. Psychol. 63, 760–769. doi: 10.1037/0003-066X.63.8.760
McDonald, T. A., and Machalicek, W. (2013). Systematic review of intervention research with adolescents with autism spectrum disorders. Res. Autism Spectr. Disord. 7, 1439–1460. doi: 10.1016/j.rasd.2013.07.015
McDougal, E., Riby, D. M., and Hanley, M. (2020). Profiles of academic achievement and attention in children with and without Autism Spectrum Disorder. Research in Developmental Disabilities 106:103749. doi: 10.1016/j.ridd.2020.103749
Melber, L. M., and Brown, K. D. (2008). “Not like a regular science class”: informal science education for students with disabilities. Clear. House 82, 35–39. doi: 10.3200/TCHS.82.1.35-39
Meo, G. (2008). Curriculum planning for all learners: applying universal design for learning (UDL) to a high school reading comprehension program. Prevent. School Fail. 52, 21–30. doi: 10.3200/PSFL.52.2.21-30
Milton, D. E. (2012). On the ontological status of autism: the ‘double empathy problem’. Disabil. Soc. 27, 883–887. doi: 10.1080/09687599.2012.710008
Moon, N. W., Todd, R. L., Morton, D. L., and Ivey, E. (2012). Accommodating students with disabilities in science, technology, engineering, and mathematics (STEM). Atlanta, GA: Center for Assistive Technology and Environmental Access, Georgia Institute of Technology.
Moster, M., Kokinda, E., Re, M., Dominic, J., Lehmann, J., Begel, A., et al. (2022). ““Can you help me?” An experience report of teamwork in a game coding camp for autistic high school students” in Proceedings of the ACM/IEEE 44th International Conference on Software Engineering: Software Engineering Education and Training, Pittsburgh, PA, USA: ICSE-SEET’22, 50–61.
Mottron, L., Dawson, M., Soulieres, I., Hubert, B., and Burack, J. (2006). Enhanced perceptual functioning in autism: an update, and eight principles of autistic perception. J. Autism Dev. Disord. 36, 27–43. doi: 10.1007/s10803-005-0040-7
Murphy, S., MacDonald, A., Wang, C. A., and Danaia, L. (2019). Towards an understanding of STEM engagement: a review of the literature on motivation and academic emotions. Can. J. Sci. Math. Technol. Educ. 19, 304–320. doi: 10.1007/s42330-019-00054-w
Murray, D., and Lesser, M. (1999). Autism and computing. In Autism 99 Online Conference Organised by the NAS with the Shirley Foundation, London.
Murray, D., Lesser, M., and Lawson, W. (2005). Attention, monotropism and the diagnostic criteria for autism. Autism 9, 139–156.
Nauta, M. M., and Epperson, D. L. (2003). A longitudinal examination of the social-cognitive model applied to high school girls' choices of nontraditional college majors and aspirations. J. Couns. Psychol. 50, 448–457. doi: 10.1037/0022-0167.50.4.448
Navarro, R. L., Flores, L. Y., Lee, H. S., and Gonzalez, R. (2014). Testing a longitudinal social cognitive model of intended persistence with engineering students across gender and race/ethnicity. J. Vocat. Behav. 85, 146–155. doi: 10.1016/j.jvb.2014.05.007
Nicolaidis, C., Raymaker, D., Kapp, S. K., Baggs, A., Ashkenazy, E., McDonald, K., et al. (2019). The AASPIRE practice-based guidelines for the inclusion of autistic adults in research as co-researchers and study participants. Autism 23, 2007–2019. doi: 10.1177/1362361319830523
Ok, M. W., Rao, K., Bryant, B. R., and McDougall, D. (2017). Universal design for learning in pre-K to grade 12 classrooms: a systematic review of research. Exceptionality 25, 116–138. doi: 10.1080/09362835.2016.1196450
Oswald, C., Paleczek, L., Maitz, K., Husny, M., and Gasteiger-Klicpera, B. (2023). Fostering computational thinking and social-emotional skills in children with ADHD and/or ASD: a scoping review. Rev. J. Autism Dev. Disord., 1–20. doi: 10.1007/s40489-023-00369-3
Pashler, H., McDaniel, M., Rohrer, D., and Bjork, R. (2008). Learning styles: concepts and evidence. Psychol. Sci. Public Interest 9, 105–119. doi: 10.1111/j.1539-6053.2009.01038.x
Patten Koenig, K., and Hough Williams, L. (2017). Characterization and utilization of preferred interests: a survey of adults on the autism spectrum. Occup. Ther. Ment. Health 33, 129–140. doi: 10.1080/0164212X.2016.1248877
Pekrun, R., Lichtenfeld, S., Marsh, H. W., Murayama, K., and Goetz, T. (2017). Achievement emotions and academic performance: longitudinal models of reciprocal effects. Child Dev. 88, 1653–1670. doi: 10.1111/cdev.12704
Rao, K., and Meo, G. (2016). Using universal design for learning to design standards-based lessons. SAGE Open 6:215824401668068. doi: 10.1177/2158244016680688
Riccio, A., Delos Santos, J., Kapp, S. K., Jordan, A., DeNigris, D., and Gillespie-Lynch, K. (2020). Developing the multidimensional visual scale assessing affect, anxiety, pride, and energy through a research partnership with autistic scholars. Autism Adulthood 2, 87–100. doi: 10.1089/aut.2019.0067
Rogowsky, B. A., Calhoun, B. M., and Tallal, P. (2020). Providing instruction based on students’ learning style preferences does not improve learning. Front. Psychol. 11:164. doi: 10.3389/fpsyg.2020.00164
Román-González, M., Pérez-González, J.-C., and Jiménez-Fernández, C. (2017). Which cognitive abilities underlie computational thinking? Criterion validity of the computational thinking test. Comput. Hum. Behav. 72, 678–691. doi: 10.1016/j.chb.2016.08.047
Rong, Y., Yang, C. J., Jin, Y., and Wang, Y. (2021). Prevalence of attention-deficit/hyperactivity disorder in individuals with autism spectrum disorder: a meta-analysis. Res. Autism Spectr. Disord. 83:101759. doi: 10.1016/j.rasd.2021.101759
Shattuck, P. T., Narendorf, S. C., Cooper, B., Sterzing, P. R., Wagner, M., and Taylor, J. L. (2012). Postsecondary education and employment among youth with an autism spectrum disorder. Pediatrics 129, 1042–1049. doi: 10.1542/peds.2011-2864
Shogren, K. A., Little, T. D., Grandfield, E., Raley, S., Wehmeyer, M. L., Lang, K. M., et al. (2020). The Self-determination inventory–student report: confirming the factor structure of a new measure. Assess. Eff. Interv. 45, 110–120. doi: 10.1177/1534508418788168
Shogren, K. A., Wehmeyer, M. L., Palmer, S. B., Rifenbark, G. G., and Little, T. D. (2015). Relationships between self-determination and postschool outcomes for youth with disabilities. J. Spec. Educ. 48, 256–267. doi: 10.1177/0022466913489733
Simon, R. A., Aulls, M. W., Dedic, H., Hubbard, K., and Hall, N. C. (2015). Exploring student persistence in STEM programs: a motivational model. Can. J. Educ. 38:n1.
Smith, S. J., Rao, K., Lowrey, K. A., Gardner, J. E., Moore, E., Coy, K., et al. (2019). Recommendations for a national research agenda in UDL: outcomes from the UDL-IRN preconference on research. J. Disabil. Policy Stud. 30, 174–185. doi: 10.1177/1044207319826219
Song, S. H., and Keller, J. M. (2001). Effectiveness of motivationally adaptive computer-assisted instruction on the dynamic aspects of motivation. Educ. Technol. Res. Dev. 49, 5–22. doi: 10.1007/BF02504925
Summers, J. J., and Falco, L. D. (2018). Evaluating construct validity of the Middle School Self-Efficacy Scale with high school adolescents. Journal of Career Development.
Swanson, J. M., Schuck, S., Porter, M. M., Carlson, C., Hartman, C. A., Sergeant, J. A., et al. (2012). Categorical and dimensional definitions and evaluations of symptoms of ADHD: history of the SNAP and the SWAN rating scales. Int. J. Educ. Psychol. Assess. 10, 51–70.
Taurines, R., Schwenck, C., Westerwald, E., Sachse, M., Siniatchkin, M., and Freitag, C. (2012). ADHD and autism: differential diagnosis or overlapping traits? A selective review. ADHD Attent. Deficit Hyperact. Disord. 4, 115–139. doi: 10.1007/s12402-012-0086-2
Taylor, E. C., Livingston, L. A., Clutterbuck, R. A., Callan, M. J., and Shah, P. (2023). Psychological strengths and well-being: strengths use predicts quality of life, well-being and mental health in autism. Autism. doi: 10.1177/13623613221146440
Tellhed, U., Bäckström, M., and Björklund, F. (2017). Will I fit in and do well? The importance of social belongingness and self-efficacy for explaining gender differences in interest in STEM and HEED majors. Sex Roles 77, 86–96. doi: 10.1007/s11199-016-0694-y
Ulrich Hoppe, H., and Werneburg, S. (2019). “Computational thinking—more than a variant of scientific inquiry!,” in Computational Thinking Education. eds. S.-C. Kong and H. Abelson (Singapore: Springer), 13–30.
Vincent, J., and Ralston, K. (2023). Uncovering outcomes for autistic university graduates in the UK: an analysis of population data. Autism, 13623613231182756.
Waisman, T. C., Williams, Z. J., Cage, E., Santhanam, S. P., Magiati, I., Dwyer, P., et al. (2022). Learning from the experts: evaluating a participatory autism and universal design training for university educators. Autism 27, 356–370. doi: 10.1177/13623613221097207
Ward, M. J., and Meyer, R. N. (1999). Self-determination for people with developmental disabilities and autism: two self-advocates’ perspectives. Focus Autism Other Dev. Disabil. 14, 133–139. doi: 10.1177/108835769901400302
Wehman, P., Schall, C., McDonough, J., Sima, A., Brooke, A., Ham, W., et al. (2020). Competitive employment for transition-aged youth with significant impact from autism: a multi-site randomized clinical trial. J. Autism Dev. Disord. 50, 1882–1897. doi: 10.1007/s10803-019-03940-2
Wehmeyer, M. (1992). Self-determination: critical skills for outcome-oriented transition services. J. Vocat. Spec. Needs Educ. 15, 3–7.
Wiebe, E., London, J., Aksit, O., Mott, B. W., Boyer, K. E., and Lester, J. C. (2019). Development of a lean computational thinking abilities assessment for middle grades students. Proceedings of the 50th ACM Technical Symposium on Computer Science Education Minneapolis, MN, USA: ACM, 456–461.
Zeldovich, L. (2018). How history forgot the woman who defined autism. Spectrum. Available at: https://www.spectrumnews.org/features/deep-dive/history-forgot-woman-defined-autism/
Keywords: participatory, STEM—science technology engineering mathematics, autistic, youth, universal design (UD), engagement, self-efficacy, self-determination
Citation: Gillespie-Lynch K, Grossman E, Herrell J, Riccio A, Delos Santos J, Biswas S, Kofner B, Dwyer P, Rosenberg B, Hwang-Geddes L, Hurst A, Martin WB, Pak E, O'Brien S, Kilgallon E, Shevchuk-Hill S and Dave S (2023) A participatory approach to iteratively adapting game design workshops to empower autistic youth. Front. Educ. 8:1179548. doi: 10.3389/feduc.2023.1179548
Edited by: Emily Hotez, UCLA Health System, United States
Reviewed by: Dylan Cooper, Drexel University, United States; Meghan Carey, Drexel University, United States
Copyright © 2023 Gillespie-Lynch, Grossman, Herrell, Riccio, Delos Santos, Biswas, Kofner, Dwyer, Rosenberg, Hwang-Geddes, Hurst, Martin, Pak, O’Brien, Kilgallon, Shevchuk-Hill and Dave. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Kristen Gillespie-Lynch, kgillyn@gmail.com