- 1Department of Middle & Secondary Education, Georgia State University, Atlanta, GA, United States
- 2Department of Learning Sciences, Georgia State University, Atlanta, GA, United States
The authors explore the intersection of AI and equity in education, presenting a workshop designed for marginalized youth in urban Mexico. This reflective essay stems from their participation in the International Society for Technology in Education’s AI and education course. The lead author, a language education researcher whose scholarship emphasizes equity, crafted a presentation on AI’s everyday applications for marginalized Mexican youth. Collaborating organically, the co-authors positioned this project as the course’s final collective output, fostering a unique blend of expertise and community engagement. The lead author designed the presentation for an organization with which she has partnered for over a decade, an educational project that supports learning and life skills, rooted in Don Miguel Ruiz’s Four Agreements, for children who live in a community of unofficial housing on the edge of railroad tracks in Cuernavaca, Mexico. The project aimed to connect the global applications of AI to marginalized Mexican youth through a two-hour workshop facilitated in Spring 2023. Two additional faculty, technology education researchers, joined the effort to promote computational literacy equitably through culturally relevant pedagogy. The authors highlight their diverse scholarly backgrounds, positioning themselves as individuals from the margins, and share their motivation for creating a cogent and engaging workshop for the youth. The lead author reports on the unexpectedly rich conversation that unfolded during the workshop, underscoring the potential for AI to be inclusive as society navigates its integration into education.
1 Introduction
When I, Sue (author pseudonym), heard in early 2023 about the chance to take a course on AI and education with colleagues who understand learning technology and are faculty in this area, I gasped and recognized genuine enthusiasm in myself. This enthusiasm struck me as new, as I had been much more reluctant in the past to integrate, for instance, state-mandated computer science standards into my teacher education language methods course. My work has historically been more anthropological and overtly focused on equity. ChatGPT, the online AI tool that allows individuals to generate unprecedented levels of (usually) useful information to solve problems, had suddenly become a much-discussed tool only a couple of months prior.
Fortunately, I knew and trusted the faculty member who invited me. Lauren, based on my prior work with her at the university, would be respectful of my oversights and misunderstandings. I am a migrant to the world of digital anything and trained as a skeptic. Also, the principal creator of instructional technology standards, the International Society for Technology in Education (ISTE, 2023), was offering the course, which guaranteed a level of quality that was exciting and, to be honest, intimidating for me. At the same time, I was interested in exploring how these emerging technologies might or might not shift access for historically marginalized populations, as well as ethical concerns regarding how humans approach our social contexts–concerns I realized I shared with my co-authors. Ultimately, our group consisted of the two instructional design co-authors here and an additional teacher education faculty member from a different department at our university.
For me, Lauren, as a professor of educational technology who focuses on computer science education, the topic and modality of this AI course were perfect for my needs. While I was aware of many perspectives and budding research about AI education and AI in education, I did not have time to keep up with the flurry of content being produced about it. First, this course provided consolidated content and professional development within my area, educational technology. Second, working in a cohort of educational technology and teacher preparation faculty was ideal from my perspective. Understanding the technology is only half the equation in educational technology. I also needed to understand how the technology could be used in education. Thus, the perspectives of my teacher preparation colleagues were invaluable.
For me, Janet, a scholar in the field of educational technology for about 10 years, the concept of AI was not new. With technology constantly and rapidly evolving, it was clear to me that AI would soon make its way into our daily lives and K-12 classrooms. As my recent scholarly work has focused more on K-12 computer science education (e.g., Kim et al., 2022; Karlin et al., 2023; Margulieux et al., 2024), I prioritize staying up-to-date in the field, despite lacking a computer science background. When I first heard about AI in education from several leading researchers in the field, the discussion centered on ethical concerns about how AI is used in our daily lives, including in education. Therefore, when I learned about the ISTE AI Exploration course, I thought it was an excellent opportunity to learn more about AI and its applications and impacts on K-12 education and teacher education. For me, the most valuable experience was the engaging and interactive discussion with other teacher education faculty about AI in education, which helped me think outside of my silo and reflect on how I would teach the concept of AI to pre-service and in-service teachers.
We offer a critically reflective essay. Critical reflexivity is a qualitative approach that allows us to explore and question complex issues related to power and to pose additional questions as we examine how we dialogued through our final project while centering the most novice member’s work (Palaganas et al., 2017; Castell et al., 2018; Kasun, 2018). Critical reflexivity holds that attention to issues of power and to the directionality of power flows allows researchers to best reflect on their practice; the work centers iterative thinking about what one did, as opposed to other methods that are oriented toward systematicity in recording, data analysis, and so on (Coburn and Gormally, 2017). The lead author’s project would eventually aim to provide additional access to understanding AI to a group of 20 historically marginalized youth in an educational project in urban Mexico with whom she had collaborated for a decade (Kaneria et al., 2023). As part of this work, the lead author kept field notes from the experience both prior to speaking with the youth and immediately after, as well as during email dialog and discussions with her co-authors. These notes form the source of most of the reported critical reflections (including further reflection, beyond the initial notes, related to issues of power and the positioning of the youth and the authors). That one workshop would help all the authors consider beginning entry points of access within what is often referred to as the Global South and education. The Global South is often recognized as the “majority world” (Mignolo, 2007; Santos, 2014), which has suffered the ill effects of colonization, resource destruction, and several forms of oppression. The Global South often successfully resists through heritage culture and language maintenance, collective organizing, and care for Mother Earth, among other strategies (Esteva and Prakash, 2014; Mignolo and Walsh, 2018). We, as co-authors, recognize that we all, as humans, have much to learn from the persistence and resistance of the Global South on a planet that faces species annihilation (Kasun and Kaneria, 2020). We provide a contextualization of computer science, education, and AI as well as background on how it relates to the Global South. We then describe our experience and provide insights for future work.
2 Context
Since the 1960s, educators have explored how to apply computer science in education to improve learning (Papert, 1980). Their goal was not primarily to develop the next generation of computer scientists but instead to give children a domain-independent toolkit for interacting with the world, processing information, and utilizing additional tools for problem-solving (Papert, 1980; diSessa, 2000). Progress toward this goal reached an upward inflection point in 2006 when Jeannette Wing popularized the concept of computational thinking as a thought process for formulating problems so that they can be solved algorithmically (Wing, 2006; Cuny et al., 2010). For example, in our teacher preparation programs, we frame programming a computer as teaching a computer how to solve a problem (Margulieux et al., 2022). Because teaching often illuminates gaps in one’s own knowledge, creating a program allows students to explore how well they understand problem-solving concepts with feedback based on how well the computer can solve novel problems.
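A minimal sketch, in Python, may help illustrate what “teaching a computer how to solve a problem” can look like in practice; the task and the function below are our own hypothetical illustration, not an example drawn from the cited programs.

# The learner "teaches" the computer an explicit, step-by-step rule for a
# small problem: deciding whether a phrase is a palindrome. Making each step
# explicit exposes gaps in the learner's own understanding of the problem.

def is_palindrome(phrase):
    """Return True if the phrase reads the same forwards and backwards."""
    cleaned = phrase.lower().replace(" ", "")  # step 1: normalize the input
    return cleaned == cleaned[::-1]            # step 2: compare with its reverse

# Feedback comes from checking the rule against novel cases, mirroring how a
# teacher checks whether a lesson generalizes.
for test in ["anita lava la tina", "oso", "escuela"]:
    print(test, "->", is_palindrome(test))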
Computing (i.e., computer science) as a tool in education is separate from computers as a tool, as the latter requires someone else to create a technical solution that learners use, while the former allows learners to be the creators of their own solutions. Thus, computing integration is considered separate from educational technology integration, which is commonplace. The primary barrier to computing integration in education is the amount of time and effort it takes to develop technical skills (Kong and Lai, 2021; Margulieux et al., 2023). Computing integration has cycled through various phases, like computational thinking and data science, as educators have tried to improve the benefits for learners. In the context of this study, the goal of the workshop was to introduce computing concepts and equip marginalized Mexican youth with computational and life skills for solving localized problems rather than to prepare them all to become computer scientists.
With generative artificial intelligence (AI) tools, like ChatGPT, the technical skill required to use computing tools has dropped dramatically. Now, AI can translate a user’s natural language prompt into a programming language that a computer can understand. Thus, suddenly, people have reasonable access to the tools that computer science education has been trying to give them for decades. However, to use these tools responsibly, ethically, and effectively, students still need to learn how computers solve problems and how generative AI creates responses. Teachers, and teacher education programs, are key in this work. Empowering teachers and learners with computing skills is necessary for them to create their own solutions.
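To make this shift concrete, the sketch below pairs a plain-language request with the kind of Python a generative AI tool might return; the prompt wording and the function are hypothetical illustrations on our part, not output from any particular model or material from the course.

# Hypothetical prompt a learner might type into a generative AI tool:
#   "Write a program that takes a list of daily temperatures and tells me
#    which day was the hottest."
# Illustrative code of the sort such a tool might generate in response:

def hottest_day(temperatures):
    """Return the index of the day with the highest temperature."""
    hottest_index = 0
    for day, temp in enumerate(temperatures):
        if temp > temperatures[hottest_index]:
            hottest_index = day
    return hottest_index

week = [28.5, 31.0, 29.2, 33.4, 30.1, 27.8, 32.6]  # one week of readings
print("The hottest day was day", hottest_day(week) + 1)

The learner never writes the loop themselves, yet still benefits from understanding why the program steps through each day, which is the kind of understanding the paragraph above argues students still need.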
AI is increasingly becoming a part of our everyday lives through tools such as chatbots, automated banking, and automobiles, and it is steadily making its way into K-12 education. Despite this, the concept of integrating AI into education is relatively new, and some teacher educators may be unsure about its effectiveness in enhancing teaching and learning due to a lack of understanding of AI’s potential benefits and applications (Crompton and Burke, 2022; Zafari et al., 2022). According to Crompton and Burke (2022), AI can be applied in K-12 education in three main areas: pedagogy, administration, and subject content. These applications demonstrate that AI can enhance instruction and expand students’ learning opportunities and outcomes by supporting personalized learning in K-12 education, including through digital assistants and chatbots used to support classroom management. For instance, AI-enabled virtual teaching assistants, like Microsoft’s Cortana and Google Assistant, are being used for tasks like finding course content and learning materials in learning management systems. Some AI-integrated e-learning platforms, such as Duolingo and Khan Academy, can provide more personalized learning content and alternative learning solutions for learners with diverse needs. In addition, chatbots are gaining popularity for their ability to provide instant responses to students’ queries (Wu and Lin, 2023).
To further support AI education in K-12 settings, AI4K12 (2021) has proposed national guidelines and resources for five key AI concepts that they claim all students should learn: perception, representation and reasoning, learning, natural interaction, and social impact. These big ideas center on how AI and computers operate and interact with humans, as well as their impacts on our society. The PowerSchool (2023) Education Focus Report highlighted that educational leaders believe AI has the potential to enhance personalized learning and revolutionize the future of education. The report, oriented toward a Global North context, also stated that “AI could level the playing field in K-12 schools by providing equitable support to students, allowing them to quickly learn basic skills such as essay writing or mathematics, while teachers can focus on more advanced concepts” (p. 3). However, it is crucial to address ethical issues, such as gender and racial bias, student data privacy concerns (Akgun and Greenhow, 2021; Crompton and Burke, 2022), and questions of global access. These challenges need to be thoroughly examined and addressed to ensure a safe, fair, and enhanced learning experience for all students.
AI holds the potential to transform education. At the same time, the opportunities and challenges AI presents vary based on the unique educational landscapes of different regions as well as on peoples who have been historically marginalized over several centuries. Many in the Global South have made efforts to incorporate AI into their educational systems. By Global South, we refer to those whose knowledges and identities have been challenged through the legacies of colonization and imperialism and who, nonetheless, maintain often holistic, connectivist approaches to community, growth, and sustaining an increasingly ailing planet. We anticipate these efforts as apertures toward dynamic and novel implementations that could also serve as examples for the Global North (the peoples who have historically done the colonizing and who most fully experience what we would call modernity)–a potential inversion of teaching and leading. For instance, in India, the government adopted an AI4All initiative and partnered with technology sectors to infuse AI in curricula and projects to ensure accessible AI education in schools, enhance digital literacy, and engage learners in AI education for the digital-age workforce (UNESCO, 2022). As a key player advocating for ethical and accessible AI education in the Global South, UNESCO emphasizes the need to develop AI strategies that are aligned with international norms and principles to ensure responsible AI application in education (UNESCO, 2023).
While AI has potential and has created more accessible knowledge and more equitable opportunities in education, structural limitations in the Global South and geographically concentrated benefits in the Global North have produced an AI disparity and divide (Arun, 2020; Yu et al., 2023). It is important to recognize that the potential of AI as an equalizer is contingent on addressing the need for equitable, accessible, and ethical AI in education. Scholars have also emphasized that policymakers and decision-makers should be mindful and aware of systems of discrimination when considering AI applications in the Global South (Arun, 2020). This is because the fundamental operations of AI, which include classification, ranking, and training on big data, can unintentionally incorporate and embed existing bias and discrimination (Barocas and Selbst, 2016). Therefore, it is crucial to take into account the potential impact of AI solutions in perpetuating or mitigating societal inequalities.
Without these careful considerations, the application of AI in education might exacerbate educational disparities rather than equalize educational accessibility and opportunities. To narrow the AI divide, there have been suggestions to adopt a human-centered approach to AI, also known as human-centric AI, in education. This approach prioritizes the needs, relevance, and personalization of learners and educators in the AI design and application process, with a focus on the transparency of AI algorithms and systems. To increase engagement and promote accessible, equitable, and ethical AI in education, various enabling factors must be considered, such as the infrastructure for resource distribution and support, national policies and visions for AI education, training for educators, and culturally relevant and contextualized AI education materials (Arun, 2020; UNESCO, 2021; U.S. Department of Education, 2023).
3 Details
In mid-February 2023, four of us, two instructional technology faculty, an early childhood education literacy faculty member, and I, Sue, joined forces to take this 10-week course together voluntarily. Each week, we individually accessed the weekly curriculum. We then discussed, as a group of four, our questions and experiences engaging the material. My colleagues were gracious with all our queries, and I felt I was learning something new.
We needed to complete a final project as a group by the end of our 10 weeks of study and discussions. The parameters were productively broad—some way we would take our new learning from the course into the education world. Each week we collectively submitted forms of work, usually thoughts and ideas about working through and/or adapting certain AI components in education. By the end, however, we needed a capstone project to be submitted collectively.
Two of us intended to take our direct knowledge from the course into our larger work. For me, Sue, this meant providing training to young people with whom I would get to work in Cuernavaca, Mexico, regarding AI. These youth are participants in a program I have worked with for over 10 years; they usually no longer attend official school and learn life skills at an organization established to serve these communities who live right up against the railroad tracks. While no one was forcing me to do this, something in me yearned to see what would happen if AI was brought to them via an interactive, live presentation. This spoke to my sense of equity in terms of providing the youth with early tools related to everyday, online uses (and dangers) of AI.
It was scary to note that, despite having borrowed heavily (with full permission from ISTE) from presentations about AI already shared in the class, I had to have my peers consider and provide feedback on my presentation about AI and education. I decided to follow a similar line of thinking from the course—explaining briefly what AI was, including information about its history, such as the Turing Test and Eliza, and then deeper knowledge about what AI is, such as neural networks, large language models, and perception.
Then, we would get to the rich stuff of AI’s application, including learning how to differentiate between AI-generated and real photos, spot deep fake news, generate novel images through language, and of course, use ChatGPT.
To me, it was interesting to convey the who of my audience to my thoughtful group member colleagues. On one hand, I was relieved that two were from other countries and all had seen what under-resourced countries are like, both in terms of community strengths and challenges such as weaker physical infrastructures. I was both guessing at and using on-the-ground knowledge from my long-time peers in Mexico to convey the “who.” These were people who, like much of what Illich (2013) or Esteva and Prakash (2014) refer to as the “two-thirds world,” have regular Internet access through phones, not laptops or tablets, and who are as intensely curious as anyone in the world about this technology. I knew I would have a laptop, a projector, and wifi to work with for a two-hour presentation. I had also worked with these young people in the past and had several concerns: Would they see the relevance of AI in their lives? Would my presentation be dynamic enough to maintain their attention? Would they be able to take the learning with them to engage AI, or at least have a better grasp of what it was? I knew I would be speaking with about 20 people ages 12–17 after they had eaten their breakfast, provided at the organization, and after they had all collectively cleaned up the eating space and helped tidy the communal kitchen.
Our collective discussions were helpful prior to taking my presentation and later repackaging it into Spanish. First, my colleagues cautioned me that I was likely overly technical in my introduction and in the discussions of neural networks. Lauren was particularly helpful in hammering home the point of finding a way for the youth to see the relevance early on. This was a curious task, as relevance in the U.S. might be established by thinking about robotic vacuums, ATMs, or chatbots used with companies when, say, paying bills. All of these are non-starters with the audience I would work with. Instead, I gave them background about how self-driving cars had their first accident in which a passenger died in 2016. And ultimately I decided to start the entire presentation with information about how voices can be very easily cloned with AI and present real dangers with fake kidnappings and extortion. It was through our dialogic process that I was able to move beyond my fears and extend the presentation into thinking about how I would really meet them. In truth, I was deeply fearful I would come off as boring.
The workshop went surprisingly well (the slides for the presentation can be found via the link in footnote 1). I knew a few of the adult teachers and asked the students if they also had short names like I did. Some children then told me their names. I explained we would look at artificial intelligence and asked if they had any notion of what it was. They said no, and then I explained it was like programming, like a language, just as I spoke English and they spoke Spanish, but that this programming used massive sets of data to create problem-solving and so on. Then I explained that some of this type of problem-solving included facial recognition and personal assistants. I used my Siri as an example and asked it what the closest gas stations were. The students could see several good answers came up, and I explained how fast it was and how dynamic it was to wherever I might be. I tried to get other kids’ faces to unlock my screen, and it did not work. We also discussed throughout the implications for medical research, such as a (human) radiologist missing a broken bone or tumor where AI might have a better probability of locating it and, thus, helping get better treatment. They were on the edge of their seats for much of that. We also discussed music streaming and how “it was God” behind the selection of music. I asked a kid if rancheras ever played for him, and he said never. Then I asked another kid if rap music played for him. He looked at me, surprised, and said, “How did you know?” happily, and he meant it.
These were the kinds of examples I brought in to show the everyday and increasing use of AI in their lives. I was surprised to be able to go over ideas such as reasoning, natural language processing, and perception as elements of AI to eager listening and questions. We did a quiz on what is “computer programmed” versus “AI,” and the kids were correct in assigning calculators to programmed and self-driving cars to AI (among others). They enjoyed checking if they were right at the quiz’s end. The kids were surprised to learn of self-driving cars, though. This became an example through which we all talked extensively about ethics—the electric bean sorter seemed too obvious an example to them. They wanted things that would clean their entire living space instead. It had occurred to me early on to discuss voice-cloning software and how it could be linked with kidnappings, a real problem here, so I brought that up, and they were all concerned. We waited until the end to discuss having a safe word, a word shared only among trusted loved ones as a code (a safe word can be used, for instance, so that when a young person calls her parent to get help, the parent understands there is a pressing emergency that the child does not want to discuss in front of others).
Some of the most dynamic work included playing with some AI tools together, where they gave me questions for ChatGPT (in hindsight, I should have asked them how ChatGPT was programmed, to help review). They asked questions such as, “Is there life outside of earth?” (the short answer was most likely) and “When will it rain in Cuernavaca?” (this answer was not robust, just that “it rains all year,” which was lackluster because there is a rainy season here).
Another kid asked what was Taylor Swift’s most popular song… the answer eventually came as “Shake It Off.”
The room I was in eventually got to over 90 degrees Fahrenheit, with some overhead fans providing minimal relief. The twenty 12- to 17-year-olds got a little distracted at times, but they mostly stuck with me. We did AI visual arts together; the first image they requested was of two boys from the group (using just their names) as cowboys in the afternoon. The image was comical but also kind of cool. The kids asked for a wolf-eagle (neat!) and an elephant, a ghost, and a cow. That was all surreal to me. I had downloaded a photo of one of their teachers’ faces, and the images did not come back pretty, which was irritating. We also played guess-the-real-face, which was actually hard, perhaps in part due to projecting on a physical screen during full daylight. Subtleties were harder to tease out. One of the youths had a good understanding of programming, asked a lot of questions, and provided strong answers; I even told him I wished my university students had such good answers at times.
We finally got to the safe word after nearly 2 hours, and I had the kids share their imagined safe words, cautioning them to go home and create new ones. They spoke with animation and gave a few examples, and then I implored them to use them at home. The students and adult facilitators applauded–an unusually enthusiastic ending–and I understood I had succeeded in keeping their attention about AI, and that we had all learned something from the experience.
4 Discussion
A few days prior to speaking in Cuernavaca, I had a meeting with the woman who had established the organization’s most recent curricular format, the one that centered Ruiz’s (1997) The Four Agreements as a curricular foundation. She asked, “Why would you go teach them about AI?” with trepidation about what my experience might be like. I had a lot of anxiety that this would go terribly, that they would be bored and let me know they were bored, and that I would not last longer than 15 minutes. Luckily, and this is perhaps the most important part of my learning, these young people who do not own personal computers were highly engaged and highly interested in what I had to share. They, too, wanted to be present in their quickly changing technological realities. When I said maybe they should be programmers, I did not mean it falsely, and it seems like the kind of thing they could bring themselves to learn to do online. I was also honest and explained that some of the things I was sharing were new to me, such as understanding that computer science drew on neurological research.
If I were to get to work with them again, I would probably have them do deeper dives into where AI is in Mexico and where they might be further impacted, preferably with devices in their hands or on laptops. For a first, two-hour introduction, it was exciting. It was also evidence that a novice to AI in education could successfully share the latest developments and make them relatable to youth.
Lauren and Janet were not surprised by the results that Sue reported. As expected, the content of the course focused on the capabilities of AI, and the faculty team spent much of the course thinking about how teachers might apply these capabilities to solve problems. There was one aspect of how AI works that repeatedly surfaced as important for teachers to understand, though, which is bias in generative AI. The data upon which large language models are trained, and how they are trained (i.e., by humans), lead to the same biases that humans have, particularly around our culture and worldview. While teaching how AI models are trained is not strictly necessary in order to use them, based on our discussions, it seemed worthwhile to include this aspect of how the technology works so that teachers could apply it more responsibly and ethically.
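As a toy illustration of this point (our own sketch, not material from the course or from any real model), the Python snippet below “trains” a trivially simple next-word predictor on a small, deliberately skewed text sample; its predictions can only echo the associations that dominate its training data, which is the same mechanism, at vastly larger scale, through which generative AI absorbs human bias.

from collections import Counter, defaultdict

# A tiny, deliberately skewed "training corpus": the associations it contains
# are the only associations the model can ever reproduce.
corpus = (
    "the engineer fixed the server . "
    "the engineer fixed the bug . "
    "the nurse helped the patient . "
).split()

# "Training": count which word follows each word in the corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# The "model" simply mirrors the skew of its data: after "engineer" it always
# predicts "fixed", and after "nurse" it always predicts "helped".
print(predict_next("engineer"))
print(predict_next("nurse"))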
Based on this workshop experience, it is clear that AI can be used by all and be useful for all; indeed, we succeeded in designing a meaningful application of AI for youth in the Global South (Arun, 2020) in this one instance. The urban youth who experienced the workshop demonstrated not only a deep desire to learn about AI but also a set of tools at their own disposal to create with AI. We also note that even when the person sharing the knowledge is still a relative novice, the AI itself can help as a tool in generating the source material for creation. Co-creation of images, dialog related to making sense of fake images and news, and learning about what AI is were rich pathways into this early phase of sharing generative AI tools; we harnessed effective AI pedagogies with youth, despite generative AI being relatively nascent for the novice instructor (Crompton and Burke, 2022). We also practiced the ways we could engineer prompts to learn from AI through ChatGPT. In some ways, the tools are clearly universal, though they are also always contextual. The youth had asked ChatGPT about the climate and rains in their city; asking about, say, Tokyo would have been less productive or interesting in this case.
We also saw the technology’s current limits and recognized our own embodied knowledges (Mignolo, 2007) as systems of understanding still well worth maintaining. Thus, we recognize that the Global South should not only be a sort of consumer of AI but also a generator of AI, a generator whose depth of knowledge sometimes surpasses and circumvents the modernity of the Global North. Teachers and teacher educators must be aware of these issues so as to help share knowledge related to AI and provide access. As we remain concerned about bias and ethics in AI, we recognize that Global South knowledges and knowledge production can not only be engaged but can also, we argue, become a source of the content that AI uses as it makes predictive knowledge further available.
We caution that the creative process is best engaged through dialog, with a sense of the conditions in which the people engaging the AI live. For instance, in this context, a “safe word” is a non-AI tool that can be used to mitigate the negative impacts of AI, and it was of very high interest to the participants. We also suggest teaching the tools in a way that recognizes the lived realities of people in the Global South (e.g., Esteva and Prakash, 2014). For instance, instead of suggesting a robotic vacuum as an AI tool for cultures that do not use vacuum cleaners, we discussed how a bean sorter might work (even though this still was not the best example!). For future work, we suggest meeting with local participants prior to offering the workshop (which happened to be couched in a larger, decolonial study abroad program). The initial intent was to pilot the workshop; a better design would have included a survey the participants could have taken, with an opportunity to provide feedback on how to improve and to suggest content for future workshops–ideally, workshops they could eventually help to create.
What does this mean for teacher education researchers outside of technology and education? We believe this work illustrates, first, that it is worthwhile to find entry points of dialog and real learning with the “tech-averse” among us. Part of that bridging includes sensitivity from those who are trying to foster a broadening of participation from within the circle in which computing and educational technology are generally designed. In this case, Lauren and Janet were kind, answered questions, encouraged our participation, and even congratulated our efforts. We also found that cross-discipline dialog was fascinating and useful. Lauren and Janet were able to convey so much about technology and computational thinking to the two of us who were outsiders, and the outsiders were able to share both a deeply applied framework for engaging AI with early childhood literacy and a host of ethical concerns, such as the question related to epistemology relayed in the introduction above.
In the meantime, we each continue in our unique roles, but we continue to find some generative overlap. When we began the ISTE course together, this idea had not been born yet, but now, as with AI, it seems anything can be possible. For us, that possible must include the knowledges, designs, and voices of the Global South.
Data availability statement
The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author/s.
Author contributions
GK: Conceptualization, Project administration, Writing – original draft, Writing – review & editing. Y-CL: Conceptualization, Writing – original draft, Writing – review & editing. LM: Conceptualization, Writing – original draft, Writing – review & editing. MW: Writing – original draft, Writing – review & editing.
Funding
The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This work was supported by the National Science Foundation under grant #1941642.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The author(s) declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Footnotes
1. ^https://docs.google.com/presentation/d/1l1FyhatAgv344LdAEBRTnrdi0ch4iEY7/edit?usp=sharing&ouid=117326083751569784291&rtpof=true&sd=true
References
AI4K12. (2021). Available at: https://ai4k12.org/
Akgun, S., and Greenhow, C. (2021). Artificial intelligence in education: addressing ethical challenges in K-12 settings. AI Ethics 2, 431–440. doi: 10.1007/s43681-021-00096-7
Arun, C. (2020). “AI and the global south: designing for other worlds” in The Oxford handbook of ethics of AI. eds. M. D. Dubber, F. Pasquale, and S. Das (Oxford: Oxford University Press), 589–606.
Barocas, S., and Selbst, A. D. (2016). Big data's disparate impact. Calif. Law Rev. 104, 671–732. doi: 10.15779/Z38BG31
Castell, E., Bullen, J., Garvey, D., and Jones, N. (2018). Critical reflexivity in indigenous and cross-cultural psychology: a decolonial approach to curriculum? Am. J. Community Psychol. 62, 261–271. doi: 10.1002/ajcp.12291
Coburn, A., and Gormally, S. (2017). Critical reflexivity. Counterpoints 483, 111–126. http://www.jstor.org/stable/45177774
Crompton, H., and Burke, D. (2022). Artificial intelligence in K-12 education. SN. Soc. Sci. 2, 1–14. doi: 10.1007/S43545-022-00425-5
Cuny, J., Snyder, L., and Wing, J. M. (2010). Demystifying computational thinking for non-computer scientists. Available at: http://www.cs.cmu.edu/~CompThink/resources/TheLinkWing.pdf
Esteva, G., and Prakash, M. S. (2014). Grassroots postmodernism: remaking the soil of cultures. United Kingdom: Zed Books.
ISTE. (2023). Available at: https://iste.org/
Kaneria, A. J., Kasun, G. S., and Trinh, E. (2023). I am enough: A decolonial journey of conocimiento. J. Latinos Educ. 22, 874–892.
Karlin, M., Ottenbreit-Leftwich, A., and Liao, Y. C. (2023). Building a gender-inclusive secondary computer science program: teacher led and stakeholder supported. Comput. Sci. Educ. 33, 117–138.
Kasun, G. S. (2018). Chicana feminism as a bridge: the personal struggle of a white woman researcher seeking an alternative theoretical lens. J. Curric. Theor. 32, 115–133.
Kasun, G. S., and Kaneria, A. J. (2020). Decolonizing through a new tribalism: the recognition of warriors through a re-evolutionizing lifespace in urban Mexico, in Evidence-based inquiries in ethno-STEM research: Investigations in knowledge systems across disciplines and transcultural settings. Eds. I. C. Chahine and J. De Beer (Information Age Publishing), 377–396.
Kim, J., Liao, Y. C., Guo, M., Karlin, M., and Leftwich, A. (2022). Why should we be Integrating Computer Science into the Elementary Curriculum? Computer Science Teacher’s Perceptions and Practices, in Proceedings of the 54th ACM Technical Symposium on Computer Science Education. Vol. 2, 1426–1426.
Kong, S. C., and Lai, M. (2021). A proposed computational thinking teacher development framework for K-12 guided by the TPACK model. J. Comput. Educ. 9, 379–402. doi: 10.1007/s40692-021-00207-7
Margulieux, L., Parker, M. C., Uzun, G. C., and Cohen, J. D. (2023). Levels of programming concepts used in computing integration activities across disciplines. J. Tech. Teach. Educ. 31, 167–202.
Margulieux, L. E., Enderle, P., Junor Clarke, P., King, N., Sullivan, C., Zoss, M., et al. (2022). Integrating computing into preservice teacher preparation programs across the core: Language, mathematics, and science. J. Comput. Sci. Inte. 5, 1. doi: 10.26716/jcsi.2022.11.15.35
Margulieux, L. E., Liao, Y. C., Anderson, E., Parker, M. C., and Calandra, B. D. (2024). Intent and Extent: Computer Science Concepts and Practices in Integrated Computing. ACM Transactions on Computing Education. doi: 10.1145/3664825
Mignolo, W. D. (2007). Introduction: Coloniality of power and de-colonial thinking. Cult. Stud. 21, 155–167. doi: 10.1080/09502380601162498
Mignolo, W. D., and Walsh, C. E. (2018). On Decoloniality: concepts, analytics, praxis. United Kingdom: Duke University Press.
Palaganas, E. C., Sanchez, M. C., Molintas, M. P., and Caricativo, R. D. (2017). Reflexivity in qualitative research: a journey of learning. Qual. Rep. 22, 426–438. doi: 10.46743/2160-3715/2017.2552
Papert, S. (1980). The computer in the school: Tutor, tool, tutee. New York: Teacher’s College Press, 197–202.
PowerSchool (2023). Education focus report: Challenges, priorities, and innovating toward the possible. Available at: https://www.powerschool.com/edtech-focus-report-2024/
Ruiz, D. M. (1997). The four agreements: A Toltec wisdom book. San Rafael, CA: Amber-Allen Publishing.
Santos, B. S. (2014). Epistemologies of the south: Justice against epistemicide. New York: Routledge.
UNESCO. (2021). Recommendation on the ethics of artificial intelligence. Available at: https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
UNESCO. (2022). State of the education report for India: Artificial intelligence in education; here, there and everywhere [Education]. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000382661.locale=en
UNESCO. (2023). Education in the age of artificial intelligence. Available at: https://courier.unesco.org/en/articles/education-age-artificial-intelligence
U.S. Department of Education (2023). Artificial intelligence and future of teaching and learning: insights and recommendations. Washington, DC: U.S. Department of Education, Office of Educational Technology.
Wu, Y. H., and Lin, F. R. (2023). Experience-based knowledge management with a conversational AI Chatbot: taking hand-shaken tea Service in Taiwan as an example. In International conference on knowledge management in organizations. Cham: Springer Nature Switzerland.
Yu, D., Rosenfeld, H., and Gupta, A. (2023). The ‘AI divide’ between the Global North and Global South. Available at: https://www.weforum.org/agenda/2023/01/davos23-ai-divide-global-north-global-south/
Keywords: artificial intelligence, AI, equity, Global South, culturally relevant pedagogy, ChatGPT, educational technology, computer science education
Citation: Kasun GS, Liao Y-C, Margulieux LE and Woodall M (2024) Unexpected outcomes from an AI education course among education faculty: Toward making AI accessible with marginalized youth in urban Mexico. Front. Educ. 9:1368604. doi: 10.3389/feduc.2024.1368604
Edited by:
Raona Williams, Ministry of Education, United Arab Emirates
Reviewed by:
Agnes Kukulska-Hulme, The Open University, United Kingdom
Ranilson Oscar Araújo Paiva, Federal University of Alagoas, Brazil
Copyright © 2024 Kasun, Liao, Margulieux and Woodall. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: G. Sue Kasun, skasun@gsu.edu