
MINI REVIEW article

Front. Educ., 24 January 2025
Sec. Digital Learning Innovations
This article is part of the Research Topic Leveraging Technology to Improve Livelihood, Education, and Health Outcomes for Forced Migrants.

Empowering marginalized communities through the digital transformation course

  • 1 Centre for Teacher Education, University of Vienna, Vienna, Austria
  • 2 Faculty of Computer Science, University of Vienna, Vienna, Austria

This article examines a project dedicated to comprehensively addressing the social impact of digital transformation, with particular emphasis on its effects on marginalized populations, especially forced migrants and individuals with special needs. The project involves the development of course materials titled “Digital Life 1–2–3–4,” which are shared as open-access resources through four MOOCs on the iMooX platform. The primary goal is to increase awareness of the effects of digital transformation on daily life, including algorithmic bias, inaccessibility, robots, the digital divide, digital inclusion, and digital discrimination. By integrating ethical considerations, promoting digital literacy, and considering how to bring users into the design process, the course helps mitigate the negative impacts of digital transformation and promotes an equitable and empowering digital environment for the everyday use of technology, particularly for marginalized communities. In this article, we discuss specific course content, including digital inclusion, algorithmic bias, and emerging inequalities. The key goals are to understand and mitigate the risks of algorithmic bias, inaccessibility, and digital discrimination in educational technologies affecting diverse and vulnerable populations, and to promote digital literacy, access, and motivational design that encourage forced migrants’ active and safe participation in technology-enabled education. We conclude that it is essential to prioritize ethical principles in the design and application of digital technologies, elevate underrepresented voices, and foster a more equitable and inclusive digital landscape.

Introduction

Digitalization has permeated human existence and transformed our lives. The narrative of unstoppable progress and radical innovation shapes our attitudes toward technology. This is most apparent in its profound impact on marginalized groups and individuals requiring special attention. This paper aims to raise awareness of the ethical implications of technology for these groups and for those who interact with them. Discussing these issues is essential for empowering marginalized groups in the face of discrimination and racism and for enabling them to use critically the technologies that operate behind the scenes: checking the accuracy of information, recognizing possible algorithmic biases, and evaluating the outputs of search engines and generative artificial intelligence (AI) tools.

Power arguably shapes the content and outcomes of research by influencing and directing research processes. Kalluri (2020) argued that instead of asking whether AI is fair and good, it is time to ask how it shifts power. Particularly in AI research, there are ongoing debates about the influence and direction of power (Farmer et al., 2024; Lazer et al., 2024). It is often not marginalized groups that shape AI, but powerful segments of society and large corporations. This leads to the social dominance of the majority’s perspectives and interests. As a result, problems such as stereotyping and unintentional discrimination arise (Byrne et al., 2024).

Technology plays a major role in this process. While we have powerful tools at our disposal, these tools are often controlled by large corporations and influential groups. For example, Algorithms of Oppression (Noble, 2018) offers striking examples of how search engines reinforce racism. Noble presents evidence that Google’s algorithms generate stereotypical and discriminatory results about Black women. Following the publication of the book in 2018, Google reviewed its algorithms and made some changes in response to criticism. While many people assume that Google is an unbiased search engine, the book demonstrates its algorithmic bias. However, such interventions are not enough. Marginalized groups need to have more say in shaping AI and its algorithms. Their views should be heard and presented to society. It is important to give them equal rights in decision-making processes, rather than merely dismissing them as “minorities.”

It is also worrying that powerful technologies, such as facial recognition, are in the hands of repressive companies or authoritarian law enforcement agencies. When AI is believed to be neutral, biased data goes unnoticed, leading to systems that reinforce the status quo and protect the interests of the powerful. Therefore, now more than ever, ethical discussions about the use of technology and its social impacts need to be on the agenda (Skeem and Lowenkamp, 2016).

In light of these issues, while the digital transformation course covers the full spectrum of digital transformation with the aim of raising awareness of the new shapes of democracy, ethics, and technology, in this article we evaluate the content of the course in terms of empowering marginalized groups to develop their critical thinking skills in relation to possible algorithmic bias. This article therefore aims to understand and mitigate the risks of algorithmic bias, inaccessibility, and digital discrimination in educational technologies for diverse and vulnerable forced migrant populations. It also aims to promote digital literacy, access, and motivational design to encourage the active and safe participation of forced migrants in technology-enhanced education. Finally, it argues for ethical frameworks and secure data-sharing policies that protect the privacy of forced migrants’ data and prevent the misuse of educational technologies.

Digital transformation course

The digital transformation (DT) course is the first module of the extended curriculum “Understanding and shaping digitalization” at the University of Vienna. The course offers a critical and transdisciplinary examination of digitalization from different perspectives; it provides knowledge about the legal, ethical, technical, pedagogical, psychosocial and social aspects of digitalization and trains students to be digitally competent (Blossfeld et al., 2018; Kayali, 2023). The course is open to all students at the University of Vienna, either as part of the extended curriculum or on its own as an elective. It is particularly aimed at committed students who want to acquire or expand both their theoretical knowledge and their digital skills, in the sense of “21st century skills” and a “lifelong learning” perspective, through their engagement with digitalization. The course contains five MOOCs hosted on the iMooX platform, and each MOOC presents a series of videos created by experts on the effects of digital transformation. The course and the MOOCs cover topics including technical basics, educational perspectives, socially relevant perspectives, and the philosophy of technology. Concepts from computer science, such as computational thinking, are combined with ethical questions (e.g., regarding the use of robots) and with economic, social, historical and legal perspectives in education. The common thread is created by the interplay of topics, cross-references within and between the MOOCs, the small-group sessions, and discussion and reflection tasks that each cover several units (Haselberger et al., 2021; Posekany et al., 2023; Yüksel-Arslan et al., 2023).

In the DT course, the AMS (Austrian Public Employment Service) algorithm is presented as an example of algorithmic bias in Austria’s immigrant society. The system categorizes job seekers into three groups based on their employment prospects, using variables such as gender, age, health conditions, and citizenship. This process, while reflective of labor market inequalities, has been criticized for reinforcing discrimination and exacerbating inequalities. Concerns have been raised about a lack of transparency, the potential for stigmatization, and the systematic disadvantaging of marginalized groups, particularly women, individuals with disabilities, and immigrants (Allhutter et al., 2020; Szigetvari, 2018). Another topic addressed in the course is human-centered design, which involves users in the technology design process, particularly users with special needs, such as people with dyslexia. Human-centered design ensures that a technology meets its users’ unique requirements, thereby enhancing accessibility and usability for marginalized groups (Hagelkruys et al., 2015; Motschnig-Pitrik and Hagelkruys, 2017). The course also discusses the political dimensions of technology: how technology can reinforce or challenge existing power structures, exemplified by how infrastructure and technological decisions reflect social and political biases (Amoore, 2017). This underscores the importance of participatory design techniques to ensure that technological developments foster social justice and inclusivity (Amoore, 2013; Amoore and Raley, 2017). In the following sections, we take a closer look at these topics.

Algorithmic bias

Based on a statistical model of job seekers’ employment prospects, the system, known as the AMS algorithm, divides AMS clients into three categories: those with a high chance of finding a job within six months (Group A), those with mediocre prospects in the labor market (Group B), and those with a poor employment outlook over the next two years (Group C). Group A includes clients likely to secure a job for at least three months within the next seven months. Group C includes those with a low probability of being employed for at least six months within the next two years; they are directed to external agencies for supervision and receive fewer support measures from the AMS. Those who fit into neither Group A nor Group C are placed in Group B, which is reported to include a large number of people with an immigrant background.
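To make the categorization rule tangible, the following minimal Python sketch shows how a two-threshold profiling rule of this kind could sort clients into the three groups. The probability inputs, threshold values, and function name are illustrative assumptions for this article, not the actual AMS implementation (see Allhutter et al., 2020, for the documented model).

```python
# Minimal sketch of a two-threshold profiling rule of the kind described
# above. The probabilities, cut-off values, and function name are
# illustrative assumptions only, not the actual AMS implementation.

def assign_group(p_short_term: float, p_long_term: float,
                 high_cutoff: float = 0.66, low_cutoff: float = 0.25) -> str:
    """Return 'A', 'B', or 'C' for a job seeker.

    p_short_term -- predicted probability of at least 3 months of employment
                    within the next 7 months
    p_long_term  -- predicted probability of at least 6 months of employment
                    within the next 2 years
    """
    if p_short_term >= high_cutoff:
        return "A"  # high short-term prospects
    if p_long_term < low_cutoff:
        return "C"  # poor long-term outlook: referred to external agencies
    return "B"      # everyone else: main focus of AMS support measures

print(assign_group(0.72, 0.80))  # -> A
print(assign_group(0.40, 0.15))  # -> C
print(assign_group(0.40, 0.55))  # -> B
```

Even in this toy form, the rule makes clear how a client’s access to support hinges entirely on where the predicted probabilities fall relative to fixed cut-offs.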

This categorization process strongly influences the lives and prospects of job seekers, depending on how they are classified, and embodies the values and norms inherent in social policies. The lack of clarity as to how and why the specific time periods for the target variables were chosen has also been criticized by the Austrian Ombudsman Board. The system uses data on the individual job seeker’s employment history, but also takes into account existing inequalities in the labor market based on gender, age, citizenship, and health conditions to estimate the chances of reintegration.

The project, referred to as the “AMS algorithm,” has generated a heated public debate since its nationwide announcement in 2020. Critics pointed to its lack of transparency, the limited opportunities for AMS clients to correct mistakes, the use of sensitive information, possible unintended consequences such as discrimination, misinterpretation and stigmatization, and the potential to reinforce and increase inequality in the labor market. In particular, the inclusion of gender as a variable has raised public concerns about gender discrimination. The explanatory model that is part of the documentation lists “female” as detrimental to the chances of labor market integration, lowering the overall score for this group (Allhutter et al., 2020, p. 2).
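The effect of listing an attribute such as “female” as detrimental can be pictured with a hypothetical additive scoring sketch. All coefficient values, attribute names, and the base score below are invented for illustration; the real AMS model and its weights differ.

```python
# Hypothetical additive scoring sketch. The base score, attribute names, and
# weights are invented for illustration and do not reproduce the AMS model.

BASE_SCORE = 0.60

WEIGHTS = {
    "female": -0.06,             # the attribute documented as detrimental
    "over_50": -0.07,
    "health_impairment": -0.10,
    "non_eu_citizenship": -0.05,
}

def integration_score(attributes: set[str]) -> float:
    """Add the (invented) weights of all recorded attributes to the base score."""
    return BASE_SCORE + sum(WEIGHTS[a] for a in attributes if a in WEIGHTS)

# Two clients with otherwise identical employment histories, differing only
# in recorded gender, receive different predicted integration chances:
print(round(integration_score(set()), 2))        # 0.6
print(round(integration_score({"female"}), 2))   # 0.54
```

The sketch shows the core concern: once a protected attribute carries a negative weight, two otherwise identical clients can land on different sides of a group boundary.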

The AMS algorithm has been criticized for potential bias and discrimination, particularly against women, people with disabilities, and women with care obligations. These concerns arise from the prominent inclusion of gender, disability, and other characteristics in the algorithm’s models. The justification for using these attributes is to reflect the “harsh reality” of structural discrimination in the job market. However, this approach may perpetuate existing inequalities by classifying marginalized groups, such as long-term unemployed job seekers, into categories with less support, reinforcing their disadvantage.

Furthermore, the algorithm incorporates cumulative disadvantage by using variables like gender, which may lead to marginalized groups being classified into less supported categories. The feedback loop in the algorithm continuously updates with new data, potentially further decreasing employment prospects for marginalized groups over time. While case workers may deviate from algorithmic recommendations, the effects of the profiling system are complex and influenced by various policies (Gandy, 2016). Additionally, the AMS algorithm introduces biases through the use of coarse predictors and the categorization of job seekers based on sectors like production and service. This binary classification may not accurately reflect the diverse employment prospects within these sectors, leading to significant bias for job seekers based on their assigned occupational group. The algorithm’s boundaries and reliance on regional labor market data also introduce biases, especially in regions dominated by specific industries, impacting job seekers’ access to resources (Lopez, 2019; Gandy, 2016).
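The feedback-loop concern can be made concrete with a toy simulation: a group whose score falls below a support threshold receives less support, which depresses its observed outcomes, which in turn lowers the score estimated from the next round of data. All numbers and functional forms below are invented assumptions, not the AMS system.

```python
# Toy simulation of a profiling feedback loop (all values are invented
# assumptions): clients scored below a threshold receive fewer support
# measures, which depresses their observed outcomes, which in turn lowers
# the score learned from the next round of data.

def support_effect(score: float) -> float:
    """Hypothetical support boost: less support below the 0.5 threshold."""
    return 0.10 if score >= 0.5 else 0.02

def simulate(initial_score: float, rounds: int = 5) -> list[float]:
    scores = [initial_score]
    for _ in range(rounds):
        current = scores[-1]
        # Observed outcome depends on the current score and the support received.
        observed = 0.8 * current + support_effect(current)
        # "Retraining": the next score tracks the outcomes observed under the
        # previous allocation of support.
        scores.append(round(observed, 3))
    return scores

print(simulate(0.55))  # starts above the threshold and stabilizes near it
print(simulate(0.45))  # starts just below it and drifts steadily downward
```

Under these invented assumptions, a small initial difference in scores widens over successive rounds, illustrating how cumulative disadvantage can emerge from the interaction of profiling and support allocation rather than from any single prediction.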

Another example of algorithmic bias, from another country, is the COMPAS software used in the United States. Judges in several U.S. states are provided with recidivism risk assessments for defendants, which are generated by the COMPAS algorithm. By relying on these algorithmic predictions, judges transfer part of their normative decision-making authority to proprietary software, despite the well-documented age- and race-related biases inherent in such systems (Isley, 2022; Engel et al., 2024).

Human-centered design and participation

Human-centered design brings users into the design process. This is especially crucial in projects for users with special needs, who may be members of marginalized groups, disabled people, or immigrants. In the case of computer-based communication, design principles specific to the target audience should be developed; if they are not applied, the resulting system may not meet that audience’s needs. Motschnig-Pitrik and Hagelkruys (2017) investigated the link between inclusive education and technology design by emphasizing the adaptation of the human-centered design (HCD) process to accommodate users with special needs, specifically individuals with dyslexia. They explored how involving such users from the earliest phases of a project can enhance the usability and accessibility of educational technologies. Their study highlights that the classical HCD framework, which prioritizes end-user inclusion, can be adapted to address cognitive and affective needs through the specialized methods and tools used in the LITERACY project. The process involved extensive preparatory steps, including context and task analysis, persona development, and direct engagement with dyslexic users through semi-structured interviews. In the project, the special user group consisted of individuals with dyslexia, a learning disability, and the testing spanned the planning, execution, and analysis of sessions with this group using the tool designed for people with special needs. The study discusses the considerations for planning the sessions, the special preparations needed to create an appropriate testing environment, the selection of feedback channels, and the analysis of the data collected during the sessions (Motschnig-Pitrik and Hagelkruys, 2017).

Politics and technology

Winner’s (1980) article “Do artifacts have politics?” discusses the political dimensions of technology. There are implicit connections to issues of racism and social justice, particularly in how technologies can reinforce or challenge existing power structures. The article uses the example of urban planning and infrastructure, focusing on the design of bridges and parkways in relation to public transportation and access. One notable example is the design of New York’s Wantagh Parkway, where low bridges were intentionally constructed to keep buses off the parkway. This design choice excluded bus passengers, who tended to be lower-income people of color, from recreational areas and commuting routes that were used primarily by wealthier car owners.

This example illustrates how infrastructure can embody political intentions and reinforce social inequalities. The decision to limit bus access to certain parkways reflects a broader pattern of urban planning that prioritizes the needs of affluent communities at the expense of those who rely on public transportation, often including racially marginalized groups. Such design choices can perpetuate systemic racism by restricting access to resources and opportunities for these communities, thereby maintaining existing power dynamics and social hierarchies. Technological and infrastructural decisions can therefore have significant implications for social justice and racial equity, and participatory design techniques are a promising way to involve the whole community in shaping technology for social justice.

Conclusion

This article highlights the critical need to address the social implications of digital transformation, particularly for marginalized populations such as forced migrants and individuals with special needs. By examining the open-access “Digital Life 1–2–3–4” course materials and their implementation through MOOCs on the iMooX platform, this work underscores the importance of integrating ethical frameworks, digital literacy, and participatory design in technology-enabled education.

The topics discussed include algorithmic bias, digital discrimination, and inaccessibility, all of which expose the inherent risks in digital systems that often reinforce existing inequalities. The AMS algorithm and human-centered design serve as compelling case studies, showcasing the potential for technology either to perpetuate exclusion or to promote inclusion, depending on how it is developed and applied.

Ultimately, this article argues for a proactive approach to digital transformation that prioritizes the empowerment of marginalized groups. By fostering critical thinking, advocating for transparency in algorithmic processes, and promoting equitable access to digital tools, the course and its objectives align with broader goals of social justice and inclusion. As digital technologies continue to shape our world, it is imperative to ensure their design and use reflect ethical considerations, amplify marginalized voices, and contribute to a fairer, more inclusive digital future.

Author contributions

PY-A: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. CP: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. FK: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. The authors thank the Austrian Federal Ministry of Education, Science and Research for funding for the project M795001 “Teaching Digital Thinking” that provided employment for PY-A.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The authors declare that no Generative AI was used in the creation of this manuscript.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Allhutter, D., Cech, F., Fischer, F., Grill, G., and Mager, A. (2020). Algorithmic profiling of job seekers in Austria: how austerity politics are made effective. Front. Big Data 3, 1–17. doi: 10.3389/fdata.2020.00005

Amoore, L. (2013). The politics of possibility: Risk and security beyond probability. Durham and London: Duke University Press.

Amoore, L. (2017). What does it mean to govern with algorithms? Antipode: A Radical Journal of Geography 49:129.

Amoore, L., and Raley, R. (2017). Securing with algorithms: knowledge, decision, sovereignty. Secur. Dialogue 48, 3–10. doi: 10.1177/0967010616680753

Blossfeld, H. -P., Bos, W., Daniel, H. -D., Hannover, B., Köller, O., Lenzen, D., et al. (2018). Digitale Souveränität und Bildung. Gutachten des Aktionsrats Bildung. Münster: Waxmann.

Byrne, A.-L., Mulvogue, J., Adhikari, S., and Cutmore, E. (2024). Discriminative and exploitive stereotypes: artificial intelligence generated images of aged care nurses and the impacts on recruitment and retention. Nurs. Inq. 31, e12651–e12611. doi: 10.1111/nin.12651

Engel, C., Linhardt, L., and Schubert, M. (2024). Code is law: how COMPAS affects the way the judiciary handles the risk of recidivism. Artificial Intelligence and Law. doi: 10.1007/s10506-024-09389-8

Farmer, R. L., Lockwood, A. B., Goforth, A., and Thomas, C. (2024). Artificial intelligence in practice: opportunities, challenges, and ethical considerations. Prof. Psychol. Res. Pract. doi: 10.1037/pro0000595

Gandy, O. H. (2016). Coming to terms with chance: Engaging rational discrimination and cumulative disadvantage. New York: Routledge.

Hagelkruys, D., Motschnig, R., Böhm, C., Vojtová, V., Kotasová, M., and Jurkova, K. (2015). “Human-centered design in action: designing and performing testing sessions with users with special needs,” in Proceedings of EdMedia 2015 – World Conference on Educational Media and Technology, Montreal, Quebec, Canada: Association for the Advancement of Computing in Education (AACE). eds. S. Carliner, C. Fulford, and N. Ostashewski, 499–508. Available at: https://www.learntechlib.org/primary/p/151318/ (Accessed January 16, 2025).

Haselberger, D., Steinböck, M., and Kayali, F. (2021). Facilitating interpersonal exchange on digital transformations by anchoring a MOOC in a distance-learning university course. Lincoln, Nebraska, United States: Paper presented at FIE 2021: Frontiers in Education.

Isley, R. (2022). Algorithmic Bias and its implications: How to maintain ethics through AI governance. N.Y.U. American Public Policy Review. doi: 10.21428/4b58ebd1.0e834dbb

Kalluri, P. (2020). Don't ask if artificial intelligence is good or fair, ask how it shifts power. Nature 583:169. doi: 10.1038/d41586-020-02003-2

Kayali, F. (2023). VU Digitale Transformationen Lehrkonzept von Univ.-Prof. Dr. Fares Kayali. Available at: https://phaidra.univie.ac.at/o:1645623 (Accessed August, 2024).

Lazer, D., Swire-Thompson, B., and Wilson, C. (2024). A normative framework for assessing the information curation algorithms of the internet. Perspect. Psychol. Sci. 19, 749–757. doi: 10.1177/17456916231186779

Lopez, P. (2019). Reinforcing intersectional inequality via the AMS algorithm in Austria. In: Conference Proceedings of the 18th STS Conference Graz 2019: Critical Issues in Science, Technology and Society Studies, Graz, Austria, 6-7 May 2019. Graz: Verlag der Technischen Universität Graz. 289–309. doi: 10.3217/978-3-85125-668-0-16

Motschnig-Pitrik, R., and Hagelkruys, D. (2017). Inclusion of users with special needs in the human-centered design of a web-portal. Int. J. People Oriented Program. 6, 1–18. doi: 10.4018/ijpop.2017010101

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York: New York University Press.

Posekany, A., Nöhrer, G., Haselberger, D., and Kayali, F. (2023). Analyzing students’ motivation for acquiring digital competences. Proceedings of the 2023 IEEE Frontiers in Education Conference (FIE), College Station, TX, USA: IEEE, 18–21. doi: 10.1109/FIE58773.2023.10342956

Skeem, J. L., and Lowenkamp, C. (2016). Risk, race, and recidivism: predictive bias and disparate impact. Available at: https://ssrn.com/abstract=2687339 (Accessed July, 2024).

Szigetvari, A. (2018). Der Standard: AMS bewertet Arbeitslose künftig per Algorithmus. Available at: https://www.derstandard.at/story/2000089095393/ams-bewertet-arbeitslose-kuenftig-per-algorithmus (Accessed July, 2024).

Winner, L. (1980). Do artifacts have politics? Daedalus 109, 121–136.

Yüksel-Arslan, P., Nöhrer, G., and Kayali, F. (2023). Evaluating students’ learning expectations and concerns in a university course on digital transformations in the framework of learning objective taxonomies. Proceedings of the 2023 IEEE Frontiers in Education Conference (FIE), College Station, TX, USA: IEEE, 18–21. doi: 10.1109/FIE58773.2023.10342988

Keywords: algorithmic bias, digital inclusion, technology policies, human computer interaction, participation and inclusion

Citation: Yüksel-Arslan P, Plant C and Kayali F (2025) Empowering marginalized communities through the digital transformation course. Front. Educ. 10:1534104. doi: 10.3389/feduc.2025.1534104

Received: 25 November 2024; Accepted: 08 January 2025;
Published: 24 January 2025.

Edited by:

Wright Jacob, King’s College London, United Kingdom

Reviewed by:

Uwe H. Bittlingmayer, University of Education Freiburg, Germany
P. G. Schrader, University of Nevada, Las Vegas, United States

Copyright © 2025 Yüksel-Arslan, Plant and Kayali. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Pelin Yüksel-Arslan, pelin.yueksel.arslan@univie.ac.at
