- Department of Information and Communications Technology (ICT), University of South Africa, Pretoria, South Africa
This study explores the development and implementation of a design thinking, Artificial Intelligence (AI)-driven auto-marking/grading system for practical assessments and accurate feedback, aimed at alleviating the workload of lecturers at an Online Distance eLearning (ODeL) institution in South Africa. The study adopts an iterative approach to designing and prototyping the system, ensuring alignment with the unique needs and challenges of an ODeL higher learning institution (HLI). The study outlines a design thinking framework for developing the AI system, emphasizing empathy with user needs, clear problem definition, ideation, prototyping, testing, and iterative improvement. Integrating such a system promises to enhance operational efficiency, ensure fair and unbiased grading of assessments, and provide students with consistent, timely, personalized feedback. Drawing on theorists such as Michel Foucault and Joseph Schumpeter, this study contributes to the ongoing discourse on innovative solutions for educational challenges in South Africa by employing a design thinking framework and qualitative research methods. It provides insights for developing and implementing AI-driven auto-marking/grading systems in higher education settings. Cognizant of data privacy laws, the study highlights the essential adherence to ethical guidelines in automated assessment processes and in the successful implementation of AI-driven auto-marking/grading systems in ODeL. Additionally, this study aligns with several Sustainable Development Goals (SDGs): Good Health and Wellbeing (SDG 3), Quality Education (SDG 4), Decent Work and Economic Growth (SDG 8), and Industry, Innovation, and Infrastructure (SDG 9). A follow-up article will report on the data collected, and a further study will seek users’ feedback on the system.
1 Introduction
The landscape of education is evolving, demanding innovative solutions to meet the challenges faced by educators. In South Africa, where higher learning institutions (HLIs) grapple with burgeoning student populations and the need for efficient assessment methods, the fusion of design thinking and AI presents a promising solution (Bear and Skorton, 2018). Adapting assessments in response to generative AI tools is timely but requires effort. Generative AI tools can benefit education by boosting student engagement, giving timely feedback, fostering collaboration, and enhancing accessibility (Ghimire, 2024). Educators face challenges in creating and using new online assessments, including time and effort, logistics, technology access, consistency, usability, student preferences, preparation for new formats, and institutional policies (Smolansky et al., 2023). HLIs should also be aware of challenges including ensuring the ethical use of AI, protecting student data privacy, and maintaining a balance between automated and human elements in education (Smuha, 2020). Combining design thinking principles with the development of an AI-driven auto-marking/grading system to reduce the workload of lecturers at a South African higher learning institution involves a comprehensive thought and implementation process. Integrating AI in education, especially in assessment, is a promising step toward a more efficient, fair, and personalized learning experience. As AI in education continues to evolve, educators and institutions must stay informed about these developments (Luckin and Holmes, 2016).
2 Research objective: understanding the challenge
This study examines a South African ODeL university with an enrollment of more than 370,000 students from across South Africa, Africa, and other parts of the world, where lecturers often face overwhelming workloads due to manual grading processes, even with the assistance of external human markers (Fourie-Malherbe, 2021; Olwal, 2023). This qualitative study will employ a sample of seven colleges, with data collected through interviews, document analysis, and focus groups conducted virtually via Microsoft Teams; input shared by various college academic employees will strengthen the development of the AI grading tool, as they will establish the type of design they require according to their knowledge of assessment methods. The study has been granted approval from the research office to conduct interviews and focus groups with the university colleges, since these cannot be conducted without research ethics approval. Invitations will be sent to the colleges via email, and documents will be shared for the college academic lecturers to add their inputs as a means of strengthening the document. Track changes will be activated to enable us to distinguish what was added or altered, and we will follow up where needed to ensure we understand the intention behind the inputs. The shared documents will also include use cases for their inputs, allowing user interaction with products or services in their natural environment, which helps collect data on user preferences and behaviors. Just as with the empathy component of the design thinking framework, understanding this challenge is the first step in envisioning a solution that not only eases the lecturers’ burden but also enhances the learning experience for students. Thus, based on the above research problem, the primary research question guiding this study follows:
3 How might we develop a design thinking AI-driven auto-marking/grading system to reduce the workload of lecturers at a South African Open Distance eLearning (ODeL) institution?
The article aims to establish how an integrated AI-driven auto-marking system in an online higher-learning environment can improve grading efficiency, accuracy, and consistency. By incorporating AI-driven auto-marking/grading systems, the project aims to reduce the workload of lecturers by automating the grading process, enabling them to focus more on teaching and student engagement. This integration will further enhance the ODeL HLI’s reputation as a forward-thinking institution, embracing innovation in the realm of educational technology and aligning with the organizational strategy by employing three complementary strategy approaches:
• Environmental approaches tailored to the higher learning institution’s context,
• Capabilities or resources-based approaches when seeking resources for prototype creation,
• Customer-centric approaches that consider lecturers’ workload (Diderich, 2020).
4 Theoretical framework
This study uses Foucault’s concept of biopower to examine how AI grading tools monitor student exams, acting as both surveillance and disciplinary mechanisms, reflecting biopolitical control over populations and creating an intrusive environment despite being seen as innovative disruptions (Foucault, 1977, p. 217). Additionally, riding on the waves of Schumpeterian economic disruption caused by “extreme automation and extreme connectivity” (Masheleni, 2022), the study is influenced by Joseph Schumpeter to create economic, innovative, and commercial futures within ODeL HLI environments. To distinguish the two philosophers’ roles in this study, in order of introduction, Foucault serves as the primary philosopher and Schumpeter as the secondary one. Employing Foucault as a philosopher of ethics is critical because AI carries the potential for unethical use; through this lens, the study challenges entrenched surveillance practices in grading by proposing an AI grading tool designed to ensure fair assessments, reduce biases, and address social injustices, while advocating for an educational approach that encourages students to critique and imagine equitable alternatives to current surveillance systems. It is therefore important to apply this ethical component to a disruptive AI: in keeping with Schumpeter’s notion of disruptive innovation, AI serves as a disruptive innovation in the educational sector, and this study draws further on the AI grading tool as a disruption of assessment methodologies that addresses lecturer overload. The following section discusses the two philosophers’ influence on the study.
5 Examining the AI grading tool through Foucault’s concept of biopower
Foucault, who taught psychology before holding the chair in the History of Systems of Thought at the Collège de France, explored the interconnections of truth, knowledge, power, ethics, and the subject, inspiring this article to assess student trustworthiness in a manner that reflects his evolving emphases over time, potentially enhancing teaching and learning processes (Willcocks, 2004; Finefter-Rosenbluh, 2022). Michel Foucault introduced biopower and biopolitics to address limitations in traditional theories of power, which focus only on state sovereignty and overlook power dynamics in non-state institutions like families, healthcare, and workplaces (Page, 2018). Foucault argues that power is not only about prohibition but also about producing knowledge, practices, and identities, with his concept of biopower referring to the control and regulation of human life at both individual and population levels (Rogers, 2013).
The study aims to contribute disruptive, innovative solutions for workload reduction from an academic’s perspective through AI automation while providing students with ethical assessment methods free of bias and incoherent marking. While AI grading tools may streamline assessment processes and enhance efficiency, they can also raise questions about objectivity, biases in algorithmic decision-making, and the potential for excluding alternative forms of knowledge and assessment criteria (Chai et al., 2024). Biopower does not aim to shape individuals directly; instead, it governs environments to make specific outcomes more likely than others, illustrating how social categories enable state violence against certain groups (Fleming, 2022). Foucault argues that power is not just about prohibition but also about producing knowledge, practices, and identities, concepts that help analyse how power works locally and productively (Lambert and Erlenbusch-Anderson, 2024).
Foucault’s use of these terms can be inconsistent, sometimes treating them interchangeably and other times distinguishing biopolitics as a form of biopower alongside disciplinary power (Erlenbusch-Anderson, 2020). For a more precise understanding, his lectures from 1975 to 1979 at the Collège de France provide a more detailed account of biopower as a method of controlling individual bodies and populations through surveillance and administration (Gros, 2016). In “Finding Fault with Foucault: Teaching Surveillance in the Digital Humanities,” Christina Boyles of Michigan State University argues that it is difficult, if not impossible, to identify the criteria modern surveillance tools use to discriminate (Boyles, 2019). Treating these automated surveillance decisions as unquestionable has deepened inequality and injustice, which the study is cognisant of when it comes to student grading. Boyles suggests that scholars, activists, and the public should unite against unethical surveillance practices (Boyles, 2019). Thus, this AI grading tool prototype is designed to support a fair grading approach. One approach is to educate students to critique surveillance systems and envision fairer futures, emphasizing the importance of questioning Foucault and Western surveillance principles as part of this effort. Drawing on Foucault’s concept of biopower, the paper examines how these technologies discipline individuals and manage populations (Lambert, 2020); the AI-integrated grading tool explored here aims to challenge social injustices such as delayed assessment feedback, ensure fair assessment without human biases, enhance transparency, and uphold accountability in AI systems (Lim et al., 2023).
This study advocates for promoting alternative pedagogies prioritizing integrity and ethical considerations (Eaton, 2022). Our argument submits that this automation is not aimed at policing students through online automated assessment grading but at regulating marking to alleviate and optimize the workload of academics (Foucault, 2003, p. 246). Regulation, unlike discipline, addresses the challenge of liberal governance appearing hands-off while consistently influencing lives, focusing on desired outcomes to regulate and improve individual freedom (Fleming, 2022). Foucault’s concept of “regulation,” drawn from liberal and neoliberal economics, emphasizes the flow of capital, labor, and goods, focusing on enabling these movements rather than imposing limits, thereby governing environments to influence outcomes and improve individual freedom without direct control (Özgün et al., 2017).
From a Foucauldian perspective, AI-automated grading tools are a form of disciplinary technology operating within the educational system: they introduce new surveillance and control mechanisms over students’ academic performance, shifting the power dynamics and disciplinary mechanisms within education (Seo et al., 2021). Given concerns about personal privacy, space, economic and social standing, and the psychological context impacting student behavior (Mohamed et al., 2020), authorities in both government and education require thoughtful leadership to alleviate student anxiety.
From a business value perspective, aligned with organizational strategy and catalytic niche areas, this research aims to enhance productivity, reduce cycle time, and improve quality by investing in the benefits and costs of an AI-integrated grading tool, thereby realizing value from enterprise generative AI (Gartner Inc, 2024a). This AI-integrated grading tool represents a technological intervention that standardizes assessment processes, imposes evaluation norms, and reinforces specific forms of knowledge production and reproduction (Hartwell and Aull, 2024). Foucault would highlight the complexities and nuances of such technological disruptions.
This study advocates for promoting alternative pedagogies that prioritize integrity and ethical considerations, arguing that the intention of automation is not to police students through online automated assessment grading but to regulate marking to alleviate and optimize the workload of academics (Farazouli, 2024). This preparation aims to establish the specific requirements for our machine learning developer to create a prototype (see section 8.1), considering factors like language diversity, cultural sensitivity, and the system’s adaptability to different educational contexts within South Africa (Jantjies, 2014; Pedro et al., 2019). Finally, testing prototypes with real users in the Test phase of design thinking (see section 8.1), the framework the study employs, allows for feedback and improvements before implementation, ensuring that this iterative process aligns solutions with user needs and desires, thereby promoting continuous improvement and innovation (Zorzetti et al., 2022). Overall, a Foucauldian review of AI automated grading tools as a disruptive innovation emphasizes the shifts in power, knowledge production, and disciplinary mechanisms within education, raising essential considerations about surveillance, control, and the shaping of academic practices. Other scholars of disruptive technology, such as Schumpeter, also inspired this study.
6 Applying a Schumpeterian lens to the AI automated grading tool
Joseph Schumpeter, a notable economist, focused on innovation, entrepreneurship, and economic development (Schumpeter and Swedberg, 2021). Viewing entrepreneurship as a catalyst for economic growth in the early 20th century, Schumpeter argued that innovation arises from entrepreneurs challenging established companies with new products, services, or business models (Itonics, 2024). Though he did not directly discuss artificial intelligence, we can infer his perspective based on his ideas (Nardin, 2021). Schumpeter believed that economic growth comes from innovation, which involves introducing new products, production methods, and markets, often causing “creative destruction” in existing structures (Śledzik, 2013). Schumpeter’s theory frames AI as a critical force for growth and innovation but recognizes its potential to disrupt innovation processes, including concepts, theory, research, and practice (Lester, 2018). Schumpeter’s perspective on AI automated grading tools would likely view them as disruptive innovations within the education sector because they can fundamentally change how assessments are conducted and graded, potentially disrupting traditional grading processes and roles.
According to Schumpeter’s theory, AI’s progress can lead to innovation and disruption and create new products, improve processes, open new markets, and disrupt industries and jobs through automation (Lester, 2018). This dynamic, termed “creative destruction,” suggests that AI will reshape industries, fostering new opportunities while affecting existing ones (Isaiah and Dickson, 2024). Additionally, organizations can disrupt industries or create new markets by investing in strategic GenAI products, services, processes, and business models, albeit with higher risk tolerance and strategic foresight required (Gartner Inc, 2024b).
Developing and integrating a digital platform for managing drones and 3D printing services at a university represents a disruptive, solutions-based innovation approach that is cost-effective and efficient, fostering collaboration and service enablement for the university community and external clients through tested use cases (Fernandes, 2023). Gledson et al. (2024) report on a web-based prototype dashboard for construction design managers that enhances productivity by managing design coordination, task prioritization, and reporting functionalities while also monitoring design production, assessing designer performance trends, and managing Technical Queries (TQs) and Requests for Information (RFIs). Schumpeter’s concept of creative destruction could be applied here, suggesting that these AI-automated grading tools may render certain manual grading practices obsolete while creating new educational opportunities and efficiencies, an approach higher learning institutions (HLIs) would quickly adopt to address their academic challenges. The philosophers’ influence on the study, in turn, informs the methodology employed for its analysis.
7 Literature review
In addition to the above theoretical framework, organizations can disrupt industries or create new markets by investing in strategic GenAI products, services, processes, and business models, albeit with higher risk tolerance and strategic foresight required (Gartner Inc, 2024b). Based on the notion that an incremental innovation environment must include the relevant organization’s various stakeholders, enhancing the human-centred approach (Wechsler, 2014), this study employed design thinking with university stakeholders in mind. Successful pilots in generative AI focus on demonstrating business potential rather than just technical feasibility, requiring collaboration with business partners and engineers, rapid testing, and strategic value assessment to mitigate risks and achieve meaningful innovation (Gartner Inc, 2023a). Any HLI serving students across Africa and beyond, with countless accolades over its 151-year history, will encounter challenges; from an ICT support department’s point of view, a human-centred approach is therefore vital, applying empathy to understand stakeholders’ pain points (David Carlson and Dobson, 2020). Additionally, we have a responsibility to seek solutions that resolve student challenges and improve the customer experience, taking small yet significant steps to maintain a competitive advantage (Woodruff, 1997). To achieve rapid and effective outcomes for innovation initiatives such as the AI grading tool, blending human and AI capabilities to enhance innovation and aiming for nearly autonomous innovation remains critical (George and Wooden, 2023). Gartner Inc (2023a) predicts that by 2027, AI-powered teams will deliver projects with up to 75% more success than traditional teams, leading to faster value creation from innovative projects.
The conventional view of innovation as technological commercialization is challenged by emerging trends like social, sustainable, and responsible innovation, urging a broader conceptualization to avoid limiting these new developments within traditional frameworks (Von Schomberg and Blok, 2021; Blok, 2021). When deciding how best to allocate resources to incremental innovation, scale and maturity are critical factors in establishing the product’s value across the institution and how the concept can evolve (Jamil et al., 2022). The rapid expansion of artificial intelligence demands that HLIs at the forefront of technology foster innovation by establishing think tanks, creating communities of practice, engaging their institutional communities, building idea management tools, targeting innovation, managing progress, and recognizing success (Aithal and Aithal, 2023). For the effectiveness of the prototype’s creation, this initiative collected background information and identified the scope and projected duration to keep the innovation focused on the most relevant areas (Albuquerque et al., 2024). While many ideas exist, retaining and revisiting some for future innovations is essential (Selwyn, 2024).
Organisations experimenting with generative AI should address its transformative nature and operational challenges, with ICT leaders conducting workshops with business stakeholders to generate innovative use-case ideas (Heiska, 2024). They should prioritize these ideas according to their potential business value and feasibility, form a diverse team for pilot projects, develop minimum viable products to validate each use case, and iterate on successful outcomes to expand generative AI implementation (Gartner Inc, 2023b). At the forefront of designing, creating new products, identifying user/customer problems, and changing or improving various products, design thinking is the guiding philosophy behind developing an AI-driven auto-marking/grading system (Dutt, 2023). It places educators and students at the centre, empathizing with their needs and aspirations. This literature review informs the methodological approach applied to the study, discussed below.
8 Methodology
This qualitative case study explores the development and implementation of a design thinking, Artificial Intelligence (AI)-driven auto-marking/grading system prototype aimed at alleviating the workload of lecturers in a higher learning institution in South Africa. Drawing on theorists such as Michel Foucault and Joseph Schumpeter, this study contributes to the ongoing discourse on innovative solutions for educational challenges in South Africa by employing a design thinking framework and qualitative research methods. The study adopts an iterative approach to designing and prototyping the system, drawing upon design thinking principles and ensuring alignment with the unique needs and challenges of the educational context (Ferreira Martins et al., 2019). Moreover, the study highlights the importance of considering ethical implications and ensuring human oversight in automated assessment processes. The prototype provides insights for developing and implementing AI-driven auto-marking/grading systems in higher education settings, emphasizing the importance of ethical considerations and human involvement. As elaborated in section 3, the research question is: “How might we develop a design thinking AI-driven auto-marking/grading system to reduce the workload of lecturers at a South African Open Distance eLearning (ODeL) institution?” Therefore, the following discussion addresses this question and how we apply the study’s methodology.
8.1 Design thinking
Design thinking is a methodology that addresses complex problems by creating user-centric solutions that are technically feasible, economically viable, and desirable for the user (Martins et al., 2020). Confirming this human-centricity, and aligned with the organizational strategic vision, the ICT Digitalisation Strategy (2022) is committed to the university’s strategic focus areas of being “academic-focused and student-centric” and providing incremental innovation in technology solutions. To apply design thinking and an AI-driven auto-marking/grading system effectively in South African Open Distance eLearning (ODeL) to reduce the workload of lecturers, several phases are applied: empathize, define, ideate, prototype, and test (Dam, 2024). This section discusses these phases, beginning with the empathize stage and concluding with the testing stage.
The Empathize stage is foundational in developing an AI-graded marking tool to alleviate lecturer workload. It centers on profoundly understanding lecturers’ and students’ experiences. This phase is pivotal in empathetic design, aligning with the core tenets of Design Thinking. Techniques such as user-based studies, the beginner’s mind approach, and user interviews delve into lecturers’ challenges, daily routines, pain points in grading assessments, and the specific solutions they require (Srivastava, 2023). The study will employ use cases and input shared directly with various college academic employees, along with reviews, to gather insights into their needs, behaviors, and experiences, enabling the development of the prototype (Dokter et al., 2023). These use cases will be shared through Microsoft Teams meetings, MS Outlook, and MS SharePoint for the employees’ inputs, allowing user interaction with products or services in their natural environment, which helps collect data on user preferences and behaviors.
The MS SharePoint use case document will enable capture of detailed insights into the academic employees’ (users’) interactions and environments. Additionally, we will apply the beginner’s mind approach, which is essential in this process and involves observing without preconceived notions or judgments (Thorne et al., 2024). Applying the beginner’s mind approach means adopting a fresh perspective by looking at the problem as if seeing it for the first time, suspending assumptions to avoid letting past experiences and knowledge influence observations, and embracing curiosity to be open to new insights and unexpected findings (Hasson, 2024, p. 15). These methods will help build empathy and ensure the design process remains user-focused. By combining insights from various techniques, a comprehensive understanding of users will be developed, leading to solutions that genuinely meet the study’s needs (Von Hippel, 2001). By immersing in lecturers’ perspectives, the study aims to grasp their needs, thoughts, and emotions comprehensively, paving the way for designing an AI tool that streamlines assessment grading processes, reduces workload, and enhances the overall teaching and learning experience.
The Define stage in the Design Thinking process for creating an AI-graded marking tool to alleviate lecturer workload is pivotal in shaping the project’s direction. This phase will define the problem statement or design challenge the tool aims to solve for lecturers and students. The goal is to create a problem statement that answers questions like what needs to be solved, who it is for, different approaches, and how to act (Inganah et al., 2023). Framing the problem statement with empathy toward lecturers and students ensures a human-centred approach, focusing on their needs and experiences (Pazell and Hamilton, 2021). Empathy maps, point of view (POV) statements, and “How Might We” (HMW) statements are valuable techniques for refining problems and developing actionable insights. Therefore, this study seeks to establish: “How might we develop a design thinking AI-driven auto-marking/grading system to reduce the workload of lecturers at a South African Open Distance eLearning (ODeL) institution?” This clear definition of the problem statement sets a solid foundation for developing an AI tool that effectively reduces lecturer workload and enhances the assessment grading process.
In the Ideate stage of the Design Thinking process for developing an AI-graded marking tool to alleviate lecturer workload, the focus is on generating many creative ideas rather than polished solutions. This phase encourages challenging the status quo and thinking outside the box, where project team members should feel safe to explore unconventional ideas without fear of judgment (Kumar, 2023, p. 69). For this prototype, techniques such as bodystorming, where participants simulate the AI-graded marking process to identify pain points, and lightning demos, which involve examining how other industries implement AI solutions, can spark innovative concepts (Alaoui, 2023). Therefore, we propose that bodystorming can be used as an effective tool at the early stages of the design process to communicate the need to find results by allowing participants to have an embodied experience situated in the context of human-robot interaction (Abtahi et al., 2021).
The study will additionally apply the 3-Step Sketching technique to allow the development team members to sketch their ideas for the tool individually, promoting diverse thinking without fear of judgment. Ferguson (1994) identifies three kinds of sketches that clarify sketching’s role in creative design groups: the thinking sketch, the talking sketch, and the prescriptive sketch. Thinking sketches refer to designers using the drawing surface to support their thinking processes (Ferguson, 1994). After generating numerous ideas to simulate the AI-graded marking process, this study will apply techniques like dot voting, which effectively captures decision-making and mood, to help select the most promising concepts for prototyping (Lewrick et al., 2020). This structured yet open-ended approach ensures that the team considers a wide range of possibilities, ultimately leading to a more effective and user-centred AI grading tool.
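As an illustrative sketch of the dot-voting selection step (the idea names and vote counts below are hypothetical, not the study’s actual data), tallying each member’s dots and selecting the top concepts for prototyping can be expressed as follows:

```python
from collections import Counter

def tally_dot_votes(votes, top_n=2):
    """Count dots per idea and return the top-n concepts for prototyping."""
    counts = Counter(votes)
    return [idea for idea, _ in counts.most_common(top_n)]

# Each team member places dots on the ideas they favour (hypothetical ideas).
votes = ["rubric-based NLP scorer", "peer-review assist", "rubric-based NLP scorer",
         "plagiarism pre-check", "rubric-based NLP scorer", "plagiarism pre-check"]
print(tally_dot_votes(votes))  # ['rubric-based NLP scorer', 'plagiarism pre-check']
```

The tally simply surfaces the concepts with the most collective support, which is the decision signal dot voting is designed to capture.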
In the Prototyping phase of developing an AI-graded marking tool to alleviate lecturer workload, the focus is on creating simple, high-fidelity prototypes to test the solution in a realistic setting, which is essential for advancing toward a design that intersects desirability, feasibility, and viability (Lu et al., 2024). We will keep the AI-graded marking tool prototype cost-effective and straightforward, making informed design decisions based on real-world feedback from academic employees and valuing both validation and rebuttal of ideas. We will employ a high-fidelity prototype to ensure that the testing experience offers an authentic interaction with the tool, providing valuable insights for improvement (Polidoro et al., 2024). Throughout this process, maintaining empathy and keeping academic employees’ needs at the forefront ensures that the final AI tool effectively reduces their workload while meeting their requirements.
The Testing phase for developing an AI-graded marking tool aimed at alleviating lecturer workload is essential for validating the effectiveness of the prototype. Thus, it is necessary to bring the prototype to actual academic employees for their viewing and to gather their feedback. This critical step allows our development team to observe how academics (users) interact with the tool, identifying which features are practical and which need improvement. By directly engaging with academics (users), the development team gains valuable insights into the tool’s functionality and user experience, informing necessary iterations and refinements, and can gauge the product’s viability, ultimately saving time and money by addressing issues early in the design process (Leßenich and Sobernig, 2023). This process ensures that the final product is user-centred and meets the needs of academics (users) (Abreu-Romu, 2023).
As this study is an ongoing project, it follows, and will continue to follow, these steps. By applying design thinking principles, South African ODeL institutions can develop and implement an AI-driven auto-marking/grading system that significantly reduces lecturer workload while maintaining high-quality assessment standards in online distance education. From the above discussion, the proposed framework is as follows:
9 Framework for an AI-driven auto-marking/grading system to alleviate lecturer workload for student assessments in higher learning online distance education
The above discussion birthed a framework for developing an AI-driven auto-marking/grading system to alleviate lecturer workload for student assessments in higher learning online distance education. In the Empathize phase, sharing case studies with academic employees for review, and gathering insight into their time constraints, grading criteria, and feedback requirements, will inform both the prototype development and the desired improvements in the assessment process (Luh et al., 2011). The study framework proposes Defining the problem statement: “How might we develop an AI-driven auto-marking/grading system that efficiently grades assessments, provides accurate feedback, and saves time for lecturers in online distance education?” (Asano, 2023). This phase also identifies the critical features and functionalities required in the AI grading system, such as automatic scoring, feedback generation, plagiarism detection, and customisable grading criteria. In the Ideation phase, the framework suggests brainstorming AI algorithms to accurately grade various assessments, including essays, multiple-choice questions, and practical assignments, and exploring AI technologies, such as natural language processing (NLP), machine learning (ML), and deep learning (DL), to automate the grading process effectively (Denecke et al., 2023). For the prototype Implementation, the framework proposes integrating AI-powered tools for plagiarism detection, language translation, and accessibility features to enhance the overall assessment experience.
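To make the Ideation phase concrete, the core mechanism of NLP-based automatic scoring can be illustrated with a deliberately simple sketch: comparing a student’s free-text answer against a model answer and returning a score with templated feedback. This is a purely hypothetical illustration, far simpler than the trained NLP/ML models the framework proposes; all function names and the keyword-overlap scoring rule are the authors’ illustrative assumptions, not the study’s actual system.

```python
import re


def tokenize(text):
    """Lowercase the text and extract word tokens as a set."""
    return set(re.findall(r"[a-z']+", text.lower()))


def auto_grade(student_answer, model_answer, max_marks=10):
    """Score a free-text answer by how many model-answer terms it covers,
    and generate templated feedback. A production system would use trained
    NLP/ML models; this keyword-recall rule is only a sketch."""
    model_terms = tokenize(model_answer)
    student_terms = tokenize(student_answer)
    covered = model_terms & student_terms
    recall = len(covered) / len(model_terms) if model_terms else 0.0
    score = round(recall * max_marks, 1)
    missing = sorted(model_terms - student_terms)
    feedback = (f"Scored {score}/{max_marks}. "
                + ("Well covered."
                   if recall > 0.8
                   else f"Consider addressing: {', '.join(missing[:5])}."))
    return score, feedback


# Hypothetical usage: grade one short answer against a model answer.
score, fb = auto_grade(
    "Design thinking starts with empathy for the user.",
    "Design thinking begins with empathy, then defines the problem.",
)
```

In a real deployment the scoring function would be replaced by a trained model, but the interface (answer in, score and feedback out) is the part the framework’s automatic-scoring and feedback-generation features depend on.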
Conducting extensive Testing of the AI grading system with diverse assessments and lecturer inputs to ensure accuracy, reliability, and usability requires gathering feedback from lecturers and students, addressing any issues or improvements before Deployment, and measuring the system’s effectiveness in reducing workload, improving grading consistency, and enhancing student learning outcomes in online distance education.
From a Design Thinking point of view, the five phases were included; however, to strengthen the study, the framework proposes an additional Implementation phase. In this phase, the focus shifts to deploying the AI-driven auto-marking/grading system within higher learning online distance education settings, seamlessly integrating it with existing learning management systems (LMS) or assessment platforms. Deployment involves both technical setup and comprehensive upskilling, training, and support for academics to use the AI grading system effectively, understand its capabilities, and leverage advanced features for personalized feedback and assessment insights, ensuring successful adoption and maximizing benefits.
The framework proposes to Continuously Iterate and improve the AI grading system based on user feedback, technological advancements, and evolving educational needs in online distance education (Deeva et al., 2021). In the Iteration phase, the focus lies on continuous improvement of the AI grading system, driven by academic (user) feedback, technological advancements, and evolving educational requirements. This iterative process includes implementing updates and new features to enhance the system’s functionality, accuracy, and user experience, ensuring its relevance and effectiveness in addressing the changing landscape of online distance education.
By following this framework, institutions can develop and implement an AI-driven auto-marking/grading system that significantly alleviates lecturer workload while enhancing the quality and efficiency of student assessments in higher learning online distance education. Figure 1: AI-Graded Tool Design Thinking Framework, below, reflects the discussion:
This figure organizes the framework stages and activities into a structured format, making it easier to follow the development process for an AI-driven auto-marking/grading system in higher learning online distance education.
10 AI-graded tool and sustainability
10.1 AI grading and governance, risk, and compliance (GRC)
Integrating AI-driven auto-marking and grading systems in education has revolutionized the landscape of Governance, Risk, and Compliance (GRC). Historically, GRC involved extensive manual work, such as writing policies, managing controls, and staying updated with regulations, often using static templates and spreadsheets (Kjærvik, 2023). This approach, while functional, was time-consuming and prone to inaccuracies. With the advent of AI, particularly in auto-marking and grading, educational institutions can now leverage big data to streamline GRC processes. AI can analyse and process large quantities of data to identify patterns, assess risks, and provide real-time assessments, significantly reducing the manual workload and enhancing the accuracy of GRC activities (Mughal, 2018). This technological advancement improves efficiency and enables better-informed decision-making, ultimately fostering a more dynamic and responsive GRC environment.
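The pattern-detection capability described above can be illustrated with a minimal, hypothetical sketch: flagging markers whose average awarded grade deviates sharply from the cohort, the kind of anomaly an AI-assisted GRC process would surface for human compliance review. The function name, data structure, and z-score rule are assumptions for illustration, not an actual institutional tool.

```python
from statistics import mean, stdev


def flag_grading_anomalies(grades_by_marker, z_threshold=2.0):
    """Flag markers whose average grade deviates sharply from the cohort
    mean -- a simple pattern check of the kind AI-assisted GRC automates.

    grades_by_marker: {marker_id: [grades awarded by that marker]}
    Returns a list of marker ids whose average lies more than
    z_threshold sample standard deviations from the overall average.
    """
    averages = {m: mean(g) for m, g in grades_by_marker.items() if g}
    if len(averages) < 2:
        return []  # not enough markers to compare
    overall = mean(averages.values())
    spread = stdev(averages.values()) or 1e-9  # avoid division by zero
    return [m for m, avg in averages.items()
            if abs(avg - overall) / spread > z_threshold]


# Hypothetical usage: marker C grades far above colleagues A and B.
flagged = flag_grading_anomalies(
    {"A": [60, 62], "B": [61, 63], "C": [95, 97]}, z_threshold=1.0)
# flagged == ["C"]
```

The point of the sketch is the workflow, not the statistic: the system surfaces candidates automatically, and a human reviewer makes the compliance judgment, preserving the human-centric approach the GRC literature calls for.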
Traditionally, GRC tasks involved manual effort and static tools like templates and spreadsheets, leading to time-consuming and error-prone processes; however, AI integration, mainly through big data analytics, is transforming the GRC landscape by automating data processing, enhancing decision-making, and enabling proactive risk management (Kjærvik, 2023). The rapid adoption of AI raises legal and ethical challenges around privacy, intellectual property, and academic integrity, highlighting the need for legal and ethical standards to protect individuals and hold companies accountable (Dhirani et al., 2023). Organisations require employees to embrace AI awareness (through campaigns, training, workshops, and other interventions), empathy, and other higher principles of humanity, ensuring a human-centric approach while letting robots and generative AI fulfill their intended roles (Rawas, 2024). By questioning, thinking creatively, and acting independently, we prepare our organization for the future, allowing innovation to flourish and ensuring that robots remain tools while humans retain mastery (Paesano, 2023).
Examples of AI implementation in education, such as Singapore’s AI integration efforts, provide a more in-depth analysis of technological feasibility and AI management and offer actionable insights for overcoming operational challenges in the South African context (Foong et al., 2024). The study highlights the importance of South Africa investing in AI research and development to create trustworthy AI frameworks and regulations tailored to the local context rather than relying on Western-imported models; furthermore, a proactive governmental role is recommended to ensure that the country harnesses AI’s full potential rather than allowing the private sector to lead independently (Nyathi, 2023). Research examining barriers to AI adoption in smart cities found that issues such as inadequate infrastructure, limited funding, cybersecurity risks, and a lack of trust in AI and IoT technologies are key factors slowing adoption, with causal relationships among these challenges illustrated through a network map and cause-effect diagram, aiding policymakers and researchers in developing solutions for sustainable smart cities (Wang et al., 2021). Additionally, a thorough exploration of AI management strategies, including routine bias audits, algorithmic transparency, and rigorous POPIA compliance, would address concerns related to data governance and ethical responsibility (Ramaliba and Jacobs, 2024). Integrating these elements would enable a more comprehensive framework for AI deployment in education that upholds standards of fairness, transparency, and sustainability, ensuring that AI systems meet the diverse needs of ODeL institutions while supporting an equitable and environmentally conscious approach to AI. All AI-driven innovative solutions involve significant resource demands, such as energy and water, which can create local and global sustainability challenges; thus, innovations such as the AI grading tool must consider the environment.
10.2 AI and climate change
AI’s overall environmental impact is relatively small; nevertheless, it necessitates careful management and investment in innovative solutions and governance to mitigate environmental effects and unlock its potential for positive sustainability outcomes (Nishant et al., 2020; Francisco and Linnér, 2023). The benefits of AI for sustainability are not guaranteed and require deliberate efforts and trusted governance to ensure AI increases efficiency, accelerates decarbonisation, and avoids unsustainable outcomes (Channi and Kumar, 2024). Unlocking AI’s potential for sustainability requires increased investment, inclusive digital and data infrastructure, minimizing resource use, robust AI policy and governance, and workforce capacity building (Adewale et al., 2024).
AI systems, which vary in energy consumption based on their complexity and usage, generally require substantial electricity to process and analyse data efficiently. The extra energy demand from platforms like ChatGPT, which uses around 10 times the electricity of a Google search per response, contributes to global greenhouse gas emissions (Dev, 2023), as evidenced by rising carbon dioxide (CO2) emissions from tech giants like Microsoft and Google due to data center expansion (Li and Porter, 2019). The overall impact of AI’s energy use is expected to grow as its adoption increases across industries (Kemene et al., 2024).
Despite concerns that AI’s rise could increase energy demand and fossil fuel use, Bill Gates, Microsoft co-founder, told journalists at the Breakthrough Energy Summit held in London in June 2024 that AI will enable countries to use less energy overall by making systems more efficient (Ambrose and Hern, 2024). Governments must align energy investment with AI development by facilitating the integration of clean energy and AI goals for companies and promoting best practices across the AI value chain through voluntary and mandated policies (Chan et al., 2024). The extra demand from AI data centers is likely offset by investments in green electricity as tech companies commit to using clean energy while shifting energy governance, including changes in electricity consumption patterns, cooling methods, and circular economy practices (Wong and White, 2024). Gates acknowledges that while AI can help reduce carbon emissions, the transition to green energy is not occurring quickly enough, and the world may miss its 2050 net-zero emissions target by up to 15 years due to insufficient green electricity to phase out fossil fuels (Ambrose and Hern, 2024).
Technological and operational challenges further complicate AI integration in educational settings. While AI-driven grading tools are advancing rapidly, their successful deployment requires robust technological infrastructure and skilled support, resources that are not always readily available in South African distance learning institutions (Patel and Ragolane, 2024). Case studies in Singapore have demonstrated the critical role of IT infrastructure and trained staff in achieving seamless AI integration (Foong et al., 2024). Furthermore, the pressure of rapid commercialization, as characterized by Schumpeter’s disruptive innovation, can compromise ethical standards and compliance with South Africa’s Protection of Personal Information Act (POPIA). Prioritizing swift AI deployment over rigorous quality and ethical checks risks overlooking key issues related to data protection, transparency, and institutional accountability (Lim et al., 2023). Proprietary AI models also complicate compliance, as educational institutions often lack insight into the data processing methodologies used, raising ethical and legal concerns regarding student privacy (Foucault, 1977; Baker and Hawn, 2022). Implementing an AI-driven auto-marking/grading system at a South African Online Distance eLearning (ODeL) HLI to reduce the workload of lecturers requires further engagement on how it will add value to the environment and how it can contribute positively to several Sustainable Development Goals (SDGs).
10.3 AI and sustainable development goals (SDGs)
When integrating an AI-driven auto-marking/grading system into South African Online Distance eLearning (ODeL) to alleviate lecturer workload, it is vital to prioritize considerations regarding data privacy policies, labor laws, and the importance of Sustainable Development Goals (SDGs). Adhering to South Africa’s data protection laws, such as the Protection of Personal Information Act (POPIA), is essential to ensure that student data collected and processed for assessment grading is handled ethically and legally, contributing to SDG 16 on Peace, Justice, and Strong Institutions (POPIA, 2020). Foucault’s concept of biopower underscores the importance of ethical governance and control mechanisms in handling sensitive data and ensuring fair practices, aligning with SDG 9 on Industry, Innovation, and Infrastructure (Roberts, 2009, p. 23). Schumpeter’s ideas on innovation emphasize the need for continuous improvement and adaptation in integrating AI systems responsibly within educational frameworks to achieve broader societal goals and advancements, supporting SDG 17 on Partnerships for the Goals (Śledzik et al., 2023). Adhering to ethical assessment grading involves obtaining consent, maintaining data accuracy, implementing robust security measures, and safeguarding against unauthorized access or disclosure of personal information (Lee et al., 2016; Alawadhi, 2024), aligning with SDG 9 on Industry, Innovation, and Infrastructure.
Compliance with labor laws, including fair working hours, adequate training, and addressing job-related concerns due to automation, is crucial to upholding ethical employment practices, supporting SDG 8 on Decent Work and Economic Growth (Fossa, 2023). Data security measures, such as cybersecurity protocols, encryption, access controls, and regular audits, are also imperative to protect sensitive data from breaches, contributing to SDG 16 on Peace, Justice, and Strong Institutions (Yulchiev, 2024; Iborra and Juergensen, 2024). Continuous monitoring and compliance assessments ensure ongoing adherence to data privacy, labor laws, ethical guidelines, and security standards, promoting AI technology’s responsible and ethical use in education and supporting various SDGs related to quality education, inclusive and equitable work environments, reduced inequalities, and sustainable development. Table 1: AI Grading Tool aligned Sustainable Development Goals, below, describes how each SDG is affiliated with the AI-driven auto-marking/grading system and how reducing the workload of lecturers in South African Online Distance eLearning (ODeL) can improve the quality of education delivery:
This integration of AI technology in ODeL supports the broader agenda of sustainable development by promoting inclusive education, supporting decent work practices, reducing inequalities, and fostering collaboration toward shared goals. The study below therefore reveals limitations that require further exploration and publication, while calling for deeper thinking and strategic alignment with regard to AI deployment across the higher learning sector.
11 Limitations
The study outlines a promising AI-driven grading system, yet several limitations must be addressed to ensure its ethical, equitable, and practical effectiveness, particularly within Online Distance eLearning (ODeL) contexts. A significant limitation is the lack of empirical data validating the proposed system’s effectiveness in real-world applications, especially in ODeL frameworks. This absence of data limits the generalizability of anticipated benefits, such as operational efficiency and workload reduction, across diverse ODeL institutions that often feature varied student demographics and institutional setups (Lim et al., 2023). Although AI promises grading efficiency, the scarcity of quantitative evaluations currently limits the reliability of these claims, highlighting the need for empirical studies to substantiate AI’s impact on lecturer workloads and grading processes (Jonäll, 2024; Lim et al., 2023). Additionally, machine learning, the primary technique underpinning AI-based grading, remains a relatively new field, resulting in resource limitations for developing and deploying prototype and production systems. Many machine learning resources, particularly in Africa, are in their infancy, and institutions often struggle to find experienced professionals, appropriate tools, and scalable infrastructures for AI development and maintenance (Boateng, 2024). This scarcity of mature resources poses challenges for those advancing machine learning in education and seeking to bring new AI products to the operational stage, requiring ongoing efforts to expand both expertise and technological assets as part of the institution’s growth strategy (Gikunda, 2024).
Another critical concern involves the ethical implications tied to the surveillance functions embedded within AI grading systems. Drawing on Foucault’s concept of biopower, AI-based assessments act as tools for institutional surveillance, continuously monitoring student behaviors, performance, and interactions (Foucault, 1977). Such systems have the potential to function as disciplinary mechanisms, subtly guiding student behaviors to align with institutional norms and expectations. This concern is especially pertinent in the context of assessing African languages and Indigenous knowledge systems, where AI grading systems may lack the cultural sensitivity to accurately evaluate contextually rich knowledge and linguistic nuances (Baker and Hawn, 2022). The absence of a focused bias analysis exacerbates these risks, potentially reinforcing structural inequities and disadvantaging students from marginalized linguistic and cultural backgrounds (Baker and Hawn, 2022).
The environmental impact of AI systems is another pertinent issue, especially in an era where responsible technology deployment is critical. Although the study briefly mentions the carbon footprint of AI, specific strategies are required to address this environmental concern, as AI systems are known to require substantial computational power, which contributes to their environmental impact. By prioritizing sustainability, the environmentally sustainable computing (ESC) framework provides a comprehensive tool for academia and industry to integrate eco-friendly practices across computing domains, significantly reducing carbon footprints through lifecycle management, regulatory compliance, and resource optimization (Pazienza et al., 2024). The limitations this study discusses also reveal an opportunity for further study and research on future approaches; thus, recommendations follow below.
12 Recommendations
To mitigate the above limitations, responsible and sustainable higher learning institutions must commit to utilizing cloud platforms powered by renewable energy sources, integrating energy-efficient algorithms in AI systems, and exploring renewable energy solutions for cloud-based AI processing. This aligns with recent recommendations in the sustainable computing literature, which emphasize reducing the carbon footprint of AI by adopting renewable-powered infrastructure and optimizing data processing for energy efficiency (Mehta et al., 2023). These initiatives reflect a dedication to minimizing environmental harm while continuing to innovate in AI-driven educational technology. In light of the above, the study recommends the following:
Empathize with users: Start by understanding the needs of lecturers and students. Conduct interviews and user-based studies to gather insights into lecturers’ challenges with grading in online distance education. Use empathy maps to explore their thoughts, feelings, and pain points deeply. Drawing from Foucault’s theories on power and knowledge, consider how the current assessment practices might impose undue stress on lecturers and how an AI-driven system could empower them by redistributing workload.
Define clear problem statements: Develop a focused problem statement that addresses creating an AI-driven auto-marking/grading system to efficiently grade assessments, provide accurate feedback, and save lecturers’ time. Identify and prioritize critical features like automatic scoring, feedback generation, and plagiarism detection. Incorporate Schumpeter’s concept of innovation-driven economies to emphasize the need for continuous improvement and adaptation in educational tools to stay competitive and relevant.
Ideate innovative solutions: Brainstorm AI algorithms and models for grading various types of assessments. Explore natural language processing, machine learning, and deep learning technologies. Consider integrating tools for plagiarism detection, language translation, and accessibility. Reflect on Foucault’s ideas about surveillance and control to ensure that the AI system respects privacy and does not create a new form of disciplinary power over students and lecturers.
Prototype high-fidelity models: Create prototypes of the AI grading system, including user interfaces for lecturers to input criteria, view graded assignments, and provide feedback. Demonstrate the system’s capabilities through integrated AI algorithms and real-time analytics. Ensure that the prototype aligns with Schumpeter’s principle of creative destruction, where innovative, more efficient methods replace old, inefficient practices.
Test extensively: Conduct comprehensive testing with diverse assessments and inputs to ensure accuracy and usability. Gather feedback from lecturers and students and measure the system’s effectiveness in reducing workload and improving learning outcomes. Utilize Foucault’s concept of feedback loops to continuously monitor and adjust the AI system based on user experience, maintaining a balance of power and efficiency.
Implement thoughtfully: Deploy the AI system in online distance education environments, integrating it with existing LMS or assessment platforms. Provide thorough training and support for lecturers. Ensure compliance with data privacy policies and labor laws. Leverage Schumpeter’s innovation theory to encourage a culture of ongoing improvement and adaptation within the educational institution.
Iterate continuously: Improve the AI grading system based on feedback, technological advancements, and evolving educational needs. Regularly update and enhance the system’s functionality and user experience. Foucault’s theories can guide the iteration process to ensure that improvements do not inadvertently increase stress or reduce autonomy for lecturers and students.
Align with sustainable development goals (SDGs): Highlight contributions to Quality Education (SDG 4), Decent Work and Economic Growth (SDG 8), Reduced Inequality (SDG 10), and Partnerships for the Goals (SDG 17). Emphasize how the system promotes inclusive education, supports decent work environments, reduces inequalities, and fosters collaboration for sustainable educational development. Incorporate Foucault’s ideas on social justice and equity to ensure the system addresses power imbalances and promotes fair access to education. Integrate Schumpeter’s focus on economic growth through innovation to highlight the system’s potential to drive educational and economic advancements.
By integrating insights from Foucault and Schumpeter, these recommendations ensure that the AI-driven auto-marking/grading system alleviates lecturer workload and aligns with broader educational, social, and economic goals. This approach supports a more sustainable, equitable, and innovative educational environment in South African Online Distance eLearning (ODeL).
13 Conclusion
Integrating AI-driven auto-marking/grading systems in higher education represents a significant leap toward addressing educators’ challenges, especially in South Africa. By combining design thinking principles with AI development, institutions can create innovative solutions that reduce lecturer workload and enhance students’ learning experience. This approach aligns with the evolving landscape of education, where technology plays a pivotal role in driving efficiency and improving outcomes.
The potential benefits of AI integration in education, such as consistency, efficiency, timeliness, data-driven insights, enhanced academic integrity, and ethical considerations, highlight such systems’ transformative impact. However, navigating challenges such as ensuring fairness, avoiding bias, protecting data privacy, and continuously evaluating the system’s impact is crucial. This emerges strongly in the limitations the study encounters and in its recommendations for further research and development. Collaborative efforts between stakeholders, adherence to ethical guidelines, and ongoing evaluation are essential to realize the full potential of AI-driven solutions in education.
In conclusion, integrating AI-driven auto-marking/grading systems represents a promising step toward a more effective, fair, and personalized learning experience. As higher learning institutions embrace innovation and leverage technological advancements, cognizant of GRC, climate change, and the SDGs, HLIs can meet the evolving needs of educators and students while maintaining high standards of academic integrity and ethical use of AI.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Author contributions
KT: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. MN-M: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing.
Funding
The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. Publication fees were from the University of South Africa Professional Research office and ICT department.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declare that Generative AI was used in the creation of this manuscript. The article was edited with Grammarly.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Abreu-Romu, T. (2023). Using service design methods to improve discovery process in an e-learning business: Accelerating innovation in user-centric product development–A case study. Available at: https://www.theseus.fi/bitstream/handle/10024/817885/Abreu-Romu_Tatiana.pdf?sequence=2 (Accessed June 23, 2024).
Abtahi, P., Sharma, N., Landay, J. A., and Follmer, S. (2021). “Presenting and exploring challenges in human-robot interaction design through bodystorming” in Design thinking research: Interrogating the doing. eds. C. Meinel and L. Leifer (Cham: Springer Nature), 327–344.
Adewale, B. A., Ene, V. O., Ogunbayo, B. F., and Aigbavboa, C. O. (2024). A systematic review of the applications of AI in a sustainable building’s lifecycle. Buildings 14:2137. doi: 10.3390/buildings14072137
Aithal, P. S., and Aithal, S. (2023). Super innovation in higher education by nurturing business leaders through incubationship. IJAEML 7, 142–167. doi: 10.47992/IJAEML.2581.7000.0192
Alaoui, S. F. (2023). Dance-led research (doctoral dissertation). Paris: Université Paris Saclay (COMUE).
Alawadhi, A. (2024). A case study exploring ESL instructors’ perspectives of blended learning in UAE higher education. In BUiD Doctoral Research Conference 2023: Multidisciplinary. 110–117. Cham: Springer Nature Switzerland.
Albuquerque, D., Damásio, J., Santos, D., Almeida, H., Perkusich, M., and Perkusich, A. (2024). Leveraging the innovation index (IVI): A research, development, and innovation-centric measurement approach. J. Open Innov.: Technol. Mark. Complex., 10:100346.
Ambrose, J., and Hern, A. (2024). AI will be help rather than hindrance in hitting climate targets, Bill Gates says. London: The Guardian.
Asano, Y. (2023). “Defining the problem’s solution to lead to the ideation phase: -a case study on the use of “how might we…”” in International conference on human-computer interaction. eds. G. Sinha and R. Shahi (Cham: Springer Nature Switzerland), 27–37.
Baker, R. S., and Hawn, A. (2022). Algorithmic bias in education. Int. J. Artif. Intell. Educ. 11, 1–41. doi: 10.1007/s40593-022-00256-6
Bear, A., and Skorton, D. (2018). Integrating the humanities and arts with sciences, engineering, and medicine in higher education: Branches from the same tree. Washington, DC: The National Academies Press.
Blok, V. (2021). “Philosophical reflections on the concept of innovation” in Handbook on alternative theories of innovation. ed. R. Joseph (Cheltenham: Edward Elgar Publishing), 354–367.
Boateng, G. (2024). Leveraging AI to advance science and computing education across Africa: challenges, progress, and opportunities.
Boyles, C. (2019). Finding fault with Foucault: teaching surveillance in the digital humanities. “Finding fault with Foucault: surveillance and the digital humanities” | manifold @CUNY. Available at: https://cuny.manifoldapp.org/read/finding-fault-with-foucault-surveillance-and-the-digital-humanities-76c4006c-bc97-4eda-8c18-89b96d30322a/section/21ba90d3-9aa2-4e12-8bf9-15a5caa553a2 (Accessed June 27, 2024).
Chai, F., Ma, J., Wang, Y., Zhu, J., and Han, T. (2024). Grading by AI makes me feel fairer? How different evaluators affect college students’ perception of fairness. Front. Psychol. 15:1221177. doi: 10.3389/fpsyg.2024.1221177
Chan, K., West, D., Teo, M., Brown, H., Westgarth, T., and Smith, T. (2024). Greening AI: a policy agenda for the artificial intelligence and energy revolutions. Environ. Policy Gov. 32, 390–410. doi: 10.1002/eet.1978
Channi, H. K., and Kumar, R. (2024). “Digital technologies for fostering sustainability in industry 4.0” in Evolution and trends of sustainable approaches. eds. M. M. Serrano-Arcos, B. Payán-Sánchez, and A. Labella-Fernández (Amsterdam: Elsevier), 227–251.
Dam, R. F. (2024). The 5 stages in the design thinking process. Aarhus: Interaction Design Foundation – IxDF.
David Carlson, J., and Dobson, T. (2020). Fostering empathy through an inclusive pedagogy for career creatives. Int. J. Art Design Educ. 39, 430–444. doi: 10.1111/jade.12289
Deeva, G., Bogdanova, D., Serral, E., Snoeck, M., and De Weerdt, J. (2021). A review of automated feedback systems for learners: classification framework, challenges and opportunities. Comput. Educ. 162:104094. doi: 10.1016/j.compedu.2020.104094
Denecke, K., Glauser, R., and Reichenpfader, D. (2023). Assessing the potential and risks of AI-based tools in higher education: results from an eSurvey and SWOT analysis. Trends Higher Educ. 2, 667–688. doi: 10.3390/higheredu2040039
Dev, K. (2023). ChatGPT needs SPADE (sustainability, privacy, digital divide, and ethics) evaluation: A review.
Dhirani, L. L., Mukhtiar, N., Chowdhry, B. S., and Newe, T. (2023). Ethical dilemmas and privacy issues in emerging technologies: a review. Sensors 23:1151. doi: 10.3390/s23031151
Dokter, G., Boks, C., Rahe, U., Jansen, B. W., Hagejärd, S., and Thuvander, L. (2023). The role of prototyping and co-creation in circular economy-oriented innovation: a longitudinal case study in the kitchen industry. Sust. Prod. Consumpt. 39, 230–243. doi: 10.1016/j.spc.2023.05.012
Dutt, R. (2023). Product-designthinking-productmanagement-activity. [LinkedIn]. Available at: https://www.linkedin.com/posts/radhika-dutt_product-designthinking-productmanagement-activity-7132797226338136065-r0Gd/?trk=public_profile_like_view (Accessed July 23, 2024).
Eaton, S. E. (2022). New priorities for academic integrity: equity, diversity, inclusion, decolonisation and indigenization. Int. J. Educ. Integr. 18:10. doi: 10.1007/s40979-022-00105-0
Erlenbusch-Anderson, V. (2020). The beginning of a study of biopower: Foucault’s 1978 lectures at the Collège de France. Foucault Stu. Lectur. 12, 5–26. doi: 10.22439/fsl.vi0.6151
Farazouli, A. (2024). “Automation and assessment: exploring ethical issues of automated grading systems from a relational ethics approach” in Framing futures in postdigital education: Critical concepts for data-driven practices. eds. A. Buch, Y. Lindberg, and T. C. Pargman (Cham: Springer Nature Switzerland), 209–226.
Fernandes, M. A. (2023). Study of effective frameworks to support medical technology innovation in the hospital setting.
Ferreira Martins, H., de Oliveira, C., Junior, A., Dias Canedo, E., Dias Kosloski, R. A., Ávila Paldês, R., et al. (2019). Design thinking: challenges for software requirements elicitation. Information 10:371. doi: 10.3390/info10120371
Finefter-Rosenbluh, I. (2022). Between student voice-based assessment and teacher-student relationships: teachers’ responses to ‘techniques of power’ in schools. Br. J. Sociol. Educ. 43, 842–859. doi: 10.1080/01425692.2022.2080043
Fleming, P. (2022). How biopower puts freedom to work: conceptualising 'pivoting mechanisms' in the neoliberal university. Hum. Relat. 75, 1986–2007. doi: 10.1177/00187267221079578
Foong, Y. P., Pidani, R., Sithira Vadivel, V., and Dongyue, Y. (2024). “Singapore smart nation: journey into a new digital landscape for higher education,” in Emerging Technologies in Business: Innovation Strategies for Competitive Advantage. Singapore: Springer Nature Singapore, 281–304.
Fossa, F. (2023). “Sustainable mobility. From driving automation to ethical commitment” in Ethics of driving automation: Artificial agency and human values. ed. F. Fossa (Cham: Springer Nature Switzerland), 117–137.
Foucault, M. (2003). “Society must be defended”: Lectures at the Collège de France, 1975–1976. New York, NY: Macmillan.
Fourie-Malherbe, M. (Ed.) (2021). Creating conditions for student success: Social justice perspectives from a South African university. Stellenbosch: African Sun Media.
Francisco, M., and Linnér, B. O. (2023). AI and the governance of sustainable development: an idea analysis of the European Union, the United Nations, and the world economic forum. Environ. Sci. Pol. 150:103590. doi: 10.1016/j.envsci.2023.103590
Gartner Inc (2023a). Use generative AI in applied innovation to drive business value. Stamford, CT: Gartner Inc.
Gartner Inc (2024a). How to calculate business value and cost for generative AI use cases. Stamford, CT: Gartner Inc.
George, B., and Wooden, O. (2023). Managing the strategic transformation of higher education through artificial intelligence. Admin. Sci. 13:196. doi: 10.3390/admsci13090196
Ghimire, A. (2024). Generative AI in education from the perspective of students, educators, and administrators. Utah State University DigitalCommons@USU. Available at: https://digitalcommons.usu.edu/cgi/viewcontent.cgi?article=1119&context=etd2023 (Accessed July 20, 2024).
Gikunda, K. (2024). Leveraging artificial intelligence for sustainable agricultural development in Africa: Opportunities, challenges, and impact. Dedan Kimathi University of Technology, Kenya. Available at: https://arxiv.org/pdf/2401.06171 (Accessed July 04, 2024).
Gledson, B., Rogage, K., Thompson, A., and Ponton, H. (2024). Reporting on the development of a web-based prototype dashboard for construction design managers, achieved through design science research methodology (DSRM). Buildings 14:335. doi: 10.3390/buildings14020335
Gros, F. (2016). “Is there a biopolitical subject? Foucault and the birth of biopolitics” in Biopower: Foucault and beyond. eds. V. W. Cisney and N. Morar (Chicago, IL: University of Chicago Press), 259–273.
Hartwell, K., and Aull, L. (2024). Navigating innovation and equity in writing assessment. Assess. Writ. 61:100873. doi: 10.1016/j.asw.2024.100873
Heiska, O. (2024). Use of generative artificial intelligence in business-to-business sales of industrial services in the machinery manufacturing industry: Case studies from various machinery manufacturing industries. LUT University. Available at: https://lutpub.lut.fi/handle/10024/167192 (Accessed September 23, 2024).
Iborra, A. H., and Juergensen, O. (2024). The sustainable development outcomes of mine action in Jordan. Available at: https://www.jmu.edu.
Inganah, S., Darmayanti, R., and Rizki, N. (2023). Problems, solutions, and expectations: 6C integration of 21st-century education into learning mathematics. JEMS 11, 220–238. doi: 10.30740/jems.v11i1.2301
Isaiah, O. S., and Dickson, R. K. (2024). New venture dynamics: a creative destruction model for economic development. Available at: https://www.ibm.com.
Jamil, K., Dunnan, L., Gul, R. F., Shehzad, M. U., Gillani, S. H. M., and Awan, F. H. (2022). Role of social media marketing activities in influencing customer intentions: a perspective of a new emerging era. Front. Psychol. 12:808525.
Jantjies, M. (2014). A framework to support multilingual mobile learning: A South African perspective (Doctoral dissertation). England: University of Warwick.
Jonäll, K. (2024). Artificial intelligence in academic grading: A mixed-methods study. (Masters Dissertation, University of Gothenburg). Available at: https://gupea.ub.gu.se/bitstream/handle/2077/83561/Jona%CC%88ll_HPA202_VT24.pdf?sequence=1&isAllowed=y (Accessed September 13, 2024).
Kemene, E., Valkhof, B., and Tladi, T. (2024). AI and energy: Will AI help reduce emissions or increase demand? Here’s what to know. Cologny: World Economic Forum.
Kjærvik, S. B. (2023). Utilisation of ServiceNow's risk management functionality within the GRC module: A case study (Master's thesis). Trondheim: NTNU.
Kumar, M. S. (2023). “Creative problem-solving: thinking outside the box” in The art of critical thinking: Exploring ideas in liberal arts. ed. J. Scheuer (Lanham, MD: Rowman & Littlefield).
Lambert, G., and Erlenbusch-Anderson, V. (2024). Biopolitics and biopower. Oxford: Oxford Bibliographies in Literary and Critical Theory.
Lee, W. W., Zankl, W., and Chang, H. (2016). An ethical approach to data privacy protection. ISACA, 6.
Leßenich, O., and Sobernig, S. (2023). Usefulness and usability of heuristic walkthroughs for evaluating domain-specific developer tools in industry: evidence from four field simulations. Inf. Softw. Technol. 160:107220. doi: 10.1016/j.infsof.2023.107220
Lester, M. (2018). “The creation and disruption of innovation? Key developments in innovation as concept, theory, research and practice” in Innovation in the Asia Pacific: From manufacturing to the knowledge economy. ed. C. Greenhalgh (Cham: Springer), 271–328.
Lewrick, M., Link, P., and Leifer, L. (2020). The design thinking toolbox: A guide to mastering the most popular and valuable innovation methods. New York, NY: John Wiley & Sons.
Li, M., and Porter, A. L. (2019). Can nanogenerators contribute to the global greening of data centres? Nano Energy 60, 235–246. doi: 10.1016/j.nanoen.2019.03.046
Lim, T., Gottipati, S., and Cheong, M. (2023). Artificial intelligence in today’s education landscape: Understanding and managing ethical issues for educational assessment. ResearchSquare. Available at: https://assets-eu.researchsquare.com/files/rs-2696273/v1/440100a7006d63f8dd32e3e6.pdf?c=1678953441 (Accessed September 3, 2024).
Lu, Y., Yang, Y., Zhao, Q., Zhang, C., and Li, T. J. J. (2024). AI assistance for UX: a literature review through human-centered AI. Available at: https://arxiv.org/pdf/2402.06089 (Accessed August 23, 2024).
Luckin, R., and Holmes, W. (2016). Intelligence unleashed: An argument for AI in education. London: Pearson Education.
Luh, D. B., Ma, C. H., Hsieh, M. H., and Huang, C. Y. (2011). Applying an empathic design model to gain an understanding of consumers' cognitive orientations and develop a product prototype. JIEM 4, 229–258. doi: 10.3926/jiem.201
Martins, F., Almeida, M. F., Calili, R., and Oliveira, A. (2020). Design thinking applied to smart home projects: a user-centric and sustainable perspective. Sustain. For. 12:10031. doi: 10.3390/su122310031
Masheleni, C. I. (2022). Fourth industrial banking: Case studies into digitising banking models and the foreseeable effects in South Africa. Master’s Dissertation. University of Cape Town. Available at: https://open.uct.ac.za/bitstream/handle/11427/36483/thesis_hum_2022_masheleni%20celine%20intombiyenhle.pdf?sequence=1 (Accessed August 23, 2024).
Mehta, Y., Xu, R., Lim, B., Wu, J., and Gao, J. (2023). A review for green energy machine learning and AI services. Energies 16:5718. doi: 10.3390/en16155718
Mohamed, N., Al-Jaroodi, J., Jawhar, I., Idries, A., and Mohammed, F. (2020). Unmanned aerial vehicles applications in future smart cities. Technol. Forecast. Soc. Change 153:119293.
Mughal, A. A. (2018). Artificial intelligence in information security: exploring the advantages, challenges, and future directions. J. Artif. Int. Mach. Learn. Manage. 2, 22–34.
Nardin, A. (2021). Artificial intelligence as a general purpose technology: An exploratory analysis of PCT patents. Masters Dissertation. University Ca’ Foscari Venezia. Available at: http://dspace.unive.it/bitstream/handle/10579/19003/858966-1246883.pdf?sequence=2 (Accessed June 3, 2024).
Nishant, R., Kennedy, M., and Corbett, J. (2020). Artificial intelligence for sustainability: challenges, opportunities, and a research agenda. Int. J. Inf. Manag. 53:102104. doi: 10.1016/j.ijinfomgt.2020.102104
Nyathi, W. G. (2023). The role of artificial intelligence in improving public policy-making and implementation in South Africa (Doctoral dissertation). Johannesburg: University of Johannesburg.
Olwal, T. (2023). Developing an optimal academic workload distribution model: A case study of an academic department at a university of technology in South Africa. Masters Dissertation. University of Applied Sciences Ltd. Available at: https://www.theseus.fi/bitstream/handle/10024/808300/Olwal_Thomas.pdf?sequence=2 (Accessed September 23, 2024).
Özgün, A., Dholakia, N., and Atik, D. (2017). Marketization and Foucault. Glob. Bus. Rev. 18, S191–S202. doi: 10.1177/0972150917693335
Paesano, A. (2023). Artificial intelligence and creative activities inside organisational behavior. Int. J. Organ. Anal. 31, 1694–1723. doi: 10.1108/IJOA-09-2020-2421
Page, B. R. E. (2018). The [bio]–politics of genocide: An Agambenian approach (Doctoral dissertation). Callaghan NSW: The University of Newcastle.
Patel, S., and Ragolane, M. (2024). The implementation of artificial intelligence in South African higher education institutions: opportunities and challenges. Techn. Educ. Hum. 9, 51–65. doi: 10.47577/teh.v9i.11452
Pazell, S., and Hamilton, A. (2021). A student-centred approach to undergraduate course design in occupational therapy. Higher Educ. Res. Dev. 40, 1497–1514. doi: 10.1080/07294360.2020.1818697
Pazienza, A., Baselli, G., Vinci, D. C., and Trussoni, M. V. (2024). A holistic approach to environmentally sustainable computing. Innov. Syst. Softw. Eng. 1–25.
Pedro, F., Subosa, M., Rivas, A., and Valverde, P. (2019). Artificial intelligence in education: Challenges and opportunities for sustainable development. United Nations Educational, Scientific and Cultural Organization(UNESCO). Available at: https://repositorio.minedu.gob.pe/bitstream/handle/20.500.12799/6533/Artificial%20intelligence%20in%20education%20challenges%20and%20opportunities%20for%20sustainable%20development.pdf (Accessed August 10, 2024).
Polidoro, F., Liu, Y., and Craig, P. (2024). “Enhancing mobile visualisation interactivity: insights on a mixed-fidelity prototyping approach,” in Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–7.
Ramaliba, T., and Jacobs, L. (2024). Artificial intelligence technology to enhance data quality management practices in the banking industry in South Africa. South Afr. J. Libr. Inf. Sci. 90, 1–10. doi: 10.7553/90-2-2312
Rawas, S. (2024). AI: the future of humanity. Discov. Artif. Intell. 4:25. doi: 10.1007/s44163-024-00118-3
Roberts, D. (2009). Global governance and biopolitics: Regulating human security. London: Bloomsbury Publishing.
Rogers, M. (2013). ‘Transforming practice’: Understanding trans people's experience of domestic abuse and social care agencies (Doctoral dissertation). England: University of Sheffield.
Selwyn, N. (2024). Digital degrowth: Toward radically sustainable education technology. Learn. Media Technol. 49, 186–199.
Seo, K., Tang, J., Roll, I., Fels, S., and Yoon, D. (2021). The impact of artificial intelligence on learner–instructor interaction in online learning. Int. J. Educ. Technol. High. Educ. 18, 1–23. doi: 10.1186/s41239-021-00256-2
Śledzik, K. (2013). “Schumpeter’s view on innovation and entrepreneurship” in Management trends in theory and practice. ed. S. Hittmar (Zilina: University of Zilina), 65–77.
Śledzik, K., Szmelter-Jarosz, A., Schmidt, E. K., Bielawski, K., and Declich, A. (2023). Are Schumpeter’s innovations responsible? A reflection on the concept of responsible (research and) innovation from a neo-Schumpeterian perspective. J. Knowl. Econ. 14, 5065–5085. doi: 10.1007/s13132-023-01487-3
Smolansky, A., Cram, A., Raduescu, C., Zeivots, S., Huber, E., and Kizilcec, R. F. (2023). “Educator and student perspectives on the impact of generative AI on assessments in higher education,” in Proceedings of the Tenth ACM Conference on Learning@ Scale, 378–382.
Smuha, N. A. (2020). Trustworthy artificial intelligence in education: Pitfalls and pathways. SSRN. Available at: https://ssrn.com/abstract=3742421 (Accessed July 16, 2024).
Srivastava, S. (2023). Winning together: A UX researcher’s guide to building strong cross-functional relationships. London: CRC Press.
Thorne, S., Mentzer, N., Bartholomew, S., Strimel, G. J., and Ware, J. (2024). Learning by evaluating as an interview primer to inform design thinking. Int. J. Technol. Des. Educ. 24, 1–18. doi: 10.1007/s10798-024-09714-1
Von Hippel, E. (2001). User toolkits for innovation. J. Prod. Innov. Manag. 18, 247–257. doi: 10.1111/1540-5885.1840247
Von Schomberg, L., and Blok, V. (2021). Technology in the age of innovation: Responsible innovation as a new subdomain within the philosophy of technology. Philos. Technol. 34, 309–323.
Wang, K., Zhao, Y., Gangadhari, R. K., and Li, Z. (2021). Analyzing the adoption challenges of the internet of things (IoT) and artificial intelligence (AI) for smart cities in China. Sustain. For. 13:10983. doi: 10.3390/su131910983
Wechsler, J. (2014). Scaffolding human-centred innovation through design artefacts (Doctoral dissertation). University of Technology, Sydney, Australia. Available at: https://opus.lib.uts.edu.au/bitstream/10453/34478/10/02whole.pdf (Accessed July 16, 2024).
Willcocks, L. (2004). “Foucault, power/knowledge and information systems: reconstructing the present” in Social theory and philosophy for information systems. eds. J. Mingers and L. Willcocks (New York, NY: John Wiley and Sons), 238–296.
Wong, P., and White, J. (2024). AI's critical impact on electricity and energy demand. Sprott Energy Transition Materials Monthly. Sprott. Available at: https://sprott.com/insights/ais-critical-impact-on-electricity-and-energy-demand/#:~:text=AI%20data%20centers%20are%20likely,2.2%25%20of%20global%20electricity%20usage (Accessed July 16, 2024).
Woodruff, R. B. (1997). Customer value: the next source for competitive advantage. J. Acad. Mark. Sci. 25, 139–153. doi: 10.1007/BF02894350
Yulchiev, K. (2024). The importance of data security today. Texas J. Multidisciplinary Stu. 33, 9–14.
Keywords: design thinking, AI-driven auto-marking/grading, higher education, sustainable, ethical considerations in AI
Citation: Twabu K and Nakene-Mginqi M (2024) Developing a design thinking artificial intelligence driven auto-marking/grading system for assessments to reduce the workload of lecturers at a higher learning institution in South Africa. Front. Educ. 9:1512569. doi: 10.3389/feduc.2024.1512569
Edited by: Raman Grover, Consultant, Vancouver, BC, Canada
Reviewed by: Quan Zhang, Jiaxing University, China; Najoua Hrich, Regional Center for Education and Training Professions, Morocco
Copyright © 2024 Twabu and Nakene-Mginqi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Khanyisile Twabu, mayekky@unisa.ac.za; Mathabo Nkene, nakenmm@unisa.ac.za