ORIGINAL RESEARCH article

Front. Educ., 12 March 2024
Sec. Digital Education

Development and validation of a scale for dependence on artificial intelligence in university students

  • 1Escuela de Medicina Humana, Facultad de Ciencias de la Salud, Universidad Peruana Unión, Lima, Peru
  • 2Escuela de Posgrado, Universidad Peruana Unión, Lima, Peru
  • 3Facultad de Teología, Universidad Peruana Unión, Lima, Peru
  • 4Sociedad Científica de Investigadores Adventistas (SOCIA), Universidad Peruana Unión, Lima, Peru
  • 5Escuela Profesional de Psicología, Facultad de Ciencias de la Salud, Universidad Peruana Unión, Lima, Peru
  • 6Departamento Académico de Enfermería, Obstetricia y Farmacia, Facultad de Farmacia y Bioquímica, Universidad Científica del Sur, Lima, Peru
  • 7Unidad de Salud, Escuela de Posgrado, Universidad Peruana Unión, Lima, Peru

Background: Artificial Intelligence (AI) has permeated various aspects of daily life, including education, specifically within higher education settings. These AI technologies have transformed pedagogy and learning, enabling a more personalized approach. However, ethical and practical concerns have also emerged, including the potential decline in cognitive skills and student motivation due to excessive reliance on AI.

Objective: To develop and validate a Scale for Dependence on Artificial Intelligence (DAI).

Methods: An Exploratory Factor Analysis (EFA) was used to identify the underlying structure of the DIA scale, followed by a Confirmatory Factor Analysis (CFA) to assess and confirm this structure. In addition, the scale’s invariance based on participants’ gender was evaluated.

Results: A total of 528 university students aged between 18 and 37 years (M = 20.31, SD = 3.8) participated. The EFA revealed a unifactorial structure for the scale, which was subsequently confirmed by the CFA. Invariance analyses showed that the scale is applicable and consistent for both men and women.

Conclusion: The DAI scale emerges as a robust and reliable tool for measuring university students’ dependence on AI. Its gender invariance makes it applicable in diverse population studies. In the age of digitalization, it is essential to understand the dynamics between humans and AI to navigate wisely and ensure a beneficial coexistence.

1 Introduction

The growing influence of Artificial Intelligence (AI) in contemporary society has given rise to a broad spectrum of interdisciplinary and multidisciplinary research. From its initial focus on automation and data-driven decision-making to its evolution toward enhancing daily life, addressing complex social problems, and mitigating environmental challenges, AI has revolutionized the way we interact with the world (Gruetzemacher and Whittlestone, 2022). In particular, generative AI technologies have transformed the processing of unstructured data and have demonstrated the capability to produce human-like responses in real time (Dwivedi et al., 2023b). Through innovations such as DALL-E 2, GPT-4, and Copilot, this technology has not only impacted the artistic realm but has also provided assistance in knowledge work and everyday tasks. Yet, the widespread adoption of AI poses challenges and risks that must be responsibly addressed to ensure its sustainable and beneficial use for society (Feuerriegel et al., 2023).

In the field of education, AI has revolutionized how teaching and learning occur. From the early levels to higher education, AI applications are reshaping traditional teaching methods. This technology provides automated assistance and facilitates virtual interaction, fundamentally altering the dynamics of conventional teaching and holding the potential to craft more versatile curricula adapted to the demands of the 21st century (Ocaña-Fernández et al., 2019). Moreover, the integration of digital technologies in education has significantly improved access and learning efficiency, preparing students for an increasingly technological and ever-changing world (Haleem et al., 2022). Higher education, in particular, experienced a digitalization surge due to the COVID-19 pandemic, leading to a swift transition toward online education. This crisis compelled educational institutions to embrace communication technologies and pedagogical innovations, sparking increased interest in creating a shared digital learning space in higher education (Bygstad et al., 2022). AI has played a pivotal role in this transformation, enabling the customization and adaptation of teaching to individual student needs through learning management systems and intelligent tutoring systems (Bhutoria, 2022).

Despite the advantages AI brings to education, its adoption raises ethical and practical concerns. Among these concerns, an over-reliance on AI by students and educators stands out. The interaction between students and instructors in online learning environments is influenced by AI, eliciting both promises and concerns in higher education (Seo et al., 2021). Reliance on AI could lead to the loss of cognitive skills and a decline in student motivation (Ahmad et al., 2023). Dependence, in its broader sense, implies a compulsive need for something in order to function or feel complete. This dependency can manifest as a powerful drive affecting decision-making, self-perception, and one’s relationship with other aspects of life. In a clinical context, dependence has traditionally been associated with substances, such as alcohol or drugs, and is characterized by tolerance and withdrawal symptoms (Brown et al., 1995; Aharonovich et al., 2002; Gilder et al., 2004; Nunes and Rounsaville, 2006; Schuckit et al., 2007; DMS, 2013). However, in the AI era, the notion of dependence has extended to behavioral addictions, such as pathological gambling and internet addiction disorder, which share similarities in their neurobiological and behavioral processes (Fu et al., 2010; Weinstein and Lejoyeux, 2010; Van Rooij et al., 2011; Sairitupa-Sanchez et al., 2023). In this regard, dependence on AI can be conceptualized as a propensity or need to rely excessively on automated systems for decisions, tasks, or validation. Advanced technologies like AI can simplify processes and enhance efficiency, but they can also instill a sense of insecurity or a fear of being left behind in those who do not keep pace with them.

Artificial Intelligence (AI)-based technologies, such as ChatGPT, have brought significant transformations in areas like writing, research, and programming, providing considerable support in these fields. However, concerns have been raised about potential negative consequences, such as the loss of critical thinking skills and an excessive dependency on these tools (Liu et al., 2023). In this context, it is crucial for users, especially students, to be informed about both the opportunities and the ethical dilemmas associated with these technologies, to ensure their responsible and critical use in academic and professional development (Dwivedi et al., 2023a). Despite the benefits AI can offer, it is not without faults and requires expert human supervision to mitigate unpredictable errors and biases. Furthermore, its development and use must be ethical and beneficial for society, taking into account concerns about the potential social harms associated with these technologies (Tai, 2020). With AI’s expansion beyond laboratories, the need for adequate regulations for its integration into society and governmental processes becomes evident, presenting challenges in terms of scope and impact on innovation (Sheikh et al., 2023).

Moreover, the integration of service chatbots in areas like mental health and financial advising demonstrates AI’s potential benefits in terms of enhancing psychological well-being and reducing loneliness. However, these interactions can also lead to social withdrawal and addiction (Xie et al., 2023), and the increased reliance on virtual assistants and smart devices poses particular risks for people with high social anxiety (Hu et al., 2023). The COVID-19 pandemic has exacerbated this trend, increasing emotional and psychological dependence on these technologies (Pentina et al., 2023). In the therapeutic domain, social chatbots have become popular as companionship and emotional support tools, but there are concerns about their potential to create emotional dependency and other negative effects (Laestadius et al., 2022). Additionally, AI’s influence on Consumer Engagement (CE) is undeniable, with a significant impact on business performance (Hollebeek et al., 2024).

To address the challenges associated with growing technological dependency, various scales have been developed. These range from Young’s Internet Addiction Scale (Young, 1998) to more specific instruments focused on dependencies such as online gaming (Blinka and Smahel, 2011; Sairitupa-Sanchez et al., 2023). However, cultural variability poses an additional challenge to understanding the impact of technology, as perceptions and use of technology differ widely across cultural contexts. Information and Communication Technologies, in particular, deeply shape our way of living and interacting (Tripathi, 2017). Given the complexity of dependency on AI use, the need for a specific tool to better assess and understand this phenomenon is evident. Such a tool would not only facilitate a deeper understanding of AI dependency but also help mitigate associated risks. In this context, unidimensional measures emerge as a valuable option, allowing assessment through few items. This simplicity is crucial for large-scale research, reducing the time and effort required of participants. Moreover, in a context where the mental health of university students is of growing concern, having global measures that are easy to interpret and parsimonious in structure is essential. Therefore, the main objective of this study is to develop and validate a Scale for Dependence on Artificial Intelligence (DAI).

2 Methods

2.1 Design and participants

The present study, of an instrumental nature (Ato et al., 2013), relied on convenience sampling. For sample size estimation, an electronic calculator (Soper, 2023) was used, considering the number of observed and latent variables in the model, the anticipated effect size (λ = 0.10), the desired statistical significance (α = 0.05), and the level of statistical power (1 – β = 0.90). Under these criteria, the minimum sample size required was 199 participants. However, the sample was expanded, and a total of 528 Peruvian medical students were recruited. The ages of the participants ranged from 18 to 37 years (M = 20.31, SD = 3.8). The inclusion criteria stipulated that participants be over 18 years old, residents of Peru, and regular users of Artificial Intelligence platforms. We excluded individuals who did not meet the age and residence criteria, those without regular access to Artificial Intelligence platforms, and those not enrolled between the first and tenth semesters of the Medical School. The gender distribution was 53.0% female and 47.0% male. Regarding semester of study, the largest concentration was in the first semester (26.1%). As for place of origin, the Coast was the most represented region, accounting for 50.9% of the participants (Table 1).

Table 1. Sociodemographic characteristics.

2.2 Instruments

Dependence on AI. The construction of the Dependence on AI (DAI) scale incorporated criteria adapted from the compulsive behaviors and dependencies described in the DSM-5 (American Psychiatric Association, 2013; Hasin et al., 2013). These criteria were broken down into five key components: (1) Feeling of vulnerability, addressing the perceived insecurity when lacking access to technological tools (Young, 1998); (2) Concern about relevance and performance, related to performance anxiety and the need to integrate technologies in the workplace and academic setting (Caplan, 2002); (3) Need to maintain an updated image, reflecting the social motivation behind the use of emerging technologies (Ryan and Xenos, 2011); (4) Seeking external validation, indicating emotional dependence on these tools for decision-making (Kuss and Griffiths, 2011); and (5) Fear of personal obsolescence, considering the fear of human replacement by technological automation (Chui et al., 2016). To capture the diversity of individual experiences of dependence on AI, the DAI items are presented in a Likert format with five response options ranging from “Completely false for me” to “Describes me perfectly.” This structure allows individual nuances of dependence on AI to be reflected accurately and in detail, emphasizing personalization in the study of this phenomenon.
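
For illustration only, the sketch below (in R) shows how responses under this format might be coded and scored. The item names (dai1 to dai5), the simulated data, and the simple sum score are assumptions made for this example and do not correspond to the published items or scoring instructions.

    # Hypothetical example: five DAI-style items answered on a 5-point Likert scale,
    # from 1 = "Completely false for me" to 5 = "Describes me perfectly".
    # The data below are simulated; they are not the study data.
    set.seed(1)
    dai_items <- paste0("dai", 1:5)                         # placeholder item names
    dai_data  <- as.data.frame(matrix(sample(1:5, 528 * 5, replace = TRUE),
                                      ncol = 5, dimnames = list(NULL, dai_items)))
    dai_data$dai_total <- rowSums(dai_data[, dai_items])    # simple sum score (assumption)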

2.3 Procedure

This study was developed under stringent ethical standards, ensuring the fundamental principles of integrity, transparency, and, above all, respect for human dignity, with the aim of guaranteeing the authenticity and accuracy of the results obtained. The research was reviewed by the Ethics Committee of a Peruvian university, which approved our protocol under registration 2023-CEUPeU-033. Following the guidelines of the Declaration of Helsinki, each participant was thoroughly informed about the nature and objectives of the study. The informed consent process not only complied with the relevant legal regulations but also reflected our firm commitment to respecting the autonomy of participants and their inalienable right to decide about their involvement in the research. It is important to note that data collection was conducted in person, emphasizing at every opportunity that participation was entirely voluntary and ensuring the anonymity of those involved.

2.4 Data analysis

Initially, the total sample was split into two groups for the purpose of cross-validation. Sample 1, comprising 226 participants, was designated for the exploratory factor analysis (EFA), while Sample 2, with 302 participants, was used for the confirmatory factor analysis (CFA) (VandenBos and American Psychological Association, 2015). A descriptive characterization of the Dependence on AI (DAI) items was performed, calculating parameters such as the mean, standard deviation, skewness (g1), and kurtosis (g2), with values within ±1.5 considered appropriate (Pérez and Medrano, 2010). Item quality was verified through corrected item-total correlation analysis, and items with r(i-tc) < 0.20 were removed (Kline, 2023).
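
The screening steps described in this paragraph can be sketched with the psych package, as below. The code reuses the simulated dai_data and dai_items objects from the previous sketch and illustrates the workflow rather than reproducing the authors’ scripts.

    library(psych)

    # Cross-validation split as described: 226 cases for the EFA, the remaining 302 for the CFA
    set.seed(2023)
    efa_idx <- sample(seq_len(nrow(dai_data)), size = 226)
    sample1 <- dai_data[efa_idx, dai_items]                 # EFA subsample
    sample2 <- dai_data[-efa_idx, dai_items]                # CFA subsample

    # Descriptives: mean, SD, skewness (g1), and kurtosis (g2), screened against +/- 1.5
    describe(dai_data[, dai_items])

    # Corrected item-total correlations; items with r(i-tc) < 0.20 would be removed
    alpha(dai_data[, dai_items])$item.stats[, c("r.cor", "r.drop")]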

For the EFA, unweighted least squares estimation with oblique (promax) rotation was applied. Parallel analysis was used to determine the optimal number of factors. Prior to these analyses, data adequacy was confirmed using Bartlett’s test of sphericity and the Kaiser-Meyer-Olkin (KMO) coefficient (Kaiser, 2016; Worthington and Whittaker, 2016). After establishing the number of factors in the EFA, the CFA was performed using the MLR estimator (Muthen and Muthen, 2017). To evaluate model fit, several indices were employed, including the CFI and TLI (≥ 0.95) (Schumacker and Lomax, 2016), and the RMSEA and SRMR (≤ 0.05) (Kline, 2023). Scale reliability was assessed through Cronbach’s alpha coefficient and McDonald’s omega coefficient (McDonald, 1999). Additionally, measurement invariance by sex was examined through a multi-group confirmatory factor analysis, considering four levels of invariance. Invariance between groups was established when the difference in CFI (ΔCFI) was less than 0.010 (Chen, 2007).
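
A sketch of these EFA steps with the psych package, using the sample1 subsample from the previous sketch; the thresholds follow the criteria stated in the text, and all object names are illustrative.

    library(psych)

    # Sampling adequacy and sphericity prior to factoring
    KMO(sample1)                      # Kaiser-Meyer-Olkin coefficient
    cortest.bartlett(sample1)         # Bartlett's test of sphericity

    # Parallel analysis (with scree plot) to suggest the number of factors to retain
    fa.parallel(sample1, fa = "fa")

    # EFA with unweighted least squares extraction and promax rotation, as described above
    efa_fit <- fa(sample1, nfactors = 1, fm = "uls", rotate = "promax")
    print(efa_fit$loadings, cutoff = 0.50)    # loadings below 0.50 would flag an item
    efa_fit$communality                       # communalities (h2), screened against 0.30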

All analyses were performed in the RStudio environment (RStudio Team, 2018) with R version 4.1.1 (R Foundation for Statistical Computing, Vienna, Austria). The “lavaan” (Rosseel, 2012) and “semTools” (Jorgensen et al., 2021) packages were crucial for executing the CFA, structural equation modeling, and measurement invariance analysis.
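
With the lavaan and semTools packages cited above, the CFA, fit evaluation, and reliability steps could be sketched as follows; the one-factor syntax, object names, and simulated data are illustrative assumptions rather than the authors’ code.

    library(lavaan)
    library(semTools)

    # One-factor model for the five (hypothetical) DAI items
    dai_model <- 'DAI =~ dai1 + dai2 + dai3 + dai4 + dai5'

    # CFA on the hold-out subsample using the robust maximum likelihood (MLR) estimator
    cfa_fit <- cfa(dai_model, data = sample2, estimator = "MLR")
    summary(cfa_fit, fit.measures = TRUE, standardized = TRUE)

    # Fit indices evaluated against the cutoffs cited in the text
    fitMeasures(cfa_fit, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "srmr"))

    # Internal consistency: Cronbach's alpha and McDonald's omega
    reliability(cfa_fit)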

3 Results

3.1 Descriptive statistics of items

Table 2 displays the descriptive results. Item 5 recorded the highest mean (2.80), while item 1 had the lowest (2.44). The skewness values (g1) of all items fall within the accepted range (±1.5), ranging from −0.01 to 0.39. Similarly, the kurtosis values (g2) of all items fall within this range, between −0.98 and −0.77. Finally, regarding the corrected item-total correlations (r.cor), all exceed the acceptable threshold of 0.30, indicating that each item contributes meaningfully to the scale’s overall construct.

Table 2. Descriptive statistics.

3.2 Evidence of validity related to internal structure

To determine the underlying structure of the scale’s items, an Exploratory Factor Analysis (EFA) was conducted. To ensure data adequacy for the EFA, we used the Kaiser-Meyer-Olkin (KMO) coefficient and Bartlett’s test of sphericity. The results revealed a KMO coefficient of 0.88, indicating adequate data for factor analysis. Likewise, the significance of Bartlett’s test of sphericity (p < 0.001) reinforced the relevance of the EFA. To determine the optimal number of factors to extract, parallel analysis and a scree plot were used, both suggesting the extraction of a single factor (Figure 1). We used the maximum likelihood extraction method combined with varimax rotation. This process allowed us to assess the factor loadings of each item and, if necessary, remove those that did not meet the established criteria. Specifically, we considered removing items with factor loadings below 0.50 on the proposed factor or with communalities below 0.30 (h2 < 0.30) (Costello and Osborne, 2005; Lloret-Segura et al., 2014). However, by the end of the process, all items met the criteria and none were removed (Table 3).

Figure 1. Parallel analysis.

Table 3. EFA, CFA, and reliability.

3.3 Confirmatory factor analysis and reliability

Based on the preliminary evidence provided by the Exploratory Factor Analysis (EFA), a Confirmatory Factor Analysis (CFA) was conducted to assess the previously identified factorial structure (Table 3). The goodness-of-fit indices for the model proved satisfactory: χ2(5) = 8.450, p = 0.133; CFI = 0.99; TLI = 0.98; RMSEA = 0.05 (90% CI 0.00–0.09); and SRMR = 0.02. Moreover, the item factor loadings were robust, all surpassing the 0.50 threshold, highlighting the contribution of each item to the construct under study. Lastly, the reliability indicators, both the alpha (α) and omega (ω) coefficients, showed adequate internal consistency (α and ω = 0.87).

3.4 Invariance

To examine measurement invariance based on participants’ gender, a hierarchical sequence of invariance models was applied, ranging from configural to strict. Using the Comparative Fit Index (CFI) as the main metric, the scale was found to be invariant between men and women. Specifically, the differences in CFI (ΔCFI) at each invariance level remained within the established criterion (Chen, 2007), confirming that the factor loadings, intercepts, and residual variances are consistent across groups (Table 4).
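
As an illustration, the configural-to-strict sequence can be fitted as multi-group models in lavaan and the change in CFI compared against the 0.010 criterion; the grouping variable, model syntax, and simulated data below are assumptions carried over from the earlier sketches.

    library(lavaan)

    dai_model <- 'DAI =~ dai1 + dai2 + dai3 + dai4 + dai5'     # same one-factor model
    dai_data$sex <- sample(c("female", "male"),                # simulated grouping variable
                           nrow(dai_data), replace = TRUE)

    configural <- cfa(dai_model, data = dai_data, estimator = "MLR", group = "sex")
    metric     <- cfa(dai_model, data = dai_data, estimator = "MLR", group = "sex",
                      group.equal = "loadings")
    scalar     <- cfa(dai_model, data = dai_data, estimator = "MLR", group = "sex",
                      group.equal = c("loadings", "intercepts"))
    strict     <- cfa(dai_model, data = dai_data, estimator = "MLR", group = "sex",
                      group.equal = c("loadings", "intercepts", "residuals"))

    # Change in CFI between successive models; differences below 0.010 support invariance
    cfi_values <- sapply(list(configural, metric, scalar, strict),
                         fitMeasures, fit.measures = "cfi")
    diff(cfi_values)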

Table 4. Factorial invariance by sex.

4 Discussion

AI has influenced numerous areas of daily life, from automation to solving complex issues. Its impact on education has been significant, transforming teaching and learning approaches, especially in higher education where AI technologies enable personalized learning. However, the increasing reliance on AI presents ethical and practical challenges, including potential loss of cognitive skills and decreased student motivation. This study focused on developing and validating a scale for Dependence on Artificial Intelligence (DAI) among university students.

The results of the Exploratory Factor Analysis (EFA) indicated a clear unifactorial structure for the DAI scale, with all factor loadings exceeding 0.50 and communalities (h2) greater than 0.30. This suggests that all items contribute adequately to the proposed factor and are conceptually aligned, indicating that the single factor effectively represents the measured construct (Xie and DeVellis, 1992).

Subsequently, a Confirmatory Factor Analysis (CFA) was conducted, which demonstrated that all items have significant factor loadings, indicative of a relevant contribution to the representation of the construct. Thus, the unifactorial structure suggested by the previous EFA is confirmed. Moreover, the items surpassed the generally accepted threshold of 0.50 (Hair et al., 2010), suggesting that all items make a significant contribution to the construct. This is relevant, as factor loadings reflect the correlation between each item and the latent factor, and high loadings are indicative of a good representation of the construct (Worthington and Whittaker, 2016). Additionally, the reliability of the scale, as indicated by the Cronbach’s alpha (α) and omega (ω) coefficients, showed adequate internal consistency, suggesting that the scale is reliable for measuring the construct.

The study revealed that the DAI scale is invariant between men and women, indicating that the measurements are consistent and comparable across these groups. The level of strict invariance achieved in this study, in which residual variances are consistent between groups, is particularly noteworthy. This suggests that not only are the factor loadings and intercepts equivalent between men and women, but so are the residual variances of the items, reflecting greater stability in measurement across genders (Gregorich, 2006).

5 Implications

The increasing dependency on Artificial Intelligence (AI) in various social and professional spheres calls for a profound reconsideration of our interactions with technology. AI, transcending its role as a mere technological tool, has begun to significantly influence self-perception and human relationships. This dynamic highlights the importance of developing inclusive and adaptive policies that view AI as a complement to, rather than a substitute for, human abilities. In this context, organizations and professionals must address not only the development of technical skills but also strengthen the emotional and psychological competencies needed to navigate an AI-dominated environment. From a theoretical perspective, these findings challenge traditional theories of human-technology interaction, suggesting the need for an integrative theoretical framework that merges psychology, sociology, and computer science to address the growing affective centrality of AI in our lives.

Furthermore, deeper research is needed in fields such as educational psychology to explore how AI dependency affects cognitive and emotional development, particularly in students. It is vital to integrate teaching on the critical and ethical use of AI into university curricula, promoting critical thinking skills and independent analysis. Educators should be trained to balance the use of advanced technologies with teaching methods that foster independent analytical skills. At the organizational level, policies should be formulated to regulate the ethical use of AI, focusing on preventing dependency and ensuring data security. Additionally, educational institutions should implement awareness programs, provide self-assessment tools, and offer counseling services for students with AI dependency.

Future studies should include longitudinal research to understand how AI dependency in university students evolves over time. Furthermore, it is essential to expand research to different populations and cultural contexts to understand variations in AI dependency. Investigating the long-term impact of this dependency on mental health and well-being and designing effective interventions to reduce AI dependency are also crucial directions for future research. These joint efforts will help ensure a future in which AI and humanity coexist in harmony and to mutual benefit.

6 Limitations

While this study provides valuable insights into people’s relationship with artificial intelligence, it is essential to recognize and address its limitations. First, the cross-sectional nature of the study means the data were captured at a single point in time; longitudinal studies could offer clearer insights into how this relationship evolves and what factors influence these changes. The sample used, although extensive, may not be representative of the entire population. Cultural, socioeconomic, or educational differences that shape people’s relationship with AI might exist, and these variables were neither controlled for nor explored in depth in this study. Lastly, while we examined measurement invariance based on gender, other demographic and psychosocial factors, such as age, education level, and prior familiarity with technology, could influence attitudes and behaviors toward AI. Future research should consider these factors to gain a more nuanced understanding of people’s relationship with artificial intelligence.

7 Conclusion

The findings indicate that the DAI scale is a valid and reliable tool for assessing dependency on Artificial Intelligence (AI) in university students. AI dependency can have significant consequences for students’ cognitive, emotional, and social development. The application of the scale, supported by its gender invariance, allows for comparative analyses across demographic groups, enriching its utility. Additionally, the importance of adopting an interdisciplinary approach for a more comprehensive understanding of human-AI interaction is emphasized. Moreover, it is crucial to develop policies and educational strategies that promote a balanced and critical use of AI. Future research should focus on longitudinal studies that track the evolution of AI dependency in students over time. It is equally essential to extend this research to different populations and cultural contexts in order to understand variations in AI dependency and its long-term effects on mental health and well-being.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving humans were approved by the Ethics Committee of the Universidad Peruana Unión under registration (2023-CEUPeU-033). The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

WM-G: Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. LS-S: Conceptualization, Data curation, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing. SM-G: Conceptualization, Formal analysis, Investigation, Validation, Writing – original draft, Writing – review & editing. MM-G: Data curation, Formal analysis, Investigation, Resources, Software, Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Aharonovich, E., Liu, X., Nunes, E., and Hasin, D. S. (2002). Suicide attempts in substance abusers: effects of major depression in relation to substance use disorders. Am. J. Psychiatry 159, 1600–1602. doi: 10.1176/appi.ajp.159.9.1600
Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M. K., Irshad, M., Arraño-Muñoz, M., et al. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanit. Soc. Sci. Commun. 10, 1–14. doi: 10.1057/s41599-023-01787-8
American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders (DSM-5®). 5th Edn. Washington, DC: American Psychiatric Publishing.
Ato, M., López, J. J., and Benavente, A. (2013). Un sistema de clasificación de los diseños de investigación en psicología. Anales de Psicología 29, 1038–1059. doi: 10.6018/analesps.29.3.178511
Bhutoria, A. (2022). Personalized education and artificial intelligence in the United States, China, and India: a systematic review using a human-in-the-loop model. Comput. Educ. Artif. Intell. 3:100068. doi: 10.1016/j.caeai.2022.100068
Blinka, L., and Smahel, D. (2011). “Addiction to online role-playing games” in Internet addiction: a handbook and guide to evaluation and treatment. eds. K. Young and C. Abreu (Hoboken, NJ: John Wiley & Sons), 73–90.
Brown, S. A., Inaba, R. K., Gillin, J. C., Schuckit, M. A., Stewart, M. A., and Irwin, M. R. (1995). Alcoholism and affective disorder: clinical course of depressive symptoms. Am. J. Psychiatry 152, 45–52. doi: 10.1176/ajp.152.1.45
Bygstad, B., Øvrelid, E., Ludvigsen, S., and Dæhlen, M. (2022). From dual digitalization to digital learning space: exploring the digital transformation of higher education. Comput. Educ. 182:104463. doi: 10.1016/j.compedu.2022.104463
Caplan, S. E. (2002). Problematic internet use and psychosocial well-being: development of a theory-based cognitive-behavioral measurement instrument. Comput. Hum. Behav. 18, 553–575. doi: 10.1016/S0747-5632(02)00004-3
Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Struct. Equ. Model. Multidiscip. J. 14, 464–504. doi: 10.1080/10705510701301834
Chui, M., Manyika, J., and Miremadi, M. (2016). Where machines could replace humans-and where they can’t (yet). McKinsey Q. 2016, 1–12.
Costello, A. B., and Osborne, J. W. (2005). Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Pract. Assess. Res. Eval. 10:7. doi: 10.7275/jyj1-4868
DMS (2013). Diagnostic and statistical manual of mental disorders (DSM-5®).
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., et al. (2023a). Opinion paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manag. 71:102642. doi: 10.1016/j.ijinfomgt.2023.102642
Dwivedi, Y. K., Sharma, A., Rana, N. P., Giannakis, M., Goel, P., and Dutot, V. (2023b). Evolution of artificial intelligence research in technological forecasting and social change: research topics, trends, and future directions. Technol. Forecast. Soc. Chang. 192:122579. doi: 10.1016/j.techfore.2023.122579
Feuerriegel, S., Hartmann, J., Janiesch, C., and Zschech, P. (2023). Generative AI. Bus. Inf. Syst. Eng. 66, 111–126. doi: 10.1007/s12599-023-00834-7
Fu, K. W., Chan, W. S. C., Wong, P. W. C., and Yip, P. S. F. (2010). Internet addiction: prevalence, discriminant validity and correlates among adolescents in Hong Kong. Br. J. Psychiatry 196, 486–492. doi: 10.1192/bjp.bp.109.075002
Gilder, D. A., Wall, T. L., and Ehlers, C. L. (2004). Comorbidity of select anxiety and affective disorders with alcohol dependence in Southwest California Indians. Alcohol. Clin. Exp. Res. 28, 1805–1813. doi: 10.1097/01.ALC.0000148116.27875.B0
Gregorich, S. E. (2006). Do self-report instruments allow meaningful comparisons across diverse population groups? Med. Care 44, S78–S94. doi: 10.1097/01.mlr.0000245454.12228.8f
Gruetzemacher, R., and Whittlestone, J. (2022). The transformative potential of artificial intelligence. Futures 135:102884. doi: 10.1016/j.futures.2021.102884
Hair, J., Black, W., Babin, B., and Anderson, R. (2010). Multivariate data analysis. 7th Edn. Hoboken, NJ: Pearson Prentice Hall.
Haleem, A., Javaid, M., Qadri, M. A., and Suman, R. (2022). Understanding the role of digital technologies in education: a review. Sustain. Oper. Comput. 3, 275–285. doi: 10.1016/j.susoc.2022.05.004
Hasin, D. S., O’Brien, C. P., Auriacombe, M., Borges, G., Bucholz, K., Budney, A., et al. (2013). DSM-5 criteria for substance use disorders: recommendations and rationale. Am. J. Psychiatry 170, 834–851. doi: 10.1176/appi.ajp.2013.12060782
Hollebeek, L. D., Menidjel, C., Sarstedt, M., Jansson, J., and Urbonavicius, S. (2024). Engaging consumers through artificially intelligent technologies: systematic review, conceptual model, and further research. Psychol. Mark. 1–19. doi: 10.1002/mar.21957
Hu, B., Mao, Y., and Kim, K. J. (2023). How social anxiety leads to problematic use of conversational AI: the roles of loneliness, rumination, and mind perception. Comput. Hum. Behav. 145:107760. doi: 10.1016/j.chb.2023.107760
Jorgensen, T. D., Pornprasertmanit, S., Schoemann, A. M., and Rosseel, Y. (2021). semTools: useful tools for structural equation modeling. The Comprehensive R Archive Network.
Kaiser, H. F. (2016). The application of electronic computers to factor analysis. Educ. Psychol. Meas. 20, 141–151. doi: 10.1177/001316446002000116
Kline, R. B. (2023). Principles and practice of structural equation modeling. New York, NY: Guilford Publications.
Kuss, D. J., and Griffiths, M. D. (2011). Online social networking and addiction: a review of the psychological literature. Int. J. Environ. Res. Public Health 8, 3528–3552. doi: 10.3390/ijerph8093528
Laestadius, L., Bishop, A., Gonzalez, M., Illenčík, D., and Campos-Castillo, C. (2022). Too human and not human enough: a grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika. New Media Soc. doi: 10.1177/14614448221142007
Liu, M., Ren, Y., Nyagoga, L. M., Stonier, F., Wu, Z., and Yu, L. (2023). Future of education in the era of generative artificial intelligence: consensus among Chinese scholars on applications of ChatGPT in schools. Future Educ. Res. 1, 72–101. doi: 10.1002/fer3.10
Lloret-Segura, S., Ferreres-Traver, A., Hernández-Baeza, A., and Tomás-Marco, I. (2014). Exploratory item factor analysis: a practical guide revised and updated. Anales de Psicología 30, 1151–1169. doi: 10.6018/analesps.30.3.199361
McDonald, R. P. (1999). Test theory: a unified treatment. Mahwah, NJ: Lawrence Erlbaum Associates.
Muthén, L., and Muthén, B. (2017). Mplus user’s guide. 8th Edn. Los Angeles, CA: Muthén & Muthén.
Nunes, E. V., and Rounsaville, B. J. (2006). Comorbidity of substance use with depression and other mental disorders: from diagnostic and statistical manual of mental disorders, fourth edition (DSM-IV) to DSM-V. Addiction 101, 89–96. doi: 10.1111/j.1360-0443.2006.01585.x
Ocaña-Fernández, Y., Valenzuela-Fernández, L. A., and Garro-Aburto, L. L. (2019). Artificial intelligence and its implications in higher education. Purp. Represent. 7, 536–568. doi: 10.20511/pyr2019.v7n2.274
Pentina, I., Hancock, T., and Xie, T. (2023). Exploring relationship development with social chatbots: a mixed-method study of Replika. Comput. Hum. Behav. 140:107600. doi: 10.1016/j.chb.2022.107600
Pérez, E. R., and Medrano, L. (2010). Análisis factorial exploratorio: bases conceptuales y metodológicas. Revista Argentina de Ciencias del Comportamiento 2, 58–66.
Rosseel, Y. (2012). lavaan: an R package for structural equation modeling. J. Stat. Softw. 48, 1–36. doi: 10.18637/jss.v048.i02
RStudio Team (2018). RStudio: integrated development environment for R. Boston, MA: RStudio, Inc. Available at: http://www.rstudio.com/
Ryan, T., and Xenos, S. (2011). Who uses Facebook? An investigation into the relationship between the big five, shyness, narcissism, loneliness, and Facebook usage. Comput. Hum. Behav. 27, 1658–1664. doi: 10.1016/j.chb.2011.02.004
Sairitupa-Sanchez, L. Z., Collantes-Vargas, A., Rivera-Lozada, O., and Morales-García, W. C. (2023). Development and validation of a scale for streaming dependence (SDS) of online games in a Peruvian population. Front. Psychol. 14:1184647. doi: 10.3389/fpsyg.2023.1184647
Schuckit, M. A., Smith, T. L., Danko, G. P., Pierson, J., Trim, R., Nurnberger, J. I., et al. (2007). A comparison of factors associated with substance-induced versus independent depressions. J. Stud. Alcohol Drugs 68, 805–812. doi: 10.15288/jsad.2007.68.805
Schumacker, R. E., and Lomax, R. G. (2016). A beginner’s guide to structural equation modeling. 4th Edn. New York, NY: Taylor & Francis.
Seo, K., Tang, J., Roll, I., Fels, S., and Yoon, D. (2021). The impact of artificial intelligence on learner–instructor interaction in online learning. Int. J. Educ. Technol. High. Educ. 18, 1–23. doi: 10.1186/s41239-021-00292-9
Sheikh, H., Prins, C., and Schrijvers, E. (2023). Regulation. Mission AI, 241–286. doi: 10.1007/978-3-031-21448-6_8
Soper, D. (2023). A-priori sample size calculator for structural equation models [Software].
Tai, M. C. T. (2020). The impact of artificial intelligence on human society and bioethics. Tzu Chi Med. J. 32, 339–343. doi: 10.4103/tcmj.tcmj_71_20
Tripathi, A. K. (2017). Hermeneutics of technological culture. AI Soc. 32. doi: 10.1007/s00146-017-0717-4
Van Rooij, A. J., Schoenmakers, T. M., Vermulst, A. A., Van Den Eijnden, R. J. J. M., and Van De Mheen, D. (2011). Online video game addiction: identification of addicted adolescent gamers. Addiction 106, 205–212. doi: 10.1111/j.1360-0443.2010.03104.x
VandenBos, G. R., and American Psychological Association (2015). APA dictionary of psychology. 2nd Edn. Washington, DC: American Psychological Association.
Weinstein, A., and Lejoyeux, M. (2010). Internet addiction or excessive internet use. Am. J. Drug Alcohol Abuse 36, 277–283. doi: 10.3109/00952990.2010.491880
Worthington, R. L., and Whittaker, T. A. (2016). Scale development research: a content analysis and recommendations for best practices. Couns. Psychol. 34, 806–838. doi: 10.1177/0011000006288127
Xie, Y., and DeVellis, R. F. (1992). Scale development: theory and applications. Contemp. Sociol. 21:876. doi: 10.2307/2075704
Xie, T., Pentina, I., and Hancock, T. (2023). Friend, mentor, lover: does chatbot engagement lead to psychological dependence? J. Serv. Manag. 34, 806–828. doi: 10.1108/JOSM-02-2022-0072
Young, K. S. (1998). Internet addiction: the emergence of a new clinical disorder. CyberPsychol. Behav. 1, 237–244. doi: 10.1089/cpb.1998.1.237

Keywords: artificial intelligence, dependence, student, technology, interaction, university

Citation: Morales-García WC, Sairitupa-Sanchez LZ, Morales-García SB and Morales-García M (2024) Development and validation of a scale for dependence on artificial intelligence in university students. Front. Educ. 9:1323898. doi: 10.3389/feduc.2024.1323898

Received: 19 October 2023; Accepted: 14 February 2024;
Published: 12 March 2024.

Edited by:

Manuel Gentile, Institute for Educational Technology - National Research Council of Italy, Italy

Reviewed by:

Nieves Gutiérrez Ángel, University of Almeria, Spain
Ciprian Marius Ceobanu, Alexandru Ioan Cuza University, Romania

Copyright © 2024 Morales-García, Sairitupa-Sanchez, Morales-García and Morales-García. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Wilter C. Morales-García, wiltermorales@upeu.edu.pe
