- 1 Technical University of Manabí, Portoviejo, Ecuador
- 2 Graduate School of the State University of Milagro, Milagro, Ecuador
- 3 Catholic University of Ecuador, Santo Domingo de los Colorados, Ecuador
Introduction: Digital competencies are increasingly recognized as a fundamental pillar in the professional development of educators, particularly in Higher Education, where the integration of educational technologies is crucial for enhancing teaching and learning processes.
Methods: This study assessed the digital competencies of faculty at the Technical University of Manabí using a descriptive, non-experimental approach with a sample of 279 professors. Data collection was conducted through a quantitative multimodal design utilizing the Higher Education Digital Competencies Assessment Questionnaire (CDES). The data were analyzed using a structural equation model in AMOS software.
Results: The findings revealed a significant correlation between faculty members' perceptions and the evaluated dimensions. However, the analysis identified discrepancies in the goodness-of-fit indices, suggesting the need for adjustments in the model.
Discussion: The study underscores the importance of ongoing evaluation and optimization of the structural model to refine the integration of digital competencies. It demonstrates the potential of these competencies to enrich teaching practices and concludes that continuous validation and adjustment of the model are essential to align faculty perceptions with their actual digital competencies.
1 Introduction
Over the last decade, the integration of Digital Competencies in education has gained undeniable relevance, driven by technological advancement and the digitization of information (Nanto et al., 2021; Sá et al., 2021). Previous studies have highlighted the importance of teachers not only acquiring digital skills but also applying these competencies in their pedagogical practice to enhance the learning process (Røkenes and Krumsvik, 2014; Falloon, 2020). However, a comprehensive assessment of these competencies continues to present methodological and conceptual challenges (Van Der Vleuten, 1996; Patrick and Care, 2015).
The rapid evolution of Information and Communication Technologies (ICT) demands a constant update in teachers' digital competencies (Moreira-Choez et al., 2024). Despite growing research in this field, there is a knowledge gap regarding the precise assessment of these competencies through advanced statistical models (DeLuca and Klinger, 2010; Moreira-Choez et al., 2023). Particularly, there is a lack of studies that apply structural equation modeling to analyze digital competencies in the context of Higher Education in Latin America (Torrent-Sellens et al., 2021).
Current literature reveals a lack of uniformity in the instruments used for assessing digital competencies and a scarcity of analytical models that integrate both the theoretical and empirical dimensions of the construct (Wong et al., 2023). Additionally, research rarely addresses the self-perception of educators concerning their digital competence, a crucial aspect for professional development and the adoption of ICT in teaching (Noskova et al., 2021).
This study is pivotal in filling the identified gaps and providing a comprehensive assessment of the digital competencies of educators. By applying a structural equation model, the research offers a holistic view that considers multiple dimensions of digital competencies and their interrelationships (Durak and Saritepeci, 2018; Scherer et al., 2019). The findings could have significant implications for the design of educational policies and professional development programs in the region.
The central research question posed by the study is: What results are obtained from an assessment process of digital competencies of faculty at the Technical University of Manabí, using a multimodal approach and Artificial Intelligence? To answer this question, the study aimed to evaluate the digital competencies of the faculty at the Technical University of Manabí, with the purpose of determining their competence level and the implications for their educational practice.
2 Theoretical framework
The study of digital competencies in educators, mediated by artificial intelligence, is based on the understanding that digital literacy is multidimensional and extends beyond the mere instrumental use of technological tools. This theoretical framework addresses five crucial factors that delineate digital competencies in the educational context.
2.1 Technological literacy
Technological literacy represents the foundation upon which all other digital competencies are built. It involves not only the ability to operate devices and software but also an understanding of their workings and educational potential (Hasse, 2017). In the context of artificial intelligence, technological literacy also includes understanding how AI systems can support the educational process by enhancing the personalization and adaptability of learning (Bhutoria, 2022).
Understanding the inner workings of technological tools allows educators not only to use them efficiently but also to integrate them effectively into their pedagogical practices. Technological literacy, therefore, is not limited to the instrumental use of technology; it extends to the ability to adapt and personalize the use of these tools to meet the specific educational needs of students.
Moreover, technological literacy in the context of AI involves knowing the applications and limitations of these systems, enabling educators to make informed decisions about their implementation in the classroom. Thus, the ability to evaluate and select the most appropriate technologies for different educational contexts becomes an integral part of this competence.
Finally, technological literacy fosters a critical and reflective attitude toward technology, encouraging educators to continually question and evaluate the tools they use. This approach not only enhances the effectiveness of the educational process but also contributes to the development of a more dynamic and inclusive learning environment.
2.2 Access and use of information
This factor refers to the ability of educators to locate, evaluate, and effectively utilize information (Mumtaz, 2000; Claro et al., 2018). Literacy in accessing and using information involves not only searching for relevant data but also discerning its validity and applicability in specific educational contexts. Thus, the integration of pertinent content into teaching practice is facilitated, improving the quality of the teaching-learning process.
In the context of artificial intelligence, these skills are significantly enhanced. Advanced AI systems for data search and analysis enable educators to filter out irrelevant information and focus on reliable and useful sources (Yu and Lu, 2021). This ability to filter and select relevant information is crucial for maintaining relevance and accuracy in teaching.
Additionally, artificial intelligence offers tools that not only simplify access to large volumes of information but also facilitate its organization and presentation in a coherent and structured manner. This optimization of educational material preparation enriches pedagogical content with up-to-date and relevant data.
Finally, the effective use of information supported by AI promotes a more dynamic and adaptive approach to teaching. Educators can quickly adjust their pedagogical strategies based on the most recent information, contributing to a more flexible and responsive learning environment that meets the changing needs of students.
2.3 Communication and collaboration
Communication and collaboration focus on the ability to interact efficiently and work together in digital environments, utilizing a variety of communicative and collaborative tools (Haderer and Ciolacu, 2022; Zhu and Sun, 2023). These skills are essential for developing an integrated and collaborative educational practice, where educators can share knowledge and experiences.
In this context, artificial intelligence plays a crucial role. AI systems facilitate these interactions by providing platforms that enable richer and more diverse collaboration. These platforms not only enhance communication among educators but also allow them to establish learning networks and participate in global professional communities (Papadopoulos et al., 2021). Thus, a continuous exchange of ideas and resources is fostered, enriching educational practice and promoting professional development.
Moreover, AI offers advanced tools that support the coordination and management of collaborative projects. These tools help educators organize and supervise group activities, ensuring that each member contributes effectively. The ability to integrate and utilize these technologies facilitates more effective and efficient collaboration.
Furthermore, artificial intelligence enhances real-time communication, enabling educators and students to interact without geographical barriers. This feature is particularly valuable in the context of distance education, where direct and constant interaction is crucial for academic success.
Finally, the integration of AI tools in communication and collaboration not only improves the efficiency of these interactions but also fosters a more inclusive and accessible learning environment. By reducing technological barriers and improving accessibility, it ensures that all participants can contribute to and benefit from the educational process.
2.4 Digital citizenship
The ethical and responsible use of technology is essential for digital citizenship, encompassing knowledge of digital rights and duties, online safety, privacy, and digital health. These aspects are crucial for educators to guide their students in the appropriate use of technology (Buchholz et al., 2020; Searson et al., 2015). Artificial intelligence plays a crucial role in this area by providing advanced tools to monitor and promote safe online behaviors. These tools enable the identification and prevention of risky activities, ensuring a secure digital environment and facilitating the implementation of effective privacy policies that protect the personal information of students and educators.
Furthermore, AI supports education on the ethical implications of technology use and digital health. Through specific educational programs, AI systems help students understand the importance of privacy and online safety while also promoting healthy technology use habits (Akgun and Greenhow, 2022). The integration of these tools not only enhances online safety and privacy but also fosters a more inclusive and aware learning environment, strengthening the educational community's capacity to face the challenges of the digital age.
2.5 Creativity and innovation
Creativity and innovation refer to the ability to generate new and valuable ideas and to apply technology to solve complex problems (Heinen et al., 2015). AI can support this factor by providing environments that stimulate creativity and knowledge generation, and by offering tools that allow educators to explore new ways of teaching and learning (George and Wooden, 2023).
The intersection of artificial intelligence with digital competencies opens up a rich and complex field of study, promising to transform education by providing new avenues for the professional development of educators and enhancing student learning. Research in this field is at the forefront of pedagogy and educational technology, exploring how AI tools can be used to assess and enhance digital competencies in educators, and how these, in turn, can integrate such tools into their educational practice to enrich the learning experience of students.
In this context, it is crucial to understand how the emerging reality of AI relates to the skills measured by the tool used in this study. AI not only facilitates creativity and innovation by automating routine tasks and providing advanced data analysis but also acts as a catalyst for the development of advanced digital skills. Educators can use AI platforms to design personalized learning experiences that foster innovation and complex problem-solving among students.
Moreover, AI offers analytical tools that allow educators to more precisely evaluate students' digital competencies, identifying areas for improvement and adapting teaching strategies accordingly. The ability of AI to analyze large volumes of data and provide real-time feedback is particularly valuable in the identification and development of creativity and innovation skills.
3 Materials and methods
In this research, a quantitative multimodal design of a descriptive and non-experimental type was adopted to optimize data collection and analysis. The study population comprised all faculty members of the Technical University of Manabí, totaling 1,012 professors. To determine a representative sample size, the formula for calculating the finite population sample size was employed. The parameters used included a 95% confidence level (Z = 1.96), an expected proportion (p) of 0.50, its complement (q) of 0.50, and a maximum acceptable margin of error (e) of 0.05. Applying these values to the formula resulted in a sample size of 279 professors, ensuring that the study results are representative of the university's total population. This precise calculation supports the study's objective of providing reliable and generalizable findings on the integration of artificial intelligence tools in evaluating and enhancing digital competencies among educators.
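As a reference for the computation, the following minimal Python sketch reproduces the finite-population sample-size formula with the parameters reported above (N = 1,012, Z = 1.96, p = q = 0.50, e = 0.05); the function name and the rounding to the next whole respondent are illustrative choices, not taken from the study.

```python
import math

def finite_population_sample_size(N: int, z: float = 1.96,
                                  p: float = 0.5, e: float = 0.05) -> int:
    """Finite-population sample-size formula:
    n = (N * z^2 * p * q) / (e^2 * (N - 1) + z^2 * p * q)
    """
    q = 1 - p
    n = (N * z**2 * p * q) / (e**2 * (N - 1) + z**2 * p * q)
    return math.ceil(n)  # round up to the next whole respondent

print(finite_population_sample_size(1012))  # -> 279
```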
The Higher Education Digital Competencies Assessment Questionnaire (CDES), created by Mengual in 2011 (Mengual-Andrés et al., 2016), was used for data collection. This instrument, consisting of 48 items divided into five dimensions (technological literacy; access and use of information; communication and collaboration; digital citizenship; and creativity and innovation), was used to assess the digital competencies of the faculty. Additionally, the degree of acceptance and the application of Information and Communication Technologies (ICT) in the educational setting were examined.
Faculty members were asked to perform a self-assessment of their digital competencies, using a rating system that ranged from 1 (Not Important) to 5 (Very Important). The reliability analysis of the questionnaire was conducted using SPSS software version 21 for Windows, yielding a Cronbach's alpha coefficient of 0.977. This result evidences the high reliability of the instrument for its application in studies of this nature (Moreira-Choez et al., 2024).
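For readers who wish to reproduce the reliability check outside SPSS, a minimal sketch of Cronbach's alpha computed from an item-response matrix is shown below; the pandas-based workflow, variable names, and simulated data are assumptions for illustration only and do not correspond to the study's actual dataset.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents x items matrix of Likert scores (1-5)."""
    k = items.shape[1]                          # number of items (48 for the CDES)
    item_variances = items.var(axis=0, ddof=1)  # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustrative call with simulated responses (279 professors x 48 items);
# random data will not reproduce the reported alpha of 0.977.
rng = np.random.default_rng(0)
simulated = pd.DataFrame(rng.integers(1, 6, size=(279, 48)),
                         columns=[f"P{i}" for i in range(1, 49)])
print(round(cronbach_alpha(simulated), 3))
```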
For the analysis of the collected data, a structural equation model was applied using AMOS software. The observed variables were associated with specific digital competencies, as illustrated in the attached diagram (see Figure 1). The indicators P1–P48 represent the responses to the questionnaire items, while the latent variables ACINF, ALTE, COMCO, CIDI, and CREIN represent the five dimensions of the CDES questionnaire. Factor loadings were calculated to evaluate the contribution of each item to the corresponding dimension. Standard errors associated with each indicator, identified as e1–e48, allowed for assessing the variability and precision of the measures. The fit indices of the model were calculated and reported to provide an assessment of the goodness of fit of the proposed structural model (Figure 1).
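The model itself was estimated in AMOS; as an open-source analogue, the sketch below shows how a five-factor measurement model of this kind could be specified in Python with the semopy package, which accepts lavaan-style syntax. The item-to-factor assignment and the input file name are placeholders, since the actual mapping of P1–P48 to dimensions is given in Figure 1 rather than reproduced here.

```python
import pandas as pd
from semopy import Model, calc_stats

# Illustrative five-factor measurement model; the real item-to-dimension
# mapping follows Figure 1 of the study, not the even split assumed here.
model_desc = """
ALTE  =~ P1 + P2 + P3 + P4 + P5 + P6 + P7 + P8 + P9 + P10
ACINF =~ P11 + P12 + P13 + P14 + P15 + P16 + P17 + P18 + P19 + P20
COMCO =~ P21 + P22 + P23 + P24 + P25 + P26 + P27 + P28 + P29 + P30
CIDI  =~ P31 + P32 + P33 + P34 + P35 + P36 + P37 + P38 + P39 + P40
CREIN =~ P41 + P42 + P43 + P44 + P45 + P46 + P47 + P48
"""

data = pd.read_csv("cdes_responses.csv")  # hypothetical file of item scores
model = Model(model_desc)
model.fit(data)
print(calc_stats(model).T)  # chi-square, df, CFI, TLI, RMSEA, AIC, ...
```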
4 Results and discussion
The analysis of the data collected yielded significant findings for understanding the digital competencies of faculty members at the Technical University of Manabí. These findings were interpreted in light of the proposed structural equation model, designed to examine the relationship between the dimensions assessed in the Higher Education Digital Competencies Assessment Questionnaire (CDES) and the responses obtained from the study sample (Table 1).
The proposed model allowed the estimation of 154 distinct parameters from 1,224 distinct sample moments, resulting in 1,070 degrees of freedom, an ample number for conducting goodness-of-fit tests. According to Preacher et al. (2013), a model with a high number of degrees of freedom relative to the number of parameters to estimate may indicate a well-specified structure and, potentially, a good ability to replicate the observed covariance matrix. The substantial number of degrees of freedom suggests that the structural model has the necessary flexibility to adjust to the diversity of observed data, which is consistent with the assertions of Höge et al. (2018) regarding the importance of maintaining a balance between model complexity and the ability to capture data variability. It is important to note, as established by Mulaik et al. (1989), that model adequacy depends not only on the degrees of freedom but also on the quality of fit based on empirical and theoretical adequacy indices.
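These figures can be reconciled as follows, under the assumption that the AMOS moment count includes the 48 item means in addition to the variances and covariances of the 48 indicators:

\[
\text{sample moments} = \frac{p(p+3)}{2} = \frac{48 \times 51}{2} = 1{,}224, \qquad df = 1{,}224 - 154 = 1{,}070.
\]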
The application of the structural equation model to analyze the digital competencies of faculty members resulted in the chi-square value reported in Table 2.
The magnitude of the chi-square statistic is considerable, and given the practically nil associated probability, the null hypothesis of a perfect fit of the model to the data is rejected (Kramer and Schmidhammer, 1992). However, it is well-recognized in the specialized literature that the χ2 can be influenced by the sample size, being more prone to indicate a lack of fit as the number of observations increases (Fritz et al., 2012). Given the substantial size of the sample in this study, this effect could be influencing the χ2 result.
It is essential to consider that the rejection of the null hypothesis does not necessarily imply that the model is inappropriate. McNeish et al. (2018) argue that other fit indices should be examined to obtain a more nuanced assessment of the model's quality. These include comparative fit indices such as the Comparative Fit Index (CFI) and the Root Mean Square Error of Approximation (RMSEA), which can provide valuable information on the model's adequacy beyond the χ2.
Figure 2 illustrates the structural equation model examining digital competencies in teachers. The fit indices included call for a careful interpretation to assess the model's adequacy to the collected data.
The adjusted model presents a chi-square of 2,831.517 with a probability level of p = 0.000; since the chi-square test rejects the hypothesis of exact fit, a detailed analysis of the remaining fit indices is required to evaluate the model's adequacy (Bone et al., 1989). With an RMSEA of 0.077, the model falls within the “good fit” range according to the criteria established by Kenny et al. (2015), who suggest that RMSEA values below 0.08 are indicative of a good model fit. However, the CFI of 0.881, though close, does not reach the generally accepted threshold of 0.90 for considering an excellent fit. This suggests that while the model fits the data reasonably well, there is room to improve the model's specification.
The application of this model to the assessment of digital competencies allows for the examination of the complex interaction between different aspects of digital literacy. The results indicate that digital competencies do not manifest in isolation but as a multifaceted, interconnected construct (Wang et al., 2021). The high factor loadings between the observed and latent variables, as seen between ACINF and its indicators, suggest a significant correspondence between the teachers' perceptions and the theoretical dimensions of the CDES questionnaire (Mengual-Andrés et al., 2016).
However, the interpretation of these results must consider the limitations imposed by the fit indices. Although the TLI of 0.875 is considerable and the PRATIO of 0.949 is robust, the AIC of 3,043.517 suggests the possibility of an overparameterized model that could benefit from simplification. Moreover, as Falke et al. (2020) warn, a model with a good fit in terms of RMSEA and CFI does not guarantee the validity of the inferences made. Therefore, a more critical evaluation of the model and the included variables is recommended to ensure the validity and applicability of the findings.
The regression analysis presented next evaluates the impact of multiple latent variables on different parameters, identified as P1 to P48. This statistical analysis was conducted using a predetermined model in study group number 1. Each evaluated parameter (denoted as “P”) relates to one of several key independent variables, including Technological Literacy (ALTE), Access and Use of Information (ACINF), Communication and Collaboration (COMCO), Digital Citizenship (CIDI), and Creativity and Innovation (CREIN). These variables represent theoretical constructs whose specific nature is deduced by their impact on the observed parameters (Table 3).
The results indicate that all independent variables have a statistically significant effect on their respective parameters, as shown by the “***” entries in the significance (P) column, which denote significance at the p < 0.001 level. For example, parameter P2, influenced by ALTE, has a regression weight of 1.131 with a critical ratio of 8.591, indicating a strong effect of this variable on the parameter in question.
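In AMOS output, the critical ratio is the unstandardized estimate divided by its standard error, with values beyond ±1.96 indicating significance at the 0.05 level. From the two figures reported for P2, the implied standard error can be back-calculated (a derived value, not one read from Table 3):

\[
C.R. = \frac{\hat{\lambda}}{S.E.} \quad\Rightarrow\quad S.E. = \frac{1.131}{8.591} \approx 0.132.
\]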
According to similar studies, such as that by Yu et al. (2017), it is common to observe that latent variables like ALTE and ACINF have significant effects on multiple dimensions of parameters related to specific behaviors or processes. The high critical ratios observed in this analysis are consistent with the literature, which suggests that latent variables can have strong influences on the observed constructs, depending on the nature of the structural relationships modeled (Grace and Bollen, 2008).
It is important to note that the standard error varies slightly among the parameters but generally remains within a narrow range, indicating consistent precision in the estimates of the effects. This pattern of robust and significant results reinforces the validity of the model used and the relevance of the variables studied.
The following analysis focuses on assessing correlations among latent variables within a predefined structural model for group number 1. Determining the magnitude of the correlations between these variables provides deep insight into how they interact with each other, which is essential for understanding the underlying relationships in the proposed theoretical model. This study provides key evidence on the interdependence of the variables, which is crucial for future interpretations and applications of the findings (Table 4).
The correlations presented reflect significant relationships between the latent variables in the model. For instance, the correlation between ALTE and ACINF is 0.761, indicating a strong positive association. These correlations suggest that changes in one variable tend to be associated with changes in the other in the same direction. Values close to 1, like the correlation between COMCO and CIDI (0.952), denote an almost perfect association, implying that these variables may share a common foundation or heavily influence each other.
The high levels of correlation between ACINF and the other variables (COMCO and CIDI with values of 0.934 and 0.878, respectively) are consistent with findings in the literature that indicate strong interdependencies among similar constructs in complex models (Krefeld-Schwalb et al., 2022). This evidence suggests that ACINF's influence in the system is central and could act as a mediator between other relevant constructs.
Moreover, the consistent and high correlation between CREIN and the variables COMCO and CIDI (0.942 and 0.943, respectively) reinforces the idea that CREIN might play a structuring role in the model dynamics. These patterns of elevated correlation support the theory that latent variables do not operate in isolation, but rather form an interconnected web of influences that should be considered when applying or interpreting the model (Lowry and Gaskin, 2014).
This section provides an evaluative synthesis of the fit indicators of a structural statistical model. These indicators are essential tools for verifying the goodness of fit of the proposed model with respect to the observed data. Such evaluation is imperative to ensure that the inferences derived from the model are based on a solid empirical foundation. A detailed analysis of each indicator will be provided, and their relevance in the context of the model's fit will be discussed (Table 5).
The Chi-square to degrees of freedom ratio (CMIN/DF) of 2.646 falls within the threshold considered excellent (Blalock, 2017), suggesting the model has a relatively adequate specification. However, the Comparative Fit Index (CFI) with an estimate of 0.881 is below the recommended threshold of 0.95 (Peugh and Feldon, 2020), which denotes insufficient fit and may indicate a need for model revision.
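The ratio follows directly from the values reported earlier (a chi-square of 2,831.517 on 1,070 degrees of freedom):

\[
CMIN/DF = \frac{\chi^{2}}{df} = \frac{2{,}831.517}{1{,}070} \approx 2.646.
\]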
In contrast, the Standardized Root Mean Square Residual (SRMR) with a value of 0.054, and the Root Mean Square Error of Approximation (RMSEA) with 0.077, meet their respective criteria, indicating excellent and acceptable fits, respectively. The discrepancy among these indicators suggests that while the model fits well in terms of standardized residuals and approximation error, it might fail to capture the overall covariance structure in the data.
The PClose value, which assesses the probability that the RMSEA is < 0.05, is 0.000. This indicates that, under the established significance level, it cannot be concluded that the approximation error is below the desired threshold. In other words, the model does not pass the closeness test regarding the ideal RMSEA value (Maydeu-Olivares et al., 2018).
Construct validity analysis is a cornerstone in verifying the conceptual soundness of a structural model. This process examines the extent to which latent variables accurately represent theoretical constructs. The table below presents key results from this analysis, providing a quantitative insight into the reliability and validity of each latent variable in the model (Table 6).
The Composite Reliability (CR) of the variables well exceeds the threshold of 0.7, which is indicative of high internal reliability (Surucu and Maslakci, 2020). However, the Average Variance Extracted (AVE) of ALTE is below the acceptable standard of 0.5, which could call into question the sufficiency of the variable to capture the construct it represents (Sofiyabadi et al., 2022).
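For reference, both indices are conventionally computed from the standardized loadings λ_i of the k items measuring a construct (standard formulas, not reproduced from the study's output):

\[
CR = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2} + \sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)}, \qquad
AVE = \frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2}.
\]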
The analysis of Maximum Shared Variance (MSV) and Maximal Reliability [MaxR(H)] shows that the latent variables maintain adequate differentiation among themselves, supporting the discriminant validity of the model. This is confirmed by the fact that, for all variables, the AVE is greater than both the MSV and the squared inter-construct correlations, a condition for establishing discriminant validity according to Uppal et al. (2018).
The correlations among the latent variables reflect significant associations, interpreted as statistically significant at the 0.001 level. The high correlation between COMCO and CIDI (0.952) suggests they might be measuring similar or related aspects of the construct, which would justify a more detailed review to avoid redundancies in the model.
The HTMT (Heterotrait-Monotrait ratio) analysis is a contemporary technique used to assess discriminant validity among constructs in structural equation models. This method provides a perspective on the adequacy with which constructs are distinguished from each other in a model. An HTMT ratio below 0.85 generally suggests adequate discriminant validity between pairs of constructs, although some authors allow a limit of up to 0.90 in less stringent research contexts (Voorhees et al., 2016; Franke and Sarstedt, 2019) (Table 7).
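Formally, for two constructs i and j measured by K_i and K_j items, the HTMT ratio compares the average correlation between items of different constructs with the geometric mean of the average correlations among items of the same construct (standard formulation, stated here for completeness):

\[
HTMT_{ij} = \frac{\dfrac{1}{K_i K_j}\displaystyle\sum_{g=1}^{K_i}\sum_{h=1}^{K_j} r_{i_g j_h}}
{\sqrt{\dfrac{2}{K_i(K_i-1)}\displaystyle\sum_{g<h} r_{i_g i_h}\;\cdot\;\dfrac{2}{K_j(K_j-1)}\displaystyle\sum_{g<h} r_{j_g j_h}}}.
\]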
The analysis reveals that the HTMT ratios range from 0.726 to 0.944. The ratios between ALTE and other variables such as ACINF (0.765), COMCO (0.758), and CIDI (0.732) are below the threshold of 0.85, which supports strong discriminant validity according to the stricter criterion. However, the ratios involving COMCO, CIDI, and CREIN exceed this threshold, which may suggest that these constructs are not as distinctly discriminated as would be desirable.
For example, the HTMT ratio of 0.952 between COMCO and CIDI is particularly high, indicating a possible significant overlap in what these constructs are measuring. This highlights the need for a conceptual and empirical review of these constructs to ensure they are distinct and do not reflect the same phenomenon.
5 Conclusion
The study focused on assessing the digital competencies of the faculty at the Technical University of Manabí. The findings reveal significant aspects that enhance the understanding of these competencies through a structural equation model, which demonstrated the ability to estimate complex parameters, indicating a well-specified structure and notable flexibility to adjust to the diversity of the observed data.
Nevertheless, although the model's fit indices are acceptable, areas with potential for improvement were identified. This suggests that, although robust, the model can be refined to more accurately represent the evaluated digital competencies. The analysis of construct validity and HTMT ratios reinforces the model's discriminant validity. However, a need for greater differentiation between certain constructs was observed, particularly between communication competence and digital citizenship. This high correlation suggests the existence of common underlying constructs, justifying further research to clarify and refine the model's structure.
Furthermore, among the evaluated digital competencies, the competence in access and use of information (ACINF) notably predominated among the teachers. This capacity to locate, evaluate, and utilize digital information effectively showed a high correspondence with the evaluated theoretical dimensions, reflecting strong integration into teaching practice. Likewise, the competence in communication and collaboration (COMCO) and the competence in digital citizenship (CIDI) also stood out. However, their high correlation indicates the need for greater conceptual distinction between them. Creativity and innovation (CREIN) showed significant structural influence in the model, underscoring the importance of fostering these skills in the digital educational context.
Consequently, the structural equation model has proven to be an effective tool for unraveling the complex interrelationships among digital competencies. The results illustrate that these competencies do not operate in isolation but as an interconnected set of skills and knowledge. The implications for educational practice are significant, providing clear direction for the professional development of teachers in the digital realm.
However, the study presents some limitations. Firstly, the model, although robust, could benefit from greater simplification to avoid overfitting. Additionally, the high correlation between certain competencies suggests the need for a more detailed analysis to adequately differentiate between them. Lastly, the sample was limited to a specific university, which could restrict the generalization of the findings to other educational contexts.
To improve the digital competencies of the faculty, it is suggested to strengthen training programs in skills for searching, evaluating, and using digital information, utilizing advanced artificial intelligence tools to personalize teaching. It is also recommended to develop specific programs that separately address communication in digital environments and the ethical and citizenship aspects of digital literacy, thereby improving the understanding and application of these competencies. Furthermore, it is essential to encourage creativity and innovation through the use of emerging technologies and learning environments that promote experimentation and the generation of new ideas. Simplifying the model to improve its explanatory capacity and reduce the possibility of overfitting is essential, ensuring continuous evaluation and refinement of the model.
Finally, future research should focus on replicating this study in different educational contexts to validate the findings and improve the generalization of the results. It is recommended to explore the integration of new technologies and teaching methods that can further enhance the digital competencies of the faculty. A longitudinal analysis could provide valuable information on the evolution of these competencies over time and their impact on educational practice. These actions will significantly contribute to the development of digital competencies in higher education, aligning the perceptions of the faculty with the digital competencies necessary for effective performance in the digital age.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving humans were approved by the Postgraduate Ethics Committee of the State University of Milagro. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
JM-C: Data curation, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing. KG: Data curation, Writing – original draft, Writing – review & editing. TL: Writing – original draft, Methodology. AS-G: Data curation, Investigation, Methodology, Writing – review & editing. JC: Data curation, Investigation, Methodology, Writing – review & editing. LC: Conceptualization, Methodology, Supervision, Validation, Writing – original draft.
Funding
The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Akgun, S., and Greenhow, C. (2022). Artificial intelligence in education: addressing ethical challenges in K-12 settings. AI Ethics 2, 431–440. doi: 10.1007/s43681-021-00096-7
Bhutoria, A. (2022). Personalized education and artificial intelligence in the United States, China, and india: a systematic review using a human-in-the-loop model. Comp. Educ. 3:100068. doi: 10.1016/j.caeai.2022.100068
Blalock, H. M. Jr. (2017). Causal Models in the Social Sciences. New York, NY: Routledge. doi: 10.4324/9781315081663
Bone, P. F., Sharma, S., and Shimp, T. A. (1989). A bootstrap procedure for evaluating goodness-of-fit indices of structural equation and confirmatory factor models. J. Market. Res. 26, 105–111. doi: 10.1177/002224378902600109
Buchholz, B. A., DeHart, J., and Moorman, G. (2020). Digital citizenship during a global pandemic: moving beyond digital literacy. J. Adolesc. Adult Liter. 64, 11–17. doi: 10.1002/jaal.1076
Claro, M., Salinas, A., Cabello-Hutt, T., Martín, E. S., Preiss, D. D., Valenzuela, S., et al. (2018). Teaching in a Digital Environment (TIDE): defining and measuring teachers' capacity to develop students' digital information and communication skills. Comput. Educ. 121, 162–174. doi: 10.1016/j.compedu.2018.03.001
DeLuca, C., and Klinger, D. A. (2010). Assessment literacy development: identifying gaps in teacher candidates' learning. Assess. Educ. 17, 419–438. doi: 10.1080/0969594X.2010.516643
Durak, H. Y., and Saritepeci, M. (2018). Analysis of the relation between computational thinking skills and various variables with the structural equation model. Comput. Educ. 116, 191–202. doi: 10.1016/j.compedu.2017.09.004
Falke, A., Schröder, N., and Endres, H. (2020). A first fit index on estimation accuracy in structural equation models. J. Bus. Econ. 90, 277–302. doi: 10.1007/s11573-019-00952-3
Falloon, G. (2020). From digital literacy to digital competence: the teacher digital competency (TDC) framework. Educ. Technol. Res. Dev. 68, 2449–2472. doi: 10.1007/s11423-020-09767-4
Franke, G., and Sarstedt, M. (2019). Heuristics versus statistics in discriminant validity testing: a comparison of four procedures. Int. Res. 29, 430–447. doi: 10.1108/IntR-12-2017-0515
Fritz, C. O., Morris, P. E., and Richler, J. J. (2012). Effect size estimates: current use, calculations, and interpretation. J. Exp. Psychol. 141, 2–18. doi: 10.1037/a0024338
George, B., and Wooden, O. (2023). Managing the strategic transformation of higher education through artificial intelligence. Administr. Sci. 13:196. doi: 10.3390/admsci13090196
Grace, J. B., and Bollen, K. A. (2008). Representing general theoretical concepts in structural equation models: the role of composite variables. Environ. Ecol. Stat. 15, 191–213. doi: 10.1007/s10651-007-0047-7
Haderer, B., and Ciolacu, M. (2022). Education 4.0: artificial intelligence assisted task- and time planning system. Proc. Comput. Sci. 200, 1328–1337. doi: 10.1016/j.procs.2022.01.334
Hasse, C. (2017). Technological literacy for teachers. Oxf. Rev. Educ. 43, 365–378. doi: 10.1080/03054985.2017.1305057
Heinen, R., Leone, S. A., Fairchild, J., Cushenbery, L., and Hunter, S. T. (2015). Tools for the Process, 374–403.
Höge, M., Wöhling, T., and Nowak, W. (2018). A primer for model selection: the decisive role of model complexity. Water Resour. Res. 54, 1688–1715. doi: 10.1002/2017WR021902
Kenny, D. A., Kaniskan, B., and Betsy McCoach, D. (2015). The performance of RMSEA in models with small degrees of freedom. Sociol. Methods Res. 44, 486–507. doi: 10.1177/0049124114543236
Kramer, M., and Schmidhammer, J. (1992). The chi-squared statistic in ethology: use and misuse. Anim. Behav. 44, 833–841. doi: 10.1016/S0003-3472(05)80579-2
Krefeld-Schwalb, A., Pachur, T., and Scheibehenne, B. (2022). Structural parameter interdependencies in computational models of cognition. Psychol. Rev. 129, 313–339. doi: 10.1037/rev0000285
Lowry, P. B., and Gaskin, J. (2014). Partial Least Squares (PLS) Structural Equation Modeling (SEM) for building and testing behavioral causal theory: when to choose it and how to use it. IEEE Transact. Prof. Commun. 57, 123–146. doi: 10.1109/TPC.2014.2312452
Maydeu-Olivares, A., Shi, D., and Rosseel, Y. (2018). Assessing fit in structural equation models: a Monte-Carlo Evaluation of RMSEA versus srmr confidence intervals and tests of close fit. Struct. Eq. Model. 25, 389–402. doi: 10.1080/10705511.2017.1389611
McNeish, D., An, J., and Hancock, G. R. (2018). The Thorny relation between measurement quality and fit index cutoffs in latent variable models. J. Pers. Assess. 100, 43–52. doi: 10.1080/00223891.2017.1281286
Mengual-Andrés, S., Roig-Vila, R., and Mira, J. B. (2016). Delphi study for the design and validation of a questionnaire about digital competences in higher education. Int. J. Educ. Technol. High. Educ. 13:12. doi: 10.1186/s41239-016-0009-y
Moreira-Choez, J. S., Lamus de Rodríguez, T. M., Olmedo-Cañarte, P. A., and Macías-Macías, J. D. (2024). Valorando el futuro de la educación: Competencias Digitales y Tecnologías de Información y Comunicación en Universidades. Rev. Venezolana Gerencia 29, 271–288. doi: 10.52080/rvgluz.29.105.18
Moreira-Choez, J. S., Zambrano-Acosta, J. M., and López-Padrón, A. (2023). Digital teaching competence of higher education professors: self-perception study in an Ecuadorian University. F1000Research 12:1484. doi: 10.12688/f1000research.139064.1
Mulaik, S. A., James, L. R., Van Alstine, J., Bennett, N., Lind, S., and Dean Stilwell, C. (1989). Evaluation of goodness-of-fit indices for structural equation models. Psychol. Bull. 105, 430–445. doi: 10.1037/0033-2909.105.3.430
Mumtaz, S. (2000). Factors affecting teachers' use of information and communications technology: a review of the literature. J. Inf. Technol. Teach. Educ. 9, 319–342. doi: 10.1080/14759390000200096
Nanto, D., Rahiem, D. M. H., and Maryati, T. K. (2021). Emerging Trends in Technology for Education in an Uncertain World. London: Routledge.
Noskova, A. V., Goloukhova, D. V., and Kuzmina, E. I. (2021). “Digital competence of university teachers: self-perception of skills in online-environment,” in Perishable and Eternal: Mythologies and Social Technologies of Digital Civilization, Vol. 120. European Proceedings of Social and Behavioural Sciences, eds. D. Y. Krapchunov, S. A. Malenko, V. O. Shipulin, E. F. Zhukova, A. G. Nekita, and O. A. Fikhtner (European Publisher), 856–863. doi: 10.15405/epsbs.2021.12.03.113
Papadopoulos, G. T., Antona, M., and Stephanidis, C. (2021). Towards open and expandable cognitive AI architectures for large-scale multi-agent human-robot collaborative learning. IEEE Access 9, 73890–73909. doi: 10.1109/ACCESS.2021.3080517
Patrick, P., and Care, E., (eds.). (2015). Assessment and Teaching of 21st Century Skills. Dordrecht: Springer Netherlands.
Peugh, J., and Feldon, D. F. (2020). ‘How well does your structural equation model fit your data?': Is Marcoulides and Yuan's Equivalence Test the Answer? CBE Life Sci. Educ. 19:es5. doi: 10.1187/cbe.20-01-0016
Preacher, K. J., Zhang, G., Kim, C., and Mels, G. (2013). Choosing the optimal number of factors in exploratory factor analysis: a model selection perspective. Multivariate Behav. Res. 48, 28–56. doi: 10.1080/00273171.2012.710386
Røkenes, F. M., and Krumsvik, R. J. (2014). Development of student teachers' digital competence in teacher education - a literature review. Nordic J. Digit. Liter. 9, 250–280. doi: 10.18261/ISSN1891-943X-2014-04-03
Sá, M. J., Santos, A. I., Serpa, S., and Ferreira, C. M. (2021). Digitainability—digital competences post-COVID-19 for a sustainable society. Sustainability 13:9564. doi: 10.3390/su13179564
Scherer, R., Siddiq, F., and Tondeur, J. (2019). The Technology Acceptance Model (TAM): a meta-analytic structural equation modeling approach to explaining teachers' adoption of digital technology in education. Comput. Educ. 128, 13–35. doi: 10.1016/j.compedu.2018.09.009
Searson, M., Hancock, M., Soheil, N., and Shepherd, G. (2015). Digital citizenship within global contexts. Educ. Inf. Technol. 20, 729–741. doi: 10.1007/s10639-015-9426-0
Sofiyabadi, J., Valmohammadi, C., and Asl, A. S. (2022). Impact of knowledge management practices on innovation performance. IEEE Transact. Eng. Manag. 69, 3225–3239. doi: 10.1109/TEM.2020.3032233
Surucu, L., and Maslakci, A. (2020). Validity and reliability in quantitative research. Bus. Manag. Stud. 8, 2694–2726. doi: 10.15295/bmij.v8i3.1540
Torrent-Sellens, J., Salazar-Concha, C., Ficapal-Cusí, P., and Saigí-Rubió, F. (2021). Using digital platforms to promote blood donation: motivational and preliminary evidence from Latin America and Spain. Int. J. Environ. Res. Public Health 18:4270. doi: 10.3390/ijerph18084270
Uppal, M. A., Ali, S., and Gulliver, S. R. (2018). Factors determining e-learning service quality. Br. J. Educ. Technol. 49, 412–426. doi: 10.1111/bjet.12552
Van Der Vleuten, C. P. M. (1996). The assessment of professional competence: developments, research and practical implications. Adv. Health Sci. Educ. 1, 41–67. doi: 10.1007/BF00596229
Voorhees, C. M., Brady, M. K., Calantone, R., and Ramirez, E. (2016). Discriminant validity testing in marketing: an analysis, causes for concern, and proposed remedies. J. Acad. Market. Sci. 44, 119–134. doi: 10.1007/s11747-015-0455-4
Wang, X., Zhang, R., Wang, Z., and Li, T. (2021). How does digital competence preserve university students' psychological well-being during the pandemic? An investigation from self-determined theory. Front. Psychol. 12:652594. doi: 10.3389/fpsyg.2021.652594
Wong, G. K.-W., Reichert, F., and Law, N. (2023). Reorienting the assessment of digital literacy in the twenty-first century: a product-lifecycle and experience dependence perspective. Educ. Technol. Res. Dev. 71, 2389–2412. doi: 10.1007/s11423-023-10278-1
Yu, S., and Lu, Y. (2021). An Introduction to Artificial Intelligence in Education. Singapore: Springer.
Yu, T.-K., Lin, M.-L., and Liao, Y.-K. (2017). Understanding factors influencing information communication technology adoption behavior: the moderators of information literacy and digital skills. Comput. Human Behav. 71, 196–208. doi: 10.1016/j.chb.2017.02.005
Keywords: assessment, digital competencies, higher education, structural equations, artificial intelligence
Citation: Moreira-Choez JS, Gómez Barzola KE, Lamus de Rodríguez TM, Sabando-García AR, Cruz Mendoza JC and Cedeño Barcia LA (2024) Assessment of digital competencies in higher education faculty: a multimodal approach within the framework of artificial intelligence. Front. Educ. 9:1425487. doi: 10.3389/feduc.2024.1425487
Received: 29 April 2024; Accepted: 13 August 2024;
Published: 30 August 2024.
Edited by:
Kevin Mario Laura De La Cruz, Neumann Graduate School, Peru
Reviewed by:
Petros Roussos, National and Kapodistrian University of Athens, Greece
Osbaldo Turpo Gebera, National University of Saint Augustine, Peru
Copyright © 2024 Moreira-Choez, Gómez Barzola, Lamus de Rodríguez, Sabando-García, Cruz Mendoza and Cedeño Barcia. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jenniffer Sobeida Moreira-Choez, jenniffer.moreira@utm.edu.ec