ABSTRACT
Moodle is an open-source learning management system that is widely used today, especially in higher education settings. Although its technological acceptance by undergraduate students has been extensively studied, very little is known about its acceptance by university professors. In particular, as far as we know, the literature contains no previous studies involving South American teachers. This paper aims to bridge this gap by quantifying and analyzing the drivers of Moodle's technological acceptance among Ecuadorian academic staff. Based on the responses of 538 teachers and using a modified UTAUT2 model as a theoretical framework, we found that Ecuadorian teachers have high levels of acceptance of Moodle, regardless of their age, gender, ethnicity, or discipline. However, this acceptance is significantly higher among teachers with high levels of education and with considerable previous experience with e-learning systems. The main determinants of this acceptance are attitude strength, effort expectancy, performance expectancy, and facilitating conditions. We found no moderating effects of the participants' age, gender, or previous experience (including the second- and third-order interactions derived from these variables). We conclude that, albeit moderately (adjusted R² = 0.588), the model tested confirms the predictive power of the part of UTAUT2 that was inherited from UTAUT.
ABSTRACT
In this study, we report on a Systematic Mapping Study (SMS) on how the quality of the quantitative instruments used to measure digital competencies in higher education is assured. A total of 73 primary studies were selected from the literature published in the last 10 years in order to 1) characterize the literature, 2) evaluate the reporting practice of quality assessments, and 3) analyze which variables explain such reporting practices. The results indicate that most of the studies focused on medium to large samples of European university students enrolled in social science programs. Ad hoc, self-reported questionnaires measuring various digital competence areas were the most commonly used method of data collection. The studies were mostly published in low-tier journals. 36% of the studies did not report any quality assessment, while fewer than 50% covered both reliability and validity assessments at the same time. In general, the studies offered moderate to high depth of evidence on the assessments performed. We found that studies measuring several areas of digital competence were more likely to report quality assessments. In addition, we estimate that the probability of finding studies with acceptable or good reporting practices increases over time.