Results 1 - 20 of 32
1.
Acta Psychol (Amst) ; 250: 104493, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39288693

ABSTRACT

The increasing use of smartphones globally necessitates the creation of reliable and valid scales to evaluate their psychological effects, particularly within academic settings such as universities. The current study aimed to identify the factorial structure of the Smartphone Addiction Inventory (SPAI) in the Republic of Yemen. The sample consisted of 1920 university students (1136 males and 784 females). The data were analyzed with the AMOS V25 statistical program. The results of the factor analysis supported the goodness of fit of the five-factor model to the data with excellent indices: RMSEA = 0.052, CFI = 0.910, GFI = 0.931, AGFI = 0.915, TLI = 0.907, NFI = 0.915, RFI = 0.916, and RMR = 0.032, all within the ideal range to support the fit of the model to the factorial structure of the inventory. The explained variances ranged between 0.740 and 0.834, with indices indicating reliable measurement. The results of the confirmatory factor analysis revealed that four items loaded on the Time Spent factor, four items on the Compulsivity factor, eight items on the Daily Life Interference factor, five items on the Craving factor, and three items on the Sleep Interference factor, with all loadings statistically significant (p < .001). Based on these findings, research directions and recommendations were provided.

2.
Multivariate Behav Res ; : 1-20, 2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39154220

ABSTRACT

A popular measure of model fit in structural equation modeling (SEM) is the standardized root mean squared residual (SRMR) fit index. Equivalence testing has been used to evaluate model fit in SEM but has yet to be applied to the SRMR. Accordingly, the present study proposed equivalence-testing-based fit tests for the SRMR (ESRMR). Several variations of ESRMR were introduced, incorporating different equivalence bounds and methods of computing confidence intervals. A Monte Carlo simulation study compared these novel tests with traditional methods for evaluating model fit. The results demonstrated that certain ESRMR tests based on an analytic computation of the confidence interval correctly reject poor-fitting models and are well powered for detecting good-fitting models. We also present an illustrative example with real data to demonstrate how ESRMR may be incorporated into model fit evaluation and reporting. Our recommendation is that ESRMR tests be presented in addition to descriptive fit indices when reporting model fit in SEM.
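The SRMR at the heart of these ESRMR tests is straightforward to compute directly. As an illustration (not the authors' code), here is a minimal Python sketch of one common SRMR convention: averaging squared standardized residuals over the unique elements of the covariance matrix, diagonal included. The matrices are made-up toy values.

```python
import math

def srmr(S, Sigma):
    """Standardized root mean squared residual between a sample
    covariance matrix S and a model-implied covariance matrix Sigma.
    Averages squared standardized residuals over the p(p+1)/2 unique
    elements (one common convention; software implementations differ
    in whether the diagonal is included)."""
    p = len(S)
    total = 0.0
    for i in range(p):
        for j in range(i + 1):  # lower triangle, diagonal included
            std = math.sqrt(S[i][i] * S[j][j])
            total += ((S[i][j] - Sigma[i][j]) / std) ** 2
    return math.sqrt(total / (p * (p + 1) / 2))

# toy standardized matrices: sample vs. a one-factor-style implied matrix
S = [[1.00, 0.45, 0.40],
     [0.45, 1.00, 0.35],
     [0.40, 0.35, 1.00]]
Sigma = [[1.00, 0.42, 0.42],
         [0.42, 1.00, 0.42],
         [0.42, 0.42, 1.00]]
print(round(srmr(S, Sigma), 4))
```

With these toy matrices the residuals are small, so the SRMR lands well below the conventional .08 benchmark.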

4.
Educ Psychol Meas ; 84(3): 481-509, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38756464

ABSTRACT

A Monte Carlo simulation study was conducted to compare fit indices used for detecting the correct latent class in three dichotomous mixture item response theory (IRT) models. Ten indices were considered: Akaike's information criterion (AIC), the corrected AIC (AICc), Bayesian information criterion (BIC), consistent AIC (CAIC), Draper's information criterion (DIC), sample size adjusted BIC (SABIC), relative entropy, the integrated classification likelihood criterion (ICL-BIC), the adjusted Lo-Mendell-Rubin (LMR), and Vuong-Lo-Mendell-Rubin (VLMR). The accuracy of the fit indices was assessed for correct detection of the number of latent classes for different simulation conditions including sample size (2,500 and 5,000), test length (15, 30, and 45), mixture proportions (equal and unequal), number of latent classes (2, 3, and 4), and latent class separation (no-separation and small separation). Simulation study results indicated that as the number of examinees or number of items increased, correct identification rates also increased for most of the indices. Correct identification rates by the different fit indices, however, decreased as the number of estimated latent classes or parameters (i.e., model complexity) increased. Results were good for BIC, CAIC, DIC, SABIC, ICL-BIC, LMR, and VLMR, and the relative entropy index tended to select correct models most of the time. Consistent with previous studies, AIC and AICc showed poor performance. Most of these indices had limited utility for three-class and four-class mixture 3PL model conditions.
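The likelihood-based indices compared above all penalize the −2 log-likelihood by a function of the parameter count k and sample size n. A small Python sketch of the standard formulas follows; the log-likelihood value is an arbitrary toy number, not taken from the study.

```python
import math

def information_criteria(loglik, k, n):
    """Common model-selection indices from a model's maximized
    log-likelihood (loglik), number of free parameters (k), and
    sample size (n). SABIC uses Sclove's (n+2)/24 rescaling of n;
    CAIC is Bozdogan's consistent AIC."""
    return {
        "AIC":   -2 * loglik + 2 * k,
        "AICc":  -2 * loglik + 2 * k + (2 * k * (k + 1)) / (n - k - 1),
        "BIC":   -2 * loglik + k * math.log(n),
        "CAIC":  -2 * loglik + k * (math.log(n) + 1),
        "SABIC": -2 * loglik + k * math.log((n + 2) / 24),
    }

# toy values: a 2-class mixture with 46 free parameters, n = 2500
crit = information_criteria(loglik=-10234.7, k=46, n=2500)
for name, value in crit.items():
    print(f"{name}: {value:.1f}")
```

Note how the penalties order themselves at this n: AIC is the most lenient, SABIC sits between AIC and BIC, and CAIC penalizes hardest, which is one mechanism behind the differing class-detection rates reported above.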

5.
Front Psychol ; 15: 1366850, 2024.
Article in English | MEDLINE | ID: mdl-38765833

ABSTRACT

This study informs researchers about the performance of different level-specific and target-specific model fit indices in the Multilevel Latent Growth Model (MLGM) with an unbalanced design. As the use of MLGMs is relatively new in applied research, this study helps researchers choose specific model fit indices to evaluate MLGMs. Our simulation design factors included three levels of number of groups (50, 100, and 200) and three levels of unbalanced group sizes (5/15, 10/20, and 25/75), based on simulated datasets derived from a correctly specified MLGM. We evaluated descriptive information about the model fit indices under various simulation conditions. We also conducted an ANOVA to calculate the extent to which these fit indices could be influenced by different design factors. Based on the results, we made practical and theoretical recommendations about the fit indices. CFI- and TLI-related fit indices performed well in the MLGM and can be trusted for evaluating model fit under conditions similar to those found in applied settings. However, RMSEA-related, SRMR-related, and chi-square-related fit indices varied with the factors included in this study and should be used with caution when evaluating model fit in the MLGM.

6.
Educ Psychol Meas ; 84(1): 123-144, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38250508

ABSTRACT

Confirmatory factor analyses (CFA) are often used in psychological research when developing measurement models for psychological constructs. Evaluating CFA model fit can be quite challenging, as tests for exact model fit may focus on negligible deviations, while fit indices cannot be interpreted absolutely without specifying thresholds or cutoffs. In this study, we review how model fit in CFA is evaluated in psychological research using fit indices and compare the reported values with established cutoff rules. For this, we collected data on all CFA models in Psychological Assessment from the years 2015 to 2020 (N = 221 studies). In addition, we reevaluate model fit with newly developed methods that derive fit index cutoffs tailored to the respective measurement model and the data characteristics at hand. The results of our review indicate that the model fit in many studies must be viewed critically, especially with regard to the usually imposed independent-clusters constraints. In addition, many studies do not fully report all results necessary to re-evaluate model fit. We discuss these findings in light of new developments in model fit evaluation and methods for specification search.

7.
Br J Math Stat Psychol ; 77(1): 103-129, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37448144

ABSTRACT

It has been suggested that equivalence testing (otherwise known as negligible effect testing) should be used to evaluate model fit within structural equation modelling (SEM). In this study, we propose novel variations of equivalence tests based on the popular root mean squared error of approximation and comparative fit index fit indices. Using Monte Carlo simulations, we compare the performance of these novel tests to other existing equivalence testing-based fit indices in SEM, as well as to other methods commonly used to evaluate model fit. Results indicate that equivalence tests in SEM have good Type I error control and display considerable power for detecting well-fitting models in medium to large sample sizes. At small sample sizes, relative to traditional fit indices, equivalence tests limit the chance of supporting a poorly fitting model. We also present an illustrative example to demonstrate how equivalence tests may be incorporated in model fit reporting. Equivalence tests in SEM also have unique interpretational advantages compared to other methods of model fit evaluation. We recommend that equivalence tests be utilized in conjunction with descriptive fit indices to provide more evidence when evaluating model fit.


Subject(s)
Latent Class Analysis, Sample Size, Monte Carlo Method
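The RMSEA underlying the equivalence tests in the entry above is derived from the model chi-square; the equivalence logic then asks whether a confidence bound on it falls below a prespecified bound. Here is a hedged Python sketch of the point estimate only; the chi-square, df, and N are invented for illustration, and a real equivalence test would use the noncentral chi-square distribution to obtain the confidence limit rather than the point value.

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of the root mean square error of approximation
    from a model chi-square statistic, its degrees of freedom, and
    sample size, truncated at zero when chi2 < df."""
    return math.sqrt(max(0.0, (chi2 - df) / (df * (n - 1))))

# toy model: chi2 = 85.3 on 48 df with n = 500 -> RMSEA of about 0.039
point = rmsea(85.3, 48, 500)
print(round(point, 3))

# equivalence-style decision against a "not-bad-fit" bound of .05
# (illustrative only; the published tests bound the CI, not the point)
print("supports good fit" if point < 0.05 else "inconclusive")
```

The point estimate here falls below the .05 bound; the equivalence tests described above would additionally require the confidence limit to do so, which is what controls the error rates reported in the simulation.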
8.
Educ Psychol Meas ; 83(5): 907-928, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37663541

ABSTRACT

Social desirability bias (SDB) has been a major concern in educational and psychological assessments when measuring latent variables because it has the potential to introduce measurement error and bias in assessments. Person-fit indices can detect bias in the form of misfitted response vectors. The objective of this study was to compare the performance of 14 person-fit indices to identify SDB in simulated responses. The area under the curve (AUC) of receiver operating characteristic (ROC) curve analysis was computed to evaluate the predictive power of these statistics. The findings showed that the agreement statistic (A) outperformed all other person-fit indices, while the disagreement statistic (D), dependability statistic (E), and the number of Guttman errors (G) also demonstrated high AUCs to detect SDB. Recommendations for practitioners to use these fit indices are provided.
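The AUC comparison used in this study can be computed without any ROC-curve machinery via the Mann-Whitney rank identity: the AUC is the probability that a randomly chosen flagged (SDB) response vector receives a higher person-fit score than a randomly chosen unflagged one. A minimal Python sketch, with invented toy scores and flags rather than the study's data:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    the proportion of (positive, negative) pairs in which the
    positive case scores higher, counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# toy person-fit statistic (higher = more aberrant) vs. simulated SDB flag
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,    1,   0,   0,   1,   0]
print(auc(scores, labels))  # 0.75
```

An AUC of 0.5 means the index separates biased from unbiased responders no better than chance, which is the baseline the fourteen indices in the study are compared against.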

9.
Educ Psychol Meas ; 83(3): 586-608, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37187692

ABSTRACT

In the literature on modern psychometric modeling, mostly related to item response theory (IRT), the fit of a model is evaluated through known indices, such as χ2, M2, and the root mean square error of approximation (RMSEA) for absolute assessments, as well as the Akaike information criterion (AIC), consistent AIC (CAIC), and Bayesian information criterion (BIC) for relative comparisons. Recent developments show a trend toward merging psychometrics and machine learning, yet there remains a gap in model fit evaluation, specifically in the use of the area under the curve (AUC). This study focuses on the behavior of the AUC in fitting IRT models. Rounds of simulations were conducted to investigate the AUC's appropriateness (e.g., power and Type I error rate) under various conditions. The results show that the AUC possesses certain advantages under certain conditions, such as a high-dimensional structure with two-parameter logistic (2PL) and some three-parameter logistic (3PL) models, while disadvantages are also obvious when the true model is unidimensional. This cautions researchers about the dangers of relying solely on the AUC when evaluating psychometric models.

10.
Multivariate Behav Res ; 58(1): 189-194, 2023.
Article in English | MEDLINE | ID: mdl-36787513

ABSTRACT

To evaluate the fit of a confirmatory factor analysis model, researchers often rely on fit indices such as SRMR, RMSEA, and CFI. These indices are frequently compared to benchmark values of .08, .06, and .96, respectively, established by Hu and Bentler (Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1-55). However, these indices are affected by model characteristics, and their sensitivity to misfit can change across models. Decisions about model fit can therefore be improved by tailoring cutoffs to each model. The methodological literature has proposed methods for deriving customized cutoffs, although this can require knowledge of linear algebra and Monte Carlo simulation. Given that many empirical researchers do not have training in these technical areas, empirical studies largely continue to rely on fixed benchmarks even though they are known to generalize poorly and can be poor arbiters of fit. To address this, this paper introduces the R package dynamic to make computation of dynamic fit index cutoffs (which are tailored to the user's model) more accessible to empirical researchers. dynamic heavily automates this process and only requires a lavaan object to conduct several custom Monte Carlo simulations and output fit index cutoffs designed to be sensitive to misfit given the user's model characteristics.


Subject(s)
Statistical Models, Computer Simulation, Latent Class Analysis, Factor Analysis, Monte Carlo Method
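dynamic itself is an R package built on lavaan, but the core idea behind the tailored cutoffs in the entry above — simulate many datasets from the researcher's model, look at the fit index's sampling distribution, and read a cutoff off a tail percentile — can be sketched compactly. The following Python sketch illustrates that logic only: it assumes a known one-factor model with made-up loadings and skips the per-replication re-estimation (and the misspecified-model simulations) that the real package performs.

```python
import math, random

random.seed(1)

def one_factor_sigma(loadings):
    """Model-implied correlation matrix of a one-factor model with a
    unit-variance factor and standardized indicators."""
    p = len(loadings)
    return [[1.0 if i == j else loadings[i] * loadings[j]
             for j in range(p)] for i in range(p)]

def simulate_sample(loadings, n):
    """Draw n observations from x_j = l_j * f + sqrt(1 - l_j^2) * e_j."""
    return [[l * f + math.sqrt(1 - l * l) * random.gauss(0, 1)
             for l in loadings]
            for f in (random.gauss(0, 1) for _ in range(n))]

def sample_cov(data):
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j])
                 for row in data) / (n - 1)
             for j in range(p)] for i in range(p)]

def srmr(S, Sigma):
    p = len(S)
    total = sum(((S[i][j] - Sigma[i][j])
                 / math.sqrt(S[i][i] * S[j][j])) ** 2
                for i in range(p) for j in range(i + 1))
    return math.sqrt(total / (p * (p + 1) / 2))

loadings = [0.7, 0.6, 0.8, 0.5]   # hypothetical model
Sigma = one_factor_sigma(loadings)

# sampling distribution of SRMR when the model is exactly true (n = 300)
reps = [srmr(sample_cov(simulate_sample(loadings, 300)), Sigma)
        for _ in range(200)]
reps.sort()
cutoff = reps[int(0.95 * len(reps))]  # 95th percentile = tailored cutoff
print(f"tailored SRMR cutoff ~ {cutoff:.3f}")
```

Values of SRMR above this cutoff would be unusual if the model were correct for this design, which is the sense in which the cutoff is "dynamic": it depends on the model and sample size rather than on a fixed benchmark.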
11.
Suma psicol ; 29(2), Dec. 2022.
Article in English | LILACS-Express | LILACS | ID: biblio-1536886

ABSTRACT

Introduction: The COVID-19 pandemic has had a very negative impact on people's overall mental health and psychosocial well-being, but the study of the social support available to cope with such an adverse situation has received hardly any attention. Objective: To examine the psychometric properties of the MOS Perceived Social Support Questionnaire among the Mexican population in the context of the COVID-19 pandemic. Method: Non-experimental cross-sectional study. A sociodemographic questionnaire and the Medical Outcomes Study survey were applied in a non-probabilistic sample. A total of 898 people from different regions of Mexico, 258 males and 640 females, participated in the study during the COVID-19 pandemic. Results: The analysis yielded a bi-factor model with two factors, Emotional/informational support and Tangible support, with satisfactory goodness-of-fit indices. Reliability was adequate, with a high hierarchical omega coefficient for the general factor as well as for the specific factors. Likewise, the H coefficient was adequate for the general factor and its dimensions. Conclusions: The results showed that the scale is a valid and reliable measure of perceived social support among the Mexican population.



12.
Appl Psychol Meas ; 46(8): 705-719, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36262524

ABSTRACT

Item-level fit analysis not only serves as a complementary check to global fit analysis; it is also essential in scale development, because the fit results guide item revision and/or deletion (Liu & Maydeu-Olivares, 2014). During data collection, missing response data are likely to occur for various reasons. Chi-square-based item fit indices (e.g., Yen's Q₁, McKinley and Mills' G², and Orlando and Thissen's S-X² and S-G²) are the most widely used statistics to assess item-level fit. However, the role of total scores with complete data used in S-X² and S-G² differs from that with incomplete data. As a result, S-X² and S-G² cannot handle incomplete data directly. To this end, we propose several modified versions of S-X² and S-G² to evaluate item-level fit when response data are incomplete, denoted M_impute-X² and M_impute-G², where the subscript "impute" denotes the imputation method. Instead of using observed total scores for grouping, the new indices rely on imputed total scores obtained by either a single imputation method or one of three multiple imputation methods (i.e., two-way imputation with normally distributed errors, corrected item-mean substitution with normally distributed errors, and response function imputation). The new indices are equivalent to S-X² and S-G² when response data are complete. Their performance is evaluated and compared via simulation studies; the manipulated factors include test length, source of misfit, misfit proportion, and missing proportion. The results of the simulation studies are consistent with those of Orlando and Thissen (2000, 2003), and different indices are recommended under different conditions.

13.
Front Psychol ; 12: 783226, 2021.
Article in English | MEDLINE | ID: mdl-34887821

ABSTRACT

Fit indices provide helpful information for researchers to assess the fit of their structural equation models to their data. However, like many statistics and methods, researchers can misuse fit indices, which suggests the potential for questionable research practices arising during the analytic and interpretative processes. In the current paper, the author highlights two critical ethical dilemmas regarding the use of fit indices: (1) the selective reporting of fit indices and (2) using fit indices to justify poorly fitting models. The author illustrates the dilemmas and provides potential solutions for researchers and journals to reduce these questionable research practices.

14.
Educ Psychol Meas ; 81(5): 817-846, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34565809

ABSTRACT

This study examined the impact of omitting a covariate interaction effect on parameter estimates in multilevel multiple-indicator multiple-cause models, as well as the sensitivity of fit indices to model misspecification when the between-level, within-level, or cross-level interaction effect was left out of the model. The parameter estimates produced in the correct and the misspecified models were compared under varying conditions of cluster number, cluster size, intraclass correlation, and the magnitude of the interaction effect in the population model. Results showed that the two main effects were overestimated by approximately half the size of the interaction effect, and the between-level factor mean was underestimated. None of the comparative fit index, Tucker-Lewis index, root mean square error of approximation, or standardized root mean square residual was sensitive to the omission of the interaction effect. The sensitivity of information criteria varied depending mainly on the magnitude of the omitted interaction, as well as its location (i.e., at the between level, within level, or cross level). Implications and recommendations based on the findings are discussed.

15.
Educ Psychol Meas ; 81(3): 413-440, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33994558

ABSTRACT

Model fit indices are increasingly being recommended and used to select the number of factors in an exploratory factor analysis. Growing evidence suggests that the recommended cutoff values for common model fit indices are not appropriate in an exploratory factor analysis context. A particularly prominent problem in scale evaluation is the ubiquity of correlated residuals and imperfect model specification. Our research focuses on a scale evaluation context and the performance of four standard model fit indices: root mean square error of approximation (RMSEA), standardized root mean square residual (SRMR), comparative fit index (CFI), and Tucker-Lewis index (TLI), and two equivalence-test-based model fit indices: RMSEAt and CFIt. We use Monte Carlo simulation to generate and analyze data based on a substantive example using the Positive and Negative Affect Schedule (N = 1,000). We systematically vary the number and magnitude of correlated residuals as well as nonspecific misspecification to evaluate the impact on model fit indices when fitting a two-factor exploratory factor analysis. Our results show that all fit indices except the SRMR are overly sensitive to correlated residuals and nonspecific error, resulting in solutions that are overfactored. The SRMR performed well, consistently selecting the correct number of factors; however, previous research suggests it does not perform well with categorical data. In general, we do not recommend using model fit indices to select the number of factors in a scale evaluation framework.

16.
Eat Weight Disord ; 26(3): 859-868, 2021 Apr.
Article in English | MEDLINE | ID: mdl-32430884

ABSTRACT

PURPOSE: The Eating Disorder Examination Questionnaire (EDE-Q) is one of the most commonly used tools for identification of eating disorder (ED) symptoms. The purpose of this study was to develop and validate the Croatian version of the EDE-Q 6.0. METHODS: Participants were 279 individuals from a community sample (215 females; 64 males) with an average age of 24.61 ± 5.68 years. The Eating Attitudes Test-26 and Body Image Satisfaction Scale were used to determine the convergent validity of the EDE-Q. Four-, three-, two-, and single-factor models were tested, together with a brief 8-item version of the EDE-Q. RESULTS: Confirmatory factor analysis yielded a better fit of the original four-factor model when compared to other models, although the best model-data fit was obtained when testing subscales individually with correlations between factors ranging from 0.30 to 0.99. However, item 10 had to be excluded from the shape concern subscale to reach an acceptable fit. Correlation analyses showed that the EDE-Q has good convergent validity, but additional calculations discovered its tendency to overestimate ED symptomatology. CONCLUSIONS: This study is the first to show satisfactory psychometric properties of the Croatian version of the EDE-Q with minor modifications of the original questionnaire. The Croatian translation and validation of the EDE-Q enables researchers and clinicians in Croatia to employ the most widely and commonly used instrument for the assessment of core ED features. LEVEL OF EVIDENCE: Descriptive cross-sectional study, Level V.


Subject(s)
Feeding and Eating Disorders, Adolescent, Adult, Croatia, Cross-Sectional Studies, Feeding and Eating Disorders/diagnosis, Female, Humans, Male, Psychometrics, Reproducibility of Results, Surveys and Questionnaires, Young Adult
17.
Appl Psychol Meas ; 44(4): 282-295, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32536730

ABSTRACT

This study examined whether cutoffs in fit indices suggested for traditional formats with maximum likelihood estimators can be utilized to assess model fit and to test measurement invariance when a multiple group confirmatory factor analysis was employed for the Thurstonian item response theory (IRT) model. Regarding the performance of the evaluation criteria, detection of measurement non-invariance and Type I error rates were examined. The impact of measurement non-invariance on estimated scores in the Thurstonian IRT model was also examined through accuracy and efficiency in score estimation. The fit indices used for the evaluation of model fit performed well. Among six cutoffs for changes in model fit indices, only ΔCFI > .01 and ΔNCI > .02 detected metric non-invariance when the medium magnitude of non-invariance occurred and none of the cutoffs performed well to detect scalar non-invariance. Based on the generated sampling distributions of fit index differences, this study suggested ΔCFI > .001 and ΔNCI > .004 for scalar non-invariance and ΔCFI > .007 for metric non-invariance. Considering Type I error rate control and detection rates of measurement non-invariance, ΔCFI was recommended for measurement non-invariance tests for forced-choice format data. Challenges in measurement non-invariance tests in the Thurstonian IRT model were discussed along with the direction for future research to enhance the utility of forced-choice formats in test development for cross-cultural and international settings.

18.
Educ Psychol Meas ; 80(3): 421-445, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32425213

ABSTRACT

We examined the effect of estimation methods, maximum likelihood (ML), unweighted least squares (ULS), and diagonally weighted least squares (DWLS), on three population SEM (structural equation modeling) fit indices: the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the standardized root mean square residual (SRMR). We considered different types and levels of misspecification in factor analysis models: misspecified dimensionality, omitting cross-loadings, and ignoring residual correlations. Estimation methods had substantial impacts on the RMSEA and CFI so that different cutoff values need to be employed for different estimators. In contrast, SRMR is robust to the method used to estimate the model parameters. The same criterion can be applied at the population level when using the SRMR to evaluate model fit, regardless of the choice of estimation method.

19.
Educ Psychol Meas ; 79(6): 1017-1037, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31619838

ABSTRACT

Factor score regression (FSR) is a popular alternative for structural equation modeling. Naively applying FSR induces bias for the estimators of the regression coefficients. Croon proposed a method to correct for this bias. Next to estimating effects without bias, interest often lies in inference of regression coefficients or in the fit of the model. In this article, we propose fit indices for FSR that can be used to inspect the model fit. We also introduce a model comparison test based on one of these newly proposed fit indices that can be used for inference of the estimators on the regression coefficients. In a simulation study we compare FSR with Croon's corrections and structural equation modeling in terms of bias of the regression coefficients, Type I error rate and power.

20.
Interacciones ; 5(3): 7, September 1, 2019.
Article in Spanish | LILACS | ID: biblio-1049656

ABSTRACT



Introduction: The inclusion of correlations between residuals in measurement models is a common practice in psychometric research and is predominantly oriented toward the statistical improvement of the model through the increase (e.g., CFI) or decrease (e.g., RMSEA) of certain fit indices, rather than toward understanding the nature of these associations. This methodological report aims to present the modeling, handling, and interpretation of correlated residuals within a framework of confirmatory factor analysis and misspecification. Method: Data were used from a previously published study of 521 psychology students at a private university in Metropolitan Lima (75.8% women). The Flourishing Scale was used to perform the analyses. Results and Discussion: These specifications would have no real impact on the relationship between the items and the construct they measure, and thus would not contribute substantially to the understanding of the model. Therefore, specifying correlations between residuals could mask a poorly specified model, or one with internal weaknesses, through a spurious increase in the fit indices.
