Results 1 - 3 of 3
1.
Clin Linguist Phon ; 38(2): 97-115, 2024 03.
Article in English | MEDLINE | ID: mdl-36592050

ABSTRACT

To study the possibility of using acoustic parameters, i.e., the Acoustic Voice Quality Index (AVQI) and Maximum Phonation Time (MPT), for predicting the degree of lung involvement in COVID-19 patients. This cross-sectional case-control study was conducted on voice samples collected from 163 healthy individuals and 181 patients with COVID-19. Each participant produced a sustained vowel /a/ and read a phonetically balanced Persian text containing 36 syllables. AVQI and MPT were measured using Praat scripts. Each patient underwent a non-enhanced chest computed tomographic scan, and the Total Opacity (TO) score was rated to assess the degree of lung involvement. The results revealed significant differences between patients with COVID-19 and healthy individuals in terms of AVQI and MPT. A significant difference was also observed between male and female participants in AVQI and MPT. Receiver operating characteristic curve analysis and the area under the curve indicated that MPT (0.909) had higher diagnostic accuracy than AVQI (0.771). A significant relationship was observed between AVQI and TO scores; for MPT, however, no such relationship was observed. The findings indicated that MPT was a better classifier than AVQI in differentiating patients from healthy individuals. The results also showed that AVQI can be used as a predictor of the degree of lung involvement in patients and recovered individuals. A formula is suggested for calculating the degree of lung involvement from AVQI.
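The area-under-the-curve values reported above (0.909 for MPT vs. 0.771 for AVQI) have a simple rank-based interpretation: the AUC equals the probability that a randomly chosen patient scores higher on the marker than a randomly chosen healthy control, with ties counted as half. A minimal sketch of that estimator, using invented scores rather than the study's measurements (for a marker where patients score *lower*, such as MPT, the two groups simply swap roles):

```python
def auc(patient_scores, control_scores):
    """Rank-based AUC: probability that a random patient score
    exceeds a random control score (ties count as 0.5)."""
    concordant = 0.0
    for p in patient_scores:
        for c in control_scores:
            if p > c:
                concordant += 1.0
            elif p == c:
                concordant += 0.5
    return concordant / (len(patient_scores) * len(control_scores))

# Hypothetical illustration (not the study's data):
# 8.5 of 9 patient/control pairs are concordant.
print(auc([3.0, 4.0, 5.0], [1.0, 2.0, 3.0]))
```

An AUC of 0.5 means the marker is no better than chance at separating the groups, while 1.0 means perfect separation, which is why MPT's 0.909 marks it as the stronger classifier here.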


Subject(s)
COVID-19 , Dysphonia , Humans , Male , Female , Dysphonia/diagnosis , Speech Acoustics , Case-Control Studies , Feasibility Studies , Cross-Sectional Studies , Reproducibility of Results , Severity of Illness Index , Acoustics , Tomography , Speech Production Measurement/methods
2.
J Voice ; 36(6): 879.e13-879.e19, 2022 Nov.
Article in English | MEDLINE | ID: mdl-33051108

ABSTRACT

OBJECTIVES: With the COVID-19 outbreak around the globe and its potential effect on infected patients' voices, this study set out to objectively evaluate and compare the acoustic parameters of voice between healthy and infected people. METHODS: Voice samples of 64 COVID-19 patients and 70 healthy Persian speakers producing a sustained vowel /a/ were evaluated. Between-group comparisons of the data were performed using two-way ANOVA and Wilcoxon's rank-sum test. RESULTS: The results revealed significant differences in CPP, HNR, H1H2, F0SD, jitter, shimmer, and MPT values between the COVID-19 patients and the healthy participants. There were also significant differences between the male and female participants in all the acoustic parameters except jitter, shimmer, and MPT. No interaction was observed between gender and health status in any of the acoustic parameters. CONCLUSION: The statistical analysis of the data revealed significant differences between the experimental and control groups in this study. Changes in the acoustic parameters of voice are caused by insufficient airflow and by increased aperiodicity, irregularity, signal perturbation, and noise level, which are consequences of the pulmonary and laryngological involvement in patients with COVID-19.
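Two of the parameters compared above are classic perturbation measures: local jitter is the mean absolute difference between consecutive glottal periods relative to the mean period, and local shimmer is the analogous measure over cycle peak amplitudes. A rough sketch of that shared computation (this stands in for, and is not, the Praat analysis the study used; the period values are invented):

```python
def local_perturbation(values):
    """Mean absolute consecutive difference, relative to the mean.
    Applied to period durations this is local jitter; applied to
    cycle peak amplitudes it is local shimmer."""
    if len(values) < 2:
        raise ValueError("need at least two cycles")
    diffs = [abs(b - a) for a, b in zip(values, values[1:])]
    return (sum(diffs) / len(diffs)) / (sum(values) / len(values))

# Hypothetical period sequence in seconds (roughly a 100 Hz voice):
periods = [0.0100, 0.0101, 0.0099, 0.0100]
print(f"jitter (local): {local_perturbation(periods):.2%}")
```

Higher values indicate a less periodic signal; the study's finding of elevated jitter and shimmer in patients is consistent with the irregular phonation described in the conclusion.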


Subject(s)
COVID-19 , Voice Disorders , Humans , Male , Female , Voice Quality , Speech Acoustics , COVID-19/diagnosis , Acoustics , Voice Disorders/diagnosis , Voice Disorders/etiology
3.
J Acoust Soc Am ; 150(3): 1945, 2021 09.
Article in English | MEDLINE | ID: mdl-34598596

ABSTRACT

This study aimed to develop an artificial intelligence (AI)-based tool for screening COVID-19 patients based on the acoustic parameters of their voices. Twenty-five acoustic parameters were extracted from voice samples of 203 COVID-19 patients and 171 healthy individuals who produced a sustained vowel, i.e., /a/, for as long as they could after a deep breath. The selected acoustic parameters came from different categories, including fundamental frequency and its perturbation, harmonicity, vocal tract function, airflow sufficiency, and periodicity. After feature extraction, different machine learning methods were tested. A leave-one-subject-out validation scheme was used to tune the hyper-parameters and record the test set results. The models were then compared based on their accuracy, precision, recall, and F1-score. Based on accuracy (89.71%), recall (91.63%), and F1-score (90.62%), the best model was the feedforward neural network (FFNN). Its precision (89.63%) was slightly lower than that of logistic regression (90.17%). Based on these results and the confusion matrices, the FFNN model was employed in the software. This screening tool could be used practically at home and in public places to check each individual's respiratory health. If any related abnormalities are found in the test taker's voice, the tool recommends that they seek medical consultation.


Subject(s)
Artificial Intelligence , COVID-19 , Acoustics , Humans , Neural Networks, Computer , SARS-CoV-2