Results 1 - 5 of 5
1.
Article in English | MEDLINE | ID: mdl-39255074

ABSTRACT

Cardiovascular diseases (CVDs) are a leading cause of mortality worldwide, responsible for 32% of all deaths, with the annual death toll projected to reach 23.3 million by 2030. The early identification of individuals at high risk of CVD is crucial for the effectiveness of preventive strategies. In the field of deep learning, automated CVD-detection methods have gained traction, with phonocardiogram (PCG) data emerging as a valuable resource. However, deep-learning models rely on large datasets, which are often challenging to obtain. In recent years, data augmentation has become a viable solution to the problem of scarce data. In this paper, we propose a novel data-augmentation technique named PCGmix, specifically engineered for the augmentation of PCG data. The PCGmix algorithm segments and reassembles PCG recordings, using careful interpolation to preserve the cardinal diagnostic features pertinent to CVD detection. The PCGmix method was empirically assessed on a publicly available database of normal and abnormal heart-sound recordings. To evaluate the impact of data augmentation across a range of dataset sizes, we conducted experiments on both limited and extensive amounts of training data. The experimental results demonstrate that the novel method is superior to the state-of-the-art time-series augmentation methods it was compared against. Notably, on limited data, our method achieves accuracy comparable to that of the no-augmentation approach trained on 31% to 69% more data. This study suggests that PCGmix can enhance the accuracy of deep-learning models for CVD detection, especially in data-constrained environments.
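The abstract describes PCGmix only at a high level. A minimal, hypothetical sketch of the segment-and-reassemble idea with interpolated junctions follows, assuming each recording is already segmented at heart-sound boundaries; the function names and the linear crossfade scheme are illustrative choices, not the published algorithm.

```python
import random

def crossfade(prefix, segment, fade_len):
    """Join two signal chunks, linearly interpolating over the last
    fade_len samples of the prefix and the first fade_len samples of
    the segment, so the stitch introduces no discontinuity."""
    if fade_len == 0 or not prefix:
        return list(prefix) + list(segment)
    head, tail = prefix[:-fade_len], prefix[-fade_len:]
    blend = [(1 - i / fade_len) * tail[i] + (i / fade_len) * segment[i]
             for i in range(fade_len)]
    return head + blend + list(segment[fade_len:])

def pcg_mix(segments_a, segments_b, fade_len=4, seed=0):
    """Build an augmented recording by drawing each corresponding
    segment (e.g. S1, systole, S2, diastole) from one of two source
    recordings at random, smoothing every junction."""
    rng = random.Random(seed)
    out = []
    for seg_a, seg_b in zip(segments_a, segments_b):
        out = crossfade(out, seg_a if rng.random() < 0.5 else seg_b, fade_len)
    return out
```

Each crossfaded junction consumes fade_len samples, so the mixed recording is slightly shorter than its sources; real PCG augmentation would also need to respect segment alignment across heartbeats.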

2.
Healthcare (Basel) ; 11(12)2023 Jun 09.
Article in English | MEDLINE | ID: mdl-37372812

ABSTRACT

Postpartum anemia is a very common maternal health problem and remains a persistent public health issue globally. It negatively affects maternal mood and can lead to depression, increased fatigue, and decreased cognitive abilities. It can and should be treated by restoring iron stores. However, in most health systems, there is typically a six-week gap between birth and the follow-up postpartum visit. Risks of postpartum maternal complications are usually assessed shortly after birth by clinicians intuitively, taking into account psychosocial and physical factors, such as the presence of anemia and the type of iron supplementation. In this paper, we investigate the possibility of using machine-learning algorithms to more reliably forecast three parameters related to patient wellbeing, namely depression (measured by the Edinburgh Postnatal Depression Scale, EPDS), overall tiredness, and physical tiredness (both measured by the Multidimensional Fatigue Inventory, MFI). Data from 261 patients were used to train the forecasting models for each of the three parameters, and they outperformed the baseline models that always predicted the mean values of the training data. The mean absolute error of the elastic net regression model for predicting the EPDS score (with values ranging from 0 to 19) was 2.3 and outperformed the baseline, which already hints at the clinical usefulness of such a model. We further investigated which features are the most important for this prediction; the EPDS score and both tiredness indexes at birth turned out to be by far the most prominent predictive features. Our study indicates that the machine-learning approach has the potential for use in clinical practice to predict the onset of depression and severe fatigue in anemic patients postpartum, and to improve the detection and management of postpartum depression and fatigue.
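The comparison against a mean-predicting baseline can be made concrete. A small sketch follows; the function names are illustrative and the numbers in the usage note are toy values, not the study's data.

```python
def mean_absolute_error(y_true, y_pred):
    """Average absolute deviation between targets and predictions."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def baseline_predict(y_train, n_test):
    """The baseline model ignores all features and always predicts
    the mean target (e.g. EPDS score) seen in the training data."""
    mean = sum(y_train) / len(y_train)
    return [mean] * n_test
```

A forecasting model is only clinically interesting if its mean absolute error on held-out patients is lower than this baseline's: for example, with training targets [4, 6, 8, 10] the baseline always predicts 7.0, and a model must beat the resulting baseline error on the test set.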

3.
Front Public Health ; 11: 1073581, 2023.
Article in English | MEDLINE | ID: mdl-36860399

ABSTRACT

One key task in the early fight against the COVID-19 pandemic was to plan non-pharmaceutical interventions that reduce the spread of the infection while limiting the burden on society and the economy. With more data on the pandemic being generated, it became possible to model both the infection trends and intervention costs, transforming the creation of an intervention plan into a computational optimization problem. This paper proposes a framework developed to help policy-makers plan the best combination of non-pharmaceutical interventions and adapt them over time. We developed a hybrid machine-learning epidemiological model to forecast the infection trends, aggregated the socio-economic costs from the literature and expert knowledge, and used a multi-objective optimization algorithm to find and evaluate various intervention plans. The framework is modular and easily adjustable to a real-world situation. It is trained and tested on data collected from almost all countries in the world, and its proposed intervention plans generally outperform those used in real life in terms of both the number of infections and intervention costs.
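The multi-objective step can be illustrated with a toy Pareto filter over candidate plans scored on two minimized objectives, predicted infections and socio-economic cost. The actual framework's algorithm and cost model are richer; this sketch and its names are our own.

```python
def dominates(p, q):
    """Plan p dominates plan q if p is no worse on every objective
    and strictly better on at least one (all objectives minimized)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(plans):
    """Keep only the non-dominated (infections, cost) trade-offs;
    a policy-maker then picks one point from this front."""
    return [p for p in plans if not any(dominates(q, p) for q in plans)]
```

Note that a plan never dominates itself (no objective is strictly better), so the filter needs no explicit self-comparison guard.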


Subject(s)
Artificial Intelligence , COVID-19 , Humans , COVID-19/epidemiology , COVID-19/prevention & control , Pandemics , Algorithms , Machine Learning
4.
Comput Methods Programs Biomed ; 231: 107435, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36842345

ABSTRACT

BACKGROUND AND OBJECTIVE: Colorectal cancer is a major health concern. It is now the third most common cancer and the fourth leading cause of cancer mortality worldwide. The aim of this study was to evaluate the performance of machine learning algorithms for predicting the survival of colorectal cancer patients 1 to 5 years after diagnosis, and to identify the most important variables. METHODS: A sample of 1236 patients diagnosed with colorectal cancer and 118 predictor variables was used. The outcome of interest was a binary variable indicating whether the patient survived the number of years in question. Twenty predictor variables were selected by their mutual information score with the outcome. We implemented 11 machine learning algorithms and evaluated their performance with 5-by-2-fold cross-validation with stratified folds and with paired Student's t-tests. We compared the results with the Kaplan-Meier estimator and Cox's proportional hazards regression. RESULTS: Using the 20 most important predictor variables for each of the survival years, the logistic regression algorithm achieved an area under the receiver operating characteristic curve of 0.850 (0.014 SD, 0.840-0.860 95% CI) for the 1-year and 0.872 (0.014 SD, 0.861-0.882 95% CI) for the 5-year survival prediction. Using only the 5 most important predictor variables, the corresponding values were 0.793 (0.020 SD, 0.778-0.807 95% CI) and 0.794 (0.011 SD, 0.785-0.802 95% CI). The most important variables for 1-year prediction were number of R residual, M distant metastasis, overall stage, probable recurrence within 5 years, and tumour length, whereas for 5-year prediction the most important were probable recurrence within 5 years, R residual, M distant metastasis, number of positive lymph nodes, and palliative chemotherapy. Biomarkers did not appear among the top 20 most important variables. For all survival intervals, the survival probability predicted by the top model agrees with the Kaplan-Meier estimate, both within one standard deviation and within the 95% confidence interval. CONCLUSIONS: The findings suggest that machine learning algorithms can predict the survival probability of colorectal cancer patients and can be used to inform patients and assist decision-making in clinical care management. In addition, this study unveils the variables most essential for estimating short- and long-term survival among patients with colorectal cancer.
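The paired Student's t-tests over 5-by-2-fold cross-validation presumably follow the standard 5x2cv paired t-test (Dietterich, 1998); a sketch of that statistic is below, with a function name and toy numbers of our own choosing.

```python
import math

def five_by_two_cv_t(diffs):
    """5x2cv paired t statistic for comparing two classifiers.
    diffs holds five (d1, d2) pairs: the performance difference
    between the two algorithms on each fold of each of the five
    2-fold cross-validation repetitions."""
    variances = []
    for d1, d2 in diffs:
        mean = (d1 + d2) / 2
        variances.append((d1 - mean) ** 2 + (d2 - mean) ** 2)
    # The statistic is approximately t-distributed with 5 degrees
    # of freedom under the null hypothesis of equal performance.
    return diffs[0][0] / math.sqrt(sum(variances) / 5)
```

Only the first-fold difference appears in the numerator by construction of the test; the denominator pools the per-repetition variances.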


Subject(s)
Artificial Intelligence , Colorectal Neoplasms , Humans , Algorithms , Machine Learning , ROC Curve , Colorectal Neoplasms/pathology , Retrospective Studies
5.
Front Cardiovasc Med ; 9: 1009821, 2022.
Article in English | MEDLINE | ID: mdl-36457810

ABSTRACT

Decompensation episodes in chronic heart failure patients frequently result in unplanned outpatient or emergency room visits, or even hospitalizations. Early detection of these episodes in their pre-symptomatic phase would likely enable clinicians to manage this patient cohort with appropriate modification of medical therapy, which would in turn prevent the development of more severe heart failure decompensation and thus avoid the need for heart failure-related hospitalizations. Currently, clinicians recognize worsening heart failure through characteristic changes of heart failure-related symptoms and signs, including changes in heart sounds. The latter have proven largely unreliable, as their interpretation is highly subjective and dependent on the clinicians' skills and preferences. Previous studies have indicated that artificial intelligence algorithms are promising in distinguishing the heart sounds of heart failure patients from those of healthy individuals. In this manuscript, we focus on the analysis of heart sounds of chronic heart failure patients in their decompensated and recompensated phases. Data were recorded from 37 patients using two types of electronic stethoscopes. Using a combination of machine learning approaches, we obtained up to 72% classification accuracy between the two phases, which is better than the cardiologists' interpretation, which reached 50% accuracy. Our results demonstrate that machine learning algorithms are promising for improving the early detection of heart failure decompensation episodes.
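The abstract does not say how the machine learning approaches were combined; one common and simple option is majority voting over per-model labels, sketched here purely as an illustration (the label strings, tie-breaking rule, and function names are ours, not the study's).

```python
from collections import Counter

def majority_vote(per_model_labels):
    """Combine the labels several trained models assign to one
    recording; ties are broken in favour of 'decompensated' so a
    split vote never hides the riskier phase."""
    counts = Counter(per_model_labels)
    if counts["decompensated"] >= counts["recompensated"]:
        return "decompensated"
    return "recompensated"

def accuracy(y_true, y_pred):
    """Fraction of recordings labelled correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

With per-phase labels for each recording, the ensemble's accuracy can then be compared directly against the cardiologists' 50% benchmark reported above.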
