Results 1 - 3 of 3
1.
Sensors (Basel); 23(19), 2023 Oct 02.
Article in English | MEDLINE | ID: mdl-37837049

ABSTRACT

Flat foot is a postural deformity in which the plantar surface of the foot is in complete or partial contact with the ground. In recent clinical practice, X-ray radiographs have been adopted to detect flat feet because they are more affordable for many clinics than specialized devices. This research aims to develop an automated model that detects flat-foot cases and their severity levels from lateral foot X-ray images by measuring three foot angles: the Arch Angle, Meary's Angle, and the Calcaneal Inclination Angle. Since these angles are formed by connecting a set of points on the image, template matching is used to propose a set of candidate points for each angle, and a classifier then selects the points with the highest predicted likelihood of being correct. Inspired by the literature, this research constructed and compared two models: a Convolutional Neural Network-based model and a Random Forest-based model. These models were trained on 8000 images and tested on 240 unseen cases. The highest overall accuracy rate, 93.13%, was achieved by the Random Forest model, with mean values across all foot types (normal foot, mild flat foot, and moderate flat foot) of 93.38% precision, 92.56% recall, 96.46% specificity, 95.42% accuracy, and 92.90% F-score. The main conclusions drawn from this research are: (1) using transfer learning (VGG-16) as a feature extractor only, together with image augmentation, greatly increased the overall accuracy rate; (2) relying on three different foot angles yields more accurate estimations than measuring a single foot angle.
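Each of the three clinical angles above is defined by a small set of anatomical landmark points, so once the landmarks have been predicted, computing the angle itself is elementary plane geometry. A minimal sketch of that final step, assuming landmarks are given as (x, y) pixel coordinates (the function name and the example coordinates are illustrative, not from the paper):

```python
import math

def angle_between(p1, vertex, p2):
    """Angle in degrees at `vertex` between the rays toward p1 and p2.

    Illustrative only: the paper's landmarks come from template matching
    plus a classifier; here they are plain (x, y) tuples.
    """
    a1 = math.atan2(p1[1] - vertex[1], p1[0] - vertex[0])
    a2 = math.atan2(p2[1] - vertex[1], p2[0] - vertex[0])
    deg = abs(math.degrees(a1 - a2)) % 360.0
    # Always report the smaller of the two angles at the vertex.
    return min(deg, 360.0 - deg)

# Example with synthetic coordinates: a right angle at the origin.
print(angle_between((1, 0), (0, 0), (0, 1)))  # 90.0
```

Severity grading would then reduce to comparing the computed angle against clinical thresholds for normal, mild, and moderate flat foot.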


Subject(s)
Calcaneus, Flatfoot, Humans, Flatfoot/diagnostic imaging, Foot/diagnostic imaging, Radiography
2.
Sensors (Basel); 23(12), 2023 Jun 16.
Article in English | MEDLINE | ID: mdl-37420812

ABSTRACT

Early diagnosis of mild cognitive impairment (MCI) with magnetic resonance imaging (MRI) has been shown to positively affect patients' lives. To save the time and costs associated with clinical investigation, deep learning approaches have been widely used to predict MCI. This study proposes optimized deep learning models for differentiating between MCI and normal control samples. In previous studies, the hippocampus region of the brain has been used extensively to diagnose MCI. The entorhinal cortex is a promising area for diagnosing MCI because it exhibits severe atrophy early in the disease, before the hippocampus shrinks. Due to the small size of the entorhinal cortex relative to the hippocampus, limited research has been conducted on this brain region for predicting MCI. This study involves the construction of a dataset containing only the entorhinal cortex area to implement the classification system. To extract the features of the entorhinal cortex area, three different neural network architectures are optimized independently: VGG16, Inception-V3, and ResNet50. The best outcomes were achieved using a convolutional neural network classifier with the Inception-V3 architecture for feature extraction, with accuracy, sensitivity, specificity, and area-under-the-curve scores of 70%, 90%, 54%, and 69%, respectively. Furthermore, the model has an acceptable balance between precision and recall, achieving an F1 score of 73%. The results of this study validate the effectiveness of our approach in predicting MCI and may contribute to diagnosing MCI through MRI.
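The metrics reported above (accuracy, sensitivity, specificity, F1) all follow the standard confusion-matrix definitions for a binary MCI-vs-control classifier. A minimal sketch of those formulas, with illustrative counts rather than the paper's actual test-set tallies:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts.

    tp/fp/tn/fn are counts of true/false positives and negatives.
    The example counts below are made up for illustration.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "accuracy": accuracy, "f1": f1}

m = binary_metrics(tp=90, fp=10, tn=80, fn=20)
print(round(m["accuracy"], 2))  # 0.85
```

The reported gap between sensitivity (90%) and specificity (54%) indicates the model catches most MCI cases at the cost of more false positives among controls; these formulas make that trade-off explicit.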


Subject(s)
Alzheimer Disease, Cognitive Dysfunction, Deep Learning, Humans, Alzheimer Disease/pathology, Cognitive Dysfunction/diagnostic imaging, Magnetic Resonance Imaging/methods, Entorhinal Cortex/diagnostic imaging, Entorhinal Cortex/pathology
3.
Sensors (Basel); 22(17), 2022 Aug 31.
Article in English | MEDLINE | ID: mdl-36081048

ABSTRACT

Epilepsy is a nervous system disorder. Electroencephalography (EEG) is a widely used clinical approach for recording electrical activity in the brain. Although a number of datasets are available, most of them are imbalanced because they contain far fewer epileptic EEG signals than non-epileptic ones. This research studies the possibility of integrating local EEG signals from an epilepsy center at King Abdulaziz University Hospital into the CHB-MIT dataset by applying a new compatibility framework for data integration. The framework comprises multiple functions, including dominant channel selection followed by a novel algorithm for reading XLtek EEG data. The resulting integrated datasets, which contain selected channels, are tested and evaluated using a deep learning model combining a 1D-CNN, Bi-LSTM, and attention layers. The model achieved up to 96.87% accuracy, 96.98% precision, and 96.85% sensitivity, outperforming recent systems that use a larger number of EEG channels.
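Integrating a local recording into CHB-MIT requires, among other steps, reducing each recording to the channels shared across both sources before the signals can be stacked into one dataset. A minimal sketch of that channel-selection step, assuming recordings are dictionaries mapping channel names to sample sequences (the function, channel names, and data are hypothetical; the paper's actual dominant-channel criterion is not described in the abstract):

```python
def select_channels(recording, channels):
    """Return only the requested channels from an EEG recording.

    `recording` maps channel names (e.g. bipolar pairs like 'FP1-F7')
    to lists of samples. Raises KeyError if a requested channel is
    absent, so incompatible recordings fail loudly during integration.
    """
    missing = [ch for ch in channels if ch not in recording]
    if missing:
        raise KeyError(f"channels not present in recording: {missing}")
    return {ch: recording[ch] for ch in channels}

# Synthetic example: keep two of three channels.
rec = {"FP1-F7": [0.1, 0.2], "F7-T7": [0.3, 0.4], "T7-P7": [0.5, 0.6]}
subset = select_channels(rec, ["FP1-F7", "T7-P7"])
print(sorted(subset))  # ['FP1-F7', 'T7-P7']
```

Applying the same selection to every recording in both sources yields datasets with a common channel layout, which is the precondition for training a single model across them.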


Subject(s)
Electroencephalography, Epilepsy, Algorithms, Brain, Electroencephalography/methods, Epilepsy/diagnosis, Humans, Seizures/diagnosis, Signal Processing, Computer-Assisted