Results 1 - 3 of 3
1.
Dentomaxillofac Radiol; 53(1): 32-42, 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38214940

ABSTRACT

OBJECTIVES: To assess the accuracy of computer-assisted periodontal bone loss staging using deep learning (DL) methods on panoramic radiographs and to compare the performance of various models and layers. METHODS: Panoramic radiographs were diagnosed, classified into 3 groups ("healthy," "Stage1/2," and "Stage3/4"), and stored in separate folders. In the feature extraction stage, the feature extraction layers and weights of 3 models originally proposed for classifying the ImageNet dataset (ResNet50, DenseNet121, and InceptionV3) were transferred and retrained in 3 DL models designed to classify periodontal bone loss. The features obtained from the global average pooling (GAP), global max pooling (GMP), or flatten layers (FL) of these convolutional neural network (CNN) models were used as input to 8 different machine learning (ML) models. In addition, the features obtained from the GAP, GMP, or FL of the DL models were reduced using the minimum redundancy maximum relevance (mRMR) method and classified again with the 8 ML models. RESULTS: A total of 2533 panoramic radiographs were included in the dataset: 721 in the healthy group, 842 in the Stage1/2 group, and 970 in the Stage3/4 group. Averaged over 10 subdatasets, the ML models built on DenseNet121 + GAP features and on DenseNet121 + GAP + mRMR features (the 2 feature selection pipelines) outperformed the CNN models. CONCLUSIONS: The new DenseNet121 + GAP + mRMR-based support vector machine model developed in this study achieved higher performance in periodontal bone loss classification than other models in the literature by extracting effective features from raw images without the need for manual feature selection.
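
For illustration, a pipeline of this kind can be sketched as follows, assuming TensorFlow/Keras and scikit-learn. The image size, number of selected features, and SVM settings are placeholders rather than the authors' settings, and the mRMR step is approximated here with mutual-information-based ranking, since mRMR itself is not part of scikit-learn.

```python
# Sketch: DenseNet121 + GAP feature extraction, feature selection, and an SVM
# classifier for 3-class periodontal bone loss staging. Shapes and hyperparameters
# are illustrative assumptions; the paper uses mRMR, approximated here by
# mutual-information ranking (SelectKBest).
import numpy as np
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.applications.densenet import preprocess_input
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def extract_gap_features(images: np.ndarray) -> np.ndarray:
    """Return 1024-D DenseNet121 GAP features for (N, 224, 224, 3) images."""
    backbone = DenseNet121(weights="imagenet", include_top=False, pooling="avg")
    return backbone.predict(preprocess_input(images.astype("float32")), verbose=0)

def fit_stage_classifier(X_img: np.ndarray, y: np.ndarray, n_features: int = 100):
    """X_img: panoramic radiographs resized to 224x224 RGB (hypothetical array);
    y: labels 0 = healthy, 1 = Stage1/2, 2 = Stage3/4."""
    feats = extract_gap_features(X_img)
    clf = make_pipeline(
        StandardScaler(),
        SelectKBest(mutual_info_classif, k=n_features),  # stand-in for mRMR
        SVC(kernel="rbf", C=1.0),
    )
    clf.fit(feats, y)
    return clf
```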


Subject(s)
Alveolar Bone Loss; Deep Learning; Humans; Alveolar Bone Loss/diagnostic imaging; Neural Networks, Computer; Radiography, Panoramic
2.
J Biomed Inform; 141: 104357, 2023 May.
Article in English | MEDLINE | ID: mdl-37031755

ABSTRACT

The degree of motor impairment and the profile of recovery after stroke are difficult to predict for each individual. Measures obtained from clinical assessments, as well as from neurophysiological and neuroimaging techniques, have been used as potential biomarkers of motor recovery, with limited accuracy to date. To address this, the present study aimed to develop a deep learning (DL) model based on structural brain images obtained from stroke participants and healthy volunteers. The following inputs were used in a multi-channel 3D convolutional neural network (CNN) model: fractional anisotropy, mean diffusivity, radial diffusivity, and axial diffusivity maps obtained from Diffusion Tensor Imaging (DTI), white and gray matter intensity values obtained from Magnetic Resonance Imaging, and demographic data (e.g., age, gender). Upper limb motor function was classified into "Poor" and "Good" categories. To assess the performance of the DL model, we compared it to more standard machine learning (ML) classifiers, including k-nearest neighbors, support vector machines (SVM), Decision Trees, Random Forests, AdaBoost, and Naïve Bayes, whose inputs were the features taken from the fully connected layer of the CNN model. The highest accuracy and area under the curve values were 0.92 and 0.92 for the 3D-CNN and 0.91 and 0.91 for the SVM, respectively. The multi-channel 3D-CNN with residual blocks and the DL-supported SVM were more accurate than traditional ML methods at classifying upper limb motor impairment in the stroke population. These results suggest that combining volumetric DTI maps with measures of white and gray matter integrity can improve prediction of the degree of motor impairment after stroke. Identifying the potential for recovery early after a stroke could promote the allocation of resources to optimize the functional independence and quality of life of these individuals.
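
A minimal Keras sketch of this kind of multi-channel 3D CNN follows, assuming the FA, MD, RD, AD, white matter, and gray matter volumes are resampled to a common grid and stacked as 6 channels, with a small demographic vector as a second input. The volume size, filter counts, and number of residual blocks are illustrative assumptions, not the paper's architecture.

```python
# Sketch: multi-channel 3D CNN with residual blocks combining DTI-derived maps
# (FA, MD, RD, AD) and WM/GM intensity volumes with demographic data for a binary
# "Poor"/"Good" upper-limb outcome. Shapes and filter counts are assumptions.
from tensorflow.keras import Input, Model, layers

def residual_block_3d(x, filters):
    shortcut = layers.Conv3D(filters, 1, padding="same")(x)  # match channel count
    y = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Conv3D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])
    return layers.Activation("relu")(y)

def build_model(vol_shape=(64, 64, 64, 6), n_demographics=2):
    vol_in = Input(shape=vol_shape, name="volumes")                 # 6 stacked channels
    demo_in = Input(shape=(n_demographics,), name="demographics")   # e.g., age, gender

    x = layers.Conv3D(16, 3, padding="same", activation="relu")(vol_in)
    for filters in (16, 32, 64):
        x = residual_block_3d(x, filters)
        x = layers.MaxPooling3D(pool_size=2)(x)
    x = layers.GlobalAveragePooling3D()(x)

    x = layers.Concatenate()([x, demo_in])
    x = layers.Dense(64, activation="relu")(x)        # fully connected features
    out = layers.Dense(1, activation="sigmoid", name="good_vs_poor")(x)

    model = Model([vol_in, demo_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", "AUC"])
    return model
```

The features from the penultimate dense layer of such a model could then be fed to conventional ML classifiers (e.g., an SVM), mirroring the comparison described in the abstract.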


Subject(s)
Deep Learning; Stroke; Humans; Diffusion Tensor Imaging/methods; Bayes Theorem; Quality of Life; Neuroimaging/methods; Stroke/diagnostic imaging
3.
Eur J Endocrinol; 188(1), 2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36747333

ABSTRACT

OBJECTIVE: Despite improvements in diagnostic methods, acromegaly is still a late-diagnosed disease. This study aimed to automatically recognize acromegaly from facial images using deep learning methods and thereby facilitate detection of the disease. DESIGN: Cross-sectional, single-centre study. METHODS: The study included 77 patients with acromegaly (age 52.56 ± 11.74 years; 34 males/43 females) and 71 healthy controls (age 48.47 ± 8.91 years; 39 males/32 females), selected for gender and age compatibility. At the time of photography, 56/77 (73%) of the acromegaly patients were in remission. Normalized images were obtained by scaling, aligning, and cropping video frames. Three architectures, ResNet50, DenseNet121, and InceptionV3, were used for the transfer learning-based convolutional neural network (CNN) models developed to classify face images as "Healthy" or "Acromegaly". Additionally, we trained and integrated these CNN models to create an Ensemble Method (EM) for facial detection of acromegaly. RESULTS: The positive predictive values obtained for acromegaly with ResNet50, DenseNet121, InceptionV3, and the EM were 0.958, 0.965, 0.962, and 0.997, respectively. The average sensitivity, specificity, precision, and correlation coefficient values were similar across the ResNet50, DenseNet121, and InceptionV3 models. The EM, however, outperformed these three CNN architectures and provided the best overall performance, with sensitivity, specificity, accuracy, and precision of 0.997, 0.997, 0.997, and 0.998, respectively. CONCLUSIONS: The present study provided evidence that the proposed AcroEnsemble Model might detect acromegaly from facial images with high performance, highlighting that artificial intelligence programs are promising methods for detecting acromegaly in the future.
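
The exact ensemble integration is not described in this abstract; the sketch below shows one plausible variant, assuming a frozen ImageNet backbone with a binary head for each of the three architectures and a simple average of their predicted probabilities. Input size, training details, and the averaging rule are assumptions, not the authors' AcroEnsemble implementation.

```python
# Sketch: ensemble of three transfer-learning CNNs (ResNet50, DenseNet121,
# InceptionV3) for "Healthy" vs "Acromegaly" face-image classification, combined
# by averaging predicted probabilities. Each backbone normally pairs with its own
# preprocess_input; preprocessing is omitted here for brevity.
import numpy as np
from tensorflow.keras import Input, Model, layers
from tensorflow.keras.applications import DenseNet121, InceptionV3, ResNet50

def build_branch(backbone_cls, input_shape=(224, 224, 3)):
    base = backbone_cls(weights="imagenet", include_top=False,
                        pooling="avg", input_shape=input_shape)
    base.trainable = False                  # transfer learning: freeze ImageNet weights
    inp = Input(shape=input_shape)
    x = base(inp)
    out = layers.Dense(1, activation="sigmoid")(x)   # P(acromegaly)
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Each branch is trained separately on the face images before ensembling.
branches = [build_branch(cls) for cls in (ResNet50, DenseNet121, InceptionV3)]

def ensemble_predict(models, x_faces: np.ndarray) -> np.ndarray:
    """Average per-model probabilities; label 1 = acromegaly."""
    probs = np.mean([m.predict(x_faces, verbose=0) for m in models], axis=0)
    return (probs >= 0.5).astype(int)
```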


Subject(s)
Acromegaly; Artificial Intelligence; Female; Male; Humans; Cross-Sectional Studies; Neural Networks, Computer; Machine Learning; Acromegaly/diagnostic imaging