Results 1 - 4 of 4
1.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 1827-1833, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086628

ABSTRACT

Extravasation occurs secondary to the leakage of medication from blood vessels into the surrounding tissue during intravenous administration, resulting in significant soft tissue injury and necrosis. If treatment is delayed, invasive management such as surgical debridement, skin grafting, and even amputation may be required. Thus, it is imperative to develop a smartphone application for predicting extravasation severity from skin images. Two Deep Neural Network (DNN) architectures, U-Net and DenseNet-121, were used to segment the skin and lesion and to classify extravasation severity. Sensitivity and specificity for discriminating between asymptomatic and abnormal cases were 77.78% and 90.24%, respectively. Among the abnormal cases, mild extravasation attained the highest F1-score (0.8049), followed by severe extravasation (0.6429) and moderate extravasation (0.6250). The F1-score for moderate-to-severe extravasation classification can be improved by applying our proposed rule-based multi-class classification. These findings demonstrate a novel and feasible DNN approach for screening extravasation from skin images. The implementation of DNN-based applications on mobile devices has strong potential for clinical use in low-resource countries. Clinical relevance - The application can serve as a valuable tool for monitoring extravasation during intravenous administration. It can also help in the scheduling process across worksites to reduce the risks associated with working shifts.
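The abstract gives no implementation details, so the following is a minimal PyTorch sketch of the two-stage pipeline it describes: a U-Net for skin/lesion segmentation feeding a DenseNet-121 severity classifier. The encoder choice, class counts, and preprocessing are illustrative assumptions, not the authors' specification.

```python
# Hypothetical sketch, not the authors' code: U-Net segments skin/lesion,
# DenseNet-121 grades severity. Class counts and encoder are assumptions.
import torch
import torch.nn as nn
import segmentation_models_pytorch as smp  # third-party U-Net implementation
from torchvision import models

# Stage 1: U-Net labels each pixel as background, skin, or lesion.
segmenter = smp.Unet(encoder_name="resnet34", in_channels=3, classes=3)

# Stage 2: DenseNet-121 grades severity (asymptomatic/mild/moderate/severe).
classifier = models.densenet121(weights="IMAGENET1K_V1")
classifier.classifier = nn.Linear(classifier.classifier.in_features, 4)

@torch.no_grad()
def predict(image: torch.Tensor):
    """image: (1, 3, H, W) tensor normalized with ImageNet statistics."""
    segmenter.eval()
    classifier.eval()
    mask = segmenter(image).argmax(dim=1)        # per-pixel class labels
    severity = classifier(image).softmax(dim=1)  # severity probabilities
    return mask, severity
```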


Subject(s)
Neural Networks, Computer; Skin Diseases; Humans; Research; Sensitivity and Specificity; Skin/diagnostic imaging
2.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 1229-1233, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33018209

ABSTRACT

AIChest4All is the name of the model used to label and screen for diseases in our area of focus, Thailand, including heart disease, lung cancer, and tuberculosis. It is aimed at aiding radiologists in Thailand, especially in rural areas, where there are immense staff shortages. Deep learning is used in our methodology to classify chest X-ray images from several datasets: the NIH set, which is separated into 14 observations, and the Montgomery and Shenzhen sets, which contain chest X-ray images of patients with tuberculosis, further supplemented by datasets from Udonthani Cancer Hospital and the National Chest Institute of Thailand. The images are classified into six categories: no finding, suspected active tuberculosis, suspected lung malignancy, abnormal heart and great vessels, intrathoracic abnormal findings, and extrathoracic abnormal findings. A total of 201,527 images were used. Results from testing showed that accuracy for the heart disease, lung cancer, and tuberculosis categories was 94.11%, 93.28%, and 92.32%, respectively, with sensitivities of 90.07%, 81.02%, and 82.33% and specificities of 94.65%, 94.04%, and 93.54%. In conclusion, the results have sufficient accuracy, sensitivity, and specificity to be used in practice. Currently, AIChest4All is being used to help several of Thailand's government-funded hospitals, free of charge. Clinical relevance - AIChest4All is aimed at aiding radiologists in Thailand, especially in rural areas, where there are immense staff shortages. It is being used to help several of Thailand's government-funded hospitals, free of charge, to screen for heart disease, lung cancer, and tuberculosis with 94.11%, 93.28%, and 92.32% accuracy, respectively.
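The per-category figures above are one-vs-rest quantities. As a reading aid, here is a minimal sketch of how such accuracy, sensitivity, and specificity values could be computed from predicted and true labels; scikit-learn and the shorthand labels are assumptions, since the paper does not state its evaluation tooling.

```python
# Illustrative one-vs-rest evaluation; the categories follow the abstract,
# but the code itself is an assumption, not the authors' method.
import numpy as np
from sklearn.metrics import confusion_matrix

def one_vs_rest_metrics(y_true, y_pred, positive_class):
    """Accuracy, sensitivity, and specificity for one category vs. the rest."""
    t = np.asarray(y_true) == positive_class
    p = np.asarray(y_pred) == positive_class
    tn, fp, fn, tp = confusion_matrix(t, p, labels=[False, True]).ravel()
    return ((tp + tn) / (tp + tn + fp + fn),  # accuracy
            tp / (tp + fn),                   # sensitivity
            tn / (tn + fp))                   # specificity

# Example on hypothetical labels for the tuberculosis category.
acc, sens, spec = one_vs_rest_metrics(
    ["tb", "no_finding", "tb"], ["tb", "tb", "tb"], "tb")
```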


Subject(s)
Lung Neoplasms; Tuberculosis; Humans; Lung Neoplasms/diagnostic imaging; Mass Screening; Sensitivity and Specificity; Thailand
3.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 1996-2002, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33018395

ABSTRACT

This work proposes an automated algorithm for classifying retinal fundus images as cytomegalovirus retinitis (CMVR), normal, or other diseases. Adaptive wavelet packet transform (AWPT) was used to extract features. The retinal fundus images were transformed using a 4-level Haar wavelet packet (WP) transform. The first two best trees were obtained using Shannon and log-energy entropy, while the third best tree was obtained using the Daubechies-4 mother wavelet with Shannon entropy. The coefficients of each node were extracted, where the feature value of each leaf node of the best tree was the average of the WP coefficients in that node, while those of non-leaf nodes were set to zero. The feature vector was classified using an artificial neural network (ANN). The effectiveness of the algorithm was evaluated using ten-fold cross-validation over a dataset of 1,011 images (310 CMVR, 240 normal, and 461 other diseases). In testing on a dataset of 101 images (31 CMVR, 24 normal, and 46 other diseases), the AWPT-based ANN had sensitivities of 90.32%, 83.33%, and 91.30% and specificities of 95.71%, 94.81%, and 92.73%, respectively. In conclusion, the proposed algorithm has promising potential in CMVR screening, for which the AWPT-based ANN is applicable with scarce data and limited resources.
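The per-node averaging step can be sketched with PyWavelets (an assumed library; the paper does not name its tooling). This minimal version omits the best-basis search over Shannon and log-energy entropy and simply averages the coefficients of every level-4 Haar wavelet-packet node.

```python
# A minimal sketch of the feature-extraction step, assuming PyWavelets.
# Best-basis selection is omitted; all 4^4 = 256 level-4 nodes are used.
import numpy as np
import pywt

def awpt_features(gray_image: np.ndarray, level: int = 4) -> np.ndarray:
    """One feature per wavelet-packet node: the mean coefficient value."""
    wp = pywt.WaveletPacket2D(data=gray_image, wavelet="haar",
                              mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.array([node.data.mean() for node in nodes])

# The resulting 256-element vector would then feed the ANN classifier.
```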


Subject(s)
Cytomegalovirus Retinitis; Algorithms; Cytomegalovirus Retinitis/diagnosis; Fundus Oculi; Humans; Neural Networks, Computer; Wavelet Analysis
4.
Annu Int Conf IEEE Eng Med Biol Soc; 2019: 7044-7048, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31947460

ABSTRACT

This study aims to apply Mask Regional Convolutional Neural Network (Mask R-CNN) to cervical cancer screening using Pap smear histological slides. Based on our current literature review, this is the first attempt at using Mask R-CNN to detect and analyze cervical cell nuclei, screening for normal and abnormal nuclear features. The dataset consisted of liquid-based histological slides obtained from Thammasat University (TU) Hospital. The slides contained both cervical cells and various artifacts such as white blood cells, mimicking slides obtained in actual clinical settings. The proposed algorithm achieved a mean average precision (mAP) of 57.8%, an accuracy of 91.7%, a sensitivity of 91.7%, and a specificity of 91.7% per image. To evaluate the efficiency of our algorithm against a single-cell classification algorithm (Zhang et al., IEEE JBHI, vol. 21, no. 6, pp. 1633, 2017), we modified our method to also classify single cells on the TU test dataset using Mask R-CNN segmentation. The results had an accuracy of 89.8%, a sensitivity of 72.5%, and a specificity of 94.3%.
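A hedged sketch of adapting Mask R-CNN to nucleus detection, using torchvision's reference implementation rather than the authors' code; the three-class label set (background, normal nucleus, abnormal nucleus) is inferred from the abstract, not stated in it.

```python
# Hedged sketch: torchvision's Mask R-CNN with its box and mask heads
# replaced for a three-class nucleus problem. Not the authors' code.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 3  # background, normal nucleus, abnormal nucleus (assumed)

model = maskrcnn_resnet50_fpn(weights="DEFAULT")
# Swap in box and mask heads sized for our class count.
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, NUM_CLASSES)
in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, NUM_CLASSES)

model.eval()
with torch.no_grad():
    # Each prediction dict holds per-instance boxes, labels, scores, masks.
    preds = model([torch.rand(3, 512, 512)])
```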


Subject(s)
Uterine Cervical Neoplasms; Deep Learning; Early Detection of Cancer; Female; Humans; Papanicolaou Test; Vaginal Smears