Results 1 - 20 of 116
1.
Sci Rep ; 14(1): 20637, 2024 09 04.
Article in English | MEDLINE | ID: mdl-39232043

ABSTRACT

Skin cancer (SC) is an important medical condition that necessitates prompt identification to ensure timely treatment. Although visual evaluation by dermatologists is considered the most reliable method, it is subjective and laborious. Deep learning-based computer-aided diagnostic (CAD) platforms have become valuable tools for supporting dermatologists. Nevertheless, current CAD tools frequently depend on Convolutional Neural Networks (CNNs) with large numbers of deep layers and hyperparameters, single-CNN methodologies, large feature spaces, and exclusive use of spatial image information, which restricts their effectiveness. This study presents SCaLiNG, an innovative CAD tool specifically developed to address and surpass these constraints. SCaLiNG leverages a collection of three compact CNNs and Gabor Wavelets (GW) to acquire a comprehensive feature vector consisting of spatial-textural-frequency attributes. SCaLiNG captures a wide range of image details by decomposing each image into multiple directional sub-bands using GW and then training several CNNs on those sub-bands and the original image. SCaLiNG then fuses the attributes extracted from the CNNs trained on the original images and the GW-derived sub-bands; this fusion improves diagnostic accuracy through a more thorough representation of attributes. Furthermore, SCaLiNG applies a feature selection approach which further enhances the model's performance by choosing the most distinguishing features. Experimental findings indicate that SCaLiNG achieves a classification accuracy of 0.9170 in categorising SC subcategories, surpassing conventional single-CNN models. The outstanding performance of SCaLiNG underlines its ability to aid dermatologists in swiftly and precisely recognising and classifying SC, thereby enhancing patient outcomes.
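
The pipeline sketched below illustrates, under stated assumptions, the kind of spatial-textural-frequency fusion the SCaLiNG abstract describes: a dermoscopic image is decomposed into directional Gabor sub-bands, each view is passed through a compact CNN backbone, and the pooled features are concatenated. The backbone choice (MobileNetV3-Small), the Gabor parameters, and the image path are illustrative assumptions, not values reported by the paper.

```python
# Sketch only: Gabor sub-band decomposition followed by compact-CNN feature
# extraction and fusion, in the spirit of the pipeline described above.
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

def gabor_subbands(gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Filter a grayscale image with a small directional Gabor bank."""
    kernels = [cv2.getGaborKernel((21, 21), sigma=4.0, theta=t,
                                  lambd=10.0, gamma=0.5) for t in thetas]
    return [cv2.filter2D(gray, cv2.CV_32F, k) for k in kernels]

def cnn_features(image_rgb, backbone):
    """Global-average-pooled features from a compact CNN backbone."""
    prep = T.Compose([T.ToTensor(), T.Resize((224, 224), antialias=True)])
    x = prep(image_rgb).unsqueeze(0)
    with torch.no_grad():
        fmap = backbone.features(x)              # spatial feature maps
        return fmap.mean(dim=(2, 3))             # (1, C) feature vector

img = cv2.cvtColor(cv2.imread("lesion.jpg"), cv2.COLOR_BGR2RGB)  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY).astype(np.float32) / 255.0

backbone = models.mobilenet_v3_small(weights=None)  # stand-in for a "compact CNN"
# Crudely rescale filter responses so they can be re-used as image inputs.
views = [img] + [cv2.cvtColor((np.clip(sb, 0, 1) * 255).astype(np.uint8),
                              cv2.COLOR_GRAY2RGB) for sb in gabor_subbands(gray)]
fused = torch.cat([cnn_features(v, backbone) for v in views], dim=1)
print(fused.shape)  # one spatial-textural-frequency descriptor per lesion
```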


Subject(s)
Neural Networks, Computer ; Skin Neoplasms ; Humans ; Skin Neoplasms/pathology ; Deep Learning ; Diagnosis, Computer-Assisted/methods ; Image Processing, Computer-Assisted/methods ; Algorithms
3.
Curr Med Imaging ; 20(1): e15734056313837, 2024.
Article in English | MEDLINE | ID: mdl-39039669

ABSTRACT

INTRODUCTION: This study introduces SkinLiTE, a lightweight supervised contrastive learning model tailored to enhance the detection and typification of skin lesions in dermoscopic images. The core of SkinLiTE lies in its unique integration of supervised and contrastive learning approaches, which leverages labeled data to learn generalizable representations. This approach is particularly adept at handling the complexities and class imbalances inherent in skin lesion datasets. METHODS: The methodology encompasses a two-phase learning process. In the first phase, SkinLiTE utilizes an encoder network and a projection head to transform and project dermoscopic images into a feature space where contrastive loss is applied, focusing on minimizing intra-class variations while maximizing inter-class differences. The second phase freezes the encoder's weights, leveraging the learned representations for classification through a series of dense and dropout layers. The model was evaluated using three datasets from Skin Cancer ISIC 2019-2020, covering a wide range of skin conditions. RESULTS: SkinLiTE demonstrated superior performance across various metrics, including accuracy, AUC, and F1 scores, particularly when compared with traditional supervised learning models. Notably, SkinLiTE achieved an accuracy of 0.9087 using AugMix augmentation for binary classification of skin lesions. It also showed comparable results with the state-of-the-art approaches of the ISIC challenge without relying on external data, underscoring its efficacy and efficiency. The results highlight the potential of SkinLiTE as a significant step forward in the field of dermatological AI, offering a robust, efficient, and accurate tool for skin lesion detection and classification. Its lightweight architecture and ability to handle imbalanced datasets make it particularly suited for integration into Internet of Medical Things environments, paving the way for enhanced remote patient monitoring and diagnostic capabilities. CONCLUSION: This research contributes to the evolving landscape of AI in healthcare, demonstrating the impact of innovative learning methodologies in medical image analysis.
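
A minimal sketch of the two-phase scheme outlined above: phase one trains an encoder and projection head with a supervised contrastive (SupCon-style) loss, and phase two freezes the encoder and trains dense and dropout layers on top. The toy encoder, tensor sizes, and temperature are assumptions made for illustration only.

```python
# Sketch: (1) pull same-class embeddings together with a supervised contrastive
# loss, (2) freeze the encoder and train a small classification head.
import torch
import torch.nn as nn
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """SupCon-style loss: same-label pairs are positives, all others negatives."""
    z = F.normalize(z, dim=1)
    sim = z @ z.T / temperature                      # pairwise similarities
    mask_pos = (labels[:, None] == labels[None, :]).float()
    mask_pos.fill_diagonal_(0)                       # exclude self-pairs
    logits_mask = torch.ones_like(sim).fill_diagonal_(0)
    log_prob = sim - torch.log((logits_mask * sim.exp()).sum(1, keepdim=True))
    denom = mask_pos.sum(1).clamp(min=1)             # avoid division by zero
    return -((mask_pos * log_prob).sum(1) / denom).mean()

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
projection_head = nn.Linear(256, 128)                # used only in phase 1

# Phase 1: contrastive pre-training (one illustrative step on random data).
x, y = torch.randn(16, 3, 64, 64), torch.randint(0, 2, (16,))
loss = supervised_contrastive_loss(projection_head(encoder(x)), y)
loss.backward()

# Phase 2: freeze the encoder, train dense + dropout classification layers.
for p in encoder.parameters():
    p.requires_grad = False
classifier = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Dropout(0.3),
                           nn.Linear(64, 2))
ce = F.cross_entropy(classifier(encoder(x)), y)
```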


Subject(s)
Dermoscopy ; Skin Neoplasms ; Supervised Machine Learning ; Humans ; Dermoscopy/methods ; Skin Neoplasms/diagnostic imaging ; Image Interpretation, Computer-Assisted/methods ; Skin/diagnostic imaging
4.
Comput Biol Med ; 179: 108851, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39004048

ABSTRACT

In dermoscopic images, which allow visualization of surface skin structures not visible to the naked eye, lesion shape offers vital insights into skin diseases. In clinically practiced methods, asymmetric lesion shape is one of the criteria for diagnosing melanoma. Initially, we labeled data for a non-annotated dataset with symmetry information based on clinical assessments. Subsequently, we propose a supporting technique, a supervised learning image processing algorithm, to analyze the geometrical pattern of lesion shape, aiding non-experts in understanding the criteria for an asymmetric lesion. We then utilize a pre-trained convolutional neural network (CNN) to extract shape, color, and texture features from dermoscopic images for training a multiclass support vector machine (SVM) classifier, outperforming state-of-the-art methods from the literature. In the geometry-based experiment, we achieved a 99.00% detection rate for dermatological asymmetric lesions. In the CNN-based experiment, the best performance was a 94% Kappa score, 95% macro F1-score, and 97% weighted F1-score for classifying lesion shapes (asymmetric, half-symmetric, and symmetric).
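
A short sketch of the hybrid step described above, in which deep CNN features feed a multiclass SVM. The ResNet-18 backbone, the random placeholder tensors, and the three shape labels are stand-ins; the paper's actual feature set also includes geometry-derived descriptors.

```python
# Sketch: a pre-trained CNN provides deep features and a multiclass SVM performs
# the final shape classification. Backbone and placeholder data are assumptions.
import torch
import torchvision.models as models
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()       # expose the 512-d penultimate features
backbone.eval()

def deep_features(batch):               # batch: (N, 3, 224, 224) preprocessed images
    with torch.no_grad():
        return backbone(batch).numpy()

# Placeholder tensors standing in for dermoscopic images and their shape labels
# (0 = asymmetric, 1 = half-symmetric, 2 = symmetric).
images = torch.randn(30, 3, 224, 224)
labels = torch.randint(0, 3, (30,)).numpy()

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", decision_function_shape="ovo"))
svm.fit(deep_features(images), labels)
print(svm.predict(deep_features(images[:5])))
```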


Subject(s)
Melanoma ; Neural Networks, Computer ; Skin Neoplasms ; Support Vector Machine ; Humans ; Melanoma/diagnostic imaging ; Melanoma/pathology ; Melanoma/classification ; Skin Neoplasms/diagnostic imaging ; Skin Neoplasms/pathology ; Skin Neoplasms/classification ; Dermoscopy/methods ; Image Interpretation, Computer-Assisted/methods ; Algorithms ; Skin/diagnostic imaging ; Skin/pathology
5.
Diagnostics (Basel) ; 14(13)2024 Jun 24.
Article in English | MEDLINE | ID: mdl-39001229

ABSTRACT

Skin lesion classification is vital for the early detection and diagnosis of skin diseases, facilitating timely intervention and treatment. However, existing classification methods face challenges in managing complex information and long-range dependencies in dermoscopic images. Therefore, this research aims to enhance the feature representation by incorporating local, global, and hierarchical features to improve the performance of skin lesion classification. We introduce a novel dual-track deep learning (DL) model in this research for skin lesion classification. The first track utilizes a modified DenseNet-169 architecture that incorporates a Coordinate Attention Module (CoAM). The second track employs a customized convolutional neural network (CNN) comprising a Feature Pyramid Network (FPN) and Global Context Network (GCN) to capture multiscale features and global contextual information. The local features from the first track and the global features from the second track are used for precise localization and modeling of the long-range dependencies. By leveraging these architectural advancements within the DenseNet framework, the proposed neural network achieved better performance compared to previous approaches. The network was trained and validated using the HAM10000 dataset, achieving a classification accuracy of 93.2%.
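
Below is a minimal PyTorch sketch of a coordinate attention block of the kind the first track adds to DenseNet-169; the reduction ratio and feature-map sizes are assumptions, and the FPN/GCN second track is omitted for brevity.

```python
# Sketch of a coordinate attention block: pool along each spatial axis separately,
# mix with a shared 1x1 convolution, then re-weight the input feature map.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        pooled_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        pooled_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([pooled_h, pooled_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        attn_h = torch.sigmoid(self.conv_h(y_h))                     # (n, c, h, 1)
        attn_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2))) # (n, c, 1, w)
        return x * attn_h * attn_w

x = torch.randn(2, 64, 56, 56)           # a DenseNet intermediate feature map
print(CoordinateAttention(64)(x).shape)  # attention-reweighted, same shape
```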

6.
Comput Biol Med ; 178: 108798, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38925085

ABSTRACT

Skin cancer (SC) significantly impacts many individuals' health all over the globe. Hence, it is imperative to promptly identify and diagnose such conditions at their earliest stages using dermoscopic imaging. Computer-aided diagnosis (CAD) methods relying on deep learning techniques, especially convolutional neural networks (CNNs), can effectively address this issue with outstanding outcomes. Nevertheless, such black-box methodologies lead to a deficiency in confidence, as dermatologists are incapable of comprehending and verifying the predictions that were made by these models. This article presents an advanced explainable artificial intelligence (XAI)-based CAD system named "Skin-CAD" which is utilized for the classification of dermoscopic photographs of SC. The system accurately categorises the photographs into two categories, benign or malignant, and further classifies them into seven subclasses of SC. Skin-CAD employs four CNNs of different topologies and deep layers. It gathers features from a pair of deep layers of every CNN, particularly the final pooling and fully connected layers, rather than merely depending on attributes from a single deep layer. Skin-CAD applies the principal component analysis (PCA) dimensionality reduction approach to minimise the dimensions of pooling layer features. This also reduces the complexity of the training procedure compared to using deep features from a CNN that has a substantial size. Furthermore, it combines the reduced pooling features with the fully connected features of each CNN. Additionally, Skin-CAD integrates the dual-layer features of the four CNNs instead of entirely depending on the features of a single CNN architecture. In the end, it utilizes a feature selection step to determine the most important deep attributes. This helps to decrease the general size of the feature set and streamline the classification process. Predictions are analysed in more depth using the local interpretable model-agnostic explanations (LIME) approach. This method is used to create visual interpretations that align with an already existing viewpoint and adhere to recommended standards for general clarifications. Two benchmark datasets are employed to validate the efficiency of Skin-CAD: the Skin Cancer: Malignant vs. Benign and HAM10000 datasets. The maximum accuracy achieved using Skin-CAD is 97.2% and 96.5% for the Skin Cancer: Malignant vs. Benign and HAM10000 datasets, respectively. The findings of Skin-CAD demonstrate its potential to assist professional dermatologists in detecting and classifying SC precisely and quickly.
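
A compact sketch of the feature-handling idea in Skin-CAD: pooled-layer features are reduced with PCA, fused with fully connected features, and passed through a feature selection step. The backbone outputs are random placeholders, and the component and feature counts are assumptions rather than the paper's settings.

```python
# Sketch: shrink pooled features with PCA, fuse with fully-connected features,
# then keep the most discriminative columns. Placeholder arrays stand in for
# features extracted from one of the four CNNs.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

n_images = 200
pooled_feats = np.random.rand(n_images, 2048)   # e.g. final pooling layer outputs
fc_feats = np.random.rand(n_images, 1000)       # e.g. fully connected layer outputs
labels = np.random.randint(0, 7, n_images)      # seven SC subclasses

pca = PCA(n_components=100)                     # reduce the large pooled features
pooled_reduced = pca.fit_transform(pooled_feats)

# Fuse the reduced pooling features with the fully connected features; with four
# CNNs this concatenation would simply be repeated per backbone before selection.
fused = np.concatenate([pooled_reduced, fc_feats], axis=1)

selected = SelectKBest(f_classif, k=300).fit_transform(fused, labels)
print(fused.shape, selected.shape)              # (200, 1100) -> (200, 300)
```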


Subject(s)
Deep Learning ; Dermoscopy ; Skin Neoplasms ; Humans ; Skin Neoplasms/diagnostic imaging ; Skin Neoplasms/classification ; Dermoscopy/methods ; Neural Networks, Computer ; Diagnosis, Computer-Assisted/methods ; Image Interpretation, Computer-Assisted/methods
7.
Life (Basel) ; 14(6)2024 May 22.
Article in English | MEDLINE | ID: mdl-38929643

ABSTRACT

Background: The differential diagnosis of atypical melanocytic skin lesions localized on palms and soles represents a diagnostic challenge: indeed, this spectrum encompasses atypical nevi (AN) and early-stage melanomas (EM) displaying overlapping clinical and dermoscopic features. This often generates unnecessary excisions or delayed diagnosis. Investigations to date were mostly carried out in specific populations, focusing either on acrolentiginous melanomas or morphologically typical acquired nevi. Aims: To investigate the dermoscopic features of atypical melanocytic palmoplantar skin lesions (aMPPLs) as evaluated by variously skilled dermatologists and assess their concordance; to investigate the variations in dermoscopic appearance according to precise location on palms and soles; to detect the features with the strongest association with malignancy/benignity in each specific site. Methods: A dataset of 471 aMPPLs, excised on suspicion of malignancy, was collected from 10 European Centers, including a standardized dermoscopic picture (17×) and lesion/patient metadata. An anatomical classification into 17 subareas was considered, along with an anatomo-functional classification considering pressure/friction (4 macroareas). A total of 156 participants (95 with less than 5 years of experience in dermoscopy and 61 with ≥5 years) from 17 countries performed a blinded tele-dermoscopic pattern analysis over 20 cases through a specifically realized web platform. Results: A total of 37,440 dermoscopic evaluations were obtained over 94 (20%) EM and 377 (80%) AN. The areas with the highest density of EM compared to AN were the heel (40.3% EM/aMPPLs) of the sole and the "fingers area" (33% EM/aMPPLs) of the palm, both characterized by intense/chronic traumatism/friction. Globally, the recognition rates of 12 dermoscopic patterns were not statistically different between 95 dermatology residents and 61 specialists: aMPPLs in the plantar arch appeared to be the most "difficult" to diagnose, the parallel ridge pattern was poorly recognized and irregular/regular fibrillar patterns often misinterpreted. Regarding the aMPPLs of the "heel area", the parallel furrow pattern (p = 0.014) and lattice-like pattern (p = 0.001) significantly discriminated benign cases, while asymmetry of colors (p = 0.002) and regression structures (p = 0.025) discriminated malignant ones. In aMPPLs of the "plantar arch", the lattice-like pattern (p = 0.012) was significant for benignity, and asymmetry of structures, asymmetry of colors, regression structures, or blue-white veil for malignancy. In palmar lesions, no data were significant in the discrimination between malignant and benign aMPPLs. Conclusions: This study highlights that (i) the pattern analysis of aMPPLs is challenging for both experienced and novice dermoscopists; (ii) the histological distribution varies according to the anatomo-functional classification; and (iii) different dermoscopic patterns are able to discriminate malignant from benign aMPPLs within specific plantar and palmar areas.

8.
Comput Biol Med ; 178: 108758, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38905895

ABSTRACT

Melanoma, one of the deadliest types of skin cancer, accounts for thousands of fatalities globally. The bluish, blue-whitish, or blue-white veil (BWV) is a critical feature for diagnosing melanoma, yet research into detecting BWV in dermatological images is limited. This study utilizes a non-annotated skin lesion dataset, which is converted into an annotated dataset using a proposed imaging algorithm (color threshold techniques) on lesion patches based on color palettes. A Deep Convolutional Neural Network (DCNN) is designed and trained separately on three individual and combined dermoscopic datasets, using custom layers instead of standard activation function layers. The model is developed to categorize skin lesions based on the presence of BWV. The proposed DCNN demonstrates superior performance compared to the conventional BWV detection models across different datasets. The model achieves a testing accuracy of 85.71% on the augmented PH2 dataset, 95.00% on the augmented ISIC archive dataset, 95.05% on the combined augmented (PH2+ISIC archive) dataset, and 90.00% on the Derm7pt dataset. An explainable artificial intelligence (XAI) algorithm is subsequently applied to interpret the DCNN's decision-making process in BWV detection. The proposed approach, coupled with XAI, significantly improves the detection of BWV in skin lesions, outperforming existing models and providing a robust tool for early melanoma diagnosis.
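
The snippet below sketches a colour-threshold labelling step of the kind used above to turn a non-annotated dataset into BWV / non-BWV labels: it counts the pixels of a lesion patch falling inside a blue-whitish HSV range. The HSV bounds, coverage threshold, and patch path are illustrative assumptions, not the paper's palette values.

```python
# Sketch: weakly label lesion patches as BWV / non-BWV by colour thresholding.
import cv2
import numpy as np

def has_blue_white_veil(patch_bgr, coverage_threshold=0.10):
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
    # Rough blue-to-blue-grey band in OpenCV HSV (H in [0, 179]).
    lower = np.array([90, 20, 120], dtype=np.uint8)
    upper = np.array([135, 160, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    coverage = float(np.count_nonzero(mask)) / mask.size
    return coverage >= coverage_threshold, coverage

patch = cv2.imread("lesion_patch.jpg")          # placeholder path
label, cov = has_blue_white_veil(patch)
print(f"BWV={label} (coverage={cov:.2%})")      # feeds the DCNN as a weak label
```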


Subject(s)
Deep Learning ; Melanoma ; Skin Neoplasms ; Humans ; Melanoma/diagnostic imaging ; Melanoma/diagnosis ; Skin Neoplasms/diagnostic imaging ; Skin Neoplasms/pathology ; Skin Neoplasms/diagnosis ; Dermoscopy/methods ; Neural Networks, Computer ; Algorithms ; Artificial Intelligence ; Image Interpretation, Computer-Assisted/methods ; Databases, Factual ; Skin/diagnostic imaging ; Skin/pathology
9.
J Clin Med ; 13(11)2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38892988

ABSTRACT

Background: The rising incidence of Basal Cell Carcinoma (BCC), especially among individuals with significant sun exposure, underscores the need for effective and minimally invasive treatment alternatives. Traditional surgical approaches, while effective, often result in notable cosmetic and functional limitations, particularly for lesions located on the face. This study explores High-Intensity Focused Ultrasound (HIFU) as a promising, non-invasive treatment option that aims to overcome these challenges, potentially revolutionizing BCC treatment by offering a balance between efficacy and cosmetic outcomes. Methods: Our investigation enrolled 8 patients, presenting a total of 15 BCC lesions, treated with a 20 MHz HIFU device. The selection of treatment parameters was precise, utilizing probe depths from 0.8 mm to 2.3 mm and energy settings ranging from 0.7 to 1.3 Joules (J) per pulse, determined by the lesion's infiltration depth as assessed via pre-procedure ultrasonography. A key component of our methodology included dermatoscopic monitoring, which allowed for detailed observation of the lesions' response to treatment over time. Patient-reported outcomes and satisfaction levels were systematically recorded, providing insights into the comparative advantages of HIFU. Results: Initial responses after HIFU treatment included whitening and edema, indicative of successful lesion ablation. Early post-treatment observations revealed minimal discomfort and quick recovery, with crust formation resolving within two weeks for most lesions. Over a period of three to six months, patients reported significant improvement, with lesions becoming lighter and blending into the surrounding skin, demonstrating effective and aesthetically pleasing outcomes. Patient satisfaction surveys conducted six months post-treatment revealed high levels of satisfaction, with 75% of participants reporting very high satisfaction due to minimal scarring and the non-invasive nature of the procedure. No recurrences of BCC were noted, attesting to the efficacy of HIFU as a treatment option. Conclusions: The findings from this study confirm that based on dermoscopy analysis, HIFU is a highly effective and patient-preferred non-invasive treatment modality for Basal Cell Carcinoma. HIFU offers a promising alternative to traditional surgical and non-surgical treatments, reducing the cosmetic and functional repercussions associated with BCC management. Given its efficacy, safety, and favorable patient satisfaction scores, HIFU warrants further investigation and consideration for broader clinical application in the treatment of BCC, potentially setting a new standard in dermatologic oncology care. This work represents a pilot study that is the first to describe the use of HIFU in the treatment of BCC.

10.
Med Biol Eng Comput ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833025

ABSTRACT

Melanoma is an uncommon and dangerous type of skin cancer. Dermoscopic imaging aids skilled dermatologists in detection, yet the nuances between melanoma and non-melanoma conditions complicate diagnosis. Early identification of melanoma is vital for successful treatment, but manual diagnosis is time-consuming and requires a trained dermatologist. To overcome this issue, this article proposes an Optimized Attention-Induced Multihead Convolutional Neural Network with EfficientNetV2-fostered melanoma classification using dermoscopic images (AIMCNN-ENetV2-MC). The input images are taken from a dermoscopic image dataset. An Adaptive Distorted Gaussian Matched Filter (ADGMF) is used to remove noise and maximize the quality of the dermoscopic images. These pre-processed images are fed to the AIMCNN. The AIMCNN-ENetV2 classifies acral melanoma and benign nevus. A Boosted Chimp Optimization Algorithm (BCOA) optimizes the AIMCNN-ENetV2 classifier for accurate classification. The proposed AIMCNN-ENetV2-MC is implemented using Python. The proposed approach attains an outstanding overall accuracy of 98.75% and a shorter computation time of 98 s compared with existing models.

11.
JMIR Med Inform ; 12: e49613, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38904996

ABSTRACT

BACKGROUND: Dermoscopy is a growing field that uses microscopy to allow dermatologists and primary care physicians to identify skin lesions. For a given skin lesion, a wide variety of differential diagnoses exist, which may be challenging for inexperienced users to name and understand. OBJECTIVE: In this study, we describe the creation of the dermoscopy differential diagnosis explorer (D3X), an ontology linking dermoscopic patterns to differential diagnoses. METHODS: Existing ontologies that were incorporated into D3X include the elements of visuals ontology and dermoscopy elements of visuals ontology, which connect visual features to dermoscopic patterns. A list of differential diagnoses for each pattern was generated from the literature and in consultation with domain experts. Open-source images were incorporated from DermNet, Dermoscopedia, and open-access research papers. RESULTS: D3X was encoded in the OWL 2 web ontology language and includes 3041 logical axioms, 1519 classes, 103 object properties, and 20 data properties. We compared D3X with publicly available ontologies in the dermatology domain using a semiotic theory-driven metric to measure the innate qualities of D3X with others. The results indicate that D3X is adequately comparable with other ontologies of the dermatology domain. CONCLUSIONS: The D3X ontology is a resource that can link and integrate dermoscopic differential diagnoses and supplementary information with existing ontology-based resources. Future directions include developing a web application based on D3X for dermoscopy education and clinical practice.
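
As a rough illustration, the snippet below shows how an OWL 2 ontology such as D3X could be loaded and inspected with the owlready2 library; the local file path is hypothetical, since the paper does not state a distribution format, and the counts in the comments simply echo the figures reported above.

```python
# Sketch: load a local OWL 2 file and enumerate its classes and properties.
from owlready2 import get_ontology

onto = get_ontology("file:///data/d3x.owl").load()  # hypothetical local copy of D3X

print(len(list(onto.classes())))                # expect on the order of 1519 classes
print(len(list(onto.object_properties())))      # expect ~103 object properties
print(len(list(onto.data_properties())))        # expect ~20 data properties

# Walk a few classes and list their named parents, e.g. to trace a dermoscopic
# pattern up to its differential-diagnosis groupings.
for cls in list(onto.classes())[:5]:
    parents = [p.name for p in cls.is_a if hasattr(p, "name")]
    print(cls.name, parents)
```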

12.
Comput Biol Med ; 176: 108594, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38761501

ABSTRACT

Skin cancer is one of the common types of cancer. It spreads quickly and is not easy to detect in the early stages, posing a major threat to human health. In recent years, deep learning methods have attracted widespread attention for skin cancer detection in dermoscopic images. However, training a practical classifier becomes highly challenging due to inter-class similarity and intra-class variation in skin lesion images. To address these problems, we propose a multi-scale fusion structure that combines shallow and deep features for more accurate classification. Simultaneously, we implement three approaches to the problem of class imbalance: class weighting, label smoothing, and resampling. In addition, the HAM10000_RE dataset strips out hair features to demonstrate the role of hair features in the classification process. We demonstrate that the region of interest is the most critical classification feature for the HAM10000_SE dataset, which segments lesion regions. We evaluated the effectiveness of our model using the HAM10000 and ISIC2019 datasets. The results showed that this method performed well in dermoscopic classification tasks, with an ACC of 94.0% and an AUC of 99.3% on the HAM10000 dataset, and an ACC of 89.8% on the ISIC2019 dataset. The overall performance of our model is excellent in comparison to state-of-the-art models.
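
The sketch below shows the three imbalance remedies named above as they might look in PyTorch: per-class loss weights, label smoothing, and weighted resampling. The class counts (a HAM10000-like skew) and batch size are placeholder assumptions.

```python
# Sketch: class weighting + label smoothing in the loss, plus weighted resampling.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

class_counts = torch.tensor([6705., 1113., 1099., 514., 327., 142., 115.])  # HAM10000-like skew
class_weights = class_counts.sum() / (len(class_counts) * class_counts)

# 1 + 2: class weighting and label smoothing combined in the loss.
criterion = nn.CrossEntropyLoss(weight=class_weights, label_smoothing=0.1)

# 3: resampling so each batch sees minority classes more often.
labels = torch.randint(0, 7, (1000,))                    # placeholder labels
sample_weights = class_weights[labels]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels),
                                replacement=True)
dataset = TensorDataset(torch.randn(1000, 3, 224, 224), labels)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

images, y = next(iter(loader))
loss = criterion(torch.randn(32, 7, requires_grad=True), y)  # stand-in logits
loss.backward()
```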


Subject(s)
Dermoscopy ; Skin Neoplasms ; Humans ; Skin Neoplasms/diagnostic imaging ; Skin Neoplasms/pathology ; Skin Neoplasms/classification ; Dermoscopy/methods ; Deep Learning ; Image Interpretation, Computer-Assisted/methods ; Skin/diagnostic imaging ; Skin/pathology ; Databases, Factual ; Algorithms
13.
Cureus ; 16(4): e57945, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38738153

ABSTRACT

This case report presents the clinical scenario of a 50-year-old man who developed swelling and itching around both eyes after applying tropicamide eye drops for an ophthalmic examination. The swelling appeared suddenly, progressed over time, and was accompanied by redness, watery discharge, and conjunctival congestion. A dermoscopic examination revealed congestion and erythema in the affected area. Visual acuity was compromised in the left eye. Prompt identification of the eyedrops as plain tropicamide with chlorbutol as a preservative enabled timely treatment with intravenous hydrocortisone and topical steroids, resulting in symptom improvement within two days. Allergic reactions to mydriatic agents such as tropicamide are infrequent but should be considered in patients with acute ocular symptoms post-application. This case underscores the importance of recognising and managing allergic reactions to ophthalmic medications for optimal patient care.

15.
Diagnostics (Basel) ; 14(7)2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38611666

ABSTRACT

A crucial challenge in critical settings like medical diagnosis is making deep learning models used in decision-making systems interpretable. Efforts in Explainable Artificial Intelligence (XAI) are underway to address this challenge. Yet, many XAI methods are evaluated on broad classifiers and fail to address complex, real-world issues, such as medical diagnosis. In our study, we focus on enhancing user trust and confidence in automated AI decision-making systems, particularly for diagnosing skin lesions, by tailoring an XAI method to explain an AI model's ability to identify various skin lesion types. We generate explanations using synthetic images of skin lesions as examples and counterexamples, offering a method for practitioners to pinpoint the critical features influencing the classification outcome. A validation survey involving domain experts, novices, and laypersons has demonstrated that explanations increase trust and confidence in the automated decision system. Furthermore, our exploration of the model's latent space reveals clear separations among the most common skin lesion classes, a distinction that likely arises from the unique characteristics of each class and could assist in correcting frequent misdiagnoses by human professionals.
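
A small sketch of the latent-space exploration mentioned above: images are embedded with a frozen encoder and the embeddings are projected to 2-D with t-SNE to inspect class separation. The ResNet-18 encoder and the random placeholder data are assumptions; the study's own model and synthetic-example generator are not reproduced here.

```python
# Sketch: project encoder embeddings to 2-D and check per-class separation.
import torch
import torchvision.models as models
from sklearn.manifold import TSNE

encoder = models.resnet18(weights=None)
encoder.fc = torch.nn.Identity()
encoder.eval()

images = torch.randn(64, 3, 224, 224)           # stand-in for skin lesion images
labels = torch.randint(0, 7, (64,)).numpy()

with torch.no_grad():
    embeddings = encoder(images).numpy()        # (64, 512) latent vectors

coords = TSNE(n_components=2, perplexity=15, init="pca",
              random_state=0).fit_transform(embeddings)
for cls in set(labels):
    pts = coords[labels == cls]
    print(f"class {cls}: centroid {pts.mean(axis=0).round(2)}")  # crude separation check
```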

16.
Front Big Data ; 7: 1366312, 2024.
Article in English | MEDLINE | ID: mdl-38590699

ABSTRACT

Background: Melanoma is one of the deadliest skin cancers that originate from melanocytes due to sun exposure, causing mutations. Early detection boosts the cure rate to 90%, but misclassification drops survival to 15-20%. Clinical variations challenge dermatologists in distinguishing benign nevi and melanomas. Current diagnostic methods, including visual analysis and dermoscopy, have limitations, emphasizing the need for Artificial Intelligence understanding in dermatology. Objectives: In this paper, we aim to explore dermoscopic structures for the classification of melanoma lesions. The training of AI models faces a challenge known as brittleness, where small changes in input images impact the classification. A study explored AI vulnerability in discerning melanoma from benign lesions using features of size, color, and shape. Tests with artificial and natural variations revealed a notable decline in accuracy, emphasizing the necessity for additional information, such as dermoscopic structures. Methodology: The study utilizes datasets with clinically marked dermoscopic images examined by expert clinicians. Transformers and CNN-based models are employed to classify these images based on dermoscopic structures. Classification results are validated using feature visualization. To assess model susceptibility to image variations, classifiers are evaluated on test sets with original, duplicated, and digitally modified images. Additionally, testing is done on ISIC 2016 images. The study focuses on three dermoscopic structures crucial for melanoma detection: blue-white veil, dots/globules, and streaks. Results: In evaluating model performance, adding convolutions to Vision Transformers proves highly effective, achieving up to 98% accuracy. CNN architectures like VGG-16 and DenseNet-121 reach 50-60% accuracy, performing best with features other than dermoscopic structures. Vision Transformers without convolutions exhibit reduced accuracy on diverse test sets, revealing their brittleness. OpenAI CLIP, a pre-trained model, consistently performs well across various test sets. To address brittleness, a mitigation method involving extensive data augmentation during training and 23 transformed duplicates at test time sustains accuracy. Conclusions: This paper proposes a melanoma classification scheme utilizing three dermoscopic structures across the PH2 and Derm7pt datasets. The study addresses AI susceptibility to image variations. Despite the small dataset, future work includes collecting more annotated datasets and automatically computing dermoscopic structural features.
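
The test-time mitigation described above, averaging predictions over transformed duplicates of each image, might look like the following sketch; the 23-duplicate count follows the abstract, while the classifier and the particular transforms are assumptions for illustration.

```python
# Sketch: test-time augmentation by averaging softmax outputs over transformed copies.
import torch
import torchvision.transforms as T
import torchvision.models as models

model = models.resnet18(weights=None)           # stand-in for a ViT/CNN classifier
model.eval()

tta = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomRotation(15),
    T.ColorJitter(brightness=0.1, contrast=0.1),
])

def predict_with_tta(image, n_duplicates=23):
    """Average softmax outputs over randomly transformed copies of one image."""
    batch = torch.stack([tta(image) for _ in range(n_duplicates)] + [image])
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    return probs.mean(dim=0)

image = torch.rand(3, 224, 224)                 # placeholder dermoscopic image
print(predict_with_tta(image).argmax().item())
```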

17.
Sci Rep ; 14(1): 9127, 2024 04 21.
Article in English | MEDLINE | ID: mdl-38644396

ABSTRACT

Vitiligo is a hypopigmented skin disease characterized by the loss of melanin. The progressive nature and widespread incidence of vitiligo necessitate timely and accurate detection. A single diagnostic test often falls short of providing definitive confirmation of the condition, necessitating assessment by dermatologists who specialize in vitiligo. However, the current scarcity of such specialized medical professionals presents a significant challenge. To mitigate this issue and enhance diagnostic accuracy, it is essential to build deep learning models that can support and expedite the detection process. This study endeavors to establish a deep learning framework to enhance the diagnostic accuracy of vitiligo. To this end, a comparative analysis of five models, including the ResNet series (ResNet34, ResNet50, and ResNet101) and the Swin Transformer series (Swin Transformer Base and Swin Transformer Large), was conducted under uniform conditions to identify the model with superior classification capabilities. Moreover, the study sought to augment the interpretability of these models by selecting one that not only provides accurate diagnostic outcomes but also offers visual cues highlighting the regions pertinent to vitiligo. The empirical findings reveal that the Swin Transformer Large model achieved the best performance in classification, with an AUC, accuracy, sensitivity, and specificity of 0.94, 93.82%, 94.02%, and 93.5%, respectively. In terms of interpretability, the highlighted regions in the class activation map correspond to the lesion regions of the vitiligo images, which shows that it effectively indicates the specific category regions associated with the decision-making of dermatological diagnosis. Additionally, the visualization of feature maps generated in the middle layers of the deep learning model provides insights into the internal mechanisms of the model, which is valuable for improving the interpretability of the model, tuning performance, and enhancing clinical applicability. The outcomes of this study underscore the significant potential of deep learning models to revolutionize medical diagnosis by improving diagnostic accuracy and operational efficiency. The research highlights the necessity for ongoing exploration in this domain to fully leverage the capabilities of deep learning technologies in medical diagnostics.
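
A sketch of the feature-map inspection idea above: forward hooks on a Swin Transformer (created via timm) capture intermediate maps that can then be visualised. The model variant, the hooked layers, and the random input are assumptions; the class-activation-map step itself is not reproduced here.

```python
# Sketch: capture intermediate Swin feature maps with forward hooks for inspection.
import timm
import torch

model = timm.create_model("swin_base_patch4_window7_224", pretrained=False)
model.eval()

captured = {}
def save_output(name):
    def hook(_module, _inputs, output):
        captured[name] = output.detach()
    return hook

# Hook the end of each Swin stage to see how lesion regions are encoded.
for i, stage in enumerate(model.layers):
    stage.register_forward_hook(save_output(f"stage_{i}"))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))          # placeholder vitiligo image

for name, fmap in captured.items():
    print(name, tuple(fmap.shape))              # per-stage token/feature grids
```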


Subject(s)
Deep Learning ; Vitiligo ; Vitiligo/diagnosis ; Humans
18.
Sci Rep ; 14(1): 9336, 2024 04 23.
Article in English | MEDLINE | ID: mdl-38653997

ABSTRACT

Skin cancer is the most prevalent kind of cancer in people. It is estimated that more than 1 million people develop skin cancer worldwide every year. The effectiveness of the disease's therapy is significantly impacted by early identification of this illness. Preprocessing is the initial detection stage, enhancing the quality of skin images by removing undesired background noise and objects. This study aims to compile the preprocessing techniques for skin cancer imaging that are currently accessible. Researchers looking into automated skin cancer diagnosis might use this article as an excellent place to start. The fully convolutional encoder-decoder network and Sparrow search algorithm (FCEDN-SpaSA) are proposed in this study for the segmentation of dermoscopic images. The individual wolf method and the ensemble ghosting technique are integrated to generate a neighbour-based search strategy in SpaSA, maintaining the correct balance between exploration and exploitation. The classification procedure is accomplished by using an adaptive CNN technique to discriminate between normal skin and malignant skin lesions suggestive of disease. Our method provides classification accuracies comparable to commonly used incremental learning techniques while using less energy, storage space, memory access, and training time (only network updates with new training samples, no network sharing). In a simulation, the segmentation performance of the proposed technique on the ISBI 2017, ISIC 2018, and PH2 datasets reached accuracies of 95.28%, 95.89%, 92.70%, and 98.78%, respectively; classification performance assessed on the same data reached an accuracy of 91.67%. The efficiency of the suggested strategy is demonstrated through comparisons with cutting-edge methodologies.
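
A minimal fully convolutional encoder-decoder sketch in the spirit of the FCEDN segmenter described above; the depths, channel widths, and the omission of the SpaSA optimisation and adaptive-CNN classification steps are simplifying assumptions.

```python
# Sketch: a tiny fully convolutional encoder-decoder producing a lesion mask.
import torch
import torch.nn as nn

class MiniFCEDN(nn.Module):
    def __init__(self):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.enc1, self.enc2 = block(3, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(64, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(32, 32)
        self.head = nn.Conv2d(32, 1, 1)          # 1-channel lesion mask logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(self.up2(b))
        d1 = self.dec1(self.up1(d2))
        return self.head(d1)

x = torch.randn(1, 3, 256, 256)                  # placeholder dermoscopic image
print(MiniFCEDN()(x).shape)                      # (1, 1, 256, 256) mask logits
```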


Subject(s)
Algorithms ; Dermoscopy ; Neural Networks, Computer ; Skin Neoplasms ; Humans ; Skin Neoplasms/diagnosis ; Skin Neoplasms/diagnostic imaging ; Skin Neoplasms/classification ; Skin Neoplasms/pathology ; Dermoscopy/methods ; Image Processing, Computer-Assisted/methods ; Image Interpretation, Computer-Assisted/methods ; Skin/pathology ; Skin/diagnostic imaging
19.
PeerJ Comput Sci ; 10: e1953, 2024.
Article in English | MEDLINE | ID: mdl-38660169

ABSTRACT

Melanoma is the most aggressive and prevalent form of skin cancer globally, with a higher incidence in men and individuals with fair skin. Early detection of melanoma is essential for the successful treatment and prevention of metastasis. In this context, deep learning methods have emerged, distinguished by their ability to perform automated, detailed analysis and to extract melanoma-specific features. These approaches excel in performing large-scale analysis, optimizing time, and providing accurate diagnoses, contributing to timely treatments compared to conventional diagnostic methods. The present study offers a methodology to assess the effectiveness of an AlexNet-based convolutional neural network (CNN) in identifying early-stage melanomas. The model is trained on a balanced dataset of 10,605 dermoscopic images, and on modified datasets where hair, a potential obstructive factor, was detected and removed, allowing for an assessment of how hair removal affects the model's overall performance. To perform hair removal, we propose a morphological algorithm combined with different filtering techniques for comparison: Fourier, Wavelet, average blur, and low-pass filters. The model is evaluated through 10-fold cross-validation and the metrics of accuracy, recall, precision, and the F1 score. The results demonstrate that the proposed model performs best for the dataset where we implemented both a Wavelet filter and the hair removal algorithm. It has an accuracy of 91.30%, a recall of 87%, a precision of 95.19%, and an F1 score of 90.91%.
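
The hair-removal step compared above could be sketched as follows: black-hat morphological filtering highlights dark hair shafts, thresholding builds a mask, and inpainting fills the masked pixels. The kernel size, threshold, and image path are illustrative assumptions rather than the paper's exact settings, and the Fourier/Wavelet filtering variants are not shown.

```python
# Sketch: morphological hair removal (black-hat + mask + inpainting) with OpenCV.
import cv2

def remove_hair(image_bgr, kernel_size=17, threshold=10):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)  # dark, thin structures
    _, hair_mask = cv2.threshold(blackhat, threshold, 255, cv2.THRESH_BINARY)
    return cv2.inpaint(image_bgr, hair_mask, inpaintRadius=3,
                       flags=cv2.INPAINT_TELEA)

img = cv2.imread("dermoscopy.jpg")               # placeholder path
clean = remove_hair(img)
cv2.imwrite("dermoscopy_no_hair.jpg", clean)     # input to the AlexNet-based CNN
```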
