1.
Sci Rep ; 14(1): 20795, 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39242659

ABSTRACT

Smart cities have developed advanced technology that improves people's lives, and their collaboration with autonomous vehicles marks a step toward a more advanced future. Cyber-physical systems (CPS) blend the cyber and physical worlds by combining electronic and mechanical systems, and autonomous vehicles (AVs) provide an ideal model of a CPS. The integration of 6G technology with AVs marks a significant advancement in Intelligent Transportation Systems (ITS), offering enhanced self-sufficiency, intelligence, and effectiveness. Autonomous vehicles rely on a complex network of sensors, cameras, and software to operate; a cyber-attack could interfere with these systems, leading to accidents, injuries, or fatalities. Because AVs are often connected to broader transportation networks and infrastructure, a successful cyber-attack could disrupt not only individual vehicles but also public transportation systems, causing widespread chaos and economic damage. AVs also communicate with other vehicles (V2V) and with infrastructure (V2I) for safe and efficient operation; if these communication channels are compromised, collisions, traffic jams, or other dangerous situations could follow. We therefore present a novel approach to mitigating these security risks by leveraging pre-trained Convolutional Neural Network (CNN) models for dynamic cyber-attack detection within the cyber-physical systems (CPS) framework of AVs. The proposed Intelligent Intrusion Detection System (IIDS) employs a combination of advanced learning techniques, including data fusion, one-class Support Vector Machine, Random Forest, and k-Nearest Neighbor, to improve detection accuracy. The study demonstrates that the EfficientNet model achieves superior performance with an accuracy of up to 99.97%, highlighting its potential to significantly enhance the security of AV networks. This research contributes to the development of intelligent cyber-security models that align with 6G standards, ultimately supporting the safe and efficient integration of AVs into smart cities.
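The abstract above describes using pre-trained CNNs (EfficientNet among them) for attack detection on image-encoded vehicle data. As a rough illustration only, the sketch below shows a generic transfer-learning setup with a frozen EfficientNet backbone in Keras; the class count, input shape, and training pipeline are assumptions, not the authors' configuration.

```python
# Minimal sketch: transfer learning with a pre-trained EfficientNet backbone for
# image-encoded traffic classification. Class count, input shape, and dataset
# pipeline are illustrative assumptions, not the paper's exact configuration.
import tensorflow as tf

NUM_CLASSES = 5          # hypothetical: e.g. normal + 4 attack types
IMG_SHAPE = (224, 224, 3)

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
base.trainable = False   # freeze pre-trained features for the first training stage

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds is a hypothetical tf.data pipeline
```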

2.
Article in English | MEDLINE | ID: mdl-39286921

ABSTRACT

Motor imagery brain-computer interface (BCI) systems are considered one of the most crucial paradigms and have received extensive attention from researchers worldwide. However, the non-stationarity involved in subject-to-subject transfer is a substantial challenge for robust BCI operation. To address this issue, this paper proposes a novel approach that integrates joint multi-feature extraction, specifically combining common spatial patterns (CSP) and wavelet packet transforms (WPT), with transfer learning (TL) in motor imagery BCI systems. This approach leverages the time-frequency characteristics of WPT and the spatial characteristics of CSP while using transfer learning to facilitate EEG identification for target subjects based on knowledge acquired from non-target subjects. Using dataset IVa from BCI Competition III, the proposed approach achieves an average classification accuracy of 93.4%, outperforming five state-of-the-art approaches. Furthermore, it allows various auxiliary problems to be designed so that different aspects of the target problem can be learned from unlabeled data through transfer learning, facilitating the implementation of new ideas within the proposed framework. It also demonstrates that integrating CSP and WPT while transferring knowledge from other subjects is highly effective in enhancing the average classification accuracy of EEG signals, providing a novel solution to subject-to-subject transfer challenges in motor imagery BCI systems.
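For readers who want a concrete picture of the joint CSP + WPT feature idea, here is a minimal sketch using MNE's CSP and PyWavelets, with a simple LDA classifier standing in for the paper's full transfer-learning pipeline; the array names, component counts, and wavelet settings are illustrative assumptions.

```python
# Sketch of joint CSP + wavelet-packet feature extraction for motor imagery EEG,
# assuming epoched data X of shape (n_trials, n_channels, n_samples) and labels y.
import numpy as np
import pywt
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wpt_energies(trial, wavelet="db4", level=3):
    """Wavelet-packet sub-band energies, averaged over channels."""
    feats = []
    for ch in trial:
        wp = pywt.WaveletPacket(data=ch, wavelet=wavelet, maxlevel=level)
        feats.append([np.sum(node.data ** 2) for node in wp.get_level(level, order="natural")])
    return np.mean(feats, axis=0)

def extract_features(X, y=None, csp=None):
    if csp is None:                       # fit CSP on (source) training data
        csp = CSP(n_components=4, log=True)
        csp_feats = csp.fit_transform(X, y)
    else:                                 # reuse CSP fitted on other subjects
        csp_feats = csp.transform(X)
    wpt_feats = np.array([wpt_energies(trial) for trial in X])
    return np.hstack([csp_feats, wpt_feats]), csp

# X_src, y_src: non-target subjects; X_tgt, y_tgt: target subject (hypothetical arrays)
# F_src, csp = extract_features(X_src, y_src)
# clf = LinearDiscriminantAnalysis().fit(F_src, y_src)
# F_tgt, _ = extract_features(X_tgt, csp=csp)
# print("target accuracy:", clf.score(F_tgt, y_tgt))
```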

3.
Sensors (Basel) ; 24(15)2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39123888

ABSTRACT

The efficient fault detection (FD) of traction control systems (TCSs) is crucial for ensuring the safe operation of high-speed trains. Transient faults (TFs) can arise due to prolonged operation and harsh environmental conditions, often being masked by background noise, particularly during dynamic operating conditions. Moreover, acquiring a sufficient number of samples across the entire scenario presents a challenging task, resulting in imbalanced data for FD. To address these limitations, an unsupervised transfer learning (TL) method via federated Cycle-Flow adversarial networks (CFANs) is proposed to effectively detect TFs under various operating conditions. Firstly, a CFAN is specifically designed for extracting latent features and reconstructing data in the source domain. Subsequently, a transfer learning framework employing federated CFANs collectively adjusts the modified knowledge resulting from domain alterations. Finally, the designed federated CFANs execute transient FD by constructing residuals in the target domain. The efficacy of the proposed methodology is demonstrated through comparative experiments.

4.
Sci Rep ; 14(1): 6173, 2024 03 14.
Article in English | MEDLINE | ID: mdl-38486010

ABSTRACT

A kidney stone is a solid formation that can lead to kidney failure, severe pain, and reduced quality of life from urinary system blockages. While medical experts can interpret kidney-ureter-bladder (KUB) X-ray images, some images pose challenges for human detection and require significant analysis time. Consequently, developing a detection system becomes crucial for accurately classifying KUB X-ray images. This article applies a transfer learning (TL) model with a pre-trained VGG16, empowered with explainable artificial intelligence (XAI), to establish a system that takes KUB X-ray images and accurately categorizes them as kidney stone or normal cases. The findings demonstrate that the model achieves a testing accuracy of 97.41% in identifying kidney stones or normal KUB X-rays in the dataset used. The VGG16 model delivers highly accurate predictions but lacks fairness and explainability in its decision-making process. To address this concern, this study incorporates Layer-Wise Relevance Propagation (LRP), an XAI technique, to enhance the transparency and effectiveness of the model. LRP increases the model's fairness and transparency, facilitating human comprehension of its predictions. Consequently, XAI can play an important role in assisting doctors with the accurate identification of kidney stones, thereby facilitating the execution of effective treatment strategies.
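A minimal sketch of the VGG16 transfer-learning setup described above (binary kidney stone vs. normal classification) is shown below; the image size, head layers, and training settings are assumptions, and the LRP explanation step is not included.

```python
# Hedged sketch: fine-tuning a pre-trained VGG16 head for kidney-stone vs. normal
# KUB X-ray classification. Settings are illustrative assumptions only.
import tensorflow as tf

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False                      # freeze convolutional features

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.vgg16.preprocess_input(inputs)
x = base(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(256, activation="relu")(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # stone vs. normal

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # hypothetical datasets
```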


Subject(s)
Artificial Intelligence, Kidney Calculi, Humans, X-Rays, Quality of Life, Kidney Calculi/diagnostic imaging, Fluoroscopy
5.
Phys Eng Sci Med ; 47(2): 633-642, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38358619

ABSTRACT

In this study, we developed a novel method based on deep learning and brain effective connectivity to classify responders and non-responders to selective serotonin reuptake inhibitor (SSRI) antidepressants in major depressive disorder (MDD) patients prior to treatment, using EEG signals. The effective connectivity of 30 MDD patients was determined by analyzing their pretreatment EEG signals, which were then concatenated into delta, theta, alpha, and beta bands and transformed into images. Using these images, we fine-tuned a hybrid Convolutional Neural Network enhanced with bidirectional Long Short-Term Memory (BiLSTM) cells based on transfer learning. The Inception-v3, ResNet18, DenseNet121, and EfficientNet-B0 models were implemented as base models and followed by BiLSTM and dense layers to classify responders and non-responders to SSRI treatment. Results showed that EfficientNet-B0 has the highest accuracy of 98.33%, followed by DenseNet121, ResNet18, and Inception-v3. The proposed method therefore uses deep learning models to extract both spatial and temporal features automatically, which improves classification results. It provides accurate identification of MDD patients who respond to SSRIs, thereby reducing the cost of medical facilities and patient care.
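To illustrate the hybrid CNN + BiLSTM idea, the following sketch treats the four band images as a short sequence processed by a frozen pre-trained backbone and a BiLSTM head; the shapes, layer sizes, and the choice of EfficientNet-B0 as the example backbone are assumptions for illustration.

```python
# Minimal sketch of a CNN + BiLSTM hybrid: a pre-trained backbone extracts spatial
# features from each frequency-band connectivity image (delta/theta/alpha/beta) and a
# BiLSTM models the sequence of bands. Shapes and sizes are illustrative assumptions.
import tensorflow as tf

N_BANDS, H, W = 4, 224, 224
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(H, W, 3))
backbone.trainable = False

inputs = tf.keras.Input(shape=(N_BANDS, H, W, 3))
x = tf.keras.layers.TimeDistributed(backbone)(inputs)        # (batch, bands, features)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # responder vs. non-responder

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```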


Subject(s)
Major Depressive Disorder, Electroencephalography, Neural Networks (Computer), Humans, Major Depressive Disorder/drug therapy, Major Depressive Disorder/diagnostic imaging, Adult, Female, Male, Treatment Outcome, Computer-Assisted Signal Processing, Deep Learning, Selective Serotonin Reuptake Inhibitors/pharmacology, Selective Serotonin Reuptake Inhibitors/therapeutic use, Middle Aged
6.
Med Biol Eng Comput ; 62(6): 1689-1701, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38342784

ABSTRACT

Motor imagery (MI) paradigms have been widely used in neural rehabilitation and drowsiness state assessment. Progress in brain-computer interface (BCI) technology has emphasized the importance of accurately and efficiently detecting motor imagery intentions from electroencephalogram (EEG) signals. Despite recent breakthroughs in developing EEG-based algorithms for decoding MI, the accuracy and efficiency of these models remain limited by the cross-subject heterogeneity of EEG data and the scarcity of EEG data for training. Inspired by optimal transport theory, this study develops a novel three-stage transfer learning (TSTL) method, which uses existing labeled data from a source domain to improve classification performance on an unlabeled target domain. The proposed method comprises three components: Riemannian tangent space mapping (RTSM), a source domain transformer (SDT), and optimal subspace mapping (OSM). The RTSM maps symmetric positive definite matrices from the Riemannian manifold to the tangent space to minimize the marginal probability distribution drift. The SDT transforms the source domain to the target domain by finding the optimal transport mapping matrix, reducing the joint probability distribution differences. The OSM finally maps the transformed source domain and the original target domain to the same subspace to further mitigate the distribution discrepancy. The performance of the proposed method was validated on two public BCI datasets, yielding average accuracies of 72.24% and 69.29%. Our results demonstrate improved EEG-based MI detection in comparison with state-of-the-art algorithms.
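As a concrete starting point for the first stage (Riemannian tangent space mapping), the sketch below uses pyriemann to map trial covariance matrices into the tangent space before a simple classifier; the paper's optimal-transport alignment steps (SDT and OSM) are not reproduced, and the data arrays are hypothetical.

```python
# Sketch of the Riemannian tangent space mapping stage only, using pyriemann.
# X is assumed to be epoched EEG of shape (n_trials, n_channels, n_samples).
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pipeline = make_pipeline(
    Covariances(estimator="oas"),    # SPD covariance matrix per trial
    TangentSpace(metric="riemann"),  # map SPD matrices to the tangent space
    LogisticRegression(max_iter=1000),
)
# pipeline.fit(X_source, y_source)      # labeled source-subject data (hypothetical)
# y_pred = pipeline.predict(X_target)   # unlabeled target-subject data
```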


Subject(s)
Algorithms, Brain-Computer Interfaces, Electroencephalography, Humans, Electroencephalography/methods, Machine Learning, Computer-Assisted Signal Processing, Imagination/physiology
7.
Sensors (Basel) ; 23(23)2023 Nov 30.
Article in English | MEDLINE | ID: mdl-38067888

ABSTRACT

The primary objective of this study is to develop an advanced, automated system for the early detection and classification of leaf diseases in potato plants, which are among the most widely cultivated vegetable crops worldwide. These diseases, notably early and late blight caused by Alternaria solani and Phytophthora infestans, significantly impact the quantity and quality of global potato production. We hypothesize that the integration of Vision Transformer (ViT) and ResNet-50 architectures in a new model, named EfficientRMT-Net, can effectively and accurately identify various potato leaf diseases. This approach aims to overcome the limitations of traditional methods, which are often labor-intensive, time-consuming, and prone to inaccuracies due to the unpredictability of disease presentation. EfficientRMT-Net leverages the CNN model for distinct feature extraction and employs depth-wise convolution (DWC) to reduce computational demands. A stage-block structure is also incorporated to improve scalability and sensitive-area detection, enhancing transferability across different datasets. The classification tasks are performed using a global average pooling layer and a fully connected layer. The model was trained, validated, and tested on custom datasets specifically curated for potato leaf disease detection, and its performance was compared with other deep learning and transfer learning techniques to establish its efficacy. Preliminary results show that EfficientRMT-Net achieves an accuracy of 97.65% on a general image dataset and 99.12% on a specialized potato leaf image dataset, outperforming existing methods. The model correctly classifies and identifies potato leaf diseases even in cases of distorted samples. EfficientRMT-Net thus provides an efficient and accurate solution for classifying potato plant leaf diseases, potentially enabling farmers to enhance crop yield while optimizing resource utilization. This study confirms our hypothesis, showcasing the effectiveness of combining ViT and ResNet-50 architectures in addressing complex agricultural challenges.
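The sketch below illustrates the general idea of fusing ViT and ResNet-50 features for leaf-disease classification; it is not the authors' EfficientRMT-Net, and the timm model names, class count, and head design are assumptions.

```python
# Hedged sketch: concatenating pooled features from a ViT and a ResNet-50 backbone
# for leaf-disease classification. Not the EfficientRMT-Net architecture.
import torch
import torch.nn as nn
import timm

class DualBackboneClassifier(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # num_classes=0 makes timm return pooled features instead of logits
        self.vit = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
        self.cnn = timm.create_model("resnet50", pretrained=True, num_classes=0)
        feat_dim = self.vit.num_features + self.cnn.num_features
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):                      # x: (batch, 3, 224, 224)
        feats = torch.cat([self.vit(x), self.cnn(x)], dim=1)
        return self.head(feats)

model = DualBackboneClassifier(num_classes=3)  # hypothetical: early blight, late blight, healthy
logits = model(torch.randn(2, 3, 224, 224))
```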


Subject(s)
Solanum tuberosum, Agriculture, Crops (Agricultural), Culture, Plant Diseases, Plant Leaves
8.
Front Artif Intell ; 6: 804682, 2023.
Article in English | MEDLINE | ID: mdl-37547229

ABSTRACT

Intuitively, experience playing against one mixture of opponents in a given domain should be relevant for a different mixture in the same domain. If the mixture changes, ideally we would not have to train from scratch, but could instead transfer what we have learned to construct a policy against the new mixture. We propose a transfer learning method, Q-Mixing, that starts by learning Q-values against each pure-strategy opponent. A Q-value for any distribution of opponent strategies is then approximated by appropriately averaging the separately learned Q-values. From these components, we construct policies against all opponent mixtures without any further training. We empirically validate Q-Mixing in two environments: a simple grid-world soccer environment and a social dilemma game. Our experiments find that Q-Mixing can successfully transfer knowledge across any mixture of opponents. Next, we consider the use of observations during play to update the believed distribution of opponents. We introduce an opponent policy classifier, trained by reusing the Q-learning data, and use the classifier's output to refine the mixing of Q-values. Q-Mixing augmented with the opponent policy classifier performs better, albeit with higher variance, than training directly against a mixed-strategy opponent.
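The core of Q-Mixing, averaging separately learned Q-values under the believed opponent distribution, can be written in a few lines; the sketch below uses tabular Q-values with hypothetical shapes purely for illustration.

```python
# Minimal sketch of the Q-Mixing idea: Q-values learned against each pure-strategy
# opponent are averaged under the opponent distribution to act against a mixture
# without further training. Array shapes are illustrative assumptions.
import numpy as np

def q_mixing_policy(q_tables, opponent_probs):
    """q_tables: (n_opponents, n_states, n_actions); opponent_probs: (n_opponents,)."""
    q_mix = np.tensordot(opponent_probs, q_tables, axes=1)   # (n_states, n_actions)
    return np.argmax(q_mix, axis=1)                          # greedy action per state

# Example: two pure-strategy opponents, 3 states, 2 actions
q_tables = np.array([[[1.0, 0.0], [0.2, 0.8], [0.5, 0.5]],
                     [[0.0, 1.0], [0.9, 0.1], [0.4, 0.6]]])
print(q_mixing_policy(q_tables, np.array([0.7, 0.3])))       # -> [0 1 1]
```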

9.
Med Biol Eng Comput ; 61(8): 2033-2049, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37296285

ABSTRACT

In this study, we propose an ensemble model, driven by transfer learning, for the detection of diabetic retinopathy (DR). DR is an eye condition caused by diabetes: in a person with high blood sugar, the retinal blood vessels deteriorate and may enlarge and leak, or close and stop the flow of blood. If DR is not treated, it can become severe, damage vision, and eventually result in blindness. Medical experts study colored fundus photographs to diagnose the disease manually, but this technique is error-prone. As a result, the condition has been identified automatically using retinal scans and a number of computer vision-based methods. With the transfer learning (TL) technique, a model is trained on one task or dataset and the pre-trained models or weights are then applied to another task or dataset. Six deep learning (DL) convolutional neural network (CNN) models, including DenseNet-169, VGG-19, ResNet101-V2, MobileNet-V2, and Inception-V3, were trained in this study on large datasets of fundus photographs. We also applied a data-preprocessing strategy to improve accuracy and lower training costs. The experimental results demonstrate that the suggested model works better than existing approaches on the same dataset, with an accuracy of up to 98%, and detects the stage of DR.
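As a rough illustration of the ensemble idea, the sketch below averages the softmax outputs of several fine-tuned CNNs (soft voting); the model file names and class handling are hypothetical, not the study's exact ensemble.

```python
# Minimal sketch of soft-voting over several fine-tuned CNNs for DR grading: each
# model's class probabilities are averaged and the arg-max is taken.
import numpy as np
import tensorflow as tf

# models = [tf.keras.models.load_model(p) for p in
#           ("densenet169_dr.keras", "vgg19_dr.keras", "inceptionv3_dr.keras")]  # hypothetical files

def ensemble_predict(models, images):
    """Average softmax outputs of all members and return the predicted DR stage per image."""
    probs = np.mean([m.predict(images, verbose=0) for m in models], axis=0)
    return np.argmax(probs, axis=1)
```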


Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Humans, Diabetic Retinopathy/diagnostic imaging, Neural Networks (Computer), Retina/diagnostic imaging, Fundus Oculi, Machine Learning
10.
Front Med (Lausanne) ; 10: 1106717, 2023.
Article in English | MEDLINE | ID: mdl-37089598

ABSTRACT

Renal diseases are common health problems that affect millions of people around the world. Among them are kidney stones, which affect anywhere from 1 to 15% of the global population and are thus considered one of the leading causes of chronic kidney disease (CKD). In addition to kidney stones, renal cancer is the tenth most prevalent type of cancer, accounting for 2.5% of all cancers. Artificial intelligence (AI) in medical systems can assist radiologists and other healthcare professionals in diagnosing different renal diseases (RD) with high reliability. This study proposes an AI-based transfer learning framework to detect RD at an early stage. The framework, applied to CT scans and images from microscopic histopathological examinations, automatically and accurately classifies patients with RD using convolutional neural networks (CNN), pre-trained models, and an optimization algorithm. The pre-trained CNN models used were VGG16, VGG19, Xception, DenseNet201, MobileNet, MobileNetV2, MobileNetV3Large, and NASNetMobile. In addition, the Sparrow Search Algorithm (SpaSA) is used to enhance each pre-trained model's performance by finding the best configuration. Two datasets were used: the first has four classes (cyst, normal, stone, and tumor), and the second has five categories relating to tumor severity (Grade 0 through Grade 4). The DenseNet201 and MobileNet pre-trained models performed best on the four-class dataset. The SGD optimizer with Nesterov momentum is recommended by three models, while only two models recommend AdaGrad and AdaMax. Among the pre-trained models for the five-class dataset, DenseNet201 and Xception are the best. Experimental results show the superiority of the proposed framework over other state-of-the-art classification models, with an accuracy of 99.98% on the four-class dataset and 100% on the five-class dataset.

11.
Ultrasonics ; 130: 106931, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36681008

ABSTRACT

Damage localization algorithms for ultrasonic guided-wave-based structural health monitoring (GW-SHM) typically use manually defined features and supervised machine learning on data collected under various conditions. This scheme has limitations that affect prediction accuracy in practical settings when the model encounters data with a distribution different from that used for training, especially due to variation in environmental factors (e.g., temperature) and types of damage. While deep learning models that overcome these limitations have been reported in the literature, they typically comprise millions of trainable parameters. As an alternative, we propose an unsupervised approach for temperature-compensated damage identification and localization in GW-SHM systems based on transfer learning with a convolutional autoencoder (TL-CAE). Remarkably, without using signals corresponding to damage during training (i.e., unsupervised), our method demonstrates more accurate damage detection and localization, as well as greater robustness to temperature variations, than supervised approaches reported on the publicly available Open Guided Waves (OGW) dataset. Additionally, we demonstrate a reduction in the number of trainable parameters by using transfer learning (TL) to leverage similarities between time series across sensor paths. The proposed framework uses raw time-domain signals, without any pre-processing or knowledge of material properties, and should therefore be scalable and adaptable to other materials, structures, damage types, and temperature ranges, should more data become available in the future. We present an extensive parametric study to demonstrate the feasibility of the proposed method.
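A minimal sketch of the unsupervised reconstruction-based detection idea follows: a small 1D convolutional autoencoder trained only on baseline signals, with the reconstruction residual used as a damage score. The signal length, architecture, and scoring rule are assumptions, not the paper's TL-CAE.

```python
# Hedged sketch: a convolutional autoencoder is trained only on baseline (undamaged)
# guided-wave signals; large reconstruction residuals on new signals flag damage.
import numpy as np
import tensorflow as tf

SIG_LEN = 1024   # hypothetical number of time samples per sensor path

inputs = tf.keras.Input(shape=(SIG_LEN, 1))
x = tf.keras.layers.Conv1D(16, 7, strides=2, padding="same", activation="relu")(inputs)
x = tf.keras.layers.Conv1D(8, 7, strides=2, padding="same", activation="relu")(x)
x = tf.keras.layers.Conv1DTranspose(8, 7, strides=2, padding="same", activation="relu")(x)
x = tf.keras.layers.Conv1DTranspose(16, 7, strides=2, padding="same", activation="relu")(x)
outputs = tf.keras.layers.Conv1D(1, 7, padding="same")(x)

cae = tf.keras.Model(inputs, outputs)
cae.compile(optimizer="adam", loss="mse")
# cae.fit(baseline_signals, baseline_signals, epochs=50)   # undamaged data only (hypothetical)

def damage_score(signals):
    """Per-signal reconstruction residual; high values indicate possible damage."""
    recon = cae.predict(signals, verbose=0)
    return np.mean((signals - recon) ** 2, axis=(1, 2))
```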

12.
MAGMA ; 36(1): 43-53, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36326937

ABSTRACT

OBJECTIVE: Despite the critical role of Magnetic Resonance Imaging (MRI) in the diagnosis of brain tumours, there are still many pitfalls in their exact grading, particularly for gliomas. This work therefore examines the potential of Transfer Learning (TL) and Machine Learning (ML) algorithms for the accurate grading of gliomas on MRI images. MATERIALS AND METHODS: The dataset included four types of axial MRI images of glioma brain tumours with grades I-IV: T1-weighted, T2-weighted, FLAIR, and T1-weighted Contrast-Enhanced (T1-CE). Images were resized, normalized, and randomly split into training, validation, and test sets. ImageNet pre-trained Convolutional Neural Networks (CNNs) were used for feature extraction and classification, using Adam and SGD optimizers. Logistic Regression (LR) and Support Vector Machine (SVM) classifiers were also implemented in place of the Fully Connected (FC) layers, taking advantage of the features extracted by each CNN. RESULTS: Evaluation metrics were computed to find the best-performing model; the highest overall accuracy of 99.38% was achieved by the model combining an SVM classifier with features extracted by a pre-trained VGG-16. DISCUSSION: Developing Computer-aided Diagnosis (CAD) systems using pre-trained CNNs and classical classification algorithms is a practical approach to automatically determining the grade of glioma brain tumours in MRI images. These models are an excellent alternative to invasive methods and help doctors diagnose more accurately before treatment.
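The feature-extraction route described above (pre-trained CNN features fed to a classical classifier instead of FC layers) can be sketched as follows; VGG-16 and an RBF SVM are used as the example pair, and the data arrays are hypothetical.

```python
# Minimal sketch: a pre-trained VGG-16 produces deep features for each MRI slice and
# an SVM performs grading instead of fully connected layers.
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

extractor = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                        pooling="avg", input_shape=(224, 224, 3))

def deep_features(images):
    """images: float array (n, 224, 224, 3) with pixel values in [0, 255]."""
    x = tf.keras.applications.vgg16.preprocess_input(images.copy())
    return extractor.predict(x, verbose=0)          # (n, 512) pooled features

# X_train_img, y_train, X_test_img, y_test are hypothetical arrays of MRI slices and grades
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# clf.fit(deep_features(X_train_img), y_train)
# print("test accuracy:", clf.score(deep_features(X_test_img), y_test))
```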


Subject(s)
Brain Neoplasms, Glioma, Humans, Magnetic Resonance Imaging, Glioma/diagnostic imaging, Brain Neoplasms/diagnostic imaging, Computer-Assisted Diagnosis, Machine Learning
13.
Front Mol Biosci ; 10: 1250596, 2023.
Article in English | MEDLINE | ID: mdl-38577506

ABSTRACT

Introduction: Chronic Suppurative Otitis Media (CSOM) and middle ear cholesteatoma are two common chronic otitis media diseases that often cause confusion among physicians because of their similar location and shape in clinical CT images of the internal auditory canal. In this study, we used transfer learning combined with CT scans of the internal auditory canal to achieve accurate lesion segmentation and automatic diagnosis for patients with CSOM and middle ear cholesteatoma. Methods: We collected 1019 CT scan images and used the nnUnet skeleton model together with coarse-grained focal segmentation labels to pre-train on these CT images for lesion segmentation. We then fine-tuned the pre-trained model for the downstream three-class diagnosis task. Results: Our proposed model achieved a classification accuracy of 92.33% for CSOM and middle ear cholesteatoma, approximately 5% higher than the benchmark model. Moreover, the upstream segmentation task achieved a mean Intersection over Union (mIoU) of 0.569. Discussion: Our results demonstrate that coarse-grained contour-boundary labeling can significantly enhance the accuracy of downstream classification tasks. Combining deep learning with internal auditory canal CT images enables automatic diagnosis of CSOM and middle ear cholesteatoma with high sensitivity and specificity.

14.
Sensors (Basel) ; 22(21)2022 Nov 05.
Article in English | MEDLINE | ID: mdl-36366228

ABSTRACT

Existing data-driven technology for predicting battery state of health (SOH) has insufficient feature-extraction capability and a limited application scope. To address this challenge, this paper proposes a battery SOH prediction model based on multi-feature fusion. The model combines a convolutional neural network (CNN) and a long short-term memory network (LSTM): the CNN learns the cycle features in the battery data, the LSTM learns the aging features of the battery over time, and a fully connected (FC) layer produces the regression prediction. In addition, to handle the aging differences caused by different battery operating conditions, this paper introduces transfer learning (TL) to improve prediction performance. Across cycle data of the same battery under 12 different charging conditions, the fusion model shows higher prediction accuracy than either the LSTM or the CNN in isolation, reducing RMSPE by 0.21% and 0.19%, respectively.
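A minimal sketch of a CNN + LSTM regressor for SOH prediction in Keras is given below; the window length, feature count, and layer sizes are illustrative assumptions, not the paper's exact fusion model.

```python
# Hedged sketch of a CNN + LSTM fusion regressor: 1D convolutions capture intra-cycle
# patterns, an LSTM captures aging across cycles, and a dense layer outputs SOH.
import tensorflow as tf

N_CYCLES, N_FEATURES = 30, 8   # hypothetical: 30 past cycles, 8 measurements per cycle

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 3, padding="same", activation="relu",
                           input_shape=(N_CYCLES, N_FEATURES)),
    tf.keras.layers.Conv1D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                       # predicted state of health
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# Transfer to a new charging condition: keep the trained weights and fine-tune briefly
# model.fit(X_new_condition, y_new_condition, epochs=5)   # hypothetical arrays
```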


Subject(s)
Machine Learning, Neural Networks (Computer)
15.
Biomed Phys Eng Express ; 8(6)2022 10 21.
Article in English | MEDLINE | ID: mdl-36223710

ABSTRACT

Reducing the radiation dose causes severe image noise and artifacts, and the resulting degradation of image quality also affects diagnostic accuracy. To address this, we combine a 2D and 3D concatenating convolutional encoder-decoder (CCE-3D) with a structural sensitive loss (SSL), using transfer learning (TL), to denoise in the projection domain for low-dose computed tomography (LDCT), radiography, and tomosynthesis. The simulation and real-world results show that many of the figures of merit (FOMs) increase in both projections (2-3 times) and CT imaging (1.5-2 times). Based on the PSNR and the structural similarity index measure (SSIM), the CCE-3D model is effective at denoising while preserving structural shape. We have therefore developed a denoising model that can serve as a promising tool for the next generation of X-ray radiography, tomosynthesis, and LDCT systems.
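To make the structure-aware training objective concrete, the sketch below mixes an SSIM term with MSE as a custom Keras loss; this is only in the spirit of the structural sensitive loss named above, whose exact definition is not reproduced, and the weighting factor is an assumption.

```python
# Hedged sketch of a structure-aware denoising loss: weighted sum of (1 - SSIM) and MSE.
# Inputs are image batches scaled to [0, 1]; alpha is an illustrative assumption.
import tensorflow as tf

def structural_loss(y_true, y_pred, alpha=0.84):
    """Penalize both structural dissimilarity and pixel-wise error."""
    ssim_term = 1.0 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))
    mse_term = tf.reduce_mean(tf.square(y_true - y_pred))
    return alpha * ssim_term + (1.0 - alpha) * mse_term

# denoiser.compile(optimizer="adam", loss=structural_loss)  # denoiser is a hypothetical encoder-decoder
```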


Subject(s)
Deep Learning, Cone-Beam Computed Tomography, X-Ray Computed Tomography/methods, Artifacts, Computer Simulation
16.
J Ambient Intell Humaniz Comput ; : 1-21, 2022 Aug 26.
Article in English | MEDLINE | ID: mdl-36042792

ABSTRACT

Parkinson's disease (PD) is a slowly progressing neurodegenerative disorder whose symptoms are often identified only at late stages. Early diagnosis and treatment of PD can help relieve symptoms and delay progression, but this is very challenging because the symptoms of PD resemble those of other diseases. The current study proposes a generic framework for the diagnosis of PD using handwriting images and/or speech signals. For the handwriting images, 8 pre-trained convolutional neural networks (CNNs), applied via transfer learning and tuned by the Aquila Optimizer, were trained on the NewHandPD dataset to diagnose PD. For the speech signals, features from the MDVR-KCL dataset are extracted numerically using 16 feature-extraction algorithms and fed to 4 different machine learning algorithms tuned by grid search, and graphically using 5 different techniques and fed to the 8 pre-trained CNN structures. The authors propose a new technique for extracting features from the voice dataset based on segmentation with variable speech-signal segment durations, i.e., using different durations in the segmentation phase. Using the proposed technique, 5 datasets with 281 numerical features are generated. Results from the different experiments are collected and recorded. For the NewHandPD dataset, the best reported metric is 99.75% using the VGG19 structure. For the MDVR-KCL dataset, the best reported metrics are 99.94% using the KNN and SVM algorithms on the combined numerical features, and 100% using the combined mel-spectrogram graphical features with the VGG19 structure. These results are better than those of other state-of-the-art studies.
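The classical-ML branch (grid-searched SVM and KNN on numerical voice features) can be sketched with scikit-learn as follows; the parameter grids and feature arrays are assumptions, not the study's 281-feature setup.

```python
# Minimal sketch: grid-searching SVM and KNN classifiers on numerical voice features.
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

candidates = {
    "svm": (Pipeline([("scale", StandardScaler()), ("clf", SVC())]),
            {"clf__C": [0.1, 1, 10], "clf__kernel": ["rbf", "linear"]}),
    "knn": (Pipeline([("scale", StandardScaler()), ("clf", KNeighborsClassifier())]),
            {"clf__n_neighbors": [3, 5, 7]}),
}

# X, y are hypothetical arrays of segment-level voice features and PD/healthy labels
# for name, (pipe, grid) in candidates.items():
#     search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy").fit(X, y)
#     print(name, search.best_score_, search.best_params_)
```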

17.
Front Public Health ; 10: 924432, 2022.
Article in English | MEDLINE | ID: mdl-35859776

ABSTRACT

Cancer is a major public health issue in the modern world. Breast cancer, which starts in the breast and can spread to other parts of the body, is one of the most common cancers that kill women; it develops when cells grow uncontrollably. The proposed model addresses both benign and malignant breast cancer. In computer-aided diagnosis systems, the identification and classification of breast cancer from histopathology and ultrasound images are critical steps. Over the last few decades, investigators have demonstrated the ability to automate the initial identification and classification of tumors. Detecting breast cancer early allows patients to obtain proper therapy and thereby increases their chances of survival. Deep learning (DL), machine learning (ML), and transfer learning (TL) techniques are used to solve many medical problems. Several previous studies have addressed the categorization and identification of cancer tumors using various types of models, but with some limitations, and research is hampered by the lack of suitable datasets. The proposed methodology is designed to support the automatic identification and diagnosis of breast cancer. Our main contribution is applying the transfer learning technique to three datasets, A, B, and C, plus A2, where A2 is dataset A restricted to two classes. In this study, both ultrasound images and histopathology images are used. The model used in this work is a customized CNN-AlexNet, trained according to the requirements of each dataset, which is another contribution of this work. The results show that the proposed system, empowered with transfer learning, achieved higher accuracy than existing models on datasets A, B, C, and A2.


Subject(s)
Breast Neoplasms, Neural Networks (Computer), Breast Neoplasms/diagnostic imaging, Female, Humans, Machine Learning
18.
Artif Intell Rev ; 55(6): 5063-5108, 2022.
Article in English | MEDLINE | ID: mdl-35125606

ABSTRACT

The sudden appearance of COVID-19 put the world in a serious situation. Due to the rapid spread of the virus and the increase in the number of infected patients and deaths, COVID-19 was declared a pandemic, with destructive effects not only on human health but also on the economy. Despite the development and availability of different COVID-19 vaccines, scientists still warn of new severe waves of the virus, so fast diagnosis of COVID-19 remains a critical issue. Chest imaging has proved to be a powerful tool for the early detection of COVID-19. This study introduces a complete framework for the early detection of COVID-19 and the early prognosis of its severity in diagnosed patients using laboratory test results. It consists of two phases: (1) an Early Diagnostic Phase (EDP) and (2) an Early Prognostic Phase (EPP). In the EDP, COVID-19 patients are diagnosed using CT chest images. In the current study, 5,159 COVID-19 and 10,376 normal computed tomography (CT) images of Egyptian patients were used as a dataset to train 7 different convolutional neural networks using transfer learning. Standard data augmentation techniques and generative adversarial networks (GANs), CycleGAN and CCGAN, were used to enlarge the dataset and avoid overfitting. 28 experiments were run and multiple performance metrics were captured. Classification with no augmentation yielded 99.61% accuracy with the EfficientNetB7 architecture. With CycleGAN and CCGAN augmentation, the maximum reported accuracies were 99.57% and 99.14% with the MobileNetV1 and VGG-16 architectures, respectively. In the EPP, the severity of COVID-19 in patients is determined early using laboratory test results. In this study, 25 different classification techniques were applied; the highest accuracies were 98.70% and 97.40%, reported by the Ensemble Bagged Trees and the Tree (Fine, Medium, and Coarse) techniques, respectively.
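For the standard (non-GAN) augmentation step mentioned above, a minimal Keras preprocessing sketch is shown below; the specific transforms and their parameters are assumptions.

```python
# Minimal sketch of standard image augmentation (flips, rotation, zoom) applied to CT
# images before training; GAN-based augmentation is not shown here.
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.05),
    tf.keras.layers.RandomZoom(0.1),
])
# train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))  # hypothetical tf.data pipeline
```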

19.
Sensors (Basel) ; 22(4)2022 Feb 12.
Article in English | MEDLINE | ID: mdl-35214317

ABSTRACT

Transfer learning is a pervasive technology in the computer vision and natural language processing fields, yielding substantial performance improvements by leveraging prior knowledge gained from data with different distributions. However, while recent works seek to mature machine learning and deep learning techniques in applications related to wireless communications, a field loosely termed radio frequency machine learning, few have demonstrated the use of transfer learning techniques to yield performance gains, improve generalization, or address the cost of training data. With modifications to existing transfer learning taxonomies constructed to support transfer learning in other modalities, this paper presents a tailored taxonomy for radio frequency applications, yielding a consistent framework that can be used to compare and contrast existing and future works. This work offers such a taxonomy, discusses the small body of existing work on transfer learning for radio frequency machine learning, and outlines directions where future research is needed to mature the field.


Subject(s)
Machine Learning, Natural Language Processing, Radio Waves, Surveys and Questionnaires, Technology
20.
Sensors (Basel) ; 22(2)2022 Jan 16.
Article in English | MEDLINE | ID: mdl-35062641

ABSTRACT

Motion classification for the control of prosthetic arms can be performed using biometric signals recorded by electroencephalography (EEG) or electromyography (EMG) with noninvasive surface electrodes. However, current single-modal EEG- and EMG-based motion classification techniques are limited by the complexity and noise of EEG signals and by the electrode placement bias and low resolution of EMG signals. We herein propose a novel system of two-dimensional (2D) input-image feature multimodal fusion based on an EEG/EMG-signal transfer learning (TL) paradigm for the detection of hand movements in transforearm amputees. A feature-extraction method in the frequency domain of the EEG and EMG signals was adopted to establish 2D images. The input images were used to train a model based on a convolutional neural network algorithm and TL, which requires 2D images as input data. For data acquisition, five transforearm amputees and nine healthy controls were recruited. Compared with conventional models trained on single-modal EEG signals, the proposed multimodal fusion method significantly improved classification accuracy in both the control and patient groups. When the two signals were combined and used in the pre-trained model for EEG TL, classification accuracy increased by 4.18-4.35% in the control group and by 2.51-3.00% in the patient group.
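The 2D time-frequency image construction from paired EEG and EMG signals might look like the following sketch, which stacks log-spectrograms into a two-channel image; the sampling rate, window settings, and normalization are assumptions, not the authors' exact feature-extraction method.

```python
# Hedged sketch: building a 2D time-frequency image from paired EEG and EMG channels
# as CNN input for transfer learning.
import numpy as np
from scipy.signal import spectrogram

FS = 1000  # hypothetical sampling rate in Hz

def fused_tf_image(eeg, emg, fs=FS):
    """eeg, emg: 1-D arrays of equal length; returns a 2-channel log-spectrogram image."""
    _, _, s_eeg = spectrogram(eeg, fs=fs, nperseg=256, noverlap=128)
    _, _, s_emg = spectrogram(emg, fs=fs, nperseg=256, noverlap=128)
    img = np.stack([np.log1p(s_eeg), np.log1p(s_emg)], axis=-1)  # (freq, time, 2)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)    # normalize to [0, 1]

# Example with synthetic signals
t = np.arange(0, 2.0, 1.0 / FS)
image = fused_tf_image(np.sin(2 * np.pi * 10 * t), np.random.randn(t.size))
print(image.shape)
```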


Subject(s)
Amputees, Brain-Computer Interfaces, Deep Learning, Algorithms, Electroencephalography, Electromyography, Humans, Wrist