Results 1 - 16 of 16

1.
Sensors (Basel) ; 24(16)2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39205000

ABSTRACT

Deep learning has recently made significant progress in semantic segmentation. However, the current methods face critical challenges. The segmentation process often lacks sufficient contextual information and attention mechanisms, low-level features lack semantic richness, and high-level features suffer from poor resolution. These limitations reduce the model's ability to accurately understand and process scene details, particularly in complex scenarios, leading to segmentation outputs that may have inaccuracies in boundary delineation, misclassification of regions, and poor handling of small or overlapping objects. To address these challenges, this paper proposes a Semantic Segmentation Network Based on Adaptive Attention and Deep Fusion with the Multi-Scale Dilated Convolutional Pyramid (SDAMNet). Specifically, the Dilated Convolutional Atrous Spatial Pyramid Pooling (DCASPP) module is developed to enhance contextual information in semantic segmentation. Additionally, a Semantic Channel Space Details Module (SCSDM) is devised to improve the extraction of significant features through multi-scale feature fusion and adaptive feature selection, enhancing the model's perceptual capability for key regions and optimizing semantic understanding and segmentation performance. Furthermore, a Semantic Features Fusion Module (SFFM) is constructed to address the semantic deficiency in low-level features and the low resolution in high-level features. The effectiveness of SDAMNet is demonstrated on two datasets, revealing significant improvements in Mean Intersection over Union (MIOU) by 2.89% and 2.13%, respectively, compared to the Deeplabv3+ network.
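The abstract does not spell out the internals of DCASPP; as a rough illustration of the general idea it builds on (parallel dilated convolutions at several rates whose outputs are concatenated and fused, enlarging the receptive field without losing resolution), a minimal PyTorch sketch could look like the following. Module names, channel counts, and dilation rates are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class DilatedPyramid(nn.Module):
    """Illustrative multi-scale dilated convolution pyramid (ASPP-style).

    Parallel 3x3 convolutions with different dilation rates see different
    context sizes at the same resolution; their outputs are concatenated
    and fused with a 1x1 convolution to aggregate multi-scale context.
    """
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]  # same H x W per branch
        return self.fuse(torch.cat(feats, dim=1))

# Example: a feature map of shape (1, 256, 64, 64) keeps its spatial size.
y = DilatedPyramid(256, 128)(torch.randn(1, 256, 64, 64))
print(y.shape)  # torch.Size([1, 128, 64, 64])
```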

2.
Methods Mol Biol ; 2809: 263-274, 2024.
Article in English | MEDLINE | ID: mdl-38907903

ABSTRACT

The availability of extensive MHC-peptide binding data has boosted machine learning-based approaches for predicting binding affinity and identifying binding motifs. These computational tools leverage the wealth of binding data to extract essential features and generate a multitude of potential peptides, thereby significantly reducing the cost and time required for experimental procedures. MAM is one such tool for predicting the MHC-I-peptide binding affinity, extracting binding motifs, and generating new peptides with high affinity. This manuscript provides step-by-step guidance on installing, configuring, and executing MAM while also discussing the best practices when using this tool.


Subject(s)
Computational Biology, Histocompatibility Antigens Class I, Peptides, Protein Binding, Software, Histocompatibility Antigens Class I/metabolism, Histocompatibility Antigens Class I/chemistry, Peptides/chemistry, Peptides/metabolism, Computational Biology/methods, Humans, Computer Simulation, Machine Learning, Binding Sites
3.
Interdiscip Sci ; 16(3): 1-12, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38568406

ABSTRACT

With the rapid development of NGS technology, the number of protein sequences has increased exponentially. Computational methods have been introduced in protein functional studies because the analysis of large numbers of proteins through biological experiments is costly and time-consuming. In recent years, new approaches based on deep learning have been proposed to overcome the limitations of conventional methods. Although deep learning-based methods effectively utilize features of protein function, they are limited to fixed-length sequences and consider only information from adjacent amino acids. Therefore, new protein analysis tools that extract functional features from proteins of flexible length and train models on them are required. We introduce DeepPI, a deep learning-based tool for analyzing proteins in large-scale databases. The proposed model, which utilizes Global Average Pooling, can be applied to proteins of flexible length and leads to reduced information loss compared to existing algorithms that use fixed sizes. The image generator converts a one-dimensional sequence into a distinct two-dimensional structure, from which common parts of various shapes can be extracted. Finally, filtering techniques automatically detect representative data from the entire database and ensure coverage of large protein databases. We demonstrate that DeepPI has been successfully applied to large databases such as the Pfam-A database. Comparative experiments on four types of image generators illustrated the impact of structure on feature extraction. The filtering performance was verified by varying the parameter values and proved to be applicable to large databases. DeepPI outperforms existing methods in family classification accuracy for protein function inference.


Subject(s)
Deep Learning, Proteins, Proteins/chemistry, Algorithms, Databases, Protein, Computational Biology/methods
4.
Sensors (Basel) ; 23(22)2023 Nov 16.
Article in English | MEDLINE | ID: mdl-38005604

ABSTRACT

Monocular panoramic depth estimation has various applications in robotics and autonomous driving due to its ability to perceive the entire field of view. However, panoramic depth estimation faces two significant challenges: global context capturing and distortion awareness. In this paper, we propose a new framework for panoramic depth estimation that can simultaneously address panoramic distortion and extract global context information, thereby improving the performance of panoramic depth estimation. Specifically, we introduce an attention mechanism into the multi-scale dilated convolution and adaptively adjust the receptive field size between different spatial positions, designing the adaptive attention dilated convolution module, which effectively perceives distortion. At the same time, we design the global scene understanding module to integrate global context information into the feature maps generated by the feature extractor. Finally, we trained and evaluated our model on three benchmark datasets, which contain virtual and real-world RGB-D panorama data. The experimental results show that the proposed method achieves competitive performance, comparable to existing techniques in both quantitative and qualitative evaluations. Furthermore, our method has fewer parameters and more flexibility, making it a scalable solution for mobile AR.
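The adaptive attention dilated convolution module is not specified in detail here; one plausible reading (several dilation rates blended by per-position attention weights, so the effective receptive field adapts to the distortion at each pixel) could be sketched as follows. All names, channel counts, and rates are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveDilatedConv(nn.Module):
    """Blend dilated convolutions with position-wise attention weights.

    Each branch uses a different dilation rate; a small attention head predicts,
    for every spatial position, how much each branch should contribute, so the
    effective receptive field varies across the (distorted) panorama.
    """
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.attn = nn.Conv2d(in_ch, len(rates), kernel_size=1)  # per-pixel branch weights

    def forward(self, x):
        weights = F.softmax(self.attn(x), dim=1)          # (N, num_branches, H, W)
        out = 0
        for i, branch in enumerate(self.branches):
            out = out + branch(x) * weights[:, i:i + 1]    # weight each branch per pixel
        return out

x = torch.randn(1, 64, 32, 64)                             # equirectangular feature map
print(AdaptiveDilatedConv(64, 64)(x).shape)                # torch.Size([1, 64, 32, 64])
```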

5.
Sensors (Basel) ; 23(18)2023 Sep 08.
Article in English | MEDLINE | ID: mdl-37765819

ABSTRACT

The reliable and safe operation of industrial systems requires detecting and diagnosing bearing faults as early as possible. Intelligent fault diagnostic systems that use deep learning convolutional neural network (CNN) techniques have achieved a great deal of success in recent years. In a traditional CNN, the final three layers are fully connected layers, in which every neuron is connected to all neurons of the preceding layer. However, the fully connected layers of a CNN have the disadvantage of too many training parameters, which lengthens model training and testing time and encourages overfitting. Additionally, because the working load is constantly changing and noise from the place of operation is unavoidable, the performance of intelligent fault diagnosis techniques degrades considerably. In this research, we propose a novel technique that effectively solves these problems of the traditional CNN and accurately identifies bearing faults. Firstly, the best pre-trained CNN model is identified by considering the classification success rate for bearing fault diagnosis. Secondly, the selected CNN model is modified to effectively reduce its parameter count, overfitting, and computation time. Finally, the best classifier is identified to form a hybrid model that achieves the best performance. The proposed technique is found to perform well under different load conditions, even in noisy environments with variable signal-to-noise ratio (SNR) values. Our experimental results confirm that this method is highly reliable and efficient in detecting and classifying bearing faults.

6.
Front Plant Sci ; 14: 1205151, 2023.
Article in English | MEDLINE | ID: mdl-37484459

ABSTRACT

Weeds remain one of the most important factors affecting the yield and quality of corn in modern agricultural production. To use deep convolutional neural networks to accurately, efficiently, and non-destructively identify weeds in corn fields, a new corn weed identification model, SE-VGG16, is proposed. The SE-VGG16 model uses VGG16 as its basis and adds the SE attention mechanism so that the network automatically focuses on useful parts and allocates limited information-processing resources to important ones. The 3 × 3 convolutional kernels in the first block are then reduced to 1 × 1 convolutional kernels, and the ReLU activation function is replaced by Leaky ReLU to perform feature extraction while reducing dimensionality. Finally, the fully connected layers of VGG16 are replaced by a global average pooling layer, and the output is produced by softmax. The experimental results verify that the SE-VGG16 model classifies corn weeds better than other classical and advanced multiscale models, with an average accuracy of 99.67%, compared with 97.75% for the original VGG16 model. Based on the three evaluation indices of precision, recall, and F1, SE-VGG16 has good robustness, high stability, and a high recognition rate, and the network model can be used to accurately identify weeds in corn fields, providing an effective solution for weed control in corn fields in practical applications.

7.
Front Cell Infect Microbiol ; 13: 1116285, 2023.
Article in English | MEDLINE | ID: mdl-36936770

ABSTRACT

Background: There is an urgent need to find an effective and accurate method for triaging coronavirus disease 2019 (COVID-19) patients from millions or billions of people. Therefore, this study aimed to develop a novel deep-learning approach for COVID-19 triage based on chest computed tomography (CT) images, including normal, pneumonia, and COVID-19 cases. Methods: A total of 2,809 chest CT scans (1,105 COVID-19, 854 normal, and 850 non-COVID-19 pneumonia cases) were acquired for this study and divided into a training set (n = 2,329) and a test set (n = 480). A U-net-based convolutional neural network was used for lung segmentation, and a mask-weighted global average pooling (GAP) method was proposed for the deep neural network to improve the classification of COVID-19 versus normal or common pneumonia cases. Results: Lung segmentation reached a Dice value of 96.5% on 30 independent CT scans. The mask-weighted GAP method achieved COVID-19 triage with a sensitivity of 96.5% and a specificity of 87.8% on the testing dataset, a 0.9% and 2% improvement in sensitivity and specificity, respectively, compared with normal GAP. In addition, fusion images of the CT images and the areas highlighted by the deep learning model via the Grad-CAM method, indicating the lesion regions detected by the deep learning method, were generated and could be confirmed by radiologists. Conclusions: This study proposed a mask-weighted GAP-based deep learning method and obtained promising results for COVID-19 triage based on chest CT images. Furthermore, it can be considered a convenient tool to assist doctors in diagnosing COVID-19.
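A plausible reading of mask-weighted global average pooling, consistent with the description above, is to average feature-map activations only over the lung region given by the segmentation mask rather than over the whole map. A minimal sketch under that assumption (tensor shapes are illustrative):

```python
import torch

def mask_weighted_gap(features, mask, eps=1e-6):
    """Average feature activations over the masked (e.g., lung) region only.

    features: (N, C, H, W) feature maps from the CNN backbone.
    mask:     (N, 1, H, W) segmentation mask in [0, 1] (e.g., a U-net lung mask),
              already resized to the feature-map resolution.
    Returns a (N, C) pooled descriptor, analogous to plain GAP but ignoring
    activations outside the mask.
    """
    weighted = features * mask                              # zero out non-lung activations
    pooled = weighted.sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + eps)
    return pooled

# Plain GAP for comparison would be: features.mean(dim=(2, 3))
feats = torch.randn(2, 512, 16, 16)
lung = (torch.rand(2, 1, 16, 16) > 0.5).float()
print(mask_weighted_gap(feats, lung).shape)                 # torch.Size([2, 512])
```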


Subject(s)
COVID-19, Deep Learning, Pneumonia, Humans, COVID-19/diagnostic imaging, SARS-CoV-2, Triage/methods, Retrospective Studies, Pneumonia/diagnosis, Neural Networks, Computer, Tomography, X-Ray Computed/methods
8.
Front Comput Neurosci ; 16: 1004988, 2022.
Article in English | MEDLINE | ID: mdl-36457992

ABSTRACT

With the increasing demand for deep learning in the last few years, CNNs have been widely used in many applications and have gained interest in classification, regression, and image recognition tasks. Training these deep neural networks is compute-intensive and takes days or even weeks from scratch, which sometimes limits the practical implementation of CNNs in real-time applications. Computational speedup in these networks is therefore of utmost importance, which generates interest in CNN training acceleration. Much research is underway to meet the computational requirements and make CNN training feasible for real-time applications. Because of its simplicity, data parallelism is used primarily, but it sometimes performs poorly; model parallelism is often preferred instead, but it is not always the best choice either. Therefore, in this study, we implement a hybrid of data and model parallelism to improve the computational speed without compromising accuracy. There is only a 1.5% accuracy drop in our proposed study, with a speed-up of 3.62×. Also, a novel activation function, the Normalized Non-linear Activation Unit (NNLU), is proposed to introduce non-linearity in the model. The activation unit is non-saturating, helps avoid overfitting, and is free from the vanishing gradient problem. In addition, the fully connected layer in the proposed CNN model is replaced by a global average pooling (GAP) layer to enhance the model's accuracy and computational performance. When tested on a bio-medical image dataset, the model achieves an accuracy of 98.89% and requires a training time of only 1 s. The model categorizes medical images into glioma, meningioma, and pituitary tumor classes. Compared with existing state-of-the-art techniques, the proposed model outperforms others in classification accuracy and computational speed. Results are also reported for different optimizers, learning rates, and numbers of epochs.

9.
Sensors (Basel) ; 22(13)2022 Jun 27.
Article in English | MEDLINE | ID: mdl-35808358

ABSTRACT

Walking is an exercise that uses the muscles and joints of the human body and is essential for understanding body condition. Analyzing body movements through gait has been studied and applied in human identification, sports science, and medicine. This study investigated a spatiotemporal graph convolutional network (ST-GCN) model with attention techniques, applied to pathological-gait classification from collected skeletal information. The focus of this study was twofold. The first objective was extracting spatiotemporal features from skeletal information presented by joint connections and applying these features to graph convolutional neural networks. The second objective was developing an attention mechanism for spatiotemporal graph convolutional neural networks, to focus on important joints in the current gait. This model establishes a pathological-gait-classification system for diagnosing sarcopenia. Experiments on three datasets, namely NTU RGB+D, pathological gait of GIST, and multimodal-gait symmetry (MMGS), validate that the proposed model outperforms existing models in gait classification.
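The abstract does not give the attention formulation; a common way to let a spatiotemporal graph model focus on important joints is to learn a per-joint weight from the features and rescale each joint's features accordingly. A minimal sketch under that assumption, using the usual (batch, channels, frames, joints) ST-GCN feature layout; the module name and sizes are illustrative:

```python
import torch
import torch.nn as nn

class JointAttention(nn.Module):
    """Learn a weight per skeleton joint and rescale that joint's features.

    x: (N, C, T, V) tensor - batch, channels, frames, joints. Joints that matter
    for the current gait receive larger weights; the weights are derived from
    the features themselves.
    """
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)   # per-(frame, joint) score

    def forward(self, x):
        s = self.score(x)                                     # (N, 1, T, V)
        a = torch.sigmoid(s.mean(dim=2, keepdim=True))        # (N, 1, 1, V) joint weights
        return x * a                                          # reweight every joint's features

x = torch.randn(4, 64, 30, 25)                                # e.g., 25 joints, 30 frames
print(JointAttention(64)(x).shape)                            # torch.Size([4, 64, 30, 25])
```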


Subject(s)
Algorithms, Neural Networks, Computer, Gait, Humans
10.
Math Biosci Eng ; 19(1): 997-1025, 2022 01.
Article in English | MEDLINE | ID: mdl-34903023

ABSTRACT

Classifying and identifying surface defects is essential during the production and use of aluminum profiles. Recently, the dual-convolutional neural network (CNN) model fusion framework has shown promising performance for defect classification and recognition. Spurred by this trend, this paper proposes an improved dual-CNN model fusion framework to classify and identify defects in aluminum profiles. Compared with traditional dual-CNN model fusion frameworks, the proposed architecture involves an improved fusion layer, fusion strategy, and classifier block. Specifically, the suggested method extracts the feature map of the aluminum profile RGB image from the pre-trained VGG16 model's pool5 layer and the feature map of the maximum pooling layer of the suggested A4 network, which is added after the Alexnet model. Then, weighted bilinear interpolation upsamples the feature maps extracted from the maximum pooling layer of the A4 part. The network layout and upsampling scheme ensure equal feature map dimensions so that the feature maps can be merged using an improved wavelet transform. Finally, global average pooling is employed in the classifier block instead of dense layers to reduce the model's parameters and avoid overfitting. The fused feature map is then input into the classifier block for classification. The experimental setup involves data augmentation and transfer learning to prevent overfitting due to the small data sets exploited, while K-fold cross-validation is employed to evaluate the model's performance during training. The experimental results demonstrate that the proposed dual-CNN model fusion framework attains a classification accuracy higher than current techniques: 4.3% higher than Alexnet, 2.5% than VGG16, 2.9% than Inception v3, 2.2% than VGG19, 3.6% than Resnet50, 3% than Resnet101, and 0.7% and 1.2% than the conventional dual-CNN fusion frameworks 1 and 2, respectively, proving the effectiveness of the proposed strategy.
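The paper's wavelet-based fusion and exact branch design are not reproduced here; the overall pipeline it describes (a pre-trained VGG16 branch, a second smaller branch, bilinear interpolation to align the feature maps, fusion, and a GAP-based classifier instead of dense layers) could be sketched roughly as follows. A plain concatenation stands in for the improved wavelet fusion, and an invented small convolutional stack stands in for the A4 network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class DualBranchFusion(nn.Module):
    """Fuse features from a VGG16 branch and a small second branch, classify with GAP."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.vgg_features = vgg16(weights=None).features      # pool5 output: 512 channels
        # (in practice the VGG16 branch would load pre-trained ImageNet weights)
        self.small_branch = nn.Sequential(                     # stand-in for the "A4" branch
            nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Conv2d(512 + 128, num_classes, kernel_size=1)

    def forward(self, x):
        a = self.vgg_features(x)                               # (N, 512, H/32, W/32)
        b = self.small_branch(x)
        b = F.interpolate(b, size=a.shape[-2:], mode="bilinear",
                          align_corners=False)                 # align spatial sizes
        fused = torch.cat([a, b], dim=1)                       # simple fusion by concatenation
        return self.classifier(fused).mean(dim=(2, 3))         # GAP instead of dense layers

model = DualBranchFusion(num_classes=10)
print(model(torch.randn(1, 3, 224, 224)).shape)                # torch.Size([1, 10])
```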


Subject(s)
Aluminum, Neural Networks, Computer, Wavelet Analysis
11.
Biomed Signal Process Control ; 68: 102583, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33828610

ABSTRACT

Due to an unforeseen turn of events, our world has undergone another global pandemic from a highly contagious novel coronavirus, named COVID-19. The novel virus inflames the lungs similarly to pneumonia, making it challenging to diagnose. Currently, the common standard to diagnose the virus's presence in an individual is a molecular real-time Reverse-Transcription Polymerase Chain Reaction (rRT-PCR) test on fluids acquired through nasal swabs. Such a test is difficult to acquire in most underdeveloped countries, which have few experts who can perform it. As a substitute, the widely available Chest X-Ray (CXR) became an alternative to rule out the virus. However, such a method does not come easy, as the virus still possesses unknown characteristics that even experienced radiologists and other medical experts find difficult to diagnose through CXRs. Several studies have recently used computer-aided methods to automate and improve such diagnosis of CXRs through Artificial Intelligence (AI) based on computer vision and Deep Convolutional Neural Networks (DCNN), some of which require heavy processing costs and other tedious methods to produce. Therefore, this work proposed Fused-DenseNet-Tiny, a lightweight DCNN model based on a truncated and concatenated densely connected neural network (DenseNet). The model was trained to learn CXR features based on transfer learning, partial layer freezing, and feature fusion. Upon evaluation, the proposed model achieved a remarkable 97.99% accuracy, with only 1.2 million parameters and a shorter end-to-end structure. It has also shown better performance than some existing studies and other massive state-of-the-art models that diagnosed COVID-19 from CXRs.
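The abstract mentions transfer learning with partial layer freezing; a common way to realize this is to load a pre-trained DenseNet, freeze the earlier blocks, and fine-tune only the later layers plus a new classification head. The sketch below illustrates that recipe with torchvision's DenseNet-121 — the actual truncation point and head of Fused-DenseNet-Tiny are not given in the abstract, so the split point and class count here are assumptions.

```python
import torch.nn as nn
from torchvision.models import densenet121

def build_partially_frozen_densenet(num_classes=3, trainable_from="denseblock3"):
    """Load a DenseNet, freeze the early layers, fine-tune the rest plus a new head.

    `trainable_from` names the first child of `model.features` that stays trainable;
    everything before it keeps its (pre-trained) weights frozen.
    """
    model = densenet121(weights=None)   # in practice: load ImageNet weights for transfer learning
    freeze = True
    for name, child in model.features.named_children():
        if name == trainable_from:
            freeze = False
        for p in child.parameters():
            p.requires_grad = not freeze
    # Replace the ImageNet classifier with a head for the CXR classes
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

model = build_partially_frozen_densenet()
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```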

12.
Brief Bioinform ; 22(5)2021 09 02.
Article in English | MEDLINE | ID: mdl-33498086

ABSTRACT

Transcription factors (TFs) play an important role in regulating gene expression, thus identification of the regions bound by them has become a fundamental step for molecular and cellular biology. In recent years, an increasing number of deep learning (DL) based methods have been proposed for predicting TF binding sites (TFBSs) and achieved impressive prediction performance. However, these methods mainly focus on predicting the sequence specificity of TF-DNA binding, which is equivalent to a sequence-level binary classification task, and fail to identify motifs and TFBSs accurately. In this paper, we developed a fully convolutional network coupled with global average pooling (FCNA), which by contrast is equivalent to a nucleotide-level binary classification task, to roughly locate TFBSs and accurately identify motifs. Experimental results on human ChIP-seq datasets show that FCNA outperforms other competing methods significantly. Besides, we find that the regions located by FCNA can be used by motif discovery tools to further refine the prediction performance. Furthermore, we observe that FCNA can accurately identify TF-DNA binding motifs across different cell lines and infer indirect TF-DNA bindings.
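FCNA's architecture is not detailed in the abstract; the general pattern it relies on — a fully convolutional network over one-hot DNA that emits a score per nucleotide, so binding sites can be localized rather than only classifying the whole sequence, with global average pooling still providing a sequence-level output — could be sketched as follows. Layer counts and kernel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NucleotideFCN(nn.Module):
    """Per-nucleotide binding score from one-hot encoded DNA (A, C, G, T).

    Because every layer is convolutional, the output length follows the input
    length, giving a score per position instead of a single sequence label;
    global average pooling over positions still yields a sequence-level score.
    """
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(4, channels, kernel_size=15, padding=7), nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size=15, padding=7), nn.ReLU(inplace=True),
            nn.Conv1d(channels, 1, kernel_size=1),           # one logit per nucleotide
        )

    def forward(self, x):                                    # x: (N, 4, L) one-hot DNA
        per_base = self.body(x).squeeze(1)                   # (N, L) nucleotide-level logits
        per_seq = per_base.mean(dim=1)                       # GAP -> sequence-level logit
        return per_base, per_seq

seq = torch.randn(2, 4, 101)                                 # two 101-bp one-hot sequences
per_base, per_seq = NucleotideFCN()(seq)
print(per_base.shape, per_seq.shape)                         # torch.Size([2, 101]) torch.Size([2])
```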


Subject(s)
Chromatin Immunoprecipitation Sequencing, Neural Networks, Computer, Response Elements, Sequence Analysis, DNA, Sequence Analysis, Protein, Transcription Factors, A549 Cells, Amino Acid Sequence, Humans, MCF-7 Cells, Transcription Factors/genetics, Transcription Factors/metabolism
13.
BMC Med Imaging ; 20(1): 83, 2020 07 22.
Article in English | MEDLINE | ID: mdl-32698839

ABSTRACT

BACKGROUND: Colonic polyps are prone to becoming cancerous, especially those with a large diameter, a large number, and atypical hyperplasia. If colonic polyps are not treated at an early stage, they are likely to develop into colon cancer. Colonoscopy is easily limited by the operator's experience, and factors such as inexperience and visual fatigue directly affect the accuracy of diagnosis. In cooperation with Hunan Children's Hospital, we proposed and improved a deep learning approach with global average pooling (GAP) in colonoscopy for assisted diagnosis. Our approach can prompt endoscopists in real time to pay attention to polyps that might otherwise be ignored, improve the detection rate, reduce missed diagnoses, and improve the efficiency of medical diagnosis. METHODS: We selected colonoscopy images from the gastrointestinal endoscopy room of Hunan Children's Hospital to form the colonic polyp datasets, and applied deep learning-based image classification to the classification of colonic polyps. The classic networks we used are VGGNets and ResNets. By using global average pooling, we proposed the improved approaches VGGNets-GAP and ResNets-GAP. RESULTS: The accuracies of all models on the datasets exceed 98%. The TPR and TNR are above 96% and 98%, respectively. In addition, the VGGNets-GAP networks not only have high classification accuracies but also have far fewer parameters than VGGNets. CONCLUSIONS: The experimental results show that the proposed approach is effective for the automatic detection of colonic polyps. The innovations of our method are twofold: (1) the detection accuracy of colonic polyps has been improved, and (2) our approach reduces memory consumption and makes the model lightweight. Compared with the original VGG networks, the parameters of our VGG19-GAP networks are greatly reduced.
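The VGGNets-GAP idea described above (swapping VGG's dense head, where most of its parameters live, for global average pooling) can be illustrated with a short PyTorch sketch; the authors' exact head and class count may differ, so the numbers below are only indicative.

```python
import torch.nn as nn
from torchvision.models import vgg19

def vgg19_gap(num_classes=2):
    """Replace VGG19's dense head (~120M parameters) with GAP + a small linear layer."""
    backbone = vgg19(weights=None)             # in practice, load pre-trained weights
    return nn.Sequential(
        backbone.features,                     # convolutional feature extractor
        nn.AdaptiveAvgPool2d(1),               # global average pooling: (N, 512, 1, 1)
        nn.Flatten(),                          # (N, 512)
        nn.Linear(512, num_classes),           # tiny classifier instead of three dense layers
    )

def count_params(m):
    return sum(p.numel() for p in m.parameters())

print(count_params(vgg19(weights=None)))       # roughly 144M with the dense head
print(count_params(vgg19_gap()))               # roughly 20M once the head is GAP + linear
```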


Subject(s)
Colonic Polyps/diagnosis, Colonoscopy/methods, Diagnosis, Computer-Assisted/methods, Adolescent, Child, Child, Preschool, China, Databases, Factual, Deep Learning, Female, Humans, Infant, Infant, Newborn, Male, Sensitivity and Specificity
14.
Sensors (Basel) ; 19(19)2019 Sep 25.
Article in English | MEDLINE | ID: mdl-31557958

ABSTRACT

Plant leaf diseases are closely related to people's daily life. Due to the wide variety of diseases, identifying and classifying them by eye is not only time-consuming and labor-intensive but also prone to misidentification, with a high error rate. Therefore, we proposed a deep learning-based method to identify and classify plant leaf diseases. The proposed method takes advantage of the neural network to extract the characteristics of diseased parts and thus classify target disease areas. To address the issues of long training convergence time and an excessive number of model parameters, the traditional convolutional neural network was improved by combining an Inception module, a squeeze-and-excitation (SE) module, and a global pooling layer to identify diseases. Through the Inception structure, the feature data of the convolutional layer were fused at multiple scales to improve accuracy on the leaf disease dataset. Finally, the global average pooling layer was used instead of the fully connected layer to reduce the number of model parameters. Compared with some traditional convolutional neural networks, our model yielded better performance and achieved an accuracy of 91.7% on the test data set. At the same time, the number of model parameters and the training time were also greatly reduced. The experimental classification of plant leaf diseases indicated that our method is feasible and effective.
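The squeeze-and-excitation (SE) module referenced here squeezes each channel to a scalar with global average pooling and learns per-channel weights that rescale the feature map. A standard minimal implementation (the reduction ratio is a common default, not a value from the paper):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: channel-wise attention from globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                       # squeeze: global average pool -> (N, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)   # excitation: per-channel weights
        return x * w                                 # rescale the original feature map

x = torch.randn(1, 64, 56, 56)
print(SEBlock(64)(x).shape)                          # torch.Size([1, 64, 56, 56])
```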


Subject(s)
Neural Networks, Computer, Plant Diseases, Plant Leaves, Image Processing, Computer-Assisted/methods, Plant Diseases/microbiology
15.
J Biomed Inform ; 98: 103271, 2019 10.
Article in English | MEDLINE | ID: mdl-31454648

ABSTRACT

OBJECTIVE: The aim of this study is to analyze and visualize blood pressure (BP) patterns during continuous hemodialysis (HD) sessions, referred to as multiple-session patterns (MSPs), and explore whether deep learning models with MSPs have better performance. MATERIAL AND METHODS: Data from 3.79 million hemodialysis BP records collected from July 30, 2007, to August 25, 2016, were obtained from the health system's electronic health records. We analyzed BP patterns during 36 continuous HD sessions (approximately 3 months) and selected 1311 (survival: 1246; death: 65) end-stage renal disease patients to classify 1-year outcomes (survival or death). Convolution kernels of different sizes were used to construct convolutional neural networks to recognize MSPs and BP patterns during a single HD session, referred to as single-session patterns (SSPs). BP patterns corresponded to convolution kernels and were represented and visualized as the input patches that activate the feature maps most. We used global average pooling (GAP) to measure the overall response of the inputs to each convolution kernel (pattern). The weights of the fully connected layers after GAP can measure the correlations between the convolution kernels (patterns) and the classification results. We solved the problem of data imbalance with a two-phase training strategy. RESULTS: The F1_score was 0.782 ± 0.058 (95% CI) in the models with SSPs and was approximately 19.5% higher (0.977 ± 0.014, 95% CI) in the models with MSPs. CONCLUSIONS: The results indicated that consistent with previous studies, patients with lower BPs and longer HD sessions have better prognoses. BP patterns during continuous HD sessions can represent patients' 1-year mortality risk better than BP patterns during a single HD session and therefore improve the performance of prediction models.


Subject(s)
Blood Pressure Determination/methods, Blood Pressure, Kidney Failure, Chronic/physiopathology, Neural Networks, Computer, Pattern Recognition, Automated, Renal Dialysis/adverse effects, Adolescent, Adult, Aged, Aged, 80 and over, Deep Learning, Electronic Health Records, Female, Humans, Kidney Failure, Chronic/therapy, Male, Medical Informatics/methods, Middle Aged, Nonlinear Dynamics, Prognosis, Reproducibility of Results, Systole, Young Adult
16.
Sensors (Basel) ; 19(7)2019 Apr 09.
Article in English | MEDLINE | ID: mdl-30970672

ABSTRACT

Intelligent fault diagnosis methods based on deep learning have become a research hotspot in the fault diagnosis field. Automatically and accurately identifying incipient micro-faults of rotating machinery, especially their orientation and severity, is still a major challenge in the field of intelligent fault diagnosis. Traditional fault diagnosis methods rely on manual feature extraction by engineers with prior knowledge. To effectively identify an incipient fault in rotating machinery, this paper proposes a novel method, namely the improved convolutional neural network-support vector machine (CNN-SVM) method. This method improves the traditional convolutional neural network (CNN) model structure by introducing global average pooling and an SVM. Firstly, the temporal and spatial multichannel raw data from multiple sensors are directly input into the improved CNN-Softmax model to train the CNN model. Secondly, the improved CNN is used to extract representative features from the raw fault data. Finally, the extracted sparse representative feature vectors are input into the SVM for fault classification. The proposed method is applied to the diagnosis of a rolling bearing using multichannel vibration signal monitoring data. The results confirm that the proposed method is more effective than other existing intelligent diagnosis methods, including SVM, K-nearest neighbor, back-propagation neural network, deep BP neural network, and traditional CNN.
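The pipeline described above (an improved CNN with global average pooling as the feature extractor, an SVM as the final fault classifier) maps onto a simple two-stage recipe: pool the CNN's last feature maps into fixed-length vectors, then fit an SVM on those vectors. A hedged sketch with placeholder data and an illustrative 1D CNN — in the paper the CNN is first trained with a softmax head, which this sketch omits:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

# Illustrative 1D CNN feature extractor ending in global average pooling.
feature_net = nn.Sequential(
    nn.Conv1d(in_channels=3, out_channels=32, kernel_size=64, stride=8), nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),   # GAP over time -> one value per channel
    nn.Flatten(),              # (N, 64) feature vectors
)

# Placeholder data: 200 multichannel vibration segments, 3 sensors x 2048 samples.
signals = torch.randn(200, 3, 2048)
labels = np.random.randint(0, 4, size=200)      # e.g., four fault classes

with torch.no_grad():                           # extract fixed-length feature vectors
    features = feature_net(signals).numpy()

svm = SVC(kernel="rbf", C=1.0)                  # final fault classifier
svm.fit(features, labels)
print(svm.score(features, labels))              # training accuracy on the placeholder data
```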
