Results 1 - 20 of 66
1.
Sci Rep ; 14(1): 21537, 2024 09 15.
Article in English | MEDLINE | ID: mdl-39278949

ABSTRACT

Assisted living facilities cater to the needs of the elderly population, providing assistance and support with day-to-day activities. Fall detection is fundamental to ensuring residents' well-being and safety. Falls are frequent among older persons and can cause severe injuries and complications. Incorporating computer vision (CV) techniques into assisted living environments offers a transformative answer to these issues. By leveraging cameras and sophisticated algorithms, a CV system can monitor residents' movements continuously and identify potential fall events in real time. Driven by deep learning (DL), such a system analyzes complex visual information to detect fall risks or actual falls quickly. Because DL lets the system learn from large amounts of visual data, it can identify falls accurately while minimizing false alarms. Combining CV and DL thus enhances the efficiency and reliability of fall detection and enables proactive intervention, considerably decreasing response times in emergencies. This study introduces a new Deep Feature Fusion with Computer Vision for Fall Detection and Classification (DFFCV-FDC) technique. The primary purpose of the DFFCV-FDC approach is to employ CV for detecting fall events. The approach first uses Gaussian filtering (GF) for noise eradication. A deep feature fusion process comprising MobileNet, DenseNet and ResNet models follows. To improve performance, hyperparameters are selected with an improved pelican optimization algorithm (IPOA). Finally, falls are detected using a denoising autoencoder (DAE) model. The DFFCV-FDC methodology was evaluated on a benchmark fall database, and a wide-ranging comparative study showed its superiority over existing techniques.
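A minimal sketch of the deep-feature-fusion stage described above, assuming a PyTorch/torchvision setting: frames are smoothed with a Gaussian filter, and pooled embeddings from MobileNet, DenseNet and ResNet backbones are concatenated for a downstream detector. The sigma value and backbone variants are illustrative assumptions; the IPOA hyperparameter search and the DAE detector are omitted.

import torch
import torchvision.models as models
from scipy.ndimage import gaussian_filter

def gaussian_denoise(frame, sigma=1.0):
    # Smooth the two spatial axes of an HxWxC frame; channels untouched.
    return gaussian_filter(frame, sigma=(sigma, sigma, 0))

class FusionBackbone(torch.nn.Module):
    """Concatenates pooled MobileNet, DenseNet and ResNet embeddings."""
    def __init__(self):
        super().__init__()
        self.mobilenet = models.mobilenet_v2(weights="DEFAULT").features
        self.densenet = models.densenet121(weights="DEFAULT").features
        self.resnet = torch.nn.Sequential(
            *list(models.resnet50(weights="DEFAULT").children())[:-1])
        self.pool = torch.nn.AdaptiveAvgPool2d(1)

    def forward(self, x):                      # x: (B, 3, H, W)
        feats = [self.pool(self.mobilenet(x)),  # (B, 1280, 1, 1)
                 self.pool(self.densenet(x)),   # (B, 1024, 1, 1)
                 self.resnet(x)]                # (B, 2048, 1, 1)
        return torch.cat([f.flatten(1) for f in feats], dim=1)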


Subject(s)
Accidental Falls, Assisted Living Facilities, Deep Learning, Humans, Accidental Falls/prevention & control, Aged, Algorithms
2.
Transfusion ; 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39268576

ABSTRACT

BACKGROUND: Deep learning methods are revolutionizing natural science. In this study, we aim to apply such techniques to develop blood type prediction models based on inexpensive, easily scalable screening-array genotyping platforms. METHODS: Combining existing blood types from blood banks with imputed screening-array genotypes for ~111,000 Danish and 1168 Finnish blood donors, we used deep learning techniques to train and validate blood type prediction models for 36 antigens in 15 blood group systems. To account for missing genotypes, an initial denoising autoencoder step was used, followed by a convolutional neural network blood type classifier. RESULTS: Two-thirds of the trained blood type prediction models demonstrated an F1-accuracy above 99%. Antigens with low or high frequencies (e.g., Cw), small training cohorts (e.g., Cob), or very complicated genetic underpinnings (e.g., RhD) proved more challenging for high-accuracy (>99%) DL modeling. However, in the Danish cohort only 4 of the 36 models (Cob, Cw, D-weak, Kpa) failed to achieve a prediction F1-accuracy above 97%. This high predictive performance was replicated in the Finnish cohort. DISCUSSION: High accuracy across a variety of blood groups demonstrates the viability of deep learning-based blood type prediction from array chip genotypes, even for blood groups with nontrivial genetic underpinnings. These techniques are suitable for aiding the identification of blood donors with rare blood types by greatly narrowing the pool of candidate donors before clinical-grade confirmation.
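A hedged sketch of the two-stage idea (not the authors' code): a dense denoising autoencoder imputes missing genotype dosages, and a small 1-D CNN then classifies an antigen. Layer widths, the missing-value encoding (-1) and the class count are assumptions.

import torch
import torch.nn as nn

class GenotypeDAE(nn.Module):
    """Reconstructs genotype dosages from vectors with missing calls."""
    def __init__(self, n_snps, latent=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_snps, 1024), nn.ReLU(),
                                 nn.Linear(1024, latent), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(latent, 1024), nn.ReLU(),
                                 nn.Linear(1024, n_snps))

    def forward(self, x):            # x: (B, n_snps), missing coded as -1
        return self.dec(self.enc(x))

class AntigenCNN(nn.Module):
    """Classifies imputed dosage vectors into antigen phenotypes."""
    def __init__(self, n_snps, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveMaxPool1d(64), nn.Flatten(),
            nn.Linear(16 * 64, n_classes))

    def forward(self, x):            # x: (B, n_snps) imputed dosages
        return self.net(x.unsqueeze(1))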

3.
Comput Biol Med ; 179: 108921, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39059210

ABSTRACT

Single-cell RNA sequencing (scRNA-seq) measures gene expression in individual cells, whose profiles reflect the overall characteristics of each cell and facilitate research at the cellular level. However, challenges in scRNA-seq analysis, such as dimensionality reduction of massive data, technical noise, and visualization of single-cell type clustering, make analyzing and processing scRNA-seq data difficult. In this paper, we propose a new single-cell data analysis model using a denoising autoencoder and multi-type graph neural networks (scDMG), which learns cell-cell topology and a latent representation of scRNA-seq data. scDMG introduces the zero-inflated negative binomial (ZINB) model into a denoising autoencoder (DAE) to perform dimensionality reduction and denoising on the raw data. scDMG integrates multiple types of graph neural networks as the encoder to further train the preprocessed data, which better handles various types of scRNA-seq datasets, resolves dropout events in scRNA-seq data, and enables preliminary classification. By applying t-SNE and PCA to the trained data and invoking the Louvain algorithm, scDMG achieves better dimensionality reduction and clustering optimization. Compared with other mainstream scRNA-seq clustering algorithms, scDMG outperforms state-of-the-art methods on various clustering performance metrics and shows better scalability, shorter runtime, and strong clustering results.
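For the ZINB-based DAE component, the zero-inflated negative binomial negative log-likelihood is the standard reconstruction loss; a sketch follows. scDMG's exact parameterization may differ; mu, theta and pi denote the decoder's mean, dispersion and dropout-probability heads.

import torch

def zinb_nll(x, mu, theta, pi, eps=1e-8):
    """Zero-inflated negative binomial NLL, elementwise then averaged."""
    log_theta_mu = torch.log(theta + mu + eps)
    # Negative binomial branch (counts > 0), including -log(1 - pi).
    nb_case = (torch.lgamma(theta + eps) + torch.lgamma(x + 1.0)
               - torch.lgamma(x + theta + eps)
               - theta * (torch.log(theta + eps) - log_theta_mu)
               - x * (torch.log(mu + eps) - log_theta_mu)
               - torch.log(1.0 - pi + eps))
    # Zero branch: structural zero or a zero draw from the NB.
    zero_nb = torch.pow(theta / (theta + mu + eps), theta)
    zero_case = -torch.log(pi + (1.0 - pi) * zero_nb + eps)
    return torch.where(x < 1e-8, zero_case, nb_case).mean()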


Subject(s)
Neural Networks (Computer), RNA Sequence Analysis, Single-Cell Analysis, Single-Cell Analysis/methods, Humans, RNA Sequence Analysis/methods, Algorithms
4.
Sci Rep ; 14(1): 9143, 2024 04 21.
Article in English | MEDLINE | ID: mdl-38644402

ABSTRACT

Hepatitis C, a particularly dangerous form of viral hepatitis caused by hepatitis C virus (HCV) infection, is a major socio-economic and public health problem. With the rapid development of deep learning, it has become common practice to apply deep learning in healthcare to improve the effectiveness and accuracy of disease identification. To improve hepatitis C detection, this study proposes an improved denoising autoencoder (IDAE) and applies it to hepatitis C disease detection. A conventional denoising autoencoder introduces random noise at the input layer of the encoder. However, directly adding random noise may mask certain intrinsic properties of the data, making it difficult to learn deeper features. In this study, the problem of information loss in traditional denoising autoencoders is addressed by incorporating the concept of residual neural networks into an enhanced denoising autoencoder. In our experiments on the open-source Hepatitis C dataset, the enhanced denoising autoencoder showed strong feature-extraction performance. While existing baseline machine learning methods achieve less than 90% accuracy, and ensemble algorithms and traditional autoencoders reach only 95%, the proposed IDAE achieves 99% accuracy on the downstream hepatitis C classification task, a 9% improvement over a single algorithm and nearly a 4% improvement over ensemble algorithms and other autoencoders. These results demonstrate that IDAE can effectively capture key disease features and improve the accuracy of disease prediction on hepatitis C data. This indicates that IDAE has the potential to be widely used in the detection and management of hepatitis C and similar diseases, especially for early warning systems, progression prediction and personalized treatment strategies.
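A minimal sketch of a residual denoising autoencoder in the spirit of IDAE, assuming PyTorch: skip connections carry input information past each block so that the injected corruption does not mask intrinsic structure. Widths, depth and noise level are assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class ResidualDAEBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, dim))

    def forward(self, x):
        return torch.relu(x + self.body(x))   # residual shortcut

class IDAE(nn.Module):
    def __init__(self, n_features, hidden=64, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.proj_in = nn.Linear(n_features, hidden)
        self.blocks = nn.Sequential(ResidualDAEBlock(hidden),
                                    ResidualDAEBlock(hidden))
        self.proj_out = nn.Linear(hidden, n_features)

    def forward(self, x):
        # Corrupt the input, then reconstruct the clean features;
        # train with MSE against the uncorrupted x.
        noisy = x + self.noise_std * torch.randn_like(x)
        return self.proj_out(self.blocks(torch.relu(self.proj_in(noisy))))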


Asunto(s)
Aprendizaje Profundo , Hepatitis C , Redes Neurales de la Computación , Humanos , Hepatitis C/virología , Hepatitis C/diagnóstico , Hepacivirus/aislamiento & purificación , Hepacivirus/genética , Algoritmos
5.
Bioengineering (Basel) ; 11(4)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38671757

ABSTRACT

Many new reconstruction techniques have been deployed to enable low-dose CT examinations. Such reconstruction techniques exhibit nonlinear properties, which strengthens the need for task-based measures of image quality. The Hotelling observer (HO) is the optimal linear observer and provides a lower bound on the Bayesian ideal observer's detection performance. However, its computational complexity impedes widespread practical use. To address this issue, we proposed a self-supervised learning (SSL)-based model observer that provides accurate estimates of HO performance in very low-dose chest CT images. Our approach involves a two-stage model combining a convolutional denoising autoencoder (CDAE) for feature extraction and dimensionality reduction with a support vector machine for classification. To evaluate this approach, we conducted signal detection tasks employing chest CT images with different noise structures generated by computer-based simulations, and compared the approach with two supervised learning-based methods: a single-layer neural network (SLNN) and a convolutional neural network (CNN). The results showed that the CDAE-based model achieved detection performance similar to the HO and outperformed both the SLNN and the CNN when a reduced number of training images was considered. The proposed approach holds promise for optimizing low-dose CT protocols across scanner platforms.
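A hedged sketch of the two-stage model observer, assuming PyTorch and scikit-learn: a convolutional DAE compresses CT patches to a latent code, and an SVM separates signal-present from signal-absent codes. Channel counts and kernel sizes are illustrative, not the paper's settings.

import torch.nn as nn
from sklearn.svm import SVC

class CDAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))

    def forward(self, x):            # x: (B, 1, H, W) noisy patches
        return self.dec(self.enc(x))

# After training the CDAE on noisy patches:
# codes = cdae.enc(patches).flatten(1).detach().numpy()
# clf = SVC(kernel="rbf").fit(codes, labels)   # signal present/absent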

6.
Sensors (Basel) ; 24(6)2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38544221

ABSTRACT

The BeiDou Navigation Satellite System (BDS) provides real-time absolute positioning services to users around the world and plays a key role in the rapidly evolving field of autonomous driving. In complex urban environments, BDS positioning accuracy often suffers large deviations due to non-line-of-sight (NLOS) signals. Deep learning (DL) methods have shown strong capabilities in detecting complex and variable NLOS signals, but they still face the following limitations. On the one hand, supervised learning methods require labeled samples, which runs into the bottleneck of building databases with large numbers of labels. On the other hand, the collected data tend to carry varying degrees of noise, leading to low accuracy and poor generalization of the detection model, especially when the environment around the receiver changes. In this article, we propose a novel deep neural architecture named convolutional denoising autoencoder network (CDAENet) to detect NLOS signals in urban forest environments. Specifically, we first design a denoising autoencoder based on unsupervised DL to reduce the dimensionality of long time-series signals and extract deep features from the data; introducing a certain amount of noise into the input data also improves the model's robustness in identifying noisy data. Then, an MLP is used to identify the non-linearity of the BDS signal. Finally, the performance of the proposed CDAENet model is validated on a real urban forest dataset. The experimental results show that the satellite detection accuracy of our algorithm is more than 95%, which is about an 8% improvement over existing machine-learning-based methods and about a 3% improvement over deep-learning-based approaches.

7.
Brief Bioinform ; 25(2)2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38493338

ABSTRACT

In recent years, there has been a growing trend toward parallel clustering analysis of single-cell RNA-seq (scRNA) and single-cell Assay for Transposase-Accessible Chromatin (scATAC) data. However, prevailing methods often treat these two data modalities as equals, neglecting the fact that the scRNA modality holds significantly richer information than the scATAC modality. This disregard prevents the model from fully benefiting from the insights derived from multiple modalities, compromising overall clustering performance. To this end, we propose scEMC, an effective multi-modal clustering model for parallel scRNA and scATAC data. Concretely, we devise a skip aggregation network that simultaneously learns global structural information among cells and integrates data from diverse modalities. To safeguard the quality of the integrated cell representation against the influence of sparse scATAC data, we connect the scRNA data with the aggregated representation via a skip connection. Moreover, to effectively fit the real distribution of cells, we introduce a zero-inflated negative binomial-based denoising autoencoder that accommodates corrupted data containing synthetic noise, together with a joint optimization module that employs multiple losses. Extensive experiments underscore the effectiveness of our model. This work contributes significantly to the ongoing exploration of cell subpopulations and tumor microenvironments, and the code is publicly available at https://github.com/DayuHuu/scEMC.


Asunto(s)
Cromatina , ARN Citoplasmático Pequeño , Análisis de Expresión Génica de una Sola Célula , Análisis por Conglomerados , Aprendizaje , ARN Citoplasmático Pequeño/genética , Transposasas , Análisis de Secuencia de ARN , Perfilación de la Expresión Génica
8.
Spectrochim Acta A Mol Biomol Spectrosc ; 311: 124015, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38359515

ABSTRACT

Rice grains are often infested by Sitophilus oryzae due to improper storage, resulting in quality and quantity losses. This study examined the efficacy of terahertz time-domain spectroscopy (THz-TDS) for detecting Sitophilus oryzae at different stages of infestation in stored rice. Terahertz (THz) spectra of rice grains infested by Sitophilus oryzae at different growth stages were acquired. A convolutional denoising autoencoder (CDAE) was then used to reconstruct the THz spectra and reduce noise. Finally, a random forest classification (RFC) model was developed to identify infestation levels. Results showed that the RFC model based on the reconstructed second-order derivative spectrum (accuracy 89.13%, specificity 91.38%, sensitivity 88.18%, F1-score 89.16%) performed better than the one based on the original first-order derivative THz spectrum (accuracy 84.78%, specificity 86.75%, sensitivity 86.36%, F1-score 85.87%). In addition, the convolutional layers inside the CDAE were visualized using feature maps to explain the improvement, illustrating that the CDAE can eliminate noise in the spectral data. Overall, THz spectra reconstructed with the CDAE provide a novel and effective method for THz detection of infested grains.
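A hedged sketch of the pipeline, assuming PyTorch and scikit-learn: a 1-D convolutional DAE reconstructs derivative THz spectra, and a random forest classifies infestation levels from the reconstructed spectra. Architecture sizes and n_estimators are assumptions.

import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class SpectraCDAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(1, 8, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, 5, stride=2, padding=2), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, 4, stride=2, padding=1))

    def forward(self, x):            # x: (B, 1, n_frequency_points)
        return self.dec(self.enc(x))

# denoised = cdae(noisy_spectra).detach().squeeze(1).numpy()
# rfc = RandomForestClassifier(n_estimators=200).fit(denoised, levels)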


Asunto(s)
Oryza , Espectroscopía de Terahertz , Gorgojos , Animales , Oryza/química , Espectroscopía de Terahertz/métodos
9.
Sensors (Basel) ; 23(24)2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38139543

ABSTRACT

Supervisory control and data acquisition (SCADA) systems are widely used in power equipment for condition monitoring. The collected data commonly suffer from missing values of different types and patterns, which degrades data quality and hinders utilization. To address this problem, this paper develops a method that combines an asymmetric denoising autoencoder (ADAE) and a moving average filter (MAF) to perform accurate missing-data imputation. First, convolution and gated recurrent units (GRUs) are applied in the encoder of the ADAE, while the decoder still uses fully connected layers, forming an asymmetric network structure. The ADAE extracts local periodic and temporal features from monitoring data and then decodes the features to impute multiple types of missing data. On this basis, exploiting the continuity of power data in the time domain, the MAF fuses prior knowledge from the neighborhood of missing data to further refine the imputed values. Case studies show that the developed method achieves greater accuracy than existing models, and experiments under different scenarios demonstrate that the MAF-ADAE method applies to actual power equipment monitoring data imputation.
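A minimal sketch of the MAF refinement stage, under the assumption that the imputed series is 1-D and the window size is a free parameter: each DAE-imputed point is replaced by the centered moving average of its neighborhood, while observed points are left untouched.

import numpy as np

def maf_refine(series, missing_mask, window=5):
    """Secondary optimization of DAE-imputed values with a moving average.

    series: 1-D array already filled by the DAE.
    missing_mask: boolean array, True where values were imputed.
    """
    kernel = np.ones(window) / window
    smoothed = np.convolve(series, kernel, mode="same")
    refined = series.copy()
    refined[missing_mask] = smoothed[missing_mask]  # keep observed samples
    return refined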

10.
J Integr Bioinform ; 20(3)2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37978847

ABSTRACT

Bacillus strains are ubiquitous in the environment and are widely used in the microbiological industry as valuable enzyme sources, as well as in agriculture to stimulate plant growth. The Bacillus genus comprises several closely related groups of species, whose rapid classification remains challenging with existing methods. Techniques based on MALDI-TOF MS data analysis hold significant promise for fast and precise classification of microbial strains at both the genus and species levels. In previous work, we proposed a geometric approach to Bacillus strain classification based on mass spectra analysis via the centroid method (CM). One limitation of such methods is noise in the MS spectra. In this study, we used a denoising autoencoder (DAE) to improve bacterial classification accuracy under noisy MS-spectra conditions: the DAE converts noisy MS spectra into latent variables representing molecular patterns in the original MS data, and the random forest method then classifies bacterial strains by these latent variables. Comparison of DAE-RF with the CM method on artificially noised test samples showed that DAE-RF offers higher noise robustness. Hence, the DAE-RF method could be used for noise-robust, fast and accurate classification of Bacillus species from MALDI-TOF MS data.


Asunto(s)
Bacillus , Espectrometría de Masa por Láser de Matriz Asistida de Ionización Desorción/métodos , Bacterias
11.
Comput Biol Med ; 166: 107553, 2023 Sep 30.
Article in English | MEDLINE | ID: mdl-37806059

ABSTRACT

BACKGROUND: The denoising autoencoder (DAE) is commonly used to denoise bio-signals such as electrocardiogram (ECG) signals through dimensionality reduction. Typically, the DAE model must be trained using correlated input segments, such as QRS-aligned segments or long ECG segments. However, using long ECG segments as input can result in a complex, deep DAE model that requires many hidden layers to achieve a low-dimensional representation, which is a major drawback. METHODS: This work proposes a novel DAE model, called running DAE (RunDAE), for denoising short ECG segments without relying on an R-peak detection algorithm for alignment. The proposed RunDAE model employs a sample-by-sample processing approach, exploiting the correlation between consecutive, overlapped ECG segments. The performance of the classical DAE and RunDAE models, with convolutional and dense layers respectively, is evaluated using corrupted QRS-aligned and non-aligned ECG segments with physical noise (motion artifacts, electrode movement, baseline wander) and simulated noise (Gaussian white noise). RESULTS: The simulation results indicate that 1. QRS-aligned segments are preferable to non-aligned segments, 2. the RunDAE model outperforms the classical DAE model in denoising ECG signals, especially when using dense layers and QRS-aligned segments, 3. training the RunDAE models with both normal and arrhythmic ECG signals enhances the model's capabilities, and 4. the RunDAE is a multistage, non-causal, nonlinear adaptive filter. CONCLUSION: A shallow learning model consisting of a couple of hidden layers can achieve outstanding denoising performance using only the correlation among neighboring samples.
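One way to read the sample-by-sample idea, sketched below under assumptions about segment length and overlap: every length-w window is denoised by the trained model, and the overlapping estimates for each sample are averaged.

import numpy as np

def running_denoise(signal, dae, w=64):
    """`dae` maps a (w,) noisy segment to a (w,) denoised segment."""
    acc = np.zeros_like(signal, dtype=float)
    cnt = np.zeros_like(signal, dtype=float)
    for start in range(len(signal) - w + 1):
        acc[start:start + w] += dae(signal[start:start + w])
        cnt[start:start + w] += 1.0
    return acc / np.maximum(cnt, 1.0)  # average overlapping estimates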

12.
Sensors (Basel) ; 23(12)2023 Jun 13.
Article in English | MEDLINE | ID: mdl-37420709

ABSTRACT

In indoor environments, estimating location from a received signal strength indicator (RSSI) is difficult because of noise from signals reflected and refracted by walls and obstacles. In this study, we used a denoising autoencoder (DAE) to remove noise from the RSSI of Bluetooth Low Energy (BLE) signals to improve localization performance. It is also known that the RSSI degrades sharply because noise grows roughly in proportion to the square of the distance. To remove such noise effectively, we propose adaptive noise generation schemes that train the DAE model to reflect the characteristic that noise grows considerably relative to the signal as the distance between the terminal and the beacon increases. We compared the model's performance with that of a model trained with Gaussian noise and with other localization algorithms. The results showed an accuracy of 72.6%, a 10.2% improvement over the model trained with Gaussian noise. Furthermore, our model outperformed the Kalman filter in terms of denoising.
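A hedged sketch of adaptive noise generation for DAE training: the corruption standard deviation grows with the square of the beacon-terminal distance, so training pairs mimic the distance-dependent noise described above. The scale factor k is an assumption, not a value from the paper.

import numpy as np

def corrupt_rssi(rssi, distance, k=0.05, rng=None):
    """Add zero-mean Gaussian noise whose std scales with distance**2."""
    rng = rng or np.random.default_rng()
    sigma = k * distance ** 2           # farther beacons get noisier samples
    return rssi + rng.normal(0.0, sigma, size=np.shape(rssi))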


Asunto(s)
Algoritmos , Fenómenos Biológicos , Relación Señal-Ruido , Distribución Normal
13.
Sensors (Basel) ; 23(14)2023 Jul 11.
Article in English | MEDLINE | ID: mdl-37514597

ABSTRACT

Urban intersections are one of the most common sources of traffic congestion. Especially for multiple intersections, an appropriate control method should be able to regulate the traffic flow within the control area. The intersection signal-timing problem is crucial for ensuring efficient traffic operations, the key issues being the choice of a traffic model and the design of an optimization algorithm. This paper therefore establishes an optimization method for signalized intersections that integrates a multi-objective model with an NSGAIII-DAE algorithm. Firstly, the multi-objective model is constructed, including the usual signal-control delay and traffic-capacity indices. In addition, the conflict delay caused by right-turning vehicles crossing straight-going non-motorized vehicles is considered and combined with the proposed algorithm, enabling the traffic model to better balance intersection efficiency without adding infrastructure. Secondly, to address the challenges of diversity and convergence that the classic NSGA-III algorithm faces when solving traffic models with high-dimensional search spaces, a denoising autoencoder (DAE) is adopted to learn a compact representation of the original high-dimensional search space. Genetic operations are performed in the compressed space and then mapped back to the original search space through the DAE, achieving an appropriate balance between local and global search in each iteration. To validate the proposed method, numerical experiments were conducted using actual traffic data from intersections in Jinzhou, China. The numerical results show that the signal-control delay and conflict delay are significantly reduced compared with the existing algorithm, with optimal reductions of 33.7% and 31.3%, respectively. The capacity obtained by the proposed method is lower than that of the compared algorithm but still 11.5% higher than that of the current scheme in this case. The comparisons and discussions demonstrate the effectiveness of the proposed method for improving the efficiency of signalized intersections.
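A hedged sketch of the DAE-assisted variation step, assuming encode/decode callables from a trained DAE: parent timing plans are encoded, blended and mutated in the compact latent space, and the offspring are decoded back to the original search space. The blend crossover and Gaussian mutation shown here are simplifications of NSGA-III's operators.

import numpy as np

def latent_variation(parents, encode, decode, mut_sigma=0.1, rng=None):
    """Crossover and mutation in the DAE's latent space."""
    rng = rng or np.random.default_rng()
    z = encode(parents)                        # (n, latent_dim)
    mates = z[rng.permutation(len(z))]         # random pairing
    alpha = rng.uniform(size=(len(z), 1))
    children = alpha * z + (1 - alpha) * mates           # blend crossover
    children += rng.normal(0.0, mut_sigma, children.shape)  # mutation
    return decode(children)                    # back to timing vectors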

14.
Sensors (Basel) ; 23(14)2023 Jul 22.
Article in English | MEDLINE | ID: mdl-37514900

ABSTRACT

Recently, remarkable successes have been achieved in the quality assurance of automotive software systems (ASSs) through the use of real-time hardware-in-the-loop (HIL) simulation. The HIL platform enables safe, flexible and reliable realistic simulation during the system development process. However, notwithstanding the test automation capability, HIL test executions generate large amounts of recorded data. Expert-knowledge-based approaches to analyzing these recordings in order to detect and identify faults are costly in time and effort and are difficult to scale. Therefore, in this study, a novel deep learning-based methodology is proposed so that faults in automotive sensor signals can be detected and identified efficiently and automatically, without human intervention. Concretely, a hybrid GRU-based denoising autoencoder (GRU-based DAE) model combined with the k-means algorithm is developed for the fault-detection and clustering problem in sequential data. Based on real-time historical data, not only individual faults but also unknown simultaneous faults under noisy conditions can thus be accurately detected and clustered. The applicability and advantages of the proposed method for the HIL testing process are demonstrated by two automotive case studies: a high-fidelity gasoline engine and a vehicle dynamics system, together with an entire vehicle model. The results show the superiority of the proposed architecture over other autoencoder variants in terms of reconstruction error under several noise levels, and the validation results indicate that the proposed model achieves high detection and clustering accuracy for unknown faults compared to stand-alone techniques.
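A minimal sketch of a GRU-based denoising autoencoder with k-means on the latent codes, assuming PyTorch and scikit-learn; the hidden size, cluster count and the exact wiring between reconstruction and clustering are assumptions rather than the paper's design.

import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class GRUDAE(nn.Module):
    def __init__(self, n_feats, hidden=32):
        super().__init__()
        self.enc = nn.GRU(n_feats, hidden, batch_first=True)
        self.dec = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_feats)

    def forward(self, x):                 # x: (B, T, n_feats), noisy
        _, h = self.enc(x)                # h: (1, B, hidden)
        rep = h.transpose(0, 1).repeat(1, x.size(1), 1)  # (B, T, hidden)
        y, _ = self.dec(rep)
        return self.out(y), h.squeeze(0)  # reconstruction, latent code

# _, codes = model(noisy_batch)
# labels = KMeans(n_clusters=4).fit_predict(codes.detach().numpy())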

15.
Multimed Tools Appl ; : 1-22, 2023 Mar 11.
Article in English | MEDLINE | ID: mdl-37362667

ABSTRACT

The goal of medical visual question answering (Med-VQA) is to correctly answer a clinical question posed about a medical image. Medical images are fundamentally different from images in the general domain, so applying general-domain Visual Question Answering (VQA) models to the medical domain is not feasible. Furthermore, the large-scale data required by VQA models are rarely available in the medical domain. Existing approaches to medical visual question answering often rely on transfer learning with external data to generate good image feature representations and use cross-modal fusion of visual and language features to compensate for the lack of labelled data. This research provides a new parallel multi-head attention framework (MaMVQA) for dealing with Med-VQA without the use of external data. The proposed framework addresses image feature extraction using an unsupervised denoising autoencoder (DAE) and language feature extraction using term-weighted question embedding. In addition, we present qf-MI, a unique supervised term-weighting (STW) scheme based on mutual information (MI) between a word and the corresponding class label. Extensive experimental findings on the VQA-RAD public medical VQA benchmark show that the proposed methodology outperforms previous state-of-the-art methods in accuracy while requiring no external data to train the model. Remarkably, the MaMVQA model achieved significantly increased accuracy in predicting answers to both close-ended (78.68%) and open-ended (55.31%) questions. An extensive set of ablations is also studied to demonstrate the significance of the individual components of the system.

16.
Brief Bioinform ; 24(3)2023 05 19.
Article in English | MEDLINE | ID: mdl-36971393

ABSTRACT

MOTIVATION: A large number of studies have shown that circular RNA (circRNA) affects biological processes by competitively binding miRNA, providing a new perspective for the diagnosis and treatment of human diseases. Therefore, exploring potential circRNA-miRNA interactions (CMIs) is an important and urgent task. Although some computational methods have been tried, their performance is limited by incomplete feature extraction in sparse networks and low computational efficiency on lengthy data. RESULTS: In this paper, we propose JSNDCMI, which combines a multi-structure feature extraction framework with a denoising autoencoder (DAE) to meet the challenge of CMI prediction in sparse networks. In detail, JSNDCMI integrates functional similarity and local topological structure similarity in the CMI network through the multi-structure feature extraction framework, forces the neural network to learn robust representations of features through the DAE, and finally uses a gradient boosting decision tree classifier to predict potential CMIs. JSNDCMI achieves the best performance in 5-fold cross-validation on all datasets. In the case study, seven of the top 10 CMIs with the highest scores were verified in PubMed. AVAILABILITY: The data and source code can be found at https://github.com/1axin/JSNDCMI.


Asunto(s)
MicroARNs , Humanos , MicroARNs/genética , ARN Circular , Redes Neurales de la Computación , Programas Informáticos , Biología Computacional/métodos
17.
Biomed Tech (Berl) ; 68(3): 275-284, 2023 Jun 27.
Article in English | MEDLINE | ID: mdl-36724089

ABSTRACT

OBJECTIVES: A denoising autoencoder (DAE) with a single hidden layer of neurons can recode a signal, i.e., convert the original signal into a noise-reduced one. The DAE approach has shown good performance in denoising bio-signals such as electrocardiograms (ECG). In this paper, we study the effect of correlated, uncorrelated and jittered datasets on the performance of the DAE model. METHODS: Vectors of multiple concatenated ECG segments from simultaneously recorded Einthoven leads I, II and III are used to establish the following dataset cases: (1) correlated, (2) uncorrelated, and (3) jittered. We build on our previous work on finding the optimal number of hidden neurons, with respect to signal quality and computational burden, by applying Akaike's information criterion. To evaluate the DAE, these datasets are corrupted with six types of noise, namely mixed noise (MX), motion artifact noise (MA), electrode movement (EM), baseline wander (BW), Gaussian white noise (GWN) and high-frequency noise (HFN), to simulate real-case scenarios. Spectral analysis is used to study how noise whose power spectrum overlaps with that of the wanted signal affects DAE performance. RESULTS: The simulation results show (a) that the number of hidden neurons needed to denoise multiple correlated ECGs is much lower than for jittered signals, (b) that QRS-complex-based ECG alignment is preferable, and (c) that noise whose power spectrum only slightly overlaps with the signal, like BW and HFN, can be removed easily with a sufficient number of neurons, while noise with a completely overlapping spectrum, like GWN, requires a very low-dimensional and thus coarser reduction to recover the signal. CONCLUSIONS: The performance of the DAE model, in terms of signal-to-noise ratio improvement and the required number of hidden neurons, can be improved by utilizing the correlation among simultaneous Einthoven I, II and III records.


Asunto(s)
Algoritmos , Procesamiento de Señales Asistido por Computador , Simulación por Computador , Electrocardiografía/métodos , Relación Señal-Ruido
18.
Brief Bioinform ; 24(1)2023 01 19.
Article in English | MEDLINE | ID: mdl-36631401

ABSTRACT

Advances in single-cell ribonucleic acid sequencing (scRNA-seq) allow researchers to explore cellular heterogeneity and human diseases at cell resolution. Cell clustering is a prerequisite in scRNA-seq analysis since it can recognize cell identities. However, the high dimensionality, noise and significant sparsity of scRNA-seq data make clustering a big challenge. Although many methods have emerged, they still fail to fully explore the intrinsic properties of cells and the relationships among cells, which seriously affects downstream clustering performance. Here, we propose a new deep contrastive clustering algorithm called scDCCA. It integrates a denoising autoencoder and a dual contrastive learning module into a deep clustering framework to extract valuable features and realize cell clustering. Specifically, to characterize and learn data representations robustly, scDCCA utilizes a denoising zero-inflated negative binomial model-based autoencoder to extract low-dimensional features. Meanwhile, scDCCA incorporates a dual contrastive learning module to capture the pairwise proximity of cells. By increasing the similarity between positive pairs and the difference between negative ones, the contrasts at both the instance and the cluster level help the model learn more discriminative features and achieve better cell segregation. Furthermore, scDCCA joins feature learning with clustering, realizing representation learning and cell clustering in an end-to-end manner. Experimental results on 14 real datasets validate that scDCCA outperforms eight state-of-the-art methods in terms of accuracy, generalizability, scalability and efficiency. Cell visualization and biological analysis demonstrate that scDCCA significantly improves clustering and facilitates downstream analysis of scRNA-seq data. The code is available at https://github.com/WJ319/scDCCA.
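For the instance-level contrast, the widely used normalized-temperature cross-entropy (NT-Xent) loss is a reasonable stand-in; scDCCA's exact formulation may differ. A sketch, where z1 and z2 are embeddings of two views of the same cells:

import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """Instance-level contrastive loss over two augmented views."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / tau                         # scaled cosine similarity
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float("-inf"))         # exclude self-pairs
    # Positive of sample i is the same cell in the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)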


Asunto(s)
Perfilación de la Expresión Génica , Análisis de Expresión Génica de una Sola Célula , Humanos , Perfilación de la Expresión Génica/métodos , Análisis de Secuencia de ARN/métodos , Análisis de la Célula Individual/métodos , Algoritmos , Análisis por Conglomerados
19.
Food Chem ; 409: 135251, 2023 May 30.
Article in English | MEDLINE | ID: mdl-36586261

ABSTRACT

The purpose of this study was to develop a deep learning method involving the wavelet transform (WT) and a stacked denoising autoencoder (SDAE) for extracting deep features for heavy metal lead (Pb) detection in oilseed rape leaves. Firstly, the standard normal variate (SNV) algorithm was established as the best preprocessing algorithm, and the SNV-treated fluorescence spectral data were used for further analysis. Then, WT was used to decompose the SNV-treated fluorescence spectra of oilseed rape leaves, the optimal number of wavelet decomposition layers was obtained for different wavelet basis functions, and the SDAE performed deep feature learning under the optimal wavelet decomposition layer. Finally, with sym7 as the wavelet basis function, the best support vector machine regression (SVR) model achieved prediction-set Rp2, RMSEP and RPD values of 0.9388, 0.0199 mg/kg and 3.275, respectively. The results of this study verify the great potential of fluorescence hyperspectral technology combined with deep learning algorithms to detect heavy metals.
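A hedged sketch of the WT feature step feeding the regressor, assuming PyWavelets and scikit-learn: each SNV-treated spectrum is decomposed with sym7, the coefficients are passed through a separately trained SDAE encoder, and SVR predicts Pb content. The decomposition level and the encoder callable are assumptions.

import numpy as np
import pywt
from sklearn.svm import SVR

def wavelet_features(spectrum, wavelet="sym7", level=4):
    """Concatenate approximation and detail coefficients of one spectrum."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    return np.concatenate(coeffs)

# X = np.stack([sdae_encoder(wavelet_features(s)) for s in spectra])
# model = SVR(kernel="rbf").fit(X, pb_mg_per_kg)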


Asunto(s)
Brassica napus , Aprendizaje Profundo , Espectroscopía Infrarroja Corta/métodos , Imágenes Hiperespectrales , Análisis de los Mínimos Cuadrados , Hojas de la Planta , Algoritmos
20.
Mol Divers ; 27(3): 1333-1343, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35871213

ABSTRACT

Drug-target interaction (DTI) prediction is crucial in the discovery of new drugs. Computational methods can identify new drug-target interactions at low cost and with reasonable accuracy. Recent studies pay increasing attention to machine-learning methods, ranging from matrix factorization to deep learning, for DTI prediction. Since the interaction matrix is often extremely sparse, the prediction performance of matrix factorization-based methods decreases significantly. Therefore, some matrix factorization methods utilize side information to address both the sparsity of the interaction matrix and the cold-start issue. By combining matrix factorization and autoencoders, we propose a hybrid DTI prediction model that simultaneously learns the hidden factors of drugs and targets from their side information and the interaction matrix. The proposed method consists of two steps: preprocessing of the interaction matrix, and the hybrid model. We leverage the similarity matrices of both drugs and targets to address the sparsity of the interaction matrix. Comparison of our approach against other algorithms on the same reference datasets shows good results in terms of the area under the receiver operating characteristic curve and the area under the precision-recall curve. More specifically, the method achieves high accuracy on gold-standard datasets (e.g., Nuclear Receptors, GPCRs, Ion Channels, and Enzymes) with five repetitions of tenfold cross-validation. A graphical abstract illustrates the hybrid matrix factorization and denoising autoencoder model, which uses side information on drugs and targets to predict drug-target interactions.
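A hedged sketch of the hybrid idea, assuming PyTorch: two encoders map drug and target side information (here, rows of their similarity matrices) to latent factors whose inner product reconstructs the interaction matrix. The layer sizes and the use of similarity rows as inputs are assumptions, not the paper's exact design.

import torch
import torch.nn as nn

class HybridMF(nn.Module):
    """Side-information encoders produce the matrix-factorization factors."""
    def __init__(self, n_drugs, n_targets, k=64):
        super().__init__()
        self.drug_enc = nn.Sequential(nn.Linear(n_drugs, 256), nn.ReLU(),
                                      nn.Linear(256, k))
        self.target_enc = nn.Sequential(nn.Linear(n_targets, 256), nn.ReLU(),
                                        nn.Linear(256, k))

    def forward(self, drug_sim, target_sim):
        u = self.drug_enc(drug_sim)        # (n_drugs, k) latent factors
        v = self.target_enc(target_sim)    # (n_targets, k) latent factors
        return torch.sigmoid(u @ v.t())    # predicted interaction scores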


Asunto(s)
Algoritmos , Aprendizaje Automático , Interacciones Farmacológicas , Proyectos de Investigación , Curva ROC