Results 1 - 20 of 66
1.
J Bioinform Comput Biol ; 22(4): 2450020, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39262053

ABSTRACT

Polypharmacy, the use of drug combinations, is an effective approach for treating complex diseases, but it increases the risk of adverse effects. To predict novel polypharmacy side effects based on known ones, many computational methods have been proposed. However, most of them generate deterministic low-dimensional embeddings when modeling the latent space of drugs, which cannot effectively capture potential side effect associations between drugs. In this study, we present SIPSE, a novel approach for predicting polypharmacy side effects. SIPSE integrates single-drug side effect information and drug-target protein data to construct novel drug feature vectors. Leveraging a semi-implicit graph variational auto-encoder, SIPSE models known polypharmacy side effects and generates flexible latent distributions for drug nodes. SIPSE infers the current node distribution by combining the distributions of neighboring nodes with embedding noise. By sampling node embeddings from these distributions, SIPSE effectively predicts polypharmacy side effects between drugs. One key innovation of SIPSE is its incorporation of uncertainty propagation through noise embedding and neighborhood sharing, enhancing its graph analysis capabilities. Extensive experiments on a benchmark dataset of polypharmacy side effects demonstrated that SIPSE significantly outperformed five state-of-the-art methods in predicting polypharmacy side effects.
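The following is a minimal, illustrative sketch of the mechanism this abstract describes: a graph encoder that injects embedding noise, shares information across neighboring drug nodes, samples node embeddings by reparameterization, and scores drug pairs with a bilinear decoder. Layer sizes, names, and the toy graph are assumptions, not the authors' SIPSE code.

```python
# Hedged sketch of a semi-implicit graph variational encoder (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiImplicitGraphEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim, lat_dim, noise_dim=16):
        super().__init__()
        self.lin = nn.Linear(in_dim + noise_dim, hid_dim)
        self.mu = nn.Linear(hid_dim, lat_dim)
        self.logvar = nn.Linear(hid_dim, lat_dim)
        self.noise_dim = noise_dim

    def forward(self, x, adj_norm):
        # Inject embedding noise, then propagate it to neighbors (neighborhood sharing).
        eps = torch.randn(x.size(0), self.noise_dim, device=x.device)
        h = F.relu(self.lin(torch.cat([x, eps], dim=1)))
        h = adj_norm @ h                                   # mix each node with its neighbors
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mu, logvar

def pair_scores(z, rel_weight):
    """Bilinear decoder: probability of a polypharmacy side effect per drug pair."""
    return torch.sigmoid(z @ rel_weight @ z.t())

# Toy usage: 100 drugs, 64-dim feature vectors, 32-dim latent space.
x = torch.randn(100, 64)
adj_norm = torch.eye(100)                                  # stand-in for a normalized drug graph
enc = SemiImplicitGraphEncoder(64, 128, 32)
z, mu, logvar = enc(x, adj_norm)
probs = pair_scores(z, torch.randn(32, 32))
```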


Subject(s)
Computational Biology, Drug-Related Side Effects and Adverse Reactions, Polypharmacy, Computational Biology/methods, Humans, Algorithms
2.
Cureus ; 16(7): e65886, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39219951

ABSTRACT

Introduction: Periodontal bone resorption is a significant dental problem causing tooth loss and impaired oral function. It is influenced by factors such as bacterial plaque, genetic predisposition, smoking, systemic diseases, medications, hormonal changes, and poor oral hygiene. This condition disrupts bone remodeling, favoring resorptive processes. Variational autoencoders (VAEs) can learn the distribution of drug-gene interactions from existing data, identify potential drug targets, and predict therapeutic effects. This study investigates the generation of drug-gene interactions in periodontal bone resorption using VAEs. Methods: A bone-resorptive drug dataset was retrieved from Probes and Drugs and analyzed using Cytoscape (https://cytoscape.org/) and CytoHubba (https://apps.cytoscape.org/apps/cytohubba), powerful tools for studying drug-gene interactions in bone resorption. The dataset was then prepared for matrix representation, with normalized input data, and subsequently divided into training, validation, and testing sets. We then built an encoder-decoder network, defined a loss function, optimized parameters, and fine-tuned hyperparameters. Using VAEs, we generated new drug-gene interactions, assessed model performance, and visualized the latent space with reconstructed drug-gene interactions for further insights. Results: The analysis revealed the top hub genes in drug-gene interactions, including Matrix Metalloproteinase (MMP) 14, MMP 9, HIF1A, STAT1, MAPT, CAS9, MMP2, CASP3, MMP1, and MAK1. The VAE's reconstruction accuracy was measured using mean squared error (MSE), with an average squared difference of 0.077. Additionally, the KL divergence value was 2.349, and the average reconstruction log-likelihood was -246. Conclusion: The generative variational autoencoder model for drug-gene interactions in bone resorption demonstrates high accuracy and reliability in representing complex drug-gene relationships within this context.
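A compact sketch of the workflow outlined in the Methods (normalize an interaction matrix, split it, train a small VAE, and report MSE and KL) is shown below. The toy data, layer sizes, and training length are illustrative assumptions and are not the study's dataset or hyperparameters.

```python
# Hedged VAE training sketch for a drug-gene interaction matrix (illustrative only).
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

interactions = np.random.rand(200, 50).astype("float32")              # drugs x genes (toy data)
interactions = (interactions - interactions.min()) / (np.ptp(interactions) + 1e-8)
train, val = torch.tensor(interactions[:160]), torch.tensor(interactions[160:])

class VAE(nn.Module):
    def __init__(self, d_in, d_lat=8):
        super().__init__()
        self.enc = nn.Linear(d_in, 32)
        self.mu, self.logvar = nn.Linear(32, d_lat), nn.Linear(32, d_lat)
        self.dec = nn.Sequential(nn.Linear(d_lat, 32), nn.ReLU(), nn.Linear(32, d_in))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

model = VAE(train.size(1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    recon, mu, logvar = model(train)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = F.mse_loss(recon, train) + kl                               # reconstruction + KL
    opt.zero_grad(); loss.backward(); opt.step()

recon, mu, logvar = model(val)
print("val MSE:", F.mse_loss(recon, val).item(),
      "val KL:", (-0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())).item())
```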

3.
Magn Reson Med ; 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39221515

ABSTRACT

PURPOSE: To develop an automated deep learning model for MRI-based segmentation and detection of intracranial arterial calcification. METHODS: A novel deep learning model under the variational autoencoder framework was developed. A theoretically grounded dissimilarity loss was proposed to refine network features extracted from MRI and restrict their complexity, enabling the model to learn more generalizable MR features that enhance segmentation accuracy and robustness for detecting calcification on MRI. RESULTS: The proposed method was compared with nine baseline methods on a dataset of 113 subjects and showed superior performance (for segmentation, Dice similarity coefficient: 0.620, area under precision-recall curve [PR-AUC]: 0.660, 95% Hausdorff Distance: 0.848 mm, Average Symmetric Surface Distance: 0.692 mm; for slice-wise detection, F1 score: 0.823, recall: 0.764, precision: 0.892, PR-AUC: 0.853). For clinical needs, statistical tests confirmed agreement between the true calcification volumes and predicted values using the proposed approach. Various MR sequences, namely T1, time-of-flight, and SNAP, were assessed as inputs to the model, and SNAP provided unique and essential information pertaining to calcification structures. CONCLUSION: The proposed deep learning model with a dissimilarity loss to reduce feature complexity effectively improves MRI-based identification of intracranial arterial calcification. It could help establish a more comprehensive and powerful pipeline for vascular image analysis on MRI.
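The abstract does not give the exact form of the proposed dissimilarity loss; the snippet below shows one generic way to penalize redundancy among feature channels (off-diagonal cosine similarity), offered purely as an illustrative stand-in rather than the authors' loss.

```python
# Generic channel-dissimilarity regularizer (stand-in, not the paper's loss).
import torch
import torch.nn.functional as F

def dissimilarity_loss(features):
    """features: (batch, channels, voxels) feature maps, one row per channel."""
    f = F.normalize(features.flatten(2), dim=2)            # unit-norm each channel
    sim = torch.einsum("bcv,bdv->bcd", f, f)               # channel-channel cosine similarities
    off_diag = sim - torch.diag_embed(torch.diagonal(sim, dim1=1, dim2=2))
    return off_diag.pow(2).mean()                          # push channels apart

feats = torch.randn(2, 8, 4096)                            # toy batch of feature maps
print(dissimilarity_loss(feats).item())
```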

4.
Sci Rep ; 14(1): 17881, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095485

ABSTRACT

In situ Electron Energy Loss Spectroscopy (EELS) combined with Transmission Electron Microscopy (TEM) has traditionally been pivotal for understanding how material processing choices affect local structure and composition. However, the ability to monitor and respond to ultrafast transient changes, now achievable with EELS and TEM, necessitates innovative analytical frameworks. Here, we introduce a machine learning (ML) framework tailored for the real-time assessment and characterization of in operando EELS Spectrum Images (EELS-SI). We focus on 2D MXenes as the sample material system, specifically targeting the understanding and control of their atomic-scale structural transformations that critically influence their electronic and optical properties. This approach requires fewer labeled training data points than typical deep learning classification methods. By integrating computationally generated structures of MXenes and experimental datasets into a unified latent space using Variational Autoencoders (VAE) in a unique training method, our framework accurately predicts structural evolutions at latencies pertinent to closed-loop processing within the TEM. This study presents a critical advancement in enabling automated, on-the-fly synthesis and characterization, significantly enhancing capabilities for materials discovery and the precision engineering of functional materials at the atomic scale.

5.
Cell Syst ; 15(8): 725-737.e7, 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39106868

ABSTRACT

Evolution-based deep generative models represent an exciting direction in understanding and designing proteins. An open question is whether such models can learn specialized functional constraints that control fitness in specific biological contexts. Here, we examine the ability of generative models to produce synthetic versions of Src-homology 3 (SH3) domains that mediate signaling in the Sho1 osmotic stress response pathway of yeast. We show that a variational autoencoder (VAE) model produces artificial sequences that experimentally recapitulate the function of natural SH3 domains. More generally, the model organizes all fungal SH3 domains such that locality in the model latent space (but not simply locality in sequence space) enriches the design of synthetic orthologs and exposes non-obvious amino acid constraints distributed near and far from the SH3 ligand-binding site. The ability of generative models to design ortholog-like functions in vivo opens new avenues for engineering protein function in specific cellular contexts and environments.
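The design-by-locality idea described here can be illustrated with a short sketch: encode a natural SH3 sequence, perturb its latent vector slightly, and decode candidate synthetic orthologs. The encoder, decoder, alignment length, and sequence below are toy stand-ins, not the trained model from the study.

```python
# Illustrative latent-space-locality design sketch for SH3-like sequences.
import torch
import torch.nn as nn

ALPHABET = "ACDEFGHIKLMNPQRSTVWY-"
L, K, D = 60, len(ALPHABET), 16          # alignment length, alphabet size, latent dim (assumed)

encoder = nn.Sequential(nn.Flatten(), nn.Linear(L * K, 128), nn.ReLU(), nn.Linear(128, D))
decoder = nn.Sequential(nn.Linear(D, 128), nn.ReLU(), nn.Linear(128, L * K))

def design_near(onehot_seq, n_samples=5, radius=0.1):
    z = encoder(onehot_seq.unsqueeze(0))                   # latent code of a natural domain
    z_samples = z + radius * torch.randn(n_samples, D)     # stay local in latent space
    logits = decoder(z_samples).view(n_samples, L, K)
    idx = logits.argmax(dim=2)                             # most likely residue per column
    return ["".join(ALPHABET[int(i)] for i in row) for row in idx]

natural = torch.zeros(L, K); natural[torch.arange(L), 0] = 1.0   # toy one-hot sequence
print(design_near(natural))
```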


Asunto(s)
Aprendizaje Profundo , Transducción de Señal , Dominios Homologos src , Saccharomyces cerevisiae/genética , Saccharomyces cerevisiae/metabolismo , Proteínas de Saccharomyces cerevisiae/genética , Proteínas de Saccharomyces cerevisiae/metabolismo
6.
Bioengineering (Basel) ; 11(8)2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39199761

ABSTRACT

Soft sensors based on deep learning regression models are promising approaches to predict real-time fermentation process quality measurements. However, experimental datasets are generally sparse and may contain outliers or corrupted data. This leads to insufficient model prediction performance. Therefore, datasets spanning a fully distributed solution space are required to enable effective exploration during model training. In this study, the robustness and predictive capability of the underlying model of a soft sensor were improved by generating synthetic datasets for training. The monitoring of intensified ethanol fermentation is used as a case study. Variational autoencoders were employed to create synthetic datasets, which were then combined with the original (experimental) datasets to train neural network regression models. These models were tested on original versus augmented datasets to assess prediction improvements. Using the augmented datasets, the soft sensor predictive capability improved by 34%, and variability was reduced by 82%, based on R2 scores. The proposed method offers significant time and cost savings for dataset generation for the deep learning modeling of ethanol fermentation and can be easily adapted to other fermentation processes. This work contributes to the advancement of soft sensor technology, providing practical solutions for enhancing reliability and robustness in large-scale production.
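The augmentation step described here can be sketched as follows: sample synthetic process records from a VAE decoder (here an untrained stand-in), merge them with the measured records, and fit a regression model on the combined set. All names, dimensions, and the toy data are assumptions.

```python
# Hedged sketch of VAE-based data augmentation for a soft-sensor regressor.
import numpy as np
import torch
import torch.nn as nn
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

n_feat, n_lat = 6, 3
decoder = nn.Sequential(nn.Linear(n_lat, 32), nn.ReLU(), nn.Linear(32, n_feat + 1))  # stand-in

X_real = np.random.rand(80, n_feat)
y_real = X_real.sum(axis=1) + 0.1 * np.random.randn(80)                # toy measured records

with torch.no_grad():
    synth = decoder(torch.randn(400, n_lat)).numpy()                   # synthetic [features | target] rows
X_aug = np.vstack([X_real, synth[:, :n_feat]])
y_aug = np.concatenate([y_real, synth[:, n_feat]])

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_aug, y_aug)                                                # train on measured + synthetic data
print("R2 on measured data:", r2_score(y_real, model.predict(X_real)))
```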

7.
Biomed Eng Online ; 23(1): 90, 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39217355

ABSTRACT

Medical imaging datasets for research are frequently collected from multiple imaging centers using different scanners, protocols, and settings. These variations affect data consistency and compatibility across different sources. Image harmonization is a critical step to mitigate the effects of factors like inherent differences between various vendors, hardware upgrades, protocol changes, and scanner calibration drift, as well as to ensure consistent data for medical image processing techniques. Given the critical importance and widespread relevance of this issue, a vast array of image harmonization methodologies have emerged, with deep learning-based approaches driving substantial advancements in recent times. The goal of this review paper is to examine the latest deep learning techniques employed for image harmonization by analyzing cutting-edge architectural approaches in the field of medical image harmonization, evaluating both their strengths and limitations. This paper begins by providing a comprehensive fundamental overview of image harmonization strategies, covering three critical aspects: established imaging datasets, commonly used evaluation metrics, and characteristics of different scanners. Subsequently, this paper analyzes recent structural MRI (Magnetic Resonance Imaging) harmonization techniques based on network architecture, network learning algorithm, network supervision strategy, and network output. The underlying architectures include U-Net, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), flow-based generative models, transformer-based approaches, as well as custom-designed network architectures. This paper investigates the effectiveness of Disentangled Representation Learning (DRL) as a pivotal learning algorithm in harmonization. Lastly, the review highlights the primary limitations in harmonization techniques, specifically the lack of comprehensive quantitative comparisons across different methods. The overall aim of this review is to serve as a guide for researchers and practitioners to select appropriate architectures based on their specific conditions and requirements. It also aims to foster discussions around ongoing challenges in the field and shed light on promising future research directions with the potential for significant advancements.


Subject(s)
Deep Learning, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Computer-Assisted Image Processing/methods, Humans, Surveys and Questionnaires
8.
Behav Sci (Basel) ; 14(7)2024 Jun 25.
Article in English | MEDLINE | ID: mdl-39062350

ABSTRACT

Latent variable analysis is an important part of psychometric research. In this context, factor analysis and other related techniques have been widely applied to investigate the internal structure of psychometric tests. However, these methods perform a linear dimensionality reduction under a series of assumptions that cannot always be verified in psychological data. Predictive techniques, such as artificial neural networks, could complement and improve the exploration of latent space, overcoming the limits of traditional methods. In this study, we explore the latent space generated by a particular artificial neural network: the variational autoencoder. This autoencoder can perform a nonlinear dimensionality reduction and encourage the latent features to follow a predefined distribution (usually a normal distribution) by learning the most important relationships hidden in the data. We investigate the capacity of autoencoders to model item-factor relationships in simulated data encompassing linear and nonlinear associations, and we extend the investigation to a real dataset. Results on simulated data show that the variational autoencoder performs similarly to factor analysis when the relationships among observed and latent variables are linear, and that it is able to reproduce the factor scores. Moreover, results on nonlinear data show that, unlike factor analysis, it can also learn to reproduce nonlinear relationships among observed variables and factors, and its factor score estimates are more accurate than those of factor analysis. The real-data results confirm the potential of the autoencoder to reduce dimensionality with mild assumptions on the input data and to recognize the function that links observed and latent variables.
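A toy comparison in the spirit of this study is sketched below: simulate items driven by two latent factors, then recover scores with classical factor analysis and with a small VAE encoder. The architecture, sizes, KL weight, and training length are illustrative assumptions, not the study's settings.

```python
# Hedged sketch: factor analysis vs. VAE latent scores on simulated item responses.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
true_scores = rng.normal(size=(500, 2))                                 # two latent factors
loadings = rng.normal(size=(2, 10))
items = (true_scores @ loadings + 0.3 * rng.normal(size=(500, 10))).astype("float32")

fa_scores = FactorAnalysis(n_components=2).fit_transform(items)        # classical factor scores

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(10, 16)
        self.mu, self.logvar = nn.Linear(16, 2), nn.Linear(16, 2)
        self.dec = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 10))
    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

x = torch.tensor(items)
vae = VAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-2)
for _ in range(300):
    recon, mu, logvar = vae(x)
    loss = F.mse_loss(recon, x) + 1e-3 * (-0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp()))
    opt.zero_grad(); loss.backward(); opt.step()

vae_scores = vae(x)[1].detach().numpy()                                 # encoder means as "factor scores"

def best_abs_corr(est, truth):
    """Best absolute correlation over component pairings (factors are rotation-indeterminate)."""
    return max(abs(np.corrcoef(est[:, i], truth[:, j])[0, 1])
               for i in range(est.shape[1]) for j in range(truth.shape[1]))

print("FA:", best_abs_corr(fa_scores, true_scores), "VAE:", best_abs_corr(vae_scores, true_scores))
```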

9.
Am J Transplant ; 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38901561

ABSTRACT

Generative artificial intelligence (AI), a subset of machine learning that creates new content based on training data, has witnessed tremendous advances in recent years. Practical applications have been identified in health care in general, and there is significant opportunity in transplant medicine for generative AI to simplify tasks in research, medical education, and clinical practice. In addition, patients stand to benefit from patient education that is more readily provided by generative AI applications. This review aims to catalyze the development and adoption of generative AI in transplantation by introducing basic AI and generative AI concepts to the transplant clinician and summarizing its current and potential applications within the field. We provide an overview of applications for the clinician, researcher, educator, and patient. We also highlight the challenges involved in bringing these applications to the bedside and the need for ongoing refinement of generative AI applications to sustainably augment the transplantation field.

10.
Bioengineering (Basel) ; 11(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38927803

ABSTRACT

Screening is critical for the prevention and early detection of cervical cancer, but it is time-consuming and laborious. Supervised deep convolutional neural networks have been developed to automate Pap smear screening, and the results are promising. However, interest in using only normal samples to train deep neural networks has increased owing to the class imbalance problems and high labeling costs that are both prevalent in healthcare. In this study, we introduce a method to learn explainable deep cervical cell representations for Pap smear cytology images based on one-class classification using variational autoencoders. Findings demonstrate that a score can be calculated for cell abnormality without training models with abnormal samples, and we localize abnormality to interpret our results with a novel metric based on the absolute difference in cross-entropy in agglomerative clustering. The best model discriminating squamous cell carcinoma (SCC) from normal samples achieves an area under the receiver operating characteristic curve (AUC) of 0.908±0.003, and the best model discriminating high-grade squamous intraepithelial lesions (HSIL) achieves an AUC of 0.920±0.002. Compared with other clustering methods, our method enhances the V-measure and yields higher homogeneity scores, which more effectively isolate different abnormality regions, aiding in the interpretation of our results. Evaluation on an external dataset shows that our model can discriminate abnormality without the need for additional training of deep models.
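The one-class idea can be sketched generically: a VAE trained only on normal cells assigns each new image an abnormality score from its reconstruction error plus KL term (a negative-ELBO proxy), thresholded on normal validation data. The model below is an untrained stand-in; the paper's exact score, clustering metric, and threshold procedure are not reproduced.

```python
# Hedged one-class abnormality-scoring sketch with a VAE (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, d_in=64 * 64, d_lat=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(d_in, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, d_lat), nn.Linear(256, d_lat)
        self.dec = nn.Sequential(nn.Linear(d_lat, 256), nn.ReLU(), nn.Linear(256, d_in))
    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z).view_as(x), mu, logvar

def abnormality_score(model, images):
    recon, mu, logvar = model(images)
    rec_err = F.mse_loss(recon, images, reduction="none").flatten(1).mean(dim=1)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    return rec_err + kl                                     # higher = more abnormal

model = TinyVAE()                                           # would be trained on normal cells only
cells = torch.rand(8, 1, 64, 64)                            # toy cell crops
threshold = abnormality_score(model, torch.rand(100, 1, 64, 64)).quantile(0.95)
print(abnormality_score(model, cells) > threshold)          # flag suspicious cells
```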

11.
BMC Biomed Eng ; 6(1): 4, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698495

ABSTRACT

Since their inception more than 50 years ago, Brain-Computer Interfaces (BCIs) have held promise to compensate for functions lost by people with disabilities by allowing direct communication between the brain and external devices. While research throughout the past decades has demonstrated the feasibility of BCI as a successful assistive technology, widespread use of BCI outside the lab remains beyond reach. This can be attributed to a number of challenges that need to be addressed for BCI to be of practical use, including limited data availability, the limited temporal and spatial resolution of non-invasively recorded brain signals, and inter-subject variability. In addition, for a very long time, BCI development has been confined mainly to specific simple brain patterns, while developing BCI applications that rely on complex brain patterns has proven infeasible. Generative Artificial Intelligence (GAI) has recently emerged as an artificial intelligence domain in which trained models can be used to generate new data with properties resembling those of the available data. Given the enhancements observed in other domains that face challenges similar to those of BCI development, GAI has recently been employed in a multitude of BCI development applications to generate synthetic brain activity, thereby augmenting the recorded brain activity. Here, a brief review of the recent adoption of GAI techniques to overcome the aforementioned BCI challenges is provided, demonstrating the enhancements achieved using GAI techniques in augmenting limited EEG data, enhancing the spatiotemporal resolution of recorded EEG data, enhancing the cross-subject performance of BCI systems, and implementing end-to-end BCI applications. GAI could represent the means by which BCI is transformed into a prevalent assistive technology, thereby improving the quality of life of people with disabilities and helping in adopting BCI as an emerging human-computer interaction technology for general use.

12.
Comput Biol Med ; 177: 108614, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38796884

ABSTRACT

Integration analysis of cancer multi-omics data for pan-cancer classification has potential clinical applications in various aspects such as tumor diagnosis, analysis of clinically significant features, and precision medicine. In these applications, embedding and feature selection on high-dimensional multi-omics data are clinically necessary. Recently, deep learning algorithms have become the most promising methods for cancer multi-omics integration analysis, owing to their powerful capability to capture nonlinear relationships. Developing effective deep learning architectures for cancer multi-omics embedding and feature selection remains a challenge for researchers in view of the high dimensionality and heterogeneity of the data. In this paper, we propose a novel two-phase deep learning model named AVBAE-MODFR for pan-cancer classification. AVBAE-MODFR achieves embedding with a multi2multi autoencoder based on the adversarial variational Bayes method and further performs feature selection using a dual-net-based feature ranking method. AVBAE-MODFR uses AVBAE to pre-train the network parameters, which improves classification performance and enhances feature ranking stability in MODFR. First, AVBAE learns high-quality representations among multiple omics features for unsupervised pan-cancer classification; we design an efficient discriminator architecture to distinguish the latent distributions when updating the forward variational parameters. Second, we propose MODFR to simultaneously evaluate multi-omics feature importance for feature selection by training a designed multi2one selector network, where an efficient evaluation approach based on the average gradient over random mask subsets avoids bias caused by input feature drift. We conduct experiments on the TCGA pan-cancer dataset and compare AVBAE-MODFR with four state-of-the-art methods for each phase. The results show the superiority of AVBAE-MODFR over the state-of-the-art methods.
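The random-mask gradient idea can be illustrated as follows: average each input feature's gradient magnitude over many randomly masked copies of the data, then rank features by that score. The selector network and masking scheme below are simplified stand-ins for MODFR, not the authors' implementation.

```python
# Hedged sketch of gradient-based feature ranking over random mask subsets.
import torch
import torch.nn as nn

n_feat = 30
net = nn.Sequential(nn.Linear(n_feat, 64), nn.ReLU(), nn.Linear(64, 1))   # stand-in predictor
x = torch.rand(128, n_feat)                                                # toy multi-omics features

def masked_gradient_importance(net, x, n_masks=50, keep_prob=0.5):
    scores = torch.zeros(x.size(1))
    for _ in range(n_masks):
        mask = (torch.rand_like(x) < keep_prob).float()     # random subset of kept features
        xm = (x * mask).requires_grad_(True)
        net(xm).sum().backward()
        scores += xm.grad.abs().mean(dim=0)                 # average gradient magnitude per feature
    return scores / n_masks

importance = masked_gradient_importance(net, x)
print("top features:", torch.topk(importance, k=5).indices.tolist())
```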


Asunto(s)
Aprendizaje Profundo , Neoplasias , Humanos , Neoplasias/clasificación , Neoplasias/metabolismo , Neoplasias/genética , Algoritmos , Genómica , Multiómica
13.
BMC Med Imaging ; 24(1): 100, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38684964

ABSTRACT

PURPOSE: To detect Marchiafava-Bignami Disease (MBD) using a distinct deep learning technique. BACKGROUND: Advanced deep learning methods are becoming more crucial in contemporary medical diagnostics, particularly for detecting intricate and uncommon neurological illnesses such as MBD. This rare neurodegenerative disorder, sometimes associated with chronic alcoholism, is characterized by the loss of myelin or tissue death in the corpus callosum. It poses significant diagnostic difficulties owing to its infrequency and the subtle signs it exhibits in its early stages, both clinically and on radiological scans. METHODS: Variational Autoencoders (VAEs) in conjunction with attention mechanisms are used to identify MBD accurately. VAEs are well known for their proficiency in unsupervised learning and anomaly detection. They excel at analyzing extensive brain imaging datasets to uncover subtle patterns and abnormalities that traditional diagnostic approaches may overlook, especially those related to specific diseases. The use of attention mechanisms enhances this technique, enabling the model to concentrate on the most crucial elements of the imaging data, similar to the discerning observation of a skilled radiologist. Thus, we utilized a VAE with attention mechanisms in this study to detect MBD. Such a combination enables the prompt identification of MBD and assists in formulating more customized and efficient treatment strategies. RESULTS: A significant advance in this field is the creation of a VAE equipped with attention mechanisms, which has shown outstanding performance, achieving accuracy rates of over 90% in differentiating MBD from other neurodegenerative disorders. CONCLUSION: This model, trained on a diverse range of MRI images, has shown a notable level of sensitivity and specificity, significantly minimizing false positive results and strengthening the confidence and dependability of these sophisticated automated diagnostic tools.


Subject(s)
Deep Learning, Magnetic Resonance Imaging, Marchiafava-Bignami Disease, Humans, Marchiafava-Bignami Disease/diagnostic imaging, Magnetic Resonance Imaging/methods, Male, Female, Middle Aged, Adult, Computer-Assisted Image Interpretation/methods, Sensitivity and Specificity
14.
Int J Mol Sci ; 25(7)2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38612602

ABSTRACT

Molecular property prediction is an important task in drug discovery, and with the help of self-supervised learning methods, its performance can be improved by utilizing large-scale unlabeled datasets. In this paper, we propose a triple generative self-supervised learning method for molecular property prediction, called TGSS. Three encoders, including a bi-directional long short-term memory recurrent neural network (BiLSTM), a Transformer, and a graph attention network (GAT), are used to pre-train the model on molecular sequence and graph structure data to extract molecular features. A variational autoencoder (VAE) is used for reconstructing features from the three models. In the downstream task, in order to balance the information between different molecular features, a feature fusion module is added to assign different weights to each feature. In addition, to improve the interpretability of the model, atomic similarity heat maps are introduced to demonstrate the effectiveness and rationality of the molecular feature extraction. We demonstrate the accuracy of the proposed method on chemical and biological benchmark datasets through comparative experiments.


Asunto(s)
Benchmarking , Descubrimiento de Drogas , Animales , Suministros de Energía Eléctrica , Estro , Aprendizaje Automático Supervisado
15.
Front Pharmacol ; 15: 1331062, 2024.
Article in English | MEDLINE | ID: mdl-38384298

ABSTRACT

There are two main ways to discover or design small drug molecules. The first involves fine-tuning existing molecules or commercially successful drugs through quantitative structure-activity relationships and virtual screening. The second involves generating new molecules through de novo drug design or inverse quantitative structure-activity relationships. Both approaches aim to obtain a drug molecule with the best pharmacokinetic and pharmacodynamic profile. However, bringing a new drug to market is an expensive and time-consuming endeavor, with the average cost estimated at around $2.5 billion. One of the biggest challenges is screening the vast number of potential drug candidates to find one that is both safe and effective. The development of artificial intelligence in recent years has been phenomenal, ushering in a revolution in many fields. The pharmaceutical sciences have also benefited significantly from multiple applications of artificial intelligence, especially in drug discovery projects. Artificial intelligence models are finding use in molecular property prediction, molecule generation, virtual screening, synthesis planning, and drug repurposing, among other tasks. Lately, generative artificial intelligence has gained popularity across domains for its ability to generate entirely new data, such as images, text, audio, video, and novel chemical molecules. Generative artificial intelligence has also delivered promising results in drug discovery and development. This review article delves into the fundamentals and framework of various generative artificial intelligence models in the context of drug discovery via the de novo drug design approach. Various basic and advanced models are discussed, along with their recent applications. The review also explores recent examples and advances in the generative artificial intelligence approach, as well as the challenges and ongoing efforts to fully harness its potential for generating novel drug molecules in a faster and more affordable manner. Some clinical-stage assets generated from generative artificial intelligence are also discussed to show the ever-increasing application of artificial intelligence in drug discovery through commercial partnerships.

16.
Stud Health Technol Inform ; 310: 951-955, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269949

ABSTRACT

Segmentation of pancreatic tumors on CT images is essential for the diagnosis and treatment of pancreatic cancer. However, the low contrast between the pancreas and the tumor, as well as variable tumor shape and position, makes segmentation challenging. To address this problem, we propose a Position Prior Attention Network (PPANet) with a pseudo segmentation generation module (PSGM) and a position prior attention module (PPAM). PSGM and PPAM map pancreatic and tumor pseudo segmentations to a latent space to generate a position prior attention map and supervise location classification. The proposed method is evaluated on pancreatic patient data collected from a local hospital, and the experimental results demonstrate that our method can significantly improve tumor segmentation by introducing position information in the training phase.


Asunto(s)
Neoplasias Pancreáticas , Humanos , Neoplasias Pancreáticas/diagnóstico por imagen , Hospitales
17.
J Colloid Interface Sci ; 659: 739-750, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38211491

ABSTRACT

HYPOTHESIS: The formation of distorted lamellar phases, distinguished by their arrangement of crumpled, stacked layers, is frequently accompanied by the disruption of long-range order, leading to interconnected network structures commonly observed in the sponge phase. Nevertheless, traditional scattering functions grounded in deterministic modeling fall short of fully representing these intricate structural characteristics. Our hypothesis posits that a deep learning method, in conjunction with the generalized leveled wave approach used for describing structural features of distorted lamellar phases, can quantitatively unveil the inherent spatial correlations within these phases. EXPERIMENTS AND SIMULATIONS: This report outlines a novel strategy that integrates convolutional neural networks and variational autoencoders, supported by stochastically generated density fluctuations, into a regression analysis framework for extracting structural features of distorted lamellar phases from small angle neutron scattering data. To evaluate the efficacy of our proposed approach, we conducted computational accuracy assessments and applied it to the analysis of experimentally measured small angle neutron scattering spectra of AOT surfactant solutions, a frequently studied lamellar system. FINDINGS: The findings unambiguously demonstrate that deep learning provides a dependable and quantitative approach for investigating the morphology of a wide variety of distorted lamellar phases. It is adaptable for deciphering structures from the lamellar to the sponge phase, including intermediate structures exhibiting fused topological features. This research highlights the effectiveness of deep learning methods in tackling complex issues in the field of soft matter structural analysis and beyond.

18.
bioRxiv ; 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-37662280

ABSTRACT

Background and Objectives: Previous approaches pursuing normative modelling for analyzing heterogeneity in Alzheimer's Disease (AD) have relied on a single neuroimaging modality. However, AD is a multi-faceted disorder, with each modality providing unique and complementary information about AD. In this study, we used a deep-learning-based multimodal normative model to assess heterogeneity in regional brain patterns for ATN (amyloid-tau-neurodegeneration) biomarkers. Methods: We selected discovery (n = 665) and replication (n = 430) cohorts with simultaneous availability of ATN biomarkers: Florbetapir amyloid, Flortaucipir tau, and T1-weighted magnetic resonance imaging (MRI). A multimodal variational autoencoder (conditioned on age and sex) was used as a normative model to learn the multimodal regional brain patterns of a cognitively unimpaired (CU) control group. The trained model was applied to individuals on the AD spectrum (ADS) to estimate their deviations (Z-scores) from the normative distribution, resulting in a Z-score regional deviation map per ADS individual per modality. Regions with Z-scores < -1.96 for MRI and Z-scores > 1.96 for amyloid and tau were labelled as outliers. Hamming distance was used to quantify the dissimilarity between individuals based on their outlier deviations across each modality. We also calculated a disease severity index (DSI) for each ADS individual, estimated by averaging the deviations across all outlier regions corresponding to each modality. Results: ADS individuals with moderate or severe dementia showed a higher proportion of regional outliers for each modality as well as more dissimilarity in modality-specific regional outlier patterns compared with ADS individuals with early or mild dementia. DSI (i) was associated with the progressive stages of dementia, (ii) showed significant associations with neuropsychological composite scores, and (iii) related to the longitudinal risk of CDR progression. Findings were reproducible in both discovery and replication cohorts. Discussion: Ours is the first study to examine the heterogeneity in AD through the lens of multiple neuroimaging modalities (ATN), based on distinct or overlapping patterns of regional outlier deviations. Regional MRI and tau outliers were more heterogeneous than regional amyloid outliers. DSI has the potential to be an individual patient metric of neurodegeneration that can help in clinical decision making and in monitoring patient response to anti-amyloid treatments.
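The deviation and DSI computations described above can be sketched in a few lines: regional Z-scores against the normative (CU) distribution, modality-specific outlier labelling at ±1.96, a per-individual disease severity index, and Hamming distance between outlier patterns. The thresholds follow the abstract; the array names, shapes, and random data are illustrative.

```python
# Hedged sketch of regional Z-score deviation maps, outlier labelling, DSI, and Hamming distance.
import numpy as np

rng = np.random.default_rng(1)
cu = rng.normal(size=(300, 90))                  # normative (CU) cohort: regions for one modality
ads = rng.normal(loc=0.5, size=(50, 90))         # ADS individuals, same regions (toy data)

z = (ads - cu.mean(axis=0)) / cu.std(axis=0)     # regional Z-score deviation map per individual

# Outliers: Z < -1.96 for MRI volumes (atrophy), Z > 1.96 for amyloid / tau uptake.
outliers_mri = z < -1.96
outliers_pet = z > 1.96

def disease_severity_index(z_row, outlier_row):
    """Average absolute deviation across that individual's outlier regions."""
    return np.abs(z_row[outlier_row]).mean() if outlier_row.any() else 0.0

dsi = np.array([disease_severity_index(z[i], outliers_pet[i]) for i in range(len(z))])

def hamming(a, b):
    return np.mean(a != b)                       # dissimilarity of two binary outlier patterns

print("mean DSI:", dsi.mean(), "pairwise Hamming:", hamming(outliers_pet[0], outliers_pet[1]))
```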

19.
Water Res ; 249: 120983, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38118223

ABSTRACT

The reduction of water leakage is essential for ensuring sustainable and resilient water supply systems. Despite recent investments in sensing technologies, pipe leakage remains a significant challenge for the water sector, particularly in developed nations like the UK, which suffer from aging water infrastructure. Conventional models and analytical methods for detecting pipe leakage often face reliability issues and are generally limited to detecting leaks during nighttime hours. Moreover, leakages are frequently detected by the customers rather than the water companies. To achieve substantial reductions in leakage and enhance public confidence in water supply and management, adopting an intelligent detection method is crucial. Such a method should effectively leverage existing sensor data for reliable leakage identification across the network. This not only helps in minimizing water loss and the associated energy costs of water treatment but also aids in steering the water sector towards a more sustainable and resilient future. As a step towards 'self-healing' water infrastructure systems, this study presents a novel framework for rapidly identifying potential leakages at the district meter area (DMA) level. The framework involves training a domain-informed variational autoencoder (VAE) for real-time dimensionality reduction of water flow time series data and developing a two-dimensional surrogate latent variable (LV) mapping which sufficiently and efficiently captures the distinct characteristics of leakage and regular (non-leakage) flow. The domain-informed training employs a novel loss function that ensures a distinct but regulated LV space for the two classes of flow groupings (i.e., leakage and non-leakage). Subsequently, a binary SVM classifier is used to provide a hyperplane separating the two classes of LVs corresponding to the flow groupings. Hence, the proposed framework can be efficiently utilised to classify incoming flow as leakage or non-leakage based on the encoded surrogate LVs of the flow time series using the trained VAE encoder. The framework is trained and tested on a dataset of over 2000 DMAs in North Yorkshire, UK, containing water flow time series recorded at 15-minute intervals over one year. The framework performs exceptionally well for both regular and leakage water flow groupings, with a classification accuracy of over 98% on the unobserved test dataset.
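The classification stage can be illustrated as follows: encode a flow time-series window into the 2-D surrogate latent space with a VAE encoder (here an untrained stand-in), then separate leakage from non-leakage LVs with a binary SVM. The window length, encoder layers, and labels are assumptions, and the domain-informed loss is not reproduced.

```python
# Hedged sketch of the VAE-latent + SVM leak classification stage (illustrative only).
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

window = 96                                        # one day of 15-minute flow readings (assumed)
encoder = nn.Sequential(nn.Linear(window, 64), nn.ReLU(), nn.Linear(64, 2))   # stand-in VAE encoder

def encode(flows):
    with torch.no_grad():
        return encoder(torch.tensor(flows, dtype=torch.float32)).numpy()

flows = np.random.rand(500, window)                # toy DMA flow windows
labels = np.random.randint(0, 2, size=500)         # 1 = leakage, 0 = non-leakage (toy labels)
lvs = encode(flows)                                # 2-D surrogate latent variables

clf = SVC(kernel="linear").fit(lvs, labels)        # hyperplane separating the two LV classes
incoming = np.random.rand(1, window)
print("leakage" if clf.predict(encode(incoming))[0] == 1 else "non-leakage")
```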


Subject(s)
Neural Networks (Computer), Support Vector Machine, Reproducibility of Results, Water Supply
20.
Article in English | MEDLINE | ID: mdl-38130873

ABSTRACT

Normative modelling is a method for understanding the heterogeneity underlying brain disorders like Alzheimer's Disease (AD) by quantifying how each patient deviates from the expected normative pattern learned from a healthy control distribution. Existing deep-learning-based normative models have been applied only to single-modality Magnetic Resonance Imaging (MRI) neuroimaging data. However, these do not take into account the complementary information offered by multimodal MRI, which is essential for understanding a multifactorial disease like AD. To address this limitation, we propose a multi-modal variational autoencoder (mmVAE) based normative modelling framework that can capture the joint distribution between different modalities to identify abnormal brain volume deviations due to AD. Our multi-modal framework takes as input FreeSurfer-processed brain region volumes from T1-weighted (cortical and subcortical) and T2-weighted (hippocampal) scans of cognitively normal participants to learn the morphological characteristics of the healthy brain. The estimated normative model is then applied to AD patients to quantify the deviation in brain volumes and identify abnormal brain pattern deviations due to the progressive stages of AD. We compared our proposed mmVAE with a baseline unimodal VAE having a single encoder and decoder and the two modalities concatenated as unimodal input. Our experimental results show that deviation maps generated by mmVAE are more sensitive to disease staging within AD, have a better correlation with patient cognition, and result in a higher number of brain regions with statistically significant deviations compared with the unimodal baseline model.
