Results 1 - 7 of 7
1.
Neuroimage ; 299: 120815, 2024 Aug 25.
Article in English | MEDLINE | ID: mdl-39191358

ABSTRACT

Using machine learning techniques to predict brain age from multimodal data has become a crucial biomarker for assessing brain development. Among the various types of brain imaging data, structural magnetic resonance imaging (sMRI) and diffusion magnetic resonance imaging (dMRI) are the most commonly used modalities. sMRI depicts macrostructural features of the brain, while dMRI reveals the orientation of major white matter fibers and changes in tissue microstructure. However, their differential capabilities in reflecting newborn age, and the clinical implications of those differences, have not been systematically studied. This study explores the impact of sMRI and dMRI on brain age prediction. Comparing predictions based on T2-weighted (T2w) and fractional anisotropy (FA) images, we found their mean absolute errors (MAE) in predicting infant age to be similar. Exploratory analysis revealed that for T2w images, areas such as the cerebral cortex and ventricles contribute most to age prediction, whereas FA images highlight the cerebral cortex and the main white matter tracts. Although both modalities focus on the cerebral cortex, they exhibit significant region-wise differences, reflecting developmental disparities in macro- and microstructural aspects of the cortex. Additionally, we examined the effects of prematurity, gender, and hemispheric asymmetry of the brain on age prediction for both modalities. Results showed significant differences (p<0.05) in age prediction biases based on FA images across gender and hemispheric asymmetry, whereas no significant differences were observed with T2w images. This study underscores the differences between T2w and FA images in predicting infant brain age, offering new perspectives for studying infant brain development and aiding more effective assessment and tracking of infant development.
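The modality comparison in this abstract rests on two simple quantities: mean absolute error and per-subject prediction bias (predicted minus chronological age). A minimal sketch with hypothetical ages; the numbers are illustrative, not the study's data:

```python
import numpy as np

def mae(predicted_ages, true_ages):
    """Mean absolute error between predicted and chronological ages."""
    pred = np.asarray(predicted_ages, dtype=float)
    true = np.asarray(true_ages, dtype=float)
    return float(np.mean(np.abs(pred - true)))

def prediction_bias(predicted_ages, true_ages):
    """Per-subject brain-age gap: predicted minus chronological age.
    The study compares the distribution of this bias across groups
    (gender, hemisphere) for each modality."""
    return np.asarray(predicted_ages, dtype=float) - np.asarray(true_ages, dtype=float)

# Hypothetical per-infant ages in postmenstrual weeks, one prediction
# per modality-specific model on the same subjects.
true_age = np.array([38.0, 40.5, 42.0, 36.5])
pred_t2w = np.array([38.6, 40.1, 41.2, 37.3])
pred_fa  = np.array([37.5, 41.0, 42.4, 36.0])

print("T2w MAE:", mae(pred_t2w, true_age))
print("FA  MAE:", mae(pred_fa, true_age))
print("FA bias per subject:", prediction_bias(pred_fa, true_age))
```

A group-wise significance test on the bias values (e.g. between male and female subjects) is what the p<0.05 comparisons above refer to.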

2.
Brief Bioinform ; 25(3)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38557672

ABSTRACT

Lung adenocarcinoma (LUAD) is the most common histologic subtype of lung cancer. Early-stage patients have a 30-50% probability of metastatic recurrence after surgical treatment. Here, we propose a new computational framework, Interpretable Biological Pathway Graph Neural Networks (IBPGNET), based on pathway hierarchy relationships to predict LUAD recurrence and explore the internal regulatory mechanisms of LUAD. IBPGNET can integrate different omics data efficiently and provide global interpretability. In addition, our experimental results show that IBPGNET outperforms other classification methods in 5-fold cross-validation. IBPGNET identified PSMC1 and PSMD11 as genes associated with LUAD recurrence, and their expression levels were significantly higher in LUAD cells than in normal cells. The knockdown of PSMC1 and PSMD11 in LUAD cells increased their sensitivity to afatinib and decreased cell migration, invasion and proliferation. In addition, the cells showed significantly lower EGFR expression, indicating that PSMC1 and PSMD11 may mediate therapeutic sensitivity through EGFR expression.
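IBPGNET is evaluated with 5-fold cross-validation. The model itself is not reproduced here, but the fold-splitting protocol is simple to sketch; the sample count and seed below are illustrative:

```python
import numpy as np

def five_fold_indices(n_samples, seed=0):
    """Yield (train_idx, test_idx) pairs for 5-fold cross-validation:
    shuffle the sample indices once, split them into 5 disjoint folds,
    and use each fold as the test set exactly once."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_samples)
    folds = np.array_split(order, 5)
    for k in range(5):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train_idx, test_idx

# Every sample lands in exactly one test fold.
all_test = np.concatenate([test for _, test in five_fold_indices(20)])
assert sorted(all_test.tolist()) == list(range(20))
```

Each model under comparison is trained on the four training folds and scored on the held-out fold, and the five scores are averaged.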


Subject(s)
Adenocarcinoma of Lung , Lung Neoplasms , Humans , Adenocarcinoma of Lung/genetics , Adenocarcinoma of Lung/metabolism , Lung Neoplasms/metabolism , Cell Line, Tumor , Biomarkers, Tumor/genetics , Biomarkers, Tumor/metabolism , Gene Expression Regulation, Neoplastic , ErbB Receptors/genetics , Cell Proliferation
3.
Health Inf Sci Syst ; 12(1): 31, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38645838

ABSTRACT

Early and accurate diagnosis of osteosarcoma (OS) is of great clinical significance, and machine learning (ML) based methods are increasingly adopted. However, current ML-based methods for osteosarcoma diagnosis consider only X-ray images, usually fail to generalize to new cases, and lack explainability. In this paper, we explore the capability of deep learning models to diagnose primary OS with higher accuracy, explainability, and generality. Concretely, we analyze the added value of integrating biochemical data, i.e., alkaline phosphatase (ALP) and lactate dehydrogenase (LDH), and design a model that combines the numerical features of ALP and LDH with the visual features of X-ray imaging through a late fusion approach in the feature space. We evaluate this model on real-world clinical data from 848 patients aged 4 to 81. The experimental results confirm the effectiveness of incorporating ALP and LDH simultaneously in a late fusion approach: accuracy on the 2608 considered cases increased to 97.17%, compared with 94.35% for the baseline. Grad-CAM visualizations consistent with the judgments of orthopedic specialists further support the model's explainability.
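The late fusion described here concatenates image-derived features with the standardized ALP and LDH values in feature space before a classifier head. A minimal numpy sketch; all weights and feature dimensions are hypothetical, and the paper's actual model uses learned deep image features rather than the fixed vectors shown:

```python
import numpy as np

def late_fusion(image_features, alp, ldh, w_img, w_lab, bias):
    """Late fusion in feature space: z-score the two biochemical markers,
    concatenate them with per-patient image features, and apply a linear
    classifier head with a sigmoid output. Weights are illustrative."""
    labs = np.column_stack([alp, ldh]).astype(float)
    labs = (labs - labs.mean(axis=0)) / labs.std(axis=0)   # standardize ALP, LDH
    fused = np.concatenate([image_features, labs], axis=1)  # (patients, img_dim + 2)
    logits = fused @ np.concatenate([w_img, w_lab]) + bias
    return 1.0 / (1.0 + np.exp(-logits))                    # per-patient probability
```

The fusion is "late" in the sense that the image branch and the biochemical branch are only combined at the feature level, after each has been encoded separately.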

4.
Neural Netw ; 139: 305-325, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33873122

ABSTRACT

How can deep neural networks encode information that corresponds to words in human speech into raw acoustic data? This paper proposes two neural network architectures for modeling unsupervised lexical learning from raw acoustic inputs: ciwGAN (Categorical InfoWaveGAN) and fiwGAN (Featural InfoWaveGAN). They combine a deep convolutional GAN architecture for audio data (WaveGAN; Donahue et al., 2019) with the information-theoretic extension of GANs, InfoGAN (Chen et al., 2016), and propose a new latent space structure that can model featural learning simultaneously with higher-level classification and allows for a very low-dimensional vector representation of lexical items. In addition to the Generator and Discriminator networks, the architectures introduce a network that learns to retrieve latent codes from generated audio outputs. Lexical learning is thus modeled as emergent from an architecture that forces a deep neural network to output data such that unique information is retrievable from its acoustic outputs. Networks trained on lexical items from the TIMIT corpus learn to encode unique information corresponding to lexical items as categorical variables in their latent space. By manipulating these variables, the network outputs specific lexical items. The network occasionally outputs innovative lexical items that deviate from the training data but are linguistically interpretable and highly informative for cognitive modeling and neural network interpretability. Innovative outputs suggest that the phonetic and phonological representations learned by the network can be productively recombined and directly paralleled to productivity in human speech: a fiwGAN network trained on suit and dark outputs the innovative start, even though it never saw start or even a [st] sequence in the training data. We also argue that setting latent featural codes to values well beyond the training range results in almost categorical generation of prototypical lexical items and reveals the underlying value of each latent code. Probing deep neural networks trained on well-understood dependencies in speech has implications for latent space interpretability and for understanding how deep neural networks learn meaningful representations, as well as potential for unsupervised text-to-speech generation in the GAN framework.
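The ciwGAN latent space pairs a categorical code (the would-be lexical identity, which the auxiliary network must recover from the generated audio) with continuous noise. A sketch of just the latent construction; the dimensions are illustrative, not the paper's exact sizes:

```python
import numpy as np

def make_ciwgan_latent(n, n_classes=4, noise_dim=96, seed=0):
    """Build ciwGAN-style latent vectors: a one-hot categorical code
    (intended to come to represent a lexical item) concatenated with
    uniform noise in [-1, 1]. Returns the latents and the sampled codes,
    which the auxiliary Q-network is trained to retrieve from audio."""
    rng = np.random.default_rng(seed)
    codes = rng.integers(0, n_classes, size=n)      # lexical-class index per sample
    one_hot = np.eye(n_classes)[codes]              # categorical part of the latent
    noise = rng.uniform(-1.0, 1.0, size=(n, noise_dim))
    return np.concatenate([one_hot, noise], axis=1), codes
```

In the fiwGAN variant, the one-hot block is replaced by independent binary featural codes, so k code dimensions can distinguish up to 2^k lexical classes rather than k.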


Subject(s)
Machine Learning , Natural Language Processing , Acoustics , Speech Recognition Software
5.
Shape Med Imaging (2020) ; 12474: 95-107, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33283214

ABSTRACT

We propose a mesh-based technique to aid in the classification of Alzheimer's disease dementia (ADD) using mesh representations of the cortex and subcortical structures. Deep learning methods for classification tasks that utilize structural neuroimaging often require extensive learning parameters to optimize. Frequently, these approaches to automated medical diagnosis also lack visual interpretability of the brain areas involved in making a diagnosis. This work: (a) analyzes brain shape using surface information from the cortex and subcortical structures, (b) proposes a residual learning framework for state-of-the-art graph convolutional networks that offers a significant reduction in learnable parameters, and (c) offers visual interpretability of the network via class-specific gradient information that localizes important regions of interest in our inputs. With our proposed method leveraging cortical and subcortical surface information, we outperform other machine learning methods with a 96.35% testing accuracy on the ADD vs. healthy control problem. We confirm the validity of our model by observing its performance in 25-trial Monte Carlo cross-validation. The generated visualization maps in our study show correspondences with current knowledge regarding the structural localization of pathological changes in the brain associated with dementia of the Alzheimer's type.
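The residual graph-convolution idea in (b) can be sketched as a normalized neighborhood aggregation on the mesh vertices plus a skip connection. This is a simplified stand-in for the paper's exact operator; the adjacency and weight matrices here are supplied by the caller and purely illustrative:

```python
import numpy as np

def residual_gcn_layer(X, A, W):
    """One residual graph-convolution layer over mesh vertex features.
    X: (vertices, features); A: binary mesh adjacency; W: (features, features).
    Computes ReLU of a degree-normalized aggregation (with self-loops),
    then adds the input back as a residual connection."""
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)       # inverse degree per vertex
    H = np.maximum(D_inv * (A_hat @ X @ W), 0.0)         # normalized aggregation + ReLU
    return X + H                                         # residual (skip) connection
```

The skip connection is what lets such networks stay accurate with far fewer learnable parameters, since each layer only needs to model a correction to its input rather than a full transformation.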

6.
Adv Intell Data Anal ; 12080: 509-521, 2020 Apr.
Article in English | MEDLINE | ID: mdl-34131660

ABSTRACT

While neural networks are powerful approximators used to classify or embed data into lower-dimensional spaces, they are often regarded as black boxes with uninterpretable features. Here we propose Graph Spectral Regularization for making hidden layers more interpretable without significantly impacting performance on the primary task. Taking inspiration from the spatial organization and localization of neuron activations in biological networks, we use a graph Laplacian penalty to structure the activations within a layer. This penalty encourages activations to be smooth either on a predetermined graph or on a feature-space graph learned from the data via co-activations of a hidden layer of the neural network. We show numerous uses for this additional structure, including cluster indication and visualization in biological and image data sets.
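The graph Laplacian penalty can be written as tr(HᵀLH) over a layer's activations H, which for each sample equals half the sum over graph edges of squared activation differences, so it is small exactly when connected hidden units activate smoothly together. A minimal numpy sketch; the graph and activations are illustrative:

```python
import numpy as np

def laplacian_penalty(activations, adjacency):
    """Graph spectral regularization term for one layer.
    activations: (batch, units); adjacency: (units, units) graph over the
    hidden units (predetermined, or learned from co-activations).
    Returns sum_b h_b^T L h_b with L = D - A, i.e. half the edge-weighted
    sum of squared activation differences, summed over the batch."""
    L = np.diag(adjacency.sum(axis=1)) - adjacency   # combinatorial graph Laplacian
    return float(np.einsum('bi,ij,bj->', activations, L, activations))
```

In training, this scalar would simply be added to the primary task loss with a regularization weight, so activations are pushed to vary smoothly over the chosen graph.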

7.
Front Artif Intell ; 3: 44, 2020.
Article in English | MEDLINE | ID: mdl-33733161

ABSTRACT

Training deep neural networks on well-understood dependencies in speech data can provide new insights into how they learn internal representations. This paper argues that acquisition of speech can be modeled as a dependency between random space and generated speech data in the Generative Adversarial Network architecture, and proposes a methodology to uncover the network's internal representations that correspond to phonetic and phonological properties. The Generative Adversarial architecture is uniquely appropriate for modeling phonetic and phonological learning because the network is trained on unannotated raw acoustic data and learning is unsupervised, without any language-specific assumptions or pre-assumed levels of abstraction. A Generative Adversarial Network was trained on an allophonic distribution in English, in which voiceless stops surface as aspirated word-initially before stressed vowels, except if preceded by a sibilant [s]. The network successfully learns the allophonic alternation: the network's generated speech signal contains the conditional distribution of aspiration duration. The paper proposes a technique for establishing the network's internal representations that identifies latent variables corresponding to, for example, the presence of [s] and its spectral properties. By manipulating these variables, we actively control the presence of [s] and its frication amplitude in the generated outputs. This suggests that the network learns to use latent variables as an approximation of phonetic and phonological representations. Crucially, we observe that the dependencies learned in training extend beyond the training interval, which allows for additional exploration of the learned representations. The paper also discusses how the network's architecture and innovative outputs resemble and differ from linguistic behavior in language acquisition, speech disorders, and speech errors, and how well-understood dependencies in speech data can help us interpret how neural networks learn their representations.
