Results 1 - 20 of 1,004
1.
Proc Natl Acad Sci U S A ; 121(37): e2319804121, 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39226356

ABSTRACT

The rapid growth of large-scale spatial gene expression data demands efficient and reliable computational tools to extract major trends of gene expression in their native spatial context. Here, we used stability-driven unsupervised learning (i.e., staNMF) to identify principal patterns (PPs) of 3D gene expression profiles and understand spatial gene distribution and anatomical localization at the whole mouse brain level. Our subsequent spatial correlation analysis systematically compared the PPs to known anatomical regions and ontology from the Allen Mouse Brain Atlas using spatial neighborhoods. We demonstrate that our stable and spatially coherent PPs, whose linear combinations accurately approximate the spatial gene data, are highly correlated with combinations of expert-annotated brain regions. These PPs yield a brain ontology based purely on spatial gene expression. Our PP identification approach outperforms principal component analysis and typical clustering algorithms on the same task. Moreover, we show that the stable PPs reveal marked regional imbalance of brainwide genetic architecture, leading to region-specific marker genes and gene coexpression networks. Our findings highlight the advantages of stability-driven machine learning for plausible biological discovery from dense spatial gene expression data, streamlining tasks that are infeasible with conventional manual approaches.
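The stability-driven selection behind staNMF can be illustrated in a few lines: factorize the data several times from different random initializations and score how well the learned dictionaries agree. The sketch below (plain NumPy; a cosine-matching score stands in for staNMF's Amari-type instability index, and all names and toy data are our own, not the authors' implementation):

```python
import numpy as np

def nmf(X, k, iters=500, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: X ~ W @ H, all nonnegative."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k)) + 1e-3
    H = rng.random((k, X.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def dictionary_agreement(X, k, seeds=(0, 1)):
    """Cosine-match dictionary columns across two restarts; 1.0 = identical patterns."""
    A, B = (nmf(X, k, seed=s)[0] for s in seeds)
    A /= np.linalg.norm(A, axis=0)
    B /= np.linalg.norm(B, axis=0)
    C = A.T @ B                                   # pairwise cosine similarities
    return 0.5 * (C.max(axis=0).mean() + C.max(axis=1).mean())

rng = np.random.default_rng(42)
X = rng.random((60, 3)) @ rng.random((3, 40))     # synthetic rank-3 "expression" data
W, H = nmf(X, k=3)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
score = dictionary_agreement(X, k=3)              # a stable k gives a score near 1
```

In staNMF the analogous instability score is computed across many restarts and candidate values of k, and the k with the most reproducible dictionary is kept.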


Subject(s)
Brain; Animals; Mice; Brain/metabolism; Gene Expression Profiling/methods; Transcriptome; Algorithms; Unsupervised Machine Learning; Gene Ontology; Atlases as Topic; Gene Regulatory Networks; Principal Component Analysis
2.
Physiol Meas ; 45(9)2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39231468

ABSTRACT

Objective. We investigated fluctuations of the photoplethysmography (PPG) waveform in patients undergoing surgery. There is an association between the morphologic variation extracted from arterial blood pressure (ABP) signals and short-term surgical outcomes, and the underlying physiology may reflect the numerous regulatory mechanisms acting on the cardiovascular system. We hypothesized that similar information might exist in the PPG waveform. However, owing to the principles of light absorption, noninvasive PPG signals are more susceptible to artifacts and necessitate meticulous signal processing. Approach. Employing an unsupervised manifold learning algorithm, the dynamic diffusion map, we quantified multivariate morphological variations from the continuous PPG waveform signal. Additionally, we developed several data analysis techniques to mitigate PPG signal artifacts, enhancing performance, and subsequently validated them on a real-life clinical database. Main results. Our findings show associations between the intraoperative PPG waveform and short-term surgical outcomes that are consistent with observations from ABP waveform analysis. Significance. The morphological variation of the PPG waveform during major surgery carries clinical meaning, which, given the signal's noninvasive nature, may open opportunities for PPG waveform analysis in a wider range of biomedical applications.
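The dynamic diffusion map used here builds on the standard diffusion-map construction: a Gaussian affinity kernel is turned into a Markov transition matrix whose leading nontrivial eigenvectors give the embedding. A minimal sketch (NumPy only; the bandwidth and toy "beat feature" data are our assumptions, and the paper's dynamic, waveform-specific machinery is omitted):

```python
import numpy as np

def diffusion_map(X, n_components=2, eps=1.0):
    """Classic diffusion map: Gaussian kernel -> row-stochastic P -> eigenvectors."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    K = np.exp(-D2 / eps)
    P = K / K.sum(axis=1, keepdims=True)                  # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:n_components + 1]                       # skip the trivial constant vector
    return vecs.real[:, idx] * vals.real[idx]             # scale coordinates by eigenvalues

# Toy "waveform features": two regimes of beats in a 2-D feature space
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.2, (50, 2)), rng.normal(3.0, 0.2, (50, 2))])
emb = diffusion_map(X, n_components=1, eps=4.0)
m1, m2 = emb[:50, 0].mean(), emb[50:, 0].mean()           # the two regimes separate
```

Because every nontrivial eigenvector is orthogonal to the constant vector under the chain's stationary measure, the two regimes land on opposite sides of zero in the first diffusion coordinate.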


Subject(s)
Photoplethysmography; Signal Processing, Computer-Assisted; Unsupervised Machine Learning; Photoplethysmography/methods; Humans; Female; Male; Middle Aged; Artifacts; Aged; Adult
3.
PLoS Comput Biol ; 20(9): e1012378, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39226313

ABSTRACT

Understanding how the brain achieves relatively consistent information processing despite its inherently variable activity is one of the major challenges in neuroscience. Recently, it has been reported that the consistency of neural responses to stimuli that are presented repeatedly is enhanced implicitly in an unsupervised way, resulting in improved perceptual consistency. Here, we propose the term "selective consistency" to describe this input-dependent consistency and hypothesize that it is acquired in a self-organizing manner by plasticity within the neural system. To test this, we investigated whether a reservoir-based plastic model could acquire selective consistency to repeated stimuli. We used white noise sequences randomly generated in each trial and referenced white noise sequences presented multiple times. The results showed that the plastic network was capable of acquiring selective consistency rapidly, with as few as five exposures to a stimulus, even for white noise. The acquisition of selective consistency could occur independently of performance optimization, as the network's time-series prediction accuracy for referenced stimuli did not improve with repeated exposure and optimization. Furthermore, the network could only achieve selective consistency when in the region between order and chaos. These findings suggest that the neural system can acquire selective consistency in a self-organizing manner and that this may serve as a mechanism for certain types of learning.
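The reliability of a driven recurrent network to a repeated stimulus can be seen even without plasticity: under the echo state property, a reservoir forgets its initial condition and its late response is determined by the input. The toy below (NumPy; a generic tanh reservoir of our own choosing, not the paper's plastic model) shows that two trials of the same white-noise stimulus, started from different internal states, yield nearly identical late responses:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                        # reservoir size (arbitrary)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius 0.9: echo state regime
W_in = rng.normal(0, 1, (N, 1))

def run(u, x0):
    """Drive a tanh reservoir with input sequence u, starting from state x0."""
    x, states = x0, []
    for t in range(len(u)):
        x = np.tanh(W @ x + W_in[:, 0] * u[t])
        states.append(x.copy())
    return np.array(states)

u = rng.normal(0, 1, 200)                      # one "referenced" white-noise stimulus
trial_a = run(u, rng.normal(0, 1, N))          # same stimulus, different initial states
trial_b = run(u, rng.normal(0, 1, N))
consistency = np.corrcoef(trial_a[-50:].ravel(), trial_b[-50:].ravel())[0, 1]
```

In the paper, plasticity additionally makes this consistency selective, i.e., stronger for the repeated (referenced) stimulus than for novel ones.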


Subject(s)
Computational Biology; Models, Neurological; Neural Networks, Computer; Neuronal Plasticity; Neuronal Plasticity/physiology; Humans; Unsupervised Machine Learning; Nerve Net/physiology; Brain/physiology; Learning/physiology; Perception/physiology
4.
Bioinformatics ; 40(Suppl 2): ii105-ii110, 2024 09 01.
Article in English | MEDLINE | ID: mdl-39230695

ABSTRACT

The data deluge in biology calls for computational approaches that can integrate multiple datasets of different types to build a holistic view of biological processes or structures of interest. An emerging paradigm in this domain is the unsupervised learning of data embeddings that can be used for downstream clustering and classification tasks. While such approaches for integrating data of similar types are becoming common, there is scarcer work on consolidating different data modalities such as network and image information. Here, we introduce DICE (Data Integration through Contrastive Embedding), a contrastive learning model for multi-modal data integration. We apply this model to study the subcellular organization of proteins by integrating protein-protein interaction data and protein image data measured in HEK293 cells. We demonstrate the advantage of data integration over any single modality and show that our framework outperforms previous integration approaches. Availability: https://github.com/raminass/protein-contrastive Contact: raminass@gmail.com.
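At the heart of contrastive multi-modal integration is a loss that pulls the two views of the same entity (here, network and image embeddings of one protein) together while pushing apart mismatched pairs. A minimal NumPy rendition of the standard InfoNCE objective (the toy embeddings and temperature are our assumptions; DICE's actual encoders are neural networks):

```python
import numpy as np

def info_nce(za, zb, tau=0.1):
    """One-directional InfoNCE: row i of za and zb embed the same entity in two
    modalities; every other row of zb serves as a negative for za[i]."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = (za @ zb.T) / tau                            # cosine similarity / temperature
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))                        # cross-entropy on matched pairs

rng = np.random.default_rng(0)
net_emb = rng.normal(size=(32, 8))                        # "network" view
img_emb = net_emb + 0.01 * rng.normal(size=(32, 8))       # matched "image" view
loss_aligned = info_nce(net_emb, img_emb)
loss_shuffled = info_nce(net_emb, rng.permutation(img_emb))
```

Minimizing this loss over encoder parameters is what drives the two modalities into a shared embedding usable for downstream clustering and classification.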


Subject(s)
Computational Biology; Humans; HEK293 Cells; Computational Biology/methods; Protein Interaction Mapping/methods; Proteins/metabolism; Proteins/chemistry; Unsupervised Machine Learning
5.
Bioinformatics ; 40(Suppl 2): ii198-ii207, 2024 09 01.
Article in English | MEDLINE | ID: mdl-39230698

ABSTRACT

MOTIVATION: In the realm of precision medicine, effective patient stratification and disease subtyping demand innovative methodologies tailored for multi-omics data. Clustering techniques applied to multi-omics data have become instrumental in identifying distinct subgroups of patients, enabling a finer-grained understanding of disease variability. Meanwhile, clinical datasets are often small and must be aggregated from multiple hospitals. Online data sharing, however, is seen as a significant challenge due to privacy concerns, potentially impeding big data's role in medical advancements using machine learning. This work establishes a powerful framework for advancing precision medicine through unsupervised random forest-based clustering in combination with federated computing. RESULTS: We introduce a novel multi-omics clustering approach utilizing unsupervised random forests. The unsupervised nature of the random forest enables the determination of cluster-specific feature importance, unraveling key molecular contributors to distinct patient groups. Our methodology is designed for federated execution, a crucial aspect in the medical domain where privacy concerns are paramount. We have validated our approach on machine learning benchmark datasets as well as on cancer data from The Cancer Genome Atlas. Our method is competitive with the state-of-the-art in terms of disease subtyping, but at the same time substantially improves the cluster interpretability. Experiments indicate that local clustering performance can be improved through federated computing. AVAILABILITY AND IMPLEMENTATION: The proposed methods are available as an R-package (https://github.com/pievos101/uRF).
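Unsupervised random forests typically rest on Breiman's contrast trick: create a synthetic class by permuting each feature column independently (preserving marginals while destroying the joint structure), train a forest to tell real from synthetic, and cluster samples with the forest's proximities. The data-construction step, which is the unsupervised ingredient, fits in a few lines (NumPy; the names are ours and this is not the uRF package's code):

```python
import numpy as np

def real_vs_synthetic(X, seed=0):
    """Build the two-class problem behind unsupervised random forests.

    The synthetic class permutes each column of X independently: every feature
    keeps its marginal distribution, but cross-feature dependencies are destroyed.
    """
    rng = np.random.default_rng(seed)
    X_syn = np.column_stack([rng.permutation(X[:, j]) for j in range(X.shape[1])])
    data = np.vstack([X, X_syn])
    labels = np.r_[np.ones(len(X)), np.zeros(len(X_syn))]   # 1 = real, 0 = synthetic
    return data, labels

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))            # stand-in for a multi-omics feature matrix
data, labels = real_vs_synthetic(X)
```

A random-forest classifier trained on (data, labels) then yields leaf-co-occurrence proximities between the real samples, which feed a clustering algorithm; the forest's feature importances give the cluster-specific molecular contributors described above, and in the federated setting each site fits its forest locally.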


Subject(s)
Precision Medicine; Humans; Cluster Analysis; Precision Medicine/methods; Unsupervised Machine Learning; Machine Learning; Neoplasms; Privacy; Algorithms; Random Forest
6.
IEEE Trans Image Process ; 33: 4882-4895, 2024.
Article in English | MEDLINE | ID: mdl-39236126

ABSTRACT

Unsupervised domain adaptation for medical image segmentation aims to segment unlabeled target-domain images using labeled source-domain images. However, different medical imaging modalities lead to a large domain shift between their images, so models well trained on one imaging modality often fail to segment images from another. In this paper, to mitigate the domain shift between source and target domains, a style-consistency unsupervised domain adaptation image segmentation method is proposed. First, a local phase-enhanced style fusion method is designed to mitigate domain shift and produce locally enhanced organs of interest. Second, a phase consistency discriminator is constructed to distinguish the phase consistency of domain-invariant features between the source and target domains, enhancing the disentanglement of the domain-invariant and style encoders and the removal of domain-specific features from the domain-invariant encoder. Third, a style consistency estimation method is proposed to obtain inconsistency maps from intermediate synthesized target-domain images with different styles, which identify the difficult regions, mitigate the domain shift between synthesized and real target-domain images, and improve the integrity of the organs of interest. Fourth, a style consistency entropy is defined for target-domain images to further improve organ integrity by concentrating on the inconsistent regions. Comprehensive experiments were performed on an in-house dataset and a publicly available dataset, and the results demonstrate the superiority of our framework over state-of-the-art methods.


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Unsupervised Machine Learning; Tomography, X-Ray Computed/methods
7.
Commun Biol ; 7(1): 1062, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39215205

ABSTRACT

Multiplexed imaging technologies have made it possible to interrogate complex tissue microenvironments at sub-cellular resolution within their native spatial context. However, proper quantification of this complexity requires the ability to easily and accurately segment cells into their sub-cellular compartments. Within the supervised learning paradigm, deep learning-based segmentation methods demonstrating human-level performance have emerged. However, limited work has been done on developing such generalist methods within the unsupervised context. Here we present an easy-to-use unsupervised segmentation (UNSEG) method that achieves deep learning-level performance without requiring any training data by leveraging a Bayesian-like framework and nucleus and cell membrane markers. We show that UNSEG is internally consistent and better at generalizing to the complexity of tissue morphology than current deep learning methods, allowing it to unambiguously identify the cytoplasmic compartment of a cell and localize molecules to their correct sub-cellular compartment. We also introduce a perturbed watershed algorithm that stably and automatically segments a cluster of cell nuclei into individual nuclei, increasing the accuracy of the classical watershed. Finally, we demonstrate the efficacy of UNSEG on a high-quality annotated gastrointestinal tissue dataset we have generated, on publicly available datasets, and in a range of practical scenarios.


Subject(s)
Cell Nucleus; Deep Learning; Humans; Unsupervised Machine Learning; Image Processing, Computer-Assisted/methods; Bayes Theorem; Algorithms
8.
Sensors (Basel) ; 24(16)2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39205077

ABSTRACT

Stroke is the second leading cause of death and a major cause of disability around the world, and the development of atherosclerotic plaques in the carotid arteries is generally considered the leading cause of severe cerebrovascular events. In recent years, new reports have reinforced the role of an accurate histopathological analysis of carotid plaques to perform the stratification of affected patients and proceed to the correct prevention of complications. This work proposes applying an unsupervised learning approach to analyze complex whole-slide images (WSIs) of atherosclerotic carotid plaques to allow a simple and fast examination of their most relevant features. All the code developed for the present analysis is freely available. The proposed method offers qualitative and quantitative tools to assist pathologists in examining the complexity of whole-slide images of carotid atherosclerotic plaques more effectively. Nevertheless, future studies using supervised methods should provide evidence of the correspondence between the clusters estimated using the proposed textural-based approach and the regions manually annotated by expert pathologists.
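The core of such an unsupervised WSI analysis is clustering patch-level texture descriptors. As a stand-in for the full pipeline (whose actual features and cluster counts we do not know), the sketch below groups toy two-feature "patches" with a bare-bones k-means in NumPy:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means; returns a cluster label per sample."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):                     # keep the old center if a cluster empties
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical patch descriptors, e.g. (mean intensity, local variance),
# for two tissue components of a plaque section
rng = np.random.default_rng(0)
patches = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(5.0, 0.3, (50, 2))])
labels = kmeans(patches, k=2)
```

As the abstract notes, whether such texture-driven clusters match expert annotations still needs supervised validation.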


Subject(s)
Carotid Arteries; Plaque, Atherosclerotic; Unsupervised Machine Learning; Humans; Plaque, Atherosclerotic/pathology; Plaque, Atherosclerotic/diagnostic imaging; Carotid Arteries/pathology; Image Processing, Computer-Assisted/methods; Algorithms; Image Interpretation, Computer-Assisted/methods
9.
Environ Health Perspect ; 132(8): 85002, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39106156

ABSTRACT

BACKGROUND: The field of toxicology has witnessed substantial advancements in recent years, particularly with the adoption of new approach methodologies (NAMs) to understand and predict chemical toxicity. Class-based methods such as clustering and classification are key to NAMs development and application, aiding the understanding of hazard and risk concerns associated with groups of chemicals without additional laboratory work. Advances in computational chemistry, data generation and availability, and machine learning algorithms represent important opportunities for continued improvement of these techniques to optimize their utility for specific regulatory and research purposes. However, due to their intricacy, deep understanding and careful selection are imperative to align the adequate methods with their intended applications. OBJECTIVES: This commentary aims to deepen the understanding of class-based approaches by elucidating the pivotal role of chemical similarity (structural and biological) in clustering and classification approaches (CCAs). It addresses the dichotomy between general end point-agnostic similarity, often entailing unsupervised analysis, and end point-specific similarity necessitating supervised learning. The goal is to highlight the nuances of these approaches, their applications, and common misuses. DISCUSSION: Understanding similarity is pivotal in toxicological research involving CCAs. The effectiveness of these approaches depends on the right definition and measure of similarity, which varies based on context and objectives of the study. This choice is influenced by how chemical structures are represented and the respective labels indicating biological activity, if applicable. The distinction between unsupervised clustering and supervised classification methods is vital, requiring the use of end point-agnostic vs. end point-specific similarity definition. 
Separate use or combination of these methods requires careful consideration to prevent bias and ensure relevance for the goal of the study. Unsupervised methods use end point-agnostic similarity measures to uncover general structural patterns and relationships, aiding hypothesis generation and facilitating exploration of datasets without the need for predefined labels or explicit guidance. Conversely, supervised techniques demand end point-specific similarity to group chemicals into predefined classes or to train classification models, allowing accurate predictions for new chemicals. Misuse can arise when unsupervised methods are applied to end point-specific contexts, like analog selection in read-across, leading to erroneous conclusions. This commentary provides insights into the significance of similarity and its role in supervised classification and unsupervised clustering approaches. https://doi.org/10.1289/EHP14001.
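For the structural, end point-agnostic side, a workhorse measure is the Tanimoto coefficient on binary fingerprints, and unsupervised grouping can be as simple as connecting chemicals whose similarity exceeds a threshold. A self-contained sketch (fingerprints represented as sets of "on" bit indices; the cutoff and toy chemicals are illustrative):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between binary fingerprints stored as sets."""
    union = fp_a | fp_b
    return len(fp_a & fp_b) / len(union) if union else 1.0

def threshold_clusters(fps, cutoff=0.6):
    """Single-linkage-style grouping: chemicals are joined if similarity >= cutoff."""
    names = list(fps)
    parent = {n: n for n in names}          # union-find forest
    def find(n):
        while parent[n] != n:
            n = parent[n]
        return n
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if tanimoto(fps[a], fps[b]) >= cutoff:
                parent[find(a)] = find(b)   # merge the two groups
    groups = {}
    for n in names:
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())

fps = {"chem_A": {1, 2, 3, 4}, "chem_B": {2, 3, 4, 5}, "chem_C": {10, 11, 12}}
sim = tanimoto(fps["chem_A"], fps["chem_B"])        # 3 shared bits / 5 total = 0.6
clusters = threshold_clusters(fps, cutoff=0.6)
```

End point-specific (supervised) use would instead validate the fingerprint and cutoff choices against measured activity labels, per the distinction drawn above.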


Subject(s)
Machine Learning; Cluster Analysis; Unsupervised Machine Learning; Toxicology/methods; Algorithms
10.
Phys Med Biol ; 69(16)2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39119998

ABSTRACT

Objective. Deep learning has markedly enhanced the performance of sparse-view computed tomography reconstruction. However, the dependence of these methods on supervised training using high-quality paired datasets, and the necessity for retraining under varied physical acquisition conditions, constrain their generalizability across new imaging contexts and settings. Approach. To overcome these limitations, we propose an unsupervised approach grounded in the deep image prior framework. Our approach advances beyond the conventional single noise level input by incorporating multi-level linear diffusion noise, significantly mitigating the risk of overfitting. Furthermore, we embed non-local self-similarity as a deep implicit prior within a self-attention network structure, improving the model's capability to identify and utilize repetitive patterns throughout the image. Additionally, leveraging imaging physics, gradient backpropagation is performed between the image domain and projection data space to optimize network weights. Main results. Evaluations with both simulated and clinical cases demonstrate our method's effective zero-shot adaptability across various projection views, highlighting its robustness and flexibility. Additionally, our approach effectively eliminates noise and streak artifacts while significantly restoring intricate image details. Significance. Our method aims to overcome the limitations in current supervised deep learning-based sparse-view CT reconstruction, offering improved generalizability and adaptability without the need for extensive paired training data.


Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods; Humans; Diffusion; Signal-To-Noise Ratio; Unsupervised Machine Learning
11.
Sci Rep ; 14(1): 17956, 2024 08 02.
Article in English | MEDLINE | ID: mdl-39095606

ABSTRACT

The symptoms of diseases can vary among individuals and may remain undetected in the early stages. Detecting these symptoms early is crucial for effectively managing and treating cases of varying severity. Machine learning has made major advances in recent years, proving its effectiveness in various healthcare applications. This study aims to identify patterns of symptoms and general rules regarding symptoms among patients using supervised and unsupervised machine learning, integrating a rule-based machine learning technique with classification methods to extend a prediction model. The study analyzes patient data that was available online through the Kaggle repository. After preprocessing the data and exploring descriptive statistics, the Apriori algorithm was applied to identify frequent symptoms and patterns in the discovered rules. Several machine learning models were then applied to predict diseases, including stepwise regression, support vector machines, bootstrap forests, boosted trees, and neural-boosted methods. Based on cross-validation conducted for each model against established criteria, the stepwise fitting method outperformed all competitors in this study. Moreover, numerous significant decision rules were extracted that can streamline clinical applications without the need for additional expertise. These rules enable the prediction of relationships between symptoms and diseases, as well as between different diseases, and so have the potential to improve the performance of prediction models.
Overall, the proposed approach can support not only healthcare professionals but also patients who face cost and time constraints in diagnosing and treating these diseases.
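The Apriori step described above relies on one pruning fact: an itemset can only be frequent if all of its subsets are. A compact, dependency-free implementation over toy symptom "transactions" (the symptom names and support threshold are illustrative, not from the study's Kaggle data):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return {frequent itemset: support}, growing candidates level by level."""
    sets = [frozenset(t) for t in transactions]
    n = len(sets)
    freq = {}
    level = [frozenset([i]) for i in sorted({i for t in sets for i in t})]
    while level:
        counts = {c: sum(c <= t for t in sets) for c in level}
        freq.update({c: v / n for c, v in counts.items() if v / n >= min_support})
        survivors = [c for c in level if c in freq]
        k = len(level[0]) + 1
        candidates = {a | b for a, b in combinations(survivors, 2) if len(a | b) == k}
        # Apriori pruning: every (k-1)-subset must itself be frequent
        level = [c for c in candidates
                 if all(frozenset(s) in freq for s in combinations(c, k - 1))]
    return freq

symptoms = [{"fever", "cough"}, {"fever", "cough", "fatigue"},
            {"cough", "fatigue"}, {"fever", "cough"}]
freq = apriori(symptoms, min_support=0.5)
```

Association rules follow directly from these supports, e.g. confidence(cough => fever) = support({cough, fever}) / support({cough}).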


Subject(s)
Algorithms; Supervised Machine Learning; Unsupervised Machine Learning; Humans; Male; Female; Support Vector Machine; Middle Aged; Adult; Disease
12.
Neuroimage ; 298: 120758, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39094809

ABSTRACT

Recent advances in calcium imaging, including the development of fast and sensitive genetically encoded indicators, high-resolution camera chips for wide-field imaging, and resonant scanning mirrors in laser scanning microscopy, have notably improved the temporal and spatial resolution of functional imaging analysis. Nonetheless, the variability of imaging approaches and brain structures challenges the development of versatile and reliable segmentation methods. Standard techniques, such as manual selection of regions of interest or machine learning solutions, often fall short due to either user bias, non-transferability among systems, or computational demand. To overcome these issues, we developed CalciSeg, a data-driven and reproducible approach for unsupervised functional calcium imaging data segmentation. CalciSeg addresses the challenges associated with brain structure variability and user bias by offering a computationally efficient solution for automatic image segmentation based on two parameters: regions' size limits and number of refinement iterations. We evaluated CalciSeg efficacy on datasets of varied complexity, different insect species (locusts, bees, and cockroaches), and imaging systems (wide-field, confocal, and multiphoton), showing the robustness and generality of our approach. Finally, the user-friendly nature and open-source availability of CalciSeg facilitate the integration of this algorithm into existing analysis pipelines.


Subject(s)
Brain; Calcium; Calcium/metabolism; Calcium/analysis; Animals; Brain/diagnostic imaging; Brain/metabolism; Image Processing, Computer-Assisted/methods; Unsupervised Machine Learning; Bees; Software; Algorithms; Cockroaches; Neuroimaging/methods
13.
Ultrasonics ; 143: 107408, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39094387

ABSTRACT

Plane wave imaging (PWI) in medical ultrasound is becoming an important reconstruction method with high frame rates and new clinical applications. Recently, single PWI based on deep learning (DL) has been studied to overcome lowered frame rates of traditional PWI with multiple PW transmissions. However, due to the lack of appropriate ground truth images, DL-based PWI still remains challenging for performance improvements. To address this issue, in this paper, we propose a new unsupervised learning approach, i.e., deep coherence learning (DCL)-based DL beamformer (DL-DCL), for high-quality single PWI. In DL-DCL, the DL network is trained to predict highly correlated signals with a unique loss function from a set of PW data, and the trained DL model encourages high-quality PWI from low-quality single PW data. In addition, the DL-DCL framework based on complex baseband signals enables a universal beamformer. To assess the performance of DL-DCL, simulation, phantom and in vivo studies were conducted with public datasets, and it was compared with traditional beamformers (i.e., DAS with 75-PWs and DMAS with 1-PW) and other DL-based methods (i.e., supervised learning approach with 1-PW and generative adversarial network (GAN) with 1-PW). From the experiments, the proposed DL-DCL showed comparable results with DMAS with 1-PW and DAS with 75-PWs in spatial resolution, and it outperformed all comparison methods in contrast resolution. These results demonstrated that the proposed unsupervised learning approach can address the inherent limitations of traditional PWIs based on DL, and it also showed great potential in clinical settings with minimal artifacts.
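For readers unfamiliar with the DAS baseline against which DL-DCL is compared: delay-and-sum re-applies each channel's time of flight for a chosen pixel and sums, so echoes from a true scatterer add coherently. A highly simplified single-plane-wave simulation (NumPy; the array geometry, pulse, and zero-angle transmit model are our simplifying assumptions, not the paper's setup):

```python
import numpy as np

fs, c = 40e6, 1540.0                              # sampling rate (Hz), sound speed (m/s)
elem_x = np.arange(-16, 16) * 0.3e-3              # 32-element array, 0.3 mm pitch
target = (0.0, 20e-3)                             # point scatterer at 20 mm depth
t = np.arange(2048) / fs

def pulse(tt):
    """Short 5 MHz Gaussian-windowed pulse."""
    return np.exp(-(tt * fs) ** 2 / 8) * np.cos(2 * np.pi * 5e6 * tt)

# Receive data for a 0-degree plane-wave transmit: down to the target, back to each element
channels = np.array([pulse(t - (target[1] / c + np.hypot(ex - target[0], target[1]) / c))
                     for ex in elem_x])

def das(px, pz):
    """Delay-and-sum beamformed value at pixel (px, pz)."""
    return sum(np.interp(pz / c + np.hypot(ex - px, pz) / c, t, ch)
               for ex, ch in zip(elem_x, channels))

on_target = das(*target)                          # delays align: coherent sum
off_target = das(0.0, 25e-3)                      # 5 mm away: delays no longer align
```

Methods like DMAS and the learned beamformer above replace this plain summation with more selective combinations of the delayed channel signals.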


Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Phantoms, Imaging; Ultrasonography; Ultrasonography/methods; Humans; Image Processing, Computer-Assisted/methods; Unsupervised Machine Learning
14.
Neural Netw ; 179: 106557, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39106566

ABSTRACT

Unsupervised semantic segmentation aims to assign each pixel to a known category without annotation. Recent studies have demonstrated promising outcomes by employing a vision transformer backbone pre-trained on an image-level dataset in a self-supervised manner. However, those methods often depend on complex architectures or meticulously designed inputs, so we explore whether the task can be addressed with a straightforward approach. To prevent over-complication, we introduce a simple Dense Embedding Contrast network (DECNet) for unsupervised semantic segmentation in this paper. Specifically, we propose a Nearest Neighbor Similarity strategy (NNS) to establish well-defined positive and negative pairs for dense contrastive learning. Meanwhile, we optimize a contrastive objective named Ortho-InfoNCE to alleviate the false-negative problem inherent in contrastive learning and further enhance dense representations. Finally, extensive experiments conducted on the COCO-Stuff and Cityscapes datasets demonstrate that our approach outperforms state-of-the-art methods.


Subject(s)
Neural Networks, Computer; Semantics; Unsupervised Machine Learning; Algorithms; Humans; Image Processing, Computer-Assisted/methods
15.
Neural Netw ; 179: 106583, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39111163

ABSTRACT

Entity alignment is a crucial task in knowledge graphs, aiming to match corresponding entities from different knowledge graphs. Due to the scarcity of pre-aligned entities in real-world scenarios, research focused on unsupervised entity alignment has become more popular. However, current unsupervised entity alignment methods suffer from a lack of informative entity guidance, hindering their ability to accurately predict challenging entities with similar names and structures. To solve these problems, we present an unsupervised multi-view contrastive learning framework with an attention-based reranking strategy for entity alignment, named AR-Align. In AR-Align, two kinds of data augmentation methods are employed to provide a complementary view for neighborhood and attribute, respectively. Next, a multi-view contrastive learning method is introduced to reduce the semantic gap between different views of the augmented entities. Moreover, an attention-based reranking strategy is proposed to rerank the hard entities through calculating their weighted sum of embedding similarities on different structures. Experimental results indicate that AR-Align outperforms most both supervised and unsupervised state-of-the-art methods on three benchmark datasets.
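The reranking idea, scoring hard candidates by a weighted sum of similarities computed on different structures (neighborhood, attribute, name), can be sketched directly. Below, the weights come from a simple softmax over each view's peak confidence; this particular weighting is our stand-in for illustration, not AR-Align's exact formulation:

```python
import numpy as np

def rerank(sim_views, temperature=0.5):
    """Combine per-view (n_source x n_target) similarity matrices and realign.

    Each view receives a softmax 'attention' weight from its own peak score,
    then candidates are reranked by the weighted sum of similarities.
    """
    confidences = np.array([s.max() for s in sim_views])
    w = np.exp(confidences / temperature)
    w /= w.sum()
    combined = sum(wi * s for wi, s in zip(w, sim_views))
    return combined.argmax(axis=1)             # best target entity per source entity

# View 1 (e.g. neighborhood structure) is decisive; view 2 (e.g. attributes) is flat noise
view1 = np.eye(4)
view2 = np.full((4, 4), 0.1)
alignment = rerank([view1, view2])
```

The benefit of such reranking shows up precisely on the hard entities with similar names and structures, where any single view's argmax is unreliable.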


Subject(s)
Unsupervised Machine Learning; Attention; Semantics; Neural Networks, Computer; Algorithms; Humans
16.
Neural Netw ; 179: 106584, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39142174

ABSTRACT

Contrastive learning has emerged as a cornerstone in unsupervised representation learning. Its primary paradigm involves an instance discrimination task utilizing InfoNCE loss where the loss has been proven to be a form of mutual information. Consequently, it has become a common practice to analyze contrastive learning using mutual information as a measure. Yet, this analysis approach presents difficulties due to the necessity of estimating mutual information for real-world applications. This creates a gap between the elegance of its mathematical foundation and the complexity of its estimation, thereby hampering the ability to derive solid and meaningful insights from mutual information analysis. In this study, we introduce three novel methods and a few related theorems, aimed at enhancing the rigor of mutual information analysis. Despite their simplicity, these methods can carry substantial utility. Leveraging these approaches, we reassess three instances of contrastive learning analysis, illustrating the capacity of the proposed methods to facilitate deeper comprehension or to rectify pre-existing misconceptions. The main results can be summarized as follows: (1) While small batch sizes influence the range of training loss, they do not inherently limit learned representation's information content or affect downstream performance adversely; (2) Mutual information, with careful selection of positive pairings and post-training estimation, proves to be a superior measure for evaluating practical networks; and (3) Distinguishing between task-relevant and irrelevant information presents challenges, yet irrelevant information sources do not necessarily compromise the generalization of downstream tasks.
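Result (1) above follows from an elementary property worth seeing concretely: the InfoNCE loss is nonnegative, so the derived mutual-information estimate, log N minus the loss, can never exceed log N for batch size N, no matter how informative the representations are. A small NumPy check (toy embeddings; the cosine similarity and temperature are arbitrary choices of ours):

```python
import numpy as np

def infonce_mi_estimate(za, zb, tau=0.05):
    """log N minus the InfoNCE loss: a lower bound on the mutual information."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = (za @ zb.T) / tau
    logits -= logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss = -np.mean(np.diag(logp))                 # always >= 0
    return np.log(len(za)) - loss

rng = np.random.default_rng(1)
n = 64
z = rng.normal(size=(n, 16))
mi_identical = infonce_mi_estimate(z, z)                        # perfectly dependent views
mi_independent = infonce_mi_estimate(z, rng.normal(size=(n, 16)))
```

Estimates saturating near log N in practice therefore reflect the estimator's ceiling, not necessarily limited information content, which is the point the abstract makes about batch size.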


Subject(s)
Neural Networks, Computer; Humans; Algorithms; Learning/physiology; Unsupervised Machine Learning
17.
Neural Netw ; 179: 106581, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39128276

ABSTRACT

Unsupervised domain adaptation (UDA) is a weakly supervised learning technique that classifies images in the target domain when the source domain has labeled samples, and the target domain has unlabeled samples. Due to the complexity of imaging conditions and the content of remote sensing images, the use of UDA to accurately extract artificial features such as buildings from high-spatial-resolution (HSR) imagery is still challenging. In this study, we propose a new UDA method for building extraction, the contrastive domain adaptation network (CDANet), by utilizing adversarial learning and contrastive learning techniques. CDANet consists of a single multitask generator and dual discriminators. The generator employs a region and edge dual-branch structure that strengthens its edge extraction ability and is beneficial for the extraction of small and densely distributed buildings. The dual discriminators receive the region and edge prediction outputs and achieve multilevel adversarial learning. During adversarial training processing, CDANet aligns the cross-domain of similar pixel features in the embedding space by constructing the regional pixelwise contrastive loss. A self-training (ST) strategy based on pseudolabel generation is further utilized to address the target intradomain discrepancy. Comprehensive experiments are conducted to validate CDANet on three publicly accessible datasets, namely the WHU, Austin, and Massachusetts. Ablation experiments show that the generator network structure, contrastive loss and ST strategy all improve the building extraction accuracy. Method comparisons validate that CDANet achieves superior performance to several state-of-the-art methods, including AdaptSegNet, AdvEnt, IntraDA, FDANet and ADRS, in terms of F1 score and mIoU.


Subject(s)
Neural Networks, Computer; Semantics; Unsupervised Machine Learning; Image Processing, Computer-Assisted/methods; Algorithms; Humans
18.
J Dent ; 149: 105260, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39096996

ABSTRACT

OBJECTIVES: The aim of this study was to predict the risk of dental implant loss by clustering features associated with implant survival rates. MATERIALS AND METHODS: Multiple clinical features from 8513 patients who underwent single implant placement were retrospectively analysed. A hybrid method integrating unsupervised learning algorithms with survival analysis was employed for data mining. Two-step cluster, univariate Cox regression, and Kaplan‒Meier survival analyses were performed to identify the clustering features associated with implant survival rates. To predict the risk of dental implant loss, nomograms were constructed on the basis of time-stratified multivariate Cox regression. RESULTS: Six clusters with distinct features and prognoses were identified using two-step cluster analysis and Kaplan‒Meier survival analysis. Compared with the other clusters, only one cluster presented significantly lower implant survival rates, and six specific clustering features within this cluster were identified as high-risk factors: age, smoking history, implant diameter, implant length, implant position, and surgical procedure. Nomograms were created to assess the impact of the six high-risk factors on implant loss for three periods: 1) 0-120 days, 2) 120-310 days, and 3) more than 310 days after implant placement. The concordance indices of the models were 0.642, 0.781, and 0.715, respectively. CONCLUSIONS: The hybrid unsupervised clustering method, which clusters and identifies high-risk clinical features associated with implant loss without relying on predefined labels or target variables, represents an effective approach for developing a visual model for predicting implant prognosis. However, further validation with a multimodal, multicentre, prospective cohort is needed.
CLINICAL SIGNIFICANCE: Visual prognosis prediction utilizing this nomogram that predicts the risk of implant loss on the basis of clustering features can assist dentists in preoperative assessments and clinical decision-making, potentially improving dental implant prognosis.
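The Kaplan‒Meier survival analysis used above can be sketched as a plain-numpy illustration of the standard product-limit estimator (not the authors' analysis code; variable names are assumptions):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier (product-limit) survival estimate.

    times:  follow-up time per implant; events: 1 = implant lost, 0 = censored.
    Returns (event_times, survival_probabilities).
    """
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    uniq = np.unique(times[events == 1])   # distinct times at which a loss occurred
    surv, s = [], 1.0
    for t in uniq:
        n_at_risk = np.sum(times >= t)                  # still followed at time t
        d = np.sum((times == t) & (events == 1))        # losses at time t
        s *= 1.0 - d / n_at_risk                        # product-limit update
        surv.append(s)
    return uniq, np.array(surv)
```

Comparing these curves across the six clusters is what exposes the single cluster with significantly lower survival.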


Subject(s)
Dental Implants; Nomograms; Humans; Cluster Analysis; Female; Middle Aged; Male; Retrospective Studies; Risk Factors; Adult; Dental Implants/adverse effects; Dental Restoration Failure; Aged; Kaplan-Meier Estimate; Risk Assessment; Unsupervised Machine Learning; Proportional Hazards Models; Dental Implantation, Endosseous/adverse effects; Algorithms; Data Mining; Dental Implants, Single-Tooth
19.
Int J Neural Syst ; 34(10): 2450055, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39136190

ABSTRACT

Automatic seizure detection from electroencephalography (EEG) is of great importance in aiding the diagnosis and treatment of epilepsy due to its convenience and economy. Existing seizure detection methods are usually patient-specific: training and testing are carried out on the same patient, which limits their scalability to other patients. To address this issue, we propose a cross-subject seizure detection method via unsupervised domain adaptation. The proposed method aims to obtain seizure-specific information through shallow and deep feature alignment. For shallow feature alignment, we use a convolutional neural network (CNN) to extract seizure-related features. The distribution gap of the shallow features between different patients is minimized by multi-kernel maximum mean discrepancy (MK-MMD). For deep feature alignment, adversarial learning is utilized: the feature extractor learns feature representations that confuse the domain classifier, making the extracted deep features more generalizable to new patients. The performance of our method is evaluated on the CHB-MIT and Siena databases in epoch-based experiments. Additionally, event-based experiments are conducted on the CHB-MIT dataset. The results validate the feasibility of our method in diminishing the domain disparities among different patients.
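The MK-MMD criterion used for shallow feature alignment can be illustrated with a small numpy sketch: a biased multi-kernel MMD estimate averaged over an assumed bank of Gaussian bandwidths, not the paper's implementation:

```python
import numpy as np

def mk_mmd(X, Y, sigmas=(0.5, 1.0, 2.0)):
    """Multi-kernel maximum mean discrepancy between feature sets X and Y.

    Averages biased MMD^2 estimates over several Gaussian kernel bandwidths,
    so the test is less sensitive to any single bandwidth choice.
    X, Y: (n, d) and (m, d) arrays of extracted features from two patients.
    """
    def gram(A, B, sigma):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
        return np.exp(-d2 / (2 * sigma ** 2))
    total = 0.0
    for s in sigmas:
        total += gram(X, X, s).mean() + gram(Y, Y, s).mean() - 2 * gram(X, Y, s).mean()
    return total / len(sigmas)
```

Minimizing this quantity between source-patient and target-patient feature batches pulls their shallow feature distributions together.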


Subject(s)
Electroencephalography; Neural Networks, Computer; Seizures; Unsupervised Machine Learning; Humans; Electroencephalography/methods; Seizures/diagnosis; Seizures/physiopathology; Deep Learning; Signal Processing, Computer-Assisted
20.
Stud Health Technol Inform ; 316: 1607-1611, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176518

ABSTRACT

Blinking contributes to the health and protection of the eye and also holds potential in the context of muscle or nerve disorder diagnosis. Traditional methods that merely classify the eye as open or closed are insufficient, as they do not capture medically relevant aspects such as closure speed, duration, or closure percentage. This limitation can be addressed by reliably detecting blinking intervals in recordings with high temporal resolution. Our research demonstrates the reliable detection of blinking events through data-driven analysis of the eye aspect ratio. In an unsupervised manner, we establish an eye state prototype to identify blink intervals and measure inter-eye synchronicity between moments of peak closure. Additionally, our research shows that manually defined prototypes yield comparable results. Our results demonstrate inter-eye synchronicity up to 4.16 ms. We anticipate that medical professionals could utilize our methods to identify or define disease-specific prototypes as potential diagnostic tools.


Subject(s)
Blinking; Blinking/physiology; Humans; Unsupervised Machine Learning; Semantics