Results 1 - 4 of 4
1.
IEEE Trans Neural Netw Learn Syst ; 33(12): 7610-7620, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34156951

ABSTRACT

Clustering algorithms based on deep neural networks have been widely studied for image analysis. Most existing methods require partial knowledge of the true labels, namely, the number of clusters, which is usually not available in practice. In this article, we propose a Bayesian nonparametric framework, deep nonparametric Bayes (DNB), for jointly learning image clusters and deep representations in a doubly unsupervised manner. In doubly unsupervised learning, we deal with the problem of "unknown unknowns": we estimate not only the unknown image labels but also the unknown number of labels. The proposed algorithm alternates between generating a potentially unbounded number of clusters in the forward pass and learning the deep networks in the backward pass. With the help of Dirichlet process mixtures, the proposed method is able to partition the latent representation space without specifying the number of clusters a priori. An important feature of this work is that all the estimation is realized with an end-to-end solution, which is very different from methods that rely on post hoc analysis to select the number of clusters. Another key idea in this article is to provide a principled solution to the "trivial solution" problem in deep clustering, which has received little attention in the current literature. With extensive experiments on benchmark datasets, we show that our doubly unsupervised method achieves good clustering performance and outperforms many other unsupervised image clustering methods.
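The forward pass this abstract describes — growing the cluster count as needed under a Dirichlet process prior — can be illustrated with a minimal Chinese Restaurant Process sketch. This is a plain-Python stand-in, not the paper's DNB algorithm: the isotropic Gaussian likelihood, the sequential assignment order, and the `alpha`/`sigma` values are all illustrative assumptions.

```python
import math
import random

def crp_assign(embeddings, alpha=1.0, sigma=1.0, seed=0):
    """Sequentially assign points to clusters under a Chinese Restaurant
    Process prior: a point joins an existing cluster with probability
    proportional to the cluster's size times an isotropic Gaussian
    likelihood, or opens a NEW cluster with probability proportional
    to alpha -- so the number of clusters is never fixed in advance."""
    rng = random.Random(seed)
    centers, counts, labels = [], [], []
    for x in embeddings:
        weights = []
        for c, n in zip(centers, counts):
            dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
            weights.append(n * math.exp(-dist2 / (2 * sigma ** 2)))
        weights.append(alpha)  # the "new cluster" option
        r = rng.random() * sum(weights)
        k = 0
        while r > weights[k]:
            r -= weights[k]
            k += 1
        if k == len(centers):  # open a new cluster at this point
            centers.append(list(x))
            counts.append(1)
        else:                  # join cluster k and update its running mean
            counts[k] += 1
            centers[k] = [(ci * (counts[k] - 1) + xi) / counts[k]
                          for ci, xi in zip(centers[k], x)]
        labels.append(k)
    return labels
```

Run on two well-separated blobs, the sampler keeps the blobs in different clusters without being told how many clusters exist — the property the abstract attributes to the Dirichlet process mixture.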

2.
IEEE Trans Cybern ; 52(7): 6555-6566, 2022 Jul.
Article in English | MEDLINE | ID: mdl-33544685

ABSTRACT

Cross-lingual sentiment analysis (CLSA) aims to leverage label-rich resources in the source language to improve models for a resource-scarce domain in the target language, where monolingual machine-learning approaches usually suffer from the unavailability of sentiment knowledge. Recently, the transfer learning paradigm, which can transfer sentiment knowledge from resource-rich languages (e.g., English) to resource-poor languages (e.g., Chinese), has gained particular interest. Along this line, in this article, we propose semisupervised learning with SCL and space transfer (ssSCL-ST), a semisupervised transfer learning approach that makes use of structural correspondence learning (SCL) as well as space transfer for cross-lingual sentiment analysis. The key idea behind ssSCL-ST, at a high level, is to explore the intrinsic sentiment knowledge in the target-lingual domain and to reduce the loss of valuable knowledge incurred by knowledge transfer via semisupervised learning. ssSCL-ST also features pivot set extension and space transfer, which help enhance the efficiency of knowledge transfer and improve classification accuracy in the target language domain. Extensive experimental results demonstrate the superiority of ssSCL-ST over state-of-the-art approaches without using any parallel corpora.


Subject(s)
Machine Learning, Supervised Machine Learning
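Structural correspondence learning, which this entry builds on, hinges on pivot features: terms that behave consistently in both language domains and anchor the correspondence between them. A minimal pivot-selection sketch follows; the frequency-based ranking and `min_count` threshold are illustrative assumptions, and the paper's actual pivot set extension is not reproduced here.

```python
from collections import Counter

def select_pivots(source_docs, target_docs, k=3, min_count=2):
    """Pick candidate pivot features: terms that occur at least
    min_count times in BOTH domains, ranked by the smaller of the
    two frequencies. Docs are lists of tokens (already mapped into
    a shared space, e.g. via translation or bilingual embeddings)."""
    src = Counter(w for d in source_docs for w in d)
    tgt = Counter(w for d in target_docs for w in d)
    shared = [w for w in src
              if w in tgt and src[w] >= min_count and tgt[w] >= min_count]
    shared.sort(key=lambda w: min(src[w], tgt[w]), reverse=True)
    return shared[:k]
```

In full SCL, each selected pivot then gets a linear predictor trained from non-pivot features, and the predictor weights define the cross-domain correspondence; the selection step above is only the entry point.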
3.
Article in English | MEDLINE | ID: mdl-33729944

ABSTRACT

Due to the shortage of COVID-19 viral testing kits, radiology is used to complement the screening process. Deep learning methods are promising for automatically detecting COVID-19 disease in chest x-ray images. Most of these works first train a convolutional neural network (CNN) on an existing large-scale chest x-ray image dataset and then fine-tune the model on the newly collected COVID-19 chest x-ray dataset, often at a much smaller scale. However, simple fine-tuning may lead to poor performance due to two issues: the large domain shift present across chest x-ray datasets and the relatively small scale of the COVID-19 chest x-ray dataset. In an attempt to address these issues, we formulate the problem of COVID-19 chest x-ray image classification in a semi-supervised open set domain adaptation setting and propose a novel domain adaptation method, Semi-supervised Open set Domain Adversarial network (SODA). SODA is designed to align the data distributions across different domains both in the general domain space and in the common subspace of source and target data. In our experiments, SODA achieves leading classification performance compared with recent state-of-the-art models in separating COVID-19 from common pneumonia. We also present results showing that SODA produces better pathology localizations.
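The "adversarial" alignment in domain-adversarial networks like SODA is usually implemented with a gradient-reversal layer: identity on the forward pass, negated gradient on the backward pass, so the shared feature extractor learns to confuse a domain classifier. The abstract does not specify SODA's internals, so the scalar toy below only illustrates that generic mechanism; `lam`, `lr`, and the logistic domain classifier are assumptions for the sketch.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class GradReverse:
    """Gradient-reversal layer: identity in the forward pass, gradient
    multiplied by -lam in the backward pass. The domain classifier
    descends its loss while the feature extractor, receiving the
    flipped gradient, ascends it -- making domains indistinguishable."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x

    def backward(self, grad_out):
        return -self.lam * grad_out

# Toy scalar model: feature f = w * x, domain probability p = sigmoid(v * f).
w, v, lam, lr = 1.0, 1.0, 1.0, 0.1
grl = GradReverse(lam)
x, d = 2.0, 1.0  # one sample with domain label 1 ("source")

f = w * x
p = sigmoid(v * grl.forward(f))
grad_f_classifier = (p - d) * v                   # BCE gradient wrt GRL output
grad_f_feature = grl.backward(grad_f_classifier)  # reversed for the extractor
w_new = w - lr * grad_f_feature * x               # chain rule: df/dw = x
```

Because the sample's domain label is 1 and p < 1, the classifier's gradient pulls p up, while the reversed gradient pushes the feature weight the opposite way — one step of the min-max game.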

4.
IEEE Trans Neural Netw Learn Syst ; 32(2): 736-747, 2021 02.
Article in English | MEDLINE | ID: mdl-32287008

ABSTRACT

Cross-lingual sentiment classification (CLSC) aims to leverage label-rich resources in the source language to improve prediction models for a resource-scarce domain in the target language. Existing feature-representation-learning approaches try to minimize the difference between latent features of different domains by exact alignment, achieved through either one-to-one topic alignment or matrix projection. Exact alignment, however, restricts representation flexibility and further degrades model performance on CLSC tasks when the distribution difference between the two language domains is large. On the other hand, most previous studies proposed document-level models or ignored the sentiment polarities of topics, which may lead to insufficient learning of latent features. To solve these problems, we propose a coarse alignment mechanism that enhances the model's representation with a group-to-group topic alignment in an aspect-level fine-grained model. First, we propose an unsupervised aspect, opinion, and sentiment unification model (AOS), which jointly models the aspects, opinions, and sentiments of reviews from different domains and helps capture more accurate latent feature representations through the coarse alignment mechanism. To further boost AOS, we propose ps-AOS, a partially supervised AOS model in which labeled source language data help minimize the difference between feature representations of the two language domains with the help of logistic regression. Finally, an expectation-maximization framework with Gibbs sampling is proposed to optimize our model. Extensive experiments on various multilingual product review datasets show that ps-AOS significantly outperforms various state-of-the-art baselines.
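The optimization step the abstract names — Gibbs sampling inside an EM loop — centers on resampling each word's latent assignment from its conditional given all other assignments. The sketch below shows that move for a plain two-topic model, as a stand-in only: AOS's joint aspect/opinion/sentiment structure and its EM outer loop are not reproduced, and the `alpha`/`beta` hyperparameters are illustrative assumptions.

```python
import random

def gibbs_resample(words, z, n_tw, n_t, V, K=2, alpha=1.0, beta=0.1, seed=0):
    """One sweep of collapsed Gibbs sampling over word-topic assignments.
    words: word ids; z: current topic of each word; n_tw: per-topic word
    counts (list of dicts); n_t: per-topic totals; V: vocabulary size.
    Each word is removed from the counts, a new topic is drawn from the
    conditional p(k) ~ (count(k, w) + beta) / (n_t[k] + V*beta) * (n_t[k]
    + alpha), and the counts are restored under the new topic."""
    rng = random.Random(seed)
    for i, w in enumerate(words):
        k_old = z[i]
        n_tw[k_old][w] -= 1          # remove the current assignment
        n_t[k_old] -= 1
        probs = [(n_tw[k].get(w, 0) + beta) / (n_t[k] + V * beta)
                 * (n_t[k] + alpha) for k in range(K)]
        r = rng.random() * sum(probs)
        k_new = 0
        while r > probs[k_new]:
            r -= probs[k_new]
            k_new += 1
        z[i] = k_new                 # add the word back under the new topic
        n_tw[k_new][w] = n_tw[k_new].get(w, 0) + 1
        n_t[k_new] += 1
    return z
```

In an EM-with-Gibbs scheme, sweeps like this approximate the E-step's posterior over latent assignments, and the M-step then re-estimates model parameters from the accumulated counts.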
