1.
Article in English | MEDLINE | ID: mdl-37339041

ABSTRACT

This paper aims at unpaired shape-to-shape transformation for 3D point clouds, for instance, turning a chair into its table counterpart. Recent work on 3D shape transfer or deformation relies heavily on paired inputs or specific correspondences. However, it is usually not feasible to assign precise correspondences or to prepare paired data from two domains. A few methods have begun to study unpaired learning, but the characteristics of a source model may not be preserved after transformation. To overcome the difficulty of unpaired learning for transformation, we propose alternately training the autoencoder and translators to construct a shape-aware latent space. This latent space, built on novel loss functions, enables our translators to transform 3D point clouds across domains while maintaining the consistency of shape characteristics. We also crafted a test dataset to objectively evaluate the performance of point-cloud translation. The experiments demonstrate that our framework can construct high-quality models and retain more shape characteristics during cross-domain translation than the state-of-the-art methods. Moreover, we present shape editing applications of our proposed latent space, including shape-style mixing and shape-type shifting, which do not require retraining the model.
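The abstract describes alternating between two training phases: updating a point-cloud autoencoder to build a shape-aware latent space, and updating latent-space translators that map codes from one shape domain to the other. The following is a minimal PyTorch sketch of that two-phase alternation only; the toy MLP networks, the latent cycle-consistency loss, and the random data standing in for chair/table shapes are illustrative assumptions and do not reproduce the paper's actual architectures or loss functions.

```python
import torch
import torch.nn as nn

N_PTS, LATENT = 1024, 128

class Encoder(nn.Module):
    """Per-point MLP followed by max pooling -> global latent code."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, LATENT))
    def forward(self, pts):                      # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values   # (B, LATENT)

class Decoder(nn.Module):
    """Latent code -> fixed-size point set."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(LATENT, 512), nn.ReLU(), nn.Linear(512, N_PTS * 3))
    def forward(self, z):
        return self.mlp(z).view(-1, N_PTS, 3)

class Translator(nn.Module):
    """Maps a latent code from one shape domain to the other (e.g. chair -> table)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, LATENT))
    def forward(self, z):
        return self.mlp(z)

def chamfer(a, b):
    """Symmetric Chamfer distance between two point sets of shape (B, N, 3)."""
    d = torch.cdist(a, b)                         # (B, N, N) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

enc, dec = Encoder(), Decoder()
t_ab, t_ba = Translator(), Translator()           # domain A -> B and B -> A translators
opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_tr = torch.optim.Adam(list(t_ab.parameters()) + list(t_ba.parameters()), lr=1e-3)

for step in range(100):
    pts_a = torch.rand(8, N_PTS, 3)               # stand-in for domain-A shapes (e.g. chairs)
    pts_b = torch.rand(8, N_PTS, 3)               # stand-in for domain-B shapes (e.g. tables)

    # Phase 1: update the autoencoder with a reconstruction loss to shape the latent space.
    opt_ae.zero_grad()
    rec_loss = chamfer(dec(enc(pts_a)), pts_a) + chamfer(dec(enc(pts_b)), pts_b)
    rec_loss.backward()
    opt_ae.step()

    # Phase 2: hold the autoencoder fixed and update the translators. Here an assumed
    # latent cycle-consistency loss (A -> B -> A should return the original code)
    # stands in for the paper's loss functions.
    opt_tr.zero_grad()
    with torch.no_grad():
        z_a, z_b = enc(pts_a), enc(pts_b)
    cycle_loss = ((t_ba(t_ab(z_a)) - z_a) ** 2).mean() + ((t_ab(t_ba(z_b)) - z_b) ** 2).mean()
    cycle_loss.backward()
    opt_tr.step()
```

In this sketch, transferring a shape at test time would amount to encoding it, passing the code through the appropriate translator, and decoding the result; the mixing and shifting applications mentioned in the abstract operate on latent codes in a similar way, without retraining.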
