Results 1 - 6 of 6
1.
IEEE Trans Neural Netw Learn Syst ; 34(11): 9259-9273, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35294365

ABSTRACT

Band selection (BS) effectively reduces the spectral dimension of a hyperspectral image (HSI) by selecting relatively few representative bands, enabling efficient processing in subsequent tasks. Existing unsupervised BS methods based on subspace clustering are built on matrix-based models, where each band is reshaped as a vector. They encode the correlation of the data only in the spectral mode (dimension) and neglect the strong correlations between different modes, i.e., between the spatial modes and the spectral mode. Another issue is that the subspace representation of bands is performed in the raw data space, whose dimension is often excessively high, resulting in less efficient and less robust performance. To address these issues, in this article we propose a tensor-based subspace clustering model for hyperspectral BS. Our model is developed on the well-known Tucker decomposition. The three factor matrices and the core tensor in our model jointly encode the multimode correlations of the HSI, effectively avoiding destruction of the tensor structure and the resulting information loss. In addition, we propose well-motivated heterogeneous regularizations (HRs) on the factor matrices that take into account the important local and global properties of the HSI along its three dimensions, which facilitates learning the intrinsic cluster structure of bands in the low-dimensional subspaces. Instead of learning the correlations of bands in the original domain, as matrix-based models commonly do, our model naturally learns the band correlations in a low-dimensional latent feature space, derived from the projections of the two factor matrices associated with the spatial dimensions, leading to a computationally efficient model. More importantly, the latent feature space is learned in a unified framework. We also develop an efficient algorithm to solve the resulting model. Experimental results on benchmark datasets demonstrate that our model yields improved performance compared with the state of the art.
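The Tucker machinery this abstract builds on (three factor matrices plus a core tensor) can be illustrated with a minimal numpy sketch of the truncated higher-order SVD; the cube sizes and ranks below are illustrative, not the authors' code or data.

```python
import numpy as np

def unfold(T, mode):
    """Unfold tensor T along `mode`: rows index that mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: one orthonormal factor matrix per mode plus a
    core tensor G, so that T is approximated by G x1 U0 x2 U1 x3 U2."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    # Core tensor: project T onto each factor's column space, mode by mode
    G = T
    for mode, U in enumerate(factors):
        G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, mode, 0), axes=1), 0, mode)
    return G, factors

# Hypothetical small HSI cube: 8x8 spatial pixels, 20 spectral bands
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 8, 20))
G, (U0, U1, U2) = hosvd(X, ranks=(4, 4, 5))
print(G.shape)   # core tensor: (4, 4, 5)
print(U2.shape)  # spectral factor: (20, 5) -> low-dimensional band features
```

The rows of the spectral factor `U2` play the role of low-dimensional band representations; the abstract's model clusters bands in such a latent space rather than in the raw pixel space.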

2.
Article in English | MEDLINE | ID: mdl-36327181

ABSTRACT

The tensor nuclear norm (TNN), defined as the sum of the nuclear norms of the tensor's frontal slices in a frequency domain, has proved useful for solving low-rank tensor recovery problems. Existing TNN-based methods use either fixed or data-independent transformations, which may not be optimal for the given tensors. As a consequence, these methods cannot adaptively exploit the potential low-rank structure of tensor data. In this article, we propose a framework called self-adaptive learnable transform (SALT) that learns a transformation matrix from the given tensor. Specifically, SALT aims to learn a lossless transformation that induces a tensor of lower average rank, where the Schatten-p quasi-norm is used as the rank proxy. Then, to make SALT less sensitive to the tensor's orientation, we generalize it to the other dimensions of the tensor (SALTS), learning three self-adaptive transformation matrices simultaneously from the given tensor. SALTS is able to adaptively exploit the potential low-rank structures in all directions. We provide a unified optimization framework based on the alternating direction method of multipliers (ADMM) for the SALTS model and theoretically prove the weak convergence of the proposed algorithm. Experimental results on hyperspectral image (HSI), color video, magnetic resonance imaging (MRI), and COIL-20 datasets show that SALTS is much more accurate in tensor completion than existing methods. The demo code can be found at https://faculty.uestc.edu.cn/gaobin/zh_CN/lwcg/153392/list/index.htm.
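The baseline quantity that SALT generalizes, the TNN under a fixed transform, is easy to state in code. Below is a generic numpy sketch of the standard FFT-based TNN (a common convention divides by the tube length); it is not the authors' adaptive-transform implementation.

```python
import numpy as np

def tnn(T):
    """Tensor nuclear norm: FFT along the third mode, then the sum of
    the nuclear norms (singular values) of all frontal slices."""
    F = np.fft.fft(T, axis=2)
    s = 0.0
    for k in range(F.shape[2]):
        s += np.linalg.svd(F[:, :, k], compute_uv=False).sum()
    return s / F.shape[2]  # common normalization by the tube length n3

# Sanity check: if all frontal slices equal the same matrix A, the FFT
# concentrates everything in slice 0 and tnn(T) equals the nuclear norm of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
T = np.repeat(A[:, :, None], 5, axis=2)
print(np.isclose(tnn(T), np.linalg.svd(A, compute_uv=False).sum()))  # True
```

SALT's point is precisely that the fixed FFT in the first line may be suboptimal; the learned transform replaces it with a data-dependent matrix.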

3.
IEEE Trans Cybern ; 52(12): 13887-13901, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35081033

ABSTRACT

Recently, tensor sparsity modeling has achieved great success in the tensor completion (TC) problem. In real applications, the sparsity of a tensor can be rationally measured by a low-rank tensor decomposition. However, existing methods either have limited power for estimating the rank accurately or have difficulty depicting the hierarchical structure underlying such data ensembles. To address these issues, we propose a parametric tensor sparsity measure that encodes the sparsity of a general tensor by Laplacian scale mixture (LSM) modeling based on a three-layer transform (TLT), with a factor-subspace prior given by the Tucker decomposition. Specifically, the sparsity of the tensor is first transferred into the factor subspace, and factor sparsity in the gradient domain is then used to express the local similarity within each mode. To further refine the sparsity, we adopt LSM under a transform-learning scheme to self-adaptively depict the deeper-layer structured sparsity, in which each transformed sparse matrix is statistically modeled as the product of a Laplacian vector and a hidden positive scalar multiplier. We call the resulting method parametric tensor sparsity delivered by LSM-TLT. Through a progressive transformation operator, we formulate the LSM-TLT model and use it to address the TC problem, and we design an optimization algorithm based on the alternating direction method of multipliers (ADMM) to solve it. Experimental results on RGB images, hyperspectral images (HSIs), and videos demonstrate that the proposed method outperforms the state of the art.
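Two basic ingredients of this construction can be sketched generically: the gradient-domain transform applied to a factor matrix, and the L1 proximal (soft-thresholding) step that a Laplacian prior induces inside an ADMM solver. This is a toy illustration under those assumptions, not the LSM-TLT code.

```python
import numpy as np

def factor_gradient(U):
    """First-order difference along the rows of a factor matrix:
    a simple proxy for 'factor sparsity in the gradient domain'."""
    return np.diff(U, axis=0)

def soft_threshold(X, tau):
    """Proximal operator of tau * ||.||_1 -- the shrinkage update that
    a Laplacian (L1) prior yields on transformed coefficients."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# A smooth factor column has small, nearly constant gradients, so the
# shrinkage step drives them all to zero: smoothness == gradient sparsity.
U = np.linspace(0.0, 1.0, 6).reshape(6, 1)
G = factor_gradient(U)                 # entries all 0.2
print(soft_threshold(G, 0.25))         # all zeros
```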

4.
IEEE Trans Neural Netw Learn Syst ; 33(11): 6916-6930, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34143740

ABSTRACT

Existing methods for tensor completion (TC) have limited ability to characterize low-rank (LR) structures. To capture the complex hierarchical knowledge with implicit sparsity attributes hidden in a tensor, we propose a new multilayer sparsity-based tensor decomposition (MLSTD) for low-rank tensor completion (LRTC). The method encodes the structured sparsity of a tensor through a multiple-layer representation. Specifically, we use the CANDECOMP/PARAFAC (CP) model to decompose a tensor into a sum of rank-1 tensors, and the number of rank-1 components is naturally interpreted as the first-layer sparsity measure. The factor matrices are presumed smooth, since a local piecewise property exists in the within-mode correlation; in the subspace, this local smoothness can be regarded as the second-layer sparsity. To describe the refined structures of factor/subspace sparsity, we introduce a new sparsity insight of subspace smoothness: a self-adaptive low-rank matrix factorization (LRMF) scheme, called the third-layer sparsity. Through this progressive description of the sparsity structure, we formulate the MLSTD model and embed it into the LRTC problem. An effective alternating direction method of multipliers (ADMM) algorithm is then designed for the MLSTD minimization problem. Extensive experiments on RGB images, hyperspectral images (HSIs), and videos substantiate that the proposed LRTC method is superior to state-of-the-art methods.
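The first layer of this hierarchy, a CP decomposition whose number of rank-1 components acts as the sparsity measure, can be sketched with plain alternating least squares in numpy. Sizes, rank, and iteration count below are illustrative assumptions, not the MLSTD implementation.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(X, Y):
    """Column-wise Khatri-Rao product, matching the C-order unfolding above."""
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

def cp_als(T, rank, iters=100, seed=0):
    """Plain CP/ALS: approximate T by a sum of `rank` rank-1 tensors.
    `rank` is the first-layer sparsity measure in the abstract's terms."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, rank)) for n in T.shape)
    for _ in range(iters):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Demo: recover an exactly rank-2 tensor from its CP factors
rng = np.random.default_rng(1)
A0, B0, C0 = rng.standard_normal((5, 2)), rng.standard_normal((6, 2)), rng.standard_normal((7, 2))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T) / np.linalg.norm(T)
print(err)  # small relative error on exactly low-rank data
```

The second and third layers of MLSTD would then regularize the columns of `A`, `B`, `C` (smoothness and a further low-rank factorization), which this sketch omits.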

5.
IEEE Trans Image Process ; 30: 3084-3097, 2021.
Article in English | MEDLINE | ID: mdl-33596175

ABSTRACT

Hyperspectral image super-resolution by fusing a high-resolution multispectral image (HR-MSI) and a low-resolution hyperspectral image (LR-HSI) aims to reconstruct high-resolution spatial-spectral information of the scene. Existing methods, mostly based on spectral unmixing and sparse representation, are often developed from a low-level vision perspective and cannot sufficiently exploit the spatial and spectral priors available from higher-level analysis. To address this issue, this paper proposes a novel HSI super-resolution method that fully considers the spatial/spectral subspace low-rank relationships between the available HR-MSI/LR-HSI and the latent HSI. Specifically, it relies on a new subspace clustering method named structured sparse low-rank representation (SSLRR), which represents the data samples as linear combinations of bases in a given dictionary, where the sparse structure is induced by a low-rank factorization of the affinity matrix. We then apply the SSLRR model to learn low-rank structures along the spatial and spectral domains from the MSI and HSI inputs. Using the learned spatial and spectral low-rank structures, we formulate the HSI super-resolution task as a variational optimization problem, which can be readily solved by the ADMM algorithm. Compared with state-of-the-art hyperspectral super-resolution methods, the proposed method shows better performance on three benchmark datasets in terms of both visual and quantitative evaluation.
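The core of representation models in this family is solving for coefficients C in X ≈ D C with a low-rank penalty on C. A minimal proximal-gradient sketch of that subproblem is shown below, using singular-value thresholding; the dictionary, sizes, and regularization weight are illustrative assumptions, and SSLRR's structured-sparsity terms are omitted.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def low_rank_repr(X, D, lam=0.1, iters=500):
    """Proximal gradient for min_C 0.5 * ||X - D C||_F^2 + lam * ||C||_*,
    the low-rank representation subproblem at the heart of such models."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1/L with L = ||D||_2^2
    C = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(iters):
        C = svt(C - step * (D.T @ (D @ C - X)), lam * step)
    return C

# Demo: samples X generated from a dictionary D with rank-2 coefficients
rng = np.random.default_rng(2)
D = rng.standard_normal((10, 8))
C0 = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 12))
X = D @ C0
C = low_rank_repr(X, D, lam=0.01)
rel = np.linalg.norm(X - D @ C) / np.linalg.norm(X)
print(rel)  # small residual: C reproduces the data through D
```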

6.
IEEE Trans Neural Netw Learn Syst ; 31(11): 4567-4581, 2020 Nov.
Article in English | MEDLINE | ID: mdl-31880566

ABSTRACT

Conventional tensor completion (TC) methods generally assume that the sparsity of tensor-valued data lies in a global subspace. This so-called global sparsity prior is measured by the tensor nuclear norm. Such an assumption is unreliable for recovering low-rank (LR) tensor data, especially when a considerable fraction of the data is missing. To mitigate this weakness, this article presents an enhanced sparsity prior model for LRTC that uses both local and global sparsity information in a latent LR tensor. Specifically, we adopt a doubly weighted strategy for the nuclear norm along each mode to characterize the tensor's global sparsity prior. Unlike traditional tensor-based local sparsity descriptions, the proposed factor gradient sparsity prior in the Tucker decomposition model describes the underlying subspace local smoothness of real-world tensor objects, simultaneously characterizing local piecewise structure over all dimensions. Moreover, the proposed local sparsity prior does not require minimizing the rank of a tensor. Extensive experiments on synthetic data, real-world hyperspectral images, and face modeling data demonstrate that the proposed model outperforms state-of-the-art techniques in terms of both prediction capability and efficiency.
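A weighted nuclear-norm strategy along each mode typically amounts to a reweighted singular-value shrinkage applied to mode unfoldings. The sketch below shows one such shrinkage pass with a simple 1/(sigma + eps) weighting; the weighting rule and sizes are illustrative assumptions, not the paper's doubly weighted scheme.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, shape, mode):
    """Inverse of unfold: restore the original tensor shape."""
    rest = list(shape)
    n = rest.pop(mode)
    return np.moveaxis(M.reshape([n] + rest), 0, mode)

def weighted_svt(M, tau, eps=1e-2):
    """Reweighted singular-value thresholding: large singular values get
    small weights, so the dominant (signal) structure is preserved while
    small (noise) singular values are shrunk harder."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    w = 1.0 / (s + eps)  # one simple reweighting choice
    return U @ np.diag(np.maximum(s - tau * w, 0.0)) @ Vt

# One mode-wise shrinkage pass on a toy tensor (sizes illustrative); an
# actual completion solver would alternate this over modes inside ADMM.
rng = np.random.default_rng(3)
T = rng.standard_normal((6, 7, 8))
T1 = fold(weighted_svt(unfold(T, 1), tau=0.5), T.shape, 1)
print(T1.shape)  # (6, 7, 8)
```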
