Results 1 - 5 of 5
1.
Neural Netw ; 179: 106523, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39053300

ABSTRACT

Community detection in multi-layer networks is a prominent subject in network analysis research. However, most existing community detection techniques face two primary constraints: they are ill-suited to high-dimensional data in multi-layer networks, and they fail to fully leverage auxiliary information among communities to improve detection accuracy. To address these limitations, a novel approach named weighted prior tensor training decomposition (WPTTD) is proposed for multi-layer network community detection. Specifically, WPTTD harnesses tensor feature optimization techniques to effectively manage high-dimensional data in multi-layer networks. It also employs a weighted flattened network to construct prior information for each dimension of the multi-layer network, thereby continuously exploring inter-community connections. To preserve the cohesive structure of communities and to exploit comprehensive information within the multi-layer network for more effective community detection, common community manifold learning (CCML) is integrated into the WPTTD framework to enhance performance. Experimental evaluations on both artificial and real-world networks verify that this algorithm outperforms several mainstream multi-layer network community detection algorithms.
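As a concrete illustration of the data structure involved, a multi-layer network can be stored as a third-order tensor with one adjacency-matrix slice per layer, and averaging the slices yields a simple flattened network. This NumPy sketch uses made-up sizes and densities, and its averaged matrix is only a loose analogue of the paper's weighted flattened prior:

```python
import numpy as np

# Hypothetical sketch: a multi-layer network as a third-order tensor,
# one undirected adjacency-matrix slice per layer. Node/layer counts
# and the edge density are illustrative, not from the WPTTD paper.
n_nodes, n_layers = 6, 3
rng = np.random.default_rng(0)
layers = []
for _ in range(n_layers):
    a = (rng.random((n_nodes, n_nodes)) < 0.4).astype(float)
    a = np.triu(a, 1)            # keep the upper triangle only
    layers.append(a + a.T)       # symmetrize: undirected graph
net = np.stack(layers, axis=2)   # tensor of shape (nodes, nodes, layers)

# A naive "flattened network": average the layer slices into one
# aggregate adjacency matrix usable as prior information.
flat = net.mean(axis=2)
print(net.shape, flat.shape)
```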


Subject(s)
Algorithms; Neural Networks, Computer; Machine Learning; Humans
2.
IEEE Access ; 9: 145334-145362, 2021.
Article in English | MEDLINE | ID: mdl-34824964

ABSTRACT

Functional magnetic resonance imaging (fMRI) is a powerful, noninvasive tool that has significantly contributed to the understanding of the human brain. FMRI data provide a sequence of whole-brain volumes over time and hence are inherently four dimensional (4D). Missing data in fMRI experiments arise from image acquisition limits, susceptibility and motion artifacts, or confounding noise removal. Hence, significant brain regions may be excluded from the data, which can seriously undermine the quality of subsequent analyses due to the significant number of missing voxels. We take advantage of the 4D nature of fMRI data through a tensor representation and introduce an effective algorithm to estimate missing samples in fMRI data. The proposed Riemannian nonlinear spectral conjugate gradient (RSCG) optimization method uses tensor train (TT) decomposition, which enables compact representations and provides efficient linear algebra operations. Exploiting the Riemannian structure boosts algorithm performance significantly, as evidenced by the comparison of RSCG-TT with state-of-the-art stochastic gradient methods, which are developed in the Euclidean space. We thus provide an effective method for estimating missing brain voxels and, more importantly, clearly show that taking the full 4D structure of fMRI data into account provides important gains when compared with three-dimensional (3D) and the most commonly used two-dimensional (2D) representations of fMRI data.
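The TT decomposition underlying this method can be sketched with the textbook TT-SVD algorithm, which factors a 4D array into a chain of small cores via successive SVDs. This is a plain NumPy illustration with arbitrary sizes and an untruncated rank cap, not the paper's Riemannian RSCG-TT solver:

```python
import numpy as np

def tt_svd(x, max_rank):
    """Textbook TT-SVD: decompose tensor x into a list of 3-way TT cores."""
    dims = x.shape
    cores, r_prev = [], 1
    mat = x.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))                      # truncate to TT rank
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        mat = (np.diag(s[:r]) @ vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into a full tensor."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 5, 6, 3))   # stand-in for a small 4D fMRI block
cores = tt_svd(x, max_rank=60)          # cap high enough that nothing is cut
xh = tt_to_full(cores)
print(np.allclose(x, xh))               # exact when no rank is truncated
```

With a smaller `max_rank` the reconstruction becomes a low-rank approximation, which is the regime used for completing missing samples.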

3.
Neural Netw ; 144: 320-333, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34547670

ABSTRACT

Deep neural network (DNN) compression has become a hot topic in deep learning research, since modern DNNs have grown too large to deploy on resource-constrained platforms such as embedded devices. Among the various compression methods, tensor decomposition is a relatively simple and efficient strategy owing to its solid mathematical foundations and regular data structure. The two most common approaches are tensorizing neural weights into higher-order tensors for better decomposition, and directly mapping an efficient tensor structure onto a neural architecture with nonlinear activation functions. However, considerable accuracy loss remains a drawback of the tensorizing approach, especially for convolutional neural networks (CNNs), while studies of the mapping approach are comparatively few and the corresponding compression ratios are modest. Therefore, in this work, by examining multiple types of tensor decomposition, we observe that the tensor train (TT) format, with its specific and efficient sequenced contractions, has the potential to combine the tensorizing and mapping approaches. We then propose a novel nonlinear tensor train (NTT) format, which embeds extra nonlinear activation functions in the sequenced contractions and convolutions on top of the normal TT decomposition, with the proposed TT format connected by convolutions, to compensate for the accuracy loss that the normal TT format incurs. Beyond shrinking the space complexity of the original weight matrices and convolutional kernels, we prove that NTT also affords efficient inference time. Extensive experiments and discussion demonstrate that DNNs compressed in our NTT format can nearly maintain accuracy on at least the MNIST, UCF11 and CIFAR-10 datasets, and that the accuracy loss caused by normal TT can be compensated for significantly on large-scale datasets such as ImageNet.
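The idea of embedding nonlinearities between sequenced TT contractions can be sketched as a small TT-format fully connected layer with a ReLU inserted mid-chain. All shapes, the rank, and the placement of the nonlinearity here are illustrative assumptions, not the paper's NTT architecture:

```python
import numpy as np

# Hypothetical sketch of a two-core TT-format dense layer mapping a
# 64-dim input to a 64-dim output, with a nonlinearity between the
# two core contractions (in the spirit of "nonlinear TT").
rng = np.random.default_rng(0)
m1, m2, n1, n2, r = 8, 8, 8, 8, 4        # mode sizes and TT rank (assumed)
g1 = rng.standard_normal((m1, n1, r)) * 0.1   # first TT core
g2 = rng.standard_normal((r, m2, n2)) * 0.1   # second TT core

def ntt_layer(x_vec):
    x = x_vec.reshape(n1, n2)                 # view input as a matrix
    h = np.einsum('ijr,jl->irl', g1, x)       # contract first input mode
    h = np.maximum(h, 0.0)                    # ReLU inside the chain
    return np.einsum('irl,rkl->ik', h, g2).reshape(m1 * m2)

dense_params = (m1 * m2) * (n1 * n2)          # 4096 for the dense layer
tt_params = g1.size + g2.size                 # 512 for the two TT cores
print(dense_params, tt_params)
y = ntt_layer(rng.standard_normal(n1 * n2))
print(y.shape)
```

Without the `np.maximum` line this reduces to an ordinary linear TT (tensorized) layer; the factorized form is what yields the 8x parameter saving here.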


Subject(s)
Data Compression; Neural Networks, Computer; Algorithms; Physical Phenomena
4.
Neural Netw ; 141: 420-432, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34146969

ABSTRACT

Relying on the rapidly increasing capacity of computing clusters and hardware, convolutional neural networks (CNNs) have been successfully applied in various fields and achieved state-of-the-art results. Despite these exciting developments, training and inference of large-scale CNN models still incur a huge memory cost, which makes them hard to deploy widely on resource-limited portable devices. To address this problem, we establish a training framework for three-dimensional convolutional neural networks (3DCNNs) named QTTNet that combines tensor train (TT) decomposition and data quantization to further shrink the model size and decrease memory and time costs. Through this framework, we can fully exploit the strength of TT in reducing the number of trainable parameters and the advantage of quantization in decreasing the bit-width of data, compressing 3DCNN models greatly with little accuracy degradation. In addition, because all parameters in the inference process, including TT cores, activations, and batch normalizations, are quantized to low bit-widths, the proposed method naturally offers advantages in memory and time cost. Experimental results on compressing 3DCNNs for 3D object and video recognition on the ModelNet40, UCF11, and UCF50 datasets verify the effectiveness of the proposed method. The best compression ratio we obtained is nearly 180× with performance competitive with other state-of-the-art studies. Moreover, the total size in bytes of our QTTNet models on the ModelNet40 and UCF11 datasets can be 1000× lower than that of typical baselines such as MVCNN.
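The quantization side of such a framework can be illustrated with plain uniform 8-bit quantization of a small TT core; the bit-width, scaling, and rounding scheme below are generic assumptions, not QTTNet's exact scheme:

```python
import numpy as np

# Generic uniform symmetric quantization of a tensor to signed 8-bit
# integers plus one float scale (illustrative, not QTTNet's scheme).
def quantize(w, bits=8):
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)   # map max to 127
    q = np.clip(np.round(w / scale),
                -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q.astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
core = rng.standard_normal((4, 3, 4)).astype(np.float32)  # a small TT core
q, s = quantize(core)
err = np.abs(dequantize(q, s) - core).max()
print(q.dtype, err < s)   # worst-case error below one quantization step
```

Storing int8 values plus one scale per core cuts the memory of float32 cores by roughly 4x, which compounds with the TT parameter reduction.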


Subject(s)
Neural Networks, Computer; Data Compression; Imaging, Three-Dimensional
5.
Neural Netw ; 131: 215-230, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32805632

ABSTRACT

Three-dimensional convolutional neural networks (3DCNNs) have been applied in many tasks, e.g., video and 3D point cloud recognition. However, due to the higher dimension of their convolutional kernels, the space complexity of 3DCNNs is generally larger than that of traditional two-dimensional convolutional neural networks (2DCNNs). To miniaturize 3DCNNs for deployment in constrained environments such as embedded devices, neural network compression is a promising approach. In this work, we adopt tensor train (TT) decomposition, a straightforward in situ compression method applied during training, to shrink 3DCNN models. By tensorizing 3D convolutional kernels in TT format, we investigate how to select appropriate TT ranks to achieve higher compression ratios. We also discuss the redundancy of 3D convolutional kernels with respect to compression, the core significance and future directions of this work, and the theoretical computational complexity versus practical execution time of convolution in TT format. In light of multiple comparative experiments on the VIVA challenge, UCF11, UCF101, and ModelNet40 datasets, we conclude that TT decomposition can compress 3DCNNs by around one hundred times without significant accuracy loss, enabling applications in a wide range of real-world scenarios.
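The scale of saving at stake can be seen from simple TT parameter-count arithmetic for a single 3D convolutional kernel; the channel counts and TT ranks below are illustrative choices, not the paper's experimental settings:

```python
# Parameter-count arithmetic for TT-compressing one 3D convolutional
# kernel. Sizes and ranks are illustrative assumptions.
c_in, c_out, k = 64, 64, 3           # channels and a 3x3x3 spatial kernel
dense = c_in * c_out * k ** 3        # uncompressed parameter count

# Tensorize as modes (k^3, c_in, c_out) and keep both TT ranks at r,
# giving cores of shapes (1, k^3, r), (r, c_in, r), (r, c_out, 1):
r = 8
tt = (k ** 3) * r + r * c_in * r + r * c_out
ratio = dense / tt
print(dense, tt, round(ratio, 1))    # roughly a 23x parameter reduction
```

Higher ranks trade compression for accuracy, which is exactly the rank-selection question the abstract investigates.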


Subject(s)
Data Compression/methods; Neural Networks, Computer