Results 1 - 20 of 230
1.
Neural Netw ; 179: 106628, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39168071

ABSTRACT

Dictionary learning is an important sparse representation algorithm that has been widely used in machine learning and artificial intelligence. However, for the massive data of the big data era, classical dictionary learning algorithms are computationally expensive and can even be infeasible. To overcome this difficulty, we propose new dictionary learning methods based on randomized algorithms. The contributions of this work are as follows. First, we observe that the dictionary matrix is often numerically low-rank. Based on this property, we apply randomized singular value decomposition (RSVD) to the dictionary matrix and propose a randomized algorithm for linear dictionary learning. Compared with the classical K-SVD algorithm, an advantage is that all the elements of the dictionary matrix can be updated simultaneously. Second, to the best of our knowledge, there are few theoretical results on why the matrix computation problems involved in dictionary learning can be solved inexactly. To fill this gap, we show the rationality of this randomized algorithm with inexact solving from a matrix perturbation analysis point of view. Third, based on the numerically low-rank property and a Nyström approximation of the kernel matrix, we propose a randomized kernel dictionary learning algorithm and bound the distance between the exact solution and the computed solution to show its effectiveness. Fourth, we propose an efficient scheme for the testing stage of kernel dictionary learning. With this strategy, there is no need to form or store kernel matrices explicitly in either the training or the testing stage. Comprehensive numerical experiments on several real-world data sets demonstrate the rationality of our strategies and show that the proposed algorithms are much more efficient than some state-of-the-art dictionary learning algorithms.
The MATLAB codes of the proposed algorithms are publicly available from https://github.com/Jiali-yang/RALDL_RAKDL.


Subject(s)
Algorithms; Machine Learning; Artificial Intelligence; Humans
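The first contribution above hinges on the dictionary matrix being numerically low-rank, so a randomized SVD captures it cheaply. As a generic illustration only (a textbook Halko-style sketch, not the authors' released MATLAB code; all names and sizes below are hypothetical):

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    """Randomized SVD: approximate the top-`rank` singular triplets of A
    by sketching its column space with a Gaussian test matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + oversample, n)
    Omega = rng.standard_normal((n, k))     # random sketch
    Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis for range(A)
    B = Q.T @ A                             # small k x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

# Demo: a numerically low-rank "dictionary" matrix is captured almost exactly.
rng = np.random.default_rng(1)
D = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300))  # rank 5
U, s, Vt = randomized_svd(D, rank=5)
err = np.linalg.norm(D - (U * s) @ Vt) / np.linalg.norm(D)
print(err < 1e-8)
```

Because the sketch dimension exceeds the true rank, the rank-5 demo matrix is recovered to machine precision; on merely *numerically* low-rank dictionaries the error would instead decay with the discarded singular values.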
2.
Sci Rep ; 14(1): 17122, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39054308

ABSTRACT

Images captured in low-light environments are severely degraded due to insufficient light, which causes a performance decline in both commercial and consumer devices. One of the major challenges in low-light enhancement tasks is how to balance light intensity, detail presentation, and colour integrity. This study presents a novel image enhancement framework using detail-based dictionary learning and a camera response model (CRM). It combines dictionary learning with edge-aware filter-based detail enhancement, assuming that each small detail patch can be sparsely represented in an over-complete detail dictionary learned from many training detail patches via iterative ℓ1-norm minimization. Removing the visibility limit of the training detail patches in the enhanced detail patches lets dictionary learning address several concerns that arise during detail enhancement. We apply illumination estimation schemes to the selected CRM and the resulting exposure ratio maps, which recover a novel enhanced detail layer and generate a high-quality output with detailed visibility given a training set of higher-quality images. The exposure ratio of each pixel is estimated with illumination estimation techniques, and the selected camera response model adjusts each pixel to the desired exposure based on the computed exposure ratio map. Extensive experimental analysis shows that the proposed method obtains enhanced results with acceptable distortion. The approach can be generalised to numerous similar problems, such as image enhancement for remote sensing, underwater applications, medical imaging, and foggy or dusty conditions.
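The iterative ℓ1-norm minimization mentioned above is typically realized with proximal gradient steps. A minimal sketch of one such solver (plain ISTA on a random over-complete dictionary; not the paper's implementation, and all names and values are illustrative):

```python
import numpy as np

def ista(D, x, lam=0.05, n_iter=500):
    """ISTA: solve min_a 0.5*||x - D a||^2 + lam*||a||_1 by gradient steps
    followed by soft-thresholding (the proximal operator of the l1 norm)."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L       # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return a

# Demo: a 3-sparse "detail patch" code is recovered from its synthesis.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
a_true = np.zeros(128)
a_true[[3, 40, 99]] = [1.5, -2.0, 1.0]
x = D @ a_true
a = ista(D, x)
top = set(np.argsort(-np.abs(a))[:3].tolist())
print(top == {3, 40, 99})
```

The largest recovered coefficients land on the true support; the ℓ1 penalty slightly shrinks their magnitudes, which a debiasing least-squares refit on the support would undo.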

3.
Neural Netw ; 178: 106434, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38941739

ABSTRACT

Low-rank representation (LRR) is a classic subspace clustering (SC) algorithm, and many LRR-based methods have been proposed. Generally, LRR-based methods use denoised data as dictionaries for data reconstruction. However, the dictionaries used in LRR-based algorithms are fixed, leading to poor clustering performance. In addition, most of these methods assume that the input data are linearly correlated, whereas in practice data are mostly nonlinearly correlated. To address these problems, we propose a novel adaptive kernel dictionary-based LRR (AKDLRR) method for SC. Specifically, to explore nonlinear information, the given data are mapped to the Hilbert space via the kernel trick. The dictionary in AKDLRR is not fixed; it is adaptively learned from the data in the kernel space, making AKDLRR robust to noise and yielding good clustering performance. To solve the AKDLRR model, an efficient procedure based on an alternating optimization strategy is proposed. In addition, a theoretical analysis of the convergence of AKDLRR is presented, which reveals that AKDLRR can converge in at most three iterations under certain conditions. The experimental results show that AKDLRR achieves the best clustering performance and excellent speed in comparison with other algorithms.


Subject(s)
Algorithms; Cluster Analysis; Nonlinear Dynamics
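Solvers for low-rank representation models of this kind typically alternate with a singular value thresholding step, the proximal operator of the nuclear norm. A generic sketch of that one step (not the AKDLRR algorithm itself; the demo values are hypothetical):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink the singular values of M by tau,
    the proximal operator of the nuclear norm used in many LRR solvers."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Demo: thresholding a noisy rank-3 matrix suppresses the noise directions.
rng = np.random.default_rng(0)
low = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))  # rank 3
noisy = low + 0.01 * rng.standard_normal((30, 30))
Z = svt(noisy, tau=1.0)
print(int(np.linalg.matrix_rank(Z, tol=1e-6)))
```

With a threshold between the noise singular values and the signal singular values, exactly the three signal directions survive.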
4.
Med Biol Eng Comput ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38861055

ABSTRACT

Blindness is preventable through early detection of ocular abnormalities. Computer-aided diagnosis of ocular abnormalities is built by analyzing retinal imaging modalities such as Color Fundus Photography (CFP). This research proposes multi-label detection of 28 ocular abnormalities, both frequent and rare, from a single CFP using transformer-based semantic dictionary learning. Rare labels are usually ignored because of a lack of features; we address this by adding a co-occurrence dependency factor to the model, derived from the linguistic features of the labels. The model learns the relation between spatial features and linguistic features, represented as a semantic dictionary. The proposed method treats the semantic dictionary as one of the most important parts of the model: it acts as the query, while the spatial features serve as the key and value. The experiments are conducted on the RFMiD dataset. The results show that the proposed method ranks in the top 30% on the RFMiD challenge evaluation set. They also show that treating the semantic dictionary as a strong factor in detection improves performance compared with treating it as a weak factor.

5.
Quant Imaging Med Surg ; 14(4): 2884-2903, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38617145

ABSTRACT

Background: Multi-echo chemical-shift-encoded magnetic resonance imaging (MRI) has been widely used for fat quantification and fat suppression in clinical liver examinations. Clinical liver water-fat imaging typically requires breath-hold acquisitions, while free-breathing acquisition is more comfortable for patients. However, free-breathing acquisitions can take up to several minutes. The purpose of this study is to accelerate four-dimensional free-breathing whole-liver water-fat MRI by jointly using high-dimensional deep dictionary learning and model-guided (MG) reconstruction. Methods: A high-dimensional model-guided deep dictionary learning (HMDDL) algorithm is proposed for the acceleration. The HMDDL combines the power of a high-dimensional dictionary learning neural network (hdDLNN) with that of the chemical shift model. The neural network utilizes the prior information of the dynamic multi-echo data in the spatial, respiratory-motion, and echo dimensions to exploit the features of the images. The chemical shift model is used to guide the reconstruction of field maps, R2* maps, water images, and fat images. Data acquired from ten healthy subjects and ten subjects with clinically diagnosed nonalcoholic fatty liver disease (NAFLD) were selected for training, data from one healthy subject and two NAFLD subjects for validation, and data from five healthy subjects and five NAFLD subjects for testing. A three-dimensional (3D) blipped golden-angle stack-of-stars multi-gradient-echo pulse sequence was designed to accelerate the data acquisition. The retrospectively undersampled data were used for training, and the prospectively undersampled data were used for testing. The performance of the HMDDL was evaluated in comparison with the compressed sensing-based water-fat separation (CS-WF) algorithm and a parallel non-Cartesian recurrent neural network (PNCRNN) algorithm.
Results: Four-dimensional whole-liver water-fat images with ten motion states are demonstrated at several acceleration factors (R). In comparison with CS-WF and PNCRNN, the HMDDL improved the mean peak signal-to-noise ratio (PSNR) of images by 9.93 and 2.20 dB, respectively, and the mean structural similarity (SSIM) by 0.058 and 0.009, respectively, at R=10. A paired t-test shows no significant difference between HMDDL and ground truth for proton-density fat fraction (PDFF) and R2* values at R up to 10. Conclusions: The proposed HMDDL exploits features of water and fat images from the highly undersampled multi-echo data along the spatial, respiratory-motion, and echo dimensions to improve the performance of accelerated four-dimensional (4D) free-breathing water-fat imaging.

6.
Proc Natl Acad Sci U S A ; 121(11): e2314697121, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38451944

ABSTRACT

We propose a method for imaging in scattering media when large and diverse datasets are available. It has two steps. Using a dictionary learning algorithm, the first step estimates the true Green's function vectors as columns of an unordered sensing matrix. The array data come from many sparse sets of sources whose locations and strengths are unknown. In the second step, the columns of the estimated sensing matrix are ordered for imaging using the multidimensional scaling algorithm, with connectivity information derived from cross-correlations of its columns, as in time reversal. For these two steps to work together, we need data from large arrays of receivers so that the columns of the sensing matrix are incoherent for the first step, as well as from sub-arrays so that they are coherent enough to obtain the connectivity needed in the second step. Through simulation experiments, we show that the proposed method provides images in complex media whose resolution matches that of a homogeneous medium.
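The second step's ordering of columns rests on multidimensional scaling. A minimal sketch of classical MDS from exact pairwise distances (illustrative only, not the authors' implementation; the demo points are hypothetical):

```python
import numpy as np

def classical_mds(Dist, dim=2):
    """Classical MDS: embed points in `dim` dimensions from pairwise
    distances via double centering and the top eigenvectors."""
    n = Dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (Dist ** 2) @ J          # Gram matrix of centered points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Demo: collinear points are recovered up to translation and reflection.
pts = np.array([[0.0], [1.0], [2.0], [4.0]])
Dist = np.abs(pts - pts.T)
X = classical_mds(Dist, dim=1)
rec = np.abs(X[:, 0] - X[0, 0])             # distances from the first point
print(np.allclose(rec, [0, 1, 2, 4]))
```

In the paper's setting the distances would come from cross-correlations of sensing-matrix columns rather than true geometric distances, but the embedding step is the same.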

7.
ISA Trans ; 147: 55-70, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38309975

ABSTRACT

As a vital mechanical sub-component, rolling bearings require health monitoring. Vibration signal analysis is a commonly used approach for bearing fault diagnosis. Nevertheless, the collected vibration signals are inevitably contaminated by noise, which negatively affects fault diagnosis, so denoising is an essential step in vibration signal processing. Traditional denoising methods require expert knowledge to select hyperparameters, while data-driven methods based on deep learning lack interpretability and a clear justification for the architecture of a "black-box" deep neural network. One way to design neural networks systematically is to unroll algorithms, such as learned iterative soft-thresholding (LISTA). In this paper, the multi-layer convolutional LISTA (ML-CLISTA) algorithm is derived by embedding a designed multi-layer sparse coder into the convolutional extension of LISTA. The multi-layer convolutional dictionary learning (ML-CDL) network for mechanical vibration signal denoising is then proposed by unrolling ML-CLISTA. By combining the ML-CDL network with a classifier, the proposed denoising method is applied to explainable rolling bearing fault diagnosis. Experiments on two bearing datasets show the superiority of the ML-CDL network over other typical denoising methods.
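LISTA keeps the algebraic form of ISTA but treats its matrices and threshold as learnable layer parameters. A sketch of the (untrained) forward pass, initialized so it reproduces plain ISTA (illustrative; the paper's ML-CLISTA adds multi-layer convolutional structure not shown here):

```python
import numpy as np

def soft(z, theta):
    """Elementwise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def lista_forward(x, We, S, theta, n_layers):
    """Unrolled LISTA: each layer computes a <- soft(We @ x + S @ a, theta).
    With We = D.T/L, S = I - D.T@D/L, theta = lam/L this is exactly ISTA;
    training would instead learn We, S, and theta from data."""
    a = soft(We @ x, theta)
    for _ in range(n_layers - 1):
        a = soft(We @ x + S @ a, theta)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)
L = np.linalg.norm(D, 2) ** 2
lam = 0.05
We, S, theta = D.T / L, np.eye(64) - D.T @ D / L, lam / L
a0 = np.zeros(64)
a0[[5, 20]] = [2.0, -1.5]
x = D @ a0                                 # noiseless 2-sparse signal
a = lista_forward(x, We, S, theta, n_layers=300)
top = set(np.argsort(-np.abs(a))[:2].tolist())
print(top == {5, 20})
```

The appeal of unrolling is that a *trained* network of a few such layers can match what the ISTA initialization above needs hundreds of iterations to achieve.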

8.
Comput Methods Programs Biomed ; 244: 108010, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38199137

ABSTRACT

Purpose: Numerous techniques based on deep learning have been utilized in sparse view computed tomography (CT) imaging. Nevertheless, the majority are built on opaque state-of-the-art convolutional neural networks (CNNs) and lack interpretability. Moreover, CNNs tend to focus on local receptive fields and neglect nonlocal self-similarity prior information. Obtaining diagnostically valuable images from sparsely sampled projections is a challenging, ill-posed task. Method: To address this issue, we propose a unique and interpretable model named DCDL-GS for sparse view CT imaging. This model relies on a network comprised of convolutional dictionary learning and a nonlocal group sparse prior. To enhance the quality of image reconstruction, we utilize a neural network in conjunction with a statistical iterative reconstruction framework and perform a set number of iterations. Inspired by group sparsity priors, we adopt a novel group thresholding operation to improve the feature representation and constraint ability and obtain a theoretical interpretation. Furthermore, our DCDL-GS model incorporates filtered backprojection (FBP) reconstruction, fast sliding-window nonlocal self-similarity operations, and a lightweight and interpretable convolutional dictionary learning network to enhance the applicability of the model. Results: The efficiency of our proposed DCDL-GS model in preserving edges and recovering features is demonstrated by the visual results obtained on the LDCT-P and UIH datasets. Compared with the most advanced techniques, the quantitative results are enhanced, with gains of 0.6-0.8 dB in peak signal-to-noise ratio (PSNR), 0.005-0.01 in structural similarity index measure (SSIM), and 1-1.3 in regulated Fréchet inception distance (rFID) on the test dataset.
The quantitative results also show the effectiveness of our proposed deep convolution iterative reconstruction module and nonlocal group sparse prior. Conclusion: In this paper, we create a consolidated and enhanced mathematical model by integrating projection data and prior knowledge of images into a deep iterative model. The model is more practical and interpretable than existing approaches, and the experimental results show that it performs well in comparison with others.


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Neural Networks, Computer; Signal-To-Noise Ratio; Algorithms; Phantoms, Imaging
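The group thresholding operation mentioned in the Method section is, in its simplest form, the proximal operator of a group (ℓ2,1) sparsity penalty. A minimal sketch with rows as coefficient groups (a generic operator, not the paper's exact variant):

```python
import numpy as np

def group_soft_threshold(Z, lam):
    """Group soft-thresholding: scale each row of Z by
    max(1 - lam/||row||_2, 0), so whole groups are kept or zeroed together."""
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return scale * Z

Z = np.array([[3.0, 4.0],     # norm 5   -> scaled by (1 - 1/5) = 0.8
              [0.3, 0.4]])    # norm 0.5 -> below threshold, zeroed
out = group_soft_threshold(Z, lam=1.0)
print(np.allclose(out, [[2.4, 3.2], [0.0, 0.0]]))
```

Unlike elementwise soft-thresholding, the decision to keep or discard is made per group, which is what lets such priors capture structured (nonlocal) sparsity.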
9.
J Heart Lung Transplant ; 43(3): 394-402, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37778525

ABSTRACT

BACKGROUND: Assessment and selection of donor lungs remain largely subjective and experience based. Criteria to accept or decline lungs are poorly standardized and have not kept pace with the current donor pool. Using ex vivo computed tomography (CT) images, we investigated the use of a CT-based machine learning algorithm for screening donor lungs before transplantation. METHODS: Clinical measures and ex situ CT scans were collected from 100 cases as part of a prospective clinical trial. Following procurement, donor lungs were inflated, placed on ice according to routine clinical practice, and imaged using a clinical CT scanner before transplantation while stored in the icebox. We trained and tested a supervised machine learning method called dictionary learning, which uses CT scans to learn image patterns and features pertaining to each class for a classification task. The results were evaluated against donor and recipient clinical measures. RESULTS: Of the 100 lung pairs donated, 70 were considered acceptable for transplantation (based on standard clinical assessment) before CT screening and were consequently implanted. The remaining 30 pairs were screened but not transplanted. Our machine learning algorithm was able to detect pulmonary abnormalities on the CT scans. Among the patients who received donor lungs, our algorithm identified recipients who had extended stays in the intensive care unit and were at 19 times higher risk of developing chronic lung allograft dysfunction within 2 years posttransplant. CONCLUSIONS: We have created a strategy to screen donor lungs ex vivo using a CT-based machine learning algorithm. As the use of suboptimal donor lungs rises, it is important to have objective techniques in place that will assist physicians in accurately screening donor lungs to identify recipients most at risk of posttransplant complications.


Subject(s)
Lung Transplantation; Tissue Donors; Humans; Lung/diagnostic imaging; Machine Learning; Prospective Studies; Tomography, X-Ray Computed; Clinical Trials as Topic
10.
Comput Biol Med ; 168: 107763, 2024 01.
Article in English | MEDLINE | ID: mdl-38056208

ABSTRACT

BACKGROUND: Aortic stenosis (AS) is the most prevalent type of valvular heart disease (VHD), traditionally diagnosed using the echocardiogram or phonocardiogram. The seismocardiogram (SCG), an emerging wearable cardiac monitoring modality, has been shown to be feasible for non-invasive and cost-effective AS diagnosis. However, SCG waveforms acquired from patients with heart disease are typically weak, making them more susceptible to noise contamination. While most related research focuses on motion artifacts, sensor noise and quantization noise have been largely overlooked. These noises pose additional challenges for extracting features from the SCG and especially impede accurate AS classification. METHOD: To address this challenge, we present a convolutional dictionary learning-based method. Based on sparse modeling of the SCG, the proposed method generates a personalized adaptive-size dictionary from noisy measurements. The dictionary is used for sparse coding of the noisy SCG into a transform domain; reconstruction from this domain removes the noise while preserving the individual waveform pattern of the SCG. RESULTS: Using two self-collected SCG datasets, we established optimal dictionary learning parameters and validated the denoising performance. The proposed method then denoised SCG recordings from 50 subjects (25 AS and 25 non-AS). Leave-one-subject-out cross-validation (LOOCV) was applied to five machine learning classifiers. Among them, a bi-layer neural network achieved a moderate accuracy of 90.2%, a 13.8% improvement attributable to the denoising. CONCLUSIONS: The proposed sparsity-based denoising technique effectively removes stochastic sensor noise and quantization noise from the SCG, consequently improving AS classification performance. This approach shows promise for overcoming the instrumentation constraints of SCG-based diagnosis.


Subject(s)
Algorithms; Aortic Valve Stenosis; Humans; Neural Networks, Computer; Machine Learning; Aortic Valve Stenosis/diagnostic imaging; Artifacts
11.
Brain Struct Funct ; 229(1): 161-181, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38012283

ABSTRACT

The analysis and understanding of brain characteristics often require considering region-level information rather than voxel-sampled data. Subject-specific parcellations have been put forward in recent years, as they can adapt to individual brain organization and thus offer more accurate individual summaries than standard atlases. However, the price to pay for adaptability is the lack of group-level consistency of the data representation. Here, we investigate whether the good representations brought by individualized models are merely an effect of circular analysis, in which individual brain features are better represented by subject-specific summaries, or whether this carries over to new individuals, i.e., whether one can actually adapt an existing parcellation to new individuals and still obtain good summaries in these individuals. For this, we adapt a dictionary-learning method to produce brain parcellations. We use it on a deep-phenotyping dataset to assess quantitatively the patterns of activity obtained under naturalistic and controlled-task-based settings. We show that the benefits of individual parcellations are substantial, but that they vary a lot across brain systems.


Subject(s)
Benchmarking; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Brain; Brain Mapping/methods; Adaptation, Physiological
12.
Sensors (Basel) ; 23(22)2023 Nov 14.
Article in English | MEDLINE | ID: mdl-38005564

ABSTRACT

(1) Background: The ability to recognize identities is an essential component of security. Electrocardiogram (ECG) signals have gained popularity for identity recognition because of their universal, unique, stable, and measurable characteristics. To ensure accurate identification from ECG signals, this paper proposes an approach involving mixed feature sampling, sparse representation, and recognition. (2) Methods: This paper introduces a new method of identifying individuals through their ECG signals. The technique combines the extraction of fixed ECG features with specific frequency features to improve the accuracy of ECG identity recognition. The wavelet transform is used to extract the frequency bands that contain personal information features from the ECG signals. These bands are reconstructed, and single R-peak localization determines the ECG window. The signals are segmented and standardized based on the located windows. A sparse dictionary is created from the standardized ECG signals, and the K-SVD algorithm, together with Orthogonal Matching Pursuit (OMP), is employed to project ECG target signals into a sparse vector-matrix representation. To extract the final representation of the target signals for identification, the sparse coefficient vectors are max-pooled. For recognition, the co-dimensional bundle search method is used. (3) Results: This paper utilizes the publicly available European ST-T database. Specifically, ECG signals from 20, 50, and 70 subjects are selected, each with 30 testing segments. The proposed method achieved recognition rates of 99.14%, 99.09%, and 99.05%, respectively. (4) Conclusion: The experiments indicate that the proposed method can accurately capture, represent, and identify ECG signals.


Subject(s)
Biometric Identification; Humans; Biometric Identification/methods; Algorithms; Electrocardiography/methods; Wavelet Analysis; Databases, Factual
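Sparse coding against a learned dictionary, as in the K-SVD pipeline above, is commonly done with Orthogonal Matching Pursuit. A minimal OMP sketch on a synthetic dictionary (illustrative only; names and sizes are hypothetical, and this is not the paper's code):

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedily pick the atom most correlated
    with the residual, then re-fit by least squares on the chosen support."""
    residual, support = x.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    a = np.zeros(D.shape[1])
    a[support] = coef
    return a

# Demo: a noiseless 3-sparse code is recovered exactly.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
a_true = np.zeros(128)
a_true[[7, 42, 77]] = [1.5, -2.0, 1.0]
x = D @ a_true
a = omp(D, x, k=3)
print(np.allclose(a, a_true))
```

Unlike ℓ1-based coding, OMP refits by least squares at every step, so in the noiseless demo the recovered coefficients are unbiased.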
13.
Bioengineering (Basel) ; 10(9)2023 Aug 26.
Article in English | MEDLINE | ID: mdl-37760114

ABSTRACT

Magnetic Resonance Imaging (MRI) is an essential medical imaging modality that provides excellent soft-tissue contrast and high-resolution images of the human body, allowing us to understand detailed information on morphology, structural integrity, and physiologic processes. However, MRI exams usually require lengthy acquisition times. Methods such as parallel MRI and Compressive Sensing (CS) have significantly reduced the MRI acquisition time by acquiring less data through undersampling k-space. The state-of-the-art of fast MRI has recently been redefined by integrating Deep Learning (DL) models with these undersampled approaches. This Systematic Literature Review (SLR) comprehensively analyzes deep MRI reconstruction models, emphasizing the key elements of recently proposed methods and highlighting their strengths and weaknesses. This SLR involves searching and selecting relevant studies from various databases, including Web of Science and Scopus, followed by a rigorous screening and data extraction process using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. It focuses on various techniques, such as residual learning, image representation using encoders and decoders, data-consistency layers, unrolled networks, learned activations, attention modules, plug-and-play priors, diffusion models, and Bayesian methods. This SLR also discusses the use of loss functions and training with adversarial networks to enhance deep MRI reconstruction methods. Moreover, we explore various MRI reconstruction applications, including non-Cartesian reconstruction, super-resolution, dynamic MRI, joint learning of reconstruction with coil sensitivity and sampling, quantitative mapping, and MR fingerprinting. This paper also addresses research questions, provides insights for future directions, and emphasizes robust generalization and artifact handling. 
Therefore, this SLR serves as a valuable resource for advancing fast MRI, guiding research and development efforts of MRI reconstruction for better image quality and faster data acquisition.

14.
J Xray Sci Technol ; 31(6): 1165-1187, 2023.
Article in English | MEDLINE | ID: mdl-37694333

ABSTRACT

BACKGROUND: Recently, one promising approach to suppressing noise/artifacts in low-dose CT (LDCT) images is the CNN-based approach, which learns the mapping function from LDCT to normal-dose CT (NDCT). However, most CNN-based methods are purely data-driven, thus lacking sufficient interpretability and often losing details. OBJECTIVE: To solve this problem, we propose a deep convolutional dictionary learning method for LDCT denoising, in which a novel convolutional dictionary learning model with adaptive window (CDL-AW) is designed, and a corresponding enhancement-based convolutional dictionary learning network (ECDAW-Net) is constructed to unfold the CDL-AW model iteratively using the proximal gradient descent technique. METHODS: In detail, the adaptive window-constrained convolutional dictionary atom is proposed to alleviate spectrum leakage caused by data truncation during convolution. Furthermore, in the ECDAW-Net, a multi-scale edge extraction module consisting of LoG and Sobel convolution layers is inserted in the unfolding iteration to supplement lost textures and details. Additionally, to further improve detail retention, the ECDAW-Net is trained with a compound loss combining the pixel-level MSE loss and a proposed patch-level loss, which helps retain richer structural information. RESULTS: Applying ECDAW-Net to the Mayo dataset, we obtained the highest peak signal-to-noise ratio (33.94) and sub-optimal structural similarity (0.92). CONCLUSIONS: Compared with some state-of-the-art methods, the interpretable ECDAW-Net performs well in suppressing noise/artifacts and preserving tissue textures.


Subject(s)
Tomography, X-Ray Computed; Signal-To-Noise Ratio
15.
Magn Reson Med ; 90(6): 2443-2453, 2023 12.
Article in English | MEDLINE | ID: mdl-37466029

ABSTRACT

PURPOSE: Temporal resolution of time-lapse MRI to track individual iron-labeled cells is limited by the required data-acquisition time to fill k-space and to reach sufficient SNR. Although motion of slowly patrolling monocytes can be resolved, detection of fast-moving immune cells requires improved acquisition and reconstruction strategies. THEORY AND METHODS: For accelerated MRI cell tracking, a Cartesian sampling scheme was designed, in which the fully sampled and undersampled k-space data for different acceleration factors were acquired simultaneously, and multiple undersampling ratios could be chosen retrospectively. Compressed-sensing reconstruction was applied using dictionary learning and low-rank constraints. Detection of iron-labeled monocytes was evaluated with simulations, rotating phantom experiments and in vivo mouse brain measurements at 9.4 T. RESULTS: Fully sampled and 2.4-times and 4.8-times accelerated images were reconstructed and had sufficient contrast-to-noise ratio (CNR) for single cells to be resolved and followed dynamically. The phantom experiments showed an improvement in CNR of 6.1% per µm/s in the 4.8-times undersampled images. Geometric distortion of cells caused by motion was visibly reduced in the accelerated images, which enabled detection of moving cells with velocities of up to 7.0 µm/s. In vivo, additional cells were resolved in the accelerated images due to the improved temporal resolution. CONCLUSION: The easy-to-implement flexible Cartesian sampling scheme with compressed-sensing reconstruction permits simultaneous acquisition of both fully sampled and high temporal resolution images. The CNR of moving cells is effectively improved, enabling the recovery of high velocity cells with sufficient contrast at virtually no cost.


Subject(s)
Cell Tracking; Magnetic Resonance Imaging; Animals; Mice; Retrospective Studies; Time-Lapse Imaging; Magnetic Resonance Imaging/methods; Motion (Physics); Image Processing, Computer-Assisted/methods
16.
Front Neurosci ; 17: 1199150, 2023.
Article in English | MEDLINE | ID: mdl-37397459

ABSTRACT

One of the human brain's remarkable traits lies in its capacity to dynamically coordinate the activities of multiple brain regions or networks, adapting to an externally changing environment. Studying dynamic functional brain networks (DFNs) and their role in perception, assessment, and action can significantly advance our comprehension of how the brain responds to patterns of sensory input. Movies provide a valuable tool for studying DFNs, as they offer a naturalistic paradigm that can evoke complex cognitive and emotional experiences through rich multimodal and dynamic stimuli. However, most previous research on DFNs has concentrated on the resting-state paradigm, investigating the topological structure of temporal dynamic brain networks generated via chosen templates. The dynamic spatial configurations of the functional networks elicited by naturalistic stimuli demand further exploration. In this study, we employed an unsupervised dictionary learning and sparse coding method combined with a sliding-window strategy to map and quantify the dynamic spatial patterns of functional brain networks (FBNs) present in naturalistic functional magnetic resonance imaging (NfMRI) data, and further evaluated whether the temporal dynamics of distinct FBNs are aligned to the sensory, cognitive, and affective processes involved in the subjective perception of the movie. The results revealed that movie viewing can evoke complex FBNs that vary in time with the movie storylines and are correlated with the movie annotations and the subjective ratings of the viewing experience. The reliability of the DFNs was also validated by assessing the intra-class correlation coefficient (ICC) between two scanning sessions under the same naturalistic paradigm with a three-month interval.
Our findings offer novel insight into the dynamic properties of FBNs in response to naturalistic stimuli, which could deepen our understanding of the neural mechanisms underlying the brain's dynamic changes during the processing of visual and auditory stimuli.
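The sliding-window strategy mentioned above can be illustrated with windowed correlations between two signals whose coupling changes over time (a toy stand-in for the paper's sparse-coding pipeline; all values are hypothetical):

```python
import numpy as np

def sliding_window_corr(ts, win, step=1):
    """Sliding-window correlation: for each window position, the pairwise
    correlation matrix of the time series (rows = time, cols = signals)."""
    mats = []
    for s in range(0, ts.shape[0] - win + 1, step):
        mats.append(np.corrcoef(ts[s:s + win].T))
    return np.array(mats)

# Demo: two signals whose coupling flips sign halfway through the scan.
rng = np.random.default_rng(0)
t = np.arange(120)
a = np.sin(t / 5.0)
b = np.where(t < 60, a, -a) + 0.05 * rng.standard_normal(120)
mats = sliding_window_corr(np.column_stack([a, b]), win=30)
print(mats[0, 0, 1] > 0.9 and mats[-1, 0, 1] < -0.9)
```

A static (whole-scan) correlation would average the two regimes toward zero; the windowed estimate is what exposes the time-varying coupling that dynamic-network analyses quantify.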

17.
Neural Netw ; 165: 298-309, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37315486

ABSTRACT

Dictionary learning has found broad applications in signal and image processing. By adding constraints to the traditional dictionary learning model, dictionaries with discriminative capability can be obtained for image classification tasks. The recently proposed Discriminative Convolutional Analysis Dictionary Learning (DCADL) algorithm has achieved promising results with low computational complexity. However, DCADL is still limited in classification performance because of the lack of constraints on dictionary structures. To solve this problem, this study introduces an adaptively ordinal locality preserving (AOLP) term into the original DCADL model to further improve classification performance. With the AOLP term, the distance ranking in the neighborhood of each atom can be preserved, which improves the discrimination of the coding coefficients. In addition, a linear classifier for the coding coefficients is trained along with the dictionary. A new method is designed specifically to solve the optimization problem corresponding to the proposed model. Experiments on several commonly used datasets show the promising classification performance and computational efficiency of the proposed algorithm.


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Learning , Discriminative Learning
18.
Diagnostics (Basel) ; 13(8)2023 Apr 12.
Article in English | MEDLINE | ID: mdl-37189496

ABSTRACT

Imaging data fusion is becoming a bottleneck in clinical applications and translational research in medical imaging. This study incorporates a novel multimodality medical image fusion technique in the shearlet domain. The proposed method uses the non-subsampled shearlet transform (NSST) to extract both low- and high-frequency image components. A novel approach is proposed for fusing the low-frequency components using a modified sum-modified Laplacian (MSML)-based clustered dictionary learning technique, while directed contrast is used to fuse the high-frequency coefficients in the NSST domain. The fused multimodal medical image is then reconstructed with the inverse NSST. Compared to state-of-the-art fusion techniques, the proposed method provides superior edge preservation. According to performance metrics such as standard deviation and mutual information, the proposed method is approximately 10% better than existing methods. Additionally, it produces excellent visual results with respect to edge preservation, texture preservation, and information content.
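The band-wise fusion logic, an activity-based rule for the low-frequency band and a contrast-style max-absolute rule for the high-frequency bands, can be sketched without an NSST implementation. Below, a Gaussian low-pass stands in for the NSST low-frequency band, the residual for the high-frequency bands, and local Laplacian energy for the MSML activity measure; all of these are crude stand-ins, not the paper's transforms.

```python
import numpy as np
from scipy import ndimage

def fuse(a, b, sigma=2.0):
    """Toy two-band fusion mimicking the low/high split of a shearlet pipeline."""
    low_a = ndimage.gaussian_filter(a, sigma)
    low_b = ndimage.gaussian_filter(b, sigma)
    high_a, high_b = a - low_a, b - low_b
    # Low band: keep the source with larger local Laplacian energy
    # (a rough stand-in for the sum-modified Laplacian activity measure).
    act_a = ndimage.gaussian_filter(np.abs(ndimage.laplace(a)), sigma)
    act_b = ndimage.gaussian_filter(np.abs(ndimage.laplace(b)), sigma)
    low = np.where(act_a >= act_b, low_a, low_b)
    # High bands: max-absolute (contrast-style) coefficient selection.
    high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    return low + high  # stand-in for the inverse transform

rng = np.random.default_rng(0)
img1 = rng.random((64, 64))
img2 = rng.random((64, 64))
fused = fuse(img1, img2)
print(fused.shape)
```

In the actual method, the decomposition and reconstruction would be the forward and inverse NSST, and the low-band rule would involve the MSML-based clustered dictionary rather than a pixelwise activity comparison.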

19.
Front Oncol ; 13: 1123493, 2023.
Article in English | MEDLINE | ID: mdl-37091168

ABSTRACT

Introduction: The successful use of machine learning (ML) for medical diagnostic purposes has prompted myriad applications in cancer image analysis. Particularly for hepatocellular carcinoma (HCC) grading, there has been a surge of interest in ML-based selection of the discriminative features from high-dimensional magnetic resonance imaging (MRI) radiomics data. As one of the most commonly used ML-based selection methods, the least absolute shrinkage and selection operator (LASSO) has high discriminative power of the essential feature based on linear representation between input features and output labels. However, most LASSO methods directly explore the original training data rather than effectively exploiting the most informative features of radiomics data for HCC grading. To overcome this limitation, this study marks the first attempt to propose a feature selection method based on LASSO with dictionary learning, where a dictionary is learned from the training features, using the Fisher ratio to maximize the discriminative information in the feature. Methods: This study proposes a LASSO method with dictionary learning to ensure the accuracy and discrimination of feature selection. Specifically, based on the Fisher ratio score, each radiomic feature is classified into two groups: the high-information and the low-information group. Then, a dictionary is learned through an optimal mapping matrix to enhance the high-information part and suppress the low discriminative information for the task of HCC grading. Finally, we select the most discrimination features according to the LASSO coefficients based on the learned dictionary. Results and discussion: The experimental results based on two classifiers (KNN and SVM) showed that the proposed method yielded accuracy gains, compared favorably with another 5 state-of-the-practice feature selection methods.

20.
Sensors (Basel) ; 23(7)2023 Mar 29.
Article in English | MEDLINE | ID: mdl-37050627

ABSTRACT

In recent decades, falls have posed multiple critical health issues, especially for the growing older population. Recent research has shown that a wrist-based fall detection system offers an accessory-like, comfortable solution for Internet of Things (IoT)-based monitoring. Nevertheless, an autonomous anywhere-anytime device raises energy-consumption concerns. Hence, this paper proposes a novel energy-aware IoT-based architecture for Message Queuing Telemetry Transport (MQTT)-based gateway-less monitoring for wearable fall detection. A hybrid double-prediction technique based on Supervised Dictionary Learning was implemented to reinforce the detection efficiency of our previous works. A controlled dataset was collected for training (offline), while a real set of measurements from the proposed system was used for validation (online). The system achieved noteworthy offline and online detection performance of 99.8% and 91%, respectively, surpassing most related works that use only an accelerometer. Even in the worst case, the system achieved a battery autonomy of at least 27.32 working hours, significantly higher than other research prototypes. The approach presented here proves promising for real applications that require a reliable, long-term, anywhere-anytime solution.
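One common way to use supervised dictionary learning for detection like this is to train a dictionary per class and classify a new window by which dictionary reconstructs it best. The sketch below does exactly that on synthetic wrist-accelerometer magnitude windows; the signal model, window length, and spike shape are invented for illustration and are not the paper's data or its hybrid double-prediction technique.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

def window(is_fall):
    """Synthetic 100-sample accelerometer-magnitude window (in g)."""
    w = rng.normal(1.0, 0.05, 100)          # ~1 g at rest / daily activity
    if is_fall:
        w[40:50] += rng.uniform(2.0, 3.0)   # impact spike of a fall
    return w

Xtr = np.array([window(i % 2 == 1) for i in range(40)])
ytr = np.array([i % 2 for i in range(40)])  # 0 = no fall, 1 = fall

# One dictionary per class, learned from that class's windows only.
dicts = {c: DictionaryLearning(n_components=5, max_iter=20,
                               transform_algorithm="lasso_lars",
                               transform_alpha=0.5, random_state=0)
            .fit(Xtr[ytr == c]) for c in (0, 1)}

def predict(w):
    # Assign the class whose dictionary yields the smallest reconstruction error.
    errs = {}
    for c, d in dicts.items():
        code = d.transform(w[None, :])
        errs[c] = np.linalg.norm(w - code @ d.components_)
    return min(errs, key=errs.get)

preds = [predict(window(False)), predict(window(True))]
print(preds)
```

A deployed system would pair such a detector with the energy-aware MQTT transport described in the abstract, only waking the radio when a fall is flagged.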
