Results 1 - 8 of 8

1.
Sensors (Basel); 21(13), 2021 Jun 23.
Article in English | MEDLINE | ID: mdl-34201455

ABSTRACT

High-resolution 3D scanning devices produce high-density point clouds, which require large storage capacity and time-consuming processing algorithms. To reduce both, surface simplification algorithms are commonly applied as a preprocessing stage. The goal of point cloud simplification algorithms is to reduce the volume of data while preserving the most relevant features of the original point cloud. In this paper, we present a new feature-preserving point cloud simplification algorithm. We use a global approach to detect saliencies on a given point cloud. Our method estimates a feature vector for each point in the cloud; its components are the normal vector coordinates, the point coordinates, and the surface curvature at that point. The feature vectors are used as basis signals in a dictionary learning process, producing a trained dictionary, and the corresponding sparse coding process produces a sparse matrix. To detect the saliencies, the proposed method uses two measures: the first takes into account the number of nonzero elements in each column vector of the sparse matrix, and the second the reconstruction error of each signal. These measures are combined to produce the final saliency value for each point in the cloud. We then simplify the point cloud guided by the detected saliency, using the saliency value of each point as a dynamic clustering radius. We validate the proposed method by comparing it with a set of state-of-the-art methods, demonstrating its effectiveness.
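
A rough Python sketch of the saliency pipeline described in this abstract follows: build the 7-dimensional feature vectors, learn a dictionary, sparse-code the features, and combine the two saliency measures. All names and parameters (n_atoms, alpha, the normalisation) are illustrative assumptions, not the authors' implementation; it leans on scikit-learn's DictionaryLearning.

    # Illustrative sketch only (assumed parameters; not the authors' code).
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    def point_saliency(points, normals, curvature, n_atoms=32, alpha=1.0):
        # Feature vector per point: normal (3), position (3), curvature (1).
        F = np.hstack([normals, points, curvature[:, None]])      # (N, 7)
        dico = DictionaryLearning(n_components=n_atoms,
                                  transform_algorithm='lasso_lars',
                                  transform_alpha=alpha)
        X = dico.fit_transform(F)           # sparse codes, one row per point
        D = dico.components_                # trained dictionary (n_atoms, 7)
        nnz = np.count_nonzero(X, axis=1)   # measure 1: nonzeros per code
        err = np.linalg.norm(F - X @ D, axis=1)  # measure 2: reconstruction error
        s = nnz / max(nnz.max(), 1) + err / max(err.max(), 1e-12)
        return s / s.max()                  # final per-point saliency in [0, 1]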


Subjects
Algorithms
2.
Comput Biol Med; 132: 104310, 2021 May.
Article in English | MEDLINE | ID: mdl-33721733

ABSTRACT

Skin burns in color images must be accurately detected and classified according to burn degree in order to assist clinicians during diagnosis and early treatment. Especially in emergency cases, where the clinical experience needed for a thorough, highly accurate examination might not be available, an automated assessment may benefit patient outcomes. In this work, burnt areas are detected and classified using the sparse representation of feature vectors over over-redundant dictionaries. Feature vectors are extracted from image patches so that each patch is assigned to a class representing a burn degree. Using color and texture information as features, detection and classification achieved 95.65% sensitivity and 94.02% precision. The experiments used two methods to build the dictionaries for the burn severity classes applied to observed skin regions: (1) direct collection of feature vectors from patches at various images and locations, and (2) collection of feature vectors followed by dictionary learning via K-singular value decomposition (K-SVD).
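
A minimal sketch of the per-class sparse-representation decision implied above, assuming one trained dictionary per burn degree and a colour/texture feature vector per patch; the OMP solver and the residual rule are illustrative stand-ins, not the paper's exact classifier.

    # Illustrative sketch (assumed interface, not the paper's implementation).
    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    def classify_patch(feature, class_dicts, n_nonzero=5):
        # class_dicts: {burn_degree: (d, n_atoms) dictionary}, columns are atoms.
        residuals = {}
        for degree, D in class_dicts.items():
            code = orthogonal_mp(D, feature, n_nonzero_coefs=n_nonzero)
            residuals[degree] = np.linalg.norm(feature - D @ code)
        # Assign the patch to the class whose dictionary reconstructs it best.
        return min(residuals, key=residuals.get)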


Subjects
Algorithms, Humans
3.
Sensors (Basel); 20(11), 2020 Jun 05.
Article in English | MEDLINE | ID: mdl-32516976

ABSTRACT

Denoising a point cloud is fundamental for reconstructing high-quality, detailed surfaces, as it eliminates the noise and outliers introduced by the 3D scanning process. The challenges for a denoising algorithm are noise reduction and sharp feature preservation. In this paper, we present a new model to reconstruct and smooth point clouds that combines L1-median filtering with sparse L1 regularization, both to denoise the normal vectors and to update the point positions while preserving sharp features in the point cloud. The L1 median is robust to outliers and noise compared to the mean. The L1 norm measures the sparsity of a solution, and applying L1 optimization to the point cloud captures the sparsity of sharp features, producing clean point set surfaces with sharp features. We solve the L1 minimization problem with the proximal gradient descent algorithm. Experimental results show that our approach is comparable to state-of-the-art methods: it denoises 3D models with a high level of noise while keeping their geometric features.
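
The two building blocks named above can be sketched in a few lines of Python: a Weiszfeld-style iteration for the L1 (geometric) median of a point's neighbourhood, and the soft-thresholding operator that implements the proximal step of the L1 term in proximal gradient descent. Iteration counts and tolerances are illustrative assumptions.

    # Illustrative sketch (assumed parameters; not the authors' code).
    import numpy as np

    def l1_median(neighbors, iters=20, eps=1e-9):
        # Weiszfeld iterations for the L1 median of a neighbourhood; unlike
        # the mean, it is robust to outliers in the point cloud.
        m = neighbors.mean(axis=0)
        for _ in range(iters):
            d = np.linalg.norm(neighbors - m, axis=1) + eps
            m = (neighbors / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        return m

    def soft_threshold(x, t):
        # Proximal operator of t * ||x||_1: the shrinkage step applied after
        # each gradient step in proximal gradient descent.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)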

4.
PeerJ Comput Sci; 5: e192, 2019.
Article in English | MEDLINE | ID: mdl-33816845

ABSTRACT

Sparse coding aims to find a parsimonious representation of an example given an observation matrix or dictionary. In this regard, Orthogonal Matching Pursuit (OMP) provides an intuitive, simple, and fast approximation of the optimal solution. However, its main building block is anchored on the minimization of the mean squared error (MSE) cost function. This approach is only optimal if the errors follow a Gaussian distribution without samples that strongly deviate from the main mode, i.e., outliers. If this assumption is violated, the sparse code will likely be biased and performance will degrade accordingly. In this paper, we introduce five robust variants of OMP (RobOMP) fully based on the theory of M-estimators under a linear model. The proposed framework exploits efficient iteratively reweighted least squares (IRLS) techniques to mitigate the effect of outliers and emphasize the samples corresponding to the main mode of the data. This is done adaptively via a learned weight vector that models the distribution of the data in a robust manner. Experiments on synthetic data under several noise distributions, and on image recognition under different combinations of occlusion and missing pixels, thoroughly detail the superiority of RobOMP over MSE-based approaches and similar robust alternatives. We also introduce a denoising framework based on robust, sparse, and redundant representations that opens the door to further applications of the proposed techniques. The five variants of RobOMP require no parameter tuning from the user and hence constitute principled alternatives to OMP.
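
As a sketch of the idea (not the paper's five exact estimators), the following replaces OMP's least-squares update with a Huber-weighted IRLS loop; the weight function, the MAD scale estimate, and the iteration counts are illustrative assumptions.

    # Illustrative Huber-IRLS variant of OMP (a sketch of the RobOMP idea).
    import numpy as np

    def huber_weights(r, k=1.345):
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
        u = np.abs(r) / s
        return np.where(u <= k, 1.0, k / u)   # downweight outlying residuals

    def rob_omp(D, y, n_nonzero=5, irls_iters=10):
        support, r = [], y.copy()
        for _ in range(n_nonzero):
            corr = np.abs(D.T @ r)
            corr[support] = 0.0               # do not reselect chosen atoms
            support.append(int(np.argmax(corr)))
            A = D[:, support]
            x = np.linalg.lstsq(A, y, rcond=None)[0]
            for _ in range(irls_iters):       # IRLS refinement of coefficients
                w = huber_weights(y - A @ x)
                Aw = A * w[:, None]
                x = np.linalg.solve(A.T @ Aw, Aw.T @ y)
            r = y - A @ x
        coef = np.zeros(D.shape[1])
        coef[support] = x
        return coef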

5.
Magn Reson Imaging; 36: 77-85, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27742436

ABSTRACT

High-quality cardiac magnetic resonance (CMR) images are hard to obtain when intrinsic noise sources are present, namely heart and breathing movements. Although heart images may be acquired in real time, the image quality is very limited, and most sequences use ECG gating to capture images at each stage of the cardiac cycle over several heart beats. This paper presents a novel super-resolution algorithm that improves cardiac image quality using a sparse Bayesian approach. The high-resolution version of the cardiac image is constructed by combining the information of the low-resolution series (observations from different non-orthogonal series composed of anisotropic voxels) with a prior distribution of the high-resolution local coefficients that enforces sparsity. In addition, a global prior, extracted from the observed data, regularizes the solution. Quantitative and qualitative validations were performed on synthetic and real images with respect to a baseline, showing an average increase of 2.8 to 3.2 dB in peak signal-to-noise ratio (PSNR), 1.8% to 2.6% in the structural similarity index (SSIM), and 2% to 4% in quality assessment (IL-NIQE). The results demonstrate that the proposed method accurately reconstructs a cardiac image, recovering the original shape with fewer artifacts and low noise.
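
The reconstruction can be caricatured as a sparsity-regularised inverse problem. The sketch below uses a plain L1-MAP/ISTA loop in place of the paper's full sparse Bayesian machinery; the operator pairs (A, AT), the step size, and lam are assumed inputs supplied by the acquisition model.

    # Simplified L1-MAP sketch (the paper's sparse Bayesian priors are richer).
    import numpy as np

    def sr_estimate(lr_series, ops, lam=0.05, step=0.5, iters=100):
        # lr_series: list of low-resolution observations (different series).
        # ops: list of (A, AT) pairs, the forward/adjoint operator per series.
        x = ops[0][1](lr_series[0])              # initialise by back-projection
        for _ in range(iters):
            grad = np.zeros_like(x)
            for (A, AT), y in zip(ops, lr_series):
                grad += AT(A(x) - y)             # gradient of the data terms
            x = x - step * grad                  # gradient step
            x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # L1 prox
        return x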


Subjects
Algorithms, Heart/diagnostic imaging, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Artifacts, Bayes Theorem, Humans, Phantoms, Imaging, Reproducibility of Results, Signal-to-Noise Ratio
6.
Braz. arch. biol. technol; 60: e17160480, 2017. tab, graf
Article in English | LILACS | ID: biblio-951455

ABSTRACT

In photography, face recognition and face retrieval play an important role in many applications such as security, criminology, and image forensics. Advances in face recognition make it easier to match an individual's identity using facial attributes, and recent developments in computer vision allow facial attributes to be extracted from an input image and similar images to be returned. In this paper, we propose a novel method combining local octal patterns (LOP) and sparse codewords to retrieve images that match an input query image. To improve the accuracy of the retrieved results given the input image and dynamic facial attributes, the LOP algorithm and sparse codewords are applied in both offline and online stages: the offline and online face image binning procedures operate on the sparse codes. Experimental results on the PubFig dataset show that the proposed LOP combined with sparse codewords provides matching results with an accuracy of 90%.
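
The abstract does not spell out the LOP operator; the sketch below is an assumed LBP-style eight-neighbour encoding meant only to convey the general flavour of an octal pattern descriptor, not the paper's definition.

    # Assumed LBP-style sketch of an eight-neighbour (octal) pattern.
    import numpy as np

    def local_octal_pattern(gray):
        # Encode each interior pixel as an 8-bit code, one bit per neighbour
        # comparison (neighbour >= centre).
        h, w = gray.shape
        c = gray[1:-1, 1:-1]
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros_like(c, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(shifts):
            n = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            code |= ((n >= c).astype(np.uint8) << bit)
        return code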

7.
Braz. arch. biol. technol; 59(spe2): e16161052, 2016. tab, graf
Article in English | LILACS | ID: biblio-839057

ABSTRACT

The robustness and speed of image classification remain challenging tasks in satellite image processing. This paper introduces a novel image classification technique that uses a particle filter framework (PFF)-based optimisation technique for satellite image classification. The framework uses a template-matching algorithm, comprising fast marching algorithm (FMA) and level set method (LSM)-based segmentation, which creates the initial templates for comparison with other test images. The created templates are trained and used as inputs for the optimisation. The optimisation technique used in this work is multikernel sparse representation (MKSR). The combined execution of the FMA, LSM, PFF, and MKSR approaches results in a substantial reduction in processing time for the various classes in a satellite image, compared with the Support Vector Machine (SVM)- and Independent Component Discrimination Analysis (ICDA)-based image classifications used for comparison. This study aims to improve the robustness of image classification as measured by overall accuracy (OA) and the kappa coefficient. The variation of OA across different classes of a satellite image is only 10% with this technique, whereas it is more than 50% with the SVM and ICDA techniques.
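
As a rough illustration of an MKSR decision rule (the PFF/FMA/LSM stages are omitted), the sketch below mixes two kernels into a kernel dictionary, sparse-codes a test feature, and assigns the class with the smallest per-class reconstruction residual; the kernel choice, weights, and solver are illustrative assumptions, not the paper's formulation.

    # Illustrative MKSR classification sketch (assumed kernels and weights).
    import numpy as np
    from sklearn.linear_model import orthogonal_mp
    from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

    def mksr_classify(x, train_feats, train_labels, w=(0.5, 0.5), n_nonzero=10):
        # Weighted multikernel dictionary over the training features.
        K = (w[0] * rbf_kernel(train_feats, train_feats)
             + w[1] * polynomial_kernel(train_feats, train_feats))
        k = (w[0] * rbf_kernel(x[None, :], train_feats)
             + w[1] * polynomial_kernel(x[None, :], train_feats)).ravel()
        code = orthogonal_mp(K, k, n_nonzero_coefs=n_nonzero)
        residuals = {}
        for c in np.unique(train_labels):
            code_c = np.where(train_labels == c, code, 0.0)  # class-restricted
            residuals[c] = np.linalg.norm(k - K @ code_c)
        return min(residuals, key=residuals.get)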

8.
West Indian Med J; 65(2): 271-276, 2015 May 11.
Article in English | MEDLINE | ID: mdl-28358437

ABSTRACT

OBJECTIVE: The goal of super-resolution is to generate high-resolution images from low-resolution input images. METHODS: In this paper, a method combining sparse signal representation and an adaptive M-estimator is proposed for single-image super-resolution. With the sparse signal representation, the correlation between the sparse representations of high-resolution and low-resolution patches of the same image is learned as a set of joint dictionaries, coupling the high- and low-resolution patch spaces. The dictionaries are then used to produce the high-resolution image from a single low-resolution image. RESULTS: In the post-processing phase, the adaptive M-estimator, which combines the advantages of the traditional L1 and L2 norms, further processes the resulting high-resolution image to reduce the artefacts introduced by learning and reconstruction and to improve performance. CONCLUSION: Three experiments show the performance improvement of the proposed algorithm over other methods.
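
A minimal sketch of the coupled-dictionary step described in METHODS, assuming jointly trained dictionaries D_low and D_high are already available; the solver, sparsity level, and patch shape are illustrative assumptions, and the M-estimator post-processing is omitted.

    # Illustrative joint-dictionary patch step (not the paper's implementation).
    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    def sr_patch(lr_patch, D_low, D_high, hr_shape, n_nonzero=3):
        # Sparse-code the LR patch over D_low, then reuse the same code with
        # D_high to synthesise the corresponding HR patch.
        code = orthogonal_mp(D_low, lr_patch.ravel(), n_nonzero_coefs=n_nonzero)
        return (D_high @ code).reshape(hr_shape)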
