Results 1 - 10 of 10
1.
IEEE Trans Vis Comput Graph ; 30(2): 1638-1651, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37930922

ABSTRACT

This article presents a computational framework for the concise encoding of an ensemble of persistence diagrams, in the form of weighted Wasserstein barycenters (Turner et al., 2014; Vidal et al., 2020) of a dictionary of atom diagrams. We introduce a multi-scale gradient descent approach for the efficient resolution of the corresponding minimization problem, which interleaves the optimization of the barycenter weights with the optimization of the atom diagrams. Our approach leverages the analytic expressions for the gradients of both sub-problems to ensure fast iterations, and it additionally exploits shared-memory parallelism. Extensive experiments on public ensembles demonstrate the efficiency of our approach, with Wasserstein dictionary computations on the order of minutes for the largest examples. We show the utility of our contributions in two applications. First, we apply Wasserstein dictionaries to data reduction and reliably compress persistence diagrams by concisely representing them with their weights in the dictionary. Second, we present a dimensionality reduction framework based on a Wasserstein dictionary defined with a small number of atoms (typically three), and encode the dictionary as a low-dimensional simplex embedded in a visual space (typically 2D). In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used to reproduce our results.
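Read as an optimization problem, the encoding described above can be sketched as follows; the notation is a hedged reconstruction from the abstract, not the paper's own formulation:

```latex
% Hedged sketch: given an ensemble of diagrams D_1,...,D_N, find m atom diagrams
% A_1,...,A_m and per-diagram barycentric weights lambda_i on the simplex such that
% each D_i is well approximated by the weighted Wasserstein barycenter of the atoms.
\[
  \min_{\{A_j\},\,\{\lambda_i\}} \;\sum_{i=1}^{N}
    W_2^2\!\big(D_i,\; B(\lambda_i; A_1,\dots,A_m)\big),
  \qquad
  B(\lambda; A_1,\dots,A_m) \;=\;
    \operatorname*{arg\,min}_{D} \sum_{j=1}^{m} \lambda_j\, W_2^2(D, A_j),
  \quad \lambda \in \Delta_m .
\]
```

The multi-scale gradient descent then alternates updates of the weights and of the atoms, each sub-problem having an analytic gradient.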

2.
IEEE Trans Vis Comput Graph ; 28(1): 291-301, 2022 01.
Article in English | MEDLINE | ID: mdl-34596544

ABSTRACT

This paper presents a unified computational framework for the estimation of distances, geodesics and barycenters of merge trees. We extend recent work on the edit distance [104] and introduce a new metric, called the Wasserstein distance between merge trees, which is purposely designed to enable efficient computations of geodesics and barycenters. Specifically, our new distance is strictly equivalent to the $L^2$-Wasserstein distance between extremum persistence diagrams, but it is restricted to a smaller solution space, namely the space of rooted partial isomorphisms between branch decomposition trees. This enables a simple extension of existing optimization frameworks [110] for geodesics and barycenters from persistence diagrams to merge trees. We introduce a task-based algorithm which can be generically applied to distance, geodesic, barycenter or cluster computation. The task-based nature of our approach enables further accelerations with shared-memory parallelism. Extensive experiments on public ensembles and SciVis contest benchmarks demonstrate the efficiency of our approach, with barycenter computations on the order of minutes for the largest examples, as well as its qualitative ability to generate representative barycenter merge trees that visually summarize the features of interest found in the ensemble. We show the utility of our contributions with dedicated visualization applications: feature tracking, temporal reduction and ensemble clustering. We provide a lightweight C++ implementation that can be used to reproduce our results.
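For reference, the classical $L^2$-Wasserstein distance between persistence diagrams, of which the proposed merge-tree metric is described as a restriction, reads as follows (standard definition, not quoted from the paper):

```latex
\[
  W_2(D, D') \;=\;
  \min_{\phi \in \Phi(D, D')}
  \Big( \sum_{p \in D} \lVert\, p - \phi(p) \,\rVert_2^{2} \Big)^{1/2},
\]
% where \Phi(D, D') denotes the partial matchings between D and D', unmatched points
% being mapped to their orthogonal projection onto the diagonal. The merge-tree metric
% restricts the admissible matchings to rooted partial isomorphisms between branch
% decomposition trees.
```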

3.
IEEE Trans Image Process ; 25(11): 5455-5468, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27552752

ABSTRACT

This paper addresses the problem of tonal fluctuation in videos. Due to the automatic settings of consumer cameras, the colors of objects in image sequences may change over time. We propose a fast and computationally light method to stabilize this tonal appearance while remaining robust to motion and occlusions. To do so, a minimally viable color correction model is used in conjunction with an effective estimation of the dominant motion. The final solution is a temporally weighted correction, explicitly driven by the motion magnitude, that is both visually effective and very fast, with potential for real-time processing. Experimental results obtained on a variety of sequences show that the method outperforms the current state of the art in terms of tonal stability, at a much lower computational complexity.
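As a rough illustration of the idea (not the paper's exact model), the sketch below fits a per-channel gain/offset correction toward a tonal reference and blends it with the identity according to the estimated dominant-motion magnitude; the function name, the moment-matching color model and the exponential weighting are all assumptions:

```python
import numpy as np

def stabilize_frame(frame, reference, motion_magnitude, alpha=0.1):
    """Hypothetical sketch of motion-weighted tonal stabilization.

    A per-channel affine (gain/offset) map is fitted from `frame` toward
    `reference`, then blended with the identity: large dominant motion means
    we trust the current frame, small motion means we correct strongly
    toward the reference tonal appearance.
    """
    frame = frame.astype(np.float64)
    corrected = np.empty_like(frame)
    for c in range(frame.shape[2]):                      # per color channel
        x, y = frame[..., c], reference[..., c].astype(np.float64)
        gain = (y.std() + 1e-8) / (x.std() + 1e-8)       # simple moment matching
        offset = y.mean() - gain * x.mean()
        corrected[..., c] = gain * x + offset
    w = np.exp(-alpha * motion_magnitude)                # temporal weight in (0, 1]
    return w * corrected + (1.0 - w) * frame
```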

4.
Vision Res ; 120: 22-38, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26024561

ABSTRACT

We provide a theoretical analysis of some empirical facts about the second-order spatiochromatic structure of natural color images. In particular, we show that two simple assumptions on the covariance matrices of color images yield eigenvectors given by the Kronecker product of Fourier features with the triad formed by the luminance and color-opponent channels. The first of these assumptions is second-order stationarity, while the second is the commutativity of the color correlation matrices. The validity of these assumptions and the predicted shape of the PCA components of color images are experimentally verified on two large image databases. As a by-product of this experimental study, we also provide novel data supporting an exponential decay law of the spatiochromatic covariance between pairs of pixels as a function of their spatial distance.
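One way to summarize how the two assumptions combine (a simplified Kronecker reading of the abstract, with my own notation, not the paper's derivation):

```latex
% If the spatiochromatic covariance factors into a stationary spatial part S and a
% color part M, the eigen-structure factors accordingly:
\[
  C \;\approx\; S \otimes M,
  \qquad
  S\,u_k = \mu_k u_k \ \ (\text{Fourier modes, diagonalizing the stationary } S),
  \qquad
  M\,v_\ell = \nu_\ell v_\ell \ \ (\text{luminance and opponent channels}),
\]
\[
  C\,(u_k \otimes v_\ell) \;=\; \mu_k \nu_\ell\,(u_k \otimes v_\ell),
  \qquad
  \operatorname{cov}\!\big(I(x),\, I(x+d)\big) \;\propto\; e^{-\lambda \lVert d \rVert}.
\]
```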


Subject(s)
Color Vision/physiology; Pattern Recognition, Visual/physiology; Spatial Processing/physiology; Humans; Models, Statistical
5.
IEEE Trans Image Process ; 24(6): 1944-55, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25826801

ABSTRACT

This paper introduces a new approach for the automatic estimation of illuminants in a digital color image. The method relies on two assumptions. First, the image is assumed to contain at least a small set of achromatic pixels. The second assumption is physical and concerns the set of possible illuminants, which are assumed to be well approximated by black-body radiators. The proposed scheme is based on a projection of selected pixels onto the Planckian locus in a well-chosen chromaticity space, followed by a voting procedure yielding the estimation of the illuminant. This approach is very simple and learning-free. The voting procedure can be extended to the detection of multiple illuminants when necessary. Experiments on various databases show that the performance of this approach is similar to that of the best learning-based state-of-the-art algorithms.
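A minimal sketch of the voting idea, assuming a precomputed sampling of the Planckian locus in the chosen chromaticity space; the function and parameter names are hypothetical and the pixel-selection rule is simplified:

```python
import numpy as np

def estimate_illuminant(chromaticities, locus_uv, locus_temps, grayness, keep_frac=0.2):
    """Hypothetical sketch of Planckian-locus voting (not the paper's exact algorithm).

    chromaticities : (N, 2) chromaticity coordinates of candidate pixels
    locus_uv       : (M, 2) precomputed samples of the Planckian locus, same space
    locus_temps    : (M,)   color temperatures (Kelvin) of those samples
    grayness       : (N,)   achromaticity score per pixel (lower = more achromatic)
    """
    # Keep only the most achromatic pixels as voters.
    voters = chromaticities[grayness <= np.quantile(grayness, keep_frac)]
    # Project each voter onto the locus by nearest sampled point.
    d = np.linalg.norm(voters[:, None, :] - locus_uv[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)
    # Vote in a histogram over color temperature and return the winning bin center.
    hist, edges = np.histogram(locus_temps[nearest], bins=30)
    t = np.argmax(hist)
    return 0.5 * (edges[t] + edges[t + 1])
```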

6.
J Physiol Paris ; 106(5-6): 266-83, 2012.
Article in English | MEDLINE | ID: mdl-22343519

ABSTRACT

Gestalt theory gives a list of geometric grouping laws that could in principle give a complete account of human image perception. Based on an extensive thesaurus of clever graphical images, this theory discusses how grouping laws collaborate and conflict toward a global image understanding. Unfortunately, as shown in the bibliographical analysis herein, attempts to formalize the grouping laws in computer vision and psychophysics have at best succeeded in computing individual partial structures (or partial gestalts), such as alignments or symmetries. Nevertheless, we show here that a clever, never-formalized Gestalt experimental procedure, the Nachzeichnung, suggests a numerical setup for implementing and testing the collaboration of partial gestalts. The new computational procedure proposed here analyzes a digital image and performs a numerical simulation that we call Nachtanz, or Gestaltic dance. In this dance, the analyzed digital image is gradually deformed in a random way, while maintaining the detected partial gestalts. The resulting dancing images should be perceptually indistinguishable if and only if the grouping process was complete. Like the Nachzeichnung, the Nachtanz permits a visual exploration of the degrees of freedom still available to a figure after all partial groups (or gestalts) have been detected. In the newly proposed procedure, instead of drawing themselves, subjects are shown samples of the automatic Gestalt dances and asked to evaluate whether the figures are similar. Several preliminary numerical results obtained with this new Gestaltic experimental setup are thoroughly discussed.


Subject(s)
Gestalt Theory; Mathematics; Psychophysics; Vision, Ocular/physiology; Visual Perception/physiology; Algorithms; Humans; Models, Biological
7.
Med Image Anal ; 16(1): 114-26, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21911309

ABSTRACT

A differential analysis framework for longitudinal FLAIR MRI volumes is proposed, based on non-linear gray-value mapping, to quantify low-grade glioma growth. First, MRI volumes were mapped to a common range of gray levels via a midway-based histogram mapping. This mapping enabled direct comparison of MRI data and computation of difference maps. A statistical analysis framework for the intensity distributions of midway-mapped MRI volumes, as well as of their difference maps, was designed to identify significant difference values, enabling quantification of low-grade glioma growth around the borders of an initial segmentation. Two sets of parameters, corresponding to optimistic and pessimistic growth estimations, were proposed. The influence of the MRI inhomogeneity field on the midway-mapping framework was studied and modeled using image models with multiplicative contrast changes. Clinical evaluation was performed on 32 longitudinal clinical cases from 13 patients. Several growth indices were measured and evaluated in terms of accuracy against manual tracing. Results from the clinical evaluation showed that millimetric precision on a volumetric radius growth index can be obtained automatically with the proposed differential analysis. The automated optimistic and pessimistic growth estimates behaved as expected, providing upper and lower bounds around the manual growth estimations.
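In its standard two-image form, the midway histogram mapping mentioned above can be written as follows (a reasonable reading of the abstract, not the paper's multi-volume formulation):

```latex
% With F_1, F_2 the cumulative gray-level distributions of the two volumes, both are
% mapped onto the "midway" distribution whose inverse CDF averages the inverse CDFs:
\[
  H^{-1} \;=\; \tfrac{1}{2}\big(F_1^{-1} + F_2^{-1}\big),
  \qquad
  T_1 \;=\; H^{-1} \circ F_1 \;=\; \tfrac{1}{2}\big(\mathrm{Id} + F_2^{-1} \circ F_1\big),
  \qquad
  T_2 \;=\; H^{-1} \circ F_2 ,
\]
% after which the voxelwise difference T_1(I_1) - T_2(I_2) becomes directly comparable
% across acquisitions.
```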


Subject(s)
Brain Neoplasms/pathology; Brain/pathology; Glioma/pathology; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Pattern Recognition, Automated/methods; Algorithms; Humans; Image Enhancement/methods; Neoplasm Staging; Reproducibility of Results; Sensitivity and Specificity; Subtraction Technique
8.
IEEE Trans Image Process ; 20(11): 3073-85, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21507772

ABSTRACT

This work is concerned with the modification of the gray-level or color distribution of digital images. A common drawback of classical methods aiming at such modifications is the appearance of artefacts or the attenuation of details and textures. In this work, we propose a generic filtering method which, given the original image and the radiometrically corrected one, suppresses artefacts while preserving details. The approach relies on the key observation that artefacts correspond to spatial irregularities of the so-called transportation map, defined as the difference between the original and the corrected image. The proposed method draws on the nonlocal Yaroslavsky filter to regularize the transportation map. The efficiency of the method is demonstrated on various radiometric modifications: contrast equalization, midway histogram, color enhancement, and color transfer. A comparison with related approaches is also provided.
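A minimal sketch of the regularization step, assuming grayscale images; the function name and parameters are illustrative, not taken from the paper:

```python
import numpy as np

def regularize_transport(original, corrected, radius=5, h=10.0):
    """Hypothetical sketch: smooth the transportation map m = corrected - original
    with a Yaroslavsky-type filter whose weights are computed from the ORIGINAL
    image, so the correction stays spatially regular wherever the original is."""
    original = original.astype(np.float64)
    m = corrected.astype(np.float64) - original
    out = np.zeros_like(m)
    H, W = original.shape
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            patch = original[i0:i1, j0:j1]
            w = np.exp(-((patch - original[i, j]) ** 2) / (h * h))
            out[i, j] = np.sum(w * m[i0:i1, j0:j1]) / np.sum(w)
    return original + out
```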

9.
IEEE Trans Image Process ; 16(1): 253-61, 2007 Jan.
Article in English | MEDLINE | ID: mdl-17283783

ABSTRACT

In this work, we propose a method to segment a 1-D histogram without a priori assumptions about the underlying density function. Our approach relies on a rigorous definition of an admissible segmentation, avoiding over- and under-segmentation problems. A fast algorithm leading to such a segmentation is proposed. The approach is tested on both synthetic and real data. An application to the segmentation of written documents is also presented. This application requires the detection of very small histogram modes, which can be accurately detected with the proposed method.
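A much-simplified sketch of the mode-separation idea (NOT the paper's admissibility criterion, which is statistical); it keeps a local minimum as a separator only if its valley is contrasted enough against the neighboring peaks:

```python
import numpy as np

def segment_histogram(hist, min_contrast=0.1):
    """Hypothetical sketch of 1-D histogram segmentation by contrasted valleys."""
    hist = np.asarray(hist, dtype=np.float64)
    minima = [i for i in range(1, len(hist) - 1)
              if hist[i] <= hist[i - 1] and hist[i] <= hist[i + 1]]
    separators = []
    for i in minima:
        left_peak = hist[:i].max()
        right_peak = hist[i + 1:].max()
        if hist[i] < (1.0 - min_contrast) * min(left_peak, right_peak):
            separators.append(i)
    # Segments are the intervals between consecutive kept separators.
    bounds = [0] + separators + [len(hist) - 1]
    return list(zip(bounds[:-1], bounds[1:]))
```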


Subject(s)
Algorithms; Artificial Intelligence; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Computer Simulation; Information Storage and Retrieval/methods; Models, Statistical
10.
IEEE Trans Image Process ; 15(1): 241-8, 2006 Jan.
Article in English | MEDLINE | ID: mdl-16435553

ABSTRACT

Image flicker is a common effect, observed in videos as well as in old films, consisting of fast variations of frame contrast and brightness. Reducing the flicker of a sequence improves its visual quality and can be an essential first treatment before further processing. This paper presents an axiomatic analysis of the problem, which leads to a global and fast de-flickering method based on scale-space theory. The stability of this process, called scale-time equalization, is ensured by the scale-time framework. Results on different sequences are given and show significant visual improvement.
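A deliberately simplified sketch of the temporal-smoothing idea: the paper's scale-time equalization acts on full gray-level distributions, whereas this illustration only stabilizes per-frame means and standard deviations via a Gaussian scale-space in time:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def deflicker(frames, sigma_t=3.0):
    """Hypothetical sketch: remap each frame toward temporally smoothed statistics."""
    frames = np.asarray(frames, dtype=np.float64)       # shape (T, H, W)
    means = frames.mean(axis=(1, 2))
    stds = frames.std(axis=(1, 2)) + 1e-8
    smooth_means = gaussian_filter1d(means, sigma_t)     # scale-space along time
    smooth_stds = gaussian_filter1d(stds, sigma_t)
    out = np.empty_like(frames)
    for t in range(frames.shape[0]):
        out[t] = (frames[t] - means[t]) / stds[t] * smooth_stds[t] + smooth_means[t]
    return out
```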


Subject(s)
Algorithms; Artifacts; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Information Storage and Retrieval/methods; Video Recording/methods; Time Factors