1.
Sensors (Basel) ; 23(23), 2023 Nov 28.
Article in English | MEDLINE | ID: mdl-38067835

ABSTRACT

Many state-of-the-art works address extending the camera depth of field (DoF) via the joint optimization of an optical component (typically a phase mask) and a digital processing step with an infinite deconvolution support or a neural network. This can be used either to see sharp objects from a greater distance or to reduce manufacturing costs thanks to a relaxed tolerance on the sensor position. Here, we study the case of embedded processing with a single convolution of finite kernel size. The finite impulse response (FIR) filter coefficients are learned or computed based on a Wiener filter paradigm. It involves an optical model typical of codesigned systems for DoF extension and a scene power spectral density, which is either learned or modeled. We compare different FIR filters and present a method for dimensioning their sizes prior to a joint optimization. We also show that, among the filters compared, the learning approach enables easy adaptation to a database, but the other approaches are equally robust.
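As a rough illustration of the Wiener filter paradigm mentioned above, the sketch below derives a finite-support deconvolution kernel from an assumed point spread function, a scene power spectral density model, and a noise level; the function name and its arguments are illustrative, not the paper's implementation.

    import numpy as np

    def wiener_fir(psf, scene_psd, noise_psd, kernel_size):
        # Classical Wiener deconvolution filter in the Fourier domain,
        # then cropped to a finite impulse response of size kernel_size.
        otf = np.fft.fft2(np.fft.ifftshift(psf))
        wiener = np.conj(otf) * scene_psd / (np.abs(otf) ** 2 * scene_psd + noise_psd)
        h = np.real(np.fft.fftshift(np.fft.ifft2(wiener)))
        cy, cx = h.shape[0] // 2, h.shape[1] // 2
        r = kernel_size // 2
        return h[cy - r:cy + r + 1, cx - r:cx + r + 1]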

2.
Appl Opt ; 61(29): 8843-8849, 2022 Oct 10.
Article in English | MEDLINE | ID: mdl-36256020

ABSTRACT

We present a novel, to the best of our knowledge, patch-based approach for depth regression from defocus blur. Most state-of-the-art methods for depth from defocus (DFD) use a patch classification approach among a set of potential defocus blurs related to a depth, which induces errors due to the continuous variation of the depth. Here, we propose to adapt a simple classification model using a soft-assignment encoding of the true depth into a membership probability vector during training and a regression scale to predict intermediate depth values. Our method uses no blur model or scene model; it only requires a training dataset of image patches (either raw, gray scale, or RGB) and their corresponding depth label. We show that our method outperforms both classification and direct regression on simulated images from structured or natural texture datasets, and on raw real data having optical aberrations from an active DFD experiment.
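A minimal sketch of the soft-assignment idea, under the assumption of a Gaussian membership kernel over discrete depth classes (the kernel choice and all names here are illustrative, not taken from the paper):

    import numpy as np

    def soft_labels(true_depth, class_depths, sigma=0.1):
        # Encode a continuous depth as a membership probability vector
        # over discrete depth classes (Gaussian soft assignment).
        w = np.exp(-0.5 * ((true_depth - class_depths) / sigma) ** 2)
        return w / w.sum()

    def depth_from_probs(probs, class_depths):
        # Regression step: predict intermediate depths as the
        # probability-weighted mean of the class centers.
        return float(np.dot(probs, class_depths))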

3.
Appl Opt ; 60(31): 9966-9974, 2021 Nov 01.
Article in English | MEDLINE | ID: mdl-34807187

ABSTRACT

In this paper, we propose what we believe is a new monocular depth estimation algorithm based on local estimation of defocus blur, an approach referred to as depth from defocus (DFD). Using a limited set of calibration images, we directly learn the image covariance, which encodes both scene and blur (i.e., depth) information. Depth is then estimated from a single image patch using a maximum likelihood criterion defined with the learned covariance. This method is applied here within a new active DFD setup using a dense textured projection and a chromatic lens for image acquisition. The projector adds texture to low-textured objects, which are usually a limitation of DFD, and the chromatic aberration increases the estimated depth range with respect to conventional DFD. We provide quantitative evaluations of the depth estimation performance of our method on simulated and real data of fronto-parallel untextured scenes. The proposed method is then qualitatively evaluated on a 3D-printed benchmark.
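A bare-bones sketch of a maximum likelihood depth decision from a learned, depth-indexed patch covariance, assuming a zero-mean Gaussian patch model (the names and the regularization constant are illustrative assumptions):

    import numpy as np

    def ml_depth(patch, covariances, depths, eps=1e-6):
        # Pick the depth whose learned patch covariance best explains the
        # data under a zero-mean Gaussian likelihood.
        x = patch.ravel() - patch.mean()
        best_depth, best_ll = None, -np.inf
        for d, cov in zip(depths, covariances):
            c = cov + eps * np.eye(cov.shape[0])          # regularize
            _, logdet = np.linalg.slogdet(c)
            ll = -0.5 * (logdet + x @ np.linalg.solve(c, x))
            if ll > best_ll:
                best_depth, best_ll = d, ll
        return best_depth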

4.
Appl Opt ; 57(17): 4761-4770, 2018 Jun 10.
Article in English | MEDLINE | ID: mdl-30118091

ABSTRACT

We present an ultracompact infrared cryogenic camera, with a 120° field of view, integrated inside a standard Sofradir detector dewar cooler assembly (DDCA). The multichannel optical architecture produces four nonredundant images on a single SCORPIO detector with a pixel pitch of 15 µm. This ultraminiaturized optical system adds very little optical and mechanical mass to be cooled in the DDCA: the cool-down time is comparable to that of an equivalent DDCA without an imaging function. Limiting the number of channels is necessary to keep the highest number of resolved points in the final image. However, optical tolerances lead to irregular shifts between the channels. This paper discusses the limits of multichannel architectures. With an image-processing algorithm, the four images produced by the camera are combined into a single full-resolution image with an equivalent sampling pitch of 7.5 µm. Experimental measurements of the modulation transfer function and noise equivalent temperature difference show that this camera achieves good optical performance.
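To illustrate the reconstruction idea in its simplest form, the sketch below interleaves four sub-images assumed to sit on ideal half-pixel offsets onto a grid with half the sampling pitch; the paper's processing also has to estimate and compensate the irregular inter-channel shifts, which this toy example ignores.

    import numpy as np

    def interleave_four(i00, i01, i10, i11):
        # Naive fusion of four sub-images with ideal half-pixel offsets
        # into one image with half the sampling pitch.
        h, w = i00.shape
        out = np.zeros((2 * h, 2 * w), dtype=float)
        out[0::2, 0::2] = i00
        out[0::2, 1::2] = i01
        out[1::2, 0::2] = i10
        out[1::2, 1::2] = i11
        return out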

5.
J Opt Soc Am A Opt Image Sci Vis ; 31(12): 2650-62, 2014 Dec 01.
Article in English | MEDLINE | ID: mdl-25606754

ABSTRACT

In this paper we present a performance model for depth estimation using single-image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD. We then study the influence of the optical parameters of a conventional camera, such as the focal length, the aperture, and the position of the in-focus plane (IFP), on performance. We derive an approximate analytical expression of the CRB away from the IFP and propose an interpretation of the SIDFD performance in this domain. Finally, we illustrate the predictive capacity of our performance model on experimental data, comparing several settings of a consumer camera.
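For readers unfamiliar with the CRB machinery, the sketch below evaluates a generic bound for a zero-mean Gaussian patch model whose covariance depends on depth, using the standard Fisher-information expression I(z) = 0.5 tr[(C^-1 dC/dz)^2] and a finite-difference derivative; this is a textbook illustration, not the paper's closed-form expression.

    import numpy as np

    def crb_depth(cov_of_depth, z, dz=1e-3):
        # cov_of_depth(z) is an assumed callable returning the patch
        # covariance at depth z.
        c = cov_of_depth(z)
        dc = (cov_of_depth(z + dz) - cov_of_depth(z - dz)) / (2 * dz)
        a = np.linalg.solve(c, dc)
        fisher = 0.5 * np.trace(a @ a)
        return 1.0 / fisher      # lower bound on the depth estimation variance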

6.
Appl Opt ; 52(29): 7152-64, 2013 Oct 10.
Article in English | MEDLINE | ID: mdl-24217733

ABSTRACT

In this paper, we propose a new method for passive depth estimation based on the combination of a camera with longitudinal chromatic aberration and an original depth from defocus (DFD) algorithm. Indeed, a chromatic lens combined with an RGB sensor produces three images with spectrally variable in-focus planes, which eases the task of depth extraction with DFD. We first propose an original DFD algorithm dedicated to color images with spectrally varying defocus blurs. Then we describe the design of a prototype chromatic camera used to experimentally evaluate the effectiveness of the proposed approach for depth estimation. We provide comparisons with the results of an active ranging sensor as well as real indoor/outdoor scene reconstructions.
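The sketch below computes a per-channel local sharpness map as a crude proxy for channel-wise defocus; it is not the paper's DFD criterion, only an illustration of why a chromatic lens helps, since each color channel is sharpest at a different depth (the window size and gradient-energy measure are arbitrary choices).

    import numpy as np
    from scipy.ndimage import uniform_filter

    def channel_sharpness(img_rgb, win=9):
        # Local gradient energy per color channel; the channel ratios
        # carry coarse depth information in a chromatic system.
        maps = []
        for c in range(3):
            gy, gx = np.gradient(img_rgb[..., c].astype(float))
            maps.append(uniform_filter(gx ** 2 + gy ** 2, size=win))
        return np.stack(maps, axis=-1)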

7.
Appl Opt ; 51(31): 7701-13, 2012 Nov 01.
Article in English | MEDLINE | ID: mdl-23128722

ABSTRACT

This paper deals with point target detection in nonstationary backgrounds such as cloud scenes in aerial or satellite imaging. We propose an original spatial detection method based on first- and second-order modeling (i.e., mean and covariance) of local background statistics. We first show that state-of-the-art nonlocal denoising methods can be adapted with minimal effort to yield edge-preserving background mean estimates. These mean estimates lead to very efficient background suppression (BS) detection. We further propose to follow BS with a matched filter based on an estimate of the local spatial covariance matrix. These matrices are identified through a robust classification of pixels into classes with homogeneous second-order statistics, based on a Gaussian mixture model. The efficiency of the proposed approaches is demonstrated by evaluation on two cloudy-sky background databases.
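A compact sketch of the background suppression plus covariance-whitened matched filter combination, written in its standard textbook form (the pixel classification step that selects the covariance is omitted, and all names are illustrative):

    import numpy as np

    def matched_filter_score(patch, bg_mean, bg_cov, target_signature):
        # Background suppression followed by a matched filter whitened
        # by the local background covariance.
        r = (patch - bg_mean).ravel()
        s = target_signature.ravel()
        ws = np.linalg.solve(bg_cov, s)            # whitened signature
        return float(ws @ r) / np.sqrt(float(ws @ s))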

8.
J Opt Soc Am A Opt Image Sci Vis ; 26(7): 1730-46, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19568310

ABSTRACT

We address performance modeling of superresolution (SR) techniques. Superresolution combines several images of the same scene to produce an image with better resolution and contrast. We propose a discrete-data, continuous-reconstruction framework to conduct SR performance analysis and derive a theoretical expression of the reconstruction mean squared error (MSE) as a compact, computationally tractable function of the signal-to-noise ratio (SNR), scene model, sensor transfer function, number of frames, interframe translation motion, and SR reconstruction filter. The resulting formal expression for the MSE allows a qualitative study of SR behavior. In particular, we provide an original outlook on the balance between noise and aliasing reduction in linear SR. Explicitly accounting for the SR reconstruction filter is an original feature of our model: it allows, for the first time, the study not only of optimal filters but also of suboptimal ones, which are often used in practice.
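As a much-simplified illustration of a frequency-domain MSE budget for linear SR, the sketch below sums a residual-distortion term and an amplified-noise term over frequency; the paper's model additionally accounts for aliasing and interframe motion, which are left out here, and all array names are assumptions.

    import numpy as np

    def sr_mse_budget(scene_psd, mtf, recon_filter, noise_psd, n_frames):
        # Residual signal distortion plus amplified noise, averaged over
        # frequency; a crude two-term error budget for linear SR.
        distortion = np.abs(1.0 - recon_filter * mtf) ** 2 * scene_psd
        noise = np.abs(recon_filter) ** 2 * noise_psd / n_frames
        return float(np.mean(distortion + noise))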

9.
IEEE Trans Image Process ; 15(11): 3325-37, 2006 Nov.
Article in English | MEDLINE | ID: mdl-17076393

ABSTRACT

Super-resolution (SR) techniques make use of subpixel shifts between frames in an image sequence to yield higher-resolution images. We propose an original observation model devoted to the case of nonisometric interframe motion, as required, for instance, in the context of airborne imaging sensors. First, we describe how the main observation models used in the SR literature deal with motion, and we explain why they are not suited to nonisometric motion. Then, we propose an extension of the observation model by Elad and Feuer adapted to affine motion. This model is based on a decomposition of affine transforms into successive shear transforms, each one efficiently implemented by row-by-row or column-by-column one-dimensional affine transforms. We demonstrate on synthetic and real sequences that our observation model, incorporated in an SR reconstruction technique, leads to better results in the case of variable-scale motions and provides equivalent results in the case of isometric motions.
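To make the shear idea concrete, the sketch below applies a one-dimensional horizontal transform to each row by 1-D interpolation; chaining row and column passes of this kind is one way to realize an affine warp in the spirit of the decomposition mentioned above (a generic illustration, not the paper's implementation).

    import numpy as np

    def shear_rows(img, shear, scale=1.0):
        # Apply x' = scale * x + shear * y to every row via 1-D linear
        # interpolation (inverse mapping for resampling).
        h, w = img.shape
        out = np.empty((h, w), dtype=float)
        x = np.arange(w)
        for y in range(h):
            src = (x - shear * y) / scale
            out[y] = np.interp(src, x, img[y].astype(float), left=0.0, right=0.0)
        return out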


Subject(s)
Algorithms, Artifacts, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Subtraction Technique, Video Recording/methods, Artificial Intelligence, Computer Simulation, Information Storage and Retrieval/methods, Theoretical Models, Motion (Physics), Reproducibility of Results, Sensitivity and Specificity
10.
IEEE Trans Image Process ; 15(10): 3201-6, 2006 Oct.
Article in English | MEDLINE | ID: mdl-17022281

ABSTRACT

Robust estimation of the optical flow is addressed through a multiresolution energy minimization. It involves repeated evaluation of the spatial and temporal gradients of image intensity, which usually relies on bilinear interpolation and image filtering. We propose to base both computations on a single pyramidal cubic B-spline model of image intensity. We empirically show improvements in convergence speed and estimation error and validate the resulting algorithm on real test sequences.
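A minimal sketch of the underlying idea at a single pyramid level: fit one cubic spline model of image intensity and query both subpixel values and spatial derivatives from it (SciPy's generic RectBivariateSpline is used here as a stand-in; the paper's pyramidal B-spline machinery is not reproduced).

    import numpy as np
    from scipy.interpolate import RectBivariateSpline

    def spline_image(img):
        # Cubic spline model of image intensity; values and spatial
        # gradients at subpixel locations come from the same model.
        h, w = img.shape
        return RectBivariateSpline(np.arange(h), np.arange(w), img, kx=3, ky=3)

    # Usage at subpixel points (ys, xs):
    #   spl = spline_image(frame)
    #   val = spl.ev(ys, xs)           # intensity
    #   gy  = spl.ev(ys, xs, dx=1)     # derivative along rows
    #   gx  = spl.ev(ys, xs, dy=1)     # derivative along columns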


Subject(s)
Algorithms, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Computer-Assisted Numerical Analysis, Video Recording/methods, Computer Simulation, Information Storage and Retrieval/methods, Statistical Models, Motion (Physics)
11.
Appl Opt ; 43(2): 257-63, 2004 Jan 10.
Article in English | MEDLINE | ID: mdl-14735945

ABSTRACT

We address the issue of distinguishing point objects from a cluttered background and estimating their position by image processing. We are interested in the specific context in which the object's signature varies significantly with its random subpixel location because of aliasing. The conventional matched filter neglects this phenomenon, which causes a consistent degradation of detection performance. Thus, alternative detectors are proposed, and numerical results show the improvement brought by approximate and generalized likelihood-ratio tests compared with pixel-matched filtering. We also study the performance of two types of subpixel position estimators. Finally, we highlight the major influence of sensor design on both estimation and point object detection.
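A schematic version of a generalized likelihood-ratio style detector that maximizes a matched filter score over candidate subpixel positions of the target signature; psf_at_offset is a hypothetical helper returning the sampled signature for a given subpixel shift, and the white-noise assumption is a simplification.

    import numpy as np

    def glrt_point_detector(patch, psf_at_offset, offsets, noise_var):
        # Maximize the matched-filter statistic over candidate subpixel
        # target positions; compare the result to a threshold.
        x = patch.ravel()
        best = -np.inf
        for dx, dy in offsets:
            s = psf_at_offset(dx, dy).ravel()
            score = (s @ x) ** 2 / (noise_var * (s @ s))
            best = max(best, score)
        return best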
