Results 1 - 5 of 5
1.
Light Sci Appl ; 13(1): 194, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39152120

ABSTRACT

Imaging through dynamic scattering media is one of the most challenging yet fascinating problems in optics, with applications spanning from biological detection to remote sensing. In this study, we propose a comprehensive learning-based technique that facilitates real-time, non-invasive, incoherent imaging of real-world objects through dense and dynamic scattering media. We conduct extensive experiments, demonstrating the capability of our technique to see through turbid water and natural fog. The experimental results indicate that the proposed technique surpasses existing approaches in numerous aspects and holds significant potential for imaging applications across a broad spectrum of disciplines.

2.
Sci Rep ; 14(1): 15857, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38982213

ABSTRACT

According to the atmospheric scattering model (ASM), the object signal is attenuated exponentially as the imaging distance increases. This limits ASM-based methods in situations where the scattering medium one wishes to look through is inhomogeneous. Here, we extend the ASM by taking into account the spatial variation of the medium density, and propose a two-step method for imaging through inhomogeneous scattering media. In the first step, the proposed method eliminates the direct-current component of the scattered pattern by subtracting the estimated global distribution (background). In the second step, it eliminates the randomized components of the scattered light by threshold truncation, followed by histogram equalization to further enhance the contrast. Outdoor experiments were carried out to demonstrate the proposed method.
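The two-step procedure lends itself to a compact implementation. Below is a minimal sketch, assuming the global background is estimated with a heavy Gaussian blur and the randomized components are suppressed by percentile clipping; the background estimator, the truncation rule, and all parameter values are illustrative assumptions, not the paper's exact recipe.

```python
import cv2
import numpy as np

def descatter_two_step(scattered, blur_sigma=51, trunc_percentile=1.0):
    """Illustrative two-step de-scattering: background subtraction,
    then threshold truncation and histogram equalization."""
    img = scattered.astype(np.float32)

    # Step 1: estimate the slowly varying global distribution (the DC
    # component) and subtract it. A heavy Gaussian blur stands in for
    # the background estimator here (an assumption).
    background = cv2.GaussianBlur(img, (0, 0), blur_sigma)
    residual = img - background

    # Step 2: truncate the randomized components by clipping the extreme
    # tails, then stretch and equalize the histogram to restore contrast.
    lo, hi = np.percentile(residual, [trunc_percentile, 100 - trunc_percentile])
    clipped = np.clip(residual, lo, hi)
    stretched = ((clipped - lo) / (hi - lo + 1e-8) * 255).astype(np.uint8)
    return cv2.equalizeHist(stretched)

# Usage (hypothetical file name):
# enhanced = descatter_two_step(cv2.imread("scattered.png", cv2.IMREAD_GRAYSCALE))
```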

3.
Opt Express ; 32(8): 13688-13700, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38859332

ABSTRACT

Imaging through scattering media is a long-standing challenge in optical imaging, holding substantial importance in fields such as biology, transportation, and remote sensing. Recent advances in learning-based methods allow accurate and rapid imaging through optically thick scattering media. However, the practical application of data-driven deep learning faces substantial hurdles due to its inherent limitations in generalization, especially in scenarios such as imaging through highly non-static scattering media. Here we apply the concept of transfer learning to adaptive imaging through dense dynamic scattering media. Our approach uses a known segment of the imaging target to fine-tune the pre-trained de-scattering model. Since the training data for the downstream task used in transfer learning can be acquired simultaneously with the current test data, our method can achieve clear imaging under varying scattering conditions. Experimental results show that the proposed approach (with transfer learning) provides an improvement of more than 5 dB when the optical thickness varies from 11.6 to 13.1, compared with the conventional deep learning approach (without transfer learning). Our method holds promise for applications in video surveillance and beacon guidance under dense dynamic scattering conditions.
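As a rough illustration of the fine-tuning step described above, the sketch below adapts a pre-trained de-scattering network using a small set of speckle/ground-truth pairs of the known target segment acquired under the current scattering condition. The PyTorch framework, the layer-freezing rule, the loss, and the hyperparameters are assumptions for illustration only.

```python
import torch
import torch.nn as nn

def fine_tune(pretrained_model: nn.Module,
              known_speckles: torch.Tensor,   # speckle images of the known target segment
              known_targets: torch.Tensor,    # corresponding ground-truth images
              epochs: int = 50, lr: float = 1e-4) -> nn.Module:
    """Adapt a pre-trained de-scattering network to the current
    scattering condition with a small, freshly acquired pair set."""
    # Freeze the earliest layers so that only the later layers adapt;
    # which layers to freeze is an illustrative choice, not the paper's recipe.
    for name, p in pretrained_model.named_parameters():
        p.requires_grad = not name.startswith("encoder.0")

    optimizer = torch.optim.Adam(
        (p for p in pretrained_model.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.MSELoss()

    pretrained_model.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(pretrained_model(known_speckles), known_targets)
        loss.backward()
        optimizer.step()
    return pretrained_model
```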

4.
Appl Opt ; 60(10): B32-B37, 2021 Apr 01.
Article in English | MEDLINE | ID: mdl-33798134

ABSTRACT

In this paper, we propose a single-shot three-dimensional imaging technique. This is achieved by simply placing a normal thin scattering layer in front of a two-dimensional image sensor, turning it into a light-field-like camera. The working principle of the proposed technique is based on the statistical independence and spatial ergodicity of the speckle produced by the scattering layer. To this end, the local point responses of the scattering layer are measured in advance and used for image reconstruction. We demonstrate the proposed method with proof-of-concept experiments and analyze the factors that affect its performance.
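Reconstruction from a pre-measured point response can be sketched as a deconvolution of the captured speckle with that response. The snippet below uses a Wiener-style filter as one plausible reconstruction operator; the paper's actual operator and calibration details are not specified in the abstract, so treat this purely as an illustration.

```python
import numpy as np

def reconstruct_from_speckle(speckle: np.ndarray, psf: np.ndarray,
                             eps: float = 1e-3) -> np.ndarray:
    """Estimate the object from a single speckle image using a point
    response of the scattering layer measured in advance (Wiener-style
    deconvolution; the paper's actual operator may differ)."""
    S = np.fft.fft2(speckle)
    H = np.fft.fft2(np.fft.ifftshift(psf), s=speckle.shape)
    # Wiener filter: attenuate frequencies where the point response
    # carries little energy, to avoid amplifying noise.
    obj_hat = S * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.abs(np.fft.ifft2(obj_hat))

# For three-dimensional imaging, a separate point response would be measured
# for each depth (and local region), and the reconstruction repeated per slice.
```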

5.
Light Sci Appl ; 9: 77, 2020.
Article in English | MEDLINE | ID: mdl-32411362

ABSTRACT

Most of the neural networks proposed so far for computational imaging (CI) in optics employ a supervised training strategy and thus need a large training set to optimize their weights and biases. Setting aside the requirement of environmental and system stability during many hours of data acquisition, in many practical applications it is often impossible to obtain a sufficient number of ground-truth images for training. Here, we propose to overcome this limitation by incorporating into a conventional deep neural network a complete physical model that represents the process of image formation. The most significant advantage of the resulting physics-enhanced deep neural network (PhysenNet) is that it can be used without training beforehand, thus eliminating the need for tens of thousands of labeled examples. We take single-beam phase imaging as an example for demonstration. We show experimentally that one needs only to feed PhysenNet a single diffraction pattern of a phase object, and it can automatically optimize the network and eventually produce the object phase through the interplay between the neural network and the physical model. This opens up a new paradigm of neural network design, in which the concept of incorporating a physical model into a neural network can be generalized to solve many other CI problems.
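Conceptually, PhysenNet replaces the labeled training set with a physical forward model in the optimization loop. The sketch below illustrates this interplay for single-beam phase imaging, assuming an angular-spectrum propagator as the physical model and an arbitrary untrained network `net`; the propagator choice, the network, and all parameters are placeholders rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn

def angular_spectrum(field, wavelength, dz, dx):
    """Free-space propagation of a complex field (angular-spectrum method)."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    arg = torch.clamp(1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, min=0.0)
    kz = 2 * torch.pi / wavelength * torch.sqrt(arg)
    return torch.fft.ifft2(torch.fft.fft2(field) * torch.exp(1j * kz * dz))

def reconstruct_phase(measured_intensity, net,
                      wavelength=532e-9, dz=0.02, dx=4e-6,
                      iters=5000, lr=1e-4):
    """Untrained reconstruction: the network proposes a phase, the physical
    model predicts the diffraction pattern, and the mismatch between the
    prediction and the single measurement drives the optimization."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    y = measured_intensity
    for _ in range(iters):
        opt.zero_grad()
        phase = net(y[None, None]).squeeze()   # network's current phase estimate
        field = torch.exp(1j * phase)          # unit-amplitude phase object
        pred = angular_spectrum(field, wavelength, dz, dx).abs() ** 2
        loss = nn.functional.mse_loss(pred, y)  # compare with the measurement
        loss.backward()
        opt.step()
    return phase.detach()
```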
