Results 1 - 16 of 16
1.
Sensors (Basel) ; 24(13)2024 Jul 04.
Article in English | MEDLINE | ID: mdl-39001132

ABSTRACT

Acquiring underwater depth maps is essential because they provide indispensable three-dimensional spatial information for visualizing the underwater environment. These depth maps serve various purposes, including underwater navigation, environmental monitoring, and resource exploration. While most current depth estimation methods work well in ideal underwater environments with homogeneous illumination, few consider the risks caused by irregular illumination, which is common in practical underwater environments. On the one hand, low-light underwater conditions reduce image contrast, making it harder for depth estimation models to accurately differentiate among objects. On the other hand, overexposure caused by reflection or artificial illumination can degrade the textures of underwater objects, which are crucial to the geometric constraints between frames. To address these issues, we propose an underwater self-supervised monocular depth estimation network that integrates image enhancement and auxiliary depth information. We use a Monte Carlo image enhancement module (MC-IEM) to tackle the inherent uncertainty in low-light underwater images through probabilistic estimation. Once pixel values are enhanced, objects become easier to distinguish, allowing distance information to be acquired more precisely and thus yielding more accurate depth estimates. Next, we extract additional geometric features through transfer learning, infusing prior knowledge from a supervised large-scale model into the self-supervised depth estimation network to refine the loss functions and the depth network, addressing the overexposure issue. Experiments on two public datasets show that our method outperforms existing underwater depth estimation approaches.
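
The abstract's Monte Carlo treatment of low-light uncertainty can be illustrated with a generic sketch (not the paper's MC-IEM): several stochastic enhancement passes are averaged, which also yields a per-pixel uncertainty map. The gamma-correction enhancer, its parameter range, and the sample count are assumptions for illustration only.

```python
# Minimal sketch (not the paper's MC-IEM): Monte Carlo enhancement that
# averages several stochastic gamma corrections of a low-light image and
# reports per-pixel uncertainty. Gamma range and sample count are assumptions.
import numpy as np

def mc_enhance(img, n_samples=32, gamma_range=(0.4, 0.8), seed=0):
    """img: float32 array in [0, 1], shape (H, W, 3)."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        gamma = rng.uniform(*gamma_range)      # random brightening strength
        samples.append(np.clip(img, 0.0, 1.0) ** gamma)
    stack = np.stack(samples)                  # (n_samples, H, W, 3)
    mean = stack.mean(axis=0)                  # Monte Carlo estimate of the enhanced image
    std = stack.std(axis=0)                    # per-pixel uncertainty
    return mean, std

if __name__ == "__main__":
    dark = np.random.rand(64, 64, 3).astype(np.float32) * 0.2   # synthetic low-light frame
    enhanced, uncertainty = mc_enhance(dark)
    print(enhanced.mean(), uncertainty.mean())
```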

2.
PeerJ Comput Sci ; 10: e1783, 2024.
Article in English | MEDLINE | ID: mdl-38855239

ABSTRACT

Underwater images suffer from color shift, low contrast, and blurred details as a result of the absorption and scattering of light in water, and such degraded images can significantly interfere with underwater vision tasks. Existing data-driven underwater image enhancement methods do not sufficiently account for the spatially inconsistent attenuation across image regions or the degradation of color-channel information. In addition, the datasets used for model training are small in scale and limited in scene diversity. We therefore address the problem from two aspects: network architecture design and the training dataset. We propose a fusion attention block that integrates the non-local modeling ability of the Swin Transformer block with the local modeling ability of the residual convolution layer; importantly, it can adaptively fuse non-local and local features carrying channel attention. Moreover, we synthesize underwater images with multiple water-body types and different degradations by applying the underwater imaging model and adjusting its degradation parameters. Perceptual loss functions are also introduced to improve visual quality. Experiments on synthetic and real-world underwater images show that our method is superior; thus, our network is suitable for practical applications.
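
The degraded-image synthesis step can be sketched with the commonly used revised underwater imaging model I_c = J_c * t_c + B_c * (1 - t_c), with t_c = exp(-beta_c * d); the per-channel attenuation coefficients and background lights below are illustrative stand-ins for different water-body types, not the paper's settings.

```python
# Hedged sketch of synthetic underwater degradation using the standard
# revised imaging model. Parameter values are illustrative only.
import numpy as np

WATER_TYPES = {
    # beta = (R, G, B) attenuation per meter, B = background (veiling) light
    "clear_ocean": {"beta": (0.45, 0.10, 0.08), "B": (0.05, 0.35, 0.45)},
    "coastal":     {"beta": (0.60, 0.25, 0.20), "B": (0.10, 0.40, 0.35)},
    "turbid":      {"beta": (0.80, 0.45, 0.40), "B": (0.20, 0.45, 0.30)},
}

def synthesize(J, depth, water_type="coastal"):
    """J: clean image in [0, 1], shape (H, W, 3); depth: (H, W) in meters."""
    p = WATER_TYPES[water_type]
    beta = np.asarray(p["beta"], dtype=np.float32)
    B = np.asarray(p["B"], dtype=np.float32)
    t = np.exp(-beta[None, None, :] * depth[..., None])   # per-channel transmission
    return np.clip(J * t + B * (1.0 - t), 0.0, 1.0)

if __name__ == "__main__":
    J = np.random.rand(64, 64, 3).astype(np.float32)
    d = np.full((64, 64), 3.0, dtype=np.float32)           # flat 3 m scene depth
    for name in WATER_TYPES:
        print(name, synthesize(J, d, name).mean(axis=(0, 1)))
```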

3.
Sensors (Basel) ; 24(10)2024 May 11.
Article in English | MEDLINE | ID: mdl-38793924

ABSTRACT

Underwater images suffer from low contrast and color distortion. To improve underwater image quality while reducing storage and computational costs, this paper proposes a lightweight model, Rep-UWnet, for underwater image enhancement. The model consists of a fully connected convolutional network and three densely connected RepConv blocks in series, with the input image connected to the output of each block through a skip connection. First, the original underwater image undergoes feature extraction by the SimSPPF module, and the extracted features are summed with the original image to produce the network input. Then, the first convolutional layer, with a kernel size of 3 × 3, generates 64 feature maps, and the multi-scale hybrid convolutional attention module enhances the useful features by reweighting the features of different channels. Second, three RepConv blocks are connected to reduce the number of parameters used in feature extraction and to increase test speed. Finally, a convolutional layer with three kernels generates the enhanced underwater image. Our method reduces the number of parameters from 2.7 M to 0.45 M (around an 83% reduction) yet outperforms state-of-the-art algorithms, as shown by extensive experiments. Furthermore, we demonstrate that Rep-UWnet effectively improves high-level vision tasks such as edge detection and single-image depth estimation. The method not only surpasses the compared methods in objective quality, but also significantly improves the contrast, colorimetry, and clarity of underwater images in subjective quality.
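
The parameter and speed savings of RepConv-style blocks come from structural re-parameterization, sketched below in a generic PyTorch form (parallel 3×3 / 1×1 / identity branches fused into a single 3×3 convolution at inference). This is not the exact Rep-UWnet block; batch-norm branches are omitted for brevity.

```python
# Generic re-parameterization sketch: multi-branch training form fused into
# one 3x3 convolution for inference, with identical outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleRepConv(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv1 = nn.Conv2d(channels, channels, 1)

    def forward(self, x):                      # training-time multi-branch form
        return F.relu(self.conv3(x) + self.conv1(x) + x)

    def fuse(self):                            # inference-time single 3x3 conv
        fused = nn.Conv2d(self.conv3.in_channels, self.conv3.out_channels, 3, padding=1)
        w = self.conv3.weight.data.clone()
        w += F.pad(self.conv1.weight.data, [1, 1, 1, 1])   # place 1x1 at the 3x3 center
        ident = torch.zeros_like(w)
        for c in range(w.shape[0]):
            ident[c, c, 1, 1] = 1.0                        # identity branch as a kernel
        fused.weight.data = w + ident
        fused.bias.data = self.conv3.bias.data + self.conv1.bias.data
        return fused

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    block = SimpleRepConv(64).eval()
    y_train = F.relu(block.conv3(x) + block.conv1(x) + x)
    y_fused = F.relu(block.fuse()(x))
    print(torch.allclose(y_train, y_fused, atol=1e-5))     # True: same output, fewer ops
```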

4.
Neural Netw ; 169: 685-697, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37972512

ABSTRACT

With the growing exploration of marine resources, underwater image enhancement has gained significant attention. Recent advances in convolutional neural networks (CNN) have greatly impacted underwater image enhancement techniques. However, conventional CNN-based methods typically employ a single network structure, which may compromise robustness in challenging conditions. Additionally, commonly used UNet networks generally force fusion from low to high resolution for each layer, leading to inaccurate contextual information encoding. To address these issues, we propose a novel network called Cascaded Network with Multi-level Sub-networks (CNMS), which encompasses the following key components: (a) a cascade mechanism based on local modules and global networks for extracting feature representations with richer semantics and enhanced spatial precision, (b) information exchange between different resolution streams, and (c) a triple attention module for extracting attention-based features. CNMS selectively cascades multiple sub-networks through triple attention modules to extract distinct features from underwater images, bolstering the network's robustness and improving generalization capabilities. Within the sub-network, we introduce a Multi-level Sub-network (MSN) that spans multiple resolution streams, combining contextual information from various scales while preserving the original underwater images' high-resolution spatial details. Comprehensive experiments on multiple underwater datasets demonstrate that CNMS outperforms state-of-the-art methods in image enhancement tasks.


Subject(s)
Generalization, Psychological; Image Enhancement; Neural Networks, Computer; Semantics; Image Processing, Computer-Assisted
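
Component (b) above, the information exchange between resolution streams, can be sketched generically in PyTorch: each stream receives the other streams resized to its own resolution before a convolution. Stream widths and bilinear resizing are illustrative assumptions, not the CNMS design.

```python
# Hedged sketch of cross-resolution information exchange between parallel streams.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExchangeBlock(nn.Module):
    def __init__(self, channels=(32, 32, 32)):
        super().__init__()
        self.convs = nn.ModuleList([nn.Conv2d(c, c, 3, padding=1) for c in channels])

    def forward(self, streams):                     # list of (N, C, H_i, W_i) tensors
        fused = []
        for i, x in enumerate(streams):
            agg = x
            for j, y in enumerate(streams):
                if i != j:                          # bring stream j to stream i's scale
                    agg = agg + F.interpolate(y, size=x.shape[-2:], mode="bilinear",
                                              align_corners=False)
            fused.append(F.relu(self.convs[i](agg)))
        return fused

if __name__ == "__main__":
    s = [torch.randn(1, 32, 64, 64), torch.randn(1, 32, 32, 32), torch.randn(1, 32, 16, 16)]
    print([o.shape for o in ExchangeBlock()(s)])
```
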
5.
Sensors (Basel) ; 23(21)2023 Nov 05.
Article in English | MEDLINE | ID: mdl-37960682

ABSTRACT

Images captured during marine engineering operations suffer from color distortion and low contrast, and underwater image enhancement helps to alleviate these problems. Many deep learning models can perform inference on multi-source data, in which images from multiple sources offer different perspectives. To this end, we propose a multichannel deep convolutional neural network (MDCNN) linked to a VGG that targets multi-source (multi-domain) underwater image enhancement. The designed MDCNN feeds data from different domains into separate channels and implements its parameters by linking VGG networks, which improves the domain adaptation of the model. In addition, to optimize performance, we propose multi-domain image perception loss functions, a multilabel soft edge loss for specific image enhancement tasks, a pixel-level loss, and an external monitoring loss for edge-sharpness preprocessing. These loss functions are designed to effectively enhance the structural and textural similarity of underwater images. A series of qualitative and quantitative experiments demonstrates that our model is superior to the state-of-the-art Shallow UWnet in terms of UIQM, with an average improvement of 0.11 across the evaluated datasets.
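
The image perception loss mentioned above builds on standard VGG feature comparison; a minimal sketch of such a perceptual loss follows, assuming torchvision's pretrained VGG-16 and features up to relu2_2 (both choices are assumptions, and ImageNet normalization is omitted for brevity).

```python
# Minimal VGG-based perceptual loss sketch: compare fixed VGG-16 features of
# the enhanced output and the reference image.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class PerceptualLoss(nn.Module):
    def __init__(self, layer_index=9):                  # features up to relu2_2
        super().__init__()
        vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:layer_index]
        for p in vgg.parameters():
            p.requires_grad_(False)                      # fixed feature extractor
        self.vgg = vgg.eval()
        self.mse = nn.MSELoss()

    def forward(self, enhanced, reference):              # both (N, 3, H, W) in [0, 1]
        return self.mse(self.vgg(enhanced), self.vgg(reference))

if __name__ == "__main__":
    loss_fn = PerceptualLoss()
    x, y = torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128)
    print(float(loss_fn(x, y)))
```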

6.
Sensors (Basel) ; 23(19)2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37836859

ABSTRACT

Optical cameras equipped with an underwater scooter can perform efficient shallow marine mapping. In this paper, an underwater image stitching method is proposed for detailed large scene awareness based on a scooter-borne camera, including preprocessing, image registration and post-processing. An underwater image enhancement algorithm based on the inherent underwater optical attenuation characteristics and dark channel prior algorithm is presented to improve underwater feature matching. Furthermore, an optimal seam algorithm is utilized to generate a shape-preserving seam-line in the superpixel-restricted area. The experimental results show the effectiveness of the proposed method for different underwater environments and the ability to generate natural underwater mosaics with few artifacts or visible seams.
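
The enhancement step builds on the dark channel prior; a brief sketch of the standard computation follows (per-pixel channel minimum, then a local minimum filter, with background light estimated from the brightest dark-channel pixels). The patch size and estimation rule are conventional defaults, not necessarily the paper's.

```python
# Standard dark channel prior sketch with a simple background-light estimate.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """img: float array in [0, 1], shape (H, W, 3)."""
    return minimum_filter(img.min(axis=2), size=patch)

def background_light(img, dark, top_fraction=0.001):
    flat = dark.ravel()
    n = max(1, int(flat.size * top_fraction))
    idx = np.argpartition(flat, -n)[-n:]                 # brightest dark-channel pixels
    return img.reshape(-1, 3)[idx].mean(axis=0)

if __name__ == "__main__":
    img = np.random.rand(120, 160, 3)
    dc = dark_channel(img)
    print(dc.shape, background_light(img, dc))
```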

7.
Sensors (Basel) ; 23(19)2023 Oct 07.
Article in English | MEDLINE | ID: mdl-37837125

ABSTRACT

Underwater autonomous driving devices, such as autonomous underwater vehicles (AUVs), rely on visual sensors, but visual images tend to exhibit color aberrations and high turbidity due to the scattering and absorption of light underwater. To address these issues, we propose the Dense Residual Generative Adversarial Network (DRGAN) for underwater image enhancement. First, we adopt a multi-scale feature extraction module to capture information at different scales and increase the receptive field. Second, we propose a dense residual block to realize the interaction of image features and ensure stable connections among the feature information. Multiple dense residual modules are connected end to end to form a cyclic dense residual network that produces a clear image. Finally, the stability of the network is improved by adjusting the training with multiple loss functions. Experiments were conducted on the RUIE and Underwater ImageNet datasets. The results show that the proposed DRGAN removes high turbidity from underwater images and achieves better color equalization than other methods.
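
A hedged PyTorch sketch of a dense residual block of the general kind described here: several convolutions fed by the concatenation of all earlier features, a 1×1 local fusion, and a residual connection. Growth rate and depth are illustrative values, not DRGAN's.

```python
# Generic dense residual block: dense concatenation, local fusion, residual add.
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        c = channels
        for _ in range(layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(c, growth, 3, padding=1), nn.LeakyReLU(0.2, inplace=True)))
            c += growth                                   # dense concatenation grows width
        self.fuse = nn.Conv2d(c, channels, 1)             # local feature fusion

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))     # local residual learning

if __name__ == "__main__":
    y = DenseResidualBlock()(torch.randn(1, 64, 32, 32))
    print(y.shape)   # torch.Size([1, 64, 32, 32])
```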

8.
Biomimetics (Basel) ; 8(3)2023 Jun 27.
Article in English | MEDLINE | ID: mdl-37504163

ABSTRACT

Continuous exploration of the ocean has made underwater image processing an important research field, and plenty of CNN (convolutional neural network)-based underwater image enhancement methods have emerged over time. However, the feature-learning ability of existing CNN-based underwater image enhancement methods is limited: networks are made more complicated or embed other algorithms to obtain better results, which makes it difficult to meet the requirements of good enhancement quality and real-time performance simultaneously. Building on the composite backbone network (CBNet) introduced into underwater image enhancement, we propose OECBNet (optimal underwater image-enhancing composite backbone network) to obtain a better enhancement effect and a shorter running time. Herein, a comprehensive study of different composite architectures for an underwater image enhancement network was carried out by comparing the number of backbones, connection strategies, pruning strategies for composite backbones, and auxiliary losses; a CBNet with optimal performance was then obtained. Finally, the obtained network was compared with state-of-the-art underwater enhancement networks. The experiments showed that our optimized composite backbone network produced better-enhanced images than existing CNN-based methods.
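
The composite-backbone idea can be sketched in simplified form: two identical backbones composed stage by stage, with each stage output of the assisting backbone added to the corresponding stage of the lead backbone. The stage definition and the same-level composition rule are assumptions, not the optimal configuration found in the paper.

```python
# Simplified two-backbone composite with same-level stage connections.
import torch
import torch.nn as nn

def make_stage(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))

class CompositeBackbone(nn.Module):
    def __init__(self, widths=(3, 32, 64, 64)):
        super().__init__()
        stages = list(zip(widths[:-1], widths[1:]))
        self.assist = nn.ModuleList([make_stage(a, b) for a, b in stages])
        self.lead = nn.ModuleList([make_stage(a, b) for a, b in stages])

    def forward(self, x):
        a, l = x, x
        for assist_stage, lead_stage in zip(self.assist, self.lead):
            a = assist_stage(a)
            l = lead_stage(l) + a          # composite connection from assisting backbone
        return l

if __name__ == "__main__":
    print(CompositeBackbone()(torch.randn(1, 3, 64, 64)).shape)
```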

9.
Sensors (Basel) ; 23(13)2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37447624

ABSTRACT

This paper presents an efficient underwater image enhancement method, named ECO-GAN, to address the challenges of color distortion, low contrast, and motion blur in underwater robot photography. The proposed method is built upon a preprocessing framework using a generative adversarial network. ECO-GAN incorporates a convolutional neural network that specifically targets three underwater issues: motion blur, low brightness, and color deviation. To optimize computation and inference speed, an encoder is employed to extract features, whereas different enhancement tasks are handled by dedicated decoders. Moreover, ECO-GAN employs cross-stage fusion modules between the decoders to strengthen the connection and enhance the quality of output images. The model is trained using supervised learning with paired datasets, enabling blind image enhancement without additional physical knowledge or prior information. Experimental results demonstrate that ECO-GAN effectively achieves denoising, deblurring, and color deviation removal simultaneously. Compared with methods relying on individual modules or simple combinations of multiple modules, our proposed method achieves superior underwater image enhancement and offers the flexibility for expansion into multiple underwater image enhancement functions.


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Image Enhancement; Tomography, X-Ray Computed; Motion (Physics)
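
The shared-encoder / dedicated-decoder layout described in this abstract can be sketched roughly in PyTorch, with one decoder per degradation type. Channel widths and decoder heads are placeholders, and the cross-stage fusion modules between decoders are omitted.

```python
# Rough sketch: one shared encoder, one lightweight decoder per enhancement task.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))

class MultiDecoderEnhancer(nn.Module):
    def __init__(self, tasks=("deblur", "brighten", "color")):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.decoders = nn.ModuleDict(
            {t: nn.Sequential(conv_block(64, 32), nn.Conv2d(32, 3, 3, padding=1))
             for t in tasks})

    def forward(self, x):
        feat = self.encoder(x)                         # shared feature extraction
        return {t: torch.sigmoid(d(feat)) for t, d in self.decoders.items()}

if __name__ == "__main__":
    outs = MultiDecoderEnhancer()(torch.rand(1, 3, 64, 64))
    print({k: v.shape for k, v in outs.items()})
```
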
10.
Sensors (Basel) ; 23(7)2023 Mar 28.
Article in English | MEDLINE | ID: mdl-37050592

ABSTRACT

Domain experts prefer interactive and targeted control-point tone mapping operations (TMOs) to enhance underwater image quality and feature visibility; though this comes at the expense of time and training. In this paper, we provide end-users with a simpler and faster interactive tone-mapping approach. This is built upon Weibull Tone Mapping (WTM) theory; introduced in previous work as a preferred tool to describe and improve domain expert TMOs. We allow end-users to easily shape brightness distributions according to the Weibull distribution, using two parameter sliders which modify the distribution peak and spread. Our experiments showed that 10 domain experts found the two-slider Weibull manipulation sufficed to make a desired adjustment in >80% of images in a large dataset. For the remaining ∼20%, observers opted for a control-point TMO which can, broadly, encompass many global tone mapping algorithms. Importantly, 91% of these control-point TMOs can actually be visually well-approximated by our Weibull slider manipulation, despite users not identifying slider parameters themselves. Our work stresses the benefit of the Weibull distribution and significance of image purpose in underwater image enhancement.
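
The core remapping can be sketched as histogram matching to a two-parameter Weibull distribution, with the shape and scale parameters playing the role of the two sliders described above. The exact WTM formulation and normalization are simplified here.

```python
# Simplified sketch: remap brightness so its distribution follows a Weibull,
# via the empirical CDF composed with the Weibull inverse CDF.
import numpy as np

def weibull_tone_map(lum, shape=1.5, scale=0.5):
    """lum: brightness channel in [0, 1]. Returns remapped brightness in [0, 1]."""
    flat = lum.ravel()
    ranks = np.argsort(np.argsort(flat))                     # empirical CDF ranks
    u = (ranks + 0.5) / flat.size
    mapped = scale * (-np.log1p(-u)) ** (1.0 / shape)        # Weibull inverse CDF
    mapped = mapped / mapped.max()                           # rescale into [0, 1]
    return mapped.reshape(lum.shape)

if __name__ == "__main__":
    lum = np.random.beta(2, 5, size=(64, 64))                # synthetic dark image
    out = weibull_tone_map(lum, shape=2.0, scale=1.0)
    print(lum.mean(), out.mean())
```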

11.
Heliyon ; 9(4): e14442, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37025801

ABSTRACT

Light is scattered and partially absorbed while traveling through water; hence, images captured underwater often exhibit issues such as low contrast, detail blurring, color attenuation, and low illumination. To improve the visual performance of underwater imaging, we propose herein a two-step method of zero-shot dehazing and level adjustment. In the newly developed approach, the original image is fed into a "zero-shot" dehazing network and further enhanced by an improved level-adjustment method combined with auto-contrast. We then compare the performance of the proposed method with six classical state-of-the-art methods. The qualitative results confirm that the proposed method effectively removes haze, corrects color deviations, and maintains the naturalness of images. A quantitative evaluation further reveals that the proposed method outperforms the comparison methods in terms of peak signal-to-noise ratio and structural similarity. The enhancement results are also measured with the underwater color image quality evaluation index (UCIQE), for which the proposed approach achieves the highest mean values of 0.58 and 0.53 on the two data sets. The experimental results collectively validate the efficiency of the proposed method in enhancing blurred underwater images.
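
The second step can be sketched as a percentile-based level adjustment with auto-contrast applied per channel after dehazing; the 1st/99th-percentile clipping points and the optional gamma are assumptions, not the paper's exact level-adjustment rule.

```python
# Sketch of per-channel level adjustment with auto-contrast.
import numpy as np

def level_adjust(img, low_pct=1.0, high_pct=99.0, gamma=1.0):
    """img: float array in [0, 1], shape (H, W, 3)."""
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        lo, hi = np.percentile(img[..., c], [low_pct, high_pct])
        stretched = np.clip((img[..., c] - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
        out[..., c] = stretched ** gamma                 # optional mid-tone adjustment
    return out

if __name__ == "__main__":
    hazy = 0.3 + 0.4 * np.random.rand(64, 64, 3)         # low-contrast input
    print(hazy.std(), level_adjust(hazy).std())          # contrast increases
```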

12.
Sensors (Basel) ; 23(4)2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36850633

ABSTRACT

Recently, rapidly developing artificial intelligence and computer vision techniques have provided technical solutions to promote production efficiency and reduce labor costs in aquaculture and marine resource surveys, and traditional manual surveys are being replaced by advanced intelligent technologies. However, underwater object detection and recognition suffer from image distortion and degradation. In this work, automatic monitoring of sea cucumbers in natural conditions is implemented based on a state-of-the-art object detector, YOLOv7. To mitigate image distortion and degradation, image enhancement methods are adopted to improve the accuracy and stability of sea cucumber detection across multiple underwater scenes. Five well-known image enhancement methods are employed to improve the detection performance of YOLOv7 and YOLOv5 on sea cucumbers, and their effectiveness is evaluated experimentally. Non-local image dehazing (NLD) was the most effective for sea cucumber detection across multiple underwater scenes for both YOLOv7 and YOLOv5. The best average precision (AP) for sea cucumber detection was 0.940, achieved by YOLOv7 with NLD. With NLD enhancement, the APs of YOLOv7 and YOLOv5 increased by 1.1% and 1.6%, respectively, and the best AP was 2.8% higher than that of YOLOv5 without image enhancement. Moreover, the real-time capability of YOLOv7 was examined, with an average prediction time of 4.3 ms. The experimental results demonstrate that the proposed method can be applied to marine organism surveys by underwater mobile platforms or to the automatic analysis of underwater videos.
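
The detection scores above rest on matching predictions to ground truth at an IoU threshold; a simplified, self-contained sketch of that matching (greedy one-to-one at IoU 0.5, boxes as (x1, y1, x2, y2)) follows. It illustrates the metric's core step only; full AP additionally integrates precision over confidence thresholds, and this is not the paper's evaluation code.

```python
# Simplified detection scoring: IoU plus greedy matching to count TP/FP.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision_recall(preds, gts, thr=0.5):
    """preds: list of (score, box); gts: list of boxes."""
    matched, tp, fp = set(), 0, 0
    for _, box in sorted(preds, key=lambda p: -p[0]):      # highest confidence first
        best_j, best_iou = -1, thr
        for j, g in enumerate(gts):
            if j not in matched and iou(box, g) >= best_iou:
                best_j, best_iou = j, iou(box, g)
        if best_j >= 0:
            matched.add(best_j); tp += 1
        else:
            fp += 1
    return tp / max(tp + fp, 1), tp / max(len(gts), 1)

if __name__ == "__main__":
    gts = [(10, 10, 50, 50), (60, 60, 100, 100)]
    preds = [(0.9, (12, 11, 52, 49)), (0.7, (0, 0, 20, 20))]
    print(precision_recall(preds, gts))                    # (0.5, 0.5)
```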

13.
Sensors (Basel) ; 23(3)2023 Feb 03.
Article in English | MEDLINE | ID: mdl-36772779

ABSTRACT

Clear underwater images can help researchers detect cold seeps, gas hydrates, and biological resources. However, the quality of these images suffers from nonuniform lighting, a limited range of visibility, and unwanted signals. CycleGAN has been broadly studied for underwater image enhancement, but it is difficult to apply the model to the further detection of the Haima cold seeps in the South China Sea because the model can be difficult to train if the dataset used is not appropriate. In this article, we devise a new method of building a dataset: images are enhanced using MSRCR, and the best ones are selected based on the widely used UIQM scheme. The experimental results show that a good CycleGAN can be trained with a dataset built using the proposed method. The model has good potential for applications in detecting the Haima cold seeps and can be applied to other cold seeps, such as those in the North Sea. We conclude that the proposed dataset-building method can be used to train CycleGAN for enhancing images from cold seeps.
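
The dataset-building step relies on MSRCR enhancement; a compact, hedged implementation of multi-scale retinex with color restoration is sketched below. The scale sigmas, alpha/beta values, and the final percentile stretch are common defaults from the MSRCR literature, not necessarily the paper's settings.

```python
# Compact MSRCR sketch: multi-scale retinex plus a color restoration factor.
import numpy as np
from scipy.ndimage import gaussian_filter

def msrcr(img, sigmas=(15, 80, 250), alpha=125.0, beta=46.0):
    """img: float array in (0, 1], shape (H, W, 3)."""
    img = np.clip(img, 1e-4, 1.0)
    msr = np.zeros_like(img)
    for sigma in sigmas:                                     # multi-scale retinex
        blur = np.stack([gaussian_filter(img[..., c], sigma) for c in range(3)], axis=2)
        msr += (np.log(img) - np.log(np.clip(blur, 1e-4, None))) / len(sigmas)
    intensity = img.sum(axis=2, keepdims=True)
    crf = beta * (np.log(alpha * img) - np.log(intensity))   # color restoration factor
    out = msr * crf
    lo, hi = np.percentile(out, [1, 99])                     # simple dynamic-range stretch
    return np.clip((out - lo) / (hi - lo + 1e-9), 0.0, 1.0)

if __name__ == "__main__":
    raw = np.random.rand(64, 64, 3) * 0.5 + 0.1              # dull synthetic image
    enhanced = msrcr(raw)
    print(enhanced.min(), enhanced.max())
```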

14.
Sensors (Basel) ; 21(21)2021 Oct 29.
Article in English | MEDLINE | ID: mdl-34770509

ABSTRACT

Underwater vision-based detection plays an increasingly important role in underwater security, ocean exploration, and other fields. Due to the absorption and scattering effects of water on light, as well as the movement of the carrier, underwater images generally suffer from problems such as noise pollution, color cast, and motion blur, which seriously affect the performance of underwater vision-based detection. To address these problems, this study proposes an end-to-end marine organism detection framework that jointly optimizes image enhancement and object detection. The framework uses a two-stage detection network with a dynamic intersection-over-union (IoU) threshold as the backbone and adds an underwater image enhancement module (UIEM) composed of denoising, color correction, and deblurring sub-modules, greatly improving the framework's ability to handle severely degraded underwater images. Meanwhile, a self-built dataset is introduced to pre-train the UIEM, so that the entire framework can be trained end-to-end. The experimental results show that, compared with existing end-to-end models applied to marine organism detection, the proposed framework improves detection precision by at least 6% without significantly reducing detection speed, enabling high-precision real-time detection of marine organisms.


Subject(s)
Algorithms; Aquatic Organisms; Image Enhancement; Movement; Vision, Ocular
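
The UIEM's chained sub-module structure described in this abstract can be sketched loosely in PyTorch as denoising, color-correction, and deblurring stages that each predict a residual. The sub-module design is a placeholder, not the paper's UIEM; pre-training on a self-built dataset would happen before the module is placed in front of the detector.

```python
# Loose sketch: three residual sub-modules chained into one enhancement module.
import torch
import torch.nn as nn

def residual_submodule():
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(16, 3, 3, padding=1))

class UIEMSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.denoise = residual_submodule()
        self.color = residual_submodule()
        self.deblur = residual_submodule()

    def forward(self, x):
        x = x + self.denoise(x)      # each stage refines the previous output
        x = x + self.color(x)
        x = x + self.deblur(x)
        return torch.clamp(x, 0.0, 1.0)

if __name__ == "__main__":
    # In practice the module would load pre-trained weights before joint training.
    print(UIEMSketch()(torch.rand(1, 3, 64, 64)).shape)
```
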
15.
Sensors (Basel) ; 21(9)2021 May 10.
Article in English | MEDLINE | ID: mdl-34068741

ABSTRACT

Underwater images are important carriers and forms of underwater information, playing a vital role in exploring and utilizing marine resources. However, underwater images exhibit low contrast and blurred details because of the absorption and scattering of light. In recent years, deep learning has been widely used in underwater image enhancement and restoration because of its powerful feature-learning capabilities, but shortcomings remain in detail enhancement. To address this problem, this paper proposes a deep supervised residual dense network (DS_RD_Net), which is used to better learn the mapping between clear in-air images and synthetic underwater degraded images. DS_RD_Net first uses residual dense blocks to extract features and enhance feature utilization; it then adds residual path blocks between the encoder and decoder to reduce the semantic differences between low-level and high-level features; finally, it employs a deep supervision mechanism to guide network training and improve gradient propagation. Experimental results (PSNR of 36.2, SSIM of 96.5%, and UCIQE of 0.53) demonstrate that, compared with other image enhancement methods, the proposed method fully retains local image details while performing color restoration and defogging, achieving good qualitative and quantitative results.
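
The deep supervision mechanism can be illustrated with a short sketch: auxiliary predictions from intermediate decoder levels are each compared with a resized ground truth, and the losses are summed with decaying weights. The weights and the use of an L1 loss are assumptions.

```python
# Sketch of a deep supervision loss over multi-scale auxiliary outputs.
import torch
import torch.nn.functional as F

def deep_supervision_loss(outputs, target, weights=(1.0, 0.5, 0.25)):
    """outputs: list of (N, 3, H_i, W_i) predictions, full resolution first."""
    total = 0.0
    for out, w in zip(outputs, weights):
        gt = F.interpolate(target, size=out.shape[-2:], mode="bilinear",
                           align_corners=False)
        total = total + w * F.l1_loss(out, gt)          # supervise every level
    return total

if __name__ == "__main__":
    target = torch.rand(2, 3, 128, 128)
    outputs = [torch.rand(2, 3, s, s) for s in (128, 64, 32)]
    print(float(deep_supervision_loss(outputs, target)))
```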

16.
Sensors (Basel) ; 19(24)2019 Dec 16.
Article in English | MEDLINE | ID: mdl-31888303

ABSTRACT

In shallow-water environments, underwater images often present problems such as color deviation and low contrast due to light absorption and scattering in the water body; deep-sea images can additionally suffer from uneven brightness and regional color shift caused by the use of chromatic and inhomogeneous artificial lighting devices. Since the latter situation is rarely studied in the field of underwater image enhancement, we propose a new model that includes it in the analysis of underwater image degradation. Based on a theoretical study of the new model, a comprehensive method for enhancing underwater images under different illumination conditions is proposed in this paper. The proposed method is composed of two modules: color-tone correction and fusion-based descattering. In the first module, the regional or full-extent color deviation caused by different types of incident light is corrected via frequency-based color-tone estimation. In the second module, the residual low-contrast and pixel-wise color-shift problems are handled by combining descattering results obtained under assumptions about different states of the image. The proposed method is evaluated on laboratory and open-water images of different depths and illumination states. Qualitative and quantitative evaluation results demonstrate that the proposed method outperforms many other methods in enhancing different types of underwater images, and it is especially effective in improving color accuracy and information content in badly illuminated regions of underwater images with non-uniform illumination, such as deep-sea images.
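
The first module's regional color correction can be sketched in simplified form: a heavily blurred (low-frequency) copy of the image serves as a local illumination-color estimate, and each pixel is rescaled toward a neutral reference. The blur scale and gain normalization are illustrative choices, not the paper's frequency-based estimator.

```python
# Simplified regional color-tone correction from a low-frequency illumination estimate.
import numpy as np
from scipy.ndimage import gaussian_filter

def color_tone_correct(img, sigma=50.0):
    """img: float array in (0, 1], shape (H, W, 3)."""
    illum = np.stack([gaussian_filter(img[..., c], sigma) for c in range(3)], axis=2)
    illum = np.clip(illum, 1e-4, None)
    gray = illum.mean(axis=2, keepdims=True)             # local neutral reference
    corrected = img * (gray / illum)                     # cancel regional color cast
    return np.clip(corrected, 0.0, 1.0)

if __name__ == "__main__":
    img = np.random.rand(64, 64, 3)
    img[..., 2] *= 0.4                                   # simulate a channel-deficient cast
    out = color_tone_correct(img)
    print(img.mean(axis=(0, 1)), out.mean(axis=(0, 1)))
```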
