Results 1 - 2 of 2
1.
Neural Netw; 173: 106165, 2024 May.
Article in English | MEDLINE | ID: mdl-38340469

ABSTRACT

Single image dehazing is a challenging computer vision task that underpins high-level applications such as object detection, navigation, and positioning systems. Most existing dehazing methods follow a "black box" recovery paradigm that learns to map a hazy input to its haze-free counterpart through a network. Unfortunately, these algorithms neither make effective use of relevant image priors nor account for non-uniform haze distributions, which leads to under- or over-dehazing. In addition, they pay little attention to preserving image detail during dehazing and thus tend to produce blurry results. To address these problems, we propose a novel priors-assisted dehazing network (PADNet) that fully exploits relevant image priors from two new perspectives: attention supervision and detail preservation. First, we leverage the dark channel prior to constrain the generation of the attention map, which encodes the positions of hazy pixels, thereby better capturing non-uniform feature distributions in hazy images. Second, we observe that the residual channel prior of a hazy image contains rich structural information, so we incorporate it into the dehazing architecture to preserve more structural detail. Furthermore, since the attention map and the dehazed image are predicted simultaneously as the model converges, a self-paced semi-curriculum learning strategy is used to alleviate learning ambiguity. Extensive quantitative and qualitative experiments on several benchmark datasets demonstrate that PADNet performs favorably against existing state-of-the-art methods. The code will be available at https://github.com/leandepk/PADNet.
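The dark channel prior referenced above is a well-known cue: the minimum intensity over the RGB channels within a local patch, which stays near zero in haze-free regions and grows with haze density. Below is a minimal sketch of how such a prior could supervise an attention map, assuming a PyTorch setup; the function names `dark_channel` and `attention_supervision_loss` are illustrative and not taken from the PADNet code release.

```python
# Sketch: dark-channel-prior supervision for a haze-attention map (assumed names).
import torch
import torch.nn.functional as F

def dark_channel(img: torch.Tensor, patch: int = 15) -> torch.Tensor:
    """Dark channel prior: per-pixel minimum over RGB, then minimum over a local patch.

    img: (B, 3, H, W) tensor with values in [0, 1].
    returns: (B, 1, H, W) dark channel map.
    """
    min_rgb, _ = img.min(dim=1, keepdim=True)        # minimum over color channels
    pad = patch // 2
    # A min-pool is a max-pool of the negated input.
    return -F.max_pool2d(-min_rgb, kernel_size=patch, stride=1, padding=pad)

def attention_supervision_loss(attn_pred: torch.Tensor, hazy: torch.Tensor) -> torch.Tensor:
    """Encourage the predicted attention map to follow the dark-channel haze cue:
    larger dark-channel values roughly indicate denser haze."""
    target = dark_channel(hazy)
    return F.l1_loss(attn_pred, target)
```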


Subject(s)
Algorithms, Benchmarking, Learning
2.
Bioengineering (Basel); 10(12), 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38135976

ABSTRACT

Wound image classification is a crucial preprocessing step for many intelligent medical systems, e.g., online diagnosis and smart healthcare. Convolutional Neural Networks (CNNs) have recently been widely applied to wound image classification and have achieved promising performance. Unfortunately, classifying multiple wound types remains challenging because of the complexity and variety of wound images. Existing CNNs usually extract high- and low-frequency features at the same convolutional layer, which inevitably causes information loss and in turn degrades classification accuracy. To this end, we propose a novel High- and Low-frequency Guidance Network (HLG-Net) for multi-class wound classification. Specifically, HLG-Net contains two branches: a High-Frequency Network (HF-Net) and a Low-Frequency Network (LF-Net). We employ the pre-trained models ResNet and Res2Net as the feature backbone of the HF-Net, enabling the network to capture the high-frequency details and texture information of wound images. To extract richer low-frequency information, we use a Multi-Stream Dilation Convolution Residual Block (MSDCRB) as the backbone of the LF-Net. Moreover, a fusion module at the end of the two feature-extraction branches fully exploits their informative features and produces the final classification result. Extensive experiments demonstrate that HLG-Net achieves accuracies of up to 98.00%, 92.11%, and 82.61% on two-class, three-class, and four-class wound image classification, respectively, outperforming previous state-of-the-art methods.
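The abstract describes a two-branch architecture with a dilated-convolution block and a fusion module. The following is a minimal sketch of that idea, assuming a PyTorch implementation; the module names, channel sizes, and the simplified high-frequency backbone are assumptions for illustration and are not taken from the HLG-Net code.

```python
# Sketch of a dual-branch (high/low frequency) classifier with a fusion module.
import torch
import torch.nn as nn

class MultiStreamDilatedBlock(nn.Module):
    """Parallel dilated convolutions (rates 1, 2, 4) merged with a residual
    connection, approximating the described MSDCRB."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.streams = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        self.merge = nn.Conv2d(3 * channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([s(x) for s in self.streams], dim=1)
        return self.act(x + self.merge(y))   # residual merge of the streams

class DualFrequencyClassifier(nn.Module):
    """High-frequency branch (stand-in for a pretrained ResNet/Res2Net backbone)
    plus low-frequency branch (dilated convolutions), fused for classification."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.hf_branch = nn.Sequential(       # captures fine details / texture
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lf_branch = nn.Sequential(       # large receptive field via dilation
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            MultiStreamDilatedBlock(64),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fusion = nn.Sequential(          # concatenate branch features, classify
            nn.Linear(128, 128), nn.ReLU(inplace=True),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        feats = torch.cat([self.hf_branch(x), self.lf_branch(x)], dim=1)
        return self.fusion(feats)

# Example usage:
# logits = DualFrequencyClassifier(num_classes=4)(torch.randn(2, 3, 224, 224))
```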
