Results 1 - 2 of 2
1.
Comput Biol Med; 177: 108670, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38838558

ABSTRACT

No-reference image quality assessment (IQA) is a critical step in medical image analysis, with the objective of predicting perceptual image quality without the need for a pristine reference image. Applying no-reference IQA to CT scans provides an automated, objective way to assess scan quality, optimize radiation dose, and improve overall healthcare efficiency. In this paper, we introduce DistilIQA, a novel distilled Vision Transformer network designed for no-reference CT image quality assessment. DistilIQA integrates convolutional operations and multi-head self-attention mechanisms by incorporating a powerful convolutional stem at the beginning of the traditional ViT network. Additionally, we present a two-step distillation methodology aimed at improving network performance and efficiency. In the first step, a "teacher ensemble network" is constructed by training five Vision Transformer networks using a five-fold division schema. In the second step, a "student network", comprising a single Vision Transformer, is trained using the original labeled dataset and the predictions generated by the teacher network as new labels. DistilIQA is evaluated on the task of quality score prediction from low-dose chest CT scans obtained from the LDCT and Projection data collection of the Cancer Imaging Archive, along with low-dose abdominal CT images from the LDCTIQAC2023 Grand Challenge. Our results demonstrate DistilIQA's strong performance on both benchmarks, surpassing various CNN and Transformer architectures. Moreover, our comprehensive experimental analysis demonstrates the effectiveness of incorporating convolutional operations within the ViT architecture and highlights the advantages of our distillation methodology.
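The two-step distillation described in the abstract can be summarized in a short sketch. The code below is a minimal PyTorch illustration, not the authors' implementation: the ConvStemViT regressor, the equal blending of ground-truth and averaged teacher scores for the student target, and all hyperparameters are assumptions made for illustration.

```python
# Minimal sketch of a two-step distillation scheme in the spirit of DistilIQA.
# ConvStemViT, the loss/target weighting, and training details are assumptions.
import torch
import torch.nn as nn
from sklearn.model_selection import KFold

class ConvStemViT(nn.Module):
    """Hypothetical ViT quality regressor with a convolutional stem (assumption)."""
    def __init__(self, dim=256, depth=4, heads=8):
        super().__init__()
        # Convolutional stem: downsample the CT slice into patch tokens.
        self.stem = nn.Sequential(
            nn.Conv2d(1, dim // 2, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(dim // 2, dim, 3, stride=2, padding=1),
        )
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, 1)  # scalar quality score

    def forward(self, x):
        tokens = self.stem(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.head(self.encoder(tokens).mean(dim=1)).squeeze(-1)

def train_one(model, images, targets, epochs=10, lr=1e-4):
    """Full-batch regression training loop (placeholder)."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(images), targets).backward()
        opt.step()
    return model

# Step 1: teacher ensemble from a five-fold split of the labeled data.
images = torch.randn(50, 1, 64, 64)  # placeholder CT slices
scores = torch.rand(50)              # placeholder quality labels
teachers = []
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(images):
    teachers.append(train_one(ConvStemViT(), images[train_idx], scores[train_idx]))

# Step 2: student trained on the ground-truth labels together with the averaged
# teacher predictions (the equal 50/50 blend is an assumption).
with torch.no_grad():
    soft_labels = torch.stack([t(images) for t in teachers]).mean(dim=0)
student = train_one(ConvStemViT(), images, 0.5 * scores + 0.5 * soft_labels)
```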


Subject(s)
Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Neural Networks, Computer
2.
PLoS One; 16(6): e0253027, 2021.
Article in English | MEDLINE | ID: mdl-34111201

ABSTRACT

Fast and accurate taxonomic identification of invasive, translocated ladybird beetle species is essential to prevent significant impacts on biological communities, ecosystem functions, and agricultural economies. Therefore, in this work we propose a two-step automatic detector for ladybird beetles in random environment images as the first stage towards an automated classification system. First, an image processing module composed of a saliency map representation, simple linear iterative clustering (SLIC) superpixel segmentation, and active contour methods generates bounding boxes with possible ladybird beetle locations within an image. Subsequently, a deep convolutional neural network-based classifier selects only the bounding boxes that contain ladybird beetles as the final output. This method was validated on a data set of 2,300 ladybird beetle images from Ecuador and Colombia obtained from the iNaturalist project. The proposed approach achieved an accuracy score of 92% and an area under the receiver operating characteristic curve of 0.977 for the bounding box generation and classification tasks. These results establish the proposed detector as a valuable tool for helping specialists detect ladybird beetles.
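As a rough illustration of this two-step pipeline (region proposals followed by a CNN accept/reject classifier), the sketch below uses scikit-image's SLIC for superpixels and a placeholder PyTorch classifier. The saliency proxy (local contrast against the image mean), the SLIC parameters, and the BoxClassifier network are assumptions made for illustration; the saliency-map and active-contour refinement steps from the abstract are simplified away.

```python
# Minimal sketch of a two-step detector: superpixel-based box proposals,
# then a small CNN that keeps only boxes classified as ladybird beetles.
# Parameters, the saliency proxy, and the classifier are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from skimage.segmentation import slic
from skimage.measure import regionprops

def propose_boxes(image, n_segments=200, keep_top=10):
    """Step 1: segment into superpixels and keep the most 'salient' regions,
    scoring each region by its color contrast against the global image mean."""
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=1)
    global_mean = image.reshape(-1, image.shape[-1]).mean(axis=0)
    scored = []
    for region in regionprops(segments):
        mask = segments == region.label
        contrast = np.linalg.norm(image[mask].mean(axis=0) - global_mean)
        scored.append((contrast, region.bbox))  # bbox = (min_r, min_c, max_r, max_c)
    scored.sort(key=lambda t: t[0], reverse=True)
    return [bbox for _, bbox in scored[:keep_top]]

class BoxClassifier(nn.Module):
    """Step 2: placeholder CNN that accepts or rejects each candidate crop."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):  # x: (B, 3, H, W) crops resized to a fixed size
        return torch.sigmoid(self.net(x)).squeeze(-1)

# Usage: keep only the proposals the classifier scores as ladybird beetles.
image = np.random.rand(256, 256, 3)  # placeholder RGB field image
classifier = BoxClassifier().eval()
detections = []
with torch.no_grad():
    for (r0, c0, r1, c1) in propose_boxes(image):
        crop = torch.tensor(image[r0:r1, c0:c1], dtype=torch.float32)
        crop = nn.functional.interpolate(
            crop.permute(2, 0, 1)[None], size=(64, 64))  # resize to fixed input
        if classifier(crop).item() > 0.5:
            detections.append((r0, c0, r1, c1))
```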


Subject(s)
Coleoptera/classification; Pattern Recognition, Automated/methods; Algorithms; Animals; Colombia; Deep Learning; Ecuador; Introduced Species; Neural Networks, Computer