Results 1 - 5 of 5

1.
Interdiscip Sci ; 14(2): 566-581, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35482216

ABSTRACT

Recent years have witnessed benchmark performance from transfer learning with deep architectures in computer-aided diagnosis (CAD) of breast cancer. In this approach, a pre-trained neural network is fine-tuned on relevant data to extract useful features from the dataset. However, in addition to the computational overhead, fine-tuning suffers from overfitting when features are extracted from smaller datasets. Handcrafted feature extraction techniques, as well as feature extraction using pre-trained deep networks, come to the rescue in this situation and have proved to be far more efficient and lightweight than deep-architecture-based transfer learning techniques. This research demonstrates the competence of feature engineering and representation learning for classifying breast cancer images, as against the established and contemporary practice of using transfer learning techniques. Moreover, it reveals superior feature learning capacity with feature fusion, in contrast to the conventional belief that representation learning alone captures unknown feature patterns better. Experiments were conducted on two popular breast cancer image datasets, KIMIA Path960 and BreakHis, comparing image-level accuracy across the above-mentioned feature extraction techniques. An image-level accuracy of 97.81% is achieved on the KIMIA Path960 dataset using individual features extracted with a handcrafted (color histogram) technique. Fusion of uniform Local Binary Pattern (uLBP) and color histogram features yields the highest accuracy of 99.17% on the same dataset. On the BreakHis dataset, the highest classification accuracy of 88.41% is obtained with color histogram features for images with a 200X magnification factor. Finally, the results are contrasted with the state of the art, and the proposed fusion-based techniques perform better on many occasions. On the BreakHis dataset, the highest accuracies of 87.60% (with the least standard deviation) and 85.77% are recorded for the 200X and 400X magnification factors, respectively, and the results for those magnification factors exceed the state of the art.
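
A minimal sketch of the uLBP + color-histogram feature fusion this abstract describes, assuming scikit-image and scikit-learn; the LBP parameters (P=8, R=1), bin counts, luminance conversion, and the SVM classifier are illustrative assumptions, not the authors' exact pipeline:

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def ulbp_histogram(gray, P=8, R=1.0):
        # Uniform LBP produces P + 2 distinct codes; histogram them.
        lbp = local_binary_pattern(gray, P, R, method="uniform")
        hist, _ = np.histogram(lbp.ravel(), bins=np.arange(P + 3), density=True)
        return hist

    def color_histogram(rgb, bins=32):
        # Per-channel intensity histogram, concatenated across R, G, B.
        return np.concatenate([
            np.histogram(rgb[..., c], bins=bins, range=(0, 256), density=True)[0]
            for c in range(3)
        ])

    def fused_features(rgb):
        # Fusion here is simple concatenation of the two feature vectors.
        gray = rgb.mean(axis=2)  # luminance proxy (assumption)
        return np.concatenate([ulbp_histogram(gray), color_histogram(rgb)])

    # images: list of HxWx3 uint8 patches; labels: class labels (loading not shown)
    # X = np.stack([fused_features(im) for im in images])
    # clf = SVC().fit(X, labels)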


Subject(s)
Breast Neoplasms; Breast Neoplasms/diagnostic imaging; Diagnosis, Computer-Assisted; Female; Humans; Neural Networks, Computer
2.
Magn Reson Imaging ; 75: 107-115, 2021 01.
Article in English | MEDLINE | ID: mdl-33148512

ABSTRACT

Motion artifacts are a common occurrence in Magnetic Resonance Imaging exams. Motion during acquisition has a profound impact on workflow efficiency, often requiring sequences to be repeated. Furthermore, motion artifacts may escape notice by technologists, only to be revealed at the time of reading by radiologists, affecting diagnostic quality. There is a paucity of clinical tools to identify and quantitatively assess the severity of motion artifacts in MRI. An image with subtle motion may still have diagnostic value, while severe motion may be uninterpretable by radiologists and require the exam to be repeated. A tool for the automatic identification of motion artifacts would therefore help maintain diagnostic quality while potentially driving workflow efficiencies. Here we aim to quantify the severity of motion artifacts in MRI images using deep learning. The impact of subject movement parameters such as displacement and rotation on image quality is also studied. A state-of-the-art stacked ensemble model was developed to classify motion artifacts into five levels (no motion, slight, mild, moderate, and severe) in brain scans. The stacked ensemble model robustly predicts rigid-body motion severity across different acquisition parameters, including T1-weighted and T2-weighted slices acquired in different anatomical planes. The ensemble model with an XGBoost meta-learner achieves 91.6% accuracy, 94.8% area under the curve, and 90% Cohen's kappa, and is observed to be more accurate and robust than the individual base learners.
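
A minimal sketch of stacking with an XGBoost meta-learner over the base learners' class probabilities, using scikit-learn and xgboost; the base models below are generic stand-ins (the paper's base learners operate on MR slices, and that feature extraction is not shown):

    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from xgboost import XGBClassifier

    SEVERITY = ["no motion", "slight", "mild", "moderate", "severe"]  # classes 0-4

    stack = StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=200)),
            ("lr", LogisticRegression(max_iter=1000)),
        ],
        final_estimator=XGBClassifier(objective="multi:softprob"),
        stack_method="predict_proba",  # meta-learner sees class probabilities
        cv=5,  # out-of-fold predictions keep the meta-learner from overfitting
    )
    # stack.fit(X_train, y_train)
    # severity = [SEVERITY[i] for i in stack.predict(X_test)]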


Subject(s)
Artifacts; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Motion; Humans; Neuroimaging; Rotation
3.
Med Image Anal ; 58: 101552, 2019 12.
Article in English | MEDLINE | ID: mdl-31521965

ABSTRACT

Generative adversarial networks have gained a lot of attention in the computer vision community due to their capability of generating data without explicitly modelling the probability density function. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into training and imposing higher-order consistency. This has proven useful in many cases, such as domain adaptation, data augmentation, and image-to-image translation. These properties have attracted researchers in the medical imaging community, where the technique has seen rapid adoption in many traditional and novel applications, such as image reconstruction, segmentation, detection, classification, and cross-modality synthesis. We expect this trend to continue, and we therefore review recent advances in medical imaging that use the adversarial training scheme, in the hope of benefiting researchers interested in this technique.
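
A minimal sketch of the adversarial training scheme this review surveys, in PyTorch: the discriminator is trained to separate real from generated samples, and the generator is trained to fool it. The tiny fully connected networks and 784-dimensional samples are placeholders, not any reviewed architecture:

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
    D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def train_step(real):  # real: (batch, 784) tensor of training samples
        z = torch.randn(real.size(0), 64)
        fake = G(z)
        # Discriminator step: push real toward 1, generated toward 0.
        d_loss = (bce(D(real), torch.ones(real.size(0), 1)) +
                  bce(D(fake.detach()), torch.zeros(real.size(0), 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator step: make the discriminator label fakes as real.
        g_loss = bce(D(fake), torch.ones(real.size(0), 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()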


Subject(s)
Magnetic Resonance Imaging; Neural Networks, Computer; Algorithms; Humans; Image Processing, Computer-Assisted/methods
4.
J Digit Imaging ; 30(4): 477-486, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28695342

ABSTRACT

With many thyroid nodules being detected incidentally, it is important to identify as many malignant nodules as possible while excluding those that are highly likely to be benign from fine needle aspiration (FNA) biopsies or surgeries. This paper presents a computer-aided diagnosis (CAD) system for classifying thyroid nodules in ultrasound images. We use a deep learning approach to extract features from thyroid ultrasound images. The ultrasound images are pre-processed to calibrate their scale and remove artifacts. A pre-trained GoogLeNet model is then fine-tuned on the pre-processed image samples, which leads to superior feature extraction. The extracted features of the thyroid ultrasound images are fed to a cost-sensitive Random Forest classifier that labels the images as "malignant" or "benign". The experimental results show that the proposed fine-tuned GoogLeNet model achieves excellent classification performance, attaining 98.29% classification accuracy, 99.10% sensitivity, and 93.90% specificity for images in an open-access database (Pedraza et al. 16), and 96.34% classification accuracy, 86% sensitivity, and 99% specificity for images in our local health region database.
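
A minimal sketch of the described pipeline, assuming torchvision and scikit-learn: a pre-trained GoogLeNet serves as the feature extractor and a cost-sensitive Random Forest does the classification. The fine-tuning step is omitted, and cost sensitivity is approximated with class weights whose values are assumptions (the abstract does not give the paper's cost settings):

    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from sklearn.ensemble import RandomForestClassifier

    net = models.googlenet(weights="DEFAULT")
    net.fc = nn.Identity()  # expose the 1024-d penultimate features
    net.eval()

    prep = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    def features(pil_images):
        # Batch the pre-processed images and run a single forward pass.
        with torch.no_grad():
            batch = torch.stack([prep(im) for im in pil_images])
            return net(batch).numpy()

    # Penalize misclassifying malignant (1) more heavily than benign (0).
    rf = RandomForestClassifier(n_estimators=500,
                                class_weight={0: 1.0, 1: 5.0})  # weights assumed
    # rf.fit(features(train_images), train_labels)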


Subject(s)
Diagnosis, Computer-Assisted/methods; Neural Networks, Computer; Thyroid Nodule/classification; Thyroid Nodule/diagnostic imaging; Biopsy, Fine-Needle; Humans; Sensitivity and Specificity; Thyroid Gland/diagnostic imaging; Thyroid Gland/pathology; Thyroid Nodule/pathology; Ultrasonography/methods
5.
Mach Vis Appl ; 28(1): 201-218, 2017.
Article in English | MEDLINE | ID: mdl-32269425

ABSTRACT

Archaeologists are currently producing huge numbers of digitized photographs to record and preserve artefact finds. These images are used to identify and categorize artefacts, to reason about connections between artefacts, and to perform outreach to the public. However, finding specific types of images within collections remains a major challenge: the metadata associated with images is often sparse or inconsistent. This makes keyword-based exploratory search difficult, leaving researchers to rely on serendipity and slowing down the research process. We present an image-based retrieval system that addresses this problem for biface artefacts. To identify the artefact characteristics that need to be captured by image features, we conducted a contextual inquiry study with experts in bifaces. We then devised several descriptors for matching images of bifaces with similar artefacts. We evaluated these descriptors using measures that specifically examine the differences between the sets of images returned by the search system under different descriptors. Through this nuanced approach, we provide a comprehensive analysis of the strengths and weaknesses of the different descriptors and identify implications for the design of search systems for archaeology.
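
A minimal sketch of descriptor-based retrieval and one way to compare the result sets produced by two descriptors; the Euclidean ranking and Jaccard overlap are illustrative assumptions, since the abstract does not name the paper's specific descriptors or measures:

    import numpy as np

    def top_k(query_vec, descriptors, k=10):
        # Rank the collection by Euclidean distance to the query descriptor.
        dist = np.linalg.norm(descriptors - query_vec, axis=1)
        return set(np.argsort(dist)[:k])

    def jaccard(a, b):
        # Overlap between two result sets: 1.0 means identical top-k sets.
        return len(a & b) / len(a | b)

    # desc_a, desc_b: (n_images, dim) arrays from two descriptors (not shown)
    # q = 0  # index of the query image
    # overlap = jaccard(top_k(desc_a[q], desc_a), top_k(desc_b[q], desc_b))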
