Results 1 - 7 of 7
1.
J Imaging Inform Med ; 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39164453

ABSTRACT

The elasticity of soft tissue has been widely regarded as a characteristic property for differentiating healthy tissue from lesions and has therefore motivated the development of several elasticity imaging modalities, such as ultrasound elastography, magnetic resonance elastography, and optical coherence elastography, which measure tissue elasticity directly. This paper proposes an alternative approach that models elasticity for prior knowledge-based extraction of tissue elastic characteristic features for machine learning (ML) lesion classification using the computed tomography (CT) imaging modality. The model describes dynamic non-rigid (or elastic) soft-tissue deformation on a differential manifold to mimic the tissue's elasticity under wave fluctuation in vivo. Based on the model, a local deformation invariant is formulated using the first- and second-order derivatives of the lesion's volumetric CT image and is used to generate an elastic feature map of the lesion volume. Tissue elastic features are extracted from the feature map and fed to ML to perform lesion classification. Two pathologically proven image datasets of colon polyps and lung nodules were used to test the modeling strategy. The outcomes reached an area under the receiver operating characteristic curve of 94.2% for the polyps and 87.4% for the nodules, an average gain of 5 to 20% over several existing state-of-the-art image feature-based lesion classification methods. The gain demonstrates the importance of extracting tissue characteristic features for lesion classification rather than image features, which can include various image artifacts and may vary across image acquisition protocols and imaging modalities.
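
A minimal sketch of the derivative-based feature-map idea in item 1 above, in Python with NumPy: first- and second-order derivatives of a CT sub-volume are combined into a voxel-wise map whose summary statistics could feed a classifier. The specific invariant below is an illustrative placeholder, not the paper's formulation, and all array names are hypothetical.

# Hypothetical sketch: derive a voxel-wise feature map from 1st/2nd order
# derivatives of a CT volume, then summarize it for an ML classifier.
# The "invariant" used here is an illustrative placeholder, not the paper's formula.
import numpy as np

def elastic_feature_map(volume: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    # First-order derivatives (gradient components along z, y, x)
    gz, gy, gx = np.gradient(volume.astype(np.float64))
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)

    # Second-order derivatives (diagonal of the Hessian only, for brevity)
    gzz = np.gradient(gz, axis=0)
    gyy = np.gradient(gy, axis=1)
    gxx = np.gradient(gx, axis=2)
    laplacian = gxx + gyy + gzz

    # Illustrative local descriptor: ratio of 2nd- to 1st-order response
    return laplacian / (grad_mag + eps)

# Summary statistics over the lesion volume could then serve as classifier inputs
volume = np.random.rand(32, 32, 32)          # stand-in for a lesion CT sub-volume
fmap = elastic_feature_map(volume)
features = [fmap.mean(), fmap.std(), np.percentile(fmap, 90)]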

2.
Vis Comput Ind Biomed Art ; 5(1): 16, 2022 Jun 14.
Article in English | MEDLINE | ID: mdl-35699865

ABSTRACT

Textures have been widely adopted as an essential tool for lesion detection and classification through analysis of lesion heterogeneities. In this study, higher-order derivative images are employed to address the poor contrast across similar tissue types in certain imaging modalities. To make good use of the derivative information, the novel concept of vector texture is first introduced to construct and extract several types of polyp descriptors. Two widely used differential operators, the gradient and the Hessian, are used to generate the first- and second-order derivative images. These derivative volumetric images are used to produce two angle-based and two vector-based (including both angle and magnitude) textures. Next, a vector-based co-occurrence matrix is proposed to extract texture features, which are fed to a random forest classifier to perform polyp classification. To evaluate the performance of our method, experiments are conducted on a private colorectal polyp dataset obtained from computed tomographic colonography. We compare our method with four existing state-of-the-art methods and find that it outperforms the competing methods by 4%-13% as evaluated by the area under the receiver operating characteristic curve.
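
A minimal sketch of an angle-based co-occurrence texture in the spirit of the vector-texture idea in item 2 above, using NumPy and scikit-learn; the quantization scheme, displacement, and classifier settings are assumptions for illustration, not the paper's exact descriptor.

# Illustrative angle-based co-occurrence texture fed to a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def angle_cooccurrence(volume: np.ndarray, n_bins: int = 8) -> np.ndarray:
    gz, gy, gx = np.gradient(volume.astype(np.float64))
    # Quantize the in-plane gradient angle into n_bins levels
    angle = np.arctan2(gy, gx)                       # range [-pi, pi]
    q = np.floor((angle + np.pi) / (2 * np.pi) * n_bins).astype(int)
    q = np.clip(q, 0, n_bins - 1)

    # Co-occurrence of quantized angles between voxels one step apart along x
    a, b = q[..., :-1].ravel(), q[..., 1:].ravel()
    M = np.zeros((n_bins, n_bins))
    np.add.at(M, (a, b), 1)
    return (M / M.sum()).ravel()                     # normalized, flattened

# One descriptor per polyp volume, classified by a random forest
X = np.stack([angle_cooccurrence(np.random.rand(24, 24, 24)) for _ in range(20)])
y = np.random.randint(0, 2, size=20)                 # stand-in labels
clf = RandomForestClassifier(n_estimators=100).fit(X, y)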

3.
Sensors (Basel) ; 22(3)2022 Jan 25.
Article in English | MEDLINE | ID: mdl-35161653

ABSTRACT

Objective: As an effective depiction of lesion heterogeneity, texture information extracted from computed tomography has become increasingly important in polyp classification. However, variation and redundancy among multiple texture descriptors make integrating them into a general characterization a challenging task. To address these two problems, this work proposes an adaptive learning model to integrate multi-scale texture features. Methods: To mitigate feature variation, the whole feature set is geometrically split into several independent subsets, which are ranked by a learning evaluation measure after preliminary classifications. To reduce feature redundancy, a bottom-up hierarchical learning framework is proposed to ensure a monotonic increase in classification performance while selectively integrating these ranked sets. Two types of classifiers, traditional (random forest + support vector machine) and convolutional neural network (CNN)-based, are employed to perform polyp classification under the proposed framework, with extended Haralick measures and gray-level co-occurrence matrices (GLCMs) as their respective inputs. Experimental results are based on a retrospective dataset of 63 polyp masses (defined as greater than 3 cm in largest diameter), including 32 adenocarcinomas and 31 benign adenomas, from adult patients undergoing first-time computed tomography colonography who had corresponding histopathology of the detected masses. Results: We evaluate the performance of the proposed models by the area under the receiver operating characteristic curve (AUC). The proposed models show encouraging performance, with an AUC of 0.925 for the traditional classification method and 0.902 for the CNN. The proposed adaptive learning framework outperforms nine well-established classification methods, six traditional and three deep learning, by a large margin. Conclusions: The proposed adaptive learning model can address the challenge of feature variation through multi-scale grouping of feature inputs and that of feature redundancy through hierarchical sorting of these feature groups. The improved classification performance over the comparative models demonstrates the feasibility and utility of this adaptive learning procedure for feature integration.


Subjects
Computed Tomographic Colonography, Area Under Curve, Humans, Neural Networks (Computer), Retrospective Studies, Support Vector Machine
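
A minimal sketch of the bottom-up, monotonic group-merging idea described in item 3 above: feature subsets are ranked by their standalone AUC, then merged greedily only when the combined AUC increases. The classifier, cross-validation scheme, and group definitions are illustrative assumptions.

# Greedy, AUC-monotonic merging of ranked feature groups (illustrative sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

def group_auc(X: np.ndarray, y: np.ndarray) -> float:
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    prob = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, prob)

def merge_groups(groups: list, y: np.ndarray) -> np.ndarray:
    # Rank groups by their standalone AUC (preliminary classification)
    ranked = sorted(groups, key=lambda G: group_auc(G, y), reverse=True)
    merged, best = ranked[0], group_auc(ranked[0], y)
    for G in ranked[1:]:
        candidate = np.hstack([merged, G])
        auc = group_auc(candidate, y)
        if auc > best:                 # keep only merges that increase AUC
            merged, best = candidate, auc
    return merged

# Toy usage: three random feature groups for 60 samples
y = np.random.randint(0, 2, size=60)
groups = [np.random.rand(60, d) for d in (5, 8, 3)]
X_final = merge_groups(groups, y)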
4.
Cureus ; 12(7): e9448, 2020 Jul 28.
Article in English | MEDLINE | ID: mdl-32864270

ABSTRACT

Introduction: The need to streamline patient management for coronavirus disease 2019 (COVID-19) has become more pressing than ever. Chest X-rays (CXRs) provide a non-invasive (potentially bedside) tool for monitoring the progression of the disease. In this study, we present a severity score prediction model for COVID-19 pneumonia based on frontal chest X-ray images. Such a tool can gauge the severity of COVID-19 lung infections (and pneumonia in general) and can be used for escalation or de-escalation of care as well as for monitoring treatment efficacy, especially in the ICU. Methods: Images from a public COVID-19 database were scored retrospectively by three blinded experts in terms of the extent of lung involvement as well as the degree of opacity. A neural network model pre-trained on large (non-COVID-19) chest X-ray datasets is used to construct features for the COVID-19 images that are predictive for our task. Results: Training a regression model on a subset of the outputs from this pre-trained chest X-ray model predicts our geographic extent score (range 0-8) with a mean absolute error (MAE) of 1.14 and our lung opacity score (range 0-6) with an MAE of 0.78. Conclusions: These results indicate that our model can gauge the severity of COVID-19 lung infections and could be used for escalation or de-escalation of care as well as for monitoring treatment efficacy, especially in the ICU. To enable follow-up work, we make our code, labels, and data available online.
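
A minimal sketch of the severity-regression step described in item 4 above, assuming the pretrained chest X-ray network's outputs are already available as a feature matrix; the regressor choice, array names, and random stand-in data are illustrative, not the authors' exact pipeline.

# Fit a linear model on precomputed CXR features to predict the severity score.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

features = np.random.rand(100, 18)          # stand-in for pretrained CXR model outputs
geo_score = np.random.uniform(0, 8, 100)    # geographic extent score (range 0-8)

X_tr, X_te, y_tr, y_te = train_test_split(features, geo_score, random_state=0)
reg = Ridge(alpha=1.0).fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, reg.predict(X_te))
print(f"geographic extent MAE: {mae:.2f}")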

5.
IEEE Trans Med Imaging ; 39(6): 2013-2024, 2020 06.
Article in English | MEDLINE | ID: mdl-31899419

ABSTRACT

Accurately classifying colorectal polyps, i.e., differentiating malignant from benign ones, has a significant clinical impact on the early detection and identification of optimal treatment for colorectal cancer. The convolutional neural network (CNN) has shown great potential in recognizing different objects (e.g., human faces) from multi-slice (or color) images, a task similar to polyp differentiation, given a large learning database. This study explores the potential of CNN learning from multi-slice (or feature) images to differentiate malignant from benign polyps using a relatively small database with pathological ground truth, consisting of 32 malignant and 31 benign polyps represented by volumetric computed tomographic (CT) images. The feature image in this investigation is the gray-level co-occurrence matrix (GLCM). For each volumetric polyp, there are 13 GLCMs, one computed from each of the 13 directions through the polyp volume. For comparison purposes, CNN learning is also applied to the multi-slice CT images of the volumetric polyps. The comparison is further extended to include random forest (RF) classification of the Haralick texture features derived from the GLCMs. On this relatively small database, the study achieved area under the receiver operating characteristic curve (AUC) scores of 0.91/0.93 (two-fold/leave-one-out evaluations) using the CNN on the GLCMs, while the RF reached 0.84/0.86 AUC on the Haralick features and the CNN rendered 0.79/0.80 AUC on the multi-slice CT images. The presented CNN learning from GLCMs can relieve the challenge associated with a relatively small database, improves the classification performance over the CNN on the raw CT images and the RF on the Haralick features, and has the potential to perform the clinical task of differentiating malignant from benign polyps with pathological ground truth.


Subjects
Computed Tomographic Colonography, Humans, Neural Networks (Computer), ROC Curve
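
A minimal sketch of the approach in item 5 above: the 13 directional GLCMs of a polyp volume are stacked as a 13-channel input to a small CNN. The PyTorch architecture below is an illustrative stand-in, not the paper's network.

# Treat 13 directional GLCMs as a 13-channel "image" and classify with a tiny CNN.
import torch
import torch.nn as nn

class GLCMNet(nn.Module):
    def __init__(self, n_directions: int = 13, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_directions, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# One sample = 13 GLCMs, each quantized here to a 32x32 gray-level grid
glcms = torch.randn(4, 13, 32, 32)           # batch of 4 stand-in polyps
logits = GLCMNet()(glcms)                    # shape (4, 2): malignant vs benign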
6.
Vis Comput Ind Biomed Art ; 2(1): 25, 2019 Dec 27.
Article in English | MEDLINE | ID: mdl-32240410

ABSTRACT

Texture features have played an essential role in computer-aided diagnosis in medical imaging. The gray-level co-occurrence matrix (GLCM)-based texture descriptor has emerged as one of the most successful feature sets for these applications. This study aims to increase the potential of these features by introducing multi-scale analysis into the construction of the GLCM texture descriptor. We first introduce a new parameter, stride, to extend the definition of the GLCM. We then propose three multi-scale GLCM models, one for each of its three parameters: (1) the learning model by multiple displacements, (2) the learning model by multiple strides (LMS), and (3) the learning model by multiple angles. These models increase the texture information by introducing more texture patterns and mitigate the direction sparsity and dense sampling problems present in the traditional Haralick model. To further analyze the three parameters, we test the three models by performing classification on a dataset of 63 large polyp masses obtained from computed tomographic colonography, consisting of 32 adenocarcinomas and 31 benign adenomas. Finally, the proposed methods are compared with several typical GLCM texture descriptors and one deep learning model. LMS obtains the highest performance, raising the prediction power to an area under the receiver operating characteristic curve of 0.9450 with a standard deviation of 0.0285, a significant improvement.
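
A minimal sketch of a GLCM parameterized by displacement, angle, and stride, as discussed in item 6 above; interpreting stride as subsampling of the image grid is an assumption made here for illustration, and the multi-stride concatenation only gestures at the LMS idea.

# GLCM with displacement, direction, and an assumed grid-subsampling stride.
import numpy as np

def glcm(img: np.ndarray, levels: int = 8,
         displacement: int = 1, angle_deg: int = 0, stride: int = 1) -> np.ndarray:
    q = np.floor(img / img.max() * (levels - 1)).astype(int)   # quantize gray levels
    q = q[::stride, ::stride]                                   # apply stride
    dy = int(round(displacement * np.sin(np.deg2rad(angle_deg))))
    dx = int(round(displacement * np.cos(np.deg2rad(angle_deg))))
    M = np.zeros((levels, levels))
    H, W = q.shape
    for y in range(max(0, -dy), min(H, H - dy)):
        for x in range(max(0, -dx), min(W, W - dx)):
            M[q[y, x], q[y + dy, x + dx]] += 1
    return M / max(M.sum(), 1)

# Multi-scale descriptor: concatenate GLCMs over several strides (LMS-style idea)
img = np.random.rand(64, 64)
descriptor = np.concatenate([glcm(img, stride=s).ravel() for s in (1, 2, 4)])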

7.
J Med Imaging (Bellingham) ; 6(4): 044503, 2019 Oct.
Article in English | MEDLINE | ID: mdl-32280727

ABSTRACT

Polyp classification is a feature selection and clustering process. Picking the most effective features from multiple polyp descriptors without redundant information is a great challenge in this procedure. We propose a multilayer feature selection method that constructs an optimized descriptor for polyp classification with a feature-grouping strategy in a hierarchical framework. First, the proposed method uses image metrics, such as intensity, gradient, and curvature, to divide the corresponding polyp descriptors into several feature groups, which serve as the preliminary units of the method. Each preliminary unit then generates two ranked outputs: its optimized variable group (OVG) and a preliminary classification measurement. Next, a feature dividing-merging (FDM) algorithm is designed to perform feature merging hierarchically and iteratively. Unlike traditional feature selection methods, the proposed FDM algorithm comprises two steps: feature dividing and feature merging. At each layer, feature dividing selects the OVG with the highest area under the receiver operating characteristic curve (AUC) as the baseline, while the other descriptors are treated as its complements. In the fusion step, the FDM algorithm iteratively merges variables that yield gains from the complementary descriptors into the baseline at every layer until the final descriptor is obtained. The proposed model (including the forward-step algorithm and the FDM algorithm) is a greedy method that guarantees clustering monotonicity of all OVGs from the bottom to the top layer. In our experiments, the selected results from each layer are reported through both graphical illustration and data analysis. The performance of the proposed method is compared with that of five existing classification methods on a polyp database of 63 samples with pathological reports. The experimental results show that our proposed method outperforms the other methods by 4% to 23% in terms of AUC scores.
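
A minimal sketch of the merging step of the FDM idea in item 7 above: the highest-AUC group serves as the baseline, and single variables from the complementary descriptors are merged only when they yield an AUC gain. The scorer, validation scheme, and stand-in data are assumptions, not the paper's exact configuration.

# Greedy variable-level merging into a baseline feature group (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

def auc_of(X, y):
    prob = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                             cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, prob)

def merge_variables(baseline: np.ndarray, complements: np.ndarray, y: np.ndarray):
    merged, best = baseline, auc_of(baseline, y)
    for j in range(complements.shape[1]):
        candidate = np.hstack([merged, complements[:, j:j + 1]])
        score = auc_of(candidate, y)
        if score > best:                      # only merges with a gain survive
            merged, best = candidate, score
    return merged, best

y = np.random.randint(0, 2, 63)               # 63 samples, as in the polyp database
baseline = np.random.rand(63, 4)              # stand-in OVG with the top AUC
complements = np.random.rand(63, 6)           # stand-in complementary variables
final_descriptor, final_auc = merge_variables(baseline, complements, y)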
