Results 1 - 5 of 5
1.
Biomedica; 42(1): 170-183, 2022 Mar 01.
Article in English, Spanish | MEDLINE | ID: mdl-35471179

ABSTRACT

INTRODUCTION: The coronavirus disease 2019 (COVID-19) has become a significant public health problem worldwide. In this context, automatic CT-scan analysis has emerged as a complementary COVID-19 diagnostic tool, allowing for the characterization of radiological findings, patient categorization, and disease follow-up. However, this analysis depends on the radiologist's expertise, which may result in subjective evaluations. OBJECTIVE: To explore deep learning representations, trained from thoracic CT slices, to automatically distinguish COVID-19 disease from control samples. MATERIALS AND METHODS: Two datasets were used: SARS-CoV-2 CT Scan (Set-1) and the FOSCAL clinic's dataset (Set-2). The deep representations took advantage of supervised learning models previously trained on the natural image domain, which were adjusted following a transfer learning scheme. The deep classification was carried out: (a) via an end-to-end deep learning approach and (b) via random forest and support vector machine classifiers fed with the deep representation embedding vectors. RESULTS: The end-to-end classification achieved an average accuracy of 92.33% (89.70% precision) for Set-1 and 96.99% (96.62% precision) for Set-2. The deep feature embedding with a support vector machine achieved an average accuracy of 91.40% (95.77% precision) and 96.00% (94.74% precision) for Set-1 and Set-2, respectively. CONCLUSION: Deep representations achieved outstanding performance in the identification of COVID-19 cases on CT scans, demonstrating good characterization of the COVID-19 radiological patterns. These representations could potentially support COVID-19 diagnosis in clinical settings.
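The two classification routes described in this abstract lend themselves to a compact illustration. The Python sketch below shows (a) fine-tuning an ImageNet-pretrained backbone end to end and (b) feeding its pooled embedding vectors into support vector machine and random forest classifiers; the ResNet-50 backbone, input size, and variable names are assumptions for illustration, not details reported by the authors.

```python
# Hedged sketch: transfer learning on CT slices plus classical classifiers on deep embeddings.
# The backbone (ResNet-50), input size, and training-loop details are assumed, not from the paper.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# (a) End-to-end transfer learning: reuse ImageNet weights, replace the head for 2 classes
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # COVID-19 vs. control
# ... fine-tune `backbone` on labeled CT slices with a standard cross-entropy loop ...

# (b) Deep feature embedding: drop the classification head, pool features into vectors
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

def embed(ct_batch: torch.Tensor) -> torch.Tensor:
    """Return one embedding vector per CT slice (ct_batch: N x 3 x 224 x 224)."""
    with torch.no_grad():
        return feature_extractor(ct_batch).flatten(start_dim=1)

# The embedding vectors then feed the classical classifiers mentioned in the abstract
svm = SVC(kernel="rbf")
forest = RandomForestClassifier(n_estimators=200)
# svm.fit(embed(train_slices).numpy(), train_labels)
# forest.fit(embed(train_slices).numpy(), train_labels)
```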


Subjects
COVID-19 , Deep Learning , COVID-19 Testing , Humans , Neural Networks, Computer , SARS-CoV-2 , Tomography, X-Ray Computed
2.
Biomédica (Bogotá); 42(1): 170-183, Jan.-Mar. 2022. tab, graf
Article in English | LILACS | ID: biblio-1374516

Subjects
Coronavirus Infections/diagnosis , Deep Learning , Tomography, X-Ray Computed
3.
Biomed Eng Lett; 12(1): 75-84, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35186361

ABSTRACT

Cardiac cine-MRI is one of the most important diagnostic tools used to assess the morphology and physiology of the heart during the cardiac cycle. Nonetheless, the analysis of cardiac cine-MRI remains under-exploited and highly dependent on the observer's expertise. This work introduces an imaging cardiac disease representation, coded as an embedding vector, that fully exploits the hidden mapping between the latent space and a generated cine-MRI data distribution. The resulting representation is progressively learned and conditioned on a set of cardiac conditions. A generative cardiac descriptor is obtained from a progressive generative adversarial network trained to produce synthetic MRI images conditioned on several heart conditions. The generator model is then used to recover a digital biomarker, coded as an embedding vector, following a backpropagation scheme. A UMAP strategy is then applied to build a topological, low-dimensional embedding space that discriminates among cardiac pathologies. The approach is evaluated by using the embedded representation as a potential disease descriptor on 2296 pathological cine-MRI slices. The proposed strategy yields an average accuracy of 0.8 in discriminating among heart conditions. Furthermore, the low-dimensional space shows a remarkable grouping of cardiac classes, suggesting its potential use as a tool to support diagnosis. The progressive, generative representation learned from cine-MRI slices allows the recovery of complex coded descriptors that are useful for discriminating among heart conditions. The cardiac disease representation, expressed as a hidden embedding vector, could potentially support cardiac analysis on cine-MRI sequences.
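As a rough illustration of the backpropagation scheme mentioned in this abstract, the sketch below recovers an embedding vector for a cine-MRI slice by optimizing a latent code so that a trained generator reproduces the slice, then projects the recovered codes with UMAP. The generator interface, latent dimensionality, optimizer settings, and the plain MSE reconstruction loss are assumptions for illustration, not the authors' exact setup.

```python
# Hedged sketch: recover a latent embedding for a slice from a frozen generator, then project with UMAP.
import torch
import umap  # provided by the umap-learn package

def recover_embedding(generator: torch.nn.Module,
                      slice_img: torch.Tensor,
                      latent_dim: int = 512,
                      steps: int = 500,
                      lr: float = 1e-2) -> torch.Tensor:
    """Back-propagate a reconstruction loss into the latent code while the generator stays frozen."""
    generator.eval()
    z = torch.randn(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(generator(z), slice_img)
        loss.backward()
        optimizer.step()
    return z.detach().squeeze(0)

# One recovered code per slice, then a 2-D space in which cardiac classes can group
# codes = torch.stack([recover_embedding(G, s) for s in slices]).numpy()
# embedding_2d = umap.UMAP(n_components=2).fit_transform(codes)
```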

4.
Cytometry A; 91(6): 566-573, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28192639

ABSTRACT

The treatment and management of early-stage estrogen receptor positive (ER+) breast cancer is hindered by the difficulty of identifying patients who require adjuvant chemotherapy as opposed to those who will respond to hormonal therapy. To distinguish between the more and less aggressive breast tumors, a fundamental criterion for selecting an appropriate treatment plan, Oncotype DX (ODX) and other gene expression tests are typically employed. While informative, these gene expression tests are expensive, tissue destructive, and require specialized facilities. Bloom-Richardson (BR) grade, the common scheme employed in breast cancer grading, has been shown to be correlated with the Oncotype DX risk score. Unfortunately, studies have also shown that BR grade determination suffers from notable inter-observer variability. One of the constituent categories in BR grading is the mitotic index. The goal of this study was to develop a deep learning (DL) classifier to identify mitotic figures from whole slide images of ER+ breast cancer, the hypothesis being that the number of mitoses identified by the DL classifier would correlate with the corresponding Oncotype DX risk categories. The mitosis detector yielded an average F-score of 0.556 on the AMIDA mitosis dataset using a 6-fold validation setup. For a cohort of 174 whole slide images of early-stage ER+ breast cancer for which the corresponding Oncotype DX score was available, the distributions of the number of mitoses identified by the DL classifier were found to be significantly different between the high and low Oncotype DX risk groups (P < 0.01). Comparisons of other risk groups, using both the ODX score and the histological grade, also showed significantly different automated mitosis distributions. Additionally, a support vector machine classifier trained to separate low/high Oncotype DX risk categories using the mitotic count determined by the DL classifier yielded an 83.19% classification accuracy. © 2017 International Society for Advancement of Cytometry.
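The final step reported in this abstract, an SVM separating low from high Oncotype DX risk using the automated mitotic count, can be sketched in a few lines. The counts, labels, and cross-validation setup below are hypothetical placeholders, not the study's data.

```python
# Hedged sketch: SVM on per-slide mitotic counts to separate low vs. high Oncotype DX risk.
# All values below are invented for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# One automated mitosis count per whole slide image (hypothetical values)
mitotic_counts = np.array([[3], [41], [7], [28], [2], [35], [5], [19]])
odx_risk = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 0 = low risk, 1 = high risk

clf = SVC(kernel="linear")
accuracy = cross_val_score(clf, mitotic_counts, odx_risk, cv=4).mean()
print(f"cross-validated accuracy: {accuracy:.2%}")
```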


Subjects
Biomarkers, Tumor/genetics , Breast Neoplasms/diagnosis , Image Interpretation, Computer-Assisted/methods , Mitosis , Receptor, ErbB-2/genetics , Support Vector Machine , Breast Neoplasms/genetics , Breast Neoplasms/pathology , Eosine Yellowish-(YS) , Female , Gene Expression , Hematoxylin , Histocytochemistry/methods , Humans , Mitotic Index , Neoplasm Grading , Risk
5.
Sci Rep; 6: 32706, 2016 Sep 07.
Article in English | MEDLINE | ID: mdl-27599752

ABSTRACT

Treatment of early-stage estrogen receptor positive (ER+) breast cancer (BCa) is based on the presumed aggressiveness and likelihood of cancer recurrence. Oncotype DX (ODX) and other gene expression tests have allowed the more aggressive ER+ BCa requiring adjuvant chemotherapy to be distinguished from the less aggressive cancers benefiting from hormonal therapy alone. However, these tests are expensive, tissue destructive, and require specialized facilities. Interestingly, BCa grade has been shown to be correlated with the ODX risk score. Unfortunately, the Bloom-Richardson (BR) grade determined by pathologists can be variable. A constituent category in BR grading is tubule formation. This study aims to develop a deep learning classifier to automatically identify tubule nuclei from whole slide images (WSI) of ER+ BCa, the hypothesis being that the ratio of tubule nuclei to the overall number of nuclei (a tubule formation indicator, TFI) correlates with the corresponding ODX risk categories. This correlation was assessed in 7513 fields extracted from 174 WSI. The results suggest that low ODX/BR cases have a larger TFI than high ODX/BR cases (p < 0.01). The low ODX/BR cases also presented a larger TFI than that obtained for the rest of the cases (p < 0.05). Finally, the high ODX/BR cases had a significantly smaller TFI than that obtained for the rest of the cases (p < 0.01).
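A brief sketch of the comparison described in this abstract: compute the tubule formation indicator per slide and test whether the low-risk group shows larger values. The TFI values, group sizes, and the choice of a Mann-Whitney U test are illustrative assumptions; the abstract reports only the significance levels.

```python
# Hedged sketch: compare the tubule formation indicator (TFI) between ODX/BR risk groups.
# The per-slide values and the specific statistical test are assumptions for illustration.
import numpy as np
from scipy.stats import mannwhitneyu

def tubule_formation_indicator(tubule_nuclei: int, total_nuclei: int) -> float:
    """TFI = tubule nuclei / overall number of nuclei."""
    return tubule_nuclei / total_nuclei

# Hypothetical per-slide TFI values for the two risk groups
tfi_low_risk = np.array([0.42, 0.38, 0.51, 0.47, 0.44])
tfi_high_risk = np.array([0.12, 0.18, 0.09, 0.21, 0.15])

stat, p_value = mannwhitneyu(tfi_low_risk, tfi_high_risk, alternative="greater")
print(f"U = {stat:.1f}, p = {p_value:.4f}")  # low-risk TFI expected to be larger
```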


Subjects
Automation , Breast Neoplasms/metabolism , Cell Nucleus/metabolism , Receptors, Estrogen/metabolism , Breast Neoplasms/drug therapy , Breast Neoplasms/pathology , Female , Humans , Prognosis , Risk