Results 1 - 20 of 16,744
2.
Radiol Technol ; 96(1): 13-18, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39237322

ABSTRACT

PURPOSE: To establish a standardized method of reformatting axial images for computed tomography (CT) brain examinations. METHODS: An anatomic line between the superior orbital rim and the base of the occipital bone (SOR-BS line) was chosen as the standardized reference line. In June 2022, CT technologists at a tertiary care center received an educational presentation and a 1-page reference handout on making standardized CT reformats. This was the quality-of-care intervention. Subsequently, 100 CT brain examinations performed from July 1 to 10, 2020 (preintervention) were analyzed and compared with 100 CT brain examinations performed from July 1 to 10, 2022 (postintervention). RESULTS: There was no significant difference in the mean angle difference between the preintervention (6.2 ± 5.8°) and postintervention (5.8 ± 4.7°) groups (P = .67). However, the number of CT brain studies with an angle difference of more than 20° decreased from 4 studies to 1 study. In addition, the number of CT brain studies without reformatted images decreased from 5 to 2 studies. DISCUSSION: The cause of the less-than-optimal adoption of the expected change in CT workflow might be complex and multifactorial. However, the institution in this study is a busy tertiary care center with a chronic shortage of CT technologists. The busy workflow might have contributed to the lack of significance for the parameters assessed. CONCLUSION: There was a slight but not significant improvement between the preintervention and postintervention data.
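
For readers who want to reproduce the pre/post comparison on their own data, a minimal sketch is shown below; the abstract does not name the statistical test, so an unpaired Welch t-test is assumed, and the angle arrays are hypothetical placeholders, not the study data.

# Hedged sketch: comparing pre- vs post-intervention reformat angle differences.
# The test choice (Welch t-test) and the arrays below are assumptions.
import numpy as np
from scipy import stats

pre_angles = np.abs(np.random.default_rng(0).normal(6.2, 5.8, 100))   # hypothetical
post_angles = np.abs(np.random.default_rng(1).normal(5.8, 4.7, 100))  # hypothetical

t_stat, p_value = stats.ttest_ind(pre_angles, post_angles, equal_var=False)
print(f"mean pre = {pre_angles.mean():.1f} deg, mean post = {post_angles.mean():.1f} deg, P = {p_value:.2f}")

# Secondary endpoints reported in the abstract: counts of large deviations.
print("pre >20 deg:", int((pre_angles > 20).sum()), "| post >20 deg:", int((post_angles > 20).sum()))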


Subject(s)
Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Male; Female; Middle Aged; Adult; Brain/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Aged
3.
Radiology ; 312(3): e233435, 2024 09.
Article in English | MEDLINE | ID: mdl-39225600

ABSTRACT

Background It is increasingly recognized that interstitial lung abnormalities (ILAs) detected at CT have potential clinical implications, but automated identification of ILAs has not yet been fully established. Purpose To develop and test automated ILA probability prediction models using machine learning techniques on CT images. Materials and Methods This secondary analysis of a retrospective study included CT scans from patients in the Boston Lung Cancer Study collected between February 2004 and June 2017. Visual assessment of ILAs by two radiologists and a pulmonologist served as the ground truth. Automated ILA probability prediction models were developed that used a stepwise approach involving section inference and case inference models. The section inference model produced an ILA probability for each CT section, and the case inference model integrated these probabilities to generate the case-level ILA probability. For indeterminate sections and cases, both two- and three-label methods were evaluated. For the case inference model, we tested three machine learning classifiers (support vector machine [SVM], random forest [RF], and convolutional neural network [CNN]). Receiver operating characteristic analysis was performed to calculate the area under the receiver operating characteristic curve (AUC). Results A total of 1382 CT scans (mean patient age, 67 years ± 11 [SD]; 759 women) were included. Of the 1382 CT scans, 104 (8%) were assessed as having ILA, 492 (36%) as indeterminate for ILA, and 786 (57%) as without ILA according to ground-truth labeling. The cohort was divided into a training set (n = 96; ILA, n = 48), a validation set (n = 24; ILA, n = 12), and a test set (n = 1262; ILA, n = 44). Among the models evaluated (two- and three-label section inference models; two- and three-label SVM, RF, and CNN case inference models), the model using the three-label method in the section inference model and the two-label method and RF in the case inference model achieved the highest AUC, at 0.87. Conclusion The model demonstrated substantial performance in estimating ILA probability, indicating its potential utility in clinical settings. © RSNA, 2024 Supplemental material is available for this article. See also the editorial by Zagurovskaya in this issue.
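
The following is a minimal sketch of the stepwise section/case inference idea in Python; how the per-section probabilities are turned into case-level features is not specified in the abstract, so simple summary statistics and synthetic data are assumed here rather than the authors' implementation.

# Hedged sketch: per-section ILA probabilities are summarized into case-level
# features and passed to a random forest case-inference classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def case_features(section_probs):
    # Aggregate variable-length per-section probabilities into a fixed vector.
    return [section_probs.mean(), section_probs.max(),
            np.percentile(section_probs, 90), (section_probs > 0.5).mean()]

# Synthetic cases: ILA-positive cases tend to have more high-probability sections.
cases, labels = [], []
for _ in range(200):
    is_ila = rng.integers(0, 2)
    probs = rng.beta(2 + 3 * is_ila, 5, size=rng.integers(40, 80))
    cases.append(case_features(probs))
    labels.append(is_ila)

X, y = np.array(cases), np.array(labels)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:150], y[:150])
print("AUC:", roc_auc_score(y[150:], clf.predict_proba(X[150:])[:, 1]))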


Subject(s)
Lung Diseases, Interstitial; Lung Neoplasms; Machine Learning; Radiographic Image Interpretation, Computer-Assisted; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Lung Diseases, Interstitial/diagnostic imaging; Retrospective Studies; Female; Male; Lung Neoplasms/diagnostic imaging; Aged; Middle Aged; Radiographic Image Interpretation, Computer-Assisted/methods; Boston; Lung/diagnostic imaging; Probability
4.
Cancer Imaging ; 24(1): 123, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39278933

ABSTRACT

OBJECTIVE: To explore the effects of tube voltage, radiation dose and adaptive statistical iterative reconstruction (ASiR-V) strength level on the detection and characterization of pulmonary nodules by artificial intelligence (AI) software in ultra-low-dose chest CT (ULDCT). MATERIALS AND METHODS: An anthropomorphic thorax phantom containing 12 spherical simulated nodules (diameter: 12 mm, 10 mm, 8 mm, 5 mm; CT value: -800 HU, -630 HU, 100 HU) was scanned with three ULDCT protocols: Dose-1 (70 kVp: 0.11 mSv, 100 kVp: 0.10 mSv), Dose-2 (70 kVp: 0.34 mSv, 100 kVp: 0.32 mSv), Dose-3 (70 kVp: 0.53 mSv, 100 kVp: 0.51 mSv). All scanning protocols were repeated five times. CT images were reconstructed using four different strength levels of ASiR-V (0% = FBP, 30%, 50%, 70% ASiR-V) with a slice thickness of 1.25 mm. The characteristics of the physical nodules were used as reference standards. All images were analyzed using commercially available AI software to identify nodules for calculating nodule detection rate (DR) and to obtain their long diameter and short diameter, which were used to calculate the deformation coefficient (DC) and size measurement deviation percentage (SP) of nodules. DR, DC and SP of different imaging groups were statistically compared. RESULTS: Image noise decreased with the increase of ASiR-V strength level, and the 70 kV images had lower noise under the same strength level (mean-value 70 kV: 40.14 ± 7.05 (dose 1), 27.55 ± 7.38 (dose 2), 23.88 ± 6.98 (dose 3); 100 kV: 42.36 ± 7.62 (dose 1); 30.78 ± 6.87 (dose 2); 26.49 ± 6.61 (dose 3)). Under the same dose level, there were no differences in DR between 70 kV and 100 kV (dose 1: 58.76% vs. 58.33%; dose 2: 73.33% vs. 70.83%; dose 3: 75.42% vs. 75.42%, all p > 0.05). The DR of ground-glass nodules (GGNs) increased significantly at dose 2 and higher (70 kV: 38.12% (dose 1), 60.63% (dose 2), 64.38% (dose 3); 100 kV: 37.50% (dose 1), 59.38% (dose 2), 66.25% (dose 3)). In general, the use of ASiR-V at higher strength levels (> 50%) and 100 kV provided better (lower) DC and SP. CONCLUSION: Detection rates are similar between 70 kV and 100 kV scans. The 70 kV images have better noise performance under the same ASiR-V level, while 100 kV images with higher ASiR-V levels better preserve nodule morphology (lower DC and SP); dose levels above 0.33 mSv provide high sensitivity for nodule detection, especially for the simulated GGNs.
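
A minimal sketch of the reported nodule metrics is given below; the abstract does not spell out the formulas for DC and SP, so a long-to-short diameter ratio and a percentage deviation from the reference diameter are assumed purely for illustration.

# Hedged sketch of the nodule metrics named in the abstract; the DC and SP
# definitions below are assumptions, not the authors' formulas.
def detection_rate(n_detected, n_expected):
    return 100.0 * n_detected / n_expected

def deformation_coefficient(long_diameter_mm, short_diameter_mm):
    # Assumed definition: 1.0 would mean a perfectly spherical (undeformed) nodule.
    return long_diameter_mm / short_diameter_mm

def size_deviation_percent(measured_diameter_mm, reference_diameter_mm):
    # Assumed definition: absolute deviation relative to the physical nodule size.
    return 100.0 * abs(measured_diameter_mm - reference_diameter_mm) / reference_diameter_mm

# Example for one simulated 8 mm nodule measured by the AI software (hypothetical values).
print(deformation_coefficient(8.6, 7.9))               # ~1.09
print(size_deviation_percent(8.25, 8.0))               # ~3.1 %
print(detection_rate(n_detected=177, n_expected=240))  # ~73.8 %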


Subject(s)
Multiple Pulmonary Nodules; Phantoms, Imaging; Radiation Dosage; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Multiple Pulmonary Nodules/diagnostic imaging; Multiple Pulmonary Nodules/pathology; Lung Neoplasms/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Solitary Pulmonary Nodule/diagnostic imaging; Image Processing, Computer-Assisted/methods; Radiography, Thoracic/methods
5.
Nat Commun ; 15(1): 7620, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39223122

ABSTRACT

Recently, multi-modal vision-language foundation models have gained significant attention in the medical field. While these models offer great opportunities, they still face crucial challenges, such as the requirement for fine-grained knowledge understanding in computer-aided diagnosis and the capability of utilizing very limited or even no task-specific labeled data in real-world clinical applications. In this study, we present MaCo, a masked contrastive chest X-ray foundation model that tackles these challenges. MaCo explores masked contrastive learning to simultaneously achieve fine-grained image understanding and zero-shot learning for a variety of medical imaging tasks. It designs a correlation weighting mechanism to adjust the correlation between masked chest X-ray image patches and their corresponding reports, thereby enhancing the model's representation learning capabilities. To evaluate the performance of MaCo, we conducted extensive experiments using 6 well-known open-source X-ray datasets. The experimental results demonstrate the superiority of MaCo over 10 state-of-the-art approaches across tasks such as classification, segmentation, detection, and phrase grounding. These findings highlight the significant potential of MaCo in advancing a wide range of medical image analysis tasks.
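
As a rough illustration of the core idea only, the sketch below shows a correlation-weighted image-report contrastive loss in PyTorch; the tensor shapes, the weighting head, and the loss form are assumptions and do not reproduce the published MaCo implementation.

# Hedged sketch: per-patch weights modulate how masked-patch embeddings
# contribute to the image representation before an InfoNCE-style loss.
import torch
import torch.nn.functional as F

def weighted_contrastive_loss(patch_emb, report_emb, weight_logits, temperature=0.07):
    # patch_emb:     (B, N, D) embeddings of masked image patches (assumed shape)
    # report_emb:    (B, D)    embedding of the paired radiology report
    # weight_logits: (B, N)    learned correlation scores per patch
    weights = torch.softmax(weight_logits, dim=1)                 # (B, N)
    image_emb = (weights.unsqueeze(-1) * patch_emb).sum(dim=1)    # weighted pooling -> (B, D)
    image_emb = F.normalize(image_emb, dim=-1)
    report_emb = F.normalize(report_emb, dim=-1)
    logits = image_emb @ report_emb.t() / temperature             # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)  # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy usage with random tensors standing in for encoder outputs.
B, N, D = 4, 196, 256
loss = weighted_contrastive_loss(torch.randn(B, N, D), torch.randn(B, D), torch.randn(B, N))
print(loss.item())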


Subject(s)
Algorithms; Humans; Radiography, Thoracic/methods; Image Processing, Computer-Assisted/methods; Machine Learning; Radiographic Image Interpretation, Computer-Assisted/methods
7.
Cardiovasc Diabetol ; 23(1): 328, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39227844

ABSTRACT

BACKGROUND: The aim of this study (EPIDIAB) was to assess the relationship between epicardial adipose tissue (EAT) and the micro- and macrovascular complications (MVC) of type 2 diabetes (T2D). METHODS: EPIDIAB is a post hoc analysis from the AngioSafe T2D study, which is a multicentric study aimed at determining the safety of antihyperglycemic drugs on the retina and including patients with T2D screened for diabetic retinopathy (DR) (n = 7200) and deeply phenotyped for MVC. Included patients who had undergone cardiac CT for CAC (coronary artery calcium) scoring after inclusion (n = 1253) were analyzed with a validated deep learning segmentation pipeline for EAT volume quantification. RESULTS: The median age of the study population was 61 years [54;67], with a majority of men (57%), a median disease duration of 11 years [5;18], and a mean HbA1c of 7.8 ± 1.4%. EAT was significantly associated with all traditional CV risk factors. EAT volume significantly increased with chronic kidney disease (CKD vs no CKD: 87.8 [63.5;118.6] vs 82.7 mL [58.8;110.8], p = 0.008), coronary artery disease (CAD vs no CAD: 112.2 [82.7;133.3] vs 83.8 mL [59.4;112.1], p = 0.0004), peripheral arterial disease (PAD vs no PAD: 107 [76.2;141] vs 84.6 mL [59.2;114], p = 0.0005), and elevated CAC score (> 100 vs < 100 AU: 96.8 mL [69.1;130] vs 77.9 mL [53.8;107.7], p < 0.0001). By contrast, EAT volume was associated with neither DR nor peripheral neuropathy. We further identified a subgroup of patients with high EAT volume and a null CAC score. Interestingly, this group was more likely to be composed of young women with a high BMI, a shorter duration of T2D, a lower prevalence of microvascular complications, and a higher inflammatory profile. CONCLUSIONS: Fully-automated EAT volume quantification could provide useful information about the risk of both renal and macrovascular complications in T2D patients.


Subject(s)
Adipose Tissue; Automation; Coronary Artery Disease; Deep Learning; Diabetes Mellitus, Type 2; Pericardium; Predictive Value of Tests; Vascular Calcification; Humans; Male; Female; Diabetes Mellitus, Type 2/complications; Diabetes Mellitus, Type 2/diagnosis; Pericardium/diagnostic imaging; Middle Aged; Adipose Tissue/diagnostic imaging; Aged; Vascular Calcification/diagnostic imaging; Coronary Artery Disease/diagnostic imaging; Diabetic Angiopathies/diagnostic imaging; Diabetic Angiopathies/etiology; Diabetic Angiopathies/diagnosis; Risk Assessment; Radiographic Image Interpretation, Computer-Assisted; Computed Tomography Angiography; Adiposity; Coronary Angiography; Risk Factors; Reproducibility of Results; Prognosis; Epicardial Adipose Tissue
8.
Semin Vasc Surg ; 37(3): 306-313, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39277346

ABSTRACT

Current planning of aortic and peripheral endovascular procedures is based largely on manual measurements performed from the 3-dimensional reconstruction of preoperative computed tomography scans. Assessment of device behavior inside patient anatomy is often difficult, and available tools, such as 3-dimensional-printed models, have several limitations. Digital twin (DT) technology has been used successfully in the automotive and aerospace industries and applied recently to endovascular aortic aneurysm repair. Artificial intelligence allows the processing of large amounts of data, and its use in medicine is increasing rapidly. The aim of this review was to present the current status of DTs combined with artificial intelligence for planning endovascular procedures. Patient-specific DTs of the aorta are generated from preoperative computed tomography and integrate the mechanical properties of the aorta using finite element analysis. The same methodology is used to generate 3-dimensional models of aortic stent-grafts and simulate their deployment. Postprocessing of DT models is then performed to generate multiple parameters related to stent-graft oversizing and apposition. Machine learning algorithms allow these parameters to be combined into a synthetic index to predict Type 1A endoleak risk. Other planning and sizing applications include custom-made fenestrated and branched stent-grafts for complex aneurysms. DT technology is also being investigated for planning peripheral endovascular procedures, such as carotid artery stenting. DT provides detailed information on endovascular device behavior. Analysis of DT-derived parameters with machine learning algorithms may improve accuracy in predicting complications, such as Type 1A endoleaks.


Subject(s)
Blood Vessel Prosthesis Implantation; Blood Vessel Prosthesis; Computed Tomography Angiography; Endovascular Procedures; Predictive Value of Tests; Prosthesis Design; Radiographic Image Interpretation, Computer-Assisted; Stents; Humans; Endovascular Procedures/instrumentation; Endovascular Procedures/adverse effects; Blood Vessel Prosthesis Implantation/instrumentation; Blood Vessel Prosthesis Implantation/adverse effects; Models, Cardiovascular; Treatment Outcome; Aortography; Patient-Specific Modeling; Machine Learning; Printing, Three-Dimensional; Artificial Intelligence; Surgery, Computer-Assisted; Patient Selection; Clinical Decision-Making; Risk Factors
9.
BMC Med Imaging ; 24(1): 237, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39251996

ABSTRACT

BACKGROUND: Spectral imaging of photon-counting detector CT (PCD-CT) scanners allows virtual non-contrast (VNC) reconstructions to be generated. By analyzing 12 abdominal organs, we aimed to test the reliability of VNC reconstructions in preserving HU values compared to real unenhanced CT images. METHODS: Our study included 34 patients with pancreatic cystic neoplasm (PCN). The VNC reconstructions were generated from unenhanced, arterial, portal, and venous phase PCD-CT scans using the Liver-VNC algorithm. The 11 observed abdominal organs were segmented with the TotalSegmentator algorithm, while the PCNs were segmented manually. Average densities were extracted from unenhanced scans (HUunenhanced), postcontrast (HUpostcontrast) scans, and VNC reconstructions (HUVNC). The error was calculated as HUerror = HUVNC - HUunenhanced. Pearson's or Spearman's correlation was used to assess the association. Reproducibility was evaluated by intraclass correlation coefficients (ICC). RESULTS: Significant differences between HUunenhanced and HUVNC[unenhanced] were found in vertebrae, paraspinal muscles, liver, and spleen. HUVNC[unenhanced] showed a strong correlation with HUunenhanced in all organs except spleen (r = 0.45) and kidneys (r = 0.78 and 0.73). In all postcontrast phases, the HUVNC had strong correlations with HUunenhanced in all organs except the spleen and kidneys. The HUerror had significant correlations with HUunenhanced in the muscles and vertebrae; and with HUpostcontrast in the spleen, vertebrae, and paraspinal muscles in all postcontrast phases. All organs had at least one postcontrast VNC reconstruction that showed good-to-excellent agreement with HUunenhanced during ICC analysis except the vertebrae (ICC: 0.17), paraspinal muscles (ICC: 0.64-0.79), spleen (ICC: 0.21-0.47), and kidneys (ICC: 0.10-0.31). CONCLUSIONS: VNC reconstructions are reliable in at least one postcontrast phase for most organs, but further improvement is needed before VNC can be utilized to examine the spleen, kidneys, and vertebrae.
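
A minimal sketch of the per-organ error analysis is shown below, using hypothetical mean densities rather than the study data; the error definition follows the abstract, and SciPy supplies the Pearson and Spearman tests.

# Hedged sketch: VNC error per organ and its association with unenhanced HU.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
hu_unenhanced = rng.normal(45, 8, 34)           # e.g., liver mean HU per patient (hypothetical)
hu_vnc = hu_unenhanced + rng.normal(2, 5, 34)   # VNC from a postcontrast phase (hypothetical)

hu_error = hu_vnc - hu_unenhanced               # HUerror = HUVNC - HUunenhanced
r_pearson, p_pearson = stats.pearsonr(hu_vnc, hu_unenhanced)
rho_spearman, p_spearman = stats.spearmanr(hu_error, hu_unenhanced)
print(f"Pearson r = {r_pearson:.2f} (P = {p_pearson:.3f}); "
      f"Spearman rho (error vs unenhanced) = {rho_spearman:.2f} (P = {p_spearman:.3f})")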


Subject(s)
Tomography, X-Ray Computed; Humans; Female; Male; Reproducibility of Results; Middle Aged; Tomography, X-Ray Computed/methods; Aged; Spleen/diagnostic imaging; Liver/diagnostic imaging; Algorithms; Pancreatic Neoplasms/diagnostic imaging; Adult; Radiographic Image Interpretation, Computer-Assisted/methods; Aged, 80 and over; Paraspinal Muscles/diagnostic imaging; Photons; Spine/diagnostic imaging
10.
Eur J Radiol ; 180: 111685, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39197270

ABSTRACT

OBJECTIVE: To develop and externally validate a binary classification model for lumbar vertebral body fractures based on CT images using deep learning methods. METHODS: This study involved data collection from two hospitals for AI model training and external validation. In Cohort A from Hospital 1, CT images from 248 patients, comprising 1508 vertebrae, revealed that 20.9% had fractures (315 vertebrae) and 79.1% were non-fractured (1193 vertebrae). In Cohort B from Hospital 2, CT images from 148 patients, comprising 887 vertebrae, indicated that 14.8% had fractures (131 vertebrae) and 85.2% were non-fractured (756 vertebrae). The AI model for lumbar spine fractures underwent two stages: vertebral body segmentation and fracture classification. The first stage utilized a 3D V-Net convolutional deep neural network, which produced a 3D segmentation map. From this map, the region of each vertebral body was extracted and then input into the second stage of the algorithm. The second stage employed a 3D ResNet convolutional deep neural network to classify each proposed region as positive (fractured) or negative (not fractured). RESULTS: The AI model's accuracy for detecting vertebral fractures in Cohort A's training set (n = 1199), validation set (n = 157), and test set (n = 152) was 100.0 %, 96.2 %, and 97.4 %, respectively. For Cohort B (n = 148), the accuracy was 96.3 %. The area under the receiver operating characteristic curve (AUC-ROC) values for the training, validation, and test sets of Cohort A, as well as Cohort B, and their 95 % confidence intervals (CIs) were as follows: 1.000 (1.000, 1.000), 0.978 (0.944, 1.000), 0.986 (0.969, 1.000), and 0.981 (0.970, 0.992). The area under the precision-recall curve (AUC-PR) values were 1.000 (0.996, 1.000), 0.964 (0.927, 0.985), 0.907 (0.924, 0.984), and 0.890 (0.846, 0.971), respectively. According to the DeLong test, there was no significant difference in the AUC-ROC values between the test set of Cohort A and Cohort B, both for the overall data and for each specific vertebral location (all P > 0.05). CONCLUSION: The developed model demonstrates promising diagnostic accuracy and applicability for detecting lumbar vertebral fractures.
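
The two-stage inference flow can be sketched as follows; the tiny CNN stands in for the 3D ResNet, and the crop size, resampling, and preprocessing are illustrative assumptions rather than the authors' pipeline.

# Hedged sketch: split a segmentation map into per-vertebra regions, crop each
# region from the CT volume, and pass it to a 3D classifier.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy import ndimage

classifier = nn.Sequential(                      # placeholder for the 3D ResNet
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 2),
)

def classify_vertebrae(ct_volume, segmentation_mask, crop_size=(64, 64, 64)):
    labels, n = ndimage.label(segmentation_mask > 0)       # one component per vertebral body
    results = []
    for sl in ndimage.find_objects(labels):                 # bounding box of each vertebra
        crop = torch.from_numpy(ct_volume[sl]).float()[None, None]
        crop = F.interpolate(crop, size=crop_size, mode="trilinear", align_corners=False)
        prob_fracture = torch.softmax(classifier(crop), dim=1)[0, 1].item()
        results.append(prob_fracture)
    return results

# Toy volume with two "vertebrae" to show the flow end to end.
vol = np.random.rand(96, 96, 96).astype(np.float32)
mask = np.zeros_like(vol)
mask[10:30, 30:60, 30:60] = 1
mask[50:70, 30:60, 30:60] = 1
print(classify_vertebrae(vol, mask))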


Subject(s)
Deep Learning; Lumbar Vertebrae; Spinal Fractures; Tomography, X-Ray Computed; Humans; Spinal Fractures/diagnostic imaging; Lumbar Vertebrae/diagnostic imaging; Lumbar Vertebrae/injuries; Female; Male; Tomography, X-Ray Computed/methods; Aged; Middle Aged; Aged, 80 and over; Adult; Radiographic Image Interpretation, Computer-Assisted/methods; Reproducibility of Results
11.
Korean J Radiol ; 25(9): 833-842, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39197828

ABSTRACT

OBJECTIVE: To assess the effect of a new lung enhancement filter combined with deep learning image reconstruction (DLIR) algorithm on image quality and ground-glass nodule (GGN) sharpness compared to hybrid iterative reconstruction or DLIR alone. MATERIALS AND METHODS: Five artificial spherical GGNs with various densities (-250, -350, -450, -550, and -630 Hounsfield units) and 10 mm in diameter were placed in a thorax anthropomorphic phantom. Four scans at four different radiation dose levels were performed using a 256-slice CT (Revolution Apex CT, GE Healthcare). Each scan was reconstructed using three different reconstruction algorithms: adaptive statistical iterative reconstruction-V at a level of 50% (AR50), Truefidelity (TF), which is a DLIR method, and TF with a lung enhancement filter (TF + Lu). Thus, 12 sets of reconstructed images were obtained and analyzed. Image noise, signal-to-noise ratio, and contrast-to-noise ratio were compared among the three reconstruction algorithms. Nodule sharpness was compared among the three reconstruction algorithms using the full-width at half-maximum value. Furthermore, subjective image quality analysis was performed. RESULTS: AR50 demonstrated the highest level of noise, which was decreased by using TF + Lu and TF alone (P = 0.001). TF + Lu significantly improved nodule sharpness at all radiation doses compared to TF alone (P = 0.001). The nodule sharpness of TF + Lu was similar to that of AR50. Using TF alone resulted in the lowest nodule sharpness. CONCLUSION: Adding a lung enhancement filter to DLIR (TF + Lu) significantly improved the nodule sharpness compared to DLIR alone (TF). TF + Lu can be an effective reconstruction technique to enhance image quality and GGN evaluation in ultralow-dose chest CT scans.
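
A minimal sketch of a full-width at half-maximum measurement on a nodule line profile is shown below; the synthetic profile and the background/peak handling are simplifying assumptions, not the study's measurement tool.

# Hedged sketch: FWHM of a line profile drawn through a ground-glass nodule.
import numpy as np

def fwhm(positions_mm, profile_hu):
    background = profile_hu.min()
    half_max = background + (profile_hu.max() - background) / 2.0
    idx = np.where(profile_hu >= half_max)[0]
    # Linearly interpolate the half-maximum crossings on each side of the peak.
    left = np.interp(half_max, [profile_hu[idx[0] - 1], profile_hu[idx[0]]],
                     [positions_mm[idx[0] - 1], positions_mm[idx[0]]])
    right = np.interp(half_max, [profile_hu[idx[-1] + 1], profile_hu[idx[-1]]],
                      [positions_mm[idx[-1] + 1], positions_mm[idx[-1]]])
    return right - left

x = np.linspace(-10, 10, 201)                       # mm along the profile
profile = -800 + 350 * np.exp(-(x / 4.0) ** 2)      # synthetic -450 HU nodule on lung background
print(f"FWHM = {fwhm(x, profile):.2f} mm")          # sharper reconstructions keep this closer to truth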


Subject(s)
Algorithms; Deep Learning; Phantoms, Imaging; Radiographic Image Interpretation, Computer-Assisted; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Solitary Pulmonary Nodule/diagnostic imaging; Lung Neoplasms/diagnostic imaging; Radiation Dosage; Signal-To-Noise Ratio; Radiography, Thoracic/methods; Radiographic Image Enhancement/methods
12.
Radiology ; 312(2): e233197, 2024 08.
Article in English | MEDLINE | ID: mdl-39162636

ABSTRACT

Background Deep learning (DL) could improve the labor-intensive, challenging processes of diagnosing cerebral aneurysms but requires large multicenter data sets. Purpose To construct a DL model using a multicenter data set for accurate cerebral aneurysm segmentation and detection on CT angiography (CTA) images and to compare its performance with radiology reports. Materials and Methods Consecutive head or head and neck CTA images of suspected unruptured cerebral aneurysms were gathered retrospectively from eight hospitals between February 2018 and October 2021 for model development. An external test set with reference standard digital subtraction angiography (DSA) scans was obtained retrospectively from one of the eight hospitals between February 2022 and February 2023. Radiologists (reference standard) assessed aneurysm segmentation, while model performance was evaluated using the Dice similarity coefficient (DSC). The model's aneurysm detection performance was assessed by sensitivity and comparing areas under the receiver operating characteristic curves (AUCs) between the model and radiology reports in the DSA data set with use of the DeLong test. Results Images from 6060 patients (mean age, 56 years ± 12 [SD]; 3375 [55.7%] female) were included for model development (training: 4342; validation: 1086; and internal test set: 632). Another 118 patients (mean age, 59 years ± 14; 79 [66.9%] female) were included in an external test set to evaluate performance based on DSA. The model achieved a DSC of 0.87 for aneurysm segmentation performance in the internal test set. Using DSA, the model achieved 85.7% (108 of 126 aneurysms [95% CI: 78.1, 90.1]) sensitivity in detecting aneurysms on per-vessel analysis, with no evidence of a difference versus radiology reports (AUC, 0.93 [95% CI: 0.90, 0.95] vs 0.91 [95% CI: 0.87, 0.94]; P = .67). Model processing time from reconstruction to detection was 1.76 minutes ± 0.32 per scan. Conclusion The proposed DL model could accurately segment and detect cerebral aneurysms at CTA with no evidence of a significant difference in diagnostic performance compared with radiology reports. © RSNA, 2024 Supplemental material is available for this article. See also the editorial by Payabvash in this issue.
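
For reference, the Dice similarity coefficient used to score segmentation can be computed as in the sketch below; the random volumes stand in for a predicted and a reference mask.

# Hedged sketch: Dice similarity coefficient (DSC) between two binary 3D masks.
import numpy as np

def dice_coefficient(pred_mask, ref_mask, eps=1e-7):
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)

rng = np.random.default_rng(0)
pred = rng.random((64, 64, 64)) > 0.7
ref = rng.random((64, 64, 64)) > 0.7
print(f"DSC = {dice_coefficient(pred, ref):.3f}")   # toy example; real use compares model vs expert masks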


Subject(s)
Computed Tomography Angiography; Deep Learning; Intracranial Aneurysm; Humans; Intracranial Aneurysm/diagnostic imaging; Computed Tomography Angiography/methods; Female; Middle Aged; Male; Retrospective Studies; Cerebral Angiography/methods; Angiography, Digital Subtraction/methods; Adult; Aged; Radiographic Image Interpretation, Computer-Assisted/methods
13.
Radiology ; 312(2): e232303, 2024 08.
Article in English | MEDLINE | ID: mdl-39189901

ABSTRACT

Background Artificial intelligence (AI) systems can be used to identify interval breast cancers, although the localizations are not always accurate. Purpose To evaluate AI localizations of interval cancers (ICs) on screening mammograms by IC category and histopathologic characteristics. Materials and Methods A screening mammography data set (median patient age, 57 years [IQR, 52-64 years]) that had been assessed by two human readers from January 2011 to December 2018 was retrospectively analyzed using a commercial AI system. The AI outputs were lesion locations (heatmaps) and the highest per-lesion risk score (range, 0-100) assigned to each case. AI heatmaps were considered false positive (FP) if they occurred on normal screening mammograms or on IC screening mammograms (ie, in patients subsequently diagnosed with IC) but outside the cancer boundary. A panel of consultant radiology experts classified ICs as normal or benign (true negative [TN]), uncertain (minimal signs of malignancy [MS]), or suspicious (false negative [FN]). Several specificity and sensitivity thresholds were applied. Mann-Whitney U tests, Kruskal-Wallis tests, and χ2 tests were used to compare groups. Results A total of 2052 screening mammograms (514 ICs and 1548 normal mammograms) were included. The median AI risk score was 50 (IQR, 32-82) for TN ICs, 76 (IQR, 41-90) for ICs with MS, and 89 (IQR, 81-95) for FN ICs (P = .005). Higher median AI scores were observed for invasive tumors (62 [IQR, 39-88]) than for noninvasive tumors (33 [IQR, 20-55]; P < .01) and for high-grade (grade 2-3) tumors (62 [IQR, 40-87]) than for low-grade (grade 0-1) tumors (45 [IQR, 26-81]; P = .02). At the 96% specificity threshold, the AI algorithm flagged 121 of 514 (23.5%) ICs and correctly localized the IC in 93 of 121 (76.9%) cases, with 48 FP heatmaps on the mammograms for ICs (rate, 0.093 per case) and 74 FP heatmaps on normal mammograms (rate, 0.048 per case). The AI algorithm correctly localized a lower proportion of TN ICs (54 of 427; 12.6%) than ICs with MS (35 of 76; 46%) and FN ICs (four of eight; 50% [95% CI: 13, 88]; P < .001). The AI algorithm localized a higher proportion of node-positive than node-negative cancers (P = .03). However, no evidence of a difference by cancer type (P = .09), grade (P = .27), or hormone receptor status (P = .12) was found. At 89.8% specificity and 79% sensitivity thresholds, AI detection increased to 181 (35.2%) and 256 (49.8%) of the 514 ICs, respectively, with FP heatmaps on 158 (10.2%) and 307 (19.8%) of the 1548 normal mammograms. Conclusion Use of a standalone AI system improved early cancer detection by correctly identifying some cancers missed by two human readers, with no differences based on histopathologic features except for node-positive cancers. © RSNA, 2024 Supplemental material is available for this article.
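
The specificity-matched operating point can be illustrated as follows; the score threshold is derived from normal-mammogram scores and then applied to the interval-cancer cases, with synthetic score distributions standing in for the study data.

# Hedged sketch: derive a 96%-specificity threshold from normal screens, then
# count how many interval-cancer (IC) screens exceed it.
import numpy as np

rng = np.random.default_rng(3)
scores_normal = rng.beta(2, 5, 1548) * 100       # AI risk scores, normal screens (synthetic)
scores_interval = rng.beta(3, 3, 514) * 100      # AI risk scores, IC screens (synthetic)

target_specificity = 0.96
threshold = np.percentile(scores_normal, 100 * target_specificity)

flagged = scores_interval >= threshold
print(f"threshold = {threshold:.1f}")
print(f"flagged ICs: {flagged.sum()}/{scores_interval.size} ({100 * flagged.mean():.1f}%)")
print(f"false-positive normals: {(scores_normal >= threshold).sum()}")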


Subject(s)
Artificial Intelligence; Breast Neoplasms; Early Detection of Cancer; Mammography; Sensitivity and Specificity; Humans; Female; Breast Neoplasms/diagnostic imaging; Mammography/methods; Middle Aged; Retrospective Studies; Early Detection of Cancer/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Breast/diagnostic imaging; Breast/pathology; Reproducibility of Results
14.
PLoS One ; 19(8): e0300090, 2024.
Article in English | MEDLINE | ID: mdl-39186484

ABSTRACT

BACKGROUND: To evaluate the quantitative and qualitative image quality of pediatric cardiac computed tomography (CT) using deep learning image reconstruction (DLIR) compared with conventional image reconstruction methods. METHODS: Between January 2020 and December 2022, 109 pediatric cardiac CT scans were included in this study. The CT scans were reconstructed using an adaptive statistical iterative reconstruction-V (ASiR-V) with a blending factor of 80% and three levels of DLIR with TrueFidelity (low-, medium-, and high-strength settings). Quantitative image quality was measured using signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). The edge rise distance (ERD) and the angle between 25% and 75% of the line density profile were measured to evaluate sharpness. Qualitative image quality was assessed using visual grading analysis scores. RESULTS: A gradual improvement in the SNR and CNR was noted among the strength levels of the DLIR in sequence from low to high. Compared to ASiR-V, high-level DLIR showed significantly improved SNR and CNR (P<0.05). ERD decreased with increasing angle as the level of DLIR increased. CONCLUSION: High-level DLIR showed improved SNR and CNR compared to ASiR-V, with better sharpness on pediatric cardiac CT scans.
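
A minimal sketch of the quantitative metrics is given below; the abstract does not state its exact SNR and CNR formulas, so common ROI-based definitions and hypothetical ROI statistics are assumed.

# Hedged sketch of ROI-based image quality metrics (assumed definitions).
def snr(roi_mean, roi_sd):
    # Signal-to-noise ratio: ROI mean attenuation over its standard deviation.
    return roi_mean / roi_sd

def cnr(roi_mean, background_mean, background_sd):
    # Contrast-to-noise ratio: contrast between two ROIs over background noise.
    return abs(roi_mean - background_mean) / background_sd

# Hypothetical left-ventricle blood pool vs chest-wall muscle ROIs on one scan.
print(f"SNR = {snr(350.0, 18.0):.1f}")
print(f"CNR = {cnr(350.0, 60.0, 18.0):.1f}")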


Subject(s)
Deep Learning; Signal-To-Noise Ratio; Tomography, X-Ray Computed; Humans; Child; Tomography, X-Ray Computed/methods; Female; Child, Preschool; Male; Image Processing, Computer-Assisted/methods; Infant; Heart/diagnostic imaging; Adolescent; Radiographic Image Interpretation, Computer-Assisted/methods; Infant, Newborn
15.
Eur J Radiol ; 179: 111667, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39121746

ABSTRACT

OBJECTIVES: To evaluate the performance of artificial intelligence (AI) in the preoperative detection of lung metastases on CT. MATERIALS AND METHODS: Patients who underwent lung metastasectomy in our institution between 2016 and 2020 were enrolled, their preoperative CT reports having been performed before an AI solution (Veye Lung Nodules, version 3.9.2, Aidence) became available as a second reader in our department. All CT scans were retrospectively processed by AI. The sensitivities of unassisted radiologists (original CT radiology reports), AI reports alone and both combined were compared. Ground truth was established by a consensus reading of two radiologists, who analyzed whether the nodules mentioned in the pathology report were retrospectively visible on CT. Multivariate analysis was performed to identify nodule characteristics associated with detectability. RESULTS: A total of 167 patients (men: 62.9 %; median age, 59 years [47-68]) with 475 resected nodules were included. AI detected an average of 4 nodules (0-17) per CT, of which 97 % were true nodules. The combination of radiologist plus AI (92.4 %) had significantly higher sensitivity than unassisted radiologists (80.4 %) (p < 0.001). In 27/57 (47.4 %) patients who had multiple preoperative CT scans, AI detected lung nodules earlier than the radiologist. Vascular contact was associated with non-detection by radiologists (OR:0.32[0.19, 0.54], p < 0.001), whilst the presence of cavitation (OR:0.26[0.13, 0.54], p < 0.001) or pleural contact (OR:0.10[0.04, 0.22], p < 0.001) was associated with non-detection by AI. CONCLUSION: AI significantly increases the sensitivity of preoperative detection of lung metastases and enables earlier detection, with a significant potential benefit for patient management.
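
The multivariate analysis of detectability can be sketched as a logistic regression whose exponentiated coefficients give odds ratios; the binary feature matrix below is synthetic, and the use of statsmodels is an assumption, not the authors' software.

# Hedged sketch: logistic regression of "nodule detected" on nodule characteristics.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 475
X = rng.integers(0, 2, size=(n, 3)).astype(float)   # vascular contact, cavitation, pleural contact
logit_true = 1.2 - 1.1 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * X[:, 2]   # synthetic generating model
detected = rng.random(n) < 1 / (1 + np.exp(-logit_true))

model = sm.Logit(detected.astype(float), sm.add_constant(X)).fit(disp=0)
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(odds_ratios)      # OR < 1 indicates the feature is associated with non-detection
print(conf_int)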


Subject(s)
Artificial Intelligence; Lung Neoplasms; Sensitivity and Specificity; Tomography, X-Ray Computed; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Lung Neoplasms/secondary; Lung Neoplasms/surgery; Male; Female; Middle Aged; Retrospective Studies; Tomography, X-Ray Computed/methods; Aged; Preoperative Care/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Lung/diagnostic imaging; Lung/pathology
17.
Eur J Radiol ; 179: 111677, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39178684

ABSTRACT

PURPOSE: To investigate the diagnostic performance of an automatic pipeline for detecting hydronephrosis from the kidney parenchyma on unenhanced low-dose CT of the abdomen. METHODS: This retrospective study included 95 patients with confirmed unilateral hydronephrosis on an unenhanced low-dose CT of the abdomen. Data were split into training (n = 67) and test (n = 28) cohorts. Both kidneys of each patient were included in further analyses, with the kidney without hydronephrosis serving as the control. Using the training cohort, we developed a pipeline consisting of a deep-learning model for automatic segmentation of the kidney parenchyma (a convolutional neural network based on the nnU-Net architecture) and a radiomics classifier to detect hydronephrosis. The models were assessed using standard classification metrics, such as area under the ROC curve (AUC), sensitivity and specificity, as well as semantic segmentation metrics, including the Dice coefficient and Jaccard index. RESULTS: Using manual segmentation of the kidney parenchyma, hydronephrosis was detected with an AUC of 0.84, a sensitivity of 75%, a specificity of 82%, a PPV of 81%, and an NPV of 77%. Automatic kidney segmentation achieved a mean Dice score of 0.87 and 0.91 for the right and left kidney, respectively. With automatic segmentation, the pipeline achieved an AUC of 0.83, a sensitivity of 86%, a specificity of 64%, a PPV of 71%, and an NPV of 82%. CONCLUSION: Our proposed radiomics signature with automatic kidney parenchyma segmentation allows accurate hydronephrosis detection on unenhanced low-dose CT scans of the abdomen, independently of a widened renal pelvis. This method could be used in clinical routine to highlight hydronephrosis to radiologists as well as clinicians, especially in patients with concurrent parapelvic cysts, and might reduce the time and costs associated with diagnosing hydronephrosis.
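
A minimal sketch of the case-level evaluation step is shown below, with synthetic probabilities and labels standing in for the radiomics classifier output; the 0.5 decision threshold is an assumption.

# Hedged sketch: AUC plus threshold-dependent metrics for the hydronephrosis classifier.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(5)
y_true = np.repeat([1, 0], 28)                                     # hydronephrotic vs contralateral kidneys
y_prob = np.clip(y_true * 0.3 + rng.random(56) * 0.7, 0, 1)        # synthetic classifier probabilities

auc = roc_auc_score(y_true, y_prob)
tn, fp, fn, tp = confusion_matrix(y_true, (y_prob >= 0.5).astype(int)).ravel()
sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
ppv, npv = tp / (tp + fp), tn / (tn + fn)
print(f"AUC={auc:.2f} sens={sensitivity:.2f} spec={specificity:.2f} PPV={ppv:.2f} NPV={npv:.2f}")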


Subject(s)
Hydronephrosis; Radiation Dosage; Sensitivity and Specificity; Tomography, X-Ray Computed; Humans; Hydronephrosis/diagnostic imaging; Male; Female; Retrospective Studies; Tomography, X-Ray Computed/methods; Middle Aged; Aged; Adult; Radiographic Image Interpretation, Computer-Assisted/methods; Deep Learning; Aged, 80 and over; Radiomics
18.
Tomography ; 10(8): 1205-1221, 2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39195726

ABSTRACT

COVID-19 poses a global health crisis, necessitating precise diagnostic methods for timely containment. However, accurately delineating COVID-19-affected regions in lung CT scans is challenging due to contrast variations and significant texture diversity. In this regard, this study introduces a novel two-stage classification and segmentation CNN approach for COVID-19 lung radiological pattern analysis. A novel Residual-BRNet is developed to integrate boundary and regional operations with residual learning, capturing key COVID-19 radiological homogeneous regions, texture variations, and structural contrast patterns in the classification stage. Subsequently, infectious CT images undergo lesion segmentation using the newly proposed RESeg segmentation CNN in the second stage. The RESeg leverages both average and max-pooling implementations to simultaneously learn region homogeneity and boundary-related patterns. Furthermore, novel pixel attention (PA) blocks are integrated into RESeg to effectively address mildly COVID-19-infected regions. The evaluation of the proposed Residual-BRNet CNN in the classification stage demonstrates promising performance metrics, achieving an accuracy of 97.97%, F1-score of 98.01%, sensitivity of 98.42%, and MCC of 96.81%. Meanwhile, PA-RESeg in the segmentation phase achieves an optimal segmentation performance with an IoU score of 98.43% and a dice similarity score of 95.96% of the lesion region. The framework's effectiveness in detecting and segmenting COVID-19 lesions highlights its potential for clinical applications.
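
As a rough illustration, a generic pixel-attention block of the kind referenced above can be written in PyTorch as below; the layer sizes and its placement inside RESeg are assumptions, not the published architecture.

# Hedged sketch: a 1x1 convolution produces a per-pixel gating map that
# re-weights the feature maps, emphasizing subtle (mildly infected) regions.
import torch
import torch.nn as nn

class PixelAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),   # collapse channels to one attention map
            nn.Sigmoid(),                            # per-pixel weights in [0, 1]
        )

    def forward(self, features):
        attention = self.gate(features)              # (B, 1, H, W)
        return features * attention                  # broadcast over channels

block = PixelAttention(channels=64)
print(block(torch.randn(2, 64, 128, 128)).shape)     # torch.Size([2, 64, 128, 128])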


Subject(s)
COVID-19; Lung; SARS-CoV-2; Tomography, X-Ray Computed; Humans; COVID-19/diagnostic imaging; Tomography, X-Ray Computed/methods; Lung/diagnostic imaging; Deep Learning; Neural Networks, Computer; Radiographic Image Interpretation, Computer-Assisted/methods
19.
BMC Med Imaging ; 24(1): 205, 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39112928

ABSTRACT

Early disease identification and diagnosis are crucial for increasing the likelihood of timely treatment and complete recovery, and artificial intelligence supports this process by enabling the appropriate treatment protocol to be started in the early stages of disease development. Monitoring a patient's therapy and recovery is equally important for anticipating and preventing disease at the individual, family, and generational levels. The objective of this study is to describe a non-invasive method that uses mammograms to detect breast abnormalities, classify breast disorders, and distinguish cancerous from benign tumor tissue. Unlike previous work on the same dataset, we applied classification models to data that were pre-processed so that the number of samples per class was balanced. Supervised learning algorithms, namely random forest (RF) and decision tree (DT) classifiers, were used to examine up to thirty features, such as breast size, mass, diameter, circumference, and the nature of the tumor (solid or cystic), and the resulting predictions were used to determine whether the tissue was malignant or benign. These features largely determine how accurately cases can be classified. The DT classifier achieved an accuracy of 95.32%, while the RF classifier achieved a substantially higher 98.83%.
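
A minimal sketch of this classification setup with scikit-learn is shown below; the synthetic, class-balanced feature table stands in for the roughly thirty mammogram-derived attributes used in the study, so the printed accuracies are illustrative only.

# Hedged sketch: decision tree and random forest on a balanced feature table.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=30, n_informative=12,
                           weights=[0.5, 0.5], random_state=0)   # balanced classes
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

dt = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print(f"DT accuracy: {accuracy_score(y_test, dt.predict(X_test)):.4f}")
print(f"RF accuracy: {accuracy_score(y_test, rf.predict(X_test)):.4f}")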


Subject(s)
Breast Neoplasms; Mammography; Humans; Breast Neoplasms/diagnostic imaging; Female; Mammography/methods; Algorithms; Radiographic Image Interpretation, Computer-Assisted/methods; Sensitivity and Specificity; Decision Trees; Middle Aged
20.
BMC Med Imaging ; 24(1): 212, 2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39134937

ABSTRACT

BACKGROUND: Prostate cancer is one of the most common malignant tumors in middle-aged and elderly men and carries significant prognostic implications, and recent studies suggest that dual-energy computed tomography (DECT) utilizing new virtual monoenergetic images can enhance cancer detection rates. This study aimed to assess the impact of virtual monoenergetic images reconstructed from DECT arterial phase scans on the image quality of prostate lesions and their diagnostic performance for prostate cancer. METHODS: We conducted a retrospective analysis of 83 patients with prostate cancer or prostatic hyperplasia who underwent DECT scans at Meizhou People's Hospital between July 2019 and December 2023. The variables analyzed included age, tumor diameter and serum prostate-specific antigen (PSA) levels, among others. We also compared CT values, signal-to-noise ratio (SNR), subjective image quality ratings, and contrast-to-noise ratio (CNR) between virtual monoenergetic images (40-100 keV) and conventional linear blending images. Receiver operating characteristic (ROC) curve analyses were performed to evaluate the diagnostic efficacy of virtual monoenergetic images (40 keV and 50 keV) compared to conventional images. RESULTS: Virtual monoenergetic images at 40 keV showed significantly higher CT values (168.19 ± 57.14) compared to conventional linear blending images (66.66 ± 15.5) for prostate cancer (P < 0.001). The 50 keV images also demonstrated elevated CT values (121.73 ± 39.21) compared to conventional images (P < 0.001). CNR values for the 40 keV (3.81 ± 2.13) and 50 keV (2.95 ± 1.50) groups were significantly higher than the conventional blending group (P < 0.001). Subjective evaluations indicated markedly better image quality scores for 40 keV (median score of 5) and 50 keV (median score of 5) images compared to conventional images (P < 0.05). ROC curve analysis revealed superior diagnostic accuracy for 40 keV (AUC: 0.910) and 50 keV (AUC: 0.910) images based on CT values compared to conventional images (AUC: 0.849). CONCLUSIONS: Virtual monoenergetic images reconstructed at 40 keV and 50 keV from DECT arterial phase scans substantially enhance the image quality of prostate lesions and improve diagnostic efficacy for prostate cancer.
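
The ROC comparison can be sketched as below, treating the per-lesion CT value from each series as a one-dimensional classifier; the attenuation values are synthetic stand-ins for the 83-patient cohort.

# Hedged sketch: compare AUCs of 40 keV VMI vs conventional blended CT values.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)
is_cancer = np.repeat([1, 0], [40, 43])                              # cancer vs hyperplasia (synthetic split)
ct_40kev = np.where(is_cancer, rng.normal(168, 57, 83), rng.normal(110, 45, 83))
ct_blend = np.where(is_cancer, rng.normal(67, 16, 83), rng.normal(55, 14, 83))

print(f"AUC 40 keV:       {roc_auc_score(is_cancer, ct_40kev):.3f}")
print(f"AUC conventional: {roc_auc_score(is_cancer, ct_blend):.3f}")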


Subject(s)
Prostatic Neoplasms; Signal-To-Noise Ratio; Tomography, X-Ray Computed; Humans; Male; Prostatic Neoplasms/diagnostic imaging; Retrospective Studies; Aged; Middle Aged; Tomography, X-Ray Computed/methods; ROC Curve; Radiography, Dual-Energy Scanned Projection/methods; Prostatic Hyperplasia/diagnostic imaging; Aged, 80 and over; Radiographic Image Interpretation, Computer-Assisted/methods; Prostate-Specific Antigen/blood