Results 1 - 20 of 2,868
1.
NPJ Cardiovasc Health ; 1(1): 14, 2024.
Article in English | MEDLINE | ID: mdl-39246665

ABSTRACT

Indocyanine green (ICG)-enhanced intravascular near-infrared fluorescence (NIRF) imaging enhances the information obtained with intravascular ultrasound (IVUS) by visualizing pathobiological characteristics of atherosclerotic plaques. To advance our understanding of this hybrid method, we aimed to assess the potential of NIRF-IVUS to identify different stages of atheroma progression by characterizing ICG uptake in human pathological specimens. After excision, 15 human coronary specimens from 13 adult patients were ICG-perfused and imaged with NIRF-IVUS. All specimens were then histopathologically and immunohistochemically assessed. NIRF-IVUS imaging revealed colocalization of ICG deposition to plaque areas of lipid accumulation, endothelial disruption, neovascularization and inflammation. Moreover, ICG concentrations were significantly higher in advanced coronary artery disease stages (p < 0.05) and correlated significantly with plaque macrophage burden (r = 0.67). Current intravascular methods fail to detect plaque biology; here, we demonstrate how human coronary atheroma stage can be assessed from pathobiological characteristics uniquely captured by ICG-enhanced intravascular NIRF.
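The plaque-level association reported above (r = 0.67 between ICG concentration and macrophage burden) is a standard Pearson coefficient; a minimal sketch with made-up illustrative values, not the study's measurements:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ICG concentration vs. macrophage burden per plaque section
icg = [0.2, 0.5, 0.9, 1.4, 2.1]
macrophage = [1.0, 2.2, 2.0, 3.5, 4.1]
r = pearson_r(icg, macrophage)
```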

2.
Sci Rep ; 14(1): 20647, 2024 09 04.
Article in English | MEDLINE | ID: mdl-39232180

ABSTRACT

Lung cancer (LC) is a life-threatening disease worldwide; however, early diagnosis and treatment can save lives. Early detection of malignant cells in the lungs, the organs responsible for oxygenating the body and expelling carbon dioxide, is therefore critical. Although computed tomography (CT) is the imaging approach of choice in the healthcare sector, identifying and interpreting tumours on CT scans remains challenging for physicians. LC diagnosis on CT using artificial intelligence (AI) can help radiologists diagnose earlier, enhance performance, and decrease false negatives. Deep learning (DL) for detecting lymph node involvement on histopathological slides has also become popular owing to its great significance for patient diagnosis and treatment. This study introduces a computer-aided diagnosis system for LC that combines the Waterwheel Plant Algorithm with DL (CADLC-WWPADL). The primary aim of the CADLC-WWPADL approach is to identify and classify LC on CT scans. The method uses a lightweight MobileNet model for feature extraction and employs the WWPA for hyperparameter tuning. Furthermore, a symmetrical autoencoder (SAE) model is utilized for classification. An experimental evaluation demonstrates the detection performance of the CADLC-WWPADL technique: an extensive comparative study showed that it outperforms other models, reaching a maximum accuracy of 99.05% on a benchmark CT image dataset.


Subject(s)
Algorithms; Deep Learning; Diagnosis, Computer-Assisted; Lung Neoplasms; Tomography, X-Ray Computed; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/diagnosis; Lung Neoplasms/pathology; Tomography, X-Ray Computed/methods; Diagnosis, Computer-Assisted/methods
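The tuning step described above (the WWPA selecting hyperparameters for the classifier) follows the generic metaheuristic pattern of sampling configurations and keeping the best scorer. A minimal stand-in using random search over a toy objective (the WWPA itself and the real validation score are not reproduced here):

```python
import random

def random_search(score_fn, space, n_iter=50, seed=0):
    """Minimal hyperparameter search: sample configurations at random and
    keep the best-scoring one (a simple stand-in for metaheuristics such
    as the Waterwheel Plant Algorithm)."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_iter):
        cfg = {k: rng.choice(v) for k, v in space.items()}
        s = score_fn(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

# Toy objective standing in for the validation accuracy of the classifier:
# peaks at lr=0.01, depth=3, and is negative elsewhere.
def toy_score(cfg):
    return -abs(cfg["lr"] - 0.01) - 0.1 * abs(cfg["depth"] - 3)

space = {"lr": [0.1, 0.05, 0.01, 0.005], "depth": [1, 2, 3, 4, 5]}
best_cfg, best_score = random_search(toy_score, space)
```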
3.
Comput Struct Biotechnol J ; 24: 542-560, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39252818

ABSTRACT

This systematic literature review examines state-of-the-art Explainable Artificial Intelligence (XAI) methods applied to medical image analysis, discussing current challenges and future research directions, and exploring the evaluation metrics used to assess XAI approaches. With the growing efficiency of Machine Learning (ML) and Deep Learning (DL) in medical applications, there is a critical need for their adoption in healthcare. However, their "black-box" nature, in which decisions are made without clear explanations, hinders acceptance in clinical settings where decisions have significant medicolegal consequences. Our review highlights advanced XAI methods and identifies how they address the need for transparency and trust in ML/DL decisions. We also outline the challenges faced by these methods and propose future research directions to improve XAI in healthcare. This paper aims to bridge the gap between cutting-edge computational techniques and their practical application in healthcare, nurturing a more transparent, trustworthy, and effective use of AI in medical settings. The insights guide both research and industry, promoting innovation and standardisation in XAI implementation in healthcare.

4.
Neuroimage Clin ; 44: 103668, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39265321

ABSTRACT

The VASARI MRI feature set is a quantitative system designed to standardise glioma imaging descriptions. Though effective, deriving VASARI features is time-consuming, and the system is seldom used clinically. We sought to resolve this problem with software automation and machine learning. Using glioma data from 1172 patients, we developed VASARI-auto, an automated labelling software applied to open-source lesion masks and an openly available tumour segmentation model. Consultant neuroradiologists independently quantified VASARI features in 100 held-out glioblastoma cases. We quantified 1) agreement across neuroradiologists and VASARI-auto, 2) software equity, and 3) fidelity in predicting survival, and performed 4) an economic workforce analysis. Tumour segmentation was compatible with the current state of the art and equally performant regardless of age or sex. The modest inter-rater variability between in-house neuroradiologists was comparable to that between neuroradiologists and VASARI-auto, with far higher agreement between VASARI-auto methods. The time for neuroradiologists to derive VASARI was substantially higher than for VASARI-auto (mean time per case 317 vs. 3 s). A UK hospital workforce analysis forecast that three years of VASARI featurisation would demand 29,777 consultant neuroradiologist hours and >£1.5 ($1.9) million, reducible to 332 hours of computing time (and £146 of power) with VASARI-auto. The best-performing survival model utilised VASARI-auto features instead of those derived by neuroradiologists. VASARI-auto is a highly efficient and equitable automated labelling system with a favourable economic profile if used as a decision support tool and non-inferior survival prediction. Future work should iterate upon and integrate such tools to enhance patient care.
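The workforce economics above rest on simple per-case arithmetic; a sketch using the reported per-case times (317 s vs. 3 s) with a hypothetical three-year caseload, not a reconstruction of the paper's exact costing:

```python
def workforce_hours(n_cases, seconds_per_case):
    """Total hours to featurise n_cases at a fixed per-case time."""
    return n_cases * seconds_per_case / 3600.0

# Mean per-case times reported in the abstract
HUMAN_S, AUTO_S = 317, 3

# Hypothetical 3-year caseload (illustrative, not the paper's figure)
n_cases = 300_000
human_h = workforce_hours(n_cases, HUMAN_S)
auto_h = workforce_hours(n_cases, AUTO_S)
speedup = human_h / auto_h  # ~106x, independent of caseload
```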

5.
Technol Cancer Res Treat ; 23: 15330338241277389, 2024.
Article in English | MEDLINE | ID: mdl-39267420

ABSTRACT

Through meticulous examination of lymph nodes, the stage and severity of cancer can be determined. This information is invaluable for doctors in selecting the most appropriate treatment plan and predicting patient prognosis; conversely, any oversight in the examination of lymph nodes may lead to undetected metastasis and poor prognosis. In this review, we summarize a significant number of articles supported by statistical data and clinical experience, proposing a standardized evaluation protocol for lymph nodes. This protocol begins with preoperative imaging to assess the presence of lymph node metastasis. Radiomics has replaced the single-modality approach, and deep learning models have been constructed to assist in image analysis with performance superior to that of the human eye. The focus of this review lies in intraoperative lymphadenectomy. Multiple international authorities have recommended specific lymphadenectomy counts for various cancers, providing surgeons with clear guidelines; these numbers are calculated from various statistical methods and real-world data. In the third section, we address the growing concern about immune impairment caused by lymph node dissection, as the loss of CD8 memory T cells may negatively affect postoperative immunotherapy. Both excessive and insufficient lymph node dissection have produced conflicting findings on postoperative immunotherapy. In conclusion, we propose a protocol that surgeons can reference. With systematic management of lymph nodes, we can control tumor progression with the greatest possible likelihood, optimize the preoperative examination process, reduce intraoperative risks, and improve postoperative quality of life.


Subject(s)
Lymph Node Excision; Lymph Nodes; Lymphatic Metastasis; Neoplasms; Humans; Neoplasms/pathology; Lymph Nodes/pathology; Lymphatic Metastasis/pathology; Neoplasm Staging; Prognosis; Genomics/methods; Multiomics
6.
Diagnostics (Basel) ; 14(17)2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39272662

ABSTRACT

This multicenter retrospective study evaluated the diagnostic performance of a deep learning (DL)-based application for detecting, classifying, and highlighting suspected aortic dissections (ADs) on chest and thoraco-abdominal CT angiography (CTA) scans. CTA scans from over 200 U.S. and European cities, acquired on 52 scanner models from six manufacturers, were retrospectively collected and processed by the CINA-CHEST (AD) device (Avicenna.AI, La Ciotat, France). The diagnostic performance of the device was compared with the ground truth established by the majority agreement of three U.S. board-certified radiologists. Furthermore, the DL algorithm's time to notification was evaluated to demonstrate clinical effectiveness. The study included 1303 CTAs (mean age 58.8 ± 16.4 years, 46.7% male, 10.5% positive). The device demonstrated a sensitivity of 94.2% [95% CI: 88.8-97.5%] and a specificity of 97.3% [95% CI: 96.2-98.1%]. The application classified positive cases by AD type with an accuracy of 99.5% [95% CI: 98.9-99.8%] for type A and 97.5% [95% CI: 96.4-98.3%] for type B, and did not miss any type A cases. The device flagged 32 cases incorrectly, primarily due to acquisition artefacts and aortic pathologies mimicking AD. The mean time to process and notify of potential AD cases was 27.9 ± 8.7 s. This deep learning-based application demonstrated strong performance in detecting and classifying aortic dissection cases, potentially enabling faster triage of these urgent cases in clinical settings.
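The sensitivity and specificity figures with bracketed 95% CIs above derive from confusion-matrix counts; a sketch with counts chosen to be consistent with the reported cohort (1303 scans, 10.5% positive) but not the study's exact data, using the Wilson score interval as one common CI choice:

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score confidence interval for a proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts: ~137 positives and ~1166 negatives of 1303 scans
sens, spec = sens_spec(tp=129, fn=8, tn=1134, fp=32)
sens_lo, sens_hi = wilson_ci(129, 137)
```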

7.
Sensors (Basel) ; 24(17)2024 Sep 08.
Article in English | MEDLINE | ID: mdl-39275737

ABSTRACT

In this paper, a new light-event acquisition chain in a three-gamma liquid xenon prototype for medical nuclear imaging is presented. The prototype implements the Multi-Time-Over-Threshold (MTOT) method for light signal measurement. This method surpasses the Single-Time-Over-Threshold (STOT) approach by precisely determining both the number of vacuum ultraviolet (VUV) photons detected by each photomultiplier tube (PMT) and their arrival times. Based on both experimental and simulated results, the MTOT method achieved a 70% improvement in reconstructing photoelectrons (PEs) and enhanced the precision of the arrival-time estimation by 20-30% compared with STOT. These results will enable an upgrade of the XEMIS2 (Xenon Medical Imaging System) camera, improving its performance as the imaged activity increases.
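The STOT/MTOT distinction above reduces to how many over-threshold intervals of the PMT waveform are recorded: one versus all of them. A toy illustration (not the XEMIS2 electronics):

```python
def over_threshold_intervals(samples, threshold):
    """Return (start, end) index pairs where the waveform is above threshold.
    A single-TOT scheme keeps only one such interval; multi-TOT keeps all,
    recovering separate light pulses and their arrival times."""
    intervals, start = [], None
    for i, v in enumerate(samples):
        if v > threshold and start is None:
            start = i
        elif v <= threshold and start is not None:
            intervals.append((start, i))
            start = None
    if start is not None:
        intervals.append((start, len(samples)))
    return intervals

# Toy PMT waveform with two separate light pulses
wave = [0, 1, 5, 7, 2, 0, 0, 4, 6, 1, 0]
mtot = over_threshold_intervals(wave, threshold=3)
stot = mtot[:1]  # single-TOT sees only the first pulse
```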

8.
Comput Med Imaging Graph ; 117: 102433, 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39276433

ABSTRACT

Oral squamous cell carcinoma recognition presents a challenge due to late diagnosis and costly data acquisition. A cost-efficient, computerized screening system is crucial for early disease detection, minimizing the need for expert intervention and expensive analysis. Moreover, transparency is essential to align these systems with critical-sector applications. Explainable Artificial Intelligence (XAI) provides techniques for understanding models; however, current XAI is mostly data-driven and focused on developers' requirements for improving models rather than clinical users' demands for relevant insights. Among the different XAI strategies, we propose a solution that combines the Case-Based Reasoning paradigm, to provide visual output explanations, with Informed Deep Learning (IDL), to integrate medical knowledge within the system. A key aspect of our solution lies in its capability to handle data imperfections, including labeling inaccuracies and artifacts, thanks to an ensemble architecture on top of the deep learning (DL) workflow. We conducted several experimental benchmarks on a dataset collected in collaboration with medical centers. Our findings reveal that the IDL approach yields an accuracy of 85%, surpassing the 77% achieved by DL alone. Furthermore, we measured the human-centered explainability of the two approaches and found that IDL generates explanations more congruent with clinical users' demands.
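Case-Based Reasoning explains a prediction by retrieving the most similar labelled precedents from a case base. A minimal retrieval sketch over hypothetical feature vectors (the paper's actual backbone and case base are not reproduced):

```python
import math

def retrieve(query, case_base, k=2):
    """Return the k cases nearest to the query in feature space; in a CBR
    explanation these act as visual precedents for the prediction."""
    ranked = sorted(case_base,
                    key=lambda c: math.dist(query, c["features"]))
    return ranked[:k]

# Hypothetical feature vectors as a DL backbone might produce them
case_base = [
    {"id": "case-a", "features": [0.1, 0.9], "label": "benign"},
    {"id": "case-b", "features": [0.8, 0.2], "label": "malignant"},
    {"id": "case-c", "features": [0.7, 0.3], "label": "malignant"},
]
neighbours = retrieve([0.75, 0.25], case_base, k=2)
```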

9.
Quant Imaging Med Surg ; 14(9): 6566-6578, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39281143

ABSTRACT

Background: Optic nerve imaging is crucial for diagnosing and understanding optic neuropathies because it provides detailed visualization of the nerve's structure and pathologies through advanced modalities. This study conducted a bibliometric analysis of the field of optic nerve imaging, aiming to pinpoint the latest research trends and focal points. Methods: The core literature on optic nerve imaging published between January 1991 and August 2023 was retrieved from the Web of Science Core Collection. Scientific productivity and emerging trends were analyzed and visualized using Bibliometrix, CiteSpace, Gephi, VOSviewer, R, and Python. Results: In total, 15,247 publications on optic nerve imaging were included in the analysis. Notably, the top three journals contributing to this field were Investigative Ophthalmology & Visual Science, Ophthalmology, and the British Journal of Ophthalmology. Research on optic nerve imaging extended across 97 countries, with the USA leading in research endeavors. Burst-term analysis revealed that "segmentation" and "machine learning" are gaining attention, and a Latent Dirichlet Allocation model indicated that image processing has been a hotspot in recent years. Conclusions: This study revealed the research trends, hotspots, and emerging topics in optic nerve imaging through bibliometric analysis and network visualization. At present, the research focus is directed towards employing artificial intelligence for image post-processing. The findings offer valuable insights into future research directions and clinical applications.

10.
Front Physiol ; 15: 1432121, 2024.
Article in English | MEDLINE | ID: mdl-39282086

ABSTRACT

Objective: To develop and validate a method for detecting ureteral stent encrustations in medical CT images based on Mask R-CNN and 3D morphological analysis. Method: All 222 cases of ureteral stent data were obtained from the Fifth Affiliated Hospital of Sun Yat-sen University. First, a neural network was used to detect the region of the ureteral stent; the coarse detections were then completed and filtered by connected-component analysis, based on the continuity of the ureteral stent in 3D space, to obtain a 3D segmentation result. Second, the segmentation results were analyzed based on 3D morphology: the centerline was obtained by thinning the 3D image, the ureteral stent was fitted and its derivative taken, and radial sections were extracted. Finally, abnormal areas of the radial sections were detected through polar coordinate transformation to localize the encrustation area of the ureteral stent. Results: For the detection of ureteral stent encrustations in the ureter, the algorithm achieved an accuracy of 79.6% in the validation of residual stones/ureteral stent encrustations at 186 locations. Ultimately, the algorithm was validated in all 222 cases, achieving a ureteral stent segmentation accuracy of 94.4% and a positive/negative judgment accuracy of 87.3%. The average detection time per case was 12 s. Conclusion: The proposed CT-based ureteral stent encrustation detection method, combining Mask R-CNN and 3D morphological analysis, can effectively assist clinical doctors in diagnosing ureteral stent encrustations.
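The final step above, detecting abnormal radial sections via a polar transform, can be illustrated by converting section boundary points to radii about the centerline and flagging large deviations; the coordinates and threshold below are hypothetical, not the paper's parameters:

```python
import math

def polar_radii(points, centre):
    """Radius of each boundary point about the section centre
    (the radial part of a polar-coordinate transform)."""
    cx, cy = centre
    return [math.hypot(x - cx, y - cy) for x, y in points]

def encrusted(radii, tol=0.3):
    """Flag a radial section as abnormal when any radius deviates from the
    median by more than `tol` (a stand-in threshold for encrustation bulges)."""
    med = sorted(radii)[len(radii) // 2]
    return any(abs(r - med) > tol for r in radii)

# A clean circular section vs. one with a bulge (hypothetical coordinates)
clean = [(1, 0), (0, 1), (-1, 0), (0, -1)]
bulged = [(1, 0), (0, 1.8), (-1, 0), (0, -1)]
flag_clean = encrusted(polar_radii(clean, (0, 0)))
flag_bulged = encrusted(polar_radii(bulged, (0, 0)))
```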

11.
NPJ Biosens ; 1(1): 11, 2024.
Article in English | MEDLINE | ID: mdl-39286049

ABSTRACT

Vagus nerve stimulation (VNS) is an FDA-approved stimulation therapy to treat patients with refractory epilepsy. In this work, we use a coherent holographic imaging system to characterize vagus nerve-evoked potentials (VEPs) in the cortex in response to VNS stimulation paradigms without electrode placement or any genetic, structural, or functional labels. We analyze stimulation amplitude up to saturation, pulse width up to 800 µs, and frequency from 10 Hz to 30 Hz, finding that stimulation amplitude strongly modulates VEPs response magnitude (effect size 0.401), while pulse width has a moderate modulatory effect (effect size 0.127) and frequency has almost no modulatory effect (effect size 0.009) on the evoked potential magnitude. We find mild interactions between pulse width and frequency. This non-contact label-free functional imaging technique may serve as a non-invasive rapid-feedback tool to characterize VEPs and may increase the efficacy of VNS in patients with refractory epilepsy.
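The effect sizes quoted above express how strongly each stimulation parameter modulates VEP magnitude; one common such measure is η² from a one-way analysis of variance, sketched here on toy data (not the study's recordings, whose exact effect-size definition is not given in the abstract):

```python
def eta_squared(groups):
    """Effect size eta-squared = between-group sum of squares divided by
    total sum of squares: the fraction of response variance explained by
    the factor (e.g. stimulation amplitude)."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_total = sum((v - grand) ** 2 for v in all_vals)
    ss_between = sum(
        len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups
    )
    return ss_between / ss_total

# Hypothetical VEP magnitudes at three amplitudes: strong modulation
amp_groups = [[1.0, 1.1, 0.9], [2.0, 2.1, 1.9], [3.0, 3.1, 2.9]]
strong = eta_squared(amp_groups)
# Hypothetical VEP magnitudes at three frequencies: almost no modulation
freq_groups = [[2.0, 2.1, 1.9], [2.0, 2.2, 1.8], [1.9, 2.1, 2.0]]
weak = eta_squared(freq_groups)
```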

12.
Cell Rep Med ; 5(9): 101713, 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39241771

ABSTRACT

Reliably detecting potentially misleading patterns in automated diagnostic assistance systems, such as those powered by artificial intelligence (AI), is crucial for instilling user trust and ensuring reliability. Current techniques fall short in visualizing such confounding factors. We propose DiffChest, a self-conditioned diffusion model trained on 515,704 chest radiographs from 194,956 patients across the US and Europe. DiffChest provides patient-specific explanations and visualizes confounding factors that might mislead the model. The high inter-reader agreement, with Fleiss' kappa values of 0.8 or higher, validates its capability to identify treatment-related confounders. Confounders are accurately detected across prevalence rates of 10%-100%. The pretraining process optimizes the model for relevant imaging information, resulting in excellent diagnostic accuracy for 11 chest conditions, including pleural effusion and heart insufficiency. Our findings highlight the potential of diffusion models in medical image classification, providing insights into confounding factors and enhancing model robustness and reliability.


Subject(s)
Artificial Intelligence; Humans; Male; Female; Reproducibility of Results; Radiographic Image Interpretation, Computer-Assisted/methods; Middle Aged; Radiography, Thoracic/methods; Aged; Adult; Algorithms; Image Processing, Computer-Assisted/methods
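The inter-reader agreement above is measured with Fleiss' kappa, which generalizes Cohen's kappa to multiple raters; a self-contained implementation on a toy rating table:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a table counts[subject][category] of rater votes,
    assuming the same number of raters per subject."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    total = n_subjects * n_raters
    # Observed agreement per subject, averaged
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_i) / n_subjects
    # Chance agreement from the marginal category proportions
    p_j = [sum(row[j] for row in counts) / total
           for j in range(len(counts[0]))]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Three readers, two categories (e.g. confounder present / absent), toy data
ratings = [[3, 0], [3, 0], [0, 3], [2, 1], [3, 0]]
kappa = fleiss_kappa(ratings)
```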
13.
Comput Med Imaging Graph ; 117: 102434, 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39284244

ABSTRACT

Accurate segmentation of the pancreas in computed tomography (CT) holds paramount importance in diagnostics, surgical planning, and interventions. Recent studies have proposed supervised deep-learning models for segmentation, but their efficacy relies on the quality and quantity of the training data. Most such works employed small-scale public datasets without demonstrating generalization to external datasets. This study explored the optimization of pancreas segmentation accuracy by pinpointing the ideal dataset size, understanding resource implications, examining the impact of manual refinement, and assessing the influence of anatomical subregions. We present the AIMS-1300 dataset encompassing 1,300 CT scans; its manual annotation by medical experts required 938 h. A 2.5D UNet was implemented to assess the impact of training sample size on segmentation accuracy by partitioning the original AIMS-1300 dataset into 11 smaller subsets of progressively increasing size. The findings revealed that training sets exceeding 440 CTs did not lead to better segmentation performance, whereas nnU-Net and UNet with Attention Gate reached a plateau at 585 CTs. Tests of generalization on the publicly available AMOS-CT dataset confirmed this outcome. As the size of the AIMS-1300 training partition increases, the number of error slices decreases, reaching a minimum at 730 and 440 CTs for the AIMS-1300 and AMOS-CT datasets, respectively. Segmentation metrics on the AIMS-1300 and AMOS-CT datasets improved more on the head than on the body and tail of the pancreas as the dataset size increased. By carefully considering the task and the characteristics of the available data, researchers can develop deep learning models without sacrificing performance even with limited data. This could accelerate the development and deployment of artificial intelligence tools for pancreas surgery and other surgical data science applications.
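The plateau finding above (no gain beyond a certain training-set size) corresponds to a simple learning-curve criterion: the smallest size after which no larger subset improves the best score by more than a tolerance. A sketch with hypothetical Dice scores and an assumed improvement threshold (neither taken from the paper):

```python
def plateau_size(sizes, scores, eps=0.002):
    """Smallest training-set size after which adding data never improves
    the best-so-far score by more than eps (a simple plateau criterion)."""
    best = scores[0]
    answer = sizes[0]
    for size, score in zip(sizes[1:], scores[1:]):
        if score > best + eps:
            best = score
            answer = size
    return answer

# Hypothetical Dice scores over growing subsets (illustrative values)
sizes = [110, 220, 330, 440, 585, 730, 1040]
dice = [0.71, 0.76, 0.79, 0.81, 0.811, 0.812, 0.811]
plateau = plateau_size(sizes, dice)
```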

14.
J Imaging Inform Med ; 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39284981

ABSTRACT

Machine learning (ML) models often fail on data that deviates from their training distribution. This is a significant concern for ML-enabled devices, as data drift may lead to unexpected performance. This work introduces a new framework for out-of-distribution (OOD) detection and data drift monitoring that combines ML and geometric methods with statistical process control (SPC). We investigated different design choices, including methods for extracting feature representations and quantifying drift, both for OOD detection in individual images and as an approach to input data monitoring. We evaluated the framework for identifying OOD images and for detecting shifts in data streams over time. We demonstrated a proof of concept via the following tasks: 1) differentiating axial vs. non-axial CT images, 2) differentiating CXR vs. other radiographic imaging modalities, and 3) differentiating adult CXR vs. pediatric CXR. For the identification of individual OOD images, our framework achieved high sensitivity in detecting OOD inputs: 0.980 in CT, 0.984 in CXR, and 0.854 in pediatric CXR. Our framework is also adept at monitoring data streams and identifying the time a drift occurred. In our simulations tracking drift over time, it effectively detected a shift from CXR to non-CXR instantly, a transition from axial to non-axial CT within a few days, and a drift from adult to pediatric CXR within a day, all while maintaining a low false positive rate. Through additional experiments, we demonstrate that the framework is modality-agnostic and independent of the underlying model structure, making it highly customizable for specific applications and broadly applicable across different imaging modalities and deployed ML models.
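The SPC component above can be as simple as a Shewhart-style control chart on a per-image drift statistic: fit limits on in-distribution data, then flag the first out-of-limits point in the stream. A toy sketch (the distance statistic and three-sigma limits are assumptions, not the framework's exact design):

```python
def control_limits(baseline, k=3.0):
    """Shewhart-style limits (mean +/- k*sd) from an in-distribution baseline."""
    n = len(baseline)
    mean = sum(baseline) / n
    var = sum((x - mean) ** 2 for x in baseline) / (n - 1)
    return mean - k * var ** 0.5, mean + k * var ** 0.5

def first_drift(stream, limits):
    """Index of the first point outside the control limits, or None."""
    lo, hi = limits
    for i, x in enumerate(stream):
        if not lo <= x <= hi:
            return i
    return None

# Toy per-image distances to the training-feature centroid
baseline = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.1, 0.9]
limits = control_limits(baseline)
stream = [1.0, 1.1, 0.95, 4.2, 4.5]  # drift begins at index 3
drift_at = first_drift(stream, limits)
```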

15.
BMC Med Educ ; 24(1): 969, 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39237930

ABSTRACT

BACKGROUND: Diagnostic radiology residents in low- and middle-income countries (LMICs) may have to provide significant contributions to the clinical workload before the completion of their residency training. Because of the time constraints inherent to the delivery of acute care, some of the most clinically impactful diagnostic radiology errors arise from the use of Computed Tomography (CT) in the management of acutely ill patients. As a result, it is paramount to ensure that radiology trainees reach adequate skill levels prior to assuming independent on-call responsibilities. We partnered with the radiology residency program at the Aga Khan University Hospital, Nairobi (AKUHN), Kenya, to evaluate a novel cloud-based testing method that provides an authentic radiology viewing and interpretation environment. It is based on Lifetrack, a unique Google Chrome-based Picture Archiving and Communication System that enables a complete viewing environment for any scan and provides a novel report generation tool based on Active Templates, a patented structured reporting method. We applied it to evaluate the skills of AKUHN trainees on entire CT scans representing the spectrum of acute non-trauma abdominal pathology encountered in a typical on-call setting. We aimed to demonstrate the feasibility of remotely testing the authentic practice of radiology and to show that important observations can be made from such a Lifetrack-based testing approach regarding the radiology skills of an individual practitioner or of a cohort of trainees. METHODS: A total of 13 anonymized trainees with experience ranging from 12 months to over 4 years took part in the study. Individually accessing the Lifetrack tool, they were tested on 37 abdominal CT scans (including one normal scan) over six 2-hour sessions on consecutive days. All cases carried the same clinical history of acute abdominal pain.
During each session the trainees accessed the corresponding Lifetrack test set using clinical workstations, reviewed the CT scans, and formulated an opinion on the acute diagnosis, any secondary pathology, and incidental findings on the scan. Their scan interpretations were composed using the Lifetrack report generation system based on active templates, in which segments of text can be selected to assemble a detailed report. All reports generated by the trainees were scored on four interpretive components: (a) acute diagnosis, (b) unrelated secondary diagnosis, (c) number of missed incidental findings, and (d) number of overcalls. A 3-score aggregate was defined from the first three interpretive elements; a cumulative score modified the 3-score aggregate for the negative effect of interpretive overcalls. RESULTS: A total of 436 scan interpretations and scores were available from 13 trainees tested on 37 cases. The acute diagnosis score ranged from 0 to 1, with a mean of 0.68 ± 0.36 and a median of 0.78 (IQR: 0.5-1). An unrelated secondary diagnosis was present in 11 cases, resulting in 130 secondary diagnosis scores; this score ranged from 0 to 1, with a mean of 0.48 ± 0.46 and a median of 0.5 (IQR: 0-1). There were 32 cases with incidental findings, yielding 390 incidental findings scores. The number of missed incidental findings ranged from 0 to 5, with a median of 1 (IQR: 1-2); the incidental findings score ranged from 0 to 1, with a mean of 0.4 ± 0.38 and a median of 0.33 (IQR: 0-0.66). The number of overcalls ranged from 0 to 3, with a median of 0 (IQR: 0-1) and a mean of 0.36 ± 0.63. The 3-score aggregate ranged from 0 to 100, with a mean of 65.5 ± 32.5 and a median of 77.3 (IQR: 45.0-92.5). The cumulative score ranged from -30 to 100, with a mean of 61.9 ± 35.5 and a median of 71.4 (IQR: 37.4-92.0).
The mean acute diagnosis scores (± SD) by training period were 0.62 ± 0.03, 0.80 ± 0.05, 0.71 ± 0.05, 0.58 ± 0.07, and 0.66 ± 0.05 for trainees with ≤ 12 months, 12-24 months, 24-36 months, 36-48 months, and > 48 months of training, respectively. The mean acute diagnosis score for 12-24 months of training was the only score significantly greater than that for ≤ 12 months by ANOVA with Tukey testing (p = 0.0002). We found a similar trend in the distribution of 3-score aggregates and cumulative scores. There were no significant associations when the training period was categorized as less than or more than 2 years. Examining the distribution of the 3-score aggregate versus the number of overcalls by trainee, we found that the 3-score aggregate was inversely related to the number of overcalls. Heatmaps and raincloud plots provided an illustrative means of visualizing the relative performance of trainees across cases. CONCLUSION: We demonstrated the feasibility of remotely testing the authentic practice of radiology and showed that important observations can be made from our Lifetrack-based testing approach regarding the radiology skills of an individual or a cohort. From observed weaknesses, targeted teaching can be implemented, and retesting could reveal its impact. This methodology can be customized to different LMIC environments and expanded to board certification examinations.


Subject(s)
Clinical Competence; Developing Countries; Internship and Residency; Radiology Information Systems; Radiology; Humans; Radiology/education; Kenya; Tomography, X-Ray Computed
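The abstract does not give the exact formulas for the 3-score aggregate or the overcall penalty; a sketch under an assumed equal weighting and an assumed fixed per-overcall penalty shows the mechanics, including how the cumulative score can go negative:

```python
def three_score(acute, secondary, incidental):
    """Equal-weight aggregate of the three interpretive scores, scaled to
    0-100. (Assumed weighting; the abstract does not specify the formula.)
    A component is skipped when not applicable to the case (None)."""
    parts = [s for s in (acute, secondary, incidental) if s is not None]
    return 100.0 * sum(parts) / len(parts)

def cumulative(aggregate, overcalls, penalty=10.0):
    """Aggregate reduced by a fixed penalty per overcall (assumed value),
    which allows negative totals like the reported -30 minimum."""
    return aggregate - penalty * overcalls

# Median component scores from the abstract, as one illustrative case
agg = three_score(acute=0.78, secondary=0.5, incidental=0.33)
cum = cumulative(agg, overcalls=2)
```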
16.
Biol Methods Protoc ; 9(1): bpae062, 2024.
Article in English | MEDLINE | ID: mdl-39258159

ABSTRACT

Deep neural networks have significantly advanced the field of medical image analysis, yet their full potential is often limited by relatively small dataset sizes. Generative modeling, particularly through diffusion models, has unlocked remarkable capabilities in synthesizing photorealistic images, thereby broadening the scope of their application in medical imaging. This study specifically investigates the use of diffusion models to generate high-quality brain MRI scans, including those depicting low-grade gliomas, as well as contrast-enhanced spectral mammography (CESM) and chest and lung X-ray images. By leveraging the DreamBooth platform, we have successfully trained stable diffusion models utilizing text prompts alongside class and instance images to generate diverse medical images. This approach not only preserves patient anonymity but also substantially mitigates the risk of patient re-identification during data exchange for research purposes. To evaluate the quality of our synthesized images, we used the Fréchet inception distance metric, demonstrating high fidelity between the synthesized and real images. Our application of diffusion models effectively captures oncology-specific attributes across different imaging modalities, establishing a robust framework that integrates artificial intelligence in the generation of oncological medical imagery.
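The Fréchet inception distance used above compares Gaussian fits to real and synthetic feature distributions; the univariate form of the underlying Fréchet distance is easy to sketch (toy 1-D "features" rather than Inception embeddings):

```python
import math

def frechet_distance_1d(mu1, var1, mu2, var2):
    """Frechet distance between two univariate Gaussians; the FID applies
    the multivariate version of this formula to Inception feature
    statistics of real vs. generated images."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)

def stats(xs):
    """Sample mean and (population) variance."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, v

# Toy 1-D "features" of real vs. synthesized images (illustrative only)
real = [0.9, 1.1, 1.0, 0.8, 1.2]
fake = [1.0, 0.9, 1.1, 1.0, 1.0]
d = frechet_distance_1d(*stats(real), *stats(fake))
identical = frechet_distance_1d(*stats(real), *stats(real))
```

A distance near zero indicates closely matched distributions, which is why lower FID is read as higher fidelity.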

17.
Sci Rep ; 14(1): 21755, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39294306

ABSTRACT

Leukemia is a blood cancer that arises from the abnormal proliferation of white blood cells (WBCs) in the bone marrow. This cancer of blood-forming tissue affects the lymphatic system and bone marrow. Early diagnosis and detection of leukemia, i.e., accurately distinguishing malignant leukocytes at minimal cost early in the disease, is a primary challenge in the disease-analysis field. Despite the high incidence of leukemia, flow cytometry tools are scarce, and the procedures available at medical diagnostic centres are time-consuming. Various researchers have applied computer-aided diagnostic (CAD) and machine learning (ML) methods to laboratory image analysis, aiming to overcome the limitations of late leukemia diagnosis. This study proposes a new Falcon optimization algorithm with a deep convolutional neural network for leukemia detection and classification (FOADCNN-LDC). The main objective of the FOADCNN-LDC technique is to recognize and classify leukemia. The technique applies median filtering (MF) to remove image noise and employs the ShuffleNetV2 model for feature extraction. Detection and classification are then performed with a convolutional denoising autoencoder (CDAE) model, and the FOA is used to select the hyperparameters of the CDAE. The FOADCNN-LDC approach was simulated on a benchmark medical dataset, and the experimental analysis highlighted a superior accuracy of 99.62% over existing techniques.


Subject(s)
Algorithms; Deep Learning; Diagnosis, Computer-Assisted; Leukemia; Humans; Leukemia/diagnosis; Leukemia/classification; Leukemia/pathology; Diagnosis, Computer-Assisted/methods; Neural Networks, Computer; Image Processing, Computer-Assisted/methods
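Median filtering, the noise-removal step named above, replaces each sample with the median of its neighbourhood, suppressing impulse ("salt-and-pepper") noise while preserving edges; a 1-D sketch of the idea (the study filters 2-D images, but the mechanism is the same per axis):

```python
def median_filter_1d(signal, width=3):
    """Median filter: replace each sample with the median of its local
    window. Isolated spikes vanish while step edges survive."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sorted(window)[len(window) // 2])
    return out

# Impulse-noise spike at index 2 is removed; the step edge at index 5 survives
noisy = [1, 1, 9, 1, 1, 5, 5, 5]
filtered = median_filter_1d(noisy)
```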
18.
Transpl Int ; 37: 12827, 2024.
Article in English | MEDLINE | ID: mdl-39296469

ABSTRACT

Machine-perfused ex-vivo organs offer an excellent experimental platform, e.g., for studying organ physiology and for conducting pre-clinical trials of drug delivery. One main challenge in machine perfusion is the accurate assessment of organ condition. Assessment is often performed using viability markers, e.g., lactate concentration and blood gas analysis. Nonetheless, existing markers for condition assessment can be inconclusive, and novel assessment methods remain of interest. Over the last decades, several imaging modalities have given unique insights into the assessment of organ condition. A systematic review was conducted according to accepted guidelines to evaluate these medical imaging methods, focusing on literature in which the condition of machine-perfused, human-sized organs was determined with medical imaging. A total of 18 out of 1,465 studies were included that reported organ condition results in perfused hearts, kidneys, and livers, using both conventional viability markers and medical imaging. Laser speckle imaging, ultrasound, computed tomography, and magnetic resonance imaging were used to identify local ischemic regions and quantify intra-organ perfusion. A detailed investigation of metabolic activity was achieved using 31P magnetic resonance imaging and near-infrared spectroscopy. The current review shows that medical imaging is a powerful tool to assess organ condition.


Subject(s)
Perfusion , Humans , Liver/diagnostic imaging , Liver/blood supply , Kidney/diagnostic imaging , Kidney/blood supply , Magnetic Resonance Imaging/methods , Diagnostic Imaging/methods , Organ Preservation/methods , Tomography, X-Ray Computed/methods , Ultrasonography/methods
19.
Front Oncol ; 14: 1415859, 2024.
Article in English | MEDLINE | ID: mdl-39290245

ABSTRACT

Hepatocellular carcinoma (HCC), the most common primary liver cancer, is a significant contributor to cancer-related deaths worldwide. Various medical imaging techniques, including computed tomography, magnetic resonance imaging, and ultrasound, play a crucial role in accurately evaluating HCC and formulating effective treatment plans. Artificial intelligence (AI) technologies have demonstrated potential in supporting physicians by providing more accurate and consistent medical diagnoses. Recent advancements have led to the development of AI-based multi-modal prediction systems. These systems integrate medical imaging with other modalities, such as electronic health record reports and clinical parameters, to improve the accuracy of predicting biological characteristics and prognosis, including for HCC. Such multi-modal systems pave the way for predicting response to transarterial chemoembolization and the presence of microvascular invasion, and can assist clinicians in identifying the patients with HCC most likely to benefit from interventional therapy. This paper provides an overview of the latest AI-based medical imaging models developed for diagnosing and predicting HCC, and explores the challenges and potential future directions for the clinical application of AI techniques.

20.
Cureus ; 16(8): e67119, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39290911

ABSTRACT

This study presents a detailed methodology for integrating three-dimensional (3D) printing technology into preoperative planning in neurosurgery. The increasing capabilities of 3D printing over the last decade have made it a valuable tool in medical fields such as orthopedics and dental practices. Neurosurgery can similarly benefit from these advancements, though the creation of accurate 3D models poses a significant challenge due to the technical expertise required and the cost of specialized software. This paper demonstrates a step-by-step process for developing a 3D physical model for preoperative planning using free, open-source software. A case involving a 62-year-old male with a large infiltrating tumor in the sacrum, originating from renal cell carcinoma, is used to illustrate the method. The process begins with the acquisition of a CT scan, followed by image reconstruction using InVesalius 3, an open-source software. The resulting 3D model is then processed in Autodesk Meshmixer (Autodesk, Inc., San Francisco, CA), where individual anatomical structures are segmented and prepared for printing. The model is printed using the Bambu Lab X1 Carbon 3D printer (Bambu Lab, Austin, TX), allowing for multicolor differentiation of structures such as bones, tumors, and blood vessels. The study highlights the practical aspects of model creation, including artifact removal, surface separation, and optimization for print volume. It discusses the advantages of multicolor printing for visual clarity in surgical planning and compares it with monochromatic and segmented printing approaches. The findings underscore the potential of 3D printing to enhance surgical precision and planning, providing a replicable protocol that leverages accessible technology. This work supports the broader adoption of 3D printing in neurosurgery, emphasizing the importance of collaboration between medical and engineering professionals to maximize the utility of these models in clinical practice.
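The image-reconstruction step described above — turning CT voxels into a printable structure — begins with segmentation. A minimal sketch of the Hounsfield-unit thresholding that a tool like InVesalius applies before surface-mesh generation (the 300 HU bone cutoff and the toy volume are illustrative assumptions, not values from the case report):

```python
import numpy as np

# Hypothetical Hounsfield-unit cutoff for bone; real thresholds depend
# on the scanner, the protocol, and the structure being segmented.
BONE_HU_THRESHOLD = 300

def segment_by_threshold(volume, threshold=BONE_HU_THRESHOLD):
    """Binary mask of voxels at or above the HU threshold -- the kind of
    segmentation performed before generating a mesh for 3D printing."""
    return volume >= threshold

# Toy 3-slice "CT volume": soft tissue (~40 HU) around a bony core (~700 HU).
vol = np.full((3, 4, 4), 40.0)
vol[:, 1:3, 1:3] = 700.0
bone_mask = segment_by_threshold(vol)
```

The resulting mask would then be converted to a surface mesh (e.g., via marching cubes) and exported as STL for slicing and multicolor printing, as the workflow above describes.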
