Results 1 - 20 of 721
1.
Ann Med ; 56(1): 2399759, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39258876

ABSTRACT

BACKGROUND: The status of BRCA1/2 genes plays a crucial role in the treatment decision-making process for multiple cancer types. However, due to high costs and limited resources, demand for BRCA1/2 genetic testing among patients is currently unmet. Notably, not all patients with BRCA1/2 mutations achieve favorable outcomes with poly(ADP-ribose) polymerase inhibitors (PARPi), indicating the necessity for risk stratification. In this study, we aimed to develop and validate a multimodal model for predicting BRCA1/2 gene status and prognosis with PARPi treatment. METHODS: We included 1695 slides from 1417 patients with ovarian, breast, prostate, and pancreatic cancers across three independent cohorts. Using a self-attention mechanism, we constructed a multi-instance attention model (MIAM) to detect BRCA1/2 gene status from hematoxylin and eosin (H&E) pathological images. We further combined tissue features from the MIAM model, cell features, and clinical factors (the MIAM-C model) to predict BRCA1/2 mutations and progression-free survival (PFS) with PARPi therapy. Model performance was evaluated using area under the curve (AUC) and Kaplan-Meier analysis. Morphological features contributing to MIAM-C were analyzed for interpretability. RESULTS: Across the four cancer types, MIAM-C outperformed the deep learning-based MIAM in identifying the BRCA1/2 genotype. Interpretability analysis revealed that high-attention regions included high-grade tumors and lymphocytic infiltration, which correlated with BRCA1/2 mutations. Notably, high lymphocyte ratios appeared characteristic of BRCA1/2 mutations. Furthermore, MIAM-C predicted PARPi therapy response (log-rank p < 0.05) and served as an independent prognostic factor for patients with BRCA1/2-mutant ovarian cancer (p < 0.05, hazard ratio: 0.4, 95% confidence interval: 0.16-0.99). CONCLUSIONS: The MIAM-C model accurately detected BRCA1/2 gene status and effectively stratified prognosis for patients with BRCA1/2 mutations.
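The self-attention pooling over bags of image tiles that underlies a multi-instance attention model can be sketched as follows; `attention_pool`, the projection matrices, and the toy embeddings are illustrative, not the authors' implementation:

```python
import numpy as np

def attention_pool(H, V, w):
    """Attention-based pooling over a bag of instance features.

    H: (n_instances, d) tile embeddings; V: (d, k) projection; w: (k,) scorer.
    Returns per-instance attention weights and the attention-weighted bag vector.
    """
    scores = np.tanh(H @ V) @ w           # (n_instances,) unnormalized attention
    a = np.exp(scores - scores.max())
    a /= a.sum()                          # softmax over instances in the bag
    return a, a @ H                       # weights sum to 1; bag vector is (d,)

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))               # 5 toy image tiles, 8-dim embeddings
a, bag = attention_pool(H, rng.normal(size=(8, 4)), rng.normal(size=4))
```

The attention weights double as an interpretability signal: tiles with large weights are the high-attention regions discussed in the abstract.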


Subject(s)
Mutation , Poly(ADP-Ribose) Polymerase Inhibitors , Humans , Female , Poly(ADP-Ribose) Polymerase Inhibitors/therapeutic use , Male , BRCA1 Protein/genetics , BRCA2 Protein/genetics , Prognosis , Middle Aged , Progression-Free Survival , Ovarian Neoplasms/genetics , Ovarian Neoplasms/drug therapy , Breast Neoplasms/drug therapy , Breast Neoplasms/genetics , Breast Neoplasms/pathology , Molecular Targeted Therapy/methods , Pancreatic Neoplasms/genetics , Pancreatic Neoplasms/drug therapy , Adult
2.
Scand J Psychol ; 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39285674

ABSTRACT

This study aimed to enhance the interpretability and clinical utility of the strength and stressors in parenting (SSF) questionnaire, a parent-reported questionnaire designed to assess strength, stress and associated risks of mental ill-health in parents of children with developmental disabilities. Responses to the SSF and a demographic questionnaire were collected from 576 parents of children with (n = 203) and without (n = 373) developmental disabilities. To enhance the interpretability of the SSF, a subset of 129 parents were invited to complete an additional questionnaire consisting of three free-text questions regarding recent help-seeking behavior, experiences of mental ill-health and experiences of parenthood. Parents' responses to the free-text questions were then categorized as indicative of higher or lower degrees of stress and compared to their SSF score distribution to derive empirical cut-offs for strength, stress and risk of mental ill-health as measured by the SSF. The credibility of these cut-offs was evaluated by comparing the cut-offs with SSF scores collected from the other 447 parents. Finally, SSF scores from parents of children without developmental disabilities (n = 373) were used to generate percentile values for the SSF to enable a standardized interpretation of SSF scores. To increase the utility of the SSF, we examined a recurring pattern of missing answers to items 23 and 33-38, noted in previous studies of the SSF and repeated in the present study. These items were excluded from further analysis since our examination revealed that they were not missing at random but rather constituted real differences in parental experiences, such as receiving a healthcare allowance, or caring for more than one child. The proposed empirical cut-offs performed well in discriminating between the two groups and yielded a specificity of 77-89% and a sensitivity of 68-76% for the strength, stress and risk of mental ill-health subscales of the SSF. 
This study also presents a conversion chart associating each SSF score with a corresponding percentile value. We propose modifications to the SSF, whereby items 23 and 33-38 are excluded, which will enable a more reliable assessment of parental experiences. This will, together with the empirical cut-offs and percentile values, enhance the interpretability and clinical utility of the SSF.
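The empirical cut-offs above are judged by sensitivity and specificity; a minimal sketch of how one candidate cut-off is scored (toy scores and group labels, not SSF data):

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of a score threshold.

    labels: 1 = indicative of higher stress, 0 = lower stress.
    A score at or above the cutoff counts as a positive screen.
    """
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= cutoff)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < cutoff)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < cutoff)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# toy scores for six parents: three lower-stress (0), three higher-stress (1)
sens, spec = sens_spec([10, 19, 15, 20, 14, 30], [0, 0, 0, 1, 1, 1], cutoff=18)
```

Sweeping the cutoff and picking the value with the best sensitivity/specificity trade-off is the usual way such empirical thresholds are derived.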

3.
IEEE Trans Comput Soc Syst ; 11(1): 247-266, 2024 Feb.
Article in English | MEDLINE | ID: mdl-39239536

ABSTRACT

An adaptive interpretable ensemble model based on a three-dimensional Convolutional Neural Network (3DCNN) and a Genetic Algorithm (GA), i.e., 3DCNN+EL+GA, was proposed to differentiate subjects with Alzheimer's Disease (AD) or Mild Cognitive Impairment (MCI) and to identify, in a data-driven way, the discriminative brain regions contributing significantly to the classifications. In addition, discriminative brain sub-regions were located at the voxel level within these regions using a gradient-based attribution method designed for CNNs. Beyond disclosing these discriminative sub-regions, testing results on datasets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Open Access Series of Imaging Studies (OASIS) indicated that 3DCNN+EL+GA outperformed other state-of-the-art deep learning algorithms and that the identified discriminative brain regions (e.g., the rostral hippocampus, caudal hippocampus, and medial amygdala) were linked to emotion, memory, language, and other essential brain functions impaired early in the AD process. Future research is needed to examine the generalizability of the proposed method and ideas for discerning discriminative brain regions in other brain disorders, such as severe depression, schizophrenia, autism, and cerebrovascular diseases, using neuroimaging.
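The abstract does not specify its gradient-based attribution method; as a generic stand-in, a finite-difference gradient-times-input sketch on a toy scalar model (illustrative only, and far simpler than a 3DCNN saliency computation):

```python
def grad_x_input(f, x, eps=1e-4):
    """Gradient x input attribution for a scalar-output model f.

    Approximates each partial derivative with central finite differences,
    then weights it by the input value, highlighting influential inputs
    (voxels, in the imaging setting).
    """
    attributions = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        grad_i = (f(xp) - f(xm)) / (2 * eps)
        attributions.append(grad_i * x[i])
    return attributions

# toy linear "model": attribution recovers each input's contribution
attr = grad_x_input(lambda z: 2 * z[0] + 3 * z[1], [1.0, 2.0])
```

In practice the gradient would come from automatic differentiation rather than finite differences; the sketch only shows the attribution logic.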

4.
Crit Care ; 28(1): 301, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39267172

ABSTRACT

In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, comprehending the rationale behind Artificial Intelligence (AI)-driven decisions appears essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of, and adherence to, its recommendations. "Explainable AI" (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps to meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet defining explainability and standardising its assessment remain ongoing challenges, and a balance between performance and explainability may be required, even as XAI continues to grow as a field.


Subject(s)
Artificial Intelligence , Humans , Artificial Intelligence/trends , Artificial Intelligence/standards , Critical Care/methods , Critical Care/standards , Clinical Decision-Making/methods , Physicians/standards
5.
Sci Rep ; 14(1): 20594, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39232050

ABSTRACT

Although live streaming is now indispensable, live-streaming e-business requires accurate and timely sales-volume prediction to ensure a healthy supply-demand balance for companies. In practice, because various factors can significantly impact sales results, developing a powerful, interpretable model is crucial for accurate sales prediction. In this study, we propose SaleNet, a deep-learning model designed for sales-volume prediction. Our model achieved accurate predictions on private, real operating data. The mean absolute percentage error (MAPE) of our model's performance fell as low as 11.47% for a +1.5-day forecast. Even for a one-week forecast (+6 days), the MAPE was only 19.79%, meeting actual business needs and practical requirements. Notably, our model demonstrated robust interpretability, as evidenced by feature-contribution results that are consistent with prevailing research findings and industry expertise. Our findings provide a theoretical foundation for predicting shopping behavior in live-broadcast e-commerce and offer valuable insights for designing live-broadcast content and optimizing the user experience.
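The MAPE figures above follow the standard definition, the mean of per-period absolute percentage errors; a minimal sketch with toy sales figures:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent.

    Assumes no actual value is zero (division by the actual volume).
    """
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actual, forecast)) / len(actual)

# two days of toy sales: 10% over-forecast then 10% under-forecast
err = mape([100, 200], [110, 180])
```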

6.
ACS Sens ; 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39248698

ABSTRACT

This study introduces a novel deep learning framework for lung health evaluation using exhaled gas. The framework synergistically integrates pyramid pooling and a dual-encoder network, leveraging SHapley Additive exPlanations (SHAP)-derived feature importance to enhance its predictive capability. The framework is specifically designed to effectively distinguish between smokers, individuals with chronic obstructive pulmonary disease (COPD), and control subjects. The pyramid pooling structure aggregates multilevel global information by pooling features at four scales. SHAP assesses feature importance from the eight sensors. Two encoder architectures handle different feature sets based on their importance, optimizing performance. In addition, the model's robustness is enhanced using the sliding window technique and white noise augmentation on the original data. In 5-fold cross-validation, the model achieved an average accuracy of 96.40%, surpassing that of a single-encoder pyramid pooling model by 10.77%. Further optimization of the filters in the transformer convolutional layer and the pooling size in the pyramid module increased the accuracy to 98.46%. This study offers an efficient tool for identifying the effects of smoking and COPD, as well as a novel approach to utilizing deep learning technology to address complex biomedical issues.
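Pyramid pooling of the kind described above aggregates a feature map at several scales and concatenates the results; a minimal one-dimensional numpy sketch (toy feature map and illustrative bin sizes, not the paper's architecture):

```python
import numpy as np

def pyramid_pool(x, bins=(1, 2, 4, 8)):
    """Aggregate a (length, channels) feature map at several scales.

    Each scale splits the sequence into `b` segments, averages each segment,
    and the segment means from all scales are concatenated into one vector.
    """
    pooled = []
    for b in bins:
        segments = np.array_split(x, b, axis=0)
        pooled.append(np.stack([seg.mean(axis=0) for seg in segments]))
    return np.concatenate([p.ravel() for p in pooled])

feats = pyramid_pool(np.arange(16, dtype=float).reshape(16, 1))
```

The coarsest bin captures global context while the finest bins retain local structure, which is why the concatenated vector carries "multilevel global information".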

7.
ISA Trans ; : 1-12, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39242294

ABSTRACT

Neural network (NN)-based methods are extensively used for intelligent fault diagnosis in industrial systems. Nevertheless, due to the limited availability of faulty samples and the presence of noise interference, most existing NN-based methods deliver limited diagnostic performance. In response to these challenges, a self-adaptive selection graph pooling method is proposed. Firstly, graph encoders with shared parameters are designed to extract local structure-feature information (SFI) from multiple sensor-wise sub-graphs. Then, the temporal continuity of the SFI is maintained through time-by-time concatenation, resulting in a global sensor graph and reducing the dependency on data volume by adding prior knowledge. Subsequently, leveraging a self-adaptive node selection mechanism, the noise interference of redundant and noisy sensor-wise nodes in the graph is alleviated, allowing the network to concentrate on the fault-attention nodes. Finally, the local max pooling and global mean pooling of the node-selection graph are incorporated in the readout module to obtain multi-scale graph features, which serve as input to a multi-layer perceptron for fault diagnosis. Two experimental studies involving different mechanical and electrical systems demonstrate that the proposed method not only achieves superior diagnosis performance with limited data, but also maintains strong anti-interference ability in noisy environments. Additionally, it exhibits good interpretability through the proposed self-adaptive node selection mechanism and visualization methods.
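The node selection step above resembles score-based top-k graph pooling; a minimal numpy sketch (illustrative scoring vector and toy sensor features, not the paper's implementation):

```python
import numpy as np

def select_nodes(X, p, k):
    """Score nodes by projection onto p and keep the top-k.

    X: (n_nodes, d) node features. Returns the kept indices and the gated
    features, so low-scoring (redundant or noisy) sensor nodes are dropped.
    """
    scores = X @ p / np.linalg.norm(p)       # projection score per node
    keep = np.sort(np.argsort(scores)[-k:])  # indices of the k highest scores
    gate = np.tanh(scores[keep])[:, None]    # soft gate keeps selection trainable
    return keep, X[keep] * gate

X = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 0.0]])   # three toy sensor nodes
keep, pooled = select_nodes(X, np.array([1.0, 0.0]), k=2)
```

The kept indices themselves provide interpretability: they name which sensors the network attends to for a given fault.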

8.
Health Inf Sci Syst ; 12(1): 47, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39247905

ABSTRACT

Users of social platforms often perceive these sites as supportive spaces to post about their mental health issues. Those conversations contain important traces of individuals' health risks. Recently, researchers have exploited this online information to construct mental health detection models, which aim to identify users at risk on platforms like Twitter, Reddit or Facebook. Most of these models focus on achieving good classification results while ignoring the explainability and interpretability of their decisions. Recent research has pointed out the importance of using clinical markers, such as the use of symptoms, to improve health professionals' trust in the computational models. In this paper, we introduce transformer-based architectures designed to detect and explain the appearance of depressive symptom markers in user-generated content from social media. We present two approaches: (i) training one model to classify and a separate model to explain the classifier's decisions, and (ii) unifying the two tasks within a single model. For the latter approach, we also investigated the performance of recent conversational Large Language Models (LLMs) using both in-context learning and fine-tuning. Our models provide natural language explanations aligned with validated symptoms, enabling clinicians to interpret the decisions more effectively. We evaluate our approaches on recent symptom-focused datasets, using both offline metrics and expert-in-the-loop evaluations to assess the quality of our models' explanations. Our findings demonstrate that it is possible to achieve good classification results while generating interpretable symptom-based explanations.

9.
Front Artif Intell ; 7: 1472086, 2024.
Article in English | MEDLINE | ID: mdl-39219701
10.
Front Robot AI ; 11: 1375490, 2024.
Article in English | MEDLINE | ID: mdl-39104806

ABSTRACT

Safety-critical domains often employ autonomous agents that follow a sequential decision-making setup, whereby the agent follows a policy that dictates the appropriate action at each step. AI practitioners often employ reinforcement learning algorithms to allow an agent to find the best policy. However, sequential systems often lack clear and immediate signs of wrong actions, with consequences visible only in hindsight, making it difficult for humans to understand system failure. In reinforcement learning, this is referred to as the credit assignment problem. To effectively collaborate with an autonomous system, particularly in a safety-critical setting, explanations should enable a user to better understand the agent's policy and predict system behavior, so that users are cognizant of potential failures and these failures can be diagnosed and mitigated. However, humans are diverse and have innate biases or preferences which may enhance or impair the utility of a policy explanation of a sequential agent. Therefore, in this paper, we designed and conducted a human-subjects experiment to identify the factors that influence the perceived usability and objective usefulness of policy explanations for reinforcement learning agents in a sequential setting. Our study had two factors: the modality of the policy explanation shown to the user (Tree, Text, Modified Text, and Programs) and the "first impression" of the agent, i.e., whether the user saw the agent succeed or fail in the introductory calibration video. Our findings characterize a preference-performance tradeoff wherein participants perceived language-based policy explanations to be significantly more usable; however, participants were better able to objectively predict the agent's behavior when provided an explanation in the form of a decision tree.
Our results demonstrate that user-specific factors, such as computer science experience (p < 0.05), and situational factors, such as watching the agent crash (p < 0.05), can significantly impact the perception and usefulness of the explanation. This research provides key insights to alleviate prevalent issues regarding inappropriate compliance and reliance, which are exponentially more detrimental in safety-critical settings, providing a path forward for XAI developers for future work on policy explanations.

11.
J Adv Res ; 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39097091

ABSTRACT

INTRODUCTION: Immune checkpoint inhibitors (ICIs) are potent and precise therapies for various cancer types, significantly improving survival rates in patients who respond positively to them. However, only a minority of patients benefit from ICI treatments. OBJECTIVES: Identifying ICI responders before treatment could greatly conserve medical resources, minimize potential drug side effects, and expedite the search for alternative therapies. Our goal is to introduce a novel deep-learning method to predict ICI treatment responses in cancer patients. METHODS: The proposed deep-learning framework leverages graph neural network and biological pathway knowledge. We trained and tested our method using ICI-treated patients' data from several clinical trials covering melanoma, gastric cancer, and bladder cancer. RESULTS: Our results demonstrate that this predictive model outperforms current state-of-the-art methods and tumor microenvironment-based predictors. Additionally, the model quantifies the importance of pathways, pathway interactions, and genes in its predictions. A web server for IRnet has been developed and deployed, providing broad accessibility to users at https://irnet.missouri.edu. CONCLUSION: IRnet is a competitive tool for predicting patient responses to immunotherapy, specifically ICIs. Its interpretability also offers valuable insights into the mechanisms underlying ICI treatments.

12.
Comput Biol Med ; 180: 108974, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39096613

ABSTRACT

Promoters are DNA sequences that bind with RNA polymerase to initiate transcription, regulating this process through interactions with transcription factors. Accurate identification of promoters is crucial for understanding gene expression regulation mechanisms and developing therapeutic approaches for various diseases. However, experimental techniques for promoter identification are often expensive, time-consuming, and inefficient, necessitating the development of accurate and efficient computational models for this task. Enhancing the model's ability to recognize promoters across multiple species and improving its interpretability pose significant challenges. In this study, we introduce a novel interpretable model based on graph neural networks, named GraphPro, for multi-species promoter identification. Initially, we encode the sequences using k-tuple nucleotide frequency pattern, dinucleotide physicochemical properties, and dna2vec. Subsequently, we construct two feature extraction modules based on convolutional neural networks and graph neural networks. These modules aim to extract specific motifs from the promoters, learn their dependencies, and capture the underlying structural features of the promoters, providing a more comprehensive representation. Finally, a fully connected neural network predicts whether the input sequence is a promoter. We conducted extensive experiments on promoter datasets from eight species, including Human, Mouse, and Escherichia coli. The experimental results show that the average Sn, Sp, Acc and MCC values of GraphPro are 0.9123, 0.9482, 0.8840 and 0.7984, respectively. Compared with previous promoter identification methods, GraphPro not only achieves better recognition accuracy on multiple species, but also outperforms all previous methods in cross-species prediction ability. 
Furthermore, by visualizing GraphPro's decision process and analyzing the sequences matching the transcription factor binding motifs captured by the model, we validate its significant advantages in biological interpretability. The source code for GraphPro is available at https://github.com/liuliwei1980/GraphPro.
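The k-tuple nucleotide frequency encoding mentioned above can be sketched as follows (a toy dinucleotide version; the function name is illustrative, not GraphPro's code):

```python
from itertools import product

def kmer_freq(seq, k=2):
    """Normalized k-tuple nucleotide frequencies over the ACGT alphabet.

    Returns a fixed-length vector (4**k entries) so sequences of any
    length map to the same feature space.
    """
    kmers = [''.join(p) for p in product('ACGT', repeat=k)]
    counts = {m: 0 for m in kmers}
    for i in range(len(seq) - k + 1):
        window = seq[i:i + k]
        if window in counts:               # skip windows with ambiguous bases
            counts[window] += 1
    total = max(len(seq) - k + 1, 1)
    return [counts[m] / total for m in kmers]

vec = kmer_freq("ACGT")                    # sliding windows: AC, CG, GT
```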


Subject(s)
Neural Networks, Computer , Promoter Regions, Genetic , Humans , Animals , Computational Biology/methods , Sequence Analysis, DNA/methods , Mice , Software
13.
Sci Rep ; 14(1): 19110, 2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39154060

ABSTRACT

Predicting the capacity of lithium-ion batteries (LIBs) plays a crucial role in ensuring their safe operation and prolonging their lifespan. However, LIBs are easily affected by environmental interference, which may impact the precision of predictions. Furthermore, interpretability in the process of predicting LIB capacity is also important for users to understand the model, identify issues, and make decisions. In this study, an interpretable method considering environmental interference (IM-EI) for predicting LIB capacity is introduced. Spearman correlation coefficients, interpretability principles, belief rule base (BRB), and interpretability constraints are used to improve the prediction precision and interpretability of IM-EI. Dynamic attribute reliability is introduced to minimize the effect of environmental interference. The experimental results show that the IM-EI model has good interpretability and high precision compared to the other models. Under interference conditions, the model still has good precision and robustness.
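The Spearman coefficient used above for attribute screening is simply the Pearson correlation of rank-transformed series; a dependency-free sketch with tie handling (illustrative toy data, not the IM-EI code):

```python
def average_ranks(x):
    """1-based ranks, with ties assigned the average of their rank positions."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    ranks = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1                         # extend the tie run
        for t in range(i, j + 1):
            ranks[order[t]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho: Pearson correlation of the two rank series."""
    rx, ry = average_ranks(x), average_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# toy example: capacity falls monotonically as temperature rises
rho = spearman([25, 30, 35, 40], [2.00, 1.95, 1.90, 1.80])
```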

14.
Stud Health Technol Inform ; 316: 766-770, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176906

ABSTRACT

In recent years, artificial intelligence (AI) has gained momentum in many fields of daily life. In healthcare, AI can be used for diagnosing or predicting illnesses. However, explainable AI (XAI) is needed to ensure that users understand how an algorithm arrives at a decision. In our research project, machine learning methods are used for individual risk prediction of hospital-onset bacteremia (HOB). This paper presents a vision of a step-wise process for the implementation and evaluation of user-centered XAI for risk prediction of HOB. An initial requirement analysis revealed first insights into users' explainability needs when using and trusting such risk prediction applications. The findings were then used to propose a step-wise process towards a user-centered evaluation.


Subject(s)
Artificial Intelligence , Bacteremia , Bacteremia/diagnosis , Humans , Cross Infection , Machine Learning , Algorithms , Risk Assessment
15.
Stud Health Technol Inform ; 316: 846-850, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176925

ABSTRACT

Text classification plays an essential role in the medical domain by organizing and categorizing vast amounts of textual data through machine learning (ML) and deep learning (DL). The adoption of Artificial Intelligence (AI) technologies in healthcare has raised concerns about the interpretability of AI models, often perceived as "black boxes." Explainable AI (XAI) techniques aim to mitigate this issue by elucidating the AI model's decision-making process. In this paper, we present a scoping review exploring the application of different XAI techniques in medical text classification, identifying two main types: model-specific and model-agnostic methods. Despite some positive feedback from developers, formal evaluations of these techniques with medical end users remain limited. The review highlights the necessity for further research in XAI to enhance trust and transparency in AI-driven decision-making processes in healthcare.


Subject(s)
Artificial Intelligence , Natural Language Processing , Humans , Machine Learning , Electronic Health Records/classification , Deep Learning
16.
JMIR Med Inform ; 12: e52896, 2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39087585

ABSTRACT

Background: The application of machine learning in health care often necessitates the use of hierarchical codes such as the International Classification of Diseases (ICD) and Anatomical Therapeutic Chemical (ATC) systems. These codes classify diseases and medications, respectively, thereby forming extensive data dimensions. Unsupervised feature selection tackles the "curse of dimensionality" and helps to improve the accuracy and performance of supervised learning models by reducing the number of irrelevant or redundant features and avoiding overfitting. Techniques for unsupervised feature selection, such as filter, wrapper, and embedded methods, are implemented to select the most important features with the most intrinsic information. However, they face challenges due to the sheer volume of ICD and ATC codes and the hierarchical structures of these systems. Objective: The objective of this study was to compare several unsupervised feature selection methods for ICD and ATC code databases of patients with coronary artery disease in different aspects of performance and complexity and select the best set of features representing these patients. Methods: We compared several unsupervised feature selection methods for 2 ICD and 1 ATC code databases of 51,506 patients with coronary artery disease in Alberta, Canada. Specifically, we used the Laplacian score, unsupervised feature selection for multicluster data, autoencoder-inspired unsupervised feature selection, principal feature analysis, and concrete autoencoders with and without ICD or ATC tree weight adjustment to select the 100 best features from over 9000 ICD and 2000 ATC codes. We assessed the selected features based on their ability to reconstruct the initial feature space and predict 90-day mortality following discharge. 
We also compared the complexity of the selected features by mean code level in the ICD or ATC tree and the interpretability of the features in the mortality prediction task using Shapley analysis. Results: In feature space reconstruction and mortality prediction, the concrete autoencoder-based methods outperformed other techniques. Particularly, a weight-adjusted concrete autoencoder variant demonstrated improved reconstruction accuracy and significant predictive performance enhancement, confirmed by DeLong and McNemar tests (P<.05). Concrete autoencoders preferred more general codes, and they consistently reconstructed all features accurately. Additionally, features selected by weight-adjusted concrete autoencoders yielded higher Shapley values in mortality prediction than most alternatives. Conclusions: This study scrutinized 5 feature selection methods in ICD and ATC code data sets in an unsupervised context. Our findings underscore the superiority of the concrete autoencoder method in selecting salient features that represent the entire data set, offering a potential asset for subsequent machine learning research. We also present a novel weight adjustment approach for the concrete autoencoders specifically tailored for ICD and ATC code data sets to enhance the generalizability and interpretability of the selected features.
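Of the unsupervised selectors compared above, the Laplacian score is the simplest to sketch; a dense-graph toy version follows (illustrative, not the study's implementation, which would also use k-NN graphs and the ICD/ATC tree weights):

```python
import numpy as np

def laplacian_scores(X, sigma=1.0):
    """Laplacian score per feature; lower = better preserves local structure.

    Builds a dense RBF similarity graph over the samples, then scores each
    feature as f^T (D - W) f / f^T D f after removing the D-weighted mean.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / sigma)                # sample-similarity graph
    D = W.sum(axis=1)                      # node degrees
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r] - (X[:, r] @ D) / D.sum()
        den = f @ (D * f)                  # f^T D f
        num = f @ (np.diag(D) - W) @ f     # f^T L f, always >= 0
        scores.append(num / den if den > 1e-12 else np.inf)
    return np.array(scores)

# two toy features: one separates two clusters, one is constant
X = np.array([[0.0, 1.0], [0.1, 1.0], [5.0, 1.0], [5.1, 1.0]])
s = laplacian_scores(X)
```

A constant (uninformative) feature receives an infinite score here, and features respecting the neighborhood graph score low, which is the selection criterion.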

17.
Pflugers Arch ; 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39088045

ABSTRACT

Explainable artificial intelligence (XAI) has gained significant attention in various domains, including natural and medical image analysis. However, its application in spectroscopy remains relatively unexplored. This systematic review aims to fill this gap by providing a comprehensive overview of the current landscape of XAI in spectroscopy and identifying potential benefits and challenges associated with its implementation. Following the PRISMA 2020 guideline, we conducted a systematic search across major journal databases, resulting in 259 initial search results. After removing duplicates and applying inclusion and exclusion criteria, 21 scientific studies were included in this review. Notably, most of the studies focused on using XAI methods for spectral data analysis, emphasizing the identification of significant spectral bands rather than specific intensity peaks. Among the most utilized techniques were SHapley Additive exPlanations (SHAP), masking methods inspired by Local Interpretable Model-agnostic Explanations (LIME), and Class Activation Mapping (CAM). These methods were favored due to their model-agnostic nature and ease of use, enabling interpretable explanations without modifying the original models. Future research should propose new methods and explore the adaptation of XAI methods employed in other domains to better suit the unique characteristics of spectroscopic data.

18.
Waste Manag ; 188: 48-59, 2024 Nov 15.
Article in English | MEDLINE | ID: mdl-39098272

ABSTRACT

Ensuring the interpretability of machine learning models in chemical engineering remains challenging due to inherent limitations and data quality issues, hindering their reliable application. In this study, a qualitatively implicit knowledge-guided machine learning framework is proposed to improve plasma gasification modelling. Starting with a pre-trained machine learning model, parameters are further optimized by integrating the heuristic algorithm to minimize the data fitting errors and resolving implicit monotonic inconsistencies. The latter is comprehensively quantified through Monte Carlo simulations. This framework is adaptive to different machine learning techniques, exemplified by artificial neural network (ANN) and support vector machine (SVM) in this study. Validated by a case study on plasma gasification, the results reveal that the improved models achieve better generalizability and scientific interpretability in predicting syngas quality. Specifically, for ANN, the root mean square error (RMSE) and knowledge-based error (KE) reduce by 36.44% and 83.22%, respectively, while SVM displays a decrease of 2.58% in RMSE and a remarkable 100% in KE. Importantly, the improved models successfully capture all desired implicit monotonicity relationships between syngas quality and feedstock characteristics/operating parameters, addressing a limitation that traditional machine learning struggles with.
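The monotonic-consistency check quantified through Monte Carlo simulation above can be sketched as follows; a toy one-dimensional surrogate stands in for the gasification model, and all names are illustrative:

```python
import random

def monotonic_violation_rate(model, low, high, n=2000, seed=0):
    """Monte Carlo estimate of how often model output decreases as input grows.

    Samples input pairs a <= b uniformly and counts pairs with
    model(b) < model(a), i.e. violations of an expected monotonically
    increasing input-output relationship (implicit domain knowledge).
    """
    rng = random.Random(seed)
    violations = 0
    for _ in range(n):
        a, b = sorted(rng.uniform(low, high) for _ in range(2))
        if model(b) < model(a):
            violations += 1
    return violations / n

# a surrogate with a spurious mid-range dip vs. a truly monotone one
dip_rate = monotonic_violation_rate(lambda x: x - 3 if 5 <= x < 6 else x, 0, 10)
clean_rate = monotonic_violation_rate(lambda x: 2 * x, 0, 10)
```

Adding such a violation rate to the training loss, alongside the data-fitting error, is the essence of the knowledge-guided optimization described above.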


Subject(s)
Machine Learning , Neural Networks, Computer , Support Vector Machine , Gases , Algorithms , Monte Carlo Method , Models, Theoretical
19.
Comput Med Imaging Graph ; 116: 102422, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39116707

ABSTRACT

Reliability learning and interpretable decision-making are crucial for multi-modality medical image segmentation. Although many works have attempted multi-modality medical image segmentation, they rarely explore how much reliability each modality provides for segmentation. Moreover, existing decision-making approaches such as the softmax function lack interpretability for multi-modality fusion. In this study, we propose a novel approach named contextual discounted evidential network (CDE-Net) for reliability learning and interpretable decision-making in multi-modality medical image segmentation. Specifically, CDE-Net first models semantic evidence by uncertainty measurement using the proposed evidential decision-making module. Then, it leverages the contextual discounted fusion layer to learn the reliability provided by each modality. Finally, a multi-level loss function is deployed to optimize evidence modeling and reliability learning. Moreover, this study elaborates on the framework's interpretability by discussing the consistency between pixel attribution maps and the learned reliability coefficients. Extensive experiments are conducted on both multi-modality brain and liver datasets. CDE-Net achieves high performance, with an average Dice score of 0.914 for brain tumor segmentation and 0.913 for liver tumor segmentation, demonstrating its great potential to facilitate the interpretation of artificial intelligence-based multi-modality medical image fusion.
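The Dice scores reported above are, for binary masks, twice the overlap divided by the summed mask sizes; a minimal sketch (illustrative, not the CDE-Net code):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice overlap between two binary masks of the same shape.

    eps guards against division by zero when both masks are empty.
    """
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# toy 2x2 masks: predicted region of size 2 overlaps the target of size 1
score = dice([[1, 1], [0, 0]], [[1, 0], [0, 0]])
```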


Subject(s)
Multimodal Imaging , Reproducibility of Results , Humans , Image Interpretation, Computer-Assisted/methods , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Liver/diagnostic imaging , Decision Making
20.
J Pers Med ; 14(8)2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39202047

ABSTRACT

Our research evaluates advanced artificial intelligence (AI) methodologies to enhance diagnostic accuracy in pulmonary radiography. Utilizing DenseNet121 and ResNet50, we analyzed 108,948 chest X-ray images from 32,717 patients; DenseNet121 achieved an area under the curve (AUC) of 94% in identifying pneumothorax and oedema. The model's performance surpassed that of expert radiologists, though further improvements are necessary for diagnosing complex conditions such as emphysema, effusion, and hernia. Clinical validation integrating Latent Dirichlet Allocation (LDA) and Named Entity Recognition (NER) demonstrated the potential of natural language processing (NLP) in clinical workflows. The NER system achieved a precision of 92% and a recall of 88%. Sentiment analysis using DistilBERT provided a nuanced understanding of clinical notes, which is essential for refining diagnostic decisions. XGBoost and SHapley Additive exPlanations (SHAP) enhanced feature extraction and model interpretability. Local Interpretable Model-agnostic Explanations (LIME) and occlusion sensitivity analysis further enriched transparency, enabling healthcare providers to trust AI predictions. These AI techniques reduced processing times by 60% and annotation errors by 75%, setting a new benchmark for efficiency in thoracic diagnostics. The research explored the transformative potential of AI in medical imaging, advancing traditional diagnostics and accelerating medical evaluations in clinical settings.
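The AUC figures above can be read as a rank statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch with toy labels and scores (not the study's data):

```python
def roc_auc(labels, scores):
    """AUC as the probability a random positive outranks a random negative.

    Ties contribute half a win; O(P*N) pairwise version, fine for small data.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```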
