Results 1 - 20 of 95
1.
Front Artif Intell ; 7: 1384709, 2024.
Article in English | MEDLINE | ID: mdl-39219699

ABSTRACT

Agriculture is considered the backbone of Tanzania's economy, with more than 60% of residents depending on it for their livelihood. Maize is the country's dominant and primary food crop, accounting for 45% of all farmland production. However, its productivity is constrained by the difficulty of detecting maize diseases early enough. Maize streak virus (MSV) and maize lethal necrosis (MLN) are common diseases often detected too late by farmers. This has created the need for a method for the early detection of these diseases so that they can be treated in time. This study investigated the potential of deep-learning models for the early detection of maize diseases in Tanzania. Data were collected in the Arusha, Kilimanjaro, and Manyara regions through observation by a plant pathologist. The study proposed convolutional neural network (CNN) and vision transformer (ViT) models. Four classes of imagery data were used to train both models: MLN, Healthy, MSV, and WRONG. The results revealed that the ViT model surpassed the CNN model, with accuracies of 93.10% and 90.96%, respectively. Further studies should focus on mobile app development and deployment of the model with greater precision for early detection of the diseases mentioned above in real life.
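
The accuracy figures reported above (93.10% for the ViT, 90.96% for the CNN) are simple fractions of correctly classified held-out images. A minimal sketch, with made-up toy predictions over the four class labels named in the abstract:

```python
# Hypothetical sketch: overall accuracy for the four-class maize task.
# Class names follow the abstract; the toy labels below are illustrative only.
CLASSES = ["MLN", "Healthy", "MSV", "WRONG"]

def accuracy(y_true, y_pred):
    """Fraction of samples where the predicted class matches the true class."""
    assert len(y_true) == len(y_pred)
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy held-out labels and predictions from two hypothetical models.
y_true   = ["MLN", "Healthy", "MSV", "WRONG", "Healthy"]
vit_pred = ["MLN", "Healthy", "MSV", "WRONG", "MSV"]
cnn_pred = ["MLN", "Healthy", "WRONG", "WRONG", "MSV"]

vit_acc = accuracy(y_true, vit_pred)  # 4/5 correct
cnn_acc = accuracy(y_true, cnn_pred)  # 3/5 correct
```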

2.
Environ Monit Assess ; 196(10): 875, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39222153

ABSTRACT

Drought is an extended shortage of rainfall resulting in water scarcity, affecting a region's social and economic conditions through environmental deterioration. Its adverse environmental effects can be minimised by timely prediction. Conventional drought detection relies only on ground observation stations, whereas satellite-based monitoring scans vast stretches of land and offers highly effective coverage. This paper puts forward a novel drought monitoring system using satellite imagery, considering the droughts that devastated agriculture in Thanjavur district, Tamil Nadu, between 2000 and 2022. The proposed method uses Holt-Winters Conventional 2D Long Short-Term Memory (HW-Conv2DLSTM) to forecast meteorological and agricultural droughts. It employs the Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) precipitation index, the MODIS 11A1 temperature index, and the MODIS 13Q1 vegetation index. It extracts time series data from satellite images using trend and seasonal patterns and smooths them using the Holt-Winters alpha, beta, and gamma parameters. Finally, an effective drought prediction procedure is developed using Conv2D-LSTM to calculate the spatiotemporal correlation among drought indices. The HW-Conv2DLSTM achieves a strong R2 value of 0.97. It holds promise as an effective computer-assisted strategy to predict droughts and maintain agricultural productivity, which is vital to feeding an ever-increasing human population.
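
The Holt-Winters smoothing step the abstract describes (level, trend, and seasonal components governed by alpha, beta, and gamma) can be sketched in its standard additive form; this is a generic textbook implementation, not the authors' code:

```python
def holt_winters_additive(series, period, alpha, beta, gamma, horizon):
    """Additive Holt-Winters smoothing; returns `horizon` point forecasts.
    Requires len(series) >= 2 * period for the initialisation below."""
    # Initialise level, trend, and seasonal components from the first two cycles.
    level = sum(series[:period]) / period
    trend = (sum(series[period:2 * period]) - sum(series[:period])) / period ** 2
    seasonal = [series[i] - level for i in range(period)]

    for i in range(period, len(series)):
        prev_level = level
        level = alpha * (series[i] - seasonal[i % period]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seasonal[i % period] = gamma * (series[i] - level) + (1 - gamma) * seasonal[i % period]

    n = len(series)
    return [level + (h + 1) * trend + seasonal[(n + h) % period] for h in range(horizon)]
```

For a flat input series the smoothed level stays at that constant and all forecasts equal it, which is a quick sanity check on the recursions.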


Subject(s)
Agriculture; Droughts; Environmental Monitoring; Satellite Imagery; Seasons; Agriculture/methods; Environmental Monitoring/methods; India; Forecasting
3.
Front Robot AI ; 11: 1387491, 2024.
Article in English | MEDLINE | ID: mdl-39184863

ABSTRACT

Colonoscopy is a reliable diagnostic method to detect colorectal polyps early on and prevent colorectal cancer. Current examination techniques face a significant challenge of high miss rates, resulting in numerous undetected polyps and irregularities. Automated, real-time segmentation methods can help endoscopists delineate the shape and location of polyps in colonoscopy images and thereby facilitate clinicians' timely diagnosis and interventions. Factors such as the varied shapes and small sizes of polyps and their close resemblance to surrounding tissue make this task challenging. Furthermore, high-definition image quality and reliance on the operator make real-time, accurate endoscopic image segmentation even harder. Deep learning models used for segmenting polyps, designed to capture diverse patterns, are becoming progressively complex, which poses challenges for real-time medical operations. In clinical settings, utilizing automated methods requires the development of accurate, lightweight models with minimal latency, ensuring seamless integration with endoscopic hardware. To address these challenges, this study proposes Enhanced Nanonet, a novel lightweight and more generalized model that improves Nanonet (using NanonetB) for real-time and precise colonoscopy image segmentation. The proposed model enhances the performance of Nanonet on the overall prediction scheme by applying data augmentation, Conditional Random Fields (CRF), and Test-Time Augmentation (TTA). Six publicly available datasets are utilized to perform thorough evaluations, assess generalizability, and validate the improvements: Kvasir-SEG, Endotect Challenge 2020, Kvasir-instrument, CVC-ClinicDB, CVC-ColonDB, and CVC-300. Through extensive experimentation on the Kvasir-SEG dataset, the model achieves a mIoU score of 0.8188 and a Dice coefficient of 0.8060 with only 132,049 parameters and minimal computational resources. A thorough cross-dataset evaluation was performed to assess the generalization capability of the proposed Enhanced Nanonet model across various publicly available polyp datasets for potential real-world applications. The results show that using CRF and TTA enhances performance both within the same dataset and across diverse datasets with a model size of just 132,049 parameters. The proposed method also shows improved results in detecting smaller, sessile, and flat polyps, which are significant contributors to the high miss rates.
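
The Dice coefficient and IoU quoted above are standard overlap measures between a predicted mask and the ground-truth mask. A minimal sketch on flat binary masks (the real evaluation runs over 2-D images, but the formulas are the same):

```python
def dice_and_iou(pred, truth):
    """Dice coefficient and IoU (Jaccard index) for flat binary masks (0/1 lists)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if p_sum + t_sum else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred  = [1, 1, 1, 0, 0, 0]   # toy predicted polyp mask
truth = [0, 1, 1, 1, 0, 0]   # toy ground-truth mask
scores = dice_and_iou(pred, truth)  # 2 overlapping pixels out of 3 each
```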

4.
Biomed Phys Eng Express ; 10(5)2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39094595

ABSTRACT

Dynamic 2-[18F] fluoro-2-deoxy-D-glucose positron emission tomography (dFDG-PET) for human brain imaging has considerable clinical potential, yet its utilization remains limited. A key challenge in the quantitative analysis of dFDG-PET is characterizing a patient-specific blood input function, traditionally reliant on invasive arterial blood sampling. This research introduces a novel approach employing non-invasive deep learning model-based computations from the internal carotid arteries (ICA) with partial volume (PV) corrections, thereby eliminating the need for invasive arterial sampling. We present an end-to-end pipeline incorporating a 3D U-Net based ICA-net for ICA segmentation, alongside a Recurrent Neural Network (RNN) based MCIF-net for the derivation of a model-corrected blood input function (MCIF) with PV corrections. The developed 3D U-Net and RNN were trained and validated using a 5-fold cross-validation approach on 50 human brain FDG PET scans. The ICA-net achieved an average Dice score of 82.18% and an Intersection over Union of 68.54% across all tested scans. Furthermore, the MCIF-net exhibited a minimal root mean squared error of 0.0052. The application of this pipeline to ground truth data for dFDG-PET brain scans resulted in the precise localization of seizure onset regions, which contributed to a successful clinical outcome, with the patient achieving a seizure-free state after treatment. These results underscore the efficacy of the ICA-net and MCIF-net deep learning pipeline in learning the ICA structure's distribution and automating MCIF computation with PV corrections. This advancement marks a significant leap in non-invasive neuroimaging.
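
The 5-fold cross-validation over 50 scans mentioned above amounts to partitioning the scan indices into five disjoint validation folds, training on the rest each time. A minimal index-splitting sketch (not the authors' pipeline):

```python
def k_fold_indices(n_samples, k):
    """Partition sample indices into k contiguous, disjoint (train, val) splits."""
    folds = []
    base, extra = divmod(n_samples, k)
    start = 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        val = list(range(start, start + size))
        train = [j for j in range(n_samples) if j < start or j >= start + size]
        folds.append((train, val))
        start += size
    return folds

splits = k_fold_indices(50, 5)  # e.g. 50 scans -> 5 folds of 10 validation scans
```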


Subject(s)
Brain; Deep Learning; Fluorodeoxyglucose F18; Positron-Emission Tomography; Humans; Positron-Emission Tomography/methods; Brain/diagnostic imaging; Brain/blood supply; Image Processing, Computer-Assisted/methods; Brain Mapping/methods; Neural Networks, Computer; Carotid Artery, Internal/diagnostic imaging; Male; Algorithms; Female; Radiopharmaceuticals
5.
Sci Rep ; 14(1): 17777, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090145

ABSTRACT

Disasters caused by mine water inflows significantly threaten the safety of coal mining operations. Deep mining complicates the acquisition of hydrogeological parameters, the mechanics of water inrush, and the prediction of sudden changes in mine water inflow. Traditional models and singular machine learning approaches often fail to accurately forecast abrupt shifts in mine water inflows. This study introduces a novel coupled decomposition-optimization-deep learning model that integrates Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN), Northern Goshawk Optimization (NGO), and Long Short-Term Memory (LSTM) networks. We evaluate three types of mine water inflow forecasting methods: a singular time series prediction model, a decomposition-prediction coupled model, and a decomposition-optimization-prediction coupled model, assessing their ability to capture sudden changes in data trends and their prediction accuracy. Results show that the singular prediction model is optimal with a sliding input step of 3 and a maximum of 400 epochs. Compared to the CEEMDAN-LSTM model, the CEEMDAN-NGO-LSTM model demonstrates superior performance in predicting local extreme shifts in mine water inflow volumes. Specifically, the CEEMDAN-NGO-LSTM model achieves scores of 96.578 in MAE, 1.471% in MAPE, 122.143 in RMSE, and 0.958 in NSE, representing average performance improvements of 44.950% and 19.400% over the LSTM model and CEEMDAN-LSTM model, respectively. Additionally, this model provides the most accurate predictions of mine water inflow volumes over the next five days. Therefore, the decomposition-optimization-prediction coupled model presents a novel technical solution for the safety monitoring of smart mines, offering significant theoretical and practical value for ensuring safe mining operations.
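
The MAE, MAPE, RMSE, and NSE scores reported for the CEEMDAN-NGO-LSTM model are standard regression metrics; a minimal sketch of their definitions on toy data (not the study's data):

```python
import math

def regression_metrics(observed, predicted):
    """MAE, MAPE (%), RMSE, and Nash-Sutcliffe efficiency (NSE)."""
    n = len(observed)
    errs = [o - p for o, p in zip(observed, predicted)]
    mae = sum(abs(e) for e in errs) / n
    mape = 100 * sum(abs(e) / abs(o) for e, o in zip(errs, observed)) / n
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mean_obs = sum(observed) / n
    ss_res = sum(e * e for e in errs)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    nse = 1 - ss_res / ss_tot  # 1.0 = perfect fit, 0.0 = no better than the mean
    return mae, mape, rmse, nse

m = regression_metrics([2.0, 4.0, 6.0], [3.0, 3.0, 6.0])  # toy inflow values
```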

6.
Front Oncol ; 14: 1400341, 2024.
Article in English | MEDLINE | ID: mdl-39091923

ABSTRACT

Brain tumors occur due to the expansion of abnormal cell tissues and can be malignant (cancerous) or benign (non-cancerous). Numerous factors, such as position, size, and progression rate, are considered when detecting and diagnosing brain tumors. Detecting brain tumors in their initial phases is vital for diagnosis, where MRI (magnetic resonance imaging) scans play an important role. Over the years, deep learning models have been extensively used for medical image processing. The current study primarily investigates novel Fine-Tuned Vision Transformer models (FTVTs): FTVT-b16, FTVT-b32, FTVT-l16, and FTVT-l32, for brain tumor classification, while also comparing them with other established deep learning models such as ResNet-50, MobileNet-V2, and EfficientNet-B0. A dataset with 7,023 images (MRI scans) categorized into four classes, namely glioma, meningioma, pituitary, and no tumor, is used for classification. Further, the study presents a comparative analysis of these models, including their accuracies and other evaluation metrics such as recall, precision, and F1-score across each class. The deep learning models ResNet-50, EfficientNet-B0, and MobileNet-V2 obtained accuracies of 96.5%, 95.1%, and 94.9%, respectively. Among all the FTVT models, FTVT-l16 achieved a remarkable accuracy of 98.70%, whereas the other FTVT models, FTVT-b16, FTVT-b32, and FTVT-l32, achieved accuracies of 98.09%, 96.87%, and 98.62%, respectively, proving the efficacy and robustness of FTVTs in medical image processing.

7.
JMIR Med Inform ; 12: e57097, 2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39121473

ABSTRACT

BACKGROUND: Activities of daily living (ADL) are essential for independence and personal well-being, reflecting an individual's functional status. Impairment in executing these tasks can limit autonomy and negatively affect quality of life. The assessment of physical function during ADL is crucial for the prevention and rehabilitation of movement limitations. Still, its traditional evaluation based on subjective observation has limitations in precision and objectivity. OBJECTIVE: The primary objective of this study is to use innovative technology, specifically wearable inertial sensors combined with artificial intelligence techniques, to objectively and accurately evaluate human performance in ADL. It is proposed to overcome the limitations of traditional methods by implementing systems that allow dynamic and noninvasive monitoring of movements during daily activities. The approach seeks to provide an effective tool for the early detection of dysfunctions and the personalization of treatment and rehabilitation plans, thus promoting an improvement in the quality of life of individuals. METHODS: To monitor movements, wearable inertial sensors were developed, which include accelerometers and triaxial gyroscopes. The developed sensors were used to create a proprietary database with 6 movements related to the shoulder and 3 related to the back. We registered 53,165 activity records in the database (consisting of accelerometer and gyroscope measurements), which were reduced to 52,600 after processing to remove null or abnormal values. Finally, 4 deep learning (DL) models were created by combining various processing layers to explore different approaches in ADL recognition. RESULTS: The results revealed high performance of the 4 proposed models, with levels of accuracy, precision, recall, and F1-score ranging between 95% and 97% for all classes and an average loss of 0.10. 
These results indicate the great capacity of the models to accurately identify a variety of activities, with a good balance between precision and recall. Both the convolutional and bidirectional approaches achieved slightly superior results, although the bidirectional model reached convergence in a smaller number of epochs. CONCLUSIONS: The DL models implemented have demonstrated solid performance, indicating an effective ability to identify and classify various daily activities related to the shoulder and lumbar region. These results were achieved with minimal sensorization (noninvasive and practically imperceptible to the user), which does not affect the daily routine and promotes acceptance and adherence to continuous monitoring, thus improving the reliability of the data collected. This research has the potential to make a significant impact on the clinical evaluation and rehabilitation of patients with movement limitations by providing an objective and advanced tool to detect key movement patterns and joint dysfunctions.
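
Before accelerometer/gyroscope records like the 52,600 described above can feed a DL model, the continuous stream is typically cut into fixed-length, possibly overlapping windows. A minimal windowing sketch (the window size and step here are arbitrary placeholders, not the study's settings):

```python
def sliding_windows(signal, size, step):
    """Split a 1-D sensor stream into fixed-length windows for model input."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

stream = list(range(10))               # stand-in for accelerometer samples
wins = sliding_windows(stream, size=4, step=2)
# 50% overlap: [0..3], [2..5], [4..7], [6..9]
```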

8.
Bioengineering (Basel) ; 11(8)2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39199758

ABSTRACT

Lung cancer, the second most common type of cancer worldwide, presents significant health challenges. Detecting this disease early is essential for improving patient outcomes and simplifying treatment. In this study, we propose a hybrid framework that combines deep learning (DL) with quantum computing to enhance the accuracy of lung cancer detection using chest radiographs (CXR) and computed tomography (CT) images. Our system utilizes pre-trained models for feature extraction and quantum circuits for classification, achieving state-of-the-art performance in various metrics. Not only does our system achieve an overall accuracy of 92.12%, but it also excels in other crucial performance measures, such as sensitivity (94%), specificity (90%), F1-score (93%), and precision (92%). These results demonstrate that our hybrid approach can identify lung cancer signatures more accurately than traditional methods. Moreover, the incorporation of quantum computing enhances processing speed and scalability, making our system a promising tool for early lung cancer screening and diagnosis. By leveraging the strengths of quantum computing, our approach surpasses traditional methods in terms of speed, accuracy, and efficiency. This study highlights the potential of hybrid computational technologies to transform early cancer detection, paving the way for wider clinical applications and improved patient care outcomes.
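
The "quantum circuits for classification" idea can be illustrated, in a much-simplified form, by a single-qubit variational head: a classical feature is encoded as a rotation angle and the Pauli-Z expectation value is thresholded. This toy simulation (not the authors' circuit) shows the principle:

```python
import math

def ry_expectation_z(theta):
    """<Z> of a single qubit prepared as RY(theta)|0>.
    Amplitudes: [cos(theta/2), sin(theta/2)], so <Z> = cos(theta)."""
    a0, a1 = math.cos(theta / 2), math.sin(theta / 2)
    return a0 * a0 - a1 * a1

def classify(feature, weight):
    """Toy variational-quantum-classifier head: encode feature * weight as the
    rotation angle and predict class 1 when <Z> is negative."""
    return 1 if ry_expectation_z(feature * weight) < 0 else 0
```

In a real hybrid pipeline, `feature` would be a pre-trained DL embedding component and `weight` a trained circuit parameter; here both are placeholders.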

9.
Sci Rep ; 14(1): 17447, 2024 07 29.
Article in English | MEDLINE | ID: mdl-39075091

ABSTRACT

The bone marrow overproduces immature cells in the malignancy known as Acute Lymphoblastic Leukemia (ALL). In the United States, about 6500 cases of ALL are diagnosed each year in both children and adults, comprising nearly 25% of pediatric cancer cases. Recently, many computer-assisted diagnosis (CAD) systems have been proposed to aid hematologists in reducing workload, providing correct results, and managing enormous volumes of data. Traditional CAD systems rely on hematologists' expertise, specialized features, and subject knowledge. Early detection of ALL can aid radiologists and doctors in making medical decisions. In this study, a Deep Dilated Residual Convolutional Neural Network (DDRNet) is presented for the classification of blood cell images, focusing on eosinophils, lymphocytes, monocytes, and neutrophils. To tackle challenges like vanishing gradients and enhance feature extraction, the model incorporates Deep Residual Dilated Blocks (DRDB) for faster convergence. Conventional residual blocks are strategically placed between layers to preserve original information and extract general feature maps. Global and Local Feature Enhancement Blocks (GLFEB) balance weak contributions from shallow layers for improved feature normalization. The global feature from the initial convolution layer, when combined with GLFEB-processed features, reinforces classification representations. The Tanh function introduces non-linearity. A Channel and Spatial Attention Block (CSAB) is integrated into the neural network to emphasize or minimize specific feature channels, while fully connected layers transform the data. A sigmoid activation function concentrates on relevant features for multiclass lymphoblastic leukemia classification. The model was evaluated on a Kaggle dataset (16,249 images) categorized into four classes, with a training and testing ratio of 80:20. Experimental results showed that the feature discrimination ability of the DRDB, GLFEB, and CSAB blocks boosted the DDRNet model's F1 score to 0.96, with minimal computational complexity and optimum classification accuracies of 99.86% and 91.98% for training and testing data. The DDRNet model stands out from existing methods due to its high testing accuracy of 91.98%, F1 score of 0.96, minimal computational complexity, and enhanced feature discrimination ability. The strategic combination of these blocks (DRDB, GLFEB, and CSAB) is designed to address specific challenges in the classification process, leading to improved discrimination of the features crucial for accurate multi-class blood cell image identification. Their effective integration within the model contributes to the superior performance of DDRNet.
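
The dilated residual blocks named above rest on dilated convolution, which reads input positions spaced `dilation` apart to widen the receptive field without extra parameters. A minimal 1-D sketch of that core operation (real DRDB blocks are 2-D and learned; this is illustrative only):

```python
def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1-D correlation with a dilated kernel (no flipping),
    the core operation inside a dilated residual block."""
    span = (len(kernel) - 1) * dilation + 1   # receptive field of the kernel
    return [
        sum(kernel[k] * x[i + k * dilation] for k in range(len(kernel)))
        for i in range(len(x) - span + 1)
    ]

out = dilated_conv1d([1, 2, 3, 4, 5, 6], kernel=[1, 1], dilation=2)
# With dilation 2, out[i] = x[i] + x[i + 2]
```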


Subject(s)
Deep Learning; Precursor Cell Lymphoblastic Leukemia-Lymphoma; Precursor Cell Lymphoblastic Leukemia-Lymphoma/pathology; Precursor Cell Lymphoblastic Leukemia-Lymphoma/classification; Humans; Neural Networks, Computer; Diagnosis, Computer-Assisted/methods; Child
10.
J Environ Manage ; 366: 121932, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39043087

ABSTRACT

Deep learning models provide a powerful method for accurate and stable prediction of water quality in rivers, which is crucial for the intelligent management and control of the water environment. To increase the accuracy of predicting water quality parameters and to learn more about the impact of complex spatial information, this study proposes two ensemble models, TNX (with temporal attention) and STNX (with spatio-temporal attention), based on seasonal-trend decomposition (STL) to predict water quality using geo-sensory time series data. Dissolved oxygen, total phosphorus, and ammonia nitrogen were predicted at short steps (1 h and 2 h) and long steps (12 h and 24 h) for seven water quality monitoring sites along a river. The ensemble model TNX improved performance by 2.1%-6.1% and 4.3%-22.0% relative to the best baseline deep learning model for short-step and long-step water quality prediction, and it can capture the variation pattern of water quality parameters by predicting only the trend component of the raw data after STL decomposition. The STNX model, with spatio-temporal attention, obtained 0.5%-2.4% and 2.3%-5.7% higher performance than the TNX model for short-step and long-step prediction, and this improvement was more effective in mitigating the prediction shift patterns of long-step prediction. Moreover, the model interpretation results consistently demonstrated positive relationship patterns across all monitoring sites, although the significance of specific monitoring sites diminished as the distance between the predicted and input monitoring sites increased. This study provides an ensemble modeling approach based on STL decomposition for improving short-step and long-step prediction of river water quality parameters and clarifies the impact of complex spatial information on deep learning models.
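
The decomposition step splits each series into trend, seasonal, and residual parts before the networks see it. A naive moving-average stand-in for STL (real STL uses Loess smoothing; this sketch only shows the three-component idea):

```python
def decompose(series, period):
    """Naive seasonal-trend decomposition: moving-average trend, per-phase mean
    seasonality, and the residual. A rough stand-in for STL, not STL itself."""
    n = len(series)
    half = period // 2
    trend = []
    for i in range(n):  # centred moving average, shrinking at the edges
        lo, hi = max(0, i - half), min(n, i + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    detrended = [series[i] - trend[i] for i in range(n)]
    phase_mean = [
        sum(detrended[p::period]) / len(detrended[p::period]) for p in range(period)
    ]
    seasonal = [phase_mean[i % period] for i in range(n)]
    resid = [series[i] - trend[i] - seasonal[i] for i in range(n)]
    return trend, seasonal, resid

series = [1.0, 3.0, 1.0, 3.0, 1.0, 3.0, 1.0, 3.0]  # toy 2-periodic signal
trend, seasonal, resid = decompose(series, 2)
```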


Subject(s)
Deep Learning; Rivers; Water Quality; Rivers/chemistry; Environmental Monitoring/methods; Phosphorus/analysis; Models, Theoretical
11.
Int J Med Inform ; 190: 105544, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39003790

ABSTRACT

OBJECTIVE: To determine the incidence of patients presenting in pain to a large Australian inner-city emergency department (ED) using a clinical text deep learning algorithm. MATERIALS AND METHODS: A fine-tuned, domain-specific, transformer-based clinical text deep learning model was used to interpret free-text nursing assessments in the electronic medical records of 235,789 adult presentations to the ED over a three-year period. The model classified presentations according to whether the patient had pain on arrival at the ED. Interrupted time series analysis was used to determine the incidence of pain in patients on arrival over time. We described the changes in the population characteristics and incidence of patients with pain on arrival occurring with the start of the COVID-19 pandemic. RESULTS: 55.16% (95% CI 54.95%-55.36%) of all patients presenting to this ED had pain on arrival. There were differences in demographics and arrival and departure patterns between patients with and without pain. The COVID-19 pandemic initially precipitated a decrease followed by a sharp, sustained rise in pain on arrival, with concurrent changes to the population arriving in pain and their treatment. DISCUSSION: Applying a clinical text deep learning model has successfully identified the incidence of pain on arrival. It represents an automated, reproducible mechanism to identify pain from routinely collected medical records. The description of this population and their treatment forms the basis of interventions to improve care for patients with pain. The combination of the clinical text deep learning model and interrupted time series analysis has reported on the effects of the COVID-19 pandemic on pain care in the ED, outlining a methodology to assess the impact of significant events or interventions on pain care in the ED.
CONCLUSION: Applying a novel deep learning approach to identifying pain guides methodological approaches to evaluating pain care interventions in the ED, giving previously unavailable population-level insights.
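
Interrupted time series analysis, as used above, estimates how an event (here, the start of the pandemic) changed the level of an outcome series. A much-simplified before/after level-shift sketch on made-up weekly rates (full segmented regression also models slopes):

```python
def level_shift(series, break_idx):
    """Crude interrupted-time-series summary: mean outcome level before vs.
    after an intervention point. Segmented regression is the fuller approach."""
    before = series[:break_idx]
    after = series[break_idx:]
    mean_before = sum(before) / len(before)
    mean_after = sum(after) / len(after)
    return mean_before, mean_after, mean_after - mean_before

# Made-up weekly "% presenting in pain" values; the break at index 4 marks
# a hypothetical intervention week.
weekly_pain_rate = [54.0, 55.0, 54.5, 55.5, 50.0, 57.0, 58.0, 58.5]
before_mean, after_mean, shift = level_shift(weekly_pain_rate, break_idx=4)
```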


Subject(s)
COVID-19; Deep Learning; Emergency Service, Hospital; Pain; Humans; Emergency Service, Hospital/statistics & numerical data; COVID-19/epidemiology; Male; Female; Pain/epidemiology; Pain/diagnosis; Middle Aged; Adult; Electronic Health Records/statistics & numerical data; Interrupted Time Series Analysis; Aged; Australia/epidemiology; Incidence; SARS-CoV-2
12.
J Ultrasound Med ; 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39032010

ABSTRACT

Artificial intelligence (AI) models can play a more effective role in managing patients given the explosion of digital health records available in the healthcare industry. Machine learning (ML) and deep learning (DL) techniques are two methods used to develop predictive models that serve to improve clinical processes in the healthcare industry. These models are also implemented in medical imaging machines to empower them with intelligent decision systems that aid physicians in their decisions and increase the efficiency of their routine clinical practices. The physicians who are going to work with these machines need insight into what happens in the background of the implemented models and how they work. More importantly, they need to be able to interpret the models' predictions, assess their performance, and compare them to find the one with the best performance and fewer errors. This review aims to provide an accessible overview of key evaluation metrics for physicians without AI expertise. In this review, we developed four real-world diagnostic AI models (two ML and two DL models) for breast cancer diagnosis using ultrasound images. Then, 23 of the most commonly used evaluation metrics were explained in plain terms for physicians. Finally, all metrics were calculated and used practically to interpret and evaluate the outputs of the models. Accessible explanations and practical applications empower physicians to effectively interpret, evaluate, and optimize AI models to ensure safety and efficacy when integrated into clinical practice.
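
Most of the evaluation metrics such a review covers derive from the four cells of a binary confusion matrix. A minimal sketch of five of the most common ones, on toy counts:

```python
def binary_metrics(tp, fp, tn, fn):
    """Common diagnostic-test metrics derived from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)      # recall / true-positive rate
    specificity = tn / (tn + fp)      # true-negative rate
    precision = tp / (tp + fp)        # positive predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy, "f1": f1}

# Toy counts for a hypothetical 200-image test set.
metrics = binary_metrics(tp=90, fp=10, tn=80, fn=20)
```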

13.
Br J Haematol ; 205(2): 699-710, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38894606

ABSTRACT

In sub-Saharan Africa, acute-onset severe malaria anaemia (SMA) is a critical challenge, particularly affecting children under five. The acute drop in haematocrit in SMA is thought to be driven by an increased phagocytic pathological process in the spleen, leading to the presence of distinct red blood cells (RBCs) with altered morphological characteristics. We hypothesized that these RBCs could be detected systematically and at scale in peripheral blood films (PBFs) by harnessing the capabilities of deep learning models. Assessment of PBFs by a microscopist does not scale for this task and is subject to variability. Here we introduce a deep learning model, leveraging a weakly supervised Multiple Instance Learning framework, to Identify SMA (MILISMA) through the presence of morphologically changed RBCs. MILISMA achieved a classification accuracy of 83% (receiver operating characteristic area under the curve [AUC] of 87%; precision-recall AUC of 76%). More importantly, MILISMA's capabilities extend to identifying statistically significant morphological distinctions (p < 0.01) in RBC descriptors. Our findings are enriched by visual analyses, which underscore the unique morphological features of SMA-affected RBCs when compared to non-SMA cells. This model-aided detection and characterization of RBC alterations could enhance the understanding of SMA's pathology and refine SMA diagnostic and prognostic evaluation processes at scale.
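
In weakly supervised Multiple Instance Learning, only the bag (here, a whole blood film) carries a label; instance labels (individual cells) are unknown. The simplest aggregation is max pooling: the bag is positive if its most suspicious instance is. A toy sketch of that idea (MILISMA's actual aggregation may differ):

```python
def mil_bag_score(instance_scores):
    """Max-pooling MIL aggregation: a bag's score is its highest instance score."""
    return max(instance_scores)

def classify_bag(instance_scores, threshold=0.5):
    """Call a blood film positive if any cell patch scores above the threshold."""
    return int(mil_bag_score(instance_scores) >= threshold)

# Toy per-cell abnormality scores for two hypothetical films.
sma_film = [0.10, 0.05, 0.92, 0.30]   # one strongly abnormal cell
normal_film = [0.10, 0.20, 0.15]
```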


Subject(s)
Anemia; Deep Learning; Erythrocytes; Humans; Erythrocytes/pathology; Anemia/blood; Anemia/pathology; Anemia/diagnosis; Female; Male; Child, Preschool; Malaria/blood; Malaria/diagnosis; Malaria/pathology; Infant; Child
14.
IUBMB Life ; 76(9): 666-696, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38748776

ABSTRACT

This research explores the potential of a tocopherol-based nanoemulsion as a therapeutic agent for cardiovascular diseases (CVD) through an in-depth molecular docking analysis. The study focuses on elucidating the molecular interactions between tocopherol and seven key proteins (1O8a, 4YAY, 4DLI, 1HW9, 2YCW, 1BO9 and 1CX2) that play pivotal roles in CVD development. Through rigorous in silico docking investigations, the binding affinities, inhibitory potentials and interaction patterns of tocopherol with these target proteins were assessed. The findings revealed significant interactions, particularly with 4YAY, displaying a robust binding energy of -6.39 kcal/mol and a promising Ki value of 20.84 µM. Notable interactions were also observed with 1HW9, 4DLI, 2YCW and 1CX2, further indicating tocopherol's potential therapeutic relevance. In contrast, no interaction was observed with 1BO9. Furthermore, an examination of the common residues of 4YAY bound to tocopherol was carried out, highlighting key intermolecular hydrophobic bonds that contribute to the interaction's stability. Tocopherol complies with the pharmacokinetic rules (Lipinski's and Veber's) for oral bioavailability, and toxicity screening predicts it to be non-toxic and non-carcinogenic. The deep learning-based protein language models ESM-1b and ProtT5 were then leveraged for input encodings to predict interaction sites between the 4YAY protein and tocopherol, achieving highly accurate predictions of these critical protein-ligand interactions. This study not only advances the understanding of these interactions but also highlights deep learning's immense potential in molecular biology and drug discovery. It underscores tocopherol's promise as a candidate for cardiovascular disease management, shedding light on its molecular interactions and drug-like characteristics.
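
The Lipinski and Veber checks mentioned above are simple threshold rules on computed molecular properties. A minimal sketch using the standard published cutoffs (property values for a real molecule would come from cheminformatics software; the strict zero-violation form below is a simplification, since one Lipinski violation is often tolerated):

```python
def passes_lipinski(mol_weight, logp, h_donors, h_acceptors):
    """Lipinski's rule of five: MW <= 500, logP <= 5, H-bond donors <= 5,
    H-bond acceptors <= 10 (zero violations required in this simplified form)."""
    return (mol_weight <= 500 and logp <= 5
            and h_donors <= 5 and h_acceptors <= 10)

def passes_veber(rotatable_bonds, tpsa):
    """Veber's rules: rotatable bonds <= 10 and topological polar
    surface area <= 140 A^2."""
    return rotatable_bonds <= 10 and tpsa <= 140
```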


Subject(s)
Cardiovascular Diseases; Deep Learning; Molecular Docking Simulation; Cardiovascular Diseases/drug therapy; Cardiovascular Diseases/metabolism; Humans; Tocopherols/chemistry; Tocopherols/metabolism; Protein Binding; Proteins/chemistry; Proteins/metabolism
15.
J Integr Bioinform ; 21(2)2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38797876

ABSTRACT

Protein structure determination has made progress with the aid of deep learning models, enabling the prediction of protein folding from protein sequences. However, obtaining accurate predictions becomes essential in certain cases where the protein structure remains undescribed. This is particularly challenging when dealing with rare, diverse structures and complex sample preparation. Different metrics assess prediction reliability and offer insights into result strength, providing a comprehensive understanding of protein structure when different models are combined. In a previous study, two proteins named ARM58 and ARM56 were investigated. These proteins contain four domains of unknown function and are present in Leishmania spp.; ARM refers to an antimony resistance marker. The study's main objective is to assess the accuracy of the models' predictions, thereby providing insights into the complexities and supporting metrics underlying these findings. The analysis also extends to the comparison of predictions obtained from other species and organisms. Notably, one of these proteins shares an ortholog with Trypanosoma cruzi and Trypanosoma brucei, lending further significance to our analysis. This effort underscored the importance of evaluating the diverse outputs from deep learning models, facilitating comparisons across different organisms and proteins. This becomes particularly pertinent in cases where no previous structural information is available.


Subject(s)
Protein Folding; Protozoan Proteins; Protozoan Proteins/chemistry; Protozoan Proteins/metabolism; Trypanosoma cruzi; Leishmania; Deep Learning; Trypanosoma brucei brucei/metabolism; Models, Molecular; Computational Biology/methods
16.
Plant Physiol Biochem ; 212: 108769, 2024 Jul.
Artículo en Inglés | MEDLINE | ID: mdl-38797010

ABSTRACT

Multiple stress exposures pose a primary challenge to tea production and have negatively affected the sustainability of its global market, so a fast, in-field technique for monitoring stress in tea leaves is urgently needed. Therefore, this study aimed to propose an efficient method for detecting stress symptoms using a portable smartphone with deep learning models. First, a database containing over 10,000 images of tea garden canopies in complex natural scenes was developed, covering healthy leaves (no stress) and three types of stress: tea anthracnose (TA), tea blister blight (TB), and sunburn (SB). Then, the YOLOv5m and YOLOv8m algorithms were adapted to discriminate the four types of stress symptoms; the YOLOv8m algorithm achieved better performance, identifying healthy leaves (98%), TA (92.0%), TB (68.4%), and SB (75.5%). Furthermore, the YOLOv8m algorithm was used to construct a model for differentiating the disease severity of TA, with satisfactory accuracies of 94%, 96%, and 91% for mild, moderate, and severe TA infections, respectively. We also found that the CNN kernels of YOLOv8m efficiently extract the texture characteristics of the images at layer 2, and that these characteristics clearly distinguish the different types of stress symptoms, enabling the YOLOv8m model to achieve high-precision differentiation of the four stress types. In conclusion, our study provides an effective system for low-cost, high-precision, fast, in-field diagnosis of tea stress symptoms in complex natural scenes based on smartphones and deep learning algorithms.
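The per-class identification rates reported above are, in effect, per-class recall: the fraction of images of each stress type the model labels correctly. A minimal sketch of that computation (the labels and predictions below are illustrative, not the study's data):

```python
# Hedged sketch: per-class recall for a 4-way stress classifier,
# the metric behind per-class identification rates.
from collections import Counter

def per_class_recall(y_true, y_pred):
    hits, totals = Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        if t == p:
            hits[t] += 1
    return {c: hits[c] / totals[c] for c in totals}

# Illustrative ground-truth and predicted labels (not the paper's data).
y_true = ["healthy", "TA", "TA", "TB", "SB", "healthy"]
y_pred = ["healthy", "TA", "TB", "TB", "SB", "healthy"]
print(per_class_recall(y_true, y_pred))
```

On real detection output (bounding boxes rather than whole-image labels), the same idea applies per detected region after matching boxes to ground truth.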


Subject(s)
Algorithms , Deep Learning , Plant Leaves , Smartphone , Camellia sinensis , Physiological Stress/physiology , Plant Diseases/microbiology
17.
Cureus ; 16(4): e57728, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38711724

ABSTRACT

Clinical Decision Support Systems (CDSS) are essential tools in contemporary healthcare, enhancing clinicians' decisions and patient outcomes. The integration of artificial intelligence (AI) is now revolutionizing CDSS even further. This review delves into the AI technologies transforming CDSS, their applications in healthcare decision-making, associated challenges, and the trajectory toward fully realizing the potential of AI-CDSS. The review begins by laying the groundwork with a definition of CDSS and its function within the healthcare field. It then highlights the increasingly significant role that AI is playing in enhancing CDSS effectiveness and efficiency, underlining its evolving prominence in shaping healthcare practices. It examines the integration of AI technologies into CDSS, including machine learning algorithms like neural networks and decision trees, natural language processing, and deep learning. It also addresses the challenges associated with AI integration, such as interpretability and bias. We then shift to AI applications within CDSS, with real-life examples of AI-driven diagnostics, personalized treatment recommendations, risk prediction, early intervention, and AI-assisted clinical documentation. The review emphasizes user-centered design in AI-CDSS integration, addressing usability, trust, workflow, and ethical and legal considerations. It acknowledges prevailing obstacles and suggests strategies for successful AI-CDSS adoption, highlighting the need for workflow alignment and interdisciplinary collaboration. The review concludes by summarizing key findings, underscoring AI's transformative potential in CDSS, and advocating for continued research and innovation. It emphasizes the need for collaborative efforts to realize a future where AI-powered CDSS optimizes healthcare delivery and improves patient outcomes.

18.
Eur J Ophthalmol ; : 11206721241258253, 2024 May 29.
Article in English | MEDLINE | ID: mdl-38809664

ABSTRACT

PURPOSE: To investigate the potential of an Optical Coherence Tomography (OCT)-based deep learning (DL) model for predicting Vitreomacular Traction (VMT) syndrome outcomes. DESIGN: A single-centre retrospective review. METHODS: Records of consecutive adult patients attending the Royal Adelaide Hospital vitreoretinal clinic with evidence of spontaneous VMT were reviewed from January 2019 until May 2022. All patients with evidence of causes of cystoid macular oedema or secondary causes of VMT were excluded. OCT scans and outcome data obtained from patient records were used to train, test, and then validate the models. RESULTS: Ninety-five patient files were identified from the OCT (SPECTRALIS system; Heidelberg Engineering, Heidelberg, Germany) records for the deep learning model. Approximately 25% of the patients spontaneously improved, 48% remained stable, and 27% had progression of their disease. The final longitudinal model predicted 'improved' or 'stable' disease with positive predictive values of 0.72 and 0.79, respectively. The accuracy of the model was greater than 50%. CONCLUSIONS: Deep learning models may be utilised in real-world settings to predict outcomes of VMT. This approach requires further investigation, as it may improve patient outcomes by aiding ophthalmologists in cross-checking management decisions and reducing the need for unnecessary interventions or delays.
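For context, the positive predictive values quoted above reduce to a simple ratio over the model's positive calls; a minimal sketch (the counts below are illustrative, not taken from the study):

```python
# Hedged sketch: positive predictive value (precision) for one outcome
# class, PPV = TP / (TP + FP).
def positive_predictive_value(tp: int, fp: int) -> float:
    return tp / (tp + fp)

# Illustrative counts: 18 correct out of 25 total "improved" predictions.
print(positive_predictive_value(18, 7))  # 0.72
```

For a three-way outcome (improved / stable / progressed), a separate PPV is computed per class by treating that class as "positive" and the other two as "negative".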

19.
Methods ; 226: 164-175, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38702021

ABSTRACT

Ensuring the safety and efficacy of chemical compounds is crucial in small-molecule drug development. In the later stages of drug development, toxic compounds pose a significant challenge, wasting valuable resources and time. Early and accurate prediction of compound toxicity using deep learning models offers a promising way to mitigate these risks during drug discovery. In this study, we present the development of several deep-learning models aimed at evaluating different types of compound toxicity, including acute toxicity, carcinogenicity, hERG cardiotoxicity (cardiotoxicity caused by the human ether-a-go-go-related gene), hepatotoxicity, and mutagenicity. To address the inherent variations in data size, label type, and distribution across the different types of toxicity, we employed diverse training strategies. Our first approach utilized a graph convolutional network (GCN) regression model to predict acute toxicity, which achieved notable performance with Pearson R values of 0.76, 0.74, and 0.65 for the intraperitoneal, intravenous, and oral administration routes, respectively. Furthermore, we trained multiple GCN binary classification models, each tailored to a specific type of toxicity. These models achieved area under the curve (AUC) scores of 0.69, 0.77, 0.88, and 0.79 for predicting carcinogenicity, hERG cardiotoxicity, mutagenicity, and hepatotoxicity, respectively. Additionally, we used an approved-drug dataset to determine an appropriate threshold value for the prediction score in model usage. We integrated these models into a virtual screening pipeline to assess their effectiveness in identifying potential low-toxicity drug candidates. Our findings indicate that this deep learning approach has the potential to significantly reduce the cost and risk associated with drug development by expediting the selection of compounds with low toxicity profiles. Therefore, the models developed in this study hold promise as critical tools for early drug candidate screening and selection.
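The threshold-calibration step described above, using approved drugs to set a cutoff for the toxicity score, can be sketched roughly as follows; the percentile choice, scores, and candidate names are illustrative assumptions, not the authors' procedure:

```python
# Hedged sketch: choose a toxicity-score cutoff so that a chosen fraction
# of approved (presumed tolerable) drugs score below it; screening
# candidates scoring above the cutoff are then flagged as potentially toxic.
def calibrate_threshold(approved_scores, keep_fraction=0.8):
    ranked = sorted(approved_scores)
    idx = min(int(keep_fraction * len(ranked)), len(ranked) - 1)
    return ranked[idx]

# Illustrative prediction scores for a set of approved drugs.
approved = [0.05, 0.10, 0.12, 0.20, 0.22, 0.30, 0.35, 0.40, 0.55, 0.90]
threshold = calibrate_threshold(approved)
print(threshold)  # 0.55

# Hypothetical screening candidates; those above the cutoff are flagged.
candidates = {"cand_A": 0.15, "cand_B": 0.70}
flagged = [name for name, s in candidates.items() if s > threshold]
print(flagged)  # ['cand_B']
```

In a real pipeline the cutoff would be set per toxicity endpoint, since each classifier produces scores on its own scale.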


Subject(s)
Deep Learning , Humans , Drug Discovery/methods , Animals , Drug-Related Side Effects and Adverse Reactions , Cardiotoxicity/etiology
20.
J Neurosci Methods ; 407: 110158, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38703797

ABSTRACT

BACKGROUND: The serotonergic system modulates brain processes via functionally distinct subpopulations of neurons with heterogeneous properties, including their electrophysiological activity. In extracellular recordings, serotonergic neurons to be investigated for their functional properties are commonly identified on the basis of "typical" features of their activity, i.e. slow regular firing and relatively long duration of action potentials. Thus, owing to the lack of equally robust criteria for discriminating serotonergic neurons with "atypical" features from non-serotonergic cells, the physiological relevance of the diversity of serotonergic neuron activities remains largely understudied. NEW METHODS: We propose deep learning models capable of discriminating typical and atypical serotonergic neurons from non-serotonergic cells with high accuracy. The research utilized electrophysiological in vitro recordings from serotonergic neurons, identified by the expression of fluorescent proteins specific to the serotonergic system, and from non-serotonergic cells. These recordings formed the basis of the training, validation, and testing data for the deep learning models. The study employed convolutional neural networks (CNNs), known for their efficiency in pattern recognition, to classify neurons based on the specific characteristics of their action potentials. RESULTS: The models were trained on a dataset comprising 27,108 original action potential samples, alongside an extensive set of 12 million synthetic action potential samples designed to mitigate the risk of overfitting the background noise in the recordings, a potential source of bias. Results show that the models achieved high accuracy and were further validated on "non-homogeneous" data, i.e., data unknown to the model and collected on different days from those used for training, confirming their robustness and reliability in real-world experimental conditions.
COMPARISON WITH EXISTING METHODS: Conventional methods for identifying serotonergic neurons allow recognition only of neurons defined as typical. Our model, based on analysis of the action potential alone, reliably recognizes over 94% of serotonergic neurons, including those with atypical spike and activity features. CONCLUSION: The model is ready for use in experiments conducted with the recording parameters described here. We release the code and procedures allowing the model to be adapted to different acquisition parameters or to the identification of other classes of spontaneously active neurons.
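The synthetic-augmentation step described in RESULTS, expanding recorded waveforms into millions of noisy copies so the classifier cannot overfit recording noise, can be sketched as follows; the toy spike shape, noise level, and sample counts are illustrative assumptions, not the authors' exact procedure:

```python
# Hedged sketch: augment a recorded action-potential waveform by adding
# independent Gaussian noise, yielding many synthetic training samples.
import math
import random

def synthesize(waveform, n_copies, noise_sd=0.02, seed=0):
    rng = random.Random(seed)
    return [
        [v + rng.gauss(0.0, noise_sd) for v in waveform]
        for _ in range(n_copies)
    ]

# A toy spike: sharp depolarization followed by a shallow
# after-hyperpolarization, sampled at 32 points.
spike = [math.exp(-((t - 10) ** 2) / 8) - 0.3 * math.exp(-((t - 16) ** 2) / 20)
         for t in range(32)]

augmented = synthesize(spike, n_copies=100)
print(len(augmented), len(augmented[0]))  # 100 32
```

Scaled up, a few tens of thousands of recorded spikes augmented a few hundred times each would reach the millions of synthetic samples the study describes.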


Subject(s)
Action Potentials , Deep Learning , Serotonergic Neurons , Serotonergic Neurons/physiology , Animals , Action Potentials/physiology , Neurological Models , Mice