1.
Methods Mol Biol ; 2852: 223-253, 2025.
Article in English | MEDLINE | ID: mdl-39235748

ABSTRACT

One of the main challenges in food microbiology is to prevent the risk of outbreaks by avoiding the distribution of food contaminated by bacteria. This requires constant monitoring of the circulating strains throughout the food production chain. Bacterial genomes contain signatures of natural evolution and adaptive markers that can be exploited to better understand the behavior of pathogens in the food industry. The monitoring of foodborne strains can therefore be facilitated by the use of these genomic markers, which can rapidly provide essential information on isolated strains, such as the source of contamination, risk of illness, potential for biofilm formation, and tolerance or resistance to biocides. The increasing availability of large genome datasets is enhancing the understanding of the genetic basis of complex traits such as host adaptation, virulence, and persistence. Genome-wide association studies have shown very promising results in the discovery of genomic markers that can be integrated into rapid detection tools. In addition, machine learning has successfully predicted phenotypes and classified important traits. Genome-wide association and machine learning tools therefore have the potential to support decision-making processes aimed at reducing the burden of foodborne diseases. The aim of this chapter is to review the use of these two methods in food microbiology and to recommend their application in the field.


Subject(s)
Bacteria, Food Microbiology, Foodborne Diseases, Genome-Wide Association Study, Machine Learning, Humans, Bacteria/genetics, Foodborne Diseases/microbiology, Foodborne Diseases/genetics, Genetic Variation, Bacterial Genome, Genome-Wide Association Study/methods, Phenotype
2.
Methods Mol Biol ; 2852: 85-103, 2025.
Article in English | MEDLINE | ID: mdl-39235738

ABSTRACT

Although MALDI-TOF mass spectrometry (MS) is considered the gold standard for rapid and cost-effective identification of microorganisms in routine laboratory practice, its capability for antimicrobial resistance (AMR) detection has received limited attention. Nevertheless, recent studies have explored the predictive performance of MALDI-TOF MS for detecting AMR in clinical pathogens when machine learning techniques are applied. This chapter describes a routine MALDI-TOF MS workflow for the rapid screening of AMR in foodborne pathogens, using Campylobacter spp. as a study model.


Subject(s)
Campylobacter, Bacterial Drug Resistance, Machine Learning, Matrix-Assisted Laser Desorption-Ionization Mass Spectrometry, Matrix-Assisted Laser Desorption-Ionization Mass Spectrometry/methods, Campylobacter/drug effects, Anti-Bacterial Agents/pharmacology, Humans, Food Microbiology/methods, Microbial Sensitivity Tests/methods, Foodborne Diseases/microbiology, Bacteria/drug effects
3.
Methods Mol Biol ; 2856: 357-400, 2025.
Article in English | MEDLINE | ID: mdl-39283464

ABSTRACT

Three-dimensional (3D) chromatin interactions, such as enhancer-promoter interactions (EPIs), loops, topologically associating domains (TADs), and A/B compartments, play critical roles in a wide range of cellular processes by regulating gene expression. Recent development of chromatin conformation capture technologies has enabled genome-wide profiling of various 3D structures, even in single cells. However, current catalogs of 3D structures remain incomplete and unreliable due to differences in technology, tools, and low data resolution. Machine learning methods have emerged as an alternative to obtain missing 3D interactions and/or improve resolution. Such methods frequently use genome annotation data (ChIP-seq, DNase-seq, etc.), DNA sequencing information (k-mers and transcription factor binding site (TFBS) motifs), and other genomic properties to learn the associations between genomic features and chromatin interactions. In this review, we discuss computational tools for predicting three types of 3D interactions (EPIs, chromatin interactions, and TAD boundaries) and analyze their pros and cons. We also point out obstacles to the computational prediction of 3D interactions and suggest future research directions.


Subject(s)
Chromatin, Deep Learning, Chromatin/genetics, Chromatin/metabolism, Humans, Computational Biology/methods, Machine Learning, Genomics/methods, Enhancer Elements (Genetic), Promoter Regions (Genetic), Binding Sites, Genome, Software
4.
Clin Chim Acta ; 564: 119938, 2025 Jan 01.
Article in English | MEDLINE | ID: mdl-39181293

ABSTRACT

OBJECTIVE: Delta bilirubin (albumin-covalently bound bilirubin) may provide important clinical utility in identifying impaired hepatic excretion of conjugated bilirubin, but it cannot be measured in real-time for diagnostic purposes in clinical laboratories. METHODS: A total of 210 samples were collected, and their delta bilirubin levels were measured four times using high-performance liquid chromatography. Data collected included age, sex, diagnosis code, delta bilirubin, total bilirubin, direct bilirubin, total protein, albumin, globulin, aspartate aminotransferase, alanine transaminase, alkaline phosphatase, gamma-glutamyl transferase, lactate dehydrogenase, hemoglobin, serum hemolysis value, hemolysis index, icterus value (Iv), icterus index (Ii), lipemia value (Lv), and lipemia index. To conduct feature selection and identify the optimal combination of variables, linear regression machine learning was performed 1,000 times. RESULTS: The selected variables were total bilirubin, direct bilirubin, total protein, albumin, hemoglobin, Iv, Ii, and Lv. The best predictive performance for high delta bilirubin concentrations was achieved with the combination of albumin-direct bilirubin-hemoglobin-Iv-Lv. The final equation composed of these variables was as follows: delta bilirubin = 0.35 × Iv + 0.05 × Lv - 0.23 × direct bilirubin - 0.05 × hemoglobin - 0.04 × albumin + 0.10. CONCLUSION: The equation established in this study is practical and can be easily applied in real-time in clinical laboratories.
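For illustration, the reported equation translates directly into code. Below is a minimal sketch; the function and variable names and the example input values are assumptions, and the inputs must be in the same units as in the original study.

# Hypothetical sketch of the published regression equation for estimating
# delta bilirubin from routine laboratory parameters (names and units assumed).
def estimate_delta_bilirubin(icterus_value: float,
                             lipemia_value: float,
                             direct_bilirubin: float,
                             hemoglobin: float,
                             albumin: float) -> float:
    """Return the estimated delta bilirubin using the coefficients reported above."""
    return (0.35 * icterus_value
            + 0.05 * lipemia_value
            - 0.23 * direct_bilirubin
            - 0.05 * hemoglobin
            - 0.04 * albumin
            + 0.10)

# Example call with made-up input values:
print(estimate_delta_bilirubin(icterus_value=8.0, lipemia_value=10.0,
                               direct_bilirubin=4.5, hemoglobin=13.0, albumin=3.8))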


Subject(s)
Bilirubin, Machine Learning, Bilirubin/blood, Humans, Female, Male, Middle Aged, Adult, Aged, Adolescent, Young Adult, Child, Aged 80 and over, High-Pressure Liquid Chromatography, Preschool Child, Infant
5.
Food Chem ; 462: 140931, 2025 Jan 01.
Article in English | MEDLINE | ID: mdl-39217752

ABSTRACT

This research focused on distinguishing distinct matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) spectral signatures of three Enterococcus species. We evaluated and compared the predictive performance of supervised machine learning algorithms, including K-nearest neighbor (KNN), support vector machine (SVM), and random forest (RF), to accurately classify Enterococcus species. The study involved a comprehensive dataset of 410 strains, generating 1640 individual spectra through on-plate and off-plate protein extraction methods. Although the commercial database correctly identified 76.9% of the strains, the machine learning classifiers demonstrated superior performance (accuracy 0.991). In the RF model, the top informative peaks played a significant role in the classification. Whole-genome sequencing showed that the most informative peaks correspond to protein biomarkers, which are essential for understanding bacterial classification and evolution. The integration of MALDI-TOF MS and machine learning provides a rapid and accurate method for identifying Enterococcus species, improving healthcare and food safety.
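As a rough illustration of the classification step described above, the sketch below trains a random forest on a pre-built matrix of binned spectral intensities and ranks informative peaks by feature importance. The data, species labels, and preprocessing are placeholders, not the study's actual pipeline.

# Minimal sketch: species classification from MALDI-TOF MS peak intensities,
# assuming spectra were already preprocessed into a fixed-length feature matrix
# X (rows = spectra, columns = binned m/z intensities) with labels y.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((300, 500))                                            # placeholder spectra
y = rng.choice(["species_A", "species_B", "species_C"], size=300)     # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Informative peaks can be ranked by impurity-based feature importance.
top_peaks = np.argsort(clf.feature_importances_)[::-1][:10]
print("Indices of the 10 most informative m/z bins:", top_peaks)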


Subject(s)
Enterococcus, Matrix-Assisted Laser Desorption-Ionization Mass Spectrometry, Supervised Machine Learning, Matrix-Assisted Laser Desorption-Ionization Mass Spectrometry/methods, Enterococcus/classification, Enterococcus/chemistry, Enterococcus/isolation & purification, Enterococcus/genetics, Algorithms, Support Vector Machine, Bacterial Typing Techniques/methods, Machine Learning
6.
J Environ Sci (China) ; 149: 358-373, 2025 Mar.
Article in English | MEDLINE | ID: mdl-39181649

ABSTRACT

Carbon emissions resulting from energy consumption have become a pressing issue for governments worldwide. Accurate estimation of carbon emissions using satellite remote sensing data has become a crucial research problem. Previous studies relied on statistical regression models that failed to capture the complex nonlinear relationships between carbon emissions and characteristic variables. In this study, we propose a machine learning approach to carbon emission estimation, a Bayesian-optimized XGBoost regression model, using multi-year energy carbon emission data and nighttime lights (NTL) remote sensing data from Shaanxi Province, China. Our results demonstrate that the XGBoost algorithm outperforms linear regression and four other machine learning models, with an R2 of 0.906 and an RMSE of 5.687. We observe an annual increase in carbon emissions, with high-emission counties primarily concentrated in northern and central Shaanxi Province, displaying a shift from discrete, sporadic points to a contiguous, extended spatial distribution. Spatial autocorrelation clustering reveals predominantly high-high and low-low clustering patterns, with economically developed counties showing high-emission clustering and economically less developed counties displaying low-emission clustering. Our findings show that the use of NTL data and the XGBoost algorithm can estimate and predict carbon emissions more accurately, providing a complementary reference for using satellite remote sensing imagery in carbon emission monitoring and assessment. This research provides an important theoretical basis for formulating practical carbon emission reduction policies and contributes to the development of techniques for accurate carbon emission estimation using remote sensing data.
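A minimal sketch of the regression step is shown below, assuming county-level NTL features and emission figures have already been assembled. The synthetic data and fixed hyperparameters stand in for the study's Bayesian-optimized configuration and are not the authors' code.

# Illustrative sketch: regressing county-level carbon emissions on nighttime-light
# (NTL) features with XGBoost. In practice the hyperparameters would be tuned,
# e.g. with Bayesian optimization; here they are fixed placeholders.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((500, 6))                               # placeholder NTL-derived features per county
y = 40 * X[:, 0] + 15 * X[:, 1] ** 2 + rng.normal(0, 2, 500)   # placeholder emission values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBRegressor(n_estimators=400, max_depth=4, learning_rate=0.05).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)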


Subject(s)
Algorithms, Environmental Monitoring, Machine Learning, China, Environmental Monitoring/methods, Air Pollutants/analysis, Carbon/analysis, Bayes Theorem, Remote Sensing Technology, Air Pollution/statistics & numerical data, Air Pollution/analysis
7.
J Environ Sci (China) ; 149: 68-78, 2025 Mar.
Article in English | MEDLINE | ID: mdl-39181678

ABSTRACT

The presence of aluminum (Al3+) and fluoride (F-) ions in the environment can be harmful to ecosystems and human health, highlighting the need for accurate and efficient monitoring. In this paper, an innovative approach is presented that leverages the power of machine learning to enhance the accuracy and efficiency of fluorescence-based detection for sequential quantitative analysis of aluminum (Al3+) and fluoride (F-) ions in aqueous solutions. The proposed method involves the synthesis of sulfur-functionalized carbon dots (C-dots) as fluorescence probes, with fluorescence enhancement upon interaction with Al3+ ions, achieving a detection limit of 4.2 nmol/L. Subsequently, in the presence of F- ions, fluorescence is quenched, with a detection limit of 47.6 nmol/L. The fingerprints of fluorescence images are extracted using a cross-platform computer vision library in Python, followed by data preprocessing. Subsequently, the fingerprint data is subjected to cluster analysis using the K-means model from machine learning, and the average Silhouette Coefficient indicates excellent model performance. Finally, a regression analysis based on the principal component analysis method is employed to achieve more precise quantitative analysis of aluminum and fluoride ions. The results demonstrate that the developed model excels in terms of accuracy and sensitivity. This groundbreaking model not only showcases exceptional performance but also addresses the urgent need for effective environmental monitoring and risk assessment, making it a valuable tool for safeguarding our ecosystems and public health.
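The clustering-plus-regression idea can be sketched as follows; the fingerprint vectors and concentrations are synthetic stand-ins for the computer-vision-derived image features and calibration samples used in the study.

# Sketch of the workflow described above: cluster fluorescence-image fingerprints
# with K-means, check cluster quality with the silhouette coefficient, then use
# PCA-based regression for quantitative prediction. All data are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
fingerprints = rng.random((120, 64))           # placeholder image-fingerprint vectors
concentrations = rng.random(120) * 100         # placeholder ion concentrations (nmol/L)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(fingerprints)
print("Mean silhouette coefficient:", silhouette_score(fingerprints, kmeans.labels_))

pcs = PCA(n_components=5).fit_transform(fingerprints)
reg = LinearRegression().fit(pcs, concentrations)
print("Training R2 of the PCA-based regression:", reg.score(pcs, concentrations))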


Subject(s)
Aluminum, Environmental Monitoring, Fluorides, Machine Learning, Aluminum/analysis, Fluorides/analysis, Environmental Monitoring/methods, Chemical Water Pollutants/analysis, Fluorescence
8.
J Environ Sci (China) ; 147: 259-267, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39003045

ABSTRACT

Arsenic (As) pollution in soils is a pervasive environmental issue. Biochar immobilization offers a promising solution for addressing soil As contamination. The efficiency of biochar in immobilizing As in soils primarily hinges on the characteristics of both the soil and the biochar. However, the influence of a specific property on As immobilization varies among different studies, and the development and application of arsenic passivation materials based on biochar often rely on empirical knowledge. To enhance immobilization efficiency and reduce labor and time costs, a machine learning (ML) model was employed to predict As immobilization efficiency before biochar application. In this study, we collected a dataset comprising 182 data points on As immobilization efficiency from 17 publications to construct three ML models. The results demonstrated that the random forest (RF) model outperformed gradient boost regression tree and support vector regression models in predictive performance. Relative importance analysis and partial dependence plots based on the RF model were conducted to identify the most crucial factors influencing As immobilization. These findings highlighted the significant roles of biochar application time and biochar pH in As immobilization efficiency in soils. Furthermore, the study revealed that Fe-modified biochar exhibited a substantial improvement in As immobilization. These insights can facilitate targeted biochar property design and optimization of biochar application conditions to enhance As immobilization efficiency.
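A hedged sketch of this workflow is given below: fit a random forest on soil and biochar descriptors, rank feature importance, and compute a partial dependence curve. The feature names and data are illustrative placeholders, not the dataset collected from the literature.

# Sketch: random-forest prediction of As immobilization efficiency with
# importance ranking and partial dependence (placeholder data and feature names).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(2)
features = ["biochar_pH", "application_time_d", "soil_pH", "Fe_content", "dose_pct"]
X = pd.DataFrame(rng.random((182, len(features))), columns=features)
y = rng.random(182) * 100          # placeholder immobilization efficiency (%)

rf = RandomForestRegressor(n_estimators=500, random_state=2).fit(X, y)
for name, imp in sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")

# Partial dependence of predicted efficiency on biochar pH (first grid values shown).
pd_result = partial_dependence(rf, X, features=["biochar_pH"])
print(pd_result["average"][0][:5])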


Subject(s)
Arsenic, Charcoal, Machine Learning, Soil Pollutants, Soil, Charcoal/chemistry, Arsenic/chemistry, Soil Pollutants/chemistry, Soil Pollutants/analysis, Soil/chemistry, Chemical Models
9.
J Environ Sci (China) ; 147: 512-522, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39003067

ABSTRACT

To better understand the migration behavior of plastic fragments in the environment, rapid non-destructive methods for in-situ identification and characterization of plastic fragments are needed. However, most studies have focused only on colored plastic fragments, ignoring colorless plastic fragments and the effects of different environmental media (backgrounds), thus underestimating their abundance. To address this issue, the present study used near-infrared spectroscopy to compare the identification of colored and colorless plastic fragments based on partial least squares-discriminant analysis (PLS-DA), extreme gradient boosting, support vector machine, and random forest classifiers. The effects of polymer color, type, thickness, and background on the classification of plastic fragments were evaluated. PLS-DA presented the best and most stable outcome, with higher robustness and a lower misclassification rate. All models frequently confused colorless plastic fragments with their background when the fragment thickness was less than 0.1 mm. A two-stage modeling method, which first distinguishes the plastic types and then identifies colorless plastic fragments that had been misclassified as background, was proposed. The method achieved an accuracy higher than 99% across different backgrounds. In summary, this study developed a novel method for rapid and synchronous identification of colored and colorless plastic fragments against complex environmental backgrounds.
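As a sketch of the PLS-DA step, the code below uses a common PLS-DA formulation (PLS regression on one-hot-encoded labels followed by an argmax); the spectra and class labels are synthetic placeholders rather than the study's NIR data.

# Sketch of a PLS-DA classifier for NIR spectra on placeholder data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
spectra = rng.random((200, 256))                                # placeholder NIR spectra
labels = rng.choice(["PE", "PP", "PET", "background"], 200)     # placeholder classes

lb = LabelBinarizer()
Y = lb.fit_transform(labels)                                    # one-hot class membership
X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(spectra, Y, labels, random_state=3)

pls = PLSRegression(n_components=10).fit(X_tr, Y_tr)
pred = lb.classes_[np.argmax(pls.predict(X_te), axis=1)]        # argmax over class scores
print("Misclassification rate:", np.mean(pred != y_te))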


Subject(s)
Environmental Monitoring, Machine Learning, Plastics, Near-Infrared Spectroscopy, Near-Infrared Spectroscopy/methods, Environmental Monitoring/methods, Plastics/analysis, Least-Squares Analysis, Discriminant Analysis, Color
10.
Nature ; 633(8028): 101-108, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39232151

ABSTRACT

Negotiations for a global treaty on plastic pollution [1] will shape future policies on plastics production, use and waste management. Its parties will benefit from a high-resolution baseline of waste flows and plastic emission sources to enable identification of pollution hotspots and their causes [2]. Nationally aggregated waste management data can be distributed to smaller scales to identify generalized points of plastic accumulation and source phenomena [3-11]. However, it is challenging to use this type of spatial allocation to assess the conditions under which emissions take place [12,13]. Here we develop a global macroplastic pollution emissions inventory by combining conceptual modelling of emission mechanisms with measurable activity data. We define emissions as materials that have moved from the managed or mismanaged system (controlled or contained state) to the unmanaged system (uncontrolled or uncontained state, that is, the environment). Using machine learning and probabilistic material flow analysis, we identify emission hotspots across 50,702 municipalities worldwide from five land-based plastic waste emission sources. We estimate global plastic waste emissions at 52.1 [48.3-56.3] million metric tonnes (Mt) per year, with approximately 57% wt. and 43% wt. open burned and unburned debris, respectively. Littering is the largest emission source in the Global North, whereas uncollected waste is the dominant emission source across the Global South. We suggest that our findings can help inform treaty negotiations and develop national and sub-national waste management action plans and source inventories.


Subject(s)
Cities, Environmental Monitoring, Environmental Pollution, Internationality, Microplastics, Waste Management, Waste Products, Cities/statistics & numerical data, Environmental Pollution/analysis, Geographic Mapping, International Cooperation, Machine Learning, Microplastics/analysis, Waste Management/legislation & jurisprudence, Waste Management/statistics & numerical data, Waste Products/analysis
11.
J Med Internet Res ; 26: e52143, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39250789

ABSTRACT

BACKGROUND: Acute exacerbations of chronic obstructive pulmonary disease (AECOPD) are associated with high mortality, morbidity, and poor quality of life and constitute a substantial burden to patients and health care systems. New approaches to prevent or reduce the severity of AECOPD are urgently needed. Internationally, this has prompted increased interest in the potential of remote patient monitoring (RPM) and digital medicine. RPM refers to the direct transmission of patient-reported outcomes, physiological, and functional data, including heart rate, weight, blood pressure, oxygen saturation, physical activity, and lung function (spirometry), directly to health care professionals through automation, web-based data entry, or phone-based data entry. Machine learning has the potential to enhance RPM in chronic obstructive pulmonary disease by increasing the accuracy and precision of AECOPD prediction systems. OBJECTIVE: This study aimed to conduct a dual systematic review. The first review focuses on randomized controlled trials where RPM was used as an intervention to treat or improve AECOPD. The second review examines studies that combined machine learning with RPM to predict AECOPD. We review the evidence and concepts behind RPM and machine learning and discuss the strengths, limitations, and clinical use of available systems. We have generated a list of recommendations needed to deliver patient and health care system benefits. METHODS: A comprehensive search strategy, encompassing the Scopus and Web of Science databases, was used to identify relevant studies. A total of 2 independent reviewers (HMGG and CM) conducted study selection, data extraction, and quality assessment, with discrepancies resolved through consensus. Data synthesis involved evidence assessment using a Critical Appraisal Skills Programme checklist and a narrative synthesis. Reporting followed PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. RESULTS: These narrative syntheses suggest that 57% (16/28) of the randomized controlled trials for RPM interventions fail to achieve the required level of evidence for better outcomes in AECOPD. However, the integration of machine learning into RPM demonstrates promise for increasing the predictive accuracy of AECOPD and, therefore, early intervention. CONCLUSIONS: This review suggests a transition toward the integration of machine learning into RPM for predicting AECOPD. We discuss particular RPM indices that have the potential to improve AECOPD prediction and highlight research gaps concerning patient factors and the maintained adoption of RPM. Furthermore, we emphasize the importance of a more comprehensive examination of patient and health care burdens associated with RPM, along with the development of practical solutions.


Subject(s)
Machine Learning, Chronic Obstructive Pulmonary Disease, Chronic Obstructive Pulmonary Disease/physiopathology, Humans, Physiologic Monitoring/methods, Telemedicine, Quality of Life
12.
J Safety Res ; 90: 100-114, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39251269

ABSTRACT

INTRODUCTION: Fatigue can have a life-threatening effect on human health, and its detection has been an active field of research in different sectors. Deploying wearable physiological sensors helps to detect the level of fatigue objectively, without the bias of subjective assessment and without interfering with work. METHODS: This paper provides an in-depth review of fatigue detection approaches using physiological signals to pinpoint their main achievements, identify research gaps, and recommend avenues for future research. The review results are presented under three headings: signal modality, experimental environments, and fatigue detection models. Fatigue detection studies are first divided, based on signal modality, into uni-modal and multi-modal approaches. Then, the experimental environments utilized for fatigue data collection are critically analyzed. Finally, the machine learning models used for the classification of fatigue state are reviewed. PRACTICAL APPLICATIONS: Directions for future research are provided based on a critical analysis of past studies. Finally, the challenges of objective fatigue detection in real-world scenarios are discussed.


Subject(s)
Fatigue, Humans, Fatigue/diagnosis, Wearable Electronic Devices, Machine Learning, Physiologic Monitoring/instrumentation, Physiologic Monitoring/methods
13.
Medicine (Baltimore) ; 103(36): e39464, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39252309

ABSTRACT

To more accurately diagnose and treat patients with different subtypes of thyroid cancer (THCA), we constructed a diagnostic model related to iodine metabolism in THCA subtypes. THCA expression profiles, corresponding clinicopathological information, and single-cell RNA-seq data were downloaded from the TCGA and GEO databases. Genes related to the thyroid differentiation score (TDS) were obtained by GSVA. Through logistic analyses, the diagnostic model was constructed. DCA curves, ROC curves, machine learning, and K-M analysis were used to verify the accuracy of the model. qRT-PCR was used to verify the expression of hub genes in vitro. There were 104 overlapping genes between the different TDS groups and THCA subtypes. Finally, 5 genes (ABAT, CHEK1, GPX3, NME5, and PRKCQ) that could independently predict the TDS subgroup were obtained, and a diagnostic model was constructed. ROC, DCA, and RCS curves showed that the model has accurate prediction ability. K-M and subgroup analyses showed that low model scores were strongly associated with poor PFI in THCA patients. The model score was significantly negatively correlated with T follicular helper cells. In addition, the diagnostic model was significantly negatively correlated with immune scores. Finally, the qRT-PCR results were consistent with the bioinformatics results. This diagnostic model has good diagnostic and prognostic value for THCA patients and can be used as an independent prognostic indicator.


Subject(s)
Iodine, Thyroid Neoplasms, Humans, Thyroid Neoplasms/genetics, Thyroid Neoplasms/diagnosis, Thyroid Neoplasms/pathology, Computational Biology/methods, Female, Male, Machine Learning, Middle Aged, Thyroid Gland/pathology, Thyroid Gland/metabolism, ROC Curve, Cell Differentiation, Tumor Biomarkers/genetics, Tumor Biomarkers/metabolism
14.
Medicine (Baltimore) ; 103(36): e39610, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39252327

ABSTRACT

BACKGROUND: Obesity, a multifactorial and complex health condition, has emerged as a significant global public health concern. Integrating machine learning techniques into obesity research offers great promise as an interdisciplinary field, particularly in the screening, diagnosis, and analysis of obesity. Nevertheless, publications on the use of machine learning methods in obesity research have not been systematically evaluated. Hence, this study aimed to quantitatively examine, visualize, and analyze these publications by means of bibliometrics. METHODS: The Web of Science Core Collection was the primary database for this study, which collected publications on obesity research using machine learning methods over the last 20 years, from January 1, 2004, to December 31, 2023. Only articles and reviews that fit the criteria were selected for bibliometric analysis, and only publications in English were accepted. VOSviewer, CiteSpace, and Excel were the primary software tools utilized. RESULTS: Between 2004 and 2023, the number of publications on obesity research using machine learning methods increased exponentially. In total, 3286 publications that met the eligibility criteria were retrieved. According to the collaborative network analysis, the United States has the greatest volume of publications, indicating a significant influence on this research. Co-author analysis showed that the most authoritative author in this field is Leo Breiman. Scientific Reports is the most widely published journal. The most referenced publication is "R: a language and environment for statistical computing." An analysis of keywords shows that deep learning, support vector machines, predictive models, gut microbiota, energy expenditure, and genome are hot topics in this field. Future research directions may include the relationship between obesity and its consequences, such as diabetic retinopathy, as well as the interaction between obesity and epidemiology, such as COVID-19. CONCLUSION: Utilizing bibliometrics as a research tool and methodology, this study, for the first time, reveals the intrinsic relationships and developmental patterns in obesity research using machine learning methods, providing academic references for clinicians and researchers to understand the hotspots, cutting-edge issues, and developmental trends in this field, detect patients' obesity problems early, and develop personalized treatment plans.


Subject(s)
Bibliometrics, Machine Learning, Obesity, Humans, Obesity/epidemiology, Biomedical Research/methods, Biomedical Research/trends
15.
Scand J Med Sci Sports ; 34(9): e14719, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39252407

ABSTRACT

Step cadence-based and machine-learning (ML) methods have been used to classify physical activity (PA) intensity in health-related research. This study examined the association of intensity-specific PA duration with all-cause (ACM) and CVD mortality using the cadence-based and ML methods in 68 561 UK Biobank participants wearing wrist-worn accelerometers. The two-stage ML method categorized activity type and then intensity. The one-level cadence method (1LC) derived intensity-specific duration using all detected steps (including standing utilitarian steps) and cadence thresholds of ≥100 steps/min (moderate intensity) and ≥130 steps/min (vigorous intensity). The two-level cadence method (2LC) detected ambulatory steps (i.e., walking and running) and then applied the same cadence thresholds. The 2LC exhibited the most pronounced association at the lower end of the duration spectrum. For example, the 2LC showed the smallest minimum moderate-to-vigorous PA (MVPA) duration (the amount associated with 50% of the optimal risk reduction), with a corresponding ACM hazard ratio (HR) similar to the other methods (2LC: 2.8 min/day [95% CI: 2.6, 2.8], HR: 0.83 [95% CI: 0.78, 0.88]; 1LC: 11.1 [10.8, 11.4], 0.80 [0.76, 0.85]; ML: 14.9 [14.6, 15.2], 0.82 [0.76, 0.87]). The ML method elicited the greatest mortality risk reduction. For example, the medians and corresponding HRs for the VPA-ACM association were: 2LC, 2.0 min/day [95% CI: 2.0, 2.0], HR 0.69 [95% CI: 0.61, 0.79]; 1LC, 6.9 [6.9, 7.0], 0.68 [0.60, 0.77]; ML, 3.2 [3.2, 3.2], 0.53 [0.44, 0.64]. After standardizing durations, the ML method exhibited the most pronounced associations. For example, the standardized minimum durations for the MPA-CVD mortality association were: 2LC, -0.77; 1LC, -0.85; ML, -0.94; with corresponding HRs of 0.82 [0.72, 0.92], 0.79 [0.69, 0.90], and 0.77 [0.69, 0.85], respectively. The 2LC exhibited the most pronounced association with all-cause and CVD mortality at the lower end of the duration spectrum. The ML method provided the most pronounced association with all-cause and CVD mortality and thus might be appropriate for estimating the health benefits of moderate- and vigorous-intensity PA in observational studies.
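For readers unfamiliar with the cadence thresholds, the toy snippet below applies the ≥100 and ≥130 steps/min cut-points to minute-level cadences, in the spirit of the one-level cadence method; the example cadences are invented.

# Toy illustration of cadence-based intensity classification (1LC-style):
# each minute is assigned an intensity from its cadence.
def classify_cadence(steps_per_min: float) -> str:
    if steps_per_min >= 130:
        return "vigorous"
    if steps_per_min >= 100:
        return "moderate"
    return "below moderate"

minute_cadences = [45, 102, 118, 135, 90]            # example minute-by-minute cadences
mvpa_minutes = sum(classify_cadence(c) in ("moderate", "vigorous") for c in minute_cadences)
print(f"MVPA duration: {mvpa_minutes} min")          # -> 3 min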


Subject(s)
Accelerometry, Exercise, Machine Learning, Humans, Male, Female, Middle Aged, Aged, Cardiovascular Diseases/mortality, Adult, United Kingdom, Mortality, Walking
16.
Int J Neural Syst ; 34(11): 2450061, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39252679

ABSTRACT

Machine learning algorithms are commonly used to quickly and efficiently count people in a crowd. Test-time adaptation methods for crowd counting adjust model parameters and employ additional data augmentation to better adapt the model to the specific conditions encountered during testing. The majority of current studies concentrate on unsupervised domain adaptation. These approaches commonly perform hundreds of epochs of training iterations, requiring a sizable amount of unannotated data from every new target domain in addition to the annotated data of the source domain. Unlike these methods, we propose a meta-test-time adaptive crowd counting approach called CrowdTTA, which integrates the concept of test-time adaptation into the meta-learning framework and makes it easier for the counting model to adapt to unknown test distributions. To facilitate a reliable supervision signal at the pixel level, we introduce uncertainty by inserting a dropout layer into the counting model. The uncertainty is then used to generate valuable pseudo labels, serving as effective supervisory signals for adapting the model. In the context of meta-learning, one image can be regarded as one task for crowd counting. In each iteration, our approach is a dual-level optimization process. In the inner update, we employ a self-supervised consistency loss function to optimize the model so as to simulate the parameter update process that occurs during the test phase. In the outer update, we authentically update the parameters based on the image with ground truth, improving the model's performance and making the pseudo labels more accurate in the next iteration. At test time, the input image is used to adapt the model before testing on that image. In comparison to various supervised learning and domain adaptation methods, our results from extensive experiments on diverse datasets showcase the general adaptive capability of our approach across datasets with varying crowd densities and scales.


Subject(s)
Machine Learning, Humans, Crowding, Algorithms
17.
Int J Neural Syst ; 34(11): 2450060, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39252680

ABSTRACT

Automatic seizure detection has significant value in epilepsy diagnosis and treatment. Although a variety of deep learning models have been proposed to automatically learn electroencephalography (EEG) features for seizure detection, the generalization performance and computational burden of such deep models remain the bottleneck of practical application. In this study, a novel lightweight model based on the random convolutional kernel transform (ROCKET) is developed for EEG feature learning for seizure detection. Specifically, random convolutional kernels are embedded into the structure of a wavelet scattering network in place of the original wavelet transform convolutions. The significant EEG features are then selected from the scattering coefficients and convolutional outputs by analysis of variance (ANOVA) and minimum redundancy-maximum relevance (MRMR) methods. This model not only preserves the merits of the fast training process from ROCKET, but also provides insight into seizure detection by retaining only the helpful channels. The extreme gradient boosting (XGBoost) classifier was combined with this EEG feature learning model to build a comprehensive seizure detection system that achieved promising epoch-based results, with both sensitivity and specificity above 90% on the scalp and intracranial EEG databases. Experimental comparisons showed that the proposed method outperformed other state-of-the-art methods for both cross-patient and patient-specific seizure detection.
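The ROCKET idea, random convolutional kernels summarized by the maximum response and the proportion of positive values, can be sketched in a few lines; the kernel settings, EEG epochs, and labels below are simplified placeholders and do not reproduce the wavelet scattering structure, channel selection, or ANOVA/MRMR feature selection used in the study.

# Conceptual sketch of ROCKET-style feature extraction followed by a gradient-
# boosted classifier, on synthetic single-channel EEG epochs.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(4)

def rocket_features(signals: np.ndarray, n_kernels: int = 100) -> np.ndarray:
    """signals: (n_samples, n_timepoints) single-channel EEG epochs."""
    feats = []
    for _ in range(n_kernels):
        length = rng.choice([7, 9, 11])
        kernel = rng.normal(size=length)
        conv = np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="valid"), 1, signals)
        feats.append(conv.max(axis=1))            # max pooling per kernel
        feats.append((conv > 0).mean(axis=1))     # proportion of positive values per kernel
    return np.column_stack(feats)

epochs = rng.normal(size=(200, 512))              # placeholder EEG epochs
labels = rng.integers(0, 2, 200)                  # placeholder seizure / non-seizure labels
X = rocket_features(epochs)
clf = XGBClassifier(n_estimators=200, max_depth=3).fit(X, labels)
print("Training accuracy:", clf.score(X, labels))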


Subject(s)
Deep Learning, Electroencephalography, Seizures, Wavelet Analysis, Humans, Seizures/diagnosis, Seizures/physiopathology, Electroencephalography/methods, Neural Networks (Computer), Computer-Assisted Signal Processing, Sensitivity and Specificity, Machine Learning
18.
Vestn Oftalmol ; 140(4): 80-85, 2024.
Article in Russian | MEDLINE | ID: mdl-39254394

ABSTRACT

The second part of the literature review on the application of artificial intelligence (AI) methods for screening, diagnosing, monitoring, and treating glaucoma provides information on how AI methods enhance the effectiveness of glaucoma monitoring and treatment, presents technologies that use machine learning, including neural networks, to predict disease progression and determine the need for anti-glaucoma surgery. The article also discusses the methods of personalized treatment based on projection machine learning methods and outlines the problems and prospects of using AI in solving tasks related to screening, diagnosing, and treating glaucoma.


Subject(s)
Artificial Intelligence, Glaucoma, Machine Learning, Neural Networks (Computer), Humans, Glaucoma/diagnosis, Glaucoma/physiopathology, Glaucoma/therapy, Disease Progression, Computer-Assisted Diagnosis/methods
19.
Radiology ; 312(3): e232554, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39254446

ABSTRACT

Background US is clinically established for breast imaging, but its diagnostic performance depends on operator experience. Computer-assisted (real-time) image analysis may help in overcoming this limitation. Purpose To develop precise real-time-capable US-based breast tumor categorization by combining classic radiomics and autoencoder-based features from automatically localized lesions. Materials and Methods A total of 1619 B-mode US images of breast tumors were retrospectively analyzed between April 2018 and January 2024. nnU-Net was trained for lesion segmentation. Features were extracted from tumor segments, bounding boxes, and whole images using either classic radiomics, autoencoder, or both. Feature selection was performed to generate radiomics signatures, which were used to train machine learning algorithms for tumor categorization. Models were evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity and were statistically compared with histopathologically or follow-up-confirmed diagnosis. Results The model was developed on 1191 (mean age, 61 years ± 14 [SD]) female patients and externally validated on 50 (mean age, 55 years ± 15). The development data set was divided into two parts: testing and training lesion segmentation (419 and 179 examinations) and lesion categorization (503 and 90 examinations). nnU-Net demonstrated precision and reproducibility in lesion segmentation in the test set of data set 1 (median Dice score [DS]: 0.90 [IQR, 0.84-0.93]; P = .01) and data set 2 (median DS: 0.89 [IQR, 0.80-0.92]; P = .001). The best model, trained with 23 mixed features from tumor bounding boxes, achieved an AUC of 0.90 (95% CI: 0.83, 0.97), sensitivity of 81% (46 of 57; 95% CI: 70, 91), and specificity of 87% (39 of 45; 95% CI: 77, 87). No evidence of difference was found between model and human readers (AUC = 0.90 [95% CI: 0.83, 0.97] vs 0.83 [95% CI: 0.76, 0.90]; P = .55 and 0.90 vs 0.82 [95% CI: 0.75, 0.90]; P = .45) in tumor classification or between model and histopathologically or follow-up-confirmed diagnosis (AUC = 0.90 [95% CI: 0.83, 0.97] vs 1.00 [95% CI: 1.00, 1.00]; P = .10). Conclusion Precise real-time US-based breast tumor categorization was developed by mixing classic radiomics and autoencoder-based features from tumor bounding boxes. ClinicalTrials.gov identifier: NCT04976257 Published under a CC BY 4.0 license. Supplemental material is available for this article. See also the editorial by Bahl in this issue.
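A loose sketch of the feature-fusion and classification step follows; all arrays are synthetic placeholders, the logistic-regression classifier and ANOVA-based selection are assumptions chosen for illustration, and the nnU-Net segmentation and radiomics extraction are not reproduced.

# Sketch: concatenate handcrafted radiomics features with autoencoder embeddings,
# select a small feature subset, train a classifier, and report AUC.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
radiomics = rng.random((500, 100))        # placeholder classic radiomics features
embeddings = rng.random((500, 64))        # placeholder autoencoder features
y = rng.integers(0, 2, 500)               # placeholder benign (0) vs malignant (1) labels

X = np.hstack([radiomics, embeddings])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=5)

selector = SelectKBest(f_classif, k=23).fit(X_tr, y_tr)    # 23 mixed features, as in the abstract
clf = LogisticRegression(max_iter=1000).fit(selector.transform(X_tr), y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(selector.transform(X_te))[:, 1])
print("AUC:", auc)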


Subject(s)
Breast Neoplasms, Breast Ultrasonography, Humans, Breast Neoplasms/diagnostic imaging, Female, Middle Aged, Retrospective Studies, Breast Ultrasonography/methods, Differential Diagnosis, Computer-Assisted Image Interpretation/methods, Sensitivity and Specificity, Breast/diagnostic imaging, Adult, Machine Learning, Aged, Radiomics