Results 1 - 20 of 916
1.
JMIR Hum Factors; 11: e55182, 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39269739

ABSTRACT

BACKGROUND: Digitization is vital for data management, especially in health care. However, problems still hinder health care stakeholders in their daily work when collecting, processing, and providing health data or information: data are missing or incorrect, cannot be collected, or information is inadequately presented. These can be seen as data or information problems. A proven way to elicit requirements for (software) systems is to use creative frameworks (eg, user-centered design, design thinking, lean UX [user experience], or service design) or creative methods (eg, mind mapping, storyboarding, 6 thinking hats, or interaction room). However, the extent to which they are used to solve data- or information-related problems in health care is unclear. OBJECTIVE: The primary objective of this scoping review is to investigate the use of creative frameworks in addressing data and information problems in health care. METHODS: Following JBI guidelines and the PRISMA-ScR framework, this review analyzed selected papers to determine whether creative frameworks were used to address health care data or information problems. Focusing on data problems (elicitation or collection, processing) and information problems (provision or visualization), the review examined German- and English-language papers published between 2018 and 2022, retrieved from the SCOPUS database using keywords related to "data," "design," and "user-centered." RESULTS: Of the 898 query results, only 23 papers described both a data or information problem and a creative method used to solve it. These were included in the follow-up analysis and divided into problem categories: data collection (n=7), data processing (n=1), information visualization (n=11), and mixed problems, in which both a data and an information problem were present (n=4). Most identified problems fell into the information visualization category, which could indicate that creative frameworks are particularly suitable for solving information or visualization problems and less so for more abstract areas such as data problems. The results also showed that most researchers applied a creative framework only after they knew what specific (data or information) problem they had (n=21); only a minority chose a creative framework to identify a problem and then realized it was a data or information problem (n=2). In response to these findings, the paper discusses the need for a new approach that addresses health care data and information challenges by promoting collaboration, iterative feedback, and user-centered development. CONCLUSIONS: Although the potential of creative frameworks is undisputed, they are applied to solving data and information problems in only a minority of cases. To harness this potential, a suitable method needs to be developed to support health care system stakeholders. This method could be the User-Centered Data Approach.


Subject(s)
Delivery of Health Care, Humans, Creativity, Data Management/methods, User-Centered Design
2.
Sci Total Environ; 952: 175908, 2024 Nov 20.
Article in English | MEDLINE | ID: mdl-39218084

ABSTRACT

To date, poly- and perfluoroalkyl substances (PFAS) represent a real threat owing to their environmental persistence, wide physicochemical variability, and potential toxicity. A large portion of these chemicals remains structurally unknown, and their comprehensive detection and monitoring therefore require complex non-targeted analysis workflows based on liquid chromatography coupled with high-resolution mass spectrometry (LC-HRMS). This approach, even though comprehensive, does not always provide the analytical resolution needed for complex PFAS mixtures such as fire-fighting aqueous film-forming foams (AFFFs). This study consolidates the advantages of the LC×LC technique hyphenated with high-resolution tandem mass spectrometry (HRMS/MS) for the identification of PFAS in AFFF mixtures. A total of 57 PFAS homolog series (HS) were identified in 3M and Orchidee AFFF mixtures thanks to (i) the high chromatographic peak capacity (n'2D,c ~ 300) and (ii) the increased mass-domain resolution provided by "remainder of Kendrick mass" (RKM) analysis of the HRMS data. We then attempted to annotate the PFAS of each HS by exploiting the available reference standards and the FluoroMatch workflow in combination with the RKM defect for different fluorine repeating units, such as CF2, CF2O, and C2F4O. This approach resulted in 12 identified PFAS HS, including compounds belonging to the HS of perfluoroalkyl carboxylic acids (PFACAs), perfluoroalkyl sulfonic acids (PFASAs), (N-pentafluoro(5)sulfide)-perfluoroalkane sulfonates (SF5-PFASAs), N-sulfopropyldimethylammoniopropyl perfluoroalkane sulfonamides (N-SPAmP-FASA), and N-carboxymethyldimethylammoniopropyl perfluoroalkane sulfonamides (N-CMAmP-FASA). The annotated categories of perfluoroalkyl aldehydes and chlorinated PFASAs represent the first record of these PFAS HS in the investigated AFFF samples.
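
For readers unfamiliar with Kendrick-style mass-defect analysis, the sketch below shows the classic CF2-based Kendrick mass defect (KMD) grouping from which the RKM approach derives; the m/z values and tolerance are illustrative assumptions, and the paper's actual RKM transform differs in detail.

```python
# Minimal sketch: grouping HRMS features into CF2-based homolog series via
# the Kendrick mass defect (KMD). The paper's "remainder of Kendrick mass"
# (RKM) analysis is a related but distinct transform; the masses and the
# tolerance below are illustrative, not values from the study.
from collections import defaultdict

CF2 = 49.99681  # exact mass of the CF2 repeating unit

def kendrick_mass_defect(mz: float, unit_exact: float = CF2, unit_nominal: int = 50) -> float:
    km = mz * unit_nominal / unit_exact   # rescale so CF2 repeats are integer-spaced
    return round(km) - km                 # defect shared by members of one series

def group_homolog_series(mzs, tol=0.002):
    """Bucket m/z values whose KMDs agree within `tol` (a hypothetical tolerance)."""
    series = defaultdict(list)
    for mz in mzs:
        series[round(kendrick_mass_defect(mz) / tol)].append(mz)
    return {k: sorted(v) for k, v in series.items() if len(v) > 1}

# Illustrative peak list: three masses spaced by one CF2 unit, plus an unrelated peak.
peaks = [412.9664, 462.9632, 512.9600, 455.1234]
print(group_homolog_series(peaks))  # the three CF2-spaced peaks share one bucket
```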

3.
Comput Biol Chem; 113: 108212, 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39277959

ABSTRACT

Protein lysine crotonylation is an important post-translational modification that regulates various cellular activities; for example, histone crotonylation affects chromatin structure and promotes histone replacement. Identification and understanding of lysine crotonylation sites are crucial in protein research. However, given the increasing number of non-histone crotonylation sites, existing classifiers based on traditional machine learning may encounter performance limitations. To address this problem, this study presents a novel deep learning-based model for identifying crotonylation sites, exploiting the advantages of deep learning techniques for sequence data analysis. First, three feature extraction strategies, namely Amino Acid Composition, K-mer, and a distance-based residue feature extraction strategy, were used to encode crotonylated and non-crotonylated sequences. Then, to balance the training dataset, the FCM-GRNN undersampling algorithm, which combines fuzzy clustering and generalized regression neural network approaches, was introduced. Finally, to improve the effectiveness of crotonylation site identification, various classification algorithms were explored, and based on experimental performance comparisons, a multilayer perceptron (MLP) combined with a superimposed self-attention mechanism was selected to construct the prediction model, ILYCROsite. Results from independent testing and five-fold cross-validation demonstrated that ILYCROsite performs excellently; notably, on the independent test set it achieved an AUC of 87.93%, significantly better than existing state-of-the-art models. In addition, SHAP (Shapley Additive exPlanations) values were used to analyze feature importance and its impact on model predictions. To help researchers use the model, a prediction program that identifies crotonylation sites in a given protein sequence was developed. The data and code for this program are available at: https://github.com/wmqskr/ILYCROsite.
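
As an illustration of the first of the three encoding strategies named above, the sketch below implements a plain Amino Acid Composition (AAC) encoder; the sequence window is a made-up example, and the authors' actual feature code lives in the linked repository.

```python
# Minimal sketch of Amino Acid Composition (AAC) encoding, one of the three
# feature strategies named in the abstract. The window is a toy example;
# see the authors' GitHub repository for the real ILYCROsite implementation.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac_features(window: str) -> np.ndarray:
    """Return the 20-dim frequency vector of standard residues in a window."""
    counts = np.array([window.count(aa) for aa in AMINO_ACIDS], dtype=float)
    return counts / max(len(window), 1)

# A 31-residue window centered on a candidate lysine (index 15 = K), toy data.
window = "MSTAGKVIKCAAVLWKEEKKPFSIEEVEVAP"
x = aac_features(window)
print(x.shape, x.sum())  # (20,) and 1.0, since all residues are standard
```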

4.
Neuroinformatics; 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39278985

ABSTRACT

Mouse models are crucial for neuroscience research, yet discrepancies arise between macro- and meso-scales because sample preparation alters brain morphology. The absence of an accessible toolbox for magnetic resonance imaging (MRI) data processing makes it challenging to assess morphological changes in the mouse brain. To address this, we developed the MBV-Pipe (Mouse Brain Volumetric Statistics-Pipeline) toolbox, which integrates Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra (DARTEL)-based voxel-based morphometry (VBM) and Tract-Based Spatial Statistics (TBSS) to evaluate brain tissue volume and white matter integrity. To validate the reliability of MBV-Pipe, brain MRI data from seven mice at three time points (in vivo, post-perfusion, and post-fixation) were acquired using a 9.4T ultra-high-field MRI system. Employing the MBV-Pipe toolbox, we discerned substantial volumetric changes in the mouse brain following perfusion relative to the in vivo condition, whereas the fixation process induced only negligible variations. Importantly, white matter integrity was found to be largely stable throughout the sample preparation procedures. The MBV-Pipe source code is publicly available and includes a user-friendly GUI to facilitate quality control and experimental protocol optimization, which holds promise for advancing mouse brain research.

5.
Anal Chim Acta; 1326: 343123, 2024 Oct 16.
Article in English | MEDLINE | ID: mdl-39260913

ABSTRACT

BACKGROUND: N,N'-disubstituted p-phenylenediamine-quinones (PPDQs) are oxidation derivatives of p-phenylenediamines (PPDs) that have recently raised extensive concern owing to their toxicities and prevalence in the environment, particularly in aquatic environments. PPDQs are derived from tire rubber, in which other PPD oxidation products besides the reported PPDQs may also exist, e.g., unknown PPDQs and PPD-phenols (PPDPs). RESULTS: This study implemented nontarget analysis and profiling of PPDQ/Ps in aged tire rubber using liquid chromatography-high-resolution mass spectrometry and a species-specific algorithm. The algorithm takes into account the ionization behaviors of PPDQ/Ps in both positive and negative electrospray ionization and their specific carbon isotopologue distributions. A total of 47 formulas of PPDQ/Ps were found and elucidated with tentative or accurate structures, including 25 PPDQs, 18 PPDPs, and 4 PPD-hydroxy-quinones (PPDHQs). The semiquantified total concentrations of PPDQ/Ps were 14.08-30.62 µg/g, in the order PPDPs (6.48-17.39 µg/g) > PPDQs (5.86-12.14 µg/g) > PPDHQs (0.16-1.35 µg/g). SIGNIFICANCE: The high concentrations and potential toxicities indicate that these PPDQ/Ps could seriously threaten the environment, which they may eventually enter, and accordingly require further investigation. The analysis strategy and data-processing algorithm can be extended to nontarget analysis of other zwitterionic pollutants, and the results provide new insight into the environmental occurrence of PPDQ/Ps from both source and overall perspectives.
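
The species-specific algorithm leans on carbon isotopologue distributions; a minimal sketch of such a plausibility check follows, using the standard ~1.07% natural 13C abundance per carbon atom. The tolerance and intensities are assumptions, not the paper's criteria.

```python
# Minimal sketch of a carbon-isotopologue plausibility check of the kind a
# species-specific annotation algorithm relies on: for a candidate formula
# with n carbons, the M+1/M intensity ratio should be near n * 1.07%
# (natural 13C abundance). The 30% relative tolerance is an assumption.

C13_ABUNDANCE = 0.0107  # natural fraction of 13C per carbon atom

def isotopologue_consistent(n_carbons: int, i_m: float, i_m1: float,
                            rel_tol: float = 0.30) -> bool:
    """True if the observed M+1/M ratio matches the carbon-count prediction."""
    expected = n_carbons * C13_ABUNDANCE
    observed = i_m1 / i_m
    return abs(observed - expected) <= rel_tol * expected

# Candidate PPDQ-like ion with 22 carbons: expected M+1/M ~ 0.235.
print(isotopologue_consistent(n_carbons=22, i_m=1.0e6, i_m1=2.4e5))  # True
```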

6.
Xenobiotica; 1-10, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39225512

ABSTRACT

1. Challenges, strategies, and new technologies in the field of biotransformation were presented and discussed at the 5th European Biotransformation Workshop, which was held on March 14, 2024 on the Novartis Campus in Basel, Switzerland.
2. In this meeting report we summarise the presentations and discussions from this workshop.
3. The topics covered are listed below:
- Advances in understanding drug-induced liver injury (DILI) risks of carboxylic acids and targeted covalent inhibitors
- Biotransformation of oligonucleotide-based therapeutics, including automated software tools for metabolite identification
- Recent advances in metabolite synthesis
- Qualification and validation of a new compact Low Energy Accelerator Mass Spectrometry (LEA) system for metabolite profiling

7.
Anal Chim Acta; 1325: 342917, 2024 Oct 09.
Article in English | MEDLINE | ID: mdl-39244310

ABSTRACT

The evolution of analytical techniques has opened the possibility of accurate analyte detection with straightforward methods and short acquisition times, making these techniques applicable to identifying medical conditions. Surface-enhanced Raman spectroscopy (SERS) has long been proven effective for rapid detection and relies on SERS spectra that are unique to each specific analyte. However, the complexity of viruses poses challenges to SERS and hinders further progress in its practical applications. The principle of SERS revolves around the interaction among the substrate, the analyte, and the Raman laser, but most studies, especially those using label-free methods, emphasize only the substrate, and the synergy among these factors is often ignored. Therefore, issues related to reproducibility and consistency of results, which are crucial for medical diagnosis and are the main highlights of this review, can be understood and largely addressed when these interactions are considered. Viruses are composed of multiple surface components and can be detected by label-free SERS, but the presence of non-target molecules in clinical samples interferes with the detection process. An appropriate spectral data processing workflow also plays an important role in the interpretation of results. Furthermore, integrating machine learning into data processing can account for changes brought about by the presence of non-target molecules when analyzing spectral features, allowing the data to be grouped accurately, for example, by whether a sample corresponds to a positive or negative patient, or whether a virus variant or multiple viruses are present. Such advances in interdisciplinary fields can bring SERS closer to practical applications.


Subject(s)
Raman Spectroscopy, Viruses, Raman Spectroscopy/methods, Viruses/isolation & purification, Viruses/chemistry, Humans, Surface Properties
8.
BioData Min; 17(1): 29, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39232851

ABSTRACT

OBJECTIVE: Data imbalance is a pervasive issue in medical data mining, often leading to biased and unreliable predictive models. This study aims to address the urgent need for effective strategies to mitigate the impact of data imbalance on classification models. We focus on quantifying the effects of different imbalance degrees and sample sizes on model performance, identifying optimal cut-off values, and evaluating the efficacy of various methods to enhance model accuracy in highly imbalanced and small sample size scenarios. METHODS: We collected medical records of patients receiving assisted reproductive treatment in a reproductive medicine center. Random forest was used to screen the key variables for the prediction target. Various datasets with different imbalance degrees and sample sizes were constructed to compare the classification performance of logistic regression models. Metrics such as AUC, G-mean, F1-Score, Accuracy, Recall, and Precision were used for evaluation. Four imbalance treatment methods (SMOTE, ADASYN, OSS, and CNN) were applied to datasets with low positive rates and small sample sizes to assess their effectiveness. RESULTS: The logistic model's performance was low when the positive rate was below 10% but stabilized beyond this threshold. Similarly, sample sizes below 1200 yielded poor results, with improvement seen above this threshold. For robustness, the optimal cut-offs for positive rate and sample size were identified as 15% and 1500, respectively. SMOTE and ADASYN oversampling significantly improved classification performance in datasets with low positive rates and small sample sizes. CONCLUSIONS: The study identifies a positive rate of 15% and a sample size of 1500 as optimal cut-offs for stable logistic model performance. For datasets with low positive rates and small sample sizes, SMOTE and ADASYN are recommended to improve balance and model accuracy.
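
As a concrete illustration of the comparison described above, here is a minimal sketch contrasting a logistic model trained on raw imbalanced data with one trained after SMOTE oversampling; synthetic data stands in for the (non-public) clinical records.

```python
# Minimal sketch of the oversampling comparison described in the abstract,
# using imbalanced-learn's SMOTE with a logistic model. Synthetic data
# stands in for the assisted-reproduction records, which are not public.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# ~5% positive rate and a small sample: the regime the study flags as unstable.
X, y = make_classification(n_samples=1000, n_features=10, weights=[0.95],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, (Xt, yt) in {
    "raw":   (X_tr, y_tr),
    "smote": SMOTE(random_state=0).fit_resample(X_tr, y_tr),
}.items():
    clf = LogisticRegression(max_iter=1000).fit(Xt, yt)
    proba = clf.predict_proba(X_te)[:, 1]
    print(name, "AUC=%.3f" % roc_auc_score(y_te, proba),
          "F1=%.3f" % f1_score(y_te, clf.predict(X_te)))
```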

9.
J Appl Crystallogr; 57(Pt 4): 1217-1228, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39108808

ABSTRACT

Presented and discussed here is the implementation of a software solution that provides prompt X-ray diffraction data analysis during fast dynamic compression experiments conducted with the dynamic diamond anvil cell technique. It includes efficient data collection, streaming of data and metadata to a high-performance computing (HPC) cluster, fast azimuthal data integration on the cluster, and tools for controlling the data processing steps and visualizing the data using the DIOPTAS software package. This data processing pipeline is valuable for a wide range of studies. Its potential is illustrated with two examples of data collected on ammonia-water mixtures and multiphase mineral assemblies under high pressure. The pipeline is designed to be generic in nature and could be readily adapted to provide rapid feedback for many other X-ray diffraction techniques, e.g., large-volume press studies, in situ stress/strain studies, phase-transformation studies, and chemical reactions studied with high-resolution diffraction.
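
A minimal sketch of the azimuthal-integration step at the core of such a pipeline follows, using pyFAI (the integration engine underlying DIOPTAS); the calibration and image file names are placeholders, and the streaming/HPC layers are omitted.

```python
# Minimal sketch of the azimuthal-integration step, using pyFAI (the
# integration engine behind DIOPTAS). The calibration and image file names
# are placeholders; streaming, metadata handling, and visualization are
# not reproduced here.
import fabio
import pyFAI

ai = pyFAI.load("detector_calibration.poni")    # geometry from a prior calibration
image = fabio.open("dDAC_frame_0001.tif").data  # one frame from the fast ramp

# 1D pattern: intensity vs. scattering vector q, 2000 radial bins.
q, intensity = ai.integrate1d(image, 2000, unit="q_A^-1",
                              polarization_factor=0.99)
print(q[:3], intensity[:3])
```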

10.
Article in English | MEDLINE | ID: mdl-39190874

ABSTRACT

OBJECTIVES: Integration of social determinants of health into health outcomes research will allow researchers to study health inequities. The All of Us Research Program has the potential to be a rich source of social determinants of health data. However, user-friendly recommendations for scoring and interpreting the All of Us Social Determinants of Health Survey are needed to return value to communities through advancing researcher competencies in use of the All of Us Research Hub Researcher Workbench. We created a user guide aimed at providing researchers with an overview of the Social Determinants of Health Survey, recommendations for scoring and interpreting participant responses, and readily executable R and Python functions. TARGET AUDIENCE: This user guide targets registered users of the All of Us Research Hub Researcher Workbench, a cloud-based platform that supports analysis of All of Us data, who are currently conducting or planning to conduct analyses using the Social Determinants of Health Survey. SCOPE: We introduce 14 constructs evaluated as part of the Social Determinants of Health Survey and summarize construct operationalization. We offer 30 literature-informed recommendations for scoring participant responses and interpreting scores, with multiple options available for 8 of the constructs. Then, we walk through example R and Python functions for relabeling responses and scoring constructs that can be directly implemented in Jupyter Notebook or RStudio within the Researcher Workbench. Full source code is available in supplemental files and GitHub. Finally, we discuss psychometric considerations related to the Social Determinants of Health Survey for researchers.
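
To give a flavor of the scoring functions the guide provides, below is a minimal Python sketch of mean-scoring one construct with reverse coding; the item names, Likert range, and reverse-scored item are hypothetical, not the guide's actual recommendations.

```python
# Minimal sketch of a construct-scoring helper in the spirit of the user
# guide. The item names, 1-5 Likert coding, and reverse-scored item are
# hypothetical; consult the supplemental files/GitHub for the real functions.
import pandas as pd

ITEMS = ["sdoh_item_1", "sdoh_item_2", "sdoh_item_3"]
REVERSED = {"sdoh_item_2"}  # hypothetical reverse-scored item
SCALE_MAX = 5               # hypothetical 1-5 Likert coding

def score_construct(df: pd.DataFrame) -> pd.Series:
    """Mean-score a construct, reverse-coding flagged items; NaN-tolerant."""
    recoded = df[ITEMS].copy()
    for item in REVERSED:
        recoded[item] = SCALE_MAX + 1 - recoded[item]
    return recoded.mean(axis=1, skipna=True)

responses = pd.DataFrame({"sdoh_item_1": [4, 2], "sdoh_item_2": [1, 5],
                          "sdoh_item_3": [5, None]})
print(score_construct(responses))  # per-participant construct scores
```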

11.
Anal Bioanal Chem; 416(22): 4833-4848, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39090266

ABSTRACT

The increasing recognition of the health impacts of human exposure to per- and polyfluorinated alkyl substances (PFAS) has heightened the need for sophisticated analytical techniques and advanced data analyses, especially for assessing exposure through food of animal origin. Despite nearly 15,000 PFAS being listed in the CompTox chemicals dashboard by the US Environmental Protection Agency, conventional monitoring and suspect screening methods often fall short, covering only a fraction of these substances. This study introduces an innovative automated data processing workflow, named PFlow, for identifying PFAS in environmental samples using direct-infusion Fourier transform ion cyclotron resonance mass spectrometry (DI-FT-ICR MS). PFlow was validated on a bream liver sample, representative of low-concentration biota; the workflow involves data pre-processing, annotation of PFAS based on their precursor masses, and verification through isotopologues. Notably, PFlow annotated 17 PFAS absent from the comprehensive targeted approach and tentatively identified an additional 53 compounds, demonstrating its efficiency in enhancing PFAS detection coverage. From an initial dataset of 30,332 distinct m/z values, PFlow systematically narrowed the candidates down to 84 potential PFAS compounds using precise mass measurements and chemical logic criteria, underscoring its potential to advance our understanding of PFAS prevalence and human exposure.
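
The annotation-by-precursor-mass step can be pictured as matching observed m/z values against a suspect list within a ppm window; the sketch below is a simplified stand-in for PFlow's logic, with an illustrative two-entry suspect list and tolerance.

```python
# Minimal sketch of precursor-mass annotation: match observed [M-H]- m/z
# values against a PFAS suspect list within a ppm window. The suspect list
# and the 2 ppm tolerance are illustrative choices, not PFlow's actual ones.
PROTON = 1.007276

SUSPECTS = {            # neutral monoisotopic masses (illustrative subset)
    "PFOA (C8HF15O2)": 413.97359,
    "PFOS (C8HF17O3S)": 499.93756,
}

def annotate(mz_observed: float, ppm_tol: float = 2.0):
    """Return suspect names whose deprotonated mass matches within tolerance."""
    hits = []
    for name, neutral in SUSPECTS.items():
        mz_theory = neutral - PROTON
        if abs(mz_observed - mz_theory) / mz_theory * 1e6 <= ppm_tol:
            hits.append(name)
    return hits

print(annotate(412.96624))  # -> ['PFOA (C8HF15O2)']
```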


Subject(s)
Fluorocarbons, Mass Spectrometry, Animals, Mass Spectrometry/methods, Fluorocarbons/analysis, Workflow, Biota, Automation, Environmental Monitoring/methods, Humans, Liver/chemistry
12.
Sci Rep; 14(1): 19554, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39174587

ABSTRACT

Long-term line losses in the development of distribution networks stem from outdated network management practices, and traditional methods for analyzing and calculating distribution network losses cannot keep pace with the current development environment. To improve the accuracy of filling missing values in power load data, a particle swarm optimization algorithm is proposed to optimize the cluster centers of the clustering algorithm. Furthermore, the isolation forest anomaly recognition algorithm is used to detect outliers in the load data, and the coefficient of variation of the load data is used to improve the algorithm's recognition accuracy. Finally, this paper introduces a breadth-first method for calculating line loss in a big data context. An example is provided using the distribution network system of Yuxi City in Yunnan Province, and a simulation experiment is carried out. The findings revealed that the error of the enhanced fuzzy C-means clustering algorithm averaged -6.35 with a standard deviation of 4.015 when data were partially missing. Based on the coefficient of variation, the area under the receiver operating characteristic curve of the improved isolation forest algorithm in the fuzzy abnormal-sample case was 0.8586, the smallest decrease observed, and refined analysis revealed a feeder line loss rate of 7.62%. These results confirm that the proposed technique can analyze distribution network line loss quickly and accurately and can serve as a guide for managing distribution network line losses.
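
A minimal sketch of the outlier-detection idea follows, pairing scikit-learn's IsolationForest with a per-day coefficient of variation; the abstract does not specify how the two are combined, so the AND-rule below is an assumption.

```python
# Minimal sketch: outlier detection in load data with an isolation forest,
# plus the coefficient of variation (CV) as an auxiliary signal. How the
# paper combines CV with the forest is not specified in the abstract; the
# simple AND-rule below is an assumed illustration on synthetic data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
load = rng.normal(100.0, 8.0, size=(500, 24))   # 500 days x 24 hourly loads
load[::50] *= 1.8                               # inject anomalous days

forest = IsolationForest(contamination=0.05, random_state=0)
flags = forest.fit_predict(load) == -1          # True where the forest flags a day

cv = load.std(axis=1) / load.mean(axis=1)       # per-day coefficient of variation
suspicious = flags & (cv > np.median(cv))       # assumed combination rule

print(f"forest flagged {flags.sum()} days; combined rule keeps {suspicious.sum()}")
```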

13.
Proteomics; e2300491, 2024 Aug 10.
Article in English | MEDLINE | ID: mdl-39126236

ABSTRACT

State-of-the-art mass spectrometers combined with modern bioinformatics algorithms for peptide-to-spectrum matching (PSM) with robust statistical scoring allow more variable features (i.e., post-translational modifications) to be reliably identified from (tandem) mass spectrometry data, often without the need for biochemical enrichment. Semi-specific proteome searches, which enforce a theoretical enzymatic digestion at only the N- or C-terminal end, allow identification of native protein termini or termini arising from endogenous proteolytic activity (also referred to as "neo-N-termini" analysis or "N-terminomics"). Nevertheless, deriving biological meaning from these search outputs can be challenging in terms of data mining and analysis. We therefore introduce TermineR, a data analysis approach for (1) the annotation of peptides according to their enzymatic cleavage specificity and known protein processing features, (2) differential abundance and enrichment analysis of N-terminal sequence patterns, and (3) visualization of neo-N-termini location. We illustrate the use of TermineR by applying it to tandem mass tag (TMT)-based proteomics data from a mouse model of polycystic kidney disease and assess the semi-specific searches for biological interpretation of cleavage events and the variable contribution of proteolytic products to overall protein abundance. The TermineR approach and example data are available as an R package at https://github.com/MiguelCos/TermineR.
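
TermineR itself is an R package; the short Python sketch below only illustrates the cleavage-specificity annotation in step (1) with a simplified rule (tryptic cleavage after K/R), on a toy sequence.

```python
# Illustration of step (1), annotating a peptide by N-terminal cleavage
# specificity: a semi-tryptic peptide whose preceding residue is not K/R
# (and which is not the protein N-terminus) points to a proteolytic
# neo-N-terminus. The rule is simplified and the sequence is a toy example;
# see the authors' GitHub repository for the real annotation logic.
def annotate_n_terminus(protein: str, peptide: str) -> str:
    start = protein.find(peptide)
    if start == -1:
        return "not found"
    if start == 0:
        return "native protein N-terminus"
    preceding = protein[start - 1]
    return ("tryptic (specific) N-terminus" if preceding in "KR"
            else f"neo-N-terminus (cleaved after '{preceding}')")

protein = "MKWVTFISLLLLFSSAYSRGVFRRDTHKSEIAHRFKDLGE"  # toy sequence
print(annotate_n_terminus(protein, "VFRRDTHK"))  # preceded by G -> neo-N-terminus
```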

14.
Parkinsonism Relat Disord; 127: 107104, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39153421

ABSTRACT

BACKGROUND: Evaluation of disease severity in Parkinson's disease (PD) relies on quantification of motor symptoms. However, during early-stage PD these symptoms are subtle and difficult for experts to quantify, which can result in delayed diagnosis and suboptimal disease management. OBJECTIVE: To evaluate the use of videos and machine learning (ML) for automatic quantification of motor symptoms in early-stage PD. METHODS: We analyzed videos of three movement tasks (Finger Tapping, Hand Movement, and Leg Agility) from 26 age-matched healthy controls and 31 early-stage PD patients. Using ML-based pose estimation, we extracted kinematic features from these videos and trained three classification models based on left-side movements, right-side movements, and right/left symmetry. The models were trained to differentiate healthy controls from early-stage PD from the videos. RESULTS: Combining left-side, right-side, and symmetry features yielded a PD detection accuracy of 79% from Finger Tapping videos, 75% from Hand Movement videos, 79% from Leg Agility videos, and 86% when combining the three tasks using a soft-voting approach. In contrast, classification accuracy varied between 40% and 72% when movement side or symmetry was not considered. CONCLUSIONS: Our methodology effectively differentiated between early-stage PD and healthy controls using videos of standardized motor tasks by integrating kinematic analyses of left-side, right-side, and bilateral symmetry movements. These results demonstrate that ML can detect movement deficits in early-stage PD from videos. This technology is easy to use, highly scalable, and has the potential to improve the management and quantification of motor symptoms in early-stage PD.
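
The soft-voting fusion of the three task-specific models can be sketched as averaging per-task PD probabilities; the kinematic features below are synthetic and the logistic base classifier is an assumption, since the abstract does not name the model family.

```python
# Minimal sketch of the soft-voting step that fuses the three task-specific
# classifiers (Finger Tapping, Hand Movement, Leg Agility). The kinematic
# features are synthetic and the logistic base model is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 57  # 26 controls + 31 early-stage PD, as in the study
y = np.array([0] * 26 + [1] * 31)

tasks = {}
for task in ["finger_tapping", "hand_movement", "leg_agility"]:
    X = rng.normal(size=(n, 12)) + y[:, None] * 0.8   # synthetic kinematics
    tasks[task] = (LogisticRegression(max_iter=1000).fit(X, y), X)

# Soft voting: average the per-task PD probabilities, then threshold.
probs = np.mean([clf.predict_proba(X)[:, 1] for clf, X in tasks.values()], axis=0)
pred = (probs >= 0.5).astype(int)
print("training accuracy of the fused vote: %.2f" % (pred == y).mean())
```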

15.
J Integr Bioinform; 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39092509

ABSTRACT

This paper provides an overview of the development and operation of the Leonhard Med Trusted Research Environment (TRE) at ETH Zurich. Leonhard Med gives scientific researchers the ability to securely work on sensitive research data. We give an overview of the user perspective, the legal framework for processing sensitive data, design history, current status, and operations. Leonhard Med is an efficient, highly secure Trusted Research Environment for data processing, hosted at ETH Zurich and operated by the Scientific IT Services (SIS) of ETH. It provides a full stack of security controls that allow researchers to store, access, manage, and process sensitive data according to Swiss legislation and ETH Zurich Data Protection policies. In addition, Leonhard Med fulfills the BioMedIT Information Security Policies and is compatible with international data protection laws, and it can therefore be utilized within the scope of national and international collaborative research projects. Initially designed as a "bare-metal" High-Performance Computing (HPC) platform to achieve maximum performance, Leonhard Med was later re-designed as a virtualized, private cloud platform to offer more flexibility to its customers. Sensitive data can be analyzed in secure, segregated spaces called tenants. Technical and Organizational Measures (TOMs) are in place to assure the confidentiality, integrity, and availability of sensitive data. At the same time, Leonhard Med ensures broad access to cutting-edge research software, especially for the analysis of human omics data and other personalized health applications.

16.
Endosc Int Open; 12(8): E968-E980, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39184060

ABSTRACT

Rapid climate change, or the climate crisis, is one of the most serious emergencies of the 21st century, accounting for highly impactful and irreversible changes worldwide. The climate crisis can also affect the epidemiology and burden of gastrointestinal diseases because these are connected with environmental factors and nutrition. Gastrointestinal endoscopy is a highly resource-intensive procedure with a significant contribution to greenhouse gas (GHG) emissions. Moreover, endoscopy is the third-highest generator of waste in healthcare facilities, with a significant contribution to the carbon footprint. The main sources of direct carbon emissions in endoscopy are the use of high-power-consumption devices (e.g., computers, anesthesia machines, wash machines for reprocessing, scope processors, and lighting) and waste production, derived mainly from the use of disposable devices. Indirect sources of emissions are those derived from heating and cooling of facilities, processing of histological samples, and transportation of patients and materials. Consequently, sustainable endoscopy and climate change have been the focus of discussions between endoscopy providers and professional societies with the aim of taking action to reduce environmental impact. The term "green endoscopy" refers to the practice of gastroenterology that aims to raise awareness of, assess, and reduce endoscopy's environmental impact. Nevertheless, while awareness has been growing, guidance on practical interventions to reduce the carbon footprint of gastrointestinal endoscopy is lacking. This review aims to summarize current data regarding the impact of endoscopy on GHG emissions and possible strategies to mitigate this phenomenon. Further, we aim to promote the evolution of a more sustainable "green endoscopy".

17.
Cureus; 16(7): e64263, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39130982

ABSTRACT

Fog computing is a decentralized computing infrastructure that processes data at or near its source, reducing latency and bandwidth usage. This technology is gaining traction in healthcare due to its potential to enhance real-time data processing and decision-making capabilities in critical medical scenarios. A systematic review of existing literature on fog computing in healthcare was conducted. The review included searches in major databases such as PubMed, IEEE Xplore, Scopus, and Google Scholar. The search terms used were "fog computing in healthcare," "real-time diagnostics and fog computing," "continuous patient monitoring fog computing," "predictive analytics fog computing," "interoperability in fog computing healthcare," "scalability issues fog computing healthcare," and "security challenges fog computing healthcare." Articles published between 2010 and 2023 were considered. Inclusion criteria encompassed peer-reviewed articles, conference papers, and review articles focusing on the applications of fog computing in healthcare. Exclusion criteria were articles not available in English, those not related to healthcare applications, and those lacking empirical data. Data extraction focused on the applications of fog computing in real-time diagnostics, continuous monitoring, predictive analytics, and the identified challenges of interoperability, scalability, and security. Fog computing significantly enhances diagnostic capabilities by processing data closer to its source and enabling the real-time analysis that urgent diagnostics, such as stroke detection, require. It also improves monitoring during surgeries by enabling real-time processing of vital signs and physiological parameters, thereby enhancing patient safety. In chronic disease management, continuous data collection and analysis through wearable devices allow for proactive disease management and timely adjustments to treatment plans. Additionally, fog computing supports telemedicine by enabling real-time communication between remote specialists and patients, thereby improving access to specialist care in underserved regions. Fog computing offers transformative potential in healthcare, improving diagnostic precision, patient monitoring, and personalized treatment. Addressing the challenges of interoperability, scalability, and security will be crucial for fully realizing the benefits of fog computing in healthcare, leading to a more connected and efficient healthcare environment.

18.
J Med Internet Res; 26: e58502, 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39178032

ABSTRACT

As digital phenotyping, the capture of active and passive data from consumer devices such as smartphones, becomes more common, the need to properly process the data and derive replicable features from it has become paramount. Cortex is an open-source data processing pipeline for digital phenotyping data, optimized for use with the mindLAMP apps, which is used by nearly 100 research teams across the world. Cortex is designed to help teams (1) assess digital phenotyping data quality in real time, (2) derive replicable clinical features from the data, and (3) enable easy-to-share data visualizations. Cortex offers many options to work with digital phenotyping data, although some common approaches are likely of value to all teams using it. This paper highlights the reasoning, code, and example steps necessary to fully work with digital phenotyping data in a streamlined manner. Covering how to work with the data, assess its quality, derive features, and visualize findings, this paper is designed to offer the reader the knowledge and skills to apply toward analyzing any digital phenotyping data set. More specifically, the paper will teach the reader the ins and outs of the Cortex Python package. This includes background information on its interaction with the mindLAMP platform, some basic commands to learn what data can be pulled and how, and more advanced use of the package mixed with basic Python with the goal of creating a correlation matrix. After the tutorial, different use cases of Cortex are discussed, along with limitations. Toward highlighting clinical applications, this paper also provides 3 easy ways to implement examples of Cortex use in real-world settings. By understanding how to work with digital phenotyping data and providing ready-to-deploy code with Cortex, the paper aims to show how the new field of digital phenotyping can be both accessible to all and rigorous in methodology.
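
The tutorial's capstone, a correlation matrix over derived features, reduces to a few lines once features are in a DataFrame; the sketch below deliberately does not reproduce the Cortex API and uses hypothetical feature columns.

```python
# The tutorial works toward a correlation matrix over derived features.
# This sketch does not reproduce the Cortex API; it assumes per-day features
# have already been exported (e.g., with Cortex) into a pandas DataFrame
# with one row per participant-day. Column names here are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
features = pd.DataFrame({
    "screen_duration_min": rng.normal(180, 40, 90),
    "home_time_fraction":  rng.uniform(0.3, 1.0, 90),
    "step_count":          rng.normal(6000, 1500, 90),
    "survey_mood_score":   rng.integers(0, 10, 90).astype(float),
})

corr = features.corr(method="spearman")  # rank-based, robust to outliers
print(corr.round(2))
```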


Subject(s)
Phenotype, Software, Humans, Biomarkers, Data Visualization
19.
Sensors (Basel); 24(16), 2024 Aug 10.
Article in English | MEDLINE | ID: mdl-39204860

ABSTRACT

The primary objective of the research presented in this article is to introduce an artificial neural network that demands less computational power than a conventional deep neural network. The development of this ANN was achieved through the application of Ordered Fuzzy Numbers (OFNs). In the context of Industry 4.0, there are numerous applications where this solution could be utilized for data processing. It allows the deployment of Artificial Intelligence at the network edge on small devices, eliminating the need to transfer large amounts of data to a cloud server for analysis. Such networks will be easier to implement in small-scale solutions, like those for the Internet of Things, in the future. This paper presents test results where a real system was monitored, and anomalies were detected and predicted.

20.
Sensors (Basel); 24(16), 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39205094

ABSTRACT

Traditional broadcasting methods often result in fatigue and decision-making errors when dealing with complex and diverse live content. Current research on intelligent broadcasting relies primarily on preset rules and model-based decisions, which have limited capability to understand emotional dynamics. To address these issues, this study proposed and developed an emotion-driven intelligent broadcasting system, EmotionCast, to enhance the efficiency of camera switching during live broadcasts through decisions based on multimodal emotion recognition. The system first employs sensing technologies to collect real-time video and audio data from multiple cameras, using deep learning algorithms to analyze facial expressions and vocal tone cues for emotion detection. The visual, audio, and textual analyses are then integrated to generate an emotional score for each camera. Finally, the score for each camera shot at the current time point is calculated by combining the current emotion score with the optimal scores from the preceding time window. This approach ensures optimal camera switching, enabling swift responses to emotional changes. EmotionCast can be applied in sensing environments such as sports events, concerts, and large-scale performances. The experimental results demonstrate that EmotionCast excels in switching accuracy, emotional resonance, and audience satisfaction, significantly enhancing emotional engagement compared with traditional broadcasting methods.
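
The shot-scoring rule, a current emotion score blended with the best score from the preceding window, can be sketched as follows; the blend weight and window length are assumptions, as the abstract does not publish the formula.

```python
# Minimal sketch of the shot-scoring rule described above: each camera's
# score at time t combines its current emotion score with the best score it
# achieved in the preceding window. The blend weight and window length are
# assumptions; the paper does not publish its exact formula.
from collections import deque

def make_scorer(window: int = 5, alpha: float = 0.7):
    history = deque(maxlen=window)   # emotion scores from the preceding window

    def score(current: float) -> float:
        best_recent = max(history) if history else current
        combined = alpha * current + (1 - alpha) * best_recent
        history.append(current)
        return combined

    return score

cam1 = make_scorer()
for t, emotion in enumerate([0.2, 0.9, 0.4, 0.3]):
    print(f"t={t}: combined score {cam1(emotion):.2f}")
```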


Subject(s)
Algorithms, Emotions, Facial Expression, Emotions/physiology, Humans, Deep Learning, Video Recording/methods