Results 1 - 17 of 17
1.
J Proteome Res ; 23(9): 3780-3790, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39193824

ABSTRACT

Data-independent acquisition (DIA) has improved the identification and quantitation coverage of peptides and proteins in liquid chromatography-tandem mass spectrometry-based proteomics. However, different DIA data-processing tools can produce very different identification and quantitation results for the same data set. Current benchmarking studies of DIA tools focus predominantly on comparing identification results; the quantitative accuracy of DIA measurements is acknowledged to be important but remains insufficiently investigated, partly because suitable metrics for comparing quantitative accuracy have been lacking. A new metric is proposed for evaluating quantitative accuracy that avoids the influence of differences in false-discovery-rate control stringency. High-reliability quantitation results are first extracted from each DIA tool, and quantitative accuracy is then evaluated by comparing quantification error rates at the same number of accurate ratios. Results on four benchmark data sets show that the proposed metric is more sensitive in discriminating the quantitative performance of DIA tools, and it consistently reveals the DIA tools with advantages in quantitative accuracy. The proposed metric can also help researchers optimize the algorithms of a DIA tool and sample-preprocessing methods to enhance quantitative accuracy.
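The comparison logic behind such a metric can be sketched in a few lines: in a benchmark data set the true mixing ratio of each spiked-in species is known, so each measured ratio can be classified as accurate or erroneous against a tolerance. The function below is only an illustration of the idea; the log2 tolerance and the function name are assumptions, not the paper's exact definitions.

```python
import math

def quantification_error_rate(measured_ratios, expected_ratio, tol=0.5):
    """Fraction of measured fold changes whose log2 value deviates from
    the known benchmark ratio by more than `tol` (illustrative cutoff)."""
    target = math.log2(expected_ratio)
    wrong = sum(1 for r in measured_ratios
                if abs(math.log2(r) - target) > tol)
    return wrong / len(measured_ratios)
```

Two tools would then be compared at the same number of accurate ratios, rather than at the same nominal FDR, so that differences in FDR-control stringency do not skew the comparison.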


Subject(s)
Proteomics , Tandem Mass Spectrometry , Proteomics/methods , Proteomics/standards , Proteomics/statistics & numerical data , Tandem Mass Spectrometry/methods , Tandem Mass Spectrometry/standards , Liquid Chromatography/methods , Liquid Chromatography/standards , Algorithms , Reproducibility of Results , Humans , Benchmarking , Peptides/analysis , Software , Liquid Chromatography-Mass Spectrometry
2.
Artif Intell Med ; 148: 102781, 2024 02.
Article in English | MEDLINE | ID: mdl-38325926

ABSTRACT

The Concordance Index (C-index) is a commonly used metric in survival analysis for evaluating the performance of a prediction model. In this paper, we propose a decomposition of the C-index into a weighted harmonic mean of two quantities: one for ranking observed events versus other observed events, and the other for ranking observed events versus censored cases. This decomposition enables a finer-grained analysis of the relative strengths and weaknesses of different survival prediction methods. Its usefulness is demonstrated through benchmark comparisons against classical models and state-of-the-art methods, together with the new variational generative neural-network-based method (SurVED) proposed in this paper. The performance of the models is assessed on four publicly available datasets with varying levels of censoring. Using the C-index decomposition and synthetic censoring, the analysis shows that deep learning models utilize the observed events more effectively than other models, which allows them to maintain a stable C-index across censoring levels. In contrast, classical machine learning models deteriorate when the censoring level decreases, owing to their inability to improve at ranking the events versus other events.
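The decomposition can be illustrated by splitting the comparable pairs of the usual C-index into event-event and event-censored pairs and scoring concordance within each group. This is a minimal sketch of the idea for demonstration, not the paper's exact weighted-harmonic-mean formulation.

```python
def cindex_components(times, events, risk):
    """Split C-index concordance into event-vs-event ("ee") and
    event-vs-censored ("ec") comparable pairs (sketch)."""
    conc = {"ee": 0.0, "ec": 0.0}
    comp = {"ee": 0, "ec": 0}
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # a comparable pair is anchored by an observed event
        for j in range(n):
            if i == j or times[i] >= times[j]:
                continue  # j must outlive the event at i
            kind = "ee" if events[j] else "ec"
            comp[kind] += 1
            if risk[i] > risk[j]:        # higher risk failed earlier: concordant
                conc[kind] += 1.0
            elif risk[i] == risk[j]:     # tied risks count half
                conc[kind] += 0.5
    c_ee = conc["ee"] / comp["ee"] if comp["ee"] else float("nan")
    c_ec = conc["ec"] / comp["ec"] if comp["ec"] else float("nan")
    c_all = (conc["ee"] + conc["ec"]) / (comp["ee"] + comp["ec"])
    return c_ee, c_ec, c_all
```

Reporting `c_ee` and `c_ec` separately makes visible which of the two ranking tasks a model handles well, which a single overall C-index hides.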


Asunto(s)
Aprendizaje Automático , Redes Neurales de la Computación , Análisis de Supervivencia
3.
Article in English | MEDLINE | ID: mdl-37664890

ABSTRACT

This study discusses the relationship between Polycystic Ovary Syndrome (PCOS) and diabetes in women, which has become increasingly prevalent due to changing lifestyles and environmental factors. The characteristic that distinguishes women with PCOS is hyperandrogenism, which results from abnormal ovarian or adrenal function and leads to the overproduction of androgens. Excess androgens in women increase the risk of Type 2 diabetes (T2D) and insulin resistance (IR). Diabetes now affects people of all ages and is linked to factors such as lifestyle, genetics, stress, and aging. Characterized by uncontrolled high blood sugar, diabetes can harm the kidneys, nerves, eyes, and other organs, and it has no cure, making it a particular concern in developing nations. This research sought to provide evidence through feature-wise correlation analyses between PCOS and diabetes. The study employed Exploratory Data Analysis (EDA) and the Elbow clustering algorithm: EDA analyzed the features of PCOS and diabetes in depth and recorded a positive correlation of 95%, while Elbow clustering was used to verify the correlations identified through EDA. Although limited research exists on this specific relationship, this work provides potential evidence for the research community by evaluating the clustering results using the Silhouette Score, Calinski-Harabasz Index, and Davies-Bouldin Index.
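As a pointer to how such clustering evaluation works, the silhouette coefficient contrasts each point's mean distance to its own cluster (a) with its mean distance to the nearest other cluster (b); a compact, well-separated clustering scores near 1. A minimal pure-Python version for 1-D features follows, as an illustration only, not the study's code:

```python
def silhouette(points, labels):
    """Mean silhouette coefficient for 1-D points; every cluster must
    contain at least two points."""
    scores = []
    for i, (p, l) in enumerate(zip(points, labels)):
        # a: mean distance to the other members of p's own cluster
        own = [abs(p - q) for j, (q, m) in enumerate(zip(points, labels))
               if m == l and j != i]
        a = sum(own) / len(own)
        # b: mean distance to the members of the nearest other cluster
        b = min(
            sum(abs(p - q) for q, m in zip(points, labels) if m == c)
            / labels.count(c)
            for c in set(labels) if c != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```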


Asunto(s)
Diabetes Mellitus Tipo 2 , Hiperandrogenismo , Resistencia a la Insulina , Síndrome del Ovario Poliquístico , Femenino , Humanos , Síndrome del Ovario Poliquístico/complicaciones , Diabetes Mellitus Tipo 2/etiología , Factores de Riesgo , Hiperandrogenismo/complicaciones , Resistencia a la Insulina/fisiología
4.
J Imaging ; 9(12)2023 Dec 18.
Article in English | MEDLINE | ID: mdl-38132699

ABSTRACT

A three-dimensional (3D) video is a special video representation with an artificial stereoscopic vision effect that increases the viewer's depth perception. The quality of a 3D video is generally measured by its similarity to the stereoscopic vision produced by the human vision system (HVS). These high-cost and time-consuming subjective tests are used because there is no objective video Quality of Experience (QoE) evaluation method that models the HVS. In this paper, we propose a hybrid 3D-video QoE evaluation method based on spatial resolution associated with depth cues (i.e., motion information, blurriness, retinal-image size, and convergence). The proposed method successfully models the HVS by considering the 3D-video parameters that directly affect depth perception, the most important element of stereoscopic vision. Experimental results show that the proposed hybrid method outperforms the widely used existing methods for measuring 3D-video QoE and has a high correlation with the HVS. Consequently, the results suggest that the proposed hybrid method can be conveniently utilized for 3D-video QoE evaluation, especially in real-time applications.

5.
Sensors (Basel) ; 23(22)2023 Nov 20.
Article in English | MEDLINE | ID: mdl-38005673

ABSTRACT

At present, text-guided image manipulation is a notable subject of study in the vision and language field. Given an image and text as inputs, these methods aim to manipulate the image according to the text while preserving text-irrelevant regions. Although there has been extensive research on improving the versatility and performance of text-guided image manipulation, research on its performance evaluation is inadequate. This study proposes Manipulation Direction (MD), a logical and robust metric that evaluates the performance of text-guided image manipulation by focusing on changes between the image and text modalities. Specifically, we define MD as the consistency of the changes between images and texts occurring before and after manipulation. By using MD to evaluate text-guided image manipulation, we can comprehensively assess how an image has changed and whether this change agrees with the text. Extensive experiments on Multi-Modal-CelebA-HQ and Caltech-UCSD Birds confirmed that our calculated MD scores correlated with subjective scores for the manipulated images more strongly than the existing metrics did.
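The intuition behind a consistency-of-changes metric can be sketched as the cosine similarity between the shift in the image embedding and the shift in the text embedding. The actual metric operates on learned vision-language embeddings, so the plain-list version below is only an illustration of the idea.

```python
import math

def direction_consistency(img_src, img_out, txt_src, txt_out):
    """Cosine similarity between the image-embedding change and the
    text-embedding change: +1 means the image moved exactly as the text
    prescribed, -1 means it moved the opposite way."""
    d_img = [a - b for a, b in zip(img_out, img_src)]
    d_txt = [a - b for a, b in zip(txt_out, txt_src)]
    dot = sum(x * y for x, y in zip(d_img, d_txt))
    norm = math.hypot(*d_img) * math.hypot(*d_txt)
    return dot / norm if norm else 0.0
```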

6.
Micromachines (Basel) ; 14(7)2023 Jul 07.
Article in English | MEDLINE | ID: mdl-37512698

ABSTRACT

Optical detection equipment (ODE) is subjected to vibrations that hamper the quality of imaging. In this paper, an active vibration isolation and compensation system (VICS) for the ODE is developed and systematically studied to improve the optical imaging quality. An active vibration isolator for cameras is designed, employing a dual-loop control strategy with position compensation and integral force feedback (IFF) control, and establishing the mapping relationship between vibration and image quality. A performance metric for evaluating images is also proposed. Finally, an experimental platform is constructed to verify its effectiveness. Based on the experimental results, it can be concluded that the proposed VICS effectively isolates vibrations, resulting in a reduction of 13.95 dB in the peak at the natural frequency and an 11.76 Hz widening of the isolation bandwidth compared with the system without it. At the same time, the experiments demonstrate that the image performance metric value increases by 46.03% near the natural frequency.

7.
BMC Public Health ; 23(1): 850, 2023 05 10.
Article in English | MEDLINE | ID: mdl-37165339

ABSTRACT

BACKGROUND: Wellington-Dufferin-Guelph Public Health (WDGPH) has conducted an absenteeism-based influenza surveillance program in the WDG region of Ontario, Canada since 2008, using a 10% absenteeism threshold to raise an alert for the implementation of mitigating measures. A recent study indicated that model-based alternatives, such as distributed-lag seasonal logistic regression models, provided improved alerts for detecting an upcoming epidemic. However, model evaluation and selection were based primarily on alert accuracy, measured by the false alert rate (FAR), and failed to optimize timeliness. Here, a new metric that simultaneously evaluates epidemic alert accuracy and timeliness is proposed. The alert time quality (ATQ) metric is investigated as a model selection criterion on both a simulated and a real data set. METHODS: The ATQ assessed alerts on a gradient, where alerts raised incrementally before or after an optimal day were considered informative but were penalized for lack of timeliness. Summary statistics of ATQ, the average alert time quality (AATQ) and the first alert time quality (FATQ), were used for model evaluation and selection. Alerts raised by ATQ- and FAR-selected models were compared. Daily elementary school absenteeism and laboratory-confirmed influenza case data collected by WDGPH were used to demonstrate and evaluate the proposed metric, and a simulation study that mimicked the WDG population and influenza demographics was conducted for further evaluation. RESULTS: The FATQ-selected model raised acceptable first alerts most frequently, while the AATQ-selected model most frequently raised first alerts within the ideal range. CONCLUSIONS: Models selected by either FATQ or AATQ would predict influenza activity within the local community more effectively than those selected by FAR.
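The gradient idea behind ATQ can be illustrated with a score that is maximal for an alert raised on the optimal day and decays with the absolute lag. The linear shape and 14-day window below are assumptions for illustration only, not the paper's exact definition.

```python
def alert_time_quality(alert_day, optimal_day, window=14):
    """Score in [0, 1]: 1.0 for an alert on the optimal day, decaying
    linearly to 0.0 for alerts `window` or more days early or late."""
    return max(0.0, 1.0 - abs(alert_day - optimal_day) / window)

def average_alert_time_quality(alert_days, optimal_day, window=14):
    """AATQ-style summary: mean quality over all alerts in a season."""
    return sum(alert_time_quality(d, optimal_day, window)
               for d in alert_days) / len(alert_days)
```

A FATQ-style summary would instead score only the first alert of the season, which is why the two criteria can select different models.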


Asunto(s)
Gripe Humana , Vigilancia de la Población , Humanos , Absentismo , Gripe Humana/diagnóstico , Gripe Humana/epidemiología , Ontario/epidemiología , Instituciones Académicas
8.
Disabil Rehabil Assist Technol ; : 1-10, 2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36927193

ABSTRACT

PURPOSE: Visual impairment-related disabilities have become increasingly pervasive. Current reports estimate a total of 36 million persons with blindness and 217 million persons with moderate to severe visual impairment worldwide. Assistive technologies (AT), including text-to-speech software, navigational/spatial guides, and object recognition tools, have the capacity to improve the lives of people with blindness and low vision. However, access to such AT is constrained by high costs and implementation barriers. More recently, expansive growth in mobile computing has enabled many technologies to be translated into mobile applications. As a result, a marketplace of accessibility apps has become available, yet no framework exists to facilitate navigation of this voluminous space. MATERIALS AND METHODS: We developed the BLV (Blind and Low Vision) App Arcade: a fun, engaging, and searchable curated repository of AT apps organized into 11 categories spanning a wide variety of themes, from entertainment to navigation. Additionally, a standardized evaluation metric was formalized to assess each app in five key dimensions: reputability, privacy, data sharing, effectiveness, and ease of use/accessibility. In this paper, we describe the methodological approaches, considerations, and metrics used to find, store, and score mobile applications. CONCLUSION: The development of a comprehensive and standardized database of apps with a scoring rubric has the potential to increase access to reputable tools for the visually impaired community, especially for those in low- and middle-income demographics, who may have access to mobile devices but otherwise have limited access to more expensive technologies or services.


A wide array of assistive mobile applications now serve as low-cost, convenient, and effective alternatives to standard tools in the rehabilitation domain.
Given an extensive (and growing) marketplace of assistive apps, we highlight the importance of developing standardized evaluation frameworks that assess the merit, functionality, and accessibility of tools in their respective rehabilitation fields.
We introduce a novel public resource that exhibits verified and reliable assistive apps for the visually impaired community, especially for those in low- and middle-income demographics who may not have access to common technologies and services.

9.
J Pathol Inform ; 13: 100119, 2022.
Article in English | MEDLINE | ID: mdl-36268073

ABSTRACT

Context: Cytology is the study of whole cells in diagnostic pathology. Unlike standard thinly sliced histologic specimens, cytologic preparations consist of whole cells that commonly cluster and aggregate. As such, cytology preparations are generally much thicker than histologic slides, resulting in large patches of defocus when examined under the microscope. A diagnostic aggregate of cells often cannot be viewed in focus all at once, requiring pathologists to continually manipulate the focal plane, which complicates the task of accurately assessing the entire cellular aggregate and making a diagnosis. Further, it is extremely difficult to acquire useful, uniformly in-focus digital images of cytology preparations for applications such as remote diagnostic evaluation and artificial intelligence models. The predominant current method to address this issue is to acquire digital images of the entire slide at multiple focal planes, which demands long scanning times, complex and expensive scanning systems, and huge storage capacity. Aims: Here we report a unique imaging method that can acquire cytologic images efficiently and computationally render highly compact all-in-focus digital images. Methods and Material: The method applies metric-based digital refocusing to microscopy data collected with a Fourier ptychographic microscope (FPM); the digitally refocused image patches are then synthesized into an all-in-focus image. Results: We report all-in-focus FPM results for thyroid fine needle aspiration (FNA) cytology samples, demonstrating the method's ability to overcome the 30 µm height variance caused by cell aggregation and to render all-in-focus images at high resolution (corresponding to a standard microscope with an objective NA of 0.75).
Conclusions: This technology is applicable to standard microscopes, and we believe it can improve diagnostic accuracy as well as the ease and speed of diagnosing challenging specimens. While we focus on cytology slides here, we anticipate the technology's advantages will translate well to histology applications. The technique also addresses the issue of remote rapid evaluation of cytology preparations. Finally, we believe that by resolving the focus-heterogeneity issues in standard digital images, this technique is a critical advance for applying machine learning to cytology specimens.

10.
Front Digit Health ; 4: 806076, 2022.
Article in English | MEDLINE | ID: mdl-35252959

ABSTRACT

OBJECTIVE: Automated speech recognition (ASR) systems have become increasingly sophisticated, accurate, and deployable on many digital devices, including smartphones. This pilot study examines the speech recognition performance of ASR apps using audiological speech tests. In addition, we compare ASR speech recognition performance to that of normal-hearing and hearing-impaired listeners and evaluate whether standard clinical audiological tests are a meaningful and quick measure of the performance of ASR apps. METHODS: Four apps were tested on a smartphone: AVA, Earfy, Live Transcribe, and Speechy. The Dutch audiological speech tests performed were speech audiometry in quiet (Dutch CNC-test), the Digits-in-Noise (DIN) test with steady-state speech-shaped noise, and sentences in quiet and in averaged long-term speech-shaped spectrum noise (Plomp-test). For comparison, each app's ability to transcribe a spoken dialogue (Dutch and English) was tested. RESULTS: All apps scored at least 50% phonemes correct on the Dutch CNC-test at a conversational speech intensity level (65 dB SPL) and achieved 90-100% phoneme recognition at higher intensity levels. On the DIN-test, AVA and Live Transcribe had the lowest (best) signal-to-noise ratio of +8 dB. The lowest signal-to-noise ratio measured with the Plomp-test was +8 to 9 dB, for Earfy (Android) and Live Transcribe (Android). Overall, the word error rate for the dialogue in English (19-34%) was lower (better) than for the Dutch dialogue (25-66%). CONCLUSION: The performance of the apps was limited on audiological tests that provide little linguistic context or use low signal-to-noise levels. On Dutch audiological speech tests in quiet, ASR apps performed similarly to a person with a moderate hearing loss; in noise, they performed more poorly than most profoundly deaf people using a hearing aid or cochlear implant. Adding new performance metrics, including the semantic difference as a function of SNR and reverberation time, could help to monitor and further improve ASR performance.
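The word error rate reported above is the standard word-level Levenshtein distance normalized by the reference length; a compact dynamic-programming implementation:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    computed via word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # match/substitution
    return dp[-1][-1] / len(ref)
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why it is an error rate rather than an accuracy.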

11.
Environ Sci Technol ; 56(3): 2054-2064, 2022 02 01.
Article in English | MEDLINE | ID: mdl-34995441

ABSTRACT

Solute descriptors have been widely used to model chemical transfer processes through poly-parameter linear free energy relationships (pp-LFERs); however, there are still substantial difficulties in obtaining these descriptors accurately and quickly for new organic chemicals. In this research, models (PaDEL-DNN) that require only SMILES of chemicals were built to satisfactorily estimate pp-LFER descriptors using deep neural networks (DNN) and the PaDEL chemical representation. The PaDEL-DNN-estimated pp-LFER descriptors demonstrated good performance in modeling storage-lipid/water partitioning coefficient (log Kstorage-lipid/water), bioconcentration factor (BCF), aqueous solubility (ESOL), and hydration free energy (freesolve). Then, assuming that the accuracy in the estimated values of widely available properties, e.g., logP (octanol-water partition coefficient), can calibrate estimates for less available but related properties, we proposed logP as a surrogate metric for evaluating the overall accuracy of the estimated pp-LFER descriptors. When using the pp-LFER descriptors to model log Kstorage-lipid/water, BCF, ESOL, and freesolve, we achieved around 0.1 log unit lower errors for chemicals whose estimated pp-LFER descriptors were deemed "accurate" by the surrogate metric. The interpretation of the PaDEL-DNN models revealed that, for a given test chemical, having several (around 5) "similar" chemicals in the training data set was crucial for accurate estimation while the remaining less similar training chemicals provided reasonable baseline estimates. Lastly, pp-LFER descriptors for over 2800 persistent, bioaccumulative, and toxic chemicals were reasonably estimated by combining PaDEL-DNN with the surrogate metric. Overall, the PaDEL-DNN/surrogate metric and newly estimated descriptors will greatly benefit chemical transfer modeling.


Subject(s)
Organic Compounds , Water , Chemical Phenomena , Computer Neural Networks , Octanols , Organic Compounds/chemistry , Water/chemistry
12.
Front Genet ; 12: 636743, 2021.
Article in English | MEDLINE | ID: mdl-33833776

ABSTRACT

Single-cell RNA sequencing (scRNA-seq) data provides unprecedented information on cell fate decisions; however, the spatial arrangement of cells is often lost. Several recent computational methods have been developed to impute spatial information onto a scRNA-seq dataset by analyzing known spatial expression patterns of a small subset of genes known as a reference atlas. However, there is a lack of comprehensive analysis of the accuracy, precision, and robustness of the mappings, along with the generalizability of these methods, which are often designed for specific systems. We present a system-adaptive deep learning-based method (DEEPsc) to impute spatial information onto a scRNA-seq dataset from a given spatial reference atlas. By introducing a comprehensive set of metrics that evaluate spatial mapping methods, we compare DEEPsc with four existing methods on four biological systems. We find that while DEEPsc has accuracy comparable to other methods, it achieves an improved balance between precision and robustness. DEEPsc provides a data-adaptive tool to connect scRNA-seq datasets and spatial imaging datasets to analyze cell fate decisions. Our implementation, with a uniform API, can serve as a portal with access to all the methods investigated in this work for spatial exploration of cell fate decisions in scRNA-seq data; all evaluated methods are implemented as open-source software with a uniform interface.

13.
Accid Anal Prev ; 152: 106003, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33571922

ABSTRACT

Vehicle automation safety must be evaluated not only for market success but also for more informed decision-making about Automated Vehicles' (AVs) deployment and supporting policies and regulations to govern AVs' unintended consequences. This study is designed to identify the AV safety quantification studies, evaluate the quantification approaches used in the literature, and uncover the gaps and challenges in AV safety evaluation. We employed a scoping review methodology to identify the approaches used in the literature to quantify AV safety. After screening and reviewing the literature, six approaches were identified: target crash population, traffic simulation, driving simulator, road test data analysis, system failure risk assessment, and safety effectiveness estimation. We ran two evaluations on the identified approaches. First, we investigated each approach in terms of its input (required data, assumptions, etc.), output (safety evaluation metrics), and application (to estimate AVs' safety implications at the vehicle, transportation system, and society levels). Second, we qualitatively compared them in terms of three criteria: availability of input data, suitability for evaluating different automation levels, and reliability of estimations. This review identifies four challenges in AV safety evaluation: (a) shortcomings in AV safety evaluation approaches, (b) uncertainties in AV implementations and their impacts on AV safety, (c) potential riskier behavior of AV passengers as well as other road users, and (d) emerging safety issues related to AV implementations. This review is expected to help researchers and rulemakers to choose the most appropriate quantification method based on their goals and study limitations. Future research is required to address the identified challenges in AV safety evaluation.


Asunto(s)
Accidentes de Tránsito/prevención & control , Accidentes de Tránsito/estadística & datos numéricos , Conducción de Automóvil/normas , Investigación/tendencias , Robótica/métodos , Robótica/normas , Seguridad , Humanos , Reproducibilidad de los Resultados
14.
Brief Bioinform ; 22(2): 1604-1619, 2021 03 22.
Article in English | MEDLINE | ID: mdl-32043521

ABSTRACT

Drug repositioning can drastically decrease the cost and duration of traditional drug research and development while avoiding unforeseen adverse events. With the rapid advancement of high-throughput technologies and the explosion of biological and medical data, computational drug repositioning methods have become appealing and powerful techniques for systematically identifying potential drug-target and drug-disease interactions. In this review, we first summarize the available biomedical data and public databases related to drugs, diseases, and targets. Then, we discuss existing drug repositioning approaches and group them by their underlying computational models: classical machine learning, network propagation, matrix factorization and completion, and deep-learning-based models. We also comprehensively analyze common standard data sets and evaluation metrics used in drug repositioning, and give a brief comparison of various prediction methods on the gold-standard data sets. Finally, we conclude with a brief discussion of challenges in computational drug repositioning: reducing the noise and incompleteness of biomedical data, ensembling various computational drug repositioning methods, designing reliable negative-sample selection methods, developing techniques for the data-sparseness problem, constructing large-scale and comprehensive benchmark data sets, and analyzing and explaining the underlying mechanisms of predicted interactions.


Subject(s)
Computer Simulation , Drug Repositioning , Algorithms , Bayes Theorem , Cluster Analysis , Computational Biology/methods , Data Interpretation, Statistical , Deep Learning , Reproducibility of Results , Support Vector Machine
15.
J Med Syst ; 44(11): 196, 2020 Oct 06.
Article in English | MEDLINE | ID: mdl-33025300

ABSTRACT

Open Access is an emerging paradigm for communicating scientific knowledge. The Trans-O-MIM Project works on strategies, models, and evaluation metrics for the goal-oriented, stepwise, sustainable, and fair transformation of established subscription-based scientific journals into open-access journals. This research presents an evaluation metric, and the appropriate parameters identified for it, for such transformations. The metric was developed in the context of a business-management method for planning, steering, and controlling action and corporate strategies, with a three-step procedure at its core: in stage 1, necessary preconditions for a transformation were considered; stage 2 was the elaboration of the evaluation metric by means of a scenario analysis; and stage 3 comprised exemplary testing at the journal Methods of Information in Medicine. The scenario analysis yielded 5 scenarios with 9 different final states; the metric is thus composed of these 5 scenarios, which can be used to evaluate the success or failure of a transformation. A list of 65 suitable parameters for measuring changes in the scenarios was compiled, making it possible to evaluate a transformation and to determine its current final state. Parameters such as submissions, publications, and time, as well as the scenario states, could be applied to the transformation process of Methods of Information in Medicine. The proposed evaluation metric can be used to evaluate the transformation of subscription-based journals into open-access journals.


Asunto(s)
Publicaciones Periódicas como Asunto , Benchmarking
16.
Neural Netw ; 119: 31-45, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31376636

ABSTRACT

The Generative Adversarial Network (GAN) has become an active research field due to its capability to generate quality simulation data. However, two consistent distributions (the generated data distribution and the original data distribution) produced by a GAN cannot guarantee that generated data are always close to real data. Traditionally, GANs are mainly applied to images, and the task becomes more challenging for numeric datasets. In this paper, we propose a histogram-based GAN model (His-GAN), whose purpose is to help a GAN produce generated data of high quality. Specifically, we map generated data and original data into a histogram, compute the probability mass in each bin, and calculate dissimilarity with traditional f-divergence measures (e.g., Hellinger distance, Jensen-Shannon divergence) and the Histogram Intersection Kernel. We then incorporate this dissimilarity score into the training of the GAN model to update the generator's parameters, which influence generated data quality, and thereby improve it. Moreover, we revised the GAN training process by feeding the model one group of samples at a time (samples from one class or one cluster that share similar characteristics), so that the final generated data capture the characteristics of a single group, overcoming the challenge of learning complex characteristics from mixed groups or clusters of data. In this way, we can generate data that are more indistinguishable from the original data. Extensive experiments on MNIST, CIFAR-10, and a real-world numeric dataset clearly show the effectiveness of our approach.
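The histogram dissimilarities mentioned above are standard; for two normalized bin-probability vectors they can be computed directly. This is a sketch of the scoring step only, not the His-GAN training code.

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions
    (0 = identical, 1 = disjoint support)."""
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(p, q)) / 2)

def histogram_intersection(p, q):
    """Histogram intersection kernel (1 = identical, 0 = disjoint)."""
    return sum(min(a, b) for a, b in zip(p, q))
```

During training, such a dissimilarity score is added to the generator's objective so that parameter updates also reduce the bin-wise mismatch between generated and original data.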


Asunto(s)
Exactitud de los Datos , Bases de Datos Factuales/normas , Redes Neurales de la Computación
17.
J Healthc Inform Res ; 3(4): 441-459, 2019 Dec.
Article in English | MEDLINE | ID: mdl-35415434

ABSTRACT

Longitudinal disease subtyping is an important problem within the broader scope of computational phenotyping. In this article, we discuss several data-driven unsupervised disease subtyping methods to obtain disease subtypes from longitudinal clinical data. The methods are analyzed in the context of chronic kidney disease, one of the leading health problems, both in the USA and worldwide. To provide a quantitative comparison of the different methods, we propose a novel evaluation metric that measures the cluster tightness and degree of separation between the various clusters produced by each method. Comparative results for two significantly large clinical datasets are provided, along with key insights that are possible due to the proposed evaluation metric.
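A metric of this kind typically divides a tightness term (average distance of points to their own cluster centroid) by a separation term (average distance between centroids), so that lower values indicate tighter, better-separated clusters. The 1-D sketch below illustrates that general idea under those assumptions; it does not reproduce the paper's metric.

```python
def tightness_separation(points, labels):
    """Ratio of mean within-cluster spread to mean between-centroid
    distance for 1-D data; lower is better (generic sketch)."""
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    centroids = {l: sum(ps) / len(ps) for l, ps in clusters.items()}
    # tightness: mean absolute deviation of points from their centroid
    tight = sum(abs(p - centroids[l])
                for l, ps in clusters.items() for p in ps) / len(points)
    # separation: mean pairwise distance between cluster centroids
    labs = sorted(centroids)
    seps = [abs(centroids[a] - centroids[b])
            for i, a in enumerate(labs) for b in labs[i + 1:]]
    return tight / (sum(seps) / len(seps))
```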
