Results 1 - 9 of 9
1.
Sci Justice ; 58(3): 200-218, 2018 May.
Article in English | MEDLINE | ID: mdl-29685302

ABSTRACT

When the strength of forensic evidence is quantified using sample data and statistical models, a concern may be raised as to whether the output of a model overestimates the strength of evidence. This is particularly the case when the amount of sample data is small, and hence sampling variability is high; the concern is, in essence, one about precision. This paper describes, explores, and tests three procedures which shrink the value of the likelihood ratio or Bayes factor toward the neutral value of one. The procedures are: (1) a Bayesian procedure with uninformative priors, (2) use of empirical lower and upper bounds (ELUB), and (3) a novel form of regularized logistic regression. As a benchmark, they are compared with linear discriminant analysis, and in some instances with non-regularized logistic regression. The behaviours of the procedures are explored using Monte Carlo simulated data, and tested on real data from comparisons of voice recordings, face images, and glass fragments.
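As a toy illustration of the bounding idea only (not the paper's estimation procedure; the bound values here are assumed inputs, whereas ELUB derives them empirically from calibration data), clamping a likelihood ratio into lower and upper bounds shrinks extreme values back toward the neutral value of one:

```python
def elub_clamp(lr, lower, upper):
    """Clamp a likelihood ratio into empirically supported bounds.

    Values more extreme than the calibration data can support are
    shrunk back toward the neutral value of one by capping them.
    """
    if not (0.0 < lower <= 1.0 <= upper):
        raise ValueError("bounds must straddle the neutral value 1")
    return min(max(lr, lower), upper)

# An LR of 5000 computed from a small sample is capped at the upper
# bound, while a moderate LR of 3 passes through unchanged.
capped = elub_clamp(5000.0, lower=0.01, upper=200.0)
passed = elub_clamp(3.0, lower=0.01, upper=200.0)
```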

2.
J Biomed Inform ; 76: 69-77, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29042246

ABSTRACT

In order for clinicians to manage disease progression and make effective decisions about drug dosage, treatment regimens or scheduling follow-up appointments, it is necessary to be able to identify both short- and long-term trends in repeated biomedical measurements. However, this is complicated by the fact that these measurements are irregularly sampled and influenced by both genuine physiological changes and external factors. In their current forms, existing regression algorithms often do not fulfil all of a clinician's requirements for identifying short-term (acute) events while still being able to identify long-term (chronic) trends in disease progression. Therefore, in order to balance both short-term interpretability and long-term flexibility, an extension to broken-stick regression models is proposed in order to make them more suitable for modelling clinical time series. The proposed probabilistic broken-stick model can robustly estimate both short-term and long-term trends simultaneously, while also accommodating the unequal length and irregularly sampled nature of clinical time series. Moreover, since the model is parametric and completely generative, its first derivative provides a long-term non-linear estimate of the annual rate of change in the measurements more reliably than linear regression. The benefits of the proposed model are illustrated using estimated glomerular filtration rate as a case study used to manage patients with chronic kidney disease.


Subject(s)
Algorithms; Glomerular Filtration Rate; Models, Theoretical; Probability; Humans; Renal Insufficiency, Chronic/physiopathology
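A minimal sketch of the broken-stick idea underlying the model above, assuming a single fixed knot and a plain ordinary least-squares fit (the paper's model is probabilistic and handles irregular sampling; the knot location and data here are made up):

```python
def fit_broken_stick(ts, ys, knot):
    # Design matrix columns: intercept, time, hinge term max(t - knot, 0).
    # The slope before the knot is w[1]; after the knot it is w[1] + w[2].
    X = [[1.0, float(t), max(float(t) - knot, 0.0)] for t in ts]
    m, n = len(ts), 3
    # Normal equations A w = b with A = X^T X and b = X^T y.
    A = [[sum(X[i][p] * X[i][q] for i in range(m)) for q in range(n)]
         for p in range(n)]
    b = [sum(X[i][p] * ys[i] for i in range(m)) for p in range(n)]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w
```

The fitted hinge coefficient directly gives the change in slope at the knot, which is why the model's first derivative is interpretable as a rate of change.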
3.
Elife ; 6, 2017 Feb 20.
Article in English | MEDLINE | ID: mdl-28218891

ABSTRACT

Diagnosis and treatment of circadian rhythm sleep-wake disorders both require assessment of circadian phase of the brain's circadian pacemaker. The gold-standard univariate method is based on collection of a 24-hr time series of plasma melatonin, a suprachiasmatic nucleus-driven pineal hormone. We developed and validated a multivariate whole-blood mRNA-based predictor of melatonin phase which requires few samples. Transcriptome data were collected under normal, sleep-deprivation and abnormal sleep-timing conditions to assess robustness of the predictor. Partial least squares regression (PLSR), applied to the transcriptome, identified a set of 100 biomarkers primarily related to glucocorticoid signaling and immune function. Validation showed that PLSR-based predictors outperform published blood-derived circadian phase predictors. When given one sample as input, the R2 of predicted vs. observed phase was 0.74, whereas for two samples taken 12 hr apart, R2 was 0.90. This blood transcriptome-based model enables assessment of circadian phase from a few samples.


Subject(s)
Biomarkers/blood; Circadian Rhythm; Gene Expression Profiling; Melatonin/biosynthesis; Humans
4.
Mach Vis Appl ; 28(3): 393-407, 2017.
Article in English | MEDLINE | ID: mdl-32103860

ABSTRACT

Images of the kidneys acquired using dynamic contrast-enhanced magnetic resonance renography (DCE-MRR) contain unwanted complex organ motion due to respiration. This gives rise to motion artefacts that hinder the clinical assessment of kidney function. However, due to the rapid change in contrast agent within the DCE-MR image sequence, commonly used intensity-based image registration techniques are likely to fail. While semi-automated approaches involving human experts are a possible alternative, they pose significant drawbacks including inter-observer variability, and the bottleneck introduced through manual inspection of the multiplicity of images produced during a DCE-MRR study. To address this issue, we present a novel automated, registration-free movement correction approach based on windowed and reconstruction variants of dynamic mode decomposition (WR-DMD). Our proposed method is validated on ten different healthy volunteers' kidney DCE-MRI data sets. The results, using block-matching-block evaluation on the image sequence produced by WR-DMD, show the elimination of 99% of mean motion magnitude when compared to the original data sets, thereby demonstrating the viability of automatic movement correction using WR-DMD.
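Full DMD estimates a linear operator between high-dimensional image snapshots via a singular value decomposition; as a hedged, scalar-only illustration of that core idea (not the WR-DMD method itself, and with made-up data), one can fit x[k+1] ≈ a·x[k] by least squares and reconstruct the sequence from the fitted operator:

```python
def fit_linear_operator(x):
    # Least-squares fit of x[k+1] ≈ a * x[k] over all consecutive
    # snapshot pairs; in full DMD, `a` is a matrix estimated via an
    # SVD of the stacked snapshot vectors.
    num = sum(x[k + 1] * x[k] for k in range(len(x) - 1))
    den = sum(x[k] * x[k] for k in range(len(x) - 1))
    return num / den

def reconstruct(x0, a, n):
    # Rebuild the sequence from the initial state and the fitted
    # operator: x[k] = a**k * x[0].
    return [x0 * a ** k for k in range(n)]
```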

5.
J Innov Health Inform ; 22(2): 293-301, 2015 Apr 14.
Article in English | MEDLINE | ID: mdl-26245243

ABSTRACT

INTRODUCTION: Renal function is reported using estimated glomerular filtration rate (eGFR). However, eGFR values are recorded without reference to the particular serum creatinine (SCr) assays used to derive them, and newer assays were introduced at different time points across the laboratories in the United Kingdom. These changes may cause systematic bias in eGFR reported in routinely collected data, even though laboratory-reported eGFR values have a correction factor applied. DESIGN: An algorithm to detect changes in SCr that in turn affect the eGFR calculation method was developed. It compares the mapping of SCr values on to eGFR values across a time series of paired eGFR and SCr measurements. SETTING: Routinely collected primary care data from 20,000 people with the richest renal function data from the Quality Improvement in Chronic Kidney Disease (QICKD) trial. RESULTS: The algorithm identified a change in eGFR calculation method in 114 (90%) of the 127 included practices. This change was identified in 4736 (23.7%) patient time series analysed. This change in calibration method was found to cause a significant step change in the reported eGFR values, producing a systematic bias. The eGFR values could not be recalibrated by applying the Modification of Diet in Renal Disease equation to the laboratory-reported SCr values. CONCLUSIONS: This algorithm can identify laboratory changes in eGFR calculation methods and changes in SCr assay. Failure to account for these changes may misconstrue renal function changes over time. Researchers using routine eGFR data should account for these effects.


Subject(s)
Automation, Laboratory; Creatinine/blood; Electronic Health Records; Health Information Exchange; Kidney Failure, Chronic/blood; Kidney Failure, Chronic/therapy; Kidney Function Tests/methods; Quality Improvement; Aged; Aged, 80 and over; Algorithms; England; Female; Glomerular Filtration Rate/physiology; Humans; Longitudinal Studies; Male; Middle Aged; Primary Health Care
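A hedged sketch of the kind of consistency check involved above, assuming one published form of the four-variable (IDMS-traceable) MDRD equation and an arbitrary 5% tolerance; the trial's actual algorithm compares SCr-to-eGFR mappings across whole time series rather than point by point:

```python
def mdrd_egfr(scr_mg_dl, age, female=False, black=False):
    # Four-variable (IDMS-traceable) MDRD estimate of GFR in
    # mL/min/1.73 m^2, with serum creatinine in mg/dL.
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def detect_method_change(scr_series, egfr_series, age, tol=0.05):
    # Flag time points where the reported eGFR diverges from the value
    # recomputed from the paired SCr measurement, suggesting a change
    # of assay or calculation method somewhere in the pipeline.
    flags = []
    for i, (scr, reported) in enumerate(zip(scr_series, egfr_series)):
        expected = mdrd_egfr(scr, age)
        if abs(reported - expected) / expected > tol:
            flags.append(i)
    return flags
```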
6.
Stud Health Technol Inform ; 180: 1105-7, 2012.
Article in English | MEDLINE | ID: mdl-22874368

ABSTRACT

BACKGROUND: Medical research increasingly requires the linkage of data from different sources. Conducting a requirements analysis for a new application is an established part of software engineering, but rarely reported in the biomedical literature; and no generic approaches have been published as to how to link heterogeneous health data. METHODS: Literature review, followed by a consensus process to define how requirements for research using multiple data sources might be modelled. RESULTS: We developed a requirements analysis method, i-ScheDULEs. The first components of the modelling process are indexing and creating a rich picture of the research study. Secondly, we developed a series of reference models of progressive complexity: data flow diagrams (DFD) to define data requirements; unified modeling language (UML) use case diagrams to capture study-specific and governance requirements; and finally, business process models, using business process modeling notation (BPMN). DISCUSSION: These requirements and their associated models should become part of research study protocols.


Subject(s)
Biomedical Research/methods; Database Management Systems; Electronic Health Records; Health Records, Personal; Information Storage and Retrieval/methods; Medical Record Linkage/methods; Vocabulary, Controlled; Models, Theoretical; United Kingdom
7.
Inform Prim Care ; 19(2): 57-63, 2011.
Article in English | MEDLINE | ID: mdl-22417815

ABSTRACT

BACKGROUND: Personalised medicine involves customising management to meet patients' needs. In chronic kidney disease (CKD) at the population level there is steady decline in renal function with increasing age; and progressive CKD has been defined as marked variation from this rate of decline. OBJECTIVE: To create visualisations of individual patients' renal function and display smoothed trend lines and confidence intervals for their renal function and other important covariates. METHOD: Applying advanced pattern-recognition techniques developed in biometrics to routinely collected primary care data collected as part of the Quality Improvement in Chronic Kidney Disease (QICKD) trial. We plotted trend lines, using regression, and confidence intervals for individual patients. We also created a visualisation which allowed renal function to be compared with six other covariates: glycated haemoglobin (HbA1c), body mass index (BMI), blood pressure (BP), and therapy. The outputs were reviewed by an expert panel. RESULTS: We successfully extracted and displayed data. We demonstrated that estimated glomerular filtration rate (eGFR) is a noisy variable, and showed that a large number of people would exceed the 'progressive CKD' criteria. We created a data display that could be readily automated. This display was well received by our expert panel but requires extensive development before testing in a clinical setting. CONCLUSIONS: It is feasible to utilise data visualisation methods developed in biometrics to look at CKD data. The criteria for defining 'progressive CKD' need revisiting, as many patients exceed them. Further development work and testing is needed to explore whether this type of data modelling and visualisation might improve patient care.


Subject(s)
Biometry/methods; Kidney Failure, Chronic/therapy; Precision Medicine; Primary Health Care; Aged; Aged, 80 and over; Aging/physiology; Biomarkers/analysis; Female; Glomerular Filtration Rate; Humans; Kidney Failure, Chronic/physiopathology; Male; Middle Aged; Pattern Recognition, Automated; Pilot Projects; Quality Improvement
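A minimal sketch of a per-patient trend line with an approximate confidence band, using ordinary least squares and a normal quantile (1.96) in place of a t quantile; the paper's smoothing methods are more sophisticated, and the eGFR values below are made up:

```python
def linear_trend(ts, ys):
    # Ordinary least squares fit y ≈ intercept + slope * t, returning
    # the slope's standard error so a confidence band can be drawn.
    n = len(ts)
    mt = sum(ts) / n
    my = sum(ys) / n
    sxx = sum((t - mt) ** 2 for t in ts)
    slope = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / sxx
    intercept = my - slope * mt
    resid = [y - (intercept + slope * t) for t, y in zip(ts, ys)]
    se_slope = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
    return slope, intercept, se_slope

# Annual rate of eGFR change with an approximate 95% interval
# (years on the t axis, eGFR on the y axis).
slope, intercept, se = linear_trend([0, 1, 2, 3, 4],
                                    [90.0, 87.0, 84.0, 81.0, 78.0])
ci = (slope - 1.96 * se, slope + 1.96 * se)
```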
8.
IEEE Trans Pattern Anal Mach Intell ; 32(6): 1097-111, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20431134

ABSTRACT

A new multimodal biometric database designed and acquired within the framework of the European BioSecure Network of Excellence is presented. It comprises data from more than 600 individuals acquired simultaneously in three scenarios: 1) over the Internet, 2) in an office environment with a desktop PC, and 3) in indoor/outdoor environments with mobile portable hardware. The three scenarios include a common part of audio/video data. Also, signature and fingerprint data have been acquired with both desktop PCs and mobile portable hardware. Additionally, hand and iris data were acquired in the second scenario using a desktop PC. Acquisition has been conducted by 11 European institutions. Additional features of the BioSecure Multimodal Database (BMDB) are: two acquisition sessions, several sensors in certain modalities, balanced gender and age distributions, multimodal realistic scenarios with simple and quick tasks per modality, cross-European diversity, availability of demographic data, and compatibility with other multimodal databases. The novel acquisition conditions of the BMDB allow us to perform new challenging research and evaluation of either monomodal or multimodal biometric systems, as in the recent BioSecure Multimodal Evaluation campaign. A description of this campaign, including baseline results of individual modalities from the new database, is also given. The database is expected to be available for research purposes through the BioSecure Association during 2008.


Subject(s)
Biometric Identification; Data Interpretation, Statistical; Database Management Systems; Databases, Factual; Dermatoglyphics; Face; Female; Humans; Iris; Male; Reproducibility of Results; Voice
9.
IEEE Trans Pattern Anal Mach Intell ; 29(3): 492-8, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17224618

ABSTRACT

Biometric authentication performance is often depicted by a detection error trade-off (DET) curve. We show that this curve is dependent on the choice of samples available, the demographic composition, and the number of users specific to a database. We propose a two-step bootstrap procedure to take into account the three mentioned sources of variability. This is an extension of Bolle et al.'s bootstrap subset technique. Preliminary experiments on the NIST2005 and XM2VTS benchmark databases are encouraging, e.g., the average result across all 24 systems evaluated on NIST2005 indicates that one can predict, with more than 75 percent of DET coverage, an unseen DET curve with eight times more users. Furthermore, our finding suggests that with more data available, the confidence intervals become smaller and, hence, more useful.


Subject(s)
Algorithms; Artificial Intelligence; Biometry/methods; Face/anatomy & histology; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Speech Recognition Software; Computer Simulation; Humans; Models, Statistical; Reproducibility of Results; Sensitivity and Specificity
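A hedged sketch of the two-step resampling idea above (first resample users, then resample each selected user's scores); this illustrates the structure only, not the authors' exact procedure or the DET computation, and the score data are hypothetical:

```python
import random

def two_step_bootstrap(scores_by_user, rng):
    # Step 1: resample users with replacement, capturing variability
    # due to the demographic composition and number of users.
    users = list(scores_by_user)
    boot_users = [rng.choice(users) for _ in users]
    # Step 2: resample each selected user's scores with replacement,
    # capturing variability due to the choice of samples available.
    boot_scores = []
    for u in boot_users:
        s = scores_by_user[u]
        boot_scores.extend(rng.choice(s) for _ in s)
    return boot_scores
```

Repeating this over many replicates and computing a DET curve from each replicate would yield the confidence region around the observed curve.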