1.
SIAM J Sci Comput ; 41(4): A2212-A2238, 2019.
Article in English | MEDLINE | ID: mdl-31749599

ABSTRACT

Inference on unknown quantities in dynamical systems via observational data is essential for providing meaningful insight, furnishing accurate predictions, enabling robust control, and establishing appropriate designs for future experiments. Merging mathematical theory with empirical measurements in a statistically coherent way is critical, and challenges abound, e.g., ill-posedness of the parameter estimation problem, proper regularization and incorporation of prior knowledge, and computational limitations. To address these issues, we propose a new method for learning parameterized dynamical systems from data. We first customize and fit a surrogate stochastic process directly to observational data, front-loading with statistical learning to respect prior knowledge (e.g., smoothness) and to cope with challenging data features such as heteroskedasticity, heavy tails, and censoring. Then, samples of the stochastic process are used as "surrogate data" and point estimates are computed via ordinary point estimation methods in a modular fashion. Attractive features of this two-step approach include modularity and trivial parallelizability. We demonstrate its advantages on a predator-prey simulation study and on a real-world application involving within-host influenza virus infection data paired with a viral kinetic model, with comparisons to a more conventional Markov chain Monte Carlo (MCMC) based Bayesian approach.
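The two-step recipe described above can be sketched in a few lines. The snippet below is an illustrative reconstruction under simple assumptions (a Lotka-Volterra predator-prey example, an RBF-kernel surrogate, and least-squares point estimation), not the authors' implementation; all function names and parameter values are placeholders.

```python
# Illustrative sketch of a two-step surrogate approach (not the authors' code).
# Step 1: fit a GP surrogate to noisy observations; Step 2: for each surrogate
# draw, estimate ODE parameters by ordinary least squares.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def lotka_volterra(state, t, a, b, c, d):
    x, y = state
    return [a * x - b * x * y, c * x * y - d * y]

# Synthetic noisy observations of the prey population (assumed setup)
t_obs = np.linspace(0, 15, 40)
true = odeint(lotka_volterra, [10.0, 5.0], t_obs, args=(1.0, 0.1, 0.075, 1.5))
y_obs = true[:, 0] + np.random.normal(0, 0.5, t_obs.size)

# Step 1: surrogate stochastic process fit directly to the data
gp = GaussianProcessRegressor(RBF(2.0) + WhiteKernel(0.25), normalize_y=True)
gp.fit(t_obs[:, None], y_obs)
surrogate_draws = gp.sample_y(t_obs[:, None], n_samples=20).T  # 20 "surrogate data" sets

# Step 2: ordinary point estimation on each draw (trivially parallelizable)
def residuals(theta, target):
    sol = odeint(lotka_volterra, [10.0, 5.0], t_obs, args=tuple(theta))
    return sol[:, 0] - target

estimates = np.array([
    least_squares(residuals, x0=[1.0, 0.1, 0.075, 1.5],
                  bounds=(0, np.inf), args=(draw,)).x
    for draw in surrogate_draws
])
print(estimates.mean(axis=0), estimates.std(axis=0))
```

Because each surrogate draw is handled independently, the second step parallelizes trivially, e.g., by mapping the per-draw fits across worker processes.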

2.
J Agric Biol Environ Stat ; 24(3): 398-425, 2019.
Article in English | MEDLINE | ID: mdl-31496633

ABSTRACT

The Gaussian process is an indispensable tool for spatial data analysts. The onset of the "big data" era, however, has led to the traditional Gaussian process being computationally infeasible for modern spatial data. As such, various alternatives to the full Gaussian process that are more amenable to handling big spatial data have been proposed. These modern methods often exploit low-rank structures and/or multi-core and multi-threaded computing environments to facilitate computation. This study provides, first, an introductory overview of several methods for analyzing large spatial data. Second, this study describes the results of a predictive competition among the described methods as implemented by different groups with strong expertise in the methodology. Specifically, each research group was provided with two training datasets (one simulated and one observed) along with a set of prediction locations. Each group then wrote their own implementation of their method to produce predictions at the given locations, and each implementation was subsequently run on a common computing environment. The methods were then compared in terms of various predictive diagnostics. Supplementary materials regarding implementation details of the methods and code are available for this article online. ELECTRONIC SUPPLEMENTARY MATERIAL: Supplementary materials for this article are available at 10.1007/s13253-018-00348-w.
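As a rough illustration of why low-rank structure helps, the sketch below approximates a spatial Gaussian process with random Fourier features and a Bayesian linear model; it is not any particular competition entry, and the kernel bandwidth, feature count, and synthetic data are assumptions.

```python
# Minimal sketch of one low-rank approximation to a spatial Gaussian process
# (random Fourier features plus a conjugate linear model). Illustrative only.
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
n = 100_000                                     # "big" spatial data
coords = rng.uniform(0, 10, size=(n, 2))        # lon/lat-like locations
y = np.sin(coords[:, 0]) * np.cos(coords[:, 1]) + rng.normal(0, 0.1, n)

# Low-rank feature map: roughly O(n * k) work instead of the O(n^3) full GP
features = RBFSampler(gamma=0.5, n_components=300, random_state=0)
Z = features.fit_transform(coords)

model = BayesianRidge().fit(Z, y)               # linear model on the features
pred_locs = rng.uniform(0, 10, size=(5, 2))
mean, std = model.predict(features.transform(pred_locs), return_std=True)
print(mean, std)
```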

3.
Ann Appl Stat ; 12(1): 27-66, 2018 Mar.
Article in English | MEDLINE | ID: mdl-38623158

ABSTRACT

In 2015 the US federal government sponsored a dengue forecasting competition using historical case data from Iquitos, Peru and San Juan, Puerto Rico. Competitors were evaluated on several aspects of out-of-sample forecasts, including the targets of peak week, peak incidence during that week, and total season incidence across each of several seasons. Our team was one of the winners of that competition, outperforming other teams in multiple targets/locales. In this paper we report on our methodology, a large component of which, surprisingly, ignores the known biology of epidemics at large (for example, relationships between dengue transmission and environmental factors) and instead relies on flexible nonparametric nonlinear Gaussian process (GP) regression fits that "memorize" the trajectories of past seasons and then "match" the dynamics of the unfolding season to past ones in real time. Our phenomenological approach has advantages in situations where disease dynamics are less well understood, where measurements and forecasts of ancillary covariates like precipitation are unavailable, and/or where the strength of association with cases is as yet unknown. In particular, we show that the GP approach generally outperforms a more classical generalized linear (autoregressive) model (GLM) that we developed to utilize abundant covariate information. We illustrate variations of our method(s) on the two benchmark locales alongside a full summary of results submitted by other contest competitors.
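A minimal sketch of the "memorize and match" idea follows: a GP is trained on past seasons' trajectories with lagged incidence as inputs, so forecasts for the unfolding season borrow from past seasons with similar dynamics. This is an illustration only, not the winning entry's code; the lag structure, kernel, and synthetic season shapes are assumptions.

```python
# Rough sketch of GP regression that pools past seasons' trajectories and uses
# lagged incidence to match the current season to similar past dynamics.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def make_design(cases_by_season, lags=3):
    """Rows: (week, lagged cases...) -> next-week cases, pooled over seasons."""
    X, y = [], []
    for cases in cases_by_season:
        for t in range(lags, len(cases)):
            X.append([t] + list(cases[t - lags:t]))
            y.append(cases[t])
    return np.array(X), np.array(y)

# Synthetic "past seasons" (placeholder for historical Iquitos/San Juan data)
rng = np.random.default_rng(1)
weeks = np.arange(52)
past = [40 * np.exp(-0.5 * ((weeks - p) / 6) ** 2) + rng.normal(0, 1, 52)
        for p in (20, 26, 31)]

X, y = make_design(past)
gp = GaussianProcessRegressor(RBF([10.0, 20.0, 20.0, 20.0]) + WhiteKernel(1.0),
                              normalize_y=True).fit(X, y)

# One-week-ahead forecast for a partially observed current season
current = list(40 * np.exp(-0.5 * ((weeks[:25] - 28) / 6) ** 2))
x_new = np.array([[25] + current[-3:]])
mean, sd = gp.predict(x_new, return_std=True)
print(mean, sd)
```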

4.
Philos Trans A Math Phys Eng Sci ; 370(1962): 1250-67, 2012 Mar 13.
Article in English | MEDLINE | ID: mdl-22291232

ABSTRACT

Long-range dependence (LRD) and non-Gaussianity are ubiquitous in many natural systems such as ecosystems, biological systems and climate. However, it is not always appreciated that the two phenomena may occur together in natural systems and that self-similarity in a system can be a superposition of both phenomena. These features, which are common in complex systems, impact the attribution of trends and the occurrence and clustering of extremes. The risk assessment of systems with these properties will lead to different outcomes (e.g. return periods) than the more common assumption of independence of extremes. Two paradigmatic models are discussed that can simultaneously account for LRD and non-Gaussianity: the autoregressive fractional integrated moving average (ARFIMA) model and linear fractional stable motion (LFSM). Statistical properties of estimators for LRD and self-similarity are critically assessed. It is found that the most popular estimators can be biased in the presence of important features of many natural systems, such as trends and multiplicative noise. The LRD and non-Gaussianity of two typical natural time series are also discussed.
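For concreteness, a minimal sketch of simulating an ARFIMA(0, d, 0) series via truncated fractional-differencing weights and estimating the Hurst exponent with the aggregated-variance method is given below; the truncation length, d, and block sizes are illustrative choices, and the estimator shown is one of the popular (and potentially biased) options discussed above.

```python
# Minimal sketch: simulate ARFIMA(0, d, 0) via truncated MA(infinity) weights
# and estimate the Hurst exponent with the aggregated-variance method.
import numpy as np

def arfima_0d0(n, d, trunc=2000, rng=None):
    rng = rng or np.random.default_rng(0)
    psi = np.empty(trunc)
    psi[0] = 1.0
    for k in range(1, trunc):                 # psi_k = psi_{k-1} * (k - 1 + d) / k
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.normal(size=n + trunc)
    return np.convolve(eps, psi, mode="full")[trunc:trunc + n]

def hurst_aggvar(x, block_sizes=(10, 20, 50, 100, 200, 500)):
    logs_m, logs_v = [], []
    for m in block_sizes:
        k = len(x) // m
        means = x[:k * m].reshape(k, m).mean(axis=1)
        logs_m.append(np.log(m))
        logs_v.append(np.log(means.var()))
    slope = np.polyfit(logs_m, logs_v, 1)[0]  # Var(block mean) ~ m^(2H - 2)
    return 1 + slope / 2

x = arfima_0d0(20_000, d=0.3)                 # LRD when 0 < d < 0.5; H = d + 0.5
print(hurst_aggvar(x))                        # should land near 0.8
```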

5.
PLoS One ; 4(6): e5807, 2009 Jun 05.
Article in English | MEDLINE | ID: mdl-19503812

ABSTRACT

BACKGROUND: Epidemiological interventions aim to control the spread of infectious disease through various mechanisms, each carrying a different associated cost. METHODOLOGY: We describe a flexible statistical framework for generating optimal epidemiological interventions that are designed to minimize the total expected cost of an emerging epidemic while simultaneously propagating uncertainty regarding the underlying disease model parameters through to the decision process. The strategies produced through this framework are adaptive: vaccination schedules are iteratively adjusted to reflect the anticipated trajectory of the epidemic given the current population state and updated parameter estimates. CONCLUSIONS: Using simulation studies based on a classic influenza outbreak, we demonstrate the advantages of adaptive interventions over non-adaptive ones, in terms of cost and resource efficiency, and robustness to model misspecification.
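The adaptive loop can be sketched as follows: at each decision epoch, candidate vaccination fractions are scored by simulating the epidemic forward under draws of the uncertain transmission rate, and the cheapest action is applied before re-planning. This is a hedged illustration, not the paper's framework; the SIR model, cost weights, and the stand-in parameter draws (which in the real method would come from updated posterior estimates) are assumptions.

```python
# Hedged sketch of an adaptive intervention loop on a discrete-time stochastic SIR model.
import numpy as np

rng = np.random.default_rng(2)
N, gamma = 1000, 0.25                          # population size, recovery rate
cost_case, cost_dose = 1.0, 0.05               # assumed cost weights
candidates = [0.0, 0.02, 0.05, 0.10]           # fraction of susceptibles vaccinated per step

def simulate(S, I, beta, vacc, steps=30):
    """Run the SIR chain forward with a constant per-step vaccination fraction."""
    cases = doses = 0
    for _ in range(steps):
        new_inf = rng.binomial(S, 1 - np.exp(-beta * I / N))
        new_rec = rng.binomial(I, 1 - np.exp(-gamma))
        new_vac = rng.binomial(S - new_inf, vacc)
        S, I = S - new_inf - new_vac, I + new_inf - new_rec
        cases, doses = cases + new_inf, doses + new_vac
    return cases, doses, S, I

S, I = 990, 10
for epoch in range(10):                        # decision epochs; re-plan every epoch
    beta_draws = rng.gamma(4.0, 0.1, size=50)  # stand-in for updated posterior draws of beta
    expected_cost = []
    for v in candidates:
        costs = []
        for b in beta_draws:                   # propagate parameter uncertainty into the decision
            c, d, _, _ = simulate(S, I, b, v)
            costs.append(cost_case * c + cost_dose * d)
        expected_cost.append(np.mean(costs))
    best = candidates[int(np.argmin(expected_cost))]
    _, _, S, I = simulate(S, I, 0.4, best, steps=1)   # apply the chosen action, then re-plan
print(S, I)
```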


Subject(s)
Disease Outbreaks, Epidemiology/instrumentation, Epidemiology/standards, Influenza, Human/epidemiology, Influenza, Human/therapy, Vaccination/methods, Communicable Diseases, Computer Simulation, Decision Support Techniques, Epidemiology/organization & administration, Humans, Models, Statistical, Monte Carlo Method, Public Health/methods