Results 1 - 20 of 60
1.
Micromachines (Basel); 15(5), 2024 May 10.
Article in English | MEDLINE | ID: mdl-38793212

ABSTRACT

Layered double hydroxides (LDHs), also known as hydrotalcite-like compounds, are anionic clays with a lamellar structure that have been used extensively over the last two decades as electrode modifiers for the design of electrochemical sensors. These materials can be divided into LDHs that contain redox-active centers and those that do not. In the former case, a transition metal cation undergoing a reversible redox reaction within a suitable potential window is present in the layers; it can therefore act as an electron-transfer mediator and electrocatalyze the oxidation of an analyte for which the required overpotential would otherwise be too high. In the latter case, a negatively charged species acting as a redox mediator can be introduced into the interlayer spaces by exchanging the anion remaining from the synthesis, and, again, the material can display electrocatalytic properties. Alternatively, owing to the large specific surface area of LDHs, electroactive molecules can be adsorbed onto their surface. In this review, the most significant electroanalytical applications of LDHs as electrode modifiers for the development of voltammetric sensors are presented, grouped according to these two types of materials.

2.
Eur J Epidemiol; 2024 May 10.
Article in English | MEDLINE | ID: mdl-38724763

ABSTRACT

Investigators often believe that relative effect measures conditional on covariates, such as risk ratios and mean ratios, are "transportable" across populations. Here, we examine the identification of causal effects in a target population using an assumption that conditional relative effect measures are transportable from a trial to the target population. We show that transportability for relative effect measures is largely incompatible with transportability for difference effect measures, unless the treatment has no effect on average or one is willing to make even stronger transportability assumptions that imply the transportability of both relative and difference effect measures. We then describe how marginal (population-averaged) causal estimands in a target population can be identified under the assumption of transportability of relative effect measures, when we are interested in the effectiveness of a new experimental treatment in a target population where the only treatment in use is the control treatment evaluated in the trial. We extend these results to consider cases where the control treatment evaluated in the trial is only one of the treatments in use in the target population, under an additional partial exchangeability assumption in the target population (i.e., an assumption of no unmeasured confounding in the target population with respect to potential outcomes under the control treatment in the trial). We also develop identification results that allow for the covariates needed for transportability of relative effect measures to be only a small subset of the covariates needed to control confounding in the target population. Last, we propose estimators that can be easily implemented in standard statistical software and illustrate their use with data from a comprehensive cohort study of stable ischemic heart disease.
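
A short algebraic sketch of the incompatibility claim, in notation of my own choosing (the abstract does not fix notation): write mu_a(x, s) = E[Y^a | X = x, S = s], with S = 1 indicating the trial and S = 0 the target population. Transportability of the conditional relative and difference measures means, respectively,

    \frac{\mu_1(x,0)}{\mu_0(x,0)} = \frac{\mu_1(x,1)}{\mu_0(x,1)} = r(x)
    \quad\text{and}\quad
    \mu_1(x,0) - \mu_0(x,0) = \mu_1(x,1) - \mu_0(x,1).

If both hold, substituting mu_1 = r(x) mu_0 into the difference condition gives (r(x) - 1) mu_0(x, 0) = (r(x) - 1) mu_0(x, 1), so either r(x) = 1 (no conditional effect) or mu_0(x, 0) = mu_0(x, 1), a stronger condition under which both measures are trivially transportable; this is consistent with the incompatibility described above.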

3.
Ann Appl Stat; 18(1): 858-881, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38784669

ABSTRACT

In scientific studies involving analyses of multivariate data, basic but important questions often arise for the researcher: Is the sample exchangeable, meaning that the joint distribution of the sample is invariant to the ordering of the units? Are the features independent of one another, or perhaps the features can be grouped so that the groups are mutually independent? In statistical genomics, these considerations are fundamental to downstream tasks such as demographic inference and the construction of polygenic risk scores. We propose a non-parametric approach, which we call the V test, to address these two questions, namely, a test of sample exchangeability given dependency structure of features, and a test of feature independence given sample exchangeability. Our test is conceptually simple, yet fast and flexible. It controls the Type I error across realistic scenarios, and handles data of arbitrary dimensions by leveraging large-sample asymptotics. Through extensive simulations and a comparison against unsupervised tests of stratification based on random matrix theory, we find that our test compares favorably in various scenarios of interest. We apply the test to data from the 1000 Genomes Project, demonstrating how it can be employed to assess exchangeability of the genetic sample, or find optimal linkage disequilibrium (LD) splits for downstream analysis. For exchangeability assessment, we find that removing rare variants can substantially increase the p-value of the test statistic. For optimal LD splitting, the V test reports different optimal splits than previous approaches not relying on hypothesis testing. Software for our methods is available in R (CRAN: flintyR) and Python (PyPI: flintyPy).
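
This is not the authors' V test (which relies on large-sample asymptotics and is implemented in flintyR and flintyPy); the following is only a rough resampling-flavored sketch of the same null hypothesis, namely exchangeable samples with independent features, using the dispersion of pairwise distances as a statistic and column-wise shuffling as the null resampling scheme. All names are mine.

    # Rough illustration only: under the null of exchangeable rows (samples) with
    # independent columns (features), shuffling each column independently preserves
    # the distribution, while hidden sample structure inflates the spread of
    # pairwise distances.
    import numpy as np
    from scipy.spatial.distance import pdist

    def exchangeability_resampling_pvalue(X, n_resamples=500, seed=0):
        rng = np.random.default_rng(seed)
        obs = np.var(pdist(X, metric="sqeuclidean"))
        null = np.empty(n_resamples)
        Xp = np.empty_like(X)
        for b in range(n_resamples):
            for j in range(X.shape[1]):
                Xp[:, j] = rng.permutation(X[:, j])
            null[b] = np.var(pdist(Xp, metric="sqeuclidean"))
        return (1 + np.sum(null >= obs)) / (n_resamples + 1)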

4.
Diagnostics (Basel); 14(8), 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38667445

ABSTRACT

Glucose meters provide rapid blood glucose readings for evidence-based diagnosis, monitoring, and treatment of diabetes mellitus. We aimed to evaluate the commutability of processed blood materials (PBMs) and their use in the performance evaluation of glucose meters. Two PBMs obtained by the fixed-cell method were analyzed for homogeneity, stability, and commutability. Ten method pairs, each comparing mass spectrometry with a glucose meter, were categorized as compatible (mean paired difference ≤ 5%) or incompatible (mean paired difference > 5%). The performance of glucose meter 1 (n = 767) and glucose meter 2 (n = 266) was assessed. The glucose in the PBMs remained homogeneous and stable for at least 180 days. Six out of ten pairs had commutable PBMs, and commutability of the PBMs was observed with both compatible and incompatible glucose results. Target glucose values from mass spectrometry differed significantly (p ≤ 0.05) from consensus values for one group of glucose meters. When commutable PBMs were used, glucose meter 1 performed better than glucose meter 2; for glucose meter 1, the percentage of satisfactory results was consistent whether target glucose values from mass spectrometry or consensus values were used, whereas for glucose meter 2 it was not. PBMs prepared by a fixed-cell method could be mass produced with acceptable homogeneity and stability. Commutability testing of PBMs is required before they are used in the performance evaluation of glucose meters, because the commutability of glucose in PBMs obtained by a fixed-cell method was variable and depended on the individual glucose meter.
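
As a small illustration of the compatibility rule quoted above (a sketch under the assumption that the mean paired difference is taken as an absolute percent difference relative to mass spectrometry, which the abstract does not spell out; the function name is mine):

    import numpy as np

    def classify_meter_pair(meter_mgdl, ms_mgdl, threshold_pct=5.0):
        # Mean paired percent difference of the meter against mass spectrometry.
        meter, ms = np.asarray(meter_mgdl, float), np.asarray(ms_mgdl, float)
        mean_pct_diff = np.mean(np.abs(meter - ms) / ms * 100)
        return "compatible" if mean_pct_diff <= threshold_pct else "incompatible"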

5.
Adv Sci (Weinh); 11(20): e2306035, 2024 May.
Article in English | MEDLINE | ID: mdl-38501901

ABSTRACT

Layered double hydroxides (LDHs) have been widely studied for biomedical applications owing to their excellent properties, such as good biocompatibility, degradability, interlayer ion exchangeability, high loading capacity, pH-responsive release, and large specific surface area. Furthermore, the flexibility of their structural composition and the ease of surface modification make it possible to develop specifically functionalized LDHs that meet the needs of different applications. This review comprehensively discusses recent advances in LDHs for biomedical applications, including LDH-based drug delivery systems, LDHs for cancer diagnosis and therapy, tissue engineering, coatings, functional membranes, and biosensors. These research fields demonstrate the great potential of LDHs in biomedicine; at the same time, it must be recognized that their actual clinical translation remains very limited. Therefore, the current limitations of LDH research are discussed in light of the few examples of actual clinical translation and the requirements for clinical translation of biomaterials. Finally, an outlook on future research related to LDHs is provided.


Subject(s)
Biocompatible Materials, Drug Delivery Systems, Hydroxides, Tissue Engineering, Hydroxides/chemistry, Humans, Drug Delivery Systems/methods, Biocompatible Materials/chemistry, Biocompatible Materials/therapeutic use, Tissue Engineering/methods, Biosensing Techniques/methods, Animals
6.
Theor Popul Biol; 156: 103-116, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38367871

ABSTRACT

A multi-type neutral Cannings population model with migration and fixed subpopulation sizes is analyzed. Under appropriate conditions, as all subpopulation sizes tend to infinity, the ancestral process, properly time-scaled, converges to a multi-type coalescent that possesses the exchangeability and consistency properties. The proof draws on coalescent theory for single-type Cannings models and on decompositions of the transition probabilities into parts concerning reproduction and migration, respectively. A second part of the paper deals with a different but closely related multi-type Cannings model with mutation and fixed total population size but stochastically varying subpopulation sizes. This model is analyzed forward and backward in time, with an emphasis on its behavior as the total population size tends to infinity. Forward in time, multi-type branching processes arise in the limit of large population size. The backward structure of this model and related open problems are briefly discussed.


Subject(s)
Population Genetics, Genetic Models, Reproduction/genetics, Population Density, Mutation
7.
Front Pharmacol; 14: 1266322, 2023.
Article in English | MEDLINE | ID: mdl-38074153

ABSTRACT

Introduction: In recent years, there has been a growing trend among regulatory agencies to consider the use of historical controls in clinical trials as a means of improving the efficiency of trial design. In this paper, to enhance the statistical operating characteristics of Phase I dose-finding trials, we propose a novel model-assisted design named "MEM-Keyboard". Methods: The proposed design is based on multisource exchangeability models (MEMs), which allow dynamic borrowing of information from multiple supplemental data sources, including historical trial data, to inform the dose-escalation process. Furthermore, because delayed toxicity occurs frequently with novel anti-cancer drugs, we extended the proposed method to handle late-onset toxicity while incorporating historical data. This extended method, referred to as "MEM-TITE-Keyboard", aims to improve the efficiency of early clinical trials. Results: Simulation studies indicate that the proposed methods can improve the probability of correctly selecting the maximum tolerated dose (MTD) with an acceptable level of risk, compared with designs that do not account for information borrowing and late-onset toxicity. Discussion: The MEM-Keyboard and MEM-TITE-Keyboard are easy to implement in practice and provide useful tools for identifying the MTD and accelerating drug development.
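
For readers unfamiliar with the underlying Keyboard design, the sketch below shows a single Keyboard-style dose decision from a Beta-binomial posterior; it omits the MEM borrowing layer and the TITE extension that are the paper's actual contributions, and the Beta(1, 1) prior, key width, and function names are illustrative assumptions of mine.

    from scipy.stats import beta

    def keyboard_decision(n_patients, n_dlt, target=0.30, key_width=0.10, prior=(1.0, 1.0)):
        """Return 'escalate', 'stay', or 'de-escalate' for the current dose."""
        a, b = prior
        post = beta(a + n_dlt, b + n_patients - n_dlt)   # posterior for the DLT probability
        # Contiguous equal-width keys covering (0, 1), with the target key
        # centered at the target DLT rate.
        lo, hi = target - key_width / 2, target + key_width / 2
        keys = [(lo, hi)]
        left = lo
        while left > 0:
            keys.insert(0, (max(0.0, left - key_width), left))
            left -= key_width
        right = hi
        while right < 1:
            keys.append((right, min(1.0, right + key_width)))
            right += key_width
        probs = [post.cdf(u) - post.cdf(l) for l, u in keys]
        strongest = probs.index(max(probs))
        target_idx = keys.index((lo, hi))
        if strongest < target_idx:
            return "escalate"      # strongest key lies below the target interval
        if strongest > target_idx:
            return "de-escalate"   # strongest key lies above the target interval
        return "stay"

    # Example: 2 dose-limiting toxicities among 9 patients at the current dose.
    print(keyboard_decision(9, 2))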

8.
Genet Epidemiol; 47(8): 637-641, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37947279

ABSTRACT

The comparison of biological systems through the analysis of molecular changes under different conditions has played a crucial role in the progress of modern biological science. In particular, differential correlation analysis (DCA) has been employed to determine whether relationships between genomic features differ across conditions or outcomes. Because ascertaining the null distribution of test statistics that capture variation in correlation is challenging, several DCA methods use permutation, which can relax parametric (e.g., normality) assumptions. However, permutation is often problematic for DCA because it violates the assumption that samples are exchangeable under the null. Here, we examine the limitations of permutation-based DCA and investigate instances where it exhibits poor performance. Experimental results show that permutation-based DCA often fails to control the type I error under the null hypothesis of equal correlation structures.
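
To make the critique concrete, here is a toy version of permutation-based DCA for a single pair of features (my own minimal sketch, not a published implementation): condition labels are shuffled to build the null distribution of the correlation difference, which implicitly treats samples from the two conditions as exchangeable under the null; if, say, the marginal variances differ between conditions, that assumption fails even when the correlations are equal.

    import numpy as np

    def perm_dca_pvalue(x1, y1, x2, y2, n_perm=2000, seed=0):
        """Two-sided permutation p-value for H0: cor(x, y) is equal in both conditions."""
        rng = np.random.default_rng(seed)
        obs = np.corrcoef(x1, y1)[0, 1] - np.corrcoef(x2, y2)[0, 1]
        pooled = np.column_stack([np.concatenate([x1, x2]), np.concatenate([y1, y2])])
        n1 = len(x1)
        null = np.empty(n_perm)
        for b in range(n_perm):
            idx = rng.permutation(len(pooled))        # shuffle condition labels
            g1, g2 = pooled[idx[:n1]], pooled[idx[n1:]]
            null[b] = (np.corrcoef(g1[:, 0], g1[:, 1])[0, 1]
                       - np.corrcoef(g2[:, 0], g2[:, 1])[0, 1])
        return (1 + np.sum(np.abs(null) >= abs(obs))) / (n_perm + 1)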


Subject(s)
Genomics, Humans, Statistics as Topic
9.
J Math Biol; 87(2): 26, 2023 Jul 10.
Article in English | MEDLINE | ID: mdl-37428265

ABSTRACT

Data taking values in discrete sample spaces are central to modern biological research. "Omics" experiments based on high-throughput sequencing produce millions of symbolic outcomes in the form of reads (i.e., DNA sequences of a few dozen to a few hundred nucleotides). Unfortunately, these intrinsically non-numerical datasets often deviate dramatically from the natural assumptions a practitioner might make, and the possible sources of this deviation are usually poorly characterized. This contrasts with numerical datasets, where Gaussian-type errors are often well justified. To overcome this hurdle, we introduce the notion of latent weight, which measures the largest expected fraction of samples from a probabilistic source that conform to a model in a class of idealized models. We examine various properties of latent weights and specialize them to the class of exchangeable probability distributions. As a proof of concept, we analyze DNA methylation data from the 22 human autosome pairs. Contrary to what is usually assumed in the literature, we provide strong evidence that highly specific methylation patterns are overrepresented at some genomic locations when latent weights are taken into account.


Subject(s)
Genome, Genomics, Humans, Probability, High-Throughput Nucleotide Sequencing
10.
Am Stat; 77(1): 35-40, 2023.
Article in English | MEDLINE | ID: mdl-37334071

ABSTRACT

In the paired data setting, the sign test is often described in statistical textbooks as a test for comparing the medians of two marginal distributions. Employing the sign test in this fashion carries the implicit assumption that the median of the differences is equivalent to the difference of the medians. We demonstrate, however, that when the bivariate distribution of the paired data is asymmetric, there are often scenarios where the median of the differences is not equal to the difference of the medians. Further, we show that these scenarios lead to a false interpretation of the sign test for its intended use in the paired data setting. We illustrate this false-interpretation concept via theory, a simulation study, and a real-world example based on breast cancer RNA sequencing data obtained from The Cancer Genome Atlas (TCGA).
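
A tiny numerical illustration of the gap the authors describe (my own example, not taken from the paper): with three equally likely, asymmetrically placed (X, Y) pairs, the median of the differences and the difference of the medians disagree.

    import numpy as np

    pairs = np.array([[0.0, 1.0],
                      [1.0, 3.0],
                      [4.0, 5.0]])              # three equally likely (X, Y) pairs
    x, y = pairs[:, 0], pairs[:, 1]
    print(np.median(x) - np.median(y))          # difference of medians: 1 - 3 = -2.0
    print(np.median(x - y))                     # median of differences: median(-1, -2, -1) = -1.0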

11.
Philos Trans A Math Phys Eng Sci; 381(2247): 20220148, 2023 May 15.
Article in English | MEDLINE | ID: mdl-36970824

ABSTRACT

The paper discusses shrinkage priors which impose increasing shrinkage in a sequence of parameters. We review the cumulative shrinkage process (CUSP) prior of Legramanti et al. (Legramanti et al. 2020 Biometrika 107, 745-752. (doi:10.1093/biomet/asaa008)), which is a spike-and-slab shrinkage prior where the spike probability is stochastically increasing and constructed from the stick-breaking representation of a Dirichlet process prior. As a first contribution, this CUSP prior is extended by involving arbitrary stick-breaking representations arising from beta distributions. As a second contribution, we prove that exchangeable spike-and-slab priors, which are popular and widely used in sparse Bayesian factor analysis, can be represented as a finite generalized CUSP prior, which is easily obtained from the decreasing order statistics of the slab probabilities. Hence, exchangeable spike-and-slab shrinkage priors imply increasing shrinkage as the column index in the loading matrix increases, without imposing explicit order constraints on the slab probabilities. An application to sparse Bayesian factor analysis illustrates the usefulness of the findings of this paper. A new exchangeable spike-and-slab shrinkage prior based on the triple gamma prior of Cadonna et al. (Cadonna et al. 2020 Econometrics 8, 20. (doi:10.3390/econometrics8020020)) is introduced and shown to be helpful for estimating the unknown number of factors in a simulation study. This article is part of the theme issue 'Bayesian inference: challenges, perspectives, and prospects'.
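
For orientation, the CUSP construction being reviewed can be sketched as follows (notation adapted by me; see the cited Legramanti et al. paper for the precise formulation): the h-th parameter receives a spike-and-slab prior whose spike probability accumulates stick-breaking weights,

    \theta_h \mid \pi_h \sim (1 - \pi_h)\, P_{\text{slab}} + \pi_h\, P_{\text{spike}},
    \qquad
    \pi_h = \sum_{l=1}^{h} \omega_l,
    \qquad
    \omega_l = \nu_l \prod_{m < l} (1 - \nu_m),
    \qquad
    \nu_l \sim \text{Beta}(1, \alpha),

so that pi_h is nondecreasing in h and shrinkage increases with the index; the paper's first contribution replaces the Beta(1, alpha) sticks with arbitrary beta distributions.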

12.
J Appl Stat; 50(4): 984-1016, 2023.
Article in English | MEDLINE | ID: mdl-36925902

ABSTRACT

In this paper, a new dependence model is introduced. The model is motivated by the structure of series-parallel systems, consisting of two series-parallel systems, each with a random number of parallel sub-systems whose fixed components are connected in series. The dependence properties of the proposed model are studied. Two estimation methods, namely the method of moments and the maximum likelihood method, are applied to estimate the parameters of the component lifetime distributions from observed system lifetime data. A Monte Carlo simulation study is used to evaluate the performance of the estimators, and two real data sets are used to illustrate the proposed method. The results are useful for researchers and practitioners interested in analyzing bivariate data related to extreme events.

13.
Biology (Basel); 12(2), 2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36829559

ABSTRACT

The factors that determine the relative rates of amino acid substitution during protein evolution are complex and known to vary among taxa. We estimated relative exchangeabilities for pairs of amino acids from clades spread across the tree of life and assessed the historical signal in the distances among these clade-specific models. We separately trained these models on collections of arbitrarily selected protein alignments and on ribosomal protein alignments. In both cases, we found a clear separation between the models trained using multiple sequence alignments from bacterial clades and the models trained on archaeal and eukaryotic data. We assessed the predictive power of our novel clade-specific models of sequence evolution by asking whether fit to the models could be used to identify the source of multiple sequence alignments. Model fit was generally able to correctly classify protein alignments at the level of domain (bacterial versus archaeal), but the accuracy of classification at finer scales was much lower. The only exceptions to this were the relatively high classification accuracy for two archaeal lineages: Halobacteriaceae and Thermoprotei. Genomic GC content had a modest impact on relative exchangeabilities despite having a large impact on amino acid frequencies. Relative exchangeabilities involving aromatic residues exhibited the largest differences among models. There were a small number of exchangeabilities that exhibited large differences in comparisons among major clades and between generalized models and ribosomal protein models. Taken as a whole, these results reveal that a small number of relative exchangeabilities are responsible for much of the structure of the "model space" for protein sequence evolution. The clade-specific models we generated may be useful tools for protein phylogenetics, and the structure of evolutionary model space that they revealed has implications for phylogenomic inference across the tree of life.

14.
J Biopharm Stat; 33(6): 708-725, 2023 Nov 02.
Article in English | MEDLINE | ID: mdl-36662162

ABSTRACT

Among the many efforts to facilitate timely access to safe and effective medicines for children, increased attention has been given to extrapolation. Loosely, extrapolation is the leveraging of conclusions or available data from adults or older age groups to draw conclusions for the target pediatric population, when it can be assumed that the course of the disease and the expected response to a medicinal product would be sufficiently similar in the pediatric and reference populations. Extrapolation can then be characterized as a statistical mapping of information from the reference population (adults or older age groups) to the target pediatric population. This mapping of information can be carried out through a composite likelihood approach in which the likelihood of the reference population is weighted by exponentiation, with the exponent reflecting the value of the mapped information in the target population. The weight is bounded above and below, recognizing that similarity (of the disease and the expected response) remains valid despite variability in response between the cohorts. Maximum likelihood approaches are then used to estimate the parameters, and asymptotic theory is used to derive the distributions of the estimates for use in inference. Hence, the estimation of effects in the target population borrows information from the reference population. The manuscript also discusses how this method relates to the Bayesian statistical paradigm.
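
In symbols, the weighting described above can be sketched as a composite likelihood of the form (notation mine; the abstract does not state the exact bounds),

    L(\theta) \;\propto\; L_{\text{ped}}(\theta)\,\big[L_{\text{ref}}(\theta)\big]^{w},
    \qquad 0 < w_{\min} \le w \le w_{\max} \le 1,

where L_ped is the likelihood from the target pediatric data, L_ref the likelihood from the reference population, and the exponent w encodes the value assigned to the mapped information; maximum likelihood estimation and the usual asymptotics are then applied to this weighted likelihood.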


Subject(s)
Likelihood Functions, Adult, Humans, Child, Aged, Bayes Theorem
15.
J Epidemiol; 33(8): 385-389, 2023 Aug 05.
Article in English | MEDLINE | ID: mdl-35067497

ABSTRACT

BACKGROUND: The counterfactual definition of confounding is often explained in the context of exchangeability between the exposed and unexposed groups. One recent approach is to examine whether the measures of association (eg, associational risk difference) are exchangeable when exposure status is flipped in the population of interest. We discuss the meaning and utility of this approach, showing their relationships with the concept of confounding in the counterfactual framework. METHODS: Three hypothetical cohort studies are used, in which the target population is the total population. After providing an overview of the notions of confounding in distribution and in measure, we discuss the approach from the perspective of exchangeability of measures of association (eg, factual associational risk difference vs counterfactual associational risk difference). RESULTS: In general, if the measures of association are non-exchangeable when exposure status is flipped, confounding in distribution is always present, although confounding in measure may or may not be present. Even if the measures of association are exchangeable when exposure status is flipped, there could be confounding both in distribution and in measure. When we use risk difference or risk ratio as a measure of interest and the exposure prevalence in the population is 0.5, testing the exchangeability of measures of association is equivalent to testing the absence of confounding in the corresponding measures. CONCLUSION: The approach based on exchangeability of measures of association essentially does not provide a definition of confounding in the counterfactual framework. Subtly differing notions of confounding should be distinguished carefully.
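
Using standard counterfactual notation (my notation; the abstract does not give formulas), with exposure A, outcome Y, potential outcomes Y^a, and p = Pr(A = 1), the two quantities being compared when exposure status is flipped can be written as

    \mathrm{RD}_{\text{factual}} = \Pr(Y = 1 \mid A = 1) - \Pr(Y = 1 \mid A = 0),
    \qquad
    \mathrm{RD}_{\text{flipped}} = \Pr(Y^{a=1} = 1 \mid A = 0) - \Pr(Y^{a=0} = 1 \mid A = 1).

When p = 0.5, consistency gives Pr(Y^{a=1} = 1) - Pr(Y^{a=0} = 1) = (RD_factual + RD_flipped) / 2, so equality of the two associational risk differences is then equivalent to the absence of confounding in the risk-difference measure, matching the special case noted in the results.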


Subject(s)
Causality, Humans, Epidemiologic Confounding Factors, Japan
16.
Res Synth Methods; 13(4): 520-532, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35485631

ABSTRACT

Bayesian methods seem a natural choice for combining sources of evidence in meta-analyses. In practice, however, their sensitivity to the choice of prior distribution is much less attractive, particularly for parameters describing heterogeneity. A recent non-Bayesian approach to fixed-effects meta-analysis provides novel ways to think about estimating an average effect and the variability around this average. In this paper, we describe the Bayesian analogs of those results, showing how Bayesian inference on fixed-effects parameters (properties of the study populations at hand) is more stable and less sensitive than standard random-effects approaches. Beyond these practical insights, our development also clarifies the different ways in which prior beliefs such as homogeneity and correlation can be reflected in prior distributions. We also carefully distinguish the use of random-effects models to reflect sampling uncertainty from their use to reflect a priori exchangeability of study-specific effects, and show how subsequent inference depends on which motivation is used. With some important theoretical results, illustrated in an applied meta-analysis example, we show the robustness of the fixed-effects approach even for small numbers of studies.


Subject(s)
Bayes Theorem, Meta-Analysis as Topic, Uncertainty
17.
Educ Psychol Meas; 82(3): 409-443, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35444336

ABSTRACT

Multilevel structural equation modeling (MSEM) allows researchers to model latent factor structures at multiple levels simultaneously by decomposing within- and between-group variation. Yet the extent to which the sampling ratio (i.e., proportion of cases sampled from each group) influences the results of MSEM models remains unknown. This article explores how variation in the sampling ratio in MSEM affects the measurement of Level 2 (L2) latent constructs. Specifically, we investigated whether the sampling ratio is related to bias and variability in aggregated L2 construct measurement and estimation in the context of doubly latent MSEM models utilizing a two-step Monte Carlo simulation study. Findings suggest that while lower sampling ratios were related to increased bias, standard errors, and root mean square error, the overall size of these errors was negligible, making the doubly latent model an appealing choice for researchers. An applied example using empirical survey data is further provided to illustrate the application and interpretation of the model. We conclude by considering the implications of various sampling ratios on the design of MSEM studies, with a particular focus on educational research.

18.
Biom J; 64(3): 557-576, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35285064

ABSTRACT

In this article, we address the problem of simultaneously testing hypotheses about the mean and covariance matrix of repeated measures data when both the mean vector and the covariance matrix are patterned. In particular, tests about the mean vector under block circular and doubly exchangeable covariance structures are considered. The null distributions of the corresponding likelihood ratio test statistics are established, and expressions for the exact or near-exact probability density and cumulative distribution functions are obtained. The application of the results is illustrated by both a simulation study and a real-life data example.
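
For context, one common parameterization of a doubly exchangeable covariance structure for u blocks of v sub-blocks of p repeated measures is sketched below (my notation; the paper's block circular case is not shown):

    \Gamma = I_u \otimes I_v \otimes (\Sigma_0 - \Sigma_1)
           + I_u \otimes J_v \otimes (\Sigma_1 - \Sigma_2)
           + J_u \otimes J_v \otimes \Sigma_2,

so that two measurements share covariance Sigma_0 within the same sub-block, Sigma_1 across sub-blocks of the same block, and Sigma_2 across blocks; here J denotes an all-ones matrix.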


Subject(s)
Statistical Models, Research Design, Computer Simulation, Likelihood Functions
20.
Pharm Stat; 21(2): 327-344, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34585501

ABSTRACT

In many orphan diseases and pediatric indications, randomized controlled trials may be infeasible because of their size, duration, and cost. Leveraging information on the control through a prior can potentially reduce the sample size. However, unless an objective prior is used to impose complete ignorance about the parameter being estimated, this results in biased estimates and inflated type-I error. Hence, it is essential to assess both the confirmatory and the supplementary knowledge available during the construction of the prior, to avoid "cherry-picking" advantageous information. For this purpose, propensity score methods are employed to minimize selection bias by weighting supplemental control subjects according to how similar their pretreatment characteristics are to those of the subjects in the current trial. This similarity can be operationalized through a proposed measure of overlap in propensity-score distributions. In this paper, we consider a single experimental arm in the current trial, with the control arm borrowed entirely from the supplemental data. Simulation experiments show that the proposed method reduces prior-data conflict and improves the precision of the estimated average treatment effect.
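
A rough sketch of the propensity-score weighting idea described above (my own code; the paper's specific overlap measure and prior construction are not reproduced, and the range-based overlap summary below is a stand-in assumption):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def weight_supplemental_controls(X_current, X_supplemental):
        """Weight supplemental controls toward the current trial's covariate profile."""
        X = np.vstack([X_current, X_supplemental])
        in_trial = np.concatenate([np.ones(len(X_current)), np.zeros(len(X_supplemental))])
        ps = LogisticRegression(max_iter=1000).fit(X, in_trial).predict_proba(X)[:, 1]
        ps_cur, ps_sup = ps[in_trial == 1], ps[in_trial == 0]
        weights = ps_sup / (1.0 - ps_sup)       # odds-style weights for the supplemental controls
        # Crude overlap summary: share of supplemental scores falling inside the
        # current trial's propensity-score range.
        overlap = np.mean((ps_sup >= ps_cur.min()) & (ps_sup <= ps_cur.max()))
        return weights, overlap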


Subject(s)
Research Design, Bayes Theorem, Child, Computer Simulation, Humans, Sample Size, Selection Bias