Results 1 - 20 of 53
1.
Anal Biochem ; 694: 115602, 2024 Nov.
Article in English | MEDLINE | ID: mdl-38977233

ABSTRACT

Modern isothermal titration calorimetry instruments give great precision, but for comparable accuracy they require chemical calibration. For the heat factor, one recommended process is HCl into the weak base TRIS. In studying this reaction with a VP-ITC and two Nano-ITCs, we have encountered some problems, most importantly a titrant volume shortfall Δv ≈ 0.3 µL, which we attribute to diffusive loss of HCl in the syringe tip. This interpretation is supported by a mathematical treatment of the diffusion problem. The effect was discovered through a variable-v protocol, which thus should be used to properly allow for it in any reaction that similarly approaches completion. We also find that the effects from carbonate contamination and from OH- from weak base hydrolysis can be more significant than previously thought. To facilitate proper weighting in the least-squares fitting of data, we have estimated data variance functions from replicate data. All three instruments have low-signal precision of σ ≈ 1 µJ; titrant volume uncertainty is a factor of ∼2 larger for the Nano-ITCs than for the VP-ITC. The final heat factors remain uncertain by more than the ∼1 % precision of the instruments and are unduly sensitive to the HCl concentration.
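
A rough numerical sketch (with made-up numbers and an assumed linear heat model) of how a variable-v protocol can expose a constant volume shortfall: if each injection reacts essentially to completion, the heat per injection should follow q = h·(v − Δv), so a straight-line fit of q versus v yields Δv from the intercept.

# Sketch: recovering a constant titrant-volume shortfall from a variable-
# injection-volume (variable-v) protocol.  Assumes the reaction goes to
# completion, so q_i = h*(v_i - delta_v) with h the heat per unit volume.
# All numbers are synthetic and illustrative.
import numpy as np

v = np.array([2.0, 4.0, 6.0, 8.0, 10.0])            # injection volumes, µL
q = np.array([80.1, 173.6, 268.3, 361.5, 456.2])    # measured heats, µJ

slope, intercept = np.polyfit(v, q, 1)              # q = intercept + slope*v
delta_v = -intercept / slope
print(f"heat per µL ≈ {slope:.1f} µJ/µL")
print(f"volume shortfall Δv ≈ {delta_v:.2f} µL")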


Subject(s)
Calorimetry, Calorimetry/methods, Calibration, Hydrochloric Acid/chemistry
2.
Anal Chem ; 94(46): 15997-16005, 2022 11 22.
Article in English | MEDLINE | ID: mdl-36343110

ABSTRACT

ANSWER: No. Most goodness-of-fit (GOF) tests attempt to discern a preferred weighting using either absolute or relative errors in the back-calculated calibration x values. However, the former are predisposed to select constant weighting and the latter 1/x² or 1/y² weighting, no matter what the true weighting should be. Here, I use Monte Carlo simulations to quantify the flaws in GOF tests and show why they falsely prefer their predisposition weighting. The weighting problem is solved properly through variance function (VF) estimation from replicate data, conveniently separating this from the problem of selecting a response function (RF). Any weighting other than inverse-variance must give loss of precision in the RF parameters and in the estimates of unknowns x0. In particular, the widely used 1/x² weighting, if wrong, not only sacrifices precision but even worse, appears to give better precision at small x, leading to falsely optimistic estimates of detection and quantification limits. Realistic VFs typically become constant in the low-x, low-y limit. Thus, even when 1/x² weighting is correct at large signal, the neglect of the constant variance component at small signal again gives too-small detection and quantification limits. VF estimation has been disparaged as too demanding of data. Why this is not true is demonstrated with Monte Carlo simulations that show only a few percent increase in calibration parameter uncertainties when the VF is estimated from just three replicates at each of six calibration x values. This point is further demonstrated using examples from the recent literature.
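
As an illustration of the workflow advocated here, the sketch below estimates a variance function from three replicates at each of six calibration x values and then uses inverse-variance weights in the calibration fit. The assumed VF form (constant plus a term proportional to y²) and all numbers are illustrative, not the paper's prescription.

# Sketch: variance-function (VF) estimation from replicates, followed by a
# weighted straight-line calibration.  Assumed VF: var(y) = c0 + c1*y**2.
import numpy as np
from scipy.optimize import curve_fit

def vf(y, c0, c1):
    return c0 + c1 * y**2

rng = np.random.default_rng(1)
x = np.array([0.5, 1, 2, 5, 10, 20], float)
true_y = 3.0 * x + 0.5
sigma = np.sqrt(0.04 + (0.02 * true_y)**2)
reps = np.array([true_y + rng.normal(0, sigma) for _ in range(3)])  # 3 replicates

ybar = reps.mean(axis=0)
s2 = reps.var(axis=0, ddof=1)                      # replicate variances

(c0, c1), _ = curve_fit(vf, ybar, s2, p0=[0.01, 1e-4])
var_pred = np.clip(vf(ybar, c0, c1), 1e-6, None)   # guard against tiny/negative VF

# Weighted calibration: polyfit takes weights applied to unsquared residuals,
# i.e. 1/sigma, so pass sqrt of the inverse variance.
coef = np.polyfit(x, ybar, 1, w=np.sqrt(1.0 / var_pred))
print("VF coefficients:", c0, c1)
print("calibration slope, intercept:", coef)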


Subject(s)
Calibration, Least-Squares Analysis, Monte Carlo Method, Uncertainty
3.
Anal Biochem ; 642: 114481, 2022 04 01.
Article in English | MEDLINE | ID: mdl-34843699

ABSTRACT

By conducting binding experiments at a range of temperatures T using isothermal titration calorimetry (ITC), one can obtain two estimates of the binding enthalpy - calorimetric (ΔH°cal) from the experiments at each T, and van't Hoff (ΔH°vH) from the T dependence of the binding constant K°. From thermodynamics it is clear that these two must be identical, but early efforts to demonstrate this for ITC data indicated significant inconsistency. In an extensive 2004 study of the Ba2+ + 18-crown-6 ether complexation used in prior comparisons, Mizoue and Tellinghuisen found modest (10-20%) but statistically significant differences, which were tentatively attributed to problems converting the calorimetric estimates to their standard state values, as implied by the superscript ° in the notation. In the present work the 2004 results are reanalyzed using results obtained since then from temperature, heat, and volume calibration of the instrument and a better determination of the data variance function required for the weighted least-squares fitting of the data. The new results show consistency for temperatures 5-30 °C but persistent statistically significant differences from 35 to 46 °C. Several possible explanations for the remaining discrepancies are examined, with methods that include fitting the K and ΔHcal data together.
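
A minimal sketch of the van't Hoff route mentioned above: fit ln K° against 1/T and take ΔH°vH = −R × slope (the constant-ΔH approximation), for comparison with the directly measured calorimetric values. The K values below are synthetic.

# Sketch: van't Hoff estimate of the binding enthalpy from K°(T), using the
# constant-ΔH approximation d(ln K)/d(1/T) = -ΔH°/R.  Synthetic K values.
import numpy as np

R = 8.314                                                  # J mol^-1 K^-1
T = np.array([278.15, 288.15, 298.15, 308.15, 318.15])     # K
K = np.array([9500., 7400., 5900., 4800., 4000.])          # synthetic K° values

slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
dH_vH = -R * slope                                         # J/mol
print(f"van't Hoff ΔH° ≈ {dH_vH/1000:.1f} kJ/mol")
# A calorimetric ΔH°cal at each T would be compared with this point by point,
# or the K and ΔHcal data could be fitted together in a global analysis.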


Subject(s)
Barium/chemistry, Calorimetry, Crown Ethers/chemistry, Thermodynamics, Calibration
4.
Life (Basel) ; 11(7)2021 Jul 14.
Article in English | MEDLINE | ID: mdl-34357065

ABSTRACT

Methods for estimating the qPCR amplification efficiency E from data for single reactions are tested on six multireplicate datasets, with emphasis on their performance as a function of the range of cycles n1-n2 included in the analysis. The two-parameter exponential growth (EG) model that has been relied upon almost exclusively does not allow for the decline of E(n) with increasing cycle number n through the growth region and accordingly gives low-biased estimates. Further, the standard procedure of "baselining" (separately estimating and subtracting a baseline before analysis) leads to reduced precision. The three-parameter logistic model (LRE) does allow for such decline and includes a parameter E0 that represents E through the baseline region. Several four-parameter extensions of this model that accommodate some asymmetry in the growth profiles but still retain the significance of E0 are tested against the LRE and EG models. The recursion method of Carr and Moore also describes a declining E(n) but tacitly assumes E0 = 2 in the baseline region. Two modifications that permit varying E0 are tested, as well as a recursion method that directly fits E(n) to a sigmoidal function. All but the last of these can give E0 estimates that agree fairly well with calibration-based estimates but perform best when the calculations are extended to only about one cycle below the first-derivative maximum (FDM). The LRE model performs as well as any of the four-parameter forms and is easier to use. Its proper implementation requires fitting the data to it plus a suitable baseline function, which typically requires four to six adjustable parameters in a nonlinear least-squares fit.
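
The sketch below fits a logistic growth profile together with a linear baseline and reads an early-cycle efficiency from the growth-rate parameter. In this particular parameterization the early-cycle signal grows as exp(k·n), so exp(k) plays the role of E0; this is an illustrative model only, not necessarily the exact LRE parameterization used in the paper.

# Sketch: logistic-plus-baseline fit of a qPCR growth profile, with an
# early-cycle efficiency taken as exp(k).  Synthetic, illustrative data.
import numpy as np
from scipy.optimize import curve_fit

def model(n, Fmax, k, n_half, a, b):
    return Fmax / (1.0 + np.exp(-k * (n - n_half))) + a + b * n

rng = np.random.default_rng(0)
n = np.arange(1, 41, dtype=float)
y = model(n, 1000.0, np.log(1.9), 25.0, 50.0, 0.3) + rng.normal(0, 2.0, n.size)

p0 = [y.max() - y.min(), 0.7, n[np.argmax(np.gradient(y))], y.min(), 0.0]
popt, pcov = curve_fit(model, n, y, p0=p0)
E0 = np.exp(popt[1])
print(f"estimated E0 ≈ {E0:.3f}  (simulated value 1.9)")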

5.
Anal Biochem ; 611: 113946, 2020 12 15.
Article in English | MEDLINE | ID: mdl-32918867

ABSTRACT

The ultimate precision in both dPCR and qPCR experiments is limited by the Poisson statistics in the total number m of template molecules in the sample, giving relative standard deviation 1/√m. This means that precision is limited by sample volume at low concentrations. Accordingly qPCR instruments, used in dPCR mode, can give better precision than dPCR instruments in this limit. For example, 13% standard deviation can be achieved with a 96-well plate for number concentrations ~20-5000 mL⁻¹. For fixed m, qPCR loses to dPCR by a factor of ~2 in precision when calibration is needed.
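
A quick numerical check of the Poisson limit quoted here: the relative standard deviation of the template count m is 1/√m, so ~13% corresponds to roughly 60 molecules in the whole sample.

# Quick check of the Poisson limit: relative SD of the template count m is
# 1/sqrt(m), so ~13% requires m ≈ 60 molecules.
import numpy as np

rng = np.random.default_rng(2)
for m in (60, 600, 6000):
    counts = rng.poisson(m, size=100_000)
    print(m, "rel SD:", counts.std() / counts.mean(), "predicted:", 1 / np.sqrt(m))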


Subject(s)
Chemical Models, Real-Time Polymerase Chain Reaction, Calibration, Humans
6.
BMC Bioinformatics ; 21(1): 291, 2020 Jul 08.
Article in English | MEDLINE | ID: mdl-32640980

ABSTRACT

BACKGROUND: A recently proposed method for estimating qPCR amplification efficiency E analyzes fluorescence intensity ratios from pairs of points deemed to lie in the exponential growth region on the amplification curves for all reactions in a dilution series. This method suffers from a serious problem: The resulting ratios are highly correlated, as they involve multiple use of the raw data, for example, yielding ~ 250 E estimates from ~ 25 intensity readings. The resulting statistics for such estimates are falsely optimistic in their assessment of the estimation precision. RESULTS: Monte Carlo simulations confirm that the correlated pairs method yields precision estimates that are better than actual by a factor of two or more. This result is further supported by estimating E by both pairwise and Cq calibration methods for the 16 replicate datasets from the critiqued work, and then comparing the ensemble statistics for these methods. CONCLUSION: Contrary to assertions in the proposing work, the pairwise method does not yield E estimates a factor of 2 more precise than estimates from Cq calibration fitting (the standard curve method). On the other hand, the statistically correct direct fit of the data to the model behind the pairwise method can yield E estimates of comparable precision. Ways in which the approach might be improved are discussed briefly.


Subject(s)
Real-Time Polymerase Chain Reaction, Data Correlation, Fluorescence, Monte Carlo Method
7.
Anal Chem ; 92(16): 10863-10871, 2020 08 18.
Article in English | MEDLINE | ID: mdl-32678579

ABSTRACT

Methods for straight-line fitting of data having uncertainty in x and y are compared through Monte Carlo simulations and application to specific data sets. Under special circumstances, the "ignorance" methods, methods which are typically used without information about the data errors σx and σy, are equivalent to the recommended best approach. The latter is numerical rather than formulaic but is easy to implement in programs that permit user-defined fit functions. It can handle any response function, linear or nonlinear, for any σxi and σyi. Estimates for the latter must be supplied and rightfully belong in any data analysis.
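
One common numerical route to fitting with uncertainty in both coordinates is to minimize an "effective variance" chi-square in which each point's variance is σy² + (slope·σx)²; any routine that accepts a user-defined objective can handle it. The sketch below illustrates that idea with example numbers; it should be read as an illustration of the general approach, not necessarily the paper's exact prescription.

# Sketch: straight-line fit with errors in both x and y by minimizing an
# effective-variance chi-square, var_eff_i = sy_i**2 + (b*sx_i)**2.
import numpy as np
from scipy.optimize import minimize

x  = np.array([0.0, 0.9, 1.8, 2.6, 3.3, 4.4, 5.2, 6.1, 6.5, 7.4])
y  = np.array([5.9, 5.4, 4.4, 4.6, 3.5, 3.7, 2.8, 2.8, 2.4, 1.5])
sx = np.full_like(x, 0.2)
sy = np.full_like(y, 0.3)

def chi2(p):
    a, b = p
    var_eff = sy**2 + (b * sx)**2
    return np.sum((y - (a + b * x))**2 / var_eff)

res = minimize(chi2, x0=[6.0, -0.5], method="Nelder-Mead")
print("intercept, slope:", res.x, " chi2:", res.fun)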

8.
Anal Chem ; 91(14): 8715-8722, 2019 07 16.
Article in English | MEDLINE | ID: mdl-31180654

ABSTRACT

Inverse variance weighting ensures optimal parameter estimation in least-squares fitting, with exact parameter standard errors for linear least-squares with known data variance. In this Feature, I emphasize the virtues of numerical methods for estimating data variance functions and for determining these limits for any calibration model, linear or nonlinear.

9.
Biomol Detect Quantif ; 17: 100084, 2019 Mar.
Article in English | MEDLINE | ID: mdl-31194178

ABSTRACT

The standard approach for quantitative estimation of genetic materials with qPCR is calibration with known concentrations for the target substance, in which estimates of the quantification cycle (Cq) are fitted to a straight-line function of log(N0), where N0 is the initial number of target molecules. The location of Cq for the unknown on this line then yields its N0. The most widely used definition for Cq is an absolute threshold that falls in the early growth cycles. This usage is flawed as commonly implemented, with the threshold set very close to the baseline level, which is estimated separately from designated "baseline cycles." The absolute threshold is especially poor for dealing with the scale variability often observed for growth profiles. Scale-independent markers, like the first derivative maximum (FDM) and a relative threshold (Cr), avoid this problem. We describe improved methods for estimating these and other Cq markers and their standard errors, from a nonlinear algorithm that fits growth profiles to a 4-parameter log-logistic function plus a baseline function. Further, by examining six multidilution, multireplicate qPCR data sets, we find that nonlinear expressions are often preferred statistically for the dependence of Cq on log(N0). This means that the amplification efficiency E depends on N0, in violation of another tenet of qPCR analysis. Neglect of calibration nonlinearity leads to biased estimates of the unknown. By logic, E estimates from calibration fitting pertain to the earliest baseline cycles, not the early growth cycles used to estimate E from growth profiles for single reactions. This raises concern about the use of the latter in lengthy extrapolations to estimate N0. Finally, we observe that replicate ensemble standard deviations greatly exceed predictions, implying that much better results can be achieved from qPCR through better experimental procedures, which likely include reducing pipette volume uncertainty.
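
A minimal sketch of extracting a scale-independent Cq marker: fit the growth profile to a 4-parameter log-logistic plus a linear baseline and take the first-derivative maximum (FDM) of the fitted sigmoid as Cq. The model form and data are illustrative, not necessarily the paper's exact parameterization.

# Sketch: scale-independent Cq from the first-derivative maximum (FDM) of a
# fitted log-logistic plus linear baseline.  Synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def loglogistic(n, Fmax, B, n0, a, b):
    return Fmax / (1.0 + (n / n0)**(-B)) + a + b * n

rng = np.random.default_rng(3)
n = np.arange(1, 41, dtype=float)
y = loglogistic(n, 900.0, 25.0, 24.0, 40.0, 0.2) + rng.normal(0, 2.0, n.size)

popt, _ = curve_fit(loglogistic, n, y,
                    p0=[y.max() - y.min(), 20.0, 24.0, y.min(), 0.0])

# FDM of the sigmoidal part only (baseline excluded), located numerically.
grid = np.linspace(n[0], n[-1], 4000)
sig = popt[0] / (1.0 + (grid / popt[2])**(-popt[1]))
fdm = grid[np.argmax(np.gradient(sig, grid))]
print(f"Cq (FDM) ≈ {fdm:.2f} cycles")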

10.
Anal Biochem ; 563: 79-86, 2018 12 15.
Article in English | MEDLINE | ID: mdl-30149027

ABSTRACT

Isothermal titration calorimetry data recorded on a MicroCal/Malvern VP-ITC instrument for water-water blanks and for dilution of aqueous solutions of BaCl2 and Ba(NO3)2 are analyzed using Origin software, the freeware NITPIC program, and in-house algorithms, to compare precisions for estimating the heat per injection q. The data cover temperatures 6-47 °C, injection volumes 4-40 µL, and average heats 0-200 µcal. For water-water blanks, where baseline noise limits precision, NITPIC and the in-house algorithm achieve precisions of 0.05 µcal, which is better than Origin by a factor of 4. The precision differences decrease with increasing |q|, becoming insignificant for |q| > 200 µcal. In its default mode, NITPIC underestimates |q| for peaks with incomplete return to baseline, but the shortfall can be largely corrected by overriding the default injection time parameter. The variance estimates from 26 dilution experiments are used to assess the data variance function. The results determine the conditions under which weighted least squares should be used to estimate thermodynamic parameters from ITC data.


Subject(s)
Calorimetry/methods, Algorithms, Heat, Least-Squares Analysis, Temperature, Thermodynamics
11.
Biochim Biophys Acta Gen Subj ; 1862(4): 886-894, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29289616

ABSTRACT

BACKGROUND: Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted. METHODS: The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear by substituting e^A, ln A, and 1/A for a linear parameter a. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted, but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data. RESULTS: Non-Gaussian parameter distributions are generally asymmetric and biased. However, when the parametric SE is <10% of the magnitude of the parameter, both the bias and the asymmetry can usually be ignored. Sometimes nonlinear estimators can be redefined to give more normal distributions and better convergence properties. CONCLUSION: Variable data uncertainty, or heteroscedasticity, can sometimes be handled by data transforms but more generally requires weighted LS, which in turn require knowledge of the data variance. GENERAL SIGNIFICANCE: Parametric SEs are rigorously correct in linear LS under the usual assumptions, and are a trustworthy approximation in nonlinear LS provided they are sufficiently small - a condition favored by the abundant, precise data routinely collected in many modern instrumental methods.
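
The kind of Monte Carlo check described can be reproduced in a few lines: simulate noisy data for a linear model, refit it with the linear parameter replaced by e^A, and inspect the A distribution for bias and asymmetry at different noise levels (probing the "<10% relative SE" rule of thumb). Numbers are illustrative.

# Sketch: Monte Carlo sampling distribution of a nonlinear parameter.  The
# linear model y = a*x is refit as y = exp(A)*x; asymmetry and bias of the A
# distribution grow as the noise (and hence the relative SE) grows.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
x = np.linspace(1, 10, 10)
a_true = 2.0

def run(noise, ntrials=2000):
    A = []
    for _ in range(ntrials):
        y = a_true * x + rng.normal(0, noise, x.size)
        popt, _ = curve_fit(lambda xx, A: np.exp(A) * xx, x, y, p0=[0.5])
        A.append(popt[0])
    A = np.array(A)
    skew = np.mean((A - A.mean())**3) / A.std()**3
    return A.mean(), A.std(), skew

for noise in (0.5, 5.0):
    print("noise", noise, "-> mean, SD, skew of A:", run(noise))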


Subject(s)
Algorithms, Computer Simulation, Least-Squares Analysis, Monte Carlo Method, Calorimetry/methods, Enzymes/metabolism, Humans, Kinetics, Reproducibility of Results
12.
Sci Rep ; 6: 38951, 2016 12 13.
Article in English | MEDLINE | ID: mdl-27958340

ABSTRACT

Real-time quantitative polymerase chain reaction (qPCR) data are found to display periodic patterns in the fluorescence intensity as a function of sample number for fixed cycle number. This behavior is seen for technical replicate datasets recorded on several different commercial instruments; it occurs in the baseline region and typically increases with increasing cycle number in the growth and plateau regions. Autocorrelation analysis reveals periodicities of 12 for 96-well systems and 24 for a 384-well system, indicating a correlation with block architecture. Passive dye experiments show that the effect may be from optical detector bias. Importantly, the signal periodicity manifests as periodicity in quantification cycle (Cq) values when these are estimated by the widely applied fixed threshold approach, but not when scale-insensitive markers like first- and second-derivative maxima are used. Accordingly, any scale variability in the growth curves will lead to bias in constant-threshold-based Cqs, making it mandatory that workers either use scale-insensitive Cqs or normalize their growth curves to constant amplitude before applying the constant threshold method.
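
The autocorrelation diagnostic described can be sketched as follows: for a fixed cycle, take the signal as a function of well index, remove the mean, and look for peaks in the autocorrelation at lags of 12 (96-well) or 24 (384-well). The synthetic data below carry a small period-12 component.

# Sketch: detecting block-architecture periodicity in per-well signals by
# autocorrelation.  Synthetic 96-well data with a period-12 component.
import numpy as np

rng = np.random.default_rng(5)
wells = np.arange(96)
signal = 100 + 1.5 * np.sin(2 * np.pi * wells / 12) + rng.normal(0, 0.5, 96)

d = signal - signal.mean()
acf = np.correlate(d, d, mode="full")[d.size - 1:]
acf /= acf[0]

lag = 1 + np.argmax(acf[1:40])     # lag (>0) with the largest autocorrelation
print("dominant lag:", lag)        # expected: 12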


Subject(s)
Chemical Models, Real-Time Polymerase Chain Reaction/methods, Humans
13.
Anal Biochem ; 513: 43-46, 2016 11 15.
Article in English | MEDLINE | ID: mdl-27567993

ABSTRACT

Isothermal titration calorimetry data for very low c (≡K[M]0) must normally be analyzed with the stoichiometry parameter n fixed - at its known value or at any reasonable value if the system is not well characterized. In the latter case, ΔH° (and hence n) can be estimated from the T-dependence of the binding constant K, using the van't Hoff (vH) relation. An alternative is global or simultaneous fitting of data at multiple temperatures. In this Note, global analysis of low-c data at two temperatures is shown to estimate ΔH° and n with double the precision of the vH method.


Subject(s)
Theoretical Models, Indirect Calorimetry/methods
14.
Biochim Biophys Acta ; 1860(5): 861-867, 2016 May.
Article in English | MEDLINE | ID: mdl-26477875

ABSTRACT

BACKGROUND: Successful ITC experiments require conversion of cell reagent (titrand M) to product and production or consumption of heat. These conditions are quantified for 1:1 binding, M+X ⇔ MX. METHODS: Nonlinear least squares is used in error-propagation mode to predict the precisions with which the key quantities - binding constant K, reaction enthalpy ΔH°, and stoichiometry number n - can be estimated over a wide range of the dimensionless quantity that governs isotherm shape, c=K[M]0. The measurement precision σq is estimated from analysis of water-water blanks. RESULTS: When the product conversion exceeds 90%, the parameter relative standard errors are proportional to σq/qtot, where the total heat qtot ≈ ΔH°[M]0V0. Specifically, σK/K × qtot/σq ≈ 25 for c = 10^-3 to 10, and ≈ 11 c^(1/3) for c = 10 to 10^4. For c>1, n and ΔH° are more precise than K; this holds also at smaller c for the product n×ΔH° and for ΔH° when n can be held fixed. Use of as few as 10 titrant injections can outperform the customary 20-40 while also improving productivity. CONCLUSION: These principles are illustrated in experiment design using the program ITC-PLANNER15. GENERAL SIGNIFICANCE: Simple quantitative guidelines replace the "c rules" that have dominated the literature for decades.
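
These rules of thumb translate directly into a quick planning estimate: given c, the total heat qtot ≈ ΔH°·[M]0·V0, and the instrument noise σq, predict σK/K. The calculation below applies the two proportionalities as read from the abstract, with assumed (illustrative) instrument and reagent values.

# Sketch: predicted relative precision of K from the quoted rules of thumb:
# sigma_K/K ≈ 25*(sigma_q/q_tot) for c <= 10, ≈ 11*c**(1/3)*(sigma_q/q_tot)
# for larger c.  All input values are assumed for illustration.
dH    = 30_000.0      # J/mol, assumed reaction enthalpy
M0    = 0.2e-3        # mol/L, assumed cell titrand concentration
V0    = 1.4e-3        # L, assumed cell volume (VP-ITC-like)
sig_q = 1.0e-6        # J, assumed per-injection noise

q_tot = dH * M0 * V0                       # ≈ 8.4e-3 J total heat
for c in (0.01, 1.0, 100.0):
    factor = 25.0 if c <= 10 else 11.0 * c**(1.0 / 3.0)
    print(f"c = {c:6g}:  sigma_K/K ≈ {factor * sig_q / q_tot:.3%}")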


Subject(s)
Barium Compounds/chemistry, Calorimetry/standards, Chlorides/chemistry, Crown Ethers/chemistry, Software, Heat, Kinetics, Least-Squares Analysis, Nitrates/chemistry, Research Design, Temperature, Thermodynamics
15.
Anal Biochem ; 496: 1-3, 2016 Mar 01.
Article in English | MEDLINE | ID: mdl-26562324

ABSTRACT

Relative expression ratios are commonly estimated in real-time qPCR studies by comparing the quantification cycle for the target gene with that for a reference gene in the treatment samples, normalized to the same quantities determined for a control sample. For the "standard curve" design, where data are obtained for all four of these at several dilutions, nonlinear least squares can be used to assess the amplification efficiencies (AE) and the adjusted ΔΔCq and its uncertainty, with automatic inclusion of the effect of uncertainty in the AEs. An algorithm is illustrated for the KaleidaGraph program.


Subject(s)
Least-Squares Analysis, Real-Time Polymerase Chain Reaction/methods, Uncertainty
16.
Anal Chem ; 88(24): 12183-12187, 2016 12 20.
Article in English | MEDLINE | ID: mdl-28193077

ABSTRACT

The role of partition volume variability, or polydispersity, in digital polymerase chain reaction methods is examined through formal considerations and Monte Carlo simulations. Contrary to intuition, polydispersity causes little precision loss for low average copy number per partition µ and can actually improve precision when µ exceeds ∼4. It does this by negatively biasing the estimates of µ, thus increasing the number of negative (null) partitions N0. In keeping with binomial statistics, this increases the relative precision of N0 and hence of the biased estimate m of µ. Below µ = 1, the precision loss and the bias are both small enough to be negligible for many applications. For higher µ the bias becomes more important than the imprecision, making accuracy dependent on knowledge of the partition volume distribution function. This information can be gained with optical microscopy or through calibration with reference materials.
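
A minimal Monte Carlo sketch of the effect described: draw partition volumes with some dispersion, populate them with Poisson-distributed copies, and estimate µ in the usual way from the fraction of negative partitions, m = −ln(N0/N). The negative bias appears as the polydispersity grows. All numbers are illustrative.

# Sketch: effect of partition-volume polydispersity on the standard dPCR
# estimate m = -ln(N0/N).  Volumes are gamma-distributed with mean 1 and
# relative SD = cv; copies per partition are Poisson with mean mu*v.
import numpy as np

rng = np.random.default_rng(6)
N = 100_000

def estimate(mu, cv):
    v = np.ones(N) if cv == 0 else rng.gamma(1 / cv**2, cv**2, size=N)
    counts = rng.poisson(mu * v)
    n0 = np.count_nonzero(counts == 0)       # negative (null) partitions
    return -np.log(n0 / N)

for mu in (0.5, 1.0, 4.0, 8.0):
    print(mu, "monodisperse:", round(estimate(mu, 0.0), 3),
              "cv = 20%:",     round(estimate(mu, 0.20), 3))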


Subject(s)
Polymerase Chain Reaction/methods, Calibration, Statistical Models, Monte Carlo Method, Sample Size
17.
Anal Chem ; 87(17): 8925-31, 2015 Sep 01.
Article in English | MEDLINE | ID: mdl-26235706

ABSTRACT

Monte Carlo simulations are used to examine the bias and loss of precision that result from experimental error and analysis procedures in real-time quantitative polymerase chain reaction (PCR). In the limit of small copy numbers (N0), Poisson statistics govern the dispersion in estimates of the quantification cycle (Cq) for replicate experiments, permitting the estimation of N0 from the Cq variance, which is inversely proportional to N0. We derive corrections to expressions given previously for this determination. With increasing N0, the Poisson contribution decreases and other effects, like pipet volume uncertainty (typically >3%), dominate. Cycle-to-cycle variability in the amplification efficiency E produces scale dispersion similar to that for variability in the sensitivity of fluorescence detection. When this E variability is proportional to just the amplification (E - 1), there is an insignificant effect on Cq if scale-independent definitions are used for this marker. Single-reaction analysis methods based on the exponential growth equation are inherently low-biased in E and high-biased in N0, and these biases can amount to a factor-of-4 or greater error in N0. For estimating Cq, their greatest limitation is use of a constant absolute threshold, making them inefficient for data that exhibit scale variability.

18.
Anal Chem ; 87(3): 1889-95, 2015 Feb 03.
Article in English | MEDLINE | ID: mdl-25582662

ABSTRACT

The quantification cycle (Cq) is widely used for calibration in real-time quantitative polymerase chain reaction (qPCR), to estimate the initial amount, or copy number (N0), of the target DNA. Cq may be defined several ways, including the cycle where the detected fluorescence achieves a prescribed threshold level. For all methods of defining Cq, the standard deviation from replicate experiments is typically much greater than the estimated standard errors from the least-squares fits used to obtain Cq. For moderate-to-large copy number (N0 > 10^2), pipet volume uncertainty and variability in the amplification efficiency (E) likely account for most of the excess variance in Cq. For small N0, the dispersion of Cq is determined by the Poisson statistics of N0, which means that N0 can be estimated directly from the variance of Cq. The estimation precision is determined by the statistical properties of χ², giving a relative standard deviation of ∼(2/n)^(1/2), where n is the number of replicates, for example, a 20% standard deviation in N0 from 50 replicates.
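
A small simulation of the two statistical facts used here: with Poisson-dominated dispersion and Cq = const − ln(N)/ln(E), propagation gives Var(Cq) ≈ 1/(N0·ln²E), so N0 can be estimated from the replicate variance of Cq, and the relative SD of that estimate is roughly (2/n)^(1/2) for n replicates. The Var(Cq) relation is my paraphrase of the propagation step; the paper's exact corrections are not reproduced.

# Sketch: estimating N0 from the replicate variance of Cq in the Poisson-
# dominated limit, assuming Var(Cq) ≈ 1/(N0*ln(E)**2).
import numpy as np

rng = np.random.default_rng(7)
N0, E, n = 50, 1.90, 50                    # true copies, efficiency, replicates
lnE = np.log(E)

est = []
for _ in range(5000):
    N = rng.poisson(N0, size=n)
    Cq = 40.0 - np.log(N) / lnE            # arbitrary constant offset
    est.append(1.0 / (Cq.var(ddof=1) * lnE**2))
est = np.array(est)
print("mean N0 estimate:", est.mean())
print("relative SD:", est.std() / est.mean(), " predicted ~", np.sqrt(2 / n))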


Subject(s)
Real-Time Polymerase Chain Reaction/methods, Analysis of Variance, Gene Dosage, Least-Squares Analysis
19.
Anal Biochem ; 464: 94-102, 2014 Nov 01.
Article in English | MEDLINE | ID: mdl-24991688

ABSTRACT

Most methods for analyzing real-time quantitative polymerase chain reaction (qPCR) data for single experiments estimate the hypothetical cycle 0 signal y0 by first estimating the quantification cycle (Cq) and amplification efficiency (E) from least-squares fits of fluorescence intensity data for cycles near the onset of the growth phase. The resulting y0 values are statistically equivalent to the corresponding Cq if and only if E is taken to be error free. But uncertainty in E usually dominates the total uncertainty in y0, making the latter much degraded in precision compared with Cq. Bias in E can be an even greater source of error in y0. So-called mechanistic models achieve higher precision in estimating y0 by tacitly assuming E=2 in the baseline region and so are subject to this bias error. When used in calibration, the mechanistic y0 is statistically comparable to Cq from the other methods. When a signal threshold yq is used to define Cq, best estimation precision is obtained by setting yq near the maximum signal in the range of fitted cycles, in conflict with common practice in the y0 estimation algorithms.
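
The precision argument can be illustrated with simple error propagation: for y0 = yq·E^(−Cq), the relative variance of y0 contains a term (Cq·σE/E)², so for Cq ≈ 25 even a 1% uncertainty in E inflates the uncertainty in y0 to roughly 25%. A tiny worked calculation with assumed numbers:

# Sketch: error propagation for y0 = yq * E**(-Cq).  The E term enters as
# (Cq*sigma_E/E)**2 in the relative variance of y0, which is why y0 is far
# less precise than Cq when E is uncertain.  Illustrative numbers.
import numpy as np

yq, Cq, E = 100.0, 25.0, 1.90
sigma_Cq, sigma_E = 0.05, 0.019            # ~1% uncertainty in E

y0 = yq * E**(-Cq)
rel_var = (np.log(E) * sigma_Cq)**2 + (Cq * sigma_E / E)**2
print(f"y0 ≈ {y0:.3e},  relative SD ≈ {np.sqrt(rel_var):.1%}")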


Subject(s)
Polymerase Chain Reaction/methods, Uncertainty, Calibration, Least-Squares Analysis
20.
Anal Biochem ; 449: 76-82, 2014 Mar 15.
Article in English | MEDLINE | ID: mdl-24365068

ABSTRACT

New methods are used to compare seven qPCR analysis methods for their performance in estimating the quantification cycle (Cq) and amplification efficiency (E) for a large test data set (94 samples for each of 4 dilutions) from a recent study. Precision and linearity are assessed using chi-square (χ²), which is the minimized quantity in least-squares (LS) fitting, equivalent to the variance in unweighted LS, and commonly used to define statistical efficiency. All methods yield Cqs that vary strongly in precision with the starting concentration N0, requiring weighted LS for proper calibration fitting of Cq vs log(N0). Then χ² for cubic calibration fits compares the inherent precision of the Cqs, while increases in χ² for quadratic and linear fits show the significance of nonlinearity. Nonlinearity is further manifested in unphysical estimates of E from the same Cq data, results which also challenge a tenet of all qPCR analysis methods - that E is constant throughout the baseline region. Constant-threshold (Ct) methods underperform the other methods when the data vary considerably in scale, as these data do.


Subject(s)
Real-Time Polymerase Chain Reaction/methods, Calibration, Least-Squares Analysis, Linear Models