Results 1 - 20 of 397
1.
Open Mind (Camb) ; 8: 1107-1128, 2024.
Article in English | MEDLINE | ID: mdl-39296349

ABSTRACT

Transfer learning, the reuse of newly acquired knowledge under novel circumstances, is a critical hallmark of human intelligence that has frequently been pitted against the capacities of artificial learning agents. Yet, the computations relevant to transfer learning have been little investigated in humans. The benefit of efficient inductive biases (meta-level constraints that shape learning, often referred to as priors in the Bayesian learning approach) has been established both theoretically and experimentally. The efficiency of inductive biases depends on their capacity to generalize earlier experiences. We argue that successful transfer learning upon task acquisition is ensured by updating inductive biases, and that transfer of knowledge hinges upon capturing the structure of the task in an inductive bias that can be reused in novel tasks. To explore this, we trained participants on a non-trivial visual stimulus sequence task (Alternating Serial Response Times, ASRT); during the Training phase, participants were exposed to one specific sequence for multiple days, then in the Transfer phase the sequence changed, while the underlying structure of the task remained the same. Our results show that beyond acquiring the stimulus sequence, our participants were also able to update their inductive biases. Acquisition of the new sequence was considerably sped up by earlier exposure, but this enhancement was specific to individuals showing signatures of abandoning their initial inductive biases. Enhancement of learning was reflected in the development of a new internal model. Additionally, our findings highlight the ability of participants to construct an inventory of internal models and alternate between them based on environmental demands. Further, investigation of behavior during transfer revealed that it is the subjective internal model of individuals that predicts transfer across tasks. Our results demonstrate that even imperfect learning in a challenging environment helps learning in a new context by reusing subjective and partial knowledge about environmental regularities.

2.
Heliyon ; 10(16): e36284, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39262974

ABSTRACT

The relevance of atmospheric particulate matter (PM) to health and the environment is widely known. Long-term studies are necessary for understanding current and future trends in air quality management. This study aimed to assess long-term PM concentrations in the Magdalena department (Colombia), focusing on the following aspects: i) spatiotemporal patterns, ii) correlation with meteorology, iii) compliance with standards, iv) temporal trends over time, v) impact on health, and vi) impact of policy management. Data from fifteen stations covering 2003 to 2021 were analyzed. Spearman's rho and Mann-Kendall methods were used to correlate concentration with meteorology. The temporal and five-year moving trends were determined, and the trend magnitude was calculated using the Theil-Sen estimator. Acute respiratory infection odds ratios and the cancer risk associated with PM concentration were used to assess the impact on health. The study found that the maximum PM10 concentration was 194.5 µg/m³ and the minimum was 3 µg/m³. In all stations, a negative correlation was observed between PM10 and atmospheric water content, while wind speed and temperature showed a positive correlation. The global trends indicated increasing values, with five fluctuations in the five-year moving trends, consistent with PM sources and socio-economic behavior. PM concentrations were found to comply with the national standard; however, the results showed a potential impact on population health. Management regulation had a limited impact on the increasing concentrations. Considering that national regulations tend to converge towards WHO standards, the study area must create a management program to ensure compliance.
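A minimal sketch of the kind of trend analysis described above, assuming a simulated monthly PM10 series as a stand-in for station data; it uses SciPy's Theil-Sen estimator for the trend magnitude and Kendall's tau against time as a simple proxy for the Mann-Kendall monotonic-trend test (no autocorrelation correction).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated monthly PM10 means (µg/m³) with a mild upward trend, 2003-2021.
months = np.arange(12 * 19)
pm10 = 35 + 0.05 * months + rng.normal(0, 8, months.size)

# Theil-Sen estimate of the trend magnitude (slope per month, with 95% CI).
slope, intercept, lo, hi = stats.theilslopes(pm10, months, alpha=0.95)

# Kendall's tau of concentration against time: a simple stand-in for the
# Mann-Kendall monotonic-trend test.
tau, p_value = stats.kendalltau(months, pm10)

print(f"Theil-Sen slope: {slope:.3f} µg/m³ per month (95% CI {lo:.3f} to {hi:.3f})")
print(f"Kendall tau = {tau:.3f}, p = {p_value:.3g}")
```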

3.
Front Comput Neurosci ; 18: 1293279, 2024.
Article in English | MEDLINE | ID: mdl-39268151

ABSTRACT

The question of how consciousness and behavior arise from neural activity is fundamental to understanding the brain, and to improving the diagnosis and treatment of neurological and psychiatric disorders. There is a significant murine and primate literature on how behavior relates to the electrophysiological activity of the medial prefrontal cortex and its role in working memory processes such as planning and decision-making. Existing experimental designs, specifically rodent spike train and local field potential recordings during the T-maze alternation task, have insufficient statistical power to unravel the complex processes of the prefrontal cortex. We therefore examined the theoretical limitations of such experiments, providing concrete guidelines for robust and reproducible science. To approach these theoretical limits, we applied dynamic time warping and associated statistical tests to data from neuron spike trains and local field potentials. The goal was to quantify neural network synchronicity and the correlation of neuroelectrophysiology with rat behavior. The results show the statistical limitations of existing data and the fact that meaningful comparisons between dynamic time warping and traditional Fourier and wavelet analysis are impossible until larger and cleaner datasets are available.

4.
Lab Anim ; : 236772241246602, 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39157973

ABSTRACT

Most classical statistical tests assume data are normally distributed. If this assumption is not met, researchers often turn to non-parametric methods. These methods have some drawbacks, and if no suitable non-parametric test exists, a normal distribution may be used inappropriately instead. A better option is to select a distribution appropriate for the data from dozens available in modern software packages. Selecting a distribution that represents the data generating process is a crucial but overlooked step in analysing data. This paper discusses several alternative distributions and the types of data that they are suitable for.
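As a hedged illustration of the selection step described above (not the paper's own procedure), the following sketch fits several candidate distributions from SciPy to a simulated positively skewed sample by maximum likelihood and compares them with AIC.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical positively skewed outcome (e.g., a latency-like measurement).
data = rng.gamma(shape=2.0, scale=1.5, size=200)

# Fit candidate distributions by maximum likelihood and compare with AIC.
candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

for name, dist in candidates.items():
    params = dist.fit(data)                      # MLE fit
    loglik = np.sum(dist.logpdf(data, *params))  # log-likelihood at the fit
    aic = 2 * len(params) - 2 * loglik           # lower AIC = better trade-off
    print(f"{name:10s} AIC = {aic:7.1f}")
```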

5.
BMC Med Res Methodol ; 24(1): 189, 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39210285

ABSTRACT

BACKGROUND: Accurate prediction of subject recruitment, which is critical to the success of a study, remains an ongoing challenge. Previous prediction models often rely on parametric assumptions which are not always met or may be difficult to implement. We aim to develop a novel method that is less sensitive to model assumptions and relatively easy to implement. METHODS: We created a weighted resampling-based approach to predict enrollment in year two based on recruitment data from year one of the completed GRIPS and PACE clinical trials. Different weight functions accounted for a range of potential enrollment trajectory patterns. Prediction accuracy was measured by Euclidean distance for the enrollment sequence in year two, total enrollment over time, and total weeks to enroll a fixed number of subjects, against the actual year two enrollment data. We compared the performance of the proposed method with an existing Bayesian method. RESULTS: Weighted resampling using GRIPS data resulted in closer prediction, evidenced by better coverage of observed enrollment by the prediction intervals and smaller Euclidean distance from actual enrollment in year 2, especially when enrollment gaps were filled prior to the weighted resampling. These scenarios also produced more accurate predictions for total enrollment and the number of weeks to enroll 50 participants, and outperformed an existing Bayesian method for all 3 accuracy measures. In PACE data, using reduced year 1 enrollment resulted in closer prediction, evidenced by better coverage of observed enrollment by the prediction intervals and smaller Euclidean distance from actual enrollment in year 2, with the weighted resampling scenarios better reflecting the seasonal variation seen in year 1. The reduced enrollment scenarios resulted in closer prediction for total enrollment over 6 and 12 months into year 2, and also outperformed an existing Bayesian method for the relevant accuracy measures. CONCLUSION: The results demonstrate the feasibility and flexibility of a resampling-based, non-parametric approach for prediction of clinical trial recruitment with limited early enrollment data. Application to a wider setting and long-term prediction accuracy require further investigation.
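Purely to illustrate the resampling idea (this is not the authors' implementation; the weekly counts and the weight function are hypothetical), the sketch below resamples simulated year-one weekly enrollment with weights that favour later weeks and summarises the resulting year-two prediction intervals.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical weekly enrollment counts from year one of a trial (52 weeks).
year1 = rng.poisson(2.0, size=52)

# Weight function: later weeks count more, loosely emphasising the most recent
# enrollment pattern (one of many possible trajectory assumptions).
weights = np.linspace(0.5, 1.5, year1.size)
weights /= weights.sum()

n_sims, n_weeks = 5000, 52
sims = rng.choice(year1, size=(n_sims, n_weeks), replace=True, p=weights)
cumulative = sims.cumsum(axis=1)

# 95% prediction interval for total year-two enrollment, and weeks needed to reach 50.
lo, hi = np.percentile(cumulative[:, -1], [2.5, 97.5])
reached = cumulative[:, -1] >= 50
weeks_to_50 = np.argmax(cumulative[reached] >= 50, axis=1) + 1
print(f"Year-2 total enrollment 95% PI: [{lo:.0f}, {hi:.0f}]")
print(f"Median weeks to enroll 50 participants: {np.median(weeks_to_50):.0f}")
```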


Subject(s)
Bayes Theorem, Patient Selection, Randomized Controlled Trials as Topic, Humans, Randomized Controlled Trials as Topic/methods, Randomized Controlled Trials as Topic/statistics & numerical data, Aged, Inpatients/statistics & numerical data, Statistics, Nonparametric, Female
6.
Hum Psychopharmacol ; : e2909, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38995719

ABSTRACT

OBJECTIVES: Stimuli that are separated by a short window of space or time, known as the spatial and temporal binding windows (SBWs/TBWs), may be perceived as separate. Widened TBWs are evidenced in schizophrenia, although it is unclear whether the SBW is similarly affected. The current study aimed to assess whether dexamphetamine (DEX) increases SBWs in a multimodal visuo-tactile illusion, potentially validating its usefulness as an experimental model for multimodal visuo-tactile hallucinations in schizophrenia, and to examine a possible association between altered binding windows (BWs) and working memory (WM) suggested by previous research. METHODS: A placebo-controlled, double-blinded, counter-balanced crossover design was employed. Permuted block randomisation was used for drug order. Healthy participants received DEX (0.45 mg/kg, PO, b.i.d.) or placebo (glucose powder) in capsules. The Rubber Hand Illusion (RHI) and the Wechsler Adult Intelligence Scale Spatial Span were employed to determine whether DEX would alter SBWs and WM, respectively. Schizotypy was assessed with a variety of psychological scales. RESULTS: Most participants did not experience the RHI even under normal circumstances. Bidirectional and multimodal effects of DEX on individual SBWs and schizotypy were observed, but not on WM. CONCLUSIONS: Bidirectional multimodal effects of DEX on the RHI and SBWs were observed in individuals, although these were not associated with alterations in WM.

7.
Heliyon ; 10(13): e33952, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39055800

ABSTRACT

The precise estimation of solar PV cell parameters has become increasingly important as solar energy deployment expands. Due to the intricate and nonlinear characteristics of solar PV cells, meta-heuristic algorithms show greater promise than traditional ones for parameter estimation. This study utilizes the Puffer Fish (PF) meta-heuristic optimization method, inspired by the circular structures built by male puffer fish, to estimate the parameters of a modified four-diode PV cell. The PF algorithm's performance is assessed against ten benchmark test functions, with results presented as means and standard deviations for validation. Comparative analysis with Particle Swarm Optimization (PSO), Grey Wolf Optimization (GWO), Rat Search Algorithm (RAT), Heap Based Optimizer (HBO), and Cuckoo Search (CS) algorithms highlights PF's superior performance, achieving optimal solutions with a minimal error of 7.8947E-08. Statistical tests, including Friedman ranking (1st) and Wilcoxon's rank sum (3.8108E-07), confirm PF's superiority. Benchmark tests and statistical analysis consistently underscore PF's advantage over the other meta-heuristic algorithms for this parameter estimation task. Future research should explore PF's potential applications in solar energy and beyond.

8.
J Environ Manage ; 365: 121553, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38908148

ABSTRACT

Carbon dioxide (CO2) emissions are the primary contributors to climate change. Addressing and mitigating climate change necessitates the effective management and utilization of renewable energy consumption, which poses a substantial challenge for the forthcoming decades. This study explores the dynamic effects of service value added (SVA) and renewable energy on environmental quality, focusing particularly on CO2 emissions. Unlike previous studies, we employ a non-parametric modeling approach to uncover the time-varying influence of service sector growth on CO2 emissions. Specifically, we apply the local linear dummy variable estimation (LLDVE) method to a panel of the 17 highest-emitting nations over the period 1980-2021. Our study uncovers a non-linear relationship between CO2 emissions and SVA: from 1980 to 2003 we observe a negative correlation, whereas from 2005 to 2020 the relationship shifts towards a positive correlation, indicating a rise in energy consumption within the service sector. The results indicate that the major emitting economies have yet to achieve sustainability, with the service sector continuing to contribute to pollution. Addressing this issue necessitates more robust climate change policies and increased investment in clean energy, specifically targeting the service sector, including buildings and transport.


Subject(s)
Carbon Dioxide, Climate Change, Carbon Dioxide/analysis, Renewable Energy, Air Pollution
9.
Front Neurosci ; 18: 1344114, 2024.
Article in English | MEDLINE | ID: mdl-38933813

ABSTRACT

One-shot learning, the ability to learn a new concept from a single instance, is a distinctive brain function that has garnered substantial interest in machine learning. While modeling the physiological mechanisms poses challenges, advancements in artificial neural networks have led to performance in specific tasks that rivals human capabilities. Proposing one-shot learning methods built on these advancements, especially those involving simple mechanisms, not only enhances technological development but also contributes to neuroscience by offering functionally valid hypotheses. Among the simplest methods for one-shot class addition with deep learning image classifiers is "weight imprinting," which uses the neural activity evoked by a new-class image as the corresponding new synaptic weights. Despite its simplicity, its relevance to neuroscience is ambiguous, and it often interferes with the original image classification, which is a significant drawback in practical applications. This study introduces a novel interpretation in which part of the weight imprinting process aligns with the Hebbian rule. We show that a single Hebbian-like process enables pre-trained deep learning image classifiers to perform one-shot class addition without any modification to the original classifier's backbone. Using non-parametric normalization to mimic the brain's fast Hebbian plasticity significantly reduces the interference observed in previous methods. Our method is one of the simplest and most practical for one-shot class addition tasks, and its reliance on a single fast Hebbian-like process contributes valuable insights to neuroscience hypotheses.
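To make the mechanism concrete, here is a minimal NumPy sketch of the basic imprinted-weights idea for a cosine-similarity classifier: the L2-normalized embedding of a single new-class example is copied in as that class's weight row. The backbone, dimensions, and helper names are hypothetical, and this is a generic illustration rather than the paper's exact Hebbian-normalization procedure.

```python
import numpy as np

def l2_normalize(v, eps=1e-12):
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + eps)

# Hypothetical pre-trained classifier: a frozen backbone producing D-dim embeddings
# and a normalized (cosine-similarity) output layer with one weight row per class.
D, n_classes = 128, 10
rng = np.random.default_rng(3)
W = l2_normalize(rng.normal(size=(n_classes, D)))    # existing class weight rows

def predict(embedding, W):
    # Cosine-similarity logits: normalized embedding dotted with normalized weights.
    return W @ l2_normalize(embedding)

# One-shot class addition by weight imprinting: the normalized embedding of a single
# new-class example becomes the new weight row (a Hebbian-like update, with the weight
# change proportional to pre-synaptic activity for the newly added output unit).
new_example_embedding = rng.normal(size=D)           # stand-in for backbone(new_image)
w_new = l2_normalize(new_example_embedding)
W = np.vstack([W, w_new])                            # classifier now has 11 classes

logits = predict(new_example_embedding, W)
print("Predicted class:", np.argmax(logits))         # the imprinted class wins for its own example
```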

10.
Restor Dent Endod ; 49(2): e21, 2024 May.
Article in English | MEDLINE | ID: mdl-38841381

ABSTRACT

Objectives: This paper aims to serve as a useful guide for sample size determination for various correlation analyses, based on effect sizes and confidence interval width. Materials and Methods: Sample size determinations are calculated for Pearson's correlation, Spearman's rank correlation, and Kendall's Tau-b correlation. Examples of sample size statements and their justification are also included. Results: For the same effect sizes, the sample size determinations of the 3 statistical tests differ. Based on an empirical calculation, a minimum sample size of 149 is usually adequate for performing both parametric and non-parametric correlation analyses to detect at least a moderate to excellent degree of correlation with acceptable confidence interval width. Conclusions: Determining the data assumption(s) is one of the challenges in offering a valid technique to estimate the required sample size for correlation analyses. Sample size tables are provided, and these will help researchers to estimate a minimum sample size requirement based on correlation analyses.
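For illustration only (these are the textbook Fisher-z approximations for Pearson's r, not the calculations behind the paper's tables), the sketch below derives an approximate sample size from a target effect size and power, and from a desired confidence interval width.

```python
import math
from scipy.stats import norm

def n_for_pearson(r, alpha=0.05, power=0.80):
    """Approximate n to detect correlation r (two-sided test of rho = 0)
    via the Fisher z transformation: n = ((z_a + z_b) / atanh(r))^2 + 3."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return math.ceil(((z_a + z_b) / math.atanh(r)) ** 2 + 3)

def n_for_ci_width(r, width, alpha=0.05):
    """Approximate n so the CI for r has roughly the requested total width,
    judged on the Fisher z scale around atanh(r)."""
    z_a = norm.ppf(1 - alpha / 2)
    half = (math.atanh(r + width / 2) - math.atanh(r - width / 2)) / 2
    return math.ceil((z_a / half) ** 2 + 3)

print(n_for_pearson(0.3))          # moderate correlation, about 85 at 80% power
print(n_for_ci_width(0.5, 0.2))    # CI of total width 0.2 around r = 0.5
```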

11.
J Environ Manage ; 360: 121132, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38754191

ABSTRACT

In the context of global climate change threatening human survival, and in a post-pandemic era that advocates for a global green and low-carbon economic recovery, conducting an in-depth analysis to assess whether green finance can effectively support low-carbon economic development from a dynamic perspective is crucial. Unlike existing research, which focuses solely on the average effects of green credit (GC) on carbon productivity (CP), we introduce a non-parametric panel data model to investigate GC's impact on CP across 30 provinces in China from 2003 to 2021, verifying a significant time-varying effect. Specifically, during the first phase (2003-2008), GC negatively impacted CP. In the second phase (2009-2014), this negative influence gradually diminished and transformed into a positive effect. In the third phase (2015-2021), GC continued to positively influence CP, although this effect became insignificant during the pandemic. Further subgroup analysis reveals that in the regions with low environmental regulations, GC did not significantly boost CP throughout the sample period. In contrast, in the regions with high environmental regulations, GC's positive effect persisted in the mid to late stages of the sample period. Additionally, compared to the regions with low levels of marketization, the impact of GC on CP was more pronounced in highly marketized regions. This indicates that the promoting effect of GC on CP depends on strong support from environmental regulations and well-functioning market mechanisms. By adopting a non-parametric approach, this study reveals variations in the impact of GC on CP across different stages and under the influence of the pandemic shock, offering new insights into the relationship between GC and China's CP.


Subject(s)
Carbon, Climate Change, China, Carbon/analysis
12.
Sensors (Basel) ; 24(10)2024 May 12.
Article in English | MEDLINE | ID: mdl-38793932

ABSTRACT

This paper investigates the detection of broken rotor bars in squirrel cage induction motors using a novel approach in which a triaxial sensor is randomly positioned over the motor surface. The study is conducted on two motors under laboratory conditions, where one motor is kept in a healthy state and the other is subjected to a broken rotor bar (BRB) fault. The induced electromotive force of the triaxial coils, recorded over ten days with 100 measurements per day, is statistically analyzed. Normality tests and graphical interpretation methods are used to evaluate the data distribution, and both parametric and non-parametric approaches are used to analyze the data. Both approaches show that the measurement method is valid and consistent over time and statistically distinguishes healthy motors from those with BRB defects when a reference or threshold value is specified. While the comparison between healthy motors shows a discrepancy, the quantitative analysis shows a smaller estimated difference in mean values between healthy motors than between healthy and BRB motors.
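A small sketch in the same spirit, with simulated induced-EMF readings standing in for the recorded data: check normality first, then run a parametric test alongside its non-parametric counterpart.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical induced-EMF readings (arbitrary units): 100 measurements per motor,
# standing in for one day of recordings from a healthy and a faulty (BRB) motor.
healthy = rng.normal(loc=1.00, scale=0.05, size=100)
brb     = rng.normal(loc=1.12, scale=0.07, size=100)

# Normality checks (Shapiro-Wilk); a low p-value argues against normality.
print("Shapiro healthy p =", stats.shapiro(healthy).pvalue)
print("Shapiro BRB     p =", stats.shapiro(brb).pvalue)

# Parametric comparison (Welch's t-test) and non-parametric counterpart (Mann-Whitney U).
t_res = stats.ttest_ind(healthy, brb, equal_var=False)
u_res = stats.mannwhitneyu(healthy, brb, alternative="two-sided")
print(f"Welch t-test:   p = {t_res.pvalue:.3g}")
print(f"Mann-Whitney U: p = {u_res.pvalue:.3g}")
```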

13.
Pain Med ; 2024 May 22.
Article in English | MEDLINE | ID: mdl-38775642

ABSTRACT

OBJECTIVE: The statistical analysis typically employed to compare pain before and after interventions assumes scores are normally distributed. The present study evaluates whether Numeric Rating Scale (specifically NRS-11) scores are indeed normally distributed in a clinically relevant cohort of adults with chronic axial spine pain pre- and post-analgesic intervention. METHODS: Retrospective review, from four academic medical centers, of prospectively collected data from a uniform pain diary administered to consecutive patients after undergoing medial branch blocks. The pain diary assessed NRS-11 scores immediately pre-injection and at 12 different time points post-injection, up to 48 hours. D'Agostino-Pearson tests were used to test normality at all time points. RESULTS: One hundred fifty pain diaries were reviewed, and despite normally distributed pre-injection NRS-11 scores (K2 = 0.655, p = 0.72), all post-injection NRS-11 data were not normally distributed (K2 = 9.70-17.62, p = 0.0001-0.008). CONCLUSIONS: Although the results of parametric analyses of NRS-11 scores are commonly reported in pain research, some properties of the NRS-11 do not satisfy the assumptions required for these analyses. The data demonstrate non-normal distributions in post-intervention NRS-11 scores, thereby violating a key requisite for parametric analysis. We urge pain researchers to consider appropriate statistical analysis and reporting for non-normally distributed NRS-11 scores to ensure accurate interpretation and communication of these data. Practicing pain physicians should similarly recognize that parametric post-intervention pain score statistics may not accurately describe the data and should expect manuscripts to use measures of normality to justify the selected statistical methods.
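As a hedged illustration (simulated scores, not the study data): the D'Agostino-Pearson K² test corresponds to scipy.stats.normaltest, and when post-intervention scores fail it, a paired non-parametric test such as the Wilcoxon signed-rank test is the natural fallback.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical NRS-11 pain scores (0-10 integers) for 150 patients: roughly symmetric
# pre-injection, skewed toward low scores post-injection.
pre  = np.clip(np.round(rng.normal(6, 1.8, 150)), 0, 10)
post = np.clip(np.round(rng.gamma(1.5, 1.2, 150)), 0, 10)

# D'Agostino-Pearson K^2 test; a low p-value indicates non-normality.
for name, scores in [("pre", pre), ("post", post)]:
    k2, p = stats.normaltest(scores)
    print(f"{name}-injection: K2 = {k2:.2f}, p = {p:.3g}")

# With non-normal post scores, a paired non-parametric test (Wilcoxon signed-rank)
# is a safer choice than a paired t-test.
w = stats.wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: p = {w.pvalue:.3g}")
```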

14.
Sci Rep ; 14(1): 11452, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38769323

ABSTRACT

This study addresses the drawbacks of traditional methods used in meter coefficient analysis, which are low accuracy and long processing time. A new method based on non-parametric analysis using the Back Propagation (BP) neural network is proposed to overcome these limitations. The study explores the classification and pattern recognition capabilities of the BP neural network by analyzing its non-parametric model and optimization methods. For model construction, the study uses the United Kingdom Domestic Appliance-Level Electricity dataset's meter readings and related data for training and testing the proposed model. The non-parametric analysis model is used for data pre-processing, feature extraction, and normalization to obtain the training and testing datasets. Experimental tests compare the proposed non-parametric analysis model based on the BP neural network with the traditional Least Squares Method (LSM). The results demonstrate that the proposed model significantly improves the accuracy indicators such as mean absolute error (MAE) and mean relative error (MRE) when compared with the LSM method. The proposed model achieves an MAE of 0.025 and an MRE of 1.32% in the testing dataset, while the LSM method has an MAE of 0.043 and an MRE of 2.56% in the same dataset. Therefore, the proposed non-parametric analysis model based on the BP neural network can achieve higher accuracy in meter coefficient analysis when compared with the traditional LSM method. This study provides a novel non-parametric analysis method with practical reference value for the electricity industry in energy metering and load forecasting.
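The following is a generic sketch of the kind of comparison described, not the paper's model or dataset: a linear least-squares fit versus a small back-propagation network (scikit-learn's MLPRegressor) on simulated, mildly non-linear meter-style data, scored with MAE and MRE.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

# Hypothetical stand-in for meter data: predict a calibration coefficient from two
# normalized features (e.g., load level and temperature) with a mildly non-linear response.
X = rng.uniform(0, 1, size=(2000, 2))
y = (1.0 + 0.3 * X[:, 0] - 0.2 * X[:, 1] ** 2
     + 0.05 * np.sin(6 * X[:, 0]) + rng.normal(0, 0.01, 2000))
X_train, X_test, y_train, y_test = X[:1600], X[1600:], y[:1600], y[1600:]

# Least-squares baseline: linear model fitted via the normal equations.
A = np.column_stack([np.ones(len(X_train)), X_train])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
y_lsm = np.column_stack([np.ones(len(X_test)), X_test]) @ coef

# Back-propagation neural network (a small MLP trained with gradient descent).
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
y_bp = mlp.predict(X_test)

def mae(y_true, y_pred): return np.mean(np.abs(y_true - y_pred))
def mre(y_true, y_pred): return np.mean(np.abs(y_true - y_pred) / np.abs(y_true))

print(f"LSM MAE = {mae(y_test, y_lsm):.4f}, MRE = {mre(y_test, y_lsm):.2%}")
print(f"BP  MAE = {mae(y_test, y_bp):.4f}, MRE = {mre(y_test, y_bp):.2%}")
```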

15.
Ann Appl Stat ; 18(1): 858-881, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38784669

ABSTRACT

In scientific studies involving analyses of multivariate data, basic but important questions often arise for the researcher: Is the sample exchangeable, meaning that the joint distribution of the sample is invariant to the ordering of the units? Are the features independent of one another, or perhaps the features can be grouped so that the groups are mutually independent? In statistical genomics, these considerations are fundamental to downstream tasks such as demographic inference and the construction of polygenic risk scores. We propose a non-parametric approach, which we call the V test, to address these two questions, namely, a test of sample exchangeability given dependency structure of features, and a test of feature independence given sample exchangeability. Our test is conceptually simple, yet fast and flexible. It controls the Type I error across realistic scenarios, and handles data of arbitrary dimensions by leveraging large-sample asymptotics. Through extensive simulations and a comparison against unsupervised tests of stratification based on random matrix theory, we find that our test compares favorably in various scenarios of interest. We apply the test to data from the 1000 Genomes Project, demonstrating how it can be employed to assess exchangeability of the genetic sample, or find optimal linkage disequilibrium (LD) splits for downstream analysis. For exchangeability assessment, we find that removing rare variants can substantially increase the p-value of the test statistic. For optimal LD splitting, the V test reports different optimal splits than previous approaches not relying on hypothesis testing. Software for our methods is available in R (CRAN: flintyR) and Python (PyPI: flintyPy).

16.
Entropy (Basel) ; 26(5)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38785636

ABSTRACT

Using information-theoretic quantities in practical applications with continuous data is often hindered by the fact that probability density functions need to be estimated in higher dimensions, which can become unreliable or even computationally unfeasible. To make these useful quantities more accessible, alternative approaches such as binned frequencies using histograms and k-nearest neighbors (k-NN) have been proposed. However, a systematic comparison of the applicability of these methods has been lacking. We wish to fill this gap by comparing kernel-density-based estimation (KDE) with these two alternatives in carefully designed synthetic test cases. Specifically, we wish to estimate the information-theoretic quantities: entropy, Kullback-Leibler divergence, and mutual information, from sample data. As a reference, the results are compared to closed-form solutions or numerical integrals. We generate samples from distributions of various shapes in dimensions ranging from one to ten. We evaluate the estimators' performance as a function of sample size, distribution characteristics, and chosen hyperparameters. We further compare the required computation time and specific implementation challenges. Notably, k-NN estimation tends to outperform other methods, considering algorithmic implementation, computational efficiency, and estimation accuracy, especially with sufficient data. This study provides valuable insights into the strengths and limitations of the different estimation methods for information-theoretic quantities. It also highlights the significance of considering the characteristics of the data, as well as the targeted information-theoretic quantity when selecting an appropriate estimation technique. These findings will assist scientists and practitioners in choosing the most suitable method, considering their specific application and available data. We have collected the compared estimation methods in a ready-to-use open-source Python 3 toolbox and, thereby, hope to promote the use of information-theoretic quantities by researchers and practitioners to evaluate the information in data and models in various disciplines.
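As one concrete example of the estimators compared above, here is a minimal Kozachenko-Leonenko k-NN estimator of differential entropy, checked against the closed-form entropy of a multivariate normal (a generic sketch, not the toolbox mentioned in the abstract).

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(x, k=4):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (in nats)
    for an (n, d) sample, using Euclidean distances."""
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    tree = cKDTree(x)
    # Distance to the k-th neighbour (the query returns the point itself at index 0).
    eps, _ = tree.query(x, k=k + 1)
    eps = eps[:, -1]
    log_c_d = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)   # log volume of the unit d-ball
    return digamma(n) - digamma(k) + log_c_d + d * np.mean(np.log(eps))

rng = np.random.default_rng(7)
sample = rng.normal(0, 1, size=(5000, 3))                    # 3-D standard normal
true_h = 0.5 * 3 * np.log(2 * np.pi * np.e)                  # closed form, about 4.257 nats
print(f"k-NN estimate: {knn_entropy(sample):.3f}  |  true: {true_h:.3f}")
```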

17.
Sci Rep ; 14(1): 9244, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38649776

ABSTRACT

Modelling of solar irradiation is paramount to renewable energy management, and warrants the inclusion of additive effects when predicting solar irradiation, since modelling additive effects can improve the forecasting accuracy of prediction frameworks. To help develop such frameworks, the current study modelled the additive effects using non-parametric quantile regression (QR). The approach applies quantile splines to approximate non-parametric components when finding the best relationships between covariates and the response variable. However, some additive effects are perceived as linear; thus, the study also included the partially linear additive quantile regression model (PLAQR) in the quest to find how best the additive effects can be modelled. As a result, a comparative investigation of the forecasting performances of the PLAQR, an additive quantile regression (AQR) model, and the new quantile generalised additive model (QGAM) was carried out using out-of-sample and probabilistic forecasting metric evaluations. Forecast density plots, Murphy diagrams, and results from the Diebold-Mariano (DM) hypothesis test were also analysed. The density plot, the curves on the Murphy diagram, and most metric scores computed for the QGAM were slightly better than for the PLAQR and AQR models. That is, even though the DM test indicates that the PLAQR and AQR models are less accurate than the QGAM, we could not conclude that the QGAM outright outperforms the PLAQR and AQR models. However, where particular probabilistic forecasting metrics are preferred, each model can be prioritised for the metric on which it performed best. The three models performed differently in different locations, but location was not a significant factor in their performance. In contrast, forecasting horizon and sample size influenced model performance differently in the three additive models, with the performance variations also depending on the metric being evaluated. Therefore, the study established the best forecasting horizons and sample sizes for the different metrics. It was concluded that a 20% forecasting horizon and a minimum sample size of 10,000 data points are ideal when modelling additive effects of solar irradiation using non-parametric QR.
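Since the probabilistic comparisons above hinge on quantile scores, here is a small sketch of the pinball (quantile) loss used to evaluate such forecasts, applied to two hypothetical 0.9-quantile forecasts of a simulated irradiance series.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Average pinball (quantile) loss at level tau; lower is better."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

rng = np.random.default_rng(8)
# Hypothetical hourly solar irradiance observations and two competing 0.9-quantile forecasts.
y = rng.gamma(3.0, 150.0, size=1000)                     # skewed, non-negative (W/m^2)
forecast_a = np.quantile(y, 0.9) * np.ones_like(y)       # flat climatological quantile
forecast_b = forecast_a * 0.8                            # systematically too low

for name, f in [("A (climatology)", forecast_a), ("B (too low)", forecast_b)]:
    print(f"{name}: pinball(0.9) = {pinball_loss(y, f, 0.9):.2f}")
```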

18.
Psychometrika ; 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38652357

ABSTRACT

We provide a framework for motivating and diagnosing the functional form in the structural part of nonlinear or linear structural equation models when the measurement model is a correctly specified linear confirmatory factor model. A mathematical population-based analysis provides asymptotic identification results for conditional expectations of a coordinate of an endogenous latent variable given exogenous and possibly other endogenous latent variables, and theoretically well-founded estimates of this conditional expectation are suggested. Simulation studies show that these estimators behave well compared to presently available alternatives. Practically, we recommend the estimator using Bartlett factor scores as input to classical non-parametric regression methods.

19.
Genome Biol ; 25(1): 96, 2024 04 15.
Article in English | MEDLINE | ID: mdl-38622747

ABSTRACT

We present a non-parametric statistical method called TDEseq that takes full advantage of smoothing splines basis functions to account for the dependence of multiple time points in scRNA-seq studies, and uses hierarchical structure linear additive mixed models to model the correlated cells within an individual. As a result, TDEseq demonstrates powerful performance in identifying four potential temporal expression patterns within a specific cell type. Extensive simulation studies and the analysis of four published scRNA-seq datasets show that TDEseq can produce well-calibrated p-values and up to 20% power gain over the existing methods for detecting temporal gene expression patterns.


Subject(s)
Gene Expression Profiling, Single-Cell Analysis, Sequence Analysis, RNA/methods, Single-Cell Analysis/methods, Gene Expression Profiling/methods, Computer Simulation, Gene Expression
20.
Entropy (Basel) ; 26(4)2024 Apr 14.
Article in English | MEDLINE | ID: mdl-38667889

ABSTRACT

We consider a constructive definition of the multivariate Pareto that factorizes the random vector into a radial component and an independent angular component. The former follows a univariate Pareto distribution, and the latter is defined on the surface of the positive orthant of the infinity-norm unit hypercube. We propose a method for inferring the distribution of the angular component by identifying its support as the limit of the positive orthant of the unit p-norm spheres, and introduce a projected gamma family of distributions defined through the normalization of a vector of independent random gammas to that space. This serves to construct a flexible family of distributions obtained as a Dirichlet process mixture of projected gammas. For model assessment, we discuss scoring methods appropriate to distributions on the unit hypercube. In particular, working with the energy score criterion, we develop a kernel metric that produces a proper scoring rule and present a simulation study to compare different modeling choices using the proposed metric. Using our approach, we describe the dependence structure of extreme values in integrated vapor transport (IVT) data, which describe the flow of atmospheric moisture along the coast of California. We find clear but heterogeneous geographical dependence.
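A rough sketch of the constructive factorization described in the first sentences, under stated assumptions (arbitrary Pareto and gamma shape parameters, independent gamma components); it only illustrates how such vectors can be simulated, not the paper's inference procedure.

```python
import numpy as np

rng = np.random.default_rng(9)

def sample_constructive_pareto(n, d, pareto_shape=1.0, gamma_shape=2.0):
    """Sketch: radial component R ~ univariate Pareto (support > 1); angular
    component S = G / ||G||_inf with G a vector of independent gammas, so S lies
    on the positive-orthant surface of the infinity-norm unit hypercube; X = R * S."""
    u = 1.0 - rng.uniform(size=n)                       # uniform in (0, 1]
    r = u ** (-1.0 / pareto_shape)                      # inverse-CDF Pareto radii
    g = rng.gamma(gamma_shape, size=(n, d))             # independent gammas
    s = g / g.max(axis=1, keepdims=True)                # project onto ||.||_inf = 1
    return r[:, None] * s

x = sample_constructive_pareto(5000, d=3)
r_back = x.max(axis=1)                                  # recovers R, since ||S||_inf = 1
print("min radius (should be >= 1):", round(float(r_back.min()), 3))
```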
