Results 1 - 20 of 23
1.
Entropy (Basel) ; 26(7)2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39056927

ABSTRACT

This paper establishes a general framework for measuring statistical divergence: namely, for a pair of random variables that share a common range of values, quantifying the distance of the statistical distribution of one random variable from that of the other. The general framework is then applied to the topics of socioeconomic inequality and renewal processes. The framework and its applications are shown to yield and relate to the following: f-divergence, Hellinger divergence, Rényi divergence, and Kullback-Leibler divergence (also known as relative entropy); the Lorenz curve and socioeconomic inequality indices; the Gini index and its generalizations; the divergence of renewal processes from the Poisson process; and the divergence of anomalous relaxation from regular relaxation. Presenting a 'fresh' perspective on statistical divergence, this paper offers its readers a simple and transparent construction of statistical-divergence gauges, as well as novel paths that lead from statistical divergence to the aforementioned topics.
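
As a quick, hedged illustration of the gauges discussed above (our own sketch, not the paper's construction), the following Python snippet computes the Kullback-Leibler and Rényi divergences between two discrete distributions and shows that the Rényi divergence approaches the KL divergence as its order tends to 1:

```python
# Minimal sketch: KL and Rényi divergences between discrete distributions.
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p||q) in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def renyi_divergence(p, q, alpha):
    """Rényi divergence of order alpha (alpha > 0, alpha != 1) in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
for alpha in (0.5, 0.999, 2.0):
    print(f"D_{alpha}(p||q) = {renyi_divergence(p, q, alpha):.6f}")
print(f"KL(p||q)      = {kl_divergence(p, q):.6f}")  # matched as alpha -> 1
```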

2.
Entropy (Basel) ; 26(3)2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38539705

ABSTRACT

Exponential families are statistical models that serve as workhorses in statistics, information theory, and machine learning, among other fields. An exponential family can be normalized either subtractively by its cumulant or free-energy function, or equivalently divisively by its partition function. Both the cumulant and partition functions are strictly convex and smooth functions, inducing corresponding pairs of Bregman and Jensen divergences. It is well known that skewed Bhattacharyya distances between the probability densities of an exponential family amount to skewed Jensen divergences induced by the cumulant function between their corresponding natural parameters, and that in limit cases the sided Kullback-Leibler divergences amount to reverse-sided Bregman divergences. In this work, we first show that the α-divergences between non-normalized densities of an exponential family amount to scaled α-skewed Jensen divergences induced by the partition function. We then show how comparative convexity with respect to a pair of quasi-arithmetic means allows both convex functions and their arguments to be deformed, thereby defining dually flat spaces with corresponding divergences when ordinary convexity is preserved.
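
The following sketch (an illustration assuming the Bernoulli family, not code from the paper) checks numerically the well-known fact behind these constructions: the Bregman divergence induced by the cumulant function equals a Kullback-Leibler divergence with swapped natural parameters.

```python
# Minimal sketch: Bregman divergence of the Bernoulli cumulant vs. KL.
import numpy as np

def F(theta):            # cumulant (log-partition) of the Bernoulli family
    return np.log1p(np.exp(theta))

def F_prime(theta):      # gradient of F = mean parameter (sigmoid)
    return 1.0 / (1.0 + np.exp(-theta))

def bregman_F(t1, t2):   # B_F(t1, t2) = F(t1) - F(t2) - F'(t2)(t1 - t2)
    return F(t1) - F(t2) - F_prime(t2) * (t1 - t2)

def kl_bernoulli(m1, m2):
    return m1 * np.log(m1 / m2) + (1 - m1) * np.log((1 - m1) / (1 - m2))

t1, t2 = 0.3, -1.2                  # natural parameters
m1, m2 = F_prime(t1), F_prime(t2)   # corresponding mean parameters
print(bregman_F(t2, t1))            # B_F(theta2, theta1)
print(kl_bernoulli(m1, m2))         # equals KL(p_theta1 || p_theta2)
```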

3.
Entropy (Basel) ; 26(3)2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38539745

ABSTRACT

We consider privacy mechanisms for releasing data X=(S,U), where S is sensitive and U is non-sensitive. We introduce the robust local differential privacy (RLDP) framework, which provides strong privacy guarantees while preserving utility. This is achieved by providing robust privacy: our mechanisms provide privacy not only with respect to a publicly available estimate of the unknown true distribution, but also with respect to similar distributions. Such robustness mitigates the potential privacy leaks that might arise from the difference between the true distribution and the estimated one. At the same time, we mitigate the utility penalties that come with ordinary differential privacy, which involves making worst-case assumptions and dealing with extreme cases. We achieve robustness in privacy by constructing an uncertainty set based on a Rényi divergence. By analyzing the structure of this set and approximating it with a polytope, we can use robust optimization to find mechanisms with high utility. However, this relies on vertex enumeration and becomes computationally infeasible for large input spaces. Therefore, we also introduce two low-complexity algorithms that build on existing LDP mechanisms. We evaluate the utility and robustness of the mechanisms using numerical experiments and demonstrate that our mechanisms provide robust privacy while achieving utility close to optimal.
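
A minimal sketch of the robustness idea (our reading; the divergence order, names, and the radius eps below are assumptions, not the paper's code): the uncertainty set can be pictured as a Rényi-divergence ball around the public estimate.

```python
# Minimal sketch: membership in a Rényi-divergence ball around an estimate.
import numpy as np

def renyi_divergence(p, q, alpha):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

def in_uncertainty_set(q, p_hat, alpha=2.0, eps=0.05):
    """True if candidate distribution q lies in the Rényi ball around p_hat."""
    return renyi_divergence(q, p_hat, alpha) <= eps

p_hat = np.array([0.25, 0.25, 0.5])        # publicly available estimate
q_near = np.array([0.27, 0.24, 0.49])      # a similar distribution
q_far  = np.array([0.70, 0.20, 0.10])
print(in_uncertainty_set(q_near, p_hat))   # True: robustness must cover it
print(in_uncertainty_set(q_far,  p_hat))   # False: outside the ball
```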

4.
Entropy (Basel) ; 25(11)2023 Nov 05.
Article in English | MEDLINE | ID: mdl-37998208

ABSTRACT

The action of a noise operator on a code transforms it into a distribution on the respective space. Some common examples from information theory include Bernoulli noise acting on a code in the Hamming space and Gaussian noise acting on a lattice in the Euclidean space. We aim to characterize the cases when the output distribution is close to the uniform distribution on the space, as measured by the Rényi divergence of order α∈(1,∞]. A version of this question is known as the channel resolvability problem in information theory, and it has implications for security guarantees in wiretap channels, error correction, discrepancy, worst-to-average case complexity reductions, and many other problems. Our work quantifies the requirements for asymptotic uniformity (perfect smoothing) and identifies explicit code families that achieve it under the action of the Bernoulli and ball noise operators on the code. We derive expressions for the minimum rate of codes required to attain asymptotically perfect smoothing. In proving our results, we leverage recent results from harmonic analysis of functions on the Hamming space. Another result pertains to the use of code families in Wyner's transmission scheme on the binary wiretap channel. We identify explicit families that guarantee strong secrecy when applied in this scheme, showing that nested Reed-Muller codes can transmit messages reliably and securely over a binary symmetric wiretap channel with a positive rate. Finally, we establish a connection between smoothing and error correction in the binary symmetric channel.
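
For concreteness, a toy sketch (our construction, not one of the paper's code families): Bernoulli noise acting on the even-weight code of length 4, with smoothing measured by the Rényi divergence of the output distribution from the uniform distribution on the space.

```python
# Minimal sketch: smoothing a small binary code with Bernoulli noise.
import itertools
import numpy as np

n, delta, alpha = 4, 0.11, 2.0
code = [c for c in itertools.product([0, 1], repeat=n) if sum(c) % 2 == 0]

def output_prob(y):
    """P(y) = average over codewords of the Bernoulli noise kernel."""
    total = 0.0
    for c in code:
        d = sum(yi != ci for yi, ci in zip(y, c))   # Hamming distance
        total += delta**d * (1 - delta)**(n - d)
    return total / len(code)

P = np.array([output_prob(y) for y in itertools.product([0, 1], repeat=n)])
U = np.full(2**n, 2.0**-n)
D = np.log(np.sum(P**alpha * U**(1 - alpha))) / (alpha - 1)
print(f"D_{alpha}(P || Uniform) = {D:.6f} nats")   # near 0 means good smoothing
```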

5.
Entropy (Basel) ; 25(10)2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37895589

ABSTRACT

Variational inference provides a way to approximate probability densities through optimization, by optimizing an upper or a lower bound on the likelihood of the observed data (the evidence). The classic variational inference approach maximizes the Evidence Lower Bound (ELBO). Recent studies proposed optimizing the variational Rényi bound (VR) and the χ upper bound. However, the Monte Carlo (MC) approximations of these estimates either underestimate the bound or exhibit high variance. In this work, we introduce a new upper bound, termed the Variational Rényi Log Upper bound (VRLU), which builds on the existing VR bound. In contrast to the existing VR bound, the MC approximation of the VRLU bound maintains the upper bound property. Furthermore, we devise a (sandwiched) upper-lower bound variational inference method, termed the Variational Rényi Sandwich (VRS), to jointly optimize the upper and lower bounds. We present a set of experiments designed to evaluate the new VRLU bound and to compare the VRS method with the classic Variational Autoencoder (VAE) and the VR methods. Next, we apply the VRS approximation to the Multiple-Source Adaptation (MSA) problem, a real-world scenario where data are collected from multiple sources that differ from one another in their probability distribution over the input space. The main aim is to combine fairly accurate predictive models from these sources into an accurate model for new, mixed target domains; however, many domain adaptation methods assume prior knowledge of the data distribution in the source domains. Applying the suggested VRS density estimate to the MSA problem, we show, both theoretically and empirically, that it provides tighter error bounds and improved performance compared to leading MSA methods.
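
A toy sketch of the bounds in play (our conjugate Gaussian example; the VRLU bound itself is not reproduced here): Monte Carlo estimates of the ELBO and of the VR bound. For negative orders the exact VR bound upper-bounds the evidence, but its plain MC estimate is biased downward by Jensen's inequality, which is the defect the abstract describes.

```python
# Minimal sketch: ELBO and VR-bound MC estimates for z ~ N(0,1), x|z ~ N(z,1).
import numpy as np

rng = np.random.default_rng(0)
x = 1.0
log_evidence = -0.5 * np.log(2 * np.pi * 2.0) - x**2 / 4.0  # p(x) = N(x; 0, 2)

m, s = 0.4, 0.9                       # variational q(z) = N(m, s^2)
z = rng.normal(m, s, size=100_000)
log_joint = (-0.5 * np.log(2 * np.pi) - z**2 / 2
             - 0.5 * np.log(2 * np.pi) - (x - z)**2 / 2)
log_q = -0.5 * np.log(2 * np.pi * s**2) - (z - m)**2 / (2 * s**2)
log_w = log_joint - log_q             # log importance weights

elbo = log_w.mean()                   # MC estimate of the ELBO (alpha -> 1)
alpha = -1.0                          # alpha < 0: exact VR bound >= log p(x)
a = (1 - alpha) * log_w               # stable logsumexp-style evaluation
vr = (a.max() + np.log(np.mean(np.exp(a - a.max())))) / (1 - alpha)
print(f"log p(x) = {log_evidence:.4f}")
print(f"ELBO  MC ~ {elbo:.4f}  (lower bound)")
print(f"VR_-1 MC ~ {vr:.4f}  (upper bound; plain MC estimate biased downward)")
```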

6.
Entropy (Basel) ; 25(2)2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36832712

ABSTRACT

The Gaussian law reigns supreme in the information theory of analog random variables. This paper showcases a number of information theoretic results which find elegant counterparts for Cauchy distributions. New concepts such as that of equivalent pairs of probability measures and the strength of real-valued random variables are introduced here and shown to be of particular relevance to Cauchy distributions.

7.
Entropy (Basel) ; 24(3)2022 Mar 19.
Article in English | MEDLINE | ID: mdl-35327940

ABSTRACT

Existing work has conducted in-depth research and analysis of global differential privacy (GDP) and local differential privacy (LDP) based on information theory. However, the data privacy community has not systematically reviewed and analyzed GDP and LDP through the information-theoretic channel model. To this end, this survey systematically reviews GDP and LDP from the perspective of the information-theoretic channel. First, we present the privacy threat model under the information-theoretic channel. Second, we describe and compare the information-theoretic channel models of GDP and LDP. Third, we summarize and analyze the definitions, privacy-utility metrics, properties, and mechanisms of GDP and LDP under their channel models. Finally, we discuss the open problems of GDP and LDP based on different types of information-theoretic channel models. Our main contributions are a systematic survey of channel models, definitions, privacy-utility metrics, properties, and mechanisms for GDP and LDP from this perspective, together with a survey of differential-privacy synthetic data generation applications using generative adversarial networks and federated learning. This work should help readers systematically understand GDP and LDP through the information-theoretic channel model and promote further research and analysis of both.
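
As a concrete example of an LDP mechanism of the kind surveyed (a standard textbook mechanism, not taken from this survey), binary randomized response achieves ε-LDP: each user reports the true bit with probability e^ε/(1+e^ε), bounding the likelihood ratio of any output by e^ε.

```python
# Minimal sketch: binary randomized response (eps-LDP) with de-biased estimation.
import numpy as np

def randomized_response(bit, eps, rng):
    p_truth = np.exp(eps) / (1.0 + np.exp(eps))
    return bit if rng.random() < p_truth else 1 - bit

def estimate_mean(reports, eps):
    """De-bias the reported frequency to estimate the true proportion."""
    p = np.exp(eps) / (1.0 + np.exp(eps))
    return (np.mean(reports) - (1 - p)) / (2 * p - 1)

rng = np.random.default_rng(1)
eps = 1.0
true_bits = rng.random(50_000) < 0.3               # 30% of users hold a 1
reports = [randomized_response(int(b), eps, rng) for b in true_bits]
print(estimate_mean(reports, eps))                 # ~ 0.30
```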

8.
J Math Chem ; 60(1): 239-254, 2022.
Article in English | MEDLINE | ID: mdl-34840396

ABSTRACT

In this work, a new version of Rényi's divergence is presented. The expression obtained is used as a tool to identify molecules that could share chemical or structural properties, and a data set of 1641 molecules is used in the study. Our results suggest that this new form of the Rényi divergence could be a useful tool, eventually permitting complementary studies whose main goal is to obtain molecules with similar properties.

9.
Entropy (Basel) ; 23(5)2021 Apr 23.
Article in English | MEDLINE | ID: mdl-33922636

ABSTRACT

We give bounds on the difference between the weighted arithmetic mean and the weighted geometric mean. These imply refined Young inequalities and reverses of the Young inequality. We also study some properties of the difference between the weighted arithmetic mean and the weighted geometric mean. Applying the newly obtained inequalities, we show some results on the Tsallis divergence, the Rényi divergence, the Jeffreys-Tsallis divergence and the Jensen-Shannon-Tsallis divergence.
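
A numeric sketch of the quantities involved (our illustration, not the paper's refined bounds): the weighted AM-GM difference, and the Tsallis divergence, which recovers the Kullback-Leibler divergence as q → 1.

```python
# Minimal sketch: weighted AM-GM difference and the Tsallis divergence.
import numpy as np

# Weighted arithmetic vs geometric mean of a, b with weight v in [0, 1]
a, b, v = 3.0, 7.0, 0.35
am = (1 - v) * a + v * b
gm = a**(1 - v) * b**v
print(f"AM - GM = {am - gm:.6f}  (always >= 0)")

def tsallis_divergence(p, r, q):
    """D_q(p||r) = (sum p^q r^(1-q) - 1) / (q - 1)."""
    p, r = np.asarray(p, float), np.asarray(r, float)
    return float((np.sum(p**q * r**(1 - q)) - 1.0) / (q - 1.0))

p = np.array([0.5, 0.3, 0.2])
r = np.array([0.4, 0.4, 0.2])
kl = float(np.sum(p * np.log(p / r)))
for q in (0.5, 0.99, 1.01, 2.0):
    print(f"Tsallis D_{q} = {tsallis_divergence(p, r, q):.6f}")
print(f"KL            = {kl:.6f}")   # matched as q -> 1
```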

10.
Entropy (Basel) ; 23(4)2021 Apr 14.
Article in English | MEDLINE | ID: mdl-33919807

ABSTRACT

When applying a diagnostic technique to complex systems whose dynamics, constraints, and environment evolve over time, it quickly becomes worthwhile to re-evaluate the residuals used to detect faults and to select the most appropriate ones. For this purpose, the concept of adaptive diagnosis is introduced. In this work, the contributions of information theory are investigated in order to propose a fault-tolerant multi-sensor data fusion framework. This work is part of a line of studies proposing an architecture that combines a stochastic filter for state estimation with a diagnostic layer, with the aim of producing a safe and accurate state estimate from potentially inconsistent or erroneous sensor measurements. From the design of the residuals, using the α-Rényi divergence (α-RD), to the optimization of the decision threshold, through the construction of a function dedicated to the choice of α at each moment, we detail each step of the proposed automated decision-support framework. We also dwell on (1) the consequences of the degree of freedom provided by the α parameter and (2) the application-dictated policy for designing the α tuning function, which affects the overall performance of the system (detection rate, false alarm rate, and missed detection rate). Finally, we present a real application case on which this framework has been tested: a multi-sensor localization problem integrating sensors whose operating range varies with the environment crossed, which serves as a case study to illustrate the contributions of the approach and demonstrate its performance.
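
A minimal sketch of an α-RD residual (assuming scalar Gaussian models; the paper's sensor models and thresholds are not reproduced): a known closed form of the Rényi divergence between two univariate Gaussians, which recovers the KL divergence as α → 1.

```python
# Minimal sketch: alpha-Rényi divergence between univariate Gaussians.
import numpy as np

def renyi_gauss(mu1, s1, mu2, s2, alpha):
    """D_alpha(N(mu1,s1^2)||N(mu2,s2^2)); needs alpha*s2^2+(1-alpha)*s1^2 > 0."""
    s_a2 = alpha * s2**2 + (1 - alpha) * s1**2
    return (np.log(s2 / s1)
            + np.log(s2**2 / s_a2) / (2 * (alpha - 1))
            + alpha * (mu1 - mu2)**2 / (2 * s_a2))

def kl_gauss(mu1, s1, mu2, s2):
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

mu1, s1, mu2, s2 = 0.0, 1.0, 0.8, 1.3
for alpha in (0.5, 0.999, 2.0):
    print(f"D_{alpha} = {renyi_gauss(mu1, s1, mu2, s2, alpha):.6f}")
print(f"KL      = {kl_gauss(mu1, s1, mu2, s2):.6f}")   # matched as alpha -> 1
```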

11.
Entropy (Basel) ; 23(2)2021 Feb 05.
Article in English | MEDLINE | ID: mdl-33562882

ABSTRACT

Over the last six decades, the representation of error exponent functions for data transmission through noisy channels at rates below capacity has seen three distinct approaches: (1) through Gallager's E0 functions (with and without cost constraints); (2) in large-deviations form, in terms of conditional relative entropy and mutual information; and (3) through the α-mutual information and the Augustin-Csiszár mutual information of order α derived from the Rényi divergence [...].
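
For orientation, a standard computation behind approach (1) (our sketch, not code from the paper): Gallager's E0 function and the random-coding exponent E_r(R) = max_ρ E0(ρ) − ρR for a binary symmetric channel with a uniform input distribution.

```python
# Minimal sketch: Gallager's E0 and the random-coding exponent for a BSC.
import numpy as np

def gallager_E0(rho, P, W):
    """E0(rho) = -ln sum_y ( sum_x P(x) W(y|x)^(1/(1+rho)) )^(1+rho)."""
    inner = (P[:, None] * W**(1.0 / (1.0 + rho))).sum(axis=0)
    return -np.log(np.sum(inner**(1.0 + rho)))

p = 0.1                                   # BSC crossover probability
W = np.array([[1 - p, p], [p, 1 - p]])    # W[x, y] = P(y | x)
P = np.array([0.5, 0.5])                  # uniform input distribution

R = 0.2                                   # rate in nats per channel use
rhos = np.linspace(0.0, 1.0, 1001)
Er = max(gallager_E0(r, P, W) - r * R for r in rhos)
print(f"E_r({R}) = {Er:.6f} nats")        # positive since R is below capacity
```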

12.
Entropy (Basel) ; 22(3)2020 Mar 11.
Article in English | MEDLINE | ID: mdl-33286090

ABSTRACT

Motivated by a horse betting problem, a new conditional Rényi divergence is introduced. It is compared with the conditional Rényi divergences that appear in the definitions of the dependence measures by Csiszár and Sibson, and the properties of all three are studied with emphasis on their behavior under data processing. In the same way that Csiszár's and Sibson's conditional divergences lead to their respective dependence measures, the new conditional divergence leads to the Lapidoth-Pfister mutual information. Moreover, the new conditional divergence is also related to the Arimoto-Rényi conditional entropy and to Arimoto's measure of dependence. In the second part of the paper, the horse betting problem is analyzed where, instead of Kelly's expected log-wealth criterion, a more general family of power-mean utility functions is considered. The key role in the analysis is played by the Rényi divergence, and in the setting where the gambler has access to side information, the new conditional Rényi divergence is key. The setting with side information also provides another operational meaning to the Lapidoth-Pfister mutual information. Finally, a universal strategy for independent and identically distributed races is presented that, without knowing the winning probabilities or the parameter of the utility function, asymptotically maximizes the gambler's utility function.
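
A numeric sketch of the classical Kelly setting that the paper generalizes (our illustration): for a horse race with win probabilities p and odds o, betting proportionally to p maximizes the expected log-wealth growth rate Σ p_i log(b_i o_i).

```python
# Minimal sketch: Kelly proportional betting maximizes log-wealth growth.
import numpy as np

p = np.array([0.5, 0.3, 0.2])      # win probabilities
o = np.array([2.0, 4.0, 5.0])      # odds paid per unit bet on each horse

def growth_rate(b):
    """Expected log-wealth growth per race for betting proportions b."""
    return float(np.sum(p * np.log(b * o)))

b_kelly = p                                       # proportional betting
b_naive = np.array([1/3, 1/3, 1/3])
print(f"Kelly: {growth_rate(b_kelly):.4f} nats/race")
print(f"Naive: {growth_rate(b_naive):.4f} nats/race")   # strictly smaller
```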

13.
Entropy (Basel) ; 22(12)2020 Dec 12.
Article in English | MEDLINE | ID: mdl-33322766

ABSTRACT

Importance sampling is a Monte Carlo method in which samples are obtained from an alternative proposal distribution. It can be used to focus the sampling process on the relevant parts of the space, thus reducing the variance. Selecting the proposal that leads to the minimum variance can be formulated as an optimization problem and solved, for instance, by a variational approach. Variational inference selects, from a given family, the distribution that minimizes the divergence to the distribution of interest. The Rényi projection of order 2 leads to the minimum-variance importance sampling estimator, but its computation is very costly. In this study, for discrete distributions that factorize over probabilistic graphical models, we propose and evaluate an approximate method for projecting onto fully factored distributions. Our evaluation suggests that a proposal distribution mixing the information projection with the approximate Rényi projection of order 2 could be interesting from a practical perspective.
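
A textbook sketch of the variance-minimization fact underlying the paper (our illustration, not the proposed projection method): for a discrete target, the proposal proportional to |f|·p minimizes the variance of the importance sampling estimator of E_p[f(X)].

```python
# Minimal sketch: importance sampling variance for different proposals.
import numpy as np

p = np.array([0.1, 0.2, 0.3, 0.4])      # target distribution
f = np.array([10.0, 1.0, 0.5, 0.2])     # function of interest
mu = np.sum(p * f)                      # exact expectation

def is_variance(q):
    """Per-sample variance of the estimator f(X) p(X)/q(X), X ~ q."""
    return float(np.sum((f * p)**2 / q) - mu**2)

q_opt = np.abs(f) * p / np.sum(np.abs(f) * p)   # minimum-variance proposal
print(is_variance(p))       # sampling from the target itself
print(is_variance(q_opt))   # strictly smaller (zero here because f >= 0)
```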

14.
Comput Struct Biotechnol J ; 18: 1830-1837, 2020.
Article in English | MEDLINE | ID: mdl-32728406

ABSTRACT

Single-cell transcriptomics offers a powerful way to reveal the heterogeneity of individual cells. To date, many information-theoretic approaches have been proposed to assess diversity and similarity and to characterize the latent heterogeneity in transcriptome data. Diversity reflects gene expression variation and can facilitate the identification of signature genes, while similarity unravels co-expression patterns for cell-type clustering. In this review, we summarize 16 information-theoretic measures used for evaluating diversity and similarity in single-cell transcriptomic data, provide references, and shed light on selected theoretical properties to guide the choice of a proper measure in general settings. We further provide an R package assembling the discussed approaches to support researchers' own single-cell transcriptome studies. Finally, we discuss prospective applications of diversity and similarity measures for depicting heterogeneity in single-cell multi-omics data.

15.
Entropy (Basel) ; 21(3)2019 Mar 04.
Article in English | MEDLINE | ID: mdl-33266958

ABSTRACT

Although Shannon mutual information has been widely used, its effective calculation is often difficult for many practical problems, including those in neural population coding. Asymptotic formulas based on Fisher information sometimes provide accurate approximations to the mutual information, but this approach is restricted to continuous variables because the calculation of Fisher information requires derivatives with respect to the encoded variables. In this paper, we consider information-theoretic bounds and approximations of the mutual information based on Kullback-Leibler divergence and Rényi divergence. We propose several information metrics to approximate Shannon mutual information in the context of neural population coding. While our asymptotic formulas all work for discrete variables, one of them has consistent performance and high accuracy regardless of whether the encoded variables are discrete or continuous. We performed numerical simulations and confirmed that our approximation formulas were highly accurate for approximating the mutual information between the stimuli and the responses of a large neural population. These approximation formulas may prove convenient for applying information theory to many practical and theoretical problems.

16.
Entropy (Basel) ; 21(8)2019 Aug 08.
Article in English | MEDLINE | ID: mdl-33267491

ABSTRACT

Two families of dependence measures between random variables are introduced. They are based on the Rényi divergence of order α and the relative α-entropy, respectively, and both dependence measures reduce to Shannon's mutual information when their order α is one. The first measure shares many properties with the mutual information, including the data-processing inequality, and can be related to the optimal error exponents in composite hypothesis testing. The second measure does not satisfy the data-processing inequality, but appears naturally in the context of distributed task encoding.
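
As a hedged numeric illustration of the α → 1 reduction (the paper's measures involve a minimization over auxiliary distributions, which this sketch omits): the Rényi divergence between a joint distribution and the product of its marginals tends to Shannon's mutual information as α → 1.

```python
# Minimal sketch: Rényi divergence from independence approaches mutual information.
import numpy as np

Pxy = np.array([[0.30, 0.10],
                [0.05, 0.55]])               # joint distribution of (X, Y)
Px, Py = Pxy.sum(axis=1), Pxy.sum(axis=0)    # marginals
Pind = np.outer(Px, Py)                      # product of the marginals

mi = float(np.sum(Pxy * np.log(Pxy / Pind)))
for alpha in (0.5, 0.99, 1.01, 2.0):
    d = np.log(np.sum(Pxy**alpha * Pind**(1 - alpha))) / (alpha - 1)
    print(f"D_{alpha} = {d:.6f}")
print(f"MI     = {mi:.6f}")                  # matched as alpha -> 1
```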

17.
Entropy (Basel) ; 20(5)2018 May 19.
Article in English | MEDLINE | ID: mdl-33265473

ABSTRACT

This paper focuses on f-divergences and consists of three main contributions. The first introduces integral representations of a general f-divergence by means of the relative information spectrum. The second provides a new approach to deriving f-divergence inequalities and exemplifies their utility in the setting of Bayesian binary hypothesis testing. The last part further studies the local behavior of f-divergences.
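
For reference, the standard definition underlying the paper (our sketch, not its integral representations): a generic f-divergence D_f(P||Q) = Σ q(y) f(p(y)/q(y)) for a convex generator f with f(1) = 0, with three common instantiations.

```python
# Minimal sketch: a generic f-divergence and common convex generators.
import numpy as np

def f_divergence(p, q, f):
    """D_f(P||Q) = sum_y q(y) f(p(y)/q(y)), assuming q > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f(p / q)))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

print(f_divergence(p, q, lambda t: t * np.log(t)))        # KL divergence
print(f_divergence(p, q, lambda t: 0.5 * np.abs(t - 1)))  # total variation
# squared Hellinger distance (conventions differ by a factor of 1/2)
print(f_divergence(p, q, lambda t: (np.sqrt(t) - 1)**2))
```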

18.
Entropy (Basel) ; 20(6)2018 Jun 12.
Article in English | MEDLINE | ID: mdl-33265546

ABSTRACT

This short note addresses the problem of autonomous on-line path-planning for exploration and occupancy-grid mapping with a mobile robot. The underlying algorithm for simultaneous localisation and mapping (SLAM) is based on random finite set (RFS) modelling of ranging sensor measurements, implemented as a Rao-Blackwellised particle filter. Path-planning in general must trade off exploration (which reduces the uncertainty in the map) against exploitation (which reduces the uncertainty in the robot pose). In this note we propose a reward function based on the Rényi divergence between the prior and the posterior densities, with RFS modelling of sensor measurements. This approach yields a joint map-pose uncertainty measure without the need to scale and tune the weights of the two uncertainties.
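
A simplified sketch of such a reward (our own, assuming independent Bernoulli occupancy cells instead of the note's RFS model): the Rényi divergence between prior and posterior occupancy probabilities, summed over grid cells, as an information gain for a candidate action.

```python
# Minimal sketch: Rényi-divergence information reward over an occupancy grid.
import numpy as np

def renyi_bernoulli(p, q, alpha):
    """D_alpha(Ber(p) || Ber(q)), elementwise over arrays of probabilities."""
    s = p**alpha * q**(1 - alpha) + (1 - p)**alpha * (1 - q)**(1 - alpha)
    return np.log(s) / (alpha - 1)

prior = np.array([0.5, 0.5, 0.5, 0.5])        # unexplored cells
posterior = np.array([0.9, 0.1, 0.6, 0.5])    # after a candidate scan
reward = float(np.sum(renyi_bernoulli(posterior, prior, alpha=0.5)))
print(f"information reward = {reward:.4f} nats")   # higher = more informative
```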

19.
Entropy (Basel) ; 20(8)2018 Aug 08.
Article in English | MEDLINE | ID: mdl-33265676

ABSTRACT

This article deals with new concepts in a product MV-algebra, namely, the concepts of Rényi entropy and Rényi divergence. We define the Rényi entropy of order q of a partition in a product MV-algebra and its conditional version, and we study their properties. It is shown that the proposed concepts are consistent, in the limit as q goes to 1, with the Shannon entropy of partitions in a product MV-algebra defined and studied by Petrovicová (Soft Comput. 2000, 4, 41-44). Moreover, we introduce and study the notion of Rényi divergence in a product MV-algebra. It is proven that the Kullback-Leibler divergence of states on a given product MV-algebra, introduced by Markechová and Riecan in (Entropy 2017, 19, 267), can be obtained as the limit of their Rényi divergence. In addition, the relationship between the Rényi entropy and the Rényi divergence, as well as the relationship between the Rényi divergence and the Kullback-Leibler divergence, in a product MV-algebra are examined.
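
For orientation, the classical formulas that the product MV-algebra notions generalize can be sketched as follows (standard definitions, not the paper's MV-algebraic ones); both recover the Shannon entropy and the Kullback-Leibler divergence in the limit q → 1.

```latex
% Rényi entropy of order q of a partition \xi = \{A_1,\dots,A_n\} under a
% state \mu, and the Rényi divergence of two states \mu, \nu:
H_q(\xi) = \frac{1}{1-q}\,\log \sum_{i=1}^{n} \mu(A_i)^{q},
\qquad
D_q(\mu \,\|\, \nu) = \frac{1}{q-1}\,\log \sum_{i=1}^{n} \mu(A_i)^{q}\,\nu(A_i)^{1-q}.
```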

20.
Entropy (Basel) ; 20(12)2018 Nov 22.
Article in English | MEDLINE | ID: mdl-33266620

ABSTRACT

This paper provides tight bounds on the Rényi entropy of a function of a discrete random variable with a finite number of possible values, where the considered function is not one-to-one. To that end, a tight lower bound on the Rényi entropy of a discrete random variable with finite support is derived as a function of the size of the support and the ratio of the maximal to minimal probability masses. This work was inspired by the recently published paper by Cicalese et al., which focuses on the Shannon entropy, and it strengthens and generalizes the results of that paper to Rényi entropies of arbitrary positive orders. In view of these generalized bounds and the works by Arikan and Campbell, non-asymptotic bounds are derived for guessing moments and lossless data compression of discrete memoryless sources.
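
A numeric sketch of the setting (our illustration, not the paper's tight bounds): merging values of X through a non-one-to-one function can only decrease the Rényi entropy, for any positive order α.

```python
# Minimal sketch: Rényi entropy of X versus a non-injective function of X.
import numpy as np

def renyi_entropy(p, alpha):
    p = np.asarray(p, float)
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log(p)))          # Shannon entropy
    return float(np.log(np.sum(p**alpha)) / (1.0 - alpha))

p_X = np.array([0.4, 0.3, 0.2, 0.1])   # X on {0, 1, 2, 3}
# f merges the last two values: f(0)=0, f(1)=1, f(2)=f(3)=2 (not one-to-one)
p_fX = np.array([0.4, 0.3, 0.3])

for alpha in (0.5, 1.0, 2.0, 10.0):
    print(alpha, renyi_entropy(p_X, alpha), renyi_entropy(p_fX, alpha))
    # H_alpha(f(X)) <= H_alpha(X) in every row
```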
