Tunable Measures for Information Leakage and Applications to Privacy-Utility Tradeoffs
In the first half of the paper, we introduce a tunable measure of information leakage called maximal α-leakage. This measure quantifies the maximal gain of an adversary in refining a tilted version of its posterior belief of any (potentially random) function of a data set, conditioned on a released data set. The choice of α determines the specific adversarial action, ranging from refining a belief for α = 1 to guessing the best posterior for α = ∞; for these extremal values, maximal α-leakage simplifies to mutual information and maximal leakage, respectively. For α ∈ (1, ∞), this measure is shown to equal the Arimoto channel capacity of order α. We also show that maximal α-leakage satisfies data processing inequalities and sub-additivity (a composition property).

In the second half of the paper, we use maximal α-leakage as the privacy measure and study the problem of data publishing with privacy guarantees, wherein the utility of the released data is ensured via a hard distortion constraint. Unlike average distortion, hard distortion provides a deterministic guarantee of fidelity. We show that under a hard distortion constraint, both the optimal mechanism and the optimal privacy-utility tradeoff are invariant to the value of α for any α > 1, so the tunable leakage measure effectively behaves only as one of its two extrema: mutual information for α = 1 and maximal leakage for α = ∞.
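As a sketch of the quantities referenced above (notation ours, following the standard definitions rather than the abstract itself; ℒ_α^max denotes maximal α-leakage from X to Y, and I_α^A denotes Arimoto mutual information of order α), the three regimes take the forms:

  ℒ_α^max(X → Y) = I(X; Y)                                      for α = 1 (mutual information),
  ℒ_α^max(X → Y) = sup over P_X̃ of I_α^A(X̃; Y)                 for α ∈ (1, ∞) (Arimoto channel capacity of order α),
  ℒ_α^max(X → Y) = log Σ_y max_{x: P_X(x) > 0} P_{Y|X}(y|x)      for α = ∞ (the standard closed form for maximal leakage).

In the middle line, the supremum is taken over input distributions on the alphabet of X; see the full text for the formal definitions and proofs.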