A randomized computation M satisfies $\varepsilon$-differential privacy if for any adjacent data sets $D_1$ and $D_2$, and any subset $C$ of possible outcomes $\text{Range}(M)$:

$\Pr[M(D_1) \in C] \leq e^{\varepsilon} \cdot \Pr[M(D_2) \in C]$
Dwork, Cynthia, and Aaron Roth. "The algorithmic foundations of differential privacy." Foundations and Trends in Theoretical Computer Science 9.3-4 (2014): 211-407.
Two databases D1 and D2 are neighbouring (adjacent) if they agree on all but a single entry.
If the mechanism behaves nearly identically on D1 and D2, an attacker can’t tell whether D1 or D2 was used (and hence can’t learn much about any single individual).
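A minimal sketch (not from the cited works) of the Laplace mechanism for a counting query, which satisfies this definition with sensitivity 1; the toy databases, predicate, and ε value below are made-up assumptions:

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Answer a counting query with the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one record
    changes the true count by at most 1), so adding Laplace(1/epsilon)
    noise gives epsilon-differential privacy.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Neighbouring databases: they differ in a single entry (Ram's value).
D1 = [{"name": "Ram", "happy": True}, {"name": "Sita", "happy": False}]
D2 = [{"name": "Ram", "happy": False}, {"name": "Sita", "happy": False}]

# With the same epsilon, the noisy answers on D1 and D2 are statistically
# close, so an attacker cannot reliably tell which database was used.
print(laplace_count(D1, lambda r: r["happy"], epsilon=0.5))
print(laplace_count(D2, lambda r: r["happy"], epsilon=0.5))
```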
Narayanan, Arvind, and Vitaly Shmatikov. "How to break anonymity of the netflix prize dataset." arXiv preprint cs/0610105 (2006).
Sensitive user data: anything that must be protected against unauthorised access.
Federated Learning. https://federated.withgoogle.com. Accessed on 1st November, 2020
McMahan, H. Brendan, et al. "Communication-Efficient Learning of Deep Networks from Decentralized Data." arXiv preprint arXiv:1602.05629 (2017).
Chen, Si, Ruoxi Jia, and Guo-Jun Qi. "Improved Techniques for Model Inversion Attack." arXiv preprint arXiv:2010.04092 (2020).
Zhang, Yuheng, et al. "The secret revealer: generative model-inversion attacks against deep neural networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
Bonawitz, Keith, et al. "Practical secure aggregation for privacy-preserving machine learning." Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017.
Carlini, Nicholas, et al. "The secret sharer: Evaluating and testing unintended memorization in neural networks." 28th USENIX Security Symposium (USENIX Security 19). 2019.
- Allow only group queries? Even that is not enough: ask "How many are happy?" and "How many other than Ram are happy?", subtract the two answers, and you learn whether Ram is happy (a differencing attack; see the sketch below).
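A small, hypothetical illustration of this differencing attack (the names and happiness values are made up):

```python
# Even if only group (aggregate) queries are allowed, two queries can be
# subtracted to reveal a single individual's value.
database = {"Ram": True, "Sita": False, "Gita": True}  # is each person happy?

def count_happy(db, exclude=None):
    """Group query: how many people (optionally excluding one) are happy?"""
    return sum(happy for name, happy in db.items() if name != exclude)

total = count_happy(database)                        # "How many are happy?"
without_ram = count_happy(database, exclude="Ram")   # "How many other than Ram are happy?"

# The difference is exactly Ram's value: group queries alone do not protect him.
print("Ram is happy:", bool(total - without_ram))
```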








$\min_{w \in \mathbb{R}^d} f(w)$, where $f(w) = \sum_{k=1}^{K} \frac{n_k}{n} F_k(w)$ and $F_k(w) = \frac{1}{n_k}\sum_{i \in P_k} f_i(w)$
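A minimal FedAvg-style sketch consistent with this objective, assuming a linear model with squared loss and synthetic data; the client sizes, learning rate, and number of rounds are illustrative, not from the slides:

```python
import numpy as np

def federated_averaging(clients, global_w, rounds=10, lr=0.1):
    """Sketch of FedAvg for the objective above.

    Each client k holds n_k local examples (X_k, y_k). In every round the
    server sends the current model, each client takes a local gradient step
    on its own loss F_k(w), and the server averages the returned models
    weighted by n_k / n.
    """
    n = sum(len(y_k) for _, y_k in clients)
    for _ in range(rounds):
        local_models = []
        for X_k, y_k in clients:
            w = global_w.copy()
            # One local gradient step on F_k(w) (mean squared error here).
            grad = X_k.T @ (X_k @ w - y_k) / len(y_k)
            w = w - lr * grad
            local_models.append((len(y_k), w))
        # Aggregate as sum_k (n_k / n) * w_k, mirroring f(w) = sum_k (n_k / n) F_k(w).
        global_w = sum(n_k * w_k for n_k, w_k in local_models) / n
    return global_w

# Two hypothetical clients with different amounts of local data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)),
           (rng.normal(size=(50, 3)), rng.normal(size=50))]
print(federated_averaging(clients, global_w=np.zeros(3)))
```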








---

# Is that all?

---

# Fairness

## To understand fairness, we need to understand discrimination:

- Disparate treatment: Intentionally treating an individual differently based on his/her membership in a particular class.
- Disparate impact: Negatively affecting members of one class more than others through a policy that appears neutral (a common way to quantify this is sketched at the end of this section).

*Pessach, Dana, and Erez Shmueli. "Algorithmic fairness." arXiv preprint arXiv:2001.09784 (2020).*

---

# Sources of bias/discrimination

## Some sources:

- Data (sensitive attributes: race, gender, age, etc.)
- Biased data
- Missing data
- Incorrect data

---
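A hedged sketch (not from the cited survey) of one common way to quantify disparate impact: the ratio of positive-outcome rates between groups, where ratios below roughly 0.8 are often flagged under the "four-fifths rule". The predictions and sensitive attribute below are made up:

```python
import numpy as np

def disparate_impact_ratio(y_pred, sensitive):
    """Ratio of positive-outcome rates: unprivileged group (0) over privileged group (1)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_unprivileged = y_pred[sensitive == 0].mean()
    rate_privileged = y_pred[sensitive == 1].mean()
    return rate_unprivileged / rate_privileged

# Hypothetical binary predictions and a binary sensitive attribute.
y_pred    = [1, 0, 1, 1, 0, 1, 0, 0]
sensitive = [1, 1, 1, 1, 0, 0, 0, 0]
print(disparate_impact_ratio(y_pred, sensitive))  # 0.25 / 0.75 ≈ 0.33 -> flagged as disparate impact
```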