Workshop on Differential Privacy and Statistical Data Analysis
Description
How can we obtain scientific benefits from statistical analysis of sensitive data without compromising the privacy of the individuals who contribute their data? The past decade has seen rapid progress on this question due to the emergence of a mathematically rigorous privacy framework known as differential privacy. Informally, differential privacy provides a robust individual privacy guarantee, ensuring that no adversary, regardless of their capabilities, can learn much more about an individual user than they could have learned had that user's data never been collected.
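To make the informal guarantee above concrete, the sketch below shows the Laplace mechanism, one of the simplest differentially private algorithms (this example is illustrative and not drawn from the workshop materials; the function names are hypothetical). It answers a counting query by adding noise calibrated to the query's sensitivity and a privacy parameter epsilon, where smaller epsilon means stronger privacy.

```python
import math
import random

def laplace_noise(scale):
    """Sample from the Laplace(0, scale) distribution via inverse transform."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon):
    """Release an epsilon-differentially private count of records matching predicate.

    A counting query has sensitivity 1: adding or removing any one
    individual's record changes the true count by at most 1, so noise
    with scale 1/epsilon suffices for epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: a private count of positive responses in a small dataset.
records = [1, 0, 1, 1, 0]
noisy = dp_count(records, lambda r: r == 1, epsilon=0.5)
```

Because the output is randomized, repeating the query yields different answers; the guarantee is that the output distribution changes very little whether or not any single individual's record is included.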
After a decade of intense academic research demonstrating that many statistical tasks are compatible with differential privacy, it is now seeing wide adoption by a variety of organizations, including large companies like Google, Apple, and Uber. Most notably, the U.S. Census Bureau has chosen to adopt differential privacy for public data releases derived from its 2020 Census, which provides data for countless statistical purposes. Although much of the motivation for differential privacy comes from statistics, and many of the intended users are statisticians, the majority of differential privacy research has been conducted by computer scientists. Despite shared goals, the two communities differ profoundly in how they formulate problems, the applications they focus on, and the techniques they bring to bear. Given the recent wave of deployments of differential privacy for statistical analyses, this workshop aims to bridge the gap between these two communities. It will bring together leading statisticians and computer scientists working on differential privacy to learn about the state of the art in each other's areas, to transfer knowledge between communities, and to discuss the most important directions for a more cohesive future research agenda.