Low price
CHF 76.00
Print on demand - the copy will be sourced for you.
Over the last decade, differential privacy (DP) has emerged as the de facto standard privacy notion for research in privacy-preserving data analysis and publishing. The DP notion offers a strong privacy guarantee and has been applied to many data analysis tasks.
This Synthesis Lecture is the first of two volumes on differential privacy. It differs from existing books and surveys on differential privacy in that we take an approach balancing theory and practice. We focus on the empirical accuracy of algorithms rather than on asymptotic accuracy guarantees, while also trying to explain why the algorithms perform as they do. We likewise take a balanced approach to the semantic meaning of differential privacy, explaining both its strong guarantees and its limitations.
We start by examining the definition and basic properties of DP and the main primitives for achieving it. We then give a detailed discussion of the semantic privacy guarantee provided by DP and the caveats in applying it. Next, we review state-of-the-art mechanisms for publishing histograms of low-dimensional datasets, for conducting machine learning tasks such as classification, regression, and clustering, and for publishing information to answer marginal queries over high-dimensional datasets. Finally, we explain the sparse vector technique, including the many erroneous uses of it that have appeared in the literature.
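Among the primitives for achieving DP, the Laplace mechanism is the most fundamental: it releases a numeric query answer perturbed with Laplace noise scaled to the query's sensitivity divided by the privacy budget ε. A minimal sketch in Python (the function name and parameters are illustrative, not taken from the lecture):

```python
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Release true_answer + Laplace(scale = sensitivity / epsilon) noise,
    which satisfies epsilon-differential privacy."""
    rng = np.random.default_rng() if rng is None else rng
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a counting query ("how many records satisfy predicate P?")
# has sensitivity 1, because adding or removing a single record can
# change the count by at most 1.
noisy_count = laplace_mechanism(true_answer=1000, sensitivity=1.0, epsilon=0.5)
```

Calibrating the noise scale to sensitivity/ε is what ties the released value to the privacy guarantee: larger ε (weaker privacy) means less noise and better empirical accuracy, the trade-off the lecture studies.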
The planned Volume 2 will cover the use of DP in other settings, including high-dimensional datasets, graph datasets, the local setting, location privacy, and more. It will also discuss various relaxations of DP.
Authors
Ninghui Li is a professor of computer science at Purdue University. His research interests are in security and privacy. He received a Bachelor's degree from the University of Science and Technology of China in 1993 and a Ph.D. in computer science from New York University in 2000. Before joining the faculty of Purdue in 2003, he was a research associate in Stanford University's Computer Science Department for three years. Prof. Li is Vice Chair of the ACM Special Interest Group on Security, Audit and Control (SIGSAC). He serves, or has served, on the editorial boards of ACM Transactions on Privacy and Security (TOPS), Journal of Computer Security (JCS), IEEE Transactions on Dependable and Secure Computing, VLDB Journal, and ACM Transactions on Internet Technology. He has served on the program committees of many international conferences and workshops in computer security, databases, and data mining, including as Program Chair for the 2014 and 2015 ACM Conferences on Computer and Communications Security (CCS), ACM's flagship conference in the field of security and privacy.

Min Lyu has been a lecturer in the School of Computer Science and Technology at the University of Science and Technology of China since 2007. She received a Bachelor's degree and a Master's degree in applied mathematics from Anhui University in 1999 and 2002, respectively, and a Ph.D. in applied mathematics from the University of Science and Technology of China in 2005. Before joining the faculty there in 2007, she was a postdoctoral researcher in the same school for two years. She was a visiting scholar at Purdue University in 2015, hosted by Professor Ninghui Li and working on differential privacy. Her research interests are in security and privacy, social networks, and combinatorial mathematics.

Dong Su is a Ph.D. student in the Computer Science Department at Purdue University. He received a Bachelor's degree in software engineering from Tianjin University in 2005 and a Master's degree in computer science from the University of Chinese Academy of Sciences in 2010. He entered Purdue University in the Fall of 2010 and is working under the supervision of Dr. Ninghui Li on differentially private data publishing for data analysis. His research interests are in data privacy, information security, machine learning, and database management systems.

Weining Yang is a Ph.D. student in the Department of Computer Science at Purdue University. He attended Tsinghua University and graduated with a Bachelor's degree in computer science in 2011. He entered Purdue University in the Fall of 2011 and worked under the supervision of Dr. Ninghui Li on differentially private data publishing and password-based authentication. His research interests are in security and privacy, in particular private data publishing and user authentication.
Contents
Acknowledgments
Introduction
A Primer on ε-Differential Privacy
What Does DP Mean?
Publishing Histograms for Low-dimensional Datasets
Differentially Private Optimization
Publishing Marginals
The Sparse Vector Technique
Bibliography
Authors' Biographies