Implementing differential privacy in machine learning
Speaker: Trần Bảo Trung

Time: 9:30 AM, Thursday, 17 October 2019

Venue: Rooms 611-612, Building A6, Institute of Mathematics (Viện Toán học)

Abstract: Privacy risks exist for companies like Grab that extensively collect, use, and publish sensitive data. To address user privacy concerns, a relatively new formulation of privacy called differential privacy has emerged and has been adopted by various companies. In this project, we study the theoretical framework of differential privacy and its applicability to machine learning algorithms, with particular attention to random forests. To demonstrate the impact of differential privacy on prediction accuracy, we built a differentially private random forest algorithm from scratch and used Scikit-learn's random forest as a benchmark. For our analysis we use the NYC taxi dataset, chosen to align with Grab's interests. We also provide a literature review of other state-of-the-art differentially private algorithms as future directions for Grab.
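As background for the talk, the sketch below illustrates the Laplace mechanism, the standard noise-adding primitive behind ε-differential privacy; it is not taken from the speaker's implementation, and the query, data, and ε values are illustrative assumptions only.

```python
import numpy as np

def laplace_count(values, epsilon):
    """Return an epsilon-differentially private count of `values`.

    A counting query has sensitivity 1 (adding or removing one record
    changes the answer by at most 1), so adding noise drawn from
    Laplace(scale = 1 / epsilon) satisfies epsilon-differential privacy.
    """
    true_count = len(values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative usage: a smaller epsilon (stronger privacy) gives noisier answers.
records = list(range(1000))                  # placeholder data, true count = 1000
print(laplace_count(records, epsilon=0.1))   # noisier estimate
print(laplace_count(records, epsilon=1.0))   # closer to the true count
```

In a differentially private random forest, noisy primitives of this kind are typically used for the per-node counts or split scores, which is the source of the accuracy loss the talk compares against the non-private Scikit-learn benchmark.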
