Speaker: Trần Thế Hùng
Time: 9:30, Thursday, 27/02/2020. Location: Room 612, Building A6, Institute of Mathematics (Viện Toán học), 18 Hoàng Quốc Việt.
Abstract: Scaling Bayesian optimisation (BO) to high-dimensional search spaces is an active and open research problem, particularly when no assumptions are made on the function structure. We propose a novel approach in which the acquisition function only requires maximisation on a discrete set of low-dimensional subspaces embedded in the original high-dimensional search space. We show that, in spite of this convenience, our algorithm remains convergent. In particular, the cumulative regret of our algorithm grows only sub-linearly with the number of iterations. More importantly, as is evident from our regret bounds, our algorithm provides a way to trade off the convergence rate against the number of subspaces used in the optimisation. Finally, when the number of subspaces is "sufficiently large", our algorithm's cumulative regret is at most $\mathcal{O}^{*}(\sqrt{T\gamma_T})$, as opposed to $\mathcal{O}^{*}(\sqrt{DT\gamma_T})$ for the GP-UCB of Srinivas et al. (2012), removing a crucial factor of $\sqrt{D}$, where $D$ is the dimensionality of the input space. We perform empirical experiments to evaluate our method extensively, showing that its sample efficiency is better than that of existing methods for many optimisation problems involving dimensions up to 5000.
Paper link: https://arxiv.org/abs/1911.11950
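For intuition, below is a minimal, illustrative sketch of the core idea stated in the abstract: maximising a GP-UCB acquisition function only over a discrete set of random low-dimensional linear subspaces embedded in the high-dimensional search space. The random-embedding construction, the toy objective, the parameter values, and the use of scikit-learn's GaussianProcessRegressor are assumptions made for this sketch, not the authors' implementation; see the paper for the actual algorithm and its regret analysis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy scale for illustration; the paper targets dimensions up to 5000.
D, d, n_subspaces, n_candidates, n_iters, beta = 100, 2, 5, 200, 20, 2.0
rng = np.random.default_rng(0)

def objective(x):
    # Toy high-dimensional objective: only the first 10 coordinates matter.
    return -np.sum((x[:10] - 0.5) ** 2)

# One random linear embedding A_k: R^d -> R^D per subspace (illustrative choice).
subspaces = [rng.standard_normal((D, d)) / np.sqrt(d) for _ in range(n_subspaces)]

x0 = rng.uniform(-1.0, 1.0, D)
X_obs, y_obs = [x0], [objective(x0)]
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for t in range(n_iters):
    gp.fit(np.array(X_obs), np.array(y_obs))
    best_x, best_ucb = None, -np.inf
    for A in subspaces:
        # Discrete candidate set in the low-dimensional box, embedded into R^D.
        Z = rng.uniform(-1.0, 1.0, (n_candidates, d))
        X_cand = np.clip(Z @ A.T, -1.0, 1.0)
        mu, sigma = gp.predict(X_cand, return_std=True)
        ucb = mu + np.sqrt(beta) * sigma          # GP-UCB acquisition value
        i = int(np.argmax(ucb))
        if ucb[i] > best_ucb:
            best_ucb, best_x = ucb[i], X_cand[i]
    # Evaluate the chosen point and update the observation set.
    X_obs.append(best_x)
    y_obs.append(objective(best_x))

print("Best value found:", max(y_obs))
```

Note that each iteration only searches a d-dimensional candidate set per subspace, so the cost of maximising the acquisition does not grow with D; the trade-off highlighted in the abstract is between the number of subspaces used and the convergence rate.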