WEEKLY ACTIVITIES

On the acceleration of first-order methods: Applications in optimization and learning
Speaker: Prof. Samir Adly

Time: 10h15 - 11h15, Tuesday, December 20th, 2022

Location: Room 617, building A6, Institute of Mathematics (18 Hoang Quoc Viet, Cau Giay, Hanoi)

Abstract: In this talk, accessible to a broad audience of researchers and students, we survey recent advances in fast first-order optimization algorithms. The acceleration of optimization methods is an active research topic, pursued by many teams in France and worldwide. First-order methods, such as gradient descent or stochastic gradient descent, have gained considerable popularity. A major development is due to the mathematician Yurii Nesterov, who proposed in 1983 a class of accelerated gradient methods with a faster global convergence rate than gradient descent. We can also mention the FISTA algorithm, proposed by Beck and Teboulle in 2009, which has been very successful in the machine learning, signal, and image processing communities. Gradient-based optimization algorithms can also be studied from the perspective of ordinary differential equations (ODEs). This point of view makes it possible to derive new algorithms by discretizing these ODEs and to improve their performance through acceleration techniques, while keeping the low computational cost needed for the analysis of massive data. We will also discuss the Ravine method, introduced by Gelfand and Tsetlin in 1961. The Nesterov accelerated gradient method and the Ravine method are closely related: each can be deduced from the other by reversing the order of the extrapolation and gradient operations in their definitions. More surprisingly, they are based on the same equations. This is why the Ravine method is often used by practitioners and sometimes confused with Nesterov's accelerated gradient. The presentation will be illustrated with historical facts and open questions.
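To make the relation between the two methods concrete, the following minimal sketch (not part of the speaker's material) compares plain gradient descent, Nesterov's accelerated gradient, and the Ravine method on a simple quadratic. It uses the standard extrapolation coefficient (k-1)/(k+2); the test function, step size, and iteration count are illustrative assumptions only.

```python
import numpy as np

def grad_descent(grad, x0, step, iters):
    """Plain gradient descent: x_{k+1} = x_k - s * grad(x_k)."""
    x = x0.copy()
    for _ in range(iters):
        x = x - step * grad(x)
    return x

def nesterov(grad, x0, step, iters):
    """Nesterov's accelerated gradient (1983), convex case:
    extrapolation first, then a gradient step at the extrapolated point."""
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(1, iters + 1):
        y = x + (k - 1) / (k + 2) * (x - x_prev)    # extrapolation (momentum)
        x_prev, x = x, y - step * grad(y)           # gradient step at y
    return x

def ravine(grad, y0, step, iters):
    """Ravine method (Gelfand-Tsetlin, 1961): gradient step first,
    then extrapolation -- the same two operations in reversed order."""
    y = y0.copy()
    x_prev = y0.copy()
    for k in range(1, iters + 1):
        x = y - step * grad(y)                      # gradient step
        y = x + (k - 1) / (k + 2) * (x - x_prev)    # extrapolation
        x_prev = x
    return y

if __name__ == "__main__":
    # Ill-conditioned quadratic f(x) = 0.5 * x^T A x, minimizer at the origin.
    A = np.diag([1.0, 100.0])
    grad = lambda x: A @ x
    x0 = np.array([1.0, 1.0])
    s = 1.0 / 100.0                                 # step <= 1/L with L = 100
    for name, method in [("GD", grad_descent), ("NAG", nesterov), ("Ravine", ravine)]:
        print(name, np.linalg.norm(method(grad, x0, s, 200)))
```

On such an example, both accelerated variants typically reach a much smaller residual than plain gradient descent after the same number of iterations, reflecting the O(1/k^2) versus O(1/k) convergence rates for convex functions; the sketch also shows how reversing the order of the gradient and extrapolation steps turns one method into the other.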

