Convergence Analysis of Multi-step One-shot Methods for Regularized Linear Inverse Problems
Speaker: Vũ Tuấn Anh

Time: 9:30, Wednesday, April 24, 2024

Venue: 5th floor, A6 Building

Abstract: When an inverse problem is solved by a gradient-based optimization algorithm, the corresponding forward and adjoint problems, which are employed to compute the gradient, can also be solved iteratively. The idea of iterating simultaneously on the inverse problem unknown and on the forward and adjoint problem solutions yields the concept of one-shot inversion methods. We are interested in the case where the inner iterations for the forward and adjoint problems are incomplete, that is, stopped before reaching high accuracy on their solutions. Here, we focus on general linear inverse problems and generic fixed-point iterations for the associated forward problem, then analyze variants of the so-called multi-step one-shot methods, in particular semi-implicit schemes with a regularization parameter. By studying the eigenvalues of the block matrix of the coupled iterations, we establish sufficient conditions on the descent step for convergence. Several numerical experiments are provided to illustrate the convergence of these methods in comparison with the classical gradient descent, where the forward and adjoint problems are instead solved exactly by a direct solver. We observe that a few inner iterations are enough to guarantee good convergence of the inversion algorithm, even in the presence of noisy data. In this presentation, we will focus on the technical proof of convergence of the studied methods.
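To make the idea concrete, the following is a minimal sketch (not the speaker's actual scheme) of a multi-step one-shot iteration for a toy linear inverse problem: the forward problem A u = B sigma and the adjoint problem A^T p = u - y are each advanced by only k_inner incomplete fixed-point (Richardson) iterations per outer step, interleaved with a gradient descent step on sigma for the Tikhonov-regularized cost. All names (A, B, omega, tau, alpha, k_inner) and parameter values are illustrative assumptions.

```python
import numpy as np

def one_shot_inversion(A, B, y, alpha, tau, omega, k_inner, n_outer):
    """Multi-step one-shot sketch for min_sigma 1/2 ||A^{-1} B sigma - y||^2
    + alpha/2 ||sigma||^2. The forward state u and adjoint state p are only
    approximately updated (k_inner fixed-point steps) before each descent
    step on sigma, instead of being solved exactly by a direct solver."""
    sigma = np.zeros(B.shape[1])
    u = np.zeros(A.shape[0])
    p = np.zeros(A.shape[0])
    for _ in range(n_outer):
        for _ in range(k_inner):
            # incomplete inner solves (Richardson iterations), frozen sigma
            u = u + omega * (B @ sigma - A @ u)        # forward: A u = B sigma
            p = p + omega * ((u - y) - A.T @ p)        # adjoint: A^T p = u - y
        # approximate gradient step using the inexact adjoint state p
        sigma = sigma - tau * (B.T @ p + alpha * sigma)
    return sigma

# Toy illustration: well-conditioned diagonal forward operator
A = np.diag([2.0, 3.0, 4.0])
B = np.eye(3)
sigma_true = np.array([1.0, 2.0, 3.0])
y = np.linalg.solve(A, B @ sigma_true)     # noiseless synthetic data
sigma = one_shot_inversion(A, B, y, alpha=0.1, tau=0.2,
                           omega=0.3, k_inner=5, n_outer=2000)
```

For this sketch, omega must make the inner iteration contractive (spectral radius of I - omega*A below 1) and tau small enough for the outer descent; the talk's analysis gives sufficient conditions of this type via the eigenvalues of the coupled block iteration matrix. The recovered sigma can be compared against the exact regularized minimizer (G^T G + alpha I)^{-1} G^T y with G = A^{-1} B.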
