Speaker: Đinh Dũng
Time: 9:00, Tuesday, September 12, 2017
Venue: Room 4, Building A14, Institute of Mathematics, 18 Hoàng Quốc Việt, Cầu Giấy, Hà Nội

Abstract: In recent decades, various approaches and methods have been proposed for the numerical solution of parametric partial differential equations of the form
\begin{equation}
D(u,y) = 0,
\end{equation}
where $u \mapsto D(u,y)$ is a partial differential operator depending on $d$ parameters represented as the vector $y = (y_1,\dots,y_d) \in \Omega \subset \mathbb{R}^d$. If this problem is well-posed in a Banach space $X$, then the solution map $y \mapsto u(y)$ is defined from the parametric domain $\Omega$ to the solution space $X$. Depending on the nature of the object modeled by the above equation, the parameter $y$ may be either a deterministic or a random variable. The main challenge in numerical computation is to approximate the entire solution map $y \mapsto u(y)$ up to a prescribed accuracy at acceptable cost. This becomes particularly difficult when $d$ is very large: here one encounters the so-called curse of dimensionality, a term coined by Bellman, whereby the computational cost grows exponentially in the dimension $d$ of the parametric space. Moreover, in some models the number of parameters may even be countably infinite.

A central question considered in the present paper is: under what assumptions does a sequence of finite element approximations with a certain error convergence rate for the nonparametric problem $D(u,y_0) = 0$ at every point $y_0 \in \Omega$ induce a sequence of finite element approximations with the same error convergence rate for the parametric problem? We solve this question for a model parametric elliptic equation by linear collective methods, and thereby show that these methods break the curse of dimensionality. We believe, moreover, that our approach and methods can be extended to more general equations.
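The announcement does not state the model equation; as an illustration (an assumption on my part, not taken from the abstract), a standard example of such a parametric elliptic problem in this literature is the diffusion equation with affine parameter dependence:

```latex
% Illustrative model (assumed, not specified in the abstract):
% find u(y) in H_0^1(D) on a physical domain D such that
\begin{equation}
  -\operatorname{div}\bigl(a(y)\,\nabla u(y)\bigr) = f
  \quad \text{in } D,
  \qquad u(y) = 0 \ \text{on } \partial D,
\end{equation}
% with a diffusion coefficient depending affinely on the parameters:
\begin{equation}
  a(y) = \bar{a} + \sum_{j=1}^{d} y_j\,\psi_j ,
  \qquad y = (y_1,\dots,y_d) \in \Omega = [-1,1]^d .
\end{equation}
```

For each fixed $y_0 \in \Omega$ this reduces to a nonparametric elliptic problem solvable by standard finite elements, which is exactly the setting of the question posed in the abstract.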