The mini-course "Approximations for stochastic and parametric partial differential equations and Bayesian inverse problems" by Dr Hoang Viet Ha (NTU Singapore) opened on June 19, 2014. It is an activity of the research group "High-dimensional approximation" led by Prof. Dinh Dung.
Dr. Hoang Viet Ha.
More than 30 people attended the course, which lasted until the morning of June 20, 2014.
According to Dr Hoang Viet Ha, partial differential equations with random coefficients arise in many important engineering and technological areas. Owing to the uncertainty in the coefficients, solving these equations is an enormous computational task. The Monte Carlo method, which draws a large number of realizations of the random coefficients and solves a separate equation for each realization, is too expensive for many problems. In many situations the generalized polynomial chaos approach is far more efficient. The coefficient of the equation is expressed as a linear expansion in random variables, for example via the Karhunen-Loève expansion. The equation can then be treated as a parametric equation depending on a countably infinite sequence of parameters, and finding statistical properties of the solution amounts to solving the equation for every parameter sequence. The solution belongs to a Lebesgue space over the space of parameter sequences, with values in a Banach space of functions. This Lebesgue space admits a known infinite family of multivariate polynomials in the parameters as an orthonormal basis, so the solution can be expressed as a linear expansion in these polynomials. Solving the equation for all parameter sequences is then equivalent to computing the coefficients of this expansion. This is a challenging research topic that is currently attracting considerable attention from the scientific community.
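As a sketch of the structure described above (the elliptic model and all symbols are illustrative assumptions, not taken from the lectures themselves):

```latex
% Karhunen-Loève expansion of the random coefficient a:
% \bar a is the mean field, (\psi_j) the KL modes, and the random
% variables y_j form the parameter sequence y = (y_1, y_2, \dots).
a(x, y) = \bar a(x) + \sum_{j \ge 1} y_j \, \psi_j(x)

% The equation becomes parametric: for each sequence y, find
% u(\cdot, y) in a Banach space V solving (here, an elliptic example)
-\,\nabla \cdot \bigl( a(x, y)\, \nabla u(x, y) \bigr) = f(x)

% Generalized polynomial chaos expansion of the solution:
% (P_\nu) is the orthonormal polynomial basis, indexed by finitely
% supported multi-indices \nu; solving for all y is equivalent to
% computing the coefficients u_\nu \in V.
u(x, y) = \sum_{\nu} u_\nu(x) \, P_\nu(y)
```

The diffusion equation is only one possible instance; the same parametrization applies to other equations with random coefficients.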
Inverse problems assume that some information about the solution of a partial differential equation is known, and aim to reconstruct the equation itself. The observational data are normally contaminated with noise, which makes the problem ill-posed. Classical minimization methods assume that the noise is deterministic. They remove the ill-posedness by adding a regularizing term in the form of the norm of a function in a Banach space; this term is normally chosen in an ad hoc manner. The Bayesian method instead assumes that the noise follows a known distribution and that the quantity of interest (for example, the coefficient of the equation) belongs to a probability space with a known prior distribution. It seeks the posterior distribution, that is, the conditional distribution of the unknown quantity given the noisy data. The problem is then well defined and its solution is uniquely determined.
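The contrast between the two approaches can be written schematically as follows (the notation is assumed for illustration: u is the unknown quantity, G the map from u to the observed data, \delta the noisy observation, and \Phi a data-misfit functional):

```latex
% Classical regularization: minimize the data misfit plus an
% ad hoc regularizing norm in a Banach space E, weight \lambda > 0.
\min_{u} \; \| \delta - G(u) \|^{2} + \lambda \, \| u \|_{E}^{2}

% Bayesian approach: the posterior \mu^{\delta} is the conditional
% distribution of u given \delta; its density with respect to the
% prior \mu_0 is proportional to the likelihood.
\frac{d\mu^{\delta}}{d\mu_0}(u) \;\propto\; \exp\bigl( -\,\Phi(u; \delta) \bigr)
```

In the first formulation the choices of norm and weight are made by hand; in the second, once the prior and the noise distribution are fixed, the posterior is uniquely determined.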
The mini-course aims to give an introduction to the basics of these topics.