Applied Statistics Seminar, May 2022

Time: 14:00 to 16:00, 11/05/2022

Venue: Online

Summary:

Applied Statistics Seminar

Website: https://sites.google.com/view/tkud/home

Time: 14:00 – 16:00 | Format: Online via Zoom

Schedule:

Date: 11/05/2022
Speaker: Dr. Nguyễn Trung Tín, The National Institute for Research in Digital Science and Technology (Inria), Grenoble-Rhône-Alpes, France
Talk: A non-asymptotic approach for model selection via penalization in high-dimensional mixture of experts models

Date: 25/05/2022
Speaker: TBC
Talk: TBC

Dr. TrungTin Nguyen, Postdoctoral Fellow in Statistics and Data Science at Inria Grenoble-Rhône-Alpes, France. Homepage: https://trung-tinnguyen.github.io/

Email: tinnguyen0495@gmail.com

Abstract: Mixtures of experts (MoE) are a popular class of statistical and machine learning models that have gained attention over the years due to their flexibility and efficiency. In this work, we consider Gaussian-gated localized MoE (GLoME) and block-diagonal covariance localized MoE (BLoME) regression models to represent nonlinear relationships in heterogeneous data with potential hidden graph-structured interactions between high-dimensional predictors. These models pose difficult statistical estimation and model selection questions, both from a computational and a theoretical perspective. This work is devoted to the study of the problem of model selection among a collection of GLoME or BLoME models characterized by the number of mixture components, the complexity of the Gaussian mean experts, and the hidden block-diagonal structures of the covariance matrices, in a penalized maximum likelihood estimation framework. In particular, we establish non-asymptotic risk bounds that take the form of weak oracle inequalities, provided that lower bounds for the penalties hold. The good empirical behavior of our models is then demonstrated on synthetic and real datasets.
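For readers less familiar with the framework named in the abstract, the LaTeX sketch below records the generic shape of a Gaussian-gated localized MoE density and of penalized model selection with a weak oracle inequality. The notation (parameters pi_k, c_k, Gamma_k, upsilon_k, Sigma_k, model collection M, penalty pen(m), constants C and C', divergences KL and JKL) is standard but assumed here for illustration; it is not taken verbatim from the talk or the underlying paper.

% Generic Gaussian-gated localized mixture of experts (GLoME-type) conditional
% density: Gaussian gating weights in x and Gaussian experts with mean
% functions upsilon_k(x) and covariances Sigma_k (block-diagonal in the
% BLoME variant). Notation is illustrative, not the speaker's exact one.
\[
  s_{\psi}(y \mid x)
  = \sum_{k=1}^{K}
    \frac{\pi_k\,\phi(x;\, c_k, \Gamma_k)}
         {\sum_{l=1}^{K} \pi_l\,\phi(x;\, c_l, \Gamma_l)}\,
    \phi\big(y;\, \upsilon_k(x), \Sigma_k\big).
\]
% Penalized maximum likelihood model selection: for each model m in a
% collection M (indexed by the number of components, the complexity of the
% mean experts, and the covariance block structure), \hat{s}_m denotes the
% maximum likelihood estimator, and the selected model minimizes the
% penalized empirical risk.
\[
  \widehat{m} \in \operatorname*{arg\,min}_{m \in \mathcal{M}}
  \Big\{ -\frac{1}{n} \sum_{i=1}^{n} \log \widehat{s}_m(Y_i \mid X_i)
         + \mathrm{pen}(m) \Big\}.
\]
% A weak oracle inequality bounds the risk of the selected estimator by the
% best bias/penalty trade-off over the collection, up to a constant C > 1
% and a remainder term, provided pen(m) exceeds a suitable lower bound.
\[
  \mathbb{E}\big[\mathrm{JKL}(s_0, \widehat{s}_{\widehat{m}})\big]
  \le C \inf_{m \in \mathcal{M}}
  \Big\{ \inf_{s_m \in S_m} \mathrm{KL}(s_0, s_m) + \mathrm{pen}(m) \Big\}
  + \frac{C'}{n}.
\]

These displays are only a schematic reading aid: the talk's specific penalty shapes and divergence choices for GLoME and BLoME models should be taken from the speaker's paper rather than from this sketch.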