Lectures on Matrix Computation in Data Science
Time: 10:00, May 6, 2018 to 12:00, May 11, 2018
Venue: B4-705, VIASM
Speaker: Prof. Vu Ha Van (Yale University); Assistant: Andrei Deaneanu
Schedule:
Lectures run from 10:00 to 12:00 every day from May 6 until May 11, 2018.
|         | Sunday 6th      | Monday 7th      | Tuesday 8th     | Wednesday 9th   | Thursday 10th   | Friday 11th     |
| Lecture | 10:00am-12:00pm | 10:00am-12:00pm | 10:00am-12:00pm | 10:00am-12:00pm | 10:00am-12:00pm | 10:00am-12:00pm |
Abstract:
We live in the age of big data. A fundamental goal of data science is to find interesting information in existing data and make it beneficial to humankind.
Data is often represented in the form of (very large) matrices. In order to find interesting information, one needs to run matrix algorithms that are both fast and accurate.
This series of lectures addresses two key issues in matrix algorithms: the presence of noise and the sparsification technique.
(1) Data is noisy. How does the noise change the output of our algorithms? (A small numerical illustration follows this list.)
(2) Most matrix algorithms are fast if the input is sparse (namely, has many zeroes). For instance, it is very fast to multiply two vectors that are mostly zeroes. How can we sparsify our matrix (namely, replace it with a sparse one) without losing essential information? (See the second sketch, after the requirements below.)
(3) Recently, it has turned out, somewhat surprisingly, that some results for (1) can be used to solve problems in (2).
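For point (1), here is a minimal numerical sketch, assuming NumPy; the matrix size, noise level, and rank-one signal are illustrative choices, not part of the lectures. It checks a classical perturbation bound (Weyl's inequality): each singular value of A + E lies within the spectral norm of E of the corresponding singular value of A.

import numpy as np

# Illustrative sketch: additive noise moves each singular value by at
# most ||E|| (spectral norm), by Weyl's inequality. All parameters here
# are illustrative choices.
rng = np.random.default_rng(0)
n = 200
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))
A = u @ v.T                              # rank-one "signal" matrix
E = 0.01 * rng.standard_normal((n, n))   # small Gaussian noise

sv_clean = np.linalg.svd(A, compute_uv=False)
sv_noisy = np.linalg.svd(A + E, compute_uv=False)
norm_E = np.linalg.svd(E, compute_uv=False)[0]

# Weyl: |sigma_i(A + E) - sigma_i(A)| <= ||E|| for every i.
print("largest singular-value shift:", np.abs(sv_noisy - sv_clean).max())
print("||E|| (the Weyl upper bound):", norm_E)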
Requirements: basic knowledge of linear algebra and probability.
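Finally, a minimal sketch of the sparsification idea in point (2), again assuming NumPy; the all-ones test matrix and the keep-probability p are illustrative choices. Each entry survives independently with probability p and the survivors are rescaled by 1/p, so the sparse matrix B equals A in expectation; the printout compares the spectral error ||B - A|| with ||A||.

import numpy as np

# Illustrative sketch: random sparsification with rescaling. Keep each
# entry with probability p and rescale survivors by 1/p, so E[B] = A.
rng = np.random.default_rng(1)
n, p = 500, 0.1
A = np.ones((n, n))                  # toy dense matrix with ||A|| = n

mask = rng.random((n, n)) < p        # each entry survives with prob. p
B = np.where(mask, A / p, 0.0)       # about 90% of entries are now zero

print("fraction of nonzero entries:", np.count_nonzero(B) / B.size)
print("relative spectral error ||B - A|| / ||A||:",
      np.linalg.norm(B - A, 2) / np.linalg.norm(A, 2))

With these choices only about a tenth of the entries survive, yet the spectral error stays well below ||A||; perturbation bounds of the type in point (1) are exactly what control this error term, which is the connection mentioned in point (3).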