Low-rank approximation and optimization
Overview
The day will start with a crash course, followed by three research talks.
Registration
Venue
Aix-Marseille Université
3, place Victor Hugo - MARSEILLE Cedex 03
Program
9:45-10:00 | Coffee and introduction
10:00-12:00 | Konstantin Usevich | Crash course | Low-rank approximation and optimization: from matrices to tensors | Slides
Abstract
This mini-course will give an overview of low-rank approximations and the underlying optimization problems. The first part of the course will recall the basics of low-rank approximations for matrices, with a review of various modifications of the basic low-rank approximation problem (structured/nonnegativity constraints, missing data, etc.). The second part will contain an introduction to low-rank decompositions and approximations of n-way arrays (tensors). Differences and similarities with the matrix case will be pointed out. An overview of tools and optimization problems will be provided, and some application examples will be mentioned.
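As a pointer to the basic problem the course starts from: by the Eckart-Young theorem, truncating the SVD gives the best rank-r approximation in Frobenius norm. A minimal NumPy sketch (illustrative only, not course material; the matrix sizes and rank are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))  # arbitrary test matrix
r = 5

# Eckart-Young: the best rank-r approximation of A in Frobenius
# (and spectral) norm keeps the r leading singular triplets.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_r = (U[:, :r] * s[:r]) @ Vt[:r, :]

# The optimal error equals the norm of the discarded singular values.
err = np.linalg.norm(A - A_r)
tail = np.sqrt(np.sum(s[r:] ** 2))
print(err, tail)  # the two values agree
```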
12:00-13:30 | Lunch
13:30-14:30 | Philipp Krah | Research talk | An introduction to dynamical low-rank approximation
Abstract
This presentation introduces dynamical low-rank approximation techniques for matrix and tensor differential equations, offering a powerful approach to approximating high-dimensional dynamical systems. The methodology finds relevance in various domains, including many-body problems in quantum chemistry, the gradient flow of neural networks, and time-dependent discretized solutions of partial differential equations. Beginning with the matrix case, the talk progresses to generalize the approach to tensors in the Tucker format. The session includes numerical demonstrations using a generalized shallow water model, supplemented by practical exercises aimed at helping the audience understand and implement the fundamental algorithm.
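A crude way to convey the idea behind dynamical low-rank approximation (a naive "step-and-truncate" scheme, not the integrators discussed in the talk) is: advance the full dynamics by one step, then retract back to the rank-r manifold via truncated SVD. A toy sketch, assuming a linear matrix ODE dA/dt = LA with arbitrary sizes:

```python
import numpy as np

def truncate(A, r):
    """Retract A to the manifold of rank-r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(1)
n, r, dt, steps = 40, 3, 1e-2, 100
L = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # assumed stable dynamics
A = truncate(rng.standard_normal((n, n)), r)        # rank-r initial value

# "Step-and-truncate": explicit Euler step, then rank-r retraction,
# so the iterate never leaves the low-rank manifold.
for _ in range(steps):
    A = truncate(A + dt * (L @ A), r)
```

The dedicated integrators presented in the talk avoid forming the full matrix at any point, which this naive sketch does not.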
14:30-15:30 | Henrique Goulart | Research talk | Spiked tensor models through the prism of random matrix theory | Slides
Abstract
Numerous problems in signal processing, machine learning and beyond can be addressed by recovering a low-rank structure hidden in a noisy observed data tensor. While several algorithms exist for this task, little is known about their actual performance in practice, or about fundamental statistical limits. Substantial progress has recently been made in the rank-one, large-dimensional case. In particular, we have proposed an approach based on tools from random matrix theory for studying maximum likelihood estimation in the so-called spiked (rank-one) tensor model. In this talk, I will introduce this approach and discuss some recent extensions by us and others, notably to the study of deflation schemes for tensor decomposition.
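For concreteness, the rank-one spiked tensor model referred to above can be written as Y = beta * x⊗x⊗x + W/sqrt(n). The sketch below builds such a tensor and runs tensor power iteration, warm-started near the planted vector purely for illustration (random initialization generically fails unless beta grows with n); beta, n and the non-symmetrized noise are arbitrary choices, not those of the talk:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 60, 5.0  # dimension and signal strength (arbitrary choices)

# Planted unit-norm spike.
x = rng.standard_normal(n)
x /= np.linalg.norm(x)

# Order-3 spiked tensor: rank-one signal plus Gaussian noise.
W = rng.standard_normal((n, n, n))
Y = beta * np.einsum('i,j,k->ijk', x, x, x) + W / np.sqrt(n)

# Tensor power iteration, warm-started near x for illustration.
g = rng.standard_normal(n)
g /= np.linalg.norm(g)
v = x + 0.5 * g
v /= np.linalg.norm(v)
for _ in range(50):
    v = np.einsum('ijk,j,k->i', Y, v, v)  # contract Y with v twice
    v /= np.linalg.norm(v)

alignment = abs(v @ x)  # close to 1: the spike is recovered
```

The random-matrix analysis in the talk characterizes precisely when such estimates correlate with the planted spike as n grows.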
16:00-17:00 | Irène Waldspurger | Research talk | Trust-regions for ill-conditioned low-rank problems | Slides
Abstract
When reconstructing a low-rank matrix from linear measurements, it is classical to write the matrix as the product of two "thin" matrices and optimize directly over the factors. This approach, called the Burer-Monteiro factorization, saves computational time and storage space compared to optimizing over the full matrix. When the size of the factors is strictly larger than the rank of the underlying unknown matrix (the "overparametrized" setting), the factorization introduces ill-conditioning into the problem, making straightforward first-order methods slow. To overcome this issue, preconditioned gradient methods have been proposed. In this talk, we will discuss the performance of a second-order method, namely trust-regions. This is joint work with Florentin Goyens and Clément Royer.
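To make the setting concrete, here is a sketch of the Burer-Monteiro factorization in the symmetric case: fit X = U Uᵀ to a planted rank-2 matrix M using a width-5 factor (the overparametrized setting), by plain gradient descent on ||U Uᵀ - M||_F². Sizes, step size and iteration count are arbitrary; this is the slow first-order baseline the abstract alludes to, not the trust-region method of the talk:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r_true, r = 30, 2, 5  # factor width r exceeds the true rank: overparametrized

# Planted symmetric PSD matrix of rank r_true.
G = rng.standard_normal((n, r_true))
M = G @ G.T

# Burer-Monteiro: optimize over a thin factor U, with X = U U^T implicit.
U = 0.1 * rng.standard_normal((n, r))
eta = 0.1 / np.linalg.norm(M, 2)  # conservative step size
for _ in range(2000):
    U -= eta * 4 * (U @ U.T - M) @ U  # gradient of ||U U^T - M||_F^2

rel_err = np.linalg.norm(U @ U.T - M) / np.linalg.norm(M)
```

Note that the iteration count needed here is large even for this tiny instance; the ill-conditioning induced by overparametrization is exactly what motivates second-order methods.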
Confirmed speakers
Philipp Krah (I2M, Aix-Marseille Univ.)
Henrique Goulart (IRIT, INP Toulouse)
Irène Waldspurger (CEREMADE, CNRS, Univ. Paris Dauphine-PSL)