Low-rank approximation and optimization
May 31st, 2024


Overview

Low-rank approximation and optimization. The day will start with a crash course, followed by three research talks.


Registration

Registration is free but mandatory. To register, please fill in the form here.

Venue

FRUMAM, St Charles, salle de séminaire étage 2
Aix-Marseille Université
3, place Victor Hugo - MARSEILLE Cedex 03

Program

9:45-10:00 Coffee and introduction
10:00-12:00 Konstantin Usevich
Crash-course
TBA
12:00-13:30 Lunch
13:30-14:30 Philipp Krah Research talk and practical session
An introduction to dynamical low-rank approximation
Abstract.
This presentation introduces dynamical low-rank approximation techniques for matrix and tensor differential equations, offering a powerful approach to approximating high-dimensional dynamical systems. The methodology is relevant in various domains, including many-body problems in quantum chemistry, the gradient flow of neural networks, and time-dependent discretized solutions of partial differential equations. Beginning with the matrix case, the talk then generalizes the approach to tensors in Tucker form. The session includes numerical demonstrations using a generalized shallow water model, supplemented by practical exercises aimed at helping the audience understand and implement the fundamental algorithm.
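As a pointer ahead of the practical session, here is a minimal Python/NumPy sketch of one step of a standard matrix DLRA integrator, the first-order projector-splitting scheme, with the substeps discretized by explicit Euler. All names are illustrative, and the algorithm actually covered in the session may differ.

import numpy as np

# One step of the first-order projector-splitting integrator for the
# matrix dynamical low-rank approximation of dY/dt = F(Y), keeping
# Y = U @ S @ V.T at rank r. Substeps use explicit Euler (illustrative).
def projector_splitting_step(U, S, V, F, h):
    # K-step: evolve K = U S with V frozen, then re-orthonormalize.
    K = U @ S + h * F(U @ S @ V.T) @ V
    U1, S_hat = np.linalg.qr(K)
    # S-step: evolve S backwards in time with U1 and V frozen.
    S_tilde = S_hat - h * U1.T @ F(U1 @ S_hat @ V.T) @ V
    # L-step: evolve L = V S^T with U1 frozen, then re-orthonormalize.
    L = V @ S_tilde.T + h * F(U1 @ S_tilde @ V.T).T @ U1
    V1, S1t = np.linalg.qr(L)
    return U1, S1t.T, V1

# Toy usage: rank-2 approximation of the linear flow dY/dt = A @ Y.
rng = np.random.default_rng(0)
n, r = 20, 2
A = rng.standard_normal((n, n)) / np.sqrt(n)
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = np.diag([1.0, 0.5])
for _ in range(100):
    U, S, V = projector_splitting_step(U, S, V, lambda Y: A @ Y, h=1e-2)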
14:30-15:30 Henrique Goulart Research talk
Spiked tensor models through the prism of random matrix theory
Abstract.
Numerous problems in signal processing, machine learning and beyond can be addressed by recovering a low-rank structure hidden in a noisy observed data tensor. While several algorithms exist for this task, little is known about their actual performance in practice, or about the fundamental statistical limits. Substantial progress has recently been made in the rank-one, large-dimensional case. In particular, we have proposed an approach based on tools from random matrix theory for studying maximum likelihood estimation in the so-called spiked (rank-one) tensor model. In this talk, I will introduce this approach and discuss some recent extensions by us and others, notably to the study of deflation schemes for tensor decomposition.
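For orientation, one common normalization of the symmetric rank-one (order-three) spiked tensor model reads as follows (generic notation, not necessarily that of the talk):

T = \beta\, x^{\otimes 3} + \tfrac{1}{\sqrt{n}}\, W, \qquad \|x\| = 1,

where W is a symmetric Gaussian noise tensor and \beta plays the role of a signal-to-noise ratio; maximum likelihood estimation then amounts to solving \hat{x} = \operatorname{arg\,max}_{\|u\|=1} \langle T,\, u^{\otimes 3} \rangle.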

References
J. H. de M. Goulart, R. Couillet and P. Comon. A random matrix perspective on random tensors. Journal of Machine Learning Research, 23(264):1–36, Sep 2022.
M. E. A. Seddik, J. H. de M. Goulart and M. Guillaud. Hotelling deflation on large symmetric spiked tensors. In Annales du XXIXème Colloque Francophone de Traitement du Signal et des Images (GRETSI), Grenoble, France, Aug-Sep 2023.
M. E. A. Seddik, M. Achab, J. H. de M. Goulart and M. Debbah. A nested matrix-tensor model for noisy multi-view clustering. arXiv:2305.19992, May 2023.
M. E. A. Seddik, M. Guillaud and R. Couillet. When random tensors meet random matrices. arXiv:2112.12348, Nov 2022.

16:00-17:00 Irène Waldspurger Research talk
Trust-regions for ill-conditioned low-rank problems
Abstract.
When reconstructing a low-rank matrix from linear measurements, it is classical to write the matrix as the product of two "thin" matrices and optimize directly over the factors. This approach, called the Burer-Monteiro factorization, saves computational time and storage space, compared to optimizing over the full matrix.
When the size of the factors is strictly larger than the rank of the underlying unknown matrix (the "overparametrized setting"), the factorization introduces ill-conditioning into the problem, making straightforward first-order methods slow. To overcome this issue, preconditioned gradient methods have been proposed. In this talk, we will discuss the performance of a second-order method, namely trust-regions.
This is joint work with Florentin Goyens and Clément Royer.
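In generic notation (an illustrative sketch, not necessarily that of the talk), the Burer-Monteiro approach to recovering a rank-r matrix from linear measurements b = \mathcal{A}(M) solves

\min_{L \in \mathbb{R}^{m \times k},\; R \in \mathbb{R}^{n \times k}} \; \tfrac{1}{2} \big\| \mathcal{A}(L R^{\top}) - b \big\|_2^2, \qquad k \ge r,

where the overparametrized setting corresponds to taking k strictly larger than r.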

Confirmed speakers

Konstantin Usevich (CRAN, CNRS, Univ. Lorraine)
Philipp Krah (I2M, Aix-Marseille Univ.)
Henrique Goulart (IRIT, INP Toulouse)
Irène Waldspurger (CEREMADE, CNRS, Univ. Paris Dauphine-PSL)

Organizers