Mathematical Foundations of Machine Learning
May 2nd, 2023


Overview

This workshop on the Mathematical Foundations of Machine Learning focuses on recent interactions between mathematics and machine learning. Research topics include C*-algebras, Koopman operators, and optimal transport, and their use in machine learning algorithms.


Registration

Registration is free.

Venue

FRUMAM, St Charles campus, 2nd floor
Aix-Marseille Université
3, place Victor Hugo - Marseille Cedex 03

Program

2:00 - 2:50  Yuka Hashimoto  Research talk
Reproducing kernel Hilbert C*-module for data analysis
Slides
Abstract.
The reproducing kernel Hilbert C*-module (RKHM) is a generalization of the reproducing kernel Hilbert space (RKHS), characterized by a C*-algebra-valued positive definite kernel and the inner product induced by this kernel. The advantages of applying RKHMs instead of RKHSs are that we can enlarge the representation spaces and construct positive definite kernels using the product structure of the C*-algebra. We present fundamental properties of RKHMs, such as representer theorems, and consider supervised learning problems in RKHMs. This framework is suitable, for example, for analyzing image data.
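To fix notation (a rough sketch; the symbols below are our own, not taken from the talk): for a C*-algebra $\mathcal{A}$ and a set $\mathcal{X}$, a map $k\colon \mathcal{X}\times\mathcal{X}\to\mathcal{A}$ is an $\mathcal{A}$-valued positive definite kernel if

\[ \sum_{i,j=1}^{n} a_i^{*}\, k(x_i, x_j)\, a_j \;\ge\; 0 \quad \text{in } \mathcal{A} \]

for all $n \in \mathbb{N}$, $x_1,\dots,x_n \in \mathcal{X}$ and $a_1,\dots,a_n \in \mathcal{A}$. The associated RKHM is generated by the feature maps $\phi(x) = k(\cdot, x)$, with the $\mathcal{A}$-valued inner product $\langle \phi(x), \phi(y) \rangle = k(x, y)$; taking $\mathcal{A} = \mathbb{C}$ recovers the usual RKHS.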
2:50 - 3:40  Masahiro Ikeda  Research talk
Continuity of Koopman operators on reproducing kernel Hilbert spaces with analytic positive definite functions
Slides
Abstract.
In this talk we present a recent result, obtained with I. Ishikawa and Y. Sawano and published in JMAA, about the continuity of Koopman operators on reproducing kernel Hilbert spaces (RKHSs) associated with analytic positive definite functions. We proved that, on a certain large class of such RKHSs, only affine transforms induce continuous Koopman operators. Our result covers not only the Paley-Wiener space on the real line, studied in previous works, but also much more general RKHSs corresponding to analytic positive definite functions. Our method relies only on intrinsic properties of the RKHSs, and we establish a connection between the behavior of Koopman operators and asymptotic properties of the greatest zeros of orthogonal polynomials on a weighted space on the real line. We also investigate the compactness of these composition operators and show that no bounded composition operator can be compact in our situation.
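For readers unfamiliar with the terminology, a schematic statement (notation assumed here, not quoted from the paper): given a map $\varphi\colon \mathbb{R} \to \mathbb{R}$, the Koopman (composition) operator acts on functions by

\[ (K_\varphi f)(x) = f(\varphi(x)), \]

and the result above says that, on the RKHSs considered, $K_\varphi$ can be bounded only when $\varphi$ is affine, i.e. $\varphi(x) = ax + b$.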
3:40 - 4:10  Coffee break
4:10 - 5:00  Thibaut Le Gouic  Research talk
Using gradient flows in the Wasserstein space in machine learning
Abstract.
Some probability measures evolve according to a partial differential equation that can be seen as the gradient flow of a certain functional in the Wasserstein space. This perspective allows one to bring in optimization tools to analyze the evolution of these measures. Since its introduction in a 1998 paper by Jordan, Kinderlehrer and Otto, this point of view has proven fruitful well beyond the analysis of partial differential equations. In this presentation, after defining gradient flows in the Wasserstein space and the intuitions and machinery behind them, we will explore how they can naturally be applied to analyze some sampling and neural network optimization algorithms.
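As a pointer to the machinery involved (a standard sketch, not specific to the talk): the Jordan-Kinderlehrer-Otto (JKO) scheme discretizes the gradient flow of a functional $F$ on the Wasserstein space by iterating

\[ \mu_{k+1} \in \operatorname*{arg\,min}_{\mu \in \mathcal{P}_2(\mathbb{R}^d)} \; F(\mu) + \frac{1}{2\tau}\, W_2^{2}(\mu, \mu_k), \]

where $W_2$ is the 2-Wasserstein distance and $\tau > 0$ a step size. For the free energy $F(\mu) = \int V \, d\mu + \int \mu \log \mu$, the scheme recovers, as $\tau \to 0$, the Fokker-Planck equation, the original example in the 1998 paper cited above.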

Confirmed speakers

Yuka Hashimoto (NTT, Japan)
Masahiro Ikeda (Riken AIP, Japan)
Thibaut Le Gouic (I2M, France)

Organizers