Workshop
"Mathématiques du Deep Learning"
30 November 2018


Programme

9:45 Welcome
10:00 Thierry Artières State of the art in deep learning
[Slides]
Abstract
Deep learning is today the spearhead of applied machine learning. It has enabled spectacular advances on many hard problems involving the automation of perception (hearing, vision), understanding and reasoning (natural language), and sequential decision making (strategy games). It lies at the heart of what is now called artificial intelligence, and it is both the foundation of an extremely dynamic industrial sector and a very active research area within the broader field of machine learning. This talk gives an overview of the landmark results, the current state of the art, and the research trends in the field.
12:00 Lunch break
13:30 Jakob Verbeek Recent results on deep learning for shape matching, machine translation, and generative image modeling
[Slides 1]
[Slides 2]
Abstract
In this talk I will present three pieces of recent work on deep learning in various contexts. The first part considers learning shape correspondence over 3D meshes, for which we propose a novel graph-convolutional network architecture. This architecture leads to highly accurate correspondences learned from raw XYZ input data, outperforming previous work based on 3D shape descriptors. The second part considers machine translation: we propose a method based on a 2D convolutional neural network that encodes token-level interactions across the source and target sentences. It is an alternative to the common encoder-decoder models with attention, and yields performance that is competitive with the state of the art. The third part considers generative image models. Existing methods can be divided into likelihood-based models (VAEs, PixelCNN, NVP) and likelihood-free models (GANs). These approaches have complementary strengths and weaknesses, and we propose an approach that combines maximum likelihood and adversarial training. This results in models that yield realistic samples typical of GANs, and for which we can assess likelihoods of held-out data to validate that the models do not drop modes.
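The hybrid likelihood/adversarial idea in the third part can be made concrete with a small sketch. The PyTorch code below is a generic VAE-GAN-style hybrid, not the model from the talk; all layer sizes, the loss weight adv_weight, and the module names are illustrative assumptions.

```python
# Minimal sketch: one objective combining a likelihood term (VAE ELBO)
# with an adversarial term on generated samples. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, x_dim=784, z_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    def __init__(self, x_dim=784, z_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))

    def forward(self, z):
        return torch.sigmoid(self.net(z))  # pixel intensities in [0, 1]

class Discriminator(nn.Module):
    def __init__(self, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, 1))

    def forward(self, x):
        return self.net(x)  # raw logits

def hybrid_losses(enc, dec, disc, x, adv_weight=0.1):
    # Likelihood part: the (negated) VAE evidence lower bound.
    mu, logvar = enc(x)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
    x_rec = dec(z)
    rec = F.binary_cross_entropy(x_rec, x, reduction='sum') / x.size(0)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu**2 - logvar.exp(), dim=1))
    # Adversarial part: samples from the prior should fool the discriminator.
    x_gen = dec(torch.randn_like(mu))
    ones = torch.ones(x.size(0), 1)
    zeros = torch.zeros(x.size(0), 1)
    g_adv = F.binary_cross_entropy_with_logits(disc(x_gen), ones)
    d_loss = (F.binary_cross_entropy_with_logits(disc(x), ones)
              + F.binary_cross_entropy_with_logits(disc(x_gen.detach()), zeros))
    g_loss = rec + kl + adv_weight * g_adv
    return g_loss, d_loss

enc, dec, disc = Encoder(), Decoder(), Discriminator()
x = torch.rand(16, 784)  # toy batch with values in [0, 1]
g_loss, d_loss = hybrid_losses(enc, dec, disc, x)
```

The point of the combination is exactly the trade-off described above: the likelihood term makes held-out likelihoods computable (guarding against mode dropping), while the adversarial term sharpens the samples.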
14:30 Break
15:00 François Malgouyres Multilinear compressive sensing and an application to convolutional linear networks
[Slides]
Abstract
We study a deep linear network endowed with structure. It takes the form of a matrix $X$ obtained by multiplying $K$ matrices (called factors, each corresponding to the action of a layer). The action of each layer (i.e. a factor) is obtained by applying a fixed linear operator to a vector of parameters satisfying a constraint. The number of layers is not limited. Assuming that $X$ is given and that the factors have been estimated, the error between the product of the estimated factors and $X$ (i.e. the reconstruction error) is either the statistical or the empirical risk.
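As a toy illustration of this setup, the following NumPy sketch (sizes and the encoding of the linear operators are assumptions, not taken from the paper) builds each factor by applying a fixed linear operator, represented by basis matrices, to a parameter vector, multiplies the factors, and compares the reconstruction error with the parameter error.

```python
# Structured factorization: factor_k = sum_j h_k[j] * B[k][j], and
# X = factor_1 @ ... @ factor_K. All sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
K, n, p = 3, 4, 5  # number of layers, matrix size, parameters per layer

B = rng.standard_normal((K, p, n, n))  # fixed linear operators (basis matrices)
h = rng.standard_normal((K, p))        # true parameters of each layer

def factor(k, h_k):
    """Action of layer k: the fixed linear operator applied to h_k."""
    return np.einsum('j,jab->ab', h_k, B[k])

def product(hs):
    """X = M_1(h_1) @ M_2(h_2) @ ... @ M_K(h_K)."""
    X = np.eye(n)
    for k in range(K):
        X = X @ factor(k, hs[k])
    return X

X = product(h)                                     # the observed matrix
h_hat = h + 1e-3 * rng.standard_normal(h.shape)    # perturbed estimate
X_hat = product(h_hat)

# Reconstruction error (the "risk" above) vs. error on the parameters:
print("reconstruction error:", np.linalg.norm(X - X_hat))
print("parameter error     :", np.linalg.norm(h - h_hat))
```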
In this work, we provide necessary and sufficient conditions on the network topology under which a stability property holds. The stability property requires that the error on the parameters defining the factors (i.e. the stability of the recovered parameters) scales linearly with the reconstruction error (i.e. the risk). Under these conditions on the network topology, any successful learning task therefore leads to stably defined features, and hence to interpretable layers and networks.
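Written out schematically (in notation assumed here, not the paper's exact statement), the stability property is an error bound of the form

$$\inf_{\lambda_1 \cdots \lambda_K = 1} \; \max_{1 \le k \le K} \big\| h_k - \lambda_k \hat h_k \big\| \;\le\; C \, \big\| \hat X - X \big\|,$$

where $h_k$ and $\hat h_k$ are the true and estimated parameter vectors of layer $k$, $\hat X$ is the product of the estimated factors, and the infimum over rescalings $\lambda_k$ accounts for the scale-rearrangement ambiguity discussed next.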
To do so, we first evaluate how the Segre embedding and its inverse distort distances. We then show that any deep structured linear network can be cast as a generic multilinear problem (which uses the Segre embedding); this is the tensorial lifting. Using the tensorial lifting, we provide necessary and sufficient conditions for the identifiability of the factors (up to a scale rearrangement). We finally provide a necessary and sufficient condition, analogous to the usual Null Space Property of compressed sensing, which guarantees that the stability property holds.
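For reference (this is standard multilinear algebra, not specific to the paper): the Segre embedding sends a tuple of parameter vectors to the rank-one tensor

$$S(h_1, \dots, h_K) \;=\; h_1 \otimes h_2 \otimes \cdots \otimes h_K,$$

and since $X$ depends multilinearly on $(h_1, \dots, h_K)$, there is a fixed linear map $\mathcal{A}$ such that $X = \mathcal{A}(h_1 \otimes \cdots \otimes h_K)$. The tensorial lifting replaces the multilinear parametrization by this linear map acting on the image of the Segre embedding.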
We illustrate the theory with a practical example in which the deep structured linear network is a convolutional linear network. As expected, the conditions are rather strong, but they are not empty: a simple test on the network topology can be implemented to check whether they hold.
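A minimal sketch of the convolutional case (kernel sizes, the number of layers, and the circular boundary condition are illustrative assumptions): each layer's fixed linear operator maps a small kernel, the layer's parameter vector, to a circulant convolution matrix, and the network matrix is the product of these matrices.

```python
# Convolutional linear network as a product of circulant matrices.
import numpy as np
from scipy.linalg import circulant

n = 8  # signal length

def conv_matrix(kernel, n):
    """Circulant matrix whose action is circular convolution with `kernel`."""
    col = np.zeros(n)
    col[:len(kernel)] = kernel
    return circulant(col)

# One small kernel per layer: the parameter vector h_k of that layer.
kernels = [np.array([1.0, -1.0]),
           np.array([0.5, 0.5, 0.25]),
           np.array([2.0, 1.0])]

# The network matrix X is the product of the per-layer convolution matrices.
X = np.eye(n)
for k in kernels:
    X = conv_matrix(k, n) @ X

# Sanity check: applying X equals applying the layers one after the other.
x = np.random.default_rng(1).standard_normal(n)
y = x.copy()
for k in kernels:
    y = conv_matrix(k, n) @ y
assert np.allclose(X @ x, y)
```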