Spectral learning of weighted automata: multitask setting and nonlinear extension




Date: 13/04/2018, 10:00 - 11:00

Structured objects such as strings, trees and graphs are ubiquitous in data science, but learning functions defined over such objects can be a tedious task.
Weighted automata (WA) are powerful tools that can efficiently model such functions and are thus particularly relevant for machine learning. In particular, the spectral learning algorithm offers an efficient way to learn WA and comes with strong theoretical guarantees.
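To make the spectral approach concrete, here is a minimal sketch of spectral learning for a WA over a two-letter alphabet. The target function, the prefix/suffix basis, and all names below are illustrative choices, not the speaker's setup: the Hankel matrix of the target is factorized with a truncated SVD, and the WA parameters are read off from the factors.

```python
import numpy as np
from itertools import product

# Toy target: f(w) = 0.5**(#a's) * 0.3**(#b's); it is computable by a
# one-state WA, so its Hankel matrix has rank 1 (hypothetical example).
def f(w):
    return 0.5 ** w.count('a') * 0.3 ** w.count('b')

alphabet = 'ab'
# Prefix/suffix basis: all strings of length at most 2 (including '')
basis = [''.join(t) for k in range(3) for t in product(alphabet, repeat=k)]

# Hankel matrix H[u, v] = f(uv) and its one-symbol shifts H_s[u, v] = f(u s v)
H = np.array([[f(u + v) for v in basis] for u in basis])
H_sig = {s: np.array([[f(u + s + v) for v in basis] for u in basis])
         for s in alphabet}

# Rank factorization H ≈ P · Q via truncated SVD
rank = np.linalg.matrix_rank(H)
U, S, Vt = np.linalg.svd(H)
P = U[:, :rank] * S[:rank]        # forward factor
Pinv = np.linalg.pinv(P)
Qinv = np.linalg.pinv(Vt[:rank])  # pseudo-inverse of the backward factor

# Recover the WA: A_s = P^+ H_s Q^+; alpha/omega from the row/column at ''
A = {s: Pinv @ H_sig[s] @ Qinv for s in alphabet}
alpha = H[basis.index(''), :] @ Qinv   # initial weight vector
omega = Pinv @ H[:, basis.index('')]   # final weight vector

def wa_value(w):
    """Evaluate the learned WA on a string."""
    v = alpha
    for s in w:
        v = v @ A[s]
    return float(v @ omega)
```

On an exact, complete Hankel sub-block like this one, the recovery is exact: `wa_value('aab')` matches `f('aab')` up to numerical precision.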

In this talk, I will present two recent extensions of the spectral learning algorithm. The first one addresses the multitask learning problem for WA: how can one leverage the relatedness between two or more WAs to learn them more efficiently? After introducing the novel model of vector-valued WA (which conveniently helps formalize this problem), I will show how the spectral learning algorithm can be extended to vector-valued WA and showcase the benefits of this approach with experiments on a natural language modeling task.
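As a rough illustration of a vector-valued WA (the parameterization below is an assumption drawn from the abstract, not the talk's definition), the related tasks can share the initial vector and the transition matrices, while each task keeps its own final weight vector, stacked as a column of a single matrix:

```python
import numpy as np

# Hypothetical vector-valued WA: shared alpha and A_s across tasks,
# one final weight vector per task as columns of Omega.
rng = np.random.default_rng(0)
n_states, n_tasks = 3, 2
alpha = rng.standard_normal(n_states)                              # shared
A = {s: rng.standard_normal((n_states, n_states)) for s in 'ab'}   # shared
Omega = rng.standard_normal((n_states, n_tasks))  # task-specific outputs

def vv_wa(word):
    """Return the vector of all task outputs (f_1(word), ..., f_k(word))."""
    v = alpha
    for s in word:
        v = v @ A[s]
    return v @ Omega
```

One forward pass over the string then yields every task's value at once, which is where the sharing of transition structure pays off.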

In the second part of the talk, I will discuss connections between WA and recurrent neural networks and present a nonlinear extension of WA along with a learning algorithm. This algorithm can be seen as a nonlinear counterpart of the classical spectral learning algorithm, where the factorization of the Hankel matrix is replaced by an auto-encoder network and each transition function is realized by a feed-forward network.
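One plausible reading of the nonlinear model can be sketched as follows; the architecture and all parameters are hypothetical illustrations (a small feed-forward map per symbol replacing each linear transition matrix, with a linear readout), and the auto-encoder-based training is not shown:

```python
import numpy as np

# Hypothetical nonlinear WA: per-symbol one-hidden-layer feed-forward
# transitions acting on a state vector, plus a linear final readout.
rng = np.random.default_rng(1)
n = 4  # state dimension

def make_mlp(hidden=8):
    """Build a small tanh MLP h -> tanh(h W1 + b1) W2 + b2."""
    W1 = rng.standard_normal((n, hidden)) * 0.5
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, n)) * 0.5
    b2 = np.zeros(n)
    return lambda h: np.tanh(h @ W1 + b1) @ W2 + b2

h0 = rng.standard_normal(n)               # initial state
trans = {s: make_mlp() for s in 'ab'}     # one network per symbol
omega = rng.standard_normal(n)            # linear readout weights

def nl_wa(word):
    """Apply the nonlinear transition for each symbol, then read out."""
    h = h0
    for s in word:
        h = trans[s](h)
    return float(h @ omega)
```

Setting each `trans[s]` to a plain matrix multiplication recovers an ordinary WA, which is the sense in which this class strictly extends the linear one.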

http://cs.mcgill.ca/~grabus/



