BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//7.2.3.1//EN
TZID:Europe/Paris
X-WR-TIMEZONE:Europe/Paris
BEGIN:VEVENT
UID:2657@i2m.univ-amu.fr
DTSTART;TZID=Europe/Paris:20190110T140000
DTEND;TZID=Europe/Paris:20190110T150000
DTSTAMP:20181226T130000Z
URL:https://www.i2m.univ-amu.fr/evenements/approximation-with-sparsely-con
 nected-deep-networks/
SUMMARY:(...): Approximation with sparsely connected deep networks
DESCRIPTION:Many of the data analysis and processing pipelines that have
  been carefully engineered by generations of mathematicians and
  practitioners can in fact be implemented as deep networks. Allowing the
  parameters of these networks to be automatically trained (or even
  randomized) makes it possible to revisit certain classical
  constructions. The talk first describes an empirical approach to
  approximating a given matrix by a fast linear transform through
  numerical optimization. The main idea is to write fast linear
  transforms as products of few sparse factors\, and to iteratively
  optimize over the factors. This corresponds to training a sparsely
  connected\, linear\, deep neural network. Learning algorithms
  exploiting iterative hard-thresholding have been shown to perform
  well in practice\, a striking example being their ability to somehow
  “reverse engineer” the fast Hadamard transform. Yet\, developing a
  solid understanding of their conditions of success remains an open
  challenge. In a second part\, we study the expressivity of sparsely
  connected deep networks. Measuring a network's complexity by its
  number of connections\, we consider the class of functions whose
  error of best approximation by networks of a given complexity decays
  at a certain rate. Using classical approximation theory\, we show
  that this class can be endowed with a norm that makes it a nice
  function space\, called an approximation space. We establish that the
  presence of certain “skip connections” has no impact on the
  approximation space\, and discuss the role of the network's
  nonlinearity (also known as the activation function) in the resulting
  spaces\, as well as the benefits of depth. For the popular ReLU
  nonlinearity (as well as its powers)\, we relate the newly identified
  spaces to classical Besov spaces\, which have a long history as image
  models associated with sparse wavelet decompositions. The sharp
  embeddings that we establish highlight how depth enables sparsely
  connected networks to approximate functions of increased
  “roughness” (decreased Besov smoothness) compared to shallow
  networks and wavelets. Joint work with Luc Le Magoarou (Inria)\,
  Gitta Kutyniok (TU Berlin)\, Morten Nielsen (Aalborg University) and
  Felix Voigtlaender (KU Eichstätt).
  http://people.irisa.fr/Remi.Gribonval/
END:VEVENT
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:STANDARD
DTSTART:20181028T020000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
END:STANDARD
END:VTIMEZONE
END:VCALENDAR