Combining Complex Wavelets with Deep Networks: aiming to improve learning efficiency for vision systems

Date: 11/01/2019
2:00 pm - 3:00 pm

Scattering networks [Bruna & Mallat, IEEE Trans PAMI 2013; Oyallon & Mallat, CVPR 2015] may be interpreted as convolutional network layers in which the filters are defined by complex wavelet transforms and whose layer non-linearities are typically complex modulus (L2-norm) operators. Usually they are pre-designed using standard complex wavelet design methodologies, based on accumulated human knowledge about vision systems, and involve minimal training. It has been found that several scatternet layers can usefully replace the early layers of a deep convolutional neural network (CNN). The aim of this strategy is that the deterministic and complete nature of the wavelet transforms will result in deep networks that are faster to train, more comprehensible in their behaviour, and perhaps better at generalisation than a CNN that has to learn all of its layers from finite amounts of training data. Furthermore, by employing tight-frame overcomplete wavelets and L2-norm non-linearities, signal energy may be conserved through the scatternet layers, leading to some interesting strategies for subspace selection.
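The energy-conservation point can be illustrated with a minimal sketch: if the filter bank is a tight frame (its squared frequency responses sum to one at every frequency), then the filtering step conserves energy by Parseval's theorem, and the complex modulus keeps each coefficient's magnitude, so energy passes unchanged through the whole layer. The Gaussian filter bank below is a hypothetical toy construction for illustration only, not the dual-tree design discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x = rng.standard_normal(N)

# Frequency grid for an FFT-based filter bank (toy design, not the
# dual-tree complex wavelet transform from the talk).
omega = np.fft.fftfreq(N)

# Raw Gaussian band-pass bumps at dyadic centre frequencies plus a low-pass.
centres = [0.0, 0.0625, 0.125, 0.25]
raw = np.stack([np.exp(-0.5 * ((np.abs(omega) - c) / 0.05) ** 2)
                for c in centres])

# Normalise so the squared responses sum to 1 at every frequency: a tight frame.
filters = raw / np.sqrt((raw ** 2).sum(axis=0))

# One scattering-style layer: filter, then complex modulus non-linearity.
x_hat = np.fft.fft(x)
coeffs = np.fft.ifft(filters * x_hat, axis=1)   # complex sub-band signals
activations = np.abs(coeffs)                    # |.| non-linearity

# The modulus preserves each coefficient's magnitude, so the tight frame
# guarantees that total energy is conserved through the layer.
energy_in = np.sum(x ** 2)
energy_out = np.sum(activations ** 2)
print(energy_in, energy_out)  # equal up to floating-point error
```

The normalisation step is what makes the frame tight; without it, the sub-band energies would only approximate the input energy.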

In this talk we shall suggest a number of ways that dual-tree complex wavelets may be incorporated into deep networks, either to generate scatternet front-ends or to produce interesting alternatives to standard convolutional layers, embedded deeper in the network. We will also show how recent ideas on CNN layer visualisation can be extended to cover the wavelet-based layers too. We shall pose more questions than answers, while also presenting a few results from the current stage of this work. I am very grateful to my co-researchers on this project, Amarjot Singh and Fergal Cotter.
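The front-end idea can be sketched in a few lines: a fixed, parameter-free wavelet-plus-modulus stage feeds a trainable layer, so only the latter needs learning. The single analytic band-pass filter and the random weight matrix below are hypothetical stand-ins, assumed purely for illustration; they are not the dual-tree construction or any trained model from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 128

# Toy analytic band-pass filter (support on positive frequencies only),
# standing in for a proper dual-tree complex wavelet sub-band.
omega = np.fft.fftfreq(N)
band = np.exp(-0.5 * ((omega - 0.125) / 0.04) ** 2)

def scatter_frontend(x):
    """Fixed, untrained layer: complex filtering followed by modulus."""
    return np.abs(np.fft.ifft(np.fft.fft(x) * band))

# Stand-in for the trainable part of the network (weights would be learned;
# here they are random, for shape illustration only).
W = rng.standard_normal((10, N)) * 0.1

x = rng.standard_normal(N)
features = scatter_frontend(x)   # deterministic, no parameters to learn
logits = W @ features            # only this part requires training
print(logits.shape)              # (10,)
```

Because the front-end is deterministic, its outputs can be precomputed once per input, which is one source of the training-speed advantage the abstract describes.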


