Hamiltonian Monte Carlo Bayesian optimization for sparse deep neural networks

Lotfi Chaari

Date: 17 March 2023, 14:30–15:30

The performance of a deep neural network strongly depends on the optimization method used during the learning process. In supervised learning, the essence of most architectures is to build an optimization model and learn the parameters from the available training data. In this sense, regularization is usually employed for the sake of stability or uniqueness of the solution. When non-smooth regularizers such as the l1 norm are used to promote sparse networks, this optimization becomes difficult due to the non-differentiability of the target criterion, which may also be non-convex. We propose a Bayesian optimization framework based on an MCMC scheme that allows efficient sampling even for non-smooth energy functions. We demonstrate that using the proposed method for image classification leads to high-accuracy results that cannot be achieved using classical optimizers.
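The sampling idea behind the abstract can be illustrated with a toy sketch: a leapfrog Hamiltonian Monte Carlo sampler targeting a posterior with a smooth data term plus a non-smooth l1 prior. This is not the talk's actual algorithm; the sparse linear model, the use of a simple subgradient (`sign(w)`) for the l1 term, and all parameter values are illustrative assumptions (the proposed method handles the non-smooth term with a dedicated MCMC scheme rather than a subgradient).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse linear model y = X @ w_true + noise (an illustrative
# stand-in for a network layer; the talk targets deep networks).
n, d = 50, 10
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.normal(size=n)
lam = 1.0  # l1 regularization weight (assumed value)

def energy(w):
    # Negative log-posterior: smooth least-squares term + non-smooth l1 prior.
    r = y - X @ w
    return 0.5 * r @ r + lam * np.abs(w).sum()

def grad(w):
    # Subgradient of the energy; sign(w) stands in for the l1 part.
    return -X.T @ (y - X @ w) + lam * np.sign(w)

def hmc_step(w, eps=1e-3, n_leap=20):
    # One HMC transition: resample momentum, integrate with leapfrog,
    # then Metropolis accept/reject on the total energy H = U + K.
    p = rng.normal(size=w.shape)
    w_new, p_new = w.copy(), p - 0.5 * eps * grad(w)
    for _ in range(n_leap):
        w_new = w_new + eps * p_new
        p_new = p_new - eps * grad(w_new)
    p_new = p_new + 0.5 * eps * grad(w_new)  # restore the final half step
    h_old = energy(w) + 0.5 * p @ p
    h_new = energy(w_new) + 0.5 * p_new @ p_new
    return w_new if np.log(rng.uniform()) < h_old - h_new else w

w = np.zeros(d)
for _ in range(500):
    w = hmc_step(w)
```

After a few hundred transitions the chain moves from the all-zero start toward the high-posterior region, so the sampled `w` has much lower energy than the initial point while the l1 prior keeps the irrelevant coordinates near zero.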

Séminaire Signal et Apprentissage

Site Nord, CMI


