BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//7.2.3.1//EN
X-WR-TIMEZONE:Europe/Paris
BEGIN:VEVENT
UID:6370@i2m.univ-amu.fr
DTSTART;TZID=Europe/Paris:20210618T143000
DTEND;TZID=Europe/Paris:20210618T153000
DTSTAMP:20241120T201414Z
URL:https://www.i2m.univ-amu.fr/evenements/majority-vote-learning-in-pac-b
 ayesian-theory-state-of-the-art-and-novelty/
SUMMARY:Paul Viallard (Laboratoire Hubert Curien\, Data Intelligence Team\,
  Saint-Étienne): Majority vote learning in PAC-Bayesian theory: state of
  the art and novelty
DESCRIPTION:Paul Viallard: In machine learning\, ensemble methods are
  ubiquitous: Boosting\, Bagging\, Support Vector Machines\, and Random
  Forests are famous examples. Here we focus on models expressed as a
  weighted majority vote. The objective is then to learn a majority vote
  whose performance is guaranteed on new\, unseen data. Such a guarantee
  can be obtained with PAC (Probably Approximately Correct) guarantees\,
  a.k.a. generalization bounds\, which upper-bound the risk that the
  majority vote makes an error (through the 0-1 loss). One statistical
  learning theory that provides such bounds in the context of majority
  votes is the PAC-Bayesian framework. The PAC-Bayesian framework has the
  advantage of offering bounds that can be optimized: learning algorithms
  can be derived from them. However\, a major drawback of this framework
  is that the classical bounds do not directly bound the majority vote
  risk: one has to use a (non-precise) surrogate of the 0-1 loss. In this
  talk\, we recall the state-of-the-art learning algorithms based on
  PAC-Bayesian bound minimization. Moreover\, we introduce three
  contributions that allow us to obtain majority votes with precise
  guarantees: (1) We introduce three algorithms based on a surrogate from
  the PAC-Bayesian literature called the C-Bound. Our minimization
  procedures lead to tight generalization bounds since they directly
  minimize PAC-Bayesian bounds. (2) We discuss the use of another kind of
  bound in the context of majority vote learning\, called the
  "disintegrated PAC-Bayesian bound". (3) We introduce a way to learn a
  stochastic majority vote (with weights sampled from a Dirichlet
  distribution) whose guarantee holds directly on the majority vote risk:
  it does not require the use of a surrogate (like the C-Bound).
  \nhttps://arxiv.org/abs/2104.13626
ATTACH;FMTTYPE=image/jpeg:https://www.i2m.univ-amu.fr/wp-content/uploads/2
 021/04/Paul_Viallard.jpg
CATEGORIES:Séminaire,Signal et Apprentissage,Virtual event
END:VEVENT
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
DTSTART:20210328T030000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
END:DAYLIGHT
END:VTIMEZONE
END:VCALENDAR