BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//7.2.3.1//EN
X-WR-TIMEZONE:Europe/Paris
BEGIN:VEVENT
UID:5486@i2m.univ-amu.fr
DTSTART;TZID=Europe/Paris:20121115T140000
DTEND;TZID=Europe/Paris:20121115T150000
DTSTAMP:20241028T214843Z
URL:https://www.i2m.univ-amu.fr/evenements/e-morvant-lif-a-well-founded-pa
 c-bayesian-majority-vote-applied-to-the-nearest-neighbor-rule/
SUMMARY:(...): E. Morvant (LIF): A Well-founded PAC-Bayesian Majority
  Vote applied to the Nearest Neighbor Rule
DESCRIPTION:A Well-founded PAC-Bayesian Majority Vote applied to the
  Nearest Neighbor Rule\n\nBy Emilie Morvant\, LIF.\n\nThe Nearest
  Neighbor (NN) [1] rule is probably the best-known classification
  method. Its widespread use in machine learning and pattern
  recognition is due to its simplicity\, its theoretical properties
  and its good practical performance. In this work\, we focus on the
  k-NN classifier rule\, where the predicted class of an instance is
  the majority class over its k-nearest neighbors in the learning
  sample. However\, the k-NN rule suffers from limitations\, among
  them the choice of a suitable k and the impossibility of deriving
  generalization guarantees for the standard k-NN algorithm in
  finite-sample situations.\nTo tackle these drawbacks\, we propose
  to investigate a new well-founded quadratic algorithm called MinCq
  [2]\, which takes advantage of the PAC-Bayesian setting [3] by
  looking for a probability distribution over a set of voters H\,
  i.e. for suitable weights to give to each voter in order to build
  a majority vote. In particular\, MinCq aims at minimizing a bound
  involving the first two statistical moments of the margin realized
  on the learning data. This framework offers strong and elegant
  theoretical guarantees on the learned weighted majority vote.\nIn
  the context of the k-NN rule\, if H consists of the set of k-NN
  classifiers themselves (k={1\,2\,...})\, MinCq may spare us from
  tuning k and can "easily" provide generalization guarantees.
  However\, in such a situation we point out two limitations of
  MinCq. First\, it focuses on quasi-uniform distributions (i.e.
  close to the uniform distribution)\, which is not appropriate in
  settings where one has an a priori belief on the relevance of the
  voters: we would like to give higher weights to nearer neighbors.
  Second\, the theoretical guarantees do not hold when the voters
  are built from learning examples (which is the case for a k-NN
  classifier).\nIn this work\, we thus propose to generalize MinCq
  by allowing the incorporation of an a priori belief P\,
  constraining the learned distribution to be P-aligned\, and we
  extend the generalization guarantees to the PAC-Bayes
  sample-compression setting with voters built from learning
  examples. We set a suitable P-aligned distribution and conduct a
  large comparative experimental study that provides practical
  evidence of the efficiency of our method\, called P-MinCq.\n\n[1]
  T. Cover and P. Hart\, "Nearest neighbor pattern classification"\,
  IEEE Transactions on Information Theory\, vol. 13\, no. 1\, pp.
  21-27\, 1967\n[2] F. Laviolette\, M. Marchand and J.-F. Roy\,
  "From PAC-Bayes Bounds to Quadratic Programs for Majority Votes"\,
  in Proceedings of ICML 2011\n[3] D. A. McAllester\, "PAC-Bayesian
  Model Averaging"\, in Proceedings of COLT 1999\n\n
CATEGORIES:Séminaire,Signal et Apprentissage
END:VEVENT
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:STANDARD
DTSTART:20121028T020000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
END:STANDARD
END:VTIMEZONE
END:VCALENDAR