BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//wp-events-plugin.com//6.4.6.4.1//EN
X-WR-TIMEZONE:Europe/Paris
BEGIN:VEVENT
UID:3286@i2m.univ-amu.fr
DTSTART;TZID=Europe/Paris:20121108T133000
DTEND;TZID=Europe/Paris:20121108T173000
DTSTAMP:20200422T073822Z
URL:https://www.i2m.univ-amu.fr/events/full-signal-and-machine-learning-af
ternoon-session-for-welcoming-new-members/
SUMMARY:Full Signal and Machine Learning afternoon session for welcoming ne
 w members.
DESCRIPTION:13h30 Optimization of High Dimensional Functions: Application t
o a Pulse Shaping Problem\, Mattias Gybels\, LIF.\n\n14h Nonlinear functio
nal data analysis with reproducing kernels\, Hachem Kadri\, LIF.\n\n14h30
Confused Multiclass Relevance Vector Machine\, Ugo Louche\, LIF.\n\n15h Au
 tomatic Drum Transcription with informed NMF\, Antoine Bonnefoy\, LIF.\n\
n15h30 Coffee break.\n\n16h Proximal methods for multiple removal in seism
ic data\, Caroline Chaux\, LATP.\n\n16h30 Cosparse analysis model and unce
rtainty principle: some basics and challenges\, Sangnam Nam\, LATP.\n\n17h
On the accuracy of fiber tractography\, Sebastiano Barbieri\, LATP.\n\n17
h30 End of the scientific part.\nOptimization of High Dimensional Function
s: Application to a Pulse Shaping Problem by Mattias Gybels\, LIF.\nDuring
  this presentation\, I will describe the work accomplished during my Master's
  degree internship. After a quick overview of the main concepts of optimiz
ation\, I will detail the optimization problem raised by the “Laser-matt
er interaction” research team of the Hubert Curien Laboratory (Saint-Eti
 enne). Finally\, I will explain the chosen solution and detail some of our r
esults.\n\nNonlinear functional data analysis with reproducing kernels\, b
y Hachem Kadri\, LIF.\nRecent statistical and machine learning studies hav
e revealed the potential benefit of adopting a functional data analysis (F
DA) point of view to improve learning when data are objects in infinite di
mensional Hilbert spaces. However\, nonlinear modeling of such data (aka f
unctional data) is a topic that has not been sufficiently investigated\, e
specially when response data are functions. Reproducing kernel methods pro
vide powerful tools for nonlinear learning problems\, but to date they hav
e been used more to learn scalar or vector-valued functions than function-
valued functions. Consequently\, reproducing kernels for functional data a
nd their associated function-valued RKHS have remained mostly unknown and
poorly studied. This work describes a learning methodology for nonlinear F
DA based on extending the widely used scalar-valued RKHS framework to the
functional response setting. It introduces a set of rigorously defined rep
 roducing operator-valued kernels suitable for functional response data that
  can be valuably applied to take into account relationships between sample
s and the functional nature of data. Finally\, it shows experimentally tha
t the nonlinear FDA framework is particularly relevant for speech and audi
 o processing applications where attributes are really functions and depend
 ent on each other.\n\nConfused Multiclass Relevance Vector Machine by Ugo
Louche\, LIF.\nThe Relevance Vector Machine (RVM\, Tipping 2001) is a Baye
sian method for machine learning. It is closely related to the well-known
support vector machines (SVM\, Vapnik\, 1995): RVMs can take advantage of
kernel embeddings and they compute sparse solutions (which is beneficial b
 oth from the statistical and computational points of view). In contrast
  to SVMs\, though\, RVMs do not require any hyperparameter settings\, th
 anks to their Bayesian formulation\, and they compute predictions with proba
bilistic outputs. RVMs have been recently extended to the problem of multi
class prediction with composite kernel (mRVM\, Damoulas and Girolami\, 200
 9) where it has been shown that their good properties still hold. In this wor
k\, we present a quick overview of the RVM/mRVM method and the Variational
Bayesian Expectation Maximization approximation (VBEM\, Beal and Ghahrama
 ni\, 2003)\, as the latter is used to overcome intractability in the mRV
M model. We then propose a new multiclass RVM approach capable of handling
  the case where there might be mislabellings in the training data\, as it
may be the case in many real-world applications. Based on the idea that we
are provided with a confusion matrix\, we derive a learning algorithm tha
t computes a multiclass predictor that shows extreme robustness to confuse
d labels. The crux of our work is to provide the various learning equation
 s coming from the need to resort to the VBEM approximation to solve the
full Bayesian and intractable learning problem posed by the mRVM model\, i
n the case of mislabelled data.\n\nAutomatic Drum Transcription with infor
med NMF by Antoine Bonnefoy\, LIF.\nExtracting structured data from a musi
cal signal is an active subject of research (Music Information Retrieval).
In this context\, the drum kit holds an important part of the information
 \, as it contains the rhythmic part of the music. The NMF is a powerful
  tool for source separation: using this property\, one can apply it to sep
 arate the sound into several tracks\, each one containing only one element
  of the kit\, so as to extract the drum score. We used an NMF method and
  added some prior information\, based on physical and statistical ways of
  playing drums\, to the algorithm in order to improve the results.\n\nProxima
l methods for multiple removal in seismic data\, by Caroline Chaux\, LATP.
\nJoint work: Diego Gragnaniello\, Mai Quyen Pham\, Jean-Christophe Pesque
t\, Laurent Duval. During the acquisition of seismic data\, undesirable co
 herent seismic events\, such as multiples\, are also recorded\, often result
ing in a degradation of the signal of interest. The complexity of these da
ta has historically contributed to the development of several efficient si
gnal processing tools\; for instance wavelets or robust l1-based sparse re
storation. The objective of this work is to propose an original approach t
o the multiple removal problem. A variational framework is adopted here\,
but instead of assuming some knowledge on the kernel\, we assume that a te
mplate is available. Consequently\, it turns out that the problem reduces
  to estimating Finite Impulse Response filters\, which are assume
 d to vary slowly over time. We assume that the characteristics of the sig
nal of interest are appropriately described through a prior statistical mo
del in a basis of signals\, e.g. a wavelet basis. The data fidelity term t
hus takes into account the statistical properties of the basis coefficient
s (one can take a l1-norm to favour sparsity)\, the regularization term mo
 dels prior information that is available on the filters\, and a last const
 raint modelling the smooth variations of the filters over time is added.
  The resulting minimization is achieved using the PPXA+ method\, which bel
ongs to the class of parallel proximal splitting approaches.\n\nCosparse a
nalysis model and uncertainty principle: some basics and challenges\, by S
 angnam Nam\, LATP.\nThe sparse synthesis model has been studied extensively an
 d intensively in recent years and has found an impressive number of su
ccessful applications. In this talk\, we discuss an alternative\, but simi
lar looking model called cosparse analysis model. As basics\, we show why
we think the model is different from the sparse model and then discuss the
uniqueness property in the compressive sensing framework. Next\, we look
  at the challenging task of analysis operator learning. The uncertainty prin
 ciple is an important (but rather unfortunate) concept in signal processing (and
other fields). Roughly speaking\, it says that we cannot achieve simultan
eous localization of both time and frequency to arbitrary precisions. Whil
 e the formulation in the continuous domain is beautiful and can be proved eleg
 antly\, there appear to be many challenges when we move to the discrete domain
 . We will discuss some of these challenges. We will also discuss how uncert
ainty principle appears in the analysis model.
CATEGORIES:Séminaire Signal et Apprentissage
END:VEVENT
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:STANDARD
DTSTART:20121028T020000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
END:STANDARD
END:VTIMEZONE
END:VCALENDAR