Institut de Mathématiques de Marseille, UMR 7373



Signal et Apprentissage Seminar

by Sandrine Anthoine, Caroline Chaux, Eric Lozingot, Clothilde Melot



  • Thursday, September 13, 2012, 14:00-15:00

    T. Peel (LIF): Matching Pursuit with Stochastic Selection

    Abstract: We propose a Stochastic Selection strategy that accelerates the atom selection step of Matching Pursuit. This strategy consists of randomly selecting a subset of atoms and a subset of rows of the full dictionary at each step of the Matching Pursuit, yielding a sub-optimal but fast atom selection. We study the performance of the proposed algorithm in terms of approximation accuracy (decrease of the residual norm), exact-sparse recovery, and audio declipping of real data. Numerical experiments show the relevance of the approach. The proposed Stochastic Selection strategy is presented with Matching Pursuit, but it applies to any pursuit algorithm whose selection step is based on the computation of correlations.
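The selection strategy described in the abstract lends itself to a short sketch. The following is an illustrative reconstruction only, not the authors' code: the dictionary size, subset fractions, and number of iterations are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mp_stochastic(D, y, n_iter=200, atom_frac=0.5, row_frac=0.5):
    """Matching Pursuit whose selection step looks only at random
    subsets of atoms (columns) and of rows of the dictionary D."""
    n_rows, n_atoms = D.shape
    x = np.zeros(n_atoms)
    r = y.copy()                                   # residual
    for _ in range(n_iter):
        atoms = rng.choice(n_atoms, max(1, int(atom_frac * n_atoms)), replace=False)
        rows = rng.choice(n_rows, max(1, int(row_frac * n_rows)), replace=False)
        corr = D[np.ix_(rows, atoms)].T @ r[rows]  # sub-sampled correlations
        k = atoms[np.argmax(np.abs(corr))]         # sub-optimal but cheap choice
        c = D[:, k] @ r                            # coefficient on the FULL atom
        x[k] += c
        r -= c * D[:, k]
    return x, r

# toy exact-sparse setup: unit-norm random dictionary, 3-sparse signal
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(256)
x_true[[3, 40, 100]] = [1.0, -2.0, 0.5]
y = D @ x_true
x_hat, r = mp_stochastic(D, y)
```

Because the coefficient update still uses the full selected atom, the residual norm is non-increasing even though the selection itself is only approximate.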


  • Thursday, September 20, 2012, 14:00-15:00

    A. Olivero (LIF/LATP): Phase Reconstruction Problem: Recent Advances and Applications for Audio Signals

    Abstract: "Tout est dans le titre." ("It's all in the title.")


  • Thursday, September 27, 2012, 14:00-15:00

    S. Takerkart (LIF): Learning from structured fMRI patterns using graph kernels

    Abstract: Classification of medical images in multi-subject settings is a difficult challenge due to the variability that exists between individuals. Here we introduce a new graph-based framework specifically designed to deal with the inter-subject functional variability present in functional MRI data. A graphical representation is built to encode the functional, geometric and structural properties of local activation patterns. The design of a specific graph kernel allows us to conduct SVM classification directly in graph space. I will present results obtained on both simulated and real datasets, describe potential applications and discuss future directions for this work.


  • October 11-12, 2012

    LATP new members' day

  • Thursday, October 18, 2012, 14:00-15:00

    N. Pustelnik (ENS Lyon): A multicomponent proximal algorithm for Empirical Mode Decomposition

    Abstract: The Empirical Mode Decomposition (EMD) is known to be a powerful tool adapted to the decomposition of a signal into a collection of intrinsic mode functions (IMFs). A key procedure in the extraction of the IMFs is the sifting process, whose main drawbacks are its dependence on the choice of an interpolation method and its lack of clear convergence guarantees. We propose a convex optimization procedure to replace the sifting process in the EMD. The considered method is based on proximal tools, which allow us to deal with a large class of constraints such as quasi-orthogonality or extrema-based constraints.
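The "proximal tools" mentioned in the abstract are built around proximal operators. As a minimal illustration (not the talk's algorithm), here is the proximal operator of the l1 norm, the elementwise soft-thresholding that proximal splitting methods use as a basic building block:

```python
import numpy as np

def prox_l1(v, t):
    """prox of t * ||.||_1 at v: elementwise soft-thresholding,
    shrinking every entry towards zero by t and clipping at zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

v = np.array([3.0, -0.2, 1.5])
out = prox_l1(v, 0.5)   # small entries vanish, large ones shrink by 0.5
```

A proximal algorithm alternates such operators, one per constraint or penalty term, which is what makes large families of constraints tractable.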

    Location: Room 164, CMI


  • Thursday, October 25, 2012, 09:00-17:00

    Workshop at FRUMAM on the estimation of hyperparameters

    Abstract: More information is available on the event website.

    Location: FRUMAM


  • Thursday, November 8, 2012, 13:30-17:30

    A full Signal and Machine Learning afternoon session to welcome new members.

    Programme:
    13:30 Optimization of High Dimensional Functions: Application to a Pulse Shaping Problem, Mattias Gybels, LIF.
    14:00 Nonlinear functional data analysis with reproducing kernels, Hachem Kadri, LIF.
    14:30 Confused Multiclass Relevance Vector Machine, Ugo Louche, LIF.
    15:00 Automatic Drum Transcription with informed NMF, Antoine Bonnefoy, LIF.
    15:30 Coffee break.
    16:00 Proximal methods for multiple removal in seismic data, Caroline Chaux, LATP.
    16:30 Cosparse analysis model and uncertainty principle: some basics and challenges, Sangnam Nam, LATP.
    17:00 On the accuracy of fiber tractography, Sebastiano Barbieri, LATP.
    17:30 End of the scientific part.

    Optimization of High Dimensional Functions: Application to a Pulse Shaping Problem, by Mattias Gybels, LIF.

    In this presentation, I will present the work accomplished during my Master's degree internship. After a quick overview of the main concepts of optimization, I will detail the optimization problem raised by the "Laser-matter interaction" research team of the Hubert Curien Laboratory (Saint-Etienne). Finally, I will explain the chosen solution and detail some of our results.

    Nonlinear functional data analysis with reproducing kernels, by Hachem Kadri, LIF.

    Recent statistical and machine learning studies have revealed the potential benefit of adopting a functional data analysis (FDA) point of view to improve learning when data are objects in infinite-dimensional Hilbert spaces. However, nonlinear modeling of such data (aka functional data) is a topic that has not been sufficiently investigated, especially when response data are functions. Reproducing kernel methods provide powerful tools for nonlinear learning problems, but to date they have been used more to learn scalar- or vector-valued functions than function-valued functions. Consequently, reproducing kernels for functional data and their associated function-valued RKHS have remained mostly unknown and poorly studied. This work describes a learning methodology for nonlinear FDA based on extending the widely used scalar-valued RKHS framework to the functional response setting. It introduces a set of rigorously defined reproducing operator-valued kernels suitable for functional response data, which can be valuably applied to take into account relationships between samples and the functional nature of the data. Finally, it shows experimentally that the nonlinear FDA framework is particularly relevant for speech and audio processing applications, where attributes are genuinely functions and dependent on each other.

    Confused Multiclass Relevance Vector Machine, by Ugo Louche, LIF.

    The Relevance Vector Machine (RVM; Tipping, 2001) is a Bayesian method for machine learning. It is closely related to the well-known support vector machines (SVM; Vapnik, 1995): RVMs can take advantage of kernel embeddings and they compute sparse solutions (which is beneficial from both the statistical and computational points of view). Unlike SVMs, though, RVMs do not require any hyperparameter settings, thanks to their Bayesian formulation, and they compute predictions with probabilistic outputs.
    RVMs have recently been extended to the problem of multiclass prediction with composite kernels (mRVM; Damoulas and Girolami, 2009), where it has been shown that their good properties still hold.
    In this work, we present a quick overview of the RVM/mRVM method and the Variational Bayesian Expectation Maximization approximation (VBEM; Beal and Ghahramani, 2003), as the latter is used to overcome intractability in the mRVM model.
    We then propose a new multiclass RVM approach capable of handling the case where there might be mislabellings in the training data, as may occur in many real-world applications. Based on the idea that we are provided with a confusion matrix, we derive a learning algorithm that computes a multiclass predictor showing extreme robustness to confused labels. The crux of our work is to provide the various learning equations that arise from the need to resort to the VBEM approximation to solve the full Bayesian, intractable learning problem posed by the mRVM model in the case of mislabelled data.

    Automatic Drum Transcription with informed NMF, by Antoine Bonnefoy, LIF.

    Extracting structured data from a musical signal is an active research subject (Music Information Retrieval). In this context, the drum kit holds an important part of the information: it carries the rhythmic part of the music. NMF is a powerful tool for source separation; using this particularity, one can apply it to separate the sound into several tracks, each containing only one element of the kit, so as to extract the drum score. We used an NMF method and added prior information, based on physical and statistical characteristics of drum playing, to the algorithm in order to improve the results.
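Plain NMF factorizes a nonnegative spectrogram V into spectral templates W and activations H. A minimal Lee-Seung multiplicative-update sketch is shown below; the informed priors of the talk are not modelled here, and the toy matrix and rank are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W @ H (Frobenius loss).
    Nonnegativity is preserved automatically by the multiplicative form."""
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy "spectrogram": two sources with fixed spectra and alternating activations
V = np.outer([1, 0, 2], [1, 0, 1, 0]) + np.outer([0, 3, 1], [0, 1, 0, 1.0])
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In drum transcription, each column of W would ideally capture one kit element's spectrum and the corresponding row of H its onsets over time; this is what the talk's physical/statistical priors steer the factorization towards.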

    Proximal methods for multiple removal in seismic data, by Caroline Chaux, LATP.

    Joint work with Diego Gragnaniello, Mai Quyen Pham, Jean-Christophe Pesquet, and Laurent Duval.
    During the acquisition of seismic data, undesirable coherent seismic events such as multiples are also recorded, often resulting in a degradation of the signal of interest. The complexity of these data has historically contributed to the development of several efficient signal processing tools, for instance wavelets or robust l1-based sparse restoration. The objective of this work is to propose an original approach to the multiple removal problem. A variational framework is adopted here, but instead of assuming some knowledge of the kernel, we assume that a template is available. Consequently, the problem reduces to estimating Finite Impulse Response (FIR) filters, which are assumed to vary slowly along time. We assume that the characteristics of the signal of interest are appropriately described through a prior statistical model in a basis of signals, e.g. a wavelet basis. The data fidelity term thus takes into account the statistical properties of the basis coefficients (one can take an l1-norm to favour sparsity), the regularization term models the prior information available on the filters, and a last constraint modelling the smooth variation of the filters along time is added. The resulting minimization is achieved using the PPXA+ method, which belongs to the class of parallel proximal splitting approaches.

    Cosparse analysis model and uncertainty principle: some basics and challenges, by Sangnam Nam, LATP.

    The sparse synthesis model has been studied extensively and intensely over recent years and has found an impressive number of successful applications. In this talk, we discuss an alternative, but similar-looking, model called the cosparse analysis model. As basics, we show why we think the model is different from the sparse model and then discuss the uniqueness property in the compressive sensing framework. Next, we look at the challenging task of analysis operator learning.
    The uncertainty principle is an important (but rather unfortunate) concept in signal processing (and other fields). Roughly speaking, it says that we cannot achieve simultaneous localization in both time and frequency to arbitrary precision. While the formulation in the continuous domain is beautiful and can be proved elegantly, many challenges appear when we move to the discrete domain. We will discuss some of these challenges. We will also discuss how the uncertainty principle appears in the analysis model.


  • Thursday, November 15, 2012, 14:00-15:00

    E. Morvant (LIF): A Well-founded PAC-Bayesian Majority Vote applied to the Nearest Neighbor Rule

    Abstract: The Nearest Neighbor (NN) rule [1] is probably the best-known classification method. Its widespread use in machine learning and pattern recognition is due to its simplicity, its theoretical properties and its good practical performance. In this work, we focus on the k-NN classification rule, where the predicted class of an instance corresponds to the majority class over its k nearest neighbors in the learning sample. However, the k-NN rule suffers from limitations, among which the choice of a suitable k and the impossibility of deriving generalization guarantees with a standard k-NN algorithm in finite-sample situations.
    To tackle these drawbacks, we propose to investigate a new well-founded quadratic algorithm called MinCq [2], which takes advantage of the PAC-Bayesian setting [3] by looking for a probability distribution over a set of voters H, i.e. suitable weights to be given to each voter in order to build a majority vote. In particular, MinCq aims at minimizing a bound involving the first two statistical moments of the margin realized on the learning data. This framework offers strong and elegant theoretical guarantees on the learned weighted majority vote.
    In the context of the k-NN rule, if H consists of the set of the k-NN classifiers themselves (k = 1, 2, ...), MinCq may spare us from tuning k and can "easily" provide generalization guarantees. However, in such a situation, we point out two limitations of MinCq. First, it focuses on quasi-uniform distributions (i.e. close to the uniform distribution), which are not appropriate in settings where one has an a priori belief on the relevance of the voters: we would like to give higher weights to nearer neighbors. Second, the theoretical guarantees do not hold when the voters are built from learning examples (which is the case of a k-NN classifier).
    We thus propose to generalize MinCq by allowing the incorporation of an a priori belief P, constraining the learned distribution to be P-aligned, and we extend the generalization guarantees to the PAC-Bayes sample compression setting with voters built from learning examples. We set a suitable P-aligned distribution and conduct a large comparative experimental study that shows practical evidence of the efficiency of our method, called P-MinCq.

    [1] T. Cover and P. Hart, "Nearest neighbor pattern classification", IEEE Transactions on Information Theory, vol. 13, no. 1, pp. 21-27, 1967.
    [2] F. Laviolette, M. Marchand and J.-F. Roy, "From PAC-Bayes bounds to quadratic programs for majority votes", in Proceedings of ICML 2011.
    [3] D. A. McAllester, "PAC-Bayesian model averaging", in Proceedings of COLT 1999.
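As a toy illustration of a weighted majority vote over k-NN voters with more weight on nearer neighbours: the fixed weights below merely stand in for the P-aligned distribution that P-MinCq actually learns via a quadratic program, so every number here is an assumption.

```python
import numpy as np

def knn_vote(X_train, y_train, x, k):
    """Prediction of a single k-NN voter (labels in {-1, +1})."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    return np.sign(y_train[nn].sum())

def weighted_majority(X_train, y_train, x, ks=(1, 3, 5), weights=(0.5, 0.3, 0.2)):
    """Weighted vote over the k-NN voters; decreasing weights encode an
    a priori belief favouring small k, i.e. nearer neighbours."""
    score = sum(w * knn_vote(X_train, y_train, x, k) for k, w in zip(ks, weights))
    return np.sign(score)

# two well-separated 1-D clusters
X = np.array([[0.0], [0.1], [0.2], [1.0], [1.1], [1.2]])
y = np.array([-1, -1, -1, 1, 1, 1])
pred = weighted_majority(X, y, np.array([0.05]))
```

The point of P-MinCq is precisely to replace such hand-set weights with a learned, theoretically grounded distribution over the voters.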


  • Thursday, November 22, 2012, 14:00-15:00

    L. Ralaivola (LIF): Online Confusion Learning and Passive-Aggressive Scheme

    Abstract: This work provides the first (to the best of our knowledge) analysis of online learning algorithms for multiclass problems when the confusion matrix is taken as a performance measure. The work builds upon recent and elegant results on non-commutative concentration inequalities, i.e. concentration inequalities that apply to matrices, and more precisely to matrix martingales. We establish generalization bounds for online learning algorithms and show how the theoretical study motivates the proposition of a new confusion-friendly learning procedure. This learning algorithm, called COPA (for COnfusion Passive-Aggressive), is a passive-aggressive learning algorithm; it is shown that the update equations for COPA can be computed analytically, thus sparing the user from having to resort to any optimization package to implement it.
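For context on "passive-aggressive", here is the classical binary PA-I step of Crammer et al.; COPA's actual multiclass, confusion-aware update differs, so this sketch only illustrates the passive/aggressive mechanism, not the COPA equations.

```python
import numpy as np

def pa_update(w, x, y, C=1.0):
    """One PA-I step for labels y in {-1, +1}: stay passive if the
    margin is already >= 1, otherwise move just enough to reach it."""
    loss = max(0.0, 1.0 - y * (w @ x))
    if loss == 0.0:
        return w                       # passive: margin constraint satisfied
    tau = min(C, loss / (x @ x))       # closed-form (analytic) step size
    return w + tau * y * x

w = np.zeros(2)
w = pa_update(w, np.array([1.0, 0.0]), +1)   # aggressive step
w2 = pa_update(w, np.array([1.0, 0.0]), +1)  # now passive: unchanged
```

The appeal echoed in the abstract is that the step size is analytic, so no optimization package is needed at update time.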



Working group

Scientific event

Type: Transverse seminar
Name: Signal et Apprentissage
Organizers: Caroline Chaux (I2M), François-Xavier Dupé (LIF), Valentin Emiya (LIF), Hachem Kadri (LIF)
Host laboratories: I2M, SI team of the ALEA group (Marseille); LIF, Qarma team (Marseille)
Frequency: Weekly
Day and time: Friday, 14:00-15:00
Venues: CMI, seminar room R164, and occasionally FRUMAM, St Charles

Contact:

To subscribe: