Adversarial attacks

Zhengyu Zhao
CISPA Helmholtz Center for Information Security, Saarbrücken, Germany

Date: 03/03/2023
14:30 – 15:30

An adversarial attack is a method for generating adversarial examples: inputs to a machine learning model that are purposely designed to cause the model to make a mistake in its predictions, despite resembling valid inputs to a human.
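To make the idea concrete, here is a minimal sketch of one well-known attack, the Fast Gradient Sign Method (FGSM), applied to a hand-rolled logistic-regression model. The model weights, input, and perturbation budget below are illustrative assumptions, not material from the talk.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM for binary logistic regression.

    With binary cross-entropy loss, the gradient of the loss with
    respect to the input x is (sigmoid(w.x + b) - y) * w; the attack
    moves x by eps in the sign direction of that gradient.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and input (illustrative values) that are classified
# correctly before the attack.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # true label y = 1; w.x + b = 1.5 > 0
y = 1.0

x_adv = fgsm(x, y, w, b, eps=1.0)
print(sigmoid(w @ x + b) > 0.5)       # clean input: predicted correctly
print(sigmoid(w @ x_adv + b) > 0.5)   # perturbed input: prediction flips
```

The single gradient-sign step is enough to flip the toy model's decision; attacks on deep networks follow the same principle, typically with a much smaller `eps` so the perturbed input still looks valid to a human.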

Adversarial machine learning is the study of attacks on machine learning algorithms and of defenses against such attacks.

Séminaire Signal et Apprentissage

Site Nord, CMI


