Adversarial attacks

From RB Wiki

Adversarial attacks encompass a wide range of user behaviors that exploit an algorithm's vulnerabilities to the attacker's advantage.

Evasion attacks

An evasion attack exploits the vulnerability of an algorithm to imperceptible alterations of its inputs. Typically, even when an algorithm successfully classifies cat images as such 99.999% of the time, for any given cat image there may exist a slight perturbation such that the algorithm no longer classifies the perturbed image as a cat. This vulnerability has become critical for large-scale algorithms, such as YouTube's paedophilia moderation algorithm Wired19.

GSS14 highlighted the vulnerability of state-of-the-art machine learning algorithms to evasion attacks, with an example that has since become iconic.
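The construction in GSS14 can be illustrated with a minimal sketch of the fast gradient sign method: nudge every input coordinate by a small amount epsilon in the direction that increases the loss. The toy "classifier" below is a hand-picked logistic regression, not a trained model, and all names and values are illustrative assumptions.

```python
import numpy as np

# Toy logistic-regression "cat vs. non-cat" classifier.
# The weights are random assumptions, not a trained model.
rng = np.random.default_rng(0)
w = rng.normal(size=100)  # toy weight vector

def predict(x):
    """Probability that input x is classified as 'cat'."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# A toy input the model classifies as a cat with high confidence
# (it is aligned with the weight vector).
x = w / np.linalg.norm(w)

# Fast gradient sign method (FGSM) sketch: for logistic regression
# the gradient of the score with respect to x is simply w, so the
# loss-increasing perturbation for a "cat"-labeled input is
# -epsilon * sign(w). Each coordinate moves by at most epsilon.
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)

print(predict(x))     # high confidence "cat"
print(predict(x_adv)) # no longer classified as "cat"
```

On real image classifiers the same idea works with a far smaller epsilon, which is why the perturbation is imperceptible to humans even though it flips the model's decision.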


In February 2020, an artist created a virtual traffic jam by crossing a bridge while pulling a trolley filled with 99 phones, thereby making Google Maps believe that the bridge was congested and causing it to reroute numerous drivers TechBriefly20.

Poisoning attacks

Poisoning attacks consist of contaminating a machine learning algorithm's training data. Robust statistics aims to develop learning algorithms that successfully learn from poisoned datasets, ideally nearly as well as if the datasets had not been poisoned in the first place. There has been remarkable recent progress in this domain DiakonikolasKane19 DepersinLecué19 BEGS17 RB2.
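The core idea of robust statistics can be shown with a toy one-dimensional sketch: when an attacker controls a small fraction of the dataset, the empirical mean can be dragged arbitrarily far, while a robust estimator such as the median barely moves. This is only an illustration of the principle, not the estimators of the cited works, and all numbers are assumptions.

```python
import numpy as np

# Honest data: 900 samples around the true mean of 5.0.
rng = np.random.default_rng(1)
true_mean = 5.0
clean = rng.normal(loc=true_mean, scale=1.0, size=900)

# Poisoning: the attacker injects 100 extreme points (10% of the data).
poison = np.full(100, 1000.0)
data = np.concatenate([clean, poison])

naive = data.mean()       # dragged far above the true mean by the attack
robust = np.median(data)  # stays close to the true mean

print(naive, robust)
```

With 10% of the points placed at 1000, the naive mean lands around 100 while the median remains within a fraction of a unit of the true mean, because the attacker would need to corrupt a majority of the samples to move it arbitrarily.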

Astroturfing attacks

Astroturfing attacks and search-engine optimization (SEO) exploit vulnerabilities of recommender systems to promote specific content, for instance by creating fake accounts or by exploiting compromised accounts to tweet hashtags (and immediately delete the tweet to evade detection) EOOR19.