Adversarial attacks
Adversarial attacks encompass a wide range of user behaviors that exploit an algorithm's vulnerabilities to the attacker's advantage.
Evasion attacks
An evasion attack exploits an algorithm's vulnerability to imperceptible alterations of its inputs. Typically, while an algorithm successfully classifies cat images as such 99.999% of the time, for any given cat image there may be a slight perturbation such that the algorithm no longer classifies the perturbed image as a cat. This vulnerability has become critical for large-scale algorithms, such as YouTube's paedophilia moderation algorithm Wired19.
GSS14 highlighted the vulnerabilities of state-of-the-art machine learning algorithms to evasion attacks, with an example that has since become iconic.
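To make this concrete, the following is a minimal sketch of the fast gradient sign method introduced in GSS14, assuming a differentiable PyTorch image classifier; the function name and the value of epsilon are illustrative choices, not taken from the cited work.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    # Assumed setup: `model` is a differentiable PyTorch classifier,
    # `x` a batch of images with pixel values in [0, 1], `y` the true labels.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take one step in the direction that increases the loss. For small
    # epsilon the change is imperceptible, yet it often flips the prediction.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

For a well-trained classifier, an epsilon of a few intensity levels out of 255 is often enough to change the predicted class while leaving the image visually unchanged.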
In February 2020, an artist caused a traffic jam by crossing a bridge with a trolley filled with 99 phones, thereby making Google Maps believe that the bridge was congested and rerouting numerous drivers TechBriefly20.
Poisoning attacks
Poisoning attacks consist of contaminating a machine learning algorithm's training data. Robust statistics aims to develop learning algorithms that successfully learn from poisoned datasets, hopefully nearly as well as if the datasets had not been poisoned in the first place. There has been remarkable recent progress in this domain DiakonikolasKane19 DepersinLecué19 BEGS17 RB2.
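As one concrete instance of such a defense, the following is a minimal NumPy sketch of the Krum aggregation rule from BEGS17, which selects the submitted gradient that best agrees with its peers; the function name and array shapes are assumptions made for the example.

import numpy as np

def krum(gradients, f):
    # `gradients`: array of shape (n, d), one gradient vector per worker.
    # `f`: assumed upper bound on the number of Byzantine (poisoning) workers.
    n = len(gradients)
    assert n > 2 * f + 2, "Krum requires n > 2f + 2"
    # Pairwise squared Euclidean distances between the workers' gradients.
    dists = np.sum((gradients[:, None, :] - gradients[None, :, :]) ** 2, axis=-1)
    scores = []
    for i in range(n):
        # Score worker i by its total distance to its n - f - 2 nearest peers;
        # honest gradients cluster together and therefore get low scores.
        closest = np.sort(np.delete(dists[i], i))[: n - f - 2]
        scores.append(closest.sum())
    # Keep the single gradient that best agrees with the honest majority.
    return gradients[int(np.argmin(scores))]

Because the selected gradient must be close to most of the others, up to f poisoned gradients cannot pull the aggregate arbitrarily far, which is the guarantee established in BEGS17.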
Astroturfing attacks
Astroturfing attacks and search-engine optimization exploit vulnerabilities of recommender systems to promote specific content, for instance by creating fake accounts or exploiting compromised accounts to tweet hashtags (and immediately delete the tweet to avoid detection) EOOR19.