An algorithmic bias is an (undesirable) bias of an algorithm. In machine learning, this typically occurs when the training dataset contains biased data, e.g. data reflecting historical gender or racial biases.
== Impossibility theorem ==

Group fairness (parity of outcomes across groups) and individual fairness (similar individuals receive similar treatment) cannot, in general, both be satisfied [https://www.chrisstucchio.com/pubs/slides/crunchconf_2018/slides.pdf Stucchio18].
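A toy calculation (with made-up base rates, purely for illustration) shows the tension: if two groups have different base rates for the target outcome, a classifier that predicts the true label perfectly already violates demographic parity, and equalizing acceptance rates forces equally qualified individuals to be treated differently depending on their group.

<syntaxhighlight lang="python">
# Toy illustration of the group-vs-individual fairness tension.
# The base rates below are invented for the example.
groups = {
    "A": {"qualified_rate": 0.8},
    "B": {"qualified_rate": 0.4},
}

# A perfectly accurate classifier accepts exactly the qualified applicants,
# so each group's acceptance rate equals its base rate.
for name, g in groups.items():
    print(f"group {name}: acceptance rate = {g['qualified_rate']:.0%}")

# Group A is accepted at 80%, group B at 40%: demographic parity (a group
# fairness criterion) fails even though every individual is judged solely
# on qualification. Equalizing both rates at, say, 60% requires rejecting
# qualified applicants in A or accepting unqualified ones in B, i.e.
# treating equally qualified individuals differently by group, which
# breaks individual fairness instead.
</syntaxhighlight>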
== Word embedding ==
The case of word embeddings is particularly important, as algorithms rely more and more on natural language processing models trained on historical texts. Such texts usually contain many implicit biases that are essentially impossible to clean.
[https://arxiv.org/pdf/1607.06520.pdf BCZSK][https://dblp.org/rec/bibtex/conf/nips/BolukbasiCZSK16 16] and [https://www.pnas.org/content/pnas/115/16/E3635.full.pdf GSJZ][https://scholar.google.ch/scholar?hl=en&as_sdt=0%2C5&q=Word+embeddings+quantify+100+years+of+gender+and+ethnic+stereotypes&btnG= 18] showed that the word embeddings of occupations correlate with gender. Among other disturbing results, they found that "computer programmer - man + woman ≈ homemaker".
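As a rough sketch of how such an analogy query is typically computed (the gensim library, the pretrained "word2vec-google-news-300" model and the underscore token "computer_programmer" are assumptions here, not details from the papers):

<syntaxhighlight lang="python">
# Sketch: "man : computer programmer :: woman : ?" on pretrained word2vec.
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # large one-off download

# most_similar ranks the vocabulary by cosine similarity to
# v("computer_programmer") - v("man") + v("woman").
results = model.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=5,
)
for word, score in results:
    print(f"{word:25s} {score:.3f}")
</syntaxhighlight>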
Note, however, that [https://arxiv.org/pdf/1905.09866.pdf NNG][https://dblp.uni-trier.de/rec/bibtex/journals/corr/abs-1905-09866 19] show that the highly publicized "doctor - man + woman ≈ nurse" is actually an artefact of the standard analogy query forbidding the input word "doctor" from being returned as the answer.
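A minimal sketch of this artefact, assuming the vectors have already been loaded into a plain {word: numpy array} dictionary (the loading step is omitted): when the input words are allowed as answers, the nearest neighbour of v(doctor) - v(man) + v(woman) is typically "doctor" itself rather than "nurse".

<syntaxhighlight lang="python">
import numpy as np

def analogy(vectors, a, b, c, topn=3, exclude_inputs=True):
    """Words closest (by cosine similarity) to v(a) - v(b) + v(c).

    exclude_inputs=True mimics the common analogy implementation that never
    returns the query words; exclude_inputs=False lifts that restriction.
    """
    target = vectors[a] - vectors[b] + vectors[c]
    target = target / np.linalg.norm(target)
    scores = []
    for word, vec in vectors.items():
        if exclude_inputs and word in (a, b, c):
            continue
        scores.append((word, float(np.dot(target, vec / np.linalg.norm(vec)))))
    return sorted(scores, key=lambda x: -x[1])[:topn]

# vectors = {...}  # e.g. filled from a pretrained embedding
# With the restriction (the setting behind the headline result) "doctor"
# cannot be returned, so another word such as "nurse" can top the list;
# without it, "doctor" itself usually wins.
# print(analogy(vectors, "doctor", "man", "woman", exclude_inputs=True))
# print(analogy(vectors, "doctor", "man", "woman", exclude_inputs=False))
</syntaxhighlight>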