An algorithmic bias is an (undesirable) bias in an algorithm's outputs. In machine learning, this typically occurs when the training dataset contains biased data, e.g. data reflecting historical gender or racial biases.
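As a toy illustration of how a model can reproduce a bias present in its training data, consider a naive predictor that votes by majority among past records of the same group. The dataset below is entirely hypothetical and deliberately encodes a past bias rather than actual qualifications.

```python
from collections import Counter

# Hypothetical historical hiring records: (group, hired).
# The labels encode a past bias, not candidate quality.
records = [("M", True)] * 80 + [("M", False)] * 20 + \
          [("F", True)] * 30 + [("F", False)] * 70

def majority_label(data, group):
    """Predict by majority vote among past records of the same group."""
    labels = [hired for g, hired in data if g == group]
    return Counter(labels).most_common(1)[0][0]

print(majority_label(records, "M"))  # True  -- the historical bias is reproduced
print(majority_label(records, "F"))  # False
```

Even this trivially simple "model" inherits the bias exactly; more sophisticated learners do the same thing in subtler ways.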
Group fairness and individual fairness are, in general, mutually incompatible (Stucchio18): enforcing equal outcomes across groups can require treating similar individuals differently.
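A small numerical sketch (with made-up scores) shows the tension: equalizing acceptance rates across two groups with different score distributions forces a decision rule that rejects a higher-scoring individual in one group while accepting a lower-scoring one in the other.

```python
def accept_top_k(scores, k):
    """Accept the k highest-scoring candidates within a group."""
    threshold = sorted(scores, reverse=True)[k - 1]
    return [s >= threshold for s in scores]

# Hypothetical candidate scores in two groups.
group_a = [0.9, 0.8, 0.7, 0.4]
group_b = [0.6, 0.5, 0.3, 0.2]

# Demographic parity: equal acceptance rate (2 of 4) in each group.
dec_a = accept_top_k(group_a, 2)  # [True, True, False, False]
dec_b = accept_top_k(group_b, 2)  # [True, True, False, False]

# Individual fairness is violated: a 0.7 candidate in group A is
# rejected while a 0.5 candidate in group B is accepted.
print(dec_a, dec_b)
```

The group-level metric (equal rates) and the individual-level metric (similar scores, similar outcomes) cannot both hold here.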
The case of word embeddings is particularly important, as algorithms increasingly rely on natural language processing models trained on historical texts. Such texts usually contain many implicit biases that are essentially impossible to clean out.
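A common way to exhibit such a bias is to project word vectors onto a "gender direction" (e.g. he − she). The 2-d vectors below are invented for illustration; real embeddings are learned from corpora and have hundreds of dimensions, but the projection idea is the same.

```python
import numpy as np

# Toy 2-d "embeddings" (illustrative only, not learned vectors).
vec = {
    "he":       np.array([ 1.0, 0.2]),
    "she":      np.array([-1.0, 0.2]),
    "engineer": np.array([ 0.7, 0.9]),
    "nurse":    np.array([-0.6, 0.9]),
}

# Unit vector along the he - she axis.
gender_axis = vec["he"] - vec["she"]
gender_axis = gender_axis / np.linalg.norm(gender_axis)

def gender_score(word):
    """Projection onto the he-she axis; sign indicates the leaning."""
    return float(vec[word] @ gender_axis)

print(gender_score("engineer"))  # positive: leans toward "he"
print(gender_score("nurse"))     # negative: leans toward "she"
```

With real pretrained embeddings, occupation words show exactly this kind of systematic projection onto gender directions, which is what debiasing methods attempt to remove.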