# Preference learning from comparisons

It has been argued that humans are much more effective at comparing alternatives than at scoring them MaystrePhD18 RB5. Moreover, implicit observations of humans mostly provide such choice data, in the form of like/no-like, share/no-share, click/no-click, and so on. Learning from comparisons therefore seems to be an important approach to preference learning.

## Classical models

The classical models for preference learning are due to Thurstone27, Zermelo29, Mosteller51, BradleyTerry52, LuceBook59, DavidBook63.

In these models, in a choice between alternatives 1 and 2, the human implicitly computes the scores $\theta_1$ and $\theta_2$ that she assigns to each alternative. But her computation of the difference $\theta_1 - \theta_2$ is noisy: it is perturbed by some noise $\varepsilon$, yielding $x_{12} = \theta_1-\theta_2+\varepsilon$. In these models, the sign of $x_{12}$ then determines the human's choice.

This approach explains some of the inconsistencies in human decision-making. Equivalently, it amounts to saying that the probability that the human chooses 1 over 2 is a function of the intrinsic difference $\theta_1-\theta_2$, which we can write $\mathbb P[1 \succ 2] = \mathbb P[x_{12}\gt 0] = \phi(\theta_1-\theta_2)$. Intuitively, the greater the intrinsic difference between 1 and 2, the less likely it is that the human says she prefers 2 to 1.

Different models assume different noise models for $\varepsilon$ or, equivalently, different functions $\phi$. Thurstone's model assumes that $\varepsilon$ is normally distributed, which is equivalent to saying that $\phi(z) = \Phi(z/\sigma)$, where $\sigma^2$ is the variance of $\varepsilon$ (which may depend on the choice of 1 and 2), and where $\Phi$ is the cumulative distribution function of the standard normal distribution.

The Bradley-Terry model assumes that $\varepsilon$ follows a Gumbel distribution. Equivalently, it sets $\phi(z)= \frac{1}{1+\exp(-z)}$ (up to variance scaling). Luce's model generalizes the Bradley-Terry model by considering choices among numerous alternatives, setting $\phi(z_2, z_3, \ldots)= \frac{1}{1+\exp(-z_2)+\exp(-z_3)+\ldots}$, where this quantity is the probability of choosing option 1, and where $z_i = \theta_1-\theta_i$. Interestingly, Luce proved that this framework is equivalent to demanding the independence of irrelevant alternatives.
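As a minimal illustration of the three choice-probability functions above (a sketch using only the standard library; the function names are ours, not from any of the cited works):

```python
import math

def thurstone_prob(theta1, theta2, sigma=1.0):
    """P[1 > 2] under Thurstone's model: Phi((theta1 - theta2) / sigma),
    where Phi is the standard normal CDF (computed here via erf)."""
    z = (theta1 - theta2) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bradley_terry_prob(theta1, theta2):
    """P[1 > 2] under Bradley-Terry: the logistic function of theta1 - theta2."""
    return 1.0 / (1.0 + math.exp(-(theta1 - theta2)))

def luce_prob(thetas, chosen=0):
    """Probability of picking `chosen` out of all alternatives under Luce's
    model: a softmax over the scores."""
    weights = [math.exp(t) for t in thetas]
    return weights[chosen] / sum(weights)
```

Note that with only two alternatives, Luce's softmax reduces exactly to the Bradley-Terry logistic probability, while Thurstone's probit curve is close to it but not identical.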

## Inference

The inference problem is then that of inferring the values of the parameters $\theta_i$, given observational data $\mathcal D$ of choices $i$ out of sets of alternatives $\mathcal A$. Bayesian inference would suggest computing $\mathbb P[\theta|\mathcal D]$. But, as is often the case, this approach may be too computationally costly in practice, and approximate Bayesian methods are needed.

Note that in the Luce model, where each datum records the choice of $i$ out of a set $\mathcal A$ of alternatives, the log-likelihood $\log \mathbb P[\mathcal D|\theta] = \sum_{(i, \mathcal A) \in \mathcal D} \left( \theta_i - \log \sum_{j \in \mathcal A} \exp \theta_j \right)$ is a concave function of $\theta$, and it is strictly concave (up to the additive normalization of $\theta$, e.g. $\sum_i \theta_i = 0$) if the comparison graph is strongly connected (for any $i,j$, there exist $k_1, \ldots, k_m$ such that the data contain the comparisons $i \succ k_1$, $k_1 \succ k_2$, ..., $k_m \succ j$). This proves the existence and uniqueness of the maximum-likelihood estimator under this assumption.
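For pairwise data, this concavity means plain gradient ascent suffices to find the maximum-likelihood estimator. A minimal sketch (function name, hyperparameters, and toy data are ours):

```python
import math

def bt_mle(comparisons, n, steps=2000, lr=0.1):
    """Maximum-likelihood Bradley-Terry scores from pairwise comparisons.
    comparisons: list of (winner, loser) index pairs among n alternatives.
    Gradient ascent on the concave log-likelihood; scores are re-centered
    to sum to zero, fixing the additive degree of freedom."""
    theta = [0.0] * n
    for _ in range(steps):
        grad = [0.0] * n
        for w, l in comparisons:
            p_w = 1.0 / (1.0 + math.exp(theta[l] - theta[w]))  # P[w beats l]
            grad[w] += 1.0 - p_w
            grad[l] -= 1.0 - p_w
        theta = [t + lr * g for t, g in zip(theta, grad)]
        mean = sum(theta) / n
        theta = [t - mean for t in theta]
    return theta

# toy data: 0 beats 1 eight times out of ten, likewise 1 against 2
data = [(0, 1)] * 8 + [(1, 0)] * 2 + [(1, 2)] * 8 + [(2, 1)] * 2
scores = bt_mle(data, 3)
```

With this data the MLE satisfies $\theta_0 - \theta_1 = \theta_1 - \theta_2 = \log 4$, since the fitted win probability must match the empirical frequency $0.8$ on each edge.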

## Markov chain for preference learning

NOS12 proposed a Markov chain approach to compute the maximum likelihood estimator, in the same vein as the PageRank algorithm PBMW99. Namely, by an adequate choice of the transition rates on the comparison graph, the stationary distribution of the Markov chain thereby constructed turns out to be proportional to the vector $(\exp \theta_i^\star)$, where $\theta^\star$ is the maximum likelihood estimator.

Note, however, that this approach requires too much data and too much computation if the set of alternatives is combinatorial. In fact, for most recommender systems it may be inapplicable, since most users have never even been exposed to most of the video or music content of platforms like YouTube or Spotify.

## Gaussian process

Perhaps one of the most promising avenues to scalable comparison-based preference learning consists of building on the assumption that, a priori, the intrinsic preference $\theta_1-\theta_2$ between two alternatives $z_1$ and $z_2$ is drawn from a Gaussian process. By then exploiting the similarity between the choice $x = (z_1,z_2)$ and the choice $x' = (z_1',z_2')$, we can generalize from observed data by means of a kernel method.

To implement this approach, we first need a measure of the similarity between two choices $x$ and $x'$. Intuitively, $x$ and $x'$ will be similar if both $z_1 \approx z_1'$ and $z_2 \approx z_2'$. Note that $x$ and $x'$ can also be "anti-similar", if $z_1 \approx z_2'$ and $z_2 \approx z_1'$. Let us call $K(x,x')$ the similarity between $x$ and $x'$. A Gaussian process prior and Bayesian methods then make it possible to infer, from revealed preferences, a posterior over the preferences for other dilemmas ChuGhahramani05a ChuGhahramani05b.

Note that such kernels can be approximately implemented by a vector representation, for instance by some deep neural network $f$. We could then have $K(x,x') = f(x)^T f(x')$; anti-similarity is naturally enforced if $f(z_1,z_2) = -f(z_2,z_1)$, which is guaranteed if $f(z_1,z_2) = g(z_1)-g(z_2)$ for some other vector representation $g$. Such vector representations can also accelerate computation, as opposed to the kernel method, which requires going through all the training data to make a prediction.
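A minimal sketch of such a kernel built from a vector representation $g$ (names and the toy representation are illustrative): swapping the two alternatives of a pair flips the sign of the kernel, which is exactly the anti-similarity property.

```python
def preference_kernel(x, x_prime, g):
    """Kernel between two choices induced by a representation g of the
    alternatives: K((z1, z2), (z1', z2')) = <g(z1) - g(z2), g(z1') - g(z2')>.
    Antisymmetric by construction: swapping a pair flips the sign."""
    (z1, z2), (z1p, z2p) = x, x_prime
    f = [a - b for a, b in zip(g(z1), g(z2))]
    fp = [a - b for a, b in zip(g(z1p), g(z2p))]
    return sum(a * b for a, b in zip(f, fp))

# toy representation of a scalar alternative as a 2-dimensional feature
g = lambda z: [z, z * z]
k_same = preference_kernel((1.0, 2.0), (1.0, 2.0), g)     # similar pairs
k_swapped = preference_kernel((1.0, 2.0), (2.0, 1.0), g)  # anti-similar pairs
```

Here `k_swapped == -k_same`, illustrating that a choice and its reversed version are maximally anti-similar.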

## Connection to sports

The same comparison-based framework underlies rating systems in sports: the Elo rating system in chess Elo78, and statistical models of football matches MKFG16.