Revision as of 10:34, 23 February 2020
Social choice is the study of how to elicit, aggregate and explain human preferences for collective decision-making. This is critical to AI ethics, as we will need to decide collectively on which ethics an AI will follow. For instance, what video should be recommended by the YouTube algorithm when a user queries "Trump", "vaccine" or "social justice"?
Background
Social choice theory arguably started with a remarkable memoir by Condorcet Condorcet1785. He argued that if one alternative is preferred to every other alternative by a majority, then it should be selected. This is the Condorcet principle (see MrPhi17aFR, MrPhi17bFR).
Unfortunately, social choice theory is plagued with impossibility results, like the Condorcet paradox (PBSInfinite17, MicMaths15FR), Arrow's impossibility theorem (Arrow50, PBSInfinite17, S4A17bFR) and the Gibbard-Satterthwaite theorem (Wandida17, S4A17bFR). Different voting systems yield different winners (StatChat16FR, S4A17aFR).
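The Condorcet paradox can be made concrete with three voters and three alternatives; the ballot profile below is the standard illustrative example, sketched in Python:

```python
# Illustrative 3-voter profile: each ballot ranks alternatives from best to worst.
ballots = [("A", "B", "C"),
           ("B", "C", "A"),
           ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a strict majority of ballots ranks x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

# Pairwise majorities form a cycle, so no Condorcet winner exists.
assert majority_prefers("A", "B")
assert majority_prefers("B", "C")
assert majority_prefers("C", "A")
```

Every alternative loses some pairwise majority contest, which is precisely why the Condorcet principle alone cannot always single out a winner.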
Today's most convincing social choice mechanisms are probably approval voting (BramsFishburnBook07, CGPGrey), majority judgment (BalinksiLarakiBook11, ScienceÉtonnante16FR, S4A17cFR) and the randomized Condorcet voting system (Hoang17, S4A17dFR, S4A17eFR).
Harsanyi's Utilitarian Theorem
Harsanyi55 proved that weighted sums of individuals' utilities are the only social choice mechanisms that aggregate Von Neumann-Morgenstern preferences to yield Von Neumann-Morgenstern group preferences in such a way that, if every individual of the group is indifferent between probabilistic outcomes [math]p[/math] and [math]q[/math], then so is the group. This is a compelling argument for a simple addition of individuals' preferences.
To prove it, consider a finite set of alternatives. Denote by [math]u_{ij}[/math] the utility of individual [math]i[/math] for alternative [math]j[/math], and collect these into a matrix [math]u[/math]. Individual [math]i[/math]'s utility for a lottery [math]p[/math] (a probability distribution over alternatives) then equals [math](up)_i[/math]. Denoting by [math]v_j[/math] the group's utility for alternative [math]j[/math], the indifference property says that [math]u(p-q) = 0[/math] implies [math]v^T (p-q) = 0[/math]. Since this holds for all lotteries [math]p, q[/math], linear algebra implies that, up to a constant added to every coordinate, [math]v[/math] belongs to the vector space spanned by the rows [math]u_i[/math]. In other words, [math]v = a + \sum_i w_i u_i[/math], for nonnegative weights [math]w_i[/math] (and an arbitrary constant [math]a[/math]); the nonnegativity of the weights comes from requiring that the group preference respect unanimous preferences.
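This linear-algebraic argument can be sanity-checked numerically. The sketch below uses made-up toy utilities for 2 individuals and 4 alternatives: it builds a weighted-sum group utility and verifies that unanimous indifference between two lotteries implies group indifference.

```python
import numpy as np

# Hypothetical toy numbers: 2 individuals, 4 alternatives.
u = np.array([[1.0, 2.0, 0.0, 3.0],
              [0.0, 1.0, 2.0, 1.0]])   # u[i, j]: utility of individual i for alternative j
w = np.array([0.5, 1.5])               # nonnegative weights
a = 0.3                                # arbitrary constant
v = a + w @ u                          # group utilities v_j = a + sum_i w_i u_{ij}

# Find a direction d with u @ d = 0 and sum(d) = 0, i.e. a perturbation that
# leaves every individual indifferent and keeps p, q valid distributions.
A = np.vstack([u, np.ones(4)])
_, _, Vt = np.linalg.svd(A)
d = Vt[-1]                             # null-space direction (A is 3x4 of rank 3)
d *= 0.05 / np.abs(d).max()            # keep the perturbation small

p = np.full(4, 0.25) + d               # lottery p
q = np.full(4, 0.25) - d               # lottery q

assert np.allclose(u @ (p - q), 0)     # every individual is indifferent...
assert abs(v @ (p - q)) < 1e-9         # ...and so is the group
```

The group indifference follows because [math]v^T(p-q) = a \cdot \mathbf{1}^T(p-q) + \sum_i w_i u_i^T (p-q)[/math], and both terms vanish.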
However, there are major caveats to applying this mechanism in practice. First, the scaling of different individuals' utility functions (or, equivalently, of their weights) remains to be settled, and there is no obvious way to do so. Most importantly, this social choice mechanism is not incentive-compatible: if it were implemented, individuals would have incentives to exaggerate their preferences (or to tell their representatives to do so). Finally, such expected utility maximization will likely turn into the maximization of some expected proxies, which would then be prone to Goodhart's law.
The (Huge) Flaw of Classical Social Choice
Unfortunately, such approaches are limited because they can only handle a reasonably small number of alternatives. If we are to design AI ethics collectively, we need to choose a code (or, say, guidelines or texts of law). Yet there are combinatorially many such codes! If we consider 1,000-line codes, this would represent ~[math]2^{10,000}[/math] alternatives. Classical voting systems won't do the trick.
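To get a feel for the blow-up, assume (illustratively) that each of the 1,000 lines can be written in about 2^10 = 1,024 different ways. The number of distinct codes is then 2^10,000, a number with over 3,000 decimal digits:

```python
# Illustrative assumption: 1,000 lines, ~2**10 variants per line.
num_codes = (2 ** 10) ** 1000          # = 2**10000 distinct codes
print(len(str(num_codes)))             # number of decimal digits: 3011
```

No electorate can rank, or even enumerate, that many alternatives.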
Now, there are already lots of results in social choice for structured combinatorial sets of alternatives, mostly derived from auction theory (the VCG mechanism NRTV07 S4A17fFR, Myerson's auction Myerson81 S4A17gFR, Gale-Shapley GaleShapley62 Numberphile14 S4A17hFR...). Most impressively, in a series of papers (CDW11, CDW12a, CDW12b, CDW13a and CDW13b), Cai, Daskalakis and Weinberg proved that the polynomial tractability of a Bayesian social choice approximation problem (i.e. with incentive-compatibility constraints) is equivalent to that of the full-information problem with an additional social welfare term to be optimized (see S4A17fFR).
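As a concrete instance of the VCG idea, the single-item case reduces to a second-price auction: the highest bidder wins and pays the externality they impose on the others, i.e. the second-highest bid. A minimal sketch (the bid values are made up):

```python
def vcg_single_item(bids):
    """Single-item VCG: the highest bidder wins and pays the second-highest bid,
    which is the welfare the other bidders lose because of the winner's presence."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    payment = max(b for i, b in enumerate(bids) if i != winner)
    return winner, payment

winner, payment = vcg_single_item([3.0, 7.0, 5.0])
assert winner == 1 and payment == 5.0
```

Because the payment does not depend on the winner's own bid, truthful bidding is a dominant strategy, which is the incentive-compatibility property these mechanisms generalize.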
However, we surely have to also tackle the case of unstructured combinatorial sets of alternatives (also, polytime may be too slow in practice).
Bounds for limited communication complexity
A fascinating result by MPSW19 shows that the worst-case loss of social welfare due to polynomial communication complexity (voters communicate at most log(#alternatives) bits) is unbounded (a distortion of #alternatives^2 for deterministic elicitation+voting, #alternatives for randomized).
There are caveats, though. For one thing, this is a worst-case analysis, while human preferences may be more structured. Also, priors can be invoked. Plus, the authors assumed that the same elicitation was applied to all voters, which is clearly suboptimal. A lot more research on the communication-complexity versus social-welfare tradeoff is definitely desired. This is exciting!
Applications to AI Ethics
Social choice has been argued to be critical to solving the problem of AI ethics (GRTVW16, CSBDK17). In brief, we are unlikely to agree on what ethics to program. However, we might be able to agree on how to agree on some ethics to program, even though we disagree. The trick is to implement some (virtual) democratic voting on moral preferences.
Interestingly, ideas along these lines have already been developed for the cases of autonomous car dilemmas (NGADR+18, UpAndAtom18), kidney transplants (FBSDC18) and food donation (a system called WeBuildAI, LKKKY+19).
Note that in all such applications, the set of alternatives is combinatorially large. The trick to perform voting with limited elicitation from voters is to collect binary-choice-based preferences, and then to extrapolate preferences to other cases using machine learning (with some inductive bias). Another way to interpret this is to consider that voters are substituted by digital surrogates, whose task is to answer just as the voters would. This is somewhat like representative democracy, where voters are replaced by their representatives. But machine learning can enable individualized representatives through customized surrogates!
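The surrogate idea can be sketched as learning a utility function from a voter's binary choices. Everything below — the features, the linear utility, the synthetic data, and the Bradley-Terry-style logistic model — is a hypothetical illustration, not the method of any of the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: alternatives are described by 3 features, and the
# voter's (unknown) utility is linear: u(x) = true_w . x.
true_w = np.array([1.0, -2.0, 0.5])
pairs = [(rng.normal(size=3), rng.normal(size=3)) for _ in range(200)]
# Binary choices: 1.0 if the voter prefers a to b.
labels = [float(true_w @ a > true_w @ b) for a, b in pairs]

# Fit a surrogate by logistic regression on feature differences
# (a Bradley-Terry-style model), via plain gradient descent.
w = np.zeros(3)
for _ in range(2000):
    grad = np.zeros(3)
    for (a, b), y in zip(pairs, labels):
        d = a - b
        p = 1.0 / (1.0 + np.exp(-(w @ d)))   # P(voter prefers a to b)
        grad += (p - y) * d
    w -= 0.05 * grad / len(pairs)

# The surrogate now answers (almost) as the voter would on the observed pairs.
agreement = np.mean([(w @ (a - b) > 0) == (y == 1.0)
                     for (a, b), y in zip(pairs, labels)])
assert agreement > 0.9
```

The interesting part is generalization: the fitted surrogate can answer binary choices the voter was never asked, which is exactly what makes voting over combinatorially large alternative sets conceivable.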
To build trust in the surrogates, WeBuildAI proposes that voters test their surrogates and, if need be, replace them with a computational model of their own. The authors show that such interactions build voters' trust. They also propose an interpretability framework, in which voters are shown the implications of their surrogates' votes.
Now, what Lê can't wait for is for all such frameworks to be applied to problems that really matter, because they influence billions of people. Yes, Lê is (again!) talking about the recommender algorithms of social media platforms like YouTube. How should hate speech be moderated? What should be shown to someone who wants to learn about climate change? Should there be an additional tax on, say, car advertisements? Should angering videos be made less viral?
Lê would be thrilled to see social choice theory applied to such critical moral questions.
Scaled voting
One frequent remark is to question whether we really can (and should) agree on ethical issues. For instance, ADKSH+18 showed that Japanese respondents prefer to save pedestrians, while Chinese respondents prefer to save car passengers. Should we really enforce a common ethics worldwide?
Well, we probably don't need to. Cars could be programmed to save pedestrians in Japan and car passengers in China. They could be made to defend freedom in the US and baguette in France. While humans usually have preferences about what happens elsewhere in the world, they usually have stronger preferences about what happens near their home. This probably is something that should be considered when designing voting-based ethics.
One proposal to reflect such nuances is quadratic voting (LalleyWeyl18), which can be made secure (ParkRivest16). In quadratic voting, a voter who wants their vote to weigh n times more must pay n^2. This guarantees (asymptotic) efficiency (a utilitarian outcome) and incentive-compatibility. But quadratic voting only applies to 2-alternative votes (typically status quo vs a new law) and is manipulable by collusion.
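The efficiency intuition behind quadratic voting can be checked with a tiny computation: a voter who values each additional vote at v and pays n^2 for n votes maximizes v·n − n^2, which peaks at n = v/2, so the number of votes bought is proportional to the intensity of the preference. (The numbers below are made up.)

```python
def best_vote_count(v, max_votes=100):
    """Number of votes maximizing the voter's net utility v*n - n**2."""
    return max(range(max_votes + 1), key=lambda n: v * n - n ** 2)

# Votes bought scale linearly with preference intensity v (n* = v/2).
assert best_vote_count(6) == 3
assert best_vote_count(10) == 5
```

Summing such proportional-to-intensity votes is what recovers the utilitarian outcome in the large-electorate limit.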
Another interesting point about multidimensional voting is that the (geometric) median is strategy-proof for voters with a peaked preference and a valuation that decreases with the distance to the peak. The geometric median is particularly suited for, say, determining budget allocations through social choice. Weirdly, we don't know of a neat paper on this, though the use of the sum-of-distances minimizer is well-known (citation needed).
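The geometric median (the sum-of-distances minimizer) can be computed with Weiszfeld's algorithm, an iteratively re-weighted averaging scheme. A minimal numpy sketch (the sample points are arbitrary):

```python
import numpy as np

def geometric_median(points, iters=200, eps=1e-12):
    """Weiszfeld's algorithm: repeatedly average the points with weights
    1/distance, converging to the sum-of-distances minimizer."""
    points = np.asarray(points, dtype=float)
    x = points.mean(axis=0)                    # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        w = 1.0 / np.maximum(d, eps)           # guard against division by zero
        x = (points * w[:, None]).sum(axis=0) / w.sum()
    return x

# Two voters' ideal budgets at the origin outweigh one outlier at (10, 0):
m = geometric_median([[0.0, 0.0], [0.0, 0.0], [10.0, 0.0]])
assert np.linalg.norm(m) < 1e-3                # the median sits at the origin
```

Note how, unlike the centroid (which lands at (10/3, 0) here), the geometric median is robust to the outlier, which is part of why distance-based medians are attractive for budget aggregation.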
Preferences versus volitions
It's been argued (Yudkowsky04, Tarleton10, Soares15, Hoang19) that we surely should not aggregate today's human moral preferences, because of cognitive biases (KahnemanBook11). Mostly, our preferences are inconsistent, manipulable via framing, time-dependent, subject to addictions, and so on. We are likely to regret today's claimed preferences in the future, or as soon as we better understand their consequences. Instead, it is argued, we should program human volitions, which correspond to what we would prefer to prefer, instead of what we simply prefer.
Unfortunately, a lot more research is needed to better formalize and analyze the concept of volition, and how it diverges from preferences. One fruitful path may be to analyze the difference between what is learned through inverse reinforcement learning and through (well-framed) elicitation. See volition for a lot more discussion of this problem.