Social Choice's problem with strategic/irrational(?) Altruism and Spite
-
In the traditional formulation of utility and social choice, a rational agent is imagined as having a preference ordering over all possible states, and the "Social Choice" is whichever option best aggregates those orderings across a group of agents to achieve a utopian choice or something.
But a very important and common way that people are irrational is altruism and spite. That is, my utility function is partly a function of someone else's utility function, and theirs of mine. What's important here for modeling is that if you have a group of people, everyone's utility functions/preferred orderings will be systematically correlated, coupled together by a system of interdependent equations. Furthermore, altruistic and spiteful behavior depends on previous relationships over time, expectations for the future, and system design.
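As a minimal sketch of what that coupling looks like (the payoffs and weights are my own toy numbers, not from any real model): suppose two agents each get an intrinsic payoff and also weigh each other's overall utility, so the effective utilities are the solution of a small linear system.

```python
import numpy as np

# Toy model: u_A = v_A + a_AB * u_B and u_B = v_B + a_BA * u_A, where v_i is the
# intrinsic payoff and a_ij is how much agent i weighs agent j's utility
# (positive = altruism, negative = spite). Rearranged into matrix form and solved.
v = np.array([10.0, 0.0])            # intrinsic payoffs: A received a widget, B did not
a_AB, a_BA = 0.3, 0.3                # assumed mutual altruism weights
coupling = np.array([[1.0, -a_AB],
                     [-a_BA, 1.0]])
u = np.linalg.solve(coupling, v)     # effective utilities after the feedback loop
print(u)                             # A's windfall spills over to B, and partly back to A
```

Flip the signs of the weights and the same windfall makes B strictly worse off, which is the spite case.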
For a one-shot question this doesn't really change anything and is already priced into many models.
But if you are trying to design a system to fairly distribute a good, say a government distributing public goods, there are some huge modeling effects to consider. If a government gives out one widget there is a utility update in three places:
- the intrinsic utility of receiving that widget
- an update to everyone else's utility functions/preference orderings, via altruism or spite
- a change in everyone's strategic thinking, which changes observed behavior
Let's consider this example of payouts to agents <A, B, C>:
<10, 10, 0>
<10, 0, 10>
<0, 10, 10>
<5, 5, 5>
<0, 0, 0>
If the agents are rational and purely self-interested, there is no Condorcet winner: each of the three unequal splits beats <5, 5, 5> and <0, 0, 0>, but none of them beats the other two (each pairwise comparison among them is a tie).
With altruistic agents (depending on the altruism parameters) there is a single Condorcet winner, or a cycle driven by every agent wanting everyone else to have more.
With agents who are strongly mutually spiteful, <0, 0, 0> is the Condorcet winner.
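Here is a rough sketch of that comparison. I'm assuming a deliberately simple, made-up utility model: each agent values its own payout with diminishing returns (a square root) and adds alpha times everyone else's value, where alpha > 0 is altruism, alpha = 0 is pure self-interest, and alpha < 0 is spite; a pairwise-majority check then looks for a Condorcet winner.

```python
import math

# The five candidate payouts to agents <A, B, C> from the example above.
allocations = [(10, 10, 0), (10, 0, 10), (0, 10, 10), (5, 5, 5), (0, 0, 0)]

def utility(agent, alloc, alpha):
    """Own payout valued with diminishing returns, plus alpha times others' value."""
    value = [math.sqrt(x) for x in alloc]
    return value[agent] + alpha * (sum(value) - value[agent])

def majority_prefers(x, y, alpha, n_agents=3):
    """+1 if a strict majority prefers x over y, -1 for y over x, 0 for a tie."""
    votes = sum(
        (utility(i, x, alpha) > utility(i, y, alpha))
        - (utility(i, x, alpha) < utility(i, y, alpha))
        for i in range(n_agents)
    )
    return (votes > 0) - (votes < 0)

def condorcet_winner(allocs, alpha):
    """An allocation that beats every other one head-to-head, if it exists."""
    for x in allocs:
        if all(majority_prefers(x, y, alpha) > 0 for y in allocs if y != x):
            return x
    return None

for label, alpha in [("selfish", 0.0), ("altruistic", 1.0), ("spiteful", -1.5)]:
    print(label, condorcet_winner(allocations, alpha))
```

Under these assumptions the selfish case prints None (the three unequal splits deadlock), the altruistic case picks <5, 5, 5>, and strong mutual spite picks <0, 0, 0>. With a purely linear altruism term the <5, 5, 5> result goes away, so the exact outcome really does hinge on the altruism parameters.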
So if you internalize spite/altruism into your social choice model, irrationally (or rationally?) spiteful agents might willingly choose nuclear war as the efficient social choice.
I think from a system-design perspective it is important to treat the utility/preference orderings over the thing being distributed differently from the second-order effects of how people feel about each other's feelings. One important finding here is that altruistic/spiteful behavior is itself a strategy trying to selfishly optimize for a reward. Altruism and spite are learned and evolved behaviors which are evolutionarily stable strategies, and are even learnable by perfectly rational agents. The key is that they only make sense in repeated games and in the context of a meta-strategy against other agents.
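A tiny repeated-game sketch of that claim (the numbers and the adaptive-proposer rule are my own toy assumptions): a responder who "spitefully" rejects low offers eats a short-term loss, but over repeated rounds the punishment trains the proposer to concede, so the spiteful policy ends up out-earning the always-accept one, while in a one-shot game it just loses money.

```python
POT, ROUNDS = 10, 50

def responder_total(min_acceptable, rounds=ROUNDS, pot=POT):
    """Repeated ultimatum game: the proposer starts with a lowball offer and
    raises it by 1 whenever it gets rejected. Returns the responder's total payoff."""
    offer, total = 1, 0
    for _ in range(rounds):
        if offer >= min_acceptable:
            total += offer               # accepted: responder pockets the offer
        else:
            offer = min(offer + 1, pot)  # rejected: both get nothing this round
    return total

print("always-accept responder:", responder_total(min_acceptable=1))  # 50 over 50 rounds
print("spiteful responder:     ", responder_total(min_acceptable=5))  # 230 over 50 rounds
```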
Spite/altruism can also be artifacts of conscious strategic play. Disproportionate retaliation and third-party retaliation against non-compliance are bargaining strategies that exist even without voting. Sending a message can be a utility-maximizing strategy in some contexts.
I guess there are some complex 2nd and 3rd order effects of switching from rational agents to real people.
-
If you have an allocation game dividing a budget, a pizza, etc., this game is zero-sum in the sense that a slice of pizza you get is a slice that isn't going to me.
If you are negotiating with an agent who you are unsure is altruistic, rational, or spiteful, there is a strategic incentive to misrepresent yourself as a spiteful agent. An altruistic agent gets less than their fair share, and a spiteful agent gets more than the rational agent if they can credibly convince other agents they are spiteful: it is worth it to pay off a spiteful agent rather than provoke it.
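A one-line version of that incentive, under acceptance thresholds I am simply assuming for each apparent type: if the proposer offers just enough to get the deal accepted, being believed spiteful is worth real money, and being believed altruistic costs you.

```python
POT = 10
# Assumed minimum share each *apparent* type will accept in a one-shot split.
believed_threshold = {
    "altruistic": 0,  # happy as long as the deal goes through at all
    "rational":   1,  # accepts any positive amount over nothing
    "spiteful":   5,  # would rather blow up the deal than take less than half
}
for apparent_type, threshold in believed_threshold.items():
    offer = threshold  # proposer concedes exactly as much as it believes it must
    print(f"{apparent_type:>10}: responder gets {offer}, proposer keeps {POT - offer}")
```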
So if you are trying to optimize for the social choice, there's a big problem with spiteful agents. With limited information, all agents are incentivized to misrepresent themselves as spiteful and to over-represent how much they care about minor concerns. And when making a social choice with limited resources there is a genuinely zero-sum nature to the problem. Rational agents operating in a zero-sum environment will behave spitefully (your having it is negatively correlated with my having it).
So with that irrationality baked in, even if we could magically find what the social choice was, with spiteful agents the social choice isn't very sociable. And spiteful agents should be common under resource constraints. This correlation between individuals' utility functions will also heavily constrain the kinds of group choice sets that can exist.