Wednesday, June 27, 2007

How Altruism Matters in Game Theory

I've been talking a lot about game theory today, especially as it relates to the political arena, where, I think, it's a complicated but natural fit. But while I'm at it I thought I'd flag up this interesting little tidbit.


One of the keystones of game theory is that, when you're trying to figure things out, you're dealing with homo economicus. The Economic Man, the one who will act in a rational and self-interested manner. The perfect player, in other words, who'll always act in her own interests to win the game and not for some kind of greater, common good.


Naturally, then, if you dive into game theory you might think that, logically, the only way to play is to be a greedy, selfish bastard. And, of course, that everyone else is as nasty, amoral, and unscrupulous as you when sitting on the other side of the game table. And therefore come to the conclusion that human beings are messy, illogical creatures who've created things like mercy and kindness and civilization out of sheer, dumb luck and not any kind of rational, deliberative process.


It's the reason why, in talking about a game of multiplayer Chicken, I avoided the idea of the Volunteer's Dilemma. Because that's a game where the best and worst move, at the same time, is to do nothing. It's a game that seems to prove, in so many words, that there's no profit in being altruistic.


That's not entirely true, however. Because another keystone of game theory is reducing complex events into simple games. In a single round of a Prisoner's Dilemma or, say, the Pirate's Game, it might well pay to be a complete bastard. But as those games are iterated over multiple rounds, especially if the participants remember the events of previous rounds, then the benefits of attempting to screw someone over fall off drastically.
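To see why a single round rewards the complete bastard, it helps to lay out the payoffs. The numbers below are just the conventional textbook ones for the Prisoner's Dilemma, not anything specific to this post: whatever the other player does, defecting earns you strictly more.

```python
# Standard Prisoner's Dilemma payoffs, keyed by (my move, their move),
# valued as (my points, their points). "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I'm the sucker
    ("D", "C"): (5, 0),  # I exploit a cooperator
    ("D", "D"): (1, 1),  # mutual defection
}

# Compare my payoff for cooperating vs. defecting, against each
# possible opponent move. Defection wins in both columns.
for their_move in ("C", "D"):
    mine_if_cooperate = PAYOFF[("C", their_move)][0]
    mine_if_defect = PAYOFF[("D", their_move)][0]
    print(their_move, mine_if_cooperate, mine_if_defect)
```

Defecting pays more whether the opponent cooperates (5 beats 3) or defects (1 beats 0), which is exactly why a single, memoryless round seems to vindicate the selfish player.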


For example, a researcher named Robert Axelrod organized a Prisoner's Dilemma tournament with a few simple rules and invited his colleagues to submit game strategies which would be played against each other over and over again. He was looking to see what would happen in so-called Iterated Prisoner's Dilemma games (also known, by the way, as Peace-War), ones where multiple rounds are played and the memory of past rounds is carried forward, and which strategies would be the most successful over any number of rounds. What he found, which was published in a book called The Evolution of Cooperation, was that, over time, those strategies which were altruistic tended to succeed in maximizing their game states while those which were guided by self-interest tended to fail. While what happens when you match two strategies that are designed to benefit the opponent should be obvious, the rest is somewhat counter-intuitive. Two greedy strategies matched against each other would be cannibalistic, each driving the scores of the other lower. And a greedy strategy and an altruistic one matched against each other would result in the altruistic program (at least the best ones) adapting to and countering the self-interested strategy, to the benefit of both. Mr. Axelrod, I believe, presents this as proof that altruistic qualities can arise naturally through evolutionary selection.
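A toy version of that tournament is easy to sketch. To be clear, this isn't Axelrod's actual code or his real field of entries, just four illustrative strategies of my own choosing (always cooperate, always defect, Tit for Tat, and a grudge-holder) in a round robin that, like Axelrod's, includes each strategy playing a copy of itself:

```python
import itertools

# Standard Prisoner's Dilemma payoffs: (player 1 points, player 2 points).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# Each strategy sees its own history and the opponent's, returns "C" or "D".
def always_cooperate(me, opp):
    return "C"

def always_defect(me, opp):
    return "D"

def tit_for_tat(me, opp):
    return opp[-1] if opp else "C"   # cooperate first, then mirror

def grudger(me, opp):
    return "D" if "D" in opp else "C"  # cooperate until crossed, then never again

def play_match(s1, s2, rounds=100):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

strategies = {"always_cooperate": always_cooperate,
              "always_defect": always_defect,
              "tit_for_tat": tit_for_tat,
              "grudger": grudger}

scores = {name: 0 for name in strategies}
for (n1, f1), (n2, f2) in itertools.combinations_with_replacement(
        strategies.items(), 2):
    s1, s2 = play_match(f1, f2)
    if n1 == n2:
        scores[n1] += s1  # self-play: count one copy's score
    else:
        scores[n1] += s1
        scores[n2] += s2

print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

Even in this tiny field the result tracks Axelrod's: the nice-but-retaliatory strategies (Tit for Tat and the grudger) tie at the top, while always_defect, despite fleecing the unconditional cooperator, finishes dead last, because against anyone with a memory its early gains collapse into a hundred rounds of mutual defection.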


Mr. Axelrod, by the way, also found that the statistically best strategy was one known as “Tit for Tat,” which was submitted by Anatol Rapoport. Very simply, it begins by co-operating and then, in each subsequent round, does whatever the opponent did the round before. It retaliates against aggressive strategies while rewarding co-operative ones. It's, at the same time, nice and tough. It offers a hand up to the opponent but doesn't fold when presented with a threat. It's something so simple even a child could understand it. And it's so important because it's a strategy that shows the strength of forgiveness. Because it says, “I will work with you, I will help you, I will help myself, we will help each other; if you trespass I will oppose you as strongly as you oppose me but no more; if you return to good behavior then I will work with you again, I will help you again.” The iron hand in the velvet glove. It's so sublime, so powerful, a strategy because it co-opts, subverts, and destroys an enemy through benevolence.
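You can watch that forgiveness in action in a few lines. Here Tit for Tat faces a scripted opponent (a sequence I made up purely for illustration) who defects exactly once:

```python
def tit_for_tat(opp_history):
    # Cooperate on the first move, then mirror the opponent's last move.
    return opp_history[-1] if opp_history else "C"

opponent_moves = ["C", "D", "C", "C"]  # defects once, in round 2

history, my_moves = [], []
for opp_move in opponent_moves:
    my_moves.append(tit_for_tat(history))
    history.append(opp_move)

print(my_moves)
```

Tit for Tat answers the lone defection with exactly one defection of its own, in the following round, and then goes straight back to co-operating: retaliation as strong as the offense but no more, and full forgiveness the moment good behavior returns.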


Which, again, might seem counter-intuitive. Just like the Volunteer's Dilemma, where one player can make a sacrifice to benefit every player. There, the more players you have the less incentive each has to play and the more likely it is that no one will. Because the more people around the table the more you can believe that some other sucker will make the foolish choice to be altruistic. But experimental evidence (which is far from conclusive, and of which my brief searching couldn't turn up a good example) seems to suggest otherwise. Even rational players can be motivated by something other than pure self-interest. And sometimes, the best move to make, is the one that leaves everyone better off.
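As for the Volunteer's Dilemma itself, the standard textbook analysis, a symmetric mixed-strategy equilibrium, actually bears out the intuition about group size. The numbers below are arbitrary, just a volunteering cost of 1 against a shared benefit of 10, but the shape of the result doesn't depend on them: as the table gets bigger, each player volunteers less often, and the chance that nobody volunteers at all creeps upward.

```python
# Volunteer's Dilemma, symmetric mixed equilibrium: each of n players
# volunteers with probability p such that she's indifferent between
# volunteering (benefit - cost) and free-riding
# (benefit * P(someone else volunteers)). Solving the indifference
# condition cost = benefit * (1 - p)**(n - 1) gives
#   p = 1 - (cost / benefit) ** (1 / (n - 1))
# so the probability that no one at all volunteers is
#   (1 - p) ** n = (cost / benefit) ** (n / (n - 1)).

def prob_no_volunteer(n, cost=1.0, benefit=10.0):
    return (cost / benefit) ** (n / (n - 1))

for n in (2, 5, 20, 100):
    print(n, round(prob_no_volunteer(n), 4))
# Rises with n, approaching cost/benefit in the limit of a huge table.
```

With two players the chance of total inaction is tiny; with a hundred it has climbed nearly all the way to the cost-benefit ratio itself, which is the "some other sucker will do it" effect in mathematical form.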
