Game theory provides notation that helps describe some sorts of moral claims precisely. Of particular interest, it describes two conditions that coordination problems among agents with competing desires might reach: Nash equilibria and Pareto optima. Many Nash equilibria are not Pareto optimal, and many Pareto optima are not Nash equilibria. For every Nash equilibrium that is not a Pareto optimum, there exists a Pareto optimum that is at least as good for all participants as the Nash equilibrium and better for at least some of the participants than the Nash equilibrium. Moreover, the general structure of the game makes the Nash equilibrium a more stable condition than the Pareto optimum, so the natural final state of such a game (provided it reaches conditions sufficiently close to a non-Pareto-optimal Nash equilibrium) is strictly worse than some theoretically attainable condition that goes unattained in practice only because of the behavior of the participants in the game.
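To make the distinction concrete, here is a minimal sketch (the payoff numbers are my own illustrative choice of a Prisoner's-Dilemma-style game, not anything from the argument above) that enumerates the pure-strategy Nash equilibria and the Pareto-optimal outcomes of a two-player game, showing that the only Nash equilibrium is the only outcome that is not Pareto optimal:

```python
from itertools import product

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff); "C" = cooperate, "D" = defect
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(profile):
    """No player can gain by unilaterally deviating from this profile."""
    r, c = profile
    row_ok = all(payoffs[(r2, c)][0] <= payoffs[(r, c)][0] for r2 in actions)
    col_ok = all(payoffs[(r, c2)][1] <= payoffs[(r, c)][1] for c2 in actions)
    return row_ok and col_ok

def is_pareto_optimal(profile):
    """No other outcome is at least as good for both players and strictly better for one."""
    p = payoffs[profile]
    return not any(
        q[0] >= p[0] and q[1] >= p[1] and q != p
        for other, q in payoffs.items() if other != profile
    )

for profile in product(actions, repeat=2):
    print(profile, payoffs[profile],
          "Nash" if is_nash(profile) else "",
          "Pareto-optimal" if is_pareto_optimal(profile) else "")
# ("D", "D") is the only Nash equilibrium, and it is also the only outcome
# that is NOT Pareto optimal: ("C", "C") is better for both players.
```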
I assert that anyone who is smart enough to understand game theory as it applies to the situations they are dealing with (which, in many cases, requires somewhat more intelligence than simply understanding it well enough to push the symbols and do the math, though that is often a prerequisite) will concede that a game which has the same possible states assigned to the same players, but has slightly different rules so that all achievable Nash equilibria are Pareto optimal, is a better game than one whose rules permit the nearest Nash equilibrium to be something other than Pareto optimal. This is a very precise way of saying that, if the incentive structures can be rearranged so that some of the participants benefit, none of the participants are harmed, and you have to let the game play out to determine whether you benefit or merely do as well as you would have done without the rule change, then all of the participants, no matter how risk averse they may be, will agree to the rule change unless they are simply insufficiently intelligent to see that the rule change cannot possibly harm them but can possibly benefit them.
(Side note for the truly pedantic: The way I've phrased it, the rule is slightly too weak. It is always true, but a stronger version of the rule would also always be true and would apply to a slightly wider variety of cases. The stronger version is much harder to state precisely, so I presented the weaker version instead. The strong version acknowledges that in some games it is not possible to change the rules so that all achievable Nash equilibria are Pareto optimal, but in many of these games, some potential rulesets give Nash equilibria that are closer to being Pareto optimal than the achievable Nash equilibria produced by other rulesets. In these cases, the rulesets that come closest to producing Pareto-optimal Nash equilibria are better than the rulesets that fail to achieve as much Pareto efficiency in their equilibrium conditions. I don't know any way to phrase this rule precisely in words without resorting to a definition that includes variables, which is simply rude to do in a blog post. In any game which includes randomization, future outcomes are not predictable, so the consideration can be extended even further. People should be able to agree to rules that increase their net expected outcomes even when it is statistically possible that the rule will hurt the ultimate outcome they face. That is: If a game includes enough randomization that more than one equilibrium condition is attainable, people ought to be able to agree to a rule change that improves several of the attainable equilibrium conditions from their perspective, even if it doesn't improve all of them, as long as it improves enough of them by a large enough margin that the net expected outcome for each participant is an improvement... but at this point, you need to treat each participant's risk aversion as a separate consideration from each participant's expected outcome, because the various possible Nash equilibria have different probabilities of being reached.)
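As a toy illustration of the expected-outcome version of the rule (the equilibria, probabilities, payoffs, and the square-root utility below are all my own assumptions): a rule change that improves some attainable equilibria and leaves the rest unchanged raises a participant's expected outcome, and a risk-averse participant can run the same comparison over a concave utility of the payoffs.

```python
from math import sqrt

# Hypothetical numbers: three attainable equilibria, reached with these
# probabilities, and one participant's payoff at each under the old and new rules.
probabilities = [0.5, 0.3, 0.2]
old_payoffs   = [4, 2, 6]   # payoffs at the equilibria under the old rules
new_payoffs   = [5, 4, 6]   # the rule change improves two equilibria, leaves one alone

expected_old = sum(p * x for p, x in zip(probabilities, old_payoffs))   # 3.8
expected_new = sum(p * x for p, x in zip(probabilities, new_payoffs))   # 4.9
print(expected_old, expected_new)   # the change raises this participant's expected outcome

# A risk-averse participant evaluates a concave utility of the payoff instead,
# e.g. u(x) = sqrt(x), and compares expected utilities rather than expected payoffs.
expected_utility_old = sum(p * sqrt(x) for p, x in zip(probabilities, old_payoffs))
expected_utility_new = sum(p * sqrt(x) for p, x in zip(probabilities, new_payoffs))
print(expected_utility_old, expected_utility_new)
```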
(Additional side note to answer a possible objection of wannabe pedants: What if the original game under consideration is a deterministic, solved game? You can postulate a rule change R that would move the game to a Pareto optimum that benefits some but not all of the participants. Why should the people who know in advance that they won't benefit from R agree to R? The answer is that they shouldn't, but, in the event that R exists, there is also a ruleset R′ that would benefit everyone. In particular, suppose R benefits one participant by the amount x. There are plenty of infinitely sub-dividable abstract things that can be distributed within a game, some of which are no more valuable in total than x is to the player who benefits by receiving x. [For example: Add randomization into the game, and give each participant some non-zero chance of receiving x instead of the person who would have received it under R. Simpler solutions exist if x is itself sub-dividable or if x is valued in a currency that is itself infinitely sub-dividable (which any currency can be made to be).] Add a rule that says that, once the game plays out under R enough for a player to achieve x, x enters a lottery to determine who actually gets it. What I've described does not tell you which Pareto optimum the game should achieve, merely that Pareto optima are better than Nash equilibria that are not Pareto optimal. [For example, not everyone has to be given the same chance of getting x. The person who would receive it under R might get a 50% chance of receiving x while everyone else gets a 1/(2(n-1)) chance of receiving x, where n is the number of players.] There is a new meta-game that describes negotiating the rule change, with an infinite regression of meta-meta-games for negotiating the rules of those games. Throughout this whole infinite regression, you never lose the condition that says that everyone intelligent enough to understand what's going on recognizes that the game in which all achievable Nash equilibria are Pareto optimal is better than one in which at least one achievable Nash equilibrium is not a Pareto optimum. You just gain a whole lot of complexity... A lot of heated moral arguments are about negotiating the rule changes. People know that the current state of things permits a tragedy of the commons, but they realize that different ways to resolve the tragedy favor different parties to different extents, and they argue vehemently about why they are morally entitled to receive a larger share of the expected gain from negotiating the rule change than anyone else gets. The moral rule I have postulated is (mostly) agnostic about this question. Switching the rules so that the Nash equilibrium moves to a Pareto optimum makes the game better, but switching it so that it moves to the best Pareto optimum for you, your friends, and your equals does not necessarily make it a better game than switching it so that it moves to the best Pareto optimum for me, my friends, and my equals. And we both have an incentive to claim that our side is right in our advocacy of a particular improvement that differs from the particular improvement another side wants.)
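As a quick check of the arithmetic in the lottery example (the 1/2 and 1/(2(n-1)) figures come from the example above; the particular n and x below are just for illustration): the designated player's chance plus the other n-1 players' chances sum to one, and every player ends up with a strictly positive expected share of x.

```python
# Sanity check of the lottery probabilities from the example above.
n = 5                      # number of players (any n >= 2 works); illustrative value
x = 100.0                  # the benefit being raffled off; illustrative value

p_designated = 0.5                     # the player who would have received x under R
p_other = 1.0 / (2 * (n - 1))          # each of the remaining n - 1 players

total = p_designated + (n - 1) * p_other
print(total)               # 1.0: the chances form a valid lottery

expected_shares = [p_designated * x] + [p_other * x] * (n - 1)
print(expected_shares)     # every player has a strictly positive expected gain from x
```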
Innate human ethics has at least one feature that I believe can be explained as the result of evolved optimization "discovering" through trial and error this moral rule that will be naturally discovered by any sufficiently intelligent process or agent. (Natural selection, like all optimization engines, is a system that produces results consistent with the application of intelligence whether or not we would call the system itself intelligent.) The human impulse to seek justice (shared with some other animals, certainly chimpanzees, but probably most social animals) is a rule that helps constrain the game so that Nash equilibria tend to be Pareto optimal. Justice, as it is typically practiced and/or desired, is the impulse to ensure that anyone who willfully harms someone is harmed as a result even if no benefit to the original victim(s) (or anyone else) comes from [society/god/the victim/the victim's friends] harming the original perpetrator. The rule of justice is added to the game as a disincentive against committing harm, thereby causing many conditions that would otherwise have been Nash equilibria that were not Pareto optimal to cease to be Nash equilibria -- in the process causing the equilibrium to shift to something that is closer to Pareto optimal, if not necessarily optimal itself.
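Here is a sketch of that mechanism using the same illustrative Prisoner's-Dilemma payoffs as the earlier example (the penalty size and the payoffs are my own assumptions, not anything from the argument): adding a punishment for the willful-harm action removes the non-Pareto-optimal equilibrium, and the new equilibrium is the Pareto-optimal one.

```python
# Same illustrative Prisoner's-Dilemma payoffs as the earlier sketch, plus a
# "justice" rule: any player who defects (the willful-harm action here) is
# punished afterwards by an amount larger than the most that defecting can gain.
PENALTY = 3

base_payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def with_justice(payoffs, penalty):
    adjusted = {}
    for (row, col), (pr, pc) in payoffs.items():
        if row == "D":
            pr -= penalty          # row player is punished for the harm it inflicts
        if col == "D":
            pc -= penalty          # column player is punished likewise
        adjusted[(row, col)] = (pr, pc)
    return adjusted

print(with_justice(base_payoffs, PENALTY))
# Adjusted payoffs: ("C","C") -> (3, 3), ("C","D") -> (0, 2),
#                   ("D","C") -> (2, 0), ("D","D") -> (-2, -2).
# Defecting is no longer a best reply to anything, so ("D", "D") stops being a
# Nash equilibrium and ("C", "C") -- which is Pareto optimal -- becomes the only
# one. Both players' equilibrium payoffs rise from (1, 1) to (3, 3), which is
# why every participant who understands the game would accept the rule.
```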
This rule can be erroneously extended and misapplied in a way that causes short-term improvements in the game without actually shifting the end condition to a better state. Natural selection is a blind optimizer that finds short-term local maxima even when they cause long-term problems, so it should not be surprising that the natural human sense of ethics builds on justice in non-optimal ways that would be obviously wrong to a sufficiently intelligent outside observer, too. The human tendency to promote among each other the belief that an invisible supernatural being that loves justice is watching everyone, even when they think they are alone, is similarly likely to promote better equilibrium conditions. This application of the previous rule is more problematic than the simple promotion of justice among each other because it is easily violated. In particular, it's an unenforceable rule. Once people figure out that it is unenforceable, the non-Pareto-optimal Nash equilibrium it may have prevented, while people believed that Someone would enforce it, becomes reachable again. In the meantime, people who have figured out that the belief is false can exploit it to the detriment of the people who still believe it.
This consideration brings me to the second rule that all sufficiently intelligent agents ought to be able to agree upon as a moral rule.
Actions predicated upon false hypotheses are always wrong.
There are many cases in which an agent has beliefs and allows its actions to be predicated upon those beliefs. If the agent believed A, it would do X; whereas, if it instead believed B, it would do Y. In these cases, the agent is wrong to do X if B is true, because the agent's action is based on the false predicate A.
This is distinct from saying that it's wrong to hold false hypotheses; it says merely that actions which would not have been taken were it not for the erroneous belief are wrong. (For whatever reason, inaction seems to be morally favored. Some theories treat a moral preference for inaction as an inconsistency and a failure of human reason -- read almost anyone's discussion of the trolley problem to see that this is something people tend to object to on theoretical grounds, though people almost universally hold the preference as far as practical application goes.)
Two ancient tribes get into a territorial dispute. Both sides know that if they get into a war with each other, the war can end either in a stalemate or with one side victorious. If it ends in a stalemate, both sides will lose resources and men, and neither side will gain enough to offset its losses. Neither side would engage the other in combat if they believed that the ensuing war would end in a stalemate.
Both sides also know that if the war ends with the other side victorious, things will go badly for them. Their gods will be disgraced, their men will be slain, their women raped, and their children carried off into slavery (with the male children having been made eunuchs). Either side would rather surrender up front and simply sacrifice some of its territory and pay a tribute than face these terrible consequences.
They should never get into a war, because they never would get into a war if they always correctly predicted whether the war would end in a stalemate and who would win if it wouldn't. Neither side would be the aggressor in the event that both sides knew the war would lead to a stalemate. In the event that it wouldn't, the weaker side should capitulate preemptively, averting the conflict.
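A toy payoff sketch of the situation (all of the numbers are my own and chosen only to respect the orderings the story describes) makes the reasoning explicit: if a stalemate is correctly predicted, the would-be aggressor prefers the status quo, and if the outcome of the war is correctly predicted, the predicted loser prefers surrender, so a war only starts when at least one side is acting on a false prediction.

```python
# Illustrative payoffs for one tribe, on an arbitrary scale; only the ordering
# matters, and it follows the scenario described above.
payoff = {
    "status quo (no war)":             0,
    "fight, war ends in stalemate":  -20,   # men and resources lost, nothing gained to offset it
    "fight and win":                  30,   # territory and tribute gained
    "fight and lose":               -100,   # the terrible consequences described above
    "surrender and pay tribute":     -15,   # costly, but far better than losing
}

# If both sides correctly predict a stalemate, neither aggresses:
assert payoff["status quo (no war)"] > payoff["fight, war ends in stalemate"]

# If both sides correctly predict the outcome, the predicted loser capitulates preemptively:
assert payoff["surrender and pay tribute"] > payoff["fight and lose"]

print("With correct predictions, no one chooses to fight.")
```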
Since you would only aggressively initiate a war if you expected to win it, it is wrong to initiate a war that you will not win. It is not wrong to defend yourself in the event that someone who will not defeat you if you defend yourself has initiated a war against you. These two statements fit well with most people's moral intuitions about wrong behavior... which should be true of a moral theory. Even if its axioms seem irrefutable and/or unquestionable, a moral theory that makes repugnant claims ought to be called into question.
This brings us to the case in which it is wrong to defend yourself, something that, on its face, seems odd for a moral theory to assert, but which I claim demonstrates the strength of the two ideas posited so far.
To begin with, it explains a major edge case that is problematic for most moral theories: what makes a power legitimate? People wince at the suggestion that "might makes right" is the explanation for why the government has legitimate authority to prescribe or proscribe various activities... but most other explanations are simply absurd. This explanation transforms that claim a little, into one that I find far less morally repugnant: the assertion that the government's ability to correctly ascertain that they have the power to enforce their edicts makes them "not wrong" to do so, to the extent that they are actually correct in figuring out what they have the power to enforce. It does not necessarily mean that they are "right" to do so. Secondly, it adds the further caveat, based on exactly the same principle, that the government is wrong to make certain judgments even if they have the ability to enforce those judgments. For example, even though Nazi Germany (more or less) had the power to enforce a law prohibiting anyone from being Jewish in German-controlled territories, they were wrong to do so, because the only reason they would do so is that they had many false beliefs about Jewish conspiracies and the effects of ethnic purity on a population. If it were not for those false beliefs, the Holocaust wouldn't have happened.
But we still have the previous case about the two ancient societies getting into a war with each other. My moral theory postulates that, in the event that one side will lose the war if a war occurs, they are wrong to fight it, and should instead surrender if surrendering gives them a better expected result than losing the war. (Game theory says it should, since the other society has an incentive to give them an incentive not to fight.) In this case, they are wrong to fight the war because the only reason they would fight it is that they falsely believe that things can be expected to go better for them if they fight than if they surrender. In the modern world this sort of situation mainly involves rebels and terrorists who erroneously believe that they can establish an independent state somewhere... and most of us have no qualms calling them "wrong" to do so. What we have qualms with is the idea of calling the ancient society that is going to be the victim of atrocities wrong to have defended itself, when they wouldn't have had atrocities committed against them at all if they hadn't defended themselves. It feels too much like blaming the victim. But at the same time, it's obvious that they "shouldn't have" fought, which is really all I am saying when I say "it was wrong" of them to fight. They shouldn't have fought because they, their children, and everyone else would be better off if they didn't (including their enemies), and because they wouldn't have fought if they had been able to correctly infer the outcome; but they were some combination of too unwilling to pay the tribute and too overconfident in their own abilities to realize that the course of action they should have taken was simply to pay the tribute and surrender. These two qualities are ones we tend to describe as "greed" (an enforceable tribute is no different from a tax) and "arrogance", both of which we would typically call "bad", so our intuitive notion is that it is wrong to have these qualities.
The real cause of our squeamishness around calling the side that erroneously defended itself wrong in this ancient conflict is not that saying this violates our general sense of morality; it's that saying this almost seems like saying that the side that defended itself "deserved" to have atrocities committed against them. That is not what this moral theory says at all. It simply says that they should not have fought. Previous considerations about changing the rules of the game to improve the equilibrium conditions explain why it is wrong to commit genocide. If people were able to get together and make rules, they would want to make rules that prevent atrocities, because (with a few assumptions) everybody's expected outcome in the game is better when the consequences of warfare are less horrific. (Mostly because the game is already non-deterministic.) We come up with a rule that says that genocide is also wrong, and that all remaining countries will punish the invading country for its war crimes. Notably, this moral theory does not have the property of most modern judicial systems of ultimately taking one side or the other. It is fully capable of declaring both sides wrong. In this case, we can agree that the ancient societies should have agreed on rules to prevent what we would consider war crimes, and this is a separate consideration from saying that the side that would be harmed by fighting the war should not have fought the war.
In fact, these rules never declare anything "right" except insofar as they say that adding certain rules to a game makes the game better, which can arguably be extended to saying that including and enforcing those rules in a game is "right"... though that's more semantics than substance.
There's one more thing that I want to point out about the second moral rule: Actions predicated on false hypotheses are always wrong.
This rule resolves the is/ought question. It has reduced a question of "ought" to a question of "is." In particular, it is the observation that many actions are based on truth-propositions about the world. When someone believes that God created human life and made it sacred from the moment of conception, they believe that they ought to oppose abortion even if they wouldn't feel that way if they didn't have this belief about human life. Most people who are strongly opposed to abortion would not be strongly opposed to abortion if they did not hold the religious views that cause them to be strongly opposed to abortion. Similarly, most people who support either state-mandated abortion (as in China) or the right of women to choose to have abortions if they see fit (as in most Western countries) would not do so if they believed the proposition, "There exists a God who judges the world who is offended by abortion and will judge nations/people harshly if they tolerate abortion." Despite all of the other moral posturing that people do around this subject, it pretty readily reduces to a truth-proposition about morality. If the theistic views of pro-lifers are largely correct, they are taking the side of morality. If not, they are taking the immoral view.
In the process, it explains why a sin is worse than an error of calculation -- at least for a certain form of sin. A sin is a miscalculation that results in an action that would not have occurred without the miscalculation. Many miscalculations can be errors without causing sins. Someone can hold an erroneous belief that would cause them to take the same actions that they would have taken if they instead held a true belief. These people are still mistaken, but their actions are not "wrong" because they are still doing what they "should" do.
Another thing to point out about this moral rule is that it invalidates Pascal's wager. Pascal's wager amounts to computing the utility that various beliefs purport to give to their holders, multiplying that utility by the likelihood of each belief system being true, and then choosing the belief that maximizes expected utility. It has many problems, not the least of which is that it is subject to Pascal's mugging. However, the moral rule I have suggested says that you are wrong to act on erroneous beliefs, which would make acting on Pascal's wager wrong whenever Pascal's wager leads you to act wrongly -- which is much more often than you would act wrongly if you didn't consider the purported rewards associated with a belief when selecting between them, since it is easy to contrive a false theory with an arbitrarily high purported punishment for failure to believe and comply, or an arbitrarily high purported reward for choosing to believe and comply. (Or both the carrot and the stick if you want both.)
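A quick sketch of why the wager is so easy to exploit (all of the probabilities and rewards below are invented for illustration): because the purported reward can be made arbitrarily large, a contrived belief with a vanishingly small chance of being true can always dominate the expected-utility comparison, which means the wager routinely tells you to act on hypotheses that are almost certainly false.

```python
# Expected utility of acting on each belief, computed the way Pascal's wager prescribes.
beliefs = {
    # name: (probability the belief is true, reward its promoters purport it gives)
    "well-evidenced belief": (0.99, 10),
    "contrived belief A":    (1e-9, 1e15),   # purported reward chosen arbitrarily large
    "contrived belief B":    (1e-12, 1e20),  # ...and it can always be outbid
}

for name, (probability, purported_reward) in beliefs.items():
    print(name, probability * purported_reward)

# The contrived beliefs "win" the wager (1e6 and 1e8 versus 9.9), even though
# acting on them means acting on hypotheses that are almost certainly false --
# exactly what the second moral rule forbids.
```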
(Notice, this theory does not preclude holding beliefs with humility: "What I would do if I believed I had at least a one percent chance of being wrong about beliefs I hold as strongly as I believe that evolution through the mechanism of natural selection produced life" is still a valid proposition. For reasons of elementary statistics, "what I would do if I believed the fundamentalist interpretation of the Bible had a 1% chance of being correct" is an absurd alternative to "What I would do if I believed I had at least a 1% chance of being wrong." You have infinitely many possible beliefs to choose from and have no reason to elevate any particular belief you do not hold to this sort of prominence. Though this particular idea has more to do with my rejection of the bit-complexity formulation of Occam's razor than it does with the moral theory I am presenting.)
All told, from these considerations:
We have two axioms of morality, one of which has a natural formulation in game theory, and the other of which has a natural formulation in propositional logic. Both axioms are consistent with behavior we would expect from a fully rational entity. They explain features of natural human ethics, resolve the is-ought problem, show errant actions to be worse than simple miscalculations, invalidate Pascal's wager, explain why the aggressor is in the wrong in a conflict that ends in a stalemate, and say why the rule of a government is usually legitimate while still explaining why the Nazis were wrong.
I would say that this list of results strongly indicates that there's something to the theory.
Most of these results come from the second axiom, so most of the support that these observations give to the moral theory comes from the second axiom. That said, all of the objections I can think of to the second axiom are resolved by the first one, which (possibly because it has been more fully developed by other people before me) is also the one that seems more obvious to me, and is certainly the first of the two that I held. Without the first rule, the theory seems noticeably incomplete, in a bad way.
The theory probably doesn't satisfy most people's desire for completeness. If we hypothesize a being of infinite power and infinite malice, this theory does not explain why it would be wrong for that being to torture everyone else for all of forever. However, it does say that if no such hypothetical being exists, you would be wrong to reject a theory on the grounds that it cannot deal with the false hypothetical.