One of the discrepancies between behavioral economics and traditional economics is that traditional economists tend to hold a much stronger belief that people respond to incentives than do behavioral economists.
I believe that the behavioral economists are right, for the purposes of describing individual behavior. In particular, I believe that the traditional model of human behavior, which claims that people change in response to incentives, contains unnecessary assumptions. Incentives are self-enforcing. They don't need to change people to affect the state of the system containing the incentives. Instead, they cause the people who behave in ways that align with the incentive structure to gain power within the system, and the people whose behavior does not align with the incentive structure to lose power within it. I will consider three examples below.
To clarify: I fully agree with the statement "Systems converge towards a condition that maximizes the rewarded behavior." The problem I have with the statement "Humans respond to incentives" is that it is a statement about humans and not about systems. I am going to argue that the behavior of responding to incentives is a general property of systems that have sufficiently many actors affecting results, and not one that has anything to do with the particulars of human nature.
I would expect to see the system-level effects observed by traditional economists combined with the individual effects observed by behavioral economists. In particular, I would expect that aggregate behavior converges towards the result you would predict by assuming that humans respond to incentives, while individual behavior remains mostly unchanged by the introduction of new incentives. The aggregating function is what is actually changing, not the behavior of the individual people.
I. Peer review.
One of the (many) problems with the peer review system is that insufficient controls are in place to ensure that the people who review and approve articles actually bother to read them. Even when reviewers do read the papers, they for the most part don't read them carefully enough to catch the sorts of mistakes that you could only find through careful analysis.
They have no incentive to do so. Thus we should not expect that reviewers typically do the amount of due diligence required to actually sign off on the validity of the papers they review. I'm not disputing this result. I am simply claiming that the cause of this result has to do with the effects of the incentive on the peer review system, not the absence of incentives corrupting people.
Suppose that there are some people who have a natural tendency to carefully review every article, and others who have a natural tendency to be less thorough than they should be. In the absence of an incentive to review carefully, the people who are less thorough will be able to review far more papers than the people who are more careful.
(Note: I'm not an academic. I'm using a hypothetical "I," along with a hypothetical "you" because it makes discussing ...)
The reasons are twofold. First, there is the problem of simple bandwidth. If you spend three hours and significant mental energy on the average paper you review, and I spend twenty minutes skimming each paper I review, then I can review nine papers in the time it takes you to review one, while exerting far less mental energy in the process. Ergo, I have the bandwidth to review more papers than you.
Secondly, skimming through papers instead of carefully reading them allows me to build my personal brand in academia faster than you can, which will result in more people asking me to review papers than asking you. To carefully review a paper, you have to take draining work-time that you could be spending on your own research and devote it instead to thinking about somebody else's research. I, on the other hand, never have to really push my own research out of my head, since I'm glancing through other people's results just carefully enough to glean any insights that might be relevant to me. Ergo, all other things being more or less equal, I will be able to produce more research than you and/or better research than you, simply because I was able to devote more energy to it.

Since our respective reputations have everything to do with the quality of the work that we publish and nothing to do with the effort we expend reviewing other people's research, my reputation grows faster than yours. When publications are looking for someone they can trust to give good reviews, they're going to look for people with a good reputation, so they'll choose me over you precisely because I didn't expend the energy on reviewing that would have distracted me from covering more ground on my own research. Incidentally, reversing their criteria so that they looked for reviewers who did not have as good a reputation would produce even worse results, because now they would be selecting for incompetence instead of selecting for the complete devotion to one's own research that leaves no time or energy for careful peer reviewing.
The lack of an incentive might never cause you to yell "screw it" and give up on doing the high quality reviews that you feel personally or morally obliged to conduct. But it doesn't matter. Precisely because your behavior is less aligned with the incentive structure than mine is, you will have less total impact on the system than I do. And the incentive structure will cause reviews to be cursory and swift without having to cause reviewers to be less thorough in their review process than they would be if the incentives were more aligned with the desired results.
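The selection effect above can be sketched in a few lines of code. This is my own toy model, not something from the post: the per-review times (three hours vs. twenty minutes) come from the essay, while the population sizes and the monthly time budget are assumptions chosen for illustration. No reviewer ever changes behavior, yet cursory reviews dominate the aggregate output.

```python
# Toy model: each reviewer's behavior is fixed, but the system's output
# still converges towards what the incentive structure rewards.
MINUTES_AVAILABLE = 1800  # assumed reviewing time per reviewer per month

reviewers = (
    [("careful", 180)] * 50 +  # 3 hours per review
    [("cursory", 20)] * 50     # 20 minutes per review
)

reviews = {"careful": 0, "cursory": 0}
for style, minutes_per_review in reviewers:
    reviews[style] += MINUTES_AVAILABLE // minutes_per_review

total = sum(reviews.values())
for style, n in reviews.items():
    print(f"{style}: {n} reviews ({100 * n / total:.0f}% of all reviews)")
```

With these assumed numbers, the careful half of the population produces only 10% of all reviews, and the cursory half produces 90%, even though nobody "responded" to anything.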
In this particular case, I would not expect monetary incentives to solve the problem, because they would not fix the perverse way the system aggregates behavior, even though they presumably would modify human behavior if incentives worked by changing individuals. Reputation is the currency of academia, not money. Getting paid more for accurately predicting which studies would replicate wouldn't fix peer review unless accurately predicting which studies would replicate also improved one's reputation within academia.
II. Wages.
We both run factories. I decide that it's unethical to pay my employees minimum wage, and instead pay them $20/hour because that's the most I can pay them without going into the red. You have no ethical problem with paying people minimum wage, so you pay them minimum wage.
At first things are fine for me, but you're able to take some of the money you save and reinvest it into improving your product as well as improving your factory. Pretty soon, my product is noticeably sub-par. So I go out of business.
The incentive structure didn't need to make me any more greedy. It just needed to exist. Because my behavior doesn't fit with the incentive structure, I become irrelevant, while you remain relevant because your behavior does.
III. Investing.
People realize that businesses are resource constrained and incapable of operating outside their incentive structure so long as anyone operates within it. Some of them decide to solve the moral hazards that businesses face by improving the incentive structure through investing. They'll only buy the stock of companies that they deem to be more moral than other companies, thereby making it easier for those companies to raise capital and improve themselves.
Let's say a third of investors do this. Another third of investors throw darts at the market, thereby getting the average return. And the remaining third carefully research what to buy, focusing on the expected profitability of the companies rather than moral issues. They all start out with the same amount of money.
At this point, I need to explain something that a lot of people naively don't understand. The stock market is not a Ponzi scheme. Valuations are not simply determined by the will of the market. Companies make a profit. They use some of their profits to pay dividends, buy back shares, and pay off their debts. In the long term, the valuation of a stock is dictated by the profitability of the company, not by the whims of the market.
In the last section, we already discussed one way that moral behavior causes companies to underperform the market. In fact, that's the whole premise of moral investing. People know that "being greedy" causes companies to perform better, and they are trying to create a different incentive to counteract that effect. So what happens?
The moral investors underperform the market. The random investors perform at market. And the investors who research profitability outperform the market. Let's say they get 2%, 5%, and 8% annual returns respectively. These are reasonable numbers. (David Einhorn and Warren Buffett do much better than 8% on average annually. The 5% return is close to the market average for the last century, and 2% is probably generous for people whose investment strategy systematically favors companies that are likely to get out-competed.) After thirty years, the moral investors will control only 11% of the market, the random investors will control 27%, and the investors who research profitability will control the remaining 62%. After fifty years (that is, twenty years later, not fifty more), the respective percentages will be 4%, 19%, and 77%. The investment strategy that is aligned with the market's incentive structure converges towards controlling 100% of the market, while the one that is systematically misaligned with its values rapidly converges towards zero.
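Those market-share figures are just compound interest. The sketch below recomputes them from the stated assumptions (three equal cohorts compounding at 2%, 5%, and 8% per year, with market share measured as each cohort's fraction of total wealth):

```python
# Three equal cohorts of investors compound at different annual rates;
# each cohort's share of total wealth is its "control" of the market.
def market_shares(rates, years, start=1.0):
    wealth = [start * (1 + r) ** years for r in rates]
    total = sum(wealth)
    return [w / total for w in wealth]

for years in (30, 50):
    moral, random_, research = market_shares([0.02, 0.05, 0.08], years)
    print(f"after {years} years: moral {moral:.0%}, "
          f"random {random_:.0%}, research {research:.0%}")
```

This reproduces the 11% / 27% / 62% split at thirty years and 4% / 19% / 77% at fifty, confirming the figures in the text.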