Thursday, May 7, 2015

Senescence

The first person to live to be a thousand years old will have been specifically bio-engineered for such longevity, clonally descended (with intentional genetic modification) from predecessors who were themselves bio-engineered for increased longevity.

I think one of the reasons for the popularity of Aubrey de Grey's nigh-alchemistic views of aging is that they are phrased in such a way that disagreeing with them sounds anti-scientific (or at the very least, skeptical of the potential for scientific progress). The purpose of my claim above is to provide an alternative trans-humanist view, one just as grounded in the belief that people will eventually find ways to better overcome senescence, without making de Grey's mistake of being utterly and completely wrong.

Because de Grey is wrong. (Dead wrong, one might say.)

The mainstream academic position on de Grey is that his claims deal with research topics that are too poorly understood to prove them false, but that the available evidence strongly indicates he is wrong, and that once more is known, his position will be completely falsified. That's fair. Most of the evidence that refutes de Grey's position is soft evidence. Nobody understands precisely how aging works, but all of the theories that explain anything at all are incompatible with de Grey's beliefs. You can't really use any of them to refute him, though, because all of them have enough problems of their own.

But the soft evidence against him is overwhelming.

First of all, humans already have the longest average lifespan of any endotherm and the longest average lifespan of any terrestrial animal, even if you limit your sample to organisms that survive to adulthood in order to give reptiles a fighting chance at high average lifespans, since they have astronomically high mortality rates shortly after hatching. Humans also have the highest recorded maximum lifespan of any endotherm.

Terrestrial animals are a worthwhile category because living on land is a lot harder than living in water. Read Margulis if you want a somewhat fuller explanation. More or less, Earth-born life is aquatic, and terrestrial life survives by internally preserving oceanic conditions. Moreover, all animals start their life in a liquid environment (an egg, a womb, or a body of water). Instead of adapting a way for zygotes to survive outside of an aquatic environment, mammals evolved a way to produce an artificial aquatic environment inside the females of the species, even as some mammals also evolved a dry environment in which their young mature for a while after birth. If you can only find aquatic animals to support some claims about what is possible, you probably aren't making claims that apply to terrestrial longevity. Evolving to live on land forced terrestrial animals through a very tight bottleneck, which mercilessly selected for the traits needed to perpetuate a species on land without any regard for what traits members of those species might later wish had been preserved. I think chordates traded away the potential for biological immortality long before adapting to life on land, but at any rate, the likelihood of their having preserved it through the move onto land is infinitesimal. The optimization pressure for something else in particular was simply too strong.

The endothermy argument is something else altogether. We tend to measure lifespans in years, but years are biologically irrelevant. If we wanted to measure lifespans in something more biologically relevant, we would focus on metabolic turnover and cell divisions. Both of these are far higher in endotherms (mammals, birds, and tuna!) than in ectotherms (non-endotherms). Giant tortoises spend much of their lives biologically dead. They hibernate so thoroughly that they don't even have a heartbeat. Since their cells don't have to maintain a body temperature, their cells hardly have to do anything at all. When it gets too cold, they just die for a little while and wait for the sun to come back out. True story. A giant tortoise living to be three hundred is biologically equivalent to a mammal living to be about four. (Not four hundred, just four.)
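
To put the comparison in concrete terms, here is a back-of-the-envelope sketch in Python. The metabolic-rate ratio is not a measured constant; it is just the number implied by the 300-years-to-4-years comparison above, so treat it as an illustrative assumption.

    # Back-of-the-envelope sketch of "biologically relevant lifespan": calendar years
    # scaled by how much metabolic work gets done per year. The rate ratio below is an
    # illustrative assumption implied by the comparison in the text, not a measurement.

    def metabolic_lifespan(calendar_years, relative_metabolic_rate):
        """Lifespan rescaled to 'mammal-equivalent' years of metabolic activity."""
        return calendar_years * relative_metabolic_rate

    mammal_rate = 1.0            # define a typical mammal's mass-specific rate as 1
    tortoise_rate = 4 / 300      # implied by the text: roughly 1/75th of the mammal rate

    print(metabolic_lifespan(300, tortoise_rate / mammal_rate))   # ~4.0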

Then there's complexity. Talking about the simplicity or complexity of organisms is extremely out of vogue. But fashion is dumb anyways, so I don't care. Biologically immortal organisms are extremely simple. Vascular plants have three organ-types (for want of a better term). They have leaves, stems, and roots. Flowers are a type of leaf. Tubers are a type of stem. Bulbs are a type of leaf. Etc. This isn't semantics or pedantry. Different organ-types are made up of different tissue-types, which are made up of different cell-types. Different cell-types rely on different conditions from each other to survive. Plants only have a few dozen cell types, so they don't have to code for preserving that many different modes of survival. Biologically immortal organisms, whether they are plants or animals, are ridiculously simple as measured by the number of different cell types that they have. For reference, you have more types of tissue that are unique to your ear than a clam has in its whole body. By unique to your ear, I mean specialized tissue types that only occur in your ear and don't occur anywhere else in your body. Your ear isn't even close to being the most complicated organ-like-thing in your body, either. (That would be your brain, in case you're wondering. There are more cell types in the brain than there are tissue types in the ear, and there are so many different tissue-types in the brain that, to the best of my knowledge, no one has attempted to make an exhaustive list. As of 2014, the authors of the leading neuroscience textbook did not believe that an exhaustive list of neurotransmitters was known yet, and that list is tiny compared to an exhaustive catalog of all of the different kinds of tissues that occur in the brain. Brains are ridiculously complicated just in terms of their raw hardware, before you begin to look at the way that hardware is organized. In a very overtly evolved sort of way. We're talking tangled, haphazard, messy, disorganized complexity.)

And then there's the data. If sequoias died at a young enough age, we'd think that they were biologically immortal. For the first thousand years of their lives, they exhibit all of the traits of biological immortality. Their chances of living another year increase with every successive year, etc. (Actually, this general trait of having one's chance of living another day increase with every successive day is true of most plants for the first few years of their lives. Hell, it's true of most animals too during their early life. Mortality rates tend to be pretty high for the young of most species.) If sequoias had a high enough mortality rate that they all died before living to be 1,500 years old, we'd think they were biologically immortal. But they don't. They live to be about five times as old as the oldest biologically immortal organism, and they only begin to exhibit senescence once they're a thousand years older than the oldest known biologically immortal organism. Sequoias are "biologically immortal," by the way that characteristic is defined, for a far longer period of time than any organism that is actually believed to be biologically immortal. Most "biologically immortal" organisms have very short lifespans. Sponges, clams, and sea urchins are the exceptions. They sometimes live to be a few hundred years old, and they have members that exhibit decreased mortality with age for that whole lifespan. (Also, PETA claimed that one particular lobster was over 100 years old in a bid to save its life, but their calculation was based on some very spurious assumptions. Moreover, lobsters are known not to be biologically immortal, since their chances of dying while molting [or simply becoming unable to molt] increase dramatically as they age. Becoming unable to molt is eventually fatal for a lobster.) Hydra can live to be up to four years old without showing any increase in mortality as they age! Whoop-dee-doo! Humans can also live to be up to four years old without any increase in mortality over that period... while being much more biologically active.

So at this point, we've got four major strikes against the idea of a human living to be a thousand. Simple organisms (as measured by the number of cell-types, tissue-types, or organ-types they possess) have far longer lifespans than more complex organisms. Aquatic animals can have much longer lifespans than terrestrial animals. Ectotherms have much longer lifespans as measured in years than endotherms, but much shorter lifespans as measured by biologically relevant criteria such as total metabolic activity or cell divisions. And when we look at extremely simple aquatic ectotherms, we find that they still have maximum lifespans capped at a few hundred years. The oldest clam ever discovered was 507 years old. The longest-lived plants can reach perhaps 5,000 years, and some scientists believe that a few sponges might be 10,000-20,000 years old, but at this point we're talking about things that are ridiculously simple compared to amniotes.

The oldest known terrestrial homeotherm died on August 4, 1997. She was 122 years old. Her name was Jeanne Calment, and she was human. No other mammal is known to have lived that long. (Though bowhead whales [not terrestrial animals] are thought to live up to 150 years, possibly longer.) No bird is known or thought to have lived that long.

When somebody says that senescence isn't biologically inevitable in the context of talking about extending human life spans, that person is saying something pretty ignorant. The available evidence strongly suggests that humans are already flirting with the upper bound of what's biologically possible for organisms with our biological history.

But what about science? Technology! Progress! Onward! Yay science! Hurrah! Hurrah! Hurrah!

Isn't the fact that humans live longer than any other terrestrial homeotherm evidence in favor of modern medicine?

No. Not at all.

People from parts of the world with no modern medicine have been reliably documented reaching 110-115 years of age. Any medical progress humanity has made in the past four hundred years might have extended the upper bound on human lifespan by about five years. Probably not even that. There are plenty of other factors, like adequate nutrition and improved hygiene, that ought to increase longevity in the first world relative to the third world.

I'm not knocking medicine in general. Claims that medicine has improved the average human lifespan are well substantiated. Childbirth used to be a major cause of death for women; the medical advances of the past century and a half have practically eliminated deaths due to childbirth. Childhood mortality due to contagious disease also used to be a major cause of death, and vaccines have pretty much eliminated that. So I'm not knocking medicine altogether. I'm strongly in favor of the medical position on two of the more controversial issues in medicine. But I see no evidence whatsoever to suggest that medical progress has done anything to begin overcoming senescence. Maybe modern medicine has extended the maximum human lifespan by 5%, but it is not on the verge of extending it by 1000%.

It just isn't.

I'd bet my life on it.

Sunday, April 26, 2015

Experiments, Placebos, and Effect Size (a parable)

The purpose of this experiment was to measure the effects of wording on people's behavior.

Method

Participants were randomly assigned to one of three groups, two experimental and one control. Participants were paid $5 to participate in this experiment and were told that it would only take five minutes of their time, and then were sent into a room with an experimenter who introduced himself and handed them their instructions.

Participants in the first experimental condition were handed a sheet of paper that said, "Please complete as many push ups as you can without stopping."

Participants in the second experimental condition were handed a sheet of paper that said, "Please complete as many sit ups as you can without stopping."

Participants in the control condition were handed a blank sheet of paper.

Results

Participants in the first experimental condition completed on average 10.4 push ups (median 8). They completed on average 0 sit ups (median 0).

Participants in the second experimental condition completed on average 22.8 sit ups (median 24). They completed on average 0 push ups (median 0).

Participants in the control condition completed on average 0 push ups (median 0) and 0 sit ups (median 0).

Conclusion

This experiment found with high probability (p < 0.05) that people's behavior is affected by the wording of instructions they are given.
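
For what it's worth, the p < 0.05 really is trivially reachable with numbers like these: when the control group sits at zero and the experimental group has any spread at all, even a small sample clears the significance bar. A minimal sketch (the per-participant counts are invented, chosen only to roughly match the reported mean of 10.4 and median of 8; requires scipy):

    # Hypothetical per-participant push-up counts for condition 1 (mean ~10.4, median 8)
    # versus the all-zero control group. The data are made up for illustration.
    from scipy.stats import mannwhitneyu

    condition_1 = [5, 7, 8, 8, 8, 12, 15, 20]
    control = [0, 0, 0, 0, 0, 0, 0, 0]

    stat, p = mannwhitneyu(condition_1, control, alternative="greater")
    print(p)   # far below 0.05: "wording affects behavior" is trivially significant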

Saturday, April 25, 2015

Do people respond to incentives?

One of the discrepancies between behavioral economics and traditional economics is that traditional economists tend to hold a much stronger belief that people respond to incentives than do behavioral economists.

I believe that the behavioral economists are right for the purposes of describing individual behavior. In particular, I believe that the traditional model of human behavior, which claims that people change in response to incentives, contains unnecessary assumptions. Incentives are self-enforcing. They don't need to change people to affect the state of the system containing them. Instead, they cause the people who behave in ways that align with the incentive structure to gain power within the system, and the people whose behavior does not align with it to lose power within the system. I will consider three examples below.

To clarify: I fully agree with the statement "Systems converge towards a condition that maximizes the rewarded behavior." The problem I have with the statement "Humans respond to incentives" is that it is a statement about humans and not about systems. I am going to argue that responding to incentives is a general property of systems that have sufficiently many actors affecting results, not something that has anything to do with the particulars of human nature.

I would expect to see the system-level effects observed by traditional economists combined with the individual-level effects observed by behavioral economists. In particular, I would expect aggregate behavior to converge towards the result you would predict by assuming that humans respond to incentives, while individual behavior remains mostly unchanged by the introduction of new incentives. The aggregating function is what actually changes, not the behavior of the individual people.

I. Peer review.

One of the (many) problems with the peer review system is that insufficient controls are in place to ensure that the people who review and approve articles actually bother to read them. Even when reviewers do read the papers, they mostly don't read them carefully enough to catch the sorts of mistakes that can only be found through careful analysis.

They have no incentive to do so. Thus we should not expect reviewers to typically do the amount of due diligence required to actually sign off on the validity of the papers they review. I'm not disputing this result. I am simply claiming that it is caused by the effects of the incentive structure on the peer review system, not by the absence of incentives corrupting individual people.

Suppose that there are some people who have a natural tendency to carefully review every article, and others who have a natural tendency to be less thorough than they should be. In the absence of an incentive to review carefully, the people who are less thorough will be able to review far more papers than the people who are more careful.

(Note: I'm not an academic. I'm using a hypothetical "I," along with a hypothetical "you" because it makes discussing ...)

The reasons are twofold. First, there is the problem of simple bandwidth. If you spend three hours and significant mental energy on the average paper you review, and I spend twenty minutes skimming through each paper I review, then I can review nine papers in the time it takes you to review one, while exerting far less mental energy in the process. Ergo, I have the bandwidth to review more papers than you.

Secondly, skimming through papers instead of carefully reading them allows me to build my personal brand in academia faster than you can, which will result in more people asking me to review papers than ask you. To carefully review a paper, you have to take draining work-time that you could be spending on your own research and devote it instead to thinking about somebody else's research. I, on the other hand, never really have to push my own research out of my head as I glance through other people's results just carefully enough to glean any insights that might be relevant to me. Ergo, all other things being more or less equal, I will be able to produce more research than you and/or better research than you, simply because I was able to devote more energy to it. Since our respective reputations have everything to do with the quality of the work that we publish and nothing to do with the quality of the work we put into reviewing other people's research, my reputation grows faster than yours. When publications are looking for someone they can trust to give good reviews, they're going to look for people with a good reputation, so they'll choose me over you precisely because I didn't expend the energy on reviewing that would have distracted me from covering more ground in my own research. Incidentally, reversing their criteria so that they looked for reviewers with worse reputations would produce even worse results, because now they would be selecting for incompetence instead of selecting for the complete devotion to one's own research that doesn't leave time or energy for careful peer reviewing.

The lack of an incentive might never cause you to yell "screw it" and give up on doing the high-quality reviews that you feel personally or morally obliged to conduct. But it doesn't matter. Precisely because your behavior is less aligned with the incentive structure than mine is, you will have less total impact on the system than I do. And the incentive structure will cause reviews to be cursory and swift without having to make any individual reviewer less thorough than they would be if the incentives were better aligned with the desired results.

In this particular case, I would not expect monetary incentives to solve the problem, because they would not fix the perverse way the system aggregates behavior, even though they presumably would modify human behavior if that were the mechanism by which incentives affect systems. Reputation is the currency of academia, not money. Getting paid more for accurately predicting which studies will replicate wouldn't fix peer review unless accurately predicting which studies will replicate also improved one's reputation within academia.

II. Wages.

We both run factories. I decide that it's unethical to pay my employees minimum wage, and instead pay them $20/hour because that's the most I can pay them without going into the red. You have no ethical problem with paying people minimum wage, so you pay them minimum wage.

At first things are fine for me, but you're able to take some of the money you save and reinvest it into improving your product as well as improving your factory. Pretty soon, my product is noticeably sub-par. So I go out of business.

The incentive structure didn't need to make me any more greedy. It just needed to exist. Because my behavior doesn't fit with the incentive structure, I become irrelevant, while you remain relevant because your behavior does.

III. Investing.

People realize that businesses are resource-constrained and incapable of operating outside their incentive structure so long as anyone operates within it. Some of them decide to solve the moral hazards that businesses face by improving the incentive structure through investing: they'll only buy the stock of companies that they deem more moral than other companies, thereby making it easier for those companies to raise capital and improve themselves.

Let's say a third of investors do this. Another third of investors throws darts at the market, thereby getting the average return. And the remaining third of investors carefully research what to buy focusing on the expected profitability of the companies rather than moral issues. They all start out with the same amount of money.

At this point, I need to explain something that a lot of people naively don't understand. The stock market is not a Ponzi scheme. Valuations are not simply determined by the will of the market. Companies make a profit. They use some of their profits to pay dividends, buy back shares, and pay off their debts. In the long term, the valuation of a stock is dictated by the profitability of the company, not by the whims of the market.

In the last section, we already discussed one way that moral behavior causes companies to underperform the market. In fact, that's the whole premise of moral investing. People know that "being greedy" causes companies to perform better, and they are trying to create a different incentive to counteract that effect. So what happens?

The moral investors underperform the market. The random investors perform at market. And the investors who research profitability outperform the market. Let's say they get 2%, 5%, and 8% annual returns respectively. These are reasonable numbers. (David Einhorn and Warren Buffett do much better than 8% per year on average. The 5% return is close to the market average for the last century, and 2% is probably generous for people whose investment strategy systematically favors companies that are likely to get out-competed.) After thirty years, the moral investors will control only 11% of the market, the random investors will control 27%, and the investors who research profitability will control the remaining 62%. After fifty years (that is, twenty years further on, not fifty more), the respective percentages will be 4%, 19%, and 77%. The investment strategy that is aligned with the market's incentive structure converges towards controlling 100% of the market, while the one that is systematically misaligned with it converges towards zero fairly rapidly.
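
The percentages above are just compound growth followed by normalization. A quick sketch to check the arithmetic, using the 2%, 5%, and 8% returns assumed in the paragraph above:

    # Market share of three equal-sized investor groups compounding at different rates.

    def market_shares(rates, years):
        wealth = [(1 + r) ** years for r in rates]   # each group starts with equal capital
        total = sum(wealth)
        return [w / total for w in wealth]

    for years in (30, 50):
        shares = market_shares([0.02, 0.05, 0.08], years)
        print(years, [round(100 * s) for s in shares])
    # 30 -> [11, 27, 62]
    # 50 -> [4, 19, 77]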


Tuesday, April 21, 2015

Universal Ethics

I believe there are at least two ethical claims that any sufficiently intelligent agent would agree to. I do not believe that the standard of sufficient intelligence is low enough for all humans to meet it, but most mathematically competent people who have been exposed to the appropriate background material are, I believe, sufficiently intelligent to see that these claims are accurate.

Game theory provides notation that helps describe some sorts of moral claims precisely. Of particular interest, it describes two conditions that coordination problems among agents with competing desires might reach: Nash equilibria and Pareto optima. Many Nash equilibria are not Pareto optimal, and many Pareto optima are not Nash equilibria. For every Nash equilibrium that is not a Pareto optimum, there exists a Pareto optimum that is at least as good for every participant as the Nash equilibrium and strictly better for at least some of them. Moreover, the general structure of the game ensures that the Nash equilibrium is a more stable condition than the Pareto optimum, so the natural final state of such a game (provided it reaches conditions sufficiently close to a non-Pareto-optimal Nash equilibrium) is worse than some theoretically attainable condition that is unattainable in practice only because of the behavior of the participants in the game.
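
A minimal concrete example is the prisoner's dilemma, sketched below with the usual textbook payoffs (the numbers are illustrative): the only pure-strategy Nash equilibrium, mutual defection, is not Pareto optimal, while mutual cooperation is Pareto optimal but not an equilibrium.

    # Prisoner's dilemma: payoff[(row_action, col_action)] = (row_payoff, col_payoff).
    from itertools import product

    actions = ("cooperate", "defect")
    payoff = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"): (0, 5),
        ("defect", "cooperate"): (5, 0),
        ("defect", "defect"): (1, 1),
    }

    def is_nash(a, b):
        # Neither player can gain by unilaterally switching actions.
        return (payoff[(a, b)][0] == max(payoff[(x, b)][0] for x in actions)
                and payoff[(a, b)][1] == max(payoff[(a, y)][1] for y in actions))

    def is_pareto_optimal(a, b):
        # No other outcome is at least as good for both players and better for one.
        u = payoff[(a, b)]
        return not any(v != u and v[0] >= u[0] and v[1] >= u[1] for v in payoff.values())

    for a, b in product(actions, actions):
        print(a, b, is_nash(a, b), is_pareto_optimal(a, b))
    # Only (defect, defect) is a Nash equilibrium, and it is the one outcome
    # that is not Pareto optimal.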

I assert that anyone who is smart enough to understand game theory as it applies to the situations they are dealing with (which, in many cases, requires somewhat more intelligence than simply understanding it well enough to push the symbols and do the math, though that is often a prerequisite) will concede the following: a game which assigns the same possible states to the same players but has slightly different rules, so that all achievable Nash equilibria are Pareto optimal, is a better game than one whose rules permit the nearest Nash equilibrium to be something other than Pareto optimal. This is a very precise way of saying that if the incentive structures can be rearranged so that some of the participants benefit, none of the participants are harmed, and you have to let the game play out to determine whether you benefit or merely do as well as you would have done without the rule change, then all of the participants, no matter how risk averse they may be, will agree to the rule change unless they are simply not intelligent enough to see that the rule change cannot possibly harm them but can possibly benefit them.

(Side note for the truly pedantic: The way I've phrased it, the rule is slightly too weak. It is always true, but a stronger version of the rule would also be true and would apply to a slightly wider variety of cases. The stronger version is much harder to state precisely, so I presented the weaker version instead. The strong version acknowledges that in some games it is not possible to change the rules so that all achievable Nash equilibria are Pareto optimal, but in many of these games some potential rulesets give Nash equilibria that are closer to being Pareto optimal than the achievable Nash equilibria produced by other rulesets. In these cases, the rulesets that come the closest to producing Pareto-optimal Nash equilibria are better than the rulesets that achieve less Pareto efficiency in their equilibrium conditions. I don't know any way to phrase this rule precisely in words without resorting to a definition that includes variables, which is simply rude to do in a blog post. In any game which includes randomization, future outcomes are not predictable, so the consideration can be extended even further. People should be able to agree to rules that increase their net expected outcomes even when it is statistically possible that the rule will hurt the ultimate outcome they face. That is: if a game includes enough randomization that more than one equilibrium condition is possible, people ought to be able to agree to a rule change that improves several of the attainable equilibrium conditions from their perspective, even if it doesn't improve all of them, as long as it improves enough of them by a large enough margin that the net expected outcome for each participant is an improvement... but at this point, you need to compute each participant's risk aversion as a separate thing from their valuation of each outcome, because the various possible Nash equilibria have different probabilities of being reached.)

(Additional side note to answer a possible objection of wannabe pedants: What if the original game under consideration is a deterministic, solved game? You can postulate a rule change R that would produce a Pareto optimum benefiting some but not all of the participants. Why should the people who know in advance that they won't benefit from R agree to R? The answer is that they shouldn't, but, in the event that R exists, there is also a ruleset R' that would benefit everyone. In particular, R benefits one participant by the amount x. There are plenty of infinitely sub-dividable abstract things that can be distributed within a game, some of which are no more valuable in total than x is to the player who benefits by receiving x. [For example: add randomization into the game, and give each participant some non-zero chance of receiving x instead of the person who would have received it under R. Simpler solutions exist if x is itself sub-dividable or if x is valued in a currency that is infinitely sub-dividable (which any currency can be made to be).] Add a rule that says that once the game plays out under R far enough for a player to achieve x, x enters a lottery to determine who actually gets it. What I've described does not tell you which Pareto optimum the game should achieve, merely that Pareto optima are better than Nash equilibria. [For example, not everyone has to be given the same chance of getting x. The person who would receive it under R might get a 50% chance of receiving x while everyone else gets a 1/(2(n-1)) chance, where n is the number of players.] There is a new meta-game that describes negotiating the rule change, with an infinite regression of meta-meta-games for negotiating the rules of those games. Throughout this whole infinite regression, you never lose the condition that says that everyone intelligent enough to understand what's going on recognizes that the game in which all achievable Nash equilibria are Pareto optimal is better than one in which at least one achievable Nash equilibrium is not a Pareto optimum. You just gain a whole lot of complexity... A lot of heated moral arguments are about negotiating the rule changes. People know that the current state of things permits a tragedy of the commons, but they realize that different ways to resolve the tragedy favor different parties to different extents, and they argue vehemently about why they are morally entitled to receive a larger share of the expected gain from negotiating the rule change than anyone else gets. The moral rule I have postulated is (mostly) agnostic on this point. Switching the rules so that the Nash equilibrium moves to a Pareto optimum makes the game better, but switching it so that it moves to the best Pareto optimum for you, your friends, and your equals does not necessarily make it a better game than switching it so that it moves to the best Pareto optimum for me, my friends, and my equals. And we both have an incentive to claim that our side is right in our advocacy of a particular improvement that differs from the particular improvement another side wants.)
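
A quick sanity check on the lottery in the bracketed example above, just confirming that it is a proper probability distribution and that every non-favored player picks up a positive expected share of x (n is an arbitrary illustrative number of players):

    # The favored player keeps a 50% chance of receiving x; each of the other n - 1
    # players gets a 1 / (2 * (n - 1)) chance. The value of n here is arbitrary.
    n = 5
    p_favored = 0.5
    p_other = 1 / (2 * (n - 1))

    assert abs(p_favored + (n - 1) * p_other - 1.0) < 1e-12   # chances sum to 1
    print(p_other, (n - 1) * p_other)   # 0.125 per non-favored player, 0.5 collectively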

Innate human ethics has at least one feature that I believe can be explained as the result of evolved optimization "discovering," through trial and error, this moral rule that will be naturally discovered by any sufficiently intelligent process or agent. (Natural selection, like all optimization engines, is a system that produces results consistent with the application of intelligence, whether or not we would call the system itself intelligent.) The human impulse to seek justice (shared with some other animals, certainly chimpanzees, and probably most social animals) acts as a rule that helps constrain the game so that Nash equilibria tend to be Pareto optimal. Justice, as it is typically practiced and/or desired, is the impulse to ensure that anyone who willfully harms someone is harmed as a result, even if no benefit to the original victim(s) (or anyone else) comes from [society/god/the victim/the victim's friends] harming the original perpetrator. The rule of justice is added to the game as a disincentive against committing harm, thereby causing many conditions that would otherwise have been non-Pareto-optimal Nash equilibria to cease to be Nash equilibria -- in the process causing the equilibrium to shift to something that is closer to Pareto optimal, if not necessarily optimal itself.

This rule can be erroneously extended and misapplied in ways that cause short-term improvements in the game without actually shifting the end condition to a better state. Natural selection is a blind optimizer that finds short-term local maxima even when they cause long-term problems, so it should not be surprising that the natural human sense of ethics builds on justice in non-optimal ways that would be obviously wrong to a sufficiently intelligent outside observer. The human tendency to promote among each other the belief that an invisible supernatural being that loves justice is watching everyone, even when they think they are alone, is similarly likely to promote better equilibrium conditions. This application of the previous rule is more problematic than the simple promotion of justice, because it is easily violated. In particular, it's an unenforceable rule. Once people figure out that it is unenforceable, whatever bad Nash equilibrium it prevented while people believed that Someone would enforce it becomes reachable again. In the meantime, people who have figured out that the rule is false can exploit it to the detriment of the people who still believe it.

This consideration brings me to the second rule that all sufficiently intelligent agents ought to be able to agree upon as a moral rule.

Actions predicated upon false hypotheses are always wrong.

There are many cases in which an agent has beliefs and allows its actions to be predicated upon those beliefs. If the agent believed A, it would do X, whereas if it instead believed B, it would do Y. In these cases, the agent is wrong to do X if B is true, because its action is based on the false predicate A.
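
Stated a bit more mechanically, here is a toy sketch of the rule; all of the names (A, B, X, Y, the policy mapping) are hypothetical, just restating the paragraph above in code.

    # Toy formalization: the agent's policy maps beliefs to actions, and an action is
    # "wrong" under the rule exactly when the belief it was predicated on is false.
    policy = {"A": "X", "B": "Y"}
    world = {"A": False, "B": True}   # in fact B holds, not A

    def action_is_wrong(belief):
        action = policy[belief]       # the action the agent takes, given its belief
        return (action, not world[belief])

    print(action_is_wrong("A"))   # ('X', True): doing X rests on the false predicate A
    print(action_is_wrong("B"))   # ('Y', False): doing Y rests on the true predicate B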

This is distinct from saying that it's wrong to hold false hypotheses. Merely that actions which would not have been taken were it not for the erroneous belief are wrong. (For whatever reason, inaction seems to be morally favored. Some theories treat a moral preference for inaction as an inconsistency and a failure of human reason -- read almost anyone's discussion of the trolley problem to see that this is something people tend to object to on theoretical grounds, though people almost universally hold the preference as far as practical application goes.)

Two ancient tribes get into a territorial dispute. Both sides know that if they go to war with each other, the war can end either in a stalemate or with one side victorious. If it ends in a stalemate, both sides will lose resources and men, and neither side will gain enough to offset its losses. Neither side would engage the other in combat if they believed that the ensuing war would end in a stalemate.

Both sides also know that if the war ends with the other side victorious, things will go badly for them. Their gods will be disgraced, their men will be slain, their women raped, and their children carried off into slavery (with the male children having been made eunuchs). Either side would rather surrender up front and simply sacrifice some of its territory and pay a tribute than face these terrible consequences.

They should never get into a war, because they never would get into one if they always correctly predicted whether the war would end in a stalemate and, if not, who would win. Neither side would be the aggressor if both sides knew the war would lead to a stalemate. In the event that it wouldn't, the weaker side should capitulate preemptively, averting the conflict.

Since you would only aggressively initiate a war if you expected to win it, it is wrong to initiate a war that you will not win. It is not wrong to defend yourself when someone who will not defeat you (if you defend yourself) has initiated a war against you. These two statements fit well with most people's moral intuitions about wrong behavior... which should be true of a moral theory. Even if its axioms seem irrefutable and/or unquestionable, a moral theory that makes repugnant claims ought to be called into question.

This brings us to the case in which it is wrong to defend yourself, something that on its face seems odd for a moral theory to assert, but which I claim demonstrates the strength of the two ideas posited so far.

To begin with, it explains a major edge case that is problematic for most moral theories: what makes a power legitimate? People wince at the suggestion that "might makes right" explains why the government has legitimate authority to prescribe or proscribe various activities... but most other explanations are simply absurd. This explanation transforms that claim a little, into one that I find far less morally repugnant: the assertion that a government's ability to correctly ascertain that it has the power to enforce its rules makes it "not wrong" to do so, to the extent that it is actually correct in figuring out what it has the power to enforce. It does not necessarily mean that it is "right" to do so. Secondly, it adds the further caveat, based on exactly the same principle, that the government is wrong to make certain judgments even if it has the ability to enforce them. For example, even though Nazi Germany (more or less) had the power to enforce a law prohibiting anyone from being Jewish in German-controlled territories, they were wrong to do so, because the only reason they would do so is that they held many false beliefs about Jewish conspiracies and about the effects of ethnic purity on a population. If it were not for those false beliefs, the Holocaust wouldn't have happened.

But we still have the previous case about the two ancient societies going to war with each other. My moral theory postulates that, in the event that one side will lose the war if a war occurs, they are wrong to fight it and should instead surrender, provided surrendering gives them a better expected result than losing the war. (Game theory says it should, since the other society has an incentive to give them an incentive not to fight.) In this case, they are wrong to fight the war because the only reason they would fight it is that they falsely believe things can be expected to go better for them if they fight than if they surrender. In the modern world this sort of situation mainly involves rebels and terrorists who erroneously believe that they can establish an independent state somewhere... and most of us have no qualms about calling them "wrong" to do so. What we have qualms with is the idea of calling the ancient society that is going to be the victim of atrocities wrong to have defended itself, when it wouldn't have had atrocities committed against it at all if it hadn't defended itself. It feels too much like blaming the victim. But at the same time, it's obvious that they "shouldn't have" fought, which is really all I am saying when I say it was "wrong" of them to fight. They shouldn't have fought because they, their children, and everyone else (including their enemies) would have been better off if they hadn't, and because they wouldn't have fought if they had been able to correctly infer the outcome; but they were some combination of too unwilling to pay the tribute and too overconfident in their abilities to realize that the course of action they should have taken was simply to pay the tribute and surrender. These two qualities are ones we tend to describe as "greed" (an enforceable tribute is no different from a tax) and "arrogance," both of which we would typically call "bad," so our intuitive notion is that it is wrong to have them.

The real cause of our squeamishness about calling the side that erroneously defended itself wrong in this ancient conflict is not that saying so violates our general sense of morality; it's that it almost seems like saying the side that defended itself "deserved" to have atrocities committed against it. Which is not what this moral theory says at all. It simply says that they should not have fought. The previous considerations about changing the rules of the game to improve the equilibrium conditions explain why it is wrong to commit genocide. If people were able to get together and make rules, they would want to make rules that prevent atrocities, because (with a few assumptions) everybody's expected outcome in the game is better when the consequences of warfare are less horrific. (Mostly because the game is already non-deterministic.) We come up with a rule that says genocide is also wrong, and that all remaining countries will punish the invading country for its war crimes. Notably, this moral theory does not share the property of most modern judicial systems of ultimately taking one side or the other. It is fully capable of declaring both sides wrong. In this case, we can agree that the ancient societies should have agreed on rules to prevent what we would consider war crimes, and that is a separate consideration from saying that the side that would be harmed by fighting the war should not have fought it.

In fact, these rules never declare anything "right" except insofar as they say that adding certain rules to a game makes the game better, which can arguably be extended to saying that including and enforcing those rules in a game is "right"... though that's more semantics than substance.

There's one more thing that I want to point out about the second moral rule: Actions predicated on false hypotheses are always wrong.

This rule resolves the is/ought question. It reduces a question of "ought" to a question of "is." In particular, it rests on the observation that many actions are based on truth-propositions about the world. When someone believes that God created human life and made it sacred from the moment of conception, they believe that they ought to oppose abortion, even if they wouldn't feel that way without that belief about human life. Most people who are strongly opposed to abortion would not be strongly opposed to abortion if they did not hold the religious views that cause them to be. Similarly, most people who support either state-mandated abortion (as in China) or the right of women to choose to have abortions if they see fit (as in most Western countries) would not do so if they believed the proposition, "There exists a God who judges the world, who is offended by abortion, and who will judge nations and people harshly if they tolerate it." Despite all of the other moral posturing that people do around this subject, the moral question pretty readily reduces to a truth-proposition about the world. If the theistic views of pro-lifers are largely correct, they are taking the moral side. If not, they are taking the immoral one.

In the process, it explains why a sin is worse than an error of calculation -- at least for a certain form of sin. A sin is a miscalculation that results in an action that would not have occurred without the miscalculation. Many miscalculations can be errors without causing sins. Someone can hold an erroneous belief that would cause them to take the same actions they would have taken if they instead held a true belief. These people are still mistaken, but their actions are not "wrong," because they are still doing what they "should" do.

Another thing to point out about this moral rule is that it invalidates Pascal's wager. Pascal's wager amounts to computing the utility that various beliefs purport to give to their holders, multiplying that utility by the likelihood of each belief system being true, and then choosing the belief that maximizes expected utility. It has many problems, not the least of which is that it is subject to Pascal's mugging. However, the moral rule I have suggested says that you are wrong to act on erroneous beliefs, which makes acting on Pascal's wager wrong whenever it leads you to act wrongly -- which is much more often than you would act wrongly if you didn't consider the purported rewards associated with a belief when selecting between them, since it is easy to contrive a false theory with an arbitrarily high purported punishment for failure to believe and comply, or an arbitrarily high purported reward for choosing to believe and comply. (Or both the carrot and the stick, if you want both.)
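
A tiny sketch of why the mugging works: under naive expected-utility bookkeeping, an arbitrarily large purported payoff swamps an arbitrarily small probability, so whoever invents the biggest promise wins. All of the numbers below are made up for illustration.

    # Naive Pascal's-wager accounting: pick the belief with the highest
    # probability-times-purported-payoff. Probabilities and payoffs are invented.
    beliefs = {
        "well-supported theory": {"probability": 0.9, "purported_payoff": 10},
        "contrived theory with a huge carrot": {"probability": 1e-9, "purported_payoff": 1e15},
    }

    def expected_utility(b):
        return b["probability"] * b["purported_payoff"]

    best = max(beliefs, key=lambda name: expected_utility(beliefs[name]))
    print(best)   # the contrived theory wins: 1e-9 * 1e15 = 1e6, versus 0.9 * 10 = 9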

(Notice, this theory does not preclude holding beliefs with humility: "What I would do if I believed I had at least a one percent chance of being wrong about beliefs I hold as strongly as I believe that evolution through the mechanism of natural selection produced life" is still a valid proposition. For reasons of elementary statistics, "what I would do if I believed the fundamentalist interpretation of the Bible had a 1% chance of being correct" is an absurd alternative to "What I would do if I believed I had at least a 1% chance of being wrong." You have infinitely many possible beliefs to choose from and have no reason to elevate any particular belief you do not hold to this sort of prominence. Though this particular idea has more to do with my rejection of the bit-complexity formulation of Occam's razor than it does with the moral theory I am presenting.)


All told, from these considerations we have two axioms of morality: one has a natural formulation in game theory, and the other has a natural formulation in propositional logic. Both axioms are consistent with the behavior we would expect from a fully rational entity. They explain features of natural human ethics, resolve the is/ought problem, show errant actions to be worse than simple miscalculations, invalidate Pascal's wager, explain why the aggressor is in the wrong in a conflict that ends in a stalemate, and explain why the rule of a government is usually legitimate while still explaining why the Nazis were wrong.

I would say that this list of results strongly indicates that there's something to the theory.

Most of these results come from the second axiom, so most of the support that these observations give to the moral theory comes from the second axiom. That said, all of the objections I can think of to the second axiom are resolved by the first one, which (possibly because it has been more fully developed by other people before me) is also the one that seems more obvious to me, and is certainly the first of the two that I held. Without the first rule, the theory would seem noticeably incomplete, in a bad way.

The theory probably doesn't satisfy most people's desire for completeness. If we hypothesize a being of infinite power and infinite malice, this theory does not explain why it would be wrong for that being to torture everyone else forever. However, it does say that if no such being exists, you would be wrong to reject the theory on the grounds that it cannot deal with the false hypothetical.

Friday, April 3, 2015

Axing Occam's Razor Part 1 (Overview of complaints)

Occam's razor as it is typically stated in words is "The simplest explanation is always the best." I don't know of anyone who considers that version of Occam's razor to be sufficiently precise to be treated as a theory of epistemology. A reasonably common epistemology, notably popularized by the rationalist movement though it predates that movement by quite a while, is the computational, information-theoretic version of Occam's razor, which says that theories should be assigned prior probabilities according to their minimum length as measured in bits. Typically, people don't specify anything beyond this. The minimum length in bits of a program in one programming language is, after all, somewhat related to its minimum length in any other programming language. You can pretty easily show that either of any pair of Turing-complete languages can completely describe the operations of the other, and from there prove that for every pair of languages (P, Q) and for every integer N, there exists an integer M such that any program that can be expressed in N or fewer bits in P can be expressed in M or fewer bits in Q. This puts only a very weak bound on the relationship between languages. You can also pretty easily prove that for every state S, there exists a programming language P(S) such that S can be expressed in only one bit in P(S). (At this point P(S) is a contrived language specifically designed to permit this hack, so I think it is only fair to reference S in the description of the language P(S).)

People who are serious about providing a computational, information-theoretic version of Occam's razor must specify (at least vaguely) what they believe the ideal programming language should be like.

I'm not a fan of Occam's razor, so the choice of language I would recommend is itself not consistent with Occam's razor. I would consider anything that wasn't specifically engineered to minimize the length of all of its most validated theories a very weak candidate. (This would have to be a language that revises itself in response to evidence and has a system of pointers that permits frequently referenced concepts to be reused with ease; keeping track of which language you ought to be using would be computationally intractable, because such languages are reflexive in complicated ways... but that's beyond the scope of the current conversation. I will discuss it later.) Most people pick something far more arbitrary. Actually, most people avoid picking anything at all, which is worse than picking something arbitrary.

But of the people I have read who do put some effort into describing which programming language they would use as the basis of their version of Occam's razor, the most popular choice tends to be binary lambda calculus. Actually, if you are truly a proponent of Occam's razor, this is pretty much the right choice, since it is probably very close to the simplest programming language that can be devised. It interprets itself very easily (requiring few bits to describe itself), which, if you truly have strong Occam priors, ought to be a requirement for any self-consistent basis for your theory. Of course, anyone can devise a programming language that interprets itself with a single-bit instruction, which would make such languages simpler in one sense. However, any of those languages is going to be much more challenging to implement in any other language than binary lambda calculus is. Binary lambda calculus fares very well in most pairwise language comparisons: it's very difficult to come up with other programming languages that compile each other with as little difficulty as most programming languages can compile binary lambda calculus.

One of the arguments I would make against Occam's razor is that many programming languages more complicated than binary lambda calculus are practically guaranteed to outperform binary lambda calculus as the best language to use for measuring length. In particular, binary lambda calculus does not have a good enough system of pointers to make the explanation "someone carried the box" simpler than "the box teleported itself." Teleportation is practically guaranteed to be the simplest explanation for motion in this sort of system, by many, many orders of magnitude: simply updating the position stored for the object is always far easier than specifying that somebody or something else moved it, and the degree to which this explanation is likely to be simpler overwhelms the available evidence. It is likely to be simpler by thousands of bits, setting the prior odds against any alternative at 2^1000 to 1 even in optimistic projections.
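
For anyone who hasn't seen the formulation spelled out: the information-theoretic razor weights each hypothesis by 2^-L, where L is its minimum description length in bits, so a gap of d bits between two hypotheses translates into prior odds of 2^d to 1 before any evidence arrives. A minimal sketch, with placeholder bit counts rather than measurements of any real encoding:

    # Length-weighted prior: weight each hypothesis by 2**(-bits), then normalize.
    # The bit counts here are placeholders chosen only for illustration.

    def length_prior(lengths_in_bits):
        weights = {h: 2.0 ** -bits for h, bits in lengths_in_bits.items()}
        total = sum(weights.values())
        return {h: w / total for h, w in weights.items()}

    print(length_prior({"the box teleported": 20, "someone carried the box": 45}))

    # The prior odds between two hypotheses depend only on the length difference,
    # so a 1000-bit gap means odds of 2**1000 to 1:
    print(2 ** 1000)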

In short, I think there are very strong information-theoretic counterarguments to the validity of information-theoretic formulations of Occam's razor, and I think the theory deserves to be refuted on those grounds, but I also think there is an even stronger logical refutation of Occam's razor that deals with "true statements are true" sorts of tautologies.

In particular:

The simplest explanation is always the simplest.
Equivalent theories are always equivalent.
The most persuasive explanation is always the most persuasive.
The most probable explanation (given available data) is always the most probable explanation (given available data).
Reflexively self-consistent explanations are always reflexively self-consistent.
The most accurate explanation is always the most accurate.
The epistemology that produces the most predictive priors is the epistemology that produces the most predictive priors.
Self-improving epistemologies are self-improving; whereas static epistemologies are static.
Etc.

We can model all of these things separately and see that each contains some elements of consistency with Occam's razor and some elements of inconsistency with it -- the most damning of which is that considerations about persuasiveness and reflexive self-consistency would lead us to realize that statements like Occam's razor would be likely to be believed even if they had no merit.

We can even create an epistemology that (assuming the basic accuracy of statistics) is guaranteed to be asymptotically equivalent to Occam's razor if and only if no better alternative exists.

This brings me to my final note, which is that most epistemologies that include Occam's razor plus another theory are themselves inconsistent with Occam's razor. It is possible to devise a system of priors from the assumption that probability theory is more-or-less correct, in which case you have a formal system that takes only probability theory as its axioms, and this system is itself information-theoretically simpler than a system which also has axioms for weighting simplicity.

Some of these considerations may be a little abstruse, but most of them should be relatively straightforward to anyone with the mathematical background required to understand the information-theoretic formulation of Occam's razor to begin with.

Thursday, April 2, 2015

Where does the time go?

My mind has just been blank the last few days, much more blank than I ever recall it being. I've edited To Change the World (the book I just finished writing), done a little reading and less writing, started exercising again, seen my family, gotten in contact with a few people I've needed to contact, begun a little work getting back into the swing of programming, and caught up with a few things I was putting off while I made the final surge on my book. It doesn't sound like nearly as paltry a list as it is when I list it out like that, but that's most of what I did over the past three days, and I don't even know what I thought about in between. My mind has just been blank, and I've had a growing sense of worry building up in the back of my head the past couple of days.

I guess I've been looking at profiles for literary agents and reading about what I should do in my next steps and started thinking about my job search quite a bit. When I start to worry about something, time just evaporates. I feel tired, but I don't feel like I've been up long today. I felt exhausted yesterday, and again felt exhausted without having felt like the day had existed. I think anxiety messes with my sense of time a lot more than I used to think it did. Hopefully, I can overcome that.

I don't know what to do about my job search. I really don't know what sort of career I'd like to have. Some part of me wants to go back into developing software. It's just an easy way to do something where I know I have a lot to contribute, and would do well... especially if I can avoid becoming anxious, which I should be able to do. Another part of me remembers that I've just come from a recent experience that I don't want to go back to. I was too isolated at my last job for sure. Part of the problem was that I was working from home. Sometimes, I truly love writing code. Sometimes, I think it's one of the most fun things I've ever done. (Writing, painting, reading, and playing music are also up there with it for sure.) I pretty much quit playing computer games when I started learning to write code, because it was more enthralling in a similar way. It's puzzles to be solved and progress to be made in a highly imaginative medium, but I also burnt out really hard when I did eventually burn out.

I don't know. I'm halfway tempted to think that I should go back into programming and start taking medication if I do start to burn out again. If I can find a job in an office with interesting people, programming in a language that I like (preferably Python or Haskell) working on something that interests me (algorithms, possibly some design and UI), I think I would enjoy it a lot more than what I've been doing. It's just, I don't know where to find that.

On the other hand, if I can get a job doing consulting, spending most of my day interacting with other people, doing research, and writing up the results of my research, I think I'd have a lot of energy to spend on programming when I get home. I want to work on a project that I can keep developing for years, and I just don't think I have much hope of maintaining my enthusiasm for development if I do it at work and then do it again on my own time. I'm a complete variety junkie.

I suppose my real options are these: either I get a job where I code by day and then come home and write nights and weekends, or I find a job where I write and research during the day and then come home and code nights and weekends.

That should solve my problem for me. I ought to be able to have a better career doing coding than I would as a consultant, and I would expect to have more success as a writer on my own than I would as a developer on my own. Either way though, I have to scrap a ton of what I want to do with my life.

That's my biggest problem. I want to do way too much.

I always feel like I'm wasting my time because I always feel as if there is something more important that I ought to be doing, if I could simply figure out how to be doing all of it.

I need to be doing things like this just to keep my sanity. If I weren't writing through my thought process, it would keep spinning out in all sorts of irrelevant directions. I actually accomplish far more when I let myself indulge in a few minutes' worth of stream-of-consciousness writing each day than when I try to organize my thoughts some other way, so I need to keep doing that. I also accomplish more when I go for a run (and I'm way happier, too), so long as I keep my runs pretty short, so I need to do that again. But then those things start to feel like a waste of time as soon as I have something else going that seems productive. It's a catch-22 that I just need to accustom myself to, I guess.

But then there are the projects that I really wish I could spend my whole life working on... I really wish I could divide myself into multiple copies so that I could spend a lifetime writing songs, another lifetime writing books, another lifetime painting, and another lifetime writing code. Maybe more than one on a couple of those.

I have so many thoughts in my head that feel like they're screaming to get out. Designs related to programming languages and AI, images that I really want to see, songs that are half-completed, unfinished books for which I have a chapter or two and an outline. Something's got to give. I know I need to abandon my music... but it hurts. I probably should abandon my art, but that also hurts... I didn't bring my painting supplies home with me because I knew they would distract me from my writing if I had them, and I miss them. (I went a little while without a piano when I moved back out to Chicago, and that was also really hard for me. It wasn't the hardest time I've had when trying to give up piano, but it was close. I actually became depressed one of the times I tried to quit, and I've never even made much of an attempt to give up singing. In some ways I want to; in others, I'd rather just die. This is just such a part of me.)

I should probably just take a day sometime to let myself cry, and face the fact that I can't keep these things in my life as much as I'd like to. I need to grow up, and growing up is... fucking stupid. Seriously, pretty much all I want in life is to have the freedom to split my time between four productive activities that I've loved. (Ok, I'd also like to spend some time researching and learning, and other time socializing and being in relationships with other people, and a little bit exercising and showering and eating.) But the chances of that happening any time soon are practically zero.

Whatever. I need to go to bed tonight, and tomorrow, I need to come to terms with what I'm doing in this world.

Monday, March 30, 2015

It is finished!

I just finished writing my first full-length book. It's just over 61,000 words. I did quite a bit of editing as I was writing, mainly when I hit writer's block with respect to new material. About half of it is edited close to completion (I think), but I might have to do some more editing on the other portions. It's non-fiction, so I think most of the cutting was pretty obvious. There were occasional sections that felt clunky, which I decided fairly quickly couldn't work. (I don't even want to know how many words I have written in the various drafts and previous attempts I've made at writing this book... more than 61,000, though.)

The feeling of accomplishment has been arriving slowly. At first it felt really anti-climactic. For about a half hour, I felt less of a sense of completion than I typically do after finishing a blog post or an essay. But for the last fifteen minutes, the realization that I actually finished writing what I'd set out to write has finally been setting in.

The prose itself probably isn't the most elegant I've ever written. It's actually much easier to write elegant prose in fiction and platitude-style philosophy than it is with heavier subject matter. But the content was the most ambitious subject matter I've ever written about, and probably the most ambitious I ever will write about. It's all very applied, specific, and object-level. Unlike this blog post... I doubt I will ever write anything that applied and specific again. It's just the one book I had to write to give myself an internal license to focus my future on the more theoretical, abstract, meta-level stuff I prefer to think and write about.

I'm too close to it right now to really re-read it and think about it. But I do plan to review my own book on my book review blog eventually. :)

What's the point of becoming a literary critic if you're not going to critique your own work?

And after dinner tonight, I'm going to smoke a cigar.

Then tomorrow, I suppose, I start looking for an agent, looking for a job, and working on the coding project that should be my primary companion until I figure out what city I'll be living and working in... at which point I should also begin seeking out a social life again. And possibly begin building one that's a bit more digital than my social life has been in the past.

Yippee!

 -----

I got called away to dinner while I was writing this, so I've now spent quite a bit of time eating, smoking, going for a walk, relaxing and otherwise celebrating being done with my project. I should resume my book-reviewing in a little more force tomorrow or the next day, since I don't have much else to do other than send a few emails, update my resume a little, and submit a couple of applications.

RescueTime does not work nearly as well as I had hoped. I don't think it does a good job of picking up on time when my only interaction with the computer is through the mouse. It wasn't working at all with the browser the first few days (Chrome on Ubuntu). I was running the full program as well as the browser extension, with the box checked in the extension to say I was running the full program. When I unchecked that box and ran only the browser extension without the native program, it started picking up more, but it tells me that I'm only at my computer about two hours a day... which is simply not true. (Also, if I trusted its estimates of how long I spend writing, they would imply that I write a ridiculously high number of words per minute, well over 100 some days, which I know is false. When I'm writing more or less stream of consciousness, as I am right now, my progress is less than 40 words per minute, and my ordinary writing is a lot slower.)
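
Just to spell out the arithmetic I'm doing in my head there (with made-up numbers in roughly the right ballpark): divide the words I know I produced by the writing time the tracker credits me with, and the implied rate comes out faster than anything I can actually sustain, which is how I know the tracked time is an undercount.

    # Back-of-the-envelope sanity check on the tracker's numbers.
    # All figures below are hypothetical, just to show the calculation.
    words_written = 3000           # words I know I produced in a day
    tracked_writing_minutes = 25   # writing time the tracker credits me with
    realistic_wpm = 40             # my actual stream-of-consciousness pace, at best

    implied_wpm = words_written / tracked_writing_minutes
    print(f"implied rate: {implied_wpm:.0f} words per minute")              # 120, implausibly fast

    implied_minutes = words_written / realistic_wpm
    print(f"time those words actually imply: {implied_minutes:.0f} minutes")  # 75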

On the Beeminder front, my habits have changed in ways that I did not anticipate when I started using it. I've started skipping breakfast and skipping showers a lot more frequently (though never two days in a row for showers) so that I can keep my time down. While this technically keeps my total time spent on these activities well below my target amounts, it's not really in the spirit of what I meant to do. I'll get up some mornings and think that if I'm not going to take a long shower, I might as well not shower at all, because rushing it kind of kills the enjoyment of the whole experience for me.

This is why I don't typically like making goals. They seem like such a good idea at the time, but then I often regret making them. When I did a no-added-sugar January a year ago, that was a good goal. Eating sugar again after a month of abstinence actually produced a high that was way more intense than I would ever have anticipated. A couple weeks into the fast, I pretty much got used to it. I was only able to do that because I was living alone, so I could avoid having any added sugar in the apartment. There's no way I could have managed on willpower alone.

But even that had some negative consequences. I lost some weight and lost some enthusiasm for eating as a result of it, and that wasn't particularly healthy for me.

Pushing towards the goal of being able to run from Hyde Park to downtown and back without stopping was another good goal, with pretty much only positive consequences. I can't think of any negative side effects of that one. (Other than not being able to walk much the day after I actually did that run. I really didn't think I would make it the last couple of miles on the return.) Spring's coming, so I should probably start running again.

So basically, I'm not feeling too much love for Beeminder. It might be great for people who are really good at setting the right goals, if having some mechanism to hold them to the letter of those goals is all they need to preserve their motivation, but I think the spirit of the thing is easier to keep when you aren't aiming to keep the letter of the law.

Also, you really have to think about exactly what you want your goal to be. I wanted to waste less time over breakfast (which, when I really think about what was causing me to spend an absurdly long time at the table, basically meant not reading the newspaper while I eat). But an appropriate time goal for when I'm eating alone is very different from one that's appropriate when I'm eating with other people and talking, so again, if I'm aiming to keep the goal to the letter, it interferes with my life in dumb ways.

I'm not good at making Beeminder-style goals, so I just don't think Beeminder is the tool for me. I like the concept. It just doesn't fit me.

I love the concept of RescueTime. It just doesn't work.

So both of those experiments are somewhat of a failure. Alas and alack, and oh well.

Thursday, March 19, 2015

Goals

I need to do a better job of setting goals for myself and keeping them. I've decided to start using Beeminder to keep track of a few things that I do while not at my computer, and RescueTime to help me track what I do do at my computer. Beeminder says that most of the goals people set with it are "do more" type goals. I should probably make a few more of those, but the natural tendencies I feel the most need to combat tend to fall in the "do less" category. (Actually, "measure more" might be a better description of what I should be doing than "do less," since simply keeping track of how much I do of a given activity seems to help me dramatically cut back on it. But I'm measuring for the sake of doing less.) Unfortunately, I think Beeminder defaults to saying that I did zero of something if I don't enter data, so the easiest way for me to cheat on "do less" goals is simply to stop entering data. That might be one of the reasons the other type of goal tends to be more popular, or at least it might explain differences in repeat use. People are probably more likely to keep tracking their goals if failing to track them results in presumed failure rather than presumed success.
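
To make that asymmetry concrete (this is just a sketch of how I understand the default behavior, not a description of Beeminder's actual internals): treat a day with no entry as zero and look at what that does to a "do less" goal versus a "do more" goal.

    # Sketch of the missing-data asymmetry as I understand it (hypothetical
    # numbers, not Beeminder's real logic). Assume a skipped day counts as zero.

    def total_with_gaps(entries):
        """Sum the logged amounts, counting days with no entry (None) as zero."""
        return sum(x if x is not None else 0 for x in entries)

    week = [30, None, 45, None, None, 20, None]  # minutes logged; None = no data entered

    # "Do less" goal: stay under a weekly cap. Not entering data can only help me,
    # even if the blank days were really an hour each.
    cap = 120
    print("do-less goal kept?", total_with_gaps(week) <= cap)       # True

    # "Do more" goal: hit a weekly minimum. Not entering data can only hurt me,
    # so I have an incentive to keep logging honestly.
    minimum = 150
    print("do-more goal kept?", total_with_gaps(week) >= minimum)   # False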

I want to go to bed at midnight, get up at six, and get eight hours of sleep a night. I know that this is impossible (midnight to six is only six hours), but each of these goals seems plausible to me on its own. Six seems like the right time to get up, midnight seems like the right time to go to bed, and eight hours seems like the right amount of sleep (and of the three, getting enough sleep will probably have the biggest impact on my long-term quality of life).

This is one of the reasons I like "do less" goals. "Spend less time sleeping" sounds like a really bad goal, but I think a lot of "do more" goals implicitly include something like it. The hard part about setting better priorities is figuring out what to cut, just like the hard part of budgeting is figuring out what not to buy. But sometimes something is obviously extravagant. I spend an extravagant amount of time showering and eating breakfast right now. I'm unemployed at the moment (working on my own projects), so, for now, I don't have enough pressure to finish things quickly in the mornings unless I keep track of exactly what I'm doing. However, my tendency to take longer-than-necessary showers and to spend far longer than necessary eating breakfast began at my last job. I ate breakfast at my desk, which was probably a bad idea, because multitasking actually reduces efficiency, but in my last few months before I quit, improving efficiency at my desk ceased to be something I was trying to do. The worst part was that I started taking showers after I got home from work, because starting work on time without getting up earlier than necessary conflicted with spending an extravagant amount of time in the shower. Those are my first two Beeminder goals. I'm going to wait until I've gathered more data from RescueTime to figure out what my goals should be with regard to the time I spend at my computer.

I don't really have a good sense of how much time I spend doing what while at my computer. Once I have a better feel for that, hopefully I will see that a few particular uses of my time are over-represented relative to what I would consider optimal. Most of the ways in which I use the computer are ways I would consider productive, just as showering and eating breakfast are things I would consider productive. I basically use it to read, to write, and occasionally to code, all of which are things I think I should be doing. I would consider reading the news to be one of the least productive forms of reading I can do, unless I am specifically reading with an eye for figuring out what makes some particular journalist appealing to a general audience (e.g. reading Mitch Albom back when he was still a sports columnist would have been productive reading for any aspiring writer). I never read the news this way, so it's unlikely to do me much good. What benefit do I derive from following political scandals? I suppose if I were commentating on them, I could argue that I'm doing research, but I'm not doing that either.

In contrast, reading smart people making informed judgments about the world seems like a very productive use of my time, especially if they use statistics to do it. Similarly, reading people who write well seems like a good use of my time (since I am aspiring to become a writer). If I had to explain what is wrong with Kant in a single sentence, I would say that Kant was a philosopher who spent more time writing down his own ideas than he spent reading other people's. Reality doesn't actually work the way Kant imagined it, so his philosophy is irrelevant at best and damaging in the usual case. People learn by practice, but people also learn by watching, listening, and changing their minds. What you learn by practice is relevant only to the extent that what you've learned by watching, listening, and changing your mind has led you in a good direction.

That said, I think the best learning is also oriented. The world holds far more information than anyone can learn in one lifetime, and the amount of information in it is growing exponentially. Because of that growth, even achieving immortality wouldn't solve the learning problem. I can learn Mandarin (I've spent a few hours starting to do so), but is learning Mandarin actually a good use of my time? Probably not. There's practically nothing that I am well-suited to do in my current state that I would become better-suited to do simply by speaking Mandarin fluently. And speaking Mandarin fluently isn't a useful skill in isolation, either. If I find myself in a business role that has me traveling to China (or Singapore!) regularly, it probably will make sense to resume studying Mandarin. I would somewhat like to do that... but not so strongly that I should put a major emphasis on studying it right now. So that's a useless thing for me to learn (at present) that I have (correctly) quit learning (for now).

I'm also really interested in biology and somewhat interested in nutrition and medicine. Are these useful subjects for me to continue reading about? At present, the answer is also "no." They would be useful subjects to read about if I were doing something with that information. I'd like to eventually write about biology. And if I want to write about biology eventually, I should probably start by writing about it before I have anything particular I'm trying to say, just reading and reacting to what I read. Doing so would let me orient my learning and practice what I want to be doing, in a way that builds a relevant skill.

Similarly, I've always been fairly interested in business, finance, and investing. If I'm going to spend time reading about finance and investing, I think I also need to spend more time writing about it and linking back to what I have read.

I'm thinking of splitting my blog into several more topics so that I can better keep track of what I'm doing, and keep my thoughts sorted into neater categories, if I start taking this approach. Eventually, I'm also going to want to write things that increase my ability to draw an audience to my writing. Hopefully, I can start to do that by the end of this year, but I don't want to put energy into that particular endeavor until I've got something finished that I want to be drawing an audience towards. In particular, I want to finish writing the book that I've set as my highest priority, before I begin seeking to draw an audience to my writing (by posting on forums and online magazines that let me direct an audience back to my other writings).

So I'm going to orient my endeavors. For now, I think I should down-regulate my interest in biology. I won't resume that interest until I've made substantial progress in the other areas I'm discussing below.

Becoming a good investor seems like the best path to long-term success in this world. I think I should maintain and probably even increase my interest in that. If I do, it should also become a good topic for me to write about, which means I should create a new blog devoted to investing, where I keep track of my thoughts on investment ideas, specific and general (and constantly remind any hypothetical readers that no matter how much my opinions sound like advice, they are simply opinions and not investment advice).

Another interest I should probably maintain is my interest in coding, because, honestly, I'll probably have to get a job in software development again once I finish writing my book. It's what I'm most qualified to do at the moment, and it's a career path for which there is also a lot of demand. I might or might not have anything to say about that before I do re-enter the workforce. Once I do, I want to start posting to Bad Code Considered Harmful again, at least twice a month.

I think I should also continue to read and review books. In fact, I should probably make reviewing books that I've already read a higher priority than it is now. I'll plan to write one book review today, and to make maintaining that blog a higher priority than maintaining Informed Dissent, at least while I can still think of books that I really ought to review. (Most of my books are unfortunately in storage in Chicago, and I'm living with my dad in Pennsylvania, so I might need to hold off on reviewing some of them until I'm reunited with them, but I've read a lot of my dad's books and most of the books that I have with me, so I should be able to write at least fifty reviews before I run out of books on hand.)

Informed Dissent is going to be where I keep track of everything that doesn't fit well into other broad categories of life and thoughts. It's going to mostly deal with behavior optimization, writing about writing, everyday life, plans, and responses to things that I read on the internet that don't fit into any of my other categories. It's going to be about those things because that's what I naturally do write about a lot of the time, not necessarily because it's what I should write about. But I do think that most of those topics are worthy of some focus, so I don't feel the need to get rid of them. By themselves, none of them are particularly useful. Writing should for the most part be about something worth saying -- writing about writing makes a lot more sense in the context of also writing other things well than it does when someone only writes about writing. Similarly, behavior optimization without using that behavioral optimization to accomplish other things makes practically no sense.

Finally, I need to finish my book (which I'm not discussing in more detail because I always get more nervous about writing it -- and my progress slows -- after I discuss it in detail). I did make progress on it yesterday. I didn't end up trying to write a bad version of each of the places I'm stuck. When I read one of those sections to refresh in my mind what I should be doing, something clicked just enough for me to continue working on the real version of that section. Hopefully, the same happens again today!

(And now, in keeping with my new tradition, I must edit this document.)

Wednesday, March 18, 2015

Inadequacy

I've been reading a lot of Slate Star Codex recently. It's fantastic, probably the best blog-like thing I've ever read. I have a fairly short list of blog-like things that I've been enthusiastic about at one point or another (enthusiastic enough to spend 100+ hours reading). In no particular order, they're Paul Graham's essays, the War Nerd, Unqualified Reservations, Less Wrong, and Overcoming Bias. I should go back and read more of Yvain's contributions to Less Wrong, because Slate Star Codex seems significantly better to me than the average material I remember reading on Less Wrong. (Yvain was a previous online identity of Scott Alexander S., Slate Star Codex's author, who keeps his last name semi-private though it's easy enough to find.)

A lot of today's best writing is being done on blogs, in my opinion. Much of what I've read on any of those blogs would be in the top quartile of the non-fiction that I've read in my life. For reference, I've read thirty recently published works of non-fiction with four or more stars on Amazon in the past twelve months (all of which I plan to eventually review on my other blog...). Krug's Don't Make Me Think and Gladwell's Outliers would be two books I would put in the lower quartile of those thirty. Don Norman's The Design of Everyday Things and King's On Writing would be towards the top. Paul Ekman, Margaret Mead, and Dan Ariely would be in the middle.

Even so, reading a blog as good as Slate Star Codex makes me begin to feel very inadequate as a writer. I don't feel at all bad that that blog is much better than mine. All of the blogs I've mentioned, and a lot of the ones I've read a little of but without any real enthusiasm, are a lot better than this one. (I'm still trying to get the hang of what I'm doing here, and would be rather embarrassed if I had readers.) What makes me feel inadequate when I read Slate Star Codex is that some of it is better than my best writing... and it's a blog that gets updated all the time. Just like squid314 before it, and the many other venues Scott A. has contributed to over the years. Eliezer Yudkowsky wrote a lot of great essays in his two-year period of posting something every day, but he also wrote a lot of pieces that seemed like the work of someone who was posting every day for the sake of churn. Practically nothing on Slate Star Codex seems like it is written for the sake of churn, at least practically nothing I've read. I started with the archives, before I looked into the top posts. Even a lot of the everyday posts that he doesn't single out as being particularly good are particularly good, just maybe not quite as good as some of his other posts. So basically, Scott Alexander is someone who can sit down at his computer and effortlessly write long, clean posts that are as good as or better than my best writing (which in general doesn't show up on my blog... nor is it something I expect to become a staple of my blogging in the future, since the point, for me at least, is to write regularly).

Since I want to write, my takeaway from this discovery is not going to be any form of surrender. Scott Alexander is quite a bit older than me, and he has been writing daily or near-daily posts, many of which deal with recurring themes, for a lot longer than I have. Actually, even though comparing myself to him makes me feel inadequate, I consider the fact that I want to compare myself to him, and can see that I don't measure up, a positive sign. As Thiel discusses in Zero to One, one of the ways that people manipulate beliefs is to mess with the sets of comparisons. Successful organizations (while they tend to be highly focused) also tend to see themselves as contenders in a huge ocean. Google doesn't look at itself as a search giant; it looks at itself as a major player in the huge field of organizing information.

In contrast, organizations on their way to failure restrict their circles of comparison in a way that allows them to view themselves as being great at what they do. This is what I naturally do with music... I know several people who are better than I am at each musical skill I take pride in possessing (assuming I'm not tinkering with the definition of "musical skill" just so that I can be special), but I don't personally know anyone who improvises on the piano, writes music, plays classical music from every epoch on the piano, plays pop and rock music on the piano, and sings in a variety of styles all better than I do. When I want to compare myself to other amateur musicians, I've defined a very special niche for comparison which is designed to make me feel good about myself: someone is only better than me if they are better at every musical skill in which I happen to have an interest, and most people who have musical skills have a different set of interests to begin with... and to the extent that our interests overlap, they are better at certain things and worse at others. By choosing such a specific criterion as my basis of comparison, I'm implicitly admitting that I am not a great musician, and that I don't really have much hope of ever becoming one. I can admit this consciously without it registering emotionally.

Feelings of musical adequacy would be a bad sign for me if I wanted to become a great musician because I've taken a view of musical abilities which is designed to cater to my sense of pride, as opposed to one that motivates improvement. Monet was almost never satisfied with his own paintings. Jimi Hendrix was never satisfied with his guitar ability. Even when he was the best golfer in the world, Tiger Woods was never satisfied with his stroke. Nadal keeps tinkering with his serve. People who remain eternally amateurs, on the other hand, seem to find it much easier to take pride in what they can do. I'm reasonably satisfied with my musical ability. I had several friends in college who were extremely proud of their artistic abilities... and I myself have been extremely proud of them at various times. I became proud of my programming ability about six months after I started to code... and honestly, I probably haven't improved much since that happened. All of those forms of pride that I've seen in myself and others tend to rest more on finding a way to compare one's ability favorably to an unimportant category than they do in feeling oneself close to the end of a long journey towards mastery. Feelings of adequacy seem to come from and lead to persistent mediocrity, and are a bad sign for anyone who wants to excel at anything.

Feelings of inadequacy are not necessarily a good sign. I sucked at soccer when I was in high school. (I had little choice about whether I would play a sport during my freshman and sophomore years of high school, and I picked soccer because I would have felt more embarrassed to fail at a sport like tennis, which I had actually spent a lot of time practicing. Succeeding in sports was never really much of an option for me. I was the kid that got an exception to the "no strikeouts" rule in gym class, because, really, nobody was supposed to need it quite that badly, and other people needed to have an at-bat eventually.) So I worked really hard to appear to be trying at soccer practice (meaning, I would put effort into running and sometimes put some thought into determining whether I was running in the right general direction, but I never really got around to figuring out how running in general directions influenced what happened to the soccer ball). With sports, I never felt adequate. I also never felt that I could become adequate, and I was never really interested in improving. In this sort of case, feelings of inadequacy are a bad sign, an indicator of true failure.

Feelings of inadequacy are a good sign when they are accompanied by a desire to improve and a desire to exert effort and energy improving. They are an especially good sign when they are accompanied by a willingness to acknowledge that you need to change something.

Until I started reading Slate Star Codex, my plan for improving my writing ability was simply to write more. I'm sure that's one piece of what I need to do, but now I also realize that I need to write better. To some extent, this is obvious. I've been practicing writing with the intent to improve. That's my primary reason for writing these almost-daily posts. (I took a break because of feelings of inadequacy, which I'm coming back from now. I was also working on my book during that time and having some degree of writer's block. As time goes by, I hope to take fewer of these breaks. The everydayness of an exercise seems to matter more than the total amount of time devoted to it.) I've even devoted some of this practice to deliberately refining particular skills. In my last post, I avoided using "to be" verbs except in the title. I plan to repeat that exercise every once in a while. I also plan to remove the concept of "should" from my writing from time to time, and focus on making object-level descriptions of whatever I'm trying to talk about. I also spend some of my time writing stories, poetry, and humor pieces. I used to write far more humor pieces, but I am questioning whether I should continue to do so, because I'd like to avoid hyperbole and bombastic fluff, and I tend to gravitate towards those devices when I'm writing comedic prose.

Reading Slate Star Codex has given me other thoughts about ways I should go about trying to improve my writing. Scott Alexander proofreads and edits all of his posts, which is not something I've done with my daily churn. Like most of the other blogs I mentioned as being particularly good, he also tends to write about topics that involve linking to sources and related posts. Sometimes he does actual statistical research, and he often embeds pictures and quotes. He revisits topics regularly, corrects himself, and revises his thoughts. These are all things that I seldom do in my day-to-day writing, probably because they are all things that I think of, a little bit, as work.

I love reading. I love writing. I enjoy writing about what I've read, though not quite as much as I enjoy writing about whatever happens to be on my mind. I'm less enthusiastic about the idea of exerting the effort it takes to maintain the level of organization required to keep track of what I have read so that I can link to my sources. I enjoy proofreading other people's writing much more than I enjoy proofreading my own.

These are all things that I can do, and that I should do more if I want to become better at writing. Scott Alexander has used them to refine his ability, and I would like to write more like he does. I'm going to make more of an effort to do them (beginning by proofreading this post, and inserting one link... just to get a bit more used to it). I don't want to jump into this so headlong that writing begins to feel like a chore. I don't think I am likely to continue to improve at anything once I cease to have enthusiasm for doing it, but I do want to increase my use of good habits and to learn by making changes that improve my performance.

Another thing that Scott Alexander does differently from me, and something I will need to work on eventually, is that he writes a lot about his real life. I don't have much opportunity to do that at present, because my real life right now is writing. I quit my last job to focus on writing a few things that I think will lead to improved prospects for me in the future, and I will begin seriously looking for a job once I finish writing my book (which I'm almost done with). Hopefully, I have that done seven days from now... My original target was the end of January, but I missed that. I really don't have much remaining. I've already written everything for which I had a good idea, in words, of what I was trying to say before I started writing. Now I just have two or three topics, of maybe thirty paragraphs each, that deal with ideas I feel like I understand but have been struggling to put into words. Most of the rest is even proofread and edited, because I needed to be doing something while I was trying to figure out what to say there.

I might need to try writing a few bad drafts just to get the ideas more into words. I'll try that tactic today, and hopefully, I can turn those bad drafts into mediocre drafts tomorrow or a few days from now.

Thursday, March 12, 2015

How productive is thinking?

I spend a lot of time thinking. I'm beginning to wonder whether I could increase my productivity and my happiness by simply thinking less. I've actually wondered this on many occasions, or at least I vaguely recall having thought similar thoughts on many occasions. I have some collections of memories that seem dubious to me. Random deficiencies in whatever routine maintenance the brain performs on itself explain these classes of memories to my satisfaction... so I'll call them quasi-memories.

For example, I have a quasi-memory of thinking, as I drift off to sleep, the same "important thought" that I need to write down in the morning, and then remembering that I have thought this particular thought many times before, made the same mental note to write it down in the morning, and always forgotten it by morning. I wake up many mornings remembering this experience from the night before, but never remembering what particular thought I had meant to write down. So either I have some thought that only occurs once I reach the point where falling asleep seems more important than noting something I would get up and note if I had a little more energy (and I always react to it the same way, even though I recall reacting that way every time), or I have a routine false recollection that I wake up with many mornings. Given that sleep does weird things to the brain, I think the idea of a false recollection better explains the phenomenon than the idea that reaching the point of no return with regard to falling asleep consistently reproduces the same epiphany, which I can never remember at other times.

However, I do have certain thought patterns that only occur in response to certain stimuli. When I want to think of an "obvious place that I will certainly remember for storing something," a few particular locations (that I can never remember later when I look for what I stored) come to mind. I know that this situation routinely induces the same experience in me because, after I have searched for my tape measure or whatever for a few hours, given up, bought a new one, and decided to put away the new one someplace that will prevent me from ever having this problem again, I go to a place I clearly didn't consider searching when I wanted to find a tape measure, and see my old one sitting there... and then I realize I need to put my new one someplace else. This only happens in an apartment with no storage area and no garage. I would store my tools in those locations in a house (and indeed do when I live with family). "Under the bathroom sink" feels like the natural place to store a tool in an apartment, but it didn't feel like the natural place to look for a tool. Instead, I looked for it in all my kitchen cabinets and all my closets (including the bathroom linen closet), but never checked under the bathroom sink. This experience has happened to me several times (each with a different class of item), though after one such experience, I can update my memory to the fact that I store tools under the bathroom sink.

So today I need to pick the schema for my data in a new project, and I remember another routine epiphany that seems to come to me only when I'm designing a schema for my data. I don't know if I have quasi-memories of this epiphany or if I really do rethink this thought after I have spent several hours weighing my options, and just never make a sufficiently permanent note of it to keep me from repeating the same mistake, either in this context or in the much broader contexts to which I think it applies. I question whether I actually benefit at all from thinking about things, most of the time that I think about them. I want to find a mystical schema that somehow magically evaporates most of the other implementation problems I expect to encounter, but usually I have to solve a few problems in any given project that have opposing constraints. I can do one of my sub-projects more easily if I build the relationships one way, and I can do another more easily if I build them another way, and if I do it both ways, I'll have to make sure that data updates in multiple places, which always produces more trouble than it prevents. So basically, I have two decent options and no perfect options, and I spend a bunch of time looking for a perfect option. I devote a lot of thinking time to problems that I don't actually need to solve. The same pattern applies to other things I think about, not just designing schemas.
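
Here's the dilemma in miniature (hypothetical tables, not my actual project): store the relationship on one side and one sub-project gets easy, store it on the other side and a different sub-project gets easy, store it on both sides and every update has to touch two places or the copies quietly drift apart.

    # Miniature version of the schema trade-off (hypothetical data, not my real project).

    # Option A: each note records which notebook it belongs to.
    # Easy: "which notebook is this note in?"  Harder: listing a notebook's notes (full scan).
    notes_a = {
        "n1": {"text": "first note", "notebook": "ideas"},
        "n2": {"text": "second note", "notebook": "ideas"},
    }

    # Option B: each notebook records its notes.
    # Easy: listing a notebook's notes.  Harder: "which notebook is this note in?" (full scan).
    notebooks_b = {"ideas": ["n1", "n2"]}

    # Option C: store the relationship in both places.
    notes_c = {"n1": {"text": "first note", "notebook": "ideas"}}
    notebooks_c = {"ideas": ["n1"]}

    def move_note(note_id, new_notebook):
        """With the relationship duplicated, every move must update two structures in step."""
        old = notes_c[note_id]["notebook"]
        notebooks_c[old].remove(note_id)
        notebooks_c.setdefault(new_notebook, []).append(note_id)
        notes_c[note_id]["notebook"] = new_notebook  # forget any one of these three updates
                                                     # and the two copies silently disagree

    move_note("n1", "drafts")
    print(notes_c["n1"]["notebook"], notebooks_c)    # drafts {'ideas': [], 'drafts': ['n1']}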

When I worry, I usually worry because I'd like to solve a problem for which I lack some necessary piece of information. I know that if A, then I should do X, and if B, then I should do Y, but I don't know whether A or B holds, I don't know how to find out, and I need to take action soon or reap consequences that seem as bad as or worse than the consequences of making the wrong decision. In most of the cases I deal with regularly, I would benefit from taking an action as soon as I realize that I lack access to the information that would allow me to make a knowably correct decision, or as soon as I realize which trade-off prevents me from having a perfect option available. Instead, I think about it. By natural inclination, I value thoughtfulness. I naturally assume that thoughtfulness leads to more prudent decisions and that more prudent decisions lead to better outcomes, but many real circumstances prevent thoughtfulness from having the power to produce more prudent decisions. These circumstances probably occur more frequently than the ones where thoughtfulness does lead to prudence. In those circumstances, making a decision promptly will lead to better outcomes than pondering the decision carefully, because pondering simply wastes time. (Going back to the treachery of childhood: test questions always have a right solution. The contrived experiences of an education condition students to believe that thoughtfulness leads to better outcomes much more often than the messy conditions of reality actually permit.)

I wouldn't go so far as to say that running with the first idea will produce results comparable to stopping and thinking about decisions. But stopping and thinking doesn't require hours of thought. I can go through a few questions pretty quickly, and after five or ten minutes produce answers that, on average, lead to results as good as hours of deliberation, I think. What do I have? What do I want to accomplish? What do I know? What do I need to know? Can I learn it, or will I have to guess? What paths lead to what I want to accomplish? Can I combine the best elements of all of them, or do they have conflicting features that force a trade-off? None of these questions takes long to answer, and none of them gets solved much by remaining in abstract thought.

Questions like "does a more efficient algorithm exist for solving this problem?" do require a lot of thought, but seldom have much significance for most problems.