Sunday, May 24, 2020

Doublespeak

The doublespeak of American propaganda fascinates me. A significant portion of the American State Department's role is manufacturing propaganda; for more on this, see Noam Chomsky's Manufacturing Consent. The official American narrative about China and the Middle East is a particularly striking example of extreme doublespeak.

I'm mainly writing this today as a way to keep track of three links that I wish to comment upon further.

First link:


A few broad points:
  1. The U.S. State Department publicly created modern Islamic terrorism in the 1980s (presumably after years of groundwork laid in secret during the 1970s) in its effort to combat the Soviet Union, particularly in Afghanistan. The United States backed the Taliban and gave it weapons and training. Many of the revolutionary jihadist activities of the past decade have been pro-democracy. Trump's attempt to combat ISIS succeeded because he acknowledged that the revolutionary jihadist organizations the U.S. has labeled "pro-democracy allies" are strongly allied with the revolutionary jihadist organization we have labeled a "terrorist organization." To the extent that modern Islamic terrorism is a coherent concept, it was unambiguously created by the U.S. State Department, which has publicly armed and funded it from 1980 to the present as a means of combating first Soviet and later Russian regional interests, and of maintaining global instability with its corresponding need for a military superpower to maintain a global military presence. That some of these factions also hate the United States is not altogether surprising, but characterizing the modern form of jihadism as "anti-democratic" activity is absurd and false. Most of the jihadist movements of the last fifteen years, including all of the successful ones, have either been pro-democracy movements backed by the U.S. or allied with another movement that was. The United States was backing ISIS's allies in Syria before Iran was.
  2. Discussions of Chinese, Russian, and Iranian propaganda related to the coronavirus epidemic and to interference in U.S. elections are ridiculous. Social engineering schemes involving U.S. social media platforms are an enormous business in Russia and Eastern Europe. No matter who's paying for it to be done, it would most likely be done from Russia. So far, the investigations into why the Russians did it have only established that Trump's team did not actively conspire with the Russians who engaged in these schemes. They have not proved that Putin was connected to them, or that they were not funded by any of Trump's wealthy supporters or super PACs or any of the other arteries of corruption in the U.S. election system. The inference that "because this hacking and social engineering took place in Russia, it happened at the behest of the Russian government" is objectively false. The Russian government is not conspiring to give vain American teenagers who want to be the next Kylie Jenner and happen to have rich fathers 20,000 followers on Twitter. On the other hand, Iran and the United States have had an actively hostile relationship for the entire existence of Iran's current regime. The strongest evidence I can give that the coronavirus has ever been as dangerous as it is presented to be is that it ravaged Iran's government, and it did so while the regime was actively reeling from having its most important member strategically executed by the U.S. government. It makes sense that Iran would believe the coronavirus is another weapon targeting them.
  3. Some people in China, including some officials of its government, have parroted some of the claims that the Iranian government is making. These are not the official claims of the Chinese government. The existence of a few conspiracy theorists is not the same thing as proof of a conspiracy, and there are a huge number of people in the Chinese government. Furthermore, Mike Pompeo is saying very similar things about China to what these conspiracy theorists are saying about the United States, and Mike Pompeo does speak for the United States on matters of foreign affairs and foreign policy. The United States is guilty of the very thing this paper is falsely accusing the Chinese government of doing.
  4. The Trump administration is far more at fault for the current epidemic than the government of China is. The coronavirus started in Wuhan when pork prices in China were extremely elevated and hundreds of millions of lower-middle-class people were suddenly more interested in buying exotic meat, which gave new strains of the virus the chance to jump from animals to people. This happened because Trump's trade war disrupted global supply chains.
  5. At the time China was suppressing information about the coronavirus, it had good reason to believe that the harm from panic would outweigh the good that prevention would do, and good reason to believe that Western governments would seek to create panic. People have died from the lockdowns. People have died from the panic. These lockdowns have ruined the livelihoods of hundreds of millions of poor people. It's still not clear whether any of the panic and lockdowns have saved anybody's life in Western countries. China's response unambiguously saved lives. It controlled the disease effectively as soon as it had good reason to believe the disease was serious, and to the extent that anybody actually suppressed information, rather than simply reaching the reasonable and usually correct conclusion that most new diseases are better ignored than addressed, those people have been held accountable. Beijing's response to the coronavirus has been entirely appropriate. There's only one country that was actively testing everybody at its borders for coronavirus since before it had an epidemic on its own hands: Iceland. Its data said the same thing that the national data on infection rates and death rates say. East Asian countries responded appropriately and effectively to the disease and greatly mitigated its spread. This became a global health crisis as soon as the disease made it to Central and Western Europe. Those are the countries that incubated the disease and spread it to the world.
  6. I don't know how anybody takes an objective look at this data and says that the coronavirus is evidence that we need to be doing all we can to bolster Western-style democracy. East Asian approaches to government are starting to look better every day. The West has been failing its citizens, plunging itself and the rest of the world into wars of ideological colonialism, and exporting these militant ideologies to the rest of the world (including, particularly, Japan during the Meiji Restoration and China during its political turmoil of the first sixty years of the 20th century) since 1618 at the latest. (It's been unambiguously true since the start of the Thirty Years' War. I'd argue it became true in 1512, but I concede that the argument that this all began in 1096 has merit.) The fact that coronavirus has been disastrous in the West and in a few random totalitarian regimes without being disastrous anywhere else is nothing new. This is just the first time we have had a truly global crisis that made it possible to compare the efficacy of various regimes to each other in the information era. Coronavirus beat China's engines of propaganda, and it's also beating them in the West. In doing so, it's showing us exactly how much of a lie each regime has always been telling. (This is also all a priori obvious to anyone who thinks about it in the abstract for a few minutes. Of all the forms of government, democracy is the one most reliant on propaganda to ensure continuity, because it is the one most susceptible to radical change whenever popular ideas change. The rulers of a democratic regime only maintain power by controlling ideology.)

Second link:


This is doublespeak at its finest. It creates a term, "truth decay," which just means that people's trust in the establishment is eroding. It presents literally zero evidence that any of this erosion of trust is unwarranted. In fact, it subtly makes the case that the erosion is warranted: "truth decay" is happening because people have easier access to information. Hmm. Maybe people are losing trust in the establishment because its "facts" aren't always true.

Does it tell the truth about the safety and dangers of drugs like cannabis and LSD compared to drugs like alcohol?
Do its agencies do what they claim to do? For instance, does the FDA actually promote health, or does it primarily bolster the special interests of powerful lobbies? What about the EPA? What about the DOJ? How about the IRS?
Does it tell the truth about other countries?
Does it enforce the law in a way that protects oppressed people like impoverished minorities and victims of rape and domestic violence?
Is any aspect of its prison system optimized for actually preventing crime, or does it have a prison system that is optimized for transforming non-violent drug users into violent criminals?
Is it actively promoting democracy and inclusion or is it actively seeking to disenfranchise the people it oppresses?
Is it respecting the rights that it guarantees to its citizens, or does it actively spy on them in ways that explicitly violate the rights it promises them?

From where I'm sitting, it looks like the erosion of trust in the American establishment has more to do with people beginning to learn some things that are true than with truth decaying. This also happens to be what I would expect a priori if people are given increased access to information while living in a dishonest and oppressive society.

Third link:


This is the most staggering example of doublespeak I have ever seen. What it says is that Xi's plan for China is to upend the global order by building win-win trade deals with everyone, replacing the alliances that the United States has built up through military coercion. It cites extensive evidence that this is China's actual plan, and it makes clear that the authors are terrified it will work. It also makes clear that they have no doubt about the accuracy of characterizing the current world order as primarily a militaristic one, based on the extreme dominance of the United States' military and on the rivalries it has created with its chief enemies (particularly Russia and the former USSR) to maintain global reliance on the U.S. military. (See also U.S. military spending.)

Somehow it tries to make the case that the United States is in the right and Xi's plan is a threat to global stability.

Friday, May 22, 2020

What's your null hypothesis?

You know that one of your friends is reading a book that you lent him late last week, and that he usually returns the books you lend him after about three weeks. Today is Saturday, and you know you yourself typically do most of your reading on the weekends, and that you and your friend have similar schedules and obligations. You thought the book you lent him was a good book, but no more gripping than the typical book you lend your friend, and it's about average length. You read it in three days. What chance should you estimate that he already finished reading the book? What chance should you estimate that he finishes it this week? What chance should you estimate that he reads more than 20 pages today? What chance should you estimate that he'll read this week at all?

Is your null hypothesis that he didn't read today or that he did?

The concept of privileging a null hypothesis seems odd to me if it's not the thing that would be our default assumption to begin with. Things have effects. People and animals do things, and our behaviors are affected by our environments and by the things we eat, etc. (Like the paint that is chipping off the walls, yum!) Why do we privilege the hypothesis that this doesn't happen?

Also, why would we perform experiments without expecting them to have a particular effect? Shouldn't we base our experiments on our beliefs about the world and look to prove or disprove those beliefs based on whether we find the effects we expected?

By the way, I think the default null hypotheses that people would have for the questions I asked at the beginning differ depending on how each question is phrased, and they often conflict with one another. Most people would consider reading being evenly distributed over the week a better null hypothesis than reading being concentrated on the weekend. But the default null hypothesis for how long it will take him to finish the book would probably be three weeks, given the background I gave. And the default null hypothesis for whether he read today would probably be "no" rather than "yes."
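To make the conflict concrete, here is a minimal sketch (my own illustration, not part of the original post) that encodes the two competing defaults with made-up numbers: a 300-page book, a three-week return habit, and a guess that about 70% of reading happens on the two weekend days. None of these figures is from the post; they only show how the choice of "null" changes the answers.

```python
# Toy sketch of two competing "null" models for the book question.
# All numbers are illustrative assumptions, not data from the post.

PAGES = 300        # assumed length of an "about average length" book
RETURN_DAYS = 21   # he usually returns books after about three weeks

# Model A: reading is spread evenly over every day of the week.
uniform_pages_per_day = PAGES / RETURN_DAYS            # ~14 pages/day

# Model B: ~70% of each week's reading happens on the two weekend days
# (one way to encode "you do most of your reading on the weekends").
weekly_pages = PAGES * 7 / RETURN_DAYS                  # ~100 pages/week
weekend_pages_per_day = 0.7 * weekly_pages / 2          # ~35 pages/day
weekday_pages_per_day = 0.3 * weekly_pages / 5          # ~6 pages/day

print("Expected pages read today (a Saturday):")
print("  Model A (uniform):          %.1f" % uniform_pages_per_day)
print("  Model B (weekend-weighted): %.1f" % weekend_pages_per_day)

# Under Model A, "reads more than 20 pages today" looks like a coin flip at best;
# under Model B, it looks likely. Same facts, different default, different answer.
```

The point isn't that either model is right; it's that whichever one you treat as the "null" quietly decides which of the conflicting answers you give.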

Identities Part 1 (Why fictitious identities are important)

I had a very unhappy childhood. Until I started talking openly about it, and in particular, until I started telling people that I thought my mother abused me badly and that the way she abused me left me with several severe personality and mood disorders by the time I entered adolescence, I hated being myself and felt the need to invent somebody else I could be instead.

I'll write more about those subjects on this blog in the days ahead.

This post is a record of my thought process towards the end of my period of severe identity dysphoria. I've also learned how to trust people since I wrote this. I had good reason to start trusting people as of 2011, but I couldn't really start doing it until I started to be happy.

The entirety of this post can be summarized as me saying, at the time I wrote it, I hated being myself, and I didn't trust anyone else. In the years since, I've become incredibly happy, and learned to live my life in a manner that is radically built on trust.

I'm publishing this only as one of the more benign examples of the thought process that I had as a young adult, which was significantly more benign than the thought process I had as a teenager.

I was also more conservative five years ago. I had a lot of ideas that were left of center that I felt somewhat comfortable having ascribed to me, and several strongly held ideas that were right of center that I wanted to hold anonymously. I only used the liberal ideas as examples when I was describing reasons that I might want to have alter egos and maintain privacy in my communication. Fortunately, I never got around to posting any of my excessively right-wing ideas. In general, I'm much more scared of being on the bad list of people who are more liberal than me than of people who are more conservative than me. Progressive people, in general, know how to use the internet, and have built companies and cultures from which I would not like to have been completely excluded. Staunch conservatives just have money and connections to established power. I'm not afraid of them.

I also think some of the fear was really guilt. I thought I believed some things that I really didn't believe. I knew at some level that there was something deeply wrong with those beliefs, but I hadn't figured it out yet. I've despised mainstream liberalism my whole life. I've despised mainstream conservatism ever since I've been old enough to think for myself. Until Donald Trump, I continued to hate mainstream liberalism more than I hated mainstream conservatism. (I'm mostly amused that that Donald chump became president. I love being able to tell everybody that I think the President of the United States is a turd covered in a spray-on tan and having the vast majority of people agree with me. He's the worst person who has been elected president in more than sixty years, but he's not that much worse of a person than the average president, and he has been a much better president than George Bush II was, for what that's worth. I mean, look at the difference between their track records in the Middle East, and tell me you'd think George II would have handled coronavirus any better.) Back when I didn't yet despise mainstream conservatism enough, I wanted to believe that sometimes the Republicans were less wrong than the Democrats. Now I'm pretty happy to believe that both of them are entirely wrong about everything, and that the truth lies pretty far left of mainstream politics on practically everything.


My original post is below.
--------------------------------------------------------------------------------------------------------------------------



I'm beginning to think that every good citizen of the information age should have at least one fictitious identity. I haven't yet constructed any, but I'm planning to do so reasonably soon. The uses of these identities are myriad, ranging from entertainment, to psychological benefit, to self-interest, to actually promoting the civic good.

The ability to promote the civic good is the one that I think is the strongest reason people should do it, and therefore the one that is most worth mentioning. Imagine you were a European living four or five hundred years ago. If that was the case, you lived in a state with practically no freedom of thought, and your religious leaders instructed you to maintain many objectively false beliefs about the world, and did so with the support of their legal and political systems. They were especially intolerant of the religious views that eventually gave way to the movements that are most consistent with what most modern people consider ethical today, and the ones that most scientifically minded people consider correct.

It is certainly the case historically that the viewpoints people protected with militant fervor have tended to become ones that subsequent generations viewed with disdain, whereas the persecuted viewpoints have tended to predict future sentiment a little bit better, even when they represent a faction that has since decreased in popularity. The Catholic martyrs of the English Reformation came much closer to holding modern ideals than the Vatican of the same period did. One can hardly imagine the humanist Thomas More advocating the imprisonment of Galileo (though he did support the persecution of Protestant dissenters while England remained in communion with the Holy See, before Henry VIII's Great Matter brought England to the Germanic side of the conflict; England would almost certainly have ended up there eventually for other reasons even without the king's Great Matter).

While, from a modern perspective, both sides of any historic controversy tend to seem hopelessly outdated, the censored side tends to come off a little bit better on most issues than the side doing the censoring. For reasons that I will come to in a future post, I suspect that this general rule will hold across all times and cultures and even apply to non-human societies if they exist elsewhere in the universe, because I suspect it is a general rule about information, and not a general rule about human nature.

But first, I need to answer the obvious objection, namely: "The concern I've raised is irrelevant because we live in a society that has guaranteed us freedom of speech." Sure, we do. If you are a male between the ages of 18 and 25 in the United States, the government reserves the right to revoke practically every freedom guaranteed in the Bill of Rights, and you can be pretty much guaranteed that both the age limits and the gender limits of that power will be eliminated in the event of an actual emergency (where an emergency is defined not so much as anything likely to threaten the citizenry of the United States, but as anything that threatens the state itself; these two entities are very different, since members of the state form a very non-representative sample of the citizenry, with lawyers, for example, highly over-represented). But this strong rebuttal against the claim that modern states give their citizens freedom of speech is overkill. The much more important argument is that public censure often makes freely expressing one's beliefs unwise even when it is technically legal. You can easily lose your job (or your spouse) for holding opinions that disagree with the established conventions. People would have more freedom to express unpopular beliefs if they had some sort of fictitious identity to attribute those beliefs to, one that would protect them from most of the fallout associated with those beliefs.

Occasionally, it makes some amount of sense to have a very weak separation between a fictitious identity and a real one. This is usually because your real identity has begun saying so many controversial things that it doesn't matter if the unpopular beliefs of the fake identity also get associated with it. For most of us, however, this policy is inadvisable, and a strong separation from one's real identity is the only workable way to maintain separate identities.

For myself, reasons that I feel the need to create a fictitious identity include the following:
  1. Now that my name is associated with what I write here, I no longer feel as comfortable admitting my personal flaws as I did before. [Solution: I'm perfectly comfortable admitting to personal flaws publicly now.]
  2. Now that my name is associated with what I write here, I no longer feel comfortable at all describing my frustrations with people with whom I interact personally in a semi-anonymous way that might allow them to infer who I am. [Solution: just name them so that they're no longer semi-anonymous, and tell everybody everything to their face before I start writing about it. My main concern was that I thought the rest of my family would disown me when I started telling people my mother was abusive, and that the reason she is abusive is that both her parents are horrendously evil people, so being a much better parent than her parents were was an unacceptably low bar. For whatever reason, all of my family is perfectly content to admit that my maternal grandmother is a horrible person, but they all insist that I couldn't possibly have any excuse to be opposed to my maternal grandfather, who tortured animals for a living and forced his kids to help him do it.]
  3. To the extent that I have a sex drive, it expresses itself mainly in narratives (which is apparently very common for girls/women and very uncommon for boys/men regardless of their sexual orientation, unless they were homeschooled, and the evidence that it is much more common for boys/men who have been homeschooled is mostly anecdotal). I am completely uncomfortable with the thought of anything I write on the subject becoming associated with my real identity, BUT by far the worst writing I read has to do with this subject. I'm convinced I could write much better. So... I think it's worth attempting. [Solution: I'm still undecided. This might be a good use for a fictitious identity... though I do think it would probably be better to write these things nonymously too.]
  4. I have political opinions that I don't want to have associated with me. [Solution: get rid of those opinions.]
  5. [There was an unfinished sentence that used to be 4. I don't think it was going to say the thing about politics.]

Retraction of previous complaint

[Context: I wrote this post immediately before getting my only job as a software developer at a place that I thought was actually a good place to work. I worked there for a few years until somebody else acquired them.]

A little while ago I complained about what it's like to work in software development. The gist of my complaint was that writing code is a ton of fun, and when I was in school and still had to take a bunch of worthless gen-ed and theory classes, I couldn't wait to graduate so that I could finally start writing code full time, and not have to bother with busywork and going to class.

In some ways, it's odd that I didn't like gen-ed classes. I thoroughly enjoy both reading and writing. I write something almost every day, much of it private and much of it intended for publication. It's rare nowadays for me to go a week without having at least one day where I write an essay the length of a typical paper I wrote for school. I read one or two works of non-fiction a month just for fun, and a little less fiction, which isn't that much less than I'd be assigned in gen-ed courses at uChicago. The problem I had with my classes is that I didn't see a point to them. At least half of the books I read for school were books I never would have read otherwise. After one book by Plato, I was pretty sure I'd never read another. Dante's Inferno was already the most tedious book I had ever read the first time I read it, and it was even worse the second time. Aristotle and the Iliad, however, are both fantastic, and I expect to read the rest of Aristotle at some point in my life. The same goes for the papers. Except for a few rare instances, the papers were not what I would have been writing if I hadn't been assigned to write them. I have absolutely no objection to the general practice of reading and then writing about what I've read. I just had objections to what my instructors picked for me to read and what they picked for me to write about.

When I graduated and started working, I had the same complaint about the work I was assigned to do as I did about the busywork I was assigned in school. I just didn't see the point. The code my bosses were telling me to write was not the code I would have been writing if I had been examining the codebase and trying to figure out what code needed to be written. It was also not the code my bosses would be telling me to write if they were examining the codebase and trying to fix it, or even if they understood enough about programming to do the requisite examination.

Worse, I'd gotten the sense from talking to several of my friends and reading endlessly many rants on the internet that this was extremely normal. The life of a software developer is in many (possibly even most) companies quite dreadful. You spend your days debugging awful legacy code that you're not allowed to actually fix, because nobody trusts you to re-write it any better than the last guy, and everybody scorns your assertions that you could in fact radically improve some particular part of the project by axing everything and starting over, dismissing them as a combination of Not Invented Here Syndrome and the sort of hubris that allegedly accompanies all software developers. Yada. Yada. Yada. So you spend your days duct-taping together heaps of garbage and listening to lectures by people who have no clue what they're talking about telling you how you should be able to duct-tape garbage together a helluva lot faster and turn it into something pristine. And it's awful, and I would rather do just about anything else.

It pays okay. But if you're working at one of the approximately infinitely many shops that have been maintaining a product so long that it's mostly held together with silly string and chewing gum, the company's probably not very profitable (if it's profitable at all), so the pay probably isn't great, and you may have to work lots of unpaid overtime just to keep the company from collapsing under the weight of all the drying saliva in the chewing gum. So once you take into account the hours and your prospects for advancing within your company, you are probably underpaid relative to your IQ. If you're in a career like consulting or sales, it doesn't matter if you find yourself at a bad company for a while, because your job is pretty much networking anyways. If you're a software developer, your job is sitting in front of a computer. If you're at a not-great company, they are probably pretty aware that they are not the best company, and they might try to keep your networking opportunities as limited as possible.

You might have to fight your boss just to be allowed to speak directly to the clients whose problems you're trying to solve (instead of having some other middleman at the company have the conversation first with the client and then with you). It can be even harder to get permission to talk to the consultants the company brings in to decide what code you'll be writing next. This wasn't a big problem at my last employer, but it was a huge problem at my first job out of college. My boss, who knew next to nothing about stats, hired a stats guy as a contractor and then insisted on being an intermediary between me and this consultant. He would relay instructions to me based on his conversations with the consultant that made absolutely no sense, and I would tell him that something had been lost in translation and give him some options for things that were kind of similar to what he had said but would actually make sense. Then he would go back to the stats guy, and after about three conversations we'd start to get somewhere that could have been reached easily if I had just been allowed to talk to the guy. But my boss insisted on being the intermediary. I might be paranoid, but I was convinced that he did that specifically because he was worried the guy might offer me a much better job than the one I had. I was actually hired at Groovebug (the company I'm referring to) specifically to specialize in the thing that this consultant did full time. In my job interview, I had told my boss that that was what I was looking to do with my career. It was related to projects that were a few months down the pipeline when he hired me, but something he did plan to eventually have done. So anyways, my boss there knew my long-term career goals, and he also knew that he was blocking me from the one networking opportunity at Groovebug that fit well with them. He also knew that a ton of stuff was being lost in translation when he tried to be an intermediary between me and the other guy. I did eventually meet the consultant, and he told me (without any prompting from me) that he thought his job would be a lot easier if he were able to deal directly with me, so I know he wasn't opposed to us meeting, I know I was actively asking to meet directly with him, and I know that my boss, who knew all of this, was very opposed to that happening. I think it's a pretty reasonable interpretation of the circumstances to say that my boss at the time was actively trying to derail my networking opportunities.

So anyways, my professional experience in my first few years out of school was pretty awful, and I felt trapped in a really bad situation. And I felt like it was pretty easy to go into software engineering or computer science and wind up at a company where you wouldn't want to stay, and to find yourself feeling trapped, with no real prospects for advancement in the company (because a lot of the companies where you wouldn't want to stay are not profitable and not growing) and no leads towards greener pastures tomorrow. And to top it all off, when you're in one of these situations, you start to get really burnt out. When you've spent ten hours debugging old code, you don't come home wanting to write more code to post on github; you pick up a new hobby that is as unlike programming as anything you have ever done. I picked up painting and I started running. If you're not an introvert (and you probably are if you're a software developer), feeling drained every day also makes you not particularly enthusiastic about socializing. I really like the version of the introvert-extrovert distinction that says that extroverts are the people who get energized by socializing and introverts are the people who use up their energy by socializing. That's definitely how I feel. I like socializing with other people; it's just the last thing I want to do when I'm already feeling drained. (I've always believed that the correlation between happiness and socialization is mostly attributable to introverts, and is caused by the fact that introverts only want to socialize when they're happy. When I'm feeling happy, I want to hang out with other people. When I'm feeling exhausted, sad, or angry, I want to be alone. It's not that I don't want to see other people when I'm not feeling positive about life. I just don't want to be seen.)

So anyways, all of this stuff is still true.

And yet, I have a completely different perspective on what it's like to work in STEM than I did a couple months ago when I wrote that complaint.

What changed? I put my resume on Monster.

For real.

I was reluctant to do so. I didn't expect anything good to come of it. I expected to be a little bit depressed after I put my resume up, because I'd just get a few calls from pyramid marketing companies like I did when I posted my resume on some website years ago, when my only degree was my high school diploma and all of my work experience was the sort of summer jobs you would expect. (A math REU once I got to college. Manual labor when I was in high school.)

Turns out, there really is a huge demand for software engineers, especially software engineers who X, for any value of X, including software engineers who really like statistics, math, and machine learning! In all my time looking at job listings trying to find a company that might hire me, I never saw a single listing for a role that fit what I wanted to be doing that didn't list a PhD in Math/Statistics/Computer Science/Physics as a prerequisite and also ask for five years of software development experience (neither of which I have). I've occasionally applied to one of these companies with a cover letter saying that I'm a software developer who wants to specialize in artificial intelligence and machine learning, but am willing to continue doing UI or implementation work or something else I have experience doing while beginning to get professional experience with AI. Never a nibble. Nobody's ever contacted me about a machine learning role related to those inquiries. Nobody's ever contacted me about a role developing iOS apps that would involve working with people who are developing the machine-learning algorithms those apps use. (I've gotten a few requests for plain iOS development over the years at companies that manufacture "ooh shiny!", but I'm really not into making "ooh shiny!" unless there's also some substance behind it.)

Then I posted my resume on Monster, and all of a sudden I started getting calls from people who wanted the opportunity to spend their time and their networks trying to find a place for me that fits my interests and is located someplace I would want to work. And they set up actual time to talk to me, and get to know me and what I can do, what I've done, and what I'd like to be doing. And they set up interviews for me with the sorts of companies that are doing exactly what I'd want to be doing, where I want to be doing it, and paying more than I would have been asking for if I wasn't talking to these people. It's fantastic! I've gone from feeling like I'm completely trapped to feeling like there's a whole wide world out there full of people who want to hire people who like using Linux and programming in python, Haskell, Lisp, and other programming languages that are fun to use. And like I'm not going to end up spending the rest of my life debugging ancient .NET code or writing pointless iPhone apps if I don't manage to get myself out of the software industry while I'm still enough of a generalist for consulting companies to take an interest in me.

So this is the conclusion of the matter. If you go into software development as someone who majored in abstract mathematics, you have a really high chance of finding yourself at a lousy firm or two in your first couple years out of school (because math majors are not in as high demand as software developers straight out of college). While you're at these sorts of companies, you won't have any networking opportunities, and you probably won't develop your skills much or put much new material on github. You will see your world closing in around you and feel trapped, but when you emerge from that cocoon, you will find that you have metamorphosed into something much more appealing to the parts of the software industry you actually wish to appeal to than you were when you went in.

So all in all, it's probably worth it. And I take back what I said about the software development industry being a terrible industry to work in unless you find a great company at your first job out of college. In most industries, you expect your first few years out of college not to be the best years of your career. You also expect to be making connections and building the skills you will use for the rest of your career, which doesn't happen in the software industry. (At the kinds of companies where you don't want to work, you are probably building skills related to technologies that were obsolete enough to be inconvenient years ago, and that will become obsolete enough to be irrelevant in another few years.) But it doesn't matter. There actually is enough demand that you don't need to waste a bunch of time networking and attending conferences once you have accumulated a couple years of experience.


Edit:
Once you have enough experience that good companies are willing to let you demonstrate competence (which takes only about two years), it's really easy to demonstrate competence and end up with a job that pays well enough that you can quit what you're doing on a whim and have money saved up to let you continue living like a king while you build the career you want. Eighteen months out of school, my recommendation was, "If you're not sure what you want to do with your life, avoid coding." At this point, I've completely changed it to, "If you're not sure what you want to do with your life, and you're smart, take up coding until you figure it out."

Once you put in your first two years to get experience, you will find work as a software developer whether or not you really even try.

Nakedness

[I wrote this in 2015 and left it as a draft until today.]

I've never felt as exposed as I have recently.

This is probably the biggest consequence for me of finishing To Change the World and then deciding to start blogging under my own name as opposed to a meaningless pseudonym. I am way more reluctant to just say things than I was a few months ago, especially in writing. I still write quite a bit just in Google Docs or a few other places where I have no fear of what I say being associated with me in anything close to a permanent fashion. But I feel strangely alienated from what I'm saying in any form where I expect or intend other people to be able to, at least in principle, find what I've written and associate it with me. I say "at least in principle" because, to the best of my knowledge, I don't have any readers yet. This is a good thing as far as I am concerned. I'm uncomfortable enough with the idea that people could be reading what I'm saying and knowing that it's me who's saying it. I don't know how I would feel about having that idea become a reality. Eventually, I will try to obtain readers, but for now I need to get comfortable with the idea of not writing to an audience.

I say "not writing to an audience" because that's what I feel like I'm doing differently now from anything I've ever done before. Most of what I've written in the past has been written either for my own private benefit or with one to four specific readers in mind. When I write entirely for myself, I say anything that comes to mind. I don't have to worry about the possibility of people being offended or people questioning my sanity, thinking I'm a whiny brat, or anything of the like. It's only me, and years from now when I come back to it, I know that at the very least I will read it sympathetically, no matter how far my mind has drifted from the beliefs that I have today. I'm vaguely aware that my future self will regard my present self as every bit as much of an idiot as my present self regards my past, but I also believe that my future self will be as amused by my present self as my present self is with my past. That's always been the easiest way to write, and always been the writing I most hope nobody but me will ever see.

Then there are the papers, letters, and a few other narrowly circulated writings I have produced over the years. (Articles for school publications and all that.) These I could write usually knowing who I would provoke, offend, underwhelm, delight, impress, or otherwise affect. I could mute the parts of my perspective that clashed with my environment enough to actually cause me worry, while still saying most of what I wanted to say. But all that time I was writing for an audience of what? Never more than 200 people, I'd say. Maybe a few innocuous news articles I wrote in school had a slightly higher circulation, but not the opinion pieces.

But now, I feel like every line I say will either go entirely unread (which is fine) or offend someone. I don't know who and I don't know why, but I feel like everything I say is going to upset someone that I would rather not upset.

The only thing I know about the internet is that someone is always offended. (I mean the only thing besides like HTML, how TCP works and all of the technical stuff. In context, this remark is referring to my ignorance of the social character of the internet, not bewilderment over how the internet works. You see what I did there? I totally didn't trust you to read my intended meaning in context, and then I got worried that if you did read it correctly you might think I didn't realize quite how heavy handed I was being to explain it more fully. But I did realize this, and felt the need to point that out too.)

This isn't about a persecution complex. I don't think it takes any more bravery to hold and share my own beliefs than I think it would to hold and share any set of beliefs, but I do think it takes a lot more courage than I ever expected to hold and share beliefs in a setting that is truly public. I'm much more impressed by the people who have done it than I ever used to be in the past, to say nothing of how impressed I am with the people who very publicly share things that I would consider much more personal than beliefs.

I'm not talking about facebook or anything like that. Writing for a school paper never intimidated me. Writing for an opinion paper that I knew would have a somewhat self-selecting audience also never intimidated me.

Newsflash: In Sports, Winners Cheat

Context: This is an essay I wrote at the time I decided to quit watching sports, because watching sports makes me angry. I haven't followed team sports since the 2015 NCAA basketball finals, except for briefly rooting for the Cubs in the 2016 post-season: I was a Cubs fan as a kid, I lived in Chicago at the time they finally won the World Series again, and a few of my relatives are rabid Cubs fans who wanted to know how exciting it was to be in Chicago, so I had to make some effort to take advantage of the opportunity to keep them from being too ashamed of me. I did watch a Hilary Hahn concert instead of a Cubs game during the World Series, however. Emptiest concert I ever attended at the CSO. The officiating in the second half of the 2015 men's college basketball finals was so bad that it is no longer possible for me to believe that refs are honest. If I feel the need to watch sports, I watch esports now. When I watch esports, my team sometimes wins and my team sometimes loses, and it either excites or disappoints me. It doesn't make me angry, because I never feel like my team got cheated. It's strange how that works.

So how 'bout Tom Brady?

Tom Brady might not quite be football's Lance Armstrong, but he's the closest thing football has to an absolutely dominant winner. And he's one more superstar athlete who has been found guilty of cheating. Baseball had its whole steroids era. Before that, there was the spitball era, when pitchers kept their game together by hiding Vaseline behind their ears and rubbing a little on the ball to mess with its spin, even after the league officially banned the practice. (I mean, it was still cheating before it was officially cheating too, right? You shouldn't need a rule to explicitly say that rigging the equipment is not allowed.)

It happens in other sports too.

This seems like the ideal opportunity to comment that after the NCAA tournament in basketball this year, the main question in my mind was whether someone in Krzyzewski's program bribed the refs at halftime, or whether it was just a fan who bet on the game. If nobody bribed them, it sure looked to me like somebody must have threatened them. Someone scary. They didn't even attempt to make it look like they were officiating the second half honestly. I'm not even a fan of Wisconsin. I was rooting for Wisconsin that game despite their uniforms, but mostly because I was watching the game with my dad, and he's a Purdue man who roots for Big Ten teams in tournaments as long as they're not IU. (I didn't actually hate Duke until that game. Now my favorite sports joke is that Krzyzewski is the most arrogant coach in history. Lots of teams have had legendary coaches, but only Duke made him their mascot.)

I'd watched a couple Wisconsin games with my dad over the course of the season, and they don't foul. They just don't foul. One of the games was a Purdue game, and my dad insisted that they fouled and his team was getting robbed. It's the only game I've ever watched with my dad where I felt like the replays consistently agreed with me when I said, "Uh... I didn't see it," rather than with my dad. He's usually a bit more biased than me, but he knows the game way better, and he's better at seeing all the action. But Wisconsin doesn't foul. And they didn't foul any more against Duke than they usually do. And they didn't foul any more in the second half of that game than they did in the first half. And Duke was playing out of control during that stretch in the second half when Wisconsin was called for more fouls than they usually get called for in an entire game. Four years in a row, they were one of the top five teams in college basketball for committing the fewest fouls. Going into the game, they had committed the fewest fouls of any team in college basketball that year. Then all of a sudden, halfway through the second half of the national championship game, they have a double-digit lead, and they start fouling at five times their typical rate? When teams tend to foul way less often when they're ahead? And they're doing it at both ends of the court? Without Duke fouling at all? (Duke had an immaculate second half as far as called fouls go, even though they have not been a particularly foul-free team over the year, and their second half looked ugly. Their second half was ugly.) Oh yeah, and if that wasn't bad enough, the refs reviewed a play at the end of the game to decide who hit the ball out of bounds, and they made a call that was so obviously incorrect that the commissioner (or whatever the head official over the whole NCAA is called) had to explain to the press that the refs on the court never saw the angle that most conclusively showed the ball going out off a Duke player. The call was so completely, unambiguously wrong that the head of NCAA basketball felt the need to apologize for it. And that was just the icing.

Apart from ambiguous instances of cheating (Did Tom Brady do what he's accused of? Did somebody actually bribe the refs?), there are unambiguous forms of cheating.

The official rules of the NFL say how much the football should be inflated, and they also say what the consequences should be for using an improperly inflated ball. (A slap on the wrist: a $25k fine, I think, with the option to declare the game forfeited if the use of the improperly inflated balls is deemed to be the cause of the victory of the team that used them, but no option to declare it forfeited if they would have won otherwise. Goodell has decided that what happened with New England was a much more serious offense than what people had in mind with that particular rule.)

The rules of basketball give the refs the power to call fouls, and no power to force anybody to review them, and some people exploit these rules. [Edit: I believe this has since changed. But by the time it happened, my interest in the game was completely dead.]

When I was in high school, a former professional soccer player came to teach at my school. Naturally, he also became an assistant coach on the varsity soccer team. The head coach never played soccer professionally. He turned down an opportunity to do so (in Europe), because he decided he would rather move across the Atlantic to marry his wife, who didn't want to move to Europe, than become a professional athlete. So he became a French and Spanish teacher and a soccer coach. At least, that was the story I heard when I asked why the former pro wasn't automatically going to become the head coach. I also heard that the former pro had about two games of experience in the American professional soccer scene. This was still before 2010, so Americans didn't care about professional soccer at all at the time (he started coaching at my high school in 2005; I don't know how much time had elapsed since he'd stopped playing pro soccer), so getting cut a few games after going pro in American soccer was about as impressive to everyone as being part of a high school volleyball team that made it to states but lost in the finals. Impressive, but meh, if that's all you've got for bragging rights.

This was my sophomore year, so as you might expect if you have read my previous descriptions of how well I did in sports as a child, I was still a bench warmer for the freshman team. Actually, it's not quite as bad as I make it sound. I swam as a kid and was usually the guy who came in second or third out of six to twelve kids my age, but I hated swimming. Then I switched to tennis in middle school, and was mediocre enough at that. Then I switched to soccer in high school and was so far behind that my mediocre-at-best natural ability in sports never had the chance to be fully realized in my brief attempts to learn soccer. Again, this was way in the past, so the most embarrassing thing at the time was that I played soccer instead of a real sport like baseball. (Ha!) So maybe being a freshman-team bench warmer as a sophomore playing soccer is every bit as bad as it sounds. I was never any good at the sports that kids who can't play sports are good at, but I couldn't even begin to play the sports that people who are good at sports play.

Anyways, I digress. The point I'm trying to make is that I never won in sports, so you don't have to suspect me of cheating, and any time somebody does win in sports we should all assume they cheated until they prove otherwise, because I'm still a sore loser who's upset about being awful at sports as a kid. :) Actually, the point I was going to make was that this former pro believed the soccer coaching at my school was lacking one critical part of the game: how to mislead the refs. I'm serious. His main job was to train the goalies (at normal goalie stuff, not cheating), but in a few practices the one year I still played when he coached (and, I assume, in subsequent years), he would take the whole team through sessions on how to get away with doing things that were supposed to result in penalties if a ref saw them, and how to draw calls against your opponents without them doing anything when a ref wasn't looking straight at you. This was stuff like, "When you're not in a position to make a play and you're in the ref's peripheral vision (far enough away from the main action that he won't look directly at you), pick a running trajectory that brings you right past an opposing player and trip yourself over your own feet right as you run past him." And then you practice doing it. Or when to be excessively aggressive to try to trigger your opponent to retaliate against you right after you've drawn the ref's attention without doing anything that looks too suspicious. I'm not sure which of the other coaches knew that he was teaching these techniques. I'm pretty sure the head coach would not have approved. He didn't bother much with the freshman team. Neither of these coaches did much with the JV team, but the only time I learned any of this guy's techniques was a day when the JV and freshman teams practiced together because a couple of the other coaches were absent. So the head coach coached the varsity team, and this guy taught everyone on the JV team that day.

The one person I've ever talked to about sports after he was a professional athlete decided that the primary thing missing from training kids to play sports well at my high school was that nobody was teaching them how to cheat. (I've also talked to a few athletes about sports before they became professional athletes. All of them still played with integrity so far as I was aware.)

Thursday, May 7, 2015

Senescence

The first person to live to be a thousand years old will have been specifically bio-engineered for such longevity, clonally descended (with intentional genetic modification) from other predecessors who were also bio-engineered to increase their longevity.

I think one of the reasons for the popularity of Aubrey de Grey's nigh-alchemistic views of aging is that they are phrased in such a way that disagreeing with them sounds anti-scientific (or at the very least, skeptical of the potential for scientific progress). The purpose of my claim above is to provide an equally trans-humanist view, one that is equally based on the belief that people will eventually find ways to better overcome senescence, without making de Grey's mistake of being utterly and completely wrong.

Because de Grey is wrong. (Dead wrong, one might say.)

The mainstream academic position on de Grey is that his claims deal with research topics that are too poorly understood to provide proof of the falseness of his claims, but that the available evidence strongly indicates he is wrong, and that once more is known, his position will be completely falsified. That's fair. Most of the evidence that strongly refutes de Grey's position is soft evidence. Nobody understands precisely how aging works, but all of the theories that explain anything at all are incompatible with de Grey's beliefs. You can't really use any of them to refute him, though, because all of them have enough problems of their own.

But the soft evidence against him is overwhelming.

First of all, humans already have the longest average lifespan of any endotherm and the longest average lifespan of any terrestrial animal, even if you limit your sample to organisms that survive to adulthood in order to give reptiles a fighting chance at high average lifespans, since they have worse than astronomical mortality rates shortly after hatching. Humans also have the highest recorded maximum lifespan of any endotherm.

Terrestrial animals are a worthwhile category because living on land is a lot harder than living in water. Read Margulis if you want a slightly fuller explanation. More or less, earth-born life is aquatic, and terrestrial life survives by internally preserving oceanic conditions. Moreover, all animals start their life in a liquid environment (an egg, a womb, or a body of water). Instead of adapting a way for zygotes to survive outside of an aquatic environment, mammals have evolved a way to produce an artificial aquatic environment inside the females of the species, even as some mammals also evolved a dry environment for their young to mature in for a while after birth. If you can only find aquatic animals to support your claims about what is possible, you probably aren't making claims that apply to human longevity. Evolving to live on land forced terrestrial animals through a very tight bottleneck, which mercilessly selected in favor of traits needed to perpetuate a species on land without any regard for what traits members of those species might later wish had been preserved. I think chordates traded off the potential for biological immortality long before adapting to life on land, but at any rate, the likelihood of their having preserved it through the move onto land is infinitesimal. There was just too much optimization for something else in particular.

The endothermy argument is something else altogether. We tend to measure lifespans in years. Years are biologically irrelevant. If we wanted to measure lifespans in something more biologically relevant, we should focus on metabolic turnover and cell divisions. Both of these are way higher in endotherms (mammals, birds, and tuna!) than they are in ectotherms (non-endotherms). Giant tortoises spend much of their lives biologically dead. They hibernate so thoroughly that they don't even have a heartbeat. Since they don't have to maintain their body temperature in their cells, their cells don't have to do anything. When it gets too cold, they just die for a little while and wait for the sun to come back out. True story. A giant tortoise living to be three hundred is biologically equivalent to a mammal living to be about four. (Not four hundred, just four.)
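As a back-of-envelope illustration of this "count metabolic work, not years" argument (mine, not the author's calculation), here is the scaling the 300-versus-4 comparison implies. The 1/75 metabolic-rate ratio below is just the placeholder that reproduces that comparison, not a measured value.

```python
# Back-of-envelope sketch of scaling lifespan by metabolic activity.
# The relative-rate figure is a placeholder implied by the text's
# "300 tortoise years ~ 4 mammal years" comparison, not a measured value.

def metabolic_equivalent_years(chronological_years, relative_metabolic_rate):
    """Convert calendar years into 'mammal-equivalent' years by scaling with
    how much metabolic work is done per year relative to an endotherm (1.0)."""
    return chronological_years * relative_metabolic_rate

tortoise_years = 300
tortoise_relative_rate = 1 / 75   # hypothetical: ~1/75th the metabolic work per year

print(metabolic_equivalent_years(tortoise_years, tortoise_relative_rate))  # -> 4.0
```

The exact ratio doesn't matter for the argument; the point is that a largely dormant ectotherm accumulates far less metabolic wear per calendar year than a mammal does, so its calendar lifespan overstates its biological one.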

Then there's complexity. Talking about the simplicity or complexity of organisms is extremely out of vogue. But fashion is dumb anyway, so I don't care. Biologically immortal organisms are extremely simple. Vascular plants have three organ-types (for want of a better term): leaves, stems, and roots. Flowers are a type of leaf. Tubers are a type of stem. Bulbs are a type of leaf. Etc. This isn't semantics or pedantry. Different organ-types are made up of different tissue-types, which are made up of different cell-types. Different cell-types rely on different conditions from each other to survive. Plants only have a few dozen cell types, so they don't have to code for preserving that many different modes of survival. Biologically immortal organisms, whether they are plants or animals, are ridiculously simple as measured by the number of different cell types they have. For reference, you have more types of tissue that are unique to your ear than a clam has in its whole body. By unique to your ear, I mean specialized tissue types that occur only in your ear and nowhere else in your body. And your ear isn't even close to being the most complicated organ-like thing you have. (That would be your brain, in case you're wondering. There are more cell types in the brain than there are tissue types in the ear, and there are so many different tissue-types in the brain that, to the best of my knowledge, no one has attempted to make an exhaustive list. As of 2014, the authors of the leading neuroscience textbook did not believe an exhaustive list of neurotransmitters was yet known, and that list is tiny compared to the task of exhaustively identifying all of the different kinds of tissue that occur in the brain. Brains are ridiculously complicated just in terms of their raw hardware, before you even look at how that hardware is organized. And complicated in a very overtly evolved sort of way. We're talking tangled, haphazard, messy, disorganized complexity.)

And then there's the data. If sequoias died at a young enough age, we'd think that they were biologically immortal. For the first thousand years of their lives, they exhibit all of the traits of biological immortality: their chances of living another year increase with every successive year, and so on. (Actually, this general trait of having one's chance of living another day increase with every successive day is true of most plants for the first few years of their lives. Hell, it's true of most animals too during their early life. Mortality rates tend to be pretty high for the young of most species.) If sequoias had a high enough mortality rate that they all died before living to be 1,500 years old, we'd think they were biologically immortal. But they don't. They live to be about five times as old as the oldest biologically immortal organism, and they only begin to exhibit senescence once they're a thousand years older than the oldest known biologically immortal organism. Sequoias are "biologically immortal," by the way that characteristic is defined, for a far longer period of time than any organism that is actually believed to be biologically immortal. Most "biologically immortal" organisms have very short lifespans. Sponges, clams, and sea urchins are the exceptions. They sometimes live to be a few hundred years old, and they have members that exhibit decreased mortality with age for that whole lifespan. (Also, PETA claimed that one particular lobster was over 100 years old in a bid to save its life, but their calculation was based on some very spurious assumptions. Moreover, lobsters are known not to be biologically immortal, since their chances of dying while molting [or simply becoming unable to molt] increase dramatically as they age. Becoming unable to molt is eventually fatal for a lobster.) Hydra can live to be up to four years old without showing any increase in mortality as they age! Whoop-dee-doo! Humans can also live to be up to four years old without any increase in mortality over that period... while being much more biologically active.
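
The criterion being applied throughout that paragraph, annual mortality that stays flat or falls with age rather than rising, can be put in toy-model terms. The sketch below is only an illustration with made-up parameters (the Gompertz coefficients and the flat rate are not fitted to any real species); it contrasts a senescent hazard that rises exponentially with age against the non-increasing hazard that defines "biological immortality."

```python
import math

# Toy hazard models; all parameters below are made up purely for illustration.

def gompertz_hazard(age, a=0.0002, b=0.085):
    """Senescent pattern: annual mortality rises exponentially with age (Gompertz-style)."""
    return a * math.exp(b * age)

def flat_hazard(age, rate=0.02):
    """'Biologically immortal' pattern: annual mortality is independent of age."""
    return rate

def survival_curve(hazard, max_age):
    """Fraction of a cohort still alive at each birthday, given an annual hazard."""
    alive, curve = 1.0, []
    for age in range(max_age + 1):
        curve.append(alive)
        alive *= 1.0 - min(hazard(age), 1.0)
    return curve

senescent = survival_curve(gompertz_hazard, 120)
non_senescent = survival_curve(flat_hazard, 120)
for age in range(0, 121, 30):
    print(age, round(senescent[age], 3), round(non_senescent[age], 3))
```

The point of the contrast is that "biologically immortal" only describes the shape of the mortality curve, not a long life: with a flat 2% annual hazard, roughly half of the cohort is gone within about 35 years anyway.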

So at this point, we've got four major strikes against the idea of a human living to be a thousand. Simple organisms (as measured by the number of cell-types, tissue-types, or organ-types they possess) have much longer lifespans than more complex organisms. Aquatic animals are able to have much longer lifespans than terrestrial animals. Ectotherms have much longer lifespans as measured in years than endotherms, but much shorter lifespans as measured by biologically relevant criteria such as total metabolic activity or cell divisions. And when we look at extremely simple, aquatic ectotherms, we find that they still have maximum lifespans capped at a few hundred years. The oldest clam ever discovered was 507 years old. The longest-lived plants can reach perhaps 5,000 years old, and some scientists believe that a few sponges might be 10,000-20,000 years old, but at this point we're talking about things that are ridiculously simple compared to amniotes.

The oldest known terrestrial homeotherm died on August 4th, 1997. She was 122 years old. Her name was Jeanne Calment, and she was human. No other mammal is known to have lived that long. (Though bowhead whales [not terrestrial animals] are thought to live up to 150 years, possibly longer.) No bird is known or thought to have lived that long.

When somebody says that senescence isn't biologically inevitable in the context of talking about extending human life spans, that person is saying something pretty ignorant. The available evidence strongly suggests that humans are already flirting with the upper bound of what's biologically possible for organisms with our biological history.

But what about science? Technology! Progress! Onward! Yay science! Hurrah! Hurrah! Hurrah!

Isn't the fact that humans live longer than any other terrestrial homeotherm evidence in favor of modern medicine?

No. Not at all.

People from parts of the world with no modern medicine have been reliably documented living to 110-115 years old. Any alleged medical progress humanity has made in the past four hundred years might have extended the upper bound on human lifespan by about five years. Probably not even that. There are plenty of other factors, like adequate nutrition and improved hygiene, that ought to increase longevity in the first world relative to the third world.

I'm not knocking medicine in general. Claims that medicine has improved the average human lifespan are well substantiated. Childbirth used to be a major cause of death for women, and the medical advances of the past century and a half have practically eliminated deaths due to childbirth. Childhood mortality due to contagious disease also used to be a major cause of death, and vaccines have pretty much eliminated that. So I'm not knocking medicine altogether. I'm strongly in favor of the medical position on two of the more controversial issues in medicine. But I see no evidence whatsoever to suggest that medical progress has done anything to begin overcoming senescence. Maybe modern medicine has extended the maximum human lifespan by 5%, but it's not on the verge of extending it by 1000%.
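
For scale, here is the arithmetic behind those two figures, using the 122-year record from above as the baseline (the 5% and 1000% are just the round numbers used in this paragraph, not estimates of anything).

```python
# Putting the two claims on the same scale, using the 122-year record as the baseline.
record = 122  # longest verified human lifespan, in years

extended_5_percent = record * 1.05          # "medicine may have extended it by ~5%"
extended_1000_percent = record * (1 + 10)   # a 1000% extension means adding ten times the baseline

print(round(extended_5_percent))     # ~128 years
print(round(extended_1000_percent))  # ~1342 years -- the "living to a thousand" ballpark
```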

It just isn't.

I'd bet my life on it.