Two of my favorite writers are Tyler Cowen and Cal Newport.
Tyler Cowen is one of the most influential bloggers on earth. In addition to being a Harvard-educated economist, he’s created a multi-media intellectual empire for himself at George Mason University, with a podcast, blog, lecture series, online university, columns at Bloomberg, and tentacles all over print and audio media. As an example of the scope of his influence, reading Cowen’s blog is the first part of Malcolm Gladwell’s morning routine.
Cal Newport is one of the youngest tenured professors at Georgetown University, where he teaches computer science and is widely published on topics that are far beyond the scope of my intellect. But that’s not how I know about him. I know him from two of his books: So Good They Can’t Ignore You and Deep Work, both of which I consider among the most important books on how to approach work that I’ve ever read. Whenever I meet anyone who seems to be in a professional funk, if I suspect they might be open to suggestion, I do everything I can to steer them to Cal Newport’s books. Reading his first book was essential to helping me choose a path for my business, and reading the second has been equally helpful in getting it where I want it to be.
Tyler Cowen is an information omnivore (he wrote a book on just that topic, actually, called The Age of the Infovore), consuming insane amounts of information from diverse print and online sources. He often cites his extreme responsiveness to email as one of the keys to his success. According to The Age of the Infovore, he checks his email every five minutes. And despite having a massive online following, he answers every single email he gets.
Cal Newport’s approach is about as different from Cowen’s as you could get. He might be described as a borderline Luddite who teaches computer science. More specifically, he thinks that social media, email, and instant messaging apps are intellectual scourges to be avoided at all costs. According to Newport, to create work that has real value, you need to concentrate for long periods of time. Every time we check Facebook or email, we destroy our concentration, shifting our focus in a way that is difficult or impossible to recover. Newport thinks the best way to succeed is to purge our lives of the constant hum of pings, notifications, and feeds trying to steal our precious attention. To succeed in a modern economy where attention is a scarce resource, we must create and cultivate an environment every day where true, deep work is possible without interruption.
Both Cowen and Newport are writers and intellectuals at the absolute pinnacles of their fields. But their approaches seem diametrically opposed to each other. So who is right? Which approach is better—deep work or constant, unrelenting media consumption?
I’m reading a book right now called What Works for Whom? It’s by a couple of English academics who specialize in psychotherapy research. The book digs into which types of psychological and psychiatric interventions work for which types of psychological problems. Here’s an excerpt:
Summarizing quantitative review of outcomes for MDD [Major Depressive Disorder], it seems clear that psychological therapy has benefit over no therapy, though when active therapies are contrasted, differences between them are less clear. Although there are indications that CBT [Cognitive Behavioral Therapy] is superior to less structured forms of psychotherapeutic intervention, it is worth noting that this conclusion appears less robust when the contrast treatment is credible and theory-grounded.
The overarching lesson I’ve taken from the book is that the aggregated data show that techniques that work for one type of problem do not necessarily work for others. Not exactly a Eureka! insight, but worth noting.
CBT and interpersonal therapy work well for depression but have not been shown to be effective as treatments for substance abuse. What works for anxiety is not an effective treatment for anorexia. And so on.
Have you ever finished a book or a TED talk and thought to yourself: “This is it! This is the key to what I’ve been looking for! If I just do X or don’t do Y, all of my problems will go away!” And then after a few hours or days, the magic solution fades from memory and life goes on as normal?
There’s no shortage of people out there who are in the business of selling us on breakthroughs.
A breakthrough is an event or realization that forever changes your life. Before this moment, you were one, lesser version of yourself. And after this moment, you’re a better, improved version of yourself.
If a weekend or a conference or a person were truly capable of making you a better person, it’d be hard to quantify just how much that would be worth. If breakthroughs were real, the sellers of breakthroughs could probably just name their price. And even if a breakthrough only might be real, plenty of people would be willing to gamble a small fortune on the off chance that it might do the trick.
Last year, I found this blog on false insights by David Chapman. It’s wonderful. For me, it was a breakthrough on why most breakthroughs aren’t really breakthroughs.
Most of the time, according to Chapman, when we are sold on the idea of having had an insight, it’s because we’ve been tricked into finding an easy solution to what seemed like an artificially hard problem. By solving an easy problem masquerading as a hard one, we are tricked into believing that we have somehow achieved mastery with relatively little effort. Perhaps all it took was a slight tweak in direction or mindset.
Ultimately, most breakthroughs are a sleight of hand. The ability to solve one problem doesn’t necessarily mean that you have the power to solve another, unrelated problem. There is no magic key that opens all doors.
I’ve spent most of my life searching for simple, overarching rules that might help me live a better life. An intellectual golden ticket, of sorts.
And the more I read, the more convinced I am that there are no one-size-fits-all formulas for how to live your life—no perfect plans for how to shape your schedule, your work-life routine, or pretty much anything else. No life hack works equally well for everyone. For each of us to maximize our talents, we must maximize self-knowledge by paying attention to our own internal signals, while incorporating lessons from those around us to the extent they might be helpful. And even then, we still have to adapt to every new wrinkle that life throws at us. And this will never end.
If you look up directions on Google Maps, there are usually three or four different options for how to get to your destination. Google will point you to the fastest way and then give you options for alternative routes. Perhaps you want to pick a more scenic option or avoid the highway. But if you go pretty much anywhere else other than those three or four routes, you’re going to end up going the wrong way. There is more than one way to get where you want to go, but there are infinitely more routes that will take you the wrong way.
I think life is like this, except to date there is no Google Maps service that is able to provide objective directions for how to get where you’re going—and where not to go.
There are lots of books and videos and TED talks and inspirational seminars where people sell you on the idea that they can tell you how to live your life.
But as is obvious when you read books like What Works for Whom?, neurodiversity is a real thing. There are plenty of 500-page academic treatises that will send your mind spinning with details of how effective strategies in one arena will be totally unsuccessful in another. Effective strategies for beating markets won’t help you in your family life. Warren Buffett’s wife left him to be with her tennis coach.
So it goes.
A few years ago I ran a couple of 100-mile races. I’ve heard other people who have run 100-mile races say that afterward, everything else they do in life feels much easier. As in, after running 100 miles, they know they can accomplish anything.
I’m skeptical of this. After I ran 100 miles, I just knew that I could run 100 miles. I didn’t find it easier to find meaning and purpose in my work. I didn’t find that my relationship struggles had changed. The problems I had in life before I had run 100 miles were more or less the same problems I had after I had run 100 miles.
A while back I wrote this piece called “Metarules for Games,” wherein I tried to come up with a set of overarching practices for how to approach new games. I re-read it recently, and I think it’s interesting and useful for people who like to play games, at least up to a point. It’s an exercise in how to think about games generally, but if you read it, at best, it might only provide a marginal advantage in games over someone who had not read it.
Reading “Metarules for Games” won’t make you a chess master. To do that, you would need a base level of intelligence, plus many thousands of hours of practice and intense study. Being a chess master isn’t about breakthroughs. It’s about developing skills over years of work and then making successful adaptations during individual games. In the same vein, reading “Metarules for Runners” won’t make you a 4-minute miler. Reading “Metarules for Investors” won’t make you a billionaire (or a millionaire, or even a thousand-aire). That’s just not how it works.
There are popular writers—Tim Ferriss in particular comes to mind—who specialize in studying and decoding the habits of successful people. The idea is that if we learn certain overarching rules, certain patterns for how to organize our lives, we might find a shortcut on the path to success and high status.
This sounds to me like the business of selling breakthroughs.
This type of study breaks down when we look at people like Tyler Cowen and Cal Newport, whom I mentioned at the beginning of the post. The habits that drove Cowen’s success are the very habits Newport seeks to avoid. If one habit (constantly checking email) and its opposite (consciously avoiding email as a distraction from deep, focused work) can both serve as paths to success, then perhaps there’s no magic breakthrough to be had just by picking one habit or its opposite. Perhaps there is relatively little utility in obsessing over the anecdotal meta-habits of intellectual titans.
Tyler Cowen is highly skilled at processing massive amounts of information. Cal Newport is highly skilled at focusing on complicated tasks that most people—even very intelligent people—could not. Both have adapted to modern circumstances to find success. Cowen has succeeded by navigating the waters of information overflow better than anyone else, and Newport by avoiding the currents and staying on shore. But, despite opposite approaches, both have found a way to make it work.
I think, in retrospect, I’ve spent far too much of my life searching for breakthroughs. It’s tempting to look for one pattern or a set of patterns that will light the way for all times and places. But that’s probably not a thing. As I hit the juicy part of middle age, I think when it comes to breakthroughs, they are more about flash than true light.
What is far more useful is the simple cultivation of skills. Work skills, physical skills, interpersonal skills, relationship skills—super-nichey skills in your chosen field that most people can’t even pronounce but you know better than anyone else on earth. The more skills you have, the more you can help people get things done. In short, if you want to be valued, be good at a lot of things that people find valuable.
Then, it’s about adapting those skills to different environments. The world today is not the same as it was five years ago and will not be the same as it will be in five years. You might be in a wheelchair or get cancer or win the lottery, or most likely, none of the above. So each of us must constantly adapt whatever skills we possess to new environments.
Learn and cultivate skills. Adapt them to whatever new environment you might find yourself in.
Rinse, repeat. How’s that for a breakthrough?
 Tim Ferriss’s book is called Tools of Titans, which is a way better name than “Metarules for Success.” That’s why he’s Tim Ferriss.
But as a lawyer, I’ve been privy to jury trials. They’re long and tedious. Things that you’d think should take minutes take hours. And things that you’d think would take an hour can take days.
Our legal system, flawed though it may be, is usually very careful and deliberate. The course of people’s lives hinges on what we do there, and so lawyers and judges scrutinize the process very carefully.
In an actual trial, people are accused of specific crimes. Typically, there are sub-elements of each crime: an action that includes a few different steps and usually a component of intent.
If someone is accused of killing someone, they can be specifically accused of manslaughter or murder. The latter requires that the killer acted with “malice aforethought.” If the prosecutor wants to convict a person of murder, they must submit evidence that convinces the jury not only that the person did the killing, but that they did so with the requisite intent. Some states have different gradations of first-degree and second-degree murder, or felony murder, each with different specific sub-elements.
The accused might introduce evidence of a reasonable defense. Perhaps the killing was in self-defense, or the accused was coerced.
Accusing someone of a crime is easy. Convicting a person of a crime requires precise argumentation.
In criminal justice, there is proportionality to the punishment depending on the severity of a crime. A person convicted of negligent homicide will spend less time in jail than the person convicted of manslaughter and less still than a person convicted of first-degree murder.
In trials, the jury hears long arguments from both sides about their version of the events. Each side is entitled by law to have a certified professional, bound by ethical and professional duties of competence and zealous advocacy, tell their story.
In trials, lawyers vet the jury to exclude those who have pre-existing biases and prejudices that would make it difficult for them to consider either side’s arguments with fairness.
Lawyers argue about what evidence should be considered in reaching a verdict, and judges make decisions about what is appropriate for juries to hear. The subject of what constitutes proper evidence is one of the most nuanced and complex areas of the law. Information that is unduly prejudicial or might inflame a jury is kept from them to avoid biasing their decisions. Most often, prior bad acts are inadmissible to prove a subsequent crime, unless the prior acts show a pattern of conduct.
For alleged crimes where the allegation is not the perpetration of a crime, but rather complicity in someone else’s perpetration of a crime, the standard for criminal prosecution is much higher. A criminal conviction usually requires not just awareness of someone else’s crime, or a failure to prevent it, but an affirmative act to aid and abet the commission of the crime.
As passive readers of news and media, we rarely have the information we need to make an informed judgment of another human being.
The accused in the news is rarely accused of a specific crime (at least by the news media itself), and so we can almost never determine whether the elements of the crime have been satisfied. We don’t know the facts from the perspectives of the victim and the accused; we only have access to biasing and prejudicial news reports. Most often, the people making the most noise about the allegations are people with pre-existing biases and prejudices that make them the least reliable sources of information.
When it comes time to pass judgment, we have no ability to mete out justice with proportionality. When it comes to internet justice, there are really only two settings: shame and ostracism, or not guilty. And the latter verdict is in short supply.
In sum, judging someone based on headlines violates all the principles our society has established for due process under the law. It’s the quintessence of prejudice.
I was in my last year of law school at Duke when the Duke lacrosse scandal blew up. A couple of times, I had television reporters interrupt my daily runs to try to get me to talk on camera about what had happened.
Both times my response (to the reporters, not on camera) was the same.
“I have no idea. I wasn’t there, and I don’t know anybody who was.”
The problem with trying to make an informed judgment of another human being is that doing it right requires a lot of work. You can’t read just one article. To have any hope of completeness, you’d need to read multiple sources from multiple perspectives. You’d have to carefully consider their potential biases.
Which leaves you with two bad options: spend tons of time online researching the potential impropriety of the actions of someone you probably don’t know and will likely never meet, or make an uninformed judgment based on incomplete information. The former is almost certainly a waste of time and energy, and the latter is horribly unfair.
The weird thing to me is the instinct I feel I have to opine on every matter of public discourse—and think that I’m providing a social good by doing so. As if I’m helping society by spreading misinformation. And I think many people, like me, feel a little guilty when we don’t do this. As if we’re not pulling our weight.
Perhaps an evolutionary explanation for the instinct to judge always and everywhere lies in our ancient history in much smaller bands. It’s well documented that until about 10,000 years ago, which is to say through most of human history, people lived in small bands of about 150 or fewer. In communities that small, you’d definitely want to seek out and eliminate all suspect behavior, because your survival and the survival of your family could depend on it.
But in online communities of millions and billions, where, because of the law of large numbers, lots of people are always going to be doing bad things, obsessing over everyone else’s perceived misconduct is almost never a good use of time. You could easily spend your whole life studying the details of violent crimes and never scratch the surface of all that’s out there, with little or no benefit to you or your community.
The instinct that was critical in bands of 150 is wasteful and unhealthy in the online communities we have today.
If I am ever on a jury, I plan to take that responsibility seriously. But until such time as I am summoned and bound by law to participate in the formal judgment of another human being, I will do my best to recognize that I almost never have enough information to judge another person, and that I’m better off refraining from expressing an opinion as to their guilt or innocence.
It’s important to notice that we have this instinct to constantly judge, but that it’s probably not in our best interests if we do.
 Here, I think it’s important to distinguish “judging” in the sense that a certain person should be shamed, banished from public discourse, or lose his or her employment from instances where we make snap judgments like, “do I want to spend time with this person?” or “that guy seems like a jerk.” The latter is inevitable and necessary to function. The former is not, unless you’re formally charged with that responsibility.
I used to have a crap ton of unhealthy habits but a real yen for new year’s resolutions. Sometimes I would even make resolutions while I was engaged in the very act of doing the things I was trying to stop. As a particularly ludicrous example, I remember a few times, years ago, when I would draw up plans or a “resolution” for how I was going to drink less or not at all at the very moment I was drunk and in the process of getting drunker.
Perhaps not surprisingly, this never worked. And I eventually came around to figuring out that if I wanted to stop doing something, the first step was to just stop doing it.
This year, after celebrating with a few glasses of sparkling water, I went to bed on New Year’s Eve around 9:30, put on a noise cancellation device, and woke up in 2018 feeling all right with the world.
And though I don’t much care for resolutions anymore, there are still a few bad habits I’m trying to kick: namely, I’m trying to give up refined sugars and to avoid news sources and people that frequently rely on or resort to ad hominem arguments.
My reasoning for giving up refined sugars is that they’re a collection of (incredibly delicious but) fattening, tooth-rotting substances with little to no nutritional value. And since I’m not the kind of guy who can eat just one or two cookies, it’s best that I avoid them altogether. When it comes to bananas and kale, I’m OK at moderating. But when it comes to pure sugar, I’m like a much paler, hairier version of the Cookie Monster.
My reasoning for giving up news sources and people that resort to ad hominem arguments is that they’re (easy to read but) emotionally toxic and make me angry and unhappy. I like to say that I don’t read Deadspin, Gizmodo, angry rants on Facebook, Buzzfeed, or any of that rubbish, but I still occasionally indulge. And the end result is usually the same as when I eat a bag of cookies: I feel swollen, bloated, angry, and disgusted with the world.
The thing that got me thinking about refined sugars was when I was audiobooking the Eddie Izzard autobiography, Believe Me: A Memoir of Love, Death, and Jazz Chickens. Izzard gave up refined sugar a few years back and rants about sugar in the book early and often. He attributes lots of negative things about his youth and early middle age to his excessive consumption of sugar.
But the line that stuck with me the most was when he said that refined sugar “destroys your tastebuds for real food.”
I had never really thought about it like that before, but it makes sense. The more your diet consists of refined sugars—chemically manufactured products designed to lure us with their sweetness—the less appealing real foods seem.
The thing that got me thinking about ad hominem arguments was the recent Sam Altman blog post about how he felt more comfortable talking about sensitive topics in China than in San Francisco. And how he thought this was a very bad thing.
I thought Sam made a very intelligent, reasoned, articulate case for the benefits of a society that is conducive to broader free speech norms.
But then of course the entire internet proceeded to shit all over him. Now, if you Google his name, the 4th thing that comes up is a Gizmodo article called “Sam Altman is an Idiot.”
Sam Altman is a Stanford grad, a wealthy and successful entrepreneur, and the head of the most prestigious startup accelerator in the world, at the age of 32. He is most decidedly not an idiot. If you are in a debate with Sam Altman and your initial conclusion is that he is an idiot, then that probably says more about you than it does him.
But that is the internet we have today.
I happen to agree with Sam. But I can appreciate that there is an intelligent, reasoned position on the other side of the debate. There is no easy way for a government or society to restrict the kind of speech we believe is unhealthy for our society while allowing the good stuff to get through, but it is possible that it can be done better than the US does today. And it is possible that the healthiest equilibrium is one that further restricts speech.
Since Sam is smarter than I am, I suspect he knows this, too.
But there was precious little reasoned counter-argument to Sam’s post. Instead, there was plenty of this.
At first, I didn’t think there was any connection between refined sugar and toxic online debates. And on the surface there is not. But over time, I started to notice how both began to seem very much alike.
They’re both easy and ubiquitous. And getting more easy and ubiquitous all the time.
Refined sugar is everywhere in the grocery store. Granola bars? Check. (Supposedly healthy) soups? Check. Emergen-C for when you’re sick? It’s the first two ingredients. Fancy yogurt? Tons of it. It’s in damned near everything that comes in a package. It’s quick, tasty, tempting, and easy. Ad hominem arguments are really easy, too. It’s easier to call someone you disagree with an idiot than it is to explain why you think they’re wrong—or to use your best efforts to persuade them to change their mind.
I’ve had a blog for more than two years now! After two years and 170 posts, it’s had literally (barely) thousands of readers.
The most popular article I’ve written thus far is called “The Hyperevolution of Hyperstimulus.” It’s about why capitalism is making it harder every day to be healthy.
Our sweets are getting sweeter and our booze is boozier and our drugs are getting more potent. Our social media are getting better at devouring our time and attention resources, our streaming TV channels are getting better at making us binge, and our news sources are getting better at getting our clicks. Those sites that don’t pull off this feat cease to exist. The ones that survive keep getting better at getting and keeping our attention.
This means that our entertainment today and our tasty treats are more enticing than at any time in human history. Yay 2018! But it also means that it’s never been harder to resist these temptations. Boo 2018!
A nuanced, thoughtful discussion is like a kale salad with cashews and a touch of lemon. A personal attack on a celebrity is like Count Chocula with chocolate milk and extra marshmallows. Or, if you’re an adult, it’s high-end Malagasy chocolate with caramel and sea salt. You know the former is better for you, but man, chocolate, caramel, and sea salt?
I remember the first time I ever went online, back in 1995. I couldn’t tell you why now, but for whatever reason, the first thing I thought to do was to see what I could find about one of my favorite bands, an obscure country-rock jam-band outfit from San Francisco called Dieselhed.
As a teenager growing up in suburban Denver, I didn’t know a single person who liked Dieselhed. But online I found so many—a whole world of people who traveled around the country to watch their shows, record live tapes, and exchange Dieselhed music.
I knew at that moment, then and there, that I had found my people. I knew that after the internet, nothing would ever be the same.
If Gizmodo had written an article in response to Sam Altman’s post with the headline, “Contra Sam Altman, here are seven reasons why social shaming of certain forms of speech will provide greater benefit to society than allowing them to continue,” nobody would have read the article. Since their actual post title, “Sam Altman is an Idiot” is the 4th thing that comes up when you Google the man’s name, we can safely assume that lots of people did read it. Or, at least, clicked on the link.
Avoidance of hyperstimuli is more about what’s left after you get rid of the sugar high than it is about dumping the sugar high.
In lives with less porn, endless sugar, obsession with athletes who play sports on TV, and Netflix and chill, there is more love-making, nutritious food, play, and real human interaction.
And in a world with less shouting online, there is more calm and quiet. There is less feeling that our society, our world, and our own lives are damaged beyond redemption.
We know that salad is better for us than gooey marshmallows. The question, of course, is what do we do with this information?
We could picket our local grocery store with a sign that says, “Down with Frosted Flakes!” but something tells me that’s not likely to be effective. And so, too, an organized boycott of Deadspin and Gizmodo and all sites of their ilk is unlikely to change much.
I believe in nudges and thoughtful choice architecture, but there’s a limit to how well that will work. Because if Facebook optimized for what was healthiest for you, rather than what was most likely to attract your attention, it would just be replaced by another social media platform that was better at getting your attention.
Capitalists are as content to sell you a Hanes t-shirt as a Coach purse. Businesses are looking to make money, and they’re willing to cater to those who want to blend in and to those who want to stand out. They’ll sell you sugary snacks or rolled oats or anything in between. People will find a way to sell you what you want to buy.
Right now, we’re buying (by clicking the links for) the shouting online. We’re buying the insults. We’re buying the personal attacks.
I have the sphere of influence of a small rodent. I know that this blog post will not move the needle of online discourse. But I’m hopeful this resolution (totally unrelated to the turning of the calendar year) to avoid toxic online conversations and people will improve my life. That I’ll be less anxious and upset. That I won’t have that constant, unrelenting feeling that the world is rotten to its core.
As I learned with Dieselhed way back when, there are niches for everything online. It’s just about grooming your little online garden so that it’s a reflection of the life you want. Maybe Metallica’s fan pages were 100,000 times more popular than Dieselhed’s—I’m sure they were. Doesn’t matter.
It’s good enough that you can find your people, and to know that they’re out there.
 There are plenty of intellectual types who resort to this garbage, too. One writer who’s recently lost me is Nassim Taleb. I’ve enjoyed much of his writing, particularly this, but then there’s stuff like this, where he coins “intellectual-yet-idiot,” a class of caricatured straw academic who, as best as I can tell, is just a composite of the opposite of him. Sure, he’s a smart and often innovative thinker. But name-calling is still just name-calling. It’s lazy and cheap. It doesn’t reflect well on you, even if you’re a writer of Nassim Taleb’s stature (particularly if you’re a writer of his stature).
 I never actually became friends with any of those people.
I tend to agree, but not for the reasons mentioned in the article.
The tax cut won’t go into effect until 2018. Which means that the effects of the tax cut won’t be felt by most people until April 2019. Whatever the impact of the policy—good, bad, or indifferent—its actual consequences won’t come about until well after the 2018 election.
But somehow in a 1500-word article about whether the tax bill would help the GOP in 2018, the fact that its impact would happen after the election never seemed worth mentioning.
Food isn’t about Nutrition
Clothes aren’t about Comfort
Bedrooms aren’t about Sleep
Marriage isn’t about Romance
Talk isn’t about Info
Laughter isn’t about Jokes
Charity isn’t about Helping
Church isn’t about God
Art isn’t about Insight
Medicine isn’t about Health
Consulting isn’t about Advice
School isn’t about Learning
Research isn’t about Progress
Politics isn’t about Policy
The book is about the elaborate dance between the pleasant-sounding, prosocial, altruistic motives we project to the world and the selfish motives that often underlie our behavior.
I’ve long enjoyed the writing of both Simler and Hanson, so I will confess that I was predisposed to like the book. I was not disappointed. It was a thoroughly enjoyable and easily digestible read on a difficult subject.
The book is an excellent survey of the literature on evolutionary biology and the psychology of self-deception. The authors draw from the research of Trivers, Tooby, Haidt, and others.
First, we’re suggesting that key human behaviors are often driven by multiple motives—even behaviors that seem pretty single-minded, like giving and receiving medical care. This shouldn’t be too surprising; humans are complex creatures, after all. But second, and more importantly, we’re suggesting that some of these motives are unconscious; we’re less than fully aware of them. And they aren’t mere mouse-sized motives, scurrying around discreetly in the back recesses of our minds. These are elephant-sized motives large enough to leave footprints in national economic data.
As an example, imagine someone who gives to charity. If the real reason for that giving is not only genuine care for others but also a desire to look good in the community, then, according to the authors, the best way to sell the altruistic story is to actually believe it—to believe that the real reason for giving is genuine care for others.
The authors quote Trivers, who says, “We deceive ourselves the better to deceive others.”
Politics is about coalition building rather than pure policy. Art is about showing off how much leisure time we have to perform challenging and hard-to-replicate tasks rather than beauty. Religion is about norm enforcement and hard-to-escape community bonds rather than divine inspiration. Education is about conformity, day care, and socialization rather than learning.
Nearly all of our social activities have hidden subtexts that are about more than what we politely discuss in public. These are our hidden motives in everyday life.
When I talked about this book with my wife, she said, “That’s interesting and probably at least partially true, but what do we do with that information?”
It’s a good question. It’s probably the question most people will ask themselves as they read the book.
Funny she should ask. It just so happens that this question was the central focus in the book’s last chapter and conclusion.
This was also what I considered the weakest part of the book.
The authors’ primary answer to the question is “situational awareness.”
That’s all well and good when the goal is to detect others’ bullshit, but an alarm went off in my head in the “Physician, Heal Thyself” sub-chapter.
After all, if one of the book’s main theses is that self-deception is strategic—that our lack of awareness of our own motivations serves a critical evolutionary purpose—how can situational awareness of that self-deception also be strategic?
We cannot “deceive ourselves the better to deceive others” and simultaneously strategically benefit from doing the opposite.
This seems flatly contradictory. If the very trait that is strategic in its absence can also be strategic in its presence, then neither trait would be strategic. The whole book is about not-P, and then the last chapter says, “But P!” The Elephant in the Brain is an anti-self-help book, and that’s okay. It might be the best anti-self-help book I’ve read. But in the last chapter it reverses course and goes into full-on self-help mode.
The correct answer to the question of “what do we do with this information?” is probably “situational awareness of our self-deception, though interesting, might not be that helpful in changing our own behavior. That’s why we were designed with this lack of self-awareness.”
But that’s not what the authors say. Instead, they try to rationalize why this brand of situational awareness is helpful, and how it can be used in our personal life and in business.
The authors state that, “Savvy institution designers must therefore identify both the surface goals to which people give lip service and the hidden goals that people are also trying to achieve.”
If taken literally, this is horrible advice! Savvy institution designers will do no such thing. Elon Musk would not be a better entrepreneur if he recognized, and openly stated, that his real motivation for building his companies was not the betterment of the human race but rather the glorification of his own ego and the raising of his own status.
If Stanford and other elite institutions advertised that their education was available for free to everyone and that the real value of a degree lay in bald, zero-sum, elitist credentialism; if churches advertised that the real reason for their elaborate ceremonies and overwhelming institutional demands was to demonstrate shared commitment and community-enforced norms rather than divine inspiration; if companies acknowledged that the real purpose of the business is the ego-glorification and wealth-creation of the owners, rather than whatever garbage is spouted off in the mission statement; if a political party admitted “what we’re really trying to do is raise the status of these groups and lower the status of those groups”—then all of these institutions would immediately and irrevocably unravel.
People whose coalitional membership is constituted by their shared adherence to “rational,” scientific propositions have a problem when—as is generally the case—new information arises which requires belief revision. To question or disagree with coalitional precepts, even for rational reasons, makes one a bad and immoral coalition member—at risk of losing job offers, one’s friends, and one’s cherished group identity. This freezes belief revision.
Savvy institutions have dogma. Savvy institutions have mission statements. Savvy institutions have mottoes, creeds, and fight songs.
Savvy institutions do not acknowledge their own inconsistencies.
Institutions that acknowledge their own weaknesses, biases, and inconsistencies are weak institutions.
This is why rationalists struggle to organize a meetup of 20 people in a metro area of two million people, whereas the Mormon Church and Islam are growing as fast as they are. This is why you’ll never meet a 3rd-generation Unitarian.
It would appear that the authors fell into their own trap—wishing for a pretty benefit to ascribe to our awareness of our hidden motivations, when the rest of the book tells us that the opposite is true.
Either way, this doesn’t take away from the greatness of the book on the whole. The overall work is still well worth reading. If any of these concepts are new to you, reading this book will make it hard to look at much of anything you do in the same way again.
“Did you see us say: ‘Even when we simply acknowledge the elephant to ourselves, in private, we burden our brains with self-consciousness and the knowledge of our own hypocrisy. These are real downsides, not to be shrugged off.'”
“You ask ‘how is it that situational awareness of that self deception can also be strategic?’ We didn’t mean to suggest that the gains from situational awareness will usually outweigh these harms. We just said ‘There are benefits.'”
The authors would probably acknowledge that charity is at least partially about the selfless act of giving, but would emphasize that we are programmed to play up the pleasant-sounding aspect of our selflessness while concealing our more selfish desires beneath the surface.
 I’m not normally inclined to focus on what I believe to be the most negative aspects of an author’s work. But in this case, Hanson claims that he prefers direct, frank criticism. So here goes.
Many are concerned about the monuments of the West and the East—to know who built them. For my part, I should like to know who in those days did not build them—who were above such trifling.
Henry David Thoreau, Walden
One underrated virtue is the concept of status flexibility.
So much of American society is obsessed with spending every spare minute of life clawing your way to the top of whatever ladder you might find yourself on.
Sometimes, relative status matters. But not always. Though you won’t hear many people talk about it, sometimes you can actually improve the quality of your life by playing lower status roles.
Consider the concept of the first follower, as espoused and explained by Derek Sivers:
By attaching yourself to a higher-status person as a follower, rather than trying to be a leader yourself, you can raise your own status. This is the basic principle behind finding a good mentor, finding a Ph.D. advisor, or brown-nosing any high-profile member of your community. In many ways, it’s easier to ride the coattails of someone who already has prestige than to try to achieve prestige directly.
Further, by playing the low status role in your initial conversations with new people you meet, you can raise your status long term. This is a critical subtext in the book How to Win Friends and Influence People, by Dale Carnegie, perhaps the most important self-help book of all time.
Here are the key tenets of that book:
Become genuinely interested in other people
Remember a person’s name
Be a good listener
Be wiser than other people if you can; but do not tell them so
To be interesting, be interested
Ask questions that other persons will enjoy answering
Talk in terms of other people’s interests
Make the other person feel important
In sum, play a role that temporarily increases your neighbor’s status, rather than worrying about your own, and you can reap rewards (or you can just have friends who enjoy your company).
And though obvious, it’s worth mentioning: it’s easier to play low-status roles than high-status ones. If everyone tries to go through the door first, there will be a logjam at the entrance. Better to open the door for your neighbor instead. You avoid the rush, and you get to be considerate while you’re doing it.
This may sound a touch cynical, but consciously deferring to others—and being content deferring to others—in most situations is among the most prosocial things you can do. Most of society’s conflicts arise when two or more people are clamoring for status. Avoid needlessly clamoring for high status when it doesn’t matter and you avoid many conflicts.
Trying to be a leader all of the time is a guaranteed path to stress and turmoil. Every society needs people who will play roles of modest status most of the time for it to continue to function. Not only is that rational, but it’s totally healthy. Whether you’re ultimately looking to angle for higher status in your preferred field, just looking to fly under the radar, or even if you just want to live a life of peace, consciously accepting a flexible stance on status is an effective strategy to get there.
That’s the title of a 2016 meta-analysis by Michal Bauer, Christopher Blattman, Julie Chytilova, Joseph Henrich, Edward Miguel, and Tamar Mitts. It’s a fascinating and contrarian view on the long-term consequences of violence.
The paper’s short answer is that yes, it does. That’s the counter-intuitive angle the paper is going for.
But almost certainly the more precise answer based on the weight of their research is, “war fosters cooperation among insiders, but not much cooperation, and perhaps even some hostility, toward outsiders.” This more nuanced answer is much less counter-intuitive than the title of the paper might suggest.
Think of the way that countries rally together during war or after a terrorist attack. When one’s survival is threatened, the instinct is to cooperate and work together to fend off an outside threat. The “rally around the flag” effect is real.
According to research by Bauer, Cassar, Chytilova, and Henrich (2014) in war-torn Sierra Leone, victims of violence were much less selfish and more inequality averse toward in-group members than those who had never been exposed to war. But there were no comparable effects of cooperation and unselfishness toward outsiders. Further, additional research by Cecchi, Leueld, Voor, and van der Waal (2015) on soccer players in Sierra Leone showed that victims of war violence behaved more altruistically toward their teammates but were also more likely to get yellow or red cards than those who were not victims of violence.
In the United States, the generation that fought in World War II is often referred to as “The Greatest Generation.” What made them so great?
That generation, more so than prior or subsequent generations of Americans, faced a real existential threat. They came together and overcame that threat, and that effort brought them closer together, creating a social cohesion that other generations do not possess.
To the extent that our country is particularly polarized now, perhaps one can view the lack of a serious external rival as a contributing factor in that polarization. Without external rivals to force us to direct our attention elsewhere, we increasingly direct our negative energy at our internal rivals.
Rationalist hero, AI alignment pioneer, and brilliant autodidact Eliezer Yudkowsky recently published a short book called Inadequate Equilibria: Where and How Civilizations Get Stuck. Like all Yudkowsky writing, it’s densely thought-provoking and intellectually playful. It’s not for everyone, but for those who enjoy thinking through the hardest and deepest philosophical problems, his writing is a can’t-miss.
The book has two central theses: 1) there are many areas where civilizations reach suboptimal equilibria because of flawed incentive structures and 2) too often smart people are too modest in challenging those inadequate equilibria.
When it comes to flawed systems, Yudkowsky believes that this is not a rare occurrence.
[M]ost of the time systems end up dumber than the people in them due to multiple layers of terrible incentives, and that this is normal and not at all a surprising state of affairs to suggest.
The first part of the book discusses which systems are likely to have flawed, exploitable structures and which are not. For Yudkowsky, there are three basic types of systems: 1) efficient and not easily exploitable, 2) inefficient but inexploitable, and 3) inefficient and exploitable.
For an example of a system that’s efficient and not easily exploitable, Yudkowsky points to short-term markets. There are millions of smart people who watch markets and are highly incentivized to know whether the price of Facebook stock is going up or down from one day to the next. If you knew for certain which way Facebook’s price would move tomorrow, you could make millions—and since everyone else would like to do the same, any such edge gets competed away. If you have a leg up on the market, you should already be wealthy. If you are not, then perhaps you should be more modest about your ability to outperform markets.
Yudkowsky also emphasizes that many systems are widely known to be flawed yet cannot be exploited because of skewed incentives. An example might be NCAA college sports, particularly football and basketball. The NFL and the NBA rely on collegiate sports to serve as minor-league systems for their professional leagues. College sports are popular, but the athletes—the very people dedicating their lives to providing the entertainment—receive no compensation. This is a screwed-up system that athletes have known is screwed up for decades. But no one can do anything about it, because of the extraordinary coordination problems inherent in the system.
Colleges certainly have no incentive to change the system; they profit from it. Individual athletes may wish to change the system, but to do so involves a huge coordination problem. Each individual athlete only gets one chance to aspire to a professional career, and the best way to do that is to excel in the conventional system. The best athletes, the ones with the most leverage, have the least incentive to rock the boat. They just have to play nice for a year or two and then profit from a professional career. Those with the greatest incentive to make a change, excellent athletes who aren’t quite good enough to become professionals, don’t have enough leverage to force a change. And the fans, the ones who watch and pay money to see college athletes, they like to pretend that the minor-league athletes who represent their alma maters are actually somehow representative of where they went to school.
So the system lives on, even though everyone knows it’s ridiculous, because the incentives of those who make decisions and benefit from the system are separated from those within the ecosystem, and the incentives of those who make decisions favor a perpetuation of the system.
The last category of systems is the inefficient and exploitable. It is here that Yudkowsky recommends we focus our attention. Every successful startup began when visionaries saw a flawed system and then went about fixing it. Yudkowsky is not necessarily arguing that everyone should start their own business, but rather that we should have the confidence to trust our own judgment: to understand that most systems outside of short-term markets have deep flaws, and, when the opportunity to exploit those flaws is right, to do so.
This is a central disagreement I have with modest epistemology: modest people end up believing that they live in an inexploitable world because they’re trying to avoid acting like an arrogant kind of person. Under modest epistemology, you’re not supposed to adapt rapidly and without hesitation to the realities of the situation as you observe them, because that would mean trusting yourself to assess adequacy levels; but you can’t trust yourself, because Dunning-Kruger, et cetera.
The alternative to modest epistemology isn’t an immodest epistemology where you decide that you’re higher status than doctors after all and conclude that you can now invent your own de novo medical treatments as a matter of course. The alternative is deciding for yourself whether to trust yourself more than a particular facet of your civilization at this particular time and place, checking the results whenever you can, and building up skill.
We live in a world where Donald Trump was elected president. This is not a place where the most qualified always rise to the top. As such, we ought not perpetually to defer to those with higher status. We have to rely on our own judgment to decide what to do and how to react in any given environment.
Yudkowsky does not suggest that it is easy to exploit vulnerable systems. To succeed, we have to be very cautious about picking our battles. He provides the following formulation for how often we can expect to exploit such systems.
0-2 lifetime instances of answering “Yes” to “Can I substantially improve on my civilization’s current knowledge if I put years into the attempt?”
Once per year or thereabouts, an answer of “Yes” to “Can I generate a synthesis of existing correct contrarianism which will beat my current civilization’s next-best alternative, for just myself?”
Many cases of trying to pick a previously existing side in a running dispute between experts, if you think that you can follow the object-level arguments reasonably well and there are strong meta-level cues that you can identify.
Yudkowsky then makes one final, powerful argument about why we should not have too much modesty in the face of daunting systemic challenges. Simply put, modesty is a losing strategy.
I think that’s my true rejection, in the following sense: If I saw a sensible formal epistemology underlying modesty and I saw people who advocated modesty going on to outperform myself and others, accomplishing great deeds through the strength of their diffidence, then, indeed, I would start paying very serious attention to modesty.
This does not mean that we should then employ an arrogant disregard for systems or people. It means we should make a lot of small bets, assess how we do in those bets, and then reassess and move forward. In Yudkowsky’s words, we should:
Run experiments; place bets; say oops. Anything less is an act of self-sabotage.
Given that so many believe in conspiracy theories, and how dangerous they can be, it’s amazing how little serious scholarship exists on why people believe in them.
Never one to shy away from a challenge, in 2008, Cass Sunstein, the most cited and influential legal scholar alive, wrote a paper with Adrian Vermeule to figure out why so many believe things that aren’t true.
Sunstein starts off by explaining why these conspiracy theories are so pernicious. He argues that if you are willing to believe that the Holocaust was faked or that 9/11 was an inside job, then your mistrust of institutions runs so deep that you’ll believe just about anything.
To think, for example, that U.S. government officials destroyed the World Trade Center and then covered their tracks requires an ever-widening conspiracy theory, in which the 9/11 Commission, congressional leaders, the FBI, and the media were either participants in or dupes of the conspiracy. But anyone who believed that would undercut the grounds for many of their other beliefs, which are warranted only by trust in the knowledge-producing institutions created by government and society. How many other things must not be believed, if we are not to believe something accepted by so many diverse actors? There may not be a logical contradiction here, but conspiracy theorists might well have to question a number of propositions that they seem willing to take for granted. As Robert Anton Wilson notes of the conspiracy theories advanced by Holocaust deniers, “a conspiracy that can deceive us about 6,000,000 deaths can deceive us about anything, and [then] it takes a great leap of faith for Holocaust Revisionists to believe World War II happened at all, or that Franklin Roosevelt did serve as President from 1933 to 1945, or that Marilyn Monroe was more ‘real’ than King Kong or Donald Duck.
Sunstein offers a few different explanations of why people want to believe conspiracy theories. First, he cites Karl Popper’s Open Society and Its Enemies for the idea that people need to find someone to blame for all of society’s ills. People aren’t hardwired to believe that complex problems may have complex origins. Conspiracy theories appeal to a desire for a simple cause-and-effect resolution—and a clear scapegoat—for every scary problem.
When Germany struggled after World War I and thousands starved, people looked for someone to blame. The combination of reparation burdens, bad monetary policy, and a worldwide financial crisis probably caused their problems. But these are complex and abstract causes. It was easier for Hitler and the Nazis to find a convenient scapegoat in the Jews.
While Sunstein acknowledges that the desire to find a scapegoat often happens, he finds Popper’s “hidden agent” hypothesis to be limited in its predictive scope. For example, there is no question that the events of 9/11 were caused by someone. The problem there is that conspiracy theorists think the wrong people did it.
According to Sunstein, more often conspiracy theories are caused by crippled epistemology—belief systems that are rooted in flawed decision-making, factual error, and lack of quality information.
For most of what they believe that they know, human beings lack personal or direct information; they must rely on what other people think. In some domains, people suffer from a “crippled epistemology,” in the sense that they know very few things, and what they know is wrong. Many extremists fall in this category; their extremism stems not from irrationality, but from the fact that they have little (relevant) information, and their extremist views are supported by what little they know. Conspiracy theorizing often has the same feature. Those who believe that Israel was responsible for the attacks of 9/11, or that the Central Intelligence Agency killed President Kennedy, may well be responding quite rationally to the informational signals that they receive.
Next, Sunstein points to rumors and conspiracy entrepreneurs. As we have seen in our most recent election, when there is financial incentive to give people certain information that they would like to believe, entrepreneurs are often eager to fill the void.
Finally, and perhaps most critically, Sunstein points to the problem of group polarization. As groups become increasingly polarized, they are more at risk for conspiracy theories. This is because of a well-documented phenomenon in which group members deliberating together form ever-more-extreme positions. If you get ten conservatives in a room together, they’re likely to end up much more conservative after they deliberate than when they began. The same phenomenon occurs with liberals. In a mixed group, individuals’ opinions will tend to converge, but when a group starts out with a directional lean and is left in isolation, it will grow more extreme over time.
This, when combined with a deep distrust of authority, leads to conspiracy theories. When two groups are polarized, one group may feel quite logically that the other group does not represent its interests. If one group is in power and the other is not, this scenario is fertile ground for conspiracy theories for the group not in power, because all information from the opposing group is inherently suspect.
Think of enemy propaganda leaflets dropped from airplanes during a war. If enemy planes dropped leaflets on you, and those leaflets contained arguments and beliefs that ran counter to everything you had previously believed, you would be disinclined to believe the substance of the leaflets.
For purposes of understanding the spread of conspiracy theories, it is especially important to note that group polarization is particularly likely, and particularly pronounced, when people have a shared sense of identity and are connected by bonds of solidarity. These are circumstances in which arguments by outsiders, unconnected with the group, will lack much credibility, and fail to have much of an effect in reducing polarization.
Because the proponents of these theories are inherently skeptical of authority, Sunstein argues that the most effective means of rebutting the theories is not formal government action, but rather cognitive infiltration.
In one variant, government agents would openly proclaim, or at least make no effort to conceal, their institutional affiliations. A recent newspaper story recounts that Arabic-speaking Muslim officials from the State Department have participated in dialogues at radical Islamist chat rooms and websites in order to ventilate arguments not usually heard among the groups that cluster around those sites, with some success. In another variant, government officials would participate anonymously or even with false identities. Each approach has distinct costs and benefits; the second is riskier but potentially brings higher returns. In the former case, where government officials participate openly as such, hard-core members of the relevant networks, communities and conspiracy-minded organizations may entirely discount what the officials say, right from the beginning. The risk with tactics of anonymous participation, conversely, is that if the tactic becomes known, any true member of the relevant groups who raises doubts may be suspected of government connections. Despite these difficulties, the two forms of cognitive infiltration offer different risk-reward mixes and are both potentially useful instruments.
The inherent difficulty in combating conspiracy theories is obvious. But studying and analyzing conspiracy theories, rather than dismissing their proponents entirely, as Sunstein has done here, seems like a positive step toward fortifying an open society with a strong epistemological foundation.