Friday Funday: Why Clocks Run Clockwise

Why clocks run clockwise.

“You don’t have to be good.” Gives me chills every time.

The ever-brilliant Kevin Simler of Melting Asphalt is developing a browser plug-in that makes your Twitter feed physically painful to view, to keep you from spending too much time on the site.

Dani Rodrik on how to defeat a demagogue. My take from last year.

Great New Yorker piece on how Stalin became Stalinist.

Congratulations to Peru for qualifying for its first World Cup since 1982. Zero bonus points for good sportsmanship. This video is from their opponent’s hotel the night before they qualified.

Sunstein on Conspiracy Theories

A disturbing number of people believe the Holocaust was either a complete or a partial hoax. A majority in the United States believe that there was more than one gunman in the JFK assassination. And a substantial share of the British public believes the moon landing was faked.

Given that so many believe in conspiracy theories, and how dangerous they can be, it’s amazing how little serious scholarship exists on why people believe in them.

Never one to shy away from a challenge, in 2008, Cass Sunstein, the most cited and influential legal scholar alive, wrote a paper with Adrian Vermeule to figure out why so many believe things that aren’t true.

Sunstein starts off by explaining why these conspiracy theories are so pernicious. He argues that if you are willing to believe that the Holocaust was faked or that 9/11 was an inside job, then your mistrust of institutions runs so deep that you’ll believe just about anything.

To think, for example, that U.S. government officials destroyed the World Trade Center and then covered their tracks requires an ever-widening conspiracy theory, in which the 9/11 Commission, congressional leaders, the FBI, and the media were either participants in or dupes of the conspiracy. But anyone who believed that would undercut the grounds for many of their other beliefs, which are warranted only by trust in the knowledge-producing institutions created by government and society. How many other things must not be believed, if we are not to believe something accepted by so many diverse actors? There may not be a logical contradiction here, but conspiracy theorists might well have to question a number of propositions that they seem willing to take for granted. As Robert Anton Wilson notes of the conspiracy theories advanced by Holocaust deniers, “a conspiracy that can deceive us about 6,000,000 deaths can deceive us about anything, and [then] it takes a great leap of faith for Holocaust Revisionists to believe World War II happened at all, or that Franklin Roosevelt did serve as President from 1933 to 1945, or that Marilyn Monroe was more ‘real’ than King Kong or Donald Duck.”

Sunstein offers a few different explanations of why people want to believe conspiracy theories. First, he cites Karl Popper’s Open Society and Its Enemies for the idea that people need to find someone to blame for all of society’s ills. People aren’t hardwired to believe that complex problems may have complex origins. Conspiracy theories appeal to a desire for a simple cause-and-effect resolution—and a clear scapegoat—for every scary problem.

When Germany struggled after World War I and thousands starved, people looked for someone to blame. The combination of reparation burdens, bad monetary policy, and a worldwide financial crisis probably caused their problems. But these are complex and abstract causes. It was easier for Hitler and the Nazis to find a convenient scapegoat in the Jews.

While Sunstein acknowledges that this kind of scapegoating is common, he finds Popper’s “hidden agent” hypothesis limited in its predictive scope. For example, there is no question that the events of 9/11 were caused by someone; the problem is that conspiracy theorists blame the wrong people.

According to Sunstein, more often conspiracy theories are caused by crippled epistemology—belief systems that are rooted in flawed decision-making, factual error, and lack of quality information.

For most of what they believe that they know, human beings lack personal or direct information; they must rely on what other people think. In some domains, people suffer from a “crippled epistemology,” in the sense that they know very few things, and what they know is wrong. Many extremists fall in this category; their extremism stems not from irrationality, but from the fact that they have little (relevant) information, and their extremist views are supported by what little they know. Conspiracy theorizing often has the same feature. Those who believe that Israel was responsible for the attacks of 9/11, or that the Central Intelligence Agency killed President Kennedy, may well be responding quite rationally to the informational signals that they receive.

Next, Sunstein points to rumors and conspiracy entrepreneurs. As we have seen in our most recent election, when there is financial incentive to give people certain information that they would like to believe, entrepreneurs are often eager to fill the void.

Finally, and perhaps most critically, Sunstein points to the problem of group polarization. As groups become increasingly polarized, they are more at risk for conspiracy theories. This is because of a well-documented phenomenon in which group members deliberating together form ever-more extreme positions. If you get ten conservatives in a room together, they’re likely to end up much more conservative after they deliberate than when they began. The same phenomenon occurs with liberals. In a mixed group, individuals’ opinions will tend to converge, but when a group starts out with a certain directional lean and is left in isolation, it will grow more extreme over time.

This, when combined with a deep distrust of authority, leads to conspiracy theories. When two groups are polarized, one group may feel quite logically that the other group does not represent its interests. If one group is in power and the other is not, this scenario is fertile ground for conspiracy theories for the group not in power, because all information from the opposing group is inherently suspect.

Think of enemy propaganda leaflets dropped from airplanes during a war. If enemy planes dropped leaflets on you, and those leaflets contained arguments and beliefs that ran counter to everything you had previously believed, you would be disinclined to believe the substance of the leaflets.

For purposes of understanding the spread of conspiracy theories, it is especially important to note that group polarization is particularly likely, and particularly pronounced, when people have a shared sense of identity and are connected by bonds of solidarity. These are circumstances in which arguments by outsiders, unconnected with the group, will lack much credibility, and fail to have much of an effect in reducing polarization.

Because the proponents of these theories have an inherent skepticism toward authority, Sunstein argues that the most effective means of rebutting them is not formal government action, but rather cognitive infiltration.

In one variant, government agents would openly proclaim, or at least make no effort to conceal, their institutional affiliations. A recent newspaper story recounts that Arabic-speaking Muslim officials from the State Department have participated in dialogues at radical Islamist chat rooms and websites in order to ventilate arguments not usually heard among the groups that cluster around those sites, with some success. In another variant, government officials would participate anonymously or even with false identities. Each approach has distinct costs and benefits; the second is riskier but potentially brings higher returns. In the former case, where government officials participate openly as such, hard-core members of the relevant networks, communities and conspiracy-minded organizations may entirely discount what the officials say, right from the beginning. The risk with tactics of anonymous participation, conversely, is that if the tactic becomes known, any true member of the relevant groups who raises doubts may be suspected of government connections. Despite these difficulties, the two forms of cognitive infiltration offer different risk-reward mixes and are both potentially useful instruments.

The inherent difficulty in combating conspiracy theories is obvious. But studying and analyzing them, rather than dismissing their proponents entirely, as Sunstein has done here, seems like a positive step toward fortifying an open society with a strong epistemological foundation.

Dr. George Sheehan on the Benefits of Physical Play

The first time I remember hearing the word “philosopher” and thinking that it was something I wanted to do was in reference to a man named George Sheehan. If you’re not a runner over the age of 40, you’ve probably never heard of him. But to runners from the ’70s, ’80s, and early ’90s, he was a ubiquitous avuncular figure who expressed his opinions on the most profound parts of life, always through the lens of a long-distance runner.

From the late ’60s to his death in 1993, he wrote a regular column for Runner’s World. He was an MD before he was known as a philosopher, and so his columns started off as medical advice for runners. But as his following grew and he got older, his writing shifted from a focus on the body to the mind and the deeper questions of life.

One topic that recurs throughout his writing that is almost entirely absent from the writing of other philosophers is the idea of physical play. For Sheehan, physical play was the starting point for a healthy body and mind. He argued frequently and vigorously against the stereotype that physical play is only for children:

As we age we stop following our physical bliss. The body is pampered rather than challenged. It is told to be quiet, and becomes no more than a receptacle for the mind and the spirit. Life becomes a matter of creature comforts. The challenge becomes its ability to withstand the effects of our bad habits. We are no longer athletes. We have become spectators.

This will never do. Among Emerson’s instructions for the good life was another terse statement: “Be first a good animal.” Life is not a spectator sport. Only to the good animal come the peak experiences, the joys, the epiphanies. All of us are Olympians. And each day brings with it success or failure, as it were, only to ourselves. How this plays out is determined much more by our body than we think. “The body is the source of our energy,” said Plato. We are our bodies, our bodies are us, and we must live this life physically and at the top of our powers.

— George Sheehan, Going the Distance

Sheehan sought out philosophers and poets who emphasized the benefits of play and time spent in nature and quoted from them liberally. He quoted Thoreau, Dickinson, William James, Aristotle, and Ortega y Gasset. Here’s another gem where he draws on the English philosopher Bertrand Russell on the benefits of play and exercise:

Russell thought it was impossible to be happy without physical play—of both mind and body. But such activity, he suggested, should be agreeable, directed to a desired end, and not contrary to our impulses.

“A dog will pursue rabbits to the point of complete exhaustion and be happy all the time, but if you put a dog on a treadmill he would not be happy because he is not engaged in a natural activity.”

I am an observer of happy dogs. Daily I see numbers of them walking with their owners on the boardwalk and grass in front of our beach house. They are a curious lot, constantly in motion and exploring the world around them. At times they are engaged in play, chasing through sticks or pursuing Frisbees. One characteristic is immediately evident. They are very serious when having fun. They may wag their tails but they are totally concentrated on what is about to happen.

Play is of equal importance to us. The things we do with our bodies should be done merely because they are fun—not because they serve some serious purpose. If we are not doing something that is enjoyable on its own account we should look for something that is. We may not find something as natural to us as hunting is to a dog, but we can come quite close.

George Sheehan lived and wrote with energy and vigor until the last days of his struggle with cancer. For a detailed account of his last months, Going the Distance is a beautiful and lasting portrayal of coming to grips with death and remembering a life well lived.

Lakoff on the Pervasiveness of Conventional Metaphor

George Lakoff may be the world’s most influential expert on the subject of metaphor. His book, Metaphors We Live By, and his renowned paper, The Contemporary Theory of Metaphor (1992), offer profound insight into how central metaphors are in language, opening our eyes to the fact that they are perhaps the most fundamental building block of how we communicate with each other.[1]

We’re all familiar with metaphors like “love is a rose” or “all the world’s a stage.”

These are what Lakoff describes as “novel metaphors.” This is when someone creates a metaphor that is not yet ubiquitous and uses it to communicate something (poetically, artistically, or otherwise).

These types of metaphors are not controversial. Lakoff’s research is on another type of metaphor, called “conventional metaphors,” that reside in everyday language.

These metaphors are so common to the way we communicate that we don’t think of them as metaphors. We think of them as literal language when they are not.

The goal of Lakoff’s research is to disprove the following beliefs.

All everyday conventional language is literal, and none is metaphorical

All subject matter can be comprehended literally, without metaphor

Only literal language can be contingently true or false

All definitions given in a lexicon of a language are literal, not metaphorical

The concepts used in grammar of a language are all literal; none are [sic] metaphorical

To demonstrate this, Lakoff introduces the pervasive metaphor of “Time as Motion.”

We often use motion, such as getting closer to an object, or moving forward, or moving backward, to describe the progression of time. This way of speaking is so ingrained that it takes a moment to realize that describing Time as Motion is not a literal expression of time. “The cat is on the mat” is a literal expression. If I were to say, “going forward, I’m going to make sure the cat does not sit on the mat,” the expression of time is metaphorical. What I’m really saying propositionally is, “in the future I resolve not to let the cat sit on the mat.” We don’t often think of the “going forward” expression as metaphorical in the sense that we think of “love is a rose” as a metaphor. But it is a metaphor, as are many of the other ways we express our relationship to time.

Lakoff gives the following additional examples of the Time as Motion metaphor:

There’s going to be trouble down the road

He stayed there for ten years

He passed the time happily

We’re coming up on Christmas

I’ll be there in a minute

This way of speaking is universal, so much so that we struggle to explain time without resorting to the idea of Time as Motion.

Similarly, we use Motion as a metaphor for Progress in many other ways.

For example, to say that Success is Reaching the End of the Path:

We’ve reached the end

We are seeing the light at the end of the tunnel

We have only a short way to go

The end is in sight

To say that Lack of Purpose is Lack of Direction

He’s just floating around

He’s drifting aimlessly

He needs some direction

To use references to Horses as a metaphor for Control of a Situation

Get a grip

Don’t let things get out of hand

Wild horses couldn’t drag me away

Keeping a tight rein

Whoa!

The main thrust of Lakoff’s research is to show that a far higher percentage of our conventional language is metaphorical than we might presume.

Many intellectuals downplay the robustness of metaphors to express complex and serious ideas. But according to Lakoff certain metaphors are universals; they’re an inextricable part of the human mechanism of communication. They’re not to be dismissed as a less serious way to express our ideas, but rather a fundamental tool to map our understanding of reality. And only by understanding the way we use metaphor can we understand the full shape of the linguistic maps we use to chart our understanding of our inner worlds.

[1] See what I did there?

Friday Funday, Vol. I: John Cleese Wins the Internet

If I were the Queen, John Cleese would be Sir John Cleese.

Brainpickings turned 11 this week! The brilliant, lyrical Maria Popova’s ten reflections on ten years of doing her blog.

Gelman on why Tom Wolfe likes to troll people about evolution.

Catalan protestors shut down the region (nation?) on Wednesday. Now, one of the main political parties plans to begin an operation to create borders along the highways connecting Spain and Catalonia. Something tells me this does not end well.

For contrast, a reporter tries to go around Northern Ireland getting locals to support the Republic of Ireland and Northern Ireland in their upcoming World Cup qualifiers. Most people were on board. Some were not.

What 15 minutes of solitude does to your emotions.

Nobel Economist Kahneman on How to Measure the Value of a Human Life

Nobel prize winner Daniel Kahneman is one of the most influential thinkers alive. Along with his late colleague Amos Tversky, he founded the discipline of behavioral economics, probably the most important movement in the field of economics in the last half century.

Kahneman is probably most famous for calling into question traditional conceptions of human rationality in economic models, but he has also called into question another model that we use to measure the value of the state of our health.

The “QALY” or “Quality-Adjusted Life Year” is one of the most important tools health experts use to decide how to apportion aid and health care around the world. If you have perfect health, your QALY is 1. If you’re dead, your QALY is 0. If you are a paraplegic and your quality of life is 50% of what it would be if you were in full health, then your QALY is .5. If you are suffering to the point where your life experience is worse than being dead, then your QALY is negative.
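Mechanically, the metric just scales years lived by that quality weight. Here is a minimal sketch of the arithmetic (the function name and example weights are illustrative, not a standard API):

```python
def qalys(quality_weight: float, years: float) -> float:
    """Quality-adjusted life years: years of life scaled by a
    health-quality weight (1.0 = full health, 0.0 = death,
    negative = a state judged worse than being dead)."""
    return quality_weight * years

# Ten years in full health yields 10 QALYs...
print(qalys(1.0, 10))  # → 10.0
# ...while ten years at a 0.5 quality weight count the same
# as five healthy years.
print(qalys(0.5, 10) == qalys(1.0, 5))  # → True
```

Health systems then compare interventions by how many QALYs they add per unit of cost, which is what makes the quality weight itself so consequential.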

That’s the QALY. This may seem like a very nerdy, abstract, and esoteric metric, and it certainly is. But very often, it is a nerdy, abstract, and esoteric metric that determines who lives and who dies.

QALYs are used in health administrative systems in the UK, Netherlands, Germany, Australia, Canada, and New Zealand[1] to inform pricing and reimbursement decisions and by many of the most influential charity organizations to determine who will receive benefits of charitable gifts. It is an essential component of the theoretical framework for the increasingly popular trend of Effective Altruism.

Kahneman’s issue with QALYs is that there are, we might say, “consistent inconsistencies” in the way people perceive their conditions, and these affect the data. For example, someone with a colostomy will rate their general well-being as relatively high, while someone who used to have a colostomy – but no longer does – will say that they were miserable when they had it (Smith, Ubel, Sheriff, 2006). Further, there is ample evidence that those who suffer severe spinal cord injuries – after an initial period of sadness – adjust their level of happiness upward within about five years of their debilitating injury.

If we know this to be true, how do we assess the QALY of someone who has a colostomy? Do we use the survey data from the person who currently has a colostomy and says that she is not miserable, or the survey data from the person who no longer has one and says she was miserable when she did?

Perhaps there might be a way of using some sort of weighted average of all the potential life experiences and incorporating them into the assessment of a revised form of QALY. According to Kahneman, we should “[s]et up one scale facing all the complexities of the data, the internal inconsistency, philosophical issues, the relative weight of experience, and other ways to look at utilities.”
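One way to picture such a scheme is as a weighted average over the different perspectives on the same condition. The function, ratings, and equal weights below are purely hypothetical, invented to illustrate the idea rather than anything Kahneman specifies:

```python
def blended_quality(ratings, weights):
    """Hypothetical blend of quality-of-life ratings taken from
    different perspectives (e.g. people currently in a health state
    vs. people recalling it), combined as a weighted average."""
    assert len(ratings) == len(weights) and sum(weights) > 0
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

# Suppose current colostomy patients rate their lives at 0.8, while
# former patients, recalling the experience, rate it at 0.4.
# Weighting both perspectives equally:
print(round(blended_quality([0.8, 0.4], [0.5, 0.5]), 2))  # → 0.6
```

The hard part, of course, is not the arithmetic but choosing the weights, which is exactly the philosophical question Kahneman raises.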

Kahneman presented this paper – which seems to provide valuable input that could improve on the QALY metric – back in 2009. Since then, it does not appear that sufficient momentum has built behind his ideas to change the status quo.

[1] The US has literally outlawed this approach. See also, A. Torbica, R. Tarricone, M. Drummond, The use of CEA in health care—is the US exceptional? (2016).

McShane et al on Why We Should Abandon Statistical Significance

“The difference between ‘significant’ and ‘not significant’ is not itself statistically significant.” Gelman and Stern (2006).

Taking this premise as the core of their critique of the way much of modern science publishing is conducted, McShane, Gal, Gelman, Robert, and Tackett recently published a short paper called “Abandon Statistical Significance,” arguably the most important paper published this year.

This is a paper that implicates all science publishing, as well as government testing, popular science writing, and perhaps more critically, what we say when we say something is meaningful in a scientific sense.

The crux of the paper is a deep question: how do we know when the results of a study are meaningful, as opposed to pure noise? Statisticians have traditionally answered it by testing a null hypothesis (there is no difference between two populations) against an alternative hypothesis (there is a meaningful difference between two populations).

As an example, if you wanted to test the question of whether people born on weekdays lived longer than people born on weekends, you’d collect all the actuarial data related to when people were born and when they’d died, and then measure whether there was a difference between the two populations (weekday births and weekend births).

We wouldn’t expect there to be much difference in these two populations. Especially if we collected data on large numbers of people, we’d expect the data to show little or no difference. And if there were a difference, we’d expect it to be just random noise. (In statistical terms, we would expect the p-value measuring the difference between the two populations to be large – well above the conventional .05 threshold.)

A large p-value like that is very weak evidence against the null hypothesis. In other words, there’s probably no connection between the day of the week you were born and how long you’re going to live.
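A toy simulation (not from the paper) makes this concrete: generate lifespans for both groups from the same distribution, so the null hypothesis is true by construction, and run a simple permutation test. The group sizes and the lifespan distribution are made up for illustration:

```python
import random
import statistics

random.seed(0)

# Null hypothesis true by construction: both groups are drawn
# from the same lifespan distribution.
weekday = [random.gauss(79.0, 10.0) for _ in range(5000)]
weekend = [random.gauss(79.0, 10.0) for _ in range(2000)]

observed = abs(statistics.fmean(weekday) - statistics.fmean(weekend))

# Permutation test: shuffle the pooled lifespans and count how often
# a group difference at least as large as the observed one arises
# purely by chance.
pooled = weekday + weekend
extreme = 0
n_perm = 1000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = abs(statistics.fmean(pooled[:5000]) - statistics.fmean(pooled[5000:]))
    if diff >= observed:
        extreme += 1

p_value = extreme / n_perm
print(f"p = {p_value:.2f}")  # with a true null, usually nowhere near .05
```

Because the null is true here, any observed difference really is random noise, and the test will usually report a large p-value.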

This part isn’t what’s controversial. What’s more controversial is what happens at the other end of the spectrum, because it’s hard to know the exact moment – the threshold – when we can say with confidence that the results of a study are meaningful.

This is what McShane & Company emphasize in their paper “Abandon Statistical Significance.” There is no magic point at which a study becomes meaningful. Rather, you have to look at all the circumstances related to the study to know what it means. One single statistical analysis cannot prove that something is significant, because other factors matter. The “other factors” that matter include “prior and related evidence, plausibility of mechanism, study design and data quality, real world costs and benefits, novelty of finding, and other factors that vary by research domain.”

Traditionally, scientists have rather arbitrarily decided that the 5% threshold (p < .05 – less than a 1-in-20 chance of seeing a result at least this extreme if the null hypothesis were true) is the measure of when something is meaningful. But there is no scientific magic fairy dust that transforms the data from meaningless to meaningful between a 6% and a 5% chance of occurring at random. And that’s what Gelman and Stern mean when they say that “[t]he difference between ‘significant’ and ‘not significant’ is not itself statistically significant.”

There is a huge motivation for researchers to find evidence that something is meaningful. Few careers are born out of papers showing that something isn’t meaningful or significant. As a result, scientists are highly motivated to find statistical significance and publish those results. This bias in favor of searching for statistical significance leads to lower quality research and more confusion about what is truly significant.

This is why we so often hear contradictory evidence about what diets, foods, and lifestyles are good for us and which are bad. Scientists are trying to make names for themselves by searching for supposedly meaningful evidence and publishing that information to the press.

McShane & Company convincingly argue that scientists and science publishers should instead look more deeply into the totality of the evidence related to any given study, rather than to magic thresholds that supposedly purport to show significance.

Sleeping with Your Sister, Trolley Problems, and Other Quandaries of Post Hoc Rationalization

In 2001, Jonathan Haidt, the brilliant social psychologist and proponent of heterodox thinking, wrote what he considers to be his most important paper, “The emotional dog and its rational tail: A social intuitionist approach to moral judgment.”

He starts it off with some controversy:

Julie and Mark are brother and sister. They are traveling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other. What do you think about that? Was it OK for them to make love?

Almost without exception, people react with visceral disgust at the idea of a brother sleeping with his sister, regardless of the precautions taken not to have children. But when pressed for reasons, it’s a little harder to explain why, other than to say that it’s just disgusting and wrong.

So why do we think this is wrong when there is no obvious harm?

When philosophers and policy-makers talk about morality and concepts of right and wrong, those discussions usually involve doing the greatest good for the greatest number of people, abstract rules of right and wrong, and similarly abstract concepts of virtue.

But according to Haidt, our real moral intuitions are not derived from positions borne out of reason, but rather from our visceral and automatic reactions to an elicited situation. Our intuitions come first, then the judgments come second. Reasons come third and are usually a “post hoc construction” meant to defend the initial intuition. When we act to defend our initial intuitions, we automatically find ourselves in lawyer mode, arguing for the verdict we have already reached (sleeping with your sister is REALLY BAD?!?!!!!!), rather than in judge mode, trying to find truth in an impartial way (is there anything truly wrong with sleeping with one’s sibling, if there is no attempt to have children?).

If you’re like most people, you think it’s wrong for a brother to sleep with his sister, even if it’s consensual, not for procreation, and no one is harmed or upset by it. And the reason why has nothing to do with whatever reasons we provide.

Let’s think about another set of moral problems. If you’re at all familiar with philosophy, the following trolley problems are about as well trodden as the bad guy at the end of the original Naked Gun movie, but I think they’re still helpful to further illustrate the point.

Scenario 1: There is a trolley going down a track toward a V junction where the track separates into two lines. A person standing at the junction has the choice whether to send the trolley barreling down a track with one person who is tied to the track or down another track with ten people tied to the track. You are the person at the switch. Which option do you choose?

Scenario 2: Same as scenario 1, except that the default situation is one where the trolley will kill ten people, and the person at the switch must affirmatively flip the switch in order to only kill one person and avoid killing ten. You are the person at the switch. Which option do you choose?

Scenario 3: Same as scenario 1, except instead of two tracks, there is only one. And instead of flipping a switch, you’re standing on top of a bridge next to a very fat man whose weight is sufficient to stop the trolley. To save the ten people, you have to push the fat man off the bridge onto the tracks. You are standing next to the fat man. Do you push him off the bridge?

Scenario 4: In a hospital in a small town, ten people are dying of various diseases of ten different organs of the body. If they do not receive transplants within the next 48 hours they will die. No recently deceased donors are available and none is expected in the next 48 hours. A perfectly healthy 18-year-old deliveryman is standing in the lobby waiting for a pickup. You are a doctor and you know that you can sedate him, take his organs, and save all ten of the other patients. The 18-year-old will then die. You are the doctor. Do you operate on the 18-year-old and take his organs?

Unless you’re a psychopath or a philosophy professor, your reaction to Scenario 1 was probably different from Scenario 4. Almost everyone would choose to kill one rather than ten in Scenario 1. Almost nobody would choose to kill one person instead of ten in Scenario 4. Why not?

The net effect is exactly the same. One person killed and ten people saved. But morally, they feel very different.

The likely answer according to Haidt is that our actual moral intuitions derive from an automatic reaction to an elicited situation. It’s not about who lives and who dies, necessarily, or about any other abstract rule or theory of right and wrong, but what feels right and what feels wrong.

Whatever reasons we give for why Scenario 1 is different from Scenario 4, the real reason is probably that Scenario 1 feels like the right thing to do and Scenario 4 feels like the wrong thing to do.

Most of us think that our reasoning provides the basis for our morality. But Haidt says that the causality goes the other way around. He tells us:

(a) There are two cognitive processes at work—reasoning and intuition—and the reasoning process has been overemphasized; (b) reasoning is often motivated; (c) the reasoning process constructs post hoc justifications, yet we experience the illusion of objective reasoning; and (d) moral action covaries with moral emotion more than with moral reasoning.

Moral intuition and moral actions usually stem from an evolutionary and culturally acquired sense of right and wrong. Once we hear about a moral quandary, we have a gut reaction that tells us the right answer, and then our brains go to work explaining why. According to Haidt:

[Multiple studies by] Kuhn (1991), Kunda (1990), and Perkins, Farady, and Bushey (1991) found that everyday reasoning is heavily marred by the biased search only for reasons that support one’s already-stated hypothesis.[1]

I suspect that most philosophers and rationalists consider the intuitive aspect of morality to be a bug rather than a feature. But it appears to be a fundamental aspect of how we actually make moral decisions.

But should we give up on using reason to make moral decisions? Haidt views his social intuitionist model as anti-rationalist only in a limited sense.

It says that moral reasoning is rarely the direct cause of moral judgment. That is a descriptive claim, about how moral judgments are actually made. It is not a normative or prescriptive claim, about how moral judgments ought to be made. (emphasis added).

But as we think through our personal belief systems about how we make decisions, we should be aware that more often than not, we are probably motivated in our reasoning, rather than using reason as our motivation.

[1] According to Haidt, “people are capable of engaging in private moral reasoning, and many people can point to times in their lives when they changed their minds on a moral issue just from mulling the matter over by themselves. Although some of these cases may be illusions (see the post hoc reasoning problem, below), other cases may be real, particularly among philosophers, one of the few groups that has been found to reason well (Kuhn, 1991).”

The Suffering of the Privileged

No matter how wealthy and privileged you are, there’s something fundamentally painful and difficult about the human condition.

Every year, about 45,000 Americans commit suicide.

Tens of millions of Americans are alcoholics.

Many millions more suffer from severe depression.

And this does not include drug addiction and other severe forms of mental illness.

These are the data for the wealthiest nation on earth, where nearly half of the top 1% of global earners live. These are the data for those who have won the lottery when it comes to material resources and good fortune.

Yet for many, all of that is still not enough to avoid suffering and misery.

I don’t know who said it, but “everyone is going through a personal battle that you will never understand.”

This is a point that isn’t acknowledged enough. Status competitions matter in a sense, but even those who win the status competitions sometimes suffer to the point of self-destruction.

Who wouldn’t trade career paths with Robin Williams?

No matter how much food we have in the cupboard, no matter how much status we have achieved in our careers, our brains never stop functioning as problem-finding machines. Their job is to perpetually alert us to what is going wrong and even what could go wrong.

Even when we should be thinking about how lucky we are, our brains are not hardwired that way. Even when all is seemingly well, all day they scream out, “Danger! Danger!”

We were designed for the survival of our genes, not contentment in our conscious states. Our minds actively work against any efforts we make to stay happy. Even if you’re 70 years old with a billion dollars in the bank, your mind will still dwell on the risk of dying in misery and penury. Life never stops being hard, in that sense.

Until we change the fundamentals of our hardware, that will never change. No matter how much good fortune we might have.

Rolling the Dice

Imagine you got invited to a party where everyone was going to play a game. And the starting positions of the game were going to be determined by the roll of two dice.

If you get two 1s, you start in last place. If you get two 6s, you start in first.

But there’s a twist, because the party favors are going to be allocated according to everyone’s starting positions. The people who start with two 6s will then move their party to a fancy mansion, where they’ll be given better party snacks, more comfortable chairs, fancy booze, and nicer clothes, and there will be bands of virtuoso musicians playing whatever kind of music they’d like. It’ll be a heckuva party.

Those who get two 1s will then be escorted off to a mud hut next to an arsenic mine. There won’t be any food or drinks. Not even clean water. It’ll be freezing at night and unbearably hot during the day. The only clothes they will get will be ones they find (if they can find any) in a landfill. There won’t be any games or anything to do. There won’t be toilets, either.

And then everyone in between will be sorted according to their relative starting positions. The 1-2 mud hut has an upgrade of one toilet to share among all the guests. The 6-5 party has good food and booze, but the band’s not as good.

You get the point.

And here’s the thing: it’s not a party. You’re going to spend the rest of your life there.

All of a sudden, this roll of the dice feels very consequential.

But it’s your turn, and lo and behold, you get two 6s! There was only a 1 in 36 chance that was going to happen.

Holy crap–you’re going to spend the rest of your life in the mansion! This is the greatest moment ever. You can’t believe your good fortune.

You arrive at the mansion, you have your first glass of champagne, you listen to the ensemble of Les Claypool, Branford Marsalis, and Pavarotti playing an exquisite international fusion combo set of the most inspiring music you could ever imagine, and then they make an announcement.

It turns out, there’s going to be another roll of the dice! And once again, they’re going to be reallocating the assets of each party among the guests. They reassure everyone at the double-6s party that each and every one of the new parties will still be superior to all of the other parties elsewhere, but that some of the new parties will be better than others. The new double-6 party will be unimaginably awesome, even compared to the first party, and then the lesser parties will be less so.

Whatever happens on this second roll of the dice, wouldn’t you still appreciate that in the grand scheme of things, you were very fortunate?

If you live in the United States or a country of comparable prosperity, you got double 6s as your starting place in life. You started life, relatively speaking, in the mansion. You literally have access to fancy food and beverages, and to material possessions, that much of the world can’t even imagine.

Of course, status games–all of our various competitions that determine who is more important and who is not, and who gets what and who does not–never end. They never will. But because everyone around us also started with two 6s, it’s easy to lose sight of how very fortunate we are. We get so stressed out and often feel so aggrieved when things don’t work out for us that we forget that, for all practical and meaningful purposes, we’ve already been picked as the winners of the most critical game of chance life had to offer.