Sleeping with Your Sister, Trolley Problems, and Other Quandaries of Post Hoc Rationalization

In 2001, Jonathan Haidt, the brilliant social psychologist and proponent of heterodox thinking, wrote what he considers to be his most important paper, “The emotional dog and its rational tail: A social intuitionist approach to moral judgment.”

He starts it off with some controversy:

Julie and Mark are brother and sister. They are traveling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other. What do you think about that? Was it OK for them to make love?

Almost without exception, people react with visceral disgust at the idea of a brother sleeping with his sister, regardless of the precautions taken not to have children. But when pressed for reasons, it’s a little harder to explain why, other than to say that it’s just disgusting and wrong.

So why do we think this is wrong when there is no obvious harm?

When philosophers and policy-makers talk about morality and concepts of right and wrong, those discussions usually involve doing the greatest good for the greatest number of people, abstract rules of right and wrong, and similarly abstract concepts of virtue.

But according to Haidt, our real moral intuitions are not derived from positions born of reason, but rather from our visceral and automatic reactions to a situation. Our intuitions come first, and the judgments come second. Reasons come third and are usually a “post hoc construction” meant to defend the initial intuition. When we defend those intuitions, we automatically find ourselves in lawyer mode, arguing for the conclusion we have already reached (sleeping with your sister is REALLY BAD!), rather than in judge mode, trying to find truth in an impartial way (is there anything truly wrong with sleeping with one’s sibling, if there is no attempt to have children?).

If you’re like most people, you think it’s wrong for a brother to sleep with his sister, even if it’s consensual, not for procreation, and no one is harmed or upset by it. And the real reason why has little to do with whatever justifications we offer.

Let’s think about another set of moral problems. If you’re at all familiar with philosophy, the following trolley problems are about as well trodden as the bad guy at the end of the original Naked Gun movie, but I think they’re still helpful to further illustrate the point.

Scenario 1: There is a trolley going down a track toward a V junction where the track separates into two lines. A person standing at the junction has the choice whether to send the trolley barreling down a track with one person who is tied to the track or down another track with ten people tied to the track. You are the person at the switch. Which option do you choose?

Scenario 2: Same as scenario 1, except that the default situation is one where the trolley will kill ten people, and the person at the switch must affirmatively flip the switch in order to only kill one person and avoid killing ten. You are the person at the switch. Which option do you choose?

Scenario 3: Same as scenario 1, except instead of two tracks, there is only one, with ten people tied to it. And instead of flipping a switch, you’re standing on top of a bridge next to a very fat man whose weight is sufficient to stop the trolley. To save the ten people, you have to push the fat man off the bridge onto the tracks. You are standing next to the fat man. Do you push him off the bridge?

Scenario 4: In a hospital in a small town, ten people are dying of various diseases of ten different organs of the body. If they do not receive transplants within the next 48 hours they will die. No recently deceased donors are available and none is expected in the next 48 hours. A perfectly healthy 18-year-old deliveryman is standing in the lobby waiting for a pickup. You are a doctor and you know that you can sedate him, take his organs, and save all ten of the other patients. The 18-year-old will then die. You are the doctor. Do you operate on the 18-year-old and take his organs?

Unless you’re a psychopath or a philosophy professor, your reaction to Scenario 1 was probably different from Scenario 4. Almost everyone would choose to kill one rather than ten in Scenario 1. Almost nobody would choose to kill one person instead of ten in Scenario 4. Why not?

The net effect is exactly the same. One person killed and ten people saved. But morally, they feel very different.

The likely answer according to Haidt is that our actual moral intuitions derive from an automatic reaction to an elicited situation. It’s not about who lives and who dies, necessarily, or about any other abstract rule or theory of right and wrong, but what feels right and what feels wrong.

Whatever reasons we give for why Scenario 1 is different from Scenario 4, the real reason is probably that Scenario 1 feels like the right thing to do and Scenario 4 feels like the wrong thing to do.

Most of us think that our reasoning provides the basis for our morality. But Haidt says that the causality goes the other way around. He tells us:

(a) There are two cognitive processes at work—reasoning and intuition—and the reasoning process has been overemphasized; (b) reasoning is often motivated; (c) the reasoning process constructs post hoc justifications, yet we experience the illusion of objective reasoning; and (d) moral action covaries with moral emotion more than with moral reasoning.

Moral intuition and moral actions usually stem from an evolutionary and culturally acquired sense of right and wrong. Once we hear about a moral quandary, we have a gut reaction that tells us the right answer, and then our brains go to work explaining why. According to Haidt:

[Multiple studies by] Kuhn (1991), Kunda (1990), and Perkins, Farady, and Bushey (1991) found that everyday reasoning is heavily marred by the biased search only for reasons that support one’s already-stated hypothesis.[1]

I suspect that most philosophers and rationalists consider the intuitive aspect of morality to be a bug rather than a feature. But it appears to be a fundamental aspect of how we actually make moral decisions.

But should we give up on using reason to make moral decisions? Haidt views his social intuitionist model as anti-rationalist only in a limited sense.

It says that moral reasoning is rarely the direct cause of moral judgment. That is a descriptive claim, about how moral judgments are actually made. It is not a normative or prescriptive claim, about how moral judgments ought to be made. (emphasis added).

But as we think through our personal belief systems about how we make decisions, we should be aware that more often than not, we are probably motivated in our reasoning, rather than using reason as our motivation.

[1] According to Haidt, “people are capable of engaging in private moral reasoning, and many people can point to times in their lives when they changed their minds on a moral issue just from mulling the matter over by themselves. Although some of these cases may be illusions (see the post hoc reasoning problem, below), other cases may be real, particularly among philosophers, one of the few groups that has been found to reason well (Kuhn, 1991).”