What I Learned from My Mini-Retirement at Age 34 (Mostly, that Retirement is Overrated)


From April 2012 to April 2013, my aggregate gross income was zero.

I read a lot. I ran a lot. I wrote some. Taught myself a little Python and LISP. Played around with a few startup ideas. But I had no overarching or under-arching purpose to what I was doing.

My prior job was working at a high-paying law firm. And since I am frugal and a decent investor, I had some money left over.

The only reason I went to law school was because I tested well and because I was tired of being broke. The only reason I worked at the high-paying law firm was because it was a high-paying law firm.

So there I was, 34 years old, with a decent amount of money and nothing to do.

And do you know what the experience taught me? That I needed some sort of professional focus in my life. And that doing nothing, or doing a bunch of random things without much direction, had started to erode my sense of purpose. It made me feel as if I were on an island of irrelevance.

And whereas before I had thought that early retirement was the ultimate dream, I soon realized that retirement – at any age – probably wasn’t for me. And the more I thought about and read about retirement, the more I realized that retirement probably isn’t ideal for a majority of professionals today.


The standard practice for most of us is to work hard during our prime years and then retire in our golden years.

And there is some historical rationale for this. In 1790, 90% of the labor force in the United States worked in agriculture. And farm work is hard, physical labor. For most, working a farm as a 70- or 80-year-old is impractical or impossible. But now, less than 3% of the labor force works in agriculture. And fewer than 20% of jobs in the United States today require even “moderate physical activity.”

For 80% of the population, there is no physical reason to compress your working years before age 65.


According to Pew Research, retirees are no more likely to be happy than workers. So, then, why retire? Apart from the global retirement industry, currently estimated at around $11 trillion, which certainly encourages us to think about retirement, there are other reasons.

To name a few:

  • Not working in our golden years sounds nice. No more bosses, no more stress, no more inconvenience of coming into the office every day. Given the choice between work and no work, most people think they’d prefer no work.
  • Spending the last years of our lives focused on anything other than loved ones and the best things life has to offer can seem trivial.
  • Many of us suffer declining health as we age. Whether it’s arthritis, incontinence, dementia, or any other ailment traditionally associated with old age, there are plenty of physical reasons why it might be hard to go into an office, even if the labor done at work isn’t physical.
  • Also, society might not want older workers to keep working. Mandatory retirement is illegal in most circumstances in the United States, but we’ve created a host of social incentives, from 401(k)s to Social Security, to encourage most workers to leave the workplace before age 70. Maybe this is for the benefit of retirees; maybe it’s for the benefit of the younger folks.


In spite of all the incentives to retire, not everyone seems to want to go.

Counterintuitive as it may seem, retirement, and the desire to retire, is actually less common among those who can most afford it.

According to a recent piece by NBC:

A new survey shows that America’s highest earners don’t plan on retiring until they are at least 70 years old. Lower-income groups—and even those considered “affluent”—plan to retire much younger, according to the study from Spectrem Group, a wealth research firm.

When asked, “At what age do you expect to retire?” nearly one-third of those with annual earnings of $750,000 or more answered “over 70.” Fifteen percent of them say they never plan to retire.

On the other hand, only 6 percent of those making under $100,000 a year plan to retire after 70, and the same percentage say they never plan to retire. Most plan to retire by 65.

The Spectrem survey is backed up by other, earlier studies. A 2010 study from Barclays Wealth found that 54 percent of millionaires say they want to continue working in retirement. Globally, 60 percent of those with a net worth of $15 million or more plan to stay involved with work “no matter what their age.”

Warren Buffett is still the CEO of a company with a $357 billion market cap at age 85. Charlie Munger is still vice chair, at age 92! No one needs to work less than they do. But they still go tap dancing to work every day.

Similarly, the average age of retirement for Supreme Court justices in the United States is now 78.3, and many work until they die.

Maybe it’s because elite performers tend to be workaholics. But maybe something else is going on.


I recently read this post by Kevin Simler called A Nihilist’s Guide to Meaning. In it, he describes meaning as a product of one’s influence on and connection to other people and institutions. According to Simler (I’m oversimplifying), the extent to which we are involved in groups and institutions where our actions have an impact on other people is the extent to which we find meaning in our lives. It’s our connections to others that give us meaning.

When we retire, we remove ourselves from the place where we are most likely to make an impact: work. What’s more, senior citizens tend to be further removed from other institutions that impact people, such as schools. A few institutions, like churches, welcome contributions from senior citizens, but most social institutions don’t have a place for the elderly. And so as people age, they often stop doing the things that previously gave meaning to their lives. That’s a recipe for obsolescence, depression, and decrepitude.

Warren Buffett can change thousands of lives with a check, a comment, or a stock order. He can make or break a company’s future or provide any charity of his choice the gift of perpetual solvency. His choices make a meaningful impact on others, and so he’s not inclined to delegate those choices, no matter how old he gets.

And so, too, with the rest of the Forbes list. Almost none of them is retired in any real sense, though they all possess more wealth than they could ever spend, no matter how exorbitant their habits. Still, nearly all of them shun a life of pure leisure, removed from work. From Sheldon Adelson, age 82, to Mark Zuckerberg, age 32, they’re still at it.

So, too, is the case for Supreme Court justices. They’re not eager to relinquish their hard-earned influence. Many liberal jurists begged Ruth Bader Ginsburg to retire while Obama still had the ability to appoint her successor. She didn’t. And at 83, if she were to die or fall ill while a Republican were in office, it would have grave consequences for the progressive causes she has advocated for her entire life. But you have to respect her determination to stick with it, and she’s still as important and influential as ever.

The point is, those who make an impact with work rarely wish to leave, no matter how old they get.


I work with startups. And I often run into people who say they’re looking to accumulate wealth and retire early. It’s something I hear once or twice a week.

To me, that’s a huge red flag.

I often ask those entrepreneurs, if you make all that money, what will you do then?

Some say they’ll move to a beach in Central America or to Spain or to Paris.

But then what? Will you really just drink watery Central American beer and go scuba diving for the rest of your life?

Sometimes they’ll tell me about some project they’d work on.

I tell them that’s probably what they should be working on now. (Because if you’re going to succeed at a startup, you’d better have the enthusiasm to keep working on it regardless of how long it takes and how much money you make in the short term.)

Jobs didn’t retire. Buffett didn’t retire. Zuckerberg won’t retire. And if you have the drive and energy that’s required to create a successful startup, I’m guessing it’s unlikely that you’ll ever retire, either.


I have a theory about how to get the most out of your professional life. And it’s adopted from a hobby I’ve been doing since I was 12: long-distance running.

Most training strategies in running follow a simple formula. Stress the body near its limits, rest, repeat. The types of stress vary, but the concept is nearly universal. Push the body to the extremes of what it can handle, and then let the body recover fully.

If you don’t stress the body enough, you’ll never improve your fitness level. You’ll never come near your full potential. Stress it to the extreme without rest, and you’ll suffer sickness, injury, or burnout – and you’ll never come near your full potential.

The goal is a goldilocks mixture of stress and rest, alternating between the two.

Some call this type of stress “eustress,” a term for the type of stress that is good for the body and mind. There is an optimal level of stress where a person is challenged to get the most out of his or her abilities. The exact formula for the perfect amount of stress varies from person to person and moment to moment. But without exception, every training regimen I have ever seen incorporates the concepts of stress and rest in some form.


The standard professional arc for most working Americans doesn’t jibe with the eustress model. From birth until we enter the working world, the focus is on education and self-discovery. Then, from that point until retirement, most professionals work 48-50 weeks a year – and then come to a near-complete halt.

That’s way too much stress in the middle, with not enough rest and recovery, and way too much rest and recovery at the end with nowhere near enough stress.

The ideal mixture is person- and circumstance-specific, but the ideal mixture for all of us should include work stress and work recovery throughout all stages of our lives.

Complete rest leads to atrophy and decay. The over-75 demographic watches more TV than any other age group. That doesn’t sound like sucking the marrow out of life to me. Retirees need eustress as much as any other age group – perhaps more, because of the inevitable process of physical and mental decay that happens to each of us as we age. And the conventional cycle of intense working years followed by a complete cessation of professional activity discourages us from obtaining the healthy stresses we need in our final years.


And of course, there is the fact that nearly 20% of Americans die before they reach age 65. Not delaying gratification can be a character flaw, but the decision to delay it too much is fraught with peril as well.


So rather than thinking about retirement, I think about my professional life in daily, weekly, quarterly, annual, and other periodic recovery cycles. If I go on a 30-mile mountain run, I need more than one or two rest days to recover. And if I bang out an intense project or a big deal where I’m working much more than normal, I try to take time to incorporate a longer vacation. If I’m burned out, like I was when I left my big-firm law job, I take a sabbatical. But I have no plan to retire. I’ll work and rest throughout my entire life, but as long as I have my faculties about me, I’ll have a project waiting for me at the end of every rest cycle.

From everything I have experienced, that seems to be the only way to get better. And from everything I have learned, that’s the only way to live.

The Strategic Value of Not Planning


About four years ago, I ran my first 100-mile trail race.

I spent four years working myself into shape. In 2008, I weighed 210 lbs. On race day in 2012, I weighed 150.

Leading up to the race, I ran an hour a day, every day during the week, and then three to five hours every Saturday and Sunday.

I prepped everything I could on the mental side too. I read every blog and race report I could find about the race. I watched every YouTube video anyone had made about the race.

And while I was preparing physically and mentally, I planned everything I could.

I had an Excel spreadsheet on my computer that estimated what my splits would be at each aid station. I planned scenarios for good days, mediocre days, and bad days. I planned what I would eat at each aid station down to the last calorie, and exactly how much water and electrolytes I would take in, too.

And then race day came.

About 30 seconds after the race started, I realized I was wearing the wrong shoes for the course. I fell for the first time twenty minutes into the race. And then I fell another dozen times in the first half marathon. It took everything in my power not to fall off the side of the trail.

By the time the first aid station came around, I was already about 25 minutes off my worst-case scenario pace.

Then, just after the aid station, I went up a hill and got lost after I missed a flag.

There I was, about three hours into a 24-hour race, and I was already an hour behind schedule.

I had spent hundreds of hours contemplating a range of possible scenarios. And all of them were wrong.

You can read the full race report here. Bottom line: a bunch of stuff happened that I didn’t expect. I struggled but I finished. It worked out ok, though nothing worked out as planned.


After the race, I couldn’t stop thinking about how much time I had wasted planning for the race.

I had always been a planner. And I had always considered myself rational and economical in how I spent my time. But never before had I noticed such a stark disconnect between the time I had spent preparing for something and the return on that planning investment. It made me reconsider not just how I planned for ultra races (an obvious waste of time), but the notion of planning any activity at all.


My thinking now, with limited exceptions, is that most long-term planning is useless.

With most endeavors, I try to limit any instinct I have to plan more than a week in advance.

It’s not that I think all predictions and plans are useless. It’s that I’ve come to believe that most plans created outside an area of expertise, or without a rigorous model, are no better than baseline guesses. And since most of my predictions and plans aren’t based on a sophisticated model or coming from a place of expertise, I’m usually better off not planning at all.

Stated another way: in every endeavor, there is a point at which the variables in a system become numerous enough, or poorly understood enough, that planning beyond that point is a complete waste of time.

That’s the moment I call the “Maximum Useful Planning Point,” or MUPP.


In his book The Signal and the Noise, Nate Silver writes about how much weather forecasters’ predictions have improved over the past 20 years.

But as much as weather forecasts have improved, the value of a long-term weather forecast still fades after a few days. Forecasts a day or two in advance are now highly accurate, but weather forecasts for ten days ahead are still of no value at all. After nine days, weather forecasters’ predictions are no more accurate than the baseline assumption of the average weather in a specific location on any given day based on prior years’ information. And these are the best models in the world. According to Silver’s book:

After a little more than a week, Loft told me, chaos theory completely takes over, and dynamic memory of the atmosphere erases itself . . . Once the atmosphere has had enough time to circulate, the weather patterns bear so little resemblance to their starting points that the models don’t do any good.

Weather is a dynamic system, in that the behavior of the system at one point in time influences its behavior in the future. And after a certain period of time in any dynamic system, chaos theory takes over and our models no longer serve to predict with any accuracy. That’s what I call the MUPP. For weather predictions right now, the MUPP is nine days for the best computer models.
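
This forecast horizon can be sketched with a toy chaotic system. The snippet below is an illustration I'm adding (not from Silver's book): it iterates the logistic map, a textbook chaotic system, and shows how a tiny measurement error snowballs until the "forecast" carries no information about the true state.

```python
# Toy illustration of a forecasting MUPP: in a chaotic system, a tiny
# error in the initial measurement grows exponentially, so the forecast
# eventually becomes no better than a baseline guess.

def logistic(x, r=4.0):
    """One step of the logistic map; r=4.0 puts it in the chaotic regime."""
    return r * x * (1 - x)

def forecast_errors(x0, eps=1e-9, steps=60):
    """Divergence between the true trajectory and a 'forecast' that
    starts from a measurement off by eps."""
    truth, model = x0, x0 + eps
    errors = []
    for _ in range(steps):
        truth, model = logistic(truth), logistic(model)
        errors.append(abs(truth - model))
    return errors

errors = forecast_errors(0.2)
print(f"error after 1 step:  {errors[0]:.1e}")   # still tiny
print(f"worst error overall: {max(errors):.1e}")  # on the order of the system itself
```

Past the step where the error saturates, rerunning the model adds nothing over the baseline guess; that saturation point is the system's MUPP.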


The point at which one’s predictions cease to predict better than the base-case scenario is the MUPP. This applies to any system where you might ask, “Is planning for an event X days away useful?” The farther you try to plan into the future, the less accurate your model; and the more dynamic the system, the shorter the MUPP.

Let’s use another example: chess. The more sophisticated a player you are, the better you can anticipate different scenarios, and novice chess players commonly suppose that becoming better at chess means thinking more and more moves ahead. This is true up to a point. But thinking ahead has declining utility, because if your opponent moves in an unexpected way, that analysis immediately loses its value. With 32 pieces and dozens or even hundreds of possible moves every turn, depending on the stage of the game, most moves by your opponent will render your long-term planning useless. This is why chess masters don’t simply sketch out their entire game plans ahead of time: with so many possible moves, the odds of executing such a plan are infinitesimal. Of course, in chess, knowing what your opponent is trying to do is important, but what is knowable, or even probabilistically estimable, usually lies only a few turns out. The MUPP in chess is usually a few moves ahead.
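
A back-of-envelope calculation makes the fragility concrete. The branching factor below is an assumed, illustrative number (roughly 30 plausible replies per opponent move, not a measured figure), and the sketch ignores a player's ability to force lines:

```python
# Rough sketch: how many distinct opponent-move sequences a single
# pre-planned line must survive, assuming ~30 candidate replies per
# opponent move (an illustrative assumption, not real chess data).

BRANCHING = 30

def lines_to_cover(depth):
    """Number of distinct opponent-move sequences `depth` moves deep."""
    return BRANCHING ** depth

for depth in (1, 3, 6):
    print(f"{depth} opponent moves ahead: 1 planned line out of {lines_to_cover(depth):,}")
```

Three opponent moves deep, a scripted line is already one path out of tens of thousands, which is why deep plans rarely survive contact with an opponent.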

In an endgame situation, a good chess player can absolutely think more than a dozen moves ahead, but that’s in part because it’s possible to force the action in a certain direction. In chess, the MUPP is very situational. In early to mid-game, the MUPP is only a few moves ahead. As variables decrease toward the end, it might be 15 or more moves ahead.

In games and in life, the MUPP is dynamic.


What I’m ultimately trying to combat here may seem like a bit of a straw man. But it’s something that’s important for me to remind myself. And it’s something I see often enough – organizations or individuals that seek to develop long-term plans or predictions, many of which simply have no hope of corresponding with reality.

The habit of long-term planning pervades every organization I’ve worked with, including most startups.

I’ve spent a disproportionate amount of my life constructing plans. And much frustration has followed when my carefully constructed plans diverged from the messiness and uncertainty of reality, which is what happened just about every time I made a detailed plan.

But part of me thought it was irresponsible not to have a plan.

And then a few months ago, I read this article by tech giant Marc Andreessen about his tips for personal productivity.

His biggest tip for personal productivity? No schedule.

Let’s start with a bang: don’t keep a schedule.

He’s crazy, you say!

I’m totally serious. If you pull it off — and in many structured jobs, you simply can’t — this simple tip alone can make a huge difference in productivity.

By not keeping a schedule, I mean: refuse to commit to meetings, appointments, or activities at any set time in any future day.

As a result, you can always work on whatever is most important or most interesting, at any time.

Want to spend all day writing a research report? Do it!

Want to spend all day coding? Do it!

Want to spend all day at the cafe down the street reading a book on personal productivity? Do it!

When someone emails or calls to say, “Let’s meet on Tuesday at 3”, the appropriate response is: “I’m not keeping a schedule for 2007, so I can’t commit to that, but give me a call on Tuesday at 2:45 and if I’m available, I’ll meet with you.”

Or, if it’s important, say, “You know what, let’s meet right now.”

Clearly this only works if you can get away with it. If you have a structured job, a structured job environment, or you’re a CEO, it will be hard to pull off.

But if you can do it, it’s really liberating, and will lead to far higher productivity than almost any other tactic you can try.

This is, of course, exceedingly difficult. But it is also profound.

If you try to anticipate what the best use of your time is today and tomorrow, you’re probably going to get that right. But much like the weather, if you try to predict what the best use of your time is going to be ten days from now, you’re probably going to be wrong. And so by committing to a schedule ahead of time, you’ll have committed to a non-ideal use of your time. If you try to schedule your life three months, six months, or years ahead of time, you’re likely to be not just wrong, but colossally wrong. And your commitment to doing something other than the best use of your time will work against you.

And this, to me, is the strategic value of not planning. Because most people overplan their lives, those who have both the discipline to work hard and the restraint not to overplan can seize opportunities that arise without notice and take advantage of the unpredictable, giving them a competitive advantage over those who are rigid or inflexible.

Frans Johansson’s book The Click Moment: Seizing Opportunity in an Unpredictable World focuses on this very point.

This book is about two very simple but highly provocative ideas. The first one is this: success is random, far more random than we have come to believe. The second is that there are a number of specific actions that individuals and organizations can take to capture randomness and focus it in our favor. The reason people tend to consider these ideas provocative is because success, we are often told, is a result of strategy, planning, and careful analysis. Luck, on the other hand, is a force that lies outside of our control. This book rejects these conventional perspectives and proposes a useful and compelling alternative.

Success almost never comes as a result of long-term planning, but through aggressive exploitation of nascent opportunities, ones that cannot be predicted ahead of time. Most businesses and professionals follow rigid plans and routines in hopes that hard work will lead to success. But, counterintuitive as it might sound, rigid planning and routines often eliminate the possibility of pursuing the best opportunities as they arise.

Mark Zuckerberg was a psychology major who developed a “hot or not”-type app in college, and it happened to develop into a business. Had he stuck with psychology, he would never have created Facebook. Bill Gates dropped out of Harvard because he knew that he couldn’t wait until he graduated and still develop the first interpreter for the Altair. Larry Page and Sergey Brin dropped out of grad school to start Google. Steve Jobs dropped out of Reed College to get a job at Atari (and later built Apple).

None of these businesses was planned.

So much of what it takes to become a visionary is the ability to abandon a plan, change your mind, and seize an opportunity.

Most startups know this well and have systems to accommodate for that fact. Startup methodology guru Steve Blank is famous for observing, “No business plan survives first contact with a customer.”

Many of the smartest engineering firms are also averse to long-term planning. Most notably, agile development methodology creates schedules according to tight feedback loops, mostly based on cycles shorter than a weather forecast.

Experiment. Test. Repeat. Explore. Make lots of small, calculated bets. Don’t bother with the long-term, rigid plans.

It’s the methodology that’s worked for many of the best-run organizations in the world. But it runs counter to the way we naturally think. We like to believe we can know what we’ll do a year or five years from now. But the moment we pretend we do, we put an arbitrary anchor on our future that limits how we live.


The ultimate question is: when is it more strategic to have a plan and when is it more strategic not to have a plan? Or, stated another way, how do we figure out the MUPP in any given endeavor and how do we use that information?

My conclusion is this: unless there is data or evidence showing that planning more than a week or so in advance adds value, there’s probably no value in doing so.

That feels chaotic. That feels uncertain. That feels unstable. But if you can discipline yourself to work and focus in an environment that is almost wholly absorbed by short-term priorities and immediate feedback cycles, you’ll find yourself able to exploit a range of opportunities most organizations will never even consider.

Republicanism, Democracy, Bureaucracy, and Instability

A few thoughts on Brexit:

  • After years of handwringing over whether the Greeks would leave the EU, whether the Spanish would leave the EU, or whether the Portuguese would leave the EU, it turns out Britain is the first to leave the EU. I guess it makes sense that a relatively rich country, rather than a relatively poor one, would be the first to leave. Why? Because they can.
  • This vote says as much about referendums as it does about British sentiment or the EU. Even if 99% of the time, most people want a certain political dynamic, as long as a referendum is called at the 1% of the time when they do not, any stable political system can be undone. The lesson here is that referendums are a terrific destabilizing force.
  • Republican forms of government are much more stable than democratic ones.
  • Are weak federalist systems always doomed to failure?
  • The bureaucratic fallout from this will be pervasive and it will last decades. Thousands of laws just got invalidated and there will be a vacuum where thousands of others used to exist. This is a legal cluster without modern precedent.

The Hyperevolution of Hyperstimulus

Most animals, including humans, can be made to prefer the fake to the real. Sometimes this is funny, and sometimes it is sad. Often both.

There are some birds, such as the graylag goose, that will raise the chicks of other birds, and will actually prefer to raise them, as long as the invader looks more impressive than their own offspring. Another bird can drop an egg into the nest, and if that egg looks healthier and fitter than the host’s own, the host will raise the impostor egg and neglect its own progeny.

This can be taken to ridiculous extremes. To note one colorful example, scientists were able to get a graylag goose to try to hatch a volleyball, rather than its own eggs, because, by the superficial aesthetics of the graylag goose, the volleyball looked more impressive.

Scientists call this phenomenon “supernormal stimulus.” It is what happens when an exaggerated version of what appeals to evolutionary instincts causes someone or something to engage in behavior counterproductive to its own survival (or the survival of its genes).

I have previously written that the biggest problem that most people face in the developed world is that we are hardwired to live in an environment that is very different from how we now live. Our world is filled with supernormal stimuli, and we aren’t equipped to handle it.

This post is about how the gap between the intensity of supernormal stimuli and more normal, healthy stimuli is growing. And I posit that future generations will be selected for their ability to resist these ever-evolving stimuli (“hyperstimuli”).


Nobel laureate Niko Tinbergen coined the term “supernormal stimuli” to describe his research showing that he could create artificial stimuli that appealed to animals’ instincts more strongly than the original objects for which those instincts had evolved. As Deirdre Barrett wrote in her book Supernormal Stimuli: How Primal Urges Overran Their Evolutionary Purpose, nature is replete with easily conned animals. Below are a few examples:

Male barn swallows have light brown chests and females choose the ones with the most intense color as an indication of fitness. Scientists with a $5.99 felt-tip marker can darken the chest of a previously scorned male, and suddenly females line up to mate with him.

Male sticklebacks ignored a real male to fight a dummy brighter red than any natural fish. They’d choose to escort an exaggeratedly round-bellied model over a real egg-bearing female.

Tinbergen and other students studied geese and found similar patterns. The characteristic that determined which egg a goose would roll back into the nest—color, size, markings—could be exaggerated in dummy eggs. The graylag goose ignored its own egg while making a heroic attempt to retrieve a volleyball.

And so, too, have we used the term “supernormal stimulus” to describe the non-naturally occurring treats we have come to prefer over the real-world treats around us.

The classic example of a human supernormal stimulus, or “superstimulus,” is a Snickers bar. There is no food in nature as sweet and as savory as a Snickers bar. Our instincts tell us to seek out sweet and savory foods to survive. So Mars produced an inexpensive, tasty treat that satisfies our craving for both salty and sweet foods, and they’ve placed them next to the checkout counters of nearly every grocery store in the United States. And we buy them in huge numbers.

It’s no good for us. We know it’s no good for us. But Mars has to make 15 million Snickers bars a day to keep up with demand. We eat far more of them than is evolutionarily adaptive (unless we are starving, the adaptive number of Snickers bars is zero). What’s worse, they serve as a substitute for naturally occurring, more nutritious food, leading to diabetes, obesity, and lots of other problems.

We prefer Snickers to fruits and berries. We choose fictional stories about relationships on television over seeking out and developing our own relationships. We prefer watching elite athletes play sports on television to exercising ourselves. Men prefer watching enhanced models copulate on computer screens to having sex with their real partners.

Like the goose choosing the volleyball over its own eggs, we often prefer to indulge an exaggerated, fake version of our lives over the real experience.


If you’re still reading, you probably know the backstory.

We evolved to live in scarcity in the savannah 10,000 years ago. We now live in abundance, mostly in cities and towns. We evolved with instincts that tell us to eat as much as we can, whenever we can, because in the savannah, we never knew where our next meal would come from. But now most of us live within a short walk or drive of hundreds if not thousands of food options, with cuisine from every part of the world. Now, almost everyone in the developed world has the resources to eat to the point of obesity. And many of us do.

But it isn’t just about food. Beyond food, everyone with discretionary income is bombarded by opportunities for potent forms of superstimuli – drugs, video games, sports, politics, pornography, alcohol. We have opportunities for distraction and stimulation many times more powerful than anything our ancestors could have thought possible.

And these stimuli keep getting more intense.


Meanwhile, the stimuli in our professional lives aren’t as intense as they once were.

Being chased by a lion, starving to death, or going to war with a neighboring tribe isn’t a very secure way to live, but what it lacks in security, it makes up for in intensity. Near-death experiences have a way of focusing the mind.

By contrast, most of our professional responsibilities today can be distinguished by their non-lethal nature. Whether you spend your days debugging code, drafting legal contracts, managing accounts receivable, or selling widgets, it’s a safe bet that you at least occasionally struggle with motivation. And that’s in part because on a deep, fundamental level, we all know that what we’re doing daily isn’t a matter of life or death. Accounts receivable isn’t a struggle for survival.

On occasion, political dynamics and other pressures create artificial stress that focuses our attention like our ancestor’s flight from the lion. But for most of us, professionally, the greatest stress is the fear of losing one’s job or failing at a business, not losing one’s life. And for most of us, if that happens, we’ll just get another job.

There are lots of ways to survive today that are easy to do, albeit incredibly boring. You can survive by working at Taco Bell or working in customer service or working as a lawyer. It’s just that it all damned near bores us to death while we do it. Our security creates that dullness that haunts our lives.

And so we have created an entire genre of non-fiction literature dedicated to the subject of productivity – the study of how to convince ourselves to do what we think we actually want to do. But the real reason we don’t work harder is that we don’t have to in order to survive.


The greatest fiction writers explore the most important psychological themes of their eras. Dickens explored the horrors of industrialization and the French Revolution; Conrad penetrated the dark heart of colonialism; Faulkner explored racism and bigotry in the Deep South.

David Foster Wallace, perhaps the greatest fiction writer of the last part of the 20th century, was obsessed with the subject of boredom. His last book, The Pale King, published posthumously because he killed himself before finishing it, was all about boredom. The book is ostensibly about accounting and the IRS, but the underlying message is about tolerating boredom in the modern world. Wallace wrote, and may have believed, that the ability to overcome boredom was perhaps the most important factor in success. He wrote, “[t]o be, in a word, unborable. It is the key to modern life. If you are immune to boredom, there is literally nothing you cannot accomplish.”

Because if you are unborable, you won’t get distracted. And if you don’t get distracted, you’re going to be a lot more productive than everyone around you.


Supernormal stimuli are getting more powerful every day. Because of competition for attention and resources, what was once considered tasty and great and entertaining a few years ago is no longer so today. To compete, industry must up the ante in the intensity of the stimulus to get us to notice.

To offer just a few examples:

Compare the clarity, quality, and variety of television today with those of a generation ago.

Compare the quality and intensity of video games with a generation ago.

Compare the ubiquity and intensity of marijuana with a generation ago.

Compare the variety and intensity of beer with a generation ago.

Compare the quantity, availability, and intensity of pornography (you can find your own links) with a generation ago.

30 years ago, HTML didn’t exist. Now, half the planet is online, and the average American spends 15-plus hours a week online.

Compare the stimuli we post on social media to the humdrum nature of our actual lives. We post pictures of our most exciting vacations, our tastiest meals, and our most interesting moments. We follow the lives of the most fabulous and most attractive people we know.

And so on.

The trend is unlikely to stop here. Just as the stimuli of 40 years ago seem quaint and laughably dull today, so, too, will the stimuli of the 2050s humble our current forms of diversion.

Our stimuli are evolving according to the dictates of Moore’s law, Edholm’s law, Metcalfe’s law, and the law of accelerating returns.

Meanwhile, our bodies and brains are more or less the same as they were when we were plodding around the Serengeti. Our superstimuli are turning into hyperstimuli, but our capacity to resist them is evolving at a glacial pace.

A few years ago, Eliezer Yudkowsky wrote an article about supernormal stimulus and the decline of civilization. In it, he cited a few articles about gamers who had literally died for lack of food or water because of their obsession with video games. These are extreme examples of people succumbing to superstimuli.

But there are others.

Life expectancy in the United States has plateaued over the past three years. Alcohol-related deaths have increased. Opioid addiction is rampant. And of course there is the problem of obesity and its related physical and social problems.

In total, the average American spends 11 hours a day in front of electronic media.

We spend nearly all of our time absorbed by technology and forms of stimulation that didn’t exist a few generations ago. Basically, how we spend our entire waking lives has almost nothing in common with how all humans lived before 1900.

It isn’t enough to say that we are constantly inundated and bombarded by supernormal stimuli. For many of us, it’s become our whole lives.


The gap between highly controlled, low-stimulus work environments and the potent stimuli available away from work has never been greater. And as the gap between normal stimuli and superstimuli grows larger, so, too, will the impact of these superstimuli on society.


I don’t know how extensive virtual reality’s impact will be on society, but it would appear to be the ultimate form of supernormal stimulus. It is by definition a fake reality. And the goal is for the technology to become so impressive that we will either not be able to distinguish it from reality or will prefer it to reality. To quote from Kevin Kelly’s recent book, The Inevitable:

The best of these achieve an unshakeable sense of presence. The usual goal for increasing the degree of realism while you tell a story is to suspend disbelief. The goal for VR is not to suspend belief but to ratchet up belief—that you are somewhere else, and maybe even somebody else . . . Cheap, abundant VR will be an experience factory. We’ll use it to visit environments too dangerous to risk in the flesh, such as war zones, deep seas, or volcanoes. Or we’ll use it for experiences we can’t easily get to as humans—to visit the inside of a stomach, the surface of a comet. Or to swap genders, or become a lobster. Or to cheaply experience something expensive, like a flyby of the Himalayas.

If Kelly is right, we’re just years away from ubiquitous, cheap, fake experiences that we will prefer to real life. In essence, it is the end game for superstimulus.


By now, you may believe that the point of this post is to warn about the dangers of supernormal stimulus.

But that’s not really where I’m going with it. That would be a normative judgment, and I’m not interested in those. What interests me is not lecturing people on the importance of avoiding Snickers bars or virtual reality, but rather the impact of these supernormal stimuli on the future of humanity.

Not just how they impact individuals, but how they impact broader social trends and demographics.

Those who literally kill themselves through addiction to superstimuli are relatively few, but those who choose not to reproduce, or to reproduce less, because of obsession with superstimuli are many – and that number may be growing.

Supernormal stimuli are by definition evolutionarily maladaptive. The graylag goose that tries to raise a volleyball won’t reproduce, nor will the gamer who spends all his waking hours playing video games or immersed in virtual reality.

Virtual reality sex might become more pleasurable than real sex soon enough. But if you choose the virtual kind over the natural kind, your genes will end with you.

Perhaps less dramatically, anyone who lives in an urban center probably has friends or acquaintances who have chosen not to have children, simply because they prefer an epicurean lifestyle. This is not a phenomenon that occurs in the poorest countries in the world, but rather one that is unique to the wealthiest. It is the equivalent of saying, “I don’t want to fulfill my evolutionary purpose because the food and wine around me is too tasty.”

There is data to back up the anecdotes. Few would argue that heavy drinking is healthy, but the data suggest the hardest drinkers may well be drinking themselves out of the gene pool: of the 25 heaviest-drinking countries in the world, not a single one has a birthrate at or above replacement level. Whether that is causation or correlation, I’ll leave for someone else to determine, but it is certainly an open question whether the heaviest drinkers are drinking themselves out of existence.

[Note: I wasn’t able to find any studies that analyzed alcohol or drug consumption and birthrates. Nor of any studies on the subject of heavy video-game playing on birthrates. If anyone knows of a study on either subject, please let me know.]

Contrast this with the birthrates of teetotaling Mormons and Muslims, and the trend is plain: if it continues, those who survive to populate the planet over successive generations will be those with the capacity – or the social structure – to resist superstimuli.

We have long known from the Stanford marshmallow experiment and similar studies that the power to delay gratification is strongly correlated with improved life outcomes. But as the stimuli around us grow in their intensity, so too might the importance of this trait in terms of creating improved personal and genetic outcomes.

Perhaps more than any other trait, those who can resist will be those who inherit the earth.

Whatever I Want, Whenever I Want

I’ve struggled with this blog to decide what its identity will be.

I have toyed with a few different sequences in terms of timing of posts, frequency of posts, and the structure of content.

But, seeing as how I am not doing this for money or for any sort of professional gain, I have decided that there is only one format that makes sense:

I’m going to post whatever I want, whenever I want to do so.

Joyous and Swift now has over 100 posts, so it has that base as a starting point. That makes me happy. But I found myself over the last few days struggling to string together posts day after day while busy with other things. I think the quality was starting to suffer because of a desire to post on a certain schedule.

Screw that. No more struggle. From now on, I post whatever I want, whenever I want to do so.

And that’s all I have to say about that.

Friday Shortcuts – Robots Reading Romance Novels

Last week, I wrote about how I was excited for robots to read this blog. Turns out, they might be more into reading romance novels instead.

Andrew Sullivan returned to writing this week, to pen an article about the demagogue who shall not be named. I agree with Sullivan that most people are far too sanguine about the recent turn of events. He is not like other politicians. Whether his possible election as president is, in fact, an “extinction-level event” for constitutional order is unclear. But I suspect if he gets elected it’ll be the worst development for American civil liberties in 150 years.

Why Are There So Few High-Status Threshold Workers?

[Epistemic Status: Speculative and based on availability heuristic. Still, the answer feels right]

Not long ago, Tyler Cowen gave a talk at my alma mater called, “Why hasn’t economic progress lowered work hours more?”

It’s great.

Mr. Cowen starts the lecture by talking about John Maynard Keynes’s prediction back in 1930, that by the time we get to 2030 (it’s getting close!), the average person would only work about 15 hours a week. Cowen then talks about the income effect and the substitution effect, and the breakdown of hours among different demographic groups.

One of the pieces of data that resonated most with me was that the top 1% is the group that is most dominated by substitution effect, rather than income effect. Or, stated without econ jargon, the richest, even though they need the money the least, are the ones working the most.

Seems counterintuitive, doesn’t it?

You’d think that once you’d earned about $300,000 a year, you’d get to the point where you’d say, “Gee, I have enough money. What I could use right now is some time to spend it.”

A person who came to this conclusion and then acted on it would be what Cowen calls a “threshold earner.” They’d be following through on Keynes’s logical prediction that once you had enough money, you’d then make an effort to enjoy the fruits of your labor, rather than laboring more.

But that’s not what’s happening.

Instead, it’s the exact opposite. The elite and wealthy are more obsessed with work than ever.

Mr. Cowen reviews the data and concludes, basically, “It must be that these people want to work those hours.”

Perhaps. Maybe that’s part of it. But as a lawyer who had the opportunity to inhabit the ecosystem where many of these creatures reside, I have a different theory.

My theory is that high-status workers work more because status is relative, and that nobody is more status-conscious than the top 1%.

I remember when I first went to law school, the salaries of first-year lawyers were $125,000 a year at all of the major law firms (it’s now $160,000). If you were enrolled at one of the T-14 law schools (the top 14, as they were known), you were virtually guaranteed one of these jobs at graduation (or at least that was true back in 2006). To someone accustomed to living well on a tenth of that as an English teacher in Barcelona, it seemed an impossibly large salary. One of my best friends in law school, though, was from New York and had many friends who had already been through law school and were working those jobs. He told me that some of his friends struggled to live on that salary.

This was incomprehensible. How could a person who makes that much money struggle to live on that income? That’s insanity!

My friend gave me an explanation of why this happens, and why it’s so common, that has stuck with me to this day. He said, “You may think that $125,000 is a lot of money now, but everyone at the law firm makes at least that. Second-year lawyers make $135,000; third-year lawyers make $145,000. Plus bonuses of $30,000-$200,000 a year. All the partners, of course, make millions a year. Right now, $125,000 is a lot of money to you. But when you get to the law firm, all it makes you is the poorest person in your new social circle.”

That’s why high-status earners work more. It’s not that they enjoy the work so much. It’s that they enjoy the status benefits of the work. And as anyone who has had the pleasure of hearing about the drama associated with admittance into Manhattan private pre-schools can tell you, status is an arms race that doesn’t end when you reach the top 1%.

If you were a 74th percentile basketball player, you probably wouldn’t define your self-image based on your basketball-playing ability. But if you played Division I college basketball, there’s a good chance that you do or that you at least once did.

Likewise, for most, the more competitive you get in social status games, the more they matter to you.

My guess is that if a top 1%-worker were dropped into a different ecosystem where they were no longer surrounded by other top 1%-ers, they’d immediately work less. They’d probably still work more than the average person, but without a competitive ecosystem pushing them to ever-higher levels of status, they’d immediately settle into an equilibrium where they could position themselves at the top of the hierarchy doing as little work as possible.

Also, as a final comment, it may be worth noting that the question of this post creates its own paradox. It’s hard to isolate “high-status threshold earners,” because once someone decides to step off the high-speed treadmill, they usually cease to be high status.

What I’m Reading – 5/4 – Team of Rivals

Team of Rivals, Doris Kearns Goodwin

There are few figures in American history larger than Lincoln. And the bigger the figure, the more the space they occupy in our consciousness feels like myth rather than history.

Team of Rivals is the first book I’ve read about Lincoln as an adult. And there was much about his story that felt new.

For example, I don’t remember knowing how much of a long shot he was to obtain the Republican nomination in the first place. On the first ballot at the Republican convention in 1860, Lincoln was 4th in the voting. Lincoln distinguished himself only as everyone’s second choice. And then only after longstanding party rivalries tainted the candidacies of William Seward and Salmon Chase did he receive enough support to gain the nomination.

I also don’t remember knowing how obscure Lincoln was as a national figure leading up to his election as president. When he was nominated, the national media was generally unsure whether his name was Abraham or Abram.

And I didn’t know that his assassination was a part of a broader conspiracy to kill the president, the vice president, and the secretary of state. The person responsible for killing the vice president decided against it last minute, and the assassination of the secretary of state (William Seward) was unsuccessful – though Seward and his son were both wounded on the same day Lincoln was assassinated.

Goodwin portrays Lincoln as a kind and disarming person. I don’t know why, but that struck me as novel, too. Kindness isn’t a trait that I associate with powerful people. But in contemporary letters, the word kind appears regularly in his peers’ descriptions of him. And, in reading the book, that appears to have surprised his rivals as much as it did me. Indeed, it seems that he used his kindness as a political skill to deflect attacks from his rivals.

And that strikes me as about as extraordinary and useful a skill as a person could have.

Epistemic Status

One of my favorite bloggers, Scott Alexander of Slate Star Codex, often begins his blog posts with a brief parenthetical about his “epistemic status.”

It’s his way of declaring at the outset how confident he is in what he’s about to say.

In a recent post, for example, he wrote:

[Epistemic status: very speculative, asserted with only ~30% confidence. On the other hand, even though psychiatrists don’t really talk about this it’s possible other groups know this all already]

The caveat about epistemic status is an attempt to be forthright while anticipating obvious criticisms of what he’s about to write before anyone reads it. It is a meta-wonky way of sidestepping the easiest forms of criticism and moving on to the heart of the subject matter.

It’s also a declaration of modesty and self-awareness. The least intelligent tend to be the most confident in their beliefs. And the most intelligent know that there’s much more that we don’t know than what we do.

Scott Alexander is as smart as it gets. So, even though he’s smarter than 99.99% of the population, indeed, because he’s smarter than 99.99% of the population, he knows there’s always a chance he might be wrong. And thus whenever he states an opinion he starts with a frank estimate about how likely it is that he is wrong.

And since I believe that the most efficient way to grow is to steal from people who are smarter than you are, I might adopt this practice as well. With due recognition that it is a lesson learned from someone much more intelligent than I am.