Correct models are bad

[Image: a serious baby]

“Our love of being right is best understood as our fear of being wrong.”
– Kathryn Schulz

We may think that correct mental models are always good to have, but the reality is that some correct models are useless or even harmful.

***

A model is correct to the extent that there are facts and evidence to support the model and its claims. We’ve talked about why wrong mental models are good. It’s also the case that correct models, and even correct information, can be bad. How could this be?

All models are simplifications of the world. Whenever we knowingly use a wrong model, we are aware of what the model doesn’t cover. The danger of using correct models is that we are more likely to apply the model without thinking about what’s been left out. Here are several weaknesses of correct models.

 

The model is ineffective

Some correct models don’t seem to serve much purpose, so even though they are correct, it’s not really worth keeping them around. An example of an ineffective model is the “calorie in, calorie out” approach to dieting. This is 100% correct: if someone eats more calories than they use, they will gain weight. If they eat fewer calories than they use, they will lose weight.
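
To make the model concrete, here is a minimal sketch of the energy-balance arithmetic. This is my illustration, not part of the original claim: it assumes the rough convention of about 7,700 kcal per kilogram of body fat, a figure that is itself debated, and the numbers are illustrative rather than dietary advice.

```python
# Naive "calorie in, calorie out" bookkeeping: a sustained daily deficit
# accumulates into a predicted weight change. The 7,700 kcal/kg figure is
# a rough convention (and itself contested); the point is how mechanically
# simple the model is.
KCAL_PER_KG_FAT = 7700  # assumption for illustration

def predicted_weight_change_kg(intake_kcal_per_day, burn_kcal_per_day, days):
    """Weight change predicted by pure energy balance (negative = loss)."""
    total_balance = (intake_kcal_per_day - burn_kcal_per_day) * days
    return total_balance / KCAL_PER_KG_FAT

# A 500 kcal/day deficit held for 30 days predicts roughly -1.9 kg.
print(predicted_weight_change_kg(2000, 2500, 30))
```

The arithmetic is trivially correct. Notice, though, that nothing in it tells a dieter how to endure the hunger that the deficit creates, and that is the part that actually decides success.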

This model is also useless, even if we ignore the problems with actually measuring calories.[1] The hard part of dieting is managing hunger and staying mindful of eating habits. Someone who wants to lose weight but eats the wrong foods will be constantly hungry, and almost no one has the willpower to endure constant hunger indefinitely, so the diet gets abandoned.

Better approaches focus on satiety and mindfulness. These approaches aren’t as “scientific” or “correct” as the calorie in, calorie out model, but they focus on what matters to an actual dieter. Even though the calorie in, calorie out model is completely correct, it is an ineffective model for the purposes of dieting.

 

The model is an obstacle to exploration

Once our brains have decided that a mental model is correct, we are unlikely to question the model. This can stunt our learning and personal growth.

A great example is the overconfidence effect. The famous behavioral economist Richard Thaler said, “Perhaps the most robust finding in the psychology of judgment and choice is that people are overconfident.” The statistician Nate Silver said, “Of the various cognitive biases that investors suffer from, overconfidence is the most pernicious.”

There is a huge body of research showing the existence of systematic overconfidence.[2] A well-known example is that more than 90 percent of American drivers rate themselves as better than average, even though at most half of all drivers can be better than the median driver. This is taken as evidence that people are overconfident.

But are they? Phil Rosenzweig, the business author, wasn’t convinced that this was the end of the matter. He asked people about their driving ability and got a similar result, with the majority of people saying that they were above average. But he also asked people about their skill in drawing pictures. This time, the majority of people believed they were worse than average.

So much for the overconfidence effect! What’s going on here?

Further research shows that people are not systematically overconfident. The reality is more nuanced: people are overconfident in tasks they find easy, and underconfident in tasks they find difficult. They rate their own skill fairly accurately, but they have little information about the skill levels of others, so they lean too heavily on their own performance when estimating how they compare.[3]
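
A toy simulation makes this mechanism concrete. The sketch below is my own illustration, not Rosenzweig’s actual model: it assumes that people answer “am I above average?” by anchoring on their own absolute success rate, because they lack information about everyone else, and all the numbers are invented.

```python
# Toy model: people know their own performance well but know little about
# others, so they answer "am I above average?" by checking whether they
# succeed more often than not (an absolute anchor, not a true ranking).
import numpy as np

rng = np.random.default_rng(42)
skill = rng.normal(loc=0.0, scale=0.25, size=10_000)  # relative skill levels

def share_claiming_above_average(task_base_rate):
    # Success rate on the task: easy tasks (driving) have a high base rate,
    # hard tasks (drawing) a low one.
    success_rate = np.clip(task_base_rate + skill, 0.0, 1.0)
    # The absolute anchor: "I succeed more often than not, so I'm above average."
    return float(np.mean(success_rate > 0.5))

print(f"easy task: {share_claiming_above_average(0.8):.0%} claim above average")
print(f"hard task: {share_claiming_above_average(0.2):.0%} claim above average")
```

With these made-up numbers, roughly nine in ten people claim to be above average on the easy task and only about one in ten on the hard task, reproducing Rosenzweig’s pattern without anyone being systematically overconfident.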

A simplistic model (the overconfidence effect) blinded an army of researchers to the subtleties of the situation. The model was correct: in certain cases, people are overconfident. The research studies were scientific and accurate. But because the research seemed so authoritative, no one thought to look beyond it.

 

The model is socially or politically dangerous

Some correct models may even be harmful. These are especially dangerous for people who value truth and correctness, but are not politically savvy.

In Stalin’s time, many genetics researchers were jailed or killed because their findings conflicted with Stalin’s views. In our time, similarly, genetics research is risky wherever the dominant culture values equality: any claimed link between genetics and personality, behavior, or ability is delicate, and a single misstep can lead to a career-ending loss of reputation.

More generally, correct models are sometimes too socially harmful, and it’s wiser to stick with a wrong model.[4] Learning correct models can be bad even if we never intend to apply them: it increases the amount of correct information we must suppress, raises the risk that we will say the wrong thing, and drains our mental energy and happiness because we must spend more resources policing ourselves.[5]

 

The model is easy to misuse

In academic research, statistical significance often decides whether a finding gets published. Significance is judged by the p-value: if it falls below a conventional threshold (commonly 0.05), the result counts as significant. The stakes are high, because statistically insignificant findings tend to go unpublished.

Motivated researchers have many ways to get the p-value they want. For example, they can divide an experiment into many subgroups; by chance alone, a few of these subgroups will pass the significance test. The results in those subgroups are then reported as significant while the rest are quietly dropped. This practice, a form of what is often called p-hacking, is a widespread problem, and well-known researchers have used it to deliberately create bogus research.
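
A small simulation shows how cheap this trick is. The sketch below is illustrative (the subgroup count and sizes are assumptions): there is no real effect anywhere, yet slicing the data into twenty subgroups makes at least one false positive more likely than not.

```python
# No true effect exists in any subgroup, yet with 20 independent tests at
# the 0.05 level the chance of at least one "significant" result is
# 1 - 0.95**20, about 64%. A standard safeguard is a multiple-comparisons
# correction, e.g. Bonferroni: require p < 0.05 / 20 instead.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subgroups, subgroup_size = 20, 50  # assumed for illustration

false_positives = []
for g in range(n_subgroups):
    # Treatment and control drawn from the SAME distribution: zero true effect.
    treatment = rng.normal(loc=0.0, scale=1.0, size=subgroup_size)
    control = rng.normal(loc=0.0, scale=1.0, size=subgroup_size)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < 0.05:
        false_positives.append((g, round(float(p_value), 3)))

print("'Significant' subgroups despite zero real effect:", false_positives)
```

Report only the subgroups that land in false_positives, drop the rest, and pure noise becomes a publishable “finding.”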

Statistical significance is a good quality indicator when used properly. Sadly, the incentives of academic research encourage people to game the system. Without careful safeguards against misuse, bad outcomes can result even from correct models.

 

Consequences of misuse are severe

More bad news: even if a correct model is hard to misuse, it can still lead to bad outcomes! This happens when the consequences of applying the model wrongly are catastrophic.

For example, take the model “immigration is good.” Immigrants typically contribute more than they receive in benefits, they increase the share of the working-age population, they bring useful skills and education, and they allow the host country to grow its population despite low birthrates.

The immigrants also benefit, as we would expect. In return for uprooting their familiar life, economic immigrants are able to earn more money and build a better life for themselves. They also help their families and country by sending money back home. Immigration is a classic example of a win-win situation.

But immigration is dangerous because it is a big decision that is hard to reverse. In 2015, the German Chancellor, Angela Merkel, let in a million refugees as a humanitarian gesture, relying on research showing that immigration is good. The refugees included many skilled and qualified people, and policy analysts thought they would benefit Europe. Unfortunately, Merkel underestimated the challenges of absorbing such a large number of people, the unequal burden placed on different European countries, and the conflict caused by differing cultural and social norms. The result alienated many parts of Europe and enabled the rise of extremist and populist parties.

 

In summary, there are several ways in which a model can be correct but still not worth using:

  • The model is ineffective
  • The model is an obstacle to exploration
  • The model is socially or politically dangerous
  • The model is easy to misuse
  • Consequences of misuse are severe

Don’t be so quick to embrace a model just because it is correct!

We might feel guilty about ignoring correct models. It feels wrong to deliberately refuse knowledge; surely the path to self-improvement lies in gathering correct information? I hope this article shows why that’s not the case, and that we should think carefully before using correct models of the world.


* Edited on 8 March 2018 to clarify the definition of correctness.
1. Here is a visual explanation of some potential problems with calorie intake measurement.
2. As of this writing, the Wikipedia page on the overconfidence effect has no fewer than 34 references to academic papers showing the existence of a systematic overconfidence effect.
3. This example is directly taken from chapter 5 of Phil Rosenzweig’s excellent book, Left Brain, Right Stuff. This book receives my highest recommendation.
4. Shouldn’t we try to change an unjust system? Yes, but we need strong political skills to do so, instead of just relying on winning the argument with the truth.
5. For some illustrative examples, imagine a person in China learning about Mao-era history, or a person in the U.S. learning about the 1950-1953 bombings of North Korea.

6 thoughts on “Correct models are bad”

  1. Correct models can also be bad due to imprecision. If I predict that it will rain this week with 20% probability, that is less useful than being able to tell you that it will rain on Tuesday with 20% probability and with 0% probability on the other days.

  2. No edit button on this website, annoying, but the lack of precision ties in well to the overconfidence effect. It’s easy to think that because we have a working model for something, we don’t need to look at it in any more detail, and that can be wrong.

  3. That’s a good point. A correct model can be too imprecise for its output to be useful. And as you say, it makes us less likely to look for better models. This is a particular problem for models which are prestigious or claim to be authoritative.

    This site is using wordpress.com which doesn’t have the ability to allow editing. I will look into alternatives that allow comment edits!
