Andy Evans

Escape from Model Land

All models are wrong, but some are useful — and some are dangerous. Drawing on Erica Thompson's book, this article explores the limits of models, the cost of ignoring them, and how to behave once you accept that your model is not the world.

The horse race experts

In the 1970s, the psychologist Paul Slovic ran an experiment with professional horse race handicappers. He asked them to predict race outcomes using a limited set of variables — just five pieces of information per horse. The experts performed well. Their predictions were reasonably accurate, and they knew it: their confidence was calibrated to their actual hit rate.

Then Slovic gave them more data. Ten variables. Twenty. Forty. With each increase, something striking happened: the experts’ confidence rose steadily — but their accuracy did not. In fact, it declined. The additional information did not help them predict better. It helped them construct more elaborate stories about why they were right, while quietly degrading the quality of their actual decisions.

The lesson is uncomfortable. More data does not automatically mean better decisions. If you do not know which variables genuinely matter — if you cannot distinguish signal from noise within your model of the world — then adding more inputs makes you more confident and less accurate at the same time. This is the central danger of modelling: the seductive precision of a complex model can mask a fundamental misunderstanding of what drives the outcome.

What is Model Land?

In her book Escape from Model Land, the mathematician Erica Thompson introduces a concept that should be required reading for anyone who builds or relies on quantitative models. Model Land is the world inside the model — a place where everything is precisely defined, internally consistent, and beautifully structured. In Model Land, there is a correct answer to every question, because the model’s assumptions define the universe.

The danger is forgetting that Model Land is not the real world. The precision inside the model is a property of the model, not a property of reality. When a financial model produces a fair value of £23.47, the number has the appearance of exactitude. But the confidence we place in that number should be governed not by how precisely the model computes it, but by how well the model’s assumptions correspond to reality — and that is a question the model itself cannot answer.

Thompson’s argument is that we spend too much time optimising inside Model Land — refining parameters, adding complexity, back-testing against historical data — and not enough time asking whether the model’s structure is even approximately right. The map is not the territory. The more detailed the map, the more tempting it is to forget that.

The limitations of models

All models simplify reality. That is their purpose — to make something complex tractable. But every simplification encodes assumptions about what matters and what can be ignored. The limitations are structural, not fixable with more data:

  • Models assume their own structure. A regression assumes linearity. A Monte Carlo assumes a distribution shape. A discounted cash flow assumes that future cash flows are the right thing to discount. These structural choices are not tested by the model — they are imposed on it (the sketch after this list makes the point concrete).
  • Parameters are fitted to past data. Every calibrated model is an exercise in assuming that the future will resemble the past in the specific ways the model captures. When the regime changes — and in markets, it always eventually does — the model breaks precisely where it was most confidently calibrated.
  • Tail events are systematically underweighted. Models built on historical data will underestimate the frequency and severity of events that have not yet occurred in the sample. The 2008 financial crisis was not a six-sigma event — it was an event that the models were structurally incapable of anticipating.
  • Complexity creates false confidence. As Slovic’s handicappers discovered, a more complex model does not necessarily produce better predictions. It produces more detailed predictions, which feel more convincing — but feeling convincing and being accurate are different things.
  • Expert judgement is not eliminated by models — it is embedded in them. Every modelling choice — which variables to include, which functional form to use, which data to train on — is a judgement call. The model does not replace human judgement; it encodes it, often in ways that are invisible to the end user.
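
To make the first of these points concrete, here is a minimal sketch in Python. Two models are calibrated to the same invented data; both fit the sample tolerably well, and both then extrapolate to confidently different answers. Everything here (the data-generating process, the sample size, the two functional forms) is an assumption chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented "history": 20 noisy observations of a mildly nonlinear process.
x = np.linspace(0, 10, 20)
y = 0.5 * x**1.5 + rng.normal(0, 1.0, size=x.size)

# Model A assumes the world is linear; Model B assumes it is quadratic.
# Both are calibrated to exactly the same data.
lin = np.polyfit(x, y, deg=1)
quad = np.polyfit(x, y, deg=2)

def rmse(coef):
    """In-sample root-mean-square error of a fitted polynomial."""
    return np.sqrt(np.mean((np.polyval(coef, x) - y) ** 2))

# Both fit the sample tolerably well...
print(f"in-sample RMSE, linear:    {rmse(lin):.3f}")
print(f"in-sample RMSE, quadratic: {rmse(quad):.3f}")

# ...but ask each for a forecast outside the sample and the precise
# answers diverge, because the structure was imposed, never tested.
print(f"forecast at x=20, linear:    {np.polyval(lin, 20):.2f}")
print(f"forecast at x=20, quadratic: {np.polyval(quad, 20):.2f}")
print(f"noise-free truth at x=20:    {0.5 * 20**1.5:.2f}")
```

Nothing in the fitting procedure flags the problem. Both models are internally consistent; only the out-of-sample question exposes the disagreement.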

The cost of ignoring the limitations

In finance, the cost of mistaking Model Land for reality is not academic — it is measured in capital destroyed, careers ended, and systemic crises triggered. The pattern repeats:

Risk models before 2008

Value at Risk models told banks that the probability of large losses was negligibly small — because the models assumed returns were normally distributed and that historical correlations were stable. Both assumptions were wrong in exactly the way that mattered. The models did not fail because the maths was wrong. They failed because the assumptions encoded in the structure were wrong, and nobody was incentivised to question them.
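
A hedged sketch of that failure mode: fit a normal distribution to a fat-tailed return series, then ask both the model and the data how likely a 5% one-day loss is. The return series is synthetic (a Student-t with three degrees of freedom standing in for reality), so the numbers are illustrative, not estimates of any actual portfolio.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic "reality": fat-tailed daily returns (Student-t, 3 degrees of
# freedom), rescaled so the daily volatility is about 1%.
returns = stats.t.rvs(df=3, size=10_000, random_state=rng) * 0.01 / np.sqrt(3)

# Model Land: fit a normal distribution to the same history.
mu, sigma = returns.mean(), returns.std()

# How likely is a one-day loss worse than 5%?
threshold = -0.05
p_model = stats.norm.cdf(threshold, loc=mu, scale=sigma)  # the model's answer
p_empirical = np.mean(returns < threshold)                # the data's answer

print(f"P(loss worse than 5%) under the normal assumption: {p_model:.1e}")
print(f"P(loss worse than 5%) in the fat-tailed data:      {p_empirical:.1e}")
```

On a typical run the normal model puts the loss probability several orders of magnitude below the empirical frequency: negligibly small in Model Land, routine in the data.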

Structured product ratings

Credit rating models for complex structured products assumed that defaults on the underlying loans were largely independent of one another. When defaults turned out to be highly correlated — because they shared a common driver in the housing market — the models produced ratings that were not just inaccurate but catastrophically misleading. The models were internally consistent. They were also profoundly wrong.
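
The mechanics can be shown with a toy one-factor simulation, a stripped-down cousin of the copula models used in structured credit. All parameters below (100 loans, a 2% individual default probability, a 30% loading on a shared factor) are invented for the sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_loans, p_default, n_sims = 100, 0.02, 50_000
cutoff = stats.norm.ppf(p_default)  # a loan defaults if its latent score < cutoff

def simulate_defaults(rho: float) -> np.ndarray:
    """Defaults per scenario under a one-factor latent-variable model."""
    common = rng.standard_normal((n_sims, 1))      # shared driver (e.g. housing)
    idio = rng.standard_normal((n_sims, n_loans))  # loan-specific shocks
    latent = np.sqrt(rho) * common + np.sqrt(1.0 - rho) * idio
    return (latent < cutoff).sum(axis=1)

# Independence vs. a modest common-factor loading: same loans, same 2%
# individual default probability, very different chance of mass default.
for rho in (0.0, 0.3):
    defaults = simulate_defaults(rho)
    print(f"rho={rho:.1f}: P(more than 10 of {n_loans} loans default) = "
          f"{np.mean(defaults > 10):.4f}")
```

With independent defaults, more than ten simultaneous failures essentially never happens; a modest common factor makes it routine. Same loans, same individual default probability; the only thing that changed is the assumption the model could not test.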

The precision trap in equity valuation

An analyst who produces a fair value of £23.47 has not demonstrated that the company is worth £23.47. They have demonstrated that their model, given their assumptions, produces that number. The number inherits whatever confidence is warranted by the assumptions — which is usually far less than the two decimal places suggest. The danger is that the false precision anchors subsequent decisions: position sizing, buy/sell triggers, and performance attribution all treat the model output as if it were a measurement rather than an estimate.
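
To see how little it takes to move a number like £23.47, consider a deliberately crude perpetuity-growth valuation. Every input below (£1.50 of free cash flow per share, an 8% discount rate, 2.5% growth) is invented; the point is the gap between the penny-level point estimate and the range that modest input changes produce.

```python
import itertools

def fair_value(fcf: float, discount: float, growth: float) -> float:
    """Gordon-growth value per share: next year's FCF / (discount - growth)."""
    return fcf * (1 + growth) / (discount - growth)

# The point estimate: exact to the penny.
print(f"point estimate: £{fair_value(1.50, 0.08, 0.025):.2f}")

# Sweep each input across a modest, defensible range...
values = [fair_value(1.50, d, g)
          for d, g in itertools.product((0.07, 0.08, 0.09), (0.02, 0.025, 0.03))]
print(f"across plausible inputs: £{min(values):.2f} to £{max(values):.2f}")
```

A one-point shift in the discount rate or half a point of growth moves the fair value by pounds, not pennies. The two decimal places were never information.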

In each case, the failure was not computational. The models did exactly what they were designed to do. The failure was epistemological: people believed the model’s output was a statement about reality, when it was actually a statement about Model Land.

Why are you modelling?

One of Thompson’s most important contributions is the insistence that the purpose of a model should drive every design choice. Not all models serve the same function, and confusing the purpose leads to misuse:

Models for understanding

Built to explore how a system works. Simplicity is a feature, not a bug. A toy model that captures the essential dynamics is more useful than a complex model that obscures them. The output is insight, not a number.

Models for prediction

Built to forecast specific outcomes. Accuracy is the standard. But accuracy degrades rapidly as the system becomes more complex, more reflexive, and more subject to genuine uncertainty. Markets are the hardest case.

Models for communication

Built to convey a view of the world to others. The audience matters as much as the mathematics. A model that is technically correct but impossible to explain may be worse than a simpler model that conveys the right intuition.

The pitfall is using a model built for one purpose as if it served another. A valuation model built for understanding — to explore which drivers matter most — should not be treated as a prediction of what the stock will do. A scenario model built for communication should not be used to size a position. Clarity about purpose is the first defence against model misuse.

Escaping Model Land

Thompson offers something closer to a manifesto than a checklist. The spirit of her conclusions, applied to investment:

  • Treat model outputs as one input among many, not as the answer. The model is a tool for thinking, not an oracle for prediction. The moment you stop questioning the output is the moment it becomes dangerous.
  • Be honest about what the model cannot tell you. Every model has a boundary beyond which its outputs are meaningless. Knowing where that boundary lies — and communicating it clearly — is as important as the model itself.
  • Sensitivity analysis is not optional. The primary output of any model should be: which assumptions drive the result? If the answer changes dramatically when you move one input, you do not have a robust answer — you have a bet on that input (see the sketch after this list).
  • Resist the temptation to add complexity. More parameters, more data, more decimal places do not make a model more accurate. They make it more precise — and precision without accuracy is worse than useless, because it creates false confidence.
  • Remember that expert judgement is embedded in every model. The choice of what to model, how to model it, and what to leave out is a human judgement call. Pretending the model is objective is itself a modelling error.
  • Use models to ask better questions, not to avoid asking them. The best use of a model is to structure your thinking, surface your assumptions, and identify what you would need to believe for the output to be true. The worst use is to outsource your thinking to the spreadsheet.
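
As promised in the sensitivity bullet above, here is a minimal one-at-a-time sketch, reusing the toy perpetuity-growth valuation from the earlier example with placeholder ranges. It asks the model the question that matters most: which assumption is doing the work?

```python
# Base case and plausible ranges are placeholders, not recommendations.
base = {"fcf": 1.50, "discount": 0.08, "growth": 0.025}
ranges = {"fcf": (1.35, 1.65), "discount": (0.07, 0.09), "growth": (0.02, 0.03)}

def value(p: dict) -> float:
    """Same toy perpetuity-growth valuation as before, in pounds per share."""
    return p["fcf"] * (1 + p["growth"]) / (p["discount"] - p["growth"])

# Move one input at a time across its range, holding the others at base,
# and measure how far the output swings.
swings = {}
for name, (lo, hi) in ranges.items():
    outputs = [value({**base, name: v}) for v in (lo, hi)]
    swings[name] = max(outputs) - min(outputs)

print(f"base value: £{value(base):.2f}")
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"  {name:<8} swings the value by £{swing:.2f}")
```

In this toy case the discount rate swings the output roughly twice as far as either other input. Whatever the headline number, the model is mostly a bet on that one assumption.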

Key Takeaway

Models are powerful tools for structuring thought — but they are maps, not territories. The most dangerous models are the ones that look most precise, because precision creates the illusion that the hard questions have been answered. They have not. Escaping Model Land means using models to sharpen your thinking while retaining the humility to know that the real world will always surprise you in ways your model cannot anticipate.