Andy Evans
Research & Writing · 8 min read

Bayesian Updating

How Bayesian updating applies to investment decisions — and why the industry's forecast publishing model was designed to prevent it.

Life on the sell side

Working on the sell side, I experienced first-hand how the system was designed to prevent the very thing Bayesian thinking demands: regular, frictionless updating.

When you published research on a company, you were required to upload your earnings forecasts to a central system. These were point estimates — a single number for revenue, a single number for earnings. Not a range, not a distribution, not a probability-weighted set of scenarios. One number.

Once those numbers were in the system, they became the official published view. You could not present different numbers to clients, to salespeople, or to anyone outside the research department unless you formally published a new set of forecasts through the same system. There was no informal “I think earnings are tracking higher” — if you hadn’t updated the system, the old number was the number.

Think about what this means from a probabilistic perspective. New information arrives continuously — a trading statement, a competitor’s results, a macro data point, a conversation with management. Each of these should shift your probability distribution, even if only slightly. But the system imposed enormous friction on updating. Publishing a new forecast meant writing a new note, going through compliance, uploading to the system, and distributing to clients. The cost of updating was high enough that most analysts updated quarterly at best — and many only updated when the gap between their published number and reality had become embarrassingly large.

The result was a profession structurally designed around point estimates that were expensive to change — the exact opposite of how probabilistic judgement should work.

The problem with point forecasts

The sell-side experience is an extreme case, but the underlying pattern is universal. Most investment analysis culminates in a single number: a target price, a normalised earnings figure, an IRR. The real world delivers new information continuously, and the question is not whether your initial forecast was right — it’s how your view should change as reality unfolds.

The standard approach is to hold the forecast until it becomes obviously wrong, then revise in a large jump. This is psychologically natural but analytically poor. A better approach is regular Bayesian updating — taking the prior (your existing view), incorporating new evidence, and producing a posterior (your updated view) in a disciplined, mechanical way.

The conclusion from spending time with this framework: regular Bayesian updating on a company’s fundamentals is the best approach. It requires two things — updating frequently enough that new evidence actually moves the probabilities, and focusing on how the probability has changed rather than its absolute level.

The mechanics of updating

Bayesian updating takes a dataset of historic outcomes (the mean and variance of earnings, for example) and asks: given a new piece of information (this year’s results), how should the probability distribution shift?

The key inputs are:

  • Prior mean: your starting estimate — the historic average of earnings, or the most recently updated view
  • Prior variance: how far earnings typically deviate from that mean — in effect, how confident you are in the prior
  • Historic mean and variance: the long-run average and spread of the company's earnings
  • Weight: how much to weight new information against history. A 90/10 split in favour of history is a reasonable default, but it is open to challenge
  • Time: over multiple years the process is sequential — each posterior becomes the prior for the next update
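These inputs can be sketched as a standard normal-normal Bayesian update. The figures below (historic mean of 250, standard deviation of 40, a new result of 280) are illustrative assumptions, not from the article; the observation variance is chosen so that the new data point gets exactly the 10% weight mentioned above.

```python
# Minimal sketch of a normal-normal Bayesian update for an earnings
# estimate. The observation variance (obs_var) controls how much one
# new result moves the prior: setting obs_var = 9 * prior_var gives
# the new result a 10% weight, i.e. "90% history, 10% new".

def update_normal(prior_mean, prior_var, observation, obs_var):
    """Return (posterior_mean, posterior_var) after one observation."""
    w = prior_var / (prior_var + obs_var)          # weight on the new data point
    post_mean = prior_mean + w * (observation - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

prior_mean, prior_var = 250.0, 40.0 ** 2   # historic mean and variance (illustrative)
obs_var = 9 * prior_var                    # implies a 10% weight on new information

post_mean, post_var = update_normal(prior_mean, prior_var, 280.0, obs_var)
print(round(post_mean, 1))  # 253.0 -- the prior shifts 10% of the way towards 280
```

Running the update repeatedly, feeding each posterior back in as the next prior, is exactly the "time" consideration in the list: the view drifts towards reality one result at a time.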

The most important insight is not in the headline probability — it is in the change. How has the probability shifted? Is it moving towards your thesis or away from it? And at what rate?

See it in action

Consider a medical example: a disease affects 1 in 100 people. A test is 95% accurate — it correctly identifies 95% of sick people and correctly clears 95% of healthy people. You test positive. What is the probability you are actually sick? Most people answer “95%” — but the real probability is approximately 16%. The low base rate (1%) means the false positives from the 99 healthy people vastly outnumber the true positives from the 1 sick person.
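The arithmetic behind that 16% figure is a one-line application of Bayes' theorem:

```python
# P(sick | positive) = P(positive | sick) * P(sick) / P(positive),
# where P(positive) sums true and false positives.

def posterior(base_rate, sensitivity, specificity):
    true_pos = sensitivity * base_rate                 # sick and test positive
    false_pos = (1 - specificity) * (1 - base_rate)    # healthy but test positive
    return true_pos / (true_pos + false_pos)

p = posterior(base_rate=0.01, sensitivity=0.95, specificity=0.95)
print(f"{p:.1%}")  # 16.1%
```

Raising the base rate to 50% with the same test accuracy pushes the posterior to 95% — which is why intuition, calibrated on common conditions, gets rare ones so wrong.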

Use the tool below to explore how base rate, sensitivity, and specificity interact. Move the sliders and watch how the posterior probability — and the visual composition of true vs. false positives — changes dramatically.

Bayesian Updating: Why Base Rates Matter

Adjust the sliders to see how prior probability, test sensitivity, and specificity interact to determine the true probability of disease given a positive test.

[Interactive calculator: base rate 0.1%–50% (default 1%), sensitivity 50%–99% (default 95%), specificity 50%–99.9% (default 95%).]

At the default settings, out of 1,000 people tested:

  • True positives (has disease, tests positive): 10
  • False positives (no disease, tests positive): 50
  • False negatives (has disease, tests negative): 1
  • True negatives (no disease, tests negative): 941

Result: out of 60 positive tests, 50 are false alarms — if you test positive, the probability you actually have the disease is 16.1%.

Even with a 95%-accurate test, if the base rate of a condition is just 1%, a positive result means you have roughly a 16% chance of actually having the disease — most positive tests are false positives. This is why base rates are the single most important input in probabilistic reasoning, and why ignoring them (the “base rate fallacy”) leads to systematically overconfident conclusions in medicine, investing, and everyday life.

A practical example

Consider a company where the analyst forecast normalised EBIT of £310m five years ago, against a historic mean of £250m — a target with roughly a 25% prior probability of achievement. Each annual result shifts the probability distribution. If results come in consistently below forecast, the probability of achieving the normalised target falls — not in a sudden realisation, but gradually, as each data point updates the posterior.
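Sequential updating of that probability might look like the sketch below. The annual EBIT figures, the standard deviation (89, chosen so the initial probability of reaching £310m is roughly 25%), and the fixed 20% update weight are all hypothetical assumptions for illustration.

```python
# Hypothetical sequential update of P(normalised EBIT >= 310) for a
# company with historic mean 250 and std 89 (an assumption giving an
# initial probability of roughly 25%). Each annual result moves the
# posterior mean a fixed 20% of the way towards the new data point.
from statistics import NormalDist

mean, std, target, weight = 250.0, 89.0, 310.0, 0.2
results = [245, 240, 235, 220, 230]   # five years of below-forecast EBIT (hypothetical)

for year, ebit in enumerate(results, start=1):
    mean += weight * (ebit - mean)                   # simple fixed-weight update
    p_target = 1 - NormalDist(mean, std).cdf(target) # P(EBIT >= 310) under the posterior
    print(f"Year {year}: posterior mean {mean:5.1f}, P(EBIT >= 310) = {p_target:.1%}")
```

The probability declines a little each year rather than collapsing all at once — which is the point: the deterioration is visible from year one, not year five.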

The value of this approach is that it forces you to confront deteriorating fundamentals early, rather than waiting until the thesis is clearly broken. It also works in the opposite direction — when results exceed expectations, the probability of achieving your target rises, giving you confidence to hold or add.

The practical applications extend beyond individual stocks. At the tail of the distribution, Bayesian updating can help identify companies that are triggering a revisit — stocks where the probability has shifted enough to warrant fundamental re-underwriting. This is particularly useful for managing the bottom of the screen's momentum ranking: companies that have been on the watchlist for years but whose fundamentals have quietly deteriorated.

Connection to the investment process

This connects directly to several themes in the philosophy section:

  • Set fundamental stop losses — not price-based, but probability-based triggers for revisiting a position
  • Tail exercise — identify companies in the lowest-momentum tail of the screen that could be candidates for exit
  • More quantitative triggers — move beyond qualitative judgement to systematic probability shifts
  • Continuous re-underwriting — the seven red-flag questions applied not just at entry but throughout the holding period

The Base Rates article covers the complementary idea of starting with the outside view. Bayesian updating is what happens next — the disciplined incorporation of case-specific evidence over time.

Key Takeaway

Don’t anchor to your initial forecast and wait for it to be proved right or wrong. Update continuously as new information arrives. The change in probability matters more than the absolute level — and a disciplined updating framework forces you to confront uncomfortable truths before they become obvious.