Human Skill in an AI World
AI commoditises the quantifiable. The investors who thrive will be those who use it to amplify the skills that machines cannot replicate — judgement, conviction, and decision-making under genuine uncertainty.
The chess grandmaster’s problem
When Deep Blue beat Garry Kasparov in 1997, chess didn’t die. Something more interesting happened. For a period, “centaur” teams — a human grandmaster working with a chess engine — could beat both the best humans and the best machines playing alone. The human brought strategic intuition, creative opening preparation, and the ability to recognise when the computer’s evaluation was misleading. The machine brought calculation speed, tactical precision, and tireless analysis of variations.
Eventually the engines got strong enough that the human contribution in chess became negligible. But investing is not chess. Chess is a closed system with perfect information, fixed rules, and a finite game tree. Markets are open, reflexive, socially constructed, and subject to genuine uncertainty — the kind where the probability distribution itself is unknown. The gap between what AI can do and what the situation demands is far wider in investing than it ever was in chess.
The question for investors is not whether AI will replace them. It is how to use AI to amplify the specific skills where humans still have a genuine edge — and to stop wasting time on the tasks where they do not.
What AI takes away
AI commoditises everything that is quantifiable, repeatable, and rule-based. In an investment context, this means:
- Screening and filtering: scanning thousands of companies across dozens of metrics is instant, not a competitive advantage
- Data extraction: pulling numbers from annual reports, parsing filings, normalising across accounting standards — all automatable
- First-pass analysis: a coherent initial assessment of a company can be produced in minutes, not days
- Back-testing and pattern matching: historical factor analysis, correlation matrices, base rate computation — pure computation
- Report writing: drafting, formatting, and distributing research notes
None of these tasks are unimportant. But they are no longer differentiating. If everyone has access to the same AI-powered screening, the same automated data extraction, and the same first-pass analysis, the edge cannot come from doing these things. It has to come from somewhere else.
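To see how thoroughly the first of those tasks has been commoditised, consider that a multi-metric screen reduces to a few lines of code. The companies, metrics, and thresholds below are entirely hypothetical — a minimal sketch, not a recommended screen:

```python
# Hypothetical universe: a handful of companies with invented metrics.
universe = [
    {"name": "Alpha plc", "pe": 9.5,  "roe": 0.18, "net_debt_to_ebitda": 1.2},
    {"name": "Beta Corp", "pe": 22.0, "roe": 0.07, "net_debt_to_ebitda": 3.8},
    {"name": "Gamma AG",  "pe": 11.0, "roe": 0.21, "net_debt_to_ebitda": 0.6},
    {"name": "Delta Inc", "pe": 8.0,  "roe": 0.04, "net_debt_to_ebitda": 2.9},
]

def passes_screen(co, max_pe=12.0, min_roe=0.15, max_leverage=2.0):
    """An illustrative value-with-quality screen: cheap, profitable, conservatively financed."""
    return (co["pe"] <= max_pe
            and co["roe"] >= min_roe
            and co["net_debt_to_ebitda"] <= max_leverage)

shortlist = [co["name"] for co in universe if passes_screen(co)]
print(shortlist)  # → ['Alpha plc', 'Gamma AG']
```

Anyone can run this, over any universe, at any frequency — which is precisely why it confers no edge.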
What AI cannot take away
Investment skill is not one thing. It is a bundle of capabilities, some quantifiable and some not. The quantifiable parts — processing speed, data coverage, computational precision — are exactly what AI excels at. The parts that remain are harder to define, harder to measure, and harder to replicate. They are also where the real edge lies.
Judgement under genuine uncertainty
Markets are complex adaptive systems. Identifying a past relationship and acting on it changes the system itself. AI can process data from the past, but it cannot navigate a future that has no precedent. The investor who correctly assesses a situation that has never occurred before — a regulatory shift, a geopolitical rupture, a technology inflection — is exercising a form of judgement that no model can replicate, because the model has no training data for it.
Conviction and the courage to act
AI can tell you a stock is statistically cheap. It cannot tell you whether you have the conviction to hold it through a 40% drawdown while your peers, your clients, and the financial press tell you that you are wrong. The ability to act on an insight — to size a position, to hold through discomfort, to add when the price falls further — is a behavioural skill that cannot be automated.
Asking the right questions
AI is extraordinarily good at answering well-specified questions. It is far less good at knowing which questions to ask. The investor with deep domain knowledge knows where to probe, what to be suspicious of, and which variables actually matter — even when they cannot fully articulate why. This is tacit knowledge, built through years of pattern recognition that operates below the level of conscious reasoning.
Reading people and situations
Meeting a management team, assessing their incentives, judging whether their strategy is credible — these are fundamentally human skills. Capital allocation skill, corporate governance quality, and management integrity do not show up in spreadsheets until it is too late. The investor who can read a room, detect evasion, or sense when a narrative is too polished is using a form of intelligence that AI cannot access.
Creative synthesis across domains
The best investment insights often come from connecting ideas across domains — seeing a parallel between a supply chain disruption and a historical precedent, or recognising that a company's competitive dynamics resemble a pattern from a completely different industry. This kind of lateral thinking requires a breadth of experience and a willingness to make non-obvious connections that current AI cannot reliably produce.
Relationships and trust
Client communication, team leadership, mentoring, and the ability to explain complex ideas simply are skills that compound over a career. Trust is built through dialogue, shared experience, and consistent behaviour over time — not through data.
The subtraction principle
The right way to think about AI in investing is not “what can AI do?” but “what should AI take away from the investor’s day?”
Every hour an experienced investor spends on data extraction, report formatting, or screening — tasks where they add no differentiated skill — is an hour not spent on the things where they genuinely add value: meeting companies, debating investment theses, mentoring junior analysts, thinking deeply about a position, or building the relationships that produce proprietary insight.
The goal is not to make the investor more productive at everything. It is to subtract the undifferentiated tasks so that more time and cognitive energy flows to the areas where human skill actually compounds. This is an important distinction. A common mistake is to use AI to do more of everything — more screens, more reports, more data, more meetings. But as the Escape from Model Land article argues, more information does not mean better decisions. It can mean worse decisions made with more confidence.
The subtraction principle: use AI to take away the tasks where you do not add skill, so that you can concentrate on the tasks where you do. The output should be fewer, better decisions — not more, faster ones.
Chesterton’s fence: know what you’re removing
G.K. Chesterton proposed a simple principle: if you come across a fence in the middle of a field and cannot see why it is there, do not tear it down. First understand its purpose. Only then can you judge whether it is still needed.
The same principle applies to automating investment processes. A task that looks like drudgery from the outside may serve purposes that are invisible until it is gone. The analyst who manually reads every filing is also building intuition for what normal looks like — so that abnormal jumps out. The portfolio manager who hand-writes the monthly commentary is forced to articulate their thinking, which exposes gaps that a templated AI summary would paper over. Before you automate, understand what the human in the loop was actually doing — not just the visible output, but the invisible judgement, calibration, and quality control embedded in the process.
This matters because AI has real shortcomings that are easy to overlook when the output looks polished:
- Hallucination: large language models produce confident, well-formatted text that can be entirely wrong. The output reads like it was written by someone who knows what they are talking about, whether or not it is accurate.
- Probabilistic outputs: the same prompt to the same model can produce different answers on different runs. AI agents are not deterministic systems delivering consistent results — they are probabilistic systems delivering plausible results.
- No self-knowledge of failure: unlike a human analyst who feels uncertain and flags it, an LLM gives no reliable signal about when it is guessing. The confidence of the output is unrelated to its accuracy.
- Evaluation is not optional: any serious deployment of AI in an investment process needs an evaluation pipeline — systematic checks on output quality, accuracy tracking over time, and human review at critical decision points. Without this, you are trusting a system that does not know when it is wrong.
Just because it is easy to get AI to do something does not mean it is the right answer in all circumstances. The question is not “can AI do this?” but “what are we losing by removing the human from this step, and have we built the infrastructure to catch what the human used to catch?”
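The core of such an evaluation pipeline is unglamorous: compare AI output against hand-checked ground truth, measure the error rate, and route discrepancies to a human. A minimal sketch, with hypothetical field names, figures, and tolerance:

```python
# Evaluation-pipeline sketch: compare AI-extracted figures against a
# hand-checked sample and flag discrepancies for human review.
# All data and the 0.5% relative tolerance are illustrative assumptions.

def evaluate_extractions(extracted, ground_truth, rel_tol=0.005):
    """Return (accuracy, flagged_fields) for one document's extraction."""
    flagged = []
    for field, true_value in ground_truth.items():
        got = extracted.get(field)
        if got is None or abs(got - true_value) > rel_tol * abs(true_value):
            flagged.append(field)
    accuracy = 1 - len(flagged) / len(ground_truth)
    return accuracy, flagged

# Hypothetical filing figures, in millions.
truth = {"revenue": 1042.0, "net_income": 87.3, "net_debt": 215.0}
ai_output = {"revenue": 1042.0, "net_income": 78.3, "net_debt": 215.0}  # one transposition error

accuracy, flagged = evaluate_extractions(ai_output, truth)
print(f"accuracy={accuracy:.0%}, review={flagged}")  # → accuracy=67%, review=['net_income']
```

Run over a rolling sample of documents, this turns "trust the output" into "know the error rate" — the difference between delegation and abdication.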
The experience gap narrows — on the surface
One consequence of powerful AI tools is that they compress the visible gap between experienced and less experienced investors. A junior analyst armed with AI can produce a company note that is well-structured, covers the key points, and reads as credibly as one written by someone with twenty years of domain knowledge. The output looks the same. The understanding behind it is very different.
This is not a criticism of less experienced investors — everyone starts somewhere, and AI is a genuinely powerful learning tool. The risk is subtler: when the surface quality of analysis is uniformly high, it becomes harder for organisations to distinguish who truly understands a business from who has assembled a credible-looking summary. The signal value of written analysis erodes. Knowing the facts and understanding the dynamics start to look the same from the outside, even though they are fundamentally different on the inside.
This makes it more important, not less, to invest in the things that build genuine domain knowledge: time spent with companies, mentoring relationships, structured feedback loops, and the slow accumulation of pattern recognition that comes from years of deliberate practice. AI raises the floor of analytical quality. It does not raise the ceiling. The ceiling is still set by the depth of human understanding.
How to maximise what remains
If the intangible skills are where the edge lies, the question becomes: how do you deliberately develop and protect them?
- Invest in decision quality, not decision volume. Use AI to reduce the number of decisions you make, not increase it. Fewer, deeper, higher-conviction positions — with more time spent on each — is the natural endpoint.
- Build feedback loops. The skills that matter most — judgement, conviction calibration, reading management — improve only with deliberate practice and honest self-assessment. Decision journals, after-action reviews, and systematic tracking of forecast accuracy are the infrastructure for this.
- Protect thinking time. The most valuable hours in an investor's week are the ones spent thinking — not in meetings, not processing data, not writing reports. AI should create more of these hours, not fill them with more tasks.
- Develop tacit knowledge deliberately. Read widely, across domains. Meet companies, even when there is no immediate investment thesis. Build mental models from experience, not just from data. The investor with the richest internal model of how businesses work will have the largest advantage over AI.
- Cultivate the team skills that compound. Communication, mentoring, intellectual challenge, and the ability to build a culture of honest debate are skills that AI cannot provide and that organisations systematically undervalue. These are the skills that turn a group of individuals into a unit that outperforms.
- Stay honest about what you do and do not add. The hardest exercise: look at your week and identify the hours where your contribution was genuinely differentiated versus the hours where a competent AI could have done the same work. Then redesign your week.
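The forecast-tracking infrastructure mentioned above need not be elaborate. One common approach — an assumption here, not a prescription — is to log each probabilistic call in a decision journal and score calibration with the Brier score, where 0 is perfect and 0.25 is what you would score by always saying 50%. The journal entries below are invented for illustration:

```python
# Score a decision journal of probabilistic forecasts with the Brier score:
# mean squared error between stated probability and outcome (1 or 0).

def brier_score(forecasts):
    """Lower is better: 0.0 = perfect foresight, 0.25 = always saying 50%."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical journal entries: (probability assigned, what actually happened).
journal = [
    (0.80, 1),  # "80% chance the takeover bid completes" — it did
    (0.60, 0),  # "60% chance margins recover this year" — they did not
    (0.90, 1),  # "90% chance the dividend is maintained" — it was
    (0.30, 0),  # "30% chance of a guidance cut" — there wasn't one
]

print(f"Brier score: {brier_score(journal):.3f}")  # → Brier score: 0.125
```

Reviewed quarterly, a score like this gives the honest self-assessment the bullet calls for: not whether you felt confident, but whether your confidence was warranted.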
Key Takeaway