The empirical finding that simple statistical algorithms consistently outperform expert intuitive judgment across domains, not because algorithms are sophisticated but because they're consistent.
In domain after domain—medical diagnosis, parole decisions, graduate admissions, wine quality, loan defaults—simple algorithms outperform expert judgment. This isn't because algorithms are smarter; even crude models beat experts. It's because algorithms are consistent—they don't have good days and bad days, don't get tired, and don't suffer from cognitive biases. Human experts are inconsistent, influenced by irrelevant factors (mood, recent cases, time of day), and prone to overconfidence. The resistance to algorithms is itself a bias—we prefer human judgment even when it's demonstrably inferior because we value narrative explanations and are uncomfortable with uncertainty. The prescription: use algorithms for repeated decisions with clear outcomes.
A simple formula using just a few variables (GPA, test scores, undergraduate institution) predicts graduate school success better than admissions committee interviews and holistic reviews. The formula is consistent; the committee is influenced by irrelevant factors like interview performance, which has little predictive validity.
A common misconception is that human experts with years of experience must outperform simple formulas. In fact, consistency beats expertise in most domains: algorithms win not by being smarter but by being more consistent.
Why do simple algorithms consistently outperform expert intuitive judgment across domains, even when the algorithms are crude?
How do the illusion of validity and WYSIATI explain why admissions committees resist using simple algorithms, even when algorithms predict graduate school success better than holistic reviews?
System 2 (Mental Model): The slow, deliberate, effortful mode of thinking that allocates attention to complex computations, self-control, and conscious reasoning.
System 1 (Mental Model): The fast, automatic, intuitive mode of thinking that operates effortlessly and generates impressions, intuitions, and feelings without conscious control.
Availability Heuristic (Mental Model): Judging the frequency or probability of events by how easily examples come to mind, leading to overestimation of vivid or recent events.
Representativeness Heuristic (Mental Model): Judging probability by how much something resembles a typical case while ignoring base rates, sample size, and statistical principles.
Anchoring (Mental Model): The tendency to rely too heavily on an initial piece of information (the anchor) when making subsequent judgments, even when the anchor is arbitrary or irrelevant.
Loss Aversion (Mental Model): The principle that losses loom psychologically larger than equivalent gains, with losing something feeling roughly twice as bad as gaining the same thing feels good.
Prospect Theory (Principle): A descriptive model of decision-making under risk showing that people evaluate outcomes relative to a reference point, are loss-averse, and weight probabilities non-linearly.
WYSIATI, "What You See Is All There Is" (Framework): System 1's tendency to construct the most coherent story possible from currently available information without considering what's missing or questions not asked.
Algorithms vs. Intuition is explored in depth in "Thinking, Fast and Slow" by Daniel Kahneman.