The base rate fallacy, also called base rate neglect, is an error that occurs when the conditional probability of some hypothesis H given some evidence E is assessed without taking into account the "base rate" or "prior probability" of H and the total probability of evidence E.
In a city of 1 million inhabitants there are 100 known terrorists and 999,900 non-terrorists. The base rate probability that a randomly chosen inhabitant is a terrorist is thus 0.0001, and the base rate probability that a random inhabitant is a non-terrorist is 0.9999. In an attempt to catch the terrorists, the city installs a surveillance camera with automatic facial recognition software. If one of the known terrorists is seen by the camera, the system has a 99% probability of detecting the terrorist and ringing an alarm bell. If the camera sees a non-terrorist, it will incorrectly trigger the alarm only 1% of the time. The failure rate of the camera is thus 1% in both directions.
Suppose somebody triggers the alarm. What is the chance they are really a terrorist?
Someone committing the base rate fallacy would incorrectly conclude that if the alarm rings, there is a 99% probability that the camera has detected a real terrorist, on the grounds that the failure rate of the device is 1 in 100 and so the false-alarm rate must also be 1 in 100. The fallacy lies in assuming that the device-failure rate and the false-alarm rate are equal.
This assumption is incorrect, because if the camera sees a random sampling of the population (or even some less-random sample, like the people entering an airport) it is far more likely to see non-terrorists than terrorists. The overwhelming number of non-terrorists means that most alarms will be false alarms, raising the proportion of alarms that are false.
Imagine that the city's entire population of one million people pass in front of the camera. About 99 of the 100 terrorists will trigger the alarm—and so will about 9,999 of the 999,900 non-terrorists. Therefore about 10,098 people will trigger the alarm, and only about 99 of them will be terrorists. So the probability that a person who triggers the alarm is actually a terrorist is only about 99 in 10,098, or 1/102.
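The counting above can be reproduced in a few lines of Python (a minimal sketch; the variable names are ours):

```python
# Worked example: expected alarm counts when the whole population of one
# million (100 terrorists, 999,900 non-terrorists) passes the camera.
population = 1_000_000
terrorists = 100
non_terrorists = population - terrorists

p_alarm_given_terrorist = 0.99      # detection rate
p_alarm_given_non_terrorist = 0.01  # false-alarm rate per non-terrorist

true_alarms = terrorists * p_alarm_given_terrorist           # about 99
false_alarms = non_terrorists * p_alarm_given_non_terrorist  # about 9,999
total_alarms = true_alarms + false_alarms                    # about 10,098

p_terrorist_given_alarm = true_alarms / total_alarms
print(round(p_terrorist_given_alarm, 4))  # 0.0098, i.e. about 1/102
```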
The reasoning in this example is only fallacious because the numbers of terrorists and non-terrorists differ. In a city whose population is split exactly 50/50 between terrorists and non-terrorists, the probability that an alarm from a camera system like the one described above is a misidentification will equal the failure rate of the device.
In many real-world situations, though, particularly problems like detecting criminals in a largely law-abiding population, the small proportion of true targets in the large population makes the base rate fallacy highly relevant. Even a low false-positive rate will produce so many false alarms as to make such a system nearly useless.
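The 50/50 claim can be checked directly; this sketch assumes the same 99% detection rate and 1% false-alarm rate as the camera in the example:

```python
# With equal numbers of terrorists and non-terrorists, the chance that a
# given alarm is a misidentification equals the device's 1% failure rate.
prior = 0.5                                  # P(terrorist) in the 50/50 city
p_alarm = 0.99 * prior + 0.01 * (1 - prior)  # total probability of an alarm
p_misidentification = 0.01 * (1 - prior) / p_alarm  # P(non-terrorist | alarm)
print(p_misidentification)  # 0.01, the same as the failure rate
```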
In the above example, where P(A|B) means the probability of A given B, the base rate fallacy is the incorrect assumption that:

    P(terrorist | bell) = P(bell | terrorist) = 99%
However, the correct expression uses Bayes' theorem to take into account the prior probability of being a terrorist and the total probability of the bell ringing, and is written as:

    P(terrorist | bell) = P(bell | terrorist) × P(terrorist) / P(bell)
                        = (0.99 × 0.0001) / (0.99 × 0.0001 + 0.01 × 0.9999)
                        ≈ 1/102
Thus, in this example the conditional probability is overestimated by more than 100 times, due to the failure to take into account the prior probability of being a terrorist and the total probability of the bell ringing.
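The correction can be written as a small Bayes' theorem helper (a sketch; the function name and parameter names are ours):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) by Bayes' theorem, using the total probability of E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Fallacious estimate: equating P(terrorist | bell) with P(bell | terrorist).
naive = 0.99
# Correct estimate: the 0.0001 base rate enters through the prior.
correct = posterior(prior=0.0001, p_e_given_h=0.99, p_e_given_not_h=0.01)
print(naive / correct)  # roughly 101: an overestimate of more than 100 times
```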
Findings in psychology
In some experiments, students were asked to estimate the grade point averages (GPAs) of hypothetical students. When given relevant statistics about GPA distribution, students tended to ignore them if given descriptive information about the particular student, even if the new descriptive information was obviously of little or no relevance to school performance. This finding has been used to argue that interviews are an unnecessary part of the college admissions process because interviewers are unable to pick successful candidates better than basic statistics.
Psychologists Daniel Kahneman and Amos Tversky attempted to explain this finding in terms of the representativeness heuristic. Richard Nisbett has argued that some attributional biases like the fundamental attribution error are instances of the base rate fallacy: people underutilize "consensus information" (the "base rate") about how others behaved in similar situations and instead prefer simpler dispositional attributions.
See also

- Bayesian probability
- Data dredging
- False positive paradox
- Inductive argument
- Misleading vividness
- Prosecutor's fallacy
References

- Bar-Hillel, M. (1980). The base-rate fallacy in probability judgments. Acta Psychologica, 44, 211-233.
- Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237-251.
- Nisbett, R.E., Borgida, E., Crandall, R., & Reed, H. (1976). Popular induction: Information is not always informative. In J.S. Carroll & J.W. Payne (Eds.), Cognition and social behavior, 2, 227-236.
External links

- The Base Rate Fallacy, The Fallacy Files
- Psychology of Intelligence Analysis: Base Rate Fallacy