Kahneman and Tversky’s work in the 1970s initiated a revolution in the way psychologists think about decision-making. This project presents the biography of one of Kahneman and Tversky’s seminal articles, “On the Psychology of Prediction.” The annotations on the PDF and this web resource will provide context—both theoretical and practical—for understanding the impact of this article. Later sections of this web resource will focus on the theories presented in “On the Psychology of Prediction.” For now, consider the world of psychology and decision-making prior to the publication of this article.
Before Kahneman and Tversky’s contributions, the prevailing notion—not just in psychology, but also in fields as diverse as economics and public policy—was that people were more or less rational actors. In the 1950s, the work of eighteenth-century mathematician Thomas Bayes, whose famous theorem held that probabilities should be computed with deference to both present likelihood and prior odds, was subsumed into the dominant model of decision-making. Noteworthy among the social theorists of the 1950s was Ward Edwards, who along with his contemporaries painted a rosy picture of human mental processing—one in which people were sensitive, in part, to base rate information.
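Stated symbolically, Bayes’ theorem holds that P(H|E) = P(E|H) × P(H) / P(E): the probability of a hypothesis given the evidence combines how likely the evidence is under the hypothesis with the prior odds of the hypothesis itself. A minimal sketch of that computation follows; the numbers are invented purely for illustration and are not drawn from Edwards’ work or the article.

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# All figures below are hypothetical, chosen only to show the shape of the math.

def posterior(prior, likelihood, likelihood_if_false):
    """Combine prior odds with present likelihood, as Bayes' theorem requires."""
    # Total probability of the evidence under both hypotheses
    p_evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / p_evidence

# A hypothesis with a 10% prior, where the evidence is 8x more likely
# if the hypothesis is true than if it is false
print(posterior(prior=0.10, likelihood=0.80, likelihood_if_false=0.10))
```

The posterior rises well above the 10% prior but remains anchored by it; ignoring the prior entirely is exactly the error Kahneman and Tversky would document.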
A reevaluation of Bayesian logic in everyday life launched Kahneman and Tversky’s collaboration in 1969, a joint career that spanned over a decade and produced scores of journal articles. “On the Psychology of Prediction,” one of their first major collaborative works, synthesized their findings from the previous three years of work into a cohesive set of theories suggesting that people did not appear to make decisions in a Bayesian manner. Rather, Kahneman and Tversky argued that heuristic cues, not appropriate statistical information, dominated decision-making.
For more on Ward Edwards, click here.
For a demo explaining Bayes’ theorem, click here.
For more on Bayes’ theorem, click here.
For a Bayes’ theorem calculator, click here.
For written excerpts from an interview with Dr. Baron that discusses how Kahneman and Tversky’s work relates to decision-making, Edwards, and Bayesian statistics, click here.
Otherwise, continue on to the first section, a discussion of the Tom W. study.
TOM W., REVISITED
The Tom W. study encapsulates Kahneman and Tversky’s central thesis that people ignore statistical information when given heuristic information. At face value, it seems that people are indeed poor users of relevant statistical information. This would seem to explain why the judgments of the psychology grad students (third column, page 238) were more reflective of the second (attribution) column than of the first (approximate base rates).
If one delves deeper, however, the situation is not as clear as it first appears from the results. The methodology used in studies like this has been criticized by others in the field of psychology. Excerpts follow from interviews with Dr. McCauley and Dr. Davis, both psychologists who study decision-making and attribution, concerning the limitations of Kahneman and Tversky’s methodology.
For additional examples of how “real world” decision-making relates to Kahneman and Tversky’s work, click here. Otherwise, read on for a discussion of how the representativeness hypothesis relates to the Tom W. study.
What exactly is representativeness? “On the Psychology of Prediction” offers the following definition in its abstract: “by this [representativeness] heuristic, people predict the outcome that appears most representative of the evidence” (Kahneman and Tversky 1973). As one can see, however, the definition Kahneman and Tversky use for representativeness is circular. It is therefore worth taking an extra moment to consider just what this term means in the context of heuristics and decision-making. Perhaps the most straightforward summary of representativeness comes from Wikipedia: “Under the representativeness heuristic, we judge things as being similar based on how closely they resemble each other using prima facie, often superficial qualities, rather than essential characteristics.” In other words, representative judgments focus on cues in relation to a heuristic stereotype rather than on the statistically relevant information. For example, people often mistake physique as representative of intelligence. At first impression, attractive individuals are judged as more intelligent than unattractive ones, an obvious misuse of available cues, as looks have no correlation with intelligence.
Consider two additional examples of representativeness. First, suppose you bump into an engineer with a pocket protector. If this is your first experience with an engineer, you might think that pocket protectors are representative of engineers, even if in reality they are relatively rare. Hence, what becomes a heuristic model can distort actual base rates. Second, suppose you are in Nebraska and hear someone speaking in what sounds like a New York accent. Though the actual base rate of New Yorkers in Nebraska is low given the population of the whole state, the accent might be so representative that you immediately conclude that the speaker is from New York.
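The New York accent example can be pushed through Bayes’ theorem directly. The figures below are invented for illustration (the article supplies no such numbers): assume 1 in 500 people in Nebraska is a New Yorker, that 60% of New Yorkers have a recognizable accent, and that 1% of everyone else sounds similar.

```python
# All figures are hypothetical; the point is the shape of the result, not the numbers.
p_ny = 1 / 500          # assumed base rate of New Yorkers in Nebraska
p_accent_ny = 0.60      # assumed chance a New Yorker has the accent
p_accent_other = 0.01   # assumed chance a non-New-Yorker sounds similar

# Total probability of hearing the accent, then the Bayesian posterior
p_accent = p_accent_ny * p_ny + p_accent_other * (1 - p_ny)
p_ny_given_accent = p_accent_ny * p_ny / p_accent
print(round(p_ny_given_accent, 3))  # prints 0.107
```

Even with a strongly diagnostic cue, the posterior comes out around 11%: the low base rate still makes it far more likely that the speaker is not from New York. That base rate is precisely the information the representativeness heuristic tempts us to ignore.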
It is not an enormous leap to understand the role of representativeness in the Tom W. study. In this case, certain cues—like mechanical aptitude—were representative of engineers and therefore skewed subject predictions about Tom W.’s vocation.
For additional information on Dr. Rosch, click here.
The next section relates representativeness in terms of the availability heuristic and attribution theory.
THE AVAILABILITY HEURISTIC AND ATTRIBUTION THEORY
The availability heuristic can be thought of, in simple terms, as people making decisions based on what is most salient in memory, rather than on complete information. In terms of Kahneman and Tversky’s work, this explains in part why people seem more swayed by obvious cues, like a qualitative description of Tom W., and less by abstract information, like statistical probability. There are, however, problems with relating these two theories, in that the literature on attribution theory cannot speak to how people translate attributions from the individual to the group level. The following pair of clips from McCauley focuses on how aspects of Kahneman and Tversky’s work, following from the earlier discussion of the Tom W. study, do not appear to coincide with other theories in psychology.
Ultimately, it is difficult to reconcile Kahneman and Tversky’s conclusions with McCauley’s theoretical concerns. While it is obvious that representative cues impact probabilistic judgments, as demonstrated throughout “On the Psychology of Prediction,” this finding cannot speak to how these heuristics develop. It is also worth noting an apparent discrepancy between McCauley’s concerns and Perloe’s explanation of stereotype theory (see: representativeness).
The next section discusses the relationship of utility to decision-making.
UTILITY AND DECISION-MAKING
One apparent shortcoming of the theories presented in “On the Psychology of Prediction” is that Kahneman and Tversky, on the surface, did not seem to account for a major facet of “real world” decision-making: utility. One could argue that decision-making—both in our evolutionary past and in everyday life—is contingent on an understanding of the perceived value of possible outcomes. Hence, many psychologists argue that it is not simply that people are poor Bayesians, but rather that people are used to balancing more than base rate information when making decisions. When this second component of utility is removed in a study of decision-making, as in some of Kahneman and Tversky’s studies, one could argue that the results reflect artificial test conditions and not “real world” processing.
Suppose that the FDA is considering licensing the manufacture of a new headache drug, which gives +10 units of utility (i.e., relief from symptoms) over competing headache drugs. Not licensing this drug means that people who are prone to headaches might be left 10 units of utility worse off. However, this drug also has a low chance of causing severe bleeding and ulcerations, and these side effects will be defined as -100 units of utility. This balance is represented in the following diagram:
                    Permit drug    Don't permit drug
Desired outcome         +10              -10
Side effects           -100           0 (or +100)
If the FDA simply determines ultimate value by multiplying the utility related to each case times the base rate of side effects, it is obvious that the risks involved with the drug far outweigh any potential good. For a more subtle explanation of utility and decision-making, click on the next clip.
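The FDA reasoning above can be written out as a simple expected-value calculation, multiplying each utility from the table by its probability. The side-effect rates below are assumed for illustration (the article calls the chance "low" but never fixes a number), and the "don't permit" branch uses the table's −10, setting aside the alternative +100 entry.

```python
# Utilities taken from the table above; side-effect rates are assumed figures.
U_RELIEF, U_NO_RELIEF, U_SIDE_EFFECT = 10, -10, -100

def expected_value(p_side_effect):
    """Expected utility of permitting vs. not permitting the drug."""
    permit = U_RELIEF + p_side_effect * U_SIDE_EFFECT
    dont_permit = U_NO_RELIEF  # headache sufferers forgo the benefit
    return permit, dont_permit

for p in (0.05, 0.20, 0.50):
    permit, dont = expected_value(p)
    print(f"side-effect rate {p:.0%}: permit = {permit:+.0f}, don't permit = {dont:+.0f}")
```

Because the side effect carries ten times the magnitude of the benefit, its base rate dominates the calculation: below a 20% rate permitting the drug still has the higher expected value, and above it, withholding the drug does. The verdict hinges entirely on how the utilities are weighted against the base rate, which is the point developed in the following clip.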
This, of course, relates to the material on Bayes and decision-making. Specifically, if utility associated with different outcomes is weighted differently, then logic is no longer bound by a direct analysis of base rates and probability.
To be fair, while a discussion of utility is not at the forefront of Kahneman and Tversky’s work up to and including “On the Psychology of Prediction,” it was not long before their theory expanded to encompass a measure of utility. By the late 1970s, Kahneman and Tversky, guided by the latter’s work in mathematical economics, had developed a set of principles called prospect theory. Prospect theory, although similar on the surface to utilitarianism, differs by framing human decisions in terms of minimizing risk rather than maximizing possible utility. For a detailed look at prospect theory, click here for excerpts from Kahneman’s 2002 interview with Forbes magazine. Prospect theory and its offshoots formed a body of research that would culminate three decades later in the Nobel Prize in economics.
The next section discusses the role of base rates, utility, and prediction in clinical decision-making.
CLINICAL DECISION-MAKING: BAYES IN THE REAL WORLD
In 1973, Rosenhan and his colleagues dropped a bomb on the world of clinical psychology by publishing in Science the famous pseudo-patient study, “On Being Sane in Insane Places.” In this controversial article, Rosenhan et al. approached twelve psychiatric hospitals claiming to hear voices, and in each case were granted admittance, with the majority (eleven) diagnosed as schizophrenic (the final subject was admitted under an alternative diagnosis). From this, Rosenhan argued that, considering the relative rareness of schizophrenia in the general population, it was absurd that the clinicians committed the pseudo-patients on the basis of such tenuous evidence. This result can be read as striking support for Kahneman and Tversky’s central thesis: clinicians, who are often held up as paragons of careful decision-making, were no more immune to base rate errors than the common man. For a more detailed explanation of the original Rosenhan study offered by Dr. Davis, click here (DA 3).
A retort to the conclusions advanced in Rosenhan’s study, and a related work by Langer and Abelson, was offered by Dr. Davis in a series of articles during the mid-1970s. In these articles, Davis argued that clinicians, while perhaps not overtly aware of the fact, were acting in accordance with both Bayesian and utility theories. For the full text of Davis’ articles, click here. Excerpts from Dr. Davis, summarizing the arguments presented in his papers and their relation to Kahneman and Tversky, follow:
There are a few key elements that should be taken from the preceding clips. First, neither Davis nor Perloe suggests that clinicians acted with Bayes’ theorem in mind. Despite this, the actions of the clinicians in both the Rosenhan et al. and the Langer and Abelson studies reflect at least a subconscious sensitivity to base rate information. Furthermore, when the utility of different outcomes is factored into the decision-making process, the actions of the clinicians seem much more reasonable than they might first appear. Ultimately, although Kahneman and Tversky’s claim that people are not Bayesian is probably appropriate for describing human decision-making, it does not follow that the decisions we make are inherently illogical.
Continue for additional examples of appropriate and inappropriate use of base rates and Bayesian logic.
As with all theoretical models, Bayes’ theorem is only useful when correctly applied. This section contains different examples, as discussed in the interviews, of appropriate and inappropriate uses of Bayes’ theorem and base rate information. Note that many of these clips focus on why thinking in terms of category base rates is difficult, especially considering the negative social perception of issues like stereotyping.
In summary, base rates are not the be-all and end-all of decision-making. The previous sections’ discussions, particularly the discussion of utility, highlight the limitations of Bayesian logic in everyday life. The best decision-makers, as per Davis’ comments, are those who are aware of base rates and use them to supplement, rather than replace, normal processing.
For more on Dr. Meehl's contribution to decision making, click here.
THE ILLUSION OF VALIDITY
A final theory to consider from “On the Psychology of Prediction” is what Kahneman and Tversky define as the "illusion of validity." The illusion of validity is defined on page 249:
Factors which enhance confidence, for example, consistency and extremity, are often negatively correlated with predictive accuracy. Thus, people are prone to experience much confidence in highly fallible judgments, a phenomenon that may be termed the illusion of validity.
In other words, at the times when one should be most unsure of accuracy—say, at the extreme ends of predictions about GPA, as per the example in the article—people are much more likely to place faith in their assessment. In terms of regression, therefore, people do not regress toward base rates when making predictions of uncertain validity, but rather maintain confidence regardless of base rate information. What is most ironic, as Kahneman and Tversky are quick to point out, is that the more statistically suspect one’s prediction, the more confidence one places in it—at least insofar as the experiments mentioned are concerned.
For an interview with Forbes in which Dr. Kahneman discusses the illusion of validity, click here.
It is difficult, if not impossible, to end conclusively any discussion of “On the Psychology of Prediction.” This project, therefore, is not so much a biography of an experiment as it is the biography of an idea—that is, the idea that humans are limited in their abilities to make decisions based on statistical information. The theories that have been advanced in this article still influence psychology, public policy, economics, and countless other fields. From Bayes’ theorem to clinical decision-making, from terrorism to engineers’ personalities, “On the Psychology of Prediction” and its successor articles have changed the way we think about thinking.
One should take away three interrelated lessons from this project. First, one should understand Bayesian logic and how it relates to Kahneman and Tversky’s central thesis. Second, the conclusions of each sub-study, if not all of the statistical analysis, should be easily understood in terms of decision-making. Third, and most important, one should leave with more unanswered questions than one had at the beginning. The interviews threaded throughout this discussion hint at the many fields that build from or are subsequently affected by “On the Psychology of Prediction.”
Ackman, D. (2002). Nobel laureate debunks economic theory. Forbes.com interview with Dr. Kahneman.
Davis, D.A. (1976). On being detectably sane in insane places: Base rates and psychodiagnosis. Journal of Abnormal Psychology, 85, 416-422.
Davis, D. A. (1979). What's in a name: A Bayesian rethinking of attributional biases in clinical judgment. Journal of Consulting and Clinical Psychology, 42, 1109-1114.
Dr. Perloe’s class notes, which formed the basis for the walkthrough of Bayes’ theorem.
Wikipedia entries for Thomas Bayes and representativeness.
Additional supplemental links, while not explicitly referenced in this paper, are located at the ends of many of the preceding sections.
I would like to thank Dr. Davis for his guidance, support, advice, stories, interview, general enthusiasm, and tea. I would also like to thank Dr. Baron, Dr. Perloe, and Dr. McCauley for their willingness to speak with me about this project; for that I am very grateful. I would like to thank J.D. Zipkin for his help and guidance with all things HTML. Without his patience, this project wouldn’t work. Period. I would like to thank Keith Weissglass for teaching me Final Cut. And finally, I would like to thank Hilary Franklin for reading through my drafts and telling me when I wasn’t making sense or spelling correctly. Thank you all.