Kahneman and Tversky’s work in the 1970s initiated a revolution in the way psychologists think about decision-making.  This project presents the biography of one of Kahneman and Tversky’s seminal articles, “On the Psychology of Prediction.”  The annotations on the PDF and this web resource will provide context—both theoretical and practical—for understanding the impact of this article.   Later sections of this web resource will focus on the theories presented in “On the Psychology of Prediction.”  For now, consider the world of psychology and decision-making prior to the publication of this article.  


Before Kahneman and Tversky’s contributions, the prevailing notion—not just in psychology, but also in fields as diverse as economics and public policy—was that people were more or less rational actors.  In the 1950s, the work of eighteenth-century mathematician Thomas Bayes, whose famous theorem holds that probabilities should be computed with deference to both present likelihood and prior odds, was subsumed into the dominant model of decision-making.  Noteworthy among the social theorists of the 1950s was Ward Edwards, who along with his contemporaries painted a rosy picture of human mental processing—one in which people were sensitive, at least in part, to base rate information.
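Bayes’ theorem updates a prior probability in light of new evidence. A minimal sketch in Python follows; every number is hypothetical and chosen only for illustration (none comes from the article):

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) = P(E|H) * P(H) + P(E|not H) * P(not H).

def posterior(prior, likelihood, false_alarm_rate):
    """Probability of hypothesis H given evidence E."""
    evidence = likelihood * prior + false_alarm_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical cue: present in 80% of engineers and 30% of non-engineers,
# with engineers making up 30% of the reference population.
print(round(posterior(prior=0.30, likelihood=0.80, false_alarm_rate=0.30), 2))  # → 0.53
```

A good Bayesian weighs both the likelihood and the prior; the base rate neglect that Kahneman and Tversky documented amounts to using the likelihood while ignoring the prior.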


A reevaluation of Bayesian logic in everyday life launched Kahneman and Tversky’s collaboration in 1969, a joint career that spanned over a decade and produced scores of journal articles.  “On the Psychology of Prediction,” one of their first major collaborative works, synthesized their findings from the previous three years of work into a cohesive set of theories suggesting that people did not appear to make decisions in a Bayesian manner.  Instead, Kahneman and Tversky argued that heuristic cues, rather than appropriate statistical information, dominated decision-making.


For more on Ward Edwards, click here.


For a demo explaining Bayes’ theorem, click here.

For more on Bayes’ theorem, click here.

For a Bayes’ theorem calculator, click here.


For written excerpts from an interview with Dr. Baron that discusses how Kahneman and Tversky’s work relates to decision-making, Edwards, and Bayesian statistics, click here.


Otherwise, continue on to the first section, a discussion of the Tom W. study.






The Tom W. study encapsulates Kahneman and Tversky’s central thesis that people ignore statistical information when given heuristic information.  At face value, it seems that people are indeed poor users of relevant statistical information.  This would seem to explain why the judgments of the psychology grad students (third column, page 238) were more reflective of the second (attribution) column than of the first (approximate base rates).


If one delves deeper, however, the situation is not as clear as the results first suggest.  The methodology used in studies like this one has been criticized by others in the field of psychology.  Excerpts follow from interviews with Dr. McCauley and Dr. Davis, both psychologists who study decision-making and attribution, concerning the limitations of Kahneman and Tversky’s methodology.


MC 1: <bgsound src="InterviewClips\McCauley Clips\MC_1.mp4" WIDTH=150 HEIGHT=15>
McCauley explains why, in his opinion, Kahneman and Tversky approached their survey methodology backwards.  Typically, social psychologists start with a category and then ask for related attributes.  In the Tom W. study, however, Kahneman and Tversky presented subjects with attributes and then asked for predictions about group membership.  Hence, it was the irregular nature of the task and of the questions asked that contributed to the highly significant results.  For more information on the possible impact of the phrasing of the questions, continue to clip two.


MC 2: <bgsound src="InterviewClips\McCauley Clips\MC_2.mp4" WIDTH=150 HEIGHT=15>
McCauley suggests that the exact wording of a question—not just in Kahneman and Tversky’s study, but in the wider body of decision-making literature—can be manipulated to produce almost any desired result. Too often, as soon as researchers find the phrasing that produces a consistent pattern of results, the examination stops without any attempt to generalize the findings through different survey methodologies. He suggests that the only satisfactory solution to this shortcoming is to compile data from a “random and representative sample of all the different ways to ask this question [of characteristics and group inclusion] to establish true generalizability.” McCauley offers the issue of abortion to illustrate how question phrasing can produce different results on what appears on the surface to be a single issue. For example, someone might first say she is against abortion, but change her response when the question includes additional details (whether the mother’s health is at risk, etc.). This, however, is an exaggerated example; even small differences in wording can produce discrepant results. Asking “how much do you like Bob” versus “how much do you dislike Bob,” while theoretically soliciting a similar assessment of Bob in each case, might prime people for different answers. In terms of the present study, it is possible that different questions relating Tom W. to engineers or lawyers might have produced different or less starkly significant results.


DA 1: <bgsound src="InterviewClips\Davis Clips\DA_1.mp4" WIDTH=150 HEIGHT=15>
Davis responds to the concerns McCauley expressed in the preceding two clips, focusing on the issues of question phrasing and generalizability.  In defense of Kahneman and Tversky, however, Davis highlights the fact that their way of thinking about the issue, methodological concerns aside, represented a significant and important departure from the earlier paradigms of decision-making.



MC 3: <bgsound src="InterviewClips\McCauley Clips\MC_3.mp4" WIDTH=150 HEIGHT=15>
McCauley argues that Kahneman and Tversky’s conclusion—that people are always poor at making decisions based on statistical information—is harsher than is warranted.  Rather than demonstrating sensitivity to the fact that most of the time humans make very good decisions, McCauley asserts, Kahneman “adds ever longer to the litany of human foibles in information processing.”  In other words, McCauley suggests that Kahneman and Tversky focused on the negative aspects of human cognition without acknowledging that most decisions are properly founded.


MC 4: <bgsound src="InterviewClips\McCauley Clips\MC_4.mp4" WIDTH=150 HEIGHT=15>
Following from the previous point, McCauley suggests that it might be possible, contrary to Kahneman and Tversky’s suggestion, to reconcile this “litany of foibles” with the fact that people process information satisfactorily most of the time.  Furthermore, while it is undeniable that phenomena like those of the Tom W. study are consistent and reproducible, it is unclear how reflective these results are of “real world” decision-making.


For additional examples of how “real world” decision-making relates to Kahneman and Tversky’s work, click here.  Otherwise, read on for a discussion of how the representativeness hypothesis relates to the Tom W. study.






What exactly is representativeness?  “On the Psychology of Prediction” offers the following definition in its abstract: “by this [representativeness] heuristic, people predict the outcome that appears most representative of the evidence” (Kahneman and Tversky 1973).  As one can see, however, this definition is circular, so it is worth taking an extra moment to consider just what the term means in the context of heuristics and decision-making.  Perhaps the most straightforward summary of representativeness comes from Wikipedia: “Under the representativeness heuristic, we judge things as being similar based on how closely they resemble each other using prima facie, often superficial qualities, rather than essential characteristics.”  In other words, representative judgments focus on cues in relation to a heuristic stereotype rather than on the statistically relevant information.  For example, people often mistake physique as representative of intelligence: at first impression, attractive individuals are judged as more intelligent than unattractive people—an obvious misuse of available cues, as looks have little bearing on intelligence.


PE 1: <bgsound src="InterviewClips\Perloe Clips\PE_1.mp4" WIDTH=150 HEIGHT=15>
Perloe responds to the issue of why people seem to ignore base rates.  One major factor is that, in line with the work of cognitive psychologist Eleanor Rosch, people apply current information to a heuristic stereotype.  Hence, probabilistic judgments are based on the resemblance of an input to a stereotype rather than on a Bayesian model of base rates.  If one processes what Kahneman and Tversky define as representative information in this way, there is simply no felt need to account for base rates.



DA 2: <bgsound src="InterviewClips\Davis Clips\DA_2.mp4" WIDTH=150 HEIGHT=15>
This Davis clip, originally presented in the section on clinical decision-making, also applies to the discussion of representativeness.  Focusing on the segment following 2:40, Davis discusses how representativeness can skew one’s understanding of base rates.  In a clinical example, Davis suggests that a clinician’s first or most vivid experience with a schizophrenic can form the foundation for her understanding of schizophrenics as a whole.  Hence, it is possible that the cues that one takes as representative can actually differ substantially from the actual base rates. 


Consider two additional examples of representativeness.  First, suppose you bump into an engineer with a pocket protector.  If this is your first experience with an engineer, you might think that pocket protectors are representative of engineers, even if in reality they are relatively rare.  Hence, what becomes a heuristic model can distort actual base rates.  Second, suppose you are in Nebraska and hear someone speaking in what sounds like a New York accent.  Though the actual base rate of New Yorkers in Nebraska is low given the population of the whole state, the accent might be so representative that you immediately conclude that the speaker is from New York. 
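The New York accent example can be made concrete. The numbers below are invented purely for illustration; the point is that even a highly diagnostic cue yields only a modest posterior when the base rate is very low:

```python
# Hypothetical numbers for the Nebraska example; none are real statistics.
prior_ny = 0.002             # assumed share of New Yorkers among people in Nebraska
p_accent_if_ny = 0.90        # a New Yorker very likely has the accent
p_accent_if_not_ny = 0.01    # the accent is rare among everyone else

# Bayes' theorem: weigh the cue against the base rate.
evidence = p_accent_if_ny * prior_ny + p_accent_if_not_ny * (1 - prior_ny)
posterior_ny = p_accent_if_ny * prior_ny / evidence
print(round(posterior_ny, 2))  # → 0.15
```

Even with a cue this diagnostic (90% versus 1%), the tiny prior keeps the posterior around 15%; concluding “definitely a New Yorker” from the accent alone is the representativeness heuristic at work.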


It is not an enormous leap to see the role of representativeness in the Tom W. study.  In this case, certain cues—like mechanical aptitude—are representative of engineers and therefore skewed subjects’ predictions about Tom W.’s vocation.


For additional information on Dr. Rosch, click here.


The next section relates representativeness to the availability heuristic and attribution theory.






The availability heuristic can be thought of, in simple terms, as people making decisions based on what is most salient in memory, rather than on complete information. In terms of Kahneman and Tversky’s work, this explains in part why people seem more swayed by vivid cues, like a qualitative description of Tom W., and less by abstract factors, like statistical probability.  There are, however, problems with relating these two theories in that the literature on attribution theory cannot speak to how people translate attributions from the individual to the group level.  The following pair of clips from McCauley focuses on how aspects of Kahneman and Tversky’s work, following from the earlier discussion of the Tom W. study, do not appear to coincide with other theories in psychology.


MC 5: <bgsound src="InterviewClips\McCauley Clips\MC_5.mp4" WIDTH=150 HEIGHT=15>
McCauley poses the rhetorical question “what is our attribution theory for making attributions to groups,” and answers that no formal body of literature exists which speaks to attributions on a large scale. The problem is that all attribution, as far as current theory can say, occurs at the individual level—no one knows for certain how people generalize attributions from individuals to an entire group. For example, consider the frenzy of racial profiling after September 11th, in which a great leap was made from nineteen hijackers to Arabs in general. “While the attribute ‘Arab’ was highly representative of 9/11 hijackers, the attribute ‘hijacker’ is not particularly representative of Arabs” (Davis). Hence, Kahneman and Tversky’s work—as typified by the Tom W. study—is statistically significant but cannot be properly understood in terms of attribution theory.


MC 6: <bgsound src="InterviewClips\McCauley Clips\MC_6.mp4" WIDTH=150 HEIGHT=15>
This is McCauley’s response to the question “Can the contact hypothesis be understood in Bayesian terms?”  The intuition is that interaction with a group member refines one’s base rate of attributions for the group as a whole.  McCauley’s response, as per the previous clip, is that the contact hypothesis—and attribution theory as a whole—depends on the assumption that people naturally process attributions upward from the individual to the group level.  Since this assumption is tenuous at best, Bayesian logic does not appear to be applicable to the contact hypothesis.


Ultimately, it is difficult to reconcile Kahneman and Tversky’s conclusions with McCauley’s theoretical concerns.  While it is obvious that representative cues impact probabilistic judgments, as demonstrated throughout “On the Psychology of Prediction,” this finding cannot speak to how these heuristics develop.  It is also worth noting an apparent discrepancy between McCauley’s concerns and Perloe’s explanation of stereotype theory (see: representativeness).   


The next section discusses the relationship of utility to decision-making. 






One apparent shortcoming of the theories presented in “On the Psychology of Prediction” is that Kahneman and Tversky, on the surface, did not seem to account for a major facet of “real world” decision-making: utility.  One could argue that decision-making—both in our evolutionary past and in everyday life—is contingent on an understanding of the perceived value of possible outcomes.  Hence, many psychologists argue that it is not simply that people are poor Bayesians, but rather that people are used to balancing more than base rate information when making decisions.  When this second component of utility is removed in a study of decision-making, as in some of Kahneman and Tversky’s studies, one could argue that the results reflect artificial test conditions and not “real world” processing.


PE 2: <bgsound src="InterviewClips\Perloe Clips\PE_2.mp4" WIDTH=150 HEIGHT=15>
Perloe discusses the issues of utility and base rate probability in terms of the everyday scenario of mugging.  If you suppose that a neighborhood is bad, you might avoid walking there at night not because the probability of being mugged is high—it may not be—but because the cost of being mugged is so great.  Factors unrelated to the base rate can, and should, influence our judgment.  Perloe relates this phenomenon to medicine, suggesting that the FDA controls drugs based not only on the direct probability of side effects but also on the severity of those side effects.  For a further illustration of this balance, consider the following example:


Suppose that the FDA is considering licensing the manufacture of a new headache drug, which gives +10 units of utility (i.e., relief from symptoms) over competing headache drugs.  Not licensing this drug means that people who are prone to headaches might be left 10 units of utility worse off.  However, this drug also has a low chance of causing severe bleeding and ulcerations, and these side effects will be defined as -100 units of utility.  This balance is represented in the following diagram:


                          Permit drug        Don’t permit drug

Desired outcome              +10                   -10

Side effects                -100                0 or +100


If the FDA determines the expected value of each option by multiplying the utility of each outcome by its probability, the severe cost of side effects can outweigh the drug’s modest benefit even when side effects are fairly rare.  For a more subtle explanation of utility and decision-making, click on the next clip.
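A minimal sketch of that calculation, using the utilities from the example above; the 20% side-effect rate is an assumed, illustrative figure, not a real one:

```python
# Utilities from the example: +10 for relief, -100 for severe side effects.
def expected_utilities(p_side_effect):
    permit = (1 - p_side_effect) * 10 + p_side_effect * (-100)
    dont_permit = -10.0  # headache sufferers forgo the relief
    return permit, dont_permit

permit, dont = expected_utilities(p_side_effect=0.20)
print(round(permit, 1), dont)  # → -12.0 -10.0
```

Under these payoffs the decision flips where 10 − 110p = −10, i.e. at a side-effect rate of roughly 18%; below that, permitting the drug has the higher expected utility.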


PE 3: <bgsound src="InterviewClips\Perloe Clips\PE_3.mp4" WIDTH=150 HEIGHT=15>
Perloe discusses the impact of accuracy criteria on decision-making.  Perloe begins with a discussion of an experiment in which subjects were asked to report whether or not they had heard a given signal in the presence of noise.  The subject’s response of yes or no depended on the exact nature of the task.  If one must be absolutely sure that there are no false positives, then the cutoff for saying “yes” will be very high.  This is called the juror matrix, in that jurors function according to the principle that indisputable proof is necessary for a conviction, even if that means some criminals are acquitted.  The juror matrix stands in contrast to the sentry matrix, in which false positives are far preferable to false negatives.
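The two matrices can be sketched as a simple threshold rule. The costs and probabilities below are invented for illustration; correct responses are assigned zero cost for simplicity:

```python
# Say "yes" when the expected cost of a yes is lower than that of a no.
def should_say_yes(p_signal, cost_false_positive, cost_false_negative):
    cost_of_yes = (1 - p_signal) * cost_false_positive  # risk: false positive
    cost_of_no = p_signal * cost_false_negative         # risk: false negative
    return cost_of_yes < cost_of_no

# Juror matrix: a false positive (wrongful conviction) is far costlier.
print(should_say_yes(0.80, cost_false_positive=100, cost_false_negative=1))  # → False
# Sentry matrix: a false negative (missed threat) is far costlier.
print(should_say_yes(0.20, cost_false_positive=1, cost_false_negative=100))  # → True
```

The same evidence can support opposite responses once the utilities attached to each kind of error differ.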

This, of course, relates to the material on Bayes and decision-making.  Specifically, if utility associated with different outcomes is weighted differently, then logic is no longer bound by a direct analysis of base rates and probability.  


To be fair, while a discussion of utility is not at the forefront of Kahneman and Tversky’s work up to and including “On the Psychology of Prediction,” it was not long before their theory expanded to encompass a measure of utility.  By the late 1970s, Kahneman and Tversky, guided by the latter’s work in mathematical psychology, had developed a set of principles called prospect theory.  Prospect theory, although similar on the surface to utility theory, differs by framing decisions around gains and losses relative to a reference point—with losses weighing more heavily than equivalent gains—rather than around maximizing expected utility.  For a detailed look at prospect theory, click here for excerpts from Kahneman’s 2002 interview with Forbes magazine.  Prospect theory and its offshoots formed a body of research that would culminate three decades later in the Nobel Prize in economics.


The next section discusses the role of base rates, utility, and prediction in clinical decision-making.






In 1973, Rosenhan and his colleagues dropped a bomb on the world of clinical psychology by publishing in Science the famous pseudo-patient study, “On Being Sane in Insane Places.”  In this controversial article, Rosenhan et al. approached twelve psychiatric hospitals claiming to hear voices, and in each case were granted admittance, with the majority (eleven) diagnosed as schizophrenic (the final subject was admitted under an alternative diagnosis).  From this, Rosenhan argued that, considering the relative rareness of schizophrenia in the general population, it was absurd that the clinicians committed the pseudo-patients on the basis of such tenuous evidence.  Rosenhan interpreted this result as indisputable proof of Kahneman and Tversky’s central thesis: clinicians—who are often held up as paragons of careful decision-making—were no more immune from base rate errors than the common man.  For a more detailed explanation of the original Rosenhan study offered by Dr. Davis, click here (DA 3).


A retort to the conclusions advanced in Rosenhan’s study, and a related work by Langer and Abelson, was offered by Dr. Davis in a series of articles during the mid-1970s.  In these articles, Davis argued that clinicians, while perhaps not overtly aware of the fact, were acting in accordance with both Bayesian and utility theories.  For the full text of Davis’ articles, click here. Excerpts from Dr. Davis, summarizing the arguments presented in his papers and their relation to Kahneman and Tversky, follow:


DA 4: <bgsound src="InterviewClips\Davis Clips\DA_4.mp4" WIDTH=150 HEIGHT=15>
Davis describes his response to Rosenhan’s article in greater detail.  Specifically, he argued that the clinicians in Rosenhan’s study could be understood in Bayesian terms.  At first glance, the clinicians in the study seem to have been wrong in taking a single cue—such as hearing voices—and interpreting it as indicative of schizophrenia.  One might argue, as Rosenhan did, that this move was unwarranted considering the very low prevalence of schizophrenia in the general population.  Davis, however, argued that it was inappropriate to use the general population as a base rate because the people who come of their own volition to mental wards are themselves an atypical sample.  From this, Davis argued that, if one considers the base rate of schizophrenia and the cue of hearing voices in the sub-population of people seeking admittance to clinical care, then the clinicians actually acted appropriately.
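Davis’s base-rate argument can be sketched numerically. Every figure below is invented for illustration: a likelihood of reporting voices given schizophrenia, a small false-report rate, and two different priors—one for the general population and a much higher one for people presenting themselves for admission:

```python
def p_schizophrenia(prior, p_voices_if_scz=0.60, p_voices_if_not=0.01):
    """P(schizophrenia | reports hearing voices) via Bayes' theorem."""
    evidence = p_voices_if_scz * prior + p_voices_if_not * (1 - prior)
    return p_voices_if_scz * prior / evidence

print(round(p_schizophrenia(prior=0.01), 2))  # general-population prior → 0.38
print(round(p_schizophrenia(prior=0.30), 2))  # admission-seeking prior  → 0.96
```

Against the admission-seeking base rate, the diagnosis looks far less absurd than Rosenhan’s general-population framing suggests.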


PE 4: <bgsound src="InterviewClips\Perloe Clips\PE_4.mp4" WIDTH=150 HEIGHT=15>
This is an excerpt from Dr. Perloe in reference to Davis’ work.  Even if it is not externally obvious, people do seem to function with at least a subconscious appreciation of base rates when making decisions.  Specifically, the more experience one has with a category, the more salient the relevant information becomes for future processing.  Holding with Davis’ ideas, the clinicians in the Langer and Abelson study were not mistaken in functioning as if the base rate of schizophrenia in the subjects was higher than in the general population.  In terms of Bayes’ theorem, “people can behave as Bayesians only when there is good reason to think about the base rate.  One good reason is that you have lots of experience, and the other… is that you have a theory that membership in that category is relevant to making a good judgment.”


DA 5: <bgsound src="InterviewClips\Davis Clips\DA_5.mp4" WIDTH=150 HEIGHT=15>
Davis describes his work in response to another study, “A Patient by Any Other Name,” published by Langer and Abelson.  In this study, clinicians rated people differently if they were presented as job applicants or potential patients.  Langer and Abelson argued that clinicians should work only with the information at hand and that clinicians were unfair in judging the job applicants differently than possible patients.  In response, Davis argues that the clinicians in “A Patient by Any Other Name” were justified in acting as they did, because it is not a wild assumption to think that the base rates of pertinent characteristics differ between job applicants and potential patients.  In closing, Davis suggests that even if Kahneman and Tversky were correct in arguing that people are non-Bayesian, at least the psychodynamic clinicians who appeared to be influenced by the label “patient” were not behaving as illogically as it first seemed.


DA 6: <bgsound src="InterviewClips\Davis Clips\DA_6.mp4" WIDTH=150 HEIGHT=15>
This clip frames the clinical issues in relation to the juror and sentry matrices described in the utility section.  Which is the more serious mistake: sending away someone on the brink of a potentially disastrous illness, or playing it safe and admitting someone on limited information?  If you assume that the latter is the lesser error, then it follows that clinicians would follow a sentry matrix, in which a “false negative” (turning a potential patient away) has far worse consequences than a “false positive” (incorrectly admitting one).  It follows that the clinicians acted logically by admitting the pseudo-patients, even considering the low base rate probability of serious illness, because turning a seriously ill individual away would have severe negative consequences.


There are a few key elements to take from the preceding clips.  First, neither Davis nor Perloe suggests that the clinicians acted with Bayes’ theorem in mind.  Despite this, the actions of the clinicians in both the Rosenhan et al. and the Langer and Abelson studies reflect at least a subconscious sensitivity to base rate information.  Furthermore, when the utility of different outcomes is factored into the decision-making process, the actions of the clinicians seem much more reasonable than they might first appear.  Ultimately, although Kahneman and Tversky’s claim that people are not Bayesian is probably appropriate for describing human decision-making, it does not follow that the decisions we make are inherently illogical.


Continue for additional examples of appropriate and inappropriate use of base rates and Bayesian logic.






As with all theoretical models, Bayes’ theorem is only useful when correctly applied. This section contains different examples, as discussed in the interviews, of appropriate and inappropriate uses of Bayes’ theorem and base rate information. Note that many of these clips focus on why thinking in terms of category base rates is difficult, especially considering the negative social perception of issues like stereotyping.


PE 5: <bgsound src="InterviewClips\Perloe Clips\PE_5.mp4" WIDTH=150 HEIGHT=15>
Referring to Kahneman and Tversky’s work and the literature on attribution theory, Perloe suggests that when people know that “base rate information is causally related to membership in a category, they will use it.” He explains the “On the Psychology of Prediction” example of test scores as relevant base rate information. For the article’s treatment of this issue, go to page 9 of the PDF (pg. 245).


PE 6: <bgsound src="InterviewClips\Perloe Clips\PE_6.mp4" WIDTH=150 HEIGHT=15>
Following from the previous clip, Perloe discusses the issues of base rates and decision-making in greater detail. He discusses why people are so poor at, or at the very least hesitant about, thinking along Bayesian lines. He mentions that political correctness makes it almost taboo to consider social categories, as doing so might be misconstrued as negative stereotyping.


PE 7: <bgsound src="InterviewClips\Perloe Clips\PE_7.mp4" WIDTH=150 HEIGHT=15>
Perloe suggests that people do not know how to make logical decisions based on given information.  He suggests that, lurid politically incorrect examples aside, stereotyping is not inherently bad.  In fact, to totally spurn stereotypes and related base rates is actually contrary to the Bayesian model of decision-making, because the base rate information associated with a stereotype is often rooted in probabilistic truth.


DA 7: <bgsound src="InterviewClips\Davis Clips\DA_7.mp4" WIDTH=150 HEIGHT=15>
Davis argues, holding with the work of Dr. Baron, that the best course of action when presented with limited information is no action at all. This assertion is then related to the issue of clinical decision-making, as per the previous discussion of diagnoses of schizophrenia. A good Bayesian knows that insufficient evidence cannot conclusively speak to how evidence should be related to a base rate.


DA 8: <bgsound src="InterviewClips\Davis Clips\DA_8.mp4" WIDTH=150 HEIGHT=15>
As an interesting counterpoint to the bulk of the discussion, Davis presents a scenario in which pure actuarial logic is superior to heuristic diagnoses. Holding with Meehl, he suggests that certain actuarial tables, even when one considers the role of human observation and utility, are more effective than clinicians acting on their own. Meehl’s example is that even a layman, given an equation in which observable symptoms are analyzed against base rates in a population, can outperform clinicians in making diagnoses. This is because probabilistic equations, in contrast to clinicians, are immune from the human fallacy of overemphasizing availability and other heuristic cues.


DA 9: <bgsound src="InterviewClips\Davis Clips\DA_9.mp4" WIDTH=150 HEIGHT=15>
In response to the concerns presented in the preceding clip, Davis argues that clinicians and Bayesian models should complement one another to be most effective. Specifically, he argues that the observational skills of a clinician can be combined with the insights offered by base rates to produce better diagnostic decisions. In other words, clinicians are valuable because of their perceptive sensitivity. However, clinicians alone are not immune to fallacies of availability or representativeness. Bayesian models—a computer diagnostic aid, for example—are therefore useful by reminding clinicians of the relevant base rate information. Clinicians, therefore, can compare their diagnostic intuition to unbiased probabilistic information.


In summary, base rates are not the be-all and end-all of decision-making.  The previous sections’ discussions, particularly the discussion of utility, highlight the limitations of Bayesian logic in everyday life.  The best decision-makers, as per Davis’ comments, are those who are aware of base rates and use them to supplement, rather than replace, normal processing.


For more on Dr. Meehl's contribution to decision making, click here.






A final theory to consider from “On the Psychology of Prediction” is what Kahneman and Tversky define as the "illusion of validity."  The illusion of validity is defined on page 249:


Factors which enhance confidence, for example, consistency and extremity, are often negatively correlated with predictive accuracy.   Thus, people are prone to experience much confidence in highly fallible judgments, a phenomenon that may be termed the illusion of validity.


In other words, at the times when one should be most unsure of accuracy—say, at the extreme ends of predictions about GPA, as per the example in the article—people are much more likely to place faith in their assessment. In terms of regression, therefore, people do not regress toward base rates when predicting in situations of ambiguous certainty, but rather maintain confidence regardless of base rate information. What is most ironic, as Kahneman and Tversky are quick to point out, is that the more statistically suspect one’s prediction, the more confidence one places in it—at least insofar as the experiments mentioned are concerned.


Click here for an interview with Forbes in which Dr. Kahneman discusses the illusion of validity.






It is difficult, if not impossible, to end conclusively any discussion of “On the Psychology of Prediction.” This project, therefore, is not so much a biography of an experiment as it is the biography of an idea—that is, the idea that humans are limited in their abilities to make decisions based on statistical information. The theories that have been advanced in this article still influence psychology, public policy, economics, and countless other fields. From Bayes’ theorem to clinical decision-making, from terrorism to engineers’ personalities, “On the Psychology of Prediction” and its successor articles have changed the way we think about thinking.


One should take three interrelated lessons from this project. First, one should understand Bayesian logic and how it relates to Kahneman and Tversky’s central thesis. Second, the conclusions of each sub-study, if not all of the statistical analysis, should be easily understood in terms of decision-making. Third, and most important, one should leave with more questions than one had at the beginning. The interviews threaded throughout this discussion hint at the many fields that build from or were subsequently affected by “On the Psychology of Prediction.”






Ackman, D. (2002). Nobel laureate debunks economic theory. Forbes interview with Dr. Kahneman.


Davis, D.A. (1976). On being detectably sane in insane places: Base rates and psychodiagnosis. Journal of Abnormal Psychology, 85, 416-422.


Davis, D. A. (1979). What's in a name: A Bayesian rethinking of attributional biases in clinical judgment. Journal of Consulting and Clinical Psychology, 42, 1109-1114.


Dr. Perloe’s class notes, which formed the basis for the walkthrough of Bayes’ theorem.


Wikipedia entries for Thomas Bayes and representativeness.


Additional supplemental links, while not explicitly referenced in this paper, are located at the ends of many of the preceding sections.





I would like to thank Dr. Davis for his guidance, support, advice, stories, interview, general enthusiasm, and tea. I would also like to thank Dr. Baron, Dr. Perloe, and Dr. McCauley for their willingness to speak with me about this project. For that I am very grateful. I would like to thank J.D. Zipkin for his help and guidance with all things html. Without his patience, this project wouldn’t work. Period. I would like to thank Keith Weissglass for teaching me Final Cut. And finally, I would like to thank Hilary Franklin for reading through my drafts and telling me when I wasn’t making sense or spelling correctly. Thank you all.