Now I'd like to talk about a particular error in reasoning that is one of the most common errors we make, and one of the most troublemaking. When we try to figure out why an object or a person behaved in some way, we're prone to a dispositional bias. That's an inclination to see behavior as caused by dispositions, such as traits, rather than as a response to contexts or situations. This bias routinely causes us to make the fundamental attribution error, which is the tendency to mistakenly regard dispositions of the object or person as the primary cause of behavior, while ignoring important situational or contextual factors. Dispositions include traits, abilities, attitudes, and motives.

In one famous experiment, conducted a very long time ago, psychologists asked one group of college participants to read an essay in favor of Castro's Cuba, and had another group read an essay opposing Castro's Cuba. The first group was told that a political science instructor had told the student to write an essay in favor of Castro's Cuba, and the second group was told that the instructor had told the student to write an essay opposing Castro's Cuba. Logically, participants should not have felt that they had learned anything about the essay writers' true attitudes. But, in fact, participants who read the favorable essay assumed that the student who wrote it was in favor of Castro's Cuba, and those who read the unfavorable essay assumed that the student who wrote it was opposed to Castro's Cuba.

In another classic study, students participated in a study with a TV quiz format. One student was assigned at random to be the questioner, and one was assigned to be a contestant. The questioners took advantage of esoteric knowledge they had that they could assume was probably not known to the contestant. For example: what is the sweet-smelling waxy stuff that comes from whales and is used as a base for perfume? That's ambergris, in case you haven't recently read Moby Dick.
You might think it would have been clear to both participants and observers that the questioner had a big advantage by virtue of his role, which meant that he could display some impressive knowledge and couldn't be shown to be ignorant of anything. But, in fact, both the contestants and the observers who watched the quiz take place rated the questioner as far more knowledgeable than either the contestant or the average university student.

The study has profound relevance to everyday life. An organizational psychologist set up a laboratory version of an office. He randomly selected some participants to be managers and others to be clerks. Managers read manuals on supervisory responsibilities, while the experimenter showed the clerks the mailboxes, the filing system, and so on. After the office had been in operation for a while, managers and clerks rated each other on a number of work-related traits, including leadership, intelligence, motivation for hard work, assertiveness, and supportiveness. For all of those traits, managers rated their fellow managers more highly than they rated their clerks, and clerks rated their managers more highly than they rated their fellow clerks. And note that these findings were obtained despite the fact that everyone knew the roles had been assigned at random. In everyday life, people usually can't assume that, so it can be even harder to separate the effects of roles from the dispositions of the people who occupy them.

Notice how the representativeness heuristic, the failure to appreciate the law of large numbers, and the fundamental attribution error operate together in generating our mistakes about how much we can learn from observing a single behavior. Jan gave the panhandler a dollar. That's an act that's representative of generosity, so she must be a generous person. I don't pay much attention to the context, which may have been one in which most people would have given the panhandler a dollar.
And I don't do much checking against other behavior by Jan. There's no recognition that a single act doesn't constitute a great deal of evidence. These kinds of powerful biases operating together help to account for our massive failure to appreciate how little we actually learn from a single behavior.

Remember the earlier sessions on correlation and the law of large numbers? People are very badly calibrated. We asked people what they thought the correlation was between how honest people were in one situation and how honest they were in another, or what the correlation was between how friendly people are in one situation and how friendly they are in another. People thought those correlations were about .8. That's huge, a very, very strong correlation, so strong that we could be reasonably sure that if a person was honest in one situation, he would be in another as well. In fact, the correlation is extremely low. For all kinds of traits (honesty, friendliness, aggressiveness, conscientiousness, and so on), predicting one behavior from another that measures the same trait is a risky business; the correlations are quite low. So the fundamental attribution error is very fundamental, very profound, and very troublemaking.

The last bias that I want to talk about is the confirmation bias. When testing hypotheses, we tend to look only for evidence that could confirm the hypothesis and not for evidence that could disconfirm it. The search for evidence that might disconfirm a hypothesis is as important to testing it as the search for confirming evidence. Recall the two-by-two table, where you were asked to decide whether symptom x is associated with disease A, whether it's diagnostic of disease A. Some participants say yes, it is diagnostic, because some people with the disease have the symptom. They look only at confirming evidence, and only at one cell.
Other participants say yes, more people with the disease have the symptom than people without the disease. Those participants look at two cells, but still only for confirming evidence. Very few people understand that all the cells of the table are essential. The evidence as a whole serves to disconfirm the hypothesis, but if you don't look at the evidence that could disconfirm the hypothesis, you'll never discover that fact.

Our susceptibility to the confirmation bias means that the way a hypothesis is stated can determine the answer we get. Participants in one study of the confirmation bias were asked to interview someone. Some participants were asked to determine whether the person was an extrovert, and some were asked to determine whether the person was an introvert. The first group was inclined to ask questions for which a positive answer would tend to confirm the hypothesis, for example: in what situations are you most talkative? The second group also was inclined to ask questions for which a positive answer would tend to confirm its hypothesis, for example: in what situations do you wish you could be more outgoing? Of course, people asked the first type of question are going to appear more extroverted than people asked the second type of question. And, in fact, when the interview tapes were played for a new batch of participants, those participants rated interviewees who had been questioned by someone testing the extrovert hypothesis as more extroverted than interviewees who had been questioned by someone testing the introvert hypothesis.

You might find that you can use this rule for all kinds of things. Whenever you want to know whether x is associated with y, or x causes y, or x is a symptom of y, think of the two-by-two table. x, yes or no? y, yes or no?
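To make the four-cell check concrete, here is a minimal sketch in Python. The counts are entirely hypothetical, chosen so that the single "confirming" cell looks impressive on its own while the full table shows the symptom is worthless as a diagnostic:

```python
# Hypothetical two-by-two table: disease (yes/no) by symptom (yes/no).
symptom_with_disease = 80        # disease yes, symptom yes (the "confirming" cell)
no_symptom_with_disease = 20     # disease yes, symptom no
symptom_without_disease = 80     # disease no, symptom yes
no_symptom_without_disease = 20  # disease no, symptom no

# Rate of the symptom among people WITH the disease.
# Looking only here, 80 out of 100 seems like strong evidence.
p_symptom_given_disease = symptom_with_disease / (
    symptom_with_disease + no_symptom_with_disease)

# Rate of the symptom among people WITHOUT the disease.
# This row holds the potentially disconfirming evidence people tend to skip.
p_symptom_given_no_disease = symptom_without_disease / (
    symptom_without_disease + no_symptom_without_disease)

print(p_symptom_given_disease)     # 0.8
print(p_symptom_given_no_disease)  # also 0.8: the symptom occurs at the
# same rate with or without the disease, so it tells you nothing.
```

Only by comparing the two rates, which requires all four cells, can you see whether the symptom actually distinguishes people with the disease from people without it.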
And remember that if you haven't got a number in all four cells, you can't assess the question.

Psychologists have come up with dozens of biases, and a lot of them are worth thinking about. The ones we've just been talking about are some of the most important, because their consequences can be pretty severe. We can get into big trouble, and into big arguments, by failing to realize that our understanding of the world is an interpretation, not revealed truth. Some of the most important tools we use are schemas and heuristics, which are generally useful but far from foolproof. And some of the most consequential errors we make are due to our susceptibility to the fundamental attribution error. Finally, many of the inferences we make about the world can be regarded as tests of hypotheses. Unfortunately, too much of the time we simply confirm our hypotheses rather than actually testing them.

The next lesson will be concerned with choice: how we can choose between two or more courses of action, and what's the best way to do so.