Let's talk a little bit about integrating scores across different criteria. We've identified our criteria, we've evaluated candidates using our questions, and we actually just had a debate here in the studio about whether or not your favorite spirit animal really is a valid question. Maybe that's entered into it; I'm going to stand my ground on that one. But anyway, we've got our scores and now we need to figure out whether or not to hire somebody. That should be easy: we just go to the information and make our decision. Yet there's some really interesting evidence that we create problems for ourselves at this stage, that even at this point, when we've got all the information we need, we can still make bad decisions.

There's a lovely set of experiments reported by Robyn Dawes in a classic paper, where he tried to get judges to make expert predictions on questions similar to this. Some of it was hiring. Some of it was faculty admitting doctoral students: asking the admissions committee to rate people and then seeing how well those admissions ratings predicted how people would do in the program. But there were other things too. One set of experiments asked psychiatrists to predict, based on a patient's file, who would be diagnosed with neurosis versus psychosis. Then there were a bunch of studies that looked at students, asking people to predict, for somebody coming into college, what their GPA would be after a year.

If we think about the first two stages, identifying criteria and evaluating people against the criteria, those are being done. The files have the information, and it's fairly simple information, so everybody has worked out what the things are that we ought to be looking at, they're all looking at the same information, and then they make a prediction. The first column here, the judges' predictions, shows the correlations between what people predicted and the actual outcomes. These correlations aren't terrible. A lot of them are about 0.3 or 0.4; they're not great, they're not awful. Judges do okay on this. Maybe some of these outcomes are just hard to predict, and they are, but the interesting question is whether the judges could be doing better.

What Dawes did was take these files and say: suppose we're trying to predict how graduate students are going to do. Let's take the file; it's going to have a few fairly standard pieces of information. We can give it a score for the quality of the undergraduate GPA. We can give it a score for the GRE, which is the standardized test you take to get into graduate school. We can give it a score for the quality of the references. We can take all this information and just give each piece a score. He said: we'll take the file, we'll score it on the criteria, then we'll just add up the scores, giving each one equal weight, and use that total to make our prediction. What they found was that when they did this super simple thing, the correlation between these predictions and the outcomes was far higher. When we're trying to predict GPA, for example, we go from about 0.3 to 0.6; when we're trying to predict faculty ratings, we go from about 0.2 to 0.5. It's the same information people had to make their judgments, but just adding up the scores turns out to do much better than somebody looking at the file and trying to take a holistic view.
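To make the unit-weighting idea concrete, here is a minimal sketch in Python. The applicant names, criteria, and numbers are all hypothetical, and each criterion is standardized with a simple z-score so the different scales are comparable; the point is just that every criterion gets an equal weight and the totals do the ranking, rather than a holistic impression.

```python
from statistics import mean, stdev

# Hypothetical applicant files: undergrad GPA, GRE score, and a 1-5 reference rating.
applicants = {
    "A": {"gpa": 3.9, "gre": 310, "refs": 3},
    "B": {"gpa": 3.4, "gre": 330, "refs": 5},
    "C": {"gpa": 3.6, "gre": 320, "refs": 4},
}
criteria = ["gpa", "gre", "refs"]

def zscores(values):
    """Standardize raw scores so criteria on different scales can be added together."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Standardize each criterion across applicants, then sum with equal (unit) weights.
names = list(applicants)
standardized = {c: zscores([applicants[n][c] for n in names]) for c in criteria}
composite = {n: sum(standardized[c][i] for c in criteria) for i, n in enumerate(names)}

# Rank by the composite score instead of a holistic impression.
for name, score in sorted(composite.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:+.2f}")
```

Nothing in this sketch is tuned; that is the whole point of the finding, since the equal weights are what do the work.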
And in case there's something unique about equal weights, they also tried doing it with random weights. Just random weights: we put different weights on the different criteria and add them up. Again, that does better than the expert judgment. For each judge, we can also figure out which weights they're implicitly putting on each of these criteria based on their predictions, and then use those same weights consistently for that judge. Say this person on average gave a weight of about 0.3 to GPA, but it fluctuates from candidate to candidate; we'll just apply a weight of 0.3 every time and come up with predictions that way. When they did that, again, the model beat the judges' own predictions.

What's the point here? The point is that even when we've got all the information, going from that final step of "here are this person's strengths and weaknesses" to a final judgment about whether we should hire this person or not, we are not very good at that. For whatever reason, the human brain doesn't seem to be very well wired to do this trade-off across multiple dimensions, where I'm looking at their strengths here and their weaknesses there. My sense is that, because we've got a lot of information and we don't know what to focus on, we tend to anchor on a couple of things. We look for magic bullets: this one thing in their background, they've got this incredibly high GPA, therefore they're going to be a star, and we ignore a lot of warning signs. Or red flags: there's one thing, a really low standardized test score, that makes us nervous, and so we ignore everything else. Because we're doing that, we don't do a good job weighing up the strengths and the weaknesses.

If you take one thing away from this discussion, and I realize even that is often optimistic, it is this: once you've collected all the information, you don't want to do what we usually do, which is sit around in a group discussing and arguing the pros and cons before ultimately trying to come to some judgment. No. Just take the scores and add them up. Just score people on the criteria, add the scores up, and hire the person with the highest total. All of the evidence suggests that if you can do that, you're going to end up systematically bringing in better people than if you try to use your own judgment to figure out how it all hangs together.
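For completeness, the "figure out which weights a judge is implicitly using" step mentioned above can also be sketched. Here I recover a judge's average weights with an ordinary least-squares fit of their past holistic ratings on the criterion scores, then apply those weights consistently to a new applicant. The scores and ratings below are made up purely for illustration; this is a sketch of the idea, not a reconstruction of Dawes' actual analysis.

```python
import numpy as np

# Hypothetical data: standardized criterion scores (columns: GPA, GRE, reference
# quality) for five applicants a judge has already rated holistically.
X = np.array([
    [ 0.5,  1.2, -0.3],
    [-1.0,  0.4,  0.8],
    [ 1.3, -0.6,  0.2],
    [-0.2, -0.9,  1.1],
    [ 0.7,  0.1, -0.5],
])
judge_ratings = np.array([6.0, 4.5, 7.0, 3.5, 6.5])  # the judge's holistic 1-10 ratings

# Recover the weights the judge is implicitly using: least-squares fit of ratings
# on criterion scores, with an intercept so the fit isn't forced through zero.
design = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(design, judge_ratings, rcond=None)
intercept, weights = coeffs[0], coeffs[1:]

# Apply those same weights consistently to a new applicant instead of asking
# the judge for another holistic call.
new_applicant = np.array([0.9, -0.2, 0.4])
predicted_rating = intercept + new_applicant @ weights
print("implicit weights:", np.round(weights, 2))
print("model-of-the-judge prediction:", round(float(predicted_rating), 2))
```

The design choice worth noticing is that the weights are estimated once and then applied the same way to everyone, which is exactly the consistency the human judge lacks.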