In the previous modules, we discussed posterior distributions and credible intervals. This lesson discusses Bayesian point estimates, loss functions, and decision making.

To a Bayesian, the posterior distribution is the basis of any inference, since it integrates both her prior opinions and knowledge and the new information provided by the data. It also contains everything she believes about the distribution of the unknown parameter of interest. But the posterior distribution on its own is not always sufficient. Sometimes one wants to express one's inference as a credible interval, to indicate a range of likely values for the parameter. That would be helpful if you wanted to say that you are 95% certain the probability of an RU-486 pregnancy lies between some number L and some number U. And on other occasions, one needs to make a single-number guess about the value of the parameter. For example, you might want to declare the average payoff for an insurance claim, or tell a patient how much longer she has to live.

In such cases, the Bayesian perspective leads directly to decision theory, and in decision theory, one seeks to minimize one's expected loss. Loss can be tricky. If you're declaring the average payoff for an insurance claim, and if you are linear in how you value money, that is, twice as much money is exactly twice as good, then one can prove that the optimal one-number estimate is the median of the posterior distribution.

But in different situations, other measures of loss might apply. If you are advising a patient on her life expectancy, it is easy to imagine that large errors are far more problematic than small ones, and perhaps the loss increases as the square of how far off your single-number estimate is from the truth. For example, if she's told that her average life expectancy is two years, and it is actually ten, then her estate planning will be catastrophically bad, and she will die in poverty. In the case when the loss is proportional to the squared error, one can show that the optimal one-number estimate is the mean of the posterior distribution.

Finally, in some cases, the penalty is zero if you're exactly correct, but constant if you're at all wrong. This is the case with the old saying that close only counts in horseshoes and hand grenades, and it would apply if you want a prize for correctly guessing the number of jelly beans in a jar. Here, of course, instead of minimizing the expected loss, we want to maximize the expected gain. If a Bayesian is in such a situation, then her best one-number estimate is the mode of her posterior distribution, which is the most likely value.

There's a large literature on decision theory, and it is directly linked to risk analysis, which arises in many fields. Although it is possible for frequentists to employ a certain kind of decision theory, it is much more natural for Bayesians.

In this lesson, we've seen how Bayesians make point estimates of unknown parameters. We introduced the idea that one should make the choice that minimizes the expected loss, and found that the best estimate depends on the kind of loss function one is using. In the next video, we will discuss in more depth how these best estimates are determined.
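To make the three cases above concrete, here is a minimal sketch, not from the lecture itself, that computes the three point estimates from the same posterior and shows how they differ. The Beta(3, 9) posterior is a made-up stand-in for a posterior over a probability (such as the RU-486 pregnancy probability mentioned above), and the use of SciPy, the sample size, and the grid search for the mode are all assumptions chosen purely for illustration.

```python
# Illustrative sketch: the same posterior gives different "best" one-number
# estimates depending on the loss function being minimized.
# The Beta(3, 9) posterior below is hypothetical, not from the lecture.
import numpy as np
from scipy import stats

posterior = stats.beta(3, 9)                       # hypothetical posterior for a probability p
samples = posterior.rvs(size=100_000, random_state=0)

# Quadratic (squared-error) loss -> posterior mean
post_mean = samples.mean()

# Linear (absolute-error) loss -> posterior median
post_median = np.median(samples)

# 0/1 ("all-or-nothing") loss -> posterior mode, the most likely value,
# found here by a simple grid search over the density
grid = np.linspace(0, 1, 10_001)
post_mode = grid[np.argmax(posterior.pdf(grid))]

print(f"mean   (quadratic loss): {post_mean:.3f}")
print(f"median (linear loss):    {post_median:.3f}")
print(f"mode   (0/1 loss):       {post_mode:.3f}")
```

For this particular posterior the three estimates come out close but not equal (mean 0.25, mode 0.20, median in between), which is exactly the point: the "best" single number depends on the loss you have chosen, not on the posterior alone.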