In Bayesian statistics, all inference about a parameter is based on the posterior distribution, including in the setting of a hypothesis test. Suppose we have two competing hypotheses, H1 and H2. Then the probability that H1 is true given the data is the posterior probability of H1, and the probability that H2 is true given the data is the posterior probability of H2. One straightforward way of choosing between H1 and H2 would be to pick the one with the higher posterior probability; in other words, you reject H1 if the posterior probability of H1 is smaller than the posterior probability of H2. However, since hypothesis testing is a decision problem, we should also consider a loss function.

Let's revisit the HIV testing example from earlier in the course. Suppose we want to test H1, the patient does not have HIV, versus H2, the patient has HIV. Note that these are the only two possibilities, so these are mutually exclusive hypotheses that cover the entire decision space. We define the loss function L(d) as the loss that occurs when decision d is made. The Bayesian testing procedure then minimizes the posterior expected loss. The possible decisions, in other words actions, in our two-hypothesis example are d1, choosing H1, that is, deciding that the patient does not have HIV, and d2, choosing H2, that is, deciding that the patient has HIV.

Suppose our decision is d1. We might be right, or we might be wrong. If we decide that the patient does not have HIV, and indeed they don't, then the decision is right and the loss associated with d1 is zero, in other words, no loss. If we decide that the patient does not have HIV, but they actually do, then the decision is wrong and the loss associated with d1 is some value w1. Now suppose our decision is d2. Again, we might be right or we might be wrong. If we decide that the patient has HIV, and indeed they do, then the decision is right and the loss associated with d2 is zero. If we decide that the patient has HIV, but they actually do not, then the decision is wrong and the loss associated with d2 is some value w2.

The consequences of making a wrong decision d1 or d2 are different. If the decision is d1 and it is wrong, then we decided that the patient does not have HIV when in reality they do. This is a false negative; potential consequences are no treatment and premature death, which are grave consequences. If the decision is d2 and it is wrong, then we decided that the patient has HIV when in reality they don't. This is a false positive; potential consequences are distrust and unnecessary further investigation. These consequences are certainly not ideal, but they are much less grave than the consequences of a false negative.

Let's put these definitions in the context of the HIV testing example with ELISA. The loss function for d1 can take on the values zero or w1, say 1,000, and the loss function for d2 can take on the values zero or w2, say 10. The values of w1 and w2 are arbitrarily chosen, but the important thing to realize is that w1, the loss associated with a false negative, is much higher than w2, the loss associated with a false positive. Remember from the earlier video that our patient had tested positive on the ELISA. In that video, we also calculated the posterior probability of the patient not having HIV given the positive ELISA result as approximately 0.88, and the posterior probability of having HIV given the positive ELISA result as the complement of this value, approximately 0.12.
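As a quick check, here is a minimal Python sketch of that Bayes' rule calculation. The prevalence, sensitivity, and specificity below are assumed values, not restated in this lecture; they are chosen so that the posterior comes out near the 0.12 quoted from the earlier video.

```python
# Posterior probability of HIV given a positive ELISA, via Bayes' rule.
# NOTE: prevalence, sensitivity, and specificity are assumed values
# (this lecture does not restate them); they are chosen so the posterior
# lands near the ~0.12 quoted from the earlier video.

prevalence = 0.00148    # assumed P(patient has HIV)
sensitivity = 0.93      # assumed P(ELISA positive | HIV)
specificity = 0.99      # assumed P(ELISA negative | no HIV)

# Total probability of a positive test:
# P(+) = P(+ | HIV) P(HIV) + P(+ | no HIV) P(no HIV)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

p_hiv_given_pos = sensitivity * prevalence / p_positive
p_no_hiv_given_pos = 1 - p_hiv_given_pos

print(f"P(HIV | +)    = {p_hiv_given_pos:.2f}")     # ~0.12
print(f"P(no HIV | +) = {p_no_hiv_given_pos:.2f}")  # ~0.88
```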
Then the expected loss for d1 can be calculated as the sum of the two posterior probabilities weighted by their associated losses. That is, the posterior probability of the patient not having HIV given the positive ELISA, times zero, since deciding that the patient does not have HIV would be the right decision in this case, plus the posterior probability of having HIV given the positive ELISA, times 1,000, the loss associated with a false negative: 0.88 × 0 + 0.12 × 1,000 = 120. We can similarly calculate the expected loss for d2: 0.88 × 10 + 0.12 × 0 = 8.8. Since the expected loss for d2 is lower, we should make that decision; that is, we should decide that the patient has HIV.

Note that our decision is highly influenced by the losses we assigned to d1 and d2. If the losses were symmetric, say both w1 and w2 were 10, then the expected loss for d1 would be 0.12 × 10 = 1.2, the expected loss for d2 would not change, and we would choose d1 instead. That is, we would decide that the patient does not have HIV.

To recap, Bayesian methodology allows losses to be integrated easily into the decision-making framework, and in Bayesian testing we minimize the posterior expected loss.
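To make the comparison concrete, here is a minimal Python sketch of the posterior expected loss calculation, using the approximate posterior probabilities quoted above. The expected_losses helper and the variable names are just for illustration, not course code.

```python
# Posterior expected loss for each decision; the Bayesian test chooses
# the decision with the smaller expected loss. The posteriors are the
# approximate values from the lecture.

p_no_hiv = 0.88  # P(no HIV | positive ELISA)
p_hiv = 0.12     # P(HIV | positive ELISA)

def expected_losses(w1, w2):
    # d1 = decide "no HIV": incurs loss w1 only if the patient has HIV
    loss_d1 = p_no_hiv * 0 + p_hiv * w1
    # d2 = decide "HIV": incurs loss w2 only if the patient does not
    loss_d2 = p_no_hiv * w2 + p_hiv * 0
    return loss_d1, loss_d2

for w1, w2 in [(1000, 10), (10, 10)]:
    l1, l2 = expected_losses(w1, w2)
    decision = "d1 (no HIV)" if l1 < l2 else "d2 (HIV)"
    print(f"w1={w1:4}, w2={w2}: E[L(d1)]={l1:5.1f}, "
          f"E[L(d2)]={l2:.1f} -> choose {decision}")

# Output:
# w1=1000, w2=10: E[L(d1)]=120.0, E[L(d2)]=8.8 -> choose d2 (HIV)
# w1=  10, w2=10: E[L(d1)]=  1.2, E[L(d2)]=8.8 -> choose d1 (no HIV)
```

The asymmetric losses make d2 the better decision, while symmetric losses flip the choice to d1, matching the discussion above.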