Up until now, we have looked at how we can study the intended effects of an intervention, focusing on how to obtain valid and generalizable estimates of its efficacy. But research into any unintended adverse effects that may result from the use of an intervention is also essential for clinicians, so that they can weigh up the pros and cons of the intervention for their patient. As with research into the intended effects of an intervention, research into the unintended effects is causal in nature. If a patient's health declines after receiving a certain intervention, it is essential that clinicians know whether the adverse effects were caused by the intervention itself, as this may lead them to discontinue the treatment. Furthermore, if a therapy is known to cause severe side effects, a clinician may decide not to start that intervention at all and instead seek an alternative.

Adverse drug reactions can have a range of presentations, and it is helpful to identify which kinds of effects we want to investigate, as this choice will determine the kind of study we need to conduct. A simple yet effective approach is to use the Type A and Type B classification system, originally proposed by Rawlins and Thompson in the 1970s. Type A side effects are relatively common and, to an extent, predictable. They are generally dose dependent: as the dosage of the intervention is increased, their severity gradually worsens. Type B side effects, on the other hand, are much rarer, and it can be very difficult to predict when an adverse reaction of this type will occur. They are also generally more severe and can often be life threatening. One example of a Type B side effect is an allergic reaction to a drug leading to anaphylactic shock, which, if left untreated, could be fatal.

So how can we go about studying the unintended effects of an intervention? In an ideal situation, we would study both Type A and Type B side effects within a single study, but as you can imagine, this is often not possible because of differences in their frequency of occurrence, and different approaches are needed to address these research problems.

Let's start with Type A side effects. Earlier, we began planning an intervention study to assess the efficacy of a new polypill in patients with a moderate risk of developing cardiovascular disease within the next ten years. While the results of early studies suggested that the drug is safe in humans, more information is needed about the frequency of common side effects it may cause, including dizziness, cough, and gastric bleeding, all of which potentially relate directly to the primary actions of the drug's components. Importantly, we want evidence that can establish whether any of these conditions are directly caused by taking the polypill. Once again, to address this question we take an epidemiological approach. We begin by defining the determinant of the study as the polypill, which we compare against a placebo pill, and we define multiple outcomes to assess the different Type A side effects. As with research into the intended effects of an intervention, to obtain valid findings we will need to ensure comparability at the level of the intervention, the level of outcome measurement, and the level of patient characteristics. And, as you have probably already guessed, a randomized controlled trial would be an excellent tool to help us achieve this.
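To make this comparison concrete, here is a minimal sketch in Python of how the frequency of a single Type A outcome, say dizziness, might be compared between the two trial arms; all counts are entirely hypothetical and chosen purely for illustration.

# Minimal sketch: comparing the frequency of a Type A side effect (dizziness)
# between the polypill and placebo arms of an RCT.
# All counts are hypothetical, chosen purely for illustration.
n_polypill, events_polypill = 2000, 160   # randomized to polypill; reported dizziness
n_placebo, events_placebo = 2000, 80      # randomized to placebo; reported dizziness

risk_polypill = events_polypill / n_polypill   # 0.08
risk_placebo = events_placebo / n_placebo      # 0.04

risk_difference = risk_polypill - risk_placebo   # absolute excess risk on the polypill
risk_ratio = risk_polypill / risk_placebo        # relative risk

print(f"risk difference = {risk_difference:.3f}, risk ratio = {risk_ratio:.1f}")

Because randomization makes the two arms comparable in prognosis, such a simple comparison of risks can be given a causal interpretation.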
It could even be a good idea to combine this study with our original study looking at the efficacy of the polypill, in which case we would treat the Type A side effects as secondary outcomes. Randomization ensures comparability of prognosis, which, in a non-randomized study, could differ between treated and untreated patients, because physicians may be reluctant to give the pill to patients who they expect would be particularly vulnerable to, for example, gastric bleeding. This would be the case for patients who require the use of an NSAID for other reasons. So, once again, an RCT is the design of choice for research into Type A unintended effects.

However, there are some circumstances in which an RCT is not the best type of study to assess adverse effects, nor even essential for valid comparisons, and this brings us to research into Type B side effects. After a few years on the market, clinicians begin to notice that a small minority of patients who were prescribed the polypill developed aseptic meningitis. Concern has been raised that aseptic meningitis may be a rare, allergic Type B side effect of the drug, possibly resulting from some form of interaction between the separate components combined in a single pill. We could consider setting up a post-marketing cohort study to address this, but we need to question whether prognosis is comparable for those treated and untreated, as discussed previously for intended effects. A particular feature of Type B effects, however, is that they cannot easily be predicted for the individual patient, and thus are not likely to have played a role in setting the indication for use in patients. Consequently, unlike Type A effects, which can be explained by the primary actions of the drug, patients who are or are not given the drug may well be comparable for the prognosis of aseptic meningitis. Even patients without any indication at all, and who therefore do not receive the drug, would be comparable for the risk of this particular outcome.

While in theory an RCT may give us the best chance of obtaining findings free of bias, if the side effect occurs in only, say, 1 in every 10,000 or 100,000 patients, we would need an extremely large trial to reach any firm conclusions, and a study of that scale simply would not be feasible. Given the lack of confounding by indication for this outcome, an observational study may provide an equally valid comparison. Yet, due to the infrequency of the outcome, a cohort study would still require a very large number of patients to be followed up over a period of several years. An alternative, more efficient approach is a case-control design, in which patients who develop the outcome are first identified, along with control patients from the general population, and their prescription histories are then checked. Given the comparability of prognosis, the validity of the findings from a cohort study and a case-control study would not differ, and the case-control approach yields the same results with a far smaller investment of time and money.

So we have seen that addressing the clinical problem of unintended effects can be challenging, especially when the side effects are rare. The approach you take to investigate unintended effects will depend greatly on the type of outcome that you are interested in studying.
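To put rough numbers on the two points above, the following sketch, again with made-up figures, first shows how few cases a large trial would be expected to produce for an outcome occurring in roughly 1 in 10,000 patients, and then computes the exposure odds ratio from a hypothetical case-control table in which prescription histories have been checked.

# Rough illustration with hypothetical numbers only.

# 1) Why an RCT is impractical for a very rare outcome:
incidence = 1 / 10_000                      # assumed risk of aseptic meningitis
patients_per_arm = 20_000
expected_cases = incidence * patients_per_arm
print(f"expected cases per arm = {expected_cases:.0f}")   # about 2 -- far too few to compare

# 2) A case-control comparison: start from the cases, sample controls from the
#    general population, then look back at exposure to the polypill.
cases_exposed, cases_unexposed = 30, 70          # patients with aseptic meningitis
controls_exposed, controls_unexposed = 100, 900  # population controls

odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)
print(f"exposure odds ratio = {odds_ratio:.1f}")  # about 3.9
# With a rare outcome, this odds ratio approximates the relative risk,
# obtained from roughly a thousand subjects rather than hundreds of thousands.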
And on a final note, while the categorization of unintended effects into Types A and B can be useful when defining an intervention research question, there are situations where unintended effects do not fall neatly into one of these categories. For example, a side effect may be dose dependent, and therefore predictable in principle, yet still uncommon. In such cases, care must be taken to choose an appropriate study design that takes into account any potential sources of bias.