Intervention is fundamental to daily clinical practice. Upon reaching a conclusion about a patient's likely diagnosis and prognosis, a clinician must decide whether some kind of medical intervention is needed, whether it be a therapy to treat a specific disease or some kind of preventative measure. If a certain intervention is indicated, the next question is: what will the patient's prognosis be if they receive the intervention compared to if they do not? To answer this question there must be evidence of the safety and efficacy of the intervention, and this kind of evidence can be acquired through sound clinical research.

For now, we will focus our attention on how we can study the intended effects of an intervention, that is, the efficacy of therapeutic or preventative measures. Research into the intended effects of an intervention is causal: the aim of the research is to know whether the intervention itself is responsible for any observed improvements in the health of its recipients.

Let's take the example of a potential new antiepileptic drug designed to manage seizures in adolescents. We decide to conduct a study to investigate whether the new drug reduces the frequency of seizures, and we restrict the domain of our study to adolescents only. In this case, we are interested in knowing whether patients who take the new drug will show any improvement in their symptoms of epilepsy. The determinant in our study is the antiepileptic drug, given in a prespecified mild dose, and the primary outcome is the number of seizures per day. We are also interested in studying cognitive function and seizure duration as secondary outcomes.

What kind of study design should we choose in order to properly address this research question? Before we can answer this, we first need to consider how we can minimize the main sources of bias that could threaten the validity of our findings. The key to validity in intervention research lies in ensuring that those who receive the intervention and those who do not are comparable in three areas: the study population, the intervention, and the outcome.

First of all, the prognosis of patients receiving different treatments must be comparable at the start of the study. If patients in the treated group are healthier overall than those in the untreated group before we begin our study, it is likely that we will overestimate the efficacy of our treatment. The converse also applies: if, for example, the patients in our treatment group are on average younger than the patients in the untreated group, and seizure frequency and severity generally decline with age, then any effects of the drug that we observe in our study will be confounded by age, and the treatment may appear to be less efficacious than it really is.
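To see how such confounding plays out numerically, here is a minimal sketch in Python. Everything in it is a made-up illustration, not data from any real trial: a hypothetical model in which baseline seizure frequency falls with age and the drug truly removes one seizure per day.

```python
# A made-up numeric illustration of confounding by age.
# Assumption: baseline daily seizure frequency declines with age,
# and the drug's true effect is to remove 1.0 seizure per day.

def expected_seizures(age, on_drug):
    baseline = 10.0 - 0.5 * age          # hypothetical age-frequency model
    return baseline - (1.0 if on_drug else 0.0)

treated_ages = [13, 14, 15]              # treated group is younger on average
untreated_ages = [16, 17, 18]            # untreated group is older on average

treated_mean = sum(expected_seizures(a, True) for a in treated_ages) / 3
untreated_mean = sum(expected_seizures(a, False) for a in untreated_ages) / 3

# The true effect is -1.0 seizures/day, but the age imbalance masks it:
print(treated_mean - untreated_mean)     # prints 0.5: the drug even looks harmful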
Next, we must ensure that there is comparability at the level of the intervention. Extraneous effects that are not directly related to the effects of the drug could introduce bias into our findings. For example, if patients are aware that they are in either the treated or untreated group, it may influence their behavior and, in turn, have an effect on their outcomes.

Finally, the way that we measure the outcome in the different treatment groups must be comparable. This not only implies that the same outcome definitions should be used for patients in the untreated and treated groups, but also that measurement of the outcome should not be influenced by knowledge of which treatment group a patient is in. For example, if researchers are aware that the patients they are observing have received the antiepileptic drug, they may be optimistic in the way they measure those patients' outcomes. This won't be as much of a problem for objective outcomes, such as the number of seizures per day, but could be a source of bias for more subjective outcomes, such as cognitive function.

So how can we go about achieving this comparability? We could consider conducting a cohort study, a popular choice of design for etiological research: we could observe a group of patients, some of whom are currently taking the treatment and some of whom are not, and record their outcomes. However, it is very unlikely that we would be able to achieve comparability at any of the levels we just discussed. Instead, we will need to address our research question in a more controlled setting.

In most circumstances, the optimal study design for research into the intended effects of an intervention is the randomized controlled trial, or RCT. An RCT can be seen as a tool for measuring the causal relationship between an intervention and an outcome, and, if designed correctly, should provide valid, unconfounded findings that can be directly translated to clinical practice. Unlike observational studies, RCTs are based on experimental design: treatment is allocated to recruited patients according to the protocol of the study, and not for any other reason.

So why should we choose to conduct an RCT? Well, an RCT is the most secure means of achieving comparability at the patient, intervention, and outcome levels. Arguably, the most powerful aspect of the RCT design is the randomization of intervention allocation. In intervention research we are generally interested in assessing the effects of an intervention on an outcome by comparing it with a different intervention. By making the allocation of the two interventions a random process, the use of the intervention should no longer be associated with the baseline characteristics or prognosis of the study's participants. Randomization is the most effective method for ensuring comparability of patients in different treatment groups, provided the groups are large enough, and, if done properly, should ensure that confounding bias is not an issue within the study.
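To illustrate why randomization achieves this, here is a minimal simulation sketch, again with invented numbers: we allocate a simulated cohort of adolescents to drug or placebo by a coin flip and check that the two arms end up with nearly identical age distributions. The cohort size and age range are illustrative assumptions only.

```python
# A minimal sketch of random allocation balancing a baseline characteristic.
import random

random.seed(42)

# Simulate 500 recruited adolescents aged 12-18 (illustrative numbers).
ages = [random.randint(12, 18) for _ in range(500)]

# Randomize: each patient is assigned to drug or placebo by a coin flip,
# independent of age or any other baseline characteristic.
arms = [random.choice(["drug", "placebo"]) for _ in ages]

def mean_age(arm):
    sample = [a for a, g in zip(ages, arms) if g == arm]
    return sum(sample) / len(sample)

# With groups this large, the mean ages come out nearly identical,
# so age can no longer confound the drug-outcome comparison.
print(f"mean age, drug arm:    {mean_age('drug'):.2f}")
print(f"mean age, placebo arm: {mean_age('placebo'):.2f}")
```

Note that the balance holds only on average and improves with group size, which is exactly why the lecture's caveat "provided the groups are large enough" matters.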
Now, how can we ensure comparability at the level of the intervention within an RCT? Because trials are conducted in a controlled setting, it may be possible to design them in such a way that patients and the clinicians giving the interventions are unaware of which intervention has been randomly assigned. One way of doing this is to compare the intervention of interest with a placebo control. The placebo and the intervention of interest should be indistinguishable to the patients and to the clinicians who administer them. This should prevent the observed effects of the intervention from being distorted by extraneous effects that could arise if the patients or the physicians knew which treatment was being given.

It is worth noting that there are some circumstances where a placebo control is not appropriate, for ethical or practical reasons. In addition, if we were more interested in assessing the overall effectiveness of a new intervention compared with current practice, use of a placebo would not be clinically relevant, because it could underestimate the effect in real life. Trials of this kind, without a placebo, are known as pragmatic clinical trials. For now, though, we will focus on explanatory clinical trials, where efficacy is our primary interest and a placebo control may be an appropriate means of obtaining valid results.

Lastly, we come back to the issue of comparability in the way that the outcome is measured in patients. A key feature of RCTs is the possibility of preventing the clinicians or researchers who observe the study outcomes from knowing which interventions the patients received. This is commonly known as blinding. In our antiepileptic drug trial, we could set up the trial in such a way that the treatment allocation is concealed from both the patients and the clinicians in charge of recording the study outcomes. This is commonly known as a double-blind approach.

So far we have focused solely on the importance of obtaining valid results. But for our research to be useful in clinical practice, we also need to consider whether it will lead to results that can be generalized to patients outside of our study. Ideally, we would like our research findings to be applicable to any patient within our domain of interest. Is this possible when using an RCT design? It certainly is, although extra care is needed when considering which patients to enroll in the trial. If we were to include only adolescents aged 16 to 18 in our antiepileptic drug trial, we might not have enough evidence in the end to make recommendations for younger adolescents. Therefore, it is very important that the populations from which we recruit patients, and the inclusion and exclusion criteria of our study, reflect the domain of interest and are not overly restrictive.

So is the RCT a panacea for all our methodological problems? I'm afraid not. There are, in fact, many circumstances where a trial design just isn't possible. This could be for ethical reasons: in some situations, it may be unethical to control treatment allocation in an experimental setting. There may also be circumstances where a trial is simply not feasible, for example, when the outcome of interest is very rare, so that a very large number of patients would need to be recruited into the trial (a rough calculation follows at the end of this section). So, while in most cases an RCT is the tool of choice for intervention research, alternative observational designs may sometimes be necessary. In those circumstances, too, it helps to keep the randomized design in mind in order to detect and prevent sources of confounding.
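Coming back to the feasibility point about rare outcomes, here is a rough sketch of the standard sample size formula for comparing two proportions. The outcome rates are invented purely to contrast a common outcome with a rare one; they do not come from the lecture.

```python
# Approximate sample size per arm for detecting a difference between two
# proportions: n = (z_a + z_b)^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2.
from statistics import NormalDist

def n_per_arm(p_control, p_treated, alpha=0.05, power=0.80):
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_b = z.inv_cdf(power)           # power threshold
    variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return (z_a + z_b) ** 2 * variance / (p_control - p_treated) ** 2

# Halving a common outcome (20% vs 10%) needs a modest trial...
print(round(n_per_arm(0.20, 0.10)))     # roughly 200 patients per arm
# ...but halving a rare outcome (0.2% vs 0.1%) needs enormous numbers.
print(round(n_per_arm(0.002, 0.001)))   # tens of thousands per arm
```

The same relative effect requires vastly more patients when the outcome is rare, which is why a trial can become infeasible even when it would be perfectly ethical.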