[MUSIC] Hi, in this module I want to talk about another important piece of conducting a survey: response rates, and, for longitudinal surveys, attrition and follow-up. The response rate refers to the share of the units included in the sample who actually complete the survey and yield usable data. Even if the sampling is done correctly, differential response rates may introduce bias into the resulting data. If certain types of sampled units are more or less likely to respond, the effect on the final dataset may be the same as if they had been over-sampled or under-sampled. So, for example, if people of a particular social class or a particular race are especially unlikely to respond to the survey, they may be underrepresented in the final dataset. Now, what can be even more subtle, and perhaps more problematic, are situations where certain types of people are less likely to respond but don't have obvious characteristics that would allow us to conduct some sort of reweighting to adjust for their under-response. That's a highly technical issue that I can't get into here, although there is a small sketch of the basic idea at the end of this module, for the simpler case where the characteristic driving non-response is actually observed. Now, if failure to respond is purely random, that's less of a problem: we just have a smaller dataset, but not necessarily a biased one.

The reason we need to pay attention to this is that around the world response rates are declining. Decades ago, response rates in many places were actually quite high, but in recent decades people have become steadily less willing to participate in surveys. This is especially true for telephone surveys. A few decades ago, opinion polling and other kinds of polling conducted by telephone were very successful. Back in the 1970s, the 1980s, and even the early 1990s, phone calls from telemarketers and scammers were relatively rare. If somebody called you at home, it was probably because they had a serious purpose. People were willing to answer the phone and then answer questions once the survey had been explained. Nowadays, of course, if you're like me, you may not answer the phone if you don't recognize the number, or you may be immediately suspicious of anyone who's asking questions. So the problem with response rates is especially serious when it comes to telephone polling, but there are more general problems as well.

Now, there is some controversy about the overall effects of declining response rates and what's to be done about them, and there are some studies suggesting that even low response rates are not necessarily fatal for a study. However, because it is generally impossible to determine whether non-response is random, that is, whether the people who refuse to participate differ systematically from those who do participate, we still prefer higher response rates. The reason we can't figure out how non-respondents differ from respondents is that, by definition, they declined to participate in the survey: we don't know anything about their characteristics, so we don't know if they differ. So generally, we do like higher response rates in a survey.

How do people maximize response rates? In a multistage cluster sample, like the one we talked about earlier, the targeted respondents are concentrated in particular blocks or particular neighborhoods, so we can saturate those neighborhoods with information about the survey ahead of the interviewers' visits. People who are running very large surveys with large budgets do a lot of preparatory work.
Once they've drawn the sample, they know which blocks and which neighborhoods they may be visiting. They may first distribute flyers, or they may even go door to door to say that a survey is on its way and explain it to people. Interviewers have to be trained to introduce themselves and the project; it's not sufficient to simply show up and demand that people participate in an interview. Typically there has to be some discussion of the origins of the project. Interviewers typically carry identification that includes contact numbers for the leaders of the research project, which allows respondents, or potential respondents, to contact the professors running the survey and confirm the details of the interviewer's explanation. Interviewers may also need to make repeated visits to the sampled households. So rather than knocking once on a door and then moving on to a different household if nobody is there, a survey with a large budget may have interviewers going back day after day, knocking again and again and seeking other ways to contact people, to try to maximize response. Respondents may be offered cash or gifts as incentives. Of course, this is rarely a large amount of money; it's more a sign of respect, a token of appreciation for the respondents taking time from their busy schedules to complete an interview.

Longitudinal surveys have some special considerations when it comes to maximizing response rates, and in particular, attrition is a major issue. Attrition refers to people who started out participating in the survey, who initially agreed to be interviewed and perhaps completed the first wave, but who are then lost in later waves. This may happen because the respondents move and can no longer be found, or because they simply decide to stop participating. This is a real problem if it isn't random. If it's random, you just have a smaller sample. But if it isn't random, you may be introducing bias, because the people who are observed repeatedly may be different from the people who were observed once and then dropped out of the survey. Researchers conducting longitudinal surveys therefore typically collect detailed contact information, not only for the respondents but also for the respondents' friends and relatives, so that they have a wide network of people they can try to contact if they cannot locate the original respondent in a later wave. A small sketch of how attrition can be tracked across waves appears below.

So these are just a few of the basic considerations that come up when we worry about response rates, attrition, and follow-up in surveys. In practice, some of these issues are more complex, and you may need additional training or additional reading to really get the details you need if you're going to conduct a study of your own.
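To make the earlier point about differential non-response concrete, here is a minimal sketch in Python that is not taken from the lecture: it simulates a population in which two groups respond at different rates. The group labels, response probabilities, and outcome values are all illustrative assumptions. The unweighted respondent mean turns out to be biased, and reweighting by the known population share of each group largely corrects it; as noted above, this kind of correction is only possible when the characteristic driving non-response is actually observed.

```python
import random

random.seed(0)

# Hypothetical population: two groups with different typical outcomes and
# different probabilities of agreeing to be interviewed (all values made up).
N = 100_000
population = []
for _ in range(N):
    group = "A" if random.random() < 0.6 else "B"      # 60% of people are in group A
    outcome = random.gauss(50, 10) if group == "A" else random.gauss(70, 10)
    p_respond = 0.8 if group == "A" else 0.4           # group B is much less likely to respond
    population.append((group, outcome, p_respond))

# Draw a simple random sample, then apply the response mechanism.
sample = random.sample(population, 2_000)
respondents = [(g, y) for (g, y, p) in sample if random.random() < p]

response_rate = len(respondents) / len(sample)
true_mean = sum(y for _, y, _ in population) / N
naive_mean = sum(y for _, y in respondents) / len(respondents)

# Reweighting: if the population share of each group is known (say, from a census),
# weight each respondent by (population share) / (share among respondents).
pop_share = {"A": 0.6, "B": 0.4}
resp_share = {g: sum(1 for gg, _ in respondents if gg == g) / len(respondents)
              for g in pop_share}
weighted_sum = sum(pop_share[g] / resp_share[g] * y for g, y in respondents)
total_weight = sum(pop_share[g] / resp_share[g] for g, _ in respondents)
weighted_mean = weighted_sum / total_weight

print(f"response rate: {response_rate:.1%}")
print(f"true population mean:       {true_mean:.1f}")
print(f"naive respondent mean:      {naive_mean:.1f}  (pulled toward group A)")
print(f"reweighted respondent mean: {weighted_mean:.1f}  (close to the truth again)")
```

Because group A is both over-represented among respondents and has lower outcomes, the naive mean understates the true population mean; the weights undo that imbalance. If the trait driving non-response were unobserved, there would be nothing to weight on, which is exactly the harder problem mentioned in the lecture.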
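And here is an equally small sketch, again with made-up respondent IDs and an illustrative age field, showing how wave-to-wave retention can be computed in a hypothetical three-wave panel, and how comparing an observed baseline characteristic of those retained with those lost gives a crude check on whether attrition looks random.

```python
# Hypothetical three-wave panel: respondent IDs and ages are made up for illustration.
wave1 = {101: {"age": 25}, 102: {"age": 62}, 103: {"age": 34},
         104: {"age": 71}, 105: {"age": 29}}
wave2_ids = {101, 103, 105}   # successfully re-interviewed in wave 2
wave3_ids = {101, 105}        # successfully re-interviewed in wave 3

def retention(baseline_ids, followup_ids):
    """Share of the baseline respondents re-interviewed in a later wave."""
    return len(baseline_ids & followup_ids) / len(baseline_ids)

baseline_ids = set(wave1)
print(f"wave 1 -> wave 2 retention: {retention(baseline_ids, wave2_ids):.0%}")
print(f"wave 1 -> wave 3 retention: {retention(baseline_ids, wave3_ids):.0%}")

# Crude check on whether attrition looks random: compare an observed baseline
# characteristic (here, age) for those retained versus those lost by wave 3.
retained_ages = [wave1[i]["age"] for i in baseline_ids & wave3_ids]
lost_ages     = [wave1[i]["age"] for i in baseline_ids - wave3_ids]
print(f"mean age, retained to wave 3: {sum(retained_ages) / len(retained_ages):.1f}")
print(f"mean age, lost by wave 3:     {sum(lost_ages) / len(lost_ages):.1f}")
```

In this toy example the respondents lost by wave 3 are noticeably older than those retained, which is a warning sign that attrition is not random and that estimates from the later waves may be biased toward younger people.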