So, focusing in more detail on one of the mixed-mode designs we just referred to, concurrent mixed-mode: the idea is that within a single sample, different subgroups are interviewed, or their data are collected, through different modes. So, for example, one might collect data via the web from those with Internet access and use mail for those without Internet access. The benefit is that it should reduce coverage and nonresponse error. The downside is that there may be mode effects on measurement, and they're confounded with subgroups, so it's hard to know whether the differences you're seeing between modes are due to real differences between subgroups or are due to the modes themselves.

Mode preference is an important construct in concurrent mixed-mode designs because one might aim to collect data from respondents in the mode, or in a mode, that they indicate they prefer. But actually measuring preference is a little bit tricky, because respondents tend to indicate that they prefer the mode in which their preferences are being elicited. So, in this table from de Leeuw, we can see that if respondents' mode preference was elicited by mail, they tended to report preferring mail, compared to face-to-face and telephone interviews. But if their preferences were elicited face-to-face, they tended to indicate they preferred face-to-face, and if they were asked in a telephone interview what their preferred mode is, they tended to indicate telephone.

Still, mode preference does seem to have some consequences for participation. In a study by Olson and colleagues, they measured mode preference in a 2008 telephone survey, asking respondents whether they preferred phone, web, or mail. Then in 2009, respondents were invited to a second survey and assigned to one of four modes, possibly their preferred mode. As you can see, the yellow bars indicate no mode preference, that is, the respondents' mode preference was not elicited, a kind of control group. The blue bars correspond to the respondents whose preference was elicited; the y-axis is response rate. If the respondents' data were collected via web in the second survey and they had indicated a preference for web, then they were reliably more likely to participate, so the blue bar is significantly higher than the yellow bar. This held for web only: when respondents indicated they preferred web and were interviewed by web, they participated at higher rates. Mail seems to be fairly robust to mode preference, so if respondents were interviewed by mail, response rates were high no matter what mode they had indicated a preference for.

Mode preference is important in this kind of design, as I indicated, but it's not the same as actually allowing or requiring respondents to choose a mode. Mode choice is a stronger version: collecting data from respondents in a mode that they not only show a preference for, but actually take the initiative to select. An important meta-analysis was conducted by Fulton and Medway, because it had been reported in a number of studies that giving respondents a choice, while believed to increase response rates, actually resulted in lower response rates.
So, Fulton and Medway looked across a number of studies that paired a mail invitation with either a mail questionnaire, that is, the paper questionnaire was included in the invitation letter, or completion by web, that is, the invitation letter contained a URL that respondents could enter into a computer to respond by web. They looked across 19 studies that used this type of mixed-mode design, in which half of the respondents were given this choice between mail and web, and the other half, generally, were only offered a mailed-out paper questionnaire. What they found was that in eight studies, response rates were reliably lower with a choice than without a choice, and in nine other studies the effect was in the same direction. So overall, there's pretty strong evidence that giving respondents a choice, requiring them to choose between a mailed-out paper questionnaire and the web, actually lowered participation. Only in two studies did the pattern go in the opposite direction, that is, only in two studies did response rates increase when respondents were given a choice.

The explanation that Fulton and Medway suggest is that this kind of choice increases complexity, making both options less attractive, and it creates a break in the response process if respondents decide to switch modes; that break can reduce the number of starts. So a respondent who intends to enter that URL into a computer may never actually get there, and that's the explanation they've offered. A kind of operational extension of this that they also suggest is that by pushing respondents to the web, they may encounter connectivity problems or usability problems that they wouldn't have encountered if they'd only been able to respond by paper.

So, there does seem to be an impact of choice, at least of this type, on nonresponse, but that doesn't mean there aren't advantages for measurement by requiring respondents to choose the mode in which their data are collected. Conrad et al. argued that choice, especially when it's easy to implement and especially if respondents choose a convenient mode, should improve performance. They contacted respondents on smartphones in one of four modes: what they call human voice, which is basically a telephone interview; human text, or SMS, in which a human interviewer texted survey questions and respondents texted their answers; automated voice, in which a recorded voice was played over the phone app and respondents spoke their answers; and automated text, in which a computer texted questions and respondents texted their answers. The respondents were required to choose one of these modes, and they could choose the mode in which they were contacted.

Essentially, what they found was an advantage for measurement: less measurement error, less rounding and straightlining. These are two measures that have been considered indicative of what's called satisficing, kind of taking mental shortcuts to reduce effort. Rounded numerical answers tend to indicate less complete, less thorough thought about the answer, so saying 100 rather than 97 may well reflect a kind of least-effort strategy, or an attempt to reduce effort by the respondent. Straightlining refers to selecting the same response option when a series of questions all use the same response scale, and the idea is that if respondents are giving the same answer to all of these questions, they probably aren't fully considering their answers. A minimal sketch of how these two indicators might be computed appears below.
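As a purely illustrative aside, not something taken from these studies, here is a minimal Python sketch of how rounding and straightlining indicators might be computed from response data. The example data, the column names, and the choice of 10 as the rounding multiple are all assumptions made for the illustration.

```python
# Minimal sketch of two common satisficing indicators: rounding and straightlining.
import pandas as pd

def rounding_rate(numeric_answers: pd.Series, multiple: int = 10) -> float:
    """Share of numeric answers that are exact multiples of `multiple`
    (e.g., 100 rather than 97), a rough proxy for rounded reports."""
    answers = numeric_answers.dropna()
    return (answers % multiple == 0).mean()

def straightlining_rate(grid: pd.DataFrame) -> float:
    """Share of respondents who chose the same scale point for every item
    in a battery of questions that share one response scale (rows are
    respondents, columns are items)."""
    return (grid.nunique(axis=1) == 1).mean()

# Hypothetical example: one numeric question and a 4-item battery on a 1-5 scale.
numeric = pd.Series([100, 97, 250, 30, 42])
battery = pd.DataFrame({
    "q1": [3, 5, 2, 4, 3],
    "q2": [3, 5, 2, 1, 3],
    "q3": [3, 5, 4, 2, 3],
    "q4": [3, 5, 3, 5, 3],
})

print(f"rounding rate:       {rounding_rate(numeric):.2f}")
print(f"straightlining rate: {straightlining_rate(battery):.2f}")
```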
In any case, when respondents chose their mode of data collection, they exhibited less rounding and less straightlining, so less satisficing by these two measures. Mode choice also reduced breakoffs, that is, abandoning the interview midstream, and it increased satisfaction, compared to when respondents were assigned a mode, that is, when they weren't given a choice. So in the next segment, we'll talk about the more common type of mixed-mode design, sequential mixed-mode studies.