A lot of the valuable data you get from a user test will come from talking to your participants. Through the interviews you conduct as part of user testing, you can learn a great deal about how users reacted to using your system and why they had the reactions they did. As with questionnaires, there are really two times when you're going to have conversations with your participants: before and after they perform the tasks.

A pre-test interview, an interview you conduct before the tasks, takes the place of the questionnaire you might otherwise administer. This is where you might ask about the user's expertise, their behaviors, or other characteristics. And you can ask quantifiable questions verbally: if you want data you can use to characterize your participants in numerical form, you can ask for it out loud rather than administering a questionnaire. You just need to make sure that you get a quantifiable response.

Let's look at an example. Say I want to know how many online purchases my participants have made, so that I can characterize them according to their expertise with online shopping. If I ask verbally, "How many online purchases would you estimate you've made in the past month?", I might get an answer like, "God, I don't know, a ton!" Well, that isn't really the number I'm looking for, is it? So I would follow up: "Would you say more than ten?" The participant might say, "No, not that many." Now I've learned that "a ton," for them, is not necessarily as much as I would have thought. "Would you say between 5 and 10, perhaps?" "Yes, that sounds about right. Probably 7 or 8." And then I, as the interviewer, would check the box on my interview form for the range 5 to 10.

A key consideration for a pre-test interview is to keep it short, just as with a pre-test questionnaire. The tasks are coming up next, and that's where we really want our participants to spend their time. We typically wouldn't do both a pre-test interview and a pre-test questionnaire; it's an either-or thing. The advantages of using an interview are that it builds rapport with your participant, helping them feel at ease, trust you, and feel connected to you, and that it can surface details beyond quantified measures, for example if your categories aren't quite right and you need to learn what kind of shopping they do or how they think about online shopping. The disadvantages are that an interview can take longer than administering a simple questionnaire, and that some questions are awkward to ask out loud, like age or income level.

Most of the time spent talking to our participants is going to come after they have completed the tasks we assigned to them. There are two kinds of interview questions we're going to ask. The first are debrief questions: questions we ask to follow up on the tasks we observed. We might follow up on places where they got stuck, places where they took a wrong turn and wandered into some part of the interface that wasn't relevant to the task they were trying to accomplish, places where they made errors, perhaps without even realizing it, and any questions they asked along the way that you didn't answer during the think-aloud protocol.
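A lightweight way to keep track of those follow-up points is to jot them down as structured notes while you observe, and then turn each note into a debrief prompt afterward. Here is a minimal sketch in Python; the `DebriefLog` structure, the category names, and the question templates are illustrative assumptions, not a standard instrument:

```python
from dataclasses import dataclass, field

# Things worth flagging while watching the tasks, per the debrief points
# above: stuck points, wrong turns, errors (noticed or not), and questions
# the participant asked that we deferred during the think-aloud.
CATEGORIES = {"stuck", "wrong_turn", "error", "unnoticed_error", "deferred_question"}

@dataclass
class Observation:
    task: str       # which task the participant was doing
    category: str   # one of CATEGORIES
    note: str       # what the facilitator saw or heard

@dataclass
class DebriefLog:
    observations: list = field(default_factory=list)

    def record(self, task: str, category: str, note: str) -> None:
        if category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.observations.append(Observation(task, category, note))

    def prompts(self) -> list:
        """Turn each logged observation into a debrief question to revisit."""
        templates = {
            "stuck": "You seemed to get stuck when {note}. What was going through your mind?",
            "wrong_turn": "At one point you went to {note}. What were you expecting to find there?",
            "error": "I noticed {note}. Can you walk me through what happened there?",
            "unnoticed_error": "Let's look again at {note}. Did the result match what you intended?",
            "deferred_question": "Earlier you asked, {note} Let's talk about that now.",
        }
        return [templates[o.category].format(note=o.note) for o in self.observations]

# Example: notes jotted down while watching an online-shopping checkout task.
log = DebriefLog()
log.record("checkout", "stuck", "choosing a shipping option")
log.record("checkout", "deferred_question", "'Can I save my cart for later?'")
for prompt in log.prompts():
    print(prompt)
```

Generating the prompts from a log like this keeps the debrief grounded in what actually happened during the session, rather than in what you happen to remember afterward.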
During the debrief, you're often going to want to replay the tasks. You can have participants step through a task again, asking them questions as they go, or asking what questions they have. During the actual task portion of the test, you might have told them, "I'm not going to answer questions; I'm not going to stop and talk to you about what's going on." The debrief is the opportunity to go back through, hear what questions they have, and ask the questions that you have. In some cases, it can be useful to show them the "right" way to do a task and gauge their reaction. If they struggled with a task, or did something in a way you didn't expect, you might show them how the designer of the system anticipated people would do it, and try to see why they did it differently, or where they didn't understand what you were trying to convey.

The other type of question you're going to ask after the tasks is general questions about their reaction to the experience of using the system. Here you might ask: What do you think the system does well? Where do you think the system most needs improvement? What, if anything, would you use this system for? That last one gets at perceived usefulness. Who do you think this system would be most valuable for? It's interesting if they say it would be most valuable for somebody besides them. If you had to explain to someone what the system does, what would you say? That tells you whether they actually understand what the system is trying to do and what it's supposed to be for. Have you used any systems that do similar things to what this system does, and how would you compare them? This gets at how they react to competitive products and where they see the pros and cons of your approach relative to the competition.

And remember, when asking these questions, what you're interested in is not just the simple answer but the why. Why do they think the system did something particularly well? Why do they think it needs to be improved in a particular way? Why would they use the system for a particular purpose, and why would they not use it for a purpose you might have intended? This is an opportunity to really get at how they think about the system and why they reacted the way they did.

An important final point: always leave time in the post-test interview to let the user tell you anything you might have forgotten to ask. You might say something like, "Well, I've asked you a bunch of questions now. Before we wrap up, is there anything else you think I ought to know about your reaction to the system or your experience today?" This gives them a chance to share other reactions they might have had, both positive and negative, that weren't part of the questions you had thought of in advance to ask them.
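Because you'll want to ask the same general questions, the same "why" follow-ups, and the same closing catch-all in every session, it can help to keep them together in a single guide. Below is a minimal sketch in Python; the `GUIDE` structure and the `run_post_test_interview` helper are illustrative assumptions, not an established tool:

```python
# A post-test interview guide: each general question is paired with a "why"
# follow-up, and every session ends with the same open-ended catch-all.
GUIDE = [
    ("What do you think the system does well?",
     "Why do you think it does that particularly well?"),
    ("Where do you think the system most needs improvement?",
     "Why does that improvement matter to you?"),
    ("What, if anything, would you use this system for?",
     "Why would you use it for that?"),
    ("Who do you think this system would be most valuable for?",
     "Why them in particular?"),
    ("If you had to explain to someone what the system does, what would you say?",
     "Why would you describe it that way?"),
    ("Have you used any systems that do similar things, and how do they compare?",
     "Why do you prefer one approach over the other?"),
]

CLOSING = ("Before we wrap up, is there anything else you think I ought to know "
           "about your reaction to the system or your experience today?")

def run_post_test_interview():
    """Walk the facilitator through the guide, collecting free-text notes."""
    notes = []
    for question, follow_up in GUIDE:
        notes.append((question, input(f"{question}\n> ")))
        notes.append((follow_up, input(f"{follow_up}\n> ")))
    notes.append((CLOSING, input(f"{CLOSING}\n> ")))
    return notes

if __name__ == "__main__":
    for question, answer in run_post_test_interview():
        print(f"{question}\n  -> {answer}")
```

Pairing each question with its follow-up in the guide itself makes it harder to forget to ask "why" in the moment.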
When conducting interviews, or user testing in general, there are a few biases we want to watch out for: demand characteristics, acquiescence bias, and confirmation bias.

Demand characteristics refers to the tendency of participants to give you what you want. This of course requires them to guess what you want, especially if you haven't told them explicitly, but a natural assumption is that you want them to like the system. You're showing them a system, perhaps one you built, and you want their feedback on it. They're more likely to tell you positive things about the system if they think you have an investment in it, and as a result participants can end up seeming more positive than they actually feel.

A closely related bias is acquiescence bias: the tendency to answer positively when somebody doesn't know the answer or doesn't really care about it. Not everyone actually has an answer to every question. If you ask, "What do you think the system did particularly well?" and they really have no idea, they might still give you an answer, but it may not be a very valuable one. In general, when people don't know or don't care about the answer to a question, it's socially easier to be positive than negative, so they're more likely to give you a positive answer even if they don't really have one to give. Again, as a result, participants will seem more positive than they actually feel; someone who actually feels neutral about something may come across as feeling positively about it.

And finally, confirmation bias is a bias that you, as the researcher or the person conducting the study, are likely to experience. People prefer evidence that confirms their beliefs. If you believe a system is usable and is going to perform well, you're more likely to look for evidence that confirms that belief than to notice evidence that goes against it. In general, we as humans seek out and pay more attention to information that confirms our beliefs and selectively ignore information that disconfirms them. A consequence is that designers of a system, especially, can have a hard time seeing its flaws and hearing negative feedback about it.

There are a few things we can do to address these sources of bias. To address demand characteristics and acquiescence bias, it's important to stress clearly to your participants what it is you want, and hopefully what you want is honest feedback. It's also important to notice when an answer seems forced, when somebody gave you an answer just because they thought you wanted one, as opposed to an honest answer that really reflects their feelings. To counteract confirmation bias, it's ideally best to have user tests run and analyzed by disinterested third parties: people who are not directly connected to the design or development team and don't have a stake in the outcome of the test. That's not always feasible or desirable, however. When a test is going to be run by somebody who is a member of the design or development team, or connected to it in some way, it's important to be aware of the tendency toward confirmation bias and to do what you can to honestly consider the evidence before you.

Including interviews in your user test allows you to understand your participants' reactions to using your system. Most importantly, you will want to interview participants after they complete the tasks you've designed, so that you can better understand any issues they encountered during the tasks and learn about their overall reaction to the experience. And it's important to be aware of the potential biases that both you and your participants bring to the user test, so that you can be sure to get the best insights possible.