Good to see you. We've been talking a bunch about user research, and I want to bring up a really important topic, and that is the topic of ethics and consent in user research. Ethics and consent? You might wonder what this has to do with designing interfaces, but ethics are critically important whenever you're involving people in what you do, whether you think of it as research or work. And ethical obligations are often also legal obligations: that you not harm people, that you don't embarrass or shame them, that you don't put their jobs in jeopardy, that you don't cause them financial or emotional harm. That you respect their right to choose whether to participate, and whether and when to stop participating. And that when you collect data on them, they understand what you're collecting and why, how it's going to be used, and how you're going to manage things like recordings. Just to give one simple example: suppose we're in an environment where we're trying to understand where people have trouble with their existing solution to a problem. Say we have a financial system and people are having trouble entering payroll, and we ask, would you mind if we record your use of the payroll system? Somebody might say, why? Well, we want to figure out what's hard about it so that we can make it better. And they say, yeah, that sounds good. What we forgot to tell them was: and we're going to post videos of people having trouble using the payroll system on a public site to try to convince everyone we need a new payroll system. And won't it be great when people see and recognize you making mistakes on the payroll? Well, no, that's not great. That's something that can get somebody fired or shamed. Now, I know you would never do this, I would never do this. I'd love to say no one would ever do this, but unfortunately, people have done this.
And that's why we're going to spend a couple of minutes on some principles and some practical issues in ethics and consent. In the United States, we tend to anchor most of our discussions of ethics with human subjects to a report known as the Belmont Report, which grew out of a national commission that was charged in the wake of some highly publicized unethical research. Most famously, the Tuskegee Syphilis Study, where researchers chose to study syphilis and its progression in a set of African-American men rather than treat it and cure them, because they were interested in understanding what the disease did. This created an uproar, because here you have medical professionals denying people treatment and letting a disease progress in ways that may cause them unknowable harm. Out of the Belmont Report, and the national and in fact global consensus that you can't just take advantage of people, came three key principles, and these principles, in different versions, still underlie the ethics of human subjects work. The first is respect for persons. This is really about autonomy and consent. People are not there as an instrument for you to get done what you're trying to do. They are autonomous, they're deserving of courtesy and respect, and they have a right to choose, in an informed way, whether or not to be part of whatever it is that you're doing. We'll come back to the idea of informed consent in more detail. The second principle is beneficence: minimize harm, maximize benefit. This is especially relevant in biomedical studies where, in testing a drug, we have to make sure it's being tested at a low enough dose that if it's harmful, it doesn't cause serious harm. But minimizing harm turns out to be a big deal in every kind of interaction where you're using people to study something, or studying people. And the final one is the principle of justice, or fairness to all.
This comes up in certain kinds of studies in making sure that you don't only study one population if the thing you're going to create from it goes out to many populations. This is a principle that's been used, for instance, to prevent people from doing medical studies only on healthy young men, resulting in the fact that we don't know how a drug might perform on women or the elderly when they might need it. You could make the same argument that if you're designing a user interface system for people who use public transit, make sure you're studying the behaviors, the capabilities, and the usage of people who use public transit. Don't go off and say it's more convenient to use other people, and end up with a system not designed for the people you're actually trying to serve. With those three principles, the US created a regulatory framework, and even though the Belmont Report was focused on US biomedical research, the same principles are found all over the world. Canada's Tri-Council Policy Statement addresses, in different language, slightly different issues, and raises certain things to greater prominence, for instance how to deal with First Nations peoples and the integrity of tribes, but it rests on basically the same core ideas. India has a set of guidelines on Institutional Ethics Committees for human research that look a lot like the same ideas you'd find elsewhere in the world. There are guidelines in China. There are guidelines in Finland. Pretty much anywhere you go, you'll find some guidelines. In some cases these are only medical; in other cases they've been extended to nonmedical research, perhaps in part due to examples of harm in non-medical studies, like the famous Stanford prison experiment, where people volunteered not really knowing what they were getting into.
They were assigned roles of either prisoner or guard, locked up or locking others up, and this led to some pretty harmful outcomes. There were also significant concerns raised about some of the compliance experiments, where people suffered psychological harm from believing they were administering shocks to others, who turned out to be actors pretending to be shocked. There are lots of examples that are not specifically medical. Some of this may feel very far away from studying users in their context, but the principles work out very much the same, so I want to suggest three key concepts and a couple of points on how you might put them into practice. The first is simply: do no harm. Minimize risk. Think about, and either eliminate or reduce and disclose, possible sources of harm. How might a recording be misused or reused? I think about some of the earliest cases when people were studying telephone interfaces, and I've seen recordings where you'd have people struggling mightily to do something like transfer a call. Those are remarkably compelling for making the case that the interface had to change, but they could also be terribly embarrassing if they went out and were pitched in a different way: look how dumb this person is who can't transfer a call. So what can you do about that? Can we avoid recording people's faces? Can we make them fuzzed out and unidentifiable? Can we delete the recordings when we're done? What are we going to do with work logs? What are the risks of employer retribution if we're studying somebody's work and they become less productive, or they tell us things their employer doesn't want us to know? There are risks of embarrassment, or of violations of privacy or confidentiality. And consider also the context beyond the individual: could we be harming an organization or a community?
A number of people I know have tried to study how to build systems to support recovery groups, substance abuse groups, cancer survivors. How do we make sure we're not hurting the group while we're figuring out how we can help them? That comes into issues of cultural and societal expectations and norms, and working with people who really understand the group and can help you understand what's appropriate and what isn't when you're dealing with them. Point number two is informed consent. Give the people you're studying the information and time needed to decide whether to participate. No pressure. Consider whether they can really volunteer to participate. Are they minors who need parental consent? Do they have reduced capacity? Is there coercion? It's not okay to say, your boss said I could study you, without also coming back and saying, wait a minute, how do I make sure this person really does want to be studied and I'm not violating them ethically? How do we make sure the right information is disclosed? That doesn't mean you have to say everything. You don't have to say, gee, I'm trying to figure out how often you use the mouse, because that might lead somebody to change their behavior. But it's reasonable to say, I'd like to understand how you use your computer to get your work done, and I'm going to be watching how you do that and recording what you do, to better understand whether we could design the system to be more efficient. How do we make sure their questions are answered? And, perhaps most important, how do we honor the fact that consent is not irrevocable? As the process goes on, they may say, you know what, I'm uncomfortable with this, and we have to avoid possibly coercive measures that would stop them from quitting. You don't say, gee, I put all this effort into trying to study you, you can't quit now.
You can't say, gee, if you'd kept going I'd be giving you this huge compensation payment, but you get nothing because you quit just before the end. You've got to make sure that people are not coerced, that they really want to do it. The third is the notion of ethical review. In many cases, depending on the company and the country, you may have to have your plans for user research reviewed by others before you can start. In some countries, there are entities that have human subjects review boards; in the US you'll sometimes hear of these as IRBs, or institutional review boards. In some countries this is only for medical work; in some countries it's only for research funded by a government; in some places it's all work of a certain type. Many larger companies also have internal review processes before they'll allow anyone to contact customers, study them, or even study their own employees. You need to understand the context in which you are working. And even when there's no formal review, you may benefit from having somebody knowledgeable about the kinds of things you're doing review your plans ahead of time, to make sure they're ethically comfortable that you're not causing harm, that you're not doing something you really shouldn't be doing. More broadly, recognize that ethics are not just about meeting regulations; they're about the spirit of doing the right thing. So if you notice that somebody's becoming increasingly uncomfortable or agitated, don't wait for them to withdraw consent; address the issue proactively. It's appropriate to say to somebody, hey, you're looking like you're not comfortable with me here, would you rather I stopped? Get help recognizing potential harms in advance. Frequently, if you're not an expert in the work environment, you need somebody who is. And be careful about setting expectations.
Something that's been arising more commonly recently is the idea that somebody who participates in any way along the process of something being designed might be entitled to a share of whatever comes out. This has happened in cases where people's DNA was used to come up with new drugs. But you could imagine it coming up when you're developing a new app. Somebody's there and you say, hey, can I talk with you about your use and watch you for a while to help design this new app? And suddenly you come out with the new app, and they say, hey, I helped design it, where's my cut? Make sure you're clear up front. If you want to be generous and say, by the way, if we succeed I'm going to acknowledge you, or if we succeed I'm going to give you a free copy, that's great. If you want to make them a partner, that's great, though I don't think you'll often do that. But if your plan is to say, look, I just want to make it clear, you're doing this as volunteer work, or you're doing this and I'm taking care of getting you lunch, but you don't get anything else, make that clear up front so that you don't have problems later. So in this lecture, we've talked about ethics and consent in user research. We'll come back to this topic of ethics and consent when we talk about usability testing and user studies in the evaluation course. But we wanted to deal with ethics right up front, because you're going to be going out and learning about people, and we want you to do that in the most ethical and harm-free way possible.