The last step in thinking about these above-the-line issues for your decision support, or whatever your intervention is, is: is it used, and does it accomplish its goals? Just because you build it, they will not necessarily come, and you need to know whether they're seeing it. Do they see it enough? Do they respond to it? Do they respond to it the way you want them to respond to it? Remember that desired action in the [inaudible] framework. What is your threshold? What is enough? And if not, why not? Those are the three issues of workflow, usability, and effectiveness.

So, workflow. Remember the five rights: right place and right time, by placing it in the workflow. If you deliver your decision support after the decision is made, thank you very much, don't waste my time. If you give the decision support before the decision needs to be made, you're confusing me; wait a minute, I'm not ready. So, getting it in at just the right point is really tricky. If you come into the office and I examine you, and I come up with a diagnosis and a treatment plan in my head, and then the machine says, "By the way, did you consider diabetes?", I'm not going to be very happy with this machine, because I know what I'm thinking about. You need to tell me when I'm thinking about it, not after the fact. Similarly, if I've ordered a medication for somebody who's allergic, why are you telling me afterwards? Why didn't you tell me while I was doing it? Why are you even allowing me to prescribe penicillin when the patient is allergic? Why do I have to find out after I've put in the order?

Then there's interruption. Yes, you may be giving me the decision support at the right time, but if my attention is being drawn away, it's really bad news. A simple case of this is when I'm looking at an order set and I have to scroll several pages: there's a drug order I want to put in three pages down, but whether or not the patient is allergic is two pages back, and I have to scroll back, look to see what I would use, and then scroll forward again. All of you know that when you use Word, as soon as you go to another page you've totally forgotten why you went there. So, that is an interruption induced by the system itself. A more classic sort of interruption is the case where the nurse gives the wrong medication to the wrong patient and everybody yells at her, "How could you be so stupid as to give that drug to that patient?" Well, when they look back at what the nurse was doing, it turns out that between when she got the medication from the front desk and when she went to the office, she was interrupted 50 times. So, no surprise that mistakes happen when you're interrupted.

For usability, there is this ridiculous acronym, I-MeDeSA. These are different aspects of usability that you could consider when doing a critique of a user interface. I won't go through them all in detail, but I will point out that color is on the list, and it's not the only thing; placement of items is not the only thing either. Learnability and confusability have nothing to do with specific widgets but rather with the interface as a whole: can a user figure out how to use it, and are they easily confused by things going on on the screen? The proximity of task components being displayed relates to what I've said about interruption: even if it's only my eyes that have to wander, having to look at different parts of the screen is itself an interruption, and it has a corrective action. And think of the number of times I've complained that you have an OK button, but what the [inaudible] does that mean, or a dashboard where there are no buttons at all.
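To make the workflow-timing point concrete, here is a minimal sketch of running an allergy check while the order is still being composed rather than after it is signed. Everything here (Patient, check_allergy, compose_order) is a hypothetical illustration, not any vendor's API or the system described in the lecture.

```python
# Minimal sketch: check an allergy at order-composition time, the "right time"
# in the workflow, instead of telling the clinician after the order is placed.
from dataclasses import dataclass, field
from typing import Optional, Set


@dataclass
class Patient:
    name: str
    allergies: Set[str] = field(default_factory=set)


def check_allergy(patient: Patient, drug: str) -> Optional[str]:
    """Return a warning message if the drug matches a documented allergy."""
    if drug.lower() in {a.lower() for a in patient.allergies}:
        return f"{patient.name} has a documented allergy to {drug}."
    return None


def compose_order(patient: Patient, drug: str) -> bool:
    """Run the check while the clinician is still composing the order."""
    warning = check_allergy(patient, drug)
    if warning:
        # Stop here, before the order is placed -- not after the fact.
        print(f"ALERT: {warning}")
        return False
    print(f"Order placed: {drug} for {patient.name}")
    return True


if __name__ == "__main__":
    pt = Patient("Jane Doe", allergies={"penicillin"})
    compose_order(pt, "penicillin")    # blocked while ordering
    compose_order(pt, "azithromycin")  # goes through
```

The design choice the sketch illustrates is simply where the check sits: inside the ordering step, so the clinician never has to find out about the allergy after the order has been submitted.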
Finally, in terms of effectiveness: you can measure effectiveness, but it's tough to do, because [inaudible] looking at long-term outcomes, which may not be easily available, and you need to be able to attribute them to the actions or inactions taken in response to the decision support. But you can use process measures, like the actions, as opposed to the outcomes or objectives of the [inaudible] framework. I do need to point out that in the literature, when people do these examinations as research, they find that despite 40-plus years of work, in only about two-thirds of studies where they tried to remind somebody to do something did the decision support have an impact, and when it did have an impact, it was only on the order of 10-20 percent. If you want to force an action, as pop-up alerts do, they are effective only about one-third of the time, and again only by 10-20 percent. Now, there are individual stories of a huge impact, a great impact, and they are wonderful, but we think we understand the design principles required to get to that point; that's everything I've been talking about so far today, and you'll hear more as we talk about putting things together.

Here are some of the things that make things go better: automatic prompting to use the system, in other words pushing rather than pulling, and the use of home-grown systems. In the literature there have been a number of centers that built their own information systems, and those systems are well tuned to their environment, so it's no surprise that they do very well; the developers, over the decades, figured out how to reduce the false positives without increasing the false negatives. The sad story is that we are all now on commercial systems that lack that fine tuning, and the next 10-20 years will be spent catching up to where we used to be with those. It's an important point that only a few centers ever had those home-grown systems. I already mentioned pushing, and providing clear recommendations of what needs to be done.

So, in total, we've gone through a fair amount on whether you should put in place decision support or any one of those other interventions that we saw from the WHO, and, if you were to put decision support in place, we went through a bit of how to test it and then how to assess it. The next chunk we need to address is how you actually do it, and we'll be turning to that next.
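As a concrete illustration of the process measures mentioned above (are they seeing the alert, and are they taking the desired action?), here is a minimal sketch over a simple alert log. The log structure, field names, and the 80 percent threshold are illustrative assumptions, not values from the lecture.

```python
# Minimal sketch of process measures for an alert: view rate and desired-action
# rate, compared against a locally chosen threshold ("what is enough?").
alert_log = [
    {"alert_id": 1, "seen": True,  "desired_action_taken": True},
    {"alert_id": 2, "seen": True,  "desired_action_taken": False},
    {"alert_id": 3, "seen": False, "desired_action_taken": False},
    {"alert_id": 4, "seen": True,  "desired_action_taken": True},
]

fired = len(alert_log)
seen = sum(1 for e in alert_log if e["seen"])
acted = sum(1 for e in alert_log if e["desired_action_taken"])

view_rate = seen / fired      # do they see it, and do they see it enough?
action_rate = acted / seen    # do they respond the way you want them to?

THRESHOLD = 0.80              # illustrative target, set locally

print(f"Viewed {view_rate:.0%} of alerts; desired action on {action_rate:.0%} of viewed alerts")
if action_rate < THRESHOLD:
    print("Below threshold: ask why not -- workflow, usability, or effectiveness?")
```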