We've talked about the basic design of a service: the idea that it will wait on a semaphore or message queue until a request is made for it, and that the request should come from an interrupt service routine, from a hardware interval timer, from a software interval timer, or perhaps from a software signal, which is essentially a software interrupt. Given that scenario, and we've looked at this response timeline a few times, we know that once a request is made, execution really should proceed without any blocking (we like to use the word blocking). The only thing we really expect is some interference from higher priority services, and that's expected; RMA models that. But we don't expect to block on additional I/O or memory resources. We expect to just have interference, be redispatched, and continue execution.

All of our input is at the beginning, and our output is at the end. We are given a semaphore, or provided a message in a message queue, to say we've got input data available that we can process. Ideally that data is already sitting in memory, and we can write our results to memory as well, to ultimately be sent out to some device as the response, some actuator for example. We can show this in the diagram here, just as we've done before. Now, the question is: what happens if, in addition to the CPU, we need some resource like I/O or shared memory? We could get blocked right in the middle of our execution. This is generally not a good situation, so frowny face. If that happens, we've got a problem. What we'd really like is for all of our initial input from sensors to come in at the beginning, really before the request, and for the output to go out after the computation is complete. There should be no intermediate blocking; that's the design we want. We see it here as a state machine, but I'll draw it more simply.
It basically says: we've got a service, and it will try to take a semaphore, SemA, let's say, which an ISR will give. When the service gets the semaphore, it goes off into a state where it's processing, and then it comes back to the state where it takes the semaphore again, and it does this over and over indefinitely. The only time it really blocks is in this S1 blocking state: either it's blocking, or it's been requested, or it's done and goes back to blocking; otherwise it's processing. That's the normal, standard design for a service. It's a simple state machine, really just two states at the highest level: waiting for a request, or processing that request.

The problem is if there's any blocking over here in the processing side, where we go into a wait state. Now we've got a problem: if we need to wait on something like some other semaphore, or let's say a mutex, before we can go back to our processing state, that's going to be an issue, and we'll try to avoid that.

The simple diagram for this is: we have sensor electronics, things like analog-to-digital converters, or cameras like we're using in our project. They provide input to something like a digital control law or a machine vision processing service. You read that input as it becomes available on a periodic basis, as a periodic service; that's the "data available" event (data avail), and then there's the corresponding output. We just have this loop: the service is either in a wait state or it's processing, and then it goes back to the wait state, so it just keeps looping here. There's no blocking anywhere else on this loop, which I keep emphasizing. That's a really simple service: there's no intermediate I/O latency, and there's no need for synchronization while it's running. It just synchronizes with data being available, processes it, and then goes back to waiting for a new request for service. That would be the ideal service, where waiting for the request is the only blocking we'd really like to have to deal with.
But we'll talk about other possibilities now and things that can make this more challenging.