[MUSIC] Hello, and welcome to the first real lesson in real-time system development. We start off by defining the framework within which we design real-time systems, and first the most basic concept, the task structure. The study of real-time systems is the planning and scheduling of workload on a processor, so that the timing guarantees for the workload are never violated. We quantify the workload into discrete pieces, which we refer to as tasks.

Tasks are illustrated as boxes, and the length of a box illustrates the time it takes to execute the code inside the task. To illustrate at what point in time such a task can be executed, we use a timeline. The timeline goes from 0 to a larger value and represents the wall-clock time of the system execution. When a task is scheduled, it is placed on the timeline based on the decision of the scheduler.

When we think of the functionality of a task in this course, we will refer to it as a C-style function containing C code. This is what the processor actually does when a task is scheduled on the timeline. There is a basic block, usually a loop, some functionality to loop over, and a delay at the end to determine the period of the task. The functionality of the task will be executed over and over again based on the loop structure. We can differentiate between the loop iterations by adding the concept of a job, which is an instance of a task: a new job is created in every loop period. We indicate a job with the letter J followed by two numbers. The first number tells us which task the job belongs to, and in this case it is 1. The second number tells us which instance of the job is currently scheduled, so here you see it goes from 1 to n.

The most common type of task used in real-time systems is the periodic task, which releases a new job every x time units. We call the time at which a task is created and can start to execute the release time of the task. At this point the task will start to generate jobs, which will request access to the CPU.

Now, to be able to specify periodic tasks, we must learn a couple of parameters. The three most important ones are P, E, and D. P stands for period; it is the time interval between releases of jobs in a task. E is the execution time of the task; simply speaking, how long the box is. The longer the execution time, the more CPU resources are needed, and if the execution time becomes too large, the CPU gets overloaded and cannot handle all the tasks within their time limits. Speaking of time limits, the parameter D tells us the deadline of a task: the latest time at which a task must be completed. The deadline can be defined as either absolute or relative. An absolute deadline is always measured from time 0, while a relative deadline is measured with respect to the release of the job. In case no deadline is given, it is assumed that the relative deadline is equal to the task period.
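To make the loop-and-delay structure described above concrete, here is a minimal C sketch of such a task. This is only an illustration: do_work and delay_until_next_period are hypothetical placeholder names, since the actual delay primitive depends on which operating system is in use.

    /* Hypothetical helpers, assumed to be provided elsewhere. */
    void do_work(void);                 /* the task's actual functionality */
    void delay_until_next_period(void); /* OS-specific sleep primitive     */

    /* Skeleton of a periodic task: each loop iteration is one job,
       so jobs J(1,1), J(1,2), ..., J(1,n) are created, one per period. */
    void task1(void)
    {
        for (;;) {
            do_work();                  /* runs for the execution time E */
            delay_until_next_period();  /* enforces the period P         */
        }
    }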
Let's now have a look at how a system can be scheduled. We start with a timeline without any tasks scheduled, and we would like to schedule three periodic tasks called T1, T2, and T3. T1 has a release time of 0, which means that jobs from T1 can start to be generated immediately when the system starts up. T1 has a period of 3 and a deadline of 4, so when a new job is generated from T1, its deadline is always 4 steps ahead. T2 has a release time of 1, so it must wait until this time before jobs can start to be generated. It has a period of 10 and a deadline of 9, so jobs from T2 must actually be completed 1 step before the next job is released. T3 is released at time 5, and it has a period of 5 and a deadline of 2, so there are only 2 milliseconds to complete a T3 job once it has been released. The execution times of the tasks are illustrated by the lengths of the boxes: T1 has an execution time of 1, T2 has an execution time of 2, and T3 has an execution time of 0.5.

We can start by looking at the release times of the tasks. T1 is released at time 0, T2 is released at time 1, and T3 is released at time 5. We also put out the deadlines for the first job in each task: D1 is at time 4, D2 is at time 10, and D3 is at time 7. So we can make an example schedule here with the first T1 jobs. T1 recurs every 3 milliseconds, so it could be scheduled, for example, like this, and the first job of T1 is clearly not violating its deadline D1. T2 can be put, for example, from time 1 to time 3, and its deadline is still okay. T3 is released only at time 5, but we still have space left there, so it does not violate its deadline either. This here was an example of a successful schedule.

A periodic task is formally defined in the order period, execution time, and deadline. But we also have some more terms to define. A task has a feasibility interval, telling us in which time window a job must execute and be completed. If we have a task T1 with a period of 5 and a deadline of 4, then the feasibility interval of its first job is between 0 and 4. Additionally, we can define a phase, which is a time shift of the first job in the task; it is denoted with the Greek letter phi. If we add a phase of 1 to a task with a release time of 0, then its first job is released at time 1 at the earliest. An unpredictable shift in the release time is called jitter, and it can be expressed as a range from r- to r+. Jitter can in many cases cause problems, because the predictability of the schedule is decreased, and you basically don't know exactly when you can schedule a task.

Let's define some periodic tasks T1, T2, and T3, and assume that we have some good scheduling algorithm in place, one that is able to schedule all the jobs between 0 and 60. Such a schedule will make the release pattern start to reappear after a certain time. So here we start with T1, T2, T3, T1, T2, T3, and so on, and in this schedule this exact release pattern will start to repeat after time 60. We then know that if we extend the schedule to time 120, it will look exactly like a copy of the schedule from time 0 to 60. The time period after which the pattern starts to repeat itself is called the hyperperiod H, and it is in fact the smallest common multiple of all the periods. This can be very useful for determining whether a schedule will keep all the deadlines forever, and not only for the first x time steps. The hyperperiod is calculated as the least common multiple of the individual periods of all the tasks. So if we have a task set with T1 with a period of 3, T2 with a period of 4, and T3 with a period of 10, then we calculate the following: we list the multiples of each period until we find the smallest multiple they have in common. For the period of 3, we have 3, 6, 9, 12, and so on. For the period of 4, we have 4, 8, 12, 16, and so on. And for 10, we have 10, 20, 30, and so on. The smallest multiple that is common to all these periods is then 60, which is our hyperperiod.
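Because the hyperperiod is simply the least common multiple of the periods, it is also easy to compute programmatically. Here is a small C sketch (the function names gcd and lcm are my own) that computes H for the example task set with periods 3, 4, and 10:

    #include <stdio.h>

    /* Greatest common divisor via Euclid's algorithm. */
    static unsigned gcd(unsigned a, unsigned b)
    {
        while (b != 0) {
            unsigned t = b;
            b = a % b;
            a = t;
        }
        return a;
    }

    /* Least common multiple of two periods. */
    static unsigned lcm(unsigned a, unsigned b)
    {
        return a / gcd(a, b) * b;
    }

    int main(void)
    {
        unsigned periods[] = { 3, 4, 10 };
        unsigned h = periods[0];
        for (int i = 1; i < (int)(sizeof periods / sizeof periods[0]); i++)
            h = lcm(h, periods[i]);     /* fold LCM over all periods */
        printf("Hyperperiod H = %u\n", h);
        return 0;
    }

Running this prints Hyperperiod H = 60, matching the result we found by listing the multiples by hand.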
The key points learned in this lesson are how to define workload and functionality in a real-time system. A task is a component containing the software functionality to execute, and periodic tasks are usually defined by period, execution time, and deadline. And above all, a task must never violate its deadline.
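As a recap of the task model, these defining parameters can be collected in a small C structure. The field names below are illustrative choices of mine, not taken from any particular real-time operating system:

    /* A periodic task described by its timing parameters. */
    struct periodic_task {
        double phase;      /* release time of the first job       */
        double period;     /* P: time between job releases        */
        double exec_time;  /* E: execution time of one job        */
        double deadline;   /* D: relative deadline, defaults to P */
    };

    /* The example task set from this lesson. */
    struct periodic_task t1 = { 0.0,  3.0, 1.0, 4.0 };
    struct periodic_task t2 = { 1.0, 10.0, 2.0, 9.0 };
    struct periodic_task t3 = { 5.0,  5.0, 0.5, 2.0 };

With this representation, the absolute deadline of a task's first job is phase + deadline, which reproduces D1 = 4, D2 = 10, and D3 = 7 from the scheduling example.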