Right, so we need a step in our proof showing that, if you take the limit of n over p(n), and that limit is bigger than our bound, then the actual limit we are looking for is also bigger than the bound. Okay, let's have a look. This is the theorem. The first thing we do is assume that our limit of n over p(n) is indeed bigger than the bound, and then we want to reach the conclusion that the limit we are actually looking for is also bigger than that bound.

Okay, so what does it mean to say that the limit we are looking for is bigger than the bound? If you remember the definition of the limit, it is the same as saying that, for every epsilon bigger than 0, there must exist a maximal antichain such that, for every point past that antichain, the ratio between the number of tokens at that point w and the time at that point is larger than 1 over the MCM minus epsilon. So let's assume we have some epsilon bigger than 0; we then have to pick such an antichain in some way.

Okay, how can we do that? Well, given an epsilon bigger than 0, we can of course pick an n such that, for every m bigger than n, m divided by p(m) is bigger than 1 over the MCM minus this epsilon. We can do that because of the limit we assumed above; this is basically the definition of that limit, the fact that such an n exists, so we can pick one. Once we have picked that n, we can also pick a maximal antichain, namely the antichain consisting of the points of all executions at a particular point in time: p(n), the worst time at which at most n tokens have come out. So we have an antichain here saying: this is a certain point in time, namely that point in time.

Now, if we pick any point after that antichain, say w over here, then what do we know about w? Well, by construction of the antichain, we know that the number of tokens at w is at least n, because p(n) is a point in time after which we are certain to have n tokens, so at this point there must be at least n tokens. It is also easy to see that the time at this point is at least p(n). Okay, so if we now take m equal to σ_P(w), that is, we call the number of tokens there m, then we find that σ_P(w) divided by σ_T(w) is at least σ_P(w) divided by p(σ_P(w)), basically because p gives the worst-case time for that number of tokens, so that worst time is at least the time at this point. That is equal to m divided by p(m) by the definition of m, and that is larger than 1 over the MCM minus epsilon because, well, that is exactly what we already computed over here.

Right, so now let's get rid of all these drawings again. This proof says that if n over p(n) goes above 1 over the MCM, then our output will also rise above that bound. Okay, but apparently I forgot something. What did I forget? There are actually two assumptions you have to make before this proof works out. Can you find them? The most important one, I'll give it to you, is over here: we have secretly assumed that I could find a maximal antichain such that all points in the antichain are at one and the same time.
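As an aside, before we look at that hidden assumption, here is the chain of inequalities of this step written out. This is only a sketch in ad-hoc notation, so the symbols need not match the formal development exactly: I write σ_P(w) for the number of tokens at a point w, σ_T(w) for the time at w, and p(n) for the worst-case time at which at most n tokens have come out.

\[
\begin{aligned}
&\text{Given } \varepsilon > 0,\ \text{pick } n \text{ with } \frac{m}{p(m)} > \frac{1}{\mathrm{MCM}} - \varepsilon \ \text{ for all } m \ge n,\\
&\text{and let } A \text{ be the maximal antichain of all execution points at time } p(n).\\[2pt]
&\text{For any } w \text{ past } A:\qquad \sigma_P(w) \ge n
\quad\text{and}\quad \sigma_T(w) \le p(\sigma_P(w)),\\[2pt]
&\text{so, with } m := \sigma_P(w) \ge n:\qquad
\frac{\sigma_P(w)}{\sigma_T(w)} \;\ge\; \frac{\sigma_P(w)}{p(\sigma_P(w))}
\;=\; \frac{m}{p(m)} \;>\; \frac{1}{\mathrm{MCM}} - \varepsilon .
\end{aligned}
\]

Since epsilon was arbitrary, this is exactly the definition of the limit we are after being at least 1 over the MCM.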
So, back to that hidden assumption about the antichain. It assumes that, as you go through the prefix order, time is not only monotone, not only rising all the time, but that there are also no jumps in time. Because if, in one execution, there were a jump in time, then the points in between could not be touched by a crosscut. So this is a continuity principle: we assume that time is continuous throughout the executions of our interpretation of the graph, and we did not do so before. So we have to add this assumption before we can accept this proof.

Okay, and the other one, which I actually mentioned briefly in the beginning, is that p(n) is actually defined. This is also not always the case. Suppose, for example, that we have an input that, along its different branches, delivers tokens later and later: in one branch the first token comes in at one second, in the second branch at two seconds, then three seconds, four seconds, and so on. If we have an infinite number of branches, then the first token comes in later and later, and maybe in each of the branches the average throughput is going to be fine, but p(1) is not even going to be defined. So we have to be a bit careful about what kind of inputs we accept, and if we accept the right inputs, then p(n) will actually be defined for all n, for all productions and consumptions in the graph.

So, in summary: time needs to be continuous, and this is the definition of continuous: for every point in time, there is an antichain that hits that point in time. And for the input, the worst-case response time for n tokens also needs to be defined for all n. If we have that, then we can continue with our proof.
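To wrap up, in the same ad-hoc notation as in the sketch above, the two extra conditions could be phrased roughly as follows; this is my paraphrase, not necessarily the exact formal statement used in the proof.

\[
\begin{aligned}
&\textbf{Continuity of time:}\quad \text{for every time value } t \text{ reached by some execution, there is a maximal}\\
&\qquad\text{antichain (crosscut) } A \text{ with } \sigma_T(v) = t \ \text{ for all } v \in A;\\[4pt]
&\textbf{Totality of } p:\quad \text{for every } n,\ \ p(n) \;=\; \sup\{\, \sigma_T(v) \;\mid\; \sigma_P(v) \le n \,\} \ \text{ exists (is finite)}.
\end{aligned}
\]

The branching-input example above is exactly such a case: there the supremum for n = 1 is unbounded, so p(1) is undefined, and the second condition rules that input out.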