We will now study a very important property of concurrent objects called linearizability. A concurrent object is really an object that you might have in a sequential program that has been made "thread safe", meaning it is safe for multiple threads to operate on it. It could be an atomic integer, a concurrent queue, or a concurrent hash map. It has some set of operations, op1, op2, op3, and the question is: what return values are permissible when multiple threads make these calls in parallel, given that we know what is correct for a sequential execution?

To understand this, let's use an example where time goes in the vertical dimension, and let's say we have two threads, T1 and T2, operating on a concurrent queue. T1 performs an operation, ENQ(x), and T2 performs another operation that overlaps with it in time, ENQ(y), and then performs a DEQ operation. Now the question is: what value should DEQ return? You might say it should return x, because ENQ(x) started first. But depending on how these operations are implemented, for example with optimistic concurrency and a retry loop, it is possible that ENQ(x) only really took effect later, and ENQ(y) took effect earlier. So it is also permissible for DEQ to return y.

The way we reason about this is to talk about a linearization of the operations. For each operation on the concurrent object, which is like a method call, we consider the time interval from its start, when it is invoked, to its return. And we allow ourselves the liberty of picking any point in that interval where the operation could have taken effect. So in this case, we assume that ENQ(y) took effect at some point, and ENQ(x) took effect soon after. If this were a sequential program, there would be absolutely no doubt that DEQ should return y, because y was enqueued before x.
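To make the scenario concrete, here is a minimal sketch in Python; the thread structure and the use of `queue.Queue` as a stand-in for a concurrent queue are my own illustrative assumptions, not code from the lecture. Depending on how the scheduler interleaves the two threads, DEQ may observe either x or y:

```python
import queue
import threading

q = queue.Queue()   # a thread-safe FIFO queue standing in for a concurrent queue
result = {}

def t1():
    q.put("x")      # ENQ(x): may take effect before or after ENQ(y)

def t2():
    q.put("y")                # ENQ(y)
    result["deq"] = q.get()   # DEQ: overlaps T1's ENQ(x) in time

a = threading.Thread(target=t1)
b = threading.Thread(target=t2)
a.start(); b.start()
a.join(); b.join()

# Either answer is a valid linearization; "empty" is impossible here,
# because ENQ(y) completes before DEQ is invoked on the same thread.
assert result["deq"] in ("x", "y")
print(result["deq"])
```

Note that `q.get()` can never report the queue as empty in this execution: by the time T2 calls it, T2's own `q.put("y")` has already returned.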
However, it is possible that another implementation had the opposite order, in the sense that ENQ(x) took effect earlier and then ENQ(y). In that scenario, DEQ should return x. So which is correct? Well, both are correct. And if you have ever wanted to practice law, here is your chance. The way to look at it is that the concurrent objects are your clients, and you, in evaluating their concurrency, are their lawyer, following the rule of innocent until proven guilty: you try to find at least one case that can get your client off the hook.

So if you have one friend who implemented a concurrent queue and, in this scenario, the implementation had DEQ returning y, you can make the case that this could be correct, because ENQ(y) could have taken effect before ENQ(x). If you had another client who said, "well, in my implementation, DEQ returns x," you could say that could also be true, because we can imagine a scenario, given the overlapping time intervals, where ENQ(x) took effect before ENQ(y). However, if you had a third client who said, "well, in my implementation, the call to DEQ returned empty," now you are really going to have a hard time, and in fact you will not be able to help that client. Given the scenario, you know that at least one element must have been enqueued before the DEQ occurred, because the DEQ occurs on thread T2 after at least ENQ(y) has completed. So while DEQ returning x or y would both be plausible linearizations, there is no possible linearization of this execution in which DEQ returns empty, because that would require the DEQ to take effect before both of the ENQ's. So now you get the idea: if you have semantics for the correct sequential execution of an object, you can extend that, using linearizability, to define the semantics of concurrent execution.
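The lawyer's reasoning above can be mechanized. Here is a small brute-force checker, my own sketch rather than anything from the lecture: it records each operation's invocation and return times, tries every total order that respects real-time precedence (if one operation returned before another was invoked, it must come first), and replays each order against a sequential FIFO queue.

```python
from itertools import permutations

# Each operation: (invoke_time, return_time, name, value).
# Times encode the lecture's scenario: ENQ(x) and ENQ(y) overlap,
# and DEQ starts on T2 only after ENQ(y) has returned.
history = [
    (0, 4, "enq", "x"),   # T1: ENQ(x)
    (1, 2, "enq", "y"),   # T2: ENQ(y)
    (3, 5, "deq", None),  # T2: DEQ, result to be checked
]

def linearizable(history, deq_result):
    """Is there a total order of the operations that (1) respects
    real-time order and (2) matches sequential FIFO semantics, with
    DEQ returning deq_result ('empty' meaning the queue was empty)?"""
    for order in permutations(history):
        # (1) if b returned before a was invoked, b must precede a
        ok = all(not (b[1] < a[0])
                 for i, a in enumerate(order) for b in order[i + 1:])
        if not ok:
            continue
        # (2) replay the candidate order against a sequential FIFO queue
        fifo, outcome = [], None
        for _, _, name, value in order:
            if name == "enq":
                fifo.append(value)
            else:
                outcome = fifo.pop(0) if fifo else "empty"
        if outcome == deq_result:
            return True
    return False

print(linearizable(history, "x"))      # True: ENQ(x) can linearize first
print(linearizable(history, "y"))      # True: ENQ(y) can linearize first
print(linearizable(history, "empty"))  # False: DEQ cannot precede ENQ(y)
```

The only real-time constraint in this history is that ENQ(y) precedes DEQ, which is exactly why "empty" has no valid linearization while x and y both do.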
And the way you do that is to say an execution is linearizable if you can find some mapping of each operation to an instant in time, within its invocation-to-return interval, where it could have taken effect, such that the resulting linearization is consistent with the object's sequential semantics. The goal for everyone who implements concurrent data structures is to make sure that all executions of the data structure are linearizable. And implementers use whatever techniques they can, such as locks, isolated constructs, actors, and optimistic concurrency, to ensure correctness while giving the maximum performance.
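As one example from the implementation side, here is a minimal lock-based queue, again my own illustration rather than code from the lecture. With a single lock, every operation takes effect at the instant it executes inside the critical section, so that instant serves as its linearization point and every execution is linearizable, though at the cost of serializing all operations.

```python
import threading

class LockBasedQueue:
    """A FIFO queue where every operation runs atomically under one
    lock; the moment inside the critical section is its
    linearization point, so all executions are linearizable."""

    def __init__(self):
        self._lock = threading.Lock()
        self._items = []

    def enq(self, value):
        with self._lock:   # linearization point: while holding the lock
            self._items.append(value)

    def deq(self):
        with self._lock:   # linearization point: while holding the lock
            return self._items.pop(0) if self._items else "empty"

q = LockBasedQueue()
q.enq("y")
q.enq("x")
print(q.deq())  # prints "y": FIFO order of the linearization points
```

Techniques like optimistic concurrency aim for the same linearizability guarantee without this serialization, which is exactly why, as in the ENQ(x)/ENQ(y) example, an operation's effect may land anywhere within its invocation-to-return interval.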