Now, when a process Pi receives a release message from process Pj,

recall from the previous slide that a release message is sent

when Pj has just exited the critical section,

and it informs all of Pj's voting set members that Pj has exited the critical section.

Pi looks at its queue of waiting requests.

Remember that Pi may be not just in Pj's voting set, but

also in the voting sets of other processes.

And these other processes may be waiting for a vote from Pi.

But if Pi's queue is empty, it means that Pi doesn't have any outstanding requests,

and so it sets its voted variable to false, and that is all it does at this point in time.

However, if there's something in the queue,

then Pi dequeues the head of the queue, say a process Pk.

Remember that this means that Pk's voting set contains Pi and

that Pk has previously sent a request to Pi, which was queued.

And the queueing happened here in the past or

right at the top of the slide over here.

That's where the queueing happened.

And in this case, the entry for Pk is dequeued and a reply message is

sent to Pk, meaning that Pi has now voted for Pk to enter the critical section.

And so Pi sets its voted variable to true.
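The release-handling steps just described can be sketched in a few lines of Python. This is a minimal sketch, not a full implementation; the names `MaekawaNode`, `on_release`, `voted`, and `request_queue` are illustrative assumptions, not from any particular codebase.

```python
from collections import deque

class MaekawaNode:
    """Sketch of one process's voting state in Maekawa's algorithm."""

    def __init__(self, pid):
        self.pid = pid
        self.voted = False            # has this process granted its one vote?
        self.request_queue = deque()  # queued requests from processes whose
                                      # voting sets include this process

    def on_release(self, sender):
        """Handle a RELEASE from a process that just exited its critical section."""
        if not self.request_queue:
            # Queue is empty: no outstanding requests, so reclaim the vote.
            self.voted = False
        else:
            # Dequeue the head of the queue, say Pk, and vote for it.
            pk = self.request_queue.popleft()
            self.send_reply(pk)
            self.voted = True

    def send_reply(self, target):
        # Stand-in for actually sending a REPLY message over the network.
        print(f"P{self.pid} -> REPLY -> P{target}")
```

Note that `voted` stays true in the second branch: the vote is immediately handed to the next waiting process rather than reclaimed.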

So why does this algorithm guarantee safety?

Well, when a process Pi receives replies from all its voting set members Vi

including itself, no other process Pj could have received replies from

all its voting set members Vj.

Because Vj would have an intersection with Vi in at least one process Pk.

And Pk can send only one reply, or vote, at a time.

And it has already sent its vote to Pi, which means that it could not have voted for

Pj as well.

And so this means that safety is guaranteed by the Maekawa Algorithm.
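The safety argument rests on the pairwise-intersection property of the voting sets, which can be checked directly for a small example. The voting sets below are a hypothetical assignment for N = 4, built the way Maekawa suggested: arrange the processes in a 2-by-2 grid and give each process its row plus its column.

```python
from itertools import combinations

# Hypothetical voting sets for N = 4 (2x2 grid: each set = own row + own column).
voting_sets = {
    0: {0, 1, 2},
    1: {0, 1, 3},
    2: {0, 2, 3},
    3: {1, 2, 3},
}

# Safety requires every pair of voting sets to share at least one process Pk.
for i, j in combinations(voting_sets, 2):
    common = voting_sets[i] & voting_sets[j]
    assert common, f"V{i} and V{j} must intersect"
    print(f"V{i} and V{j} intersect in {sorted(common)}")
```

Since any Pk in the intersection holds a single vote, it cannot simultaneously appear in two granted quorums.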

For liveness, a process needs to wait for

at most N-1 other processes to finish the critical section.

And you might think this guarantees liveness.

However, Maekawa's Algorithm has a subtle behavior in its original form

that can actually violate liveness because it can result in a deadlock.

Here is an example of a deadlock.

Again, here is an example with four processes P1, P2, P3, and P4.

Suppose all the four processes request access to the critical section.

P1 gets some replies but is waiting for P3's reply.

P3, in turn, is waiting for P4's reply, which is in its voting set.

P4 is waiting for P2's reply and P2, in turn, is waiting for P1's reply.

Now, we've completed a cycle among these processes, and

this is called a deadlock.

There is not going to be any more progress in the system,

because these processes are going to be waiting for these replies forever.
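The wait-for cycle just described can be made concrete with a small sketch. The names `waiting_for` and `find_cycle` are illustrative; each process here waits on exactly one missing reply, as in the example above.

```python
# Each process waits for one reply, as in the deadlock scenario above.
waiting_for = {"P1": "P3", "P3": "P4", "P4": "P2", "P2": "P1"}

def find_cycle(start, waiting_for):
    """Follow wait-for edges from `start` until a process repeats."""
    seen = [start]
    cur = waiting_for[start]
    while cur not in seen:
        seen.append(cur)
        cur = waiting_for[cur]
    return seen + [cur]

print(" -> ".join(find_cycle("P1", waiting_for)))
# P1 -> P3 -> P4 -> P2 -> P1  (the cycle closes: deadlock)
```

Because every edge in the cycle is a wait that can only be released by the next process in the cycle, none of the four can ever proceed.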

So the original Maekawa's Algorithm as we've discussed is deadlock-prone, and

these deadlocks can occur.

There are of course variants of Maekawa's Algorithm that have been published

that address this issue and that are free from deadlocks.

Returning to the original Maekawa's Algorithm, let's analyze its performance.

The bandwidth involved in entering the critical section is square root of

N messages that you send out to your voting set members,

and then square root of N replies that you receive back, so

that's 2 times the square root of N messages.

For the exit operation, you're simply sending a release message to all your voting set

members, that's just square root of N messages.

These numbers are better than Ricart and

Agrawala's Algorithm, which had a bandwidth of order N.

You might think square root of N is still a large number.

It's not constant for sure, but it can be fairly small.

So if N is a million for instance,

then the square root of N value is about 1,000, which is a fairly small number.
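These message counts are easy to verify with a few lines of Python; the variable names are illustrative.

```python
import math

N = 1_000_000
k = math.isqrt(N)       # voting set size is about sqrt(N) = 1000

enter_msgs = 2 * k      # sqrt(N) requests out + sqrt(N) replies back
exit_msgs = k           # one release message to each voting set member

print(enter_msgs, exit_msgs)  # 2000 1000
```

Even at a million processes, each entry into the critical section costs only a couple of thousand messages, versus on the order of a million for an O(N) scheme.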