We can now plug into the formulas. First, we plug in the coupled equations for tau and the collision probability; solving numerically, tau turns out to be 0.0765. That is, 7.65% is the contention probability, the probability that a single station will transmit in a given slot.
And this then leads to the calculation of P_tr and P_s, which, together with all those constants, leads to the calculation of S.
The exact blown-out form of the formula is in the textbook, or you can just verify it through your own calculation.
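To make that numerical step concrete, here is a rough Python sketch of a Bianchi-style fixed point and the resulting S. The parameter values (W = 32, b = 5 backoff stages, a 12,000-bit payload, and the slot, success, and collision durations) are illustrative assumptions rather than the lecture's exact constants, so the tau it prints will differ from the 0.0765 quoted above.

```python
def solve_tau(n, W=32, b=5):
    """Numerically solve the coupled equations for tau (per-slot transmit
    probability) and p (conditional collision probability), Bianchi-style.
    W (minimum contention window) and b (backoff stages) are illustrative."""
    def g(tau):
        p = 1.0 - (1.0 - tau) ** (n - 1)        # collision prob. seen by a sender
        rhs = 2.0 * (1.0 - 2.0 * p) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** b))
        return tau - rhs                         # zero at the fixed point
    lo, hi = 1e-9, 1.0 - 1e-9                    # bracket the root in (0, 1)
    for _ in range(100):                         # plain bisection
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def throughput(n, W=32, b=5, L=12000, sigma=9e-6, Ts=300e-6, Tc=250e-6):
    """Aggregate saturation throughput S in bits/s. L (payload bits), sigma
    (idle slot), Ts/Tc (success/collision durations) are placeholder values,
    not the lecture's exact constants."""
    tau = solve_tau(n, W, b)
    P_tr = 1.0 - (1.0 - tau) ** n                  # some station transmits
    P_s = n * tau * (1.0 - tau) ** (n - 1) / P_tr  # ...and exactly one does
    E_slot = (1 - P_tr) * sigma + P_tr * P_s * Ts + P_tr * (1 - P_s) * Tc
    return P_s * P_tr * L / E_slot

print("tau(n=10) =", round(solve_tau(10), 4))
print("S(n=10)   =", round(throughput(10) / 1e6, 1), "Mb/s")
```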
So now, we're going to look at this S as a function of a few things. First of all, as a function of n, the number of user stations, or in other words the impact of crowd size.
So now I'm plotting S, in megabits per second, against n here.
If you look at the aggregate as a function of n, it first goes up and then goes down. The going up is easy to understand, because there are more stations transmitting. But it quickly starts to go down.
This is the point where, basically, the tragedy of the commons kicks in so much that adding more users will reduce even the total throughput across all users. And this happens around eight users. And if you look at the total throughput divided by n, that is, S over n, the average per-station throughput, then that actually always goes down.
It never goes up. Why? Because adding more users adds more interference.
What is important is that it goes down so rapidly. As you go from, say, two or three users up to ten or fifteen users, it drops from around 25 megabits per second. Notice it is not 54, because 54 is the physical-layer speed; after the overhead, it comes down to about 25 as the realistic speed. That is a theme we'll pick up in the next lecture.
In today's lecture, notice the shape of this drop: it drops very rapidly, to the point of going down all the way to only one or two megabits per second.
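Reusing the throughput() sketch from above, a quick sweep over the crowd size reproduces this shape qualitatively; the exact peak location and the absolute numbers depend on the assumed placeholder constants.

```python
# Sweep the crowd size n, reusing the throughput() sketch above.
for n in range(2, 31, 2):
    S = throughput(n) / 1e6                  # aggregate throughput, Mb/s
    print(f"n={n:2d}  aggregate={S:5.1f} Mb/s  per-station={S / n:5.2f} Mb/s")
# Aggregate S rises, peaks at a small n, then falls; S/n falls monotonically.
```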
So no wonder that in a busy hotspot the average per-station throughput is so low: despite all the smart ideas, CSMA random access manages the commons in a very inefficient way. Now, a few more charts. For example, we can
also measure S as a function of aggressiveness.
One way to look at aggressiveness is to look at the minimum contention window size, W_min, the amount you may have to wait up to. Now we see that, for different sizes of the crowd, initially, if you make W_min bigger, meaning you make this contention less aggressive, then the throughput actually goes up.
Okay? That's very good.
But at some point it will go down, because you are so non-aggressive that you're actually wasting idle resources in the network. So there's a point beyond which being more polite actually hurts your throughput.
Again, very typical of a cocktail party: you don't want to be too aggressive, but if you're too non-aggressive then you're just wasting time slots.
And as the crowd gets bigger and bigger, we see that the range of W_min before the curve starts to bend down becomes longer and longer. That means, as the crowd gets bigger and bigger, it pays more and more to reduce aggressiveness.
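Here is a small sweep over W_min, again reusing the throughput() sketch from above with its illustrative constants, that exhibits this rise-then-fall behavior.

```python
# Sweep the minimum contention window W_min for a small and a large crowd,
# reusing the throughput() sketch above.
for n in (5, 30):
    for W in (8, 16, 32, 64, 128, 256, 512):
        print(f"n={n:2d}  W_min={W:3d}  S={throughput(n, W=W) / 1e6:5.1f} Mb/s")
# S first rises with W_min (fewer collisions), then falls once stations are so
# polite that slots sit idle; the peak sits at a larger W_min for the larger crowd.
```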
Another way to look at aggressiveness is to look at the maximum number of backoff stages that you allow. As you make b bigger, you tend to increase the average contention window size and therefore become less aggressive, and you'll see a similar behavior here.
As the crowd becomes bigger and bigger, the impact is more prominent: the throughput actually becomes bigger as you become less aggressive. The impact of b, however, is less prominent than the impact of W_min.
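A tiny sketch of why a larger b means a larger average contention window, assuming the usual doubling rule in which the window at backoff stage i is 2^i times W_min, up to stage b:

```python
# Contention window at backoff stage i: W_i = 2**i * W_min, for i = 0..b.
# Allowing more stages (larger b) lets the window keep doubling after repeated
# collisions, so the average wait grows and the station becomes less aggressive.
W_min = 32
for b in (3, 5, 7):
    windows = [W_min * 2 ** i for i in range(b + 1)]
    print(f"b={b}: windows={windows}  worst-case mean backoff ~{(windows[-1] - 1) / 2:.0f} slots")
```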
Finally, S as a function of the payload size L. We were talking about somewhere around here, getting around 25 megabits per second. Now you see a monotonically increasing curve, because more payload means less overhead, relatively speaking.
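For instance, reusing the throughput() sketch from above with its per-slot overheads held fixed (a simplifying assumption of that sketch), sweeping L gives exactly this kind of monotone increase.

```python
# Sweep the payload size L (bits), reusing the throughput() sketch above.
# With the success/collision durations Ts and Tc held fixed, more payload per
# channel acquisition means relatively less overhead, so S only goes up.
for L in (1000, 2000, 4000, 8000, 12000):
    print(f"L={L:5d} bits  S={throughput(10, L=L) / 1e6:5.1f} Mb/s")
```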
But this is a misleading picture, because, remember, all the way back early in the lecture we did not model the actual interference or collision phenomena accurately. As the payload gets bigger and bigger, the chance of collision actually goes up, because the chance of two packets overlapping in time goes up as it takes longer to transmit the payload.
If you incorporate that factor, this curve will actually start to bend over and go downward. So in summary, what we have seen is that in Wi-Fi, interference management is done through random access rather than through power control, et cetera. And a big part of the reason is that it's operating in the unlicensed spectrum.
There are a few very good ideas, including randomized and exponential backoff, differentiated wait-and-listen intervals, and limited explicit message passing. That last one, RTS/CTS, by the way, is not always enabled, and that may also explain some of the throughput inefficiency in hotspots. But it's got a big limitation.
Now, we went through a relatively simple, but still somewhat involved, approximation of the throughput, and we saw that this throughput per station, as a function of n, drops rapidly: the performance decreases very fast as the contention intensifies, even as we go up from several users to just, say, ten or fifteen users.
And this is the underlying reason why the performance in a hotspot tends to be poor, unless it so happens that you don't have a large crowd. And we have seen a fundamentally different way to do distributed coordination in taming this tragedy of the commons. So now we're going to wrap up our wireless lectures with one more lecture on a very practical and important question: what is the actual speed that I can expect on my cellular, LTE or 3G, network? That will be the next lecture. I'll see you then.