You're basically going to say: wait for the store; you sort of put a barrier in.
So you're not going to get multiple rollbacks over and over again. This is kind
of a conservative approach for when you see you're rolling back a lot and
throwing away a lot of work. So instead, you can just say, well, this load, I
think, is dependent on that store. You can potentially even remember that
across instruction sequences. So if it's always the case that, let's say, this
load is dependent on the store, you could potentially keep that in your
instruction cache or something like that. Then, when you go to execute that
same load at a different time, it's going to wait for all the stores to
complete; it acts as a barrier. This is kind of a prediction.
It's just a heuristic, but it's one way you could potentially keep that load
from always replaying. And if you have multiple loads dependent on one store,
you could potentially keep those loads from having to replay multiple times.
But the big advantage here is that when you go to re-execute that load some
time in the future, you're not going to flush the entire pipe.
So, this is kind of like branch prediction, if you will.
It's like: I've trained the predictor to say that this load usually depends on
the previous store, so just wait for all the other stores to clear out of the
pipe.
So, it's a cute little trick. Okay.
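To make the idea concrete, here is a minimal Python sketch of this kind of memory dependence prediction. All of the names here are mine, not from the lecture, and the structure is greatly simplified: a table remembers which load PCs have been rolled back before, and a remembered load waits for older stores to drain instead of issuing speculatively again.

```python
# Hypothetical sketch of a memory dependence predictor: a table,
# indexed by load PC, that remembers "this load usually depends on an
# earlier store". A trained load acts as a barrier, waiting for all
# older stores to complete rather than risking another rollback.

class DependencePredictor:
    def __init__(self):
        self.wait_for_stores = set()  # load PCs trained to wait

    def train(self, load_pc):
        # Called when a speculative load was rolled back because an
        # older store wrote to the same address.
        self.wait_for_stores.add(load_pc)

    def must_wait(self, load_pc):
        return load_pc in self.wait_for_stores

def issue_load(pred, load_pc, outstanding_stores):
    """Return True if the load may issue speculatively right now."""
    if pred.must_wait(load_pc) and outstanding_stores > 0:
        return False  # barrier: wait for older stores to drain
    return True

pred = DependencePredictor()
# First execution: the load issues speculatively, conflicts with an
# older store, and is rolled back.
assert issue_load(pred, 0x40, outstanding_stores=2)
pred.train(0x40)  # remember the dependence after the rollback
# Next time around, the same load waits for stores to complete,
# avoiding another pipeline flush.
assert not issue_load(pred, 0x40, outstanding_stores=2)
assert issue_load(pred, 0x40, outstanding_stores=0)
```

A real predictor would be a finite table with some way to forget stale entries, but the training loop is the same: mispredict once, then wait thereafter.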
So, we're almost out of time. All right.
So, speculative loads and stores. We're going to introduce a store buffer to
hold the speculative state. Let's take a look very briefly here;
I'll flash up what a store buffer looks like.
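Before the slide, here is a rough Python sketch of what a store buffer does, under my own simplifying assumptions (names are hypothetical): speculative stores sit in the buffer instead of writing memory, younger loads check the buffer first for a matching address, and a store only updates memory when it commits.

```python
# Hypothetical sketch of a store buffer holding speculative state.
# Stores are buffered until commit; loads check the buffer for the
# youngest matching store (store-to-load forwarding) before going to
# memory.

class StoreBuffer:
    def __init__(self):
        self.entries = []  # (addr, value), oldest first

    def add_store(self, addr, value):
        # Speculative store: buffered, memory untouched.
        self.entries.append((addr, value))

    def forward(self, addr):
        # Youngest matching store wins; None means go to memory.
        for a, v in reversed(self.entries):
            if a == addr:
                return v
        return None

    def commit_oldest(self, memory):
        # At retirement, the oldest store finally updates memory.
        addr, value = self.entries.pop(0)
        memory[addr] = value

memory = {0x100: 7}
sb = StoreBuffer()
sb.add_store(0x100, 42)          # speculative: memory still holds 7
assert memory[0x100] == 7
assert sb.forward(0x100) == 42   # younger load forwarded from buffer
sb.commit_oldest(memory)         # store retires and writes memory
assert memory[0x100] == 42
```

The key point the sketch shows is that squashing speculation is cheap: you just discard buffer entries, since memory was never modified.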