Okay, so now let's look at how the actual analysis and synthesis work for longer sounds.

So in here we have the stochasticModel.py file, in which we have different functions that implement the analysis of a complete stochastic signal and the synthesis from the analyzed stochastic representation.

And basically they do what I have done for a single frame. The analysis function, stochasticModelAnal, starts from the signal, the hop size, the FFT size, and the stochastic factor, and it iterates over the whole sound.

And it does what we just did: it performs the FFT, it finds the dB magnitude spectrum, then it uses the resample function to downsample it to fewer samples depending on the stochastic factor, and it keeps appending these to a sequence of envelopes.

Because we're going to have a time-varying envelope for

the stochastic representation.
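That per-frame analysis loop can be sketched roughly like this. This is a hedged sketch, not the actual sms-tools code: the function name `stochastic_anal`, the Hann window, and the -200 dB floor before resampling are assumptions here.

```python
import numpy as np
from scipy.signal import resample

def stochastic_anal(x, H, N, stocf):
    """Sketch: analyze x into a sequence of downsampled dB magnitude envelopes.
    H: hop size, N: FFT size, stocf: stochastic (downsampling) factor."""
    w = np.hanning(N)                          # analysis window (assumed Hann)
    env = []
    pin = 0
    while pin + N <= x.size:
        xw = x[pin:pin + N] * w                # window one frame
        mX = 20 * np.log10(np.abs(np.fft.rfft(xw)) + np.finfo(float).eps)
        # downsample the dB envelope to stocf * (N//2 + 1) points
        mY = resample(np.maximum(-200, mX), max(int(stocf * (N // 2 + 1)), 1))
        env.append(mY)                         # append to the envelope sequence
        pin += H                               # advance by the hop size
    return np.array(env)
```

Each row of the returned array is one frame's smoothed spectral envelope, so the whole array is the time-varying stochastic representation.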

And of course the synthesis does, again, what I showed before: it has to upsample the magnitude spectrum to the complete size, it has to generate the random phases, it has to convert to the complex spectrum, and then overlap-add to recover the whole signal.
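Those synthesis steps can be sketched as follows. Again a hedged sketch, assuming the envelopes are an array of dB magnitudes as produced above; the exact window and amplitude scaling in the real code may differ.

```python
import numpy as np
from scipy.signal import resample

def stochastic_synth(stocEnv, H, N):
    """Sketch: resynthesize noise from a sequence of dB magnitude envelopes.
    stocEnv: array of frames, H: hop size, N: FFT size."""
    hN = N // 2 + 1
    L = stocEnv.shape[0]
    y = np.zeros((L - 1) * H + N)                # output buffer
    w = np.hanning(N)                            # synthesis window (assumed Hann)
    for l in range(L):
        mY = resample(stocEnv[l], hN)            # upsample envelope to full size
        pY = 2 * np.pi * np.random.rand(hN)      # random phases
        Y = 10 ** (mY / 20) * np.exp(1j * pY)    # build the complex spectrum
        yw = np.fft.irfft(Y, N)                  # one frame of shaped noise
        y[l * H:l * H + N] += w * yw             # windowed overlap-add
    return y
```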

So let's call this for a particular sound.

So in here I wrote a little script that takes that ocean sound and calls the stochastic model, so it analyzes the whole sound.

Okay, and it uses a hop size of 128. The FFT size is twice that: instead of specifying an FFT size, the default is normally to just take twice the hop size. And then there is the stochastic factor, which is 0.2.
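Putting the pieces together, here is a rough end-to-end sketch of what such a script does with these parameters (hop size 128, FFT size twice that, stochastic factor 0.2). Since the actual file and function names are not shown here, it uses synthetic noise as a stand-in for the ocean sound.

```python
import numpy as np
from scipy.signal import resample

H = 128            # hop size
N = 2 * H          # default FFT size: twice the hop size
stocf = 0.2        # stochastic factor
hN = N // 2 + 1

# Synthetic noise as a stand-in for the ocean sound.
x = np.random.randn(4 * 44100)
w = np.hanning(N)

# Analysis: one downsampled dB magnitude envelope per frame.
env = []
pin = 0
while pin + N <= x.size:
    mX = 20 * np.log10(np.abs(np.fft.rfft(x[pin:pin + N] * w)) + 1e-16)
    env.append(resample(np.maximum(-200, mX), int(stocf * hN)))
    pin += H
env = np.array(env)

# Synthesis: upsample envelopes, random phases, IFFT, overlap-add.
y = np.zeros((env.shape[0] - 1) * H + N)
for l, e in enumerate(env):
    Y = 10 ** (resample(e, hN) / 20) * np.exp(1j * 2 * np.pi * np.random.rand(hN))
    y[l * H:l * H + N] += w * np.fft.irfft(Y, N)
```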

So now, if we run this test2.