So just to summarize what we've done in this particular module.

We've been introduced to probabilistic models.

We understand that probabilistic models share a key feature:

they explicitly incorporate uncertainty, and

that uncertainty can be propagated through to the outputs of the model.

And propagating that uncertainty to the output allows us to give a range

of potential values for a forecast, rather than a single point estimate.
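
That idea can be sketched in a few lines of Python. The model and numbers below are hypothetical, not from the lecture: an uncertain growth factor is pushed through a toy sales model by simulation, and we report an interval instead of a single number.

```python
import random
import statistics

random.seed(42)

# Hypothetical toy model: next period's sales = baseline * growth,
# where the growth factor is uncertain (normally distributed).
baseline = 100.0

def simulate_sales():
    growth = random.gauss(1.05, 0.10)  # uncertain input
    return baseline * growth

# Propagate the input uncertainty through the model by simulation.
draws = sorted(simulate_sales() for _ in range(10_000))

point = statistics.fmean(draws)
lo = draws[int(0.05 * len(draws))]   # 5th percentile
hi = draws[int(0.95 * len(draws))]   # 95th percentile
print(f"point forecast ~ {point:.1f}, 90% interval ~ ({lo:.1f}, {hi:.1f})")
```

The interval, not the point estimate, is what communicates the risk in the forecast.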

These probabilistic models are ways of capturing risk in a process.

Risk arises from uncertainty, and uncertainty is exactly what probabilities quantify for you.

So that's why they're important.

We've seen a range of probability models that are used in practice.

We talked about a regression model, a tree-based model,

a Monte Carlo simulation, and a Markov chain,

all examples of probabilistic models, and all used in practice.

We then went on to define some building blocks of these probabilistic models.

Those building blocks involved random variables and probability distributions.

We saw some basic probability distribution building blocks.

They were the Bernoulli random variable that captures a single experiment that

can take one of two possible outcomes that we typically call success and failure.
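
A Bernoulli trial is a one-liner in Python; the value p = 0.3 below is just an illustrative choice:

```python
import random

random.seed(0)

def bernoulli(p):
    # A single experiment with two outcomes: 1 = "success", 0 = "failure".
    return 1 if random.random() < p else 0

p = 0.3
trials = [bernoulli(p) for _ in range(100_000)]
success_rate = sum(trials) / len(trials)  # should land close to p
```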

We saw the binomial random variable, which can be understood as the sum of

n independent Bernoulli trials, that is, the number of successes out of n.

And I showed you an example where that binomial distribution can be used

as the basis of a model for a market.
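
The sum-of-Bernoullis idea translates directly into code. The market numbers here are hypothetical, not the lecture's example:

```python
import random

random.seed(1)

def bernoulli(p):
    return 1 if random.random() < p else 0

def binomial(n, p):
    # The sum of n independent Bernoulli(p) trials,
    # i.e. the number of successes out of n.
    return sum(bernoulli(p) for _ in range(n))

# Hypothetical market model: 50 prospects each buy independently
# with probability 0.2, so the number of buyers is Binomial(50, 0.2)
# with expected value n * p = 10.
samples = [binomial(50, 0.2) for _ in range(20_000)]
avg_buyers = sum(samples) / len(samples)
```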

We then moved on to the normal distribution,

perhaps the most important of all of the statistical distributions,

characterized by its mean and its standard deviation.
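
A quick sketch of that characterization: if you know the mean and standard deviation, you can generate the distribution, and a large sample will recover both parameters. The values of mu and sigma below are arbitrary illustrative choices:

```python
import random
import statistics

random.seed(2)

mu, sigma = 10.0, 2.0  # the two parameters that fully describe a normal
draws = [random.gauss(mu, sigma) for _ in range(100_000)]

sample_mean = statistics.fmean(draws)  # should be close to mu
sample_sd = statistics.stdev(draws)    # should be close to sigma
```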

We finished off by looking at the empirical rule, which

helps you calculate probabilities when you believe that the underlying process or

data is well-approximated by a normal distribution.
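
The empirical rule says that roughly 68%, 95%, and 99.7% of normally distributed data falls within 1, 2, and 3 standard deviations of the mean. A small sketch comparing the rule of thumb to the exact normal probabilities:

```python
import math

def within_k_sd(k):
    # Exact probability that a normal variable falls within
    # k standard deviations of its mean: P(|X - mu| <= k * sigma).
    return math.erf(k / math.sqrt(2))

for k, rule in [(1, 0.68), (2, 0.95), (3, 0.997)]:
    print(f"within {k} sd: exact {within_k_sd(k):.4f}, rule says about {rule}")
```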