Now we're going to go into project three,

and in this project,

I want you to select the setup over there, which looks like this.

And in here, we have four clusters,

arranged in a square pattern, one cluster in each of

the four quadrants of the square that you see right here.

Now, like project two,

this problem cannot be solved by drawing a single line.

Neither project two nor project three

can be solved with just one line.

Which means that we will need more than one neuron.
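To see why one neuron is not enough, here is a small numpy sketch (my own illustration, not the Playground's actual code): a single neuron is logistic regression, which draws one line, and on four quadrants labeled in an XOR pattern the symmetry of the data leaves its gradient at zero, so it stays stuck at chance.

```python
import numpy as np

# Four points, one per quadrant; opposite quadrants share a class (XOR pattern).
X = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])

w = np.zeros(2)   # a single neuron: one weight per input plus a bias
b = 0.0
lr = 0.5
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid output of the neuron
    # gradient of the mean cross-entropy loss
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = np.mean((p >= 0.5) == (y == 1))
print(acc)  # 0.5: the single line never separates the quadrants
```

By symmetry, every gradient step cancels out across the four points, so the weights never move and accuracy stays at 50 percent.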

Let's try to solve this problem with

the three-neuron network model that we used in project two.

Now if you click on the run button over there,

then you may get this result.

And as you can see, it's a failure,

because the lower parts, orange and blue,

were classified correctly, but the upper parts were lost in the classification process.

Now, if you want to redo it,

we can click on the refresh button way over there,

and that will give a classification result like this.

Now, with the same three neurons in a single hidden layer,

we failed earlier but now we succeeded.

Why? Well, proper initialization

of the weights in a neural network is critical to its convergence.

Therefore, with improper weight initialization

compared to proper weight initialization,

the results are significantly different:

over there we failed, and over here we succeeded.
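The effect of initialization can be sketched in numpy (a hypothetical stand-in for the Playground, with illustrative sizes: 2 inputs, 3 tanh hidden neurons, 1 sigmoid output). Training the same tiny network twice, once from all-zero weights and once from small random weights, shows why the starting point matters: with all-zero weights the hidden layer never receives a gradient, so the network cannot improve at all.

```python
import numpy as np

X = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
y = np.array([[0.0], [1.0], [1.0], [0.0]])  # XOR-style quadrant labels

def train(W1, b1, W2, b2, lr=0.3, steps=5000):
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)                    # hidden layer
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # output neuron
        d_out = (p - y) / len(y)                    # cross-entropy gradient
        d_h = (d_out @ W2.T) * (1 - h**2)           # backprop through tanh
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    p = 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2)))
    return W1, np.mean((p >= 0.5) == (y == 1))

# improper init: all zeros, so the hidden weights never get a gradient
W1_zero, acc_zero = train(np.zeros((2, 3)), np.zeros(3),
                          np.zeros((3, 1)), np.zeros(1))

# proper init: small random weights break the symmetry
rng = np.random.default_rng(0)
_, acc_rand = train(rng.normal(0, 0.5, (2, 3)), np.zeros(3),
                    rng.normal(0, 0.5, (3, 1)), np.zeros(1))

print(acc_zero, acc_rand)  # zero init stays stuck at chance (0.5)
```

With zero weights, the hidden activations are zero and the output weights are zero, so every weight gradient vanishes and nothing ever changes; the random start breaks that symmetry.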

Now, if you increase the number of neurons to five,

by doing that over there,

so the middle part that you can see has five,

then you can obtain very accurate classification results.

And the initialization is no longer

such a big deal, because the multiple neurons that you have work it out,

and they make it succeed.

Now when five neurons are used,

classification succeeds very reliably and you get very accurate results.

Comparing the three cases once again:

over there, with improper weight initialization and just three neurons, we failed;

in the middle, with proper weight initialization

but still just three neurons,

look at how we succeeded;

and right over here, with five neurons,

it is very stable and the initialization doesn't really matter,

because the extra processing capability of the five neurons makes it work out.

Now, you can succeed like this and have a very stable neural network with five neurons,

but in addition, with proper initialization you can succeed at

this even with just three or four neurons.
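As a numpy sketch of this point (again my own stand-in for the Playground, with assumed cluster positions and noise), a single hidden layer of five tanh neurons comfortably separates four noisy clusters labeled in the XOR pattern:

```python
import numpy as np

rng = np.random.default_rng(1)
centers = np.array([[2, 2], [2, -2], [-2, 2], [-2, -2]], dtype=float)
labels = np.array([0, 1, 1, 0])          # opposite quadrants share a class
X = np.vstack([c + rng.normal(0, 0.5, (25, 2)) for c in centers])
y = np.repeat(labels, 25).reshape(-1, 1).astype(float)

H = 5                                    # five hidden neurons
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.3
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    d_out = (p - y) / len(y)
    d_h = (d_out @ W2.T) * (1 - h**2)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

p = 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1 + b1) @ W2 + b2)))
acc = np.mean((p >= 0.5) == (y == 1))
print(acc)
```

With the extra capacity, different random starts almost always end up at a good solution, which is why initialization stops being the deciding factor.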

Now let's look into project four.

This is a project for you,

so let's work it out together.

Now we will try to classify the most complex data set pattern that we have in this setup.

The swirl structure of orange and blue data points is a challenging problem.
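To make the structure of this dataset concrete, here is a sketch of how a two-class swirl like the Playground's spiral can be generated (my own construction; the Playground's exact radii, turns, and noise may differ):

```python
import numpy as np

def make_spirals(n_per_class=100, turns=1.75, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n_per_class)       # position along the arm
    r = t * 5.0                                  # radius grows outward
    theta = t * turns * 2 * np.pi                # angle winds around
    X, y = [], []
    for label, phase in ((0, 0.0), (1, np.pi)):  # second arm rotated 180 degrees
        x1 = r * np.cos(theta + phase) + rng.normal(0, noise, n_per_class)
        x2 = r * np.sin(theta + phase) + rng.normal(0, noise, n_per_class)
        X.append(np.column_stack([x1, x2]))
        y.append(np.full(n_per_class, label))
    return np.vstack(X), np.concatenate(y)

X, y = make_spirals()
print(X.shape, np.bincount(y))  # (200, 2) [100 100]
```

The two arms interleave as they wind outward, which is exactly why no small number of straight cuts, and hence no shallow network, separates them easily.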

And look at this right here:

I want you to go over there and select that, so

that you have this setup for our project four.

Now, even if you increase the number of neurons to the

maximum of eight in that single hidden layer that you have in the middle,

and you press the run button over there,

you will see that you will still fail like this.

Well, therefore we need more hidden layers:

even the single hidden layer

with eight neurons wasn't sufficient.

So you can add more hidden layers by clicking on that plus sign over there.

Gradually add more hidden layers and neurons using the buttons over here.

Remember, you can have up to six hidden layers,

and in each hidden layer you can include up to eight neurons;

use the plus buttons over there to set

the number of layers and neurons that you want.
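A deeper stack like the one you will build can be sketched in numpy as a forward pass through an arbitrary list of hidden layers, within the Playground's limits of up to six hidden layers with up to eight neurons each (the layer sizes below are just an example, not a recommended design):

```python
import numpy as np

def init_mlp(layer_sizes, seed=0):
    """One (weights, bias) pair per consecutive pair of layer sizes."""
    rng = np.random.default_rng(seed)
    return [(rng.normal(0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(params, X):
    h = X
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)                  # tanh hidden layers
    W, b = params[-1]
    return 1.0 / (1.0 + np.exp(-(h @ W + b)))   # sigmoid output in (0, 1)

# 2 inputs -> three hidden layers (8, 8, 6 neurons) -> 1 output
params = init_mlp([2, 8, 8, 6, 1])
p = forward(params, np.random.default_rng(1).normal(size=(5, 2)))
print(p.shape)  # (5, 1)
```

Changing the `layer_sizes` list plays the same role as clicking the plus buttons: each entry is one hidden layer's width.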

We now enter the deep learning neural network design challenge.

The objectives are set as:

a target test loss less than or equal to 0.05,

and a training loss less than or equal to 0.02.

An example of a successful classification is shown right here.

And this is a success because,

the orange region covers all of the orange dots,

and the blue region covers all of the blue dots.

Looking at the output over there,

the test loss is 0.034,

and the training loss is 0.008,

which satisfies the objectives that we set above.
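The success criteria can be written down as a small helper (my own, not part of the Playground), applied here to the example run's losses:

```python
def meets_targets(test_loss, training_loss):
    """Project four passes if test loss <= 0.05 and training loss <= 0.02."""
    return test_loss <= 0.05 and training_loss <= 0.02

print(meets_targets(0.034, 0.008))  # True: the example run above passes
print(meets_targets(0.034, 0.030))  # False: training loss misses its target
```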

In order to succeed in the classification of project four,

you will need to make the neural network deeper by adding hidden layers and neurons.

Now, the target of your design should be this:

what is the smallest number of hidden layers,

and the smallest total number of neurons, that you

had to use to succeed in the classification of project four?

You must mention the test loss and training loss output values

of your neural network model.

In addition, give helpful tips so your friends can learn

more and use the TensorFlow Playground more fluently.

This will be the answer that you need to include

into the discussion prompt of project four.

These are the references that are used for this lecture.

In addition, I include a special thanks to

my wonderful teaching assistant. Thank you.