We start the specialization with this course,
How Google does ML,
where I introduce Machine Learning and
explain what Google means when we say we are AI-first.
My colleague Josh then comes in to talk about the strategy of Machine Learning.
We end with a discussion of how to do Machine Learning at
scale using Python notebooks and serverless data processing components.
Now, if you are an engineer or a scientist,
you're probably thinking that this is all too high level and you're ready to jump to
the next course, which starts to delve into the technical details of TensorFlow.
But before you make that decision, let me tell you something.
When we taught this set of courses to our customers,
one remark that often came back was that the module by
Josh was the one they revisited most often.
After they went back to work, six months later,
the module they went back to look at the most was the one on the strategy of Machine Learning.
So you want to get the big picture before you delve into the technical details, because
the big picture is very important for you to
be able to get buy-in from the rest of the organization.
Carrying on, we then move on to doing Machine Learning with TensorFlow.
That involves two aspects.
One, creating a good Machine Learning dataset and two,
building your first Machine Learning model with TensorFlow.
Creating a Machine Learning dataset is another of
those practical modules that you don't want to ignore.
When you create a Machine Learning model and it works well
in your experiments but then fails miserably in production,
the reason often comes back to how you created the Machine Learning dataset.
So give yourself time to absorb the lessons. We then
have two courses that are about improving Machine Learning accuracy.
As you build your first Machine Learning model,
you will learn that there are lots of things you can do to improve that model.
So think of this section as filling your tool chest with a set of ideas.
You'll use different ideas in different situations.
So you'll find that knowing them will be helpful in
your career as you solve different Machine Learning problems.
And as before, it's not enough to just name-check these concepts.
You need to give yourself time to understand them
and know how to implement them in code.
We then move on to a set of courses
that are about operationalizing the Machine Learning model.
As I mentioned earlier,
by operationalizing a Machine Learning model I mean
training it at scale in a distributed way,
serving out the predictions,
and building the Machine Learning model end to end.
Operationalizing a Machine Learning model can be super hard.
It is a stage where most enterprise Machine Learning projects fail.
I cannot tell you how many companies I've talked to
who have said that their innovation teams had devised these cool ML projects,
but they were struggling to get the ML models into production.
In this set of courses,
we will talk about how to train,
deploy, and predict with ML models in a way that they're production-ready.
And finally, we delve back into Machine Learning theory.
But theory in big air quotes.
Machine Learning theory is mostly heuristics.
Machine learning is an intensely heuristic discipline
and you're only as good as your bag of tools and tricks.
So we'll introduce a number of tools and tricks that
work when the inputs to your Machine Learning models are images.
Tools and tricks that help when you're processing or outputting sequences,
and sequences in Machine Learning can be either time-series data or text data.
And finally, we will look at several ways to build powerful recommendation systems.
Recommendation systems are how you build personalized algorithms,
and so they're something that most ML engineers build at some point in their careers.
In fact, it might be the only ML system that many people will build.
But in order to build powerful recommendation engines,
it turns out that you need to understand tools and
tricks that are borrowed from images and from sequences.
So that's why we're looking at them in this order and that's why we look
at recommendation engines pretty much at the end of the specialization.
Now, you may have encountered some of this material before,
especially in the first four courses.
For example, in the courses on creating
Machine Learning datasets and on TensorFlow basics,
some of the slides will be a repeat of Serverless Machine Learning with TensorFlow,
which is available on Coursera.
Similarly, if you have encountered Google's Machine Learning Crash Course
(this is an internal Google course, but
it has also been taught on some university campuses),
you might find some of the material on feature
representation and the art and science of ML to be familiar.
In spite of this, though,
the details are quite different,
so don't skip these courses completely.
Treat them as a useful refresher and make
sure that you still remember the ideas presented in those sections.