So we've covered a lot in this lecture, probably more than we do in a typical lecture, especially mathematically. And again, this will be the last lecture where we do any semi-complicated math at all. It isn't really even complicated math, just more complicated than what you might want to see. We won't be doing that in the future, but it was necessary to get the key ideas across.

Now, we looked at Netflix throughout, and the first thing we looked at was internet video streaming: what it means to be on your device, streaming from wherever the content sits on the internet. We talked a little bit about that. We looked at movie recommendation: how all the actions you take on the Netflix site, whether you rate a movie, tell them you're not interested, or add a title to your queue, affect the movies that get recommended to you. We looked at the Netflix Prize, its timeline and the major events over that two-year period, which really ignited the research community in these fields for a while and was very interesting.

Then we looked at recommendation specifically, the Netflix problem. We looked at prediction and why it's important: you need to predict what you think users are going to rate movies, based on what you know about them, and maybe what you know about people or movies similar to the ones you're looking at. We looked at two methods. The first is the baseline predictor, where we look at rows and columns in isolation from one another, apart from the raw average, which takes everything into account but doesn't use any sort of similarity.
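As a sketch of that baseline idea: one simple version predicts a rating as the overall mean plus a per-user (row) offset and a per-movie (column) offset, each computed from its row or column alone. The ratings matrix and the mean-based bias estimates below are illustrative assumptions, not the exact formulation from the lecture.

```python
import numpy as np

# Toy ratings matrix: rows = users, columns = movies; np.nan = unrated.
# All numbers here are invented for illustration.
R = np.array([
    [5.0, 4.0, np.nan, 1.0],
    [4.0, np.nan, 3.0, 1.0],
    [np.nan, 5.0, 4.0, 2.0],
])

mu = np.nanmean(R)                 # raw average over all known ratings
b_u = np.nanmean(R, axis=1) - mu   # per-user (row) offset
b_i = np.nanmean(R, axis=0) - mu   # per-movie (column) offset

def baseline_predict(u, i):
    """Baseline prediction: overall mean plus row and column offsets,
    each estimated in isolation from the rest of the table."""
    return mu + b_u[u] + b_i[i]

print(round(baseline_predict(0, 2), 2))  # prediction for an unrated cell
```

Note that each offset uses only its own row or column, which is exactly why the baseline predictor can't capture similarity between users or between movies.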
And then we looked at the neighborhood method, which is an incremental improvement on the baseline predictor, at least in the way we presented it, and which does not treat rows and columns in isolation. With the baseline predictor, we look at any one row or column, slicing it out on its own. With the neighborhood method, we look across columns when movies are the neighbors, and across rows when users are the neighbors.

We looked at the RMSE, that acronym for Root Mean Square Error. That's a standard metric used to evaluate recommendation systems, and it's necessary here because you can't always rely on users to tell you whether they liked a movie or not. You need some way to quantify it, and the RMSE is a very standard way of doing that.

Beyond the recommendation methods, there were two major themes. The first is the idea of collaborative filtering: exploiting similarities across the table, certain structure you might see in it, and making predictions that way. The second is reverse engineering, which in this case we want to avoid: we want to prevent someone from building the end results without building a more general recommendation system first. As we said before, it's like getting the test beforehand and then taking that exact test when you walk in, rather than studying so that you learn everything and then being tested on a certain portion of it. Those are the main themes. Alright, I hope you enjoyed the lecture, and I'll talk to you next time.
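The neighborhood idea and the RMSE can be sketched together. The sketch below assumes movies as neighbors, cosine similarity computed on residuals left over after a simple mean-based baseline, and a toy ratings matrix; all of these specifics are my assumptions for illustration, not the lecture's exact formulation.

```python
import numpy as np

# Toy ratings matrix (users x movies); np.nan marks unrated entries.
# Numbers are invented for illustration.
R = np.array([
    [5.0, 4.0, np.nan, 1.0],
    [4.0, np.nan, 3.0, 1.0],
    [np.nan, 5.0, 4.0, 2.0],
])

mu = np.nanmean(R)
b_u = np.nanmean(R, axis=1) - mu
b_i = np.nanmean(R, axis=0) - mu
baseline = mu + b_u[:, None] + b_i[None, :]  # baseline for every (u, i)
resid = R - baseline                         # residuals on known ratings

def cosine_sim(i, j):
    """Cosine similarity between movies i and j over co-rating users."""
    mask = ~np.isnan(resid[:, i]) & ~np.isnan(resid[:, j])
    a, b = resid[mask, i], resid[mask, j]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def neighborhood_predict(u, i):
    """Baseline plus a similarity-weighted average of this user's
    residuals on the other movies they rated (movies as neighbors)."""
    num = den = 0.0
    for j in range(R.shape[1]):
        if j != i and not np.isnan(R[u, j]):
            s = cosine_sim(i, j)
            num += s * resid[u, j]
            den += abs(s)
    return baseline[u, i] + (num / den if den > 0 else 0.0)

def rmse(pred, actual):
    """Root Mean Square Error between predictions and held-out ratings."""
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    return float(np.sqrt(np.mean((pred - actual) ** 2)))

print(rmse([3.5, 4.0, 2.0], [4.0, 4.0, 1.0]))
```

Swapping rows for columns in `neighborhood_predict` would give the users-as-neighbors variant; the RMSE is what you would compare across the baseline and neighborhood predictors on held-out ratings.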