So what does the mapping that we just discussed in the last video actually look like? We're mapping from our two dimensional space, with x1 and x2, to new values in three dimensional space, represented by the new axes a1, a2, and a3. The a1 axis is our similarity to Pulp Fiction, the a2 axis is our similarity to Black Swan, and the a3 axis is our similarity to Transformers. So we compute each of these three numbers, map the point up to three dimensional space, and see how we move from our original two dimensions of IMDB rating and budget to our three dimensions of similarity to each of these different movies. And in this new three dimensional space, we can probably find a linear plane that separates the award winners from the non-award winners particularly well.

Going back to our original two dimensional space, each of these similarity measures is determined by the radial basis function, which helps create clusters that are close together in the higher dimensional space, and those clusters will hopefully be linearly separable once we move to that higher dimensional space.

One thing I want to note here is that in reality we would not do this for just three points as we did here, but rather for every point in our data set. And although we're not going into the math here, what makes the kernel trick with support vector machines special is that we can get the results of the calculations in that higher dimensional space, not just three dimensions but one dimension for every single row in our data, without the computational cost of actually mapping our data into that higher dimensional space. That is the true magic of support vector machines and the kernel trick.

So now let's talk about how we can actually implement this using sklearn. The first thing we want to do is import the class containing our classification method, as we've done before. Here, rather than the linear version of support vector machines, we're going to import SVC, which allows us to specify our kernel. I do want to mention that you can do a linear support vector machine using SVC with a linear kernel, but it will take a lot longer to evaluate, so if you're going to use a linear support vector machine, I would suggest using the code that we highlighted before.

The next thing we want to do is create an instance of our class. Here we set rbfSVC, which is just our SVC with a radial basis function kernel. We initialize the class and pass in each of our different hyperparameters. The kernel is going to be rbf; we have other options as well, such as polynomial or linear, and you can look at the documentation to see what other kernels are available. We also pass in the associated gamma, which controls the reach of the Gaussian distribution. All you need to know here is that the higher the value of gamma, the less regularization we will have. So gamma is another one of these regularization terms, similar to C, and for both gamma and C, lower values mean more regularization and higher values mean less regularization and more complex models.
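Here is a minimal sketch of those two ideas in code. The feature values, the reference points standing in for the three movies, and the gamma and C settings are all made-up placeholders for illustration, not values from the example.

```python
import numpy as np
from sklearn.svm import SVC

# Illustration of the mapping idea above: the RBF similarity of one point to
# three reference points. All numbers here are made up for illustration only.
gamma = 0.1
point = np.array([7.5, 30.0])           # a made-up (IMDB rating, budget) pair
references = np.array([[8.9, 8.0],      # stand-in for Pulp Fiction
                       [8.0, 13.0],     # stand-in for Black Swan
                       [7.0, 150.0]])   # stand-in for Transformers
a1, a2, a3 = np.exp(-gamma * np.sum((references - point) ** 2, axis=1))

# The sklearn setup described above: SVC with the radial basis function kernel.
# The gamma and C values here are placeholders, not tuned choices.
rbfSVC = SVC(kernel='rbf', gamma=1.0, C=1.0)
```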
We are then going to fit our instance of the class to our training data, so we pass in X_train and y_train as we've done before, and then we can use that fitted model to predict on our X_test and come up with predictions for each of our labels. And as usual, we can tune each of our different hyperparameters using something like GridSearchCV, searching over different kernel options, different gammas, and different values of C to find the combination that performs best on our holdout set; a quick sketch of these steps follows below.

Now that closes out this video. In the next video, we'll highlight some of the possible computational bottlenecks when using the kernel trick, along with some possible solutions. All right, I'll see you there.
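For reference, here is a minimal end-to-end sketch of the fit, predict, and grid search steps just described. The synthetic data, the grid values, and the cv setting are placeholder assumptions standing in for the movie data set and whatever grid you would actually choose.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

# Placeholder data standing in for the movie features and award labels.
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the RBF-kernel SVC to the training data and predict on the test set.
rbfSVC = SVC(kernel='rbf', gamma=1.0, C=1.0)
rbfSVC.fit(X_train, y_train)
y_pred = rbfSVC.predict(X_test)

# Tune kernel, gamma, and C with GridSearchCV; the grid values are placeholders.
param_grid = {
    'kernel': ['rbf', 'poly', 'linear'],
    'gamma': [0.01, 0.1, 1.0],
    'C': [0.1, 1.0, 10.0],
}
grid = GridSearchCV(SVC(), param_grid, cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_)
y_pred_best = grid.best_estimator_.predict(X_test)
```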