Welcome back for question number 6. I'm going to start by running this code right away, as it will take a second to run. In question number 6 we're going to fit a logistic regression with regularization; you can see here that we're using the L2 penalty. We're then going to use a voting classifier to combine the logistic regression with some of the other models that we learned before. Something to note about the voting classifier, as well as the stacking classifier that we mentioned in lecture: when you use them in scikit-learn, they will actually refit each of the models. So if you pass in something like a grid search, which you can do, it will take all the time that the original grid search took to run. Be careful about passing in complex models, or if you do, just note that it will take some time to run. Then we're going to take the output from our voting classifier, determine the error as we did before, and plot our confusion matrix as well.

The first step is to create the logistic regression model. As I mentioned, this took a little while to run, something like 45 seconds to a minute. We set the penalty to l2, and we also specify our solver and our max iterations here. We'll talk more about gradient descent later, but the idea is that the solver finds the optimal solution step by step, so it needs a certain number of iterations, a certain number of steps, to converge to that solution. Here we need at least 500 iterations. We then fit the model to our training set, our X_train and y_train, and look at the classification report. So first we get our predictions from LR_L2, and we see that it did fairly well, though not quite as well as our boosting: about 0.98 accuracy and 0.98 across both the macro and weighted averages.
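The logistic regression step described here can be sketched as below. This is a minimal sketch, not the lab's actual code: it substitutes scikit-learn's built-in wine dataset (which also has three classes) for the lab's data, and the variable names and the scaling step are assumptions on my part.

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, accuracy_score

# Stand-in data: a built-in multiclass dataset with classes 0, 1, and 2.
X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# Scaling helps the solver converge in fewer iterations (an assumption
# here; the lab may handle preprocessing differently).
scaler = StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

# L2-penalized logistic regression; max_iter caps how many optimization
# steps the solver may take while converging toward the optimum.
lr_l2 = LogisticRegression(penalty="l2", solver="lbfgs", max_iter=500)
lr_l2.fit(X_train_s, y_train)

y_pred = lr_l2.predict(X_test_s)
print(classification_report(y_test, y_pred))
```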
We then look at the confusion matrix to see where the model got confused, and again it's Classes 1 and 2 that are most often confused with each other. Next, we're going to combine two of our estimators in a voting classifier that votes between the outputs of LR_L2 and our GBC, the original gradient boosted classifier with 400 trees that we trained before we ran our grid search. We set voting='soft'. Rather than hard voting, where each model outputs a single predicted class and the majority wins, soft voting outputs the class probabilities and averages them. So if GBC predicted Class 1 with 70% certainty, and LR_L2 predicted Class 0 with 100% certainty (that is, 0% certainty for Class 1), then the averaged probability of Class 1 is only 35%, and the ensemble predicts Class 0 because more weight leans toward that class. We can then call vc.fit on our training set. This may take a minute or two to run, since it refits both the LR_L2, which took a while, and the GBC, which took about a minute. So I'm going to pause here, and when we come back we'll see the outputs of our voting classifier.

Hopefully that only took a minute or two for you to run, and we now have vc, our voting classifier, fit on our training set. We do want to note that this voting classifier should do at least as well as either the gradient boosted classifier or the basic logistic regression on its own. But as we build more and more complex models, say, adding a stacking classifier or a more elaborate voting classifier, the first thing we want to establish is how accurate we actually need to be. Maybe we need to be 100% accurate, in which case we keep honing in to get every single class correct.
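The soft-voting setup just described might look something like this. Again a hedged sketch on the built-in wine data rather than the lab's dataset; the 400-tree GradientBoostingClassifier mirrors the GBC mentioned above, but the other hyperparameters and names are assumptions.

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# The two base estimators: L2 logistic regression (with scaling) and a
# gradient boosted classifier with 400 trees.
lr_l2 = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l2", max_iter=500))
gbc = GradientBoostingClassifier(n_estimators=400, random_state=42)

# voting="soft" averages the predicted class probabilities from each
# model, instead of taking a majority vote on hard class predictions.
vc = VotingClassifier(estimators=[("lr_l2", lr_l2), ("gbc", gbc)],
                      voting="soft")
vc.fit(X_train, y_train)  # note: this refits every base estimator

proba = vc.predict_proba(X_test)  # averaged probabilities, one row per sample
print(vc.score(X_test, y_test))
```

Setting `voting="hard"` instead would have each model cast one vote for its single predicted class, discarding how confident each model was.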
But if we don't need to be that accurate, we don't want to spend mountains of time chasing that extra percentage point of accuracy. So be careful to first check a very simple model, see how well it performs, and perhaps you're already done with your job there. That advice applies to any business decision you're making with data science.

We can get our predictions by calling vc.predict, as we do with any of our models, and then check the classification report. We see that we're back up to 0.99, essentially as good as the GBC, now that we're combining the two together. We can also look at the confusion matrix, and we see that we're doing even better at classifying those ones and twos correctly. So that closes out our lab on boosting and stacking. When we get back to lecture, we'll talk about working even more heavily with imbalanced classes and different solutions for that. All right, I'll see you there.
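The evaluation step can be sketched end to end as below. This is a self-contained illustration on the built-in wine data, not the lab's code; it uses fewer trees than the lab's 400 just to keep the sketch quick.

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.metrics import classification_report, confusion_matrix

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

vc = VotingClassifier(
    estimators=[
        ("lr_l2", make_pipeline(StandardScaler(),
                                LogisticRegression(penalty="l2",
                                                   max_iter=500))),
        # 100 trees here rather than 400, purely to keep runtime short
        ("gbc", GradientBoostingClassifier(n_estimators=100,
                                           random_state=42)),
    ],
    voting="soft",
).fit(X_train, y_train)

# Predict exactly as with any other fitted scikit-learn model.
y_pred = vc.predict(X_test)
print(classification_report(y_test, y_pred))

# Rows are true classes, columns are predicted classes; off-diagonal
# counts show which classes are being confused with one another.
cm = confusion_matrix(y_test, y_pred)
print(cm)
```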