Hi again, Evan here. In this video, I'm going to walk you through three labs that give your engineering teams solutions for driving cost optimization. One quick way to optimize costs is to get rid of resources you're not using. In the first lab, you'll learn how to identify and then automatically delete unused IP addresses. The next lab walks you through identifying unused and orphaned persistent disks, another easy way to reduce your costs. The third lab shows you how to use Stackdriver, Cloud Functions, and Cloud Scheduler to identify opportunities to migrate storage buckets to less expensive storage classes.

Okay, so in this first lab we use Cloud Functions and Cloud Scheduler to identify and clean up wasted cloud resources. In Google Cloud Platform, when a static IP address is reserved but not used, it accrues a higher hourly charge than when it's actually attached to a machine. In applications that depend heavily on static IP addresses and large-scale dynamic provisioning, this waste can become very significant over time. So what are you going to do? You'll create a Compute Engine VM with a static external IP address, plus a separate, unused static external IP address. Then you'll deploy a Cloud Function to sniff out and identify any unused IP addresses, and then you'll create a Cloud Scheduler job that runs every night at 2:00 AM to call that function and delete those addresses. Once again, just remember that GCP's user interface can change, so your environment may look slightly different from what I'm going to show you in this walkthrough. Let's take a look.

Here we find ourselves back in another Qwiklab. This one is all about using Cloud Functions, and in this realm we're creating Cloud Functions that clean up resources that aren't being used; in this particular case, a function that cleans up unused IP addresses. The actual function code, which you'll see a little further down, is just a small amount of JavaScript. The great news is that you don't have to write the function yourself: Google Cloud engineers have provided a lot of these functions in a public GitHub repository, which is kind of cool. So if you wanted to use exactly what you're using inside this lab right now, you could copy and paste it into your own project at work as well.

To highlight what you'll be doing: first you need to create a virtual machine and, like we said in the introduction, a couple of external IP addresses, one that you're going to use and one that's going to stay unused. Then you'll deploy the code that goes through, sniffs out any addresses that aren't in use, and brings them down. That only happens if you manually trigger it, though, so the second part of this lab is to schedule that Cloud Function to run, in this case nightly at 2:00 AM, which automatically invokes the function and does the cleanup for you. Once you set it up, it will just keep running in perpetuity, which is great. So, a couple of things I want to highlight. The first is that inside the lab you'll be working off of code that already exists in a GitHub repository, and in fact that's true for the last three labs.
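If you want to peek at that repository before starting, cloning it looks something like this. The lab gives you the exact URL to use; the GoogleCloudPlatform organization in the path below is my assumption about where the repo lives, so treat this as a sketch:

    # Clone the public cleanup-functions repo (URL assumed; the lab provides the exact one).
    git clone https://github.com/GoogleCloudPlatform/gcf-automated-resource-cleanup.git
    cd gcf-automated-resource-cleanup && ls   # one folder per cleanup lab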
Anything that has to do with cleanup is based on this repository, so I'll show you it very briefly. The first lab is on the unused IP addresses, the second cleanup lab is on unattached persistent disks (PDs), and the third lab is migrating storage to a cheaper storage class if a bucket isn't being actively used, which sounds kind of cool.

The code for the unused IP addresses is just this JavaScript code here; again, you don't have to write it, you just have to deploy and invoke it as your own function. But you can see what it's doing: when the function is called, it logs how many IP addresses exist in the project, and for each one that's reserved but not in use it tries to delete it; if it can't, it logs that it could not delete the address. It's just 60 or so lines of JavaScript that basically say: there are statuses associated with these IP addresses, so apply some logic around them, iterate through all the IP addresses that people on my team have created throughout my project, and remove the ones that aren't used. So the actual nuts and bolts of the lab is this public JavaScript code in the GitHub repository. Let's take a look at how you actually deploy and invoke it, though.

After you've cloned that project code, you need to simulate a production environment: you'll create the unused IP address and the used IP address, associate them with a particular project, and then confirm that they were actually created. I'll show you just this one command right here. Notice how these commands are structured, by the way: gcloud, then the service or product you want to use (here, Compute Engine), then the resource, which for IP addresses is just called addresses, then list, plus a --filter flag so I only see the addresses in my particular region. I've actually already run through this lab, so you can see there's no address listed that isn't in use, because I already ran the function and it deleted the unused one. But as you work your way through your lab, you'll have a long list that includes unused IP addresses, and the function will trim it down to just the ones that are in use, which is pretty cool.

Most of the magic happens here on the command line, since that's what we use to deploy the Cloud Function. Once you've validated that it works and it cleaned up those IP addresses that weren't in use, what you can do at the end of the lab is basically say: I don't want to come into the command line every time to invoke this function. I'll show you what the function invocation looks like right now: deploy it, trigger it, here we go. So after you deploy it and get it ready to work, the last part of the lab is to schedule it using Cloud Scheduler, which is a relatively new product. It's essentially a glorified cron job where Google manages all the maintenance and the hardware behind the scenes for you. You can create the job from the command-line terminal, which is what I did, but I also like to go into the console and see where it actually lives: it's under the admin tools, TOOLS > Cloud Scheduler, with the little clock icon.
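Here's a minimal sketch of that whole flow, assuming us-central1 as the region; the address names, function name, runtime, job name, and URI placeholders are illustrative stand-ins rather than the lab's exact values:

    # Reserve two static external IP addresses: one we'll attach to a VM, one we'll leave unused.
    gcloud compute addresses create used-ip unused-ip --region=us-central1

    # A reserved address that nothing is using shows STATUS: RESERVED instead of IN_USE.
    gcloud compute addresses list --filter="region:us-central1"

    # Deploy the cleanup code from the repo as an HTTP-triggered Cloud Function.
    gcloud functions deploy unused_ip_function --trigger-http --runtime=nodejs10

    # Schedule it nightly at 2:00 AM ("0 2 * * *" in cron syntax) with Cloud Scheduler.
    gcloud scheduler jobs create http unused-ip-job \
        --schedule="0 2 * * *" \
        --uri="https://REGION-PROJECT_ID.cloudfunctions.net/unused_ip_function"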
And one of the things you can do here: this job was for the unused IP addresses, and in the next lab you'll create one for the unattached persistent disks. Instead of invoking the function via the terminal, you can also click Run Now, which is kind of cool. It goes lightning fast, because it's just running that JavaScript code and deleting all the IP addresses that aren't used, which is great. I'm a big fan of this: after you've created all your work inside the terminal, you can view all of your jobs either via the terminal or in the UI as well. And then, boom, it automatically runs at a set frequency; much like a cron job, this schedule denotes 2:00 AM every night. There are website utilities out there that help you convert a time into cron syntax, so don't worry about that too much.

All right, so that's the first deep dive you've had into using a Cloud Function to do something a little more sophisticated than hello world. In our first cleanup use case, we removed those unused IP addresses. So go ahead and try that lab; all the knowledge you pick up there will make the next two labs very, very easy, because you'll be doing the same kinds of things off of the same repository. Good luck.

In this lab, we'll use Cloud Functions and Cloud Scheduler to identify and clean up wasted cloud resources. In this case, you'll schedule your Cloud Function to identify and clean up unattached and orphaned persistent disks. You'll start by creating two persistent disks, and then create a VM that only uses one of those disks. Then you'll deploy and test a Cloud Function, like we did before, that can identify those orphaned disks and clean them up so you're not paying for them anymore. Let's take a look.

Here we are in the Qwiklab for cleaning up those unattached and orphaned persistent disks. Again, one of my favorite things about these labs is that as you work your way through, you get points as you complete the lab objectives automatically. Qwiklabs is smart and knows whether or not you actually did the work, and it's also really, really fun to get that perfect score at the end. So as you scroll down and look through this lab, you're already starting to get familiar with Cloud Functions; again, those are those magical serverless triggers that can watch for things to happen, be triggered, and then do other things. The lab you worked on just before this cleaned up unused IP addresses, and you set that up to run as a cron job via Cloud Scheduler at 2:00 AM. It's the same general concept for this lab, except here you don't care about IP addresses; you care about persistent disks. Those are the hard drives attached to your virtual machines, because, again, inside of Google Cloud you have the separation of compute and storage. Just because you have a virtual machine doesn't mean you need to keep it running 24/7 just to keep its data alive. So if you need compute power for an hour but persistent storage in perpetuity, you can actually separate those, which is kind of cool. But say you don't want that data hanging around once there's no virtual machine associated with it; that's when you identify those orphaned persistent disks. So as we mentioned in the introduction, you'll be creating two of those persistent disks, and the VM is only going to use one of them.
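As a quick preview of the signal this lab's cleanup function keys off of, you can spot unattached disks yourself from the command line. The filter expression below is an assumption on my part about gcloud's filter syntax, not something taken from the lab:

    # Disks that no VM is using have an empty USERS column; this filter narrows
    # the listing to just those (filter syntax assumed, not from the lab).
    gcloud compute disks list --filter="-users:*"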
We're going to detach that one attached disk, and then we're going to copy some code from the repository that can look through and find any disks that were never attached or never used and basically say, hey, why are you paying for stuff that you're not using? Then you'll deploy that Cloud Function that removes those persistent disks. And lastly, so you don't have to wake up every morning and press a button that says "remove persistent disks" (that would be a really boring job), you'll create a cron job via Cloud Scheduler to do it for you automatically.

So again, if you already did the last lab or watched its demo video, you'll be working off of the code in that same public Google repository, gcf-automated-resource-cleanup; GCF just stands for Google Cloud Functions. Here we have the unattached persistent disks example, and instead of JavaScript, this time it's actually written in Python, which is pretty cool. It's a little more involved, but it basically says: I want to find, look at, and delete the unattached persistent disks. So much like you iterated through the IP addresses in the prior lab, here you get the list of all the disks and iterate through them. And if a disk was never attached, which you can tell from the disk's metadata because there's no last-attached timestamp on it (the lastAttachTimestamp field simply isn't present), then you basically say: this disk was never attached to anything and never used, so we're going to go ahead and delete it. This code handles all of that automatically for you; you're not going to be writing the Python yourself, so don't worry about it. It's code you can lift and shift and use in your own projects. The main thing to focus on here is deploying this code as a repeatable Cloud Function and then having it invoked at a regular nightly interval, say every night at 2:00 AM, which is what Cloud Scheduler will help you with.

So back inside the lab for the orphaned persistent disks, let's take a look at some of the things we can do; we'll run some of this too. After you've looked through the repository, you're going to actually create those persistent disks: here's where you name one orphaned-disk and one unused-disk. You're going to create those two disks, so I'll go ahead and run these now. Inside of Cloud Shell, let's see what directory I'm in; I'm in the root directory, and I need to go into wherever the code for the unattached persistent disks lives. Now I'm in there, and as you saw, we were just looking at that Python function before, main.py. By the way, if you're not familiar with Unix commands, a couple of useful ones: ls just lists the contents of the current working directory; cd means change directory, which is kind of like double-clicking into a folder, in this case the unattached-pd folder; and cat shows the contents of a file without doing anything to it. So that same Python code you saw before is now just visible on the screen here.
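Here's roughly what that bit of Cloud Shell navigation looks like, assuming the repository was cloned into your home directory and the folder is named unattached-pd as it appeared on screen:

    ls                                                # list the contents of the current directory
    cd gcf-automated-resource-cleanup/unattached-pd   # "double-click" into the persistent-disk example
    cat main.py                                       # print the Python cleanup function to the screen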
So what do we want to do? We don't want to just look at that code; we want to create some persistent disks, leave some of them unused, and then delete them. We're in that directory, and we're going to define some names, and this is literally what you'll be doing inside the lab: working your way through, copying and pasting; I'm hovering over these boxes and clicking the clipboard icon to copy. As I create them, I need to make sure my project ID is set, so let's see. It's probably not set because I skipped an earlier step inside the lab, but the great news is that if your project ID is not set, there's a command for that as well. So we'll set the project ID; it's updated properly, and now we'll try again. export just basically says define this as a variable. Create those two disks... no, it failed because I didn't run the export for the project ID up here. Boom, done; this is why it's super helpful to go through the lab in order. Then let's create the disks, which works now; let me make the window a little larger. It's creating these disks automatically, and this is exactly what you could be doing through the UI. Here we go: we've got some disks that are ready.

Let's validate those disks were created before we blow away the ones that are unattached. What disks do we have? Whoa, we've got a lot. Great: we've got an orphaned-disk, an unused-disk, and I have other stuff that I really don't want deleted, so hopefully this code works as intended. So orphaned-disk and unused-disk, keep your eyes on those. And of course, as you work through the lab, click on Check your progress in your real lab instance as well. I've created the VM already, so I'll give it a slightly different name this time. Let's see: here we're going to create that virtual machine instance, and look, we're giving it the disk named orphaned-disk, so I bet you can tell exactly what we're going to do with it. Right now we have a virtual machine that's using this disk, so for the disk to become orphaned, we've got to detach it. Let's inspect the instance to make sure the disk was actually attached: boom, there's its last attach timestamp and everything in there. Now let's orphan it: detach the disk marked orphaned-disk, which is just a single command. So now it's off on its own in the world. Let's see: detach-disk, then the instance name (for this demo mine just has a 1 on the end). Boom, it detaches. Now that I've detached it, we'll view the detached disk: it is orphaned, it is detached.

Great. The last part of this lab is actually deploying the Cloud Function that will sniff through all the disks that are out there and delete the ones that are unattached. The lab has you inspect that Python code just to be familiar with it; again, you don't have to write any of that Python yourself, but getting familiar with it can't hurt. Okay, so I've already deployed the Cloud Function before recording this video, and I've scheduled it. Now what I want to do is list the disks; this is the magic that you'll see in your labs. I'll list all the disks that are there: an orphaned-disk and an unused-disk. And now, if I got everything set up correctly, I'm going to go into my Cloud Scheduler. I'm going to show you using the UI here, but you can use the command line if you wish. Unattached persistent disk job, boom, Run Now; it took a second to run, right? Let's see if they're still there, or are they gone? So as you see here, we've just kicked off that cleanup of the unattached persistent disks, and we had an orphaned-disk and one that was just never used.
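For reference, the command-line equivalents of the key steps in this stretch look roughly like this; the instance, disk, and job names and the zone are illustrative stand-ins, not the lab's exact values:

    # Detach the disk from the VM so it becomes orphaned.
    gcloud compute instances detach-disk disk-instance \
        --disk=orphaned-disk --zone=us-central1-a

    # Trigger the cleanup job immediately instead of waiting for its 2:00 AM schedule,
    # the command-line equivalent of clicking Run Now.
    gcloud scheduler jobs run unattached-pd-job

    # Then list the disks to see whether orphaned-disk and unused-disk are gone.
    gcloud compute disks list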
Let's see if that code ran. We've already triggered the function, and it can take up to a minute to actually complete; the job submits right away, but the code itself sometimes takes a little longer. So go ahead and run gcloud compute disks list, which shows the disks that are out there. And if you notice, there are two disks that are no longer here: the one that was unused and the one that was orphaned. So I can say with certainty that the code works, at least as of when I recorded this video. So go ahead, and inside your labs, experiment with it: maybe create three unused disks, or a couple of orphaned ones, and just get familiar with how to create and deploy those Cloud Functions, and how to invoke them manually or automatically via a Cloud Scheduler cron frequency. Give it a try.

GCP provides object lifecycle rules for storage that you can use to automatically move objects to different storage classes. These rules can be based on a set of attributes, such as an object's creation date or its live state. However, they can't take into account whether or not the objects have actually been accessed. One way you might want to control your costs is to move newer objects to Nearline storage if they haven't been accessed for a certain period of time. So in this lab, you'll create two storage buckets and generate loads of traffic against just one of them, and then you'll create a Stackdriver monitoring dashboard to visualize that bucket's utilization. After that, like we did before, you'll create a Cloud Function to migrate the idle bucket to a less expensive storage class, and then you'll test it by using a mock Stackdriver notification to trigger that function. Let's take a look.

Now, one of the last optimization strategies you're going to see here says: all right, I've got objects I'm storing inside a Google Cloud Storage bucket, or GCS bucket. What if there's a more efficient storage class for those assets, like Regional or Nearline, depending on how they're used? And how can I migrate them, move them between those storage classes, automatically? One of the first things I want to show you is what all the different storage classes are, and you'll experiment with these inside your lab. This is just the documentation page for Google Cloud Storage and the different storage classes that are available. Generally, if you just create a Google Cloud Storage bucket and don't specify any particular class when creating it, it will default to Standard storage. But if you don't use your data that often, say it's not a public bucket that gets a lot of traffic, and you want to enable some cost savings, maybe for archival data, you might want to automatically say: if we're not using it, let's put it on something that costs a little less and is designed for more infrequent access. That's when you can take data stored in a GCS bucket on Standard storage and re-class it, or re-classify it, into something like Nearline storage, or even Coldline storage if it's accessed maybe once a quarter or once a year instead of once a day like Standard storage. So now that you're familiar with the fact that different buckets can have different storage classes, let's get back to the lab.
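To make that concrete, here's one way to check and change a bucket's default storage class from the command line; the bucket name is an illustrative stand-in, and this is roughly the kind of re-classification the lab's Cloud Function automates for idle buckets:

    # Show the bucket's metadata, including its current default storage class.
    gsutil ls -L -b gs://idle-bucket-example

    # Switch the bucket's default storage class from Standard to Nearline.
    gsutil defstorageclass set nearline gs://idle-bucket-example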
So the lab here is going to walk you through the different types of storage, and then you're going to create some storage buckets. I created these buckets a little while before recording, but you'll be running through the same repository as before, this time for migrating storage. You're going to create a public bucket and upload a text file to it that just says "this is a test". Then you'll create a second bucket that doesn't have any data in it, and, spoiler alert, we're going to call that one the idle-bucket, the bucket that isn't going to do anything. So you've got those two buckets, and one of the really cool things you'll do is set up a Stackdriver workspace and monitoring dashboard that shows the usage of each of those buckets, similar to how in a previous lab you monitored CPU usage; in this lab you're just monitoring the usage of a bucket. Again, Stackdriver is very flexible in terms of finding a resource on Google Cloud Platform and then monitoring how heavily it's used.
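A minimal sketch of that bucket setup, with illustrative bucket and file names (the lab supplies its own), might look like this:

    # Create the bucket that will serve traffic and the bucket that will sit idle.
    gsutil mb gs://serving-bucket-example
    gsutil mb gs://idle-bucket-example

    # Upload a small test file to the serving bucket and make it publicly readable,
    # so we can generate traffic against it later.
    echo "this is a test" > testfile.txt
    gsutil cp testfile.txt gs://serving-bucket-example
    gsutil acl ch -u AllUsers:R gs://serving-bucket-example/testfile.txt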