Hi. We learned about the three R's in Module 1. In this lesson, we're going to learn how to apply some of those reliability, reproducibility, and reusability techniques to our project, the probot-hello world app. We'll go over the process of dockerizing this particular application, look at the developer experience from a local productivity perspective, and then see how that translates into a Travis CI build. We'll finish the lesson by looking at the actual Travis CI build and walking through some of the differences that start to appear in our Travis YAML once we start using Docker to build our projects in Travis.

So if you go back to your user account and reload the WEB1066-probot-hello project, you should have a project here that has a special branch on it called master_docker. If we switch to this branch, you'll notice that it already includes a Dockerfile in the root of the repository. This Dockerfile is the configuration file that contains all of the specific requirements we need to start developing our application in Docker. For this lesson, we also included a file called HACKING.md in the docs directory. HACKING.md gives you a few Docker commands that you can use on this particular branch to try Docker with this project and actually do something useful. So if we go back to GitHub Desktop and refresh our project to make sure that we have the latest changes, we should be able to change our branch by clicking on the current branch and then clicking on master_docker. At this point, we should be able to open a shell prompt by choosing Repository > Open in Terminal. If you're on Windows this will look slightly different; you'll probably get a command prompt, unless you use something like Cygwin to get a bash shell, which I highly recommend.
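The transcript doesn't show the Dockerfile's contents, but a minimal Dockerfile for a Probot-style Node.js app might look something like this. Treat it as a sketch: the base image version, user, and folder names are assumptions, not taken from the actual repository.

```dockerfile
# Sketch of a Dockerfile for a Node.js/Probot app (contents assumed, not from the repo)
FROM node:10

# Run as the non-root "node" user provided by the official image
USER node
WORKDIR /home/node/probot-hello

# Copy and install dependencies first so Docker can cache this layer
COPY --chown=node:node package*.json ./
RUN npm install

# Copy the rest of the source code into the image
COPY --chown=node:node . .

# Probot listens on port 3000 by default
EXPOSE 3000
CMD ["npm", "start"]
```

Installing dependencies before copying the rest of the source is a common Docker idiom: it lets the layer cache skip the `npm install` step when only application code has changed.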
So in this directory, we're going to use the ls command to list the files, and you'll notice that the Dockerfile is indeed in the root of the directory. If we go back to HACKING.md and just try a couple of the commands, you'll notice very quickly that even without having Node.js installed, or any of the other software development tools needed to work with this project, we are able to create a fresh new image for this project that will allow us to run and develop our code. So I'm going to pause the video here for a second to let this finish building; on your machine you should be seeing something very similar. Fantastic, there's our Docker image. Remember the commands from the previous lesson where we used Node to install and run the modules from the package.json? Here, we've done all that work and put it into a Docker image, which we have tagged probot-hello:latest to give us an easy way to find that image. Now, if we go back to the HACKING doc and look at the test cases, let's try running the npm test cases that are included in this project. That's just a matter of entering the text you see there into the command window: docker run with the -it flags, --rm (which stands for "remove the container on exit"), then the tag for our container (we don't have to include the :latest), and the command npm run test. That npm run test is passed down into the container, which runs the tests against our source code, and we can see the test results there. This is very similar to what we saw when we had the simple Travis YAML file that ran the installation for our modules and then ran the test cases. Now, this is great: as a developer, I didn't have to have Node installed, and I still had all of the requirements. So I'm already getting some of the advantages of reusability here.
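The exact commands live in HACKING.md, which isn't reproduced in the transcript. Based on the image tag and flags mentioned above, they likely look something like this sketch (the build context and command order are assumptions):

```shell
# Build the image from the Dockerfile in the repository root and tag it
docker build -t probot-hello:latest .

# Run the test suite inside a throwaway container:
#   -i   keep STDIN open,  -t  allocate a TTY
#   --rm remove the container when the command exits
docker run -it --rm probot-hello npm run test
```

Because `--rm` discards the container on exit, every `docker run` starts from the clean image state, which is exactly the reproducibility property the lesson is after.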
So our goal here is to take that same work and translate it to our CI environment. As a developer, if I'm working on changes I can iterate on them locally, and the results I see here will translate to what we see on our continuous integration system. If I make a mistake locally, that mistake will be reflected in continuous integration, which gives me some confidence that I'm not going to introduce a change that hasn't already passed my local development workflow. You'll notice there are a couple of other run commands that I've included here for local testing: one starts up the local server for testing, and I'll let you play with that. But the other one that's really interesting is for developing locally. In this particular scaffolding project, we've included the ability to make changes and develop our code iteratively within a Docker container. This is really useful if you're making little changes and you just want to see the updates as you make them. To use it, we can just copy and paste those commands into the command shell. What's happening here is that we're passing some fake information to the app, and we're running a docker command that maps the current working directory, through "$(pwd)", into a folder inside the running container for our source code. So our source code will live under the home directory, under the node folder, and then under the probot-hello-dev folder within the container. The container also exposes port 3000 from within the container to the desktop itself, using that same probot-hello Node Docker image that we built with the docker build command. Finally, we pass a bash command to install the Node modules and then run npm run dev, and under the covers npm run dev uses a tool that watches for updates to the folder /home/node/probot-hello-dev.
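Piecing together what's described above, the local-development command probably looks something like the following. This is a sketch: the environment variable names, fake values, and exact paths are assumptions on my part, so check HACKING.md in the repository for the real command.

```shell
# Run the dev server with the working copy mounted into the container:
#   -e ...            fake app credentials so Probot can start locally
#   -v "$(pwd)":...   map the current directory into the container
#   -p 3000:3000      expose the container's port 3000 on the desktop
docker run -it --rm \
  -e APP_ID=123 -e WEBHOOK_SECRET=fake \
  -v "$(pwd)":/home/node/probot-hello-dev \
  -p 3000:3000 \
  probot-hello \
  bash -c "npm install && npm run dev"
```

Because the bind mount points at your working copy rather than the code baked into the image, edits you make on the host are visible inside the container immediately, which is what makes the watch-and-restart workflow possible.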
So as I develop and make changes to that folder, it will restart the Node server that's running the application within the container and let me see the updates to the Node application in real time as I make them. This is really useful because if I'm making a test case change or an API change, I can see the errors immediately as I develop. I'm going to pause the video again here for a second to give this a chance to start up, and then we'll see it real quick. Here we go. We can see our Probot application is up and running. Obviously, the Probot server can't connect to our GitHub App settings because we haven't actually gone through the cycle of setting up a GitHub App. However, if I go to a web browser, open a new window, and type in the URL http://localhost:3000, what we should see is the actual Probot app in action.
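The lesson goes on to look at the Travis CI side next. For orientation, a .travis.yml that builds and tests through Docker typically looks something like the sketch below; this is a generic example of the pattern, not the project's actual file.

```yaml
language: node_js
# Enable the Docker service on the Travis build machine
services:
  - docker

# Build the image once, then run the tests inside it,
# mirroring the local workflow exactly
install:
  - docker build -t probot-hello .
script:
  - docker run --rm probot-hello npm run test
```

The key difference from a plain Node build is that Travis no longer installs the project's dependencies itself; the Dockerfile does, so local runs and CI runs share one definition of the environment.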