Now, I'm not going to make a lot of changes here. However, I do want you to see the development workflow: if I'm making changes, let's say to the source code, I can close down all of these tabs, go back to Atom, and look at the index.js. We'll just double-click on that so that we can open it. If I make just a slight change, let's say we change the exclamation point to a period and we save that change, then I can go back to the terminal window and see that the change is reflected here, and that we get another npm start for the application, so now the application has the new changes. You can use this workflow iteratively to change code locally and test your changes as you're developing your application. So that's the first advantage of using a Dockerfile. Now, let's take a closer look at the Dockerfile and see what we're doing. The Dockerfile is a sequence of steps where each line potentially represents a layer within your Docker image. Each of these steps builds up the Docker image to a final working state. We're also taking advantage of reusability here: Docker allows you to share other Docker images and use them as the starting point for your own Docker image. So in this case, we got some reusability out of the node:8-onbuild Docker image, which is also viewable on hub.docker.com. We included some environment variables to help us get started; none of these are actually functional environment variables, but they're placeholders for values we might use in an actually deployed application, say in a staging or production environment. Then we set up our paths so that we can use node and install some global node modules.
We copy the current source code directory into a well-known directory within the Docker image, which happens to be the /home/node directory, and we decided to call our application probot-hello. Then we change the ownership from the root user to the node user, because we don't want our application to run as a privileged application within the container, just in case there are any security vulnerabilities there. We switch contexts to the user account called node. This particular Docker image comes with a built-in account called node, and there's more information available at the URL referenced within the Dockerfile. Then we set the current working directory to our probot-hello app directory so that when the Docker image starts, it's in that working directory, and any commands that we send to the container will start within that directory. The final command that we run is to install the node modules so that all of the required modules are available for our application. So npm install does all the magic here. If you remember the hacking doc, it's also here under HACKING.md, in its raw form. If you press Control+Shift+M in Atom, you can actually render it in Markdown preview format, so if you make edits, or if you want to copy and paste from here, it makes it a little easier to read. All of the commands at the end of the docker run command are commands that are defined within the package.json. The package.json has all kinds of other run targets that we can use to perform different functions with this project. So at this point, I think we have the makings of a pretty reliable, reproducible, reusable project that is hosted under Docker. The next step is for us to see how we can take this project and get the same results that we were getting with our Travis file in the master branch into the master_docker branch.
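The steps walked through above can be sketched as a Dockerfile like the following. This is illustrative rather than a copy of the file in the repository: the environment variable names are placeholders (as noted above), and the exact paths and ordering in the real probot-hello Dockerfile may differ slightly.

```dockerfile
# Reuse a shared base image from hub.docker.com as our starting point
FROM node:8-onbuild

# Placeholder environment variables for a deployed (staging/production) app
ENV APP_ENV=development

# Set up paths so we can use node and globally installed node modules
ENV PATH=/home/node/.npm-global/bin:$PATH

# Copy the current source directory into a well-known location in the image
COPY . /home/node/probot-hello

# Change ownership from root to the built-in node user so the app
# doesn't run as a privileged process inside the container
RUN chown -R node:node /home/node/probot-hello

# Switch context to the unprivileged node account
USER node

# Any commands sent to the container will start from this directory
WORKDIR /home/node/probot-hello

# Install all required node modules for the application
RUN npm install
```

Note that the node:8-onbuild base image also has ONBUILD triggers of its own, so a real project may lean on those instead of spelling out every COPY and RUN step.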
Now, if you notice, you'll be in this branch when you have the project open. But what I want you to do is go back to GitHub Desktop, and we're going to create a new branch for our Travis file. So we'll call this branch travis_for_docker, and we'll click on the "New branch" button here. Now, be careful at this point: I want you to choose the master_docker branch to base this new branch from. That way, when we upload our new branch to GitHub, it will have the Dockerfile included with it. So if we create the branch at this point, we should now be on a new branch called travis_for_docker, and if we go back to Atom, we'll notice that we're on travis_for_docker. So go to Atom and choose File, New File, and then save this file by choosing File, Save As, and call the new file .travis.yml just like before; we're going to start a new Travis YAML file. Just confirm the leading dot when prompted so the file is saved as a dotfile. Now, instead of the language tag like we did before, this time we're going to use something called the services tag. We start the file by requiring sudo; this is an option for Travis that gives us the ability to run sudo commands within our build. Then we include the services tag, and the services tag is where we introduce Docker. So if we open a new line and add docker here, that is all you need to be able to start using Docker. Now, unlike the language tag for node.js, where we got the npm install and npm test commands for free, we actually have to script the steps to perform those commands within our Travis build. So the first step here is going to be a before_install step, and this is probably the first time we're using this. This step is going to have an entry that pulls our base Docker image. So we'll use the docker pull command, and then specify our base image. Now, pre-pulling the image isn't strictly required.
However, this is really nice to have, because if we decide to make a different base build image that already has all the node modules installed, then we can simply update this tag to use our image so that our builds run faster. Now, the next step in the before_install step will be to actually build the Docker image. So we use the docker build command, pass it the -t option, choose a name for the image, like probot-hello, and use a dot to indicate that the current working directory is the build context for the Dockerfile. Now, remember the hacking document. If we go back to the actual probot app, what we're using here is the first line in our HACKING.md, which is the build-locally command. So we're taking that same build command and putting it into our Travis file. The next command we're going to use is the one that tests the project. So if you highlight that command and copy it, and then go back to your .travis.yml file, we can start a new scripted step by using the declaration, or tag, called script, open up a new command for the script step to run, and paste in that docker run command. Now, within Travis, we don't have an interactive terminal interface, so it's important that we remove the -it option. We'll leave the --rm, because it doesn't hurt to clean up after yourself after you've run something, and we'll leave the npm run test command there so that our test cases are executed during the continuous integration process. At this point, you can choose File, Save, to save your file, and then we'll go back to GitHub Desktop. So we have a Travis file and a Dockerfile, and the Travis file is going to operate on the Dockerfile just like we were operating on it locally when we were testing our changes.
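Putting the pieces from this walkthrough together, the .travis.yml looks something like the sketch below. The image name probot-hello and the node:8-onbuild base tag are the ones used in this example; swap in your own values as needed.

```yaml
# Gives us the ability to run sudo commands within our build
sudo: required

# The services tag is where we introduce Docker
services:
  - docker

before_install:
  # Pre-pull the base image (optional, but lets us swap in a
  # pre-baked image later so our builds run faster)
  - docker pull node:8-onbuild
  # Build the image; the dot makes the current directory the build context
  - docker build -t probot-hello .

script:
  # No interactive terminal in Travis, so no -it option;
  # --rm cleans up the container after the test run
  - docker run --rm probot-hello npm run test
```

Unlike the language: node_js configuration, nothing here runs for free: each docker command is an explicit scripted step.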
That's the power of getting reusability out of your projects: you can reuse them locally, within continuous integration, or within a production deployment if you wanted to. That also gives us the reproducibility factor, in that all of the requirements to build this project are now contained within that Dockerfile. If Travis decided to update how they handled node, or changed the version of node, those changes wouldn't affect our project, because we have everything self-contained within the Dockerfile itself. So let's go to GitHub Desktop, where we have two changed files. We don't really need to change index.js, so we can discard the changes for that particular file. I'll right-click there and click Discard changes, and we'll leave it as is. Then there's our .travis.yml file, which is going to be the subject of our commit, so let's say this is "Dockerized version of .travis.yml", and "let's learn about the three Rs by implementing Docker with Travis." So commit your changes, and click Publish branch. If we go back to the github.com website and choose the WEB1066 probot-hello project, we now have a branch called travis_for_docker, and we can click on Compare & pull request to create a new pull request. Remember, your changes should be submitted to your own repository, so make sure you choose your own username with the name of the project, and not someone else's. That should switch the context here, and now we see that travis_for_docker can be merged to master. However, I think it's better if we merge it to master_docker instead, and now we see that this is able to be merged, so there are no merge conflicts. The original comments from the commit are copied over into the pull request, so we can create the pull request at this point.
Immediately, Travis picks up that there's a new push and a new PR with this change. So we can look at the details within Travis and see what Travis is doing. Let's see what happened within our Travis build. Actually, before I pause the video, I'm going to look at the View config real quick, and we're going to look at the JSON that's generated. Now, this is generated from our .travis.yml. You'll notice there are some things that didn't come from our Travis yml; those are defaults. But there are others that came directly from our Travis yml, and those are things like the fact that we're using a Docker service, the commands that we're going to use to test our Docker image, and the commands we're going to use to build our Docker image before we begin any of our CI. That should be reflected in the job log as well. Here we see it: the Docker service is now started, our repository is being cloned, and we're just going to give this a few more minutes; it's pulling the image from hub.docker.com. I'm going to pause the video so that we can review the log after this is done. Let's see what happened with the pull request that we submitted. We can see the second pull request, for the dockerized version of our Travis file, is now completed and finished. It finished in about a minute and 53 seconds, which is almost comparable to what it took to run without Docker. If we look at the job log, we're going to see some differences here. In particular, we can see that the Docker services are getting started, because we used that services docker tag within our .travis.yml. The clone operations are still the same. Some defaults for the actual build image are applied, so you can ignore those. However, what's important here is that the before_install step to pull the official node image for node version 8 from hub.docker.com was actually performed, and then we built the Docker image for our probot-hello app, which performed the npm install task.
Then, most importantly, the npm run test task, which ran our test cases, was also executed in the script step, and we can see from the test case results that they passed, and that the docker run command for that npm run test exited with zero and gave us an official green build for this pull request. So that's basically it. The three R's are reliability, reproducibility, and reusability, and that's what we're getting now with Docker. In this case, we're using Docker to dockerize all of the requirements for our node application, and we're getting that developer experience locally, so that as we develop locally, the same experience that we're having on our software development project is actually happening in our continuous integration environment. We're getting reusability out of our Dockerfile: if we want to make local edits, we can use our Dockerfile for that; if we want to develop for our continuous integration environment, we can use our Dockerfile and get continuous integration from it; and if we wanted to turn that into a production deployment, we could use our Dockerfile for that as well. We walked through our .travis.yml file, and it was very easy to edit; there were only a few additional steps and tags that we needed to start using Docker within our project. I hope you enjoyed that, and good luck on dockerizing more projects.