Okay. So let's go to our third example. In our third example, we're going to build on the same project, but we're covering something new: parallel execution of steps or phases. In the previous two examples, we added phases and we added steps, and the steps executed one after the other, in serial. But what happens if we want to execute things in parallel and get quicker execution, since some of the steps are completely independent of each other? In particular, the test phase and the lint phase are definitely independent of each other. There's actually a third target we can use in the package.json file called coverage. Coverage testing is useful for development teams when they're trying to understand whether they wrote enough test cases for their project. Many languages come with test coverage tools that let you know how many of your functions actually have test coverage. So having a check in CI that verifies you're meeting some code coverage requirement is also good to have.

Now, we're going to execute those three things not in serial but in parallel. We're going to use a feature in Travis called matrix operations to parallelize them. In addition, instead of having a very boring tag step in the after_success phase, we're going to have a slightly more complicated shell script that lets us decide when to tag based on the type of test we executed and the type of test that was successful. So we'll cover that as well.

So let's start. We'll build on module two, example two. Choose New Branch, base it on module two, example two, and name the new branch module two, example three. Click Create Branch, and now go back to Atom. In the section around line four here, we're going to add a new key called env. Env stands for environment variables, and we're going to create an array of key-value pairs, one for each environment variable. The environment variable we're going to use is called NPM_TARGET. This can really be any arbitrary key name, but we're going to make the value equal to test. NPM_TARGET is going to replace, with an environment variable, the hardcoded strings you see here for test and lint. Now, in matrix operations, if we reuse the same key at the top level of our .travis.yml file and give it a new value, lint, the matrix operation creates another job that runs with the environment variable set to that new value. So this is a way for us to get parallelization in the .travis.yml file itself.

This can be used in conjunction with the language key as well. If we had specified multiple languages for this .travis.yml file (say we were also building a Ruby package or something like that and chose two different languages to support), the number of languages multiplied by the number of environment instantiations we have here would be the total number of jobs. We can expand on this even further by using the matrix key and additional environment variables to multiplex our execution phases on. I won't go into that complicated an example, but that might be a good exercise for you to try when you have a little more time. Now, the install phase stays basically the same, and the before_install phase also stays the same.
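To make the matrix idea concrete, here's a minimal sketch of what this portion of the .travis.yml might look like. The course's actual file isn't reproduced here, so the node_js version and surrounding keys are assumptions; the point is that each entry under env becomes its own parallel job, and it assumes test, lint, and coverage scripts are all defined in package.json.

```yaml
language: node_js
node_js:
  - "10"            # assumed Node version; the course file may differ

# Each entry in this list becomes a separate job in the build matrix,
# so every phase below runs three times, once per NPM_TARGET value.
env:
  - NPM_TARGET=test
  - NPM_TARGET=lint
  - NPM_TARGET=coverage

script:
  # A single line replaces the separate "npm run test" / "npm run lint"
  # lines; the matrix supplies a different NPM_TARGET to each job.
  - npm run $NPM_TARGET
```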
In the script phase, however, we can remove a line. There's no need for two lines that are almost doing the same thing; instead, we're going to use the $NPM_TARGET environment variable to specify what happens for each instantiation of NPM_TARGET in the matrix operations. That gives us three jobs for the script phase, each executing this command as npm run with the test, lint, or coverage target.

Now, in the after_success section, we don't necessarily want to tag every single time we run a particular target. So we're going to make the tagging for this phase conditional. Here we're going to use YAML's multiline syntax to provide a slightly more complicated script (a sketch of what it might look like appears at the end of this section). Travis will use bash to execute it. We're going to use the set -x -v options, and be careful with these options, because if you have secrets in your phase, you'll expose those secrets by accident. With -x -v, the script will show us the commands it's executing. We'll use the -e option to fail the script if there are any errors, and then we'll use a bash if statement that checks the NPM_TARGET environment variable to see when it's equal to test. When it's equal to test, we're going to tag our Docker image with the tag that we want. Otherwise, if it's not equal to test and it's the lint or coverage job, we're just going to skip it and say we're skipping the docker tag. We'll close the if statement there, and then we'll use the docker images command to see a list of the images we actually tagged. You can imagine that with a set of tagged images, we could publish them to something like a Docker registry so we could do a deployment phase later on.

So I'm going to save that. Let's go back to GitHub Desktop. We should see all of the changes we just made for the matrix operation and for the multiline syntax. We'll commit that as example three and then publish. At this point, we should be able to go back to Travis and check whether we got the syntax right. Let's go to Requests. We can see example three looks syntactically right, and we can see module two, example three is now building. Notice that we now have three parallel jobs executing, one for each of the matrix instantiations of the environment variable we specified: one job runs all of the phases with NPM_TARGET set to test, one with it set to lint, and one with it set to coverage. If we look at an individual one and look at the config, we can see that it executes all our phases, and it includes the script step that does the run command with the particular target, in this case NPM_TARGET set to coverage. Then the after_success phase also executes, and we're using the conditional check here to make sure we only tag when NPM_TARGET equals test. So for the coverage and lint jobs, we're not going to tag the image again.

However, remember this job is running in a virtualized environment using a Docker image. If you have stateful changes in these steps, be careful, because these things are running as three separate processes on the same Docker container, and they're actually sharing the same workspace.
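Since the exact script isn't reproduced in the transcript, here's a hedged sketch of that multiline after_success block. The Docker image name (my-app) and the tag built from $TRAVIS_BUILD_NUMBER are hypothetical placeholders; substitute whatever image name and tagging scheme your build actually uses.

```yaml
after_success:
  - |
    # -x and -v echo each command as it runs (careful: this can leak
    # secrets into the build log), and -e stops the script on any error.
    set -xve
    if [ "$NPM_TARGET" = "test" ]; then
      # Hypothetical image name and tag; we only tag on the test job.
      docker tag my-app:latest my-app:build-$TRAVIS_BUILD_NUMBER
    else
      echo "skipping docker tag for NPM_TARGET=$NPM_TARGET"
    fi
    # List images to confirm what was (or wasn't) tagged.
    docker images
```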
So if you're making changes across these different phases or across these different jobs that impact one another, you might trip yourself up a little bit. It's just good to be aware of what's going on under the covers. I'm going to pause the video here for a little bit, and then we'll look at the finished results to see what we got.

Okay. So our build is done. Now, let's take a quick look at what happened in this build. For this one, I ended up having to cancel the previous run of the build. There's a button that shows up while the build is running; I chose Cancel, and then you can choose Restart Build to run it again. The previous run had gotten stuck on one of the build steps, which is a pretty common thing due to build state if you're not careful, and there's an optimization we'll talk about in the next example that actually tripped me up in this one.

But look at this: we ran these parallel steps all in under three minutes. These two ran in under two minutes, and this one ran in two minutes and 18 seconds. Had we done these steps serially, they would have taken a total of around six minutes, but this build actually ran in two minutes and 20 seconds. If we compare that to the build history and look at some of the previous examples, example two, which did something somewhat comparable, ran in two minutes and six seconds, but it only did two of the steps, run tests and run lint; it didn't do run coverage. So you can see the optimizations we get when we start running things in parallel. If you think about how you're doing testing and how you're executing your builds, this is a good way to get results back to developers faster, and getting faster feedback on whether the changes they're making are good or bad allows them to iterate faster. That's going to let your team develop new features and capabilities for their application even faster for your customers. Faster delivery of features means better outcomes for your project.