In this video, we will explore the technology of containerization and how it can help simplify deployments in the cloud.

Earlier in this module, we looked at using a microservice architecture as a path to application modernization. Microservices provide a number of advantages, including improved organizational agility and scale through smaller deployment units; however, these advantages come at the cost of a more complex deployment model.

In a traditional bare metal deployment model, the operations team needs to install and configure the runtime dependencies on the organization’s servers. These include the operating system, application runtime, and application servers, along with any other supporting software and configuration files. This software and configuration also needs to be kept in sync across all environments and developer systems to ensure that tests and deployments in those environments accurately represent application behavior in production. This can be a difficult and time-consuming process, and it is often a source of frustration when behavior seen on a local developer’s machine or in a non-production environment isn’t reproducible in production, with the underlying cause being a difference in the runtime dependencies or in how they were configured.

Beyond these difficulties, this form of deployment creates an implicit relationship between applications: every application deployed in the environment must be compatible with the same underlying dependencies. An application requiring a different version of a dependency, or a different configuration, might force the other applications in that environment to change and/or undergo regression testing to ensure the change doesn’t impact them. Let’s look at how containerization helps address these issues.

When adopting a microservice architecture, a popular approach is to containerize the services. Containerization is the process of bundling an application with its runtime dependencies, such as the application server, runtime, and operating system, into a portable, shareable, and lightweight package. Containers are lighter weight than traditional VMs because they share the host operating system rather than each running a full one of their own. This makes it practical to have a one-container-to-one-service model. A popular containerization tool is Docker. Let’s take a look at creating a container with Docker.

When talking about containers, there are a few terms to become familiar with. The first is an image. An image is the defined package of the application, all of its dependencies, and its runtime instructions; an image can be thought of as a recipe. Next is a container, a term already mentioned. A container is a specific instance of an image, and there can be multiple instances of the same image. To continue the recipe metaphor, a container would be a meal made by following the recipe. Last is a layer. A layer is a discrete slice of an image defining one of its characteristics. To finish the recipe metaphor, a layer would be a specific step within the recipe.

In Docker, an image is defined using a Dockerfile. Here is a very simple example of a Dockerfile. The first line, the FROM line, defines the image that this image is based on; images can be built on top of other pre-defined images. In this example, it is built from a Java image, for running a Java application. The second line copies the application, in this example a Java artifact, into the image. Finally, the last line, the CMD line, is the command that will be executed when an instance of this image, a container, is started. Each of these three lines represents a layer in the image.
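As a concrete illustration, a minimal Dockerfile along these lines might look like the following sketch; the specific base image tag, artifact path, and file names are illustrative assumptions rather than the exact values shown in the video.

  # Base image: a pre-defined Java runtime image (tag assumed for illustration)
  FROM openjdk:11
  # Copy the built Java artifact into the image (path assumed for illustration)
  COPY target/app.jar app.jar
  # Command executed when a container is started from this image
  CMD ["java", "-jar", "app.jar"]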
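To make the image and container terms concrete, here is a hedged sketch of the Docker CLI commands that might be used to build an image from such a Dockerfile and then start a container from it; the namespace, image name, and tag are placeholders.

  # Build the image from the Dockerfile in the current directory,
  # tagging it with a namespace, image name, and tag
  docker build -t my-namespace/my-app:1.0 .
  # Start a container (a running instance of the image); this executes
  # the CMD defined in the Dockerfile
  docker run my-namespace/my-app:1.0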
After building an image, the image would typically be pushed to a hosted container image repository. Container repositories allow images to be shared easily: instead of a clunky FTP process, an image can be pulled from a repository using the familiar repository URL, namespace, image name, and tag nomenclature. Beyond serving as a place to store and distribute images, container image repositories also often provide a number of other services, including access controls, to govern who can add or retrieve images from a repository; vulnerability scanning, to detect whether a security vulnerability has been found in an image; and backup services, in case a system outage or other issue occurs.

When using containerized microservices, deployments become much easier. Environments only need the containerization software to be present, which makes installation and configuration much simpler, as well as keeping environments in sync. Applications, now with all of their dependencies packaged internally, can evolve independently. If an application needs a newer version of an application server or runtime environment, it can upgrade without forcing other applications to upgrade as well.

IBM also provides a number of pre-built containerized solutions called Cloud Paks. Cloud Paks are AI-infused containerized solutions that can help businesses accelerate their digital transformation. IBM offers Cloud Paks for applications, data, integration, automation, multi-cloud management (which we will cover in Module 5), and security.

You should now be familiar with: the benefits of running microservices in a containerized environment, creating containers, and IBM Cloud Paks.