This video discusses three related paths that can be taken to modernize applications to cloud native. Cloud migration is often the first path application owners pursue to gain some of the benefits of cloud infrastructure, including its expanded capacity, resource elasticity, and access to operational services. Commonly, this is done through a process called "lift and shift," where an exact copy of the application (along with its libraries, data, and dependencies) is installed into a custom Virtual Machine (VM) operating system image. The finalized VM image, containing the application, can then be "lifted" from on-premises infrastructure and "shifted" to run on cloud environments with compatible hypervisors, such as private or public clouds. This same "lift and shift" approach can be applied to Containers, but with some additional redesign or refactoring. Considerations for refactoring include: planning for persistent storage, since Containers are transient and local storage is not preserved; virtualizing hardware- or system-level networking for container platforms; and improving application orchestration by breaking up application tiers into separate containers. In many cases, the time taken to redesign and refactor is modest and opens the application up to the full benefits of cloud native and hybrid cloud, which can be invaluable. Additionally, refactoring can be accelerated through an ever-increasing set of migration toolkits that can analyze different application workloads to begin the process (and provide options). Many modern IDEs, like Eclipse and Visual Studio Code (VS Code), even have options to build the application, its language runtime, and web framework into the image, as well as limiting copied libraries to those actually used. Sometimes they are also able to install API clients for the external or target platform services your application depends upon. Let's take a look at how we build a Container image as a series of layers that include our application code.
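As a rough sketch of what a containerized "lift and shift" can look like, the manifest below (a Dockerfile) layers a legacy application onto a published runtime base image. The base image tag is a real published Java runtime image, but the application name, paths, and port are hypothetical placeholders; substitute your own.

```dockerfile
# Start from a published base ("parent") image -- here a Java runtime;
# substitute whatever runtime your application actually needs.
FROM eclipse-temurin:17-jre

# Copy in the application artifact (name and path are illustrative),
# limiting what is copied to the libraries the application actually uses.
COPY target/legacy-app.war /opt/app/legacy-app.war

# Containers are transient: persistent data must live on an external
# volume rather than in the container's writable layer.
VOLUME /opt/app/data

# Container-level networking: declare the port the application listens on.
EXPOSE 8080
CMD ["java", "-jar", "/opt/app/legacy-app.war"]
```

Each instruction above adds a layer on top of the base image, which is how the layered image build described next works in practice.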
Container technology is so prevalent in cloud computing that you can start building your applications using officially distributed container images of most major Linux distributions, for example Red Hat Enterprise Linux (RHEL), SUSE, and Ubuntu. These OS images are called "base" images and contain read-only layers, where the OS is the lowest layer. You then add or "write" other software into a layer on top of this layer, using simple commands that can be captured in a manifest. Most "base" (sometimes called "parent") images are published to open, public image repositories for simplified download and reference in your continuous integration build processes. As you add software on top of a "base" image, you can save it out to create a new "base" image, and everything you added forms a new fixed layer. Multiple layers can even be "flattened" into a single layer to make the base image smaller and more efficient. In fact, almost all programming language runtime owners regularly publish their own "parent" images for each significant release, as well as interim images with early builds of pre-release versions (or even daily build versions) to allow DevOps teams to test early in their CI/CD toolchains. This is true for prevalent "legacy" languages like Java; lightweight scripting languages like Python and Node.js; and "old school" languages like C++ and even COBOL. It is even common for application middleware and framework providers to publish their own "base" images, with releases that include one or more language runtime images. This leaves application developers a great deal of choice and flexibility in choosing the right "base" image to build their application onto, minimizing application runtime installation while also adding any supporting tools or libraries. This rich pool of pre-made application runtimes gives application owners compelling options to start migrating legacy applications onto, as well as the opportunity to more easily refactor using microservices.
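To illustrate building on a language-runtime "parent" image, the multi-stage manifest below uses real published Node.js image tags, but the project layout (a build step producing a `dist/` directory and a `server.js` entry point) is an assumed example. Copying only the built artifact into a slimmer final image discards the build-stage layers, achieving an effect similar to flattening layers into a smaller image:

```dockerfile
# Build stage: a published Node.js parent image that includes build tooling.
FROM node:20 AS build
WORKDIR /src
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build            # assumed to emit the app into /src/dist

# Final stage: restart from a slimmer parent image and copy forward
# only what is needed at runtime; build-stage layers are left behind.
FROM node:20-slim
WORKDIR /app
COPY --from=build /src/dist ./dist
COPY --from=build /src/node_modules ./node_modules
CMD ["node", "dist/server.js"]
```

The final image contains only the slim runtime plus the two copied layers, keeping it small and fast to distribute.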
As you can imagine, application migration to Containers is happening everywhere, in almost every language and for every hardware architecture. Virtual Machines and Containers both provide a means for applications to "move to the cloud" and securely share its resources, but there are key differences and fundamental advantages to choosing containers. Virtual Machines are created by, and isolated from one another on a server by, a Hypervisor, which typically runs on a host operating system. The hypervisor virtualizes and partitions the server's hardware resources, allowing the VMs to share its processors, memory, and storage. Each VM image, running in a VM instance, must always contain a "guest" operating system along with the device drivers, services, and managers that help virtualize the hardware. Unfortunately, this creates significant overhead in each VM image, causing image sizes to be measured in gigabytes. This overhead, along with the numerous software components that virtualize hardware, causes several issues when using VMs to run your applications: VM images can be slow to load, restart, and fail over (especially if they must be copied over a network); the large "guest" OS footprint immediately consumes large amounts of the physical server's resources when loaded; and the overhead can cause compatibility issues on different physical hardware, adding complexity to the Software Development Lifecycle, slowing automation, and hindering application agility. Instead of hardware virtualization using a Hypervisor, Containers use a Container Engine that virtualizes the host operating system, with each container isolated as a set of kernel-level processes.
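On a Linux host you can observe these kernel-level isolation primitives directly; a minimal sketch (Linux-only, using standard `/proc` entries -- a container engine simply starts containerized processes with fresh versions of these):

```shell
# Each entry is a namespace this process belongs to (pid, mnt, net, ...);
# a container engine creates new ones of these per container.
ls /proc/self/ns

# The cgroup membership of this process; container engines place each
# container in its own cgroup to cap CPU, memory, and I/O usage.
cat /proc/self/cgroup
```

Every process on the host already runs inside some namespace and cgroup; containerization is the kernel giving a process group its own isolated set of them.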
In Linux-based operating systems, the Container Engine achieves resource isolation using system namespaces, which abstract and partition system resources among processes, and kernel-supported "cgroups," which limit and isolate the use of resources like CPU, memory, local storage, and networking. Since container virtualization does not require a guest OS to be copied into each image, container images are much smaller, measured in megabytes instead of gigabytes; most general-purpose base operating system images come in under 75 megabytes (MB). When choosing base container images, the same and even more choices are available compared to VM guest images. Containers overcome many of the issues that limit agility because they are smaller, with the ability to flatten and compress layers. This helps make container-based applications faster to load, start, and fail over, as well as easier to store locally on a server or across a cluster. These smaller images can be built very quickly and reliably using declarative manifests that are easily automated in agile CI/CD processes. Their smaller base images also mean much less resource overhead, leaving more for the applications and even allowing more containers to run on the same server. Lastly, if you need to refactor parts of your application as part of migration, you can easily create new purpose-built microservices to replace functions like logging, or add new features such as analytics (doing the same with a VM would be much harder). This brings incremental benefits: since containerized services are loosely coupled, they can be co-located with your app, scaled independently, and written in any language. You should now be familiar with: the "lift and shift" approach to migration, the differences between VM and Container virtualization, and the benefits of Containers over VMs.