Now that you understand containers, let's take that understanding a step further and talk about using them in a modern hybrid cloud or multi-cloud architecture. But before we do that, let's have a quick look at a typical on-premises distributed systems architecture, which is how businesses traditionally met their enterprise computing needs before cloud computing. As you may know, most enterprise-scale applications are designed as distributed systems, spreading the computing workload required to provide services over two or more networked servers. Over the past few years, containers have become a popular way to break these workloads down into microservices, so they can be more easily maintained and expanded. Traditionally, these enterprise systems and their workloads, containerized or not, have been housed on-premises, which means they're housed on a set of high-capacity servers running somewhere within the company's network or within a company-owned data center. When an application's computing needs begin to outstrip its available computing resources, a company using on-premises systems would need to procure more powerful servers, install them on the company network after making any necessary network changes or expansions, configure the new servers, and finally load the application and its dependencies onto the new servers before the resource bottlenecks could be resolved. The time required to complete an on-premises upgrade of this kind could be anywhere from several months to one or more years, and it could also be quite costly, especially when you consider that the useful lifespan of the average server is only three to five years. But what if you need more computing power now, not months from now? What if your company wants to begin relocating some workloads away from on-premises to the Cloud to take advantage of lower costs and higher availability, but is unwilling or unable to move the entire enterprise application off the on-premises network?
What if you want to use specialized products and services that are only available in the Cloud? This is where a modern hybrid or multi-cloud architecture can help. To summarize, it allows you to keep parts of your system's infrastructure on-premises while moving other parts to the Cloud, creating an environment that is uniquely suited to your company's needs. You can move only specific workloads to the Cloud at your own pace, because a full-scale migration is not required for it to work. You can take advantage of the flexibility, scalability, and lower computing costs offered by cloud services for running the workloads you decide to migrate, and you can add specialized services such as machine learning, content caching, data analysis, long-term storage, and IoT to your computing resources toolkit. You may have heard a lot of discussion recently concerning the adoption of hybrid architectures for powering distributed systems and services. You may even have heard of Google's answer to modern hybrid and multi-cloud distributed systems and service management, called Anthos. But what is Anthos? Anthos is a hybrid and multi-cloud solution powered by the latest innovations in distributed systems and service management software from Google. The Anthos framework rests on Kubernetes and Google Kubernetes Engine deployed on-prem, which provides the foundation for an architecture that is fully integrated, with centralized management through a central control plane that supports policy-based application lifecycle delivery across hybrid and multi-cloud environments. Anthos also provides a rich set of tools for monitoring and maintaining the consistency of your applications across all of your network, whether on-premises, in the Cloud, or in multiple clouds. Let's take a deeper look at this framework as we build a modern hybrid infrastructure stack step by step with Anthos. First, let's look at Google Kubernetes Engine on the Cloud side of your hybrid network.
Google Kubernetes Engine is a managed, production-ready environment for deploying containerized applications. It operates seamlessly with high availability and an SLA. It runs certified Kubernetes, ensuring portability across clouds and on-premises. It includes auto-node repair, auto-upgrade, and auto-scaling, and it uses regional clusters for high availability with multiple masters, with node storage replicated across multiple zones. As of October 2019, the number of zones is three. Its counterpart on the on-premises side of a hybrid network is Google Kubernetes Engine deployed on-prem. GKE deployed on-prem is a turnkey, production-grade, conformant version of Kubernetes with best-practice configuration already pre-loaded. It provides an easy upgrade path to the latest Kubernetes releases that have been validated and tested by Google, and it provides access to container services on Google Cloud Platform, such as Cloud Build, Container Registry, audit logging, and more. It also integrates with Istio, Knative, and Marketplace solutions, and it ensures a consistent Kubernetes version and experience across Cloud and on-premises environments. As mentioned, both Google Kubernetes Engine in the Cloud and Google Kubernetes Engine deployed on-premises integrate with Marketplace, so that all of the clusters in your network, whether on-premises or in the Cloud, have access to the same repository of containerized applications. This allows you to use the same configurations on both sides of the network, reducing the time spent developing applications; it's like "write once, replicate anywhere," while maintaining conformity between your clusters. Enterprise applications may use hundreds of microservices to handle computing workloads, and keeping track of all of these services and monitoring their health can quickly become a challenge. Anthos and the Istio open-source service mesh take all of this guesswork out of managing and securing your microservices.
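As a rough illustration of the GKE features just described, a regional cluster with node auto-repair, auto-upgrade, and autoscaling could be created with a gcloud command along these lines. This is only a sketch: the cluster name, region, and node counts are placeholder assumptions, not values from this course.

```shell
# Sketch only: creates a regional GKE cluster whose control plane and
# nodes are spread across the region's zones for high availability.
# "demo-cluster" and "us-central1" are hypothetical placeholders.
gcloud container clusters create demo-cluster \
    --region us-central1 \
    --num-nodes 1 \
    --enable-autorepair \
    --enable-autoupgrade \
    --enable-autoscaling --min-nodes 1 --max-nodes 3
```

Note that with a regional cluster, the node count applies per zone, so a cluster like this one would start with one node in each of the region's zones.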
These service mesh layers communicate across the hybrid network using Cloud Interconnect, as shown, to sync and pass their data. Stackdriver is the built-in logging and monitoring solution for Google Cloud. Stackdriver offers a fully managed logging, metrics collection, monitoring, dashboarding, and alerting solution that watches over all sides of your hybrid or multi-cloud network. Stackdriver is the ideal solution for customers who want a single, easy-to-configure, powerful, cloud-based observability solution that also gives you a single-pane-of-glass dashboard to monitor all of your environments. Lastly, Anthos Configuration Management provides a single source of truth for your clusters' configuration. That source of truth is kept in the policy repository, which is actually a Git repository. In this illustration, the repository happens to be located on-premises, but it can also be hosted in the Cloud. The Anthos Configuration Management agents use the policy repository to enforce configurations locally in each environment, managing the complexity of owning clusters across environments. Anthos Configuration Management also gives administrators and developers the ability to deploy code changes with a single repository commit, and the option to implement configuration inheritance by using namespaces. If you would like to learn more about Anthos, here are some resources to get you started.
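To make the policy repository idea more concrete, here is a sketch of how such a Git repository might be laid out. The directory names follow Anthos Configuration Management's hierarchical repo convention, but the "frontend" namespace is a hypothetical example, not something from this course.

```shell
# Hypothetical policy repository layout for Anthos Configuration Management.
# The agent in each cluster watches this Git repo and applies what is committed.
policy-repo/
├── system/             # repo metadata read by the agents
├── clusterregistry/    # cluster and cluster-selector definitions
├── cluster/            # cluster-scoped configs (RBAC roles, quotas, ...)
└── namespaces/         # namespace-scoped configs; a subdirectory such as
    └── frontend/       #   "frontend" inherits configs from its parents
        └── namespace.yaml
```

Under this model, a single commit to the repository is enough to roll a configuration change out to every managed cluster, and configs placed higher in the namespaces/ tree are inherited by the namespaces below them.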