Okay. For the last section of this training,
we're going to talk about an offering that we packaged with VIO called VIO Kubernetes.
This is a container as a service solution
that VIO customers are entitled to and that we're
positioning with OpenStack implementations to
bring the value of Kubernetes to your OpenStack cloud.
So, this VIO K8s, or VIO Kubernetes, solution runs on top of the SDDC: VMware vSphere, NSX, and vSphere datastores. It requires VMware Integrated OpenStack as the abstraction layer.
Then on top of that, we created a lifecycle management tool called the VIO K8s VM that includes all the scripts and orchestration required to quickly create Kubernetes clusters and do lifecycle management of those clusters. Lifecycle here means scaling the cluster out, scaling it down, upgrading the cluster, patching the cluster, et cetera.
So, a very powerful proposition.
So, if you have an OpenStack implementation and you want to bring Kubernetes as a service into the cloud, offering that on top of the same OpenStack infrastructure running on top of the SDDC, VIO Kubernetes is a very good solution for that purpose.
We are using standard open source tools
to create the clusters and do management of the clusters.
We use Terraform and Kubespray; these are some of the solutions that are included with this implementation;
but the idea here is that the lifecycle manager, which is VIO K8s, the blue box that you see there on the far right, will call the OpenStack APIs to create Nova instances that will eventually be customized to support Kubernetes, and then expose those Kubernetes APIs to the devs, all right? So, we're leveraging OpenStack as the glue to make it easy for these clusters to be instantiated and to be managed over time.
Some of the features that are included with this solution I have listed here. Just like with VMware Integrated OpenStack, where the value proposition is that you're running on top of robust infrastructure, the same idea applies here.
Run your Kubernetes implementation, your Kubernetes as a service implementation, on top of vSphere for unparalleled HA and performance capabilities. We make it very easy to deploy these Kubernetes clusters; VIO Kubernetes includes an API and also some CLI tools for that purpose.
Day 2 operations are also part of what the solution offers out of the box,
like adding more worker nodes,
patching and upgrading the cluster.
We also provide authentication and multi-tenancy with OpenStack Keystone.
So, with the users that you create in Keystone, and if you have Keystone backed by Active Directory or LDAP, you can leverage the same authentication for consuming Kubernetes using this solution.
Certificate management for configuring and
replacing default certificates with your own is also part of this integration.
Persistent volumes in Kubernetes are also offered via Cinder integration,
and we will show you in the last demo of this presentation some of
the networking services that are possible in Kubernetes with NSX-T specifically.
Then there are other value-added services that we provide with this solution.
But again, if you have an OpenStack implementation based on VIO and there is
a need for a container as a service offering inside of your cloud,
VIO Kubernetes will be a great solution for this application.
So, let's show the last demo of
the presentation which is the VIO Kubernetes and NSX integration.
In this demo, we'll show VIO Kubernetes and its integration with VMware Integrated OpenStack and NSX, specific to the Kubernetes solution.
So, in this particular example,
we have used VIO Kubernetes to create
a very simple cluster with one master and one worker,
those are the VMs that you've seen in the two previous demos, right? They're instances in OpenStack, VMs in vSphere; but from a container perspective, they are Kubernetes hosts or Kubernetes servers.
So, as I mentioned, in this particular case, we created a very simple cluster consisting of one master and one worker,
and the VIO Kubernetes solution gives me the dashboard URL to
administer that Kubernetes cluster as part of
the lifecycle management that is provided by VIO Kubernetes.
We automatically deploy the Kubernetes dashboard to every cluster that is provisioned.
I've already, in preparation for this demo, created a very simple application in the default namespace.
This application, in its current form,
consists of two Kubernetes pods called demochat and MongoDB.
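As a rough sketch, those two pods could be created with Deployment manifests along the following lines; the image names, labels, and ports here are assumptions for illustration, not the demo's actual source:

```yaml
# Hypothetical manifests approximating the demo's two-tier app.
# Images, labels, and ports are assumed, not taken from the demo.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demochat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demochat
  template:
    metadata:
      labels:
        app: demochat
    spec:
      containers:
      - name: demochat
        image: example/demochat:latest   # assumed image name
        ports:
        - containerPort: 8000            # assumed web port
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:3.6                 # assumed version
        ports:
        - containerPort: 27017           # MongoDB default port
```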
I've included the source for this Kubernetes application in the presentation; I just took this example from the Internet. There are also some services that we have exposed, specifically the demochat service.
This is a very simple 2-tier application consisting of a web front-end
and a database back-end that exposes some demochat application.
This service is of type LoadBalancer, meaning that we're leveraging OpenStack load balancing to expose the virtual IP that will be used to consume my app.
This virtual IP in OpenStack is
an external IP that is sitting on the northbound interface of my neutron router.
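A minimal sketch of such a Service; on this kind of setup, the LoadBalancer type is what triggers OpenStack (and, underneath, NSX) to allocate the external VIP. The selector label and ports are assumptions:

```yaml
# Hypothetical Service exposing the demochat front-end.
# type: LoadBalancer asks the cloud provider (OpenStack here)
# to provision a load balancer and allocate an external VIP.
apiVersion: v1
kind: Service
metadata:
  name: demochat
spec:
  type: LoadBalancer
  selector:
    app: demochat      # assumed pod label
  ports:
  - port: 80           # port exposed on the VIP
    targetPort: 8000   # assumed container port
```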
So, if I click on this IP, right?
I'm taken to the actual application,
to the actual demochat application.
So, that is my container app, consisting of a couple of services and a couple of pods.
Very, very simple, right?
Obviously, I created that app using standard Kubernetes tools like kubectl, et cetera.
But as a cloud admin, what I had to provide to my devs was a Kubernetes cluster, container as a service.
I got a request to create and provide a Kubernetes cluster,
and I used VIO Kubernetes because I'm using OpenStack to create and manage that cluster.
So, if we go to NSX, right?
The first thing that I want to show you is
the load balancer component that was leveraged by OpenStack
and honored by NSX to provide the connectivity for the app, right?
There are a couple of load balancing services here that are used;
one is for API traffic and the other one is for the actual application traffic.
So, by going here to the native load balancer tab of NSX, I can see who created that load balancer and for what purpose it's being used.
This one in particular is being used for API services in OpenStack, and the load balancer component for my actual application is listed here; this is the one that exposes the demochat front-end service.
So, this solution leverages NSX in a number of ways,
and for load balancing services,
we will automatically provide a load balancer to load balance
the API traffic in case I have more than one master in my topology.
Then, we'll use another service component of the NSX-T load balancer to expose the VIP that is actually used by the application itself.
There is also, native to this integration of Kubernetes with NSX-T, some automation that takes place the moment you create a namespace: some network constructs are automatically configured.
So, I want to take you to the switching tab here,
and show you a number of logical switches that were created automatically,
once I created my Kubernetes topology.
We don't need to go into each one of these switches, right?
But these are switches that are created by the NSX Container Plug-in, or NCP, and that is part of the automation that NSX provides natively for Kubernetes solutions. These basic layer 2 and layer 3 services are driven by namespace creation and manipulation.
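For example, on a cluster running NCP, nothing more than a plain namespace manifest like the hypothetical one below should be enough to trigger that automation, typically resulting in a dedicated logical switch for the namespace's pods, with no direct NSX interaction by the user:

```yaml
# Hypothetical namespace; on an NCP-enabled cluster, creating it
# drives automatic creation of the corresponding NSX network
# constructs (e.g., a logical switch for the namespace's pods).
apiVersion: v1
kind: Namespace
metadata:
  name: team-a   # assumed name for illustration
```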
For security services, you can either interact directly
with the firewall on NSX or you can leverage
Kubernetes network policy to
create east-west security for your container pods inside of the NSX firewall.
So, a couple ways to do that.
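As an illustration of the network-policy route, a standard Kubernetes NetworkPolicy like the one below would, on an NCP-enabled cluster, be realized as east-west firewall rules in NSX. The labels, namespace, and port here are assumptions tied to the demo app:

```yaml
# Hypothetical policy: only the demochat front-end pods may reach
# the MongoDB pods, and only on the MongoDB port. On clusters with
# NCP, this is enforced by the NSX distributed firewall.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: mongo          # assumed DB pod label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: demochat   # assumed front-end pod label
    ports:
    - protocol: TCP
      port: 27017
```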
There's another section in this training that goes into more detail on what that Kubernetes and NSX-T automation looks like. Note that this is not related to the automation provided by VIO Kubernetes, but rather an augmentation of it, providing network and security services at the container-pod level in your Kubernetes clusters.
So, with that, we conclude demo number three.