In this video, we will discuss the merits of reactive programming using microservices and how Serverless technologies have fully enabled it for cloud-native applications.

What is Reactive Programming? It is a paradigm built around asynchronous data streams, where changes in information drive the flow of application logic; this makes it a natural fit for cloud-native goals. In fact, many modern languages have frameworks that support an execution model for reactive programming. They provide a means to implement asynchronous data streams that drive the scheduling and execution of reactive functions in response to informational changes. These informational changes can be initiated from a wide range of sources, including databases, message queues, event streams, or calls to application APIs.

Reactive functions share some common characteristics. First, they are message-driven, meaning they operate strictly on message data that arrives on the stream and is provided as input. Likewise, their output is also constructed as message data, which can itself be treated as an "informational change" to be propagated to other reactive functions. Additionally, reactive functions must always be responsive and non-blocking to new input messages on the stream. To help with this, reactive programming frameworks often provide libraries to create per-message placeholders called "Promises," which hold the future result of an associated operation running on its own thread. This is a graceful way for reactive functions to remain non-blocking yet synchronize work as it arrives, without needing to write threading logic. Reactive functions are also resilient, meaning they plan for failures and always propagate a failure as an output message on the data stream. Lastly, they should be elastic, such that they can be easily scaled by the execution environment under any workload.

So how does Reactive factor into application modernization?
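The traits above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the message shapes and function names are invented, not from any framework): each input message is handled through a promise-like placeholder (an asyncio Task) so the consumer never blocks, and failures are propagated as output messages rather than raised.

```python
import asyncio

async def handle(message: dict) -> dict:
    """Reactive function: consumes one input message, produces one output message."""
    try:
        await asyncio.sleep(0)  # stand-in for real asynchronous work (I/O, compute)
        return {"status": "ok", "result": message["value"] * 2}
    except Exception as exc:
        # Resilience: plan for failure and emit it as a message on the stream.
        return {"status": "error", "reason": str(exc)}

async def main() -> list:
    # A toy "stream" of informational changes; the last message is malformed.
    stream = [{"value": 1}, {"value": 2}, {"bad": True}]
    # Each message gets a Task -- a promise-like placeholder for a future
    # result -- so no single message blocks the others.
    placeholders = [asyncio.create_task(handle(m)) for m in stream]
    return await asyncio.gather(*placeholders)

results = asyncio.run(main())
```

Note that the malformed message does not crash the stream; it simply yields an error message that downstream reactive functions could react to in turn.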
If you recall, the strategies used in modernizing an application with the "strangler" pattern focused on isolating functions based upon which data asset they managed. If you followed this pattern, you would end up with containerized microservices, each with a set of functions built around some data object. Reactive programming techniques suggest a way to refactor these large, functional "chunks" into smaller microservices that can react to changes to, or operations on, that data object. Such refactoring can yield cloud-native optimizations: greater code clarity, performance, and resource efficiency.

Serverless is a cloud-native, event-driven programming model with no deployment or operational considerations. It is often referred to as Function-as-a-Service (or FaaS), as it is typically offered as a cloud platform service that runs functional workloads as microservices. It can either run your functions within existing language runtime containers (matching each function's language) or run prepackaged containers you reference. It is designed to support Reactive programming and is driven by events from a variety of event sources that "trigger" the functions, while still allowing imperative logic where needed.

One of the primary benefits of Serverless is "No-Ops": developers just code the functional microservice(s) and associate them with event streams; the functions operate on the message data from the event and return results as a message. That's all that is needed; nothing else. The Serverless platform handles all the scheduling and provisioning needed to run the containerized microservice functions you provide. Serverless is also "polyglot" (or language agnostic), meaning functions can be coded in any language that can be containerized as a service and run by the platform. Your functions' containers are automatically scaled on demand to accommodate request load in the form of incoming events.
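To make the "No-Ops" idea concrete, here is a hedged sketch of what the developer's share of the work looks like. The handler signature, event shape, and function name below are hypothetical illustrations, not any specific provider's API; the point is that the developer writes only the function body, while the platform supplies the event message and delivers the return value as an output message.

```python
import json

def on_order_created(event: dict) -> dict:
    """Functional microservice triggered by a hypothetical 'order created' event."""
    order = event["data"]
    # Compute the order total from the event's message data.
    total = sum(item["price"] * item["qty"] for item in order["items"])
    return {"order_id": order["id"], "total": total}

# Simulate the platform invoking the function with an incoming event message.
event = json.loads('{"data": {"id": "o-1", "items": [{"price": 5.0, "qty": 3}]}}')
result = on_order_created(event)
```

Everything around this function (containers, scheduling, scaling) is the platform's responsibility, which is what distinguishes Serverless from merely deploying your own containerized service.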
The Serverless platform asynchronously triggers all functions associated with an event stream tied to an event source (such as a database, a source-code repository, or API invocations). Each function is provided with an event message containing either the data it needs to process or the information needed to locate and access that data. When events stop arriving, the container instances running your functions are scaled back to zero. The beauty of this is that most cloud providers only charge you for the time your functions are actually executing, using a "pay as you go" (or PAYGO) model.

Most Serverless platforms do the work of providing integrated event sources (as mentioned earlier) for many cloud-native technologies and data storage services. You simply provide a declarative configuration that associates the event source (which may have a few configuration options) with the named Serverless function to send events to. Lastly, most Serverless platforms also provide integrated API management to simplify the creation of RESTful APIs that export your functions "as services" to other applications and clients. As you can see, Serverless might be considered the ultimate and most cost-effective way to write reactive applications using cloud-native microservices.

If you want to explore Serverless, you might want to try Knative. Knative is a cloud-native Serverless technology built on the Kubernetes orchestration platform; it can be installed and used on any Kubernetes instance. It implements Kubernetes extensions for deploying and running containers as Serverless workloads. It also offers "service-level" APIs and configurations with a simplified command-line experience for developers and operators. Knative provides two major service components that support reactive Serverless. The first is called "Serving," which simplifies the description and deployment of modernized, containerized microservice workloads.
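The declarative event-source-to-function association described above can be sketched as a toy dispatcher. All names here (the sources, the functions, the trigger fields) are hypothetical; a real platform expresses the same binding as configuration (for example, Knative uses YAML manifests), but the routing logic it implies looks like this:

```python
# Two named Serverless functions (hypothetical examples).
def resize_image(event):
    return {"handled_by": "resize_image", "key": event["key"]}

def index_document(event):
    return {"handled_by": "index_document", "key": event["key"]}

FUNCTIONS = {"resize_image": resize_image, "index_document": index_document}

# Declarative trigger configuration: each entry associates an event source
# with the named function that should receive its events.
TRIGGERS = [
    {"source": "bucket-uploads", "function": "resize_image"},
    {"source": "doc-store", "function": "index_document"},
]

def dispatch(source: str, event: dict) -> list:
    """Simulated platform: invoke every function whose trigger matches the source."""
    return [FUNCTIONS[t["function"]](event)
            for t in TRIGGERS if t["source"] == source]

outputs = dispatch("bucket-uploads", {"key": "photo.png"})
```

The developer only ever edits the configuration list and the function bodies; scheduling, scaling, and delivery are the platform's concern.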
Serving horizontally scales containerized functions, using Kubernetes, in response to requests on the service's endpoints or indirectly from configured event sources. It scales to zero when requests stop arriving at the service's endpoint. It also employs "service mesh" capabilities that support endpoint routing between revisions to permit update-in-place, using your choice of pluggable service mesh, including Istio or Gloo. The project supports "build and deploy" pipelines for applications written in popular frameworks such as Spring, NodeJS, Django, Ruby on Rails, and many more.

The second component of Knative is "Eventing." It provides building blocks for consuming and producing events that adhere to the CNCF's CloudEvents specification. A growing set of ready-made integrations for event sources such as GitHub, GitLab, Kafka, and cloud storage can be configured to generate data-change events for your containerized microservices. Ready-made patterns for consuming event data in workflows can be used to speed implementations, including patterns for DevOps, GitOps, sequencing, and more. Clearly, Knative can be leveraged not only for running your cloud-native applications using containers, but also for automating the tool chains used to build them as part of your CI/CD processes.

You should now be familiar with:
The Reactive programming paradigm and its characteristics
How Reactive techniques can be applied to microservices
A typical Serverless computing platform and its features and benefits
Serverless computing's relationship to Reactive
What features the Knative Serverless technology brings to Kubernetes
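As a closing illustration of the Eventing discussion, here is a sketch of a function consuming an event in the CloudEvents structured JSON format. The required context attributes checked below (specversion, id, source, type) come from the CNCF CloudEvents specification; the event type string and payload are hypothetical examples.

```python
import json

def on_cloud_event(body: str) -> dict:
    """Consume one CloudEvent delivered as a structured-mode JSON body."""
    event = json.loads(body)
    # Validate the required CloudEvents context attributes.
    for attr in ("specversion", "id", "source", "type"):
        if attr not in event:
            return {"status": "error", "reason": f"missing attribute: {attr}"}
    return {"status": "ok", "type": event["type"], "data": event.get("data")}

# A hypothetical event, shaped like one a source-code repository
# integration might emit.
body = json.dumps({
    "specversion": "1.0",
    "id": "evt-42",
    "source": "https://github.com/example/repo",
    "type": "com.example.repo.push",
    "data": {"ref": "refs/heads/main"},
})
reply = on_cloud_event(body)
```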