In this video, we will walk you through the most popular approach, and its supporting strategies, for modernizing a monolithic application into containerized microservices. Modernizing a legacy application is not always straightforward, but a popular approach, dubbed the “Strangler pattern”, suggests a way to incrementally replace parts of a monolith with smaller, independent services. It was so named by Martin Fowler in 2004, from observing how “strangler” fig trees grow from small seeds and slowly weave around host trees until the hosts are completely consumed. How, then, would we begin applying this pattern to move a monolith to microservices?

First, establish a cloud-native environment to host the “strangler” application’s microservices as they are migrated. Alongside this, prepare CI/CD automation to build, deliver, and deploy them as containerized microservices.

Next, stand up a proxy that will act as an intermediary, routing client requests between the monolith’s backend functions and their equivalent microservices in the “strangler” application. The proxy provides a safe means to develop, test, and validate your microservice migrations, side by side, while slowly starving out their monolithic counterparts. To fully leverage cloud-native technologies and platforms, it may be wise to use an API gateway service as the proxy, as gateways offer a wide array of security and other services that will help with future service integrations and facilitate a wider application ecosystem.

As each component is migrated, its associated data should also be migrated to take advantage of cloud-native storage options. You should weigh the benefits and costs of SQL and NoSQL databases, as well as Cloud Object Storage (COS), for each class of data. Legacy data is often not easily migrated, because ownership of the data can be shared across low-cohesion monolithic components. Because of this, cross-connectivity may still be needed from the new platform to the legacy storage systems for a period of time. It may also be possible to set up a data synchronization service between the old and new platforms that can intercept changes and allow for rollbacks.

Now you are ready to begin the “strangler” cycle. This involves identifying “chunks” of the monolith that concern themselves with the same functional domain and data objects. Refactor each of these functional “chunks” into containerized microservices using your CI/CD system, then Re-platform them into the cloud-native environment (using container orchestration) on your planned, provisioned infrastructure. These microservices will Co-Exist with their monolithic counterparts for some time while you use the proxy to carefully test and validate your migration. Once confident in a microservice, turn off the proxy’s route to the corresponding monolith function, Replacing it entirely. Rinse and repeat this cycle until the monolithic functions have all been “starved” and can be removed entirely. Retain your API gateway and continue to use it to break down your microservices even further and to add innovative services.

Although the “strangler” analogy suggests a uniform approach to breaking apart monoliths, the reality is that every monolith is different and often bound to complicated frameworks and older technologies. With low-cohesion code, it is also not always easy to understand which parts need to be migrated together into the same microservice.
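To make the proxy’s role concrete, here is a minimal routing sketch in Python using Flask and the requests library. It is only an illustration of the strangler routing idea, not a production gateway; the hostnames, route prefixes, and service names are hypothetical.

```python
# Minimal strangler-proxy sketch: route migrated prefixes to microservices,
# everything else to the monolith. All hosts/paths below are hypothetical.
import requests
from flask import Flask, Response, request

app = Flask(__name__)

MONOLITH = "http://legacy-monolith.internal:8080"      # assumed legacy backend
MIGRATED = {                                           # functions already "strangled"
    "/receiving": "http://receiving-svc.internal:8000",
    "/orders": "http://orders-svc.internal:8000",
}

def resolve_backend(path):
    """Send migrated prefixes to their microservice; fall back to the monolith."""
    for prefix, target in MIGRATED.items():
        if path.startswith(prefix):
            return target
    return MONOLITH

@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def proxy(path):
    backend = resolve_backend("/" + path)
    upstream = requests.request(
        method=request.method,
        url=backend + "/" + path,
        params=request.args,
        data=request.get_data(),
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        timeout=10,
    )
    return Response(upstream.content, status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type"))

if __name__ == "__main__":
    app.run(port=8081)
```

In this sketch, migrating a function amounts to adding its prefix to the route table, and Replacing it means the monolith simply stops receiving that traffic; once every prefix points at a microservice, the monolith fallback can be retired. A real API gateway service provides this same routing along with security, throttling, and monitoring.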
Thankfully, the “strangler” pattern suggests two strategies to help intelligently “divide and conquer” the monolith over time.

The first is Event Interception. Every application is arguably event-driven; that is, some API, endpoint, or entry-point invocation initiates some change to a data object (or asset) that the application manages or operates on, for example, customer accounts, transaction logs, or product orders. Event interception acknowledges that, eventually, some event carrying the change will be funneled through a data service layer that interfaces (as a “connector” or “adapter”) with the database instance holding the master data. If this is the case, then it is possible to intercept, or “tap”, these database change events in that layer to understand how API calls into your application work their way through the code to effect changes on the data.

The second strategy, Asset Capture, builds on this: the set of intercepted events that operate on the same data asset helps you identify the code that manages it, so that asset and code can be isolated and migrated together into a microservice. Asset capture is used to identify functionally bounded contexts based on data; for example, the “Receiving” functional context in the diagram. Analysis of each context helps determine which ones are the most cohesive and simplest to decouple from the old codebase and migrate as “chunks” to microservices. As a second step, it may be possible to further subdivide these larger microservices into more granular and efficient ones using similar strategies. A small sketch of how these two strategies fit together follows below.

It should be noted that these strategies work best when the monolith has a clear set of APIs; unfortunately, this is not always true. For example, some monoliths may be coupled to frameworks that introduce a layer of abstraction maintaining URL schemes between clients and the application code. Others may have no APIs or URLs at all, with dynamically generated user interfaces (or “meta UIs”) obscuring the entry points. In these cases, event interception and asset capture may still be viable strategies, but they present a greater challenge in generating the events and isolating the code. Look to mitigate these issues by engaging with communities who may already have experience migrating the same legacy technologies.

Once you have found the functional code boundaries around all the data assets in your application, you must analyze the complexity of migrating each one and weigh it against the cost and business value to work out a roadmap. Let’s cover some best practices and considerations. Migration costs are not measured only in the time to extract, copy, and repackage the code. Sometimes the functional boundaries around data assets are unclear or overlap. Undue respect and time may be given to “unraveling” strange and complex code; do not assume there is always a reason for complexity, as past developers may have just patched code hastily to “get it working”. You also need to account for technical debt that may have accumulated in your legacy app over a long period of time; carrying that debt over into your new microservices is not desirable. For some data assets and functions, it may actually be more cost-effective to rewrite the capability as a new microservice. If so, you should by now have a full understanding of the requirements, based on how the existing APIs (which can be preserved or redesigned) change the data. Any “strangler” transformation should be treated as an “all or none” endeavor; partial migrations will only multiply operational complexity and negate the cloud-native benefits of a full migration.
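As a rough illustration of event interception and asset capture working together, here is a small Python sketch of a “tap” on a data service layer. The names (InterceptingDataLayer, StubAdapter, and so on) are hypothetical; in a real monolith the tap would live inside the existing connector or adapter code.

```python
# Sketch of event interception with asset capture (hypothetical names throughout).
# A "tap" in the data service layer records which entry point changed which asset.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ChangeEvent:
    entry_point: str            # the API call that initiated the change
    asset: str                  # the data asset operated on
    operation: str              # "insert", "update", "delete", ...
    payload: dict = field(default_factory=dict)

class StubAdapter:
    """Stands in for the monolith's real database connector/adapter."""
    def write(self, asset, operation, payload):
        print(f"db {operation} on {asset}: {payload}")

class InterceptingDataLayer:
    """Wraps the adapter and taps every change event passing through it."""
    def __init__(self, adapter):
        self.adapter = adapter
        self.events = defaultdict(list)            # asset -> [ChangeEvent]

    def write(self, entry_point, asset, operation, payload):
        self.events[asset].append(ChangeEvent(entry_point, asset, operation, payload))
        return self.adapter.write(asset, operation, payload)   # pass through unchanged

    def candidate_contexts(self):
        """Asset capture: group entry points by the asset they change."""
        return {asset: sorted({e.entry_point for e in evts})
                for asset, evts in self.events.items()}

layer = InterceptingDataLayer(StubAdapter())
layer.write("POST /receiving", "shipment", "insert", {"id": 7})
layer.write("PUT /receiving/7", "shipment", "update", {"status": "checked-in"})
layer.write("POST /orders", "order", "insert", {"id": 1})
print(layer.candidate_contexts())
# {'shipment': ['POST /receiving', 'PUT /receiving/7'], 'order': ['POST /orders']}
```

Here, candidate_contexts shows which entry points cluster around each asset; the tightly clustered, self-contained groups suggest the most cohesive “chunks” to migrate first.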
Minimize work on the legacy app to only what is necessary, and focus on the new microservice-based version. One of the things you may need to address is data sharing between your migrated microservices. The common practice is to expose operations on a data asset as RESTful APIs from the single microservice that owns it, which other services can then call. Another option is Data Projection, where the owning microservice notifies other services of changes to the master data so they can copy them into their local data (as sketched after the recap below). Lastly, Change Data Capture (CDC) software might be a consideration for certain datastores; it can actively track data changes, synchronize them between instances, and perform rollbacks.

You should now be familiar with: the “Strangler” pattern approach for modernization; the steps of the “Strangler” migration cycle; the two strategies used to identify functional boundaries for migration to microservices; and best practices for “Strangler” implementation, including options for data sharing between microservices.
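To make the Data Projection option concrete, here is a minimal, in-process Python sketch. All of the names (OwningService, SubscribingService, on_change) are hypothetical, and a real deployment would deliver the change notifications over a message broker or via CDC tooling rather than direct callbacks.

```python
# Data-projection sketch (hypothetical names; real systems would use a
# message broker or CDC tooling instead of in-process callbacks).
from typing import Callable

Change = dict   # e.g., {"id": 1, "asset": "order", "status": "shipped"}

class OwningService:
    """Owns the master copy of a data asset and broadcasts every change."""
    def __init__(self):
        self.master: dict[int, Change] = {}
        self.subscribers: list[Callable[[Change], None]] = []

    def subscribe(self, callback: Callable[[Change], None]):
        self.subscribers.append(callback)

    def update(self, change: Change):
        self.master[change["id"]] = change     # commit to the master data first
        for notify in self.subscribers:        # then project to other services
            notify(change)

class SubscribingService:
    """Keeps a local projection (copy) of another service's data."""
    def __init__(self):
        self.projection: dict[int, Change] = {}

    def on_change(self, change: Change):
        self.projection[change["id"]] = dict(change)   # copy into local data

owner, reader = OwningService(), SubscribingService()
owner.subscribe(reader.on_change)
owner.update({"id": 1, "asset": "order", "status": "shipped"})
print(reader.projection)    # local copy now mirrors the master record
```

Note that all writes still flow through the owning service, preserving a single source of truth for each asset; subscribers treat their projections as read-only local copies.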