One might think that the easiest way to handle the database during a transition from monolith to microservices is to split the whole database in one go, but that is not the case. In this lesson, let's understand how to do it efficiently.
First, check whether you can isolate the database structures to be broken down in your code, and then align them with the newly defined vertical boundaries. Second, identify what it would take to break down the underlying database structure as well. This ensures that, when a database change is picked up, the code that depends on it is already prepared to absorb it, and you do not have to fight a battle over data integrity. Refer to the following diagram:
What we have done here is mapped the code data structure to the database so that they no longer depend on each other.
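This mapping can be sketched as a repository per vertical boundary: all code that touches the ORDER table goes through one class, and the reference to a product is kept as a plain value rather than a code-level join. The names here (`Order`, `OrderRepository`) are illustrative, with an in-memory dict standing in for the table:

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: int
    product_id: int  # kept as a plain value, not a code-level join to PRODUCT
    quantity: int

class OrderRepository:
    """Owns all access to the ORDER data; nothing else in the order
    module talks to the database directly."""

    def __init__(self):
        self._orders = {}  # stand-in for the ORDER table

    def save(self, order: Order) -> None:
        self._orders[order.order_id] = order

    def find(self, order_id: int) -> Order:
        return self._orders[order_id]

repo = OrderRepository()
repo.save(Order(order_id=1, product_id=42, quantity=2))
print(repo.find(1).product_id)  # 42
```

Because the order code only ever sees `product_id` as a value, the repository can later be pointed at a separate database without the rest of the module noticing.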
If we can transition the code structures used to access the database along with the database structure itself, we will save time. But that may not always be possible. You might notice that some changes would impact modules that are not yet marked for transition. In that case, it is best to leave those modules alone for now and move on.
You also need to understand what kinds of changes are acceptable when you break down a database table or merge it with another partial structure. The most important thing is not to shy away from breaking those foreign key relationships. This might sound like a big departure from our traditional approach to maintaining data integrity.
Removing your foreign key relationships is the most fundamental challenge when restructuring your database to suit the microservice architecture. Remember that a microservice is meant to be independent of other services. If there are foreign key relationships with other parts of the system, this makes it dependent on the services that own that part of the database.
As part of step two, we have kept the foreign key fields in the database tables, but we have removed the foreign key constraint. Consequently, the ORDER table still holds information about ProductID, but the foreign key relation is broken now.
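A minimal `sqlite3` sketch of this state, with hypothetical table and column names: the orders table keeps a `product_id` column but declares no FOREIGN KEY constraint, so an order can hold a product reference that the database no longer validates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# PRODUCT and ORDER still live side by side for now, but the orders table
# no longer declares a FOREIGN KEY; it merely stores the ProductID value.
conn.execute("CREATE TABLE product (product_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE orders (
        order_id   INTEGER PRIMARY KEY,
        product_id INTEGER,  -- field retained, constraint removed
        quantity   INTEGER
    )
""")

# This insert succeeds even though product 99 does not exist in PRODUCT:
# integrity across the boundary is now the application's responsibility.
conn.execute("INSERT INTO orders VALUES (1, 99, 3)")
row = conn.execute("SELECT product_id FROM orders WHERE order_id = 1").fetchone()
print(row[0])  # 99
```

The trade-off is explicit: the database will no longer reject a dangling `product_id`, so the services themselves must agree on how such references are resolved.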
With the two steps performed, our code is now ready to split ORDER and PRODUCT into separate services, with each having their own database.
Before we go any further, there is just one more item that we need to think about: master data, or static data.
Handling master data
We might be better off with configuration files, or even code enumerations, if the general assumption is that the data will not change for ages and comprises an insignificant number of records.
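As a sketch, rarely changing master data such as a list of supported countries could live in a code enumeration; the values below are invented for illustration:

```python
from enum import Enum

class Country(Enum):
    """Hypothetical master data: changes rarely and has few records,
    so it can live in code instead of a database table."""
    US = "United States"
    IN = "India"
    DE = "Germany"

# Services read the enumeration like any other constant.
print(Country.US.value)  # United States
print(len(Country))      # 3
```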
This way, we might just need to push the configuration file(s) to production whenever the data changes. We can probably live with this setup, but in projects that grow complex over time, a dependency on this file shared by multiple services could pose a problem.
One way to handle this elegantly is to create a separate service for master data altogether. Delivering the master data through a service has the advantage that consuming services learn of changes instantly and can decide whether and how to consume them.
Requesting this service, when required, need not be much different from reading a configuration file. It might be slower, but it only has to be done as many times as necessary.
Moreover, you could also support different sets of master data; it would be fairly easy to maintain product sets that differ from year to year. With the microservice architecture style, it is always a good idea to avoid outside dependencies wherever possible.
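A minimal sketch of this idea, with all names hypothetical: a master data service holds per-year product sets, and a small client caches reads so that repeated lookups cost little more than reading a configuration file:

```python
import time

class MasterDataService:
    """Hypothetical in-process stand-in for a master data microservice,
    keyed by year to support differing product sets."""

    def __init__(self):
        self._data = {
            "2023": {"SKU-1": "Basic Widget"},
            "2024": {"SKU-1": "Basic Widget v2"},
        }

    def product_set(self, year: str) -> dict:
        return self._data[year]

class MasterDataClient:
    """Caches lookups with a TTL so repeated reads avoid a network call."""

    def __init__(self, service, ttl_seconds: float = 300.0):
        self._service = service
        self._ttl = ttl_seconds
        self._cache = {}  # year -> (fetched_at, data)

    def product_set(self, year: str) -> dict:
        entry = self._cache.get(year)
        if entry and time.monotonic() - entry[0] < self._ttl:
            return entry[1]                      # fresh cache hit
        data = self._service.product_set(year)   # "remote" call
        self._cache[year] = (time.monotonic(), data)
        return data

client = MasterDataClient(MasterDataService())
print(client.product_set("2024")["SKU-1"])  # Basic Widget v2
```

In a real system the client would call the service over HTTP or messaging, but the shape stays the same: the service owns the data, and consumers see changes on their next uncached read.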
With the foreign keys gone, imagine a scenario where a user has ordered a specific product. The product was available when it was added to the cart, but by the time the user reached checkout, the order could not be completed. We do not know whether the issue is the product running out of stock or a communication error within the system. When we come across such scenarios, there are two ways to handle them. Let's understand them in detail.
- One way to handle this is to try again and perform the remaining part of the transaction some time later. This requires us to orchestrate the whole operation so that individual transactions are tracked across services: every operation that leads to transactions on more than one service must be tracked, and if one of them does not go through, it deserves a retry. This might work for long-lived operations.
If the operation is not long-lived and you still decide to retry, you will end up either locking out other transactions or making them wait, which may make it impossible to complete them.
- Another option is canceling the entire set of transactions that is spread across various services. This means that a single failure at any stage of the entire set of transactions would result in the reversal of all of the previous transactions.
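Both options can be sketched together as a tiny saga-style coordinator (all names here are hypothetical): each step pairs an action with a compensating action; a failed step is first retried, and if it still fails, the steps already completed are reversed:

```python
def run_saga(steps, retries=1):
    """steps: list of (action, compensate) callables, each returning truthy
    on success. Returns True if all steps succeed; otherwise compensates
    completed steps in reverse order and returns False."""
    done = []
    for action, compensate in steps:
        succeeded = False
        for _ in range(retries + 1):      # option 1: retry the failing step
            if action():
                succeeded = True
                break
        if not succeeded:                 # option 2: cancel the whole set
            for _, comp in reversed(done):
                comp()                    # reverse previous transactions
            return False
        done.append((action, compensate))
    return True

log = []
steps = [
    # Reserving stock succeeds; charging the card always fails here.
    (lambda: log.append("reserve stock") or True, lambda: log.append("release stock")),
    (lambda: log.append("charge card") or False,  lambda: log.append("refund card")),
]
print(run_saga(steps))  # False
print(log)  # ['reserve stock', 'charge card', 'charge card', 'release stock']
```

Note that "charge card" appears twice because the step is retried once before the saga gives up, and the stock reservation is then compensated rather than the failed charge.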
This is one area where maximum prudence would be required, and it would be time well invested. A stable outcome is only guaranteed when the transactions are planned out well in any microservice-style architecture application.