The microservice architectural style promotes the development of complex applications as a suite of small services based on specific business capabilities. This course will help you take a hands-on approach to building microservices and deploying them using ASP.NET Core and Microsoft Azure.
Let us imagine a fictitious company called Sportopia Inc. that sells sports merchandise. It has grown very popular and become a household name. On the technology front, they started off by building their website on the .NET stack using the traditional monolithic three-tier architecture. They have a massive database, and a number of peripheral applications talk to the same database; the iOS and Android shopping apps are examples of such peripheral applications.
Of late, they have been facing a number of performance issues, and they have noticed that many customers abandon their carts because of the slowness of the website and mobile apps.
On the resources front, they have an awesome in-house development team supported by super helpful external consultants.
Let’s quickly take a look at what the logical design of their application looks like. (To keep it simple, we will talk only about their customer-facing website and not the other peripheral apps.)
Cool. Now let’s take a look at their technology stack when imposed on their logical design.
Sportopia’s website is a single-page application built with C#, MVC5, and Entity Framework. When a user accesses the website over HTTPS, the request hits the user interface, developed using MVC5 and jQuery (for some simple UI manipulations). For cart activities, the UI interacts with the shopping cart module in the business logic layer (written in C#), which in turn interacts with the database layer (SQL Server 2008 R2).
To keep things even simpler, let’s consider only the functions in their shopping cart module. The various functions involved in a customer’s journey from the homepage to checkout are as follows:
The customer opens a browser, navigates to Sportopia’s website, and lands on their homepage. The customer now searches for the product of choice and adds it to the cart. The customer then verifies the items in the cart and proceeds to checkout. To pay for the products the customer has chosen to buy, Sportopia redirects the customer to an external payment gateway. On successful payment, the customer is redirected back to Sportopia’s website.
Challenges with the current setup
As discussed previously, Sportopia’s website is built using monolithic architecture. It is structured to be developed and deployed as a single unit. The application has a large codebase that is still growing, and even for small updates, we need to deploy the whole application at once. It is not scalable.
What needs to change?
So, Sportopia’s business is growing rapidly, but we are still facing challenges with the existing application and struggling to serve even the existing user base. Moving to microservices alone is not a magic bullet to address these challenges; we need to make changes on multiple fronts. Each of those changes is detailed below:
- Reduce module dependency: Our modules are interdependent, which leads to issues such as poor code reusability and bugs in one module caused by changes in another. To tackle these issues, we can segregate the application so that modules can be divided into submodules. We could use Dependency Injection to achieve that.
Dependency injection (DI) is a design pattern that provides a technique so that you can make a class independent of its dependencies. It can be achieved by decoupling an object from its creation.
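As a minimal sketch of what this could look like, consider the following C# example. The names here (ICartRepository, SqlCartRepository, CartService) are illustrative assumptions, not Sportopia’s actual types:

```csharp
// A minimal sketch of constructor-based dependency injection.
public interface ICartRepository
{
    void AddItem(string productId, int quantity);
}

public class SqlCartRepository : ICartRepository
{
    public void AddItem(string productId, int quantity)
    {
        // Persist the item to the SQL Server database here.
    }
}

public class CartService
{
    private readonly ICartRepository _repository;

    // The dependency is injected, so CartService never creates a
    // concrete repository itself and can be tested with a fake.
    public CartService(ICartRepository repository)
    {
        _repository = repository;
    }

    public void AddToCart(string productId, int quantity)
    {
        _repository.AddItem(productId, quantity);
    }
}

// In ASP.NET Core, the built-in container wires this up at startup:
// services.AddScoped<ICartRepository, SqlCartRepository>();
// services.AddScoped<CartService>();
```

Because CartService depends only on the ICartRepository interface, we could swap in an in-memory repository for unit tests or a different storage backend later without touching the service itself.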
- Introduce code reusability: By dividing modules into submodules and using DI, we are on our way to reusing code.
- Improve code maintainability: We have now divided our modules into submodules, or classes and interfaces. We can now structure our code so that all the types (that is, all the interfaces) are placed under one folder, and the repositories follow the same structure. With this structure, it will be easier for us to arrange and maintain code.
- Unit testing: Our current monolithic application does not have any kind of unit testing. With the introduction of interfaces, we can now easily perform unit testing and adopt test-driven development with ease.
- Database refactoring / schema correction: As discussed in the previous section, our application database is huge and depends on a single schema. This huge database should be addressed during refactoring.
Our huge database has a single schema (currently dbo), but not every table should belong to dbo. Several modules interact with specific sets of tables. For example, the tables used by our Order module should live under a related schema name, such as Order. So, whenever we need to use those tables, we can address them with their own schema instead of the general dbo schema. This will not impact any functionality related to how data is retrieved from the database, but it will structure and arrange our tables in such a way that we can identify and correlate each and every table with its specific module. This exercise will be very helpful when we are transitioning the monolithic application to microservices.
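With Entity Framework, mapping a table to a module-specific schema is a one-line change. The Order entity below is a hypothetical example, not Sportopia’s actual model:

```csharp
using System.ComponentModel.DataAnnotations.Schema;

// Hypothetical Order entity mapped to the "Order" schema instead of
// the default "dbo", so the table is clearly owned by the Order module.
[Table("Orders", Schema = "Order")]
public class Order
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }
}

// The same mapping with the fluent API, inside DbContext.OnModelCreating:
// modelBuilder.Entity<Order>().ToTable("Orders", "Order");
```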
- Moving the business logic from stored procedures to code: The current database has thousands of lines of stored procedures containing a lot of business logic. We should move that business logic into our code base. Since our monolithic application already uses Entity Framework, we can avoid creating stored procedures and write all of our business logic as code.
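The sketch below shows the kind of logic that might previously have lived in a stored procedure, rewritten as a plain LINQ query. OrderItem and the property names are illustrative assumptions:

```csharp
using System.Linq;

// Hypothetical OrderItem entity; in a real codebase this would come
// from the Entity Framework model.
public class OrderItem
{
    public int OrderId { get; set; }
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }
}

public static class OrderCalculations
{
    // An order-total calculation that might once have been a stored
    // procedure, now expressed as testable C# code over EF entities.
    public static decimal GetOrderTotal(IQueryable<OrderItem> items, int orderId)
    {
        return items
            .Where(i => i.OrderId == orderId)
            .Sum(i => i.UnitPrice * i.Quantity);
    }
}
```

Because the logic now lives in code, it can be unit tested against an in-memory collection without touching the database at all.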
- Database sharding and partitioning: Between database sharding and partitioning, we choose sharding. Here, we break the database into smaller databases, each deployed on a separate server.
In general, database sharding is defined as a shared-nothing partitioning scheme for large databases. This way, we can achieve a new level of performance and scalability. The term comes from shard and spreading: dividing a database into chunks (shards) and spreading them across different servers.
Sharding can come in different forms. One would be splitting customers and orders into different databases, but one could also split customers into multiple databases for optimization. For instance, customers A-G, customers H-P, and customers Q-Z (based on surname).
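The surname-based split above implies a small routing function that picks the right shard for each customer. The sketch below assumes this split; the connection strings are placeholders:

```csharp
public static class ShardRouter
{
    // Routes a customer to a shard by the first letter of their surname,
    // matching the A-G / H-P / Q-Z split described above.
    public static string GetShardConnectionString(string surname)
    {
        char first = char.ToUpperInvariant(surname[0]);

        if (first >= 'A' && first <= 'G')
            return "Server=shard1;Database=Customers_A_G;...";
        if (first >= 'H' && first <= 'P')
            return "Server=shard2;Database=Customers_H_P;...";
        return "Server=shard3;Database=Customers_Q_Z;...";
    }
}
```

In practice, the shard boundaries would be chosen from the actual distribution of customer data so that each shard carries a similar load.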
By using schema correction, we have logically divided our database based on modules/submodules. Based on that logical division, we now take our architecture to the next level by doing a physical division. As shown in the next diagram, our application now has multiple smaller databases, and each service has its own database. See how we start to move towards microservices architecture?
- DevOps culture: In the previous sections, we discussed the challenges and problems faced by the team. To address them, we propose adopting a DevOps culture: close collaboration between the development team and the operations team should be emphasized. We should also set up a system where the development, QA, and infrastructure teams work in collaboration.
- Infrastructure automation: Infrastructure setup can be a very time-consuming job, and developers sit idle while the infrastructure is being readied for them, taking time before they can contribute. The process of infrastructure setup should not stop a developer from becoming productive, as that reduces overall productivity; it should be automated. With the use of Chef or PowerShell, we can easily create our virtual machines and quickly ramp up the developer count as and when required. This way, our developers can be ready to start work on day one of joining the team.
Chef is a DevOps tool that provides a framework to automate and manage your infrastructure. PowerShell can be used to create our Azure machines and to set up Azure DevOps (formerly Team Foundation Server).
- Adopt test-driven development (TDD): With TDD, a developer writes the test before the actual code. In this way, they test their own code. The test is another piece of code that validates whether the functionality works as intended. If any functionality does not satisfy the test code, the corresponding unit test fails, and the functionality can be fixed easily because the developer knows where the problem is. To achieve this, we can utilize frameworks such as MSTest or NUnit.
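A minimal MSTest sketch of this workflow is shown below. In TDD the test would be written first; the small ShoppingCart class underneath is the code written to make it pass. Both names are illustrative, not from Sportopia’s codebase:

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ShoppingCartTests
{
    [TestMethod]
    public void AddToCart_WithOneItem_ItemCountIsOne()
    {
        var cart = new ShoppingCart();
        cart.AddToCart("SKU-123", quantity: 1);
        Assert.AreEqual(1, cart.ItemCount);
    }
}

// The minimal implementation that satisfies the test above.
public class ShoppingCart
{
    private readonly List<(string ProductId, int Quantity)> _items = new();

    public int ItemCount => _items.Count;

    public void AddToCart(string productId, int quantity)
        => _items.Add((productId, quantity));
}
```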
- Automate testing: The QA team can use scripts to automate their tasks. They can create scripts by utilizing tools like QTP or the Selenium framework.
- Versioning: The current system has no version control, so there is no way to revert changes if something goes wrong. To resolve this, we need to introduce a version control mechanism; in our case, this should be Azure DevOps or Git. With version control, we can revert a change if it is found to break functionality or introduce unexpected behavior in the application. It also gives us the ability to track, at an individual level, the changes made by each team member working on the application.
- Continuous Integration and Continuous Deployment (CI/CD): We have noticed that in the current setup, deployment is a huge challenge. To resolve this, we’ll introduce continuous integration (CI). With CI, the entire process is automated: as soon as code is checked in by any team member, using version control (Azure DevOps or Git, in our case), the CI process kicks into action. It ensures that the new code is built and that the unit tests are run along with the integration tests. Whether the build succeeds or fails, the team is alerted to the outcome and can respond quickly to any issue.
Next, we move on to continuous deployment. Here, we introduce various environments: a development environment, a staging environment, a QA environment, and so on. Now, as soon as code is checked in, CI kicks into action, invokes the unit/integration test suites, builds the system, and pushes it out to the environments we have set up. This way, the turnaround time for the development team to provide a suitable build to QA is reduced to a minimum.
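An Azure Pipelines definition for such a flow might look like the following config sketch. This is an illustrative fragment, not Sportopia’s actual pipeline:

```yaml
# Build on every check-in to main, run the tests, publish the artifact
# that the deployment stages will pick up.
trigger:
  - main

pool:
  vmImage: 'windows-latest'

steps:
  - task: DotNetCoreCLI@2
    displayName: Build
    inputs:
      command: build
  - task: DotNetCoreCLI@2
    displayName: Run unit and integration tests
    inputs:
      command: test
  - task: PublishBuildArtifacts@1
    displayName: Publish artifact for deployment
```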
With the above changes in place, we will be on the right track, using the right tools and the right architecture to open new avenues for Sportopia Inc.
You might be interested in the following courses: