Top Challenges In The Migration To A Microservices Platform
Introduction
We specialize in migrating systems from monolithic to microservices architectures and have experience dockerizing multiple applications. One of our clients, an EdTech company, was facing scalability and downtime challenges as its user base grew. The company is fast-growing, with hundreds of thousands of users on its education platform, and these problems were hurting the user experience. They approached us for help, and we investigated the issues to find the best solution.
To address these problems, we proposed two strategies:
- Transition the application from a monolithic platform to Kubernetes for improved scalability and consistency.
- Migrate the application from Digital Ocean to AWS EKS for enhanced services and partner discounts.
These solutions aim to simplify operations, reduce costs, and improve overall performance.
In this article, we’ll explore the challenges we encountered while transitioning the application from a monolithic platform to a microservices architecture, as well as the difficulties the client faced on the monolithic platform itself: scalability, cost, downtime, and time-consuming processes.
Business Challenges
When the application was on the monolithic platform and hosted on Digital Ocean, it faced several challenges:
- System Outages: The application crashed during busy times, two to three times a day, and the team had to restart the machine daily to recover.
- Time-Consuming Deployment Cycle: The application was deployed manually, so each release took a long time. Testing and deploying new changes could take 2 to 3 days because it was all done by hand, and the manual process introduced errors every time.
- High Infrastructure Cost: On the monolithic platform, everything ran on virtual machines that were sometimes fully utilized and sometimes idle, which drove up costs. Containerizing the workloads lets CPU and memory be allocated as needed, helping to save costs (see the sketch after this list).
- Inconsistent System Behaviour: With no Infrastructure as Code (IaC) such as Terraform scripts, setting up a test environment on the monolithic platform took 2 to 3 days, and no two test environments were quite the same, making it hard to test things properly.
- Overwhelming System Load: The application struggled to keep up with higher volumes of users. As more people used it, the system slowed down or crashed, hurting the user experience and making it hard to accommodate the growing demand.
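For illustration, below is a minimal sketch (with made-up values) of the kind of per-container allocation that containerization enables on Kubernetes:

```yaml
# Illustrative values only: each container requests a guaranteed baseline of
# CPU/memory and is capped at a limit, instead of sizing whole VMs for peak load.
resources:
  requests:
    cpu: "250m"      # a quarter of a CPU core reserved
    memory: "256Mi"
  limits:
    cpu: "500m"      # hard cap: half a core
    memory: "512Mi"
```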
These challenges highlighted the need for careful planning and execution to ensure a successful transition while minimizing downtime, managing costs, and maintaining consistency and scalability.
Problems We Faced During the Migration
At the start, our project encountered hurdles in these areas:
- Lack of requirement documents: The foundation of any successful project is clear, comprehensive requirement documentation, and this was an area where the project faced initial setbacks. When we started, there were no documents at all, and without written requirements, navigating the development path was like wandering a maze without a map.
- Difficulties pulling container images into Kubernetes: The GitLab Container Registry was a crucial part of our deployment pipeline during the transition from Digital Ocean to AWS, yet it brought unexpected challenges. Authentication problems and image storage constraints emerged, prompting a deep dive into Docker and GitLab configurations. We tackled each problem methodically, with patience and perseverance, to keep the containerization process on track.
- Dealing with GitLab Runner limits: After setting up and testing the CI/CD pipeline, we discovered that GitLab's shared runners are limited to 400 CI/CD minutes, which required us to find a solution.
- Log management: As the project moved forward, we faced a new challenge in handling container logs efficiently because they weren't centralized. We first tried AWS services, but they were too expensive, so we decided to use the Fluentd agent to ship logs directly from the containers to Elasticsearch.
- Migrating the database: When we began the database migration, we initially tried the standard AWS DMS tool. However, it didn't create secondary indexes and functions properly, causing significant delays and wasted time.
- Migration to S3 buckets: We used the S3 sync tool for this task. While it synced the data successfully, it didn't transfer the ACLs/permissions.
You can find the solutions in our GitHub repository.
Our Solutions
1. Creating a knowledge base for requirements
We overcame this challenge with our knowledge management system, ClickUp. We built a knowledge base to capture requirements and communicated through tickets, which let us pin down requirements effectively and plan accordingly.
2. Configuring GitLab credentials for Kubernetes
Pulling images from the GitLab Container Registry into the Kubernetes cluster wasn't straightforward. The workaround involved a meticulous three-step process:
- Generate Base64-encoded credentials
- Update the Kubernetes Secret: use the generated Base64-encoded credentials to update the Secret.
- Reference the registry credentials in the Deployment
This ensured secure and authenticated access to the GitLab Container Registry during the transition from Digital Ocean to AWS, effectively resolving the challenges encountered with image pulling.
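For reference, here is a minimal sketch of these three steps; the user name, token, secret name, and registry path are placeholders, not the client's actual values:

```sh
# 1. Generate Base64-encoded registry credentials.
AUTH=$(echo -n "<gitlab-user>:<gitlab-deploy-token>" | base64)
cat > dockerconfig.json <<EOF
{"auths": {"registry.gitlab.com": {"auth": "${AUTH}"}}}
EOF

# 2. Create (or update) the Kubernetes Secret from those credentials.
kubectl create secret generic gitlab-registry-creds \
  --from-file=.dockerconfigjson=dockerconfig.json \
  --type=kubernetes.io/dockerconfigjson

# 3. Reference the Secret in the Deployment so the kubelet can authenticate:
#      spec.template.spec.imagePullSecrets:
#        - name: gitlab-registry-creds
```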
3. Resolving GitLab Runner limits
To overcome this challenge, we decided to set up our own GitLab Runner environment on AWS. This not only removed the limitations but also gave us a scalable and dedicated infrastructure for our project’s growth.
Here’s how we did it:
- Install and Configure GitLab Runner on AWS
- Register the Runner with the GitLab Instance
- Scale the Runner as Needed
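A rough sketch of that setup, assuming an Ubuntu EC2 instance and the Docker executor (the token and names are placeholders from your own GitLab project settings):

```sh
# 1. Install GitLab Runner on the EC2 instance.
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install -y gitlab-runner

# 2. Register the runner with the GitLab instance.
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "<project-registration-token>" \
  --executor "docker" \
  --docker-image "docker:stable" \
  --description "aws-ec2-runner"

# 3. Scale by repeating the setup on more instances, or by raising the
#    `concurrent` value in /etc/gitlab-runner/config.toml.
```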
You can find more details in this video.
4. Setting up log management
To tackle this, we created a fluent-configmap.yml file. The configuration included source, filter (with kubernetes_metadata), and match sections, ensuring smooth log flow. Below is an outline of the configuration:
- Source: Specifies the path to container logs and sets parameters for log parsing.
- Filter: Implements Kubernetes metadata enrichment for logs.
- Match: Directs logs to Elasticsearch for storage and indexing, with settings for host, port, and buffer management.
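Here is a trimmed sketch of what such a fluent-configmap.yml can look like; the names, namespace, and Elasticsearch host are illustrative, and the image running Fluentd needs the kubernetes_metadata and elasticsearch plugins installed:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: logging
data:
  fluent.conf: |
    # Source: tail container logs and parse them as JSON.
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      <parse>
        @type json
      </parse>
    </source>

    # Filter: enrich each record with Kubernetes metadata.
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>

    # Match: ship logs to Elasticsearch with file-backed buffering.
    <match kubernetes.**>
      @type elasticsearch
      host elasticsearch.logging.svc.cluster.local
      port 9200
      logstash_format true
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.buffer
        flush_interval 5s
      </buffer>
    </match>
```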
With this in place, logs flowed smoothly and were easy to access for analysis and troubleshooting.
After implementing and testing it, we hit a timezone issue: the logs were stamped in UTC. To fix this, we configured the IST timezone in the Dockerfile, which resolved the issue.
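A minimal Dockerfile fragment for that fix, assuming a Debian/Ubuntu-based image with tzdata available:

```dockerfile
# Set the container timezone to IST so application logs are stamped in local time.
ENV TZ=Asia/Kolkata
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
```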
5. Migrating the database from monolithic to microservices
When we started moving the database, we first tried the standard AWS DMS tool, but it didn't create secondary indexes and functions as it should have, which caused big delays and wasted a lot of time.
Moving the database was tough. We had to plan it carefully to keep our data safe and avoid excessive downtime: we worked closely with our database team on a detailed plan, exported data from the old database, moved it to the new one, and verified that everything was still intact afterward. We used AWS Database Migration Service for the bulk copy and recreated the missing indexes and functions separately (a sketch follows), and in the end we moved the database without causing too much disruption.
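As a sketch of how the missing objects can be replayed, assuming a PostgreSQL source (the article doesn't name the engine): DMS copies the table data but skips secondary indexes, so those definitions have to be exported and applied separately after the load.

```sh
# 1. Export index/constraint/trigger definitions, which DMS does not migrate.
#    (Functions live in the pre-data section and need a similar, filtered export.)
pg_dump --schema-only --section=post-data \
  --host=<old-db-host> --username=<db-user> <db-name> > post_data.sql

# 2. Replay them on the new database once DMS has finished loading the data.
psql --host=<new-db-host> --username=<db-user> <db-name> -f post_data.sql
```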
6. Migrating to S3 buckets from Digital Ocean to AWS
Moving the S3 buckets came with its own challenges, especially keeping data consistent and secure throughout the process. We planned the migration carefully, considering how to transfer the data, who had access, and how to keep metadata intact. Using AWS DataSync and our own scripts, we moved the data smoothly from Digital Ocean to AWS S3 buckets, making sure it stayed safe and unchanged. We also wrote a script to sync permissions from Digital Ocean to the S3 bucket, since the sync itself doesn't carry ACLs across (see the sketch below).
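A simplified sketch of the transfer and the permission sync; bucket names, regions, and the Spaces endpoint are placeholders, and the real permission script would iterate over every object:

```sh
# 1. Pull objects out of the DigitalOcean Space (S3-compatible API).
aws s3 sync "s3://<space-name>" ./staging \
  --endpoint-url "https://<region>.digitaloceanspaces.com"

# 2. Push them into the AWS S3 bucket.
aws s3 sync ./staging "s3://<bucket-name>"

# 3. Re-apply per-object permissions, since sync does not carry ACLs across.
#    For example, for an object that was public on the old platform:
aws s3api put-object-acl --bucket "<bucket-name>" \
  --key "<object-key>" --acl public-read
```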
Benefits and Outcomes
As a result, we cut development time by 30%. The system now supports 4× as many users and scales automatically as needed, and it is roughly 30% more cost-effective, with consistent environments across stages. With continuous integration and continuous deployment (CI/CD), the latest changes are deployed automatically. Overall, the platform is now stronger, more adaptable, and better prepared for future challenges.
Conclusion
In the journey from a monolithic platform to a microservices architecture, we encountered numerous challenges that required innovative solutions and meticulous planning. Our client, an EdTech company, faced scalability issues, downtime, and inefficient processes due to their growing user base. Through analysis and strategic planning, we proposed solutions aimed at enhancing scalability, consistency, and cost-effectiveness.
Transitioning the application to Kubernetes and migrating from Digital Ocean to AWS EKS were pivotal steps in addressing these challenges. These solutions not only simplified operations but also optimized costs and improved overall performance.
Thank you for reading! See you in the next blog.
I hope this article proves beneficial to you. In case of any doubts or suggestions, feel free to mention them in the comment section below or directly contact us.
The end.
“If you’re facing migration issues and need our help, don’t hesitate to contact us for a free consultation.”