How to Cut Cloud Storage Costs for AI Chatbots by 80%

Introduction

We specialize in AI chatbots and have worked on many such applications. In this post, join us as we walk through the steps we took to cut cloud storage costs for one of our clients, an AI chatbot company, by archiving data out of their database. We stood by them throughout the journey and saw their challenges first-hand.

The company is one of the fastest-growing in its space, with hundreds of thousands of users, a workforce of 50 to 100 employees, and rapidly expanding data. Their database had grown from gigabytes into terabytes, making it increasingly hard to manage. They faced a dilemma: keep expanding database capacity, or take on the time-consuming task of deleting outdated or unused data. In light of these challenges, they reached out to us for assistance.

We offered a solution: archiving their MySQL database, keeping the most frequently used and recent data live while exporting the rest to compressed files from which it can be restored as needed. This approach simplifies database management, reduces costs, and improves overall performance.

Below, we explore a set of strategies and best practices that can help AI chatbot companies reduce their database expenses.

Challenges Faced by the Company

As the application expanded and data rapidly grew, storing large volumes of data in the database posed several challenges for the company:

  1. Increased Cloud Storage Costs: As the application’s data grew daily, the team had to expand the database by terabytes at regular intervals, purchasing additional cloud storage each time. Costs doubled within a few weeks because of this recurring need, and storage usage itself grew by 20% to 30% at a time.
  2. Slower Query Performance: With large volumes of data to manage, the application struggled to access information, and performance gradually declined. Queries that initially took seconds began taking minutes, eventually stretching to 5–10 minutes.
  3. Backup and Restore Challenges: The sheer size of the data made both backup and restore time-consuming, increasing the risk of data loss. A single backup was taking more than an hour.
  4. Data Management Complexity: As the application and its data expanded, managing the company’s data became increasingly difficult, particularly distinguishing the more important data from the less important.
  5. Data Consistency: Maintaining data consistency and integrity in a large-scale database proved challenging, particularly during updates and deletions.

These challenges underscored the importance of meticulous planning and execution to achieve a smooth transition while controlling costs and upholding consistency and scalability.

The Solution We Offered for Cloud Storage

Given the challenges above, the company sought our help to begin archiving their MySQL database. We assisted them in implementing the database archiving process and recommended archiving the MySQL database with partitioning. Below, we walk through the archiving process using partitioning, with clear steps and simplified code. Our solution comprises the following steps:

  1. Preparing the tables for partitioning
  2. Partitioning the tables
  3. Exporting old data from the partitioned tables
  4. Restoring data from the archive
  5. Automating the above steps

To implement archiving techniques, you can follow this link.

When we began archiving the MySQL database, we found little comprehensive guidance on the steps involved. The necessary instructions were fragmented across various sources, making it a challenge to connect them into a cohesive process. Despite these obstacles, we eventually devised a complete process covering every step, from preparing the data for archiving to restoring the database, including the automation of MySQL archiving.

The majority of the data in the database consists of entries inserted once for logging purposes, never to be used again, but retained for records. With daily insertions, the database size steadily increased over time. Additionally, numerous tables contained only recent data that required regular access. To address this, we opted to partition many of these tables. Partitioning enables the segmentation of table data into separate physical files, allowing for independent management. As an added benefit, the remaining tables in the live database are considerably smaller and therefore quicker to operate on and select from.
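
To make the first two steps concrete, here is a minimal sketch for a hypothetical chat_logs table partitioned by month; the database, table, and column names are illustrative, not the client’s. One MySQL constraint drives the preparation step: every unique key, including the primary key, must contain the column used in the partitioning expression.

```bash
# Minimal sketch, assuming a hypothetical `chat_logs` table with an `id`
# primary key and a `created_at` DATETIME column. Credentials are assumed
# to come from ~/.my.cnf.
mysql chatbot_db <<'SQL'
-- Step 1: prepare the table. MySQL requires every unique key to include
-- the partitioning column, so extend the primary key with `created_at`.
-- (Note: partitioned InnoDB tables also cannot carry foreign keys.)
ALTER TABLE chat_logs
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (id, created_at);

-- Step 2: partition by month. Each partition lives in its own tablespace,
-- with a catch-all partition for rows beyond the last named range.
ALTER TABLE chat_logs
  PARTITION BY RANGE (TO_DAYS(created_at)) (
    PARTITION p2023_12 VALUES LESS THAN (TO_DAYS('2024-01-01')),
    PARTITION p2024_01 VALUES LESS THAN (TO_DAYS('2024-02-01')),
    PARTITION p_future VALUES LESS THAN (MAXVALUE)
  );
SQL
```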

Archiving

We employed the following key steps in table data archiving:

  1. Initially, we partitioned the tables manually.
  2. Then we implemented shell scripts to fully automate the archival process going forward.

The data is archived periodically. Although MySQL offers various options for partitioning tables, dividing them by time intervals proved the most practical choice for us.

Our archival process could partition a specified table by years, months, or weeks, and the approach can readily be extended to other time intervals, such as days or even hours, if data growth demands it.

For each table marked for archiving, we can configure the archival period and how much historical data to retain in the live database.
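
Rolling the scheme forward is then a matter of periodically splitting the catch-all partition to open the next period’s slot. A minimal sketch, continuing the hypothetical chat_logs table from above:

```bash
# Open next month's partition by splitting the catch-all partition;
# any rows already sitting in p_future are redistributed automatically.
mysql chatbot_db <<'SQL'
ALTER TABLE chat_logs REORGANIZE PARTITION p_future INTO (
  PARTITION p2024_02 VALUES LESS THAN (TO_DAYS('2024-03-01')),
  PARTITION p_future VALUES LESS THAN (MAXVALUE)
);
SQL
```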

Older data was exported from the partitioned tables and compressed into gzipped files; we could restore the data from those files later if required.
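
The export and restore steps might look like the following sketch, again with the hypothetical names from above. Dropping a partition is a fast, file-level operation, which is what lets the space be reclaimed without a slow row-by-row DELETE:

```bash
# Export one expiring month of rows as compressed INSERT statements.
mysqldump --no-create-info \
  --where="created_at >= '2023-12-01' AND created_at < '2024-01-01'" \
  chatbot_db chat_logs | gzip > /var/backups/chat_logs_p2023_12.sql.gz

# Reclaim the space instantly by dropping the now-archived partition.
mysql chatbot_db -e "ALTER TABLE chat_logs DROP PARTITION p2023_12;"

# Restore later if required; the dump replays as plain INSERTs.
gunzip < /var/backups/chat_logs_p2023_12.sql.gz | mysql chatbot_db
```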

We designed the entire process to run seamlessly, with no database downtime and no significant performance impact.
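
To give a flavour of the automation in step 5, here is a simplified version of such a shell script. It is a sketch under our hypothetical naming scheme, not the client’s actual script, and it assumes GNU date for the relative-date arithmetic:

```bash
#!/usr/bin/env bash
# archive_mysql.sh -- illustrative monthly archival job for `chat_logs`,
# assuming monthly partitions named pYYYY_MM as in the sketches above.
set -euo pipefail

DB="chatbot_db"
TABLE="chat_logs"
RETENTION_MONTHS=3   # full months of history kept in the live database

# The month that falls out of the retention window on this run.
ARCHIVE_MONTH=$(date -d "-$((RETENTION_MONTHS + 1)) months" +%Y-%m)
FROM="${ARCHIVE_MONTH}-01"
TO=$(date -d "$FROM +1 month" +%Y-%m-%d)
PART="p${ARCHIVE_MONTH//-/_}"
OUT="/var/backups/${TABLE}_${PART}.sql.gz"

# 1. Export the expiring month's rows to a gzipped SQL file.
mysqldump --no-create-info \
  --where="created_at >= '$FROM' AND created_at < '$TO'" \
  "$DB" "$TABLE" | gzip > "$OUT"

# 2. Drop the now-archived partition to reclaim the space.
mysql "$DB" -e "ALTER TABLE $TABLE DROP PARTITION $PART;"
```

A crontab entry such as `0 3 1 * * /opt/scripts/archive_mysql.sh` would then run the job during a quiet hour on the first of each month.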

Benefits They Experienced Following Our Solution

Here are the benefits they experienced after integrating efficient archiving:

Reduced Cloud Storage Costs

Implementing archiving allowed the application to move older data to more cost-effective storage. This shrank the primary database’s storage footprint and cut storage expenses by 80%.

Improved Query Performance

Archiving divided the data into smaller tables holding less data each, which streamlined querying and indexing and improved query performance. With older and less critical data moved to archival storage, the database could respond to queries more efficiently under the reduced load; data access became about 50% faster.

Faster Backup and Restore

After implementing data archiving, the company observed that backup and restore processes accelerated significantly. Smaller databases back up and restore faster and with fewer resources, so shrinking the database through archiving made both procedures far more efficient. Backup time dropped by almost 95%.

Simplified Data Management

Archiving helps the application streamline data management by separating active data from historical data, simplifying maintenance tasks and keeping the database optimised for ongoing operations.

Improved Data Consistency

The company found that archiving also benefited CRUD (Create, Read, Update, Delete) operations: with less data to work through, operations ran more smoothly, which improved data consistency.

Conclusion

Despite encountering obstacles, this project proved to be a crucible for professional development and problem-solving. By employing strategic communication, innovative problem-solving techniques, and a dedication to ongoing improvement, every obstacle was turned into a chance for learning and optimization.

By applying the above solutions, we cut database expenses by 80%, reduced backup time by 95%, and doubled efficiency. The application can now grow alongside its data without straining storage capacity: data is archived periodically, keeping storage usage bounded and eliminating the need for additional storage purchases. In conclusion, the project has emerged more resilient, more flexible, and better equipped to tackle future challenges.

“If you’re facing the same issues and need our help, don’t hesitate to contact us for a free consultation.”

Disclaimer

The views are those of the author and are not necessarily endorsed by Madgical Techdom.