At Kudi, we believe that people of all classes should have access to financial services and be able to perform transactions seamlessly. We make this possible through our agency banking network.
Kudi agents maintain a wallet, which they access through a mobile application that is currently native to the Android platform.
Our Internal Tools team brings together people who work with a variety of tools and programming languages. One of these languages is JavaScript, and our JavaScript code is split into two parts:
Node.js for the backend
React and Redux for the frontend
We also maintain a number of databases. These include:
MongoDB for transactions
Google Cloud SQL for auxiliary services
Google BigQuery for analytics
Transactional databases are really good at processing realtime transactions, but they slow down when hit with analytical workloads. Our transactional databases currently process hundreds of thousands of transactions every day, and our analytics systems are updated periodically from them. A simple illustration is shown below.
We have a non-core service that needs access to wallet balances. Where should it fetch this information from? Should it query the transactional DB directly? What impact would that have on availability and user experience?
An alternative would be to query the analytics DB for this information. That DB can certainly handle the workload. However, it is only updated periodically, so the balances it returns could already be out of date. Imagine working with stale data!
Since wallet balances get updated every time an agent performs a transaction, it makes sense to implement an event listener for these updates. The only problem is this: how exactly do you make such a system resilient? Listeners are bound to break down at intervals.
One way to handle these event updates while staying resilient is to push them onto a queue. Queues are not a new programming construct; they have existed since the days of mainframes. You write to a persistent queue, and the party responsible for processing picks messages up whenever it is ready. It's how email works, for example: I can send you an email, and you can check for it whenever you are ready.
One modern implementation of a message queue is Google Cloud Pub/Sub, Google's fully managed messaging service (comparable to a hosted Apache Kafka). It implements scalable message delivery with both push and pull modes. In pull mode, Pub/Sub acts like a post office, holding onto event notifications for up to a week. This makes it possible to recover from any downtime without losing notifications.
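As a rough illustration, the sketch below uses the @google-cloud/pubsub Node.js client to create a pull subscription on a hypothetical wallet-updates topic; the commented-out call shows how the same API could configure a push subscription instead. The topic, subscription names, and endpoint are made up for illustration.

```javascript
// Minimal sketch: creating a Pub/Sub subscription with the Node.js client.
// Topic and subscription names here are hypothetical.
const { PubSub } = require('@google-cloud/pubsub');

const pubsub = new PubSub();

async function createWalletSubscription() {
  // With no pushConfig, the subscription defaults to pull mode:
  // messages are retained (up to seven days) until a consumer pulls and acks them.
  const [subscription] = await pubsub
    .topic('wallet-updates')
    .createSubscription('wallet-balance-sync');

  // A push subscription would instead deliver messages to an HTTPS endpoint:
  // await pubsub.topic('wallet-updates').createSubscription('wallet-balance-push', {
  //   pushConfig: { pushEndpoint: 'https://example.com/pubsub/handler' },
  // });

  console.log(`Created subscription ${subscription.name}`);
}

createWalletSubscription().catch(console.error);
```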
Going back to our diagram above, we can see that the transactional DB is capable of publishing new events to Cloud Pub/Sub. We decided to leverage this to build a notification system that delivers up-to-date information wherever it is needed.
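For illustration, here is roughly what publishing a wallet-update event to Pub/Sub looks like from a Node.js service. In our case the events originate from the transactional DB, so the actual publishing path differs; the topic name and event fields below are placeholders, not our production schema.

```javascript
// Rough sketch of publishing a wallet-update event to Pub/Sub from Node.js.
// The topic name and event shape are illustrative only.
const { PubSub } = require('@google-cloud/pubsub');

const pubsub = new PubSub();
const topic = pubsub.topic('wallet-updates');

async function publishWalletUpdate(agentId, newBalance) {
  const event = {
    agentId,
    balance: newBalance,
    updatedAt: new Date().toISOString(),
  };

  // Messages are published as raw bytes; consumers decode and parse them.
  const messageId = await topic.publishMessage({
    data: Buffer.from(JSON.stringify(event)),
  });

  console.log(`Published wallet update ${messageId} for agent ${agentId}`);
}

publishWalletUpdate('agent-123', 50000).catch(console.error);
```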
We made use of the pull mechanism from a Kubernetes service that fetches the notifications sent to Pub/Sub and then uses the information in each notification object to update the wallet balances it keeps for its own purposes. This implementation is illustrated below.
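A minimal sketch of such a consumer, again using the Node.js client, might look like the following. The subscription name and the updateWalletBalance function are placeholders for whatever the consuming service actually does with the data.

```javascript
// Minimal sketch of a pull-based consumer: the service listens on a
// subscription, updates its local view of wallet balances, and acks each message.
// The subscription name and updateWalletBalance are placeholders.
const { PubSub } = require('@google-cloud/pubsub');

const pubsub = new PubSub();
const subscription = pubsub.subscription('wallet-balance-sync');

async function updateWalletBalance(event) {
  // Placeholder: persist the new balance wherever this service keeps its copy.
  console.log(`Agent ${event.agentId} balance is now ${event.balance}`);
}

subscription.on('message', async (message) => {
  try {
    const event = JSON.parse(message.data.toString());
    await updateWalletBalance(event);
    message.ack(); // Acknowledge so Pub/Sub stops redelivering this message.
  } catch (err) {
    console.error('Failed to process message', err);
    message.nack(); // Let Pub/Sub redeliver it later.
  }
});

subscription.on('error', (err) => console.error('Subscription error', err));
```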
Could we have solved this problem without Google Pub/Sub? Absolutely. However, the extra load on the DB would have forced us to add more nodes to handle the additional requests and keep our users from having a poor experience.
If you find our use of technology to be of interest and would like to join our team, please take a look at our careers page here, or drop us a note at engineering@kudi.com. Also, please subscribe to receive updates from us on new posts and job openings.