Alongside moving our infrastructure to Kubernetes to handle our ever-expanding APIs and user base, we’ve also been hard at work solidifying and expanding our caching layer.
Read more about our improved architecture in the blog post written by our Senior Product Manager, Jennia Williamson.
In 2018, we saw a huge leap in the number of stores using Moltin. More stores lead to more customers, which is great from a company perspective but brings new problems from a technical one. We're now serving more requests to more users in more countries than ever before!
With the introduction of Kubernetes, we can now handle the increased traffic, but the wide geographic spread of these requests means that no matter how quickly we serve a request from our data centres, shipping the response halfway around the world can add significant latency.
Previously, cached data lived primarily within our systems, so although we didn’t waste time re-building responses, we were still at the mercy of network latency to deliver that cached data.
To combat this, our engineers have completed a large-scale project to serve cached data to stores from edge locations around the world. James Owers, our Lead Engineer, says that most users worldwide will currently see at least a 10x speed increase, while US-based users will see even better results. James will be writing a more in-depth overview of the new cache and its performance, so stay tuned for his blog post if you'd like more details.
We’ll be rolling out the new cache by endpoint, as we test with a selected group of existing customers. The public release begins with the
You don’t need to make any changes to your application to enjoy the benefits of the reduced response times. We’ll keep you up to date as the project continues to roll out.
Think big and reinvent commerce with Moltin!