Discover The Final Way to Cut Cloud Costs

Remember when they told you that you needed this shiny new thing called 'cloud'? That it was silly to pay for more computing resources than you needed most of the time, just because it was required to cope with traffic and demand peaks (like Christmas) a couple of times a year?

In fairness, this was true: cloud is perfect for scaling up and scaling down, and it was (and is) a better way to even out your computing resource usage.

But they didn't tell you that this was only one sort of scaling: scaling the application up and down to serve varying demand. If you want more people connecting to your service, doing more transactions and creating more data, you need a bigger database at the back end to support it.

'Scale' doesn't necessarily mean petabytes of data; very few CIOs have applications requiring that much data at run time. What we're talking about is the number of concurrent active users, rather than storage.

If hundreds of thousands of people connect at the same time, you need a database capable of managing that; it's a processing issue, not a storage one.

Ten years ago (and often still now, to be honest) we worked around this by buying lots of little application servers to run those connections. But to make it work, they all connected back to one big Oracle- or DB2-style box somewhere in a data centre. Back then, that Oracle database was bigger than it needed to be because of Black Friday, or Singles' Day in China. And it still is.

In other words, the problem we thought we'd cracked didn't go away. Monolithic databases are still being bodged to make cloud business work.

Why do I say 'bodged'? Because these engines were simply not built for the cloud and the unique way it works with data. You've kept paying for proprietary software running on specialist hardware.

Scale your database capability up and down as you scale your applications

This set-up is limiting the positive impact of cloud on your overall budget. It's a cost you've kept having to carry, and it also means you're not exploiting all the capability and innovation the cloud offers.

Cloud has been fantastic for one part of the scaling challenge, application scaling, but that has really only been possible by backing your applications with as big a single database as you can power up.

The uncomfortable truth about using cloud to scale a business process is that you've always had to use traditional, monolithic SQL databases to make it fulfil that promise of smoothing out your IT peaks and troughs. This was pricey enough before; now it's getting out of hand.

How can you solve this challenge and make business savings? One option is to adopt a cloud native database which allows you to scale your database capability up and down in the same way you scale your applications. Instead of a single, giant machine and expensive proprietary licences, you could have three smaller ones, perhaps in different geographies. Then, if one suffers an outage, you'd fail over to the other two, keeping up and running without any business (or customer) impact. It's also cheaper: running a big transactional app in the cloud might require renting a 32-, 48-, or even 64-core server, but two 8-core virtual machines are typically cheaper than one 16-core machine, and so on.
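To make that cost intuition concrete, here is a toy model in Python. Every price and demand figure in it is hypothetical; the point it illustrates is that a monolithic database must be provisioned for peak demand around the clock, while a horizontally scalable cluster can add and remove small nodes as load moves.

```python
# A toy cost model, not real cloud prices: all rates here are
# hypothetical. A monolithic database is sized for peak load all
# day; an elastic cluster of small nodes tracks actual demand.

PRICE_PER_CORE_HOUR = 0.05  # assumed flat rate per core-hour


def monolith_cost(peak_cores: int, hours: int) -> float:
    """One big box, provisioned for peak demand around the clock."""
    return peak_cores * PRICE_PER_CORE_HOUR * hours


def elastic_cost(cores_by_hour: list[int]) -> float:
    """A fleet of small nodes resized to match each hour's demand."""
    return sum(cores * PRICE_PER_CORE_HOUR for cores in cores_by_hour)


# Example day: demand needs 64 cores for 2 peak hours, 16 otherwise.
day = [64] * 2 + [16] * 22
print(monolith_cost(peak_cores=64, hours=24))  # 76.8 (hypothetical $)
print(elastic_cost(day))                       # 24.0 (hypothetical $)
```

Even with identical per-core pricing, paying only for the cores you use each hour cuts the bill sharply; in practice, the very largest instances often carry a premium on top.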

In fact, even if you don’t need to scale up immediately, it quickly becomes cheaper to have a number of small machines cooperating to carry your workload, rather than one big machine.

Why isn't this the norm, though?

Because we left the cloud revolution unfinished. We have done so much: moving capital expenditure into operating expense, lifting and shifting workloads (then finally redesigning and optimising them). Today, the application layer of most organisations is much healthier than it was, as we rapidly moved from virtual machines to Docker containers, and now, in many instances, Kubernetes pods. All this has made building applications and distributing them resiliently around the world more straightforward and cost effective.

However, database evolution has lagged behind, and it is the final step we need to take to complete the cloud revolution. It's a step we might call enabling horizontal, rather than vertical, scaling.

Transactional consistency, and the ability to horizontally 'scale out'

A good example of this is a major US retailer we worked with. Though a very successful bricks-and-mortar brand, the company was moving aggressively into e-retail even before Covid accelerated the importance of a strong online store. It was doing all the right things with microservices and modern application development techniques and technologies, but the database side posed a problem.

Even the biggest database they could build, on the biggest virtual machine they could rent in the cloud, could not handle their load.

To work around this, its engineers had to resort to messy fixes like sharding, which quickly got complicated and raised the spectre of transactional inconsistency. This meant that one shopper might be offered stock that the system had just placed in another customer's shopping basket elsewhere.
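To see why hand-rolled sharding invites this, here is a deliberately simplified Python sketch (a toy, not the retailer's actual system): stock and baskets live on different 'shards', and nothing coordinates the read on one with the writes on the other.

```python
# Toy illustration of a cross-shard race, not any real system.
# Stock lives on "shard A", baskets on "shard B"; with no
# distributed transaction spanning the two, concurrent checkouts
# can both claim the last unit.
import threading
import time

stock = {"sku-123": 1}              # pretend: shard A
baskets = {"alice": [], "bob": []}  # pretend: shard B


def checkout(customer: str, sku: str) -> None:
    available = stock[sku]             # read from shard A
    time.sleep(0.01)                   # widen the race window for the demo
    if available > 0:
        baskets[customer].append(sku)  # write to shard B
        stock[sku] = available - 1     # write back to shard A


threads = [threading.Thread(target=checkout, args=(name, "sku-123"))
           for name in ("alice", "bob")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(baskets)  # the single unit ends up in both baskets
print(stock)    # {'sku-123': 0}, despite two 'successful' checkouts
```

A transactionally consistent database makes the read and both writes one atomic unit, so one checkout succeeds and the other sees the stock already gone.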

It got very messy. Their solution was to adopt a cloud native database that gave them both transactional consistency and the ability to horizontally scale out. And it worked! The company now benefits from sub-10-millisecond performance and greater operational simplicity and efficiency, and, crucially, can now easily meet massive e-commerce peaks.

To complete the cloud revolution and achieve the horizontal scaling you want, it's worth considering distributing your business data and sharing the load among lots of machines, rather than pouring all your money and hopes into one database Leviathan. The answer is a scalable, resilient, open source, PostgreSQL-compatible database (like YugabyteDB).
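Because YugabyteDB's SQL layer speaks the PostgreSQL wire protocol, existing PostgreSQL drivers and tools work against it unchanged. Below is a minimal sketch using Python's psycopg2 driver; the host name and credentials are placeholders (port 5433 and the 'yugabyte' database and user are the documented YSQL defaults, but check your own deployment).

```python
# Minimal sketch: talking to a distributed, PostgreSQL-compatible
# database with an ordinary PostgreSQL driver. Connection details
# below are placeholders for illustration.
import psycopg2

conn = psycopg2.connect(
    host="yb-node-1.example.com",  # any node in the cluster will do
    port=5433,                     # YugabyteDB's default YSQL port
    dbname="yugabyte",
    user="yugabyte",
    password="yugabyte",
)
with conn, conn.cursor() as cur:
    # Ordinary SQL; the database distributes and replicates the
    # rows across the cluster's nodes behind the scenes.
    cur.execute(
        "CREATE TABLE IF NOT EXISTS baskets (customer text, sku text)"
    )
    cur.execute(
        "INSERT INTO baskets VALUES (%s, %s)", ("alice", "sku-123")
    )
conn.close()
```

Nothing about the application code has to change; what changes is that the rows behind that table live on several small, replaceable nodes rather than one large box.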

This kind of technology is available now and easy to install. It allows you to finally achieve the time and cost savings promised by the cloud all those years ago.


Martin Gaffney, Vice President, EMEA, Yugabyte, the leading PostgreSQL-compatible distributed database company, was appointed in early 2021 to lead and grow the EMEA operation. Previously, Gaffney had been involved in building successful EMEA operations at high-growth companies across the technology sector. He founded the EMEA operation of ThoughtSpot Inc in 2015 and helped grow the company to a circa $2 billion (USD) valuation during his tenure.

More recently he was Regional Sales Director, EMEA, at H2O.ai, and earlier in his career was an executive at the EMEA operations of Sequent Computer Systems, Tivoli Systems and Netezza Corporation, the last three of which were acquired by IBM. Martin was also co-founder of Volantis Systems, now part of Pegasystems. During his career, he was runner-up in the Ernst & Young Entrepreneur of the Year Awards.
