High-performance computing (HPC) is moving into the mainstream, migrating from the halls of academia and rapidly establishing itself in business and industry. Key to its ongoing success has been the willingness of businesses to embrace online, on-demand business models.
HPC cloud services have the potential to appeal to any organisation with complex modelling and simulation requirements and fluctuating demand for computing power. The approach typically involves the solutions provider investing in infrastructure and selling access to that computing resource, so that prospective customers need not make an upfront investment in a complex IT hardware implementation.
The benefits may be compelling, but before HPC-focused businesses take the plunge and start implementing HPC cloud services, they first need to look at their own business model. In some cases, using a cloud service will not be the best option. If the data being worked on changes constantly, so that it must repeatedly be moved to a remote location before it can be processed, the cost (in time and resources) is likely to offset any benefit of using a remote service.
Cloud models make more sense where the business has a large, relatively static dataset and wants to run multiple analyses against it. In that case, it will not need to transfer large amounts of data back and forth; it will just be sending input data and results data (which may be quite small).
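A back-of-envelope calculation makes the trade-off concrete. The Python sketch below compares the two scenarios; all figures (dataset size, bandwidth, per-run I/O volume) are illustrative assumptions, not benchmarks.

# Back-of-envelope estimate of data-transfer overhead for a cloud HPC job.
# All figures here are illustrative assumptions, not measured values.

def transfer_hours(gigabytes: float, mbps: float) -> float:
    """Hours needed to move `gigabytes` over a link of `mbps` megabits/s."""
    return (gigabytes * 8 * 1000) / mbps / 3600

# Scenario A: the dataset changes constantly and must be re-uploaded each run.
dataset_gb = 500   # assumed working-set size
link_mbps = 200    # assumed effective upload bandwidth
print(f"Re-uploading dataset: {transfer_hours(dataset_gb, link_mbps):.1f} h per run")

# Scenario B: a static dataset is uploaded once; each run moves only inputs/results.
io_gb = 2          # assumed per-run input + result size
print(f"Per-run I/O only:     {transfer_hours(io_gb, link_mbps):.2f} h per run")

Under these assumptions, re-uploading a changing dataset costs several hours per run, while the per-run input and result traffic against a static dataset costs only seconds.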
Having taken the decision to go down the cloud route, however, HPC business users need to realise that success will be about much more than just putting servers in the cloud. Because the jobs HPC users run tend to be resource intensive, these users will typically be looking for an HPC cloud services model that uses a high-performance interconnect, allowing faster, more efficient processing. Unlike general-purpose commercial clouds, HPC cloud services address the processing requirements of HPC customers by providing an on-demand remote compute facility with a pre-installed, pre-configured environment in which independent software vendor (ISV) applications and open-source codes are available.
Data processing is a key element of the HPC cloud approach. But once users have processed their data remotely, how do they retrieve the results? Typically, they will need a combination of front-end tools, web-browser tools and remote visualisation tools. Ease of access is also key: we are seeing growing use of smartphones and tablets as mobile interfaces to the cloud.
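As a rough illustration of what programmatic retrieval through a web front end can look like, the Python sketch below downloads a results archive over HTTP. The endpoint URL, job identifier and token are invented placeholders, not any specific provider's API.

import requests

# Hypothetical results-retrieval endpoint; the URL, job id and token
# are invented placeholders, not a real provider's API.
BASE_URL = "https://hpc-cloud.example.com/api/v1"
JOB_ID = "job-12345"
TOKEN = "replace-with-token-issued-by-the-front-end"

resp = requests.get(
    f"{BASE_URL}/jobs/{JOB_ID}/results",
    headers={"Authorization": f"Bearer {TOKEN}"},
    stream=True,   # stream to disk: result archives may be large
    timeout=60,
)
resp.raise_for_status()

with open("results.tar.gz", "wb") as out:
    for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
        out.write(chunk)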
Visualisation is another driver. Large engineering companies running crash simulations no longer need to bring the data generated back into their own local systems. Instead they can run visualisations on remote machines – a quicker, easier and more cost-effective approach.
HPC users in the cloud can now also make use of computational steering techniques to monitor and, where necessary, adjust a simulation while it is running, protecting the time and money invested in long-running jobs.
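The Python sketch below illustrates the general idea of computational steering rather than any particular framework: a long-running simulation loop periodically reads a small steering file and applies updated parameters mid-run. The file name and parameter names are illustrative assumptions.

import json
import os
import time

STEER_FILE = "steer.json"  # hypothetical steering file, written by the user
params = {"timestep": 0.01, "paused": False}

def poll_steering() -> None:
    """Merge any user-supplied parameter changes into the running job."""
    if os.path.exists(STEER_FILE):
        with open(STEER_FILE) as f:
            params.update(json.load(f))
        os.remove(STEER_FILE)  # consume the steering request

for step in range(1_000_000):
    if step % 1000 == 0:       # check occasionally; polling is cheap
        poll_steering()
    while params["paused"]:    # the user can pause to inspect intermediate state
        time.sleep(1)
        poll_steering()
    # ... advance the simulation by params["timestep"] ...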
There is massive potential for HPC in the cloud. Available network bandwidth has increased and broadband has become all-pervasive, further driving uptake. These kinds of services will not be for every organisation, but businesses with specific types of data-processing needs are increasingly attracted to them. For many organisations, the time for HPC cloud services is now.
Director, HPC and Big Data Practice, Bull UK