How to Control Cloud Repatriation with App-centric Infrastructure Performance Management

Cloud adoption is on the rise. According to an IDC report, enterprises are driving the increase, with 68 per cent using public or private cloud – a 61 per cent jump since 2015. Why? Because businesses are experiencing the benefits. Most of you are cloud experts, so I don’t need to tell you that among the many reasons organisations give for moving their data to the cloud, flexibility, scalability and perceived lower cost are often top of the list. But it’s not a one-way street. Just as some businesses are dipping their toes into cloud infrastructure, others are pulling back – returning applications from the public cloud to a private cloud or a physical data centre on site. In fact, research by ESG has revealed that 57 per cent of enterprises have moved at least one of their workloads from the cloud to an on-premises infrastructure.

Honing performance

Although security is a concern, performance is also a key driver for this shift. At best, a poorly performing application results in a slow-down; at worst, a complete outage. Both scenarios have serious implications for the business, whether customers are driven elsewhere, staff are unable to do their jobs properly or the whole infrastructure grinds to a halt. As a result, a slow application will be whipped out of the public cloud and repatriated somewhere else. Yet the underlying cause is often hidden, still lurking, ready to cause further performance issues down the line and see more applications moved, perhaps unnecessarily.

Although most teams monitor their applications and infrastructure in some way – Gartner analysts recently reported that most organisations host five or more infrastructure monitoring tools per data centre – many of the tools employed just aren’t up to the job. For example, application performance monitoring tools highlight application speeds in isolation, without looking at the impact on the wider infrastructure. Domain-specific tools, meanwhile, focus on an individual component’s performance but offer no insight into the cause of an issue. It’s like being given a piece of a jigsaw puzzle without understanding how it fits into the full picture.

Getting the full picture

Application-centric infrastructure performance management (IPM) offers the best of both worlds: visibility and analytics across application and infrastructure domains, helping to ensure optimum performance and spot potential issues before they become a problem. It records and correlates thousands of metrics in real time, every second, giving insight into how each component fits with the others, recognising when applications are working harder than usual, showing how an increase in demand can be managed and pinpointing what is causing an application to struggle. Teams don’t need to wait for a crisis to realise that an application isn’t performing optimally; with a holistic view, it’s much less likely that the infrastructure will ever reach that point.
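
To make the correlation idea a little more concrete, here is a minimal sketch in Python, using entirely invented metric names and figures: line up application response times against a handful of infrastructure metrics and rank which one tracks the slowdown most closely. A real IPM platform does this across thousands of metrics, every second, but the principle is the same.

```python
# Minimal sketch: rank infrastructure metrics by how closely they track
# application response time. All metric names and values are hypothetical.
from statistics import correlation  # Python 3.10+

# One sample per second over the same window (illustrative figures)
app_response_ms = [12, 14, 13, 45, 52, 49, 15, 13, 14, 47]

infra_metrics = {
    "storage_latency_ms": [1.1, 1.2, 1.1, 9.8, 11.2, 10.5, 1.3, 1.2, 1.1, 9.9],
    "host_cpu_pct":       [35, 38, 36, 41, 43, 40, 37, 36, 35, 42],
    "network_rtt_ms":     [0.4, 0.5, 0.4, 0.5, 0.4, 0.5, 0.4, 0.4, 0.5, 0.4],
}

# Pearson correlation against the application's response time: the metric
# that moves with the slowdown is the first place to look for root cause.
ranked = sorted(
    ((name, correlation(app_response_ms, series)) for name, series in infra_metrics.items()),
    key=lambda item: abs(item[1]),
    reverse=True,
)

for name, r in ranked:
    print(f"{name}: r = {r:+.2f}")
# With these figures, storage_latency_ms ranks first with r close to +1,
# flagging the storage tier as the place to investigate.
```

The point isn’t the arithmetic – it’s that application and infrastructure metrics are analysed together, so the slow component is identified rather than guessed at.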

As well as offering the full picture of how each component is performing, app-centric IPM means IT teams no longer have to over-provision infrastructure for fear of future performance problems. They know exactly how their infrastructure performs, recognise its limits and understand what’s truly needed to make sure everything runs smoothly.

Understanding application workloads means the IT team can manage their infrastructure more effectively: no more knee-jerk reactions to poor performance, and no more wrongly blaming performance issues on a specific silo team.

It’s easier to plan much more efficiently too: teams can predict with much greater accuracy how their workloads are likely to expand and how demands on the IT infrastructure are expected to increase. So the cloud’s flexibility and scalability can be called on in a much more strategic and managed way.
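
As a rough illustration of that kind of forward planning, the sketch below – again with made-up figures – fits a simple trend to a year of peak utilisation and projects when the workload is likely to outgrow what’s currently provisioned, which is the point at which cloud capacity can be brought in deliberately rather than in a panic.

```python
# Minimal sketch: project when a workload outgrows provisioned capacity.
# The monthly peak-utilisation figures and the 80% planning limit are illustrative.
import numpy as np

months = np.arange(12)                        # last 12 months
peak_util_pct = np.array([41, 43, 44, 47, 48, 51, 53, 54, 57, 59, 61, 63])

# Least-squares linear trend: utilisation = slope * month + intercept
slope, intercept = np.polyfit(months, peak_util_pct, 1)

threshold = 80.0                              # planning limit before scaling out
months_to_threshold = (threshold - intercept) / slope

print(f"Growth: {slope:.1f} percentage points per month")
print(f"Projected to hit {threshold:.0f}% around month {months_to_threshold:.0f} "
      f"({months_to_threshold - months[-1]:.0f} months from now)")
# With these figures the trend crosses 80% roughly eight months out,
# giving plenty of time to plan where the extra capacity should come from.
```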

Performance over availability

I recognise that it’s a change in focus: until now availability has been the watchword. In fact, many service level agreements (SLAs) promise incredibly high levels of uptime. But technological advances mean that availability should be a given – it’s performance that needs to be guaranteed. And by that, I mean both performance of applications and the infrastructure they run on. There’s an appetite for it, and the first public cloud providers to offer performance-based SLAs will be setting the bar for the rest.
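
What a performance-based SLA would actually measure isn’t yet standardised, but one plausible shape, sketched below with illustrative numbers, is a percentile response-time objective evaluated over the reporting window alongside the usual uptime figure.

```python
# Minimal sketch: check a hypothetical performance clause of the form
# "99 per cent of requests complete within 200 ms over the reporting window".
# The simulated latencies and the 200 ms / 99 per cent targets are illustrative.
import random
from statistics import quantiles

random.seed(7)
latencies_ms = [random.gauss(120, 30) for _ in range(10_000)]   # normal traffic
latencies_ms += [random.gauss(400, 50) for _ in range(200)]     # a sustained slowdown

p99 = quantiles(latencies_ms, n=100)[-1]   # 99th-percentile response time
print(f"p99 latency: {p99:.0f} ms -> clause {'met' if p99 <= 200 else 'breached'}")
# With roughly 2% of requests caught in the slowdown, p99 lands well above
# 200 ms, even though the service never actually went down.
```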

Refocusing on performance is not just a game changer for cloud service providers, it’s also transforming infrastructure management. IT teams are identifying business-critical applications and taking time to understand how each element fits into the IT infrastructure. With this knowledge, it’s much easier to make informed decisions about where each application should run – whether it would perform better in a public or private cloud, or whether it makes more sense for it to sit in a physical data centre on site. CIOs are starting to take control: moving applications between different infrastructures only when it’s needed. This insight and planning will mean that applications aren’t dragged from the cloud because their performance is impacted by another component. Instead, CIOs and IT teams will be able to determine how an application will fare in that environment before it’s implemented so that each one will be in the best-suited infrastructure from the get-go. And the flow from the public cloud will be controlled.

 


Len Rosenthal is CMO of Virtual Instruments, bringing 30 years of marketing leadership at both publicly and privately held IT infrastructure companies. He is responsible for corporate, channel and product marketing, including demand generation and overall awareness. Before joining Virtual Instruments, Len ran marketing at Load DynamiX and held executive marketing positions at Astute Networks, Virtual Instruments, Panasas and PathScale (acquired by QLogic). He also held senior marketing management roles at Inktomi, SGI and HP. Len earned an MBA from UC Berkeley’s Haas School of Business, a BSEE from the University of Pennsylvania and a BS in Economics from Penn’s Wharton School.
