Testing 1-2-3:  Three Reasons Why Cloud Testing Matters

 

It has been nearly three years since an Amazon Web Services senior executive said: “Cloud is the new normal”. Since that time, the momentum behind cloud migrations has become unstoppable, as enterprises look to take advantage of the agility, scalability and cost benefits of the cloud.

In its 2017 State of the Hybrid Cloud report, Microsoft found that 63 percent of large and midsized enterprises have already implemented a hybrid cloud environment, consisting of on-premise and public cloud infrastructures.  Cisco’s latest Global Cloud Index predicted that 92 percent of enterprise workloads would be processed in public and private cloud data centres, and just 8 percent in physical data centres, by 2020.

So the future is cloudy, with enterprises adopting hybrid cloud strategies using services from a mix of providers. But irrespective of the cloud services they use, or the sector in which they operate, all enterprises share common goals: they want their business applications to deliver a quality user experience under all conditions; they want those applications to be secure and resilient; and they want them to run as efficiently as possible.

Shared responsibility

However, achieving those goals is not always straightforward. To paraphrase computer security analyst Graham Cluley, the public cloud is simply somebody else’s computers. While the provider should offer a strong foundation for high-performance, secure applications, responsibility for the security, availability, performance and management of the processes associated with those applications cannot be abdicated: it stays with the enterprise. More importantly, the enterprise is responsible for properly configuring and managing the security controls provided by the cloud provider.

Let’s examine the challenges enterprises face in ensuring their cloud applications are secure, deliver a quality user experience and are cost-efficient.

Challenge #1:  Cloud Security

Achieving robust security in the cloud is challenging for three reasons. First, understanding an organisation’s current security levels, where additional protection is needed and where potential vulnerabilities may lie is difficult regardless of whether the environment is on-premise or in the cloud. As the number of security products and platforms to manage across complex hybrid environments grows, maintaining a single comprehensive view of the security posture becomes harder.

Second, the highly dynamic nature of cloud environments, coupled with an ever-widening cyber threat landscape, requires security in those environments to be similarly flexible and fluid. Policies need to scale up in line with the infrastructures they are protecting. Third, there is a shortage of security expertise, with IT teams already stretched to manage the tools and processes in place across the hybrid environment.

Cloud security solutions also generate huge volumes of security events, making it difficult for personnel to prioritize and remediate risks.

Challenge #2: User experience

Different applications have different SLAs and user expectations – think of the difference between a training sandbox and a real-time online retail application – but user experience is typically predicated on two things: application performance and service availability. When these are compromised, user dissatisfaction can quickly translate into lost business.

The complexity of multiple design choices in the public cloud, from hardware architectures to instance types optimized for different applications, makes guaranteeing a consistent user experience that much more complicated. Factors such as the underlying cloud infrastructure hosting the application, the network connectivity between user and application, the performance of application delivery elements (for example, session load balancers) and the actual design and architecture of the application can all impact the user experience.

Challenge #3: Cost and efficiency

Cloud providers offer a variety of options for building cost-effective, scalable and highly available applications. From utility-based models with on-demand charges to reserved pricing and spot instances or price bidding, an enterprise has the flexibility to choose the model that suits its needs. The challenge is to identify which is best.
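As a rough illustration of the trade-off, the sketch below compares monthly costs under on-demand, reserved and spot pricing for a simple workload profile. All rates and workload figures are hypothetical placeholders, not any provider’s actual prices.

```python
# Illustrative cost comparison across cloud pricing models (hypothetical rates).
ON_DEMAND_RATE = 0.10   # $/instance-hour, hypothetical
RESERVED_RATE = 0.06    # $/instance-hour with a 1-year commitment, hypothetical
SPOT_RATE = 0.03        # $/instance-hour, hypothetical average spot price

HOURS_PER_MONTH = 730

def monthly_cost(instances: int, rate: float, utilisation: float = 1.0) -> float:
    """Cost of running `instances` at `rate` for the fraction of the month used."""
    return instances * rate * HOURS_PER_MONTH * utilisation

# A steady baseline of 4 instances, plus a 6-instance burst used 20% of the time.
baseline = monthly_cost(4, RESERVED_RATE)            # reserve the steady load
burst_on_demand = monthly_cost(6, ON_DEMAND_RATE, 0.2)
burst_spot = monthly_cost(6, SPOT_RATE, 0.2)

print(f"Reserved baseline:       ${baseline:,.2f}")
print(f"Burst on on-demand:      ${burst_on_demand:,.2f}")
print(f"Burst on spot capacity:  ${burst_spot:,.2f}")
```

Even this toy model shows why no single answer fits every workload: spot capacity is cheapest for interruptible bursts, while reserved pricing only pays off for load that genuinely runs all month.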

Cost optimisation is a case of weighing price and performance, according to the precise needs of the organisation in question. Settings and architecture designs must be optimized to deliver required application auto-scaling, and support demand peaks and troughs as they occur.  Design choices relating to securing workloads range from security endpoints running inside each instance, to network security appliances in various locations, to a security control offered by the cloud provider.

Each of these choices operates at different cost rates, impacts application performance in different ways, and delivers various levels of security effectiveness. Given this complexity, understanding how to select the solutions that are most efficient is not an easy task, unless organisations can model the applications and threat vectors targeting them.

Meeting the challenges:  how testing can provide value

To meet these challenges, organisations migrating some or all of their workloads to the cloud must be prepared to embed consistent testing into their processes, in both pre-production and production. There is a direct relationship between testing and risk – by getting testing procedures right from the start, enterprises can dramatically reduce their risk exposure and ensure they successfully harness the full benefits of the cloud.

In pre-production, before a cloud migration takes place, testing can provide quantifiable insights to empower security architects, network architects and security teams during vendor selection, performance and cost optimisation, scaling, availability planning and training. For example, on the vendor selection side, assuming the functional requirements are met, procurement managers need to ascertain which public cloud vendor offers the best balance of price and performance. They also need to establish which of the available tools for securing application workloads are efficient, secure and, ultimately, right for their specific requirements.

Moving on to questions of performance and cost optimization, IT and security managers need to confirm how security policies and architectures can be optimized, and what the best settings are for an auto-scaling policy. These decisions are based on a range of factors, from memory utilisation to new connection rates, and again, consolidating and analyzing those factors can only be done via a rigorous, real-world testing process.
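To make that concrete, here is a minimal sketch of a threshold-based auto-scaling decision driven by two of the factors mentioned: memory utilisation and new-connection rate. The thresholds are hypothetical placeholders; in practice, the right values come out of exactly the kind of rigorous load testing described here.

```python
from dataclasses import dataclass

@dataclass
class Metrics:
    memory_utilisation: float       # 0.0-1.0, averaged across the fleet
    new_connections_per_sec: float

# Hypothetical thresholds; real values should be derived from load testing.
SCALE_UP_MEMORY = 0.75
SCALE_DOWN_MEMORY = 0.30
SCALE_UP_CONN_RATE = 500.0

def scaling_decision(m: Metrics, current_instances: int, min_instances: int = 2) -> int:
    """Return the desired instance count for a simple threshold policy."""
    # Scale up when either memory or connection rate breaches its ceiling.
    if (m.memory_utilisation > SCALE_UP_MEMORY
            or m.new_connections_per_sec > SCALE_UP_CONN_RATE):
        return current_instances + 1
    # Scale down only when both signals are comfortably low.
    if (m.memory_utilisation < SCALE_DOWN_MEMORY
            and m.new_connections_per_sec < SCALE_UP_CONN_RATE / 2):
        return max(min_instances, current_instances - 1)
    return current_instances
```

Note the asymmetry: scale-up triggers on a single breached signal, while scale-down requires both to be low, which helps avoid thrashing around demand peaks and troughs.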

Then there are questions around how the cloud architecture will perform once deployed. Where are the bottlenecks in the application architecture as it scales? How fast will applications self-recover from errors, and how will the user experience be impacted if some application services fail?

Testing from pre- to post-production

Answering these questions requires an extensive pre-production testing programme, with realistic loads, modelled threat vectors and failover scenarios. This provides assurance that the cloud architecture will empower rather than restrict the business. It also enables security engineers and analysts to better understand what they are working with. And testing must not end once a cloud environment has gone live. Continuous testing in production is essential to monitor for service degradations, while continuous security validation provides security service assurance.
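A production-stage monitor of this kind can be as simple as a periodic latency probe combined with a percentile-based degradation check. The endpoint URL and SLO threshold below are hypothetical; a real deployment would run the probe on a schedule and feed the samples into alerting.

```python
import statistics
import time
import urllib.request

# Hypothetical endpoint and latency objective; adjust to the service under test.
ENDPOINT = "https://example.com/health"
LATENCY_SLO_MS = 500.0

def probe_latency_ms(url: str) -> float:
    """Measure the wall-clock latency of a single HTTP GET, in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=5) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000.0

def degraded(samples: list[float], slo_ms: float = LATENCY_SLO_MS) -> bool:
    """Flag degradation when the 95th-percentile latency sample breaches the SLO."""
    p95 = statistics.quantiles(samples, n=20)[-1]
    return p95 > slo_ms
```

Using a high percentile rather than the mean matters here: a service can look healthy on average while a meaningful fraction of users experience unacceptable latency.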

In conclusion, as the cloud is the new normal, continuous testing of cloud workloads needs to be embraced as the new normal too, at all stages of application deployment and delivery. Testing is the only means of ensuring that organisations can fully realize the benefits of the cloud, without the risks of security breaches, poor user experience, or unnecessary costs.

www.ixiacom.com


Jeff Harris is chief marketing officer (CMO) responsible for Ixia’s brand and global marketing efforts, including product and solutions marketing, corporate marketing, field marketing, corporate communications and partner marketing. He drives Ixia’s corporate positioning, messaging and communications to both internal and external audiences. Jeff has more than 20 years of marketing leadership experience spanning security, wireless networking and semiconductor markets.
 
Before joining Ixia, Jeff was a senior consultant at Check Point Software Technologies where he led the solution marketing team creating thought leadership campaigns that positioned Check Point for significant growth. He also consulted at Cisco driving go-to-market programs for the networking and security business units. Prior to that, he was Vice President at TrellisWare Technologies, a ViaSat Company, where he led the company’s entrance and leadership in the mobile ad hoc networked communications market. Jeff also worked at General Atomics, Lockheed Martin, and UUNET in senior product management and product marketing capacities. 
 
Jeff received his Bachelor’s in Electrical Engineering and Master’s degree in Electrical Engineering from George Mason University.
