Migrating to the cloud can be somewhat liberating. It allows enterprises to leverage operational tools and practices pioneered by the cloud titans. But while these operational tools give enterprises a path to a much more agile IT environment, that speed comes at the cost of control.
How can IT teams balance the agility of the cloud with the control required to run a modern enterprise?
Command and control
Enterprise IT has traditionally operated using a strict command-and-control model. Behaviour is determined by configuration being precisely set over a distributed infrastructure. If something needs to be amended, operators propose changes to the configuration.
There are a few drawbacks to the command-and-control model. First, infrastructure becomes quite brittle when it depends on the operational precision required to specify exact behaviour across diverse infrastructure. This is a big reason that many enterprises use frameworks like ITIL. When change is difficult, the best you can do is inspect in excruciating detail. This, of course, makes moving quickly nigh impossible, and so our industry also employs excessive change controls around critical times of the year.
Second, when behaviour is determined by low-level, device-specific configuration, the workforce will naturally be made up of device specialists, fluent in vendor configuration. The challenge here is that these specialists have a very narrow focus, making it difficult for enterprises to evolve over time. The skills gap that many enterprises are experiencing as they move to cloud? It’s made worse because of the historical reliance on device specialists whose skills often do not translate to other infrastructure.
Translating command-and-control to cloud
For command-and-control enterprises, the path to cloud is not always clear. Extending command-and-control practices to the public cloud largely defeats the purpose of the cloud, even if it represents a straightforward evolution. Adopting more cloud-appropriate operating models likely means re-skilling the workforce, which creates a non-technical dependency that can be hard to address.
The key here is elevating existing operational practices above the devices. Technology trends like SDN are important because they introduce a layer of abstraction, allowing operators to deal with intent rather than device configuration. Whether it's overlay management in the data center or cloud-managed SD-WAN, there are solutions in the market today that give enterprises a path from CLI-driven to controller-based operations.
Minimally, this helps provide a proving ground for cloud operating models. More ideally, it also serves as a means to retrain the workforce on modern operating models, a critical success factor for any enterprise hoping to be successful in the cloud.
Intent-based policy management
Abstracted control is critical because it leads naturally to intent-based management. Intent-based management means that operators specify the desired behaviour in a device-independent way, allowing the orchestration platform to translate that intent into underlying device primitives.
An IT operator ought not have to specify how an application is to connect to a user. Whether it is done on this VLAN or that VLAN, across a fabric running this protocol or that protocol, is largely uninteresting. Instead, the operator should only have to specify the desired outcome: application A should be able to talk to application B, using whatever security policies are desired, and granting access to people of a certain role.
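To make the idea concrete, here is a minimal sketch of an intent record and a compiler that turns it into low-level primitives. All names here (`Intent`, `compile_to_device_config`, the ACL-style output lines) are invented for illustration; a real orchestration platform would emit vendor-specific configuration instead.

```python
# Hypothetical sketch: a device-independent intent, compiled into
# (made-up) device-level primitives by an orchestration layer.
from dataclasses import dataclass


@dataclass(frozen=True)
class Intent:
    """Device-independent statement of desired connectivity."""
    source: str              # e.g. an application name
    destination: str
    allowed_roles: tuple     # roles granted access
    encrypted: bool = True


def compile_to_device_config(intent: Intent, vlan_id: int) -> list:
    """Translate intent into generic ACL-style lines.

    The VLAN, protocol, and rule syntax are implementation details the
    operator never specifies; the platform chooses them.
    """
    rules = [f"vlan {vlan_id}",
             f"permit {intent.source} -> {intent.destination}"]
    for role in intent.allowed_roles:
        rules.append(f"permit role {role}")
    if intent.encrypted:
        rules.append("require ipsec")
    return rules


intent = Intent(source="app-A", destination="app-B",
                allowed_roles=("finance",))
print(compile_to_device_config(intent, vlan_id=110))
```

The operator's input stops at the `Intent`; everything below it is the platform's concern, which is exactly the separation that makes the workforce conversation above tractable.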
By elevating management to intent, enterprise teams do two things. First, they become masters of what matters to the business. No line of business cares about how edge policy is configured; rather, they care about what services and applications are available. Second, by abstracting the intent from the underlying infrastructure, operators create portability.
Multicloud and portability
Portability is a huge part of maintaining control in an environment where infrastructure is spread across owned and non-owned resources.
If abstraction is done well, the intent should be able to be implemented across whatever underlying infrastructure exists. So whether an application is in a private data center or AWS or Azure, the intent should be the same. When paired with an extensible orchestration platform with suitable reach into different resource types, that intent can service any underlying implementation.
For example, assume that an application workload resides in AWS. The policy dictating how that application functions will be applied to the AWS virtual private cloud (VPC) gateway. If the workload is redeployed in Azure, the same intent should translate to the equivalent Azure configuration, without any changes initiated by the operator. If a similar workload is launched in a private data center on a VM, the same policy should be used. If that application moves over time to a container, the policy ought to be the same.
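A rough sketch of that portability, assuming one shared policy object and a per-platform renderer. The renderer names and output shapes are illustrative only; real deployments would call the AWS and Azure APIs rather than return dictionaries.

```python
# Hypothetical sketch: one intent, several platform-specific renderers.
policy = {
    "app": "payments",
    "ingress": ["web-tier"],   # who may reach the app
    "ports": [443],            # on which ports
}


def render_aws(p):
    # In practice this would drive VPC security-group rules via an API.
    return {"SecurityGroupRules": [
        {"FromPort": port, "ToPort": port, "Source": src}
        for port in p["ports"] for src in p["ingress"]]}


def render_azure(p):
    # The same intent expressed as Azure-style NSG rules.
    return {"securityRules": [
        {"destinationPortRange": str(port), "sourceAddressPrefix": src}
        for port in p["ports"] for src in p["ingress"]]}


RENDERERS = {"aws": render_aws, "azure": render_azure}


def apply_policy(platform, p):
    """Same intent, platform-specific implementation."""
    return RENDERERS[platform](p)
```

Moving the workload from AWS to Azure changes only the renderer selected, never the policy itself; that is the portability the surrounding paragraphs describe.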
By making policy portable, the enterprise actually maintains control. Even though the underlying implementation might vary, the behaviour should be uniform. This, of course, relies on multicloud management platforms capable of multi-domain and multivendor support, and there needs to be some integration with different application lifecycle management platforms. But operating at this abstracted level is actually the key to maintaining control.
Trust but verify
Having control but being unable to verify is really no better than not having control in the first place. Ultimately, for an operating model to be sustainable, it needs to be observable. This is true from both a management and a compliance perspective.
This means that enterprises looking to maintain control in a post-cloud world will need to adopt suitable monitoring tools that grant them visibility into what is happening. Similar to the policy portability discussion, though, these tools will naturally need to be extensible to any operating environment—private or public, bare metal or virtualized, cloud A or cloud B.
For most enterprises, IT operates in discrete silos. The application team and the network team are run separately. The data center team and the campus and branch team are run separately. If a prerequisite for control is visibility, and that visibility has to extend end-to-end over a multicloud infrastructure, it means these teams need to come together to ensure observability over the full multicloud ecosystem.
Tools like performance monitors and network packet brokers will need to be evaluated, not in domain-specific contexts but over the full end-to-end environment. This might mean trading off one tool that is superior in a particular domain for another that is more capable of spanning multiple domains.
Ideally, these tools would plug into a broader orchestration platform, allowing observable events to trigger additional action (if this, then that).
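The "if this, then that" pattern can be sketched in a few lines: monitored events are matched against registered rules, and matching rules fire actions. The rule and event shapes below are assumptions for illustration, not the API of any particular monitoring product.

```python
# Hypothetical sketch: observable events triggering follow-on actions.
rules = []


def when(predicate, action):
    """Register a rule: run `action` for events matching `predicate`."""
    rules.append((predicate, action))


def handle(event):
    """Evaluate an event against every rule; return the actions fired."""
    fired = []
    for predicate, action in rules:
        if predicate(event):
            fired.append(action(event))
    return fired


# If latency on a path exceeds a threshold, re-route traffic.
when(lambda e: e.get("metric") == "latency_ms" and e.get("value", 0) > 200,
     lambda e: f"reroute {e['path']} via backup link")

print(handle({"metric": "latency_ms", "value": 350, "path": "dc1->aws"}))
```

An orchestration platform that exposes this kind of hook lets visibility feed directly back into control, closing the loop between monitoring and action.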
Start with culture
While there are certainly dependencies on underlying technology, the ultimate key to maintaining control in the cloud falls back on that common dependency for much of IT: people. Enterprises should evaluate technology, but failing to start with people means the path forward is only partially paved.
Coaching teams to elevate their purview above the devices is an absolute prerequisite for any cloud transformation. Breaking free from CLI-centric operating models is critical. And embracing more diverse infrastructure will be essential. The cloud doesn’t care about legacy products managed in legacy ways.
With a willing and trained workforce, the technology on which multicloud management is built can be effectively deployed in such a way that enterprises get the best of both worlds: agility and control.
Mike has 15 years of professional experience and can offer an uncommon blend of technology savvy, strategic sense, influencing skills, and storytelling magic that allows him to develop a clear vision, set product direction, and gain consensus across diverse internal and external stakeholders.