Labels are an important part of the DevOps armoury. They help guide developers, streamline processes, and ensure seamless collaboration. They simplify large quantities of information, enabling DevOps teams to use performance metrics as part of the software development pipeline.
Metric frameworks such as the one developed by DevOps Research and Assessment (DORA) are designed to give DevOps teams the essential data they need for visibility and control over the development pipeline.
But labels don’t always paint the complete picture. Yes, they can capture quantitative metrics, but they’re less adept at gathering information on DevOps culture, such as collaboration and communication. Nor are labels the best way to ensure that a team is aligned with broader organisational goals.
The reality is simple: not all teams are built the same, which means they shouldn’t be measured in the same way.
Take the issue of team capacity, which shapes what a team can realistically deliver and where its constraints lie. It goes without saying that traditional DevOps metrics offer valuable insights into deployment frequency and lead time.
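As a point of reference, here is a minimal sketch of how those two figures are commonly derived. The deployment records, timestamps and field names are hypothetical; a real pipeline would pull this data from its CI/CD tooling.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: when each change was committed and when it shipped.
deployments = [
    {"committed": datetime(2024, 3, 1, 9, 0),  "deployed": datetime(2024, 3, 1, 17, 0)},
    {"committed": datetime(2024, 3, 3, 11, 0), "deployed": datetime(2024, 3, 4, 10, 0)},
    {"committed": datetime(2024, 3, 5, 8, 30), "deployed": datetime(2024, 3, 5, 15, 0)},
]

window_days = 7  # reporting window for deployment frequency

# Deployment frequency: deployments shipped per day over the reporting window.
deployment_frequency = len(deployments) / window_days

# Lead time for changes: average time from commit to running in production.
lead_times = [d["deployed"] - d["committed"] for d in deployments]
average_lead_time = sum(lead_times, timedelta()) / len(lead_times)

print(f"Deployment frequency: {deployment_frequency:.2f} per day")
print(f"Average lead time: {average_lead_time}")
```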
But there’s a problem. Sometimes these metrics simply fall short, especially when it comes to capturing the reality of how IT teams work on the ground.
After all, modern IT environments are more complicated than ever before, especially as more and more organisations balance a mixture of on-prem, cloud, and hybrid environments. While the use of AIOps has helped to address this increasing complexity, DevOps teams are often pulled in many directions.
Another issue is that DevOps metrics don’t always account for the size, scale or scope of individual organisations and their DevOps teams. This means that ‘small but mighty’ teams, those that punch above their weight and are highly efficient given their scale, can often be evaluated poorly when the focus is solely on performance metrics.
Aligning goals
Another key component that traditional performance metrics sometimes fail to capture is how well a team’s work aligns with the overall objectives and vision of the organisation.
Metrics such as ‘change failure rate’ and ‘time to restore service’ are important, but they may not provide the nuanced understanding needed to connect day-to-day work to broader business outcomes. For instance, many DevOps team members spend considerable blocks of time identifying and fixing minor issues across applications and services.
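For context, here is a minimal sketch of how those two figures are typically calculated; the deployment counts and incident timestamps are invented for illustration, and neither number on its own says whether the underlying work moved the business forward.

```python
from datetime import datetime, timedelta

# Hypothetical change and incident records for one reporting period.
total_deployments = 40
failed_deployments = 3  # deployments that caused a degradation or outage

# Change failure rate: the share of deployments that led to a failure in production.
change_failure_rate = failed_deployments / total_deployments

# Time to restore service: average duration from incident start to resolution.
incidents = [
    (datetime(2024, 3, 2, 14, 0), datetime(2024, 3, 2, 15, 10)),
    (datetime(2024, 3, 9, 9, 30), datetime(2024, 3, 9, 10, 5)),
    (datetime(2024, 3, 20, 22, 0), datetime(2024, 3, 21, 0, 45)),
]
restore_times = [resolved - started for started, resolved in incidents]
mean_time_to_restore = sum(restore_times, timedelta()) / len(restore_times)

print(f"Change failure rate: {change_failure_rate:.1%}")
print(f"Mean time to restore: {mean_time_to_restore}")
```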
While AIOps and observability can help IT teams by automating the detection and resolution of service and application problems, DevOps teams are still largely on the ground troubleshooting these situations. And while this may be achieved quickly, it still eats into a team’s ability to build and deploy the innovative solutions that customers need.
One way around this is for organisations to complement quantitative metrics with qualitative assessments and other feedback. That way, they can foster a more holistic approach that considers both operational efficiency and strategic coherence within the DevOps landscape.
The case for measuring customer satisfaction
There’s another thing worth considering: alongside traditional DevOps performance metrics, organisations should also treat customer satisfaction as a measure of performance.
While customer satisfaction is multifaceted and complex, influenced by factors beyond the scope of traditional DevOps performance metrics, it is a good way to gauge the user experience. For this reason, it is arguably one of the most important metrics available.
Perhaps that explains why organisations looking for a more rounded understanding of their DevOps teams’ impact are casting their net wider. Instead of relying solely on traditional metrics, they’re supplementing them with customer-centric feedback mechanisms, user surveys and qualitative assessments to ensure that development processes align with, and enhance, customer satisfaction.
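One possible shape for that blended view is sketched below: a hypothetical scorecard that normalises a couple of delivery metrics and a survey-based satisfaction score into a single weighted figure. The metric values, targets, weights and field names are illustrative assumptions, not an established standard.

```python
# Hypothetical blended scorecard: delivery metrics plus customer feedback.
# All inputs, targets and weights below are illustrative assumptions.
team_snapshot = {
    "deployment_frequency_per_day": 0.6,  # from CI/CD data
    "change_failure_rate": 0.08,          # from incident data
    "csat_score": 4.2,                    # average post-release survey response (1-5)
}

targets = {
    "deployment_frequency_per_day": 1.0,  # aspirational target
    "change_failure_rate": 0.15,          # acceptable ceiling (lower is better)
    "csat_score": 5.0,
}

weights = {
    "deployment_frequency_per_day": 0.3,
    "change_failure_rate": 0.3,
    "csat_score": 0.4,                    # customer feedback weighted highest
}

def normalise(metric: str, value: float) -> float:
    """Scale a metric to 0..1, where 1.0 means at or better than target."""
    if metric == "change_failure_rate":
        # Lower is better: full marks at zero failures, none at the ceiling.
        return max(0.0, 1.0 - value / targets[metric])
    return min(1.0, value / targets[metric])

blended_score = sum(weights[m] * normalise(m, v) for m, v in team_snapshot.items())
print(f"Blended performance score: {blended_score:.2f} (0 to 1)")
```

A team could tune the weights to reflect its own priorities; the point is simply that customer feedback sits alongside delivery data rather than being tracked in isolation.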
Supporting DevOps teams
Whatever your perspective, organisations need to adopt a balanced and inclusive approach to evaluating performance. While it would be great to be able to use a single set of performance metrics for all functions and teams, a one-size-fits-all approach simply does not work.
Thankfully, there are a number of ways organisations can begin to improve the lives of DevOps teams while still performing well against traditional metrics.
For instance, integrating observability solutions allows teams to scale their work while making it easier to identify and resolve problems as they arise. Not only is this a more efficient and productive approach, it can also help free up time for smaller teams.
Observability can also be used to measure and provide insights into the end-user experience. The benefit of this approach is that it helps to ensure that the needs of customers, DevOps teams and the organisations they represent are aligned.
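As a rough illustration of what that can look like, the sketch below uses the OpenTelemetry Python SDK to wrap a user-facing operation in a trace span tagged with attributes an observability platform could aggregate into end-user experience insights. The service name, operation and attributes are hypothetical, and a real deployment would export spans to an observability backend rather than the console.

```python
# Requires: pip install opentelemetry-sdk
import time

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Minimal setup: print finished spans to the console for demonstration purposes.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical instrumentation name

def handle_checkout(user_id: str) -> None:
    # Wrap the user-facing operation in a span so its duration and outcome are
    # recorded alongside attributes describing the experience.
    with tracer.start_as_current_span("checkout") as span:
        span.set_attribute("user.id", user_id)    # hypothetical attribute
        span.set_attribute("checkout.items", 3)   # hypothetical attribute
        time.sleep(0.1)                           # stand-in for the real work
        span.set_attribute("checkout.success", True)

handle_checkout("user-123")
```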
Recognising the human element in the work done by DevOps teams, and supporting team members through professional development opportunities, can significantly contribute to a positive culture at both individual and organisational levels. This has a broader impact across the entire DevOps community.
Jeff Stewart brings more than 20 years of monitoring and observability expertise, with over 13 years of product strategy and solutions engineering at SolarWinds.