The cloud is now a significant attack surface for most organisations: according to the Unit 42 Attack Surface Threat Report, 80% of exposures rated medium, high or critical were associated with assets housed in the cloud, compared with just 19% for on-premise assets. It’s a shocking reflection on the state of cloud security, which has had considerable time to mature. The question is why cloud assets are so vulnerable and what can be done to improve their protection.
Firstly, patching remains problematic. The report found that, of the 30 common vulnerabilities and exposures (CVEs) tracked, three were exploited within hours of public disclosure and 19 within the following three months. That gives attackers a wide window of opportunity to exploit a vulnerability that has already been fully disclosed. But it’s not simply a case of tardy patching.
The Unit 42 Cloud Threat Report, Volume 7 found that 63% of source code repositories contain high or critical vulnerabilities, 51% of which are at least two years old, revealing how persistent these issues are and how slowly they are being identified. Among internet-facing services in the public cloud, 11% of exposed hosts had high or critical vulnerabilities, 71% of which were at least two years old. Any one of these could pose a threat to the supporting infrastructure, applications or data.
Config confusion
Other practices aimed at making cloud services easier and more efficient to use were also criticised for increasing risk. The report points to the ready-to-use templates and default configurations supplied by Cloud Service Providers (CSPs), which are contributing to the problem of misconfiguration, long identified by the Cloud Security Alliance (CSA) as the primary security issue facing the cloud.
Common misconfigurations include overly permissive identity and access management settings (the report found 76% of organisations don’t enforce multi-factor authentication (MFA) for console users and 58% don’t enforce it for admin users), the inadvertent exposure of cloud storage buckets, overly permissive network access controls, poorly configured logging and monitoring, and a failure to keep on top of DNS subdomains and records.
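To make this concrete, the short sketch below (using AWS and the boto3 SDK purely as an example; the reports cover multiple CSPs) shows how two of these misconfigurations might be surfaced programmatically: console users without MFA and storage buckets with no public access block. It assumes suitable read-only credentials are already configured in the environment.

```python
# Minimal sketch: flag IAM users without MFA and buckets with no public
# access block. Assumes boto3 and read-only AWS credentials are configured.
import boto3
from botocore.exceptions import ClientError

iam = boto3.client("iam")
s3 = boto3.client("s3")

# 1. IAM users with no MFA device registered.
for user in iam.list_users()["Users"]:
    name = user["UserName"]
    if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
        print(f"[!] IAM user without MFA: {name}")

# 2. Buckets with no public access block configuration set.
for bucket in s3.list_buckets()["Buckets"]:
    try:
        s3.get_public_access_block(Bucket=bucket["Name"])
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"[!] Bucket with no public access block: {bucket['Name']}")
        else:
            raise
```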
Moreover, the Attack Surface Threat Report found that the opportunity for misconfiguration is significantly higher in the cloud because services are continually swapped in and out. It estimates that, on average, 20% of externally accessible cloud services changed every month across the 250 organisations surveyed, making it highly problematic to screen for misconfigurations. Nearly half (45%) of exposures were found to have originated in these new services. This rapid rollout is not being matched by the retiring of infrastructure, however, with the same report revealing that 95% of end-of-life systems were still in place in the cloud.
In addition, many organisations make it far too easy for attackers to gain access by storing credentials in plaintext. The report found that 83% of organisations have credentials hard-coded into their source control management systems and 85% into virtual machine user data. Those details can then be used to access privileged user accounts and move laterally across the organisation, exploring assets before exfiltrating data or escalating the attack. In some cases, attackers won’t need to look far, with sensitive data found in 66% of cloud storage buckets.
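As a rough illustration of why plaintext credentials are so easy to find, the naive scanner below walks a checked-out repository looking for a few well-known patterns. The patterns and the repository path are assumptions made for the sketch; dedicated secret-scanning tools such as gitleaks or truffleHog are far more thorough in practice.

```python
# Naive illustration of scanning a checked-out repository for plaintext
# credentials. The patterns below are illustrative, not exhaustive.
import re
from pathlib import Path

PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "password assignment": re.compile(r"password\s*[:=]\s*['\"][^'\"]+['\"]", re.I),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_repo(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                line_no = text.count("\n", 0, match.start()) + 1
                print(f"{path}:{line_no}: possible {label}")

scan_repo(".")  # point at a local working copy of the repository
```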
Don’t lose the essentials
In the event of an attack, it’s the logging data that will help pinpoint exactly what happened and inform remedial action. Yet while logging is usually offered by CSPs, it is often disabled due to concerns over storage costs. The number of businesses opting to switch off logging is worryingly high – 75% for AWS CloudTrail logs, 74% for Microsoft Azure Key Vault audit logging, and 81% for Google Cloud Platform storage bucket logging – and the problem is likely to get worse as teams seek to cut costs.
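By way of example, the sketch below (AWS only, for brevity; the Azure and GCP equivalents differ) uses boto3 to check whether each CloudTrail trail is actually delivering logs. It assumes read-only CloudTrail permissions in the target account.

```python
# Sketch: report whether each CloudTrail trail in the account is logging.
import boto3

cloudtrail = boto3.client("cloudtrail")

for trail in cloudtrail.describe_trails()["trailList"]:
    status = cloudtrail.get_trail_status(Name=trail["TrailARN"])
    state = "logging" if status["IsLogging"] else "LOGGING DISABLED"
    print(f"{trail['Name']}: {state}")
```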
So, what can organisations do to mitigate these risks? Firstly, they could consider conducting a gap analysis or cloud configuration review. This can identify where services have been configured incorrectly or in a way that elevates risk, as well as compare current settings against common best practice and compliance standards. Configuration documentation can then be created for the security team to use as a blueprint when deploying new builds, managing services or testing changes to existing live configurations, with those changes then also documented and reported on.
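In essence, a configuration review boils down to diffing what is actually deployed against a documented baseline. The minimal sketch below illustrates the idea; the setting names and values are hypothetical placeholders rather than a real compliance standard.

```python
# Illustrative gap analysis: compare observed settings against a documented
# baseline. Keys and values are hypothetical placeholders.
BASELINE = {
    "mfa_enforced_for_console_users": True,
    "storage_buckets_block_public_access": True,
    "audit_logging_enabled": True,
    "default_network_acl_open_to_internet": False,
}

observed = {
    "mfa_enforced_for_console_users": False,
    "storage_buckets_block_public_access": True,
    "audit_logging_enabled": False,
    "default_network_acl_open_to_internet": False,
}

gaps = {k: (observed.get(k), v) for k, v in BASELINE.items() if observed.get(k) != v}
for setting, (actual, expected) in gaps.items():
    print(f"GAP: {setting} is {actual}, baseline requires {expected}")
```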
As touched upon, default configurations are not always the best option, so these must be reviewed to establish whether they are in the best security interests of the business. However, it’s a false economy to disable logging, which is essential for remediation and analysis in the event of a security incident. Logs should instead be collated and centralised for analysis by a SOC team, a process usually carried out with a Security Information and Event Management (SIEM) solution.
When it comes to access management, it’s vital to ensure MFA is enabled and that the principle of least privilege is applied, i.e. that users only have access to the cloud management functions required to support their role. Conducting a dedicated cloud penetration test can also be useful to determine whether there is a route to higher-privileged functions from a typically assigned user account.
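As one possible illustration, the sketch below creates an AWS IAM policy that grants only a narrow set of read-only actions and denies everything else when the request was not authenticated with MFA, following AWS’s documented deny-when-no-MFA pattern. The action list and policy name are illustrative assumptions, not a recommended production policy.

```python
# Sketch: a least-privilege policy that also denies non-MFA requests.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Grant only what the role actually needs (here: read-only S3 access).
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": "*",
        },
        {   # Deny everything if the request was not authenticated with MFA.
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        },
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="example-least-privilege-with-mfa",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
```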
Scanning open-source code as part of application development is also problematic. Vulnerability scanning tools can help here, but detection can be difficult because the issue can run layers deep, for instance when the vulnerability sits in a code package imported from a compromised third-party repository. The persistence of old vulnerabilities is testament to this.
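One pragmatic starting point is to check declared dependencies against a public vulnerability database. The sketch below queries the OSV database (https://osv.dev) for each exactly pinned entry in a requirements.txt file; it deliberately ignores transitive dependencies, which is precisely where the layers-deep problem described above begins, so treat it as a first pass rather than a complete answer.

```python
# Sketch: check pinned Python dependencies against the OSV vulnerability
# database. Only handles exact "name==version" pins; requires `requests`.
import requests

def check_requirements(path: str = "requirements.txt") -> None:
    for line in open(path):
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        resp = requests.post(
            "https://api.osv.dev/v1/query",
            json={"package": {"name": name, "ecosystem": "PyPI"}, "version": version},
            timeout=10,
        )
        vulns = resp.json().get("vulns", [])
        if vulns:
            ids = ", ".join(v["id"] for v in vulns)
            print(f"[!] {name}=={version}: {ids}")

check_requirements()
```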
Thankfully, the industry itself and various governments are now taking steps to improve the security of open-source software in light of devastating attacks such as Log4j, which illustrated the susceptibility of the software supply chain. The Open Source Security Foundation (OpenSSF) has launched the Alpha-Omega project to find and fix vulnerabilities in open-source code, with Alpha focused on critical projects and Omega using automated tooling to analyse, score and provide remediation advice to open-source maintainer communities. Legislatively, in the UK, a consultation on improving cyber resilience took place in 2022, with the response published later the same year; this is likely to see changes brought in during 2024. In the EU, the Cyber Resilience Act also aims to address issues with the supply chain.
These actions also testify to the importance of the cloud, which has become the foundation of our digital ecosystem. Its flexibility is vital in allowing organisations to capitalise on opportunities, but as the Unit 42 reports reveal, it’s also highly susceptible to attack, and the applications it relies upon do not provide as firm a basis as they should. It’s therefore down to the individual organisation to ensure it takes the necessary steps to configure and secure its systems, rather than relying solely on the shared responsibility model with the CSP.
Phil Robinson is the founder of Prism Infosec, which offers cutting-edge penetration testing, red teaming and security consultancy services covering cloud and traditional on-prem architectures and enterprise applications. He has been instrumental in the development of numerous penetration testing standards and certifications and has provided consultancy to some of the world’s largest organisations and government departments. He regularly speaks out about penetration testing and e-crime to help promote cybersecurity awareness and industry best practice.