With businesses having prepared for its arrival for what felt like an eternity, the GDPR finally came into force on May 25th, and organisations across the globe that do business in Europe will now be held accountable for the way in which they handle and process personal data. Indeed, much has been written about the size of the fines that companies could face if they fail to comply: up to €20 million, or four per cent of a firm's global annual turnover, whichever is higher.
Given the regulation's focus on data privacy and protection, the security of an organisation's network, and by extension of the information it holds, is integral to GDPR compliance. Organisations must therefore ensure they have measures in place to minimise the impact of any breach, attack or outage on their network, particularly now that, under the GDPR, data subjects have the right to access any data an organisation holds on them.
To protect the privacy of personal data, for example, Article 32 of the new legislation requires its "pseudonymisation and encryption". It further states that companies must "ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services" and be able to "restore the availability and access to personal data in a timely manner in the event of a physical or technical incident".
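To make those two named measures concrete, the minimal Python sketch below shows one common way of pseudonymising a direct identifier (a keyed hash) and encrypting a stored field, using the widely available cryptography package. The field names and key handling here are illustrative assumptions, not anything the regulation prescribes.

```python
# A minimal sketch of the two Article 32 measures named above.
# Keys and field names are illustrative assumptions only.
import hmac
import hashlib
from cryptography.fernet import Fernet

PSEUDONYM_KEY = b"replace-with-a-secret-key-from-a-vault"
encryption_key = Fernet.generate_key()  # in practice, load from a key store
fernet = Fernet(encryption_key)

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, repeatable token."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def encrypt(value: str) -> bytes:
    """Encrypt a personal-data field; the output is reversible with the key."""
    return fernet.encrypt(value.encode())

record = {
    "subject_id": pseudonymise("alice@example.com"),  # pseudonymised
    "address": encrypt("1 High Street, London"),      # encrypted
}
```

Pseudonymisation keeps records linkable for legitimate processing while hiding the real identity; encryption protects the field itself if the data store is breached.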
In short, it's more important than ever that organisations take steps to keep network downtime to an absolute minimum; otherwise, they could find themselves on the wrong side of the regulation, potentially facing an eye-wateringly high financial penalty.
Layers of complexity with GDPR
The size and complexity of today's IT networks mean that it's almost impossible to predict when a network failure might occur. Now, with the GDPR requiring more data than ever to be stored for longer periods, and to be available for access at any given time, organisations need to understand what can be done to ensure their networks can cope with a sudden increase in workload.
If and when a problem does occur, IT teams need to be ready to deal with it, with all the information they need at hand to triage and resolve it as quickly as possible. The ideal, of course, would be for them to detect that services are degrading before users are even aware of a problem, allowing the IT team to prevent any negative impact on the wider business.
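As an illustration of that early-warning idea, here is a hedged Python sketch that keeps an exponentially weighted baseline of observed response times and flags samples that drift well above it. The alpha and threshold values are illustrative assumptions, not recommendations from any particular monitoring product.

```python
# Sketch: flag possible degradation against a learned latency baseline.
class LatencyBaseline:
    def __init__(self, alpha: float = 0.1, threshold: float = 3.0):
        self.alpha = alpha          # weight given to the newest sample
        self.threshold = threshold  # alert when sample > threshold x baseline
        self.baseline = None        # EWMA of observed latencies (ms)

    def observe(self, latency_ms: float) -> bool:
        """Return True if this sample suggests the service is degrading."""
        if self.baseline is None:
            self.baseline = latency_ms
            return False
        degrading = latency_ms > self.threshold * self.baseline
        # Only learn from "normal" samples, so a slow incident does not
        # teach the monitor that slowness is normal.
        if not degrading:
            self.baseline = (1 - self.alpha) * self.baseline + self.alpha * latency_ms
        return degrading

monitor = LatencyBaseline()
for sample in [42.0, 45.1, 39.8, 44.2, 180.5]:  # ms; the last is an outlier
    if monitor.observe(sample):
        print(f"Possible degradation: {sample} ms vs baseline {monitor.baseline:.1f} ms")
```

Production systems track many such signals per service, but the principle is the same: raise the alarm when behaviour departs from the learned norm, before users feel it.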
Traditional point tools are no longer sufficient for this, however, as they do not account for the interactions between various aspects of the overall integrated system, such as the hybrid network, applications, servers, and supporting services.
The situation is complicated further when you consider that much of the functionality that runs on an organisation's network, its key services and applications, tends to be multi-vendor, requiring IT teams to ensure that everything works together without friction. Gaining visibility into this environment is hindered by the fact that these services run across physical and virtualised infrastructure as well as private, public and hybrid clouds, which only adds to the complexity.
What's required, therefore, is complete, vendor-agnostic visibility across the entire network: the data centre, the cloud, the network edge, and all points in between.
The smart approach to assurance
Continuous end-to-end monitoring and analysis of the traffic and application data flowing across an organisation's network will give IT teams the holistic view of their entire service delivery infrastructure they need for full service assurance.
This "smart" approach involves monitoring all of the "wire data": every action and transaction that traverses an organisation's service delivery infrastructure. By continuously analysing this wire data and translating it into metadata at its source, the resulting "smart data" is normalised, organised and structured in a service and security context, in real time.
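To make "translating wire data into metadata at its source" more tangible, the following Python sketch normalises a raw wire-level event into a structured, service-contextual record. The event shape, field names and service map are assumptions for illustration, not any vendor's actual schema.

```python
# Sketch: turn one raw wire-level event into normalised metadata.
from dataclasses import dataclass, asdict
import time

@dataclass
class TransactionMetadata:
    timestamp: float   # when the transaction was observed
    service: str       # logical service the traffic belongs to
    client: str        # pseudonymised client identifier (toy scheme here)
    latency_ms: float  # measured round-trip time
    status: str        # normalised outcome: "ok" or "error"

def normalise(raw_event: dict, service_map: dict) -> TransactionMetadata:
    """Translate a raw event into structured, service-contextual metadata."""
    port = raw_event["dst_port"]
    return TransactionMetadata(
        timestamp=raw_event.get("ts", time.time()),
        service=service_map.get(port, f"unknown:{port}"),
        client=f"client-{hash(raw_event['src_ip']) & 0xFFFF:04x}",
        latency_ms=raw_event["rtt_ms"],
        status="ok" if raw_event["response_code"] < 400 else "error",
    )

event = {"src_ip": "10.0.0.7", "dst_port": 443, "rtt_ms": 12.4, "response_code": 200}
print(asdict(normalise(event, {443: "web-frontend", 5432: "orders-db"})))
```

The point of doing this at the source is that only compact, structured records need to be shipped and stored, rather than raw packet captures, which also limits how much personal data is retained.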
The structure of this metadata then allows analytics tools to clearly surface application performance, infrastructure complexity, service dependencies, and threats or anomalies across the network.
By continuously monitoring this wire data, businesses gain the contextualised, real-time, actionable insights they need for assurance of an effective, resilient and secure infrastructure. Without that assurance, detection, triage and resolution times are extended, customers suffer, and the organisation itself risks failing in its duty to protect their personal data.
Compliance with Article 32 of the GDPR, along with much of modern business activity, is dependent on the continuous and consistent availability of effective, resilient and secure IT infrastructure. By taking a smart approach to assuring complete visibility and availability, businesses everywhere can be confident in the reliability of their networks, and in their efforts to comply with the new regulations.
With over 20 years as a solutions engineer, Ray focuses on complex large-scale deployments for corporate, government, multi-national and telecoms organisations. His role requires him to support sales teams in presales, business development, technical, and product management roles, bridging customer requirements with solution capability.