The first law for AI was approved this month, giving manufacturers of AI applications between six months and three years to adapt to the new rules. Anyone who wants to use AI, especially in sensitive areas, will have to strictly control the AI's data and its quality and create transparency – classic core disciplines of data management.

With the AI Act, the EU has done pioneering work and put a legal framework around what is currently the most dynamic and important branch of the data industry – just as it did with the GDPR, adopted in April 2016, and the Digital Operational Resilience Act (DORA), which takes effect in January 2025. Many of the new tasks in the AI Act will be familiar to data protection officers and to every compliance officer involved with GDPR and DORA.

The law sets out a definition of AI and defines four risk levels: minimal, limited, high and unacceptable. AI applications that companies want to use in areas such as healthcare, education and critical infrastructure fall into the highest permitted category, "high risk". Those in the "unacceptable" category – for example, systems considered a clear threat to people's safety, livelihoods or rights – will be banned.

Under the Act, AI systems must be trustworthy, secure, transparent, accurate and accountable. Operators must carry out risk assessments, use high-quality data and document their technical and ethical decisions. They must also record how their systems are performing and inform users about the nature and purpose of their systems. In addition, AI systems should be supervised by humans so that risk is minimised and intervention remains possible, and they must be highly robust and achieve a high level of cybersecurity.

Companies now need clear guidance. The first movers are already exploiting the technology's great potential, and at the same time they must prepare for the regulation's forthcoming details. Here are five recommendations for approaching AI without creating legal issues, without standing in the way of innovation – and while positioning yourself to implement the AI Act fully without turning your business upside down:

Allow AI to act with trust: To achieve this, you have to completely understand the AI's data content. The only way to get there is to closely control the data and the data flows into and out of the AI. This close control resembles the GDPR's requirements for personal data, and companies should keep that compliance in mind whenever they use or develop AI themselves. If you want to use AI in a GDPR- and AI Act-compliant manner, seek the advice of a data protection expert before introducing it.

Understand the data: Companies and their employees need to know exactly what data they are feeding the AI and what value this data has for the company. Some AI providers consciously leave this decision to the data owners, because they know the data best. AI must be trained responsibly, with the right controls in place so that only approved individuals can access the data.
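As a minimal sketch of such controls, the Python below gates contributions to a training set behind a role check. The roles, names and API are illustrative assumptions, not any vendor's actual mechanism:

```python
from dataclasses import dataclass, field

# Hypothetical approved roles; in practice these would come from an
# identity provider or the data governance platform.
APPROVED_ROLES = {"data_owner", "ml_engineer"}

@dataclass
class User:
    name: str
    role: str

@dataclass
class TrainingSet:
    records: list = field(default_factory=list)

    def add(self, user: User, record: dict) -> None:
        # Reject contributions from anyone outside the approved roles.
        if user.role not in APPROVED_ROLES:
            raise PermissionError(f"{user.name} may not add training data")
        self.records.append(record)

ts = TrainingSet()
ts.add(User("alice", "data_owner"), {"text": "internal product FAQ"})
# ts.add(User("bob", "intern"), {"text": "..."})  # raises PermissionError
```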

The question of copyright: Much of the law focuses on reporting on the content used to train the AI – the datasets that gave it the knowledge to perform. Earlier AI models were trained on available crawls of the internet and of books, material that included copyright-protected content – one of the areas the AI Act seeks to clean up. If companies have already used protected records without accurately labelling them, they will have to start over.
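One way to avoid starting over is to record the rights status of every training record from the beginning. The sketch below assumes a simple manifest in which anything unlabelled is excluded; the licence categories are illustrative, not legal advice:

```python
from dataclasses import dataclass

# Illustrative set of rights statuses a company might permit.
PERMITTED_LICENSES = {"proprietary", "public-domain", "cc-by"}

@dataclass
class TrainingRecord:
    source: str    # provenance, e.g. "internal-wiki" or "web-crawl-2021"
    license: str   # documented rights status; "unknown" if unlabelled
    content: str

def usable_for_training(record: TrainingRecord) -> bool:
    return record.license in PERMITTED_LICENSES

corpus = [
    TrainingRecord("internal-wiki", "proprietary", "..."),
    TrainingRecord("web-crawl-2021", "unknown", "..."),
]
cleared = [r for r in corpus if usable_for_training(r)]
# Only the internal-wiki record survives; unknown rights are excluded.
```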

Understand the contents of the data: This is an essential task. For data owners to make correct decisions, the value and content of the data must be clear. In everyday life this task is gigantic, as most companies have accumulated mountains of information that they know nothing about. AI and machine learning can help massively here and alleviate one of the most complex problems by automatically indexing and classifying a company's data according to its own Relevant Record strategy. Predefined filters immediately fish compliance-relevant data out of the data pond and mark it: personal data such as credit card details, and business-specific data such as mortgage records, architectural blueprints or seismic surveys. Security principles can also be applied during the AI's analysis to identify insecure data, threat patterns and more. Once unleashed on the data, this AI develops a company-specific language, a company dialect, and the longer it works and the more company data it examines, the more accurate its results become. The charm of AI-driven classification shows particularly when new specifications have to be met: whatever requirements the AI Act brings up in the longer term, ML- and AI-driven classification will be able to search for these additional attributes and give the company a degree of future security.
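To make the idea of predefined filters concrete, here is a minimal Python sketch using pattern-based rules. The patterns and tag names are illustrative assumptions; a real platform would combine such rules with ML models trained on the company's own documents:

```python
import re

# Illustrative "predefined filters": each pattern tags one class of
# compliance-relevant content.
FILTERS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "iban":        re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def classify(document: str) -> set[str]:
    """Return the set of compliance tags whose pattern matches."""
    return {tag for tag, pattern in FILTERS.items() if pattern.search(document)}

print(classify("Payment taken from card 4111 1111 1111 1111"))
# -> {'credit_card'}
```

New AI Act requirements could then be met by adding filters (or retrained models) rather than re-scanning everything by hand.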

Control data flows: Once the data is indexed and classified with the correct characteristics, the underlying data management platform can automatically enforce rules without the data owner having to intervene, vastly reducing the risk of human error. A company could enforce, for example, that certain data such as intellectual property or financial data may never be passed on to other storage locations or to external AI modules. Modern data management platforms control access to this data by automatically encrypting it and requiring users to authenticate via access controls and multifactor authentication. They can also direct the most valuable data to an air-gapped vault to ensure continuity of brand and business.
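Building on the classification tags above, such a rule could look like the following sketch; the tag names, destination scheme and policy are invented for illustration:

```python
# Classes that must never leave company-controlled storage or reach
# external AI modules (illustrative policy, not a product default).
BLOCKED_FOR_EXTERNAL = {"intellectual_property", "financial", "credit_card"}

def transfer_allowed(tags: set[str], destination: str) -> bool:
    """Deny transfers of sensitive classes to external destinations."""
    if destination.startswith("external:") and tags & BLOCKED_FOR_EXTERNAL:
        return False
    return True

assert not transfer_allowed({"financial"}, "external:llm-api")
assert transfer_allowed({"financial"}, "internal:air-gapped-vault")
```

Because the check keys off the classification rather than off individual files, new data inherits the policy the moment it is classified.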

The AI Act is also getting teeth

The AI Act has another similarity to the GDPR and DORA: once in force, sanctions for non-compliance will be enforced. Anyone who violates the AI Act must expect penalties of up to €35 million or 7 percent of global annual turnover, whichever is higher.

Since the GDPR came into force, the supervisory authorities have imposed fines totalling 4.5 billion euros (as of February 2024). When DORA arrives in January 2025, it will bring potential penalties of 1 percent of average daily worldwide turnover in the previous year, applied daily for up to six months until compliance is achieved, as well as the potential for criminal sanctions, alongside existing regulatory penalties.
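To put the AI Act's percentage in perspective, a quick back-of-the-envelope calculation (the turnover figure is invented for illustration):

```python
def ai_act_max_fine(annual_turnover_eur: float) -> float:
    """Upper bound for the most serious AI Act infringements:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# A company with EUR 2 billion in annual turnover:
print(f"EUR {ai_act_max_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
```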

The AI Act is likely to be formally adopted this summer and will come into force 20 days after publication in the EU's Official Journal. Most of its provisions apply after 24 months; the rules for banned AI systems apply after six months, the rules for general-purpose AI after twelve months, and the rules for high-risk AI systems after 36 months.

The EU has taken a big step with the AI Act and has underlined the seriousness with which it wants to balance the great potential of AI against the risks it brings. Responsible AI, coupled with full AI governance, is now the only right way to bring AI into your business. The major practical applications of AI are now subject to detailed compliance requirements, and anyone who has set out early will have to examine their approach and the data used in detail to ensure they do not violate the regulations. Companies need to act now and get their AI house in order.

Mark Molyneux is EMEA CTO at Cohesity.
