Note: This article describes the state of play around the European AI Regulation (EU 2024/1689) in spring 2026. The legislation is still evolving; for specific situations, always consult a legal advisor or contact Cloud Captains.

The European AI Regulation, better known as the EU AI Act, is the first binding AI legislation in the world. The law was published in the Official Journal of the European Union on 12 July 2024 and entered into force on 1 August 2024, with its obligations applying in stages. From 2 August 2026 onwards, most obligations apply to the vast majority of organisations that deploy AI in their business processes. This is not limited to AI developers: it also applies to ordinary companies that use AI through subscriptions or integrated tools.

In this guide we explain exactly what the regulation entails, which risk categories exist, which deadlines apply and how your organisation can become compliant step by step. We close with the most common mistakes and a concrete action plan.

What is the EU AI Act?

The AI Act is European legislation that regulates artificial intelligence based on the risk a system poses to people and society. The greater the risk, the heavier the requirements. The regulation uses a layered approach with four categories, each with its own obligations.

The AI Act has extraterritorial reach. Organisations established outside the EU must also comply once their AI systems or the output thereof are used inside the EU. Legal scholars refer to this as the Brussels Effect.

According to the final text, an AI system is a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

The seven elements of the AI definition

Element | Explanation
Machine-based | The system operates via hardware and software to perform computations.
Levels of autonomy | The system may involve some human input, but does not operate solely on explicitly provided instructions.
Adaptiveness | The ability to autonomously learn and adapt behaviour after deployment; optional for classification.
Inference | Deriving outputs from inputs through models rather than fixed programming rules.
Internal objectives | Goals that are explicitly embedded or implicitly derived from behaviour.
Intended purpose | The external goal set by the provider.
Influence on environment | The capacity of the output to drive decisions or actions.

Systems that rely solely on simple mathematical optimisation or long-established methods, such as standard linear or logistic regression without adaptive components, fall outside the scope. Once these methods are combined with techniques such as reinforcement learning, they are typically classified as AI systems.
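
To make this definitional screening concrete, the sketch below treats the elements as a plain checklist. It is a thought aid under our own naming, not a legal test; the boolean shortcuts and the example classifications are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical profile of a software system under assessment."""
    machine_based: bool           # runs as hardware plus software
    some_autonomy: bool           # not driven solely by explicit human instructions
    infers_outputs: bool          # derives outputs via models, not fixed rules
    internal_objectives: bool     # explicit or implicit objectives
    intended_purpose: bool        # an external goal set by the provider
    influences_environment: bool  # outputs can drive decisions or actions
    adaptive: bool = False        # may learn after deployment (optional element)

def likely_ai_system(p: SystemProfile) -> bool:
    """First-pass screening: adaptiveness is optional, the rest must hold."""
    return all([p.machine_based, p.some_autonomy, p.infers_outputs,
                p.internal_objectives, p.intended_purpose,
                p.influences_environment])

# A static logistic regression without adaptive components fails the
# autonomy/inference screen; wrapped in a reinforcement-learning loop,
# the same pipeline typically passes it.
static_regression = SystemProfile(True, False, False, True, True, True)
rl_pipeline = SystemProfile(True, True, True, True, True, True, adaptive=True)
print(likely_ai_system(static_regression))  # False
print(likely_ai_system(rl_pipeline))        # True
```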


The four risk categories

  • Unacceptable risk: AI applications that pose a clear threat to safety, livelihoods or fundamental rights. Banned across the EU since 2 February 2025.
  • High risk: Systems that affect fundamental rights or safety. Permitted, but subject to the strictest conformity requirements. Deadline: 2 August 2026.
  • Limited risk: Chatbots, deepfakes and other systems with transparency obligations. Users must know they are interacting with a machine.
  • Minimal risk: Spam filters, recommender systems and AI in video games. No additional legal obligations; voluntary codes of conduct are encouraged.

1. Prohibited AI practices

Since 2 February 2025, eight categories of AI applications have been fully prohibited in the EU. The legislator has drawn a hard line against practices considered unethical or manipulative.

  • Harmful manipulation through subliminal techniques that change behaviour unconsciously.
  • Exploitation of vulnerabilities based on age, disability or socio-economic situation.
  • Social scoring by governments or private bodies based on behaviour or personality.
  • Predictive policing that assesses the likelihood of a person committing a crime based on profiling.
  • Emotion recognition in the workplace or in educational institutions, except for strict safety purposes.
  • Biometric categorisation based on race, religion, political opinion or sexual orientation.
  • Real-time remote biometric identification in public spaces by law enforcement, save for narrowly defined exceptions.
  • Untargeted scraping of biometric data from the internet or CCTV to build facial recognition databases.

The Digital Omnibus agreement of May 2026 added a ban on so-called nudifier apps: AI systems that generate intimate or sexually explicit content of identifiable persons without consent.

2. High-risk AI

High-risk systems are permitted, provided the strictest requirements are met. These systems are identified through two paths. The first concerns systems that act as safety components in products already covered by existing EU harmonisation legislation, such as medical devices, lifts and heavy machinery. The second concerns the stand-alone systems explicitly listed in Annex III.

Warning: Do you work with AI in HR, lending, education, healthcare or access to public services? Chances are high that you fall in this category and must meet heavy compliance requirements.

Examples of Annex III applications:

  • Biometrics and remote biometric identification
  • Critical infrastructure such as road traffic, water supply and electricity
  • Education and vocational training, including admission and examination grading
  • Employment and workforce management, including recruitment software and performance evaluation
  • Access to essential services such as credit scoring and insurance risk assessment
  • Law enforcement, migration, asylum and border control
  • Justice and democratic processes

For these systems, providers must perform a conformity assessment, implement a risk management system, draft technical documentation and ensure effective human oversight.

3. Limited-risk AI

This includes systems where transparency is mandatory but no heavy compliance requirements apply. Examples are chatbots that answer customer questions, emotion recognition outside the prohibited context and systems that generate deepfakes. Two core rules apply:

  • Users must explicitly know they are interacting with a machine.
  • AI-generated content must be marked recognisably, for example through a watermark or clear notice.
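
A minimal sketch, assuming a simple customer-facing web chatbot, of how a deployer might satisfy both rules at once. The disclosure text and field names are our own illustration, not a schema prescribed by the Act.

```python
# Disclosure shown to users on first contact with the chatbot.
DISCLOSURE = "You are chatting with an AI assistant, not a human."

def wrap_reply(model_reply: str) -> dict:
    """Package a chatbot reply with a human-readable disclosure and a
    machine-readable marker for AI-generated content. Field names are
    illustrative, not a prescribed schema."""
    return {
        "text": model_reply,
        "disclosure": DISCLOSURE,      # satisfies the "know it's a machine" rule
        "generated_by_ai": True,       # recognisable marking of AI output
        "provenance": "ai-generated",  # could also be a watermark in media
    }

print(wrap_reply("Your order ships tomorrow."))
```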

4. Minimal risk

The vast majority of AI applications, such as spam filters, recommender systems and AI in video games, fall in this category. There are no additional legal obligations under the regulation, although voluntary codes of conduct are encouraged.


General-Purpose AI: regulating the foundations

The rapid rise of Large Language Models such as GPT, Claude and Gemini led to a separate chapter on General-Purpose AI, or GPAI. These models can be integrated into countless downstream applications, which is why specific rules apply to their providers.

GPAI providers must:

  1. Maintain technical documentation about the model and its training.
  2. Provide information to downstream users integrating the model.
  3. Implement a policy to comply with EU copyright law.
  4. Publish a public summary of the training data used.
Requirements for the public summary of training data

Section | Required information
Model characteristics | Modalities such as text, audio and video, language coverage and intended use.
Source attribution | Description of datasets, individual mention of large datasets, narrative explanation of web-scraped data.
Crawling details | List of the top ten percent of domain names (five percent for SMEs) and operational details of web crawlers.
Copyright and moderation | Explanation of compliance with Text and Data Mining opt-outs and measures against illegal content.
Synthetic data | Whether training data was generated by other AI models and identification of those source models.
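
Read as a document structure, the summary maps naturally onto a handful of sections. The skeleton below mirrors the table above using our own shorthand keys; the Commission's official template is the authoritative format.

```python
# Illustrative skeleton of a public training-data summary; keys are our own.
training_data_summary = {
    "model_characteristics": {
        "modalities": ["text", "audio"],
        "languages": ["en", "nl"],
        "intended_use": "general-purpose assistant",
    },
    "source_attribution": {
        "datasets": ["<dataset descriptions>"],
        "large_datasets_named_individually": True,
        "web_scraped_data_narrative": "<narrative explanation>",
    },
    "crawling_details": {
        "top_domains_listed_pct": 10,  # 5 for SMEs
        "crawler_operation": "<operational details of web crawlers>",
    },
    "copyright_and_moderation": {
        "tdm_opt_out_compliance": "<explanation>",
        "illegal_content_measures": "<measures>",
    },
    "synthetic_data": {
        "uses_ai_generated_data": False,
        "source_models": [],
    },
}
```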

Models trained with computational power exceeding 10^25 floating-point operations (FLOPs) are presumed to be systemic-risk models. Providers of such models must perform additional evaluations, mitigate risks through adversarial testing and report serious incidents directly to the European AI Office.
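
To get a feel for where the 10^25 FLOPs line sits, one can apply the common 6 × parameters × training-tokens rule of thumb for dense transformer training compute. The heuristic is not part of the regulation, which counts cumulative compute more precisely; it merely gives an order of magnitude, and the model sizes below are hypothetical.

```python
def approx_training_flops(params: float, tokens: float) -> float:
    """Common 6*N*D rule of thumb for dense transformer training compute."""
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # presumption threshold in the AI Act

# A hypothetical 70B-parameter model trained on 15 trillion tokens:
flops = approx_training_flops(70e9, 15e12)
print(f"{flops:.2e}", flops > SYSTEMIC_RISK_THRESHOLD)  # 6.30e+24 False

# Scaling to ~500B parameters at the same token count crosses the line:
print(approx_training_flops(500e9, 15e12) > SYSTEMIC_RISK_THRESHOLD)  # True
```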


Provider or deployer: your responsibility

This is the part of the law that surprises most entrepreneurs. Even if you do not build AI yourself but simply purchase it through a subscription, you are legally a deployer and carry your own obligations.

The law makes a fundamental distinction between providers, which are companies that develop or place AI systems on the market, and deployers, which are organisations that use AI in their work processes. Under Article 3 of the AI Act, any organisation that applies AI to support or automate business processes is a deployer.

Your responsibility as a deployer includes:

  • Understanding what purpose you use AI for and which risks are involved.
  • Ensuring human oversight for decisions that directly affect people.
  • Being transparent towards customers, applicants or other stakeholders.
  • Documenting how you use AI and which measures you have taken.
  • Meeting the AI literacy requirement for your staff.
Warning: When you substantially modify or adapt an existing AI system for a specific high-risk purpose, you legally become the provider. Full responsibility for technical documentation and conformity assessment then shifts to your organisation, even if you did not build the core model yourself.

Which tools are you actually using?

Many entrepreneurs do not know exactly which AI systems are used in their organisation, for what purpose and by whom. That is usually the first obstacle: you cannot assess whether you are compliant if you lack an overview. Below is a list of commonly used tools and the risk category they typically fall in.

Tool or system | Risk category | Comment
ChatGPT, Claude, Gemini for internal tasks | Limited or minimal | Low risk for text, summarisation or analysis; using the output for direct decisions about people can shift the use into a higher risk category.
AI chatbot on your website | Limited risk | Users must explicitly know they are talking to AI.
AI recruitment tool ranking candidates | High risk | Falls under Annex III, employment and workforce management.
AI in accounting or invoicing software | Minimal risk | No legal obligations, provided no decisions about people are made.
AI scoring for credit or payment behaviour | High risk | Explicitly mentioned in Annex III.
AI for automated CV screening | High risk | Falls under employment and workforce management.
Marketing AI for segmentation | Limited or minimal | Depends on the impact on individuals.
AI in medical diagnosis support | High risk | Falls under regulated products.

Timeline and deadlines

The AI Act is rolled out in stages. The Digital Omnibus agreement of May 2026 shifted some deadlines, but the overall direction remains. A wait-and-see strategy is risky because conformity assessments and the selection of a Notified Body often take twelve to eighteen months.

Element | Original deadline | New deadline
Prohibited practices | 2 February 2025 | Already in force
AI literacy (Article 4) | 2 February 2025 | Already in force
GPAI governance and rules | 2 August 2025 | 2 August 2025
Watermarking and labelling | 2 August 2026 | 2 December 2026
High-risk AI, Annex III | 2 August 2026 | 2 December 2027
High-risk AI in regulated products | 2 August 2027 | 2 August 2028
Note: On 7 May 2026 the EU institutions reached a political agreement on the amendments to the AI Act. After an earlier failed trilogue on 28 April 2026, the new compromise confirms the postponement of the high-risk deadlines to 2 December 2027 (Annex III) and 2 August 2028 (Annex I). An important detail in the agreement: these new dates are fixed regardless of whether harmonised standards and guidelines are available by then. The final legal text had not yet been adopted at the time of publishing this article, but the direction is clear. Organisations should not use this postponement to relax compliance efforts, as the core structure of the law remains fully intact.

Registration in the public EU database

The AI Act provides for a central, publicly searchable EU database in which high-risk AI systems are registered. Under the political agreement of May 2026 this registration obligation is extended to providers of so-called exempted AI systems, meaning systems that fall outside the high-risk category based on a risk assessment even though they operate in an Annex III domain.

For these exempted systems a lighter information duty applies. The provider does not need to submit the same level of technical detail as for genuine high-risk systems, but must transparently document why the exemption applies.

Tip: Even if your system is classified as exempted, you must substantiate and register the classification yourself. Retain the risk assessment, the rationale for the exemption and supporting documentation for at least the lifecycle of the system plus ten years.

What this means in practice:

  • Providers of Annex III systems who consider that their system poses no significant risk must be able to demonstrate this through a structured self-assessment.
  • The registration in the EU database is publicly searchable. Customers and supervisors can therefore see that your system exists and what the basis for the exemption claim is.
  • In case of doubt or identified shortcomings, the supervisor can withdraw the exemption and retroactively impose full high-risk obligations.
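
A minimal sketch of what such an exemption file could look like internally. The Act prescribes the duty to substantiate and register, not this particular schema, so the field names and the example rationale below are our own assumptions.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExemptionRecord:
    """Illustrative internal record backing an Annex III exemption claim."""
    system_name: str
    annex_iii_domain: str       # e.g. "employment and workforce management"
    assessment_date: date
    rationale: str              # why no significant risk arises
    evidence: list[str] = field(default_factory=list)  # supporting documents
    retention_note: str = "retain for system lifecycle + 10 years"

record = ExemptionRecord(
    system_name="Shift scheduling assistant",
    annex_iii_domain="employment and workforce management",
    assessment_date=date(2026, 6, 1),
    rationale="Performs a narrow procedural task; no profiling of persons.",
    evidence=["risk_assessment_v2.pdf"],
)
```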

AI literacy: Article 4

Since 2 February 2025, organisations must take measures to safeguard the AI literacy of their staff. This applies to anyone working with AI output, from HR managers to customer service staff. Employees must understand how the AI tools they use work, what their limitations are and which ethical risks attach to the output.

Tip: Start an AI literacy training programme immediately. It is an obligation already in force and it directly reduces risks in daily operations. Good training combines fundamentals with practical examples from your own organisation.

Human oversight: Article 14

High-risk AI systems must be designed so that natural persons can effectively oversee them during operation. This oversight must prevent humans from blindly trusting machine suggestions, a phenomenon known in the literature as automation bias.

The human overseer must:

  • Understand the system well enough to judge it critically.
  • Be able to disregard outputs or fully stop the system.
  • Be aware of limitations and possible errors of the system.
  • Be able to request a second opinion or arrange an independent assessment in case of doubt.
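
In software, one way to honour these requirements is a hard human gate on consequential outputs. The pattern below is a sketch of one possible design, not a prescribed implementation; all names are our own.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelOutput:
    decision: str      # e.g. "reject applicant"
    confidence: float  # the model's own score, to be treated sceptically
    rationale: str     # explanation surfaced to the reviewer

def apply_with_oversight(output: ModelOutput,
                         human_review: Callable[[ModelOutput], bool]) -> str:
    """Never act on a consequential output without explicit human sign-off.

    The reviewer can disregard the output or stop the process entirely,
    which counters automation bias by making approval an active step.
    """
    if human_review(output):  # reviewer sees decision plus rationale
        return f"applied: {output.decision}"
    return "overridden: escalated for independent assessment"

result = apply_with_oversight(
    ModelOutput("reject applicant", 0.91, "score below threshold"),
    human_review=lambda o: False,  # reviewer disagrees and blocks the action
)
print(result)  # overridden: escalated for independent assessment
```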

Dutch enforcement: the hybrid supervision model

In the Netherlands, supervision of the AI Act is laid down in the national Implementing Act on the AI Regulation. The cabinet has chosen a hybrid model combining sectoral expertise with central coordination.

  • Dutch Data Protection Authority: coordinating supervisor for algorithms and AI since 2023. Oversees high-risk applications without an existing sectoral supervisor, such as in education, recruitment and selection.
  • Digital Infrastructure Inspectorate: focuses on AI in critical infrastructure and, together with the Data Protection Authority, streamlines coordination between supervisors.

Existing sectoral supervisors retain their role for AI in their domain:

  • NVWA for consumer products
  • IGJ for medical devices
  • AFM and DNB for the financial sector
  • ILT for transport

Interaction with the GDPR: synergy or conflict?

For systems that process personal data, both the GDPR and the AI Act apply simultaneously. Under the GDPR, a Data Protection Impact Assessment is mandatory for high-risk processing. Under the AI Act, certain deployers, particularly public bodies and providers of essential services, must perform a Fundamental Rights Impact Assessment.

Aspect | DPIA (GDPR) | FRIA (AI Act)
Focus | Protection of personal data | Broad fundamental rights
Mandatory for | High-risk processing | Specific deployers of high-risk AI
Scope | Privacy and data breaches | Human dignity, non-discrimination, freedom of expression, children's rights
Tip: Combine the FRIA and DPIA into a single process. This reduces administrative burden and provides a holistic view of the impact on individual rights.

A notable provision in the AI Act allows providers of high-risk systems to temporarily process special categories of personal data, such as race, religion or health, to detect and correct bias in models. Under the political agreement of May 2026 these possibilities are broadened, provided the use is strictly necessary and exclusively aimed at specific, predefined forms of bias. It remains a direct exception to the general processing prohibition under the GDPR. Strict security and pseudonymisation measures remain mandatory and the data must be deleted after correction.
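
As a toy illustration of what such bias detection can look like, a provider might compare positive-decision rates across pseudonymised groups before the sensitive attribute is deleted again. This is a deliberately simplified sketch, not a complete fairness methodology.

```python
from collections import defaultdict

def selection_rates(decisions: list[bool], groups: list[str]) -> dict[str, float]:
    """Per-group positive-decision rate; a basic demographic-parity check."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += int(d)
    return {g: positives[g] / totals[g] for g in totals}

# Pseudonymised group labels; under the Act the underlying special-category
# data must be secured and deleted once the bias is corrected.
rates = selection_rates([True, False, True, True, False, False],
                        ["A", "A", "A", "B", "B", "B"])
print(rates)  # {'A': 0.666..., 'B': 0.333...}: a gap worth investigating
```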


Penalty structure

Sanctions are designed to be deterrent even for large tech companies. SMEs and startups are subject to proportionally lower amounts.

Type of violation | Maximum fine | Alternative based on turnover
Use of prohibited AI practices | 35 million euro | 7 percent worldwide annual turnover
Non-compliance with high-risk requirements | 15 million euro | 3 percent worldwide annual turnover
Breach of transparency rules | 15 million euro | 3 percent worldwide annual turnover
Incorrect information to authorities | 7.5 million euro | 1 percent worldwide annual turnover
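
The mechanics behind the table are "whichever is higher" for most organisations and, under Article 99(6), "whichever is lower" for SMEs and start-ups. A small sketch using the figures above, with a hypothetical turnover:

```python
def max_fine(cap_eur: float, pct: float, worldwide_turnover_eur: float,
             is_sme: bool = False) -> float:
    """Maximum administrative fine: the higher of the cap and pct * turnover,
    but the lower of the two for SMEs and start-ups (Art. 99(6))."""
    turnover_based = pct * worldwide_turnover_eur
    return min(cap_eur, turnover_based) if is_sme else max(cap_eur, turnover_based)

# Prohibited-practice violation at 2 billion euro worldwide turnover:
print(max_fine(35e6, 0.07, 2e9))               # 140,000,000.0
print(max_fine(35e6, 0.07, 2e9, is_sme=True))  # 35,000,000.0
```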

Action plan: how to become compliant

A structured approach is essential to be ready in time for the deadlines of 2026 and 2027. Below is a practical action plan that Cloud Captains uses for compliance projects.

Step 1: AI Inventory
Map all AI systems used within the organisation, including tools deployed ad hoc by employees. This "shadow AI" is often the biggest blind spot. Document for each system the purpose, provider, department and stakeholders involved.

Step 2: Classification
Determine the risk level for each system and check the Annex III categories thoroughly. Distinguish between provider and deployer roles. Document why a particular classification applies (a sketch of such a register follows this plan).

Step 3: AI literacy
Implement a training programme for all employees working with AI output. Start with management and HR and expand to the rest of the organisation.

Step 4: Risk management system
For high-risk systems you must set up a formal risk management system covering the full lifecycle, from design to post-market monitoring.

Step 5: Documentation and transparency
Draft technical documentation, perform a Fundamental Rights Impact Assessment where required and ensure transparency towards end users.

Step 6: Governance
Appoint an AI Officer or a multidisciplinary working group that oversees the lifecycle of systems. Establish clear responsibilities and escalation procedures.

Step 7: Continuous monitoring
Implement post-market surveillance, incident reporting and periodic reviews. The law requires you to actively follow up on changes in risks.
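
Steps 1 and 2 translate directly into a lightweight register. The sketch below uses our own field names and a deliberately naive first-pass rule; a real classification also checks the prohibitions, product legislation and the exemption conditions.

```python
from dataclasses import dataclass

@dataclass
class AISystemEntry:
    """One row in a hypothetical internal AI inventory (Step 1)."""
    name: str
    purpose: str
    vendor: str
    department: str
    role: str                            # "provider" or "deployer"
    annex_iii_domain: str | None = None  # e.g. "employment"; None if n/a
    risk_category: str = "minimal"
    classification_rationale: str = ""

def first_pass_category(entry: AISystemEntry) -> str:
    """Naive first pass (Step 2): an Annex III domain implies high risk
    unless an exemption is substantiated; this only orders follow-up work."""
    return "high" if entry.annex_iii_domain else entry.risk_category

cv_screener = AISystemEntry("CV screener", "rank applicants", "VendorX",
                            "HR", "deployer", annex_iii_domain="employment")
print(first_pass_category(cv_screener))  # high
```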

Common misconceptions and pitfalls

1. Provider status through modification

An organisation that substantially modifies or adapts an existing AI system for a specific high-risk purpose legally becomes the provider. Full responsibility for technical documentation and conformity assessment then shifts to that organisation.

2. The incident reporting paradox

In an AI-related incident, an organisation often has to report within three different timelines to three different authorities: 24 hours for NIS2, 72 hours for the GDPR and fifteen days for the AI Act. The risk is that statements made within the first 24 hours can later be used against the organisation in a GDPR or AI investigation.
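
The three clocks can at least be pinned to the moment of detection. The sketch below assumes the timelines named above and naive calendar counting; each law defines its own trigger event and counting rules, so treat it as a planning aid only.

```python
from datetime import datetime, timedelta

def reporting_deadlines(detected_at: datetime) -> dict[str, datetime]:
    """Illustrative deadline calculator for one incident under three regimes."""
    return {
        "NIS2 early warning (24h)": detected_at + timedelta(hours=24),
        "GDPR breach notification (72h)": detected_at + timedelta(hours=72),
        "AI Act serious incident (15 days)": detected_at + timedelta(days=15),
    }

for name, due in reporting_deadlines(datetime(2026, 6, 1, 9, 0)).items():
    print(name, "->", due.isoformat())
```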

3. Ambiguity in GPAI metrics

The obligation for GPAI providers to state the volume of scraped content leaves open whether this should be measured in file size, number of tokens or number of documents. This ambiguity can lead to inconsistent reporting.

4. The supply chain

Organisations deploying AI systems must request contractual guarantees from their suppliers. GPAI providers are obliged to share information with downstream users, but the depth of that information is often a source of commercial disputes. Make sure to capture this in your procurement contracts.

5. Open source is not a free pass

Models released under open-source licences are not automatically exempt. For non-systemic open-source models certain transparency exemptions apply, but the basic obligations remain.


The environmental dimension

The AI Act introduces, for the first time, obligations regarding the ecological footprint of technology. GPAI providers must document known or estimated energy consumption. The European Commission is working on delegated acts to set standardised measurement and calculation methods. In future, the ecological impact of an AI system may even play a role in admission to specific sectors.
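
Pending those delegated acts, a first-order estimate of training energy multiplies accelerator count, runtime, average board power and a data-centre overhead factor (PUE). The figures below are placeholders, not a standardised method.

```python
def training_energy_kwh(num_gpus: int, hours: float,
                        avg_power_kw: float, pue: float = 1.2) -> float:
    """First-order training energy estimate: GPUs * time * power * PUE."""
    return num_gpus * hours * avg_power_kw * pue

# Hypothetical run: 1,024 GPUs for 30 days at 0.5 kW average draw:
print(f"{training_energy_kwh(1024, 30 * 24, 0.5):,.0f} kWh")  # 442,368 kWh
```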


Operational costs for SMEs

According to impact assessments by the European Commission, compliance costs for an SME can reach up to 400,000 euro per high-risk product. That sounds like a lot, but the total is spread across multiple components.

Cost item | Description | Estimated SME range
Quality management system | Workflows for data governance and monitoring | 71,400 to 330,000 euro
Technical documentation | Description of architecture and training | Part of QMS
Conformity assessment | External audit by a Notified Body | Up to 1 million euro, one-off
AI literacy and training | Training staff according to Article 4 | 6,000 to 7,000 euro
Continuous monitoring | Post-market surveillance and incident reporting | Annual overhead of 17 percent
Note: For many SMEs that only deploy AI as a user and do not develop it, costs remain limited to training, documentation and governance. The heavy amounts mainly apply to providers of high-risk systems.

Conclusion

The AI Act is much more than a legal checkbox. It is a redefinition of the innovation process. For successful implementation, organisations must move away from the idea that AI compliance is a task for the IT department alone, or for the legal department alone. It touches strategy, governance, HR and operations.

The costs and regulatory burden are significant, but the Brussels Effect is likely to make the European norm the global standard for trustworthy and human-centric artificial intelligence. Those who start now build trust with customers and citizens, and prevent compliance from becoming a late-stage problem.

Organisations that embrace the AI Act as an opportunity to improve their governance will, in the long run, score better on trust, quality and customer loyalty than those who wait until the supervisor is at the door.

- Cloud Captains