The EU AI Act governs the development, marketing, and use of AI systems in the EU, aiming to protect fundamental rights, minimize risks, and ensure fair competition.

The Act imposes strict requirements on companies, and it also creates new challenges for consultants who support organizations in implementing AI. This article examines the key obligations of the EU AI Act and outlines the steps companies and consultants should take to remain compliant.

The Step-by-Step Implementation of the EU AI Act

The EU AI Act applies in stages, giving companies sufficient time to adapt to the new rules:

  • August 1, 2024: The EU AI Act enters into force, setting the foundation for the staged application of its rules.
  • February 2, 2025: AI practices posing an unacceptable risk, such as social scoring, are prohibited. Companies must also demonstrate that employees working with AI systems have the necessary competencies.
  • August 2, 2025: Obligations for providers of general-purpose AI models apply, together with the Act's governance and penalty provisions.
  • August 2, 2026: Most of the Act's provisions become applicable, including the rules for high-risk AI systems listed in Annex III, with their technical requirements and safety standards.
  • August 2, 2027: Full applicability of the provisions for high-risk AI systems that are safety components of products covered by Annex I. By this date, all obligations must be met.

The Core Principles of the EU AI Act

The EU AI Act follows a risk-based approach, categorizing AI systems into four groups (a classification sketch follows this list):

  • Unacceptable Risk
    Systems that violate fundamental rights or cause harm through manipulative practices, such as social scoring or emotional manipulation, are prohibited.
  • High Risk
    Includes applications used in sensitive areas such as healthcare, law enforcement, or human resources. These systems are subject to stringent testing, risk-assessment, and monitoring requirements. Providers and deployers must comply with measures such as fundamental rights impact assessments and, where personal data is processed, data protection impact assessments (DPIAs) under the GDPR.
  • Limited Risk
    Systems like chatbots, which pose limited risks, are subject to transparency obligations. Users must be informed when interacting with AI.
  • Minimal Risk
    AI systems that present no significant risks require no specific regulatory measures.
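
To make the four tiers tangible in code, here is a minimal Python sketch. The use-case names and the mapping below are illustrative assumptions, not classifications taken from the Act, and defaulting unknown cases to high risk is a deliberately conservative choice for the example.

```python
from enum import Enum

class RiskCategory(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict conformity obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical mapping from use cases to tiers, for illustration only;
# a real assessment must follow the Act's annexes and legal advice.
USE_CASE_RISK = {
    "social_scoring": RiskCategory.UNACCEPTABLE,
    "cv_screening": RiskCategory.HIGH,         # HR is a sensitive area
    "customer_chatbot": RiskCategory.LIMITED,  # must disclose AI use
    "spam_filter": RiskCategory.MINIMAL,
}

def classify(use_case: str) -> RiskCategory:
    """Return the tier for a known use case; default to HIGH so that
    unknown cases trigger a full manual assessment."""
    return USE_CASE_RISK.get(use_case, RiskCategory.HIGH)

for case in USE_CASE_RISK:
    print(f"{case}: {classify(case).value}")
```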

Obligations for Companies

Companies developing or using AI systems must implement extensive measures to comply with the EU AI Act:

1. Technical and Organizational Requirements

  • Providers must maintain extensive technical documentation and meet safety standards. High-risk AI systems require measures to prevent discrimination and to ensure human oversight.
  • Deployers (users) of high-risk systems are responsible for correct data usage and for documentation duties such as log file storage and staff training; a logging sketch follows this list.
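
To picture the log-keeping duty for deployers, the following Python sketch appends one structured record per AI-assisted decision using only the standard library. The field names (system_id, input_ref, human_reviewer) and the file name are hypothetical; the Act does not prescribe a specific log format.

```python
import json
import logging
from datetime import datetime, timezone

# Write one JSON line per decision to an append-only audit log.
logger = logging.getLogger("ai_audit")
handler = logging.FileHandler("ai_decisions.log")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_decision(system_id: str, input_ref: str, output: str, reviewer: str) -> None:
    """Record an auditable trace of one AI-assisted decision."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,      # a reference, not raw personal data
        "output": output,
        "human_reviewer": reviewer,  # supports the human-oversight duty
    }))

log_decision("hr-screening-v2", "application-4711", "shortlisted", "jane.doe")
```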

2. Demonstration of AI Competence

As of February 2, 2025, companies must be able to demonstrate that their employees have the knowledge and skills needed to use AI safely and effectively.

3. Transparency and Labeling Obligations

  • Transparency Requirements for High-Risk AI Systems
    High-risk AI systems must meet strict transparency standards to ensure responsible usage. Providers must ensure that system outputs are interpretable and usable. Detailed technical documentation must describe the system’s purpose, training data, algorithm functionality, and expected performance. This documentation must be made available to market surveillance authorities upon request.
    Providers must also issue an EU declaration of conformity, confirming that the system meets the Act’s requirements. This declaration must comply with Annex V of the regulation and be presented in a language understandable to the authorities of the member state where the system is marketed. Non-product-related high-risk systems must also be registered in a publicly accessible EU database.
  • Transparency for Limited-Risk Systems
    Systems like chatbots are required to inform users that they are interacting with AI. This requirement is waived only if the AI nature is obvious from the context.
  • Labeling Requirements for Deepfakes
    Special rules apply to content generated or manipulated by AI that realistically depicts people, objects, or places. Such deepfakes must be clearly labeled as artificially generated or manipulated, for example through machine-readable codes, watermarks, or visible notices (a labeling sketch follows this list).
  • Transparency for General-Purpose AI Models
    Providers of general-purpose AI models must make technical documentation available to support downstream integration. They are also required to put in place a policy to comply with EU copyright law.
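
As one way to picture the deepfake labeling duty, the sketch below stamps a visible disclosure notice onto an image using the third-party Pillow library. This is only one possible technique, and the file names are hypothetical; machine-readable marks such as metadata or invisible watermarks can complement or replace a visible overlay.

```python
from PIL import Image, ImageDraw  # pip install Pillow

def label_ai_image(path_in: str, path_out: str,
                   notice: str = "AI-generated content") -> None:
    """Stamp a visible disclosure notice onto the bottom of an image."""
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Draw a black banner across the bottom, then the notice in white.
    draw.rectangle([0, img.height - 40, img.width, img.height], fill=(0, 0, 0))
    draw.text((10, img.height - 30), notice, fill=(255, 255, 255))
    img.save(path_out)

label_ai_image("deepfake.png", "deepfake_labeled.png")
```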

4. Integration with the GDPR

The parallel application of the GDPR and the EU AI Act ensures the protection of personal data when using AI systems. Companies must ensure that data processing and storage comply with applicable regulations.

What Consultants Need to Know

Consultants supporting companies in implementing and using AI systems face specific challenges:

  • In-Depth Knowledge of the Regulation
    Consultants must have a thorough understanding of the rules, especially for high-risk applications. This includes knowledge of technical requirements and legal frameworks such as the GDPR.
  • Contract Design
    Drafting and maintaining data processing agreements (DPAs) is essential, especially when handling clients’ personal data.
  • Training and Awareness
    Consultants should help companies build their employees’ AI competence by providing training on legal and ethical issues.
  • Risk Management and Monitoring
    A key task is assisting in risk analysis and compliance with documentation requirements, such as creating DPIAs for high-risk applications.

Conclusion: A Shared Responsibility

The EU AI Act challenges companies and consultants to meet legal obligations while fostering trust in AI systems. Developing a comprehensive AI strategy and engaging experts early on are crucial for minimizing legal risks and driving innovation.

The next phase of AI adoption will depend on the ability to create ethical, transparent, and secure applications. Companies and consultants can gain a competitive edge while contributing to the advancement of trustworthy AI.

Disclaimer

The information provided in this blog post is for general guidance only and does not constitute legal advice. The author assumes no liability for the accuracy, completeness, or timeliness of the information provided. Binding legal guidance can only be obtained from the official text of the EU AI Act and its implementation in national legislation. For legal questions or case-specific guidance, consulting qualified legal professionals is strongly recommended.

FAQs

How does the EU AI Act relate to the GDPR?

The EU AI Act and the GDPR (General Data Protection Regulation) complement each other but focus on different aspects. The GDPR regulates the processing and protection of personal data. It is technology-neutral and ensures that personal data is processed transparently, securely, and in compliance with the law.

In contrast, the EU AI Act focuses specifically on AI systems and addresses potential risks arising from their use. While the GDPR primarily protects individual rights, the EU AI Act emphasizes technical, ethical, and legal standards for AI. Both regulations apply in parallel, meaning businesses must comply with both data protection and AI-specific requirements.


What penalties does the EU AI Act impose for violations?

The EU AI Act imposes significant penalties graduated by the severity of the violation; each cap is the higher of a fixed amount or a share of global annual turnover:

  • Prohibited AI Practices: Using banned AI practices, such as social scoring, can result in fines of up to €35 million or 7% of global annual turnover.
  • Failure to Meet Obligations: Breaches of specific obligations, such as failing to provide technical documentation or register in the EU database, may lead to fines of up to €15 million or 3% of global annual turnover.
  • False Information: Submitting false information to authorities can incur fines of up to €7.5 million or 1.5% of global turnover.

These stringent penalties underline the EU’s commitment to strictly enforcing the regulatory framework.
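
Since each cap is the higher of a fixed amount and a share of worldwide annual turnover, the maximum exposure is simple arithmetic, as this minimal Python sketch shows:

```python
def max_fine(fixed_cap_eur: float, turnover_share: float,
             annual_turnover_eur: float) -> float:
    """Fine cap for undertakings: the fixed amount or the turnover
    share, whichever is higher."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# Prohibited-practice violation (up to EUR 35M or 7% of turnover)
# for a company with EUR 1 billion worldwide annual turnover:
print(max_fine(35_000_000, 0.07, 1_000_000_000))  # 70000000.0 -> EUR 70M
```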