Compliance Guide

EU AI Act Compliance for AI Builders: What You Actually Need to Know

Rogue AI · 10 min read

The EU AI Act entered into force on August 1, 2024, with a phased implementation timeline stretching to 2027. Since then, we have fielded dozens of questions from European businesses ranging from "do I need to do anything?" to "do I need to hire a compliance team?" The answer, for most small and mid-sized businesses, is somewhere in between. This guide cuts through the legal language and explains what the AI Act actually requires, who it applies to, and what practical steps you need to take — based on our experience building AI systems that comply by design, not by afterthought.

The Risk Classification System: Where Your AI Fits

The AI Act classifies AI systems into four risk tiers. Your compliance obligations depend entirely on which tier your system falls into. Getting this classification right is the single most important step — everything else follows from it.

Unacceptable Risk (Banned)

These AI applications are prohibited entirely in the EU. You cannot build, deploy, or sell them:

  • Social scoring systems — rating citizens based on behavior or personal characteristics for government purposes.
  • Real-time biometric identification in public spaces — facial recognition by law enforcement without judicial authorization (limited exceptions exist for serious crimes).
  • Emotion recognition in workplaces and education — AI systems that infer emotions from biometric data of employees or students.
  • Manipulative AI — systems designed to exploit vulnerabilities of specific groups (age, disability) to distort behavior.
  • Predictive policing based on profiling — using AI to predict criminal behavior based solely on personal characteristics.

If you are a typical European SMB building internal tools or customer-facing applications, you are almost certainly not building anything in this category. But it is worth knowing the boundaries, particularly around emotion recognition — some "sentiment analysis" features in customer service tools could brush against this line depending on implementation.

High Risk

High-risk AI systems face the most extensive compliance requirements. The Act defines two categories of high-risk systems:

First, AI systems that are safety components of products already regulated under EU law — medical devices, aviation systems, vehicles, machinery, elevators, and similar. If your AI is embedded in a product that requires CE marking, it is likely high-risk.

Second, standalone AI systems in specific domains listed in Annex III of the regulation:

  • Biometric identification and categorization — remote biometric identification systems (not the banned real-time ones, but retrospective "post" systems used after the fact).
  • Critical infrastructure management — AI systems used in managing road traffic, water supply, gas, heating, or electricity networks.
  • Education and vocational training — AI that determines access to education, assesses students, or monitors exam behavior.
  • Employment and worker management — AI used for recruitment screening, hiring decisions, task allocation, or performance monitoring.
  • Essential services access — AI that evaluates creditworthiness, sets insurance premiums, processes emergency calls, or assesses public assistance eligibility.
  • Law enforcement — AI used for individual risk assessment, polygraph-adjacent tools, evidence evaluation, or profiling in criminal investigations.
  • Migration and border control — AI for travel document verification, asylum application assessment, or migration risk detection.
  • Administration of justice — AI that assists courts in researching and interpreting facts or law.

Reality check for most businesses

If you are building a document processing system, a customer service chatbot, a content generation tool, an internal knowledge base, or an analytics dashboard — you are almost certainly not in the high-risk category. The majority of business AI applications fall into the limited or minimal risk categories. Do not let compliance anxiety prevent you from adopting AI.

Limited Risk (Transparency Obligations)

Limited-risk AI systems have one primary obligation: transparency. If your system interacts with humans, you must tell them they are interacting with AI. Specifically:

  • Chatbots and conversational AI: Users must be informed they are communicating with an AI system, not a human. A clear label or disclaimer is sufficient.
  • AI-generated content: Text, images, audio, or video produced by AI must be labeled as such. This includes deepfakes and synthetic media, which must be marked with machine-readable labels.
  • Emotion recognition or biometric categorization: If you use AI to detect emotions or categorize people by biometric data (where legally permitted), you must inform the individuals.

This is where most business AI applications land. A customer-facing chatbot powered by an LLM, a content generation tool, or an AI-assisted search system all fall into this category. The compliance burden is minimal: label your AI as AI. Do not pretend it is human. Include appropriate disclosures in your user interface.
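As a minimal sketch of what "label your AI as AI" can look like in practice, here is one way a deployer might attach a disclosure to chatbot replies. The function and field names (`render_response`, `ai_generated`) are illustrative, not prescribed by the Act; the Act requires that users be informed, not any particular mechanism.

```python
from dataclasses import dataclass

# The disclosure text is an example; use wording appropriate to your product.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."


@dataclass
class ChatResponse:
    text: str
    ai_generated: bool = True  # machine-readable flag for downstream consumers


def render_response(text: str, first_message: bool) -> ChatResponse:
    """Prefix the first reply in a session with a clear AI disclosure."""
    if first_message:
        text = f"{AI_DISCLOSURE}\n\n{text}"
    return ChatResponse(text=text)
```

Showing the disclosure once per session, plus a persistent UI label, is typically enough for limited-risk systems; the machine-readable flag also helps when your output is consumed by other systems.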

Minimal Risk (No Specific Obligations)

Internal tools, analytics systems, spam filters, recommendation engines (for non-harmful content), and most back-office AI applications fall into minimal risk. The AI Act imposes no specific compliance requirements for these systems. You are free to build and deploy them following general good practices.

That said, the AI Act encourages (but does not require) voluntary codes of conduct for minimal-risk systems. Following basic responsible AI practices — documenting your systems, monitoring for bias, keeping humans informed — is good engineering regardless of regulatory requirements.

High-Risk Systems: What Compliance Actually Requires

If your AI system does fall into the high-risk category, the compliance requirements are substantial but not insurmountable. Here is what the Act requires:

| Requirement | What It Means | Practical Implementation |
| --- | --- | --- |
| Risk Management System | Continuous identification and mitigation of risks | Document risks at design time, monitor in production, update regularly |
| Data Governance | Training data must be relevant, representative, and error-free | Document data sources, check for bias, maintain data lineage |
| Technical Documentation | Complete system documentation before market placement | Architecture docs, model cards, testing methodology, performance metrics |
| Record-Keeping | Automatic logging of system operation | Structured logs of inputs, outputs, decisions, and system events |
| Transparency | Users must understand the system's capabilities and limitations | User documentation, instructions for use, clear capability statements |
| Human Oversight | Humans must be able to understand, monitor, and override the system | Dashboard for monitoring, override mechanisms, escalation procedures |
| Accuracy and Robustness | Consistent performance across expected conditions | Testing suite, performance benchmarks, adversarial robustness checks |
| Cybersecurity | Protection against unauthorized manipulation | Input validation, access control, prompt injection defenses, audit trails |
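The record-keeping requirement above can be satisfied with ordinary structured logging. Here is a minimal sketch, assuming a JSON Lines log file and hashing of raw inputs for GDPR data minimization; the function name and record fields are our illustration, not terminology from the Act.

```python
import hashlib
import json
from datetime import datetime, timezone


def log_ai_event(log_path: str, model: str, prompt: str,
                 output: str, decision: str, confidence: float) -> None:
    """Append one structured record per AI operation (JSON Lines format).

    Raw input text is stored as a SHA-256 hash rather than verbatim,
    so the audit trail does not itself become a personal-data store.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "input_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_summary": output[:200],  # truncated summary, not full text
        "decision": decision,
        "confidence": confidence,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Append-only JSON Lines keeps the log greppable and easy to ship to any log aggregator; retention should follow your sector rules and GDPR minimization, as discussed in the checklist below.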

The Timeline: What Is Required When

The AI Act does not require everything at once. The phased implementation matters for planning:

  • February 2, 2025: Prohibited AI practices (unacceptable risk tier) become enforceable. Already in effect.
  • August 2, 2025: Rules for general-purpose AI models (GPAI) apply. This affects providers of foundation models like GPT-4, Claude, and Llama — not businesses using those models via APIs.
  • August 2, 2026: High-risk AI systems listed in Annex III must comply. This is when most of the substantive requirements take effect for businesses deploying high-risk systems.
  • August 2, 2027: High-risk AI systems that are components of regulated products (Annex I) must comply. This primarily affects manufacturers of medical devices, vehicles, and similar products.

If you use an LLM via API, the model provider carries most of the burden

The AI Act distinguishes between "providers" (who develop the AI system) and "deployers" (who use it). If you build an application using Claude or GPT-4 via API, Anthropic or OpenAI is the GPAI provider and bears the model-level compliance obligations. You, as the deployer, are responsible for how you use the model — your specific application, your data handling, and your user-facing transparency. This is a critical distinction that dramatically reduces the compliance burden for most businesses.

General-Purpose AI (GPAI) Models: What Changes

The AI Act introduces specific obligations for providers of general-purpose AI models — the foundation models that power most modern AI applications. This primarily affects companies like OpenAI, Anthropic, Meta, Mistral, and Google. If you use their models via API, you are a downstream deployer, not a GPAI provider.

However, if you fine-tune a model substantially or train your own model, you may become a GPAI provider depending on the scope of your modifications. The general rule: if your fine-tuning creates a model with significantly different capabilities or intended purposes from the base model, you take on provider obligations for your modified version.

GPAI providers must maintain technical documentation, provide information to downstream deployers about the model's capabilities and limitations, implement policies to comply with copyright law regarding training data, and publish a sufficiently detailed summary of training data. Models classified as "systemic risk" (those trained with more than 10^25 FLOPs of compute) face additional obligations including model evaluations, adversarial testing, and incident reporting.

Practical Compliance Checklist for SMBs

Based on our experience building compliant AI systems for European businesses, here is the practical checklist we recommend. This is ordered by priority and assumes your system is limited or minimal risk (which covers 90%+ of business AI applications).

  • Step 1: Classify your risk tier. Map your AI system against the four risk categories. Be honest — erring toward a higher classification means more work but less regulatory risk. Document your classification reasoning.
  • Step 2: Implement transparency labels. If your system interacts with humans or generates content, label it as AI-powered. Add disclosures to your UI. This is required for limited-risk systems and good practice for all systems.
  • Step 3: Document your system. Write down what your AI system does, what data it uses, what decisions it influences, and what its known limitations are. This is basic engineering documentation that you should have anyway. The AI Act just makes it legally required for high-risk systems and strongly recommended for everything else.
  • Step 4: Ensure human oversight mechanisms. Build interfaces that let humans review, understand, and override AI decisions. This is the human-in-the-loop design pattern we use in all our systems. It is good engineering and it satisfies the Act's requirements simultaneously.
  • Step 5: Log system operations. Maintain structured logs of AI system inputs, outputs, and significant decisions. Retention period should match your business needs and any sector-specific regulations (GDPR data minimization applies here too).
  • Step 6: Choose EU-hosted infrastructure. Run your AI systems on EU-based servers. Use EU-based model providers where possible. This does not eliminate all data transfer concerns, but it simplifies GDPR compliance and demonstrates good faith under the AI Act's data governance requirements.
  • Step 7: Review third-party AI components. If you use external AI APIs or models, document which providers you use, what data you send them, and what their own compliance posture is. You are responsible for your deployment even when using third-party models.
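Steps 1, 3, and 7 above amount to keeping a written record per AI system: its risk tier, why you classified it that way, and which third-party components it depends on. A minimal sketch of such a record, with hypothetical field names of our choosing:

```python
from dataclasses import dataclass, field

VALID_TIERS = {"unacceptable", "high", "limited", "minimal"}


@dataclass
class AISystemRecord:
    """One entry in an internal AI system register (illustrative schema)."""
    name: str
    risk_tier: str                  # one of VALID_TIERS
    purpose: str                    # what the system does and for whom
    classification_rationale: str   # Step 1: document your reasoning
    third_party_models: list[str] = field(default_factory=list)  # Step 7
    data_sent_externally: bool = False

    def validate(self) -> None:
        if self.risk_tier not in VALID_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")
        if not self.classification_rationale:
            raise ValueError("document why this tier applies (Step 1)")
```

Even a register this small forces the questions that matter: which tier, why, and what data leaves your infrastructure.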

How We Build for Compliance by Default

At Rogue AI, compliance is not a bolt-on audit we do at the end. It is embedded in how we architect systems from the start. Every system we build includes:

  • EU-hosted infrastructure: All our systems run on European servers. We use Hetzner (Germany) for VPS hosting and self-hosted Ollama for LLM inference. No data leaves the EU unless a client specifically requests it.
  • No US API dependencies by default: We do not route data through OpenAI, Google, or AWS AI services unless a client explicitly chooses that option after understanding the data transfer implications. Self-hosted models are our default architecture.
  • Human-in-the-loop design: Every system includes review interfaces where humans can inspect, verify, and correct AI outputs. No AI decision is fully autonomous unless the client specifically requests and accepts that architecture.
  • Structured logging: All AI operations are logged with timestamps, input summaries, output summaries, and confidence scores. This creates the audit trail that both the AI Act and GDPR expect.
  • Documentation as a deliverable: Every project includes system documentation — architecture description, data flow diagrams, model specifications, and known limitations. This is not an extra we charge for; it is part of building a professional system.

Penalties and Enforcement

The AI Act's penalty structure is significant but proportionate:

  • Prohibited AI practices: Up to EUR 35 million or 7% of global annual turnover (whichever is higher).
  • High-risk non-compliance: Up to EUR 15 million or 3% of global annual turnover.
  • Incorrect information to authorities: Up to EUR 7.5 million or 1% of global annual turnover.

For SMBs and startups, the Act includes proportionality provisions — penalties should consider the size and economic situation of the company. The European Commission is also establishing regulatory sandboxes where SMBs can test AI systems under regulatory supervision before full deployment. The intent is to encourage innovation while maintaining safety standards, not to penalize small companies for trying to adopt AI.

The Bottom Line

For most European SMBs, the EU AI Act requires transparency labels on customer-facing AI systems, basic documentation of what your AI does and how it works, and sensible data governance practices you should have anyway. It does not require hiring a compliance team, abandoning AI adoption, or building everything from scratch with custom models.

The businesses that will struggle are those that have deployed AI without any documentation, without any human oversight, and without knowing what data flows where. The businesses that will thrive are those that built responsibly from the start — documented systems, EU-hosted infrastructure, human-in-the-loop design, and clear communication with users about AI involvement.

If you are building AI and want to ensure your systems comply with the EU AI Act from the architecture level, book a free discovery call. We build compliant by default — not as an afterthought.
