
EU AI Act compliance for mid-market companies: what you actually need to do by August 2026

The EU AI Act deadline is August 2026. Here's what mid-market DACH companies need to do — risk classification, documentation, and oversight.

Tags: AI · GDPR · EU AI Act · DACH

The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive AI regulation, requiring companies deploying AI systems in the EU to classify risk, document systems, and implement oversight controls by August 2, 2026.

If you’re a mid-market company in DACH wondering what you actually need to do: you need to inventory every AI system you use or operate, classify each one by risk level, write up the required documentation, design human oversight mechanisms, and set up ongoing monitoring. That’s it. Five things. The regulation is long, but the practical work is finite and manageable — especially if you start now, because you have roughly four months left.

I’ve spent the last year working through EU AI Act compliance with mid-market clients across Germany, Austria, and Switzerland. What follows is what we’ve learned about what matters, what doesn’t, and where companies waste time.

Who this applies to

The most common mistake I hear: “We don’t build AI, so this doesn’t apply to us.”

It does. The EU AI Act distinguishes between providers (companies that develop AI systems) and deployers (companies that use AI systems in their operations). If your sales team uses an AI-powered lead scoring tool, if your HR department screens CVs with an AI plugin, if your finance team runs forecasts through a machine learning model — you are a deployer. The regulation applies to you.

According to the Stanford HAI AI Index Report 2024, 67% of organizations globally have adopted at least one AI tool in their business operations. In the DACH mid-market, that number tracks similarly based on what we see in the field. Most of these companies don’t think of themselves as “AI companies,” but the EU AI Act doesn’t care about your self-image. It cares about what systems you operate.

Here’s who specifically needs to pay attention:

  • Any company using AI tools in business processes — ChatGPT, GitHub Copilot, AI-powered CRM features, automated document processing, chatbots on your website.
  • Any company deploying AI that affects people — hiring tools, credit scoring, customer service automation, insurance underwriting.
  • Any company selling into the EU market — even if you’re headquartered outside the EU, if your AI system is used by EU residents, the Act applies.
  • Any company in a regulated industry — finance, healthcare, insurance, legal. These sectors have the highest concentration of high-risk AI systems.

The European Commission’s own guidance makes this explicit: the obligations follow the AI system, not the company’s primary business. A logistics company using AI for route optimization has compliance obligations for that system, same as an AI startup.

Risk classification explained simply

The entire EU AI Act framework is built on risk classification. Every AI system you operate falls into one of four categories, and your obligations scale with the risk level. Get the classification right and everything else follows logically.

EU AI Act risk tiers

  • Unacceptable risk — prohibited since February 2025. Examples: social scoring, subliminal manipulation, real-time biometric ID in public spaces, emotion recognition in workplaces. Obligation: banned outright; operating one is a regulatory violation.
  • High-risk — where most compliance effort lives. Examples: CV screening, credit scoring, insurance risk assessment, automated grading, critical infrastructure management. Obligation: conformity assessment, technical documentation, data governance, human oversight, EU database registration.
  • Limited risk — transparency obligations. Examples: chatbots, AI-generated content (text, image, audio, video), non-banned emotion recognition. Obligation: users must be told they're interacting with AI; generated content must be labeled.
  • Minimal risk — document the classification itself. Examples: spam filters, recommendation engines for non-critical apps, AI-powered internal search, analytics dashboards. Obligation: no specific requirements, but you still need to record why the system is minimal risk.

The four-tier classification from Regulation (EU) 2024/1689. Most mid-market AI systems fall into limited or minimal risk; the dangerous ones are high-risk HR and finance tools that companies don't yet recognize as high-risk.

Unacceptable risk

These AI practices are banned outright. Since February 2025, you cannot operate:

  • Social scoring systems that rank people based on behavior or personal characteristics
  • Real-time biometric identification in public spaces (with narrow law enforcement exceptions)
  • AI that manipulates people through subliminal techniques or exploits vulnerabilities
  • Emotion recognition systems in workplaces and educational institutions

Most mid-market companies don’t have these. But check your vendor tools — if you’re using an employee monitoring tool that claims to detect “engagement” or “sentiment” from webcam feeds, that’s now illegal.

High-risk

This is where most compliance effort concentrates. An AI system is high-risk if it’s used in specific domains listed in Annex III of the regulation:

  • Employment and worker management — CV screening, automated interview analysis, performance monitoring, task allocation based on individual behavior
  • Access to essential services — credit scoring, insurance risk assessment, utility service allocation
  • Education and vocational training — automated grading, admission decisions, learning assessment
  • Law enforcement and migration — lie detection, risk profiling, border control (mostly relevant for public sector, but private contractors take note)
  • Critical infrastructure — AI managing energy grids, water systems, transport networks

For a typical mid-market company, the most likely high-risk systems are in HR and finance. That AI-powered applicant tracking system? High-risk. The credit assessment model for your B2B customers? Likely high-risk. The AI feature in your ERP that flags employee performance anomalies? High-risk.

High-risk systems require: a conformity assessment, technical documentation, data governance measures, human oversight design, accuracy and security standards, and registration in the EU database.

Limited risk

Limited-risk systems have transparency obligations. You must tell people when they’re interacting with AI. This covers:

  • Chatbots — users must know they’re talking to an AI, not a human
  • AI-generated content — text, images, audio, or video created by AI must be labeled as such
  • Emotion recognition or biometric categorization — if not banned outright, users must be informed

For mid-market companies, this usually means: label your chatbot as an AI chatbot, disclose when marketing content is AI-generated, and make sure your customer-facing AI interactions include clear notices.
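
To make the chatbot case concrete, here is a toy sketch of wiring the disclosure into the response path so it can't be forgotten. The function and constant names are mine, not from any framework:

```python
from typing import Callable

AI_DISCLOSURE = "Hinweis: Sie chatten mit einem KI-Assistenten. / Note: you are chatting with an AI assistant."

def reply_with_disclosure(user_message: str,
                          generate: Callable[[str], str],
                          first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn of a conversation.

    The point of enforcing this in code rather than in a welcome message:
    the notice ships with every conversation, not just the ones where
    someone remembered to configure it.
    """
    answer = generate(user_message)
    return f"{AI_DISCLOSURE}\n\n{answer}" if first_turn else answer
```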

Minimal risk

Everything else. Internal analytics dashboards, spam filters, AI-powered search within your intranet, recommendation engines for non-critical applications. No specific regulatory requirements, though the European Commission encourages voluntary codes of conduct.

Most of your AI tools will fall here. The key is documenting why they’re minimal risk — that classification decision itself needs to be recorded.

The five things you must do before August 2026

I’m going to be specific. Not “develop an AI governance framework” — actual tasks with concrete outputs.

AI system inventory

You cannot classify what you haven’t cataloged. The first step is a complete inventory of every AI system your company uses, develops, or deploys.

This includes:

  • Commercial AI tools — ChatGPT Enterprise, Microsoft Copilot, Salesforce Einstein, HubSpot AI features, any SaaS product with AI functionality
  • Custom-built models — anything your data team or external consultants built
  • Embedded AI — features inside existing software that use machine learning (your ERP might have AI-powered demand forecasting that nobody thinks of as “an AI system”)
  • Third-party APIs — if you call OpenAI’s API, Google’s Vertex AI, or any ML inference endpoint, that’s an AI system you deploy

For each system, document: what it does, what data it processes, who it affects, who operates it, and who the vendor or developer is.
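
One lightweight way to keep the inventory consistent is a fixed record per system. A minimal sketch in Python — the field names are my own shorthand for the list above, not terminology from the regulation:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the AI system inventory (illustrative fields)."""
    name: str                   # e.g. "HubSpot AI lead scoring"
    purpose: str                # what it does in your process
    data_processed: list[str]   # categories of data it touches
    affected_persons: str       # whose outcomes it influences: employees, customers, ...
    operator: str               # team or role that runs it day to day
    vendor: str                 # vendor, or internal team for custom builds
```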

We typically find 5 to 15 AI systems when a mid-market company actually looks — and most companies didn’t realize half of them existed.

Risk classification

For each system in your inventory, determine the risk level using the categories above. This isn’t guesswork — the regulation is specific about which domains trigger high-risk classification.

The practical approach:

  1. Check Annex III of the EU AI Act for your system’s domain
  2. Assess whether the system makes or materially influences decisions about people
  3. Document your classification reasoning
  4. When in doubt, classify higher — it’s easier to downgrade later than to explain to a regulator why you classified a borderline system as minimal risk
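
Encoded as a first-pass filter, that logic looks roughly like the sketch below. The domain list is abbreviated from Annex III, the names are mine, and the whole function is a triage aid, not legal advice — prohibited practices should be screened against the banned list before anything reaches it:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices: screen these out first
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Abbreviated from Annex III -- always check the regulation text for the full list.
ANNEX_III_DOMAINS = {
    "employment", "essential_services", "education",
    "law_enforcement", "critical_infrastructure",
}

def triage(domain: str, influences_people: bool, user_facing: bool) -> RiskTier:
    """First-pass classification; borderline cases still go to a human reviewer."""
    if domain in ANNEX_III_DOMAINS and influences_people:
        return RiskTier.HIGH
    if user_facing:   # chatbots and generated content carry transparency duties
        return RiskTier.LIMITED
    return RiskTier.MINIMAL   # record the reasoning either way (point 3 above)
```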

PwC’s 2024 Global AI Governance Survey found that 73% of companies surveyed had not yet completed risk classification for their AI systems. If you haven’t started, you’re in the majority — but that majority is running out of time.

Documentation

For high-risk systems, you need technical documentation covering:

  • System purpose and intended use — what the AI does and what it’s supposed to be used for
  • Data governance — what training data was used, how it was collected, quality measures
  • Architecture and design — how the system works at a level sufficient for auditing
  • Performance metrics — accuracy, error rates, known limitations
  • Risk management — identified risks and mitigation measures

For systems you didn’t build (most of them, for mid-market deployers), this means getting documentation from your vendors. Start asking now. Some vendors have EU AI Act compliance packages ready. Others have nothing. You need to know which is which before August.

For limited-risk systems, the documentation is lighter: system description, transparency measures, and classification reasoning.

For minimal-risk systems: document the classification decision and move on.
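
A single documentation skeleton whose fields scale with the tier keeps this from sprawling. A sketch — these dictionaries are an illustrative format of mine, not an official EU template:

```python
# High-risk: the full technical documentation set.
HIGH_RISK_DOC = {
    "system_purpose": "",          # what the AI does and its intended use
    "data_governance": "",         # training data sources, collection, quality measures
    "architecture": "",            # how it works, at a depth sufficient for auditing
    "performance_metrics": "",     # accuracy, error rates, known limitations
    "risk_management": "",         # identified risks and mitigation measures
}

# Limited-risk: the lighter set.
LIMITED_RISK_DOC = {
    "system_description": "",
    "transparency_measures": "",   # how users are told they are dealing with AI
    "classification_reasoning": "",
}

# Minimal-risk: record the decision and move on.
MINIMAL_RISK_DOC = {
    "classification_reasoning": "",
}
```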

Human oversight mechanisms

The EU AI Act requires that high-risk AI systems include mechanisms for human oversight. This doesn’t mean a human reviews every output — it means:

  • A designated person understands the system’s capabilities and limitations
  • They can interpret the system’s output in context
  • They can decide not to use the system’s output or override it
  • They can intervene or stop the system when needed
  • There’s a feedback loop for flagging issues

In practice, this means designing workflows where AI recommendations are presented to a human decision-maker with enough context to evaluate them. For an HR screening tool, that might mean the AI ranks candidates but a human reviews the ranking with access to the AI’s reasoning before anyone gets rejected. For a credit assessment model, it means a credit officer sees the AI’s score alongside the factors that drove it and can override the decision.

The oversight design needs to match the risk. A chatbot giving product information needs a different oversight model than an AI system deciding who gets a loan.
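
In code, the oversight requirement boils down to a gate between the model's output and the decision: the human sees the score and its drivers, and override is always available. A minimal sketch, with invented field names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIOutput:
    subject_id: str     # the person or case the output concerns
    score: float
    drivers: list[str]  # the factors the model surfaced for this score

def human_review(output: AIOutput, reviewer: str, decision: str, note: str = "") -> dict:
    """Record a human decision over an AI output; 'override' is always a valid choice."""
    assert decision in {"accept", "override", "escalate"}
    return {
        "subject_id": output.subject_id,
        "ai_score": output.score,
        "ai_drivers": output.drivers,
        "human_decision": decision,   # the person decides, the model recommends
        "note": note,                 # why, if the reviewer deviated from the AI
        "reviewer": reviewer,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```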

Ongoing monitoring and audit trails

Compliance isn’t a one-time project. After August 2026, you need:

  • Logging — high-risk systems must maintain logs of their operation, sufficient for post-hoc auditing
  • Performance monitoring — track accuracy, drift, and bias indicators over time
  • Incident reporting — serious incidents involving high-risk AI must be reported to authorities
  • Regular reviews — periodic reassessment of risk classification as systems evolve

Set up automated logging now. If your AI systems don’t produce audit-friendly logs today, retrofitting that after the deadline is painful and expensive.
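
As a starting point, one JSON line per inference into append-only storage is already audit-friendly. A sketch using only the Python standard library; the field set is illustrative, not a mandated schema:

```python
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("ai_audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("ai_audit.jsonl"))  # in production: retained, searchable storage

def log_inference(system: str, model_version: str, inputs_ref: str, output_summary: str) -> None:
    """Append one audit record per model call."""
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": model_version,  # needed to explain behavior after the fact
        "inputs_ref": inputs_ref,        # pointer to stored inputs, not raw personal data
        "output_summary": output_summary,
    }))
```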

What compliance actually looks like in practice

I want to be clear about what we’re not talking about. We’re not talking about a 200-page compliance manual that sits in a SharePoint folder and gets reviewed once a year. That approach fails.

What works for mid-market companies:

Integrate into existing workflows. If you already have ISO 27001 or SOC 2 processes, your AI documentation fits into those structures. Risk assessments, vendor management, incident response — you’re already doing versions of these. Add the AI-specific elements to what exists rather than building parallel systems.

Use templates, not blank pages. The European Commission has published guidance documents and templates. We’ve developed our own documentation templates specifically for mid-market deployers that map directly to the regulation’s requirements. The goal is filling in specifics, not inventing a format.

Automate audit trails. Most modern AI platforms produce logs. The work is in routing those logs to a system that retains them, makes them searchable, and connects them to your compliance documentation. My brother Oleks builds the technical controls; I handle the compliance framework. Between us, we cover both sides: the governance paperwork and the infrastructure that makes it enforceable.

Make it someone’s job. The regulation doesn’t require hiring a dedicated “AI officer” for mid-market companies, but someone needs to own it. In most cases, this maps to whoever handles data protection or IT compliance today. Give them the mandate, the training, and the time allocation.

Start with your highest-risk systems. If you have 10 AI systems and two are high-risk, get those two compliant first. The minimal-risk systems need only basic documentation. Don’t let perfect be the enemy of done.

Common misconceptions

“We just use ChatGPT — this doesn’t apply to us”

It does. If your employees use ChatGPT in their work — drafting emails, analyzing data, generating reports — you’re deploying an AI system. The transparency obligations apply: if outputs reach customers or partners, they may need to know AI was involved. If you’re using it in a high-risk domain (like generating text for employment decisions), the full high-risk requirements kick in.

The distinction isn’t whether you built the AI. It’s how you use it.

“We’re too small for this”

The EU AI Act does include some proportionality provisions for SMEs, including lighter conformity assessment options and fee reductions. But it does not exempt small or mid-market companies from the core obligations. If you deploy a high-risk AI system, you need documentation, oversight, and monitoring regardless of your company size.

That said, the practical effort scales with your AI footprint. A company with three AI tools spends far less time on compliance than one with thirty. For most mid-market companies, we’re talking weeks of work, not months — if you approach it methodically.

“Our AI vendor handles compliance for us”

Partially. Providers (vendors who build AI systems) have their own obligations under the Act — they must deliver technically compliant systems with adequate documentation. But deployer obligations are separate and non-transferable. You can’t outsource your duty to implement human oversight, maintain logs, conduct risk assessments, or report incidents.

Think of it like GDPR: your cloud provider handles data security at the infrastructure level, but you’re still responsible for how you process personal data. Same structure here.

Ask your vendors for their EU AI Act compliance documentation. If they can’t provide it, that’s a red flag — and it’s your problem to solve, not theirs.

“We can figure this out after the deadline”

You can. And you’ll pay for it. Penalties under the EU AI Act range from €7.5 million to €35 million, or 1% to 7% of global annual turnover, depending on the severity of the violation. For prohibited practices, it’s the maximum. For deployer obligations, it’s the lower end — but €7.5 million is still significant for a mid-market company.

Beyond fines, the market pressure is real. Enterprise clients and public sector procurement offices in DACH are already including EU AI Act compliance as a requirement in RFPs. If you can’t demonstrate compliance, you lose deals. We’ve seen this happen in Q1 2026 already.

The regulation has been public since 2024. Authorities will have limited patience for companies that simply didn’t bother to prepare.

Frequently asked questions

The misconceptions above cover the most common questions we get. If your question isn’t there, the short version is: yes, this probably applies to you, and no, you probably can’t ignore it.

The European Commission maintains an official EU AI Act page with the full regulation text, guidance documents, and updates on implementation. The AI Act Explorer by the Future of Life Institute is also a useful reference for navigating the regulation by topic.

For DACH-specific guidance, the German Federal Ministry for Digital and Transport (BMDV) and the Austrian Federal Ministry for Digital and Economic Affairs (BMDW) have published national implementation updates worth reviewing.

Get compliant before August 2026

We help mid-market companies in DACH get EU AI Act compliant — classification, documentation, and technical controls. Book a 20-minute call to assess where you stand, or see our Applied AI practice.

Have a similar challenge?

20 minutes. No slides. We'll dig into your specific situation.