The Slack ping came at 11:47 PM, Frankfurt time. Klaus had been on the call for ninety minutes already, and he wasn't supposed to be working at this hour on a Tuesday, but the Notified Body assigned to the CE conformity assessment had emailed at 4 PM with what sounded like a simple question: could his team produce the Article 12 logs for the past fourteen months of production data? He typed yes. He was wrong.

Wrong. Klaus is a certification engineer at a German automotive supplier outside Stuttgart, and what his team had built was a vision-based driver assistance module classified as a safety component of a regulated product under Annex I of Regulation (EU) 2024/1689. They had ISO 27001. They had a functional safety dossier signed off by their TÜV auditor in March. They did not have Article 12 logs structured the way the EU AI Act would soon require, and they had less than ninety days to fix it.

The deadline is August 2, 2026, and unlike most regulatory dates it does not slide. After it, the high-risk AI provisions of the Act take legal effect across the single market. Three obligations dominate the conversation for teams shipping production AI: Article 12 (automatic logging of events for traceability), Article 18 (ten-year retention of technical documentation), and Article 99(4) (administrative fines up to 15 million euros or 3% of total worldwide annual turnover, whichever is higher). Most teams I talk to have heard of the deadline. They are not ready.

What does August 2, 2026 actually trigger?

August 2 is not a soft date. The European Parliament approved the Act in March 2024, the regulation entered into force on August 1, 2024, and the implementation calendar was structured in tiers. Prohibited-practice rules took effect February 2, 2025. The general-purpose AI rules came online August 2, 2025. The high-risk obligations, the ones that hit production AI systems used in employment decisions, critical infrastructure, regulated products, education, and a dozen other Annex III categories, land on August 2, 2026.

I used to think the high-risk classification was rare. I was wrong about that. After three months reading conformity assessments for clients in fintech, regulated SaaS, and HR technology, I now think most enterprise AI systems built for the European market trigger at least one Annex III bucket, even when the team that built them never expected to be classified as a provider under Article 25. The classification cascades. A screening assistant that scores candidate resumes for a hiring manager? Annex III, point 4. A safety component embedded in a medical device? Annex I. The list is broader than most legal teams realize.

Here is the part that catches CISOs off guard. The Act does not ask whether you trained the model. It asks whether you put it on the EU market. If you fine-tune a base model and deploy the result inside a product sold in Munich, you are a provider under Article 25, and the high-risk clock starts ticking the day the system goes live. Compliance kicks in at deployment. Not at training. Not at conception. At deployment.

EUR 15M / 3%
maximum administrative fine for non-compliance with high-risk obligations under Article 99(4), whichever amount is higher

Why is your CI pipeline already non-compliant?

Most CI pipelines log everything they need for site reliability: build duration, test pass rate, deployment artifact, container hash, the unit test that flaked at 3:14 AM. None of that is what Article 12 cares about. Article 12 cares about events that affect the operation of the AI system itself: input distribution drift, output classifications, human override actions, threshold crossings on monitored metrics, the moment a confidence score falls below the policy bar and a fallback path executes. The logs you already have? Wrong layer entirely. They tell you the pipeline ran. They do not tell you what the model decided, when, on what input, with what confidence, and whether a human reviewer got involved.
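To make the gap concrete, here is a minimal sketch of what a per-decision event record could look like for a vision module like Klaus's. The Act prescribes the kinds of events to capture, not a schema, so every field name here is illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Article12Event:
    # Illustrative record; field names are ours, not the regulation's.
    timestamp_utc: str        # synchronized inference timestamp
    input_metadata: dict      # e.g. frame hash, camera ID, resolution
    output_label: str         # what the model decided
    confidence: float         # the score behind the decision
    threshold_breached: bool  # did the score fall below the policy bar?
    human_override: bool      # override actions are logged as their own events

def make_event(label: str, confidence: float, meta: dict,
               policy_bar: float = 0.85) -> Article12Event:
    # policy_bar is a made-up example threshold, not a legal value.
    return Article12Event(
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
        input_metadata=meta,
        output_label=label,
        confidence=confidence,
        threshold_breached=confidence < policy_bar,
        human_override=False,
    )
```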

You can fix this in two ways. The wrong way is to bolt on a compliance layer at the end and hope the auditor accepts a screenshot of Datadog. The right way (and Klaus's team eventually got there, two months after that Tuesday call) is to make Article 12 logging a first-class architectural concern, the same way you treat security headers or audit fields on a database row. Logs are not an afterthought. They are part of the contract you signed when you placed the system on the market.
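Concretely, "first-class" can be as small as refusing to return a prediction that has not been logged. A sketch, reusing the hypothetical Article12Event above; the model interface is an assumption:

```python
import json
import logging
from dataclasses import asdict

# Two loggers, one inference call: the SRE path keeps its usual short
# retention, the compliance path feeds the long-retention Article 12 trail.
sre_log = logging.getLogger("app.inference")
compliance_log = logging.getLogger("compliance.article12")

def classify(model, frame, meta: dict) -> str:
    label, confidence = model.predict(frame)  # assumed model interface
    event = make_event(label, confidence, meta)
    compliance_log.info(json.dumps(asdict(event)))  # audit view, written first
    sre_log.info("inference ok label=%s conf=%.2f", label, confidence)
    return label
```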

What kinds of AI systems trigger Article 12 logging?

Three buckets, mostly. First, anything in Annex III: biometric identification, employment screening, credit scoring, education evaluation, law enforcement support, migration management, administration of justice, democratic processes. Second, anything in Annex I: AI as a safety component of products already regulated under sector legislation (medical devices under the MDR, machinery under the Machinery Regulation, vehicles under the General Safety Regulation). Third, the sleeper case: systems that meet the Article 6 classification criteria even though they live inside a broader product line that nobody flagged for AI scrutiny.

Here is what surprises people. The classification logic does not care about your intent. A team building a customer-facing chatbot at a Quebec credit union (and yes, I have been asked about this exact scenario by a CTO at a Desjardins-affiliated tech vendor, twice in the last quarter) probably falls outside Annex III for the chatbot itself. But if that same chatbot is connected to a model that scores creditworthiness, the score-producing model is high-risk, full stop. The product team had not realized that. Their compliance counsel had not flagged it either. The first time it came up in writing was when their banking partner asked for the conformity dossier.

How long do you have to keep the logs and documentation?

Two windows that matter. Article 19 says the provider must retain automatically generated logs for at least six months, where logs are under the provider's control. Six months. That is the floor, not the ceiling. The ceiling is set by sectoral law, by your customer contracts, and by the technical documentation rule in Article 18, which fixes the retention period for technical documentation, the quality management system documentation, the conformity assessment, and the EU declaration of conformity at ten years from the moment the system was placed on the market or put into service.

Ten years is a long time. Most observability stacks I have audited keep three to ninety days of full-fidelity logs and aggregate the rest into rollups that strip out the per-event detail an auditor will ask for. That works fine for site reliability. It does not work for the EU AI Act. You either need a separate compliance log path with cold-storage retention, or you extend your existing pipeline to keep the events Article 12 cares about for the legally required window. The good news? The events you actually need to retain are smaller in volume than you think. The bad news? You probably are not retaining them today, and the gap is invisible until someone files an Article 21 request.
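What the split can look like at the handler level, continuing the two-logger sketch above. The retention numbers follow Articles 18 and 19; the paths and rotation settings are illustrative:

```python
import logging
import logging.handlers

# SRE path: rotate daily, keep ~30 days, aggregate upstream as usual.
sre_handler = logging.handlers.TimedRotatingFileHandler(
    "logs/app.log", when="D", backupCount=30)
logging.getLogger("app.inference").addHandler(sre_handler)

# Compliance path: append-only JSONL that is never rotated away locally.
# A nightly job ships closed files to cold storage, where bucket-level
# retention enforces the Article 19 floor (or the full Article 18 window).
compliance = logging.getLogger("compliance.article12")
compliance.addHandler(logging.FileHandler("logs/article12.jsonl"))
compliance.propagate = False   # keep audit events out of the SRE stream
compliance.setLevel(logging.INFO)
```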

[Chart: log retention, where most teams sit vs. where the Act lands them. Typical SRE practice: 30 to 90 days of full-fidelity logs, then aggregated rollups. Article 19: minimum 6 months for automatic logs under the provider's control. Article 18: 10 years for technical documentation, QMS documentation, conformity assessment, and the EU declaration of conformity. Same data feed, different retention windows, different audit outcomes. Bar lengths illustrative, not to scale. Source: Articles 18 and 19, Regulation (EU) 2024/1689.]
Article 19 sets a minimum log retention of six months. Article 18 fixes documentation retention at ten years. SRE defaults sit far below both.

What does compliant logging actually look like?

Six event types. A separate log path. Cold storage retention indexed by the conformity dossier. The short version fits on a sticky note. The long version is what Klaus's team built over eight weeks, and the pattern generalizes to any high-risk system with a Notified Body breathing down its neck.

Klaus's team at the German automotive supplier rebuilt their logging architecture in three increments over eight weeks. Weeks 1 to 2: they settled on six event types Article 12 actually cares about for their vision module: input frame metadata, classification output, confidence score, human override flag, threshold breach indicator, and a synchronized inference timestamp. Weeks 3 to 5: those six events moved to a separate log path, written to a WORM (write-once-read-many) cold-storage bucket with retention set to ten years. Weeks 6 to 8: the log path got wired into the technical documentation pipeline so the Article 18 dossier could reference the schema and access controls directly. Notified Body sign-off, week nine.
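The guide does not say which storage backend the team used, so treat this as one plausible implementation of the WORM requirement: AWS S3 with Object Lock in compliance mode. The bucket name and region are invented:

```python
import boto3

s3 = boto3.client("s3")

# Object Lock must be enabled at bucket creation; it cannot be
# retrofitted onto an existing bucket.
s3.create_bucket(
    Bucket="art12-logs-example",   # hypothetical name
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
    ObjectLockEnabledForBucket=True,
)

# COMPLIANCE mode means nobody, including the account root user,
# can shorten the retention window once an object is written.
s3.put_object_lock_configuration(
    Bucket="art12-logs-example",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)
```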

Source: pattern documented in the Augment Code EU AI Act developer guide, generalized from observed enterprise rollouts in regulated industries.

How can teams compress 14 months of compliance into eight weeks?

Klaus's team did it because they had a forcing function. Calendar, checklist, deadline, fine. Most teams do not have that pressure yet, which is exactly why August 2 will catch them off guard.

Four moves, in the order I see them work. (1) Map every AI-touching feature against Annex I and Annex III today, not in July. Classification is not legal-only; product owners need to be in the room because the answer depends on use, not technology. (2) Decide who plays Provider and who plays Deployer under Article 25. If you fine-tune, you are probably a Provider, and most teams I have audited got this wrong on the first pass. (3) Separate the compliance log path from the SRE log path. Same data feed, different retention, different access controls. (4) Front-load the technical documentation. Article 18 is not a 100-page Word document; it is a structured artifact that must survive ten years of audits and version drift. Treat it like code.
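On point (4), "treat it like code" can mean a machine-readable dossier fragment that CI validates on every release. A sketch with assumed field names, loosely modeled on the Annex IV headings:

```python
from dataclasses import dataclass, fields

@dataclass
class DossierEntry:
    # Illustrative structure; Annex IV fixes the required content,
    # not the file format.
    system_name: str
    version: str
    placed_on_market: str          # ISO date; starts the ten-year clock
    log_schema_ref: str            # pins the Article 12 event schema in use
    conformity_assessment_ref: str

def ci_gate(entry: DossierEntry) -> None:
    # Fail the release pipeline if any dossier field is empty.
    for f in fields(entry):
        if not getattr(entry, f.name):
            raise ValueError(f"dossier field missing: {f.name}")
```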

Teams that move first ship the same software with a structural moat. Read more on the audit-trail logic in our piece on AI governance for requirements; the principles are the same, even if the regulation now has teeth.

The August 2026 reality

The EU AI Act does not ask whether your AI works. It asks whether you can prove it works the way you said it does, every time, for ten years. Article 12 logging and Article 18 documentation are not paperwork; they are the audit trail that turns your AI system from a black box into a governed product.

Ship without that trail and you carry an EUR 15-million liability on your balance sheet under Article 99(4). Build the trail correctly the first time and you ship the same software with a structural moat that compounds with every audit you pass cleanly.

What are the most common questions about August 2026 compliance?

Does the August 2, 2026 deadline apply to systems already in production?
Yes, in practice. The high-risk obligations under Articles 8 to 15, plus the transparency rules under Article 50, apply to systems placed on the EU market or put into service from August 2, 2026 forward. Pre-existing high-risk systems are grandfathered under Article 111 only until their next significant design change, a bar most actively developed products clear quickly. Providers must reach compliance or risk fines up to EUR 15 million or 3% of total worldwide annual turnover under Article 99(4).
Does the Act apply to companies outside the EU?
Probably yes if you place an AI system on the EU market or if your output is used in the EU. Article 2 of Regulation (EU) 2024/1689 has explicit extraterritorial reach. A US or Canadian company that fine-tunes a model and ships a SaaS product to EU customers triggers provider obligations under Article 25, even if no infrastructure runs inside the EU.
Are general-purpose AI models automatically high-risk?
Not on their own. General-purpose AI models fall under a separate regime in Chapter V of the Act, with obligations focused on transparency, copyright, and (for models with systemic risk) additional safety duties. A high-risk classification kicks in when the model is integrated into a product that touches Annex I or Annex III use cases, like resume screening, credit scoring, or a safety component in a regulated medical device.
What is the difference between a Provider and a Deployer?
A Provider develops or substantially modifies an AI system and places it on the market. A Deployer uses an AI system under its authority. Article 25 says fine-tuning or substantial modification of an existing system can convert a Deployer into a Provider, which means the heavier obligations of Articles 8 to 21 transfer to the modifying party. Most enterprise teams that fine-tune base models are Providers, even if they did not realize it.
What happens once the deadline passes, and could it still move?
Enforcement starts the day the deadline passes. National market surveillance authorities can request technical documentation under Article 21, demand corrective action, and issue fines under Article 99. The proposed Digital Omnibus extension that would push some deadlines to December 2027 or August 2028 is not enacted as of May 2026. Plan for August 2, 2026 unless and until the legislator changes it.
How do Article 12 logs differ from ordinary application logs?
Application logs capture system events: requests, errors, build runs, deployments. Article 12 logs capture AI-specific events relevant to traceability and risk: input metadata, model output, confidence score, human override actions, and threshold crossings. The retention requirement under Article 19 is at least six months, often longer in practice, and the logs must be accessible to market surveillance authorities for the lifetime of the system.
Nicolas Payette
CEO and Founder, Specira AI

Nicolas Payette has spent 25 years in enterprise software delivery, leading digital transformations at companies like Technology Evaluation Centers and Optimal Solutions. He founded Specira AI to solve the root cause of project failure: unclear requirements, not slow code.