The European Union’s AI Act has arrived. Political agreement was reached in December 2023, the Act entered into force in August 2024, and its obligations phase in through 2027. It’s already reshaping how companies around the world think about AI governance.
If you’re a US executive, you might think this doesn’t affect you. Your company is headquartered in New York, your data centers run in Virginia, your customers are in California. Surely the EU can regulate the EU and leave you alone?
That’s not how it works. The EU AI Act has extraterritorial reach, and if any part of your AI program touches European users, data, or operations, you’re in scope. For many US technology companies, that’s unavoidable.
Here’s what you need to know, and what you should do before enforcement deadlines hit.
Why US companies are affected: extraterritorial scope
The EU AI Act applies to three categories of companies:
- Companies placing AI systems on the EU market (directly selling or distributing)
- Companies whose AI systems are intended to be used in the EU (even if not sold there directly)
- Companies whose AI systems produce outputs that are used in the EU (even if the systems themselves run elsewhere)
If your company operates internationally, deploys AI models that serve European users, or exports data or models to EU-based partners, you’re almost certainly in one of these categories.
What makes this different from other EU regulations is the scope of “AI system.” The EU defines it broadly: a machine-based system that operates with some degree of autonomy and infers, from the inputs it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. That covers recommendation systems, automated decision-making, content moderation algorithms, fraud detection, recruiting tools, and most other ML applications your company operates.
There’s no carve-out based on corporate structure, either. If you’re a US parent company with a European subsidiary, or if your US operations support EU business, you need to comply.
Risk classification: know what tier your systems are in
The EU AI Act uses a risk-based approach. Not all AI carries the same regulatory burden. The regulation sorts AI systems into four tiers:
Unacceptable risk
These systems are simply prohibited. Examples: social scoring systems, real-time biometric identification in public spaces (with narrow exceptions), and AI systems that manipulate human behavior in ways that cause harm. The ban on these practices has applied since February 2025; if your organization is running AI systems in this category, you need to shut them down now. This is non-negotiable.
High risk
These systems require extensive compliance. They include systems that make or influence important decisions about people: hiring, lending, access to education or benefits, immigration, law enforcement, and other critical life domains. High-risk systems require impact assessments, extensive documentation, human oversight, robust testing, and continuous monitoring. Expect significant operational overhead here.
Limited risk
Systems in this tier (mostly chatbots and generative AI) require transparency: you need to disclose that an AI system is being used and provide adequate information about its capabilities and limitations. Lower compliance burden than high-risk, but still mandatory.
Minimal risk
Everything else. No specific regulatory requirements, though you should still follow good governance practices.
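A simple way to operationalize this classification is a living inventory that records each system’s risk tier and EU exposure. Here is a minimal sketch in Python; the tier labels mirror the Act’s four categories, but the fields and example entries are illustrative assumptions, not an official template.

```python
# Minimal sketch of an AI-system inventory keyed to the Act's four risk tiers.
# Field names and example entries are illustrative, not an official template.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # full conformity obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations


@dataclass
class AISystem:
    name: str
    purpose: str
    serves_eu_users: bool
    tier: RiskTier


inventory = [
    AISystem("resume-screener", "Rank job applicants", True, RiskTier.HIGH),
    AISystem("support-chatbot", "Answer customer questions", True, RiskTier.LIMITED),
    AISystem("spam-filter", "Filter internal email", False, RiskTier.MINIMAL),
]

# Anything prohibited or high-risk that touches EU users goes to the top of the roadmap.
in_scope = [
    s for s in inventory
    if s.serves_eu_users and s.tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)
]
print([s.name for s in in_scope])
```

Even a spreadsheet with these fields works; the point is that every system has an owner, a tier, and a documented answer to whether it touches EU users.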
What high-risk compliance actually requires
If your AI systems fall into the high-risk category, here’s what compliance looks like:
Risk assessment and documentation. You need to conduct an impact assessment for each high-risk system, documenting the intended purpose, the data it uses, how it makes decisions, and the potential harms if it fails. You must maintain this documentation in an EU language and be prepared to share it with regulators.
Data quality and governance. High-risk systems must use training data that’s representative, free of bias, and properly documented. You need to demonstrate data governance practices, including data source tracking and quality assurance.
Model testing and performance. You must test your models for accuracy, robustness, and cybersecurity. For systems making consequential decisions, you need to validate that they perform consistently across different demographic groups. Disparate impact testing is not optional.
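As a rough illustration of what disparate impact testing can look like in practice, here is a minimal Python sketch using the four-fifths rule heuristic familiar from US fair-lending and employment practice. The column names, the 0.8 threshold, and the toy data are assumptions; the Act itself does not prescribe a specific metric.

```python
# Minimal disparate-impact check: compare favourable-outcome rates across groups
# and flag any group whose rate falls below 80% of the best-performing group's.
import pandas as pd


def disparate_impact_ratios(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Return each group's favourable-outcome rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()


# Toy decision data for illustration only.
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0, 0],
})

ratios = disparate_impact_ratios(decisions, "group", "approved")
flagged = ratios[ratios < 0.8]  # four-fifths rule threshold (assumption, not from the Act)
print(ratios)
print("Groups needing review:", list(flagged.index))
```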
Human oversight and explainability. High-risk systems must have human oversight mechanisms in place. Decision-makers need to understand how the system reached its conclusion. “The model said so” is not sufficient; you need explainability that a human can act on.
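One model-agnostic way to give reviewers something actionable is permutation importance, which shows which inputs most influence a model’s predictions. The sketch below uses scikit-learn with synthetic data and hypothetical feature names; the Act requires meaningful oversight and explanation, not this particular technique.

```python
# Minimal explainability sketch: rank features by how much shuffling each one
# degrades model accuracy (permutation importance). Data and names are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "tenure", "age", "prior_defaults"]  # hypothetical labels

model = LogisticRegression(max_iter=1000).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Most influential features first, so a reviewer sees what drove the decisions.
for idx in np.argsort(result.importances_mean)[::-1]:
    print(f"{feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```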
Continuous monitoring and incident reporting. You must monitor system performance after deployment. If the system underperforms or causes harm, you need to report it to regulators and users. You must maintain an audit log of all decisions and be able to produce it for review.
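In practice, that means every consequential decision gets an append-only record you can hand to a regulator or an affected user. The sketch below logs decisions as JSON lines; the field names and schema are assumptions for illustration, since the Act requires logging and traceability but does not mandate a particular format.

```python
# Minimal append-only decision audit log for a high-risk system (illustrative schema).
import json
from datetime import datetime, timezone


def log_decision(path: str, system: str, subject_id: str,
                 inputs: dict, output: str, model_version: str) -> None:
    """Append one decision record so it can be produced for regulators or affected users."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": model_version,
        "subject_id": subject_id,
        "inputs": inputs,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


log_decision(
    "decisions.jsonl",
    system="resume-screener",
    subject_id="candidate-123",
    inputs={"years_experience": 4, "degree": "BSc"},
    output="advance_to_interview",
    model_version="2025-05-01",
)
```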
User information and consent. You need to inform people that they’re being evaluated by an AI system and explain their right to contest the decision and request human review. In some cases, you need to obtain explicit consent.
This is comprehensive, and it’s expensive. But it’s non-negotiable for any company with high-risk AI systems serving European users.
Timeline and enforcement
The EU AI Act enforcement timeline breaks down like this:
- February 2025: The ban on unacceptable-risk systems applies. Prohibited systems must already be withdrawn, not merely scheduled for retirement
- August 2025: Obligations for general-purpose AI models take effect
- August 2026: Most high-risk obligations apply. Penalties for non-compliance are steep: up to 35 million euros or 7% of annual global revenue for prohibited practices, and up to 15 million euros or 3% for most other violations, whichever is higher
- August 2027: Requirements extend to high-risk AI embedded in products already covered by EU safety legislation
For US companies, this means the decision point is now. The window to audit your AI systems, classify them by risk level, and bring high-risk systems into compliance is closing fast. That’s a substantial effort for most organizations.
How EU AI Act compliance intersects with US frameworks
You might already be operating under US regulatory frameworks: SEC disclosures about AI risk, FTC guidance on AI transparency, industry-specific regulations for financial services or healthcare. The EU AI Act doesn’t replace these. It adds on top of them.
The good news: there’s significant overlap. If you’re already doing disparate impact testing for fair lending compliance, that work counts toward EU high-risk compliance. If you’re documenting AI systems for internal governance, you can expand that documentation to meet EU requirements. If you’re building explainability into your models, you’re already addressing one of the core EU requirements.
The strategy is to treat EU compliance as an enhancement to your existing framework, not a parallel system. Start by inventorying your AI systems and their current governance status. Then identify the highest-risk systems and work backward from the August 2026 deadline to get them compliant.
What US executives should do now
First: audit your AI portfolio. Which systems serve European users? Which make consequential decisions? Which have potential for discrimination or harm? Get clear on what’s in scope before you try to fix it.
Second: classify by risk tier. Use the EU AI Act definitions. Be honest about which systems are high-risk. This determines your compliance roadmap.
Third: identify gaps in your current governance. Compare what you’re already doing against what the EU requires. Where are the gaps? Are you doing disparate impact testing? Do you have explainability mechanisms? Can you produce audit logs of decisions? What’s missing? A minimal gap-analysis sketch follows this list.
Fourth: build a compliance roadmap. Prioritize by deadline (unacceptable risk first, then high-risk) and by scope (start with the systems that serve the most European users or carry the most liability). Budget for the effort. This isn’t a three-month project; it’s an ongoing investment in governance infrastructure.
Fifth: think about this strategically, not just as a compliance checkbox. The EU AI Act is the regulatory future. Other countries will follow. If you build AI governance that meets EU standards, you’re building a system that’s defensible in multiple jurisdictions. You’re also building organizational practices that catch problems before they become regulatory violations.
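To make the gap analysis in the third step concrete, here is a minimal Python sketch that compares a checklist of the controls described earlier against what a given system already has. The control names paraphrase the requirements in this article, and the example status is hypothetical, not pulled from the Act’s text.

```python
# Minimal gap analysis: required controls (paraphrased from this article) versus
# what a given system already has in place. The status dict is a hypothetical example.
REQUIRED_CONTROLS = [
    "impact_assessment",
    "data_governance_documentation",
    "disparate_impact_testing",
    "human_oversight_procedure",
    "explainability_mechanism",
    "decision_audit_log",
    "post_deployment_monitoring",
    "incident_reporting_process",
    "user_notification_and_contestation",
]


def find_gaps(current_controls: dict[str, bool]) -> list[str]:
    """Return the required controls that are missing or not yet implemented."""
    return [c for c in REQUIRED_CONTROLS if not current_controls.get(c, False)]


# Hypothetical status for one system: fair-lending work already covers bias testing.
status = {"disparate_impact_testing": True, "decision_audit_log": True}
print(find_gaps(status))
```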
The broader implication
The EU AI Act signals a regulatory turn toward AI governance that’s detailed, prescriptive, and attached to real penalties. This isn’t a guidance document or a best-practice framework. This is law with enforcement mechanisms and fines that can reach billions of dollars.
For US executives, the lesson is clear: if you’re not building AI governance infrastructure now, you’re running an unquantified regulatory risk. The EU isn’t the only place where AI regulation is coming. Your governance practices should be robust enough to survive scrutiny in multiple jurisdictions.
The executives who move early on this, who start the audit now and prioritize compliance before the August 2026 deadline, will have a significant advantage. They’ll avoid last-minute scrambling. They’ll catch problems early. And they’ll have built organizations that think rigorously about AI risk before regulation forces them to.
The EU AI Act isn’t a threat to innovation. It’s a forcing function for responsible AI development. If you’re already doing it right, compliance is straightforward. If you’re not, now is the time to start.