Your board is asking about AI. Maybe they saw a headline about a lawsuit, or they’re worried your competitors are moving faster, or they’re genuinely concerned about risk. Whatever the reason, you’re in the room and someone says, “So what’s our AI risk exposure?”
If you’re not prepared, that question is hard to answer. You probably have AI systems in production—built by your teams, bought from vendors, or both—and you may not have a clear, board-ready way to talk about what could go wrong.
But the board doesn’t want a technical deep dive. They want to know: Are we safe? Are we keeping up? What could hurt us? Here’s what they’re asking, even if they don’t say it that way, and how to prepare answers that actually satisfy.
The five questions boards are really asking
Question 1: What’s our legal liability if an AI system fails or discriminates?
This is existential. The board is asking: could we get sued? Could we face regulatory action? What’s the financial exposure?
This question surfaces most after headline-grabbing cases—hiring algorithms that discriminate, lending systems that embed bias, content moderation that causes harm. Boards rightly worry: could that be us?
Weak answer:
“We have controls in place” or “Our systems are accurate.”
This doesn’t address liability. Accuracy ≠ fairness: a model can be highly accurate overall and still discriminate against a particular group.
Strong answer:
“We’ve assessed our AI systems for fairness and discrimination risk. We’re monitoring performance across demographic groups. If we find issues, we have a remediation process. Here’s our exposure: [system], [risk], [mitigation].”
What you need to have: documentation of which AI systems exist, what decisions they influence, which ones have fairness/discrimination risk, and what you’re doing about it. Not perfect—good enough to explain credibly.
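If it helps to make “monitoring performance across demographic groups” concrete, here is a minimal sketch in Python. It assumes decision logs with illustrative `group` and `approved` columns, and the 0.8 cutoff follows the common “four-fifths” rule of thumb, not a legal standard.

```python
# Minimal sketch: compare approval rates across demographic groups.
# Column names (`group`, `approved`) and the 0.8 threshold are
# illustrative assumptions, not a legal or regulatory standard.
import pandas as pd

def disparate_impact_report(df: pd.DataFrame, threshold: float = 0.8) -> pd.DataFrame:
    # Selection (approval) rate for each demographic group.
    rates = df.groupby("group")["approved"].mean()
    # Ratio of each group's rate to the most-favored group's rate.
    ratios = rates / rates.max()
    report = pd.DataFrame({"selection_rate": rates, "impact_ratio": ratios})
    # Flag groups whose ratio falls below the threshold for human review.
    report["flagged"] = report["impact_ratio"] < threshold
    return report

if __name__ == "__main__":
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0,   0,   1],
    })
    print(disparate_impact_report(decisions))
```

A report like this doesn’t settle whether a system discriminates, but it produces the kind of evidence the strong answer above leans on.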
Question 2: If we don’t use AI aggressively, will competitors eat our lunch?
This is the flip side. Boards are terrified of competitive risk—the risk of inaction. They hear about AI productivity gains and worry your organization is falling behind.
Weak answer:
“We’re still evaluating AI” or “We’re being careful about adoption.”
This reads as slow and defensive. It doesn’t address the competitive question.
Strong answer:
“We have a phased AI adoption strategy. Here’s where we’re using AI today [specific examples], here’s where we’re piloting [specific examples], and here’s our roadmap for the next year [specific examples]. We’re balancing speed with governance.”
What you need to have: a clear statement of where AI is already delivering value, where you’re experimenting, and where it makes sense for your business. Not a list of every possible use case—a focused roadmap that shows you’re not standing still.
Question 3: Do we understand our regulatory exposure?
The regulatory landscape is still forming, but major jurisdictions are moving. GDPR already imposes data protection and transparency requirements on automated decision-making. The EU AI Act imposes strict obligations on high-risk systems, with substantial fines for noncompliance. The U.S. is moving toward sector-specific regulation. Boards know they need to understand this before it becomes urgent.
Weak answer:
“We’re complying with applicable laws” or “Our legal team is monitoring this.”
Vague and reactive. You’re not showing awareness of specific risks to your business.
Strong answer:
“Our primary regulatory risk is [sector/region]. Here’s what that requires: [specific requirements]. We’re currently [compliant/preparing]. Here’s our timeline for full compliance: [date].”
What you need to have: a map of which AI systems are subject to which regulations, what compliance looks like, and whether you’re on track. You don’t need to be perfect—you need to show you’ve thought about it and have a plan.
Question 4: Do we have the talent to build and govern AI responsibly?
This is often asked as a proxy for “can we execute?” Boards want to know if your organization has people who can build AI well and people who can make sure it’s used responsibly.
Weak answer:
“We have some data scientists” or “We’re hiring in this area.”
Doesn’t address governance. Governance is not an engineering problem—it’s an organizational problem.
Strong answer:
“We have [N] data scientists and engineers building AI. We’ve assigned governance responsibility to [role/team]. Here’s how we’re building governance capability: [training/hires/partnerships].”
What you need to have: a clear picture of your AI talent—both builders and governors. You probably have builders. You may not have strong governance capability yet, which is fine, but you need to be explicit about building it.
Question 5: If an AI system fails or causes harm, do we have an incident response plan?
This is the nightmare scenario. A system makes a bad decision, triggers a lawsuit, or creates a PR crisis. Does your organization know what to do?
Weak answer:
“We’ll handle it case by case” or “Our legal team will manage it.”
This signals you haven’t thought about it in advance. Incidents move fast. You need a playbook.
Strong answer:
“We have an incident response plan that includes: detection and escalation [how you know], decision-making [who decides], mitigation [how you respond], notification [stakeholders], and remediation [how you fix it]. We’ve tested it.”
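If you want the playbook to be testable rather than aspirational, one option is to write it down as data that can be versioned and reviewed. A minimal sketch, with hypothetical roles and response windows mapped to the five components above:

```python
# Minimal sketch: an AI incident playbook as reviewable data.
# All roles and response windows below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Step:
    stage: str
    owner: str          # who decides or acts (placeholder roles)
    deadline_hrs: int   # target time from detection

PLAYBOOK = [
    Step("detect & escalate",                    "on-call ML engineer", 1),
    Step("decide (pause the system?)",           "AI governance lead",  4),
    Step("mitigate (fall back to manual review)", "product owner",      8),
    Step("notify stakeholders",                  "legal / comms",      24),
    Step("remediate & post-mortem",              "system owner",       72),
]

for step in PLAYBOOK:
    print(f"within {step.deadline_hrs:3}h: {step.stage:38} -> {step.owner}")
```

A tabletop exercise then becomes a matter of walking this list against a simulated incident.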
How to prepare materials
You don’t need a 100-page risk assessment. You need three things:
1. An AI inventory and risk summary. What AI systems do you have? Which ones are highest risk? (Risk = impact if it fails × probability of failure.) You can do this in a simple table: system name, use case, risk level, who’s responsible, status of governance (a minimal sketch follows this list).
2. A governance and strategy narrative. How do you make decisions about AI? What’s your approach to fairness, transparency, and safety? What’s your competitive strategy? Write this down. It doesn’t need to be long—one page is fine. But it should show that you’ve thought about it.
3. A timeline for the next 12 months. Where are you piloting? Where are you scaling? What governance work are you doing? What talent are you building? This shows you have momentum and a plan.
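To make item 1 concrete, here is a minimal sketch of that inventory in Python, using the risk = impact × probability arithmetic from above. Every system, owner, and 1-to-5 score below is a hypothetical placeholder.

```python
# Minimal sketch: an AI inventory with risk = impact x probability.
# All systems, owners, and 1-5 scores are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    use_case: str
    impact: int         # 1-5: damage if it fails
    probability: int    # 1-5: likelihood of failure
    owner: str
    governance: str     # e.g. "reviewed", "in review", "not started"

    @property
    def risk(self) -> int:
        return self.impact * self.probability

inventory = [
    AISystem("resume-screener", "hiring triage",     5, 3, "HR Ops",    "in review"),
    AISystem("churn-model",     "retention offers",  2, 2, "Marketing", "reviewed"),
    AISystem("support-chatbot", "customer FAQs",     3, 4, "CX",        "not started"),
]

# Board-ready summary: highest risk first.
for s in sorted(inventory, key=lambda s: s.risk, reverse=True):
    print(f"{s.name:16} risk={s.risk:2}  owner={s.owner:10} governance={s.governance}")
```

A spreadsheet works just as well; the point is that risk scores and governance status live in one ranked list.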
Translation is key: from technical to business
The most common mistake is answering as if the board is technical. They’re not. They care about business impact, legal risk, competitive position, and execution capability.
So translate:
- Instead of “bias in training data,” say “we could discriminate against certain customer segments, which exposes us to lawsuits and damages our brand.”
- Instead of “model drift,” say “our systems’ performance degrades over time without monitoring, so we could make bad decisions without knowing” (a minimal drift-check sketch follows this list).
- Instead of “hallucinations,” say “the system generates plausible-sounding but false information, which could confuse our customers or employees.”
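To ground the “model drift” translation, here is a minimal sketch of one common drift check, the Population Stability Index (PSI), which compares a score’s current distribution to its training-time baseline. The ten bins and the 0.2 alert threshold are conventional rules of thumb, not settled standards.

```python
# Minimal sketch: Population Stability Index (PSI) as a drift check.
# The ten bins and the 0.2 alert threshold are rules of thumb, not standards.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    # Interior bin edges come from the baseline (training-time) quantiles.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))[1:-1]
    base_pct = np.bincount(np.digitize(baseline, edges), minlength=bins) / len(baseline)
    curr_pct = np.bincount(np.digitize(current, edges), minlength=bins) / len(current)
    # Clip away empty bins so the log term stays defined.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
scores_at_launch = rng.normal(0.0, 1.0, 10_000)  # model scores at deployment
scores_today = rng.normal(0.4, 1.2, 10_000)      # model scores months later
drift = psi(scores_at_launch, scores_today)
print(f"PSI = {drift:.3f} -> {'ALERT: investigate' if drift > 0.2 else 'ok'}")
```

A single rising number like this is exactly the kind of early-warning signal that lets you say “we’d know” when the board asks.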
Frame risks in terms of things the board already worries about: liability, brand damage, competitive risk, operational disruption, regulatory fines.
The confidence zone
You’re aiming for a sweet spot: confident but not overconfident. You acknowledge risks. You show you’re managing them. You have a plan.
Red flags that suggest you’re not ready to brief the board:
- You can’t name your AI systems or who owns them.
- You have no idea what decisions they’re making.
- You have no plan for governance or risk management.
- You haven’t thought about fairness, transparency, or safety.
- You don’t know what regulations apply to your business.
If you’re here, the honest answer to the board is: “We’re still building our AI governance capability. Here’s what we’re doing in the next 90 days.” Then go do it.
Get the board comfortable
The board is asking about AI risk because they’re uncertain. They want to know you’re aware, thoughtful, and in control. You don’t need perfect governance—you need to show that you’re building it intentionally.
Have the answers to these five questions, translate them into business language, and you’ll walk out of that board meeting with confidence. They’ll see that you’re not asleep at the wheel, and that’s what matters.