What Investors Are Asking — and What Your Answers Signal About Your Business

Operational Due Diligence (ODD) has always probed technology infrastructure, cybersecurity posture, and business continuity. But over the past 12 months a new category of questions has arrived in ODD questionnaires, and many management teams are caught off guard by them.

Investors, particularly sophisticated VC and PE firms, are now asking pointed questions about artificial intelligence (AI) and AI governance: the policies, controls, and oversight structures around how AI is used. How you answer these questions increasingly affects how investors perceive your operational maturity, your risk profile, and in some cases your valuation.

This article explains what investors are actually asking, why it matters, and how to answer with confidence.

Why ODD Now Includes AI Questions

Three things happened in quick succession that put AI on the ODD agenda:

  • AI became operationally material. It's no longer a research project or a future roadmap item. Employees at most companies are actively using AI tools today — in some cases without their employer's knowledge. When something is operationally material, ODD has to cover it.
  • The liability landscape shifted. Regulators in the US and EU have begun issuing guidance on AI use in financial services, HR, and data processing. Companies without documented AI governance policies now carry a regulatory risk that wasn't there 18 months ago.
  • High-profile failures created precedents. From AI-generated legal filings to biased investing algorithms to customer data leaking through public AI tools, real-world incidents have given investors concrete reasons to ask hard questions.

What Investors Are Actually Asking

ODD questionnaires vary, but AI questions tend to cluster around five themes:

  • 1. Inventory and Awareness "What AI tools does your organization use, and who approved them?"
    Investors want to know whether leadership has visibility into AI tool usage or whether it's happening organically and untracked. Shadow AI (employees using personal ChatGPT, Claude, or Gemini accounts to process company data) is a specific concern. A company that can't answer this question confidently signals weak operational controls more broadly.

  • 2. Data Governance "What data do your AI systems have access to, and where does it go?"
    This is the most technically consequential question. When employees paste client data, financial projections, or proprietary information into a public AI tool, that data may be used to train future models, stored on third-party servers, or exposed in ways that violate NDAs, client agreements, or data protection regulations. Investors want to know you've thought about this.

  • 3. Policy and Controls "Do you have an AI use policy? How is it enforced?"
    Having a policy matters, but investors increasingly ask about enforcement. A policy that lives in a handbook nobody reads is not a control. They're looking for evidence that the policy is communicated, that there are technical guardrails where possible, and that there's accountability when the policy is violated.

  • 4. Competitive and IP Risk "Could investor information or trade secrets have been exposed through AI tool usage?"
    This question is particularly sharp for companies with genuine IP: proprietary models, formulas, strategies, or partner relationships. If employees have been feeding this information into public AI tools, there may be no way to fully recover it. Investors want to understand your exposure and how you will prevent future leakage of this data.

  • 5. AI in Your Product or Service "Does your decision-making process or service incorporate AI, and if so, how do you manage model risk?"
    For companies that have built AI into their decision-making processes, investors want to understand how you validate outputs, manage model drift, handle errors, document the source data, and disclose AI use to LPs. This is a separate inquiry from, but closely related to, the governance questions above.

How to Answer Well

The goal is not to claim you've solved every AI governance challenge — sophisticated investors won't believe that and it will undermine your credibility. The goal is to demonstrate that you are aware, thoughtful, and actively managing the risks.

  • On inventory and awareness: Be specific about what tools are sanctioned and what the approval process looks like. If you're still building that inventory, say so and describe the timeline. "We are currently conducting an AI tool audit and expect to complete it by Q2" is a far better answer than a vague claim of comprehensive oversight.
  • On data governance: Describe what categories of data are prohibited from AI input, and how that prohibition is communicated and enforced. If you use enterprise versions of AI tools (ChatGPT Team or Enterprise, Google Gemini, Claude for Enterprise) that include data protection agreements, say so explicitly — this is a meaningful distinction from employees using free consumer tools.
  • On policy: Have something written before the ODD conversation, even if it's a first-generation policy. A two-page AI use policy or a "use this, don't use that" memo that's been reviewed by counsel and communicated to staff is infinitely better than nothing. Be prepared to share it.
  • On IP risk: This is the hardest question to answer if you haven't been proactive. If you genuinely don't know what employees have put into public AI tools historically, acknowledge that, and describe what you're doing going forward to prevent it. Trying to claim certainty where you have none will be exposed quickly.
  • On AI in your processes: If you've built AI into your internal processes, have a clear, documented explanation of how the model works, how outputs are validated, how underlying data is documented, and what disclosures you make to LPs. This signals process maturity as much as it signals governance maturity.

The Bottom Line

AI governance is the new cybersecurity in ODD — five years ago, investors were asking whether you had a cybersecurity policy. Today they expect you to have one and are asking whether it's mature. AI is following the same curve, just faster. The firms that get ahead of these questions now — with real policies, real visibility, and real controls — will handle ODD with confidence. The ones that wait until they're sitting across the table from an operational due diligence team will wish they hadn't.

Need help preparing your AI governance posture for ODD? We work with portfolio companies and investment firms to document, implement, and communicate AI controls that hold up under scrutiny. Email us at support@hybridge.com to get in touch with our compliance team.

