Insights from our recent discussions with Private Equity Operating Executives on AI implementation
Building on our deep activity in the AI and portfolio operations space, we recently completed a search that brought us into conversation with senior leaders across data and operations functions at middle-market and large-cap firms. From those conversations, a consistent story emerged.
First: Adoption is nearly universal. Execution is not.
The vast majority of firms we spoke with say AI has moved from experimentation to a core operating expectation. Yet that same majority identifies change management — not model quality or tooling — as the primary reason initiatives fail.
The technology is largely ready. The organizations are not. This is the defining tension in PE right now, and it plays out the same way across firm sizes: a capable tool gets deployed into an organization that hasn’t aligned on who owns it, what success looks like, or how it connects to the people doing the actual work.
MIT’s NANDA initiative points out that about 95% of AI pilot projects show no clear impact on profits. McKinsey’s 2025 State of AI survey backs this up, reporting that only one-third of organizations have successfully scaled AI throughout their companies. The other two-thirds are still in testing or pilot phases. The technology isn’t the constraint, and never was.
“The models work — but the business must change.”
What to do about it:
- Map human workflows before any deployment — not just technical ones
- Assign a named business owner (not an IT owner) to every initiative
- Build change management milestones into the project plan from day one
- Test leadership alignment early: if the portfolio CEO isn’t invested, the initiative will stall regardless of model quality

Near-term value is focused on cost reduction, but that’s table stakes.
The vast majority of executives say near-term AI returns are overwhelmingly cost-driven: services automation, G&A reduction, customer support, and analytics. These initiatives aren’t glamorous, but they’re measurable within a single budget cycle, and they’re where most firms are starting.
Revenue-side applications — pricing optimization, churn reduction, sales effectiveness — are where real multiple expansion lives. Fewer than half of firms report meaningful traction there yet, largely because these use cases require cleaner data, tighter cross-functional alignment, and longer feedback loops.
“Cost out is table stakes. Multiple expansion comes later.” — Operating Partner
Generic “innovation” funding is disappearing. AI initiatives get approved when tied to explicit dollar targets, typically in the $10–30M cost-out range. MIT and McKinsey’s joint research shows that companies usually recoup their AI investments within 12 to 18 months, though payback alone rarely translates into a significant earnings lift; underperformers may need 18 to 24 months. Against a typical 3- to 5-year hold period, that timeline leaves little margin for delays or setbacks.
“If it doesn’t tie to EBITDA in 12 months, it won’t survive budget season.” — CFO-Level Executive
What to do about it:
- Start with cost-reduction use cases to build credibility — but treat them as a foundation, not the destination
- Begin mapping revenue-side opportunities in parallel: where does pricing variability exist? Where is churn predictable but unaddressed?
- Attach a dollar figure and a 12-month timeline to every initiative before it enters the budget process
- Use early wins to fund the harder, higher-value revenue work that follows
Second: Governance matters less than mandate.
Two operating models have emerged. Large-cap and multi-strategy firms are building centralized AI and data centers of excellence, typically comprising 20 to 50 people. Mid-market firms predominantly rely on a single senior AI-literate operating partner supported by specialist vendors. Neither model reliably outperforms the other.
What does predict success is the mandate. Firms where AI leaders have direct access to investment committees, deal teams, and portfolio CEOs consistently outperform those where AI reports into IT or operates in a purely advisory capacity. The org chart matters far less than the decision rights attached to it.
What to do about it:
- Define AI leadership’s decision rights explicitly before the hire is made
- Ensure direct access to the investment committee and deal teams, not just to operations
- If running a single-operator model, scope that person’s mandate tightly: three to five portfolio companies, not fifteen
- Revisit reporting lines annually
Third: The talent gap is structural, and the hiring profile is changing.
A majority of executives flagged a persistent mismatch between what firms expect from AI hires and what experienced operators actually require. Firms want a single hire to solve AI across the portfolio. Experienced operators want a clear mandate, executive air cover, real budget authority, and realistic timelines.
“Everyone wants a silver-bullet hire; the talent wants vision and air cover.” — Strategic Investor
The hiring profile that’s working has shifted meaningfully — away from data scientists and toward AI-literate operating partners with finance, product, operations, or GTM backgrounds. These are people who can connect AI initiatives directly to EBITDA, drive adoption across a portfolio company’s leadership team, and operate credibly with both the investment committee and a plant manager.
Churn among AI leaders is high and accelerating. Exits cluster around a predictable and avoidable set of triggers: unclear mandate, lack of leadership alignment, and unrealistic expectations about what one person can accomplish in year one.
What to do about it:
- Do the organizational work before you hire — define the mandate, secure the budget, align the leadership team
- Write the job description around EBITDA outcomes, not technical capabilities
- Evaluate candidates on their ability to drive adoption and change, not just AI fluency
- Set explicit 90-day, 6-month, and 12-month expectations in writing before the offer is made
Lastly: Speed is a competitive advantage, and LPs are starting to grade on it.
High-performing firms target deployment timelines of 90 to 180 days per initiative, not multi-year roadmaps. This aligns with documented observations in private equity portfolio operations: firms that succeed with AI move directly toward EBITDA impact, accepting “good enough” data and technology rather than perfecting both before validating a single use case. A 90-day window forces specificity: a defined use case, a measurable outcome, and a named owner before work begins.
Firms operating on longer timelines tend to have the opposite — broad mandates, diffuse ownership, and initiatives that drift through committees without producing a dollar of value.
LP scrutiny is intensifying. LPs are no longer asking whether AI exists in the portfolio. They’re asking how it’s embedded in value creation, whether it’s part of the diligence process, and critically, whether it will survive at exit.
“You can’t outsource AI for four years and expect it to show up at exit.” — Portfolio Operations Executive
AI value that resides with external vendors doesn’t appear in the CIM. Building internal capability — embedding AI into standard operating procedures inside the portfolio company — is what transfers at exit and what sophisticated buyers and LPs are beginning to price.
What to do about it:
- Restructure AI roadmaps into 90-to-180-day sprints with hard go/no-go decision points
- Kill initiatives that haven’t produced measurable results by day 90 — restart with better scoping, don’t extend timelines
- Develop an AI narrative for LP reporting: a value creation story with dollar outcomes, not a list of tools
- Prioritize initiatives that build durable internal capability at the portfolio company level
Closing the Distance Between Strategy and Results
The vast majority of firms are treating AI as a core operating expectation, but that same majority is failing primarily in change management. The firms succeeding are running 90-to-180-day cycles with explicit dollar targets attached to every initiative from day one.
The technology is not the constraint; it never was. The limitation is organizational — and that’s exactly where the right talent, the right mandate, and the right operating model make all the difference.
Methodology & Sources
Primary research: executive interviews conducted in December 2025 with portfolio operations leaders, operating partners, and AI/data heads across middle-market and large-cap private equity firms.
Third-party sources: McKinsey 2025 State of AI; MIT NANDA initiative; MIT/McKinsey joint research on AI payback periods in enterprise operations.