
What's involved in implementing enterprise AI tools across multiple departments?

Adam Harris · Jan 28, 2026 · 10 min read
[Hero image: enterprise rollout map showing AI deployment spreading across departments with a governance overlay]

Here's the pattern we see over and over again. A company buys an AI tool. One department gets excited, runs a pilot, sees promising results. Leadership says "roll it out everywhere." Six months later... three departments are using three different AI tools, none of them talk to each other, the data team is drowning in integration requests, and the CFO is asking why the AI budget tripled with nothing to show for it.

That's not a technology failure. It's an implementation failure. And it happens to roughly two-thirds of organizations attempting enterprise AI, according to McKinsey's 2025 State of AI report. While 88% of organizations now use AI in at least one business function, only about one-third have actually scaled it across the enterprise.

The gap between "we're using AI" and "AI is working across our organization" is enormous. Here's what's actually involved in crossing it.

The Four Walls You'll Hit

Every multi-department AI rollout runs into the same four problems. Not maybe. Not sometimes. Every time. The organizations that succeed aren't the ones that avoid these problems... they're the ones that plan for them.

1. Your Data Isn't Ready (And You Don't Know It Yet)

This is the one that kills most projects before they start. Gartner predicts that through 2026, organizations will abandon 60% of AI projects unsupported by AI-ready data. Not 60% of bad projects. 60% of all projects where the data foundation isn't solid.

The problem isn't that you don't have data. You have too much of it, scattered across too many systems, in too many formats, owned by too many teams. Marketing has customer data in HubSpot. Sales has it in Salesforce. Support has it in Zendesk. Finance has it in NetSuite. Each system has its own definition of "customer," its own data quality standards, and its own access controls.

When you try to deploy AI that works across departments, you're asking it to make sense of data that your own teams can't reconcile. A Gartner survey of 248 data management leaders found that 63% of organizations either don't have or aren't sure if they have the right data management practices for AI. That's not a rounding error. That's the majority of enterprises.

What actually works: Start with a data audit before you buy a single AI license. Map where your critical data lives, who owns it, how it flows between systems, and where the gaps are. This isn't glamorous work, but it determines whether your AI deployment succeeds or joins the 60% that get abandoned.
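What does "map where your critical data lives" look like in practice? Even a crude reconciliation check surfaces the problem fast. A minimal sketch, where the sample rows stand in for real CRM and support-desk exports, and the mismatched field names ("Email" vs "email") illustrate a typical inconsistency between systems:

```python
# Illustrative data-audit sketch: how many "customers" do two systems
# actually agree on? Sample rows and field names are invented.

def normalize(rows, email_field):
    """Return the set of lowercased, trimmed emails from one system's export."""
    return {r[email_field].strip().lower() for r in rows if r.get(email_field)}

crm_rows = [{"Email": "Ana@acme.com "}, {"Email": "bo@acme.com"}, {"Email": "cy@acme.com"}]
support_rows = [{"email": "ana@acme.com"}, {"email": "dee@acme.com"}]

crm = normalize(crm_rows, "Email")
support = normalize(support_rows, "email")
overlap = crm & support

print(f"CRM: {len(crm)}  Support: {len(support)}  Matched: {len(overlap)}")
# Records only one side knows about are your remediation backlog.
print(f"Unmatched: {sorted(crm ^ support)}")
```

Run this kind of check across every pair of systems that claims to own "customer" and you have the start of a remediation timeline: the unmatched records are the work.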

2. Integration Is the Real Project

Here's something vendors won't tell you: in our experience, the AI model is maybe 20% of the work. The other 80% is integration... connecting the AI to your existing systems, workflows, and processes so it can actually do useful things.

Each department has its own tech stack, its own workflows, its own approval chains. Marketing runs on a content management system and marketing automation platform. Engineering lives in Jira and GitHub. Finance operates in ERP systems with strict audit requirements. Getting AI to work meaningfully in each of these contexts requires deep integration work that's specific to each department.

McKinsey's research found that of all organizational changes linked to gen AI success, fundamental workflow redesign ranks highest in correlation with EBIT impact. But only 21% of organizations using gen AI have redesigned even some workflows. The rest are bolting AI onto existing processes and wondering why it doesn't transform anything.

What actually works: Pick one cross-departmental workflow to redesign around AI first. Not five. Not "all of them." One. Get that working end-to-end, learn from it, then expand. The incremental rollout approach consistently outperforms the "big bang" launch.

3. You Don't Have the People (Yet)

The talent gap is the most underestimated blocker. McKinsey found that 46% of organizations cite talent skill gaps as the primary reason AI adoption has been too slow. Not budget constraints. Not technology limitations. People.

And it's not just about hiring AI engineers. You need people who understand AI well enough to implement it in specific business contexts. Your marketing team needs someone who can build and evaluate AI-generated content workflows. Your support team needs someone who can design and monitor AI-assisted ticket routing. Your finance team needs someone who understands both the AI capabilities and the compliance requirements.

Deloitte's 2026 State of AI report found that education was the #1 way companies adjusted their talent strategies for AI... not hiring, not restructuring. Training the people you already have. That's a tell. The AI skills gap won't be solved by a hiring spree. It'll be solved by upskilling existing domain experts.

What actually works: Build cross-functional pods, not AI departments. A small team (4-7 people) combining AI engineering skills with deep domain expertise in the target department. We've written about the specific team composition that works in detail.

4. Nobody Defined "Success"

This one's painful because it's entirely self-inflicted. Most multi-department AI rollouts start with vague goals like "improve efficiency" or "leverage AI across the enterprise." Those aren't goals. Those are wishes.

Without clear success metrics per department, you can't prioritize, you can't measure progress, and you definitely can't justify the investment when the CFO comes asking. Deloitte found that 66% of organizations report productivity gains from AI, but only 20% are seeing actual revenue growth. The gap between "feels more productive" and "measurably more profitable" is where unclear goals live.

What actually works: Define department-specific KPIs before implementation starts. Support might measure ticket resolution time and escalation rates. Marketing might measure content production velocity and quality scores. Finance might measure processing time per transaction. Each department gets its own scoreboard with its own baseline. We've outlined a comprehensive measurement framework that covers this in depth.
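A department scoreboard doesn't need to be elaborate: a baseline, a target, and a current reading per KPI is enough to answer "is this working?" A minimal sketch, with metric names and numbers invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    baseline: float   # measured before implementation starts
    target: float     # agreed with the department head
    current: float    # latest reading

    def progress(self) -> float:
        """Fraction of the baseline-to-target gap closed so far.
        Works whether the metric should go up or down."""
        gap = self.target - self.baseline
        if gap == 0:
            return 1.0
        return (self.current - self.baseline) / gap

# Hypothetical support-department scoreboard.
support_kpis = [
    KPI("avg ticket resolution (hours)", baseline=8.0, target=4.0, current=6.0),
    KPI("escalation rate", baseline=0.20, target=0.10, current=0.15),
]

for k in support_kpis:
    print(f"{k.name}: {k.progress():.0%} of gap closed")
```

The point of capturing the baseline explicitly is that "resolution time is 6 hours" means nothing on its own... against a baseline of 8 and a target of 4, it means you're halfway there.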

The Systematic Approach That Works

Knowing the problems is one thing. Here's the playbook for actually getting AI working across departments without the usual chaos.

Phase 1: Foundation (Months 1-2)

Don't touch AI yet. Seriously. This phase is about getting the organizational infrastructure right.

  • Data audit and remediation plan. Map critical data across all target departments. Identify gaps, inconsistencies, and access barriers. Build a realistic remediation timeline.
  • Governance framework. Decide now who owns AI decisions, who approves deployments, who monitors performance, and who pulls the plug when something goes wrong. Governance isn't a document... it's an operating system.
  • Success criteria per department. Work with each department head to define measurable outcomes. "Save 10 hours per week on report generation" beats "use AI to improve reporting" every time.
  • Talent assessment. Figure out where your skills gaps are. Do you need to hire, train, or partner?

Phase 2: Pilot Department (Months 2-4)

Pick one department. Not the one that's most excited... the one where you have the best combination of data readiness, a clear use case, and a willing team lead.

  • Deploy a focused use case. Not "AI for marketing." Something specific like "AI-assisted first drafts for blog content" or "AI-powered lead scoring using CRM data."
  • Build the integration layer. Connect the AI to the department's existing tools. This is where you learn how hard integration will be for the rest of the org.
  • Establish monitoring. Track the KPIs you defined. Also track cost per task, quality metrics, and user adoption rates.
  • Document everything. What worked, what broke, what took longer than expected. This becomes your playbook for the next department.
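"Cost per task" is worth computing from day one of the pilot, because it's the number the expansion budget will be built on. A rough sketch, assuming you log token usage per completed task; the per-million-token prices are placeholders, not any vendor's actual rates:

```python
# Hypothetical per-million-token prices; substitute your provider's real rates.
PRICE_IN_PER_M = 3.00
PRICE_OUT_PER_M = 15.00

def cost_per_task(task_logs):
    """task_logs: list of (input_tokens, output_tokens) per completed task."""
    total = sum(
        tin / 1e6 * PRICE_IN_PER_M + tout / 1e6 * PRICE_OUT_PER_M
        for tin, tout in task_logs
    )
    return total / len(task_logs)

# e.g. three AI-assisted blog drafts from the pilot
logs = [(12_000, 3_000), (9_000, 2_500), (15_000, 4_000)]
print(f"avg cost per draft: ${cost_per_task(logs):.3f}")
```

Pair that number with the quality and adoption metrics: a task that costs cents but gets thrown away by the team 40% of the time is not cheap.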

Phase 3: Controlled Expansion (Months 4-8)

Now add two to three more departments, applying what you learned from the pilot.

  • Shared infrastructure, customized applications. The data pipelines, security controls, and monitoring dashboards should be shared. The AI applications themselves should be tailored to each department's workflows.
  • Cross-department coordination. This is where a central AI governance team earns its keep. They ensure consistency in security, compliance, and cost management while letting departments own their specific implementations.
  • Feedback loops. Regular reviews with each department. What's working? What's not? Where are the integration gaps between departments?

Phase 4: Enterprise Scale (Months 8-12+)

Only now should you think about "enterprise AI." By this point you have proven patterns, trained people, working integrations, and real data on what AI costs and what it returns.

  • Standardize what can be standardized. Shared model access, common security controls, centralized cost management, unified audit trails.
  • Customize what must be customized. Each department's prompts, workflows, and evaluation criteria stay department-specific.
  • Build the operational layer. Ongoing monitoring, continuous improvement processes, model migration plans, and governance that evolves with your usage.

The Governance Question Nobody Wants to Answer

Here's the uncomfortable truth about multi-department AI: someone has to own it at the enterprise level. Not just champion it. Own it. Be accountable when department A's AI tool conflicts with department B's data policies. Be responsible when costs spike. Be the one who says "no, we're not deploying that" when a use case fails the risk assessment.

Harvard Business Review warns that AI often reinforces organizational silos rather than breaking them down, with departments adopting AI tools independently and generating fragmented gains that don't add up to strategic impact. The antidote is enterprise-level governance with teeth... not a committee that meets quarterly, but an active function that coordinates across departments daily.

What does that look like practically?

| Function | Centralized | Department-Owned |
| --- | --- | --- |
| Model selection & access | Approved model list, API key management | Which approved model to use for which task |
| Data governance | Classification policies, access controls | Data preparation, context assembly |
| Security & compliance | Standards, audit requirements, incident response | Implementation within department workflows |
| Cost management | Budget allocation, usage monitoring | Optimization within budget |
| Use case development | Risk assessment, approval process | Identification, design, deployment |

The split is important. Centralize the guardrails. Decentralize the applications. If you centralize too much, departments can't move fast enough. If you decentralize too much, you end up with the fragmented mess we started this article with.
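One way to give that split teeth is a single policy structure the central team owns, with departments choosing only within it. A hypothetical sketch... model names, budgets, and data classes here are all invented:

```python
# Central guardrails, owned by the governance team. Departments pick
# from within this; they don't edit it. All values are hypothetical.
CENTRAL_POLICY = {
    "approved_models": {"model-large", "model-small"},
    "monthly_budget_usd": {"support": 5_000, "marketing": 8_000},
    "data_classes_allowed": {"public", "internal"},  # "restricted" never leaves
}

def check_deployment(dept, model, data_class, est_monthly_cost):
    """Return a list of governance violations; empty list means approved."""
    issues = []
    if model not in CENTRAL_POLICY["approved_models"]:
        issues.append(f"{model} is not on the approved model list")
    if data_class not in CENTRAL_POLICY["data_classes_allowed"]:
        issues.append(f"data class '{data_class}' may not leave the org")
    if est_monthly_cost > CENTRAL_POLICY["monthly_budget_usd"].get(dept, 0):
        issues.append("estimated cost exceeds department budget")
    return issues

print(check_deployment("support", "model-large", "internal", 3_000))  # approved
print(check_deployment("support", "model-x", "restricted", 9_000))    # blocked
```

The check function is the "teeth": a deployment that returns a non-empty list doesn't ship, and the department knows exactly why without a quarterly committee meeting.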

What This Actually Costs

Nobody wants to put numbers on this, so let me be direct. A multi-department enterprise AI implementation for a mid-size company (500-5,000 employees) typically looks like this:

  • Foundation & pilot (months 1-4): Based on our client work, $150K-$400K. This covers data assessment, governance setup, the first department's implementation, and the integration layer.
  • Expansion (months 4-8): $100K-$250K per additional department. The infrastructure exists; you're customizing and integrating for each new department.
  • Ongoing operations: 20-30% of implementation cost annually. Monitoring, model updates, prompt optimization, governance, and continuous improvement.
  • AI model costs: Variable, but 42% of enterprises cite high LLM costs as a scaling challenge. Budget for token-based costs to grow as usage grows.
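The ranges above can be turned into a back-of-envelope model. A sketch using the midpoints of this article's figures... the department count and midpoint choices are assumptions, and token-based model costs are deliberately left out because they scale with usage:

```python
def rollout_cost(extra_depts, foundation=275_000, per_dept=175_000, ops_rate=0.25):
    """Rough cost model from the article's ranges: foundation + pilot
    $150K-$400K (midpoint $275K), $100K-$250K per added department
    (midpoint $175K), ongoing ops at 20-30% of implementation annually."""
    implementation = foundation + extra_depts * per_dept
    annual_ops = implementation * ops_rate
    return implementation, annual_ops

# Pilot department plus three more in the expansion phase.
impl, ops = rollout_cost(extra_depts=3)
print(f"implementation: ${impl:,.0f}, ongoing ops: ${ops:,.0f}/yr")
# implementation: $800,000, ongoing ops: $200,000/yr
```

That ~$1M first-year figure for a four-department rollout is the kind of number worth putting in front of the CFO before month one, not month eight.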

The ROI timeline is typically 6-12 months for the pilot department, 12-18 months for enterprise-wide impact. McKinsey's data shows that only about 6% of organizations... what they call "AI high performers"... report EBIT impact of 5% or more from AI. The rest are somewhere on the journey. Setting realistic cost and timeline expectations upfront prevents the disillusionment that kills projects in month eight.

Key Takeaways

  • Data readiness is the #1 determinant of success. 60% of AI projects fail because the data foundation isn't there. Audit before you implement.
  • Integration is 80% of the work. The AI model is the easy part. Connecting it to your systems, workflows, and processes is the real project.
  • The talent gap is real but solvable. Train your domain experts in AI, don't just hire AI experts who lack domain knowledge. Build cross-functional pods.
  • Define success before you start. Department-specific KPIs with baselines. "Improve efficiency" is not a goal.
  • Phase the rollout. Foundation, pilot, expand, scale. Trying to deploy everywhere at once is how you get fragmented AI that costs more than it saves.
  • Governance needs teeth. Centralize the guardrails, decentralize the applications. Someone at the enterprise level has to own it.

The companies that get multi-department AI right aren't doing anything magical. They're being systematic about something that most organizations try to rush. They invest in the boring stuff... data, integration, governance, measurement... before they invest in the exciting stuff. And they phase the rollout instead of launching everything at once.

It's not fast. It's not glamorous. But it's the approach that actually works.

Sources

  1. McKinsey -- "The State of AI in 2025: Agents, Innovation, and Transformation" (2025)
  2. Gartner -- "Lack of AI-Ready Data Puts AI Projects at Risk" (2025)
  3. McKinsey -- "Superagency in the Workplace: Empowering People to Unlock AI's Full Potential at Work" (2025)
  4. Deloitte AI Institute -- "State of AI in the Enterprise: The Untapped Edge" (2026)
  5. Harvard Business Review -- "Don't Let AI Reinforce Organizational Silos" (2025)