Someone in the room asks the question. It's always the same one: "What's the return on this?" And suddenly the whole AI conversation shifts from possibility to spreadsheet. Which is exactly where it should be.
The problem is that most ROI frameworks for AI are garbage. They either wildly overestimate value ("AI will save you 40 hours per person per week!") or they're so conservative they make the investment look pointless. Neither helps you make a real decision.
Here's a practical framework for estimating ROI on custom AI workflows... the one we actually use when scoping engagements. It's not perfect. No estimate is. But it's honest, and it gives you numbers you can defend in a board meeting.
Why Traditional ROI Models Break Down for AI
Traditional software ROI is straightforward. You know the license cost, the implementation cost, the maintenance cost. You estimate the productivity gain. You divide value by cost. Done.
AI workflows break this model in three ways.
First, costs are variable. AI API usage scales with volume. A workflow that costs $200/month processing 1,000 requests might cost $3,000/month at 15,000. Traditional software doesn't do this. Your CRM costs the same whether you use it twice a day or two hundred times. With AI, ongoing costs are usage-dependent and harder to project.
Second, value is layered. The obvious win is labor savings. But the bigger wins... speed improvements, error reduction, new capabilities you couldn't offer before... those are harder to quantify and often take 3-6 months to materialize. According to McKinsey's 2025 State of AI survey, only 6% of organizations qualify as "AI high performers" who report significant business value. The difference between the 6% and everyone else? The top performers measure the right layers.
Third, AI projects fail at a high rate. Gartner predicted that 30% of generative AI projects would be abandoned after proof of concept by end of 2025. That means your ROI estimate needs to account for the risk of the project not delivering at all. Most proposals quietly ignore this.
The Four-Layer Value Model
We estimate AI workflow value across four layers, in order of how easy they are to measure. Start with the concrete stuff. Work your way up to the strategic value. If the first two layers alone don't justify the investment, the project probably isn't ready.
Layer 1: Labor Savings (Direct)
This is the one everyone starts with, and for good reason. It's the most measurable.
The formula is simple: (hours saved per week) x (fully loaded hourly cost) x 52. If a workflow automates 10 hours per week of work from a $75/hour employee, that's $39,000/year in direct labor savings.
But here's where most estimates go wrong: they assume 100% automation. Real AI workflows don't eliminate jobs. They eliminate tasks. A content team that spends 15 hours/week on first drafts might get that down to 4 hours with AI... but they're still reviewing, editing, and publishing. The honest estimate is 11 hours saved, not 15.
Be specific. Map the actual tasks. Time them. Then estimate the realistic reduction, not the theoretical maximum.
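That arithmetic is simple enough to sanity-check in a few lines. A minimal sketch using the illustrative figures above (the content team's hours and the $75 rate come from the examples in this section, not from benchmarks):

```python
def labor_savings(hours_saved_per_week, loaded_hourly_cost, weeks_per_year=52):
    """Layer 1: direct annual labor savings."""
    return hours_saved_per_week * loaded_hourly_cost * weeks_per_year

# The naive estimate assumes full automation of the content team's 15-hour task...
naive = labor_savings(15, 75)    # $58,500/year
# ...the honest estimate counts only the hours actually eliminated.
honest = labor_savings(11, 75)   # $42,900/year
print(naive, honest)
```

The gap between those two numbers is exactly the overestimate that sinks most business cases.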
Layer 2: Speed and Throughput
Faster workflows mean more output from the same team. A sales operations process that took 3 days now takes 4 hours. That doesn't just save labor... it means deals close faster, prospects don't go cold, and your team can handle more volume without hiring.
Quantify this as: (additional capacity created) x (value per unit of output). If your team can now process 40 proposals per month instead of 25, and each proposal averages $50,000 in potential revenue, the throughput gain is significant even at modest close rates.
Speed gains compound in ways that are hard to predict. We've seen teams start using AI for proposal generation, then realize the same workflow handles RFP responses, client reports, and competitive analyses. The initial estimate covered one use case; the actual value covered four.
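One way to put a number on throughput, following the proposal example above. The 10% close rate is an assumed figure for illustration only; you'd plug in your own conversion data:

```python
def throughput_value(units_before, units_after, value_per_unit, conversion_rate):
    """Layer 2: expected monthly value from added capacity."""
    return (units_after - units_before) * value_per_unit * conversion_rate

# 25 -> 40 proposals/month at $50K potential each, assumed 10% close rate
monthly_gain = throughput_value(25, 40, 50_000, 0.10)
print(monthly_gain, monthly_gain * 12)  # monthly and annualized
```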
Layer 3: Error Reduction and Quality
This layer is underrated. Every manual process has an error rate. Data entry, report generation, compliance checks... humans make mistakes, especially on repetitive work. AI workflows with proper guard rails and validation can cut error rates dramatically.
The value formula: ((error rate before) - (error rate after)) x (volume) x (cost per error). A billing process with a 3% error rate that drops to 0.5% doesn't sound dramatic until you multiply by the cost of each error... customer churn, refund processing, support tickets, audit findings. For a company processing 10,000 invoices per month, even a small error rate reduction can save six figures annually.
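In code, with the billing example's figures. The $75 per-error cost is an assumed blend of rework and support costs; per-error cost varies widely and is worth estimating carefully for your own process:

```python
def error_reduction_value(rate_before, rate_after, monthly_volume, cost_per_error):
    """Layer 3: annual avoided cost from a lower error rate."""
    return (rate_before - rate_after) * monthly_volume * cost_per_error * 12

# 3% -> 0.5% on 10,000 invoices/month; assumed $75 blended cost per error
print(error_reduction_value(0.03, 0.005, 10_000, 75))  # six figures annually
```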
Layer 4: Strategic Capability
This is the hardest to measure and the most valuable long-term. Custom AI workflows let you do things you literally could not do before. Real-time competitive analysis. Personalized client intelligence at scale. Intelligent routing across multiple AI models that optimizes for both cost and quality simultaneously.
You can't put a clean ROI number on strategic capability at the estimation stage. Instead, frame it as: "What would this capability be worth if we could buy it as a product?" If the answer is "we'd pay $100K/year for a tool that did this," that's a reasonable proxy for value.
Don't lean on Layer 4 to justify the investment. It's a bonus, not a foundation. If Layers 1-3 don't carry the math, Layer 4 won't save it.
The Cost Side: What to Actually Budget
The value model is half the equation. The other half is what you'll actually spend. We've written extensively about AI project costs, but here's the ROI-specific breakdown.
| Cost Category | Typical Range | ROI Impact |
|---|---|---|
| Discovery and scoping | $15K-$40K | One-time; de-risks the build |
| MVP build | $75K-$175K | One-time; proves value hypothesis |
| Production build | $100K-$300K | One-time; scales to full organization |
| AI API costs | $500-$5,000/mo | Ongoing; scales with usage |
| Maintenance and support | $5K-$25K/mo | Ongoing; keeps the system healthy |
| Infrastructure | $200-$2,000/mo | Ongoing; hosting, databases, monitoring |
For a typical mid-market AI workflow project, you're looking at $150K-$500K in year one (build plus 6-12 months of operation) and $70K-$360K/year ongoing. Those numbers are real. If someone quotes you significantly less, ask hard questions about what's being cut.
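The year-one math from the table reduces to one line. A sketch using illustrative mid-range figures; your actual build cost and run rate will differ:

```python
def year_one_cost(build_cost, monthly_run_rate, months_of_operation=12):
    """Total year-one spend: one-time build plus the ongoing run rate."""
    return build_cost + monthly_run_rate * months_of_operation

# Mid-range: $250K build, $15K/mo across API, maintenance, and infrastructure
print(year_one_cost(250_000, 15_000))  # lands inside the $150K-$500K range above
```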
Building the ROI Model: A Worked Example
Let's walk through a real scenario. A professional services firm with 50 consultants wants to automate client report generation. Currently, each consultant spends 6 hours per week compiling and formatting client-facing reports.
Layer 1 (Labor): 50 consultants x 4 hours saved/week (conservative; they'll still review and customize) x $85/hr fully loaded x 52 weeks = $884,000/year.
Layer 2 (Speed): Reports that took 2 days now take 30 minutes. Clients get faster turnaround, consultants can take on additional project work. Estimated 15% increase in billable capacity for report-heavy consultants = roughly $200,000/year in additional revenue potential.
Layer 3 (Quality): Current error rate on reports is about 5% (data discrepancies, formatting issues, outdated metrics). AI workflow with validation brings this below 1%. Cost per error (rework, client dissatisfaction, delayed approvals) estimated at $500 per incident. 50 consultants x 4 reports/week x 4% error reduction x $500 x 52 weeks = roughly $208,000/year in avoided costs.
Total estimated annual value: $1.29M (excluding Layer 4 strategic value).
Costs: $250K build + $15K/month ongoing ($180K/year) = $430K in year one, $180K/year thereafter.
Year 1 ROI: 200%. Payback period: roughly 4 months post-launch.
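The whole worked example fits in a short, checkable script, which is the point: anyone reviewing the business case can change an assumption and rerun the math.

```python
WEEKS = 52

# Layer 1: 50 consultants, 4 hours saved/week, $85/hr fully loaded
labor = 50 * 4 * 85 * WEEKS                    # $884,000/year

# Layer 2: estimated billable-capacity gain for report-heavy consultants
speed = 200_000

# Layer 3: 4-point error-rate drop, 4 reports/consultant/week, $500/incident
quality = 50 * 4 * 0.04 * 500 * WEEKS          # ~$208,000/year

annual_value = labor + speed + quality         # ~$1.29M, excluding Layer 4

build, monthly_run = 250_000, 15_000
year_one_cost = build + monthly_run * 12       # $430,000

roi = (annual_value - year_one_cost) / year_one_cost
payback_months = year_one_cost / (annual_value / 12)
print(f"Year 1 ROI: {roi:.0%}, payback: {payback_months:.1f} months")
```

Running it reproduces the 200% ROI and roughly four-month payback quoted above.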
Now... is this exact? No. The real number will be different. But this gives you a defensible range and a clear picture of which value layers drive the case.
The Measurement Cadence That Actually Works
An ROI estimate is a prediction. It's worthless unless you track actuals against it. Here's the cadence we recommend for measuring AI success post-deployment.
Weekly (first 90 days): Usage metrics. How many people are using the workflow? How often? Where are they dropping off? If adoption is low, your ROI model is dead on arrival regardless of how good the math looks.
Monthly: Efficiency metrics. Time-per-task before and after. Volume processed. Error rates. These are your Layer 1-3 indicators. Track them against your original estimates.
Quarterly: Financial reconciliation. Actual cost versus projected cost. Actual value delivered versus projected value. Adjust the model based on what you're seeing. A good estimate evolves; a bad one gets filed away and ignored.
According to Google Cloud's guide to measuring AI success, organizations that establish measurement cadences before deployment are significantly more likely to achieve meaningful ROI. The measurement infrastructure isn't overhead... it's how you prove the investment was worth it.
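The quarterly reconciliation can be as simple as a signed variance against the original model. A sketch; the $195K "actual" is an assumed figure checked against the worked example's Layer 1 projection ($884K/year, so $221K per quarter):

```python
def variance(actual, projected):
    """Signed variance vs. projection; negative means you're tracking short."""
    return (actual - projected) / projected

# Assumed Q1 actual labor savings vs. the projected $221K quarter
print(f"{variance(195_000, 221_000):+.1%}")  # prints -11.8%
```

A result like that isn't a failure; it's the signal to adjust the model while the project is still young.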
Common ROI Estimation Mistakes
We've reviewed dozens of AI business cases from prospective clients. The same mistakes show up constantly.
- Assuming full adoption from day one. Plan for 30-40% adoption in month one, 60-70% by month three, 80%+ by month six. Your ROI model should reflect the ramp, not the steady state.
- Ignoring ongoing costs. The build is not the investment. The build plus two years of operation is the investment. Include API costs, maintenance, model updates, and the inevitable scope expansion when the first workflow succeeds and leadership wants five more.
- Double-counting value. If a workflow saves 4 hours per week of an employee's time, that's only valuable if those hours get redirected to something productive. "They'll do higher-value work" is not a number. Define what that higher-value work is and how you'll measure it.
- Comparing to the wrong baseline. Don't compare AI workflow costs to doing nothing. Compare to the next-best alternative. Sometimes the right comparison is Zapier or Make, not manual labor. If a $20/month Zap solves 80% of the problem, your custom workflow needs to justify the delta.
- Skipping the discount for risk. Not every AI project delivers. Apply a 20-30% haircut to your value estimates to account for implementation risk, adoption risk, and the chance that the AI just doesn't perform as well as you hoped on your specific data.
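The last two corrections, the adoption ramp and the risk haircut, can be folded directly into the value estimate. A sketch: the month-by-month ramp is my interpolation of the 30-40% / 60-70% / 80%+ guidance above, and the 25% haircut sits mid-range of the suggested 20-30%:

```python
def risk_adjusted_year_one_value(full_annual_value, haircut=0.25):
    """Year-one value after an adoption ramp and a risk discount."""
    # Assumed monthly adoption: interpolating the ramp described above
    ramp = [0.35, 0.50, 0.65, 0.70, 0.75, 0.80] + [0.85] * 6
    monthly_full = full_annual_value / 12
    ramped = sum(monthly_full * adoption for adoption in ramp)
    return ramped * (1 - haircut)

# Applied to the worked example's $1.29M: still clears the $430K year-one cost
print(risk_adjusted_year_one_value(1_292_000))
```

The adjusted figure is roughly $715K, not $1.29M. If the business case only works at the unadjusted number, it doesn't work.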
How We Do This at Last Rev
When a client asks us to estimate ROI on a custom AI workflow, we don't hand them a pretty deck with hockey-stick projections. We do the math together, and we're conservative by default.
Our approach starts with a paid discovery phase. Two to four weeks where we map the actual workflows, time the actual tasks, identify the actual pain points, and build a cost model based on the actual architecture we'd recommend. Not theoretical. Not generic. Specific to your systems, your team, and your data.
From there, we build the four-layer value model with the client. They know their business better than we do. We know the technology. The estimate is only credible when both perspectives are in the room.
Then we phase the build to validate the estimate early. The MVP proves (or disproves) the core value hypothesis within 6-10 weeks. If the numbers hold up, we scale to production. If they don't, we've spent $75K-$175K learning that instead of $500K.
This is the part most vendors won't tell you: the best ROI estimate includes the option to stop. A phased approach with clear measurement gates means you're never more than one phase away from a go/no-go decision based on real data.
The Real Bottom Line
Estimating ROI on custom AI workflows is not a prediction exercise. It's a risk management exercise. You're building a model that says: "Given these assumptions, here's the range of outcomes. Here's what we'll measure. Here's when we'll know if it's working."
The companies that get real value from AI aren't the ones with the best estimates. They're the ones that estimate honestly, measure relentlessly, and adjust quickly when reality diverges from the plan.
If your current AI ROI model fits on a napkin or requires a leap of faith... it's time for a better model.
Sources
- McKinsey & Company -- "The State of AI: Global Survey 2025" (2025)
- Gartner -- "Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept By End of 2025" (2024)
- Google Cloud -- "KPIs for gen AI: Measuring your AI success" (2024)
- Deloitte -- "AI Is Capturing the Digital Dollar: 2025 Tech Value Survey" (2025)