The worst part of the AI boom isn't hype. It's false confidence.
I watched a founder tell his team they were going to use AI to replace half of customer support. No plan. No testing. No idea what the actual coverage or cost savings would be. Just a gut feeling that AI could do it.
They spent three months building it. Launched it. Got 40% coverage (not the 50% they expected). Saved about 20% on cost (not the 35% they budgeted). And now they're underwater on the whole thing because the economics don't work.
That's not an AI problem. That's a planning problem.
The Numbers Don't Lie... Most AI Projects Fail
This isn't just one founder's story. It's the pattern across the entire industry.
Gartner predicted that 30% of generative AI projects would be abandoned after proof of concept by the end of 2025. The reasons? Poor data quality, inadequate risk controls, escalating costs, and unclear business value. "Unclear business value" is the polite way of saying nobody defined what success looked like before they started building.
Deloitte's research on AI ROI paints an even sharper picture. 85% of organizations increased AI investment in the past year. 91% plan to increase it again. But only 15% of generative AI users report achieving significant, measurable ROI. For agentic AI? That number drops to 10%.
That's a lot of money chasing a feeling instead of a number.
What Disciplined AI Adoption Actually Looks Like
The companies I respect don't start with "AI is going to revolutionize our business." They start with a spreadsheet.
It looks like this: "Customer support handles 400 tickets a day. 60% are status updates we can automate. If we automate those, we free up the team to handle harder issues and improve retention. That's worth $200K a year in labor savings plus improved customer lifetime value. Let's measure both."
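Here's that spreadsheet as a few lines of Python, if you'd rather version-control it. The ticket volume and automation share come straight from the example; the per-ticket handling cost and working days are assumptions you'd swap for your own baseline numbers.

```python
# The spreadsheet from above, as code. Ticket volume and automation
# share come from the example; cost per ticket and working days are
# assumed placeholders you'd replace with your own baseline.

TICKETS_PER_DAY = 400
AUTOMATABLE_SHARE = 0.60       # status updates we can automate
COST_PER_TICKET = 3.20         # assumed fully loaded handling cost, USD
WORKING_DAYS_PER_YEAR = 260    # assumed support coverage

automated_per_year = TICKETS_PER_DAY * AUTOMATABLE_SHARE * WORKING_DAYS_PER_YEAR
labor_savings = automated_per_year * COST_PER_TICKET

print(f"Tickets automated per year: {automated_per_year:,.0f}")
print(f"Projected labor savings:    ${labor_savings:,.0f}")
# ~62,400 tickets and roughly $200K/year under these assumptions.
# Retention and lifetime-value gains get measured separately.
```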
That's not flashy. But it's real.
Research from UC Berkeley backs this up. Their analysis of AI measurement argues that "we're not experiencing an AI failure; we're experiencing a measurement failure." Organizations that define specific metrics before they start... time savings, error reduction, employee capability expansion... consistently outperform those chasing vague "transformation" goals.
The difference between the founder who went underwater and the team that saved $200K? The second team knew what they were measuring before they wrote a single line of code.
The False Confidence Trap
Here's what false confidence looks like in practice:
- "AI will handle 50% of our tickets." Based on what? Industry deflection rates start at 20-40% on day one and take 6-12 months to reach 60%+. Starting with a 50% target and no ramp-up plan is setting yourself up to declare failure on something that was actually working.
- "We'll save 35% on costs." Did you factor in the implementation cost? The monitoring overhead? The escalation handling when the AI gets it wrong? Real cost savings after accounting for total cost of ownership are almost always lower than the pitch deck says.
- "We'll have this rolled out in three months." Deloitte found that most AI projects take 2-4 years to achieve satisfactory returns, not months. Only 6% report payback within a year. That three-month timeline wasn't a plan; it was wishful thinking.
The pattern is always the same. Ambitious vision, no measurement framework, inevitable disappointment. Then the team blames AI instead of the process that deployed it.
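You can pressure-test all three claims with a toy model before committing to anything. This sketch assumes deflection ramps linearly from 30% on day one to 60% at month nine, inside the ranges cited above; the volume and dollar figures are placeholders, not benchmarks.

```python
# Toy payback model for the three claims above. Deflection ramps from
# a day-one rate toward a mature rate, per the 20-40% day-one and
# 60%+ at 6-12 months ranges cited earlier. All dollar figures are
# illustrative placeholders.

MONTHLY_TICKETS = 8_000        # assumed support volume
COST_PER_TICKET = 3.20         # assumed fully loaded cost, USD
IMPLEMENTATION_COST = 150_000  # assumed build cost, USD
MONTHLY_OVERHEAD = 4_000       # assumed monitoring + escalations, USD

DAY_ONE_DEFLECTION = 0.30      # mid-range of the 20-40% day-one figure
MATURE_DEFLECTION = 0.60       # reached at RAMP_MONTHS
RAMP_MONTHS = 9                # inside the 6-12 month range

def deflection_at(month: int) -> float:
    """Linear ramp from day-one to mature deflection."""
    progress = min(month / RAMP_MONTHS, 1.0)
    return DAY_ONE_DEFLECTION + progress * (MATURE_DEFLECTION - DAY_ONE_DEFLECTION)

cumulative = -IMPLEMENTATION_COST
for month in range(1, 37):
    savings = MONTHLY_TICKETS * deflection_at(month) * COST_PER_TICKET
    cumulative += savings - MONTHLY_OVERHEAD
    if cumulative >= 0:
        print(f"Breakeven at month {month}")
        break
else:
    print("No breakeven within three years")
```

Under these particular assumptions, breakeven lands around month 16... past the one-year mark, consistent with Deloitte's 6% figure. Swap in your own inputs and watch how quickly the three-month story falls apart.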
How to Start an AI Initiative That Actually Works
The companies winning on AI aren't the ones with the most ambitious visions. They're the ones with discipline. They test small. They measure everything. They move to the next thing when the numbers justify it.
Here's what that looks like in practice:
| Step | What to Do | Why It Matters |
|---|---|---|
| 1. Define the metric | Pick 1-2 specific, measurable outcomes before you build anything | If you can't measure it, you can't know if it worked |
| 2. Baseline everything | Measure your current state: ticket volume, resolution time, cost per interaction | You need a "before" to prove there's an "after" |
| 3. Start small | Automate one ticket type, not all of support | Smaller scope means faster learning and lower risk |
| 4. Measure the ramp | Track week-over-week improvement, not just launch-day numbers | AI systems improve with data; day-one numbers are the floor, not the ceiling |
| 5. Decide with data | Expand, adjust, or kill based on actual results | Gut feelings got you into trouble; numbers get you out |
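Steps 2, 4, and 5 are where teams usually get sloppy, so here's the tracking logic in miniature. The thresholds and weekly numbers below are hypothetical; in practice you'd pull deflection rates from your ticketing system rather than a hardcoded list.

```python
# Sketch of steps 2, 4, and 5: track week-over-week deflection against
# a baseline and a go/no-go band. All numbers here are hypothetical.

BASELINE_COST_PER_WEEK = 12_000  # measured before launch (step 2)
EXPAND_THRESHOLD = 0.45          # deflection that justifies expanding
KILL_THRESHOLD = 0.15            # deflection below this means stop

# Weekly deflection rates; in practice, pulled from your ticketing system.
weekly_deflection = [0.22, 0.26, 0.29, 0.33, 0.38, 0.41, 0.46]

latest = weekly_deflection[-1]
trend = weekly_deflection[-1] - weekly_deflection[0]
weekly_savings = BASELINE_COST_PER_WEEK * latest

if latest >= EXPAND_THRESHOLD:
    decision = "expand: target met"
elif latest <= KILL_THRESHOLD and trend <= 0:
    decision = "kill: below the floor and not improving"
else:
    decision = "adjust: still ramping, keep measuring"

print(f"Latest deflection: {latest:.0%} (trend {trend:+.0%} since week 1)")
print(f"Implied weekly savings: ${weekly_savings:,.0f}")
print(f"Decision: {decision}")
```

Notice what the trend buys you: a 22% week-one number looks like failure against a 50% target, but a rising trend says keep going. That's the difference between measuring the ramp and judging the launch day.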
What We Do Differently
At Last Rev, we don't start AI engagements with architecture diagrams. We start with the measurement plan. What are we measuring? What's the current baseline? What number makes this worth doing? What number means we stop?
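One way to make those four questions unskippable is to turn them into an artifact the team has to fill in before any code gets written. A minimal sketch of what that could look like (field names and numbers are illustrative, not our actual template):

```python
from dataclasses import dataclass

@dataclass
class MeasurementPlan:
    """The four questions, answered before anything gets built."""
    metric: str            # what are we measuring?
    baseline: float        # where is it today?
    target: float          # what number makes this worth doing?
    kill_threshold: float  # what number means we stop?

# Illustrative values only; every engagement fills in its own.
plan = MeasurementPlan(
    metric="share of status-update tickets deflected per week",
    baseline=0.0,
    target=0.45,
    kill_threshold=0.15,
)
```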
That sounds boring compared to "we're going to build an AI-powered everything." But boring is what works. Every AI initiative we've built that delivered real ROI started with someone writing down a specific number they expected to hit... and then actually tracking whether they hit it.
The ones that didn't? They started with excitement and ended with a Slack thread asking why the numbers didn't look right.
The Bottom Line
False confidence kills AI adoption. Real measurement accelerates it.
If you're starting an AI initiative, start by asking one question: what would you actually measure? If you can't answer that clearly, you're not ready to build. And that's fine. Getting the measurement right first is faster than building the wrong thing and unwinding it later.
The companies that win on AI aren't smarter. They're not using better models or fancier frameworks. They just know their numbers.
Sources
- Gartner -- "Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept By End of 2025" (2024)
- Deloitte -- "AI ROI: The Paradox of Rising Investment and Elusive Returns" (2025)
- UC Berkeley Professional Education -- "Beyond ROI: Are We Using the Wrong Metric in Measuring AI Success?" (2025)
- Freshworks -- "How AI is Unlocking ROI in Customer Service" (2025)