Here's how my morning used to go. Open Slack... scroll through overnight messages across eight client channels. Open GitHub... check PRs across a dozen repos. Open GA4... squint at traffic charts. Open the CRM... see if any leads came in. Open Contentful... check if anyone published something weird. That's five tools, minimum 45 minutes, and I still wasn't sure I had the full picture.

Now I open one tab. Four minutes later, I know everything.

I'm going to walk you through exactly how I built that: what it replaced, what it costs, and why I think every ops leader running a small team should have one.

The Real Problem Isn't "Too Many Tools"

Everyone talks about tool sprawl like the problem is the number of apps. It's not. The problem is that none of them talk to each other, and you become the integration layer.

Think about it. Your project manager checks GitHub for PR status, then walks over to Slack to relay it, then updates a spreadsheet, then mentions it in standup. That's one person spending 30 minutes to move information from Point A to Point B. Multiply that across every tool and every team member.

Harvard Business Review published a study where they tracked 137 people across three Fortune 500 companies. The average worker toggled between apps and websites 1,200 times per day.1 Not per week. Per day. And every single switch has a cost; Gloria Mark's research at UC Irvine found it takes over 23 minutes to fully refocus after an interruption.2

I was living this every day. Not because I'm disorganized... because the tools made it impossible to stay organized. Each one had its own dashboard, its own notification system, its own mental model. I wasn't doing operations; I was doing data collection.

So I Built One Screen That Does It All

The Command Center is a single dashboard that pulls data from every system I use to run Last Rev. GitHub, Slack, Google Analytics, Contentful, our CRM, uptime monitors, Zoom... all of it feeds into one place. But here's the important part: it's not just aggregating data. An AI agent analyzes it overnight and surfaces only what actually matters.

Let me show you what a real morning looks like now.

6:00 AM... Before I'm Awake

While I'm sleeping, 63 automated jobs are running. The AI agent has already scanned every open pull request across all our client repos. It's reviewed the code, left comments, and flagged anything that needs my attention. In parallel, it ran a code quality audit across every app in our workspace, checking for duplicate code, missed shared components, and consistency issues.

6:30 AM... The Slack Digest

A cron job fires and scans every Slack channel from the past 24 hours. Instead of reading 200 messages, I get five paragraphs organized by client. If a developer flagged a blocker at 11 PM, it's in there. If a client mentioned wanting to reschedule something, it's in there. Prioritized by what needs action first.

7:00 AM... Meeting Prep That Does Itself

I have a client call at 10. The system already pulled their health score, their open PRs, their recent deploy history, and their latest Slack conversations. It generated prep notes. I walk into the call already knowing what to ask about.

7:15 AM... I Actually Open the Dashboard

One tab. Five sections. I scan for color. Green means healthy. Red means act now. Today I see one client flagged red: 13 stale PRs, zero commits this week. I make a note to bring it up in the sync. I see a traffic spike on one of our blog posts. I see a new lead scored at 9/10. Total time: four minutes.

That's it. Full situational awareness. No tab switching, no Slack scrolling, no "did I forget to check something?"

What's Actually In There

The dashboard is organized into five tabs. Each one replaces a workflow that used to eat hours.

Admin: The Daily Pulse

This is where I start. It has four modules...

  • Task Queue shows everything the AI agent is processing, from "summarize this Zoom recording" to "research this lead." It processes one task every 60 seconds from a database queue. I can see what's pending, what's running, what's done.
  • Daily Feed is an AI-generated digest of everything that happened in the last 24 hours. Deploys, PR merges, Slack conversations, anomalies. A personalized newspaper for the company.
  • Slack Summaries are the per-client breakdowns I mentioned. Five paragraphs instead of 200 messages.
  • Calendar shows today's meetings with AI-generated prep notes. Not just "10 AM call with IAS" but "here's their health score, their open issues, and what they mentioned in Slack last week."

Code: Engineering Without Standups

I can see every open PR across all repositories. Right now that's 30+ PRs across 8 client projects. The system flags anything that's been open longer than 48 hours. It runs automated code reviews hourly during business hours. It catches things like a race condition in a developer's web component PR... and sometimes fixes it before I even see it.
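The staleness check itself is trivial. Here's a minimal sketch of the 48-hour rule; the field names are illustrative, not my actual schema:

```javascript
// Flag PRs that haven't been updated in more than 48 hours.
// `prs` stands in for whatever a GitHub fetch returns; the
// `number` and `updated_at` fields here are illustrative.
const STALE_HOURS = 48;

function findStalePRs(prs, now = Date.now()) {
  const cutoff = now - STALE_HOURS * 60 * 60 * 1000;
  return prs.filter((pr) => new Date(pr.updated_at).getTime() < cutoff);
}

// Example: one fresh PR, one three-day-old PR.
const checkedAt = Date.parse("2025-06-01T12:00:00Z");
const prs = [
  { number: 101, updated_at: "2025-06-01T09:00:00Z" }, // 3 hours old
  { number: 102, updated_at: "2025-05-29T12:00:00Z" }, // 3 days old
];
console.log(findStalePRs(prs, checkedAt).map((pr) => pr.number)); // → [102]
```

The real version pulls from the GitHub API on a schedule; the filter is all the "intelligence" staleness needs.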

There's also a nightly Lighthouse audit that checks performance scores on every client site. If a deploy tanks Core Web Vitals, I know before the client does.

Client Health: The Score That Replaced Check-Ins

This is the one that changed everything. Each client gets a composite health score based on real signals: commits in the last 7 days, how stale their PRs are, open issues, when they last deployed, Slack activity. It's not a gut feeling; it's math.

Here's a real example. One client's score dropped to red. Zero commits, 13 stale PRs, no Slack activity in a week. Without the dashboard, I might not have noticed for another few days. Instead, I reached out proactively before they had to ask "hey, what's going on with our project?" That's the difference between reactive and proactive ops.
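To make "it's math" concrete, here's an illustrative version of a composite score built from those signals. The weights and penalties below are invented for this sketch, not the actual formula; the point is that every input is a measurable signal, not a feeling:

```javascript
// Illustrative 0-100 health score from the signals named above.
// Weights and caps are invented for this sketch.
function healthScore({ commitsLast7d, stalePRs, openIssues, daysSinceDeploy, slackMessagesLast7d }) {
  let score = 100;
  score -= Math.max(0, 10 - commitsLast7d) * 2; // reward recent commits
  score -= Math.min(stalePRs, 15) * 3;          // stale PRs hurt fast
  score -= Math.min(openIssues, 20);            // open issues drag slowly
  score -= Math.min(daysSinceDeploy, 30);       // stale deploys
  score -= slackMessagesLast7d === 0 ? 10 : 0;  // silence is a warning sign
  return Math.max(0, Math.round(score));
}

// The red client from the example: zero commits, 13 stale PRs, quiet Slack.
const red = healthScore({
  commitsLast7d: 0, stalePRs: 13, openIssues: 8,
  daysSinceDeploy: 14, slackMessagesLast7d: 0,
});
console.log(red); // → 9, deep in the red
```

Once the score is a number, "reach out proactively" becomes a threshold check instead of a judgment call.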

The health tab also monitors CMS content. We manage thousands of Contentful entries across clients. The system flags drafts that haven't been touched in over 1,000 days: content that's just sitting there, never published, taking up space and confusing editors.

And there's GA4 anomaly detection. When sessions drop 28% compared to the weekly average, I get an alert. When a blog post spikes 316%, I know to amplify it. No more logging into Analytics and trying to spot patterns manually.
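The anomaly check is just a comparison against a trailing average. A minimal sketch, with an illustrative threshold:

```javascript
// Compare today's sessions to the trailing weekly average and report
// the percent change. The 25% threshold is illustrative.
function sessionAnomaly(history, today, threshold = 0.25) {
  const avg = history.reduce((a, b) => a + b, 0) / history.length;
  const change = (today - avg) / avg;
  if (change <= -threshold) return { kind: "drop", pct: Math.round(change * 100) };
  if (change >= threshold) return { kind: "spike", pct: Math.round(change * 100) };
  return null; // normal variation, no alert
}

// A 28% drop against a steady 1,000-session week:
console.log(sessionAnomaly([1000, 1000, 1000, 1000, 1000, 1000, 1000], 720));
// → { kind: "drop", pct: -28 }
```

Everything below the threshold stays silent, which is the whole trick: the dashboard only speaks when the number is worth acting on.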

Financial: Pipeline Without a CRM

Every inbound lead gets automatically enriched with company data, tech stack analysis, and a fit score. When a healthcare company using Contentful and Next.js came in, the system scored them 9/10 and generated talking points: "Ask about their portal redesign timeline. Discuss composable architecture for HIPAA compliance." That's 30 minutes of sales research, done in seconds.

Every night at 9 PM, a cron job scans tomorrow's calendar for meetings with external contacts and pre-researches each person and company. I walk into every meeting with context.

Personal: The Async Standup

At 6 PM, the system generates a dev journal. What got done today, what's pending, what needs attention tomorrow. It's the standup I give to myself. There's also a curated news feed and weather briefing... small things, but they make the dashboard feel like home base rather than just another work tool.

63 Jobs Running in the Background

The dashboard is just the surface. Behind it are 63 scheduled jobs that feed it data, run analysis, and take action. The breakdown...

  • 35 nightly reviews covering PR/merge cycles, code quality audits, UX reviews, and per-app code reviews across 20+ applications
  • 5 system jobs including the task queue processor (every 60 seconds) and dashboard data refresh (every 30 minutes)
  • 4 monitoring jobs for PR staleness, weekly triage, and backlog prep
  • 4 content generation jobs for idea scoring, skill discovery, and competitive research
  • 4 morning briefings covering weather, Slack summaries, calendar prep, and the evening journal
  • 2 research jobs for lead enrichment and meeting prep
  • 9 other jobs handling recipe audits, brainstorming, memory cleanup, and reminders

The most critical one is the task queue processor. It runs every 60 seconds, polls a database table for pending work, processes one task per cycle, and delivers results to Slack. It's the nervous system that connects everything. If someone says "research this company" in Slack, it hits the queue, gets processed, and the research lands back in Slack... usually before the person's moved on to something else.
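One polling cycle looks roughly like this. An in-memory array stands in for the Supabase table, and the task types are examples; the real processor claims the row with a status update so two cycles can't grab the same task:

```javascript
// One cycle of the task queue: claim the oldest pending task, run it,
// mark it done. In production this is a database query plus a status
// update, and the result is posted back to Slack.
const queue = [
  { id: 1, type: "research_company", payload: "Acme Health", status: "pending" },
  { id: 2, type: "summarize_zoom", payload: "call-0142", status: "pending" },
];

const handlers = {
  research_company: (payload) => `Research brief for ${payload}`,
  summarize_zoom: (payload) => `Summary of recording ${payload}`,
};

function processOneTask() {
  const task = queue.find((t) => t.status === "pending"); // oldest first
  if (!task) return null;
  task.status = "running"; // claim it so the next cycle skips it
  task.result = handlers[task.type](task.payload);
  task.status = "done";
  return task;
}

// Each 60-second tick processes exactly one task.
processOneTask();
console.log(queue.map((t) => t.status)); // → ["done", "pending"]
```

Wrap `processOneTask` in a one-minute scheduler and you have the nervous system: anything anyone drops into the queue gets picked up within a cycle.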

Let's Talk About What This Replaces

I want to be direct about the math here, because this is where it gets interesting.

To get this level of operational awareness without the Command Center, you'd need at minimum...

  • A project manager for client health monitoring, PR triage, blocker escalation, meeting prep. $85K-$110K/year.
  • A part-time DevOps engineer for uptime monitoring, deployment tracking, Lighthouse audits, dependency updates. $60K-$80K/year.
  • A marketing analyst for GA4 monitoring, content audits, traffic anomaly detection, lead enrichment. $65K-$85K/year.
  • A sales ops coordinator for lead research, CRM maintenance, pipeline reporting. $55K-$70K/year.
  • An executive assistant for calendar management, Slack monitoring, daily briefings, follow-up tracking. $50K-$65K/year.

That's $315K to $410K per year in headcount. For a 10-person agency or a solo founder, that's not realistic. But you still need that coverage. The Command Center provides it for a fraction of the cost.

And I want to be clear... I'm not saying fire people. I'm saying most small teams can't hire for all of these roles in the first place. The Command Center fills the gap that's already there.

The Tech Is Deliberately Simple

People assume this requires a complex stack. It doesn't.

  • Frontend: Vanilla Web Components. No React. No build step. No framework. Each module is a self-contained custom element that fetches its own data and renders independently. 27 modules total, most under 200 lines of code.
  • Data: Supabase for real-time tables and static JSON files for snapshots. The AI agent writes data; the dashboard reads it. That's the entire architecture.
  • AI: Cron jobs run, the agent analyzes something, writes a JSON file or database row. No complex API layer. No webhook configuration. Just files and a database.
  • Hosting: Static files. No server. No SSR. Loads in under a second.

This architecture makes the dashboard almost indestructible. If a cron job fails, the dashboard shows stale data with a timestamp instead of crashing. If the database goes down, the JSON files still load. If the AI agent is offline, the last known data persists. Graceful degradation at every layer.
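The "stale data with a timestamp" behavior is one tiny function per module. A sketch, with an illustrative freshness window:

```javascript
// Decide whether a snapshot is fresh enough to show as-is, or should
// be labeled stale. The 45-minute window is illustrative.
function freshness(generatedAt, maxAgeMinutes = 45, now = Date.now()) {
  const ageMin = (now - Date.parse(generatedAt)) / 60000;
  return ageMin <= maxAgeMinutes ? "fresh" : `stale (${Math.round(ageMin)} min old)`;
}

// A snapshot from 90 minutes ago still renders, just with a warning:
console.log(freshness("2025-06-01T11:00:00Z", 45, Date.parse("2025-06-01T12:30:00Z")));
// → "stale (90 min old)"
```

A stale label beats a blank module: the reader still gets the last known picture plus an honest signal about its age.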

What I Learned Building This

If you want to build something similar, here's what actually matters.

Start with the morning briefing. Don't try to build the whole thing at once. Set up one automated Slack summary that hits your DMs at 6:30 AM. Once you experience walking into your day with full context already waiting for you, you'll be motivated to add more. That first morning where you don't have to scroll through Slack... you'll feel it immediately.

Use boring technology. I chose vanilla Web Components specifically because there's nothing to break. No dependency updates, no framework migrations, no build pipeline. Each module is one file. I can write, test, and deploy a new module in 15 minutes. When you're building an operational tool, reliability matters more than elegance.

Not everything needs to be real-time. Client health scores update every 30 minutes. PR data refreshes hourly. GA4 alerts run daily. The only real-time piece is the task queue at 60-second intervals. Match your refresh rate to how fast you actually make decisions. Most ops decisions don't need sub-second data.

Make the AI write files, not call APIs. The simplest integration pattern I've found: a cron job runs, the AI analyzes something, writes a JSON file. The dashboard reads the file. No API auth, no webhook config, no error handling for failed calls. Just files. It sounds primitive and it works beautifully.

Color-code ruthlessly. Three colors. Green means healthy. Amber means pay attention. Red means act now. Every module speaks the same language. When I glance at the dashboard, I don't read... I see colors. If everything's green, I move on. One red dot anywhere gets 100% of my focus.
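The entire visual language is one function shared by every module. The cutoffs below are illustrative:

```javascript
// One shared mapping from score to status color. Cutoffs are
// illustrative; what matters is that every module uses the same ones.
function statusColor(score) {
  if (score >= 70) return "green"; // healthy, move on
  if (score >= 40) return "amber"; // pay attention
  return "red";                    // act now
}

console.log([92, 55, 9].map(statusColor)); // → ["green", "amber", "red"]
```

Because every module funnels through the same mapping, a glance at the page really is a glance; there's no per-module legend to remember.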

The Part Nobody Talks About: It Gets Better Over Time

Here's what surprised me. The Command Center compounds.

Each new cron job adds another data stream. Each new module creates another lens into the business. The nightly brainstorm agent generates ideas that reference patterns spotted by the client health module. The lead enrichment system generates talking points informed by content audit findings. The whole system cross-pollinates in ways I didn't plan for.

After three months of running this...

  • 63 automated jobs running on schedule
  • 27 dashboard modules providing visibility across every part of the business
  • Thousands of content entries continuously audited
  • 30+ open PRs tracked across 8+ repositories
  • Every inbound lead enriched with fit scores and talking points
  • Zero missed client escalations
  • 4-minute mornings replacing 45-minute review sessions

McKinsey pegs the long-term AI productivity opportunity at $4.4 trillion across corporate use cases.3 That's a macro number. At the micro level, for one person running one company, the impact is simpler: I got my mornings back. I stopped being a human router for information. I started actually doing the strategic work I'm supposed to be doing.

This Isn't Science Fiction

I think the biggest misconception about AI operations dashboards is that they require some massive infrastructure investment or a team of ML engineers. They don't. The Command Center runs on a free-tier database, static file hosting, and an AI agent that costs less per month than a single team lunch.

The shift is simple but significant. Traditional dashboards are passive... you look, you search, you find. The Command Center is active... it looks, it finds, it tells you. I didn't write a rule that says "alert me when a client has 13 stale PRs." The system noticed the pattern and raised it. That's the difference.

If you're an ops leader drowning in tabs, or a founder who keeps putting off hiring a PM because the budget isn't there... the tools to build this exist right now. Not in some future product roadmap. Right now.

Let's talk about building yours.


Sources

  1. Rohan Narayana Murty, Sandeep Dadlani, et al. — "How Much Time and Energy Do We Waste Toggling Between Applications?" Harvard Business Review, August 2022.
  2. Gloria Mark, University of California, Irvine — Research on digital distraction and task switching. Cited in Forbes via UCI Informatics. Finding: it takes over 23 minutes to fully regain focus after an interruption.
  3. McKinsey & Company — "Superagency in the Workplace: Empowering People to Unlock AI's Full Potential at Work" January 2025. Sizes the long-term AI opportunity at $4.4 trillion in added productivity growth potential.