Every Monday morning used to look the same: open Slack, scroll through a weekend's worth of noise, check the calendar for surprises, scan GitHub for stale PRs, glance at the weather to decide if I'm walking to the coffee shop or staying put. Thirty minutes of context-gathering before any actual work happened. Multiply that by fifty-two Mondays a year and you've lost more than three full workdays just orienting.

We decided to automate all of it. Not with some monolithic dashboard or a fancy app — with a batch of cron jobs, a Supabase queue, and an AI agent that posts everything to Slack before anyone's first cup of coffee is ready.

This is the story of how we built the Monday Morning system at Last Rev, and how it quietly became the most valuable automation in our entire AI operations stack.

The Problem: Context Switching Before the Day Starts

For a small agency running multiple client projects, Monday mornings were a cognitive assault. The typical routine looked something like this:

  • Check weather to plan the day (2 minutes)
  • Scan Slack for weekend messages, client escalations, team chatter (10–15 minutes)
  • Review Google Calendar for the day's meetings, check for conflicts (5 minutes)
  • Open GitHub, look at open PRs, figure out what's stale vs. urgent (10 minutes)
  • Mentally piece together what everyone's working on for standup (5 minutes)
  • Actually start working (whenever all that's done)

That's 30–40 minutes of pure overhead. And it's not deep work — it's shallow scanning, tab-switching, and trying to hold a dozen threads in working memory simultaneously. The information existed in five different tools. The synthesis happened entirely in my head.

We asked a simple question: what if all of that was already done and waiting in Slack when I woke up?

The Architecture: Cron Batches, Supabase Queues, and Agent Turns

The system we built has three layers: a scheduler, a queue, and an executor. It's simpler than it sounds.

Layer 1: Cron Scheduling

Alpha Agent's cron system fires jobs on a schedule. Each morning batch is a collection of jobs with defined execution order. The cron_batches table in Supabase defines the batch (Monday morning, nightly review, etc.), and cron_batch_jobs links individual jobs to their batch with a sort order.

Here's the actual Monday morning timeline:

| Time | Job | What It Does |
|---|---|---|
| 6:00 AM | morning-weather | Fetches hourly forecast for SF, posts to Slack |
| 6:30 AM | slack-summary | Searches Slack for 24h of notable messages, organizes by client/project |
| 7:00 AM | todays-meetings | Pulls Google Calendar events, flags conflicts and back-to-backs |
| 7:30 AM | pr-triage | Scans all GitHub repos for open PRs, categorizes by urgency, auto-closes stale dependabot PRs |
| 8:00 AM | daily-feed-populate | Populates the idea discovery feed with fresh signals |
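For reference, if you replicated this stagger with plain system cron rather than Alpha Agent's scheduler, the entries might look like this (the script path and prompt filenames are illustrative, not the real ones):

```
# min hour dom mon dow (1 = Monday)  command
0    6    *   *   1   /opt/ops/queue-cron-job.sh morning-weather      scripts/cron-prompts/morning-weather.txt
30   6    *   *   1   /opt/ops/queue-cron-job.sh slack-summary        scripts/cron-prompts/slack-summary.txt
0    7    *   *   1   /opt/ops/queue-cron-job.sh todays-meetings      scripts/cron-prompts/todays-meetings.txt
30   7    *   *   1   /opt/ops/queue-cron-job.sh pr-triage            scripts/cron-prompts/pr-triage.txt
0    8    *   *   1   /opt/ops/queue-cron-job.sh daily-feed-populate  scripts/cron-prompts/daily-feed-populate.txt
```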

Each job is defined as a prompt file — a plain text instruction that tells the AI agent exactly what to do. No code, no API calls baked in. The prompts live in scripts/cron-prompts/ and read like instructions you'd give a sharp junior developer:

"Get the current weather and hourly forecast for today for 1116 Balboa St, San Francisco, CA 94118. Break it down by hour. Keep it concise but useful — include temperature, precipitation chance, wind, and anything notable. This is Adam's 6 AM morning briefing."

That's the entire weather job. The AI agent figures out which tools to call, formats the output, and posts it to Slack. The prompt is the program.

Layer 2: The Supabase Queue

When a cron fires, it doesn't execute the job directly. Instead, run-batch.sh reads the batch configuration from Supabase, iterates through the assigned jobs in sort order, and inserts each prompt into a trigger_queue table with a status of pending.
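The post doesn't publish run-batch.sh itself, but from the description above a sketch might look like this. The PostgREST paths, the column names (`batch_id`, `sort_order`, `prompt_file`), and the jq shaping are all assumptions:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of run-batch.sh: look up a batch, walk its jobs in
# sort order, and queue each prompt. Table and column names are guesses.
set -euo pipefail

api() { curl -s -H "apikey: $SUPABASE_KEY" "$@"; }

run_batch() {  # $1 = batch name, e.g. "monday-morning"
  local batch_id
  batch_id=$(api "$SUPABASE_URL/rest/v1/cron_batches?name=eq.$1&select=id" | jq -r '.[0].id')

  # Iterate the batch's jobs in sort order and queue each prompt as pending
  api "$SUPABASE_URL/rest/v1/cron_batch_jobs?batch_id=eq.$batch_id&select=sort_order,cron_jobs(name,prompt_file)&order=sort_order.asc" |
    jq -r '.[].cron_jobs | "\(.name)\t\(.prompt_file)"' |
    while IFS=$'\t' read -r name file; do
      ./queue-cron-job.sh "$name" "$file"
    done
}

# Only hit Supabase when credentials are configured
if [ -n "${SUPABASE_URL:-}" ]; then
  run_batch "${1:-monday-morning}"
fi
```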

This queue-based architecture gives us several things for free:

  • Retry logic — if a job fails, it stays in the queue for retry
  • Observability — every job execution is a row in Supabase with timestamps, status, and source
  • Decoupled scheduling — the scheduler doesn't need to know how jobs execute, and the executor doesn't need to know when jobs were scheduled
  • Manual triggers — anyone can insert a row into trigger_queue to run any job on demand

The queue-cron-job.sh script handles the insertion. It takes a source name and a prompt file, reads the prompt text, JSON-encodes it, and POSTs it to Supabase's REST API. Twelve lines of bash. That's the entire queuing mechanism.
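A hedged reconstruction of what such a script could look like, assuming jq is available and the trigger_queue columns are `source`, `prompt`, and `status` (the real script isn't shown in the post):

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of queue-cron-job.sh. Real column names and
# auth headers may differ; jq does the JSON encoding.
set -euo pipefail

build_payload() {  # $1 = source name, $2 = prompt file
  jq -n --arg source "$1" --rawfile prompt "$2" \
    '{source: $source, prompt: $prompt, status: "pending"}'
}

queue_job() {  # POST the payload into Supabase's trigger_queue
  build_payload "$1" "$2" | curl -s -X POST "$SUPABASE_URL/rest/v1/trigger_queue" \
    -H "apikey: $SUPABASE_KEY" \
    -H "Content-Type: application/json" \
    -d @-
}

# Only POST when Supabase credentials are configured
if [ -n "${SUPABASE_URL:-}" ]; then
  queue_job "$@"
fi
```

Invoked as `queue-cron-job.sh morning-weather scripts/cron-prompts/morning-weather.txt`, this would drop one pending row into the queue.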

Layer 3: The Agent Executor

A separate process polls the trigger_queue table for pending items. When it finds one, it spins up an Alpha Agent session, passes in the prompt, and lets the agent do its thing. The agent has access to all the tools it needs — Slack search, GitHub CLI, Google Calendar, weather APIs, web search — and it figures out the execution plan from the prompt text alone.

This is the key insight: the prompts are the automation logic. There's no workflow engine, no DAG, no visual builder. Each job is a natural-language instruction that an AI agent interprets and executes. Adding a new job means writing a text file and inserting a row in Supabase. Modifying a job means editing a sentence.
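A sketch of what such a poller could look like under the same assumptions. The `alpha` CLI name is invented here, and a real multi-executor setup would need an atomic claim step (e.g. flipping the row to a `running` status first) rather than this single-consumer loop:

```shell
#!/usr/bin/env bash
# Hypothetical polling executor: fetch the oldest pending row, hand its
# prompt to the agent, mark the row done. Names are assumptions.
set -euo pipefail

api() { curl -s -H "apikey: $SUPABASE_KEY" "$@"; }

poll_once() {
  local row id prompt
  row=$(api "$SUPABASE_URL/rest/v1/trigger_queue?status=eq.pending&order=created_at.asc&limit=1")
  id=$(echo "$row" | jq -r '.[0].id // empty')
  [ -n "$id" ] || return 0            # nothing pending

  prompt=$(echo "$row" | jq -r '.[0].prompt')
  alpha run --prompt "$prompt"        # hypothetical agent CLI invocation
  api -X PATCH "$SUPABASE_URL/rest/v1/trigger_queue?id=eq.$id" \
    -H "Content-Type: application/json" -d '{"status":"done"}'
}

# Only poll when Supabase credentials are configured
if [ -n "${SUPABASE_URL:-}" ]; then
  while true; do poll_once; sleep 30; done
fi
```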

The Monday Morning Jobs in Detail

Morning Weather (6:00 AM)

The simplest job in the batch, and arguably the most delightful. The agent calls the weather skill, gets the hourly forecast, and posts a concise summary to Slack. It includes temperature, precipitation chance, wind speed, and anything unusual (heat advisory, fog warning, etc.).

Is this mission-critical? No. But it sets the tone. When you open Slack at 7 AM and the first thing you see is a neatly formatted weather briefing, it signals that the system is alive and working. It's the canary in the coal mine — if the weather job didn't run, something's wrong with the batch system.

Slack Digest (6:30 AM)

This is the workhorse. The agent searches Slack for notable messages from the past 24 hours (or 48+ hours on Monday mornings after a weekend), organizes them by client and project, and produces a digest with brief summaries and permalinks.

The prompt is specific about what to skip: "Skip internal chatter unless it's significant." The agent infers what's significant from context — a message about a production issue is significant; a message about lunch plans is not. The digest typically surfaces 8–15 items organized under 3–5 client headings.

Before this automation, Slack catch-up was the single biggest time sink on Monday mornings. The digest compresses 45 minutes of scrolling into a 3-minute read.

Today's Meetings (7:00 AM)

The agent pulls Adam's Google Calendar events for the day, lists them chronologically with time, title, attendees, and relevant links. It flags scheduling conflicts and back-to-back meetings where there's no buffer. On Mondays, it also notes any meetings that were added over the weekend (a common source of surprises).

The meeting briefing has an underappreciated benefit: it forces you to look at your calendar proactively rather than reactively. When the first meeting of the day shows up in your Slack at 7 AM, you have time to prepare instead of being ambushed at 9:05.

PR Triage (7:30 AM)

The most complex job in the batch. The agent scans all open pull requests across 14 GitHub repositories in the last-rev-llc organization. It categorizes each PR into four buckets:

  • Red — needs Adam's review (authored by team members, touching critical paths)
  • Yellow — other human PRs (may need review but not urgently)
  • Green — dependabot/automated PRs (informational only)
  • Stale — PRs older than 90 days (candidates for closing)

The agent auto-closes stale dependabot PRs that are over 90 days old — these are almost always superseded by newer dependency updates. For human PRs, it posts a prioritized Slack digest with emoji markers, repo name, PR number, title, author, age, and URL. It even estimates review time based on the diff size.
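The stale-dependabot portion of that job could, in principle, be reproduced directly with the GitHub CLI. This sketch assumes GNU date for `-d`, and the close-comment text is illustrative:

```shell
#!/usr/bin/env bash
# Sketch of the auto-close step using the GitHub CLI. The 90-day threshold
# comes from the triage rules above.
set -euo pipefail

is_stale() {  # $1 = ISO-8601 createdAt; succeeds if older than 90 days
  local created now
  created=$(date -d "$1" +%s)
  now=$(date +%s)
  [ $(( (now - created) / 86400 )) -gt 90 ]
}

close_stale_dependabot() {  # $1 = owner/repo, e.g. last-rev-llc/<repo>
  gh pr list --repo "$1" --state open --author "app/dependabot" \
    --json number,createdAt --jq '.[] | "\(.number)\t\(.createdAt)"' |
  while IFS=$'\t' read -r num created; do
    if is_stale "$created"; then
      gh pr close "$num" --repo "$1" \
        --comment "Closing stale dependabot PR; superseded by newer updates"
    fi
  done
}
```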

The triage results get saved to memory/pr-triage-{date}.md for historical tracking. Over time, this creates a paper trail of how long PRs sit open, which repos accumulate the most review debt, and which team members are most active.

The Data Layer: Supabase as the Backbone

Supabase plays a central role that goes beyond just queuing. The cron system uses several tables:

  • cron_batches — defines named batches (monday-morning, nightly-review) with enabled/disabled flags
  • cron_jobs — individual job definitions with prompt text or file references
  • cron_batch_jobs — many-to-many join table linking jobs to batches with sort order
  • trigger_queue — the execution queue where pending jobs wait to be picked up
  • crons — a synced snapshot of all cron state for the Command Center dashboard

The sync-crons-to-supabase.sh script keeps the Supabase crons table in sync with the actual cron state. It dumps the current state to JSON, upserts it to Supabase, and cleans up orphaned rows. The Command Center dashboard reads from this table to show real-time cron status.

This architecture means we can manage the entire cron system through Supabase's dashboard if needed — enable/disable batches, reorder jobs, edit prompts, all without touching the codebase. It's infrastructure-as-data rather than infrastructure-as-code, and for an operations system like this, that's exactly the right trade-off.

The Nightly Complement: Journals and Reviews

The Monday morning batch doesn't exist in isolation. It's the mirror image of a nightly batch that runs in the evening:

  • Daily Journal (6:00 PM) — the agent pulls the day's GitHub activity, Slack highlights, and completed tasks, then writes a structured journal entry to memory/journal-{date}.md
  • Nightly App Reviews — for each enabled app in the apps table, the agent runs a code review, checking for issues, performance regressions, and improvement opportunities
  • Command Center Refresh — dashboard data is rebuilt from live sources

The morning batch consumes what the nightly batch produces. The Slack digest surfaces issues found in nightly reviews. The PR triage catches PRs opened in response to review findings. The meeting briefing includes standups scheduled to discuss blockers identified the night before. It's a feedback loop that runs on autopilot.

What We Learned Building This

1. Prompts Are Better Than Code for Glue Logic

The traditional approach to this kind of automation would involve writing API integrations, data transformers, and output formatters in Python or Node. Each job would be hundreds of lines of code with error handling, pagination, rate limiting, and output formatting.

Our jobs are text files. The morning-weather job is two sentences. The slack-summary job is four sentences. The pr-triage job is the most complex at about 15 lines of instructions — and it handles 14 repos, four categorization buckets, auto-closing logic, estimated review times, and file output.

When we need to change behavior, we edit a sentence. When we need to add a job, we write a paragraph. The AI agent handles all the API mechanics, error handling, and formatting. This is a fundamentally different development velocity than traditional automation.

2. The Queue Pattern Is Essential

We initially tried firing agent sessions directly from cron. It worked, but it was fragile. If the agent was busy, jobs would fail silently. If a job errored out, there was no retry. If we wanted to run a job manually, we had to simulate the cron trigger.

Adding the Supabase queue solved all of these problems with minimal complexity. The queue is just a database table. The executor is a polling loop. The total additional code was about 30 lines of bash. But the operational benefits — retries, observability, manual triggers, execution history — were enormous.

3. Ordering Matters More Than You Think

The jobs run in a specific sequence for a reason. Weather goes first because it's the fastest and sets the "system alive" signal. Slack digest goes second because it takes the longest (searching across all channels). Meetings go third because they depend on knowing what's in the Slack digest (context for meeting prep). PR triage goes last because it's the most action-oriented — by the time you see the PR list, you've already absorbed the context from the earlier briefings.

We tried running everything in parallel initially. The raw wall-clock time was faster, but the user experience was worse. Messages arrived in Slack in random order, breaking the narrative flow. Sequential execution with the sort-order column in cron_batch_jobs fixed this.

4. The Canary Job Pattern

The weather job serves double duty. Its primary function is the weather briefing, but its secondary function is system health monitoring. If the weather message doesn't appear in Slack by 6:15 AM, something is wrong — the cron system is down, Supabase is unreachable, or the agent executor is stuck. It's the simplest possible health check because it has the fewest dependencies.

We've since formalized this: the first job in every batch should be a low-dependency "canary" that verifies the pipeline is working. It's cheaper and more reliable than building a dedicated monitoring system.

5. Memory Files Create Compound Value

The PR triage saves results to memory/pr-triage-{date}.md. The daily journal saves to memory/journal-{date}.md. Over weeks and months, these files accumulate into a searchable operational history. When someone asks "what happened with the IAS project in January?", the answer is in the memory files — every PR, every Slack conversation, every blocker, every resolution.

This wasn't intentional at first. We added file output as a debugging aid. But the compound value of having weeks of structured operational data turned out to be one of the system's most important features.

The Numbers

After running this system for six weeks, here's what we measured:

| Metric | Before | After |
|---|---|---|
| Monday morning context-gathering | 35 min | 5 min (reading Slack) |
| Stale PRs (>90 days) | 12–18 | 0–2 |
| Missed meeting conflicts | ~2/week | 0 |
| Time to first productive work | 9:15 AM | 7:30 AM |
| Slack messages read manually | ~200/day | ~40/day (digest + targeted) |

The headline number: we reclaimed 30 minutes every Monday morning and 15 minutes every other weekday. That's roughly an hour and a half per week, or close to 80 hours per year — two full workweeks recovered from pure operational overhead.

But the qualitative improvement matters more than the time savings. Starting the day with a synthesized briefing instead of raw information means better decisions earlier. The PR triage alone prevented three instances where stale PRs would have caused merge conflicts in active work. The meeting briefing caught two double-bookings before they became embarrassments.

Extending the Pattern

The beauty of this architecture is how easy it is to extend. Since we built it, we've added:

  • Idea Discovery — a morning job that scans industry feeds and surfaces relevant opportunities
  • Recipe Audit — a weekly job that reviews automation recipes for staleness
  • Lead Research — a job triggered by new inbound leads that researches the company before the sales call
  • Nightly Brainstorm — an end-of-day creative session that generates product ideas based on the day's signals

Each new job follows the same pattern: write a prompt file, add a row to cron_jobs, link it to a batch in cron_batch_jobs, done. No deployment, no code review, no infrastructure changes. The time from idea to running automation is measured in minutes, not sprints.

How to Build Your Own

You don't need Alpha Agent to replicate this pattern. The core ingredients are:

  1. A cron scheduler — anything from system crontab to GitHub Actions to a cloud scheduler
  2. A queue table — any database with a status column works (Supabase, Postgres, even SQLite)
  3. An AI agent with tool access — the agent needs to call APIs (Slack, GitHub, Calendar) and post results
  4. Prompt files — plain text instructions for each job, stored in version control
  5. A polling executor — a loop that checks the queue and dispatches to the agent
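As a concrete starting point, here is a minimal single-file version of ingredients 2 and 5 using SQLite in place of Supabase; the column names mirror the trigger_queue table described earlier, and `echo` stands in for the agent dispatch:

```shell
#!/usr/bin/env bash
# Minimal end-to-end queue demo with the sqlite3 CLI: create the table,
# enqueue a job, claim the oldest pending row, mark it done.
set -euo pipefail

DB=/tmp/trigger_queue_demo.db

sqlite3 "$DB" "CREATE TABLE IF NOT EXISTS trigger_queue (
  id INTEGER PRIMARY KEY,
  source TEXT,
  prompt TEXT,
  status TEXT DEFAULT 'pending',
  created_at TEXT DEFAULT CURRENT_TIMESTAMP);"

# Scheduler side: queue a job
sqlite3 "$DB" "INSERT INTO trigger_queue (source, prompt)
  VALUES ('cron:morning-weather', 'Get the weather and post it to Slack');"

# Executor side: claim the oldest pending job, run it, mark it done
id=$(sqlite3 "$DB" "SELECT id FROM trigger_queue WHERE status='pending' ORDER BY id LIMIT 1;")
prompt=$(sqlite3 "$DB" "SELECT prompt FROM trigger_queue WHERE id=$id;")
echo "dispatching to agent: $prompt"
sqlite3 "$DB" "UPDATE trigger_queue SET status='done' WHERE id=$id;"
```

Swapping the sqlite3 calls for curl against a Supabase REST endpoint gives you the production version of the same loop.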

The total implementation is surprisingly small. Our batch runner is ~40 lines of bash. The queue inserter is ~12 lines. The prompt files average 3–5 sentences each. The most complex piece is the agent executor, and if you're using an AI platform with tool calling, that's mostly handled for you.

Start with the canary: automate your weather briefing. Once that's running reliably, add the Slack digest. Then calendar. Then PR triage. Each job is independent, so you can build incrementally without risking what's already working.

The Bigger Picture

The Monday Morning system isn't really about Monday mornings. It's about a principle: operational intelligence should be delivered, not discovered. Every minute spent hunting for context is a minute not spent on judgment, creativity, or execution.

Traditional tools give you dashboards to look at. This system gives you briefings that come to you. The difference is the same as checking a news website versus having a personalized newspaper on your doorstep. Both deliver information; only one respects your time and attention.

For agencies and small teams running multiple projects, this pattern is transformative. The operational overhead of context-switching between clients, codebases, and communication channels is the hidden tax on productivity. Automating the synthesis — not just the data collection, but the making sense of it — is where AI operations delivers its highest ROI.

The Monday morning is just the beginning. Once you've tasted automated briefings, you start seeing the pattern everywhere: sprint kickoffs, client check-in prep, quarterly reviews, board meeting summaries. Every recurring synthesis task is a candidate for a cron prompt and a queue row.

Our Monday mornings are quieter now. The Slack messages are already organized. The PRs are already triaged. The calendar conflicts are already flagged. The weather is already checked. And by 7:30 AM, we're doing actual work instead of figuring out what work needs doing.

That's what AI operations looks like in practice. Not a revolution — just thirty minutes back, every single day. Ready to automate your mornings?