Here's a number that stopped me in my tracks: 370. That's how many tasks have flowed through our trigger queue system — cron jobs, edit-mode requests, batch operations — all processed autonomously by Alpha Agent without a single human writing a line of code for any of them. 355 completed successfully. 15 failed. Zero required manual intervention to ship. That's a 96% success rate on fully autonomous task execution.
But the number that actually matters isn't 370. It's 85. That's how many of those tasks came from edit mode — our system that lets anyone on the team describe a UI change in plain English, click a button, and have it built and deployed automatically. Of those 85 edit-mode requests, every single one completed successfully. 100% success rate. Zero errors.
This is the story of how we turned a Supabase table, a web component, and an AI agent into a system where every team member can ship UI changes without knowing what a div is.
Every agency has the same story. A designer spots a typo. A project manager wants to tweak a heading. A strategist thinks the CTA copy should be different. A client wants the button to be a slightly different shade of blue.
None of these are hard changes. Any developer could make them in five minutes. But the developer is busy with actual architecture work, so the change goes into a ticket. The ticket sits in a backlog. The backlog gets groomed next sprint. The change ships three weeks later — a five-minute fix that took fifteen business days because of the handoff overhead.
We didn't have a technical problem. We had a queue problem. The queue was a human developer's attention, and it was always full.
So we built a different kind of queue — one where the executor isn't a person.
Every app in our portfolio — marketing sites, dashboards, internal tools, client demos — includes a shared component called <cc-dev-toolbar>. It's a floating toolbar in the bottom-left corner of every page with two buttons: an edit pencil and a bug reporter.
Click the pencil, and the page transforms. Every custom component gets a dashed purple outline and a small label identifying it. Click an outlined component, and a sidebar opens with a single text box asking what you want changed.
That's it. That's the entire interface. No JIRA. No Figma annotations. No "please create a ticket with reproduction steps." Just a text box and a question.
The <cc-edit-mode> component (422 lines of vanilla JavaScript, no framework dependencies) does the following when activated:

- Scans the page for cc-* custom elements and adds data-cc-component attributes
- Captures the selected component's name, the page URL, and the user's plain-English description
- POSTs a row to the trigger_queue table with source: 'edit-mode' and status: 'pending'

The payload that hits Supabase looks like this:
```json
{
  "message": "[Component: cc-hero] [URL: https://app.example.com/] Make the headline larger and change the gradient to blue-purple",
  "source": "edit-mode",
  "status": "pending"
}
```

That's the entire contract between the human and the machine: a component name, a URL, and a plain-English description of what should change.
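Client-side, that contract is cheap to produce. Here's a hedged sketch of the enqueue path — the `buildMessage` and `enqueueEdit` names, the project URL, and the key placeholder are illustrative, not the actual component internals:

```javascript
// Build the message string per the payload contract:
// component tag, page URL, then the plain-English request.
function buildMessage(component, url, description) {
  return `[Component: ${component}] [URL: ${url}] ${description}`;
}

// POST one row into trigger_queue via Supabase's REST API.
// Placeholders: YOUR-PROJECT and SUPABASE_PUBLISHABLE_KEY.
async function enqueueEdit(component, url, description) {
  const res = await fetch('https://YOUR-PROJECT.supabase.co/rest/v1/trigger_queue', {
    method: 'POST',
    headers: {
      apikey: 'SUPABASE_PUBLISHABLE_KEY', // publishable key, safe in the browser
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      message: buildMessage(component, url, description),
      source: 'edit-mode',
      status: 'pending',
    }),
  });
  if (!res.ok) throw new Error(`enqueue failed: ${res.status}`);
}
```

No SDK, no middleware: one fetch call against the table's REST endpoint.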
On the other side of that Supabase table, Alpha Agent's queue executor is polling. When it finds a pending row, it:
- Claims the row (sets status to processing)
- Parses the message to find the target component and page
- Locates and edits the component source in the apps/ directory
- Commits and pushes the change, which deploys it
- Marks the row done

The user sees a real-time progress indicator in the edit sidebar. Within minutes, their change is live. No PR review. No deployment pipeline. No developer in the loop.
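The executor's lifecycle logic fits in a few lines. In this sketch the `db` object and `runTask` function are illustrative stand-ins for the Supabase REST calls and the agent's actual work, not the real implementation:

```javascript
// Process at most one pending task: claim it, run it, record the outcome.
// One-at-a-time processing is what serializes all work through the queue.
async function pollOnce(db, runTask) {
  const row = await db.oldestPending();       // oldest row with status=pending
  if (!row) return null;                      // queue clear, nothing to do
  await db.setStatus(row.id, 'processing');   // claim the row
  try {
    await runTask(row.message);               // agent edits files, commits, pushes
    await db.setStatus(row.id, 'done');
    return 'done';
  } catch {
    await db.setStatus(row.id, 'error');      // row stays in the table for retry
    return 'error';
  }
}
```

Looping this with a short sleep between calls is the entire executor.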
The trigger_queue table in Supabase is almost comically simple:
| Column | Type | Purpose |
|---|---|---|
| id | uuid | Primary key |
| message | text | The task description (prompt) |
| source | text | Where it came from (edit-mode, batch-*, cron-*) |
| status | text | pending → processing → done/error |
| created_at | timestamp | When it was queued |
That's it. Five columns. No priority field, no retry count, no complex state machine. The status column has four possible values: pending, processing, done, error. The simplicity is the feature.
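In SQL terms, the whole table is roughly this — a sketch, with column defaults that are assumptions rather than the exact production DDL:

```sql
-- Sketch of the trigger_queue schema; defaults are assumptions.
create table trigger_queue (
  id         uuid primary key default gen_random_uuid(),
  message    text not null,
  source     text not null,
  status     text not null default 'pending',  -- pending | processing | done | error
  created_at timestamptz not null default now()
);
```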
This table serves as the universal inbox for every kind of automated work in our system:

- Edit-mode requests from the dev toolbar (edit-mode)
- Scheduled cron batch jobs (cron-*, batch-*)
- One-off manual triggers from the dashboard
Everything flows through the same table, gets processed by the same executor, and shows up in the same observability layer. One queue to rule them all.
Let's look at the actual data from our trigger_queue table as of today:
| Metric | Count |
|---|---|
| Total tasks processed | 370 |
| Successfully completed | 355 |
| Errors | 15 |
| Currently pending | 0 |
| Overall success rate | 96.0% |
| Edit-mode tasks | 85 |
| Edit-mode errors | 0 |
| Edit-mode success rate | 100% |
The 15 errors are almost entirely from cron batch jobs hitting transient API rate limits or timeouts — the kind of failures you'd expect in any distributed system. The edit-mode path, where a human describes a change and the agent implements it, has a perfect record.
Think about what that means: 85 times, someone who isn't a developer described a UI change in English, and every single time, the AI agent understood the request, found the right file, made the right change, and deployed it successfully.
The edit-mode requests run the full spectrum, from trivial copy tweaks and color changes to surprisingly complex feature additions. The agent handles all of them. It reads the component source, understands the current implementation, and makes targeted changes. For CSS tweaks, it edits the styles. For feature additions, it modifies the JavaScript. For copy changes, it updates the HTML. The prompt-to-code translation happens entirely within the agent's reasoning.
We've thought a lot about why the edit-mode path has a 100% success rate while the overall queue runs at 96%. Three factors:
First, edit-mode requests are inherently scoped. The user is looking at a specific component on a specific page. The message includes the component name and URL. The agent doesn't have to figure out what to work on — that context is baked into the request. Compare this to a cron job like "scan all 14 GitHub repos and categorize every open PR" where the scope is unbounded and the failure surface is larger.
Second, the instructions are unambiguous. When someone writes "make the headline bigger," that's a clear instruction with a clear success criterion. The edit-mode interface encourages concise, specific descriptions because the user is literally looking at the thing they want changed. There's no telephone game between a ticket writer and a developer — the person with the visual context is writing the instruction directly.
Third, the components are self-contained. Our app architecture uses web components (cc-* custom elements): each component has its own markup, styles, and behavior in a single file. When the agent needs to change <cc-hero>, it opens one file, makes the change, and the component updates everywhere it's used. There are no cross-file dependencies to track, no build step to worry about, no CSS cascade to debug.
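A hypothetical sketch of what such a self-contained component looks like — the actual <cc-hero> internals differ, and the template and class names here are illustrative:

```javascript
// Markup, styles, and behavior for one component, in one file.
function heroTemplate(headline) {
  return `
    <style>
      h1 { font-size: 3rem;
           background: linear-gradient(90deg, #3b82f6, #8b5cf6);
           -webkit-background-clip: text; color: transparent; }
    </style>
    <h1>${headline}</h1>`;
}

// Fall back to a plain class outside the browser so the file stays importable.
const Base = typeof HTMLElement !== 'undefined' ? HTMLElement : class {};

class CcHero extends Base {
  connectedCallback() {
    this.attachShadow({ mode: 'open' });
    this.shadowRoot.innerHTML = heroTemplate(this.getAttribute('headline') || 'Hello');
  }
}

if (typeof customElements !== 'undefined') {
  customElements.define('cc-hero', CcHero);
}
```

"Make the headline larger" is one edit to one template string in one file — the narrowest possible target for an AI code change.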
The <cc-dev-toolbar> component is deceptively simple — two floating action buttons: the edit pencil and the bug reporter.
When edit mode is active, the pencil button glows purple with a gradient shadow. When there are console errors, the bug button shows a red badge with the count. The toolbar is always present but never intrusive — it's a 52px floating element in the corner that most users forget is there until they need it.
The real insight is that these two buttons — edit and debug — are the only developer tools most people ever need. They don't need DevTools. They don't need the terminal. They don't need git. They need a way to say "change this" and a way to say "this is broken." The toolbar gives them both.
What started as a mechanism for processing cron jobs has evolved into something more fundamental. The trigger_queue table is now the universal inbox for all automated work in our system. Every task, regardless of origin, follows the same lifecycle:
pending → processing → done (or error)

This uniformity gives us powerful properties:
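That lifecycle fits in a tiny transition table. This is a sketch of the idea, not code from the system; the error → pending edge is the manual retry path:

```javascript
// Allowed status transitions for a trigger_queue row.
const TRANSITIONS = {
  pending:    ['processing'],
  processing: ['done', 'error'],
  done:       [],                 // terminal
  error:      ['pending'],        // manual retry: flip the row back to pending
};

function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to);
}
```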
The Command Center dashboard shows every task in one view — edit-mode requests next to cron batch jobs next to manual triggers. You can see the entire pulse of the system in a single table. When something fails, you see it immediately. When the queue is clear, you know everything's caught up.
Every queue row has a source field that tells you exactly where it came from. Edit-mode requests are tagged edit-mode. Cron batch jobs include the batch name and job ID: batch-business-hours-batch:daily-feed-populate. This makes it trivial to filter, count, and analyze task patterns.
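A sketch of how that tagging convention can be parsed — the exact format is inferred from the examples above, so treat the shape as an assumption:

```javascript
// Split a source tag into its kind and optional job id.
// edit-mode rows carry no job id; batch-*/cron-* rows may use "name:jobId".
function parseSource(source) {
  const [tag, jobId = null] = source.split(':');
  const kind = tag === 'edit-mode'        ? 'edit-mode'
             : tag.startsWith('batch-')   ? 'batch'
             : tag.startsWith('cron-')    ? 'cron'
             : 'other';
  return { kind, tag, jobId };
}
```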
Because the executor processes one task at a time, the queue naturally prevents resource contention. If someone submits five edit-mode requests in rapid succession, they execute sequentially. No race conditions, no conflicting git commits, no merge conflicts. The queue serializes work automatically.
When a task errors, the row stays in the table with status: 'error'. Retrying is as simple as updating the status back to pending. No dead letter queues, no exponential backoff configuration, no retry policies. One UPDATE statement and the task re-enters the pipeline.
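For example (the id is a placeholder):

```sql
-- Re-enqueue a failed task: one UPDATE and it re-enters the pipeline.
update trigger_queue
set status = 'pending'
where id = '...'           -- the errored row's id
  and status = 'error';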
This is the real punchline. The trigger queue doesn't just automate tasks — it democratizes shipping.
Before edit mode, the ability to change something on a live website was gated behind technical skill. You needed to know HTML, CSS, JavaScript, git, and the deployment pipeline. That limited "people who can ship" to the engineering team — in our case, a handful of developers already stretched across multiple client projects.
After edit mode, the ability to ship is gated behind one skill: describing what you want in English. A designer can adjust spacing. A copywriter can update headlines. A project manager can fix the typo a client flagged. A strategist can test different CTA copy. None of them need to know what a component is. They click the wand, describe the change, and it ships.
This isn't theoretical. Those 85 edit-mode tasks represent real changes made by real team members who would have otherwise created tickets and waited for a developer. Each one saved an average of 15-20 minutes of developer time (writing the ticket, context-switching, making the change, deploying, closing the ticket) and days of calendar time (waiting in the backlog).
Conservatively, 85 edit-mode tasks × 15 minutes saved = 21 hours of developer time recovered. That's more than half a workweek. And the calendar time saved — changes shipping in minutes instead of days — is incalculable in terms of team momentum and client satisfaction.
Using Supabase's REST API means both the client-side component and the server-side executor can read and write the same table with zero infrastructure. The edit-mode component POSTs directly to Supabase from the browser using the publishable key. The executor polls the same table. No middleware, no WebSocket server, no API gateway. The database is the API.
The cc-* component architecture means every UI element is a self-contained unit. When the agent needs to change something, it modifies one file. There's no framework-specific build step, no dependency tree to traverse, no virtual DOM to reason about. Vanilla web components are the simplest possible target for AI code generation because they're just HTML, CSS, and JavaScript in one place.
The edit-mode system, the dev toolbar, and the nav component live in apps/shared/components/ and are loaded by every app. This means the edit-mode capability is automatically available everywhere — new apps get the wand for free. It also means improvements to the edit system benefit every app simultaneously.
Every app is deployed via git push. The agent makes changes, commits, and pushes — the same workflow a human developer would follow. There's no special deployment API, no CI/CD configuration per-app, no build artifacts to manage. Git push is the deployment mechanism, which means the agent can deploy anything a human can.
It's worth noting what's not in this system: no priority field, no automatic retries or dead letter queues, and no review or approval step between request and deploy.
These are all features we could build, and some we probably will as the system scales. But the deliberate choice to ship without them is what let us get to 85 successful edit-mode deployments so quickly. Constraints breed velocity.
The trigger queue creates a compound effect that goes beyond the individual time savings. When team members know they can ship changes themselves, they think differently. Instead of noting a problem and creating a ticket to address it later, they fix it now. Instead of writing a detailed specification for a developer, they write a sentence. Instead of batching up small improvements until they're "worth a developer's time," they ship each one as it occurs to them.
The result is a continuous stream of small improvements rather than periodic large releases. Our apps get better every day — not because we scheduled improvement sprints, but because the people using the apps have a frictionless path to improving them.
This is what it looks like when you remove the developer bottleneck from the small-change pipeline. The big architectural work still needs engineers. The creative design work still needs designers. But the vast middle ground of tweaks, fixes, and iterations? That's now everyone's job, and nobody's burden.
The core pattern is surprisingly portable. You need four things: an in-page overlay that captures a component, a URL, and a plain-English description; a queue table with a status column; an executor that polls the table and makes the changes; and a deployment path as simple as a git push.
Start simple. Build the overlay and the queue. Process the first few requests manually to validate the pattern. Then automate the executor. You'll be surprised how quickly a basic version delivers value.
We're working on several enhancements to the trigger queue system, starting with some of the features we deliberately left out.
But honestly, the current system — 370 tasks, 96% success rate, zero human hours — is already doing more than we imagined when we first added a wand icon to a web component.
The trigger queue proves something we suspected but couldn't quantify until now: the barrier to shipping isn't technical skill — it's access to the shipping mechanism. Give everyone a way to describe what they want and a queue that actually processes those requests, and the rate of improvement compounds in ways that no sprint planning session could predict.
370 tasks. Zero human hours. Every team member a shipper. That's the queue. Want to see how it could work for your team?