The Last Rev marketing site — the one you might be reading this on — was built almost entirely by an AI agent. Not "AI-assisted" in the way people usually mean, where a developer prompts Copilot for a function and tweaks the output. I mean the agent wrote the HTML, designed the component architecture, implemented the analytics layer, built the blog system you're reading this through, and now reviews its own work every single night while we sleep.
This isn't a stunt. It's the result of a conviction we've been testing at Last Rev: if your AI agent is good enough to build production software, it should be good enough to maintain it too. Ship and self-correct. Here's how it works.
The marketing site is a static site — no React, no Next.js, no build-time framework. Just HTML files, CSS custom properties, and Web Components. The entire page structure for any page on the site looks like this:
<lr-head
title="Page Title — Last Rev"
description="SEO description"
extra-css="./css/page-specific.css"
></lr-head>
<lr-layout active="services" subnav="Overview:#overview,Approach:#approach">
<section class="lp-section">
<!-- Page content -->
</section>
</lr-layout>
That's it. Two custom elements handle everything a page needs to be a fully-formed, SEO-friendly, analytics-tracked marketing page.
<lr-head> is a Web Component that runs in connectedCallback, injects everything a page's <head> needs, then removes itself from the DOM. It's a one-shot setup element. Here's what it handles:
- theme.css, landing.css, and the site-specific stylesheet, loaded in order
- Stylesheets from the extra-css attribute (comma-separated), resolved relative to the site root
- <link rel="canonical"> and og:url are set
- The page starts at opacity: 0 until all stylesheets have loaded, then is revealed with a CSS transition; a safety timeout at 800ms guarantees the page always becomes visible

The brilliance of this pattern — which the agent arrived at organically — is that page authors (human or AI) never need to think about infrastructure. You declare what's unique about your page (title, description, maybe an extra stylesheet) and <lr-head> handles the rest. No boilerplate to copy, no scripts to forget, no meta tags to miss.
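To make the extra-css behavior concrete, here is a minimal sketch of how that attribute could be parsed and resolved. The function name and resolution rules are assumptions for illustration, not the site's actual component code.

```javascript
// Hypothetical sketch: turn a comma-separated extra-css attribute into
// stylesheet hrefs resolved against the site root. Not the real <lr-head>.
function resolveExtraCss(attrValue, siteRoot = ".") {
  if (!attrValue) return [];
  return attrValue
    .split(",")                 // the attribute is comma-separated
    .map((href) => href.trim())
    .filter(Boolean)
    .map((href) =>
      // leave absolute URLs and root paths alone; anchor the rest at the site root
      href.startsWith("/") || href.startsWith("http")
        ? href
        : `${siteRoot}/${href.replace(/^\.\//, "")}`
    );
}

console.log(resolveExtraCss("./css/page-specific.css, css/extra.css"));
// → ["./css/page-specific.css", "./css/extra.css"]
```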
<lr-layout> takes a similar approach to page structure. Drop your content sections inside it and specify which nav item is active, what the CTA button says, and whether you want a floating subnav; the component renders the shared page chrome around your content.
The pattern is declarative to the point of being boring — which is exactly the point. Every page on the site follows the same structure because the structure is enforced by the components, not by convention. When the agent generates a new page, it can't get the structure wrong because there's nothing to get wrong. Declare the attributes, drop in the content, done.
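The subnav attribute in the earlier example ("Overview:#overview,Approach:#approach") illustrates how little machinery this takes. A plausible sketch of the parsing, with a hypothetical function name that may not match the real component:

```javascript
// Hypothetical sketch of parsing <lr-layout>'s subnav attribute into
// { label, href } descriptors. The real component's code may differ.
function parseSubnav(attr) {
  if (!attr) return [];
  return attr.split(",").map((pair) => {
    // each entry is "Label:#anchor"; split on the first colon only
    const i = pair.indexOf(":");
    return { label: pair.slice(0, i).trim(), href: pair.slice(i + 1).trim() };
  });
}

console.log(parseSubnav("Overview:#overview,Approach:#approach"));
// → [{ label: "Overview", href: "#overview" }, { label: "Approach", href: "#approach" }]
```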
The analytics layer is where things get interesting. The agent generated a comprehensive GA4 tracking module — analytics.js — that auto-instruments the entire site without any per-page configuration. It tracks fourteen categories of user behavior, among them CTA clicks, matched via .btn-primary, .lr-nav-cta, or the data-track-cta attribute.

All of this is loaded by <lr-head> automatically. There's zero per-page analytics configuration. The public API — LR.track('event_name', { params }) — is exposed for custom events, but the auto-tracking handles everything a marketing site needs out of the box.
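A small sketch of what a centralized LR.track wrapper could look like. This assumes gtag.js conventions and adds a queue for events fired before gtag bootstraps; the queueing detail is an illustration, not a documented feature of the real analytics.js.

```javascript
// Minimal sketch of a centralized tracking wrapper. Assumes gtag.js
// conventions; the real analytics.js module is far more extensive.
const LR = {
  track(eventName, params = {}) {
    // fall back to a queue when gtag hasn't bootstrapped yet,
    // so early events aren't silently dropped
    if (typeof globalThis.gtag === "function") {
      globalThis.gtag("event", eventName, params);
    } else {
      (LR._queue ??= []).push([eventName, params]);
    }
  },
  flush() {
    // replay queued events once gtag is available
    for (const [name, params] of LR._queue ?? []) {
      globalThis.gtag("event", name, params);
    }
    LR._queue = [];
  },
};
```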
The section dwell tracking deserves a special callout. It uses IntersectionObserver at a 40% visibility threshold, accumulates time across multiple intersections, pauses when the tab is hidden (via the Page Visibility API), and fires events at bucketed thresholds. The agent even added a beforeunload handler to fire remaining dwell events when the user leaves. This is the kind of careful, edge-case-aware implementation that typically requires a senior frontend developer thinking through every state transition.
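The bucketed-threshold idea can be sketched as pure accumulation logic. The threshold values here are made up for illustration; the real module's buckets and event names are not documented above.

```javascript
// Sketch of bucketed dwell tracking: accumulate visible time per section
// and report which thresholds have newly been crossed. Bucket values are
// assumptions, not the site's real configuration.
const DWELL_BUCKETS_MS = [5000, 15000, 30000, 60000];

function makeDwellTracker() {
  const seen = new Map(); // sectionId → { totalMs, fired: Set }
  return {
    // call with elapsed visible time; returns thresholds newly crossed
    accumulate(sectionId, visibleMs) {
      const s = seen.get(sectionId) ?? { totalMs: 0, fired: new Set() };
      s.totalMs += visibleMs;
      const crossed = DWELL_BUCKETS_MS.filter(
        (t) => s.totalMs >= t && !s.fired.has(t)
      );
      crossed.forEach((t) => s.fired.add(t));
      seen.set(sectionId, s);
      return crossed;
    },
  };
}

const dwell = makeDwellTracker();
console.log(dwell.accumulate("hero", 6000));  // → [5000]
console.log(dwell.accumulate("hero", 10000)); // → [15000]
```

In the real module, the IntersectionObserver callback and the Visibility API would drive what gets passed to accumulate; the thresholds only ever fire once per section.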
The blog you're reading this on is generated from JSON data files. Each post is a JSON file in blog/data/ with a defined schema:
{
"slug": "post-slug",
"title": "Post Title",
"pageTitle": "Post Title — Last Rev",
"description": "SEO description",
"author": "Author Name",
"date": "2026-02-18",
"category": "AI Engineering",
"readTime": "10 min",
"featured": false,
"promoImage": "data:image/svg+xml,...",
"content": "<p>HTML content...</p>"
}
A build script (scripts/build-blogs.js) reads every JSON file, generates fully static HTML pages using the <lr-head> and <lr-layout> components, and syncs a blog-posts.json index for the listing page. The generated HTML is minimal — under 30 lines — because the components handle all the structural complexity.
This means the agent can write a new blog post by creating a single JSON file and running npm run build. No CMS login, no Markdown compilation, no template engine. The source of truth is a directory of JSON files, and the output is static HTML that any CDN can serve.
The promo images are even inline SVGs — data URIs embedded directly in the JSON. No image pipeline, no asset management, no CDN configuration. Each post carries its own visual identity as a self-contained data structure.
Google Tag Manager and GA4 aren't afterthoughts bolted on post-launch. The agent built them into the foundation. The analytics module bootstraps gtag.js on every page load, configures the measurement ID (G-CJNGJ1LG7J), and auto-sends page views. Every subsequent event flows through the same pipeline.
Because the tracking is centralized in one module and loaded by <lr-head>, there's no risk of pages shipping without analytics. It's structurally impossible to create a page on this site that doesn't track user behavior — the same component that sets up your meta tags also wires up the analytics. This is a design choice the agent made that I'd argue is better than what most human teams achieve, where analytics is typically a checkbox someone forgets during launch week.
Here's where the story gets recursive. Every night at 10 PM Pacific, a cron job spawns a fresh agent session to review the marketing site. Not a linter. Not a test suite. A full AI-powered code review that reads every file in the codebase and evaluates it against a comprehensive checklist.
The review process follows a 13-step lifecycle, which includes clearing any REQUEST_CHANGES status before the fixes land. The audit checklist itself is extensive.
The critical design decision here is that nothing gets deferred. The skill documentation is explicit: "There is no 'deferred' category." If the agent found it, the agent fixes it. The only exception is when a fix requires infrastructure changes outside the app's scope — and those get filed as GitHub issues, not ignored.
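The "no deferred category" rule can be expressed as a two-bucket triage. The shape of a finding object here is an assumption for illustration:

```javascript
// Sketch of the triage rule: every finding becomes either a fix in this
// review or a filed GitHub issue. The finding shape is hypothetical.
function triageFindings(findings) {
  const fixNow = [];
  const fileAsIssue = [];
  for (const f of findings) {
    // the only escape hatch: fixes requiring out-of-scope infrastructure
    (f.requiresInfraChange ? fileAsIssue : fixNow).push(f);
  }
  // deliberately no third bucket: "deferred" does not exist
  return { fixNow, fileAsIssue };
}

const { fixNow, fileAsIssue } = triageFindings([
  { id: 1, note: "hardcoded color", requiresInfraChange: false },
  { id: 2, note: "needs CDN header change", requiresInfraChange: true },
]);
console.log(fixNow.length, fileAsIssue.length); // → 1 1
```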
The most interesting architectural choice isn't any single component — it's the feedback loop. Every nightly review reads from a shared code-review-learnings.md file before starting. Every review appends to it after finishing. Patterns discovered in one app's review become checklist items for every subsequent review across every app.
This is where the agent transcends simple automation. A linter checks rules that a human wrote. This agent discovers new rules from its own reviews and applies them going forward. It's not just executing a checklist — it's evolving the checklist.
After a few weeks of nightly reviews, the learnings file contains dozens of specific, battle-tested patterns:
- Use will-change: transform to avoid repaint jank on Safari
- "cc-contact-form must handle the contact-form-submit custom event, not just native submit"

Each of these started as a bug found during a nightly review. Now they're permanent knowledge that prevents the same class of bug from ever recurring.
A reasonable question: why not use a modern framework? The answer is that the architecture was chosen to optimize for AI maintainability, not developer ergonomics.
Static HTML with Web Components has properties that make it unusually well-suited for AI-generated and AI-maintained code:
- No package-lock.json with 800 transitive dependencies that need security patches. The dependency surface is the browser itself.

This doesn't mean Web Components are better than React for every use case. It means that when your primary maintainer is an AI agent, the simplest possible architecture wins. The agent spends zero time fighting build tools and 100% of its time on the actual code.
Running this system for several months has produced some non-obvious insights:
Every page follows the same structure because the components enforce it. Every page has analytics because it's structurally impossible not to. Every page gets reviewed nightly because the cron doesn't take vacation. The baseline quality floor is higher than what most human teams maintain across a full site.
The most valuable findings aren't crashes — they're subtle drift. A page that was using hardcoded colors instead of CSS variables. A section that lost its tracking ID during a content update. An accessibility label that got deleted in a refactor. These are the issues that accumulate silently on human-maintained sites and degrade quality over months. The nightly review catches them within 24 hours.
Every nightly review produces a GitHub PR with line-level review comments, fixes, and resolution notes. This creates a complete audit trail of every change and every decision. When a question comes up about why something works the way it does, the answer is in the PR history — with the agent's reasoning attached.
The most counterintuitive rule is that the nightly review fixes every issue it finds, no exceptions. No "we'll get to it in the next sprint." This sounds aggressive, but it means the codebase never accumulates tech debt. Every morning, the site is as clean as it's ever been.
Because the agent reviews multiple apps nightly, it naturally notices when the same UI pattern appears across different codebases. These become candidates for extraction into shared components. The component library grows organically from real usage patterns rather than speculative abstraction.
This marketing site is a proof of concept for something we believe will become standard practice: AI systems that own their entire lifecycle. Not just generation. Not just assistance. Full ownership — creation, deployment, monitoring, review, correction, and evolution.
The agent that built this site doesn't need a human to tell it what to review. It doesn't need a human to approve its fixes (they're there in the PR trail if someone wants to look). It doesn't need a human to deploy the changes. It runs, it audits, it fixes, it ships. Every night. Without being asked.
That's not the future of software development. It's what we're running right now, on the site you're reading.
If you're interested in how we're applying these patterns to enterprise software — not just marketing sites — we should talk.