The Last Rev marketing site — the one you might be reading this on — was built almost entirely by an AI agent. Not "AI-assisted" in the way people usually mean, where a developer prompts Copilot for a function and tweaks the output. I mean the agent wrote the HTML, designed the component architecture, implemented the analytics layer, built the blog system you're reading this through, and now reviews its own work every single night while we sleep.

This isn't a stunt. It's the result of a conviction we've been testing at Last Rev: if your AI agent is good enough to build production software, it should be good enough to maintain it too. Ship and self-correct. Here's how it works.

The Architecture: Declarative Components All the Way Down

The marketing site is a static site — no React, no Next.js, no build-time framework. Just HTML files, CSS custom properties, and Web Components. The entire page structure for any page on the site looks like this:

<lr-head
  title="Page Title — Last Rev"
  description="SEO description"
  extra-css="./css/page-specific.css"
></lr-head>

<lr-layout active="services" subnav="Overview:#overview,Approach:#approach">
  <section class="lp-section">
    <!-- Page content -->
  </section>
</lr-layout>

That's it. Two custom elements handle everything a page needs to be a fully-formed, SEO-friendly, analytics-tracked marketing page.

<lr-head>: The Invisible Orchestrator

<lr-head> is a Web Component that does all its work in connectedCallback: it injects everything a page's <head> needs, then removes itself from the DOM. It's a one-shot setup element. Here's what it handles:

  • Document title and meta description — set from attributes, with sensible defaults
  • Open Graph and Twitter Card meta tags — so every page is share-ready on social without manual meta tag wrangling
  • Core stylesheets — theme.css, landing.css, and the site-specific stylesheet, loaded in order
  • Optional extra CSS — via the extra-css attribute (comma-separated), resolved relative to the site root
  • All shared scripts — nav, subnav, footer, layout, analytics, contact form — loaded automatically
  • Canonical URLs — when provided, both the <link rel="canonical"> and og:url are set
  • FOUC prevention — the body is hidden via opacity: 0 until all stylesheets have loaded, then revealed with a CSS transition. A safety timeout at 800ms guarantees the page always becomes visible.
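The component's source isn't reproduced here, but the pattern can be sketched. In this illustrative version, a pure applyHead function does the injection (attribute names title, description, and extra-css come from the article; the internals are guessed) and the custom element is a thin one-shot wrapper around it:

```javascript
// Hypothetical sketch of the one-shot setup pattern. Attribute names match
// the article; the implementation details are illustrative.
function applyHead(attrs, doc) {
  doc.title = attrs.title || 'Last Rev';

  const meta = doc.createElement('meta');
  meta.name = 'description';
  meta.content = attrs.description || '';
  doc.head.appendChild(meta);

  // Core stylesheets first, then any comma-separated extras
  const core = ['/css/theme.css', '/css/landing.css'];
  const extras = (attrs['extra-css'] || '').split(',').filter(Boolean);
  for (const href of [...core, ...extras]) {
    const link = doc.createElement('link');
    link.rel = 'stylesheet';
    link.href = href.trim();
    doc.head.appendChild(link);
  }
}

// Browser-only wiring, guarded so the pure part above runs anywhere
if (typeof HTMLElement !== 'undefined') {
  class LrHead extends HTMLElement {
    connectedCallback() {
      const attrs = {};
      for (const { name, value } of this.attributes) attrs[name] = value;
      applyHead(attrs, document);
      this.remove(); // one-shot: the element leaves no trace in the DOM
    }
  }
  customElements.define('lr-head', LrHead);
}
```

The self-removal is the notable design choice: by the time the page is interactive, the configuration element is gone and only its effects remain.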

The brilliance of this pattern — which the agent arrived at organically — is that page authors (human or AI) never need to think about infrastructure. You declare what's unique about your page (title, description, maybe an extra stylesheet) and <lr-head> handles the rest. No boilerplate to copy, no scripts to forget, no meta tags to miss.

<lr-layout>: Structure as Declaration

<lr-layout> takes a similar approach to page structure. Drop your content sections inside it and specify which nav item is active, what the CTA button says, and whether you want a floating subnav. The component handles:

  • Injecting the full-width background
  • Rendering the navigation bar with the correct active state
  • Adding the floating subnav (if configured) for in-page section jumping
  • Appending the footer after your content
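The subnav attribute format ("Label:#anchor,Label:#anchor") shown earlier suggests a simple parse-and-render step inside the component. A sketch under that assumption (the parser and helper names are illustrative, not the component's actual internals):

```javascript
// Hypothetical parser for the subnav attribute format from the article:
// "Overview:#overview,Approach:#approach"
function parseSubnav(attr) {
  if (!attr) return [];
  return attr.split(',').map(pair => {
    // Split on the first colon only, so hrefs can't swallow the label
    const i = pair.indexOf(':');
    return { label: pair.slice(0, i).trim(), href: pair.slice(i + 1).trim() };
  });
}

// Rendering is then a straightforward template over the parsed items
function renderSubnav(items) {
  return items
    .map(({ label, href }) => `<a class="lr-subnav-link" href="${href}">${label}</a>`)
    .join('');
}
```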

The pattern is declarative to the point of being boring — which is exactly the point. Every page on the site follows the same structure because the structure is enforced by the components, not by convention. When the agent generates a new page, it can't get the structure wrong because there's nothing to get wrong. Declare the attributes, drop in the content, done.

Analytics: 400 Lines of Event Tracking That Nobody Wrote By Hand

The analytics layer is where things get interesting. The agent generated a comprehensive GA4 tracking module — analytics.js — that auto-instruments the entire site without any per-page configuration. It tracks fourteen categories of user behavior:

  • CTA clicks — any element matching .btn-primary, .lr-nav-cta, or the data-track-cta attribute
  • Navigation clicks — header, subnav, and footer links, with section attribution
  • Outbound link clicks — any link leaving the domain
  • Scroll depth — 25/50/75/100% thresholds
  • Card interactions — clicks and hover-with-intent (500ms+ dwell, filtered for scroll-through)
  • Section dwell time — bucketed into short (1-2s), medium (2-10s), long (10-20s), and extended (20s+), with proper handling for tab visibility changes
  • Video engagement — play, pause, progress milestones, mute toggles, fullscreen, even YouTube embeds via postMessage
  • Form funnel — start, field focus order, field abandon, submit, and validation errors
  • Rage clicks — 3+ clicks within 50px and 1 second (a UX distress signal)
  • Text copy events — when users copy content (signals high-value content)
  • Exit intent — cursor leaving viewport on desktop, rapid scroll-up on mobile
  • Scroll velocity — classified as reading, scanning, or seeking based on px/s
  • Return visitor detection — visit count and days-since-last via localStorage
  • Device orientation changes — mid-session rotations
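To make one of these concrete, here is a sketch of the rage-click tracker using the thresholds stated above (3+ clicks within 50px and 1 second). The factory function, its internals, and the reporting callback wiring are assumptions; only the thresholds come from the article:

```javascript
// Illustrative rage-click detector: fires once per burst of 3+ clicks
// landing within 50px of each other inside a 1-second window.
function makeRageClickDetector(report, windowMs = 1000, radiusPx = 50, minClicks = 3) {
  let clicks = [];
  return function onClick(x, y, t) {
    // Keep only recent clicks near this one
    clicks = clicks.filter(c =>
      t - c.t <= windowMs && Math.hypot(x - c.x, y - c.y) <= radiusPx
    );
    clicks.push({ x, y, t });
    if (clicks.length >= minClicks) {
      report('rage_click', { x, y, clicks: clicks.length });
      clicks = []; // reset so one burst fires a single event
    }
  };
}

// In the real module this would be wired up roughly like:
// const onClick = makeRageClickDetector(LR.track);
// document.addEventListener('click', e => onClick(e.clientX, e.clientY, e.timeStamp));
```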

All of this is loaded by <lr-head> automatically. There's zero per-page analytics configuration. The public API — LR.track('event_name', { params }) — is exposed for custom events, but the auto-tracking handles everything a marketing site needs out of the box.
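The article confirms only the LR.track('event_name', { params }) signature; internally, a wrapper like this is plausible — a thin forwarding layer over gtag, guarded so a blocked or late-loading analytics library never breaks the page. The fallback logic here is an assumption:

```javascript
// Hypothetical sketch of the LR.track wrapper. Only the public signature
// comes from the article; the forwarding and fallback are illustrative.
const LR = {
  track(name, params = {}) {
    if (typeof gtag === 'function') {
      gtag('event', name, params);
    } else if (Array.isArray(globalThis.dataLayer)) {
      // Fall back to the raw dataLayer queue if gtag isn't ready yet
      globalThis.dataLayer.push(['event', name, params]);
    }
  },
};

// A custom event from page code then looks like (event name made up):
// LR.track('pricing_calculator_used', { plan: 'enterprise' });
```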

The section dwell tracking deserves a special callout. It uses IntersectionObserver at 40% visibility threshold, accumulates time across multiple intersections, pauses when the tab is hidden (via the Visibility API), and fires events at bucketed thresholds. The agent even added a beforeunload handler to fire remaining dwell events when the user leaves. This is the kind of careful, edge-case-aware implementation that typically requires a senior frontend developer thinking through every state transition.
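The accumulation-plus-bucketing machinery can be sketched as two small pieces. The bucket boundaries match the article; the clock abstraction and names are illustrative. In the real module, an IntersectionObserver at the 40% threshold would drive enter/leave and the Visibility API would pause the clock while the tab is hidden:

```javascript
// Buckets from the article: short (1-2s), medium (2-10s),
// long (10-20s), extended (20s+).
function dwellBucket(seconds) {
  if (seconds >= 20) return 'extended';
  if (seconds >= 10) return 'long';
  if (seconds >= 2) return 'medium';
  if (seconds >= 1) return 'short';
  return null; // sub-second glances aren't reported
}

// Accumulates visible time for one section across multiple intersections
function makeDwellClock(now = () => Date.now()) {
  let total = 0;     // ms accumulated across completed visible stretches
  let since = null;  // start of the current visible stretch, if any
  return {
    enter() { if (since === null) since = now(); },  // section became >=40% visible
    leave() { if (since !== null) { total += now() - since; since = null; } },
    seconds() { return (total + (since !== null ? now() - since : 0)) / 1000; },
  };
}
```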

The Blog Build System

The blog you're reading this on is generated from JSON data files. Each post is a JSON file in blog/data/ with a defined schema:

{
  "slug": "post-slug",
  "title": "Post Title",
  "pageTitle": "Post Title — Last Rev",
  "description": "SEO description",
  "author": "Author Name",
  "date": "2026-02-18",
  "category": "AI Engineering",
  "readTime": "10 min",
  "featured": false,
  "promoImage": "data:image/svg+xml,...",
  "content": "<p>HTML content...</p>"
}

A build script (scripts/build-blogs.js) reads every JSON file, generates fully static HTML pages using the <lr-head> and <lr-layout> components, and syncs a blog-posts.json index for the listing page. The generated HTML is minimal — under 30 lines — because the components handle all the structural complexity.

This means the agent can write a new blog post by creating a single JSON file and running npm run build. No CMS login, no Markdown compilation, no template engine. The source of truth is a directory of JSON files, and the output is static HTML that any CDN can serve.

The promo images are even inline SVGs — data URIs embedded directly in the JSON. No image pipeline, no asset management, no CDN configuration. Each post carries its own visual identity as a self-contained data structure.
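For readers unfamiliar with the trick, producing such a data URI is one line of encoding. This example (the SVG content and helper name are made up) shows the shape of what a post's promoImage field would hold:

```javascript
// Illustrative: encode an inline SVG as a data URI, as the promoImage
// field in a post's JSON might carry it.
function svgDataUri(svg) {
  // encodeURIComponent handles the characters that break data: URLs (#, ", <)
  return 'data:image/svg+xml,' + encodeURIComponent(svg);
}

const promoImage = svgDataUri(
  '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 320 180">' +
  '<rect width="320" height="180" fill="#0b1220"/>' +
  '<text x="24" y="100" fill="#fff" font-size="24">Last Rev</text></svg>'
);
// promoImage can now be used directly as an <img src> or CSS background
```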

GTM and GA4: Wired In From Day One

Google Tag Manager and GA4 aren't afterthoughts bolted on post-launch. The agent built them into the foundation. The analytics module bootstraps gtag.js on every page load, configures the measurement ID (G-CJNGJ1LG7J), and auto-sends page views. Every subsequent event flows through the same pipeline.
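The bootstrap itself is the standard gtag.js wiring, which can be sketched with injectable document/window handles (the measurement ID is from the article; the function and its parameters are illustrative):

```javascript
// Hypothetical sketch of the analytics bootstrap: inject the gtag.js
// loader and queue the initial config through the dataLayer shim.
function bootstrapGtag(measurementId, doc = document, win = window) {
  const s = doc.createElement('script');
  s.async = true;
  s.src = `https://www.googletagmanager.com/gtag/js?id=${measurementId}`;
  doc.head.appendChild(s);

  // Standard shim: calls queue in dataLayer until the library loads
  win.dataLayer = win.dataLayer || [];
  function gtag() { win.dataLayer.push(arguments); }
  gtag('js', new Date());
  gtag('config', measurementId); // auto-sends the initial page_view
  return gtag;
}
```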

Because the tracking is centralized in one module and loaded by <lr-head>, there's no risk of pages shipping without analytics. It's structurally impossible to create a page on this site that doesn't track user behavior — the same component that sets up your meta tags also wires up the analytics. The agent made that design choice on its own, and I'd argue it beats what most human teams achieve, where analytics is typically a checkbox someone forgets during launch week.

The Nightly Review: An Agent Auditing Itself

Here's where the story gets recursive. Every night at 10 PM Pacific, a cron job spawns a fresh agent session to review the marketing site. Not a linter. Not a test suite. A full AI-powered code review that reads every file in the codebase and evaluates it against a comprehensive checklist.

The review process follows a nine-step lifecycle:

  1. Load context — Read shared component specs, DEC (Declarative Error Correction) patterns, and accumulated learnings from previous reviews
  2. Full codebase audit — Read every HTML page, every JS module, every supporting file. Not the diff — the whole thing
  3. Sync and PR — Push the current state to the GitHub repo and create a pull request before any fixes, establishing the audit trail
  4. Categorize findings — Two-pass review: first draft findings, then re-read and drop anything the agent isn't confident about. Issues are tagged as bugs and security issues, improvements, or nits
  5. Post review comments — Every finding is posted as a line-level comment on the PR with REQUEST_CHANGES status
  6. Fix everything — Not "file a ticket for later." Fix it. Every finding. In the same PR. If it requires refactoring a module, refactor it. If it requires migrating data storage, migrate it.
  7. Resolve comments — Reply to each review comment explaining the fix
  8. Approve and merge — Submit an approving review, update the PR description with a summary, and merge to main
  9. Update learnings — New patterns get appended to a shared memory file that compounds across all app reviews

The audit checklist itself is extensive. It covers:

  • DEC pattern compliance — Verifying that every custom element is used according to its specification
  • Shared component compliance — Ensuring pages load the right stylesheets, use CSS variables instead of hardcoded colors, and leverage shared components instead of reinventing them
  • DRY violations — Duplicated code blocks across pages, copy-pasted CSS, inconsistent data access patterns
  • Security — No exposed API keys in client code, XSS prevention (textContent over innerHTML for user input), proper input sanitization
  • Accessibility — Button labels, image alt text, color contrast, focus states, touch targets ≥44px
  • Mobile responsiveness — Layouts at 320px, 768px, and 1024px breakpoints, no horizontal scroll
  • Performance — No DOM queries in loops, proper event listener management, optimized images
  • Error handling — Every API call wrapped in try/catch with user-visible feedback

The critical design decision here is that nothing gets deferred. The skill documentation is explicit: "There is no 'deferred' category." If the agent found it, the agent fixes it. The only exception is when a fix requires infrastructure changes outside the app's scope — and those get filed as GitHub issues, not ignored.

The Compounding Knowledge Loop

The most interesting architectural choice isn't any single component — it's the feedback loop. Every nightly review reads from a shared code-review-learnings.md file before starting. Every review appends to it after finishing. Patterns discovered in one app's review become checklist items for every subsequent review across every app.

This is where the agent transcends simple automation. A linter checks rules that a human wrote. This agent discovers new rules from its own reviews and applies them going forward. It's not just executing a checklist — it's evolving the checklist.

After a few weeks of nightly reviews, the learnings file contains dozens of specific, battle-tested patterns:

  • "Cards with hover effects need will-change: transform to avoid repaint jank on Safari"
  • "Forms using cc-contact-form must handle the contact-form-submit custom event, not just native submit"
  • "Intersection Observer callbacks fire during initial layout — guard against false positives on page load"

Each of these started as a bug found during a nightly review. Now they're permanent knowledge that prevents the same class of bug from ever recurring.

Why Static? Why Web Components?

A reasonable question: why not use a modern framework? The answer is that the architecture was chosen to optimize for AI maintainability, not developer ergonomics.

Static HTML with Web Components has properties that make it unusually well-suited for AI-generated and AI-maintained code:

  • No build pipeline complexity. The agent doesn't need to understand Webpack configs, Babel transforms, or module resolution. A file is a file. The output is the source.
  • Declarative composition. Custom elements with attribute-based configuration are trivially easy for an AI to generate and reason about. There's no JSX mental model, no hooks lifecycle, no state management layer.
  • Full readability. When the nightly review reads every file, it reads exactly what the browser renders. No transpilation step means the code the agent audits is the code that runs.
  • Atomic deployability. Every page is self-contained. The agent can modify one page without risk of breaking another (assuming the shared components maintain their contracts).
  • Zero dependency drift. No package-lock.json with 800 transitive dependencies that need security patches. The dependency surface is the browser itself.

This doesn't mean Web Components are better than React for every use case. It means that when your primary maintainer is an AI agent, the simplest possible architecture wins. The agent spends zero time fighting build tools and 100% of its time on the actual code.

What We've Learned

Running this system for several months has produced some non-obvious insights:

1. The agent is more consistent than a human team

Every page follows the same structure because the components enforce it. Every page has analytics because it's structurally impossible not to. Every page gets reviewed nightly because the cron doesn't take vacation. The baseline quality floor is higher than what most human teams maintain across a full site.

2. The nightly review catches drift, not just bugs

The most valuable findings aren't crashes — they're subtle drift. A page that was using hardcoded colors instead of CSS variables. A section that lost its tracking ID during a content update. An accessibility label that got deleted in a refactor. These are the issues that accumulate silently on human-maintained sites and degrade quality over months. The nightly review catches them within 24 hours.

3. The PR audit trail is invaluable

Every nightly review produces a GitHub PR with line-level review comments, fixes, and resolution notes. This creates a complete audit trail of every change and every decision. When a question comes up about why something works the way it does, the answer is in the PR history — with the agent's reasoning attached.

4. The "fix everything, defer nothing" policy works

The most counterintuitive rule is that the nightly review fixes every issue it finds, no exceptions. No "we'll get to it in the next sprint." This sounds aggressive, but it means the codebase never accumulates tech debt. Every morning, the site is as clean as it's ever been.

5. Shared component discovery is a side effect

Because the agent reviews multiple apps nightly, it naturally notices when the same UI pattern appears across different codebases. These become candidates for extraction into shared components. The component library grows organically from real usage patterns rather than speculative abstraction.

The Bigger Picture

This marketing site is a proof of concept for something we believe will become standard practice: AI systems that own their entire lifecycle. Not just generation. Not just assistance. Full ownership — creation, deployment, monitoring, review, correction, and evolution.

The agent that built this site doesn't need a human to tell it what to review. It doesn't need a human to approve its fixes (they're there in the PR trail if someone wants to look). It doesn't need a human to deploy the changes. It runs, it audits, it fixes, it ships. Every night. Without being asked.

That's not the future of software development. It's what we're running right now, on the site you're reading.

If you're interested in how we're applying these patterns to enterprise software — not just marketing sites — we should talk.