Agentic Site — V1 PRD (Refined)

Status: Planning. Refined from agentic_website_system_v_1_prd.md (ChatGPT draft) on 2026-04-08. Replaces that document as the working spec. Original kept for provenance.


1. Thesis

Clients are increasingly doing work through agents — Claude, ChatGPT, Gumloop, Slack bots. As that habit spreads, logging into a dashboard to update a website will feel anachronistic. A website that can't participate in natural-language workflows will feel broken.

G&M is building that capability ahead of demand. The Agentic Site platform is a frontend-first website system where:

  1. Clients talk to their site — edit inline via an AI agent that knows their brand.
  2. Clients ask their site — external agents (Slack, Gumloop, Claude Desktop) query and eventually update content via an MCP server per site.
  3. Agencies ship a site — a constrained module + content model that makes 1 and 2 safe and fast.

This PRD defines V1 (proving the concept on a synthetic pilot site) and the roadmap that extends it into a real multi-client delivery platform.


2. The Three Pillars

| Pillar | What it does | V1 status |
| --- | --- | --- |
| Talk to your site | Inline editing on the live site via AI side panel + click-to-edit. Site Brain constrains output to brand voice, tokens, and structural guardrails. | In V1 |
| Ask your site | MCP server per site exposes content as queryable tools. External agents read, and later write, with approval routing. | Architecture built in V1. Feature ships v1.1 (read) / v1.2 (write). |
| Ship your site | Strict module + content model, Site Brain configuration, per-site deployment. | In V1 |

The critical V1 architectural discipline: build the data layer, module contracts, and content schema as if MCP lands tomorrow. Don't take shortcuts (JSON-in-git, file-based content) that would force a rewrite.


3. V1 Scope — What Actually Ships

A working Next.js site for a synthetic brand ("Tailwind Cellars") that demonstrates:

3.1 Editing experience

3.2 Live-with-approval model

3.3 Module system

V1 ships this module set. All modules have fixed schemas; fields are strongly typed.

Core:
- Text (heading + rich body)
- Image (with alt text)
- Video (embed URL + poster)

Composed:
- Hero (headline + sub + CTA + optional media)
- Stats (array of {value, label})
- Testimonials (array of {quote, attribution, company})
- Service blocks (array of {title, body, icon})

Page types:
- Page (any combination of allowed modules, constrained by content-type rules)
- Blog post (fixed structure: hero → body → optional media — not reorderable in V1)

3.4 Site Brain (per-site configuration)

The Site Brain is stored as structured rows, not a prose file, so it can be queried by the AI and by the future MCP layer.

The Site Brain is loaded into every AI interaction as structured context. It is the prompt the agent cannot override.
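As a concrete sketch, the structured shape might look like the following TypeScript types. The field names here are illustrative, not the final schema; the example also shows the "brand voice slice" extraction that field-level edits rely on later in this document.

```typescript
// Hypothetical sketch of the Site Brain's structured shape.
// Field names are assumptions, not the final schema.
interface SiteBrain {
  tokens: {
    colors: Record<string, string>;    // e.g. { primary: "#6b2d3c" }
    fonts: { heading: string; body: string };
  };
  voice: {
    adjectives: string[];              // 2-3 brand-voice adjectives
    description: string;               // one free-form sentence
    alwaysUse: string[];               // required phrases
    neverUse: string[];                // banned phrases
  };
  guardrails: string[];                // hard rules the agent cannot override
}

// Extract only the brand-voice slice, the minimal context a
// field-level edit sends (see the context-scoping rules in section 5.4).
function voiceSlice(brain: SiteBrain): Pick<SiteBrain, "voice"> {
  return { voice: brain.voice };
}
```

Because the Brain is rows rather than prose, slices like this are cheap lookups instead of prompt surgery.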

3.5 Authentication & edit mode

3.6 Pilot site — Tailwind Cellars

A fictional winery riffing on the existing Soaring Wings Vineyard & Brewing theme (G&M client site in ~/Local Sites/soaring-wings). Used as a content scaffold so V1 isn't blocked on writing placeholder content from scratch.

The fiction:
- Tailwind Cellars — small-batch winery in Cedar Ridge, Oregon (Willamette Valley-adjacent fictional town)
- Same general content shape as Soaring Wings: home, wines, about, visit/events, blog, contact
- Name winks at the Tailwind CSS stack the platform runs on

Why this approach:
- Real-shaped content spine to stress-test the module system (wines, events, testimonials, blog posts)
- Reuses Soaring Wings visual identity (logo, typography, palette) as a starting point — swap for proper branding later in ~1 hour
- Gives the "search/replace a name globally" demo real data (winemakers, staff, quoted sommeliers)
- Zero clash risk with real clients because Tailwind Cellars is obviously fictional

V1 content set:
- 6 pages — Home, Wines, About, Visit & Events, Blog index, Contact
- 4–5 blog posts (harvest notes, event recaps, winemaker interviews)
- 6–8 named wines with descriptions and varietal tags
- 10+ named people across pages (winemakers, staff, quoted sommeliers, testimonial authors) — so the global search/replace demo has real targets
- 4–6 testimonials from fictional sommeliers and companies

Content sourcing:
- wp export / WP-CLI pulls Soaring Wings content as JSON
- Lightweight rename script swaps brand-specific entities (town, family names, wines) to the Tailwind Cellars fiction
- Seeded into Postgres via migration, mapped to V1 module schemas
- Imagery reuses Soaring Wings' existing stock photography (vineyard, bottles, tasting rooms, events) — already curated and winery-appropriate. Swap for a real brand shoot later
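The rename step is deliberately dumb. A minimal sketch, assuming a hand-maintained entity map (the single mapping shown is from this document; the real script would cover every brand-specific town, family name, and wine in the export):

```typescript
// Minimal sketch of the brand-rename step. The entity map is illustrative;
// the real script would enumerate every brand-specific name in the export.
const entityMap: Record<string, string> = {
  "Soaring Wings": "Tailwind Cellars",
  // town, family names, and wine names would be mapped here too
};

// Replace every occurrence of each mapped entity in a string field.
function renameEntities(text: string, map: Record<string, string>): string {
  return Object.entries(map).reduce(
    (out, [from, to]) => out.split(from).join(to),
    text,
  );
}
```

Run over every string field of the exported JSON before the Postgres seed migration, so the fiction is consistent across pages, posts, and testimonials.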

3.7 First-run setup — /setup route

Single admin route, used once per site, that configures the Site Brain on first login. This is the one concession to the "soft admin rule" — Site Brain configuration is a setup task, not a content task, and deserves its own focused flow.

The flow:
1. Owner clicks the magic link → lands at /setup
2. Short guided prompt sequence — ~6–8 questions:
   - What's the business? (one sentence)
   - Who's it for?
   - What's the voice? (pick 2–3 adjectives + 1 free-form sentence)
   - Any phrases you always use / never use?
   - Brand colors (paste hex or image)
   - Typography preference (pick from starter pairs or paste font names)
   - Any hard rules? ("never mention competitors", "always capitalize 'Estate'")
3. Answers feed into a Site Brain generation template — Claude (Opus for this one call) produces a full structured Site Brain: tokens, voice description, do/don't examples, guardrails, page manifest stub
4. Owner reviews the generated Site Brain on one screen, can tweak any field inline
5. On save → Site Brain writes to DB, owner is redirected to the live site with edit mode active

Why AI-generated, not hand-drafted:
- On-brand for the platform thesis — the AI configures the AI that edits your site
- Gives every new client a full Site Brain in ~3 minutes instead of a brand workshop
- G&M-written template + Opus-powered expansion = high-quality baseline
- The setup flow IS a feature demo — the first thing a prospective client sees is "it wrote my brand guidelines"

What gets hand-crafted by G&M: the template prompt itself. The lib/ai/site-brain-template.ts file is the source of truth for what a good Site Brain looks like, and the model fills it in from the owner's answers. Iterating on the template is how the whole platform gets smarter over time.
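One hypothetical shape for that file, purely to make the idea concrete: a prompt scaffold with a slot for the owner's answers. The section list and the `{{answers}}` placeholder are assumptions, not the actual template.

```typescript
// Hypothetical sketch of lib/ai/site-brain-template.ts: a prompt scaffold
// the model fills in from the owner's /setup answers. The sections listed
// and the placeholder convention are assumptions.
const siteBrainTemplate = `
You are generating a Site Brain for a small-business website.
From the owner's answers below, produce structured JSON with:
- tokens: brand colors and typography
- voice: 2-3 adjectives, a description, do/don't example sentences
- guardrails: hard rules the editing agent must never violate
- pageManifest: a stub listing the site's pages

Owner's answers:
{{answers}}
`;

// Fill the template with the collected setup answers before the Opus call.
function fillTemplate(answers: string): string {
  return siteBrainTemplate.replace("{{answers}}", answers);
}
```

Iterating on the scaffold text, not on per-client prompts, is what makes template improvements compound across every future site.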

Tailwind Cellars specifically: seeded via migration with a hand-drafted Site Brain (no /setup flow run), so the pilot demos editing, not onboarding. A separate /setup demo run (second synthetic site, maybe "Updraft Coffee") proves the setup flow later.


4. Architecture

4.1 Stack

4.2 Content data model (sketch)

sites              (id, slug, name, domain, owner_user_id, created_at)
site_brain         (site_id, tokens_json, voice_json, guardrails_json, updated_at)
pages              (id, site_id, path, type, title, meta_json, module_order_json)
modules            (id, page_id, type, position, content_json, schema_version)
module_index       (module_id, field_path, field_value_text, field_value_type)  -- denormalized for queries
blog_posts         (id, site_id, slug, hero_json, body_json, media_json, published_at)
assets             (id, site_id, kind, url, alt, metadata_json)
patches            (id, site_id, actor, source, target_type, target_id, diff_json, status, approved_by, applied_at, model, input_tokens, output_tokens, cost_usd)

Key decisions:
- JSONB for module content keeps schemas flexible and versionable per module.
- The module_index denormalized table makes the "find every mention of 'Jane Smith'" query trivial and MCP-ready. Updated on patch apply.
- The patches table is the full history of every proposed change (human or AI), even rejected ones. It's the versioning story, the approval ledger, the future MCP write log, and the usage/billing ledger. Every patch records model, input_tokens, output_tokens, and cost_usd at time of generation. Zero extra work in V1, full metering story ready when billing lands in V2.
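The module_index rebuild on patch apply can be sketched as a recursive flatten of a module's content_json into (field_path, value) rows. Type and field names here are illustrative, not the final implementation.

```typescript
// Sketch of the module_index rebuild that runs on patch apply: flatten a
// module's content_json into (field_path, value) rows so "find every
// mention of X" becomes a plain indexed lookup. Names are illustrative.
type IndexRow = { fieldPath: string; valueText: string; valueType: string };

function flattenContent(content: unknown, prefix = ""): IndexRow[] {
  // Leaf value: emit one row with its dotted path.
  if (content === null || typeof content !== "object") {
    return [{
      fieldPath: prefix,
      valueText: String(content),
      valueType: typeof content,
    }];
  }
  // Object or array: recurse, extending the path (arrays yield numeric keys,
  // producing paths like "testimonials.0.attribution").
  return Object.entries(content as Record<string, unknown>).flatMap(
    ([key, value]) =>
      flattenContent(value, prefix ? `${prefix}.${key}` : key),
  );
}
```

A testimonial module's content then indexes to rows like `testimonials.0.attribution → "Jane Smith"`, which is exactly the shape the global search/replace demo queries.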

4.3 Module component contract

Every module is a React component that:
1. Declares its schema in a sibling schema.ts file — Zod schema for runtime validation + type generation.
2. Renders editable fields wrapped in <Editable> — a platform component that attaches data-agentic-path="modules.{moduleId}.{fieldPath}" to the DOM node and registers the field with the edit engine.
3. Exports a describe() function — returns a natural-language description of the module for AI context ("This is a testimonial block with N testimonials").
4. Provides a preview() fallback — how the module renders in approval diffs.

This contract is mandatory. It's the thing that makes click-to-edit, AI context, and future MCP queries all work from the same source of truth.
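A dependency-free sketch of the contract for a Stats module follows. The real schema.ts would be a Zod schema; the hand-rolled validator below is a stand-in so the sketch stays self-contained, and the field names are assumptions.

```typescript
// Dependency-free sketch of the per-module contract. The real schema.ts
// would use Zod; validate() below stands in for Zod's safeParse.
// Field names are illustrative.
interface StatsContent {
  stats: { value: string; label: string }[];
}

const statsSchema = {
  // Runtime shape check, the stand-in for schema validation.
  validate(input: unknown): input is StatsContent {
    const c = input as StatsContent;
    return Array.isArray(c?.stats) &&
      c.stats.every(
        s => typeof s?.value === "string" && typeof s?.label === "string",
      );
  },
};

// Natural-language description for AI context, per point 3 of the contract.
function describe(content: StatsContent): string {
  return `This is a stats block with ${content.stats.length} stats.`;
}
```

The point of the contract is that validate() gates every patch, describe() feeds every AI prompt, and both derive from the same declared shape.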

4.4 Editing flow (both modes)

User intent (click or chat)
  ↓
Edit engine builds context: { target element, module schema, current content, site brain }
  ↓
Sent to Claude with: system prompt (Site Brain + guardrails) + user message + context
  ↓
Claude returns a structured patch proposal (JSON) + human-readable explanation
  ↓
Patch is validated against the module schema (Zod)
  ↓
Approval UI: visual diff (before/after render) + explanation + approve/reject
  ↓
On approve: patch inserted into `patches` table, applied to `modules` table, `module_index` rebuilt for that module, page revalidated
  ↓
On reject: patch stored with status=rejected (keeps the learning signal)

4.5 Patch format

Patches are JSON Patch (RFC 6902) scoped to a module or page. Every edit, whether "fix a typo" or "swap Hero A for Hero B", produces a patch. This keeps the write path uniform — one protocol for humans, the editor, and future MCP clients.
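To make the uniform write path concrete, here is a minimal sketch of applying a patch's "replace" operations to module content. A real implementation would use a full RFC 6902 library covering add, remove, move, copy, and test ops; this handles only replace, for illustration.

```typescript
// Minimal sketch of applying RFC 6902 "replace" operations to module
// content. A real implementation would use a complete JSON Patch library.
type ReplaceOp = { op: "replace"; path: string; value: unknown };

function applyReplace(
  doc: Record<string, unknown>,
  patch: ReplaceOp[],
): Record<string, unknown> {
  const out = structuredClone(doc); // never mutate the stored content
  for (const { path, value } of patch) {
    // "/hero/headline" → ["hero", "headline"]
    const keys = path.split("/").slice(1);
    let node: any = out;
    for (const k of keys.slice(0, -1)) node = node[k];
    node[keys[keys.length - 1]] = value;
  }
  return out;
}
```

Because "fix a typo" and "swap Hero A for Hero B" both reduce to op lists like this, the approval UI, the patches ledger, and future MCP writes all consume one format.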


5. AI Interaction Protocol

5.1 System prompt structure

Every AI call includes:
- Site Brain (tokens, voice, guardrails, page manifest) as structured JSON in the system prompt
- Current page/module context — what the user is editing, the module schema, current content
- Allowed operations — the patch types the user's tier permits

5.2 Output contract

The model MUST return:
- A human-readable explanation of what it's changing and why
- A structured patch proposal conforming to the module's schema

Enforced via tool use — the model calls a propose_patch tool whose parameters ARE the patch schema. This prevents free-form JSON drift.
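An illustrative sketch of the propose_patch tool definition, in the shape the Anthropic Messages API expects (name, description, input_schema as JSON Schema). The parameter names are assumptions; the real tool would embed the target module's schema in the patch constraints.

```typescript
// Illustrative propose_patch tool definition. Parameter names are
// assumptions; the real tool would constrain `patch` to the target
// module's schema rather than generic JSON Patch ops.
const proposePatchTool = {
  name: "propose_patch",
  description:
    "Propose a JSON Patch (RFC 6902) against the target module, with a human-readable explanation.",
  input_schema: {
    type: "object",
    properties: {
      explanation: {
        type: "string",
        description: "What is changing and why, in plain language",
      },
      patch: {
        type: "array",
        items: {
          type: "object",
          properties: {
            op: { type: "string", enum: ["add", "remove", "replace"] },
            path: { type: "string" },
            value: {},
          },
          required: ["op", "path"],
        },
      },
    },
    required: ["explanation", "patch"],
  },
} as const;
```

Because the tool's parameters are the patch schema, a malformed proposal fails at the tool-call boundary instead of surfacing as free-form JSON the edit engine has to repair.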

5.3 Fallback and retry

5.4 Context scoping (core design principle)

Every edit sends the minimum context needed to produce a correct patch. Nothing more. Client cost scales with what we send, and this platform resells its AI usage to clients, so token efficiency is a product requirement, not an optimization.

Context is scoped by edit target, not blasted wholesale:

| Edit type | Context sent |
| --- | --- |
| Single field edit (click-to-edit on a headline) | Current field value + the parent module's schema + brand voice slice of Site Brain. Nothing else. |
| Whole-module edit ("rewrite this hero") | Full module content + module schema + brand voice + relevant guardrails. No neighboring modules. |
| Cross-module edit ("add a testimonial block after the hero") | Page structure summary + target section context + module catalog for insertable types + guardrails. |
| Structural / multi-module chat ("restructure this page") | Full page manifest + relevant modules + full Site Brain. Higher cost, rarer operation, uses Opus. |
| Global context-aware operation (v1.3 "rename every mention of X") | Query-scoped results from module_index, not full-page context. |

Baseline target: field-level edits should be ~1–2k input tokens + ~300 output tokens = under $0.01 per edit on Sonnet 4.6. Whole-module edits 2–3× that. Page-level structural edits are the rare expensive operation.

The context assembler lives in lib/ai/context.ts and is the single place where token budgets are enforced. Every edit type declares its context shape. Drift here is tracked as a cost regression.
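The budget-enforcement core of that assembler can be sketched as a per-edit-type table plus a pre-send check. The budget numbers for field and module edits follow the targets above; the cross-module and page budgets, the model labels, and the 4-characters-per-token estimate are assumptions.

```typescript
// Sketch of the context assembler's budget table (lib/ai/context.ts).
// Field/module budgets follow the stated targets; cross-module and page
// budgets, model labels, and the chars-per-token ratio are assumptions.
type EditType = "field" | "module" | "cross-module" | "page";

const contextBudget: Record<EditType, { maxInputTokens: number; model: string }> = {
  field:          { maxInputTokens: 2_000,  model: "sonnet" }, // ~1-2k target
  module:         { maxInputTokens: 6_000,  model: "sonnet" }, // 2-3x field
  "cross-module": { maxInputTokens: 10_000, model: "sonnet" },
  page:           { maxInputTokens: 30_000, model: "opus" },   // rare, expensive
};

// Crude pre-send estimate (~4 chars per token). Overruns here are the
// "cost regression" signal, caught before the API call.
function withinBudget(editType: EditType, context: string): boolean {
  return context.length / 4 <= contextBudget[editType].maxInputTokens;
}
```

Declaring budgets in one table is what makes drift observable: any new edit type must pick a row, and any row's growth shows up as a diff in this file.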


6. Out of V1 (explicitly)

To keep V1 shippable, these are deferred and tracked on the roadmap:
- MCP server and external-agent integration (architecture ready, feature v1.1+)
- Multi-tenant infrastructure beyond the synthetic pilot (V1 is single-site)
- Tiered AI capability model (V1 is full-capability for the owner)
- Module variant system (Hero A → Hero B) — V1 modules have one layout each
- Module reordering — V1 order is fixed per page type
- Full version history beyond session undo
- Blog post structural flexibility
- User-defined design systems / token editor
- Multi-user collaboration, roles, invites
- Custom modules per client
- Image upload pipeline (V1 uses static assets; client uploads v1.1+)
- Billing / metering


7. Roadmap

V1 — Foundation (this PRD)

Synthetic Tailwind Cellars site. Editing + Site Brain + patches + approval. Single owner. Postgres-backed. Built architecturally as MCP-ready.

V1.1 — Ask your site (MCP read)

V1.2 — Ask your site (MCP write)

V1.3 — Context-aware global operations

V1.4 — Image and asset pipeline

V1.5 — Blog flexibility

V2.0 — Multi-tenant platform + usage billing

V2.1 — Module variants and layout controls

V2.2 — Full version history

V3.0 — Agency platform


8. UX Principles (retained from original, sharpened)


9. Success Criteria for V1

V1 is done when:
1. A person unfamiliar with the platform can edit a Tailwind Cellars page via chat or click-to-edit, approve the change, and see it live — in under 2 minutes, with no code or admin dashboard.
2. Every AI-proposed patch validates against the module schema before reaching the approval UI. Zero schema escapes.
3. The Site Brain demonstrably constrains output: the same prompt produces brand-consistent copy across three different Site Brain configurations.
4. The patches table captures every accepted and rejected change, and any accepted change can be undone within the session.
5. The data model supports a synthetic MCP query (find all modules mentioning 'Acme Corp.') returning results in under 200ms — proves v1.1 is a thin layer away.
6. The site loads, renders, and revalidates on Vercel in production, not just locally.


10. Ratified Decisions

Planning complete 2026-04-08. No open questions blocking implementation planning.

Stack & architecture

Product decisions

Pilot (Tailwind Cellars)

Cost / metering


11. Appendix — What We Deliberately Did Not Build

This PRD is intentionally narrower than the original draft in some places and more ambitious in others. Decisions worth remembering:


End of V1 PRD.