PRD — Figma + Brand Guide MCP, Path A POC
Status: drafted 2026-05-06 · stretch / unconfirmed
Owner: Eric Downs (G&M)
Test tenant: Guardify (third brand-guide-mcp tenant; fresh `surface=claude_code` key issued today)
Companion docs:
- FIGMA-INTEGRATION-RESEARCH.md — landscape, three-path matrix, public-proof gap analysis, all source links
- docs/MCP-INSTALL.md § Pairing with Figma (live in repo)
1. One-paragraph summary
A designer working in Figma asks Claude Code to apply Guardify's
primary brand color to the selected frame. Claude calls Guardify's
brand-guide MCP for the color, then calls the Figma Dev Mode MCP's
use_figma tool to write the fill, and the Figma canvas updates in
real time. This
PRD scopes a proof-of-concept that proves the dual-MCP pattern works
end-to-end on a real Guardify file, hands off to a developer for
honest stress-testing, and produces shareable artifacts (Loom +
written walkthrough) that can be handed to designers without further
explanation.
2. Why this matters
- Strategic positioning. Brand-guide MCPs are a near-empty product niche. Combining one with Figma's MCP for a real designer workflow hasn't been demonstrated publicly (see research doc § 4 "Public-proof gap analysis"). Shipping a working POC + writeup gives G&M the canonical content piece on this combo.
- Per-surface attribution validation. We shipped per-surface API keys and the `/portal/[tenantId]/integrations` page today. This POC is the first real exercise of `surface=claude_code` traffic — it proves the attribution discipline holds up under non-trivial usage.
- Customer story. Guardify designers using the brand-guide MCP from inside their actual Figma workflow becomes the first concrete example we can show the next prospect.
- De-risks Path B. A native Figma plugin (Path B in the research doc, ~1-2 weeks of build) is a real product investment. Path A is ~hours of work. If Path A doesn't reach designers, Path B's premise is broken — better to learn that now.
3. Scope
In scope
- End-to-end working test of Path A on a real Guardify Figma file (deck cover, social tile, or poster — not a blank canvas).
- Four scripted test prompts run from a fresh Claude Code session with both MCPs registered. Each prompt produces an observable outcome on the Figma canvas (or in the LLM response, for the audit prompt).
- Per-surface attribution confirmed — every brand-guide call tagged `claude_code` in `/admin/usage?tenant=guardify`. Figma's tools are billed by Figma and not visible to us; we just confirm our half.
- Two artifacts for handoff:
a. A 60-90 second Loom showing the workflow from a designer's POV
(Figma window + a Claude Code prompt + the canvas updating).
b. A one-page written walkthrough (lives in the repo's `docs/` or in the research doc as an appendix) — what was tried, what worked, what didn't, with screenshots.
- Developer test pass. A G&M (or Guardify-side) developer independently runs the four prompts on their own machine using the docs and Loom — no live walkthrough from us. They report what broke, what was unclear, what surprised them.
- Findings folded back into both companion docs (research doc "Open questions" → answered or replaced; `MCP-INSTALL.md` Figma section → updated with anything we learned).
Out of scope
- Building a native Figma plugin (Path B) — not until Path A adoption is measured.
- Wiring Figma Make → brand-guide MCP (Path C) — gated on Figma's roadmap.
- Onboarding non-developer Guardify designers (Nicholas's team) directly. The POC stops at the developer-test handoff. Designer rollout is a separate decision after the POC succeeds.
- Building a public marketing piece (blog, landing page) before the POC + dev test confirm the pattern works.
- Per-user identity in Figma (no plugin = no popup sign-in flow). The Claude Code session uses the existing per-tenant `claude_code` key.
- Cross-tenant testing — Guardify only for now.
- Recording cost/quota impact — Figma's MCP bills are theirs; brand-guide calls fall under Guardify's existing monthly cap.
4. The four scripted test prompts
Each prompt has a pre-condition (something selected on canvas), an expected tool-call sequence, and an acceptance criterion. The acceptance criteria are what the developer tester verifies and what the Loom captures.
Prompt 1 — Apply primary color (load-bearing)
Pre-condition: A frame or shape is selected in the Guardify Figma file. The frame's current fill is some non-Guardify color.
Prompt to Claude: Apply Guardify's primary brand color to the
selected frame's fill.
Expected tool-call sequence:
1. get_colors (brand-guide MCP) — returns Guardify's palette JSON
2. use_figma (Figma MCP) — runs JS to set the fill on the selected
node. Color converted from hex to Figma's 0-1 RGB range.
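For concreteness, a minimal sketch of the kind of JS a use_figma call might run — it assumes Figma's scripting context (where the `figma` global exists), a single solid fill, and a placeholder hex (`#0057FF` is invented; the real primary comes from get_colors at runtime):

```ts
// Sketch only — runs inside Figma's scripting context.
function hexToFigmaRGB(hex: string): { r: number; g: number; b: number } {
  const n = parseInt(hex.replace("#", ""), 16);
  return {
    r: ((n >> 16) & 0xff) / 255, // Figma wants 0–1 floats, not 0–255 ints
    g: ((n >> 8) & 0xff) / 255,
    b: (n & 0xff) / 255,
  };
}

const node = figma.currentPage.selection[0] as FrameNode;
const fills = JSON.parse(JSON.stringify(node.fills)); // fills is read-only; clone before editing
fills[0] = { type: "SOLID", color: hexToFigmaRGB("#0057FF") }; // placeholder hex
node.fills = fills; // reassigning the whole array is what applies the change
```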
Acceptance criteria:
- Selected frame's fill changes to Guardify primary on canvas, visible
in real time.
- The hex Claude wrote matches the canonical hex from our brand guide
(no drift, no "close enough").
- usage_events row written with surface=claude_code,
apiKeyId=<guardify claude_code key id>,
callType=mcp_tool_call:get_colors.
This is the critical-path prompt. If this fails, everything else is moot.
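To make the attribution check concrete, here is the row shape the acceptance criteria imply — everything beyond the three named fields is a guess at the real usage_events schema, not a spec:

```ts
// Hypothetical shape of a usage_events row, per the criteria above.
type UsageEvent = {
  surface: "claude_code" | string;
  apiKeyId: string;                    // the Guardify claude_code key id
  callType: `mcp_tool_call:${string}`; // e.g. "mcp_tool_call:get_colors"
  tenantId?: string;                   // assumed field; check the real schema
  createdAt?: string;                  // assumed field
};
```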
Prompt 2 — Headline in voice
Pre-condition: A text layer is selected (or a frame the agent can add a text layer to).
Prompt: Write a 5-word headline in Guardify's voice for this
poster, and place it on the canvas.
Expected tool-call sequence:
1. get_voice (brand-guide MCP) — voice rules, do/don't, examples
2. get_messaging (brand-guide MCP) — banned terms, tone refs
3. get_typography (brand-guide MCP) — font family + scale (so
Claude can spec the right typeface)
4. use_figma — create / edit the text layer with the headline,
apply font
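A sketch of the use_figma step, again in Figma's scripting context — "Inter Bold" and the headline are placeholders (the real family comes from get_typography, the real copy from Claude's voice-grounded draft):

```ts
// Fonts must be loaded before any text edit in the Figma Plugin API.
const font = { family: "Inter", style: "Bold" }; // placeholder family
await figma.loadFontAsync(font);

const text = figma.createText();
text.fontName = font;
text.characters = "Placeholder five word headline"; // Claude writes the real one
text.fontSize = 64;

// The pre-condition allows either a selected text layer or a parent frame;
// this branch assumes the frame case.
(figma.currentPage.selection[0] as FrameNode).appendChild(text);
```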
Acceptance criteria:
- A 5-word text layer appears in the file.
- The text reads as on-brand — judged by Eric on first read, then
cross-checked against the voice rules from get_voice. Subjective
but the bar is "Eric would not be embarrassed to send this to
Nicholas."
- No banned terms appear (compare against get_messaging output).
- Font matches Guardify's specified family if Figma has it loaded.
Prompt 3 — Audit a frame
Pre-condition: A finished-looking frame is selected — could be intentionally on-brand or intentionally off-brand. Either is fine for the audit.
Prompt: Audit the selected frame — is the fill in Guardify's
palette? If not, suggest the closest match.
Expected tool-call sequence:
1. get_screenshot or get_metadata (Figma MCP) — pull the frame's
fill data
2. get_colors (brand-guide MCP) — Guardify palette
3. Reasoning step (no tool call) — Claude compares
4. Optional: check_text (brand-guide MCP) if any text layer in the
frame should be voice-checked too
Acceptance criteria:
- Claude returns a yes/no on palette match with reasoning.
- For "no" cases, Claude names the closest Guardify color and the hex distance (or a qualitative "very close / off by a lot").
- The response is a report, not a canvas mutation — Claude doesn't silently "fix" the frame.
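Claude does this comparison in its reasoning step, not as code, but to pin down what "hex distance" means, here is one illustrative way to quantify "closest match" — straight-line distance in RGB space; the palette hexes in the example are invented placeholders:

```ts
type RGB = [number, number, number];

function hexToRGB(hex: string): RGB {
  const n = parseInt(hex.replace("#", ""), 16);
  return [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff];
}

function closestPaletteColor(fillHex: string, palette: string[]) {
  const fill = hexToRGB(fillHex);
  let best = { hex: palette[0], dist: Infinity };
  for (const hex of palette) {
    const [r, g, b] = hexToRGB(hex);
    const dist = Math.hypot(r - fill[0], g - fill[1], b - fill[2]);
    if (dist < best.dist) best = { hex, dist };
  }
  return best; // dist 0 = exact palette match; larger = visibly off
}

// Example with made-up palette hexes:
// closestPaletteColor("#0a58f0", ["#0057FF", "#101820", "#F4F4F4"])
```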
Prompt 4 — Build a deck cover set (the wow demo)
Pre-condition: Empty page or a fresh frame ready to receive content.
Prompt: Build a 6-slide deck cover set on-brand for Guardify —
varied layouts, each cover featuring a different headline in our
voice and using our color palette. 1920×1080 each.
Expected tool-call sequence:
1. get_brand_foundation (brand-guide MCP) — mission, audience,
personality (so headlines match brand attitude)
2. get_colors (brand-guide MCP)
3. get_typography (brand-guide MCP)
4. get_voice + get_messaging (brand-guide MCP)
5. Many use_figma calls — one per slide, possibly batched. Claude
uses patterns from Figma's figma-generate-design skill.
Acceptance criteria:
- 6 frames appear at 1920×1080 each.
- Each cover uses Guardify colors only.
- Each cover has a unique headline that passes a quick voice check.
- Layouts are varied (not 6 carbon copies).
- We treat this one as the "best-case demo, may have rough edges." If this prompt produces a usable rough draft a designer would refine in 10 minutes, that's a win.
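A sketch of what one per-slide use_figma call might do (Figma scripting context; the frame name, offset, and background color are placeholders — real colors come from get_colors, and the headline step is the Prompt 2 sketch):

```ts
// One slide per call; index spaces the covers out side by side.
function makeCover(index: number, bg: { r: number; g: number; b: number }): FrameNode {
  const frame = figma.createFrame();
  frame.name = `Deck cover ${index + 1}`; // placeholder naming scheme
  frame.resize(1920, 1080);
  frame.x = index * 2000;
  frame.fills = [{ type: "SOLID", color: bg }];
  // Headline text would be added here via figma.loadFontAsync +
  // figma.createText + appendChild — see the Prompt 2 sketch.
  return frame;
}
```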
5. Acceptance — how we know the POC succeeded
The POC ships when ALL of these are true:
| # | Acceptance | Verifier |
|---|---|---|
| 1 | All four prompts run end-to-end with the documented tool calls firing | Eric, on his own machine first |
| 2 | Prompt 1 produces a canvas-write within ~30 seconds. Prompt 4 within ~3 minutes. | Eric (timed) |
| 3 | Per-surface attribution confirmed in `/admin/usage?tenant=guardify` — visible spike in `claude_code` rows after a test run, with the matching `apiKeyId` | Eric (DB query or admin UI) |
| 4 | Loom recorded showing prompt 1 + prompt 4 working live | Eric |
| 5 | Written walkthrough committed to repo with screenshots + the actual prompts that worked (vs. the ones we wrote in this PRD, which may need rewording) | Eric |
| 6 | A G&M or Guardify-side developer (NOT Eric) runs the four prompts on their own machine using only the docs + Loom + the existing portal install snippet, and reports back | Nicholas at Guardify, OR a G&M dev |
| 7 | Findings folded back into the research doc and `MCP-INSTALL.md` | Eric |
The POC fails (and we replan) if:
- Prompt 1 doesn't reliably produce a canvas write after 3 attempts
with prompt rewording — that means either Figma's MCP setup is
broken on the test machine OR Claude isn't routing to use_figma
reliably.
- Prompts 2-3 produce on-brand-looking output that's actually
ungrounded (Claude hallucinated colors / voice without calling our
MCP). Detectable by checking usage_events — no rows = not
grounded.
- The developer test pass surfaces 3+ blocking issues (missing
permission, undocumented step, broken install snippet).
6. Risks and unknowns
The research doc § 6 covers strategic risks. These are POC-execution risks:
| Risk | What it means | Mitigation |
|---|---|---|
| Figma's use_figma may need the desktop app + a fresh file format that supports MCP writes. Open question from the research doc. | A designer can't run prompts on legacy .fig files, or has to relaunch Figma a specific way. | Document the exact setup that worked (Figma version, file age, desktop OS) in the walkthrough. |
| Claude routing ambiguity. Both MCPs expose tools that "could" answer some prompts. Claude might pick the wrong one (e.g. ask Figma for colors instead of us). | Surface attribution becomes unreliable; brand grounding is bypassed. | Inspect usage_events after each prompt. If Claude bypassed our MCP, reword the prompt to be more explicit ("look up Guardify's palette via the brand-guide MCP, then..."). Document final wording. |
| Guardify's brand-guide content is thin in some categories. get_voice may return TODO placeholders if voice rules aren't filled in. | Prompt 2 produces lukewarm output; Prompt 4 has weak headlines. | Pre-flight: run tools/list + a sample of each get_* against Guardify's MCP before the POC. Fill any gaps in guardify-brand-guide content first. |
| Figma MCP rate-limited or quota'd during test runs. | Prompt 4 (many use_figma calls) may hit a wall. | Start with Prompts 1-3; batch-test Prompt 4 once. |
| Developer tester can't replicate. Common: missing 1Password access, missing Figma desktop, missing per-tenant key. | Handoff fails, POC stalls. | The developer-test card explicitly lists prereqs. Do a pre-flight credential check before handing off. |
| use_figma fills/strokes use a 0–1 channel range, not 0–255, and fill arrays are read-only (per Figma's figma-use SKILL.md). A naive prompt may make Claude write {r: 255, g: 0, b: 0} and fail silently. | Prompt 1 fails for a non-obvious reason. | Note this in the prompt-writing card (the Prompt 1 sketch in § 4 shows the correct pattern). Claude has been trained on the skill so it shouldn't trip, but verify on first run. |
7. Phasing — execution order
This is the order I'll actually execute the cards in. Each phase is sequential; subtasks within a phase can run in parallel.
Phase 0 — Pre-flight (~1h, before any prompts run)
Goal: prove the test environment is sane before burning time on the actual prompts.
- Verify Guardify's brand-guide content has real values for `colors`, `typography`, `voice`, `messaging`, `brand_foundation`. Run a `tools/list` + sample `get_*` calls against https://app.heybrandbot.com/api/mcp/guardify (a smoke-test sketch follows this list). Note any TODO placeholders or thin sections.
- Pick the test Figma file (or create one). Real Guardify content, not a sandbox. Confirm Figma desktop runs the file cleanly.
- Confirm the `claude_code` key in 1Password (Claude Bot → "Brand Guide MCP — Guardify Claude Code key") still works against the MCP (existing smoke test from today's session).
- Confirm Figma's Dev Mode MCP setup steps in their docs are current — version drift in the past 2 weeks could change the install.
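A minimal smoke-test sketch for the first bullet, assuming the brand-guide MCP speaks JSON-RPC over HTTP (MCP's Streamable HTTP transport) and accepts bearer auth — both are assumptions to verify against the actual server, and `GUARDIFY_CLAUDE_CODE_KEY` is a made-up env var name (the real key lives in 1Password):

```ts
// Run with Node 18+ as an ES module (fetch and top-level await available).
const res = await fetch("https://app.heybrandbot.com/api/mcp/guardify", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Streamable HTTP servers may answer with JSON or an event stream.
    Accept: "application/json, text/event-stream",
    Authorization: `Bearer ${process.env.GUARDIFY_CLAUDE_CODE_KEY}`,
  },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
});
console.log(res.status, await res.text()); // expect the tool list, not a 401
```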
Exit: sane test environment + a list of brand-guide content gaps (if any) that could weaken prompts 2-4.
Phase 1 — Eric runs the four prompts solo (~1-2h)
Goal: prove Path A actually works on Eric's machine before documenting anything.
- Set up a fresh Claude Code session (NOT the brand-guide-mcp dev session). Either an empty dir or a different project.
- Configure both MCPs in that session's `~/.claude/mcp.json` (see the config sketch after this list).
- Run prompts 1 → 2 → 3 → 4 in order, with the Figma file open and the right pre-condition for each.
- After each prompt, query `usage_events` to confirm attribution.
- Capture in real time: which prompt wordings worked, which had to be retried, what surprised, what failed.
- Decision gate: if Prompt 1 fails after 3 reword attempts, stop and replan — Path A is broken or the wiring is wrong.
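A sketch of what that `~/.claude/mcp.json` might contain, from memory of Claude Code's config shape — the transport keys, Figma's local endpoint, and the auth-header scheme are all assumptions; the portal's install snippet and Figma's current docs are the source of truth:

```json
{
  "mcpServers": {
    "guardify-brand-guide": {
      "type": "http",
      "url": "https://app.heybrandbot.com/api/mcp/guardify",
      "headers": { "Authorization": "Bearer <claude_code key from 1Password>" }
    },
    "figma-dev-mode": {
      "type": "sse",
      "url": "http://127.0.0.1:3845/sse"
    }
  }
}
```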
Exit: four prompts that work reliably, plus notes on the wording.
Phase 2 — Capture artifacts (~1h)
- Record Loom: prompt 1 + prompt 4. Designer audience — show Figma side, prompt, canvas update. Don't show terminal noise.
- Write the walkthrough: one paragraph per prompt, before/after Figma screenshots, the exact wording that worked, the gotchas discovered in Phase 1.
- Commit the walkthrough to the repo (`docs/FIGMA-PATH-A-WALKTHROUGH.md`) and link it from `MCP-INSTALL.md` § "Pairing with Figma".
- Update `FIGMA-INTEGRATION-RESEARCH.md` "Open questions" with answers from Phase 1.
Exit: two artifacts a developer can use to replicate the workflow without further help.
Phase 3 — Developer test pass (~1-2h, mostly waiting)
Goal: honest stress test by someone who didn't write the docs.
- Pick the tester. Likely candidate: Nicholas (Guardify-side dev), or another G&M developer if available. Send them: the Loom + the walkthrough + the install snippet from `/portal/guardify/integrations` + the 1Password share for the `claude_code` key (or a newly issued tester-specific key).
- Have them run all four prompts on their own machine.
- Have them write up: what worked, what didn't, what was unclear, anything they'd change about the docs.
- Triage their findings: trivial → fix in walkthrough; substantive → file as new cards.
Exit: independent confirmation that the POC is reproducible.
Phase 4 — Decide on Path B (~30 min, decision-only)
Goal: make the build-or-don't-build call on Path B (native Figma plugin) using actual data.
- Look at the developer tester's report.
- Look at `surface=claude_code` traffic from Guardify in the days following the test pass — did anyone use it organically?
- Honest call:
  - If the dev test went smoothly AND the docs flow is clean → Path B is justified for the non-developer designer audience. File cards under a new parent.
  - If the dev test surfaced enough friction that designers wouldn't stick with it → Path B is even more justified, to bypass the command line entirely.
  - If the dev test flopped (Path A fundamentally not working) → don't do Path B; regroup.
Exit: Path B decision recorded in the research doc + roadmap.
8. Cards (what gets filed in Todoist)
The Phase breakdown maps directly onto a parent card with subtasks. Following the Todoist task-shape rule (subtask vs top-level — see todoist skill): one parent for the deliverable, subtasks for each phase.
Parent: 🧪 Figma + brand-guide-mcp Path A POC (stretch / unconfirmed) [the existing card 6gXR44mX3wMG34Wf gets repurposed as the parent — its description gets replaced with a pointer to this PRD]
Subtasks (one per phase):
1. Phase 0 — pre-flight: verify Guardify content + test file + key + Figma MCP install steps
2. Phase 1 — Eric runs the four prompts solo and captures wordings + findings
3. Phase 2 — record Loom + write walkthrough + update companion docs
4. Phase 3 — hand to developer for independent test pass; triage their report
5. Phase 4 — decide on Path B based on Phase 3 outcome
Plus one orphan top-level (NOT a subtask, since it's a different deliverable):
- 🚧 Audit Guardify brand-guide content depth — flag any TODO placeholders in the `guardify-brand-guide` repo's category JSON files before they bottleneck Path A's voice/messaging prompts. Not blocking on Eric specifically — could be filed against Nicholas.
9. Decisions (answered 2026-05-06)
- Test Figma file: Create a fresh file specifically for the POC. Faster to start (no Nicholas dependency), fully under our control. Tradeoff acknowledged: less "real designer doing real work" framing — compensate by populating it with realistic Guardify-flavored content (a deck cover or poster, not a sandbox).
- Phase 3 dev tester: Both — G&M dev first, then Nicholas. The G&M dev catches the rough edges first; Nicholas validates the polished version. Two-stage handoff. Issue separate `claude_code` keys for each tester so attribution stays clean (`apiKeyId` per tester in `usage_events`).
- Walkthrough doc home: Repo `docs/FIGMA-PATH-A-WALKTHROUGH.md`. Sits next to `MCP-INSTALL.md`. Becomes the canonical public artifact if the repo opens up; fits the first-mover content opportunity.
- Loom destination: G&M Loom workspace. Visible to the G&M team, brandable, and can be unlisted/shared externally with one click for Nicholas's pass.
- Content gap behavior: Pause and fill gaps before Phase 1 if they're blocking. Phase 0 audits content depth; if voice/messaging/typography are TODO placeholders, fix them in `guardify-brand-guide` before the Phase 1 prompts run. Cosmetic gaps (e.g. a thin photography section when we're not testing photography prompts) can push through with a documented note.
Phase 3's dual-tester decision adds one wrinkle: Phase 3 splits into 3a (G&M dev) and 3b (Nicholas). Cards updated accordingly.
10. Success-and-also signals
If the POC works, watch for these as evidence to invest more aggressively:
- Any Guardify designer asks "can I do this?" after seeing the Loom → strong demand signal; Path B becomes higher-priority.
- Eric's own usage of the Path A combo crosses N calls/week organically (not test runs) → the workflow is sticky for him personally.
- A post on the walkthrough draws inbound interest → confirms the public-content gap is a real opportunity, not just a nice-to-have.
- During Phase 3, the dev tester suggests improvements rather than bailing → the install flow is friendlier than I've assumed.
If none of these signals fire after a month: Path A is technically working but practically not landing. Reconsider whether Path B is worth the build, or whether the brand-guide MCP needs richer content first to make any Figma-side workflow compelling.
Appendix — what changed in companion docs
- `FIGMA-INTEGRATION-RESEARCH.md`: added § 4 "Public-proof gap analysis" framing this as stretch / unconfirmed; refreshed Sources with all leads from the 2026-05-06 deeper search.
- `docs/MCP-INSTALL.md` § "Pairing with Figma": added today; will be augmented in Phase 2 with the validated prompt wordings from Phase 1.
- `app/portal/[tenantId]/integrations/page.tsx`: callout under the Claude Code snippet pointing at the Figma combo. Live in production.