KJP — AI & LLM Strategy
Consolidated from Notion 2026-04-23. Notion is deprecated. Status: Phase 2 — deferred to fall/winter 2026.
Parent doc: KJP-Project-Hub.md · Last updated: 2026-03-04 (source) · 2026-04-23 (migrated here)
Context
KJP originally asked about AI-generated descriptions for the 6,124 photography posts that lack them. We built a detailed plan around a custom AI description tool with a feedback/learning loop.
Casey shifted the scope: tags are the priority over descriptions. Designers search by mood (uplifting, calm, soothing), by flower species, and by visual style — and those tags don't exist yet. Descriptions are secondary, worth doing only if they significantly help SEO.
Eric wants to go bigger — rethink how KJP's images are handled and positioned for LLM/AI search discovery, so KJP gets more business from the shift toward AI-powered search.
Current State of KJP's Image Data
| Data Point | Coverage | Notes |
|---|---|---|
| Photography posts | 8,372 | |
| Tags (subject/scene) | 99%+ | 2,778 terms, but 396 are zero-post junk. No mood/emotion tags exist. |
| Type | 99%+ | 15 active types (Flowers, Landscapes, Waterscapes, etc.). 16 typo terms to clean. |
| Color | 67% | 10 active colors, 6 junk terms. Casey wants AI help here. |
| Region | 45% | Client will handle manually — overlapping regions. |
| Descriptions | 27% | 6,124 posts missing. Casey wrote existing ones — match her voice. |
| Alt text | Unknown | Images stored as flat files via wpallimport, not in Media Library. Likely no alt text. |
What's Missing That Matters
For clients (Casey's ask):
- Mood/emotion tags — calm, uplifting, soothing, serene, energetic, peaceful, warm
- Species identification — most flowers aren't identified beyond "flower"
- Visual style tags — close-up, wide-angle, aerial, abstract, detail, macro (some exist, but inconsistently)
For LLM/AI discoverability (Eric's vision):
- Structured descriptions that AI can parse and cite
- Rich alt text on every image
- Schema markup for image content
- Semantic relationships between images, installations, and institutions
- Content that answers the questions AI search engines ask: "nature photography for healthcare spaces," "calming artwork for hospital lobbies," etc.
Taxonomy Cleanup (Prerequisite)
Before any AI work, clean up existing taxonomy data. Rolls into whichever tier the client picks.
- Tags: Delete 396 zero-post terms; merge duplicates (flower/flowers → flowers, tree/trees → trees, landscape/landscapes → landscapes, etc.)
- Type: Delete 16 typo terms (Wasterscapes, Gresses, Landscaspes, Ainmals, etc.)
- Color: Delete 6 junk terms (Pruple, Yelllow, Gold, Gray, Grasses, Black)
- Region: Delete 4 unused terms (Great Lakes, Northeast, Mid-Atlantic, East Coast)
Estimated time: 1–2 hours via WP-CLI
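A minimal WP-CLI sketch of the zero-post and typo-term deletions. The taxonomy slugs used here (`post_tag`, `photo_type`) are assumptions, not confirmed from the site; verify the registered slugs and run on staging first.

```shell
# Delete zero-post tag terms (the 396 junk terms).
# Assumes the tag taxonomy is registered as "post_tag" — adjust if KJP uses a custom slug.
wp term list post_tag --fields=term_id,count --format=csv \
  | awk -F, 'NR>1 && $2==0 {print $1}' \
  | xargs wp term delete post_tag

# Delete Type typo terms by slug ("photo_type" and these slugs are assumptions).
for slug in wasterscapes gresses landscaspes ainmals; do
  wp term delete photo_type "$slug" --by=slug
done
```

Duplicate merges (flower/flowers, etc.) need post reassignment before deleting the losing term, so those are better done one pair at a time with a count check after each.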
Tiered Options
Tier 1: Small — AI Tag Enhancement
What it is: Use AI vision to analyze photos and add the tags Casey asked for — mood, species, visual style. Also fill in missing color classifications.
What the client gets:
- Mood/emotion tags on all 8,372 photos (calm, uplifting, serene, etc.)
- Plant/flower species identification where possible
- Missing color classifications filled in (3,416 posts)
- Multi-color support (tag dominant + secondary colors)
- Taxonomy cleanup included
How it works:
- Batch process images through an AI vision API (Claude or GPT-4o)
- AI analyzes each image and suggests tags from a controlled vocabulary we define
- Client reviews a sample batch (~50 images) to calibrate
- Bulk process the remainder with spot-check review
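The batch loop above can be sketched as follows. This is a sketch, not the build spec: the model name, tag categories, and vocabulary values are placeholders, and it assumes the `anthropic` Python SDK for the vision call. The key design point is that model output is filtered against the controlled vocabulary, so the model can never invent tags.

```python
import base64
import json
import pathlib

try:
    import anthropic  # pip install anthropic; needed only for the API call itself
except ImportError:
    anthropic = None

# Controlled vocabulary we define up front — category names and values
# here are illustrative, not the final list.
VOCAB = {
    "mood": {"calm", "uplifting", "soothing", "serene", "energetic", "peaceful", "warm"},
    "style": {"close-up", "wide-angle", "aerial", "abstract", "detail", "macro"},
}

def keep_known_tags(suggested: dict, vocab: dict = VOCAB) -> dict:
    """Drop any model-suggested tag that is not in the controlled vocabulary."""
    return {cat: sorted(set(tags) & vocab.get(cat, set()))
            for cat, tags in suggested.items()}

def tag_image(path: pathlib.Path, client: "anthropic.Anthropic") -> dict:
    """Send one image to the vision model and return vocabulary-filtered tags."""
    img = base64.standard_b64encode(path.read_bytes()).decode()
    prompt = ('Return only JSON like {"mood": [...], "style": [...]}, '
              "choosing strictly from: "
              + json.dumps({k: sorted(v) for k, v in VOCAB.items()}))
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=300,
        messages=[{"role": "user", "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/jpeg", "data": img}},
            {"type": "text", "text": prompt},
        ]}],
    )
    # Even with a strict prompt, models drift — filter to the vocabulary anyway.
    return keep_known_tags(json.loads(msg.content[0].text))
```

Spot-check review then operates on filtered output only: anything outside the vocabulary never reaches WordPress.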
What it does NOT include:
- Descriptions
- Alt text
- Schema markup
- LLM optimization
- Ongoing automation
Estimated hours: 15–25 (capped) — includes taxonomy cleanup, tool build, calibration, bulk processing
Estimated API cost: ~$60–120 (passed through)
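For transparency with Casey, the pass-through API estimate is just token math. Every number below is an assumption for illustration (per-image token counts and Sonnet-class per-million-token rates; see KJP-AI-Cost-Estimate.md for the real figures) — the point is the arithmetic, which lands inside the quoted range.

```python
IMAGES = 8_372                    # photography posts
INPUT_TOKENS_PER_IMAGE = 1_500    # image + prompt tokens (assumed)
OUTPUT_TOKENS_PER_IMAGE = 300     # short JSON tag list (assumed)
INPUT_RATE = 3.00                 # USD per million input tokens (assumed)
OUTPUT_RATE = 15.00               # USD per million output tokens (assumed)

def batch_cost(n_images: int) -> float:
    """Total API cost in USD for one full tagging pass."""
    input_cost = n_images * INPUT_TOKENS_PER_IMAGE / 1e6 * INPUT_RATE
    output_cost = n_images * OUTPUT_TOKENS_PER_IMAGE / 1e6 * OUTPUT_RATE
    return input_cost + output_cost

print(f"${batch_cost(IMAGES):.2f}")  # ≈ $75.35 under these assumed rates
```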
Tier 2: Medium — Tags + Descriptions + Alt Text
Everything in Tier 1, plus:
- AI-generated descriptions for 6,124 posts (matching Casey's voice/tone)
- Alt text generated for all images (critical for accessibility AND LLM crawling)
- Feedback loop — client reviews batches, system learns their preferences
- Review dashboard for approving/rejecting/editing descriptions
Why descriptions matter (the SEO case for Casey):
- Google increasingly uses page content quality as a ranking signal
- Images with descriptions get indexed for long-tail searches ("peaceful waterscape photography for medical office")
- Alt text is a direct accessibility requirement AND feeds AI crawlers
- Blog posts help, but 6,124 pages with zero content are 6,124 missed ranking opportunities
Voice/tone constraints:
- Match Casey's existing description style
- Healthcare-focused client base — no negative, dark, or threatening language
- Never use the word "magical" (Kurt hates it)
- Warm, straightforward, descriptive — not overly poetic
Estimated hours: 35–50 (capped) — includes everything in Tier 1 + description tool build, feedback loop, review dashboard, calibration rounds
Estimated API cost: ~$60–180 (depends on model choice)
Tier 3: Large — Full LLM & AI Search Strategy
Everything in Tier 2, plus a comprehensive strategy to make KJP discoverable by AI search engines.
Additional deliverables:
- Schema markup — Structured data on every photography and installation page (ImageObject, CreativeWork, Product schemas) so AI engines can parse and cite KJP's content.
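As a concrete sketch, the JSON-LD for one photo page might look like the following. Every value here is hypothetical (title, description, URL, keywords), and the final property set would come out of the discovery phase:

```json
{
  "@context": "https://schema.org",
  "@type": "ImageObject",
  "name": "Misty Birch Grove at Dawn",
  "description": "A calm, softly lit birch grove in morning fog.",
  "contentUrl": "https://example.com/photography/misty-birch-grove.jpg",
  "creator": { "@type": "Person", "name": "Kurt Johnson" },
  "keywords": "calm, birch, forest, healthcare art",
  "creditText": "Kurt Johnson Photography"
}
```

The same pipeline that writes tags and alt text could emit this JSON-LD per post, so schema stays in sync with the taxonomy instead of drifting.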
- LLM-optimized content structure — Rewrite key landing pages and archive pages with content that directly answers the questions AI search surfaces:
  - "nature photography for healthcare facilities"
  - "calming artwork for hospital lobbies"
  - "healing environment art for medical offices"
  - "biophilic design photography"
- Installation case studies — Use AI to draft mini case studies for top installations (institution name, what was installed, the impact). These are exactly what LLMs cite when recommending vendors.
- FAQ / knowledge base content — AI-generated FAQ pages answering common client questions ("How do I choose art for a hospital?" "What makes good healing environment photography?"). These are the queries AI assistants pull from.
- Ongoing automation — New images automatically get tags, descriptions, alt text, and schema on import. No manual work.
- AI search audit — Test KJP's visibility across ChatGPT, Perplexity, and Google AI Overviews. Benchmark and track improvement.
Why this matters: The shift to AI-powered search is real. When a hospital administrator asks ChatGPT "who does nature photography for healthcare spaces," KJP should be in that answer. Right now, without structured content, descriptions, or schema, they're invisible to these systems.
Estimated hours: 60–90 (capped, phased) — discovery + build + content generation + schema + ongoing automation
Estimated API cost: ~$200–400 total across all content generation
Casey's Open Questions (Need Answers)
- What specific tag categories matter most? Propose: mood/emotion, species, visual style, season. Are there others?
- Should AI tags supplement or replace existing tags? (Recommend supplement — don't touch what's working)
- Multi-color tagging — When a photo has multiple prominent colors, tag all of them or just dominant? (Recommend tagging up to 2)
- Is the description tool still wanted? Or just tags + color for now?
- What tier interests you? Even a rough sense of budget/ambition helps us scope the discovery phase.
Recommended Path
Recommendation: Start with a discovery conversation (3–5 hours, quoted in main project doc) to walk Casey through these tiers, answer her questions about AI capabilities, and let her choose. Then quote the build phase with capped hours based on their chosen tier.
The discovery phase would include:
- Call with Casey to walk through options
- Define the tag vocabulary (mood categories, species list, visual descriptors)
- Test AI tagging on 10–20 sample images so Casey can see real output
- Finalize scope and provide a capped-hour estimate for the build
Technical Reference
The full build spec for the AI description tool (Tier 2/3) is stored locally at:
~/.claude/project-notes/kurt-johnson-photography/claude-bundle-full/
This includes PRD, architecture, data models, API specs, WP plugin spec, and workflow docs. The spec was built around descriptions but the architecture applies to tags as well — same vision API pipeline, same batch processing, same review workflow.
Also see:
- KJP-AI-Tagging-PRD.md — simplified/updated PRD post-Casey-shift
- KJP-AI-Cost-Estimate.md — token math, model comparison, OpenRouter strategy
- KJP-AI-Feature-Brief.md — plain-English process overview
- KJP-AI-Description-Tool-Plan.md — original description-focused plan (pre-Casey-shift, kept for reference)
- KJP-PIPELINE-AND-CRON-PLAN.md — pipeline + cron migration plan