LLM Visibility Best Practices for Grain & Mortar
Created: January 9, 2026
Current State
Already Implemented
- llms.txt - Generated by Yoast SEO, includes pages, case studies, random goodies, and industries
- LLM Info Page (page-llm-info.php) - Human-readable version at /llm-info/ with company overview, services, team, notable clients, and contact info
Not Yet Implemented
- Schema.org structured data (JSON-LD)
- Custom robots.txt with AI crawler directives
- llms-full.txt (expanded version)
- FAQ page with structured Q&A
Recommended Improvements
1. Schema.org Structured Data (HIGH PRIORITY)
Add JSON-LD markup to the site. This is the single biggest improvement for both LLM comprehension and SEO.
Location: Add to header.php or create a dedicated include file.
Schemas to implement:
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Grain & Mortar",
  "url": "https://grainandmortar.com",
  "logo": "https://grainandmortar.com/path-to-logo.svg",
  "foundingDate": "2011",
  "founder": [
    {"@type": "Person", "name": "Eric Downs"},
    {"@type": "Person", "name": "Mike DeKay"},
    {"@type": "Person", "name": "Kristin DeKay"}
  ],
  "address": {
    "@type": "PostalAddress",
    "addressLocality": "Omaha",
    "addressRegion": "NE",
    "addressCountry": "US"
  },
  "contactPoint": {
    "@type": "ContactPoint",
    "email": "hello@grainandmortar.com",
    "contactType": "customer service"
  },
  "sameAs": [
    "https://instagram.com/grainandmortar",
    "https://dribbble.com/grainandmortar",
    "https://linkedin.com/company/grain-and-mortar"
  ]
}
Additional schemas:
- ProfessionalService for each service type (branding, web design, illustration)
- Person schema for each team member on the About page
- CreativeWork for case studies/portfolio pieces
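Rather than hand-editing the JSON, the markup can be generated from a single data structure, which keeps it in sync as the site changes. A minimal Python sketch of the idea (the function name is illustrative; on the WordPress side the same shape would be built in the PHP include and echoed into the head):

```python
import json

def build_organization_schema(name, url, founders, email, same_as):
    """Assemble Organization JSON-LD as a plain dict, serialized once at output time."""
    return {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "founder": [{"@type": "Person", "name": n} for n in founders],
        "address": {
            "@type": "PostalAddress",
            "addressLocality": "Omaha",
            "addressRegion": "NE",
            "addressCountry": "US",
        },
        "contactPoint": {
            "@type": "ContactPoint",
            "email": email,
            "contactType": "customer service",
        },
        "sameAs": same_as,
    }

schema = build_organization_schema(
    "Grain & Mortar",
    "https://grainandmortar.com",
    ["Eric Downs", "Mike DeKay", "Kristin DeKay"],
    "hello@grainandmortar.com",
    ["https://instagram.com/grainandmortar"],
)
# The tag exactly as it would be emitted into <head>
tag = '<script type="application/ld+json">%s</script>' % json.dumps(schema)
```

Building a dict and serializing once avoids the escaping mistakes that creep into hand-written JSON-LD strings.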
2. robots.txt Updates (MEDIUM PRIORITY)
Create or modify robots.txt to explicitly welcome AI crawlers:
# AI Crawlers - Welcome
User-agent: GPTBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Claude-Web
Allow: /

User-agent: anthropic-ai
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Bytespider
Allow: /

# Standard crawlers
User-agent: *
Allow: /

Sitemap: https://grainandmortar.com/sitemap_index.xml
Note: Some companies block AI crawlers. G&M actively wants to be indexed, so explicitly allowing them signals intent.
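Before deploying, the directives can be sanity-checked with Python's standard-library robots.txt parser. This sketch parses an abbreviated inline copy of the file and confirms that both a named AI crawler and everyone else are allowed:

```python
from urllib.robotparser import RobotFileParser

# Abbreviated copy of the proposed robots.txt
ROBOTS = """\
User-agent: GPTBot
Allow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS.splitlines())

# Both the named AI crawler and a generic bot should be allowed everywhere
gpt_allowed = rp.can_fetch("GPTBot", "https://grainandmortar.com/llm-info/")
other_allowed = rp.can_fetch("SomeOtherBot", "https://grainandmortar.com/")
```

The same check catches a common mistake: a stray `Disallow: /` left over in a group when the file is edited later.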
3. llms-full.txt (MEDIUM PRIORITY)
Create an expanded version with more detail:
Content to include:
- Full company history and philosophy
- Detailed service descriptions with typical deliverables
- Complete client list (not just notable clients)
- Full team bios
- Process/methodology explanation
- Pricing model (if comfortable sharing: project-based, retainer, etc.)
- Geographic service area
- Technology stack (WordPress, Tailwind, etc.)
Location: /llms-full.txt (referenced from llms.txt)
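A sketch of how the file might be assembled from structured sections, so it can be regenerated instead of maintained by hand. Per the llms.txt convention, the output is plain markdown: an H1 title followed by H2 sections. The section names and copy below are placeholders, not final content:

```python
# Placeholder sections; the real content would come from the list above
SECTIONS = {
    "Company": "Brand and web design studio in Omaha, NE, founded in 2011.",
    "Services": "Brand strategy, identity design, web design and development, illustration.",
    "Contact": "hello@grainandmortar.com",
}

def render_llms_full(title, sections):
    """Render an llms.txt-style markdown file: H1 title, then H2 sections."""
    parts = ["# " + title, ""]
    for heading, body in sections.items():
        parts.extend(["## " + heading, body, ""])
    return "\n".join(parts)

text = render_llms_full("Grain & Mortar", SECTIONS)
```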
4. FAQ Page with FAQPage Schema (MEDIUM PRIORITY)
Create a dedicated FAQ page that answers common questions. Use FAQPage schema markup.
Sample questions:
- What services does Grain & Mortar offer?
- Where is Grain & Mortar located?
- What industries does Grain & Mortar work with?
- How long has Grain & Mortar been in business?
- What is Grain & Mortar's design process?
- Does Grain & Mortar work with clients outside of Nebraska?
- What makes Grain & Mortar different from other agencies?
Schema example:
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What services does Grain & Mortar offer?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Grain & Mortar offers brand strategy, brand identity design, web design and development, illustration, motion design, and print design."
      }
    }
  ]
}
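Rather than hand-writing one mainEntity entry per question, the array can be generated from a list of question/answer pairs. A minimal Python sketch (the sample answer is illustrative):

```python
import json

def faq_schema(pairs):
    """Build FAQPage JSON-LD from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

markup = json.dumps(
    faq_schema([("Where is Grain & Mortar located?", "Omaha, Nebraska.")]),
    indent=2,
)
```

Keeping the Q&A pairs in one place also makes it easy to render the visible FAQ content and the schema from the same source, so they never drift apart.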
5. Content Optimization (ONGOING)
Meta descriptions: Audit all pages to ensure unique, informative meta descriptions (Yoast handles this but worth reviewing).
Semantic HTML: Ensure proper heading hierarchy (H1 > H2 > H3) on all pages.
About page: Ensure the About page has comprehensive, well-structured information about the company and team.
Service pages: Each service should have its own page with detailed descriptions.
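Heading hierarchy is easy to audit mechanically. This standard-library sketch flags any jump of more than one level between consecutive headings (e.g. an H1 followed directly by an H3):

```python
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    """Record heading-level jumps, e.g. an H1 followed directly by an H3."""
    def __init__(self):
        super().__init__()
        self.last = 0
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if self.last and level > self.last + 1:
                self.problems.append((self.last, level))
            self.last = level

audit = HeadingAudit()
audit.feed("<h1>About</h1><h3>Team</h3>")  # skips H2
```

Run against each page template's rendered output, `problems` lists every (previous level, offending level) pair.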
Implementation Priority
- Schema.org JSON-LD - Biggest impact, relatively quick to implement
- robots.txt - 5 minutes, signals intent to AI crawlers
- FAQ page - Good for both users and LLMs
- llms-full.txt - Nice to have, supplements existing llms.txt
Files to Modify
- header.php - Add JSON-LD structured data
- /public/robots.txt - Create with AI crawler directives
- page-faq.php - Create new template (if implementing FAQ)
- /public/llms-full.txt - Create expanded version
Verification
After implementation:
1. Test structured data at https://validator.schema.org/
2. Check that robots.txt is accessible at the production URL
3. Verify llms.txt uses production URLs (not local ones)
4. Test in ChatGPT/Claude by asking "What is Grain & Mortar?"
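Step 1 can also be spot-checked locally before reaching for the online validator: extract every application/ld+json block from a page and confirm it parses as JSON. A standard-library sketch:

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collect parsed contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self.in_jsonld = True

    def handle_data(self, data):
        if self.in_jsonld:
            self.blocks.append(json.loads(data))  # raises if the JSON is malformed

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_jsonld = False

# Stand-in for a fetched page; in practice, feed the live HTML of each template
page = '<html><head><script type="application/ld+json">{"@type": "Organization"}</script></head></html>'
extractor = JsonLdExtractor()
extractor.feed(page)
```

A malformed block raises immediately at `json.loads`, which is exactly the failure the online validator would report.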
Resources
- llms.txt spec: https://llmstxt.org/
- Schema.org Organization: https://schema.org/Organization
- Schema.org FAQPage: https://schema.org/FAQPage
- Google Structured Data Testing: https://search.google.com/test/rich-results