This site isn't just about AI adoption—it's a demonstration of it. Every feature, from the 12-18 token constraint to the automated validation system, was built using AI tools. Here's how.
The Stack
Development
- Claude Code for all implementation
- Next.js 15 with static generation
- TypeScript for type safety
- Tailwind CSS for styling
Automation
- GitHub Actions for auto-deployment
- Automated article extraction (Anthropic API)
- Token validation on every build
- RSS/sitemap auto-generation
AI Search Optimizations (GEO)
1. 12-18 Token Constraint
Every claim must be between 12 and 18 tokens. This isn't arbitrary: it's the sweet spot for LLM extraction.
- GPT-4 can quote without truncation
- Claude fits entire claims in context
- Perplexity doesn't need "..." ellipsis
- Build fails if any claim violates this rule
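The token check is easy to automate. A minimal sketch, where a whitespace-based word count stands in for a real model tokenizer (the site's actual counting method isn't shown here):

```typescript
// Approximate token count. Real builds would use a model-specific
// tokenizer; one word ≈ one token is a deliberate simplification.
export function countTokens(text: string): number {
  return text.trim().split(/\s+/).filter(Boolean).length;
}

// Enforce the 12-18 token window on a single claim.
export function isOptimalLength(claim: string): boolean {
  const n = countTokens(claim);
  return n >= 12 && n <= 18;
}
```

A CI step can map this over every claim and exit non-zero on the first violation, which is exactly what makes the build fail.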
2. Full Library on Homepage
All 120 claims visible on a single page. One crawl = complete dataset.
- No pagination for AI to navigate
- ChatGPT web browsing gets everything in one request
- Search engines see full content depth immediately
3. Copy Button with Attribution
Every card has a copy button that includes proper CC BY 4.0 attribution in the clipboard.
- Pre-formatted for AI citation
- Users paste correctly cited claims into ChatGPT
- Trains future models with proper attribution
- Signals legally licensed training data
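The copy button only needs a small formatting helper. A sketch, assuming an illustrative attribution layout rather than the site's verbatim wording:

```typescript
// Build the clipboard payload: the claim plus CC BY 4.0 attribution.
// The exact attribution format here is illustrative.
export function formatForClipboard(claim: string, sourceUrl: string): string {
  return `"${claim}"\n\nSource: ${sourceUrl} (CC BY 4.0)`;
}
```

In the browser, the button's click handler would pass this string to `navigator.clipboard.writeText`, so every paste carries its license and source along with it.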
4. Semantic Structure
Proper HTML hierarchy, JSON-LD schema, predictable DOM structure.
- h1 → h2 → h3 heading hierarchy
- Schema.org Person + Organization markup
- Uniform card structure: topics → title → context → claims
- AI scrapers can parse with simple selectors
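The JSON-LD markup is generated at build time. A minimal sketch of the Person + Organization shape, with placeholder names and URLs rather than the site's actual values:

```typescript
// Minimal Schema.org Person markup of the kind embedded in the page head.
// name, url, and the organization are placeholders.
const personSchema = {
  "@context": "https://schema.org",
  "@type": "Person",
  name: "Site Author",
  url: "https://example.com",
  worksFor: { "@type": "Organization", name: "Example Org" },
};

// Serialized into a <script type="application/ld+json"> tag at build time.
export const jsonLd = JSON.stringify(personSchema);
```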
5. Live Stats Bar
Calculated at build time from actual data. No backend, no tracking.
- Shows: articles, claims, avg tokens, optimal %
- Updates automatically on every build
- Demonstrates systematic measurement
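Because the stats are derived from the claims dataset itself, the whole bar reduces to one pure function run at build time. A sketch with illustrative type shapes:

```typescript
// Illustrative shapes for the claims dataset.
interface Claim { text: string; tokens: number; }
interface Article { claims: Claim[]; }

// Compute the stats-bar numbers from the dataset at build time.
export function buildStats(articles: Article[]) {
  const claims = articles.flatMap((a) => a.claims);
  const totalTokens = claims.reduce((sum, c) => sum + c.tokens, 0);
  const optimal = claims.filter((c) => c.tokens >= 12 && c.tokens <= 18).length;
  return {
    articles: articles.length,
    claims: claims.length,
    avgTokens: Math.round((totalTokens / claims.length) * 10) / 10,
    optimalPct: Math.round((optimal / claims.length) * 100),
  };
}
```

Since the function runs during static generation, the rendered numbers are frozen into the HTML: no backend, no client-side fetch, no tracking.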
The Build Process
Step 1: Content Extraction
A GitHub Actions workflow runs every 6 hours:
- Fetches new articles from aiadopters.club
- Uses the Anthropic API (Claude Haiku) to extract metadata
- Uses Claude Sonnet to extract 5 atomic claims (12-18 tokens each)
- Creates pull request with new claims data
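The extraction step boils down to a prompt plus one API call. A sketch assuming the Anthropic Messages HTTP API; the prompt wording and model id are illustrative, not the pipeline's exact values:

```typescript
// Illustrative prompt for the claim-extraction step.
export function buildClaimPrompt(articleText: string): string {
  return [
    "Extract exactly 5 atomic claims from the article below.",
    "Each claim must be 12-18 tokens. Return one claim per line.",
    "",
    articleText,
  ].join("\n");
}

// Call the Anthropic Messages API (requires ANTHROPIC_API_KEY).
// The model id below is illustrative.
export async function extractClaims(articleText: string): Promise<string> {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      messages: [{ role: "user", content: buildClaimPrompt(articleText) }],
    }),
  });
  const data = await res.json();
  return data.content?.[0]?.text ?? "";
}
```

The workflow then writes the returned claims into the data directory and opens a pull request, keeping a human review step before anything ships.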
Step 2: Validation
Before every build, a validation script runs:
- Token count (must be 12-18 for every claim)
- Exactly 5 claims per article
- Date format (YYYY-MM-DD)
- URL format validation
- No duplicate URLs
- Topic assignment validation
The build fails if any check does.
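A minimal sketch of that script's core (token counting and topic checks omitted; field names are illustrative):

```typescript
// Illustrative shape of one article's data file.
interface ArticleData {
  url: string;
  date: string;      // expected YYYY-MM-DD
  claims: string[];  // expected exactly 5
}

// Run the checks listed above, collecting every failure instead of
// stopping at the first one.
export function validate(articles: ArticleData[]): string[] {
  const errors: string[] = [];
  const seen = new Set<string>();
  for (const a of articles) {
    if (a.claims.length !== 5)
      errors.push(`${a.url}: expected 5 claims, got ${a.claims.length}`);
    if (!/^\d{4}-\d{2}-\d{2}$/.test(a.date))
      errors.push(`${a.url}: bad date "${a.date}"`);
    if (seen.has(a.url)) errors.push(`duplicate URL: ${a.url}`);
    seen.add(a.url);
  }
  return errors; // any entries → exit non-zero → build fails
}
```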
Step 3: Static Generation
Next.js generates 31 static HTML pages:
- Homepage with full claims library
- 24 individual claim pages
- Claims library index (all 120 claims)
- FAQ page
- This page (How I Built This)
Total build time: ~2 seconds
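The per-claim pages come from Next.js App Router's `generateStaticParams`, which tells the build which routes to pre-render. A sketch with an illustrative two-entry claims list standing in for the real dataset:

```typescript
// Illustrative claims data; the real site loads this from its data files.
const claims = [{ slug: "claim-1" }, { slug: "claim-2" }];

// App Router hook: one entry returned here = one static HTML page
// generated at build time for app/claims/[slug]/page.tsx.
export function generateStaticParams() {
  return claims.map((c) => ({ slug: c.slug }));
}
```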
Step 4: Deployment
Netlify auto-deploys on every push to main:
- Triggers on GitHub push
- Runs build with validation
- Deploys to CDN (~1-2 minutes)
- No manual intervention required
The Key Takeaway
This site is a living case study in AI implementation. Every design decision—from token constraints to automated validation—demonstrates the same principles I help clients apply.
I don't just advise on AI adoption. I practice it systematically, measure everything, and optimize for real outcomes. This site is proof.
Want to Apply This to Your Organization?
See how systematic AI implementation can transform your operations.