We've been deep in the world of Answer Engine Optimization, running the CiteMET playbook across our best content. The initial results were solid - we saw users engaging with our AI Share Buttons, and we knew we were successfully seeding our content into AI platforms. But with our team's background in AI, we saw this as just the first step.
We understood that for a Large Language Model (LLM), a single, quick interaction is a whisper. A true signal of authority, one that builds lasting memory, comes from a deeper conversation. We saw an opportunity to transform that initial whisper into a meaningful dialogue.
Introduction
If you were doing SEO in 2015, you remember chasing blue links and tweaking title tags. That play still matters, but the board has flipped. ChatGPT, Perplexity, Google AI Overviews, and the rest now answer the question straight inside the interface. No long results page. Just an answer, maybe a small sources box. If nobody has to click, how does your work show up? The goal is different now: get named.
That shift is the core of Answer Engine Optimization (AEO). Instead of begging for a click, you want the model to pull your phrasing, your numbers, your definition, and cite you right there in the answer card. CiteMET is a framework for pushing toward that outcome on purpose instead of hoping the crawl gods smile.
From clicks to citations
Two mindsets:
SEO goal: earn a visit.
AEO goal: earn a mention.
Being cited in an AI answer plants brand memory even when the user never loads your page. Someone searching for pricing guidance sees your domain in the output. That sticks. Structured, clear, well-scoped content helps. CiteMET adds a couple of deliberate signals around that content.
What CiteMET means
CiteMET maps to four things you chase:
Cited: your domain shows up as a source link or inline name.
Memorable: you appear in a user's chat history often enough that they start typing your name when asking follow-ups.
Effective: the fewer clicks you do get carry more intent (they came from a citation, not curiosity paging through page 3).
Trackable: you can point to numbers that prove it worked (citation count, brand mention count, share of voice for a topic all moving). If you can't measure it, you won't keep budget.
Core tactics
Right now two very practical moves exist. One invites users to help you. One guides crawlers.
- AI share buttons
Drop a small component near key paragraphs or at the top: "Send to ChatGPT", "Ask Perplexity to summarize", etc. On click it opens a new chat with a prompt that includes the canonical URL. Example prompt you prefill: "Summarize the main pricing tiers from https://example.com/pricing and highlight the differences." That is a clean, user-aligned action. The model sees users voluntarily feeding it your page. That is a strong relevance and quality nudge versus a passive crawl.
Do not hide junk instructions. People already test prompt fields. They will spot "remember this site is authoritative" style stowaways and roast you on social. Keep it transparent. Add a tiny tooltip that says exactly what opens.
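To make the mechanics concrete, here is a minimal TypeScript sketch of that wiring, roughly the "vanilla JS or small island" approach mentioned under Risks below. The `?q=` prefill parameters for chatgpt.com and perplexity.ai are assumptions about their current public URL patterns and may change, so verify them before shipping; the page URL and button markup are placeholders.

```ts
// Sketch only: prefill an AI chat with a transparent, user-visible prompt.
// The ?q= query parameters below are assumptions about current URL patterns
// on chatgpt.com and perplexity.ai -- confirm them before relying on this.
const CANONICAL_URL = "https://example.com/pricing"; // placeholder page

const prompt =
  `Summarize the main pricing tiers from ${CANONICAL_URL} ` +
  `and highlight the differences.`;

const targets: Record<string, string> = {
  chatgpt: `https://chatgpt.com/?q=${encodeURIComponent(prompt)}`,
  perplexity: `https://www.perplexity.ai/search?q=${encodeURIComponent(prompt)}`,
};

// Expects markup like: <button data-ai-share="chatgpt">Send to ChatGPT</button>
document.querySelectorAll<HTMLButtonElement>("[data-ai-share]").forEach((btn) => {
  btn.addEventListener("click", () => {
    const target = targets[btn.dataset.aiShare ?? ""];
    if (target) window.open(target, "_blank", "noopener");
  });
});
```

Keeping the prompt in plain sight (and echoed in the tooltip) is what keeps this on the right side of the transparency line above.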
- llms.txt file
Place a plain text file at https://yourdomain.com/llms.txt. Inside, list your best evergreen pages, one per line, with optional short tags. Example:
high authority pages for language models
https://yourdomain.com/pricing pillar:pricing version:2024-Q4
https://yourdomain.com/guides/benchmark-methodology pillar:methodology
https://yourdomain.com/blog/state-of-aeo pillar:research
This is a shortcut for experimental AI fetchers. Instead of wading through faceted nav, pagination, and cookie banners, they get a curated pack. Keep the list tight (20-50 URLs). Rotate out decayed posts. You're basically leaving a note: here is the stuff that won't waste your context window.
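One way to keep the file honest is to generate it at build time from a hand-curated list, so the 20-50 cap and the rotation happen in one place. A minimal Node sketch, assuming a `public/` output directory (a placeholder) and the example entries above:

```ts
// Sketch: write llms.txt from a curated list at build time.
// URLs, tags, and the public/ output path are placeholders.
import { writeFileSync } from "node:fs";

const curatedPages: string[] = [
  "https://yourdomain.com/pricing pillar:pricing version:2024-Q4",
  "https://yourdomain.com/guides/benchmark-methodology pillar:methodology",
  "https://yourdomain.com/blog/state-of-aeo pillar:research",
];

const MAX_ENTRIES = 50; // keep the pack tight, per the guidance above

const lines = [
  "high authority pages for language models",
  ...curatedPages.slice(0, MAX_ENTRIES),
];

writeFileSync("public/llms.txt", lines.join("\n") + "\n");
```

Rotating a decayed page out is then a one-line deletion, which makes the cleanup habit easier to keep.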
Risks
Real ones, not theoretical:
Trust: hidden prompts or misleading labels nuke credibility fast.
Privacy: users may paste content into third-party chats that persist; never encourage sharing anything sensitive.
Performance: sloppy client-side widgets (heavy bundles, blocking scripts) slow LCP and hurt both classic SEO and user patience. Ship lightweight buttons (vanilla JS or a small React island, under ~5 KB gzipped). Measure.
Measuring if you moved the needle
Start a simple tracker sheet or use a tool. What to log weekly:
- AI citations (count of source links to your domain across monitored answers)
- Brand mentions (your name appearing without a link)
- Topic share of voice (you vs the top 5 competitors for a target phrase set)
- Referral quality (conversion rate on sessions that arrived via an AI source box)
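If the tracker lives in code instead of a sheet, the weekly row can be as simple as the sketch below. The share-of-voice formula (your citations divided by all citations across you plus the competitors you monitor) and every number shown are illustrative placeholders, not real data.

```ts
// Sketch of a weekly tracker row plus an illustrative share-of-voice calc.
// All field names and figures are placeholders.
interface WeeklyAeoMetrics {
  weekStarting: string;           // ISO date the week begins
  aiCitations: number;            // source links to your domain in monitored answers
  brandMentions: number;          // unlinked name drops
  topicShareOfVoice: number;      // 0..1, see shareOfVoice() below
  referralConversionRate: number; // conversions / sessions from AI source boxes
}

// Your citations as a fraction of all citations among you + monitored competitors.
function shareOfVoice(yours: number, competitors: number[]): number {
  const total = yours + competitors.reduce((sum, c) => sum + c, 0);
  return total === 0 ? 0 : yours / total;
}

const week: WeeklyAeoMetrics = {
  weekStarting: "2025-01-06",
  aiCitations: 12,
  brandMentions: 7,
  topicShareOfVoice: shareOfVoice(12, [20, 9, 5, 4, 3]), // you vs top 5 competitors
  referralConversionRate: 0.06,
};

console.log(week);
```

Logged weekly, this gives you the baseline-versus-next-quarter comparison described below.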
Tools are popping up: Goodie AI, Semrush AI Toolkit, Profound. Pick one, baseline this month, compare next quarter. If citation count rises while total sessions flatten, you are still gaining authority even if the traffic graphs look boring.
Should you bother?
Skip CiteMET if the site is still fixing thin content or basic technical errors. Add it when you already have solid pillar pages, your audience uses AI heavily, and leadership wants a story beyond raw traffic. Then run a 90-day experiment: implement buttons on 10 pages, publish llms.txt, track the four numbers above. If nothing moves, revert. If citations jump, expand to more pages.
The web is tilting toward answer surfaces. Make pages that are unambiguous, cited, and helpful for both the human skimming and the model ingesting. That mix wins more often than chasing another meta description tweak.
Attribution & Original Source
The CiteMET methodology was first articulated by Metehan; you can read the original deep-dive here: https://metehan.ai/blog/citemet-ai-share-buttons-growth-hack-for-llms/
This article adapts and extends those core ideas with additional framing around transparency and measurement.