We've been deep in the world of Answer Engine Optimization, running the CiteMET playbook across our best content. The initial results were solid - we saw users engaging with our AI Share Buttons, and we knew we were successfully seeding our content into AI platforms. But with our team's background in AI, we saw this as just the first step.
We understood that for a Large Language Model (LLM), a single, quick interaction is a whisper. A true signal of authority, one that builds lasting memory, comes from a deeper conversation. We saw an opportunity to transform that initial whisper into a meaningful dialogue.
The headline: JavaScript execution in LLM browsers
Two of the biggest LLMs - OpenAI's ChatGPT and Google's Gemini - now support browsing modes that can execute JavaScript. Practically, this means they can load Client-Side Rendered (CSR) sites, run the hydration code, and then extract DOM content for summarization.
Until recently, most AI crawlers fetched only static HTML. CSR apps (React, Vue, Lovable, Bolt) ship minimal markup and rely on a client-side runtime to render content, so those crawlers saw a blank page. JavaScript-capable browsing changes that for these two tools, but it doesn't make CSR 'safe' or universally visible.
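If you want to see the gap for your own pages, you can compare the raw HTML a non-JS crawler receives with the hydrated DOM a JS-capable browsing mode can read. Below is a minimal sketch, assuming Node 18+ and Playwright installed; the URL is a placeholder, not a CiteMET endpoint.

```typescript
// Compare the static response (what non-JS crawlers see) with the hydrated DOM
// (what a JS-capable browsing mode can read). Run as an ES module (top-level await).
import { chromium } from 'playwright';

const url = 'https://example.com/your-csr-page'; // placeholder URL

// What non-JS crawlers get: the raw HTML body of the initial response.
async function staticView(): Promise<string> {
  const res = await fetch(url);
  return res.text();
}

// What a JS-capable agent can read: the DOM after client scripts have run.
async function hydratedView(): Promise<string> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle' });
  const html = await page.content();
  await browser.close();
  return html;
}

const [staticHtml, hydratedHtml] = await Promise.all([staticView(), hydratedView()]);
console.log('static HTML length:  ', staticHtml.length);
console.log('hydrated DOM length: ', hydratedHtml.length);
// A large gap means your visible content only exists after hydration -
// exactly the content that non-JS crawlers never see.
```

A large difference between the two outputs is a quick proxy for how much of your page is invisible to the tools that still read raw HTML.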
Who can and cannot run JavaScript today
• Can run JS: ChatGPT (Browse), Gemini (Browse). Both can evaluate client scripts and read the hydrated DOM in many cases.
• Cannot reliably run JS yet: Claude, Perplexity, and several others. Their crawlers still read raw HTML responses without executing scripts, which means CSR content is largely invisible to them.
• Edge cases exist: rate limits, bot detection, blocked resources, and async rendering timeouts can still cause partial or zero content capture even for JS-capable modes.
Why you still shouldn't rely on CSR for visibility
Even with JS execution in some tools, CSR remains fragile for three critical workflows:
- Human link previews: Messaging apps, social networks, and most link unfurlers do not execute JS. They scrape only the static HTML (title, description, og:image) of the initial response - see the sketch after this list. If those tags only appear after hydration, CSR breaks shareability and kills engagement.
- Non-JS crawlers: Claude, Perplexity, and specialized bots continue to index the static response. If your HTML is empty, you won't be cited.
- Performance and determinism: JS rendering adds variability - timeouts, blocked third-party scripts, client state, and cookie gates. Static HTML is deterministic and fast; CSR isn't.
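What unfurlers and non-JS crawlers actually consume is a handful of tags in the initial response. The sketch below renders such a head as a plain string; the titles, URLs, and image paths are illustrative, and real code should escape the values.

```typescript
// The tags link unfurlers and non-JS crawlers read from the initial response.
// They must be present in the static HTML, not injected later by client-side JS.
// All values here are illustrative; real code should HTML-escape them.
interface PageMeta {
  title: string;
  description: string;
  url: string;
  image: string;
}

function renderHead(meta: PageMeta): string {
  return [
    `<title>${meta.title}</title>`,
    `<meta name="description" content="${meta.description}">`,
    `<meta property="og:title" content="${meta.title}">`,
    `<meta property="og:description" content="${meta.description}">`,
    `<meta property="og:url" content="${meta.url}">`,
    `<meta property="og:image" content="${meta.image}">`,
    `<meta name="twitter:card" content="summary_large_image">`,
  ].join('\n');
}

console.log(renderHead({
  title: 'JavaScript Execution in LLM Browsers',
  description: 'Why static HTML still decides whether you get cited and shared.',
  url: 'https://example.com/posts/llm-browsers',
  image: 'https://example.com/og/llm-browsers.png',
}));
```

If these tags exist only after hydration, the preview card comes up blank no matter how good the rendered page looks.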
What this unlocks (and what to keep)
Good news: JS-capable browsing reduces the 'Blank Page Problem' for ChatGPT and Gemini use cases such as one-off summaries and Q&A over live sites.
Keep doing: Static Site Generation (SSG) for canonical visibility; JSON-LD for machine context; a robots.txt that welcomes AI agents; an llms.txt to guide discovery; and fast, stable HTML for share previews.
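To make that list concrete, here is a minimal build-time sketch that writes a fully pre-rendered page with JSON-LD embedded. It is not CiteMET's implementation; the output path and schema fields are assumptions for illustration.

```typescript
// Build-time pre-rendering sketch: emit complete static HTML with JSON-LD,
// so every crawler and unfurler gets the full content without running scripts.
// Output path and schema fields are illustrative, not CiteMET-specific.
import { writeFileSync, mkdirSync } from 'node:fs';

const article = {
  '@context': 'https://schema.org',
  '@type': 'Article',
  headline: 'JavaScript Execution in LLM Browsers',
  datePublished: '2025-01-01',
  author: { '@type': 'Organization', name: 'Example Co' },
};

const html = `<!doctype html>
<html lang="en">
  <head>
    <title>${article.headline}</title>
    <meta name="description" content="Why static HTML still wins for citations and previews.">
    <script type="application/ld+json">${JSON.stringify(article)}</script>
  </head>
  <body>
    <article>
      <h1>${article.headline}</h1>
      <p>Full article body rendered at build time, visible to every crawler.</p>
    </article>
  </body>
</html>`;

mkdirSync('dist/posts', { recursive: true });
writeFileSync('dist/posts/llm-browsers.html', html);
```

Because the HTML is complete before any request arrives, the same file satisfies JS-capable agents, non-JS crawlers, and link previews alike.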
Practical stance: Treat JS execution as a compatibility bonus - not a dependency. If your visibility plan requires JavaScript to show content, your citations and shareability will remain inconsistent.
Bottom line
JavaScript execution in browsing is progress. But answer engines and human link previews still depend on static HTML for reliability, speed, and trust. If citations, shareability, and cross-tool consistency matter, keep shipping pre-rendered pages and structured data - then enjoy the bonus of JS-capable modes where they exist.
If you want your CSR site to become an SSG site without any development needed, sign up for CiteMET now.