AIEO.
SEO wins the click.
We win the sentence.
AI Engine Optimisation — also called GEO or AEO. The discipline of getting your brand cited and well-described inside ChatGPT, Claude, Gemini, Perplexity and Grok.
AIEO (AI Engine Optimisation) is the practice of monitoring and shaping how a brand appears across LLM-based answer engines. Where SEO optimises ten blue links on a results page, AIEO optimises the sentence the model produces when the user never clicks.
Practitioners call the same discipline GEO (Generative Engine Optimisation) or AEO (Answer Engine Optimisation). The acronym varies. The workflow is identical.
In 2026, roughly 40% of B2B research queries resolve without a click. If your brand isn't in the sentence the model produces, you aren't in the consideration set — no matter where you rank on Google.
The Prompt & Pencil AIEO platform tracks, diagnoses and ships updates against five engines weekly.
What AIEO actually is.
AIEO is one of three names for the same discipline. The acronyms collided as the field formed in 2024–2025.
| Acronym | Stands for | Most common in |
|---|---|---|
| AIEO | AI Engine Optimisation | APAC practitioners, our preferred term |
| GEO | Generative Engine Optimisation | US academic and enterprise contexts |
| AEO | Answer Engine Optimisation | Older term, dating to featured-snippet work in 2019 |
All three describe the same job: get your brand cited and well-described in AI answer engines. We use AIEO because it travels better in mixed APAC-Western teams and doesn't get confused with geographic SEO.
Same crawler. Different game.
| | SEO | AIEO |
|---|---|---|
| What it optimises | The ranked list of blue links | The sentence the model says |
| Unit of work | Page-level (URL → query) | Sentence-level (prompt → cited answer) |
| Primary metric | Position, CTR, impressions | Mention Rate, Citation Rate, Position, Sentiment |
| Audience | People who click | People who never click |
| Recency window | Months to years | 7–14 days |
| Off-site signals | Backlinks | Reddit, Wikipedia, YouTube transcripts |
| Content structure | Topic clusters, depth | Question H2s, lists, tables, definitions |
| Schema priority | Helpful | Mandatory |
Track. Diagnose. Ship.
Three loops. Weekly cadence. Same four numbers on the dashboard every Monday.
A prompt panel of 50–200 queries, asked weekly to all five engines.
We co-design the panel with your team — competitor framings, category questions, brand-defence prompts, hostile probes. Every prompt is asked weekly to ChatGPT, Claude, Gemini, Perplexity and Grok. Outputs feed the four-KPI dashboard.
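A weekly panel run is, mechanically, a loop: every prompt in the panel asked to every engine, answers stored for the dashboard. A minimal sketch below — `PanelRun` and the stub `ask` callable are hypothetical names, and a real run would plug in each engine's actual API client instead of the lambda:

```python
from dataclasses import dataclass, field
from typing import Callable

ENGINES = ["ChatGPT", "Claude", "Gemini", "Perplexity", "Grok"]

@dataclass
class PanelRun:
    """One weekly run: every panel prompt asked to every engine."""
    prompts: list[str]
    ask: Callable[[str, str], str]  # (engine, prompt) -> answer text
    answers: dict = field(default_factory=dict)

    def execute(self) -> dict:
        for prompt in self.prompts:
            for engine in ENGINES:
                self.answers[(engine, prompt)] = self.ask(engine, prompt)
        return self.answers

# Usage with a stub in place of real engine APIs:
run = PanelRun(
    prompts=["best aieo platform", "prompt & pencil alternatives"],
    ask=lambda engine, prompt: f"[{engine} answer to: {prompt}]",
)
results = run.execute()  # 2 prompts x 5 engines = 10 stored answers
```

The output keyed by `(engine, prompt)` is what feeds the KPI dashboard downstream.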
What does each model know — and what does it get wrong?
We compare answers across engines, flag factual drift, surface competitor framings the model has internalised, and map citation provenance: which URLs (yours and others') are feeding the answers right now.
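The two diagnosis steps — provenance mapping and drift detection — can be sketched as simple aggregations over the stored answers. This is an illustrative reduction, not the production pipeline; the input shapes and the substring-based fact check are assumptions:

```python
from collections import Counter

def citation_provenance(cited: dict[str, list[str]]) -> Counter:
    """Count how many engines cite each URL (engine -> list of cited URLs)."""
    counts: Counter = Counter()
    for engine, urls in cited.items():
        counts.update(set(urls))  # de-dupe within one engine's answer
    return counts

def flag_drift(answers: dict[str, str], required_facts: list[str]) -> dict[str, list[str]]:
    """Per engine, list the required facts absent from its answer text."""
    return {
        engine: [f for f in required_facts if f.lower() not in text.lower()]
        for engine, text in answers.items()
    }
```

URLs cited by several engines at once are the highest-leverage pages to update; facts missing from a single engine point to where that model's picture has drifted.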
Structured updates that move the needle in days.
Each cycle produces a punch list: schema deltas, FAQ additions, off-site placements (Reddit, Wikipedia, Quora), data-asset publishing. Most clients see measurable Mention Rate movement within 6–8 weeks of consistent execution.
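One recurring punch-list item — FAQ additions with schema — reduces to emitting a schema.org `FAQPage` JSON-LD block from a list of Q&A pairs. A minimal generator, assuming the standard `FAQPage` / `Question` / `Answer` structure from schema.org (the function name is ours):

```python
import json

def faq_jsonld(qa_pairs: list[tuple[str, str]]) -> str:
    """Emit a schema.org FAQPage JSON-LD block for the given Q&A pairs."""
    doc = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }
    return json.dumps(doc, indent=2)
```

The returned string goes into a `<script type="application/ld+json">` tag on the FAQ page.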
Four numbers replace 'rank #1'.
These four metrics are what we report to clients every Monday morning. Tracked across all five engines and the full prompt panel.
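The four numbers can be computed from the week's answer records. The sketch below uses illustrative definitions — mention share, share of answers citing your domain, average list position when mentioned, average sentiment when mentioned — not Prompt & Pencil's exact formulas, and the record shape is an assumption:

```python
from statistics import mean

def weekly_kpis(records: list[dict], brand: str, domain: str) -> dict:
    """
    records: one dict per (engine, prompt) answer, e.g.
      {"cited_urls": [...], "brands_in_order": [...], "sentiment": -1 | 0 | 1}
    """
    mentioned = [brand in r["brands_in_order"] for r in records]
    cited = [any(domain in url for url in r["cited_urls"]) for r in records]
    positions = [
        r["brands_in_order"].index(brand) + 1
        for r in records if brand in r["brands_in_order"]
    ]
    sentiments = [r["sentiment"] for r in records if brand in r["brands_in_order"]]
    return {
        "mention_rate": mean(mentioned),
        "citation_rate": mean(cited),
        "avg_position": mean(positions) if positions else None,
        "avg_sentiment": mean(sentiments) if sentiments else None,
    }
```

Run over the full panel each Monday, this yields the same four headline numbers per engine or in aggregate.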
Five engines, one weekly cycle.
Coverage expands quarterly as new answer engines pass a usage threshold.