Tuesday, February 3, 2026

AI search is here: why your content strategy needs to change now

7 min read

AI Search

For two decades, search meant one thing: type a query, get a list of links, click one. Brands optimised for that system. Keywords in title tags, backlink profiles, meta descriptions. It worked.

That world is over.

Search engines now use vector embeddings, mathematical representations that let AI grasp the meaning behind content rather than just the words, as a key layer in a hybrid system that still includes PageRank, link analysis, and keyword matching. And those same AI systems aren't just ranking pages anymore. They're generating answers directly. The question is no longer whether your content strategy needs to change. It's whether you'll adapt before your competitors do.

The search landscape has fractured

Today, users discover information through ChatGPT, Google Gemini, Perplexity, Claude, Copilot, and Google's own AI Overviews. Each of these platforms retrieves, evaluates, and synthesises content differently. Each has its own crawlers, its own trust signals, and its own citation logic.

The numbers are hard to ignore:

  • 58% of consumers have already replaced traditional search engines with AI tools for shopping research.
  • Gartner predicts traditional search volume will drop 25% by 2026 due to AI chatbots and virtual agents, though this forecast remains contested and hasn't materialised to that degree yet.
  • AI referral traffic grew 357% year-over-year, generating 1.13 billion visits in June 2025 alone.
  • Early data suggests AI referral traffic may convert at significantly higher rates. Semrush found ChatGPT visitors convert at roughly 4.4 times the rate of Google organic visitors, though results vary widely by industry and sample size.

We're no longer optimising for one search engine. We're engineering visibility across an entire ecosystem of AI-powered discovery surfaces.

Zero-click is the new default

If you're still measuring success purely by organic clicks, you're measuring a shrinking pie.

Zero-click searches, queries where the user never leaves the search results page, have reached staggering levels:

  • 58.5% of US searches and 59.7% of EU searches end entirely within Google's results.
  • AI Overviews significantly increase zero-click behaviour. An Ahrefs study found that AI Overviews reduce organic clicks by around 58%.
  • Google's new AI Mode, a dedicated tab powered by Gemini 2.5, produces a 93% zero-click rate.

Google launched AI Mode in March 2025 with advanced reasoning, multi-step query handling, and what they call "Deep Search": a feature that issues hundreds of searches behind the scenes to generate expert-level, fully-cited reports. As of August 2025, it's available in over 180 countries.

The implication is clear. For a growing number of queries, the AI is the destination. Your content either gets cited in the answer, or it doesn't exist.

What this means for your brand

Traditional SEO metrics (rankings, impressions, click-through rates) still matter. But they no longer show the full picture. A page can rank #1 and still lose visibility if an AI Overview absorbs the answer and the user never scrolls down.

The measurement paradigm is expanding from "How many visits did we get?" to "How often are we mentioned, cited, and trusted by AI systems?"

How AI systems actually work under the hood

To understand what you need to change, you need to understand how these AI engines find and select content in the first place.

Modern AI search is built on vector embeddings: a way to translate text into lists of numbers (vectors) in a high-dimensional mathematical space. In this space, concepts with similar meanings are located close to each other. "Best places to eat in Rome" and "top Roman restaurants" land near each other mathematically, even though they share almost no keywords.
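
To make that concrete, here is a minimal sketch of the idea using the open-source sentence-transformers library and the all-MiniLM-L6-v2 model. Both are illustrative choices: production search engines use their own proprietary embedding models, but the geometry works the same way.

```python
# Minimal illustration of semantic proximity in embedding space.
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # example open-source embedding model

phrases = [
    "Best places to eat in Rome",
    "top Roman restaurants",
    "how to replace a bicycle chain",
]
embeddings = model.encode(phrases)  # one vector (384 floats) per phrase

def cosine(a, b):
    """Cosine similarity: close to 1.0 = same meaning, close to 0 = unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings[0], embeddings[1]))  # high: same meaning, almost no shared keywords
print(cosine(embeddings[0], embeddings[2]))  # low: unrelated topics
```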

When an AI engine receives a query, it:

  1. Converts that query into a vector.
  2. Searches its database for content chunks with the closest vector representations.
  3. Feeds those chunks into the language model to generate an answer (a toy version of this pipeline is sketched in code below).
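
The chunk texts, the embedding model, and the final prompt in this sketch are all illustrative stand-ins; real engines run the same steps at far larger scale, with approximate nearest-neighbour indexes instead of a brute-force comparison.

```python
# Toy retrieval pipeline: embed a query, find the closest content chunks,
# and assemble them into a prompt for a language model.
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

# In a real system these chunks come from crawled pages, already embedded and indexed.
chunks = [
    "Trattoria X in Trastevere is known for its cacio e pepe.",
    "Our return policy allows refunds within 30 days of purchase.",
    "Rome's Testaccio market is a favourite spot for street food.",
]
chunk_vectors = model.encode(chunks, normalize_embeddings=True)

query = "best places to eat in Rome"
query_vector = model.encode([query], normalize_embeddings=True)[0]

# Steps 1-2: vector search. On normalised vectors, cosine similarity is a dot product.
scores = chunk_vectors @ query_vector
top_k = np.argsort(scores)[::-1][:2]

# Step 3: the retrieved chunks become the context the language model answers from.
context = "\n".join(chunks[i] for i in top_k)
prompt = f"Answer the question using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # a real system would now send this prompt to an LLM
```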

But proximity in vector space is only the first filter. What determines whether your content gets selected, and cited, involves several additional layers.

Semantic clarity

LLMs understand direct, clear language better than figurative or ambiguous writing. Content structured in semantically complete chunks of roughly a few hundred tokens (research suggests 128 to 512 tokens as the sweet spot) performs significantly better in retrieval. Each chunk needs to be a coherent, self-contained unit of meaning, because that's exactly how retrieval systems process your pages.
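
As a rough illustration of what that pre-processing looks like, the sketch below packs paragraphs into chunks that stay inside a target token budget. The four-characters-per-token ratio is a common heuristic, not a real tokeniser, and the sample text is a placeholder.

```python
# Rough content chunker: split text on paragraph breaks and pack paragraphs
# into chunks that stay within a target token budget.
# Uses the common ~4 characters per token heuristic rather than a real tokeniser.

def chunk_text(text: str, max_tokens: int = 256) -> list[str]:
    max_chars = max_tokens * 4
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        paragraph = paragraph.strip()
        if not paragraph:
            continue
        # Start a new chunk if adding this paragraph would blow the budget.
        if current and len(current) + len(paragraph) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += paragraph + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

article = (
    "Semantic clarity matters.\n\n"
    "Each chunk should be a self-contained unit of meaning.\n\n"
    "Retrieval systems embed chunks, not whole pages."
)
for i, chunk in enumerate(chunk_text(article, max_tokens=256)):
    print(i, len(chunk) // 4, "approx. tokens:", chunk[:60])
```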

Structure as signal

Content with consistent heading hierarchies (H2 → H3 → bullet points) is significantly more likely to be cited by AI systems. Research from Princeton (GEO) found that content using expert quotations was 40% more likely to be surfaced by LLMs. Question and how-to formats are also far more likely to trigger AI Overviews: Ahrefs data shows roughly 57.9% of question-type queries trigger one, compared with 15.5% of other query types. Semantic URLs with descriptive words also tend to earn more AI citations, though the exact uplift varies.

This isn't decoration. Structure is how embedding models parse your content into retrievable units. Clean architecture means cleaner chunks, which means higher similarity scores at query time.
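
One way to see your page the way a retrieval system might is to split it on its own headings. A rough sketch, assuming BeautifulSoup is available and a conventional H2/H3 structure (the HTML is placeholder content):

```python
# Split an HTML page into heading-scoped chunks: each H2/H3 plus the copy
# under it becomes one retrievable, self-contained unit.
# Assumes: pip install beautifulsoup4
from bs4 import BeautifulSoup

html = """
<article>
  <h2>Semantic clarity</h2>
  <p>Write in direct, declarative sentences.</p>
  <h3>Chunk size</h3>
  <p>Roughly 128 to 512 tokens per chunk works well in retrieval.</p>
  <h2>Structure as signal</h2>
  <p>Consistent heading hierarchies produce cleaner chunks.</p>
</article>
"""

soup = BeautifulSoup(html, "html.parser")
chunks, heading, body = [], None, []

for tag in soup.find_all(["h2", "h3", "p"]):
    if tag.name in ("h2", "h3"):
        if heading is not None:
            chunks.append((heading, " ".join(body)))
        heading, body = tag.get_text(strip=True), []
    else:
        body.append(tag.get_text(strip=True))

if heading is not None:
    chunks.append((heading, " ".join(body)))

for heading, text in chunks:
    print(f"{heading}: {text}")
```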

Authority and trust signals

Content backed by proprietary data, first-hand research, statistics, and links to credible sources is cited more often by LLMs. Princeton's GEO research found that various optimisation types (adding statistics, quotations, and citations) each improved LLM visibility by 30 to 40%, though results varied across categories. This aligns directly with Google's E-E-A-T framework (Experience, Expertise, Authoritativeness, and Trustworthiness), which has become the credibility foundation not just for Google, but for AI citation decisions across all platforms.

Trust is the most important signal. Without it, expertise and experience become irrelevant to AI systems evaluating whether to cite you.

The rise of AI share of voice

The SEO industry is converging on a new primary metric: AI Share of Voice.

AI Share of Voice measures how frequently your brand appears in AI-generated responses compared to competitors. It encompasses both mention-based presence (how often your brand comes up in conversations) and citation-based authority (how often you're the source the AI links to).
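
There is no standard formula yet, but a first-pass measurement can be as simple as sampling prompts, collecting the answers each AI platform returns, and counting mentions and citations. A minimal sketch, where the sampled answers, brand names, and domains are placeholders for data you would collect yourself:

```python
# First-pass AI Share of Voice: across a sample of AI-generated answers,
# how often is each brand mentioned, and how often is its domain cited?
sampled_answers = [
    {"text": "Acme and Globex both offer this.", "cited_domains": ["acme.com"]},
    {"text": "Most analysts recommend Acme.", "cited_domains": ["acme.com", "review-site.com"]},
    {"text": "Globex is a popular alternative.", "cited_domains": ["globex.com"]},
]

brands = {"Acme": "acme.com", "Globex": "globex.com"}

for brand, domain in brands.items():
    mentions = sum(brand.lower() in a["text"].lower() for a in sampled_answers)
    citations = sum(domain in a["cited_domains"] for a in sampled_answers)
    total = len(sampled_answers)
    print(f"{brand}: mentioned in {mentions}/{total} answers, cited in {citations}/{total}")
```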

New tools are emerging to track this: HubSpot's AEO Grader, Profound's AI visibility leaderboard, and platforms like Goodie AI and SE Ranking's Visible that monitor brand appearances across ChatGPT, Gemini, Perplexity, Claude, and Copilot.

Key metrics replacing traditional KPIs

  • AAIR (AI Answer Inclusion Rate): The percentage of relevant prompts where your brand appears in the AI-generated answer.
  • Entity visibility: How well your brand's entity is recognised and disambiguated across AI systems.
  • Citation frequency: How often your content is cited as a source.
  • Topical authority coverage: The breadth and depth of your topic coverage in vector space. The denser your semantic neighbourhood, the stronger the authority signal.

This is the new scoreboard. And it rewards a fundamentally different approach to content.

What to do about it: a practical framework

Understanding the shift is one thing. Engineering your response is another. Here's what actually works.

1. Structure your content for retrieval, not just reading

AI crawlers like GPTBot and ClaudeBot collect your page content for training data, and retrieval-augmented generation (RAG) systems chunk and embed your content so it can be matched against queries. The quality of those chunks determines your retrievability.

  • Use clear, hierarchical headings that describe the content below them.
  • Write in direct, declarative sentences. Avoid burying answers in preamble.
  • Implement comprehensive schema markup. A data.world study showed GPT-4 accuracy at answering enterprise database queries improved from 16% to 54% when using knowledge graph representations, illustrating how structured data helps AI systems extract meaning (a minimal markup sketch follows this list).
  • Treat each section as a standalone unit of meaning that can be extracted and cited independently.
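
For example, a minimal Article schema block could be generated like this. Every value is a placeholder; real markup should describe your actual page and can be hand-written or emitted by your CMS, and schema.org defines many more useful properties than the ones shown.

```python
# Minimal JSON-LD Article markup, emitted as a <script> tag for the page head.
# Every value here is a placeholder for your own page data.
import json

article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "AI search is here: why your content strategy needs to change now",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "publisher": {"@type": "Organization", "name": "Example Agency"},
    "datePublished": "2026-02-03",
    "about": [{"@type": "Thing", "name": "Generative engine optimisation"}],
}

print(f'<script type="application/ld+json">{json.dumps(article_schema, indent=2)}</script>')
```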

2. Build entity authority, not just keyword rankings

AI systems don't think in keywords. They think in entities and relationships. Entity-based SEO defines your brand's identity through its connections to recognised people, places, concepts, and topics.

  • Develop a robust knowledge graph around your core domain.
  • Link content internally in a way that reinforces semantic relationships, creating a dense cluster in vector space that signals topical authority (a rough way to measure this is sketched after this list).
  • Ensure your brand entity is consistent across platforms: your website, Google Business Profile, Wikipedia, industry directories, and social profiles.
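
One rough proxy for that density is the average pairwise similarity between the embeddings of the pages in a topic cluster. A minimal sketch, again using the illustrative all-MiniLM-L6-v2 model and placeholder page copy:

```python
# Rough proxy for topical authority: how tightly do your cluster pages sit
# together in embedding space? Higher average pairwise similarity = denser cluster.
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

cluster_pages = [  # placeholder copy for three pages in one topic cluster
    "A guide to vector embeddings for search marketers.",
    "How retrieval-augmented generation selects content to cite.",
    "Structuring pages into chunks that AI systems can retrieve.",
]
vectors = model.encode(cluster_pages, normalize_embeddings=True)

# With normalised vectors, the dot product is the cosine similarity.
similarity = vectors @ vectors.T
pairs = similarity[np.triu_indices(len(cluster_pages), k=1)]
print(f"average pairwise similarity: {pairs.mean():.2f}")
```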

3. Produce citable content

Generic summaries don't get cited. Original research does.

  • Publish proprietary data, case studies, surveys, and benchmarks.
  • Include specific statistics, methodologies, and verifiable claims.
  • Make your content easy to extract, verify, and reuse. Not just easy to read.
  • The Authority Flywheel is real: being cited increases authority, which leads to more citations.

4. Optimise your technical foundation for AI crawlers

Your robots.txt now governs access for GPTBot, ClaudeBot, PerplexityBot, and others, not just Googlebot. Decide deliberately which AI crawlers can access your content, and ensure your technical infrastructure supports them.
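
A quick audit can be run with Python's standard library, as in the sketch below. The user-agent tokens are the publicly documented ones for the major AI crawlers; the domain and path are placeholders for your own site.

```python
# Check which AI crawlers your robots.txt currently allows on a given URL.
# Uses only the standard library; the domain and path are placeholders.
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "Googlebot"]

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetches and parses the live robots.txt

for user_agent in AI_CRAWLERS:
    allowed = parser.can_fetch(user_agent, f"{SITE}/blog/ai-search/")
    print(f"{user_agent}: {'allowed' if allowed else 'blocked'}")
```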

  • Audit your crawl directives for AI-specific user agents.
  • Implement structured data comprehensively. It's the bridge between your content and AI understanding.
  • Monitor the emerging llms.txt standard as an additional governance layer.

5. Measure what actually matters

If your reporting dashboard only shows organic sessions and keyword rankings, you're flying blind in the new landscape.

  • Track AI Share of Voice alongside traditional metrics.
  • Monitor citation frequency and AAIR across major AI platforms.
  • Measure AI-driven conversion rates separately. Early studies suggest they're several times higher than traditional organic, though results vary significantly by industry.
  • Treat AI visibility as a leading indicator of future demand, not a vanity metric.

The next frontier: the agentic web

We're already looking beyond AI as an answer engine toward AI as an executive assistant.

The "agentic web" means AI systems that don't just tell you which running shoes are best. They find your size, apply a coupon, and complete the checkout. McKinsey estimates agentic commerce could redirect $3 to 5 trillion in global retail spend by 2030.

When AI agents start executing transactions on behalf of users, being the brand that the agent trusts and recommends becomes the entire game. The foundations you build now (structured data, entity authority, citable content, clean technical architecture) are the same foundations that will determine whether AI agents route revenue toward you or your competitors.

Conclusion

The search landscape has fractured, and it's not going back together. Traditional SEO remains important. It's the foundation. But it's no longer sufficient on its own.

The brands that will win are those that treat visibility as an engineering problem: building the technical architecture, semantic authority, and structured content that AI systems need to find, trust, and cite them. Not just on Google, but across every surface where their customers are discovering answers.

The future of visibility isn't about being found by a crawler. It's about being understood, and trusted, by every model that matters.

Related glossary terms

Entities

Entities in SEO are uniquely identifiable, well-defined concepts that search engines recognise through structured knowledge bases, enabling semantic understanding rather than keyword matching.

Vector Embeddings

Vector embeddings are numerical representations that transform unstructured data into arrays of floating-point numbers in high-dimensional space, where semantic similarity is preserved as geometric proximity.

E-E-A-T

E-E-A-T stands for Experience, Expertise, Authoritativeness, and Trustworthiness: Google's quality assessment framework used by human raters to evaluate content credibility, particularly for YMYL topics.

Keep reading

More articles about generative engine optimisation.

The Ranking Apocalypse: Why Position One Means Nothing in AI Search
Tuesday, March 24, 2026
Traditional SEO rankings have collapsed as predictors of AI visibility, with only 37% of AI Overview citations now coming from top-10 pages.
Jenoff Van Hulle
Entity Optimisation for AI Overviews: Staying Visible in Breaking News
Monday, March 23, 2026
When AI Overviews do appear for news queries, the publications that get cited aren't necessarily the top-ranking ones in traditional search.
Jenoff Van Hulle
Visibility Engineering
Wednesday, February 25, 2026
The search landscape fractured. AI Overviews, ChatGPT, Perplexity, and Gemini now sit between your content and your audience. Most of the industry responded by producing more content. That was the wrong move. The real shift isn't about content volume. It's about technical architecture.
Jenoff Van Hulle