17th December 2025
Whether you’re already familiar with the concept of GEO (generative engine optimisation) or still investigating how to implement its best practices, one critical part of the exercise is establishing your starting position. What does your brand’s AI visibility look like today?
Essentially, you can’t optimise what you haven’t measured! In this article, we’ll share our advice on how to measure your AI performance, prior to implementing any GEO guidance – and on an ongoing basis.
We’ve compiled a list of tactics to help you track how often your brand is surfaced, how it’s positioned against competitors, and whether the sentiment attached to those mentions is helping or harming your reputation.
The following framework of manual tasks is designed to slot neatly into your existing analytics stack and get your brand up to speed in the realm of AI performance tracking – and benchmarking – without necessarily investing in an additional tool.
Of course, it’s also worth saying that the pace of AI adoption and platform evolution is rapid. This is just a starter kit. If you want more advice after reading our how-to guide, don’t hesitate to get in touch.
Begin by manually creating a core set of queries (as many as you feel is appropriate for your brand). Think carefully about how you phrase these. The more you mimic the language and sentiment of your real buyers and prospects, the better. Take your time to align these prompts with your business goals and key moments of the purchasing journey.
Top-of-funnel discovery questions:
“What are the leading XYZ platforms that could help my tech start-up?”
Mid-funnel comparisons:
“Compare XYZ and ABC’s features. Give me a clear breakdown, explaining which features me and my team will find most valuable. I run a busy, global manufacturing business.”
Late-stage proof-point requests:
“What would the ROI of implementing XYZ look like? When would I see these returns? What risks are there for my independent SaaS company?”
Run your prompts in the leading LLMs and record whether your brand is cited, how prominently it appears within the response, and whether the answer links back to one of your owned assets. Keep each prompt identical across every LLM; this will help set your benchmarks. Re-run exactly the same tests at regular intervals and monitor the results to establish a trend line: is your brand’s presence rising or slipping?
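To keep those manual observations consistent between test rounds, it helps to log each result in a structured file from day one. The sketch below is one possible approach (the field names and CSV layout are our own assumptions, not a standard) for appending each manually observed answer to a benchmark log you can trend over time.

```python
import csv
from datetime import date

# Hypothetical column names -- adapt these to match your own tracking sheet.
FIELDS = ["date", "model", "prompt", "brand_cited", "position", "linked_to_owned_asset"]

def log_result(path, model, prompt, cited, position, linked):
    """Append one manually observed LLM answer to a CSV benchmark log."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # first write to this file: add a header row
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "model": model,
            "prompt": prompt,
            "brand_cited": cited,
            # position: e.g. 1 = first brand named in the answer, 0 = not named
            "position": position,
            "linked_to_owned_asset": linked,
        })
```

Re-running the same query set monthly and appending to the same file gives you a dated record you can chart to see whether your brand’s presence is rising or slipping.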
The true value of raw citations is better understood in context. So, with the same query set, keep a tally of how often each major competitor is mentioned. From this, you can calculate your ‘Share of Model’ (SOM): the percentage of total answers (across all models and prompts) that include your brand name vs. competitors. As this metric mirrors ‘Share of Voice’ (SOV) in paid media, it’s easy to slot your findings into a typical reporting framework. Over time, your SOM will reveal whether content updates, PR wins, or schema tweaks are translating into more AI visibility – or whether rival brands are pulling ahead.
Earning citations alone isn’t enough – the tone of your citations matters. For every mention you log, note whether the assistant’s language is positive, neutral, or negative.
Armed with those insights, marketing and product teams can invest their efforts in tackling any issues and creating future collateral that builds on previously successful work.
Open an LLM and ask:
“What do you know about [Your Brand]?”
or
“Compare [Your Brand] and [Top Competitor].”
The adjectives and sources included in the response will give you an instant snapshot of current brand sentiment.
Generative answers are beginning to drive measurable traffic back to source domains – particularly when the AI assistant supplies a citation link. Watch for sudden lifts in direct visits, increases in branded search volume, or fresh referrers such as chat.openai.com, claude.ai, and bing.com/chat.
Even if the numbers are modest today, their growth rate is an early indicator of how quickly buyer behaviour is tilting towards AI solutions. Where possible, tag those sessions so you can compare conversion rates against standard organic or paid-search traffic.
Finally, embed prompt testing into your optimisation workflow, as regularly as you might run scheduled keyword ranking checks, using open-ended diagnostic prompts such as “What do you know about [Your Brand]?”
These open-ended questions mirror real buyer research behaviour and reveal exactly what the models have absorbed about your company. Think of each test that’s run effectively as a mini focus-group report. Did the assistant pull in your newest white paper? Did it use an outdated pricing figure? The answers highlight gaps in your public-facing content and signal where schema, press outreach, or on-page updates can have the biggest impact on your future SOM.
These methods to monitor and optimise SOM may sound fairly time-consuming, but they’re a valuable first step in establishing benchmarks and familiarising yourself with the concept. Once you’ve done this legwork, you’ll have a clearer understanding of what kind of paid-for automated solution you might want or need going forwards.
Fortunately, as demand for this kind of measurement grows, many tools and platforms are emerging to help automate the process. A few on our radar: ChatBeat, PeecAI, RankScale, and Knowatoa.
Imaginative and memorable campaigns will significantly amplify your brand and strengthen its market positioning. And if the bar is set low on platforms like LinkedIn, your creative ads are sure to have stand-out appeal.
In short, investing boldly at the top of the funnel is now a strategic imperative. Brands that prioritise creativity won’t just drive immediate awareness, they create powerful and lasting impacts on GEO visibility, effectively seeding their brand deeply into the AI-driven discovery ecosystem.
AI visibility refers to how often and how prominently your brand is cited by generative AI platforms such as ChatGPT, Gemini, Perplexity, Claude, or Bing Chat when users ask questions relevant to your industry. Essentially, it’s about how ‘discoverable’ your brand is in the AI-driven research phase of your prospect’s buyer journey. With a recent Forrester survey revealing that approximately 90% of B2B buyers are now using AI-powered tools to aid their purchasing processes, visibility in these environments is critical to success.
Share of Model (SOM) is a performance metric that will help you benchmark how your brand is doing against its competitors within the AI landscape. It’s the proportion of mentions you earn within AI responses versus the total number of mentions of all brands within your industry. When measuring this manually, it’s important that you run a defined set of prompts – that you can regularly revisit to track progress. It’s also valuable to conduct this research with several AI models as they all work slightly differently.
The basic SOM formula: To get your Share of Model as a percentage, you need to divide your brand’s total citations by the total number of citations for all your competitors and your brand. Then multiply this by 100 to convert it into a percentage. You should do this exercise for ChatGPT, Gemini, Perplexity, Claude, Bing Chat, etc. and then look at the average percentage too.
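The formula above can be sketched in a few lines of code. The brand names and citation counts below are illustrative placeholders only, not real data.

```python
def share_of_model(brand_citations, competitor_citations):
    """SOM (%) = brand citations / (brand + all competitor citations) x 100."""
    total = brand_citations + sum(competitor_citations.values())
    if total == 0:
        return 0.0
    return round(brand_citations / total * 100, 1)

# Illustrative tallies: your brand earned 12 citations in one model's answers
# versus three (hypothetical) competitors, and 9 in another model's answers.
per_model = {
    "ChatGPT": share_of_model(12, {"ABC": 18, "DEF": 6, "GHI": 4}),  # 12/40 -> 30.0%
    "Gemini":  share_of_model(9,  {"ABC": 15, "DEF": 3, "GHI": 3}),  # 9/30  -> 30.0%
}

# As suggested above, also look at the average across models.
average_som = sum(per_model.values()) / len(per_model)
```

Repeat the tally per model (ChatGPT, Gemini, Perplexity, Claude, Bing Chat, etc.), then average the percentages for an overall benchmark.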
Treat AI visibility checks like keyword-ranking audits. For most B2B brands, monthly testing is sufficient to spot trends and benchmark progress. If you’re actively implementing GEO strategies or launching new campaigns, you could move to fortnightly checks for more granular insights.
Manual testing is a valuable (free!) starting point. Of course, there are automation tools out there to help you manage this workflow and give you these insights. But by doing the legwork at the beginning, you’ll not only build a greater understanding of your AI visibility first-hand, you’ll also be building a business case to invest in a tool, long-term.
Much like everything else AI-related, the landscape is constantly changing. AI performance tools are launching seemingly every week – so do your research to ensure you’re picking the right tool for your business needs.
As we’ve covered in this article, there’s plenty you can do without a dedicated AI performance tool. We recommend you start with manual prompt testing.
In our minds, not exactly, no. It’s simply broadening the scope of brand discoverability through user search. Old habits die hard: people are still ‘Googling’. It’s just unlikely to be their only method of research within the complex B2B buyer journey. What we can say for sure is that generative engine optimisation (GEO) needs to be integrated into your business and marketing strategy – ASAP.
As you prepare to future-proof your brand, remember that visibility in the age of generative AI isn’t about ranking highly; it’s about being cited – consistently. And to be cited, your brand’s equity needs to be high, and your content needs to be valuable and original.
If you need support in developing and integrating a GEO strategy for your brand, or would like ongoing help with creating generative engine optimised content, get in touch. We’d love to work with you.
Together, we’ll help you build brand authority in the eyes of AI – and your audience. Through helpful, well-structured, and insightful content, technically sound website updates, and a far-reaching distribution strategy, we’ll establish your share of model (SOM) and enable brand growth.
Get in touch