Being prepared for Agentic Search
AI Agents are people too, and in the future they're going to want to use your services.
Recently I’ve been helping draft grant applications for some fairly major EU science projects, many of them clustered around AI in scientific research.
It goes without saying (since it’s all anyone, myself included, has talked about recently), but AI is definitely the new hot thing one is required to sprinkle into any application in order to get funding.
AI is the new Blockchain.
A hotly contested grant application I’m involved with at the moment brings together a number of influential research infrastructures to come up with a mechanism that allows researchers to find resources, across RIs, without necessarily knowing what they’re looking for: building custom catalogues and workflows across the various infrastructures, allowing information and data to be exchanged smoothly, with the selection of services guided by AI based on what the researcher is trying to do.
All fine enough. But digging a bit deeper, what are we actually talking about here? We’re really talking about agentic resource discovery – the ability for an independent agent to act on the internet, understand services and resources well enough to interact with them, and complete tasks using those services and resources without human intervention. This requires the ability to find the resource, and also to know how to access it… costs, restrictions, etc.
What this means for Science is that I fully expect that, in the not too distant future, our access and data management platform ARIA will start seeing access requests and proposals coming from AI agents, not just flesh and blood researchers. We will need to provide an interface these agents can correctly interact with; if we don’t, there’s every chance they won’t find us.
The same is true more widely for anyone offering services on the internet. We used to have to worry about SEO and the whims of the PageRank algorithm (no school like the old school), but increasingly I think we’re also going to have to consider how easily our services are found and selected by silicon-based customers as well as biological ones. I’m sure this isn’t exactly an earth-shattering observation, but it is worth stating explicitly as a consideration… AIs are people too. Well, sort of.
So perhaps what we are talking about is “agent-ready SEO”: making your site understandable not just to search engines and humans, but to autonomous agents trying to decide whether your service is relevant.
This doesn’t necessarily mean writing something as involved as an MCP server. MCP servers may make sense where agents need to actually operate a system, but most businesses don’t need to start there. Rather, it’s about making sure that your site and services are reachable by AI agents, and your offer clearly understandable: what you offer, the price point, and how to reach you. All good things you should be doing anyway, but from a practical point of view it does mean that fancy client-side rendering is probably not going to get you anywhere.
So, in practical terms:
Don’t bury the lede
Be very clear about the product or service you’re offering: what it can do, what it can’t do, the pricing, and how to buy, use, or contact the seller. Be open and honest about what you offer, and think about what information a buyer would need in order to evaluate your product or service fairly: what kind of product or service is it? Who is it for? What problem are you solving? This is generally a good thing to do anyway, but I know websites tend to bury this stuff behind slick animations and click-throughs, or try to spin things out with marketing hype.
Don’t.
Firstly because it’s really annoying, but also because agents have even less tolerance for this than I do.
Ditch fancy javascript rendering
Or at least, you need to provide a way for an agent to understand your page without it. AI agents are not web browsers, and they’re not going to render your fancy client-side JavaScript. If you can’t find the information needed to use your service with a cURL, then neither can an agent.
I for one miss the days of browsing the internet with lynx; yes, I am that old.
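To make the cURL test concrete, here’s a rough sketch (using only Python’s standard library) of what a non-rendering client actually extracts from a page. The two example pages and their content are invented for illustration: a client-side-rendered page yields essentially nothing, while a server-rendered page exposes everything in the markup.

```python
from html.parser import HTMLParser


class VisibleText(HTMLParser):
    """Collects the text a non-rendering client (like an AI agent
    fetching raw HTML) would actually see, skipping script/style."""

    def __init__(self):
        super().__init__()
        self.in_skip = False  # inside <script> or <style>
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.in_skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.in_skip = False

    def handle_data(self, data):
        if not self.in_skip and data.strip():
            self.chunks.append(data.strip())


def visible_text(html: str) -> str:
    parser = VisibleText()
    parser.feed(html)
    return " ".join(parser.chunks)


# A client-side-rendered page: all the content lives in JavaScript.
spa = '<html><body><div id="app"></div><script>app.render("Buy our widgets!")</script></body></html>'
# A server-rendered page: the same information is in the markup itself.
ssr = "<html><body><h1>Widgets Ltd</h1><p>Buy our widgets! From £10.</p></body></html>"

print(repr(visible_text(spa)))  # → '' (the agent sees nothing)
print(repr(visible_text(ssr)))  # → 'Widgets Ltd Buy our widgets! From £10.'
```

If the first result is empty, an agent fetching your page has nothing to work with – which is exactly the failure mode this section is warning about.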
Semantic markup and emerging standards
First off, JSON-LD has been around for a while, so use this structured data to describe your products, services, company, and so on. Much of what you need to describe likely already has a recognised schema, and you probably don’t need to get fancier than Organization, Service and Person.
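As a sketch of what that looks like, here’s a minimal JSON-LD description built from those schema.org types. The business name, URL, person, service description, and price are all illustrative placeholders, not a prescribed template; the point is simply that the offer, the people, and the price are machine-readable.

```python
import json

# A minimal JSON-LD description of a consulting business, using only
# well-established schema.org types (Organization, Service, Person, Offer).
# All names, URLs, and prices below are hypothetical placeholders.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Practical Alchemy",
    "url": "https://example.com",
    "founder": {"@type": "Person", "name": "Jane Doe"},
    "makesOffer": [
        {
            "@type": "Offer",
            "itemOffered": {
                "@type": "Service",
                "name": "AI adoption strategy",
                "description": "Helping teams decide where AI is useful, risky, or distracting.",
            },
            "priceSpecification": {
                "@type": "PriceSpecification",
                "price": "1200",
                "priceCurrency": "EUR",
            },
        }
    ],
}

# Embed the output in your page head inside:
#   <script type="application/ld+json"> ... </script>
print(json.dumps(org, indent=2))
```

Because it sits in the raw HTML rather than being rendered client-side, this is exactly the kind of description an agent can pick up with a single fetch.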
What’s new is an emerging standard explicitly for agents.
llms.txt, suggested by Jeremy Howard, is effectively a robots.txt for AI agents, and lets you provide clear context about your site and services. It’s unstructured markdown, but if it’s clear to you it should be clear to the AI as well.
Explicitly state the information an agent (or a human, for that matter) will need to make a decision about your product or service: who you are, the services offered (with clear descriptions), who they are for, price points and pricing signals, availability, and how to buy.
Here’s what one might look like:
# Practical Alchemy
> Strategy, management, and software engineering consulting for organisations that need clear thinking around AI, platforms, and digital services.
## Services
- [AI adoption strategy](/services/ai-strategy): Helping teams decide where AI is useful, risky, or distracting.
- [Software architecture review](/services/architecture-review): Independent review of technical direction, maintainability, and delivery risk.
- [Prototype delivery](/services/prototyping): Rapid development of working prototypes and internal tools.
## Best fit clients
Small technical organisations, research infrastructure projects, service providers, and teams that need senior technical judgment without hiring full time.
## Contact
Email: hello@example.com
Booking: https://example.com/book
In the end…
To a large extent, a lot of this AI stuff smells like a bubble, and I’m sure much of it will fall by the side of the road and be forgotten as a fad. However, I don’t think it’s betting against the trend to say that computing is going to get smarter and smarter, and capable of doing increasingly complex tasks autonomously.
AI, right now, is already very capable and alarmingly close to the Do What I Mean interface sci-fi has promised me since I was a kid (although I think they’re missing a trick not using Majel Barrett’s voice for the audio interface). I fully imagine that before very long at all it’ll be routine for us to task our agentic PAs to go off and research the flight to Malaga, or which vacuum cleaner is best, and then make the purchase on our behalf.
Probably worth getting ready for this world, and given it just involves doing things we should all be doing anyway – describing our offerings with clarity and honesty – we may as well stop complaining and do it.