

The AI features getting the most press in cultural tech right now are probably the ones that will matter least. I build this stuff for a living, and I think we're investing in the wrong layer.
That's not a comfortable thing to say when you're a CTO whose job is partly to evaluate and ship AI features. But after years of building digital experiences for some of the world's leading cultural institutions, first at Art Processors and now at Pladia, I've developed a working theory about where AI genuinely delivers in this sector, and where it's theatre.
The theory is simple: the AI that transforms cultural venues will be the AI nobody notices.
Visitor experiences in cultural venues are fundamentally about meaning, not information.
When a visitor stands in front of a Rothko, a dinosaur skeleton, or a living coral reef, they don't need more data. They're not looking for a personalised narrative generated by a large language model. They're looking to feel something. The digital layer should deepen that attention, not compete with it.
This is where most visible AI features go wrong. The chatbot that asks how a painting makes you feel. The generative audio that writes a bespoke script for every visitor. The digital curator that synthesises collection themes on demand. These are optimised for information delivery at scale, which is transformative in e-commerce or healthcare. But a gallery isn't a shopping cart. Pulling a visitor's attention toward a bot conversation, however sophisticated, isn't an enhancement. It's an interruption. The best technology removes friction so the art can do the work. That's a design principle, not a limitation.
To invest wisely, you need to separate AI into two distinct categories.
Visible AI is the spectacle: impressive in a demo room, often intrusive in a gallery.
Invisible AI is the utility. The algorithm that automatically generates alt-text across an entire collection so a visually impaired visitor can navigate without waiting months for manual descriptions. The system that learns which tour paths lead to longer dwell times and quietly adjusts recommendations. The translation layer that gives a Mandarin-speaking visitor the same depth of written or audio experience as a local, without the venue managing separate productions for every language.
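The dwell-time idea above can be sketched in a few lines. This is a hypothetical illustration, not Pladia's implementation: the path names, dwell figures, and the neutral prior for unseen paths are all assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical sketch: learn which tour paths hold attention longest
# and quietly bias recommendations toward them. All names and numbers
# here are illustrative, not from any real venue.

class PathRecommender:
    def __init__(self):
        self.dwell_log = defaultdict(list)  # path name -> dwell times (minutes)

    def record_visit(self, path, dwell_minutes):
        self.dwell_log[path].append(dwell_minutes)

    def recommend(self, candidates):
        # Prefer the candidate path with the highest average dwell time.
        # Paths with no data yet get a neutral prior (30 min) so they
        # are still occasionally surfaced rather than starved of visits.
        def avg_dwell(path):
            times = self.dwell_log.get(path)
            return sum(times) / len(times) if times else 30.0
        return max(candidates, key=avg_dwell)

rec = PathRecommender()
rec.record_visit("highlights", 25)
rec.record_visit("highlights", 28)
rec.record_visit("modernism", 48)
rec.record_visit("modernism", 52)
best = rec.recommend(["highlights", "modernism"])
```

The visitor never sees any of this; the recommendation simply gets quietly better over time, which is the whole point of invisible AI.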

At Pladia, we've built image recognition that identifies an object in under a second and then gets out of the way. We're now extending the same principle to a context-aware translation layer, so the depth of an experience no longer depends on the language a visitor happens to speak.
One category is designed to be noticed. The other is designed to disappear so that your visitor's attention stays exactly where it should be.
The real opportunity for AI lies in the "unfunded mandate" problem: the essential tasks organisations are legally or practically required to do, but realistically cannot resource at scale.
Consider accessibility. Most institutions know they should have descriptive alt-text for every image in their collection. Few actually have it. Writing descriptions for 40,000 objects requires thousands of hours of skilled human labour. AI can do a credible job in a fraction of the time, freeing human curators to focus on the objects where nuance actually matters. That's not a glamorous feature. It doesn't demo like a talking robot. But it's the difference between a collection being accessible or inaccessible to a significant portion of your visitors.
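The triage described above can be made concrete. The sketch below is a hypothetical illustration of the workflow, not a real product: `describe_image` is a stand-in for whatever captioning model a venue chooses, and the 0.8 review threshold is an assumption. The point is the shape of the pipeline: generate in bulk, publish the confident results, and route the uncertain ones to curators.

```python
# Hypothetical sketch of AI-assisted alt-text at collection scale.
# Low-confidence descriptions go to a human review queue, so skilled
# curator time is spent only where nuance actually matters.

def describe_image(object_record):
    # Stand-in for a real captioning model; returns (text, confidence).
    title = object_record.get("title", "untitled object")
    return f"Photograph of {title}.", object_record.get("model_confidence", 0.5)

def generate_alt_text(collection, review_threshold=0.8):
    published, review_queue = [], []
    for record in collection:
        text, confidence = describe_image(record)
        if confidence >= review_threshold:
            published.append((record["id"], text))
        else:
            review_queue.append((record["id"], text))  # curator checks these
    return published, review_queue

collection = [
    {"id": 1, "title": "bronze astrolabe", "model_confidence": 0.93},
    {"id": 2, "title": "untitled (Rothko)", "model_confidence": 0.41},
]
published, review = generate_alt_text(collection)
```

Even with a conservative threshold, a pipeline like this turns a 40,000-object backlog into a much smaller, prioritised review queue.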
The same logic applies to multilingual content. A venue might have extraordinary audio guides recorded in English, written by curators who spent months getting the tone exactly right. But that depth of experience is only available to English speakers. Every other visitor gets a shorter, thinner version, or nothing at all. That's not a feature gap. That's an equity gap.
Operational tagging, behavioural pattern recognition, and anomaly detection follow the same pattern. None of these make the front page. All of them have measurable returns and direct visitor benefit. Most importantly, they make the experience better without demanding a second of the visitor's attention.
This is where I'd put my money. Not because it's exciting, but because it works.
Here's the uncomfortable truth about "AI-powered experience" products: they are only as good as the data and content underneath them.
AI is a multiplier. If your objects aren't tagged, your content isn't structured, and your accessibility obligations are half-met, an AI layer won't fix those issues. It will amplify them. A retrieval system built on poorly organised content retrieves poorly. A personalisation engine with no meaningful behavioural data personalises nothing.
Before buying a visible AI product, ask what specific problem it solves. Then ask whether that problem is a visitor experience issue, or a content and data problem in disguise. In my experience, it's usually the latter. No chatbot is a substitute for getting your foundations right.
The venues that will use AI most effectively in 2027 are building data infrastructure now. They are doing the unglamorous work of structuring collections and instrumenting visitor flows. That work is invisible to the public, but it makes everything else possible.
In three years, the cultural venues most transformed by AI will be the ones where visitors never noticed it.
The venues that made AI the centrepiece of their visitor experience will likely be quietly retiring those features. Meanwhile, the venues that used AI to automate the mundane and clear the path for accessibility will be running better experiences than they ever could have afforded to build manually.
The revolution is coming. You just won't see it.