AI can be complacent. You can't be.
AI agents can automate your work in minutes, but they're inherently complacent. The competitive edge isn't using AI. It's refusing to accept "good enough" when you can drive better outcomes. Here's why complacency kills and what to do about it.
John Prior, Consulting Partner, 20 January 2026

Today finds me snatching at brief whispers of time to write a blogpost, momentary light breezes between the heavy gusts of AI agent output review and prompt tuning. And I'm writing it about the one thing AI won't do for me, yet.
Maybe I should lead with the fact that I'm not using AI to write this post. Writing it is definitely not the one thing AI won't do for me, but letting it would be the route of complacence: prioritising ease of delivery over creating anything of value. We'll return to that theme.
My main task today was completing production of a flotilla of autonomous business research agents. The concept is simple: automate the pre-engagement organisation, technology and individual background research that gets done before a B2B company engages with a new client. It's a productised service that runs in the increasingly excellent Opal, the agentic AI platform in the Optimizely suite, so it's designed to operate within the prospect capture, intelligence and experience flows that run through a modern DXP platform. But it also extends beyond that, delivering actionable customer alignment analysis, properly hyper-personalised outreach and response strategies, and the content needed to execute them.
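For a sense of the shape, here's a minimal sketch of the pattern. Every name in it is hypothetical: it illustrates the flotilla's structure, not Opal's actual API, and the stubbed run step stands in for real agent execution.

```python
# A sketch of the flotilla's shape only. All names are hypothetical;
# this is not Opal's API, just the pattern the workflow follows.
from dataclasses import dataclass, field


@dataclass
class ResearchAgent:
    name: str
    prompt: str                                     # the agent's instructions
    tools: list[str] = field(default_factory=list)  # tool-calling definitions

    def run(self, subject: str) -> str:
        # Stand-in for the platform's agent execution; returns findings text.
        return f"[{self.name} findings for {subject}]"


# One agent per pre-engagement research strand. The real workflow runs
# these in parallel inside the platform; sequential here for clarity.
flotilla = [
    ResearchAgent("organisation",
                  "Profile the prospect's structure, strategy and news.",
                  tools=["web_search"]),
    ResearchAgent("technology",
                  "Map the prospect's current platforms and integrations.",
                  tools=["web_search"]),
    ResearchAgent("individuals",
                  "Build background briefs on the named contacts.",
                  tools=["web_search"]),
]


def run_pre_engagement_research(prospect: str) -> dict:
    findings = {agent.name: agent.run(prospect) for agent in flotilla}
    # A final synthesis step turns raw findings into the actionable outputs:
    # customer alignment analysis, outreach strategy and supporting content.
    return {"prospect": prospect, "findings": findings}
```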
The workflow's in a pretty good state. It can do in 10 minutes what would take days for me to do manually (if I ever had that much time available), and it's delivering to a much better standard, both in breadth and depth. Those agents can operate at scale whereas I can, well, not. I'll continue to evolve the agent prompts and context, but ultimately that was one part of my job that AI will now do instead.
So is producing the agent architecture and prompts my job now instead? Well, about that: Claude actually wrote most of the Opal agent architecture and prompts for the workflow. I've built out agent architecture patterns, Opal agent characteristics, tool-calling definitions and key lessons learnt as context for Claude flows that produce far more effective agents than anything I would write manually. Sure, there's still scope-setting and iteration, but as the supporting context builds out I'm steadily reducing the attention needed for that.
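In spirit, the generation step just assembles that accumulated context and hands it to Claude along with the scope for the new workflow. A rough sketch, with illustrative file names rather than the actual project artefacts:

```python
# A hypothetical sketch of the context assembly. The file names and prompt
# wording are illustrative, not the actual Claude flow or project files.
from pathlib import Path

CONTEXT_FILES = [
    "agent_architecture_patterns.md",  # reusable multi-agent layouts
    "opal_agent_characteristics.md",   # what an Opal agent definition needs
    "tool_calling_definitions.md",     # the tools agents may call, with schemas
    "lessons_learnt.md",               # review notes from completed projects
]


def build_generation_prompt(scope: str, context_dir: Path) -> str:
    """Assemble the accumulated context plus this workflow's scope for Claude."""
    context = "\n\n".join(
        (context_dir / name).read_text() for name in CONTEXT_FILES
    )
    return (
        f"{context}\n\n"
        "Using the patterns and lessons above, draft the Opal agent "
        "architecture, prompts and tool calls for this scope:\n"
        f"{scope}"
    )
```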
And let's be honest: I'm using Claude to build and improve the context it needs through analysis of each project as it completes. So even in building the agents, a growing proportion of my time is spent simply managing the Opal user interface rather than designing the agents. And I mustn't get complacent: I must automate those inefficient manual steps out of the process. The new Claude CoWork feature is probably the quickest way until we build a proper pipeline.
GenAI in its current form clearly isn't what science fiction historically meant by AI; we label that AGI (Artificial General Intelligence) now. We've come a long way from the early dismissals of "stochastic parrots" and "spicy autocomplete": AI's ability to correlate, infer, combine and generate more than the sum of its parts has progressed astoundingly.
Yet we've all still got one of those "AI can't" lists. AI can't be genuinely original; it has no vision or real creativity; it can't build something it doesn't have context for; it can't stay on track through complex tasks; it confidently hallucinates when its context is incomplete; it can't resist contrastive framing and other repetitive writing patterns; and, perhaps the biggest tell from a creative perspective, it just can't be funny.
The problem for me is that fewer of those flaws would historically have been problematic for my work than I'd like to think. Absorbing information, understanding needs, drawing on domain expertise and previous experience, and applying best-practice approaches: those have been some of the key characteristics I've tried to bring to my work in digital over the last couple of decades, and they are all increasingly GenAI strengths. And whilst I try not to hallucinate too much while I'm at work (or to be too unfunny), being confidently incorrect is a pretty common human flaw, and I'm definitely not exempt from it.
Still, there's that word "complacent" again. Let's pick it up.
Complacency is a killer. Complacent brands lose their audiences and die. Complacent products and platforms don't deliver to increasing expectations and get abandoned. Complacent people fall behind in their skillsets, and their career trajectories fall away.
Complacent AI? Well, actually, current AI models are inherently complacent. To be clear, that's not the companies making AI: they're setting a terrifying pace of progress precisely because they are not complacent. AI models, on the other hand, are driven by attention rationing and reward functions that could loosely, and only partly unfairly, be summarised as "was that good enough? will that do? can I avoid trying harder?"
Look back at that list of flaws: they're all a form of structural complacence. Did we create anything new or simply rehash old ideas? Complacence. Did we set a vision or simply follow one? Complacence. Did we miss out part of the task? Complacence. Did we use a whole catechism of clichés? Complacence. Did we gloss over something we didn't understand and just make something up? Complacence.
And that's it, the irreducible nub, the current human advantage: the litany against complacence.
I will not follow the obvious paths simply because they have been walked before, ask easy questions because I know the answers, or accept easy answers because they are familiar. I will find new ideas, set new objectives, find new paths and new destinations. I will test what is working and seek out ways to make it work better. I will not stand on pride in what I have done well before when I can find better ways to do it now. And when I have relearnt what I can best do now, I will not abdicate my understanding of what is being done at my behest, because that is to resign my ability to make it better next time.
And you know it's not so little, that nub. It's how we make things better, a little bit better, every time we do something. I can work with that.
