"We were hoping for some romance. All we found was more despair. We must talk about our problems. We are in a state of flux."

With apologies to Bloc Party for the headline, but…

…Like most in the creative industries, I’m both excited and deeply concerned by the state of flux we are in.

🟢 The astounding potential of what AI ‘could’ be.

🔴 The appalling standards of current AI content.

🟢 The awe-inspiring creativity our industry can produce

🔴 Short-termism as a management team sport

🔴 Endless ‘reorganisations’ and resultant redundancies

But in the last few days, with a bit more time on my hands than normal, three excellent bits of thinking have really hit home. I urge you to check them out (full links in comments).

First, Ivan Fernandes has produced a succinct, spot-on analysis, built on great research, into who the real winners are from WPP's actions of the last few years. Spoiler alert: it's neither the staff nor the clients…

Then the highly prolific Joe Burns. In multiple posts, he rails against the sloppiness and lack of foresight in our rush to embed AI tools in workflows. Yes, you might get home earlier on Friday, but you're training your clients and the rest of the agency on fool's gold.

Finally, and with a real flourish, Karen Martin with the 'Women Who Walked Around Soho': a call to arms to embrace the timeless magic of talking and collaborating, to hire more people, not fewer, and to embrace a lack of conformity. Because the potential of scary new ideas is the real wow of the industry.

Remember

The real beneficiaries are in another room

We are in danger of shooting ourselves in the face

The resolution is within our remit, but we need to keep talking

AI and a lesson in Kafkaesque Bureaucracy.

I’ve just read two amazing things, weirdly linked by themes from Franz Kafka’s ‘The Trial’. The first is about Cursor, an AI coding tool that went rogue; the second is a brilliant HBR article, "How Gen AI Is Transforming Market Research", co-authored by Olivier Toubia.

First, the spectacular face-plant of Cursor: an AI-powered coding tool whose support bot confidently fabricated a non-existent policy about device login limits. Without human oversight, this digital 'assistant' convinced users they were experiencing an intentional restriction rather than a simple bug, precipitating a mass exodus of paying customers.

Meanwhile, the HBR article highlights some limitations within the really rather exciting world of AI-created consumer panels.

The piece on AI panels points out that when presented with emotive subjects, they can demonstrate peculiar behavioral anomalies.

Like manifesting responses that defy logical consistency, such as exhibiting minimal price sensitivity when confronted with contextually absurd pricing structures.

The common thread?

We're unwittingly creating bureaucracies operating on their own inscrutable logic.

So, so relevant to so many companies' approach to AI integration.

As in ‘Does this work? Yes. How does it work? Can’t say. Will it work tomorrow? Can’t say.’

Just as traditional bureaucracies rigidly adhere to processes that make perfect internal sense while baffling outsiders, AI platforms operate on mathematical principles, producing outputs that seem coherent until they spectacularly aren't.

The deeper issue emerges when these digital bureaucrats gain autonomous control.

In the Cursor debacle, removing humans from the support loop allowed a hallucinated policy to become de facto reality.

In market research, eliminating human oversight can lead to synthetic consumers who behave like a character in Franz Kafka’s The Trial:

“You don’t need to accept everything as true, you only have to accept it as necessary.”
— Franz Kafka, The Trial

This pattern reveals something fundamental about our relationship with AI: we aren't simply deploying tools; we're installing bureaucratic structures that create their own reality.

Traditional bureaucracies might eventually acknowledge mistakes, though, as the Post Office Horizon scandal showed, they rarely go full mea culpa. AI, on the other hand, only knows it's right.

A fully-functioning bureaucracy requires oversight, accountability and appeal mechanisms. Similarly, effective AI implementation demands human reality-checking and clear intervention processes.

The irony is exquisite: in our rush to eliminate human inefficiency, we've created digital bureaucracies replicating the worst aspects of their human counterparts—rigidity, opacity, and occasional absurdist logic—without the capacity for self-correction.

Remember: the best AI implementations are like the best jokes—they require perfect timing and human judgment about when they're appropriate.

Literacy, AI, and the Decline in Productivity in the UK

Alvin Toffler warned that 21st-century illiteracy is defined not by an inability to read, but to adapt. As the UK grapples with AI adoption, his words ring alarmingly true.

Tariffs and taxes are fleeting events. There are much more profound challenges threatening the UK’s long-term competitive edge (and it's not feckin’ remote working!).

It’s the intricate relationship between declining literacy levels and sluggish AI adoption.

I write quotes and thoughts in my notebooks; this one inspired this essay from a few months back.

The Uncomfortable Truth About UK’s AI Readiness

The data tells a sobering story:

18% of adults in England are functionally illiterate

Only 39% of UK businesses have actively implemented AI technologies

Britain’s productivity lags 18% below the G7 average

76% of UK professionals are excited about AI, but only 44% receive organizational support

90% of UK primary school children experienced negative literacy impacts during COVID-19, with improvement since still stubbornly slow

Conventional wisdom treats literacy challenges and tech adoption as separate issues. Our recent work suggests they’re two sides of the same coin: a cognitive-literacy crisis undermining the country's long-term productivity.

The Cognitive Infrastructure of Innovation

Literacy is far more than reading and writing — it’s the cognitive infrastructure that enables tech adaptation.

The BrainWare Learning Company defines cognitive literacy as the “mental toolkit” of attention, working memory, and processing speed required for learning new systems.

Research shows 62% of UK workers score below OECD cognitive flexibility benchmarks, with 3x higher AI implementation failure rates in low-literacy sectors like construction and retail compared to tech.

Take it from a dyslexic with a slight stutter and a South London accent: prompting AI (whether by voice or text) is not as straightforward as the makers of these tools suggest. The cognitive demands of effective AI usage require sophisticated literacy skills that many in our workforce currently lack.

The Dangerous Feedback Loop

More worrying is the feedback loop emerging between literacy gaps and AI dependence:

Workers with literacy gaps show 73% higher reliance on AI for basic tasks

This “cognitive offloading” accelerates skill atrophy (22% decline in critical thinking scores over 6 months)

Younger workers (18–25) are especially vulnerable, with 89% using AI for writing/analysis versus 52% of workers 45+

This creates a productivity doom cycle:

Underdeveloped literacy >> Over-reliance on AI >>

Further erosion of cognitive skills >> Ineffective AI Implementation

The JL4D Institute identifies a critical threshold: workers need Level 2 literacy (GCSE English equivalent) to effectively collaborate with AI systems. Yet 38% of UK frontline workers fall below this standard.

Breaking the Cycle: Evidence-Based Interventions

The path forward requires recognizing AI adoption as a literacy development challenge, not merely an IT rollout. Companies investing in cognitive literacy programs see 22% higher AI success rates versus their peers.

Strategic Imperative for Business Leaders

The UK’s AI adoption gap with the US isn’t primarily technological — it’s cognitive. Every stage of AI implementation is impacted by literacy challenges.

For business leaders, this means:

Invest in workforce AI training programs, beginning with the C-suite

Create structured, continuous AI literacy updates for everyone

Recognize that AI literacy is a growth opportunity, not a cost-center

Measure cognitive flexibility alongside technical metrics

Partner with educational institutions to align curricula with emerging AI needs

Helen Milner of Good Things Foundation notes:

“AI doesn’t replace literacy — it demands new literacy dimensions. Our 8.5 million digitally excluded adults aren’t just missing opportunities; they’re becoming cognitive debtors in an AI-powered economy.”

It’s Not Too Late

The economic stakes couldn’t be higher.

AI could potentially increase corporate profits by $4.4 trillion annually. Sales teams using AI are 1.3 times more likely to see revenue increases.

Every 10% improvement in workforce literacy correlates with 6.7% faster AI implementation.

But more than economic opportunity is at stake. Without addressing literacy deficits and cultivating sophisticated AI engagement, the UK risks enabling technologies that will amplify existing socioeconomic disparities rather than catalyzing inclusive growth.

As Toffler’s ‘learn, unlearn, relearn’ imperative suggests, both educators and businesses should treat AI adoption as a chance to improve much-needed literacy, not just an IT rollout. In Britain’s productivity crisis, upgrading our cognitive infrastructure is not optional; it’s existential.

(Written by a hyper-active dyslexic, made readable by Claude.ai)