AI is just like Homer Simpson – far too dumb and simple to replace us

ARTIFICIAL intelligence is so dumb, says Professor Angus Fletcher, it’s basically the technological equivalent of Homer Simpson: a slob with a pea-brain full of junk.

“The idea that artificial intelligence will replace humans is a hoax. It’s just marketing,” he says. “It’s not intelligent. It’s just an enormous calculator. Do you think calculators can replace Van Gogh or Shakespeare? Even the guy down the pub?”

Fletcher is one of Scotland’s most distinguished polymaths. He’s the neuroscientist who developed the ground-breaking theory of the human mind called ‘story-thinking’ – or narrative intelligence – which found that the brain doesn’t work like a computer. The brain, he says, is a ‘story-machine’. Our minds work along narrative lines – creating plots, imagining futures. Brains don’t really operate according to logic at all.

That’s why the idea that AI can ever replace humans is nonsense. Fletcher isn’t just a neuroscientist. As part of his research into how the brain works like a story-producing machine, he took a doctorate in literature at Yale so he could understand the minds of the world’s greatest story-tellers and apply that to his scientific theories.


Professor Angus Fletcher training American school children in his theory of ‘story-thinking’



Fletcher brought these two disciplines together at the Project Narrative Laboratory at Ohio State University, where he’s professor of ‘story science’. Translated from the laboratory to the real world, his research has been shown to boost intelligence and creativity. In other words, humans do better when we think in narrative terms rather than in solely logical terms, as we’re taught in school and university.

The US army asked Fletcher to teach special forces operatives the art of ‘story-thinking’ to make them better soldiers. His work earned him the military Commendation Medal. He’s also trained members of Ukraine’s army. On the flip side, he works extensively with school-children, using story-thinking to unlock their educational potential. The dominance of a ‘logical’ approach to schooling holds back far too many kids, he says.

Fletcher knows the world of AI inside out. He’s worked with major AI companies on their natural language processors (NLPs), technology similar to ChatGPT. These corporations were “convinced NLPs would replace screenwriters,” he says. That idea proved absurd.

His new book, Storytelling: The New Science of Narrative Intelligence, which is just out, brings his research on how the human brain works, and on the limitations of AI, out of academia’s ivory towers and into the public sphere.


Fletcher left these shores many years ago. Today, he’s in his Ohio lab when we speak. “Human brains are much more complicated than computers,” Fletcher says. Even the term ‘AI’ is bogus as machines cannot, and never will, have ‘intelligence’.

AI could have a suitable role in the world, though. It’s great at assisting humans with logistics – working out the fastest way to get from A to B; or in vast library searches; or in tools like audio transcribers converting words into text: simple tasks which help humans work better. But it will never be a teacher, engineer, doctor, writer, therapist, carer, or manager. It will never do the things we do.

Fletcher points out that we need only think back to 2011 to see AI’s limitations. That’s when IBM’s Watson computer won the US quiz show Jeopardy. Suddenly, tech gurus were claiming Watson would replace doctors. Clearly, that didn’t happen. “It was a complete bust,” says Fletcher. “The same thing will be revealed soon when it comes to this generation of AI. It’ll be seen as a bust, scam, fad.

“The real problem isn’t that AI will take over the world, or replace humans. It’s not going to do any of that. The problem is: humans have to fix all the problems facing us which we’re currently hoping AI will fix.”


Conceptual image of the brain. Brain and digital. (Photo by: BSIP/Universal Images Group via Getty Images).



On top of that, AI is filling the world with “spam”. This AI-generated ‘content’ – digital garbage – is now cluttering up the online realm, getting in the way of real facts, and spewing out disinformation.

Fletcher says “there’s basically two things happening under the AI hood. First, there’s just randomness. Secondly, AI has a huge capacity to synthesise, blending lots of stuff together. So you give it millions of books and it starts mixing and matching different texts together and coming up with something that looks ‘new’. Initially, that seems impressive as it’s moving faster than humans think, but essentially it’s just mindless random plagiarism, constant streams of banality at an incredible rate.”

Currently, at best, “AI is a fun game”. The reason AI is so incapable of replicating human intelligence is that computers are driven by logic, and, as Fletcher’s research shows, our minds run on narrative, or story-telling.

“The human brain evolved to initiate actions, launch new behaviours. When you put these new behaviours together they become plots and plans, strategies. When you watch children, most of what they’re doing is launching new actions to see what works and what doesn’t. That produces feedback where they think ‘ok this works, let me chain those actions together’. When you chain actions together you get plans, sequences of action and behaviour: stories. Computers cannot understand stories.”

That’s why ‘stories’ are so embedded in human life: they aren’t just a form of communication; stories are “how we think”.


This is radically different to how we’re taught at school. Fletcher wants to shake up the education system, which he feels fails most kids. In school, the focus is logic: being asked questions and coming up with learned answers. This seriously limits and underestimates the vastness of human creativity.

The human mind, Fletcher says, is constantly engaging in story-telling. When an event happens to you – say, you see a street fight – your brain, he explains, “essentially enters the middle of a story”. When that happens your brain does two things: it jumps backwards to come up with the reason why this event happened. That’s ‘causal thinking’. And it jumps forward to work out what happens next. That’s ‘counterfactual thinking’.

“So your brain goes ‘middle, beginning, end’,” Fletcher explains. It’s a story pattern often used in novels and screenplays. His work on narrative thinking, or story-thinking, is designed to help us navigate everyday life better. How do we make the correct call on what caused that street fight? How do we decide what the best course of action is after witnessing it?


Fletcher’s work really came into its own through the US Special Forces. Military planning is ruthlessly logical. No deviation is allowed. However, if battlefield plans fail, events will quickly turn catastrophic unless soldiers are taught to think creatively.

Too often, because of how we’re taught to think at school, university or work, we fall back on old ‘formulae’ when making plans for the future and forming our views of the past. So if your boss is demanding, or your partner grumpy, we reach for behaviours we’ve employed before to deal with the situation. Often, that’s the wrong response, and it can make matters worse. How many times have you fought the same battles at work or at home?

This is all a very crude way to explain Fletcher’s complex work, but broadly, if we think creatively – like novelists, say – we can develop multiple responses to stressful events and pick the one which leads to the best outcome.

“Rather than using pre-made formulae, open your mind, learn something new, and expand the ways you can flex your life. If you do that, you’ll feel calmer and be more effective. If you don’t, you’ll be more anxious and angry.”


There’s an almost psychotherapeutic side to his work. Story, says Fletcher, isn’t just about telling people ‘things’. The best way to live your life is to be in command of “your own story” as much as you can.

By taking the ‘logical’ approach to life, the rigorous ‘point A leads to point B leads to point C’ approach we’ve been taught since childhood, we risk real upset. Plans seldom work out. When plans fail, Fletcher explains, our ‘fight or flight’ instinct kicks in. We get scared that our one idea failed, and that makes us susceptible to being influenced by others, as we think their plan will help us; or we become angry and push through with one plan aggressively. Neither course is a route to happiness. Creativity – narrative-thinking – offers multiple routes to the future and success.

Now, clearly there are parts of the human brain that are very much like a computer. The mind isn’t one big story-machine. The computer-like parts are mostly in the visual cortex, Fletcher says: taking in images, processing and converting them.

Fletcher’s work – heavily peer-reviewed – received its biggest validation from the US army. When 150 special operations officers underwent Fletcher’s training, all showed marked improvement in planning and creativity, making them, fundamentally, better soldiers. Strikingly, they weren’t just more effective on the battlefield; they were better at finding solutions to problems without having to resort to violence.

This appealed to Fletcher as he’s “about as close to being a pacifist as you can imagine, and soldiers don’t want to die”. He was initially approached by the Army Nursing Corps, which wanted to know if there were “smarter ways to solve problems than dropping bombs on them. That was their literal quote”.


A complex electronic circuit board containing an artificial intelligence chip



Fletcher has taught special forces how to avoid fear or anger when plans fail, and rather be “more flexible”. With beautiful irony, it’s the same training he gives primary school kids. When life gets tough for kids, just like soldiers, they become scared or angry – or in pre-teen terms, they cry and throw tantrums. When trained in ‘story-thinking’, “children become much more able to solve their own problems”.

Fletcher insists none of this is touchy-feely hippy stuff about ‘teaching kids through play’. It’s not some reprised “Montessori school” for arty youngsters. What it does do, though, is reject the notion that logic is the only way to solve human problems. Focusing on this maths-heavy approach to learning – where children simply repeat what they’ve been taught – doesn’t develop creative people skilled at solving real-world issues.

In fact, Fletcher says, what happens is that schools and universities create an elite trained in logical thinking with little creativity, who then go out into the world and find leadership roles in politics and business. Their approach to life often fails, and hey presto, we’ve got the world in which we live today.

Another reason for the US army calling on Fletcher was this: they couldn’t understand why so many college graduates with top grades were failing miserably when they applied for the special forces.

“How can they be so smart, yet so unable to solve problems? Because school didn’t teach them to access real intelligence. They have these incredibly simplistic logic-based views of how life works. The double tragedy is that there’s so many kids in school who are really smart but told they’re dumb.” These kids would flourish if they were taught to access their creative rather than logical side: in other words, if they were taught to think like humans not computers.

Fletcher stresses that there’s no deliberate malice in the education system. “It doesn’t come with bad intention. It started with ideas of fairness. We wanted to standardise education, create a meritocracy, we wanted to evaluate students objectively so we developed these systems of memorisation, maths, education driven by assessment. Focusing on getting ‘the right answer’, learning formulas, is a recipe for being out of touch with reality, though – not having a mind that grows.”

Evidently, none of this means maths is ‘bad’. Clearly, it’s essential. But it has come to dominate, says Fletcher, “like an invasive species. If you’ve a box of crayons and the only colour is red, that’s not good.” Logic provides the “false promise of efficiency”, but the real world doesn’t run on logic, it’s chaotic and only creative thinking can help us navigate the chaos.


And that’s why AI is a massive con-trick. “It can’t think in story. It can’t plan. It can’t plot to take over the world. It can’t invent new futures. It exists in the permanent mathematical present tense. It doesn’t understand the idea of change.”

If an AI is fed good data, it can be a real boon to humans, helping us work out how to do tasks faster and more efficiently, but it will never ‘create’ anything. The problem is that AI is being fed the internet, with all its racist, sexist, homophobic garbage. “We’re dumping all our nonsense into it, and AI thinks it’s all true. It’s not being used for what it could be used for.”

Yet even in a perfect world where humans only fed AI truth and facts, it would still simply be at best an assistant. It can never do what humans do. Currently, we’re using AI all wrong. Fletcher says it’s like carpenters using hammers to open tin cans. “Understand the tool, respect it, use it for what it can do.”

Anyone investing in AI, believing it can replace humans, should cut and run, he says.


Although AI can’t truly ‘replace’ us, it is causing damage. The myths that are being spread about AI mean some companies are now laying off staff in the belief that AI can do their job. For example, some of the words you now read on company websites are AI-generated, making copywriters redundant.

However, have those AI words made you care more about the company? Buy its products? Do you remember a single word ever written by an AI?

Most corporations, Fletcher says, will quickly work out how pointless AI is: “AI is banal. Corporations don’t want banal. They want engagement. We’re in a war for attention today. Banal AI-generated swill isn’t going to get many clicks.”

The other risk to jobs is that AI “provides strategic cover” for mercenary CEOs to sack staff, claiming they’re using AI to cut costs and boost shareholder dividends. It’s a short-term exercise which will achieve little, except pain for those made redundant by machines – which ironically cannot do their jobs. “We’re going through a bubble right now, but it’s going to burst,” Fletcher adds.

The biggest fear around AI, though, is that it will become sentient and kill us in some Terminator-style apocalypse. AI, Fletcher says, could never do that. But go back to that analogy at the beginning about AI being dumb like Homer Simpson. Remember, Homer works in a nuclear power plant, and he’s idiotic enough to press the wrong button. AI could do the same simply because it’s so stupid.

Evidently, Fletcher wants AI “heavily regulated”. The biggest danger, he believes, is AI simply filling the world with “nonsense” which constantly distracts humans, and makes us wrongly believe computers can solve problems only we can tackle. Clearly, AI is also increasingly spewing out the conspiracy theories and hate already undermining democracies.


“AI isn’t dangerous in the sense that it’s going to intentionally kill us. But could it do something harmful? Sure. AI is dumb and it’s making us dumb. It’s creating ignorance. The real danger is that AI isn’t intelligent yet we’re putting it in charge. If you put something unintelligent in charge, what will happen? Something not very intelligent.” The ‘classic example’, says Fletcher, is deaths caused by self-driving cars which cannot understand what they are ‘seeing’ on the road.

Putting faith in AI, Fletcher says, is like believing in witches in the 1600s. As he wryly adds, that didn’t turn out too well.

Nor will AI ever improve to such an extent that it becomes capable of replacing humans. “It’s a hardware limit, not a software limit,” Fletcher explains. “Even if computers one day become ‘conscious’, they’d be doing logic. They couldn’t consciously do narrative.”

To believe that computers could ever ‘think’ like humans is to imagine that cows can become apples. “It’s just marketing. Companies need something to sell. It’s a fad. For a while it was cryptocurrency, then the Metaverse. Corporations exist to convince you there’s something you don’t have that you need.”

What we must do, says Fletcher, is harness the power of our own creativity to solve our own problems, and the problems of the world, as human beings. “Dependency is infantilising,” he adds. “I hope my work helps people. I want people to realise there’s problems out there, but AI won’t fix them. Nor will AI take over, but it will cause damage – damage we’ll also need to figure out how to fix.”

Grace Reader
