Air travel is, famously, a rather stressful experience. A few months ago, I found myself at an airport, and my stress levels were further heightened by the fact that my flight was delayed by over two hours. As it turned out, the plane that was supposed to get me to Luxembourg had not even left Findel Airport yet. I thus found myself with some free time on my hands and decided to track the flight so that I would receive notifications about its status.
Luckily for me, we live in the 21st century, and tracking a flight is a piece of cake. I took out my smartphone and navigated to the Lux-Airport website, marvelling at the convenience of modern technology and the ease of… wait, what’s this? On the homepage, I was startled to find myself staring into the dead eyes of a fox.
Gudde Moien. Wëllkomm um Fluchhafen Lëtzebuerg! [Good morning. Welcome to Luxembourg Airport!]
I’m Réno, your guide for your lux-Airport journey. You can ask me anything!
Réno, huh. And he’s bilingual too, I guess? He did mess up the Eifeler Reegel there, but hey, he’s made an effort, and I’m the last person to discourage a learner. But anyway, I’m not in the mood to chat with a floating fox head wearing a bow tie; I just want to track my flight. I switch to the Departures tab, locate my flight, and there it is: “Email me Updates.” I’ve done this a million times before, easy. I tap the button, but to my confusion, the website automatically switches back to Réno – whose motionless eyes have been staring at me this entire time – and starts typing out a message:
I would like to track flight XYZ to XY on DD/MM/YYYY at XX.YY.
Excuse you, who the hell gave you permission to put words in my mouth and force me to speak to your creepy fox ghost? Réno spends a few seconds “thinking” and informs me that, why yes, he can track my flight for me, but requires my email address to do so. Great. While still annoyed that Lux-Airport is basically holding my flight information hostage until I humour the Fantastic Mr Fox rip-off their marketing team farted out in two seconds, my pragmatic side prevails and simply wants to get this over with. I type out my email and send the message. Réno starts “thinking” again. Two seconds pass. Then five. Ten. Twenty. Thirty. After a full minute, I start to think that Réno may have had a stroke or simply could not wrap his mind around the insane thing I just did (i.e., give him my email after he asked for it). But all right, I guess bugs happen sometimes, so I refresh the page and go through the whole charade again. I submit my email, Réno “thinks”, and this time he does come up with a response:
It looks like you gave me an email address. What would you like me to do with it?
I gave up and went for a drink instead.
The “AI” deception
Experiences like this have become increasingly common. Since the “AI” boom kicked off, it seems businesses and institutions everywhere suddenly turned around and said: “You know that thing that’s been working perfectly fine for years? What if we made it worse in every way and ensured it also wastes a small lake’s worth of water every time someone uses it?”
There are many criticisms you can level against “AI”, but in this piece, I want to focus on those that exemplify a broader phenomenon: the decline of the US-dominated internet and the potential for a new digital world built from its ashes. From this angle, the story of “AI” actually offers an interesting case study in how the US big tech industry operates and why we should be wary of anything these companies try to sell us.
First, let me address what you have probably already noticed. Whenever I use the term “AI” to refer to the technology sold by companies such as OpenAI, I put it in quotation marks. You see, as a translator and writer, language is central to my life, and as such I am always interested in words and how we use them. When you start looking into “AI”, you will find that it is a rather broad and loosely defined term. The field of AI research dates back to at least the 1950s and, over the years, humans have explored a wide range of ways to mimic aspects of our own intelligence.
I raise this because my first point is that “AI” is not a technical term that accurately describes the technology relentlessly shoved in our faces over the past few years. Right at the start of their book, “The AI Con: How To Fight Big Tech’s Hype and Create the Future We Want,” Drs Emily Bender and Alex Hanna point out that “AI is a marketing term.”
In fact, it has been since its inception. The man who originally coined the term “AI” in 1956, a Dartmouth assistant professor called John McCarthy, later explicitly admitted that he came up with the term to attract more funding for research he had originally conducted under a different name. In an interview with Novara Media, Karen Hao, author of the book “Empire of AI,” noted:
“That marketing root to the phrase is part of why it’s really difficult to pin down a specific definition today […] So, quite literally, when people say ‘AI’ they’re referring to an umbrella of all these different types of technologies that appear to simulate different human behaviours or human tasks.”
This is important because the vagueness of this term significantly muddies the discourse around the issue. Whenever someone voices even the slightest criticism of tools like ChatGPT, you can be sure that some AI Bro (and I can guarantee you it will be a man) will jump out of the bushes and smugly point out that, uhm, actually, AI is helping doctors spot tumours, so it’s kind of messed up that you would be against that.
Yes, there is tech that falls under the very large umbrella of AI that does that. But let’s be real, doctors are not consulting bloody ChatGPT only to be told for the third time in a month: “You are absolutely right, that was a malignant tumour. I am sorry your patient died. Would you like me to generate a paragraph to break the news to her family?”
So, before we move on, I want to be very clear: what I criticise in this piece are the LLMs and image generators operated by OpenAI, Anthropic, etc., and not the whole field of AI. I’m not disputing that there is AI technology that is useful, nor am I even asserting that LLMs are entirely useless. However, in the case of the latter, I will argue that their capacities are vastly oversold.
Certainly artificial, but not really all that intelligent
So, let’s talk about the actual tech, the thing these companies want you to think of as “AI”. Whenever I talk to people about this or generally follow the mainstream discourse, I am rather baffled to find that a large number of people have absolutely no understanding of what this stuff is or how it works. The best some seem to be able to come up with when asked how ChatGPT works is to vaguely suggest that it’s “a thing that knows a bunch of stuff.”
Obviously, I cannot go into great detail here (if you want that, check out this video), but at a very basic level, Large Language Models (LLMs) are neural networks trained on obscenely vast amounts of data to predict words in a sequence. Gary Marcus, a cognitive scientist who has long been critical of the current “AI” fad and whom I will come back to later, describes them as “AutoComplete on steroids.” In other words, these things are glorified word-guessing machines based entirely on probability. This means that whenever you ask ChatGPT anything, what it does not do is search through a vast repository of knowledge and retrieve the exact answer you were looking for. All it does is statistically predict the most probable next word in the sequence that forms its response.
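To make this concrete, here is a deliberately silly sketch in Python: a toy “bigram” model that guesses the next word purely from how often words followed one another in its training data. Real LLMs are vast neural networks operating on tokens rather than a table of word counts, and the corpus below is invented for illustration, but the core move is the same – pick a statistically probable continuation, without ever looking up a fact.

```python
# A toy word-guessing machine: a bigram model that, like an LLM at an
# absurdly smaller scale, predicts the next word purely from statistics.
# The corpus is made up; real models train on trillions of tokens.
from collections import Counter, defaultdict

corpus = (
    "the plane is delayed the plane is boarding "
    "the fox is thinking the fox is staring"
).split()

# Count how often each word follows each other word.
follows: dict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word -- a guess,
    not a fact retrieved from a repository of knowledge."""
    return follows[word].most_common(1)[0][0]

word = "the"
sentence = [word]
for _ in range(3):
    word = predict_next(word)
    sentence.append(word)

print(" ".join(sentence))  # -> "the plane is delayed"
```

Note that the model has no concept of whether “the plane is delayed” is true; it is simply the most probable string of words given what it has seen.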
Another way of putting this is: LLMs don’t know anything! They just guess and present the result with the confidence of a straight white man in a corporate meeting. Admittedly, they’re very good at guessing. But they mess up. All the time.
The myth of AGI
“But Tom,” I hear you say, “that might be true now, but AI is getting better all the time! Soon it won’t make any mistakes and will become smarter than any human who’s ever lived!”
Ah, there it is. The other killer argument that is meant to shut down any criticism instantly. Except that it’s complete bollocks.
The Holy Grail for all these “AI” companies, and the reason so many people are pouring trillions of dollars into them, is the promise of “Artificial General Intelligence”, or AGI – basically a fancy term to describe some future technology that would achieve human-level intelligence. Now, the position of “AI” companies is that all you need to do is scale up the data centres and make these models larger, and larger, and larger until, eventually, boom, superintelligence. The problem is that this is, and always has been, the opinion of a fringe minority.
The vast majority of the world’s most experienced AI researchers believe we have not yet developed the techniques that would lead to AGI. On this point, it is also worth noting that AGI is not even rooted in any sort of scientific evidence. It is an idea describing something that might exist at some point in the future. At this stage, AGI is more about ideology than science.
Okay, so we probably won’t all be killed by a Silicon Valley-branded terminator any time soon. But won’t LLMs still improve all the time? In an interview published on 20 January, Gary Marcus stated that he believes LLMs have now reached diminishing returns, and he pointed to the massive let-down that was the release of GPT-5 last August. LLMs are not intelligent, and they never will be. As far back as 2001, Marcus explained in his book “The Algebraic Mind” that models of this kind will always hallucinate (i.e., make stuff up and present it as true). Eleven years later, when a major breakthrough in a technique known as “deep learning” laid the groundwork for the LLMs we know today, Marcus wrote in an article published in The New Yorker:
“Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like ‘sibling’ or ‘identical to.’ They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.”
And in case you think I’m cherry-picking by focusing on the statements of a single critic, OpenAI themselves admitted in a research paper published in September last year that hallucinations are mathematically inevitable.
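I won’t reproduce the paper’s mathematics here, but the basic intuition is easy to sketch: a next-word guesser always produces a full probability distribution over possible continuations – the probabilities must sum to one – so something always comes out on top, whether or not any true answer exists. Here is a minimal illustration (the question, candidates and scores below are entirely made up):

```python
import math

def softmax(logits: list[float]) -> list[float]:
    """Turn raw scores into a probability distribution summing to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical completions of "The capital of Atlantis is ..." --
# a question with no correct answer. The scores are invented.
candidates = ["Poseidonis", "Atlantica", "I don't know"]
probs = softmax([2.1, 1.4, 0.2])

answer, confidence = max(zip(candidates, probs), key=lambda pair: pair[1])
print(f"{answer} (p = {confidence:.2f})")  # -> "Poseidonis (p = 0.61)"
# The model confidently names a capital for a place that does not exist,
# because some continuation always has to win.
```

Real models can be nudged to hedge, of course, but the paper’s core argument is that the way these systems are trained and evaluated rewards confident guessing over admitting ignorance.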
Most bosses are not firing workers because of “AI” – and those who do quickly regret it
Okay, so “AI” is an empty marketing term meant to obfuscate the actual technologies we’re dealing with, and the neural networks we currently have will not only never lead to AGI but have probably already reached the peak of what they can do. But aren’t they already good enough? Aren’t millions of people losing their jobs because of “AI”? Observant readers might have already guessed where this is going.
First, while everyone and their mother is talking about all of these workers supposedly being replaced by gentrified Siris and Alexas, there is little actual evidence to support this. Now, that’s not to say people are not being laid off, but the point is they are not being fired because bosses are replacing them with “AI”. Instead, an Oxford Economics briefing from earlier this month suggests that employers appear to use “AI” as a bogus justification to cover staffing reductions they would have made anyway. In other words, declaring you’re firing a bunch of people to “embrace the AI transition” simply makes you sound like much more of a very important big boy to your board members than admitting that you overhired during the Covid-19 pandemic.
But even in cases where businesses did try to integrate “AI” in some significant way, everything indicates that it has been an unmitigated disaster. Last August, an MIT report found that 95% of “AI” pilots at companies are failing. Many people are even being rehired, as companies such as Klarna are realising that LLMs can’t actually do the things they were led to believe they could.
As you might be able to tell, I have become slightly obsessed with this topic. But the thing is, we haven’t so much as scratched the surface of all the problems with “AI”. I haven’t even mentioned the theft from artists, including through pirated files; the problem of “AI-induced psychosis”; the allegations that chatbots drove people to suicide; the horrific exploitation of workers in the Global South; the economics that make absolutely no goddamn sense; and, of course, the unforgivable environmental implications.
The story of “AI” is a profoundly depressing one. But if you were to ask me what I believe to be the single worst thing about it, I would have to agree with tech journalist Ed Zitron, who said in an interview with The Guardian earlier this month:
“The biggest thing we’ve learned from the large language model generation is how many people are excited to replace human beings, and how many people just don’t understand labour of any kind.”
The post-US internet
But as bad as the “AI” bubble is (and yes, it is almost certainly a bubble), there is honestly nothing surprising about it. For years now, the internet has been in the clutches of US big tech companies, and we have all been worse off for it.
The Silicon Valley oligarchs have long been looking for the “next big thing” to feed their hypergrowth obsession. Some of you might remember that Mark Zuckerberg told us back in 2022 that the “Metaverse” would revolutionise everything. He even changed the name of his company. All of that, and for what? 77 billion dollars evaporated into nothingness, and a bunch of shitty VR apps that nobody uses.
The US has forced its version of the internet, and of tech in general, on us, and now we are trapped in it. Now that it is clearer than ever that Europe’s “alliance” with the US never really existed and was always merely about vassalisation, the need to break out of this structure has perhaps never been more urgent.
There are a lot of things to criticise about the internet, but I have to admit that, for me personally, it has played a hugely positive role in my life. And with the US’s power over this domain, and the world at large, finally waning, I am actually excited about what might come next.
As with climate change, we already have the tools to bring about the transition that is so desperately needed. Activist and sci-fi writer Cory Doctorow, known for his “Enshittification” thesis, has pointed out on several occasions that an obvious next step would be to repeal anti-circumvention laws, which currently prohibit us from modifying things we own if the manufacturer does not want us to. These laws are the reason you can’t just install a third-party app store on an iPhone, or deactivate the feature that forces you to buy extortionately expensive branded printer ink instead of generic cartridges. If Europe decided to go down that route, it could become the first jurisdiction in the world to legalise jailbreaking, develop the tech to do it reliably, and sell it to anyone interested around the globe.
Europe is also home to an incredibly vibrant open-source community that it should finally embrace fully. I mean, the Linux kernel was developed by a Finn, for God’s sake! Linux – the family of operating systems that already runs the vast majority of servers, all of the world’s 500 fastest supercomputers, and the bloomin’ International Space Station. At the regional level, there have already been cases of entire institutions switching from Windows to Linux – the German state of Schleswig-Holstein, for instance, is moving its administration off Windows. Imagine what could be achieved with large-scale European support for such a transition.
The future of the internet need not be an endless cycle of enshittification and rot. If we choose, it can be community- rather than profit-driven, collaborative rather than antagonistic, and a tool that meaningfully augments our lived experience. In some corners, this version of the internet already exists. Let’s expand it.
But until that day comes, I guess we have to put up with Réno and his fellow slop-brethren. And who knows, maybe someday he’ll figure out what to do with the email addresses he keeps asking people for.