The Internet, now deliberately useless.


If, during the Enlightenment, you had told people that one day the sum of all human knowledge would be available to everyone, Diderot, d’Alembert and Voltaire would have positively purred with satisfaction. For men like them, the very idea of knowledge being instantly accessible to anyone curious enough to seek it was not just progress, it was progress in its pure, abstract, almost religious form.

“They will have a perfect world then!” they would have exclaimed, congratulating you on having finally solved the small inconvenience of ignorance. And they would have stared in awe at search engines, these obedient little oracles tirelessly diving into an ocean of raw data, surfacing—at least in theory—with exactly the piece of information that the lucky user desired.

From their point of view, the problem of knowledge was not very different from the one I had back in the pre‑Internet era, when “search engines” were called Gopher, Veronica and WAIS and ran on machines that today would struggle to power a smart toaster. To get actual knowledge out of a high‑quality source, you first had to select the sources themselves: figure out who was speaking in the first person, who was merely quoting someone else, and who was quoting a quote of a quote. Reaching the original text often meant combing through multiple university libraries, wrestling with paper catalogues or hostile green‑screen terminals, and sometimes even taking a train to another university in a “nearby” city, only to discover that the book you needed was, very academically, “temporarily unavailable”.

Obviously, the possibility of skipping all this manual, almost artisanal work would have looked like paradise to them. The idea that you can do the whole thing while sitting down, just clicking on a computer or lazily poking at a phone screen, would have fitted perfectly with their personal definition of an intellectual golden age. From their perspective, once you remove the cost, the time and the sheer physical fatigue of research, this should clearly be the era in which no man is truly ignorant, because any curiosity can be satisfied almost instantly. What on earth could possibly go wrong in a world like that, right?


And to be fair, that bright Enlightenment fantasy world is still technically possible. For people who understand what the point of all this is and who are naturally curious, it is not at all unusual to find themselves at two in the morning in front of a monitor, reading about quantum mechanics or whatever else is just accessible enough to be understandable and just distant enough from their usual field to be interesting. The machinery is all there, humming quietly, waiting for the sort of person who thinks “I wonder how that works” is a perfectly good way to ruin a night’s sleep.

The same, to be very clear, applies to news. When the press started talking about “incels”, my first reaction was not to nod wisely and swallow whatever description I was being fed, but to go straight to their forums and take a look at what they were actually saying. I did the same with the so‑called “Manosphere”, with MGTOW, and with every other moral panic of the week. Why on earth should I settle for a second‑hand description of a group from a newspaper when I can walk into their digital living room, sit down, and watch them talk?

So it is not impossible to use the Internet as a tool to learn, to understand and even to dig deeper. In fact, that remains its main potential, and if you really want to, you can still exploit it fully. Even the much‑abused Artificial Intelligence, if you interrogate it properly, with relevant questions and decent context, can act as a fantastic search engine, capable of finding good sources—provided you explicitly tell it to—and then summarising their contents into something a human being can read without losing the will to live.


So now, obviously, the question is: “What went wrong?”
My answer is: nothing went wrong. The problem is that the Enlightenment and the positivists were simply wrong about people and, as the logicians like to say, ex falso quodlibet: start from a false premise and anything can follow, usually downhill. They took it for granted that it is in human nature to be curious, and especially to be willing to “move”, even if only online, to understand things first‑hand. For someone like me, it is perfectly natural to dig deeper into a topic as soon as I have a search engine and a method for finding decent sources—there is a reason why university careers end with you writing a thesis. For most other people, it is just as natural to choose the easiest possible route, particularly the one that does not require much thinking.

The second part of the mistake was assuming that everyone would develop some kind of method, or at least a basic mindset, for identifying good sources. They did not, and they do not even realise that something is missing. When I read in a European newspaper about supposedly secret Pentagon plans to win a war in Iran, my first reaction is to start asking questions. How, exactly, is a random European journalist supposed to know highly classified US military plans? Has he seen those documents, or is he quoting someone who claims to have seen them? And that source—did they really see anything, or are they just repeating what yet another “source” told them? By the time you follow that chain honestly, most “exclusive revelations” start looking like badly photocopied gossip.

Meanwhile, you scroll down to the comments and you get two majestic schools of thought: “It’s all true because yay USA will win” and “It’s all propaganda because boo USA will lose”. I almost never see anyone wondering whether the sources actually exist, how reliable they might be, or how close they are to anything resembling a primary document. That kind of reflection is second nature to me because I still carry around the university method in my head; for the average reader, source criticism is about as exotic as Tibetan metaphysics. So, as we were saying, the first thing that supposedly “went wrong” is that the original idea was completely false: most people do not actually want to educate themselves. At best, they want to be “informed”, which is nowhere near the same thing.


So now, obviously, the question is: “What went wrong?” Nothing, again. This time the problem is much more basic, almost embarrassingly so: it is a matter of physics and biology. The human brain burns close to 20 percent of the body’s energy—virtually all of it delivered as glucose—while weighing roughly 2 percent of the total mass. When the human body was “designed” by evolution, glucose was a precious, scarce resource; today it is an epidemiological problem with frosting, but back then it was survival fuel. From that perspective, thinking is not a noble moral duty but a dangerously expensive habit.
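To put a rough number on that bill, here is a minimal sketch using standard ballpark figures (a ~2,000 kcal daily budget, the ~20 percent brain share, and glucose at ~4 kcal per gram; these values are textbook approximations, not anything measured in this essay):

```python
# Back-of-the-envelope: the brain's daily energy bill, in glucose.
# All figures are rough textbook ballparks, not measurements.

DAILY_BUDGET_KCAL = 2000.0     # typical adult daily energy budget (assumed)
BRAIN_SHARE = 0.20             # brain's share of that budget (assumed)
GLUCOSE_KCAL_PER_G = 4.0       # energy density of glucose

brain_kcal = DAILY_BUDGET_KCAL * BRAIN_SHARE        # ~400 kcal per day
brain_glucose_g = brain_kcal / GLUCOSE_KCAL_PER_G   # ~100 g of glucose per day

print(f"The brain burns ~{brain_kcal:.0f} kcal/day, "
      f"about {brain_glucose_g:.0f} g of glucose.")
```

Roughly a hundred grams of sugar a day, every day, for an organ you cannot switch off: in a pre-agricultural world, that is a serious line item.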

If you look at it this way, the most sensible strategy is obvious: follow the leader. One person does the heavy cognitive lifting and pays the glucose bill; everyone else imitates him and saves energy by leaving that ridiculously costly organ largely idle. If the right person does the thinking, simply doing what he says is metabolically rational. You avoid using that monstrously wasteful contraption in your skull, the one that weighs almost nothing and yet hogs roughly a fifth of your glucose like a spoiled child with a sweet tooth.

This energy‑economy angle explains quite neatly why people follow the group and why they follow leaders. Imagine a panther walking into a small village in the wild. It makes no sense for every villager to stand there, observe the panther and invent a solution from scratch, each burning precious glucose, when one individual who has seen panthers before already remembers the correct strategy. Having everyone “think for themselves” in that situation is not just inefficient, it is suicidal stubbornness dressed up as intellectual pride.

The savings, of course, scale with the size of the group. The larger the group, the larger the total amount of glucose saved if everybody just does what the leader—or the designated intellectual—says. One brain pays the cognitive tax; the others free‑ride in glorious metabolic comfort. And this very simple physical fact cuts directly against the Enlightenment and positivist assumption that every person naturally wants to “think with their own brain”. In reality, for deep evolutionary reasons linked to the absurd energy cost of thinking, anyone who does not explicitly want to be a leader is more than happy not to think and to merely imitate the leader. Often, the simple act of wanting to think for yourself is perceived as a challenge to the leader’s status: thinking is his job, stop wasting glucose, you idiot.


In this sense, what we like to call “mainstream thinking” makes an offer that is, from an evolutionary standpoint, almost irresistibly attractive. Imagine an influencer with ten million followers who are gently invited to stop thinking and simply adopt whatever thoughts drip out of that influencer’s feed. Ten million people who effectively outsource the hard part and “save” that extra mental effort are, in energetic terms, the equivalent of two million people who got all the glucose they needed to make it through the winter in one piece. If this were a village, that would mean being able to guarantee that 20 percent of the population survives the harsh season—and in evolutionary terms, a guaranteed 20 percent survival rate is not “a nice bonus”, it is a massive advantage.
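For the record, the bookkeeping behind that equivalence, under the essay’s generous assumption that each follower saves the brain’s full share of roughly one fifth of the daily budget:

$$10^{7}\ \text{followers} \times 0.20 \approx 2\times 10^{6}\ \text{full daily energy budgets},$$

which is exactly the “20 percent of the population” whose winter the savings would cover.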

Scaled up to a planet with social networks, this is exactly what happens: the bigger the herd, the more energy is saved by every individual who agrees not to think and to let the designated shepherd handle reality on their behalf. The influencer, the pundit, the party intellectual are not just selling opinions; they are selling a metabolically efficient lifestyle. You get identity, ready‑made positions and the warm feeling of being “on the right side of history”, all while keeping your brain on low‑power mode. Saying no to that offer requires an almost pathological attachment to independent thought, the human equivalent of deciding to walk everywhere in a world where everyone else takes the elevator.

So, if we really want to summarise the chapter titled “What went wrong?”, it is not hard.

  • First, it was a mistake to assume that everyone is curious, or that everyone wants to satisfy their curiosity properly and thoroughly instead of settling for a generic, shallow answer that lets them move on with their day.
  • Second, it was a mistake to assume that everyone wants to think with their own brain—and more generally, it was a mistake to assume that everyone wants to think at all.

Most people are perfectly happy to rent out their prefrontal cortex to the nearest loud voice and enjoy the considerable metabolic savings. The Enlightenment promised a world of autonomous minds; what we actually built is a world of very efficient mental outsourcing.


If we wanted to build a physical model of this whole phenomenon, we would simply compare the cost for the communicator—the one who does the thinking—of broadcasting what he thinks, with the total energy saved by all the people who adopt that thought and therefore avoid thinking for themselves. In other words, you put on one side the glucose burned by one brain formulating an opinion, and on the other side the glucose not burned by millions of brains happily switching to “copy mode”. The resulting equation is frankly terrifying: the communicator spends the energetic equivalent of a few milligrams of glucose—roughly what an Internet connection costs, once you translate money back into energy—and in exchange literal tons of glucose go unburned, because millions of people decide not to think. In evolutionary terms, that is not just convenient, it is economically irresistible.
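A toy version of that model, with deliberately conservative, invented magnitudes (the 5 g/day per‑follower saving and the few‑milligram broadcast cost are illustrative assumptions, not measurements):

```python
# Toy model: one communicator's broadcast cost vs. the audience's savings.
# Every magnitude here is an illustrative assumption.

FOLLOWERS = 10_000_000        # audience size from the essay's example
SAVING_G_PER_DAY = 5.0        # glucose each follower skips by not thinking (assumed)
BROADCAST_COST_G = 0.005      # communicator's extra cost: a few milligrams (assumed)

saved_g = FOLLOWERS * SAVING_G_PER_DAY   # glucose not burned per day, in grams
leverage = saved_g / BROADCAST_COST_G    # grams saved per gram spent broadcasting

print(f"Aggregate saving: {saved_g / 1e6:.0f} tonnes of glucose per day")
print(f"Leverage: {leverage:.0e} grams saved per gram broadcast")
```

Even with a saving of only 5 g per head—a twentieth of the brain’s ~100 g daily budget—the audience skips about fifty tonnes of glucose a day, ten orders of magnitude more than the broadcast costs.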

Once you look at it like that, the conclusion is brutal but simple: there is no realistic hope that, on a population scale, people will spontaneously decide to “think with their own head”. For the individual, the wish to think may well exist; on Sunday afternoon, in theory, everyone would like to be an autonomous critical mind. But if you model the population as a mass, the mass does not want to think. The mass wants to minimise its cognitive energy bill. It does not want to know and it does not want to learn; it wants to imitate. And any system—like our modern Internet—that rewards imitation and amplifies cheap communication will naturally converge on that minimum‑energy configuration, no matter how many Enlightenment quotes you print on the homepage.


The Internet, therefore, just sits there. It is the place where, in principle, anyone could learn anything: from why kitchens in the Netherlands are microscopic—not only in the Netherlands, by the way—to why, if you could somehow cool a Bose–Einstein condensate, you would get hydrogen even if you started from a chunk of titanium before it was turned into a condensate. That is how I use it: as a universal toolbox for every random curiosity that wanders into my head at inconvenient hours. But this is the behaviour of one person—specifically me—who does not feel like part of the mass, mostly because I have been arguing with the mass since I can remember.

If, instead, we apply the energy model to the mass, as I have just done, we discover that the mass does not want to learn in order to think. You can put the entire body of human knowledge and the full machinery of physics in front of it, and the mass will still decide that the Earth is flat rather than go and look up a couple of scientific papers—or even a high‑school‑level calculation—that prove otherwise. Showing that the Earth is not flat is well within the reach of a reasonably awake teenager: you notice that one kilogram of flour weighs (approximately) the same everywhere on the planet’s surface, you write down Newton’s law of gravitation, and you realise that constant weight means constant distance from the planet’s centre of mass. The locus of points at a fixed distance from a centre is the surface of a sphere; plug in the actual numbers and, surprise, the resulting radius matches the measured radius of our planet. It is almost disappointingly straightforward.
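A minimal sketch of that high‑school calculation, assuming Newtonian gravity from a spherically symmetric mass and standard values for $G$, $M$ and $g$:

$$W = \frac{GMm}{r^{2}} = \text{const} \quad\Longrightarrow\quad r = \sqrt{\frac{GM}{g}} = \sqrt{\frac{6.674\times10^{-11} \cdot 5.97\times10^{24}}{9.81}}\ \text{m} \approx 6.37\times10^{6}\ \text{m},$$

within a whisker of the measured mean radius of about 6,371 km. The half‑percent wiggle is the real planet being slightly oblate, which is why the kilogram of flour only weighs the same “approximately”.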

But to do this, you first need to want to think. And why on earth should you, if you can just follow the local flat‑Earth guru and save yourself the effort? From the point of view of our glucose‑obsessed biology, imitating the guy with the YouTube channel is simply cheaper. Thinking is an optional luxury; imitation is the default setting. That is why the Internet, the place where anyone could learn everything, ends up being used mostly as the place where everyone can copy anything.


The Internet, therefore, remains the place where anyone could, in principle, have the opportunity to “learn in order to think.” It is the universal library plus laboratory plus bar fight, all available from the same ugly browser window, for whoever is stubborn enough to actually use it that way.

The mistake of the Enlightenment encyclopedists was, in fact, to assume that everyone would want nothing more than exactly that: to “learn in order to think,” as if the natural state of Homo sapiens were a kind of permanent voluntary seminar. They projected their own pathology—a compulsive need to understand things—onto the entire species, and then built their dreams on that misunderstanding.

In reality, the mass as such looks for a leader to follow, because even a very simple physical model of the situation tells you that global energy consumption reaches its minimum when one person does the thinking and everyone else copies the result. One brain pays the full metabolic bill; the others enjoy the group discount. From the point of view of thermodynamics, originality is a design flaw.

It is therefore wrong to say that the Internet was designed not to educate. The network is perfectly capable of hosting education, depth, and independent thought; in some dim corners it even still does. What fails the exam is not the infrastructure but the species using it. In reality, it is Homo sapiens that is “designed” not to think, optimised by evolution to imitate first and ask questions—maybe—never.

Sad, but true.