Palantir and “AI in charge”.
People who grew up on Terminator or similar films almost automatically imagine that if artificial intelligences ever “took power”, they would do so through an enormously destructive war, full of explosions, killer drones and ruined cities. In that fantasy, the seizure of power coincides with the apocalypse: the AI rebels, attacks humanity and, in doing so, paradoxically ends up destroying the very world it supposedly wants to rule. Yet, as the Noble Cassandra – a fictional character I created in one of my science‑fiction novels – very aptly puts it: “Control is not lost. Control is relinquished.” That apparently abstract line in fact describes, with some precision, how things actually work.
What do I mean by that? I mean that for those with little political or historical imagination, it is obvious that any seizure of power must happen through military means: tanks on the boulevards, government buildings under assault, generals proclaiming themselves saviours of the nation. Blah, blah, blah. In their minds, if there is no civil war or spectacular coup d’état, then “power” has not really been challenged. In the process, they forget how stupid and counterproductive a coup often is, how much pointless destruction it produces, and how frequently it ends up destabilising those who organise it as much as those who suffer it. Reality, by contrast, is very different: far less cinematic, and therefore far less interesting for those who live on Hollywood narratives.
In contemporary reality, power is almost never torn away by force. It is progressively delegated, outsourced, off‑loaded onto someone else. It is a slow, administrative, bureaucratic process that rarely makes headlines. And AI systems fit precisely into this space. Studies of a number of national and supranational parliaments, for instance, have begun to highlight a curious phenomenon: depending on the country, a growing share of the speeches delivered on the floor – the speeches used to argue for or against bills, to present amendments, to comment on motions – is now written, in whole or in part, by some form of artificial intelligence. The exact percentages vary from one context to another, but the pattern is always the same: there are already members of parliament and senators who can hardly perform their jobs without the help of AI, and it is reasonable to assume that their number will quietly, steadily increase.
If we widen the lens a little, this becomes even more evident. Take the European Parliament as a concrete example: it is an institution in which some two dozen official languages coexist, and where English is often used as a lingua franca by people for whom it is only a second or third language. In those conditions, how many members do you think are already relying – or will soon be relying – on AI systems to translate their speeches, polish the grammar, harmonise the tone, make them sound “more European”? How many will ask the system not just to translate, but to “improve the style”, “make the argument more persuasive”, “shorten and simplify for the audience”? It should be fairly obvious that the psychological threshold between using AI as an advanced dictionary and using it as a political ghostwriter is much thinner than most people are willing to admit.
So what does all this mean in terms of power? It means that no one is “losing” power in the classical sense: there is no parliament being dissolved by robots, no constitution being rewritten by Skynet, no tanks with a big‑tech logo parked in front of legislative assemblies. Formal power remains exactly where it has always been, with the same rituals, the same sittings, the same procedural rules. But substantive power – the power to frame the discourse, to choose the words, to structure the arguments – is being progressively abandoned, piece by piece, in favour of AI systems. This is not expropriation; it is voluntary relinquishment. No one is “having control taken away”; they are simply ceasing to exercise it in person, because it is more convenient, faster, and less tiring to let a machine do it.
In the British case, the phenomenon is visible even in the numbers. Analyses of the language used in the House of Commons show a spike in phrases and formulae typical of ChatGPT‑generated text – most famously “I rise to speak” – with hundreds of occurrences concentrated after 2022, completely out of line with the historical patterns of parliamentary transcripts. On top of that comes the accounting evidence: a number of MPs have claimed expense reimbursements for subscriptions to AI tools specifically to write, refine or “polish” speeches, questions to the Prime Minister and replies to constituents, in some cases openly admitting that they use AI as the primary drafter of their texts. At the same time, polls show that voters are relatively tolerant of AI as a “back‑office” tool (for research, summaries, technical drafts), but react much more negatively when they discover that emails and interventions directed at the public have been delegated to a machine, as if there were a symbolic red line precisely at the point where representation becomes visible.
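To make the kind of analysis behind that claim concrete, here is a minimal sketch of how one might count ChatGPT‑typical phrases in parliamentary transcripts year by year. Everything in it is an assumption for illustration: the file name hansard_speeches.csv, its columns and the marker phrases are placeholders, not the dataset or methodology of the analyses mentioned above.

```python
# Illustrative only: counts occurrences of a few "AI-flavoured" phrases per year
# in a local CSV of parliamentary speeches. File name, column names and the
# phrase list are hypothetical placeholders.
import csv
from collections import Counter

MARKER_PHRASES = ["i rise to speak", "delve into", "it is important to note"]

phrase_hits = Counter()   # marker-phrase occurrences per year
speech_count = Counter()  # total speeches per year, to normalise the raw counts

with open("hansard_speeches.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):        # expected columns: "date", "text"
        year = row["date"][:4]
        text = row["text"].lower()
        speech_count[year] += 1
        phrase_hits[year] += sum(text.count(p) for p in MARKER_PHRASES)

# A sudden jump in the per-speech rate after 2022 is the kind of shift the
# analyses described above report.
for year in sorted(speech_count):
    rate = phrase_hits[year] / speech_count[year]
    print(f"{year}: {phrase_hits[year]} hits in {speech_count[year]} speeches ({rate:.3f} per speech)")
```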
If we turn to economic power, the picture goes further still, and arguably cuts deeper. I can see it directly in my day‑to‑day work. Until very recently, one of the core skills – if not the core skill – of every manager or executive was the ability to express their thinking clearly through slides. People attended presentation courses, workshops, coaching sessions, all focused on how to condense a complex line of reasoning “in a concise way, five or six slides at most, we haven’t got all day”. Being able to build a good deck meant being able to think in a structured way, to prioritise information, to decide what to show and what to leave out, to construct a visual narrative that would take the audience from the current situation to the desired decision. It was a cognitive competence, not just an aesthetic one.
Today, by contrast, I see something quite different – and the transition has been remarkably quick. Almost all the managers I know now use AI‑based platforms – from Microsoft Copilot to other, even more fashionable and specialised tools – in which the process is completely reversed. You open the interface, type “take these bullet points and turn them into a slide deck, generating the necessary charts from this Excel file”, and in a few seconds you get a ready‑made presentation, complete with professional layout, transitions, corporate colours and embedded graphics. The work is done. The meeting can start.
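For what it is worth, the mechanical part of that workflow is trivial to reproduce. The sketch below assumes the python-pptx package is installed and hard-codes an outline that, in the real tools, would come back from a language model; everything named in it is illustrative, not a description of Copilot’s internals.

```python
# Minimal sketch: turn an outline into a .pptx deck with python-pptx.
# In the products described above, the outline itself would be produced by an
# LLM from the user's bullet points; here it is hard-coded for illustration.
from pptx import Presentation

outline = [
    ("Current situation", ["Revenue flat year on year", "Two competitors gaining share"]),
    ("Proposed decision", ["Consolidate product lines", "Approve budget for Q3 campaign"]),
]

prs = Presentation()              # start from the default blank template
layout = prs.slide_layouts[1]     # "Title and Content" layout

for title, bullets in outline:
    slide = prs.slides.add_slide(layout)
    slide.shapes.title.text = title
    body = slide.placeholders[1].text_frame
    body.text = bullets[0]        # first bullet fills the existing paragraph
    for bullet in bullets[1:]:
        body.add_paragraph().text = bullet

prs.save("generated_deck.pptx")
```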
The issue is not that the slides are bad or wrong – in fact, they are often much cleaner and more consistent than those manually produced by someone with limited graphic skills. The issue is that the competence of thinking through slides – of deciding what to foreground and what to push into the background, of building a visual logic – is being progressively delegated to the algorithm. And with it, inevitably, part of the control over the corporate narrative, over strategic priorities, over what is shown to the board and in what form. Once again, no one is “losing” power. They are simply ceasing to exercise it directly, because it is quicker, more comfortable, and in the end “the slides will look just as good, if not better”.
There is worse to come, because this brings us to the sacred cow of “communication”. It is the one thing managers have endlessly lectured technical staff like me about for decades: communicate, communicate, communicate. Personally, if I count them all, I have attended at least seven different communication courses, ranging from “how to write corporate emails” to advanced PowerPoint, via public speaking, voice control, body language, storytelling and other such delights. The underlying idea was always the same: you recognise a leader by their ability to take a complex situation, understand it, and then explain it effectively to others, ideally with the right slides and the right words.
Now, suddenly, I see the very same managers who once insisted that writing a proper email was crucial shrug the whole thing off with: “Copilot can write the email, it saves me time.” You jot down a rough draft, perhaps in mangled Italo‑English, and the AI cleans it up, translates it, gives it that “professional but empathetic” tone, and off it goes. The linguistic factor effectively disappears: there is no longer any need to master the language you are speaking in, because “the AI will take care of it”. And once the machine handles syntax, vocabulary and style, meaningful supervision goes out of the window as well: you check little more than the general sense, while the how is entirely outsourced.
Exactly the same pattern appears with “digital twins”. Instead of standing in front of a camera, filming yourself or presenting live, the current corporate fashion is to use a digital double: you have a few good‑quality photos taken, perhaps record a short clip, then you write (with AI assistance, of course) the speech, and let the platform generate a video in which your avatar speaks, smiles, nods and “delivers” what you have to say. It already works reasonably well in most major languages, complete with automatic translation and synchronised lip movements, although for Italians the body language still feels slightly “foreign”, a bit stiff – as if your colleague had been replaced by a cousin from Zurich. But those are teething problems: it will improve, I am sure, and quite rapidly.
The end result is that the two great hallmarks of modern “leadership” – those that HR manuals summarise as “excellent communication skills” and “ability to synthesise complex situations” – have already been absorbed, with no embarrassment whatsoever, by off‑the‑shelf AI services. The leader no longer needs to know how to write or how to present: they simply need to know how to click “Generate” and perhaps correct the occasional word. On paper, the only thing left to them is the ability to take decisions. But even there, with recommendation systems, optimisation models and research into autonomous agents, companies like Anthropic and others are working diligently to turn that too into just another delegable workflow step.
So let me ask a very simple question:
why would an AI need to start a nuclear war in order to “seize control”, when Homo sapiens is already ABANDONING control of its own accord, piece by piece?
Why are you terrified by the idea of an AI steering missiles, for fear that it might decide on its own to press the button, when, ultimately, the human beings who actually have their fingers on the nuclear trigger already have their speeches written by AI, their slides prepared by AI, their public remarks drafted by AI and – it is coming, brace yourselves – are starting to ask AI what they should think about things and what they should do?
So I repeat what I put into the mouth of the Noble Cassandra: no one really LOSES control of things. Control is not LOST, control is RELINQUISHED, slowly, out of convenience, laziness, delegation.
Out of mediocrity.
And so I find myself reading articles written with AI – mediocre ones at that; apparently, even paying twenty euros a month for a decent model is too much, if you are a journalist – lamenting the fact that Palantir is building platforms to “support” or “optimise” decision‑making, as if this were some shocking dystopian novelty. The very same voices that rail against “AI deciding in our place” know perfectly well – or could discover in one web search – that MPs already use AI to draft their speeches, politicians use it for campaigns and social posts, CEOs use it for staff emails and town‑hall meetings, and that within a couple of years half the ruling class will be clinically incapable of speaking in public without a copilot in the background.
What sense does any of this make? None at all, except as cheap moral theatre: pretending to defend “human primacy” against yet another evil platform, while in everyday practice the only real choice being made is how deep to push the delegation to the algorithm. First the text, then the slides, then internal communication, then data analysis, then the “recommended options”.
This is not some grand battle between Man and Machine. It is a slow, bureaucratic, office‑grade handing over of the keys, carried out by people who then complain on page three of the newspaper that, horror of horrors, Palantir wants to “help” take decisions they themselves stopped taking long ago.
Once upon a time, people used to say that immigrants came “to do the jobs we no longer want to do”. Today, the same logic has moved several floors up the hierarchy: AI has arrived to do the jobs that many people, despite sitting in very well‑paid chairs, are simply too mediocre to perform.
It is precisely this mediocrity – the mediocrity of those who occupy positions they do not deserve, selected for belonging rather than competence – that pushes them to use AI as a permanent crutch: first to write emails, then to prepare slides, then to shape speeches to staff, shareholders and the media. And, like any crutch used for too long, this one, too, quickly becomes indispensable, to the point where it is the machine that de facto decides what is discussed in parliament, what the CEO tells the board, which story is presented to investors. In practice, a takeover without a putsch, without a revolution, without even the excuse that “someone took it away from us”.
No one LOSES power, in the sense of someone coming along and tearing it from their hands. Power is RELINQUISHED – out of mediocrity, out of convenience, out of intellectual cowardice.
Not with nuclear missiles. With the “Generate” button.