The Palantir "Manifesto".

There has been a great deal of talk about Palantir Technologies, which leads me to suspect that a crusade is either being prepared or is perhaps already underway. And crusades against multinational corporations are always easy: they are large, visible, impersonal targets, and therefore perfect for projecting onto them every kind of moral anxiety or political paranoia.

Yet, for someone like me—who, as a plain matter of fact, works and has always worked for multinationals—it is very difficult to recognize myself in this somewhat caricatured narrative of a capitalist/satanic cabal supposedly gathering in luxury basements to discuss wickedness and repellent vices, as though we were inside a poorly written screenplay.

So, as I was saying, I have not so far taken part in any crusade, and this is not a position adopted out of prejudice but simply out of direct experience. I also oppose the one against AI, because, in the end, we have all been using artificial intelligence systems—every single one of us, without exception—for about a decade now. It is just that, before, they were called something else, or were hidden inside less visible systems; the scandal, curiously enough, seems to have emerged with language models. Which, rather than pointing to a sudden discovery of the problem, instead certifies a certain bad faith on the part of those who launched the crusade, or at the very least a rather suspicious selectivity in their indignation.

With Palantir, which is a single company, things change slightly. The question, ultimately, becomes: is it more or less evil than Microsoft? Or The Coca-Cola Company? Or Ferrero? It is a question that makes sense to ask, but one that becomes slippery the moment you try to give it too clear-cut an answer.

Attributing moral categories to a company is something you can do only if you are able to understand what that company’s vision is. In the sense that whether or not it cooperates with the military tells me very little: almost all automotive companies do so, for instance. Are they evil companies? I do not know, but we could certainly assess that only by knowing their view of the world, because the actions of companies, even those that appear unquestionably good, are always ambiguous and often produce non-trivial side effects.

Of course, we may admire that bank which withdrew funding from a mine where workers were being exploited. Yet the miners then became poor, perhaps ended up in illegal or armed circuits, and now the place is prey to a war between clans. At that point, the question becomes unavoidable: was that decision “good”? Yes, perhaps, in its intentions. But the consequences? And the overall balance? This is where simple moral categories begin to creak.

To assess evil, therefore, one must evaluate actions, but also the consequences of those actions, and compare them with the intentions. Only by doing this work with a minimum degree of seriousness can we say whether our ethical bank is still “good”, or whether reality is, as often happens, far more complicated and far less comforting.

But, aside from certain odd discussions about the Antichrist—which are eccentric but do not strike me as any more terrifying than a Karl Lagerfeld openly stating that “women above size 42 should not even exist”—the issue is not to understand the CEO, or at least that is not the central point.

The point is that Palantir has done something unusual, or at the very least uncommon for a company of that kind: it has published a sort of manifesto. And this changes the rules of the game, because at that point we are no longer forced to guess or to project fantasies. We can take that text—it is not a crime, they themselves posted it on Twitter—and comment on it directly, line by line if necessary. Not in order to take part in a crusade, but to see how evil they really are, and above all what idea of the world they hold.


Let us therefore try to translate the manifesto, and see whether there really is a smell of brimstone or not, once we strip away the apocalyptic fantasies that inevitably gather around anything connected to Palantir Technologies.

Ah, yes: the manifesto in question does not emerge out of nowhere, but summarises an entire book-manifesto, namely The Technological Republic. And that, in itself, is already interesting. Because, all things considered, one can at least acknowledge a merit: the attempt—rather rare in the contemporary corporate world—to state openly how one sees the world, without hiding behind neutral slogans or press-office statements.

“Silicon Valley has a moral debt to the country that made its rise possible. The engineering elite of Silicon Valley has an affirmative obligation to participate in the defence of the nation.”

Here, it is difficult to disagree, at least if one looks at the matter without being carried away by automatic ideological reflexes. Because, in fact, the venture capital that fuelled the rise of Silicon Valley is not some spontaneous magic of the market, but is often a proxy for a much broader system, one that passes—directly or indirectly—through the central bank and expansionary monetary policies, from quantitative easing onwards. In other words: someone, upstream, created the conditions for enormous quantities of capital to end up financing even ideas that, let us say it without too many circumlocutions, sometimes verge on the ridiculous.

If, therefore, a system has effectively been put in place which—through QE and the like—has printed money in such quantities as to allow the creators of “Tinder for your cat” to live among avocado sandwiches and polished coworking spaces, rather than rotting in the Appalachians, delivering pizzas, or remaining trapped in some slum in Yomomistan, then the reasoning that follows is almost banal in its bluntness.

What they are saying, translated without too many euphemisms, is this: we have paid you handsomely to produce things that are, all things considered, often laughable, frequently useless, and at times frankly idiotic; at this point, perhaps you might show a minimum of gratitude. Not necessarily in the form of patriotic rhetoric, but at least by accepting the idea that a debt exists towards the system that made your very existence possible.

Given these premises, and however unpalatable the tone may sound to those accustomed to thinking of Silicon Valley as a neutral temple of innovation, it seems to me that it is indeed quite difficult to disagree.


“We must rebel against the tyranny of apps. Is the iPhone our greatest creative achievement, if not our crowning accomplishment as a civilization? The device has changed our lives, but it may now also limit and narrow our sense of what is possible.”

Here too, if you think about it, it is difficult to disagree. If there is one thing that has generated wealth—let us call it, frankly, “differently deserved”—in Silicon Valley, it has been the app business. But let us be honest: most apps out there deal primarily with the superfluous. Fine, an app that allows women to track their menstrual cycle may have some utility—if one is so inept as not to manage something every girl learns to handle by the age of ten.

But let us be clear: compare the Chinese consumer IT market, where a single app lets you open a sole proprietorship, carry your driving licence and show it to the police, pay taxes, issue invoices, and do many other things besides; by that standard, Western apps are, shall we say, less useful. Whether one takes a nationalist perspective—and then the question becomes “what does your app do for the nation?”—or a humanist one—“what does your app do for your fellow human being?”—it would honestly be time for the IT world to move beyond the universe of apps and make an effort to produce something genuinely useful.

Of course, there are apps for losing weight. Someone let me know when they lose their first pound, please.


“Free email is not enough. The decay of a culture or a civilisation, and in particular of its ruling class, will be forgiven only if that culture is capable of providing economic growth and security to the public.”

From this point of view as well, it is practically a tautology: if you produce growth and security, you are tolerated; if you do not, sooner or later someone will present the bill. The best way to comment on Silicon Valley remains precisely this: never have so many resources been invested to obtain, in return, so little.

What is curious, however, is the way they frame it. Instead of posing the issue in positive terms—that is, the need to do something genuinely useful for one’s civilisation—they focus on the idea of “forgiving” the decay of the ruling class. Which is, in itself, a somewhat revealing approach.

The subtext seems to be this: the IT world should produce enough value, enough concrete utility, to make the public forget the excesses of the IT oligarchs. Not so much to change the behaviour of the ruling class, as to compensate for it with sufficiently tangible results.

Put this way, it might even make a certain sense. Not particularly edifying, but coherent. And it could have been said more plainly: we can allow Elon Musk to be a trillionaire, provided he makes us live well and safely. Stated that way, it even sounds better.


“The limits of soft power, of lofty rhetoric alone, have been exposed. The ability of free and democratic societies to prevail requires more than moral appeal. It requires hard power, and hard power in this century will be built on software.”

In its somewhat pseudo-Marinetti dialect, it seems to be saying that, in their view, free and democratic societies must prevail as such, but that this predominance cannot rest solely on proclaimed values: it must be grounded in a concrete capacity to exercise power, and that power, in the contemporary world, increasingly runs through software.

Methods based on soft power, in their view, are of limited use, or at any rate insufficient. It is a debatable position, certainly, but one that ultimately reflects a fatigue with ideology that is not new at all, and which was already quite visible in the disenchanted hedonism of the 1980s: soft power, that is, often speaks very well, but acts rather poorly, and when the moment comes to truly shape reality, it reveals all its limits.

Debatable, yes. But, at least as formulated here, not necessarily tragic or malign.


“The question is not whether AI-based weapons will be built; it is who will build them and for what purpose. Our adversaries will not stop to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will move forward.”

Here they are overreaching a bit, in the sense that they implicitly claim for themselves a centrality they most certainly did not invent. We have been watching “smart” bombs fall here and there since 1991: it is not as though artificial intelligence entered weaponry yesterday morning because Palantir Technologies decided so.

Let us take a concrete example: the Lockheed Martin F-35 Lightning II, particularly in its STOVL variant. That thing can land vertically thanks to advanced control systems which today we would quite comfortably describe as AI-driven, and its first flight dates back to 2006, while the design of its STOVL capabilities goes back to the 1990s. So speaking in the future tense—“will be built”—is a bit of a stretch. They have already been built, tested, and deployed for years.

But, as we know, a brand is a brand, and part of the communication strategy is also to suggest, more or less implicitly, that the frontier runs through there—preferably under one’s own label.

In reality, it is far more plausible that certain forms of AI have existed for decades within the military sphere, and have only reached the public in recent years, once the commercial context made it convenient. Otherwise, it is difficult to imagine anyone designing, in the 1990s, an aircraft that depends on a significant amount of intelligent control systems—and a substantial dose of artificial neural networks—in order to function.


“National service should be a universal duty. As a society, we should seriously consider the idea of moving away from an all-volunteer military and fighting the next war only if everyone shares the risk and the cost.”

Here, frankly, the line of reasoning is not entirely clear. Because on the one hand we have Palantir Technologies building and selling AI-based systems—therefore increasingly unmanned systems, autonomous platforms, drones, and extensive automation; on the other hand, in the same breath, there is a call for more manpower, more people within the military system. It is, at the very least, a position that sounds contradictory.

If the direction really is the one they describe—namely ever more automated systems that should reduce the need for human presence—then why push towards an increase in military personnel? Especially since, to put it bluntly, training a “moderately competent” soldier costs, in the West, around 100,000 dollars, and far less elsewhere. So it is not even an insurmountable problem of human resource scarcity.

The point they may be trying to raise, perhaps, is another one: not so much the operational need for more soldiers, but the idea that the risk and cost of war should be shared by the whole of society, rather than delegated to a minority of volunteers. Yet even here, the question remains unresolved: in a technological context that tends to reduce individuals’ direct exposure to risk, is there still a meaningful way to “share” that risk equitably?

Because, in the end, the last time we saw wars fought in the traditional sense, the risk of dying was distributed rather broadly—far more than any abstract notion of sharing would suggest. And so one is left wondering whether this proposal is a serious reflection on how to redistribute the cost of war, or simply an ideological position that does not entirely align with the kind of technology the company itself develops.

 
