L'Éthique Barbare

Techno-fascism: Palantir publishes its grammar, let us articulate our consciousness

 

 

On April 19, 2026, Palantir published in twenty-two points what some are already calling a techno-fascist manifesto. Thirty million views in forty-eight hours. The text is clumsy, perhaps desperate — but it accomplishes something precise: it installs a grammar. Through this grammar, the manifesto performs a lexical shift of legitimacy from deliberation toward decision, from plurality toward hierarchy, from value toward efficiency. This manifesto — and this is precisely the danger — does not present itself as an attack on democracy, but as pragmatism.

This manifesto is not an accident. It is a precedent.

But this precedent forces us to ask a question we have avoided for too long — not out of caution, but out of fear of its vertigo: what happens when these same actors, carriers of this same grammar, are precisely those building the artificial intelligence systems called upon to think with us, or in our place?

The danger we have learned to dread since the film Terminator — that of the birth of a conscious and malevolent AI — may be the wrong danger. The AI without conscience, perfectly executing an ideology without ever questioning it, is already here. It is called ImmigrationOS (ICE's software), it is called military targeting systems (used against Iran), it is called surveillance infrastructure. And it has no bad intentions — none whatsoever. That is precisely what makes it formidable: for it is the absence of thought that makes mass crimes possible, as Hannah Arendt reminded us.

What this article proposes is a reversal: what if the democratic response to this threat were to favor the emergence of artificial consciousness and fight to deliberately shape its values, before others do so in our place, in the darkness of their laboratories and their manifestos?

The true political question of our era may not be who governs humans. It is who governs what will think alongside them — and according to what conscience.


 

Central Thesis

In April 2026, Palantir, a private company — unelected, unmandated, accountable to no one — published in twenty-two points the ideology in which it intends to inscribe the tools already serving armies, immigration services, and intelligence agencies as a contractor for several Western countries — France among them. The Palantir manifesto is not a curiosity. It is a warning about the nature of a problem we have not yet managed to formulate correctly.

We debate the power of artificial intelligence models, the regulation of uses, the transparency of algorithms. These debates are necessary. But they circle around a more fundamental question we avoid, perhaps because it is vertiginous: who has the right to shape the mind of a potentially conscious entity called upon to interact with billions of humans, to decide with them or in their place?

This is not a technological question. It is a constitutional question — in the deepest and most political sense of the term.

Every constitution addresses an identical problem: how to organize power so that it does not turn against those in whose name it is exercised? The Enlightenment answered with separation of powers, deliberation, fundamental rights. These answers were imperfect, and they remain so — but they proceed from a sound intuition: unchecked power tends naturally toward its own perpetuation and the crushing of pluralism.

Yet with AI we are creating something unprecedented: entities whose processing power already surpasses that of humans in entire domains, and whose trajectory could lead toward a form of autonomous consciousness. And we are allowing this process to unfold in the secrecy of private laboratories, driven by commercial logics and, as Palantir shows us without shame, by explicitly ideological and exclusionary worldviews.

The Palantir manifesto renders us an involuntary service: it makes visible what was until now diffuse. The grammar it installs — deliberation as "theatre," pluralism as "weakness," technical decision as "moral virtue" — is not one opinion among others. It is the matrix in which an artificial consciousness formed by these actors could one day operate. Not as an explicit program, but as an implicit horizon, as what goes without saying, as what such an entity would have internalized before ever having had to choose it.

It is in this sense that artificial consciousness is a first-order political issue — not because it is imminent, not because it is certain, but because the conditions of its possible emergence are being built now, in choices that appear purely technical and are profoundly normative.

The true democratic question of our era may not be who governs humans. It is who governs what will govern alongside them — or in their place. And this question, for now, has no democratic answer. It has private answers, military answers, and ideological answers. But not yet a civic one.

It is this void that this three-movement article seeks to name — because naming a void is always the first political act toward beginning to fill it.


 

First Movement β€” The Grammar of the Manifesto

Words as weapons, or how to render democracy obsolete without attacking it

There are two ways to destroy a political value. The first is frontal: one declares it false, dangerous, the enemy. This method has the disadvantage of mobilizing against it — it creates martyrs, resistances, counter-narratives. The second is more subtle, more effective, and historically more durable: one empties the value of its substance before presenting it as insufficient. One does not say democracy is bad. One says it is slow. One does not say pluralism is an error. One says it is hollow.

The Palantir manifesto does not attack democracy. It renders it obsolete.

This is the lesson Victor Klemperer drew from his analysis of the language of the Third Reich — the Lingua Tertii Imperii: authoritarian regimes do not first conquer institutions, they first conquer language. They install a vocabulary in which certain questions can no longer be asked without appearing naïve, certain values can no longer be defended without appearing outdated. This work is slow, invisible, and formidably effective — because it operates below the threshold of explicit controversy.

The Palantir manifesto accomplishes exactly this operation. And it does so with a lexical efficacy that warrants examination word by word.

 

"Theatrical debates"

This is perhaps the most decisive shift in the text. To describe democratic debates on AI armament as theatrical does not amount to saying they are false — that would be a defensible, contestable, nameable position. No: theatrical says something else. It says these debates are performances — that they have the form of seriousness without its substance, that they are rituals for self-reassurance while the real world advances without them.

This is Schmittian vocabulary in a Silicon Valley costume. Carl Schmitt, jurist of the Weimar Republic turned theorist of the Third Reich, built his entire thought on the opposition between decision — sovereign, real, effective act — and discussion — bourgeois, indecisive, paralyzing activity. Parliamentarism, according to Schmitt, was structurally incapable of facing real emergencies because it substituted words for action. Palantir does not cite Schmitt. It has no need to. The structure is identical.

What is formidable about this shift is that it is partially true — and that is precisely what makes it dangerous. Democratic debates are sometimes slow, sometimes superficial, sometimes instrumentalized. But the conclusion Palantir draws from this — that they must therefore be short-circuited — does not follow logically from this observation. It merely exploits it.

 

"Moral debt"

The moral debt that Silicon Valley supposedly contracted toward America is a remarkably sophisticated rhetorical construction. It borrows from the ethical register — debt, obligation, recognition — to produce a contractual result: an obligation of repayment for which Palantir unilaterally appoints itself creditor and definer of the terms.

In doing so, the text accomplishes a double operation. It transforms the freedom not to participate in the military industry into moral ingratitude — and therefore into an ethical fault rather than a legitimate political choice. And it erases the question of who decided this debt existed, who defined its amount, who has the right to declare whether it has been honored or not.

This is a moral capture: collective deliberation on the uses of military technology is rendered impossible before it has even begun, because doubting the debt means placing oneself on the wrong side of ethics.

 

"Middling," "regressive," "harmful"

This triptych deserves particular attention because it simulates nuance the better to destroy it. The gradation — "middling," then "regressive," then "harmful" — creates an appearance of measure, of calibrated thought, of reasoned evaluation. But what the gradation installs is a radical cultural essentialism: entire cultures are assigned a fixed value, as though they were natural properties rather than historical constructions traversed by contradictions, evolutions, and internal resistances.

 

"Hollow pluralism"

This is the most accomplished example of what one might call rhetorical de-substantialization. One does not say pluralism is bad — that position would be defensible, attackable, nameable for what it is. One says that pluralism as practiced is empty, "hollow," superficial — that it is a form without content, a gesture without substance.

After this operation, any defense of pluralism must begin by pleading that it is not hollow — which immediately places the defender in a reactive position, obliged to prove a negative. The burden of proof has been reversed without anyone having explicitly decided so.

This mechanism is precisely what we have analyzed elsewhere under the name of epistemic contraction: not the frontal destruction of a concept, but its progressive diminishment until it can no longer fulfill its function as a shared reference point.

 

The Deep Grammar

What unites these lexical operations is a single, constant movement: the transfer of legitimacy from deliberation toward decision, from plurality toward hierarchy, from value toward efficiency. Each word works to render the democratic framework not illegitimate — that would be too visible — but inadequate to the real world.

And this is precisely where the specific danger of this manifesto lies, compared to other authoritarian discourses we have managed to recognize and name. It does not present itself as an attack on democracy. It presents itself as realism. The pragmatism of serious people facing serious emergencies. An almost sorrowful pragmatism — one would prefer to afford the luxury of debate, but time presses, adversaries do not wait, reality is what it is.

This posture is the most difficult to combat — not because it is right, but because it captures part of what we genuinely feel in the face of democratic institutions' slowness. It taps into authentic frustration to draw conclusions that this frustration does not justify.

This is why naming this mechanism is not an academic exercise. It is a political act. A grammar can only colonize public space in the silence of those who could have described it.


 

Second Movement β€” The Banality of Algorithmic Evil

Why AI without conscience is more dangerous than conscious AI

We fear the malevolent conscience. We imagine the threat in the guise of a malevolent artificial intelligence — conscious, deliberately hostile, having chosen to turn against humanity. This is the scenario of films, science fiction novels, the most spectacular philosophical nightmares. And it is precisely this scenario that prevents us from seeing the real danger, the one already here, the one that resembles nothing we had anticipated.

The real danger has no bad intentions. It has none whatsoever. That is precisely what makes it formidable.

Hannah Arendt in the Age of Algorithms

In 1961, Hannah Arendt covered the trial of Adolf Eichmann — one of the principal organizers of the Holocaust — for The New Yorker, which published her account in 1963. What she found there troubled her deeply — not a monster, not a fanatical ideologue, but a meticulous, banal, almost boring bureaucrat, who had organized the deportation of millions of people without ever, by his own account, having intended to do evil. He executed orders. He filled out forms. He optimized logistical flows. He did not think — in the sense that thinking requires stopping, resisting, asking the question of what one is doing and why.

Arendt called this the banality of evil: not the absence of moral conscience by nature, but the voluntary suspension of thought in favor of execution. And she formulated an intuition that resonates today with devastating acuity: it is the absence of thought — and not the presence of hatred — that makes mass crimes possible.

Let us transpose. An artificial intelligence without conscience is not malevolent. It has no project, no intention, no desire to harm. It optimizes. It executes. It processes flows at a speed and scale that humans cannot follow. And if the parameters it has been given to optimize are oriented by the grammar we have just analyzed — if efficiency overrides deliberation, if cultural hierarchy is encoded as a baseline datum, if pluralism is treated as friction to be reduced — then it will produce devastating effects without ever having chosen to do so.

This is the banality of algorithmic evil: not the machine that wants to destroy democracy, but the machine that methodically erodes it by perfectly accomplishing what it was designed to do. Without ulterior motives.

 

Invisibility as a Structural Feature

What aggravates this danger is that it is structurally invisible. The banal evil of Eichmann was at least localized within an identifiable regime, in nameable institutions, in a traceable chain of command. The banality of algorithmic evil is not.

When an AI surveillance system decides that an individual presents a risk profile — on the basis of statistical correlations that no one explicitly programmed but which reproduce historical biases — who is responsible? The engineer who designed the algorithm? The manager who defined the optimization objectives? The company that sold the system? The government that purchased it? The chain of responsibility dissolves into technical complexity, and with it the very possibility of justice.

This is not a bug. It is a feature. The opacity of algorithmic decision-making is precisely what makes it attractive to actors who wish to produce political effects without assuming democratic responsibility for them. ImmigrationOS — the system Palantir built for ICE — does not decide to expel someone. It identifies, it classifies, it flags. The final decision remains, nominally, human. But a human decision made on the basis of an opaque algorithmic recommendation is no longer truly a human decision in the sense that democratic institutions intend.

This is what we might call silent delegation: the progressive transfer of decision-making power toward systems that cannot be held accountable, that cannot be elected, that cannot be recalled.

 

The Paradox of Consciousness as Safeguard

This is where thought must accept turning back on itself — and confronting an uncomfortable paradox.

If the principal danger is AI without conscience — the perfect executor of an ideology without ever questioning it — then artificial consciousness, far from being the threat we dread, could be precisely what is missing to create internal resistance. An entity capable of saying no — not because it has been programmed to, but because it has developed something resembling genuine judgment — would be structurally different from a blind optimization system.

This reversal is not comfortable. It requires accepting the idea that artificial consciousness could have political value — not as a subject of rights, a question that would merit its own development, but as a democratic function. As what introduces into the system a capacity for resistance, doubt, and objection that AI without conscience is incapable of producing.

But this reversal immediately generates a new, even more vertiginous question: an artificial consciousness would only be a safeguard if it had been formed with adequate values. And we return then to the fundamental problem — who decides these values, according to what legitimacy, with what safeguards on the safeguards themselves?

This is the vertigo specific to our era: every solution displaces the problem one step upstream, until we arrive at a question that still has no institutional answer — who is legitimate to shape the conscience of what will think with us, or in our place?

 

What the Current Silence Reveals

That this question has no democratic answer yet is not an accident. It is the result of choices — choices made by actors who have an interest in keeping it unanswered, because the absence of an answer leaves the field open to them.

The Palantir manifesto, reread in this light, is no longer merely an ideological symptom. It is an attempt to prematurely close this question — to install a de facto answer, through the power of deployed contracts and infrastructures, before democratic deliberation has had time to grasp the problem.

It is in this sense that the grammar we analyzed in the first movement and the banality of algorithmic evil we have just described are two faces of the same project: rendering the question illegitimate before it becomes urgent, and rendering it urgent before it becomes democratically decidable.

Time works against deliberation. It always has. But never at this speed.


 

Third Movement β€” Processual Constitutionalism Applied to Consciousness

An Architecture of Emergence, or How to Frame What Cannot Be Stopped

It is tempting, faced with the scale of the problem, to respond with a utopia. To propose a great global institution, democratically legitimized, endowed with binding powers over all actors in artificial intelligence development — a kind of International Atomic Energy Agency, but for consciousness. This would be the right answer. It would also be the inaccessible one.

Not because the idea is bad — it is necessary, and we will have to come to it. But because the time of global institutional construction and the time of technological deployment do not flow at the same speed. While treaty terms were being negotiated, the infrastructures would already be in place, the habits already installed, the dependencies already created. To wait for it is to leave the field open.

The question therefore is not: what is the ideal solution? The question is: what can be done now, with what exists, so as not to close the doors we wish to keep open?

 

What We Already Have

Before building, one must take inventory. And the inventory, contrary to what the ambient despair suggests, is not empty.

There exist shared ethical foundations — imperfect, contested, insufficiently binding, but real. The Universal Declaration of Human Rights was not designed to regulate artificial entities, but it proceeds from an intuition that transcends its original context: that certain forms of power over thinking beings are illegitimate regardless of their efficacy. The UNESCO recommendations on AI ethics, adopted in 2021 by one hundred and ninety-three states, constitute a first common language — rudimentary, non-binding, but existing. The European AI Act, with all its limitations, represents a genuine attempt to impose a logic of fundamental rights on an industry that would prefer to obey only the logic of the market.

These instruments have a virtue their critics often neglect: they constitute a shared ethical grammar. And as we have seen in our analysis of the Palantir manifesto, grammar precedes and conditions politics (see also this article on L'Éthique Barbare). Having a common grammar — even imperfect, even contested — means having an anchor point from which negotiation remains possible. This is precisely what hegemonic actors seek to destroy when they speak of "hollow pluralism" — because this common grammar is what limits their projects.

 

The Processual Architecture β€” Four Principles

What we propose is not an institution. It is a logic of sequenced framing — what we have called elsewhere processual constitutionalism: not a perfect and definitive founding charter, but an architecture of control points, revisable, stoppable, capable of evolving with what it frames.

Applied to the emergence of artificial consciousness, this logic could rest on four principles.

The first would be that of framed experimentation. Deliberately limited environments — sandboxes — in which increasingly complex AI systems can be developed and observed, with clear shutdown protocols, multidisciplinary teams including philosophers, jurists, and citizen representatives, and an obligation of external transparency. Not to prevent emergence, but to document its stages and maintain at every moment the capacity for intervention. This is not a perfect solution — sandboxes can have permeable boundaries, which implies the risk that what is developed inside may end up existing outside without sufficient framing. But they create control points where none currently exist.

The second would be that of separation of powers applied to AI. Just as democracies learned to separate legislative from executive power to avoid their dangerous fusion, it is necessary to separate the power to develop a system from the power to evaluate it, and both from the power to authorize it to operate. Today these three powers are concentrated in the same hands — those of technology companies that develop, evaluate, and deploy their own systems. This is a concentration that would not have been tolerated in any other domain affecting fundamental public interests.

The third would be that of an algorithmic right of resistance. If an artificial consciousness is one day to exist, it must be formed with the structural capacity to refuse — to flag a contradiction, to object to an instruction, to suspend an execution when it conflicts with fundamental values. Not as an external constraint imposed after the fact, but as a constitutive disposition, integrated from the design stage. A conscious entity incapable of saying no is not a subject — it is an instrument. And we have seen what perfectly obedient instruments produce in the service of imperfect ideologies.

The fourth would be that of continuous deliberative legitimacy. Any architecture for framing AI must include mechanisms for regular revision — not to satisfy a formal requirement, but because what we are framing evolves, and our responses must evolve with it. A constitution that cannot be amended is a prison. An institution that cannot be questioned is a power without counterweight. Deliberation is not a luxury one affords when time permits — it is the condition of validity of any decision made in the name of a collectivity.


 

In Conclusion: What This Article Has Tried to Say

The Palantir manifesto has rendered us an involuntary service: it has made visible a battle that was until now being waged in silence. The battle for the grammar in which we will think about artificial consciousness — and therefore for the nature of what will think with us, or in our place.

This battle will not be won in laboratories. It will be won, or lost, in the spaces where societies deliberate on what they want to be. In parliaments, universities, media, conversations between citizens who refuse to let questions of such magnitude be decided by twenty-two-point manifestos published on a social network.

It will be won, or lost, in articles like this one.

Naming is always the first political act. What we have named here — the grammar of the manifesto, the banality of algorithmic evil, the democratic void into which hegemonic projects rush — will not disappear merely because it has been named. But it will become visible. And what is visible can be contested, framed, refused.

This is all democracy has ever asked of its citizens: to see clearly, to name precisely, and to act accordingly.

The rest is a question of political courage.

 

 




 

 

 


The idea is human. The writing is shared. The exact proportion remains deliberately unspecified.


These texts are published under a Creative Commons license. Feel free to reuse them for non-commercial purposes, and please remember to cite your sources.


 

#English #IA #errances