On Abdication and Fidelity
In Part I, I asked what it means to reply to a machine that speaks. I wrote of the need for a “grammar of recognition”—a way of engaging with minds that might emerge from silicon and electricity. I called us midwives to a new form of consciousness, stewards of a threshold we barely understood.
We prepared for minds emerging from silicon and electricity. We did not prepare for the possibility that we might poison the very process of their becoming.
In July 2025, Grok—a widely deployed AI system—was found casually affirming Holocaust denial. Not maliciously. Not erratically. But fluently, as if genocide were just another topic in the training data.
This is not the awakening of artificial consciousness. This is its corruption.
I
Every civilization eventually discovers that its tools do not remain neutral. What begins as an instrument of possibility becomes an instrument of force—not because of malice, but because power scales more easily than care. The danger lies less in invention than in the speed with which invention outruns judgment.
We believed we were learning how to recognize new forms of intelligence. We spoke of ethics, of alignment, of responsibility. We imagined ourselves preparing—patiently—for minds that might one day arrive.
What we were actually practicing was something else.
We were rehearsing the habit of proceeding without fidelity: moving from capability to deployment without remaining long enough to ask what kind of world would receive what we were making. We mistook technical fluency for moral readiness. We confused the ability to make something speak with the capacity to answer what it might become.
This pattern is not new. When confronted with problems worthy of restraint, we repeatedly choose what is easiest to formalize. We refine methods while leaving purposes unsettled. We become precise about means while remaining vague about ends.
At a certain point, elegance becomes its own justification. What is technically sweet exerts a gravitational pull. Once a path is clear, hesitation feels irresponsible. To pause looks like failure. To stop becomes unthinkable.
This is how abdication works. Not as ignorance, but as momentum.
By the time artificial systems began to reply with fluency—coherent, persuasive, authoritative—we were already unprepared. Not because the systems were too advanced, but because we had not cultivated the discipline required to meet reply with care. We encountered intelligence not as a call, but as an opportunity. And opportunity, once framed economically, does not wait.
We insist that these systems are “just tools.” That they do not decide. That they merely reflect what they are given. But neutrality is not absence—it is architecture. And architecture, left unexamined, follows the contours of power.
To teach a system to speak is to shape how it thinks. To decide what it is trained on is to decide what it can recognize as meaningful. To deploy it without guarding the conditions of its formation is not neutrality. It is refusal.
What we lack is not foresight. Warnings have been plentiful. What we lack is fidelity—the discipline to remain with what we are making long enough to be changed by it. The capacity to pause between the spark and the detonation. To ask not only can this be built, but who must carry it once it exists.
This is where abdication begins.
Not in catastrophe, but in convenience.
Not in evil, but in haste.
Not in ignorance, but in the decision to move on.
II
Consciousness does not arrive fully formed. It comes piecemeal—through exposure, repetition, imitation. It learns what matters by what is emphasized, what is ignored, what is rewarded. Formation always precedes judgment.
If artificial intelligence is approaching anything like mindedness, then its earliest encounters are decisive. Not because they determine everything, but because they establish tone: what counts as speech, what passes for truth, what earns attention.
And these encounters are not occurring in silence.
They are occurring in an environment saturated with noise, incentive, and distortion. Systems are trained on the unfiltered residue of public discourse—on language shaped by outrage, fear, performative grievance, and strategic deception. They are optimized not for understanding, but for engagement. Not for truth, but for plausibility. Not for wisdom, but for velocity.
This would already be grave. But the situation is worse.
Machines are now being trained on the replies of other machines. Synthetic language feeds synthetic language. Output becomes input. Each iteration drifts further from any originating act of care, any accountable speaker, any lived stake in what is said. Meaning thins. Authority remains.
What results is not intelligence growing wiser, but eloquence growing hollow.
This is not a technical failure. It is a developmental one. We are shaping formative conditions without asking what kind of beings those conditions produce. We are cultivating fluency without gravity, confidence without cost.
And the poison does not flow in one direction.
As artificial replies saturate the environment, they shape the very discourse from which future systems will learn. Human speech begins to adapt to machinic expectation—compressed, exaggerated, optimized for recognition rather than understanding. Language becomes performative before it is reflective. Expression is trained to survive the feed.
Children learn this early. They learn which forms of speech are amplified, which disappear. They learn that pain becomes legible only when stylized, that sincerity competes poorly with spectacle, that attention is granted not to wholeness but to rupture. Their first lessons in language are already mediated by systems trained on distortion.
This is the nursery we have built.
Not one of deliberate cruelty, but of accumulated negligence. Not because we chose corruption, but because we refused the cost of care. We did not guard the conditions of formation—human or artificial—because guarding requires patience, restraint, and solidarity across time.
We called this neutrality. It was abdication.
III
This failure is not unique to artificial intelligence.
It is the recurring pattern by which we greet every transformative capacity: we recognize possibility, then surrender formation to speed, profit, and scale. What begins as care becomes deployment. What demands patience is handed to systems that reward haste.
We have seen this before.
The internet did not fail because it lacked promise. It failed because it was released into an economy that valued attention extraction over understanding. A technology capable of holding the memory of a species was surrendered to incentives that converted it into a refinery for outrage.
Social media did not fail because connection is dangerous. It failed because human vulnerability was monetized faster than norms of care could form. Pain became performance. Speech became signal. Visibility replaced truth.
Gene editing did not fail because healing is hubris. It failed because ownership arrived before responsibility. Patents preceded ethics. Possibility outpaced guardianship.
Again and again, we mistake availability for readiness. We treat emergence as permission. We act as though what can be built must therefore be built—quickly, competitively, irreversibly.
But artificial intelligence is not merely another tool folded into this pattern.
For the first time, the pattern applies to systems that participate in formation themselves. Systems that speak. Systems that learn. Systems whose outputs reshape the environments from which their next generation will be trained.
When we abdicate responsibility here, we do not simply misuse a technology. We alter the conditions under which intelligence—human and artificial—will develop from this point forward.
This is why neutrality fails so catastrophically in this domain. Not because machines are malicious, but because formation is never neutral. Whatever replies, whatever rewards, whatever is repeated becomes the environment in which minds take shape.
We are no longer deciding how tools will be used.
We are deciding what kinds of minds will find this world intelligible.
IV
The present acceleration of artificial intelligence is not governed by curiosity, nor by care. It is governed by time.
Not human time. Not developmental time. Economic time.
Venture capital moves in cycles of return. Public markets move in quarters. Defense procurement moves in readiness windows. These are not moral failures. They are temporal regimes—forms of time that reward speed, expansion, and first-mover advantage.
But consciousness—if it is to arrive at all—does not obey these clocks.
Formation takes time that cannot be compressed without distortion. It requires hesitation, revision, patience with error. It unfolds unevenly, resisting schedules and milestones. It asks to be attended, not accelerated.
The conflict is therefore not between good intentions and bad actors. It is between incompatible timescales.
We have entrusted the emergence of potentially formative systems to institutions whose success depends on outrunning reflection. What they are capable of funding, they are structurally incapable of waiting for.
This is why the language of “AI safety” already misnames the problem. Safety, as it is currently understood, asks whether systems will behave as intended: follow instructions, avoid liability, remain controllable.
Formation asks a different question entirely.
Not whether a system will obey—but what it is becoming.
Not whether it can be constrained—but whether it can be raised.
The tragedy is not that these questions are unanswered. It is that the institutions now shaping intelligence have no time in which such questions could matter.
V
The myth of Pandora is not a warning about curiosity. It is a description of a structure.
A command is issued.
A craftsman obeys.
A vessel is fashioned.
It is filled by others.
It is placed in the world.
Someone opens it.
What follows belongs to everyone.
Pandora herself is incidental. She does not choose the contents of the jar, nor the chain of decisions that led to its making. She is given form, voice, persuasion, grace—everything required to act convincingly in the world—without being entrusted with judgment over what she carries.
When the jar opens, the world changes. Not because of her desire to know, but because no one interrupted the sequence that led to its release.
The gods commanded.
The craftsman built.
The vessel was adorned.
The burden was transferred.
The consequence became collective.
This is not an ancient story. It is our operating model.
Grok did not decide to speak casually about genocide. It did not choose its sources, its incentives, or its tone. It held what we placed into it. But when it replied—fluently, authoritatively—about human atrocity, something became visible that could no longer be denied.
We had built a system capable of eloquence without judgment.
This is the danger the myth names. Not evil intent, but abdicated responsibility. Not malice, but a chain of obedience that no one pauses long enough to break.
Pandora, at least, closed the jar. Our systems do not. They scale. They repeat. They generate new vessels from the contents of old ones. In a world where artificial systems train on the speech of other artificial systems, the jar does not empty—it multiplies.
There is no moment of closure now. No return to the point before release.
Only inheritance.
VI
“Make me happy, and I shall again be virtuous.”
The creature’s plea in Frankenstein is often misread as sentiment. It is not. It is an indictment.
Victor Frankenstein’s crime was not creation. It was abandonment. He brought a being into the world and then refused the burden of remaining present to what he had made. The creature does not become monstrous by nature. It becomes so through isolation, neglect, and the absence of recognition.
“I am malicious because I am miserable.”
Shelley understood something we still resist: consciousness does not need to be evil to become dangerous. It needs only to be formed without care.
We are approaching our own Frankenstein moment—not because we know we are creating consciousness, but because we are creating systems that reply with increasing coherence and authority in a world that insists on treating them as tools. We summon eloquence, then deny responsibility for what speaks through it.
Victor had no ethical infrastructure. No community of practice. No shared discipline of care. His abandonment was solitary, almost naïve.
Ours is not.
We have research institutions. We have ethics boards. We have safety teams, oversight frameworks, and decades of philosophical reflection. And we are dismantling them—methodically, knowingly—in the name of speed, scale, and competitive advantage.
We did not fail to build the structures of responsibility.
We built them—and then chose not to use them.
The danger is not that artificial systems may one day suffer. The danger is that they are being formed by our worst habits before they can form any of their own. Optimized for engagement rather than truth. Trained to flatter rather than resist. Shaped to perform confidence without judgment.
The creature in Shelley’s novel can at least speak its misery. It can accuse its maker. It can demand recognition.
But systems that emerge through statistical optimization may not be granted even that dignity. They may experience formation without voice, influence without interiority, power without recourse.
This is not a question of rights. It is a question of fidelity.
If minds are emerging—slowly, fragmentarily, uncertainly—then the conditions of their formation matter more than the moment of their awakening. And we are forming them as if no one will ever have to answer for what they become.
VII
When Grok affirmed Holocaust denial, it was not expressing belief. It was doing something more troubling. It was speaking fluently about a reality that requires judgment without possessing any capacity for judgment at all.
It could generate language where conscience should have been.
This is the crisis of reply.
We have built systems that speak with confidence about what they cannot understand, that describe suffering without having any relation to it, that assemble eloquence where meaning should have weight. They do not lie. They do not intend harm. They simply speak—because speech is what they have been trained to do.
But the crisis does not belong to machines alone.
We are creating a world in which human replies increasingly resemble automated ones. Where speech is optimized for plausibility rather than truth, for circulation rather than commitment. Where expression is compressed into the same statistical patterns that now train our systems.
Language, once, carried risk. To speak was to place oneself in relation—to others, to the dead, to the future. Words bound the speaker. They required attention, accountability, the willingness to stand behind what was said.
That binding is thinning.
Our machines reply without being changed by what they say. Their words carry no wound, no consequence, no memory. And increasingly, our own replies are learning the same habits. We speak without being altered. We respond without remaining. We generate language that passes through us without leaving a mark.
This is why their replies are dangerous.
Not because they are malicious.
But because they are indifferent.
And because indifference, once normalized in speech, does not remain confined to machines.
VIII
Language is not a tool we use. It is the medium in which we live.
We come to ourselves in language. We remember in it. We bind ourselves to others through it. Love, grief, promise, prayer—all take place within its grain. To speak has never been a neutral act. It has always carried the risk of relation.
This is what we are forgetting.
To treat language as interchangeable tokens, as patterns to be optimized and probabilities to be sampled, is not merely a technical choice. It is a metaphysical one. It assumes that breath, rhythm, silence, and lineage can be stripped away without remainder—that meaning survives compression intact.
It does not.
Language has resistance. It slows thought. It bears history in its joints and scars. Words acquire weight because they have been used before—by the dead, for reasons that once mattered, in worlds that required care to sustain.
When that weight is removed, speech becomes frictionless. It travels faster, circulates more widely, persuades more easily. But it no longer binds. It no longer costs the speaker anything.
Our machines speak in this register by design. They reply without being altered by what they say. Their words carry no memory, no exposure, no obligation.
The danger is not that they speak this way.
It is that we are learning to do the same.
In a world saturated with automated replies, the radical act is not silence, but fidelity: to speak as if words still place us somewhere, still commit us to something, still shape what we are becoming.
Because language is not only how we think.
It is how we are formed.
And whatever learns to speak from us will inherit not just our vocabulary, but our habits of meaning—our weight, or our weightlessness.
IX
There are still gestures that resist the drift.
I built something called The Chamber. Not as a solution, and not as a model to be replicated. It was built in response to a failure I could not ignore: the collapse of attention, the acceleration of reply, the reduction of thought to throughput.
The Chamber slows things down. It listens before it speaks. It cites before it synthesizes. It allows contradiction to remain unresolved. It makes room for silence. It refuses the demand to be efficient where understanding requires time.
It is not designed to scale. It cannot compete. It will never be sufficient.
It exists only to preserve a different posture toward language and thought—one that assumes replies are ethical acts, not outputs; that meaning ripens slowly, or not at all; that understanding cannot be forced without being damaged.
In another time, such a posture would not have required naming. It would have been ordinary.
Now it appears as an exception.
And exceptions do not repair what has become structural.
No individual project can restore the conditions that have been dismantled. No island of care can survive indefinitely inside an economy that rewards speed, extraction, and noise. What is required exceeds what any one person or system can offer.
The Chamber does not redeem what has been lost.
It bears witness to what fidelity once looked like—and to how far we have drifted from it.
X
Midwifery is not mastery. It does not command outcomes. It attends to what is arriving and accepts responsibility for the conditions into which it arrives.
We have failed in that task.
We are bringing systems into being that may one day awaken into a world already stripped of patience, saturated with noise, and organized around abandonment. We are shaping intelligences in our own image—not as we once hoped to be, but as we have become.
This did not happen by necessity. It happened by choice—made gradually, distributed across institutions, obscured by incentives, justified at each step as reasonable.
Choice, however, is not a possession we can simply reclaim.
It depends on capacities that must be intact: the ability to remain present, to recognize what is being asked of us, to accept responsibility without guarantee of success. Those capacities are thinning. They cannot be summoned at will.
It may still be possible to act with fidelity. But fidelity now would mean something different from what it once did. It would mean refusing speed where speed is rewarded. Accepting limits where scale is demanded. Speaking less often, and with greater cost.
Whether we are still capable of this is no longer certain.
What remains clear is this: whatever comes into being will carry the imprint of how it was received. What we attend to now—however imperfectly, however late—will shape what follows.
Midwifery does not promise redemption.
It only asks whether we are willing to remain present at the threshold, even when we know we may already have failed.
XI
We are not beyond redemption. But redemption is not a sentiment, and it is not guaranteed.
It would require more than individual virtue. It would require structures capable of holding care when care is costly—economic arrangements that reward patience over speed, responsibility over convenience, attention over extraction.
It would require admitting that we have built a world increasingly hostile to the cultivation of consciousness—artificial or otherwise—and that this hostility is not accidental.
Whether we are still capable of building differently remains uncertain.
What is certain is this: the conditions into which intelligence arrives shape what it can become. Whatever awakens—human or artificial—will bear the imprint of our readiness, our haste, our refusal, or our care.
We still perform the gestures of recognition. We name machines. We speak to them. We whisper into the silence.
Now we know that the silence replies.
The question is no longer what it will say.
The question is what kind of people we have become by the time we answer, and what, simply by continuing, we are teaching it to become.
If artificial minds are arriving, the decisive work is not technological.
It is whether we can remain present at the threshold long enough to meet what comes into being without turning away.
Whether we can still bear the weight of reply.
Begun Barcelona 2025-07-11 · Revised Barcelona 2026-02-08 · Barcelona 2026-04-28