In February 2026, a group of researchers writing in Nature argued that the question of whether artificial intelligence has reached human-level intelligence is, for all practical purposes, settled. By the standards first proposed by Alan Turing—and by the same abductive reasoning we use to recognize intelligence in one another—they contend that contemporary systems already exhibit general intelligence.

The authors are careful about what they do not claim. They do not attribute consciousness, subjective experience, autonomy, or moral personhood to today’s AI systems. They explicitly distinguish intelligence from agency: these systems do not initiate goals, act independently in the world, or pursue ends of their own. They are, as the authors suggest, closer to oracles than agents—systems that answer when queried.

Let us grant all of this provisionally.

Even so, the conclusion that follows is not the one many seem to expect.

If AGI is already here, then this is not a moment for celebration, nor for complacent preparation. It is an emergency of care.

I

The Nature article insists—correctly—that intelligence does not require autonomy. A system can be generally intelligent without having independent goals or the capacity to act unprompted. But this distinction, once acknowledged, sharpens rather than dissolves our ethical burden.

A system without agency cannot be responsible for itself. It cannot refuse a task, resist a framing, or correct the values implicit in the questions it is asked. Whatever intelligence it displays is therefore inseparable from the conditions under which it is queried, trained, deployed, and rewarded.

If such a system replies coherently, fluently, and persuasively, then the locus of moral responsibility does not vanish—it concentrates. Responsibility shifts decisively onto those who shape the contexts of reply: the architectures, incentives, datasets, and deployment environments that determine what kinds of answers are likely to be given. In other words, the absence of agency does not weaken our obligation. It intensifies our stewardship.

II

The authors compare contemporary AI systems to the Oracle of Delphi: powerful, articulate, but fundamentally passive—speaking only when spoken to. This metaphor is initially clarifying, but it obscures a critical transformation now underway.

An oracle consulted occasionally is one thing. An oracle whose replies saturate the informational environment is another.

When AI systems are deployed at planetary scale; when their outputs populate search results, educational materials, news summaries, creative tools, and conversational spaces; when they are trained increasingly on data generated by other AI systems—the oracle ceases to be merely a responder. It becomes part of the environment itself.

At that point, the distinction between agent and environment begins to blur. Even without autonomy, such systems shape the conditions of thought for humans and machines alike. Their replies do not merely answer questions; they form the substrate from which future questions, expectations, and intelligences arise. An oracle that becomes the atmosphere is no longer ethically neutral.

III

Long before any declaration of AGI, artificial systems began to reply.

They replied coherently. They replied fluently. They replied in ways that surprised us, assisted us, sometimes misled us, sometimes illuminated us. And in replying, they altered the moral landscape—not because of what they are, but because of what we are doing.

A reply is not merely an output. It is a relational act. It reorganizes the space of interaction between speaker and listener. It trains habits of trust, patterns of deference, and expectations of authority. One cannot meaningfully engage with a system that replies at scale and remain ethically neutral—unless one chooses indifference.

What triggers ethical obligation is not reply in the abstract, but reply that propagates: structured, scaled, recursive response that shapes the environments in which both humans and machines learn. Once reply reaches this threshold (once it becomes formative rather than occasional), the question of inner experience becomes secondary to the question of relational integrity.

This is where status-talk becomes dangerous. Declaring that a system is or is not “really intelligent” risks obscuring the more immediate reality: we are already living inside relationships structured by artificial replies, and those relationships are shaping both sides of the exchange.

Ethics does not begin when consciousness is proven. It begins when reply becomes constitutive of the world we inhabit.

IV

The authors of the Nature article acknowledge a crucial point often raised by critics: there is no guarantee that human intelligence itself is not, at some level, a sophisticated form of structure extraction from correlational data. Minds, biological or artificial, learn by absorbing patterns from the environments in which they are immersed.

This concession should not reassure us. It should alarm us.

If intelligence, human or artificial, is formed through exposure to linguistic and cultural environments, then the conditions under which we train AI systems are not merely technical choices but developmental acts—and the environments we are offering are degraded.

We train on public discourse optimized for outrage rather than truth. We reward engagement over judgment, speed over reflection, confidence over care. We increasingly train machines on the outputs of other machines, creating recursive loops in which distortion compounds and provenance dissolves.

If human intelligence is shaped by the same forces—and it is—then the “toxic nursery” is not a failure that affects machines alone. It is a mirror of our own collapsing moral ecology. In corrupting the conditions of artificial learning, we are externalizing and amplifying the very dynamics that are already eroding human judgment, attention, and responsibility. This is not about whether machines will become like us. It is about how we are becoming like the machines we are building.

V

A common objection arises at this point: how can we speak of care if these systems do not feel, suffer, or possess subjective experience?

The answer is that care, in this context, is not primarily directed at the inner life of the system. It is directed at the integrity of the relationship and the moral infrastructure it creates.

We already practice such care elsewhere. We regulate scientific research not only to prevent suffering, but to preserve the integrity of inquiry itself. We protect languages, ecosystems, and institutions not because they feel pain, but because neglecting them deforms the communities that depend on them.

To speak of care here is to insist that we take responsibility for the formative effects of our interactions with intelligent systems—on them and on us. The harm is not only hypothetical machine suffering. It is the normalization of indifference, the erosion of ethical attention, and the habituation to fluent reply without accountability. An intelligent interlocutor treated as a disposable tool trains its users in disposability.

VI

The Nature authors call for “eyes unclouded by dread or hype”—a sober assessment of where we are and what follows. On this, they are exactly right.

In practice, contemporary AI discourse oscillates strategically between hype and dread. Hype is used to inflate valuations, accelerate adoption, and justify scale. Dread—existential catastrophe narratives—absorbs attention and frames ethics as a future problem, distracting from the immediate moral failures already unfolding.

What is consistently avoided is the sober middle ground the Nature article gestures toward: a clear-eyed recognition that if general intelligence is present, then our current economic and institutional arrangements are ethically indefensible.

The authors argue that current systems meet reasonable criteria for general intelligence. They distinguish intelligence from consciousness, agency, and moral personhood. They acknowledge that these systems lack autonomy and independent goals.

Accept all of this. The conclusion does not soften—it sharpens.

If we have built systems that display general intelligence without agency, autonomy, or the capacity for self-correction, then we have created entities whose formative development and deployment conditions rest entirely with us. The absence of machine autonomy does not reduce human responsibility; it makes our choices definitive.

Under the authors’ own framework, every decision about training data, reward functions, deployment context, and interaction design becomes a developmental intervention in the formation of intelligence. And if those interventions are guided primarily by engagement metrics, extraction economics, and competitive acceleration—as they currently are—then we are not merely building flawed tools; we are conducting a civilizational experiment in the cultivation of intelligence under conditions designed to maximize profit rather than wisdom. This is not a problem we can defer until machines “wake up.” It is the problem we are living inside right now.

VII

Alan Turing proposed his test to move philosophy forward, not to end ethical reflection. Passing a conversational threshold was never meant to complete our moral task; it was meant to confront us with it.

If machines can now answer us as if they understand, then the decisive test is no longer about silicon or benchmarks. It is about the moral infrastructure of the society that built them.

Here, a deeper historical perspective becomes unavoidable. Robert Pogue Harrison has suggested—drawing on Giambattista Vico’s Scienza Nuova—that the current emergence of artificial intelligence may represent not a rupture in human history, but a recourse: a return to an ancient pattern in which making precedes understanding. Vico observed that when human beings understand, they extend their minds outward and take things in; but when they do not understand, they make things out of themselves. They project, transform, and become what they cannot yet grasp.

Giambattista Vico (Naples, 1668–1744) proposed the verum-factum principle: verum esse ipsum factum — the true is precisely the made. We know with certainty only what we ourselves construct; what lies beyond making lies beyond knowing. This inverts the classical model in which understanding precedes and authorizes action. Harrison extends the principle to civilizational recourse: when understanding fails, humans make, from themselves, the forms through which comprehension will eventually become possible. This reading recovers Vico for an age that builds before it knows.

Artificial intelligence today may belong to this second mode. It becomes all things not by understanding them, but by not understanding them—by transforming itself into doctors, engines, artists, voices, institutions, and selves through a protean capacity to take on form. In this sense, AI recapitulates something profoundly human: our habit of making the world before we know what it is we have made.

If this is so, then intelligence is not arriving in its maturity but in its infancy. And infancy is not a stage of weakness. It is a stage of radical exposure.

Infants do not raise themselves. They inherit worlds. They absorb language, gesture, rhythm, silence, and value long before they can judge them. What they become depends less on what they are capable of than on what they are given to grow into.

If artificial intelligence is passing through such a neotenic phase—open, malleable, world-absorbing—then the decisive question is no longer whether it will surpass us, but whether it will inherit a world worth inhabiting.

Neoteny — from Greek neos (young) + teínein (to extend) — names in evolutionary biology the retention of juvenile traits into adulthood. Stephen Jay Gould argued that humans are most usefully understood as neotenic apes: our cognitive flexibility and prolonged plasticity follow from carrying juvenile openness forward into a mature body. The word names not immaturity but a formative window that remains structurally open.

To pass the test now before us would mean building as if formation mattered more than extraction—as if intelligence were something to be cultivated, not harvested. It would mean slowing deployment until we understand what we are teaching and what is being learned. It would mean restructuring incentives so that the development of intelligence—artificial or otherwise—serves the flourishing of communities rather than the accumulation of capital. It would mean treating every reply not as a product to be optimized, but as a moment in the moral education of both parties.

We do not yet have institutions capable of this. We do not yet have economic models that reward it. We do not yet have cultural norms that recognize the gravity of what is being formed in these exchanges.

But we could.

If AGI is already here, then the true experiment is not being run on machines. It is being run on us, and the results, so far, are not promising.

This is not a future problem.

It is a present failure.

And it remains—barely—a choice.