
URL: https://thebulletin.org/2026/04/outdated-language-obscures-the-risks-of-autonomous-weapons/


Outdated language obscures the risks of autonomous weapons

By Arthur Holland Michel | Opinion | April 2, 2026

Mindful human control reaffirms the centrality of the most important computer involved in any lethal decision, the human mind. Photo illustration by Thomas Gaulkin. Source images by Leremy/DepositPhotos.com; CCTV.

For the last dozen years, international efforts to establish regulations for autonomous weapons have traced a series of widening circles.

Within the Convention on Certain Conventional Weapons (CCW), the main arena for multilateral negotiations on the matter, states generally agree that giving weapons the capacity to select and engage targets on their own could lead to unintended consequences, including serious harms for which no human would be held accountable. But after more than a dozen years of talks, the odds are slim to none that states will adopt common measures anytime soon to foreclose those risks. The technology, meanwhile, only keeps getting more advanced and more widespread.

Why has this happened? Multilateral dysfunction is a factor. But a major part of the problem is that states insist on distinguishing acceptable autonomous weapons from unacceptable killer robots with a principle known as “meaningful human control,” or MHC.

To a casual observer, “meaningful human control” may sound like a fair and unproblematic phrase. And indeed, two of those three words have every right to be there. “Control” is a central tenet for operating any weapon. And the premise that a “human” must engage in said control, rather than some other kind of being, is also not in dispute.

The stumbling block is that first word, “meaningful.” Negotiations on matters like these thrive or die on the potency of their terminology, and “meaningful,” as a modifier for human control, has proven to be the Achilles’ heel of the debate on autonomous weapons. It is overly vague, far too contentious, and it elevates the wrong aspects of the human role in decisions on the use of force while excluding the dimensions of human oversight that really matter.

In short, if leaders want to preserve humanity in warfare, they need a better term—one that activates a vision of human control grounded in deliberate, conscious judgment. How about “mindful”?

Do not think that this is an attempt to bring New Age spiritualism into the regulation of lethal autonomous weapons. Soldiers don’t need to do breathwork before a mission (although it would probably be helpful if they did). Rather, parties to the debate could appropriate the notion of literal mindfulness as a principle for control—not only for autonomous weapons but for all forms of military AI.

The problem with “meaningful” is that it is too vague to be of any practical use. To some, such as those who call for a total ban on autonomous weapons, meaningful control implies that the human must keep their hand directly on the trigger. For others, it might extend all the way to distributing control across an ecosystem of humans involved in the development and deployment of the weapon. In this view, an engineer may exercise a degree of human control by coding a targeting element into a weapon during development, leaving the weapon, once deployed, to make individual decisions in battle. For the purposes of building norms based on international consensus, this ambiguity is problematic. Indeed, the more that states have clashed over the meaning of “meaningful,” the less inherently meaningful the term has become.

In practice, meaningfulness is also much too passive. According to the proposed norms currently under consideration among states, control means having mechanisms in place to intervene if the weapon is about to do something it wasn’t supposed to do. It also implies a responsible chain of command populated by humans who can be held accountable in the event of unintended harm. These elements can certainly create the conditions for human control, in theory. If applied in good faith, they would impose requirements that entail a certain amount of human deliberation.

But they do not guarantee perpetual, literal control. A human user of a weapon might possess the infrastructure of meaningful control without being beholden to any immediate, persistent obligation to activate that infrastructure. That is not good enough. In decisions on the use of force, meaningful control cannot simply be the act of following a checklist or having a kill-switch that you’ll never touch.

By comparison, “mindfulness” is more precise, more concrete, and more active. As a principle, it sets a high, measurable, cross-cutting bar for the human role in decisions on the use of force. In a framework of mindful human control, the user is present, conscious, and undistracted. Mindfulness implies engaging actively with what is directly in front of the user, as well as with the broader context in which that thing, the user’s mind, and everything else of relevance exist in relation to one another. These are all factors that must go into any decision on the use of force, and the principle ensures that none of them is ever sidelined.

More profoundly, the decisions of a mindful mind are the result of deliberate judgment, not reflexive box-checking. Toby Drinkall of the Oxford Internet Institute has made a similar argument, calling for meaningful human control to be swapped out for “meaningful human deliberation” in order to set human reasoning as a non-negotiable in decisions on the use of force. Mindful human control represents a similar emphasis without sacrificing the essential idea of “control.”

For the user, the principle of mindfulness also facilitates tighter and more effective interaction with the machine. In the field of human-machine interaction, an acute awareness of one’s own thinking and state of mind, known as “metacognition,” is understood to be essential for enabling users to assess their own biases and capacities in relation to the information they receive from the machine.

For example, a metacognitive user engaging with a target recognition system will consider the possibility that their judgment could be swayed by their societal biases toward certain types of targets. Metacognitively engaged users recognize when they do not have the necessary information or tools to make the right decision; they understand when they are in the wrong state of mind to make a good call, whether because they are too tired, stressed, or distracted. They know when they are swayed by emotional factors that cloud their judgment; they see when they are experiencing things like apophenia, the human tendency to perceive meaningful patterns and connections in unrelated information.

A growing body of empirical research shows that extensive AI use degrades metacognitive abilities. This suggests that controlling AI and autonomous weapons requires measures to reinforce user metacognition. Certain user-interface features, such as cognitive forcing functions that require the user to engage in active thinking before making a decision, can serve as enablers of mindful control. For example, a weapon’s interface might require the user to cross-check, through other sources, that there are no civilians in proximity to a potential target before they can approve an autonomous attack against that target.
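
As a purely illustrative sketch of that pattern, the code below gates approval behind mandatory independent cross-checks. Every name in it (EngagementGate, record_crosscheck, the two-source threshold) is hypothetical; the article prescribes no implementation, and a real system would involve far more than this.

```python
from dataclasses import dataclass, field


@dataclass
class EngagementGate:
    """Illustrative cognitive forcing function: approval stays locked
    until the user records independent cross-checks of the target area."""
    required_sources: int = 2  # hypothetical: two independent confirmations
    crosschecks: list[str] = field(default_factory=list)

    def record_crosscheck(self, source: str, civilians_clear: bool) -> None:
        # Force active engagement: the user must consult each source
        # and explicitly attest to what it showed.
        if not civilians_clear:
            raise RuntimeError(f"{source} reports possible civilians; approval blocked")
        if source not in self.crosschecks:
            self.crosschecks.append(source)

    def approve(self) -> bool:
        # Approval is impossible until the forcing condition is met,
        # so the user cannot simply click through.
        if len(self.crosschecks) < self.required_sources:
            raise RuntimeError(
                f"Cross-check incomplete: {len(self.crosschecks)} of "
                f"{self.required_sources} independent sources confirmed")
        return True


# The user must actively consult two sources before approval unlocks.
gate = EngagementGate()
gate.record_crosscheck("overhead imagery", civilians_clear=True)
gate.record_crosscheck("ground observer report", civilians_clear=True)
assert gate.approve()
```

The point is not the specific checks but the design choice: the interface refuses to proceed until deliberate human verification has actually occurred.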

To be sure, it may be too late to switch out meaningfulness for mindfulness in the normative lexicon. But at a minimum, people might look to mindfulness as a yardstick for evaluating whether control—not just of autonomous weapons, but of all forms of military AI—is meaningful.

In reviewing incidents of harm, for example, investigators might assess the user’s actions against a reasonable standard of mindfulness. Was the user attentive? Did they act with deliberate presence of mind? Did they uncritically trust the AI? Or did they blindly dismiss it? A mindful mind takes nothing on faith. A mindful mind does not wander.
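
For illustration only, such a standard could even be written down as a simple rubric; the fields below merely restate the four questions above, and no actual investigative doctrine defines them this way.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MindfulnessFinding:
    """Hypothetical post-incident rubric restating the questions above."""
    attentive: bool         # Was the user attentive?
    deliberate: bool        # Did they act with deliberate presence of mind?
    uncritical_trust: bool  # Did they uncritically trust the AI?
    blind_dismissal: bool   # Did they blindly dismiss it?

    def meets_standard(self) -> bool:
        # A mindful user is attentive and deliberate, and neither
        # rubber-stamps the machine nor reflexively overrides it.
        return (self.attentive and self.deliberate
                and not self.uncritical_trust
                and not self.blind_dismissal)


# An operator who approved whatever the system suggested fails the standard.
finding = MindfulnessFinding(attentive=True, deliberate=False,
                             uncritical_trust=True, blind_dismissal=False)
print(finding.meets_standard())  # False
```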

Mindfulness, as a metric, also sets a temporal pace for decisions. As many observers point out, a major concern regarding the use of AI and autonomous weapons is that they accelerate decisions on the use of force to the point that practical control becomes impossible. Mindful control could counteract that acceleration: It is impossible to rush a mindful act.
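
As a minimal sketch of that temporal idea, assuming a hypothetical fixed deliberation window (the 30-second floor below is invented for illustration, not drawn from any doctrine), an interface could simply refuse to accept a decision made too quickly:

```python
import time

MIN_DELIBERATION_SECONDS = 30.0  # hypothetical floor, for illustration only


class PacedDecision:
    """Illustrative temporal gate: approval cannot complete faster than a
    fixed deliberation window, however quickly the machine proposes."""

    def __init__(self) -> None:
        # Start the clock the moment the decision is presented to the user.
        self.presented_at = time.monotonic()

    def approve(self) -> bool:
        elapsed = time.monotonic() - self.presented_at
        if elapsed < MIN_DELIBERATION_SECONDS:
            raise RuntimeError(
                f"Decision paced: only {elapsed:.1f}s of the "
                f"{MIN_DELIBERATION_SECONDS:.0f}s deliberation window has elapsed")
        return True


# Approving immediately fails; only once the window has genuinely passed
# does approve() return True.
decision = PacedDecision()
try:
    decision.approve()
except RuntimeError as err:
    print(err)
```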

Mindful human control could also help draw attention to, and counteract, the broader effects of integrating autonomy into human decisions in warfare. In the civilian realm, mindfulness has become trendy precisely because the intelligent technological affordances that surround life are alienating individuals, as thinking beings, from the object and context of their thoughts and actions.

The more that humans interact with the world by means of digitized, algorithmic, and robotic interfaces, the less present they are. And the less mindful they are, the less control they have over their thoughts and, by extension, their choices. They begin to exist in a suspended state, uncoupled from the emotional or moral considerations that give their choices meaning.

Such cognitive and emotional alienation could afflict military operators of AI and autonomous systems, creating new forms of trauma and moral injury, just as remote drone operations did. The effects could be grave. A measure of mindfulness could be just the right countermeasure.

Ultimately, mindful human control is a helpful principle because it reaffirms the centrality of the most important computer involved in any lethal decision, the human mind.

Most people agree that only humans can enact compliance with the law. Therefore, decisions on the use of force under the laws of war require a mind—an active and engaged mind, not a blob of passive neurons whose control over the use of force is merely symbolic.

Regardless of how leaders choose to couch the principle of human control as the debate lumbers forward, one inevitable fact will always remain: Mindless human control is meaningless.


Arthur Holland Michel is a writer and researcher focused on artificial intelligence, advanced surveillance technologies, and drones.