New and Emerging terminologies in ethical Ai principles: Exploring the international law implications in the military context

Abstract
This policy note analyses the legal implications of emerging terminologies in global discussions on the governance of artificial intelligence (Ai) in the military domain. It highlights how terms such as “bias mitigation”, while appropriate in technological contexts—where reducing, but not eliminating, errors is an accepted standard—are being uncritically transposed into governance frameworks that ought to be anchored in international law. This transposition risks diluting substantive legal obligations. While the evolution of legal language may be necessary to address new technological realities, such changes must not come at the expense of clarity or legal integrity. To preserve the authenticity of existing legal standards and ensure that international law evolves transparently and through state consent, it is essential to establish a clear understanding of new terminologies and critically assess their relationship to, and impact on, established legal obligations. In multilateral discussions on governing Ai in the military domain, the responsibility to clearly define new terminologies and explain their relationship to existing legal language lies with those advocating for their use. Without such clarity, stakeholders should adhere to established terms in international law, which are grounded in state practice and supported by a rich body of jurisprudence guiding their interpretation.
Key words: military artificial intelligence, IHL, military ethics, algorithmic decision-making, bias mitigation, unintended engagement prevention, Ai decision-making.
Cite as Keketso Kgomosotho, ‘New and Emerging terminologies in ethical Ai principles: Exploring the international law implications in the military context’ (2025) GC REAIM Expert Policy Series, available at https://hcss.nl/wp-content/uploads/2025/05/Kgomosotho.pdf
Published as part of the Global Commission on Responsible Artificial Intelligence in the Military Domain (GC REAIM) Expert Policy Note Series, which was funded by the Foreign, Commonwealth and Development Office (FCDO) of the United Kingdom.
1. Introduction
The adoption of new terminologies in the military domain—whether in response to emerging technologies or evolving forms of violence—is not a novel phenomenon. For states and other duty-bearing actors, such linguistic shifts are neither incidental nor inconsequential, particularly when invoked to interpret or justify compliance with international legal obligations. For example, following the 9/11 attacks, states’ counterterrorism policies led to the introduction of terms such as “pre-emptive self-defence,” “anticipatory self-defence,” “elongated” or “expanded imminence,” and the “unwilling or unable” doctrine, all aimed at reshaping international law on the use of force.[1] The advent of armed drones further accelerated this trend, giving rise to additional terminology like “signature strikes,” “targeted killings,” and the “global battlefield”—each attempting to rationalise new practices within the framework of existing international law.[2] Over two decades later, these terminologies continue to generate intense debate, with states deeply divided over their legitimacy, legal implications, and potential to erode foundational principles of international law.[3]
Equally, in current attempts to establish comprehensive governance frameworks on Ai – including in the military domain – new terminologies have once again been adopted that carry serious implications for international law. Such adoption is not incidental, and this policy note responds to it in three ways. First, it cautions against the uncritical adoption of emerging terms like “bias mitigation”, “unintended engagements” and “Ai decision-making” which, in a governance context, may be misaligned with established standards under international human rights law (IHRL) and international humanitarian law (IHL). Second, it warns stakeholders against the reinterpretation or misuse of legally defined terms such as “commander” or “command responsibility” in ways that diverge from their authentic meaning under international criminal law (ICL). Just as states have critically examined the introduction of the term “meaningful human control” within the UN Group of Governmental Experts on Lethal Autonomous Weapon Systems (UN GGE on LAWS), they must apply the same level of scrutiny to other emerging terminologies that carry equally significant implications for existing international legal obligations. Finally, the policy note observes that while new terms such as “responsible Ai” may be well-intended, they may be perceived by other actors as politically charged language and inadvertently undermine multilateral consensus.
1.1. Implications of new terminologies for procedural international law
Because the introduction of new terminologies often reflects deliberate policy strategies by states, it is essential to critically assess their implications within the procedural framework of international law. International law’s normative force derives fundamentally from State consent—a principle underlying both treaty formation and customary international law—establishing strict parameters for legal evolution as embodied in the Vienna Convention on the Law of Treaties (VCLT)[4] and reinforced through consistent State practice and opinio juris.[5] The pacta sunt servanda maxim[6] requires that modifications to international obligations occur through explicit State agreement or established customary law formation processes, a principle repeatedly affirmed by the International Court of Justice (ICJ) in cases like North Sea Continental Shelf.[7] Treaty interpretation under VCLT Articles 31-32[5] provides limited scope for evolutionary interpretation, requiring that terms be understood “in good faith in accordance with the ordinary meaning…in their context and in light of its object and purpose.”[8] This interpretative framework is particularly stringent concerning jus cogens norms and erga omnes obligations,[9] where the International Law Commission (ILC) emphasises that modifications require explicit State consent. Legitimate evolution of international legal obligations must: emerge from recognised sources enumerated in Article 38(1) of the ICJ Statute;[10] reflect clear State practice and opinio juris that is “sufficiently widespread, representative as well as consistent;”[11] and avoid undermining existing peremptory norms, such as the prohibition of indiscriminate attacks or the principle of non-discrimination.[12] Thus, introducing novel terminology in military Ai ethics principles without satisfying these formal requirements risks creating parallel frameworks that potentially undermine the legitimacy and coherence of established legal standards,[13] which explains why similar terminological innovations in counterterrorism and drone warfare have been rejected by many states.
2. “Meaningful human control” (MHC)
MHC has received extensive attention in multilateral discussions in the UN GGE on LAWS.[14] This policy note does not seek to rehash or provide an in-depth analysis of the concept itself, as that work is already the subject of considerable debate in UN fora and among states, and continues to evolve through diplomatic, academic, and technical discussions.[15] Rather, this section references MHC as a case study to illustrate how new terminologies introduced in the governance of emerging military technologies must be approached. The way MHC was initially introduced in relation to LAWS, and subsequently expanded to become a central concept in broader Ai governance within the military domain, is a crucial point of reflection. The trajectory of this term—from a niche civil society conceptual tool to a central norm—demonstrates that the introduction of new language in international governance discourse must never be treated as a neutral or incidental act. New terminologies carry normative weight and interpretive consequences; their use can shape obligations, shift legal frameworks, and even redefine the standards by which state conduct is evaluated under international law. Most importantly, the level of scrutiny that MHC has received—with many states[16] and various UN institutions[17] adopting it while other states sharply disagree[18]—should be regarded as an exemplary standard of multilateralism on such key language. This collective scrutiny affirms that all emerging terms in military Ai governance with legal or normative implications must undergo a similarly rigorous process of critical assessment, legal evaluation, and state-led deliberation. Only through such processes can the integrity and coherence of international law be preserved in the face of rapid technological and linguistic evolution. The acceptability of new terminologies in international legal and governance frameworks cannot rest on the good intentions or benevolence of those introducing them; rather, it must be determined by the substantive implications such terms have for existing international legal obligations. The UN Human Rights Council, in its report on the human rights implications of Ai in the military domain, explicitly cautioned against the uncritical adoption of new terminologies.[19]
3. “Bias Mitigation” and “minimise unintended bias”
The emergence of “bias mitigation” as a framework for addressing discriminatory outcomes in Ai systems points to yet another problematic deviation from established international legal obligations, this time regarding non-discrimination. This is particularly concerning in light of non-discrimination’s status as both a jus cogens norm and an erga omnes obligation.[20] As has long been established, non-discrimination constitutes a cornerstone of international law. Its jus cogens status reflects its fundamental importance to the international legal order. Under international treaty and customary law, the prohibition of discrimination is absolute, admitting no derogation and imposing positive obligations on States to eliminate, not merely mitigate, discriminatory practices.[21] The emerging language of “mitigation of Ai bias” in the Ai governance discourse undermines the established legal framework of IHRL, which unequivocally demands the elimination or eradication of discrimination.
3.1 “Bias mitigation” is inconsistent with IHRL
In their policy documents on Ai in the military domain, various stakeholders routinely use the term “mitigating bias” as an ethical principle that should be at the centre of Ai governance.[22] Within the United Nations (“UN”) discussions on lethal autonomous weapon systems (“AWS”), States have submitted reports proposing policies to “reduce unintended bias in artificial intelligence capabilities relied upon in connection with the use of the weapon system.”[23] Among other things, they recommend that States take measures and safeguards aimed at mitigating risks such as “risk of unintended bias, such as on gender aspects and risk of unintended engagements.”[24]
The introduction of “bias mitigation” terminology fundamentally alters this legal framework in several critical ways. First, it transforms an absolute prohibition into a matter of degree, suggesting that some level of discriminatory impact is acceptable if steps or efforts at mitigation are taken. This represents a fundamental departure from the absolute nature of non-discrimination obligations under international law – effectively weakening the normative force of non-discrimination requirements.
Second, it replaces substantive obligations of result with procedural requirements (obligations of process). Where international law demands concrete outcomes—the elimination of discrimination—“bias mitigation” merely requires demonstrable steps or efforts at reduction.[25] This shift from outcome-based to process-based requirements fundamentally and qualitatively alters the nature of State and corporate obligations regarding discriminatory practices as articulated under international law.
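To make the contrast concrete, the sketch below shows how “bias mitigation” is typically operationalised in engineering practice: a disparity metric is computed and the system is accepted once that metric falls below a chosen tolerance. The metric, data, and tolerance here are purely illustrative assumptions, not drawn from any cited policy; the point is simply that a mitigation test is satisfied despite residual disparity, whereas an obligation of elimination is not.

```python
# Illustrative only: a hypothetical fairness check, not any State's or vendor's method.

def demographic_parity_gap(outcomes_a, outcomes_b):
    """Absolute difference in adverse-outcome rates between two groups (0.0 = no disparity)."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return abs(rate_a - rate_b)

# Hypothetical model outputs for two demographic groups (1 = adverse decision, 0 = none).
group_a = [1, 0, 1, 1, 0, 1, 0, 1]
group_b = [0, 0, 1, 0, 0, 1, 0, 0]

gap = demographic_parity_gap(group_a, group_b)

# "Mitigation" standard: accept the system once disparity falls below an engineering tolerance.
TOLERANCE = 0.10
passes_mitigation = gap <= TOLERANCE

# "Elimination" standard (obligation of result): no residual disparity is acceptable.
passes_elimination = gap == 0.0

print(f"disparity={gap:.2f}, mitigation test: {passes_mitigation}, elimination test: {passes_elimination}")
```

The difference is structural: under the first test, a system with measurable residual disparity is treated as compliant because a procedure was followed and a threshold was met; under the second, it is not.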
Treaties such as the International Covenant on Civil and Political Rights (“ICCPR”) and the International Convention on the Elimination of All Forms of Racial Discrimination (“ICERD”) impose a binding obligation on States to prevent, prohibit, and eliminate all forms of discrimination, not merely to reduce its effects.[26] The provisions of these treaties “condemn” discrimination, “prohibit” discrimination, and demand its “elimination” and “eradication” in law and practice.[27]
Moreover, the shift from eliminating and eradicating discrimination to “mitigating Ai bias” creates a lower threshold of accountability for States and private actors involved in Ai development. Under IHRL, States are required to proactively dismantle systemic discrimination, ensure effective remedies for victims, and address the root causes of inequality. However, a governance approach centred on “bias mitigation” focuses primarily on symptom management rather than structural change, allowing discriminatory Ai systems to persist as long as they are perceived to be “less biased” than before, and as long as it can be demonstrated that steps have been taken to mitigate bias. This is particularly concerning in the Global South, where Ai-driven surveillance, predictive policing, and automated decision-making can cause disproportionate harm to populations that have historically experienced discriminatory outcomes.[28] Instead of preventing harm at its root, the rhetoric of “bias mitigation” permits ongoing human rights violations under a veneer of progress, thereby undermining the non-derogable nature of the right to non-discrimination under international law.
Additionally, the language of “mitigating bias” equally dilutes the legal protections enshrined in regional human rights treaties such as the African Charter on Human and Peoples’ Rights (“African Charter”), the European Convention on Human Rights (“ECHR”), and the American Convention on Human Rights (“ACHR”), all of which impose strict obligations on member States to eradicate discrimination in all its forms.[29] Similarly, regional human rights courts and commissions, such as the European Court of Human Rights and the African Commission on Human and Peoples’ Rights, have developed jurisprudence emphasising that States must take positive measures to “eliminate discrimination”, rather than merely reducing its impact.[30] If Ai governance frameworks adopt a weaker standard of “bias mitigation,” they risk undermining the legal force of international human rights treaties, allowing States and corporations to evade responsibility while continuing to deploy Ai systems that perpetuate discriminatory and exclusionary outcomes.
3.2 “Bias mitigation” is inconsistent with IHL
The terminology of “mitigating Ai bias” is further at odds with existing IHL, which establishes an absolute prohibition of discrimination in armed conflict, rather than a partial reduction of biased or discriminatory outcomes. The Geneva Conventions of 1949 and their Additional Protocols of 1977 enshrine the principle of non-discrimination as a fundamental component of the laws of war, requiring that all persons affected by armed conflict—whether civilians, prisoners of war, or wounded combatants—be treated without adverse distinction based on race, nationality, religion, or other protected characteristics.[31] Equally, customary international humanitarian law prohibits discrimination on the grounds of race, gender, or any other prohibited ground.[32]
At the same time, stakeholders have noted that the application of Ai technologies in decision-making – including AWS, intelligence surveillance, and targeting algorithms – raises profound concerns about the potential for discriminatory outcomes.[33] If these systems operate under a governance model that merely seeks to “mitigate bias” rather than eliminate discrimination, the risk of violating IHL norms becomes significantly heightened. In particular, the principle of distinction, a cornerstone of IHL, mandates that parties to a conflict must always distinguish between combatants and civilians, ensuring that civilians are never targeted.[34] Ai systems used for military operations, if embedded with biased data or flawed algorithms, could wrongfully classify civilians as combatants, leading to unlawful targeting and violations of the prohibition on indiscriminate attacks. If Ai governance only requires that such biases be mitigated rather than fully eliminated, there is no safeguard ensuring that lethal AWS or Ai-assisted targeting systems comply with the strict non-discrimination requirements of IHL.
Similarly, the Geneva Conventions and customary IHL do not allow for partial compliance with non-discrimination rules; they impose a strict obligation on States and armed forces to ensure full adherence to the principle of equality in warfare. Furthermore, the Additional Protocols to the Geneva Conventions reinforce the absolute prohibition of discrimination by emphasising that all victims of war must receive equal protection and humane treatment, regardless of their status, nationality, or background. This extends to military detention, access to humanitarian aid, and the conduct of hostilities – all of which are increasingly subject to Ai decision systems.[35] If Ai governance frameworks normalise and legitimise the weaker standard of “bias mitigation,” this could justify the continued deployment of discriminatory Ai-driven military systems that disproportionately impact certain populations—whether through predictive targeting, surveillance, or automated threat assessment. Such a shift would not only contradict existing treaty obligations under IHL but could also contribute to systematic violations of human rights in conflict zones, reinforcing global inequalities and allowing powerful States to deploy Ai-driven warfare with reduced or bypassed accountability under international law.
Thus, the distinction between “eliminating discrimination” and “mitigating bias” is not mere semantics. It carries profound, qualitative legal and ethical implications for Ai governance, particularly in the military domain. To uphold legal consistency and human rights protections, States and international institutions must insist on the language and legal standard of elimination rather than the performative language and standard of mitigation – ensuring that Ai technologies are developed and deployed in full compliance with the jus cogens principles of equality, non-discrimination, and justice in international law.[36]
4. “Unintended engagements”
Similarly, in the discussions on targeting through AWS, a few States have introduced new terminologies such as “unintended engagements”, “unintended harm”, “unintended bias” and “minimisation of unintended engagements”.[37] The term “unintended engagements” in military Ai discourse appears designed to describe scenarios where AWS engage targets other than their intended objectives. However, a careful legal analysis reveals that such engagements would, in most if not all cases, constitute indiscriminate attacks, which are already prohibited under international humanitarian law.
“Unintended engagements” typically encompasses several categories of AWS behaviour: target misidentification, where an Ai system incorrectly classifies a civilian object as a military objective;[38] engagement spread, where the effects of an attack extend beyond the intended target; and system malfunction, where technical failures lead to engagements outside predetermined parameters.
The United States Department of Defense (“DoD”) Directive defines “unintended engagements” as “the use of force against persons or objects that commanders or operators did not intend to be the targets of U.S. military operations, including unacceptable levels of collateral damage beyond those consistent with the law of war, ROE, and commander’s intent.”[39] The conduct described therein has in fact already received full treatment under established IHL, through the prohibition of indiscriminate attacks – another cornerstone of IHL, codified in Article 51(4)(a) of Additional Protocol I to the Geneva Conventions,[40] and also recognised as customary international law.[41] This prohibition, too, is absolute, admitting no exceptions or qualifications. In Nuclear Weapons, the ICJ characterised it as one of the intransgressible principles of international customary law.[42]
4.1 “Unintended engagements” is inconsistent with IHL
The Additional Protocol framework establishes clear criteria for what constitutes an indiscriminate attack. To that end, Article 51(4) defines indiscriminate attacks as those (a) which are not directed at a specific military objective; (b) which employ a method or means of combat which cannot be directed at a specific military objective; or (c) which employ a method or means of combat the effects of which cannot be limited as required by this Protocol.[43]
As such, each category of “unintended engagement” maps directly onto prohibited conduct under IHL, and a closer reading demonstrates that the emergent language of “unintended engagements” and the “minimisation of unintended engagements”[44] is fundamentally at odds with the obligations set forth in IHL, particularly the prohibition of indiscriminate attacks. IHL establishes a strict prohibition of indiscriminate attacks, which are not merely to be minimised but must be refrained from entirely.[45] The DoD’s language, for instance, introduces a lower threshold of compliance by framing these engagements as unintended, which implicitly suggests that they are inevitable rather than unlawful acts that States must actively prevent. This divergence in terminology is legally significant, as it risks eroding the absolute nature of the IHL prohibition on indiscriminate attacks. The obligation under established IHL is not to minimise such attacks but to eliminate them entirely. The US approach, by merely seeking to reduce the probability of unintended engagements to “acceptable levels,” undermines the IHL requirement that indiscriminate attacks must never occur, creating room for and legitimising legally impermissible Ai-driven military actions.
Furthermore, as indicated above, the DoD Directive defines “unintended engagements” to include attacks that cause incidental harm disproportionate to the military advantage gained. This too is inconsistent with the IHL principle of proportionality.[46] IHL explicitly prohibits any attack that is expected to cause excessive incidental loss of civilian life, injury to civilians, or damage to civilian objects relative to the anticipated military advantage.[47] In fact, under the IHL principle of proportionality, attacks that exceed this threshold are not merely unfortunate or unintended—they are unlawful indiscriminate attacks.[48] Therefore, a policy that requires only the minimisation of such attacks directly contradicts IHL’s categorical prohibition on disproportionate attacks, reinforcing the point that compliance with IHL is not about reducing errors but about ensuring that certain forms of attack never occur.
Equally concerning is the fact that the term “unintended engagements” is not found in any IHL treaty or customary IHL provisions. Instead, IHL uses legally established and precise terms such as “indiscriminate attacks,” “excessive collateral damage,” and “prohibited means and methods of warfare.” This introduction of new, undefined terminology allows States to conveniently reinterpret established legal obligations in a way that dilutes their strength. When legally binding terms such as “prohibited” or “unlawful” are replaced with softer terms like “minimisation,” the result is a gradual erosion of accountability. The continued use of non-IHL terminology in military Ai governance will weaken international consensus on legal standards, making violations harder to define and enforce.
Once again, adherence to agreed-upon IHL terminology is not merely a matter of semantics; it is a critical mechanism for ensuring compliance and accountability in armed conflict. Terms such as “indiscriminate attacks” and “proportionality violations” carry clear legal implications and are backed by treaty provisions, judicial interpretations, and customary international law. Replacing these established terms with vague and malleable concepts like “unintended engagements” creates legal uncertainty and reduces the ability of victims to seek redress for unlawful harm caused by Ai-driven military technologies. The international legal framework has been carefully developed to place absolute limits on conduct in warfare, and any deviation from agreed terminology risks diluting the protections provided to civilians and combatants alike.
As such, the DoD’s – and other stakeholders’ – approaches of “minimising unintended engagements” fail to align with IHL’s clear and stringent requirements on the prohibition of indiscriminate attacks and the principle of proportionality. By substituting established legal prohibitions with language that implies mere reduction rather than elimination, stakeholders introduce a dangerous precedent that weakens IHL compliance in the context of military Ai governance. States and international actors must resist such dilution and uphold the unequivocal IHL obligations that prohibit indiscriminate attacks, rather than simply seeking to mitigate their frequency or consequences.
4.2 “Unintended engagements” versus the concept of “mistake” under IHL
Moreover, a number of commentators, in defence of this emergent language, argue that the new language of “unintended engagements” is not necessarily inconsistent with existing IHL because the concept of “mistake” is implicitly recognised, even if not explicitly defined, within IHL.[49] This argument warrants closer attention, because in the context of autonomous systems it is crucial to distinguish between the two terms. As indicated above, the provided definition of “unintended engagements” refers to instances where force is used in a manner that results in harm beyond what was intended by commanders or operators, including collateral damage exceeding acceptable levels under the law of war. This concept is qualitatively different from a “mistake” as understood under IHL, particularly when considering mistake in the context of absolving criminal responsibility.
Under existing IHL, the assessment of a mistake is a qualitative, human-centric evaluation that examines factors such as reasonableness, adherence to precautionary measures, and the absence of bad faith (mala fides).[50] A mistake that may absolve criminal liability must be one that a reasonable commander, acting in good faith and taking all feasible precautions, could have made under the circumstances. The standard is inherently tied to human attributes—judgment, situational awareness, moral agency, and the ability to reassess an evolving situation in real time.[51] These are qualities that machine learning (ML) systems lack, making any attempt to equate machine-driven errors with human mistakes legally and ethically flawed.
The notion of “unintended engagements” in the context of AWS thus introduces a mechanistic, probabilistic approach to the use of force, where errors are framed as a function of system limitations rather than violations of legal obligations. This is problematic under IHL because the law does not merely require minimising mistakes—it clearly prohibits indiscriminate attacks outright. Unlike human decision-makers, autonomous systems lack the ability to apply legal principles such as distinction and proportionality in the nuanced, context-sensitive manner required by IHL. A machine’s failure to correctly identify a lawful target or reassess a situation mid-attack is not a legally recognisable “mistake” but rather an inherent limitation of delegating lethal decision-making to non-human entities.
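The structural gap can be illustrated by what such a system actually produces. The sketch below is hypothetical and references no real targeting system or software: it shows that a classifier’s output reduces to a label and a statistical confidence score, with no representation of the qualitative factors (reasonableness, good faith, feasible precautions taken) on which the IHL assessment of a mistake turns.

```python
# Hypothetical sketch of a target-classification output; no real system or API is referenced.
from dataclasses import dataclass

@dataclass
class ClassifierOutput:
    label: str          # e.g. "military_objective" or "civilian_object"
    confidence: float   # a statistical score, nothing more

def classify(sensor_features):
    # Stand-in for a trained model: a fixed weighted sum pushed through a threshold.
    weights = [0.4, 0.3, 0.3]
    score = sum(w * x for w, x in zip(weights, sensor_features))
    label = "military_objective" if score > 0.5 else "civilian_object"
    return ClassifierOutput(label=label, confidence=score)

result = classify([0.9, 0.2, 0.8])
print(result)  # a label and a number; no field records reasonableness, good faith,
               # or precautions taken, because those are not attributes the model computes
```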
The recharacterisation of indiscriminate attacks as “unintended engagements” fundamentally alters the legal discourse through three critical dimensions. First, it inappropriately shifts focus from effects to intent, directly contradicting IHL’s effects-based framework for evaluating the legality of attacks. Second, it transforms what are legally prohibited acts into technical incidents to be managed, effectively moving the discourse from legal prohibition to technical mitigation. Finally, as noted before, it suggests a relative standard based on technological capabilities, undermining the absolute nature of IHL prohibitions.
Ultimately, conflating human mistakes with machine limitations risks diluting the legal framework governing accountability in warfare. The assessment of a mistake under IHL hinges on human cognitive[52] and ethical faculties, and the introduction of AWS disrupts this foundation. Using the term “unintended engagements” to describe errors made by autonomous systems sidesteps the legal obligations of parties to an armed conflict, potentially eroding accountability under IHL. Rather than introducing vague new terminologies, it is critical to uphold existing IHL standards, which require that the use of force remains a human decision governed by legal and ethical principles—not a mathematical, empirical, statistical output of an ML algorithm.
5. “Ai decision-making”
5.1 “Ai decision-making” under IHL
Another emerging term in the discourse on the use of force—adopted by stakeholders without a thorough examination of its implications for existing legal language and obligations under international law—is “Ai decision-making.” Under IHL, decision-making on the use of force is not a singular event but a process that spans from the initiation of an attack against an adversary to its conclusion.[53] Under IHL, attacks are defined as “acts of violence against the adversary.”[54] This definition establishes that an attack encompasses the entire duration of violent actions taken against a legitimate target until such actions cease. The concept of an adversary under IHL refers to a party engaged in hostilities, whether in an international armed conflict (“IAC”), involving opposing State forces, or in a non-international armed conflict (“NIAC”), where the adversary may be a non-State armed group. The decision to use force against such an adversary involves several critical stages: first, the categorisation of a person or object as a lawful military target; next, the initiation of force against that designated adversary in accordance with other IHL rules such as proportionality; and finally, the cessation of force either after the neutralisation of the target or due to a change in the adversary’s legal status, requiring a reassessment of their legitimacy as a target.[55] Here, the fundamental question under IHL is whether non-human entities, such as ML-based autonomous systems, can be legally authorised to make such decisions. Applying treaty interpretation principles under international law, the answer is “no.”
The black-letter law of IHL, its historical development, and its core principles—including distinction, proportionality, and necessity—do not suggest that human decision-making during an attack can be preprogrammed or delegated to autonomous systems. The principles of IHL require continuous human value judgments and situational awareness in targeting decisions.[56] Distinction requires humans to determine whether a person or object is a lawful target; proportionality requires a human evaluation of collateral damage in relation to military advantage; and necessity requires a judgment on whether force is justified under the circumstances. These are inherently human determinations that cannot be effectively pre-programmed or outsourced to statistical ML systems. Pre-programming or delegating such judgments to autonomous systems contradicts the purpose of IHL, which is to regulate human conduct in armed conflict by imposing moral, ethical, and legal constraints on human decision-makers.
A crucial aspect of human decision-making in targeting is the designation and continued designation of an individual or object as a lawful target throughout an attack. IHL does not permit an attack to proceed unchecked or indiscriminately. A target that was initially lawful may become unlawful due to a change in status or circumstances. For example, a combatant may become hors de combat (out of combat) by surrendering or being wounded, or a civilian object may lose its military significance. Such determinations require real-time human judgment, since ML-based autonomous systems cannot assess changes in intent, context, legal status, or battlefield conditions with the nuanced reasoning required under IHL.[57] Further, if Ai or other non-human systems were to make such determinations, they would lack the legal and ethical accountability necessary under IHL to ensure compliance with the law of armed conflict.
Emerging terminologies such as “Ai decision-making” or claims that Ai systems can “comply with IHL” risk fundamentally mischaracterising the legal framework governing targeting decisions. IHL is premised on the idea that obligations fall on human actors—States, commanders, and individual combatants—not ML algorithms. The legal responsibility for ensuring that force is used lawfully, proportionally, and discriminately rests with humans, not with autonomous systems. Introducing language that implies autonomous systems can “decide” or “comply with” IHL distorts legal obligations and creates a false narrative that non-human entities can bear responsibility under IHL. This is not merely a theoretical issue; it risks eroding legal accountability, as no algorithm can be held legally or morally responsible for war crimes. Therefore, adherence to established legal terminology is not just a matter of legal accuracy—it is essential to prevent the dilution of IHL and to maintain the fundamental principle that humans—not machines—must remain accountable for decisions to use force in armed conflict. The decision to use force is not a mechanistic process but a complex legal and moral determination requiring a reasoned application of IHL principles. It is not merely about selecting a target based on algorithmic parameters but about applying legal discretion, contextual interpretation, and accountability—elements that no ML system can perform effectively.
The authority to make decisions regarding the use of force is firmly vested in human agents—fighters and combatants—who bear responsibility for ensuring that attacks comply with IHL. The lex lata of IHL does not recognise non-human entities, such as autonomous systems, as lawful decision-makers in targeting. IHL explicitly assigns obligations—such as verifying targets, taking precautions, and cancelling attacks if they become unlawful—to humans.[58] Only humans possess the legal capacity to make discretionary judgments, exercise moral reasoning, and be held accountable for violations of IHL. Granting Ai systems the authority to “decide” to use force would contradict fundamental IHL principles by detaching targeting decisions from human responsibility and legal accountability.
It is thus a misnomer—even an inconsistency with IHL—to use terms such as “Ai decision-making” or “autonomous weapon systems complying with IHL.” Decision-making under IHL is not a purely empirical, computational process; it is a legal act that carries obligations and responsibilities that can, thus far, only be fulfilled by humans. Similarly, compliance with IHL is not merely about meeting algorithmic thresholds but about engaging in a process of legal reasoning, proportionality assessments, and ethical judgment, which ML systems lack. The use of such misleading terminology risks diluting the legal framework of IHL by implying that machines can assume human obligations, when in fact accountability and legal agency remain inseparably tied to human actors. The law of war is designed around the human exercise of discretion and responsibility—elements that Ai, by its nature, cannot replicate.
5.2 “Ai decision-making” under IHRL
Similarly, in the context of IHRL, the use of terms like “Ai decision-making” and “Ai systems complying with the law” introduces a conceptual and legal distortion. Under IHRL treaties, States are obligated to “respect, protect, fulfil, and promote human rights.” These obligations are inherently State-driven and require human agents—governments, military officials, law enforcement, and policymakers—to ensure rights are upheld.[59] The existing legal framework does not recognise non-human entities as duty-bearers, for good reason. This means that compliance with human rights law cannot be assigned to Ai systems or autonomous weapons.[60] If compliance with human rights obligations is framed in terms of Ai decision-making, it shifts responsibility away from States and their human agents, weakening accountability mechanisms.
The fundamental premise of human rights law is that obligations are performed by human agents, not automated processes. The principles of due process, proportionality, and non-discrimination require reasoning, self-reflexivity, contextuality, interpretation, and moral or value consideration—capacities that ML systems fundamentally lack, due to their exclusively quantitative, mathematical operational logic.[61] Moreover, IHRL establishes mechanisms for redress and accountability in cases of violations, requiring human actors to be held responsible for their actions. If an Ai system makes a targeting decision that results in civilian harm or an unlawful use of force, it cannot be held accountable under human rights law in the same way a human commander or political authority can. The introduction of language that implies Ai “compliance” with human rights obscures the necessity of human oversight and decision-making.
Beyond the legal misalignment, framing Ai as a “decision-maker” in military and law enforcement contexts poses serious risks to human rights protections. If Ai-driven systems are perceived as capable of making legally compliant decisions, there is a temptation by users to abdicate their responsibility to rigorously assess and review Ai-based operations, leading to a dangerous erosion of oversight and accountability. This could result in arbitrary deprivations of life, algorithmic discrimination, and disproportionate uses of force, all contrary to IHRL’s fundamental principles. The law is clear that only human actors are accountable for upholding and ensuring compliance with human rights—delegating such responsibilities to Ai undermines the protective function of IHRL. For these reasons, States and international bodies must resist the adoption of misleading terminologies such as “Ai decision-making” and “autonomous systems complying with the law.” These phrases falsely suggest that legal obligations can be automated or mechanised, when in reality, human agency (and higher-order human cognitive capabilities) remains the cornerstone of both IHL and IHRL compliance. The language of international law must remain precise and human-centered, ensuring that legal and ethical responsibilities remain clearly attributed to States, military commanders, and decision-makers, rather than being diluted through technological abstraction.
6. Command Responsibility in Ai Context
The final section examines why command responsibility, designed for human-to-human command relationships, fails to adequately address AWS deployment contexts, creating significant accountability gaps and potentially undermining the right to remedy. Specifically, this section examines how referring to individuals operating ML systems or tools as “commanders” is inconsistent with the provisions of IHL and international criminal law (“ICL”). In the currently evolving discourse on Ai governance, the term “commander of Ai systems” has been introduced, leading to distortions in the established meaning of “commander” under international law.
The doctrine of command responsibility, established through post-World War II jurisprudence and codified under IHL and ICL, rests on three essential elements: first, the existence of a superior-subordinate relationship with effective control; second, the superior’s knowledge or constructive knowledge of subordinates’ crimes; and finally, failure to take necessary and reasonable measures to prevent or punish such conduct.[62] This framework presupposes specific characteristics of human command relationships: a cognitive capacity for meaningful oversight, the ability to assess and influence subordinate behaviour, a shared understanding of legal and ethical obligations, and clear chains of command and control.
The doctrine of command responsibility is a sophisticated legal framework for attributing criminal responsibility to military commanders for the acts of their subordinates. The superior-subordinate relationship, in particular, requires demonstration of effective control—the material ability to prevent or punish criminal conduct. As such, we see that the term “commander” under IHL has a precise legal meaning, rooted in a human-to-human hierarchical relationship within military structures. IHL establishes the duty of commanders to prevent, suppress, and report breaches of IHL committed by persons under their command.[63] This duty presupposes a human superior-subordinate relationship in which the commander exercises direct and effective control over human forces. Autonomous systems, being non-human entities, do not possess agency, intent, or the capacity to be “commanded” in the way human subordinates are. Describing a human operator of a ML autonomous system as a “commander” distorts this well-established legal understanding and risks eroding the framework of accountability in military operations.
Furthermore, IHL explicitly requires commanders to ensure that subordinates are aware of their obligations under IHL.[64] This obligation presumes that subordinates are capable of comprehending (a semantic understanding), internalising, and executing legal and ethical directives—a capacity that ML systems categorically lack. Autonomous systems do not “understand” legal principles beyond statistical correlations; they operate on pre-programmed parameters and statistical models (a mathematical formula, if you will). Thus, applying the term “commander” to a human interacting with an ML system fundamentally misrepresents the nature of command in IHL and risks weakening the enforcement of accountability mechanisms. Moreover, IHL places an obligation on commanders to take preventive and corrective actions when they become aware that subordinates may commit, or have committed, breaches of IHL.[65] This requirement presumes that subordinates operate with discretion and intent, which allows their behaviour to be influenced or corrected by a commander. ML systems, however, do not, and cannot, exercise independent judgment; their actions are determined by pre-programmed algorithms and machine learning processes.
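For illustration only, and as a generic supervised-learning form rather than a description of any particular military system, the “statistical model” referred to above reduces to a fixed formula over input features and learned parameters:

$$
\hat{y} = \sigma\!\left(\mathbf{w}^{\top}\mathbf{x} + b\right), \qquad \sigma(z) = \frac{1}{1 + e^{-z}}, \qquad \text{act} \iff \hat{y} > \tau,
$$

where $\mathbf{x}$ is a vector of input features, $\mathbf{w}$ and $b$ are parameters fixed at training time, and $\tau$ is a pre-set threshold. Nothing in such an expression comprehends, internalises, or can be deterred from breaching a directive; it only maps numbers to numbers.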
6.1 Command responsibility and ICL concepts
Equally, under ICL, command responsibility applies strictly within a human-to-human relationship.[66] A “military commander or person effectively acting as a military commander” is criminally responsible for crimes committed by forces under their “effective command and control.”[67] The core criterion for command responsibility is the ability of the commander to exercise control over subordinates, including preventing, repressing, and reporting crimes. ML-based AWS do not function as “forces” in the legal sense; they are tools that lack agency, legal personality, and the ability to form intent. Thus, using the term “commander” in relation to such systems is legally flawed and distorts established principles of criminal liability.[68]
Furthermore, ICL also requires that the commander “knew or, owing to the circumstances at the time, should have known” that subordinates were committing or about to commit crimes.[69] This presupposes that the commander is dealing with sentient individuals capable of making autonomous decisions, which is incompatible with the nature of AWS based on ML techniques. Such systems do not possess intent or moral culpability, meaning their actions cannot be equated with those of human subordinates.[70] Assigning the term “commander” to humans interacting with them creates a misleading narrative that complicates legal accountability and diminishes the effectiveness of ICL mechanisms.
Additionally, under ICL, command responsibility extends to situations where a superior has “effective authority and control” over subordinates.[71] This concept inherently depends on the ability of the commander to influence human actors through orders, training, and disciplinary measures. AWS, however, do not respond to disciplinary actions or commands in a legal sense. Rather than invoking command responsibility, the correct legal framework for assessing accountability in the deployment of AWS is individual criminal responsibility—whereby the human operator, programmer, or decision-maker may be held directly accountable for unlawful acts resulting from the use of the system. Retaining this distinction is critical to ensuring that legal responsibility remains human-centric and that machines are not erroneously treated as moral agents – a proposal that our current legal framework manifestly does not support.
6.2 “Responsible Ai”
While emerging terminologies such as “responsible Ai” aim to foster good practices and do not inherently conflict with international law, they may nonetheless undermine established multilateral governance frameworks. Such language can appear politically charged, introducing unhelpful distinctions between ostensibly “responsible” and “irresponsible” actors, or between entities presumed to have good intentions and those assumed otherwise. The term “responsible Ai” risks serving as a political façade, potentially facilitating ethics-washing and prioritising voluntary commitments over binding obligations under international law.
The term “responsible Ai” emerged primarily from corporate and institutional narratives,[72] representing what might be characterised as “strategic regulatory pre-emption” rather than substantive governance. From a legal perspective, it raises fundamental definitional problems—lacking clear metrics or standards for what constitutes “responsible,” showing ambiguity about whether responsibility refers to development, deployment, or outcomes, and providing no clear mechanism for enforcement.[73] This terminological ambiguity serves as a strategic asset—its imprecision allows flexible interpretation while enabling actors to claim compliance without meeting specific legal standards, effectively shifting discourse from lex lata legal obligations toward ethical aspirations.[74] Unlike “responsible Ai,” Article 36 of Additional Protocol I to the Geneva Conventions establishes clear, unambiguous legal obligations regarding weapons review,[75] requiring States to determine whether new weapons would be prohibited by international law based on specific, measurable criteria, including compliance with explicit prohibitions under treaty law, adherence to customary principles, conformity with the Martens Clause,[76] and evaluation of indiscriminate effects under Article 51(4).[77] By substituting these precise legal requirements with ambiguous ethical aspirations, “responsible Ai” transforms legally binding obligations into discretionary guidelines—undermining the uniformity of legal obligations that Article 36 was designed to ensure, lowering accountability thresholds in military operations, and complicating the assessment of State compliance, since it lacks specific benchmarks.[78] In practice, this rhetorical device risks legitimising practices that may be unlawful, for the sake of military dominance or profit maximisation.
7. Conclusions
This policy note has examined several emerging terminologies in the governance of Ai technologies, particularly in the military domain. The uncritical transposition of Ai-related terminologies from technical disciplines into international legal governance poses significant risks to the integrity of established international legal standards. While these terms are appropriate in technological contexts, where probabilistic and procedural approaches are acceptable, they are fundamentally incompatible with the substantive and absolute obligations of international law, particularly under international humanitarian law.
The key recommendation for States and stakeholders is to adhere to the agreed language enshrined in ratified treaties when discussing and formulating policies on Ai in the military context. The introduction of new, ambiguous terminology complicates multilateral discussions, undermines legal clarity and certainty, and disrupts common understanding – ultimately hindering progress in Ai governance.
The political reality is that these new terminologies in Ai governance, such as “mitigating bias” and “unintended engagements,” are emerging from geopolitically powerful states that currently lead in the development of Ai for military applications. These same States also dominate policy discussions and agenda-setting in the governance of Ai in the military domain. As a result, these terms quickly gain traction, becoming the prevailing language in multilateral discussions without rigorous scrutiny of their implications for existing lex lata. Over time, they are presented as “agreed language,” despite the absence of broad, inclusive debate, or State consent as required by international law.
The history of international law and policymaking has long been marked by epistemic injustice, where language, terminology, and framing disproportionately reflect the interests of a select few powerful States rather than the global community.[79] To ensure a just and equitable approach to Ai governance, stakeholders must resist the uncritical adoption of new terminologies that risk diluting or redefining established legal norms. Instead, they must insist on preserving language that reflects the binding obligations of international law, ensuring that policy developments serve all states rather than a privileged few. Moreover, this terminological shift cannot be separated from international competition for Ai dominance, where states engaged in technological arms races have strategic interests in maintaining development flexibility while demonstrating only nominal compliance with international law, effectively facilitating continued AWS development despite potential conflicts with IHL obligations.
Moving forward, in my view, requires recognition that the challenges facing international legal frameworks extend beyond technical or doctrinal considerations to encompass broader political and economic dynamics. As such, the effective preservation of legal standards will require strengthened institutional mechanisms for evaluating new terminology, enhanced international cooperation to resist regulatory competition, the development of economic incentives aligned with legal compliance, and the explicit rejection of technological determinism in legal evolution. The integrity of international law, and the protections it provides, must take precedence over both technological expedience and market imperatives.
References
[1] Ashley Deeks, ‘Consent to the Use of Force and International Law Supremacy’ (2013) 54(1) Harvard International Law Journal, Virginia Public Law and Legal Theory Research Paper No. 2013-14, available at SSRN: https://ssrn.com/abstract=2228714; Monica Hakimi, ‘Defensive Force against Non-State Actors: The State of Play’ (2015) 91 Int’l L Stud 1, https://digital-commons.usnwc.edu/cgi/viewcontent.cgi?article=1000&context=ils accessed 3 May 2025.
[2] Nils Melzer, Targeted Killing in International Law (Oxford University Press 2008), providing a comprehensive legal analysis of targeted killings; Noam Lubell and Nathan Derejko, ‘A Global Battlefield? Drones and the Geographical Scope of Armed Conflict’ (2013) 11 J Int’l Criminal Justice 65, https://doi.org/10.1093/jicj/mqs096.
[3] Vlad, R. O., & Hardy, J. (2024). Signature Strikes and the Ethics of Targeted Killing. International Journal of Intelligence and Counter Intelligence, 1–29. https://doi.org/10.1080/08850607.2024.2382029; Michael Riepl, ‘Can’t Learn an Old Law New Tricks? Three Examples of How International Humanitarian Law Aged and Adapted’ (30 January 2024) https://ssrn.com/abstract=5117873; Rita Preto, ‘A Never-Ending Tug-Of-War: The Inherent Right of Self-Defense Against Non-State Actors’ (2024) 11(2) e-Publica 32; Jean Marie Vianney Sikubwabo, ‘A Critical Study of Legitimization of Preemptive Self-Defense as a Counter-Terrorism Measure Under International Law’ (International Law Insights, 13 May 2024).
[4] Vienna Convention on the Law of Treaties.
[5] Nicaragua v. United States of America, Judgment, I.C.J. Reports 1986, 3, 98.
[6] Lukashuk, I. I. “The Principle Pacta Sunt Servanda and the Nature of Obligation Under International Law.” The American Journal of International Law 83, no. 3 (1989): 513–18. https://doi.org/10.2307/2203309.
[7] See Article 26 of the Vienna Convention on the Law of Treaties; North Sea Continental Shelf Cases, I.C.J. Rep. 1969. See also Nicaragua ICJ Reps, 1986, p. 3 at 98, Nuclear Weapons and Case of the SS Lotus (1927).
[8] Vienna Convention on the Law of Treaties, Article 31(1).
[9] United Nations, Report of the International Law Commission, Seventy-first session (29 April–7 June and 8 July–9 August 2019), chap. 5, conclusion 23, Peremptory norms of general international law (jus cogens), A/74/10, https://legal.un.org/ilc/reports/2019/english/a_74_10_advance.pdf.
[10] Statute of the International Court of Justice, Article 38.
[11] Military and Paramilitary Activities in and against Nicaragua (Nicaragua v. United States of America), Judgment, I.C.J. Reports 1986, 3, 98.
[12] Article 53 of the Vienna Convention on the Law of Treaties accordingly provides that a treaty will be void ‘if, at the time of its conclusion, it conflicts with a peremptory norm of general international law’. See also UNHRC Advisory Committee, ‘A Global Call for Concrete Action for the Total Elimination of Racism, Racial Discrimination, Xenophobia and Related Intolerance and the Comprehensive Implementation of and Follow up to the Durban Declaration and Programme of Action’ (23rd Session, 16 July 2019) A/HRC/AC/23/CRP.2 at 29.
[13] See historical examples in counterterrorism where terminological innovations have been scrutinized under international legal requirements.
[14] CCW/GGE.1/2019/3, Report of the 2019 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Geneva, 25-29 March and 20-21 August 2019), 17; CCW/GGE.1/2019/WP.7, ‘Autonomy, artificial intelligence and robotics: Technical aspects of human control’ (International Committee of the Red Cross, Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Geneva, 20-21 August 2019); CCW/GGE.1/2023/WP.2/Rev.1, ‘State of Palestine’s Proposal for the Normative and Operational Framework on Autonomous Weapons Systems’ (State of Palestine, Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Geneva, 6-10 March and 15-19 May 2023); for a synthesised discussion, see Tobias Riebe, ‘Meaningful Human Control of LAWS: The CCW-Debate and its Implications for Value-Sensitive Design’ in Andreas Manhart and Michael Friedewald (eds), Technology Assessment of Dual-Use ICTs (Springer Vieweg, Wiesbaden, 2023) https://doi.org/10.1007/978-3-658-41667-6_10.
[15] United Nations Institute for Disarmament Research, Report of the Secretary-General, Lethal Autonomous Weapons Systems (1 July 2024) UN Doc A/79/88; Paul Scharre, ‘Autonomy, “Killer Robots”, and Human Control in the Use of Force’ (Part II) (Just Security, 9 July 2014); The Weaponization of Increasingly Autonomous Technologies: Considering how Meaningful Human Control might move the discussion forward, 2014, https://unidir.org/files/publication/pdfs/considering-how-meaningful-human-control-might-move-the-discussion-forward-en-615.pdf
[16] See Report of the Secretary-General, Lethal Autonomous Weapons Systems (1 July 2024) UN Doc A/79/88 for an overview of states’ positions on MHC in the context of LAWS at pages 61–63, 94–97, 113–15; CCW/GGE.1/2023/WP.1/Rev.1, ‘Revised working paper’ (Austria, Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, Geneva, 6-10 March and 15-19 May 2023); Statement by Brazil, 78th UN General Assembly First Committee, 23 October 2023, https://reachingcriticalwill.org/images/documents/Disarmament-fora/1com/1com23/statements/23Oct_Brazil.pdf; Statement by Türkiye, Thematic Discussion on “Conventional Weapons”, First Committee, 77th Session of the United Nations General Assembly (21 October 2022).
[17] Report of the Secretary-General, Lethal Autonomous Weapons Systems (1 July 2024) UN Doc A/79/88; UN Office for Disarmament Affairs, https://disarmament.unoda.org/update/retaining-meaningful-human-control-of-weapons-systems/.
[18] Sarah Knuckey, ‘Governments Conclude First (Ever) Debate on Autonomous Weapons: What Happened and What’s Next’ (Just Security, 16 May 2014) https://www.justsecurity.org/10518/autonomous-weapons-intergovernmental-meeting/ accessed 3 May 2025. See, for example, US Department of Defense Directive 3000.09.
[19] Report of the Human Rights Council Advisory Committee, Possible impacts, opportunities and challenges of new and emerging digital technologies with regard to the promotion and protection of human rights (19 May 2021) UN Doc A/HRC/47/52.
[20] RM & another v Attorney General [2006] eKLR (Civil Case 1351 of 2002) (1 December 2006), page 25, available at http://kenyalaw.org/caselaw/cases/view/35204, wherein the High Court affirms that non-discrimination, discussed in the context of children, was “part of jus cogen.” The High Court relies here (at page 20) on the Human Rights Committee’s General Comment No. 18, which provides at para 1 that “non-discrimination constitutes a basic and general principle relating to the protection of human rights.” Inter-American Court of Human Rights, “Mapiripán Massacre” v. Colombia, Merits, Reparations and Costs; UN Human Rights Council, Report on Human rights implications of new and emerging technologies in the military domain (2024), para 19.
[21] United Nations, Report of the International Law Commission, Seventy-first session (29 April–7 June and 8 July–9 August 2019), chap. 5, conclusion 23, Peremptory norms of general international law (jus cogens), A/74/10, https://legal.un.org/ilc/reports/2019/english/a_74_10_advance.pdf.
[22] See United Kingdom Ministry of Defence, “Ambitious, safe, responsible: Our approach to the delivery of AI-enabled capability in defence” (2022) p.11; See UN GGE on LAWS 2019 Report, CCW/GGE.1/2019/3, Annex IV, p.13.
[23] CCW/GGE.1/2024/WP.5, Report on “Addressing Bias in Autonomous Weapons”, submitted by Austria, Belgium, Canada, Costa Rica, Germany, Ireland, Luxembourg, Mexico, Panama and Uruguay, p.6.
[24] As above, p.2.
[25] See for instance Article 10 of Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
[26] Article 2, Universal Declaration of Human Rights (UDHR) (1948); Articles 2(1) and 26, ICCPR (1966); Article 2(2), International Covenant on Economic, Social and Cultural Rights (ICESCR) (1966); Articles 2(1) and 5 of International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) (1965); Articles 2 and 5, Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) (1979); Article 2, Convention on the Rights of the Child (CRC) (1989); Article 5, Convention on the Rights of Persons with Disabilities (CRPD) (2006); Article 7, International Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families (ICMW) (1990); Article 2, Declaration on the Rights of Indigenous Peoples (UNDRIP) (2007).
[27] As above.
[28] Chinmayi Arun, “AI and the Global South: Designing for Other Worlds,” in The Oxford Handbook of Ethics of AI, ed. Markus D. Dubber, Frank Pasquale, and Sunit Das (Oxford: Oxford University Press, 2020).
[29] Articles 2, 3, and 18 (3), African Charter on Human and Peoples’ Rights (ACHPR) (1981); Article 14, European Convention on Human Rights (ECHR) (1950); Articles 1(1) and 24, American Convention on Human Rights (ACHR) (1969).
[30] Atala Riffo and Daughters v. Chile, Judgment (Preliminary Objections, Merits, Reparations and Costs), Inter-Am. Ct. H.R. (ser.C) No. 239, February 24, 2012; Centre for Minority Rights Development (Kenya) and Minority Rights Group International on behalf of Endorois Welfare Council v. Kenya, Communication 276/03, African Commission on Human and Peoples’ Rights (ACHPR), 2009; CJEU, C-33/89, Maria Kowalska v. Freie und Hansestadt Hamburg, 27 June 1990.
[31] Preamble, Articles 9(1), 10(1), and 75(1) of Additional Protocol I (1977) to the Geneva Conventions of 1949; Articles 2(1) and 4(1), Additional Protocol II (1977) to the Geneva Conventions of 1949; Common Article 3 to the Geneva Conventions of 1949; Articles 12 and 27, Geneva Convention I (For the Wounded and Sick in Armed Forces in the Field); Article 12, Geneva Convention II (For the Wounded, Sick, and Shipwrecked at Sea); Articles 13 and 16, Geneva Convention III (For Prisoners of War – POWs); Article 27, Geneva Convention IV (For the Protection of Civilians in Time of War).
[32] Rule 88, ICRC Study of Customary International Humanitarian Law.
[33] See CCW/GGE.1/2024/WP.5, Report on “Addressing Bias in Autonomous Weapons”, submitted by Austria, Belgium, Canada, Costa Rica, Germany, Ireland, Luxembourg, Mexico, Panama and Uruguay; A/HRC/56/68, UN Special Rapporteur on Contemporary forms of racism, racial discrimination, xenophobia and related intolerance, Report on “Artificial Intelligence” (2024), paras 5-12 and 37-39; A/79/170, Report of the UN Special Rapporteur on human rights and international solidarity, Report on “Artificial intelligence and international solidarity – towards human-centred artificial intelligence international solidarity by design” (2024), paras 5-18.
[34] “Principle of distinction,” Casebook, ICRC, https://casebook.icrc.org/law/principle-distinction.
[35] See Additional Protocol provisions above.
[36] T Chengeta, “The Right to Non-Discrimination and Freedom from Racial Oppression in Autonomous Weapon Systems,” (above); A/HRC/AC/33/CRP.1; Advisory Committee, UN Human Rights Council, Report on Human rights implications of new and emerging technologies in the military domain (2024), para 19.
[37] See U.S. Department of Defense (DoD) Directive 3000.09 (2023); see also CCW/GGE.1/2024/WP.5, Report on “Addressing Bias in Autonomous Weapons”, submitted by Austria, Belgium, Canada, Costa Rica, Germany, Ireland, Luxembourg, Mexico, Panama and Uruguay; P Scharre, Autonomous Weapons and Operational Risk (Center for a New American Security, 2016); M Bo et al, Retaining human responsibility in the development and use of autonomous weapon systems (SIPRI: Stockholm, 2022), 14-16.
[38] For example, an autonomous system misidentifying a civilian vehicle as a military vehicle due to pattern-recognition errors, or an Ai system confusing civilian gatherings with military formations due to similar heat signatures or movement patterns.
[39] As above.
[40] According to Article 51(4)(a) of the 1977 Additional Protocol I, attacks “which are not directed at a specific military objective” and consequently “are of a nature to strike military objectives and civilians or civilian objects without distinction” are indiscriminate.
[41] “Indiscriminate attacks,” How does law protect in war? – Online casebook, ICRC, https://casebook.icrc.org/a_to_z/glossary/indiscriminate-attacks.
[42] ICJ, Nuclear Weapons Advisory Opinion (Para. 43). See also Israel, Operation Cast Lead (Part II, paras 120-126, 230-232, 365-392); Israel, The Targeted Killings Case (paras 40-46); Israel, Human Rights Committee’s Report on Beit Hanoun (paras 34, 38-42).
[43] Article 51(4), Additional Protocol I (1977) to the Geneva Conventions (1949).
[43] As above.
[44] U.S. Department of Defense (DoD) Directive 3000.09 (2023), p.23.
[45] Article 51(4), Additional Protocol I (1977) to the Geneva Conventions (1949).
[46] See U.S. Department of Defense (DoD) Directive 3000.09 (2023), p.23.
[47] Article 51(5)(b), Additional Protocol I (1977) to the Geneva Conventions (1949).
[48] As above.
[49] Y Dinstein, The Conduct of Hostilities under the Law of International Armed Conflict (CUP, 2016), para 398 (“many things can go wrong in the execution of attacks, and, as a result, civilians are frequently harmed by accident.”).
[50] MN Schmitt and M Schauss, “Uncertainty in the law of targeting: Towards a cognitive framework” (2019) 10 Harvard National Security Journal p.162.
[51] As above, p.157.
[52] As above, pp. 153, 157.
[53] MN Schmitt and M Schauss, “Uncertainty in the law of targeting: Towards a cognitive framework” (2019) 10 Harvard National Security Journal p.149 (notes that targeting is a dynamic process characterised by situation-specific decision-making).
[54] Article 49(1) of Additional Protocol I (1977) to the Geneva Conventions (1949).
[55] Ingvild Bode, “Falling under the Radar: The Problem of Algorithmic Bias and Military Applications of AI,” March 14, 2024; Ruben Stewart and Georgia Hinds, “Algorithms of War: The Use of Artificial Intelligence in Decision Making in Armed Conflict,” October 24, 2023; Wen Zhou and Anna Rosalie Greipl, “Artificial intelligence in military decision-making: supporting humans, not replacing them,” ICRC Law and Policy Blog, August 29, 2024.
[56] T Chengeta, “Defining the emerging notion of meaningful human control over weapon systems” (2017) NYU Journal of International Law and Politics p. 871.
[57] T Chengeta, “Defining the emerging notion of meaningful human control over weapon systems” (2017) NYU Journal of International Law and Politics p.875; MN Schmitt and M Schauss, “Uncertainty in the law of targeting: Towards a cognitive framework” (2019) 10 Harvard National Security Journal p.152 (refers to multifaceted situational assessment when planning, approving or executing attacks.)
[58] Article 57, Additional Protocol I (1977) to the Geneva Conventions (1949).
[59] T Chengeta, “Autonomous weapon systems and the inadequacies of existing law: The case for a new treaty” (2022) 8 Journal of Law & Cyber Warfare p. 111–124.
[60] As above.
[61] Y LeCun, Y Bengio and G Hinton, ‘Deep Learning’ (2015) 521 Nature 436; Rohit Nishant, Dirk Schneckenberg and MN Ravishankar, ‘The Formal Rationality of Artificial Intelligence-Based Algorithms and the Problem of Bias’ (2024) 39(1) Journal of Information Technology 20.
[62] In re Yamashita, 327 U.S. 1 (1946); “Command responsibility,” How does law protect in war? – Online casebook, ICRC, https://casebook.icrc.org/a_to_z/glossary/command-responsibility.
[63] Article 87, Additional Protocol I (1977) to the Geneva Conventions (1949).
[64] As above.
[65] As above, Article 87(3).
[66] Article 28, Rome Statute of the International Criminal Court (ICC).
[67] As above.
[68] T Chengeta, “Accountability gap: Autonomous weapon systems and modes of responsibility in international law” (2016) 45 Denver Journal of International Law & Policy.
[69] Article 28(a), Rome Statute of the International Criminal Court (ICC).
[70] T Chengeta, “Accountability gap: Autonomous weapon systems and modes of responsibility in international law” (2016) 45 Denver Journal of International Law & Policy.
[71] Article 28(b), Rome Statute of the International Criminal Court (ICC).
[72] See, for example, the “responsible AI” initiatives announced by major technology companies whose AI deployments have faced public scrutiny. See Thilo Hagendorff, ‘The Ethics of AI Ethics: An Evaluation of Guidelines’ (2020) 30 Minds & Machines 99.
[73] What is “responsible” in one context may not be in another, with no specificity regarding who determines what constitutes “responsible.” See Anna Jobin, Marcello Ienca and Effy Vayena, ‘The Global Landscape of AI Ethics Guidelines’ (2019) 1 Nature Machine Intelligence 389, 391, identifying over 80 Ai ethics documents with substantial divergence in their interpretation of principles.
[74] This serves both corporate interests and State interests alike—where both seek fast adoption of AI technology, either because of profit imperatives or political (AI race) imperatives of military dominance. See Elettra Bietti, ‘From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy’ (2020) Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency 210; Ben Wagner, ‘Ethics as an Escape from Regulation: From Ethics-Washing to Ethics-Shopping?’ in Emre Bayamlıoğlu and others (eds), Being Profiled: Cogitas Ergo Sum (Amsterdam University Press 2018).
[75] Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (adopted 8 June 1977, entered into force 7 December 1978) 1125 UNTS 3 (Additional Protocol I), Article 36 provides that all new weapons must be capable of being used in compliance with lex lata IHL—treaty and customary.
[76] The Martens Clause appears in the preamble to the 1899 Hague Convention II and has been reaffirmed in subsequent IHL treaties, including Article 1(2) of Additional Protocol I; see also International Court of Justice, Legality of the Threat or Use of Nuclear Weapons (Advisory Opinion) [1996] ICJ Rep 226, para 78. It provides that in cases not covered by specific international agreements, civilians and combatants remain under the protection of principles of international law derived from established custom, principles of humanity, and the dictates of public conscience.
[77] These requirements constitute clear legal obligations with standards for compliance, developed through widespread State practice and international jurisprudence. The ICRC’s interpretative guidance emphasizes these reviews must be systematic, empirically based, and legally rigorous. See International Committee of the Red Cross, ‘A Guide to the Legal Review of New Weapons, Means and Methods of Warfare: Measures to Implement Article 36 of Additional Protocol I of 1977’ (2006) 88(864) International Review of the Red Cross 931, available at https://international-review.icrc.org/sites/default/files/irrc_864_11.pdf
[78] Where Article 36 requires specific legal assessments, “responsible AI” permits subjective interpretation of longstanding legal standards. See Kenneth Anderson, Daniel Reisner and Matthew Waxman, ‘Adapting the Law of Armed Conflict to Autonomous Weapon Systems’ (2014) 90 International Law Studies 386.
[79] See Makau Mutua and Antony Anghie, “What Is TWAIL?,” Proceedings of the Annual Meeting (American Society of International Law) 94 (April 5–8, 2000): 31–40.