Weaponization of Artificial Intelligence (AI): Can the United Nations Prevent the Rise of Killer Robots?
1. INTRODUCTION
Artificial Intelligence (AI) has revolutionized nearly every sphere of human activity — from healthcare to defense. However, the emergence of Autonomous Weapons Systems (AWS), capable of selecting and engaging targets without human intervention, has raised serious ethical and legal concerns. Often referred to as “killer robots,” these technologies blur the line between machine efficiency and human moral judgment in warfare. At the center of this global debate stands the United Nations (UN), particularly under the Convention on Certain Conventional Weapons (CCW) framework, where states deliberate whether a binding international treaty should restrict or ban such systems. The issue transcends technology — it questions accountability, human rights, and international law.
2. CONTEXT AND BACKGROUND
The idea of automating warfare is not new. From drones to cyber defense systems, states have increasingly relied on AI-driven tools. Autonomous Weapons Systems, however, represent a profound shift: they can independently identify, track, and engage targets without real-time human command. The 2012 Human Rights Watch report "Losing Humanity" popularized the term "killer robots," sparking a global advocacy movement that led to debates within the UN Convention on Certain Conventional Weapons (CCW). Since 2014, meetings of the Group of Governmental Experts (GGE) under the CCW have sought to define the boundaries of "meaningful human control," yet progress remains slow owing to political and strategic differences.

Legally, the UN Charter (1945), the Geneva Conventions (1949), and Additional Protocol I (1977) establish limits on the means and methods of warfare. These instruments presuppose that human judgment and accountability are essential to lawful combat operations, a premise that AI's autonomous decision-making directly challenges. Several powerful states, including the United States, Russia, and China, oppose a complete ban, citing national security and technological advantage. Conversely, a coalition of smaller states, supported by NGOs such as the Campaign to Stop Killer Robots, calls for a preventive treaty akin to the 1997 Ottawa Treaty on landmines. This ongoing stalemate exposes the limitations of the UN system, particularly the CCW's consensus-based decision-making, which allows a handful of states to block agreement.

3. DISCUSSION AND LEGAL ANALYSIS
A. International Humanitarian Law and Accountability
The core principle of IHL is distinction: combatants must differentiate between military and civilian targets. Fully autonomous weapons, however, lack the moral and situational understanding needed to apply proportionality and precaution effectively. Articles 35 and 36 of Additional Protocol I (1977) require states to assess the legality of new weapons, yet AI systems that evolve through machine learning make such assessments difficult. Once deployed, their decision-making may deviate from programmed parameters, creating accountability gaps. If an AWS unlawfully kills civilians, who is responsible? Possible actors include:
- the commander who deployed the system,
- the programmer or manufacturer, or
- the state itself under the ILC's Articles on State Responsibility.
None of these options fits neatly, illustrating why traditional IHL struggles to regulate autonomous systems.
B. UN’s Role and Legal Developments
The UN Office for Disarmament Affairs (UNODA) and the CCW GGE have held multiple sessions since 2014. In 2021, UN Secretary-General António Guterres called for a legally binding instrument banning systems that function without meaningful human control. The CCW's 2023 report, however, reaffirmed only voluntary principles, emphasizing "human accountability" without imposing binding obligations. This reflects a structural weakness in the UN's disarmament machinery, where consensus decision-making undermines effective action. Despite this, UN agencies and special rapporteurs continue to warn of potential violations of the right to life (Article 6, ICCPR) and of human dignity under international human rights law.

The International Committee of the Red Cross (ICRC) has also proposed a three-tier regulatory approach:
- Prohibit unpredictable AWS,
- Prohibit systems targeting humans, and
- Regulate remaining systems under strict human supervision.
C. Ethical and Strategic Dimensions
Ethically, delegating life-and-death decisions to algorithms undermines human agency. Roboticists such as Noel Sharkey, together with legal scholars, argue that machines cannot comprehend human suffering and that their use in targeting therefore violates the Martens Clause, which invokes "the principles of humanity and the dictates of public conscience." Strategically, however, major powers resist bans, citing deterrence and military innovation: they argue that prohibiting AWS might hinder defensive capabilities or cede advantage to adversaries. This tension illustrates the power-politics limitations of international law, where moral imperatives often yield to strategic interests.
D. Comparative Legal Efforts
Some regional initiatives provide hope. The European Parliament (2021) adopted resolutions urging global prohibition of fully autonomous weapons. Likewise, countries like Austria, Chile, and Costa Rica have formally proposed negotiating a new UN treaty outside the CCW framework — mirroring how the Ottawa Process achieved the landmine ban despite initial opposition. Thus, while the UN remains the legitimate platform, alternative diplomatic coalitions could accelerate progress.
4. SUGGESTED SOLUTIONS AND POSSIBLE OUTCOMES
To prevent the destabilizing impact of AI weaponization, the following solutions are proposed:
- Adopt a Legally Binding UN Convention: The UN General Assembly should initiate negotiations for a Convention on Autonomous Weapons Systems (CAWS), defining "meaningful human control" and establishing global prohibitions similar to those of the Chemical Weapons Convention.
- Establish an International Oversight Mechanism: A UN-based monitoring body, similar to the OPCW, should oversee compliance, requiring member states to report AI weapon testing, deployment, and risk assessments.
- Strengthen Article 36 Reviews: States must be mandated to conduct transparent pre-deployment reviews of all AI-based military technologies, ensuring conformity with IHL and human rights standards.
- Promote AI Ethics and Dual-Use Regulation: International cooperation through UNESCO and the ITU should promote ethical AI design, preventing civilian AI from being repurposed for lethal applications.
- Encourage Multistakeholder Engagement: Civil society, academia, and private tech firms must be included in decision-making to balance innovation with humanitarian accountability.
5. CONCLUSION
The weaponization of AI represents a defining challenge for 21st-century international law. While technology evolves rapidly, global governance mechanisms lag behind. The UN, as the moral and legal custodian of world peace, must act decisively to preserve human oversight in the use of lethal force. Failing to regulate autonomous weapons risks a future where machines, not humans, decide who lives or dies. Through a binding treaty, enhanced transparency, and global ethical standards, the international community can reaffirm the timeless principle that war must always remain under human conscience and control.
ACKNOWLEDGEMENT
At the outset, I would like to thank God for granting me the strength and perseverance to complete this assignment successfully despite the prevailing circumstances. I place on record my warm gratitude to Dr. Samta Kathuria, under whose guidance I was able to complete my Continuous Assessment (CA) fruitfully. My sincere thanks also go to the School of Law, Lovely Professional University, for encouraging us to take up this assignment as a compulsory task, which has proved to be a meaningful and enriching experience, providing valuable learning and insight into legal proceedings.
REFERENCES
Human Rights Watch. (2012). Losing Humanity: The Case Against Killer Robots.
United Nations Office for Disarmament Affairs. (2023). Report of the Group of Governmental Experts on Lethal Autonomous Weapons Systems.
ICRC. (2021). Position on Autonomous Weapon Systems.
International Covenant on Civil and Political Rights, 1966.
Additional Protocol I to the Geneva Conventions, 1977.
United Nations Charter, 1945.
Sharkey, N. (2022). The Moral Code of War: AI and the Ethics of Lethal Force. Journal of International Humanitarian Studies, 15(3), 211–229.
***
Article is Co-Authored by Dr. Samta Kathuria

