URL: https://www.justsecurity.org/133656/counterterrorism-ai/



Just Security – JustSecurity.org

Will the Next U.N. Counterterrorism Strategy Hold States Accountable For Their Use of AI?

Published on March 27, 2026


In his report on the implementation of the United Nations Global Counter-Terrorism Strategy, the U.N. Secretary-General warned of the growing sophistication of terrorist groups in exploiting new and emerging technologies, including AI, for terrorist purposes. He also warned about the risks of deploying these technologies for counterterrorism without adequate human rights safeguards.

Such warnings are far from hypothetical given recent developments. In January, the United States military operation in Venezuela, which involved capturing President Nicolas Maduro, reportedly relied on AI to map sites for targeted bombing in Caracas. Two years earlier, investigative journalists had revealed that the Israeli military uses AI to select “tens of thousands of potential Hamas and Islamic Jihad targets for elimination in Gaza.” The United States and Israel have argued that they are using these tools in counterterrorism operations to protect their populations from the threat of “narco-terrorism” and “to sweep [Gaza] free of terrorists” (although it is worth noting that when such operations take place in the context of armed conflict, international humanitarian law applies alongside applicable international human rights law).

The issue of AI may well feature in the forthcoming 9th U.N. Global Counter-Terrorism Strategy, due to be adopted by the U.N. General Assembly in June. However, discussions on emerging technologies at the United Nations have so far centered on the perceived challenges posed by AI to international counterterrorism efforts, such as addressing the misuse of technologies by terrorist organizations for propaganda purposes. The 8th Global Counter-Terrorism Strategy review already called for scaling up the use of AI and digital technologies to catch up with terrorists’ technological innovations. States have been much less willing to address how AI and other digital technologies have been abused in the name of countering terrorism, in some cases leading to serious human rights violations. As the U.N. Special Rapporteur on the Promotion and Protection of Human Rights and Fundamental Freedoms while Countering Terrorism noted in a 2025 position paper, “the Security Council has urged States to use new technologies to counter terrorism without paying sufficient regard to the human rights risks, including in countries lacking human rights protections, independent judiciaries, a rule of law culture, or democratic oversight.”

The 9th Global Counter-Terrorism Strategy has an opportunity to enhance compliance with international human rights law, in particular by strengthening accountability in the use of AI, as well as human rights-focused due diligence by the U.N.’s programmes of technical assistance in the use of technologies for counterterrorism.

Human Rights-Compliant Use of AI Requires Transparency and Accountability

A 2025 position paper by the U.N. Special Rapporteur on counter-terrorism and human rights notes that “the use of new technologies in counter-terrorism has unleashed new waves of human rights violations targeting civil society, human rights defenders, journalists and political opponents” and warns that “there is good reason to expect the same from AI.”

Amid growing concerns about human rights-compliant uses of technologies, international human rights experts and U.N. resolutions have developed frameworks for general safeguards and limitations in the use of AI. A key principle calls for refraining from deploying AI systems in ways that are incompatible with international human rights law or that otherwise pose undue risks to human rights and fundamental freedoms, unless adequate safeguards—such as systematically conducting human rights due diligence throughout the life cycle of the AI system, requiring adequate explainability of all AI-supported decisions, enabling independent and external auditing of automated systems, and establishing effective remedies for abuses—are in place. In 2024, the U.N. General Assembly adopted a resolution calling on states and other stakeholders to honor this principle specifically in relation to AI.

The fact that AI is deployed for national security or defense purposes should not exempt it from otherwise applicable human rights safeguards. In fact, given the added human rights risks, more stringent limits and controls are appropriate. In spite of this, states are increasingly seeking exemptions to safeguards and constraints in the name of national security. For instance, the Council of Europe Framework Convention on Artificial Intelligence (2024), which establishes legally binding obligations on the use of AI technologies, exempts AI in national defense and national security from certain safeguards. Ironically, it seems that the more severe the risks to human rights (including arbitrary or unlawful killing and detention resulting from counter-terrorism operations), the fewer constraints apply at the domestic level.

To elaborate, the European Union’s AI Act establishes a compliance framework that incorporates mechanisms for human oversight. However, it contains exemptions for AI systems used exclusively for national security, defense, or military purposes, and provides other loopholes in relation to use by law enforcement, border management, and public security. These are precisely the kinds of exemptions most likely to be invoked under the guise of countering terrorism. The 2025 position paper of the Special Rapporteur on counter-terrorism and human rights argues that the EU’s landmark regulation is part of an international regulatory approach to AI that “offer[s] the promise of a pragmatic, sustained and consistent attempt to develop guardrails around AI-enabled systems, as long as excessive exceptions for national security, law enforcement and border management are avoided.”

Even where sufficient safeguards do exist, counterterrorism remains an exception to the rule. Modern data protection laws, for example, have well-developed standards for transparency and accountability that apply to the processing of personal data by AI systems. These include an overall prohibition (with narrow exceptions) on “human-free” automated decisions when such decisions have legal or other significant effects. In this context, it is concerning that many intelligence and law enforcement agencies—the forces entrusted to conduct counterterrorism activities—are exempted from relevant data protection legislation. For example, a study of data protection in Africa found that national security carveouts limit the scope of personal data protection laws in seven out of 14 surveyed countries.

U.N. Counter Terrorism Policies and Digital Technologies

The United Nations plays a significant role in providing technical assistance to states, with the U.N. Global Counter-Terrorism Coordination Compact comprising 46 entities supporting states to prevent and counter terrorism. As noted in the Secretary-General’s latest report, U.N. entities continue to develop their programmes to support states addressing the terrorist exploitation of technology, including digital platforms and AI.

Privacy International and Statewatch, where we currently work, have both documented insufficient human rights due diligence and safeguards in the context of U.N. technical assistance and capacity-building in digital technologies, including AI, used for counterterrorism purposes. Echoing concerns expressed by the U.N. Special Rapporteur on counterterrorism, Privacy International noted shortcomings in the compliance of the U.N. Countering Terrorist Travel Programme with the U.N. Human Rights Due Diligence Policy on Support to Non-United Nations Security Forces.

For example, the U.N. Countering Terrorist Travel Programme (“UNCT Travel Programme”), led by the U.N. Office of Counter-Terrorism, provides a software solution, goTravel, through which Member States around the world may adopt a travel surveillance system in accordance with a series of Security Council resolutions (in particular, 2178 (2014), 2309 (2016), 2396 (2017), and 2482 (2019)). According to publicly available information, AI capabilities are not currently included in this software, but the U.N. has indicated that AI may be added in the next generation of goTravel. Meanwhile, private companies are already proposing AI-driven border management software that assesses and generates risk profiles for travellers.

The UNCT Travel Programme does not include any human rights oversight or monitoring mechanism and relies on national systems of redress. According to the 2023 report of the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, the programme was rolled out without due regard to the U.N. Human Rights Due Diligence Policy. Nor does the UNCT Travel Programme include U.N. human rights bodies among its partners for its development and deployment, contrary to its commitment to a “One-UN” partnership.

The interconnection and automated exchange of watchlists create a risk that domestic abuse of counterterrorism legislation will spread globally, facilitating the criminalization of journalists and political opposition figures while undermining places of refuge for dissidents. This is particularly the case when the UNCT Travel Programme supports states with concerning records of systematic human rights abuse, especially the sort of surveillance and persecution of dissidents and journalists that Advanced Passenger Information/Passenger Name Record (API/PNR) data systems facilitate, as noted by the U.N. Special Rapporteur on counter-terrorism and human rights. For instance, according to a 2023 evaluation of the UNCT Travel Programme, Azerbaijan is among the few countries around the world with a pre-production deployment of goTravel software. A year after deployment, a report by Amnesty International revealed that the regime had imprisoned 300 political activists, including at least 25 journalists (although this was not directly facilitated by the goTravel software). The crackdown has also carried repercussions beyond national borders, with political refugees arrested upon their return to Azerbaijan following deportation from Germany, and a Nobel Prize nominee prevented from leaving Azerbaijan to attend an Italian literary festival.

The goTravel system is not yet live in Azerbaijan. However, the UNCT Travel Programme has supported the country with legal advice and legislative assistance, and in 2022 cut the ribbon on the Passenger Information Centre established to prevent and counter terrorism. Although the U.N. did not provide the technical system that directly enabled the criminalization of political activists, its close collaboration may have lent a veneer of political legitimacy and, according to Statewatch, contributed to building the travel surveillance infrastructure that did power the repression of dissidents.

In its Networks of (In)security report, Statewatch found that the deployment of watchlists and travel surveillance technologies like these endangers otherwise positive developments in redress and monitoring mechanisms in global counterterrorism policy. The U.N. Ombudsperson, an independent and impartial advisor to the 1267 Security Council Sanctions Committee responsible for listing and delisting persons and entities subject to the Council’s sanctions against Al-Qaeda and the Taliban, is one such positive development now at risk of being sidelined. Similarly, the Commission for the Control of Interpol’s Files, established by Interpol, serves as an independent watchdog against state abuse of Interpol watchlists for political gain. However, if states develop and share their own watchlists with other states without attaching any meaningful mechanisms for redress, these existing mechanisms will be bypassed and undermined.

Given its central role in supporting states’ counterterrorism policies and practices, the United Nations must be able to demonstrate its capacity to address the human rights implications of providing assistance in relation to digital technologies, including AI systems applied to border management or social media monitoring. As noted by regulators and experts, rigorous human rights due diligence should apply throughout the lifecycle of an AI technology. Due diligence should include human rights and data protection impact assessments, safety and privacy by design, and protocols for suspending capacity-building and technical assistance.

Conclusions

Negotiations on the 9th U.N. Global Counter-Terrorism Strategy are opening against the backdrop of a crumbling international legal order and large-scale experimentation with AI in armed conflict and counterterrorism operations.

We share the view of the U.N. Secretary-General that “a human rights-based approach must be incorporated into all phases of the development and use of such technologies” to address the challenges posed by the lack of transparency and oversight of AI in counterterrorism.

The 9th U.N. Global Counter-Terrorism Strategy should not limit itself to generic calls on states to counter the use of AI technologies for terrorist purposes. Instead, it should insist that counterterrorism policies and practices demonstrably comply with existing international human rights law, international refugee law, and international humanitarian law.

We recommend that the Strategy:

  • recall other relevant U.N. resolutions on AI and other digital technologies;
  • call on states, first and foremost, to refrain from using AI technologies that do not comply with international human rights law or that pose undue risks to human rights;
  • call on states to carry out human rights due diligence assessments prior to and during the adoption of digital technologies for counter-terrorism purposes;
  • call on states to ensure that any restrictions on rights such as privacy, freedom of expression, peaceful assembly or association comply with the principles of legality, legitimacy, necessity and proportionality, and non-discrimination; and
  • require U.N. counterterrorism entities that support Member States in using AI and other technologies to clearly apply the U.N. Human Rights Due Diligence Policy, while ensuring that these entities prioritise the protection and respect of human rights, including through effective cooperation with the Office of the U.N. High Commissioner for Human Rights, the U.N. Special Rapporteur on counterterrorism and human rights, and civil society.
FEATURED IMAGE: An engineer shows the markings for people (in red) and vehicles (in yellow) during surveillance via an AI-equipped drone, which enables the state police to surveil the crowd on a laptop during the Maha Kumbh Mela festival in Prayagraj on January 17, 2025. (Photo by NIHARIKA KULKARNI/AFP via Getty Images)

About the Authors

Tomaso Falchetta

Tomaso Falchetta (LinkedIn) is Global Policy Lead at Privacy International.

Romain Lanneau

Romain Lanneau (LinkedIn) is a researcher at the NGO Statewatch. He focuses on policing, surveillance technologies and their impact on human rights.
