
Rogue Algorithms: How AI Is Being Used For Terrorism

INTRODUCTION:

In recent years, Artificial Intelligence (AI) has gained significant attention. Its use has increased productivity and development within various industries (including the legal industry), and it has thus proved to be a boon to society. However, like every other digital technology, AI raises ethical, legal, and economic concerns. One prominent concern is the threat of AI being used for terrorist purposes, making it a potential threat to national security.

As dependency on AI has increased (especially after the COVID-19 pandemic), so too has its misuse, showcasing AI's capacity for cyberattacks leading to theft of money and identity. An example is the set of 59 apps (such as TikTok, WeChat, Shein, etc.) allegedly used by China to extract the personal information of Indian users; these apps were banned in India in 2020 following security conflicts between India and China at the border.

POTENTIAL THREAT: 

The nature of terrorism evolves with time. Terrorist attacks have progressed drastically, from the use of knives and pistols to aircraft hijackings and other vehicle-based operations, with some groups even expressing willingness to obtain and employ chemical, biological, or radiological weaponry. Furthermore, technical advancements in transportation and logistics have enabled terrorists to increase the pace, reach, and magnitude of their operations, turning local hazards into global ones. Lastly, advancements in information and communication technologies allow terrorists to communicate more swiftly and surreptitiously worldwide, and to transmit viral videos and material that breed terror faster and at a larger scale, improving the efficiency and effectiveness of their attacks.[1]

To understand the word ‘threat’ we look into two factors: intention and capability. Hence, the functionality of AI should be viewed in terms of these factors:

  1. Intention: To assess the intention to use AI for terrorist activities, we refer to a list of non-State considerations iterated by Cronin in her analysis of how open technological innovation is arming the terrorists of tomorrow. Such technology must be accessible, cheap, simple to use, transportable, concealable, and effective. AI more often than not fails to possess these characteristics.[2] Thus, terrorists have long stuck to two primary kinds of weapons: firearms and explosives.[3]

If an AI succeeds in fulfilling the above criteria, we then check whether such technology can be used in a wide range of contexts.[4] Here again most AI fails, as presently only one type of AI exists in the market: Narrow AI, which can perform only one specific task. Nonetheless, it can still be argued that AI could in many ways be a credible tool in the hands of terrorists. A prominent example of this is drones.

  2. Capability:

If we look into the capabilities of AI per se, then many AI systems have the potential to be used for terrorism. For example, TensorFlow, an open-source library for large-scale machine learning and numerical computation, allows users to easily build a neural network with simple object detection or even a facial recognition model, without the need for sophisticated skills or computers.[5] Research suggests that the main reason terrorists are not using such sophisticated AI is that they lack the finances to develop and deploy it. However, we cannot rule out that terrorists could acquire and modify existing AI for their own ends.
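To see how low this barrier is, consider a minimal sketch (an illustration only, not drawn from any real incident) of a TensorFlow image classifier trained on the public MNIST handwritten-digit dataset. A few lines of standard tutorial code, runnable on an ordinary laptop and assuming TensorFlow 2.x is installed, yield a working neural network:

```python
import tensorflow as tf

# Load a small, freely available dataset of handwritten digits.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A simple feed-forward network: flatten each image, one hidden layer, ten outputs.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A few minutes of training on ordinary hardware is enough.
model.fit(x_train, y_train, epochs=3)
model.evaluate(x_test, y_test)
```

The point is not the model itself but its accessibility: no sophisticated skills, specialised hardware, or significant funds are required.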

Although there is no conclusive proof that terrorist groups will use AI to conduct terrorist activities, there is also no guarantee that AI cannot easily be abused for them. In previous attacks, such as the 9/11 attack on the United States or the 26/11 attacks in India, technology played a significant role in the execution of the operation. Hence, it is not a far-fetched prediction that AI will be the next weapon in the hands of terrorists.

GAPS IN AI THAT ALLOW SUCH EXPLOITATION:

  1. Democratization: AI is now easily accessible to everyone, because the earlier financial and technical constraints on AI are gradually fading away. Developers aim to reach as many individuals as possible in order to recover the costs of building such systems. In doing so, however, they have created a gap that makes it difficult to maintain transparency and accountability over who utilises such AI. Hence, AI can easily fall into the wrong hands and become a weapon of terrorism.[6]
  2. Scalability: AI today readily scales to meet increasing demand. As demand for technological development thrives in a developing country like India, there is little barrier to entry in this market. Terrorists may recognise this lack of barriers and enter the market.[7]
  3. Inherent Asymmetry In Terrorism And Counterterrorism: Even if an attack through AI fails, it will still successfully instil fear within society, which is the terrorist's objective.[8]
  4. Growing Social Dependency On Data And Technology: The Internet’s integrity and availability, as well as data reliability, are crucial for society’s overall functioning. AI advancements have led to its rapid incorporation into daily life through smart products and smart cities, including key infrastructures such as healthcare, energy, and nuclear plants. While this offers benefits, it also increases vulnerability to cyber-attacks or traditional attacks on AI systems and their data.[9]

CYBER ATTACKS ENABLED BY AI:

1. Denial Of Service (DOS):

This strategy seeks to exhaust a computer system’s resources, such as memory and bandwidth, by flooding it with connection requests. As a result, a computer system connected to the internet becomes temporarily unavailable to its users.

In a denial-of-service attack, the attacker utilises a network of compromised devices (known as a botnet) to route requests to targeted services; when many such devices are used, the attack is a distributed denial of service (DDoS). Terrorist groups favour DOS attacks because they require little effort and are very simple to execute. Terrorists do not need to find a specific weakness; simply having the target connected to the internet is enough to carry out such an attack.

ISIL launched its first series of DOS operations between 2016 and 2017. They employed the DOS tool ‘Caliphate Cannon’, and their principal targets were military, economic, and educational facilities.

Algorithms are configured to control the operation of these botnets, and machine learning helps the botnets become autonomous, making DOS attacks easier to carry out. Reportedly, as of 2019, such attacks are also carried out via cloud servers: machine learning assists attackers in constructing malicious servers, and algorithms are subsequently used to operate these botnets.[10]

2. Malware:

Malware refers to malicious software: programs that penetrate computer systems or networks in order to damage, impair, abuse, or disrupt their intended target. Examples include spyware, worms, adware, and viruses. Such software is used to break into personal websites and extract sensitive information (such as credit card information or company trade secrets).

AI improves the functionality of such malware. It may automate the attack process, increase attack efficiency, or even serve as a new platform through which malware operates. Machine learning, a feature of AI, may allow malware to scan social media for accessible targets, and can be used to intrude on personal communication via WhatsApp and email. These techniques can tailor and personalise malicious software to the target’s liking, deceiving the target into installing it.

Although there have been no real-world occurrences of AI-powered malware, IBM researchers created a proof-of-concept named ‘DeepLocker’ in 2018, which clearly demonstrates how AI may improve the functionality of malware.[11]

3. Encryption And Decryption:

In 2016, Google Brain employed Artificial Neural Network (ANN) technology to train two neural networks to communicate without being intercepted by a third neural network, demonstrating that neural networks can learn encryption and decryption mechanisms on their own.

Encryption is a process that converts data, such as messages or information, into an unreadable form in order to prevent unauthorised access; decryption restores encrypted material to its original form. Together, these allow secure conversation and sharing of information while maintaining anonymity, and decryption capabilities can grant access to otherwise confidential data.
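As a neutral illustration of this encrypt/decrypt cycle, the following is a minimal sketch using the widely available Python `cryptography` package (chosen here for illustration; the article does not name any specific tool):

```python
from cryptography.fernet import Fernet

# Generate a secret key; anyone holding it can both encrypt and decrypt.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encryption converts the message into unreadable ciphertext.
ciphertext = cipher.encrypt(b"a confidential message")

# Decryption restores the original bytes, but only for holders of the key.
plaintext = cipher.decrypt(ciphertext)
assert plaintext == b"a confidential message"
```

Without the key, the ciphertext is computationally infeasible to read, which is what makes encrypted channels attractive for both legitimate privacy and illicit coordination.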

This AI function would enable terrorist groups to interact freely without fear of information being disclosed. It would make it easier for terrorist groups to obtain critical encrypted intelligence shared by counter-terrorism entities.[12]

4. Autonomous Vehicles:

These enable physical attacks. The AI embedded within autonomous vehicles uses deep learning techniques to mimic a driver’s decision-making in controlling the vehicle’s actions. Historically, vehicle-based terrorism has been very prominent, and this advancement would make such attacks easier and more efficient. Using this technology, attackers would not have to risk or sacrifice their own lives in conducting an attack.

Vehicles need not mean only wheeled motor vehicles; the term also includes submarines and flying vehicles such as unmanned aerial systems, or drones. These are largely remote-controlled today and possess limited degrees of autonomy, but AI enables fully autonomous functioning. However, no comprehensive legal framework yet governs such drones, so terrorists could conduct attacks using them with relative ease.[13]

5. Deepfakes:

Deepfakes have proliferated in recent years. AI in deepfakes mimics a human’s characteristics so convincingly that an individual believes they are conversing with a known person. These systems learn such qualities through machine learning, by monitoring conference calls and social media posts; they copy the targeted individual’s voice pattern and use it to create new audio.

Terrorist groups could employ such tactics to deceive or intimidate people into providing funds or information. Deepfakes also reach a large number of individuals, so disinformation spreads more easily, making deepfakes an efficient tool in the hands of terrorists for spreading propaganda and conspiracy theories. As a result, society experiences “truth decay”.[14]

LEGAL ACTIONS TAKEN:

A blanket prohibition on AI is unachievable, because AI is produced mostly by the private sector rather than by governments. However, bans on AI technologies that endanger people’s lives are both feasible and plausible. One such example is the move to prohibit Lethal Autonomous Weapon Systems (LAWS), which can select and fire on targets without human intervention.[15]

India recently released draft legislation to amend the Information Technology Act with the goal of criminalising the creation and spread of damaging deepfake content. To reduce the impact of deepfakes, journalists, fact-checkers, and ordinary internet users can utilise technologies like reverse image search to identify distorted content, verify authenticity, and combat disinformation. Integrating these tools into public awareness campaigns promotes responsible media use, resulting in a more vigilant and educated online community. Continuous improvement and user education are essential for staying ahead of emerging deepfake techniques and building a comprehensive approach to deepfake challenges in the digital age.[16]
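One building block behind such verification tools is perceptual hashing, which produces image fingerprints that survive re-encoding but shift when content is altered. Below is a minimal sketch assuming the third-party Python packages `Pillow` and `imagehash`; the file names and threshold are illustrative only:

```python
from PIL import Image
import imagehash

# Compute perceptual hashes of a trusted reference image and a suspect copy.
reference = imagehash.phash(Image.open("original.jpg"))  # illustrative file name
suspect = imagehash.phash(Image.open("suspect.jpg"))     # illustrative file name

# Subtracting two hashes gives their Hamming distance: small means the images
# are near-identical; large suggests the suspect copy has been altered.
distance = reference - suspect
print(f"Hash distance: {distance}")
if distance > 10:  # threshold is a judgment call, tuned per use case
    print("Images differ substantially; possible manipulation.")
```

Such checks do not prove a deepfake, but they help fact-checkers quickly flag content that diverges from a known original.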

CONCLUSION:

The possibility of terrorists using AI as a weapon is terrifying. AI’s capabilities, ranging from developing autonomous weapon systems to exploiting social media for radicalization, constitute a huge danger to global security.

The current legal structures are inadequate to address these developing concerns, and traditional laws may struggle to define and prosecute AI-related terrorism. We urgently need international cooperation to develop strong legal frameworks that:

  1. Classify and control the development and application of AI technologies that could be utilised for terrorism;
  2. Hold developers and users responsible for the malicious use of AI; and
  3. Establish clear rules for law enforcement organisations to investigate and prosecute AI-assisted terrorist activity.

[1] UNICRI and UNCCT, Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes (Joint Report, 2021).

[2] Audrey Kurth Cronin, Power to the People: How Open Technological Innovation is Arming Tomorrow’s Terrorists (2020).

[3] United Nations Office on Drugs and Crime, Terrorism and Conventional Weapons.

[4] Cronin (n 2).

[5] TensorFlow, ‘Introduction to TensorFlow’, available at https://www.tensorflow.org/learn.

[6] UNICRI and UNCCT (n 1) 11.

[7] UNICRI and UNCCT (n 1) 11.

[8] UNICRI and UNCCT (n 1) 11.

[9] UNICRI and UNCCT (n 1) 12.

[10] UNICRI and UNCCT (n 1) 27.

[11] UNICRI and UNCCT (n 1) 28.

[12] UNICRI and UNCCT (n 1) 36-41.

[13] UNICRI and UNCCT (n 1) 33-35.

[14] UNICRI and UNCCT (n 1) 32.

[15] European Greens, ‘The Prohibition of Lethal Autonomous Weapon Systems’ (3 June 2023), accessed 15 July 2023.

[16] Engler, ‘Fighting Deepfakes When Detection Fails’ (Brookings, 14 November 2019), accessed 15 July 2023.
