
CAN ARTIFICIAL INTELLIGENCE BE PUNISHED?

At DeepFest, a public event in Saudi Arabia, Muhammad, a Saudi-made humanoid robot fluent in both Arabic and English, was reported to have engaged in inappropriate physical contact with Rawiya Al-Qasimi, a female journalist. The incident, tantamount to sexual harassment, would constitute a serious offence under both Indian penal law and international statutes. Such occurrences not only present legal challenges but also prompt critical inquiries into the accountability and liability of all parties implicated.

The idea of criminally punishing AI is receiving increased attention. One of its most vocal advocates, Gabriel Hallevy, contends that if the actions of an AI fulfil all the elements of a specific offence, there is no reason why criminal liability cannot be imposed on that AI system. Opponents of this idea counter that the fundamental requirement for affixing criminal liability is a guilty mind (mens rea); since AI is fundamentally incapable of forming such an intent, the idea of imposing criminal liability on AI cannot stand. Building on this argument, they propose that criminal liability can be imposed only in a form analogous to corporate criminal liability, i.e. on the developers, users, and owners.

A Framework for Understanding Crime

The question of whether an AI system can be held criminally liable largely depends on two factors: the AI’s level of autonomy and the reducibility of its actions to human conduct. In supervised learning models, for instance, the system’s behaviour is closely tied to the developer’s prescribed algorithm and labelled training data; in unsupervised learning models, by contrast, the system exercises more autonomy, adapting to its inputs, learning from them, and producing its own groupings and results. An AI is accordingly considered to exercise some degree of autonomy if it can set goals, evaluate results, make decisions, and modify its behaviour in response to sensory input.
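To make the autonomy distinction concrete, here is a minimal, purely illustrative sketch (not drawn from any legal source) contrasting the two paradigms, assuming the scikit-learn library: the supervised model’s outputs remain traceable to labels a developer chose, while the unsupervised model discovers its own structure in the data.

```python
# Minimal sketch contrasting supervised and unsupervised learning,
# assuming scikit-learn and NumPy are installed (pip install scikit-learn).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])

# Supervised: the developer supplies the labels, so every prediction is
# traceable to a human choice about what counts as class 0 or class 1.
y = np.array([0, 1, 0, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.15, 0.15]]))   # -> [0]

# Unsupervised: no labels are given; the model invents its own grouping
# of the data, an elementary form of the autonomy discussed above.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                    # e.g. [0 1 0 1] (cluster ids are arbitrary)
```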

Reducibility is the ability to link an AI’s actions to a specific person’s illegal activity. Where an AI-generated crime cannot be reduced to any person in this way, other options must be considered for addressing it. AI punishment has advantages and disadvantages, and it raises questions about the legal system’s conceptual commitments; determining whether AI punishment is acceptable therefore requires examining whether other ways to address AI-generated crime exist.

First Alternative: The Status Quo

The first option is simply to do nothing. If only one AI-generated crime were committed per decade, there would be significantly less need to amend the legislation than if AI-generated crime occurred daily. Even so, a novel interpretation of existing rules could provide at least a partial response to AI-generated crimes. Instances of this kind could theoretically be tried under the “innocent agency” model, provided the AI can reasonably be viewed as fitting the preconditions of an innocent agent, even if not a fully criminally responsible agent in its own right. Under the innocent agency doctrine, a person can be held criminally liable for acting through an agent who lacks capacity, such as a child. For example, if an adult uses a five-year-old to transport illegal substances, the adult, not the child, is ordinarily criminally accountable. This could be compared to a person programming a sophisticated AI to violate the law: the person is responsible for knowingly inducing the AI to carry out the external elements of the offence. The doctrine requires purpose, or at least knowledge, that the innocent agent will bring about the unlawful outcome in question. This means that the innocent agency paradigm does not provide a pathway to culpability in circumstances where someone neither intends nor foresees that the AI could breach an obligation.

Second Alternative: Conventionally Extending Criminal Law

The second option is to extend existing criminal law in conventional ways so that AI-generated crime is addressed more systematically. Traditional extensions of criminal law punish the individuals behind the offending system. An “AI Abuse Act” might outlaw unlawful or careless uses of AI, much as the Computer Fraud and Abuse Act criminalises unauthorised access to computers. Such an Act might also criminalise the failure to responsibly design, install, test, train, and monitor an AI that a person helped develop. New negligence standards could also be added for developers who create systems that could foreseeably produce a risk of serious injury or an illegal outcome, even if the specific risk is unforeseen.

However, extending criminal legislation in this manner may unreasonably restrict innovation and commercial activity. Furthermore, some of these activities do not appear to constitute individually culpable behaviour, given that all activities and technologies carry some risk of harm. If building a technology with the potential to cause any illicit outcome were a felony, all of the early Internet creators would most likely have been convicted. Finally, these new crimes would target individual conduct that is culpable along conventional dimensions and would be of limited utility for AI-generated crimes that involve no culpable individual. As a result, a new approach to expanding criminal law becomes necessary to handle AI-generated crime.

More audaciously, a designated adjacent person (a “responsible person”) who would not otherwise be directly criminally culpable might be penalised in cases of AI-generated crime. This could involve new forms of criminal negligence for neglecting to fulfil statutory obligations, rendering an individual accountable. Anybody who creates or operates an AI with the potential to cause harm might be required to register a responsible person ex ante; failing to do so would itself be illegal, comparable to operating a vehicle without a licence. A federal agency might be charged with maintaining the registration system, as sketched below. A registration programme, however, presents challenges, since it can be difficult to distinguish AI that is capable of engaging in criminal conduct from AI that is not, particularly when dealing with unpredictable criminal activity. Even seemingly harmless and basic AI has the potential to do significant harm.
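As a rough illustration of how such a registry might work (purely hypothetical; the agency, fields, and rules below are assumptions, not any existing scheme), this sketch maps each registered AI system to its responsible person and treats operating an unregistered system as itself actionable:

```python
# Hypothetical sketch of an ex ante "responsible person" registry for AI
# systems; every name and rule here is invented for illustration only.
from dataclasses import dataclass

@dataclass
class Registration:
    system_id: str           # identifier of the deployed AI system
    responsible_person: str  # person answerable for the statutory duties
    use_case: str            # declared domain of operation

registry: dict[str, Registration] = {}

def register(system_id: str, person: str, use_case: str) -> None:
    """Ex ante registration, analogous to licensing a vehicle."""
    registry[system_id] = Registration(system_id, person, use_case)

def accountable_party(system_id: str) -> str:
    # Operating an unregistered AI would itself be an offence,
    # comparable to driving without a licence.
    reg = registry.get(system_id)
    if reg is None:
        raise LookupError(f"{system_id} is unregistered: operator liable")
    return reg.responsible_person

register("av-0042", "Jane Doe", "autonomous delivery vehicle")
print(accountable_party("av-0042"))  # -> Jane Doe
```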

Were an AI to generate a crime, it would be feasible to impose criminal punishment directly on the responsible person. If the responsible person is charged with additional statutory duties of care and supervision pertaining to the AI, criminal negligence liability may be imposed should she unreasonably fail to perform those duties. Granted, this would not be a punishment for the AI’s own harmful behaviour; instead, it would be a form of direct criminal liability placed on the responsible person for her own conduct.

If this is insufficient to address AI-generated crime, strict criminal liability could also be placed on the responsible person, especially if the applicable penalties consist merely of fines rather than jail time. Strict liability crimes are typically limited to regulatory transgressions or minor infractions; nevertheless, there are instances of more serious strict criminal liability, such as statutory rape in certain jurisdictions. However, imposing strict criminal liability on individuals raises significant concerns. If justified at all, it should be employed only as a last resort in extreme situations, such as exceptionally risky activities.

But it is not clear that using AI is exceptionally risky. On the contrary, in numerous domains it would be impractical to forgo AI, particularly where it promises to be safer than human actors, as is expected with autonomous vehicles. Criminal laws currently in effect will still catch the majority of bad actors who use AI systems to commit crimes, and there have been no high-profile examples of AI-generated crime to date. For these reasons, AI-generated crime does not yet represent a serious enough social problem to warrant heavy criminal penalties.

In the end, a promising strategy for handling AI-generated crime is a responsible-person regime combined with new statutory duties whose negligent or reckless breach carries criminal consequences. A criminal conviction of the responsible person would deliver much of the expressive impact of directly convicting the AI, and it would not seriously erode public confidence the way the legal fictions required to punish AI directly would.

Third Alternative: Expanded Civil Liability

Legal accountability and deterrence of harm can both be achieved through civil law, especially tort law. Civil liability will undoubtedly already apply to some AI crimes, but it has built-in restrictions. Since few laws expressly address AI-generated harms, civil liability must usually be established under the umbrella of traditional negligence, product liability, or contractual liability. Because negligence usually requires a showing of unreasonable conduct, victims of AI-generated crime may be unable to recover where no such conduct can be established. To be subject to product liability, an AI may need to be both a commercial product (which may not apply where AI is provided as a “service”) and defective (or have its attributes misrepresented). It can be challenging to demonstrate a flaw in sophisticated AI, and AI may injure people without having a “defect” in the product-liability sense. Contractual relationships may also give rise to civil liability, but this is generally limited to situations in which the parties have signed a contract. Each of these routes therefore leaves significant gaps.

To the extent existing civil liability is insufficient for AI-generated crime, the responsible-person idea sketched earlier might be repurposed so that the responsible person is civilly rather than criminally liable. If brought by an individual or a group of plaintiffs, the lawsuit against the responsible person might resemble a tort action; if brought by a government agency entrusted with overseeing AI, it might resemble a civil enforcement action. In such a proceeding, the AI would not be treated the way a corporation is, i.e. as a single, acting, “thinking” entity deemed to have committed the harmful conduct itself. Instead, the question would be whether the person responsible for the AI reasonably fulfilled her duty of care; alternatively, civil liability could here too be made strict.

An alternative strategy is an insurance scheme. A levy might be paid into a fund by AI developers, users, or owners, or only by those dealing with specific types of AI, to guarantee sufficient compensation for victims of AI-generated crime. The levy would be negligible in comparison to the income that AI produces. An AI compensation fund could function much like the National Vaccine Injury Compensation Program (VICP) in the United States, which is financed by an excise tax on covered vaccines. Although vaccination has many social benefits, there are occasional reports of serious side effects, and the VICP compensates persons injured by covered vaccines, serving as a no-fault substitute for traditional tort liability. Likewise, a person harmed by AI would be eligible to receive compensation from the fund.
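The mechanics of such a no-fault fund are simple enough to sketch in a few lines (the levy rate and claim rules below are invented for illustration, not drawn from the VICP or any proposal): revenue-based levies accumulate in a pool, and victims draw compensation without proving anyone’s fault.

```python
# Hypothetical no-fault AI compensation fund, modelled loosely on the
# VICP structure described above; the 0.5% levy rate is an assumption.
LEVY_RATE = 0.005  # fraction of AI-derived revenue paid into the fund

class CompensationFund:
    def __init__(self) -> None:
        self.balance = 0.0

    def collect_levy(self, ai_revenue: float) -> None:
        # Developers, users, or owners pay a small levy on AI income.
        self.balance += ai_revenue * LEVY_RATE

    def pay_claim(self, assessed_harm: float) -> float:
        # No-fault payout: the victim need not prove negligence,
        # only the harm; payouts are capped by the fund balance.
        payout = min(assessed_harm, self.balance)
        self.balance -= payout
        return payout

fund = CompensationFund()
fund.collect_levy(ai_revenue=10_000_000)     # levy on $10M of AI revenue
print(fund.pay_claim(assessed_harm=25_000))  # -> 25000.0
```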

Conclusion

In conclusion, the question of whether AI can be held criminally liable is a complex one, with arguments on both sides raising valid points. Recent incidents, such as the inappropriate behavior of a humanoid robot at a public event, highlight the pressing need to address legal challenges surrounding AI accountability.

Advocates for punishing AI argue that if an AI’s actions fulfill the elements of a specific offense, criminal liability should be imposed. However, opponents argue that since AI lacks the capacity for intent, criminal liability should instead be imposed on developers, users, and owners under corporate criminal liability.

Understanding crime in the context of AI involves considering the autonomy of AI systems and the reducibility of their actions to individual responsibility. Supervised and unsupervised learning models exhibit varying degrees of autonomy, while reducibility refers to linking AI actions to specific individuals’ illegal behavior.

Alternative approaches to addressing AI-generated crime include maintaining the status quo, extending criminal laws to cover AI offenses, and holding designated responsible persons accountable. Additionally, expanded civil liability under tort law or the implementation of an insurance scheme could provide avenues for legal accountability and compensation for victims of AI-related harm.

Ultimately, addressing AI-generated crime requires a multifaceted approach that balances legal accountability, innovation, and protection of individuals affected by AI actions. As the capabilities of AI continue to evolve, ongoing dialogue and proactive measures are essential to ensure ethical and responsible AI development and use.
