
ABSTRACT

Artificial intelligence (“AI”), a booming technology across the world, plays a crucial role in the current digital era. Unlike conventional systems, an AI system does not perform in a linear manner. Multiple sets of data and algorithms combine with one another to produce the end result, and a failure in one set of data may damage other sets, leading to unpredictable and uncontrollable actions beyond human imagination. The current practice of standard regulatory frameworks is incapable of regulating AI that involves non-linear and complex algorithms. In this piece, therefore, the author throws some light on the Complex Adaptive System framework (hereafter the “CAS Framework”) suggested by the Economic Advisory Council to the Prime Minister[1], along with the existing and proposed frameworks of notable jurisdictions such as the United States (“US”), the United Kingdom (“UK”), the European Union (“EU”) and China (collectively, the “Developed Countries”) for regulating AI, and their drawbacks in regulating AI systems.

Keywords: Artificial Intelligence, Complex Adaptive System, Algorithms, Developed Countries.

DEVELOPED COUNTRIES, PRACTICE AND FAILURE IN REGULATING AI

a) US

The US approach to regulating AI ranges from a largely “hands-off” stance to self-regulation and voluntary commitments. Policy focuses mainly on human-AI collaboration and high-quality data to attract the world’s AI talent.[2] The method of self-regulation and voluntary commitments[3] aligns US regulatory policy with the laissez-faire principle[4], i.e., entities largely unregulated by government, while postponing many challenges in safeguarding public rights and safety. Corporate interests can override public welfare, leaving loopholes in safety and ethics. Though certain policies and executive orders[5] commit companies to pre-release testing of AI models such as ChatGPT, sharing of core information, and investment in security against damages caused, the voluntary nature of these practices makes them unenforceable in the required direction.

b) UK

The UK’s stand on regulating AI is based on a pro-innovation method, i.e., promoting innovation while managing risk simultaneously. Rather than a single, uniform regulation, the UK’s principles favour a context-specific application framework, the use of regulation tailored to specific contexts, and precautionary principles in the advancement of AI.[6] The UK supports sector-specific regulators for AI in particular sectors, allowing better risk management through specialized human resources. Further, it ensures[7] safety, transparency, accountability, and explainability by appointing agencies to address grievances reported by aggrieved parties and by entities dealing in AI. However, the lack of detailed regulatory procedures and the difficulty of converting these principles into mandatory standards make them less effective in practice.


c) EU

The EU has handled AI regulation with a risk-based approach, i.e., categorizing AI systems by level of risk and imposing mandatory obligations corresponding to each level[8]. AI systems[9] are categorized under four heads: unacceptable risk, high risk, limited risk, and minimal risk. Systems falling under unacceptable risk, such as social scoring or human-behaviour manipulation, are banned outright, whereas high-risk systems, such as those used in critical infrastructure or law enforcement, may operate only after conformity assessment procedures and assurances of safety, transparency, and accountability[10]. Challenges arose among the European Commission, the European Parliament, and the European Council in finalizing what falls under each risk category, e.g., the debate over biometric surveillance. Further, for supervising AI systems, the European Parliament suggested a single national surveillance authority per member state, whereas the Council and the Commission favoured a multiple-authority framework[11]. Nevertheless, a risk that seems high today might diminish or escalate tomorrow, so categorization by risk alone cannot be a sufficient methodology to counter the unpredictable and non-linear behaviour of AI.

d) China

China’s approach to regulating AI rests on three regulations. The Algorithmic Recommendation Regulation 2021[12] restricts the practice of price differentiation using AI systems. The Deep Synthesis Regulation 2022[13], covering synthetic media such as deepfake images, mandates that whatever material an AI system generates be labelled as artificially generated, ensuring transparency in the end result. The Generative AI Regulation 2023[14], covering systems such as ChatGPT, mandates that all developers submit the input and output data produced by their AI, register the AI’s algorithms in a state registry, and conduct self-assessment security tests before introducing the system for public use. The stringent checklist imposed by the Generative AI Regulation poses several operational challenges. Unlike in other countries, all of these regulations are administered by the state bureaucracy itself, which poses a serious transparency issue.

PRINCIPLES OF CAS FRAMEWORK

The CAS framework shares similarities with AI systems, which are self-organizing and act through individual algorithms, each with its own primary conditions, producing a combined effect that can be either malevolent or benevolent. To regulate such unpredictable effects, the CAS framework suggests five core principles, listed below, that align with the current development of AI technology.[15]

a) Guardrails and Partition

Guardrails mean setting boundary conditions on the operational space of an AI system. The boundary conditions establish technical limits within which the AI system performs, and these limits act as a triggering point when the AI works beyond its intended nature. The limits should be reassessed regularly on the basis of performance. The partition strategy restricts an AI system from passing risk from one algorithm to another, avoiding an overall disastrous malfunction; it is achieved by implementing a separate protocol for each unique AI system.
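As a purely illustrative sketch (the CAS paper prescribes no implementation, and all names below are hypothetical), a guardrail can be pictured as a boundary check that trips when an AI system’s output leaves its technically established limits, halting the result instead of passing it downstream:

```python
# Hypothetical sketch of a "guardrail": a boundary condition that trips
# when an AI system's output leaves its technically established limits.

class GuardrailTripped(Exception):
    """Raised when an output falls outside the permitted operating range."""

def guarded_output(raw_score: float, lower: float, upper: float) -> float:
    """Pass the model's score through only if it stays within bounds."""
    if not (lower <= raw_score <= upper):
        # The limit acts as a triggering point: halt instead of propagating
        # the out-of-range result to other algorithms (the partition idea).
        raise GuardrailTripped(f"score {raw_score} outside [{lower}, {upper}]")
    return raw_score

# Within the limits, the score passes through unchanged.
print(guarded_output(0.7, 0.0, 1.0))  # → 0.7

# Beyond the limits, the guardrail trips rather than letting the
# anomalous result flow into downstream systems.
try:
    guarded_output(1.4, 0.0, 1.0)
except GuardrailTripped as exc:
    print("tripped:", exc)
```

The bounds themselves would be reassessed periodically against observed performance, as the principle suggests.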

b) Manual overrides and authorization checkpoints

AI systems should not work beyond the control of humans. Human actions should override AI actions whenever required to tackle unpredictable results[16]. This principle promotes identifying specific interaction points within an AI’s performance chain where humans can intervene to halt or reassess the process whenever it operates in an unintended way. Further, high-stakes AI decisions should be reviewed and checked by multiple human validators.
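A minimal sketch of such an authorization checkpoint (hypothetical names; the framework itself specifies no mechanism) might gate a high-stakes decision on sign-off from multiple distinct human validators:

```python
# Hypothetical sketch of an authorization checkpoint: a high-stakes AI
# decision proceeds only after multiple distinct human validators approve.

def checkpoint_approved(decision: str, approvals: list, required: int = 2) -> bool:
    """Return True only when enough *distinct* validators have signed off."""
    return len(set(approvals)) >= required

# Two distinct validators: the decision may proceed.
print(checkpoint_approved("disburse loan", ["officer_a", "officer_b"]))  # True

# A single validator (or duplicate sign-offs) is not enough: the process
# halts for human reassessment, as the principle intends.
print(checkpoint_approved("disburse loan", ["officer_a", "officer_a"]))  # False
```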

c) Transparency and Explainability

This principle is primarily introduced to curb[17] the practice of the “black box,” i.e., hiding the core algorithms of AI systems from public view on intellectual-property grounds. It mandates open licensing of core algorithms for review by external experts; disclosure of extreme outcomes; regular audits and assessments for biased decisions, privacy concerns, and security concerns; tracing of real-time decisions with the help of AI debugging and monitoring tools; and a uniform AI fact sheet detailing the development procedure of AI systems so that their functional capabilities and limitations can be understood.
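The real-time decision-tracing idea can be sketched as a simple audit trail (a hypothetical illustration only; field names and the model version are invented): each decision is logged with its inputs so an external reviewer can later reconstruct why the system acted as it did.

```python
# Hypothetical sketch of real-time decision tracing: each AI decision is
# appended to an audit trail so external experts can reconstruct it later.
import json
import time

audit_trail = []

def record_decision(inputs: dict, output: str, model_version: str) -> None:
    """Log a decision with its inputs and model version for later audit."""
    audit_trail.append({
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    })

record_decision({"age": 34, "income": 52000}, "approve", "credit-model-v3")

# The trail can then be exported (e.g., as JSON) for regulators or auditors.
print(json.dumps(audit_trail[0]["inputs"]))
```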

d) Distinct Accountability

This principle connotes the phrase “skin in the game”: the developers and operators of AI systems should be held responsible and accountable for malfunctions and unreasonable consequences to third parties[18]. It advocates pre-detailed liabilities for developers and operators; responsibilities for end users; a traceability mechanism using authentication tokens to identify the citizens interacting with AI-based apps; instant reporting protocols where an AI system fails to perform its intended purpose; and standard investigation mechanisms to correct system failures efficiently and in a time-bound manner.
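The token-based traceability idea might be sketched as follows (a hypothetical illustration; the framework does not specify any scheme): each user session with an AI app is issued a deterministic, pseudonymous token, so a harmful output can later be traced back to a responsible session without exposing the raw identity in every log line.

```python
# Hypothetical sketch of token-based traceability: each session with an
# AI-based app gets a deterministic pseudonymous token, so harmful outputs
# can be traced back to a responsible session during an investigation.
import hashlib

def issue_token(user_id: str, session_id: str) -> str:
    """Derive a stable, pseudonymous token for a user session."""
    return hashlib.sha256(f"{user_id}:{session_id}".encode()).hexdigest()[:16]

token = issue_token("citizen-42", "session-7")
print(token)

# Deterministic: the same session always maps to the same token,
# which is what makes after-the-fact tracing possible.
assert token == issue_token("citizen-42", "session-7")
```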

e) Specialized, Agile Regulatory Body

AI systems now operate in almost every sector, and traditional regulators cannot keep pace with such robust systems. A suggested centralized AI regulatory body[19] would address potential challenges in emerging AI trends, monitor AI system behaviour regularly, scrutinize material breaches, and follow a regular feedback-driven approach with industry and academic experts. Additionally, a centralized database of AI algorithms and a national registry for recording unexpected and unusual performances of AI models would contribute to the better development of the AI framework.

FINANCIAL MARKETS AND THE CAS FRAMEWORK

The conceptual working of financial markets resembles that of a CAS[20]: multiple investors and traders operate under the oversight of the financial regulator, the Securities and Exchange Board of India (SEBI). Investors rapidly change their investment strategies in response to price fluctuations and regulatory developments. Regular audits, transparency and accountability of corporate entities, temporary trading halts when illegal trading is detected, and thresholds and limits that keep intermediaries within specified areas all mirror the working nature of AI systems[21], except for AI’s unique self-improving characteristics. Adopting such a financial-market framework, to the extent possible, could substantially strengthen AI regulation.
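The trading-halt analogy can be made concrete with a toy “circuit breaker” (an illustrative sketch only; the threshold and names are invented): when a monitored metric moves too far too fast, operation is suspended for review, much as an exchange suspends trading during extreme swings.

```python
# Hypothetical sketch of a market-style "circuit breaker" applied to an AI
# system: when a monitored metric swings beyond a threshold, halt operation
# for human review, analogous to a temporary trading halt.

def circuit_breaker(previous: float, current: float, threshold: float = 0.10) -> str:
    """Halt if the relative change in the monitored metric exceeds the threshold."""
    change = abs(current - previous) / previous
    return "HALT" if change > threshold else "CONTINUE"

print(circuit_breaker(100.0, 104.0))  # 4% move  → CONTINUE
print(circuit_breaker(100.0, 85.0))   # 15% swing → HALT
```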

WAY FORWARD

Though the CAS framework suggests comprehensive principles for AI regulation, it is still a baby step in India’s AI regulatory journey, and the recognition of its challenges remains at the implementation stage.

The principles of the CAS framework fail to address the significant water consumption involved in training AI. Public data sources estimate that training GPT-3 consumed about 700,000 litres of clean fresh water, enough to manufacture 370 BMW cars or 320 Tesla electric vehicles; further, in answering 15-20 questions, ChatGPT consumes roughly 500 ml of water. Water scarcity is one of India’s major issues, and the absence of standard conditions on water use in training AI could lead the country into an alarming situation.[22]

Also, AI operates in the digital, online era, where it can be accessed by any individual from any corner of the world. Irrespective of how many countries come up with robust AI regulatory frameworks, the problems surrounding AI cannot be effectively countered until an international body or centre is established.

Notes:

[1] Economic Advisory Council to the Prime Minister (EAC-PM), A Complex Adaptive System Framework to Regulate AI, Working Paper 26.

[2] https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf

[3] Zakrzewski, C. and Tiku, N. (2023). AI companies form new safety body, while Congress plays catch-up. Washington Post. [online] 27 Jul. Available at: https://www.washingtonpost.com/technology/2023/07/26/ai-regulation-createdgoogle-openai-microsoft/.

[4] Morgan Lewis (n.d.). The United States’ Approach to AI Regulation: Key Considerations for Companies. [online] Available at: https://www.morganlewis.com/pubs/2023/05/the-united-states-approach-to-ai-regulation-keyconsiderations-for-companies.

[5] The White House (2023). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. [online] The White House. Available at: https://www.whitehouse.gov/briefing-room/presidential

[6] GOV.UK (2023). A pro-innovation approach to AI regulation. [online] GOV.UK. Available at: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper.

[7] Ada Lovelace Institute. Regulating AI in the UK. [online] Available at: https://www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/.

[8] Feingold, S. (2023). ‘On artificial intelligence, trust is a must, not a nice to have,’ one lawmaker said. #AI. [online] World Economic Forum. Available at: https://www.weforum.org/agenda/2023/06/european-union-ai-act-explained/.

[9] European Commission (2021). EUR-Lex – 52021PC0206 – EN – EUR-Lex. [online] Europa.eu. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206.

[10] Title II & III of the EU AI Act

[11] Müge Fazlioglu, (2023). Contentious areas in the EU AI Act trilogues. [online] Available at: https://iapp.org/news/a/contentious-areas-in-the-eu-ai-act-trilogues/.

[12] Cyberspace Administration of China (2022). Available at: http://www.cac.gov.cn/2022-01/04/c_1642894606364259.htm.

[13] Cyberspace Administration of China (2022). Available at: http://www.cac.gov.cn/2022-12/11/c_1672221949318230.htm.

[14] Cyberspace Administration of China (2023). Available at: http://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm.

[15] Chan, S. (2001). Complex Adaptive Systems. [online] Available at: https://web.mit.edu/esd.83/www/notebook/Complex%20Adaptive%20Systems.pdf.

[16] Firlej, M. and Taeihagh, A. (2021), Regulating human control over autonomous systems. Regulation & Governance, 15: 1071-1091. https://doi.org/10.1111/rego.12344

[17] Balasubramaniam, N. et al. (2023) ‘Transparency and explainability of AI systems: From ethical guidelines to requirements’, Information and Software Technology, 159, p. 107197. doi: 10.1016/j.infsof.2023.107197.

[18] Novelli, C., Taddeo, M. and Floridi, L. (2023). Accountability in artificial intelligence: what it is and how it works. AI & SOCIETY. doi: https://doi.org/10.1007/s00146-023-01635-y.

[19] Levin, B. and Downes, L. (2023). Who Is Going to Regulate AI? [online] Harvard Business Review. Available at: https://hbr.org/2023/05/who-is-going-to-regulate-ai.

[20] Paulin, J., Calinescu, A. and Wooldridge, M. (2018). Agent-Based Modeling for Complex Financial Systems. IEEE Intelligent Systems, 33(2), pp.74-82. doi: 10.1109/MIS.2018.022441352.

[21] Polacek, G.A., Gianetto, D.A., Khashanah, K. and Verma, D. (2012). On principles and rules in complex adaptive systems: A financial system case study. Systems Engineering, 15(4), pp.433–447. doi: https://doi.org/10.1002/sys.21213.

[22] Guerrini, F. (2023). AI’s Unsustainable Water Use: How Tech Giants Contribute to the Global Water Shortages.

****

Author: Somasundararajan B, final-year BBA LLB (Hons) student, School of Law, UPES.
