Responsible AI in Indian Finance: Translating RBI’s FREE-AI Framework into Corporate Governance and Compliance Duties
In August 2025 the Reserve Bank of India (RBI) released the Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI), a blueprint guiding AI adoption in finance. This follows the December 2024 creation of an expert committee (chaired by Prof. Pushpak Bhattacharyya, IIT Bombay) to recommend a comprehensive AI framework for banks, NBFCs, fintechs and payment firms. The RBI noted that while AI can automate complex processes and improve efficiency, it carries risks such as data privacy breaches and algorithmic bias that must be addressed early in the adoption cycle. The FREE-AI report anchors on seven “sutras” and six strategic pillars with 26 recommendations to balance innovation with risk mitigation. In practice, the framework is advisory, but it signals what regulators expect: a people-first approach in which AI augments human judgement, with safeguards to ensure fairness and systemic stability. This initiative aligns India’s financial regulation with a global trend of proactive AI governance, emphasizing that trust and transparency remain paramount as AI reshapes banking and finance.
Key Governance Principles and Mechanisms in the RBI Framework: At its heart, the RBI’s framework establishes seven guiding “sutras” or principles to anchor AI use in finance. These are: Trust is the Foundation; People First; Innovation over Restraint; Fairness and Equity; Accountability; Understandable by Design; and Safety, Resilience and Sustainability. In practical terms, these mean that banks and fintechs must embed AI in ways that reinforce customer trust and meet human-centric objectives. For example, “Trust is the Foundation” and “Understandable by Design” mandate that AI decisions be transparent and documented, so that customers and regulators understand how credit or fraud decisions are made. The “People First” and “Accountability” sutras make clear that ultimate decision-making and responsibility cannot be outsourced to machines: corporate boards and senior management must remain accountable for AI outputs, echoing the RBI’s warning that humans must own AI-driven decisions. The call for “Fairness and Equity” compels firms to test AI models for bias and ensure they comply with non-discrimination norms.
At the same time, “Innovation over Restraint” encourages RBI-sanctioned experimentation: the report explicitly recommends setting up AI innovation sandboxes and supporting homegrown models. Finally, “Safety, Resilience and Sustainability” requires robust security, energy-efficient practices and continuity planning around AI systems.
These principles translate into specific governance expectations across six pillars. For innovation enablement, the report urges building common Infrastructure, enabling Policy tools and strengthening Capacity. The Protection pillar calls for informing consumers when AI is used, bolstering cybersecurity and building AI-specific audit mechanisms. The Assurance pillar proposes mandatory AI audit frameworks, expanded product vetting and updated business-continuity plans that account for AI model failures. In short, financial firms are expected to integrate AI risk into their compliance and internal-control systems, from boardroom charters to incident-response protocols, in line with these sutras and pillars.
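The “Fairness and Equity” sutra’s call to test AI models for bias can be made concrete with even simple statistics. As an illustrative sketch only (the metric, function names and threshold here are not prescribed by the RBI report), a compliance team might start with a disparate-impact ratio comparing approval rates across a protected and a reference group:

```python
# Hypothetical bias check on credit-model decisions, assuming labelled
# outcomes (1 = approved, 0 = declined) are available for each group.

def approval_rate(outcomes):
    """Fraction of applicants approved."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected_outcomes, reference_outcomes):
    """Ratio of approval rates between groups. Values below ~0.8 (the
    'four-fifths rule' drawn from US employment-discrimination practice)
    are a common, though not universal, trigger for bias review."""
    return approval_rate(protected_outcomes) / approval_rate(reference_outcomes)

# Illustrative data: 6/10 approvals vs 8/10 approvals.
protected = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
reference = [1, 1, 1, 1, 0, 1, 1, 1, 0, 1]

ratio = disparate_impact_ratio(protected, reference)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.60 / 0.80 = 0.75, below 0.8
```

A ratio below the chosen threshold would not itself prove discrimination, but it gives audit and compliance teams a documented, repeatable trigger for deeper model review, which is the kind of auditable process the Assurance pillar contemplates.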

Comparative Global Perspectives: India’s FREE-AI approach follows broad global currents but with local flavor. In the European Union, the EU AI Act (adopted in 2024 and entering into application in phases) imposes strict, risk-based rules on financial AI. Under the Act, AI used for high-impact tasks is classified as “high-risk” and subject to conformity assessments, documentation and post-market surveillance. Notably, the EU explicitly tasks existing financial regulators (EBA, ESMA, EIOPA and national authorities) with supervising AI compliance in banks and insurers. UK regulators, by contrast, have favored a principles-led stance. The UK government has outlined five core AI principles – safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress – and expects existing financial rules to cover AI use. The FCA and Bank of England emphasize that AI should be subject to current governance, audit and conduct standards rather than new prescriptive rules.
In the United States, oversight is similarly dispersed. Banking regulators (OCC, Fed) and agencies like the CFPB stress that AI must be used “in a safe, secure and trustworthy manner”. Recent statements by the OCC affirm that banks are responsible for ethical AI use, and regulators have developed guidance on testing for bias and managing third-party AI vendors. However, the US has yet to enact a comprehensive federal AI law for finance; instead, regulatory bodies encourage risk-based innovation while enforcing existing statutes against AI misuse.
In Singapore, the Monetary Authority of Singapore (MAS) actively promotes AI adoption but under clear guardrails. MAS’s existing “FEAT” principles (Fairness, Ethics, Accountability, Transparency) have guided Singapore’s banks since 2018. Recently, MAS launched PathFin.ai, a platform for financial institutions to share AI use-cases and lessons, and has signaled that new supervisory guidelines are coming. MAS will consult on formal AI risk-management rules later this year, building on the FEAT tenets. MAS also periodically reviews AI model-risk practices and expects banks to treat AI-model management like any other model risk.
China’s approach is more top-down. Beijing has promulgated data security and AI-specific rules and in July 2025 unveiled a Global AI Governance Action Plan to shape international norms. Chinese regulations emphasize security, ethical review and alignment with “core socialist values”, and mandate strict oversight of data and algorithms. In practice, multiple agencies (e.g. CAC, MIIT, SAMR) are involved in approving or supervising AI deployments, and China is proposing an international AI body (GAICO) to coordinate global standards.
Corporate Risk, Trade-Offs, and Implementation Gaps: While the FREE-AI framework is forward-looking, implementing it raises real challenges. Boards and risk committees must decide how to translate high-level principles into everyday governance. For example, ensuring the auditability and explainability of complex models can be difficult: many AI systems are “black boxes” that defy simple inspection, so firms will need dedicated model-documentation, validation and audit processes. At present, many institutions lack such processes, a gap the RBI framework implicitly spotlights.
Board-level responsibility is another concern. The Companies Act, 2013 requires directors to exercise due care and to lay out risk-management policies in the board’s report. Directors must now consider AI as a source of operational and reputational risk, yet most boards have limited AI expertise. The FREE-AI report addresses this by urging capacity-building at the board and C-suite levels. Going forward, companies may need to appoint AI or technology experts to audit committees or set up data committees, much as they do for cyber risk. In parallel, SEBI’s Listing Obligations and Disclosure Requirements (LODR) Regulations require boards to oversee emerging risks. SEBI has already reminded listed financial firms that they bear full responsibility for any AI tool they deploy, including its outputs and legal compliance. This reinforces that directors cannot treat AI as a “black box”; failure to govern it could breach their duties under Sections 166 and 177 of the Companies Act.
Operational and sectoral trade-offs also arise. The framework encourages AI sandboxes and innovation, but participation in such sandboxes is voluntary. Smaller NBFCs or fintech startups may struggle to engage with regulators’ sandbox processes and thus be slower to benefit. Larger banks with tech capabilities may race ahead, potentially widening gaps in the system. There is also a tension between global versus local models. The RBI report advocates indigenously developed models and prioritizes the “Core Banking Solution of India”, but reliance on imported AI is common. Adapting international best-practice to India-specific contexts will take work. Moreover, the push for explainability and fairness may trade off with model performance. Institutions will have to balance these considerations, perhaps using simpler models where transparency is mandated and reserving complex AI for less regulated uses.
Finally, the RBI framework itself is non-binding, raising questions about enforcement. Without an external mandate, adoption could be uneven. Achieving coherence will require that RBI and other regulators (such as SEBI, IRDAI for insurers and PFRDA for pensions) integrate these principles into their supervisory regimes. For instance, RBI could modify its supervisory manuals to include AI risk checks, or SEBI could amend its corporate-governance guidelines to explicitly mention AI oversight. Absent such steps, a gap could persist between the aspirational sutras and day-to-day boardroom accountability.
Way Forward and Policy Recommendations: To make FREE-AI meaningful, Indian financial institutions should embed its tenets into hard governance. Boards should immediately review their risk frameworks to include AI governance: for example, adopting a board-approved AI policy that sets out roles, thresholds for model usage, and escalation paths. Compliance and internal-audit teams should be trained on AI risks and incorporate them into audit plans. Institutions can draw on global best practices: for instance, the EU AI Act’s standards for high-risk models, MAS’s FEAT ethics checklist, or China’s security review guidelines.
Regulators also have a role in transforming voluntary principles into enforceable norms. RBI might require banks to report on AI usage in their risk disclosures or data-resilience filings. It could task inspectors to audit AI processes during inspections. SEBI could update its Corporate Governance Code to require listed banks and NBFCs to disclose their AI governance framework in annual reports, similar to cyber-security disclosures. Additionally, a continuing RBI-led multi-stakeholder council should monitor implementation and update guidance as technology evolves.
The RBI’s FREE-AI framework offers a comprehensive vision for AI in Indian finance, but its success will depend on how well firms and regulators translate its principles into concrete duties. Given that the Companies Act and SEBI rules already require boards to manage new risks, there is legal groundwork for integrating AI oversight into existing governance structures. By building on these foundations and learning from counterparts in the EU, US, Singapore and elsewhere, India’s banks, NBFCs and fintechs can operationalize responsible AI. Clear supervisory expectations, combined with industry-led guidelines and capacity-building, can turn the Framework’s sutras from voluntary ideals into embedded best practices. In doing so, financial institutions will not only comply with regulators’ evolving mandates but also strengthen long-term trust and stability in India’s AI-driven financial sector.
Author: Jai Narayan, College: Gujarat National Law University, Year: III

