In July 2025, the Kerala High Court became the first court to formalise a policy on the use of AI. With this policy, the Gujarat High Court became the second High Court to do so. This was followed by a detailed paper on AI published by the Hon’ble Supreme Court in November 2025.
This policy covers:
1. Guiding principles
2. Objectives
3. Scope and application
4. Definitions
5. Permitted use of AI tools
6. Prohibited use of AI Tools
7. Confidentiality and Data Protection
8. Verification and accuracy requirements
9. Human supervision and personal accountability
10. Consequences of violation
IN THE HIGH COURT OF GUJARAT
POLICY ON THE USE OF ARTIFICIAL INTELLIGENCE IN JUDICIAL AND COURT ADMINISTRATION
Page Contents
- 1. Introduction
- 2. Preamble
- 3. Guiding Principles
- 4. Objectives
- 5. Scope And Application
- 6. Definitions
- 7. Permitted Uses of AI Tools
- 8. Prohibited Uses of AI Tools
- 9. Confidentiality And Data Protection
- 10. Verification And Accuracy Requirements
- 11. Human Supervision and Personal Accountability
- 12. Consequences of Violation
- 13. Review And Revision
1. Introduction
1.1 Need for an AI Policy for the District Judiciary
The advent of Artificial Intelligence has revolutionized service delivery across virtually every sector of public and private administration, offering unparalleled ease, speed, convenience, and efficiency in handling voluminous data, streamlining workflows, and enhancing overall productivity. As the District Judiciary is the primary and most accessible tier of justice delivery, handling the highest volume of cases, this technological wave assumes particular significance with the continuous induction of a new generation of young, dynamic, and techno-savvy judicial officers who are proficient in and inclined towards adopting modern digital tools.

While such familiarity presents opportunities for ease in research, in learning case laws, and in administrative improvements, the unique constitutional mandate of dispensing justice through human conscience, impartiality, reasoned judgment, and personal accountability demands the highest degree of caution.

Unregulated or unchecked use of AI carries the grave risk of gradual over-reliance on AI, diminished application of the human mind, and unintended bias in decision-making, which may cause a subtle erosion of public trust in the human-centric nature of adjudication. It is, therefore, imperative to establish a clear, comprehensive, and ultra-strict policy that precisely defines a cautious, narrow scope and strict parameters for the use of Artificial Intelligence within the District Judiciary, ensuring that technology remains a strictly controlled research tool and administrative aid and never encroaches upon the inviolable domain of judicial decision-making or evidence evaluation.
1.2. Importance of human conscience and values in decision making
The judiciary stands as the final bastion of impartial justice, where human judgment, guided by conscience, reason, law, and constitutional values, must remain inviolate. In an era of rapid technological advancement, artificial intelligence offers tools that promise fast and efficient research into judgments, ratio decidendi, and settled legal principles. It further offers tools that may improve administrative efficiency. Yet the core of adjudication (the weighing of evidence, the interpretation of law, the application of legal principles to facts, the exercise of discretion, and the delivery of reasoned decisions) belongs exclusively to the domain of the human mind. Any encroachment upon this domain risks undermining public confidence, introducing hidden biases, generating unverifiable outputs, or eroding the sacred principle that justice is administered by accountable human beings driven by conscience and values under the rule of law.

Recognizing recent global and national developments, including documented instances of AI-generated fictitious judgments leading to misconduct findings, judicial warnings against unverified AI citations, and the urgent need for safeguards, it is necessary to adopt a restrictive stance towards the use of AI. Artificial intelligence shall be permitted solely as a neutral, administrative instrument for equitable workload distribution among judicial officers and for research into case laws and legal principles, and only through the tools, mechanisms, and standards defined by the High Court. No other external tool or application shall be tolerated, as even marginal expansion invites unacceptable peril to judicial integrity, fairness, and the constitutional mandate of human-centred adjudication.
2. Preamble
The High Court of Gujarat recognises that Artificial Intelligence (AI) and Generative AI tools present significant opportunities to enhance the efficiency, accessibility, and quality of justice delivery in the Indian court system. At the same time, these technologies carry substantial risks — including hallucinations, bias, confidentiality breaches, and erosion of judicial independence — that must be managed with care and institutional discipline.
AI in the judiciary should be designed as a decision-support and administrative efficiency tool, not as a replacement for judicial reasoning. With proper safeguards—such as transparency, human supervision, and protection of confidential information—AI can significantly strengthen case management and improve the speed and quality of justice delivery.
This Policy is issued in exercise of the powers under Articles 225 and 227 of the Constitution of India, by the High Court of Gujarat, guided by the constitutional mandate to uphold rule of law, secure the right to a fair hearing under Article 21 of the Constitution of India, and protect the institutional integrity of the judiciary.
It shall be read together with the Information Technology Act, 2000; the Digital Personal Data Protection Act, 2023 (DPDP Act) as and when it comes into force; the Contempt of Courts Act, 1971; and the High Court of Gujarat Rules, 1993, and any subsequent notifications or circulars issued by the Supreme Court of India or this Court.
3. Guiding Principles
By confining AI to the narrowest conceivable role (purely anonymised, metadata-driven case allocation and research of legal principles), the High Court reaffirms its unwavering dedication to human supremacy in justice delivery while harnessing limited technological assistance to reduce administrative imbalances that might otherwise delay access to justice. Keeping in mind this core philosophy, the following general principles are laid down.
3.1 Judicial Independence
Artificial intelligence shall never be employed for any form of decision-making, judicial reasoning, substantive order drafting or judgment preparation, bail/sentencing considerations, or any substantive adjudicatory process. Broadly, Artificial intelligence shall not be used—directly or indirectly—for any aspect of judicial decision, adjudication, reasoning, application of law, interpretation of facts, weighing of arguments, determination of rights/liabilities, sentencing, bail, interim orders, or final judgment. Each judge remains fully and personally responsible for every order, judgment, and observation issued in their name.
3.2 Human Supervision
A qualified human officer shall always review, verify, and be responsible for any AI-generated output before it is acted upon, filed, published, or communicated. There shall be no autonomous or unreviewed AI action in any judicial or administrative process.
3.3 Accuracy and Reliability
AI tools are known to produce inaccurate, biased, or fabricated outputs. All AI-generated content — including case citations, statutory references, factual summaries, and translations — must be independently verified against authoritative primary sources before use.
3.4 Confidentiality and Data Protection
Confidential case information, personal data of litigants and witnesses, and privileged communications shall never be entered into any public AI tool. All AI use must comply with the Digital Personal Data Protection Act, 2023, and applicable rules framed thereunder, as and when it comes into force.
3.5 Fairness and Non-Discrimination
AI systems may encode or perpetuate biases related to gender, religion, caste, ethnicity, socio-economic status, or other characteristics. Users shall be alert to such risks and shall not rely upon AI outputs in a manner that promotes systemic bias in the justice system.
3.6 Competence and Continued Learning
Users of AI tools shall first acquire a sufficient understanding of the capabilities and limitations of those tools. The High Court of Gujarat, through the Gujarat State Judicial Academy, shall facilitate regular training to ensure all users maintain an adequate level of AI literacy.
4. Objectives
This Policy aims to:
1. Enable judicial officers and court staff to leverage AI tools to improve productivity, reduce administrative burden, and enhance access to justice, while preserving judicial independence and the sanctity of judicial decision-making.
2. Establish clear boundaries on permissible and prohibited uses of AI tools within the High Court of Gujarat and all subordinate courts under its jurisdiction.
3. Protect the confidentiality of case-related information, litigant data, and court records in conformity with the Digital Personal Data Protection Act, 2023.
4. Promote institutional accountability by requiring human supervision and verification in all AI-assisted work.
5. Provide a basis for regular review and evolution of AI governance as the technology landscape develops.
5. Scope And Application
5.1 Persons Covered
This Policy applies to all persons working within the judicial and administrative framework of the Registry of the High Court of Gujarat and District Judiciary, including:
- All Judicial Officers.
- All officers and staff, whether regular or contractual, of the High Court of Gujarat and allied institutions such as the Gujarat State Judicial Academy, the Gujarat High Court Arbitration Centre, and the Gujarat State Legal Services Authority.
- Legal assistants, interns, trainees, and para-legal volunteers engaged with the Court.
- All ministerial, administrative, and technical staff, whether regular or contractual, of the District Judiciary.
5.2 Proceedings and Activities Covered
This Policy applies to all court-related work including hearings, case management, registry functions, administrative tasks, and research activities, whether conducted within the Court premises or remotely.
5.3 Devices and Platforms Covered
The Policy applies regardless of whether AI tools are accessed via:
- Court-owned hardware and software infrastructure
- Personal devices used for court-related work
- Third-party platforms and cloud-based services
- Mobile applications, browser extensions, or API-based integrations
5.4 AI Tools Covered
The Policy applies to use of all AI systems, including but not limited to:
- Generative AI large language models (LLMs) such as ChatGPT, Gemini, Copilot, DeepSeek, Claude, Grok and equivalents
- AI-assisted legal research platforms
- Machine translation tools (whether standalone or AI-integrated)
- AI-powered transcription and speech-to-text systems
- Automated document summarisation tools
- Predictive analytics platforms (including risk assessment and scheduling tools)
- Any AI functionality embedded in productivity software (word processors, spreadsheets, email clients)
6. Definitions
In this Policy, unless the context otherwise requires:
| Term | Definition |
| --- | --- |
| Artificial Intelligence (AI) | Any machine-based system that can generate outputs such as predictions, recommendations, decisions, or content that influence real or virtual environments, using objectives defined by human beings, from data or knowledge representations. |
| AI-Generated Content | Any text, analysis, summary, translation, code, or other output produced entirely or substantially by an AI system, whether or not subsequently edited by a human user. |
| Court Document | Any pleading, affidavit, order, judgment, written submission, application, report, correspondence, letter or other document filed with or issued by the Court. |
| Generative AI (GenAI) | A category of AI systems capable of generating text, images, audio, video, or other content in response to prompts, including but not limited to Large Language Model-based tools such as ChatGPT, Gemini, Claude, and Copilot. |
| Hallucination | The phenomenon whereby an AI system generates plausible-sounding but factually incorrect, fabricated, or non-existent information, including fictitious case citations, statutes, or quotations. |
| Large Language Model (LLM) | A type of AI system trained on a large collection of text data, capable of producing contextually relevant natural language outputs based on statistical patterns learned during training. |
| Private/Enterprise AI Tool | An AI system deployed in a private or enterprise configuration under a data processing agreement that ensures user data is not stored, shared, or used for model training beyond the specific contractual terms. |
| Public AI Tool | Any AI system accessible to the general public where prompts or inputs may be stored, used for model training, or disclosed to third parties. |
| User | Any person covered under Clause 5.1 of this Policy. |
7. Permitted Uses of AI Tools
Subject to the verification requirements in Clause 10, and the confidentiality obligations in Clause 9, the following uses of AI tools are permitted:
7.1 Administrative and Productivity Tasks
- Code generation or automation for IT department tasks.
- Creating presentations or templates for internal training purposes.
- Drafting and improving circulars and notices whose contents are in the public domain.
7.2 Legal Research Support
- Artificial intelligence may be employed for legal research, retrieval or analysis of judgments, extraction of ratio decidendi, identification of precedents, statutory interpretation, or any preparatory intellectual work supporting adjudication, subject at all times to human conscience and independent verification by an applied mind. Such research must be confirmed by comparison with approved case-law journals.
- Generating a preliminary list of potentially relevant cases or statutes for further research (not a substitute for comprehensive AIJEL, SCC Online, or AIR searches).
- Comparing draft legal texts, statutes, or judgments.
7.3 Drafting Assistance
- Improving the language, structure, and clarity of draft orders, judgments, and opinions, provided the substantive legal analysis and reasoning remain entirely those of the judge.
- Generating a structural outline or framework for a judgment or opinion, subject to full judicial review and rewriting.
- Checking existing drafts for typographical or grammatical errors.
7.4 Language and Translation
- Machine-assisted translation of documents, provided all outputs are verified by a qualified translator or by a judge with competence in the relevant language before any reliance is placed on the translated text.
- AI-assisted transcription of hearings, provided the transcript is reviewed and certified by the judicial officer before use.
7.5 Case Management and Administration
- AI-assisted scheduling and cause list management, based solely on objective, anonymised metadata (e.g., case type, complexity indicators, workload metrics, disposal rates).
- Suggesting equitable distribution of cases among judicial officers, based solely on anonymised metadata concerning case nature, complexity, workload balance, and disposal rates.
- Automated extraction of case metadata for registry purposes.
- Statistical and administrative reporting from CIS data.
8. Prohibited Uses of AI Tools
Notwithstanding any other provision of this Policy, the following uses of AI are absolutely prohibited:
1. Artificial intelligence shall never be employed for any form of decision-making, judicial reasoning, order drafting, judgment preparation, bail/sentencing considerations, or any substantive adjudicatory process. Broadly, Artificial intelligence shall not be used—directly or indirectly—for any aspect of judicial decision, adjudication, reasoning, application of law, interpretation of facts, weighing of arguments, determination of rights/liabilities, sentencing, bail, interim orders, or final judgment.
2. Using AI to arrive at, determine, or substantially influence any finding of fact, finding of law, or operative order in any judicial proceeding.
3. Artificial intelligence shall not be used for sorting evidence, classification of evidence, organization of evidentiary material, credibility assessment, relevance filtering, summarization of depositions/testimony, or any task involving evaluation or categorization of proof.
4. Using AI to author, generate, or substantially compose any judgment, final order, or binding legal ruling, even if subsequently reviewed by a judge.
5. Entering any of the following categories of information into any public AI tool:
5.1. Names, addresses, or identifying information of parties, witnesses, or advocates;
5.2. Details of pending proceedings or unreported orders;
5.3. Privileged communications or confidential legal strategies;
5.4. Sensitive personal data as defined under the DPDP Act, 2023 (including health, financial, biometric, or caste-related information);
5.5. Evidence or documents filed in a case.
6. Using AI to generate, fabricate, embellish, or alter evidence in any form.
7. Using AI-generated citations, case references, or statutory provisions without independent verification from authoritative primary sources such as AIJEL, SCC Online, AIR, the Supreme Court website, or official government gazettes.
8. Sharing AI-generated content that mimics or purports to represent the official reasoning or language of a judge without the judge’s authorisation.
9. AI tools shall not be used for drafting, correcting, or summarising any office note or submission.
10. Accessing AI tools using Court credentials on personal devices in a manner that bypasses security protocols established by the High Court.
11. Using AI for any purpose that violates any provision of applicable laws and rules.
9. Confidentiality And Data Protection
9.1 General Rule
No confidential information or data shall be entered into any public AI tool.
9.2 Public AI Tools
Where a public AI tool (e.g., the free tier of any LLM) is being used, no Court Documents — unless otherwise publicly available — and no case-related information shall be entered as prompts or context. Use of public AI tools shall be restricted to general, non-case-specific research and administrative tasks only.
9.3 Enterprise/Private AI Tools
Where the High Court of Gujarat approves an enterprise or private AI deployment under a formal data processing agreement that prohibits the provider from using inputs for model training or third-party disclosure, such tools may be used for the wider range of tasks specified in Clause 7. However, the following shall continue to be prohibited even in enterprise deployments:
- Entry of witness identities in pending criminal matters.
- Entry of information subject to confidentiality orders passed by any Court.
- Biometric, health, or other Special Category Personal Data as defined under the Digital Personal Data Protection Act, 2023.
9.4 Compliance with DPDP Act, 2023
All AI-related data processing involving personal data shall comply with the Digital Personal Data Protection Act, 2023 and the rules made thereunder, as and when they come into force. Users shall not use AI tools to process personal data for purposes beyond those for which such data was originally collected in the course of judicial proceedings.
10. Verification And Accuracy Requirements
Given the known tendency of AI systems to produce plausible but incorrect outputs (hallucinations), the following verification obligations shall apply to all AI-generated content:
1. All case citations, statutory provisions, and legal propositions generated by an AI tool shall be independently verified against the full text of the original judgment or statute from an authoritative source before use.
2. The fact that a citation appears correctly formatted or internally consistent shall not be treated as evidence of its existence or accuracy.
3. Where an AI tool produces a summary of a document, the officer relying on the summary shall read the original document before acting upon the summary.
4. Machine-translated text shall be verified by a person with competency in the source language before it is relied upon in any Court proceedings or formal communication.
5. AI-assisted transcriptions of hearings shall be reviewed and certified by a responsible officer prior to use as a formal record.
6. Users shall never attribute AI-generated text to themselves as original work in any Court Document without first substantially reviewing, editing, and taking personal responsibility for the content.
7. Where AI tools have been used in the preparation of any research note, bench memo, or legal brief, the responsible officer shall document this fact in the record.
11. Human Supervision and Personal Accountability
The following accountability principles shall apply to all persons covered by this Policy:
1. Every Judge is personally responsible for every order, judgment, and observation issued under their name. This responsibility cannot be delegated, shared with, or diminished by the use of any AI tool.
2. Every court officer is personally responsible for the accuracy and appropriateness of any AI-generated content used in the performance of their official duties.
3. Legal Assistants, Research Associates, and Judicial Assistants using AI tools to assist a Judge shall ensure that the concerned Judge is informed of such use and of any AI-assisted output.
4. Any AI-generated output, once signed or authenticated by a user, becomes the sole responsibility of that user. The signatory shall be held liable for any inaccuracies, errors, or omissions contained within the authenticated material.
5. The use of AI does not constitute a defence to a finding of error, misconduct, or professional negligence. Users cannot disclaim responsibility by attributing errors to an AI tool.
12. Consequences of Violation
Violation of any provision of this Policy shall be treated as misconduct and shall attract appropriate action including departmental or disciplinary proceedings under applicable service rules.
The above consequences are in addition to, and not in derogation of, any civil or criminal liability that may arise under any applicable law, including the Information Technology Act, 2000, the DPDP Act, 2023, or the Bharatiya Nyaya Sanhita, 2023.
13. Review And Revision
This Policy shall remain in force until revised by the High Court of Gujarat, and shall be subject to any direction or circular issued by the Supreme Court of India or its e-Committee regarding AI use in courts, or any policy issued by the legislature.


