
By Janelle O’Malley, Director of E-filing and Innovation • Indiana Office of Court Technology
Artificial intelligence is rapidly transforming our world, and the legal system is no exception. AI is not a new concept; anyone who has used Google or Outlook has used it, perhaps unknowingly. And generative AI, which can create new content and ideas based on input including text, images, audio, and video, is rapidly on the rise. As generative AI becomes more prevalent in legal research, court administration, and case management, the need for clear guidelines becomes paramount. Recognizing this imperative, the Indiana Supreme Court has adopted a comprehensive AI policy to ensure the ethical and effective use of AI.
The Rise of AI in the Legal System
Courts nationwide are exploring AI for tasks like document analysis, document processing, workflow overhaul, scheduling, and even drafting opinions. For more than five years, AI-driven tools have been deployed in some clerk and court offices to process and accept court filings without routine human intervention. In Palm Beach County, Florida, more than one-third of e-filed court documents are processed by AI software instead of court staff. In Indiana, the Supreme Court established an expedited mental health appeals pilot project for Marion County that is made possible by AI-generated transcripts. These transcripts can be created and made available to the appellate courts within hours instead of several weeks.
Attorneys and litigants are also exploring the benefits of using AI to draft documents and perform legal research. While courts, litigants, and attorneys should have the freedom to experiment with AI, the ethical and professional duties of judges, lawyers, and those who work for them must come first. Because of the increasing use of AI in the legal realm, in 2024 the National Center for State Courts formed an AI Rapid Response Team to offer guidance to courts developing policies that ensure compliance with ethical rules. Armed with this guidance, several state courts, including Indiana, have drafted their own policies.
Indiana’s Proactive Approach to AI Governance
The Indiana Supreme Court formed an AI Governance Committee whose first charge was to develop an internal AI use policy for the Supreme Court and the Office of Judicial Administration (OJA). Representatives from nearly all OJA agencies participated in the committee, which began its work by getting educated on generative AI, its potential uses, and the AI software court staff were already using. The committee set the framework for the policy around the Supreme Court's goal: encouraging AI use while putting up guardrails to ensure that use is ethical and responsible.
The committee then asked: are the current ethical rules enough, or do specific AI rules need to be promulgated? Like all other tools used in the legal field, AI is just that: a tool. What needs to be regulated is the behavior and intent of the user, and that regulation already exists in the Code of Judicial Conduct and the Rules of Professional Conduct. This led the committee to analyze those existing rules and how they apply to the use of AI.

The Court’s AI Policy
The final policy, approved by the Supreme Court, encourages responsible use of AI to support employees' work while reinforcing the importance of adhering to ethical rules. Employees are directed to the Indiana Code of Judicial Conduct, which binds court employees, and the Rules of Professional Conduct, which bind attorney employees. The policy also emphasizes vigilance and adherence to ethical standards when using AI, particularly in recognizing the potential bias and inaccuracies that may be embedded in AI-generated content.
The policy draws a distinction between AI software with open models and closed models, alerting employees that open models make any entered information available to all other users of the platform. Because of the sensitive nature of the data that court employees work with, this distinction is paramount to responsible AI use. If employees want to work with sensitive or confidential data on an AI platform, the Supreme Court's Director of Information Technology and Director of Data Governance and Analytics must be consulted. Employees must also follow existing data privacy and cybersecurity policies when using AI to ensure the security of court data and infrastructure.
When it comes to engaging with vendors, Supreme Court employees are required to confirm whether AI is integrated into vendor-provided software, whether contracting for a new software license or merely upgrading services to include an AI element. Vendors must also provide assurances that AI tools integrated into their software will not incorporate OJA or Court data into non-sequestered generative AI models that share input with other implementations of the software. Taken together, these safeguards ensure that the Supreme Court's use of AI aligns with ethical obligations, protecting sensitive information and promoting thoughtful, informed adoption of technology offerings.
Next Steps & Impact on the Trial Courts
Now that the policy has been provided to Supreme Court and OJA employees, the committee is turning to its second charge: providing guidance to trial courts on local AI use policies. The committee has added judicial officers to its ranks to provide valuable insight into the issues facing trial courts and their employees when it comes to AI, including use by pro se litigants and deepfakes. These judges represent counties of different sizes, with varied structures and demographics. The committee will develop a toolkit of policy elements that local judges can review and choose from after determining which elements are a good fit for their court.
The committee is also investigating how to train AI users and increase AI literacy among judges and their staff. Because this is such a new and rapidly evolving area, educating potential AI users on both policies and the technology itself will be critical to reducing confusion while encouraging responsible use.
Conclusion
The Supreme Court has put up clear guardrails for its employees’ responsible use of AI through a new use policy. As the Indiana courts move into the age of AI, the Supreme Court is committed to ensuring that judges and court employees around the state are equipped with the tools, knowledge, and guidance necessary for ethical and effective use.