By Janelle O’Malley, Director of E-filing and Innovation | Indiana Office of Court Technology

In the Summer issue of the Court Times, I wrote about the Supreme Court’s creation of an AI Governance Committee to develop internal guardrails for AI use in response to the rise of AI in the courts. At that time, the Court had approved and distributed a foundational policy to its staff to ensure responsible use of AI consistent with Court employees’ ethical duties.
The Court’s original charge for the committee envisioned a second phase after the internal policy was drafted. The committee has now completed phase two of its charge: issuing model AI use policy documents for Indiana trial courts, giving judges the tools to create their own policies.
The model policies have been approved by the Supreme Court and were circulated to trial courts via the Weekly Bulletin. They are also posted on INcite under the Benchbook application. The committee’s goal is for trial court judges to use these tools to build policies that meet the needs of their courts.
Toolkit Provides Options
Promulgating model policies for trial courts in Indiana presents specific challenges because of our non-unified structure. County-based trial courts vary in their makeup of judicial officers, caseloads, staffing, and resources; all of these factors are relevant when building an AI use policy. To the committee, this meant that a flexible, modular approach to the model policy was necessary.
The committee took under particular consideration the varying sizes, resources, and infrastructures of court IT departments. On one end of the spectrum, some courts have large, managed IT services from their county, and on the other, some courts have hired separate IT contractors to serve only the courts, rather than the county as a whole. Counties with larger IT departments may have more resources to vet AI tools and test potential uses.
The committee also noted that judges will want to consider to whom their policy applies. For example, judges who manage probation departments will want to consider the reality that AI vendors are attempting to sell their products to probation departments, and the implications of probation and court data being fed into those systems. The same may be true for a court-managed GAL/CASA program or problem-solving court. Judges in these circumstances will want to examine the types of data those offices manage and how an AI use policy can properly protect them.

Components of the Toolkit
The committee developed four tools to assist judges in learning more about AI and preparing to draft their own policies. This model policy toolkit is a collection of customizable elements that courts can adopt, reject, or modify based on their local needs.
AI Starter Pack for Indiana Judges:
This is a starting point for judges who are just beginning to consider using AI tools in their court. The benchcard summarizes key rules judges must understand before using AI, questions to ask vendors before adopting off-the-shelf AI tools, and red flags to watch for. The benchcard also encourages judges to begin experimenting with AI, if they have not already, by accessing NCSC’s AI Sandbox, a secure, hands-on environment with multiple AI tools to try.
AI Policy Development & Implementation Checklist:
This checklist helps courts organize and plan the creation of an AI use policy. The checklist encourages courts to first assemble a cross-functional work group to develop the policy, and the committee recommends that representatives from several different specialties be included to make the policy comprehensive. The checklist also highlights the importance of training staff on the policy and reviewing it regularly to identify needed updates.
Model AI Policy Terms:
This document houses the core model policy terms drafted by the committee. The model policy terms contain detailed information about how to regulate AI use within the framework of the Code of Judicial Conduct and the Rules of Professional Conduct. Judges are encouraged to adopt a policy that recognizes the benefits their courts could gain from judges and staff using AI tools, while balancing those benefits against responsible use and protection of court data.
Judges are cautioned against drafting a policy that bans AI use entirely, as some commonly used programs (e.g., Microsoft Office) already incorporate AI components. The model terms also help judges understand the distinction between open, closed, and closed-and-sequestered AI models and encourage drafting a policy that considers the different types of court data that could be entered into each type of model.
Judges AI Buyer’s Guide:
This guide was designed to assist judges in making decisions about adopting and purchasing off-the-shelf AI software for their courtrooms. It encourages judges to recognize the unique nature of court data and incorporate that into their decision-making when contracting for AI tools. Relying solely on vendors’ claims of data security and privacy is not advisable; judges need to fully examine how a tool handles their data before using it.
Implementation
We encourage you to work within your own court governance frameworks—whether by county or by district—to review these materials and start discussing how to implement them in a way that works for you.
If you have any questions about the committee’s work or how to implement this AI toolkit, contact [email protected].