2025 Transportation Law Compendium: Litigation Practices (AI/Nuclear Verdicts) Question 2

How is AI being utilized in your jurisdiction? Evaluation of cases, jury selection, drafting, exhibits, etc.

The use of AI by Alabama attorneys is difficult to gauge at this time. We know attorneys are using it for things like form discovery requests, deposition outlines, heavy document analysis, and general research. However, recent backlash surrounding one firm’s improper reliance on AI for a case citation in a motion filed in federal court has certainly spiked caution within the Alabama Bar regarding the use of AI for anything filed in court.

In Alaska, artificial intelligence is increasingly being integrated into litigation practices, though adoption remains cautious and measured. Law firms and legal professionals are using AI primarily to enhance efficiency in case evaluation, jury selection, legal drafting, and exhibit preparation. While Alaska’s legal community is embracing these innovations, there is a strong emphasis on ethical use, data privacy, and maintaining human oversight to ensure fairness and compliance with professional standards.

On November 14, 2024, the Arizona Supreme Court, through the Steering Committee on Artificial Intelligence and the Courts (AISC), issued a set of ethical best practices on generative AI use.

  • Client Consent: Counsel should first obtain the client’s informed consent before inputting confidential or privileged information into an AI tool.
  • Disclosure:
    • If an AI tool is used for a firm’s chatbot, its use should be disclosed;
    • The ethical rules do not require disclosure of AI use in connection with litigation; however, certain courts’ general, standing, and case-specific orders require it.
  • Prohibited Uses: Counsel should not input confidential or nonpublic information absent sufficient guarantees that the AI tool will not share or use the information.

For more information, please contact:

ALFA International Headquarters
980 N. Michigan Avenue, Suite 1180
Chicago, IL 60611
Phone: (312) 642-2532

California’s Judicial Council has adopted Rule 10.430, addressing the use of AI by the state’s judicial branch. The Rule requires courts that adopt AI to also adopt a “use policy” by December 15, 2025. The Judicial Council has also set forth Standard 10.80, which provides guidelines for use, including considerations for confidentiality, discrimination, accuracy, bias, disclosure, and ethics.

At the end of 2023, the California State Bar Board of Trustees approved the “Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law.” It sets forth a variety of applicable authorities (from the Business & Professions Code along with the California Rules of Professional Conduct) coupled with practical considerations for AI.

Practically speaking, the use of AI varies widely in California depending on the law firm and practice area. A variety of AI tools assist with research, document review, contract review, and e-discovery. We also believe some attorneys are using AI to draft motions for the court, which carries its own dangers.

Artificial intelligence (AI) has been utilized to assist in drafting court documents. Although the tool can be useful, Colorado litigants have used it without checking for accuracy, thus offering non-existent case law to the court. The Colorado Court of Appeals has highlighted the risks of using AI, including the possibility of sanctions if it is used irresponsibly.

The Colorado Department of Transportation relies on legal services provided by the attorney general for various purposes, including acquiring rights-of-way, recovering damages, and enforcing contracts. While the statutes do not explicitly mention AI, the increasing use of AI systems in legal and administrative processes could potentially enhance efficiency in these areas. C.R.S. § 43-1-112.

AI is evolving and is best suited for non-nuanced tasks. For example, a deposition page-line extract can be generated; however, the result is often over-inclusive. The same is true for deposition summaries.

Likewise, medical review software is best at producing a treatment sequence and an attractive calendar. Direct review of the subjective complaints and objective findings is still necessary.

Artificial intelligence should generally not be used to assist in the drafting of pleadings in Delaware. In certain instances involving pro se litigants, Delaware courts have required certifications regarding whether artificial intelligence was used and have warned of sanctions for the use of fictitious citations. See An v. Archblock, Inc., 2025 WL 1024661, at *2 (Del. Ch. Apr. 4, 2025); Lillard v. Offit Kurman, P.A., 2025 WL 800833, at *1 (Del. Super. Ct. Mar. 12, 2025). In a recent Court of Chancery decision, the court held that a party could use technology-assisted review to limit the burden of document production. Berger v. Graf Acquisition, LLC, 2024 WL 4541011, at *4 (Del. Ch. Oct. 21, 2024). This type of review uses predictive coding and is trained on a representative seed set of documents selected and reviewed by attorneys. Id. at *2. The human reviewers remain involved in quality control. Id. Counsel must remain closely involved in the review and sampling process and must be transparent with the other side regarding the computer-assisted review process. Id. at *4.

The District of Columbia courts have not yet addressed litigants’ use of AI; however, the courts themselves have begun to discuss AI in published opinions. In Ross v. United States, Associate Judge Howard comments on the use of AI in drafting judicial opinions and notes that state courts are cautiously considering AI use in the courts.iii

The use of AI continues to be an evolving issue here. With proper use of software that keeps information confidential, AI can be a helpful tool to assist attorneys in summarizing depositions or conducting legal research.

We have very recently begun seeing disagreements with plaintiffs’ counsel over the use of AI arise in negotiations over protective-order language designed to protect confidential documents.

Similar to other jurisdictions, attorneys in Florida continue to be sanctioned for the improper use of AI in legal research where attorneys fail to verify the output and provide “hallucinated” (i.e., false) case law to courts. See, e.g., Versant Funding Ltd. Liab. Co. v. Teras Breakbulk Ocean Navigation Enters., Ltd. Liab. Co., No. 17-cv-81140, 2025 U.S. Dist. LEXIS 98418 (S.D. Fla. May 20, 2025) (imposing sanctions where the “submission of the hallucinated case citation resulted from the use of AI without attorney verification of the accuracy of the case citation”).

The most common use of Artificial Intelligence (“AI”) is for brief writing and other submissions to courts. However, it is also used to:

  • Summarize deposition and hearing transcripts;
  • Prepare medical chronologies;
  • Review documents for discoverable or otherwise relevant information;
  • Analyze contracts and other legal documents; and
  • Organize large sets of documents produced in discovery.

The Judicial Council of Georgia recently established an Ad Hoc Committee on Artificial Intelligence and the Courts (the “Committee”) to examine potential positive and negative impacts of AI on judicial processes. In June 2025, the Committee issued a report finding it is generally acceptable to use AI-powered, closed-source legal research tools (Westlaw, Lexis+, etc.), but not acceptable to use commercially available Large Language Models (ChatGPT, Microsoft Copilot, etc.) that are not secure. In addition, the Committee found it unacceptable to use AI for jury selection.

The Committee also recommended that AI users avoid fully automated decision-making and utilize a human-in-the-loop approach to mitigate risk and ensure accountability. A recent opinion by the Georgia Court of Appeals highlights the need for human verification of AI data.[i]

Further, many Georgia judges have enacted standing orders regarding the use of AI, which may, among other things, require attorney certification that any such document has been independently reviewed to confirm its accuracy, legitimacy, and reliance on good and applicable law.

[i] See Shahid v. Essam, 2025 WL 1792657 (Jun. 30, 2025) (upholding $2,500 sanction against attorney for use of non-existent cases in a brief filed with the court).

Idaho has no statutes or case law specifically addressing the use of AI and its use is currently at the discretion of individual attorneys and firms, subject to any limitations applicable under the Idaho Rules of Professional Conduct. However, the Idaho Supreme Court recently issued an order advising that submitting fictitious or hallucinated authority to the Court violates Idaho Appellate Rule 11.2 and may result in sanctions.v

While efficiency in any of the above tasks can be (and inevitably is) increased by AI, AI is used primarily as a research tool in conjunction with traditional resources. It is common for attorneys to use Lexis’ AI feature or similar search engines as a starting point in conducting research, but it is important to ensure that the sources identified are good, existing law.

The use of AI to prepare legal filings was first addressed by the Illinois Appellate Court this summer. People v. Lerin H. (In re Baby Boy), 2025 IL App (4th) 241427, ¶ 91. The court noted that, according to the Illinois Supreme Court, “while the use of AI is authorized while practicing in Illinois courts, users must understand its capabilities and thoroughly review any AI-generated content,” echoing known concerns surrounding the technology when used in a legal context. Id. (citing Ill. Sup. Ct., Illinois Supreme Court Policy on Artificial Intelligence (Jan. 1, 2025)). The court imposed monetary sanctions on the attorney for drafting a brief through AI that cited fictitious cases. Id. at ¶ 127. A Goldberg Segalla attorney was also sanctioned this summer in Cook County under similar circumstances.

It is also important to note the implications of inserting HIPAA-protected and otherwise privileged information into AI tools, as these tools learn from the data, which can result in a breach of privacy.

Pursuant to Burns Ind. Code Ann. § 4-13.1-5-3, each state agency may compile, in a form specified by the office, an inventory of all artificial intelligence technologies that are in use, being developed, or being considered for use by the state agency, and submit the inventory to the office and the executive director of the legislative services agency for distribution.

Additionally, pursuant to Burns Ind. Code Ann. § 4-12-1-7.5, “a state agency may use artificial intelligence software to prepare a statement required under section 7 [IC 4-12-1-7] of this chapter or any budget projections for the state agency.”

As of the drafting of this response, Iowa has not enacted any legislation specifically governing the use of AI within the legal profession. Instead, Iowa has left it to individual attorneys and law firms to determine how best to incorporate AI into their practices.

The Iowa State Bar Association has taken steps to educate both legal professionals and the public on the potential applications and risks of AI in the legal industry. For example, the Iowa Bar Blog has compiled a collection of articles, guides, webinars, and other resources to assist lawyers in learning how to integrate AI into their practice. See Access Resources on AI for Lawyers, IOWA STATE BAR ASSOCIATION (Aug. 21, 2024). Additional posts include information about specific functions where AI can be utilized, such as legal research, data analysis, contract review, and client communication, while others warn about the pitfalls of AI. Morgan Germann, Vetting AI for Attorneys, IOWA STATE BAR ASSOCIATION (last visited July 28, 2025); Jeffrey R. Schoenberger, The Oops of AI, IOWA STATE BAR ASSOCIATION (last visited July 28, 2025).

The Iowa State Bar Association has also addressed the use of AI by hosting attorney presentations on how AI can be integrated into the profession while also reminding legal professionals to use AI responsibly. See Jeffrey Allen & Ashley Hallene, AI and the Practice of Law: The Good, the Bad, and the Ugly!, IOWA STATE BAR ASSOCIATION (June 26, 2025).

In sum, Iowa legal professionals incorporate AI into their practices in a variety of ways based on their individual needs, firm structure, and professional judgment.

Currently, AI is being used as a tool by many attorneys, including drafting, research, deposition and hearing prep, and organization of case files. Importantly, some jurisdictions in Kansas, including Johnson County, are requiring attorneys to include in the Certificate of Service whether AI was used.

While there are no Kentucky Supreme Court Rules of Professional Conduct (“SCR”) that explicitly regulate the use of artificial intelligence in the practice of law, the Kentucky Bar Association issued a thorough Ethics Opinion in March of 2024 exploring the ethical use of AI in legal practice. KBA E-457 recognizes that AI might benefit Kentucky attorneys by:

  • Streamlining legal research to find relevant case law, statutes, and precedents more quickly;
  • Reviewing and analyzing large volumes of documents and summarizing them;
  • Automating repetitive tasks to reduce the requirement for extensive manual labor;
  • Detecting deception in emails or documents;
  • Predicting case outcomes and legal trends based upon historical data;
  • Expediting responses to client inquiries;
  • Providing around-the-clock access to legal information and resources;
  • Reducing legal expenses to the client due to accelerated research and document preparation.

Anecdotally, AI is being used for most or all of these purposes by Kentucky lawyers today. However, KBA E-457 cautions that an attorney employing AI in his or her practice must take care to understand how AI works and how it may be used responsibly, in conjunction with the existing Rules of Professional Conduct. We also stress as a best practice that the use of AI in providing legal services to clients should be the subject of an open and continuing dialogue between lawyers and clients.

Artificial intelligence (“AI”) is still a relatively novel and uncharted area in Louisiana. During the 2025 legislative session, House Bill No. 178 created Louisiana’s first comprehensive legal framework for handling AI-generated evidence. The bill amended Louisiana Code of Civil Procedure article 371, which now requires attorneys to exercise “reasonable diligence” in verifying the authenticity of evidence before presenting it in court. The provision states that if an attorney knew or should have known, through reasonable diligence, that evidence was false or artificially manipulated, presenting it without disclosure constitutes a violation of the article.

AI is being used by individual parties as they deem necessary.

Maryland law has not yet addressed the use of AI in court. However, in Mooney v. State, the Supreme Court of Maryland noted the “advent of image-generating artificial intelligence [ ] may present unique challenges in authenticating videos and photographs.”vii Accordingly, Maryland courts may be increasingly vigilant in the authentication of photo and video exhibits, where AI is being used to alter or create images.

Michigan’s Rules of Professional Conduct impose an ethical obligation on lawyers to understand technology, including AI.v

In June 2025, the State Bar of Michigan authored a Report and Recommendation, identifying the ways most lawyers use AI:vi

  • E-Discovery – identifies, organizes, and reviews relevant documents, including emails, databases, audio and video files, websites, etc.
  • Legal Research – makes legal judgment predictions and analyzes legal documents
  • Contract Drafting and Analysis – isolates and summarizes the most important details of legal contracts, such as compensation comparisons and non-compete clauses, and compares them against industry standards or outliers
  • Document Storage – provides electronic space for lawyers to clear out the clutter
  • Outcome Prediction – creates question prompts during jury selection and creates a profile of prospective jurors

Given the recency of generative artificial intelligence (“AI”), few case studies have been published in Minnesota. Most attorneys are still in the adoption phase, learning about the best uses and gaining practice with the tool during every stage of a case or matter. Similarly, legal departments are focused on providing their attorneys with best practices and encouraging daily usage. Many departments are also establishing or updating their AI usage policies to assist their attorneys in navigating the ever-changing digital landscape.vii

In 2023, the Minnesota State Bar Association formed an Artificial Intelligence Working Group comprised of legal professionals who were tasked with “exploring how artificial intelligence (AI) might implicate the unauthorized practice of law (UPL).”viii The Working Group met regularly for one year and then published a report containing five recommendations for how AI might impact the legal profession.ix One of those recommendations was to create an Artificial Intelligence Committee made up of professionals who would be responsible for researching and developing guidelines on the use of AI for attorneys and legal professionals alike.x Another recommendation was to launch a generative AI regulatory sandbox to address significant gaps in the public’s access to justice.xi In July 2024, the MSBA Board of Governors adopted the report’s recommendations and began working towards implementation throughout 2025.xii

We are primarily seeing AI being used in drafting contracts, pleadings, and briefs. Some expert witnesses may use AI to generate reports. Care should be taken when using AI to make sure quoted sources exist and stand for the cited proposition.

AI can be a helpful tool for legal practice in Missouri. For example, the AI-assisted research features in online legal research services, such as Westlaw, can improve the efficiency and accuracy of legal research. Moreover, AI tools can be used to summarize lengthy documents, cite-check work product, and make suggestions for deposition questions and written discovery.

Nevertheless, attorneys in Missouri should use AI with caution. For instance, AI-generated work product may include fictitious citations and quotes. When such documents are submitted to Missouri courts, the submitting party may be subject to sanctions. See, e.g., Kruse v. Karlen, 692 S.W.3d 43, 48–54 (Mo. App. 2024). Notably also, some local rules in Missouri require parties submitting AI-generated documents to disclose that AI was used. See, e.g., MO. 7TH CIR. CT. R. 3.3.1.

Furthermore, the use of AI in legal practice warrants ethical considerations. The Office of Legal Ethics Counsel & Advisory Committee of the Supreme Court of Missouri recently addressed this topic in its Informal Opinion No. 2024-11. In particular, the Informal Opinion advises that lawyers must carefully consider confidentiality when inputting client information into a generative-AI platform or service. Additionally, lawyers have a professional responsibility to verify the accuracy of AI-generated product and must maintain independent professional judgment.

Montana has adopted forward-looking laws to regulate AI in government, privacy, and media:

HB 178, signed May 5, 2025 and effective October 1, 2025, limits government use of AI. It prohibits the use of AI for behavioral manipulation, discriminatory profiling, malicious activity, and mass surveillance (with narrow exceptions, e.g., locating missing persons).

Further, any AI-generated recommendations affecting rights must be reviewed and approved by a human, and public-facing AI must be clearly disclosed.

HB 178 would potentially impact the use of AI in jury selection and court proceedings to the extent it could lead to claims of discriminatory profiling and to the extent that any AI output must still be reviewed and authorized by a human. There are no reported cases relating to the use of AI in legal proceedings. It remains to be seen how these laws will function in practice.

We are aware of Nebraska law firms and clients turning to AI for a variety of purposes, including case review, billing review, medical charting review, and generative AI usage for various drafting purposes. Like many other jurisdictions, we are aware of the many issues presented with the use of AI in legal drafting (AI hallucinations, etc.) and the ethical considerations involved. To this point, as of December 1, 2024, our Federal District Court Local Rules require a Certificate of Compliance to state whether a brief has been prepared with generative AI. Even if no AI was used, we are required to state that in the certificate of compliance.

We do note that in our practice, the use of AI in assisting with case themes, deposition questions, and outlines is helpful and beneficial to the client.

AI is gradually being adopted by legal practitioners in Nevada, but the integration remains uneven across firms and case types. While there are no statutes or court rules specifically regulating the use of AI in litigation, a growing number of law firms and insurance defense teams are leveraging AI-supported tools to streamline document-heavy tasks and gain a strategic edge in high-stakes cases.

One of the most common applications of AI is in e-discovery. Platforms using predictive coding and natural language processing are regularly employed to identify relevant communications, maintenance records, and accident documentation. These tools reduce attorney review time and enhance precision, particularly in commercial cases involving large data sets. Similarly, litigation analytics software is used by some firms to evaluate judge and opposing counsel tendencies, assist with case valuation, and predict settlement ranges based on prior verdict data.

AI is also being used in preliminary case evaluation, such as liability modeling or assessing exposure based on accident facts and medical reports. Some insurance carriers operating in Nevada deploy AI-driven triage systems to flag potential nuclear verdict risks or recommend early settlement in cases with sympathetic plaintiffs or problematic liability.

In jury selection, adoption of AI remains limited but growing. A small number of firms utilize third-party jury research platforms that apply sentiment analysis and demographic profiling to refine voir dire strategy. However, these tools are not yet widespread and tend to be reserved for high-dollar, multi-day trials in venues like Clark or Washoe County.

In drafting, some attorneys are beginning to use generative AI to prepare first drafts of pleadings, motions, or deposition outlines. This is most common in internal workflows and is often coupled with attorney review to ensure quality and legal sufficiency. The same holds true for exhibit preparation—AI can assist in organizing, labeling, and summarizing voluminous records, but human oversight remains critical, particularly where admissibility and evidentiary rules are concerned.

It is unclear whether and how AI is being used in New Hampshire, but it is not prevalent. While it is likely that some AI is being used internally by attorneys, perhaps embedded within existing platforms and as an add-on service for legal research tools, it is not being used in any official or formal manner. New Hampshire has yet to adopt any rules governing the use of AI, but the New Hampshire Bar Association has formed a committee to provide guidance to attorneys on best practices.

AI is increasingly used in litigation for case evaluation, jury selection, drafting, and preparation of exhibits. It can streamline discovery, highlight cumulative trial errors, analyze juror predispositions, and enhance visual evidence. Yet as State v. Pickett, 466 N.J. Super. 270 (App. Div. 2021) illustrates, courts remain focused on fairness, reliability, and avoiding prejudice. In Pickett the court reversed a conviction because cumulative errors, including unreliable eyewitness testimony and prejudicial closing arguments, undermined the integrity of the trial.

The New Jersey Supreme Court has issued preliminary guidance emphasizing that attorneys remain ethically responsible for AI-assisted work. Counsel must verify AI outputs for accuracy, safeguard client confidentiality, and avoid overbilling for automated tasks. AI is currently being explored in case evaluation, jury selection, document drafting, and presentation of exhibits.

The New Jersey State Bar Association Task Force on AI continues to develop ethical guidance for attorneys. The New Jersey guidelines confirm that lawyers bear ultimate responsibility for supervising and verifying AI, while Pickett shows that courts will not hesitate to overturn verdicts when fairness is compromised.

While no New Mexico court has issued a decision on the uses of AI, we are aware that many lawyers are using AI for assistance in creating deposition outlines for various witnesses, drafting memoranda, summarizing records, and helping identify jurors in the process of voir dire.

In North Carolina, attorneys are increasingly incorporating AI across many aspects of legal work. Firms in North Carolina are using AI research tools to assist with identifying relevant cases, extracting legal issues, and providing predictive analytics about judges or opposing counsel. AI is also being used to generate first drafts of motions, client letters, deposition summaries, and exhibits, making initial drafting and document summarization faster and more efficient.

Legal Aid of North Carolina’s Innovation Lab launched an AI-powered virtual assistant which helps members of the public access legal information concerning areas like landlord-tenant law, custody, and consumer protection issues.

Though AI is useful for summarizing case law and spotting patterns, North Carolina ethics opinions emphasize that its output should serve as a rough draft rather than final authority. The North Carolina State Bar has stated that AI usage is permitted for research, drafting, and discovery, but attorneys remain fully responsible for all AI-generated content under Rules 1.1 (Competence) and 5.3 (Supervision) of the North Carolina Rules of Professional Conduct. See North Carolina State Bar 2024 Formal Ethics Opinion 1. The State Bar has also stated that, although lawyers may input client information into a third party’s AI program for assistance with the provision of legal services, the attorney must be satisfied that the program is sufficiently secure to meet the lawyer’s obligation to ensure client data is not inadvertently disclosed or shared with unauthorized individuals. See id.

The Western District of North Carolina has issued a standing order requiring attorneys to disclose whether they used generative AI in drafting briefs or memoranda and, if the attorney did so, to attest that every statement and citation was verified for accuracy by an attorney or attorney-supervised staff member.

AI is gradually being adopted by legal practitioners in North Dakota, but the integration remains uneven across firms and case types. Law firms mainly use AI to improve efficiency, automate routine work and e-discovery, and assist with research, rather than to make binding legal decisions. These tools reduce attorney review time and enhance precision, particularly in commercial cases involving large data sets. Similarly, litigation analytics software is used by some firms to evaluate judge and opposing counsel tendencies, assist with case valuation, and predict settlement ranges based on prior verdict data.

Although specific court rules on AI remain limited, state ethics rules and emerging policies require that attorneys, not AI, remain fully responsible for the work produced.

AI is also being used in preliminary case evaluation in North Dakota, where lawyers use AI to sift through large volumes of case documents and transcripts. Advanced tools can identify key evidence, predict case outcomes, and flag weaknesses. AI platforms like ProPlaintiff.ai help personal injury attorneys draft demand letters and estimate settlement values. Some experts believe AI can help insurers and defense counsel counter litigation tactics that drive nuclear verdicts.

Although North Dakota rules specifically addressing AI in jury selection are limited, tools exist that analyze news articles and social media to profile potential jurors. This practice raises ethical questions, and any use in North Dakota must comply with state policies requiring AI to be fair, ethical, and unbiased.

North Dakota courts are considering AI transcription services, with a possible policy change in 2026. State scholars have raised concerns about AI-generated “deepfake” evidence, urging updates to the evidence rules to address authenticity in trials.

There are no specific civil procedure rules written for AI in North Dakota, but its use is guided by several authorities.

Sections 7.4 and 7.5 of the North Dakota Information Technology (NDIT) Artificial Intelligence (AI) Policy require users to evaluate AI for accuracy and to ensure its ethical and unbiased use. Likewise, the State Bar and the University of North Dakota School of Law have educated members on the ethical responsibilities associated with AI, emphasizing human oversight and verification. ABA Formal Opinion 512, issued July 29, 2024, provides guidance for lawyers using generative artificial intelligence (GAI) tools and addresses related ethical obligations.

There are no jurisdiction-wide rules regarding AI in Ohio state courts, or the federal courts sitting in Ohio. Instead, judges and courts adopt ad hoc rules regulating its use. Most of the visibility into AI use comes in the drafting context.

Some judges outright prohibit its use. For example, Judge Boyko (U.S. District Court, Northern District of Ohio) broadly prohibits the use of generative AI except for “information gathered from legal search engines, such as Westlaw or LexisNexis, or Internet search engines, such as Google or Bing.” There are stiff penalties for failure to observe this rule, including striking the pleadings, sanctions, and disqualification. Judge Newman (U.S. District Court, Southern District of Ohio) issued an identical rule. See McComb v. Best Buy Inc., 2024 WL 181857, at *1 (S.D. Ohio Jan. 17, 2024) (enforcing this rule).

Some state courts require a disclosure anytime an attorney uses AI on a filing. For example, Hamilton County Local Rule 49 requires disclosure of AI assistance. Unlike Judge Boyko’s and Judge Newman’s standing orders, the Hamilton County Local Rule does not carve out an exception for AI-assisted legal research engines such as those provided by Lexis and Westlaw. Therefore, it is unclear whether attorneys practicing in Hamilton County must include AI disclosures anytime they conduct research for a filing. Under a strict reading of the rule, that appears to be required.

Medina County’s Domestic Relations Court imposes a similar rule requiring disclosure of AI assistance in drafting filings, as do Medina’s Juvenile and Probate Courts. The latter rules seem to contemplate only generative AI that conducts substantive drafting, not AI-assisted legal research, while the Medina Domestic Relations Court rules are broader and include research. Williams County’s Juvenile and Probate Courts also have an AI rule, styled “Use of Technology,” which requires a disclosure and notice that all information was verified by counsel.

Beyond these categorical bans or disclosure requirements, there is little guidance from Ohio authorities. When the issue appears in caselaw, the concern centers more around the results of using AI, rather than the fact itself that AI was used. See, e.g., Muhammad v. Gap Inc., 2025 WL 1836657, at *14 (S.D. Ohio July 3, 2025); Chasteen v. Lynch, 2024-Ohio-5857, ¶ 55 (12th Dist.); Gamble v. Gamble, 2025-Ohio-2381, ¶ 26 (12th Dist.).

Oklahoma lawyers and judges continue to adapt and implement the use of AI in all aspects of legal practice and case management. At the end of 2024, the Oklahoma Bar Association published an article titled “Navigating Generative AI In Legal Practice: Harnessing Technology While Managing Risks” stating that generative AI can be utilized for tasks such as:

  • Document drafting: contracts, agreements, legal notices, and pleadings;
  • Legal research: summarizing case law and statutes, and generating legal opinions;
  • Contract review: identifying clauses and risks, and comparing contracts;
  • Legal writing: drafting briefs and memos, and summarizing depositions;
  • Compliance and due diligence: creating regulatory documents and aiding e-discovery;
  • Form generation: customizing legal forms;
  • Client communications: drafting emails and client updates, and powering chatbots;
  • Data analytics: predicting case outcomes and analyzing document sentiment; and
  • Translation: translating legal documents.

Although AI is viewed as helpful in conducting these tasks, the general sentiment is that anything generated by AI is a “starting point” and that any generated material should be thoroughly reviewed by an attorney. A number of judges, such as Judge Scott L. Palk of the Western District of Oklahoma and Magistrate Judge Jason A. Robertson of the Eastern District of Oklahoma, have adopted rules stating that any party who utilizes generative AI in the preparation of any document must disclose in the document itself that AI was used and identify the specific AI tool that was used.

The use of AI remains an individual decision for practitioners, though it is becoming more common in all areas of practice.

The Oregon State Bar has issued a formal opinion stating that while Oregon lawyers may use AI tools and Generative AI in their legal practice, they must be competent to use it and conduct due diligence to determine if the product can be used “in a manner consistent with the lawyers’ ethical duties.” Formal Op No 2025-205 (Artificial Intelligence Tools). This includes understanding how AI tools may store private information and produce hallucinations, and considering whether the client’s consent may be required.

The Pennsylvania Bar Association published an article by Daniel J. Siegel on August 8, 2024, summarizing Joint Formal Opinion 2024-200, “Ethical Issues Regarding the Use of Artificial Intelligence.” Siegel, Daniel, “Joint Formal Ethics Opinion Gives Practical Guidance on Artificial Intelligence,” The Philadelphia Lawyer, August 8, 2024. Specifically, it details that AI is currently being utilized for:

  • the automation of legal research, document review and case management;
  • drafting documents;
  • and synthesizing large volumes of information.

Ultimately, as AI continues to become integrated within the legal profession, ethical concerns remain at the forefront: ensuring the accuracy and honesty of AI-generated content, maintaining confidentiality, verifying citations, and managing conflicts of interest that AI systems may introduce. Id.

To the best of our knowledge, the use of AI is still a somewhat novel concept in Rhode Island. When it is used, it is primarily for drafting. There is no legal authority controlling the use of AI, and its use remains cautious and undeveloped. In October 2024, the Chief Justice of the Rhode Island Supreme Court, Paul A. Suttell, established the Committee on Artificial Intelligence and the Courts (CAIC) to evaluate AI’s impact on:

  • Judicial conduct;
  • Criminal law and evidence; and
  • Professional ethics.

The committee is composed of judges, attorneys, and technology experts. A comprehensive report is to be published in late 2025.

AI may be utilized in South Carolina in various capacities within the legal system, subject to considerations regarding policies implemented by the courts and/or the application of the Rules of Professional Conduct. The South Carolina Judicial Branch has implemented policies to regulate the use of generative AI tools, emphasizing their potential benefits in enhancing productivity while addressing associated risks such as inaccuracies, bias, and cybersecurity vulnerabilities.xvi South Carolina’s Supreme Court issued an interim policy in March 2025 regarding the use of generative AI within the court system.xvii This policy was put in place to “ensure the responsible and secure integration of these technologies into the judiciary, while safeguarding the integrity of judicial proceedings and protecting the privacy and rights of parties and others involved in matters in all courts in the Unified Judicial System.” The policy laid out several guidelines for all Judicial Officers and Employees of the South Carolina Judicial Branch on their use of AI, including:

  • AI tools such as ChatGPT, Westlaw CoCounsel, Copilot, etc. are permitted only on approved devices and only with Court Administration approval.
  • AI cannot be used to draft judicial orders, opinions, or memoranda without direct human oversight and approval.
  • Judicial staff who use AI must ensure output accuracy and confidentiality in compliance with the SC Rules of Professional Conduct.
  • AI may not be used to process or analyze confidential court records or privileged information or communications unless expressly authorized and in compliance with all applicable rules and policies.
  • The SC Judicial Branch will develop training programs to educate Officers and Employees on the use of AI.
  • Lawyers and litigants must also ensure the accuracy of all work product and must use caution when relying on any output of AI.

AI tools are also becoming increasingly useful in legal research, document review, draft generation, and predictive analytics, particularly in major firms and litigation-heavy practices.xx Additionally, law schools, such as the Charleston School of Law and the University of South Carolina, are integrating AI training into curricula, particularly for e-discovery, research, ethics compliance, and innovation skill development.xxi

South Carolina’s legal community has begun embracing the use of AI; however, the South Carolina Judicial Branch and the American Bar Association have recommended that AI be integrated with caution. The South Carolina Judicial Branch is committed to ongoing evaluation and policy development regarding AI usage. The interim policy on AI will remain in effect until further orders from the Chief Justice or the Supreme Court.

Given the recency of generative artificial intelligence (“AI”), few case studies have been published in South Dakota. Most attorneys are still in the adoption phase, learning about best uses and gaining practice with the tool during every stage of a case or matter. Similarly, legal departments are focused on providing their attorneys with best practices and encouraging daily usage. Many departments are also establishing or updating their AI usage policies to assist their attorneys in navigating the ever-changing digital landscape.

The topic of AI was discussed in June of 2024 at the South Dakota Bar Association Annual Convention. The session addressed AI in relation to professional responsibility, an increasingly important topic in legal practice.1

Tennessee has a landmark law, the ELVIS Act, that took effect July 1, 2024, and protects artists’ voices, names, images, and likenesses from unauthorized use by AI. The state also formed an artificial intelligence advisory council through a bill (HB 2325) to recommend strategies for state government’s use of AI. Tennessee’s proactive stance on AI, particularly the ELVIS Act, is seen as a model for other states and potentially the federal government, though some federal efforts could diminish states’ regulatory power.

AI is actively being used by many Texas personal injury attorneys for: document handling; investigations; preparation of discovery and deposition outlines; research, first-draft briefs, case law analysis, and argument outlines; preparation of reports and evaluations; and trial preparation, including voir dire assistance, panel profiling, cross-examination outlines, and exhibit creation.
During depositions, a few attorneys are using AI speech-to-text engines to provide real-time deposition transcripts, flag testimony inconsistencies, and suggest follow-up questions. The software is sold nationally and costs approximately $300 per license.
However, lawyers must critically review AI-generated content to ensure accuracy and ethical compliance.

  • Texas attorneys bear responsibility for monitoring and mitigating any unlawful bias in AI-powered jury selection tools.
  • There are a number of Texas State and Federal Court decisions sanctioning and chastising attorneys for not checking AI responses before including same in court documents.
  • Furthermore, Texas enacted the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), effective January 1, 2026. This law imposes broad governance requirements in legal and business contexts, with the attorney general empowered to investigate violations and impose penalties. It mandates documentation and transparency, prohibits certain practices, such as discrimination via AI, and impacts how AI can be used in litigation and evidence gathering. In short, it places a compliance burden on Texas litigators to use, monitor, and document their AI systems’ lawful and ethical uses, with sharp penalties for intentional misconduct and safe harbors for prompt remediation and good-faith compliance.

Parties have utilized artificial intelligence (AI) to assist in drafting court documents. Although they have often found it helpful, litigants have suffered when they used the tool without checking for accuracy. The Utah Court of Appeals has already sanctioned counsel for presenting hallucinated sources to the court.

The Utah Supreme Court has established, by standing order, an Innovation Office charged with working to increase access to justice. This office reviews applications by entities other than law firms that wish to increase access to legal services in less risky ways; some of these applicants may make use of AI. Additionally, the court rules provide for the development of automated processes for expungement and deferred traffic prosecution.

The use of AI in Vermont varies widely from attorney to attorney. However, we have seen it used for jury selection (including AI-guided social media searches), managing exhibits, and especially for drafting arguments, though this should be treated with extreme caution because AI tends to invent case law and regulations. We have also seen it used extensively by pro se litigants, and this reinforces the need to track down any and all claimed authority by party opponents, especially when dealing with pro se plaintiffs.

Some litigators are relying on AI to assist in the drafting of briefs. The extent to which attorneys are employing AI for case evaluation and jury selection is unknown. The Virginia State Bar (the “VSB”) has drafted resources to assist attorneys and provide guidance on the benefits and risks of the use of generative AI: https://vsb.org/Site/Site/about/rules-regulations/rpc-part6-sec2.aspx . There is no requirement to inform clients about the use of generative AI. However, disclosure may be necessary depending on any agreement with the client or whether confidential information will be disclosed.

Some of the judges of the United States District Court for the Eastern District of Virginia have required counsel to disclose whether AI was used for research and drafting filings.

For more information, please contact:

ALFA International Headquarters
980 N. Michigan Avenue, Suite 1180
Chicago, IL 60611
Phone: (312) 642-2532


Wisconsin attorneys are utilizing AI similarly to attorneys in other jurisdictions.

  • Legal research/document analysis; client communications; medical record/deposition summaries; improving efficiency; outlining tasks
  • It is important to check AI output for accuracy so as not to violate ethical duties. See SCR 20 et seq.
  • Be mindful of citations generated by AI to ensure that they are accurate.

  • Wyoming has been slow to adopt extensive AI use after the U.S. District Court for the District of Wyoming sanctioned Morgan & Morgan in February 2025: a brief submitted by three attorneys contained eight fabricated case citations generated by the firm’s in-house AI tool.
  • The largest area of AI use by Wyoming lawyers has been through products offered by trusted vendors such as Lexis and Westlaw.