Proposed Rule on Lawyers Using AI
A federal appeals court in New Orleans is considering a proposal that would require lawyers to certify whether they used artificial intelligence (AI) programs to draft briefs. Under the proposal, they would have to affirm either that a human independently reviewed any AI-generated text or that they did not rely on AI at all in their court submissions.
Public Comment on the Proposal
The Fifth U.S. Circuit Court of Appeals has invited public feedback on the proposal until Jan. 4. The rule would apply both to attorneys and to litigants appearing before the court without legal representation, requiring them to certify that, if an AI program was used to prepare a filing, its citations and legal analysis were reviewed for accuracy.
Concerns About Generative AI Programs
The proposed rule comes as judges nationwide grapple with the rapid spread of generative AI programs such as ChatGPT. The risks of lawyers relying on AI drew wide attention in June, when two New York attorneys were sanctioned for submitting a legal brief containing six fabricated case citations generated by ChatGPT.
Local Court Rules on AI Use
In October, the U.S. District Court for the Eastern District of Texas adopted a rule, effective Dec. 1, requiring lawyers who use AI programs to “evaluate and authenticate any computer-generated content.” The court emphasized that AI technology “should never substitute for the abstract thinking and problem-solving capabilities of lawyers.”
Hot Take: Legal Implications of AI Use in Courtrooms
Lawyers' growing use of AI to draft legal documents is raising concerns about accuracy and verification. The proposed rule from the federal appeals court in New Orleans and the new rule in the Eastern District of Texas both signal that courts are increasingly alert to the legal risks of unchecked AI use in the courtroom.