In the notice about the requirement on his Dallas court’s website, Starr said generative AI tools like ChatGPT are “incredibly powerful” and can be used in the law in other ways, but they should not be used for legal briefing. “These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up — even quotes and citations,” the statement said.

The judge also said that while attorneys swear an oath to uphold the law and represent their clients, the AI platforms do not. “Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle,” the notice said.
Starr said on Wednesday that he began drafting the mandate while attending a panel on artificial intelligence at a conference hosted by the 5th Circuit U.S. Court of Appeals, where the panelists demonstrated how the platforms made up bogus cases. The judge said he considered banning the use of AI in his courtroom altogether, but he decided not to do so after conversations with Eugene Volokh, a law professor at the UCLA School of Law, and others.

Volokh said Wednesday that lawyers who use other databases for legal research might assume they can also rely on AI platforms. “This is a way of reminding lawyers they can’t assume that,” Volokh said.

Starr issued the requirement days after another judge threatened to sanction a lawyer for using ChatGPT to help write court filings that cited six nonexistent cases.