Federal Judges In New Jersey and Mississippi Release Statements Regarding Errors In Rulings After Senator’s Inquiry

The U.S. federal court system has just had its first real brush with the growing pains of artificial intelligence, and the consequences are exactly what critics feared: error-ridden judicial orders released to the public, drafted in part with unauthorized generative AI tools.

Two federal judges — U.S. District Judge Julien Xavier Neals (New Jersey) and U.S. District Judge Henry Wingate (Mississippi) — admitted this week that staffers in their chambers used AI platforms like OpenAI’s ChatGPT and Perplexity to assist in drafting judicial opinions. The result? Embarrassing legal missteps, factual inaccuracies, and a rare public rebuke from Sen. Chuck Grassley (R-IA), who now chairs the powerful Senate Judiciary Committee.

The issues came to light after lawyers in separate cases flagged troubling errors in the judges’ rulings. Grassley demanded answers. The judges complied.

Judge Neals revealed that a law school intern in his chambers had used ChatGPT to conduct legal research for a June 30 draft decision in a securities case. The AI-generated content was incorporated without review, and the draft was mistakenly published. Once the mistake was discovered, the ruling was withdrawn. Neals wrote that the use of generative AI violated both his chambers' policy and the intern's law school's rules, and that while the policy had previously been communicated only verbally, it has now been formalized as a written ban.

Judge Wingate shared a similar story: a law clerk used Perplexity to generate a foundational draft in a civil rights lawsuit. The July 20 order was later withdrawn and rewritten, with the court initially attributing the problem only to a "clerical error." Wingate now admits the AI-generated draft was released due to "a lapse in human oversight" and says measures have been taken to prevent future lapses.

Grassley, for his part, was firm but diplomatic in his response. “Honesty is always the best policy,” he said, commending both judges for owning their mistakes. But his larger point was unflinching: the judicial system must not allow AI to compromise due process, factual integrity, or fairness under the law.

He warned that laziness, apathy, or unmonitored AI usage could “upend the Judiciary’s commitment to integrity and factual accuracy.” He’s right to be concerned. These aren’t typos in a memo. These are judicial orders — documents that shape case law, affect lives, and carry the force of the federal government.

This isn’t an isolated issue, either. Across the country, attorneys have faced sanctions — including fines and public reprimands — for submitting court filings that relied on AI-generated content, often with made-up citations or fabricated case law. As legal professionals begin experimenting with AI to cut corners, the courts are now being forced to draw a clear line between innovation and negligence.

The underlying lesson is simple: AI tools, no matter how advanced, are not a substitute for trained legal reasoning or judicial oversight. While AI might be useful for summarizing documents or streamlining certain clerical functions, relying on it to generate legal conclusions without verification is a dangerous shortcut — one that undermines public trust in the justice system.

In response, expect more courts to follow Neals and Wingate in issuing formal guidance. Some judges may ban AI outright. Others may seek to implement stricter human-in-the-loop review processes.
