
Michael Cohen says AI-created fake cases mistakenly used in court brief



Donald Trump’s former lawyer Michael Cohen unwittingly included phony cases generated by artificial intelligence in a brief last month arguing for his release from post-prison supervision, according to court papers made public Friday.

Cohen, who was disbarred in 2019 after pleading guilty to lying to Congress, said in a statement that he used Google’s AI tool Bard to come up with the cases and then sent them to his lawyer. The brief, filed in federal court in Manhattan, was in support of his request for an early end to requirements that he check in with a probation officer and get permission to travel outside the US.

David Schwartz, the lawyer who filed the brief, said he mistakenly believed the cases had been vetted by Danya Perry, an attorney who had represented Cohen, and that he didn’t check them himself. Perry requested in a letter to the court that “Mr. Schwartz’s mistake in filing a motion with invalid citations not be held against Mr. Cohen” and that the judge release him from supervision.


In the wake of the legal faux pas, polite finger-pointing abounded.

The lawyers pointed to their client as the source of the bogus precedents, offering up his own admission that he had gotten the cases from Bard and failed to check them against standard legal research sources.

Cohen, for his part, said “it did not occur to me then – and remains surprising to me now – that Mr. Schwartz would drop the cases into his submission wholesale, without even confirming they existed.” He said he had thought of Bard as a “super-charged search engine” and not a service that would generate real-looking but phony legal cases.

The case is “a simple story of a client making a well-intentioned but poorly-informed suggestion,” trusting that his lawyer would vet the cases before relying on them in a brief, Perry said, arguing that Cohen is blameless. As for Schwartz, she said, he’s guilty only of an “embarrassing” mistake.

Lawyers’ Bane

Schwartz isn’t the first lawyer to find himself forced to explain AI-related errors in Manhattan federal court. In June, two lawyers were fined $5,000 after a judge found they had cited phony cases generated by OpenAI Inc.’s ChatGPT and then made misleading statements when he called the problem to their attention.

The use of AI for legal research has prompted judges across the country to issue standing orders governing its use. The federal appeals court in New Orleans is contemplating a rule requiring lawyers to certify either that “no generative artificial intelligence program was used” in drafting legal filings or that any AI-created work has been reviewed and approved by a human lawyer.

