Judges given the OK to use ChatGPT to help write legal rulings… despite warnings AI can make up fictional cases that never happened

Judges will be allowed to use ChatGPT to help write legal rulings – despite warnings AI can make up fictional cases that never happened. 

In guidance issued to thousands of judges in England and Wales, the Judicial Office said the tool can be useful for summarising large amounts of text. 

But it said the chatbot was a ‘poor way of conducting research’ that was liable to invent past cases or legal texts. 

Master of the Rolls Sir Geoffrey Vos said AI ‘offers significant opportunities in developing a better, quicker and more cost-effective digital justice system’.

‘Technology will only move forwards and the judiciary has to understand what is going on,’ he said. ‘Judges, like everybody else, need to be acutely aware that AI can give inaccurate responses as well as accurate ones.’

In September a judge described ChatGPT as ‘jolly useful’ as he admitted using it when writing a recent Court of Appeal ruling.

Quoted in the Telegraph, Lord Justice Birss said: ‘I think what is of most interest is that you can ask these large language models to summarise information. It is useful and it will be used and I can tell you, I have used it.

‘I asked ChatGPT can you give me a summary of this area of law, and it gave me a paragraph.

‘I know what the answer is because I was about to write a paragraph that said that, but it did it for me and I put it in my judgment. It’s there and it’s jolly useful.’

He was thought to be the first member of the British judiciary to reveal he had used the AI tool to write his judgment.

Earlier this year, two New York lawyers were fined for using fake case citations generated by ChatGPT, igniting a debate over the tool’s ‘hallucination problem’, in which it makes up false information.

Judge Kevin Castel said attorneys Steven Schwartz and Peter LoDuca acted in bad faith by using the AI bot’s submissions – some of which contained ‘gibberish’ – even after judicial orders questioned their authenticity.

The pair had been representing Roberto Mata, who claimed his knee was injured when he was struck by a metal serving cart on an Avianca flight from El Salvador to Kennedy International Airport in New York in 2019.

When the Colombian airline asked a Manhattan judge to throw out the case because the statute of limitations had expired, Schwartz submitted a 10-page legal brief citing half a dozen supposedly relevant court decisions.

But six cases cited in the filing – including Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines – did not exist.

Imposing sanctions, Judge Castel, of the Southern District of New York, said: ‘Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance.

‘But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.’
