AI Starts Creating Fake Legal Cases, Making Its Way Into Real Courtrooms

Posted on March 16, 2024 By admin



We’ve seen deepfake explicit images of celebrities created by artificial intelligence (AI). AI has also had a hand in creating music, powering driverless race cars and spreading misinformation, among other things.

It’s hardly surprising, then, that AI also has a strong impact on our legal systems.

It’s well known that courts must decide disputes based on the law, which is presented by lawyers to the court as part of a client’s case. It’s therefore highly concerning that fake law, invented by AI, is being used in legal disputes.

Not only does this pose issues of legality and ethics, it also threatens to undermine faith and trust in global legal systems.

How do fake laws come about?

There is little doubt that generative AI is a powerful tool with transformative potential for society, including many aspects of the legal system. But its use comes with responsibilities and risks.

Lawyers are trained to carefully apply professional knowledge and experience, and are generally not big risk-takers. However, some unwary lawyers (and self-represented litigants) have been caught out by artificial intelligence.

AI models are trained on massive data sets. When prompted by a user, they can create new content (both text and audiovisual).

Although content generated this way can look very convincing, it can also be inaccurate. This is the result of the AI model attempting to “fill in the gaps” when its training data is inadequate or flawed, and is commonly referred to as “hallucination”.

In some contexts, generative AI hallucination is not a problem. Indeed, it can be seen as an example of creativity.

But if AI hallucinates or creates inaccurate content that is then used in legal processes, that’s a problem – particularly when combined with time pressures on lawyers and a lack of access to legal services for many.

This potent combination can result in carelessness and shortcuts in legal research and document preparation, potentially creating reputational issues for the legal profession and a lack of public trust in the administration of justice.

It’s happening already

The best known generative AI “fake case” is the 2023 US case Mata v Avianca, in which lawyers submitted a brief containing fake extracts and case citations to a New York court. The brief was researched using ChatGPT.

The lawyers, unaware that ChatGPT can hallucinate, failed to check that the cases actually existed. The consequences were disastrous. Once the error was uncovered, the court dismissed their client’s case, sanctioned the lawyers for acting in bad faith, fined them and their firm, and exposed their actions to public scrutiny.

Despite adverse publicity, other fake case examples continue to surface. Michael Cohen, Donald Trump’s former lawyer, gave his own lawyer cases generated by Google Bard, another generative AI chatbot. He believed they were real (they were not) and that his lawyer would fact check them (he did not). His lawyer included the cases in a brief filed with the US Federal Court.

Fake cases have also surfaced in recent matters in Canada and the United Kingdom.

If this trend goes unchecked, how can we ensure that the careless use of generative AI does not undermine the public’s trust in the legal system? Consistent failures by lawyers to exercise due care when using these tools have the potential to mislead and congest the courts, harm clients’ interests, and generally undermine the rule of law.

What’s being done about it?

Around the world, legal regulators and courts have responded in various ways.

Several US state bars and courts have issued guidance, opinions or orders on generative AI use, ranging from responsible adoption to an outright ban.

Law societies in the UK and British Columbia, and the courts of New Zealand, have also developed guidelines.

In Australia, the NSW Bar Association has a generative AI guide for barristers. The Law Society of NSW and the Law Institute of Victoria have released articles on responsible use in line with solicitors’ conduct rules.

Many lawyers and judges, like the public, will have some understanding of generative AI and can recognise both its limits and benefits. But there are others who may not be as aware. Guidance undoubtedly helps.

But a mandatory approach is needed. Lawyers who use generative AI tools cannot treat them as a substitute for exercising their own judgement and diligence, and must check the accuracy and reliability of the information they receive.

In Australia, courts should adopt practice notes or rules that set out expectations when generative AI is used in litigation. Court rules can also guide self-represented litigants, and would communicate to the public that our courts are aware of the problem and are addressing it.

The legal profession could also adopt formal guidance to promote the responsible use of AI by lawyers. At the very least, technology competence should become a requirement of lawyers’ continuing legal education in Australia.

Setting clear requirements for the responsible and ethical use of generative AI by lawyers in Australia will encourage appropriate adoption and shore up public confidence in our lawyers, our courts, and the overall administration of justice in this country.

(Authors: Michael Legg, Professor of Law, UNSW Sydney, and Vicki McNamara, Senior Research Associate, Centre for the Future of the Legal Profession, UNSW Sydney)

(Disclosure Statement: Vicki McNamara is affiliated with the Law Society of NSW (as a member). Michael Legg does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.)

This article is republished from The Conversation under a Creative Commons license. Read the original article.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

