Courts across the world have started penalising lawyers for submitting fake legal citations generated by artificial intelligence tools, making it clear that errors caused by AI will not shield legal professionals from punishment. Judges are increasingly holding lawyers accountable for accuracy, even when mistakes are unintentional.
From the US to India, the UK and Israel, courts have reported a sharp rise in cases where lawyers relied on AI tools for legal research and ended up citing non-existent judgments. The surge accelerated in 2025 as generative AI became more widely used in drafting legal submissions, according to a research paper by international tax expert Pramod Kumar Siva. The paper documents nearly 800 cases across at least 25 countries where courts flagged fabricated citations, false quotations or invented legal authorities generated by AI tools.
How AI-generated errors reached court records
Generative AI systems can produce legal text that looks authentic, including detailed case names and citations. Problems arise when those references appear credible but do not exist in official law reports.
Siva’s research paper finds that most incidents involved “wholly fabricated case citations or legal authorities.” These errors entered court filings when lawyers copied AI-generated material without independently verifying the sources.
The issue first drew global attention in 2023 after a US court sanctioned lawyers for filing fictitious case law generated by ChatGPT. Similar cases have since emerged across courts and tribunals worldwide.
Courts place responsibility squarely on lawyers
Courts have taken a consistent position that licensed lawyers carry a non-delegable duty to verify every citation before filing documents.
Drawing from multiple judicial orders, Siva notes that judges have rejected attempts to blame AI tools for inaccuracies. In one ruling, a US court stated, “Attorneys cannot delegate the verification role to AI, computers, robots, or any other form of technology.”
According to the research, sanctions imposed on lawyers have ranged from fines and mandatory ethics training to public reprimands and referrals to professional disciplinary bodies. In several cases, courts also ordered lawyers to inform their clients about the errors.
No protection for lack of intent
The paper highlights that courts have not accepted lack of intent as a defence.
Even where lawyers argued that they were unaware the citations were fake, judges treated the failure as negligence. Siva explains that courts are effectively applying a strict duty of verification, holding lawyers responsible for every authority cited, regardless of how the error occurred.
Declining judicial tolerance
Early cases showed some restraint, especially where errors were corrected quickly. That tolerance has narrowed as awareness of AI risks has grown.
An English court observed, “It would have been negligent for this barrister, if she used AI and did not check it.” The research notes that such observations reflect a broader shift towards tougher scrutiny as AI use becomes routine in legal practice.
Different approach for self-represented litigants
The study also highlights a contrast in how courts treat self-represented litigants who rely on AI tools.
In many cases, courts issued warnings instead of sanctions for first-time errors. In one instance, a tribunal declined to impose costs, stating that “AI is a relatively new tool which the public is still getting used to,” and that “the Claimant acted honestly.”
However, the paper records cases where repeated misuse by self-represented litigants led to penalties, including filing restrictions.
AI errors inside judicial decisions
The research also documents instances where AI-generated errors appeared in judicial orders. In India, a tax tribunal withdrew an order after it was found to contain citations to non-existent Supreme Court and High Court judgments. The matter was later transferred to another bench, raising concerns about procedural integrity and verification standards within adjudicatory bodies.
The paper concludes that courts are not creating new legal rules to address AI-related mistakes. Instead, they are enforcing existing duties of competence and honesty more strictly.
As one court warning cited in the research states, “The mission of the federal courts to ascertain truth is obviously compromised by the use of an AI tool that generates legal research that includes false or inaccurate propositions of law and/or purport to cite non-existent judicial decisions.” The paper’s overall conclusion is that courts worldwide are delivering a consistent message: AI can assist legal work, but humans remain fully responsible, and verification failures now carry real and visible consequences.