Custody Battle Highlights Dangers of AI in Legal Practice

Mar 28, 2026, 2:46 AM


A custody dispute involving a 16-year-old Labrador retriever named Kyra has recently spotlighted the perils of relying on artificial intelligence (AI) within the legal profession. The case, which unfolded in California, illustrates how AI-generated citations can mislead lawyers and judges, leading to severe consequences for those involved in the litigation process.
The custody battle arose between Joan Pablo Torres Campos and Leslie Ann Munoz following the dissolution of their domestic partnership. After the family court's order did not specify custody arrangements for Kyra, Torres Campos sought shared custody and visitation rights. Munoz's lawyer, Roxanne Chung Bonar, opposed the request by citing two California cases that failed to check out: the first, "Marriage of Twigg," did not exist at all, while the second, "Marriage of Teegarden," was incorrectly dated and irrelevant to pet custody.
The opposing law firm failed to catch the inaccuracies, and the judge signed an order that incorporated the fabricated citations. This oversight not only compromised the integrity of the judicial record but also reflected a growing trend of AI fabrications infiltrating legal documents.
Eugene Volokh, a law professor at UCLA, remarked that AI has introduced errors into legal practice that were virtually unheard of previously. Historically, lawyers could expect some degree of honesty in case references, but AI's ability to generate plausible yet fictitious citations has transformed this expectation.
The implications extend beyond individual cases. Federal Magistrate Mark D. Clarke recently imposed significant sanctions on attorneys for incorporating multiple fabricated citations into their filings, underscoring the judiciary's increasing intolerance for such missteps. Clarke's ruling involved a $90,000 penalty alongside the dismissal of a $29 million lawsuit due to reliance on AI-generated inaccuracies.
This case has also raised broader concerns within the legal community about the reliability and accountability of AI tools. A database maintained by French researcher Damien Charlotin has documented 1,174 instances of AI hallucinations in legal contexts, roughly 750 of them originating in US courts. Volokh estimates that many more go unnoticed, pointing to a potential crisis in legal documentation and public trust.
As the case involving Kyra progressed, neither side's lawyers thoroughly verified the citations. Torres Campos' legal team acknowledged that the cited precedents were fictitious only during the appeal process, after the initial ruling had already been issued. The appellate judges nevertheless declined to overturn the lower court's decision, since both parties had neglected to ensure the authenticity of their references.
In her response to the appellate filing, Bonar doubled down on her claims, insisting that the Twigg case was valid and even adding three more fictitious citations. This led to a $5,000 sanction against her for attempting to shift blame and for failing to acknowledge the inaccuracies promptly.
The reliance on AI in legal practice has prompted calls for greater accountability among lawyers and judges alike. David C. Beavens, Torres Campos' attorney, expressed the need for all parties involved in legal proceedings to assume responsibility for ensuring the accuracy of references.
As the custody battle over Kyra illustrates, the integration of AI into legal practice carries significant risks. Lawyers must remain vigilant and verify every citation, understanding that AI tools, while potentially useful, can mislead and damage the integrity of the legal system. The case stands as a cautionary tale: as reliance on AI in law grows, verification of sources is more critical than ever to maintaining the credibility of legal processes.
