April 24, 2025 | McKenzie MacGibbon

Integrity in the Digital Age: Balancing Efficiency with AI-Generated Citations

Coffey Modica’s Mostafa Soliman, Counsel, was featured in The AI Journal for his insights on balancing efficiency and integrity in AI-generated citations within the legal field.
By Mostafa Soliman | April 24, 2025

As attorneys, we are accustomed to rapid technological change shaping the way we practice law—but now we are facing a new and potentially dangerous challenge. AI-driven tools, like ChatGPT and even dedicated legal software, are becoming commonplace in law firms. While these tools offer tremendous efficiency gains, recent high-profile incidents remind us of the ethical risks when attorneys rely too heavily on unverified AI-generated content.

Consider the recent case of Mata v. Avianca, in which attorneys Steven Schwartz and Peter LoDuca were fined $5,000 after unknowingly including fake AI-generated citations in their court filings. Similar issues arose in Wyoming, where attorneys suing Walmart also faced sanctions for citing cases that simply did not exist. Even sophisticated legal-research platforms are not immune: a Texas attorney was fined $2,000 and ordered to complete AI-specific training after submitting AI-generated citations in a wrongful termination suit, despite relying on Lexis AI. A Minnesota judge likewise criticized an expert witness for relying on AI-generated references, damaging the expert's credibility.

These cases are not isolated incidents. They are symptoms of a broader problem: the lack of clear, enforceable guidelines around using AI in legal practice. Internal memos, like those issued by Morgan & Morgan warning attorneys to verify AI outputs, are helpful—but relying exclusively on internal policies and self-policing leaves too much room for error. As these examples clearly illustrate, when lawyers fail to adequately verify AI-generated content, our profession suffers serious ethical and reputational harm.

The European Union recently took a decisive step forward with its AI Act, effective August 1, 2024, which imposes stricter regulations on high-risk AI applications. The U.S., however, still lags behind, primarily offering recommendations or frameworks that lack enforceability. Voluntary guidelines from organizations like NITA encourage audits and clear liability standards, but recommendations alone do not guarantee compliance. We need concrete, binding legislation similar to the EU's: clear standards, mandatory transparency, and enforceable consequences for misuse.

Without federal regulation, we risk creating a patchwork system where different firms operate under different standards, undermining the fairness and consistency essential to our judicial system. To protect the integrity of our profession, we should advocate for comprehensive AI legislation that clearly defines responsibilities, establishes accountability mechanisms, and provides meaningful oversight.

But legislative action alone will not solve the problem overnight. Courts also play a critical role. Mandatory disclosure requirements for AI-generated content in court filings are a good start, but we should go further. Just as academia is integrating new tools to detect AI-generated content (at the University of Florida, for example, an engineering professor is developing a digital watermark that flags AI-generated text), the legal industry needs specialized citation-verification tools built directly into the court filing process. Courts could partner with technology providers to implement verification software designed explicitly to detect fabricated citations before filings are accepted, an investment that safeguards judicial accuracy without bogging down efficiency.

Ultimately, the responsibility falls heavily on us as legal practitioners and firms. We must be proactive by conducting regular audits of our AI systems and establishing dedicated governance teams to oversee technology use. Cross-functional groups, including lawyers, tech experts, and compliance personnel, can collaboratively ensure our AI tools are reliable, ethical, and transparent. Additionally, clear internal policies that require attorneys to independently verify all AI-generated content must become standard operating procedures.

As legal professionals, we cannot simply wait for regulators or courts to impose solutions. We must advocate for stronger AI oversight at all levels—legislative, judicial, and firm-level governance. By clearly supporting enforceable AI regulations, pushing courts to adopt robust verification tools, and embedding transparency into our own internal practices, we position ourselves—and our firms—as leaders in responsible technology adoption.

It is critical to acknowledge that AI will only become more embedded in our daily work. Some vendors already offer complimentary AI-generated deposition summaries, another example of how such technology is reshaping the dynamics of the legal industry.

This is why it is essential for legal professionals, not just tech companies, to lead the conversation on responsible AI use. Law firms that implement safeguards will distinguish themselves in the market and raise the standard for the legal industry. Rather than waiting for regulations to catch up, we have a chance to lead by example. By taking the lead today, we can harness the power of AI while ensuring the integrity of our profession.

Our mission is to lead the legal industry in responsibly adopting AI technology, empowering attorneys to leverage efficiency gains while rigorously maintaining ethical standards and safeguarding professional integrity.

Our clients and our profession deserve nothing less.

###