
Integrity in the Digital Age: Balancing Efficiency with AI-Generated Citations

Coffey Modica’s Mostafa Soliman, Counsel, was featured in The AI Journal for his insights on balancing efficiency and integrity in AI-generated citations within the legal field.
By Mostafa Soliman | April 24, 2025

As attorneys, we are accustomed to rapid technological change shaping the way we practice law—but now we are facing a new and potentially dangerous challenge. AI-driven tools, like ChatGPT and even dedicated legal software, are becoming commonplace in law firms. While these tools offer tremendous efficiency gains, recent high-profile incidents remind us of the ethical risks when attorneys rely too heavily on unverified AI-generated content.

Consider the recent case of Mata v. Avianca, where attorneys Steven Schwartz and Peter LoDuca were fined $5,000 after they unknowingly included fake AI-generated citations in their court filings. Similar issues arose in Wyoming, where attorneys suing Walmart also faced sanctions for citing cases that simply did not exist. Even sophisticated legal-research platforms are not immune: a Texas attorney received a $2,000 fine and was ordered to complete AI-specific training after submitting AI-generated citations in a wrongful termination suit—this despite relying on Lexis AI. A Minnesota judge also criticized an expert witness for relying on AI-generated references, damaging their credibility.

These cases are not isolated incidents. They are symptoms of a broader problem: the lack of clear, enforceable guidelines around using AI in legal practice. Internal memos, like those issued by Morgan & Morgan warning attorneys to verify AI outputs, are helpful—but relying exclusively on internal policies and self-policing leaves too much room for error. As these examples clearly illustrate, when lawyers fail to adequately verify AI-generated content, our profession suffers serious ethical and reputational harm.

The European Union recently took a decisive step forward with its AI Act, effective August 1, 2024, which imposes stricter regulations on high-risk AI applications. The U.S., however, still lags behind, primarily offering recommendations or frameworks that lack enforceability. Voluntary guidelines from organizations like NITA encourage audits and clear liability standards, but recommendations alone do not guarantee compliance. We need concrete, binding legislation similar to the EU’s: clear standards, mandatory transparency, and enforceable consequences for misuse.

Without federal regulation, we risk creating a patchwork system where different firms operate under different standards, undermining the fairness and consistency essential to our judicial system. To protect the integrity of our profession, we should advocate for comprehensive AI legislation that clearly defines responsibilities, establishes accountability mechanisms, and provides meaningful oversight.

But legislative action alone will not solve the problem overnight. Courts also play a critical role. Mandatory disclosure requirements for AI-generated content in court filings are a good start, but we should go further. Just as academia is developing and integrating new plagiarism-checking tools—such as at the University of Florida, where an engineering professor is developing a digital watermark that flags AI-generated content—the legal industry needs specialized citation-verification tools integrated directly into the court filing process. Courts could partner with technology providers to implement verification software designed explicitly to detect fabricated citations before filings are accepted—an investment that safeguards judicial accuracy without bogging down efficiency.

Ultimately, the responsibility falls heavily on us as legal practitioners and firms. We must be proactive by conducting regular audits of our AI systems and establishing dedicated governance teams to oversee technology use. Cross-functional groups, including lawyers, tech experts, and compliance personnel, can collaboratively ensure our AI tools are reliable, ethical, and transparent. Additionally, clear internal policies that require attorneys to independently verify all AI-generated content must become standard operating procedures.

As legal professionals, we cannot simply wait for regulators or courts to impose solutions. We must advocate for stronger AI oversight at all levels—legislative, judicial, and firm-level governance. By clearly supporting enforceable AI regulations, pushing courts to adopt robust verification tools, and embedding transparency into our own internal practices, we position ourselves—and our firms—as leaders in responsible technology adoption.

It is critical to acknowledge that AI will play an ever-growing role in our daily work. Some vendors, for example, now offer AI-generated deposition summaries at no extra charge. That is another example of how such technology is reshaping the dynamics of the legal industry.

This is why it is essential for legal professionals, not just tech companies, to lead the conversation on responsible AI use. Law firms that implement safeguards will distinguish themselves in the market and raise the standard for the legal industry. Rather than waiting for regulations to catch up, we have a chance to lead by example. By taking the lead today, we can harness the power of AI while ensuring the integrity of our profession.

Our mission is to lead the legal industry in responsibly adopting AI technology, empowering attorneys to leverage efficiency gains while rigorously maintaining ethical standards and safeguarding professional integrity.

Our clients and our profession deserve nothing less.

###

Ethical Rules for Using Generative AI in Your Practice | Model Rule 1.6: Confidentiality

Coffey Modica’s Mostafa Soliman, Counsel, was featured by Fishman Haygood for his insights on ChatGPT.
September 11, 2024

At the risk of stating the obvious, we are still in the early days of what we believe to be an “AI Revolution” in the way that goods and services, including legal services, are and will be provided—which means that we do not, at this point, have much in the way of formal guidance.*

With that preface, in this series we will examine some of the Professional Rules[i] and other legal requirements that could potentially be implicated by a law firm’s use (or non-use) of ChatGPT or other Generative AI (GAI). Last time, we discussed the importance of establishing, periodically reviewing, and enforcing internal policies and protocols regarding the use—and/or limitation and restrictions on use—of ChatGPT and other AI products by lawyers and other employees at the firm. One reason for this precaution is the issue of confidentiality, which brings us to our fourth rule.

Model Rule 1.6: Confidentiality

Perhaps the most serious concerns that have been raised regarding the use of ChatGPT and other AI systems surround the security of privileged and other legally protected information. Under Model Rule 1.6, an attorney is not only generally prevented from disclosing “information relating to the representation of a client,” but is also charged with an affirmative duty to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”[ii]

Using ChatGPT to analyze a client’s legal documents that contain privileged or other confidential information can pose a risk that such information could be misused or exposed.[iii] Generative AI programs that are ‘self-learning’ continue to develop responses as they receive additional inputs, adding those inputs to their existing parameters. The use of these kinds of programs creates a risk that client information may be stored within the program and revealed in response to future inquiries by third parties.[iv]

In March of 2023, for example, there was a data leak at ChatGPT that allowed its users to view the chat history titles of other users.[v] Outside of such data breaches, chat history can be accessed and reviewed by ChatGPT or other Generative AI company employees and may also be provided to third-party vendors and affiliates.[vi]

In addition to attorney-client privileged information and/or work product, one also must be cognizant of other legal protections and requirements that might apply to client information, including:

  • HIPAA (Health Insurance Portability and Accountability Act of 1996)[vii]
  • The European Union’s General Data Protection Regulation (GDPR)[viii]
  • The California Consumer Privacy Act (CCPA)[ix] (and/or other State Privacy Laws)
  • Trade Secret Protection[x] (which may be compromised by “disclosure” to the AI service)
  • Contractual Non-Disclosure Agreements and Obligations

The Florida Ethics Opinion regarding the use of Generative AI advises that existing ethics opinions regarding prior technological advances (such as cloud computing, electronic storage disposal, remote paralegal services, and metadata) have “addressed the duties of confidentiality and competence and are particularly instructive” and generally conclude that a lawyer should:

  • Ensure that the provider has an obligation to preserve the confidentiality and security of information, that the obligation is enforceable, and that the provider will notify the lawyer in the event of a breach or service of process requiring the production of client information;
  • Investigate the provider’s reputation, security measures, and policies, including any limitations on the provider’s liability; and
  • Determine whether the provider retains information submitted by the lawyer before and after the discontinuation of services or asserts proprietary rights to the information.[xi]

The California Practical Guidance for the Use of Generative Artificial Intelligence reinforces this responsibility and further suggests that a lawyer who intends to use confidential information in a generative AI solution should anonymize client information as well as “ensure that the provider does not share information with third parties or utilize the information for its own use in any manner, including to train or improve its product.”[xii] These measures should include consulting with an IT professional as well as reviewing the program’s Terms of Use.

In the Terms of Use dated March 14, 2023, OpenAI advised that:

If you use the Services to process personal data, you must provide legally adequate privacy notices and obtain necessary consents for the processing of such data, and you represent to us that you are processing such data in accordance with applicable law. If you will be using the OpenAI API for the processing of “personal data” as defined in the GDPR or “Personal Information” as defined in CCPA, please fill out this form to request to execute our Data Processing Addendum.[xiii]

The updated Terms of Use, promulgated in November of 2023 and effective as of January 31, 2024, simply state that:

You are responsible for Content, including ensuring that it does not violate any applicable law or these Terms. You represent and warrant that you have all rights, licenses, and permissions needed to provide Input to our Services.[xiv]

ClaudeAI’s Acceptable Use Policy similarly prohibits users from “violating any natural person’s rights, including privacy law” as well as “inappropriately using confidential or personal information.”[xv]

Natalie A. Pierce and Stephanie L. Goutos of Gunderson Dettmer Law Firm note that challenges to the responsible use of GAI systems are actively being addressed by legal entities, from academic institutions to law firms, through methods such as “employee training, AI governance policies, and the formation of specialized AI task forces.” The authors emphasize the importance of recognizing existing countermeasures that aim to help mitigate risks associated with confidentiality concerns, while the framework for a lawyer’s responsible AI use continues to develop. For example, OpenAI’s April 2023 policy change allows users to disable chat history in ChatGPT. The company’s August 2023 update introduced an “enterprise-focused model that offers enhanced security protocols, sophisticated data analysis, and bespoke customization capabilities.” As the technology in Artificial Intelligence continues to evolve, Pierce and Goutos predict that a “majority of law firms and organizations will adopt custom experiences powered directly into their own applications, as well as prohibit the input of any confidential information into public GAI tools, which will substantially alleviate breach of confidentiality concerns.”[xvi]

A lawyer’s affirmative duty to reasonably communicate with his or her client is also implicated in this context. Model Rule 1.4 requires an attorney to “reasonably consult with the client about the means by which the client’s objectives are to be accomplished” and to explain relevant matters “to the extent reasonably necessary to permit the client to make informed decisions regarding the representation.”[xvii] To the extent use of ChatGPT and other AI services in connection with the representation of a client is contemplated, it is therefore important to discuss the potential risks and benefits with the client, so that an informed decision can be made.[xviii]

 

Explore Mostafa Soliman’s analysis in Navigating the Ethical and Technical Challenges of ChatGPT (2023), available through the New York State Bar Association.

The Constitution Has Entered the ‘Chat’: AI Violates the Right to Effective Assistance of Counsel

Coffey Modica Counsel, Mostafa Soliman, was quoted in Legal Drive, offering insight on the ethical use of generative AI.
By Kerianne Morrissey | September 06, 2023

Artificial intelligence (AI) has permeated various industries and professions because the technology has many benefits. The legal profession has slowly adopted this kind of technology as an aid to tasks such as research and writing. Many commentators have discussed the ethical considerations of using this technology in the context of the Model Rules of Professional Conduct. Nicole Yamane, Artificial Intelligence in the Legal Field and the Indispensable Human Element Legal Ethics Demands, 33 Geo. J. Legal Ethics 877, 878 (2020); Augustus Calabresi, Machine Lawyering and Artificial Attorneys: Conflicts in Legal Ethics With Complex Computer Algorithms, 34 Geo. J. Legal Ethics 789, 797 (2021); Amy B. Cyphert, A Human Being Wrote This Law Review Article: GPT-3 and The Practice of Law, 55 U.C. Davis L. Rev. 401, 423 (2021).

However, little has been written about the constitutional implications of the use of AI, in particular, the Sixth Amendment right to effective assistance of counsel in the criminal context. Given the rate at which AI continues to develop and anticipating a future that may present artificial lawyers or judges, this issue must be addressed. Cade Metz, ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead, N. Y. Times, (May 1, 2023).

While AI technology may have some benefits, they are substantially outweighed by the prejudice and harm to people in contact with the criminal justice system. Because of the dark prospect that technology, namely an artificial lawyer, may be the only gatekeeper between a person and their liberty, this technology must be prohibited in the criminal justice system or, in the alternative, strictly regulated.

Attorneys have a duty to uphold the Constitution and must not let the appeal of AI’s convenience and profitability infringe on our constitutional right to effective representation.

Landscape for the Future
Dr. Geoffrey Hinton invented the technology that led to the development of AI language processing programs. Cade Metz, ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead, N. Y. Times, (May 1, 2023). Tech giants Google and OpenAI acquired this technology and began creating powerful AI programs, the latest of which is Generative Pre-Training Transformer-4 (“GPT-4”). Cade Metz, ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead, N. Y. Times, (May 1, 2023); Alan Truly, GPT-4: how to use the AI chatbot that puts ChatGPT to shame, Digital Trends, (June 16, 2023).

ChatGPT-4 is the technology behind the latest chatbot from OpenAI that is capable of conversation. Luca CM Melchionna, Bias and Fairness in Artificial Intelligence, N.Y.L.J., 95(4) 29, 30 (2023). These programs are language processing tools that process massive data sets, recognize patterns and then predict language. Cyphert, supra, at 403. Notably, AI’s output is limited to the data set used to train the algorithm, making this technology susceptible to bias. Melchionna, supra, at 30.

With AI’s ability to predict language, the legal profession has been using this technology to assist in various tasks pertaining to research and writing, such as legal research, discovery, drafting briefs or contracts and contract review. Yamane, supra, at 882.

Driven by profit, companies are racing to develop AI to advance the capabilities of this technology in a presently unregulated environment. Cade Metz, ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead, N. Y. Times, (May 1, 2023). Dr. Hinton fears that companies will create technology that will outsmart human beings and become a threat to humanity. Cade Metz, ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead, N. Y. Times, (May 1, 2023).

With this threat on the horizon, the creation of artificial general intelligence (AGI), with the “flexibility and resourcefulness of human intelligence,” is closer than we think. Gary Marcus, Artificial General Intelligence is Not as Imminent as You Might Think, Sci. Am., (July 1, 2022); Alan Truly, GPT-4: how to use the AI chatbot that puts ChatGPT to shame, Digital Trends, (June 16, 2023).

With the present use of AI technology in practice and AGI technology on the horizon that may provide a substitute for attorneys, the use of this technology and its potentially grave consequences in the criminal justice system must be closely scrutinized.

The Right to Counsel
The Sixth Amendment right to counsel is clearly established, tracing back to the purpose of the Bill of Rights to guarantee “. . . fairness and justice before any person could be deprived of ‘life, liberty, or property.’” Adams v. U.S. ex rel. McCann, 317 U.S. 269, 276 (1942). The Supreme Court has explained, “The purpose of the Sixth Amendment guarantee of counsel is to ensure that a defendant has the assistance necessary to justify reliance on the outcome of the proceeding.” Strickland v. Washington, 466 U.S. 668, 692 (1984).

The constitutional requirement of counsel is not satisfied merely because “. . . a person who happens to be a lawyer is present at trial alongside the accused.” Strickland, 466 U.S. at 692. It is counsel’s role in the adversarial process to use their skills to afford the defendant an opportunity to test the prosecutor’s case, ensuring fairness and the reliability of the outcome of a criminal proceeding. Adams, 317 U.S. 269.

To succeed on a claim of ineffective assistance of counsel, a defendant must show that the conduct of counsel was deficient, falling below an objective standard of reasonableness, and that it resulted in prejudice. 466 U.S. at 668. Counsel’s deficient performance is prejudicial if the defendant shows, but for the attorney’s deficient conduct, there is a reasonable probability that the outcome would have been different. 466 U.S. at 668.

A narrow exception to the prejudice prong of the Strickland test would be circumstances that amount to a constructive denial of effective assistance because they erode the inherent fairness in the adversarial process and undermine the outcome. U.S. v. Cronic, 466 U.S. 648 (1984).

Examples of such circumstances are a conflict of interest or counsel’s failure to test the prosecution’s case. Cronic, 466 U.S. 648. A defendant must overcome the presumption that the counsel’s conduct was reasonable. Cullen v. Pinholster, 563 U.S. 170, 179 (2011).

Prejudice Outweighs Benefits
The use of AI in the context of criminal legal representation falls well below the Strickland and Cronic standards for effective assistance of counsel. 466 U.S. 668; Cronic, 466 U.S. 648. While AI may have some benefits, the risks of AI’s deficiency would prejudice a defendant in violation of their right to effective assistance of counsel.

The benefit of using AI in the legal profession is generally to improve the speed and quality of the services rendered. AI achieves this by completing tasks faster than a human lawyer can, which improves efficiency and accuracy and is thus cost-effective. Yamane, supra, at 882. Attorneys are using AI to assist in legal research, writing briefs, and drafting and reviewing contracts. Yamane, supra, at 882.

Additionally, because the technology predicts a response based on the data it learns from, the technology can be customized to a specific area of practice. Mostafa Soliman, Navigating the Ethical and Technical Challenges of ChatGPT, N.Y.L.J., 95(4), 27-28 (2023).

Further, commentators propose that AI could resolve the equity issue of access to justice for those who cannot afford representation. Yamane, supra, at 886. It is apparent that most of the benefits of AI relate predominantly to the efficiency of tasks, increased output, cost savings for clients, more business, and higher profits for the private sector.

The use of AI to complement or substitute for the work of an attorney raises concerns about possible ethical violations under the Model Rules of Professional Conduct, such as competence, unauthorized practice of law, or bias. Yamane, supra, at 883. The basis for any such ethical violations may present a myriad of problems including producing false or biased information or the risk of disclosing client confidentiality. Cyphert, supra, at 434; Roy D. Simon, Artificial Intelligence, Real Ethics, 90 N.Y. St. B. J. 34, 36 (2018).

In the criminal context, because of the problems with this technology, the use of AI and any benefits it may confer are undercut by the enormous risks posed that could result in an attorney’s deficient conduct. Such deficiency may substantially prejudice a defendant and erode the structural fairness of our cherished adversarial system.

First, the use of AI may be a silent element in the course of representation where attorneys may not even realize they are using AI. Calabresi, supra, at 800. Many commonly used legal research databases, such as Westlaw and Lexis, are already using AI technology to improve research results. Calabresi, supra, at 800.

Generally, legal research and writing are fundamental skills an attorney must possess to represent anyone in any matter adequately. One of the main problems with AI technology is producing false or biased information. Yamane, supra, at 882. If an attorney were to rely on this technology to conduct research or draft legal documents, these risks, if they came to fruition, would clearly lower the standard of reasonable attorney conduct under the Strickland framework. 466 U.S. 668.

Furthermore, as a silent element in the course of representation, it would be impossible for a defendant who is challenging the effectiveness of their representation to meet their burden to show that the role of AI changed the outcome of their case. In particular, it would be a hefty burden to overcome when attorneys enjoy the presumption of effective assistance. Cullen, 563 U.S. at 179.

This lack of transparency and accountability in the use of AI through the course of representation may substantially prejudice a defendant.

Second, the use of AI, not only in tasks like research and writing but in the development or analysis of evidence, provides a loophole in our adversarial system to admit untested evidence. The adversarial design of our system provides a truth-seeking forum where the evidence presented is challenged and tested through cross-examination.

However, if the court prohibits counsel from cross-examining admitted evidence that was generated using AI, it creates an inherently unreliable outcome that may amount to a constructive denial of effective assistance.

For example, in People v. Wakefield, the New York Court of Appeals held that the technology used to analyze genotyping DNA evidence was not considered a declarant and the defendant did not have a right to test the evidence through cross-examination. People v. Wakefield, 38 N.Y.3d 367 (2022). While Wakefield involves the confrontation clause of the Sixth Amendment, the inability to test evidence admitted raises the issue of constructive denial of effective assistance of counsel.

Third, one commentator has cited the use of AI as potentially having the benefit of closing the gap in access to justice for those who cannot afford representation because of AI’s capability to generate answers to legal questions. Yamane, supra, at 885-7.

However, if the prospect of AGI is actualized, it would not close the access-to-justice gap but create a two-tiered justice system. The issues and limitations of AI are not de minimis: potentially providing output consisting of false information or even racist or sexist information. Cyphert, supra, at 414.

Using AI as a substitute for those who cannot afford human representation would further disadvantage people who would receive legal answers that may not be correct or provide information that is discriminatory or offensive. Melchionna, supra, at 31.

The greater equity that would result from the possibility of closing the access to justice gap is an exciting prospect. However, the massive social problem of inequity in accessing legal representation will not be solved by a technological solution embedded in the neoliberal ideology that plagues our society. Jeff Sugarman, Neoliberalism and psychological ethics, J. of Theoretical and Phil. Psychol., 35(2) 103-116 (2015); Evgeny Morozov, The True Threat of Artificial Intelligence, N. Y. Times, (June 30, 2023).

Lastly, legal representation involves human elements that cannot be replicated. Attorneys are not robots; they do not simply intake information and spit out answers. There are nuances that accompany the role, such as building a relationship with the person you are representing. The assignment is not to represent a case but a person: someone who selected, or was assigned, an attorney to assist in their defense and to stand between them and the possibility of incarceration.

One of the most critical aspects of an attorney’s role, particularly in the criminal context, is the unwavering loyalty and zealous advocacy for the person they are representing. As the Supreme Court recognized, an attorney is not “simply a person who happens to be a lawyer . . . standing alongside the accused,” but the Constitution requires much more. Strickland, 466 U.S. at 685.

To strip the art of legal representation of these elements would undoubtedly prejudice a defendant, for no technology is capable of zealous advocacy, loyalty, and empathy. There are some characteristics, “understanding, self-awareness . . . emotions, desires . . .” that are uniquely human. David Brooks, ‘Human Beings Are Soon Going to Be Eclipsed,’ N. Y. Times, (July 13, 2023).

In the criminal context, the use of AI presently and the potential of AGI as a substitution for attorneys creates an unlevel playing field that strikes at the heart of fairness, which is the essence of our constitutional right to counsel.

Safeguarding Liberty
Despite the proposed benefits of AI, mainly in the private sector, they are substantially outweighed by the enormous harm and prejudice sustained, particularly in the criminal justice system. People will continue to be subjected to harm as AI continues to develop and infiltrate the legal profession.

However, the Constitution requires effective representation of persons at risk of losing their liberty, and the use of AI results in a grossly prejudicial outcome.

Some “solutions” that commentators propose as safeguards for using AI in the course of legal representation are appropriate oversight and limiting the use of AI as a tool, not a substitute. Yamane, supra, at 889.

These proposed safeguards are inadequate not only in the criminal context but for what is to come with the lightning speed advancements occurring with this technology. Cade Metz, ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead, N. Y. Times, (May 1, 2023). The only measure that is adequate when liberty is at stake is prohibiting the use of AI in the criminal justice system or strict regulations that at least permit cross-examination.

The legal field is a self-regulating profession, and attorneys are stewards of the Constitution. Innovative technology can be exciting and appealing, but there are no shortcuts to justice, truth, and defending liberty. The legal profession cannot be complicit in the encroachment of unregulated and unbridled technology infringing on the cherished constitutional rights of the people they serve. Liberty is too great a cost for convenience.

###