
The Constitution Has Entered the ‘Chat’: AI Violates the Right to Effective Assistance of Counsel

Coffey Modica counsel Mostafa Soliman was quoted in Legal Drive, offering insight on the ethical use of generative AI.
By Kerianne Morrissey | September 06, 2023

Artificial intelligence (AI) has permeated various industries and professions because of its many benefits. The legal profession has slowly adopted the technology as an aid to tasks such as research and writing. Many commentators have discussed the ethical considerations of using this technology in the context of the Model Rules of Professional Conduct. Nicole Yamane, Artificial Intelligence in the Legal Field and the Indispensable Human Element Legal Ethics Demands, 33 Geo. J. Legal Ethics 877, 878 (2020); Augustus Calabresi, Machine Lawyering and Artificial Attorneys: Conflicts in Legal Ethics With Complex Computer Algorithms, 34 Geo. J. Legal Ethics 789, 797 (2021); Amy B. Cyphert, A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law, 55 U.C. Davis L. Rev. 401, 423 (2021).

However, little has been written about the constitutional implications of the use of AI, in particular the Sixth Amendment right to effective assistance of counsel in the criminal context. Given the rate at which AI continues to develop, and anticipating a future that may present artificial lawyers or judges, this issue must be addressed. Cade Metz, ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead, N.Y. Times (May 1, 2023).

While AI technology may have some benefits, they are substantially outweighed by the prejudice and harm to people in contact with the criminal justice system. Because of the dark prospect that technology, namely an artificial lawyer, may be the only gatekeeper between a person and their liberty, this technology must be prohibited in the criminal justice system or, in the alternative, strictly regulated.

Attorneys have a duty to uphold the Constitution and must not let the appeals of AI’s convenience and profit infringe on our constitutional right to effective representation.

Landscape for the Future
Dr. Geoffrey Hinton invented the technology that led to the development of AI language processing programs. Metz, supra. Tech giants Google and OpenAI acquired this technology and began creating powerful AI programs, the latest of which is the Generative Pre-trained Transformer 4 (“GPT-4”). Metz, supra; Alan Truly, GPT-4: how to use the AI chatbot that puts ChatGPT to shame, Digital Trends (June 16, 2023).

GPT-4 is the technology behind the latest version of ChatGPT, OpenAI's chatbot capable of conversation. Luca C.M. Melchionna, Bias and Fairness in Artificial Intelligence, N.Y.L.J., 95(4) 29, 30 (2023). These programs are language processing tools that process massive data sets, recognize patterns and then predict language. Cyphert, supra, at 403. Notably, AI's output is limited to the data set used to train the algorithm, making the technology susceptible to bias. Melchionna, supra, at 30.

With AI’s ability to predict language, the legal profession has been using this technology to assist in various tasks pertaining to research and writing, such as legal research, discovery, drafting briefs or contracts and contract review. Yamane, supra, at 882.

Driven by profit, companies are racing to develop AI and advance its capabilities in a presently unregulated environment. Metz, supra. Dr. Hinton fears that companies will create technology that will outsmart human beings and become a threat to humanity. Id.

With this threat on the horizon, the creation of artificial general intelligence (AGI), with the “flexibility and resourcefulness of human intelligence,” may be closer than we think. Gary Marcus, Artificial General Intelligence Is Not as Imminent as You Might Think, Sci. Am. (July 1, 2022); Truly, supra.

With the present use of AI technology in practice and AGI technology on the horizon that may provide a substitute for attorneys, the use of this technology and its potentially grave consequences in the criminal justice system must be closely scrutinized.

The Right to Counsel
The Sixth Amendment right to counsel is clearly established, tracing back to the purpose of the Bill of Rights to guarantee “. . . fairness and justice before any person could be deprived of ‘life, liberty, or property.’” Adams v. U.S. ex rel. McCann, 317 U.S. 269, 276 (1942). The Supreme Court has explained, “The purpose of the Sixth Amendment guarantee of counsel is to ensure that a defendant has the assistance necessary to justify reliance on the outcome of the proceeding.” Strickland v. Washington, 466 U.S. 668, 692 (1984).

The constitutional requirement of counsel is not satisfied merely because “a person who happens to be a lawyer is present at trial alongside the accused.” Strickland, 466 U.S. at 685. It is counsel's role in the adversarial process to use their skills to afford the defendant an opportunity to test the prosecutor's case, ensuring the fairness and reliability of the outcome of a criminal proceeding. Adams, 317 U.S. 269.

To succeed on a claim of ineffective assistance of counsel, a defendant must show that counsel's performance was deficient, falling below an objective standard of reasonableness, and that the deficiency resulted in prejudice. Strickland, 466 U.S. at 687-88. Counsel's deficient performance is prejudicial if the defendant shows that, but for the attorney's deficient conduct, there is a reasonable probability that the outcome would have been different. Id. at 694.

A narrow exception to the prejudice prong of the Strickland test applies in circumstances that amount to a constructive denial of effective assistance because they erode the inherent fairness of the adversarial process and undermine the outcome. U.S. v. Cronic, 466 U.S. 648 (1984).

Examples of such circumstances are a conflict of interest or counsel's failure to test the prosecution's case. Cronic, 466 U.S. 648. A defendant must overcome the presumption that counsel's conduct was reasonable. Cullen v. Pinholster, 563 U.S. 170, 179 (2011).

Prejudice Outweighs Benefits
The use of AI in the context of criminal legal representation falls well below the Strickland and Cronic standards for effective assistance of counsel. Strickland, 466 U.S. 668; Cronic, 466 U.S. 648. While AI may have some benefits, the risks of AI's deficiencies would prejudice a defendant in violation of their right to effective assistance of counsel.

The benefit of using AI in the legal profession is generally to improve the speed and quality of the services rendered. AI completes tasks at a faster pace than a human lawyer, which improves efficiency and accuracy and thus cuts costs. Yamane, supra, at 882. Attorneys are using AI to assist in legal research, writing briefs, and drafting and reviewing contracts. Id.

Additionally, because the technology predicts a response based on the data it learns from, the technology can be customized to a specific area of practice. Mostafa Soliman, Navigating the Ethical and Technical Challenges of ChatGPT, N.Y.L.J., 95(4), 27-28 (2023).

Further, commentators propose that AI could resolve the equity issue of access to justice for those who cannot afford representation. Yamane, supra, at 886. It is apparent that most of the benefits of AI relate predominantly to the efficiency of tasks, increased output, lower costs for clients, more business, and higher profits for the private sector.

The use of AI to complement or substitute for the work of an attorney raises concerns about possible ethical violations under the Model Rules of Professional Conduct, such as competence, the unauthorized practice of law, or bias. Yamane, supra, at 883. Any such ethical violations may present a myriad of problems, including the production of false or biased information or the disclosure of confidential client information. Cyphert, supra, at 434; Roy D. Simon, Artificial Intelligence, Real Ethics, 90 N.Y. St. B.J. 34, 36 (2018).

In the criminal context, because of the problems with this technology, any benefits the use of AI may confer are undercut by the enormous risks it poses, risks that could result in an attorney's deficient conduct. Such deficiency may substantially prejudice a defendant and erode the structural fairness of our cherished adversarial system.

First, AI may be a silent element in the course of representation; attorneys may not even realize they are using it. Calabresi, supra, at 800. Many commonly used legal research databases, such as Westlaw and Lexis, already use AI technology to improve research results. Id.

Generally, legal research and writing are fundamental skills an attorney must possess to adequately represent anyone in any matter. One of the main problems with AI technology is that it can produce false or biased information. Yamane, supra, at 882. If an attorney were to rely on this technology to conduct research or draft legal documents and these risks came to fruition, the attorney's conduct would clearly fall below the standard of reasonableness under the Strickland framework. 466 U.S. 668.

Furthermore, because AI is a silent element in the course of representation, it would be impossible for a defendant challenging the effectiveness of their representation to meet the burden of showing that the role of AI changed the outcome of their case. That burden is all the heavier because attorneys enjoy the presumption of effective assistance. Cullen, 563 U.S. at 179.

This lack of transparency and accountability in the use of AI through the course of representation may substantially prejudice a defendant.

Second, the use of AI, not only in tasks like research and writing but also in the development or analysis of evidence, provides a loophole in our adversarial system for admitting untested evidence. The adversarial design of our system provides a truth-seeking forum where the evidence presented is challenged and tested through cross-examination.

However, if the court prohibits counsel from cross-examining admitted evidence that was generated using AI, it creates an inherently unreliable outcome that may amount to a constructive denial of effective assistance.

For example, in People v. Wakefield, 38 N.Y.3d 367 (2022), the New York Court of Appeals held that the software used to analyze genotyping DNA evidence was not a declarant, and the defendant did not have a right to test that evidence through cross-examination. While Wakefield involves the Confrontation Clause of the Sixth Amendment, the inability to test admitted evidence raises the issue of constructive denial of effective assistance of counsel.

Third, one commentator suggests that AI could help close the gap in access to justice for those who cannot afford representation because of AI's capability to generate answers to legal questions. Yamane, supra, at 885-87.

However, if the prospect of AGI is actualized, it would not close the access-to-justice gap but would create a two-tiered justice system. The issues and limitations of AI are not de minimis: the technology may produce output consisting of false information or even racist or sexist content. Cyphert, supra, at 414.

Using AI as a substitute for those who cannot afford human representation would further disadvantage them, supplying legal answers that may be incorrect, discriminatory, or offensive. Melchionna, supra, at 31.

The greater equity that could result from closing the access-to-justice gap is an exciting prospect. However, the massive social problem of inequitable access to legal representation will not be solved by a technological fix embedded in the neoliberal ideology that plagues our society. Jeff Sugarman, Neoliberalism and psychological ethics, J. of Theoretical and Phil. Psychol., 35(2) 103-116 (2015); Evgeny Morozov, The True Threat of Artificial Intelligence, N.Y. Times (June 30, 2023).

Lastly, legal representation involves human elements that cannot be replicated. Attorneys are not robots; they do not simply take in information and spit out answers. There are nuances to the role, such as building a relationship with the person being represented. The assignment is not to represent a case but a person, someone who selected, or was assigned, an attorney to assist in their defense and to stand between them and the possibility of incarceration.

One of the most critical aspects of an attorney's role, particularly in the criminal context, is unwavering loyalty and zealous advocacy for the person they represent. As the Supreme Court recognized, an attorney is not “simply a person who happens to be a lawyer . . . standing alongside the accused”; the Constitution requires much more. Strickland, 466 U.S. at 685.

To strip the art of legal representation of these elements would undoubtedly prejudice a defendant, for no technology is capable of zealous advocacy, loyalty, and empathy. Some characteristics, “understanding, self-awareness . . . emotions, desires . . . ,” are uniquely human. David Brooks, ‘Human Beings Are Soon Going to Be Eclipsed,’ N.Y. Times (July 13, 2023).

In the criminal context, the present use of AI and the potential of AGI as a substitute for attorneys create an unlevel playing field that strikes at the heart of fairness, the essence of our constitutional right to counsel.

Safeguarding Liberty
The proposed benefits of AI, enjoyed mainly in the private sector, are substantially outweighed by the enormous harm and prejudice it inflicts, particularly in the criminal justice system. People will continue to be harmed as AI develops and infiltrates the legal profession.

The Constitution, however, requires effective representation of persons at risk of losing their liberty, and the use of AI results in grossly prejudicial outcomes.

Some “solutions” that commentators propose to impose safeguards when using AI in the course of legal representation are appropriate oversight and limiting AI to use as a tool, not a substitute. Yamane, supra, at 889.

These proposed safeguards are inadequate, not only in the criminal context but also in light of the lightning-speed advancements occurring with this technology. Metz, supra. The only adequate measure when liberty is at stake is prohibiting the use of AI in the criminal justice system or, at minimum, imposing strict regulations that permit cross-examination.

The legal field is a self-regulating profession, and attorneys are stewards of the Constitution. Innovative technology can be exciting and appealing, but there are no shortcuts to justice, truth, and defending liberty. The legal profession cannot be complicit in allowing unregulated, unbridled technology to infringe on the cherished constitutional rights of the people it serves. Liberty is too great a cost for convenience.

###