Coffey Modica Counsel Mostafa Soliman was featured in The Daily Record offering expert insight on how artificial intelligence is transforming evidentiary standards in modern litigation.
By Mostafa Soliman | March 24, 2026
The rapid advancement of generative artificial intelligence (AI) over the past few years has created unprecedented challenges for the authentication of photos and videos. Deepfakes, digitally manipulated media that replace one person's likeness with another's, have evolved from a novelty into a serious threat, particularly to identity verification and the authentication of photos and videos. What previously required specialized expertise in manual editing software, such as Photoshop, is now widely accessible to anyone with an internet connection.
Such is the case in New York, where the state's highest court recently ruled to exclude video evidence of child sexual abuse over concerns of deepfake manipulation, with Chief Judge Rowan Wilson writing that "here, the confluence of factors — including the bizarre circumstances surrounding the discovery of the videos and the long time period between their creation and their recovery — raise doubts about their authenticity."
These challenges were further evident in the 2024 case of a high school athletic director in Maryland, who was arrested following allegations that he used generative AI tools to create an audio recording of Pikesville High School's principal appearing to make racial slurs. An expert stated that cloning a person's voice can be as simple as uploading a voice sample and typing the text to be synthesized.
Deepfakes pose a unique threat because they are increasingly difficult to detect and will likely pass the low standard for authentication. With that sophistication comes an increased threat of fraudulent claims supported by deepfake photographs and videos. This situation is likely to overwhelm court resources while failing to adequately address the fundamental authentication challenge.
Such was the case in Huang v. Tesla, where the presiding judge found it troubling that Tesla could neither admit nor deny whether certain videos of its vehicles' Autopilot feature had been digitally altered, raising the question of whether proposed evidence had been deepfaked. In State of Washington v. Puloka, the judge declined to admit an AI-enhanced video into evidence on the grounds of confusion of the issues and unfair prejudice.
Understanding how deepfakes are made provides important context for addressing their legal implications. There are various ways to generate deepfake images; however, two primary techniques predominate: face morphing and face swapping. Face morphing blends two images into a composite that resembles both; face swapping replaces one face with another. With its latest 4o model, ChatGPT is now capable of generating photorealistic images from a textual prompt alone, with no photographic input required.
Such advances in deepfake technology confront the legal system with an unprecedented challenge: the authentication of photographs and videos. Historically, the standard for admitting photographic and video evidence has been testimony from a person with knowledge that the photograph or video fairly and accurately represents the condition of the place or item it purports to depict. Extrinsic evidence is thus required to authenticate photographs and videos, such as testimony from the photographer, videographer, technician, engineer, or any other person who observed the events depicted. The same standard applies under Rule 901 of the Federal Rules of Evidence, which requires a knowledgeable witness to attest that a video is a fair and accurate portrayal.
The fair and accurate portrayal standard sets a low threshold for admission, requiring only witness testimony that a depiction fairly and accurately represents the scene or item as the witness knows it. This longstanding standard rests on the fundamental assumption that witnesses have sufficient personal knowledge to verify a photograph or video's authenticity, an assumption that sophisticated deepfakes greatly undermine. The system also assumes that fraudulent deepfake photographs and videos that pass the fair and accurate portrayal standard may subsequently be challenged through expert testimony.
At the close of a homicide trial in Arizona, the court permitted the victim's family to present an AI-generated video of the decedent delivering his own impact statement in open court. In response, Chief Justice Timmer stated that the court has created an AI committee tasked with evaluating the use of AI and providing guidance on its optimal application. Chief Justice Timmer further stated that users of AI, including courts themselves, bear responsibility for ensuring its accuracy.
As deepfake technology advances, detection systems lag behind. Until adequate detection systems have evolved, it is good practice to ask witnesses, under oath, whether a photograph, audio recording, or video has been altered or generated using artificial intelligence. Reliance on experts to opine on whether such alteration or AI generation occurred will likely increase. Developers of AI models should also establish mechanisms for detecting synthetic media, such as embedding digital footprints in generated files, rather than relying solely on users' identifiable information, such as IP and email addresses, to trace a media file's source.
The legal system's response to deepfake challenges will likely require both procedural adaptation and technological innovation. As courts continue to navigate these complex issues, attorneys should remain adaptable to future technological developments and seek to preserve the integrity of legal proceedings in the digital age.
Mostafa Soliman serves as Counsel for leading insurance defense litigation firm Coffey Modica LLP, practicing out of its Buffalo, NY office. A seasoned legal professional with a robust background in international and comparative law, he focuses his practice on defending clients in complex litigation. He previously served as a legal fellow with Equal Justice Works AmeriCorps in Western New York.
###