
What evidential value do images still have?

The AI excuse

On Wednesday, April 26, 2023, a judge in California ruled that Tesla CEO Elon Musk must testify under oath regarding certain statements he made about the safety and capabilities of Tesla’s Autopilot features. The order is part of a lawsuit filed by the family of Walter Huang, an Apple engineer who died in a 2018 car crash. Tesla’s lawyers have argued both that Musk cannot remember the specifics of the statements the plaintiffs want to question him about, and that Musk, as a high-profile CEO, is often the target of convincing “deepfake” videos.

The Huang family alleges that Tesla’s semi-automated driving software failed, while Tesla argues that Huang was distracted by a video game on his phone and ignored vehicle warnings at the time of the crash.

Musk is expected to be questioned about a statement he allegedly made in 2016, claiming that the Model S and Model X could already drive autonomously more safely than a human. Tesla has opposed the request in court filings, arguing that Musk cannot remember the details of his statements.

On Friday, a California state court jury found that Tesla’s Autopilot feature did not fail, in what appears to be the first trial over a crash involving the semi-automated driving software.

End of the evidential value of visual and audio recordings?

It’s clear that AI-generated images are now so realistic that, to the untrained eye, they are virtually indistinguishable from actual photographs. The same increasingly applies to fabricated images of people, known as “deepfakes”, and to synthetic voice recordings, referred to as “deep voice”. In the United States, criminals have already used deep-voice technology to impersonate relatives over the phone and ask for money, essentially a modern version of the grandparent scam.

AI programs such as Midjourney, DALL-E, and others can create strikingly realistic images, sounds, and videos, which poses two significant problems for the formation of public opinion. The debate has largely centered on the risk of fake content and the resulting spread of misinformation. But as more people become aware of AI’s capabilities, they are also likely to grow more skeptical of genuine images and sound recordings.

In the near future, the issue will likely shift from scandals caused by fake images to real scandals being dismissed due to the decreasing credibility of image and sound evidence. This could also severely undermine trust in historical documentaries, as the ability to alter and stage the past could lead to more people questioning historical facts. This blurring of the past into the realm of fiction could have serious implications for societal consensus.

In this new reality, “It wasn’t me” could become a common defense for individuals confronted with incriminating images. This presents a significant challenge for journalists who receive potentially explosive material that they struggle to verify. As demonstrated by the Tesla case, judges will also increasingly have to consider the possibility that evidence may not be authentic, but rather AI-generated. The question is, are they ready for this?

Consequences for court proceedings

In a discussion with LTO, Dr. Christian Rückert, a legal scholar and expert in cybercrime, voiced his concern about the German judiciary’s reluctance to critically evaluate digital evidence. Even leaving AI-generated deepfakes and audio recordings aside, he argued, the judiciary needs to improve its understanding of digital evidence. He pointed to the EncroChat cases, in which courts accepted the authenticity and completeness of chat logs even though the French authorities never handed over the original data.

Rückert also criticized the unquestioning use of WhatsApp chats as evidence in court, where printouts of chat conversations are frequently treated as trustworthy. He considers this approach negligent, especially given that free online tools exist that can generate fake WhatsApp chat logs. If courts took their duty to investigate seriously, he argues, they would have to inspect the original WhatsApp database on the smartphone itself to detect possible manipulation.
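To illustrate what one small part of such an inspection could involve: the following is a minimal sketch, assuming investigators hold both a forensic copy of the device’s chat database and the file submitted as evidence, that compares cryptographic hashes of the two. The file names and paths are hypothetical, not WhatsApp’s actual storage layout.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streaming it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: the forensic image of the phone's chat database
# and the copy entered into evidence.
original = Path("device_image/msgstore.db")   # assumed name and location
submitted = Path("evidence/msgstore.db")

if sha256_of(original) == sha256_of(submitted):
    print("Digests match: the submitted copy is bit-identical to the device file.")
else:
    print("Digests differ: the submitted copy was altered or re-exported.")
```

A matching digest only shows that the two files are bit-identical; it says nothing about whether the data on the device itself was manipulated before the copy was made, which is why Rückert insists on examining the original.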

Rückert stressed that, given the rise in AI-produced images, it’s vital for courts to examine evidence more thoroughly and stay vigilant. He counsels judges to always remember that any evidence presented to them could potentially be doctored.
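As a small illustration of what a routine first-pass check on a contested image might look like, here is a sketch that dumps an image’s embedded EXIF metadata, assuming Python with the Pillow library and a hypothetical exhibit filename. It is a weak signal at best, since EXIF fields are trivially stripped or forged, but it shows the kind of elementary scrutiny Rückert is calling for.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> dict:
    """Return human-readable EXIF tags for an image; empty if none are present."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Hypothetical exhibit filename.
tags = dump_exif("exhibit_photo.jpg")
if not tags:
    print("No EXIF metadata found: consistent with stripping, editing, or AI generation.")
else:
    for name, value in tags.items():
        print(f"{name}: {value}")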

Judge is “deeply concerned”

This all ties back to the lawsuit against Tesla for damages and the statement attributed to Elon Musk that Tesla’s Autopilot is safer than a human driver. As things stand, everything points to an embarrassing claim by Tesla’s legal team. There is no evidence of a deepfake: the event was organized by the well-known US media outlet Recode, which also uploaded the video to YouTube. It took place almost seven years ago, when artificial intelligence was not yet capable of producing such flawless deepfakes. Nor does the nature of the conversation, specifically Musk’s exchange with an audience member, suggest a fake. Judge Evette Pennypacker of California responded with appropriate outrage, describing Tesla’s arguments as deeply concerning, since they would effectively shield celebrities from being held accountable for their actions and words.

The judge therefore ordered a limited three-hour deposition of the Tesla CEO at the end of July, at which Musk is to be questioned about the authenticity of the statement he made at the event, an event, incidentally, attended by a large crowd. In the end, Musk’s statement could be traced back to him by the oldest form of legal evidence: witness testimony.
