
A photo shows a frame of a video generated by a new artificial intelligence tool, dubbed “Sora”, unveiled by the company OpenAI, in Paris on February 16, 2024. Stefano Rellandini/AFP via Getty Images
While there’s an obvious negative impact of deepfakes—making people believe things that aren’t real—there’s also a flip side to that problem, warns Nicole Shackleton, a law lecturer at RMIT University in Melbourne.
As awareness of inauthentic media grows, a sceptical public "will be primed to doubt the authenticity of real audio and video evidence," allowing people caught in acts of wrongdoing to muddy the waters by claiming the incriminating image or video isn't real.