AI is coming for the past too
In our focus on protecting the present and future from artificial intelligence, we have forgotten about the urgent need to protect the past.
WE DON’T have to imagine a world where deepfakes imitate the voices of politicians so believably that they can gin up scandals capable of swaying elections. It’s already here. Fortunately, there are reasons for optimism about society’s ability to identify fake media and maintain a shared understanding of current events.
While we have reason to believe the future may be safe, we worry that the past is not.
History can be a powerful tool for manipulation and malfeasance. The same generative artificial intelligence (AI) that can fake current events can also fake past ones. While new content may be secured through built-in systems, there is a world of existing content that has never been watermarked. (Watermarking adds imperceptible information to a digital file so that its provenance can be traced.) Once watermarking at creation becomes widespread and people adapt to distrust content that is not watermarked, everything produced before that point can be much more easily called into question.
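To make the watermarking idea concrete, here is a minimal sketch of the simplest form of the technique: least-significant-bit (LSB) embedding, where each payload bit replaces the lowest bit of one carrier byte. This is an illustrative toy, not any particular provenance standard discussed in the article; real systems use far more robust, tamper-evident schemes.

```python
# Toy LSB watermark: hide a small payload in the least significant
# bits of raw media bytes. Changing only the lowest bit of each
# carrier byte alters each sample by at most 1 -- imperceptible in
# typical image or audio data.

def embed_watermark(data: bytes, mark: bytes) -> bytes:
    """Return a copy of `data` with `mark` hidden in its LSBs."""
    # Expand the payload into individual bits, most significant first.
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(data):
        raise ValueError("carrier too small for watermark")
    out = bytearray(data)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_watermark(data: bytes, mark_len: int) -> bytes:
    """Recover `mark_len` bytes previously embedded with embed_watermark."""
    bits = [b & 1 for b in data[: mark_len * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k : k + 8]))
        for k in range(0, len(bits), 8)
    )

carrier = bytes(range(256))        # stand-in for raw media samples
marked = embed_watermark(carrier, b"BT")
recovered = extract_watermark(marked, 2)
```

The fragility of this scheme also illustrates the article's point: re-encoding or compressing the file destroys the hidden bits, and content created before embedding simply carries no mark at all.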