Artificial intelligence has made it difficult to tell what's real and what's not, especially in photos and videos. This has led to deepfakes, AI robocall scams, accusations of fake AI crowds, and misinformation, which is only increasing ahead of the presidential election in November.
However, Andy Parsons, senior director of the Content Authenticity Initiative at Adobe, says the problem can be solved with another technology: content credentials.
These "nutrition labels" of information are embedded in the metadata of digital content and act as an invisible watermark indicating whether something was made with AI. They answer questions about how a piece of content was made and whether it was created or edited with AI.
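To make the idea concrete, here is a minimal sketch of how a metadata "label" of this kind might work in principle. The field names and functions below are illustrative assumptions, loosely modeled on C2PA-style content credentials, not Adobe's actual implementation:

```python
import hashlib
import json

def make_manifest(image_bytes: bytes, tool: str, ai_edited: bool) -> dict:
    """Build an illustrative provenance manifest. Field names here are
    hypothetical; real content credentials follow the C2PA spec."""
    return {
        "claim_generator": tool,                 # which program made the edit
        "assertions": {"ai_edited": ai_edited},  # was AI involved?
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash; a mismatch means the pixels were changed
    after the credential was attached."""
    return manifest["content_hash"] == hashlib.sha256(image_bytes).hexdigest()

original = b"\x89PNG...fake image bytes"
m = make_manifest(original, "Photoshop", ai_edited=True)
print(json.dumps(m, indent=2))
print(verify(original, m))         # content untouched, hash matches
print(verify(original + b"!", m))  # content altered, hash mismatch
```

The key point the example illustrates is that the credential travels with the file's metadata and can be checked by anyone downstream, without needing to trust whoever passed the file along.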
Creators can add content credentials to anything they make in programs like Photoshop and Microsoft Designer.
"This level of transparency can help dispel suspicion, especially during breaking news and election cycles," Parsons told Entrepreneur.
How Big Tech is using credentials
OpenAI is already using credentials in images generated by its DALL·E 3 AI image generator as part of its approach to the 2024 elections. Google is exploring credentials to determine where content across its products, including YouTube, came from. And in October, Leica released the world's first camera that automatically adds content credentials to photos taken with it.
Meanwhile, in May, TikTok became the first social media platform to use content credentials to automatically detect and label “AI-generated” content where appropriate.
Parsons says the “broad adoption” of credentials by social media platforms would “create a chain of trust.”
“This will empower consumers to verify details themselves and allow good actors to be trusted,” he said.
Adobe sits on the Content Authenticity Initiative's steering committee, along with Google, Microsoft, Sony, Intel, and more than 2,000 other member companies.