Metadata, Watermarks, and Other Technical Systems Are Important in Tackling AI-Generated Fake Images

What can stop AI from flooding the internet with fake images and videos?

Some experts believe that watermarks, metadata, and other technical systems offer a way to tell fake images from real ones. Google, Adobe, and Microsoft all support some form of AI labeling in their products. Google, for instance, recently announced at its I/O conference that in the months ahead it would attach a written disclosure, similar to a copyright notice, underneath AI-generated images on Google Images. OpenAI’s DALL-E image generator already adds a visible watermark at the bottom of every image it creates.
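To give a sense of how simple the visible-label approach is, here is a rough sketch in Python using the Pillow library. It stamps a strip of colored squares into the bottom-right corner of an image, loosely in the spirit of DALL-E’s signature strip; the colors, square size, and placement here are assumptions for illustration, not OpenAI’s actual implementation.

# Sketch: stamp a DALL-E-style strip of colored squares onto an image.
# Illustrative only: the colors, square size, and placement are assumptions,
# not OpenAI's actual watermarking code.
from PIL import Image, ImageDraw

def add_watermark_strip(path_in, path_out, square=16):
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    colors = ["#ffff66", "#42ffff", "#51da4c", "#ff6e3c", "#3c46ff"]
    # Anchor the strip to the bottom-right corner of the image.
    x0 = img.width - square * len(colors)
    y0 = img.height - square
    for i, color in enumerate(colors):
        draw.rectangle(
            [x0 + i * square, y0, x0 + (i + 1) * square, img.height],
            fill=color,
        )
    img.save(path_out)

add_watermark_strip("generated.png", "generated_marked.png")

A visible mark like this is easy to crop out, which is one reason companies pair it with metadata that travels inside the file itself.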

Andy Parsons is the senior director of Adobe’s Content Authenticity Initiative. He said, “We have a right to establish an objective reality that we can all agree on. And it starts with knowing exactly what it is and, in some cases where it makes sense to do so, who created it or where it comes from.”

Adobe uses a tool called Content Credentials to track when AI is used to edit an image, which helps reduce confusion about which pictures are real and which are fake. It works like a nutrition label for digital content: the information stays with the file no matter where it is published or stored. Photoshop’s newest feature, Generative Fill, uses AI to quickly create new content inside an existing picture, and Content Credentials can track those changes.
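Real Content Credentials are signed provenance manifests cryptographically bound to the file. The sketch below is a much simpler stand-in: it writes plain provenance fields into a PNG’s text chunks with Pillow, just to illustrate the “label travels with the file” idea. The field names and values are invented for the example and are not Adobe’s schema.

# Sketch: attach simple provenance metadata to a PNG, in the spirit of
# Content Credentials. Real manifests are cryptographically signed; these
# plain-text fields (and their names) are assumptions for illustration.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def attach_provenance(path_in, path_out, edits):
    img = Image.open(path_in)
    meta = PngInfo()
    # A hypothetical "nutrition label": which tool was used and
    # which AI edits were applied.
    meta.add_text("provenance", json.dumps({
        "tool": "Photoshop (Generative Fill)",
        "ai_edits": edits,
    }))
    img.save(path_out, pnginfo=meta)

def read_provenance(path):
    # PNG text chunks stay with the file when it is copied or re-hosted.
    return json.loads(Image.open(path).text["provenance"])

attach_provenance("photo.png", "photo_labeled.png", ["background extended"])
print(read_provenance("photo_labeled.png"))

Unlike real Content Credentials, these plain text chunks carry no signature, so they can be stripped or forged; the cryptographic binding is what makes production provenance systems trustworthy.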

Source:
https://www.vox.com/technology/23746060/ai-generative-fake-images-photoshop-google-microsoft-adobe
