Google introduces non-removable watermark for AI images
As generative AI tools become more capable, tools that can detect AI-generated text and images are becoming just as important. With elections coming up in 2024 in both the US and the UK, recognizing AI images and deepfakes (fake videos) matters more than ever.
Google is releasing SynthID today: a tool that adds a watermark to AI-generated images that is imperceptible to the human eye but can be easily spotted by a dedicated AI detection tool.
The watermark is embedded directly in the pixels of the image, yet it does not visibly alter the image or degrade its quality. It is also resistant to operations such as cropping and resizing, the usual tricks for defeating traditional watermarks. With SynthID you can edit a photo as much as you want without destroying the AI watermark.
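To make the idea of a pixel-embedded, invisible watermark concrete, here is a minimal sketch using least-significant-bit (LSB) embedding. This is a classic toy technique, not SynthID's actual method, which is proprietary and based on learned models; unlike SynthID, an LSB mark would not survive cropping or resizing. All names and pixel values below are illustrative.

```python
# Toy illustration of pixel-level watermarking (NOT SynthID's actual,
# proprietary method): hide a bit pattern in the least significant bit
# of each pixel value. Each pixel changes by at most 1 out of 255,
# which is invisible to the eye.

def embed_watermark(pixels, bits):
    """Overwrite the LSB of each pixel with the corresponding watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_watermark(pixels, n):
    """Read back the first n LSBs as the watermark bits."""
    return [p & 1 for p in pixels[:n]]

pixels = [200, 13, 97, 54, 180, 255, 0, 42]   # hypothetical grayscale strip
mark   = [1, 0, 1, 1, 0, 0, 1, 0]             # hypothetical watermark bits

marked = embed_watermark(pixels, mark)
# The change per pixel is imperceptible...
assert all(abs(a - b) <= 1 for a, b in zip(pixels, marked))
# ...yet the mark reads back exactly.
assert extract_watermark(marked, 8) == mark
```

The fragility of this scheme is precisely why a robust watermark like SynthID needs something stronger: the moment the image is resized or re-encoded, these low-order bits are scrambled, while SynthID's learned watermark is designed to survive such edits.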
And as SynthID's underlying models improve, the watermark will be even less perceptible to humans, and even easier to detect. Eventually, the tool could even be included as a Chrome extension or built into the browser so it can identify generated images all over the web.
SynthID is rolling out in the familiar Google fashion: first, Google Cloud customers using the company's Vertex AI platform and Imagen image generator can embed and detect the watermark. Once the system has been tested in practice and refined, it will be made available more widely.
Other purposes
The watermark can also serve other purposes, such as verifying an original advertisement image, or verifying original product photos in a catalog that also contains AI-generated text and images. And, says Google, "when you're scanning tumors in hospitals, you really want to make sure it's not a synthetically generated image."
Ultimately, Google hopes that SynthID can become an internet-wide standard, but Google isn't the only company working on this. Just last month, Meta, OpenAI, and a number of other big AI names pledged to build more protection and safety systems into their AI. OpenAI, for example, already has a tool that can detect text written by its own ChatGPT chatbot. Many AI detection tools will likely hit the market before one becomes the standard, but Google is convinced that watermarks will be at least part of the solution on the web.
No doubt this will only inspire hackers and other developers to find more creative ways to get around the system. But as Google also says: “first prove that the fundamental part of the technology works.”