With deepfakes spreading rapidly and bringing significant risks, from the creation of nude images of minors to scams that use celebrity deepfakes for fraudulent promotions, the ability to distinguish AI-generated content (AIGC) from human-created work has never been more crucial.
Watermarking, a common anti-counterfeiting measure seen in documents and currency, is one way to identify such content: information is embedded in an image to differentiate an AI-generated image from a non-AI-generated one. But a recent research paper concluded that watermarking methods, whether simple or advanced, may not be enough to prevent the risks of passing AI material off as human-made.
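To illustrate why a naive watermark is fragile, here is a minimal Python sketch, assuming numpy, of least-significant-bit (LSB) embedding. This is an illustrative toy, not the method studied in the paper: a simple re-quantization step, of the kind a lossy re-encode performs, wipes the mark out.

```python
# Toy LSB watermark sketch (illustrative only, not the paper's method).
# The images, bit pattern, and "attack" below are hypothetical examples.
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit pattern in the least significant bit of each pixel."""
    flat = image.flatten().astype(np.uint8)   # flatten() copies, original untouched
    n = min(len(flat), len(bits))
    flat[:n] = (flat[:n] & 0xFE) | bits[:n]   # overwrite the LSB with the mark
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, n: int) -> np.ndarray:
    """Read back the first n hidden bits."""
    return image.flatten()[:n] & 1

# Watermark a random 8x8 grayscale "image" with a known 64-bit pattern.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
mark = rng.integers(0, 2, size=64, dtype=np.uint8)

marked = embed_lsb(img, mark)
assert np.array_equal(extract_lsb(marked, 64), mark)  # survives a clean copy

# A trivial "attack": re-quantize pixels, as a lossy re-encoding would.
attacked = (marked // 4 * 4).astype(np.uint8)
recovered = extract_lsb(attacked, 64)
print("bits surviving attack:", np.mean(recovered == mark))  # ~0.5, chance level
```

Real systems embed marks far more robustly than this toy does, but the paper's concern is the same in spirit: if an attacker can remove or forge the mark, the watermark no longer proves anything about an image's origin.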
The research was conducted by a team of scientists from S-Lab at Nanyang Technological University (NTU), Chongqing University, Shannon.AI, and Zhejiang University.
One of the authors, Li Guanlin, told Decrypt that “the watermark can help people know if the content is generated by AI or humans.” But, he added, “If the watermark on AIGC is easy to remove or forge, we can f[…]
Author: Jose Antonio Lanz