Deepfakes remain a pressing concern for law enforcement and cybersecurity experts, and the United Nations has sounded the alarm over their role in spreading hate and misinformation online. A research team at MIT now says it has developed a novel defense against the weaponization of real photos.
During a presentation at the 2023 International Conference on Machine Learning on Tuesday, the researchers explained that small alterations to an image's underlying data can cause meaningful distortions in the AI-generated images derived from it.
The team specifically proposed mitigating the risk of deepfakes created with large diffusion models by adding tiny perturbations, or “attacks,” to images. These changes are hard to see but alter how the models process the image, causing them to generate outputs that don’t look real.
“The key idea is to immunize images so as to make them resistant to manipulation by these models,” the researchers said. “This immunization relies on the injection of imperceptible adversarial perturbations designed to disrupt the operation of the targeted diffusion models, forcing them to generate unrealistic images.”
Such an encoder attack would theoretically derail the entire diffusion-generation process.
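For the technically curious, an encoder attack of this kind boils down to projected gradient descent (PGD) on the input image: the pixels are nudged, within a tiny budget, so that the diffusion model's image encoder maps the photo to a useless target representation. The PyTorch sketch below is a minimal illustration of that idea, not the MIT team's released code; the `encoder` module, `target_latent`, and the hyperparameters are all stand-ins.

```python
import torch

def immunize(image, encoder, target_latent, eps=8/255, step=1/255, iters=100):
    """PGD-style 'encoder attack': perturb `image` (a tensor with values in
    [0, 1]) so that the diffusion model's image encoder maps it close to a
    useless target latent. `encoder` and `target_latent` are hypothetical
    stand-ins for the real model's components."""
    adv = image.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        # Measure how far the encoding of the perturbed image is from the target.
        loss = torch.nn.functional.mse_loss(encoder(adv), target_latent)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv - step * grad.sign()                # descend on the loss
            adv = image + (adv - image).clamp(-eps, eps)  # keep perturbation imperceptible
            adv = adv.clamp(0.0, 1.0)                     # stay a valid image
    return adv.detach()
```

The small `eps` budget is what keeps the immunized photo visually indistinguishable from the original while still confusing the model downstream.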
Author: Jason Nelson