Generative AI's ability to create images is getting better and more accessible, but because these models are built on massive libraries of existing art, artists are scrambling for ways to prevent their work from being harvested without their permission. A new tool, ominously named Nightshade, could be the answer.
The trick involves optimized, prompt-specific “data poisoning attacks” that corrupt the training data when it is fed into an image generator.
“Poisoning has been a known attack vector in machine learning models for years,” Professor Ben Zhao told Decrypt. “Nightshade is not interesting because it does poisoning, but because it poisons generative AI models, which nobody thought was possible because these models are so big.”
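The article doesn't describe how such an attack works in practice, but the general idea behind prompt-specific poisoning can be sketched roughly as follows: perturb an image within a small pixel budget so that an encoder's embedding of it drifts toward an unrelated concept, while the picture still looks unchanged to a human. The sketch below is illustrative only, not Nightshade's actual method; the `StandInEncoder`, the `poison` function, and all parameters are assumptions, and a real attack would target whatever embedding model the generator is trained against.

```python
# Illustrative sketch of prompt-specific data poisoning, NOT Nightshade's
# actual algorithm (the article gives no implementation details).
# Idea: nudge an image's pixels, within a small budget, so a model's feature
# embedding drifts toward a different "target" concept while the image still
# looks unchanged to a human. The encoder here is a randomly initialized
# stand-in for whatever embedding model a real attack would target.
import torch
import torch.nn as nn


class StandInEncoder(nn.Module):
    """Placeholder image encoder, randomly initialized for illustration."""

    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x):
        return self.net(x)


def poison(image, target_embedding, encoder, eps=0.03, steps=100, lr=0.01):
    """Return a perturbed copy of `image` whose embedding is pulled toward
    `target_embedding`, with per-pixel changes clamped to +/- eps."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        emb = encoder((image + delta).clamp(0, 1))
        # Pull the poisoned image's embedding toward the target concept.
        loss = 1 - torch.nn.functional.cosine_similarity(emb, target_embedding).mean()
        loss.backward()
        opt.step()
        # Keep the perturbation visually imperceptible.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (image + delta.detach()).clamp(0, 1)


if __name__ == "__main__":
    encoder = StandInEncoder().eval()
    artwork = torch.rand(1, 3, 64, 64)          # stand-in for an artist's image
    target = encoder(torch.rand(1, 3, 64, 64))  # embedding of an unrelated concept
    poisoned = poison(artwork, target.detach(), encoder)
    print("max pixel change:", (poisoned - artwork).abs().max().item())
```

The point of the small perturbation budget is that the poisoned image remains visually indistinguishable from the original, so it can be posted and circulated normally before being scraped into a training set, where the shifted embeddings do their damage.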
Combating intellectual property theft and AI deepfakes has become crucial since generative AI models entered the mainstream this year. In July, a team of MIT researchers similarly suggested injecting small bits of code that would distort an image, rendering it unusable.
Generative AI refers to AI models that use prompts to generate text, images, music, or other media.
Author: Jason Nelson