Until the models are built to account for the kind of noise Nightshade introduces, and trained to counteract it, that is. It's the same cat-and-mouse game that malware makers play with antivirus vendors, or YouTubers with ad blockers. Sad to say, this will only make plagiarist AI stronger.
Exactly. Like, how hard would it be to reverse engineer the poison and create a reversal tool that applies the exact opposite modifications? Hell, I wouldn't be surprised if it could be defeated by something as simple as a little image compression or added noise.
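For what it's worth, the kind of naive countermeasure this comment speculates about is easy to sketch. The toy below is purely illustrative (the image, the perturbation size, and the `squeeze` helper are all made up for this example, not anything from Nightshade or its paper): it uses bit-depth reduction as a stand-in for lossy compression, and shows that a tiny per-pixel perturbation mostly disappears after quantization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a random "image" in [0, 1] and a small perturbation.
# Real poison is crafted, not random noise, so this only illustrates
# why quantization crushes sub-quantum changes, not that it beats Nightshade.
x = rng.random(size=(64, 64))
delta = 0.01 * rng.standard_normal(x.shape)
poisoned = np.clip(x + delta, 0.0, 1.0)

def squeeze(img, bits=4):
    """Reduce bit depth (a crude proxy for lossy compression)."""
    levels = 2 ** bits - 1
    return np.round(img * levels) / levels

# After squeezing, most pixels of the perturbed image land in the
# same quantization bin as the original.
agreement = np.mean(squeeze(poisoned) == squeeze(x))
print(f"pixels unchanged after squeezing: {agreement:.0%}")
```

Whether anything this simple actually survives against a real poisoning attack is exactly the cat-and-mouse question; the Nightshade authors design against naive filtering, so treat this as a sketch of the idea, not a working defense.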
University of Chicago boffins this week released Nightshade 1.0, a tool built to punish unscrupulous makers of machine learning models who train their systems on data without getting permission first.
"Nightshade is computed as a multi-objective optimization that minimizes visible changes to the original image," said the team responsible for the project.
Nightshade was developed by University of Chicago doctoral students Shawn Shan, Wenxin Ding, and Josephine Passananti, and professors Heather Zheng and Ben Zhao, some of whom also helped with Glaze.
"Nightshade can provide a powerful tool for content owners to protect their intellectual property against model trainers that disregard or ignore copyright notices, do-not-scrape/crawl directives, and opt-out lists," the authors state in their paper.
The failure to consider the wishes of artwork creators and owners led to a lawsuit filed last year, part of a broader pushback against the permissionless harvesting of data for the benefit of AI businesses.
Matthew Guzdial, assistant professor of computer science at the University of Alberta, said in a social media post, "This is cool and timely work!"