
Ok but what is an actually good solution to the whole AI art debacle?

I heard about C2PA and I don't believe for a second that it's not going to be used for surveillance and all that other fun stuff. What's worse is that they're apparently trying to make it legally required. It also really annoys me when I see headlines along the lines of "Is AI the end of creativity?!1!" or "AI will help artists, not hurt them!1!!" or something to that effect. So it got me thinking, and I tried to come up with some answers that actually benefit artists and their audience rather than just you-know-who.

Unfortunately, my train of thought keeps barreling out of control to things like "AI should do the boring stuff, not the fun stuff" and "if people didn't risk starvation in the first place..." So I thought I'd find out what other people think (search engines have become borderline useless, haven't they?).

So what do you think would be the best way to satisfy everyone?

62 comments
  • Universal basic income.

    EDIT: I feel I need to clarify this stance. I'm a game developer, and I'm also sometimes an artist. I've been in discussions with my artist friends and they're of the belief that AI is the enemy. I don't believe that, but I also don't blame them; artists' careers are being displaced by AI, so of course they are justified in directing their anger at the immediate threat to their livelihoods. As for the ethical/legal quandary about AI art models being trained on publicly-available copyrighted images, well, I think that's more of a grey area that I should not delve into right now.

    AI art is inevitable, and there is no feasible way to slow it down, let alone stop it; even if the developed Western world decided unanimously to ban AI art, black markets for AI art models would still exist, and it would give Eastern countries (okay, mainly just China) the opportunity to really corner the market and further develop the technology for various (read: nefarious) purposes.

    Every industry succumbs to automation sooner or later. I mean, it makes perfect economic sense to adopt automation to replace the existing workforce; robots don't need to get paid. However, normally, when automation is introduced into an industry, the pace at which that technology is developed and implemented is slow enough for displaced workers to be able to learn new trades and pick up new careers before they, well, go broke.

    The issue is that in this information era, AI technology is developing far too fast for that to happen. Couple that with the ever-increasing greed of today's capitalism, which ensures workers (including artists) work long hours for barely enough pay to survive, and they simply don't have the time, energy, or funds to learn new trades or move into creative endeavours that AI hasn't taken over yet.

    We need a return to an era where art is not a means of livelihood but an admired craft, pursued simply for the sake of art; that also comes with the bonus of lowering the worth of AI art. For all that to be possible, artists first need a way to stop worrying about making enough money to afford food and housing.

    That's where UBI comes in. Again, capitalism, the richest 1% hoarding 50% of the world's wealth, yada yada; just tax those rich fucks and give that money to everyone; boom, UBI. Once artists (and workers in general) no longer face the threat of starvation: 1. they can take all the time they need/want to learn new crafts and careers; 2. they can force corporations to offer workers better treatment and pay, especially when they can afford to quit toxic work environments.

  • Focus on physical art. Sculpture, screen prints, paintings on canvas, mixed media, etc.

    Trying to stop progress has never worked. Sure, there are regulations that should be put in place, but putting the genie back in the bottle isn't going to happen.

  • There is no easy solution, hence companies pushing extreme tactics like C2PA. I'm not at all convinced by it, because people will definitely figure out how to fake image metadata. It'll also take ages to become a standard, and it can't be a worldwide standard when you know some people will reject it.

    Realistically, the best option is accepting that AI art is a thing and, at the very least, making sure it stays open source. It would be terrible if only large companies had access to this tech. Meanwhile, artists can record something like a timelapse video to prove their art is genuine. Some people are also guessing that, with the rise of "artificial" art, IRL art like theatre, statues, and paintings will become far more valuable.

    • I don't like to be a pessimist, but as a musician and writer, having seen the value of my work steadily decrease for years before AI became mainstream, I don't see how "real" art will become more valuable. Maybe on an individual/personal level, but not in the industry as a whole. Especially once an untrained person can't tell the difference between AI and "handmade" art.

    • At a certain point the AI could probably generate a convincing timelapse too. Just thinking out loud; it really is a tricky problem.

    • Considering the amount of processing power needed to make a decent AI model, I'm pretty sure it's already solely controlled by large companies. Plus, if it becomes legally required then people can't exactly reject it.

      In my personal opinion, I don't think AI art is inherently bad and I'd put it on the same level as that particular style of soulless corporate art. I'm confident that people who actually care about the quality of whatever it is they're making will commission real artists. And the existence of AI art wouldn't take away the enjoyment of creating something with your own hands. But I'm not a professional artist so I think my opinion is irrelevant anyway. If actual artists have a problem with it, then it needs to be addressed.

      While I mostly agree with you that there's no way most people would be on board with C2PA, it's an entirely different matter if it becomes legally required. I don't know how likely that is, but it doesn't seem impossible.

      (Also the impersonation argument feels contrived to me. Just get your info from the source 4Head)

      Well, it just bothers me that I know many people who still think art and other creative pursuits should be relegated to hobby status and I should get a "real" job. And the fact that AI is doing things that humans are supposedly meant to do for fun just doesn't sit right with me.

      • Considering the amount of processing power needed to make a decent AI model, I’m pretty sure it’s already solely controlled by large companies.

        Well, yeah, but not exactly. The best image-generation model is Stable Diffusion, which is open source. There's a huge open-source community vastly improving image generation and creating amazing features for it at a pace that outstrips the companies. Stable Diffusion XL is almost done and is also set to be open source. There's a big push for open-source language models too, but we're not there yet.

        the fact that AI is doing things that humans are supposedly meant to do for fun just doesn’t sit right with me.

        I totally agree, but sadly it's where technology has led us. It turns out that making image/text/software-generating AIs is much easier than building robots that automate the boring stuff; physical robots just aren't there yet. I don't think computer scientists set out to destroy art; it was more "this seems like a logical next step that AI can do".

        The big problem lies in money. Millions of people will lose their jobs to AI advances quicker than they think, and it'll be a slow transition before we can create an economic system that can sustain them. The "just get a real job" crowd are in for a rude awakening when they realize there will be no "real jobs" left 10 years from now.

        But ah well, I'd encourage people to just enjoy the rollercoaster ride and see how it goes rather than shouting at computers.
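
The worry upthread about people faking or shedding image metadata is easy to make concrete: in a PNG, textual/EXIF-style metadata rides in ancillary chunks, and rewriting the file with only the critical chunks silently discards it. A minimal stdlib-only sketch (the `Provenance` record is a made-up stand-in; C2PA's actual manifest embedding differs, but stripping works much the same way):

```python
# Sketch: metadata in a PNG lives in ancillary chunks; rewriting the
# file with only critical chunks silently drops it.
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def make_chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, data, CRC32 of type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def strip_ancillary(png: bytes) -> bytes:
    """Keep only critical chunks (type starts uppercase: IHDR, PLTE,
    IDAT, IEND); drop tEXt/iTXt/eXIf etc., where metadata rides."""
    assert png[:8] == PNG_SIG, "not a PNG"
    out, pos = [PNG_SIG], 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        end = pos + 12 + length  # 4 length + 4 type + data + 4 CRC
        if ctype[:1].isupper():  # critical chunk: keep it
            out.append(png[pos:end])
        pos = end
    return b"".join(out)

# Build a minimal 1x1 grayscale PNG carrying a fake "Provenance" record.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
idat = zlib.compress(b"\x00\x00")  # filter byte + one pixel
png = (PNG_SIG
       + make_chunk(b"IHDR", ihdr)
       + make_chunk(b"tEXt", b"Provenance\x00signed-by-someone")
       + make_chunk(b"IDAT", idat)
       + make_chunk(b"IEND", b""))

clean = strip_ancillary(png)
print(b"tEXt" in png, b"tEXt" in clean)  # True False
```

The resulting file is still a valid, renderable PNG; only the provenance record is gone, which is why metadata-based schemes need out-of-band enforcement to mean anything.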

  • As another user pointed out, as long as capitalism has control, there will be no good solution.

    I think about it this way: does knowing art is AI generated take away from the experience?

    • If the answer is yes, then it simply will never take over. As long as we have some sort of law requiring AI-generated art to be tagged as such, I think that would be enough. No need to tag original (human) art with anything, no need for that kind of surveillance; just tag AI art, or make companies legally required to disclose it.
    • If the answer is no, then I think this is just the natural progression of things. There will always be artists, and there will always be people that want their art to be made by a person. But if most people really don't care if there's a person behind the brush, then it doesn't matter if it's AI or not.

    I don't think anyone has an inherent right to make a living from what they do. If you're an artist, that's all good, but if your art isn't appreciated, if people prefer AI over your art, then why should we block AI? Just so you can keep making money off your drawings? There are other things you could be doing... Once again, capitalism makes it so that hobbies can't just be hobbies ("if you're not making money, you're failing"), so this isn't a very satisfying perspective, but the reality is that you don't need to be just an artist; you can have a job and draw for fun, post things online, etc.

    Ideally, we tell capitalism to fuck off. We already produce enough to feed, clothe, house, and heal the whole world; the only reason we don't is that the oligarchs choose not to, because it isn't profitable. If we set up a system that actually makes use of the value generated by labour, instead of letting the 1% hoard it, then AI wouldn't threaten anyone's job security, and it wouldn't stifle creativity; it would just be a tool.

    There is the issue of copyright though, since original works are used to train AI. That whole debacle is a can of worms that I will not open.

  • It’s a very difficult topic, and I don’t see any satisfying real-world solutions. Two big issues:

    1. Obvious solutions are impossible. Generative AI is impossible to "undo". Much of the basic tech, and many simpler models, are spread far and wide. Research, likewise, is spread out both globally and across levels, from large megacorps down to small groups of researchers. Even severe attempts at restricting it would, at most, punish the little guys.

    I don’t want a world where corporations like Adobe or Microsoft hold sole control over legal, "ethically trained" generative AI. However, that is where insistence on copyright for training sets, or on censored "safe" LLMs, would lead us.

    2. Many of the ethical and practical concerns sit on sliding scales, and right at the edges of those scales. When does machine assistance become unethical? When does imitating the specific style of an artist become wrong? Where does inspiration end and intellectual-rights infringement begin? At what point does reducing racial and other biases in LLMs tip over into turning them into biased propaganda machines?

    There are dozens of questions like these, and I have found no satisfying answers to any of them. Yet the answers to some of them are required in order to produce reasonable solutions.

  • I had 85+ images of my art used to train AI. I think the best solution is for the current AI image training sets to be cleared and rebuilt on copyright-free and opt-in-only content, similar to stock photography, where artists can decide for themselves whether they want minimal compensation for contributing their art to the training set. This is necessary because once a system has been trained on an image, it's in the memory; the only way to respect artists' rights after the fact is to wipe the computers and start image generation over again, ethically this time.

    I have linked on my Mastodon https://www.youtube.com/live/uoCJun7gkbA?feature=share a senate hearing on the issue, in which the lawyer from Universal Music perfectly pointed out that "it'd be hard to opt out if you don't know what has been opted in". Additionally, this isn't just an artist issue: the training set includes photos from medical records, schools, and personal albums. Basically, if you've ever posted a photo on the internet, there's a chance it's in the training set. "Have I Been Trained" is a website where you can see what is included and opt out (though, as mentioned earlier, that's not a good solution).

    I spoke to a prominent IP lawyer in Chicago (before the class-action lawsuits were public) and he pointed out that they didn't have the right to reproduce my artwork into their training set. Their actions have been likened to a smoothie shop: they have the storefront and the blenders, but they stole all the ingredients. After it's blended you may not ALWAYS be able to recognize the strawberries, BUT we know they didn't pay for the fruit. It was stolen for their profit. Why should I be forced to provide the core product of my business to develop the core product of another (for-profit) business?

    The senate hearing linked above includes many other important and valid points. Myself and many other artists I know aren't against AI. I love tech and think it's really fun and can be helpful; it just needs to be done ethically. I have a lot more I could add to this, hahahaha

  • AI needs data to train up on. It can't create art without first consuming existing art and spitting out parts of the originals. There's a reasonable claim to be made that AI synthesis of prior art is itself original enough to count as having intrinsic worth, but if the only way to get it is stealing other people's work to train up your model, the whole value proposition of AI art is probably net negative, entirely at the expense of artists whose work was used to feed the model.

    Yes, there's the argument that automation of new things is inevitable, but we do have choices about whether the automated violation of copyright to feed the model is tolerable. Sure, it's a cool, sexy technology, and who doesn't love getting on the bandwagon of the future, but the ethics of modern AI development are trash. Despite promises that automated AI labor will save the owner class money by doing for free what the plebes demand to be paid for, it's really as much a Ponzi scheme as all those cryptocurrencies that have no intrinsic value unless enough suckers can be convinced to feed the scheme.

    And yet, it's a powerful technology that has potential to be a legitimate boon to humanity. I'd like to see it used to do things (like picking crops that are hard to automate with dumb machines, or cleaning trash off of beaches or out of the ocean, or refactoring boilerplate code to not use deprecated packages or to review boilerplate contract text for errors) that aren't just ways for owners to cut labor out of the economy and pocket the differences.

    Perhaps, if we are going to allow AI to be a great big machine that steals inputs (like art, or writing, or code) from others and uses them to do for-profit work for their owners, the proceeds attributable to AI ought to be taxed at a 90%+ rate and used to fund a Universal Basic Income as payment for the original work that went into the AI model.

  • AI art is art, period. Just like with any method of creation, there will be good and bad AI art, and as with any method of making art, there is human input and intention behind it. The internet is chock-full of same-looking fan drawings of popular characters: everyone can pick up a pencil and do a 15-minute sketch of the Joker, or grab a camera, shoot a landscape, and upload it to DeviantArt. Same for boring, uninspiring, mass-produced commercial art.

    Fundamentally, generative neural networks are no different from "oldschool" procedural-generation tools like Mandelbulb3D or Terragen, both of which I have tinkered with a lot in the past. With AI you generate from a verbal prompt; with "oldschool" generative processes you use numerical inputs or different math formulas.

    As for AI somehow "stealing" art: every artist who studies the works of other artists to learn how to make good art is "stealing", then. At the end of the day, a human brain is essentially a neural network trained on various inputs; no input, no output other than random noise. From my own tinkering with digital art a decade ago: one of my Mandelbulb renders that was well received on DA (it even ended up in a curated collection) was based on someone else's input formula, which I tweaked heavily and rendered with different parameters; I'm sure someone else then took my version of that formula and made their own.

    That's the nature of art: nothing is created in a vacuum; nothing is original. Every artist "steals". Those who claim otherwise, who believe art should never draw from other art, are either delusional or pretentious elitists. Or lawyers.
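
The "numerical input or math formula" style of generation mentioned above can be sketched in a few lines with an escape-time fractal, the 2D cousin of what tools like Mandelbulb3D render in 3D. A toy illustration under my own made-up parameters, not related to any actual tool's code:

```python
# Sketch: "oldschool" generative art driven by a numeric formula rather
# than a text prompt -- a tiny escape-time (Mandelbrot-style) renderer.
def escape_time(c: complex, max_iter: int = 30) -> int:
    """Iterate z -> z*z + c; return how quickly the orbit escapes."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c  # the "input formula": tweak it and the art changes
        if abs(z) > 2.0:
            return i
    return max_iter

def render(width: int = 48, height: int = 18) -> str:
    """Map escape counts to ASCII shades over a window of the complex plane."""
    shades = " .:-=+*#%@"
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            # Numeric parameters (the window bounds) are the "prompt" here.
            c = complex(-2.2 + 3.0 * x / width, -1.2 + 2.4 * y / height)
            n = escape_time(c)
            row.append(shades[min(n * (len(shades) - 1) // 30, len(shades) - 1)])
        rows.append("".join(row))
    return "\n".join(rows)

print(render())
```

Swapping the formula inside `escape_time`, or just the window bounds, yields a different image, which is exactly the "tweaked someone else's input formula" workflow described above.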

  • I think funding for the arts should be less directly connected to the individual works artists create. One side of this is abolishing intellectual property rights; the other is a proper UBI, plus art/culture endowments and alternative revenue structures.

    How to get there without a bit of civil war, I'm not so sure.
