That's great! These things are super fun. Just don't call yourself an artist or try to copyright your generations. That's like pretending to be a musician because you're good at Guitar Hero.
Aw, that's cute, a drummer thinks he's a musician too? (I kid, that's a running joke in music circles; percussionists are definitely musicians, we'd be lost without them.) That's awesome! I suppose expert drumming in Rock Band would be a lot like the real thing, and a program like that would probably work as a great drum trainer on a real set.
So did I, and I didn't even know I could play until years later, when I sat down at a friend's kit for a lesson with them. They basically talked me through the setup, gave me a song to play, and I just played the opening without much fuss. They told me I didn't need the lesson, that I could already play and just needed time on the kit, then left the room and let me go ham.
Honestly, who cares about being an artist? There are always going to be snobs trying to tear you down or devalue your efforts. No one questions whether video games are art anymore, but that took like twenty years after people began seriously pushing the subject. The same thing happened with synthesizers and samplers in the 1980s; there are fewer working drummers today as a result, but without those tools we would not have hip hop or house, and that would have been a huge cultural loss.
Generative art hasn't found its Marley Marl or Frankie Knuckles yet, but they're out there, and they're going to do stuff that will blow our minds. They didn't need to be artists to change the world.
Image generators don't produce anything new, though. All they can do is iterate on previously sampled works that have been broken down into statistical arrays, then produce output based on the probabilities that best match your prompt. They're a fancier Gaussian Blur tool that can collage. To carry over your music comparison: they're making songs that are nothing but samples of other music used without permission, without a single original note in them, and companies are selling the tool for profit while the people using it claim they wrote the music.
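To make the blur comparison concrete, here's a minimal sketch of what a Gaussian Blur tool actually does (just a toy NumPy version on a made-up grayscale image, not any particular editor's implementation): every output pixel is a weighted average of input pixels, so the output is entirely a function of what you feed in.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # Build a normalized 2D Gaussian kernel.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return kernel / kernel.sum()

def gaussian_blur(image, size=5, sigma=1.0):
    # Every output pixel is a weighted average of its neighbors:
    # the result is completely determined by the input image and the kernel.
    kernel = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
    return out

# Toy example: blur an 8x8 grayscale "image".
img = np.random.rand(8, 8)
blurred = gaussian_blur(img, size=5, sigma=1.0)
```

Swap the kernel or the input and the output changes accordingly; there's no step in there where anything comes from outside the inputs.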
Also, people absolutely do still argue that video games aren't art (and they're stupid for it), and it takes tons of artists to make games. The first thing they teach you in 3D modeling is to pick up a pencil and do life drawing and color theory.
The issue with generative AI isn't the tech. Like your examples, the tech is just a tool. The issues are the wage theft and copyright violations: using other people's work without permission and taking credit for it as your own. You can't remix five songs into one and then claim the result as your own original work. And neither should a company be allowed to sell a sampler preloaded with music used without permission and make billions in profit doing so.
Have you ever heard the saying that there are only 4 or 5 stories in the world? That's basically what you're arguing, and we're getting into heavy philosophical areas here.
The difference is in the process. Anybody can take a photo, but it takes knowledge and experience to be a photographer. An artist understands concepts in the way that a physicist understands the rules that govern particles. The issue with AI isn't that it's derivative in the sense that "everything old is new again" or "nature doesn't break her own laws"; it's derivative in the sense that it merely regurgitates a collage of vectorized arrays of its training data.

Even somebody who lives in a cave would understand how light falls and could extrapolate that knowledge to paint a sunset if you told them what a sunset is like. Given A and B, you can figure out C. The image generators we have today don't understand how light works, even with all the images on the internet to examine. They can give you sets of A, B, and AB, but never C.

If I draw a line and then tell you to draw a line, your line and my line will be different even though they're both lines. If you tell an image generator to draw a line, it'll spit out what is effectively a collage of lines from its training set.
And even this would only matter insofar as prompters claim to be artists because they wrote the phrase that caused the tool to generate an image. But we live in a world where we must make money to live, and the way the companies behind these tools operate amounts to wage theft.
AI is like a camera. It's a tool that will spawn entirely new genres of art and be used to improve the work of artists in many other areas. But like any other tool, it can be put together and used ethically or unethically, and that's where the issues lie.
AI bros say that it's like when the camera was first invented and all the painters freaked out. But that's a strawman. Artists are asking, "Is a man not entitled to the sweat of his brow?"
I recommend reading this article by Kit Walsh, a senior staff attorney at the EFF, and this one by Cory Doctorow. You're misrepresenting how these systems actually work. I'd like to hear your thoughts.
Copyright is a whole mess and a dangerous can of worms, but before I get any further, I just want to quote a funny meme: "I'm not doing homework for you. I've known you for 30 seconds and enjoyed none of them." If you're going to make a point, give the actual point before citing sources because there's no guarantee that the person you're talking to will even understand what you're trying to say.
Having said that, I agree that anything around copyright and AI is a dangerous road. Copyright is extremely flawed in its design.
I compare image generators to the Gaussian Blur tool for a reason: both are tools whose output is determined algorithmically by their inputs - your prompt and the training set, in this case. And like any other tool, the work it produces on its own is derivative of all the works in its training set, so the burning question comes down to whether that training data was ethically sourced, i.e., used with permission. In other words, did the companies behind the tool have the right to use the images they did, and how do you prove it?

I'm a fan of requiring generators to list the works in their training data somewhere - basically, a licensing system similar to the one open source software uses. That way, people could openly license their work for use (or not) and would have a legal means of proving whether their work was used without permission. Some companies are actually moving to commissioning artists to create works specifically for their training sets, and I think that's great.
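To show what I mean by listing the works, here's a rough sketch of the kind of manifest I'd want generators to publish. Every name, URL, field, and license string here is purely hypothetical; the point is just that each training work gets an entry stating who made it and under what terms it was used.

```python
import json

# Hypothetical training-data manifest: one entry per work in the training set,
# recording creator, source, and the license/permission it was used under.
# All names, URLs, and license strings below are made up for illustration.
manifest = {
    "model": "example-image-generator-v1",
    "training_data": [
        {
            "work": "sunset_study.png",
            "creator": "Example Artist",
            "source": "https://example.com/sunset_study",
            "license": "CC-BY-4.0",
            "permission": "openly licensed for training",
        },
        {
            "work": "commissioned_set_001.png",
            "creator": "Example Studio",
            "source": "private commission",
            "license": "work-for-hire",
            "permission": "commissioned specifically for training",
        },
    ],
}

# A creator could search a published manifest like this to check
# whether their work was included and on what terms.
print(json.dumps(manifest, indent=2))
```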
AI is a tool like any other, and like any other tool, it can be made using unethical means. In an ideal world, it wouldn't matter because artists wouldn't have to worry about putting food on the table and would be able to just make art for the sake of following their passions. But we don't live in an ideal world, and the generators we have today are equivalent to the fast fashion industry.
Basically, I ask, "Is a man not entitled to the sweat of his brow?" And the AI companies of today respond, "No! It belongs to me."
There's a whole other discussion to be had about prompters who claim they created the works these tools generate, and how similar that attitude is to corporate middle managers taking credit for the work of the people under them, but that's a discussion for another time.
Works should not have to be licensed for analysis, and Cory Doctorow very eloquently explains why in this article. I'll quote a small part, but I implore you to read the whole thing.
This open letter by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries, further expands on the pitfalls of this kind of thinking and the implications for broader society. I know it's a lot, but these are wonderfully condensed explanations of the deeper issues at hand.