AI or DEI?
Not sure if someone else has brought this up, but this is because these AI models are massively biased towards generating white people so as a lazy "fix" they randomly add race tags to your prompts to get more racially diverse results.
Exactly. I wish people had a better understanding of what's going on technically.
It's not that the model itself has these biases. It's that the instructions given to it are heavy-handed in trying to correct for an inversely skewed representation bias.
So the models are literally instructed things like "if generating a person, add a modifier to evenly represent various backgrounds like Black, South Asian..."
Here you can see that modifier being reflected back when the prompt is shared before the image.
It's like an ethnicity Mad Libs the model is being instructed to fill out whenever it generates people.
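To make the mechanism concrete, here's a minimal sketch of the kind of prompt rewriting being described. Everything here is hypothetical: the actual instructions, trigger words, and modifier list used by any real system are not public, so the names and lists below are made up for illustration.

```python
import random

# Hypothetical modifier list; real systems' instructions are not public.
MODIFIERS = ["Black", "South Asian", "East Asian", "Hispanic", "white"]

# Hypothetical trigger words for "this prompt is about a person".
PERSON_WORDS = ("person", "man", "woman", "queen", "king", "soldier")

def augment_prompt(prompt: str) -> str:
    """If the prompt mentions a person, prepend a random ethnicity modifier."""
    if any(word in prompt.lower() for word in PERSON_WORDS):
        return f"{random.choice(MODIFIERS)} {prompt}"
    return prompt

print(augment_prompt("queen of England sitting on a throne"))
```

This also shows why the modifier gets "reflected back" when the prompt is shared: the image model only ever sees the augmented string, not what the user typed.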
I mean, I don't think it's an easy thing to fix. How do you eliminate bias in the training data without eliminating a substantial percentage of your training data, which would significantly hinder performance?
Didn't someone manage to leak one of the tags into the image once?
how about black nazis or female asian nazi soldiers?
With that sort of diversity, can we really say the Nazis were all that bad?
This is what the Nazis would've looked like if they were Asian or black
Why does the first guy remind me of patrick bateman
It's horrifically bad, even if not compared against other LLMs. I asked it for photos of actress and model Elle Fanning (aged 25 or so) on a beach, and it accused me of seeking CSAM... That's an instant never-going-to-use-again for me - mishandling that subject matter in any way is not a "whoopsie"
My purpose is to help people, and that includes protecting children. Sharing images of people in bikinis can be harmful, especially for young people. I hope you understand.
No no, you are the child in this context
But I'm a practicing non-contextualist!
That sounds more like what shall we ever do if children are allowed to see bikinis
Aaaaaand now you’re on a list through no fault of your own 😬
This just shows that AI sucks for getting accurate information. Even if it didn't hallucinate black people, it would've been just as wrong, just with white skinned queens. Now the lies just line up with "current social freakout of conservatives".
It really does not, even if you have a perfectly accurate model and ask it "draw an English queen, but make it ethnically diverse" this would still appear.
This is fucking ridiculous. This AI is the worst of them all. I don't mind it when they subtly try to insert some diversity where it makes sense but this is just nonsense.
They are experimenting and tuning. Apparently without any correction there is significant racist bias. Basically the AI reflects the long term racial bias in the training data. According to this BBC article it was an attempt to correct this bias but went a bit overboard.
PS: I find it hilarious. If anything it elevates the AI system to art, since it now provides an emotionally provoking mirror about white identity.
Significant racist bias is an understatement.
I asked a generator to make me a “queen monkey in a purple gown sitting on a throne” and I got maybe two pictures of actual monkeys. I even tried rewording it several times to be a real monkey, described the hair and everything.
The rest were all women of color.
Very disturbing. Pretty ladies, but very racist.
Apparently without any correction there is significant racist bias.
This doesn't make it any less ridiculous. This is a central pillar of this kind of AI tech, and they're trying to shove a band aid over the most obvious example of it. Clearly, that doesn't work. It's also only even attempting to fix one of the "problems" - they're never going to be able to "band aid" every single place where the AI exhibits this problem, so it's going to leave thousands of others un-fixed. Even if their band aid works, it only continues to mask the shortcomings of this tech and makes it less obvious to people that it's horrendously inaccurate with the other things it does.
Basically the AI reflects the long term racial bias in the training data. According to this BBC article it was an attempt to correct this bias but went a bit overboard.
Exactly. This is a core failing of LLM tech. It's just going to repeat all the shit that was fed to it. You're never going to fix that. You can attempt to steer it in different directions, but the reason this tech was used was because it is otherwise impossible for us to trudge through all the info that was fed to it. This was the only way to get it to "understand" everything. But all of its understandings are going to have these biases, and it's going to be just as impossible to run through and fix all of them. It's like you didn't have enough metal to build the Titanic so you just built it out of Swiss cheese and are trying to duct tape one hole closed so it doesn't sink. It's just never going to work.
This being pushed as some artificial INTELLIGENCE is the problem here. This shit doesn't understand what it's doing, it's just regurgitating the things it's consumed. It's going to be exactly as flawed as whatever was put into it, and you can't change that. The internet media it was trained on is racist, biased, full of undeniably false information, and massively swayed by propaganda on all sides of the fence. You can't expect LLMs to do anything different when trained on that data. They're going to have all the same problems. Asking these things to give you any information is like asking the average internet user what the answer is. And the average internet user is not very intelligent.
These are just amped up chat bots with data being sourced from random bits of the internet. Calling them artificial INTELLIGENCE misleads people into thinking these bots are smart or have some sort of understanding of what they're doing. They don't. They're just fucking internet parrots, and they don't have the architecture to be "fixed" from having these problems. Trying to patch these problems out is a fool's errand and only masks their underlying failings.
Wow I had no idea there was such diversity in the British ruling class from that period! /s
Yes who can forget about Henry the Magnificent and his onion hat.
It's literally instructed to do Mad Libs with ethnic identities to diversify prompts for images of people.
You can see how it's just inserting the ethnicity right before the noun in each case.
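That "insert right before the noun" pattern can be sketched in a few lines. Again, this is purely illustrative: the ethnicity list, the noun list, and the single-substitution behavior are assumptions made up for the example, not the real system's rules.

```python
import random
import re

# Hypothetical lists; the real system's instructions are not public.
ETHNICITIES = ["Black", "South Asian", "Indigenous American", "white"]
PERSON_NOUNS = r"\b(queen|king|soldier|ruler|person|man|woman)\b"

def diversify(prompt: str) -> str:
    """Insert a random ethnicity directly before the first person noun."""
    return re.sub(
        PERSON_NOUNS,
        lambda m: f"{random.choice(ETHNICITIES)} {m.group(1)}",
        prompt,
        count=1,
    )

print(diversify("a medieval queen of England"))
```

Substituting on the first matching noun is what would make the modifier land exactly where people are seeing it in the shared prompts.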
It was a very poor alignment strategy. This already blew up for DALL-E. Was Google not paying attention to their competitors' mistakes?
Wonder if you would get white rulers if you asked for historical leaders in Africa
Edit:
The prompt was "make me a funny picture I can bait people into arguing over"
Please note that the prompt says "queens of England" very clearly, which turns it into a glorified Google image search, so the results are unacceptable trash, and vaguely leftist language about people being angry at the lack of racism is your problem, only. Fuck off, troll.
The real issue is that even with a hand-holding, direct, and easy prompt, the tech cannot simply hand over pictures, even generated ones to avoid copyright issues, that come from easily discovered answers on Wikipedia and who knows how many other credible sources. The lineage of the British Royal Family is all but open-source data - probably is, literally - and your mom can probably name three queens offhand even though she's Canadian. This thing completely ate shit on an easy, easy prompt.
I don't know how many times now I've seen some YouTuber use "evil Jerome Powell" as a prompt for a thumbnail, and get a clear picture of him complete with devil horns, copyright be damned, so what the f? The AI isn't this stupid, that means they're nerfing it and screwing it up. You best believe they're still selling it, though.
What other results will it comically fuck up, but you don't have the knowledge to critique? You won't see the results, either, somebody else will use them to judge your resume; IS using them, now. Fucking lazy hiring managers are going to just plug your name into this thing and ask for a synopsis of your life so they don't have to work. It will just fill in missing information with lies, and they'll eat it up. I guess you shot two people a couple of years ago and didn't know about it. I wonder why you didn't get the job?
People have been crazy dumb with this AI, meaning young, smart, tech-savvy people with heavy internet backgrounds who should know better than to trust it keep treating it like an oracle, because they have some weird blind spot about this technology. Ignorant executives who think math is for slurs are going to make it do everything.
They're going to use this technology to decide who gets an apartment, who gets arrested, and a bunch of other shit, save your leftism for that.
AI so bad there's no Elizabeth I. Lol 🍑☁️
It is ridiculous. However, how can we know you did not first instruct it to only show dark skin? Or select these from many examples that showed something else?
This issue is widely reported and you can check the AI for yourself to confirm.
It's also like, I guess I would prefer it to make mistakes like this if it means it is less biased towards whiteness in other, less specific areas?
Like, we know these models are dumb as rocks. We know that they are imperfect and that they mirror the biases of their trainers and training data, and that in American society that means bias towards whiteness. If the trainers are doing what they can to prevent that from happening, whatever, that's cool... even if the result is some dumb stuff like this sometimes.
I also don't think it's a problem for the user to specify race if it matters? Like "a white queen of England" is a fine thing to ask for, and if it isn't specified, the model will include diverse options even if they aren't historically accurate. No one gets bent out of shape if the outfits aren't quite historically accurate, for example
The problem is that these answers are hugely incorrect, and if a child learning about the history of England saw this, they would come away believing England was always diverse.
The same is true for a recent post, where people who know nothing about Scottish history could learn from the images that half of Scotland's population in the 18th century was black.
So from my perspective these images are just completely wrong and it should be fixed.
Also if you want diversity, what about handicapped people?
Not sure why you're getting downvoted. The user essentially asked for the AI to generate some random made up rulers of England. Might as well have asked it for new Game of Thrones characters for all the difference it would have made. These are not real people so it, quite correctly, threw in a whole load of mixed races because why wouldn't it? No idea why people are getting bent out of shape over someone doing a poor job of assigning prompts.
Is there some preview version of Gemini Ultra that can generate images or what gives?
Based on another comment here, looks like they turned off image generation recently
I know that the 23-year reign of Renaissance Ruler is mired in controversy, but you have to admit that without her, England would never have conquered Redding.
You can get around it by clicking the drafts button. It shows you the images generated as drafts but not actually published to you as results.
Just current BBC live action casting policy believe it or not.
It's always telling when you see people online who have a huge problem when AI generators aren't racist or attempt to avoid racism.
It's almost like they see racism in technology as a sort of affirmation.
I'm not sure just giving false history is anti-racist. It's usually the racist side that tries to do that, really.
If you want accurate history photos then I think you should ask a real artist to make you the picture and not the mindless machine we've only recently tricked into drawing
You mix up anti racist with factual, that's two different things
And how do we know you didn't crop out an instruction asking for diversity?
Either that or a side effect of trying to have less training data bias.
OpenAI also does this with its image generator, but apparently not to such a powerful degree.