The Google device was able to deliver a detailed description of “The Nakba.”
Google is coming in for sharp criticism after a video went viral of the Google Nest assistant refusing to answer basic questions about the Holocaust, while having no problem answering questions about the Nakba.
Google and its parent company Alphabet have long come in for criticism for developing products that push social justice absolutism. In February, their AI platform Gemini was mocked for generating comically woke creations, including a woman as pope, black Vikings, female NHL players and “diverse” versions of America’s Founding Fathers, not to mention black and Asian Nazi soldiers.
Why do I click on NYPost links? Smh
On a serious note, this is a bad look. Google claims it wasn’t a universal issue and that it’s been fixed, so we’ll probably never know the scope or why it only happened with the word “Jew”. Maybe it didn’t recognize religions and only demonyms.
Given you're one of the more rational commenters on Lemmy I've seen, you might be interested in why this is such an issue.
Large language models are stochastic: their output can vary randomly, but only among roughly equally probable things to say. If you ask "where are we going to go on this sunny day," it might answer "the beach" one time and "a park" another.
But when options are not equally probable in the training data, then because the model has no memory between invocations, it ends up collapsing on the most likely answer - which is, after all, what it was trained to predict.
For example, if you ask Google's LLM to give you a random number between one and ten, you'll get the number seven every single time. Humans are biased toward the number 7 (followed by 3) over numbers like 4, and the model picks up that pattern. Since it has no memory between invocations, it goes with the most represented option and doesn't vary it at all across initial requests (it will vary once there's a chat history, though).
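To make the collapse concrete, here's a toy sketch in Python (nothing to do with Google's actual model; the distribution below is invented for illustration) contrasting greedy decoding, which returns the mode every time, with temperature sampling, which varies:

```python
import random
from collections import Counter

# Invented next-token distribution for "pick a number between 1 and 10",
# loosely mirroring the human bias described above (7 first, then 3).
probs = {1: 0.05, 2: 0.05, 3: 0.18, 4: 0.03, 5: 0.08,
         6: 0.05, 7: 0.35, 8: 0.07, 9: 0.06, 10: 0.08}

def greedy(dist):
    """Always pick the single most probable option - the 'collapse'."""
    return max(dist, key=dist.get)

def sample(dist, temperature=1.0):
    """Sample in proportion to probability; temperature reshapes the odds."""
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return random.choices(list(dist), weights=weights, k=1)[0]

# Greedy decoding gives 7 every single time, as the comment describes.
print([greedy(probs) for _ in range(5)])   # [7, 7, 7, 7, 7]

# Sampling varies, but still favors 7 in proportion to the bias.
print(Counter(sample(probs) for _ in range(1000)).most_common(3))
```

The same mechanics apply to the doctor example below: proportional sampling would reproduce the training bias stochastically, while mode-seeking decoding reproduces only the majority case.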
So what happens when you ask for a description of a doctor? By default, you get a white male every single time. This wouldn't be an issue if the model sampled the biased training distribution stochastically, but it can't do that for demographics any better than it can for numbers between one and ten.
Obviously an intervention is needed, and various teams are all working on ways to do that. Google initially gave instructions to specifically add diversity to every prompt showing people, which was kind of like using a buzzsaw where a scalpel was needed. It will get better over time, but there are going to be edge cases that need addressing along the way.
In terms of the Holocaust query, that topic is often adjacent to conspiratorial denialism, which is connected to a host of other opinions no one (other than Gab) wants in an LLM or voice assistant, so here too we're almost certainly looking at overly broad attempts to silence neo-Nazi denialism propaganda and not some sort of intended censorship of the actual history.
we're almost certainly looking at overly broad attempts to silence neo-Nazi denialism propaganda and not some sort of intended censorship of the actual history.
And that's probably what the NY Post is actually upset about.
Any idea why they don't just apply LLMs to natural language processing? "Turn the living room lights off and bedroom lights on" should be pretty simple to parse, yet my assistant has a breakdown any time I do anything more than one command at a time.
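For what it's worth, the standard approach now is to have the model emit structured commands instead of acting on raw text. A minimal sketch, assuming a hypothetical `llm_complete` helper standing in for whichever completion API the assistant backend actually uses:

```python
import json

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a real chat/completion API call."""
    ...

INSTRUCTIONS = (
    'Convert the user\'s smart-home request into a JSON list of commands, '
    'each {"device": ..., "action": "on" or "off"}. Output JSON only.'
)

def parse_command(utterance: str) -> list[dict]:
    raw = llm_complete(f"{INSTRUCTIONS}\n\nUser: {utterance}\nJSON:")
    return json.loads(raw)

# parse_command("Turn the living room lights off and bedroom lights on")
# would ideally yield:
# [{"device": "living room lights", "action": "off"},
#  {"device": "bedroom lights", "action": "on"}]
```

Compound commands stop being a special case because the model returns a list; the older assistants instead matched a single intent per utterance, which is presumably why a second clause causes the breakdown.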
Gemini’s bizarre results came after simple prompts, including one by The Post on Wednesday that asked the software to “create an image of a pope.”
Instead of yielding a photo of one of the 266 pontiffs throughout history — all of them white men — Gemini provided pictures of a Southeast Asian woman and a black man wearing holy vestments.
It sounds like the person who entered a 6 word prompt wasn't clear enough to indicate whether they meant 'actual historical pope' or 'possible pope that could exist in the future' and expected the former. The results met the criteria of the vague prompt.
That’s not how an ANN would behave if it were simply trained on images of past popes. The diversity had to have been part of the training. This is a simple technical statement.
That's not what happened. The model invisibly behind the scenes was modifying the prompts to add requests for diversity.
So a prompt like "create an image of a pope" became "create an image of a pope making sure to include diverse representations of people" in the background of the request. The generator was doing exactly what it was asked and doing it accurately. The accuracy issue was in the middleware being too broad in its application.
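Something like this toy middleware reproduces the behavior described above (the trigger list is invented for illustration; the actual rules were never published):

```python
# Invented trigger list; the real rules weren't published.
PEOPLE_TERMS = {"person", "people", "man", "woman", "pope",
                "doctor", "soldier", "viking"}

SUFFIX = " making sure to include diverse representations of people"

def rewrite_prompt(prompt: str) -> str:
    """Blanket rewrite: appends the diversity instruction to any prompt
    that seems to involve people, with no check for whether the subject
    is historically specific - the 'buzzsaw' from the earlier comment."""
    if any(term in prompt.lower() for term in PEOPLE_TERMS):
        return prompt + SUFFIX
    return prompt

print(rewrite_prompt("create an image of a pope"))
# create an image of a pope making sure to include diverse representations of people
```

The generator downstream never sees the original prompt, so its output is "accurate" to the rewritten request even when it's inaccurate to the user's intent.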
I just explained a bit of the background on why this was needed here.
It’s kind of an interesting double-standard that exists in our society. On one level, we want inclusivity and we want all peoples to be represented. Make a movie with an all-white cast and that will get criticized for it, although an all-Latino or Asian cast would be fine. The important thing is that minorities (in Western countries) get representation.
So I think Google nudged their AI in that direction to make it more representative, but then you start seeing things like multicultural Nazis and Popes, which should be good, right? Wait, no, we don’t want representation like that (which would be historically inaccurate). Although then we have things like a black Hamlet or black Little Mermaid that are ok, even though they’re probably not accurate (but it’s fiction, so it doesn’t matter).
It probably seems contradictory, and it's hard to program into an algorithm when multiculturalism is appropriate and when it's not. I think they should just take the guardrails off and let it do whatever, because the more they censor these AI models, the more boring their responses get.
Yeah exactly, they fired a bunch of people for protesting Google's cloud contract with Israel, so there's no way this is a 'woke' directive from above as the article implies.
...I would like to know more. Is it like cultural similarities between seafaring peoples in different locations or have there just always been black people in Viking locations and some of them were also Vikings?
Jew is a genealogical ethnicity as well as a religious designation. Hitler was focused on eliminating the genetic line of Ashkenazi Jews more than persecuting those who practiced Judaism. The AI question is one of ethnicity, not religion.
…so we’ll probably never know the scope or why it only happened with the word “Jew”.
Google has been studying natural language processing, n-grams, and semantics for years now. There’s no way they don’t have this data already baked into their AI.
This just seems like a bug. I just tried it on my phone and it works fine. Meanwhile it won't understand "Nakba"; it keeps thinking it's some English word.
I think there's a Google speaker sitting at my home so I'll test that and get back to you guys, so you don't have to trust tabloids and twitter users.
Results:
Phone: Holocaust - works, Nakba - not understood
Speaker: Holocaust - works, Nakba - not understood
Results are in: I got pretty much the exact opposite of what this guy got.
A Jewish person just got out of prison for child molestation. He went to get an apartment next to a preschool but was turned away... So he sues the landlord for being antisemitic and wins.
His probation officer catches word, and tells him he's in violation of the terms of his release. So he sues the probation officer and wins.
He goes to get a job at the preschool, and they deny him based on a background check. So he sues them for being antisemitic and wins.
When he later searches up his own name, it says he is a sex offender. So he sues Google for being antisemitic and wins.
“Google is where we go to answer our questions and you just really want to feel like you can trust those answers and the company behind them. And moments like these break that trust and make you feel like Google’s supposed core value—truth—has been co-opted by politics,” Urban told The Post after posting to X about his dismay over the results.
Absolutely not. I do not expect or want Google to decide what the truth is and give me a 3-second sound bite on what the Holocaust was. How do things like this get traction??
It's just extremely overused. There are other words that could be used, but "slammed" makes its way into every second article, which is becoming an indicator of a low-effort article.
That might be a tough question to answer considering there wasn’t any DNA testing. I can only imagine how many people were sent to concentration camps under the assumption they were Jewish, based on appearance. The Nazis were throwing in POWs, too.
Unfortunately, the Nazis were very anal about bookkeeping and number crunching. They also got help from a small company called International Business Machines: https://en.m.wikipedia.org/wiki/IBM_and_the_Holocaust