Google created a new AI model for talking to dolphins
[...] Historian Philip Mirowski argues that the decline of scientific quality can be connected to its commodification, especially spurred by major corporations' profit-driven decision to outsource their research to universities and contract research organizations.
The high publication rates for papers that reject the null hypothesis contribute to a file drawer effect in which papers that fail to reject the null go unpublished because they are not written up, written up but not submitted, or submitted and rejected. Publication bias and the file drawer effect combine to propagate the dissemination and maintenance of false knowledge: through the file drawer effect, correct findings of no effect are unpublished and hidden from view; and through publication bias, a single incorrect chance finding (a 1:20 chance at α = .05, if the null hypothesis is true) can be published and become part of a discipline's wrong knowledge.
Ideally, scientists are objective and dispassionate throughout their investigations, but knowledge of publication bias strongly opposes these ideals. Publication success shapes careers, so researchers need their experiments to succeed (rejecting the null in order to get published), creating many areas of concern (middle row of Figure 1), as follows.
I'm so tired. I hate Googler Science.
This is bullshit hype nonsense, and the fact it devolves into an ad for "AI-powered" Pixel 9 is pretty telling.
This is still less shit than their arxiv "GenAI Game Engine" crap, but not by much.
lol what do you expect them to say? “Golly thanks for destroying the biosphere. I'm so glad the ocean is spanned with floating garbage and commercial nets, there used to be pesky fish everywhere!”
Back in the day we had regular humans who would talk to dolphins and only occasionally engage in cross-species intercourse
What's a little handy between a human and a dolphin captive dosed with LSD
I don't know if we really want to talk to dolphins. Those things are godless.
I see a major problem with this. Why would we assume dolphins have one single language? I see no reason to assume their languages wouldn't be as diverse as ours.
But worse still, you have to factor in the decline in dolphin populations over time. Maybe at their natural numbers, there would have been many thousands of dolphin languages, each spoken by tens of thousands of dolphins. But we've severely degraded their numbers. Now each dolphin language is the equivalent of one of those dying indigenous languages that now only has a handful of living speakers. Dolphin language might be a collection of such near-extinct languages, each highly distinct from the others. Maybe there are thousands of dolphin languages, each spoken by only a few dozen dolphins.
And unlike human languages, these dolphin languages weren't replaced by some broader hegemonic dolphin language, a dolphin English, Spanish, Mandarin, etc. There is no dolphin lingua franca that we can train the model on. There's just a whole series of dolphin language remnants, mutually incomprehensible to each other.
This is a real problem because LLMs require vast quantities of data to train on. It may simply not be possible to gather enough samples of a single dolphin language sufficient in quantity to train an LLM on.
Uhh, someone clearly hasn't read the Bible. There weren't any dolphins building the tower of Babel, sweaty.
do not remind me of the silly stuff some christians wholeheartedly believe, the door to door ones traumatized me as a kid, they were so serious trying to convert me when I answered the door, still remember them going from teacher-voice storytelling to a serious "oh hello sir how are you" when my dad walked up, I was like yo you were telling me about hell I need to know more
It's an audio-in, audio-out model. So after providing it with a dolphin vocalization, the model does just what human-centric language models do: it predicts the next token. If it works anything like a standard LLM, those predicted tokens could be sounds that a dolphin would understand.
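The "predicts the next token" loop described there can be sketched in a few lines. Everything here is a stand-in: the "model" is a toy bigram table over a tiny vocabulary of discrete audio-token ids, not anything resembling the actual network, but the autoregressive generation loop is the same shape.

```python
import numpy as np

np.random.seed(0)
VOCAB = 8  # toy vocabulary of discrete audio tokens

# Stand-in for learned next-token probabilities: row i is P(next | current=i).
# A real model conditions on the whole sequence, not just the last token.
bigram = np.random.dirichlet(np.ones(VOCAB), size=VOCAB)

def generate(prompt_tokens, n_new):
    """Greedily extend a sequence of audio-token ids, one token at a time."""
    seq = list(prompt_tokens)
    for _ in range(n_new):
        probs = bigram[seq[-1]]            # condition on the last token
        seq.append(int(np.argmax(probs)))  # pick the most likely next token
    return seq

out = generate([3, 1], 5)
print(out)  # the 2 prompt tokens followed by 5 predicted tokens
```

In an audio-out model those final token ids would then be decoded back into waveform, which is why the outputs are "sounds a dolphin would understand" rather than text.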
It's a cool tech application, but all they're technically doing right now is training an AI to sound like dolphins. Unless they can somehow convert this to actual meaning/human language, I feel like we're just going to end up with an equally incomprehensible Large Dolphin Language Model.
incomprehensible Large Dolphin Language Model
Dolphin-speak: iiiiiiiiiiiiiiii gggggggggggggggggrrrrrreeeeeee tzzttzzttzzttzzt nnnt-nnnt-nnnt-nnnt-nnnt brrrrrrt mwahwahwahwahwah
English: fish want want swim water swam down—down—down fish prehensile penis
Don't LLMs work on text though? Speech to text is a separate process that has its output fed to an LLM? Even when you integrate them more closely to do stuff like figure out words based on context clues, wouldn't that amount to "here's a text list of possible words, which would make the most sense"?
What counts as a "token" in a purely audio based model?
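One common answer (in neural audio codecs generally, not confirmed for this specific model) is vector quantization: slice the waveform into short frames and snap each frame to its nearest entry in a learned codebook; the codebook index is the token id. A minimal sketch, with a random placeholder codebook instead of trained weights:

```python
import numpy as np

np.random.seed(0)
FRAME = 160          # samples per frame (10 ms at a 16 kHz sample rate)
CODEBOOK_SIZE = 256  # vocabulary size = number of codebook entries

# Placeholder "learned" codebook: one prototype vector per token id.
codebook = np.random.randn(CODEBOOK_SIZE, FRAME)

def tokenize(waveform):
    """Map a 1-D waveform to a sequence of discrete token ids."""
    n_frames = len(waveform) // FRAME
    frames = waveform[: n_frames * FRAME].reshape(n_frames, FRAME)
    # Nearest codebook vector (squared Euclidean distance) per frame.
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

wave = np.random.randn(16000)  # one second of fake audio
tokens = tokenize(wave)
print(tokens.shape)  # one token id per 10 ms frame
```

Real codecs quantize a learned embedding of the frame rather than raw samples, but the upshot is the same: "tokens" are just indices into a finite set of sound fragments, so next-token prediction works exactly as it does for text.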
Unless they can somehow convert this to actual meaning/human language, I feel like we're just going to end up with an equally incomprehensible Large Dolphin Language Model.
I guess the next step would be associating those sounds with the Dolphins' actions. Similar to how we would learn the language of people we've never contacted before.
I do not know enough about the intricacies of the differences between AI text models and audio-only models. Though I know we already have audio-only models that do work basically the same way.
I guess the next step would be associating those sounds with the Dolphins' actions
Yeah but, we're already trying to do this. I'm not sure how the AI step really helps. We can already hear dolphins, isolate specific noises, and associate them with actions, but we still haven't gotten very far. Having a machine that can replicate those noises without doing the actions sounds significantly less helpful compared to watching a dolphin.
I assume phonemes would be the tokens. We can already computer generate the audio of spoken language, seems like the tough part here is figuring out what the dolphin sounds actually mean. Especially when we don't have native speakers available to correct the machine outputs as the model is trained.
An emergent behavior of LLMs is the ability to translate between languages. I.e., we taught something Spanish, and we taught it English, and it automatically knows how to translate between them. If we taught it English and dolphin, it should be able to translate anything with shared meaning.
Is it emergent?! I've never seen this claim. Where did you see or read this? Do you mean by this that it can just work in any trained language and accept/return tokens based on the language input and/or requested?
Assuming this is an emergent property of LLMs (and not a result of getting lucky with which pieces of the training data were memorized in the model weights), it has thus far only been demonstrated with human language.
Does dolphin language share enough homology with human language in terms of embedded representations of the utterances (clicks?)? Maybe LLMs are a useful tool to start probing these questions, but it seems excessively optimistic and unscientific to expect a priori that training an LLM of any type, especially a sensorily unimodal one, on non-human sounds would produce a functional translator.
Moreover, from deepmind's writeup on the topic:
Knowing the individual dolphins involved is crucial for accurate interpretation. The ultimate goal of this observational work is to understand the structure and potential meaning within these natural sound sequences — seeking patterns and rules that might indicate language.
So a research team did this recently with a sperm whale, I believe, and they basically said "hello, come here" and it actually did: it circled the ship trying to talk with them, but they didn't know enough yet to respond.
Turns out the whales have their own Karl Marx and are all communists
turns out dolphins mostly just talk about fish and all the sa they have done, are doing, and plan to do
Ok yeah if tech bros find a way to communicate with animals, I will have no choice but to concede
I’m kinda skeptical though, this seems like more hype
So long and thanks for all the fish!
Ok this would actually be pretty cool, if it works
Very much so. But if it does work, we should use that as proof that it's possible and start working on a non-AI way to accomplish it.
But I doubt they'd want to put the money into that. If the mystery box works well enough why bother?
they're gonna love Shrimp Jesus
Is ecco the dolphin going to time travel into the future and save the human race?