Back in the day, I'd have been thrilled to read something like this, but now all I hear is 'look at how many new ways the Google overlord can fuck humans up with protein mutations to eliminate its fragile meat-based enemies'
I want to believe this, but given how wonky AI bots have proven to be as of late, I can’t help but think you could cut this number down by several million
In my field, where Google also "throws" their huge DL models at problems, the papers they publish tend to offer very limited explanation of how and why the model works, and they don't provide any comprehensive validation of it. So I find it difficult to trust their findings here, not just with their LLMs but with their "scientific" models too.