I think what the previous dude meant is that people wouldn't have resorted to scraping if there had been a good API. One of the major reasons people and organisations choose scraping is that it's better than paying insane amounts of money for an insanely low number of API calls.
There was a good API before Elon essentially shut it down. But that is irrelevant.
The reason mass data scrapers like OpenAI wouldn't rely on an API is that they're getting data from the entire Internet (and other sources that aren't online). They want raw data, and they want as much of it, as varied as possible. It's much easier, cheaper, and more practical to build tools that scrape websites generically than to integrate with thousands of completely independent and different APIs.
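To make the "generic tools vs. thousands of APIs" point concrete, here's a minimal sketch (stdlib only, hypothetical example HTML) of the kind of parser that works on any site at once: pull out the visible text and the outbound links, no per-site integration needed.

```python
from html.parser import HTMLParser

class PageExtractor(HTMLParser):
    """Pulls visible text and outbound links from arbitrary HTML."""
    def __init__(self):
        super().__init__()
        self.text_parts = []
        self.links = []
        self._skip = 0  # depth inside <script>/<style> blocks we ignore

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
        elif tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.links.append(value)

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.text_parts.append(data.strip())

# Made-up page; the same extractor runs unchanged on any site's HTML.
html = '<p>Hello <a href="https://example.com/a">world</a></p><script>x=1</script>'
ex = PageExtractor()
ex.feed(html)
print(ex.text_parts)  # ['Hello', 'world']
print(ex.links)       # ['https://example.com/a']
```

The point isn't that production scrapers look like this, just that one generic extractor replaces what would otherwise be a bespoke client per API.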
It's the same reason that Reddit complaining about "AI taking all of our data" is bullshit. "AI" is just a convenient excuse and the most recent tech buzzword.
They are largely mad because of how effective the AI is. If the data were being used just to improve Swype for texting, people would care less. I care more about artists' complaints about getting replaced than about big tech companies complaining that content they didn't create is being used to create things.
Also, I decided to read OpenAI's GPT-2 paper, and they were pretty clear about their created dataset:
"Instead, we created a new web scrape which emphasizes document quality. To do this we only scraped web pages which have been curated/filtered by humans. Manually filtering a full web scrape would be exceptionally expensive so as a starting point, we scraped all outbound links from Reddit, a social media platform, which received at least 3 karma. This can be thought of as a heuristic indicator for whether other users found the link interesting, educational, or just funny.

The resulting dataset, WebText, contains the text subset of these 45 million links."
That's a nice-sized dataset from real people that's already somewhat filtered for quality. They were totally scraping Reddit very specifically, and now that people see it's effective, anyone else who wants to make their own ChatGPT or improve their models will do the same.
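The selection heuristic from the quote is simple enough to sketch: keep outbound links from submissions with at least 3 karma, skip internal Reddit links, dedupe. This is an illustration with made-up data and field names, not OpenAI's actual pipeline.

```python
def select_links(submissions, min_karma=3):
    """Return deduplicated outbound URLs whose submission cleared the karma bar."""
    seen = set()
    kept = []
    for sub in submissions:
        url = sub["url"]
        if sub["karma"] >= min_karma and "reddit.com" not in url and url not in seen:
            seen.add(url)
            kept.append(url)
    return kept

# Hypothetical submissions, one per filter case.
submissions = [
    {"url": "https://example.org/post", "karma": 12},  # kept
    {"url": "https://example.net/meme", "karma": 1},   # dropped: low karma
    {"url": "https://reddit.com/r/foo", "karma": 50},  # dropped: internal link
    {"url": "https://example.org/post", "karma": 7},   # dropped: duplicate
]
print(select_links(submissions))  # ['https://example.org/post']
```

Karma here stands in for human curation, which is the whole trick: the quality filtering was already done by Reddit users for free.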
Scraping the outbound links is very different from scraping all of the user comments, which would put more load on Reddit's servers and also be easier to claim "belongs" to Reddit (it doesn't, but they argue otherwise).

This shows that OpenAI was not using, let alone abusing, Reddit's API as claimed.
Twitter very much does not want that to happen. Remember two weeks ago, when Musk reversed his decision to block anyone but registered users from seeing tweets, after Google started removing links to Twitter because they were effectively dead?
Right-wingers don't just want to be bigoted assholes with megaphones, they want to make sure decent people have to hear them too.
Yeah, people think this is like trying to stop drive-by bots looking for PHP vulnerabilities. It isn't.

Usually you're attempting to stop someone who spends their entire day trying to scrape your site. It's a full-time job trying to stop them, and even then it's a cat-and-mouse game at best.
Still don't think Elon is going to get anywhere with this though.
The same way any large web service would identify a sudden increase in traffic, whether malicious or not. For the servers I manage, we end up dealing with more unintentionally out-of-control bots than we do legitimate hack or DDoS attempts.
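As a toy illustration of "identify a sudden increase in traffic": compare each minute's request count against a trailing moving average and flag big jumps. The window size, threshold ratio, and traffic numbers here are made up for the example; real systems do this per client and with far more nuance.

```python
from collections import deque

def flag_spikes(counts_per_minute, window=5, ratio=3.0):
    """Return indices of minutes whose count exceeds `ratio` x the trailing mean."""
    recent = deque(maxlen=window)  # sliding window of the last `window` counts
    spikes = []
    for i, count in enumerate(counts_per_minute):
        if len(recent) == window:
            baseline = sum(recent) / window
            if baseline > 0 and count > ratio * baseline:
                spikes.append(i)
        recent.append(count)
    return spikes

# Steady ~100 req/min, then minute 6 is a runaway bot.
traffic = [100, 110, 95, 105, 100, 98, 400, 102]
print(flag_spikes(traffic))  # [6]
```

An out-of-control but well-meaning bot and a scraper look identical to this check, which is exactly why the same machinery catches both.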