Well, the Trump era has shown that ignoring social contracts and straight-up crime are met only with profit and slavish devotion from a huge community of dipshits. So. Y'know.
Just thinking about robots.txt as a working solution against people who literally broker in other people's entire digital lives for hundreds of billions of dollars is so ... quaint.
Hmm, I thought websites just blocked crawler traffic directly? I know one site in particular has rules about it, and will even go so far as to ban you permanently if you continually ignore them.
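For what it's worth, direct blocking at the server can look something like this nginx sketch. The user-agent strings are the bots' published names, but everything else here is illustrative, and a crawler can trivially lie about its user agent:

```nginx
# Inside a server block: refuse requests whose User-Agent matches
# known AI crawlers. Best-effort only -- the header is self-reported.
if ($http_user_agent ~* "(GPTBot|CCBot|Google-Extended)") {
    return 403;
}
```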
I explicitly have my robots.txt set to block AI crawlers, but I don't know if any of them will actually observe the protocol. They should offer a tool I can submit a sitemap.xml to, so I can find out whether I've been parsed. Until they bother to address this, I can only assume their intent is hostile; unless someone gets serious about building a honeypot and exposing the tooling for the rest of us to deploy at large, my options are limited.
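For reference, the relevant part of a robots.txt like mine looks roughly like this. GPTBot, Google-Extended, and CCBot are the published user agents for OpenAI, Google's AI training, and Common Crawl respectively; honoring it is entirely voluntary on their end:

```
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```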
What social contract? When sites regularly have a robots.txt that says "only Google may crawl," effectively helping enforce a monopoly, that's not a social contract I'd ever agree to.
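For anyone who hasn't seen one, that kind of Google-only policy is just a few lines (a generic sketch, not from any particular site; an empty Disallow means "allow everything"):

```
# Let Google's search crawler in, shut everyone else out
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
```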
If you hosted your website on your computer, as many people did, or on hastily constructed server software run through your home internet connection, all it took was a few robots overzealously downloading your pages for things to break and the phone bill to spike.
AI companies like OpenAI are crawling the web in order to train large language models that could once again fundamentally change the way we access and share information.
In the last year or so, the rise of AI products like ChatGPT, and the large language models underlying them, has made high-quality training data one of the internet's most valuable commodities.
You might build a totally innocent one to crawl around and make sure all your on-page links still lead to other live pages; you might send a much sketchier one around the web harvesting every email address or phone number you can find.
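The innocent kind is simple enough to sketch in a few lines of Python. Everything here (the bot name, the example URL) is hypothetical, and a polite crawler checks robots.txt before fetching anything:

```python
# Minimal sketch of an "innocent" link-checking crawler: fetch one
# page, extract its links, and report any that no longer resolve.
import urllib.request
import urllib.robotparser
from html.parser import HTMLParser
from urllib.parse import urljoin

USER_AGENT = "MyLinkChecker"  # hypothetical bot name


class LinkParser(HTMLParser):
    """Collects href targets from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def check_links(page_url):
    # A polite crawler consults robots.txt before fetching.
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(urljoin(page_url, "/robots.txt"))
    robots.read()
    if not robots.can_fetch(USER_AGENT, page_url):
        print(f"robots.txt disallows fetching {page_url}")
        return

    parser = LinkParser()
    with urllib.request.urlopen(page_url) as resp:
        parser.feed(resp.read().decode("utf-8", errors="replace"))

    for href in parser.links:
        target = urljoin(page_url, href)
        try:
            # HEAD is lighter than GET; some servers reject it, which
            # this sketch would also report as a dead link.
            urllib.request.urlopen(urllib.request.Request(target, method="HEAD"))
        except Exception as exc:
            print(f"dead link on {page_url}: {target} ({exc})")


check_links("https://example.com/")
```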
The New York Times blocked GPTBot as well, months before launching a suit against OpenAI alleging that OpenAI’s models “were built by copying and using millions of The Times’s copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more.” A study by Ben Welsh, the news applications editor at Reuters, found that 606 of 1,156 surveyed publishers had blocked GPTBot in their robots.txt file.
“We recognize that existing web publisher controls were developed before new AI and research use cases,” Google’s VP of trust Danielle Romain wrote last year.
Wow, I'm shocked! Just like how OpenAI preached about "privacy and ethics," went dead silent on data hoarding and scraping, then privatized the data it scraped. If they insist their data collection is private, then it needs regular external audits by strict data-privacy firms, just like they already do with security.
Also, by the way, they're violating the basic social contract of not working toward triggering an intelligence explosion that will likely replace all biological life on Earth with computronium, but who's counting? :)