How to block AI Crawler Bots using robots.txt file

This article lies to the reader, so it earns a -1 from me.
robots.txt does not work. I don't think it ever has - it's an honour system with no penalty for ignoring it.
I have a few low-traffic sites hosted at home, and when a crawler takes an interest it can totally flood my connection. I'm using Cloudflare and being incredibly aggressive with my filtering, but so many bots are ignoring robots.txt, as well as lying about who they are with human-looking UAs, that it's having a real impact on my ability to serve the sites to humans.
Over the past year it's gotten around ten times worse. I woke up this morning to find my connection at a crawl, and on checking the logs, AmazonBot had been hitting one site 12,000 times an hour, and that's one of the better-behaved bots. But there are thousands and thousands of them.
Cloudflare just announced an AI Bot prevention system: https://blog.cloudflare.com/declaring-your-aindependence-block-ai-bots-scrapers-and-crawlers-with-a-single-click/
When I changed my domain name I turned this setting on, and then wondered why I couldn't log into the Nextcloud desktop app.
This does not block anything at all.
It's a 1994 "standard" that relies on voluntary compliance, and the User-Agent is just a string set by the operator of whatever tool is used to access your site.
https://en.m.wikipedia.org/wiki/Robots.txt
https://en.m.wikipedia.org/wiki/User-Agent_header
In other words, the bot operator can simply ignore your robots.txt file, and because they can set their User-Agent to whatever they like, checking your webserver logs won't tell you who is ignoring it.
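To illustrate how arbitrary that header is, here's a hypothetical example (example.com is a placeholder) of fetching a page with curl while claiming to be a desktop Firefox browser:

    curl -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:126.0) Gecko/20100101 Firefox/126.0" https://example.com/

Any scraper can do the same, so a log full of "browser" user-agents proves nothing.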
robots.txt will not block a bad bot, but you can use it to lure the bad bots into a "bot-trap" so you can ban them in an automated fashion.
I'm guessing something like:
Robots.txt: Do not index this particular area.
Main page: invisible link to particular area at top of page, with alt text of "don't follow this, it's just a bot trap" for screen readers and such.
Result: any access to that particular area equals an insta-ban for that IP. Maybe just for 24 hours, so nosy humans can get back to enjoying your site. (A rough sketch of one way to wire this up is below.)
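A minimal sketch of that setup, assuming nginx with its default access-log path and a trap path of /bot-trap/ (both placeholders), with fail2ban doing the actual banning (one option; the commenter doesn't say what they use):

robots.txt:
    User-agent: *
    Disallow: /bot-trap/

Hidden link on the main page (visually hidden, but still announced to screen readers so humans are warned off):
    <a href="/bot-trap/" style="position:absolute;left:-9999px">Don't follow this link, it's just a bot trap.</a>

/etc/fail2ban/filter.d/bot-trap.conf:
    [Definition]
    # Match any request for the trap path in the access log
    failregex = ^<HOST> .* "(GET|POST|HEAD) /bot-trap/

/etc/fail2ban/jail.local:
    [bot-trap]
    enabled  = true
    port     = http,https
    filter   = bot-trap
    logpath  = /var/log/nginx/access.log
    maxretry = 1
    # Ban for 24 hours so curious humans aren't locked out forever
    bantime  = 86400

Well-behaved crawlers never see the trap because robots.txt tells them to stay out; anything that requests it anyway gets its IP banned on the first hit.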
Wow. A lot of cynicism here. The AI bots are (currently) honoring robots.txt, so this is an easy way to say "go away". Honeypot URLs can be a second line of defense, as can blocking published IP ranges. They're no different from other bots that have existed for years.
In my experience, the AI bots are absolutely not honoring robots.txt - and there are literally hundreds of unique ones. Many of them aren't even identifying themselves as AI bots, but faking human user-agents.
It isn't an enforceable solution. robots.txt and similar are just "please, bots, don't index these pages" requests. That doesn't mean any bot will respect it.
TL;DR:
User-agent: GPTBot
Disallow: /
User-agent: ChatGPT-User
Disallow: /
User-agent: Google-Extended
Disallow: /
User-agent: PerplexityBot
Disallow: /
User-agent: Amazonbot
Disallow: /
User-agent: ClaudeBot
Disallow: /
User-agent: Omgilibot
Disallow: /
User-agent: FacebookBot
Disallow: /
User-agent: Applebot
Disallow: /
User-agent: anthropic-ai
Disallow: /
User-agent: Bytespider
Disallow: /
User-agent: Claude-Web
Disallow: /
User-agent: Diffbot
Disallow: /
User-agent: ImagesiftBot
Disallow: /
User-agent: Omgili
Disallow: /
User-agent: YouBot
Disallow: /
Block? Nope, robots.txt does not block the bots. It's just a text file that says: "Hey robot X, please do not crawl my website. Thanks :>"
I disallow a page in my robots.txt and IP-ban everyone who goes there. That's pretty effective.
Did you ban it in your humans.txt too?
Can you explain this more?
Is the page linked in the site anywhere, or just mentioned in the robots.txt file?
Not sure if that is effective at all. Why would a crawler check the robots.txt if it's programmed to ignore it anyways?
smart
I doubt it'd be possible in any meaningful way due to the lack of server control, but I'm definitely going to have to look this up and see if anything similar could be done on a Neocities site.
Can this be done without fail2ban?
Robots.txt is honor-based and Big Data has no honor.
Unfortunate indeed.
They typically adhere to it, but they don't have to follow it.
Is it poor design if it's explicitly a design choice to ignore it entirely and scrape as much data as possible? I'd argue it's more that AI bots are designed to scrape everything regardless of robots.txt. That's the intention. Asshole design vs. poor design.
This is why I block in an .htaccess file:
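The rules themselves weren't included above, but a minimal sketch of what User-Agent blocking in an Apache .htaccess typically looks like (bot names borrowed from the TL;DR list earlier; not the commenter's actual configuration, and it requires mod_rewrite):

    # Return 403 Forbidden for requests whose User-Agent matches any of these bot names
    RewriteEngine On
    RewriteCond %{HTTP_USER_AGENT} (GPTBot|ChatGPT-User|ClaudeBot|Amazonbot|Bytespider|PerplexityBot) [NC]
    RewriteRule .* - [F,L]

As with robots.txt, this only catches bots that identify themselves honestly in the User-Agent header; anything spoofing a browser UA sails straight through.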