From 6801ce299a3a0016bae08ee8f64602aeb0274659 Mon Sep 17 00:00:00 2001
From: kim
Date: Wed, 17 Sep 2025 14:16:53 +0200
Subject: [chore] remove nollamas middleware for now (after discussions with a
 security advisor) (#4433)

i'll keep this on a separate branch for now while i experiment with other
possible alternatives, but for now both our hacky implementation especially,
and more popular ones (like anubis) aren't looking too great on the deterrent
front: https://github.com/eternal-flame-AD/pow-buster

Co-authored-by: tobi
Reviewed-on: https://codeberg.org/superseriousbusiness/gotosocial/pulls/4433
Co-authored-by: kim
Co-committed-by: kim
---
 docs/admin/robots.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'docs/admin/robots.md')

diff --git a/docs/admin/robots.md b/docs/admin/robots.md
index 29a02db42..285e41bf9 100644
--- a/docs/admin/robots.md
+++ b/docs/admin/robots.md
@@ -10,6 +10,6 @@ You can allow or disallow crawlers from collecting stats about your instance fro
 
 The AI scrapers come from a [community maintained repository][airobots]. It's manually kept in sync for the time being. If you know of any missing robots, please send them a PR!
 
-A number of AI scrapers are known to ignore entries in `robots.txt` even if it explicitly matches their User-Agent. This means the `robots.txt` file is not a foolproof way of ensuring AI scrapers don't grab your content. In addition to this you might want to look into blocking User-Agents via [requester header filtering](request_filtering_modes.md), and enabling a proof-of-work [scraper deterrence](../advanced/scraper_deterrence.md).
+A number of AI scrapers are known to ignore entries in `robots.txt` even if it explicitly matches their User-Agent. This means the `robots.txt` file is not a foolproof way of ensuring AI scrapers don't grab your content. In addition to this you might want to look into blocking User-Agents via [requester header filtering](request_filtering_modes.md).
 
 [airobots]: https://github.com/ai-robots-txt/ai.robots.txt/
-- 
cgit v1.2.3