author    kim <grufwub@gmail.com>    2025-09-17 14:16:53 +0200
committer kim <gruf@noreply.codeberg.org>    2025-09-17 14:16:53 +0200
commit    6801ce299a3a0016bae08ee8f64602aeb0274659 (patch)
tree      ee7d1d15e05794b2f0383d076dd7c51fafc70dad /docs/admin
parent    [bugfix/frontend] Use correct account domain in move account helper (#4440) (diff)
download  gotosocial-6801ce299a3a0016bae08ee8f64602aeb0274659.tar.xz
[chore] remove nollamas middleware for now (after discussions with a security advisor) (#4433)
i'll keep this on a separate branch for now while i experiment with other possible alternatives, but for now both our hacky implementation especially, and more popular ones (like anubis), aren't looking too great on the deterrent front: https://github.com/eternal-flame-AD/pow-buster

Co-authored-by: tobi <tobi.smethurst@protonmail.com>
Reviewed-on: https://codeberg.org/superseriousbusiness/gotosocial/pulls/4433
Co-authored-by: kim <grufwub@gmail.com>
Co-committed-by: kim <grufwub@gmail.com>
Diffstat (limited to 'docs/admin')
-rw-r--r--    docs/admin/robots.md    2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/admin/robots.md b/docs/admin/robots.md
index 29a02db42..285e41bf9 100644
--- a/docs/admin/robots.md
+++ b/docs/admin/robots.md
@@ -10,6 +10,6 @@ You can allow or disallow crawlers from collecting stats about your instance fro
The AI scrapers come from a [community maintained repository][airobots]. It's manually kept in sync for the time being. If you know of any missing robots, please send them a PR!
-A number of AI scrapers are known to ignore entries in `robots.txt` even if it explicitly matches their User-Agent. This means the `robots.txt` file is not a foolproof way of ensuring AI scrapers don't grab your content. In addition to this you might want to look into blocking User-Agents via [requester header filtering](request_filtering_modes.md), and enabling a proof-of-work [scraper deterrence](../advanced/scraper_deterrence.md).
+A number of AI scrapers are known to ignore entries in `robots.txt` even if it explicitly matches their User-Agent. This means the `robots.txt` file is not a foolproof way of ensuring AI scrapers don't grab your content. In addition to this you might want to look into blocking User-Agents via [requester header filtering](request_filtering_modes.md).
[airobots]: https://github.com/ai-robots-txt/ai.robots.txt/
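
For reference, the robots.txt mechanism the patched doc talks about is just plain-text per-User-Agent rules. A minimal sketch follows; GPTBot is one example crawler name taken from the airobots list linked above, while the generated file in GoToSocial covers the full list:

    # Disallow a known AI crawler from the whole site.
    User-agent: GPTBot
    Disallow: /

Crawlers that honour robots.txt will skip the instance entirely when they match a Disallow: / rule; as the updated doc notes, many AI scrapers simply ignore it, which is why the doc also points at requester header filtering.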