author	kim <89579420+NyaaaWhatsUpDoc@users.noreply.github.com>	2024-04-11 10:45:35 +0100
committer	GitHub <noreply@github.com>	2024-04-11 11:45:35 +0200
commit	a483bd9e38333b153df1d4df95276cca38f99ff5 (patch)
tree	b2fdb6f53ef248b31719a15adc93e767eba3d5c4 /docs/configuration/advanced.md
parent	[chore]: Bump github.com/yuin/goldmark from 1.7.0 to 1.7.1 (#2819)
[performance] massively improved ActivityPub delivery worker efficiency (#2812)
* add delivery worker type that pulls from queue to httpclient package
* finish up some code commenting, bodge a vendored activity library change, integrate the deliverypool changes into transportcontroller
* hook up queue deletion logic
* support deleting queued http requests by target ID
* don't index APRequest by hostname in the queue
* use gorun
* use the original context's values when wrapping msg type as delivery{}
* actually log in the AP delivery worker ...
* add uncommitted changes
* use errors.AsV2()
* use errorsv2.AsV2()
* finish adding some code comments, add bad host handling to delivery workers
* slightly tweak deliveryworkerpool API, use advanced sender multiplier
* remove PopCtx() method, let others instead rely on Wait()
* shuffle things around to move delivery stuff into transport/ subpkg
* remove dead code
* formatting
* validate request before queueing for delivery
* finish adding code comments, fix up backoff code
* finish adding more code comments
* clamp minimum no. senders to 1
* add start/stop logging to delivery worker, some slight changes
* remove double logging
* use worker ptrs
* expose the embedded log fields in httpclient.Request{}
* ensure request context values are preserved when updating ctx
* add delivery worker tests
* fix linter issues
* ensure delivery worker gets inited in testrig
* fix tests to delivering messages to check worker delivery queue
* update error type to use ptr instead of value receiver
* fix test calling Workers{}.Start() instead of testrig.StartWorkers()
* update docs for advanced-sender-multiplier
* update to the latest activity library version
* add comment about not using httptest.Server{}
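The delivery mechanism described above follows a common pattern: a fixed pool of sender goroutines all pulling from one shared queue, so concurrency stays bounded no matter how many messages pile up. The sketch below is a minimal, self-contained illustration of that pattern only; the `delivery` type and `runSenders` function are assumptions for this example and do not mirror GoToSocial's actual `httpclient`/`transport` code.

```go
package main

import (
	"fmt"
	"sync"
)

// delivery stands in for one queued ActivityPub message bound for a remote inbox.
type delivery struct {
	targetID string
	body     []byte
}

// runSenders starts n sender goroutines that all pull from the same queue,
// so at most n deliveries are in flight at once regardless of queue depth.
func runSenders(n int, queue <-chan delivery) *sync.WaitGroup {
	wg := new(sync.WaitGroup)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for d := range queue {
				// The real worker would perform an HTTP POST with retries
				// and backoff here; this sketch just logs the attempt.
				fmt.Printf("delivering %d bytes to %s\n", len(d.body), d.targetID)
			}
		}()
	}
	return wg
}

func main() {
	queue := make(chan delivery, 16)
	wg := runSenders(4, queue) // e.g. 2 CPUs * sender multiplier of 2

	for i := 0; i < 10; i++ {
		queue <- delivery{
			targetID: fmt.Sprintf("https://remote.example/users/%d/inbox", i),
			body:     []byte(`{"type":"Create"}`),
		}
	}
	close(queue)
	wg.Wait()
}
```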
Diffstat (limited to 'docs/configuration/advanced.md')
-rw-r--r--	docs/configuration/advanced.md	13
1 file changed, 4 insertions(+), 9 deletions(-)
diff --git a/docs/configuration/advanced.md b/docs/configuration/advanced.md
index f776dac54..b97d8a6ba 100644
--- a/docs/configuration/advanced.md
+++ b/docs/configuration/advanced.md
@@ -119,15 +119,10 @@ advanced-throttling-multiplier: 8
# Default: "30s"
advanced-throttling-retry-after: "30s"
-# Int. CPU multiplier for the amount of goroutines to spawn in order to send messages via ActivityPub.
-# Messages will be batched so that at most multiplier * CPU count messages will be sent out at once.
-# This can be tuned to limit concurrent POSTing to remote inboxes, preventing your instance CPU
-# usage from skyrocketing when an account with many followers posts a new status.
-#
-# Messages are split among available senders, and each sender processes its assigned messages in serial.
-# For example, say a user with 1000 followers is on an instance with 2 CPUs. With the default multiplier
-# of 2, this means 4 senders would be in process at once on this instance. When the user creates a new post,
-# each sender would end up iterating through about 250 Create messages + delivering them to remote instances.
+# Int. CPU multiplier for the fixed number of goroutines to spawn in order to send messages via ActivityPub.
+# Messages will be batched and pushed onto a single queue, from which multiplier * CPU count goroutines will
+# pull and attempt deliveries. This can be tuned to limit concurrent posting to remote inboxes, preventing
+# your instance's CPU usage from skyrocketing when accounts with many followers post statuses.
#
# If you set this to 0 or less, only 1 sender will be used regardless of CPU count. This may be
# useful in cases where you are working with very tight network or CPU constraints.
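For illustration, the arithmetic described in these comments (multiplier * CPU count, clamped to a minimum of one sender when the multiplier is zero or negative) could be sketched as below. `senderCount` is a hypothetical helper for this example, not GoToSocial's actual implementation.

```go
package main

import (
	"fmt"
	"runtime"
)

// senderCount turns the advanced-sender-multiplier setting into a number of
// delivery goroutines: multiplier * CPU count, clamped to at least 1 when
// the multiplier is zero or negative.
func senderCount(multiplier int) int {
	n := multiplier * runtime.NumCPU()
	if n < 1 {
		return 1
	}
	return n
}

func main() {
	// With the default multiplier of 2 on a 4-CPU machine this prints 8;
	// a multiplier of 0 always yields a single sender.
	fmt.Println(senderCount(2))
	fmt.Println(senderCount(0))
}
```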