
Fastlane broadcasts #10

Open: palkan wants to merge 1 commit into main from feat/fastlane-broadcasts

Conversation

@palkan (Member) commented on Jan 8, 2025

Context

Action Cable decodes and re-encodes every broadcasted message for every client, which is unnecessary in most cases. This PR provides a fastlane_broadcasts configuration option to skip the double encoding and transmit broadcast messages as is to clients that don't use custom stream coders or user-defined callbacks.

This is a continuation of the previous work: rails/rails#26999
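
For illustration, here is a minimal sketch of how the option could be enabled and which streams are eligible for the fastlane. The configuration key placement and the channel code are assumptions; only the fastlane_broadcasts option name and the coder/callback opt-out come from this PR.

```ruby
# Minimal sketch, assuming the setting is exposed under config.action_cable
# (the PR only names the fastlane_broadcasts option itself).
# config/environments/production.rb
Rails.application.configure do
  # Transmit broadcast messages to clients as is, skipping decode/re-encode.
  config.action_cable.fastlane_broadcasts = true
end

# app/channels/chat_channel.rb (hypothetical channel for illustration)
class ChatChannel < ApplicationCable::Channel
  def subscribed
    # A plain stream: no custom coder and no user-defined callback,
    # so broadcasts to it can be transmitted as is.
    stream_for Chat.find(params[:id])

    # A stream with a custom coder (or a callback) keeps the regular
    # decode/re-encode path, per the PR description.
    # stream_from "chat_raw", coder: SomeCustomCoder
  end
end
```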

Benchmarks

The cost of the excess encoding depends heavily on the size and nature of the broadcast payload. We consider three examples: a small JSON payload, a larger JSON payload, and a Turbo Stream "append" action.

Here are the results:

$ be ruby --yjit  benchmarks/broadcasting.rb

Running benchmark with N=10, M=100, adapter=async, fastlane_broadcasts_enabled=false, worker_pool_size=4

ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +YJIT +PRISM [aarch64-linux]
Warming up --------------------------------------
small json (19 Bytes)     10.000 i/100ms
large json (80.5 KB)       1.000 i/100ms
turbo stream (2.95 KB)     1.000 i/100ms
Calculating -------------------------------------
small json (19 Bytes)    101.991 (±19.6%) i/s    (9.80 ms/i) -    980.000 in  10.053077s
large json (80.5 KB)       0.557 (± 0.0%) i/s     (1.80 s/i) -      6.000 in  10.786842s
turbo stream (2.95 KB)    16.618 (±24.1%) i/s   (60.18 ms/i) -    156.000 in  10.068446s
$ FASTLANE_BROADCASTS=1 be ruby --yjit  benchmarks/broadcasting.rb

Running benchmark with N=10, M=100, adapter=async, fastlane_broadcasts_enabled=true, worker_pool_size=4

ruby 3.4.1 (2024-12-25 revision 48d4efcb85) +YJIT +PRISM [aarch64-linux]
Warming up --------------------------------------
small json (19 Bytes)     38.000 i/100ms
large json (80.5 KB)       1.000 i/100ms
turbo stream (2.95 KB)     9.000 i/100ms
Calculating -------------------------------------
small json (19 Bytes)    327.354 (±25.0%) i/s    (3.05 ms/i) -     3.116k in  10.076621s
large json (80.5 KB)       3.826 (± 0.0%) i/s  (261.36 ms/i) -     38.000 in  10.019392s
turbo stream (2.95 KB)    85.029 (±31.8%) i/s   (11.76 ms/i) -    765.000 in  10.078424s

So, the fastlane version is ~3x faster for small JSON payloads, ~7x faster for the larger JSON payload, and ~5x faster for the HTML/Hotwire (Turbo Stream) payload.
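
For reference, the output above matches the benchmark-ips format, so a broadcasting benchmark along these lines could look like the sketch below. This is a hypothetical reconstruction, not the actual benchmarks/broadcasting.rb script: the payloads, stream name, and the meaning of M are assumptions, and the setup that subscribes the N clients is omitted.

```ruby
# Hypothetical sketch of a broadcasting benchmark (the real one is
# benchmarks/broadcasting.rb). It assumes an Action Cable server is loaded
# and N clients are already subscribed to the "benchmark_stream" stream.
require "benchmark/ips"
require "json"

M = Integer(ENV.fetch("M", 100)) # broadcasts per reported iteration (assumed meaning)

payloads = {
  "small json"   => {message: "hello"},
  "large json"   => {items: Array.new(1_000) { |i| {id: i, body: "x" * 64} }},
  "turbo stream" => {html: %(<turbo-stream action="append" target="messages">...</turbo-stream>)}
}

Benchmark.ips do |x|
  x.warmup = 2
  x.time = 10

  payloads.each do |name, payload|
    x.report("#{name} (#{payload.to_json.bytesize} Bytes)") do
      M.times { ActionCable.server.broadcast("benchmark_stream", payload) }
    end
  end
end
```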

Here are Vernier profile screenshots:

  • Fastlane implementation: [Vernier profile screenshot]
  • Current implementation: [Vernier profile screenshot]

@palkan force-pushed the feat/fastlane-broadcasts branch from 7679c8b to 10e1a80 on January 8, 2025 at 15:07