Enable Cross-Worker Queue Testing Locally #7548
Replies: 5 comments
-
Leaving this comment here to highlight this as a major annoyance in an otherwise fantastic DX with Cloudflare. I'm surprised this has so few upvotes; how are others getting around this? The only solutions I can think of are deploying to staging or, as suggested in the docs, duplicating all the logic from my consumer. @StefanStuehrmann, in this thread you mention something about wiring up the handlers of different workers; could you give an example of what you mean here? In any case, I hope the Cloudflare people will support you in implementing your proposal.
-
We've got an experimental feature to run multiple workers in one instance by providing multiple configs; it may work with queues. Thanks for the feedback though; I'm passing it on to the queues team.
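For reference, that multi-config mode is invoked by passing several `--config`/`-c` flags to a single `wrangler dev` session (the paths below are placeholders for your own workers; check your wrangler version for support):

```shell
# Run the producer and consumer workers in one wrangler dev instance.
# The first config is treated as the primary worker; the others run
# alongside it so bindings between them resolve locally.
npx wrangler dev -c producer/wrangler.toml -c consumer/wrangler.toml
```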
-
I started implementing Durable Object based persistence for local queues in Miniflare. The bigger problems are the edge cases and some implementation details.
After spending quite some time on an implementation that worked for me, but still had quite a few shortcomings, I decided to go for the single-instance approach. Below you can see how I start my workers (using NX and a startup script in my API gateway worker to start everything). I had to persist service state to a root-level .wrangler/state directory so that other dependencies like D1 worked properly with multiple-config support, but it is working OK. The biggest disadvantage is that I now need to run those services with a single command instead of having a separate session per service to follow (which is minor).
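A minimal sketch of that kind of startup (the project layout is hypothetical; `--persist-to` is wrangler's flag for pointing local state at a shared directory):

```shell
# Start all workers in a single wrangler dev instance, persisting local
# Miniflare state (queues, D1, KV, ...) to one shared root-level
# directory so every worker sees the same local resources.
# Paths are placeholders for your own monorepo layout.
npx wrangler dev \
  -c apps/api-gateway/wrangler.toml \
  -c apps/consumer/wrangler.toml \
  --persist-to .wrangler/state
```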
-
@longas what I did initially is that I basically just imported the queue function into the producer worker and then "wired" it in via the wrangler config, e.g. (the array contents were not preserved here, so they are elided):

Producer worker wrangler.toml:

```toml
# local: wire the consumer directly into the producer
queues.consumers = [ ... ]

# prod
queues.producers = [ ... ]
```

Consumer worker wrangler.toml:

```toml
# local (not really needed, just for some test scenarios)
queues.consumers = [ ... ]

# prod
queues.consumers = [ ... ]
```

But this is super dirty, and it basically requires your producer worker to have something like the following, where the queue handler is only executed locally because of the queue wiring (the `./consumer` import path is illustrative):

```ts
import honoApp from './hono-app';
// the consumer's queue handler, imported into the producer worker
import { queue } from './consumer';

export default {
  fetch: honoApp.fetch,
  // only invoked locally, where the wrangler config wires this
  // worker up as the queue's consumer
  queue,
};
```

The multiple-config option that was suggested before is what I've also been using for some time now, and it works much better.
-
Thanks to both of you for your help! Using multiple configs with Wrangler worked perfectly in my monorepo with two workers, one as the producer and the other as the consumer.
-
## Proposal: SQLite-based Queue Implementation for Local Development

### Background
Currently, wrangler's local queue implementation uses an in-memory array for storing queue messages, which prevents queue usage across different workers during local development. This limitation makes it difficult to test distributed worker architectures locally.
### Proposal
Extend the current queue implementation to optionally use SQLite for message storage (SQLite is already used for D1), so that queue state can be shared across worker processes during local development.
### Implementation Details

#### Configuration
It is worth considering whether a queue's state should always reside with the consumer worker, since queues can have only one consumer but multiple producers.
#### Key Features

### Benefits
### Implementation Impact
This change would be isolated to the local development environment and wouldn't affect production queues. The implementation would live entirely within the miniflare package and its tests, and wouldn't require changes to the core Workers runtime.
Looking forward to hearing the queues team's thoughts on this approach.
The effort estimate for the implementation seems relatively small (a few days).
So in case it doesn't make it onto the queues team's backlog, I would be willing to contribute, once we agree on the details and it is confirmed that it will make it into the main branch.