Hey everyone,
I’m currently developing a larger business application that relies heavily on Event Sourcing, using EventStoreDB as the backbone. So far, the experience has been surprisingly smooth, which is awesome.
Setup:
For development, I run a single-node EventStoreDB instance via Docker on my MacBook M3 (64 GB RAM, 16 CPU cores). The container is configured with:
EVENTSTORE_INSECURE=true
EVENTSTORE_INT_IP=172.30.240.11
EVENTSTORE_TELEMETRY_OPTOUT=true
EVENTSTORE_ENABLE_ATOM_PUB_OVER_HTTP=true
EVENTSTORE_RUN_PROJECTIONS=All
EVENTSTORE_START_STANDARD_PROJECTIONS=true
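For completeness, the container is started with something like this (the image tag is illustrative; I just use one that ships an arm64 build, and 2113 is the standard client port):

docker run -d --name esdb \
  -p 2113:2113 \
  -e EVENTSTORE_INSECURE=true \
  -e EVENTSTORE_INT_IP=172.30.240.11 \
  -e EVENTSTORE_TELEMETRY_OPTOUT=true \
  -e EVENTSTORE_ENABLE_ATOM_PUB_OVER_HTTP=true \
  -e EVENTSTORE_RUN_PROJECTIONS=All \
  -e EVENTSTORE_START_STANDARD_PROJECTIONS=true \
  eventstore/eventstore:latest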
Use Case:
We’re following a transactional outbox pattern to emit domain events. Each event is appended to a dedicated stream for its aggregate (the usual one-stream-per-aggregate pattern), for example:
price-8DA9A782-5B91-434A-B233-9177E1CDC13C
Let’s say there’s a mass update in our product price range — around 1.5 million prices need to be adjusted. This means:
- 1.5 million events
- 1.5 million individual streams
- Each stream receives a single AppendToStreamAsync call
In my current tests, I’m observing a throughput of 3,000 individual AppendToStreamAsync calls in ~5.5 seconds, i.e. roughly 545 appends per second. While this doesn’t sound terrible, it also doesn’t seem particularly fast, especially given the scale I need to support: at that rate, the full 1.5 million appends would take about 45 minutes.
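For reference, the benchmark loop is essentially the following. This is a simplified sketch, not my actual code: the PriceChanged event type, the payload shape, and the prices source are placeholders, and it assumes the EventStore.Client gRPC package for .NET.

using System;
using System.Linq;
using System.Text.Json;
using EventStore.Client;

var settings = EventStoreClientSettings.Create("esdb://localhost:2113?tls=false");
await using var client = new EventStoreClient(settings);

// Stand-in for the real price data (~1.5 million entries in production).
var prices = Enumerable.Range(0, 3_000)
    .Select(_ => (Id: Guid.NewGuid(), NewValue: 9.99m))
    .ToList();

foreach (var price in prices)
{
    var eventData = new EventData(
        Uuid.NewUuid(),
        "PriceChanged",
        JsonSerializer.SerializeToUtf8Bytes(new { price.Id, price.NewValue }));

    // One network round trip (and one acknowledged write) per aggregate stream.
    await client.AppendToStreamAsync(
        $"price-{price.Id}",
        StreamState.Any,
        new[] { eventData });
}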
My Questions:
- Is this throughput (3k appends in ~5.5 s) expected for a Dockerized EventStoreDB on an ARM Mac (M3)?
- What are best practices to improve append performance for a high number of distinct streams? (The only client-side lever I’ve found so far is running the appends concurrently; see the sketch after this list.)
- I’ve heard that Docker on macOS runs containers inside a Linux VM, which can introduce performance bottlenecks, particularly for disk I/O and fsync-heavy workloads like an event store. Are there specific optimizations for that setup?
- Would it be better to queue events in larger batches per stream where possible, or use another strategy altogether?
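Here’s roughly what I mean by running the appends concurrently. This reuses client and prices from the loop above; maxConcurrency is an arbitrary number to experiment with, not a recommendation:

using System.Text.Json;
using System.Threading.Tasks;
using EventStore.Client;

// Placeholder knob, not a known-good value.
const int maxConcurrency = 32;

await Parallel.ForEachAsync(
    prices,
    new ParallelOptions { MaxDegreeOfParallelism = maxConcurrency },
    async (price, ct) =>
    {
        var eventData = new EventData(
            Uuid.NewUuid(),
            "PriceChanged",
            JsonSerializer.SerializeToUtf8Bytes(new { price.Id, price.NewValue }));

        // Same per-aggregate streams as before, just many appends in flight.
        await client.AppendToStreamAsync(
            $"price-{price.Id}",
            StreamState.Any,
            new[] { eventData },
            cancellationToken: ct);
    });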
Any insights or real-world numbers from your own setups would be super helpful, especially from those running EventStoreDB in containerized development environments.
Thanks in advance!
Update:
I’ve realized that during large-scale price updates, instead of writing each change to its own dedicated stream, I could direct all events into a consolidated stream, for example something like price-change-8DA9A782-5B91-434A-B233-9177E1CDC13C. By batching the 1.5 million price-change events into a single stream, I can take advantage of AppendToStreamAsync’s ability to accept multiple events per call, which should significantly improve throughput and reduce overall write time. I’ve heard the advice to keep streams small, but in this case I don’t seem to have a choice, do I?
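To illustrate, here’s a sketch of that batched variant, again reusing client and prices from the earlier sketches (the chunk size of 500 is an arbitrary placeholder):

// All 1.5 million events go into one consolidated stream
// instead of 1.5 million tiny per-aggregate streams.
const string streamName = "price-change-8DA9A782-5B91-434A-B233-9177E1CDC13C";

foreach (var chunk in prices.Chunk(500))
{
    var batch = chunk.Select(price => new EventData(
        Uuid.NewUuid(),
        "PriceChanged",
        JsonSerializer.SerializeToUtf8Bytes(new { price.Id, price.NewValue })));

    // A single call appends the whole chunk, so the network round trip
    // and the write are amortized over hundreds of events.
    await client.AppendToStreamAsync(streamName, StreamState.Any, batch);
}

The win here is purely amortization: the per-call overhead that currently dominates gets paid once per chunk instead of once per event.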