Chris Lu d48e1e1659 mount: improve read throughput with parallel chunk fetching (#7569)
* mount: improve read throughput with parallel chunk fetching

This addresses issue #7504 where a single weed mount FUSE instance
does not fully utilize node network bandwidth when reading large files.

Changes:
- Add -concurrentReaders mount option (default: 16) to control the
  maximum number of parallel chunk fetches during read operations
- Implement parallel section reading in ChunkGroup.ReadDataAt() using
  errgroup for better throughput when reading across multiple sections
- Enhance ReaderCache with MaybeCacheMany() to prefetch multiple chunks
  ahead in parallel during sequential reads (now prefetches 4 chunks)
- Increase ReaderCache limit dynamically based on concurrentReaders
  to support higher read parallelism

The bottleneck was that chunks were being read sequentially even when
they reside on different volume servers. By introducing parallel chunk
fetching, a single mount instance can now better saturate available
network bandwidth.

Fixes: #7504
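
As a rough illustration of the pattern (not the actual SeaweedFS code), a section-parallel read with errgroup might look like the sketch below; `section`, `fetch`, and `readDataAtParallel` are made-up stand-ins for the real ChunkGroup internals:

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// section is a hypothetical stand-in for one contiguous slice of a file.
type section struct {
	start, stop int64 // byte range [start, stop) within the file
}

// fetch simulates pulling one section from a volume server.
func (s section) fetch(ctx context.Context, dst []byte) error {
	for i := range dst {
		dst[i] = byte('x') // placeholder payload
	}
	return ctx.Err()
}

// readDataAtParallel fans out one goroutine per overlapping section,
// bounded by maxReaders goroutines, and waits for all of them.
func readDataAtParallel(sections []section, buf []byte, offset int64, maxReaders int) error {
	g, ctx := errgroup.WithContext(context.Background())
	g.SetLimit(maxReaders) // cap goroutine fan-out, like -concurrentReaders
	for _, sec := range sections {
		sec := sec // capture loop variable (needed before Go 1.22)
		g.Go(func() error {
			// each section writes a disjoint slice of buf, so no locking is needed
			return sec.fetch(ctx, buf[sec.start-offset:sec.stop-offset])
		})
	}
	return g.Wait() // the first non-nil error wins, per errgroup semantics
}

func main() {
	buf := make([]byte, 8)
	err := readDataAtParallel([]section{{0, 4}, {4, 8}}, buf, 0, 16)
	fmt.Println(err, string(buf))
}
```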

* fmt

* Address review comments: make prefetch configurable, improve error handling

Changes:
1. Add DefaultPrefetchCount constant (4) to reader_at.go
2. Add GetPrefetchCount() method to ChunkGroup that derives prefetch count
   from concurrentReaders (1/4 ratio, min 1, max 8)
3. Pass prefetch count through NewChunkReaderAtFromClient
4. Fix error handling in readDataAtParallel to prioritize errgroup error
5. Update all callers to use DefaultPrefetchCount constant

For mount operations, prefetch scales with -concurrentReaders:
- concurrentReaders=16 (default) -> prefetch=4
- concurrentReaders=32 -> prefetch=8 (capped)
- concurrentReaders=4 -> prefetch=1

For non-mount paths (WebDAV, query engine, MQ), uses DefaultPrefetchCount.
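
The mapping above can be reconstructed as a small helper; this is a sketch derived from the description, not the real ChunkGroup method:

```go
// DefaultPrefetchCount mirrors the constant described above,
// used by non-mount callers (WebDAV, query engine, MQ).
const DefaultPrefetchCount = 4

// prefetchCount derives prefetch depth as concurrentReaders/4, clamped to [1, 8]:
// 16 -> 4, 32 -> 8 (capped), 4 -> 1.
func prefetchCount(concurrentReaders int) int {
	n := concurrentReaders / 4
	if n < 1 {
		return 1
	}
	if n > 8 {
		return 8
	}
	return n
}
```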

* fmt

* Refactor: use variadic parameter instead of new function name

Use NewChunkGroup with optional concurrentReaders parameter instead of
creating a separate NewChunkGroupWithConcurrency function.

This maintains backward compatibility - existing callers without the
parameter get the default of 16 concurrent readers.

* Use explicit concurrentReaders parameter instead of variadic

* Refactor: use MaybeCache with count parameter instead of new MaybeCacheMany function

* Address nitpick review comments

- Add upper bound (128) on concurrentReaders to prevent excessive goroutine fan-out
- Cap readerCacheLimit at 256 accordingly
- Fix SetChunks: use Lock() instead of RLock() since we are writing to group.sections
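
A sketch of those bounds with illustrative names; the 2x cache scaling is an assumption, since the commit text only states the two caps:

```go
const (
	maxConcurrentReaders = 128 // hard cap from the review comment
	maxReaderCacheLimit  = 256 // readerCacheLimit is capped accordingly
)

func clampReaders(n int) (readers, cacheLimit int) {
	if n < 1 {
		n = 1 // assumed floor; the commit only states the upper bound
	}
	if n > maxConcurrentReaders {
		n = maxConcurrentReaders
	}
	cacheLimit = 2 * n // assumption: cache scales with readers, up to the cap
	if cacheLimit > maxReaderCacheLimit {
		cacheLimit = maxReaderCacheLimit
	}
	return n, cacheLimit
}
```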

SeaweedMQ Message Queue on SeaweedFS (WIP, not ready)

What are the use cases it is designed for?

Message queues are like water pipes. Messages flow in the pipes to their destinations.

However, what if a flood comes? Of course, you can increase the number of partitions, add more brokers, restart, and watch the traffic level closely.

Sometimes the flood is expected. For example, backfilling old data in batches before switching to live messages. You may want to provision enough brokers to handle the burst, then scale them back down later to cut costs.

SeaweedMQ is designed for use cases that need to:

  • Receive and save a large number of messages.
  • Handle traffic spikes automatically.

What is special about SeaweedMQ?

  • Separate computation and storage nodes to scale independently.
    • Unlimited storage space by adding volume servers.
    • Unlimited message brokers to handle incoming messages.
    • Offline messages can be operated on as normal files.
  • Scale up and down with automatic splitting and merging of topics.
    • Topics can automatically split into segments when traffic increases, and vice versa.
  • Pass messages by reference instead of copying.
    • Clients can optionally upload the messages first and just submit the references.
    • This drastically reduces the broker load.
  • Stateless brokers
    • All brokers are equal. One broker is dynamically picked as the leader.
    • Add brokers at any time.
    • Brokers can be rolling-restarted or removed at a controlled pace.

Design

How does it work?

Brokers are just computation nodes without storage. When a broker starts, it reports itself to the masters. Among all the brokers, the masters select one as the leader.

A topic needs to define a partition key for its messages.

Messages for a topic are divided into segments. One segment covers a range of partitions. A segment can be split into 2 segments, or 2 neighboring segments can be merged back into one.
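
A minimal sketch of this model, with an illustrative Segment type that is not the actual SeaweedMQ definition:

```go
// Segment covers a half-open range of partition hashes.
type Segment struct {
	RangeStart, RangeStop int32 // partitions [RangeStart, RangeStop)
}

// Split divides a segment into two neighbors at the midpoint of its range.
func (s Segment) Split() (Segment, Segment) {
	mid := s.RangeStart + (s.RangeStop-s.RangeStart)/2
	return Segment{s.RangeStart, mid}, Segment{mid, s.RangeStop}
}

// Merge rejoins two neighboring segments; ok is false if their ranges don't touch.
func Merge(a, b Segment) (merged Segment, ok bool) {
	if a.RangeStop != b.RangeStart {
		return Segment{}, false
	}
	return Segment{a.RangeStart, b.RangeStop}, true
}
```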

At write time, the client asks the broker leader for a few brokers to process the segment.

The broker leader checks whether the segment already has brokers assigned. If not, it selects a few brokers based on their load, saves the selection to the filer, and tells the client.

The client will write the messages for this segment to the selected brokers.
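
Putting the write path together, here is a sketch of the client side, reusing the illustrative Segment type above; the interfaces and names are hypothetical, not the real SeaweedMQ API:

```go
// Broker is a hypothetical handle to one assigned broker.
type Broker interface {
	Write(seg Segment, key, value []byte) error
}

// BrokerLeader is a hypothetical handle to the elected leader.
type BrokerLeader interface {
	// LookupOrAssign returns brokers already assigned to the segment, or
	// picks lightly loaded ones, persists the choice to the filer, and returns them.
	LookupOrAssign(seg Segment) ([]Broker, error)
}

// publish shows the client side of the write path described above.
func publish(leader BrokerLeader, seg Segment, key, value []byte) error {
	brokers, err := leader.LookupOrAssign(seg)
	if err != nil {
		return err
	}
	for _, b := range brokers {
		if err := b.Write(seg, key, value); err != nil {
			return err // failover handling is described in the next section
		}
	}
	return nil
}
```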

Failover

The broker leader does not contain any state. If it fails, the masters will select a different broker.

For a segment, if any one of the selected brokers is down, the remaining brokers should write any received messages to the filer and close the segment to the clients.

Then the clients should start a new segment. The masters should assign other healthy brokers to handle the new segment.

So any broker can go down without losing data.
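
A sketch of the broker-side reaction, again with hypothetical types layered on the illustrative Segment above; the real protocol details are not specified here:

```go
// FilerClient is a hypothetical handle to the filer.
type FilerClient interface {
	WriteSegment(seg Segment, msgs [][]byte) error
}

// ClientConn is a hypothetical connection to one producing client.
type ClientConn interface {
	CloseSegment(seg Segment)
}

// onPeerDown sketches what a surviving broker does when a peer assigned
// to the same segment fails: persist what it has buffered, then force
// clients onto a fresh segment that the masters place on healthy brokers.
func onPeerDown(seg Segment, buffered [][]byte, filer FilerClient, clients []ClientConn) error {
	if err := filer.WriteSegment(seg, buffered); err != nil {
		return err
	}
	for _, c := range clients {
		c.CloseSegment(seg)
	}
	return nil
}
```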

Auto Split or Merge

(The idea is learned from Pravega.)

Each broker should report its traffic load to the broker leader periodically.

If any segment has too much load, the broker leader will ask the brokers to tell the client to close the current segment and create two new segments.

If 2 neighboring segments have a combined load below the average load per segment, the broker leader will ask the brokers to tell the client to close these 2 segments and create a single merged segment.
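
These two rules can be sketched as a simple planning pass, reusing the illustrative Segment type from above; the split threshold is a made-up parameter, not SeaweedMQ's actual tuning:

```go
// segLoad pairs a segment with its reported load.
type segLoad struct {
	seg  Segment
	load float64
}

// plan applies the two rules above: split any overloaded segment, and merge
// neighboring pairs whose combined load is below the per-segment average.
func plan(segs []segLoad, avg, splitThreshold float64) (splits []Segment, merges [][2]Segment) {
	for i, s := range segs {
		if s.load > splitThreshold {
			splits = append(splits, s.seg)
			continue
		}
		if i+1 < len(segs) &&
			segs[i].seg.RangeStop == segs[i+1].seg.RangeStart && // neighbors only
			segs[i].load+segs[i+1].load < avg {
			merges = append(merges, [2]Segment{segs[i].seg, segs[i+1].seg})
		}
	}
	return
}
```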