* filer.sync: send log file chunk fids to clients for direct volume server reads
Instead of the server reading persisted log files from volume servers, parsing
entries, and streaming them over gRPC (a serial bottleneck), clients that opt in
via client_supports_metadata_chunks receive log file chunk references (fids)
and read directly from volume servers in parallel.
New proto messages:
- LogFileChunkRef: chunk fids + timestamp + filer ID for one log file
- SubscribeMetadataRequest.client_supports_metadata_chunks: client opt-in
- SubscribeMetadataResponse.log_file_refs: server sends refs during backlog
Server changes:
- CollectLogFileRefs: lists log files and returns chunk refs without any
volume server I/O (metadata-only operation)
- SubscribeMetadata/SubscribeLocalMetadata: when client opts in, sends refs
during persisted log phase, then falls back to normal streaming for
in-memory events
Client changes:
- ReadLogFileRefs: reads log files from volume servers, parses entries,
filters by path prefix, invokes processEventFn
- MetadataFollowOption.LogFileReaderFn: factory for chunk readers,
enables metadata chunks when non-nil
- Both filer_pb_tail.go and meta_aggregator.go recv loops accumulate
refs then process them at the disk→memory transition
Backward compatible: old clients don't set the flag and get the
existing behavior.
Ref: #8771
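The recv-loop accumulation above can be sketched as follows. The struct
fields mirror the proto description; recv, readRefs, and processEvent are
hypothetical hooks standing in for the actual SeaweedFS client plumbing:

```go
package metadatafollow

// Illustrative stand-ins for the generated proto types; only the fields
// described above are modeled.
type LogFileChunkRef struct {
	ChunkFids []string // chunk fids of one persisted log file
	TsNs      int64    // timestamp of the log file
	FilerId   string   // which filer wrote the file
}

type SubscribeMetadataResponse struct {
	LogFileRefs []*LogFileChunkRef // set only during the backlog phase
	TsNs        int64              // set for normally streamed events
}

// recvLoop accumulates refs while the server walks persisted logs, then
// drains them with direct volume server reads at the disk→memory
// transition, signaled here by the first response without refs.
func recvLoop(
	recv func() (*SubscribeMetadataResponse, error),
	readRefs func([]*LogFileChunkRef) error,
	processEvent func(*SubscribeMetadataResponse) error,
) error {
	var pending []*LogFileChunkRef
	for {
		resp, err := recv()
		if err != nil {
			return err
		}
		if len(resp.LogFileRefs) > 0 {
			pending = append(pending, resp.LogFileRefs...)
			continue
		}
		if pending != nil {
			if err := readRefs(pending); err != nil {
				return err
			}
			pending = nil
		}
		if err := processEvent(resp); err != nil {
			return err
		}
	}
}
```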
* filer.sync: merge entries across filers in timestamp order on client side
ReadLogFileRefs now groups refs by filer ID and merges entries from
multiple filers using a min-heap priority queue — the same algorithm
the server uses in OrderedLogVisitor + LogEntryItemPriorityQueue.
This ensures events are processed in correct timestamp order even when
log files from different filers have interleaved timestamps. Single-filer
case takes the fast path (no heap allocation).
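A compilable sketch of that merge, assuming a hypothetical logEntry type
and one entry channel per filer (the real code operates on the filer's
LogEntry protos; this shows the min-heap idea, not the exact types):

```go
package logmerge

import "container/heap"

// logEntry is an illustrative stand-in for the decoded log entry; only
// the timestamp matters for ordering.
type logEntry struct {
	TsNs int64
	Data []byte
}

// heapItem pairs the head entry from one filer with the channel that
// yields the rest of that filer's entries.
type heapItem struct {
	entry *logEntry
	next  <-chan *logEntry
}

type entryHeap []heapItem

func (h entryHeap) Len() int           { return len(h) }
func (h entryHeap) Less(i, j int) bool { return h[i].entry.TsNs < h[j].entry.TsNs }
func (h entryHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *entryHeap) Push(x any)        { *h = append(*h, x.(heapItem)) }
func (h *entryHeap) Pop() any {
	old := *h
	item := old[len(old)-1]
	*h = old[:len(old)-1]
	return item
}

// mergeByTimestamp drains one channel per filer and emits entries in
// global timestamp order, mirroring the server's OrderedLogVisitor.
func mergeByTimestamp(perFiler []<-chan *logEntry, emit func(*logEntry) error) error {
	h := &entryHeap{}
	for _, ch := range perFiler {
		if e, ok := <-ch; ok {
			heap.Push(h, heapItem{entry: e, next: ch})
		}
	}
	for h.Len() > 0 {
		item := heap.Pop(h).(heapItem)
		if err := emit(item.entry); err != nil {
			return err
		}
		if e, ok := <-item.next; ok {
			heap.Push(h, heapItem{entry: e, next: item.next})
		}
	}
	return nil
}
```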
* filer.sync: integration tests for direct-read metadata chunks
Three test categories:
1. Merge correctness (TestReadLogFileRefsMergeOrder):
Verifies entries from 3 filers are delivered in strict timestamp order,
matching the server-side OrderedLogVisitor guarantee.
2. Path filtering (TestReadLogFileRefsPathFilter):
Verifies client-side path prefix filtering works correctly.
3. Throughput comparison (TestDirectReadVsServerSideThroughput):
3 filers × 7 files × 300 events = 6300 events, 2ms per file read:
server-side:  6300 events  218ms   28,873 events/sec
direct-read:  6300 events   51ms  123,566 events/sec  (4.3x)
parallel:     6300 events   17ms  378,628 events/sec  (13.1x)
Direct-read eliminates gRPC send overhead per event (4.3x).
Parallel per-filer reading eliminates serial file I/O (13.1x).
* filer.sync: parallel per-filer reads with prefetching in ReadLogFileRefs
ReadLogFileRefs now has two levels of I/O overlap:
1. Cross-filer parallelism: one goroutine per filer reads its files
concurrently. Entries feed into per-filer channels, merged by the
main goroutine via min-heap (same ordering guarantee as the server's
OrderedLogVisitor).
2. Within-filer prefetching: while the current file's entries are being
consumed by the merge heap, the next file is already being read from
the volume server in a background goroutine (sketched below).
Single-filer fast path avoids the heap and channels.
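A sketch of the within-filer prefetch pattern under illustrative types;
fileRef, readFile, and consume stand in for the chunk-ref read path:

```go
package logread

// Illustrative stand-ins: fileRef is one log file's chunk refs, and
// readFile performs the volume server read + entry parse for it.
type logEntry struct{ TsNs int64 }
type fileRef struct{ Name string }
type fileResult struct {
	entries []*logEntry
	err     error
}

// readWithPrefetch consumes one filer's files in order while the next
// file's volume server read runs in a background goroutine.
func readWithPrefetch(
	refs []*fileRef,
	readFile func(*fileRef) ([]*logEntry, error),
	consume func(*logEntry) error,
) error {
	prefetch := func(ref *fileRef) <-chan fileResult {
		ch := make(chan fileResult, 1)
		go func() {
			entries, err := readFile(ref)
			ch <- fileResult{entries, err}
		}()
		return ch
	}
	if len(refs) == 0 {
		return nil
	}
	pending := prefetch(refs[0])
	for i := range refs {
		res := <-pending
		if res.err != nil {
			return res.err
		}
		if i+1 < len(refs) {
			// Kick off the next read before consuming this file.
			pending = prefetch(refs[i+1])
		}
		for _, e := range res.entries {
			if err := consume(e); err != nil {
				return err
			}
		}
	}
	return nil
}
```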
Test results (3 filers × 7 files × 300 events, 2ms per file read):
server-side sequential:  6300 events  212ms   29,760 events/sec
parallel + prefetch:     6300 events   36ms  177,443 events/sec
Speedup: 6.0x
* filer.sync: address all review comments on metadata chunks PR
Critical fixes:
- sendLogFileRefs: bypass pipelinedSender, send directly on gRPC stream.
Ref messages have TsNs=0 and were being incorrectly batched into the
Events field by the adaptive batching logic, corrupting ref delivery.
- readLogFileEntries: use io.ReadFull instead of reader.Read to prevent
partial reads from corrupting size values or protobuf data (see the
sketch after this list).
- Error handling: only skip chunk-not-found errors (matching server-side
isChunkNotFoundError). Other I/O or decode failures are propagated so
the follower can retry.
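The io.ReadFull fix, sketched under the assumption that log files store
each entry as a 4-byte big-endian size prefix followed by the protobuf
payload (onEntry is a hypothetical stand-in for unmarshal + filter +
processEventFn):

```go
package logread

import (
	"encoding/binary"
	"fmt"
	"io"
)

// readLogFileEntriesSketch decodes the size-prefixed entries of one log
// file. io.ReadFull is essential: a bare reader.Read may return fewer
// bytes than requested without an error, which would corrupt the size
// value or truncate the protobuf payload.
func readLogFileEntriesSketch(r io.Reader, onEntry func(data []byte) error) error {
	sizeBuf := make([]byte, 4)
	for {
		if _, err := io.ReadFull(r, sizeBuf); err != nil {
			if err == io.EOF {
				return nil // clean end of file
			}
			return fmt.Errorf("read size prefix: %w", err)
		}
		size := binary.BigEndian.Uint32(sizeBuf)
		data := make([]byte, size)
		if _, err := io.ReadFull(r, data); err != nil {
			// io.ErrUnexpectedEOF here means a truncated entry.
			return fmt.Errorf("read entry body: %w", err)
		}
		if err := onEntry(data); err != nil {
			return err
		}
	}
}
```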
High-priority fixes:
- CollectLogFileRefs: remove incorrect +24h padding from stopTime. The
extra day caused unnecessary log file refs to be collected.
- Path filtering: ReadLogFileRefs now accepts PathFilter struct with
PathPrefix, AdditionalPathPrefixes, and DirectoriesToWatch. Uses
util.Join for path construction (avoids "//foo" on root). Excludes
/.system/log/ internal entries. Matches server-side
eachEventNotificationFn filtering logic.
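An illustrative approximation of that matching logic; the real
eachEventNotificationFn checks may differ in details such as
empty-prefix handling:

```go
package logread

import "strings"

// PathFilter mirrors the struct described above; the matching below is
// an illustrative approximation of the server-side filtering, not a
// copy of eachEventNotificationFn.
type PathFilter struct {
	PathPrefix             string
	AdditionalPathPrefixes []string
	DirectoriesToWatch     []string
}

func (f *PathFilter) matches(fullPath string) bool {
	// Internal subscription log entries are never forwarded.
	if strings.HasPrefix(fullPath, "/.system/log/") {
		return false
	}
	// No filters configured: pass everything through (assumed).
	if f.PathPrefix == "" && len(f.AdditionalPathPrefixes) == 0 &&
		len(f.DirectoriesToWatch) == 0 {
		return true
	}
	if f.PathPrefix != "" && strings.HasPrefix(fullPath, f.PathPrefix) {
		return true
	}
	for _, p := range f.AdditionalPathPrefixes {
		if strings.HasPrefix(fullPath, p) {
			return true
		}
	}
	for _, d := range f.DirectoriesToWatch {
		if strings.HasPrefix(fullPath, d) {
			return true
		}
	}
	return false
}
```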
Medium-priority fixes:
- CollectLogFileRefs: accept context.Context, propagate to
ListDirectoryEntries calls for cancellation support.
- NewChunkStreamReaderFromLookup: accept context.Context, propagate to
doNewChunkStreamReader.
Test fixes:
- Check error returns from ReadLogFileRefs in all test call sites.
---------
Co-authored-by: Copilot <copilot@github.com>
* filer.sync: pipelined subscription with adaptive batching for faster catch-up
The SubscribeMetadata pipeline was fully serial: reading a log entry from a
volume server, unmarshaling, filtering, and calling stream.Send() all happened
one at a time. stream.Send() blocked the entire pipeline until the client
acknowledged each event, limiting throughput to ~80 events/sec regardless of
the -concurrency setting.
Four server-side optimizations that stack:
1. Pipelined sender: decouple stream.Send() from the read loop via a buffered
channel (1024 messages). A dedicated goroutine handles gRPC delivery while
the reader continues processing the next events.
2. Adaptive batching: when event timestamps are >2min behind wall clock
(backlog catch-up), drain multiple events from the channel and pack them
into a single stream.Send() using a new `repeated events` field on
SubscribeMetadataResponse. When events are recent (real-time), send
one-by-one for low latency. Old clients ignore the new field (backward
compatible).
3. Persisted log readahead: run the OrderedLogVisitor in a background
goroutine so volume server I/O for the next log file overlaps with event
processing and gRPC delivery.
4. Event-driven aggregated subscription: replace time.Sleep(1127ms) polling
in SubscribeMetadata with notification-driven wake-up using the
MetaLogBuffer subscriber mechanism, reducing real-time latency from
~1127ms to sub-millisecond.
Combined, the first three form a 3-stage pipeline:
[Volume I/O → readahead buffer] → [Filter → send buffer] → [gRPC Send]
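Optimizations 1 and 2 combine roughly as below. The types are
illustrative; the 1024-message buffer and 2-minute threshold come from
the description above, while the batch cap of 1024 is an assumption:

```go
package pipelinedsend

import "time"

// Illustrative stand-ins for the generated response and the gRPC stream.
type response struct {
	TsNs   int64
	Events []*response // the new repeated events field on the response
}
type stream interface{ Send(*response) error }

type pipelinedSender struct {
	sendCh chan *response
	errCh  chan error
}

// newPipelinedSender decouples stream.Send from the read loop: the
// reader enqueues responses while a dedicated goroutine delivers them.
func newPipelinedSender(s stream) *pipelinedSender {
	p := &pipelinedSender{
		sendCh: make(chan *response, 1024),
		errCh:  make(chan error, 1),
	}
	go p.run(s)
	return p
}

func (p *pipelinedSender) run(s stream) {
	for resp := range p.sendCh {
		// Adaptive batching: events far behind wall clock are backlog
		// catch-up, so drain the queue into one Send; recent events
		// go out one-by-one for low latency.
		if time.Since(time.Unix(0, resp.TsNs)) > 2*time.Minute {
			batch := &response{Events: []*response{resp}}
		drain:
			for len(batch.Events) < 1024 { // assumed batch cap
				select {
				case next, ok := <-p.sendCh:
					if !ok {
						break drain
					}
					batch.Events = append(batch.Events, next)
				default:
					break drain
				}
			}
			resp = batch
		}
		if err := s.Send(resp); err != nil {
			p.errCh <- err
			return
		}
	}
}
```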
Test results (simulated backlog with 50µs gRPC latency per Send):
direct (old):       2100 events  2100 sends  168ms  12,512 events/sec
pipelined+batched:  2100 events    14 sends   40ms  52,856 events/sec
Speedup: 4.2x single-stream throughput
Ref: #8771
* filer.sync: require client opt-in for batch event delivery
Add ClientSupportsBatching field to SubscribeMetadataRequest. The server
only packs events into the Events batch field when the client explicitly
sets this flag to true. Old clients (Java SDK, third-party) that don't
set the flag get one-event-per-Send, preserving backward compatibility.
All Go callers (FollowMetadata, MetaAggregator) set the flag to true
since their recv loops already unpack batched events.
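Sketched client side, with illustrative request fields that mirror the
proto description (only ClientSupportsBatching is the new field; the
others are assumed for context):

```go
package follow

// Illustrative mirror of SubscribeMetadataRequest; field names follow
// the proto description, not the generated code.
type SubscribeMetadataRequest struct {
	ClientName             string
	PathPrefix             string
	SinceNs                int64
	ClientSupportsBatching bool
}

// newRequest opts in to batch delivery. Clients that leave the flag
// false (Java SDK, third-party) keep one-event-per-Send semantics.
func newRequest(clientName, prefix string, sinceNs int64) *SubscribeMetadataRequest {
	return &SubscribeMetadataRequest{
		ClientName:             clientName,
		PathPrefix:             prefix,
		SinceNs:                sinceNs,
		ClientSupportsBatching: true, // recv loop already unpacks Events
	}
}
```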
* filer.sync: clear batch Events field after Send to release references
Prevents the envelope message from holding references to the rest of the
batch after gRPC serialization, allowing the GC to collect them sooner.
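A minimal sketch of why the clearing matters, assuming the envelope
message is reused across Sends (illustrative types):

```go
package pipelinedsend

type event struct{ Data []byte }
type envelope struct{ Events []*event }

// sendBatches reuses one envelope across Sends. Clearing Events after
// each Send matters because the reused envelope would otherwise keep
// the previous batch reachable until the next batch overwrites it.
func sendBatches(send func(*envelope) error, batches <-chan []*event) error {
	var env envelope
	for batch := range batches {
		env.Events = batch
		if err := send(&env); err != nil {
			return err
		}
		env.Events = nil // release references; GC can collect sooner
	}
	return nil
}
```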
* filer.sync: fix Send deadlock, add error propagation test, event-driven local subscribe
- pipelinedSender.Send: add case <-s.done to unblock when the sender
goroutine exits (fixes a deadlock when errCh was already consumed by a
prior Send; see the sketch after this list).
- pipelinedSender.reportErr: remove for-range drain on sendCh that could
block indefinitely. Send() now detects exit via s.done instead.
- SubscribeLocalMetadata: replace remaining time.Sleep(1127ms) in the
gap-detected-no-memory-data path with event-driven listenersCond.Wait(),
consistent with the rest of the subscription paths.
- Add TestPipelinedSenderErrorPropagation: verifies error surfaces via
Send and Close when the underlying stream fails.
- Replace goto with labeled break in test simulatePipeline.
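The fixed Send, sketched with only the fields needed to show the select
(illustrative types; done is closed when the sender goroutine exits):

```go
package pipelinedsend

import "errors"

type response struct{}

// Minimal illustrative shape of pipelinedSender for the Send fix.
type pipelinedSender struct {
	sendCh chan *response
	errCh  chan error
	done   chan struct{} // closed when the sender goroutine exits
}

func (s *pipelinedSender) Send(resp *response) error {
	select {
	case s.sendCh <- resp:
		return nil
	case err := <-s.errCh:
		return err
	case <-s.done:
		// The sender goroutine already exited and a prior Send
		// consumed errCh; without this case the enqueue blocks
		// forever.
		return errors.New("pipelined sender closed")
	}
}
```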
* filer.sync: check error returns in test code
- direct_send: check slowStream.Send error return
- pipelined_batched_send: check sender.Close error return
- simulatePipeline: return error from sender.Close, propagate to callers
---------
Co-authored-by: Copilot <copilot@github.com>
* filer.sync: fix race condition on first checkpoint save
Initialize lastWriteTime to time.Now() instead of zero time to prevent
the first checkpoint save from being triggered immediately when the
first event arrives. This gives async jobs time to complete and update
the watermark before the checkpoint is saved.
Previously, the zero time caused lastWriteTime.Add(3s).Before(now) to
be true on the first event, triggering an immediate checkpoint save
attempt. But since jobs are processed asynchronously, the watermark
was still 0 (initial value), causing the save to be skipped due to
the 'if offsetTsNs == 0 { return nil }' check.
Fixes #7717
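An illustrative reduction of the race; the watermark bookkeeping and
save hook are stand-ins for the async-job machinery:

```go
package checkpoint

import (
	"sync/atomic"
	"time"
)

// Illustrative checkpoint gate. Initializing lastWriteTime to
// time.Now() (not the zero time) means the first save can only trigger
// after async jobs have had a full interval to advance the watermark.
type checkpointer struct {
	lastWriteTime time.Time
	watermarkTsNs atomic.Int64 // updated by async jobs as they complete
	interval      time.Duration
}

func newCheckpointer() *checkpointer {
	return &checkpointer{
		lastWriteTime: time.Now(), // was zero time: triggered on first event
		interval:      3 * time.Second,
	}
}

func (c *checkpointer) maybeSave(save func(tsNs int64) error) error {
	now := time.Now()
	if !c.lastWriteTime.Add(c.interval).Before(now) {
		return nil // interval not elapsed yet
	}
	offsetTsNs := c.watermarkTsNs.Load()
	if offsetTsNs == 0 {
		return nil // watermark not advanced yet (the old skipped-save path)
	}
	c.lastWriteTime = now
	return save(offsetTsNs)
}
```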
* filer.sync: save checkpoint on graceful shutdown
Add graceful shutdown handling to save the final checkpoint when
filer.sync is terminated. Previously, any sync progress within the
last 3-second checkpoint interval would be lost on shutdown.
Changes:
- Add syncState struct to track current processor and offset save info
- Add atomic pointers syncStateA2B and syncStateB2A for both directions
- Register grace.OnInterrupt hook to save checkpoints on shutdown
- Modify doSubscribeFilerMetaChanges to update sync state atomically
This ensures that when filer.sync is restarted, it resumes from the
correct position instead of potentially replaying old events.
Fixes #7717
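A sketch of the shutdown path, assuming SeaweedFS's grace.OnInterrupt
helper; the syncState shape and saveOffset hook are illustrative:

```go
package filersync

import (
	"sync/atomic"

	"github.com/seaweedfs/seaweedfs/weed/util/grace"
)

// syncState captures what the shutdown hook needs to persist one final
// checkpoint; the field shapes here are illustrative.
type syncState struct {
	offsetTsNs int64
	saveOffset func(tsNs int64) error
}

// One atomic pointer per sync direction, updated as events are
// processed by doSubscribeFilerMetaChanges.
var syncStateA2B atomic.Pointer[syncState]
var syncStateB2A atomic.Pointer[syncState]

// registerShutdownCheckpoint installs an interrupt hook that flushes
// the latest known offsets so a restart resumes from the right position
// instead of replaying up to 3 seconds of already-synced events.
func registerShutdownCheckpoint() {
	grace.OnInterrupt(func() {
		for _, st := range []*syncState{syncStateA2B.Load(), syncStateB2A.Load()} {
			if st == nil || st.offsetTsNs == 0 {
				continue
			}
			_ = st.saveOffset(st.offsetTsNs) // best effort on shutdown
		}
	})
}
```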