seaweedFS/weed/pb/filer_pb_direct_read.go
Chris Lu ced2236cc6 Adjust rename events metadata format (#8854)
* rename metadata events

* fix subscription filter to use NewEntry.Name for rename path matching

The server-side subscription filter constructed the new path using
OldEntry.Name instead of NewEntry.Name when checking if a rename
event's destination matches the subscriber's path prefix. This could
cause events to be incorrectly filtered when a rename changes the
file name.
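
A minimal sketch of the corrected destination check (the helper name is
hypothetical; util.NewFullPath and the EventNotification fields are from
the codebase):

    // renameTargetMatches reports whether a rename event's destination
    // falls under the subscriber's path prefix.
    func renameTargetMatches(notification *filer_pb.EventNotification, pathPrefix string) bool {
        if notification.NewEntry == nil {
            return false
        }
        // Build the destination from the *new* entry's name; using
        // OldEntry.Name here misses renames that change the file name.
        newFullPath := string(util.NewFullPath(notification.NewParentPath, notification.NewEntry.Name))
        return strings.HasPrefix(newFullPath, pathPrefix)
    }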

* fix bucket events to handle rename of bucket directories

onBucketEvents only checked IsCreate and IsDelete. A bucket directory
rename via AtomicRenameEntry now emits a single rename event (both
OldEntry and NewEntry non-nil), which matched neither check. Handle
IsRename by deleting the old bucket and creating the new one.
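
A sketch of the classification (deleteBucket/createBucket stand in for
the actual bucket handlers; IsCreate/IsDelete/IsRename are the helpers
named above):

    message := resp.EventNotification
    switch {
    case filer_pb.IsRename(resp): // OldEntry and NewEntry both non-nil
        deleteBucket(message.OldEntry.Name)
        createBucket(message.NewEntry.Name)
    case filer_pb.IsCreate(resp):
        createBucket(message.NewEntry.Name)
    case filer_pb.IsDelete(resp):
        deleteBucket(message.OldEntry.Name)
    }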

* fix replicator to handle rename events across directory boundaries

Two issues fixed:

1. The replicator filtered events by checking if the key (old path)
   was under the source directory. Rename events now use the old path
   as key, so renames from outside into the watched directory were
   silently dropped. Now both old and new paths are checked, and
   cross-boundary renames are converted to create or delete.

2. NewParentPath was passed to the sink without remapping to the
   sink's target directory structure, causing the sink to write
   entries at the wrong location. Now NewParentPath is remapped
   alongside the key, as sketched below.
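
A sketch of the combined handling, with hypothetical names; the sink
calls are simplified from the actual ReplicationSink interface, and
isEqualOrUnder(path, dir) is the boundary check described later:

    oldKey := key // rename events carry the old path as the key
    newKey := message.NewParentPath + "/" + message.NewEntry.Name
    oldInside := isEqualOrUnder(oldKey, sourceDir)
    newInside := isEqualOrUnder(newKey, sourceDir)
    switch {
    case oldInside && newInside:
        // in-scope rename: remap both the key and NewParentPath
        message.NewParentPath = remapToSink(message.NewParentPath)
        return sink.UpdateEntry(remapToSink(oldKey), message)
    case oldInside:
        // renamed out of the watched tree: only the old side applies
        return sink.DeleteEntry(remapToSink(oldKey))
    case newInside:
        // renamed into the watched tree: treat as a create
        return sink.CreateEntry(remapToSink(newKey), message.NewEntry)
    }
    return nil // unrelated to this replication source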

* fix filer sync to handle rename events crossing directory boundaries

The early directory-prefix filter only checked resp.Directory (old
parent). Rename events now carry the old parent as Directory, so
renames from outside the source path into it were dropped before
reaching the existing cross-boundary handling logic. Check both old
and new directories against sourcePath and excludePaths so the
downstream old-key/new-key logic can properly convert these to
create or delete operations.
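
A sketch of the widened early filter (names hypothetical; excludePaths
handling elided):

    oldDir := resp.Directory // rename events carry the old parent here
    newDir := resp.EventNotification.NewParentPath
    if !isEqualOrUnder(oldDir, sourcePath) && !isEqualOrUnder(newDir, sourcePath) {
        return nil // neither side concerns this sync pair
    }
    // fall through to the existing old-key/new-key handling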

* fix metadata event path matching

* fix metadata event consumers for rename targets

* Fix replication rename target keys

Logical rename events now reach replication sinks with distinct source
and target paths.

Handle non-filer sinks as delete-plus-create on the translated target
key, and make the rename fallback path create at the translated target
key too.

Add focused tests covering non-filer renames, filer rename updates, and
the fallback path.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
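
The key translation itself is a prefix swap between the source and sink
roots; a minimal sketch with hypothetical names:

    // translateKey maps a source-tree path into the sink's layout, e.g.
    // /source/dir/a.txt -> /sink/dir/a.txt. Both the old and new keys
    // of a rename must pass through it.
    func translateKey(key, sourceDir, sinkDir string) string {
        return sinkDir + strings.TrimPrefix(key, sourceDir)
    }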

* Fix filer sync rename path scoping

Use directory-boundary matching instead of raw prefix checks when
classifying source and target paths during filer sync.

Also apply excludePaths per side so renames across excluded boundaries
downgrade cleanly to create/delete instead of being misclassified as
in-scope updates.

Add focused tests for boundary matching and rename classification.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Fix replicator directory boundary checks

Use directory-boundary matching instead of raw prefix checks when
deciding whether a source or target path is inside the watched tree or
an excluded subtree.

This prevents sibling paths such as /foo and /foobar from being
misclassified during rename handling, and preserves the earlier
rename-target-key fix.

Add focused tests for boundary matching and rename classification
across sibling/excluded directories.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Fix etc-remote rename-out handling

Use boundary-safe source/target directory membership when classifying
metadata events under DirectoryEtcRemote.

This prevents rename-out events from being processed as config updates,
while still treating them as removals where appropriate for the remote
sync and remote gateway command paths.

Add focused tests for update/removal classification and sibling-prefix
handling.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Defer rename events until commit

Queue logical rename metadata events during atomic and streaming
renames and publish them only after the transaction commits
successfully.

This prevents subscribers from seeing delete or logical rename events
for operations that later fail during delete or commit.

Also serialize notification.Queue swaps in rename tests and add
failure-path coverage.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
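
A sketch of the deferral, with hypothetical names:

    // Collect notifications instead of publishing them mid-transaction.
    var deferred []*filer_pb.EventNotification
    notify := func(n *filer_pb.EventNotification) {
        deferred = append(deferred, n)
    }

    if err := runRenameTransaction(notify); err != nil {
        return err // nothing was published for the failed rename
    }
    // Publish only after the commit succeeded.
    for _, n := range deferred {
        publishMetadataEvent(n)
    }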

* Skip descendant rename target lookups

Avoid redundant target lookups during recursive directory renames once
the destination subtree is known absent.

The recursive move path now inserts known-absent descendants directly,
and the test harness exercises prefixed directory listing so the
optimization is covered by a directory rename regression test.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Tighten rename review tests

Return filer_pb.ErrNotFound from the bucket tracking store test stub so
it follows the FilerStore contract, and add a webhook filter case for
same-name renames across parent directories.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix HardLinkId format verb in InsertEntryKnownAbsent error

HardLinkId is a byte slice. %d prints each byte as a decimal number,
which is not useful for an identifier. Use %x to match the log line
two lines above.
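
For example:

    id := []byte{0xde, 0xad, 0xbe, 0xef}
    fmt.Printf("%d\n", id) // [222 173 190 239], one decimal per byte
    fmt.Printf("%x\n", id) // deadbeef, a usable identifier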

* only skip descendant target lookup when source and dest use same store

moveFolderSubEntries unconditionally passed skipTargetLookup=true for
every descendant. This is safe when all paths resolve to the same
underlying store, but with path-specific store configuration a child's
destination may map to a different backend that already holds an entry
at that path. Use FilerStoreWrapper.SameActualStore to check per-child
and fall back to the full CreateEntry path when stores differ.
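
A sketch of the per-child decision (helper names hypothetical;
SameActualStore is the method named above, assumed here to take the
two paths):

    if wrapper.SameActualStore(oldChildPath, newChildPath) {
        // Same backend: the destination subtree is already known
        // absent, so skip the target lookup.
        err = insertKnownAbsent(ctx, newChild)
    } else {
        // A different backend may already hold an entry at this path:
        // take the full CreateEntry path with its existence check.
        err = createWithLookup(ctx, newChild)
    }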

* add nil and create edge-case tests for metadata event scope helpers

* extract pathIsEqualOrUnder into util.IsEqualOrUnder

Identical implementations existed in both replication/replicator.go and
command/filer_sync.go. Move to util.IsEqualOrUnder (alongside the
existing FullPath.IsUnder) and remove the duplicates.
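
A sketch of the extracted helper (argument order assumed):

    // IsEqualOrUnder reports whether path is dir itself or lies inside
    // it, so a sibling such as /foobar is never matched by /foo.
    func IsEqualOrUnder(path, dir string) bool {
        if path == dir {
            return true
        }
        if dir == "/" {
            return strings.HasPrefix(path, "/")
        }
        return strings.HasPrefix(path, dir+"/")
    }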

* use MetadataEventTargetDirectory for new-side directory in filer sync

The new-side directory checks and sourceNewKey computation used
message.NewParentPath directly. If NewParentPath were empty (legacy
events, older filer versions during rolling upgrades), sourceNewKey
would be wrong (/filename instead of /dir/filename) and the
UpdateEntry parent path rewrite would panic on slice bounds.

Derive targetDir once from MetadataEventTargetDirectory, which falls
back to resp.Directory when NewParentPath is empty, and use it
consistently for all new-side checks and the sink parent path.
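
A sketch of the fallback (MetadataEventTargetDirectory is the helper
named above; this mirrors the behavior described):

    func metadataEventTargetDirectory(resp *filer_pb.SubscribeMetadataResponse) string {
        if en := resp.EventNotification; en != nil && en.NewParentPath != "" {
            return en.NewParentPath
        }
        // Legacy events and older filers during rolling upgrades leave
        // NewParentPath empty; fall back to the old parent directory.
        return resp.Directory
    }
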
2026-03-30 18:25:11 -07:00

344 lines
9.0 KiB
Go

package pb

import (
	"container/heap"
	"fmt"
	"io"
	"strings"
	"sync"

	"google.golang.org/protobuf/proto"

	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
	"github.com/seaweedfs/seaweedfs/weed/util"
)
// LogFileReaderFn creates an io.ReadCloser for a set of file chunks.
type LogFileReaderFn func(chunks []*filer_pb.FileChunk) (io.ReadCloser, error)

// PathFilter holds subscription path filtering parameters, matching the
// server-side eachEventNotificationFn filtering logic.
type PathFilter struct {
	PathPrefix             string
	AdditionalPathPrefixes []string
	DirectoriesToWatch     []string
}
// ReadLogFileRefs reads log file data directly from volume servers using the
// chunk references, merges entries from multiple filers in timestamp order
// (same algorithm as the server's OrderedLogVisitor), applies path filtering,
// and invokes processEventFn for each matching event.
//
// Filers are read in parallel (one goroutine per filer). Within each filer,
// the next file is prefetched while the current file's entries are consumed.
func ReadLogFileRefs(
	refs []*filer_pb.LogFileChunkRef,
	newReader LogFileReaderFn,
	startTsNs, stopTsNs int64,
	filter PathFilter,
	processEventFn ProcessMetadataFunc,
) (lastTsNs int64, err error) {
	if len(refs) == 0 {
		return
	}

	// Group refs by filer ID, preserving order within each filer.
	perFiler := make(map[string][]*filer_pb.LogFileChunkRef)
	var filerOrder []string
	for _, ref := range refs {
		if len(ref.Chunks) == 0 {
			continue
		}
		if _, seen := perFiler[ref.FilerId]; !seen {
			filerOrder = append(filerOrder, ref.FilerId)
		}
		perFiler[ref.FilerId] = append(perFiler[ref.FilerId], ref)
	}
	if len(filerOrder) == 0 {
		return
	}

	// Single filer fast path: no merge heap needed.
	if len(filerOrder) == 1 {
		return readFilerFilesWithPrefetch(perFiler[filerOrder[0]], newReader, startTsNs, stopTsNs, filter, processEventFn)
	}

	// Multiple filers: read each in parallel with prefetching, merge via min-heap.
	return readMultiFilersMerged(filerOrder, perFiler, newReader, startTsNs, stopTsNs, filter, processEventFn)
}
// readFilerFilesWithPrefetch reads files for a single filer, prefetching the
// next file while processing entries from the current one.
func readFilerFilesWithPrefetch(
	refs []*filer_pb.LogFileChunkRef,
	newReader LogFileReaderFn,
	startTsNs, stopTsNs int64,
	filter PathFilter,
	processEventFn ProcessMetadataFunc,
) (lastTsNs int64, err error) {
	type prefetchResult struct {
		entries []*filer_pb.LogEntry
		err     error
	}
	startPrefetch := func(ref *filer_pb.LogFileChunkRef) chan prefetchResult {
		ch := make(chan prefetchResult, 1)
		go func() {
			entries, readErr := readLogFileEntries(newReader, ref.Chunks, startTsNs, stopTsNs)
			ch <- prefetchResult{entries, readErr}
		}()
		return ch
	}

	var pendingCh chan prefetchResult
	if len(refs) > 0 {
		pendingCh = startPrefetch(refs[0])
	}

	for i, ref := range refs {
		result := <-pendingCh
		// Start prefetching next file while we process current
		if i+1 < len(refs) {
			pendingCh = startPrefetch(refs[i+1])
		}
		if result.err != nil {
			if isChunkNotFound(result.err) {
				glog.V(0).Infof("skip log file filer=%s ts=%d: %v", ref.FilerId, ref.FileTsNs, result.err)
				continue
			}
			return lastTsNs, fmt.Errorf("read log file filer=%s ts=%d: %w", ref.FilerId, ref.FileTsNs, result.err)
		}
		for _, logEntry := range result.entries {
			lastTsNs, err = processOneLogEntry(logEntry, filter, processEventFn)
			if err != nil {
				return
			}
		}
	}
	return
}
// readMultiFilersMerged reads files from multiple filers in parallel (one goroutine
// per filer with prefetching), then merges entries in timestamp order via min-heap.
func readMultiFilersMerged(
	filerOrder []string,
	perFiler map[string][]*filer_pb.LogFileChunkRef,
	newReader LogFileReaderFn,
	startTsNs, stopTsNs int64,
	filter PathFilter,
	processEventFn ProcessMetadataFunc,
) (lastTsNs int64, err error) {
	type filerStream struct {
		filerId string
		entryCh chan *filer_pb.LogEntry
	}
	streams := make([]filerStream, len(filerOrder))
	var wg sync.WaitGroup
	for i, filerId := range filerOrder {
		entryCh := make(chan *filer_pb.LogEntry, 512)
		streams[i] = filerStream{filerId: filerId, entryCh: entryCh}
		wg.Add(1)
		go func(refs []*filer_pb.LogFileChunkRef, ch chan *filer_pb.LogEntry) {
			defer wg.Done()
			defer close(ch)
			readFilerFilesToChannel(refs, newReader, startTsNs, stopTsNs, ch)
		}(perFiler[filerId], entryCh)
	}

	// Seed the min-heap with the first entry from each filer
	pq := &logEntryHeap{}
	heap.Init(pq)
	for i := range streams {
		if entry, ok := <-streams[i].entryCh; ok {
			heap.Push(pq, &logEntryHeapItem{entry: entry, filerIdx: i})
		}
	}

	// Merge loop: pop the globally smallest timestamp, then refill the
	// heap from the stream that entry came from.
	for pq.Len() > 0 {
		item := heap.Pop(pq).(*logEntryHeapItem)
		lastTsNs, err = processOneLogEntry(item.entry, filter, processEventFn)
		if err != nil {
			// Drain all channels so the producer goroutines can finish
			// sending and close, letting wg.Wait return promptly.
			for i := range streams {
				for range streams[i].entryCh {
				}
			}
			wg.Wait()
			return
		}
		if entry, ok := <-streams[item.filerIdx].entryCh; ok {
			heap.Push(pq, &logEntryHeapItem{entry: entry, filerIdx: item.filerIdx})
		}
	}
	wg.Wait()
	return
}
// readFilerFilesToChannel reads a single filer's log files with prefetching
// and streams the entries to ch in file order. Read errors are logged and
// the affected file is skipped.
func readFilerFilesToChannel(
	refs []*filer_pb.LogFileChunkRef,
	newReader LogFileReaderFn,
	startTsNs, stopTsNs int64,
	ch chan *filer_pb.LogEntry,
) {
	type prefetchResult struct {
		entries []*filer_pb.LogEntry
		err     error
	}
	startPrefetch := func(ref *filer_pb.LogFileChunkRef) chan prefetchResult {
		resultCh := make(chan prefetchResult, 1)
		go func() {
			entries, err := readLogFileEntries(newReader, ref.Chunks, startTsNs, stopTsNs)
			resultCh <- prefetchResult{entries, err}
		}()
		return resultCh
	}

	var pendingCh chan prefetchResult
	if len(refs) > 0 {
		pendingCh = startPrefetch(refs[0])
	}

	for i, ref := range refs {
		result := <-pendingCh
		if i+1 < len(refs) {
			pendingCh = startPrefetch(refs[i+1])
		}
		if result.err != nil {
			if isChunkNotFound(result.err) {
				glog.V(0).Infof("skip log file filer=%s ts=%d: %v", ref.FilerId, ref.FileTsNs, result.err)
			} else {
				glog.Errorf("read log file filer=%s ts=%d: %v", ref.FilerId, ref.FileTsNs, result.err)
			}
			continue
		}
		for _, entry := range result.entries {
			ch <- entry
		}
	}
}
// processOneLogEntry unmarshals one log entry, applies the path filter, and
// invokes processEventFn on matches. It returns the entry's timestamp so the
// caller can track resume progress.
func processOneLogEntry(logEntry *filer_pb.LogEntry, filter PathFilter, processEventFn ProcessMetadataFunc) (int64, error) {
	event := &filer_pb.SubscribeMetadataResponse{}
	if err := proto.Unmarshal(logEntry.Data, event); err != nil {
		glog.Errorf("unmarshal log entry: %v", err)
		// Skip the corrupt entry, but report its log timestamp so the
		// caller's lastTsNs is not reset to zero.
		return logEntry.TsNs, nil
	}
	if !matchesFilter(event, filter) {
		return event.TsNs, nil
	}
	if err := processEventFn(event); err != nil {
		return event.TsNs, fmt.Errorf("process event: %w", err)
	}
	return event.TsNs, nil
}
// --- path filtering (mirrors server-side eachEventNotificationFn logic) ---

const systemLogDir = "/topics/.system/log"

func matchesFilter(resp *filer_pb.SubscribeMetadataResponse, filter PathFilter) bool {
	fullpath := filer_pb.MetadataEventSourceFullPath(resp)
	// Skip internal meta log entries
	if strings.HasPrefix(fullpath, systemLogDir) {
		return false
	}
	return filer_pb.MetadataEventMatchesSubscription(resp, filter.PathPrefix, filter.AdditionalPathPrefixes, filter.DirectoriesToWatch)
}
// isChunkNotFound checks if an error indicates a missing volume chunk.
// Matches the server-side isChunkNotFoundError logic.
func isChunkNotFound(err error) bool {
	if err == nil {
		return false
	}
	s := err.Error()
	return strings.Contains(s, "not found") || strings.Contains(s, "status 404")
}
// --- min-heap for merging entries across filers ---

type logEntryHeapItem struct {
	entry    *filer_pb.LogEntry
	filerIdx int
}

type logEntryHeap []*logEntryHeapItem

func (h logEntryHeap) Len() int           { return len(h) }
func (h logEntryHeap) Less(i, j int) bool { return h[i].entry.TsNs < h[j].entry.TsNs }
func (h logEntryHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *logEntryHeap) Push(x any)        { *h = append(*h, x.(*logEntryHeapItem)) }

func (h *logEntryHeap) Pop() any {
	old := *h
	n := len(old)
	item := old[n-1]
	old[n-1] = nil
	*h = old[:n-1]
	return item
}
// --- log file parsing (uses io.ReadFull for correct partial-read handling) ---

// readLogFileEntries reads every record in one log file: each record is a
// 4-byte size prefix followed by a marshaled filer_pb.LogEntry. Entries at
// or before startTsNs are skipped; a non-zero stopTsNs is an inclusive
// upper bound that ends the scan.
func readLogFileEntries(newReader LogFileReaderFn, chunks []*filer_pb.FileChunk, startTsNs, stopTsNs int64) ([]*filer_pb.LogEntry, error) {
	reader, err := newReader(chunks)
	if err != nil {
		return nil, fmt.Errorf("create reader: %w", err)
	}
	defer reader.Close()

	sizeBuf := make([]byte, 4)
	var entries []*filer_pb.LogEntry
	for {
		_, err := io.ReadFull(reader, sizeBuf)
		if err != nil {
			if err == io.EOF || err == io.ErrUnexpectedEOF {
				break
			}
			return entries, err
		}
		size := util.BytesToUint32(sizeBuf)
		entryData := make([]byte, size)
		if _, err = io.ReadFull(reader, entryData); err != nil {
			return entries, err
		}
		logEntry := &filer_pb.LogEntry{}
		if err = proto.Unmarshal(entryData, logEntry); err != nil {
			return entries, err
		}
		if logEntry.TsNs <= startTsNs {
			continue
		}
		if stopTsNs != 0 && logEntry.TsNs > stopTsNs {
			break
		}
		entries = append(entries, logEntry)
	}
	return entries, nil
}