Adjust rename events metadata format (#8854)

* rename metadata events

* fix subscription filter to use NewEntry.Name for rename path matching

The server-side subscription filter constructed the new path using
OldEntry.Name instead of NewEntry.Name when checking if a rename
event's destination matches the subscriber's path prefix. This could
cause events to be incorrectly filtered when a rename changes the
file name.
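
A minimal sketch of the corrected check (illustrative names, not the
actual server code): the destination must be built from NewEntry.Name.

    package main

    import (
        "fmt"
        "strings"
    )

    // renameTargetMatches reports whether a rename destination falls under
    // the subscriber's path prefix. Building the destination from
    // OldEntry.Name misses renames that change the file name.
    func renameTargetMatches(newParentPath, newEntryName, prefix string) bool {
        newPath := newParentPath + "/" + newEntryName
        return strings.HasPrefix(newPath, prefix)
    }

    func main() {
        // /dir/a.txt renamed to /dir/b.txt; subscriber watches /dir/b.txt
        fmt.Println(renameTargetMatches("/dir", "b.txt", "/dir/b.txt")) // true
        fmt.Println(renameTargetMatches("/dir", "a.txt", "/dir/b.txt")) // false: the old bug
    }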

* fix bucket events to handle rename of bucket directories

onBucketEvents only checked IsCreate and IsDelete. A bucket directory
rename via AtomicRenameEntry now emits a single rename event (both
OldEntry and NewEntry non-nil), which matched neither check. Handle
IsRename by deleting the old bucket and creating the new one.
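
A sketch of the added dispatch; deleteBucket and createBucket are
hypothetical stand-ins for the real bucket bookkeeping:

    package example

    // Event mirrors the filer_pb predicates: a rename carries both entries.
    type Event struct {
        OldName, NewName             string
        IsCreate, IsDelete, IsRename bool
    }

    func onBucketEvent(ev Event, deleteBucket, createBucket func(name string)) {
        switch {
        case ev.IsRename: // both OldEntry and NewEntry non-nil
            deleteBucket(ev.OldName)
            createBucket(ev.NewName)
        case ev.IsCreate:
            createBucket(ev.NewName)
        case ev.IsDelete:
            deleteBucket(ev.OldName)
        }
    }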

* fix replicator to handle rename events across directory boundaries

Two issues fixed:

1. The replicator filtered events by checking if the key (old path)
   was under the source directory. Rename events now use the old path
   as key, so renames from outside into the watched directory were
   silently dropped. Now both old and new paths are checked, and
   cross-boundary renames are converted to create or delete (see the
   sketch after this list).

2. NewParentPath was passed to the sink without remapping to the
   sink's target directory structure, causing the sink to write
   entries at the wrong location. Now NewParentPath is remapped
   alongside the key.
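
A sketch of the conversion, assuming a boundary-safe containment check
like the helper extracted later in this change:

    package example

    import "strings"

    func isEqualOrUnder(path, dir string) bool {
        return path == dir || strings.HasPrefix(path, dir+"/")
    }

    // classifyRename decides what operation the sink should see for a
    // rename. "update" means both sides are inside the watched tree.
    func classifyRename(oldPath, newPath, sourceDir string) string {
        oldIn := isEqualOrUnder(oldPath, sourceDir)
        newIn := isEqualOrUnder(newPath, sourceDir)
        switch {
        case oldIn && newIn:
            return "update"
        case oldIn: // moved out of the watched tree
            return "delete"
        case newIn: // moved into the watched tree
            return "create"
        default:
            return "ignore"
        }
    }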

* fix filer sync to handle rename events crossing directory boundaries

The early directory-prefix filter only checked resp.Directory (old
parent). Rename events now carry the old parent as Directory, so
renames from outside the source path into it were dropped before
reaching the existing cross-boundary handling logic. Check both old
and new directories against sourcePath and excludePaths so the
downstream old-key/new-key logic can properly convert these to
create or delete operations.

* fix metadata event path matching

* fix metadata event consumers for rename targets

* Fix replication rename target keys

Logical rename events now reach replication sinks with distinct source and target paths.

Handle non-filer sinks as delete-plus-create on the translated target key, and make the rename fallback path create at the translated target key too.

Add focused tests covering non-filer renames, filer rename updates, and the fallback path.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
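
A sketch of the delete-plus-create fallback; Sink and translate are
hypothetical stand-ins (the real code remaps keys via buildKey):

    package example

    import "strings"

    // Sink stands in for the replication sink interface.
    type Sink interface {
        DeleteEntry(key string) error
        CreateEntry(key string) error
    }

    // translate remaps a source key under sourcePath into the sink tree,
    // e.g. /source/dir/f -> /dest/dir/f.
    func translate(key, sourcePath, targetPath string) string {
        return targetPath + strings.TrimPrefix(key, sourcePath)
    }

    // applyRename emits delete(old) plus create(new) on translated keys
    // for sinks that cannot rename in place.
    func applyRename(s Sink, oldKey, newKey, sourcePath, targetPath string) error {
        if err := s.DeleteEntry(translate(oldKey, sourcePath, targetPath)); err != nil {
            return err
        }
        return s.CreateEntry(translate(newKey, sourcePath, targetPath))
    }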

* Fix filer sync rename path scoping

Use directory-boundary matching instead of raw prefix checks when classifying source and target paths during filer sync.

Also apply excludePaths per side so renames across excluded boundaries downgrade cleanly to create/delete instead of being misclassified as in-scope updates.

Add focused tests for boundary matching and rename classification.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Fix replicator directory boundary checks

Use directory-boundary matching instead of raw prefix checks when deciding whether a source or target path is inside the watched tree or an excluded subtree.

This prevents sibling paths such as /foo and /foobar from being misclassified during rename handling, and preserves the earlier rename-target-key fix.

Add focused tests for boundary matching and rename classification across sibling/excluded directories.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Fix etc-remote rename-out handling

Use boundary-safe source/target directory membership when classifying metadata events under DirectoryEtcRemote.

This prevents rename-out events from being processed as config updates, while still treating them as removals where appropriate for the remote sync and remote gateway command paths.

Add focused tests for update/removal classification and sibling-prefix handling.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Defer rename events until commit

Queue logical rename metadata events during atomic and streaming renames and publish them only after the transaction commits successfully.

This prevents subscribers from seeing delete or logical rename events for operations that later fail during delete or commit.

Also serialize notification.Queue swaps in rename tests and add failure-path coverage.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
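
A sketch of the queue-then-publish pattern, with hypothetical types:

    package example

    // pendingEvents buffers notifications during a rename transaction.
    type pendingEvents struct {
        queued []func()
    }

    func (p *pendingEvents) enqueue(publish func()) {
        p.queued = append(p.queued, publish)
    }

    // commit publishes everything only after the transaction succeeded,
    // so subscribers never observe a rename that later failed.
    func (p *pendingEvents) commit() {
        for _, publish := range p.queued {
            publish()
        }
        p.queued = nil
    }

    // rollback drops queued events when delete or commit fails.
    func (p *pendingEvents) rollback() {
        p.queued = nil
    }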

* Skip descendant rename target lookups

Avoid redundant target lookups during recursive directory renames once the destination subtree is known absent.

The recursive move path now inserts known-absent descendants directly, and the test harness exercises prefixed directory listing so the optimization is covered by a directory rename regression test.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Tighten rename review tests

Return filer_pb.ErrNotFound from the bucket tracking store test stub so it follows the FilerStore contract, and add a webhook filter case for same-name renames across parent directories.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix HardLinkId format verb in InsertEntryKnownAbsent error

HardLinkId is a byte slice. %d prints each byte as a decimal number,
which is not useful for an identifier. Use %x to match the log line
two lines above.
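
For illustration, the two verbs on a byte slice:

    package main

    import "fmt"

    func main() {
        hardLinkId := []byte{0x4f, 0x2a, 0x91}
        fmt.Printf("%d\n", hardLinkId) // [79 42 145], one decimal per byte
        fmt.Printf("%x\n", hardLinkId) // 4f2a91, a compact hex identifier
    }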

* only skip descendant target lookup when source and dest use same store

moveFolderSubEntries unconditionally passed skipTargetLookup=true for
every descendant. This is safe when all paths resolve to the same
underlying store, but with path-specific store configuration a child's
destination may map to a different backend that already holds an entry
at that path. Use FilerStoreWrapper.SameActualStore to check per-child
and fall back to the full CreateEntry path when stores differ.
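
A sketch of the per-child decision; the function values are hypothetical
stand-ins for the wrapper and the two create paths, and the
SameActualStore signature is assumed:

    package example

    // sameActualStore mirrors FilerStoreWrapper.SameActualStore: do both
    // paths resolve to the same underlying backend?
    func moveChild(sameActualStore func(src, dst string) bool,
        insertKnownAbsent, createWithLookup func(dst string) error,
        srcPath, dstPath string) error {
        if sameActualStore(srcPath, dstPath) {
            // Same backend and the destination subtree is known absent:
            // the target lookup can be skipped safely.
            return insertKnownAbsent(dstPath)
        }
        // A path-specific store may already hold an entry at dstPath,
        // so fall back to the full create path with lookup.
        return createWithLookup(dstPath)
    }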

* add nil and create edge-case tests for metadata event scope helpers

* extract pathIsEqualOrUnder into util.IsEqualOrUnder

Identical implementations existed in both replication/replicator.go and
command/filer_sync.go. Move to util.IsEqualOrUnder (alongside the
existing FullPath.IsUnder) and remove the duplicates.
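
A sketch consistent with the tests added in this change; the real helper
lives in weed/util and may differ in detail:

    package util

    import "strings"

    // IsEqualOrUnder reports whether path equals dir or lies under it,
    // matching on directory boundaries so /foobar is not under /foo.
    func IsEqualOrUnder(path, dir string) bool {
        if path == dir {
            return true
        }
        if dir == "/" {
            return strings.HasPrefix(path, "/")
        }
        return strings.HasPrefix(path, dir+"/")
    }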

* use MetadataEventTargetDirectory for new-side directory in filer sync

The new-side directory checks and sourceNewKey computation used
message.NewParentPath directly. If NewParentPath were empty (legacy
events, older filer versions during rolling upgrades), sourceNewKey
would be wrong (/filename instead of /dir/filename) and the
UpdateEntry parent path rewrite would panic on slice bounds.

Derive targetDir once from MetadataEventTargetDirectory, which falls
back to resp.Directory when NewParentPath is empty, and use it
consistently for all new-side checks and the sink parent path.
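
A sketch of the fallback, using the filer_pb types referenced above; the
real implementation is filer_pb.MetadataEventTargetDirectory:

    package example

    import "github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"

    // metadataEventTargetDirectory prefers NewParentPath when set, and
    // falls back to the old parent directory for deletes and legacy events.
    func metadataEventTargetDirectory(resp *filer_pb.SubscribeMetadataResponse) string {
        if n := resp.EventNotification; n != nil && n.NewParentPath != "" {
            return n.NewParentPath
        }
        return resp.Directory
    }
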
Author: Chris Lu
Date: 2026-03-30 18:25:11 -07:00
Committed by: GitHub
Parent: 2eaf98a7a2
Commit: ced2236cc6
26 changed files with 1846 additions and 248 deletions


@@ -164,7 +164,7 @@ func (option *RemoteGatewayOptions) makeBucketedEventProcessor(filerSource *sour
 handleEtcRemoteChanges := func(resp *filer_pb.SubscribeMetadataResponse) error {
     message := resp.EventNotification
-    if message.NewEntry != nil {
+    if metadataEventUpdatesDirectory(resp, filer.DirectoryEtcRemote) {
         // update
         if message.NewEntry.Name == filer.REMOTE_STORAGE_MOUNT_FILE {
             newMappings, readErr := filer.UnmarshalRemoteStorageMappings(message.NewEntry.Content)
@@ -180,8 +180,11 @@ func (option *RemoteGatewayOptions) makeBucketedEventProcessor(filerSource *sour
             }
             option.remoteConfs[conf.Name] = conf
         }
-    } else if message.OldEntry != nil {
+    } else if metadataEventRemovesFromDirectory(resp, filer.DirectoryEtcRemote) {
         // deletion
+        if message.OldEntry.Name == filer.REMOTE_STORAGE_MOUNT_FILE {
+            option.mappings = &remote_pb.RemoteStorageMapping{}
+        }
         if strings.HasSuffix(message.OldEntry.Name, filer.REMOTE_STORAGE_CONF_SUFFIX) {
             conf := &remote_pb.RemoteConf{}
             if err := proto.Unmarshal(message.OldEntry.Content, conf); err != nil {
@@ -196,7 +199,8 @@ func (option *RemoteGatewayOptions) makeBucketedEventProcessor(filerSource *sour
 eachEntryFunc := func(resp *filer_pb.SubscribeMetadataResponse) error {
     message := resp.EventNotification
-    if strings.HasPrefix(resp.Directory, filer.DirectoryEtcRemote) {
+    sourceInEtcRemote, targetInEtcRemote := metadataEventDirectoryMembership(resp, filer.DirectoryEtcRemote)
+    if sourceInEtcRemote || targetInEtcRemote {
         return handleEtcRemoteChanges(resp)
     }


@@ -92,34 +92,38 @@ func (option *RemoteSyncOptions) makeEventProcessor(remoteStorage *remote_pb.Rem
 handleEtcRemoteChanges := func(resp *filer_pb.SubscribeMetadataResponse) error {
     message := resp.EventNotification
-    if message.NewEntry == nil {
-        return nil
-    }
-    if message.NewEntry.Name == filer.REMOTE_STORAGE_MOUNT_FILE {
-        mappings, readErr := filer.UnmarshalRemoteStorageMappings(message.NewEntry.Content)
-        if readErr != nil {
-            return fmt.Errorf("unmarshal mappings: %w", readErr)
-        }
-        if remoteLoc, found := mappings.Mappings[mountedDir]; found {
-            if remoteStorageMountLocation.Bucket != remoteLoc.Bucket || remoteStorageMountLocation.Path != remoteLoc.Path {
-                glog.Fatalf("Unexpected mount changes %+v => %+v", remoteStorageMountLocation, remoteLoc)
-            }
-        } else {
-            glog.V(0).Infof("unmounted %s exiting ...", mountedDir)
-            os.Exit(0)
-        }
-    }
-    if message.NewEntry.Name == remoteStorage.Name+filer.REMOTE_STORAGE_CONF_SUFFIX {
-        conf := &remote_pb.RemoteConf{}
-        if err := proto.Unmarshal(message.NewEntry.Content, conf); err != nil {
-            return fmt.Errorf("unmarshal %s/%s: %v", filer.DirectoryEtcRemote, message.NewEntry.Name, err)
-        }
-        remoteStorage = conf
-        if newClient, err := remote_storage.GetRemoteStorage(remoteStorage); err == nil {
-            client = newClient
-        } else {
-            return err
-        }
-    }
+    if metadataEventUpdatesDirectory(resp, filer.DirectoryEtcRemote) {
+        if message.NewEntry.Name == filer.REMOTE_STORAGE_MOUNT_FILE {
+            mappings, readErr := filer.UnmarshalRemoteStorageMappings(message.NewEntry.Content)
+            if readErr != nil {
+                return fmt.Errorf("unmarshal mappings: %w", readErr)
+            }
+            if remoteLoc, found := mappings.Mappings[mountedDir]; found {
+                if remoteStorageMountLocation.Bucket != remoteLoc.Bucket || remoteStorageMountLocation.Path != remoteLoc.Path {
+                    glog.Fatalf("Unexpected mount changes %+v => %+v", remoteStorageMountLocation, remoteLoc)
+                }
+            } else {
+                glog.V(0).Infof("unmounted %s exiting ...", mountedDir)
+                os.Exit(0)
+            }
+        }
+        if message.NewEntry.Name == remoteStorage.Name+filer.REMOTE_STORAGE_CONF_SUFFIX {
+            conf := &remote_pb.RemoteConf{}
+            if err := proto.Unmarshal(message.NewEntry.Content, conf); err != nil {
+                return fmt.Errorf("unmarshal %s/%s: %v", filer.DirectoryEtcRemote, message.NewEntry.Name, err)
+            }
+            remoteStorage = conf
+            if newClient, err := remote_storage.GetRemoteStorage(remoteStorage); err == nil {
+                client = newClient
+            } else {
+                return err
+            }
+        }
+    }
+    if metadataEventRemovesFromDirectory(resp, filer.DirectoryEtcRemote) &&
+        message.OldEntry.Name == filer.REMOTE_STORAGE_MOUNT_FILE {
+        glog.V(0).Infof("unmounted %s exiting ...", mountedDir)
+        os.Exit(0)
+    }
return nil
@@ -127,7 +131,8 @@ func (option *RemoteSyncOptions) makeEventProcessor(remoteStorage *remote_pb.Rem
 eachEntryFunc := func(resp *filer_pb.SubscribeMetadataResponse) error {
     message := resp.EventNotification
-    if strings.HasPrefix(resp.Directory, filer.DirectoryEtcRemote) {
+    sourceInEtcRemote, targetInEtcRemote := metadataEventDirectoryMembership(resp, filer.DirectoryEtcRemote)
+    if sourceInEtcRemote || targetInEtcRemote {
         return handleEtcRemoteChanges(resp)
     }


@@ -26,37 +26,37 @@ import (
)
type SyncOptions struct {
isActivePassive *bool
filerA *string
filerB *string
aPath *string
aExcludePaths *string
bPath *string
bExcludePaths *string
aReplication *string
bReplication *string
aCollection *string
bCollection *string
aTtlSec *int
bTtlSec *int
aDiskType *string
bDiskType *string
aDebug *bool
bDebug *bool
aFromTsMs *int64
bFromTsMs *int64
aProxyByFiler *bool
bProxyByFiler *bool
metricsHttpIp *string
metricsHttpPort *int
concurrency *int
chunkConcurrency *int
aDoDeleteFiles *bool
bDoDeleteFiles *bool
clientId int32
clientEpoch atomic.Int32
debug *bool
debugPort *int
}
const (
@@ -445,12 +445,17 @@ func genProcessFunction(sourcePath string, targetPath string, excludePaths []str
 processEventFn := func(resp *filer_pb.SubscribeMetadataResponse) error {
     message := resp.EventNotification
+    // Derive the target (new-side) directory once. MetadataEventTargetDirectory
+    // returns NewParentPath when set, falling back to resp.Directory for
+    // delete events or legacy events with an empty NewParentPath.
+    targetDir := filer_pb.MetadataEventTargetDirectory(resp)
     var sourceOldKey, sourceNewKey util.FullPath
     if message.OldEntry != nil {
         sourceOldKey = util.FullPath(resp.Directory).Child(message.OldEntry.Name)
     }
     if message.NewEntry != nil {
-        sourceNewKey = util.FullPath(message.NewParentPath).Child(message.NewEntry.Name)
+        sourceNewKey = util.FullPath(targetDir).Child(message.NewEntry.Name)
     }
if debug {
@@ -461,19 +466,24 @@ func genProcessFunction(sourcePath string, targetPath string, excludePaths []str
return nil
}
if !strings.HasPrefix(resp.Directory+"/", sourcePath) {
// For rename events the key/directory is the old (source) path.
// Check both old and new directories so cross-boundary renames
// are not silently dropped. The downstream old/new key handling
// (lines below) already converts these to create or delete.
oldDirExcluded := matchesExcludePath(resp.Directory, excludePaths)
newDirExcluded := matchesExcludePath(targetDir, excludePaths)
oldDirInScope := util.IsEqualOrUnder(resp.Directory, sourcePath) && !oldDirExcluded
newDirInScope := message.NewEntry != nil &&
util.IsEqualOrUnder(targetDir, sourcePath) &&
!newDirExcluded
if !oldDirInScope && !newDirInScope {
return nil
}
for _, excludePath := range excludePaths {
if strings.HasPrefix(resp.Directory+"/", excludePath) {
return nil
}
}
// Compute per-side exclusion so that rename events crossing an
// exclude boundary are handled as delete + create rather than
// being entirely skipped.
oldExcluded := isEntryExcluded(resp.Directory, message.OldEntry, reExcludeFileName, excludeFileNames, excludePathPatterns)
newExcluded := isEntryExcluded(message.NewParentPath, message.NewEntry, reExcludeFileName, excludeFileNames, excludePathPatterns)
oldExcluded := oldDirExcluded || isEntryExcluded(resp.Directory, message.OldEntry, reExcludeFileName, excludeFileNames, excludePathPatterns)
newExcluded := newDirExcluded || isEntryExcluded(targetDir, message.NewEntry, reExcludeFileName, excludeFileNames, excludePathPatterns)
if oldExcluded && newExcluded {
return nil
@@ -495,7 +505,7 @@ func genProcessFunction(sourcePath string, targetPath string, excludePaths []str
 if !doDeleteFiles {
     return nil
 }
-if !strings.HasPrefix(string(sourceOldKey), sourcePath) {
+if !util.IsEqualOrUnder(string(sourceOldKey), sourcePath) {
     return nil
 }
 key := buildKey(dataSink, message, targetPath, sourceOldKey, sourcePath)
@@ -504,7 +514,7 @@ func genProcessFunction(sourcePath string, targetPath string, excludePaths []str
 // handle new entries
 if filer_pb.IsCreate(resp) {
-    if !strings.HasPrefix(string(sourceNewKey), sourcePath) {
+    if !util.IsEqualOrUnder(string(sourceNewKey), sourcePath) {
         return nil
     }
     key := buildKey(dataSink, message, targetPath, sourceNewKey, sourcePath)
@@ -521,18 +531,19 @@ func genProcessFunction(sourcePath string, targetPath string, excludePaths []str
 }
 // handle updates
-if strings.HasPrefix(string(sourceOldKey), sourcePath) {
+if util.IsEqualOrUnder(string(sourceOldKey), sourcePath) {
     // old key is in the watched directory
-    if strings.HasPrefix(string(sourceNewKey), sourcePath) {
+    if util.IsEqualOrUnder(string(sourceNewKey), sourcePath) {
         // new key is also in the watched directory
         if doDeleteFiles {
             oldKey := util.Join(targetPath, string(sourceOldKey)[len(sourcePath):])
+            var sinkNewParentPath string
             if strings.HasSuffix(sourcePath, "/") {
-                message.NewParentPath = util.Join(targetPath, message.NewParentPath[len(sourcePath)-1:])
+                sinkNewParentPath = util.Join(targetPath, targetDir[len(sourcePath)-1:])
             } else {
-                message.NewParentPath = util.Join(targetPath, message.NewParentPath[len(sourcePath):])
+                sinkNewParentPath = util.Join(targetPath, targetDir[len(sourcePath):])
             }
-            foundExisting, err := dataSink.UpdateEntry(string(oldKey), message.OldEntry, message.NewParentPath, message.NewEntry, message.DeleteChunks, message.Signatures)
+            foundExisting, err := dataSink.UpdateEntry(string(oldKey), message.OldEntry, sinkNewParentPath, message.NewEntry, message.DeleteChunks, message.Signatures)
             if foundExisting {
                 return err
             }
@@ -559,7 +570,7 @@ func genProcessFunction(sourcePath string, targetPath string, excludePaths []str
     }
 } else {
     // old key is outside the watched directory
-    if strings.HasPrefix(string(sourceNewKey), sourcePath) {
+    if util.IsEqualOrUnder(string(sourceNewKey), sourcePath) {
         // new key is in the watched directory
         key := buildKey(dataSink, message, targetPath, sourceNewKey, sourcePath)
         if err := dataSink.CreateEntry(key, message.NewEntry, message.Signatures); err != nil {
@@ -623,6 +634,15 @@ func isEntryExcluded(dir string, entry *filer_pb.Entry, reExcludeFileName *regex
     return false
 }
+
+func matchesExcludePath(dir string, excludePaths []string) bool {
+    for _, excludePath := range excludePaths {
+        if util.IsEqualOrUnder(dir, excludePath) {
+            return true
+        }
+    }
+    return false
+}
 
 // compileExcludePattern compiles a regexp pattern string, returning nil if empty.
 func compileExcludePattern(pattern string, label string) (*regexp.Regexp, error) {
     if pattern == "" {


@@ -0,0 +1,121 @@
package command
import (
"testing"
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
"github.com/seaweedfs/seaweedfs/weed/replication/sink"
"github.com/seaweedfs/seaweedfs/weed/replication/source"
"github.com/seaweedfs/seaweedfs/weed/util"
)
var _ sink.ReplicationSink = (*recordingSyncSink)(nil)
type recordingSyncSink struct {
deleteKeys []string
createKeys []string
updateKeys []string
}
func (s *recordingSyncSink) GetName() string { return "recording" }
func (s *recordingSyncSink) Initialize(util.Configuration, string) error {
return nil
}
func (s *recordingSyncSink) DeleteEntry(key string, isDirectory, deleteIncludeChunks bool, signatures []int32) error {
s.deleteKeys = append(s.deleteKeys, key)
return nil
}
func (s *recordingSyncSink) CreateEntry(key string, entry *filer_pb.Entry, signatures []int32) error {
s.createKeys = append(s.createKeys, key)
return nil
}
func (s *recordingSyncSink) UpdateEntry(key string, oldEntry *filer_pb.Entry, newParentPath string, newEntry *filer_pb.Entry, deleteIncludeChunks bool, signatures []int32) (bool, error) {
s.updateKeys = append(s.updateKeys, key)
return true, nil
}
func (s *recordingSyncSink) GetSinkToDirectory() string { return "/dest" }
func (s *recordingSyncSink) SetSourceFiler(*source.FilerSource) {}
func (s *recordingSyncSink) IsIncremental() bool { return false }
func TestPathIsEqualOrUnderUsesDirectoryBoundaries(t *testing.T) {
tests := []struct {
name string
candidate string
other string
expected bool
}{
{name: "equal", candidate: "/foo", other: "/foo", expected: true},
{name: "descendant", candidate: "/foo/bar", other: "/foo", expected: true},
{name: "sibling prefix", candidate: "/foobar/bar", other: "/foo", expected: false},
{name: "root", candidate: "/foo/bar", other: "/", expected: true},
{name: "empty", candidate: "", other: "/foo", expected: false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := util.IsEqualOrUnder(tt.candidate, tt.other); got != tt.expected {
t.Fatalf("IsEqualOrUnder(%q, %q) = %v, want %v", tt.candidate, tt.other, got, tt.expected)
}
})
}
}
func TestMatchesExcludePathUsesDirectoryBoundaries(t *testing.T) {
if !matchesExcludePath("/tmp", []string{"/tmp"}) {
t.Fatal("expected exact directory match to be excluded")
}
if !matchesExcludePath("/tmp/sub", []string{"/tmp"}) {
t.Fatal("expected descendant directory to be excluded")
}
if matchesExcludePath("/tmp2/sub", []string{"/tmp"}) {
t.Fatal("did not expect sibling directory to be excluded")
}
}
func TestGenProcessFunctionRenameToSiblingPrefixBecomesDelete(t *testing.T) {
dataSink := &recordingSyncSink{}
processFn := genProcessFunction("/foo", "/dest", nil, nil, nil, nil, dataSink, true, false)
err := processFn(&filer_pb.SubscribeMetadataResponse{
Directory: "/foo/dir",
EventNotification: &filer_pb.EventNotification{
OldEntry: &filer_pb.Entry{Name: "file.txt"},
NewEntry: &filer_pb.Entry{Name: "file.txt"},
NewParentPath: "/foobar/dir",
},
})
if err != nil {
t.Fatalf("processFn rename to sibling prefix: %v", err)
}
if len(dataSink.deleteKeys) != 1 || dataSink.deleteKeys[0] != "/dest/dir/file.txt" {
t.Fatalf("delete keys = %v, want [/dest/dir/file.txt]", dataSink.deleteKeys)
}
if len(dataSink.createKeys) != 0 || len(dataSink.updateKeys) != 0 {
t.Fatalf("unexpected create/update calls: creates=%v updates=%v", dataSink.createKeys, dataSink.updateKeys)
}
}
func TestGenProcessFunctionRenameFromExcludedDirBecomesCreate(t *testing.T) {
dataSink := &recordingSyncSink{}
processFn := genProcessFunction("/foo", "/dest", []string{"/foo/excluded"}, nil, nil, nil, dataSink, true, false)
err := processFn(&filer_pb.SubscribeMetadataResponse{
Directory: "/foo/excluded",
EventNotification: &filer_pb.EventNotification{
OldEntry: &filer_pb.Entry{Name: "file.txt"},
NewEntry: &filer_pb.Entry{Name: "file.txt"},
NewParentPath: "/foo/live",
},
})
if err != nil {
t.Fatalf("processFn rename from excluded dir: %v", err)
}
if len(dataSink.createKeys) != 1 || dataSink.createKeys[0] != "/dest/live/file.txt" {
t.Fatalf("create keys = %v, want [/dest/live/file.txt]", dataSink.createKeys)
}
if len(dataSink.deleteKeys) != 0 || len(dataSink.updateKeys) != 0 {
t.Fatalf("unexpected delete/update calls: deletes=%v updates=%v", dataSink.deleteKeys, dataSink.updateKeys)
}
}


@@ -0,0 +1,36 @@
package command
import (
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
"github.com/seaweedfs/seaweedfs/weed/util"
)
func metadataEventDirectoryMembership(resp *filer_pb.SubscribeMetadataResponse, dir string) (sourceInDir, targetInDir bool) {
if resp == nil || resp.EventNotification == nil {
return false, false
}
sourceInDir = util.IsEqualOrUnder(resp.Directory, dir)
targetInDir = resp.EventNotification.NewEntry != nil &&
util.IsEqualOrUnder(filer_pb.MetadataEventTargetDirectory(resp), dir)
return sourceInDir, targetInDir
}
func metadataEventUpdatesDirectory(resp *filer_pb.SubscribeMetadataResponse, dir string) bool {
if resp == nil || resp.EventNotification == nil || resp.EventNotification.NewEntry == nil {
return false
}
_, targetInDir := metadataEventDirectoryMembership(resp, dir)
return targetInDir
}
func metadataEventRemovesFromDirectory(resp *filer_pb.SubscribeMetadataResponse, dir string) bool {
if resp == nil || resp.EventNotification == nil || resp.EventNotification.OldEntry == nil {
return false
}
sourceInDir, targetInDir := metadataEventDirectoryMembership(resp, dir)
return sourceInDir && !targetInDir
}


@@ -0,0 +1,116 @@
package command
import (
"testing"
"github.com/seaweedfs/seaweedfs/weed/filer"
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
)
func TestMetadataEventDirectoryMembershipUsesDirectoryBoundaries(t *testing.T) {
resp := &filer_pb.SubscribeMetadataResponse{
Directory: filer.DirectoryEtcRemote,
EventNotification: &filer_pb.EventNotification{
OldEntry: &filer_pb.Entry{Name: "remote.conf"},
NewEntry: &filer_pb.Entry{Name: "remote.conf"},
NewParentPath: "/etc/remote-sibling",
},
}
sourceInDir, targetInDir := metadataEventDirectoryMembership(resp, filer.DirectoryEtcRemote)
if !sourceInDir {
t.Fatal("expected source directory to match")
}
if targetInDir {
t.Fatal("did not expect sibling target directory to match")
}
}
func TestMetadataEventUpdatesAndRemovesDirectory(t *testing.T) {
tests := []struct {
name string
resp *filer_pb.SubscribeMetadataResponse
wantUpdate bool
wantRemoval bool
}{
{
name: "nil response",
resp: nil,
wantUpdate: false,
wantRemoval: false,
},
{
name: "create event",
resp: &filer_pb.SubscribeMetadataResponse{
Directory: filer.DirectoryEtcRemote,
EventNotification: &filer_pb.EventNotification{
NewEntry: &filer_pb.Entry{Name: "new.conf"},
NewParentPath: filer.DirectoryEtcRemote,
},
},
wantUpdate: true,
wantRemoval: false,
},
{
name: "rename out",
resp: &filer_pb.SubscribeMetadataResponse{
Directory: filer.DirectoryEtcRemote,
EventNotification: &filer_pb.EventNotification{
OldEntry: &filer_pb.Entry{Name: "remote.conf"},
NewEntry: &filer_pb.Entry{Name: "remote.conf"},
NewParentPath: "/tmp",
},
},
wantUpdate: false,
wantRemoval: true,
},
{
name: "rename into",
resp: &filer_pb.SubscribeMetadataResponse{
Directory: "/tmp",
EventNotification: &filer_pb.EventNotification{
OldEntry: &filer_pb.Entry{Name: "remote.conf"},
NewEntry: &filer_pb.Entry{Name: "remote.conf"},
NewParentPath: filer.DirectoryEtcRemote,
},
},
wantUpdate: true,
wantRemoval: false,
},
{
name: "rename within",
resp: &filer_pb.SubscribeMetadataResponse{
Directory: filer.DirectoryEtcRemote,
EventNotification: &filer_pb.EventNotification{
OldEntry: &filer_pb.Entry{Name: "remote.conf"},
NewEntry: &filer_pb.Entry{Name: "renamed.conf"},
NewParentPath: filer.DirectoryEtcRemote,
},
},
wantUpdate: true,
wantRemoval: false,
},
{
name: "delete",
resp: &filer_pb.SubscribeMetadataResponse{
Directory: filer.DirectoryEtcRemote,
EventNotification: &filer_pb.EventNotification{
OldEntry: &filer_pb.Entry{Name: "remote.conf"},
},
},
wantUpdate: false,
wantRemoval: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := metadataEventUpdatesDirectory(tt.resp, filer.DirectoryEtcRemote); got != tt.wantUpdate {
t.Fatalf("metadataEventUpdatesDirectory() = %v, want %v", got, tt.wantUpdate)
}
if got := metadataEventRemovesFromDirectory(tt.resp, filer.DirectoryEtcRemote); got != tt.wantRemoval {
t.Fatalf("metadataEventRemovesFromDirectory() = %v, want %v", got, tt.wantRemoval)
}
})
}
}