* fix: decrypt SSE-encrypted objects in S3 replication sink
* fix: add SSE decryption support to GCS, Azure, B2, Local sinks
* fix: return error instead of warning for SSE-C objects during replication
* fix: close readers after upload to prevent resource leaks
* fix: return error for unknown SSE types instead of passing through ciphertext
* refactor(repl_util): extract CloseReader/CloseMaybeDecryptedReader helpers
The io.Closer close-on-error and defer-close pattern was duplicated in
copyWithDecryption and the S3 sink. Extract exported helpers to keep a
single implementation and prevent future divergence.
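  A minimal sketch of what the extracted helpers capture; the names come from this change, but the signatures here are assumptions, not the actual repl_util API:
  ```go
  package repl_util

  import (
  	"io"

  	"github.com/seaweedfs/seaweedfs/weed/glog"
  )

  // CloseReader closes r if it implements io.Closer, logging rather than
  // returning the close error so it is safe to call from defer.
  func CloseReader(r io.Reader, name string) {
  	if c, ok := r.(io.Closer); ok {
  		if err := c.Close(); err != nil {
  			glog.Warningf("close %s: %v", name, err)
  		}
  	}
  }

  // CloseMaybeDecryptedReader closes a possibly-wrapping decrypted reader
  // and, when the wrapper is a distinct value, the underlying reader too.
  func CloseMaybeDecryptedReader(decrypted, underlying io.Reader, name string) {
  	CloseReader(decrypted, name)
  	if decrypted != underlying {
  		CloseReader(underlying, name)
  	}
  }
  ```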
* fix(repl_util): warn on mixed SSE types across chunks in detectSSEType
detectSSEType previously returned the SSE type of the first encrypted
chunk without inspecting the rest. If an entry somehow has chunks with
different SSE types, only the first type's decryption would be applied.
  It now scans all chunks and logs a warning on a mismatch; a sketch of the scan follows.
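  A hedged sketch of the scan, continuing the hypothetical repl_util snippet above (imports: fmt, filer_pb). chunkSSEType stands in for however the per-chunk metadata encodes the SSE type, and the error return is the behavior a later commit in this series adopts:
  ```go
  // chunkSSEType is a stand-in for reading a chunk's SSE type from its
  // metadata ("" meaning the chunk is plaintext); the real field layout
  // is not shown here.
  func chunkSSEType(chunk *filer_pb.FileChunk) string { return "" }

  // detectSSEType scans every chunk instead of trusting the first one.
  func detectSSEType(entry *filer_pb.Entry) (string, error) {
  	sseType := ""
  	for _, chunk := range entry.GetChunks() {
  		t := chunkSSEType(chunk)
  		if t == "" {
  			continue // plaintext chunk
  		}
  		if sseType == "" {
  			sseType = t
  			continue
  		}
  		if sseType != t {
  			// This commit logged a warning here; later in the series it
  			// becomes an error so replication aborts instead of applying
  			// the wrong decryption to some chunks.
  			return "", fmt.Errorf("entry %s mixes SSE types %s and %s", entry.Name, sseType, t)
  		}
  	}
  	return sseType, nil
  }
  ```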
* fix(repl_util): decrypt inline SSE objects during replication
Small SSE-encrypted objects stored in entry.Content were being copied
as ciphertext because:
1. detectSSEType only checked chunk metadata, but inline objects have
no chunks — now falls back to checking entry.Extended for SSE keys
2. Non-S3 sinks short-circuited on len(entry.Content)>0, bypassing
the decryption path — now call MaybeDecryptContent before writing
  Adds a MaybeDecryptContent helper for decrypting inline byte content (sketched below).
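  How the pieces could fit together for inline objects, as a hedged sketch: MaybeDecryptContent and MaybeDecryptReader are named by this change, but the signatures here are assumptions (imports: bytes, io):
  ```go
  // MaybeDecryptContent decrypts entry.Content when the entry's metadata
  // marks it as SSE-encrypted, otherwise returns it unchanged.
  func MaybeDecryptContent(entry *filer_pb.Entry, content []byte) ([]byte, error) {
  	// detectSSEType falls back to entry.Extended for inline objects,
  	// which have no chunks to inspect.
  	sseType, err := detectSSEType(entry)
  	if err != nil {
  		return nil, err
  	}
  	if sseType == "" {
  		return content, nil // plaintext: pass through
  	}
  	r, err := MaybeDecryptReader(bytes.NewReader(content), entry) // assumed signature
  	if err != nil {
  		return nil, err
  	}
  	defer CloseReader(r, entry.Name)
  	return io.ReadAll(r)
  }
  ```
  A non-S3 sink would then call it just before writing, e.g. `data, err := repl_util.MaybeDecryptContent(entry, entry.Content)`, instead of short-circuiting on `len(entry.Content) > 0`.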
* fix(repl_util): add KMS initialization for replication SSE decryption
  SSE-KMS decryption was not wired up for filer.backup: the only
  initialization was for the SSE-S3 key manager, and
  CreateSSEKMSDecryptedReader requires a global KMS provider that is only
  loaded by the S3 API auth-config path.
  Add an InitializeSSEForReplication helper that initializes both SSE-S3
  (from the filer KEK) and SSE-KMS (from the Viper config [kms] section /
  WEED_KMS_* env vars), sketched below. Replace the SSE-S3-only init in
  filer_backup.go.
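  The shape of the helper, as a sketch: InitializeWithFiler, LoadConfigurations, and the [kms] section are named elsewhere in this series, but the exact signatures here are assumptions:
  ```go
  // InitializeSSEForReplication wires up both decryption paths that
  // replication sinks may need. Sketch only; signatures are assumed.
  func InitializeSSEForReplication(source *source.FilerSource) error {
  	// SSE-S3: seed the global key manager with the KEK from the source filer.
  	if err := InitializeWithFiler(source); err != nil {
  		return fmt.Errorf("initialize SSE-S3 key manager: %w", err)
  	}
  	// SSE-KMS: load providers from the [kms] config section; Viper also
  	// picks up WEED_KMS_* environment variables.
  	if err := kms.LoadConfigurations(util.GetViper()); err != nil {
  		return fmt.Errorf("load KMS configurations: %w", err)
  	}
  	return nil
  }
  ```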
* fix(replicator): initialize SSE decryption for filer.replicate
The SSE decryption setup was only added to filer_backup.go, but the
notification-based replicator (filer.replicate) uses the same sinks
and was missing the required initialization. Add SSE init in
NewReplicator so filer.replicate can decrypt SSE objects.
* refactor(repl_util): fold entry param into CopyFromChunkViews
Remove the CopyFromChunkViewsWithEntry wrapper and add the entry
parameter directly to CopyFromChunkViews, since all callers already
pass it.
* fix(repl_util): guard SSE init with sync.Once, error on mixed SSE types
InitializeWithFiler overwrites the global superKey on every call.
Wrap InitializeSSEForReplication with sync.Once so repeated calls
(e.g. from NewReplicator) are safe.
detectSSEType now returns an error instead of logging a warning when
chunks have inconsistent SSE types, so replication aborts rather than
silently applying the wrong decryption to some chunks.
* fix(repl_util): allow SSE init retry, detect conflicting metadata, add tests
  - Replace sync.Once with a mutex+bool guard (sketched after this list)
  so transient failures (e.g. filer unreachable) don't permanently
  prevent initialization. Only a successful init flips the flag; failed
  attempts allow retries.
  - Remove the v.IsSet("kms") guard that prevented env-only KMS configs
  (WEED_KMS_*) from being detected. Always attempt KMS loading and let
  LoadConfigurations handle "no config found".
- detectSSEType now checks for conflicting extended metadata keys
(e.g. both SeaweedFSSSES3Key and SeaweedFSSSEKMSKey present) and
returns an error instead of silently picking the first match.
- Add table-driven tests for detectSSEType, MaybeDecryptReader, and
MaybeDecryptContent covering plaintext, uniform SSE, mixed chunks,
inline SSE via extended metadata, conflicting metadata, and SSE-C.
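  sync.Once is the wrong primitive for fallible initialization: it runs the function exactly once, even if that run failed, so a transient error would disable SSE decryption for the life of the process. A sketch of the retry-friendly guard (names assumed):
  ```go
  var (
  	sseInitMu   sync.Mutex
  	sseInitDone bool
  )

  // initSSEOnce is idempotent but retryable: only a successful attempt
  // flips the flag, so a failed attempt leaves the door open for retries.
  func initSSEOnce(source *source.FilerSource) error {
  	sseInitMu.Lock()
  	defer sseInitMu.Unlock()
  	if sseInitDone {
  		return nil // an earlier call already succeeded
  	}
  	if err := InitializeSSEForReplication(source); err != nil {
  		return err // flag stays false; the next caller retries
  	}
  	sseInitDone = true
  	return nil
  }
  ```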
* test(repl_util): add SSE-S3 and SSE-KMS integration tests
Add round-trip encryption/decryption tests:
- SSE-S3: encrypt with CreateSSES3EncryptedReader, decrypt with
CreateSSES3DecryptedReader, verify plaintext matches
  - SSE-KMS: encrypt with AES-CTR, wire a mock KMSProvider via
  SetGlobalKMSProvider, build serialized KMS metadata, verify
  MaybeDecryptReader and MaybeDecryptContent produce the correct
  plaintext (a standard-library sketch of the CTR round trip follows)
Fix existing tests to check io.ReadAll errors.
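  The SSE-KMS leg of the round trip is AES-CTR at its core. As a point of reference, here is a self-contained standard-library version of the same round trip, independent of the SeaweedFS helpers and the mock KMSProvider the real tests use:
  ```go
  package repl_util

  import (
  	"bytes"
  	"crypto/aes"
  	"crypto/cipher"
  	"crypto/rand"
  	"io"
  	"testing"
  )

  func TestAESCTRRoundTrip(t *testing.T) {
  	key := make([]byte, 32) // AES-256
  	iv := make([]byte, aes.BlockSize)
  	if _, err := rand.Read(key); err != nil {
  		t.Fatal(err)
  	}
  	if _, err := rand.Read(iv); err != nil {
  		t.Fatal(err)
  	}

  	plaintext := []byte("replicated object body")

  	block, err := aes.NewCipher(key)
  	if err != nil {
  		t.Fatal(err)
  	}
  	ciphertext := make([]byte, len(plaintext))
  	cipher.NewCTR(block, iv).XORKeyStream(ciphertext, plaintext)

  	// CTR decryption applies the same keystream; stream it through a
  	// reader the way the replication path does.
  	decrypted, err := io.ReadAll(&cipher.StreamReader{
  		S: cipher.NewCTR(block, iv),
  		R: bytes.NewReader(ciphertext),
  	})
  	if err != nil { // check io.ReadAll errors, as the fixed tests do
  		t.Fatal(err)
  	}
  	if !bytes.Equal(decrypted, plaintext) {
  		t.Fatalf("round trip mismatch: got %q, want %q", decrypted, plaintext)
  	}
  }
  ```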
* test(repl_util): exercise full SSE-S3 path through MaybeDecryptReader
Replace direct CreateSSES3DecryptedReader calls with end-to-end tests
that go through MaybeDecryptReader → decryptSSES3 →
DeserializeSSES3Metadata → GetSSES3IV → CreateSSES3DecryptedReader.
  Uses the WEED_S3_SSE_KEK env var and a mock filer client to initialize the
global key manager with a test KEK, then SerializeSSES3Metadata to
build proper envelope-encrypted metadata. Cleanup restores the key
manager state.
* fix(localsink): write to temp file to prevent truncated replicas
The local sink truncated the destination file before writing content.
If decryption or chunk copy failed, the file was left empty/truncated,
destroying the previous replica.
  Write to a temp file in the same directory and atomically rename on
  success (sketched below). On any error the temp file is cleaned up and
  the existing replica is untouched.
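  The pattern as a standalone sketch (imports: io, os, path/filepath; the helper name is invented and the local sink's actual code differs):
  ```go
  // writeFileAtomically writes via a temp file in the destination's
  // directory and renames into place only after a successful write.
  func writeFileAtomically(dest string, write func(w io.Writer) error) error {
  	tmp, err := os.CreateTemp(filepath.Dir(dest), filepath.Base(dest)+".tmp-*")
  	if err != nil {
  		return err
  	}
  	tmpName := tmp.Name()

  	if err := write(tmp); err != nil {
  		tmp.Close()
  		os.Remove(tmpName) // failure leaves the existing replica untouched
  		return err
  	}
  	if err := tmp.Close(); err != nil {
  		os.Remove(tmpName)
  		return err
  	}
  	// Same-directory rename is atomic on POSIX filesystems, so readers
  	// see either the old replica or the complete new one.
  	return os.Rename(tmpName, dest)
  }
  ```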
---------
Co-authored-by: Chris Lu <chris.lu@gmail.com>
```go
package replication

import (
	"context"
	"fmt"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
	"github.com/seaweedfs/seaweedfs/weed/replication/repl_util"
	"github.com/seaweedfs/seaweedfs/weed/replication/sink"
	"github.com/seaweedfs/seaweedfs/weed/replication/source"
	"github.com/seaweedfs/seaweedfs/weed/util"
	"google.golang.org/grpc"
)

type Replicator struct {
	sink        sink.ReplicationSink
	source      *source.FilerSource
	excludeDirs []string
}

func NewReplicator(sourceConfig util.Configuration, configPrefix string, dataSink sink.ReplicationSink) *Replicator {

	source := &source.FilerSource{}
	source.Initialize(sourceConfig, configPrefix)

	if err := repl_util.InitializeSSEForReplication(source); err != nil {
		glog.Warningf("SSE initialization failed: %v (encrypted objects may fail to replicate)", err)
	}

	dataSink.SetSourceFiler(source)

	return &Replicator{
		sink:        dataSink,
		source:      source,
		excludeDirs: sourceConfig.GetStringSlice(configPrefix + "excludeDirectories"),
	}
}

func (r *Replicator) Replicate(ctx context.Context, key string, message *filer_pb.EventNotification) error {
	if message.IsFromOtherCluster && r.sink.GetName() == "filer" {
		return nil
	}

	oldEntry := message.OldEntry
	newEntry := message.NewEntry
	newParentPath := message.NewParentPath

	oldInSource := util.IsEqualOrUnder(key, r.source.Dir) && !r.isExcluded(key)

	// For rename events (both old and new entry present), check both paths
	// against the source directory. Convert cross-boundary renames to
	// create or delete so the sink stays consistent.
	if oldEntry != nil && newEntry != nil {
		newFullPath, targetParent := metadataEventTarget(key, newEntry, newParentPath)
		newInSource := util.IsEqualOrUnder(newFullPath, r.source.Dir) && !r.isExcluded(newFullPath)

		if !oldInSource && !newInSource {
			return nil
		}
		if !oldInSource {
			// Rename into watched directory: treat as create
			oldEntry = nil
			key = newFullPath
			newParentPath = targetParent
		} else if !newInSource {
			// Rename out of watched directory: treat as delete
			newEntry = nil
			newParentPath = ""
		}
	} else if !oldInSource {
		glog.V(4).Infof("skipping %v outside of %v", key, r.source.Dir)
		return nil
	}

	var dateKey string
	if r.sink.IsIncremental() {
		var mTime int64
		if newEntry != nil {
			mTime = newEntry.Attributes.Mtime
		} else if oldEntry != nil {
			mTime = oldEntry.Attributes.Mtime
		}
		dateKey = time.Unix(mTime, 0).Format("2006-01-02")
	}
	oldSinkKey := r.sourceToSinkKey(key, dateKey)
	glog.V(3).Infof("replicate %s => %s", key, oldSinkKey)

	newSinkKey := oldSinkKey
	newSinkParentPath := newParentPath
	if oldEntry != nil && newEntry != nil {
		targetSourceKey, targetSourceParent := metadataEventTarget(key, newEntry, newParentPath)
		newSinkKey = r.sourceToSinkKey(targetSourceKey, dateKey)
		newSinkParentPath = r.sourceToSinkPath(targetSourceParent, dateKey)
	} else if newParentPath != "" && util.IsEqualOrUnder(newParentPath, r.source.Dir) {
		newSinkParentPath = r.sourceToSinkPath(newParentPath, dateKey)
	}

	if oldEntry != nil && newEntry == nil {
		glog.V(4).Infof("deleting %v", oldSinkKey)
		return r.sink.DeleteEntry(oldSinkKey, oldEntry.IsDirectory, message.DeleteChunks, message.Signatures)
	}
	if oldEntry == nil && newEntry != nil {
		glog.V(4).Infof("creating %v", oldSinkKey)
		return r.sink.CreateEntry(oldSinkKey, newEntry, message.Signatures)
	}
	if oldEntry == nil && newEntry == nil {
		glog.V(0).Infof("weird message %+v", message)
		return nil
	}

	if oldSinkKey != newSinkKey && r.sink.GetName() != "filer" {
		if err := r.sink.DeleteEntry(oldSinkKey, oldEntry.IsDirectory, false, message.Signatures); err != nil {
			return fmt.Errorf("delete old entry %v: %w", oldSinkKey, err)
		}
		glog.V(4).Infof("creating renamed %v", newSinkKey)
		return r.sink.CreateEntry(newSinkKey, newEntry, message.Signatures)
	}

	foundExisting, err := r.sink.UpdateEntry(oldSinkKey, oldEntry, newSinkParentPath, newEntry, message.DeleteChunks, message.Signatures)
	if foundExisting {
		glog.V(4).Infof("updated %v", oldSinkKey)
		return err
	}

	err = r.sink.DeleteEntry(oldSinkKey, oldEntry.IsDirectory, false, message.Signatures)
	if err != nil {
		return fmt.Errorf("delete old entry %v: %w", oldSinkKey, err)
	}

	glog.V(4).Infof("creating missing %v", newSinkKey)
	return r.sink.CreateEntry(newSinkKey, newEntry, message.Signatures)
}

func (r *Replicator) isExcluded(path string) bool {
	for _, excludeDir := range r.excludeDirs {
		if util.IsEqualOrUnder(path, excludeDir) {
			return true
		}
	}
	return false
}

func (r *Replicator) sourceToSinkKey(sourceKey, dateKey string) string {
	return util.Join(r.sink.GetSinkToDirectory(), dateKey, sourceKey[len(r.source.Dir):])
}

func (r *Replicator) sourceToSinkPath(sourcePath, dateKey string) string {
	return util.Join(r.sink.GetSinkToDirectory(), dateKey, sourcePath[len(r.source.Dir):])
}

func metadataEventTarget(key string, newEntry *filer_pb.Entry, newParentPath string) (targetKey, targetParent string) {
	if newEntry == nil {
		return "", ""
	}

	targetParent = newParentPath
	if targetParent == "" {
		targetParent, _ = util.FullPath(key).DirAndName()
	}

	return util.Join(targetParent, newEntry.Name), targetParent
}

func ReadFilerSignature(grpcDialOption grpc.DialOption, filer pb.ServerAddress) (filerSignature int32, readErr error) {
	if readErr = pb.WithFilerClient(false, 0, filer, grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
		if resp, err := client.GetFilerConfiguration(context.Background(), &filer_pb.GetFilerConfigurationRequest{}); err != nil {
			return fmt.Errorf("GetFilerConfiguration %s: %v", filer, err)
		} else {
			filerSignature = resp.Signature
		}
		return nil
	}); readErr != nil {
		return 0, readErr
	}
	return filerSignature, nil
}
```