seaweedFS/weed/replication/sink/s3sink/s3_sink.go
Mmx233 3cea900241 fix: replication sinks upload ciphertext for SSE-encrypted objects (#8931)
* fix: decrypt SSE-encrypted objects in S3 replication sink

* fix: add SSE decryption support to GCS, Azure, B2, Local sinks

* fix: return error instead of warning for SSE-C objects during replication

* fix: close readers after upload to prevent resource leaks

* fix: return error for unknown SSE types instead of passing through ciphertext

* refactor(repl_util): extract CloseReader/CloseMaybeDecryptedReader helpers

The io.Closer close-on-error and defer-close pattern was duplicated in
copyWithDecryption and the S3 sink. Extract exported helpers to keep a
single implementation and prevent future divergence.
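
As a rough sketch (the helper names come from this commit and both are
called from the S3 sink shown below; the bodies and signatures are
assumptions, not the actual repl_util code):

    package repl_util

    import "io"

    // CloseReader closes r if it also implements io.Closer; errors from
    // Close are ignored here for brevity.
    func CloseReader(r io.Reader) {
        if c, ok := r.(io.Closer); ok {
            _ = c.Close()
        }
    }

    // CloseMaybeDecryptedReader closes a possibly-wrapping decrypted reader
    // and its underlying source, avoiding a double close when they are the
    // same value.
    func CloseMaybeDecryptedReader(original, decrypted io.Reader) {
        if decrypted != nil && decrypted != original {
            CloseReader(decrypted)
        }
        CloseReader(original)
    }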

* fix(repl_util): warn on mixed SSE types across chunks in detectSSEType

detectSSEType previously returned the SSE type of the first encrypted
chunk without inspecting the rest. If an entry somehow has chunks with
different SSE types, only the first type's decryption would be applied.
Now scans all chunks and logs a warning on mismatch.

* fix(repl_util): decrypt inline SSE objects during replication

Small SSE-encrypted objects stored in entry.Content were being copied
as ciphertext because:
1. detectSSEType only checked chunk metadata, but inline objects have
   no chunks — now falls back to checking entry.Extended for SSE keys
2. Non-S3 sinks short-circuited on len(entry.Content)>0, bypassing
   the decryption path — now call MaybeDecryptContent before writing

Adds MaybeDecryptContent helper for decrypting inline byte content.
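
A plausible sketch of the new helper, assuming it reuses MaybeDecryptReader
and the close helpers over an in-memory reader (parameter types are inferred
from the S3 sink call sites, not copied from the real code):

    package repl_util

    import (
        "bytes"
        "io"

        "github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
    )

    // MaybeDecryptContent returns the plaintext for inline entry content,
    // decrypting it when the entry's extended metadata marks it as
    // SSE-encrypted.
    func MaybeDecryptContent(content []byte, entry *filer_pb.Entry) ([]byte, error) {
        src := bytes.NewReader(content)
        decrypted, err := MaybeDecryptReader(src, entry)
        if err != nil {
            return nil, err
        }
        defer CloseMaybeDecryptedReader(src, decrypted)
        return io.ReadAll(decrypted)
    }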

* fix(repl_util): add KMS initialization for replication SSE decryption

SSE-KMS decryption was not wired up for filer.backup — the only
initialization was for the SSE-S3 key manager. CreateSSEKMSDecryptedReader
requires a global KMS provider, which is only loaded by the S3 API
auth-config path.

Add InitializeSSEForReplication helper that initializes both SSE-S3
(from filer KEK) and SSE-KMS (from Viper config [kms] section /
WEED_KMS_* env vars). Replace the SSE-S3-only init in filer_backup.go.
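
For the env-var side, the snippet below illustrates how an env-only KMS
config becomes visible through Viper, assuming the usual WEED_ prefix and
dot-to-underscore mapping; the kms.type key is made up for the example and
this is not SeaweedFS code:

    package main

    import (
        "fmt"
        "strings"

        "github.com/spf13/viper"
    )

    func main() {
        v := viper.New()
        v.SetEnvPrefix("weed")
        v.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
        v.AutomaticEnv()

        // With WEED_KMS_TYPE=aws exported, this prints "aws".
        fmt.Println(v.GetString("kms.type"))

        // Note: v.IsSet("kms") is still false here, since nothing was read
        // from a config file.
        fmt.Println(v.IsSet("kms"))
    }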

* fix(replicator): initialize SSE decryption for filer.replicate

The SSE decryption setup was only added to filer_backup.go, but the
notification-based replicator (filer.replicate) uses the same sinks
and was missing the required initialization. Add SSE init in
NewReplicator so filer.replicate can decrypt SSE objects.

* refactor(repl_util): fold entry param into CopyFromChunkViews

Remove the CopyFromChunkViewsWithEntry wrapper and add the entry
parameter directly to CopyFromChunkViews, since all callers already
pass it.

* fix(repl_util): guard SSE init with sync.Once, error on mixed SSE types

InitializeWithFiler overwrites the global superKey on every call.
Wrap InitializeSSEForReplication with sync.Once so repeated calls
(e.g. from NewReplicator) are safe.

detectSSEType now returns an error instead of logging a warning when
chunks have inconsistent SSE types, so replication aborts rather than
silently applying the wrong decryption to some chunks.
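
In isolation, the rule looks like this (a stand-alone illustration; the real
detectSSEType reads the SSE type from each chunk's metadata rather than a
string slice):

    package repl_util

    import "fmt"

    // detectUniformSSEType requires every encrypted chunk to report the same
    // SSE type, otherwise the entry is rejected so that no chunk gets the
    // wrong decryption applied. An empty string means "not encrypted".
    func detectUniformSSEType(chunkTypes []string) (string, error) {
        detected := ""
        for _, t := range chunkTypes {
            if t == "" {
                continue
            }
            if detected == "" {
                detected = t
                continue
            }
            if t != detected {
                return "", fmt.Errorf("mixed SSE types across chunks: %s vs %s", detected, t)
            }
        }
        return detected, nil
    }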

* fix(repl_util): allow SSE init retry, detect conflicting metadata, add tests

- Replace sync.Once with mutex+bool so transient failures (e.g. filer
  unreachable) don't permanently prevent initialization. Only successful
  init flips the flag; failed attempts allow retries (see the sketch
  after this list).

- Remove v.IsSet("kms") guard that prevented env-only KMS configs
  (WEED_KMS_*) from being detected. Always attempt KMS loading and let
  LoadConfigurations handle "no config found".

- detectSSEType now checks for conflicting extended metadata keys
  (e.g. both SeaweedFSSSES3Key and SeaweedFSSSEKMSKey present) and
  returns an error instead of silently picking the first match.

- Add table-driven tests for detectSSEType, MaybeDecryptReader, and
  MaybeDecryptContent covering plaintext, uniform SSE, mixed chunks,
  inline SSE via extended metadata, conflicting metadata, and SSE-C.
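
A minimal sketch of the retry-friendly guard from the first bullet; the
names are hypothetical, and the real helper performs the SSE-S3 and KMS
setup where the callback sits here:

    package repl_util

    import "sync"

    var (
        sseInitMu   sync.Mutex
        sseInitDone bool
    )

    // ensureSSEInitialized runs init at most once successfully, but unlike
    // sync.Once it lets callers retry after a transient failure such as an
    // unreachable filer: the done flag only flips when init returns nil.
    func ensureSSEInitialized(init func() error) error {
        sseInitMu.Lock()
        defer sseInitMu.Unlock()
        if sseInitDone {
            return nil
        }
        if err := init(); err != nil {
            return err
        }
        sseInitDone = true
        return nil
    }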

* test(repl_util): add SSE-S3 and SSE-KMS integration tests

Add round-trip encryption/decryption tests:
- SSE-S3: encrypt with CreateSSES3EncryptedReader, decrypt with
  CreateSSES3DecryptedReader, verify plaintext matches
- SSE-KMS: encrypt with AES-CTR, wire a mock KMSProvider via
  SetGlobalKMSProvider, build serialized KMS metadata, verify
  MaybeDecryptReader and MaybeDecryptContent produce correct plaintext

Fix existing tests to check io.ReadAll errors.
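
The AES-CTR step is just the standard-library primitive; the round trip
below is not the SeaweedFS test code, only the cipher usage it builds on:

    package main

    import (
        "bytes"
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "fmt"
    )

    func main() {
        key := make([]byte, 32) // a data key, as a KMS provider would return
        iv := make([]byte, aes.BlockSize)
        if _, err := rand.Read(key); err != nil {
            panic(err)
        }
        if _, err := rand.Read(iv); err != nil {
            panic(err)
        }

        plaintext := []byte("replicated object body")
        block, err := aes.NewCipher(key)
        if err != nil {
            panic(err)
        }

        // Encrypt.
        ciphertext := make([]byte, len(plaintext))
        cipher.NewCTR(block, iv).XORKeyStream(ciphertext, plaintext)

        // Decrypt: CTR is symmetric, the same keystream recovers the plaintext.
        decrypted := make([]byte, len(ciphertext))
        cipher.NewCTR(block, iv).XORKeyStream(decrypted, ciphertext)

        fmt.Println(bytes.Equal(plaintext, decrypted)) // true
    }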

* test(repl_util): exercise full SSE-S3 path through MaybeDecryptReader

Replace direct CreateSSES3DecryptedReader calls with end-to-end tests
that go through MaybeDecryptReader → decryptSSES3 →
DeserializeSSES3Metadata → GetSSES3IV → CreateSSES3DecryptedReader.

Uses WEED_S3_SSE_KEK env var + a mock filer client to initialize the
global key manager with a test KEK, then SerializeSSES3Metadata to
build proper envelope-encrypted metadata. Cleanup restores the key
manager state.

* fix(localsink): write to temp file to prevent truncated replicas

The local sink truncated the destination file before writing content.
If decryption or chunk copy failed, the file was left empty/truncated,
destroying the previous replica.

Write to a temp file in the same directory and atomically rename on
success. On any error the temp file is cleaned up and the existing
replica is untouched.
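
The pattern in outline, with a hypothetical helper name and simplified error
handling (not the actual localsink code):

    package localsink

    import (
        "io"
        "os"
        "path/filepath"
    )

    // writeReplicaAtomically stages the new content in a temp file next to
    // dst and renames it into place only after the write succeeds, so a
    // failed decryption or chunk copy never truncates the existing replica.
    func writeReplicaAtomically(dst string, write func(io.Writer) error) error {
        tmp, err := os.CreateTemp(filepath.Dir(dst), filepath.Base(dst)+".tmp-*")
        if err != nil {
            return err
        }
        tmpName := tmp.Name()
        defer os.Remove(tmpName) // no-op after a successful rename
        if err := write(tmp); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Sync(); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        // Rename within the same directory is atomic on POSIX filesystems.
        return os.Rename(tmpName, dst)
    }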

---------

Co-authored-by: Chris Lu <chris.lu@gmail.com>
2026-04-06 00:32:27 -07:00

package S3Sink

import (
	"encoding/base64"
	"fmt"
	"net/url"
	"strconv"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3iface"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"

	"github.com/seaweedfs/seaweedfs/weed/filer"
	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
	"github.com/seaweedfs/seaweedfs/weed/replication/repl_util"
	"github.com/seaweedfs/seaweedfs/weed/replication/sink"
	"github.com/seaweedfs/seaweedfs/weed/replication/source"
	"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
	"github.com/seaweedfs/seaweedfs/weed/util"
)

type S3Sink struct {
	conn                          s3iface.S3API
	filerSource                   *source.FilerSource
	isIncremental                 bool
	keepPartSize                  bool
	s3DisableContentMD5Validation bool
	s3ForcePathStyle              bool
	uploaderConcurrency           int
	uploaderMaxUploadParts        int
	uploaderPartSizeMb            int
	region                        string
	bucket                        string
	dir                           string
	endpoint                      string
	acl                           string
}

func init() {
	sink.Sinks = append(sink.Sinks, &S3Sink{})
}

func (s3sink *S3Sink) GetName() string {
	return "s3"
}

func (s3sink *S3Sink) GetSinkToDirectory() string {
	return s3sink.dir
}

func (s3sink *S3Sink) IsIncremental() bool {
	return s3sink.isIncremental
}

func (s3sink *S3Sink) Initialize(configuration util.Configuration, prefix string) error {
	configuration.SetDefault(prefix+"region", "us-east-2")
	configuration.SetDefault(prefix+"directory", "/")
	configuration.SetDefault(prefix+"keep_part_size", true)
	configuration.SetDefault(prefix+"uploader_max_upload_parts", 1000)
	configuration.SetDefault(prefix+"uploader_part_size_mb", 8)
	configuration.SetDefault(prefix+"uploader_concurrency", 8)
	configuration.SetDefault(prefix+"s3_disable_content_md5_validation", true)
	configuration.SetDefault(prefix+"s3_force_path_style", true)
	s3sink.region = configuration.GetString(prefix + "region")
	s3sink.bucket = configuration.GetString(prefix + "bucket")
	s3sink.dir = configuration.GetString(prefix + "directory")
	s3sink.endpoint = configuration.GetString(prefix + "endpoint")
	s3sink.acl = configuration.GetString(prefix + "acl")
	s3sink.isIncremental = configuration.GetBool(prefix + "is_incremental")
	s3sink.keepPartSize = configuration.GetBool(prefix + "keep_part_size")
	s3sink.s3DisableContentMD5Validation = configuration.GetBool(prefix + "s3_disable_content_md5_validation")
	s3sink.s3ForcePathStyle = configuration.GetBool(prefix + "s3_force_path_style")
	s3sink.uploaderMaxUploadParts = configuration.GetInt(prefix + "uploader_max_upload_parts")
	s3sink.uploaderPartSizeMb = configuration.GetInt(prefix + "uploader_part_size_mb")
	s3sink.uploaderConcurrency = configuration.GetInt(prefix + "uploader_concurrency")
	glog.V(0).Infof("sink.s3.region: %v", s3sink.region)
	glog.V(0).Infof("sink.s3.bucket: %v", s3sink.bucket)
	glog.V(0).Infof("sink.s3.directory: %v", s3sink.dir)
	glog.V(0).Infof("sink.s3.endpoint: %v", s3sink.endpoint)
	glog.V(0).Infof("sink.s3.acl: %v", s3sink.acl)
	glog.V(0).Infof("sink.s3.is_incremental: %v", s3sink.isIncremental)
	glog.V(0).Infof("sink.s3.s3_disable_content_md5_validation: %v", s3sink.s3DisableContentMD5Validation)
	glog.V(0).Infof("sink.s3.s3_force_path_style: %v", s3sink.s3ForcePathStyle)
	glog.V(0).Infof("sink.s3.keep_part_size: %v", s3sink.keepPartSize)
	if s3sink.uploaderMaxUploadParts > s3manager.MaxUploadParts {
		// Log the configured value before clamping so the "old => new" message is meaningful.
		glog.Warningf("uploader_max_upload_parts is greater than the maximum number of parts allowed when uploading multiple parts to Amazon S3")
		glog.V(0).Infof("sink.s3.uploader_max_upload_parts: %v => %v", s3sink.uploaderMaxUploadParts, s3manager.MaxUploadParts)
		s3sink.uploaderMaxUploadParts = s3manager.MaxUploadParts
	} else {
		glog.V(0).Infof("sink.s3.uploader_max_upload_parts: %v", s3sink.uploaderMaxUploadParts)
	}
glog.V(0).Infof("sink.s3.uploader_part_size_mb: %v", s3sink.uploaderPartSizeMb)
glog.V(0).Infof("sink.s3.uploader_concurrency: %v", s3sink.uploaderConcurrency)
return s3sink.initialize(
configuration.GetString(prefix+"aws_access_key_id"),
configuration.GetString(prefix+"aws_secret_access_key"),
)
}
func (s3sink *S3Sink) SetSourceFiler(s *source.FilerSource) {
s3sink.filerSource = s
}
func (s3sink *S3Sink) initialize(awsAccessKeyId, awsSecretAccessKey string) error {
config := &aws.Config{
Region: aws.String(s3sink.region),
Endpoint: aws.String(s3sink.endpoint),
S3DisableContentMD5Validation: aws.Bool(s3sink.s3DisableContentMD5Validation),
S3ForcePathStyle: aws.Bool(s3sink.s3ForcePathStyle),
}
if awsAccessKeyId != "" && awsSecretAccessKey != "" {
config.Credentials = credentials.NewStaticCredentials(awsAccessKeyId, awsSecretAccessKey, "")
}
sess, err := session.NewSession(config)
if err != nil {
return fmt.Errorf("create aws session: %w", err)
}
s3sink.conn = s3.New(sess)
return nil
}
func (s3sink *S3Sink) DeleteEntry(key string, isDirectory, deleteIncludeChunks bool, signatures []int32) error {
key = cleanKey(key)
if isDirectory {
return nil
}
input := &s3.DeleteObjectInput{
Bucket: aws.String(s3sink.bucket),
Key: aws.String(key),
}
result, err := s3sink.conn.DeleteObject(input)
if err == nil {
glog.V(2).Infof("[%s] delete %s: %v", s3sink.bucket, key, result)
} else {
glog.Errorf("[%s] delete %s: %v", s3sink.bucket, key, err)
}
return err
}
func (s3sink *S3Sink) CreateEntry(key string, entry *filer_pb.Entry, signatures []int32) (err error) {
key = cleanKey(key)
if entry.IsDirectory {
return nil
}
reader := filer.NewFileReader(s3sink.filerSource, entry)
// Decrypt SSE-encrypted objects so the destination receives plaintext
decryptedReader, err := repl_util.MaybeDecryptReader(reader, entry)
if err != nil {
repl_util.CloseReader(reader)
return fmt.Errorf("decrypt SSE object: %w", err)
}
defer repl_util.CloseMaybeDecryptedReader(reader, decryptedReader)
// Create an uploader with the session and custom options
uploader := s3manager.NewUploaderWithClient(s3sink.conn, func(u *s3manager.Uploader) {
u.PartSize = int64(s3sink.uploaderPartSizeMb * 1024 * 1024)
u.Concurrency = s3sink.uploaderConcurrency
u.MaxUploadParts = s3sink.uploaderMaxUploadParts
})
if s3sink.keepPartSize {
switch chunkCount := len(entry.Chunks); {
case chunkCount > 1:
if firstChunkSize := int64(entry.Chunks[0].Size); firstChunkSize > s3manager.MinUploadPartSize {
uploader.PartSize = firstChunkSize
}
default:
uploader.PartSize = 0
}
}
doSaveMtime := true
if entry.Extended == nil {
entry.Extended = make(map[string][]byte)
} else if _, ok := entry.Extended[s3_constants.AmzUserMetaMtime]; ok {
doSaveMtime = false
}
if doSaveMtime {
entry.Extended[s3_constants.AmzUserMetaMtime] = []byte(strconv.FormatInt(entry.Attributes.Mtime, 10))
}
// process tagging
tags := buildTaggingString(entry.Extended)
// Upload the file to S3.
uploadInput := s3manager.UploadInput{
Bucket: aws.String(s3sink.bucket),
Key: aws.String(key),
Body: decryptedReader,
}
if tags != "" {
uploadInput.Tagging = aws.String(tags)
}
if len(entry.Attributes.Md5) > 0 {
uploadInput.ContentMD5 = aws.String(base64.StdEncoding.EncodeToString([]byte(entry.Attributes.Md5)))
}
_, err = uploader.Upload(&uploadInput)
return err
}
func (s3sink *S3Sink) UpdateEntry(key string, oldEntry *filer_pb.Entry, newParentPath string, newEntry *filer_pb.Entry, deleteIncludeChunks bool, signatures []int32) (foundExistingEntry bool, err error) {
key = cleanKey(key)
return true, s3sink.CreateEntry(key, newEntry, signatures)
}
func cleanKey(key string) string {
if strings.HasPrefix(key, "/") {
key = key[1:]
}
return key
}
// buildTaggingString builds the S3 Tagging header value from entry extended metadata.
// Only keys with the AmzObjectTaggingPrefix ("X-Amz-Tagging-") are included as object
// tags. The prefix is stripped and values are URL-encoded to produce a valid S3 tagging
// query string.
func buildTaggingString(extended map[string][]byte) string {
tagValues := url.Values{}
for k, v := range extended {
if strings.HasPrefix(k, s3_constants.AmzObjectTaggingPrefix) {
tagKey := k[len(s3_constants.AmzObjectTaggingPrefix):]
tagValues.Set(tagKey, string(v))
}
}
return tagValues.Encode()
}