* fix(kafka): resolve consumer group resumption timeout in e2e tests
Three issues caused ConsumerGroupResumption to time out when the second
consumer tried to resume from committed offsets:
1. ForceCompleteRebalance deadlock: performCleanup() held group.Mu.Lock
then called ForceCompleteRebalance() which tried to acquire the same
lock — a guaranteed deadlock on Go's non-reentrant sync.Mutex. Fixed
by requiring callers to hold the lock (matching actual call sites).
2. Unbounded fallback fetch: when the multi-batch fetch timed out, the
fallback GetStoredRecords call used the connection context (no
deadline). A slow broker gRPC call could block the data-plane
goroutine indefinitely, causing head-of-line blocking for all
responses on that connection. Fixed with a 10-second timeout.
3. HWM lookup failure caused empty responses: after a consumer leaves
and the partition is deactivated, GetLatestOffset can fail. The
fetch handler treated this as "no data" and entered the long-poll
loop (up to 10s × 4 retries = 40s timeout). Fixed by assuming data
may exist when HWM lookup fails, so the actual fetch determines
availability.
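The deadlock in issue 1 and its fix can be sketched as follows; type and method names are illustrative, not the actual SeaweedFS code:

```go
package main

import (
	"fmt"
	"sync"
)

// The convention the fix adopts: a method documented as "caller must
// hold mu" never locks mu itself, because Go's sync.Mutex is not
// reentrant and a second Lock from the same goroutine self-deadlocks.
type group struct {
	mu    sync.Mutex
	state string
}

// forceCompleteRebalanceLocked requires g.mu to be held by the caller.
// Acquiring g.mu here would deadlock when called from performCleanup.
func (g *group) forceCompleteRebalanceLocked() {
	g.state = "stable"
}

func (g *group) performCleanup() {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.forceCompleteRebalanceLocked() // safe: lock already held
}

func main() {
	g := &group{state: "rebalancing"}
	g.performCleanup()
	fmt.Println(g.state)
}
```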
* fix(kafka): address review feedback on HWM sentinel and fallback timeout
- Don't expose synthetic HWM (requestedOffset+1) to clients; keep
result.highWaterMark at 0 when the real HWM lookup fails.
- Tie fallback timeout to client's MaxWaitTime instead of a fixed 10s,
so one slow partition doesn't hold the reader beyond the request budget.
* fix(kafka): use large HWM sentinel and clamp fallback timeout
- Use requestedOffset+10000 as sentinel HWM instead of +1, so
FetchMultipleBatches doesn't artificially limit to 1 record.
- Add 2s floor to fallback timeout so disk reads via gRPC have
a reasonable chance even when maxWaitMs is small or zero.
* fix(kafka): use MaxInt64 sentinel and derive HWM from fetch result
- Use math.MaxInt64 as HWM sentinel to avoid integer overflow risk
(previously requestedOffset+10000 could wrap on large offsets).
- After the fetch, derive a meaningful HWM from newOffset so the
client never sees MaxInt64 or 0 in the response.
* fix(kafka): use remaining time budget for fallback fetch
The fallback was restarting the full maxWaitMs budget even though the
multi-batch fetch already consumed part of it. Now compute remaining
time from either the parent context deadline or maxWaitMs minus
elapsed, skip the fallback if budget is exhausted, and clamp to
[2s, 10s] bounds.
* fix(gcssink): prevent empty object finalization on write failure
The GCS writer was created unconditionally with defer wc.Close(),
which finalizes the upload even when content decryption or copy
fails. This silently overwrites valid objects with empty data.
Remove the unconditional defer, explicitly close on success to
propagate errors, and delete the object on write failure.
* fix(gcssink): use context cancellation instead of obj.Delete on failure
obj.Delete() after a failed write would delete the existing object at
that key, causing data loss on updates. Use a cancelable context
instead — cancelling before Close() aborts the GCS upload without
touching any pre-existing object.
* fix(azuresink): delete freshly created blob on write failure
appendBlobClient.Create() runs before content decryption and copy.
If MaybeDecryptContent or CopyFromChunkViews fails, an empty blob
is left behind, silently replacing any previous valid data. Add
cleanup that deletes the blob on content write errors when we were
the ones who created it.
* fix(azuresink): track recreated blobs for cleanup on write failure
handleExistingBlob deletes and recreates the blob when overwrite is
needed, but freshlyCreated was only set on the initial Create success
path. Set freshlyCreated = needsWrite after handleExistingBlob so
recreated blobs are also cleaned up on content write failure.
* fix(s3): apply PutObject multipart expansion to STS session policy evaluation (#8929)
PR #8445 added logic to implicitly grant multipart upload actions when
s3:PutObject is authorized, but only in the S3 API policy engine's
CompiledStatement.MatchesAction(). STS session policies are evaluated
through the IAM policy engine's matchesActions() -> awsIAMMatch() path,
which did plain pattern matching without the multipart expansion.
Add the same multipart expansion logic to the IAM policy engine's
matchesActions() so that session policies containing s3:PutObject
correctly allow multipart upload operations.
* fix: make multipart action set lookup case-insensitive and optimize
Address PR review feedback:
- Lowercase multipartActionSet keys and use strings.ToLower for lookup,
since AWS IAM actions are case-insensitive
- Only check for s3:PutObject permission when the requested action is
actually a multipart action, avoiding unnecessary awsIAMMatch calls
- Add test case for case-insensitive multipart action matching
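A sketch of the case-insensitive lookup; the exact membership of the multipart action set is illustrative, the point is that keys are stored lowercased and the requested action is lowercased before the map lookup, since AWS IAM action names match case-insensitively:

```go
package main

import (
	"fmt"
	"strings"
)

// multipartActionSet holds lowercased keys so a single ToLower on the
// requested action gives a case-insensitive membership test.
var multipartActionSet = map[string]bool{
	"s3:createmultipartupload":    true,
	"s3:uploadpart":               true,
	"s3:completemultipartupload":  true,
	"s3:abortmultipartupload":     true,
	"s3:listmultipartuploadparts": true,
}

// isMultipartAction gates the s3:PutObject expansion check: non-multipart
// actions skip the extra awsIAMMatch call entirely.
func isMultipartAction(action string) bool {
	return multipartActionSet[strings.ToLower(action)]
}

func main() {
	fmt.Println(isMultipartAction("s3:UploadPart"))
	fmt.Println(isMultipartAction("s3:GetObject"))
}
```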
* fix(admin): reduce memory usage and verbose logging for large clusters (#8919)
The admin server used excessive memory and produced thousands of log lines
on clusters with many volumes (e.g., 33k volumes). Three root causes:
1. Scanner duplicated all volume metrics: getVolumeHealthMetrics() created
VolumeHealthMetrics objects, then convertToTaskMetrics() copied them all
into identical types.VolumeHealthMetrics. Now uses the task-system type
directly, eliminating the duplicate allocation and removing convertToTaskMetrics.
2. All previous task states loaded at startup: LoadTasksFromPersistence read
and deserialized every .pb file from disk, logging each one. With thousands
of balance tasks persisted, this caused massive startup I/O, memory usage,
and log noise (including unguarded DEBUG glog.Infof per task). Now starts
with an empty queue — the scanner re-detects current needs from live cluster
state. Terminal tasks are purged from memory and disk when new scan results
arrive.
3. Verbose per-volume/per-node logging: V(2) and V(3) logs produced thousands
of lines per scan. Per-volume logs bumped to V(4), per-node/rack/disk logs
bumped to V(3). Topology summary now logs counts instead of full node ID arrays.
Also removes lastTopologyInfo field from MaintenanceScanner — the raw protobuf
topology is returned as a local value and not retained between 30-minute scans.
* fix(admin): delete stale task files at startup, add DeleteAllTaskStates
Old task .pb files from previous runs were left on disk, and the periodic
CleanupCompletedTasks still loaded and deserialized every file to find
completed ones, the same path that accounted for the 4GB seen in the
pprof profile.
Now at startup, DeleteAllTaskStates removes all .pb files by scanning
the directory without reading or deserializing them. The scanner will
re-detect any tasks still needed from live cluster state.
* fix(admin): don't persist terminal tasks to disk
CompleteTask was saving failed/completed tasks to disk where they'd
accumulate. The periodic cleanup only triggered for completed tasks,
not failed ones. Now terminal tasks are deleted from disk immediately
and only kept in memory for the current session's UI.
* fix(admin): cap in-memory tasks to 100 per job type
Without a limit, the task map grows unbounded — balance could create
thousands of pending tasks for a cluster with many imbalanced volumes.
Now AddTask rejects new tasks when a job type already has 100 in the
queue. The scanner will re-detect skipped volumes on the next scan.
* fix(admin): address PR review - memory-only purge, active-only capacity
- purgeTerminalTasks now only cleans in-memory map (terminal tasks are
already deleted from disk by CompleteTask)
- Per-type capacity limit counts only active tasks (pending/assigned/
in_progress), not terminal ones
- When at capacity, purge terminal tasks first before rejecting
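The capacity check after review feedback can be sketched as follows (status names and the cap value follow the commit text; function names are illustrative):

```go
package main

import "fmt"

// Only active tasks count toward the per-job-type cap, so terminal
// entries kept in memory for the UI cannot starve new work.
const maxTasksPerType = 100

func activeCount(statuses []string) int {
	n := 0
	for _, s := range statuses {
		switch s {
		case "pending", "assigned", "in_progress":
			n++
		}
	}
	return n
}

// canAccept reports whether a new task of this job type fits under the
// cap; at capacity, the caller purges terminal tasks and retries before
// rejecting.
func canAccept(statuses []string) bool {
	return activeCount(statuses) < maxTasksPerType
}

func main() {
	statuses := []string{"pending", "completed", "failed", "in_progress"}
	fmt.Println(activeCount(statuses), canAccept(statuses))
}
```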
* fix(admin): fix orphaned comment, add TaskStatusCancelled to terminal switch
- Move hasQueuedOrActiveTaskForVolume comment to its function definition
- Add TaskStatusCancelled to the terminal state switch in CompleteTask
so cancelled task files are deleted from disk
When cross-compiling aws-lc-sys for aarch64-unknown-linux-musl using
aarch64-linux-gnu-gcc, glibc's _FORTIFY_SOURCE generates calls to
__memcpy_chk, __fprintf_chk etc. which don't exist in musl, causing
linker errors. Disable it via CFLAGS_aarch64_unknown_linux_musl.
* fix(master): fast resume state and default resumeState to true
When resumeState is enabled in single-master mode, the raft server had
existing log entries so the self-join path couldn't promote to leader.
The server waited the full election timeout (10-20s) before self-electing.
Fix by temporarily setting election timeout to 1ms before Start() when
in single-master + resumeState mode with existing log, then restoring
the original timeout after leader election. This makes resume near-instant.
Also change the default for resumeState from false to true across all
CLI commands (master, mini, server) so state is preserved by default.
* fix(master): prevent fastResume goroutine from hanging forever
Use defer to guarantee election timeout is always restored, and bound
the polling loop with a timeout so it cannot spin indefinitely if
leader election never succeeds.
* fix(master): use ticker instead of time.After in fastResume polling loop
reqwest's default features include native-tls which depends on
openssl-sys, causing builds to fail on musl targets where OpenSSL
headers are not available. Since we already use rustls-tls, disable
default features to eliminate the openssl-sys dependency entirely.
Both container_latest.yml and container_dev.yml use Dockerfile.go_build
which expects weed-volume-prebuilt/ with pre-compiled Rust binaries, but
neither workflow produced them, causing COPY failures during docker build.
Add build-rust-binaries jobs that natively cross-compile for amd64 and
arm64, then download and place the artifacts in the Docker build context.
Also fix the trivy-scan local build path in container_latest.yml.
* fix(admin): use gRPC address for current server in RaftListClusterServers
The old Raft implementation was returning the HTTP address
(ms.option.Master) for the current server, while peers used gRPC
addresses (peer.ConnectionString). The Admin UI's GetClusterMasters()
converts all addresses from gRPC to HTTP via GrpcAddressToServerAddress
(port - 10000), which produced a negative port (-667) for the current
server since its address was already in HTTP format (port 9333).
Use ToGrpcAddress() for consistency with both HashicorpRaft (which
stores gRPC addresses) and old Raft peers.
Fixes #8921
* feat(admin): add profiling options for debugging high memory/CPU usage
Add -debug, -debug.port, -cpuprofile, and -memprofile flags to the admin
command, matching the profiling support already available in master, volume,
and other server commands. This enables investigation of resource usage
issues like #8919.
* refactor(admin): move profiling flags into AdminOptions struct
Move cpuprofile and memprofile flags from global variables into the
AdminOptions struct and init() function for consistency with other flags.
* fix(debug): bind pprof server to localhost only and document profiling flags
StartDebugServer was binding to all interfaces (0.0.0.0), exposing
runtime profiling data to the network. Restrict to 127.0.0.1 since
this is a development/debugging tool.
Also add a "Debugging and Profiling" section to the admin command's
help text documenting the new flags.
Two bugs prevented reliable volume balancing when a Rust volume server
is the copy target:
1. find_last_append_at_ns returned None for delete tombstones (Size==0
in dat header), falling back to file mtime truncated to seconds.
This caused the tail step to re-send needles from the last sub-second
window. Fix: change `needle_size <= 0` to `< 0` since Size==0 delete
needles still have a valid timestamp in their tail.
2. VolumeTailReceiver called read_body_v2 on delete needles, which have
no DataSize/Data/flags — only checksum+timestamp+padding after the
header. Fix: skip read_body_v2 when size == 0, reject negative sizes.
Also:
- Unify gRPC server bind: use TcpListener::bind before spawn for both
TLS and non-TLS paths, propagating bind errors at startup.
- Add mixed Go+Rust cluster test harness and integration tests covering
VolumeCopy in both directions, copy with deletes, and full balance
move with tail tombstone propagation and source deletion.
- Make FindOrBuildRustBinary configurable for default vs no-default
features (4-byte vs 5-byte offsets).
* fix(s3): include static identities in listing operations
Static identities loaded from -s3.config file were only stored in the
S3 API server's in-memory state. Listing operations (s3.configure shell
command, aws iam list-users) queried the credential manager which only
returned dynamic identities from the backend store.
Register static identities with the credential manager after loading
so they are included in LoadConfiguration and ListUsers results, and
filtered out before SaveConfiguration to avoid persisting them to the
dynamic store.
Fixes https://github.com/seaweedfs/seaweedfs/discussions/8896
* fix: avoid mutating caller's config and defensive copies
- SaveConfiguration: use shallow struct copy instead of mutating the
caller's config.Identities field
- SetStaticIdentities: skip nil entries to avoid panics
- GetStaticIdentities: defensively copy PolicyNames slice to avoid
aliasing the original
* fix: filter nil static identities and sync on config reload
- SetStaticIdentities: filter nil entries from the stored slice (not
just from staticNames) to prevent panics in LoadConfiguration/ListUsers
- Extract updateCredentialManagerStaticIdentities helper and call it
from both startup and the grace.OnReload handler so the credential
manager's static snapshot stays current after config file reloads
* fix: add mutex for static identity fields and fix ListUsers for store callers
- Add sync.RWMutex to protect staticIdentities/staticNames against
concurrent reads during config reload
- Revert CredentialManager.ListUsers to return only store users, since
internal callers (e.g. DeletePolicy) look up each user in the store
and fail on non-existent static entries
- Merge static usernames in the filer gRPC ListUsers handler instead,
via the new GetStaticUsernames method
- Fix CI: TestIAMPolicyManagement/managed_policy_crud_lifecycle was
failing because DeletePolicy iterated static users that don't exist
in the store
* fix: show static identities in admin UI and weed shell
The admin UI and weed shell s3.configure command query the filer's
credential manager via gRPC, which is a separate instance from the S3
server's credential manager. Static identities were only registered
on the S3 server's credential manager, so they never appeared in the
filer's responses.
- Add CredentialManager.LoadS3ConfigFile to parse a static S3 config
file and register its identities
- Add FilerOptions.s3ConfigFile so the filer can load the same static
config that the S3 server uses
- Wire s3ConfigFile through in weed mini and weed server modes
- Merge static usernames in filer gRPC ListUsers handler
- Add CredentialManager.GetStaticUsernames helper
- Add sync.RWMutex to protect concurrent access to static identity
fields
- Avoid importing weed/filer from weed/credential (which pulled in
filer store init() registrations and broke test isolation)
- Add docker/compose/s3_static_users_example.json
* fix(admin): make static users read-only in admin UI
Static users loaded from the -s3.config file should not be editable
or deletable through the admin UI since they are managed via the
config file.
- Add IsStatic field to ObjectStoreUser, set from credential manager
- Hide edit, delete, and access key buttons for static users in the
users table template
- Show a "static" badge next to static user names
- Return 403 Forbidden from UpdateUser and DeleteUser API handlers
when the target user is a static identity
* fix(admin): show details for static users
GetObjectStoreUserDetails called credentialManager.GetUser which only
queries the dynamic store. For static users this returned
ErrUserNotFound. Fall back to GetStaticIdentity when the store lookup
fails.
* fix(admin): load static S3 identities in admin server
The admin server has its own credential manager (gRPC store) which is
a separate instance from the S3 server's and filer's. It had no static
identity data, so IsStaticIdentity returned false (edit/delete buttons
shown) and GetStaticIdentity returned nil (details page failed).
Pass the -s3.config file path through to the admin server and call
LoadS3ConfigFile on its credential manager, matching the approach
used for the filer.
* fix: use protobuf is_static field instead of passing config file path
The previous approach passed -s3.config file path to every component
(filer, admin). This is wrong because the admin server should not need
to know about S3 config files.
Instead, add an is_static field to the Identity protobuf message.
The field is set when static identities are serialized (in
GetStaticIdentities and LoadS3ConfigFile). Any gRPC client that loads
configuration via GetConfiguration automatically sees which identities
are static, without needing the config file.
- Add is_static field (tag 8) to iam_pb.Identity proto message
- Set IsStatic=true in GetStaticIdentities and LoadS3ConfigFile
- Admin GetObjectStoreUsers reads identity.IsStatic from proto
- Admin IsStaticUser helper loads config via gRPC to check the flag
- Filer GetUser gRPC handler falls back to GetStaticIdentity
- Remove s3ConfigFile from AdminOptions and NewAdminServer signature
* feat(s3): add concurrent chunk prefetch for large file downloads
Add a pipe-based prefetch pipeline that overlaps chunk fetching with
response writing during S3 GetObject, SSE downloads, and filer proxy.
While chunk N streams to the HTTP response, fetch goroutines for the
next K chunks establish HTTP connections to volume servers ahead of
time, eliminating the RTT gap between sequential chunk fetches.
Uses io.Pipe for minimal memory overhead (~1MB per download regardless
of chunk size, vs buffering entire chunks). Also increases the
streaming read buffer from 64KB to 256KB to reduce syscall overhead.
Benchmark results (64KB chunks, prefetch=4):
- 0ms latency: 1058 → 2362 MB/s (2.2× faster)
- 5ms latency: 11.0 → 41.7 MB/s (3.8× faster)
- 10ms latency: 5.9 → 23.3 MB/s (4.0× faster)
- 20ms latency: 3.1 → 12.1 MB/s (3.9× faster)
* fix: address review feedback for prefetch pipeline
- Fix data race: use *chunkPipeResult (pointer) on channel to avoid
copying struct while fetch goroutines write to it. Confirmed clean
with -race detector.
- Remove concurrent map write: retryWithCacheInvalidation no longer
updates fileId2Url map. Producer only reads it; consumer never writes.
- Use mem.Allocate/mem.Free for copy buffer to reduce GC pressure.
- Add local cancellable context so consumer errors (client disconnect)
immediately stop the producer and all in-flight fetch goroutines.
* fix(test): remove dead code and add Range header support in test server
- Remove unused allData variable in makeChunksAndServer
- Add Range header handling to createTestServer for partial chunk
read coverage (206 Partial Content, 416 Range Not Satisfiable)
* fix: correct retry condition and goroutine leak in prefetch pipeline
- Fix retry condition: use result.fetchErr/result.written instead of
copied to decide cache-invalidation retry. The old condition wrongly
triggered retry when the fetch succeeded but the response writer
failed on the first write (copied==0 despite fetcher having data).
Now matches the sequential path (stream.go:197) which checks whether
the fetcher itself wrote zero bytes.
- Fix goroutine leak: when the producer's send to the results channel
is interrupted by context cancellation, the fetch goroutine was
already launched but the result was never sent to the channel. The
drain loop couldn't handle it. Now waits on result.done before
returning so every fetch goroutine is properly awaited.
* feat(s3): store and return checksum headers for additional checksum algorithms
When clients upload with --checksum-algorithm (SHA256, CRC32, etc.),
SeaweedFS validated the checksum but discarded it. The checksum was
never stored in metadata or returned in PUT/HEAD/GET responses.
Now the checksum is computed alongside MD5 during upload, stored in
entry extended attributes, and returned as the appropriate
x-amz-checksum-* header in all responses.
Fixes #8911
* fix(s3): address review feedback and CI failures for checksum support
- Gate GET/HEAD checksum response headers on x-amz-checksum-mode: ENABLED
per AWS S3 spec, fixing FlexibleChecksumError on ranged GETs and
multipart copies
- Verify computed checksum against client-provided header value for
non-chunked uploads, returning BadDigest on mismatch
- Add nil check for getCheckSumWriter to prevent panic
- Handle comma-separated values in X-Amz-Trailer header
- Use ordered slice instead of map for deterministic checksum header
selection; extract shared mappings into package-level vars
* fix(s3): skip checksum header for ranged GET responses
The stored checksum covers the full object. Returning it for ranged
(partial) responses causes SDK checksum validation failures because the
SDK validates the header value against the partial content received.
Skip emitting x-amz-checksum-* headers when a Range request header is
present, fixing PyArrow large file read failures.
* fix(s3): reject unsupported checksum algorithm with 400
detectRequestedChecksumAlgorithm now returns an error code when
x-amz-sdk-checksum-algorithm or x-amz-checksum-algorithm contains
an unsupported value, instead of silently ignoring it.
* feat(s3): compute composite checksum for multipart uploads
Store the checksum algorithm during CreateMultipartUpload, then during
CompleteMultipartUpload compute a composite checksum from per-part
checksums following the AWS S3 spec: concatenate raw per-part checksums,
hash with the same algorithm, format as "base64-N" where N is part count.
The composite checksum is persisted on the final object entry and
returned in HEAD/GET responses (gated on x-amz-checksum-mode: ENABLED).
Reuses existing per-part checksum storage from putToFiler and the
getCheckSumWriter/checksumHeaders infrastructure.
* fix(s3): validate checksum algorithm in CreateMultipartUpload, error on missing part checksums
- Move detectRequestedChecksumAlgorithm call before mkdir callback so
an unsupported algorithm returns 400 before the upload is created
- Change computeCompositeChecksum to return an error when a part is
missing its checksum (the upload was initiated with a checksum
algorithm, so all parts must have checksums)
- Propagate the error as ErrInvalidPart in CompleteMultipartUpload
* fix(s3): return checksum header in CompleteMultipartUpload response, validate per-part algorithm
- Add ChecksumHeaderName/ChecksumValue fields to CompleteMultipartUploadResult
and set the x-amz-checksum-* HTTP response header in the handler, matching
the AWS S3 CompleteMultipartUpload response spec
- Validate that each part's stored checksum algorithm matches the upload's
expected algorithm before assembling the composite checksum; return an
error if a part was uploaded with a different algorithm
* fix(filer): remove cancellation guard from RollbackTransaction and clean up #8909
RollbackTransaction is a cleanup operation that must succeed even when
the context is cancelled — guarding it causes the exact orphaned state
that #8909 was trying to prevent.
Also:
- Use single-evaluation `if err := ctx.Err(); err != nil` pattern
instead of double-calling ctx.Err()
- Remove spurious blank lines before guards
- Add context.DeadlineExceeded test coverage
- Simplify tests from ~230 lines to ~130 lines
* fix(filer): call cancel() in expiredCtx and test rollback with expired context
- Call cancel() instead of suppressing it to avoid leaking timer resources
- Test RollbackTransaction with both cancelled and expired contexts
* chore: remove unreachable dead code across the codebase
Remove ~50,000 lines of unreachable code identified by static analysis.
Major removals:
- weed/filer/redis_lua: entire unused Redis Lua filer store implementation
- weed/wdclient/net2, resource_pool: unused connection/resource pool packages
- weed/plugin/worker/lifecycle: unused lifecycle plugin worker
- weed/s3api: unused S3 policy templates, presigned URL IAM, streaming copy,
multipart IAM, key rotation, and various SSE helper functions
- weed/mq/kafka: unused partition mapping, compression, schema, and protocol functions
- weed/mq/offset: unused SQL storage and migration code
- weed/worker: unused registry, task, and monitoring functions
- weed/query: unused SQL engine, parquet scanner, and type functions
- weed/shell: unused EC proportional rebalance functions
- weed/storage/erasure_coding/distribution: unused distribution analysis functions
- Individual unreachable functions removed from 150+ files across admin,
credential, filer, iam, kms, mount, mq, operation, pb, s3api, server,
shell, storage, topology, and util packages
* fix(s3): reset shared memory store in IAM test to prevent flaky failure
TestLoadIAMManagerFromConfig_EmptyConfigWithFallbackKey was flaky because
the MemoryStore credential backend is a singleton registered via init().
Earlier tests that create anonymous identities pollute the shared store,
causing LookupAnonymous() to unexpectedly return true.
Fix by calling Reset() on the memory store before the test runs.
* style: run gofmt on changed files
* fix: restore KMS functions used by integration tests
* fix(plugin): prevent panic on send to closed worker session channel
The Plugin.sendToWorker method could panic with "send on closed channel"
when a worker disconnected while a message was being sent. The race was
between streamSession.close() closing the outgoing channel and sendToWorker
writing to it concurrently.
Add a done channel to streamSession that is closed before the outgoing
channel, and check it in sendToWorker's select to safely detect closed
sessions without panicking.
* feat(s3): support WEED_S3_SSE_KEY env var for SSE-S3 KEK
Add support for providing the SSE-S3 Key Encryption Key (KEK) via the
WEED_S3_SSE_KEY environment variable (hex-encoded 256-bit key). This
avoids storing the master key in plaintext on the filer at /etc/s3/sse_kek.
Key source priority:
1. WEED_S3_SSE_KEY environment variable (recommended)
2. Existing filer KEK at /etc/s3/sse_kek (backward compatible)
3. Auto-generate and save to filer (deprecated for new deployments)
Existing deployments with a filer-stored KEK continue to work unchanged.
A deprecation warning is logged when auto-generating a new filer KEK.
* refactor(s3): derive KEK from any string via HKDF instead of requiring hex
Accept any secret string in WEED_S3_SSE_KEY and derive a 256-bit key
using HKDF-SHA256 instead of requiring a hex-encoded key. This is
simpler for users — no need to generate hex, just set a passphrase.
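The derivation can be sketched with HKDF-SHA256 (RFC 5869) built only on the standard library; the info label here is an assumption for illustration, not the label SeaweedFS actually uses:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// deriveKEK turns an arbitrary passphrase into a 256-bit key via HKDF's
// extract-then-expand; 32 bytes of output fit in a single expand block.
func deriveKEK(secret string) []byte {
	// Extract: PRK = HMAC-SHA256(salt = 32 zero bytes, IKM = secret)
	extract := hmac.New(sha256.New, make([]byte, sha256.Size))
	extract.Write([]byte(secret))
	prk := extract.Sum(nil)
	// Expand: T(1) = HMAC-SHA256(PRK, info || 0x01)
	expand := hmac.New(sha256.New, prk)
	expand.Write([]byte("sse-s3-kek")) // info label (assumed)
	expand.Write([]byte{1})
	return expand.Sum(nil)
}

func main() {
	a := deriveKEK("correct horse battery staple")
	b := deriveKEK("correct horse battery staple")
	c := deriveKEK("other passphrase")
	fmt.Println(len(a), string(a) == string(b), string(a) == string(c))
}
```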
* feat(s3): add WEED_S3_SSE_KEK and WEED_S3_SSE_KEY env vars for KEK
Two env vars for providing the SSE-S3 Key Encryption Key:
- WEED_S3_SSE_KEK: hex-encoded, same format as /etc/s3/sse_kek.
If the filer file also exists, they must match.
- WEED_S3_SSE_KEY: any string, 256-bit key derived via HKDF-SHA256.
Refuses to start if /etc/s3/sse_kek exists (must delete first).
Only one may be set. Existing filer-stored KEKs continue to work.
Auto-generating and storing new KEKs on filer is deprecated.
* fix(s3): stop auto-generating KEK, fail only when SSE-S3 is used
Instead of auto-generating a KEK and storing it on the filer when no
key source is configured, simply leave SSE-S3 disabled. Encrypt and
decrypt operations return a clear error directing the user to set
WEED_S3_SSE_KEK or WEED_S3_SSE_KEY.
* refactor(s3): move SSE-S3 KEK config to security.toml
Move KEK configuration from standalone env vars to security.toml's new
[sse_s3] section, following the same pattern as JWT keys and TLS certs.
[sse_s3]
kek = "" # hex-encoded 256-bit key (same format as /etc/s3/sse_kek)
key = "" # any string, HKDF-derived
Viper's WEED_ prefix auto-mapping provides env var support:
WEED_SSE_S3_KEK and WEED_SSE_S3_KEY.
All existing behavior is preserved: filer KEK fallback, mismatch
detection, and HKDF derivation.
* refactor(s3): rename SSE-S3 config keys to s3.sse.kek / s3.sse.key
Use [s3.sse] section in security.toml, matching the existing naming
convention (e.g. [s3.*]). Env vars: WEED_S3_SSE_KEK, WEED_S3_SSE_KEY.
* fix(s3): address code review findings for SSE-S3 KEK
- Don't hold mutex during filer retry loop (up to 20s of sleep).
Lock only to write filerClient and superKey.
- Remove dead generateAndSaveSuperKeyToFiler and unused constants.
- Return error from deriveKeyFromSecret instead of ignoring it.
- Fix outdated doc comment on InitializeWithFiler.
- Use t.Setenv in tests instead of manual os.Setenv/Unsetenv.
* fix(s3): don't block startup on filer errors when KEK is configured
- When s3.sse.kek is set, a temporarily unreachable filer no longer
prevents startup. The filer consistency check becomes best-effort
with a warning.
- Same treatment for s3.sse.key: filer unreachable logs a warning
instead of failing.
- Rewrite error messages to suggest migration instead of file deletion,
avoiding the risk of orphaning encrypted data.
Finding 3 (restore auto-generation) intentionally skipped — auto-gen
was removed by design to avoid storing plaintext KEK on filer.
* fix(test): set WEED_S3_SSE_KEY in SSE integration test server startup
SSE-S3 no longer auto-generates a KEK, so integration tests must
provide one. Set WEED_S3_SSE_KEY=test-sse-s3-key in all weed mini
invocations in the test Makefile.
* fix(s3): use URL-safe secret keys for admin dashboard users and service accounts
The dashboard's generateSecretKey() used base64.StdEncoding which produces
+, /, and = characters that break S3 signature authentication. Reuse the
IAM package's GenerateSecretAccessKey() which was already fixed in #7990.
Fixes #8898
* fix: handle error from GenerateSecretAccessKey instead of ignoring it
SocketAddr::parse() only accepts numeric IPs, so binding the gRPC
server to "localhost:18833" panicked. Use tokio::net::lookup_host()
to resolve hostnames before passing to tonic's serve_with_shutdown.
* feat(s3): add STS GetFederationToken support
Implement the AWS STS GetFederationToken API, which allows long-term IAM
users to obtain temporary credentials scoped down by an optional inline
session policy. This is useful for server-side applications that mint
per-user temporary credentials.
Key behaviors:
- Requires SigV4 authentication from a long-term IAM user
- Rejects calls from temporary credentials (session tokens)
- Name parameter (2-64 chars) identifies the federated user
- DurationSeconds supports 900-129600 (15 min to 36 hours, default 12h)
- Optional inline session policy for permission scoping
- Caller's attached policies are embedded in the JWT token
- Returns federated user ARN: arn:aws:sts::<account>:federated-user/<Name>
No performance impact on the S3 hot path — credential vending is a
separate control-plane operation, and all policy data is embedded in
the stateless JWT token.
* fix(s3): address GetFederationToken PR review feedback
- Fix Name validation: max 32 chars (not 64) per AWS spec, add regex
validation for [\w+=,.@-]+ character whitelist
- Refactor parseDurationSeconds into parseDurationSecondsWithBounds to
eliminate duplicated duration parsing logic
- Add sts:GetFederationToken permission check via VerifyActionPermission
mirroring the AssumeRole authorization pattern
- Change GetPoliciesForUser to return ([]string, error) so callers fail
closed on policy-resolution failures instead of silently returning nil
- Move temporary-credentials rejection before SigV4 verification for
early rejection and proper test coverage
- Update tests: verify specific error message for temp cred rejection,
add regex validation test cases (spaces, slashes rejected)
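The validation rules listed above can be sketched as follows (the function name is illustrative): Name must match the [\w+=,.@-] whitelist at 2-32 characters, and DurationSeconds must fall within 900..129600.

```go
package main

import (
	"fmt"
	"regexp"
)

// federatedNameRe enforces the AWS character whitelist and length bounds.
var federatedNameRe = regexp.MustCompile(`^[\w+=,.@-]{2,32}$`)

// validateFederationTokenArgs rejects malformed Name values and
// out-of-range session durations before any credentials are minted.
func validateFederationTokenArgs(name string, durationSeconds int) error {
	if !federatedNameRe.MatchString(name) {
		return fmt.Errorf("invalid Name %q: must match [\\w+=,.@-]{2,32}", name)
	}
	if durationSeconds < 900 || durationSeconds > 129600 {
		return fmt.Errorf("DurationSeconds %d out of range [900, 129600]", durationSeconds)
	}
	return nil
}

func main() {
	fmt.Println(validateFederationTokenArgs("app-user@prod", 3600))
	fmt.Println(validateFederationTokenArgs("bad name", 3600) != nil) // space rejected
	fmt.Println(validateFederationTokenArgs("app", 100) != nil)      // duration too short
}
```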
* refactor(s3): use sts.Action* constants instead of hard-coded strings
Replace hard-coded "sts:AssumeRole" and "sts:GetFederationToken" strings
in VerifyActionPermission calls with sts.ActionAssumeRole and
sts.ActionGetFederationToken package constants.
* fix(s3): pass through sts: prefix in action resolver and merge policies
Two fixes:
1. mapBaseActionToS3Format now passes through "sts:" prefix alongside
"s3:" and "iam:", preventing sts:GetFederationToken from being
rewritten to s3:sts:GetFederationToken in VerifyActionPermission.
This also fixes the existing sts:AssumeRole permission checks.
2. GetFederationToken policy embedding now merges identity.PolicyNames
(from SigV4 identity) with policies from the IAM manager (which may
include group-attached policies), deduplicated via a map. Previously
the IAM manager lookup was skipped when identity.PolicyNames was
non-empty, causing group policies to be omitted from the token.
* test(s3): add integration tests for sts: action passthrough and policy merge
Action resolver tests:
- TestMapBaseActionToS3Format_ServicePrefixPassthrough: verifies s3:, iam:,
and sts: prefixed actions pass through unchanged while coarse actions
(Read, Write) are mapped to S3 format
- TestResolveS3Action_STSActionsPassthrough: verifies sts:AssumeRole,
sts:GetFederationToken, sts:GetCallerIdentity pass through ResolveS3Action
unchanged with both nil and real HTTP requests
Policy merge tests:
- TestGetFederationToken_GetPoliciesForUser: tests IAMManager.GetPoliciesForUser
with no user store (error), missing user, user with policies, user without
- TestGetFederationToken_PolicyMergeAndDedup: tests that identity.PolicyNames
and IAM-manager-resolved policies are merged and deduplicated (SharedPolicy
appears in both sources, result has 3 unique policies)
- TestGetFederationToken_PolicyMergeNoManager: tests that when IAM manager is
unavailable, identity.PolicyNames alone are embedded
* test(s3): add end-to-end integration tests for GetFederationToken
Add integration tests that call GetFederationToken using real AWS SigV4
signed HTTP requests against a running SeaweedFS instance, following the
existing pattern in test/s3/iam/s3_sts_assume_role_test.go.
Tests:
- TestSTSGetFederationTokenValidation: missing name, name too short/long,
invalid characters, duration too short/long, malformed policy, anonymous
rejection (7 subtests)
- TestSTSGetFederationTokenRejectTemporaryCredentials: obtains temp creds
via AssumeRole then verifies GetFederationToken rejects them
- TestSTSGetFederationTokenSuccess: basic success, custom 1h duration,
36h max duration with expiration time verification
- TestSTSGetFederationTokenWithSessionPolicy: creates a bucket, obtains
federated creds with GetObject-only session policy, verifies GetObject
succeeds and PutObject is denied using the AWS SDK S3 client
Cross-compile Rust volume server natively for amd64/arm64 using musl
targets in a separate job, then inject pre-built binaries into the
Docker build. This replaces the ~5-hour QEMU-emulated cargo build
with ~15 minutes of native cross-compilation.
The Dockerfile falls back to building from source when no pre-built
binary is found, preserving local build compatibility.
* fix(s3): skip directories before marker in ListObjectVersions pagination
ListObjectVersions was re-traversing the entire directory tree from the
beginning on every paginated request, only skipping entries at the leaf
level. For buckets with millions of objects in deep hierarchies, this
made each successive page progressively slower (quadratic total work
across all pages).
Two optimizations:
1. Use keyMarker to compute a startFrom position at each directory level,
skipping directly to the relevant entry instead of scanning from the
beginning (mirroring how ListObjects uses marker descent).
2. Skip recursing into subdirectories whose keys are entirely before the
keyMarker.
Changes per-page cost from O(entries_before_marker) to O(tree_depth).
* test(s3): add integration test for deep-hierarchy version listing pagination
Adds TestVersioningPaginationDeepDirectoryHierarchy which creates objects
across 20 subdirectories at depth 6 (mimicking Veeam 365 backup layout)
and paginates through them with small maxKeys. Verifies correctness
(no duplicates, sorted order, all objects found) and checks that later
pages don't take dramatically longer than earlier ones — the symptom
of the pre-fix re-traversal bug. Also tests delimiter+pagination
interaction across subdirectories.
* test(s3): strengthen deep-hierarchy pagination assertions
- Replace timing warning (t.Logf) with a failing assertion (t.Errorf)
so pagination regressions actually fail the test.
- Replace generic count/uniqueness/sort checks on CommonPrefixes with
exact equality against the expected prefix slice, catching wrong-but-
sorted results.
* test(s3): use allKeys for exact assertion in deep-hierarchy pagination test
Wire the allKeys slice (previously unused dead code) into the version
listing assertion, replacing generic count/uniqueness/sort checks with
an exact equality comparison against the keys that were created.
* STS: add GetCallerIdentity support
Implement the AWS STS GetCallerIdentity action, which returns the
ARN, account ID, and user ID of the caller based on SigV4 authentication.
This is commonly used by AWS SDKs and CLI tools (e.g. `aws sts get-caller-identity`)
to verify credentials and determine the authenticated identity.
* test: remove trivial GetCallerIdentity tests
Remove the XML unmarshal test (we don't consume this response as input)
and the routing constant test (just asserts a literal equals itself).
* fix: route GetCallerIdentity through STS in UnifiedPostHandler and use stable UserId
- UnifiedPostHandler only dispatched actions starting with "AssumeRole" to STS,
so GetCallerIdentity in a POST body would fall through to the IAM path and
get AccessDenied for non-admin users. Add explicit check for GetCallerIdentity.
- Use identity.Name as UserId instead of credential.AccessKey, which is a
transient value and incorrect for STS assumed-role callers.
* fix(weed/worker/tasks/ec_balance): non-recursive reportProgress
* fix(ec_balance): call ReportProgressWithStage and include volumeID in log
The original fix replaced infinite recursion with a glog.Infof, but
skipped the framework progress callback. This adds the missing
ReportProgressWithStage call so the admin server receives EC balance
progress, and includes volumeID in the log for disambiguation.
---------
Co-authored-by: Chris Lu <chris.lu@gmail.com>
* fix(test): address flaky S3 distributed lock integration test
Two root causes:
1. Lock ring convergence race: After waitForFilerCount(2) confirms the
master sees both filers, there's a window where filer0's lock ring
still only contains itself (master's LockRingUpdate broadcast is
delayed by the 1s stabilization timer). During this window filer0
considers itself primary for ALL keys, so both filers can
independently grant the same lock.
Fix: Add waitForLockRingConverged() that acquires the same lock
through both filers and verifies mutual exclusion before proceeding.
2. Hash function mismatch: ownerForObjectLock used util.HashStringToLong
(MD5 + modulo) to predict lock owners, but the production DLM uses
CRC32 consistent hashing via HashRing. This meant the test could
pick keys that route to the same filer, not exercising the
cross-filer coordination it intended to test.
Fix: Use lock_manager.NewHashRing + GetPrimary() to match production
routing exactly.
* fix(test): verify lock denial reason in convergence check
Ensure the convergence check only returns true when the second lock
attempt is denied specifically because the lock is already owned,
avoiding false positives from transient errors.
* fix(test): check one key per primary filer in convergence wait
A single arbitrary key can false-pass: if its real primary is the filer
with the stale ring, mutual exclusion holds trivially because that filer
IS the correct primary. Generate one test key per distinct primary using
the same consistent-hash ring as production, so a stale ring on any
filer is caught deterministically.
* filer.sync: show active chunk transfers when sync progress stalls
When the sync watermark is not advancing, print each in-progress chunk
transfer with its file path, bytes received so far, and current status
(downloading, uploading, or waiting with backoff duration). This helps
diagnose which files are blocking progress during replication.
Closes #8542
* filer.sync: include last error in stall diagnostics
* filer.sync: fix data races in ChunkTransferStatus
Add sync.RWMutex to ChunkTransferStatus and lock around all field
mutations in fetchAndWrite. ActiveTransfers now returns value copies
under RLock so callers get immutable snapshots.
The test port allocation had a TOCTOU race where GetFreePort() would
open a listener, grab the port number, then immediately close it.
When called repeatedly, the OS could recycle a just-released port,
causing two services (e.g. Filer and S3) to be assigned the same port.
Replace per-call GetFreePort() with batch AllocatePorts() that holds
all listeners open until every port is obtained, matching the pattern
already used in test/volume_server/framework/cluster.go.
* fix(s3): use recursive delete for .versions directory cleanup
When only delete markers remain in a .versions directory,
updateLatestVersionAfterDeletion tried to delete it non-recursively,
which failed with "fail to delete non-empty folder" because the delete
marker entries were still present. Use recursive deletion so the
directory and its remaining delete marker entries are cleaned up together.
* fix(s3): guard .versions directory deletion against truncated listings
When the version listing is truncated (>1000 entries), content versions
may exist beyond the first page. Skip the recursive directory deletion
in this case to prevent data loss.
* fix(s3): preserve delete markers in .versions directory
Delete markers must be preserved per S3 semantics — they are only
removed by an explicit DELETE with versionId. The previous fix would
recursively delete the entire .versions directory (including delete
markers) when no content versions were found.
Now the logic distinguishes three cases:
1. Content versions exist → update latest version metadata
2. Only delete markers remain (or listing truncated) → keep directory
3. Truly empty → safe to delete directory (non-recursive)
* fix(admin): respect urlPrefix in S3 bucket and S3Tables navigation links (#8884)
Several admin UI templates used hardcoded URLs (templ.SafeURL) instead of
dash.PUrl(ctx, ...) for navigation links, causing 404 errors when the
admin is deployed with --urlPrefix.
Fixed in: s3_buckets.templ, s3tables_buckets.templ, s3tables_tables.templ
* fix(admin): URL-escape bucketName in S3Tables navigation links
Add url.PathEscape(bucketName) for consistency and correctness in
s3tables_tables.templ (back-to-namespaces link) and s3tables_buckets.templ
(namespace link), matching the escaping already used in the table details link.
* S3: map canned ACL to file permissions and add configurable default file mode
S3 uploads were hardcoded to 0660 regardless of ACL headers. Now the
X-Amz-Acl header maps to Unix file permissions per-object:
- public-read, authenticated-read, bucket-owner-read → 0644
- public-read-write → 0666
- private, bucket-owner-full-control → 0660
Also adds -defaultFileMode / -s3.defaultFileMode flag to set a
server-wide default when no ACL header is present.
Closes #8874
* Address review feedback for S3 file mode feature
- Extract hardcoded 0660 to defaultFileMode constant
- Change parseDefaultFileMode to return error instead of calling Fatalf
- Add -s3.defaultFileMode flag to filer.go and mini.go (was missing)
- Add doc comment to S3Options about updating all four flag sites
- Add TestResolveFileMode with 10 test cases covering ACL mapping,
server default, and priority ordering
- concurrent_operations_test: Add retry loop for transient I/O errors
on file close during ConcurrentDirectoryOperations
- git_operations_test: Wait for pushed objects to become visible through
FUSE mount before cloning in Phase 3
The Rust weed-volume binary requires libgcc_s.so.1 for stack unwinding
(_Unwind_* symbols). Without it, the binary fails to load in the Alpine
container with "Error loading shared library libgcc_s.so.1".
* fix(s3): remove customer encryption key from SSE-C debug log
The debug log in validateAndParseSSECHeaders was logging the raw
customer-provided encryption key bytes in hex format (keyBytes=%x),
leaking sensitive key material to log output. Remove the key bytes
from the log statement while keeping the MD5 hash comparison info.
* Apply suggestion from @gemini-code-assist[bot]
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>