Commit Graph

7764 Commits

Author SHA1 Message Date
Chris Lu
a9b3be416b fix: initialize missing S3 options in filer to prevent nil pointer dereference (#7646)
* fix: initialize missing S3 options in filer to prevent nil pointer dereference

Fixes #7644

When starting the S3 gateway from the filer, several S3Options fields
were not being initialized, which could cause nil pointer dereferences
during startup.

This commit adds initialization for:
- iamConfig: for advanced IAM configuration
- metricsHttpPort: for Prometheus metrics endpoint
- metricsHttpIp: for binding the metrics endpoint

Also ensures metricsHttpIp defaults to bindIp when not explicitly set,
matching the behavior of the standalone S3 server.

This prevents the panic that was occurring in the s3.go:226 area when
these pointer fields were accessed but never initialized.

* fix: copy value instead of pointer for metricsHttpIp default

Address review comment to avoid pointer aliasing. Copy the value
instead of the pointer to prevent unexpected side effects if the
bindIp value is modified later.
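
A minimal sketch of the pointer-copy fix, with hypothetical field names standing in
for the real S3Options:

    // Hypothetical stand-ins for the filer's embedded S3 options.
    type s3Opts struct {
        bindIp        *string
        metricsHttpIp *string
    }

    // Default metricsHttpIp to bindIp, copying the value rather than the
    // pointer, so a later change to bindIp cannot silently retarget the
    // metrics endpoint.
    func defaultMetricsIp(opt *s3Opts) {
        if opt.metricsHttpIp == nil || *opt.metricsHttpIp == "" {
            ip := *opt.bindIp // value copy, not pointer aliasing
            opt.metricsHttpIp = &ip
        }
    }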
2025-12-07 19:56:05 -08:00
Chris Lu
5d53edb93b Optimize database connection pool settings for MySQL and PostgreSQL (#7645)
- Reduce connection_max_idle from 100 to 10 (PostgreSQL) and raise it from 2 to 10 (MySQL)
- Reduce connection_max_open from 100 to 50 for all database stores
- Set connection_max_lifetime_seconds to 300 (5 minutes) to force connection recycling

This prevents 'cannot assign requested address' errors under high load by:
1. Limiting the number of concurrent connections to reduce ephemeral port usage
2. Forcing connection recycling to prevent stale connections and port exhaustion
3. Reducing idle connections to minimize resource consumption

Fixes #6887
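
Roughly, the new defaults map onto database/sql pool settings like this (a sketch;
the stores read the values from filer.toml):

    import (
        "database/sql"
        "time"
    )

    func configurePool(db *sql.DB) {
        db.SetMaxIdleConns(10)                   // connection_max_idle
        db.SetMaxOpenConns(50)                   // connection_max_open
        db.SetConnMaxLifetime(300 * time.Second) // connection_max_lifetime_seconds
    }

Bounding open connections limits ephemeral port usage, and the 5-minute lifetime
recycles connections before they go stale.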
2025-12-07 13:26:47 -08:00
chrislu
5167bbd2a9 Remove deprecated allowEmptyFolder CLI option
The allowEmptyFolder option is no longer functional because:
1. The code that used it was already commented out
2. Empty folder cleanup is now handled asynchronously by EmptyFolderCleaner

The CLI flags are kept for backward compatibility but marked as deprecated
and ignored. This removes:
- S3ApiServerOption.AllowEmptyFolder field
- The actual usage in s3api_object_handlers_list.go
- Helm chart values and template references
- References in test Makefiles and docker-compose files
2025-12-06 21:54:12 -08:00
Chris Lu
55f0fbf364 s3: optimize DELETE by skipping lock check for buckets without Object Lock (#7642)
This optimization avoids an expensive filer gRPC call for every DELETE
operation on buckets that don't have Object Lock enabled.

Before this change, enforceObjectLockProtections() would always call
getObjectEntry() to fetch object metadata to check for retention/legal
hold, even for buckets that never had Object Lock configured.

Changes:
1. Add early return in enforceObjectLockProtections() if bucket has no
   Object Lock config or bucket doesn't exist
2. Add isObjectLockEnabled() helper function to check if a bucket has
   Object Lock configured
3. Fix validateObjectLockHeaders() to check ObjectLockConfig instead of
   just versioningEnabled - this ensures object-lock headers are properly
   rejected on buckets without Object Lock enabled, which aligns with
   AWS S3 semantics
4. Make bucket creation with Object Lock atomic - set Object Lock config
   in the same CreateEntry call as bucket creation, preventing race
   conditions where bucket exists without Object Lock enabled
5. Properly handle Object Lock setup failures during bucket creation -
   if StoreObjectLockConfigurationInExtended fails, roll back the bucket
   creation and return an error instead of leaving a bucket without
   the requested Object Lock configuration

This significantly improves DELETE latency for non-Object-Lock buckets,
which is the common case (lockCheck time reduced from 1-10ms to ~1µs).
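
A sketch of the fast path with illustrative names: the early return that lets DELETE
on ordinary buckets skip the filer gRPC round trip entirely.

    type bucketConfig struct{ objectLockEnabled bool }

    func enforceObjectLockProtections(cfg *bucketConfig, fetchAndCheck func() error) error {
        if cfg == nil || !cfg.objectLockEnabled {
            return nil // common case: ~1µs check instead of a 1-10ms metadata fetch
        }
        return fetchAndCheck() // only Object Lock buckets pay for retention checks
    }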
2025-12-06 21:37:25 -08:00
Chris Lu
9c266fac29 fix: CompleteMultipartUpload fails for uploads with more than 1000 parts (#7641)
When completing a multipart upload, the code was listing parts with limit=0,
which relies on the server's DirListingLimit default. In 'weed server' mode,
this defaults to 1000, causing uploads with more than 1000 parts to fail
with InvalidPart error.

For a 38GB file with 8MB parts (AWS CLI default), this results in ~4564 parts,
far exceeding the 1000 limit.

Fix: Use explicit limit of MaxS3MultipartParts+1 (10001) to ensure all parts
are listed regardless of server configuration.

Fixes #7638
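
The gist of the fix, sketched (MaxS3MultipartParts is named in the commit; the
listing callback is illustrative):

    const MaxS3MultipartParts = 10000

    func listAllParts(listDir func(dir string, limit uint32) ([]string, error), dir string) ([]string, error) {
        // limit=0 would defer to the server's DirListingLimit (1000 in
        // 'weed server' mode) and silently truncate large uploads.
        return listDir(dir, MaxS3MultipartParts+1) // 10001: room for every legal part
    }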
2025-12-06 11:25:27 -08:00
Chris Lu
28ac536280 fix: normalize Windows backslash paths in weed admin file uploads (#7636)
fix: normalize Windows backslash paths in file uploads

When uploading files from a Windows client to a Linux server,
file paths containing backslashes were not being properly interpreted as
directory separators. This caused files intended for subdirectories to be
created in the root directory with backslashes in their filenames.

Changes:
- Add util.CleanWindowsPath and util.CleanWindowsPathBase helper functions
  in weed/util/fullpath.go for reusable path normalization
- Use path.Join/path.Clean/path.Base instead of filepath equivalents
  for URL path semantics (filepath is OS-specific)
- Apply normalization in weed admin handlers and filer upload parsing

Fixes #7628
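
An approximation of the helper added in weed/util/fullpath.go (the real signature may
differ): map client-side backslashes to '/', then normalize with URL-path semantics.

    import (
        "path"
        "strings"
    )

    // Use package path, not filepath, which is OS-specific.
    func CleanWindowsPath(p string) string {
        return path.Clean(strings.ReplaceAll(p, `\`, "/"))
    }

So a Windows client uploading "docs\reports\q1.txt" lands in docs/reports/q1.txt
instead of a root-level file with backslashes in its name.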
2025-12-05 17:40:32 -08:00
Chris Lu
89b6deaefa fix: Use mime.FormatMediaType for RFC 6266 compliant Content-Disposition (#7635)
Updated Content-Disposition header generation to use mime.FormatMediaType
from the standard library, which properly handles non-ASCII characters
and special characters per RFC 6266.

Changes:
- weed/server/common.go: Updated adjustHeaderContentDisposition to use
  mime.FormatMediaType instead of manual escaping with fileNameEscaper
- weed/operation/upload_content.go: Updated multipart form Content-Disposition
  to use mime.FormatMediaType
- weed/server/volume_server_handlers_read.go: Removed unused fileNameEscaper

This ensures correct filename display for international users across
filer downloads and file uploads.

Fixes #7634
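
For reference, the standard-library call handles quoting and RFC 2231/6266 encoding;
the example output is approximate:

    import "mime"

    func contentDisposition(filename string) string {
        // Non-ASCII names come out as filename*=utf-8''... instead of a
        // broken quoted string.
        return mime.FormatMediaType("attachment", map[string]string{"filename": filename})
    }

    // contentDisposition("résumé.pdf")
    //   => roughly: attachment; filename*=utf-8''r%C3%A9sum%C3%A9.pdf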
2025-12-05 15:59:12 -08:00
Chris Lu
f1384108e8 fix: Admin UI file browser uses https.client TLS config for filer communication (#7633)
* fix: Admin UI file browser uses https.client TLS config for filer communication

When filer is configured with HTTPS (https.filer section in security.toml),
the Admin UI file browser was still using plain HTTP for file uploads,
downloads, and viewing. This caused TLS handshake errors:
'http: TLS handshake error: client sent an HTTP request to an HTTPS server'

This fix:
- Updates FileBrowserHandlers to use the HTTPClient from weed/util/http/client
  which properly loads TLS configuration from https.client section
- The HTTPClient automatically uses HTTPS when https.client.enabled=true
- All file operations (upload, download, view) now respect TLS configuration
- Falls back to plain HTTP if TLS client creation fails

Fixes #7631

* fix: Address code review comments

- Fix fallback client Transport wiring (properly assign transport to http.Client)
- Use per-operation timeouts instead of unified 60s timeout:
  - uploadFileToFiler: 60s (for large file uploads)
  - ViewFile: 30s (original timeout)
  - isLikelyTextFile: 10s (original timeout)

* fix: Proxy file downloads through Admin UI for mTLS support

The DownloadFile function previously used browser redirect, which would
fail when filer requires mutual TLS (client certificates) since the
browser doesn't have these certificates.

Now the Admin UI server proxies the download, using its TLS-aware HTTP
client with the configured client certificates, then streams the
response to the browser.

* fix: Ensure HTTP response body is closed on non-200 responses

In ViewFile, the response body was only closed on 200 OK paths,
which could leak connections on non-200 responses. Now the body
is always closed via defer immediately after checking err == nil,
before checking the status code.

* refactor: Extract fetchFileContent helper to reduce nesting in ViewFile

Extracted the deeply nested file fetch logic (7+ levels) into a
separate fetchFileContent helper method. This improves readability
while maintaining the same TLS-aware behavior and error handling.

* refactor: Use idiomatic Go error handling in fetchFileContent

Changed fetchFileContent to return (string, error) instead of
(content string, reason string) for idiomatic Go error handling.
This enables error wrapping and standard 'if err != nil' checks.

Also improved error messages to be more descriptive for debugging,
including the HTTP status code and response body on non-200 responses.

* refactor: Extract newClientWithTimeout helper to reduce code duplication

- Added newClientWithTimeout() helper method that creates a temporary
  http.Client with the specified timeout, reusing the TLS transport
- Updated uploadFileToFiler, fetchFileContent, DownloadFile, and
  isLikelyTextFile to use the new helper
- Improved error message in DownloadFile to include response body
  for better debuggability (consistent with fetchFileContent)

* fix: Address CodeRabbit review comments

- Fix connection leak in isLikelyTextFile: ensure resp.Body.Close()
  is called even when status code is not 200
- Use http.NewRequestWithContext in DownloadFile so the filer request
  is cancelled when the client disconnects, improving resource cleanup

* fix: Escape Content-Disposition filename per RFC 2616

Filenames containing quotes, backslashes, or special characters could
break the Content-Disposition header or cause client-side parsing issues.
Now properly escapes these characters before including in the header.

* fix: Handle io.ReadAll errors when reading error response bodies

In fetchFileContent and DownloadFile, the error from io.ReadAll was
ignored when reading the filer's error response body. Now properly
handles these errors to provide complete error messages.

* fix: Fail fast when TLS client creation fails

If TLS is enabled (https.client.enabled=true) but misconfigured,
fail immediately with glog.Fatalf rather than silently falling back
to plain HTTP. This prevents confusing runtime errors when the filer
only accepts HTTPS connections.

* fix: Use mime.FormatMediaType for RFC 6266 compliant Content-Disposition

Replace manual escaping with mime.FormatMediaType which properly handles
non-ASCII characters and special characters per RFC 6266, ensuring
correct filename display for international users.
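
A sketch of the extracted helper (the field name is an assumption): vary only the
timeout, reuse the TLS-aware transport so every operation presents the same client
certificates.

    import (
        "net/http"
        "time"
    )

    type FileBrowserHandlers struct {
        httpClient *http.Client // TLS-aware client built from https.client config
    }

    func (h *FileBrowserHandlers) newClientWithTimeout(timeout time.Duration) *http.Client {
        return &http.Client{
            Transport: h.httpClient.Transport, // shared mTLS transport
            Timeout:   timeout,                // 60s uploads, 30s view, 10s text sniffing
        }
    }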
2025-12-05 15:39:26 -08:00
msementsov
c0dad091f1 Separate vacuum speed from replication speed (#7632) 2025-12-05 12:24:38 -08:00
Chris Lu
4cc6a2a4e5 fix: Admin UI user creation fails before filer discovery (#7624) (#7625)
* fix: Admin UI user creation fails before filer discovery (#7624)

The credential manager's filer address function was not configured quickly
enough after admin server startup, causing 'filer address function not
configured' errors when users tried to create users immediately.

Changes:
- Use exponential backoff (200ms -> 5s) instead of fixed 5s polling for
  faster filer discovery on startup
- Improve error messages to be more user-friendly and actionable

Fixes #7624

* Add more debug logging to help diagnose filer discovery issues

* fix: Use dynamic filer address function to eliminate race condition

Instead of using a goroutine to wait for filer discovery before setting
the filer address function, we now set a dynamic function immediately
that returns the current filer address whenever it's called.

This eliminates the race condition where users could create users before
the goroutine completed, and provides clearer error messages when no
filer is available.

The dynamic function is HA-aware - it automatically returns whatever
filer is currently available, adapting to filer failovers.
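
The difference, sketched with toy types (serverAddress stands in for pb.ServerAddress):
capture a lookup closure rather than a snapshot of the address.

    import "fmt"

    type serverAddress string

    type credentialStore struct {
        filerAddr func() serverAddress // evaluated on every use, not at startup
    }

    // Instead of a goroutine that waits for discovery and then stores a fixed
    // address (racy: user creation could run first), install a closure
    // immediately; it follows filer failovers.
    func (s *credentialStore) SetFilerAddressFunc(f func() serverAddress) {
        s.filerAddr = f
    }

    func (s *credentialStore) currentAddr() (serverAddress, error) {
        if s.filerAddr == nil {
            return "", fmt.Errorf("filer address function not configured")
        }
        if addr := s.filerAddr(); addr != "" {
            return addr, nil
        }
        return "", fmt.Errorf("no filer is currently available")
    }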
2025-12-05 12:19:06 -08:00
Chris Lu
5c1de633cb mount: improve read throughput with parallel chunk fetching (#7627)
* filer: remove lock contention during chunk download

This addresses issue #7504 where a single weed mount FUSE instance
does not fully utilize node network bandwidth when reading large files.

The SingleChunkCacher was holding a mutex during the entire HTTP download,
causing readers to block until the download completed. This serialized
chunk reads even when multiple goroutines were downloading in parallel.

Changes:
- Add sync.Cond to SingleChunkCacher for efficient waiting
- Move HTTP download outside the critical section in startCaching()
- Use condition variable in readChunkAt() to wait for download completion
- Add isComplete flag to track download state

Now multiple chunk downloads can proceed truly in parallel, and readers
wait efficiently using the condition variable instead of blocking on
a mutex held during I/O operations.

Ref: #7504

* filer: parallel chunk fetching within doReadAt

This addresses issue #7504 by enabling parallel chunk downloads within
a single read operation.

Previously, doReadAt() processed chunks sequentially in a loop, meaning
each chunk had to be fully downloaded before the next one started.
This left significant network bandwidth unused when chunks resided on
different volume servers.

Changes:
- Collect all chunk read tasks upfront
- Use errgroup to fetch multiple chunks in parallel
- Each chunk reads directly into its correct buffer position
- Limit concurrency to prefetchCount (min 4) to avoid overwhelming the system
- Handle gaps and zero-filling before parallel fetch
- Trigger prefetch after parallel reads complete

For a read spanning N chunks on different volume servers, this can
now utilize up to N times the bandwidth of a single connection.

Ref: #7504

* http: direct buffer read to reduce memory copies

This addresses issue #7504 by reducing memory copy overhead during
chunk downloads.

Previously, RetriedFetchChunkData used ReadUrlAsStream which:
1. Allocated a 64KB intermediate buffer
2. Read data in 64KB chunks
3. Called a callback to copy each chunk to the destination

For a 16MB chunk, this meant 256 copy operations plus the callback
overhead. Profiling showed significant time spent in memmove.

Changes:
- Add readUrlDirectToBuffer() that reads directly into the destination
- Add retriedFetchChunkDataDirect() for unencrypted, non-gzipped chunks
- Automatically use direct read path when possible (cipher=nil, gzip=false)
- Use http.NewRequestWithContext for proper cancellation

For unencrypted chunks (the common case), this eliminates the
intermediate buffer entirely, reading HTTP response bytes directly
into the final destination buffer.

Ref: #7504

* address review comments

- Use channel (done) instead of sync.Cond for download completion signaling
  This integrates better with context cancellation patterns
- Remove redundant groupErr check in reader_at.go (errors are already captured in task.err)
- Remove buggy URL encoding logic from retriedFetchChunkDataDirect
  (The existing url.PathEscape on full URL is a pre-existing bug that should be fixed separately)

* address review comments (round 2)

- Return io.ErrUnexpectedEOF when HTTP response is truncated
  This prevents silent data corruption from incomplete reads
- Simplify errgroup error handling by using g.Wait() error directly
  Remove redundant task.err field and manual error aggregation loop
- Define minReadConcurrency constant instead of magic number 4
  Improves code readability and maintainability

Note: Context propagation to startCaching() is intentionally NOT changed.
The downloaded chunk is a shared resource that may be used by multiple
readers. Using context.Background() ensures the download completes even
if one reader cancels, preventing data loss for other waiting readers.

* http: inject request ID for observability in direct read path

Add request_id.InjectToRequest() call to readUrlDirectToBuffer() for
consistency with ReadUrlAsStream path. This ensures full-chunk reads
carry the same tracing/correlation headers for server logs and metrics.

* filer: consistent timestamp handling in sequential read path

Use max(ts, task.chunk.ModifiedTsNs) in sequential path to match
parallel path behavior. Also update ts before error check so that
on failure, the returned timestamp reflects the max of all chunks
processed so far.

* filer: document why context.Background() is used in startCaching

Add comment explaining the intentional design decision: the downloaded
chunk is a shared resource that may be used by multiple concurrent
readers. Using context.Background() ensures the download completes
even if one reader cancels, preventing errors for other waiting readers.

* filer: propagate context for reader cancellation

Address review comment: pass context through ReadChunkAt call chain so
that a reader can cancel its wait for a download. The key distinction is:

- Download uses context.Background() - shared resource, always completes
- Reader wait uses request context - can be cancelled individually

If a reader cancels, it stops waiting and returns ctx.Err(), but the
download continues to completion for other readers waiting on the same
chunk. This properly handles the shared resource semantics while still
allowing individual reader cancellation.

* filer: use defer for close(done) to guarantee signal on panic

Move close(s.done) to a defer statement at the start of startCaching()
to ensure the completion signal is always sent, even if an unexpected
panic occurs. This prevents readers from blocking indefinitely.

* filer: remove unnecessary code

- Remove close(s.cacheStartedCh) in destroy() - the channel is only used
  for one-time synchronization, closing it provides no benefit
- Remove task := task loop variable capture - Go 1.22+ fixed loop variable
  semantics, this capture is no longer necessary (go.mod specifies Go 1.24.0)

* filer: restore fallback to chunkCache when cacher returns no data

Fix critical issue where ReadChunkAt would return 0,nil immediately
if SingleChunkCacher couldn't provide data for the requested offset,
without trying the chunkCache fallback. Now if cacher.readChunkAt
returns n=0 and err=nil, we fall through to try chunkCache.

* filer: add comprehensive tests for ReaderCache

Tests cover:
- Context cancellation while waiting for download
- Fallback to chunkCache when cacher returns n=0, err=nil
- Multiple concurrent readers waiting for same chunk
- Partial reads at different offsets
- Downloader cleanup when exceeding cache limit
- Done channel signaling (no hangs on completion)

* filer: prioritize done channel over context cancellation

If data is already available (done channel closed), return it even if
the reader's context is also cancelled. This avoids unnecessary errors
when the download has already completed.

* filer: add lookup error test and document test limitations

Add TestSingleChunkCacherLookupError to test error handling when lookup
fails. Document that full HTTP integration tests for SingleChunkCacher
require global HTTP client initialization which is complex in unit tests.
The download path is tested via FUSE integration tests.

* filer: add tests that exercise SingleChunkCacher concurrency logic

Add tests that use blocking lookupFileIdFn to exercise the actual
SingleChunkCacher wait/cancellation logic:

- TestSingleChunkCacherContextCancellationDuringLookup: tests reader
  cancellation while lookup is blocked
- TestSingleChunkCacherMultipleReadersWaitForDownload: tests multiple
  readers waiting on the same download
- TestSingleChunkCacherOneReaderCancelsOthersContinue: tests that when
  one reader cancels, other readers continue waiting

These tests properly exercise the done channel wait/cancel logic without
requiring HTTP calls - the blocking lookup simulates a slow download.
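
Two of the core patterns, sketched with illustrative names: bounded parallel chunk
fetches via errgroup, and readers that prefer a completed download over their own
cancelled context.

    import (
        "context"

        "golang.org/x/sync/errgroup"
    )

    const minReadConcurrency = 4

    func fetchChunks(tasks []func() error, prefetchCount int) error {
        g := new(errgroup.Group)
        g.SetLimit(max(minReadConcurrency, prefetchCount)) // bound parallelism
        for _, t := range tasks {
            g.Go(t) // each task reads directly into its buffer slice
        }
        return g.Wait() // first error, if any
    }

    // Reader wait: if the shared download already finished (done closed),
    // serve the data even when this reader's context is also cancelled.
    func waitForDownload(ctx context.Context, done <-chan struct{}) error {
        select {
        case <-done:
            return nil
        default:
        }
        select {
        case <-done:
            return nil
        case <-ctx.Done():
            return ctx.Err() // this reader stops waiting; the download continues
        }
    }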
2025-12-04 23:40:56 -08:00
Chris Lu
3183a49698 fix: S3 downloads failing after idle timeout (#7626)
* fix: S3 downloads failing after idle timeout (#7618)

The idle timeout was incorrectly terminating active downloads because
read and write deadlines were managed independently. During a download,
the server writes data but rarely reads, so the read deadline would
expire even though the connection was actively being used.

Changes:
1. Simplify to single Timeout field - since this is a 'no activity timeout'
   where any activity extends the deadline, separate read/write timeouts
   are unnecessary. Now uses SetDeadline() which sets both at once.

2. Implement proper 'no activity timeout' - any activity (read or write)
   now extends the deadline. The connection only times out when there's
   genuinely no activity in either direction.

3. Increase default S3 idleTimeout from 10s to 120s for additional safety
   margin when fetching chunks from slow storage backends.

Fixes #7618

* Update weed/util/net_timeout.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
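
A sketch of the 'no activity timeout' idea: any read OR write pushes out a single
shared deadline, so an active download never trips the idle timeout just because the
server stopped reading. Names are illustrative.

    import (
        "net"
        "time"
    )

    type activityConn struct {
        net.Conn
        timeout time.Duration
    }

    func (c *activityConn) Read(b []byte) (int, error) {
        _ = c.Conn.SetDeadline(time.Now().Add(c.timeout)) // extends both directions at once
        return c.Conn.Read(b)
    }

    func (c *activityConn) Write(b []byte) (int, error) {
        _ = c.Conn.SetDeadline(time.Now().Add(c.timeout))
        return c.Conn.Write(b)
    }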
2025-12-04 18:31:46 -08:00
Chris Lu
f9b4a4c396 fix: check freeEcSlot before evacuating EC shards to prevent data loss (#7621)
* fix: check freeEcSlot before evacuating EC shards to prevent data loss

Related to #7619

The moveAwayOneEcVolume function was missing the freeEcSlot check that
exists in other EC shard placement functions. This could cause EC shards
to be moved to volume servers that have no capacity, resulting in:
1. 0-byte shard files when disk is full
2. Data loss when source shards are deleted after 'successful' copy

Changes:
- Add freeEcSlot check before attempting to move EC shards
- Sort destinations by both shard count and free slots
- Refresh topology during evacuation to get updated slot counts
- Log when nodes are skipped due to no free slots
- Update freeEcSlot count after successful moves

* fix: clarify comment wording per CodeRabbit review

The comment stated 'after each move' but the code executes before
calling moveAwayOneEcVolume. Updated to 'before moving each EC volume'
for accuracy.

* fix: collect topology once and track capacity changes locally

Remove the topology refresh within the loop as it gives a false sense
of correctness - the refreshed topology could still be stale (minutes old).

Instead, we:
1. Collect topology once at the start
2. Track capacity changes ourselves via freeEcSlot decrement after each move

This is more accurate because we know exactly what moves we've made,
rather than relying on potentially stale topology refreshes.

* fix: ensure partial EC volume moves are reported as failures

Set hasMoved=false when a shard fails to move, even if previous shards
succeeded. This prevents the caller from incorrectly assuming the entire
volume was evacuated, which could lead to data loss if the source server
is decommissioned based on this incorrect status.

* fix: also reset hasMoved on moveMountedShardToEcNode error

Same issue as the previous fix: if moveMountedShardToEcNode fails
after some shards succeeded, hasMoved would incorrectly be true.
Ensure partial moves are always reported as failures.
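
A sketch with illustrative types: skip nodes with no free EC slots, prefer emptier
nodes, and track capacity locally after each move instead of re-collecting a
possibly stale topology.

    type ecNode struct {
        id          string
        shardCount  int
        freeEcSlots int
    }

    func pickEvacuationTarget(nodes []*ecNode) *ecNode {
        var best *ecNode
        for _, n := range nodes {
            if n.freeEcSlots <= 0 {
                continue // a full node would yield 0-byte shards, then data loss
            }
            if best == nil || n.shardCount < best.shardCount ||
                (n.shardCount == best.shardCount && n.freeEcSlots > best.freeEcSlots) {
                best = n
            }
        }
        return best // nil means no capacity anywhere: fail the evacuation
    }

    // after a successful move: best.freeEcSlots--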
2025-12-04 16:05:06 -08:00
Chris Lu
fdb888729b fix: properly handle errors in writeToFile to prevent 0-byte EC shards (#7620)
Fixes #7619

The writeToFile function had two critical bugs that could cause data loss
during EC shard evacuation when the destination disk is full:

Bug 1: When os.OpenFile fails (e.g., disk full), the error was silently
ignored and nil was returned. This caused the caller to think the copy
succeeded.

Bug 2: When dst.Write fails (e.g., 'no space left on device'), the error
was completely ignored because the return value was not checked.

When evacuating EC shards to a full volume server (especially on BTRFS):
1. OpenFile may succeed (creates 0-byte file inode)
2. Write fails with 'no space left on device'
3. Errors were ignored, function returned nil
4. Caller thinks copy succeeded and deletes source shard
5. Result: 0-byte shard on destination, data loss!

This fix ensures both errors are properly returned, preventing data loss.
Added unit tests to verify the fix.
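
The failure mode, sketched with an illustrative signature (the real function copies
from a stream): both error paths must be returned, or a full disk looks like a
successful copy.

    import "os"

    func writeToFile(name string, data []byte) error {
        dst, err := os.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
        if err != nil {
            return err // Bug 1: previously swallowed, so disk-full 'succeeded'
        }
        defer dst.Close()
        if _, err := dst.Write(data); err != nil {
            return err // Bug 2: previously unchecked 'no space left on device'
        }
        return nil
    }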
2025-12-04 14:52:03 -08:00
Chris Lu
716f21fbd3 s3: support STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER for signed chunked uploads with checksums (#7623)
* s3: support STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER for signed chunked uploads with checksums

When AWS SDK v2 clients upload with both chunked encoding and checksum
validation enabled, they use the x-amz-content-sha256 header value of
STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER instead of the simpler
STREAMING-AWS4-HMAC-SHA256-PAYLOAD.

This caused the chunked reader to not be properly activated, resulting
in chunk-signature metadata being stored as part of the file content.

Changes:
- Add streamingSignedPayloadTrailer constant for the new header value
- Update isRequestSignStreamingV4() to recognize this header
- Update newChunkedReader() to handle this streaming type
- Update calculateSeedSignature() to accept this header
- Add unit test for signed streaming upload with trailer

Fixes issue where Quarkus/AWS SDK v2 uploads with checksum validation
resulted in corrupted file content containing chunk-signature data.

* address review comments: add trailer signature to test, fix constant alignment

* test: separate canonical trailer text (\n) from on-wire format (\r\n)

* test: add negative test for invalid trailer signature

* refactor: check HTTP method first in streaming auth checks (fail-fast)

* test: handle crc32 Write error return for completeness

* refactor: extract createTrailerStreamingRequest helper to reduce test duplication

* fmt

* docs: clarify test comment about trailer signature validation status

* refactor: calculate chunk data length dynamically instead of hardcoding

* Update weed/s3api/chunked_reader_v4_test.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* fix: use current time for signatures instead of hardcoded past date

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
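
A sketch of recognizing both streaming markers (the header values are AWS's; the
function body is illustrative):

    import "net/http"

    const (
        streamingSignedPayload        = "STREAMING-AWS4-HMAC-SHA256-PAYLOAD"
        streamingSignedPayloadTrailer = "STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER"
    )

    // Method check first (fail-fast), then accept either streaming marker so
    // the chunked reader is activated and chunk signatures never leak into
    // stored content.
    func isRequestSignStreamingV4(r *http.Request) bool {
        if r.Method != http.MethodPut {
            return false
        }
        sha := r.Header.Get("X-Amz-Content-Sha256")
        return sha == streamingSignedPayload || sha == streamingSignedPayloadTrailer
    }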
2025-12-04 14:51:37 -08:00
Chris Lu
a5ab05ec03 fix: S3 GetObject/HeadObject with PartNumber should return object ETag, not part ETag (#7622)
AWS S3 behavior: when calling GetObject or HeadObject with the PartNumber
query parameter, the ETag header should still return the complete object's
ETag (e.g., 'abc123-4' for a 4-part multipart upload), not the individual
part's ETag.

The previous implementation incorrectly overrode the ETag with the part's
ETag, causing test_multipart_get_part to fail.

This fix removes the ETag override logic while keeping:
- x-amz-mp-parts-count header (correct)
- Content-Length adjusted to part size (correct)
- Range calculation for part boundaries (correct)
2025-12-04 12:18:57 -08:00
chrislu
8d110b29dd fmt 2025-12-04 10:40:01 -08:00
Chris Lu
39ba19eea6 filer: async empty folder cleanup via metadata events (#7614)
* filer: async empty folder cleanup via metadata events

Implements asynchronous empty folder cleanup when files are deleted in S3.

Key changes:

1. EmptyFolderCleaner - New component that handles folder cleanup:
   - Uses consistent hashing (LockRing) to determine folder ownership
   - Each filer owns specific folders, avoiding duplicate cleanup work
   - Debounces delete events (10s delay) to batch multiple deletes
   - Caches rough folder counts to skip unnecessary checks
   - Cancels pending cleanup when new files are created
   - Handles both file and subdirectory deletions

2. Integration with metadata events:
   - Listens to both local and remote filer metadata events
   - Processes create/delete/rename events to track folder state
   - Only processes folders under /buckets/<bucket>/...

3. Removed synchronous empty folder cleanup from S3 handlers:
   - DeleteObjectHandler no longer calls DoDeleteEmptyParentDirectories
   - DeleteMultipleObjectsHandler no longer tracks/cleans directories
   - Cleanup now happens asynchronously via metadata events

Benefits:
- Non-blocking: S3 delete requests return immediately
- Coordinated: Only one filer (the owner) cleans each folder
- Efficient: Batching and caching reduce unnecessary checks
- Event-driven: Folder deletion triggers parent folder check automatically

* filer: add CleanupQueue data structure for deduplicated folder cleanup

CleanupQueue uses a linked list for FIFO ordering and a hashmap for O(1)
deduplication. Processing is triggered when:
- Queue size reaches maxSize (default 1000), OR
- Oldest item exceeds maxAge (default 10 minutes)

Key features:
- O(1) Add, Remove, Pop, Contains operations
- Duplicate folders are ignored (keeps original position/time)
- Testable with injectable time function
- Thread-safe with mutex protection

* filer: use CleanupQueue for empty folder cleanup

Replace timer-per-folder approach with queue-based processing:
- Use CleanupQueue for deduplication and ordered processing
- Process queue when full (1000 items) or oldest item exceeds 10 minutes
- Background processor checks queue every 10 seconds
- Remove from queue on create events to cancel pending cleanup

Benefits:
- Bounded memory: queue has max size, not unlimited timers
- Efficient: O(1) add/remove/contains operations
- Batch processing: handle many folders efficiently
- Better for high-volume delete scenarios

* filer: CleanupQueue.Add moves duplicate to back with updated time

When adding a folder that already exists in the queue:
- Remove it from its current position
- Add it to the back of the queue
- Update the queue time to current time

This ensures that folders with recent delete activity are processed
later, giving more time for additional deletes to occur.

* filer: CleanupQueue uses event time and inserts in sorted order

Changes:
- Add() now takes eventTime parameter instead of using current time
- Insert items in time-sorted order (oldest at front) to handle out-of-order events
- When updating duplicate with newer time, reposition to maintain sort order
- Ignore updates with older time (keep existing later time)

This ensures proper ordering when processing events from distributed filers
where event arrival order may not match event occurrence order.

* filer: remove unused CleanupQueue functions (SetNowFunc, GetAll)

Removed test-only functions:
- SetNowFunc: tests now use real time with past event times
- GetAll: tests now use Pop() to verify order

Kept functions used in production:
- Peek: used in filer_notify_read.go
- OldestAge: used in empty_folder_cleaner.go logging

* filer: initialize cache entry on first delete/create event

Previously, roughCount was only updated if the cache entry already
existed, but entries were only created during executeCleanup. This
meant delete/create events before the first cleanup didn't track
the count.

Now create the cache entry on first event, so roughCount properly
tracks all changes from the start.

* filer: skip adding to cleanup queue if roughCount > 0

If the cached roughCount indicates there are still items in the
folder, don't bother adding it to the cleanup queue. This avoids
unnecessary queue entries and reduces wasted cleanup checks.

* filer: don't create cache entry on create event

Only update roughCount if the folder is already being tracked.
New folders don't need tracking until we see a delete event.

* filer: move empty folder cleanup to its own package

- Created weed/filer/empty_folder_cleanup package
- Defined FilerOperations interface to break circular dependency
- Added CountDirectoryEntries method to Filer
- Exported IsUnderPath and IsUnderBucketPath helper functions

* filer: make isUnderPath and isUnderBucketPath private

These helpers are only used within the empty_folder_cleanup package.
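
The shape of CleanupQueue, sketched as an illustrative subset of the real API: a
time-sorted list (oldest at front) plus a map for O(1) dedup and removal.

    import (
        "container/list"
        "sync"
        "time"
    )

    type CleanupQueue struct {
        mu    sync.Mutex
        order *list.List               // of *queuedFolder, sorted by event time
        items map[string]*list.Element // folder path -> element
    }

    type queuedFolder struct {
        folder string
        at     time.Time // event time, so out-of-order distributed events sort correctly
    }

    // Remove cancels a pending cleanup in O(1), e.g. when a create event
    // arrives for the folder.
    func (q *CleanupQueue) Remove(folder string) {
        q.mu.Lock()
        defer q.mu.Unlock()
        if e, ok := q.items[folder]; ok {
            q.order.Remove(e)
            delete(q.items, folder)
        }
    }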
2025-12-03 21:12:19 -08:00
Chris Lu
e361daa754 fix: SFTP HomeDir path translation for user operations (#7611)
* fix: SFTP HomeDir path translation for user operations

When users have a non-root HomeDir (e.g., '/sftp/user'), their SFTP
operations should be relative to that directory. Previously, when a
user uploaded to '/' via SFTP, the path was not translated to their
home directory, causing 'permission denied for / for permission write'.

This fix adds a toAbsolutePath() method that implements chroot-like
behavior where the user's HomeDir becomes their root. All file and
directory operations now translate paths through this method.

Example: User with HomeDir='/sftp/user' uploading to '/' now correctly
maps to '/sftp/user'.

Fixes: https://github.com/seaweedfs/seaweedfs/issues/7470

* test: add SFTP integration tests

Add comprehensive integration tests for the SFTP server including:
- HomeDir path translation tests (verifies fix for issue #7470)
- Basic file upload/download operations
- Directory operations (mkdir, rmdir, list)
- Large file handling (1MB test)
- File rename operations
- Stat/Lstat operations
- Path edge cases (trailing slashes, .., unicode filenames)
- Admin root access verification

The test framework starts a complete SeaweedFS cluster with:
- Master server
- Volume server
- Filer server
- SFTP server with test user credentials

Test users are configured in testdata/userstore.json:
- admin: HomeDir=/ with full access
- testuser: HomeDir=/sftp/testuser with access to home
- readonly: HomeDir=/public with read-only access

* fix: correct SFTP HomeDir path translation and add CI

Fix path.Join issue where paths starting with '/' weren't joined correctly.
path.Join('/sftp/user', '/file') returns '/file' instead of '/sftp/user/file'.
Now we strip the leading '/' before joining.

Test improvements:
- Update go.mod to Go 1.24
- Fix weed binary discovery to prefer local build over PATH
- Add stabilization delay after service startup
- All 8 SFTP integration tests pass locally

Add GitHub Actions workflow for SFTP tests:
- Runs on push/PR affecting sftpd code or tests
- Tests HomeDir path translation, file ops, directory ops
- Covers issue #7470 fix verification

* security: update golang.org/x/crypto to v0.45.0

Addresses security vulnerability in golang.org/x/crypto < 0.45.0

* security: use proper SSH host key verification in tests

Replace ssh.InsecureIgnoreHostKey() with ssh.FixedHostKey() that
verifies the server's host key matches the known test key we generated.
This addresses CodeQL warning go/insecure-hostkeycallback.

Also updates go.mod to specify go 1.24.0 explicitly.

* security: fix path traversal vulnerability in SFTP toAbsolutePath

The previous implementation had a critical security vulnerability:
- Path traversal via '../..' could escape the HomeDir chroot jail
- Absolute paths were not correctly prefixed with HomeDir

The fix:
1. Concatenate HomeDir with userPath directly, then clean
2. Add security check to ensure final path stays within HomeDir
3. If traversal detected, safely return HomeDir instead

Also adds path traversal prevention tests to verify the fix.

* fix: address PR review comments

1. Fix SkipCleanup check to use actual test config instead of default
   - Added skipCleanup field to SftpTestFramework struct
   - Store config.SkipCleanup during Setup()
   - Use f.skipCleanup in Cleanup() instead of DefaultTestConfig()

2. Fix path prefix check false positive in mkdir
   - Changed from strings.HasPrefix(absPath, fs.user.HomeDir)
   - To: absPath == fs.user.HomeDir || strings.HasPrefix(absPath, fs.user.HomeDir+"/")
   - Prevents matching partial directory names (e.g., /sftp/username when HomeDir is /sftp/user)

* fix: check write permission on parent dir for mkdir

Aligns makeDir's permission check with newFileWriter for consistency.
To create a directory, a user needs write permission on the parent
directory, not mkdir permission on the new directory path.

* fix: refine SFTP path traversal logic and tests

1. Refine toAbsolutePath:
   - Use path.Join with strings.TrimPrefix for idiomatic path construction
   - Return explicit error on path traversal attempt instead of clamping
   - Updated all call sites to handle the error

2. Add Unit Tests:
   - Added sftp_server_test.go to verify toAbsolutePath logic
   - Covers normal paths, root path, and various traversal attempts

3. Update Integration Tests:
   - Updated PathTraversalPrevention test to reflect that standard SFTP clients
     sanitize paths before sending. The test now verifies successful containment
     within the jail rather than blocking (since the server receives a clean path).
   - The server-side blocking is verified by the new unit tests.

4. Makefile:
   - Removed -v from default test target

* fix: address PR comments on tests and makefile

1. Enhanced Unit Tests:
   - Added edge cases (empty path, multiple slashes, trailing slash) to sftp_server_test.go

2. Makefile Improvements:
   - Added 'all' target as default entry point

3. Code Clarity:
   - Added comment to mkdir permission check explaining defensive nature of HomeDir check

* fix: address PR review comments on permissions and tests

1. Security:
   - Added write permission check on target directory in renameEntry

2. Logging:
   - Changed dispatch log verbosity from V(0) to V(1)

3. Testing:
   - Updated Makefile .PHONY targets
   - Added unit test cases for empty/root HomeDir behavior in toAbsolutePath

* fix: set SFTP starting directory to virtual root

1. Critical Fix:
   - Changed sftp.WithStartDirectory from fs.user.HomeDir to '/'
   - Prevents double-prefixing when toAbsolutePath translates paths
   - Users now correctly start at their virtual root which maps to HomeDir

2. Test Improvements:
   - Use pointer for homeDir in tests for clearer nil vs empty distinction

* fix: clean HomeDir at config load time

Clean HomeDir path when loading users from JSON config.
This handles trailing slashes and other path anomalies at the source,
ensuring consistency throughout the codebase and avoiding repeated
cleaning on every toAbsolutePath call.

* test: strengthen assertions and add error checking in SFTP tests

1. Add error checking for cleanup operations in TestWalk
2. Strengthen cwd assertion to expect '/' explicitly in TestCurrentWorkingDirectory
3. Add error checking for cleanup in PathTraversalPrevention test
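
A sketch of the final toAbsolutePath logic (illustrative body; the real method lives
on the SFTP filesystem type): chroot-like mapping of the user's virtual root onto
HomeDir, with an explicit error on traversal.

    import (
        "fmt"
        "path"
        "strings"
    )

    func toAbsolutePath(homeDir, userPath string) (string, error) {
        if homeDir == "" || homeDir == "/" {
            return path.Clean("/" + userPath), nil // root home: no jail to escape
        }
        abs := path.Join(homeDir, strings.TrimPrefix(userPath, "/"))
        // Exact match or true child: prevents /sftp/username passing for
        // HomeDir=/sftp/user, and rejects cleaned '../..' escapes.
        if abs != homeDir && !strings.HasPrefix(abs, homeDir+"/") {
            return "", fmt.Errorf("path traversal attempt: %q", userPath)
        }
        return abs, nil
    }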
2025-12-03 13:42:05 -08:00
Lisandro Pin
d59cc1b09f Fix handling of fixed read-only volumes for volume.check.disk. (#7612)
There's unfortunately no way to tell whether a volume is flagged read-only
because it got full, or because it is faulty. To address this, modify the
check logic so all read-only volumes are processed; if no changes are written
(i.e. the volume is healthy) it is kept as read-only.

Volumes which are modified in this process are deemed fixed, and switched to writable.
2025-12-03 11:33:35 -08:00
Xiao Wei
3bcadc9f90 fix: update getVersioningState to signal non-existent buckets with Er… (#7613)
* fix: update getVersioningState to signal non-existent buckets with ErrNotFound

This change modifies the getVersioningState function to return filer_pb.ErrNotFound when a requested bucket does not exist, allowing callers to handle the situation appropriately, such as auto-creating the bucket in PUT handlers. This improves error handling and clarity in the API's behavior regarding bucket existence.

* Update weed/s3api/s3api_bucket_config.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: 洪晓威 <xiaoweihong@deepglint.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-12-03 10:23:59 -08:00
Chris Lu
e9da64f62a fix: volume server healthz now checks local conditions only (#7610)
This fixes issue #6823 where a single volume server shutdown would cause
other healthy volume servers to fail their health checks and get restarted
by Kubernetes, causing a cascading failure.

Previously, the healthz handler checked if all replicated volumes could
reach their remote replicas via GetWritableRemoteReplications(). When a
volume server went down, the master would remove it from the volume
location list. Other volume servers would then fail their healthz checks
because they couldn't find all required replicas, causing Kubernetes to
restart them.

The healthz endpoint now only checks local conditions:
1. Is the server shutting down?
2. Is the server heartbeating with the master?

This follows the principle that a health check should only verify the
health of THIS server, not the overall cluster state.

Fixes #6823
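
A sketch of the local-only health check (illustrative wiring): only THIS server's
state is consulted, so one server going down can't cascade into restarts of healthy
peers.

    import "net/http"

    func healthzHandler(isStopping, isHeartbeating func() bool) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            if isStopping() || !isHeartbeating() {
                w.WriteHeader(http.StatusServiceUnavailable)
                return
            }
            w.WriteHeader(http.StatusOK) // no cluster-wide replica checks here
        }
    }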
2025-12-02 23:19:14 -08:00
Chris Lu
5ed0b00fb9 Support separate volume server ID independent of RPC bind address (#7609)
* pb: add id field to Heartbeat message for stable volume server identification

This adds an 'id' field to the Heartbeat protobuf message that allows
volume servers to identify themselves independently of their IP:port address.

Ref: https://github.com/seaweedfs/seaweedfs/issues/7487

* storage: add Id field to Store struct

Add Id field to Store struct and include it in CollectHeartbeat().
The Id field provides a stable volume server identity independent of IP:port.

Ref: https://github.com/seaweedfs/seaweedfs/issues/7487

* topology: support id-based DataNode identification

Update GetOrCreateDataNode to accept an id parameter for stable node
identification. When id is provided, the DataNode can maintain its identity
even when its IP address changes (e.g., in Kubernetes pod reschedules).

For backward compatibility:
- If id is provided, use it as the node ID
- If id is empty, fall back to ip:port

Ref: https://github.com/seaweedfs/seaweedfs/issues/7487

* volume: add -id flag for stable volume server identity

Add -id command line flag to volume server that allows specifying a stable
identifier independent of the IP address. This is useful for Kubernetes
deployments with hostPath volumes where pods can be rescheduled to different
nodes while the persisted data remains on the original node.

Usage: weed volume -id=node-1 -ip=10.0.0.1 ...

If -id is not specified, it defaults to ip:port for backward compatibility.

Fixes https://github.com/seaweedfs/seaweedfs/issues/7487

* server: add -volume.id flag to weed server command

Support the -volume.id flag in the all-in-one 'weed server' command,
consistent with the standalone 'weed volume' command.

Usage: weed server -volume.id=node-1 ...

Ref: https://github.com/seaweedfs/seaweedfs/issues/7487

* topology: add test for id-based DataNode identification

Test the key scenarios:
1. Create DataNode with explicit id
2. Same id with different IP returns same DataNode (K8s reschedule)
3. IP/PublicUrl are updated when node reconnects with new address
4. Different id creates new DataNode
5. Empty id falls back to ip:port (backward compatibility)

Ref: https://github.com/seaweedfs/seaweedfs/issues/7487

* pb: add address field to DataNodeInfo for proper node addressing

Previously, DataNodeInfo.Id was used as the node address, which worked
when Id was always ip:port. Now that Id can be an explicit string,
we need a separate Address field for connection purposes.

Changes:
- Add 'address' field to DataNodeInfo protobuf message
- Update ToDataNodeInfo() to populate the address field
- Update NewServerAddressFromDataNode() to use Address (with Id fallback)
- Fix LookupEcVolume to use dn.Url() instead of dn.Id()

Ref: https://github.com/seaweedfs/seaweedfs/issues/7487

* fix: trim whitespace from volume server id and fix test

- Trim whitespace from -id flag to treat ' ' as empty
- Fix store_load_balancing_test.go to include id parameter in NewStore call

Ref: https://github.com/seaweedfs/seaweedfs/issues/7487

* refactor: extract GetVolumeServerId to util package

Move the volume server ID determination logic to a shared utility function
to avoid code duplication between volume.go and rack.go.

Ref: https://github.com/seaweedfs/seaweedfs/issues/7487

* fix: improve transition logic for legacy nodes

- Use exact ip:port match instead of net.SplitHostPort heuristic
- Update GrpcPort and PublicUrl during transition for consistency
- Remove unused net import

Ref: https://github.com/seaweedfs/seaweedfs/issues/7487

* fix: add id normalization and address change logging

- Normalize id parameter at function boundary (trim whitespace)
- Log when DataNode IP:Port changes (helps debug K8s pod rescheduling)

Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
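
A sketch of the id fallback (the helper name comes from the commit; the body is
illustrative):

    import (
        "net"
        "strconv"
        "strings"
    )

    // A trimmed -id wins; otherwise fall back to ip:port so existing
    // deployments keep their identity.
    func GetVolumeServerId(id, ip string, port int) string {
        if id = strings.TrimSpace(id); id != "" {
            return id
        }
        return net.JoinHostPort(ip, strconv.Itoa(port))
    }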
2025-12-02 22:08:11 -08:00
Chris Lu
51841a2e04 fix: skip cookie validation for EC volume deletion when SkipCookieCheck is set (#7608)
fix: EC volume deletion issues

Fixes #7489

1. Skip cookie check for EC volume deletion when SkipCookieCheck is set
   When batch deleting files from EC volumes with SkipCookieCheck=true
   (e.g., orphan file cleanup), the cookie is not available. The deletion
   was failing with 'unexpected cookie 0' because DeleteEcShardNeedle
   always validated the cookie.

2. Optimize doDeleteNeedleFromAtLeastOneRemoteEcShards to return early
   Return immediately when a deletion succeeds, instead of continuing
   to try all parity shards unnecessarily.

3. Remove useless log message that always logged nil error
   The log at V(1) was logging err after checking it was nil.

Regression introduced in commit 7bdae5172 (Jan 3, 2023) when EC batch
delete support was added.
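
The check, sketched with illustrative names: honor SkipCookieCheck in the EC delete
path, mirroring the normal-volume path, since orphan cleanup has no cookie to present.

    import "fmt"

    func checkCookie(skipCookieCheck bool, got, want uint32) error {
        if skipCookieCheck {
            return nil // batch deletes without cookies (e.g. orphan cleanup)
        }
        if got != want {
            return fmt.Errorf("unexpected cookie %d", got)
        }
        return nil
    }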
2025-12-02 17:00:05 -08:00
Chris Lu
4f038820dc Add disk-aware EC rebalancing (#7597)
* Add placement package for EC shard placement logic

- Consolidate EC shard placement algorithm for reuse across shell and worker tasks
- Support multi-pass selection: racks, then servers, then disks
- Include proper spread verification and scoring functions
- Comprehensive test coverage for various cluster topologies

* Make ec.balance disk-aware for multi-disk servers

- Add EcDisk struct to track individual disks on volume servers
- Update EcNode to maintain per-disk shard distribution
- Parse disk_id from EC shard information during topology collection
- Implement pickBestDiskOnNode() for selecting best disk per shard
- Add diskDistributionScore() for tie-breaking node selection
- Update all move operations to specify target disk in RPC calls
- Improves shard balance within multi-disk servers, not just across servers

* Use placement package in EC detection for consistent disk-level placement

- Replace custom EC disk selection logic with shared placement package
- Convert topology DiskInfo to placement.DiskCandidate format
- Use SelectDestinations() for multi-rack/server/disk spreading
- Convert placement results back to topology DiskInfo for task creation
- Ensures EC detection uses same placement logic as shell commands

* Make volume server evacuation disk-aware

- Use pickBestDiskOnNode() when selecting evacuation target disk
- Specify target disk in evacuation RPC requests
- Maintains balanced disk distribution during server evacuations

* Rename PlacementConfig to PlacementRequest for clarity

PlacementRequest better reflects that this is a request for placement
rather than a configuration object. This improves API semantics.

* Rename DefaultConfig to DefaultPlacementRequest

Aligns with the PlacementRequest type naming for consistency

* Address review comments from Gemini and CodeRabbit

Fix HIGH issues:
- Fix empty disk discovery: Now discovers all disks from VolumeInfos,
  not just from EC shards. This ensures disks without EC shards are
  still considered for placement.
- Fix EC shard count calculation in detection.go: Now correctly filters
  by DiskId and sums actual shard counts using ShardBits.ShardIdCount()
  instead of just counting EcShardInfo entries.

Fix MEDIUM issues:
- Add disk ID to evacuation log messages for consistency with other logging
- Remove unused serverToDisks variable in placement.go
- Fix comment that incorrectly said 'ascending' when sorting is 'descending'

* add ec tests

* Update ec-integration-tests.yml

* Update ec_integration_test.go

* Fix EC integration tests CI: build weed binary and update actions

- Add 'Build weed binary' step before running tests
- Update actions/setup-go from v4 to v6 (Node20 compatibility)
- Update actions/checkout from v2 to v4 (Node20 compatibility)
- Move working-directory to test step only

* Add disk-aware EC rebalancing integration tests

- Add TestDiskAwareECRebalancing test with multi-disk cluster setup
- Test EC encode with disk awareness (shows disk ID in output)
- Test EC balance with disk-level shard distribution
- Add helper functions for disk-level verification:
  - startMultiDiskCluster: 3 servers x 4 disks each
  - countShardsPerDisk: track shards per disk per server
  - calculateDiskShardVariance: measure distribution balance
- Verify no single disk is overloaded with shards
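
Within a chosen node, disk selection can be sketched like this (an illustrative
subset of the placement package):

    type ecDisk struct {
        diskId     uint32
        shardCount int
    }

    // pickBestDiskOnNode, roughly: place the shard on the disk currently
    // holding the fewest EC shards, balancing within the server as well as
    // across servers.
    func pickBestDiskOnNode(disks []ecDisk) (uint32, bool) {
        bestIdx := -1
        for i, d := range disks {
            if bestIdx < 0 || d.shardCount < disks[bestIdx].shardCount {
                bestIdx = i
            }
        }
        if bestIdx < 0 {
            return 0, false // no disks known on this node
        }
        return disks[bestIdx].diskId, true
    }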
2025-12-02 12:30:15 -08:00
Lisandro Pin
ebb06a3908 Mutex command output writes for volume.check.disk. (#7605)
Prevents potential screen garbling when operations are parallelized.
Also simplifies logging by automatically adding newlines on output, if necessary.

Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
2025-12-02 10:14:24 -08:00
Lisandro Pin
ee775825bc Parallelize read-only volume check pass for volume.check.disk. (#7602) 2025-12-02 09:29:27 -08:00
Chris Lu
733ca8e6df Fix SSE-S3 copy: preserve encryption metadata and set chunk SSE type (#7598)
* Fix SSE-S3 copy: preserve encryption metadata and set chunk SSE type

Fixes GitHub #7562: Copying objects between encrypted buckets was failing.

Root causes:
1. processMetadataBytes was re-adding SSE headers from source entry, undoing
   the encryption header filtering. Now uses dstEntry.Extended which is
   already filtered.

2. SSE-S3 streaming copy returned nil metadata. Now properly generates and
   returns SSE-S3 destination metadata (SeaweedFSSSES3Key, AES256 header)
   via ExecuteStreamingCopyWithMetadata.

3. Chunks created during streaming copy didn't have SseType set. Now sets
   SseType and per-chunk SseMetadata with chunk-specific IVs for SSE-S3,
   enabling proper decryption on GetObject.

* Address review: make SSE-S3 metadata serialization failures fatal errors

- In executeEncryptCopy: return error instead of just logging if
  SerializeSSES3Metadata fails
- In createChunkFromData: return error if chunk SSE-S3 metadata
  serialization fails

This ensures objects/chunks are never created without proper encryption
metadata, preventing unreadable/corrupted data.

* fmt

* Refactor: reuse function names instead of creating WithMetadata variants

- Change ExecuteStreamingCopy to return (*EncryptionSpec, error) directly
- Remove ExecuteStreamingCopyWithMetadata wrapper
- Change executeStreamingReencryptCopy to return (*EncryptionSpec, error)
- Remove executeStreamingReencryptCopyWithMetadata wrapper
- Update callers to ignore encryption spec with _ where not needed

* Add TODO documenting large file SSE-S3 copy limitation

The streaming copy approach encrypts the entire stream with a single IV
but stores data in chunks with per-chunk IVs. This causes decryption
issues for large files. Small inline files work correctly.

This is a known architectural issue that needs separate work to fix.

* Use chunk-by-chunk encryption for SSE-S3 copy (consistent with SSE-C/SSE-KMS)

Instead of streaming encryption (which had IV mismatch issues for multi-chunk
files), SSE-S3 now uses the same chunk-by-chunk approach as SSE-C and SSE-KMS:

1. Extended copyMultipartCrossEncryption to handle SSE-S3:
   - Added SSE-S3 source decryption in copyCrossEncryptionChunk
   - Added SSE-S3 destination encryption with per-chunk IVs
   - Added object-level metadata generation for SSE-S3 destinations

2. Updated routing in executeEncryptCopy/executeDecryptCopy/executeReencryptCopy
   to use copyMultipartCrossEncryption for all SSE-S3 scenarios

3. Removed streaming copy functions (shouldUseStreamingCopy,
   executeStreamingReencryptCopy) as they're no longer used

4. Added large file (1MB) integration test to verify chunk-by-chunk copy works

This ensures consistent behavior across all SSE types and fixes data corruption
that occurred with large files in the streaming copy approach.

* fmt

* fmt

* Address review: fail explicitly if SSE-S3 metadata is missing

Instead of silently ignoring missing SSE-S3 metadata (which could create
unreadable objects), now explicitly fail the copy operation with a clear
error message if:
- First chunk is missing
- First chunk doesn't have SSE-S3 type
- First chunk has empty SSE metadata
- Deserialization fails

* Address review: improve comment to reflect full scope of chunk creation

* Address review: fail explicitly if baseIV is empty for SSE-S3 chunk encryption

If DestinationIV is not set when encrypting SSE-S3 chunks, the chunk would
be created without SseMetadata, causing GetObject decryption to fail later.
Now fails explicitly with a clear error message.

Note: calculateIVWithOffset returns ([]byte, int) not ([]byte, error) - the
int is a skip amount for intra-block alignment, not an error code.

* Address review: handle 0-byte files in SSE-S3 copy

For 0-byte files, there are no chunks to get metadata from. Generate an IV
for the object-level metadata to ensure even empty files are properly marked
as SSE-S3 encrypted.

Also validate that we don't have a non-empty file with no chunks (which
would indicate an internal error).
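
A sketch of the idea behind calculateIVWithOffset (the commit confirms the
([]byte, int) signature; the body here is illustrative): advance the base IV's
big-endian counter by the chunk's AES block offset, returning the intra-block skip.

    func calculateIVWithOffset(baseIV []byte, offset int64) ([]byte, int) {
        iv := make([]byte, len(baseIV))
        copy(iv, baseIV)
        blocks := uint64(offset / 16) // 16 = AES block size
        // add 'blocks' into the IV, treated as a big-endian counter
        for i := len(iv) - 1; i >= 0 && blocks > 0; i-- {
            sum := uint64(iv[i]) + (blocks & 0xff)
            iv[i] = byte(sum)
            blocks = blocks>>8 + sum>>8 // carry
        }
        return iv, int(offset % 16) // bytes to skip within the first block, not an error
    }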
2025-12-02 09:24:31 -08:00
Chris Lu
099a351f3b Fix issue #6847: S3 chunked encoding includes headers in stored content (#7595)
* Fix issue #6847: S3 chunked encoding includes headers in stored content

- Add hasTrailer flag to s3ChunkedReader to track trailer presence
- Update state transition logic to properly handle trailers in unsigned streaming
- Enhance parseChunkChecksum to handle multiple trailer lines
- Skip checksum verification for unsigned streaming uploads
- Add test case for mixed format handling (unsigned headers with signed chunks)
- Remove redundant CRLF reading in trailer processing

This fixes the issue where chunk-signature and x-amz headers were appearing
in stored file content when using chunked encoding with newer AWS SDKs.

* Fix checksum validation for unsigned streaming uploads

- Always validate checksum for data integrity regardless of signing
- Correct checksum value in test case
- Addresses PR review feedback about checksum verification

* Add warning log when multiple checksum headers found in trailer

- Log a warning when multiple valid checksum headers appear in trailers
- Uses last checksum header as suggested by CodeRabbit reviewer
- Improves debugging for edge cases with multiple checksum algorithms

* Improve trailer parsing robustness in parseChunkChecksum

- Remove redundant trimTrailingWhitespace call since readChunkLine already trims
- Use bytes.TrimSpace for both key and value to handle whitespace around colon separator
- Follows HTTP header specifications for optional whitespace around separators
- Addresses Gemini Code Assist review feedback
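
Trailer scanning, sketched (illustrative signature): trim optional whitespace around
the colon, and let the last checksum header win when several appear.

    import "strings"

    func parseChunkChecksum(trailerLines []string) (algo, sum string) {
        for _, line := range trailerLines {
            key, value, ok := strings.Cut(line, ":")
            if !ok {
                continue
            }
            key = strings.ToLower(strings.TrimSpace(key))
            if strings.HasPrefix(key, "x-amz-checksum-") {
                // later headers overwrite earlier ones; the handler logs a
                // warning when more than one valid checksum header appears
                algo = strings.TrimPrefix(key, "x-amz-checksum-")
                sum = strings.TrimSpace(value)
            }
        }
        return algo, sum
    }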
2025-12-01 17:41:42 -08:00
Chris Lu
8c585a9682 Fix S3 object tagging issue #7589
- Add X-Amz-Tagging header parsing in putToFiler function for PUT object operations
- Store tags with X-Amz-Tagging- prefix in entry.Extended metadata
- Add comprehensive test suite for S3 object tagging functionality
- Tests cover upload tagging, API operations, special characters, and edge cases
2025-12-01 15:21:30 -08:00
Chris Lu
61c0514a1c filer: add username and keyPrefix support for Redis stores (#7591)
* filer: add username and keyPrefix support for Redis stores

Addresses https://github.com/seaweedfs/seaweedfs/issues/7299

- Add username config option to redis2, redis_cluster2, redis_lua, and
  redis_lua_cluster stores (sentinel stores already had it)
- Add keyPrefix config option to all Redis stores to prefix all keys,
  useful for Envoy Redis Proxy or multi-tenant Redis setups

* refactor: reduce duplication in redis.NewClient creation

Address code review feedback by defining redis.Options once and
conditionally setting TLSConfig instead of duplicating the entire
NewClient call.

* filer.toml: add username and keyPrefix to redis2.tmp example
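
Roughly, assuming go-redis v9, the shared client construction and prefixing look
like this (a sketch, not the store's exact code):

    import (
        "crypto/tls"

        "github.com/redis/go-redis/v9"
    )

    // Build redis.Options once, set TLSConfig conditionally, and prepend
    // keyPrefix to every filer key (useful behind Envoy Redis Proxy or on
    // multi-tenant Redis).
    func newRedisClient(addr, username, password, keyPrefix string, useTLS bool) (*redis.Client, func(string) string) {
        opts := &redis.Options{Addr: addr, Username: username, Password: password}
        if useTLS {
            opts.TLSConfig = &tls.Config{MinVersion: tls.VersionTLS12}
        }
        key := func(path string) string { return keyPrefix + path }
        return redis.NewClient(opts), key
    }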
2025-12-01 13:31:35 -08:00
Chris Lu
f6f3859826 Fix #7575: Correct interface check for filer address function in admin server (#7588)
* Fix #7575: Correct interface check for filer address function in admin server

Problem:
User creation in object store was failing with error:
'filer_etc: filer address function not configured'

Root Cause:
In admin_server.go, the code checked for incorrect interface method
SetFilerClient(string, grpc.DialOption) instead of the actual
SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption)

This interface mismatch prevented the filer address function from being
configured, causing user creation operations to fail.

Solution:
- Fixed interface check to use SetFilerAddressFunc
- Updated function call to properly configure filer address function
- Function now dynamically returns current active filer address

Tests Added:
- Unit tests in weed/admin/dash/user_management_test.go
- Integration tests in test/admin/user_creation_integration_test.go
- Documentation in test/admin/README.md

All tests pass successfully.
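
A minimal sketch of the corrected check (interface name and surrounding helpers are illustrative; the method signature is the one quoted above):

  type filerAddressSetter interface {
      SetFilerAddressFunc(addrFn func() pb.ServerAddress, dialOption grpc.DialOption)
  }

  // Assert against the interface the store actually implements.
  if setter, ok := store.(filerAddressSetter); ok {
      setter.SetFilerAddressFunc(func() pb.ServerAddress {
          return currentActiveFiler() // hypothetical: returns the active filer
      }, grpcDialOption)
  }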

* Fix #7575: Correct interface check for filer address function in admin UI

Problem:
User creation in Admin UI was failing with error:
'filer_etc: filer address function not configured'

Root Cause:
In admin_server.go, the code checked for incorrect interface method
SetFilerClient(string, grpc.DialOption) instead of the actual
SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption)

This interface mismatch prevented the filer address function from being
configured, causing user creation operations to fail in the Admin UI.

Note: This bug only affects the Admin UI. The S3 API and weed shell
commands (s3.configure) were unaffected as they use the correct interface
or bypass the credential manager entirely.

Solution:
- Fixed interface check in admin_server.go to use SetFilerAddressFunc
- Updated function call to properly configure filer address function
- Function now dynamically returns current active filer (HA-aware)
- Cleaned up redundant comments in the code

Tests Added:
- Unit tests in weed/admin/dash/user_management_test.go
  * TestFilerAddressFunctionInterface - verifies correct interface
  * TestGenerateAccessKey - tests key generation
  * TestGenerateSecretKey - tests secret generation
  * TestGenerateAccountId - tests account ID generation

All tests pass and will run automatically in CI.

* Fix #7575: Correct interface check for filer address function in admin UI

Problem:
User creation in Admin UI was failing with error:
'filer_etc: filer address function not configured'

Root Cause:
1. In admin_server.go, the code checked for incorrect interface method
   SetFilerClient(string, grpc.DialOption) instead of the actual
   SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption)
2. The admin command was missing the filer_etc import, so the store
   was never registered

This interface mismatch prevented the filer address function from being
configured, causing user creation operations to fail in the Admin UI.

Note: This bug only affects the Admin UI. The S3 API and weed shell
commands (s3.configure) were unaffected as they use the correct interface
or bypass the credential manager entirely.

Solution:
- Added filer_etc import to weed/command/admin.go to register the store
- Fixed interface check in admin_server.go to use SetFilerAddressFunc
- Updated function call to properly configure filer address function
- Function now dynamically returns current active filer (HA-aware)
- Hoisted credentialManager assignment to reduce code duplication

Tests Added:
- Unit tests in weed/admin/dash/user_management_test.go
  * TestFilerAddressFunctionInterface - verifies correct interface
  * TestGenerateAccessKey - tests key generation
  * TestGenerateSecretKey - tests secret generation
  * TestGenerateAccountId - tests account ID generation

All tests pass and will run automatically in CI.
2025-12-01 12:19:02 -08:00
Lisandro Pin
36dd59560b Have volume.check.disk select a random (healthy) source volume when… (#7574)
Have `volume.check.disk` select a random (healthy) source volume when repairing read-only volumes.

This ensures uniform load across the topology when the command is run. Also remove a lingering
TODO about ignoring full volumes; not only is there no way to tell whether a read-only volume is
full or damaged, we ultimately want to check full volumes anyway.
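
A sketch of the selection logic (replica type and health flag are hypothetical):

  // Collect usable replicas first, then pick one uniformly at random so
  // repair reads spread evenly across the topology; needs math/rand.
  candidates := make([]*VolumeReplica, 0, len(replicas))
  for _, r := range replicas {
      if r.Healthy { // hypothetical flag
          candidates = append(candidates, r)
      }
  }
  if len(candidates) == 0 {
      return fmt.Errorf("no healthy source replica for volume %d", vid)
  }
  source := candidates[rand.Intn(len(candidates))]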
2025-12-01 10:58:29 -08:00
Krzysztof Osiniak
5461f85240 Update README and weed/Makefile (#7571)
* Add link to wiki installation page in README

* Add building for docker in weed/Makefile

Building without `CGO_ENABLED=0` and running the executable in Docker can cause the container to exit with an error
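
For reference, the corresponding build invocation (output name assumed):

  # build a static Linux binary so it runs in minimal Docker images
  CGO_ENABLED=0 GOOS=linux go build -o weed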

* Update README.md

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Add GOOS=linux to build_docker target for cross-compilation

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
2025-11-29 11:36:22 -08:00
Chris Lu
d48e1e1659 mount: improve read throughput with parallel chunk fetching (#7569)
* mount: improve read throughput with parallel chunk fetching

This addresses issue #7504 where a single weed mount FUSE instance
does not fully utilize node network bandwidth when reading large files.

Changes:
- Add -concurrentReaders mount option (default: 16) to control the
  maximum number of parallel chunk fetches during read operations
- Implement parallel section reading in ChunkGroup.ReadDataAt() using
  errgroup for better throughput when reading across multiple sections
- Enhance ReaderCache with MaybeCacheMany() to prefetch multiple chunks
  ahead in parallel during sequential reads (now prefetches 4 chunks)
- Increase ReaderCache limit dynamically based on concurrentReaders
  to support higher read parallelism

The bottleneck was that chunks were being read sequentially even when
they reside on different volume servers. By introducing parallel chunk
fetching, a single mount instance can now better saturate available
network bandwidth.

Fixes: #7504
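
A hedged sketch of the parallel-section read pattern (golang.org/x/sync/errgroup; section and reader names are illustrative, not the actual ChunkGroup code):

  g, ctx := errgroup.WithContext(ctx)
  g.SetLimit(concurrentReaders) // cap goroutine fan-out
  for _, section := range sections {
      section := section // capture loop variable (pre-Go 1.22)
      g.Go(func() error {
          _, err := section.readDataAt(ctx, buf, offset) // hypothetical
          return err
      })
  }
  if err := g.Wait(); err != nil {
      return 0, err
  }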

* fmt

* Address review comments: make prefetch configurable, improve error handling

Changes:
1. Add DefaultPrefetchCount constant (4) to reader_at.go
2. Add GetPrefetchCount() method to ChunkGroup that derives prefetch count
   from concurrentReaders (1/4 ratio, min 1, max 8)
3. Pass prefetch count through NewChunkReaderAtFromClient
4. Fix error handling in readDataAtParallel to prioritize errgroup error
5. Update all callers to use DefaultPrefetchCount constant

For mount operations, prefetch scales with -concurrentReaders:
- concurrentReaders=16 (default) -> prefetch=4
- concurrentReaders=32 -> prefetch=8 (capped)
- concurrentReaders=4 -> prefetch=1

For non-mount paths (WebDAV, query engine, MQ), uses DefaultPrefetchCount.
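
The stated derivation, as a sketch:

  // prefetch = concurrentReaders / 4, clamped to [1, 8]
  func prefetchCount(concurrentReaders int) int {
      n := concurrentReaders / 4
      if n < 1 {
          n = 1
      }
      if n > 8 {
          n = 8
      }
      return n
  }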

* fmt

* Refactor: use variadic parameter instead of new function name

Use NewChunkGroup with optional concurrentReaders parameter instead of
creating a separate NewChunkGroupWithConcurrency function.

This maintains backward compatibility - existing callers without the
parameter get the default of 16 concurrent readers.

* Use explicit concurrentReaders parameter instead of variadic

* Refactor: use MaybeCache with count parameter instead of new MaybeCacheMany function

* Address nitpick review comments

- Add upper bound (128) on concurrentReaders to prevent excessive goroutine fan-out
- Cap readerCacheLimit at 256 accordingly
- Fix SetChunks: use Lock() instead of RLock() since we are writing to group.sections
2025-11-29 10:06:11 -08:00
Chris Lu
bd419fda51 fix: copy to bucket with default SSE-S3 encryption fails (#7562) (#7568)
* filer use context without cancellation

* pass along context

* fix: copy to bucket with default SSE-S3 encryption fails (#7562)

When copying an object from an encrypted bucket to a temporary unencrypted
bucket, then to another bucket with default SSE-S3 encryption, the operation
fails with 'invalid SSE-S3 source key type' error.

Root cause:
When objects are copied from an SSE-S3 encrypted bucket to an unencrypted
bucket, the 'X-Amz-Server-Side-Encryption: AES256' header is preserved but
the actual encryption key (SeaweedFSSSES3Key) is stripped. This creates an
'orphaned' SSE-S3 header that causes IsSSES3EncryptedInternal() to return
true, triggering decryption logic with a nil key.

Fix:
1. Modified IsSSES3EncryptedInternal() to require BOTH the AES256 header
   AND the SeaweedFSSSES3Key to be present before returning true
2. Added isOrphanedSSES3Header() to detect orphaned SSE-S3 headers
3. Updated copy handler to strip orphaned headers during copy operations

Fixes #7562
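
A minimal sketch of the tightened check (metadata key names are illustrative):

  // Treat an entry as SSE-S3 encrypted only when both the AES256 header
  // and the stored key are present; a header alone is "orphaned".
  func isSSES3Encrypted(metadata map[string][]byte) bool {
      header, hasHeader := metadata["X-Amz-Server-Side-Encryption"]
      _, hasKey := metadata["SeaweedFS-SSE-S3-Key"] // illustrative key name
      return hasHeader && string(header) == "AES256" && hasKey
  }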

* fmt

* refactor: simplify isOrphanedSSES3Header function logic

Remove redundant existence check since the caller iterates through
metadata map, making the check unnecessary. Improves readability
while maintaining the same functionality.
2025-11-28 13:28:17 -08:00
Chris Lu
7e4bab8032 filer write request use context without cancellation (#7567)
* filer use context without cancellation

* pass along context
2025-11-28 11:52:57 -08:00
Chris Lu
03ce060e85 fix too many pings (#7566)
* fix too many pings

* constants

* Update weed/pb/grpc_client_server.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-28 11:47:24 -08:00
chrislu
487c0f92a9 fmt 2025-11-27 22:44:35 -08:00
Chris Lu
76b9611881 fix nil map 2025-11-27 15:06:37 -08:00
Chris Lu
7e15a4abe2 4.01 2025-11-27 11:39:05 -08:00
steve.wei
5c25df20f2 feat(volume.fix): show all replica locations for misplaced volumes (#7560) 2025-11-27 00:04:45 -08:00
Chris Lu
ebb4f57cc7 s3api: Fix response-content-disposition query parameter not being honored (#7559)
* s3api: Fix response-content-disposition query parameter not being honored

Fixes #7486

This fix resolves an issue where S3 presigned URLs with query parameters
like `response-content-disposition`, `response-content-type`, etc. were
being ignored, causing browsers to use default file handling instead of
the specified behavior.

Changes:
- Modified `setResponseHeaders()` to accept the HTTP request object
- Added logic to process S3 passthrough headers from query parameters
- Updated all call sites to pass the request object
- Supports all AWS S3 response override parameters:
  - response-content-disposition
  - response-content-type
  - response-cache-control
  - response-content-encoding
  - response-content-language
  - response-expires

The implementation follows the same pattern used in the filer handler
and properly honors the AWS S3 API specification for presigned URLs.
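
A sketch of the query-to-header mapping (the helper name is illustrative):

  var responseOverrides = map[string]string{
      "response-content-disposition": "Content-Disposition",
      "response-content-type":        "Content-Type",
      "response-cache-control":       "Cache-Control",
      "response-content-encoding":    "Content-Encoding",
      "response-content-language":    "Content-Language",
      "response-expires":             "Expires",
  }

  // Called from setResponseHeaders(), which now receives the request.
  func applyResponseOverrides(w http.ResponseWriter, r *http.Request) {
      for param, header := range responseOverrides {
          if v := r.URL.Query().Get(param); v != "" {
              w.Header().Set(header, v)
          }
      }
  }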

Testing:
- Existing S3 API tests pass without modification
- Build succeeds with no compilation errors

* Update weed/s3api/s3api_object_handlers.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-26 17:17:02 -08:00
Chris Lu
4106fc0436 fix(tikv): improve context propagation and refactor batch delete logic (#7558)
* fix(tikv): improve context propagation and refactor batch delete logic

Address review comments from PR #7557:

1. Replace context.TODO() with ctx in txn.Get calls
   - Fixes timeout/cancellation propagation in FindEntry
   - Fixes timeout/cancellation propagation in KvGet

2. Refactor DeleteFolderChildren to use flush helper
   - Eliminates code duplication
   - Cleaner and more maintainable

These changes ensure proper context propagation throughout all
TiKV operations and improve code maintainability.

* error formatting
2025-11-26 15:54:26 -08:00
Chris Lu
848bec6d24 Metrics: Add Prometheus metrics for concurrent upload tracking (#7555)
* metrics: add Prometheus metrics for concurrent upload tracking

Add Prometheus metrics to monitor concurrent upload activity for both
filer and S3 servers. This provides visibility into the upload limiting
feature added in the previous PR.

New Metrics:
- SeaweedFS_filer_in_flight_upload_bytes: Current bytes being uploaded to filer
- SeaweedFS_filer_in_flight_upload_count: Current number of uploads to filer
- SeaweedFS_s3_in_flight_upload_bytes: Current bytes being uploaded to S3
- SeaweedFS_s3_in_flight_upload_count: Current number of uploads to S3

The metrics are updated atomically whenever uploads start or complete,
providing real-time visibility into upload concurrency levels.
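
A sketch of how such gauges are declared and updated (promauto usage assumed; metric names from the list above):

  var s3InFlightUploadBytes = promauto.NewGauge(prometheus.GaugeOpts{
      Namespace: "SeaweedFS",
      Subsystem: "s3",
      Name:      "in_flight_upload_bytes",
      Help:      "Current number of bytes being uploaded to S3.",
  })

  // At upload start and completion:
  s3InFlightUploadBytes.Add(float64(contentLength))
  defer s3InFlightUploadBytes.Sub(float64(contentLength))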

This helps operators:
- Monitor upload concurrency in real-time
- Set appropriate limits based on actual usage patterns
- Detect potential bottlenecks or capacity issues
- Track the effectiveness of upload limiting configuration

* grafana: add dashboard panels for concurrent upload metrics

Add 4 new panels to the Grafana dashboard to visualize the concurrent
upload metrics added in this PR:

Filer Section:
- Filer Concurrent Uploads: Shows current number of concurrent uploads
- Filer Concurrent Upload Bytes: Shows current bytes being uploaded

S3 Gateway Section:
- S3 Concurrent Uploads: Shows current number of concurrent uploads
- S3 Concurrent Upload Bytes: Shows current bytes being uploaded

These panels help operators monitor upload concurrency in real-time and
tune the upload limiting configuration based on actual usage patterns.

* more efficient
2025-11-26 15:51:38 -08:00
Chris Lu
5287d9f3e3 fix(tikv): replace DeleteRange with transaction-based batch deletes (#7557)
* fix(tikv): replace DeleteRange with transaction-based batch deletes

Fixes #7187

Problem:
TiKV's DeleteRange API is a RawKV operation that bypasses transaction
isolation. When SeaweedFS filer uses TiKV with txn client and another
service uses RawKV client on the same cluster, DeleteFolderChildren
can accidentally delete KV pairs from the RawKV client because
DeleteRange operates at the raw key level without respecting
transaction boundaries.

Reproduction:
1. SeaweedFS filer using TiKV txn client for metadata
2. Another service using rawkv client on same TiKV cluster
3. Filer performs batch file deletion via DeleteFolderChildren
4. Result: ~50% of rawkv client's KV pairs get deleted

Solution:
Replace client.DeleteRange() (RawKV API) with transactional batch
deletes using txn.Delete() within transactions. This ensures:
- Transaction isolation - operations respect TiKV's MVCC boundaries
- Keyspace separation - txn client and RawKV client stay isolated
- Proper key handling - keys are copied to avoid iterator reuse issues
- Batch processing - deletes batched (10K default) to manage memory
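
A simplified sketch of the batch-delete shape against tikv client-go (not the actual store code):

  // Deleting inside a transaction preserves MVCC isolation, unlike the
  // raw DeleteRange API. Keys must be copied out of the iterator first,
  // e.g. append([]byte(nil), key...), before being batched here.
  func deleteBatch(ctx context.Context, client *txnkv.Client, keys [][]byte) error {
      txn, err := client.Begin()
      if err != nil {
          return err
      }
      for _, key := range keys {
          if err := txn.Delete(key); err != nil {
              txn.Rollback()
              return err
          }
      }
      return txn.Commit(ctx)
  }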

Changes:
1. Core data structure:
   - Removed deleteRangeConcurrency field
   - Added batchCommitSize field (configurable, default 10000)

2. DeleteFolderChildren rewrite:
   - Replaced DeleteRange with iterative batch deletes
   - Added proper transaction lifecycle management
   - Implemented key copying to avoid iterator buffer reuse
   - Added batching to prevent memory exhaustion

3. New deleteBatch helper:
   - Handles transaction creation and lifecycle
   - Batches deletes within single transaction
   - Properly commits/rolls back based on context

4. Context propagation:
   - Updated RunInTxn to accept context parameter
   - All RunInTxn call sites now pass context
   - Enables proper timeout/cancellation handling

5. Configuration:
   - Removed deleterange_concurrency setting
   - Added batchdelete_count setting (default 10000)

All critical review comments from PR #7188 have been addressed:
- Proper key copying with append([]byte(nil), key...)
- Conditional transaction rollback based on inContext flag
- Context propagation for commits
- Proper transaction lifecycle management
- Configurable batch size

Co-authored-by: giftz <giftz@users.noreply.github.com>

* fix: remove extra closing brace causing syntax error in tikv_store.go

---------

Co-authored-by: giftz <giftz@users.noreply.github.com>
2025-11-26 14:45:56 -08:00
Chris Lu
cd2fac4551 S3: pass HTTP 429 from volume servers to S3 clients (#7556)
With the recent changes (commit c1b8d4bf0) that made S3 directly access
volume servers instead of proxying through filer, we need to properly
handle HTTP 429 (Too Many Requests) errors from volume servers.

This change ensures that when volume servers rate limit requests with
HTTP 429, the S3 API properly translates this to an S3-compatible error
response (ErrRequestBytesExceed with HTTP 503) instead of returning a
generic InternalError.
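
A sketch of the sentinel-error wiring (names taken from the change list below):

  var ErrTooManyRequests = errors.New("too many requests")

  // In ReadUrlAsStream: wrap the status so callers can use errors.Is.
  if resp.StatusCode == http.StatusTooManyRequests {
      return fmt.Errorf("GET %s: %w", fileUrl, ErrTooManyRequests)
  }

  // In GetObjectHandler: map to the S3 error instead of InternalError.
  if errors.Is(err, ErrTooManyRequests) {
      s3err.WriteErrorResponse(w, r, s3err.ErrRequestBytesExceed)
      return
  }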

Changes:
- Add ErrTooManyRequests sentinel error in weed/util/http
- Detect HTTP 429 in ReadUrlAsStream and wrap with ErrTooManyRequests
- Check for ErrTooManyRequests in GetObjectHandler and map to S3 error
- Return ErrRequestBytesExceed (HTTP 503) for rate limiting scenarios

This addresses the same issue as PR #7482 but for the new direct
volume server access path instead of the filer proxy path.

Fixes: Rate limiting errors from volume servers being masked as 500
2025-11-26 13:03:09 -08:00
littlemilkwu
0653394ec6 fix(filer-ui): support folder creation with JWT token in URL (#7271)
fix: filer UI folder creation error when a JWT token is in the URL
2025-11-26 12:35:08 -08:00
qzh
3ab26e39ff fix(s3api): fix AWS Signature V2 format and validation (#7488)
* fix(s3api): fix AWS Signature V2 format and validation

* fix(s3api): Skip space after "AWS" prefix (+1 offset)

* test(s3api): add unit tests for Signature V2 authentication fix

* fix(s3api): simplify signature comparison

* add validation for the colon extraction in expectedAuth
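
A sketch of the parsing this series fixes (the helper is illustrative):

  // "Authorization: AWS AccessKeyId:Signature" — skip the space after
  // "AWS" (the "+1 offset"), then split on the last colon and validate
  // that both halves are non-empty.
  func parseV2Auth(header string) (accessKeyID, signature string, ok bool) {
      const prefix = "AWS "
      if !strings.HasPrefix(header, prefix) {
          return "", "", false
      }
      rest := header[len(prefix):]
      i := strings.LastIndex(rest, ":")
      if i <= 0 || i == len(rest)-1 {
          return "", "", false
      }
      return rest[:i], rest[i+1:], true
  }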

---------

Co-authored-by: chrislu <chris.lu@gmail.com>
2025-11-26 12:24:02 -08:00
Chris Lu
edf0ef7a80 Filer, S3: Feature/add concurrent file upload limit (#7554)
* Support multiple filers for S3 and IAM servers with automatic failover

This change adds support for multiple filer addresses in the 'weed s3' and 'weed iam' commands, enabling high availability through automatic failover.

Key changes:
- Updated S3ApiServerOption.Filer to Filers ([]pb.ServerAddress)
- Updated IamServerOption.Filer to Filers ([]pb.ServerAddress)
- Modified -filer flag to accept comma-separated addresses
- Added getFilerAddress() helper methods for backward compatibility
- Updated all filer client calls to support multiple addresses
- Uses pb.WithOneOfGrpcFilerClients for automatic failover

Usage:
  weed s3 -filer=localhost:8888,localhost:8889
  weed iam -filer=localhost:8888,localhost:8889

The underlying FilerClient already supported multiple filers with health
tracking and automatic failover - this change exposes that capability
through the command-line interface.
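
A sketch of the flag parsing (variable names assumed):

  // -filer=localhost:8888,localhost:8889 → []pb.ServerAddress
  var filers []pb.ServerAddress
  for _, a := range strings.Split(*filerFlag, ",") {
      if a = strings.TrimSpace(a); a != "" {
          filers = append(filers, pb.ServerAddress(a))
      }
  }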

* Add filer discovery: treat initial filers as seeds and discover peers from master

Enhances FilerClient to automatically discover additional filers in the same
filer group by querying the master server. This allows users to specify just
a few seed filers, and the client will discover all other filers in the cluster.

Key changes to wdclient/FilerClient:
- Added MasterClient, FilerGroup, and DiscoveryInterval fields
- Added thread-safe filer list management with RWMutex
- Implemented discoverFilers() background goroutine
- Uses cluster.ListExistingPeerUpdates() to query master for filers
- Automatically adds newly discovered filers to the list
- Added Close() method to clean up discovery goroutine

New FilerClientOption fields:
- MasterClient: enables filer discovery from master
- FilerGroup: specifies which filer group to discover
- DiscoveryInterval: how often to refresh (default 5 minutes)

Usage example:
  masterClient := wdclient.NewMasterClient(...)
  filerClient := wdclient.NewFilerClient(
    []pb.ServerAddress{"localhost:8888"}, // seed filers
    grpcDialOption,
    dataCenter,
    &wdclient.FilerClientOption{
      MasterClient: masterClient,
      FilerGroup: "my-group",
    },
  )
  defer filerClient.Close()

The initial filers act as seeds - the client discovers and adds all other
filers in the same group from the master. Discovered filers are added
dynamically without removing existing ones (relying on health checks for
unavailable filers).

* Address PR review comments: implement full failover for IAM operations

Critical fixes based on code review feedback:

1. **IAM API Failover (Critical)**:
   - Replace pb.WithGrpcFilerClient with pb.WithOneOfGrpcFilerClients in:
     * GetS3ApiConfigurationFromFiler()
     * PutS3ApiConfigurationToFiler()
     * GetPolicies()
     * PutPolicies()
   - Now all IAM operations support automatic failover across multiple filers

2. **Validation Improvements**:
   - Add validation in NewIamApiServerWithStore() to require at least one filer
   - Add validation in NewS3ApiServerWithStore() to require at least one filer
   - Add warning log when no filers configured for credential store

3. **Error Logging**:
   - Circuit breaker now logs when config load fails instead of silently ignoring
   - Helps operators understand why circuit breaker limits aren't applied

4. **Code Quality**:
   - Use ToGrpcAddress() for filer address in credential store setup
   - More consistent with rest of codebase and future-proof

These changes ensure IAM operations have the same high availability guarantees
as S3 operations, completing the multi-filer failover implementation.

* Fix IAM manager initialization: remove code duplication, add TODO for HA

Addresses review comment on s3api_server.go:145

Changes:
- Remove duplicate code for getting first filer address
- Extract filerAddr variable once and reuse
- Add TODO comment documenting the HA limitation for IAM manager
- Document that loadIAMManagerFromConfig and NewS3IAMIntegration need
  updates to support multiple filers for full HA

Note: This is a known limitation when using filer-backed IAM stores.
The interfaces need to be updated to accept multiple filer addresses.
For now, documenting this limitation clearly.

* Document credential store HA limitation with TODO

Addresses review comment on auth_credentials.go:149

Changes:
- Add TODO comment documenting that SetFilerClient interface needs update
  for multi-filer support
- Add informative log message indicating HA limitation
- Document that this is a known limitation for filer-backed credential stores

The SetFilerClient interface currently only accepts a single filer address.
To properly support HA, the credential store interfaces need to be updated
to handle multiple filer addresses.

* Track current active filer in FilerClient for better HA

Add GetCurrentFiler() method to FilerClient that returns the currently
active filer based on the filerIndex which is updated on successful
operations. This provides better availability than always using the
first filer.

Changes:
- Add FilerClient.GetCurrentFiler() method that returns current active filer
- Update S3ApiServer.getFilerAddress() to use FilerClient's current filer
- Add fallback to first filer if FilerClient not yet initialized
- Document IAM limitation (doesn't have FilerClient access)

Benefits:
- Single-filer operations (URLs, ReadFilerConf, etc.) now use the
  currently active/healthy filer
- Better distribution and failover behavior
- FilerClient's round-robin and health tracking automatically
  determines which filer to use
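
A sketch of what such a getter can look like (field names assumed):

  // Return the filer most recently used successfully; filerIndex is
  // advanced by the failover logic on each successful operation.
  func (fc *FilerClient) GetCurrentFiler() pb.ServerAddress {
      fc.mu.RLock()
      defer fc.mu.RUnlock()
      if len(fc.filerAddresses) == 0 {
          return ""
      }
      return fc.filerAddresses[int(fc.filerIndex)%len(fc.filerAddresses)]
  }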

* Document ReadFilerConf HA limitation in lifecycle handlers

Addresses review comment on s3api_bucket_handlers.go:880

Add comment documenting that ReadFilerConf uses the current active filer
from FilerClient (which is better than always using first filer), but
doesn't have built-in multi-filer failover.

Add TODO to update filer.ReadFilerConf to support multiple filers for
complete HA. For now, it uses the currently active/healthy filer tracked
by FilerClient which provides reasonable availability.

* Document multipart upload URL HA limitation

Addresses review comment on s3api_object_handlers_multipart.go:442

Add comment documenting that part upload URLs point to the current
active filer (tracked by FilerClient), which is better than always
using the first filer but still creates a potential point of failure
if that filer becomes unavailable during upload.

Suggest TODO solutions:
- Use virtual hostname/load balancer for filers
- Have S3 server proxy uploads to healthy filers

Current behavior provides reasonable availability by using the
currently active/healthy filer rather than being pinned to first filer.

* Document multipart completion Location URL limitation

Addresses review comment on filer_multipart.go:187

Add comment documenting that the Location URL in CompleteMultipartUpload
response points to the current active filer (tracked by FilerClient).

Note that clients should ideally use the S3 API endpoint rather than
this direct URL. If direct access is attempted and the specific filer
is unavailable, the request will fail.

Current behavior uses the currently active/healthy filer rather than
being pinned to the first filer, providing better availability.

* Make credential store use current active filer for HA

Update FilerEtcStore to use a function that returns the current active
filer instead of a fixed address, enabling high availability.

Changes:
- Add SetFilerAddressFunc() method to FilerEtcStore
- Store uses filerAddressFunc instead of fixed filerGrpcAddress
- withFilerClient() calls the function to get current active filer
- Keep SetFilerClient() for backward compatibility (marked deprecated)
- Update S3ApiServer to pass FilerClient.GetCurrentFiler to store

Benefits:
- Credential store now uses currently active/healthy filer
- Automatic failover when filer becomes unavailable
- True HA for credential operations
- Backward compatible with old SetFilerClient interface

This addresses the credential store limitation - no longer pinned to
first filer, uses FilerClient's tracked current active filer.

* Clarify multipart URL comments: filer address not used for uploads

Update comments to reflect that multipart upload URLs are not actually
used for upload traffic - uploads go directly to volume servers.

Key clarifications:
- genPartUploadUrl: Filer address is parsed out, only path is used
- CompleteMultipartUpload Location: Informational field per AWS S3 spec
- Actual uploads bypass filer proxy and go directly to volume servers

The filer address in these URLs is NOT a HA concern because:
1. Part uploads: URL is parsed for path, upload goes to volume servers
2. Location URL: Informational only, clients use S3 endpoint

This addresses the observation that S3 uploads don't go through filers,
only metadata operations do.

* Remove filer address from upload paths - pass path directly

Eliminate unnecessary filer address from upload URLs by passing file
paths directly instead of full URLs that get immediately parsed.

Changes:
- Rename genPartUploadUrl() → genPartUploadPath() (returns path only)
- Rename toFilerUrl() → toFilerPath() (returns path only)
- Update putToFiler() to accept filePath instead of uploadUrl
- Remove URL parsing code (no longer needed)
- Remove net/url import (no longer used)
- Keep old function names as deprecated wrappers for compatibility

Benefits:
- Cleaner code - no fake URL construction/parsing
- No dependency on filer address for internal operations
- More accurate naming (these are paths, not URLs)
- Eliminates confusion about HA concerns

This completely removes the filer address from upload operations - it was
never actually used for routing, only parsed for the path.

* Remove deprecated functions: use new path-based functions directly

Remove deprecated wrapper functions and update all callers to use the
new function names directly.

Removed:
- genPartUploadUrl() → all callers now use genPartUploadPath()
- toFilerUrl() → all callers now use toFilerPath()
- SetFilerClient() → removed along with fallback code

Updated:
- s3api_object_handlers_multipart.go: uploadUrl → filePath
- s3api_object_handlers_put.go: uploadUrl → filePath, versionUploadUrl → versionFilePath
- s3api_object_versioning.go: toFilerUrl → toFilerPath
- s3api_object_handlers_test.go: toFilerUrl → toFilerPath
- auth_credentials.go: removed SetFilerClient fallback
- filer_etc_store.go: removed deprecated SetFilerClient method

Benefits:
- Cleaner codebase with no deprecated functions
- All variable names accurately reflect that they're paths, not URLs
- Single interface for credential stores (SetFilerAddressFunc only)

All code now consistently uses the new path-based approach.

* Fix toFilerPath: remove URL escaping for raw file paths

The toFilerPath function should return raw file paths, not URL-escaped
paths. URL escaping was needed when the path was embedded in a URL
(old toFilerUrl), but now that we pass paths directly to putToFiler,
they should be unescaped.

This fixes S3 integration test failures:
- test_bucket_listv2_encoding_basic
- test_bucket_list_encoding_basic
- test_bucket_listv2_delimiter_whitespace
- test_bucket_list_delimiter_whitespace

The tests were failing because paths were double-encoded (escaped when
stored, then escaped again when listed), resulting in %252B instead of
%2B for '+' characters.

Root cause: When we removed URL parsing in putToFiler, we should have
also removed URL escaping in toFilerPath since paths are now used
directly without URL encoding/decoding.

* Add thread safety to FilerEtcStore and clarify credential store comments

Address review suggestions for better thread safety and code clarity:

1. **Thread Safety**: Add RWMutex to FilerEtcStore
   - Protects filerAddressFunc and grpcDialOption from concurrent access
   - Initialize() uses write lock when setting function
   - SetFilerAddressFunc() uses write lock
   - withFilerClient() uses read lock to get function and dial option
   - GetPolicies() uses read lock to check if configured

2. **Improved Error Messages**:
   - Prefix errors with "filer_etc:" for easier debugging
   - "filer address not configured" → "filer_etc: filer address function not configured"
   - "filer address is empty" → "filer_etc: filer address is empty"

3. **Clarified Comments**:
   - auth_credentials.go: Clarify that initial setup is temporary
   - Document that it's updated in s3api_server.go after FilerClient creation
   - Remove ambiguity about when FilerClient.GetCurrentFiler is used

Benefits:
- Safe for concurrent credential operations
- Clear error messages for debugging
- Explicit documentation of initialization order

* Enable filer discovery: pass master addresses to FilerClient

Fix two critical issues:

1. **Filer Discovery Not Working**: Master client was not being passed to
   FilerClient, so peer discovery couldn't work

2. **Credential Store Design**: Already uses FilerClient via GetCurrentFiler
   function - this is the correct design for HA

Changes:

**Command (s3.go):**
- Read master addresses from GetFilerConfiguration response
- Pass masterAddresses to S3ApiServerOption
- Log master addresses for visibility

**S3ApiServerOption:**
- Add Masters []pb.ServerAddress field for discovery

**S3ApiServer:**
- Create MasterClient from Masters when available
- Pass MasterClient + FilerGroup to FilerClient via options
- Enable discovery with 5-minute refresh interval
- Log whether discovery is enabled or disabled

**Credential Store:**
- Already correctly uses filerClient.GetCurrentFiler via function
- This provides HA without tight coupling to FilerClient struct
- Function-based design is clean and thread-safe

Discovery Flow:
1. S3 command reads filer config → gets masters + filer group
2. S3ApiServer creates MasterClient from masters
3. FilerClient uses MasterClient to query for peer filers
4. Background goroutine refreshes peer list every 5 minutes
5. Credential store uses GetCurrentFiler to get active filer

Now filer discovery actually works!

* Use S3 endpoint in multipart Location instead of filer address

* Add multi-filer failover to ReadFilerConf

* Address CodeRabbit review: fix buffer reuse and improve lock safety

Address two code review suggestions:

1. **Fix buffer reuse in ReadFilerConfFromFilers**:
   - Use local []byte data instead of shared buffer
   - Prevents partial data from failed attempts affecting successful reads
   - Creates fresh buffer inside callback for masterClient path
   - More robust to future changes in read helpers

2. **Improve lock safety in FilerClient**:
   - Add *WithHealth variants that accept health pointer
   - Get health pointer while holding lock, then release before calling
   - Eliminates potential for lock confusion (though no actual deadlock existed)
   - Clearer separation: lock for data access, atomics for health ops

Changes:
- ReadFilerConfFromFilers: var data []byte, create buf inside callback
- shouldSkipUnhealthyFilerWithHealth(health *filerHealth)
- recordFilerSuccessWithHealth(health *filerHealth)
- recordFilerFailureWithHealth(health *filerHealth)
- Keep old functions for backward compatibility (marked deprecated)
- Update LookupVolumeIds to use WithHealth variants

Benefits:
- More robust multi-filer configuration reading
- Clearer lock vs atomic operation boundaries
- No lock held during health checks (even though atomics don't block)
- Better code organization and maintainability

* add constant

* Fix IAM manager and post policy to use current active filer

* Fix critical race condition and goroutine leak

* Update weed/s3api/filer_multipart.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Fix compilation error and address code review suggestions

Address remaining unresolved comments:

1. **Fix compilation error**: Add missing net/url import
   - filer_multipart.go used url.PathEscape without import
   - Added "net/url" to imports

2. **Fix Location URL formatting** (all 4 occurrences):
   - Add missing slash between bucket and key
   - Use url.PathEscape for bucket names
   - Use urlPathEscape for object keys
   - Handles special characters in bucket/key names
   - Before: http://host/bucketkey
   - After: http://host/bucket/key (properly escaped)

3. **Optimize discovery loop** (O(N*M) → O(N+M)):
   - Use map for existing filers (O(1) lookup)
   - Reduces time holding write lock
   - Better performance with many filers
   - Before: Nested loop for each discovered filer
   - After: Build map once, then O(1) lookups

Changes:
- filer_multipart.go: Import net/url, fix all Location URLs
- filer_client.go: Use map for efficient filer discovery

Benefits:
- Compiles successfully
- Proper URL encoding (handles spaces, special chars)
- Faster discovery with less lock contention
- Production-ready URL formatting

* Fix race conditions and make Close() idempotent

Address CodeRabbit review #3512078995:

1. **Critical: Fix unsynchronized read in error message**
   - Line 584 read len(fc.filerAddresses) without lock
   - Race with refreshFilerList appending to slice
   - Fixed: Take RLock to read length safely
   - Prevents race detector warnings

2. **Important: Make Close() idempotent**
   - Closing already-closed channel panics
   - Can happen with layered cleanup in shutdown paths
   - Fixed: Use sync.Once to ensure single close
   - Safe to call Close() multiple times now

3. **Nitpick: Add warning for empty filer address**
   - getFilerAddress() can return empty string
   - Helps diagnose unexpected state
   - Added: Warning log when no filers available

4. **Nitpick: Guard deprecated index-based helpers**
   - shouldSkipUnhealthyFiler, recordFilerSuccess/Failure
   - Accessed filerHealth without lock (races with discovery)
   - Fixed: Take RLock and check bounds before array access
   - Prevents index out of bounds and races

Changes:
- filer_client.go:
  - Add closeDiscoveryOnce sync.Once field
  - Use Do() in Close() for idempotent channel close
  - Add RLock guards to deprecated index-based helpers
  - Add bounds checking to prevent panics
  - Synchronized read of filerAddresses length in error

- s3api_server.go:
  - Add warning log when getFilerAddress returns empty

Benefits:
- No race conditions (passes race detector)
- No panic on double-close
- Better error diagnostics
- Safe with discovery enabled
- Production-hardened shutdown logic

* Fix hardcoded http scheme and add panic recovery

Address CodeRabbit review #3512114811:

1. **Major: Fix hardcoded http:// scheme in Location URLs**
   - Location URLs always used http:// regardless of client connection
   - HTTPS clients got http:// URLs (incorrect)
   - Fixed: Detect scheme from request
   - Check X-Forwarded-Proto header (for proxies) first
   - Check r.TLS != nil for direct HTTPS
   - Fallback to http for plain connections
   - Applied to all 4 CompleteMultipartUploadResult locations

2. **Major: Add panic recovery to discovery goroutine**
   - Long-running background goroutine could crash entire process
   - Panic in refreshFilerList would terminate program
   - Fixed: Add defer recover() with error logging
   - Goroutine failures now logged, not fatal

3. **Note: Close() idempotency already implemented**
   - Review flagged as duplicate issue
   - Already fixed in commit 3d7a65c7e
   - sync.Once (closeDiscoveryOnce) prevents double-close panic
   - Safe to call Close() multiple times

Changes:
- filer_multipart.go:
  - Add getRequestScheme() helper function
  - Update all 4 Location URLs to use dynamic scheme
  - Format: scheme://host/bucket/key (was: http://...)

- filer_client.go:
  - Add panic recovery to discoverFilers()
  - Log panics instead of crashing

Benefits:
- Correct scheme (https/http) in Location URLs
- Works behind proxies (X-Forwarded-Proto)
- No process crashes from discovery failures
- Production-hardened background goroutine
- Proper AWS S3 API compliance
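
A sketch of the scheme detection described above (function name from the change list):

  func getRequestScheme(r *http.Request) string {
      // Prefer the proxy-supplied header, then direct TLS, then plain HTTP.
      if proto := r.Header.Get("X-Forwarded-Proto"); proto != "" {
          return proto
      }
      if r.TLS != nil {
          return "https"
      }
      return "http"
  }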

* filer: add ConcurrentFileUploadLimit option to limit number of concurrent uploads

This adds a new configuration option ConcurrentFileUploadLimit that limits
the number of concurrent file uploads based on file count, complementing
the existing ConcurrentUploadLimit which limits based on total data size.

This addresses an OOM vulnerability where requests with missing/zero
Content-Length headers could bypass the size-based rate limiter.

Changes:
- Add ConcurrentFileUploadLimit field to FilerOption
- Add inFlightUploads counter to FilerServer
- Update upload handler to check both size and count limits
- Add -concurrentFileUploadLimit command line flag (default: 0 = unlimited)

Fixes #7529
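
A sketch of the count-based guard alongside the size-based limiter (field and option names assumed):

  atomic.AddInt64(&fs.inFlightUploads, 1)
  defer atomic.AddInt64(&fs.inFlightUploads, -1)
  if limit := fs.option.ConcurrentFileUploadLimit; limit > 0 &&
      atomic.LoadInt64(&fs.inFlightUploads) > limit {
      w.WriteHeader(http.StatusServiceUnavailable) // 503, per the final commit
      return
  }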

* s3: add ConcurrentFileUploadLimit option to limit number of concurrent uploads

This adds a new configuration option ConcurrentFileUploadLimit that limits
the number of concurrent file uploads based on file count, complementing
the existing ConcurrentUploadLimit which limits based on total data size.

This addresses an OOM vulnerability where requests with missing/zero
Content-Length headers could bypass the size-based rate limiter.

Changes:
- Add ConcurrentUploadLimit and ConcurrentFileUploadLimit fields to S3ApiServerOption
- Add inFlightDataSize, inFlightUploads, and inFlightDataLimitCond to S3ApiServer
- Add s3a reference to CircuitBreaker for upload limiting
- Enhance CircuitBreaker.Limit() to apply upload limiting for write actions
- Add -concurrentUploadLimitMB and -concurrentFileUploadLimit command line flags
- Add s3.concurrentUploadLimitMB and s3.concurrentFileUploadLimit flags to filer command

The upload limiting is integrated into the existing CircuitBreaker.Limit()
function, avoiding creation of new wrapper functions and reusing the existing
handler registration pattern.

Fixes #7529

* server: add missing concurrentFileUploadLimit flags for server command

The server command was missing the initialization of concurrentFileUploadLimit
flags for both filer and S3, causing a nil pointer dereference when starting
the server in combined mode.

This adds:
- filer.concurrentFileUploadLimit flag to server command
- s3.concurrentUploadLimitMB flag to server command
- s3.concurrentFileUploadLimit flag to server command

Fixes the panic: runtime error: invalid memory address or nil pointer dereference
at filer.go:332

* http status 503

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-26 12:07:54 -08:00