Commit Graph

133 Commits

Chris Lu
06391701ed Add AssumeRole and AssumeRoleWithLDAPIdentity STS actions (#8003)
* test: add integration tests for AssumeRole and AssumeRoleWithLDAPIdentity STS actions

- Add s3_sts_assume_role_test.go with comprehensive tests for AssumeRole:
  * Parameter validation (missing RoleArn, RoleSessionName, invalid duration)
  * AWS SigV4 authentication with valid/invalid credentials
  * Temporary credential generation and usage

- Add s3_sts_ldap_test.go with tests for AssumeRoleWithLDAPIdentity:
  * Parameter validation (missing LDAP credentials, RoleArn)
  * LDAP authentication scenarios (valid/invalid credentials)
  * Integration with LDAP server (when configured)

- Update Makefile with new test targets:
  * test-sts: run all STS tests
  * test-sts-assume-role: run AssumeRole tests only
  * test-sts-ldap: run LDAP STS tests only
  * test-sts-suite: run tests with full service lifecycle

- Enhance setup_all_tests.sh:
  * Add OpenLDAP container setup for LDAP testing
  * Create test LDAP users (testuser, ldapadmin)
  * Set LDAP environment variables for tests
  * Update cleanup to remove LDAP container

- Fix setup_keycloak.sh:
  * Enable verbose error logging for realm creation
  * Improve error diagnostics

Tests use a fail-fast approach (t.Fatal) when the server is not configured,
ensuring clear feedback when infrastructure is missing.

* feat: implement AssumeRole and AssumeRoleWithLDAPIdentity STS actions

Implement two new STS actions to match MinIO's STS feature set:

**AssumeRole Implementation:**
- Add handleAssumeRole with full AWS SigV4 authentication
- Integrate with existing IAM infrastructure via verifyV4Signature
- Validate required parameters (RoleArn, RoleSessionName)
- Validate DurationSeconds (900-43200 seconds range)
- Generate temporary credentials with expiration
- Return AWS-compatible XML response

**AssumeRoleWithLDAPIdentity Implementation:**
- Add handleAssumeRoleWithLDAPIdentity handler (stub)
- Validate LDAP-specific parameters (LDAPUsername, LDAPPassword)
- Validate common STS parameters (RoleArn, RoleSessionName, DurationSeconds)
- Return proper error messages for missing LDAP provider
- Ready for LDAP provider integration

**Routing Fixes:**
- Add explicit routes for AssumeRole and AssumeRoleWithLDAPIdentity
- Prevent IAM handler from intercepting authenticated STS requests
- Ensure proper request routing priority

**Handler Infrastructure:**
- Add IAM field to STSHandlers for SigV4 verification
- Update NewSTSHandlers to accept IAM reference
- Add STS-specific error codes and response types
- Implement writeSTSErrorResponse for AWS-compatible errors

The AssumeRole action is fully functional and tested.
AssumeRoleWithLDAPIdentity requires LDAP provider implementation.

* fix: update IAM matcher to exclude STS actions from interception

Update the IAM handler matcher to check for STS actions (AssumeRole,
AssumeRoleWithWebIdentity, AssumeRoleWithLDAPIdentity) and exclude them
from IAM handler processing. This allows STS requests to be handled by
the STS fallback handler even when they include AWS SigV4 authentication.

The matcher now parses the form data to check the Action parameter and
returns false for STS actions, ensuring they are routed to the correct
handler.

Note: This is a work-in-progress fix. Tests are still showing some
routing issues that need further investigation.

* fix: address PR review security issues for STS handlers

This commit addresses all critical security issues from PR review:

Security Fixes:
- Use crypto/rand for cryptographically secure credential generation
  instead of time.Now().UnixNano() (fixes predictable credentials)
- Add sts:AssumeRole permission check via VerifyActionPermission to
  prevent unauthorized role assumption
- Generate proper session tokens using crypto/rand instead of
  placeholder strings
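
Secure generation with crypto/rand, as described above, can be sketched like this; the `generateSecureCredentials` name matches the commit, but the key shapes and helper are illustrative, not the exact implementation:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// randomHex returns nBytes of cryptographically secure randomness as hex,
// replacing predictable time-seeded values.
func randomHex(nBytes int) (string, error) {
	buf := make([]byte, nBytes)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return hex.EncodeToString(buf), nil
}

// generateSecureCredentials produces a temporary access key, secret key,
// and session token; lengths here are illustrative.
func generateSecureCredentials() (accessKey, secretKey, sessionToken string, err error) {
	if accessKey, err = randomHex(10); err != nil { // 20 hex chars
		return
	}
	if secretKey, err = randomHex(20); err != nil { // 40 hex chars
		return
	}
	sessionToken, err = randomHex(32) // 64 hex chars
	return
}

func main() {
	ak, sk, st, err := generateSecureCredentials()
	fmt.Println(len(ak), len(sk), len(st), err)
}
```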

Code Quality Improvements:
- Refactor DurationSeconds parsing into reusable parseDurationSeconds()
  helper function used by all three STS handlers
- Create generateSecureCredentials() helper for consistent and secure
  temporary credential generation
- Fix iamMatcher to check query string as fallback when Action not
  found in form data
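
A minimal sketch of the shared `parseDurationSeconds` helper, assuming the AWS-compatible 900-43200 second range from the earlier commit (the exact signature in SeaweedFS may differ):

```go
package main

import (
	"fmt"
	"strconv"
)

const (
	minDurationSeconds = 900   // 15 minutes, AWS STS minimum
	maxDurationSeconds = 43200 // 12 hours, AWS STS maximum
)

// parseDurationSeconds parses the optional DurationSeconds form value and
// enforces the AWS-compatible range, falling back to a default when unset.
func parseDurationSeconds(value string, defaultSeconds int) (int, error) {
	if value == "" {
		return defaultSeconds, nil
	}
	n, err := strconv.Atoi(value)
	if err != nil {
		return 0, fmt.Errorf("DurationSeconds must be an integer: %w", err)
	}
	if n < minDurationSeconds || n > maxDurationSeconds {
		return 0, fmt.Errorf("DurationSeconds %d outside allowed range [%d, %d]",
			n, minDurationSeconds, maxDurationSeconds)
	}
	return n, nil
}

func main() {
	d, err := parseDurationSeconds("3600", 3600)
	fmt.Println(d, err)
	_, err = parseDurationSeconds("100", 3600)
	fmt.Println(err != nil)
}
```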

LDAP Provider Implementation:
- Add go-ldap/ldap/v3 dependency
- Create LDAPProvider implementing IdentityProvider interface with
  full LDAP authentication support (connect, bind, search, groups)
- Update ProviderFactory to create real LDAP providers
- Wire LDAP provider into AssumeRoleWithLDAPIdentity handler

Test Infrastructure:
- Add LDAP user creation verification step in setup_all_tests.sh

* fix: address PR feedback (Round 2) - config validation & provider improvements

- Implement `validateLDAPConfig` in `ProviderFactory`
- Improve `LDAPProvider.Initialize`:
  - Support `connectionTimeout` parsing (string/int/float) from config map
  - Warn if `BindDN` is present but `BindPassword` is empty
- Improve `LDAPProvider.GetUserInfo`:
  - Add fallback to `searchUserGroups` if `memberOf` returns no groups (consistent with Authenticate)

* fix: address PR feedback (Round 3) - LDAP connection improvements & build fix

- Improve `LDAPProvider` connection handling:
  - Use `net.Dialer` with configured timeout for connection establishment
  - Enforce TLS 1.2+ (`MinVersion: tls.VersionTLS12`) for both LDAPS and StartTLS
- Fix build error in `s3api_sts.go` (format verb for ErrorCode)

* fix: address PR feedback (Round 4) - LDAP hardening, Authz check & Routing fix

- LDAP Provider Hardening:
  - Prevent re-initialization
  - Enforce single user match in `GetUserInfo` (was explicit only in Authenticate)
  - Ensure connection closure if StartTLS fails
- STS Handlers:
  - Add robust provider detection using type assertion
  - **Security**: Implement authorization check (`VerifyActionPermission`) after LDAP authentication
- Routing:
  - Update tests to reflect that STS actions are handled by STS handler, not generic IAM

* fix: address PR feedback (Round 5) - JWT tokens, ARN formatting, PrincipalArn

CRITICAL FIXES:
- Replace standalone credential generation with STS service JWT tokens
  - handleAssumeRole now generates proper JWT session tokens
  - handleAssumeRoleWithLDAPIdentity now generates proper JWT session tokens
  - Session tokens can be validated across distributed instances

- Fix ARN formatting in responses
  - Extract role name from ARN using utils.ExtractRoleNameFromArn()
  - Prevents malformed ARNs like "arn:aws:sts::assumed-role/arn:aws:iam::..."

- Add configurable AccountId for federated users
  - Add AccountId field to STSConfig (defaults to "111122223333")
  - PrincipalArn now uses configured account ID instead of hardcoded "aws"
  - Enables proper trust policy validation

IMPROVEMENTS:
- Sanitize LDAP authentication error messages (don't leak internal details)
- Remove duplicate comment in provider detection
- Add utils import for ARN parsing utilities

* feat: implement LDAP connection pooling to prevent resource exhaustion

PERFORMANCE IMPROVEMENT:
- Add connection pool to LDAPProvider (default size: 10 connections)
- Reuse LDAP connections across authentication requests
- Prevent file descriptor exhaustion under high load

IMPLEMENTATION:
- connectionPool struct with channel-based connection management
- getConnection(): retrieves from pool or creates new connection
- returnConnection(): returns healthy connections to pool
- createConnection(): establishes new LDAP connection with TLS support
- Close(): cleanup method to close all pooled connections
- Connection health checking (IsClosing()) before reuse

BENEFITS:
- Reduced connection overhead (no TCP handshake per request)
- Better resource utilization under load
- Prevents "too many open files" errors
- Non-blocking pool operations (creates new conn if pool empty)
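
The channel-based design above, including the health check on reuse and the atomic closed flag added in a later commit, can be sketched as follows. The `conn` type stands in for an LDAP connection; all names are illustrative:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// conn stands in for an LDAP connection; IsClosing mirrors the health
// check mentioned above.
type conn struct{ closed bool }

func (c *conn) IsClosing() bool { return c.closed }
func (c *conn) Close()          { c.closed = true }

type connPool struct {
	conns  chan *conn
	closed atomic.Bool
	dial   func() (*conn, error)
}

func newConnPool(size int, dial func() (*conn, error)) *connPool {
	return &connPool{conns: make(chan *conn, size), dial: dial}
}

// Get retrieves a healthy connection from the pool, or dials a new one
// without blocking when the pool is empty.
func (p *connPool) Get() (*conn, error) {
	select {
	case c := <-p.conns:
		if c.IsClosing() { // unhealthy: drop it and dial fresh
			c.Close()
			return p.dial()
		}
		return c, nil
	default: // pool empty: never block
		return p.dial()
	}
}

// Put returns a healthy connection to the pool, closing it instead when
// the pool is full, closed, or the connection is unhealthy.
func (p *connPool) Put(c *conn) {
	if p.closed.Load() || c.IsClosing() {
		c.Close()
		return
	}
	select {
	case p.conns <- c:
	default:
		c.Close()
	}
}

// Close drains and closes all pooled connections; the atomic flag makes a
// second Close a no-op instead of a race.
func (p *connPool) Close() {
	if !p.closed.CompareAndSwap(false, true) {
		return
	}
	close(p.conns)
	for c := range p.conns {
		c.Close()
	}
}

func main() {
	p := newConnPool(2, func() (*conn, error) { return &conn{}, nil })
	c, _ := p.Get() // pool empty: dials
	p.Put(c)        // returned to pool
	c2, _ := p.Get()
	fmt.Println(c == c2) // reused
	p.Close()
}
```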

* fix: correct TokenGenerator access in STS handlers

CRITICAL FIX:
- Make TokenGenerator public in STSService (was private tokenGenerator)
- Update all references from Config.TokenGenerator to TokenGenerator
- Remove TokenGenerator from STSConfig (it belongs in STSService)

This fixes the "NotImplemented" errors in distributed and Keycloak tests.
The issue was that Round 5 changes tried to access Config.TokenGenerator
which didn't exist - TokenGenerator is a field in STSService, not STSConfig.

The TokenGenerator is properly initialized in STSService.Initialize() and
is now accessible for JWT token generation in AssumeRole handlers.

* fix: update tests to use public TokenGenerator field

Following the change to make TokenGenerator public in STSService,
this commit updates the test files to reference the correct public field name.
This resolves compilation errors in the IAM STS test suite.

* fix: update distributed tests to use valid Keycloak users

Updated s3_iam_distributed_test.go to use 'admin-user' and 'read-user'
which exist in the standard Keycloak setup provided by setup_keycloak.sh.
This resolves 'unknown test user' errors in distributed integration tests.

* fix: ensure iam_config.json exists in setup target for CI

The GitHub Actions workflow calls 'make setup' which was not creating
iam_config.json, causing the server to start without IAM integration
enabled (iamIntegration = nil), resulting in NotImplemented errors.

Now 'make setup' copies iam_config.local.json to iam_config.json if
it doesn't exist, ensuring IAM is properly configured in CI.

* fix(iam/ldap): fix connection pool race and rebind corruption

- Add atomic 'closed' flag to connection pool to prevent racing on Close()
- Rebind authenticated user connections back to service account before returning to pool
- Close connections on error instead of returning potentially corrupted state to pool

* fix(iam/ldap): populate standard TokenClaims fields in ValidateToken

- Set Subject, Issuer, Audience, IssuedAt, and ExpiresAt to satisfy the interface
- Use time.Time for timestamps as required by TokenClaims struct
- Default to 1 hour TTL for LDAP tokens

* fix(s3api): include account ID in STS AssumedRoleUser ARN

- Consistent with AWS, include the account ID in the assumed-role ARN
- Use the configured account ID from STS service if available, otherwise default to '111122223333'
- Apply to both AssumeRole and AssumeRoleWithLDAPIdentity handlers
- Also update .gitignore to ignore IAM test environment files

* refactor(s3api): extract shared STS credential generation logic

- Move common logic for session claims and credential generation to prepareSTSCredentials
- Update handleAssumeRole and handleAssumeRoleWithLDAPIdentity to use the helper
- Remove stale comments referencing outdated line numbers

* feat(iam/ldap): make pool size configurable and add audience support

- Add PoolSize to LDAPConfig (default 10)
- Add Audience to LDAPConfig to align with OIDC validation
- Update initialization and ValidateToken to use new fields

* update tests

* debug

* chore(iam): cleanup debug prints and fix test config port

* refactor(iam): use mapstructure for LDAP config parsing

* feat(sts): implement strict trust policy validation for AssumeRole

* test(iam): refactor STS tests to use AWS SDK signer

* test(s3api): implement ValidateTrustPolicyForPrincipal in MockIAMIntegration

* fix(s3api): ensure IAM matcher checks query string on ParseForm error

* fix(sts): use crypto/rand for secure credentials and extract constants

* fix(iam): fix ldap connection leaks and add insecure warning

* chore(iam): improved error wrapping and test parameterization

* feat(sts): add support for LDAPProviderName parameter

* Update weed/iam/ldap/ldap_provider.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update weed/s3api/s3api_sts.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix(sts): use STSErrSTSNotReady when LDAP provider is missing

* fix(sts): encapsulate TokenGenerator in STSService and add getter

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-12 10:45:24 -08:00
Chris Lu
217d8b9e0e Fix: ListObjectVersions delimiter support (#7987)
* Fix: Add delimiter support to ListObjectVersions with proper truncation

- Implemented delimiter support to group keys into CommonPrefixes
- Fixed critical truncation bug: now merges versions and common prefixes into single sorted list before truncation
- Ensures total items never exceed MaxKeys (prevents infinite pagination loops)
- Properly sets NextKeyMarker and NextVersionIdMarker for pagination
- Added integration tests in test/s3/versioning/s3_versioning_delimiter_test.go
- Verified behavior matches S3 API specification
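
The truncation fix above hinges on merging both result kinds before cutting at MaxKeys; a simplified sketch (names and the string-based entries are illustrative, not the SeaweedFS types):

```go
package main

import (
	"fmt"
	"sort"
)

// listEntry stands in for either an object version or a common prefix.
type listEntry struct {
	key      string
	isPrefix bool
}

// truncateMerged merges versions and common prefixes into one key-sorted
// list, then truncates at maxKeys so the total never exceeds the limit,
// returning the last kept key as the next marker.
func truncateMerged(versions, prefixes []string, maxKeys int) (out []listEntry, truncated bool, nextKeyMarker string) {
	for _, k := range versions {
		out = append(out, listEntry{key: k})
	}
	for _, p := range prefixes {
		out = append(out, listEntry{key: p, isPrefix: true})
	}
	sort.Slice(out, func(i, j int) bool { return out[i].key < out[j].key })
	if maxKeys <= 0 { // guard against maxKeys=0 (see the follow-up bullet)
		return nil, len(out) > 0, ""
	}
	if len(out) > maxKeys {
		out = out[:maxKeys]
		return out, true, out[len(out)-1].key
	}
	return out, false, ""
}

func main() {
	out, truncated, next := truncateMerged(
		[]string{"a.txt", "z.txt"}, []string{"photos/"}, 2)
	fmt.Println(len(out), truncated, next) // 2 true photos/
}
```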

* Fix: Add delimiter support to ListObjectVersions with proper truncation

- Implemented delimiter support to group keys into CommonPrefixes
- Fixed critical truncation bug: now merges versions and common prefixes before truncation
- Added safety guard for maxKeys=0 to prevent panics
- Condensed verbose comments for better readability
- Added robust Go integration tests with nil checks for AWS SDK pointers
- Verified behavior matches S3 API specification
- Resolved compilation error in integration tests
- Refined pagination comments and ensured exclusive KeyMarker behavior
- Refactored listObjectVersions into helper methods for better maintainability
2026-01-08 14:02:19 -08:00
promalert
9012069bd7 chore: execute goimports to format the code (#7983)
* chore: execute goimports to format the code

Signed-off-by: promalert <promalert@outlook.com>

* goimports -w .

---------

Signed-off-by: promalert <promalert@outlook.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
2026-01-07 13:06:08 -08:00
Chris Lu
a75cc09cdf test: fix EC integration test needle blob mismatch (#7972)
fix needle blob
2026-01-05 22:13:56 -08:00
Chris Lu
d2f0d6c03b go mod 2026-01-05 17:56:11 -08:00
Chris Lu
15ca301e43 Fix flaky EC integration tests by collecting server logs on failure (#7969)
* Fix flaky EC integration tests by collecting server logs on failure

The EC Integration Tests were experiencing flaky timeouts with errors like
"error reading from server: EOF" and master client reconnection attempts.
When tests failed, server logs were not collected, making debugging difficult.

Changes:
- Updated all test functions to use t.TempDir() instead of os.MkdirTemp()
  and manual cleanup. t.TempDir() automatically preserves directories when
  tests fail, ensuring logs are available for debugging.
- Modified GitHub Actions workflow to collect server logs from temp
  directories when tests fail, including master.log and volume*.log files.
- Added explicit log collection step that searches for test temp directories
  and copies them to artifacts for upload.

This will make debugging flaky test failures much easier by providing access
to the actual server logs showing what went wrong.

* Fix find command precedence in log collection

The -type d flag only applied to the first -name predicate because -o
has lower precedence than the implicit AND. Grouped the -name predicates
with escaped parentheses so -type d applies to all directory name patterns.
2026-01-05 12:05:31 -08:00
Chris Lu
ae9a943ef6 IAM: Add Service Account Support (#7744) (#7901)
* iam: add ServiceAccount protobuf schema

Add ServiceAccount message type to iam.proto with support for:
- Unique ID and parent user linkage
- Optional expiration timestamp
- Separate credentials (access key/secret)
- Action restrictions (subset of parent)
- Enable/disable status

This is the first step toward implementing issue #7744
(IAM Service Account Support).

* iam: add service account response types

Add IAM API response types for service account operations:
- ServiceAccountInfo struct for marshaling account details
- CreateServiceAccountResponse
- DeleteServiceAccountResponse
- ListServiceAccountsResponse
- GetServiceAccountResponse
- UpdateServiceAccountResponse

Also add type aliases in iamapi package for backwards compatibility.

Part of issue #7744 (IAM Service Account Support).

* iam: implement service account API handlers

Add CRUD operations for service accounts:
- CreateServiceAccount: Creates service account with ABIA key prefix
- DeleteServiceAccount: Removes service account and parent linkage
- ListServiceAccounts: Lists all or filtered by parent user
- GetServiceAccount: Retrieves service account details
- UpdateServiceAccount: Modifies status, description, expiration

Service accounts inherit parent user's actions by default and
support optional expiration timestamps.

Part of issue #7744 (IAM Service Account Support).

* sts: add AssumeRoleWithWebIdentity HTTP endpoint

Add STS API HTTP endpoint for AWS SDK compatibility:
- Create s3api_sts.go with HTTP handlers matching AWS STS spec
- Support AssumeRoleWithWebIdentity action with JWT token
- Return XML response with temporary credentials (AccessKeyId,
  SecretAccessKey, SessionToken) matching AWS format
- Register STS route at POST /?Action=AssumeRoleWithWebIdentity

This enables AWS SDKs (boto3, AWS CLI, etc.) to obtain temporary
S3 credentials using OIDC/JWT tokens.

Part of issue #7744 (IAM Service Account Support).

* test: add service account and STS integration tests

Add integration tests for new IAM features:

s3_service_account_test.go:
- TestServiceAccountLifecycle: Create, Get, List, Update, Delete
- TestServiceAccountValidation: Error handling for missing params

s3_sts_test.go:
- TestAssumeRoleWithWebIdentityValidation: Parameter validation
- TestAssumeRoleWithWebIdentityWithMockJWT: JWT token handling

Tests skip gracefully when SeaweedFS is not running or when IAM
features are not configured.

Part of issue #7744 (IAM Service Account Support).

* iam: address code review comments

- Add constants for service account ID and key lengths
- Use strconv.ParseInt instead of fmt.Sscanf for better error handling
- Allow clearing descriptions by checking key existence in url.Values
- Replace magic numbers (12, 20, 40) with named constants

Addresses review comments from gemini-code-assist[bot]

* test: add proper error handling in service account tests

Use require.NoError(t, err) for io.ReadAll and xml.Unmarshal
to prevent silent failures and ensure test reliability.

Addresses review comment from gemini-code-assist[bot]

* test: add proper error handling in STS tests

Use require.NoError(t, err) for io.ReadAll and xml.Unmarshal
to prevent silent failures and ensure test reliability.
Repeated this fix throughout the file.

Addresses review comment from gemini-code-assist[bot] in PR #7901.

* iam: address additional code review comments

- Specific error code mapping for STS service errors
- Distinguish between Sender and Receiver error types in STS responses
- Add nil checks for credentials in List/GetServiceAccount
- Validate expiration date is in the future
- Improve integration test error messages (include response body)
- Add credential verification step in service account tests

Addresses remaining review comments from gemini-code-assist[bot] across multiple files.

* iam: fix shared slice reference in service account creation

Copy parent's actions to create an independent slice for the service
account instead of sharing the underlying array. This prevents
unexpected mutations when the parent's actions are modified later.

Addresses review comment from coderabbitai[bot] in PR #7901.

* iam: remove duplicate unused constant

Removed redundant iamServiceAccountKeyPrefix as ServiceAccountKeyPrefix
is already defined and used.

Addresses remaining cleanup task.

* sts: document limitation of string-based error mapping

Added TODO comment explaining that the current string-based error
mapping approach is fragile and should be replaced with typed errors
from the STS service in a future refactoring.

This addresses the architectural concern raised in code review while
deferring the actual implementation to a separate PR to avoid scope
creep in the current service account feature addition.

* iam: fix remaining review issues

- Add future-date validation for expiration in UpdateServiceAccount
- Reorder tests so credential verification happens before deletion
- Fix compilation error by using correct JWT generation methods

Addresses final review comments from coderabbitai[bot].

* iam: fix service account access key length

The access key IDs were incorrectly generated with 24 characters
instead of the AWS-standard 20 characters. This was caused by
generating 20 random characters and then prepending the 4-character
ABIA prefix.

Fixed by subtracting the prefix length from AccessKeyLength, so the
final key is: ABIA (4 chars) + random (16 chars) = 20 chars total.

This ensures compatibility with S3 clients that validate key length.
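
The length arithmetic above can be sketched as follows; the character alphabet and function name are illustrative, while the ABIA prefix and 20-character total come from the commit:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

const (
	accessKeyLength      = 20     // AWS-standard access key ID length
	serviceAccountPrefix = "ABIA" // service account key prefix
)

// keyAlphabet is an illustrative character set for key material.
const keyAlphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"

// newServiceAccountAccessKey generates accessKeyLength - len(prefix)
// random characters so prefix + random totals exactly 20.
func newServiceAccountAccessKey() (string, error) {
	n := accessKeyLength - len(serviceAccountPrefix) // 16 random chars
	buf := make([]byte, n)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	for i := range buf {
		buf[i] = keyAlphabet[int(buf[i])%len(keyAlphabet)]
	}
	return serviceAccountPrefix + string(buf), nil
}

func main() {
	k, err := newServiceAccountAccessKey()
	fmt.Println(len(k), k[:4], err) // 20 ABIA <nil>
}
```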

* test: add comprehensive service account security tests

Added comprehensive integration tests for service account functionality:

- TestServiceAccountS3Access: Verify SA credentials work for S3 operations
- TestServiceAccountExpiration: Test expiration date validation and enforcement
- TestServiceAccountInheritedPermissions: Verify parent-child relationship
- TestServiceAccountAccessKeyFormat: Validate AWS-compatible key format (ABIA prefix, 20 char length)

These tests ensure SeaweedFS service accounts are compatible with AWS
conventions and provide robust security coverage.

* iam: remove unused UserAccessKeyPrefix constant

Code cleanup to remove unused constants.

* iam: remove unused iamCommonResponse type alias

Code cleanup to remove unused type aliases.

* iam: restore and use UserAccessKeyPrefix constant

Restored UserAccessKeyPrefix constant and updated s3api tests to use it
instead of hardcoded strings for better maintainability and consistency.

* test: improve error handling in service account security tests

Added explicit error checking for io.ReadAll and xml.Unmarshal in
TestServiceAccountExpiration to ensure failures are reported correctly and
cleanup is performed only when appropriate. Also added logging for failed
responses.

* test: use t.Cleanup for reliable resource cleanup

Replaced defer with t.Cleanup to ensure service account cleanup runs even
when require.NoError fails. Also switched from manual error checking to
require.NoError for more idiomatic testify usage.

* iam: add CreatedBy field and optimize identity lookups

- Added createdBy parameter to CreateServiceAccount to track who created each service account
- Extract creator identity from request context using GetIdentityNameFromContext
- Populate created_by field in ServiceAccount protobuf
- Added findIdentityByName helper function to optimize identity lookups
- Replaced nested loops with O(n) helper function calls in CreateServiceAccount and DeleteServiceAccount

This addresses code review feedback for better auditing and performance.

* iam: prevent user deletion when service accounts exist

Following AWS IAM behavior, prevent deletion of users that have active
service accounts. This ensures explicit cleanup and prevents orphaned
service account resources with invalid ParentUser references.

Users must delete all associated service accounts before deleting the
parent user, providing safer resource management.

* sts: enhance TODO with typed error implementation guidance

Updated TODO comment with detailed implementation approach for replacing
string-based error matching with typed errors using errors.Is(). This
provides a clear roadmap for a follow-up PR to improve error handling
robustness and maintainability.

* iam: add operational limits for service account creation

Added AWS IAM-compatible safeguards to prevent resource exhaustion:
- Maximum 100 service accounts per user (LimitExceededException)
- Maximum 1000 character description length (InvalidInputException)

These limits prevent accidental or malicious resource exhaustion while
not impacting legitimate use cases.

* iam: add missing operational limit constants

Added MaxServiceAccountsPerUser and MaxDescriptionLength constants that
were referenced in the previous commit but not defined.

* iam: enforce service account expiration during authentication

CRITICAL SECURITY FIX: Expired service account credentials were not being
rejected during authentication, allowing continued access after expiration.

Changes:
- Added Expiration field to Credential struct
- Populate expiration when loading service accounts from configuration
- Check expiration in all authentication paths (V2 and V4 signatures)
- Return ErrExpiredToken for expired credentials

This ensures expired service accounts are properly rejected at authentication
time, matching AWS IAM behavior and preventing unauthorized access.

* iam: fix error code for expired service account credentials

Use ErrAccessDenied instead of non-existent ErrExpiredToken for expired
service account credentials. This provides appropriate access denial for
expired credentials while maintaining AWS-compatible error responses.

* iam: fix remaining ErrExpiredToken references

Replace all remaining instances of non-existent ErrExpiredToken with
ErrAccessDenied for expired service account credentials.

* iam: apply AWS-standard key format to user access keys

Updated CreateAccessKey to generate AWS-standard 20-character access keys
with AKIA prefix for regular users, matching the format used for service
accounts. This ensures consistency across all access key types and full
AWS compatibility.

- Access keys: AKIA + 16 random chars = 20 total (was 21 chars, no prefix)
- Secret keys: 40 random chars (was 42, now matches AWS standard)
- Uses AccessKeyLength and UserAccessKeyPrefix constants

* sts: replace fragile string-based error matching with typed errors

Implemented robust error handling using typed errors and errors.Is() instead
of fragile strings.Contains() matching. This decouples the HTTP layer from
service implementation details and prevents errors from being miscategorized
if error messages change.

Changes:
- Added typed error variables to weed/iam/sts/constants.go:
  * ErrTypedTokenExpired
  * ErrTypedInvalidToken
  * ErrTypedInvalidIssuer
  * ErrTypedInvalidAudience
  * ErrTypedMissingClaims

- Updated STS service to wrap provider authentication errors with typed errors
- Replaced strings.Contains() with errors.Is() in HTTP layer for error checking
- Removed TODO comment as the improvement is now implemented

This makes error handling more maintainable and reliable.

* sts: eliminate all string-based error matching with provider-level typed errors

Completed the typed error implementation by adding provider-level typed errors
and updating provider implementations to return them. This eliminates ALL
fragile string matching throughout the entire error handling stack.

Changes:
- Added typed error definitions to weed/iam/providers/errors.go:
  * ErrProviderTokenExpired
  * ErrProviderInvalidToken
  * ErrProviderInvalidIssuer
  * ErrProviderInvalidAudience
  * ErrProviderMissingClaims

- Updated OIDC provider to wrap JWT validation errors with typed provider errors
- Replaced strings.Contains() with errors.Is() in STS service for error mapping
- Complete error chain: Provider -> STS -> HTTP layer, all using errors.Is()

This provides:
- Reliable error classification independent of error message content
- Type-safe error checking throughout the stack
- No order-dependent string matching
- Maintainable error handling that won't break with message changes

* oidc: use jwt.ErrTokenExpired instead of string matching

Replaced the last remaining string-based error check with the JWT library's
exported typed error. This makes the error detection independent of error
message content and more robust against library updates.

Changed from:
  strings.Contains(errMsg, "expired")
To:
  errors.Is(err, jwt.ErrTokenExpired)

This completes the elimination of ALL string-based error matching throughout
the entire authentication stack.

* iam: add description length validation to UpdateServiceAccount

Fixed inconsistency where UpdateServiceAccount didn't validate description
length against MaxDescriptionLength, allowing operational limits to be
bypassed during updates.

Now validates that updated descriptions don't exceed 1000 characters,
matching the validation in CreateServiceAccount.

* iam: refactor expiration check into helper method

Extracted duplicated credential expiration check logic into a helper method
to reduce code duplication and improve maintainability.

Added Credential.isCredentialExpired() method and replaced 5 instances of
inline expiration checks across auth_signature_v2.go and auth_signature_v4.go.

* iam: address critical Copilot security and consistency feedback

Fixed three critical issues identified by Copilot code review:

1. SECURITY: Prevent loading disabled service account credentials
   - Added check to skip disabled service accounts during credential loading
   - Disabled accounts can no longer authenticate

2. Add DurationSeconds validation for STS AssumeRoleWithWebIdentity
   - Enforce AWS-compatible range: 900-43200 seconds (15 min - 12 hours)
   - Returns proper error for out-of-range values

3. Fix expiration update consistency in UpdateServiceAccount
   - Added key existence check like Description field
   - Allows explicit clearing of expiration by setting to empty string
   - Distinguishes between "not updating" and "clearing expiration"

* sts: remove unused durationSecondsStr variable

Fixed build error from unused variable after refactoring duration parsing.

* iam: address remaining Copilot feedback and remove dead code

Completed remaining Copilot code review items:

1. Remove unused getPermission() method (dead code)
   - Method was defined but never called anywhere

2. Improve slice modification safety in DeleteServiceAccount
   - Replaced append-with-slice-operations with filter pattern
   - Avoids potential issues from mutating slice during iteration

3. Fix route registration order
   - Moved STS route registration BEFORE IAM route
   - Prevents IAM route from intercepting STS requests
   - More specific route (with query parameter) now registered first
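
The filter pattern from item 2 can be sketched in a few lines; `filterOut` and the IDs are illustrative:

```go
package main

import "fmt"

// filterOut appends the kept elements to a fresh slice instead of
// deleting from the slice while iterating it.
func filterOut(ids []string, drop string) []string {
	kept := make([]string, 0, len(ids))
	for _, id := range ids {
		if id != drop {
			kept = append(kept, id)
		}
	}
	return kept
}

func main() {
	accounts := []string{"sa-1", "sa-2", "sa-3"}
	fmt.Println(filterOut(accounts, "sa-2")) // [sa-1 sa-3]
}
```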

* iam: improve expiration validation and test cleanup robustness

Addressed additional Copilot feedback:

1. Make expiration validation more explicit
   - Added explicit check for negative values
   - Added comment clarifying that 0 is allowed to clear expiration
   - Improves code readability and intent

2. Fix test cleanup order in s3_service_account_test.go
   - Track created service accounts in a slice
   - Delete all service accounts before deleting parent user
   - Prevents DeleteConflictException during cleanup
   - More robust cleanup even if test fails mid-execution

Note: s3_service_account_security_test.go already had correct cleanup
order due to LIFO defer execution.

* test: remove redundant variable assignments

Removed duplicate assignments of createdSAId, createdAccessKeyId, and
createdSecretAccessKey on lines 148-150 that were already assigned on
lines 132-134.
2025-12-29 20:17:23 -08:00
Chris Lu
29a245052e tidy 2025-12-28 19:31:54 -08:00
Chris Lu
8d6bcddf60 Add S3 volume encryption support with -s3.encryptVolumeData flag (#7890)
* Add S3 volume encryption support with -s3.encryptVolumeData flag

This change adds volume-level encryption support for S3 uploads, similar
to the existing -filer.encryptVolumeData option. Each chunk is encrypted
with its own auto-generated CipherKey when the flag is enabled.

Changes:
- Add -s3.encryptVolumeData flag to weed s3, weed server, and weed mini
- Wire Cipher option through S3ApiServer and ChunkedUploadOption
- Add integration tests for multi-chunk range reads with encryption
- Tests verify encryption works across chunk boundaries

Usage:
  weed s3 -encryptVolumeData
  weed server -s3 -s3.encryptVolumeData
  weed mini -s3.encryptVolumeData

Integration tests:
  go test -v -tags=integration -timeout 5m ./test/s3/sse/...

* Add GitHub Actions CI for S3 volume encryption tests

- Add test-volume-encryption target to Makefile that starts server with -s3.encryptVolumeData
- Add s3-volume-encryption job to GitHub Actions workflow
- Tests run with integration build tag and 10m timeout
- Server logs uploaded on failure for debugging

* Fix S3 client credentials to use environment variables

The test was using hardcoded credentials "any"/"any" but the Makefile
sets AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY to "some_access_key1"/
"some_secret_key1". Updated getS3Client() to read from environment
variables with fallback to "any"/"any" for manual testing.

* Change bucket creation errors from skip to fatal

Tests should fail, not skip, when bucket creation fails. This ensures
that credential mismatches and other configuration issues are caught
rather than silently skipped.

* Make copy and multipart test jobs fail instead of succeed

Changed exit 0 to exit 1 for s3-sse-copy-operations and s3-sse-multipart
jobs. These jobs document known limitations but should fail to ensure
the issues are tracked and addressed, not silently ignored.

* Hardcode S3 credentials to match Makefile

Changed from environment variables to hardcoded credentials
"some_access_key1"/"some_secret_key1" to match the Makefile
configuration. This ensures tests work reliably.

* fix Double Encryption

* fix Chunk Size Mismatch

* Added IsCompressed

* is gzipped

* fix copying

* only perform HEAD request when len(cipherKey) > 0

* Revert "Make copy and multipart test jobs fail instead of succeed"

This reverts commit bc34a7eb3c103ae7ab2000da2a6c3925712eb226.

* fix security vulnerability

* fix security

* Update s3api_object_handlers_copy.go

* Update s3api_object_handlers_copy.go

* jwt to get content length
2025-12-27 00:09:14 -08:00
Chris Lu
5aa111708d grpc: reduce client idle pings to avoid ENHANCE_YOUR_CALM (#7885)
* grpc: reduce client idle pings to avoid ENHANCE_YOUR_CALM (too_many_pings)

* test: use context.WithTimeout and pb constants for keepalive

* test(kafka): use separate dial and client contexts in NewDirectBrokerClient

* test(kafka): fix client context usage in NewDirectBrokerClient
2025-12-26 10:58:18 -08:00
Chris Lu
7064ad420d Refactor S3 integration tests to use weed mini (#7877)
* Refactor S3 integration tests to use weed mini

* Fix weed mini flags for sse and parquet tests

* Fix IAM test startup: remove -iam.config flag from weed mini

* Enhance logging in IAM Makefile to debug startup failure

* Simplify weed mini flags and checks in S3 tests (IAM, Parquet, SSE, Copying)

* Simplify weed mini flags and checks in all S3 tests

* Fix IAM tests: use -s3.iam.config for weed mini

* Replace timeout command with portable loop in IAM Makefile

* Standardize portable loop-based readiness checks in all S3 Makefiles

* Define SERVER_DIR in retention Makefile

* Fix versioning and retention Makefiles: remove unsupported weed mini flags

* fix filer_group test

* fix cors

* emojis

* fix sse

* fix retention

* fixes

* fix

* fixes

* fix parquet

* fixes

* fix

* clean up

* avoid duplicated debug server

* Update .gitignore

* simplify

* clean up

* add credentials

* bind

* delay

* Update Makefile

* Update Makefile

* check ready

* delay

* update remote credentials

* Update Makefile

* clean up

* kill

* Update Makefile

* update credentials
2025-12-25 11:00:54 -08:00
Chris Lu
2f6aa98221 Refactor: Replace removeDuplicateSlashes with NormalizeObjectKey (#7873)
* Replace removeDuplicateSlashes with NormalizeObjectKey

Use s3_constants.NormalizeObjectKey instead of removeDuplicateSlashes in most places
for consistency. NormalizeObjectKey handles both duplicate slash removal and ensures
the path starts with '/', providing more complete normalization.

* Fix double slash issues after NormalizeObjectKey

After using NormalizeObjectKey, object keys have a leading '/'. This commit ensures:
- getVersionedObjectDir strips leading slash before concatenation
- getEntry calls receive names without leading slash
- String concatenation with '/' doesn't create '//' paths

This prevents path construction errors like:
  /buckets/bucket//object  (wrong)
  /buckets/bucket/object   (correct)
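The concatenation rule above can be sketched as a small helper: strip any leading slash from the object key, then join with exactly one '/'. Names are illustrative, not the actual SeaweedFS functions.

```go
package main

import (
	"fmt"
	"strings"
)

// joinBucketPath joins the buckets root, bucket name, and object key with
// exactly one '/' between each part, so /buckets/bucket//object can never
// be produced even when the key carries a leading slash.
func joinBucketPath(bucketsPath, bucket, object string) string {
	object = strings.TrimPrefix(object, "/")
	return bucketsPath + "/" + bucket + "/" + object
}

func main() {
	fmt.Println(joinBucketPath("/buckets", "bucket", "/object")) // /buckets/bucket/object
	fmt.Println(joinBucketPath("/buckets", "bucket", "object"))  // /buckets/bucket/object
}
```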

* ensure object key leading "/"

* fix compilation

* fix: Strip leading slash from object keys in S3 API responses

After introducing NormalizeObjectKey, all internal object keys have a
leading slash. However, S3 API responses must return keys without
leading slashes to match AWS S3 behavior.

Fixed in three functions:
- addVersion: Strip slash for version list entries
- processRegularFile: Strip slash for regular file entries
- processExplicitDirectory: Strip slash for directory entries

This ensures ListObjectVersions and similar APIs return keys like
'bar' instead of '/bar', matching S3 API specifications.

* fix: Normalize keyMarker for consistent pagination comparison

The S3 API provides keyMarker without a leading slash (e.g., 'object-001'),
but after introducing NormalizeObjectKey, all internal object keys have
leading slashes (e.g., '/object-001').

When comparing keyMarker < normalizedObjectKey in shouldSkipObjectForMarker,
the ASCII value of '/' (47) is less than 'o' (111), causing all objects
to be incorrectly skipped during pagination. This resulted in page 2 and
beyond returning 0 results.

Fix: Normalize the keyMarker when creating versionCollector so comparisons
work correctly with normalized object keys.

Fixes pagination tests:
- TestVersioningPaginationOver1000Versions
- TestVersioningPaginationMultipleObjectsManyVersions
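The byte-wise comparison at the heart of this bug can be shown directly: an un-normalized marker compares greater than every slash-prefixed key, so the "marker < key" test fails for the whole page.

```go
package main

import "fmt"

func main() {
	marker := "object-001" // as provided by the S3 API, no leading '/'
	key := "/object-002"   // internal normalized key with leading '/'

	// '/' (ASCII 47) sorts before 'o' (ASCII 111), so the raw marker is
	// lexicographically greater than every slash-prefixed key.
	fmt.Println(marker < key) // false: every object is skipped

	normalizedMarker := "/" + marker
	fmt.Println(normalizedMarker < key) // true: pagination resumes correctly
}
```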

* refactor: Change NormalizeObjectKey to return keys without leading slash

BREAKING STRATEGY CHANGE:
Previously, NormalizeObjectKey added a leading slash to all object keys,
which required stripping it when returning keys to S3 API clients and
caused complexity in marker normalization for pagination.

NEW STRATEGY:
- NormalizeObjectKey now returns keys WITHOUT leading slash (e.g., 'foo/bar' not '/foo/bar')
- This matches the S3 API format directly
- All path concatenations now explicitly add '/' between bucket and object
- No need to strip slashes in responses or normalize markers

Changes:
1. Modified NormalizeObjectKey to strip leading slash instead of adding it
2. Fixed all path concatenations to use:
   - BucketsPath + '/' + bucket + '/' + object
   instead of:
   - BucketsPath + '/' + bucket + object
3. Reverted response key stripping in:
   - addVersion()
   - processRegularFile()
   - processExplicitDirectory()
4. Reverted keyMarker normalization in findVersionsRecursively()
5. Updated matchesPrefixFilter() to work with keys without leading slash
6. Fixed paths in handlers:
   - s3api_object_handlers.go (GetObject, HeadObject, cacheRemoteObjectForStreaming)
   - s3api_object_handlers_postpolicy.go
   - s3api_object_handlers_tagging.go
   - s3api_object_handlers_acl.go
   - s3api_version_id.go (getVersionedObjectDir, getVersionIdFormat)
   - s3api_object_versioning.go (getObjectVersionList, updateLatestVersionAfterDeletion)

All versioning tests pass including pagination stress tests.

* adjust format

* Update post policy tests to match new NormalizeObjectKey behavior

- Update TestPostPolicyKeyNormalization to expect keys without leading slashes
- Update TestNormalizeObjectKey to expect keys without leading slashes
- Update TestPostPolicyFilenameSubstitution to expect keys without leading slashes
- Update path construction in tests to use new pattern: BucketsPath + '/' + bucket + '/' + object

* Fix ListObjectVersions prefix filtering

Remove leading slash addition to prefix parameter to allow correct filtering
of .versions directories when listing object versions with a specific prefix.

The prefix parameter should match entry paths relative to bucket root.
Adding a leading slash was breaking the prefix filter for paginated requests.

Fixes pagination issue where second page returned 0 versions instead of
continuing with remaining versions.

* no leading slash

* Fix urlEscapeObject to add leading slash for filer paths

NormalizeObjectKey now returns keys without leading slashes to match S3 API format.
However, urlEscapeObject is used for filer paths which require leading slashes.
Add leading slash back after normalization to ensure filer paths are correct.

Fixes TestS3ApiServer_toFilerPath test failures.

* adjust tests

* normalize

* Fix: Normalize prefixes and markers in LIST operations using NormalizeObjectKey

Ensure consistent key normalization across all S3 operations (GET, PUT, LIST).
Previously, LIST operations were not applying the same normalization rules
(handling backslashes, duplicate slashes, leading slashes) as GET/PUT operations.

Changes:
- Updated normalizePrefixMarker() to call NormalizeObjectKey for both prefix and marker
- This ensures prefixes with leading slashes, backslashes, or duplicate slashes are
  handled consistently with how object keys are normalized
- Fixes Parquet test failures where pads.write_dataset creates implicit directory
  structures that couldn't be discovered by subsequent LIST operations
- Added TestPrefixNormalizationInList and TestListPrefixConsistency tests

All existing LIST tests continue to pass with the normalization improvements.

* Add debugging logging to LIST operations to track prefix normalization

* Fix: Remove leading slash addition from GetPrefix to work with NormalizeObjectKey

The NormalizeObjectKey function removes leading slashes to match S3 API format
(e.g., 'foo/bar' not '/foo/bar'). However, GetPrefix was adding a leading slash
back, which caused LIST operations to fail with incorrect path handling.

Now GetPrefix only normalizes duplicate slashes without adding a leading slash,
which allows NormalizeObjectKey changes to work correctly for S3 LIST operations.

All Parquet integration tests now pass (20/20).

* Fix: Handle object paths without leading slash in checkDirectoryObject

NormalizeObjectKey() removes the leading slash to match S3 API format.
However, checkDirectoryObject() was assuming the object path has a leading
slash when processing directory markers (paths ending with '/').

Now we ensure the object has a leading slash before processing it for
filer operations.

Fixes implicit directory marker test (explicit_dir/) while keeping
Parquet integration tests passing (20/20).

All tests pass:
- Implicit directory tests: 6/6
- Parquet integration tests: 20/20

* Fix: Handle explicit directory markers with trailing slashes

Explicit directory markers created with put_object(Key='dir/', ...) are stored
in the filer with the trailing slash as part of the name. The checkDirectoryObject()
function now checks for both:
1. Explicit directories: lookup with trailing slash preserved (e.g., 'explicit_dir/')
2. Implicit directories: lookup without trailing slash (e.g., 'implicit_dir')

This ensures both types of directory markers are properly recognized.

All tests pass:
- Implicit directory tests: 6/6 (including explicit directory marker test)
- Parquet integration tests: 20/20
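The two-step lookup described above can be sketched with a map standing in for the filer; the function name and structure are hypothetical, not the real `checkDirectoryObject`.

```go
package main

import (
	"fmt"
	"strings"
)

// resolveDirectoryMarker tries the exact name with its trailing slash
// first (explicit marker stored as "dir/"), then falls back to the name
// without it (implicit directory). entries stands in for the filer lookup.
func resolveDirectoryMarker(entries map[string]bool, object string) (string, bool) {
	if !strings.HasSuffix(object, "/") {
		return "", false // only paths ending in '/' are directory markers
	}
	if entries[object] { // explicit marker, e.g. "explicit_dir/"
		return object, true
	}
	trimmed := strings.TrimSuffix(object, "/")
	if entries[trimmed] { // implicit directory, e.g. "implicit_dir"
		return trimmed, true
	}
	return "", false
}

func main() {
	entries := map[string]bool{"explicit_dir/": true, "implicit_dir": true}
	fmt.Println(resolveDirectoryMarker(entries, "explicit_dir/"))
	fmt.Println(resolveDirectoryMarker(entries, "implicit_dir/"))
	fmt.Println(resolveDirectoryMarker(entries, "missing/"))
}
```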

* Fix: Preserve trailing slash in NormalizeObjectKey

NormalizeObjectKey now preserves trailing slashes when normalizing object keys.
This is important for explicit directory markers like 'explicit_dir/' which rely
on the trailing slash to be recognized as directory objects.

The normalization process:
1. Notes if trailing slash was present
2. Removes duplicate slashes and converts backslashes
3. Removes leading slash for S3 API format
4. Restores trailing slash if it was in the original

This ensures explicit directory markers created with put_object(Key='dir/', ...)
are properly normalized and can be looked up by their exact name.

All tests pass:
- Implicit directory tests: 6/6
- Parquet integration tests: 20/20

* clean object

* Fix: Don't restore trailing slash if result is empty

When normalizing paths that are only slashes (e.g., '///', '/'), the function
should return an empty string, not a single slash. The fix ensures we only
restore the trailing slash if the result is non-empty.

This fixes the 'just_slashes' test case:
- Input: '///'
- Expected: ''
- Previous: '/'
- Fixed: ''

All tests now pass:
- Unit tests: TestNormalizeObjectKey (13/13)
- Implicit directory tests: 6/6
- Parquet integration tests: 20/20
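The normalization behavior described across the commits above (backslash conversion, duplicate-slash collapse, leading-slash removal, trailing-slash preservation, and the empty-result edge case) can be sketched end to end. This is a reading of the commit messages, so the real `NormalizeObjectKey` may differ in detail.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeObjectKey: note whether a trailing slash was present, convert
// backslashes, collapse duplicate slashes, drop the leading slash (S3 API
// format), and restore the trailing slash only when the result is
// non-empty, so inputs that are only slashes ("///") become "".
func normalizeObjectKey(key string) string {
	hadTrailingSlash := strings.HasSuffix(key, "/")
	key = strings.ReplaceAll(key, "\\", "/")
	for strings.Contains(key, "//") {
		key = strings.ReplaceAll(key, "//", "/")
	}
	key = strings.TrimPrefix(key, "/")
	key = strings.TrimSuffix(key, "/")
	if hadTrailingSlash && key != "" {
		key += "/"
	}
	return key
}

func main() {
	for _, k := range []string{"/foo//bar", "a\\b", "explicit_dir/", "///"} {
		fmt.Printf("%q -> %q\n", k, normalizeObjectKey(k))
	}
}
```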

* prefixEndsOnDelimiter

* Update s3api_object_handlers_list.go

* Update s3api_object_handlers_list.go

* handle create directory
2025-12-24 19:07:08 -08:00
Chris Lu
014027f75a Fix: Support object tagging in versioned buckets (Issue #7868) (#7871)
* Fix: Support object tagging in versioned buckets (Issue #7868)

This fix addresses the issue where setting tags on files in versioned buckets would fail with 'filer: no entry is found in filer store' error.

Changes:
- Updated GetObjectTaggingHandler to check versioning status and retrieve correct object versions
- Updated PutObjectTaggingHandler to properly locate and update tags on versioned objects
- Updated DeleteObjectTaggingHandler to delete tags from versioned objects
- Added proper handling for both specific versions and latest versions
- Added distinction between null versions (pre-versioning objects) and versioned objects

The fix follows the same versioning-aware pattern already implemented in ACL handlers.

Tests:
- Added comprehensive test suite for tagging operations on versioned buckets
- Tests cover PUT, GET, and DELETE tagging operations on specific versions and latest versions
- Tests verify tag isolation between different versions of the same object

* Fix: Ensure consistent directory path construction in tagging handlers

Changed directory path construction to match the pattern used in ACL handlers:
- Added missing '/' before object path when constructing .versions directory path
- This ensures compatibility with the filer's expected path structure
- Applied to both PutObjectTaggingHandler and DeleteObjectTaggingHandler

* Revert: Remove redundant slash in path construction - object already has leading slash from NormalizeObjectKey

* Fix: Remove redundant slashes in versioning path construction across handlers

- getVersionedObjectDir: object already starts with '/', no need for extra '/'
- ACL handlers: same pattern, fix both PutObjectAcl locations
- Ensures consistent path construction with object parameter normalization

* fix test compilation

* Add: Comprehensive ACL tests for versioned and non-versioned buckets

- Added s3_acl_versioning_test.go with 5 test cases covering:
  * GetObjectAcl on versioned buckets
  * GetObjectAcl on specific versions
  * PutObjectAcl on versioned buckets
  * PutObjectAcl on specific versions
  * Independent ACL management across versions

These tests were missing and would have caught the path construction
issues we just fixed in the ACL handler. Tests validate that ACL
operations work correctly on both versioned and non-versioned objects.

* Fix: Correct tagging versioning test file formatting

* fix: Update AWS SDK endpoint config and improve cleanup to handle delete markers

- Replace deprecated EndpointResolverWithOptions with BaseEndpoint in AWS SDK v2 client configuration
- Update cleanupTestBucket to properly delete both object versions and delete markers
- Apply changes to both ACL and tagging test files for consistency

* Fix S3 multi-delete for versioned objects

The bug was in getVersionedObjectDir() which was constructing paths without
a slash between the bucket and object key:

BEFORE (WRONG): /buckets/mybucket{key}.versions
AFTER (FIXED):  /buckets/mybucket/{key}/.versions

This caused version deletions to claim success but not actually delete files,
breaking S3 compatibility tests:
- test_versioning_multi_object_delete
- test_versioning_multi_object_delete_with_marker
- test_versioning_concurrent_multi_object_delete
- test_object_lock_multi_delete_object_with_retention

Added comprehensive test that reproduces the issue and verifies the fix.

* Remove emojis from test output
2025-12-24 13:09:08 -08:00
Chris Lu
683e3d06a4 go mod tidy 2025-12-22 18:48:13 -08:00
dependabot[bot]
ce71968bad chore(deps): bump golang.org/x/net from 0.47.0 to 0.48.0 (#7849)
* chore(deps): bump golang.org/x/net from 0.47.0 to 0.48.0

Bumps [golang.org/x/net](https://github.com/golang/net) from 0.47.0 to 0.48.0.
- [Commits](https://github.com/golang/net/compare/v0.47.0...v0.48.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.48.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* mod

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
2025-12-22 15:58:36 -08:00
dependabot[bot]
a898160e39 chore(deps): bump golang.org/x/crypto from 0.45.0 to 0.46.0 (#7847)
* chore(deps): bump golang.org/x/crypto from 0.45.0 to 0.46.0

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.45.0 to 0.46.0.
- [Commits](https://github.com/golang/crypto/compare/v0.45.0...v0.46.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.46.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* mod

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
2025-12-22 15:58:18 -08:00
chrislu
4a764dbb37 fmt 2025-12-19 15:33:16 -08:00
Chris Lu
4aa50bfa6a fix: EC rebalance fails with replica placement 000 (#7812)
* fix: EC rebalance fails with replica placement 000

This PR fixes several issues with EC shard distribution:

1. Pre-flight check before EC encoding
   - Verify target disk type has capacity before encoding starts
   - Prevents encoding shards only to fail during rebalance
   - Shows helpful error when wrong diskType is specified (e.g., ssd when volumes are on hdd)

2. Fix EC rebalance with replica placement 000
   - When DiffRackCount=0, shards should be distributed freely across racks
   - The '000' placement means 'no volume replication needed' because EC provides redundancy
   - Previously all racks were skipped with error 'shards X > replica placement limit (0)'

3. Add unit tests for EC rebalance slot calculation
   - TestECRebalanceWithLimitedSlots: documents the limited slots scenario
   - TestECRebalanceZeroFreeSlots: reproduces the 0 free slots error

4. Add Makefile for manual EC testing
   - make setup: start cluster and populate data
   - make shell: open weed shell for EC commands
   - make clean: stop cluster and cleanup

* fix: default -rebalance to true for ec.encode

The -rebalance flag was defaulting to false, which meant ec.encode would
only print shard moves but not actually execute them. This is a poor
default since the whole point of EC encoding is to distribute shards
across servers for fault tolerance.

Now -rebalance defaults to true, so shards are actually distributed
after encoding. Users can use -rebalance=false if they only want to
see what would happen without making changes.

* test/erasure_coding: improve Makefile safety and docs

- Narrow pkill pattern for volume servers to use TEST_DIR instead of
  port pattern, avoiding accidental kills of unrelated SeaweedFS processes
- Document external dependencies (curl, jq) in header comments

* shell: refactor buildRackWithEcShards to reuse buildEcShards

Extract common shard bit construction logic to avoid duplication
between buildEcShards and buildRackWithEcShards helper functions.

* shell: update test for EC replication 000 behavior

When DiffRackCount=0 (replication "000"), EC shards should be
distributed freely across racks since erasure coding provides its
own redundancy. Update test expectation to reflect this behavior.

* erasure_coding: add distribution package for proportional EC shard placement

Add a new reusable package for EC shard distribution that:
- Supports configurable EC ratios (not hard-coded 10+4)
- Distributes shards proportionally based on replication policy
- Provides fault tolerance analysis
- Prefers moving parity shards to keep data shards spread out

Key components:
- ECConfig: Configurable data/parity shard counts
- ReplicationConfig: Parsed XYZ replication policy
- ECDistribution: Target shard counts per DC/rack/node
- Rebalancer: Plans shard moves with parity-first strategy

This enables seaweed-enterprise custom EC ratios and weed worker
integration while maintaining a clean, testable architecture.

* shell: integrate distribution package for EC rebalancing

Add shell wrappers around the distribution package:
- ProportionalECRebalancer: Plans moves using distribution.Rebalancer
- NewProportionalECRebalancerWithConfig: Supports custom EC configs
- GetDistributionSummary/GetFaultToleranceAnalysis: Helper functions

The shell layer converts between EcNode types and the generic
TopologyNode types used by the distribution package.

* test setup

* ec: improve data and parity shard distribution across racks

- Add shardsByTypePerRack helper to track data vs parity shards
- Rewrite doBalanceEcShardsAcrossRacks for two-pass balancing:
  1. Balance data shards (0-9) evenly, max ceil(10/6)=2 per rack
  2. Balance parity shards (10-13) evenly, max ceil(4/6)=1 per rack
- Add balanceShardTypeAcrossRacks for generic shard type balancing
- Add pickRackForShardType to select destination with room for type
- Add unit tests for even data/parity distribution verification

This ensures even read load during normal operation by spreading
both data and parity shards across all available racks.
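The per-rack ceilings quoted above are plain ceiling division; a sketch (helper name illustrative):

```go
package main

import "fmt"

// perRackCap computes the even-distribution ceiling used by the two-pass
// balancing: ceil(shardCount / rackCount) shards of a given type per rack.
func perRackCap(shardCount, rackCount int) int {
	if rackCount == 0 {
		return 0
	}
	return (shardCount + rackCount - 1) / rackCount
}

func main() {
	fmt.Println(perRackCap(10, 6)) // data shards: ceil(10/6) = 2 per rack
	fmt.Println(perRackCap(4, 6))  // parity shards: ceil(4/6) = 1 per rack
}
```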

* ec: make data/parity shard counts configurable in ecBalancer

- Add dataShardCount and parityShardCount fields to ecBalancer struct
- Add getDataShardCount() and getParityShardCount() methods with defaults
- Replace direct constant usage with configurable methods
- Fix unused variable warning for parityPerRack

This allows seaweed-enterprise to use custom EC ratios while
defaulting to standard 10+4 scheme.

* Address PR 7812 review comments

Makefile improvements:
- Save PIDs for each volume server for precise termination
- Use PID-based killing in stop target with pkill fallback
- Use more specific pkill patterns with TEST_DIR paths

Documentation:
- Document jq dependency in README.md

Rebalancer fix:
- Fix duplicate shard count updates in applyMovesToAnalysis
- All planners (DC/rack/node) update counts inline during planning
- Remove duplicate updates from applyMovesToAnalysis to avoid double-counting

* test/erasure_coding: use mktemp for test file template

Use mktemp instead of hardcoded /tmp/testfile_template.bin path
to provide better isolation for concurrent test runs.
2025-12-19 13:29:12 -08:00
Chris Lu
ed1da07665 Add consistent -debug and -debug.port flags to commands (#7816)
* Add consistent -debug and -debug.port flags to commands

Add -debug and -debug.port flags to weed master, weed volume, weed s3,
weed mq.broker, and weed filer.sync commands for consistency with
weed filer.

When -debug is enabled, an HTTP server starts on the specified port
(default 6060) serving runtime profiling data at /debug/pprof/.

For mq.broker, replaced the older -port.pprof flag with the new
-debug and -debug.port pattern for consistency.

* Update weed/util/grace/pprof.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-12-18 17:44:36 -08:00
Chris Lu
bccef78082 fix: reduce N+1 queries in S3 versioned object list operations (#7814)
* fix: achieve single-scan efficiency for S3 versioned object listing

When listing objects in a versioning-enabled bucket, the original code
triggered multiple getEntry calls per versioned object (up to 12 with
retries), causing excessive 'find' operations visible in Grafana and
leading to high memory usage.

This fix achieves single-scan efficiency by caching list metadata
(size, ETag, mtime, owner) directly in the .versions directory:

1. Add new Extended keys for caching list metadata in .versions dir
2. Update upload/copy/multipart paths to cache metadata when creating versions
3. Update getLatestVersionEntryFromDirectoryEntry to use cached metadata
   (zero getEntry calls when cache is available)
4. Update updateLatestVersionAfterDeletion to maintain cache consistency

Performance improvement for N versioned objects:
- Before: N×1 to N×12 find operations per list request
- After: 0 extra find operations (all metadata from single scan)

This matches the efficiency of normal (non-versioned) object listing.
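The clear-then-set caching idea (refined further in the later commits of this PR) can be sketched over a plain Extended map. The key names below are hypothetical; the commit does not show the actual constants.

```go
package main

import (
	"fmt"
	"strconv"
)

// Hypothetical extended-attribute key names for the cached list metadata.
var cachedKeys = []string{"x-cache-size", "x-cache-etag", "x-cache-mtime", "x-cache-owner"}

// setCachedListMetadata clears all previously cached keys first, so a new
// version that lacks a field (e.g. owner) cannot leave stale data behind,
// then writes fresh values onto the .versions directory entry's Extended map.
func setCachedListMetadata(extended map[string][]byte, size uint64, etag, owner string, mtime int64) {
	for _, k := range cachedKeys {
		delete(extended, k)
	}
	extended["x-cache-size"] = []byte(strconv.FormatUint(size, 10))
	extended["x-cache-etag"] = []byte(etag)
	extended["x-cache-mtime"] = []byte(strconv.FormatInt(mtime, 10))
	if owner != "" {
		extended["x-cache-owner"] = []byte(owner)
	}
}

func main() {
	ext := map[string][]byte{"x-cache-owner": []byte("alice")} // stale owner from old version
	setCachedListMetadata(ext, 1024, `"abc123"`, "", 1700000000)
	_, hasOwner := ext["x-cache-owner"]
	fmt.Println(string(ext["x-cache-size"]), hasOwner) // stale owner was cleared
}
```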

* Update s3api_object_versioning.go

* s3api: fix ETag handling for versioned objects and simplify delete marker creation

- Add Md5 attribute to synthetic logicalEntry for single-part uploads to ensure
  filer.ETag() returns correct value in ListObjects response
- Simplify delete marker creation by initializing entry directly in mkFile callback
- Add bytes and encoding/hex imports for ETag parsing

* s3api: preserve default attributes in delete marker mkFile callback

Only modify Mtime field instead of replacing the entire Attributes struct,
preserving default values like Crtime, FileMode, Uid, and Gid that mkFile
initializes.

* s3api: fix ETag handling in newListEntry for multipart uploads

Prioritize ExtETagKey from Extended attributes before falling back to
filer.ETag(). This properly handles multipart upload ETags (format: md5-parts)
for versioned objects, where the synthetic entry has cached ETag metadata
but no chunks to calculate from.

* s3api: reduce code duplication in delete marker creation

Extract deleteMarkerExtended map to be reused in both mkFile callback
and deleteMarkerEntry construction.

* test: add multipart upload versioning tests for ETag verification

Add tests to verify that multipart uploaded objects in versioned buckets
have correct ETags when listed:

- TestMultipartUploadVersioningListETag: Basic multipart upload with 2 parts
- TestMultipartUploadMultipleVersionsListETag: Multiple multipart versions
- TestMixedSingleAndMultipartVersionsListETag: Mix of single-part and multipart

These tests cover a bug where synthetic entries for versioned objects
didn't include proper ETag handling for multipart uploads.

* test: add delete marker test for multipart-uploaded versioned objects

TestMultipartUploadDeleteMarkerListBehavior verifies:
- Delete marker creation hides object from ListObjectsV2
- ListObjectVersions shows both version and delete marker
- Version ETag (multipart format) is preserved after delete marker
- Object can be accessed by version ID after delete marker
- Removing delete marker restores object visibility

* refactor: address code review feedback

- test: use assert.ElementsMatch for ETag verification (more idiomatic)
- s3api: optimize newListEntry ETag logic (check ExtETagKey first)
- s3api: fix edge case in ETag parsing (>= 2 instead of > 2)

* s3api: prevent stale cached metadata and preserve existing extended attrs

- setCachedListMetadata: clear old cached keys before setting new values
  to prevent stale data when new version lacks certain fields (e.g., owner)
- createDeleteMarker: merge extended attributes instead of overwriting
  to preserve any existing metadata on the entry

* s3api: extract clearCachedVersionMetadata to reduce code duplication

- clearCachedVersionMetadata: clears only metadata fields (size, mtime, etag, owner, deleteMarker)
- clearCachedListMetadata: now reuses clearCachedVersionMetadata + clears ID/filename
- setCachedListMetadata: uses clearCachedVersionMetadata (not clearCachedListMetadata
  because caller has already set ID/filename)

* s3api: share timestamp between version entry and cache entry

Capture versionMtime once before mkFile and reuse for both:
- versionEntry.Attributes.Mtime in the mkFile callback
- versionEntryForCache.Attributes.Mtime for list caching

This keeps list vs. HEAD LastModified timestamps aligned.

* s3api: remove amzAccountId variable shadowing in multipart upload

Extract amzAccountId before mkFile callback and reuse in both places,
similar to how versionMtime is handled. Avoids confusion from
redeclaring the same variable.
2025-12-18 17:44:27 -08:00
Chris Lu
2763f105f4 fix: use unique bucket name in TestS3IAMPresignedURLIntegration to avoid flaky test (#7801)
The test was using a static bucket name 'test-iam-bucket' that could conflict
with buckets created by other tests or previous runs. Each test framework
creates new RSA keys for JWT signing, so the 'admin-user' identity differs
between runs. When the bucket exists from a previous test, the new admin
cannot access or delete it, causing AccessDenied errors.

Changed to use GenerateUniqueBucketName() which ensures each test run gets
its own bucket, avoiding cross-test conflicts.
2025-12-17 00:21:32 -08:00
Chris Lu
347ed7cbfa fix: sync replica entries before ec.encode and volume.tier.move (#7798)
* fix: sync replica entries before ec.encode and volume.tier.move (#7797)

This addresses the data inconsistency risk in multi-replica volumes.

When ec.encode or volume.tier.move operates on a multi-replica volume:
1. Find the replica with the highest file count (the 'best' one)
2. Copy missing entries from other replicas INTO this best replica
3. Use this union replica for the destructive operation

This ensures no data is lost due to replica inconsistency before
EC encoding or tier moving.
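Steps 1 and 2 above can be sketched over plain needle-ID sets. This is a simplification: the real logic also checks per-needle recency (added later in this PR), and operates on volume files rather than in-memory sets.

```go
package main

import "fmt"

// unionIntoBest picks the replica with the most entries as the "best" one,
// then copies into it every needle ID the other replicas have that it is
// missing, returning the index of the union replica.
func unionIntoBest(replicas []map[uint64]bool) int {
	best := 0
	for i, r := range replicas {
		if len(r) > len(replicas[best]) {
			best = i
		}
	}
	for i, r := range replicas {
		if i == best {
			continue
		}
		for id := range r {
			replicas[best][id] = true
		}
	}
	return best
}

func main() {
	replicas := []map[uint64]bool{
		{1: true, 2: true},
		{1: true, 2: true, 3: true}, // most complete replica
		{4: true},                   // holds an entry the others miss
	}
	best := unionIntoBest(replicas)
	fmt.Println(best, len(replicas[best])) // union replica now holds ids 1..4
}
```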

Added:
- command_volume_replica_check.go: Core sync and select logic
- command_volume_replica_check_test.go: Test coverage

Modified:
- command_ec_encode.go: Call syncAndSelectBestReplica before encoding
- command_volume_tier_move.go: Call syncAndSelectBestReplica before moving

Fixes #7797

* test: add integration test for replicated volume sync during ec.encode

* test: improve retry logic for replicated volume integration test

* fix: resolve JWT issue in integration tests by using empty security.toml

* address review comments: add readNeedleMeta, parallelize status fetch, fix collection param, fix test issues

* test: use collection parameter consistently in replica sync test

* fix: convert weed binary path to absolute to work with changed working directory

* fix: remove skip behavior, keep tests failing on missing binary

* fix: always check recency for each needle, add divergent replica test
2025-12-16 23:16:07 -08:00
G-OD
504b258258 s3: fix remote object not caching (#7790)
* s3: fix remote object not caching

* s3: address review comments for remote object caching

- Fix leading slash in object name by using strings.TrimPrefix
- Return cached entry from CacheRemoteObjectToLocalCluster to get updated local chunk locations
- Reuse existing helper function instead of inline gRPC call

* s3/filer: add singleflight deduplication for remote object caching

- Add singleflight.Group to FilerServer to deduplicate concurrent cache operations
- Wrap CacheRemoteObjectToLocalCluster with singleflight to ensure only one
  caching operation runs per object when multiple clients request the same file
- Add early-return check for already-cached objects
- S3 API calls filer gRPC with timeout and graceful fallback on error
- Clear negative bucket cache when bucket is created via weed shell
- Add integration tests for remote cache with singleflight deduplication

This benefits all clients (S3, HTTP, Hadoop) accessing remote-mounted objects
by preventing redundant cache operations and improving concurrent access performance.

Fixes: https://github.com/seaweedfs/seaweedfs/discussions/7599
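The deduplication pattern above comes from `golang.org/x/sync/singleflight`; a minimal stdlib-only re-implementation shows the semantics (this is a sketch of the pattern, not the SeaweedFS code): concurrent callers with the same key share one in-flight execution and its result.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

type call struct {
	wg  sync.WaitGroup
	val string
}

// Group deduplicates concurrent calls by key, like singleflight.Group.
type Group struct {
	mu sync.Mutex
	m  map[string]*call
}

func (g *Group) Do(key string, fn func() string) string {
	g.mu.Lock()
	if g.m == nil {
		g.m = make(map[string]*call)
	}
	if c, ok := g.m[key]; ok {
		g.mu.Unlock()
		c.wg.Wait() // join the in-flight operation instead of re-running it
		return c.val
	}
	c := &call{}
	c.wg.Add(1)
	g.m[key] = c
	g.mu.Unlock()

	c.val = fn()
	c.wg.Done()

	g.mu.Lock()
	delete(g.m, key)
	g.mu.Unlock()
	return c.val
}

func main() {
	var g Group
	var executions int32
	release := make(chan struct{})
	var started, done sync.WaitGroup
	for i := 0; i < 10; i++ {
		started.Add(1)
		done.Add(1)
		go func() {
			defer done.Done()
			started.Done()
			g.Do("bucket/object", func() string {
				atomic.AddInt32(&executions, 1)
				<-release // hold the "cache fetch" open so callers pile up
				return "cached"
			})
		}()
	}
	started.Wait()
	time.Sleep(50 * time.Millisecond) // let every goroutine reach Do
	close(release)
	done.Wait()
	fmt.Println(atomic.LoadInt32(&executions)) // one fetch shared by 10 callers
}
```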

* fix: data race in concurrent remote object caching

- Add mutex to protect chunks slice from concurrent append
- Add mutex to protect fetchAndWriteErr from concurrent read/write
- Fix incorrect error check (was checking assignResult.Error instead of parseErr)
- Rename inner variable to avoid shadowing fetchAndWriteErr

* fix: address code review comments

- Remove duplicate remote caching block in GetObjectHandler, keep only singleflight version
- Add mutex protection for concurrent chunk slice and error access (data race fix)
- Use lazy initialization for S3 client in tests to avoid panic during package load
- Fix markdown linting: add language specifier to code fence, blank lines around tables
- Add 'all' target to Makefile as alias for test-with-server
- Remove unused 'util' import

* style: remove emojis from test files

* fix: add defensive checks and sort chunks by offset

- Add nil check and type assertion check for singleflight result
- Sort chunks by offset after concurrent fetching to maintain file order

* fix: improve test diagnostics and path normalization

- runWeedShell now returns error for better test diagnostics
- Add all targets to .PHONY in Makefile (logs-primary, logs-remote, health)
- Strip leading slash from normalizedObject to avoid double slashes in path

---------

Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
2025-12-16 12:41:04 -08:00
chrislu
9150d84eea test: use -master.peers=none for faster test server startup 2025-12-15 23:37:44 -08:00
chrislu
26121c55c9 test: improve pagination stress test with QUICK_TEST option and better assertions 2025-12-15 23:23:50 -08:00
chrislu
f517bc39fc test: fix nil pointer dereference and add debugging to pagination stress tests 2025-12-15 23:16:01 -08:00
chrislu
0972a0acf3 test: add pagination stress tests for S3 versioning with >1000 versions 2025-12-15 23:06:57 -08:00
Chris Lu
f5c666052e feat: add S3 bucket size and object count metrics (#7776)
* feat: add S3 bucket size and object count metrics

Adds periodic collection of bucket size metrics:
- SeaweedFS_s3_bucket_size_bytes: logical size (deduplicated across replicas)
- SeaweedFS_s3_bucket_physical_size_bytes: physical size (including replicas)
- SeaweedFS_s3_bucket_object_count: object count (deduplicated)

Collection runs every minute via a background goroutine that queries the
filer Statistics RPC for each bucket's collection.

Also adds Grafana dashboard panels for:
- S3 Bucket Size (logical vs physical)
- S3 Bucket Object Count

* address PR comments: fix bucket size metrics collection

1. Fix collectCollectionInfoFromMaster to use master VolumeList API
   - Now properly queries master for topology info
   - Uses WithMasterClient to get volume list from master
   - Correctly calculates logical vs physical size based on replication

2. Return error when filerClient is nil to trigger fallback
   - Changed from 'return nil, nil' to 'return nil, error'
   - Ensures fallback to filer stats is properly triggered

3. Implement pagination in listBucketNames
   - Added listBucketPageSize constant (1000)
   - Uses StartFromFileName for pagination
   - Continues fetching until fewer entries than limit returned

4. Handle NewReplicaPlacementFromByte error and prevent division by zero
   - Check error return from NewReplicaPlacementFromByte
   - Default to 1 copy if error occurs
   - Add explicit check for copyCount == 0

* simplify bucket size metrics: remove filer fallback, align with quota enforcement

- Remove fallback to filer Statistics RPC
- Use only master topology for collection info (same as s3.bucket.quota.enforce)
- Updated comments to clarify this runs the same collection logic as quota enforcement
- Simplified code by removing collectBucketSizeFromFilerStats

* use s3a.option.Masters directly instead of querying filer

* address PR comments: fix dashboard overlaps and improve metrics collection

Grafana dashboard fixes:
- Fix overlapping panels 55 and 59 in grafana_seaweedfs.json (moved 59 to y=30)
- Fix grid collision in k8s dashboard (moved panel 72 to y=48)
- Aggregate bucket metrics with max() by (bucket) for multi-instance S3 gateways

Go code improvements:
- Add graceful shutdown support via context cancellation
- Use ticker instead of time.Sleep for better shutdown responsiveness
- Distinguish EOF from actual errors in stream handling

* improve bucket size metrics: multi-master failover and proper error handling

- Initial delay now respects context cancellation using select with time.After
- Use WithOneOfGrpcMasterClients for multi-master failover instead of hardcoding Masters[0]
- Properly propagate stream errors instead of just logging them (EOF vs real errors)

* improve bucket size metrics: distributed lock and volume ID deduplication

- Add distributed lock (LiveLock) so only one S3 instance collects metrics at a time
- Add IsLocked() method to LiveLock for checking lock status
- Fix deduplication: use volume ID tracking instead of dividing by copyCount
  - Previous approach gave wrong results if replicas were missing
  - Now tracks seen volume IDs and counts each volume only once
- Physical size still includes all replicas for accurate disk usage reporting

* rename lock to s3.leader

* simplify: remove StartBucketSizeMetricsCollection wrapper function

* fix data race: use atomic operations for LiveLock.isLocked field

- Change isLocked from bool to int32
- Use atomic.LoadInt32/StoreInt32 for all reads/writes
- Sync shared isLocked field in StartLongLivedLock goroutine

* add nil check for topology info to prevent panic

* fix bucket metrics: use Ticker for consistent intervals, fix pagination logic

- Use time.Ticker instead of time.After for consistent interval execution
- Fix pagination: count all entries (not just directories) for proper termination
- Update lastFileName for all entries to prevent pagination issues

* address PR comments: remove redundant atomic store, propagate context

- Remove redundant atomic.StoreInt32 in StartLongLivedLock (AttemptToLock already sets it)
- Propagate context through metrics collection for proper cancellation on shutdown
  - collectAndUpdateBucketSizeMetrics now accepts ctx
  - collectCollectionInfoFromMaster uses ctx for VolumeList RPC
  - listBucketNames uses ctx for ListEntries RPC
2025-12-15 19:23:25 -08:00
dependabot[bot]
d0cc51e7c6 chore(deps): bump io.netty:netty-codec-http from 4.1.125.Final to 4.1.129.Final in /test/java/spark (#7773)
chore(deps): bump io.netty:netty-codec-http in /test/java/spark

Bumps [io.netty:netty-codec-http](https://github.com/netty/netty) from 4.1.125.Final to 4.1.129.Final.
- [Commits](https://github.com/netty/netty/compare/netty-4.1.125.Final...netty-4.1.129.Final)

---
updated-dependencies:
- dependency-name: io.netty:netty-codec-http
  dependency-version: 4.1.129.Final
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-15 15:46:06 -08:00
dependabot[bot]
c6e07429e7 chore(deps): bump golang.org/x/image from 0.33.0 to 0.34.0 (#7764)
* chore(deps): bump golang.org/x/image from 0.33.0 to 0.34.0

Bumps [golang.org/x/image](https://github.com/golang/image) from 0.33.0 to 0.34.0.
- [Commits](https://github.com/golang/image/compare/v0.33.0...v0.34.0)

---
updated-dependencies:
- dependency-name: golang.org/x/image
  dependency-version: 0.34.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* tidy

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chrislu <chris.lu@gmail.com>
2025-12-15 13:26:56 -08:00
Chris Lu
5a03b5538f filer: improve FoundationDB performance by disabling batch by default (#7770)
* filer: improve FoundationDB performance by disabling batch by default

This PR addresses a performance issue where the FoundationDB filer was
achieving only ~757 ops/sec with 12 concurrent S3 clients, despite FDB
being capable of 17,000+ ops/sec.

Root cause: the write batcher was waiting up to 5ms for each operation to
batch, and since S3 semantics require each write to wait for durability
confirmation, that batching delay became pure added latency and defeated
the purpose of batching.

Changes:
- Disable write batching by default (batch_enabled = false)
- Each write now commits immediately in its own transaction
- Reduce batch interval from 5ms to 1ms when batching is enabled
- Add batch_enabled config option to toggle behavior
- Improve batcher to collect available ops without blocking
- Add benchmarks comparing batch vs no-batch performance

Benchmark results (16 concurrent goroutines):
- With batch:    2,924 ops/sec (342,032 ns/op)
- Without batch: 4,625 ops/sec (216,219 ns/op)
- Improvement:   +58% faster

Configuration:
- Default: batch_enabled = false (optimal for S3 PUT latency)
- For bulk ingestion: set batch_enabled = true

Also fixes ARM64 Docker test setup (shell compatibility, fdbserver path).

* fix: address review comments - use atomic counter and remove duplicate batcher

- Use sync/atomic.Uint64 for unique filenames in concurrent benchmarks
- Remove duplicate batcher creation in createBenchmarkStoreWithBatching
  (initialize() already creates batcher when batchEnabled=true)

* fix: add realistic default values to benchmark store helper

Set directoryPrefix, timeout, and maxRetryDelay to reasonable defaults
for more realistic benchmark conditions.
2025-12-15 13:03:34 -08:00
Chris Lu
44beb42eb9 s3: fix PutObject ETag format for multi-chunk uploads (#7771)
* s3: fix PutObject ETag format for multi-chunk uploads

Fix issue #7768: AWS S3 SDK for Java fails with 'Invalid base 16
character: -' when performing PutObject on files that are internally
auto-chunked.

The issue was that SeaweedFS returned a composite ETag format
(<md5hash>-<count>) for regular PutObject when the file was split
into multiple chunks due to auto-chunking. However, per AWS S3 spec,
the composite ETag format should only be used for multipart uploads
(CreateMultipartUpload/UploadPart/CompleteMultipartUpload API).

Regular PutObject should always return a pure MD5 hash as the ETag,
regardless of how the file is stored internally.

The fix ensures the MD5 hash is always stored in entry.Attributes.Md5
for regular PutObject operations, so filer.ETag() returns the pure
MD5 hash instead of falling back to ETagChunks() composite format.

* test: add comprehensive ETag format tests for issue #7768

Add integration tests to ensure PutObject ETag format compatibility:

Go tests (test/s3/etag/):
- TestPutObjectETagFormat_SmallFile: 1KB single chunk
- TestPutObjectETagFormat_LargeFile: 10MB auto-chunked (critical for #7768)
- TestPutObjectETagFormat_ExtraLargeFile: 25MB multi-chunk
- TestMultipartUploadETagFormat: verify composite ETag for multipart
- TestPutObjectETagConsistency: ETag consistency across PUT/HEAD/GET
- TestETagHexValidation: simulate AWS SDK v2 hex decoding
- TestMultipleLargeFileUploads: stress test multiple large uploads

Java tests (other/java/s3copier/):
- Update pom.xml to include AWS SDK v2 (2.20.127)
- Add ETagValidationTest.java with comprehensive SDK v2 tests
- Add README.md documenting SDK versions and test coverage

Documentation:
- Add test/s3/SDK_COMPATIBILITY.md documenting validated SDK versions
- Add test/s3/etag/README.md explaining test coverage

These tests ensure large file PutObject (>8MB) returns pure MD5 ETags
(not composite format), which is required for AWS SDK v2 compatibility.

* fix: lower Java version requirement to 11 for CI compatibility

* address CodeRabbit review comments

- s3_etag_test.go: Handle rand.Read error, fix multipart part-count logging
- Makefile: Add 'all' target, pass S3_ENDPOINT to test commands
- SDK_COMPATIBILITY.md: Add language tag to fenced code block
- ETagValidationTest.java: Add pagination to cleanup logic
- README.md: Clarify Go SDK tests are in separate location

* ci: add s3copier ETag validation tests to Java integration tests

- Enable S3 API (-s3 -s3.port=8333) in SeaweedFS test server
- Add S3 API readiness check to wait loop
- Add step to run ETagValidationTest from s3copier

This ensures the fix for issue #7768 is continuously tested
against AWS SDK v2 for Java in CI.

* ci: add S3 config with credentials for s3copier tests

- Add -s3.config pointing to docker/compose/s3.json
- Add -s3.allowDeleteBucketNotEmpty for test cleanup
- Set S3_ACCESS_KEY and S3_SECRET_KEY env vars for tests

* ci: pass S3 config as Maven system properties

Pass S3_ENDPOINT, S3_ACCESS_KEY, S3_SECRET_KEY via -D flags
so they're available via System.getProperty() in Java tests
2025-12-15 12:43:33 -08:00
Chris Lu
1b1e5f69a2 Add TUS protocol support for resumable uploads (#7592)
* Add TUS protocol integration tests

This commit adds integration tests for the TUS (resumable upload) protocol
in preparation for implementing TUS support in the filer.

Test coverage includes:
- OPTIONS handler for capability discovery
- Basic single-request upload
- Chunked/resumable uploads
- HEAD requests for offset tracking
- DELETE for upload cancellation
- Error handling (invalid offsets, missing uploads)
- Creation-with-upload extension
- Resume after interruption simulation

Tests are skipped in short mode and require a running SeaweedFS cluster.

* Add TUS session storage types and utilities

Implements TUS upload session management:
- TusSession struct for tracking upload state
- Session creation with directory-based storage
- Session persistence using filer entries
- Session retrieval and offset updates
- Session deletion with chunk cleanup
- Upload completion with chunk assembly into final file

Session data is stored in /.uploads.tus/{upload-id}/ directory,
following the pattern used by S3 multipart uploads.

* Add TUS HTTP handlers

Implements TUS protocol HTTP handlers:
- tusHandler: Main entry point routing requests
- tusOptionsHandler: Capability discovery (OPTIONS)
- tusCreateHandler: Create new upload (POST)
- tusHeadHandler: Get upload offset (HEAD)
- tusPatchHandler: Upload data at offset (PATCH)
- tusDeleteHandler: Cancel upload (DELETE)
- tusWriteData: Upload data to volume servers

Features:
- Supports creation-with-upload extension
- Validates TUS protocol headers
- Offset conflict detection
- Automatic upload completion when size is reached
- Metadata parsing from Upload-Metadata header

* Wire up TUS protocol routes in filer server

Add TUS handler route (/.tus/) to the filer HTTP server.
The TUS route is registered before the catch-all route to ensure
proper routing of TUS protocol requests.

TUS protocol is now accessible at:
- OPTIONS /.tus/ - Capability discovery
- POST /.tus/{path} - Create upload
- HEAD /.tus/.uploads/{id} - Get offset
- PATCH /.tus/.uploads/{id} - Upload data
- DELETE /.tus/.uploads/{id} - Cancel upload

* Improve TUS integration test setup

Add comprehensive Makefile for TUS tests with targets:
- test-with-server: Run tests with automatic server management
- test-basic/chunked/resume/errors: Specific test categories
- manual-start/stop: For development testing
- debug-logs/status: For debugging
- ci-test: For CI/CD pipelines

Update README.md with:
- Detailed TUS protocol documentation
- All endpoint descriptions with headers
- Usage examples with curl commands
- Architecture diagram
- Comparison with S3 multipart uploads

Follows the pattern established by other tests in test/ folder.

* Fix TUS integration tests and creation-with-upload

- Fix test URLs to use full URLs instead of relative paths
- Fix creation-with-upload to refresh session before completing
- Fix Makefile to properly handle test cleanup
- Add FullURL helper function to TestCluster

* Add TUS protocol tests to GitHub Actions CI

- Add tus-tests.yml workflow that runs on PRs and pushes
- Runs when TUS-related files are modified
- Automatic server management for integration testing
- Upload logs on failure for debugging

* Make TUS base path configurable via CLI

- Add -tus.path CLI flag to filer command
- TUS is disabled by default (empty path)
- Example: -tus.path=/.tus to enable at /.tus endpoint
- Update test Makefile to use -tus.path flag
- Update README with TUS enabling instructions

* Rename -tus.path to -tusBasePath with default .tus

- Rename CLI flag from -tus.path to -tusBasePath
- Default to .tus (TUS enabled by default)
- Add -filer.tusBasePath option to weed server command
- Properly handle path prefix (prepend / if missing)

* Address code review comments

- Sort chunks by offset before assembling final file
- Use chunk.Offset directly instead of recalculating
- Return error on invalid file ID instead of skipping
- Require Content-Length header for PATCH requests
- Use fs.option.Cipher for encryption setting
- Detect MIME type from data using http.DetectContentType
- Fix concurrency group for push events in workflow
- Use os.Interrupt instead of Kill for graceful shutdown in tests

* fmt

* Address remaining code review comments

- Fix potential open redirect vulnerability by sanitizing uploadLocation path
- Add language specifier to README code block
- Handle os.Create errors in test setup
- Use waitForHTTPServer instead of time.Sleep for master/volume readiness
- Improve test reliability and debugging

* Address critical and high-priority review comments

- Add per-session locking to prevent race conditions in updateTusSessionOffset
- Stream data directly to volume server instead of buffering entire chunk
- Only buffer 512 bytes for MIME type detection, then stream remaining data
- Clean up session locks when session is deleted

* Fix race condition to work across multiple filer instances

- Store each chunk as a separate file entry instead of updating session JSON
- Chunk file names encode offset, size, and fileId for atomic storage
- getTusSession loads chunks from directory listing (atomic read)
- Eliminates read-modify-write race condition across multiple filers
- Remove in-memory mutex that only worked for single filer instance

* Address code review comments: fix variable shadowing, sniff size, and test stability

- Rename path variable to reqPath to avoid shadowing path package
- Make sniff buffer size respect contentLength (read at most contentLength bytes)
- Handle Content-Length < 0 in creation-with-upload (return error for chunked encoding)
- Fix test cluster: use temp directory for filer store, add startup delay

* Fix test stability: increase cluster stabilization delay to 5 seconds

The tests were intermittently failing because the volume server needed more
time to create volumes and register with the master. Increasing the delay
from 2 to 5 seconds fixes the flaky test behavior.

* Address PR review comments for TUS protocol support

- Fix strconv.Atoi error handling in test file (lines 386, 747)
- Fix lossy fileId encoding: use base64 instead of underscore replacement
- Add pagination support for ListDirectoryEntries in getTusSession
- Batch delete chunks instead of one-by-one in deleteTusSession

* Address additional PR review comments for TUS protocol

- Fix UploadAt timestamp: use entry.Crtime instead of time.Now()
- Remove redundant JSON content in chunk entry (metadata in filename)
- Refactor tusWriteData to stream in 4MB chunks to avoid OOM on large uploads
- Pass filer.Entry to parseTusChunkPath to preserve actual upload time

* Address more PR review comments for TUS protocol

- Normalize TUS path once in filer_server.go, store in option.TusPath
- Remove redundant path normalization from TUS handlers
- Remove goto statement in tusCreateHandler, simplify control flow

* Remove unnecessary mutexes in tusWriteData

The upload loop is sequential, so uploadErrLock and chunksLock are not needed.

* Rename updateTusSessionOffset to saveTusChunk

Remove unused newOffset parameter and rename function to better reflect its purpose.

* Improve TUS upload performance and add path validation

- Reuse operation.Uploader across sub-chunks for better connection reuse
- Guard against TusPath='/' to prevent hijacking all filer routes

* Address PR review comments for TUS protocol

- Fix critical chunk filename parsing: use strings.Cut instead of SplitN
  to correctly handle base64-encoded fileIds that may contain underscores
- Rename tusPath to tusBasePath for naming consistency across codebase
- Add background garbage collection for expired TUS sessions (runs hourly)
- Improve error messages with %w wrapping for better debuggability

* Address additional TUS PR review comments

- Fix tusBasePath default to use leading slash (/.tus) for consistency
- Add chunk contiguity validation in completeTusUpload to detect gaps/overlaps
- Fix offset calculation to find maximum contiguous range from 0, not just last chunk
- Return 413 Request Entity Too Large instead of silently truncating content
- Document tusChunkSize rationale (4MB balances memory vs request overhead)
- Fix Makefile xargs portability by removing GNU-specific -r flag
- Add explicit -tusBasePath flag to integration test for robustness
- Fix README example to use /.uploads/tus path format

* Revert log_buffer changes (moved to separate PR)

* Minor style fixes from PR review

- Simplify tusBasePath flag description to use example format
- Add 'TUS upload' prefix to session not found error message
- Remove duplicate tusChunkSize comment
- Capitalize warning message for consistency
- Add grep filter to Makefile xargs for better empty input handling
2025-12-14 21:56:07 -08:00
Chris Lu
51c2ab0107 fix: admin UI bucket deletion with filer group configured (#7735) 2025-12-13 19:04:12 -08:00
Chris Lu
36b8b2147b test: add integration test for versioned object listing path fix (#7731)
* test: add integration test for versioned object listing path fix

Add integration test that validates the fix for GitHub discussion #7573.
The test verifies that:
- Entry names use path.Base() to get base filename only
- Path doubling bug is prevented when listing versioned objects
- Logical entries are created correctly with proper attributes
- .versions folder paths are handled correctly

This test documents the Velero/Kopia compatibility fix and prevents
regression of the path doubling bug.

* test: add Velero/Kopia integration test for versioned object listing

Add integration tests that simulate Velero/Kopia's exact access patterns
when using S3 versioning. These tests validate the fix for GitHub
discussion #7573 where versioned objects with nested paths would have
their paths doubled in ListObjects responses.

Tests added:
- TestVeleroKopiaVersionedObjectListing: Tests various Kopia path patterns
- TestVeleroKopiaGetAfterList: Verifies list-then-get workflow works
- TestVeleroMultipleVersionsWithNestedPaths: Tests multi-version objects
- TestVeleroListVersionsWithNestedPaths: Tests ListObjectVersions API

Each test verifies:
1. Listed keys match original keys without path doubling
2. Objects can be retrieved using the listed keys
3. Content integrity is maintained

Related: https://github.com/seaweedfs/seaweedfs/discussions/7573

* refactor: remove old unit test, keep only Velero integration test

Remove weed/s3api/s3api_versioning_list_test.go as it was a simpler
unit test that the comprehensive Velero integration test supersedes.

The integration test in test/s3/versioning/s3_velero_integration_test.go
provides better coverage by actually exercising the S3 API with real
AWS SDK calls.

* refactor: use defer for response body cleanup in test loop

Use anonymous function with defer for getResp.Body.Close() to be more
defensive against future code additions in the loop body.

* refactor: improve hasDoubledPath function clarity and efficiency

- Fix comment to accurately describe checking for repeated pairs
- Tighten outer loop bound from len(parts)-2 to len(parts)-3
- Remove redundant bounds checks in the condition
2025-12-13 10:56:21 -08:00
Chris Lu
c6d6ee8297 test: fix master client timeout causing test hangs (#7715)
* test: fix master client timeout causing test hangs

Use the main test context for KeepConnectedToMaster instead of creating
a separate 60s context. The tests have 180s outer timeouts but the master
client was disconnecting after 60s, causing subsequent commands to hang
waiting for reconnection.

* test: add -peers=none to all test masters and timeout for lock

- Add -peers=none flag to all master servers for faster startup
- Add tryLockWithTimeout helper to avoid tests hanging on lock acquisition
- Skip tests if lock cannot be acquired within 30 seconds

* test: extract connectToMasterAndSync helper to reduce duplication

* test: fix captureCommandOutput pipe deadlock

Close write end of pipe before calling io.ReadAll to signal EOF,
otherwise ReadAll blocks forever waiting for more data.

* test: fix tryLockWithTimeout to check lock command errors

Propagate lock command error through channel and only treat as
locked if command succeeded. Previously any completion (including
errors) was treated as successful lock acquisition.
2025-12-11 00:22:32 -08:00
Chris Lu
df4f2f7020 ec: add -diskType flag to EC commands for SSD support (#7607)
* ec: add diskType parameter to core EC functions

Add diskType parameter to:
- ecBalancer struct
- collectEcVolumeServersByDc()
- collectEcNodesForDC()
- collectEcNodes()
- EcBalance()

This allows EC operations to target specific disk types (hdd, ssd, etc.)
instead of being hardcoded to HardDriveType only.

For backward compatibility, all callers currently pass types.HardDriveType
as the default value. Subsequent commits will add -diskType flags to
the individual EC commands.

* ec: update helper functions to use configurable diskType

Update the following functions to accept/use diskType parameter:
- findEcVolumeShards()
- addEcVolumeShards()
- deleteEcVolumeShards()
- moveMountedShardToEcNode()
- countShardsByRack()
- pickNEcShardsToMoveFrom()

All ecBalancer methods now use ecb.diskType instead of hardcoded
types.HardDriveType. Non-ecBalancer callers (like volumeServer.evacuate
and ec.rebuild) use types.HardDriveType as the default.

Update all test files to pass diskType where needed.

* ec: add -diskType flag to ec.balance and ec.encode commands

Add -diskType flag to specify the target disk type for EC operations:
- ec.balance -diskType=ssd
- ec.encode -diskType=ssd

The disk type can be 'hdd', 'ssd', or empty for default (hdd).
This allows placing EC shards on SSD or other disk types instead of
only HDD.

Example usage:
  ec.balance -collection=mybucket -diskType=ssd -apply
  ec.encode -collection=mybucket -diskType=ssd -force

* test: add integration tests for EC disk type support

Add integration tests to verify the -diskType flag works correctly:
- TestECDiskTypeSupport: Tests EC encode and balance with SSD disk type
- TestECDiskTypeMixedCluster: Tests EC operations on a mixed HDD/SSD cluster

The tests verify:
- Volume servers can be configured with specific disk types
- ec.encode accepts -diskType flag and encodes to the correct disk type
- ec.balance accepts -diskType flag and balances on the correct disk type
- Mixed disk type clusters work correctly with separate collections

* ec: add -sourceDiskType to ec.encode and -diskType to ec.decode

ec.encode:
- Add -sourceDiskType flag to filter source volumes by disk type
- This enables tier migration scenarios (e.g., SSD volumes → HDD EC shards)
- -diskType specifies target disk type for EC shards

ec.decode:
- Add -diskType flag to specify source disk type where EC shards are stored
- Update collectEcShardIds() and collectEcNodeShardBits() to accept diskType

Examples:
  # Encode SSD volumes to HDD EC shards (tier migration)
  ec.encode -collection=mybucket -sourceDiskType=ssd -diskType=hdd

  # Decode EC shards from SSD
  ec.decode -collection=mybucket -diskType=ssd

Integration tests updated to cover new flags.

* ec: fix variable shadowing and add -diskType to ec.rebuild and volumeServer.evacuate

Address code review comments:

1. Fix variable shadowing in collectEcVolumeServersByDc():
   - Rename loop variable 'diskType' to 'diskTypeKey' and 'diskTypeStr'
     to avoid shadowing the function parameter

2. Fix hardcoded HardDriveType in ecBalancer methods:
   - balanceEcRack(): use ecb.diskType instead of types.HardDriveType
   - collectVolumeIdToEcNodes(): use ecb.diskType

3. Add -diskType flag to ec.rebuild command:
   - Add diskType field to ecRebuilder struct
   - Pass diskType to collectEcNodes() and addEcVolumeShards()

4. Add -diskType flag to volumeServer.evacuate command:
   - Add diskType field to commandVolumeServerEvacuate struct
   - Pass diskType to collectEcVolumeServersByDc() and moveMountedShardToEcNode()

* test: add diskType field to ecBalancer in TestPickEcNodeToBalanceShardsInto

Address nitpick comment: ensure test ecBalancer struct has diskType
field set for consistency with other tests.

* ec: filter disk selection by disk type in pickBestDiskOnNode

When evacuating or rebalancing EC shards, pickBestDiskOnNode now
filters disks by the target disk type. This ensures:

1. EC shards from SSD disks are moved to SSD disks on destination nodes
2. EC shards from HDD disks are moved to HDD disks on destination nodes
3. No cross-disk-type shard movement occurs

This maintains the storage tier isolation when moving EC shards
between nodes during evacuation or rebalancing operations.

* ec: allow disk type fallback during evacuation

Update pickBestDiskOnNode to accept a strictDiskType parameter:

- strictDiskType=true (balancing): Only use disks of matching type.
  This maintains storage tier isolation during normal rebalancing.

- strictDiskType=false (evacuation): Prefer same disk type, but
  fall back to other disk types if no matching disk is available.
  This ensures evacuation can complete even when same-type capacity
  is insufficient.

Priority order for evacuation:
1. Same disk type with lowest shard count (preferred)
2. Different disk type with lowest shard count (fallback)

* test: use defer for lock/unlock to prevent lock leaks

Use defer to ensure locks are always released, even on early returns
or test failures. This prevents lock leaks that could cause subsequent
tests to hang or fail.

Changes:
- Return early if lock acquisition fails
- Immediately defer unlock after successful lock
- Remove redundant explicit unlock calls at end of tests
- Fix unused variable warning (err -> encodeErr/locErr)

* ec: dynamically discover disk types from topology for evacuation

Disk types are free-form tags (e.g., 'ssd', 'nvme', 'archive') that come
from the topology, not a hardcoded set. Only 'hdd' (or empty) is the
default disk type.

Use collectVolumeDiskTypes() to discover all disk types present in the
cluster topology instead of hardcoding [HardDriveType, SsdType].

* test: add evacuation fallback and cross-rack EC placement tests

Add two new integration tests:

1. TestEvacuationFallbackBehavior:
   - Tests that when same disk type has no capacity, shards fall back
     to other disk types during evacuation
   - Creates cluster with 1 SSD + 2 HDD servers (limited SSD capacity)
   - Verifies pickBestDiskOnNode behavior with strictDiskType=false

2. TestCrossRackECPlacement:
   - Tests EC shard distribution across different racks
   - Creates cluster with 4 servers in 4 different racks
   - Verifies shards are spread across multiple racks
   - Tests that ec.balance respects rack placement

Helper functions added:
- startLimitedSsdCluster: 1 SSD + 2 HDD servers
- startMultiRackCluster: 4 servers in 4 racks
- countShardsPerRack: counts EC shards per rack from disk

* test: fix collection mismatch in TestCrossRackECPlacement

The EC commands were using collection 'rack_test' but uploaded test data
uses collection 'test' (default). This caused ec.encode/ec.balance to not
find the uploaded volume.

Fix: Change EC commands to use '-collection test' to match the uploaded data.

Addresses review comment from PR #7607.

* test: close log files in MultiDiskCluster.Stop() to prevent FD leaks

Track log files in MultiDiskCluster.logFiles and close them in Stop()
to prevent file descriptor accumulation in long-running or many-test
scenarios.

Addresses review comment about logging resources cleanup.

* test: improve EC integration tests with proper assertions

- Add assertNoFlagError helper to detect flag parsing regressions
- Update diskType subtests to fail on flag errors (ec.encode, ec.balance, ec.decode)
- Update verify_disktype_flag_parsing to check help output contains diskType
- Remove verify_fallback_disk_selection (was documentation-only, not executable)
- Add assertion to verify_cross_rack_distribution for minimum 2 racks
- Consolidate uploadTestDataWithDiskType to accept collection parameter
- Remove duplicate uploadTestDataWithDiskTypeMixed function

* test: extract captureCommandOutput helper and fix error handling

- Add captureCommandOutput helper to reduce code duplication in diskType tests
- Create commandRunner interface to match shell command Do method
- Update ec_encode_with_ssd_disktype, ec_balance_with_ssd_disktype,
  ec_encode_with_source_disktype, ec_decode_with_disktype to use helper
- Fix filepath.Glob error handling in countShardsPerRack instead of ignoring it
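A minimal sketch of the kind of error handling this commit describes (the name `countShardFiles` and the layout are hypothetical, not the actual test helper): `filepath.Glob` returns an error only for a malformed pattern, but discarding it silently hides exactly the regression class the fix targets.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// countShardFiles is a hypothetical helper in the spirit of countShardsPerRack:
// count EC shard files (e.g. *.ec00 .. *.ec13) under a data directory,
// propagating the filepath.Glob error instead of ignoring it.
func countShardFiles(dataDir string) (int, error) {
	matches, err := filepath.Glob(filepath.Join(dataDir, "*.ec*"))
	if err != nil { // malformed pattern; do not silently swallow
		return 0, fmt.Errorf("glob %s: %w", dataDir, err)
	}
	return len(matches), nil
}

func main() {
	n, err := countShardFiles(".")
	fmt.Println(n >= 0, err == nil)
}
```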

* test: add flag validation to ec_balance_targets_correct_disk_type

Add assertNoFlagError calls after ec.balance commands to ensure
-diskType flag is properly recognized for both SSD and HDD disk types.

* test: add proper assertions for EC command results

- ec_encode_with_ssd_disktype: check for expected volume-related errors
- ec_balance_with_ssd_disktype: require success with require.NoError
- ec_encode_with_source_disktype: check for expected no-volume errors
- ec_decode_with_disktype: check for expected no-ec-volume errors
- upload_to_ssd_and_hdd: use require.NoError for setup validation

Tests now properly fail on unexpected errors rather than just logging.

* test: fix missing unlock in ec_encode_with_disk_awareness

Add defer unlock pattern to ensure lock is always released, matching
the pattern used in other subtests.

* test: improve helper robustness

- Make assertNoFlagError case-insensitive for pattern matching
- Use defer in captureCommandOutput to restore stdout/stderr and close
  pipe ends to avoid FD leaks even if cmd.Do panics
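The case-insensitive matching mentioned above can be sketched as follows (a simplified stand-in for the suite's `assertNoFlagError`; the exact patterns it checks are assumptions based on Go's standard flag-package error text):

```go
package main

import (
	"fmt"
	"strings"
)

// containsFlagError sketches the check assertNoFlagError performs:
// case-insensitive match for flag-parsing error text in captured output,
// so a removed or renamed -diskType flag fails the test instead of passing.
func containsFlagError(output string) bool {
	lower := strings.ToLower(output)
	return strings.Contains(lower, "flag provided but not defined") ||
		strings.Contains(lower, "invalid value")
}

func main() {
	fmt.Println(containsFlagError("Flag provided but not defined: -diskType")) // true
	fmt.Println(containsFlagError("moved 14 shards"))                          // false
}
```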
2025-12-10 22:42:52 -08:00
Chris Lu
d970c15d71 fix: prevent filer.backup stall in single-filer setups (#7695)
* fix: prevent filer.backup stall in single-filer setups (#4977)

When MetaAggregator.MetaLogBuffer is empty (which happens in single-filer
setups with no peers), ReadFromBuffer was returning nil error, causing
LoopProcessLogData to enter an infinite wait loop on ListenersCond.

This fix returns ResumeFromDiskError instead, allowing SubscribeMetadata
to loop back and read from persisted logs on disk. This ensures filer.backup
continues processing events even when the in-memory aggregator buffer is empty.

Fixes #4977

* test: add integration tests for metadata subscription

Add integration tests for metadata subscription functionality:

- TestMetadataSubscribeBasic: Tests basic subscription and event receiving
- TestMetadataSubscribeSingleFilerNoStall: Regression test for #4977,
  verifies subscription doesn't stall under high load in single-filer setups
- TestMetadataSubscribeResumeFromDisk: Tests resuming subscription from disk

Related to #4977

* ci: add GitHub Actions workflow for metadata subscribe tests

Add CI workflow that runs on:
- Push/PR to master affecting filer, log_buffer, or metadata subscribe code
- Runs the integration tests for metadata subscription
- Uploads logs on failure for debugging

Related to #4977

* fix: use multipart form-data for file uploads in integration tests

The filer expects multipart/form-data for file uploads, not raw POST body.
This fixes the 'Content-Type isn't multipart/form-data' error.

* test: use -peers=none for faster master startup

* test: add -peers=none to remaining master startup in ec tests

* fix: use filer HTTP port 8888, WithFilerClient adds 10000 for gRPC

WithFilerClient calls ToGrpcAddress() which adds 10000 to the port.
Passing 18888 resulted in connecting to 28888. Use 8888 instead.
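The port convention behind this fix can be sketched as follows (a toy model of the +10000 gRPC offset; `toGrpcAddress` here is illustrative, not the real `ToGrpcAddress`):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// toGrpcAddress models SeaweedFS's convention of deriving the gRPC port by
// adding 10000 to the HTTP port. Callers must therefore pass the HTTP
// address (8888); passing the gRPC one (18888) lands on 28888.
func toGrpcAddress(httpAddr string) string {
	host, portStr, _ := strings.Cut(httpAddr, ":")
	port, _ := strconv.Atoi(portStr)
	return fmt.Sprintf("%s:%d", host, port+10000)
}

func main() {
	fmt.Println(toGrpcAddress("localhost:8888"))  // localhost:18888
	fmt.Println(toGrpcAddress("localhost:18888")) // localhost:28888 -- the bug
}
```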

* test: add concurrent writes and million updates tests

- TestMetadataSubscribeConcurrentWrites: 50 goroutines writing 20 files each
- TestMetadataSubscribeMillionUpdates: 1 million metadata entries via gRPC
  (metadata only, no actual file content for speed)

* fix: address PR review comments

- Handle os.MkdirAll errors explicitly instead of ignoring
- Handle log file creation errors with proper error messages
- Replace silent event dropping with 100ms timeout and warning log
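The timeout-with-warning pattern from the last bullet looks roughly like this in Go (a generic sketch of the pattern, not the test file's exact code):

```go
package main

import (
	"fmt"
	"time"
)

// trySend replaces silent event dropping: block up to 100ms for a slot on
// the channel, and log a warning instead of dropping quietly on timeout.
func trySend(ch chan<- string, event string) bool {
	select {
	case ch <- event:
		return true
	case <-time.After(100 * time.Millisecond):
		fmt.Printf("warning: dropping event %q after timeout\n", event)
		return false
	}
}

func main() {
	ch := make(chan string, 1)
	fmt.Println(trySend(ch, "a")) // buffered slot free: true
	fmt.Println(trySend(ch, "b")) // buffer full, no reader: false after 100ms
}
```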

* Update metadata_subscribe_integration_test.go
2025-12-09 20:15:35 -08:00
Chris Lu
d5f21fd8ba fix: add missing backslash for volume extraArgs in helm chart (#7676)
Fixes #7467

The -mserver argument line in volume-statefulset.yaml was missing a
trailing backslash, which prevented extraArgs from being passed to
the weed volume process.

Also:
- Extracted master server list generation logic into shared helper
  templates in _helpers.tpl for better maintainability
- Updated all occurrences of deprecated -mserver flag to -master
  across docker-compose files, test files, and documentation
2025-12-08 23:21:02 -08:00
dependabot[bot]
51c630b5ff chore(deps): bump golang.org/x/sync from 0.18.0 to 0.19.0 (#7664)
* chore(deps): bump golang.org/x/sync from 0.18.0 to 0.19.0

Bumps [golang.org/x/sync](https://github.com/golang/sync) from 0.18.0 to 0.19.0.
- [Commits](https://github.com/golang/sync/compare/v0.18.0...v0.19.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sync
  dependency-version: 0.19.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* tidy

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chrislu <chris.lu@gmail.com>
2025-12-08 10:52:58 -08:00
dependabot[bot]
4709dbf4f8 chore(deps): bump github.com/klauspost/reedsolomon from 1.12.5 to 1.12.6 (#7662)
* chore(deps): bump github.com/klauspost/reedsolomon from 1.12.5 to 1.12.6

Bumps [github.com/klauspost/reedsolomon](https://github.com/klauspost/reedsolomon) from 1.12.5 to 1.12.6.
- [Release notes](https://github.com/klauspost/reedsolomon/releases)
- [Commits](https://github.com/klauspost/reedsolomon/compare/v1.12.5...v1.12.6)

---
updated-dependencies:
- dependency-name: github.com/klauspost/reedsolomon
  dependency-version: 1.12.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* tidy

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chrislu <chris.lu@gmail.com>
2025-12-08 10:52:38 -08:00
chrislu
5167bbd2a9 Remove deprecated allowEmptyFolder CLI option
The allowEmptyFolder option is no longer functional because:
1. The code that used it was already commented out
2. Empty folder cleanup is now handled asynchronously by EmptyFolderCleaner

The CLI flags are kept for backward compatibility but marked as deprecated
and ignored. This removes:
- S3ApiServerOption.AllowEmptyFolder field
- The actual usage in s3api_object_handlers_list.go
- Helm chart values and template references
- References in test Makefiles and docker-compose files
2025-12-06 21:54:12 -08:00
Chris Lu
55f0fbf364 s3: optimize DELETE by skipping lock check for buckets without Object Lock (#7642)
This optimization avoids an expensive filer gRPC call for every DELETE
operation on buckets that don't have Object Lock enabled.

Before this change, enforceObjectLockProtections() would always call
getObjectEntry() to fetch object metadata to check for retention/legal
hold, even for buckets that never had Object Lock configured.

Changes:
1. Add early return in enforceObjectLockProtections() if bucket has no
   Object Lock config or bucket doesn't exist
2. Add isObjectLockEnabled() helper function to check if a bucket has
   Object Lock configured
3. Fix validateObjectLockHeaders() to check ObjectLockConfig instead of
   just versioningEnabled - this ensures object-lock headers are properly
   rejected on buckets without Object Lock enabled, which aligns with
   AWS S3 semantics
4. Make bucket creation with Object Lock atomic - set Object Lock config
   in the same CreateEntry call as bucket creation, preventing race
   conditions where bucket exists without Object Lock enabled
5. Properly handle Object Lock setup failures during bucket creation -
   if StoreObjectLockConfigurationInExtended fails, roll back the bucket
   creation and return an error instead of leaving a bucket without
   the requested Object Lock configuration

This significantly improves DELETE latency for non-Object-Lock buckets,
which is the common case (lockCheck time reduced from 1-10ms to ~1µs).
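The early-return structure described in changes 1 and 2 can be sketched as (simplified types; the real `isObjectLockEnabled` and `enforceObjectLockProtections` operate on S3 bucket config, not this toy struct):

```go
package main

import "fmt"

type bucketConfig struct {
	ObjectLockConfig *struct{} // nil when Object Lock was never configured
}

// isObjectLockEnabled sketches the helper named in the commit: a cheap
// local check that lets DELETE skip the filer getObjectEntry round-trip.
func isObjectLockEnabled(cfg *bucketConfig) bool {
	return cfg != nil && cfg.ObjectLockConfig != nil
}

// enforceObjectLockProtections, simplified: return early in the common case.
func enforceObjectLockProtections(cfg *bucketConfig) error {
	if !isObjectLockEnabled(cfg) {
		return nil // no lock config: no retention/legal-hold lookup needed
	}
	// ...expensive object metadata fetch and retention checks go here...
	return nil
}

func main() {
	fmt.Println(enforceObjectLockProtections(&bucketConfig{}) == nil) // true
}
```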
2025-12-06 21:37:25 -08:00
chrislu
8d110b29dd fmt 2025-12-04 10:40:01 -08:00
Chris Lu
e361daa754 fix: SFTP HomeDir path translation for user operations (#7611)
* fix: SFTP HomeDir path translation for user operations

When users have a non-root HomeDir (e.g., '/sftp/user'), their SFTP
operations should be relative to that directory. Previously, when a
user uploaded to '/' via SFTP, the path was not translated to their
home directory, causing 'permission denied for / for permission write'.

This fix adds a toAbsolutePath() method that implements chroot-like
behavior where the user's HomeDir becomes their root. All file and
directory operations now translate paths through this method.

Example: User with HomeDir='/sftp/user' uploading to '/' now correctly
maps to '/sftp/user'.

Fixes: https://github.com/seaweedfs/seaweedfs/issues/7470

* test: add SFTP integration tests

Add comprehensive integration tests for the SFTP server including:
- HomeDir path translation tests (verifies fix for issue #7470)
- Basic file upload/download operations
- Directory operations (mkdir, rmdir, list)
- Large file handling (1MB test)
- File rename operations
- Stat/Lstat operations
- Path edge cases (trailing slashes, .., unicode filenames)
- Admin root access verification

The test framework starts a complete SeaweedFS cluster with:
- Master server
- Volume server
- Filer server
- SFTP server with test user credentials

Test users are configured in testdata/userstore.json:
- admin: HomeDir=/ with full access
- testuser: HomeDir=/sftp/testuser with access to home
- readonly: HomeDir=/public with read-only access

* fix: correct SFTP HomeDir path translation and add CI

Fix path handling so user paths beginning with '/' are joined under the
HomeDir correctly: the leading '/' is now stripped before joining, so '/file'
maps to '/sftp/user/file' rather than escaping the user's root.

Test improvements:
- Update go.mod to Go 1.24
- Fix weed binary discovery to prefer local build over PATH
- Add stabilization delay after service startup
- All 8 SFTP integration tests pass locally

Add GitHub Actions workflow for SFTP tests:
- Runs on push/PR affecting sftpd code or tests
- Tests HomeDir path translation, file ops, directory ops
- Covers issue #7470 fix verification

* security: update golang.org/x/crypto to v0.45.0

Addresses security vulnerability in golang.org/x/crypto < 0.45.0

* security: use proper SSH host key verification in tests

Replace ssh.InsecureIgnoreHostKey() with ssh.FixedHostKey() that
verifies the server's host key matches the known test key we generated.
This addresses CodeQL warning go/insecure-hostkeycallback.

Also updates go.mod to specify go 1.24.0 explicitly.

* security: fix path traversal vulnerability in SFTP toAbsolutePath

The previous implementation had a critical security vulnerability:
- Path traversal via '../..' could escape the HomeDir chroot jail
- Absolute paths were not correctly prefixed with HomeDir

The fix:
1. Concatenate HomeDir with userPath directly, then clean
2. Add security check to ensure final path stays within HomeDir
3. If traversal detected, safely return HomeDir instead

Also adds path traversal prevention tests to verify the fix.
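Combining this fix with the later boundary-check refinement, the translation can be sketched as follows (a simplified model of `toAbsolutePath`, not the exact SeaweedFS code; note the `"/"` suffix on the prefix check, which stops `/sftp/username` from matching `HomeDir=/sftp/user`):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// toAbsolutePath sketches the chroot-style translation: join the user path
// under HomeDir, clean it, then verify the result stays inside HomeDir and
// return an explicit error on a traversal attempt.
func toAbsolutePath(homeDir, userPath string) (string, error) {
	abs := path.Clean(path.Join(homeDir, strings.TrimPrefix(userPath, "/")))
	if abs != homeDir && !strings.HasPrefix(abs, homeDir+"/") {
		return "", fmt.Errorf("path traversal attempt: %s", userPath)
	}
	return abs, nil
}

func main() {
	p, _ := toAbsolutePath("/sftp/user", "/")
	fmt.Println(p) // /sftp/user
	_, err := toAbsolutePath("/sftp/user", "../../etc/passwd")
	fmt.Println(err != nil) // true
}
```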

* fix: address PR review comments

1. Fix SkipCleanup check to use actual test config instead of default
   - Added skipCleanup field to SftpTestFramework struct
   - Store config.SkipCleanup during Setup()
   - Use f.skipCleanup in Cleanup() instead of DefaultTestConfig()

2. Fix path prefix check false positive in mkdir
   - Changed from strings.HasPrefix(absPath, fs.user.HomeDir)
   - To: absPath == fs.user.HomeDir || strings.HasPrefix(absPath, fs.user.HomeDir+"/")
   - Prevents matching partial directory names (e.g., /sftp/username when HomeDir is /sftp/user)

* fix: check write permission on parent dir for mkdir

Aligns makeDir's permission check with newFileWriter for consistency.
To create a directory, a user needs write permission on the parent
directory, not mkdir permission on the new directory path.

* fix: refine SFTP path traversal logic and tests

1. Refine toAbsolutePath:
   - Use path.Join with strings.TrimPrefix for idiomatic path construction
   - Return explicit error on path traversal attempt instead of clamping
   - Updated all call sites to handle the error

2. Add Unit Tests:
   - Added sftp_server_test.go to verify toAbsolutePath logic
   - Covers normal paths, root path, and various traversal attempts

3. Update Integration Tests:
   - Updated PathTraversalPrevention test to reflect that standard SFTP clients
     sanitize paths before sending. The test now verifies successful containment
     within the jail rather than blocking (since the server receives a clean path).
   - The server-side blocking is verified by the new unit tests.

4. Makefile:
   - Removed -v from default test target

* fix: address PR comments on tests and makefile

1. Enhanced Unit Tests:
   - Added edge cases (empty path, multiple slashes, trailing slash) to sftp_server_test.go

2. Makefile Improvements:
   - Added 'all' target as default entry point

3. Code Clarity:
   - Added comment to mkdir permission check explaining defensive nature of HomeDir check

* fix: address PR review comments on permissions and tests

1. Security:
   - Added write permission check on target directory in renameEntry

2. Logging:
   - Changed dispatch log verbosity from V(0) to V(1)

3. Testing:
   - Updated Makefile .PHONY targets
   - Added unit test cases for empty/root HomeDir behavior in toAbsolutePath

* fix: set SFTP starting directory to virtual root

1. Critical Fix:
   - Changed sftp.WithStartDirectory from fs.user.HomeDir to '/'
   - Prevents double-prefixing when toAbsolutePath translates paths
   - Users now correctly start at their virtual root which maps to HomeDir

2. Test Improvements:
   - Use pointer for homeDir in tests for clearer nil vs empty distinction

* fix: clean HomeDir at config load time

Clean HomeDir path when loading users from JSON config.
This handles trailing slashes and other path anomalies at the source,
ensuring consistency throughout the codebase and avoiding repeated
cleaning on every toAbsolutePath call.

* test: strengthen assertions and add error checking in SFTP tests

1. Add error checking for cleanup operations in TestWalk
2. Strengthen cwd assertion to expect '/' explicitly in TestCurrentWorkingDirectory
3. Add error checking for cleanup in PathTraversalPrevention test
2025-12-03 13:42:05 -08:00
Chris Lu
4f038820dc Add disk-aware EC rebalancing (#7597)
* Add placement package for EC shard placement logic

- Consolidate EC shard placement algorithm for reuse across shell and worker tasks
- Support multi-pass selection: racks, then servers, then disks
- Include proper spread verification and scoring functions
- Comprehensive test coverage for various cluster topologies

* Make ec.balance disk-aware for multi-disk servers

- Add EcDisk struct to track individual disks on volume servers
- Update EcNode to maintain per-disk shard distribution
- Parse disk_id from EC shard information during topology collection
- Implement pickBestDiskOnNode() for selecting best disk per shard
- Add diskDistributionScore() for tie-breaking node selection
- Update all move operations to specify target disk in RPC calls
- Improves shard balance within multi-disk servers, not just across servers

* Use placement package in EC detection for consistent disk-level placement

- Replace custom EC disk selection logic with shared placement package
- Convert topology DiskInfo to placement.DiskCandidate format
- Use SelectDestinations() for multi-rack/server/disk spreading
- Convert placement results back to topology DiskInfo for task creation
- Ensures EC detection uses same placement logic as shell commands

* Make volume server evacuation disk-aware

- Use pickBestDiskOnNode() when selecting evacuation target disk
- Specify target disk in evacuation RPC requests
- Maintains balanced disk distribution during server evacuations

* Rename PlacementConfig to PlacementRequest for clarity

PlacementRequest better reflects that this is a request for placement
rather than a configuration object. This improves API semantics.

* Rename DefaultConfig to DefaultPlacementRequest

Aligns with the PlacementRequest type naming for consistency

* Address review comments from Gemini and CodeRabbit

Fix HIGH issues:
- Fix empty disk discovery: Now discovers all disks from VolumeInfos,
  not just from EC shards. This ensures disks without EC shards are
  still considered for placement.
- Fix EC shard count calculation in detection.go: Now correctly filters
  by DiskId and sums actual shard counts using ShardBits.ShardIdCount()
  instead of just counting EcShardInfo entries.

Fix MEDIUM issues:
- Add disk ID to evacuation log messages for consistency with other logging
- Remove unused serverToDisks variable in placement.go
- Fix comment that incorrectly said 'ascending' when sorting is 'descending'

* add ec tests

* Update ec-integration-tests.yml

* Update ec_integration_test.go

* Fix EC integration tests CI: build weed binary and update actions

- Add 'Build weed binary' step before running tests
- Update actions/setup-go from v4 to v6 (Node20 compatibility)
- Update actions/checkout from v2 to v4 (Node20 compatibility)
- Move working-directory to test step only

* Add disk-aware EC rebalancing integration tests

- Add TestDiskAwareECRebalancing test with multi-disk cluster setup
- Test EC encode with disk awareness (shows disk ID in output)
- Test EC balance with disk-level shard distribution
- Add helper functions for disk-level verification:
  - startMultiDiskCluster: 3 servers x 4 disks each
  - countShardsPerDisk: track shards per disk per server
  - calculateDiskShardVariance: measure distribution balance
- Verify no single disk is overloaded with shards
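The balance metric from the helper list can be sketched like this (a hypothetical implementation of what `calculateDiskShardVariance` might compute; the actual test helper may differ):

```go
package main

import "fmt"

// diskShardVariance computes the population variance of per-disk EC shard
// counts; 0 means a perfectly even spread, large values mean skew.
func diskShardVariance(shardsPerDisk []int) float64 {
	if len(shardsPerDisk) == 0 {
		return 0
	}
	var sum float64
	for _, n := range shardsPerDisk {
		sum += float64(n)
	}
	mean := sum / float64(len(shardsPerDisk))
	var variance float64
	for _, n := range shardsPerDisk {
		d := float64(n) - mean
		variance += d * d
	}
	return variance / float64(len(shardsPerDisk))
}

func main() {
	fmt.Println(diskShardVariance([]int{3, 3, 4, 4}))  // 0.25: well balanced
	fmt.Println(diskShardVariance([]int{14, 0, 0, 0})) // badly skewed
}
```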
2025-12-02 12:30:15 -08:00
Chris Lu
733ca8e6df Fix SSE-S3 copy: preserve encryption metadata and set chunk SSE type (#7598)
* Fix SSE-S3 copy: preserve encryption metadata and set chunk SSE type

Fixes GitHub #7562: Copying objects between encrypted buckets was failing.

Root causes:
1. processMetadataBytes was re-adding SSE headers from source entry, undoing
   the encryption header filtering. Now uses dstEntry.Extended which is
   already filtered.

2. SSE-S3 streaming copy returned nil metadata. Now properly generates and
   returns SSE-S3 destination metadata (SeaweedFSSSES3Key, AES256 header)
   via ExecuteStreamingCopyWithMetadata.

3. Chunks created during streaming copy didn't have SseType set. Now sets
   SseType and per-chunk SseMetadata with chunk-specific IVs for SSE-S3,
   enabling proper decryption on GetObject.

* Address review: make SSE-S3 metadata serialization failures fatal errors

- In executeEncryptCopy: return error instead of just logging if
  SerializeSSES3Metadata fails
- In createChunkFromData: return error if chunk SSE-S3 metadata
  serialization fails

This ensures objects/chunks are never created without proper encryption
metadata, preventing unreadable/corrupted data.

* fmt

* Refactor: reuse function names instead of creating WithMetadata variants

- Change ExecuteStreamingCopy to return (*EncryptionSpec, error) directly
- Remove ExecuteStreamingCopyWithMetadata wrapper
- Change executeStreamingReencryptCopy to return (*EncryptionSpec, error)
- Remove executeStreamingReencryptCopyWithMetadata wrapper
- Update callers to ignore encryption spec with _ where not needed

* Add TODO documenting large file SSE-S3 copy limitation

The streaming copy approach encrypts the entire stream with a single IV
but stores data in chunks with per-chunk IVs. This causes decryption
issues for large files. Small inline files work correctly.

This is a known architectural issue that needs separate work to fix.

* Use chunk-by-chunk encryption for SSE-S3 copy (consistent with SSE-C/SSE-KMS)

Instead of streaming encryption (which had IV mismatch issues for multi-chunk
files), SSE-S3 now uses the same chunk-by-chunk approach as SSE-C and SSE-KMS:

1. Extended copyMultipartCrossEncryption to handle SSE-S3:
   - Added SSE-S3 source decryption in copyCrossEncryptionChunk
   - Added SSE-S3 destination encryption with per-chunk IVs
   - Added object-level metadata generation for SSE-S3 destinations

2. Updated routing in executeEncryptCopy/executeDecryptCopy/executeReencryptCopy
   to use copyMultipartCrossEncryption for all SSE-S3 scenarios

3. Removed streaming copy functions (shouldUseStreamingCopy,
   executeStreamingReencryptCopy) as they're no longer used

4. Added large file (1MB) integration test to verify chunk-by-chunk copy works

This ensures consistent behavior across all SSE types and fixes data corruption
that occurred with large files in the streaming copy approach.

* fmt

* fmt

* Address review: fail explicitly if SSE-S3 metadata is missing

Instead of silently ignoring missing SSE-S3 metadata (which could create
unreadable objects), now explicitly fail the copy operation with a clear
error message if:
- First chunk is missing
- First chunk doesn't have SSE-S3 type
- First chunk has empty SSE metadata
- Deserialization fails

* Address review: improve comment to reflect full scope of chunk creation

* Address review: fail explicitly if baseIV is empty for SSE-S3 chunk encryption

If DestinationIV is not set when encrypting SSE-S3 chunks, the chunk would
be created without SseMetadata, causing GetObject decryption to fail later.
Now fails explicitly with a clear error message.

Note: calculateIVWithOffset returns ([]byte, int) not ([]byte, error) - the
int is a skip amount for intra-block alignment, not an error code.

* Address review: handle 0-byte files in SSE-S3 copy

For 0-byte files, there are no chunks to get metadata from. Generate an IV
for the object-level metadata to ensure even empty files are properly marked
as SSE-S3 encrypted.

Also validate that we don't have a non-empty file with no chunks (which
would indicate an internal error).
2025-12-02 09:24:31 -08:00
Chris Lu
208d008fe3 Fix tagging test pattern to run our comprehensive tests instead of basic tests 2025-12-01 16:34:13 -08:00
Chris Lu
ec41795594 Update s3-tagging-tests to use Makefile server management like other S3 tests 2025-12-01 16:22:51 -08:00
Chris Lu
a33e5a9e6a Add S3 object tagging tests to CI workflow
- Modified test/s3/tagging/s3_tagging_test.go to use environment variables for configurable endpoint and credentials
- Added s3-tagging-tests job to .github/workflows/s3-go-tests.yml to run tagging tests in CI
- Tests will now run automatically on pull requests
2025-12-01 15:40:06 -08:00