Files
seaweedFS/weed/s3api/auth_credentials.go
Chris Lu ca84a8a713 S3: Directly read write volume servers (#7481)
* Lazy Versioning Check, Conditional SSE Entry Fetch, HEAD Request Optimization

* revert

Reverted the conditional versioning check to always check versioning status
Reverted the conditional SSE entry fetch to always fetch entry metadata

* Lazy Entry Fetch for SSE, Skip Conditional Header Check

* If SSE-KMS headers are present, this is not an SSE-C request (mutually exclusive)

* SSE-C is mutually exclusive with SSE-S3 and SSE-KMS

* refactor

* Removed Premature Mutual Exclusivity Check

* check for the presence of the X-Amz-Server-Side-Encryption header

* not used

* fmt

* directly read write volume servers

* HTTP Range Request Support

* set header

* md5

* copy object

* fix sse

* fmt

* implement sse

* sse continue

* fixed the suffix range bug (bytes=-N for "last N bytes")

* debug logs

* Missing PartsCount Header

* profiling

* url encoding

* test_multipart_get_part

* headers

* debug

* adjust log level

* handle part number

* Update s3api_object_handlers.go

* nil safety

* set ModifiedTsNs

* remove

* nil check

* fix sse header

* same logic as filer

* decode values

* decode ivBase64

* s3: Fix SSE decryption JWT authentication and streaming errors

Critical fix for SSE (Server-Side Encryption) test failures:

1. **JWT Authentication Bug** (Root Cause):
   - Changed from GenJwtForFilerServer to GenJwtForVolumeServer
   - S3 API now uses correct JWT when directly reading from volume servers
   - Matches filer's authentication pattern for direct volume access
   - Fixes 'unexpected EOF' and 500 errors in SSE tests

2. **Streaming Error Handling**:
   - Added error propagation in getEncryptedStreamFromVolumes goroutine
   - Use CloseWithError() to properly communicate stream failures
   - Added debug logging for streaming errors

3. **Response Header Timing**:
   - Removed premature WriteHeader(http.StatusOK) call
   - Let Go's http package write status automatically on first write
   - Prevents header lock when errors occur during streaming

4. **Enhanced SSE Decryption Debugging**:
   - Added IV/Key validation and logging for SSE-C, SSE-KMS, SSE-S3
   - Better error messages for missing or invalid encryption metadata
   - Added glog.V(2) debugging for decryption setup

This fixes SSE integration test failures where encrypted objects
could not be retrieved due to volume server authentication failures.
The JWT bug was causing volume servers to reject requests, resulting
in truncated/empty streams (EOF) or internal errors.
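
A minimal sketch of the streaming error-propagation pattern from item 2, assuming a hypothetical copyChunk helper (the real getEncryptedStreamFromVolumes differs in detail):

```go
package sketch

import (
	"context"
	"fmt"
	"io"
)

// copyChunk stands in for fetching one encrypted chunk from a volume server
// (an HTTP range read authenticated with a volume-server JWT) and writing it to w.
func copyChunk(ctx context.Context, w io.Writer, fileId string) error {
	return nil // placeholder
}

// getEncryptedStream shows the CloseWithError pattern: a chunk-level failure
// is surfaced to the reader instead of appearing as a short read / unexpected EOF.
func getEncryptedStream(ctx context.Context, fileIds []string) io.ReadCloser {
	pr, pw := io.Pipe()
	go func() {
		for _, id := range fileIds {
			if err := copyChunk(ctx, pw, id); err != nil {
				pw.CloseWithError(fmt.Errorf("stream chunk %s: %w", id, err))
				return
			}
		}
		pw.Close()
	}()
	return pr
}
```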

* s3: Fix SSE multipart upload metadata preservation

Critical fix for SSE multipart upload test failures (SSE-C and SSE-KMS):

**Root Cause - Incomplete SSE Metadata Copying**:
The old code only tried to copy 'SeaweedFSSSEKMSKey' from the first
part to the completed object. This had TWO bugs:

1. **Wrong Constant Name** (Key Mismatch Bug):
   - Storage uses: SeaweedFSSSEKMSKeyHeader = 'X-SeaweedFS-SSE-KMS-Key'
   - Old code read: SeaweedFSSSEKMSKey = 'x-seaweedfs-sse-kms-key'
   - Result: SSE-KMS metadata was NEVER copied → 500 errors

2. **Missing SSE-C and SSE-S3 Headers**:
   - SSE-C requires: IV, Algorithm, KeyMD5
   - SSE-S3 requires: encrypted key data + standard headers
   - Old code: copied nothing for SSE-C/SSE-S3 → decryption failures

**Fix - Complete SSE Header Preservation**:
Now copies ALL SSE headers from first part to completed object:

- SSE-C: SeaweedFSSSEIV, CustomerAlgorithm, CustomerKeyMD5
- SSE-KMS: SeaweedFSSSEKMSKeyHeader, AwsKmsKeyId, ServerSideEncryption
- SSE-S3: SeaweedFSSSES3Key, ServerSideEncryption

Applied consistently to all 3 code paths:
1. Versioned buckets (creates version file)
2. Suspended versioning (creates main object with null versionId)
3. Non-versioned buckets (creates main object)

**Why This Is Correct**:
The headers copied EXACTLY match what putToFiler stores during part
upload (lines 496-521 in s3api_object_handlers_put.go). This ensures
detectPrimarySSEType() can correctly identify encrypted multipart
objects and trigger inline decryption with proper metadata.

Fixes: TestSSEMultipartUploadIntegration (SSE-C and SSE-KMS subtests)
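
A sketch of the header-copy step described above (the header string values are assumptions; the real constants live in s3_constants, and entry.Extended is the filer entry's extended-attribute map):

```go
// Copy every SSE-related extended header from the first part's entry to the
// completed object's entry, so detectPrimarySSEType and the GET path see the
// same metadata that putToFiler stored during part upload.
func copySSEHeaders(dst, src map[string][]byte) {
	for _, h := range []string{
		"X-SeaweedFS-SSE-IV",                              // SeaweedFSSSEIV (value assumed)
		"X-Amz-Server-Side-Encryption-Customer-Algorithm", // SSE-C
		"X-Amz-Server-Side-Encryption-Customer-Key-MD5",   // SSE-C
		"X-SeaweedFS-SSE-KMS-Key",                         // SeaweedFSSSEKMSKeyHeader (from the text above)
		"X-Amz-Server-Side-Encryption-Aws-Kms-Key-Id",     // SSE-KMS
		"X-SeaweedFS-SSE-S3-Key",                          // SeaweedFSSSES3Key (value assumed)
		"X-Amz-Server-Side-Encryption",                    // SSE-KMS / SSE-S3
	} {
		if v, ok := src[h]; ok {
			dst[h] = v
		}
	}
}
```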

* s3: Add debug logging for versioning state diagnosis

Temporary debug logging to diagnose test_versioning_obj_plain_null_version_overwrite_suspended failure.

Added glog.V(0) logging to show:
1. setBucketVersioningStatus: when versioning status is changed
2. PutObjectHandler: what versioning state is detected (Enabled/Suspended/none)
3. PutObjectHandler: which code path is taken (putVersionedObject vs putSuspendedVersioningObject)

This will help identify if:
- The versioning status is being set correctly in bucket config
- The cache is returning stale/incorrect versioning state
- The switch statement is correctly routing to suspended vs enabled handlers

* s3: Enhanced versioning state tracing for suspended versioning diagnosis

Added comprehensive logging across the entire versioning state flow:

PutBucketVersioningHandler:
- Log requested status (Enabled/Suspended)
- Log when calling setBucketVersioningStatus
- Log success/failure of status change

setBucketVersioningStatus:
- Log bucket and status being set
- Log when config is updated
- Log completion with error code

updateBucketConfig:
- Log versioning state being written to cache
- Immediate cache verification after Set
- Log if cache verification fails

getVersioningState:
- Log bucket name and state being returned
- Log if object lock forces VersioningEnabled
- Log errors

This will reveal:
1. If PutBucketVersioning(Suspended) is reaching the handler
2. If the cache update succeeds
3. What state getVersioningState returns during PUT
4. Any cache consistency issues

Expected to show why bucket still reports 'Enabled' after 'Suspended' call.

* s3: Add SSE chunk detection debugging for multipart uploads

Added comprehensive logging to diagnose why TestSSEMultipartUploadIntegration fails:

detectPrimarySSEType now logs:
1. Total chunk count and extended header count
2. All extended headers with 'sse'/'SSE'/'encryption' in the name
3. For each chunk: index, SseType, and whether it has metadata
4. Final SSE type counts (SSE-C, SSE-KMS, SSE-S3)

This will reveal if:
- Chunks are missing SSE metadata after multipart completion
- Extended headers are copied correctly from first part
- The SSE detection logic is working correctly

Expected to show if chunks have SseType=0 (none) or proper SSE types set.

* s3: Trace SSE chunk metadata through multipart completion and retrieval

Added end-to-end logging to track SSE chunk metadata lifecycle:

**During Multipart Completion (filer_multipart.go)**:
1. Log finalParts chunks BEFORE mkFile - shows SseType and metadata
2. Log versionEntry.Chunks INSIDE mkFile callback - shows if mkFile preserves SSE info
3. Log success after mkFile completes

**During GET Retrieval (s3api_object_handlers.go)**:
1. Log retrieved entry chunks - shows SseType and metadata after retrieval
2. Log detected SSE type result

This will reveal at which point SSE chunk metadata is lost:
- If finalParts have SSE metadata but versionEntry.Chunks don't → mkFile bug
- If versionEntry.Chunks have SSE metadata but retrieved chunks don't → storage/retrieval bug
- If chunks never have SSE metadata → multipart completion SSE processing bug

Expected to show chunks with SseType=NONE during retrieval even though
they were created with proper SseType during multipart completion.

* s3: Fix SSE-C multipart IV base64 decoding bug

**Critical Bug Found**: SSE-C multipart uploads were failing because:

Root Cause:
- entry.Extended[SeaweedFSSSEIV] stores base64-encoded IV (24 bytes for 16-byte IV)
- SerializeSSECMetadata expects raw IV bytes (16 bytes)
- During multipart completion, we were passing base64 IV directly → serialization error

Error Message:
"Failed to serialize SSE-C metadata for chunk in part X: invalid IV length: expected 16 bytes, got 24"

Fix:
- Base64-decode IV before passing to SerializeSSECMetadata
- Added error handling for decode failures

Impact:
- SSE-C multipart uploads will now correctly serialize chunk metadata
- Chunks will have proper SSE metadata for decryption during GET

This fixes the SSE-C subtest of TestSSEMultipartUploadIntegration.
SSE-KMS still has a separate issue (error code 23) being investigated.
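
The shape of the fix, as a sketch (SerializeSSECMetadata is the SeaweedFS helper named above; its exact signature is not shown here):

```go
import (
	"encoding/base64"
	"fmt"
)

// decodeStoredIV converts the base64 IV stored in entry.Extended (24 text
// bytes for a 16-byte IV) back to the raw bytes that SerializeSSECMetadata
// expects, failing loudly on bad input instead of writing broken metadata.
func decodeStoredIV(stored []byte) ([]byte, error) {
	iv, err := base64.StdEncoding.DecodeString(string(stored))
	if err != nil {
		return nil, fmt.Errorf("decode SSE-C IV: %w", err)
	}
	if len(iv) != 16 {
		return nil, fmt.Errorf("invalid IV length: expected 16 bytes, got %d", len(iv))
	}
	return iv, nil
}
```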

* fixes

* kms sse

* handle retry: if the object is not found in the .versions folder, fall back to reading the normal object

* quick check (no retries) to see if the .versions/ directory exists

* skip retry if object is not found

* explicit update to avoid sync delay

* fix map update lock

* Remove fmt.Printf debug statements

* Fix SSE-KMS multipart base IV fallback to fail instead of regenerating

* fmt

* Fix ACL grants storage logic

* header handling

* nil handling

* range read for sse content

* test range requests for sse objects

* fmt

* unused code

* upload in chunks

* header case

* fix url

* bucket policy error vs bucket not found

* jwt handling

* fmt

* jwt in request header

* Optimize Case-Insensitive Prefix Check

* dead code

* Eliminated Unnecessary Stream Prefetch for Multipart SSE

* range sse

* sse

* refactor

* context

* fmt

* fix type

* fix SSE-C IV Mismatch

* Fix Headers Being Set After WriteHeader

* fix url parsing

* propagate sse headers

* multipart sse-s3

* aws sig v4 authentication

* sse kms

* set content range

* better errors

* Update s3api_object_handlers_copy.go

* Update s3api_object_handlers.go

* Update s3api_object_handlers.go

* avoid magic number

* clean up

* Update s3api_bucket_policy_handlers.go

* fix url parsing

* context

* data and metadata both use background context

* adjust the offset

* SSE Range Request IV Calculation

* adjust logs

* IV relative to offset in each part, not the whole file

* collect logs

* offset

* fix offset

* fix url

* logs

* variable

* jwt

* Multipart ETag semantics: conditionally set object-level Md5 for single-chunk uploads only.

* sse

* adjust IV and offset

* multipart boundaries

* ensures PUT and GET operations return consistent ETags

* Metadata Header Case

* CommonPrefixes Sorting with URL Encoding

* always sort

* remove the extra PathUnescape call

* fix the multipart get part ETag

* the FileChunk is created without setting ModifiedTsNs

* Sort CommonPrefixes lexicographically to match AWS S3 behavior

* set md5 for multipart uploads

* prevents any potential data loss or corruption in the small-file inline storage path

* compiles correctly

* decryptedReader will now be properly closed after use

* Fixed URL encoding and sort order for CommonPrefixes

* Update s3api_object_handlers_list.go

* SSE-x Chunk View Decryption

* Different IV offset calculations for single-part vs multipart objects

* still too verbose in logs

* less logs

* ensure correct conversion

* fix listing

* nil check

* minor fixes

* nil check

* single character delimiter

* optimize

* range on empty object or zero-length

* correct IV based on its position within that part, not its position in the entire object

* adjust offset

* offset

Fetch FULL encrypted chunk (not just the range)
Adjust IV by PartOffset/ChunkOffset only
Decrypt full chunk
Skip in the DECRYPTED stream to reach OffsetInChunk
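
A sketch of the IV adjustment (helper names are illustrative, not the actual SeaweedFS functions): advance the AES-CTR counter by the whole 16-byte blocks covered by PartOffset/ChunkOffset, and report how many leading bytes of the decrypted output still need to be discarded when the offset is not block-aligned.

```go
import (
	"crypto/aes"
	"encoding/binary"
)

// calculateIVWithOffset returns the IV for decrypting a chunk starting at
// offsetInPart, plus ivSkip: the bytes to drop from the decrypted stream
// because the offset falls inside a cipher block.
func calculateIVWithOffset(baseIV []byte, offsetInPart int64) (iv []byte, ivSkip int64) {
	iv = make([]byte, aes.BlockSize)
	copy(iv, baseIV)
	blocks := uint64(offsetInPart / aes.BlockSize)
	ivSkip = offsetInPart % aes.BlockSize

	// Add the block count to the IV, treating it as a 128-bit big-endian
	// counter (the same convention Go's cipher.NewCTR uses for increments).
	lo := binary.BigEndian.Uint64(iv[8:])
	binary.BigEndian.PutUint64(iv[8:], lo+blocks)
	if lo+blocks < lo { // carry into the high 64 bits
		hi := binary.BigEndian.Uint64(iv[:8])
		binary.BigEndian.PutUint64(iv[:8], hi+1)
	}
	return iv, ivSkip
}
```

The caller then decrypts the full chunk with the adjusted IV and discards ivSkip bytes, plus whatever remains to reach OffsetInChunk, from the decrypted stream.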

* loop breaking

* refactor

* error on no content

* handle intra-block byte skipping

* Incomplete HTTP Response Error Handling

* multipart SSE

* Update s3api_object_handlers.go

* address comments

* less logs

* handling directory

* Optimized rejectDirectoryObjectWithoutSlash() to avoid unnecessary lookups

* Revert "handling directory"

This reverts commit 3a335f0ac33c63f51975abc63c40e5328857a74b.

* constant

* Consolidate nil entry checks in GetObjectHandler

* add range tests

* Consolidate redundant nil entry checks in HeadObjectHandler

* adjust logs

* SSE type

* large files

* large files

Reverted the plain-object range test

* ErrNoEncryptionConfig

* Fixed SSERangeReader Infinite Loop Vulnerability

* Fixed SSE-KMS Multipart ChunkReader HTTP Body Leak

* handle empty directory in S3, added PyArrow tests

* purge unused code

* Update s3_parquet_test.py

* Update requirements.txt

* According to S3 specifications, when both partNumber and Range are present, the Range should apply within the selected part's boundaries, not to the full object.
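
A sketch of that interpretation (suffix ranges omitted; helper name is illustrative):

```go
// rangeWithinPart translates a Range that accompanies partNumber into
// absolute object offsets, clamped to the selected part's boundaries.
func rangeWithinPart(partStart, partSize, rangeStart, rangeEnd int64) (absStart, absEnd int64, ok bool) {
	if rangeStart >= partSize {
		return 0, 0, false // 416 Range Not Satisfiable
	}
	if rangeEnd >= partSize {
		rangeEnd = partSize - 1 // clamp to the part's last byte
	}
	return partStart + rangeStart, partStart + rangeEnd, true
}
```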

* handle errors

* errors after writing header

* https

* fix: Wait for volume assignment readiness before running Parquet tests

The test-implicit-dir-with-server test was failing with an Internal Error
because volume assignment was not ready when tests started. This fix adds
a check that attempts a volume assignment and waits for it to succeed
before proceeding with tests.

This ensures that:
1. Volume servers are registered with the master
2. Volume growth is triggered if needed
3. The system can successfully assign volumes for writes

Fixes the timeout issue where boto3 would retry 4 times and fail with
'We encountered an internal error, please try again.'

* sse tests

* store derived IV

* fix: Clean up gRPC ports between tests to prevent port conflicts

The second test (test-implicit-dir-with-server) was failing because the
volume server's gRPC port (18080 = VOLUME_PORT + 10000) was still in use
from the first test. The cleanup code only killed HTTP port processes,
not gRPC port processes.

Added cleanup for gRPC ports in all stop targets:
- Master gRPC: MASTER_PORT + 10000 (19333)
- Volume gRPC: VOLUME_PORT + 10000 (18080)
- Filer gRPC: FILER_PORT + 10000 (18888)

This ensures clean state between test runs in CI.

* add import

* address comments

* docs: Add placeholder documentation files for Parquet test suite

Added three missing documentation files referenced in test/s3/parquet/README.md:

1. TEST_COVERAGE.md - Documents 43 total test cases (17 Go unit tests,
   6 Python integration tests, 20 Python end-to-end tests)

2. FINAL_ROOT_CAUSE_ANALYSIS.md - Explains the s3fs compatibility issue
   with PyArrow, the implicit directory problem, and how the fix works

3. MINIO_DIRECTORY_HANDLING.md - Compares MinIO's directory handling
   approach with SeaweedFS's implementation

Each file contains:
- Title and overview
- Key technical details relevant to the topic
- TODO sections for future expansion

These placeholder files resolve the broken README links and provide
structure for future detailed documentation.

* clean up if metadata operation failed

* Update s3_parquet_test.py

* clean up

* Update Makefile

* Update s3_parquet_test.py

* Update Makefile

* Handle ivSkip for non-block-aligned offsets

* Update README.md

* stop volume server faster

* stop volume server in 1 second

* different IV for each chunk in SSE-S3 and SSE-KMS

* clean up if fails

* testing upload

* error propagation

* fmt

* simplify

* fix copying

* less logs

* endian

* Added marshaling error handling

* handling invalid ranges

* error handling for adding to log buffer

* fix logging

* avoid returning too quickly and ensure proper cleaning up

* Activity Tracking for Disk Reads

* Cleanup Unused Parameters

* Activity Tracking for Kafka Publishers

* Proper Test Error Reporting

* refactoring

* less logs

* less logs

* go fmt

* guard it with if entry.Attributes.TtlSec > 0 to match the pattern used elsewhere.

* Handle bucket-default encryption config errors explicitly for multipart

* consistent activity tracking

* obsolete code for s3 on filer read/write handlers

* Update weed/s3api/s3api_object_handlers_list.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-18 23:18:35 -08:00


package s3api
import (
"context"
"encoding/json"
"fmt"
"net/http"
"os"
"slices"
"strings"
"sync"
"github.com/seaweedfs/seaweedfs/weed/credential"
"github.com/seaweedfs/seaweedfs/weed/filer"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/kms"
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
"github.com/seaweedfs/seaweedfs/weed/pb/iam_pb"
"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
"github.com/seaweedfs/seaweedfs/weed/s3api/s3err"
// Import KMS providers to register them
_ "github.com/seaweedfs/seaweedfs/weed/kms/aws"
// _ "github.com/seaweedfs/seaweedfs/weed/kms/azure" // TODO: Fix Azure SDK compatibility issues
_ "github.com/seaweedfs/seaweedfs/weed/kms/gcp"
_ "github.com/seaweedfs/seaweedfs/weed/kms/local"
_ "github.com/seaweedfs/seaweedfs/weed/kms/openbao"
"google.golang.org/grpc"
)
type Action string
type Iam interface {
Check(f http.HandlerFunc, actions ...Action) http.HandlerFunc
}
type IdentityAccessManagement struct {
m sync.RWMutex
identities []*Identity
accessKeyIdent map[string]*Identity
accounts map[string]*Account
emailAccount map[string]*Account
hashes map[string]*sync.Pool
hashCounters map[string]*int32
identityAnonymous *Identity
hashMu sync.RWMutex
domain string
isAuthEnabled bool
credentialManager *credential.CredentialManager
filerClient filer_pb.SeaweedFilerClient
grpcDialOption grpc.DialOption
// IAM Integration for advanced features
iamIntegration *S3IAMIntegration
// Bucket policy engine for evaluating bucket policies
policyEngine *BucketPolicyEngine
}
type Identity struct {
Name string
Account *Account
Credentials []*Credential
Actions []Action
PrincipalArn string // ARN for IAM authorization (e.g., "arn:aws:iam::account-id:user/username")
}
// Account represents a system user. A system user can configure multiple
// IAM-Users; each IAM-User is granted its own permissions and can hold
// multiple security credentials.
type Account struct {
//Name is also used to display the "DisplayName" as the owner of the bucket or object
DisplayName string
EmailAddress string
//Id is used to identify an Account when granting cross-account access (ACLs) to buckets and objects
Id string
}
// Predefined Accounts
var (
// AccountAdmin is used as the default account for IAM-Credentials access without Account configured
AccountAdmin = Account{
DisplayName: "admin",
EmailAddress: "admin@example.com",
Id: s3_constants.AccountAdminId,
}
// AccountAnonymous is used to represent the account for anonymous access
AccountAnonymous = Account{
DisplayName: "anonymous",
EmailAddress: "anonymous@example.com",
Id: s3_constants.AccountAnonymousId,
}
)
type Credential struct {
AccessKey string
SecretKey string
}
// "Permission": "FULL_CONTROL"|"WRITE"|"WRITE_ACP"|"READ"|"READ_ACP"
func (action Action) getPermission() Permission {
switch act := strings.Split(string(action), ":")[0]; act {
case s3_constants.ACTION_ADMIN:
return Permission("FULL_CONTROL")
case s3_constants.ACTION_WRITE:
return Permission("WRITE")
case s3_constants.ACTION_WRITE_ACP:
return Permission("WRITE_ACP")
case s3_constants.ACTION_READ:
return Permission("READ")
case s3_constants.ACTION_READ_ACP:
return Permission("READ_ACP")
default:
return Permission("")
}
}
func NewIdentityAccessManagement(option *S3ApiServerOption) *IdentityAccessManagement {
return NewIdentityAccessManagementWithStore(option, "")
}
func NewIdentityAccessManagementWithStore(option *S3ApiServerOption, explicitStore string) *IdentityAccessManagement {
iam := &IdentityAccessManagement{
domain: option.DomainName,
hashes: make(map[string]*sync.Pool),
hashCounters: make(map[string]*int32),
}
// Always initialize credential manager with fallback to defaults
credentialManager, err := credential.NewCredentialManagerWithDefaults(credential.CredentialStoreTypeName(explicitStore))
if err != nil {
glog.Fatalf("failed to initialize credential manager: %v", err)
}
// For stores that need filer client details, set them
if store := credentialManager.GetStore(); store != nil {
if filerClientSetter, ok := store.(interface {
SetFilerClient(string, grpc.DialOption)
}); ok {
filerClientSetter.SetFilerClient(string(option.Filer), option.GrpcDialOption)
}
}
iam.credentialManager = credentialManager
// Track whether any configuration was successfully loaded
configLoaded := false
// First, try to load configurations from file or filer
if option.Config != "" {
glog.V(3).Infof("loading static config file %s", option.Config)
if err := iam.loadS3ApiConfigurationFromFile(option.Config); err != nil {
glog.Fatalf("fail to load config file %s: %v", option.Config, err)
}
// Check if any identities were actually loaded from the config file
iam.m.RLock()
configLoaded = len(iam.identities) > 0
iam.m.RUnlock()
} else {
glog.V(3).Infof("no static config file specified... loading config from credential manager")
if err := iam.loadS3ApiConfigurationFromFiler(option); err != nil {
glog.Warningf("fail to load config: %v", err)
} else {
// Check if any identities were actually loaded from filer
iam.m.RLock()
configLoaded = len(iam.identities) > 0
iam.m.RUnlock()
}
}
// Only use environment variables as fallback if no configuration was loaded
if !configLoaded {
accessKeyId := os.Getenv("AWS_ACCESS_KEY_ID")
secretAccessKey := os.Getenv("AWS_SECRET_ACCESS_KEY")
if accessKeyId != "" && secretAccessKey != "" {
glog.V(1).Infof("No S3 configuration found, using AWS environment variables as fallback")
// Create environment variable identity name
identityNameSuffix := accessKeyId
if len(accessKeyId) > 8 {
identityNameSuffix = accessKeyId[:8]
}
// Create admin identity with environment variable credentials
envIdentity := &Identity{
Name: "admin-" + identityNameSuffix,
Account: &AccountAdmin,
Credentials: []*Credential{
{
AccessKey: accessKeyId,
SecretKey: secretAccessKey,
},
},
Actions: []Action{
s3_constants.ACTION_ADMIN,
},
}
// Set as the only configuration
iam.m.Lock()
if len(iam.identities) == 0 {
iam.identities = []*Identity{envIdentity}
iam.accessKeyIdent = map[string]*Identity{accessKeyId: envIdentity}
iam.isAuthEnabled = true
}
iam.m.Unlock()
glog.V(1).Infof("Added admin identity from AWS environment variables: %s", envIdentity.Name)
}
}
return iam
}
func (iam *IdentityAccessManagement) loadS3ApiConfigurationFromFiler(option *S3ApiServerOption) error {
return iam.LoadS3ApiConfigurationFromCredentialManager()
}
func (iam *IdentityAccessManagement) loadS3ApiConfigurationFromFile(fileName string) error {
content, readErr := os.ReadFile(fileName)
if readErr != nil {
glog.Warningf("fail to read %s : %v", fileName, readErr)
return fmt.Errorf("fail to read %s : %v", fileName, readErr)
}
// Initialize KMS if configuration contains KMS settings
if err := iam.initializeKMSFromConfig(content); err != nil {
glog.Warningf("KMS initialization failed: %v", err)
}
return iam.LoadS3ApiConfigurationFromBytes(content)
}
func (iam *IdentityAccessManagement) LoadS3ApiConfigurationFromBytes(content []byte) error {
s3ApiConfiguration := &iam_pb.S3ApiConfiguration{}
if err := filer.ParseS3ConfigurationFromBytes(content, s3ApiConfiguration); err != nil {
glog.Warningf("unmarshal error: %v", err)
return fmt.Errorf("unmarshal error: %w", err)
}
if err := filer.CheckDuplicateAccessKey(s3ApiConfiguration); err != nil {
return err
}
if err := iam.loadS3ApiConfiguration(s3ApiConfiguration); err != nil {
return err
}
return nil
}
func (iam *IdentityAccessManagement) loadS3ApiConfiguration(config *iam_pb.S3ApiConfiguration) error {
var identities []*Identity
var identityAnonymous *Identity
accessKeyIdent := make(map[string]*Identity)
accounts := make(map[string]*Account)
emailAccount := make(map[string]*Account)
foundAccountAdmin := false
foundAccountAnonymous := false
for _, account := range config.Accounts {
glog.V(3).Infof("loading account name=%s, id=%s", account.DisplayName, account.Id)
switch account.Id {
case AccountAdmin.Id:
AccountAdmin = Account{
Id: account.Id,
DisplayName: account.DisplayName,
EmailAddress: account.EmailAddress,
}
accounts[account.Id] = &AccountAdmin
foundAccountAdmin = true
case AccountAnonymous.Id:
AccountAnonymous = Account{
Id: account.Id,
DisplayName: account.DisplayName,
EmailAddress: account.EmailAddress,
}
accounts[account.Id] = &AccountAnonymous
foundAccountAnonymous = true
default:
t := Account{
Id: account.Id,
DisplayName: account.DisplayName,
EmailAddress: account.EmailAddress,
}
accounts[account.Id] = &t
}
if account.EmailAddress != "" {
emailAccount[account.EmailAddress] = accounts[account.Id]
}
}
if !foundAccountAdmin {
accounts[AccountAdmin.Id] = &AccountAdmin
emailAccount[AccountAdmin.EmailAddress] = &AccountAdmin
}
if !foundAccountAnonymous {
accounts[AccountAnonymous.Id] = &AccountAnonymous
emailAccount[AccountAnonymous.EmailAddress] = &AccountAnonymous
}
for _, ident := range config.Identities {
glog.V(3).Infof("loading identity %s", ident.Name)
t := &Identity{
Name: ident.Name,
Credentials: nil,
Actions: nil,
PrincipalArn: generatePrincipalArn(ident.Name),
}
switch {
case ident.Name == AccountAnonymous.Id:
t.Account = &AccountAnonymous
identityAnonymous = t
case ident.Account == nil:
t.Account = &AccountAdmin
default:
if account, ok := accounts[ident.Account.Id]; ok {
t.Account = account
} else {
t.Account = &AccountAdmin
glog.Warningf("identity %s is associated with a non exist account ID, the association is invalid", ident.Name)
}
}
for _, action := range ident.Actions {
t.Actions = append(t.Actions, Action(action))
}
for _, cred := range ident.Credentials {
t.Credentials = append(t.Credentials, &Credential{
AccessKey: cred.AccessKey,
SecretKey: cred.SecretKey,
})
accessKeyIdent[cred.AccessKey] = t
}
identities = append(identities, t)
}
iam.m.Lock()
// atomically switch
iam.identities = identities
iam.identityAnonymous = identityAnonymous
iam.accounts = accounts
iam.emailAccount = emailAccount
iam.accessKeyIdent = accessKeyIdent
if !iam.isAuthEnabled { // one-directional, no toggling
iam.isAuthEnabled = len(identities) > 0
}
iam.m.Unlock()
return nil
}
func (iam *IdentityAccessManagement) isEnabled() bool {
return iam.isAuthEnabled
}
func (iam *IdentityAccessManagement) lookupByAccessKey(accessKey string) (identity *Identity, cred *Credential, found bool) {
iam.m.RLock()
defer iam.m.RUnlock()
if ident, ok := iam.accessKeyIdent[accessKey]; ok {
for _, credential := range ident.Credentials {
if credential.AccessKey == accessKey {
return ident, credential, true
}
}
}
glog.V(1).Infof("could not find accessKey %s", accessKey)
return nil, nil, false
}
func (iam *IdentityAccessManagement) lookupAnonymous() (identity *Identity, found bool) {
iam.m.RLock()
defer iam.m.RUnlock()
if iam.identityAnonymous != nil {
return iam.identityAnonymous, true
}
return nil, false
}
// generatePrincipalArn generates an ARN for a user identity
func generatePrincipalArn(identityName string) string {
// Handle special cases
switch identityName {
case AccountAnonymous.Id:
return "arn:aws:iam::user/anonymous"
case AccountAdmin.Id:
return "arn:aws:iam::user/admin"
default:
return fmt.Sprintf("arn:aws:iam::user/%s", identityName)
}
}
func (iam *IdentityAccessManagement) GetAccountNameById(canonicalId string) string {
iam.m.RLock()
defer iam.m.RUnlock()
if account, ok := iam.accounts[canonicalId]; ok {
return account.DisplayName
}
return ""
}
func (iam *IdentityAccessManagement) GetAccountIdByEmail(email string) string {
iam.m.RLock()
defer iam.m.RUnlock()
if account, ok := iam.emailAccount[email]; ok {
return account.Id
}
return ""
}
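// Auth wraps an http.HandlerFunc so the request is authenticated before f
// runs; on success the identity name is stored in the s3_constants.AmzIdentityId
// header for downstream handlers. Illustrative wiring (simplified; the actual
// route registration adds more middleware):
//
//	bucket.Methods(http.MethodGet).Path("/{object:.+}").
//		HandlerFunc(s3a.iam.Auth(s3a.GetObjectHandler, s3_constants.ACTION_READ))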
func (iam *IdentityAccessManagement) Auth(f http.HandlerFunc, action Action) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if !iam.isEnabled() {
f(w, r)
return
}
identity, errCode := iam.authRequest(r, action)
glog.V(3).Infof("auth error: %v", errCode)
if errCode == s3err.ErrNone {
if identity != nil && identity.Name != "" {
r.Header.Set(s3_constants.AmzIdentityId, identity.Name)
}
f(w, r)
return
}
s3err.WriteErrorResponse(w, r, errCode)
}
}
// check whether the request has valid access keys
func (iam *IdentityAccessManagement) authRequest(r *http.Request, action Action) (*Identity, s3err.ErrorCode) {
var identity *Identity
var s3Err s3err.ErrorCode
var found bool
var authType string
switch getRequestAuthType(r) {
case authTypeUnknown:
glog.V(3).Infof("unknown auth type")
r.Header.Set(s3_constants.AmzAuthType, "Unknown")
return identity, s3err.ErrAccessDenied
case authTypePresignedV2, authTypeSignedV2:
glog.V(3).Infof("v2 auth type")
identity, s3Err = iam.isReqAuthenticatedV2(r)
authType = "SigV2"
case authTypeStreamingSigned, authTypeSigned, authTypePresigned:
glog.V(3).Infof("v4 auth type")
identity, s3Err = iam.reqSignatureV4Verify(r)
authType = "SigV4"
case authTypePostPolicy:
glog.V(3).Infof("post policy auth type")
r.Header.Set(s3_constants.AmzAuthType, "PostPolicy")
return identity, s3err.ErrNone
case authTypeStreamingUnsigned:
glog.V(3).Infof("unsigned streaming upload")
return identity, s3err.ErrNone
case authTypeJWT:
glog.V(3).Infof("jwt auth type detected, iamIntegration != nil? %t", iam.iamIntegration != nil)
r.Header.Set(s3_constants.AmzAuthType, "Jwt")
if iam.iamIntegration != nil {
identity, s3Err = iam.authenticateJWTWithIAM(r)
authType = "Jwt"
} else {
glog.V(2).Infof("IAM integration is nil, returning ErrNotImplemented")
return identity, s3err.ErrNotImplemented
}
case authTypeAnonymous:
authType = "Anonymous"
if identity, found = iam.lookupAnonymous(); !found {
r.Header.Set(s3_constants.AmzAuthType, authType)
return identity, s3err.ErrAccessDenied
}
default:
return identity, s3err.ErrNotImplemented
}
if len(authType) > 0 {
r.Header.Set(s3_constants.AmzAuthType, authType)
}
if s3Err != s3err.ErrNone {
return identity, s3Err
}
glog.V(3).Infof("user name: %v actions: %v, action: %v", identity.Name, identity.Actions, action)
bucket, object := s3_constants.GetBucketAndObject(r)
prefix := s3_constants.GetPrefix(r)
// For List operations, use prefix for permission checking if available
if action == s3_constants.ACTION_LIST && object == "" && prefix != "" {
// List operation with prefix - check permission for the prefix path
object = prefix
} else if (object == "/" || object == "") && prefix != "" {
// With the AWS CLI (s3 and s3api commands) and boto3, the object is often set to "/" or empty,
// while the prefix carries the actual object key to use for permission checking
object = prefix
}
// For ListBuckets, authorization is performed in the handler by iterating
// through buckets and checking permissions for each. Skip the global check here.
policyAllows := false
if action == s3_constants.ACTION_LIST && bucket == "" {
// ListBuckets operation - authorization handled per-bucket in the handler
} else {
// First check bucket policy if one exists
// Bucket policies can grant or deny access to specific users/principals
// Following AWS semantics:
// - Explicit DENY in bucket policy → immediate rejection
// - Explicit ALLOW in bucket policy → grant access (bypass IAM checks)
// - No policy or indeterminate → fall through to IAM checks
if iam.policyEngine != nil && bucket != "" {
principal := buildPrincipalARN(identity)
// Use context-aware policy evaluation to get the correct S3 action
allowed, evaluated, err := iam.policyEngine.EvaluatePolicyWithContext(bucket, object, string(action), principal, r)
if err != nil {
// SECURITY: Fail-close on policy evaluation errors
// If we can't evaluate the policy, deny access rather than falling through to IAM
glog.Errorf("Error evaluating bucket policy for %s/%s: %v - denying access", bucket, object, err)
return identity, s3err.ErrAccessDenied
} else if evaluated {
// A bucket policy exists and was evaluated with a matching statement
if allowed {
// Policy explicitly allows this action - grant access immediately
// This bypasses IAM checks to support cross-account access and policy-only principals
glog.V(3).Infof("Bucket policy allows %s to %s on %s/%s (bypassing IAM)", identity.Name, action, bucket, object)
policyAllows = true
} else {
// Policy explicitly denies this action - deny access immediately
// Note: Explicit Deny in bucket policy overrides all other permissions
glog.V(3).Infof("Bucket policy explicitly denies %s to %s on %s/%s", identity.Name, action, bucket, object)
return identity, s3err.ErrAccessDenied
}
}
// If not evaluated (no policy or no matching statements), fall through to IAM/identity checks
}
// Only check IAM if bucket policy didn't explicitly allow
// This ensures bucket policies can independently grant access (AWS semantics)
if !policyAllows {
// Use enhanced IAM authorization if available, otherwise fall back to legacy authorization
if iam.iamIntegration != nil {
// Always use IAM when available for unified authorization
if errCode := iam.authorizeWithIAM(r, identity, action, bucket, object); errCode != s3err.ErrNone {
return identity, errCode
}
} else {
// Fall back to existing authorization when IAM is not configured
if !identity.canDo(action, bucket, object) {
return identity, s3err.ErrAccessDenied
}
}
}
}
r.Header.Set(s3_constants.AmzAccountId, identity.Account.Id)
return identity, s3err.ErrNone
}
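// canDo reports whether this identity may perform action on bucket/objectKey.
// Worked examples of the matching rules below (action values per s3_constants,
// e.g. "Read", "Write", "Admin"):
//
//	Actions: ["Read:images"]            -> canDo("Read", "images", "/cat.jpg") is true (bucket-limited match)
//	Actions: ["Write:uploads/staging*"] -> canDo("Write", "uploads", "/staging/a.bin") is true (wildcard prefix)
//	Actions: ["Admin"]                  -> every action is allowed (isAdmin)
//	Actions: ["Read:images"]            -> canDo("Write", "images", "/cat.jpg") is false (no matching action)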
func (identity *Identity) canDo(action Action, bucket string, objectKey string) bool {
if identity.isAdmin() {
return true
}
for _, a := range identity.Actions {
// Case where the Resource provided is
// "Resource": [
// "arn:aws:s3:::*"
// ]
if a == action {
return true
}
}
if bucket == "" {
glog.V(3).Infof("identity %s is not allowed to perform action %s on %s -- bucket is empty", identity.Name, action, bucket+objectKey)
return false
}
glog.V(3).Infof("checking if %s can perform %s on bucket '%s'", identity.Name, action, bucket+objectKey)
target := string(action) + ":" + bucket + objectKey
adminTarget := s3_constants.ACTION_ADMIN + ":" + bucket + objectKey
limitedByBucket := string(action) + ":" + bucket
adminLimitedByBucket := s3_constants.ACTION_ADMIN + ":" + bucket
for _, a := range identity.Actions {
act := string(a)
if strings.HasSuffix(act, "*") {
if strings.HasPrefix(target, act[:len(act)-1]) {
return true
}
if strings.HasPrefix(adminTarget, act[:len(act)-1]) {
return true
}
} else {
if act == limitedByBucket {
return true
}
if act == adminLimitedByBucket {
return true
}
}
}
//log error
glog.V(3).Infof("identity %s is not allowed to perform action %s on %s", identity.Name, action, bucket+objectKey)
return false
}
func (identity *Identity) isAdmin() bool {
return slices.Contains(identity.Actions, s3_constants.ACTION_ADMIN)
}
// buildPrincipalARN builds an ARN for an identity to use in bucket policy evaluation
func buildPrincipalARN(identity *Identity) string {
if identity == nil {
return "*" // Anonymous
}
// Check if this is the anonymous user identity (authenticated as anonymous)
// S3 policies expect Principal: "*" for anonymous access
if identity.Name == s3_constants.AccountAnonymousId ||
(identity.Account != nil && identity.Account.Id == s3_constants.AccountAnonymousId) {
return "*" // Anonymous user
}
// Build an AWS-compatible principal ARN
// Format: arn:aws:iam::account-id:user/user-name
accountId := identity.Account.Id
if accountId == "" {
accountId = "000000000000" // Default account ID
}
userName := identity.Name
if userName == "" {
userName = "unknown"
}
return fmt.Sprintf("arn:aws:iam::%s:user/%s", accountId, userName)
}
// GetCredentialManager returns the credential manager instance
func (iam *IdentityAccessManagement) GetCredentialManager() *credential.CredentialManager {
return iam.credentialManager
}
// LoadS3ApiConfigurationFromCredentialManager loads configuration using the credential manager
func (iam *IdentityAccessManagement) LoadS3ApiConfigurationFromCredentialManager() error {
s3ApiConfiguration, err := iam.credentialManager.LoadConfiguration(context.Background())
if err != nil {
return fmt.Errorf("failed to load configuration from credential manager: %w", err)
}
return iam.loadS3ApiConfiguration(s3ApiConfiguration)
}
// initializeKMSFromConfig loads KMS configuration embedded in the S3 configuration content (JSON)
func (iam *IdentityAccessManagement) initializeKMSFromConfig(configContent []byte) error {
// JSON-only KMS configuration
if err := iam.initializeKMSFromJSON(configContent); err == nil {
glog.V(1).Infof("Successfully loaded KMS configuration from JSON format")
return nil
}
glog.V(2).Infof("No KMS configuration found in S3 config - SSE-KMS will not be available")
return nil
}
// initializeKMSFromJSON loads KMS configuration from JSON format when provided in the same file
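// The same S3 JSON config file may carry an optional top-level "kms" object
// next to "identities"/"accounts"; whatever sits under "kms" is passed
// unchanged to kms.LoadKMSFromConfig (its inner schema is provider-specific
// and not defined in this file):
//
//	{
//	  "identities": [ ... ],
//	  "kms": { ... }
//	}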
func (iam *IdentityAccessManagement) initializeKMSFromJSON(configContent []byte) error {
// Parse as generic JSON and extract optional "kms" block
var m map[string]any
if err := json.Unmarshal([]byte(strings.TrimSpace(string(configContent))), &m); err != nil {
return err
}
kmsVal, ok := m["kms"]
if !ok {
return fmt.Errorf("no KMS section found")
}
// Load KMS configuration directly from the parsed JSON data
return kms.LoadKMSFromConfig(kmsVal)
}
// SetIAMIntegration sets the IAM integration for advanced authentication and authorization
func (iam *IdentityAccessManagement) SetIAMIntegration(integration *S3IAMIntegration) {
iam.m.Lock()
defer iam.m.Unlock()
iam.iamIntegration = integration
}
// authenticateJWTWithIAM authenticates JWT tokens using the IAM integration
func (iam *IdentityAccessManagement) authenticateJWTWithIAM(r *http.Request) (*Identity, s3err.ErrorCode) {
ctx := r.Context()
// Use IAM integration to authenticate JWT
iamIdentity, errCode := iam.iamIntegration.AuthenticateJWT(ctx, r)
if errCode != s3err.ErrNone {
return nil, errCode
}
// Convert IAMIdentity to existing Identity structure
identity := &Identity{
Name: iamIdentity.Name,
Account: iamIdentity.Account,
Actions: []Action{}, // Empty - authorization handled by policy engine
}
// Store session info in request headers for later authorization
r.Header.Set("X-SeaweedFS-Session-Token", iamIdentity.SessionToken)
r.Header.Set("X-SeaweedFS-Principal", iamIdentity.Principal)
return identity, s3err.ErrNone
}
// authorizeWithIAM authorizes requests using the IAM integration policy engine
func (iam *IdentityAccessManagement) authorizeWithIAM(r *http.Request, identity *Identity, action Action, bucket string, object string) s3err.ErrorCode {
ctx := r.Context()
// Get session info from request headers (for JWT-based authentication)
sessionToken := r.Header.Get("X-SeaweedFS-Session-Token")
principal := r.Header.Get("X-SeaweedFS-Principal")
// Create IAMIdentity for authorization
iamIdentity := &IAMIdentity{
Name: identity.Name,
Account: identity.Account,
}
// Handle both session-based (JWT) and static-key-based (V4 signature) principals
if sessionToken != "" && principal != "" {
// JWT-based authentication - use session token and principal from headers
iamIdentity.Principal = principal
iamIdentity.SessionToken = sessionToken
glog.V(3).Infof("Using JWT-based IAM authorization for principal: %s", principal)
} else if identity.PrincipalArn != "" {
// V4 signature authentication - use principal ARN from identity
iamIdentity.Principal = identity.PrincipalArn
iamIdentity.SessionToken = "" // No session token for static credentials
glog.V(3).Infof("Using V4 signature IAM authorization for principal: %s", identity.PrincipalArn)
} else {
glog.V(3).Info("No valid principal information for IAM authorization")
return s3err.ErrAccessDenied
}
// Use IAM integration for authorization
return iam.iamIntegration.AuthorizeAction(ctx, iamIdentity, action, bucket, object, r)
}