* feat: Add AWS IAM Policy Variables support to S3 API
Implements policy variables for dynamic access control in bucket policies.
Supported variables:
- aws:username - Extracted from principal ARN
- aws:userid - User identifier (same as username in SeaweedFS)
- aws:principaltype - IAMUser, IAMRole, or AssumedRole
- jwt:* - Any JWT claim (e.g., jwt:preferred_username, jwt:sub)
Key changes:
- Added PolicyVariableRegex to detect ${...} patterns
- Extended CompiledStatement with DynamicResourcePatterns, DynamicPrincipalPatterns, DynamicActionPatterns
- Added Claims field to PolicyEvaluationArgs for JWT claim access
- Implemented SubstituteVariables() for variable replacement from context and JWT claims
- Implemented extractPrincipalVariables() for ARN parsing
- Updated EvaluateConditions() to support variable substitution
- Comprehensive unit and integration tests
Resolves #8037
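The principal-ARN extraction described above can be sketched with the standard library. Function and map-key names here are illustrative, not the exact SeaweedFS implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// principalVars derives policy variables from a principal ARN, e.g.
// arn:aws:iam::123456789012:user/engineering/alice.
func principalVars(arn string) map[string]string {
	vars := map[string]string{}
	parts := strings.SplitN(arn, ":", 6)
	if len(parts) != 6 {
		return vars
	}
	vars["aws:PrincipalAccount"] = parts[4]
	resource := parts[5] // e.g. "user/engineering/alice"
	segs := strings.Split(resource, "/")
	switch segs[0] {
	case "user":
		vars["aws:principaltype"] = "IAMUser"
	case "assumed-role":
		vars["aws:principaltype"] = "AssumedRole"
	case "role":
		vars["aws:principaltype"] = "IAMRole"
	}
	// The username is the last path segment, so principals with IAM
	// paths like user/engineering/alice still resolve to "alice".
	vars["aws:username"] = segs[len(segs)-1]
	return vars
}

func main() {
	v := principalVars("arn:aws:iam::123456789012:user/engineering/alice")
	fmt.Println(v["aws:username"], v["aws:PrincipalAccount"], v["aws:principaltype"])
}
```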
* feat: Add LDAP and PrincipalAccount variable support
Completes future enhancements for policy variables:
- Added ldap:* variable support for LDAP claims
- ldap:username - LDAP username from claims
- ldap:dn - LDAP distinguished name from claims
- ldap:* - Any LDAP claim
- Added aws:PrincipalAccount extraction from ARN
- Extracts account ID from principal ARN
- Available as ${aws:PrincipalAccount} in policies
Updated SubstituteVariables() to check LDAP claims
Updated extractPrincipalVariables() to extract account ID
Added comprehensive tests for new variables
* feat(s3api): implement IAM policy variables core logic and optimization
* feat(s3api): integrate policy variables with S3 authentication and handlers
* test(s3api): add integration tests for policy variables
* cleanup: remove unused policy conversion files
* Add S3 policy variables integration tests and path support
- Add comprehensive integration tests for policy variables
- Test username isolation, JWT claims, LDAP claims
- Add support for IAM paths in principal ARN parsing
- Add tests for principals with paths
* Fix IAM Role principal variable extraction
IAM Roles should not have aws:userid or aws:PrincipalAccount
according to AWS behavior. Only IAM Users and Assumed Roles
should have these variables.
Fixes TestExtractPrincipalVariables test failures.
* Security fixes and bug fixes for S3 policy variables
SECURITY FIXES:
- Prevent X-SeaweedFS-Principal header spoofing by clearing internal
headers at start of authentication (auth_credentials.go)
- Restrict policy variable substitution to safe allowlist to prevent
client header injection (iam/policy/policy_engine.go)
- Add core policy validation before storing bucket policies
BUG FIXES:
- Remove unused sid variable in evaluateStatement
- Fix LDAP claim lookup to check both prefixed and unprefixed keys
- Add ValidatePolicy call in PutBucketPolicyHandler
These fixes prevent privilege escalation via header injection and
ensure only validated identity claims are used in policy evaluation.
* Additional security fixes and code cleanup
SECURITY FIXES:
- Fixed X-Forwarded-For spoofing by only trusting proxy headers from
private/localhost IPs (s3_iam_middleware.go)
- Changed context key from "sourceIP" to "aws:SourceIp" for proper
policy variable substitution
CODE IMPROVEMENTS:
- Kept aws:PrincipalAccount for IAM Roles to support condition evaluations
- Removed redundant STS principaltype override
- Removed unused service variable
- Cleaned up commented-out debug logging statements
- Updated tests to reflect new IAM Role behavior
These changes prevent IP spoofing attacks and ensure policy variables
work correctly with the safe allowlist.
* Add security documentation for ParseJWTToken
Added comprehensive security comments explaining that ParseJWTToken
is safe despite parsing without verification because:
- It's only used for routing to the correct verification method
- All code paths perform cryptographic verification before trusting claims
- OIDC tokens: validated via validateExternalOIDCToken
- STS tokens: validated via ValidateSessionToken
Enhanced function documentation with clear security warnings about
proper usage to prevent future misuse.
* Fix IP condition evaluation to use aws:SourceIp key
Fixed evaluateIPCondition in IAM policy engine to use "aws:SourceIp"
instead of "sourceIP" to match the updated extractRequestContext.
This fixes the failing IP-restricted role test where IP-based policy
conditions were not being evaluated correctly.
Updated all test cases to use the correct "aws:SourceIp" key.
* Address code review feedback: optimize and clarify
PERFORMANCE IMPROVEMENT:
- Optimized expandPolicyVariables to use regexp.ReplaceAllStringFunc
for single-pass variable substitution instead of iterating through
all safe variables. This improves performance from O(n*m) to O(m)
where n is the number of safe variables and m is the pattern length.
CODE CLARITY:
- Added detailed comment explaining LDAP claim fallback mechanism
(checks both prefixed and unprefixed keys for compatibility)
- Enhanced TODO comment for trusted proxy configuration with rationale
and recommendations for supporting cloud load balancers, CDNs, and
complex network topologies
All tests passing.
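The optimization can be illustrated as follows: instead of calling strings.ReplaceAll once per allowlisted variable, a single regexp pass consults the allowlist per match. This is a sketch, not the actual expandPolicyVariables:

```go
package main

import (
	"fmt"
	"regexp"
)

var varPattern = regexp.MustCompile(`\$\{([^}]+)\}`)

// Allowlist of variables that may be substituted from request context;
// anything else is left verbatim, so client-controlled headers cannot
// inject values into a policy.
var safeVariables = map[string]bool{
	"aws:username": true,
	"aws:userid":   true,
	"aws:SourceIp": true,
}

// expand performs one pass over the pattern: work proportional to the
// pattern length, regardless of how many safe variables exist.
func expand(pattern string, ctx map[string]string) string {
	return varPattern.ReplaceAllStringFunc(pattern, func(m string) string {
		name := m[2 : len(m)-1] // strip "${" and "}"
		if safeVariables[name] {
			if v, ok := ctx[name]; ok {
				return v
			}
		}
		return m // unknown or missing variable: leave untouched
	})
}

func main() {
	ctx := map[string]string{"aws:username": "bob", "evil:header": "attacker"}
	fmt.Println(expand("home/${aws:username}/${evil:header}", ctx))
	// the non-allowlisted variable stays as-is
}
```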
* Address Copilot code review feedback
BUG FIXES:
- Fixed type switch for int/int32/int64 - separated into individual cases,
since a multi-type case leaves the value typed as the interface rather than the concrete numeric type
- Fixed grammatically incorrect error message in types.go
CODE QUALITY:
- Removed duplicate Resource/NotResource validation (already in ValidateStatement)
- Added comprehensive comment explaining isEnabled() logic and security implications
- Improved trusted proxy NOTE comment to be more concise while noting limitations
All tests passing.
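The type-switch pitfall fixed above is easy to reproduce: in a combined `case int, int32, int64:` the bound value keeps the interface type, so code that needs the concrete value must use one case per type. An illustrative example:

```go
package main

import "fmt"

// asInt64 converts supported numeric types to int64. A combined
// "case int, int32, int64:" would leave n typed as interface{},
// forcing a second type assertion; separate cases bind the concrete type.
func asInt64(v interface{}) (int64, bool) {
	switch n := v.(type) {
	case int:
		return int64(n), true
	case int32:
		return int64(n), true
	case int64:
		return n, true
	default:
		return 0, false
	}
}

func main() {
	for _, v := range []interface{}{int(1), int32(2), int64(3), "x"} {
		n, ok := asInt64(v)
		fmt.Println(n, ok)
	}
}
```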
* Fix test failures after extractSourceIP security changes
Updated tests to work with the security fix that only trusts
X-Forwarded-For/X-Real-IP headers from private IP addresses:
- Set RemoteAddr to 127.0.0.1 in tests to simulate trusted proxy
- Changed context key from "sourceIP" to "aws:SourceIp"
- Added test case for untrusted proxy (public RemoteAddr)
- Removed invalid ValidateStatement call (validation happens in ValidatePolicy)
All tests now passing.
* Address remaining Gemini code review feedback
CODE SAFETY:
- Deep clone Action field in CompileStatement to prevent potential data races
if the original policy document is modified after compilation
TEST CLEANUP:
- Remove debug logging (fmt.Fprintf) from engine_notresource_test.go
- Remove unused imports in engine_notresource_test.go
All tests passing.
* Fix insecure JWT parsing in IAM auth flow
SECURITY FIX:
- Renamed ParseJWTToken to ParseUnverifiedJWTToken with explicit security warnings.
- Refactored AuthenticateJWT to use the trusted SessionInfo returned by ValidateSessionToken
instead of relying on unverified claims from the initial parse.
- Refactored ValidatePresignedURLWithIAM to reuse the robust AuthenticateJWT logic, removing
duplicated and insecure manual token parsing.
This ensures all identity information (Role, Principal, Subject) used for authorization
decisions is derived solely from cryptographically verified tokens.
* Security: Fix insecure JWT claim extraction in policy engine
- Refactored EvaluatePolicy to accept trusted claims from verified Identity instead of parsing unverified tokens
- Updated AuthenticateJWT to populate Claims in IAMIdentity from verified sources (SessionInfo/ExternalIdentity)
- Updated s3api_server and handlers to pass claims correctly
- Improved isPrivateIP to support IPv6 loopback, link-local, and ULA
- Fixed flaky distributed_session_consistency test with retry logic
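The broadened isPrivateIP check can be sketched with the standard library (net.IP.IsPrivate covers RFC 1918 for IPv4 and ULA fc00::/7 for IPv6; loopback and link-local are tested separately). The function name here is illustrative:

```go
package main

import (
	"fmt"
	"net"
)

// isTrustedSource reports whether proxy headers (X-Forwarded-For,
// X-Real-IP) from this remote address may be believed. Sketch only;
// the real check lives in s3_iam_middleware.go.
func isTrustedSource(addr string) bool {
	ip := net.ParseIP(addr)
	if ip == nil {
		return false
	}
	return ip.IsLoopback() || // 127.0.0.0/8, ::1
		ip.IsLinkLocalUnicast() || // 169.254.0.0/16, fe80::/10
		ip.IsPrivate() // RFC 1918 ranges and IPv6 ULA fc00::/7
}

func main() {
	for _, a := range []string{"127.0.0.1", "::1", "10.0.0.5", "fd00::1", "fe80::1", "203.0.113.9"} {
		fmt.Println(a, isTrustedSource(a))
	}
}
```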
* fix(iam): populate Subject in STSSessionInfo to ensure correct identity propagation
This fixes the TestS3IAMAuthentication/valid_jwt_token_authentication failure by ensuring the session subject (sub) is correctly mapped to the internal SessionInfo struct, allowing bucket ownership validation to succeed.
* Optimized isPrivateIP
* Create s3-policy-tests.yml
* fix tests
* fix tests
* tests(s3/iam): simplify policy to resource-based (step 1)
* tests(s3/iam): add explicit Deny NotResource for isolation (step 2)
* fixes
* policy: skip resource matching for STS trust policies to allow AssumeRole evaluation
* refactor: remove debug logging and hoist policy variables for performance
* test: fix TestS3IAMBucketPolicyIntegration cleanup to handle per-subtest object lifecycle
* test: fix bucket name generation to comply with S3 63-char limit
* test: skip TestS3IAMPolicyEnforcement until role setup is implemented
* test: use weed mini for simpler test server deployment
Replace 'weed server' with 'weed mini' for IAM tests to avoid port binding issues
and simplify the all-in-one server deployment. This improves test reliability
and execution time.
* security: prevent allocation overflow in policy evaluation
Add maxPoliciesForEvaluation constant to cap the number of policies evaluated
in a single request. This prevents potential integer overflow when allocating
slices for policy lists that may be influenced by untrusted input.
Changes:
- Add const maxPoliciesForEvaluation = 1024 to set an upper bound
- Validate len(policies) < maxPoliciesForEvaluation before appending bucket policy
- Use append() instead of make([]string, len+1) to avoid arithmetic overflow
- Apply fix to both IsActionAllowed policy evaluation paths
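The capping pattern described above, sketched (the constant name is from the commit; the surrounding code is illustrative):

```go
package main

import "fmt"

// maxPoliciesForEvaluation bounds how many policies a single request may
// evaluate, so a slice allocation can never be driven toward overflow by
// untrusted input.
const maxPoliciesForEvaluation = 1024

// collectPolicies appends an optional bucket policy to the identity
// policies, using append (no len+1 arithmetic) and refusing to grow
// past the cap.
func collectPolicies(identityPolicies []string, bucketPolicy string) ([]string, error) {
	if len(identityPolicies) >= maxPoliciesForEvaluation {
		return nil, fmt.Errorf("too many policies: %d", len(identityPolicies))
	}
	if bucketPolicy != "" {
		identityPolicies = append(identityPolicies, bucketPolicy)
	}
	return identityPolicies, nil
}

func main() {
	ps, err := collectPolicies([]string{"identity-p1", "identity-p2"}, "bucket-policy")
	fmt.Println(len(ps), err)
}
```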
package s3api
|
|
|
|
import (
|
|
"context"
|
|
"encoding/json"
|
|
"errors"
|
|
"fmt"
|
|
"path/filepath"
|
|
"strings"
|
|
"sync"
|
|
"time"
|
|
|
|
"github.com/aws/aws-sdk-go/service/s3"
|
|
"google.golang.org/protobuf/proto"
|
|
|
|
"github.com/seaweedfs/seaweedfs/weed/glog"
|
|
"github.com/seaweedfs/seaweedfs/weed/kms"
|
|
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
|
|
"github.com/seaweedfs/seaweedfs/weed/pb/s3_pb"
|
|
"github.com/seaweedfs/seaweedfs/weed/s3api/cors"
|
|
"github.com/seaweedfs/seaweedfs/weed/s3api/policy_engine"
|
|
"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
|
|
"github.com/seaweedfs/seaweedfs/weed/s3api/s3err"
|
|
)
|
|
|
|
// BucketConfig represents cached bucket configuration
|
|
type BucketConfig struct {
|
|
Name string
|
|
Versioning string // "Enabled", "Suspended", or ""
|
|
Ownership string
|
|
ACL []byte
|
|
Owner string
|
|
IsPublicRead bool // Cached flag to avoid JSON parsing on every request
|
|
CORS *cors.CORSConfiguration
|
|
ObjectLockConfig *ObjectLockConfiguration // Cached parsed Object Lock configuration
|
|
BucketPolicy *policy_engine.PolicyDocument // Cached bucket policy for performance
|
|
KMSKeyCache *BucketKMSCache // Per-bucket KMS key cache for SSE-KMS operations
|
|
LastModified time.Time
|
|
Entry *filer_pb.Entry
|
|
}
|
|
|
|
// BucketKMSCache represents per-bucket KMS key caching for SSE-KMS operations
|
|
// This provides better isolation and automatic cleanup compared to global caching
|
|
type BucketKMSCache struct {
|
|
cache map[string]*BucketKMSCacheEntry // Key: contextHash, Value: cached data key
|
|
mutex sync.RWMutex
|
|
bucket string // Bucket name for logging/debugging
|
|
lastTTL time.Duration // TTL used for cache entries (typically 1 hour)
|
|
}
|
|
|
|
// BucketKMSCacheEntry represents a single cached KMS data key
|
|
type BucketKMSCacheEntry struct {
|
|
DataKey interface{} // Could be *kms.GenerateDataKeyResponse or similar
|
|
ExpiresAt time.Time
|
|
KeyID string
|
|
ContextHash string // Hash of encryption context for cache validation
|
|
}
|
|
|
|
// NewBucketKMSCache creates a new per-bucket KMS key cache
|
|
func NewBucketKMSCache(bucketName string, ttl time.Duration) *BucketKMSCache {
|
|
return &BucketKMSCache{
|
|
cache: make(map[string]*BucketKMSCacheEntry),
|
|
bucket: bucketName,
|
|
lastTTL: ttl,
|
|
}
|
|
}
|
|
|
|
// Get retrieves a cached KMS data key if it exists and hasn't expired
|
|
func (bkc *BucketKMSCache) Get(contextHash string) (*BucketKMSCacheEntry, bool) {
|
|
if bkc == nil {
|
|
return nil, false
|
|
}
|
|
|
|
bkc.mutex.RLock()
|
|
defer bkc.mutex.RUnlock()
|
|
|
|
entry, exists := bkc.cache[contextHash]
|
|
if !exists {
|
|
return nil, false
|
|
}
|
|
|
|
// Check if entry has expired
|
|
if time.Now().After(entry.ExpiresAt) {
|
|
return nil, false
|
|
}
|
|
|
|
return entry, true
|
|
}
|
|
|
|
// Set stores a KMS data key in the cache
|
|
func (bkc *BucketKMSCache) Set(contextHash, keyID string, dataKey interface{}, ttl time.Duration) {
|
|
if bkc == nil {
|
|
return
|
|
}
|
|
|
|
bkc.mutex.Lock()
|
|
defer bkc.mutex.Unlock()
|
|
|
|
bkc.cache[contextHash] = &BucketKMSCacheEntry{
|
|
DataKey: dataKey,
|
|
ExpiresAt: time.Now().Add(ttl),
|
|
KeyID: keyID,
|
|
ContextHash: contextHash,
|
|
}
|
|
bkc.lastTTL = ttl
|
|
}
|
|
|
|
// CleanupExpired removes expired entries from the cache
|
|
func (bkc *BucketKMSCache) CleanupExpired() int {
|
|
if bkc == nil {
|
|
return 0
|
|
}
|
|
|
|
bkc.mutex.Lock()
|
|
defer bkc.mutex.Unlock()
|
|
|
|
now := time.Now()
|
|
expiredCount := 0
|
|
|
|
for key, entry := range bkc.cache {
|
|
if now.After(entry.ExpiresAt) {
|
|
// Clear sensitive data before removing from cache
|
|
bkc.clearSensitiveData(entry)
|
|
delete(bkc.cache, key)
|
|
expiredCount++
|
|
}
|
|
}
|
|
|
|
return expiredCount
|
|
}
|
|
|
|
// Size returns the current number of cached entries
|
|
func (bkc *BucketKMSCache) Size() int {
|
|
if bkc == nil {
|
|
return 0
|
|
}
|
|
|
|
bkc.mutex.RLock()
|
|
defer bkc.mutex.RUnlock()
|
|
|
|
return len(bkc.cache)
|
|
}
|
|
|
|
// clearSensitiveData securely clears sensitive data from a cache entry
|
|
func (bkc *BucketKMSCache) clearSensitiveData(entry *BucketKMSCacheEntry) {
|
|
if dataKeyResp, ok := entry.DataKey.(*kms.GenerateDataKeyResponse); ok {
|
|
// Zero out the plaintext data key to prevent it from lingering in memory
|
|
if dataKeyResp.Plaintext != nil {
|
|
for i := range dataKeyResp.Plaintext {
|
|
dataKeyResp.Plaintext[i] = 0
|
|
}
|
|
dataKeyResp.Plaintext = nil
|
|
}
|
|
}
|
|
}
|
|
|
|
// Clear clears all cached KMS entries, securely zeroing sensitive data first
|
|
func (bkc *BucketKMSCache) Clear() {
|
|
if bkc == nil {
|
|
return
|
|
}
|
|
|
|
bkc.mutex.Lock()
|
|
defer bkc.mutex.Unlock()
|
|
|
|
// Clear sensitive data from all entries before deletion
|
|
for _, entry := range bkc.cache {
|
|
bkc.clearSensitiveData(entry)
|
|
}
|
|
|
|
// Clear the cache map
|
|
bkc.cache = make(map[string]*BucketKMSCacheEntry)
|
|
}
|
|
|
|
// BucketConfigCache provides caching for bucket configurations
|
|
// Cache entries are automatically updated/invalidated through metadata subscription events,
|
|
// so TTL serves as a safety fallback rather than the primary consistency mechanism
|
|
type BucketConfigCache struct {
|
|
cache map[string]*BucketConfig
|
|
negativeCache map[string]time.Time // Cache for non-existent buckets
|
|
mutex sync.RWMutex
|
|
ttl time.Duration // Safety fallback TTL; real-time consistency maintained via events
|
|
negativeTTL time.Duration // TTL for negative cache entries
|
|
}
|
|
|
|
// BucketMetadata represents the complete metadata for a bucket
|
|
type BucketMetadata struct {
|
|
Tags map[string]string `json:"tags,omitempty"`
|
|
CORS *cors.CORSConfiguration `json:"cors,omitempty"`
|
|
Encryption *s3_pb.EncryptionConfiguration `json:"encryption,omitempty"`
|
|
// Future extensions can be added here:
|
|
// Versioning *s3_pb.VersioningConfiguration `json:"versioning,omitempty"`
|
|
// Lifecycle *s3_pb.LifecycleConfiguration `json:"lifecycle,omitempty"`
|
|
// Notification *s3_pb.NotificationConfiguration `json:"notification,omitempty"`
|
|
// Replication *s3_pb.ReplicationConfiguration `json:"replication,omitempty"`
|
|
// Analytics *s3_pb.AnalyticsConfiguration `json:"analytics,omitempty"`
|
|
// Logging *s3_pb.LoggingConfiguration `json:"logging,omitempty"`
|
|
// Website *s3_pb.WebsiteConfiguration `json:"website,omitempty"`
|
|
// RequestPayer *s3_pb.RequestPayerConfiguration `json:"requestPayer,omitempty"`
|
|
// PublicAccess *s3_pb.PublicAccessConfiguration `json:"publicAccess,omitempty"`
|
|
}
|
|
|
|
// NewBucketMetadata creates a new BucketMetadata with default values
|
|
func NewBucketMetadata() *BucketMetadata {
|
|
return &BucketMetadata{
|
|
Tags: make(map[string]string),
|
|
}
|
|
}
|
|
|
|
// IsEmpty returns true if the metadata has no configuration set
|
|
func (bm *BucketMetadata) IsEmpty() bool {
|
|
return len(bm.Tags) == 0 && bm.CORS == nil && bm.Encryption == nil
|
|
}
|
|
|
|
// HasEncryption returns true if bucket has encryption configuration
|
|
func (bm *BucketMetadata) HasEncryption() bool {
|
|
return bm.Encryption != nil
|
|
}
|
|
|
|
// HasCORS returns true if bucket has CORS configuration
|
|
func (bm *BucketMetadata) HasCORS() bool {
|
|
return bm.CORS != nil
|
|
}
|
|
|
|
// HasTags returns true if bucket has tags
|
|
func (bm *BucketMetadata) HasTags() bool {
|
|
return len(bm.Tags) > 0
|
|
}
|
|
|
|
// NewBucketConfigCache creates a new bucket configuration cache
|
|
// TTL can be set to a longer duration since cache consistency is maintained
|
|
// through real-time metadata subscription events rather than TTL expiration
|
|
func NewBucketConfigCache(ttl time.Duration) *BucketConfigCache {
|
|
negativeTTL := ttl / 4 // Negative cache TTL is shorter than positive cache
|
|
if negativeTTL < 30*time.Second {
|
|
negativeTTL = 30 * time.Second // Minimum 30 seconds for negative cache
|
|
}
|
|
|
|
return &BucketConfigCache{
|
|
cache: make(map[string]*BucketConfig),
|
|
negativeCache: make(map[string]time.Time),
|
|
ttl: ttl,
|
|
negativeTTL: negativeTTL,
|
|
}
|
|
}
|
|
|
|
// Get retrieves bucket configuration from cache
|
|
func (bcc *BucketConfigCache) Get(bucket string) (*BucketConfig, bool) {
|
|
bcc.mutex.RLock()
|
|
defer bcc.mutex.RUnlock()
|
|
|
|
config, exists := bcc.cache[bucket]
|
|
if !exists {
|
|
return nil, false
|
|
}
|
|
|
|
// Check if cache entry is expired (safety fallback; entries are normally updated via events)
|
|
if time.Since(config.LastModified) > bcc.ttl {
|
|
return nil, false
|
|
}
|
|
|
|
return config, true
|
|
}
|
|
|
|
// Set stores bucket configuration in cache
|
|
func (bcc *BucketConfigCache) Set(bucket string, config *BucketConfig) {
|
|
bcc.mutex.Lock()
|
|
defer bcc.mutex.Unlock()
|
|
|
|
config.LastModified = time.Now()
|
|
bcc.cache[bucket] = config
|
|
}
|
|
|
|
// Remove removes bucket configuration from cache
|
|
func (bcc *BucketConfigCache) Remove(bucket string) {
|
|
bcc.mutex.Lock()
|
|
defer bcc.mutex.Unlock()
|
|
|
|
delete(bcc.cache, bucket)
|
|
}
|
|
|
|
// Clear clears all cached configurations
|
|
func (bcc *BucketConfigCache) Clear() {
|
|
bcc.mutex.Lock()
|
|
defer bcc.mutex.Unlock()
|
|
|
|
bcc.cache = make(map[string]*BucketConfig)
|
|
bcc.negativeCache = make(map[string]time.Time)
|
|
}
|
|
|
|
// IsNegativelyCached checks if a bucket is in the negative cache (doesn't exist)
|
|
func (bcc *BucketConfigCache) IsNegativelyCached(bucket string) bool {
|
|
bcc.mutex.Lock()
|
|
defer bcc.mutex.Unlock()
|
|
|
|
if cachedTime, exists := bcc.negativeCache[bucket]; exists {
|
|
// Check if the negative cache entry is still valid
|
|
if time.Since(cachedTime) < bcc.negativeTTL {
|
|
return true
|
|
}
|
|
// Entry expired, remove it
|
|
delete(bcc.negativeCache, bucket)
|
|
}
|
|
return false
|
|
}
|
|
|
|
// SetNegativeCache marks a bucket as non-existent in the negative cache
|
|
func (bcc *BucketConfigCache) SetNegativeCache(bucket string) {
|
|
bcc.mutex.Lock()
|
|
defer bcc.mutex.Unlock()
|
|
|
|
bcc.negativeCache[bucket] = time.Now()
|
|
}
|
|
|
|
// RemoveNegativeCache removes a bucket from the negative cache
|
|
func (bcc *BucketConfigCache) RemoveNegativeCache(bucket string) {
|
|
bcc.mutex.Lock()
|
|
defer bcc.mutex.Unlock()
|
|
|
|
delete(bcc.negativeCache, bucket)
|
|
}
|
|
|
|
// loadBucketPolicyFromExtended loads and parses bucket policy from entry extended attributes
|
|
func loadBucketPolicyFromExtended(entry *filer_pb.Entry, bucket string) *policy_engine.PolicyDocument {
|
|
if entry.Extended == nil {
|
|
return nil
|
|
}
|
|
|
|
policyJSON, exists := entry.Extended[BUCKET_POLICY_METADATA_KEY]
|
|
if !exists || len(policyJSON) == 0 {
|
|
glog.V(4).Infof("loadBucketPolicyFromExtended: no bucket policy found for bucket %s", bucket)
|
|
return nil
|
|
}
|
|
|
|
var policyDoc policy_engine.PolicyDocument
|
|
if err := json.Unmarshal(policyJSON, &policyDoc); err != nil {
|
|
glog.Errorf("loadBucketPolicyFromExtended: failed to parse bucket policy for %s: %v", bucket, err)
|
|
return nil
|
|
}
|
|
|
|
glog.V(3).Infof("loadBucketPolicyFromExtended: loaded bucket policy for bucket %s", bucket)
|
|
return &policyDoc
|
|
}
|
|
|
|
// getBucketConfig retrieves bucket configuration with caching
|
|
func (s3a *S3ApiServer) getBucketConfig(bucket string) (*BucketConfig, s3err.ErrorCode) {
|
|
// Check negative cache first
|
|
if s3a.bucketConfigCache.IsNegativelyCached(bucket) {
|
|
return nil, s3err.ErrNoSuchBucket
|
|
}
|
|
|
|
// Try positive cache
|
|
if config, found := s3a.bucketConfigCache.Get(bucket); found {
|
|
return config, s3err.ErrNone
|
|
}
|
|
|
|
// Try to get from filer
|
|
entry, err := s3a.getEntry(s3a.option.BucketsPath, bucket)
|
|
if err != nil {
|
|
if errors.Is(err, filer_pb.ErrNotFound) {
|
|
// Bucket doesn't exist - set negative cache
|
|
s3a.bucketConfigCache.SetNegativeCache(bucket)
|
|
return nil, s3err.ErrNoSuchBucket
|
|
}
|
|
glog.Errorf("getBucketConfig: failed to get bucket entry for %s: %v", bucket, err)
|
|
return nil, s3err.ErrInternalError
|
|
}
|
|
|
|
config := &BucketConfig{
|
|
Name: bucket,
|
|
Entry: entry,
|
|
IsPublicRead: false, // Explicitly default to false for private buckets
|
|
}
|
|
|
|
// Extract configuration from extended attributes
|
|
if entry.Extended != nil {
|
|
glog.V(3).Infof("getBucketConfig: checking extended attributes for bucket %s, ExtObjectLockEnabledKey value=%s",
|
|
bucket, string(entry.Extended[s3_constants.ExtObjectLockEnabledKey]))
|
|
if versioning, exists := entry.Extended[s3_constants.ExtVersioningKey]; exists {
|
|
config.Versioning = string(versioning)
|
|
}
|
|
if ownership, exists := entry.Extended[s3_constants.ExtOwnershipKey]; exists {
|
|
config.Ownership = string(ownership)
|
|
}
|
|
if acl, exists := entry.Extended[s3_constants.ExtAmzAclKey]; exists {
|
|
config.ACL = acl
|
|
// Parse ACL once and cache public-read status
|
|
config.IsPublicRead = parseAndCachePublicReadStatus(acl)
|
|
} else {
|
|
// No ACL means private bucket
|
|
config.IsPublicRead = false
|
|
}
|
|
if owner, exists := entry.Extended[s3_constants.ExtAmzOwnerKey]; exists {
|
|
config.Owner = string(owner)
|
|
}
|
|
// Parse Object Lock configuration if present
|
|
if objectLockConfig, found := LoadObjectLockConfigurationFromExtended(entry); found {
|
|
config.ObjectLockConfig = objectLockConfig
|
|
glog.V(3).Infof("getBucketConfig: loaded Object Lock config from extended attributes for bucket %s: %+v", bucket, objectLockConfig)
|
|
} else {
|
|
glog.V(3).Infof("getBucketConfig: no Object Lock config found in extended attributes for bucket %s", bucket)
|
|
}
|
|
|
|
// Load bucket policy if present (for performance optimization)
|
|
config.BucketPolicy = loadBucketPolicyFromExtended(entry, bucket)
|
|
}
|
|
|
|
// Sync bucket policy to the policy engine for evaluation
|
|
s3a.syncBucketPolicyToEngine(bucket, config.BucketPolicy)
|
|
|
|
// Load CORS configuration from bucket directory content
|
|
if corsConfig, err := s3a.loadCORSFromBucketContent(bucket); err != nil {
|
|
if errors.Is(err, filer_pb.ErrNotFound) {
|
|
// Missing metadata is not an error; fall back cleanly
|
|
glog.V(2).Infof("CORS metadata not found for bucket %s, falling back to default behavior", bucket)
|
|
} else {
|
|
// Log parsing or validation errors
|
|
glog.Errorf("Failed to load CORS configuration for bucket %s: %v", bucket, err)
|
|
}
|
|
} else {
|
|
config.CORS = corsConfig
|
|
}
|
|
|
|
// Cache the result
|
|
s3a.bucketConfigCache.Set(bucket, config)
|
|
|
|
return config, s3err.ErrNone
|
|
}
|
|
|
|
// updateBucketConfig updates bucket configuration and invalidates cache
|
|
func (s3a *S3ApiServer) updateBucketConfig(bucket string, updateFn func(*BucketConfig) error) s3err.ErrorCode {
|
|
config, errCode := s3a.getBucketConfig(bucket)
|
|
if errCode != s3err.ErrNone {
|
|
return errCode
|
|
}
|
|
|
|
// Apply update function
|
|
if err := updateFn(config); err != nil {
|
|
glog.Errorf("updateBucketConfig: update function failed for bucket %s: %v", bucket, err)
|
|
return s3err.ErrInternalError
|
|
}
|
|
|
|
// Prepare extended attributes
|
|
if config.Entry.Extended == nil {
|
|
config.Entry.Extended = make(map[string][]byte)
|
|
}
|
|
|
|
// Update extended attributes
|
|
if config.Versioning != "" {
|
|
config.Entry.Extended[s3_constants.ExtVersioningKey] = []byte(config.Versioning)
|
|
}
|
|
if config.Ownership != "" {
|
|
config.Entry.Extended[s3_constants.ExtOwnershipKey] = []byte(config.Ownership)
|
|
}
|
|
if config.ACL != nil {
|
|
config.Entry.Extended[s3_constants.ExtAmzAclKey] = config.ACL
|
|
}
|
|
if config.Owner != "" {
|
|
config.Entry.Extended[s3_constants.ExtAmzOwnerKey] = []byte(config.Owner)
|
|
}
|
|
// Update Object Lock configuration
|
|
if config.ObjectLockConfig != nil {
|
|
glog.V(3).Infof("updateBucketConfig: storing Object Lock config for bucket %s: %+v", bucket, config.ObjectLockConfig)
|
|
if err := StoreObjectLockConfigurationInExtended(config.Entry, config.ObjectLockConfig); err != nil {
|
|
glog.Errorf("updateBucketConfig: failed to store Object Lock configuration for bucket %s: %v", bucket, err)
|
|
return s3err.ErrInternalError
|
|
}
|
|
glog.V(3).Infof("updateBucketConfig: stored Object Lock config in extended attributes for bucket %s, key=%s, value=%s",
|
|
bucket, s3_constants.ExtObjectLockEnabledKey, string(config.Entry.Extended[s3_constants.ExtObjectLockEnabledKey]))
|
|
}
|
|
|
|
// Save to filer
|
|
glog.V(3).Infof("updateBucketConfig: saving entry to filer for bucket %s", bucket)
|
|
err := s3a.updateEntry(s3a.option.BucketsPath, config.Entry)
|
|
if err != nil {
|
|
glog.Errorf("updateBucketConfig: failed to update bucket entry for %s: %v", bucket, err)
|
|
return s3err.ErrInternalError
|
|
}
|
|
glog.V(3).Infof("updateBucketConfig: saved entry to filer for bucket %s", bucket)
|
|
|
|
// Update cache
|
|
s3a.bucketConfigCache.Set(bucket, config)
|
|
|
|
return s3err.ErrNone
|
|
}
|
|
|
|
// isVersioningEnabled checks if versioning is enabled for a bucket (with caching)
|
|
func (s3a *S3ApiServer) isVersioningEnabled(bucket string) (bool, error) {
|
|
config, errCode := s3a.getBucketConfig(bucket)
|
|
if errCode != s3err.ErrNone {
|
|
if errCode == s3err.ErrNoSuchBucket {
|
|
return false, filer_pb.ErrNotFound
|
|
}
|
|
return false, fmt.Errorf("failed to get bucket config: %v", errCode)
|
|
}
|
|
|
|
// Versioning is enabled if explicitly set to "Enabled" OR if object lock is enabled
|
|
// (since object lock requires versioning to be enabled)
|
|
return config.Versioning == s3_constants.VersioningEnabled || config.ObjectLockConfig != nil, nil
|
|
}
|
|
|
|
// isVersioningConfigured checks if versioning has been configured (either Enabled or Suspended)
|
|
func (s3a *S3ApiServer) isVersioningConfigured(bucket string) (bool, error) {
|
|
config, errCode := s3a.getBucketConfig(bucket)
|
|
if errCode != s3err.ErrNone {
|
|
if errCode == s3err.ErrNoSuchBucket {
|
|
return false, filer_pb.ErrNotFound
|
|
}
|
|
return false, fmt.Errorf("failed to get bucket config: %v", errCode)
|
|
}
|
|
|
|
// Versioning is configured if explicitly set to either "Enabled" or "Suspended"
|
|
// OR if object lock is enabled (which forces versioning)
|
|
return config.Versioning != "" || config.ObjectLockConfig != nil, nil
|
|
}
|
|
|
|
// isObjectLockEnabled checks if Object Lock is enabled for a bucket (with caching)
|
|
func (s3a *S3ApiServer) isObjectLockEnabled(bucket string) (bool, error) {
|
|
config, errCode := s3a.getBucketConfig(bucket)
|
|
if errCode != s3err.ErrNone {
|
|
if errCode == s3err.ErrNoSuchBucket {
|
|
return false, filer_pb.ErrNotFound
|
|
}
|
|
return false, fmt.Errorf("failed to get bucket config: %v", errCode)
|
|
}
|
|
|
|
return config.ObjectLockConfig != nil, nil
|
|
}
|
|
|
|
// getVersioningState returns the detailed versioning state for a bucket
|
|
func (s3a *S3ApiServer) getVersioningState(bucket string) (string, error) {
|
|
config, errCode := s3a.getBucketConfig(bucket)
|
|
if errCode != s3err.ErrNone {
|
|
if errCode == s3err.ErrNoSuchBucket {
|
|
// Signal to callers that the bucket does not exist so they can
|
|
// decide whether to auto-create it (e.g., in PUT handlers).
|
|
return "", filer_pb.ErrNotFound
|
|
}
|
|
glog.Errorf("getVersioningState: failed to get bucket config for %s: %v", bucket, errCode)
|
|
return "", fmt.Errorf("failed to get bucket config: %v", errCode)
|
|
}
|
|
|
|
// If object lock is enabled, versioning must be enabled regardless of explicit setting
|
|
if config.ObjectLockConfig != nil {
|
|
return s3_constants.VersioningEnabled, nil
|
|
}
|
|
|
|
// Return the explicit versioning status (empty string means never configured)
|
|
return config.Versioning, nil
|
|
}
|
|
|
|
// getBucketVersioningStatus returns the versioning status for a bucket
|
|
func (s3a *S3ApiServer) getBucketVersioningStatus(bucket string) (string, s3err.ErrorCode) {
|
|
config, errCode := s3a.getBucketConfig(bucket)
|
|
if errCode != s3err.ErrNone {
|
|
return "", errCode
|
|
}
|
|
|
|
// Return exactly what's stored - empty string means versioning was never configured
|
|
// This matches AWS S3 behavior where new buckets have no Status field in GetBucketVersioning response
|
|
return config.Versioning, s3err.ErrNone
|
|
}
|
|
|
|
// setBucketVersioningStatus sets the versioning status for a bucket
|
|
func (s3a *S3ApiServer) setBucketVersioningStatus(bucket, status string) s3err.ErrorCode {
|
|
errCode := s3a.updateBucketConfig(bucket, func(config *BucketConfig) error {
|
|
config.Versioning = status
|
|
return nil
|
|
})
|
|
return errCode
|
|
}
|
|
|
|
// getBucketOwnership returns the ownership setting for a bucket
|
|
func (s3a *S3ApiServer) getBucketOwnership(bucket string) (string, s3err.ErrorCode) {
|
|
config, errCode := s3a.getBucketConfig(bucket)
|
|
if errCode != s3err.ErrNone {
|
|
return "", errCode
|
|
}
|
|
|
|
return config.Ownership, s3err.ErrNone
|
|
}
|
|
|
|
// setBucketOwnership sets the ownership setting for a bucket
|
|
func (s3a *S3ApiServer) setBucketOwnership(bucket, ownership string) s3err.ErrorCode {
|
|
return s3a.updateBucketConfig(bucket, func(config *BucketConfig) error {
|
|
config.Ownership = ownership
|
|
return nil
|
|
})
|
|
}
|
|
|
|
// loadCORSFromBucketContent loads CORS configuration from bucket directory content
|
|
func (s3a *S3ApiServer) loadCORSFromBucketContent(bucket string) (*cors.CORSConfiguration, error) {
|
|
metadata, err := s3a.GetBucketMetadata(bucket)
|
|
if err != nil {
|
|
return nil, err
|
|
}
|
|
|
|
// Note: corsConfig can be nil if no CORS configuration is set, which is valid
|
|
return metadata.CORS, nil
|
|
}
|
|
|
|
// getCORSConfiguration retrieves CORS configuration with caching
|
|
func (s3a *S3ApiServer) getCORSConfiguration(bucket string) (*cors.CORSConfiguration, s3err.ErrorCode) {
|
|
config, errCode := s3a.getBucketConfig(bucket)
|
|
if errCode != s3err.ErrNone {
|
|
return nil, errCode
|
|
}
|
|
|
|
return config.CORS, s3err.ErrNone
|
|
}
|
|
|
|
// updateCORSConfiguration updates the CORS configuration for a bucket
func (s3a *S3ApiServer) updateCORSConfiguration(bucket string, corsConfig *cors.CORSConfiguration) s3err.ErrorCode {
	// Update using the structured API
	// Note: UpdateBucketCORS -> UpdateBucketMetadata -> setBucketMetadata
	// already invalidates the cache synchronously after a successful update
	err := s3a.UpdateBucketCORS(bucket, corsConfig)
	if err != nil {
		glog.Errorf("updateCORSConfiguration: failed to update CORS config for bucket %s: %v", bucket, err)
		return s3err.ErrInternalError
	}

	return s3err.ErrNone
}

// removeCORSConfiguration removes the CORS configuration for a bucket
func (s3a *S3ApiServer) removeCORSConfiguration(bucket string) s3err.ErrorCode {
	// Update using the structured API
	// Note: ClearBucketCORS -> UpdateBucketMetadata -> setBucketMetadata
	// already invalidates the cache synchronously after a successful update
	err := s3a.ClearBucketCORS(bucket)
	if err != nil {
		glog.Errorf("removeCORSConfiguration: failed to remove CORS config for bucket %s: %v", bucket, err)
		return s3err.ErrInternalError
	}

	return s3err.ErrNone
}

// Conversion functions between CORS types and protobuf types

// corsRuleToProto converts a CORS rule to protobuf format
func corsRuleToProto(rule cors.CORSRule) *s3_pb.CORSRule {
	return &s3_pb.CORSRule{
		AllowedHeaders: rule.AllowedHeaders,
		AllowedMethods: rule.AllowedMethods,
		AllowedOrigins: rule.AllowedOrigins,
		ExposeHeaders:  rule.ExposeHeaders,
		MaxAgeSeconds:  int32(getMaxAgeSecondsValue(rule.MaxAgeSeconds)),
		Id:             rule.ID,
	}
}

// corsRuleFromProto converts a protobuf CORS rule to standard format
func corsRuleFromProto(protoRule *s3_pb.CORSRule) cors.CORSRule {
	var maxAge *int
	// Create the pointer whenever MaxAgeSeconds is >= 0; maxAge stays nil only
	// when MaxAgeSeconds was explicitly set to a negative value. This prevents
	// nil pointer dereferences in tests and matches AWS behavior.
	if protoRule.MaxAgeSeconds >= 0 {
		age := int(protoRule.MaxAgeSeconds)
		maxAge = &age
	}

	return cors.CORSRule{
		AllowedHeaders: protoRule.AllowedHeaders,
		AllowedMethods: protoRule.AllowedMethods,
		AllowedOrigins: protoRule.AllowedOrigins,
		ExposeHeaders:  protoRule.ExposeHeaders,
		MaxAgeSeconds:  maxAge,
		ID:             protoRule.Id,
	}
}

// corsConfigToProto converts CORS configuration to protobuf format
func corsConfigToProto(config *cors.CORSConfiguration) *s3_pb.CORSConfiguration {
	if config == nil {
		return nil
	}

	protoRules := make([]*s3_pb.CORSRule, len(config.CORSRules))
	for i, rule := range config.CORSRules {
		protoRules[i] = corsRuleToProto(rule)
	}

	return &s3_pb.CORSConfiguration{
		CorsRules: protoRules,
	}
}

// corsConfigFromProto converts protobuf CORS configuration to standard format
func corsConfigFromProto(protoConfig *s3_pb.CORSConfiguration) *cors.CORSConfiguration {
	if protoConfig == nil {
		return nil
	}

	rules := make([]cors.CORSRule, len(protoConfig.CorsRules))
	for i, protoRule := range protoConfig.CorsRules {
		rules[i] = corsRuleFromProto(protoRule)
	}

	return &cors.CORSConfiguration{
		CORSRules: rules,
	}
}

// getMaxAgeSecondsValue safely extracts the max age seconds value, defaulting to 0 when unset
func getMaxAgeSecondsValue(maxAge *int) int {
	if maxAge == nil {
		return 0
	}
	return *maxAge
}

// parseAndCachePublicReadStatus parses the ACL and caches the public-read status
func parseAndCachePublicReadStatus(acl []byte) bool {
	var grants []*s3.Grant
	if err := json.Unmarshal(acl, &grants); err != nil {
		return false
	}

	// Check if any grant gives read permission to the "AllUsers" group
	for _, grant := range grants {
		if grant.Grantee != nil && grant.Grantee.URI != nil && grant.Permission != nil {
			// Check for the AllUsers group with READ or FULL_CONTROL permission
			if *grant.Grantee.URI == s3_constants.GranteeGroupAllUsers &&
				(*grant.Permission == s3_constants.PermissionRead || *grant.Permission == s3_constants.PermissionFullControl) {
				return true
			}
		}
	}

	return false
}

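// The public-read check above can be exercised against a minimal AWS-style
// ACL grant document. The standalone sketch below uses local stand-in types
// and literal permission strings ("READ", "FULL_CONTROL", the AllUsers group
// URI) instead of the aws-sdk s3.Grant type and the s3_constants values; it
// illustrates the scan, not the exact production types.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal stand-ins for the aws-sdk grant types used by parseAndCachePublicReadStatus.
type Grantee struct {
	URI *string `json:"URI,omitempty"`
}

type Grant struct {
	Grantee    *Grantee `json:"Grantee,omitempty"`
	Permission *string  `json:"Permission,omitempty"`
}

const granteeGroupAllUsers = "http://acs.amazonaws.com/groups/global/AllUsers"

// isPublicRead mirrors the loop above: any grant that gives READ or
// FULL_CONTROL to the AllUsers group marks the bucket as public-read.
func isPublicRead(acl []byte) bool {
	var grants []*Grant
	if err := json.Unmarshal(acl, &grants); err != nil {
		return false
	}
	for _, grant := range grants {
		if grant.Grantee != nil && grant.Grantee.URI != nil && grant.Permission != nil {
			if *grant.Grantee.URI == granteeGroupAllUsers &&
				(*grant.Permission == "READ" || *grant.Permission == "FULL_CONTROL") {
				return true
			}
		}
	}
	return false
}

func main() {
	acl := []byte(`[{"Grantee":{"URI":"http://acs.amazonaws.com/groups/global/AllUsers"},"Permission":"READ"}]`)
	fmt.Println(isPublicRead(acl))          // true
	fmt.Println(isPublicRead([]byte(`[]`))) // false
}
```

// Note the nil checks on Grantee, URI, and Permission: a malformed or partial
// grant entry is skipped rather than dereferenced.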
// getBucketMetadata retrieves bucket metadata as a structured object with caching
func (s3a *S3ApiServer) getBucketMetadata(bucket string) (*BucketMetadata, error) {
	if s3a.bucketConfigCache != nil {
		// Check the negative cache first
		if s3a.bucketConfigCache.IsNegativelyCached(bucket) {
			return nil, fmt.Errorf("bucket directory not found %s", bucket)
		}

		// Try to get from the positive cache
		if config, found := s3a.bucketConfigCache.Get(bucket); found {
			// Extract metadata from the cached config
			if metadata, err := s3a.extractMetadataFromConfig(config); err == nil {
				return metadata, nil
			}
			// If extraction fails, fall through to a direct load
		}
	}

	// Load directly from the filer
	return s3a.loadBucketMetadataFromFiler(bucket)
}

// extractMetadataFromConfig extracts BucketMetadata from a cached BucketConfig
func (s3a *S3ApiServer) extractMetadataFromConfig(config *BucketConfig) (*BucketMetadata, error) {
	if config == nil || config.Entry == nil {
		return NewBucketMetadata(), nil
	}

	// Parse metadata from the entry content if available
	if len(config.Entry.Content) > 0 {
		var protoMetadata s3_pb.BucketMetadata
		if err := proto.Unmarshal(config.Entry.Content, &protoMetadata); err != nil {
			glog.Errorf("extractMetadataFromConfig: failed to unmarshal protobuf metadata for bucket %s: %v", config.Name, err)
			return nil, err
		}
		// Convert protobuf to structured metadata
		metadata := &BucketMetadata{
			Tags:       protoMetadata.Tags,
			CORS:       corsConfigFromProto(protoMetadata.Cors),
			Encryption: protoMetadata.Encryption,
		}
		return metadata, nil
	}

	// Fallback: create metadata from the cached CORS config
	metadata := NewBucketMetadata()
	if config.CORS != nil {
		metadata.CORS = config.CORS
	}

	return metadata, nil
}

// loadBucketMetadataFromFiler loads bucket metadata directly from the filer
func (s3a *S3ApiServer) loadBucketMetadataFromFiler(bucket string) (*BucketMetadata, error) {
	// Validate the bucket name to prevent path traversal attacks
	if bucket == "" || strings.Contains(bucket, "/") || strings.Contains(bucket, "\\") ||
		strings.Contains(bucket, "..") || strings.Contains(bucket, "~") {
		return nil, fmt.Errorf("invalid bucket name: %s", bucket)
	}

	// Clean the bucket name further to prevent any potential path traversal
	bucket = filepath.Clean(bucket)
	if bucket == "." || bucket == ".." {
		return nil, fmt.Errorf("invalid bucket name: %s", bucket)
	}

	// Get the bucket directory entry to access its content
	entry, err := s3a.getEntry(s3a.option.BucketsPath, bucket)
	if err != nil {
		// Check if this is a "not found" error
		if errors.Is(err, filer_pb.ErrNotFound) {
			// Set the negative cache for the non-existent bucket
			if s3a.bucketConfigCache != nil {
				s3a.bucketConfigCache.SetNegativeCache(bucket)
			}
		}
		return nil, fmt.Errorf("error retrieving bucket directory %s: %w", bucket, err)
	}
	if entry == nil {
		// Set the negative cache for the non-existent bucket
		if s3a.bucketConfigCache != nil {
			s3a.bucketConfigCache.SetNegativeCache(bucket)
		}
		return nil, fmt.Errorf("bucket directory not found %s", bucket)
	}

	// If there is no content, return empty metadata
	if len(entry.Content) == 0 {
		return NewBucketMetadata(), nil
	}

	// Unmarshal metadata from protobuf
	var protoMetadata s3_pb.BucketMetadata
	if err := proto.Unmarshal(entry.Content, &protoMetadata); err != nil {
		glog.Errorf("loadBucketMetadataFromFiler: failed to unmarshal protobuf metadata for bucket %s: %v", bucket, err)
		return nil, fmt.Errorf("failed to unmarshal bucket metadata for %s: %w", bucket, err)
	}

	// Convert protobuf CORS to standard CORS
	corsConfig := corsConfigFromProto(protoMetadata.Cors)

	// Create and return structured metadata
	metadata := &BucketMetadata{
		Tags:       protoMetadata.Tags,
		CORS:       corsConfig,
		Encryption: protoMetadata.Encryption,
	}

	return metadata, nil
}

// setBucketMetadata stores bucket metadata from a structured object
func (s3a *S3ApiServer) setBucketMetadata(bucket string, metadata *BucketMetadata) error {
	// Validate the bucket name to prevent path traversal attacks
	if bucket == "" || strings.Contains(bucket, "/") || strings.Contains(bucket, "\\") ||
		strings.Contains(bucket, "..") || strings.Contains(bucket, "~") {
		return fmt.Errorf("invalid bucket name: %s", bucket)
	}

	// Clean the bucket name further to prevent any potential path traversal
	bucket = filepath.Clean(bucket)
	if bucket == "." || bucket == ".." {
		return fmt.Errorf("invalid bucket name: %s", bucket)
	}

	// Default to empty metadata if nil
	if metadata == nil {
		metadata = NewBucketMetadata()
	}

	// Create protobuf metadata
	protoMetadata := &s3_pb.BucketMetadata{
		Tags:       metadata.Tags,
		Cors:       corsConfigToProto(metadata.CORS),
		Encryption: metadata.Encryption,
	}

	// Marshal metadata to protobuf
	metadataBytes, err := proto.Marshal(protoMetadata)
	if err != nil {
		return fmt.Errorf("failed to marshal bucket metadata to protobuf: %w", err)
	}

	// Update the bucket entry with the new content
	err = s3a.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
		// Get the current bucket entry
		entry, err := s3a.getEntry(s3a.option.BucketsPath, bucket)
		if err != nil {
			return fmt.Errorf("error retrieving bucket directory %s: %w", bucket, err)
		}
		if entry == nil {
			return fmt.Errorf("bucket directory not found %s", bucket)
		}

		// Update the content with the marshaled metadata
		entry.Content = metadataBytes

		request := &filer_pb.UpdateEntryRequest{
			Directory: s3a.option.BucketsPath,
			Entry:     entry,
		}

		_, err = client.UpdateEntry(context.Background(), request)
		return err
	})

	// Invalidate the cache after a successful update
	if err == nil && s3a.bucketConfigCache != nil {
		s3a.bucketConfigCache.Remove(bucket)
		s3a.bucketConfigCache.RemoveNegativeCache(bucket) // Remove from the negative cache too
	}

	return err
}

// New structured API functions using BucketMetadata

// GetBucketMetadata retrieves complete bucket metadata as a structured object
func (s3a *S3ApiServer) GetBucketMetadata(bucket string) (*BucketMetadata, error) {
	return s3a.getBucketMetadata(bucket)
}

// SetBucketMetadata stores complete bucket metadata from a structured object
func (s3a *S3ApiServer) SetBucketMetadata(bucket string, metadata *BucketMetadata) error {
	return s3a.setBucketMetadata(bucket, metadata)
}

// UpdateBucketMetadata updates specific parts of bucket metadata while preserving others
//
// DISTRIBUTED SYSTEM DESIGN NOTE:
// This function implements a read-modify-write pattern with "last write wins" semantics.
// In the rare case of concurrent updates to different parts of bucket metadata
// (e.g., simultaneous tag and CORS updates), the last write may overwrite previous changes.
//
// This is an acceptable trade-off because:
//  1. Bucket metadata updates are infrequent in typical S3 usage
//  2. Traditional locking doesn't work in distributed systems across multiple nodes
//  3. The complexity of distributed consensus (e.g., Raft) for metadata updates would
//     be disproportionate to the low frequency of bucket configuration changes
//  4. Most bucket operations (tags, CORS, encryption) are typically configured once
//     during setup rather than being frequently modified
//
// If stronger consistency is required, consider implementing optimistic concurrency
// control with version numbers or ETags at the storage layer.
func (s3a *S3ApiServer) UpdateBucketMetadata(bucket string, update func(*BucketMetadata) error) error {
	// Get the current metadata
	metadata, err := s3a.GetBucketMetadata(bucket)
	if err != nil {
		return fmt.Errorf("failed to get current bucket metadata: %w", err)
	}

	// Apply the update function
	if err := update(metadata); err != nil {
		return fmt.Errorf("failed to apply metadata update: %w", err)
	}

	// Store the updated metadata (last write wins)
	return s3a.SetBucketMetadata(bucket, metadata)
}

// Helper functions for specific metadata operations using the structured API

// UpdateBucketTags sets bucket tags using the structured API
func (s3a *S3ApiServer) UpdateBucketTags(bucket string, tags map[string]string) error {
	return s3a.UpdateBucketMetadata(bucket, func(metadata *BucketMetadata) error {
		metadata.Tags = tags
		return nil
	})
}

// UpdateBucketCORS sets the bucket CORS configuration using the structured API
func (s3a *S3ApiServer) UpdateBucketCORS(bucket string, corsConfig *cors.CORSConfiguration) error {
	return s3a.UpdateBucketMetadata(bucket, func(metadata *BucketMetadata) error {
		metadata.CORS = corsConfig
		return nil
	})
}

// UpdateBucketEncryption sets the bucket encryption configuration using the structured API
func (s3a *S3ApiServer) UpdateBucketEncryption(bucket string, encryptionConfig *s3_pb.EncryptionConfiguration) error {
	return s3a.UpdateBucketMetadata(bucket, func(metadata *BucketMetadata) error {
		metadata.Encryption = encryptionConfig
		return nil
	})
}

// ClearBucketTags removes all bucket tags using the structured API
func (s3a *S3ApiServer) ClearBucketTags(bucket string) error {
	return s3a.UpdateBucketMetadata(bucket, func(metadata *BucketMetadata) error {
		metadata.Tags = make(map[string]string)
		return nil
	})
}

// ClearBucketCORS removes the bucket CORS configuration using the structured API
func (s3a *S3ApiServer) ClearBucketCORS(bucket string) error {
	return s3a.UpdateBucketMetadata(bucket, func(metadata *BucketMetadata) error {
		metadata.CORS = nil
		return nil
	})
}

// ClearBucketEncryption removes the bucket encryption configuration using the structured API
func (s3a *S3ApiServer) ClearBucketEncryption(bucket string) error {
	return s3a.UpdateBucketMetadata(bucket, func(metadata *BucketMetadata) error {
		metadata.Encryption = nil
		return nil
	})
}