* iam: add ServiceAccount protobuf schema

  Add ServiceAccount message type to iam.proto with support for:
  - Unique ID and parent user linkage
  - Optional expiration timestamp
  - Separate credentials (access key/secret)
  - Action restrictions (subset of parent)
  - Enable/disable status

  This is the first step toward implementing issue #7744 (IAM Service Account Support).

* iam: add service account response types

  Add IAM API response types for service account operations:
  - ServiceAccountInfo struct for marshaling account details
  - CreateServiceAccountResponse
  - DeleteServiceAccountResponse
  - ListServiceAccountsResponse
  - GetServiceAccountResponse
  - UpdateServiceAccountResponse

  Also add type aliases in the iamapi package for backwards compatibility.

  Part of issue #7744 (IAM Service Account Support).

* iam: implement service account API handlers

  Add CRUD operations for service accounts:
  - CreateServiceAccount: creates a service account with the ABIA key prefix
  - DeleteServiceAccount: removes the service account and its parent linkage
  - ListServiceAccounts: lists all service accounts, or filters by parent user
  - GetServiceAccount: retrieves service account details
  - UpdateServiceAccount: modifies status, description, and expiration

  Service accounts inherit the parent user's actions by default and support optional expiration timestamps.

  Part of issue #7744 (IAM Service Account Support).

* sts: add AssumeRoleWithWebIdentity HTTP endpoint

  Add an STS API HTTP endpoint for AWS SDK compatibility:
  - Create s3api_sts.go with HTTP handlers matching the AWS STS spec
  - Support the AssumeRoleWithWebIdentity action with a JWT token
  - Return an XML response with temporary credentials (AccessKeyId, SecretAccessKey, SessionToken) matching the AWS format
  - Register the STS route at POST /?Action=AssumeRoleWithWebIdentity

  This enables AWS SDKs (boto3, AWS CLI, etc.) to obtain temporary S3 credentials using OIDC/JWT tokens.

  Part of issue #7744 (IAM Service Account Support).
* test: add service account and STS integration tests

  Add integration tests for the new IAM features.

  s3_service_account_test.go:
  - TestServiceAccountLifecycle: Create, Get, List, Update, Delete
  - TestServiceAccountValidation: error handling for missing params

  s3_sts_test.go:
  - TestAssumeRoleWithWebIdentityValidation: parameter validation
  - TestAssumeRoleWithWebIdentityWithMockJWT: JWT token handling

  Tests skip gracefully when SeaweedFS is not running or when IAM features are not configured.

  Part of issue #7744 (IAM Service Account Support).

* iam: address code review comments

  - Add constants for service account ID and key lengths
  - Use strconv.ParseInt instead of fmt.Sscanf for better error handling
  - Allow clearing descriptions by checking key existence in url.Values
  - Replace magic numbers (12, 20, 40) with named constants

  Addresses review comments from gemini-code-assist[bot].

* test: add proper error handling in service account tests

  Use require.NoError(t, err) for io.ReadAll and xml.Unmarshal to prevent silent failures and ensure test reliability.

  Addresses a review comment from gemini-code-assist[bot].

* test: add proper error handling in STS tests

  Use require.NoError(t, err) for io.ReadAll and xml.Unmarshal to prevent silent failures and ensure test reliability. This fix is repeated throughout the file.

  Addresses a review comment from gemini-code-assist[bot] in PR #7901.

* iam: address additional code review comments

  - Map STS service errors to specific error codes
  - Distinguish between Sender and Receiver error types in STS responses
  - Add nil checks for credentials in List/GetServiceAccount
  - Validate that the expiration date is in the future
  - Improve integration test error messages (include the response body)
  - Add a credential verification step in service account tests

  Addresses remaining review comments from gemini-code-assist[bot] across multiple files.
* iam: fix shared slice reference in service account creation

  Copy the parent's actions into an independent slice for the service account instead of sharing the underlying array. This prevents unexpected mutations when the parent's actions are modified later.

  Addresses a review comment from coderabbitai[bot] in PR #7901.

* iam: remove duplicate unused constant

  Removed the redundant iamServiceAccountKeyPrefix, as ServiceAccountKeyPrefix is already defined and used. Addresses a remaining cleanup task.

* sts: document limitation of string-based error mapping

  Added a TODO comment explaining that the current string-based error mapping approach is fragile and should be replaced with typed errors from the STS service in a future refactoring. This addresses the architectural concern raised in code review while deferring the actual implementation to a separate PR, to avoid scope creep in the current service account feature.

* iam: fix remaining review issues

  - Add future-date validation for expiration in UpdateServiceAccount
  - Reorder tests so credential verification happens before deletion
  - Fix a compilation error by using the correct JWT generation methods

  Addresses final review comments from coderabbitai[bot].

* iam: fix service account access key length

  Access key IDs were incorrectly generated with 24 characters instead of the AWS-standard 20. This was caused by generating 20 random characters and then prepending the 4-character ABIA prefix. Fixed by subtracting the prefix length from AccessKeyLength, so the final key is ABIA (4 chars) + random (16 chars) = 20 chars total. This ensures compatibility with S3 clients that validate key length.
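The corrected length arithmetic can be illustrated with a small sketch. The constant names mirror the ones mentioned above but this is not the actual SeaweedFS implementation, and the character alphabet is an arbitrary stand-in:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// Assumed constants mirroring the fix described above; the real names
// live in the SeaweedFS IAM package.
const (
	serviceAccountKeyPrefix = "ABIA"
	accessKeyLength         = 20 // AWS-standard total length, prefix included
)

// keyAlphabet is an illustrative 32-character alphabet for the random part.
const keyAlphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"

// newServiceAccountAccessKey generates prefix + (accessKeyLength - len(prefix))
// random characters, so the final key is exactly 20 characters:
// ABIA (4) + random (16) = 20.
func newServiceAccountAccessKey() (string, error) {
	randomLen := accessKeyLength - len(serviceAccountKeyPrefix)
	buf := make([]byte, randomLen)
	for i := range buf {
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(keyAlphabet))))
		if err != nil {
			return "", err
		}
		buf[i] = keyAlphabet[n.Int64()]
	}
	return serviceAccountKeyPrefix + string(buf), nil
}

func main() {
	key, err := newServiceAccountAccessKey()
	if err != nil {
		panic(err)
	}
	fmt.Println(key, len(key))
}
```

Subtracting the prefix length before generating the random part is the whole fix: the earlier code generated 20 random characters and *then* prepended "ABIA", producing 24.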
* test: add comprehensive service account security tests

  Added comprehensive integration tests for service account functionality:
  - TestServiceAccountS3Access: verify SA credentials work for S3 operations
  - TestServiceAccountExpiration: test expiration date validation and enforcement
  - TestServiceAccountInheritedPermissions: verify the parent-child relationship
  - TestServiceAccountAccessKeyFormat: validate the AWS-compatible key format (ABIA prefix, 20-char length)

  These tests ensure SeaweedFS service accounts are compatible with AWS conventions and provide robust security coverage.

* iam: remove unused UserAccessKeyPrefix constant

  Code cleanup to remove unused constants.

* iam: remove unused iamCommonResponse type alias

  Code cleanup to remove unused type aliases.

* iam: restore and use UserAccessKeyPrefix constant

  Restored the UserAccessKeyPrefix constant and updated s3api tests to use it instead of hardcoded strings, for better maintainability and consistency.

* test: improve error handling in service account security tests

  Added explicit error checking for io.ReadAll and xml.Unmarshal in TestServiceAccountExpiration to ensure failures are reported correctly and cleanup is performed only when appropriate. Also added logging for failed responses.

* test: use t.Cleanup for reliable resource cleanup

  Replaced defer with t.Cleanup to ensure service account cleanup runs even when require.NoError fails. Also switched from manual error checking to require.NoError for more idiomatic testify usage.
* iam: add CreatedBy field and optimize identity lookups

  - Added a createdBy parameter to CreateServiceAccount to track who created each service account
  - Extract the creator identity from the request context using GetIdentityNameFromContext
  - Populate the created_by field in the ServiceAccount protobuf
  - Added a findIdentityByName helper function to optimize identity lookups
  - Replaced nested loops with O(n) helper function calls in CreateServiceAccount and DeleteServiceAccount

  This addresses code review feedback for better auditing and performance.

* iam: prevent user deletion when service accounts exist

  Following AWS IAM behavior, prevent deletion of users that have active service accounts. This forces explicit cleanup and prevents orphaned service account resources with invalid ParentUser references. Users must delete all associated service accounts before deleting the parent user, providing safer resource management.

* sts: enhance TODO with typed error implementation guidance

  Updated the TODO comment with a detailed implementation approach for replacing string-based error matching with typed errors using errors.Is(). This provides a clear roadmap for a follow-up PR to improve error handling robustness and maintainability.

* iam: add operational limits for service account creation

  Added AWS IAM-compatible safeguards to prevent resource exhaustion:
  - Maximum of 100 service accounts per user (LimitExceededException)
  - Maximum description length of 1000 characters (InvalidInputException)

  These limits prevent accidental or malicious resource exhaustion without impacting legitimate use cases.

* iam: add missing operational limit constants

  Added the MaxServiceAccountsPerUser and MaxDescriptionLength constants that were referenced in the previous commit but not defined.

* iam: enforce service account expiration during authentication

  CRITICAL SECURITY FIX: expired service account credentials were not being rejected during authentication, allowing continued access after expiration.

  Changes:
  - Added an Expiration field to the Credential struct
  - Populate expiration when loading service accounts from configuration
  - Check expiration in all authentication paths (V2 and V4 signatures)
  - Return ErrExpiredToken for expired credentials

  This ensures expired service accounts are properly rejected at authentication time, matching AWS IAM behavior and preventing unauthorized access.

* iam: fix error code for expired service account credentials

  Use ErrAccessDenied instead of the non-existent ErrExpiredToken for expired service account credentials. This provides appropriate access denial for expired credentials while maintaining AWS-compatible error responses.

* iam: fix remaining ErrExpiredToken references

  Replace all remaining instances of the non-existent ErrExpiredToken with ErrAccessDenied for expired service account credentials.

* iam: apply AWS-standard key format to user access keys

  Updated CreateAccessKey to generate AWS-standard 20-character access keys with the AKIA prefix for regular users, matching the format used for service accounts. This ensures consistency across all access key types and full AWS compatibility.
  - Access keys: AKIA + 16 random chars = 20 total (was 21 chars, no prefix)
  - Secret keys: 40 random chars (was 42; now matches the AWS standard)
  - Uses the AccessKeyLength and UserAccessKeyPrefix constants

* sts: replace fragile string-based error matching with typed errors

  Implemented robust error handling using typed errors and errors.Is() instead of fragile strings.Contains() matching. This decouples the HTTP layer from service implementation details and prevents errors from being miscategorized if error messages change.

  Changes:
  - Added typed error variables to weed/iam/sts/constants.go:
    * ErrTypedTokenExpired
    * ErrTypedInvalidToken
    * ErrTypedInvalidIssuer
    * ErrTypedInvalidAudience
    * ErrTypedMissingClaims
  - Updated the STS service to wrap provider authentication errors with typed errors
  - Replaced strings.Contains() with errors.Is() in the HTTP layer for error checking
  - Removed the TODO comment, as the improvement is now implemented

  This makes error handling more maintainable and reliable.

* sts: eliminate all string-based error matching with provider-level typed errors

  Completed the typed error implementation by adding provider-level typed errors and updating provider implementations to return them. This eliminates ALL fragile string matching throughout the entire error handling stack.

  Changes:
  - Added typed error definitions to weed/iam/providers/errors.go:
    * ErrProviderTokenExpired
    * ErrProviderInvalidToken
    * ErrProviderInvalidIssuer
    * ErrProviderInvalidAudience
    * ErrProviderMissingClaims
  - Updated the OIDC provider to wrap JWT validation errors with typed provider errors
  - Replaced strings.Contains() with errors.Is() in the STS service for error mapping
  - Completed the error chain: Provider -> STS -> HTTP layer, all using errors.Is()

  This provides:
  - Reliable error classification independent of error message content
  - Type-safe error checking throughout the stack
  - No order-dependent string matching
  - Maintainable error handling that won't break with message changes

* oidc: use jwt.ErrTokenExpired instead of string matching

  Replaced the last remaining string-based error check with the JWT library's exported typed error. This makes the error detection independent of error message content and more robust against library updates.

  Changed from: strings.Contains(errMsg, "expired")
  To: errors.Is(err, jwt.ErrTokenExpired)

  This completes the elimination of ALL string-based error matching throughout the entire authentication stack.
* iam: add description length validation to UpdateServiceAccount

  Fixed an inconsistency where UpdateServiceAccount didn't validate description length against MaxDescriptionLength, allowing operational limits to be bypassed during updates. Now validates that updated descriptions don't exceed 1000 characters, matching the validation in CreateServiceAccount.

* iam: refactor expiration check into helper method

  Extracted the duplicated credential expiration check into a helper method to reduce code duplication and improve maintainability. Added the Credential.isCredentialExpired() method and replaced 5 instances of inline expiration checks across auth_signature_v2.go and auth_signature_v4.go.

* iam: address critical Copilot security and consistency feedback

  Fixed three critical issues identified by Copilot code review:

  1. SECURITY: prevent loading disabled service account credentials
     - Added a check to skip disabled service accounts during credential loading
     - Disabled accounts can no longer authenticate

  2. Add DurationSeconds validation for STS AssumeRoleWithWebIdentity
     - Enforce the AWS-compatible range: 900-43200 seconds (15 min - 12 hours)
     - Return a proper error for out-of-range values

  3. Fix expiration update consistency in UpdateServiceAccount
     - Added a key existence check, like the Description field
     - Allows explicit clearing of the expiration by setting it to an empty string
     - Distinguishes between "not updating" and "clearing the expiration"

* sts: remove unused durationSecondsStr variable

  Fixed a build error from an unused variable after refactoring the duration parsing.

* iam: address remaining Copilot feedback and remove dead code

  Completed the remaining Copilot code review items:

  1. Remove the unused getPermission() method (dead code)
     - The method was defined but never called anywhere

  2. Improve slice modification safety in DeleteServiceAccount
     - Replaced append-with-slice-operations with a filter pattern
     - Avoids potential issues from mutating a slice during iteration

  3. Fix route registration order
     - Moved the STS route registration BEFORE the IAM route
     - Prevents the IAM route from intercepting STS requests
     - The more specific route (with a query parameter) is now registered first

* iam: improve expiration validation and test cleanup robustness

  Addressed additional Copilot feedback:

  1. Make expiration validation more explicit
     - Added an explicit check for negative values
     - Added a comment clarifying that 0 is allowed, to clear the expiration
     - Improves code readability and intent

  2. Fix test cleanup order in s3_service_account_test.go
     - Track created service accounts in a slice
     - Delete all service accounts before deleting the parent user
     - Prevents DeleteConflictException during cleanup
     - More robust cleanup even if a test fails mid-execution

  Note: s3_service_account_security_test.go already had the correct cleanup order due to LIFO defer execution.

* test: remove redundant variable assignments

  Removed duplicate assignments of createdSAId, createdAccessKeyId, and createdSecretAccessKey on lines 148-150 that were already assigned on lines 132-134.
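The filter pattern adopted for DeleteServiceAccount (mentioned above) can be sketched as follows. The serviceAccount type and removeServiceAccount function are hypothetical stand-ins for the real protobuf message and deletion logic:

```go
package main

import "fmt"

// serviceAccount is a hypothetical record; the real type is a protobuf message.
type serviceAccount struct {
	Id         string
	ParentUser string
}

// removeServiceAccount uses the filter pattern: build a new slice rather
// than splicing the slice being iterated, so the input is never mutated
// mid-iteration.
func removeServiceAccount(accounts []serviceAccount, id string) []serviceAccount {
	// The [:0:0] full slice expression yields len 0, cap 0, so the first
	// append allocates a fresh backing array (no aliasing with the input).
	filtered := accounts[:0:0]
	for _, sa := range accounts {
		if sa.Id != id {
			filtered = append(filtered, sa)
		}
	}
	return filtered
}

func main() {
	accounts := []serviceAccount{{Id: "a"}, {Id: "b"}, {Id: "c"}}
	fmt.Println(len(removeServiceAccount(accounts, "b")))
}
```

Compared to in-place `append(s[:i], s[i+1:]...)` splicing, the filter pattern keeps the original slice intact, which avoids subtle bugs when other code still holds a reference to it.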
743 lines
34 KiB
Go
package s3api

import (
	"context"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
	"os"
	"slices"
	"strings"
	"sync"
	"time"

	"github.com/gorilla/mux"
	"google.golang.org/grpc"

	"github.com/seaweedfs/seaweedfs/weed/cluster"
	"github.com/seaweedfs/seaweedfs/weed/credential"
	"github.com/seaweedfs/seaweedfs/weed/filer"
	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/iam/integration"
	"github.com/seaweedfs/seaweedfs/weed/iam/policy"
	"github.com/seaweedfs/seaweedfs/weed/iam/sts"
	"github.com/seaweedfs/seaweedfs/weed/pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/s3_pb"
	. "github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
	"github.com/seaweedfs/seaweedfs/weed/s3api/s3err"
	"github.com/seaweedfs/seaweedfs/weed/security"
	"github.com/seaweedfs/seaweedfs/weed/util"
	"github.com/seaweedfs/seaweedfs/weed/util/grace"
	util_http "github.com/seaweedfs/seaweedfs/weed/util/http"
	util_http_client "github.com/seaweedfs/seaweedfs/weed/util/http/client"
	"github.com/seaweedfs/seaweedfs/weed/wdclient"
)
type S3ApiServerOption struct {
	Filers                    []pb.ServerAddress
	Masters                   []pb.ServerAddress // For filer discovery
	Port                      int
	Config                    string
	DomainName                string
	AllowedOrigins            []string
	BucketsPath               string
	GrpcDialOption            grpc.DialOption
	AllowDeleteBucketNotEmpty bool
	LocalFilerSocket          string
	DataCenter                string
	FilerGroup                string
	IamConfig                 string // Advanced IAM configuration file path
	ConcurrentUploadLimit     int64
	ConcurrentFileUploadLimit int64
	EnableIam                 bool // Enable embedded IAM API on the same port
	Cipher                    bool // encrypt data on volume servers
}

type S3ApiServer struct {
	s3_pb.UnimplementedSeaweedS3Server
	option                *S3ApiServerOption
	iam                   *IdentityAccessManagement
	iamIntegration        *S3IAMIntegration // Advanced IAM integration for JWT authentication
	cb                    *CircuitBreaker
	randomClientId        int32
	filerGuard            *security.Guard
	filerClient           *wdclient.FilerClient
	client                util_http_client.HTTPClientInterface
	bucketRegistry        *BucketRegistry
	credentialManager     *credential.CredentialManager
	bucketConfigCache     *BucketConfigCache
	policyEngine          *BucketPolicyEngine // Engine for evaluating bucket policies
	inFlightDataSize      int64
	inFlightUploads       int64
	inFlightDataLimitCond *sync.Cond
	embeddedIam           *EmbeddedIamApi // Embedded IAM API server (when enabled)
	stsHandlers           *STSHandlers    // STS HTTP handlers for AssumeRoleWithWebIdentity
	cipher                bool            // encrypt data on volume servers
}
func NewS3ApiServer(router *mux.Router, option *S3ApiServerOption) (s3ApiServer *S3ApiServer, err error) {
	return NewS3ApiServerWithStore(router, option, "")
}
func NewS3ApiServerWithStore(router *mux.Router, option *S3ApiServerOption, explicitStore string) (s3ApiServer *S3ApiServer, err error) {
	if len(option.Filers) == 0 {
		return nil, fmt.Errorf("at least one filer address is required")
	}

	startTsNs := time.Now().UnixNano()

	v := util.GetViper()
	signingKey := v.GetString("jwt.filer_signing.key")
	v.SetDefault("jwt.filer_signing.expires_after_seconds", 10)
	expiresAfterSec := v.GetInt("jwt.filer_signing.expires_after_seconds")

	readSigningKey := v.GetString("jwt.filer_signing.read.key")
	v.SetDefault("jwt.filer_signing.read.expires_after_seconds", 60)
	readExpiresAfterSec := v.GetInt("jwt.filer_signing.read.expires_after_seconds")

	v.SetDefault("cors.allowed_origins.values", "*")

	if len(option.AllowedOrigins) == 0 {
		allowedOrigins := v.GetString("cors.allowed_origins.values")
		domains := strings.Split(allowedOrigins, ",")
		option.AllowedOrigins = domains
	}

	iam := NewIdentityAccessManagementWithStore(option, explicitStore)

	// Initialize bucket policy engine first
	policyEngine := NewBucketPolicyEngine()

	// Initialize FilerClient for volume location caching
	// Uses the battle-tested vidMap with filer-based lookups
	// Supports multiple filer addresses with automatic failover for high availability
	var filerClient *wdclient.FilerClient
	if len(option.Masters) > 0 && option.FilerGroup != "" {
		// Enable filer discovery via master
		masterMap := make(map[string]pb.ServerAddress)
		for i, addr := range option.Masters {
			masterMap[fmt.Sprintf("master%d", i)] = addr
		}
		masterClient := wdclient.NewMasterClient(option.GrpcDialOption, option.FilerGroup, cluster.S3Type, "", "", "", *pb.NewServiceDiscoveryFromMap(masterMap))
		// Start the master client connection loop - required for GetMaster() to work
		go masterClient.KeepConnectedToMaster(context.Background())

		filerClient = wdclient.NewFilerClient(option.Filers, option.GrpcDialOption, option.DataCenter, &wdclient.FilerClientOption{
			MasterClient:      masterClient,
			FilerGroup:        option.FilerGroup,
			DiscoveryInterval: 5 * time.Minute,
		})
		glog.V(0).Infof("S3 API initialized FilerClient with %d filer(s) and discovery enabled (group: %s, masters: %v)",
			len(option.Filers), option.FilerGroup, option.Masters)
	} else {
		filerClient = wdclient.NewFilerClient(option.Filers, option.GrpcDialOption, option.DataCenter)
		glog.V(0).Infof("S3 API initialized FilerClient with %d filer(s) (no discovery)", len(option.Filers))
	}

	// Update credential store to use FilerClient's current filer for HA
	if store := iam.credentialManager.GetStore(); store != nil {
		if filerFuncSetter, ok := store.(interface {
			SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption)
		}); ok {
			// Use FilerClient's GetCurrentFiler for true HA
			filerFuncSetter.SetFilerAddressFunc(filerClient.GetCurrentFiler, option.GrpcDialOption)
			glog.V(1).Infof("Updated credential store to use FilerClient's current active filer (HA-aware)")
		}
	}

	s3ApiServer = &S3ApiServer{
		option:                option,
		iam:                   iam,
		randomClientId:        util.RandomInt32(),
		filerGuard:            security.NewGuard([]string{}, signingKey, expiresAfterSec, readSigningKey, readExpiresAfterSec),
		filerClient:           filerClient,
		cb:                    NewCircuitBreaker(option),
		credentialManager:     iam.credentialManager,
		bucketConfigCache:     NewBucketConfigCache(60 * time.Minute), // Increased TTL since cache is now event-driven
		policyEngine:          policyEngine,                           // Initialize bucket policy engine
		inFlightDataLimitCond: sync.NewCond(new(sync.Mutex)),
		cipher:                option.Cipher,
	}

	// Set s3a reference in circuit breaker for upload limiting
	s3ApiServer.cb.s3a = s3ApiServer

	// Pass policy engine to IAM for bucket policy evaluation
	// This avoids circular dependency by not passing the entire S3ApiServer
	iam.policyEngine = policyEngine

	// Initialize advanced IAM system if config is provided
	if option.IamConfig != "" {
		glog.V(1).Infof("Loading advanced IAM configuration from: %s", option.IamConfig)

		// Use FilerClient's GetCurrentFiler for HA-aware filer selection
		iamManager, err := loadIAMManagerFromConfig(option.IamConfig, func() string {
			return string(filerClient.GetCurrentFiler())
		})
		if err != nil {
			glog.Errorf("Failed to load IAM configuration: %v", err)
		} else {
			// Create S3 IAM integration with the loaded IAM manager
			// filerAddress not actually used, just for backward compatibility
			s3iam := NewS3IAMIntegration(iamManager, "")

			// Set IAM integration in server
			s3ApiServer.iamIntegration = s3iam

			// Set the integration in the traditional IAM for compatibility
			iam.SetIAMIntegration(s3iam)

			// Initialize STS HTTP handlers for AssumeRoleWithWebIdentity endpoint
			if stsService := iamManager.GetSTSService(); stsService != nil {
				s3ApiServer.stsHandlers = NewSTSHandlers(stsService)
				glog.V(1).Infof("STS HTTP handlers initialized for AssumeRoleWithWebIdentity")
			}

			glog.V(1).Infof("Advanced IAM system initialized successfully with HA filer support")
		}
	}

	// Initialize embedded IAM API if enabled
	if option.EnableIam {
		s3ApiServer.embeddedIam = NewEmbeddedIamApi(s3ApiServer.credentialManager, iam)
		glog.V(0).Infof("Embedded IAM API initialized (use -iam=false to disable)")
	}

	if option.Config != "" {
		grace.OnReload(func() {
			if err := s3ApiServer.iam.loadS3ApiConfigurationFromFile(option.Config); err != nil {
				glog.Errorf("fail to load config file %s: %v", option.Config, err)
			} else {
				glog.V(1).Infof("Loaded %d identities from config file %s", len(s3ApiServer.iam.identities), option.Config)
			}
		})
	}
	s3ApiServer.bucketRegistry = NewBucketRegistry(s3ApiServer)
	if option.LocalFilerSocket == "" {
		if s3ApiServer.client, err = util_http.NewGlobalHttpClient(); err != nil {
			return nil, err
		}
	} else {
		s3ApiServer.client = &http.Client{
			Transport: &http.Transport{
				DialContext: func(_ context.Context, _, _ string) (net.Conn, error) {
					return net.Dial("unix", option.LocalFilerSocket)
				},
			},
		}
	}

	s3ApiServer.registerRouter(router)

	// Initialize the global SSE-S3 key manager with filer access
	if err := InitializeGlobalSSES3KeyManager(s3ApiServer); err != nil {
		return nil, fmt.Errorf("failed to initialize SSE-S3 key manager: %w", err)
	}

	go s3ApiServer.subscribeMetaEvents("s3", startTsNs, filer.DirectoryEtcRoot, []string{option.BucketsPath})

	// Start bucket size metrics collection in background
	go s3ApiServer.startBucketSizeMetricsLoop(context.Background())

	return s3ApiServer, nil
}
// getFilerAddress returns the current active filer address
// Uses FilerClient's tracked current filer which is updated on successful operations
// This provides better availability than always using the first filer
func (s3a *S3ApiServer) getFilerAddress() pb.ServerAddress {
	if s3a.filerClient != nil {
		return s3a.filerClient.GetCurrentFiler()
	}
	// Fallback to first filer if filerClient not initialized
	if len(s3a.option.Filers) > 0 {
		return s3a.option.Filers[0]
	}
	glog.Warningf("getFilerAddress: no filer addresses available")
	return ""
}
// syncBucketPolicyToEngine syncs a bucket policy to the policy engine
// This helper method centralizes the logic for loading bucket policies into the engine
// to avoid duplication and ensure consistent error handling
func (s3a *S3ApiServer) syncBucketPolicyToEngine(bucket string, policyDoc *policy.PolicyDocument) {
	if s3a.policyEngine == nil {
		return
	}

	if policyDoc != nil {
		if err := s3a.policyEngine.LoadBucketPolicyFromCache(bucket, policyDoc); err != nil {
			glog.Errorf("Failed to sync bucket policy for %s to policy engine: %v", bucket, err)
		}
	} else {
		// No policy - ensure it's removed from engine if it was there
		s3a.policyEngine.DeleteBucketPolicy(bucket)
	}
}
// checkPolicyWithEntry re-evaluates bucket policy with the object entry metadata.
// This is used by handlers after fetching the entry to enforce tag-based conditions
// like s3:ExistingObjectTag/<key>.
//
// Returns:
//   - s3err.ErrCode: ErrNone if allowed, ErrAccessDenied if denied
//   - bool: true if policy was evaluated (has policy for bucket), false if no policy
func (s3a *S3ApiServer) checkPolicyWithEntry(r *http.Request, bucket, object, action, principal string, objectEntry map[string][]byte) (s3err.ErrorCode, bool) {
	if s3a.policyEngine == nil {
		return s3err.ErrNone, false
	}

	// Skip if no policy for this bucket
	if !s3a.policyEngine.HasPolicyForBucket(bucket) {
		return s3err.ErrNone, false
	}

	allowed, evaluated, err := s3a.policyEngine.EvaluatePolicy(bucket, object, action, principal, r, objectEntry)
	if err != nil {
		glog.Errorf("checkPolicyWithEntry: error evaluating policy for %s/%s: %v", bucket, object, err)
		return s3err.ErrInternalError, true
	}

	if !evaluated {
		return s3err.ErrNone, false
	}

	if !allowed {
		glog.V(3).Infof("checkPolicyWithEntry: policy denied access to %s/%s for principal %s", bucket, object, principal)
		return s3err.ErrAccessDenied, true
	}

	return s3err.ErrNone, true
}
// recheckPolicyWithObjectEntry performs the second phase of policy evaluation after
// an object's entry is fetched. It extracts identity from context and checks for
// tag-based conditions like s3:ExistingObjectTag/<key>.
//
// Returns s3err.ErrNone if allowed, or an error code if denied or on error.
func (s3a *S3ApiServer) recheckPolicyWithObjectEntry(r *http.Request, bucket, object, action string, objectEntry map[string][]byte, handlerName string) s3err.ErrorCode {
	identityRaw := GetIdentityFromContext(r)
	var identity *Identity
	if identityRaw != nil {
		var ok bool
		identity, ok = identityRaw.(*Identity)
		if !ok {
			glog.Errorf("%s: unexpected identity type in context for %s/%s", handlerName, bucket, object)
			return s3err.ErrInternalError
		}
	}
	principal := buildPrincipalARN(identity)
	errCode, _ := s3a.checkPolicyWithEntry(r, bucket, object, action, principal, objectEntry)
	return errCode
}
// classifyDomainNames classifies domains into path-style and virtual-host style domains.
// A domain is considered path-style if:
// 1. It contains a dot (has subdomains)
// 2. Its parent domain is also in the list of configured domains
//
// For example, if domains are ["s3.example.com", "develop.s3.example.com"],
// then "develop.s3.example.com" is path-style (parent "s3.example.com" is in the list),
// while "s3.example.com" is virtual-host style.
func classifyDomainNames(domainNames []string) (pathStyleDomains, virtualHostDomains []string) {
	for _, domainName := range domainNames {
		parts := strings.SplitN(domainName, ".", 2)
		if len(parts) == 2 && slices.Contains(domainNames, parts[1]) {
			// This is a subdomain and its parent is also in the list
			// Register as path-style: domain.com/bucket/object
			pathStyleDomains = append(pathStyleDomains, domainName)
		} else {
			// This is a top-level domain or its parent is not in the list
			// Register as virtual-host style: bucket.domain.com/object
			virtualHostDomains = append(virtualHostDomains, domainName)
		}
	}
	return pathStyleDomains, virtualHostDomains
}
// handleCORSOriginValidation handles the common CORS origin validation logic
func (s3a *S3ApiServer) handleCORSOriginValidation(w http.ResponseWriter, r *http.Request) bool {
	origin := r.Header.Get("Origin")
	if origin != "" {
		if len(s3a.option.AllowedOrigins) == 0 || s3a.option.AllowedOrigins[0] == "*" {
			origin = "*"
		} else {
			originFound := false
			for _, allowedOrigin := range s3a.option.AllowedOrigins {
				if origin == allowedOrigin {
					originFound = true
					break
				}
			}
			if !originFound {
				writeFailureResponse(w, r, http.StatusForbidden)
				return false
			}
		}
	}

	w.Header().Set("Access-Control-Allow-Origin", origin)
	w.Header().Set("Access-Control-Expose-Headers", "*")
	w.Header().Set("Access-Control-Allow-Methods", "*")
	w.Header().Set("Access-Control-Allow-Headers", "*")
	w.Header().Set("Access-Control-Allow-Credentials", "true")
	return true
}
func (s3a *S3ApiServer) registerRouter(router *mux.Router) {
	// API Router
	apiRouter := router.PathPrefix("/").Subrouter()

	// Readiness Probe
	apiRouter.Methods(http.MethodGet).Path("/status").HandlerFunc(s3a.StatusHandler)
	apiRouter.Methods(http.MethodGet).Path("/healthz").HandlerFunc(s3a.StatusHandler)

	var routers []*mux.Router
	if s3a.option.DomainName != "" {
		domainNames := strings.Split(s3a.option.DomainName, ",")
		pathStyleDomains, virtualHostDomains := classifyDomainNames(domainNames)

		// Register path-style domains
		for _, domain := range pathStyleDomains {
			routers = append(routers, apiRouter.Host(domain).PathPrefix("/{bucket}").Subrouter())
		}

		// Register virtual-host style domains
		for _, virtualHost := range virtualHostDomains {
			routers = append(routers, apiRouter.Host(
				fmt.Sprintf("%s.%s", "{bucket:.+}", virtualHost)).Subrouter())
		}
	}
	routers = append(routers, apiRouter.PathPrefix("/{bucket}").Subrouter())

	// Get CORS middleware instance with caching
	corsMiddleware := s3a.getCORSMiddleware()

	for _, bucket := range routers {
		// Apply CORS middleware to bucket routers for automatic CORS header handling
		bucket.Use(corsMiddleware.Handler)

		// Bucket-specific OPTIONS handler for CORS preflight requests
		// Use PathPrefix to catch all bucket-level preflight routes including /bucket/object
		bucket.PathPrefix("/").Methods(http.MethodOptions).HandlerFunc(corsMiddleware.HandleOptionsRequest)

		// Routes must be registered in the following order:
		// - object requests with query parameters precede all other object methods
		// - object requests precede bucket-level methods
		// - bucket requests with query parameters precede raw bucket methods
		// - raw bucket requests are processed last

		// objects with query

		// CopyObjectPart
		bucket.Methods(http.MethodPut).Path("/{object:.+}").HeadersRegexp("X-Amz-Copy-Source", `.*?(\/|%2F).*?`).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.CopyObjectPartHandler, ACTION_WRITE)), "PUT")).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
		// PutObjectPart
		bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutObjectPartHandler, ACTION_WRITE)), "PUT")).Queries("partNumber", "{partNumber:[0-9]+}", "uploadId", "{uploadId:.*}")
		// CompleteMultipartUpload
		bucket.Methods(http.MethodPost).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.CompleteMultipartUploadHandler, ACTION_WRITE)), "POST")).Queries("uploadId", "{uploadId:.*}")
		// NewMultipartUpload
		bucket.Methods(http.MethodPost).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.NewMultipartUploadHandler, ACTION_WRITE)), "POST")).Queries("uploads", "")
		// AbortMultipartUpload
		bucket.Methods(http.MethodDelete).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.AbortMultipartUploadHandler, ACTION_WRITE)), "DELETE")).Queries("uploadId", "{uploadId:.*}")
		// ListObjectParts
		bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.ListObjectPartsHandler, ACTION_READ)), "GET")).Queries("uploadId", "{uploadId:.*}")
		// ListMultipartUploads
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.ListMultipartUploadsHandler, ACTION_READ)), "GET")).Queries("uploads", "")

		// GetObjectTagging
		bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.GetObjectTaggingHandler, ACTION_READ)), "GET")).Queries("tagging", "")
		// PutObjectTagging
		bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutObjectTaggingHandler, ACTION_TAGGING)), "PUT")).Queries("tagging", "")
		// DeleteObjectTagging
		bucket.Methods(http.MethodDelete).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.DeleteObjectTaggingHandler, ACTION_TAGGING)), "DELETE")).Queries("tagging", "")

		// PutObjectACL
		bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutObjectAclHandler, ACTION_WRITE_ACP)), "PUT")).Queries("acl", "")
		// PutObjectRetention
		bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutObjectRetentionHandler, ACTION_WRITE)), "PUT")).Queries("retention", "")
		// PutObjectLegalHold
		bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutObjectLegalHoldHandler, ACTION_WRITE)), "PUT")).Queries("legal-hold", "")

		// GetObjectACL
		bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.GetObjectAclHandler, ACTION_READ_ACP)), "GET")).Queries("acl", "")
		// GetObjectRetention
		bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.GetObjectRetentionHandler, ACTION_READ)), "GET")).Queries("retention", "")
		// GetObjectLegalHold
		bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.GetObjectLegalHoldHandler, ACTION_READ)), "GET")).Queries("legal-hold", "")

		// objects with query

		// raw objects

		// HeadObject
		bucket.Methods(http.MethodHead).Path("/{object:.+}").HandlerFunc(track(s3a.AuthWithPublicRead(func(w http.ResponseWriter, r *http.Request) {
			limitedHandler, _ := s3a.cb.Limit(s3a.HeadObjectHandler, ACTION_READ)
			limitedHandler(w, r)
		}, ACTION_READ), "GET"))

		// GetObject, but directory listing is not supported
		bucket.Methods(http.MethodGet).Path("/{object:.+}").HandlerFunc(track(s3a.AuthWithPublicRead(func(w http.ResponseWriter, r *http.Request) {
			limitedHandler, _ := s3a.cb.Limit(s3a.GetObjectHandler, ACTION_READ)
			limitedHandler(w, r)
		}, ACTION_READ), "GET"))

		// CopyObject
		bucket.Methods(http.MethodPut).Path("/{object:.+}").HeadersRegexp("X-Amz-Copy-Source", ".*?(\\/|%2F).*?").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.CopyObjectHandler, ACTION_WRITE)), "COPY"))
		// PutObject
		bucket.Methods(http.MethodPut).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutObjectHandler, ACTION_WRITE)), "PUT"))
		// DeleteObject
		bucket.Methods(http.MethodDelete).Path("/{object:.+}").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.DeleteObjectHandler, ACTION_WRITE)), "DELETE"))

		// raw objects

		// buckets with query

		// DeleteMultipleObjects
		bucket.Methods(http.MethodPost).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.DeleteMultipleObjectsHandler, ACTION_WRITE)), "DELETE")).Queries("delete", "")

		// GetBucketACL
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.GetBucketAclHandler, ACTION_READ_ACP)), "GET")).Queries("acl", "")
		// PutBucketACL
		bucket.Methods(http.MethodPut).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutBucketAclHandler, ACTION_WRITE_ACP)), "PUT")).Queries("acl", "")

		// GetBucketPolicy
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.GetBucketPolicyHandler, ACTION_READ)), "GET")).Queries("policy", "")
		// PutBucketPolicy
		bucket.Methods(http.MethodPut).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutBucketPolicyHandler, ACTION_WRITE)), "PUT")).Queries("policy", "")
		// DeleteBucketPolicy
		bucket.Methods(http.MethodDelete).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.DeleteBucketPolicyHandler, ACTION_WRITE)), "DELETE")).Queries("policy", "")

		// GetBucketCors
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.GetBucketCorsHandler, ACTION_READ)), "GET")).Queries("cors", "")
		// PutBucketCors
		bucket.Methods(http.MethodPut).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutBucketCorsHandler, ACTION_WRITE)), "PUT")).Queries("cors", "")
		// DeleteBucketCors
		bucket.Methods(http.MethodDelete).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.DeleteBucketCorsHandler, ACTION_WRITE)), "DELETE")).Queries("cors", "")

		// GetBucketLifecycleConfiguration
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.GetBucketLifecycleConfigurationHandler, ACTION_READ)), "GET")).Queries("lifecycle", "")
		// PutBucketLifecycleConfiguration
		bucket.Methods(http.MethodPut).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutBucketLifecycleConfigurationHandler, ACTION_WRITE)), "PUT")).Queries("lifecycle", "")
		// DeleteBucketLifecycleConfiguration
		bucket.Methods(http.MethodDelete).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.DeleteBucketLifecycleHandler, ACTION_WRITE)), "DELETE")).Queries("lifecycle", "")

		// GetBucketLocation
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.GetBucketLocationHandler, ACTION_READ)), "GET")).Queries("location", "")

		// GetBucketRequestPayment
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.GetBucketRequestPaymentHandler, ACTION_READ)), "GET")).Queries("requestPayment", "")

		// GetBucketVersioning / PutBucketVersioning
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.GetBucketVersioningHandler, ACTION_READ)), "GET")).Queries("versioning", "")
		bucket.Methods(http.MethodPut).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutBucketVersioningHandler, ACTION_WRITE)), "PUT")).Queries("versioning", "")

		// GetObjectLockConfiguration / PutObjectLockConfiguration (bucket-level operations)
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.GetObjectLockConfigurationHandler, ACTION_READ)), "GET")).Queries("object-lock", "")
		bucket.Methods(http.MethodPut).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutObjectLockConfigurationHandler, ACTION_WRITE)), "PUT")).Queries("object-lock", "")

		// GetBucketTagging / PutBucketTagging / DeleteBucketTagging
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.GetBucketTaggingHandler, ACTION_TAGGING)), "GET")).Queries("tagging", "")
		bucket.Methods(http.MethodPut).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutBucketTaggingHandler, ACTION_TAGGING)), "PUT")).Queries("tagging", "")
		bucket.Methods(http.MethodDelete).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.DeleteBucketTaggingHandler, ACTION_TAGGING)), "DELETE")).Queries("tagging", "")

		// GetBucketEncryption / PutBucketEncryption / DeleteBucketEncryption
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.GetBucketEncryptionHandler, ACTION_ADMIN)), "GET")).Queries("encryption", "")
		bucket.Methods(http.MethodPut).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutBucketEncryptionHandler, ACTION_ADMIN)), "PUT")).Queries("encryption", "")
		bucket.Methods(http.MethodDelete).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.DeleteBucketEncryptionHandler, ACTION_ADMIN)), "DELETE")).Queries("encryption", "")

		// GetPublicAccessBlock / PutPublicAccessBlock / DeletePublicAccessBlock
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.GetPublicAccessBlockHandler, ACTION_ADMIN)), "GET")).Queries("publicAccessBlock", "")
		bucket.Methods(http.MethodPut).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutPublicAccessBlockHandler, ACTION_ADMIN)), "PUT")).Queries("publicAccessBlock", "")
		bucket.Methods(http.MethodDelete).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.DeletePublicAccessBlockHandler, ACTION_ADMIN)), "DELETE")).Queries("publicAccessBlock", "")

		// ListObjectsV2
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.AuthWithPublicRead(func(w http.ResponseWriter, r *http.Request) {
			limitedHandler, _ := s3a.cb.Limit(s3a.ListObjectsV2Handler, ACTION_LIST)
			limitedHandler(w, r)
		}, ACTION_LIST), "LIST")).Queries("list-type", "2")

		// ListObjectVersions
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.ListObjectVersionsHandler, ACTION_LIST)), "LIST")).Queries("versions", "")

		// buckets with query

		// PutBucketOwnershipControls
		bucket.Methods(http.MethodPut).HandlerFunc(track(s3a.iam.Auth(s3a.PutBucketOwnershipControls, ACTION_ADMIN), "PUT")).Queries("ownershipControls", "")

		// GetBucketOwnershipControls
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.iam.Auth(s3a.GetBucketOwnershipControls, ACTION_READ), "GET")).Queries("ownershipControls", "")

		// DeleteBucketOwnershipControls
		bucket.Methods(http.MethodDelete).HandlerFunc(track(s3a.iam.Auth(s3a.DeleteBucketOwnershipControls, ACTION_ADMIN), "DELETE")).Queries("ownershipControls", "")

		// raw buckets

		// PostPolicy
		bucket.Methods(http.MethodPost).HeadersRegexp("Content-Type", "multipart/form-data*").HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PostPolicyBucketHandler, ACTION_WRITE)), "POST"))

		// HeadBucket
		bucket.Methods(http.MethodHead).HandlerFunc(track(s3a.AuthWithPublicRead(func(w http.ResponseWriter, r *http.Request) {
			limitedHandler, _ := s3a.cb.Limit(s3a.HeadBucketHandler, ACTION_READ)
			limitedHandler(w, r)
		}, ACTION_READ), "GET"))

		// PutBucket
		bucket.Methods(http.MethodPut).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.PutBucketHandler, ACTION_ADMIN)), "PUT"))

		// DeleteBucket
		bucket.Methods(http.MethodDelete).HandlerFunc(track(s3a.iam.Auth(s3a.cb.Limit(s3a.DeleteBucketHandler, ACTION_DELETE_BUCKET)), "DELETE"))

		// ListObjectsV1 (Legacy)
		bucket.Methods(http.MethodGet).HandlerFunc(track(s3a.AuthWithPublicRead(func(w http.ResponseWriter, r *http.Request) {
			limitedHandler, _ := s3a.cb.Limit(s3a.ListObjectsV1Handler, ACTION_LIST)
			limitedHandler(w, r)
		}, ACTION_LIST), "LIST"))

		// raw buckets
	}

	// Global OPTIONS handler for service-level requests (non-bucket requests).
	// This handles requests like OPTIONS /, OPTIONS /status, OPTIONS /healthz.
	// Placed after the bucket handlers to avoid interfering with bucket CORS middleware.
	apiRouter.Methods(http.MethodOptions).PathPrefix("/").HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			// Only handle if this is not a bucket-specific request
			vars := mux.Vars(r)
			bucket := vars["bucket"]
			if bucket != "" {
				// This is a bucket-specific request; let the bucket CORS middleware handle it
				http.NotFound(w, r)
				return
			}

			if s3a.handleCORSOriginValidation(w, r) {
				writeSuccessResponseEmpty(w, r)
			}
		})

	// STS API endpoint for AssumeRoleWithWebIdentity:
	// POST /?Action=AssumeRoleWithWebIdentity&WebIdentityToken=...
	// This endpoint is unauthenticated; the JWT token in the request is the authentication.
	// IMPORTANT: register this BEFORE the general IAM route to prevent interception.
	if s3a.stsHandlers != nil {
		apiRouter.Methods(http.MethodPost).Path("/").Queries("Action", "AssumeRoleWithWebIdentity").
			HandlerFunc(track(s3a.stsHandlers.HandleSTSRequest, "STS"))
		glog.V(0).Infof("STS API enabled on S3 port (AssumeRoleWithWebIdentity)")
	}

	// Embedded IAM API endpoint: POST / (without specific query parameters).
	// This must come before ListBuckets since IAM uses POST and ListBuckets uses GET.
	// Uses AuthIam for granular permission checking:
	// - self-service operations (own access keys) don't require admin
	// - operations on other users require admin privileges
	if s3a.embeddedIam != nil {
		apiRouter.Methods(http.MethodPost).Path("/").HandlerFunc(track(s3a.embeddedIam.AuthIam(s3a.cb.Limit(s3a.embeddedIam.DoActions, ACTION_WRITE)), "IAM"))
		glog.V(0).Infof("Embedded IAM API enabled on S3 port")
	}

	// ListBuckets
	apiRouter.Methods(http.MethodGet).Path("/").HandlerFunc(track(s3a.iam.Auth(s3a.ListBucketsHandler, ACTION_LIST), "LIST"))

	// NotFound
	apiRouter.NotFoundHandler = http.HandlerFunc(s3err.NotFoundHandler)
}

// loadIAMManagerFromConfig loads the advanced IAM manager from a configuration file
func loadIAMManagerFromConfig(configPath string, filerAddressProvider func() string) (*integration.IAMManager, error) {
	// Read configuration file
	configData, err := os.ReadFile(configPath)
	if err != nil {
		return nil, fmt.Errorf("failed to read config file: %w", err)
	}

	// Parse configuration structure
	var configRoot struct {
		STS       *sts.STSConfig                `json:"sts"`
		Policy    *policy.PolicyEngineConfig    `json:"policy"`
		Providers []map[string]interface{}      `json:"providers"`
		Roles     []*integration.RoleDefinition `json:"roles"`
		Policies  []struct {
			Name     string                 `json:"name"`
			Document *policy.PolicyDocument `json:"document"`
		} `json:"policies"`
	}

	if err := json.Unmarshal(configData, &configRoot); err != nil {
		return nil, fmt.Errorf("failed to parse config: %w", err)
	}

	// Ensure a valid policy engine config exists
	if configRoot.Policy == nil {
		// Provide a secure default if not specified in the config file:
		// default to Deny with an in-memory store so that JSON-defined policies work without a filer
		glog.V(1).Infof("No policy engine config provided; using defaults (DefaultEffect=%s, StoreType=%s)", sts.EffectDeny, sts.StoreTypeMemory)
		configRoot.Policy = &policy.PolicyEngineConfig{
			DefaultEffect: sts.EffectDeny,
			StoreType:     sts.StoreTypeMemory,
		}
	} else if configRoot.Policy.StoreType == "" {
		// If a policy config exists but storeType is not specified, use the memory store.
		// This ensures JSON-defined policies are stored in memory and work correctly.
		configRoot.Policy.StoreType = sts.StoreTypeMemory
		glog.V(1).Infof("Policy storeType not specified; using memory store for JSON config-based setup")
	}

	// Create IAM configuration
	iamConfig := &integration.IAMConfig{
		STS:    configRoot.STS,
		Policy: configRoot.Policy,
		Roles: &integration.RoleStoreConfig{
			StoreType: sts.StoreTypeMemory, // Use memory store for JSON config-based setup
		},
	}

	// Initialize IAM manager
	iamManager := integration.NewIAMManager()
	if err := iamManager.Initialize(iamConfig, filerAddressProvider); err != nil {
		return nil, fmt.Errorf("failed to initialize IAM manager: %w", err)
	}

	// Load identity providers
	providerFactory := sts.NewProviderFactory()
	for _, providerConfig := range configRoot.Providers {
		// Use checked type assertions so a malformed provider entry is skipped instead of panicking
		name, nameOk := providerConfig["name"].(string)
		providerType, typeOk := providerConfig["type"].(string)
		config, configOk := providerConfig["config"].(map[string]interface{})
		if !nameOk || !typeOk || !configOk {
			glog.Warningf("Skipping malformed provider config: %v", providerConfig)
			continue
		}
		provider, err := providerFactory.CreateProvider(&sts.ProviderConfig{
			Name:    name,
			Type:    providerType,
			Enabled: true,
			Config:  config,
		})
		if err != nil {
			glog.Warningf("Failed to create provider %s: %v", name, err)
			continue
		}
		if provider != nil {
			if err := iamManager.RegisterIdentityProvider(provider); err != nil {
				glog.Warningf("Failed to register provider %s: %v", name, err)
			} else {
				glog.V(1).Infof("Registered identity provider: %s", name)
			}
		}
	}

	// Load policies
	for _, policyDef := range configRoot.Policies {
		if err := iamManager.CreatePolicy(context.Background(), "", policyDef.Name, policyDef.Document); err != nil {
			glog.Warningf("Failed to create policy %s: %v", policyDef.Name, err)
		}
	}

	// Load roles
	for _, roleDef := range configRoot.Roles {
		if err := iamManager.CreateRole(context.Background(), "", roleDef.RoleName, roleDef); err != nil {
			glog.Warningf("Failed to create role %s: %v", roleDef.RoleName, err)
		}
	}

	glog.V(1).Infof("Loaded %d providers, %d policies and %d roles from config", len(configRoot.Providers), len(configRoot.Policies), len(configRoot.Roles))

	return iamManager, nil
}