* Add Trino blog operations test

* Update test/s3tables/catalog_trino/trino_blog_operations_test.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* feat: add table bucket path helpers and filer operations

- Add table object root and table location mapping directories
- Implement ensureDirectory, upsertFile, deleteEntryIfExists helpers
- Support table location bucket mapping for S3 access

* feat: manage table bucket object roots on creation/deletion

- Create .objects directory for table buckets on creation
- Clean up table object bucket paths on deletion
- Enable S3 operations on table bucket object roots

* feat: add table location mapping for Iceberg REST

- Track table location bucket mappings when tables are created/updated/deleted
- Enable location-based routing for S3 operations on table data

* feat: route S3 operations to table bucket object roots

- Route table-s3 bucket names to mapped table paths
- Route table buckets to object root directories
- Support table location bucket mapping lookup

* feat: emit table-s3 locations from Iceberg REST

- Generate unique table-s3 bucket names with UUID suffix
- Store table metadata under table bucket paths
- Return table-s3 locations for Trino compatibility

* fix: handle missing directories in S3 list operations

- Propagate ErrNotFound from ListEntries for non-existent directories
- Treat missing directories as empty results for list operations
- Fixes Trino non-empty location checks on table creation

* test: improve Trino CSV parsing for single-value results

- Sanitize Trino output to skip jline warnings
- Handle single-value CSV results without header rows
- Strip quotes from numeric values in tests

* refactor: use bucket path helpers throughout S3 API

- Replace direct bucket path operations with helper functions
- Leverage centralized table bucket routing logic
- Improve maintainability with consistent path resolution

* fix: add table bucket cache and improve filer error handling

- Cache table bucket lookups to reduce filer overhead on repeated checks
- Use filer_pb.CreateEntry and filer_pb.UpdateEntry helpers to check resp.Error
- Fix delete order in handler_bucket_get_list_delete: delete table object before directory
- Make location mapping errors best-effort: log and continue, don't fail API
- Update table location mappings to delete stale prior bucket mappings on update
- Add 1-second sleep before timestamp time travel query to ensure timestamps are in the past
- Fix CSV parsing: examine all lines, not skip first; handle single-value rows

* fix: properly handle stale metadata location mapping cleanup

- Capture oldMetadataLocation before mutation in handleUpdateTable
- Update updateTableLocationMapping to accept both old and new locations
- Use passed-in oldMetadataLocation to detect location changes
- Delete stale mapping only when location actually changes
- Pass empty string for oldLocation in handleCreateTable (new tables have no prior mapping)
- Improve logging to show old -> new location transitions

* refactor: cleanup imports and cache design

- Remove unused 'sync' import from bucket_paths.go
- Use filer_pb.UpdateEntry helper in setExtendedAttribute and deleteExtendedAttribute for consistent error handling
- Add dedicated tableBucketCache map[string]bool to BucketRegistry instead of mixing concerns with metadataCache
- Improve cache separation: the table buckets cache is now separate from the bucket metadata cache

* fix: improve cache invalidation and add transient error handling

Cache invalidation (critical fix):
- Add tableLocationCache to BucketRegistry for location mapping lookups
- Clear tableBucketCache and tableLocationCache in RemoveBucketMetadata
- Prevents stale cache entries when buckets are deleted/recreated

Transient error handling:
- Only cache table bucket lookups when conclusive (found or ErrNotFound)
- Skip caching on transient errors (network, permission, etc.)
- Prevents marking real table buckets as non-table due to transient failures

Performance optimization:
- Cache tableLocationDir results to avoid repeated filer RPCs on hot paths
- tableLocationDir now checks the cache before making expensive filer lookups
- Cache stores an empty string for "not found" to avoid redundant lookups

Code clarity:
- Add a comment to deleteDirectory explaining the DeleteEntry response lacks an Error field

* go fmt

* fix: mirror transient error handling in tableLocationDir and optimize bucketDir

Transient error handling:
- tableLocationDir now only caches definitive results
- Mirrors isTableBucket behavior to prevent treating transient errors as permanent misses
- Improves reliability on flaky systems or during recovery

Performance optimization:
- bucketDir avoids a redundant isTableBucket call via bucketRoot
- Directly use s3a.option.BucketsPath for regular buckets
- Saves one cache lookup for every non-table bucket operation

* fix: revert bucketDir optimization to preserve bucketRoot logic

The optimization to directly use BucketsPath bypassed bucketRoot's logic and caused issues with S3 list operations on delimiter+prefix cases. Revert to using path.Join(s3a.bucketRoot(bucket), bucket), which properly handles all bucket types and ensures consistent path resolution across the codebase. The slight performance cost of an extra cache lookup is worth the correctness and consistency benefits.

* feat: move table buckets under /buckets

Add a table-bucket marker attribute, reuse the bucket metadata cache for table bucket detection, and update list/validation/UI/test paths to treat table buckets as /buckets entries.

* Fix S3 Tables code review issues

- handler_bucket_create.go: Fix the bucket existence check to properly validate entryResp.Entry before setting the s3BucketExists flag (a nil Entry should not indicate an existing bucket)
- bucket_paths.go: Add a clarifying comment to bucketRoot() explaining the unified buckets root path for all bucket types
- file_browser_data.go: Optimize by extracting the table bucket check early to avoid a redundant WithFilerClient call

* Fix list prefix delimiter handling

* Handle list errors conservatively

* Fix Trino FOR TIMESTAMP query - use past timestamp

Iceberg requires the timestamp to be strictly in the past. Use current_timestamp - interval '1' second instead of current_timestamp.

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
package s3api

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net/http"
	"strings"

	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
	"github.com/seaweedfs/seaweedfs/weed/s3api/policy_engine"
	"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
	"github.com/seaweedfs/seaweedfs/weed/s3api/s3err"
)

// Bucket policy metadata key for storing policies in filer
const BUCKET_POLICY_METADATA_KEY = "s3-bucket-policy"

// Sentinel errors for bucket policy operations
var (
	ErrPolicyNotFound = errors.New("bucket policy not found")
	// ErrBucketNotFound is already defined in s3api_object_retention.go
)

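// For illustration only: a minimal policy document of the kind accepted by
// PutBucketPolicyHandler and stored under BUCKET_POLICY_METADATA_KEY (the
// bucket name and principal below are hypothetical, not part of this package):
//
//	{
//	  "Version": "2012-10-17",
//	  "Statement": [
//	    {
//	      "Effect": "Allow",
//	      "Principal": {"AWS": "*"},
//	      "Action": ["s3:GetObject"],
//	      "Resource": ["arn:aws:s3:::example-bucket/*"]
//	    }
//	  ]
//	}
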
// GetBucketPolicyHandler handles GET bucket?policy requests
func (s3a *S3ApiServer) GetBucketPolicyHandler(w http.ResponseWriter, r *http.Request) {
	bucket, _ := s3_constants.GetBucketAndObject(r)

	glog.V(3).Infof("GetBucketPolicyHandler: bucket=%s", bucket)

	// Validate bucket exists first for correct error mapping
	_, err := s3a.getBucketEntry(bucket)
	if err != nil {
		if errors.Is(err, filer_pb.ErrNotFound) {
			s3err.WriteErrorResponse(w, r, s3err.ErrNoSuchBucket)
		} else {
			glog.Errorf("Failed to check bucket existence for %s: %v", bucket, err)
			s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
		}
		return
	}

	// Get bucket policy from filer metadata
	policyDocument, err := s3a.getBucketPolicy(bucket)
	if err != nil {
		if errors.Is(err, ErrPolicyNotFound) {
			s3err.WriteErrorResponse(w, r, s3err.ErrNoSuchBucketPolicy)
		} else if errors.Is(err, ErrBucketNotFound) {
			s3err.WriteErrorResponse(w, r, s3err.ErrNoSuchBucket)
		} else {
			glog.Errorf("Failed to get bucket policy for %s: %v", bucket, err)
			s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
		}
		return
	}

	// Return policy as JSON
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusOK)

	if err := json.NewEncoder(w).Encode(policyDocument); err != nil {
		glog.Errorf("Failed to encode bucket policy response: %v", err)
	}
}

// PutBucketPolicyHandler handles PUT bucket?policy requests
func (s3a *S3ApiServer) PutBucketPolicyHandler(w http.ResponseWriter, r *http.Request) {
	bucket, _ := s3_constants.GetBucketAndObject(r)

	glog.V(3).Infof("PutBucketPolicyHandler: bucket=%s", bucket)

	// Read policy document from request body
	body, err := io.ReadAll(r.Body)
	if err != nil {
		glog.Errorf("Failed to read bucket policy request body: %v", err)
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidPolicyDocument)
		return
	}
	defer r.Body.Close()

	// Parse and validate policy document
	var policyDoc policy_engine.PolicyDocument
	if err := json.Unmarshal(body, &policyDoc); err != nil {
		glog.Errorf("Failed to parse bucket policy JSON: %v", err)
		s3err.WriteErrorResponse(w, r, s3err.ErrMalformedPolicy)
		return
	}

	// Validate core policy structure (Effect, Action, etc.)
	if err := policy_engine.ValidatePolicy(&policyDoc); err != nil {
		glog.Errorf("Policy validation failed: %v", err)
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidPolicyDocument)
		return
	}

	// Additional bucket policy specific validation
	if err := s3a.validateBucketPolicy(&policyDoc, bucket); err != nil {
		glog.Errorf("Bucket policy validation failed: %v", err)
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidPolicyDocument)
		return
	}

	// Store bucket policy
	if err := s3a.setBucketPolicy(bucket, &policyDoc); err != nil {
		glog.Errorf("Failed to store bucket policy for %s: %v", bucket, err)
		s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
		return
	}

	// Immediately load into the policy engine to avoid a race condition
	// (the subscription system will also do this asynchronously, but we want immediate effect)
	if s3a.policyEngine != nil {
		if err := s3a.policyEngine.LoadBucketPolicyFromCache(bucket, &policyDoc); err != nil {
			glog.Warningf("Failed to immediately load bucket policy into engine for %s: %v", bucket, err)
			// Don't fail the request since the subscription will eventually sync it
		}
	}

	// Update IAM integration with the new bucket policy
	if s3a.iam.iamIntegration != nil {
		if err := s3a.updateBucketPolicyInIAM(bucket, &policyDoc); err != nil {
			glog.Errorf("Failed to update IAM with bucket policy: %v", err)
			// Don't fail the request, but log the error
		}
	}

	w.WriteHeader(http.StatusNoContent)
}

// DeleteBucketPolicyHandler handles DELETE bucket?policy requests
func (s3a *S3ApiServer) DeleteBucketPolicyHandler(w http.ResponseWriter, r *http.Request) {
	bucket, _ := s3_constants.GetBucketAndObject(r)

	glog.V(3).Infof("DeleteBucketPolicyHandler: bucket=%s", bucket)

	// Validate bucket exists first for correct error mapping
	_, err := s3a.getBucketEntry(bucket)
	if err != nil {
		if errors.Is(err, filer_pb.ErrNotFound) {
			s3err.WriteErrorResponse(w, r, s3err.ErrNoSuchBucket)
		} else {
			glog.Errorf("Failed to check bucket existence for %s: %v", bucket, err)
			s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
		}
		return
	}

	// Check if a bucket policy exists
	if _, err := s3a.getBucketPolicy(bucket); err != nil {
		if errors.Is(err, ErrPolicyNotFound) {
			s3err.WriteErrorResponse(w, r, s3err.ErrNoSuchBucketPolicy)
		} else if errors.Is(err, ErrBucketNotFound) {
			s3err.WriteErrorResponse(w, r, s3err.ErrNoSuchBucket)
		} else {
			s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
		}
		return
	}

	// Delete bucket policy
	if err := s3a.deleteBucketPolicy(bucket); err != nil {
		glog.Errorf("Failed to delete bucket policy for %s: %v", bucket, err)
		s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
		return
	}

	// Immediately remove from the policy engine to avoid a race condition
	// (the subscription system will also do this asynchronously, but we want immediate effect)
	if s3a.policyEngine != nil {
		if err := s3a.policyEngine.DeleteBucketPolicy(bucket); err != nil {
			glog.Warningf("Failed to immediately remove bucket policy from engine for %s: %v", bucket, err)
			// Don't fail the request since the subscription will eventually sync it
		}
	}

	// Update IAM integration to remove the bucket policy
	if s3a.iam.iamIntegration != nil {
		if err := s3a.removeBucketPolicyFromIAM(bucket); err != nil {
			glog.Errorf("Failed to remove bucket policy from IAM: %v", err)
			// Don't fail the request, but log the error
		}
	}

	w.WriteHeader(http.StatusNoContent)
}

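// Example usage against these handlers (a sketch only; it assumes a SeaweedFS S3
// gateway on localhost:8333, a bucket named "example-bucket", and a local
// policy.json file — adjust endpoint, credentials, and names to your deployment):
//
//	aws s3api put-bucket-policy --endpoint-url http://localhost:8333 \
//	    --bucket example-bucket --policy file://policy.json
//	aws s3api get-bucket-policy --endpoint-url http://localhost:8333 --bucket example-bucket
//	aws s3api delete-bucket-policy --endpoint-url http://localhost:8333 --bucket example-bucket
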
// Helper functions for bucket policy storage and retrieval

// getBucketPolicy retrieves the bucket policy from the bucket entry's metadata in the filer
func (s3a *S3ApiServer) getBucketPolicy(bucket string) (*policy_engine.PolicyDocument, error) {
	var policyDoc policy_engine.PolicyDocument
	err := s3a.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
		resp, err := client.LookupDirectoryEntry(context.Background(), &filer_pb.LookupDirectoryEntryRequest{
			Directory: s3a.bucketRoot(bucket),
			Name:      bucket,
		})
		if err != nil {
			// Return sentinel error for bucket not found
			return fmt.Errorf("%w: %v", ErrBucketNotFound, err)
		}

		if resp.Entry == nil {
			return ErrPolicyNotFound
		}

		policyJSON, exists := resp.Entry.Extended[BUCKET_POLICY_METADATA_KEY]
		if !exists || len(policyJSON) == 0 {
			return ErrPolicyNotFound
		}

		if err := json.Unmarshal(policyJSON, &policyDoc); err != nil {
			return fmt.Errorf("failed to parse stored bucket policy: %v", err)
		}

		return nil
	})

	if err != nil {
		return nil, err
	}

	return &policyDoc, nil
}

// setBucketPolicy stores a bucket policy in filer metadata
func (s3a *S3ApiServer) setBucketPolicy(bucket string, policyDoc *policy_engine.PolicyDocument) error {
	// Serialize policy to JSON
	policyJSON, err := json.Marshal(policyDoc)
	if err != nil {
		return fmt.Errorf("failed to serialize policy: %v", err)
	}

	return s3a.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
		// First, get the current entry to preserve other attributes
		resp, err := client.LookupDirectoryEntry(context.Background(), &filer_pb.LookupDirectoryEntryRequest{
			Directory: s3a.bucketRoot(bucket),
			Name:      bucket,
		})
		if err != nil {
			return fmt.Errorf("bucket not found: %v", err)
		}

		entry := resp.Entry
		if entry == nil {
			// Guard against a nil entry to avoid a nil pointer dereference below
			return fmt.Errorf("bucket not found: %s", bucket)
		}
		if entry.Extended == nil {
			entry.Extended = make(map[string][]byte)
		}

		// Set the bucket policy metadata
		entry.Extended[BUCKET_POLICY_METADATA_KEY] = policyJSON

		// Update the entry with new metadata
		_, err = client.UpdateEntry(context.Background(), &filer_pb.UpdateEntryRequest{
			Directory: s3a.bucketRoot(bucket),
			Entry:     entry,
		})

		return err
	})
}

// deleteBucketPolicy removes a bucket policy from filer metadata
func (s3a *S3ApiServer) deleteBucketPolicy(bucket string) error {
	return s3a.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
		// Get the current entry
		resp, err := client.LookupDirectoryEntry(context.Background(), &filer_pb.LookupDirectoryEntryRequest{
			Directory: s3a.bucketRoot(bucket),
			Name:      bucket,
		})
		if err != nil {
			return fmt.Errorf("bucket not found: %v", err)
		}

		entry := resp.Entry
		if entry == nil {
			// Guard against a nil entry to avoid a nil pointer dereference below
			return fmt.Errorf("bucket not found: %s", bucket)
		}
		if entry.Extended == nil {
			return nil // No policy to delete
		}

		// Remove the bucket policy metadata
		delete(entry.Extended, BUCKET_POLICY_METADATA_KEY)

		// Update the entry
		_, err = client.UpdateEntry(context.Background(), &filer_pb.UpdateEntryRequest{
			Directory: s3a.bucketRoot(bucket),
			Entry:     entry,
		})

		return err
	})
}

// validateBucketPolicy performs bucket-specific policy validation
func (s3a *S3ApiServer) validateBucketPolicy(policyDoc *policy_engine.PolicyDocument, bucket string) error {
	if policyDoc.Version != "2012-10-17" {
		return fmt.Errorf("unsupported policy version: %s (must be 2012-10-17)", policyDoc.Version)
	}

	if len(policyDoc.Statement) == 0 {
		return fmt.Errorf("policy document must contain at least one statement")
	}

	for i, statement := range policyDoc.Statement {
		// Bucket policies must have a Principal
		if statement.Principal == nil {
			return fmt.Errorf("statement %d: bucket policies must specify a Principal", i)
		}

		// Validate that resources refer to this bucket
		for _, resource := range statement.Resource.Strings() {
			if !s3a.validateResourceForBucket(resource, bucket) {
				return fmt.Errorf("statement %d: resource %s does not match bucket %s", i, resource, bucket)
			}
		}

		// Validate that NotResource entries refer to this bucket
		for _, notResource := range statement.NotResource.Strings() {
			if !s3a.validateResourceForBucket(notResource, bucket) {
				return fmt.Errorf("statement %d: NotResource %s does not match bucket %s", i, notResource, bucket)
			}
		}

		// Validate that actions are S3 actions
		for _, action := range statement.Action.Strings() {
			if !strings.HasPrefix(action, "s3:") {
				return fmt.Errorf("statement %d: bucket policies only support S3 actions, got %s", i, action)
			}
		}
	}

	return nil
}

// validateResourceForBucket checks if a resource ARN is valid for the given bucket
func (s3a *S3ApiServer) validateResourceForBucket(resource, bucket string) bool {
	// Accepted formats for S3 bucket policies:
	//
	// AWS-style ARNs (standard):
	//   arn:aws:s3:::bucket-name
	//   arn:aws:s3:::bucket-name/*
	//   arn:aws:s3:::bucket-name/path/to/object
	//
	// Simplified formats (for convenience):
	//   bucket-name
	//   bucket-name/*
	//   bucket-name/path/to/object

	var resourcePath string
	const awsPrefix = "arn:aws:s3:::"

	// Strip the optional ARN prefix to get the resource path
	if path, ok := strings.CutPrefix(resource, awsPrefix); ok {
		resourcePath = path
	} else {
		resourcePath = resource
	}

	// After stripping the optional ARN prefix, the resource path must
	// either match the bucket name exactly, or be a path within the bucket.
	return resourcePath == bucket ||
		resourcePath == bucket+"/*" ||
		strings.HasPrefix(resourcePath, bucket+"/")
}

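// For illustration (hypothetical names): with bucket "example-bucket",
// validateResourceForBucket accepts "arn:aws:s3:::example-bucket",
// "arn:aws:s3:::example-bucket/*", and "example-bucket/logs/2024/01/01.log",
// while "arn:aws:s3:::other-bucket/*" is rejected.
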
// IAM integration functions

// updateBucketPolicyInIAM updates the IAM system with the new bucket policy
func (s3a *S3ApiServer) updateBucketPolicyInIAM(bucket string, policyDoc *policy_engine.PolicyDocument) error {
	// Update IAM integration with the new bucket policy
	if s3a.iam.iamIntegration != nil {
		// Type assert to access the concrete implementation, which has access to iamManager
		if s3Integration, ok := s3a.iam.iamIntegration.(*S3IAMIntegration); ok {
			if s3Integration.iamManager != nil {
				glog.V(2).Infof("Updating bucket policy for %s in IAM system", bucket)

				policyJSON, err := json.Marshal(policyDoc)
				if err != nil {
					return fmt.Errorf("failed to marshal policy: %w", err)
				}

				return s3Integration.iamManager.UpdateBucketPolicy(context.Background(), bucket, policyJSON)
			}
		}
	}

	return nil
}

// removeBucketPolicyFromIAM removes the bucket policy from the IAM system
func (s3a *S3ApiServer) removeBucketPolicyFromIAM(bucket string) error {
	// This would remove the bucket policy from our advanced IAM system
	glog.V(2).Infof("Removing bucket policy for %s from IAM system", bucket)

	// TODO: Integrate with IAM manager to remove resource-based policies
	// s3a.iam.iamIntegration.iamManager.RemoveBucketPolicy(bucket)

	return nil
}

// GetPublicAccessBlockHandler retrieves the PublicAccessBlock configuration for an S3 bucket
// https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetPublicAccessBlock.html
func (s3a *S3ApiServer) GetPublicAccessBlockHandler(w http.ResponseWriter, r *http.Request) {
	s3err.WriteErrorResponse(w, r, s3err.ErrNotImplemented)
}

// PutPublicAccessBlockHandler sets the PublicAccessBlock configuration for an S3 bucket (not yet implemented)
func (s3a *S3ApiServer) PutPublicAccessBlockHandler(w http.ResponseWriter, r *http.Request) {
	s3err.WriteErrorResponse(w, r, s3err.ErrNotImplemented)
}

// DeletePublicAccessBlockHandler removes the PublicAccessBlock configuration for an S3 bucket (not yet implemented)
func (s3a *S3ApiServer) DeletePublicAccessBlockHandler(w http.ResponseWriter, r *http.Request) {
	s3err.WriteErrorResponse(w, r, s3err.ErrNotImplemented)
}