* Fix: Initialize filer CredentialManager with filer address
* The fix involves checking for directory existence before creation.
* adjust error message
* Fix: Implement FilerAddressSetter in PropagatingCredentialStore
* Refactor: Reorder credential manager initialization in filer server
* refactor
* full integration with iceberg-go
* Table Commit Operations (handleUpdateTable)
* s3tables: fix Iceberg v2 compliance and namespace properties
This commit ensures the SeaweedFS Iceberg REST Catalog is compliant with
Iceberg Format Version 2 by:
- Using iceberg-go's table.NewMetadataWithUUID for strict v2 compliance.
- Explicitly initializing namespace properties to empty maps.
- Removing omitempty from required Iceberg response fields.
- Fixing CommitTableRequest unmarshaling using table.Requirements and table.Updates.
* s3tables: automate Iceberg integration tests
- Added Makefile for local test execution and cluster management.
- Added docker-compose for PyIceberg compatibility kit.
- Added Go integration test harness for PyIceberg.
- Updated GitHub CI to run Iceberg catalog tests automatically.
* s3tables: update PyIceberg test suite for compatibility
- Updated test_rest_catalog.py to use latest PyIceberg transaction APIs.
- Updated Dockerfile to include pyarrow and pandas dependencies.
- Improved namespace and table handling in integration tests.
* s3tables: address review feedback on Iceberg Catalog
- Implemented robust metadata version parsing and incrementing.
- Ensured table metadata changes are persisted during commit (handleUpdateTable).
- Standardized namespace property initialization for consistency.
- Fixed unused variable and incorrect struct field build errors.
* s3tables: finalize Iceberg REST Catalog and optimize tests
- Implemented robust metadata versioning and persistence.
- Standardized namespace property initialization.
- Optimized integration tests using pre-built Docker image.
- Added strict property persistence validation to test suite.
- Fixed build errors from previous partial updates.
* Address PR review: fix Table UUID stability, implement S3Tables UpdateTable, and support full metadata persistence
* fix: Iceberg catalog stable UUIDs, metadata persistence, and file writing
- Ensure table UUIDs are stable (do not regenerate on load).
- Persist full table metadata (Iceberg JSON) in s3tables extended attributes.
- Add `MetadataVersion` to explicitly track version numbers, replacing regex parsing.
- Implement `saveMetadataFile` to persist metadata JSON files to the Filer on commit.
- Update `CreateTable` and `UpdateTable` handlers to use the new logic.
* test: bind weed mini to 0.0.0.0 in integration tests to fix Docker connectivity
* Iceberg: fix metadata handling in REST catalog
- Add nil guard in createTable
- Fix updateTable to correctly load existing metadata from storage
- Ensure full metadata persistence on updates
- Populate loadTable result with parsed metadata
* S3Tables: add auth checks and fix response fields in UpdateTable
- Add CheckPermissionWithContext to UpdateTable handler
- Include TableARN and MetadataLocation in UpdateTable response
- Use ErrCodeConflict (409) for version token mismatches
* Tests: improve Iceberg catalog test infrastructure and cleanup
- Makefile: use PID file for precise process killing
- test_rest_catalog.py: remove unused variables and fix f-strings
* Iceberg: fix variable shadowing in UpdateTable
- Rename inner loop variable `req` to `requirement` to avoid shadowing outer request variable
* S3Tables: simplify MetadataVersion initialization
- Use `max(req.MetadataVersion, 1)` instead of anonymous function
* Tests: remove unicode characters from S3 tables integration test logs
- Remove unicode checkmarks from test output for cleaner logs
* Iceberg: improve metadata persistence robustness
- Fix MetadataLocation in LoadTableResult to fall back to the generated location
- Improve saveMetadataFile to ensure directory hierarchy existence and robust error handling
* s3: enforce authentication and JSON error format for Iceberg REST Catalog
* s3/iceberg: align error exception types with OpenAPI spec examples
* s3api: refactor AuthenticateRequest to return identity object
* s3/iceberg: propagate full identity object to request context
* s3/iceberg: differentiate NotAuthorizedException and ForbiddenException
* s3/iceberg: reject requests if authenticator is nil to prevent auth bypass
* s3/iceberg: refactor Auth middleware to build context incrementally and use switch for error mapping
* s3api: update misleading comment for authRequestWithAuthType
* s3api: return ErrAccessDenied if IAM is not configured to prevent auth bypass
* s3/iceberg: optimize context update in Auth middleware
* s3api: export CanDo for external authorization use
* s3/iceberg: enforce identity-based authorization in all API handlers
* s3api: fix compilation errors by updating internal CanDo references
* s3/iceberg: robust identity validation and consistent action usage in handlers
* s3api: complete CanDo rename across tests and policy engine integration
* s3api: fix integration tests by allowing admin access when auth is disabled and using explicit gRPC ports
* duckdb
* create test bucket
* feat: Add Iceberg REST Catalog server
Implement Iceberg REST Catalog API on a separate port (default 8181)
that exposes S3 Tables metadata through the Apache Iceberg REST protocol.
- Add new weed/s3api/iceberg package with REST handlers
- Implement /v1/config endpoint returning catalog configuration
- Implement namespace endpoints (list/create/get/head/delete)
- Implement table endpoints (list/create/load/head/delete/update)
- Add -port.iceberg flag to S3 standalone server (s3.go)
- Add -s3.port.iceberg flag to combined server mode (server.go)
- Add -s3.port.iceberg flag to mini cluster mode (mini.go)
- Support prefix-based routing for multiple catalogs
The Iceberg REST server reuses S3 Tables metadata storage under
/table-buckets and enables DuckDB, Spark, and other Iceberg clients
to connect to SeaweedFS as a catalog.
* feat: Add Iceberg Catalog pages to admin UI
Add admin UI pages to browse Iceberg catalogs, namespaces, and tables.
- Add Iceberg Catalog menu item under Object Store navigation
- Create iceberg_catalog.templ showing catalog overview with REST info
- Create iceberg_namespaces.templ listing namespaces in a catalog
- Create iceberg_tables.templ listing tables in a namespace
- Add handlers and routes in admin_handlers.go
- Add Iceberg data provider methods in s3tables_management.go
- Add Iceberg data types in types.go
The Iceberg Catalog pages provide visibility into the same S3 Tables
data through an Iceberg-centric lens, including REST endpoint examples
for DuckDB and PyIceberg.
* test: Add Iceberg catalog integration tests and reorg s3tables tests
- Reorganize existing s3tables tests to test/s3tables/table-buckets/
- Add new test/s3tables/catalog/ for Iceberg REST catalog tests
- Add TestIcebergConfig to verify /v1/config endpoint
- Add TestIcebergNamespaces to verify namespace listing
- Add TestDuckDBIntegration for DuckDB connectivity (requires Docker)
- Update CI workflow to use new test paths
* fix: Generate proper random UUIDs for Iceberg tables
Address code review feedback:
- Replace placeholder UUID with crypto/rand-based UUID v4 generation
- Add detailed TODO comments for handleUpdateTable stub explaining
the required atomic metadata swap implementation
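A minimal sketch of the crypto/rand-based UUID v4 generation referred to above (the function name is illustrative, not necessarily the one used in the PR):
```go
import (
	"crypto/rand"
	"fmt"
)

// generateUUIDv4 returns a random RFC 4122 version 4 UUID string.
func generateUUIDv4() (string, error) {
	var b [16]byte
	if _, err := rand.Read(b[:]); err != nil {
		return "", err
	}
	b[6] = (b[6] & 0x0f) | 0x40 // set version 4
	b[8] = (b[8] & 0x3f) | 0x80 // set RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16]), nil
}
```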
* fix: Serve Iceberg on localhost listener when binding to different interface
Address code review feedback: properly serve the localhost listener
when the Iceberg server is bound to a non-localhost interface.
* ci: Add Iceberg catalog integration tests to CI
Add new job to run Iceberg catalog tests in CI, along with:
- Iceberg package build verification
- Iceberg unit tests
- Iceberg go vet checks
- Iceberg format checks
* fix: Address code review feedback for Iceberg implementation
- fix: Replace hardcoded account ID with s3_constants.AccountAdminId in buildTableBucketARN()
- fix: Improve UUID generation error handling with deterministic fallback (timestamp + PID + counter)
- fix: Update handleUpdateTable to return HTTP 501 Not Implemented instead of fake success
- fix: Better error handling in handleNamespaceExists to distinguish 404 from 500 errors
- fix: Use relative URL in template instead of hardcoded localhost:8181
- fix: Add HTTP timeout to test's waitForService function to avoid hangs
- fix: Use dynamic ephemeral ports in integration tests to avoid flaky parallel failures
- fix: Add Iceberg port to final port configuration logging in mini.go
* fix: Address critical issues in Iceberg implementation
- fix: Cache table UUIDs to ensure persistence across LoadTable calls
The UUID now remains stable for the lifetime of the server session.
TODO: For production, UUIDs should be persisted in S3 Tables metadata.
- fix: Remove redundant URL-encoded namespace parsing
mux router already decodes %1F to \x1F before passing to handlers.
Redundant ReplaceAll call could cause bugs with literal %1F in namespace.
* fix: Improve test robustness and reduce code duplication
- fix: Make DuckDB test more robust by failing on unexpected errors
Instead of silently logging errors, now explicitly check for expected
conditions (extension not available) and skip the test appropriately.
- fix: Extract username helper method to reduce duplication
Created getUsername() helper in AdminHandlers to avoid duplicating
the username retrieval logic across Iceberg page handlers.
* fix: Add mutex protection to table UUID cache
Protects concurrent access to the tableUUIDs map with sync.RWMutex.
Uses read-lock for fast path when UUID already cached, and write-lock
for generating new UUIDs. Includes double-check pattern to handle race
condition between read-unlock and write-lock.
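The locking scheme, sketched (type and field names assumed for illustration):
```go
import "sync"

type uuidCache struct {
	mu         sync.RWMutex
	tableUUIDs map[string]string
}

// getOrCreate returns the cached UUID for tableKey, generating and storing a
// new one only when it is not present yet.
func (c *uuidCache) getOrCreate(tableKey string, newUUID func() string) string {
	// Fast path: read lock when the UUID is already cached.
	c.mu.RLock()
	if id, ok := c.tableUUIDs[tableKey]; ok {
		c.mu.RUnlock()
		return id
	}
	c.mu.RUnlock()

	// Slow path: write lock, then re-check to cover the window between
	// releasing the read lock and acquiring the write lock.
	c.mu.Lock()
	defer c.mu.Unlock()
	if id, ok := c.tableUUIDs[tableKey]; ok {
		return id
	}
	id := newUUID()
	c.tableUUIDs[tableKey] = id
	return id
}
```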
* style: fix go fmt errors
* feat(iceberg): persist table UUID in S3 Tables metadata
* feat(admin): configure Iceberg port in Admin UI and commands
* refactor: address review comments (flags, tests, handlers)
- command/mini: fix tracking of explicit s3.port.iceberg flag
- command/admin: add explicit -iceberg.port flag
- admin/handlers: reuse getUsername helper
- tests: use 127.0.0.1 for ephemeral ports and os.Stat for file size check
* test: check error from FileStat in verify_gc_empty_test
* s3tables: add Iceberg file layout validation for table buckets
This PR adds file layout validation for table buckets to enforce Apache
Iceberg table structure. Files uploaded to table buckets must conform
to the expected Iceberg layout:
- metadata/ directory: contains metadata files (*.json, *.avro)
  - v*.metadata.json (table metadata)
  - snap-*.avro (snapshot manifests)
  - *-m*.avro (manifest files)
  - version-hint.text
- data/ directory: contains data files (*.parquet, *.orc, *.avro)
  - Supports partition paths (e.g., year=2024/month=01/)
  - Supports bucket subdirectories
The validator exports functions for use by the S3 API:
- IsTableBucketPath: checks if a path is under /table-buckets/
- GetTableInfoFromPath: extracts bucket/namespace/table from path
- ValidateTableBucketUpload: validates file layout for table bucket uploads
- ValidateTableBucketUploadWithClient: validates with filer client access
Invalid uploads receive an InvalidIcebergLayout error response.
* Address review comments: regex performance, error handling, stricter patterns
* Fix validateMetadataFile and validateDataFile to handle subdirectories and directory creation
* Fix error handling, metadata validation, reduce code duplication
* Fix empty remainingPath handling for directory paths
* Refactor: unify validateMetadataFile and validateDataFile
* Refactor: extract UUID pattern constant
* fix: allow Iceberg partition and directory paths without trailing slashes
Modified validateFile to correctly handle directory paths that do not end with a trailing slash.
This ensures that paths like 'data/year=2024' are validated as directories if they match
partition or subdirectory patterns, rather than being incorrectly rejected as invalid files.
Added comprehensive test cases for various directory and partition path combinations.
* refactor: use standard path package and idiomatic returns
Simplified directory and filename extraction in validateFile by using the standard
path package (aliased as pathpkg). This improves readability and avoids manual string
manipulation. Also updated GetTableInfoFromPath to use naked returns for named
return values, aligning with Go conventions for short functions.
* feat: enforce strict Iceberg top-level directories and metadata restrictions
Implemented strict validation for Iceberg layout:
- Bare top-level keys like 'metadata' and 'data' are now rejected; they must have
a trailing slash or a subpath.
- Subdirectories under 'metadata/' are now prohibited to enforce the flat structure
required by Iceberg.
- Updated the test suite with negative test cases and ensured proper formatting.
* feat: allow table root directory markers in ValidateTableBucketUpload
Modified ValidateTableBucketUpload to short-circuit and return nil when the
relative path within a table is empty. This occurs when a trailing slash is
used on the table directory (e.g., /table-buckets/mybucket/myns/mytable/).
Added a test case 'table dir with slash' to verify this behavior.
* test: add regression cases for metadata subdirs and table markers
Enforced a strictly flat structure for the metadata directory by removing the
"directory without trailing slash" fallback in validateFile for metadata.
Added regression test cases:
- metadata/nested (must fail)
- /table-buckets/.../mytable/ (must pass)
Verified all tests pass.
* feat: reject double slashes in Iceberg table paths
Modified validateDirectoryPath to return an error when encountering empty path
segments, effectively rejecting double slashes like 'data//file.parquet'.
Updated validateFile to use manual path splitting instead of the 'path' package
for intermediate directories to ensure redundant slashes are not auto-cleaned
before validation. Added regression tests for various double slash scenarios.
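A simplified version of the strict splitting (not the exact validator code; a single trailing directory marker is tolerated here for illustration):
```go
import (
	"fmt"
	"strings"
)

// splitStrict splits a table-relative path without cleaning redundant
// slashes, so "data//file.parquet" produces an empty segment that is
// rejected instead of being silently collapsed by the path package.
func splitStrict(p string) ([]string, error) {
	p = strings.TrimSuffix(p, "/") // allow one trailing slash as a directory marker
	segments := strings.Split(p, "/")
	for _, seg := range segments {
		if seg == "" {
			return nil, fmt.Errorf("invalid path %q: empty path segment", p)
		}
	}
	return segments, nil
}
```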
* refactor: separate isMetadata logic in validateDirectoryPath
Following reviewer feedback, refactored validateDirectoryPath to explicitly
separate the handling of metadata and data paths. This improves readability
and clarifies the function's intent while maintaining the strict validation rules
and double-slash rejection previously implemented.
* feat: validate bucket, namespace, and table path segments
Updated ValidateTableBucketUpload to ensure that bucket, namespace, and table
segments in the path are non-empty. This prevents invalid paths like
'/table-buckets//myns/mytable/...' from being accepted during upload.
Added regression tests for various empty segment scenarios.
* Update weed/s3api/s3tables/iceberg_layout.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* feat: block double-slash bypass in table relative paths
Added a guard in ValidateTableBucketUpload to reject tableRelativePath if it
starts with a '/' or contains '//'. This ensures that paths like
'/table-buckets/b/ns/t//data/file.parquet' are properly rejected and cannot
bypass the layout validation. Added regression tests to verify.
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Add shared s3tables manager
* Add s3tables shell commands
* Add s3tables admin API
* Add s3tables admin UI
* Fix admin s3tables namespace create
* Rename table buckets menu
* Centralize s3tables tag validation
* Reuse s3tables manager in admin
* Extract s3tables list limit
* Add s3tables bucket ARN helper
* Remove write middleware from s3tables APIs
* Fix bucket link and policy hint
* Fix table tag parsing and nav link
* Disable namespace table link on invalid ARN
* Improve s3tables error decode
* Return flag parse errors for s3tables tag
* Accept query params for namespace create
* Bind namespace create form data
* Read s3tables JS data from DOM
* s3tables: allow empty region ARN
* shell: pass s3tables account id
* shell: require account for table buckets
* shell: use bucket name for namespaces
* shell: use bucket name for tables
* shell: use bucket name for tags
* admin: add table buckets links in file browser
* s3api: reuse s3tables tag validation
* admin: harden s3tables UI handlers
* fix admin list table buckets
* allow admin s3tables access
* validate s3tables bucket tags
* log s3tables bucket metadata errors
* rollback table bucket on owner failure
* show s3tables bucket owner
* add s3tables iam conditions
* Add s3tables user permissions UI
* Authorize s3tables using identity actions
* Add s3tables permissions to user modal
* Disambiguate bucket scope in user permissions
* Block table bucket names that match S3 buckets
* Pretty-print IAM identity JSON
* Include tags in s3tables permission context
* admin: refactor S3 Tables inline JavaScript into a separate file
* s3tables: extend IAM policy condition operators support
* shell: use LookupEntry wrapper for s3tables bucket conflict check
* admin: handle buildBucketPermissions validation in create/update flows
* s3api: ensure MD5 is calculated or reused during CopyObject
Fixes #8155
- Capture and reuse source MD5 for direct copies
- Calculate MD5 for small inline objects during copy
* s3api: refactor encryption logic and safe MD5 copying
- Extract duplicated bucket default encryption logic into helper
- Use safe append copy for MD5 slice to avoid shared modifications
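The safe-copy idiom referenced above looks roughly like this (function and variable names illustrative):
```go
// copyMD5 returns an independent copy of the source MD5 bytes, so later
// mutations of either slice cannot affect the other (a plain assignment
// would alias the same backing array).
func copyMD5(srcMD5 []byte) []byte {
	return append([]byte(nil), srcMD5...)
}
```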
* refactor
* avoids unnecessary MD5 recalculations for small files
Fetch and evaluate table policies in DeleteTable handler to support policy-based
delegation. Aligns authorization behavior with GetTable and ListTables handlers
instead of only checking ownership.
Explicitly handle ErrAttributeNotFound vs other errors when fetching bucket policy.
Return errors for non-expected failures to prevent masking filer issues and
ensure correct authorization decisions.
Combine length, character, and reserved pattern validation into validateBucketName()
which returns descriptive error messages. Keep isValidBucketName() for backward
compatibility. This simplifies handler validation and provides better error reporting.
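A sketch of the combined validation; the length bounds, character rule, and reserved prefix list shown here are illustrative placeholders, not necessarily the exact rules SeaweedFS enforces:
```go
import (
	"fmt"
	"regexp"
	"strings"
)

var (
	bucketNameRegex  = regexp.MustCompile(`^[a-z0-9][a-z0-9-]*[a-z0-9]$`)
	reservedPrefixes = []string{"xn--"} // placeholder list
)

// validateBucketName returns a descriptive error for each failed constraint.
func validateBucketName(name string) error {
	if len(name) < 3 || len(name) > 63 {
		return fmt.Errorf("bucket name must be 3-63 characters, got %d", len(name))
	}
	if !bucketNameRegex.MatchString(name) {
		return fmt.Errorf("bucket name %q contains invalid characters", name)
	}
	for _, p := range reservedPrefixes {
		if strings.HasPrefix(name, p) {
			return fmt.Errorf("bucket name %q uses reserved prefix %q", name, p)
		}
	}
	return nil
}

// isValidBucketName is kept for backward compatibility with existing callers.
func isValidBucketName(name string) bool { return validateBucketName(name) == nil }
```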
Define a separate tableNamePatternStr constant for the table name component in
the ARN regex, even though it currently has the same value as
tableNamespacePatternStr. This improves code clarity and maintainability, making
it easier to modify if the naming rules for tables and namespaces diverge in the
future.
Make ListTables authorization consistent with GetTable/CreateTable:
1. ListTables authorization now evaluates policies instead of owner-only checks:
- For namespace listing: checks namespace policy AND bucket policy
- For bucket-wide listing: checks bucket policy
- Uses CanListTables permission framework
2. Remove owner-only filter in listTablesWithClient that prevented policy-based
sharing of tables. Authorization is now enforced at the handler level, so all
tables in the namespace/bucket are returned to authorized callers (who have
access either via ownership or policy).
3. Add flexible PolicyDocument.UnmarshalJSON to support both single-object and
array forms of Statement field:
- Handles: {"Statement": {...}}
- Handles: {"Statement": [{...}, {...}]}
- Improves AWS IAM compatibility
This ensures cross-account table listing works when delegated via bucket/namespace
policies, consistent with the authorization model for other operations.
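The dual-form Statement unmarshaling can be written along these lines (a sketch; the real PolicyDocument and Statement fields in the PR are richer):
```go
import "encoding/json"

// Statement is trimmed to a single field for brevity.
type Statement struct {
	Effect string `json:"Effect"`
}

type PolicyDocument struct {
	Version   string      `json:"Version"`
	Statement []Statement `json:"Statement"`
}

// UnmarshalJSON accepts both {"Statement": {...}} and {"Statement": [{...}]}.
func (d *PolicyDocument) UnmarshalJSON(data []byte) error {
	var raw struct {
		Version   string          `json:"Version"`
		Statement json.RawMessage `json:"Statement"`
	}
	if err := json.Unmarshal(data, &raw); err != nil {
		return err
	}
	d.Version = raw.Version
	if len(raw.Statement) == 0 {
		return nil
	}
	// Try the array form first, then fall back to a single object.
	if err := json.Unmarshal(raw.Statement, &d.Statement); err == nil {
		return nil
	}
	var single Statement
	if err := json.Unmarshal(raw.Statement, &single); err != nil {
		return err
	}
	d.Statement = []Statement{single}
	return nil
}
```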
Address three related ownership consistency issues:
1. CreateNamespace now sets OwnerAccountID to bucketMetadata.OwnerAccountID
instead of request principal. This prevents namespaces created by
delegated callers (via bucket policy) from becoming unmanageable, since
ListNamespaces filters by bucket owner.
2. CreateTable now:
- Fetches bucket metadata to use correct owner for bucket policy evaluation
- Uses namespaceMetadata.OwnerAccountID for namespace policy checks
- Uses bucketMetadata.OwnerAccountID for bucket policy checks
- Sets table OwnerAccountID to namespaceMetadata.OwnerAccountID (inherited)
3. GetTable now:
- Fetches bucket metadata to use correct owner for bucket policy evaluation
- Uses metadata.OwnerAccountID for table policy checks
- Uses bucketMetadata.OwnerAccountID for bucket policy checks
This ensures:
- Bucket owner retains implicit "owner always allowed" behavior even when
evaluating bucket policies
- Ownership hierarchy is consistent (namespace owned by bucket, table owned by namespace)
- Cross-principal delegation via policies doesn't break ownership chains
Move bucket name extraction outside the if/else block in
extractResourceOwnerAndBucket since the logic is identical for both
ResourceTypeTable and ResourceTypeBucket cases. This reduces code
duplication and improves maintainability.
The extraction pattern (parts[1] from /tables/{bucket}/...) works for
both resource types, so it's now performed once before the type-specific
metadata unmarshaling.
Implement resource-specific policy validation to prevent over-broad
permission grants. Add matchesResource and matchesResourcePattern functions
to validate statement Resource fields against specific resource ARNs.
Add new CheckPermissionWithResource function that includes resource ARN
validation, while keeping CheckPermission unchanged for backward compatibility.
This enables policies to grant access to specific resources only:
- statements with Resource: "arn:aws:s3tables:...:bucket/specific-bucket/*"
will only match when accessing that specific bucket
- statements without Resource field match all resources (implicit *)
- resource patterns support wildcards (* for any sequence, ? for single char)
For future use: Handlers can call CheckPermissionWithResource with the
target resource ARN to enforce resource-level access control.
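A sketch of the resource wildcard matching; the matcher in the PR may share code with the action/principal matcher, but the semantics are the same ('*' spans any sequence, '?' spans one character):
```go
import (
	"regexp"
	"strings"
)

// matchesResourcePattern reports whether resourceARN matches pattern, where
// '*' matches any run of characters and '?' matches exactly one. A statement
// without a Resource field is treated by callers as the implicit pattern "*".
func matchesResourcePattern(pattern, resourceARN string) bool {
	var sb strings.Builder
	sb.WriteString("^")
	for _, r := range pattern {
		switch r {
		case '*':
			sb.WriteString(".*")
		case '?':
			sb.WriteString(".")
		default:
			sb.WriteString(regexp.QuoteMeta(string(r)))
		}
	}
	sb.WriteString("$")
	re, err := regexp.Compile(sb.String())
	if err != nil {
		return false
	}
	return re.MatchString(resourceARN)
}
```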
Replace silent error swallowing (err == nil) with proper error distinction
for bucket policy reads. Now properly checks ErrAttributeNotFound and
propagates other errors as internal server errors.
Fixed 5 locations:
- handleCreateNamespace (policy fetch)
- handleDeleteNamespace (policy fetch)
- handleListNamespaces (policy fetch)
- handleGetNamespace (policy fetch)
- handleGetTableBucket (policy fetch)
This prevents masking of filer issues when policies cannot be read due
to I/O errors or other transient failures.
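In schematic form (the Entry type, getExtendedAttribute, and ErrAttributeNotFound below are stubs standing in for the handlers' actual helpers):
```go
import (
	"errors"
	"fmt"
)

// Stubs standing in for the real helpers in the s3tables package.
type Entry struct{ Extended map[string][]byte }

var ErrAttributeNotFound = errors.New("attribute not found")

func getExtendedAttribute(entry *Entry, key string) ([]byte, error) {
	if v, ok := entry.Extended[key]; ok {
		return v, nil
	}
	return nil, ErrAttributeNotFound
}

// loadBucketPolicy returns the stored policy bytes, or nil when no policy is
// attached. Any other error is propagated so the handler can answer with an
// internal server error instead of silently treating the bucket as policy-free.
func loadBucketPolicy(entry *Entry, key string) ([]byte, error) {
	data, err := getExtendedAttribute(entry, key)
	if errors.Is(err, ErrAttributeNotFound) {
		return nil, nil // expected: no policy attached
	}
	if err != nil {
		return nil, fmt.Errorf("reading bucket policy: %w", err)
	}
	return data, nil
}
```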
Change ARN generation to use resource OwnerAccountID instead of caller
identity (h.getAccountID(r)). This ensures ARNs are stable and consistent
regardless of which principal accesses the resource.
Updated generateTableBucketARN and generateTableARN function signatures
to accept ownerAccountID parameter. All call sites updated to pass the
resource owner's account ID from metadata.
This prevents ARN inconsistency issues when multiple principals have
access to the same resource via policies.
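Schematically (the ARN layout shown is the conventional AWS s3tables form; the exact format string and signature in the PR may differ):
```go
import "fmt"

// generateTableBucketARN builds the bucket ARN from the resource owner's
// account ID (read from stored metadata) rather than from the caller's
// identity, so the same ARN is returned no matter which principal asks.
func generateTableBucketARN(region, ownerAccountID, bucketName string) string {
	return fmt.Sprintf("arn:aws:s3tables:%s:%s:bucket/%s", region, ownerAccountID, bucketName)
}
```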
Replace strict ownership check in CreateTable with policy-based authorization.
Now checks both namespace and bucket policies for CreateTable permission,
allowing delegation via resource policies while still respecting owner bypass.
Authorization logic:
- Namespace policy grants CreateTable → allowed
- Bucket policy grants CreateTable → allowed
- Otherwise → denied (even if same owner)
This enables cross-principal table creation via policies while maintaining
security through explicit allow/deny semantics.
Add automatic normalization of operations to full IAM-style action names
(e.g., 's3tables:CreateTableBucket') in CheckPermission(). This ensures
policy statements using prefixed actions (s3tables:*) correctly match
operations evaluated by permission helpers.
Also fixes incorrect r.Context() passed to GetIdentityNameFromContext
which expects *http.Request. Now passes r directly.
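The normalization itself is small (a sketch; handling of already-prefixed names follows the text):
```go
import "strings"

// normalizeAction turns a bare operation name like "CreateTableBucket" into
// the full IAM-style action "s3tables:CreateTableBucket", so it matches
// policy statements that use the prefixed form. Already-prefixed names are
// returned unchanged.
func normalizeAction(op string) string {
	if strings.Contains(op, ":") {
		return op
	}
	return "s3tables:" + op
}
```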
Move validateNamespace call outside of filerClient.WithFilerClient closure
so that validation errors return HTTP 400 (InvalidRequest) instead of 500
(InternalError).
Before: Validation error inside closure → treated as internal error → 500
After: Validation error before closure → handled as bad request → 400
This provides correct error semantics: namespace validation is an input
validation issue, not a server error.
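In outline, the handler now has this shape (all helper names here are illustrative stand-ins; createNamespaceEntry wraps the WithFilerClient closure):
```go
import "net/http"

// Stand-ins for the real handler helpers; only the control flow matters here.
type handler struct{}

func validateNamespace(ns string) error                 { return nil }
func (h *handler) createNamespaceEntry(ns string) error { return nil } // wraps the WithFilerClient closure
func (h *handler) writeError(w http.ResponseWriter, code int, typ, msg string) {
	http.Error(w, typ+": "+msg, code)
}

// handleCreateNamespace validates input before any filer work, so validation
// failures map to HTTP 400 while only failures inside the filer call map to 500.
func (h *handler) handleCreateNamespace(w http.ResponseWriter, r *http.Request, namespace string) {
	if err := validateNamespace(namespace); err != nil {
		h.writeError(w, http.StatusBadRequest, "InvalidRequest", err.Error())
		return
	}
	if err := h.createNamespaceEntry(namespace); err != nil {
		h.writeError(w, http.StatusInternalServerError, "InternalError", err.Error())
		return
	}
	w.WriteHeader(http.StatusOK)
}
```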
Replace error-swallowing pattern where all errors from getExtendedAttribute
were ignored for bucket policy reads. Now properly distinguish between:
- ErrAttributeNotFound: Policy not found is expected; continue with empty policy
- Other errors: Return internal server error and stop processing
Applied fix to all bucket policy reads in:
- handleDeleteTableBucketPolicy (line 220)
- handleTagResource (line 313)
- handleUntagResource (line 405)
- handleListTagsForResource (line 488)
- And additional occurrences in closures
This prevents silent failures and ensures policy-related errors are surfaced
to callers rather than being silently ignored.
Replace the custom suffix-only wildcard implementation in matchesActionPattern
and matchesPrincipal with the policy_engine.MatchesWildcard function from
PR #8052. This enables full wildcard support including:
- Middle wildcards: s3tables:Get*Table matches GetTable
- Question mark wildcards: Get? matches any single character
- Combined patterns: s3tables:*Table* matches any action containing 'Table'
Benefits:
- Code reuse: eliminates duplicate wildcard logic
- Complete IAM compatibility: supports all AWS wildcard patterns
- Performance: uses efficient O(n) backtracking algorithm
- Consistency: same wildcard behavior across S3 API and S3 Tables
Add comprehensive unit tests covering exact matches, suffix wildcards,
middle wildcards, question marks, and combined patterns for both action
and principal matching.
Create extractResourceOwnerAndBucket() helper to consolidate the repeated pattern
of unmarshaling metadata and extracting bucket name from resource path. This
pattern was duplicated in handleTagResource, handleListTagsForResource, and
handleUntagResource. Update all three handlers to use the helper.
Also update remaining uses of getPrincipalFromRequest() (in handler_bucket_create,
handler_bucket_get_list_delete, handler_namespace) to use getAccountID() after
consolidating the two identical methods.
Update handleListTagsForResource to fetch and pass bucket policy to
CheckPermission, matching the behavior of handleTagResource/handleUntagResource.
This enables bucket-policy-based permission grants to be evaluated for
ListTagsForResource, not just ownership-based checks.
Both methods had identical implementations - they return the account ID from
request header or fall back to handler's default. Remove the duplicate
getPrincipalFromRequest and use getAccountID throughout, with updated comment
explaining its dual role as both caller identity and principal for permission
checks.
- Add CanTagResource() to check TagResource permission
- Add CanUntagResource() to check UntagResource permission
- Update CanManageTags() to check both operations (OR logic)
This prevents UntagResource from incorrectly checking 'ManageTags' permission
and ensures each operation validates the correct permission when per-operation
permissions are enforced.
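Sketched (the receiver, context type, and underlying check are placeholders for the real permission helpers):
```go
// permContext carries the caller identity, target resource, and any attached
// policies; checkPermission stands in for the real policy/ownership evaluation.
type permContext struct{}

type authorizer struct{}

func (a *authorizer) checkPermission(ctx *permContext, action string) bool { return false }

// CanTagResource and CanUntagResource each validate their own IAM action.
func (a *authorizer) CanTagResource(ctx *permContext) bool {
	return a.checkPermission(ctx, "s3tables:TagResource")
}

func (a *authorizer) CanUntagResource(ctx *permContext) bool {
	return a.checkPermission(ctx, "s3tables:UntagResource")
}

// CanManageTags is satisfied when either tagging operation is allowed.
func (a *authorizer) CanManageTags(ctx *permContext) bool {
	return a.CanTagResource(ctx) || a.CanUntagResource(ctx)
}
```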
Replace misleading character-only error message with generic 'invalid bucket
name'. The isValidBucketName() function checks multiple constraints beyond
character set (length, reserved prefixes/suffixes, start/end rules), so a
specific character message is inaccurate.
The field name 'Schema' was confusing given it holds a *TableMetadata struct
and serializes as 'metadata' in JSON. Rename to 'Metadata' for clarity and
consistency with the JSON tag and intended meaning.
- Remove dead URL unescape for namespace (regex [a-z0-9_]+ cannot contain
percent-escapes)
- Add URL decoding and validation of extracted table name via
validateTableName() to prevent callers from bypassing request validation
done in other paths
Enforce the same bucket name validation rules (length, characters, reserved
prefixes/suffixes) when extracting from ARN. This prevents accepting ARNs
that the system would never create and ensures consistency with
CreateTableBucket validation.
MaxBuckets is user-controlled and used in uint32(maxBuckets*2) for ListEntries.
Very large values can overflow uint32 or trigger overly expensive scans. Cap
MaxBuckets to 1000 and reject out-of-range values, consistent with MaxTables
handling and S3 MaxKeys validation elsewhere in the codebase.
MaxTables is user-controlled and influences gRPC ListEntries limits via
uint32(maxTables*2). Without an upper bound, very large values can overflow
uint32 or cause excessively large directory scans. Cap MaxTables to 1000 and
return InvalidRequest for out-of-range values, consistent with S3 MaxKeys
handling.
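The bound check, sketched (the 1000 cap comes from the text; names are illustrative):
```go
import "fmt"

const maxListLimit = 1000 // shared upper bound for MaxBuckets and MaxTables

// validateListLimit rejects out-of-range values before the limit is doubled
// and cast to uint32 for the filer ListEntries request.
func validateListLimit(requested int) (int, error) {
	if requested <= 0 || requested > maxListLimit {
		return 0, fmt.Errorf("limit must be between 1 and %d, got %d", maxListLimit, requested)
	}
	return requested, nil
}
```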
Add request body size limiting (10MB) to readRequestBody method:
- Define maxRequestBodySize constant to prevent unbounded reads
- Use io.LimitReader to enforce size limit
- Add explicit error handling for oversized requests
- Prevents potential DoS attacks via large request bodies
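A sketch of the bounded read (the constant name comes from the text; the rest is illustrative):
```go
import (
	"fmt"
	"io"
	"net/http"
)

const maxRequestBodySize = 10 << 20 // 10MB

// readRequestBody reads at most maxRequestBodySize bytes and rejects larger
// bodies explicitly instead of buffering an unbounded request.
func readRequestBody(r *http.Request) ([]byte, error) {
	limited := io.LimitReader(r.Body, maxRequestBodySize+1)
	body, err := io.ReadAll(limited)
	if err != nil {
		return nil, err
	}
	if len(body) > maxRequestBodySize {
		return nil, fmt.Errorf("request body exceeds %d bytes", maxRequestBodySize)
	}
	return body, nil
}
```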
Remove premature HTTP error writes from within WithFilerClient closure
to prevent duplicate status code responses. Error handling is now
consistently performed at the top level using isAuthError.