96 Commits

75a6a34528  dlm: resilient distributed locks via consistent hashing + backup replication (#8860)

* dlm: replace modulo hashing with consistent hash ring
  Introduce HashRing with virtual nodes (CRC32-based consistent hashing) to replace the modulo-based hashKeyToServer. When a filer node is removed, only keys that hashed to that node are remapped to the next server on the ring, leaving all other mappings stable. This is the foundation for backup replication — the successor on the ring is always the natural takeover node.
* dlm: add Generation and IsBackup fields to Lock
  Lock now carries IsBackup (whether this node holds the lock as a backup replica) and Generation (a monotonic fencing token that increments on each fresh acquisition, stays the same on renewal). Add helper methods: AllLocks, PromoteLock, DemoteLock, InsertBackupLock, RemoveLock, GetLock.
* dlm: add ReplicateLock RPC and generation/is_backup proto fields
  Add generation field to LockResponse for fencing tokens. Add generation and is_backup fields to Lock message. Add ReplicateLock RPC for primary-to-backup lock replication. Add ReplicateLockRequest/ReplicateLockResponse messages.
* dlm: add async backup replication to DistributedLockManager
  Route lock/unlock via consistent hash ring's GetPrimaryAndBackup(). After a successful lock or unlock on the primary, asynchronously replicate the operation to the backup server via ReplicateFunc callback. Single-server deployments skip replication.
* dlm: add ReplicateLock handler and backup-aware topology changes
  Add ReplicateLock gRPC handler for primary-to-backup replication. Revise OnDlmChangeSnapshot to handle three cases on topology change:
  - Promote backup locks when this node becomes primary
  - Demote primary locks when this node becomes backup
  - Transfer locks when this node is neither primary nor backup
  Wire up SetupDlmReplication during filer server initialization.
* dlm: expose generation fencing token in lock client
  LiveLock now captures the generation from LockResponse and exposes it via Generation() method. Consumers can use this as a fencing token to detect stale lock holders.
* dlm: update empty folder cleaner to use consistent hash ring
  Replace local modulo-based hashKeyToServer with LockRing.GetPrimary() which uses the shared consistent hash ring for folder ownership.
* dlm: add unit tests for consistent hash ring
  Test basic operations, consistency on server removal (only keys from removed server move), backup-is-successor property (backup becomes new primary when primary is removed), and key distribution balance.
* dlm: add integration tests for lock replication failure scenarios
  Test cases:
  - Primary crash with backup promotion (backup has valid token)
  - Backup crash with primary continuing
  - Both primary and backup crash (lock lost, re-acquirable)
  - Rolling restart across all nodes
  - Generation fencing token increments on new acquisition
  - Replication failure (primary still works independently)
  - Unlock replicates deletion to backup
  - Lock survives server addition (topology change)
  - Consistent hashing minimal disruption (only removed server's keys move)
* dlm: address PR review findings
  1. Causal replication ordering: Add per-lock sequence number (Seq) that increments on every mutation. Backup rejects incoming mutations with seq <= current seq, preventing stale async replications from overwriting newer state. Unlock replication also carries seq and is rejected if stale.
  2. Demote-after-handoff: OnDlmChangeSnapshot now transfers the lock to the new primary first and only demotes to backup after a successful TransferLocks RPC. If the transfer fails, the lock stays as primary on this node.
  3. SetSnapshot candidateServers leak: Replace the candidateServers map entirely instead of appending, so removed servers don't linger.
  4. TransferLocks preserves Generation and Seq: InsertLock now accepts generation and seq parameters. After accepting a transferred lock, the receiving node re-replicates to its backup.
  5. Rolling restart test: Add re-replication step after promotion and assert survivedCount > 0. Add TestDLM_StaleReplicationRejected.
  6. Mixed-version upgrade note: Add comment on HashRing documenting that all filer nodes must be upgraded together.
* dlm: serve renewals locally during transfer window on node join
  When a new node joins and steals hash ranges from surviving nodes, there's a window between ring update and lock transfer where the client gets redirected to a node that doesn't have the lock yet. Fix: if the ring says primary != self but we still hold the lock locally (non-backup, matching token), serve the renewal/unlock here rather than redirecting. The lock will be transferred by OnDlmChangeSnapshot, and subsequent requests will go to the new primary once the transfer completes.
  Add tests:
  - TestDLM_NodeDropAndJoin_OwnershipDisruption: measures disruption when a node drops and a new one joins (14/100 surviving-node locks disrupted, all handled by transfer logic)
  - TestDLM_RenewalDuringTransferWindow: verifies renewal succeeds on old primary during the transfer window
* dlm: master-managed lock ring with stabilization batching
  The master now owns the lock ring membership. Instead of filers independently reacting to individual ClusterNodeUpdate add/remove events, the master:
  1. Tracks filer membership in LockRingManager
  2. Batches rapid changes with a 1-second stabilization timer (e.g., a node drop + join within 1 second → single ring update)
  3. Broadcasts the complete ring snapshot atomically via the new LockRingUpdate message in KeepConnectedResponse
  Filers receive the ring as a complete snapshot and apply it via SetSnapshot, ensuring all filers converge to the same ring state without intermediate churn. This eliminates the double-churn problem where a rapid drop+join would fire two separate ring mutations, each triggering lock transfers and disrupting ownership on surviving nodes.
* dlm: track ring version, reject stale updates, remove dead code
  SetSnapshot now takes a version parameter from the master. Stale updates (version < current) are rejected, preventing reordered messages from overwriting a newer ring state. Version 0 is always accepted for bootstrap. Remove AddServer/RemoveServer from LockRing — the ring is now exclusively managed by the master via SetSnapshot. Remove the candidateServers map that was only used by those methods.
* dlm: fix SelectLocks data race, advance generation on backup insert
  - SelectLocks: change RLock to Lock since the function deletes map entries, which is a write operation and causes a data race under RLock.
  - InsertBackupLock: advance nextGeneration to at least the incoming generation so that after failover promotion, new lock acquisitions get a generation strictly greater than any replicated lock.
  - Bump replication failure log from V(1) to Warningf for production visibility.
* dlm: fix SetSnapshot race, test reliability, timer edge cases
  - SetSnapshot: hold LockRing lock through both version update and Ring.SetServers() so they're atomic. Prevents a concurrent caller from seeing the new version but applying stale servers.
  - Transfer window test: search for a key that actually moves primary when filer4 joins, instead of relying on a fixed key that may not.
  - renewLock redirect: pass the existing token to the new primary instead of empty string, so redirected renewals work correctly.
  - scheduleBroadcast: check timer.Stop() return value. If the timer already fired, the callback picks up latest state.
  - FlushPending: only broadcast if timer.Stop() returns true (timer was still pending). If false, the callback is already running.
  - Fix test comment: "idempotent" → "accepted, state-changing".
* dlm: use wall-clock nanoseconds for lock ring version
  The lock ring version was an in-memory counter that reset to 0 on master restart. A filer that had seen version 5 would reject version 1 from the restarted master. Fix: use time.Now().UnixNano() as the version. This survives master restarts without persistence — the restarted master produces a version greater than any pre-restart value.
* dlm: treat expired lock owners as missing
  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* dlm: reject stale lock transfers
  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* dlm: order replication by generation
  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* dlm: bootstrap lock ring on reconnect
  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
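For context, here is a minimal, self-contained sketch of the consistent-hash-ring idea the commits above describe: CRC32 hashing with virtual nodes, where the backup is the next distinct server on the ring. The hashRing and getPrimaryAndBackup names echo the commit messages, but the types and details are illustrative, not the actual SeaweedFS implementation.

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
)

// hashRing maps keys to servers via CRC32-based consistent hashing with
// virtual nodes, so removing a server only remaps the keys it owned.
type hashRing struct {
	points []uint32          // sorted hash points on the ring
	owners map[uint32]string // hash point -> server
}

func newHashRing(servers []string, virtualNodes int) *hashRing {
	r := &hashRing{owners: map[uint32]string{}}
	for _, s := range servers {
		for i := 0; i < virtualNodes; i++ {
			p := crc32.ChecksumIEEE([]byte(fmt.Sprintf("%s#%d", s, i)))
			r.owners[p] = s
			r.points = append(r.points, p)
		}
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// getPrimaryAndBackup returns the first ring point at or after the key's hash
// (primary) and the next point owned by a different server (backup), so the
// backup is always the natural takeover node when the primary disappears.
func (r *hashRing) getPrimaryAndBackup(key string) (primary, backup string) {
	if len(r.points) == 0 {
		return "", ""
	}
	h := crc32.ChecksumIEEE([]byte(key))
	start := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h }) % len(r.points)
	primary = r.owners[r.points[start]]
	for i := 1; i < len(r.points); i++ {
		s := r.owners[r.points[(start+i)%len(r.points)]]
		if s != primary {
			return primary, s
		}
	}
	return primary, "" // single-server ring: no backup
}

func main() {
	ring := newHashRing([]string{"filer1:8888", "filer2:8888", "filer3:8888"}, 100)
	p, b := ring.getPrimaryAndBackup("/buckets/demo/some/lock")
	fmt.Println("primary:", p, "backup:", b)
}
```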

0761be58d3  fix(s3): preserve explicit directory markers during empty folder cleanup (#8831)

* fix(s3): preserve explicit directory markers during empty folder cleanup
  PR #8292 switched empty-folder cleanup from per-folder implicit checks to bucket-level policy, inadvertently dropping the check that preserved explicitly created directories (e.g., PUT /bucket/folder/). This caused user-created folders to be deleted when their last file was removed. Add IsDirectoryKeyObject check in executeCleanup to skip folders that have a MIME type set, matching the canonical pattern used throughout the S3 listing and delete handlers.
* fix: handle ErrNotFound in IsDirectoryKeyObject for race safety
  Entry may be deleted between the emptiness check and the directory marker lookup. Treat not-found as false rather than propagating the error, avoiding unnecessary error logging in the cleanup path.
* refactor: consolidate directory marker tests and tidy error handling
  - Combine two separate test functions into a table-driven test
  - Nest ErrNotFound check inside the err != nil block
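A rough illustration of the guard described above: cleanup is skipped for folders whose directory entry carries a MIME type, and a not-found entry is treated as "no marker". The Entry struct and findEntry helper are simplified stand-ins, not the real filer API.

```go
package main

import (
	"errors"
	"fmt"
)

// Entry is a stand-in for a filer entry; only the fields needed here.
type Entry struct {
	IsDirectory bool
	Mime        string
}

var ErrNotFound = errors.New("filer: no entry found")

// findEntry is a hypothetical lookup; the real cleaner queries the filer store.
func findEntry(path string) (*Entry, error) {
	if path == "/buckets/demo/photos/" {
		return &Entry{IsDirectory: true, Mime: "httpd/unix-directory"}, nil
	}
	return nil, ErrNotFound
}

// isExplicitDirectoryMarker reports whether a folder was explicitly created
// (e.g. PUT /bucket/folder/), i.e. the directory entry carries a MIME type.
// Not-found is treated as "no marker" rather than an error, because the entry
// may be deleted between the emptiness check and this lookup.
func isExplicitDirectoryMarker(path string) (bool, error) {
	entry, err := findEntry(path)
	if err != nil {
		if errors.Is(err, ErrNotFound) {
			return false, nil
		}
		return false, err
	}
	return entry.IsDirectory && entry.Mime != "", nil
}

func main() {
	keep, _ := isExplicitDirectoryMarker("/buckets/demo/photos/")
	fmt.Println("skip cleanup:", keep) // explicit markers are preserved
}
```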

0b3867dca3  filer: add structured error codes to CreateEntryResponse (#8767)

* filer: add FilerError enum and error_code field to CreateEntryResponse
  Add a machine-readable error code alongside the existing string error field. This follows the precedent set by PublishMessageResponse in the MQ broker proto. The string field is kept for human readability and backward compatibility. Defined codes: OK, ENTRY_NAME_TOO_LONG, PARENT_IS_FILE, EXISTING_IS_DIRECTORY, EXISTING_IS_FILE, ENTRY_ALREADY_EXISTS.
* filer: add sentinel errors and error code mapping in filer_pb
  Define sentinel errors (ErrEntryNameTooLong, ErrParentIsFile, etc.) in the filer_pb package so both the filer and consumers can reference them without circular imports. Add FilerErrorToSentinel() to map proto error codes to sentinels, and update CreateEntryWithResponse() to check error_code first, falling back to the string-based path for backward compatibility with old servers.
* filer: return wrapped sentinel errors and set proto error codes
  Replace fmt.Errorf string errors in filer.CreateEntry, UpdateEntry, and ensureParentDirectoryEntry with wrapped filer_pb sentinel errors (using %w). This preserves errors.Is() traversal on the server side. In the gRPC CreateEntry handler, map sentinel errors to the corresponding FilerError proto codes using errors.Is(), setting both resp.Error (string, for backward compat) and resp.ErrorCode (enum).
* S3: use errors.Is() with filer sentinels instead of string matching
  Replace fragile string-based error matching in filerErrorToS3Error and other S3 API consumers with errors.Is() checks against filer_pb sentinel errors. This works because the updated CreateEntryWithResponse helper reconstructs sentinel errors from the proto FilerError code. Update iceberg stage_create and metadata_files to check resp.ErrorCode instead of parsing resp.Error strings. Update SSE-S3 to use errors.Is() for the already-exists check. String matching is retained only for non-filer errors (gRPC transport errors, checksum validation) that don't go through CreateEntryResponse.
* filer: remove backward-compat string fallbacks for error codes
  Clients and servers are always deployed together, so there is no need for backward-compatibility fallback paths that parse resp.Error strings when resp.ErrorCode is unset. Simplify all consumers to rely solely on the structured error code.
* iceberg: ensure unknown non-OK error codes are not silently ignored
  When FilerErrorToSentinel returns nil for an unrecognized error code, return an error including the code and message rather than falling through to return nil.
* filer: fix redundant error message and restore error wrapping in helper
  Use request path instead of resp.Error in the sentinel error format string to avoid duplicating the sentinel message (e.g. "entry already exists: entry already exists"). Restore %w wrapping with errors.New() in the fallback paths so callers can use errors.Is()/errors.As().
* filer: promote file to directory on path conflict instead of erroring
  S3 allows both "foo/bar" (object) and "foo/bar/xyzzy" (another object) to coexist because S3 has a flat key space. When ensureParentDirectoryEntry finds a parent path that is a file instead of a directory, promote it to a directory by setting ModeDir while preserving the original content and chunks. Use Store.UpdateEntry directly to bypass the Filer.UpdateEntry type-change guard. This fixes the S3 compatibility test failures where creating overlapping keys (e.g. "foo/bar" then "foo/bar/xyzzy") returned ExistingObjectIsFile.
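The error-code plumbing above roughly follows this shape: an enum on the wire, sentinel errors in a shared package, and a mapping function so consumers can use errors.Is() instead of string matching. The enum values and names below are a simplified sketch rather than the actual filer_pb definitions.

```go
package main

import (
	"errors"
	"fmt"
)

// FilerError mirrors the kind of enum added to CreateEntryResponse.
type FilerError int32

const (
	FilerErrorOK FilerError = iota
	FilerErrorEntryNameTooLong
	FilerErrorParentIsFile
	FilerErrorEntryAlreadyExists
)

// Sentinel errors that both server and consumers can reference with errors.Is.
var (
	ErrEntryNameTooLong   = errors.New("entry name too long")
	ErrParentIsFile       = errors.New("parent path is a file")
	ErrEntryAlreadyExists = errors.New("entry already exists")
)

// filerErrorToSentinel maps a machine-readable code back to a sentinel error,
// so callers never have to parse the human-readable error string.
func filerErrorToSentinel(code FilerError, message string) error {
	switch code {
	case FilerErrorOK:
		return nil
	case FilerErrorEntryNameTooLong:
		return fmt.Errorf("%w: %s", ErrEntryNameTooLong, message)
	case FilerErrorParentIsFile:
		return fmt.Errorf("%w: %s", ErrParentIsFile, message)
	case FilerErrorEntryAlreadyExists:
		return fmt.Errorf("%w: %s", ErrEntryAlreadyExists, message)
	default:
		// Unknown non-OK codes must not be silently ignored.
		return fmt.Errorf("filer error code %d: %s", code, message)
	}
}

func main() {
	err := filerErrorToSentinel(FilerErrorEntryAlreadyExists, "/buckets/demo/a.txt")
	fmt.Println(errors.Is(err, ErrEntryAlreadyExists)) // true: structured, not string matching
}
```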

2877febd73  S3: fix silent PutObject failure and enforce 1024-byte key limit (#8764)

* S3: add KeyTooLongError error code
  Add ErrKeyTooLongError (HTTP 400, code "KeyTooLongError") to match the standard AWS S3 error for object keys that exceed length limits.
* S3: fix silent PutObject failure when entry name exceeds max_file_name_length
  putToFiler called client.CreateEntry() directly and discarded the gRPC response. The filer embeds application errors like "entry name too long" in resp.Error (not as gRPC transport errors), so the error was silently swallowed and clients received HTTP 200 with an ETag for objects that were never stored. Switch to the filer_pb.CreateEntry() helper which properly checks resp.Error, and map "entry name too long" to KeyTooLongError (HTTP 400). To avoid fragile string parsing across the gRPC boundary, define shared error message constants in weed/util/constants and use them in both the filer (producing errors) and S3 API (matching errors). Switch filerErrorToS3Error to use strings.Contains/HasSuffix with these constants so matches work regardless of any wrapper prefix. Apply filerErrorToS3Error to the mkdir path for directory markers.
  Fixes #8759
* S3: enforce 1024-byte maximum object key length
  AWS S3 limits object keys to 1024 bytes. Add early validation on write paths (PutObject, CopyObject, CreateMultipartUpload) to reject keys exceeding the limit with the standard KeyTooLongError (HTTP 400). The key length check runs before bucket auto-creation to prevent overlong keys from triggering unnecessary side effects. Also use filerErrorToS3Error for CopyObject's mkFile error paths so name-too-long errors from the filer return KeyTooLongError instead of InternalError.
  Ref #8758
* S3: add handler-level tests for key length validation and error mapping
  Add tests for filerErrorToS3Error mapping "entry name too long" to KeyTooLongError, including a regression test for the CreateEntry-prefixed "existing ... is a directory" form. Add handler-level integration tests that exercise PutObjectHandler, CopyObjectHandler, and NewMultipartUploadHandler via httptest, verifying HTTP 400 and KeyTooLongError XML response for overlong keys and acceptance of keys at the 1024-byte limit.
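The early key-length guard could look roughly like this. The 1024-byte limit is the AWS figure cited in the commit; the function and error plumbing are simplified stand-ins for the real handler code.

```go
package main

import (
	"fmt"
	"net/http"
)

const maxS3KeyLength = 1024 // AWS S3 limits object keys to 1024 bytes

// checkKeyLength rejects overlong keys before any side effects such as
// bucket auto-creation; it returns an S3-style error code and HTTP status.
func checkKeyLength(object string) (errCode string, status int, ok bool) {
	if len(object) > maxS3KeyLength {
		return "KeyTooLongError", http.StatusBadRequest, false
	}
	return "", http.StatusOK, true
}

func main() {
	long := make([]byte, maxS3KeyLength+1)
	for i := range long {
		long[i] = 'a'
	}
	if code, status, ok := checkKeyLength(string(long)); !ok {
		fmt.Println(status, code) // 400 KeyTooLongError
	}
}
```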

6bf654c25c  fix: keep metadata subscriptions progressing (#8730) (#8746)

* fix: keep metadata subscriptions progressing (#8730)
* test: cancel slow metadata writers with parent context
* filer: ignore missing persisted log chunks

f3c5ba3cd6  feat(filer): add lazy directory listing for remote mounts (#8615)

* feat(filer): add lazy directory listing for remote mounts
  Directory listings on remote mounts previously only queried the local filer store. With lazy mounts the listing was empty; with eager mounts it went stale over time. Add on-demand directory listing that fetches from remote and caches results with a 5-minute TTL:
  - Add `ListDirectory` to `RemoteStorageClient` interface (delimiter-based, single-level listing, separate from recursive `Traverse`)
  - Implement in S3, GCS, and Azure backends using each platform's hierarchical listing API
  - Add `maybeLazyListFromRemote` to filer: before each directory listing, check if the directory is under a remote mount with an expired cache, fetch from remote, persist entries to the local store, then let existing listing logic run on the populated store
  - Use singleflight to deduplicate concurrent requests for the same directory
  - Skip local-only entries (no RemoteEntry) to avoid overwriting unsynced uploads
  - Errors are logged and swallowed (availability over consistency)
* refactor: extract xattr key to constant xattrRemoteListingSyncedAt
* feat: make listing cache TTL configurable per mount via listing_cache_ttl_seconds
  Add listing_cache_ttl_seconds field to RemoteStorageLocation protobuf. When 0 (default), lazy directory listing is disabled for that mount. When >0, enables on-demand directory listing with the specified TTL. Expose as -listingCacheTTL flag on remote.mount command.
* refactor: address review feedback for lazy directory listing
  - Add context.Context to ListDirectory interface and all implementations
  - Capture startTime before remote call for accurate TTL tracking
  - Simplify S3 ListDirectory using ListObjectsV2PagesWithContext
  - Make maybeLazyListFromRemote return void (errors always swallowed)
  - Remove redundant trailing-slash path manipulation in caller
  - Update tests to match new signatures
* When an existing entry has Remote != nil, we should merge remote metadata into it rather than replacing it.
* fix(gcs): wrap ListDirectory iterator error with context
  The raw iterator error was returned without bucket/path context, making it harder to debug. Wrap it consistently with the S3 pattern.
* fix(s3): guard against nil pointer dereference in Traverse and ListDirectory
  Some S3-compatible backends may return nil for LastModified, Size, or ETag fields. Check for nil before dereferencing to prevent panics.
* fix(filer): remove blanket 2-minute timeout from lazy listing context
  Individual SDK operations (S3, GCS, Azure) already have per-request timeouts and retry policies. The blanket timeout could cut off large directory listings mid-operation even though individual pages were succeeding.
* fix(filer): preserve trace context in lazy listing with WithoutCancel
  Use context.WithoutCancel(ctx) instead of context.Background() so trace/span values from the incoming request are retained for distributed tracing, while still decoupling cancellation.
* fix(filer): use Store.FindEntry for internal lookups, add Uid/Gid to files, fix updateDirectoryListingSyncedAt
  - Use f.Store.FindEntry instead of f.FindEntry for staleness check and child lookups to avoid unnecessary lazy-fetch overhead
  - Set OS_UID/OS_GID on new file entries for consistency with directories
  - In updateDirectoryListingSyncedAt, use Store.UpdateEntry for existing directories instead of CreateEntry to avoid deleteChunksIfNotNew and NotifyUpdateEvent side effects
* fix(filer): distinguish not-found from store errors in lazy listing
  Previously, any error from Store.FindEntry was treated as "not found," which could cause entry recreation/overwrite on transient DB failures. Now check for filer_pb.ErrNotFound explicitly and skip entries or bail out on real store errors.
* refactor(filer): use errors.Is for ErrNotFound comparisons
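A compact sketch of the lazy-listing pattern described above: a per-mount TTL, singleflight deduplication, and errors swallowed in favor of availability. The lazyLister type and its fetch callback are hypothetical; the real filer persists fetched entries to its store and records the sync time in an xattr.

```go
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

// lazyLister caches remote directory listings for a per-mount TTL and
// deduplicates concurrent fetches of the same directory.
type lazyLister struct {
	ttl      time.Duration
	group    singleflight.Group
	mu       sync.Mutex
	syncedAt map[string]time.Time               // directory -> last successful remote sync
	fetch    func(dir string) ([]string, error) // hypothetical remote ListDirectory call
}

func (l *lazyLister) maybeListFromRemote(dir string) {
	l.mu.Lock()
	fresh := time.Since(l.syncedAt[dir]) < l.ttl
	l.mu.Unlock()
	if l.ttl <= 0 || fresh {
		return // lazy listing disabled, or cache still valid
	}
	// singleflight: concurrent requests for the same directory share one fetch.
	_, _, _ = l.group.Do(dir, func() (interface{}, error) {
		start := time.Now() // capture before the remote call for accurate TTL tracking
		entries, err := l.fetch(dir)
		if err != nil {
			// Errors are logged and swallowed: availability over consistency.
			fmt.Println("lazy list failed:", err)
			return nil, nil
		}
		// In the real filer the fetched entries are persisted to the local store here.
		fmt.Println("cached", len(entries), "entries for", dir)
		l.mu.Lock()
		l.syncedAt[dir] = start
		l.mu.Unlock()
		return nil, nil
	})
}

func main() {
	l := &lazyLister{
		ttl:      5 * time.Minute,
		syncedAt: map[string]time.Time{},
		fetch:    func(dir string) ([]string, error) { return []string{"a.txt", "b.txt"}, nil },
	}
	l.maybeListFromRemote("/buckets/demo/photos")
	l.maybeListFromRemote("/buckets/demo/photos") // within TTL: no remote call
}
```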

16f2269a33  feat(filer): lazy metadata pulling (#8454)

* Add remote storage index for lazy metadata pull
  Introduces remoteStorageIndex, which maintains a map of filer directory to remote storage client/location, refreshed periodically from the filer's mount mappings. Provides lazyFetchFromRemote, ensureRemoteEntryInFiler, and isRemoteBacked on S3ApiServer as integration points for handler-level work in a follow-up PR. Nothing is wired into the server yet.
  Made-with: Cursor
* Add unit tests for remote storage index and wire field into S3ApiServer
  Adds tests covering isEmpty, findForPath (including longest-prefix resolution), and isRemoteBacked. Also removes a stray PR review annotation from the index file and adds the remoteStorageIdx field to S3ApiServer so the package compiles ahead of the wiring PR.
  Made-with: Cursor
* Address review comments on remote storage index
  - Use filer_pb.CreateEntry helper so resp.Error is checked, not just the RPC error
  - Extract keepPrev closure to remove duplicated error-handling in refresh loop
  - Add comment explaining availability-over-consistency trade-off on filer save failure
  Made-with: Cursor
* Move lazy metadata pull from S3 API to filer
  - Add maybeLazyFetchFromRemote in filer: on FindEntry miss, stat remote and CreateEntry when path is under a remote mount
  - Use singleflight for dedup; context guard prevents CreateEntry recursion
  - Availability-over-consistency: return in-memory entry if CreateEntry fails
  - Add longest-prefix test for nested mounts in remote_storage_test.go
  - Remove remoteStorageIndex, lazyFetchFromRemote, ensureRemoteEntryInFiler, doLazyFetch from s3api; filer now owns metadata operations
  - Add filer_lazy_remote_test.go with tests for hit, miss, not-found, CreateEntry failure, longest-prefix, and FindEntry integration
  Made-with: Cursor
* Address review: fix context guard test, add FindMountDirectory comment, remove dead code
  Made-with: Cursor
* Nitpicks: restore prev maker in registerStubMaker, instance-scope lazyFetchGroup, nil-check remoteEntry
  Made-with: Cursor
* Fix remotePath when mountDir is root: ensure relPath has leading slash
  Made-with: Cursor
* filer: decouple lazy-fetch persistence from caller context
  Use context.Background() inside the singleflight closure for CreateEntry so persistence is not cancelled when the winning request's context is cancelled. Fixes CreateEntry failing for all waiters when the first caller times out.
  Made-with: Cursor
* filer: remove redundant Mode bitwise OR with zero
  Made-with: Cursor
* filer: use bounded context for lazy-fetch persistence
  Replace context.Background() with context.WithTimeout(30s) and defer cancel() to prevent indefinite blocking and release resources.
  Made-with: Cursor
* filer: use checked type assertion for singleflight result
  Made-with: Cursor
* filer: rename persist context vars to avoid shadowing function parameter
  Made-with: Cursor
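The longest-prefix resolution that findForPath performs might be sketched like this, with a plain map standing in for the filer's mount mappings.

```go
package main

import (
	"fmt"
	"strings"
)

// findMountForPath returns the longest mount directory that is a prefix of
// path, so nested mounts resolve to the most specific remote location.
func findMountForPath(path string, mounts map[string]string) (mountDir, remote string, ok bool) {
	for dir, loc := range mounts {
		if path == dir || strings.HasPrefix(path, strings.TrimSuffix(dir, "/")+"/") {
			if len(dir) > len(mountDir) {
				mountDir, remote, ok = dir, loc, true
			}
		}
	}
	return
}

func main() {
	mounts := map[string]string{
		"/buckets/data":        "s3://corp-data",
		"/buckets/data/images": "s3://corp-images",
	}
	dir, loc, _ := findMountForPath("/buckets/data/images/cat.jpg", mounts)
	fmt.Println(dir, "->", loc) // nested mount wins by longest prefix
}
```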

b57429ef2e  Switch empty-folder cleanup to bucket policy (#8292)

* Fix Spark _temporary cleanup and add issue #8285 regression test
* Generalize empty folder cleanup for Spark temp artifacts
* Revert synchronous folder pruning and add cleanup diagnostics
* Add actionable empty-folder cleanup diagnostics
* Fix Spark temp marker cleanup in async folder cleaner
* Fix Spark temp cleanup with implicit directory markers
* Keep explicit directory markers non-implicit
* logging
* more logs
* Switch empty-folder cleanup to bucket policy
* Seaweed-X-Amz-Allow-Empty-Folders
* less logs
* go vet
* less logs
* refactoring

403592bb9f  Add Spark Iceberg catalog integration tests and CI support (#8242)

* Add Spark Iceberg catalog integration tests and CI support
  Implement comprehensive integration tests for Spark with SeaweedFS Iceberg REST catalog:
  - Basic CRUD operations (Create, Read, Update, Delete) on Iceberg tables
  - Namespace (database) management
  - Data insertion, querying, and deletion
  - Time travel capabilities via snapshot versioning
  - Compatible with SeaweedFS S3 and Iceberg REST endpoints
  Tests mirror the structure of existing Trino integration tests but use Spark's Python SQL API and PySpark for testing. Add GitHub Actions CI job for spark-iceberg-catalog-tests in s3-tables-tests.yml to automatically run Spark integration tests on pull requests.
* fmt
* Fix Spark integration tests - code review feedback
* go mod tidy
* Add go mod tidy step to integration test jobs
  Add 'go mod tidy' step before test runs for all integration test jobs:
  - s3-tables-tests
  - iceberg-catalog-tests
  - trino-iceberg-catalog-tests
  - spark-iceberg-catalog-tests
  This ensures dependencies are clean before running tests.
* Fix remaining Spark operations test issues
  Address final code review comments:
  Setup & Initialization:
  - Add waitForSparkReady() helper function that polls Spark readiness with backoff instead of hardcoded 10-second sleep
  - Extract setupSparkTestEnv() helper to reduce boilerplate duplication between TestSparkCatalogBasicOperations and TestSparkTimeTravel
  - Both tests now use helpers for consistent, reliable setup
  Assertions & Validation:
  - Make setup-critical operations (namespace, table creation, initial insert) use t.Fatalf instead of t.Errorf to fail fast
  - Validate setupSQL output in TestSparkTimeTravel and fail if not 'Setup complete'
  - Add validation after second INSERT in TestSparkTimeTravel: verify row count increased to 2 before time travel test
  - Add context to error messages with namespace and tableName params
  Code Quality:
  - Remove code duplication between test functions
  - All critical paths now properly validated
  - Consistent error handling throughout
* Fix go vet errors in S3 Tables tests
  1. setup_test.go (Spark):
     - Add missing import: github.com/testcontainers/testcontainers-go/wait
     - Use wait.ForLog instead of undefined testcontainers.NewLogStrategy
     - Remove unused strings import
  2. trino_catalog_test.go:
     - Use net.JoinHostPort instead of fmt.Sprintf for address formatting
     - Properly handles IPv6 addresses by wrapping them in brackets
* Use weed mini for simpler SeaweedFS startup
  Replace complex multi-process startup (master, volume, filer, s3) with single 'weed mini' command that starts all services together.
  Benefits:
  - Simpler, more reliable startup
  - Single weed mini process vs 4 separate processes
  - Automatic coordination between components
  - Better port management with no manual coordination
  Changes:
  - Remove separate master, volume, filer process startup
  - Use weed mini with -master.port, -filer.port, -s3.port flags
  - Keep Iceberg REST as separate service (still needed)
  - Increase timeout to 15s for port readiness (weed mini startup)
  - Remove volumePort and filerProcess fields from TestEnvironment
  - Simplify cleanup to only handle two processes (mini, iceberg rest)
* Clean up dead code and temp directory leaks
  1. Remove dead s3Process field and cleanup:
     - weed mini bundles S3 gateway, no separate process needed
     - Removed s3Process field from TestEnvironment
     - Removed unnecessary s3Process cleanup code
  2. Fix temp config directory leak:
     - Add sparkConfigDir field to TestEnvironment
     - Store returned configDir in writeSparkConfig
     - Clean up sparkConfigDir in Cleanup() with os.RemoveAll
     - Prevents accumulation of temp directories in test runs
  3. Simplify Cleanup:
     - Now handles only necessary processes (weed mini, iceberg rest)
     - Removes both seaweedfsDataDir and sparkConfigDir
     - Cleaner shutdown sequence
* Use weed mini's built-in Iceberg REST and fix python binary
  Changes:
  - Add -s3.port.iceberg flag to weed mini for built-in Iceberg REST Catalog
  - Remove separate 'weed server' process for Iceberg REST
  - Remove icebergRestProcess field from TestEnvironment
  - Simplify Cleanup() to only manage weed mini + Spark
  - Add port readiness check for iceberg REST from weed mini
  - Set Spark container Cmd to '/bin/sh -c sleep 3600' to keep it running
  - Change python to python3 in container.Exec calls
  This simplifies to truly one all-in-one weed mini process (master, filer, s3, iceberg-rest) plus just the Spark container.
* go fmt
* clean up
* bind on a non-loopback IP for container access, aligned Iceberg metadata saves/locations with table locations, and reworked Spark time travel to use TIMESTAMP AS OF with safe timestamp extraction.
* shared mini start
* Fixed internal directory creation under /buckets so .objects paths can auto-create without failing bucket-name validation, which restores table bucket object writes
* fix path
  Updated table bucket objects to write under `/buckets/<bucket>` and saved Iceberg metadata there, adjusting Spark time-travel timestamp to committed_at +1s. Rebuilt the weed binary (`go install ./weed`) and confirmed passing tests for Spark and Trino with focused test commands.
* Updated table bucket creation to stop creating /buckets/.objects and switched Trino REST warehouse to s3://<bucket> to match Iceberg layout.
* Stabilize S3Tables integration tests
* Fix timestamp extraction and remove dead code in bucketDir
* Use table bucket as warehouse in s3tables tests
* Update trino_blog_operations_test.go
* adds the CASCADE option to handle any remaining table metadata/files in the schema directory
* skip namespace not empty

8880f9932f  filer: auto clean empty implicit s3 folders (#8051)

* filer: auto clean empty s3 implicit folders
  Explicitly tag implicitly created S3 folders (parent directories from object uploads) with 'Seaweed-X-Amz-Implicit-Dir'. Update EmptyFolderCleaner to check for this attribute and cache the result efficiently.
* filer: correctly handle nil attributes in empty folder cleaner cache
* filer: refine implicit tagging logic
  Prevent tagging buckets as implicit directories. Reduce code duplication.
* filer: safeguard GetEntryAttributes against nil entry and not found error
* filer: move ErrNotFound handling to EmptyFolderCleaner
* filer: add comment to explain level > 3 check for implicit directories

e439e33888  fix(filer): check error from FindEntry (#7878)

* fix(filer): check error from FindEntry
* remove

---------

Co-authored-by: Chris Lu <chris.lu@gmail.com>

39ba19eea6  filer: async empty folder cleanup via metadata events (#7614)

* filer: async empty folder cleanup via metadata events
  Implements asynchronous empty folder cleanup when files are deleted in S3. Key changes:
  1. EmptyFolderCleaner - new component that handles folder cleanup:
     - Uses consistent hashing (LockRing) to determine folder ownership
     - Each filer owns specific folders, avoiding duplicate cleanup work
     - Debounces delete events (10s delay) to batch multiple deletes
     - Caches rough folder counts to skip unnecessary checks
     - Cancels pending cleanup when new files are created
     - Handles both file and subdirectory deletions
  2. Integration with metadata events:
     - Listens to both local and remote filer metadata events
     - Processes create/delete/rename events to track folder state
     - Only processes folders under /buckets/<bucket>/...
  3. Removed synchronous empty folder cleanup from S3 handlers:
     - DeleteObjectHandler no longer calls DoDeleteEmptyParentDirectories
     - DeleteMultipleObjectsHandler no longer tracks/cleans directories
     - Cleanup now happens asynchronously via metadata events
  Benefits:
  - Non-blocking: S3 delete requests return immediately
  - Coordinated: only one filer (the owner) cleans each folder
  - Efficient: batching and caching reduce unnecessary checks
  - Event-driven: folder deletion triggers parent folder check automatically
* filer: add CleanupQueue data structure for deduplicated folder cleanup
  CleanupQueue uses a linked list for FIFO ordering and a hashmap for O(1) deduplication. Processing is triggered when:
  - Queue size reaches maxSize (default 1000), OR
  - Oldest item exceeds maxAge (default 10 minutes)
  Key features:
  - O(1) Add, Remove, Pop, Contains operations
  - Duplicate folders are ignored (keeps original position/time)
  - Testable with injectable time function
  - Thread-safe with mutex protection
* filer: use CleanupQueue for empty folder cleanup
  Replace timer-per-folder approach with queue-based processing:
  - Use CleanupQueue for deduplication and ordered processing
  - Process queue when full (1000 items) or oldest item exceeds 10 minutes
  - Background processor checks queue every 10 seconds
  - Remove from queue on create events to cancel pending cleanup
  Benefits:
  - Bounded memory: queue has max size, not unlimited timers
  - Efficient: O(1) add/remove/contains operations
  - Batch processing: handle many folders efficiently
  - Better for high-volume delete scenarios
* filer: CleanupQueue.Add moves duplicate to back with updated time
  When adding a folder that already exists in the queue:
  - Remove it from its current position
  - Add it to the back of the queue
  - Update the queue time to current time
  This ensures that folders with recent delete activity are processed later, giving more time for additional deletes to occur.
* filer: CleanupQueue uses event time and inserts in sorted order
  Changes:
  - Add() now takes eventTime parameter instead of using current time
  - Insert items in time-sorted order (oldest at front) to handle out-of-order events
  - When updating duplicate with newer time, reposition to maintain sort order
  - Ignore updates with older time (keep existing later time)
  This ensures proper ordering when processing events from distributed filers where event arrival order may not match event occurrence order.
* filer: remove unused CleanupQueue functions (SetNowFunc, GetAll)
  Removed test-only functions:
  - SetNowFunc: tests now use real time with past event times
  - GetAll: tests now use Pop() to verify order
  Kept functions used in production:
  - Peek: used in filer_notify_read.go
  - OldestAge: used in empty_folder_cleaner.go logging
* filer: initialize cache entry on first delete/create event
  Previously, roughCount was only updated if the cache entry already existed, but entries were only created during executeCleanup. This meant delete/create events before the first cleanup didn't track the count. Now create the cache entry on first event, so roughCount properly tracks all changes from the start.
* filer: skip adding to cleanup queue if roughCount > 0
  If the cached roughCount indicates there are still items in the folder, don't bother adding it to the cleanup queue. This avoids unnecessary queue entries and reduces wasted cleanup checks.
* filer: don't create cache entry on create event
  Only update roughCount if the folder is already being tracked. New folders don't need tracking until we see a delete event.
* filer: move empty folder cleanup to its own package
  - Created weed/filer/empty_folder_cleanup package
  - Defined FilerOperations interface to break circular dependency
  - Added CountDirectoryEntries method to Filer
  - Exported IsUnderPath and IsUnderBucketPath helper functions
* filer: make isUnderPath and isUnderBucketPath private
  These helpers are only used within the empty_folder_cleanup package.
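A minimal sketch of the CleanupQueue idea: a linked list for ordering plus a map for O(1) dedup and removal. The real queue also caps its size, inserts in event-time order, and repositions duplicates with newer times; this version keeps only the core structure.

```go
package main

import (
	"container/list"
	"fmt"
	"time"
)

type cleanupItem struct {
	folder    string
	eventTime time.Time
}

// cleanupQueue keeps folders pending emptiness checks: a linked list for
// ordering plus a map for O(1) dedup and removal.
type cleanupQueue struct {
	order *list.List
	index map[string]*list.Element
}

func newCleanupQueue() *cleanupQueue {
	return &cleanupQueue{order: list.New(), index: map[string]*list.Element{}}
}

func (q *cleanupQueue) Add(folder string, eventTime time.Time) {
	if _, ok := q.index[folder]; ok {
		return // duplicate: already queued
	}
	q.index[folder] = q.order.PushBack(&cleanupItem{folder: folder, eventTime: eventTime})
}

// Remove cancels a pending cleanup, e.g. when a create event arrives.
func (q *cleanupQueue) Remove(folder string) {
	if el, ok := q.index[folder]; ok {
		q.order.Remove(el)
		delete(q.index, folder)
	}
}

// Pop returns the oldest pending folder, or "" if the queue is empty.
func (q *cleanupQueue) Pop() string {
	front := q.order.Front()
	if front == nil {
		return ""
	}
	item := front.Value.(*cleanupItem)
	q.order.Remove(front)
	delete(q.index, item.folder)
	return item.folder
}

func main() {
	q := newCleanupQueue()
	q.Add("/buckets/demo/tmp", time.Now())
	q.Add("/buckets/demo/tmp", time.Now()) // deduplicated
	q.Remove("/buckets/demo/tmp")          // create event cancels the cleanup
	fmt.Println(q.Pop() == "")             // true: nothing left to check
}
```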

b669607fcd  Add error list each entry func (#7485)

* added error return in type ListEachEntryFunc
* return error if errClose
* fix fmt.Errorf
* fix return errClose
* use %w fmt.Errorf
* added entry in message error
* add callbackErr in ListDirectoryEntries
* fix error
* add log
* clear err when the scanner stops on io.EOF, so returning err doesn't surface EOF as a failure.
* more info in error
* add ctx to logs, error handling
* fix return eachEntryFunc
* fix
* fix log
* fix return
* fix foundationdb tests
* fix eachEntryFunc
* fix return resEachEntryFuncErr
* Update weed/filer/filer.go
  Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/filer/elastic/v7/elastic_store.go
  Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/filer/hbase/hbase_store.go
  Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/filer/foundationdb/foundationdb_store.go
  Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/filer/ydb/ydb_store.go
  Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* fix
* add scanErr

---------

Co-authored-by: Roman Tamarov <r.tamarov@kryptonite.ru>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

c1b8d4bf0d  S3: adds FilerClient to use cached volume id (#7518)

* adds FilerClient to use cached volume id
* refactor: MasterClient embeds vidMapClient to eliminate ~150 lines of duplication
- Create masterVolumeProvider that implements VolumeLocationProvider
- MasterClient now embeds vidMapClient instead of maintaining duplicate cache logic
- Removed duplicate methods: LookupVolumeIdsWithFallback, getStableVidMap, etc.
- MasterClient still receives real-time updates via KeepConnected streaming
- Updates call inherited addLocation/deleteLocation from vidMapClient
- Benefits: DRY principle, shared singleflight, cache chain logic reused
- Zero behavioral changes - only architectural improvement
* refactor: mount uses FilerClient for efficient volume location caching
- Add configurable vidMap cache size (default: 5 historical snapshots)
- Add FilerClientOption struct for clean configuration
* GrpcTimeout: default 5 seconds (prevents hanging requests)
* UrlPreference: PreferUrl or PreferPublicUrl
* CacheSize: number of historical vidMap snapshots (for volume moves)
- NewFilerClient uses option struct for better API extensibility
- Improved error handling in filerVolumeProvider.LookupVolumeIds:
* Distinguish genuine 'not found' from communication failures
* Log volumes missing from filer response
* Return proper error context with volume count
* Document that filer Locations lacks Error field (unlike master)
- FilerClient.GetLookupFileIdFunction() handles URL preference automatically
- Mount (WFS) creates FilerClient with appropriate options
- Benefits for weed mount:
* Singleflight: Deduplicates concurrent volume lookups
* Cache history: Old volume locations available briefly when volumes move
* Configurable cache depth: Tune for different deployment environments
* Battle-tested vidMap cache with cache chain
* Better concurrency handling with timeout protection
* Improved error visibility and debugging
- Old filer.LookupFn() kept for backward compatibility
- Performance improvement for mount operations with high concurrency
* fix: prevent vidMap swap race condition in LookupFileIdWithFallback
- Hold vidMapLock.RLock() during entire vm.LookupFileId() call
- Prevents resetVidMap() from swapping vidMap mid-operation
- Ensures atomic access to the current vidMap instance
- Added documentation warnings to getStableVidMap() about swap risks
- Enhanced withCurrentVidMap() documentation for clarity
This fixes a subtle race condition where:
1. Thread A: acquires lock, gets vm pointer, releases lock
2. Thread B: calls resetVidMap(), swaps vc.vidMap
3. Thread A: calls vm.LookupFileId() on old/stale vidMap
While the old vidMap remains valid (in cache chain), holding the lock
ensures we consistently use the current vidMap for the entire operation.
* fix: FilerClient supports multiple filer addresses for high availability
Critical fix: FilerClient now accepts []ServerAddress instead of single address
- Prevents mount failure when first filer is down (regression fix)
- Implements automatic failover to remaining filers
- Uses round-robin with atomic index tracking (same pattern as WFS.WithFilerClient)
- Retries all configured filers before giving up
- Updates successful filer index for future requests
Changes:
- NewFilerClient([]pb.ServerAddress, ...) instead of (pb.ServerAddress, ...)
- filerVolumeProvider references FilerClient for failover access
- LookupVolumeIds tries all filers with util.Retry pattern
- Mount passes all option.FilerAddresses for HA
- S3 wraps single filer in slice for API consistency
This restores the high availability that existed in the old implementation
where mount would automatically failover between configured filers.
* fix: restore leader change detection in KeepConnected stream loop
Critical fix: Leader change detection was accidentally removed from the streaming loop
- Master can announce leader changes during an active KeepConnected stream
- Without this check, client continues talking to non-leader until connection breaks
- This can lead to stale data or operational errors
The check needs to be in TWO places:
1. Initial response (lines 178-187): Detect redirect on first connect
2. Stream loop (lines 203-209): Detect leader changes during active stream
Restored the loop check that was accidentally removed during refactoring.
This ensures the client immediately reconnects to new leader when announced.
* improve: address code review findings on error handling and documentation
1. Master provider now preserves per-volume errors
- Surface detailed errors from master (e.g., misconfiguration, deletion)
- Return partial results with aggregated errors using errors.Join
- Callers can now distinguish specific volume failures from general errors
- Addresses issue of losing vidLoc.Error details
2. Document GetMaster initialization contract
- Add comprehensive documentation explaining blocking behavior
- Clarify that KeepConnectedToMaster must be started first
- Provide typical initialization pattern example
- Prevent confusing timeouts during warm-up
3. Document partial results API contract
- LookupVolumeIdsWithFallback explicitly documents partial results
- Clear examples of how to handle result + error combinations
- Helps prevent callers from discarding valid partial results
4. Add safeguards to legacy filer.LookupFn
- Add deprecation warning with migration guidance
- Implement simple 10,000 entry cache limit
- Log warning when limit reached
- Recommend wdclient.FilerClient for new code
- Prevents unbounded memory growth in long-running processes
These changes improve API clarity and operational safety while maintaining
backward compatibility.
* fix: handle partial results correctly in LookupVolumeIdsWithFallback callers
Two callers were discarding partial results by checking err before processing
the result map. While these are currently single-volume lookups (so partial
results aren't possible), the code was fragile and would break if we ever
batched multiple volumes together.
Changes:
- Check result map FIRST, then conditionally check error
- If volume is found in result, use it (ignore errors about other volumes)
- If volume is NOT found and err != nil, include error context with %w
- Add defensive comments explaining the pattern for future maintainers
This makes the code:
1. Correct for future batched lookups
2. More informative (preserves underlying error details)
3. Consistent with filer_grpc_server.go which already handles this correctly
Example: If looking up ["1", "2", "999"] and only 999 fails, callers
looking for volumes 1 or 2 will succeed instead of failing unnecessarily.
* improve: address remaining code review findings
1. Lazy initialize FilerClient in mount for proxy-only setups
- Only create FilerClient when VolumeServerAccess != "filerProxy"
- Avoids wasted work when all reads proxy through filer
- filerClient is nil for proxy mode, initialized for direct access
2. Fix inaccurate deprecation comment in filer.LookupFn
- Updated comment to reflect current behavior (10k bounded cache)
- Removed claim of "unbounded growth" after adding size limit
- Still directs new code to wdclient.FilerClient for better features
3. Audit all MasterClient usages for KeepConnectedToMaster
- Verified all production callers start KeepConnectedToMaster early
- Filer, Shell, Master, Broker, Benchmark, Admin all correct
- IAM creates MasterClient but never uses it (harmless)
- Test code doesn't need KeepConnectedToMaster (mocks)
All callers properly follow the initialization pattern documented in
GetMaster(), preventing unexpected blocking or timeouts.
* fix: restore observability instrumentation in MasterClient
During the refactoring, several important stats counters and logging
statements were accidentally removed from tryConnectToMaster. These are
critical for monitoring and debugging the health of master client connections.
Restored instrumentation:
1. stats.MasterClientConnectCounter("total") - tracks all connection attempts
2. stats.MasterClientConnectCounter(FailedToKeepConnected) - when KeepConnected stream fails
3. stats.MasterClientConnectCounter(FailedToReceive) - when Recv() fails in loop
4. stats.MasterClientConnectCounter(Failed) - when overall gprcErr occurs
5. stats.MasterClientConnectCounter(OnPeerUpdate) - when peer updates detected
Additionally restored peer update logging:
- "+ filer@host noticed group.type address" for node additions
- "- filer@host noticed group.type address" for node removals
- Only logs updates matching the client's FilerGroup for noise reduction
This information is valuable for:
- Monitoring cluster health and connection stability
- Debugging cluster membership changes
- Tracking master failover and reconnection patterns
- Identifying network issues between clients and masters
No functional changes - purely observability restoration.
* improve: implement gRPC-aware retry for FilerClient volume lookups
The previous implementation used util.Retry which only retries errors
containing the string "transport". This is insufficient for handling
the full range of transient gRPC errors.
Changes:
1. Added isRetryableGrpcError() to properly inspect gRPC status codes
- Retries: Unavailable, DeadlineExceeded, ResourceExhausted, Aborted
- Falls back to string matching for non-gRPC network errors
2. Replaced util.Retry with custom retry loop
- 3 attempts with exponential backoff (1s, 1.5s, 2.25s)
- Tries all N filers on each attempt (N*3 total attempts max)
- Fast-fails on non-retryable errors (NotFound, PermissionDenied, etc.)
3. Improved logging
- Shows both filer attempt (x/N) and retry attempt (y/3)
- Logs retry reason and wait time for debugging
Benefits:
- Better handling of transient gRPC failures (server restarts, load spikes)
- Faster failure for permanent errors (no wasted retries)
- More informative logs for troubleshooting
- Maintains existing HA failover across multiple filers
Example: If all 3 filers return Unavailable (server overload):
- Attempt 1: try all 3 filers, wait 1s
- Attempt 2: try all 3 filers, wait 1.5s
- Attempt 3: try all 3 filers, fail
Example: If filer returns NotFound (volume doesn't exist):
- Attempt 1: try all 3 filers, fast-fail (no retry)
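A sketch of the gRPC-aware retry classification described above, using status codes rather than string matching. The retryable set mirrors the commit message, but the function is illustrative rather than the actual wdclient code.

```go
package main

import (
	"fmt"
	"strings"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isRetryableGrpcError classifies transient gRPC failures, falling back to a
// plain string check for non-gRPC network errors.
func isRetryableGrpcError(err error) bool {
	if err == nil {
		return false
	}
	if st, ok := status.FromError(err); ok {
		switch st.Code() {
		case codes.Unavailable,
			codes.DeadlineExceeded,
			codes.ResourceExhausted,
			codes.Aborted:
			return true
		default:
			return false // NotFound, PermissionDenied, etc.: fail fast
		}
	}
	// Non-gRPC errors: keep the old "transport" string heuristic.
	return strings.Contains(err.Error(), "transport")
}

func main() {
	fmt.Println(isRetryableGrpcError(status.Error(codes.Unavailable, "filer restarting")))  // true
	fmt.Println(isRetryableGrpcError(status.Error(codes.NotFound, "volume 999 not found"))) // false
}
```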
* fmt
* improve: add circuit breaker to skip known-unhealthy filers
The previous implementation tried all filers on every failure, including
known-unhealthy ones. This wasted time retrying permanently down filers.
Problem scenario (3 filers, filer0 is down):
- Last successful: filer1 (saved as filerIndex=1)
- Next lookup when filer1 fails:
Retry 1: filer1(fail) → filer2(fail) → filer0(fail, wastes 5s timeout)
Retry 2: filer1(fail) → filer2(fail) → filer0(fail, wastes 5s timeout)
Retry 3: filer1(fail) → filer2(fail) → filer0(fail, wastes 5s timeout)
Total wasted: 15 seconds on known-bad filer!
Solution: Circuit breaker pattern
- Track consecutive failures per filer (atomic int32)
- Skip filers with 3+ consecutive failures
- Re-check unhealthy filers every 30 seconds
- Reset failure count on success
New behavior:
- filer0 fails 3 times → marked unhealthy
- Future lookups skip filer0 for 30 seconds
- After 30s, re-check filer0 (allows recovery)
- If filer0 succeeds, reset failure count to 0
Benefits:
1. Avoids wasting time on known-down filers
2. Still sticks to last healthy filer (via filerIndex)
3. Allows recovery (30s re-check window)
4. No configuration needed (automatic)
Implementation details:
- filerHealth struct tracks failureCount (atomic) + lastFailureTime
- shouldSkipUnhealthyFiler(): checks if we should skip this filer
- recordFilerSuccess(): resets failure count to 0
- recordFilerFailure(): increments count, updates timestamp
- Logs when skipping unhealthy filers (V(2) level)
Example with circuit breaker:
- filer0 down, saved filerIndex=1 (filer1 healthy)
- Lookup 1: filer1(ok) → Done (0.01s)
- Lookup 2: filer1(fail) → filer2(ok) → Done, save filerIndex=2 (0.01s)
- Lookup 3: filer2(fail) → skip filer0 (unhealthy) → filer1(ok) → Done (0.01s)
Much better than wasting 15s trying filer0 repeatedly!
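The circuit breaker described above reduces to a small lock-free structure: an atomic consecutive-failure counter plus a last-failure timestamp stored as Unix nanoseconds (which also sidesteps the data race fixed in a later commit below). A sketch with the threshold and cool-down from the commit message:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// filerHealth is a lock-free circuit breaker entry: consecutive failures plus
// the last failure time stored as Unix nanoseconds for atomic access.
type filerHealth struct {
	failureCount      int32
	lastFailureTimeNs int64
}

const (
	failureThreshold = 3
	recheckInterval  = 30 * time.Second
)

func (h *filerHealth) recordSuccess() {
	atomic.StoreInt32(&h.failureCount, 0)
}

func (h *filerHealth) recordFailure() {
	atomic.AddInt32(&h.failureCount, 1)
	atomic.StoreInt64(&h.lastFailureTimeNs, time.Now().UnixNano())
}

// shouldSkip reports whether lookups should bypass this filer: it has failed
// repeatedly and has not yet earned a re-check after the cool-down window.
func (h *filerHealth) shouldSkip() bool {
	if atomic.LoadInt32(&h.failureCount) < failureThreshold {
		return false
	}
	lastNs := atomic.LoadInt64(&h.lastFailureTimeNs)
	if lastNs == 0 {
		return false // never failed
	}
	return time.Since(time.Unix(0, lastNs)) < recheckInterval
}

func main() {
	var h filerHealth
	for i := 0; i < 3; i++ {
		h.recordFailure()
	}
	fmt.Println("skip:", h.shouldSkip()) // true: 3 consecutive failures within the window
	h.recordSuccess()
	fmt.Println("skip:", h.shouldSkip()) // false: reset after a successful lookup
}
```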
* fix: OnPeerUpdate should only process updates for matching FilerGroup
Critical bug: The OnPeerUpdate callback was incorrectly moved outside the
FilerGroup check when restoring observability instrumentation. This caused
clients to process peer updates for ALL filer groups, not just their own.
Problem:
Before: mc.OnPeerUpdate only called for update.FilerGroup == mc.FilerGroup
Bug: mc.OnPeerUpdate called for ALL updates regardless of FilerGroup
Impact:
- Multi-tenant deployments with separate filer groups would see cross-group
updates (e.g., group A clients processing group B updates)
- Could cause incorrect cluster membership tracking
- OnPeerUpdate handlers (like Filer's DLM ring updates) would receive
irrelevant updates from other groups
Example scenario:
Cluster has two filer groups: "production" and "staging"
Production filer connects with FilerGroup="production"
Incorrect behavior (bug):
- Receives "staging" group updates
- Incorrectly adds staging filers to production DLM ring
- Cross-tenant data access issues
Correct behavior (fixed):
- Only receives "production" group updates
- Only adds production filers to production DLM ring
- Proper isolation between groups
Fix:
Moved mc.OnPeerUpdate(update, time.Now()) back INSIDE the FilerGroup check
where it belongs, matching the original implementation.
The logging and stats counter were already correctly scoped to matching
FilerGroup, so they remain inside the if block as intended.
* improve: clarify Aborted error handling in volume lookups
Added documentation and logging to address the concern that codes.Aborted
might not always be retryable in all contexts.
Context-specific justification for treating Aborted as retryable:
Volume location lookups (LookupVolume RPC) are simple, read-only operations:
- No transactions
- No write conflicts
- No application-level state changes
- Idempotent (safe to retry)
In this context, Aborted is most likely caused by:
- Filer restarting/recovering (transient)
- Connection interrupted mid-request (transient)
- Server-side resource cleanup (transient)
NOT caused by:
- Application-level conflicts (no writes)
- Transaction failures (no transactions)
- Logical errors (read-only lookup)
Changes:
1. Added detailed comment explaining the context-specific reasoning
2. Added V(1) logging when treating Aborted as retryable
- Helps detect misclassification if it occurs
- Visible in verbose logs for troubleshooting
3. Split switch statement for clarity (one case per line)
If future analysis shows Aborted should not be retried, operators will
now have visibility via logs to make that determination. The logging
provides evidence for future tuning decisions.
Alternative approaches considered but not implemented:
- Removing Aborted entirely (too conservative for read-only ops)
- Message content inspection (adds complexity, no known patterns yet)
- Different handling per RPC type (premature optimization)
* fix: IAM server must start KeepConnectedToMaster for masterClient usage
The IAM server creates and uses a MasterClient but never started
KeepConnectedToMaster, which could cause blocking if IAM config files
have chunks requiring volume lookups.
Problem flow:
NewIamApiServerWithStore()
→ creates masterClient
→ ❌ NEVER starts KeepConnectedToMaster
GetS3ApiConfigurationFromFiler()
→ filer.ReadEntry(iama.masterClient, ...)
→ StreamContent(masterClient, ...) if file has chunks
→ masterClient.GetLookupFileIdFunction()
→ GetMaster(ctx) ← BLOCKS indefinitely waiting for connection!
While IAM config files (identity & policies) are typically small and
stored inline without chunks, the code path exists and would block
if the files ever had chunks.
Fix:
Start KeepConnectedToMaster in background goroutine right after
creating masterClient, following the documented pattern:
mc := wdclient.NewMasterClient(...)
go mc.KeepConnectedToMaster(ctx)
This ensures masterClient is usable if ReadEntry ever needs to
stream chunked content from volume servers.
Note: This bug was dormant because IAM config files are small (<256 bytes)
and SeaweedFS stores small files inline in Entry.Content, not as chunks.
The bug would only manifest if:
- IAM config grew > 256 bytes (inline threshold)
- Config was stored as chunks on volume servers
- ReadEntry called StreamContent
- GetMaster blocked indefinitely
Now all 9 production MasterClient instances correctly follow the pattern.
* fix: data race on filerHealth.lastFailureTime in circuit breaker
The circuit breaker tracked lastFailureTime as time.Time, which was
written in recordFilerFailure and read in shouldSkipUnhealthyFiler
without synchronization, causing a data race.
Data race scenario:
Goroutine 1: recordFilerFailure(0)
health.lastFailureTime = time.Now() // ❌ unsynchronized write
Goroutine 2: shouldSkipUnhealthyFiler(0)
time.Since(health.lastFailureTime) // ❌ unsynchronized read
→ RACE DETECTED by -race detector
Fix:
Changed lastFailureTime from time.Time to int64 (lastFailureTimeNs)
storing Unix nanoseconds for atomic access:
Write side (recordFilerFailure):
atomic.StoreInt64(&health.lastFailureTimeNs, time.Now().UnixNano())
Read side (shouldSkipUnhealthyFiler):
lastFailureNs := atomic.LoadInt64(&health.lastFailureTimeNs)
if lastFailureNs == 0 { return false } // Never failed
lastFailureTime := time.Unix(0, lastFailureNs)
time.Since(lastFailureTime) > 30*time.Second
Benefits:
- Atomic reads/writes (no data race)
- Efficient (int64 is 8 bytes, always atomic on 64-bit systems)
- Zero value (0) naturally means "never failed"
- No mutex needed (lock-free circuit breaker)
Note: sync/atomic was already imported for failureCount, so no new
import needed.
* fix: create fresh timeout context for each filer retry attempt
The timeout context was created once at function start and reused across
all retry attempts, causing subsequent retries to run with progressively
shorter (or expired) deadlines.
Problem flow:
Line 244: timeoutCtx, cancel := context.WithTimeout(ctx, 5s)
defer cancel()
Retry 1, filer 0: client.LookupVolume(timeoutCtx, ...) ← 5s available ✅
Retry 1, filer 1: client.LookupVolume(timeoutCtx, ...) ← 3s left
Retry 1, filer 2: client.LookupVolume(timeoutCtx, ...) ← 0.5s left
Retry 2, filer 0: client.LookupVolume(timeoutCtx, ...) ← EXPIRED! ❌
Result: Retries always fail with DeadlineExceeded, defeating the purpose
of retries.
Fix:
Moved context.WithTimeout inside the per-filer loop, creating a fresh
timeout context for each attempt:
for x := 0; x < n; x++ {
timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
err := pb.WithGrpcFilerClient(..., func(client) {
resp, err := client.LookupVolume(timeoutCtx, ...)
...
})
cancel() // Clean up immediately after call
}
Benefits:
- Each filer attempt gets full fc.grpcTimeout (default 5s)
- Retries actually have time to complete
- No context leaks (cancel called after each attempt)
- More predictable timeout behavior
Example with fix:
Retry 1, filer 0: fresh 5s timeout ✅
Retry 1, filer 1: fresh 5s timeout ✅
Retry 2, filer 0: fresh 5s timeout ✅
Total max time: 3 retries × 3 filers × 5s = 45s (plus backoff)
Note: The outer ctx (from caller) still provides overall cancellation if
the caller cancels or times out the entire operation.
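A sketch of the per-attempt timeout pattern from this fix: each filer call gets a fresh deadline derived from the caller's context, and the backoff follows the 1s, 1.5s, 2.25s schedule described earlier. The lookup callback is a stand-in for the real LookupVolume RPC.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// lookupWithRetries gives every filer attempt its own fresh timeout, derived
// from the caller's ctx, instead of sharing one deadline across all retries.
func lookupWithRetries(ctx context.Context, filers []string, grpcTimeout time.Duration,
	lookup func(ctx context.Context, filer string) error) error {

	backoff := time.Second
	var lastErr error
	for attempt := 0; attempt < 3; attempt++ {
		for _, filer := range filers {
			attemptCtx, cancel := context.WithTimeout(ctx, grpcTimeout)
			err := lookup(attemptCtx, filer)
			cancel() // release the timer immediately after each call
			if err == nil {
				return nil
			}
			lastErr = err
		}
		if attempt == 2 {
			break // no backoff after the final attempt
		}
		// The caller's ctx still bounds the whole operation.
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(backoff):
			backoff = backoff * 3 / 2 // 1s, 1.5s, 2.25s
		}
	}
	return fmt.Errorf("all filers failed: %w", lastErr)
}

func main() {
	err := lookupWithRetries(context.Background(), []string{"filer1:8888"}, 5*time.Second,
		func(ctx context.Context, filer string) error { return errors.New("unavailable") })
	fmt.Println(err)
}
```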
* fix: always reset vidMap cache on master reconnection
The previous refactoring removed the else block that resets vidMap when
the first message from a newly connected master is not a VolumeLocation.
Problem scenario:
1. Client connects to master-1 and builds vidMap cache
2. Master-1 fails, client connects to master-2
3. First message from master-2 is a ClusterNodeUpdate (not VolumeLocation)
4. Old code: vidMap is reset and updated ✅
5. New code: vidMap is NOT reset ❌
6. Result: Client uses stale cache from master-1 → data access errors
Example flow with bug:
Connect to master-2
First message: ClusterNodeUpdate {filer.x added}
→ No resetVidMap() call
→ vidMap still has master-1's stale volume locations
→ Client reads from wrong volume servers → 404 errors
Fix:
Restored the else block that resets vidMap when first message is not
a VolumeLocation:
if resp.VolumeLocation != nil {
// ... check leader, reset, and update ...
} else {
// First message is ClusterNodeUpdate or other type
// Must still reset to avoid stale data
mc.resetVidMap()
}
This ensures the cache is always cleared when establishing a new master
connection, regardless of what the first message type is.
Root cause:
During the vidMapClient refactoring, this else block was accidentally
dropped, making failover behavior fragile and non-deterministic (depends
on which message type arrives first from the new master).
Impact:
- High severity for master failover scenarios
- Could cause read failures, 404s, or wrong data access
- Only manifests when first message is not VolumeLocation
* fix: goroutine and connection leak in IAM server shutdown
The IAM server's KeepConnectedToMaster goroutine used context.Background(),
which is non-cancellable, causing the goroutine and its gRPC connections
to leak on server shutdown.
Problem:
go masterClient.KeepConnectedToMaster(context.Background())
- context.Background() never cancels
- KeepConnectedToMaster goroutine runs forever
- gRPC connection to master stays open
- No way to stop cleanly on server shutdown
Result: Resource leaks when IAM server is stopped
Fix:
1. Added shutdownContext and shutdownCancel to IamApiServer struct
2. Created cancellable context in NewIamApiServerWithStore:
shutdownCtx, shutdownCancel := context.WithCancel(context.Background())
3. Pass shutdownCtx to KeepConnectedToMaster:
go masterClient.KeepConnectedToMaster(shutdownCtx)
4. Added Shutdown() method to invoke cancel:
func (iama *IamApiServer) Shutdown() {
if iama.shutdownCancel != nil {
iama.shutdownCancel()
}
}
5. Stored masterClient reference on IamApiServer for future use
Benefits:
- Goroutine stops cleanly when Shutdown() is called
- gRPC connections are closed properly
- No resource leaks on server restart/stop
- Shutdown() is idempotent (safe to call multiple times)
Usage (for future graceful shutdown):
iamServer, _ := iamapi.NewIamApiServer(...)
defer iamServer.Shutdown()
// or in signal handler:
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT)
go func() {
<-sigChan
iamServer.Shutdown()
os.Exit(0)
}()
Note: Current command implementations (weed/command/iam.go) don't have
shutdown paths yet, but this makes IAM server ready for proper lifecycle
management when that infrastructure is added.
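For reference, a compact sketch of this lifecycle pattern (the struct is simplified; the real IamApiServer also wires up the master client and option fields):
package sketch

import "context"

type iamApiServerSketch struct {
	shutdownContext context.Context
	shutdownCancel  context.CancelFunc
}

func newIamApiServerSketch(startBackgroundWork func(ctx context.Context)) *iamApiServerSketch {
	shutdownCtx, shutdownCancel := context.WithCancel(context.Background())
	s := &iamApiServerSketch{
		shutdownContext: shutdownCtx,
		shutdownCancel:  shutdownCancel,
	}
	// stands in for: go masterClient.KeepConnectedToMaster(shutdownCtx)
	go startBackgroundWork(shutdownCtx)
	return s
}

// Shutdown stops background work; cancelling an already-cancelled context is a safe no-op.
func (s *iamApiServerSketch) Shutdown() {
	if s.shutdownCancel != nil {
		s.shutdownCancel()
	}
}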
* refactor: remove unnecessary KeepMasterClientConnected wrapper in filer
The Filer.KeepMasterClientConnected() method was an unnecessary wrapper that
just forwarded to MasterClient.KeepConnectedToMaster(). This wrapper added
no value and created inconsistency with other components that call
KeepConnectedToMaster directly.
Removed:
filer.go:178-180
func (fs *Filer) KeepMasterClientConnected(ctx context.Context) {
fs.MasterClient.KeepConnectedToMaster(ctx)
}
Updated caller:
filer_server.go:181
- go fs.filer.KeepMasterClientConnected(context.Background())
+ go fs.filer.MasterClient.KeepConnectedToMaster(context.Background())
Benefits:
- Consistent with other components (S3, IAM, Shell, Mount)
- Removes unnecessary indirection
- Clearer that KeepConnectedToMaster runs in background goroutine
- Follows the documented pattern from MasterClient.GetMaster()
Note: shell/commands.go was verified and already correctly starts
KeepConnectedToMaster in a background goroutine (shell_liner.go:51):
go commandEnv.MasterClient.KeepConnectedToMaster(ctx)
* fix: use client ID instead of timeout for gRPC signature parameter
The pb.WithGrpcFilerClient signature parameter is meant to be a client
identifier for logging and tracking (added as 'sw-client-id' gRPC metadata
in streaming mode), not a timeout value.
Problem:
timeoutMs := int32(fc.grpcTimeout.Milliseconds()) // 5000 (5 seconds)
err := pb.WithGrpcFilerClient(false, timeoutMs, filerAddress, ...)
- Passing timeout (5000ms) as signature/client ID
- Misuse of API: signature should be a unique client identifier
- Timeout is already handled by timeoutCtx passed to gRPC call
- Inconsistent with other callers (all use 0 or proper client ID)
How WithGrpcFilerClient uses signature parameter:
func WithGrpcClient(..., signature int32, ...) {
if streamingMode && signature != 0 {
md := metadata.New(map[string]string{"sw-client-id": fmt.Sprintf("%d", signature)})
ctx = metadata.NewOutgoingContext(ctx, md)
}
...
}
It's for client identification, not timeout control!
Fix:
1. Added clientId int32 field to FilerClient struct
2. Initialize with rand.Int31() in NewFilerClient for unique ID
3. Removed timeoutMs variable (and misleading comment)
4. Use fc.clientId in pb.WithGrpcFilerClient call
Before:
err := pb.WithGrpcFilerClient(false, timeoutMs, ...)
^^^^^^^^^ Wrong! (5000)
After:
err := pb.WithGrpcFilerClient(false, fc.clientId, ...)
^^^^^^^^^^^^ Correct! (random int31)
Benefits:
- Correct API usage (signature = client ID, not timeout)
- Timeout still works via timeoutCtx (unchanged)
- Consistent with other pb.WithGrpcFilerClient callers
- Enables proper client tracking on filer side via gRPC metadata
- Each FilerClient instance has unique ID for debugging
Examples of correct usage elsewhere:
weed/iamapi/iamapi_server.go:145 pb.WithGrpcFilerClient(false, 0, ...)
weed/command/s3.go:215 pb.WithGrpcFilerClient(false, 0, ...)
weed/shell/commands.go:110 pb.WithGrpcFilerClient(streamingMode, 0, ...)
All use 0 (or a proper signature), not a timeout value.
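As a tiny illustration of the convention (struct and field names simplified, not the actual FilerClient definition):
package sketch

import "math/rand"

type filerClientSketch struct {
	// clientId is passed as the signature argument, so streaming calls can be
	// tagged with sw-client-id metadata on the filer side.
	clientId int32
}

func newFilerClientSketch() *filerClientSketch {
	return &filerClientSketch{clientId: rand.Int31()}
}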
* fix: add timeout to master volume lookup to prevent indefinite blocking
The masterVolumeProvider.LookupVolumeIds method was using the context
directly without a timeout, which could cause it to block indefinitely
if the master is slow to respond or unreachable.
Problem:
err := pb.WithMasterClient(false, p.masterClient.GetMaster(ctx), ...)
resp, err := client.LookupVolume(ctx, &master_pb.LookupVolumeRequest{...})
- No timeout on gRPC call to master
- Could block indefinitely if master is unresponsive
- Inconsistent with FilerClient which uses 5s timeout
- This is a fallback path (cache miss) but still needs protection
Scenarios where this could hang:
1. Master server under heavy load (slow response)
2. Network issues between client and master
3. Master server hung or deadlocked
4. Master in process of shutting down
Fix:
timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
err := pb.WithMasterClient(false, p.masterClient.GetMaster(timeoutCtx), ...)
resp, err := client.LookupVolume(timeoutCtx, &master_pb.LookupVolumeRequest{...})
Benefits:
- Prevents indefinite blocking on master lookup
- Consistent with FilerClient timeout pattern (5 seconds)
- Faster failure detection when master is unresponsive
- Caller's context still honored (timeout is in addition, not replacement)
- Improves overall system resilience
Note: 5 seconds is a reasonable default for volume lookups:
- Long enough for normal master response (~10-50ms)
- Short enough to fail fast on issues
- Matches FilerClient's grpcTimeout default
* purge
* refactor: address code review feedback on comments and style
Fixed several code quality issues identified during review:
1. Corrected backoff algorithm description in filer_client.go:
- Changed "Exponential backoff" to "Multiplicative backoff with 1.5x factor"
- The formula waitTime * 3/2 produces 1s, 1.5s, 2.25s, not exponential 2^n
- More accurate terminology prevents confusion
2. Removed redundant nil check in vidmap_client.go:
- After the for loop, node is guaranteed to be non-nil
- Loop either returns early or assigns non-nil value to node
- Simplified: if node != nil { node.cache.Store(nil) } → node.cache.Store(nil)
3. Added startup logging to IAM server for consistency:
- Log when master client connection starts
- Matches pattern in S3ApiServer (line 100 in s3api_server.go)
- Improves operational visibility during startup
- Added missing glog import
4. Fixed indentation in filer/reader_at.go:
- Lines 76-91 had incorrect indentation (extra tab level)
- Line 93 also misaligned
- Now properly aligned with surrounding code
5. Updated deprecation comment to follow Go convention:
- Changed "DEPRECATED:" to "Deprecated:" (standard Go format)
- Tools like staticcheck and IDEs recognize the standard format
- Enables automated deprecation warnings in tooling
- Better developer experience
All changes are cosmetic and do not affect functionality.
* fmt
* refactor: make circuit breaker parameters configurable in FilerClient
The circuit breaker failure threshold (3) and reset timeout (30s) were
hardcoded, making it difficult to tune the client's behavior in different
deployment environments without modifying the code.
Problem:
func shouldSkipUnhealthyFiler(index int32) bool {
if failureCount < 3 { // Hardcoded threshold
return false
}
if time.Since(lastFailureTime) > 30*time.Second { // Hardcoded timeout
return false
}
}
Different environments have different needs:
- High-traffic production: may want lower threshold (2) for faster failover
- Development/testing: may want higher threshold (5) to tolerate flaky networks
- Low-latency services: may want shorter reset timeout (10s)
- Batch processing: may want longer reset timeout (60s)
Solution:
1. Added fields to FilerClientOption:
- FailureThreshold int32 (default: 3)
- ResetTimeout time.Duration (default: 30s)
2. Added fields to FilerClient:
- failureThreshold int32
- resetTimeout time.Duration
3. Applied defaults in NewFilerClient with option override:
failureThreshold := int32(3)
resetTimeout := 30 * time.Second
if opt.FailureThreshold > 0 {
failureThreshold = opt.FailureThreshold
}
if opt.ResetTimeout > 0 {
resetTimeout = opt.ResetTimeout
}
4. Updated shouldSkipUnhealthyFiler to use configurable values:
if failureCount < fc.failureThreshold { ... }
if time.Since(lastFailureTime) > fc.resetTimeout { ... }
Benefits:
✓ Tunable for different deployment environments
✓ Backward compatible (defaults match previous hardcoded values)
✓ No breaking changes to existing code
✓ Better maintainability and flexibility
Example usage:
// Aggressive failover for low-latency production
fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
FailureThreshold: 2,
ResetTimeout: 10 * time.Second,
})
// Tolerant of flaky networks in development
fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
FailureThreshold: 5,
ResetTimeout: 60 * time.Second,
})
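A short sketch of the defaulting pattern described above (struct names simplified; the real constructor also takes filer addresses, dial options, and a data center):
package sketch

import "time"

type filerClientOptionSketch struct {
	FailureThreshold int32
	ResetTimeout     time.Duration
}

type filerClientSketch struct {
	failureThreshold int32
	resetTimeout     time.Duration
}

func newFilerClientSketch(opt *filerClientOptionSketch) *filerClientSketch {
	// defaults preserve the previously hardcoded behavior (3 failures, 30s reset)
	failureThreshold, resetTimeout := int32(3), 30*time.Second
	if opt != nil {
		if opt.FailureThreshold > 0 {
			failureThreshold = opt.FailureThreshold
		}
		if opt.ResetTimeout > 0 {
			resetTimeout = opt.ResetTimeout
		}
	}
	return &filerClientSketch{failureThreshold: failureThreshold, resetTimeout: resetTimeout}
}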
* retry parameters
* refactor: make retry and timeout parameters configurable
Made retry logic and gRPC timeouts configurable across FilerClient and
MasterClient to support different deployment environments and network
conditions.
Problem 1: Hardcoded retry parameters in FilerClient
waitTime := time.Second // Fixed at 1s
maxRetries := 3 // Fixed at 3 attempts
waitTime = waitTime * 3 / 2 // Fixed 1.5x multiplier
Different environments have different needs:
- Unstable networks: may want more retries (5) with longer waits (2s)
- Low-latency production: may want fewer retries (2) with shorter waits (500ms)
- Batch processing: may want exponential backoff (2x) instead of 1.5x
Problem 2: Hardcoded gRPC timeout in MasterClient
timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
Master lookups may need different timeouts:
- High-latency cross-region: may need 10s timeout
- Local network: may use 2s timeout for faster failure detection
Solution for FilerClient:
1. Added fields to FilerClientOption:
- MaxRetries int (default: 3)
- InitialRetryWait time.Duration (default: 1s)
- RetryBackoffFactor float64 (default: 1.5)
2. Added fields to FilerClient:
- maxRetries int
- initialRetryWait time.Duration
- retryBackoffFactor float64
3. Updated LookupVolumeIds to use configurable values:
waitTime := fc.initialRetryWait
maxRetries := fc.maxRetries
for retry := 0; retry < maxRetries; retry++ {
...
waitTime = time.Duration(float64(waitTime) * fc.retryBackoffFactor)
}
Solution for MasterClient:
1. Added grpcTimeout field to MasterClient (default: 5s)
2. Initialize in NewMasterClient with 5 * time.Second default
3. Updated masterVolumeProvider to use p.masterClient.grpcTimeout
Benefits:
✓ Tunable for different network conditions and deployment scenarios
✓ Backward compatible (defaults match previous hardcoded values)
✓ No breaking changes to existing code
✓ Consistent configuration pattern across FilerClient and MasterClient
Example usage:
// Fast-fail for low-latency production with stable network
fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
MaxRetries: 2,
InitialRetryWait: 500 * time.Millisecond,
RetryBackoffFactor: 2.0, // Exponential backoff
GrpcTimeout: 2 * time.Second,
})
// Patient retries for unstable network or batch processing
fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
MaxRetries: 5,
InitialRetryWait: 2 * time.Second,
RetryBackoffFactor: 1.5,
GrpcTimeout: 10 * time.Second,
})
Note: MasterClient timeout is currently set at construction time and not
user-configurable via NewMasterClient parameters. Future enhancement could
add a MasterClientOption struct similar to FilerClientOption.
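A self-contained sketch of the configurable retry loop (helper names are illustrative; in the real code this logic lives inside LookupVolumeIds):
package sketch

import "time"

type retryConfig struct {
	MaxRetries         int           // default 3
	InitialRetryWait   time.Duration // default 1s
	RetryBackoffFactor float64       // default 1.5
}

// retryWithBackoff calls op until it succeeds or MaxRetries attempts are used,
// multiplying the wait between attempts by RetryBackoffFactor each time
// (1s, 1.5s, 2.25s, ... with the defaults).
func retryWithBackoff(cfg retryConfig, op func() error) error {
	waitTime := cfg.InitialRetryWait
	var err error
	for retry := 0; retry < cfg.MaxRetries; retry++ {
		if err = op(); err == nil {
			return nil
		}
		if retry < cfg.MaxRetries-1 {
			time.Sleep(waitTime)
			waitTime = time.Duration(float64(waitTime) * cfg.RetryBackoffFactor)
		}
	}
	return err
}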
* fix: rename vicCacheLock to vidCacheLock for consistency
Fixed typo in variable name for better code consistency and readability.
Problem:
vidCache := make(map[string]*filer_pb.Locations)
var vicCacheLock sync.RWMutex // Typo: vic instead of vid
vicCacheLock.RLock()
locations, found := vidCache[vid]
vicCacheLock.RUnlock()
The variable name 'vicCacheLock' is inconsistent with 'vidCache'.
Both should use 'vid' prefix (volume ID) not 'vic'.
Fix:
Renamed all 5 occurrences:
- var vicCacheLock → var vidCacheLock (line 56)
- vicCacheLock.RLock() → vidCacheLock.RLock() (line 62)
- vicCacheLock.RUnlock() → vidCacheLock.RUnlock() (line 64)
- vicCacheLock.Lock() → vidCacheLock.Lock() (line 81)
- vicCacheLock.Unlock() → vidCacheLock.Unlock() (line 91)
Benefits:
✓ Consistent variable naming convention
✓ Clearer intent (volume ID cache lock)
✓ Better code readability
✓ Easier code navigation
* fix: use defer cancel() with anonymous function for proper context cleanup
Fixed context cancellation to use defer pattern correctly in loop iteration.
Problem:
for x := 0; x < n; x++ {
timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
err := pb.WithGrpcFilerClient(...)
cancel() // Only called on normal return, not on panic
}
Issues with original approach:
1. If pb.WithGrpcFilerClient panics, cancel() is never called → context leak
2. If callback returns early (though unlikely here), cleanup might be missed
3. Not following Go best practices for context.WithTimeout usage
Problem with naive defer in loop:
for x := 0; x < n; x++ {
timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
defer cancel() // ❌ WRONG: All defers accumulate until function returns
}
In Go, defer executes when the surrounding *function* returns, not when
the loop iteration ends. This would accumulate n deferred cancel() calls
and leak contexts until LookupVolumeIds returns.
Solution: Wrap in anonymous function
for x := 0; x < n; x++ {
err := func() error {
timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
defer cancel() // ✅ Executes when anonymous function returns (per iteration)
return pb.WithGrpcFilerClient(...)
}()
}
Benefits:
✓ Context always cancelled, even on panic
✓ defer executes after each iteration (not accumulated)
✓ Follows Go best practices for context.WithTimeout
✓ No resource leaks during retry loop execution
✓ Cleaner error handling
Reference:
Go documentation for context.WithTimeout explicitly shows:
ctx, cancel := context.WithTimeout(...)
defer cancel()
This is the idiomatic pattern that should always be followed.
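A runnable sketch of the per-iteration pattern (the gRPC call is replaced by a generic callback, and fc.grpcTimeout becomes a plain parameter):
package sketch

import (
	"context"
	"time"
)

// tryEachWithTimeout gives every attempt its own timeout context and
// guarantees cancel() runs per iteration, even if attempt panics.
func tryEachWithTimeout(ctx context.Context, attempts int, timeout time.Duration,
	attempt func(ctx context.Context) error) error {
	var err error
	for x := 0; x < attempts; x++ {
		err = func() error {
			timeoutCtx, cancel := context.WithTimeout(ctx, timeout)
			defer cancel() // runs when this anonymous function returns, i.e. once per iteration
			return attempt(timeoutCtx)
		}()
		if err == nil {
			return nil
		}
	}
	return err
}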
* Can't use defer directly in loop
* improve: add data center preference and URL shuffling for consistent performance
Added missing data center preference and load distribution (URL shuffling)
to ensure consistent performance and behavior across all code paths.
Problem 1: PreferPublicUrl path missing DC preference and shuffling
Location: weed/wdclient/filer_client.go lines 184-192
The custom PreferPublicUrl implementation was simply iterating through
locations and building URLs without considering:
1. Data center proximity (latency optimization)
2. Load distribution across volume servers
Before:
for _, loc := range locations {
url := loc.PublicUrl
if url == "" { url = loc.Url }
fullUrls = append(fullUrls, "http://"+url+"/"+fileId)
}
return fullUrls, nil
After:
var sameDcUrls, otherDcUrls []string
dataCenter := fc.GetDataCenter()
for _, loc := range locations {
url := loc.PublicUrl
if url == "" { url = loc.Url }
httpUrl := "http://" + url + "/" + fileId
if dataCenter != "" && dataCenter == loc.DataCenter {
sameDcUrls = append(sameDcUrls, httpUrl)
} else {
otherDcUrls = append(otherDcUrls, httpUrl)
}
}
rand.Shuffle(len(sameDcUrls), ...)
rand.Shuffle(len(otherDcUrls), ...)
fullUrls = append(sameDcUrls, otherDcUrls...)
Problem 2: Cache miss path missing URL shuffling
Location: weed/wdclient/vidmap_client.go lines 95-108
The cache miss path (fallback lookup) was missing URL shuffling, while
the cache hit path (vm.LookupFileId) already shuffles URLs. This
inconsistency meant:
- Cache hit: URLs shuffled → load distributed
- Cache miss: URLs not shuffled → first server always hit
Before:
var sameDcUrls, otherDcUrls []string
// ... build URLs ...
fullUrls = append(sameDcUrls, otherDcUrls...)
return fullUrls, nil
After:
var sameDcUrls, otherDcUrls []string
// ... build URLs ...
rand.Shuffle(len(sameDcUrls), ...)
rand.Shuffle(len(otherDcUrls), ...)
fullUrls = append(sameDcUrls, otherDcUrls...)
return fullUrls, nil
Benefits:
✓ Reduced latency by preferring same-DC volume servers
✓ Even load distribution across all volume servers
✓ Consistent behavior between cache hit/miss paths
✓ Consistent behavior between PreferUrl and PreferPublicUrl
✓ Matches behavior of existing vidMap.LookupFileId implementation
Impact on performance:
- Lower read latency (same-DC preference)
- Better volume server utilization (load spreading)
- No single volume server becomes a hotspot
Note: Added math/rand import to vidmap_client.go for shuffle support.
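The preference-plus-shuffle step, in isolation, looks roughly like this (the location struct is simplified; the real code works with filer_pb.Locations):
package sketch

import "math/rand"

type locationSketch struct {
	Url        string
	PublicUrl  string
	DataCenter string
}

// orderUrls prefers same-DC volume servers and shuffles within each group so
// that no single server becomes a hotspot.
func orderUrls(locations []locationSketch, fileId, dataCenter string) []string {
	var sameDcUrls, otherDcUrls []string
	for _, loc := range locations {
		url := loc.PublicUrl
		if url == "" {
			url = loc.Url
		}
		httpUrl := "http://" + url + "/" + fileId
		if dataCenter != "" && dataCenter == loc.DataCenter {
			sameDcUrls = append(sameDcUrls, httpUrl)
		} else {
			otherDcUrls = append(otherDcUrls, httpUrl)
		}
	}
	rand.Shuffle(len(sameDcUrls), func(i, j int) { sameDcUrls[i], sameDcUrls[j] = sameDcUrls[j], sameDcUrls[i] })
	rand.Shuffle(len(otherDcUrls), func(i, j int) { otherDcUrls[i], otherDcUrls[j] = otherDcUrls[j], otherDcUrls[i] })
	return append(sameDcUrls, otherDcUrls...)
}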
* Update weed/wdclient/masterclient.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* improve: call IAM server Shutdown() for best-effort cleanup
Added call to iamApiServer.Shutdown() to ensure cleanup happens when possible,
and documented the limitations of the current approach.
Problem:
The Shutdown() method was defined in IamApiServer but never called anywhere,
meaning the KeepConnectedToMaster goroutine would continue running even when
the IAM server stopped, causing resource leaks.
Changes:
1. Store iamApiServer instance in weed/command/iam.go
- Changed: _, iamApiServer_err := iamapi.NewIamApiServer(...)
- To: iamApiServer, iamApiServer_err := iamapi.NewIamApiServer(...)
2. Added defer call for best-effort cleanup
- defer iamApiServer.Shutdown()
- This will execute if startIamServer() returns normally
3. Added logging in Shutdown() method
- Log when shutdown is triggered for visibility
4. Documented limitations and future improvements
- Added note that defer only works for normal function returns
- SeaweedFS commands don't currently have signal handling
- Suggested future enhancement: add SIGTERM/SIGINT handling
Current behavior:
- ✓ Cleanup happens if HTTP server fails to start (glog.Fatalf path)
- ✓ Cleanup happens if Serve() returns with error (unlikely)
- ✗ Cleanup does NOT happen on SIGTERM/SIGINT (process killed)
The last case is a limitation of the current command architecture - all
SeaweedFS commands (s3, filer, volume, master, iam) lack signal handling
for graceful shutdown. This is a systemic issue that affects all services.
Future enhancement:
To properly handle SIGTERM/SIGINT, the command layer would need:
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT)
go func() {
httpServer.Serve(listener) // Non-blocking
}()
<-sigChan
glog.V(0).Infof("Received shutdown signal")
iamApiServer.Shutdown()
httpServer.Shutdown(context.Background())
This would require refactoring the command structure for all services,
which is out of scope for this change.
Benefits of current approach:
✓ Best-effort cleanup (better than nothing)
✓ Proper cleanup in error paths
✓ Documented for future improvement
✓ Consistent with how other SeaweedFS services handle lifecycle
* fix data race in test
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
|
||
|
|
084b377f87 |
do delete expired entries on s3 list request (#7426)
* do delete expired entries on s3 list request https://github.com/seaweedfs/seaweedfs/issues/6837
* disable delete expired s3 entry in filer
* pass opt allowDeleteObjectsByTTL to all servers
* delete on get and head
* add lifecycle expiration s3 tests
* fix opt allowDeleteObjectsByTTL for server
* fix test lifecycle expiration
* fix IsExpired
* fix locationPrefix for updateEntriesTTL
* fix s3tests
* resolve coderabbitai
* GetS3ExpireTime on filer
* go mod
* clear TtlSeconds for volume
* move s3 delete expired entry to filer
* filer delete meta and data
* delete unused func removeExpiredObject
* test s3 put
* test s3 put multipart
* allowDeleteObjectsByTTL by default
* fix pipeline tests
* rm duplicate SeaweedFSExpiresS3
* revert expiration tests
* fix updateTTL
* rm log
* resolve comment
* fix delete version object
* fix S3Versioning
* fix delete on FindEntry
* fix delete chunks
* fix sqlite not support concurrent writes/reads
* move deletion out of listing transaction; delete entries and empty folders
* Revert "fix sqlite not support concurrent writes/reads" This reverts commit 5d5da14e0ed91c613fe5c0ed058f58bb04fba6f0.
* clearer handling on recursive empty directory deletion
* handle listing errors
* struct copying
* reuse code to delete empty folders
* use iterative approach with a queue to avoid recursive WithFilerClient calls
* stop a gRPC stream from the client-side callback by returning a specific error, e.g., io.EOF
* still issue UpdateEntry when the flag must be added
* errors join
* join path
* cleaner
* add context, sort directories by depth (deepest first) to avoid redundant checks
* batched operation, refactoring
* prevent deleting bucket
* constant
* reuse code
* more logging
* refactoring
* s3 TTL time
* Safety check
---------
Co-authored-by: chrislu <chris.lu@gmail.com> |
||
|
|
9b6b564235 |
Filer: Add retry mechanism for failed file deletions (#7402)
* Filer: Add retry mechanism for failed file deletions
Implement a retry queue with exponential backoff for handling transient deletion failures, particularly when volumes are temporarily read-only.
Key features:
- Automatic retry for retryable errors (read-only volumes, network issues)
- Exponential backoff: 5min → 10min → 20min → ... (max 6 hours)
- Maximum 10 retry attempts per file before giving up
- Separate goroutine processing retry queue every minute
- Enhanced logging with retry/permanent error classification
This addresses the issue where file deletions fail when volumes are temporarily read-only (tiered volumes, maintenance, etc.) and these deletions were previously lost.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Update weed/filer/filer_deletion.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Filer: Add retry mechanism for failed file deletions
Implement a retry queue with exponential backoff for handling transient deletion failures, particularly when volumes are temporarily read-only.
Key features:
- Automatic retry for retryable errors (read-only volumes, network issues)
- Exponential backoff: 5min → 10min → 20min → ... (max 6 hours)
- Maximum 10 retry attempts per file before giving up
- Separate goroutine processing retry queue every minute
- Map-based retry queue for O(1) lookups and deletions
- Enhanced logging with retry/permanent error classification
- Consistent error detail limiting (max 10 total errors logged)
- Graceful shutdown support with quit channel for both processors
This addresses the issue where file deletions fail when volumes are temporarily read-only (tiered volumes, maintenance, etc.) and these deletions were previously lost.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Filer: Replace magic numbers with named constants in retry processor
Replace hardcoded values with package-level constants for better maintainability:
- DeletionRetryPollInterval (1 minute): interval for checking retry queue
- DeletionRetryBatchSize (1000): max items to process per iteration
This improves code readability and makes configuration changes easier.
* Filer: Optimize retry queue with min-heap data structure
Replace map-based retry queue with a min-heap for better scalability and deterministic ordering.
Performance improvements:
- GetReadyItems: O(N) → O(K log N) where K is items retrieved
- AddOrUpdate: O(1) → O(log N) (acceptable trade-off)
- Early exit when checking ready items (heap top is earliest)
- No full iteration over all items while holding lock
Benefits:
- Deterministic processing order (earliest NextRetryAt first)
- Better scalability for large retry queues (thousands of items)
- Reduced lock contention duration
- Memory efficient (no separate slice reconstruction)
Implementation:
- Min-heap ordered by NextRetryAt using container/heap
- Dual index: heap for ordering + map for O(1) FileId lookups
- heap.Fix() used when updating existing items
- Comprehensive complexity documentation in comments
This addresses the performance bottleneck identified in GetReadyItems where iterating over the entire map with a write lock could block other goroutines in high-failure scenarios.
* Filer: Modernize heap interface and improve error handling docs
1. Replace interface{} with any in heap methods
- Addresses modern Go style (Go 1.18+)
- Improves code readability
2. Enhance isRetryableError documentation
- Acknowledge string matching brittleness
- Add comprehensive TODO for future improvements:
  * Use HTTP status codes (503, 429, etc.)
  * Implement structured error types with errors.Is/As
  * Extract gRPC status codes
  * Add error wrapping for better context
- Document each error pattern with context
- Add defensive check for empty error strings
Current implementation remains pragmatic for initial release while documenting a clear path for future robustness improvements. String matching is acceptable for now but should be replaced with structured error checking when refactoring the deletion pipeline.
* Filer: Refactor deletion processors for better readability
Extract large callback functions into dedicated private methods to improve code organization and maintainability.
Changes:
1. Extract processDeletionBatch method
- Handles deletion of a batch of file IDs
- Classifies errors (success, not found, retryable, permanent)
- Manages retry queue additions
- Consolidates logging logic
2. Extract processRetryBatch method
- Handles retry attempts for previously failed deletions
- Processes retry results and updates queue
- Symmetric to processDeletionBatch for consistency
Benefits:
- Main loop functions (loopProcessingDeletion, loopProcessingDeletionRetry) are now concise and focused on orchestration
- Business logic is separated into testable methods
- Reduced nesting depth improves readability
- Easier to understand control flow at a glance
- Better separation of concerns
The refactored methods follow the single responsibility principle, making the codebase more maintainable and easier to extend.
* Update weed/filer/filer_deletion.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Filer: Fix critical retry count bug and add comprehensive error patterns
Critical bug fixes from PR review:
1. Fix RetryCount reset bug (CRITICAL)
- Problem: When items are re-queued via AddOrUpdate, RetryCount resets to 1, breaking exponential backoff
- Solution: Add RequeueForRetry() method that preserves retry state
- Impact: Ensures proper exponential backoff progression
2. Add overflow protection in backoff calculation
- Check shift amount > 63 to prevent bit-shift overflow
- Additional safety: check if delay <= 0 or > MaxRetryDelay
- Protects against arithmetic overflow in extreme cases
3. Expand retryable error patterns
- Added: timeout, deadline exceeded, context canceled
- Added: lookup error/failed (volume discovery issues)
- Added: connection refused, broken pipe (network errors)
- Added: too many requests, service unavailable (backpressure)
- Added: temporarily unavailable, try again (transient errors)
- Added: i/o timeout (network timeouts)
Benefits:
- Retry mechanism now works correctly across restarts
- More robust against edge cases and overflow
- Better coverage of transient failure scenarios
- Improved resilience in high-failure environments
Addresses feedback from CodeRabbit and Gemini Code Assist in PR #7402.
* Filer: Add persistence docs and comprehensive unit tests
Documentation improvements:
1. Document in-memory queue limitation
- Acknowledge that retry queue is volatile (lost on restart)
- Document trade-offs and future persistence options
- Provide clear path for production hardening
- Note eventual consistency through main deletion queue
Unit test coverage:
1. TestDeletionRetryQueue_AddAndRetrieve
- Basic add/retrieve operations
- Verify items not ready before delay elapsed
2. TestDeletionRetryQueue_ExponentialBackoff
- Verify exponential backoff progression (5m→10m→20m→40m→80m)
- Validate delay calculations with timing tolerance
3. TestDeletionRetryQueue_OverflowProtection
- Test high retry counts (60+) that could cause overflow
- Verify capping at MaxRetryDelay
4. TestDeletionRetryQueue_MaxAttemptsReached
- Verify items discarded after MaxRetryAttempts
- Confirm proper queue cleanup
5. TestIsRetryableError
- Comprehensive error pattern coverage
- Test all retryable error types (timeout, connection, lookup, etc.)
- Verify non-retryable errors correctly identified
6. TestDeletionRetryQueue_HeapOrdering
- Verify min-heap property maintained
- Test items processed in NextRetryAt order
- Validate heap.Init() integration
All tests passing. Addresses PR feedback on testing requirements.
* Filer: Add code quality improvements for deletion retry
Address PR feedback with minor optimizations:
- Add MaxLoggedErrorDetails constant (replaces magic number 10)
- Pre-allocate slices and maps in processRetryBatch for efficiency
- Improve log message formatting to use constant
These changes improve code maintainability and runtime performance without altering functionality.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* refactoring retrying
* use constant
* assert
* address comment
* refactor
* address comments
* dedup
* process retried deletions
* address comment
* check in-flight items also; dedup code
* refactoring
* refactoring
* simplify
* reset heap
* more efficient
* add DeletionBatchSize as a constant; Permanent > Retryable > Success > Not Found
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> |
||
|
|
4511c2cc1f |
Changes logging function (#6919)
* updated logging methods for stores
* updated logging methods for stores
* updated logging methods for filer
* updated logging methods for uploader and http_util
* updated logging methods for weed server
---------
Co-authored-by: akosov <a.kosov@kryptonite.ru> |
||
|
|
ee0c14673d | Fix TTL Behavior for Directories in Path-Specific Configuration (#6827) | ||
|
|
283d9e0079 | Add context with request (#6824) | ||
|
|
43c3e80970 |
added a check for the nil value when requesting FindEntry. (#6651)
Co-authored-by: akosov <a.kosov@kryptonite.ru> |
||
|
|
42efade0dc |
adjust fix
fix https://github.com/seaweedfs/seaweedfs/issues/6497 |
||
|
|
a75271dd43 |
ensure correct auto bucket creation
fix https://github.com/seaweedfs/seaweedfs/issues/6497 |
||
|
|
c030cb3ce9 | bootstrap filer from one peer | ||
|
|
8aae82dd71 |
Added context for the MasterClient's methods to avoid endless loops (#5628)
* Added context for the MasterClient's methods to avoid endless loops
* Returned WithClient function. Added WithClientCustomGetMaster function
* Hid unused ctx arguments
* Using a common context for the KeepConnectedToMaster and WaitUntilConnected functions
* Changed the context termination check in the tryConnectToMaster function
* Added a child context to the tryConnectToMaster function
* Added a common context for KeepConnectedToMaster and WaitUntilConnected functions in benchmark |
||
|
|
1877ce5126 | rename | ||
|
|
fa59a5d67e | read from disk if not in memory | ||
|
|
06343f8976 |
Set allowed origins in config (#5109)
* Add a way to use a JWT in an HTTP only cookie
If a JWT is not included in the Authorization header or a query string, attempt to get a JWT from an HTTP only cookie.
* Added a way to specify allowed origins header from config
* Removed unnecessary log
* Check list of domains from config or command flag
* Handle default wildcard and change name of config value to cors
||
|
|
1cac5d983d |
fix: disallow file name too long when writing a file (#4881)
* fix: disallow file name too long when writing a file
* bool LongerName to MaxFilenameLength
---------
Co-authored-by: Konstantin Lebedev <9497591+kmlebedev@users.noreply.github.co> |
||
|
|
796b7508f3 | Implement SRV lookups for filer (#4767) | ||
|
|
6f588b5b18 |
fix refactoring mistake
fix https://github.com/seaweedfs/seaweedfs/issues/4639 |
||
|
|
61c42f9991 | adjust lock APIs | ||
|
|
06471dac9d | init lock ring | ||
|
|
868f7875d7 | refactor | ||
|
|
3fd659df2a | add distributed lock manager | ||
|
|
5db9fcccd4 | refactoring | ||
|
|
b9b613a78e | filter system log dir does not make subscribe event (#4172) | ||
|
|
d54eb9966f |
refactor: Directory readability (#3665)
|
||
|
|
10efdc7aab | align memory | ||
|
|
e90ab4ac60 |
avoid race conditions for OnPeerUpdate (#3525)
https://github.com/seaweedfs/seaweedfs/issues/3524 |
||
|
|
26dbc6c905 | move to https://github.com/seaweedfs/seaweedfs | ||
|
|
74f60f246f | dynamically connect to a filer | ||
|
|
68065128b8 | add dc and rack | ||
|
|
64f3d6fb6e | metadata subscription uses client epoch | ||
|
|
fa61074513 | add clientId logging | ||
|
|
4fd5f96598 |
filer: remove replication, collection, disk_type info from entry metadata
these metadata can change and are not used |
||
|
|
02ae102731 | fix filer.sync missing source srv uploaded files to target when target down | ||
|
|
6adc42147f | fresh filer store bootstrap from the oldest peer | ||
|
|
c59068d0f3 | refactor | ||
|
|
682382648e | collect cluster node start time | ||
|
|
94635e9b5c | filer: add filer group |