Head commit: `3efe88c718443b444fcd5e676d8a83f13e08e2d9` (1410 commits)
### `995dfc4d5d` chore: remove ~50k lines of unreachable dead code (#8913)

* chore: remove unreachable dead code across the codebase

  Remove ~50,000 lines of unreachable code identified by static analysis. Major removals:
  - weed/filer/redis_lua: entire unused Redis Lua filer store implementation
  - weed/wdclient/net2, resource_pool: unused connection/resource pool packages
  - weed/plugin/worker/lifecycle: unused lifecycle plugin worker
  - weed/s3api: unused S3 policy templates, presigned URL IAM, streaming copy, multipart IAM, key rotation, and various SSE helper functions
  - weed/mq/kafka: unused partition mapping, compression, schema, and protocol functions
  - weed/mq/offset: unused SQL storage and migration code
  - weed/worker: unused registry, task, and monitoring functions
  - weed/query: unused SQL engine, parquet scanner, and type functions
  - weed/shell: unused EC proportional rebalance functions
  - weed/storage/erasure_coding/distribution: unused distribution analysis functions
  - Individual unreachable functions removed from 150+ files across admin, credential, filer, iam, kms, mount, mq, operation, pb, s3api, server, shell, storage, topology, and util packages
* fix(s3): reset shared memory store in IAM test to prevent flaky failure

  TestLoadIAMManagerFromConfig_EmptyConfigWithFallbackKey was flaky because the MemoryStore credential backend is a singleton registered via init(). Earlier tests that create anonymous identities pollute the shared store, causing LookupAnonymous() to unexpectedly return true. Fix by calling Reset() on the memory store before the test runs.
* style: run gofmt on changed files
* fix: restore KMS functions used by integration tests
* fix(plugin): prevent panic on send to closed worker session channel

  The Plugin.sendToWorker method could panic with "send on closed channel" when a worker disconnected while a message was being sent. The race was between streamSession.close() closing the outgoing channel and sendToWorker writing to it concurrently. Add a done channel to streamSession that is closed before the outgoing channel, and check it in sendToWorker's select to safely detect closed sessions without panicking.
### `8fad85aed7` feat(s3): support WEED_S3_SSE_KEY env var for SSE-S3 KEK (#8904)

* feat(s3): support WEED_S3_SSE_KEY env var for SSE-S3 KEK

  Add support for providing the SSE-S3 Key Encryption Key (KEK) via the WEED_S3_SSE_KEY environment variable (hex-encoded 256-bit key). This avoids storing the master key in plaintext on the filer at /etc/s3/sse_kek. Key source priority:
  1. WEED_S3_SSE_KEY environment variable (recommended)
  2. Existing filer KEK at /etc/s3/sse_kek (backward compatible)
  3. Auto-generate and save to filer (deprecated for new deployments)

  Existing deployments with a filer-stored KEK continue to work unchanged. A deprecation warning is logged when auto-generating a new filer KEK.
* refactor(s3): derive KEK from any string via HKDF instead of requiring hex

  Accept any secret string in WEED_S3_SSE_KEY and derive a 256-bit key using HKDF-SHA256 instead of requiring a hex-encoded key. This is simpler for users — no need to generate hex, just set a passphrase.
* feat(s3): add WEED_S3_SSE_KEK and WEED_S3_SSE_KEY env vars for KEK

  Two env vars for providing the SSE-S3 Key Encryption Key:
  - WEED_S3_SSE_KEK: hex-encoded, same format as /etc/s3/sse_kek. If the filer file also exists, they must match.
  - WEED_S3_SSE_KEY: any string, 256-bit key derived via HKDF-SHA256. Refuses to start if /etc/s3/sse_kek exists (must delete first).

  Only one may be set. Existing filer-stored KEKs continue to work. Auto-generating and storing new KEKs on filer is deprecated.
* fix(s3): stop auto-generating KEK, fail only when SSE-S3 is used

  Instead of auto-generating a KEK and storing it on the filer when no key source is configured, simply leave SSE-S3 disabled. Encrypt and decrypt operations return a clear error directing the user to set WEED_S3_SSE_KEK or WEED_S3_SSE_KEY.
* refactor(s3): move SSE-S3 KEK config to security.toml

  Move KEK configuration from standalone env vars to security.toml's new [sse_s3] section, following the same pattern as JWT keys and TLS certs.

      [sse_s3]
      kek = ""  # hex-encoded 256-bit key (same format as /etc/s3/sse_kek)
      key = ""  # any string, HKDF-derived

  Viper's WEED_ prefix auto-mapping provides env var support: WEED_SSE_S3_KEK and WEED_SSE_S3_KEY. All existing behavior is preserved: filer KEK fallback, mismatch detection, and HKDF derivation.
* refactor(s3): rename SSE-S3 config keys to s3.sse.kek / s3.sse.key

  Use [s3.sse] section in security.toml, matching the existing naming convention (e.g. [s3.*]). Env vars: WEED_S3_SSE_KEK, WEED_S3_SSE_KEY.
* fix(s3): address code review findings for SSE-S3 KEK

  - Don't hold mutex during filer retry loop (up to 20s of sleep). Lock only to write filerClient and superKey.
  - Remove dead generateAndSaveSuperKeyToFiler and unused constants.
  - Return error from deriveKeyFromSecret instead of ignoring it.
  - Fix outdated doc comment on InitializeWithFiler.
  - Use t.Setenv in tests instead of manual os.Setenv/Unsetenv.
* fix(s3): don't block startup on filer errors when KEK is configured

  - When s3.sse.kek is set, a temporarily unreachable filer no longer prevents startup. The filer consistency check becomes best-effort with a warning.
  - Same treatment for s3.sse.key: filer unreachable logs a warning instead of failing.
  - Rewrite error messages to suggest migration instead of file deletion, avoiding the risk of orphaning encrypted data.

  Finding 3 (restore auto-generation) intentionally skipped — auto-gen was removed by design to avoid storing plaintext KEK on filer.
* fix(test): set WEED_S3_SSE_KEY in SSE integration test server startup

  SSE-S3 no longer auto-generates a KEK, so integration tests must provide one. Set WEED_S3_SSE_KEY=test-sse-s3-key in all weed mini invocations in the test Makefile.
### `9552e80b58` filer.sync: show active chunk transfers when sync progress stalls (#8889)

* filer.sync: show active chunk transfers when sync progress stalls

  When the sync watermark is not advancing, print each in-progress chunk transfer with its file path, bytes received so far, and current status (downloading, uploading, or waiting with backoff duration). This helps diagnose which files are blocking progress during replication. Closes #8542
* filer.sync: include last error in stall diagnostics
* filer.sync: fix data races in ChunkTransferStatus

  Add sync.RWMutex to ChunkTransferStatus and lock around all field mutations in fetchAndWrite. ActiveTransfers now returns value copies under RLock so callers get immutable snapshots.
### `efbed39e25` S3: map canned ACL to file permissions and add configurable default file mode (#8886)

* S3: map canned ACL to file permissions and add configurable default file mode

  S3 uploads were hardcoded to 0660 regardless of ACL headers. Now the X-Amz-Acl header maps to Unix file permissions per-object:
  - public-read, authenticated-read, bucket-owner-read → 0644
  - public-read-write → 0666
  - private, bucket-owner-full-control → 0660

  Also adds -defaultFileMode / -s3.defaultFileMode flag to set a server-wide default when no ACL header is present. Closes #8874
* Address review feedback for S3 file mode feature

  - Extract hardcoded 0660 to defaultFileMode constant
  - Change parseDefaultFileMode to return error instead of calling Fatalf
  - Add -s3.defaultFileMode flag to filer.go and mini.go (was missing)
  - Add doc comment to S3Options about updating all four flag sites
  - Add TestResolveFileMode with 10 test cases covering ACL mapping, server default, and priority ordering
### `8572aae403` filer.sync: support per-cluster mTLS with -a.security and -b.security (#8872)

* filer.sync: support per-cluster mTLS with -a.security and -b.security flags

  When syncing between two clusters that use different certificate authorities, a single security.toml cannot authenticate to both. Add -a.security and -b.security flags so each filer can use its own security.toml for TLS. Closes #8481
* security: fatal on failure to read explicitly provided security config

  When -a.security or -b.security is specified, falling back to insecure credentials on read error would silently bypass mTLS. Fatal instead.
* fix(filer.sync): use source filer's fromTsMs flag in initOffsetFromTsMs

  A→B was using bFromTsMs and B→A was using aFromTsMs — these were swapped. Each path should seed the target's offset with the source filer's starting timestamp.
* security: return error from LoadClientTLSFromFile, resolve relative PEM paths

  Change LoadClientTLSFromFile to return (grpc.DialOption, error) so callers can handle failures explicitly instead of a silent insecure fallback. Resolve relative PEM paths (grpc.ca, grpc.client.cert, grpc.client.key) against the config file's directory.
### `9205140bd5` Use Unix sockets for gRPC in weed server mode (#8858)

* Use Unix sockets for gRPC between co-located services in weed server

  Extends the Unix socket gRPC optimization (added for mini mode in #8856) to `weed server`. Registers Unix socket paths for each service's gRPC port before startup, so co-located services (master, volume, filer, S3) communicate via Unix sockets instead of TCP loopback. Only services actually started in this process get registered. The gRPC port is resolved early (port + 10000 if unset) so the socket path is known before any service dials another.
* Refactor gRPC Unix socket registration into a data-driven loop
### `ced2236cc6` Adjust rename events metadata format (#8854)

* rename metadata events
* fix subscription filter to use NewEntry.Name for rename path matching

  The server-side subscription filter constructed the new path using OldEntry.Name instead of NewEntry.Name when checking if a rename event's destination matches the subscriber's path prefix. This could cause events to be incorrectly filtered when a rename changes the file name.
* fix bucket events to handle rename of bucket directories

  onBucketEvents only checked IsCreate and IsDelete. A bucket directory rename via AtomicRenameEntry now emits a single rename event (both OldEntry and NewEntry non-nil), which matched neither check. Handle IsRename by deleting the old bucket and creating the new one.
* fix replicator to handle rename events across directory boundaries

  Two issues fixed:
  1. The replicator filtered events by checking if the key (old path) was under the source directory. Rename events now use the old path as key, so renames from outside into the watched directory were silently dropped. Now both old and new paths are checked, and cross-boundary renames are converted to create or delete.
  2. NewParentPath was passed to the sink without remapping to the sink's target directory structure, causing the sink to write entries at the wrong location. Now NewParentPath is remapped alongside the key.
* fix filer sync to handle rename events crossing directory boundaries

  The early directory-prefix filter only checked resp.Directory (old parent). Rename events now carry the old parent as Directory, so renames from outside the source path into it were dropped before reaching the existing cross-boundary handling logic. Check both old and new directories against sourcePath and excludePaths so the downstream old-key/new-key logic can properly convert these to create or delete operations.
* fix metadata event path matching
* fix metadata event consumers for rename targets
* Fix replication rename target keys

  Logical rename events now reach replication sinks with distinct source and target paths.

  Handle non-filer sinks as delete-plus-create on the translated target key, and make the rename fallback path create at the translated target key too.

  Add focused tests covering non-filer renames, filer rename updates, and the fallback path.

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix filer sync rename path scoping

  Use directory-boundary matching instead of raw prefix checks when classifying source and target paths during filer sync.

  Also apply excludePaths per side so renames across excluded boundaries downgrade cleanly to create/delete instead of being misclassified as in-scope updates.

  Add focused tests for boundary matching and rename classification.

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix replicator directory boundary checks

  Use directory-boundary matching instead of raw prefix checks when deciding whether a source or target path is inside the watched tree or an excluded subtree.

  This prevents sibling paths such as /foo and /foobar from being misclassified during rename handling, and preserves the earlier rename-target-key fix.

  Add focused tests for boundary matching and rename classification across sibling/excluded directories.

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix etc-remote rename-out handling

  Use boundary-safe source/target directory membership when classifying metadata events under DirectoryEtcRemote.

  This prevents rename-out events from being processed as config updates, while still treating them as removals where appropriate for the remote sync and remote gateway command paths.

  Add focused tests for update/removal classification and sibling-prefix handling.

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Defer rename events until commit

  Queue logical rename metadata events during atomic and streaming renames and publish them only after the transaction commits successfully.

  This prevents subscribers from seeing delete or logical rename events for operations that later fail during delete or commit.

  Also serialize notification.Queue swaps in rename tests and add failure-path coverage.

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Skip descendant rename target lookups

  Avoid redundant target lookups during recursive directory renames once the destination subtree is known absent.

  The recursive move path now inserts known-absent descendants directly, and the test harness exercises prefixed directory listing so the optimization is covered by a directory rename regression test.

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Tighten rename review tests

  Return filer_pb.ErrNotFound from the bucket tracking store test stub so it follows the FilerStore contract, and add a webhook filter case for same-name renames across parent directories.

  Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix HardLinkId format verb in InsertEntryKnownAbsent error

  HardLinkId is a byte slice. %d prints each byte as a decimal number which is not useful for an identifier. Use %x to match the log line two lines above.
* only skip descendant target lookup when source and dest use same store

  moveFolderSubEntries unconditionally passed skipTargetLookup=true for every descendant. This is safe when all paths resolve to the same underlying store, but with path-specific store configuration a child's destination may map to a different backend that already holds an entry at that path. Use FilerStoreWrapper.SameActualStore to check per-child and fall back to the full CreateEntry path when stores differ.
* add nil and create edge-case tests for metadata event scope helpers
* extract pathIsEqualOrUnder into util.IsEqualOrUnder

  Identical implementations existed in both replication/replicator.go and command/filer_sync.go. Move to util.IsEqualOrUnder (alongside the existing FullPath.IsUnder) and remove the duplicates.
* use MetadataEventTargetDirectory for new-side directory in filer sync

  The new-side directory checks and sourceNewKey computation used message.NewParentPath directly. If NewParentPath were empty (legacy events, older filer versions during rolling upgrades), sourceNewKey would be wrong (/filename instead of /dir/filename) and the UpdateEntry parent path rewrite would panic on slice bounds. Derive targetDir once from MetadataEventTargetDirectory, which falls back to resp.Directory when NewParentPath is empty, and use it consistently for all new-side checks and the sink parent path.
### `2eaf98a7a2` Use Unix sockets for gRPC in mini mode (#8856)

* Use Unix sockets for gRPC between co-located services in mini mode

  In `weed mini`, all services run in one process. Previously, inter-service gRPC traffic (volume↔master, filer↔master, S3↔filer, worker↔admin, etc.) went through TCP loopback. This adds a gRPC Unix socket registry in the pb package: mini mode registers a socket path per gRPC port at startup, each gRPC server additionally listens on its socket, and GrpcDial transparently routes to the socket via WithContextDialer when a match is found. Standalone commands (weed master, weed filer, etc.) are unaffected since no sockets are registered. TCP listeners are kept for external clients.
* Handle Serve error and clean up socket file in ServeGrpcOnLocalSocket

  Log non-expected errors from grpcServer.Serve (ignoring grpc.ErrServerStopped) and always remove the Unix socket file when Serve returns, ensuring cleanup on Stop/GracefulStop.
### `937a168d34` notification.kafka: add SASL authentication and TLS support (#8832)

* notification.kafka: add SASL authentication and TLS support (#8827)

  Wire sarama SASL (PLAIN, SCRAM-SHA-256, SCRAM-SHA-512) and TLS configuration into the Kafka notification producer and consumer, enabling connections to secured Kafka clusters.
* notification.kafka: validate mTLS config
* kafka notification: validate partial mTLS config, replace panics with errors

  - Reject when only one of tls_client_cert/tls_client_key is provided
  - Replace three panic() calls in KafkaInput.initialize with returned errors
* kafka notification: enforce minimum TLS 1.2 for Kafka connections
### `479e72b5ab` mount: add option to show system entries (#8829)

* mount: add option to show system entries
* address gemini code review's suggested changes
* rename flag from -showSystemEntries to -includeSystemEntries
* meta_cache: purge hidden system entries on filer events

Co-authored-by: Chris Lu <chris.lu@gmail.com>
### `d97660d0cd` filer.sync: pipelined subscription with adaptive batching for faster catch-up (#8791)

* filer.sync: pipelined subscription with adaptive batching for faster catch-up

  The SubscribeMetadata pipeline was fully serial: reading a log entry from a volume server, unmarshaling, filtering, and calling stream.Send() all happened one-at-a-time. stream.Send() blocked the entire pipeline until the client acknowledged each event, limiting throughput to ~80 events/sec regardless of the -concurrency setting. Four server-side optimizations that stack:
  1. Pipelined sender: decouple stream.Send() from the read loop via a buffered channel (1024 messages). A dedicated goroutine handles gRPC delivery while the reader continues processing the next events.
  2. Adaptive batching: when event timestamps are >2min behind wall clock (backlog catch-up), drain multiple events from the channel and pack them into a single stream.Send() using a new `repeated events` field on SubscribeMetadataResponse. When events are recent (real-time), send one-by-one for low latency. Old clients ignore the new field (backward compatible).
  3. Persisted log readahead: run the OrderedLogVisitor in a background goroutine so volume server I/O for the next log file overlaps with event processing and gRPC delivery.
  4. Event-driven aggregated subscription: replace time.Sleep(1127ms) polling in SubscribeMetadata with notification-driven wake-up using the MetaLogBuffer subscriber mechanism, reducing real-time latency from ~1127ms to sub-millisecond.

  Combined, these create a 3-stage pipeline: [Volume I/O → readahead buffer] → [Filter → send buffer] → [gRPC Send]

  Test results (simulated backlog with 50µs gRPC latency per Send):

      direct (old):      2100 events  2100 sends  168ms  12,512 events/sec
      pipelined+batched: 2100 events    14 sends   40ms  52,856 events/sec

  Speedup: 4.2x single-stream throughput. Ref: #8771
* filer.sync: require client opt-in for batch event delivery

  Add ClientSupportsBatching field to SubscribeMetadataRequest. The server only packs events into the Events batch field when the client explicitly sets this flag to true. Old clients (Java SDK, third-party) that don't set the flag get one-event-per-Send, preserving backward compatibility. All Go callers (FollowMetadata, MetaAggregator) set the flag to true since their recv loops already unpack batched events.
* filer.sync: clear batch Events field after Send to release references

  Prevents the envelope message from holding references to the rest of the batch after gRPC serialization, allowing the GC to collect them sooner.
* filer.sync: fix Send deadlock, add error propagation test, event-driven local subscribe

  - pipelinedSender.Send: add case <-s.done to unblock when sender goroutine exits (fixes deadlock when errCh was already consumed by a prior Send).
  - pipelinedSender.reportErr: remove for-range drain on sendCh that could block indefinitely. Send() now detects exit via s.done instead.
  - SubscribeLocalMetadata: replace remaining time.Sleep(1127ms) in the gap-detected-no-memory-data path with event-driven listenersCond.Wait(), consistent with the rest of the subscription paths.
  - Add TestPipelinedSenderErrorPropagation: verifies error surfaces via Send and Close when the underlying stream fails.
  - Replace goto with labeled break in test simulatePipeline.
* filer.sync: check error returns in test code

  - direct_send: check slowStream.Send error return
  - pipelined_batched_send: check sender.Close error return
  - simulatePipeline: return error from sender.Close, propagate to callers

  Co-authored-by: Copilot <copilot@github.com>
### `92c2fc0d52` Add insecure_skip_verify option for HTTPS client in security.toml (#8781)

* Add -insecureSkipVerify flag and config option for filer.sync HTTPS connections

  When using filer.sync between clusters with different CAs (e.g., separate OpenShift clusters), TLS certificate verification fails with "x509: certificate signed by unknown authority". This adds two ways to skip TLS certificate verification:
  1. CLI flag: `weed filer.sync -insecureSkipVerify ...`
  2. Config option: `insecure_skip_verify = true` under [https.client] in security.toml

  Closes #8778
* Add insecure_skip_verify option for HTTPS client in security.toml

  When using filer.sync between clusters with different CAs (e.g., separate OpenShift clusters), TLS certificate verification fails. Adding insecure_skip_verify = true under [https.client] in security.toml allows skipping TLS certificate verification. The option is read during global HTTP client initialization so it applies to all HTTPS connections including filer.sync proxy reads and writes. Closes #8778

  Co-authored-by: Copilot <copilot@github.com>
### `6cf34f2376` Add -filerExcludePathPattern flag and fix nil panic in -filerExcludeFileName (#8756)

* Fix filerExcludeFileName to support directory names and path components

  The original implementation only matched excludeFileName against message.NewEntry.Name, which caused two issues:
  1. Nil pointer panic on delete events (NewEntry is nil)
  2. Files inside excluded directories were still backed up because the parent directory name was not checked

  This patch:
  - Checks all path components in resp.Directory against the regexp
  - Adds nil guard for message.NewEntry before accessing .Name
  - Also checks message.OldEntry.Name for rename/delete events

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Add -filerExcludePathPattern flag and fix nil panic in filerExcludeFileName

  Separate concerns between two exclude mechanisms:
  - filerExcludeFileName: matches entry name only (leaf node)
  - filerExcludePathPattern (NEW): matches any path component via regexp, so files inside matched directories are also excluded

  Also fixes nil pointer panic when filerExcludeFileName encounters delete events where NewEntry is nil.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Refactor exclude logic: per-side exclusion for rename events, reduce duplication

  - Extract isEntryExcluded() to compute exclusion per old/new side, so rename events crossing an exclude boundary are handled as delete + create instead of being entirely skipped
  - Extract compileExcludePattern() to deduplicate regexp compilation
  - Replace strings.Split with allocation-free pathContainsMatch()
  - Check message.NewParentPath (not just resp.Directory) for new side

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Move regexp compilation out of retry loop to fail fast on config errors

  compileExcludePattern for -filerExcludeFileName and -filerExcludePathPattern are configuration-time validations that will never succeed on retry. Move them to runFilerBackup before the reconnect loop and use glog.Fatalf on failure, so invalid patterns are caught immediately at startup instead of being retried every 1.7 seconds indefinitely.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* Add wildcard matching helpers for path and filename exclusion
* Replace regexp exclude patterns with wildcard-based flags, deprecate -filerExcludeFileName

  Add -filerExcludeFileNames and -filerExcludePathPatterns flags that accept comma-separated wildcard patterns (*, ?) using the existing wildcard library. Mark -filerExcludeFileName as deprecated but keep its regexp behavior.

  Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  Co-authored-by: Chris Lu <chris.lu@gmail.com>
### `29bdbb3c48` filer.sync: replace O(n) conflict check with O(depth) index lookups (#8772)

* filer.sync: replace O(n) conflict check with O(depth) index lookups

  The MetadataProcessor.conflictsWith() scanned all active jobs linearly for every new event dispatch. At high concurrency (256-1024), this O(n) scan under the activeJobsLock became a bottleneck that throttled the event dispatch pipeline, negating the benefit of higher -concurrency values. Replace the linear scan with three index maps:
  - activeFilePaths: O(1) exact file path lookup
  - activeDirPaths: O(1) directory path lookup per ancestor
  - descendantCount: O(1) check for active jobs under a directory

  Conflict check is now O(depth) where depth is the path depth (typically 3-6 levels), constant regardless of active job count. Benchmark confirms ~81ns per check whether there are 32 or 1024 active jobs. Also replace the O(n) watermark scan with minActiveTs tracking so non-oldest job completions are O(1). Ref: #8771
* filer.sync: replace O(n) watermark rescan with min-heap lazy deletion

  Address review feedback:
  - Replace minActiveTs O(n) rescan with a tsMinHeap using lazy deletion. Each TsNs is pushed once and popped once, giving O(log n) amortized watermark tracking regardless of completion order.
  - Fix benchmark to consume conflictsWith result via package-level sink variable to prevent compiler elision.

  The watermark advancement semantics (conservative, sets to completing job's TsNs) are unchanged from the original code. This is intentionally safe for idempotent replay on restart.
### `3d872e86f8` Implement POSIX file locking for FUSE mount (#8750)

* Add POSIX byte-range lock table for FUSE mount

  Implement PosixLockTable with per-inode range lock tracking supporting:
  - Shared (F_RDLCK) and exclusive (F_WRLCK) byte-range locks
  - Conflict detection across different lock owners
  - Lock coalescing for adjacent/overlapping same-owner same-type locks
  - Lock splitting on partial-range unlock
  - Blocking waiter support for SetLkw with cancellation
  - Owner-based cleanup for Release
* Wire POSIX lock handlers into FUSE mount

  Implement GetLk, SetLk, SetLkw on WFS delegating to PosixLockTable. Add posixLocks field to WFS and initialize in constructor. Clean up locks on Release via ReleaseOwner using ReleaseIn.LockOwner. Remove ENOSYS stubs from weedfs_unsupported.go.
* Enable POSIX and flock lock capabilities in FUSE mount

  Set EnableLocks: true in mount options to advertise CAP_POSIX_LOCKS and CAP_FLOCK_LOCKS during FUSE INIT.
* Avoid thundering herd in lock waiter wake-up

  Replace broadcast-all wakeWaiters with selective wakeEligibleWaiters that checks each waiter's requested lock against remaining held locks. Only waiters whose request no longer conflicts are woken; others stay queued. Store the requested lockRange in each lockWaiter to enable this.
* Fix uint64 overflow in adjacency check for lock coalescing

  Guard h.End+1 and lk.End+1 with < ^uint64(0) checks so that End == math.MaxUint64 (EOF) does not wrap to 0 and falsely merge non-adjacent locks.
* Add test for non-adjacent ranges with gap not being coalesced
### `c31e6b4684` Use filer-side copy for mounted whole-file copy_file_range (#8747)

* Optimize mounted whole-file copy_file_range
* Address mounted copy review feedback
* Harden mounted copy fast path

Co-authored-by: Copilot <copilot@github.com>
### `ecadeddcbe` fix: extend ignore404Error to match 404 Not Found string from S3 sink… (#8741)

* fix: extend ignore404Error to match 404 Not Found string from S3 sink errors
* test: add unit tests for isIgnorable404 error matching
* improve: pre-compute ignorable 404 string and simplify isIgnorable404
* test: replace init() with TestMain for global HTTP client setup
### `9434d3733d` mount: async flush on close() when writebackCache is enabled (#8727)

* mount: async flush on close() when writebackCache is enabled

  When -writebackCache is enabled, defer data upload and metadata flush from Flush() (triggered by close()) to a background goroutine in Release(). This allows processes like rsync that write many small files to proceed to the next file immediately instead of blocking on two network round-trips (volume upload + filer metadata) per file. Fixes #8718
* mount: add retry with backoff for async metadata flush

  The metadata flush in completeAsyncFlush now retries up to 3 times with exponential backoff (1s, 2s, 4s) on transient gRPC errors. Since the chunk data is already safely on volume servers at this point, only the filer metadata reference needs persisting — retrying is both safe and effective. Data flush (FlushData) is not retried externally because UploadWithRetry already handles transient HTTP/gRPC errors internally; if it still fails, the chunk memory has been freed.
* test: add integration tests for writebackCache async flush

  Add comprehensive FUSE integration tests for the writebackCache async flush feature (issue #8718):
  - Basic operations: write/read, sequential files, large files, empty files, overwrites
  - Fsync correctness: fsync forces synchronous flush even in writeback mode, immediate read-after-fsync
  - Concurrent small files: multi-worker parallel writes (rsync-like workload), multi-directory, rapid create/close
  - Data integrity: append after close, partial writes, file size correctness, binary data preservation
  - Performance comparison: writeback vs synchronous flush throughput
  - Stress test: 16 workers x 100 files with content verification
  - Mixed concurrent operations: reads, writes, creates running together

  Also fix pre-existing test infrastructure issues:
  - Rename framework.go to framework_test.go (fixes Go package conflict)
  - Fix undefined totalSize variable in concurrent_operations_test.go
* ci: update fuse-integration workflow to run full test suite

  The workflow previously only ran placeholder tests (simple_test.go, working_demo_test.go) in a temp directory due to a Go module conflict. Now that framework.go is renamed to framework_test.go, the full test suite compiles and runs correctly from test/fuse_integration/. Changes:
  - Run go test directly in test/fuse_integration/ (no temp dir copy)
  - Install weed binary to /usr/local/bin for test framework discovery
  - Configure /etc/fuse.conf with user_allow_other for FUSE mounts
  - Install fuse3 for modern FUSE support
  - Stream test output to log file for artifact upload
* mount: fix three P1 races in async flush

  P1-1: Reopen overwrites data still flushing in background. ReleaseByHandle removes the old handle from fhMap before the deferred flush finishes. A reopen of the same inode during that window would build from stale filer metadata, overwriting the async flush. Fix: Track in-flight async flushes per inode via pendingAsyncFlush map. AcquireHandle now calls waitForPendingAsyncFlush(inode) to block until any pending flush completes before reading filer metadata.

  P1-2: Deferred flush races rename and unlink after close. completeAsyncFlush captured the path once at entry, but rename or unlink after close() could cause metadata to be written under the wrong name or recreate a deleted file. Fix: Re-resolve path from inode via GetPath right before metadata flush. GetPath returns the current path (reflecting renames) or ENOENT (if unlinked), in which case we skip the metadata flush.

  P1-3: SIGINT/SIGTERM bypasses the async-flush drain. grace.OnInterrupt runs hooks then calls os.Exit(0), so WaitForAsyncFlush after server.Serve() never executes on signal. Fix: Add WaitForAsyncFlush (with 10s timeout) to the WFS interrupt handler, before cache cleanup. The timeout prevents hanging on Ctrl-C when the filer is unreachable.
* mount: fix P1 races — draining handle stays in fhMap

  P1-1: Reopen TOCTOU. The gap between ReleaseByHandle removing from fhMap and submitAsyncFlush registering in pendingAsyncFlush allowed a concurrent AcquireHandle to slip through with stale metadata. Fix: Hold pendingAsyncFlushMu across both the counter decrement (ReleaseByHandle) and the pending registration. The handle is registered as pending before the lock is released, so waitForPendingAsyncFlush always sees it.

  P1-2: Rename/unlink can't find draining handle. ReleaseByHandle deleted from fhMap immediately. Rename's FindFileHandle(inode) at line 251 could not find the handle to update entry.Name. Unlink could not coordinate either. Fix: When asyncFlushPending is true, ReleaseByHandle/ReleaseByInode leave the handle in fhMap (counter=0 but maps intact). The handle stays visible to FindFileHandle so rename can update entry.Name. completeAsyncFlush re-resolves the path from the inode (GetPath) right before metadata flush for correctness after rename/unlink. After drain, RemoveFileHandle cleans up the maps. Double-return prevention: ReleaseByHandle/ReleaseByInode return nil if counter is already <= 0, so Forget after Release doesn't start a second drain goroutine.

  P1-3: SIGINT deletes swap files under running goroutines. After the 10s timeout, os.RemoveAll deleted the write cache dir (containing swap files) while FlushData goroutines were still reading from them. Fix: Increase timeout to 30s. If timeout expires, skip write cache dir removal so in-flight goroutines can finish reading swap files. The OS (or next mount) cleans them up. Read cache is always removed.
* mount: never skip metadata flush when Forget drops inode mapping

  Forget removes the inode→path mapping when the kernel's lookup count reaches zero, but this does NOT mean the file was unlinked — it only means the kernel evicted its cache entry. completeAsyncFlush was treating GetPath failure as "file unlinked" and skipping the metadata flush, which orphaned the just-uploaded chunks for live files. Fix: Save dir and name at doFlush defer time. In completeAsyncFlush, try GetPath first to pick up renames; if the mapping is gone, fall back to the saved dir/name. Always attempt the metadata flush — the filer is the authority on whether the file exists, not the local inode cache.
* mount: distinguish Forget from Unlink in async flush path fallback

  The saved-path fallback (from the previous fix) always flushed metadata when GetPath failed, which recreated files that were explicitly unlinked after close(). The same stale fallback could recreate the pre-rename path if Forget dropped the inode mapping after a rename. Root cause: GetPath failure has two meanings:
  1. Forget — kernel evicted the cache entry (file still exists)
  2. Unlink — file was explicitly deleted (should not recreate)

  Fix (three coordinated changes):
  - Unlink (weedfs_file_mkrm.go): Before RemovePath, look up the inode and find any draining handle via FindFileHandle. Set fh.isDeleted = true so the async flush knows the file was explicitly removed.
  - Rename (weedfs_rename.go): When renaming a file with a draining handle, update asyncFlushDir/asyncFlushName to the post-rename location. This keeps the saved-path fallback current so Forget after rename doesn't flush to the old (pre-rename) path.
  - completeAsyncFlush (weedfs_async_flush.go): Check fh.isDeleted first — if true, skip metadata flush (file was unlinked, chunks become orphans for volume.fsck). Otherwise, try GetPath for the current path (renames); fall back to saved path if Forget dropped the mapping (file is live, just evicted from kernel cache).
* test/ci: address PR review nitpicks concurrent_operations_test.go: - Restore precise totalSize assertion instead of info.Size() > 0 writeback_cache_test.go: - Check rand.Read errors in all 3 locations (lines 310, 512, 757) - Check os.MkdirAll error in stress test (line 752) - Remove dead verifyErrors variable (line 332) - Replace both time.Sleep(5s) with polling via waitForFileContent to avoid flaky tests under CI load (lines 638, 700) fuse-integration.yml: - Add set -o pipefail so go test failures propagate through tee * ci: fix fuse3/fuse package conflict on ubuntu-22.04 runner fuse3 is pre-installed on ubuntu-22.04 runners and conflicts with the legacy fuse package. Only install libfuse3-dev for the headers. * mount/page_writer: remove debug println statements Remove leftover debug println("read new data1/2") from ReadDataAt in MemChunk and SwapFileChunk. * test: fix findWeedBinary matching source directory instead of binary findWeedBinary() matched ../../weed (the source directory) via os.Stat before checking PATH, then tried to exec a directory which fails with "permission denied" on the CI runner. Fix: Check PATH first (reliable in CI where the binary is installed to /usr/local/bin). For relative paths, verify the candidate is a regular file (!info.IsDir()). Add ../../weed/weed as a candidate for in-tree builds. * test: fix framework — dynamic ports, output capture, data dirs The integration test framework was failing in CI because: 1. All tests used hardcoded ports (19333/18080/18888), so sequential tests could conflict when prior processes hadn't fully released their ports yet. 2. Data subdirectories (data/master, data/volume) were not created before starting processes. 3. Master was started with -peers=none which is not a valid address. 4. Process stdout/stderr was not captured, making failures opaque ("service not ready within timeout" with no diagnostics). 5. The unmount fallback used 'umount' instead of 'fusermount -u'. 6. 
The mount used -cacheSizeMB (nonexistent) instead of -cacheCapacityMB and was missing -allowOthers=false for unprivileged CI runners. Fixes: - Dynamic port allocation via freePort() (net.Listen ":0") - Explicit gRPC ports via -port.grpc to avoid default port conflicts - Create data/master and data/volume directories in Setup() - Remove invalid -peers=none and -raftBootstrap flags - Capture process output to logDir/*.log via startProcess() helper - dumpLog() prints tail of log file on service startup failure - Use fusermount3/fusermount -u for unmount - Fix mount flag names (-cacheCapacityMB, -allowOthers=false) * test: remove explicit -port.grpc flags from test framework SeaweedFS convention: gRPC port = HTTP port + 10000. Volume and filer discover the master gRPC port by this convention. Setting explicit -port.grpc on master/volume/filer broke inter-service communication because the volume server computed master gRPC as HTTP+10000 but the actual gRPC was on a different port. Remove all -port.grpc flags and let the default convention work. Dynamic HTTP ports already ensure uniqueness; the derived gRPC ports (HTTP+10000) will also be unique. --------- Co-authored-by: Copilot <copilot@github.com> |
||
|
|
51ec0d2122 |
fix(remote_gateway): prevent double-versioning when syncing to versioned central bucket (#8710)
* fix(remote_gateway): prevent double-versioning when syncing to versioned central bucket
When a file is uploaded to a versioned bucket on edge, SeaweedFS stores
it internally as {object}.versions/v_{versionId}. The remote_gateway was
syncing this internal path directly to the central S3 endpoint. When
central's bucket also has versioning enabled, this caused central to
apply its own versioning on top, producing corrupt paths like:
object.versions/v_{edgeId}.versions/v_{centralId}
Fix: rewrite internal .versions/v_{id} paths to the original S3 object
key before uploading to the remote. Skip version file delete/update
events that are internal bookkeeping.
Fixes https://github.com/seaweedfs/seaweedfs/discussions/8481#discussioncomment-16209342
* fix(remote_gateway): propagate delete markers to remote as deletions
Delete markers are zero-content version entries (ExtDeleteMarkerKey=true)
created by S3 DELETE on a versioned bucket. Previously they were silently
dropped by the HasData() filter, so deletions on edge never reached
central.
Now: detect delete markers before the HasData check, rewrite the
.versions path to the original S3 key, and issue client.DeleteFile()
on the remote.
* fix(remote_gateway): tighten isVersionedPath to avoid false positives
Address PR review feedback:
- Add isDir parameter to isVersionedPath so it only matches the exact
internal shapes: directories whose name ends with .versions (isDir=true),
and files with the v_ prefix inside a .versions parent (isDir=false).
Previously the function was too broad and could match user-created paths
like "my.versions/data.txt".
- Update all 4 call sites to pass the entry's IsDirectory field.
- Rename TestVersionedDirectoryNotFilteredByHasData to
TestVersionsDirectoryFilteredByHasData so the name reflects the
actual assertion (directories ARE filtered by HasData).
- Expand TestIsVersionedPath with isDir cases and false-positive checks.
* fix(remote_gateway): persist sync marker after delete-marker propagation
The delete-marker branch was calling client.DeleteFile() and returning
without updating the local entry, making event replay re-issue the
remote delete. Now call updateLocalEntry after a successful DeleteFile
to stamp the delete-marker entry with a RemoteEntry, matching the
pattern used by the normal create path.
* refactor(remote_gateway): extract syncDeleteMarker and fix root path edge case
- Extract syncDeleteMarker() shared helper used by both bucketed and
mounted-dir event processors, replacing the duplicated delete + persist
local marker logic.
- Fix rewriteVersionedSourcePath for root-level objects: when lastSlash
is 0 (e.g. "/file.xml.versions"), return "/" as the parent dir instead
of an empty string.
- The strings.Contains(dir, ".versions/") condition flagged in review was
already removed in a prior commit that tightened isVersionedPath.
* fix(remote_gateway): skip updateLocalEntry for versioned path rewrites
After rewriting a .versions/v_{id} path to the logical S3 key and
uploading, the code was calling updateLocalEntry on the original v_*
entry, stamping it with a RemoteEntry for the logical key. This is
semantically wrong: the logical object has no filer entry in versioned
buckets, and the internal v_* entry should not carry a RemoteEntry for
a different path.
Skip updateLocalEntry when the path was rewritten from a versioned
source. Replay safety is preserved because S3 PutObject is idempotent.
* fix(remote_gateway): scope versioning checks to /buckets/ namespace
isVersionedPath and rewriteVersionedSourcePath could wrongly match
paths in non-bucket mounts (e.g. /mnt/remote/file.xml.versions).
Add the same /buckets/ prefix guard used by isMultipartUploadDir so
the .versions / v_ logic only applies within the bucket namespace.
|
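The path rewrite described in this commit series can be sketched as follows. This is an illustrative approximation, not the actual remote_gateway implementation: `rewriteVersionedKey` is a hypothetical helper that maps an internal `{object}.versions/v_{versionId}` file path back to the logical S3 key, scoped to the `/buckets/` namespace and matching only the exact internal shapes, per the review fixes above.

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// rewriteVersionedKey maps an internal versioned file path back to the
// logical S3 object key, returning the rewritten path and whether a
// rewrite happened. Non-bucket paths and user-created look-alikes
// (e.g. "my.versions/data.txt") pass through unchanged.
func rewriteVersionedKey(p string) (string, bool) {
	if !strings.HasPrefix(p, "/buckets/") {
		return p, false // versioning logic is scoped to the bucket namespace
	}
	dir, file := path.Split(p)
	dir = strings.TrimSuffix(dir, "/")
	// only match a v_ file directly inside a .versions parent
	if !strings.HasSuffix(dir, ".versions") || !strings.HasPrefix(file, "v_") {
		return p, false
	}
	// "/buckets/b/object.versions" -> "/buckets/b/object"
	return strings.TrimSuffix(dir, ".versions"), true
}

func main() {
	fmt.Println(rewriteVersionedKey("/buckets/b/file.xml.versions/v_123"))
	fmt.Println(rewriteVersionedKey("/buckets/b/my.versions/data.txt"))
	fmt.Println(rewriteVersionedKey("/mnt/remote/file.xml.versions/v_123"))
}
```

Uploading under the rewritten key is what prevents the double-versioning corruption (`object.versions/v_{edgeId}.versions/v_{centralId}`) when the central bucket is also versioned.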
||
|
|
15f4a97029 |
fix: improve raft leader election reliability and failover speed (#8692)
* fix: clear raft vote state file on non-resume startup

The seaweedfs/raft library v1.1.7 added a persistent `state` file for currentTerm and votedFor. When RaftResumeState=false (the default), the log, conf, and snapshot directories are cleared but this state file was not. On repeated restarts, different masters accumulate divergent terms, causing AppendEntries rejections and preventing leader election.
Fixes #8690

* fix: recover TopologyId from snapshot before clearing raft state

When RaftResumeState=false clears log/conf/snapshot, the TopologyId (used for license validation) was lost. Now extract it from the latest snapshot before cleanup and restore it on the topology. Both seaweedfs/raft and hashicorp/raft paths are handled, with a shared recoverTopologyIdFromState helper in raft_common.go.

* fix: stagger multi-master bootstrap delay by peer index

Previously all masters used a fixed 1500ms delay before the bootstrap check. Now the delay is proportional to the peer's sorted index with randomization (matching the hashicorp raft path), giving the designated bootstrap node (peer 0) a head start while later peers wait for gRPC servers to be ready.
Also adds diagnostic logging showing why DoJoinCommand was or wasn't called, making leader election issues easier to diagnose from logs.

* fix: skip unreachable masters during leader reconnection

When a master leader goes down, non-leader masters still redirect clients to the stale leader address. The masterClient would follow these redirects, fail, and retry — wasting round-trips each cycle. Now tryAllMasters tracks which masters failed within a cycle and skips redirects pointing to them, reducing log spam and connection overhead during leader failover.

* fix: take snapshot after TopologyId generation for recovery

After generating a new TopologyId on the leader, immediately take a raft snapshot so the ID can be recovered from the snapshot on future restarts with RaftResumeState=false. Without this, short-lived clusters would lose the TopologyId on restart since no automatic snapshot had been taken yet.

* test: add multi-master raft failover integration tests

Integration test framework and 5 test scenarios for 3-node master clusters:
- TestLeaderConsistencyAcrossNodes: all nodes agree on leader and TopologyId
- TestLeaderDownAndRecoverQuickly: leader stops, new leader elected, old leader rejoins as follower
- TestLeaderDownSlowRecover: leader gone for extended period, cluster continues with 2/3 quorum
- TestTwoMastersDownAndRestart: quorum lost (2/3 down), recovered when both restart
- TestAllMastersDownAndRestart: full cluster restart, leader elected, all nodes agree on TopologyId

* fix: address PR review comments

- peerIndex: return -1 (not 0) when self not found, add warning log
- recoverTopologyIdFromSnapshot: defer dir.Close()
- tests: check GetTopologyId errors instead of discarding them

* fix: address review comments on failover tests

- Assert no leader after quorum loss (was only logging)
- Verify follower cs.Leader matches expected leader via ServerAddress.ToHttpAddress() comparison
- Check GetTopologyId error in TestTwoMastersDownAndRestart |
||
|
|
01987bcafd |
Make weed-fuse compatible with systemd-based mount (#6814)
* Make weed-fuse compatible with systemd-mount series

* fix: add missing type annotation on skipAutofs param in FreeBSD build

The parameter was declared without a type, causing a compile error on FreeBSD.

* fix: guard hasAutofs nil dereference and make FsName conditional on autofs mode

- Check option.hasAutofs for nil before dereferencing to prevent panic when RunMount is called without the flag initialized.
- Only set FsName to "fuse" when autofs mode is active; otherwise preserve the descriptive server:path name for mount/df output.
- Fix typo: recogize -> recognize.

* fix: consistent error handling for autofs option and log ignored _netdev

- Replace panic with fmt.Fprintf+return false for autofs parse errors, matching the pattern used by other fuse option parsers.
- Log when _netdev option is silently stripped to aid debugging.

---------

Co-authored-by: Chris Lu <chris.lu@gmail.com> |
||
|
|
81369b8a83 |
improve: large file sync throughput for remote.cache and filer.sync (#8676)
* improve large file sync throughput for remote.cache and filer.sync
Three main throughput improvements:
1. Adaptive chunk sizing for remote.cache: targets ~32 chunks per file
instead of always starting at 5MB. A 500MB file now uses ~16MB chunks
(32 chunks) instead of 5MB chunks (100 chunks), reducing per-chunk
overhead (volume assign, gRPC call, needle write) by 3x.
2. Configurable concurrency at every layer:
- remote.cache chunk concurrency: -chunkConcurrency flag (default 8)
- remote.cache S3 download concurrency: -downloadConcurrency flag
(default raised from 1 to 5 per chunk)
- filer.sync chunk concurrency: -chunkConcurrency flag (default 32)
3. S3 multipart download concurrency raised from 1 to 5: the S3 manager
downloader was using Concurrency=1, serializing all part downloads
within each chunk. This alone can 5x per-chunk download speed.
The concurrency values flow through the gRPC request chain:
shell command → CacheRemoteObjectToLocalClusterRequest →
FetchAndWriteNeedleRequest → S3 downloader
Zero values in the request mean "use server defaults", maintaining
full backward compatibility with existing callers.
Ref #8481
* fix: use full maxMB for chunk size cap and remove loop guard
Address review feedback:
- Use full maxMB instead of maxMB/2 for maxChunkSize to avoid
unnecessarily limiting chunk size for very large files.
- Remove chunkSize < maxChunkSize guard from the safety loop so it
can always grow past maxChunkSize when needed to stay under 1000
chunks (e.g., extremely large files with small maxMB).
* address review feedback: help text, validation, naming, docs
- Fix help text for -chunkConcurrency and -downloadConcurrency flags
to say "0 = server default" instead of advertising specific numeric
defaults that could drift from the server implementation.
- Validate chunkConcurrency and downloadConcurrency are within int32
range before narrowing, returning a user-facing error if out of range.
- Rename ReadRemoteErr to readRemoteErr to follow Go naming conventions.
- Add doc comment to SetChunkConcurrency noting it must be called
during initialization before replication goroutines start.
- Replace doubling loop in chunk size safety check with direct
ceil(remoteSize/1000) computation to guarantee the 1000-chunk cap.
* address Copilot review: clamp concurrency, fix chunk count, clarify proto docs
- Use ceiling division for chunk count check to avoid overcounting
when file size is an exact multiple of chunk size.
- Clamp chunkConcurrency (max 1024) and downloadConcurrency (max 1024
at filer, max 64 at volume server) to prevent excessive goroutines.
- Always use ReadFileWithConcurrency when the client supports it,
falling back to the implementation's default when value is 0.
- Clarify proto comments that download_concurrency only applies when
the remote storage client supports it (currently S3).
- Include specific server defaults in help text (e.g., "0 = server
default 8") so users see the actual values in -h output.
* fix data race on executionErr and use %w for error wrapping
- Protect concurrent writes to executionErr in remote.cache worker
goroutines with a sync.Mutex to eliminate the data race.
- Use %w instead of %v in volume_grpc_remote.go error formatting
to preserve the error chain for errors.Is/errors.As callers.
|
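The adaptive chunk sizing described above (target ~32 chunks per file, clamp to the configured maximum, and guarantee at most 1000 chunks via a direct ceiling division) can be sketched as below. The function name, the 5MB floor, and the exact rounding are illustrative assumptions, not the shipped implementation.

```go
package main

import "fmt"

const mb = int64(1024 * 1024)

// adaptiveChunkSize picks a chunk size aiming for ~32 chunks per file,
// clamped to [minChunk, maxChunk], then grown if needed so the file
// never exceeds 1000 chunks (direct ceil instead of a doubling loop,
// per the review feedback above).
func adaptiveChunkSize(fileSize, maxChunk int64) int64 {
	const minChunk = 5 * mb // the historical fixed starting size
	const targetChunks = 32
	const maxChunks = 1000

	size := fileSize / targetChunks
	if size < minChunk {
		size = minChunk
	}
	if size > maxChunk {
		size = maxChunk
	}
	// safety cap: chunk size must be at least ceil(fileSize/maxChunks)
	if minForCap := (fileSize + maxChunks - 1) / maxChunks; size < minForCap {
		size = minForCap
	}
	return size
}

func main() {
	// a 500MB file gets ~16MB chunks (exactly 32 chunks) instead of 100 x 5MB
	fmt.Println(adaptiveChunkSize(500*mb, 64*mb))
}
```

Fewer, larger chunks amortize the per-chunk volume assign, gRPC call, and needle write, which is where the 3x overhead reduction in the commit message comes from.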
||
|
|
e8914ac879 |
feat(admin): add -urlPrefix flag for subdirectory deployment (#8670)
Allow the admin server to run behind a reverse proxy under a subdirectory by adding a -urlPrefix flag (e.g. -urlPrefix=/seaweedfs). Closes #8646 |
||
|
|
8cde3d4486 |
Add data file compaction to iceberg maintenance (Phase 2) (#8503)
* Add iceberg_maintenance plugin worker handler (Phase 1)

Implement automated Iceberg table maintenance as a new plugin worker job type. The handler scans S3 table buckets for tables needing maintenance and executes operations in the correct Iceberg order: expire snapshots, remove orphan files, and rewrite manifests.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Add data file compaction to iceberg maintenance handler (Phase 2)

Implement bin-packing compaction for small Parquet data files:
- Enumerate data files from manifests, group by partition
- Merge small files using parquet-go (read rows, write merged output)
- Create new manifest with ADDED/DELETED/EXISTING entries
- Commit new snapshot with compaction metadata
Add 'compact' operation to maintenance order (runs before expire_snapshots), configurable via target_file_size_bytes and min_input_files thresholds.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Fix memory exhaustion in mergeParquetFiles by processing files sequentially

Previously all source Parquet files were loaded into memory simultaneously, risking OOM when a compaction bin contained many small files. Now each file is loaded, its rows are streamed into the output writer, and its data is released before the next file is loaded — keeping peak memory proportional to one input file plus the output buffer.

* Validate bucket/namespace/table names against path traversal

Reject names containing '..', '/', or '\' in Execute to prevent directory traversal via crafted job parameters.

* Add filer address failover in iceberg maintenance handler

Try each filer address from cluster context in order instead of only using the first one. This improves resilience when the primary filer is temporarily unreachable.

* Add separate MinManifestsToRewrite config for manifest rewrite threshold

The rewrite_manifests operation was reusing MinInputFiles (meant for compaction bin file counts) as its manifest count threshold. Add a dedicated MinManifestsToRewrite field with its own config UI section and default value (5) so the two thresholds can be tuned independently.

* Fix risky mtime fallback in orphan removal that could delete new files

When entry.Attributes is nil, mtime defaulted to Unix epoch (1970), which would always be older than the safety threshold, causing the file to be treated as eligible for deletion. Skip entries with nil Attributes instead, matching the safer logic in operations.go.

* Fix undefined function references in iceberg_maintenance_handler.go

Use the exported function names (ShouldSkipDetectionByInterval, BuildDetectorActivity, BuildExecutorActivity) matching their definitions in vacuum_handler.go.

* Remove duplicated iceberg maintenance handler in favor of iceberg/ subpackage

The IcebergMaintenanceHandler and its compaction code in the parent pluginworker package duplicated the logic already present in the iceberg/ subpackage (which self-registers via init()). The old code lacked stale-plan guards, proper path normalization, CAS-based xattr updates, and error-returning parseOperations. Since the registry pattern (default "all") makes the old handler unreachable, remove it entirely. All functionality is provided by iceberg.Handler with the reviewed improvements.

* Fix MinManifestsToRewrite clamping to match UI minimum of 2

The clamp reset values below 2 to the default of 5, contradicting the UI's advertised MinValue of 2. Clamp to 2 instead.

* Sort entries by size descending in splitOversizedBin for better packing

Entries were processed in insertion order which is non-deterministic from map iteration. Sorting largest-first before the splitting loop improves bin packing efficiency by filling bins more evenly.

* Add context cancellation check to drainReader loop

The row-streaming loop in drainReader did not check ctx between iterations, making long compaction merges uncancellable. Check ctx.Done() at the top of each iteration.

* Fix splitOversizedBin to always respect targetSize limit

The minFiles check in the split condition allowed bins to grow past targetSize when they had fewer than minFiles entries, defeating the OOM protection. Now bins always split at targetSize, and a trailing runt with fewer than minFiles entries is merged into the previous bin.

* Add integration tests for iceberg table maintenance plugin worker

Tests start a real weed mini cluster, create S3 buckets and Iceberg table metadata via filer gRPC, then exercise the iceberg.Handler operations (ExpireSnapshots, RemoveOrphans, RewriteManifests) against the live filer. A full maintenance cycle test runs all operations in sequence and verifies metadata consistency. Also adds exported method wrappers (testing_api.go) so the integration test package can call the unexported handler methods.

* Fix splitOversizedBin dropping files and add source path to drainReader errors

The runt-merge step could leave leading bins with fewer than minFiles entries (e.g. [80,80,10,10] with targetSize=100, minFiles=2 would drop the first 80-byte file). Replace the filter-based approach with an iterative merge that folds any sub-minFiles bin into its smallest neighbor, preserving all eligible files. Also add the source file path to drainReader error messages so callers can identify which Parquet file caused a read/write failure.

* Harden integration test error handling

- s3put: fail immediately on HTTP 4xx/5xx instead of logging and continuing
- lookupEntry: distinguish NotFound (return nil) from unexpected RPC errors (fail the test)
- writeOrphan and orphan creation in FullMaintenanceCycle: check CreateEntryResponse.Error in addition to the RPC error

* go fmt

---------

Co-authored-by: Copilot <copilot@github.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> |
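The bin-packing behavior these commits converge on (sort largest-first, split at targetSize, and fold sub-minFiles runts into a neighbor so no file is dropped) can be sketched as below. `packBins` is an illustrative simplification of splitOversizedBin, not the actual handler code; the real fix folds runts into the smallest neighbor, while this sketch folds into the adjacent bin.

```go
package main

import (
	"fmt"
	"sort"
)

// packBins groups file sizes into bins: largest-first greedy fill up to
// targetSize, then any bin with fewer than minFiles entries is folded
// into a neighboring bin so every eligible file survives packing.
func packBins(sizes []int64, targetSize int64, minFiles int) [][]int64 {
	sorted := append([]int64(nil), sizes...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] > sorted[j] })

	var bins [][]int64
	var cur []int64
	var curSize int64
	for _, s := range sorted {
		// always split at targetSize (the OOM protection from the fix above)
		if len(cur) > 0 && curSize+s > targetSize {
			bins = append(bins, cur)
			cur, curSize = nil, 0
		}
		cur = append(cur, s)
		curSize += s
	}
	if len(cur) > 0 {
		bins = append(bins, cur)
	}
	// fold trailing runts into the previous bin
	for i := len(bins) - 1; i > 0; i-- {
		if len(bins[i]) < minFiles {
			bins[i-1] = append(bins[i-1], bins[i]...)
			bins = append(bins[:i], bins[i+1:]...)
		}
	}
	// fold a leading runt too, so the [80,80,10,10] case drops nothing
	if len(bins) > 1 && len(bins[0]) < minFiles {
		bins[1] = append(bins[1], bins[0]...)
		bins = bins[1:]
	}
	return bins
}

func main() {
	// the regression case from the commit message: targetSize=100, minFiles=2
	bins := packBins([]int64{80, 80, 10, 10}, 100, 2)
	total := 0
	for _, b := range bins {
		total += len(b)
	}
	fmt.Println(len(bins), total)
}
```

The invariant worth testing is the one the final fix states: every eligible file ends up in some bin.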
||
|
|
47799a5b4f | fix tests | ||
|
|
1f2014568f |
fix(mini): use "all" job type for plugin worker (#8634)
The mini command previously hardcoded a list of specific job types (vacuum, volume_balance, erasure_coding, admin_script). Use the "all" category instead so that newly registered handlers are automatically picked up without requiring changes to the mini command. |
||
|
|
a838661b83 |
feat(plugin): EC shard balance handler for plugin worker (#8629)
* feat(ec_balance): add TaskTypeECBalance constant and protobuf definitions

Add the ec_balance task type constant to both topology and worker type systems. Define EcBalanceTaskParams, EcShardMoveSpec, and EcBalanceTaskConfig protobuf messages for EC shard balance operations.

* feat(ec_balance): add configuration for EC shard balance task

Config includes imbalance threshold, min server count, collection filter, disk type, and preferred tags for tag-aware placement.

* feat(ec_balance): add multi-phase EC shard balance detection algorithm

Implements four detection phases adapted from the ec.balance shell command:
1. Duplicate shard detection and removal proposals
2. Cross-rack shard distribution balancing
3. Within-rack node-level shard balancing
4. Global shard count equalization across nodes
Detection is side-effect-free: it builds an EC topology view from ActiveTopology and generates move proposals without executing them.

* feat(ec_balance): add EC shard move task execution

Implements the shard move sequence using the same VolumeEcShardsCopy, VolumeEcShardsMount, VolumeEcShardsUnmount, and VolumeEcShardsDelete RPCs as the shell ec.balance command. Supports both regular shard moves and dedup-phase deletions (unmount+delete without copy).

* feat(ec_balance): add task registration and scheduling

Register EC balance task definition with auto-config update support. Scheduling respects max concurrent limits and worker capabilities.

* feat(ec_balance): add plugin handler for EC shard balance

Implements the full plugin handler with detection, execution, admin and worker config forms, proposal building, and decision trace reporting. Supports collection/DC/disk type filtering, preferred tag placement, and configurable detection intervals. Auto-registered via init() with the handler registry.

* test(ec_balance): add tests for detection algorithm and plugin handler

Detection tests cover: duplicate shard detection, cross-rack imbalance, within-rack imbalance, global rebalancing, topology building, collection filtering, and edge cases.
Handler tests cover: config derivation with clamping, proposal building, protobuf encode/decode round-trip, fallback parameter decoding, capability, and config policy round-trip.

* fix(ec_balance): address PR review feedback and fix CI test failure

- Update TestWorkerDefaultJobTypes to expect 6 handlers (was 5)
- Extract threshold constants (ecBalanceMinImbalanceThreshold, etc.) to eliminate magic numbers in Descriptor and config derivation
- Remove duplicate ShardIdsToUint32 helper (use erasure_coding package)
- Add bounds checks for int64→int/uint32 conversions to fix CodeQL integer conversion warnings

* fix(ec_balance): address code review findings

storage_impact.go:
- Add TaskTypeECBalance case returning shard-level reservation (ShardSlots: -1/+1) instead of falling through to default which incorrectly reserves a full volume slot on target.
detection.go:
- Use dc:rack composite key to avoid cross-DC rack name collisions. Only create rack entries after confirming node has matching disks.
- Add exceedsImbalanceThreshold check to cross-rack, within-rack, and global phases so trivial skews below the configured threshold are ignored. Dedup phase always runs since duplicates are errors.
- Reserve destination capacity after each planned move (decrement destNode.freeSlots, update rackShardCount/nodeShardCount) to prevent overbooking the same destination.
- Skip nodes with freeSlots <= 0 when selecting minNode in global balance to avoid proposing moves to full nodes.
- Include loop index and source/target node IDs in TaskID to guarantee uniqueness across moves with the same volumeID/shardID.
ec_balance_handler.go:
- Fail fast with error when shard_id is absent in fallback parameter decoding instead of silently defaulting to shard 0.
ec_balance_task.go:
- Delegate GetProgress() to BaseTask.GetProgress() so progress updates from ReportProgressWithStage are visible to callers.
- Add fail-fast guard rejecting multiple sources/targets until batch execution is implemented.
Findings verified but not changed (matches existing codebase pattern in vacuum/balance/erasure_coding handlers):
- register.go globalTaskDef.Config race: same unsynchronized pattern in all 4 task packages.
- CreateTask using generated ID: same fmt.Sprintf pattern in all 4 task packages.

* fix(ec_balance): harden parameter decoding, progress tracking, and validation

ec_balance_handler.go (decodeECBalanceTaskParams):
- Validate execution-critical fields (Sources[0].Node, ShardIds, Targets[0].Node, ShardIds) after protobuf deserialization.
- Require source_disk_id and target_disk_id in legacy fallback path so Targets[0].DiskId is populated for VolumeEcShardsCopyRequest.
- All error messages reference decodeECBalanceTaskParams and the specific missing field (TaskParams, shard_id, Targets[0].DiskId, EcBalanceTaskParams) for debuggability.
ec_balance_task.go:
- Track progress in ECBalanceTask.progress field, updated via reportProgress() helper called before ReportProgressWithStage(), so GetProgress() returns real stage progress instead of stale 0.
- Validate: require exactly 1 source and 1 target (mirrors Execute guard), require ShardIds on both, with error messages referencing ECBalanceTask.Validate and the specific field.

* fix(ec_balance): fix dedup execution path, stale topology, collection filter, timeout, and dedupeKey

detection.go:
- Dedup moves now set target=source so isDedupPhase() triggers the unmount+delete-only execution path instead of attempting a copy.
- Apply moves to in-memory topology between phases via applyMovesToTopology() so subsequent phases see updated shard placement and don't conflict with already-planned moves.
- detectGlobalImbalance now accepts allowedVids and filters both shard counting and shard selection to respect CollectionFilter.
ec_balance_task.go:
- Apply EcBalanceTaskParams.TimeoutSeconds to the context via context.WithTimeout so all RPC operations respect the configured timeout instead of hanging indefinitely.
ec_balance_handler.go:
- Include source node ID in dedupeKey so dedup deletions from different source nodes for the same shard aren't collapsed.
- Clamp minServerCountRaw and minIntervalRaw lower bounds on int64 before narrowing to int, preventing undefined overflow on 32-bit.

* fix(ec_balance): log warning before cancelling on progress send failure

Log the error, job ID, job type, progress percentage, and stage before calling execCancel() in the progress callback so failed progress sends are diagnosable instead of silently cancelling. |
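The exceedsImbalanceThreshold check referenced in these review fixes can be sketched as follows. The exact formula in the SeaweedFS detection code is not shown in the commit message; this version assumes a simple (max - min) / average ratio, which captures the stated intent of skipping trivial skews.

```go
package main

import "fmt"

// exceedsImbalanceThreshold reports whether the shard-count spread
// between the busiest and idlest node is large enough, relative to the
// average load, to justify proposing moves. Formula is an assumption.
func exceedsImbalanceThreshold(shardCounts []int, threshold float64) bool {
	if len(shardCounts) < 2 {
		return false // a single node cannot be imbalanced
	}
	min, max, sum := shardCounts[0], shardCounts[0], 0
	for _, c := range shardCounts {
		if c < min {
			min = c
		}
		if c > max {
			max = c
		}
		sum += c
	}
	avg := float64(sum) / float64(len(shardCounts))
	if avg == 0 {
		return false
	}
	return float64(max-min)/avg > threshold
}

func main() {
	fmt.Println(exceedsImbalanceThreshold([]int{10, 10, 11}, 0.3)) // trivial skew
	fmt.Println(exceedsImbalanceThreshold([]int{2, 10, 18}, 0.3))  // large skew
}
```

Per the commit, this guard applies to the cross-rack, within-rack, and global phases only; the dedup phase always runs because duplicate shards are errors, not skew.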
||
|
|
e4a77b8b16 |
feat(admin): support env var and security.toml for credentials (#8606)
* feat(security): add [admin] section to security.toml scaffold Add admin credential fields (user, password, readonly.user, readonly.password) to security.toml. Via viper's WEED_ env prefix and AutomaticEnv(), these are automatically overridable as WEED_ADMIN_USER, WEED_ADMIN_PASSWORD, etc. Ref: https://github.com/seaweedfs/seaweedfs/discussions/8586 * feat(admin): support env var and security.toml fallbacks for credentials Add applyViperFallback() to read admin credentials from security.toml / WEED_* environment variables when CLI flags are not explicitly set. This allows systems like NixOS to pass secrets via env vars instead of CLI flags, which appear in process listings. Precedence: CLI flag > env var / security.toml > default value. Also change -adminUser default from "admin" to "" so that credentials are fully opt-in. Ref: https://github.com/seaweedfs/seaweedfs/discussions/8586 * feat(helm): use WEED_ env vars for admin credentials instead of CLI flags Rename SEAWEEDFS_ADMIN_USER/PASSWORD to WEED_ADMIN_USER/PASSWORD so viper picks them up natively. Remove -adminUser/-adminPassword shell expansion from command args since the Go binary now reads these directly via viper. * docs(admin): document env var and security.toml credential support Add environment variable mapping table, security.toml example, and precedence rules to the admin README. * style(security): use nested [admin.readonly] table in security.toml Use a nested TOML table instead of dotted keys for the readonly credentials. More idiomatic and easier to read; no change in how Viper parses it. * fix(admin): use util.GetViper() for env var support and fix README example applyViperFallback() was using viper.GetString() directly, which bypasses the WEED_ env prefix and AutomaticEnv setup that only happens in util.GetViper(). Switch to util.GetViper().GetString() so WEED_ADMIN_* environment variables are actually picked up. 
Also fix the README example to include WEED_ADMIN_USER alongside WEED_ADMIN_PASSWORD, since runAdmin() rejects an empty username when a password is set.
* fix(admin): restore default adminUser to "admin"
Defaulting adminUser to "" broke the common flow of setting only WEED_ADMIN_PASSWORD — runAdmin() rejects an empty username when a password is set. Restore "admin" as the default so that setting only the password works out of the box.
* docs(admin): align README security.toml example with scaffold format
Use nested [admin.readonly] table instead of flat dotted keys to match the format in weed/command/scaffold/security.toml.
* docs(admin): remove README.md in favor of wiki page
Admin documentation lives at the wiki (Admin-UI.md). Remove the in-repo README to avoid maintaining duplicate docs.
---------
Co-authored-by: Copilot <copilot@github.com> |
737116e83c | fix port probing | ||
d4d2e511ed | for mini, default to bind all | ||
587c24ec89 |
plugin worker: support job type categories (all, default, heavy) (#8547)
* plugin worker: add handler registry with job categories
Introduce a self-registration pattern for plugin worker job handlers.
Each handler can register itself via init() with a HandlerFactory that
declares its job type, category (default/heavy), CLI aliases, and a
builder function.
ResolveHandlerFactories accepts a mix of category names ("all",
"default", "heavy") and explicit job type names/aliases, returning the
matching factories. This enables workers to be configured by resource
profile rather than requiring explicit job type enumeration.
* plugin worker: register all handlers via init()
Each job handler now self-registers into the global handler registry
with its canonical job type, category, CLI aliases, and build function:
- vacuum: category=default
- volume_balance: category=default
- admin_script: category=default
- erasure_coding: category=heavy
- iceberg_maintenance: category=heavy
Adding a new job type now only requires adding the init() call in the
handler file itself — no other files need to be touched.
* plugin worker: replace hardcoded job type switch with registry
Remove buildPluginWorkerHandler, parsePluginWorkerJobTypes, and
canonicalPluginWorkerJobType from worker_runtime.go. The simplified
buildPluginWorkerHandlers now delegates to
pluginworker.ResolveHandlerFactories, which resolves category names
("all", "default", "heavy") and explicit job type names/aliases.
The default job type is changed from an explicit list to "all", so new
handlers registered via init() are automatically picked up.
Update all tests to use the new API.
* plugin worker: update CLI help text for job categories
Update the -jobType flag description and command examples to document
category support (all, default, heavy) alongside explicit job type names.
* plugin worker: address review feedback
- Add CategoryAll constant; use typed constants in tokenAsCategory
- Pre-allocate result slice in ResolveHandlerFactories
- Add vacuum aliases (vol.vacuum, volume.vacuum)
- List alias examples (ec, balance, iceberg) in -jobType flag help
- Create handlers aggregator package for subpackage blank imports so
new handler subpackages only need to be added in one place
- Make category tests relationship-based (subset/union checks) instead
of asserting exact handler counts
- Add clarifying comments to worker_test.go and mini_plugin_test.go
listing expected handler names next to count assertions
---------
Co-authored-by: Copilot <copilot@github.com>
|
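The self-registration and category-resolution pattern described above can be sketched roughly as below. Names are simplified stand-ins for the real pluginworker types (the actual HandlerFactory also carries a builder function, and registration happens via init() in each handler file):

```go
package main

import (
	"fmt"
	"sort"
)

// Category groups handlers by resource profile ("all" is a pseudo-category
// matching everything, mirroring the tokens described above).
type Category string

const (
	CategoryAll     Category = "all"
	CategoryDefault Category = "default"
	CategoryHeavy   Category = "heavy"
)

// HandlerFactory declares a job type, its category, and CLI aliases.
type HandlerFactory struct {
	JobType  string
	Category Category
	Aliases  []string
}

var registry []HandlerFactory

// Register is what each handler's init() would call.
func Register(f HandlerFactory) { registry = append(registry, f) }

// ResolveHandlerFactories accepts a mix of category tokens and explicit
// job type names/aliases, returning the matching factories (deduplicated).
func ResolveHandlerFactories(tokens []string) ([]HandlerFactory, error) {
	picked := map[string]HandlerFactory{}
	for _, tok := range tokens {
		matched := false
		for _, f := range registry {
			if tok == string(CategoryAll) || tok == string(f.Category) ||
				tok == f.JobType || contains(f.Aliases, tok) {
				picked[f.JobType] = f
				matched = true
			}
		}
		if !matched {
			return nil, fmt.Errorf("unknown job type or category: %q", tok)
		}
	}
	out := make([]HandlerFactory, 0, len(picked))
	for _, f := range picked {
		out = append(out, f)
	}
	sort.Slice(out, func(i, j int) bool { return out[i].JobType < out[j].JobType })
	return out, nil
}

func contains(ss []string, s string) bool {
	for _, v := range ss {
		if v == s {
			return true
		}
	}
	return false
}

func main() {
	Register(HandlerFactory{"vacuum", CategoryDefault, []string{"vol.vacuum"}})
	Register(HandlerFactory{"erasure_coding", CategoryHeavy, []string{"ec"}})
	fs, _ := ResolveHandlerFactories([]string{"default", "ec"})
	for _, f := range fs {
		fmt.Println(f.JobType)
	}
}
```

Because resolution dedupes by canonical job type, mixing tokens like "default" with an alias such as "ec" composes naturally, and a worker configured with "all" automatically picks up any handler added later via init().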
72c2c7ef8b |
Add iceberg_maintenance plugin worker handler (Phase 1) (#8501)
* Add iceberg_maintenance plugin worker handler (Phase 1)
Implement automated Iceberg table maintenance as a new plugin worker job type. The handler scans S3 table buckets for tables needing maintenance and executes operations in the correct Iceberg order: expire snapshots, remove orphan files, and rewrite manifests.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Fix unsafe int64→int narrowing for MaxSnapshotsToKeep
Use int64(wouldKeep) instead of int(config.MaxSnapshotsToKeep) to avoid potential truncation on 32-bit platforms (CodeQL high severity).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Fix unsafe int64→int narrowing for MinInputFiles
Use int64(len(manifests)) instead of int(config.MinInputFiles) to avoid potential truncation on 32-bit platforms (CodeQL high severity).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Fix unsafe int64→int narrowing for MaxCommitRetries
Clamp MaxCommitRetries to [1,20] range and keep as int64 throughout the retry loop to avoid truncation on 32-bit platforms (CodeQL high severity).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Sort snapshots explicitly by timestamp in expireSnapshots
The previous logic relied on implicit ordering of the snapshot list. Now explicitly sorts snapshots by timestamp descending (most recent first) and uses a simpler keep-count loop: keep the first MaxSnapshotsToKeep newest snapshots plus the current snapshot unconditionally, then expire the rest that exceed the retention window.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Handle errors properly in listFilerEntries
Previously all errors from ListEntries and Recv were silently swallowed. Now: treat "not found" errors as empty directory, propagate other ListEntries errors, and check for io.EOF explicitly on Recv instead of breaking on any error.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Fix overly broad HasSuffix check in orphan detection
The bare strings.HasSuffix(ref, entry.Name) could match files with similar suffixes (e.g. "123.avro" matching "snap-123.avro"). Replaced with exact relPath match and a "/"-prefixed suffix check to avoid false positives.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Replace fmt.Sscanf with strconv.Atoi in extractMetadataVersion
strconv.Atoi is more explicit and less fragile than fmt.Sscanf for parsing a simple integer from a trimmed string.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Recursively traverse directories for orphan file detection
The orphan cleanup only listed a single directory level under data/ and metadata/, skipping IsDirectory entries. Partitioned Iceberg tables store data files in nested partition directories (e.g. data/region=us-east/file.parquet) which were never evaluated. Add walkFilerEntries helper that recursively descends into subdirectories, and use it in removeOrphans so all nested files are considered for orphan checks.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Fix manifest path drift from double time.Now() calls
rewriteManifests called time.Now().UnixMilli() twice: once for the path embedded in WriteManifest and once for the filename passed to saveFilerFile. These timestamps would differ, causing the manifest's internal path reference to not match the actual saved filename. Compute the filename once and reuse it for both WriteManifest and saveFilerFile so they always reference the same path.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Add TestManifestRewritePathConsistency test
Verifies that WriteManifest returns a ManifestFile whose FilePath() matches the path passed in, and that path.Base() of that path matches the filename used for saveFilerFile. This validates the single-timestamp pattern used in rewriteManifests produces consistent paths.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Make parseOperations return error on unknown operations
Previously parseOperations silently dropped unknown operation names and could return an empty list. Now validates inputs against the canonical set and returns a clear error if any unknown operation is specified. Updated Execute to surface the error instead of proceeding with an empty operation list.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Use gRPC status codes instead of string matching in listFilerEntries
Replace brittle strings.Contains(err.Error(), "not found") check with status.Code(err) == codes.NotFound for proper gRPC error handling.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Add stale-plan guard in commit closures for expireSnapshots and rewriteManifests
Both operations plan outside the commit mutation using a snapshot ID captured from the initial metadata read. If the table head advances concurrently, the mutation would create a snapshot parented to the wrong head or remove snapshots based on a stale view. Add a guard inside each mutation closure that verifies currentMeta.CurrentSnapshot().SnapshotID still matches the planned snapshot ID. If it differs, return errStalePlan which propagates immediately (not retried, since the plan itself is invalid). Also fix rewriteManifests to derive SequenceNumber from the fresh metadata (cs.SequenceNumber) instead of the captured currentSnap.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Add compare-and-swap to updateTableMetadataXattr
updateTableMetadataXattr previously re-read the entry but did not verify the metadataVersion matched what commitWithRetry had loaded. A concurrent update could be silently clobbered. Now accepts expectedVersion parameter and compares it against the stored metadataVersion before writing.
Returns errMetadataVersionConflict on mismatch, which commitWithRetry treats as retryable (deletes the staged metadata file and retries with fresh state).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Export shared plugin worker helpers for use by sub-packages
Export ShouldSkipDetectionByInterval, BuildExecutorActivity, and BuildDetectorActivity so the iceberg sub-package can reuse them without duplicating logic.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Refactor iceberg maintenance handler into weed/plugin/worker/iceberg package
Split the 1432-line iceberg_maintenance_handler.go into focused files in a new iceberg sub-package: handler.go, config.go, detection.go, operations.go, filer_io.go, and compact.go (Phase 2 data compaction). Key changes:
- Rename types to drop stutter (IcebergMaintenanceHandler → Handler, etc.)
- Fix loadFileByIcebergPath to preserve nested directory paths via normalizeIcebergPath instead of path.Base which dropped subdirectories
- Check SendProgress errors instead of discarding them
- Add stale-plan guard to compactDataFiles commitWithRetry closure
- Add "compact" operation to parseOperations canonical order
- Duplicate readStringConfig/readInt64Config helpers (~20 lines)
- Update worker_runtime.go to import new iceberg sub-package
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Remove iceberg_maintenance from default plugin worker job types
Iceberg maintenance is not yet ready to be enabled by default. Workers can still opt in by explicitly listing iceberg_maintenance in their job types configuration.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Clamp config values to safe minimums in ParseConfig
Prevents misconfiguration by enforcing minimum values using the default constants for all config fields.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Harden filer I/O: path helpers, strict CAS guard, path traversal prevention
- Use path.Dir/path.Base instead of strings.SplitN in loadCurrentMetadata
- Make CAS guard error on missing or unparseable metadataVersion
- Add path.Clean and traversal validation in loadFileByIcebergPath
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Fix compact: single snapshot ID, oversized bin splitting, ensureFilerDir
- Use single newSnapID for all manifest entries in a compaction run
- Add splitOversizedBin to break bins exceeding targetSize
- Make ensureFilerDir only create on NotFound, propagate other errors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Add wildcard filters, scan limit, and context cancellation to table scanning
- Use wildcard matchers (*, ?) for bucket/namespace/table filters
- Add limit parameter to scanTablesForMaintenance for early termination
- Add ctx.Done() checks in bucket and namespace scan loops
- Update filter UI descriptions and placeholders for wildcard support
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Remove dead detection interval check and validate namespace parameter
- Remove ineffective ShouldSkipDetectionByInterval call with hardcoded 0
- Add namespace to required parameter validation in Execute
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Improve operations: exponential backoff, orphan matching, full file cleanup
- Use exponential backoff (50ms, 100ms, 200ms, ...) in commitWithRetry
- Use normalizeIcebergPath for orphan matching instead of fragile suffix check
- Add collectSnapshotFiles to traverse manifest lists → manifests → data files
- Delete all unreferenced files after expiring snapshots, not just manifest lists
- Refactor removeOrphans to reuse collectSnapshotFiles
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* iceberg: fix ensureFilerDir to handle filer_pb.ErrNotFound sentinel
filer_pb.LookupEntry converts gRPC NotFound errors to filer_pb.ErrNotFound (a plain sentinel), so status.Code() never returns codes.NotFound for that error. This caused ensureFilerDir to return an error instead of creating the directory when it didn't exist.
* iceberg: clean up orphaned artifacts when compaction commit fails
Track all files written during compaction (merged data files, manifest, manifest list) and delete them if the commit or any subsequent write step fails, preventing orphaned files from accumulating in the filer.
* iceberg: derive tablePath from namespace/tableName when empty
An empty table_path parameter would be passed to maintenance operations unchecked. Default it to path.Join(namespace, tableName) when not provided.
* iceberg: make collectSnapshotFiles return error on read/parse failure
Previously, errors reading manifests were logged and skipped, returning a partial reference set. This could cause incorrect delete decisions during snapshot expiration or orphan cleanup. Now the function returns an error and all callers abort when reference data is incomplete.
* iceberg: include active metadata file in removeOrphans referenced set
The metadataFileName returned by loadCurrentMetadata was discarded, so the active metadata file could be incorrectly treated as an orphan and deleted. Capture it and add it to the referencedFiles map.
* iceberg: only retry commitWithRetry on metadata version conflicts
Previously all errors from updateTableMetadataXattr triggered retries.
Now only errMetadataVersionConflict causes retry; other errors (permissions, transport, malformed xattr) fail immediately.
* iceberg: respect req.Limit in fakeFilerServer.ListEntries mock
The mock ListEntries ignored the Limit field, so tests couldn't exercise pagination. Now it stops streaming once Limit entries have been sent.
* iceberg: validate parquet schema compatibility before merging files
mergeParquetFiles now compares each source file's schema against the first file's schema and aborts with a clear error if they differ, instead of blindly writing rows that could panic or produce corrupt output.
* iceberg: normalize empty JobType to canonical jobType in Execute events
When request.Job.JobType is empty, status events and completion messages were emitted with a blank job type. Derive a canonical value early and use it consistently in all outbound events.
* iceberg: log warning on unexpected config value types in read helpers
readStringConfig and readInt64Config now log a V(1) warning when they encounter an unhandled ConfigValue kind, aiding debugging of unexpected config types that silently fall back to defaults.
* worker: add iceberg_maintenance to default plugin worker job types
Workers using the default job types list didn't advertise the iceberg_maintenance handler despite the handler and canonical name being registered. Add it so workers pick up the handler by default.
* iceberg: use defer and detached context for compaction artifact cleanup
The cleanup closure used the job context which could already be canceled, and was not called on ctx.Done() early exits. Switch to a deferred cleanup with a detached context (30s timeout) so artifact deletion completes on all exit paths including context cancellation.
* iceberg: use proportional jitter in commitWithRetry backoff
Fixed 25ms max jitter becomes insignificant at higher retry attempts. Use 0-20% of the current backoff value instead so jitter scales with the exponential delay.
* iceberg: add malformed filename cases to extractMetadataVersion test
Cover edge cases like "invalid.metadata.json", "metadata.json", "", and "v.metadata.json" to ensure the function returns 0 for unparseable inputs.
* iceberg: fail compaction on manifest read errors and skip delete manifests
Previously, unreadable manifests were silently skipped during compaction, which could drop live files from the entry set. Now manifest read/parse errors are returned as fatal errors. Also abort compaction when delete manifests exist since the compactor does not apply deletes — carrying them through unchanged could produce incorrect results.
* iceberg: use table-relative path for active metadata file in orphan scan
metadataFileName was stored as a basename (e.g. "v1.metadata.json") but the orphan scanner matches against table-relative paths like "metadata/v1.metadata.json". Prefix with "metadata/" so the active metadata file is correctly recognized as referenced.
* iceberg: fix MetadataBuilderFromBase location to use metadata file path
The second argument to MetadataBuilderFromBase records the previous metadata file in the metadata log. Using meta.Location() (the table root) was incorrect — it must be the actual metadata file path so old metadata files can be tracked and eventually cleaned up.
* iceberg: update metadataLocation and versionToken in xattr on commit
updateTableMetadataXattr was only updating metadataVersion, modifiedAt, and fullMetadata but not metadataLocation or versionToken. This left catalog state inconsistent after maintenance commits — the metadataLocation still pointed to the old metadata file and the versionToken was stale. Add a newMetadataLocation parameter and regenerate the versionToken on every commit, matching the S3 Tables handler behavior.
* iceberg: group manifest entries by partition spec in rewriteManifests
rewriteManifests was writing all entries into a single manifest using the table's current partition spec.
For spec-evolved tables where manifests reference different partition specs, this produces an invalid manifest. Group entries by the source manifest's PartitionSpecID and write one merged manifest per spec, looking up each spec from the table's PartitionSpecs list.
* iceberg: remove dead code loop for non-data manifests in compaction
The early abort guard at the top of compactDataFiles already ensures no delete manifests are present. The loop that copied non-data manifests into allManifests was unreachable dead code.
* iceberg: use JSON encoding in partitionKey for unambiguous grouping
partitionKey used fmt.Sprintf("%d=%v") joined by commas, which produces ambiguous keys when partition values contain commas or '='. Use json.Marshal for values and NUL byte as separator to eliminate collisions.
* iceberg: precompute normalized reference set in removeOrphans
The orphan check was O(files × refs) because it normalized each reference path inside the per-file loop. Precompute the normalized set once for O(1) lookups per candidate file.
* iceberg: add artifact cleanup to rewriteManifests on commit failure
rewriteManifests writes merged manifests and a manifest list to the filer before committing but did not clean them up on failure. Add the same deferred cleanup pattern used by compactDataFiles: track written artifacts and delete them if the commit does not succeed.
* iceberg: pass isDeleteData=true in deleteFilerFile
deleteFilerFile called DoRemove with isDeleteData=false, which only removed filer metadata and left chunk data behind on volume servers. All other data-file deletion callers in the codebase pass true.
* iceberg: clean up test: remove unused snapID, simplify TestDetectWithFakeFiler
Remove unused snapID variable and eliminate the unnecessary second fake filer + entry copy in TestDetectWithFakeFiler by capturing the client from the first startFakeFiler call.
* fix: update TestWorkerDefaultJobTypes to expect 5 job types
The test expected 4 default job types but iceberg_maintenance was added as a 5th default in a previous commit.
* iceberg: document client-side CAS TOCTOU limitation in updateTableMetadataXattr
Add a note explaining the race window where two workers can both pass the version check and race at UpdateEntry. The proper fix requires server-side precondition support on UpdateEntryRequest.
* iceberg: remove unused sender variable in TestFullExecuteFlow
* iceberg: abort compaction when multiple partition specs are present
The compactor writes all entries into a single manifest using the current partition spec, which is invalid for spec-evolved tables. Detect multiple PartitionSpecIDs and skip compaction until per-spec compaction is implemented.
* iceberg: validate tablePath to prevent directory traversal
Sanitize the table_path parameter with path.Clean and verify it matches the expected namespace/tableName prefix to prevent path traversal attacks via crafted job parameters.
* iceberg: cap retry backoff at 5s and make it context-aware
The exponential backoff could grow unbounded and blocked on time.Sleep ignoring context cancellation. Cap at 5s and use a timer with select on ctx.Done so retries respect cancellation.
* iceberg: write manifest list with new snapshot identity in rewriteManifests
The manifest list was written with the old snapshot's ID and sequence number, but the new snapshot created afterwards used a different identity. Compute newSnapshotID and newSeqNum before writing manifests and the manifest list so all artifacts are consistent.
* ec: also remove .vif file in removeEcVolumeFiles
removeEcVolumeFiles cleaned up .ecx, .ecj, and shard files but not the .vif volume info file, leaving it orphaned. The .vif file lives in the data directory alongside shard files.
The directory handling for index vs data files was already correct: .ecx/.ecj are removed from IdxDirectory and shard files from Directory, matching how NewEcVolume loads them.
Revert "ec: also remove .vif file in removeEcVolumeFiles"
This reverts commit acc82449e12a00115268a5652aef0d6c46d9f2dd.
* iceberg: skip orphan entries with nil Attributes instead of defaulting to epoch
When entry.Attributes is nil, mtime defaulted to Unix epoch (1970), making unknown-age entries appear ancient and eligible for deletion. Skip these entries instead to avoid deleting files whose age cannot be determined.
* iceberg: use unique metadata filenames to prevent concurrent write clobbering
Add timestamp nonce to metadata filenames (e.g. v3-1709766000.metadata.json) so concurrent writers stage to distinct files. Update extractMetadataVersion to strip the nonce suffix, and loadCurrentMetadata to read the actual filename from the metadataLocation xattr field.
* iceberg: defer artifact tracking until data file builder succeeds
Move the writtenArtifacts append to after NewDataFileBuilder succeeds, so a failed builder doesn't leave a stale entry for an already-deleted file in the cleanup list.
* iceberg: use detached context for metadata file cleanup
Use context.WithTimeout(context.Background(), 10s) when deleting staged metadata files after CAS failure, so cleanup runs even if the original request context is canceled.
* test: update default job types count to include iceberg_maintenance
* iceberg: use parquet.EqualNodes for structural schema comparison
Replace String()-based schema comparison with parquet.EqualNodes which correctly compares types, repetition levels, and logical types.
* iceberg: add nonce-suffixed filename cases to TestExtractMetadataVersion
* test: assert iceberg_maintenance is present in default job types
* iceberg: validate operations config early in Detect
Call parseOperations in Detect so typos in the operations config fail fast before emitting proposals, matching the validation already done in Execute.
* iceberg: detect chunked files in loadFileByIcebergPath
Return an explicit error when a file has chunks but no inline content, rather than silently returning empty data. Data files uploaded via S3 are stored as chunks, so compaction would otherwise produce corrupt merged files.
---------
Co-authored-by: Copilot <copilot@github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> |
45ce18266a |
Disable master maintenance scripts when admin server runs (#8499)
* Disable master maintenance scripts when admin server runs
* Stop defaulting master maintenance scripts
* Apply suggestion from @gemini-code-assist[bot]
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Apply suggestion from @gemini-code-assist[bot]
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Clarify master scripts are disabled by default
* Skip master maintenance scripts when admin server is connected
* Restore default master maintenance scripts
* Document admin server skip for master maintenance scripts
---------
Co-authored-by: Copilot <copilot@github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |
e1e5b4a8a6 |
add admin script worker (#8491)
* admin: add plugin lock coordination
* shell: allow bypassing lock checks
* plugin worker: add admin script handler
* mini: include admin_script in plugin defaults
* admin script UI: drop name and enlarge text
* admin script: add default script
* admin_script: make run interval configurable
* plugin: gate other jobs during admin_script runs
* plugin: use last completed admin_script run
* admin: backfill plugin config defaults
* templ
Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* comparable to default version
Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* default to run
Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* format
Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* shell: respect pre-set noLock for fix.replication
* shell: add force no-lock mode for admin scripts
* volume balance worker already exists
Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* admin: expose scheduler status JSON
* shell: add sleep command
* shell: restrict sleep syntax
* Revert "shell: respect pre-set noLock for fix.replication"
This reverts commit 2b14e8b82602a740d3a473c085e3b3a14f1ddbb3.
* templ
Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* fix import
Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* less logs
Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* Reduce master client logs on canceled contexts
* Update mini default job type count
---------
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> |
f5c35240be |
Add volume dir tags and EC placement priority (#8472)
* Add volume dir tags to topology
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add preferred tag config for EC
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Prioritize EC destinations by tags
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add EC placement planner tag tests
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Refactor EC placement tests to reuse buildActiveTopology
Remove buildActiveTopologyWithDiskTags helper function and consolidate tag setup inline in test cases. Tests now use UpdateTopology to apply tags after topology creation, reusing the existing buildActiveTopology function rather than duplicating its logic. All tag scenario tests pass:
- TestECPlacementPlannerPrefersTaggedDisks
- TestECPlacementPlannerFallsBackWhenTagsInsufficient
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Consolidate normalizeTagList into shared util package
Extract normalizeTagList from three locations (volume.go, detection.go, erasure_coding_handler.go) into new weed/util/tag.go as exported NormalizeTagList function. Replace all duplicate implementations with imports and calls to util.NormalizeTagList. This improves code reuse and maintainability by centralizing tag normalization logic.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add PreferredTags to EC config persistence
Add preferred_tags field to ErasureCodingTaskConfig protobuf with field number 5. Update GetConfigSpec to include preferred_tags field in the UI configuration schema. Add PreferredTags to ToTaskPolicy to serialize config to protobuf. Add PreferredTags to FromTaskPolicy to deserialize from protobuf with defensive copy to prevent external mutation. This allows EC preferred tags to be persisted and restored across worker restarts.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add defensive copy for Tags slice in DiskLocation
Copy the incoming tags slice in NewDiskLocation instead of storing by reference. This prevents external callers from mutating the DiskLocation.Tags slice after construction, improving encapsulation and preventing unexpected changes to disk metadata.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add doc comment to buildCandidateSets method
Document the tiered candidate selection and fallback behavior. Explain that for a planner with preferredTags, it accumulates disks matching each tag in order into progressively larger tiers, emits a candidate set once a tier reaches shardsNeeded, and finally falls back to the full candidates set if preferred-tag tiers are insufficient. This clarifies the intended semantics for future maintainers.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Apply final PR review fixes
1. Update parseVolumeTags to replicate single tag entry to all folders instead of leaving some folders with nil tags. This prevents nil pointer dereferences when processing folders without explicit tags.
2. Add defensive copy in ToTaskPolicy for PreferredTags slice to match the pattern used in FromTaskPolicy, preventing external mutation of the returned TaskPolicy.
3. Add clarifying comment in buildCandidateSets explaining that the shardsNeeded <= 0 branch is a defensive check for direct callers, since selectDestinations guarantees shardsNeeded > 0.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix nil pointer dereference in parseVolumeTags
Ensure all folder tags are initialized to either normalized tags or empty slices, not nil. When multiple tag entries are provided and there are more folders than entries, remaining folders now get empty slices instead of nil, preventing nil pointer dereference in downstream code.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix NormalizeTagList to return empty slice instead of nil
Change NormalizeTagList to always return a non-nil slice. When all tags are empty or whitespace after normalization, return an empty slice instead of nil. This prevents nil pointer dereferences in downstream code that expects a valid (possibly empty) slice.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add nil safety check for v.tags pointer
Add a safety check to handle the case where v.tags might be nil, preventing a nil pointer dereference. If v.tags is nil, use an empty string instead. This is defensive programming to prevent panics in edge cases.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add volume.tags flag to weed server and weed mini commands
Add the volume.tags CLI option to both the 'weed server' and 'weed mini' commands. This allows users to specify disk tags when running the combined server modes, just like they can with 'weed volume'. The flag uses the same format and description as the volume command: comma-separated tag groups per data dir with ':' separators (e.g. fast:ssd,archive).
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---------
Co-authored-by: Copilot <copilot@github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> |
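The tag-normalization and per-folder parsing behavior described above (trim whitespace, drop empties, always return a non-nil slice, replicate a single tag group to all folders) can be sketched as follows. This is an illustrative approximation of util.NormalizeTagList and parseVolumeTags, not the actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// NormalizeTagList trims whitespace, drops empty entries, and always
// returns a non-nil slice so downstream code never dereferences nil.
func NormalizeTagList(tags []string) []string {
	out := make([]string, 0, len(tags))
	for _, t := range tags {
		if t = strings.TrimSpace(t); t != "" {
			out = append(out, t)
		}
	}
	return out
}

// parseVolumeTags sketches parsing of the -volume.tags flag: comma-separated
// tag groups per data dir with ':' separators (e.g. "fast:ssd,archive").
// A single group is replicated to all folders; missing groups become
// empty (non-nil) slices, per the fixes described above.
func parseVolumeTags(spec string, folderCount int) [][]string {
	groups := strings.Split(spec, ":")
	out := make([][]string, folderCount)
	for i := range out {
		switch {
		case len(groups) == 1:
			out[i] = NormalizeTagList(strings.Split(groups[0], ","))
		case i < len(groups):
			out[i] = NormalizeTagList(strings.Split(groups[i], ","))
		default:
			out[i] = []string{} // empty, never nil
		}
	}
	return out
}

func main() {
	fmt.Println(parseVolumeTags("fast:ssd,archive", 3))
}
```

Returning `[]string{}` rather than `nil` in every branch is the point of the two nil-safety commits: callers can index and range over the result without a guard.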
||
|
|
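The two slice-hygiene fixes above reduce to small Go patterns: copy the incoming slice in the constructor, and always allocate a non-nil result from normalization. A minimal sketch with simplified types, not the actual SeaweedFS definitions:

```go
package main

import (
	"fmt"
	"strings"
)

// NormalizeTagList trims and lowercases tags and always returns a non-nil
// slice, so downstream code never needs a nil check. Illustrative sketch;
// the real function lives in the SeaweedFS source.
func NormalizeTagList(tags []string) []string {
	normalized := make([]string, 0, len(tags)) // non-nil even when empty
	for _, t := range tags {
		t = strings.ToLower(strings.TrimSpace(t))
		if t != "" {
			normalized = append(normalized, t)
		}
	}
	return normalized
}

// DiskLocation stores a defensive copy of the tags passed in, so callers
// mutating their own slice afterwards cannot change disk metadata.
type DiskLocation struct {
	Tags []string
}

func NewDiskLocation(tags []string) *DiskLocation {
	copied := make([]string, len(tags))
	copy(copied, tags)
	return &DiskLocation{Tags: copied}
}

func main() {
	tags := []string{" Fast ", "ssd"}
	loc := NewDiskLocation(NormalizeTagList(tags))
	tags[0] = "mutated" // does not affect loc.Tags
	fmt.Println(loc.Tags)
}
```

The `make`/`copy` pair is the idiomatic defensive copy in Go; returning `make([]string, 0, n)` instead of a `nil` var is what makes the "empty slice, not nil" guarantee hold.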
4f647e1036
Worker set its working directory (#8461)

* set working directory
* consolidate to worker directory
* working directory
* correct directory name
* refactoring to use wildcard matcher
* simplify
* cleaning ec working directory
* fix reference
* clean
* adjust test
3d81d5bef7
Fix S3 signature verification behind reverse proxies (#8444)

* Fix S3 signature verification behind reverse proxies

  When SeaweedFS is deployed behind a reverse proxy (e.g. nginx, Kong, Traefik), AWS S3 Signature V4 verification fails because the Host header the client signed with (e.g. "localhost:9000") differs from the Host header SeaweedFS receives on the backend (e.g. "seaweedfs:8333").

  This commit adds a new -s3.externalUrl parameter (and S3_EXTERNAL_URL environment variable) that tells SeaweedFS what public-facing URL clients use to connect. When set, SeaweedFS uses this host value for signature verification instead of the Host header from the incoming request.

  New parameter: -s3.externalUrl (flag) or S3_EXTERNAL_URL (environment variable)
  Example: -s3.externalUrl=http://localhost:9000
  Example: S3_EXTERNAL_URL=https://s3.example.com

  The environment variable is particularly useful in Docker/Kubernetes deployments where the external URL is injected via container config. The flag takes precedence over the environment variable when both are set.

  At startup, the URL is parsed and default ports are stripped to match AWS SDK behavior (port 80 for HTTP, port 443 for HTTPS), so "http://s3.example.com:80" and "http://s3.example.com" are equivalent.

  Bugs fixed:
  - Default port stripping was removed by a prior PR, causing signature mismatches when clients connect on standard ports (80/443)
  - X-Forwarded-Port was ignored when X-Forwarded-Host was not present
  - Scheme detection now uses proper precedence: X-Forwarded-Proto > TLS connection > URL scheme > "http"
  - Test expectations for standard port stripping were incorrect
  - expectedHost field in TestSignatureV4WithForwardedPort was declared but never actually checked (self-referential test)

* Add Docker integration test for S3 proxy signature verification

  Docker Compose setup with nginx reverse proxy to validate that the -s3.externalUrl parameter (or S3_EXTERNAL_URL env var) correctly resolves S3 signature verification when SeaweedFS runs behind a proxy.

  The test uses nginx proxying port 9000 to SeaweedFS on port 8333, with X-Forwarded-Host/Port/Proto headers set. SeaweedFS is configured with -s3.externalUrl=http://localhost:9000 so it uses "localhost:9000" for signature verification, matching what the AWS CLI signs with. The test can be run with the aws CLI on the host, or without it by using the amazon/aws-cli Docker image with --network host.

  Test covers: create-bucket, list-buckets, put-object, head-object, list-objects-v2, get-object, content round-trip integrity, delete-object, and delete-bucket — all through the reverse proxy.

* Create s3-proxy-signature-tests.yml
* fix CLI
* fix CI
* Update s3-proxy-signature-tests.yml
* address comments
* Update Dockerfile
* add user
* no need for fuse
* Update s3-proxy-signature-tests.yml
* debug
* weed mini
* fix health check
* health check
* fix health checking

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
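The default-port stripping described in this commit can be sketched with the standard library alone. This is a hedged illustration of the rule (strip :80 for http, :443 for https, keep everything else), with a hypothetical helper name rather than SeaweedFS's actual parsing code:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// canonicalHost strips the scheme's default port, matching AWS SDK behavior
// so "http://s3.example.com:80" and "http://s3.example.com" sign identically.
// Sketch only; the real startup parsing lives in the S3 server command.
func canonicalHost(externalURL string) (string, error) {
	u, err := url.Parse(externalURL)
	if err != nil {
		return "", err
	}
	host := u.Host
	switch {
	case u.Scheme == "http" && strings.HasSuffix(host, ":80"):
		host = strings.TrimSuffix(host, ":80") // default HTTP port
	case u.Scheme == "https" && strings.HasSuffix(host, ":443"):
		host = strings.TrimSuffix(host, ":443") // default HTTPS port
	}
	return host, nil
}

func main() {
	for _, s := range []string{
		"http://s3.example.com:80",
		"https://s3.example.com:443",
		"http://localhost:9000",
	} {
		h, _ := canonicalHost(s)
		fmt.Println(h)
	}
}
```

Non-default ports such as :9000 must survive untouched, since the client includes them in the signed Host header.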
d2b92938ee
Make EC detection context aware (#8449)

* Make EC detection context aware
* Update register.go
* Speed up EC detection planning
* Add tests for EC detection planner
* optimizations

  detection.go: extracted ParseCollectionFilter (exported) and fed it into the detection loop so both detection and tracing share the same parsing/whitelisting logic; the detection loop now iterates over a sorted list of volume IDs, checks the context at every iteration, and only sets hasMore when there are still unprocessed groups after hitting maxResults, keeping runtime bounded while still scheduling planned tasks before returning the results.

  erasure_coding_handler.go: dropped the duplicated inline filter parsing in emitErasureCodingDetectionDecisionTrace to reuse erasurecodingtask.ParseCollectionFilter; the summary suffix logic now only accounts for the hasMore case that can actually happen.

  detection_test.go: updated the helper topology builder to use master_pb.VolumeInformationMessage (matching the current protobuf types) and tightened the cancellation/max-results tests so they reliably exercise the detection logic (cancel before calling Detection, and provide enough disks so one result is produced before the limit).

* use working directory
* fix compilation
* fix compilation
* rename
* go vet
* fix getenv
* address comments, fix error
7f6e58b791
Fix SFTP file upload failures with JWT filer tokens (#8448)

* Fix SFTP file upload failures with JWT filer tokens (issue #8425)

  When JWT authentication is enabled for filer operations via jwt.filer_signing.* configuration, SFTP server file upload requests were rejected because they lacked JWT authorization headers.

  Changes:
  - Added JWT signing key and expiration fields to SftpServer struct
  - Modified putFile() to generate and include JWT tokens in upload requests
  - Enhanced SFTPServiceOptions with JWT configuration fields
  - Updated SFTP command startup to load and pass JWT config to service

  This allows SFTP uploads to authenticate with JWT-enabled filers, consistent with how other SeaweedFS components (S3 API, file browser) handle filer auth.

  Fixes #8425

* Apply suggestion from @gemini-code-assist[bot]

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
427c975ff3
fix(plugin/worker): make VacuumHandler report MaxExecutionConcurrency from worker startup flag (#8435)

* fix(plugin/worker): make VacuumHandler report MaxExecutionConcurrency from worker startup flag

  Previously, MaxExecutionConcurrency was hardcoded to 2 in VacuumHandler.Capability(). The scheduler's schedulerWorkerExecutionLimit() takes the minimum of the UI-configured PerWorkerExecutionConcurrency and the worker-reported capability limit, so the hardcoded value silently capped each worker to 2 concurrent vacuum executions regardless of the --max-execute flag passed at worker startup.

  Pass maxExecutionConcurrency into NewVacuumHandler() and wire it through buildPluginWorkerHandler/buildPluginWorkerHandlers so the capability reflects the actual worker configuration. The default falls back to 2 when the value is unset or zero.

* Update weed/command/worker_runtime.go

---------

Co-authored-by: Anton Ustyugov <anton@devops>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
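The scheduler rule described above (effective limit = min of the UI-configured limit and the worker-reported capability, with unset values falling back to a default of 2) can be sketched in a few lines. The helper name is hypothetical; the real logic lives in schedulerWorkerExecutionLimit():

```go
package main

import "fmt"

const defaultVacuumConcurrency = 2

// effectiveLimit takes the minimum of the UI-configured per-worker limit
// and the worker-reported capability, treating unset (<= 0) values as the
// default. Illustrative sketch of the rule described in the commit.
func effectiveLimit(uiConfigured, workerReported int) int {
	if uiConfigured <= 0 {
		uiConfigured = defaultVacuumConcurrency
	}
	if workerReported <= 0 {
		workerReported = defaultVacuumConcurrency
	}
	if workerReported < uiConfigured {
		return workerReported
	}
	return uiConfigured
}

func main() {
	fmt.Println(effectiveLimit(4, 8)) // worker allows more than the UI: UI wins
	fmt.Println(effectiveLimit(4, 0)) // worker unset: capability defaults to 2
}
```

With the old hardcoded capability of 2, the first call would have returned 2 no matter what --max-execute said, which is exactly the silent cap the commit removes.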
8d59ef41d5
Admin UI: replace gin with mux (#8420)

* Replace admin gin router with mux
* Update layout_templ.go
* Harden admin handlers
* Add login CSRF handling
* Fix filer copy naming conflict
* address comments
* address comments
e596542295
Move SQL engine and PostgreSQL server to their own binaries (#8417)

* Drop SQL engine and PostgreSQL server
* Split SQL tooling into weed-db and weed-sql
* move
* fix building
e4b70c2521
go fix
8ec9ff4a12
Refactor plugin system and migrate worker runtime (#8369)

* admin: add plugin runtime UI page and route wiring
* pb: add plugin gRPC contract and generated bindings
* admin/plugin: implement worker registry, runtime, monitoring, and config store
* admin/dash: wire plugin runtime and expose plugin workflow APIs
* command: add flags to enable plugin runtime
* admin: rename remaining plugin v2 wording to plugin
* admin/plugin: add detectable job type registry helper
* admin/plugin: add scheduled detection and dispatch orchestration
* admin/plugin: prefetch job type descriptors when workers connect
* admin/plugin: add known job type discovery API and UI
* admin/plugin: refresh design doc to match current implementation
* admin/plugin: enforce per-worker scheduler concurrency limits
* admin/plugin: use descriptor runtime defaults for scheduler policy
* admin/ui: auto-load first known plugin job type on page open
* admin/plugin: bootstrap persisted config from descriptor defaults
* admin/plugin: dedupe scheduled proposals by dedupe key
* admin/ui: add job type and state filters for plugin monitoring
* admin/ui: add per-job-type plugin activity summary
* admin/plugin: split descriptor read API from schema refresh
* admin/ui: keep plugin summary metrics global while tables are filtered
* admin/plugin: retry executor reservation before timing out
* admin/plugin: expose scheduler states for monitoring
* admin/ui: show per-job-type scheduler states in plugin monitor
* pb/plugin: rename protobuf package to plugin
* admin/plugin: rename pluginRuntime wiring to plugin
* admin/plugin: remove runtime naming from plugin APIs and UI
* admin/plugin: rename runtime files to plugin naming
* admin/plugin: persist jobs and activities for monitor recovery
* admin/plugin: lease one detector worker per job type
* admin/ui: show worker load from plugin heartbeats
* admin/plugin: skip stale workers for detector and executor picks
* plugin/worker: add plugin worker command and stream runtime scaffold
* plugin/worker: implement vacuum detect and execute handlers
* admin/plugin: document external vacuum plugin worker starter
* command: update plugin.worker help to reflect implemented flow
* command/admin: drop legacy Plugin V2 label
* plugin/worker: validate vacuum job type and respect min interval
* plugin/worker: test no-op detect when min interval not elapsed
* command/admin: document plugin.worker external process
* plugin/worker: advertise configured concurrency in hello
* command/plugin.worker: add jobType handler selection
* command/plugin.worker: test handler selection by job type
* command/plugin.worker: persist worker id in workingDir
* admin/plugin: document plugin.worker jobType and workingDir flags
* plugin/worker: support cancel request for in-flight work
* plugin/worker: test cancel request acknowledgements
* command/plugin.worker: document workingDir and jobType behavior
* plugin/worker: emit executor activity events for monitor
* plugin/worker: test executor activity builder
* admin/plugin: send last successful run in detection request
* admin/plugin: send cancel request when detect or execute context ends
* admin/plugin: document worker cancel request responsibility
* admin/handlers: expose plugin scheduler states API in no-auth mode
* admin/handlers: test plugin scheduler states route registration
* admin/plugin: keep worker id on worker-generated activity records
* admin/plugin: test worker id propagation in monitor activities
* admin/dash: always initialize plugin service
* command/admin: remove plugin enable flags and default to enabled
* admin/dash: drop pluginEnabled constructor parameter
* admin/plugin UI: stop checking plugin enabled state
* admin/plugin: remove docs for plugin enable flags
* admin/dash: remove unused plugin enabled check method
* admin/dash: fallback to in-memory plugin init when dataDir fails
* admin/plugin API: expose worker gRPC port in status
* command/plugin.worker: resolve admin gRPC port via plugin status
* split plugin UI into overview/configuration/monitoring pages
* Update layout_templ.go
* add volume_balance plugin worker handler
* wire plugin.worker CLI for volume_balance job type
* add erasure_coding plugin worker handler
* wire plugin.worker CLI for erasure_coding job type
* support multi-job handlers in plugin worker runtime
* allow plugin.worker jobType as comma-separated list
* admin/plugin UI: rename to Workers and simplify config view
* plugin worker: queue detection requests instead of capacity reject
* Update plugin_worker.go
* plugin volume_balance: remove force_move/timeout from worker config UI
* plugin erasure_coding: enforce local working dir and cleanup
* admin/plugin UI: rename admin settings to job scheduling
* admin/plugin UI: persist and robustly render detection results
* admin/plugin: record and return detection trace metadata
* admin/plugin UI: show detection process and decision trace
* plugin: surface detector decision trace as activities
* mini: start a plugin worker by default
* admin/plugin UI: split monitoring into detection and execution tabs
* plugin worker: emit detection decision trace for EC and balance
* admin workers UI: split monitoring into detection and execution pages
* plugin scheduler: skip proposals for active assigned/running jobs
* admin workers UI: add job queue tab
* plugin worker: add dummy stress detector and executor job type
* admin workers UI: reorder tabs to detection queue execution
* admin workers UI: regenerate plugin template
* plugin defaults: include dummy stress and add stress tests
* plugin dummy stress: rotate detection selections across runs
* plugin scheduler: remove cross-run proposal dedupe
* plugin queue: track pending scheduled jobs
* plugin scheduler: wait for executor capacity before dispatch
* plugin scheduler: skip detection when waiting backlog is high
* plugin: add disk-backed job detail API and persistence
* admin ui: show plugin job detail modal from job id links
* plugin: generate unique job ids instead of reusing proposal ids
* plugin worker: emit heartbeats on work state changes
* plugin registry: round-robin tied executor and detector picks
* add temporary EC overnight stress runner
* plugin job details: persist and render EC execution plans
* ec volume details: color data and parity shard badges
* shard labels: keep parity ids numeric and color-only distinction
* admin: remove legacy maintenance UI routes and templates
* admin: remove dead maintenance endpoint helpers
* Update layout_templ.go
* remove dummy_stress worker and command support
* refactor plugin UI to job-type top tabs and sub-tabs
* migrate weed worker command to plugin runtime
* remove plugin.worker command and keep worker runtime with metrics
* update helm worker args for jobType and execution flags
* set plugin scheduling defaults to global 16 and per-worker 4
* stress: fix RPC context reuse and remove redundant variables in ec_stress_runner
* admin/plugin: fix lifecycle races, safe channel operations, and terminal state constants
* admin/dash: randomize job IDs and fix priority zero-value overwrite in plugin API
* admin/handlers: implement buffered rendering to prevent response corruption
* admin/plugin: implement debounced persistence flusher and optimize BuildJobDetail memory lookups
* admin/plugin: fix priority overwrite and implement bounded wait in scheduler reserve
* admin/plugin: implement atomic file writes and fix run record side effects
* admin/plugin: use P prefix for parity shard labels in execution plans
* admin/plugin: enable parallel execution for cancellation tests
* admin: refactor time.Time fields to pointers for better JSON omitempty support
* admin/plugin: implement pointer-safe time assignments and comparisons in plugin core
* admin/plugin: fix time assignment and sorting logic in plugin monitor after pointer refactor
* admin/plugin: update scheduler activity tracking to use time pointers
* admin/plugin: fix time-based run history trimming after pointer refactor
* admin/dash: fix JobSpec struct literal in plugin API after pointer refactor
* admin/view: add D/P prefixes to EC shard badges for UI consistency
* admin/plugin: use lifecycle-aware context for schema prefetching
* Update ec_volume_details_templ.go
* admin/stress: fix proposal sorting and log volume cleanup errors
* stress: refine ec stress runner with math/rand and collection name

  - Added Collection field to VolumeEcShardsDeleteRequest for correct filename construction.
  - Replaced crypto/rand with seeded math/rand PRNG for bulk payloads.
  - Added documentation for EcMinAge zero-value behavior.
  - Added logging for ignored errors in volume/shard deletion.

* admin: return internal server error for plugin store failures

  Changed error status code from 400 Bad Request to 500 Internal Server Error for failures in GetPluginJobDetail to correctly reflect server-side errors.

* admin: implement safe channel sends and graceful shutdown sync

  - Added sync.WaitGroup to Plugin struct to manage background goroutines.
  - Implemented safeSendCh helper using recover() to prevent panics on closed channels.
  - Ensured Shutdown() waits for all background operations to complete.

* admin: robustify plugin monitor with nil-safe time and record init

  - Standardized nil-safe assignment for *time.Time pointers (CreatedAt, UpdatedAt, CompletedAt).
  - Ensured persistJobDetailSnapshot initializes new records correctly if they don't exist on disk.
  - Fixed debounced persistence to trigger immediate write on job completion.

* admin: improve scheduler shutdown behavior and logic guards

  - Replaced brittle error string matching with explicit r.shutdownCh selection for shutdown detection.
  - Removed redundant nil guard in buildScheduledJobSpec.
  - Standardized WaitGroup usage for schedulerLoop.

* admin: implement deep copy for job parameters and atomic write fixes

  - Implemented deepCopyGenericValue and used it in cloneTrackedJob to prevent shared state.
  - Ensured atomicWriteFile creates parent directories before writing.

* admin: remove unreachable branch in shard classification

  Removed an unreachable 'totalShards <= 0' check in classifyShardID as dataShards and parityShards are already guarded.

* admin: secure UI links and use canonical shard constants

  - Added rel="noopener noreferrer" to external links for security.
  - Replaced magic number 14 with erasure_coding.TotalShardsCount.
  - Used renderEcShardBadge for missing shard list consistency.

* admin: stabilize plugin tests and fix regressions

  - Composed a robust plugin_monitor_test.go to handle asynchronous persistence.
  - Updated all time.Time literals to use timeToPtr helper.
  - Added explicit Shutdown() calls in tests to synchronize with debounced writes.
  - Fixed syntax errors and orphaned struct literals in tests.

* Potential fix for code scanning alert no. 278: Slice memory allocation with excessive size value
* Potential fix for code scanning alert no. 283: Uncontrolled data used in path expression
* admin: finalize refinements for error handling, scheduler, and race fixes

  - Standardized HTTP 500 status codes for store failures in plugin_api.go.
  - Tracked scheduled detection goroutines with sync.WaitGroup for safe shutdown.
  - Fixed race condition in safeSendDetectionComplete by extracting channel under lock.
  - Implemented deep copy for JobActivity details.
  - Used defaultDirPerm constant in atomicWriteFile.

* test(ec): migrate admin dockertest to plugin APIs
* admin/plugin_api: fix RunPluginJobTypeAPI to return 500 for server-side detection/filter errors
* admin/plugin_api: fix ExecutePluginJobAPI to return 500 for job execution failures
* admin/plugin_api: limit parseProtoJSONBody request body to 1MB to prevent unbounded memory usage
* admin/plugin: consolidate regex to package-level validJobTypePattern; add char validation to sanitizeJobID
* admin/plugin: fix racy Shutdown channel close with sync.Once
* admin/plugin: track sendLoop and recv goroutines in WorkerStream with r.wg
* admin/plugin: document writeProtoFiles atomicity — .pb is source of truth, .json is human-readable only
* admin/plugin: extract activityLess helper to deduplicate nil-safe OccurredAt sort comparators
* test/ec: check http.NewRequest errors to prevent nil req panics
* test/ec: replace deprecated ioutil/math/rand, fix stale step comment 5.1→3.1
* plugin(ec): raise default detection and scheduling throughput limits
* topology: include empty disks in volume list and EC capacity fallback
* topology: remove hard 10-task cap for detection planning
* Update ec_volume_details_templ.go
* adjust default
* fix tests

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
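Two of the concurrency fixes above recur across this PR: closing a shutdown channel exactly once via sync.Once, and a recover()-based safe send so a racing sender cannot panic the process when the channel is already closed (the same race the earlier plugin sendToWorker fix addresses). A minimal sketch with illustrative names, not the actual Plugin/WorkerStream types:

```go
package main

import (
	"fmt"
	"sync"
)

// session sketches the pattern: a sync.Once guards the close so calling
// close() twice is safe, and safeSend converts the "send on closed channel"
// panic into a false return instead of crashing the process.
type session struct {
	out      chan string
	shutdown sync.Once
}

func (s *session) close() {
	s.shutdown.Do(func() { close(s.out) })
}

// safeSend returns false instead of panicking when s.out is already closed.
func (s *session) safeSend(msg string) (ok bool) {
	defer func() {
		if recover() != nil {
			ok = false // channel was closed concurrently
		}
	}()
	s.out <- msg
	return true
}

func main() {
	s := &session{out: make(chan string, 1)}
	fmt.Println(s.safeSend("hello")) // succeeds: channel open, buffer free
	s.close()
	s.close()                       // second close is a no-op via sync.Once
	fmt.Println(s.safeSend("late")) // fails gracefully: channel closed
}
```

recover() here is a pragmatic containment of an inherent race; the cleaner long-term shape is for a single owner goroutine to do all sends and closes, which the WaitGroup tracking in these commits moves toward.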
3300874cb5
filer: add default log purging to master maintenance scripts (#8359)

* filer: add default log purging to master maintenance scripts
* filer: fix default maintenance scripts to include full set of tasks
* filer: refactor maintenance scripts to avoid duplication
0d8588e3ae
S3: Implement IAM defaults and STS signing key fallback (#8348)

* S3: Implement IAM defaults and STS signing key fallback logic
* S3: Refactor startup order to init SSE-S3 key manager before IAM
* S3: Derive STS signing key from KEK using HKDF for security isolation
* S3: Document STS signing key fallback in security.toml
* fix(s3api): refine anonymous access logic and secure-by-default behavior

  - Initialize anonymous identity by default in `NewIdentityAccessManagement` to prevent nil pointer exceptions.
  - Ensure `ReplaceS3ApiConfiguration` preserves the anonymous identity if not present in the new configuration.
  - Update `NewIdentityAccessManagement` signature to accept `filerClient`.
  - In legacy mode (no policy engine), anonymous defaults to Deny (no actions), preserving secure-by-default behavior.
  - Use specific `LookupAnonymous` method instead of generic map lookup.
  - Update tests to accommodate signature changes and verify improved anonymous handling.

* feat(s3api): make IAM configuration optional

  - Start S3 API server without a configuration file if `EnableIam` option is set.
  - Default to `Allow` effect for policy engine when no configuration is provided (Zero-Config mode).
  - Handle empty configuration path gracefully in `loadIAMManagerFromConfig`.
  - Add integration test `iam_optional_test.go` to verify empty config behavior.

* fix(iamapi): fix signature mismatch in NewIdentityAccessManagementWithStore
* fix(iamapi): properly initialize FilerClient instead of passing nil
* fix(iamapi): properly initialize filer client for IAM management

  - Instead of passing `nil`, construct a `wdclient.FilerClient` using the provided `Filers` addresses.
  - Ensure `NewIdentityAccessManagementWithStore` receives a valid `filerClient` to avoid potential nil pointer dereferences or limited functionality.

* clean: remove dead code in s3api_server.go
* refactor(s3api): improve IAM initialization, safety and anonymous access security
* fix(s3api): ensure IAM config loads from filer after client init
* fix(s3): resolve test failures in integration, CORS, and tagging tests

  - Fix CORS tests by providing explicit anonymous permissions config
  - Fix S3 integration tests by setting admin credentials in init
  - Align tagging test credentials in CI with IAM defaults
  - Added goroutine to retry IAM config load in iamapi server

* fix(s3): allow anonymous access to health targets and S3 Tables when identities are present
* fix(ci): use /healthz for Caddy health check in awscli tests
* iam, s3api: expose DefaultAllow from IAM and Policy Engine

  This allows checking the global "Open by Default" configuration from other components like S3 Tables.

* s3api/s3tables: support DefaultAllow in permission logic and handler

  Updated CheckPermissionWithContext to respect the DefaultAllow flag in PolicyContext. This enables "Open by Default" behavior for unauthenticated access in zero-config environments. Added a targeted unit test to verify the logic.

* s3api/s3tables: propagate DefaultAllow through handlers

  Propagated the DefaultAllow flag to individual handlers for namespaces, buckets, tables, policies, and tagging. This ensures consistent "Open by Default" behavior across all S3 Tables API endpoints.

* s3api: wire up DefaultAllow for S3 Tables API initialization

  Updated registerS3TablesRoutes to query the global IAM configuration and set the DefaultAllow flag on the S3 Tables API server. This completes the end-to-end propagation required for anonymous access in zero-config environments. Added a SetDefaultAllow method to S3TablesApiServer to facilitate this.

* s3api: fix tests by adding DefaultAllow to mock IAM integrations

  The IAMIntegration interface was updated to include DefaultAllow(), breaking several mock implementations in tests. This commit fixes the build errors by adding the missing method to the mocks.

* env
* ensure ports
* env
* env
* fix default allow
* add one more test using non-anonymous user
* debug
* add more debug
* less logs
0721e3c1e9
Rework volume compaction (a.k.a. vacuuming) logic to cleanly support new parameters. (#8337)

We'll leverage this to support an "ignore broken needles" option, necessary to properly recover damaged volumes, as described in https://github.com/seaweedfs/seaweedfs/issues/7442#issuecomment-3897784283.
b57429ef2e
Switch empty-folder cleanup to bucket policy (#8292)

* Fix Spark _temporary cleanup and add issue #8285 regression test
* Generalize empty folder cleanup for Spark temp artifacts
* Revert synchronous folder pruning and add cleanup diagnostics
* Add actionable empty-folder cleanup diagnostics
* Fix Spark temp marker cleanup in async folder cleaner
* Fix Spark temp cleanup with implicit directory markers
* Keep explicit directory markers non-implicit
* logging
* more logs
* Switch empty-folder cleanup to bucket policy
* Seaweed-X-Amz-Allow-Empty-Folders
* less logs
* go vet
* less logs
* refactoring
ba8e2aaae9
Fix master leader election when grpc ports change (#8272)

* Fix master leader detection when grpc ports change
* Canonicalize self peer entry to avoid raft self-alias panic
* Normalize and deduplicate master peer addresses
e6ee293c17
Add table operations test (#8241)

* Add Trino blog operations test
* Update test/s3tables/catalog_trino/trino_blog_operations_test.go
* feat: add table bucket path helpers and filer operations

  - Add table object root and table location mapping directories
  - Implement ensureDirectory, upsertFile, deleteEntryIfExists helpers
  - Support table location bucket mapping for S3 access

* feat: manage table bucket object roots on creation/deletion

  - Create .objects directory for table buckets on creation
  - Clean up table object bucket paths on deletion
  - Enable S3 operations on table bucket object roots

* feat: add table location mapping for Iceberg REST

  - Track table location bucket mappings when tables are created/updated/deleted
  - Enable location-based routing for S3 operations on table data

* feat: route S3 operations to table bucket object roots

  - Route table-s3 bucket names to mapped table paths
  - Route table buckets to object root directories
  - Support table location bucket mapping lookup

* feat: emit table-s3 locations from Iceberg REST

  - Generate unique table-s3 bucket names with UUID suffix
  - Store table metadata under table bucket paths
  - Return table-s3 locations for Trino compatibility

* fix: handle missing directories in S3 list operations

  - Propagate ErrNotFound from ListEntries for non-existent directories
  - Treat missing directories as empty results for list operations
  - Fixes Trino non-empty location checks on table creation

* test: improve Trino CSV parsing for single-value results

  - Sanitize Trino output to skip jline warnings
  - Handle single-value CSV results without header rows
  - Strip quotes from numeric values in tests

* refactor: use bucket path helpers throughout S3 API

  - Replace direct bucket path operations with helper functions
  - Leverage centralized table bucket routing logic
  - Improve maintainability with consistent path resolution

* fix: add table bucket cache and improve filer error handling

  - Cache table bucket lookups to reduce filer overhead on repeated checks
  - Use filer_pb.CreateEntry and filer_pb.UpdateEntry helpers to check resp.Error
  - Fix delete order in handler_bucket_get_list_delete: delete table object before directory
  - Make location mapping errors best-effort: log and continue, don't fail API
  - Update table location mappings to delete stale prior bucket mappings on update
  - Add 1-second sleep before timestamp time travel query to ensure timestamps are in past
  - Fix CSV parsing: examine all lines, not skip first; handle single-value rows

* fix: properly handle stale metadata location mapping cleanup

  - Capture oldMetadataLocation before mutation in handleUpdateTable
  - Update updateTableLocationMapping to accept both old and new locations
  - Use passed-in oldMetadataLocation to detect location changes
  - Delete stale mapping only when location actually changes
  - Pass empty string for oldLocation in handleCreateTable (new tables have no prior mapping)
  - Improve logging to show old -> new location transitions

* refactor: cleanup imports and cache design

  - Remove unused 'sync' import from bucket_paths.go
  - Use filer_pb.UpdateEntry helper in setExtendedAttribute and deleteExtendedAttribute for consistent error handling
  - Add dedicated tableBucketCache map[string]bool to BucketRegistry instead of mixing concerns with metadataCache
  - Improve cache separation: table buckets cache is now separate from bucket metadata cache

* fix: improve cache invalidation and add transient error handling

  Cache invalidation (critical fix):
  - Add tableLocationCache to BucketRegistry for location mapping lookups
  - Clear tableBucketCache and tableLocationCache in RemoveBucketMetadata
  - Prevents stale cache entries when buckets are deleted/recreated

  Transient error handling:
  - Only cache table bucket lookups when conclusive (found or ErrNotFound)
  - Skip caching on transient errors (network, permission, etc)
  - Prevents marking real table buckets as non-table due to transient failures

  Performance optimization:
  - Cache tableLocationDir results to avoid repeated filer RPCs on hot paths
  - tableLocationDir now checks cache before making expensive filer lookups
  - Cache stores empty string for 'not found' to avoid redundant lookups

  Code clarity:
  - Add comment to deleteDirectory explaining DeleteEntry response lacks Error field

* go fmt
* fix: mirror transient error handling in tableLocationDir and optimize bucketDir

  Transient error handling:
  - tableLocationDir now only caches definitive results
  - Mirrors isTableBucket behavior to prevent treating transient errors as permanent misses
  - Improves reliability on flaky systems or during recovery

  Performance optimization:
  - bucketDir avoids redundant isTableBucket call via bucketRoot
  - Directly use s3a.option.BucketsPath for regular buckets
  - Saves one cache lookup for every non-table bucket operation

* fix: revert bucketDir optimization to preserve bucketRoot logic

  The optimization to directly use BucketsPath bypassed bucketRoot's logic and caused issues with S3 list operations on delimiter+prefix cases. Revert to using path.Join(s3a.bucketRoot(bucket), bucket) which properly handles all bucket types and ensures consistent path resolution across the codebase. The slight performance cost of an extra cache lookup is worth the correctness and consistency benefits.

* feat: move table buckets under /buckets

  Add a table-bucket marker attribute, reuse bucket metadata cache for table bucket detection, and update list/validation/UI/test paths to treat table buckets as /buckets entries.

* Fix S3 Tables code review issues

  - handler_bucket_create.go: Fix bucket existence check to properly validate entryResp.Entry before setting s3BucketExists flag (nil Entry should not indicate existing bucket)
  - bucket_paths.go: Add clarifying comment to bucketRoot() explaining unified buckets root path for all bucket types
  - file_browser_data.go: Optimize by extracting table bucket check early to avoid redundant WithFilerClient call

* Fix list prefix delimiter handling
* Handle list errors conservatively
* Fix Trino FOR TIMESTAMP query - use past timestamp

  Iceberg requires the timestamp to be strictly in the past. Use current_timestamp - interval '1' second instead of current_timestamp.

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>