2d4ea8c665112528ede886795e26a11218d532c8
6 Commits
d34cf0d046
adjust default timing
5f85bf5e8a
Batch volume balance: run multiple moves per job (#8561)
* proto: add BalanceMoveSpec and batch fields to BalanceTaskParams
  Add a BalanceMoveSpec message for encoding individual volume moves, and max_concurrent_moves + repeated moves fields to BalanceTaskParams to support batching multiple volume moves in a single job.
* balance handler: add batch execution with concurrent volume moves
  Refactor Execute() into executeSingleMove() (backward compatible) and executeBatchMoves(), which runs multiple volume moves concurrently using a semaphore-bounded goroutine pool. When BalanceTaskParams.Moves is populated, the batch path is taken; otherwise the single-move path is used. Includes aggregate progress reporting across concurrent moves, per-move error collection, and partial failure support.
* balance handler: add batch config fields to Descriptor and worker config
  Add max_concurrent_moves and batch_size fields to the worker config form and deriveBalanceWorkerConfig(). These control how many volume moves run concurrently within a batch job and the maximum batch size.
* balance handler: group detection proposals into batch jobs
  When batch_size > 1, the Detect method groups detection results into batch proposals where each proposal encodes multiple BalanceMoveSpec entries in BalanceTaskParams.Moves. Single-result batches fall back to the existing single-move proposal format for backward compatibility.
* admin UI: add volume balance execution plan and batch badge
  Add renderBalanceExecutionPlan() for rich rendering of volume balance jobs in the job detail modal. Single-move jobs show source/target/volume info; batch jobs show a moves table with all volume moves. Add a batch badge (e.g., "5 moves") next to the job type in the execution jobs table when the job has the batch=true label.
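The semaphore-bounded goroutine pool described above can be sketched roughly as follows. This is a minimal illustration, not the actual handler: `moveSpec` stands in for the real `BalanceMoveSpec`, and the real `executeBatchMoves` also reports aggregate progress and honors a cancellable context.

```go
package main

import (
	"fmt"
	"sync"
)

// moveSpec is a stand-in for the real BalanceMoveSpec message.
type moveSpec struct{ VolumeID uint32 }

// executeBatchMoves runs at most maxConcurrent moves at a time using a
// buffered channel as a counting semaphore, collecting per-move errors so
// one failed move does not abort the rest of the batch.
func executeBatchMoves(moves []moveSpec, maxConcurrent int, run func(moveSpec) error) []error {
	if maxConcurrent < 1 {
		maxConcurrent = 1 // clamp: never allow an unbounded pool
	}
	sem := make(chan struct{}, maxConcurrent)
	errs := make([]error, len(moves))
	var wg sync.WaitGroup
	for i, m := range moves {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot
		go func(i int, m moveSpec) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			errs[i] = run(m)
		}(i, m)
	}
	wg.Wait()
	return errs
}

func main() {
	moves := []moveSpec{{1}, {2}, {3}}
	errs := executeBatchMoves(moves, 2, func(m moveSpec) error {
		if m.VolumeID == 2 {
			return fmt.Errorf("volume %d: copy failed", m.VolumeID)
		}
		return nil
	})
	succeeded, failed := 0, 0
	for _, e := range errs {
		if e == nil {
			succeeded++
		} else {
			failed++
		}
	}
	// partial-failure rule from the commit: the job succeeds if any move succeeded
	fmt.Println(succeeded > 0 || failed == 0)
}
```

Collecting errors into a slice indexed by move position (rather than returning on the first failure) is what enables the partial-failure semantics described in the commit.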
* Update plugin_templ.go
* fix: detection algorithm uses greedy target instead of divergent topology scores
  The detection loop tracked effective volume counts via an adjustments map, but createBalanceTask independently called planBalanceDestination, which used the topology's LoadCount — a separate, unadjusted source of truth. This divergence caused multiple moves to pile onto the same server. Changes:
  - Add resolveBalanceDestination to resolve the detection loop's greedy target (minServer) rather than independently picking a destination
  - Add oscillation guard: stop when max-min <= 1, since no single move can improve the balance beyond that point
  - Track unseeded destinations: if a target server wasn't in the initial serverVolumeCounts, add it so subsequent iterations include it
  - Add TestDetection_UnseededDestinationDoesNotOverload
* fix: handler force_move propagation, partial failure, deterministic dedupe
  - Propagate ForceMove from the outer BalanceTaskParams to individual move TaskParams so batch moves respect the force_move flag
  - Fix partial failure: mark the job successful if at least one move succeeded (succeeded > 0 || failed == 0) to avoid re-running already-completed moves on retry
  - Use a SHA-256 hash for the deterministic dedupe key fallback instead of time.Now().UnixNano(), which is non-deterministic
  - Remove the unused successDetails variable
  - Extract a maxProposalStringLength constant to replace the magic number 200
* admin UI: use template literals in balance execution plan rendering
* fix: integration test handles batch proposals from batched detection
  With batch_size=20, all moves are grouped into a single proposal containing BalanceParams.Moves instead of top-level Sources/Targets. Update assertions to handle both batch and single-move proposal formats.
* fix: verify volume size on target before deleting source during balance
  Add a pre-delete safety check that reads the volume file status on both source and target, then compares .dat file size and file count. If they don't match, the move is aborted — leaving the source intact rather than risking irreversible data loss. Also removes the redundant mountVolume call, since VolumeCopy already mounts the volume on the target server.
* fix: clamp maxConcurrent, serialize progress sends, validate config as int64
  - Clamp maxConcurrentMoves to defaultMaxConcurrentMoves before creating the semaphore so a stale or malicious job cannot request unbounded concurrent volume moves
  - Extend progressMu to cover sender.SendProgress calls, since the underlying gRPC stream is not safe for concurrent writes
  - Perform bounds checks on max_concurrent_moves and batch_size in int64 space before casting to int, avoiding potential overflow on 32-bit platforms
* fix: check disk capacity in resolveBalanceDestination
  Skip disks where VolumeCount >= MaxVolumeCount so the detection loop does not propose moves to a full disk that would fail at execution time.
* test: rename unseeded destination test to match actual behavior
  The test exercises a server with 0 volumes that IS seeded from topology (matching disk type), not an unseeded destination. Rename to TestDetection_ZeroVolumeServerIncludedInBalance and fix comments.
* test: tighten integration test to assert exactly one batch proposal
  With the default batch_size=20, all moves should be grouped into a single batch proposal. Assert len(proposals)==1 and require BalanceParams with Moves, removing the legacy single-move else branch.
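The deterministic dedupe key mentioned above can be illustrated with a small sketch. The helper name and the string encoding of a move are assumptions for illustration; the point is that hashing the proposal's contents yields the same key on every detection run, unlike a `time.Now().UnixNano()` fallback.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// dedupeKey derives a stable identifier for a batch proposal from its move
// list. Hashing the moves yields the same key for the same proposal on
// every run, which a wall-clock-based fallback cannot guarantee.
func dedupeKey(moves []string) string {
	h := sha256.New()
	for _, m := range moves {
		h.Write([]byte(m))
		h.Write([]byte{0}) // separator so {"ab","c"} and {"a","bc"} differ
	}
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	a := dedupeKey([]string{"vol=1 src=s1 dst=s2", "vol=2 src=s1 dst=s3"})
	b := dedupeKey([]string{"vol=1 src=s1 dst=s2", "vol=2 src=s1 dst=s3"})
	fmt.Println(a == b) // deterministic across runs
}
```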
* fix: propagate ctx to RPCs and restore source writability on abort
  - All helper methods (markVolumeReadonly, copyVolume, tailVolume, readVolumeFileStatus, deleteVolume) now accept a context parameter instead of using context.Background(), so Execute's ctx propagates cancellation and timeouts into every volume server RPC
  - Add deferred cleanup that restores the source volume to writable if any step after markVolumeReadonly fails, preventing the source from being left permanently readonly on abort
  - Add a markVolumeWritable helper using VolumeMarkWritableRequest
* fix: deep-copy protobuf messages in test recording sender
  Use proto.Clone in recordingExecutionSender to store immutable snapshots of JobProgressUpdate and JobCompleted, preventing assertions from observing mutations if the handler reuses message pointers.
* fix: add VolumeMarkWritable and ReadVolumeFileStatus to fake volume server
  The balance task now calls ReadVolumeFileStatus for pre-delete verification and VolumeMarkWritable to restore writability on abort. Add both RPCs to the test fake, and drop the mountCalls assertion since BalanceTask no longer calls VolumeMount directly (VolumeCopy handles it).
* fix: use maxConcurrentMovesLimit (50) for the clamp, not defaultMaxConcurrentMoves
  defaultMaxConcurrentMoves (5) is the fallback when the field is unset, not an upper bound. Clamping to it silently overrides valid config values like 10/20/50. Introduce maxConcurrentMovesLimit (50), matching the descriptor's MaxValue, and clamp to that instead.
* fix: cancel batch moves on progress stream failure
  Derive a cancellable batchCtx from the caller's ctx. If sender.SendProgress returns an error (client disconnect, context cancelled), capture it, skip further sends, and cancel batchCtx so in-flight moves abort via their propagated context rather than running blind to completion.
* fix: bound cleanup timeout and validate batch move fields
  - Use a 30-second timeout for the deferred markVolumeWritable cleanup instead of context.Background(), which can block indefinitely if the volume server is unreachable
  - Validate required fields (VolumeID, SourceNode, TargetNode) before appending moves to a batch proposal, skipping invalid entries
  - Fall back to a single-move proposal when filtering leaves only one valid move in a batch
* fix: cancel task execution on SendProgress stream failure
  All handler progress callbacks previously ignored SendProgress errors, allowing tasks to continue executing after the client disconnected. Now each handler creates a derived cancellable context and cancels it on the first SendProgress error, stopping the in-flight task promptly. Handlers fixed: erasure_coding, vacuum, volume_balance (single-move), and admin_script (breaks the command loop on send failure).
* fix: validate batch moves before scheduling in executeBatchMoves
  Reject empty batches, enforce a hard upper bound (100 moves), and filter out nil or incomplete move specs (missing source/target/volume) before allocating progress tracking and launching goroutines.
* test: add batch balance execution integration test
  Tests the batch move path with 3 volumes and max concurrency 2, using fake volume servers. Verifies all moves complete with the correct readonly, copy, tail, and delete RPC counts.
* test: add MarkWritableCount and ReadFileStatusCount accessors
  Expose the markWritableCalls and readFileStatusCalls counters on the fake volume server, following the existing MarkReadonlyCount pattern.
* fix: oscillation guard uses global effective counts for heterogeneous capacity
  The oscillation guard (max-min <= 1) previously used maxServer/minServer, which are determined by utilization ratio. With heterogeneous capacity, the maxServer by utilization can have fewer raw volumes than the minServer, producing a negative diff and incorrectly triggering the guard. Now scans all servers' effective counts to find the true global max/min volume counts, so the guard works correctly regardless of whether utilization-based or raw-count balancing is used.
* fix: admin script handler breaks outer loop on SendProgress failure
  The break on SendProgress error inside the shell.Commands scan only exited the inner loop, letting the outer command loop continue executing commands on a broken stream. Use a sendBroken flag to propagate the break to the outer execCommands loop.
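The corrected oscillation guard can be sketched as a scan over every server's effective count, rather than comparing the utilization-selected maxServer/minServer pair. Names here are illustrative.

```go
package main

import "fmt"

// balanceConverged returns true when the global spread of effective volume
// counts is <= 1: at that point no single move can improve the balance, so
// the detection loop should stop proposing moves.
func balanceConverged(effective map[string]int) bool {
	first := true
	var hi, lo int
	for _, c := range effective {
		if first {
			hi, lo, first = c, c, false
			continue
		}
		if c > hi {
			hi = c
		}
		if c < lo {
			lo = c
		}
	}
	return hi-lo <= 1
}

func main() {
	fmt.Println(balanceConverged(map[string]int{"s1": 3, "s2": 2, "s3": 3})) // spread 1: stop
	fmt.Println(balanceConverged(map[string]int{"s1": 5, "s2": 1}))          // spread 4: keep going
}
```

Because the scan is over the full map, the guard cannot see a negative diff even when utilization-based selection would pick a "max" server with fewer raw volumes than the "min" server.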
587c24ec89
plugin worker: support job type categories (all, default, heavy) (#8547)
* plugin worker: add handler registry with job categories
Introduce a self-registration pattern for plugin worker job handlers.
Each handler can register itself via init() with a HandlerFactory that
declares its job type, category (default/heavy), CLI aliases, and a
builder function.
ResolveHandlerFactories accepts a mix of category names ("all",
"default", "heavy") and explicit job type names/aliases, returning the
matching factories. This enables workers to be configured by resource
profile rather than requiring explicit job type enumeration.
* plugin worker: register all handlers via init()
Each job handler now self-registers into the global handler registry
with its canonical job type, category, CLI aliases, and build function:
- vacuum: category=default
- volume_balance: category=default
- admin_script: category=default
- erasure_coding: category=heavy
- iceberg_maintenance: category=heavy
Adding a new job type now only requires adding the init() call in the
handler file itself — no other files need to be touched.
* plugin worker: replace hardcoded job type switch with registry
Remove buildPluginWorkerHandler, parsePluginWorkerJobTypes, and
canonicalPluginWorkerJobType from worker_runtime.go. The simplified
buildPluginWorkerHandlers now delegates to
pluginworker.ResolveHandlerFactories, which resolves category names
("all", "default", "heavy") and explicit job type names/aliases.
The default job type is changed from an explicit list to "all", so new
handlers registered via init() are automatically picked up.
Update all tests to use the new API.
* plugin worker: update CLI help text for job categories
Update the -jobType flag description and command examples to document
category support (all, default, heavy) alongside explicit job type names.
* plugin worker: address review feedback
- Add CategoryAll constant; use typed constants in tokenAsCategory
- Pre-allocate result slice in ResolveHandlerFactories
- Add vacuum aliases (vol.vacuum, volume.vacuum)
- List alias examples (ec, balance, iceberg) in -jobType flag help
- Create handlers aggregator package for subpackage blank imports so
new handler subpackages only need to be added in one place
- Make category tests relationship-based (subset/union checks) instead
of asserting exact handler counts
- Add clarifying comments to worker_test.go and mini_plugin_test.go
listing expected handler names next to count assertions
---------
Co-authored-by: Copilot <copilot@github.com>
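The registry-and-categories mechanism described in this commit can be sketched as follows. Field names and the absence of a builder function are simplifications; the real `HandlerFactory` also carries a build function, and registration happens from each handler package's `init()`.

```go
package main

import "fmt"

// Categories a worker can request instead of explicit job types.
const (
	CategoryAll     = "all"
	CategoryDefault = "default"
	CategoryHeavy   = "heavy"
)

// HandlerFactory is a sketch of a registry entry a handler registers.
type HandlerFactory struct {
	JobType  string
	Category string
	Aliases  []string
}

var registry []HandlerFactory

func Register(f HandlerFactory) { registry = append(registry, f) }

// ResolveHandlerFactories accepts a mix of category names and explicit job
// type names/aliases and returns the matching factories, deduplicated by
// job type.
func ResolveHandlerFactories(tokens []string) []HandlerFactory {
	seen := make(map[string]bool)
	out := make([]HandlerFactory, 0, len(registry))
	for _, t := range tokens {
		for _, f := range registry {
			match := t == CategoryAll || t == f.Category || t == f.JobType
			for _, a := range f.Aliases {
				match = match || t == a
			}
			if match && !seen[f.JobType] {
				seen[f.JobType] = true
				out = append(out, f)
			}
		}
	}
	return out
}

func main() {
	Register(HandlerFactory{JobType: "vacuum", Category: CategoryDefault, Aliases: []string{"vol.vacuum"}})
	Register(HandlerFactory{JobType: "erasure_coding", Category: CategoryHeavy, Aliases: []string{"ec"}})
	for _, f := range ResolveHandlerFactories([]string{CategoryHeavy, "vol.vacuum"}) {
		fmt.Println(f.JobType)
	}
}
```

With "all" as the default token, a newly registered handler is picked up automatically, which is the property the commit relies on.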
72c2c7ef8b
Add iceberg_maintenance plugin worker handler (Phase 1) (#8501)
* Add iceberg_maintenance plugin worker handler (Phase 1)
  Implement automated Iceberg table maintenance as a new plugin worker job type. The handler scans S3 table buckets for tables needing maintenance and executes operations in the correct Iceberg order: expire snapshots, remove orphan files, and rewrite manifests.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Fix unsafe int64→int narrowing for MaxSnapshotsToKeep
  Use int64(wouldKeep) instead of int(config.MaxSnapshotsToKeep) to avoid potential truncation on 32-bit platforms (CodeQL high severity).
* Fix unsafe int64→int narrowing for MinInputFiles
  Use int64(len(manifests)) instead of int(config.MinInputFiles) to avoid potential truncation on 32-bit platforms (CodeQL high severity).
* Fix unsafe int64→int narrowing for MaxCommitRetries
  Clamp MaxCommitRetries to the [1,20] range and keep it as int64 throughout the retry loop to avoid truncation on 32-bit platforms (CodeQL high severity).
* Sort snapshots explicitly by timestamp in expireSnapshots
  The previous logic relied on implicit ordering of the snapshot list. Now explicitly sorts snapshots by timestamp descending (most recent first) and uses a simpler keep-count loop: keep the first MaxSnapshotsToKeep newest snapshots plus the current snapshot unconditionally, then expire the rest that exceed the retention window.
* Handle errors properly in listFilerEntries
  Previously all errors from ListEntries and Recv were silently swallowed. Now: treat "not found" errors as an empty directory, propagate other ListEntries errors, and check for io.EOF explicitly on Recv instead of breaking on any error.
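The keep-count loop described above can be sketched as follows. This is a simplified illustration: the real code additionally honors a retention window, and the type names are assumptions.

```go
package main

import (
	"fmt"
	"sort"
)

type snapshot struct {
	ID          int64
	TimestampMs int64
}

// planExpiration sorts snapshots by timestamp descending, keeps the newest
// maxKeep snapshots plus the current snapshot unconditionally, and returns
// the IDs of the rest as expiration candidates.
func planExpiration(snaps []snapshot, maxKeep int, currentID int64) []int64 {
	sort.Slice(snaps, func(i, j int) bool { return snaps[i].TimestampMs > snaps[j].TimestampMs })
	var expire []int64
	for i, s := range snaps {
		if i < maxKeep || s.ID == currentID {
			continue // keep the newest maxKeep and always keep current
		}
		expire = append(expire, s.ID)
	}
	return expire
}

func main() {
	snaps := []snapshot{{ID: 10, TimestampMs: 1}, {ID: 30, TimestampMs: 3}, {ID: 20, TimestampMs: 2}}
	fmt.Println(planExpiration(snaps, 1, 20)) // keeps 30 (newest) and 20 (current)
}
```

Sorting explicitly removes the dependency on whatever order the snapshot list happened to arrive in, which was the bug.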
* Fix overly broad HasSuffix check in orphan detection
  The bare strings.HasSuffix(ref, entry.Name) could match files with similar suffixes (e.g. "123.avro" matching "snap-123.avro"). Replaced with an exact relPath match and a "/"-prefixed suffix check to avoid false positives.
* Replace fmt.Sscanf with strconv.Atoi in extractMetadataVersion
  strconv.Atoi is more explicit and less fragile than fmt.Sscanf for parsing a simple integer from a trimmed string.
* Recursively traverse directories for orphan file detection
  The orphan cleanup only listed a single directory level under data/ and metadata/, skipping IsDirectory entries. Partitioned Iceberg tables store data files in nested partition directories (e.g. data/region=us-east/file.parquet), which were never evaluated. Add a walkFilerEntries helper that recursively descends into subdirectories, and use it in removeOrphans so all nested files are considered for orphan checks.
* Fix manifest path drift from double time.Now() calls
  rewriteManifests called time.Now().UnixMilli() twice: once for the path embedded in WriteManifest and once for the filename passed to saveFilerFile. These timestamps would differ, causing the manifest's internal path reference to not match the actual saved filename. Compute the filename once and reuse it for both WriteManifest and saveFilerFile so they always reference the same path.
* Add TestManifestRewritePathConsistency test
  Verifies that WriteManifest returns a ManifestFile whose FilePath() matches the path passed in, and that path.Base() of that path matches the filename used for saveFilerFile. This validates that the single-timestamp pattern used in rewriteManifests produces consistent paths.
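The tightened suffix check can be shown directly, using the exact failure case from the commit. The helper name is illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// isReferenced matches a reference against a candidate relative path only
// on exact equality or on a "/"-prefixed suffix, so a reference ending in
// "snap-123.avro" no longer spuriously matches the file "123.avro".
func isReferenced(ref, relPath string) bool {
	return ref == relPath || strings.HasSuffix(ref, "/"+relPath)
}

func main() {
	fmt.Println(isReferenced("metadata/snap-123.avro", "123.avro")) // false: similar suffix only
	fmt.Println(isReferenced("metadata/123.avro", "123.avro"))      // true: whole path component
}
```

Requiring the "/" boundary makes the match align with path components instead of raw characters.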
* Make parseOperations return an error on unknown operations
  Previously parseOperations silently dropped unknown operation names and could return an empty list. Now validates inputs against the canonical set and returns a clear error if any unknown operation is specified. Updated Execute to surface the error instead of proceeding with an empty operation list.
* Use gRPC status codes instead of string matching in listFilerEntries
  Replace the brittle strings.Contains(err.Error(), "not found") check with status.Code(err) == codes.NotFound for proper gRPC error handling.
* Add stale-plan guard in commit closures for expireSnapshots and rewriteManifests
  Both operations plan outside the commit mutation using a snapshot ID captured from the initial metadata read. If the table head advances concurrently, the mutation would create a snapshot parented to the wrong head or remove snapshots based on a stale view. Add a guard inside each mutation closure that verifies currentMeta.CurrentSnapshot().SnapshotID still matches the planned snapshot ID. If it differs, return errStalePlan, which propagates immediately (not retried, since the plan itself is invalid). Also fix rewriteManifests to derive SequenceNumber from the fresh metadata (cs.SequenceNumber) instead of the captured currentSnap.
* Add compare-and-swap to updateTableMetadataXattr
  updateTableMetadataXattr previously re-read the entry but did not verify that the metadataVersion matched what commitWithRetry had loaded, so a concurrent update could be silently clobbered. Now accepts an expectedVersion parameter and compares it against the stored metadataVersion before writing.
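The validate-then-canonical-order behavior of `parseOperations` can be sketched like this. The operation strings here are illustrative, not necessarily the handler's exact names; the shape is what matters: unknown names error out, and output order follows the canonical Iceberg maintenance order rather than input order.

```go
package main

import (
	"fmt"
	"strings"
)

// canonicalOps lists operations in the order maintenance must run them
// (illustrative names).
var canonicalOps = []string{"expire_snapshots", "remove_orphans", "rewrite_manifests", "compact"}

// parseOperations validates every requested name against the canonical set
// and returns the selection in canonical order; unknown names are an error
// instead of being silently dropped.
func parseOperations(spec string) ([]string, error) {
	known := make(map[string]bool, len(canonicalOps))
	for _, op := range canonicalOps {
		known[op] = true
	}
	requested := make(map[string]bool)
	for _, tok := range strings.Split(spec, ",") {
		tok = strings.TrimSpace(tok)
		if tok == "" {
			continue
		}
		if !known[tok] {
			return nil, fmt.Errorf("unknown operation %q", tok)
		}
		requested[tok] = true
	}
	out := make([]string, 0, len(requested))
	for _, op := range canonicalOps {
		if requested[op] {
			out = append(out, op)
		}
	}
	return out, nil
}

func main() {
	ops, err := parseOperations("compact, expire_snapshots")
	fmt.Println(ops, err) // canonical order, not input order
}
```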
  Returns errMetadataVersionConflict on mismatch, which commitWithRetry treats as retryable (deletes the staged metadata file and retries with fresh state).
* Export shared plugin worker helpers for use by sub-packages
  Export ShouldSkipDetectionByInterval, BuildExecutorActivity, and BuildDetectorActivity so the iceberg sub-package can reuse them without duplicating logic.
* Refactor iceberg maintenance handler into weed/plugin/worker/iceberg package
  Split the 1432-line iceberg_maintenance_handler.go into focused files in a new iceberg sub-package: handler.go, config.go, detection.go, operations.go, filer_io.go, and compact.go (Phase 2 data compaction). Key changes:
  - Rename types to drop stutter (IcebergMaintenanceHandler → Handler, etc.)
  - Fix loadFileByIcebergPath to preserve nested directory paths via normalizeIcebergPath instead of path.Base, which dropped subdirectories
  - Check SendProgress errors instead of discarding them
  - Add a stale-plan guard to the compactDataFiles commitWithRetry closure
  - Add the "compact" operation to the parseOperations canonical order
  - Duplicate readStringConfig/readInt64Config helpers (~20 lines)
  - Update worker_runtime.go to import the new iceberg sub-package
* Remove iceberg_maintenance from default plugin worker job types
  Iceberg maintenance is not yet ready to be enabled by default. Workers can still opt in by explicitly listing iceberg_maintenance in their job types configuration.
* Clamp config values to safe minimums in ParseConfig
  Prevents misconfiguration by enforcing minimum values, using the default constants for all config fields.
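The compare-and-swap described above can be sketched with the xattr map modeled as a plain Go map (a simplification of the filer entry):

```go
package main

import (
	"errors"
	"fmt"
)

var errMetadataVersionConflict = errors.New("metadata version conflict")

// updateMetadataXattr sketches the CAS: the write only goes through if the
// stored metadataVersion still equals what the committer read earlier;
// otherwise the conflict error tells the retry loop to reload fresh state.
func updateMetadataXattr(xattr map[string]string, expectedVersion, newVersion, newMeta string) error {
	if xattr["metadataVersion"] != expectedVersion {
		return errMetadataVersionConflict
	}
	xattr["metadataVersion"] = newVersion
	xattr["fullMetadata"] = newMeta
	return nil
}

func main() {
	xattr := map[string]string{"metadataVersion": "3"}
	fmt.Println(updateMetadataXattr(xattr, "3", "4", "{}")) // succeeds
	fmt.Println(updateMetadataXattr(xattr, "3", "5", "{}")) // stale view: conflict
}
```

Note this is a client-side check with a TOCTOU window between read and write, a limitation a later commit in this series documents explicitly.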
* Harden filer I/O: path helpers, strict CAS guard, path traversal prevention
  - Use path.Dir/path.Base instead of strings.SplitN in loadCurrentMetadata
  - Make the CAS guard error on a missing or unparseable metadataVersion
  - Add path.Clean and traversal validation in loadFileByIcebergPath
* Fix compact: single snapshot ID, oversized bin splitting, ensureFilerDir
  - Use a single newSnapID for all manifest entries in a compaction run
  - Add splitOversizedBin to break up bins exceeding targetSize
  - Make ensureFilerDir only create on NotFound, propagating other errors
* Add wildcard filters, scan limit, and context cancellation to table scanning
  - Use wildcard matchers (*, ?) for bucket/namespace/table filters
  - Add a limit parameter to scanTablesForMaintenance for early termination
  - Add ctx.Done() checks in the bucket and namespace scan loops
  - Update filter UI descriptions and placeholders for wildcard support
* Remove dead detection interval check and validate namespace parameter
  - Remove the ineffective ShouldSkipDetectionByInterval call with a hardcoded 0
  - Add namespace to required parameter validation in Execute
* Improve operations: exponential backoff, orphan matching, full file cleanup
  - Use exponential backoff (50ms, 100ms, 200ms, ...) in commitWithRetry
  - Use normalizeIcebergPath for orphan matching instead of the fragile suffix check
  - Add collectSnapshotFiles to traverse manifest lists → manifests → data files
  - Delete all unreferenced files after expiring snapshots, not just manifest lists
  - Refactor removeOrphans to reuse collectSnapshotFiles
* iceberg: fix ensureFilerDir to handle the filer_pb.ErrNotFound sentinel
  filer_pb.LookupEntry converts gRPC NotFound errors to filer_pb.ErrNotFound (a plain sentinel), so status.Code() never returns codes.NotFound for that error. This caused ensureFilerDir to return an error instead of creating the directory when it didn't exist.
* iceberg: clean up orphaned artifacts when compaction commit fails
  Track all files written during compaction (merged data files, manifest, manifest list) and delete them if the commit or any subsequent write step fails, preventing orphaned files from accumulating in the filer.
* iceberg: derive tablePath from namespace/tableName when empty
  An empty table_path parameter would be passed to maintenance operations unchecked. Default it to path.Join(namespace, tableName) when not provided.
* iceberg: make collectSnapshotFiles return an error on read/parse failure
  Previously, errors reading manifests were logged and skipped, returning a partial reference set. This could cause incorrect delete decisions during snapshot expiration or orphan cleanup. Now the function returns an error, and all callers abort when reference data is incomplete.
* iceberg: include active metadata file in removeOrphans referenced set
  The metadataFileName returned by loadCurrentMetadata was discarded, so the active metadata file could be incorrectly treated as an orphan and deleted. Capture it and add it to the referencedFiles map.
* iceberg: only retry commitWithRetry on metadata version conflicts
  Previously all errors from updateTableMetadataXattr triggered retries.
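The wildcard table/bucket/namespace filters can be sketched with Go's standard `path.Match`, which supports `*` and `?`. The helper name and the empty-matches-everything convention are assumptions for illustration.

```go
package main

import (
	"fmt"
	"path"
)

// matchFilter applies a shell-style wildcard filter ('*' and '?') to a
// name; an empty filter matches everything.
func matchFilter(pattern, name string) bool {
	if pattern == "" {
		return true
	}
	ok, err := path.Match(pattern, name)
	return err == nil && ok
}

func main() {
	fmt.Println(matchFilter("orders_*", "orders_2024"))
	fmt.Println(matchFilter("t?", "t1"))
	fmt.Println(matchFilter("orders_*", "events"))
}
```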
  Now only errMetadataVersionConflict causes a retry; other errors (permissions, transport, malformed xattr) fail immediately.
* iceberg: respect req.Limit in fakeFilerServer.ListEntries mock
  The mock ListEntries ignored the Limit field, so tests couldn't exercise pagination. Now it stops streaming once Limit entries have been sent.
* iceberg: validate parquet schema compatibility before merging files
  mergeParquetFiles now compares each source file's schema against the first file's schema and aborts with a clear error if they differ, instead of blindly writing rows that could panic or produce corrupt output.
* iceberg: normalize empty JobType to canonical jobType in Execute events
  When request.Job.JobType is empty, status events and completion messages were emitted with a blank job type. Derive a canonical value early and use it consistently in all outbound events.
* iceberg: log warning on unexpected config value types in read helpers
  readStringConfig and readInt64Config now log a V(1) warning when they encounter an unhandled ConfigValue kind, aiding debugging of unexpected config types that silently fall back to defaults.
* worker: add iceberg_maintenance to default plugin worker job types
  Workers using the default job types list didn't advertise the iceberg_maintenance handler despite the handler and canonical name being registered. Add it so workers pick up the handler by default.
* iceberg: use defer and detached context for compaction artifact cleanup
  The cleanup closure used the job context, which could already be canceled, and was not called on ctx.Done() early exits. Switch to a deferred cleanup with a detached context (30s timeout) so artifact deletion completes on all exit paths, including context cancellation.
* iceberg: use proportional jitter in commitWithRetry backoff
  A fixed 25ms max jitter becomes insignificant at higher retry attempts. Use 0-20% of the current backoff value instead, so jitter scales with the exponential delay.
* iceberg: add malformed filename cases to extractMetadataVersion test
  Cover edge cases like "invalid.metadata.json", "metadata.json", "", and "v.metadata.json" to ensure the function returns 0 for unparseable inputs.
* iceberg: fail compaction on manifest read errors and skip delete manifests
  Previously, unreadable manifests were silently skipped during compaction, which could drop live files from the entry set. Now manifest read/parse errors are returned as fatal errors. Also abort compaction when delete manifests exist, since the compactor does not apply deletes — carrying them through unchanged could produce incorrect results.
* iceberg: use table-relative path for active metadata file in orphan scan
  metadataFileName was stored as a basename (e.g. "v1.metadata.json") but the orphan scanner matches against table-relative paths like "metadata/v1.metadata.json". Prefix with "metadata/" so the active metadata file is correctly recognized as referenced.
* iceberg: fix MetadataBuilderFromBase location to use metadata file path
  The second argument to MetadataBuilderFromBase records the previous metadata file in the metadata log. Using meta.Location() (the table root) was incorrect — it must be the actual metadata file path so old metadata files can be tracked and eventually cleaned up.
* iceberg: update metadataLocation and versionToken in xattr on commit
  updateTableMetadataXattr was only updating metadataVersion, modifiedAt, and fullMetadata, but not metadataLocation or versionToken. This left catalog state inconsistent after maintenance commits — the metadataLocation still pointed to the old metadata file and the versionToken was stale. Add a newMetadataLocation parameter and regenerate the versionToken on every commit, matching the S3 Tables handler behavior.
* iceberg: group manifest entries by partition spec in rewriteManifests
  rewriteManifests was writing all entries into a single manifest using the table's current partition spec. For spec-evolved tables where manifests reference different partition specs, this produces an invalid manifest. Group entries by the source manifest's PartitionSpecID and write one merged manifest per spec, looking up each spec from the table's PartitionSpecs list.
* iceberg: remove dead code loop for non-data manifests in compaction
  The early abort guard at the top of compactDataFiles already ensures no delete manifests are present. The loop that copied non-data manifests into allManifests was unreachable dead code.
* iceberg: use JSON encoding in partitionKey for unambiguous grouping
  partitionKey used fmt.Sprintf("%d=%v") joined by commas, which produces ambiguous keys when partition values contain commas or '='. Use json.Marshal for values and a NUL byte as separator to eliminate collisions.
* iceberg: precompute normalized reference set in removeOrphans
  The orphan check was O(files × refs) because it normalized each reference path inside the per-file loop. Precompute the normalized set once for O(1) lookups per candidate file.
* iceberg: add artifact cleanup to rewriteManifests on commit failure
  rewriteManifests writes merged manifests and a manifest list to the filer before committing, but did not clean them up on failure. Add the same deferred cleanup pattern used by compactDataFiles: track written artifacts and delete them if the commit does not succeed.
* iceberg: pass isDeleteData=true in deleteFilerFile
  deleteFilerFile called DoRemove with isDeleteData=false, which only removed filer metadata and left chunk data behind on volume servers. All other data-file deletion callers in the codebase pass true.
* iceberg: clean up test: remove unused snapID, simplify TestDetectWithFakeFiler
  Remove the unused snapID variable and eliminate the unnecessary second fake filer + entry copy in TestDetectWithFakeFiler by capturing the client from the first startFakeFiler call.
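The unambiguous `partitionKey` encoding can be demonstrated with the collision the old scheme allowed. The function shape is a sketch of the approach described in the commit.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// partitionKey JSON-encodes each partition value and joins them with a NUL
// byte, which cannot appear in JSON output, so values containing ',' or
// '=' can no longer produce colliding keys.
func partitionKey(values []any) string {
	var b strings.Builder
	for i, v := range values {
		if i > 0 {
			b.WriteByte(0)
		}
		j, _ := json.Marshal(v)
		b.Write(j)
	}
	return b.String()
}

func main() {
	// Under the old "%d=%v" comma-joined scheme, both of these rendered as
	// "0=a,1=b" and collided; with JSON + NUL separators they are distinct.
	fmt.Println(partitionKey([]any{"a,1=b"}) == partitionKey([]any{"a", "b"}))
}
```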
* fix: update TestWorkerDefaultJobTypes to expect 5 job types
  The test expected 4 default job types, but iceberg_maintenance was added as a 5th default in a previous commit.
* iceberg: document client-side CAS TOCTOU limitation in updateTableMetadataXattr
  Add a note explaining the race window where two workers can both pass the version check and race at UpdateEntry. The proper fix requires server-side precondition support on UpdateEntryRequest.
* iceberg: remove unused sender variable in TestFullExecuteFlow
* iceberg: abort compaction when multiple partition specs are present
  The compactor writes all entries into a single manifest using the current partition spec, which is invalid for spec-evolved tables. Detect multiple PartitionSpecIDs and skip compaction until per-spec compaction is implemented.
* iceberg: validate tablePath to prevent directory traversal
  Sanitize the table_path parameter with path.Clean and verify it matches the expected namespace/tableName prefix to prevent path traversal attacks via crafted job parameters.
* iceberg: cap retry backoff at 5s and make it context-aware
  The exponential backoff could grow unbounded and blocked on time.Sleep, ignoring context cancellation. Cap at 5s and use a timer with a select on ctx.Done so retries respect cancellation.
* iceberg: write manifest list with new snapshot identity in rewriteManifests
  The manifest list was written with the old snapshot's ID and sequence number, but the new snapshot created afterwards used a different identity. Compute newSnapshotID and newSeqNum before writing manifests and the manifest list so all artifacts are consistent.
* ec: also remove .vif file in removeEcVolumeFiles
  removeEcVolumeFiles cleaned up .ecx, .ecj, and shard files but not the .vif volume info file, leaving it orphaned. The .vif file lives in the data directory alongside shard files. The directory handling for index vs data files was already correct: .ecx/.ecj are removed from IdxDirectory and shard files from Directory, matching how NewEcVolume loads them.
* Revert "ec: also remove .vif file in removeEcVolumeFiles"
  This reverts commit acc82449e12a00115268a5652aef0d6c46d9f2dd.
* iceberg: skip orphan entries with nil Attributes instead of defaulting to epoch
  When entry.Attributes is nil, mtime defaulted to the Unix epoch (1970), making unknown-age entries appear ancient and eligible for deletion. Skip these entries instead, to avoid deleting files whose age cannot be determined.
* iceberg: use unique metadata filenames to prevent concurrent write clobbering
  Add a timestamp nonce to metadata filenames (e.g. v3-1709766000.metadata.json) so concurrent writers stage to distinct files. Update extractMetadataVersion to strip the nonce suffix, and loadCurrentMetadata to read the actual filename from the metadataLocation xattr field.
* iceberg: defer artifact tracking until data file builder succeeds
  Move the writtenArtifacts append to after NewDataFileBuilder succeeds, so a failed builder doesn't leave a stale entry for an already-deleted file in the cleanup list.
* iceberg: use detached context for metadata file cleanup
  Use context.WithTimeout(context.Background(), 10s) when deleting staged metadata files after a CAS failure, so cleanup runs even if the original request context is canceled.
* test: update default job types count to include iceberg_maintenance
* iceberg: use parquet.EqualNodes for structural schema comparison
  Replace the String()-based schema comparison with parquet.EqualNodes, which correctly compares types, repetition levels, and logical types.
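The nonce-aware version parser described above can be sketched as follows. This is an illustrative reimplementation of the behavior the commits describe, not the handler's exact code.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// extractMetadataVersion parses "v3.metadata.json" or the nonce-suffixed
// "v3-1709766000.metadata.json" to 3, returning 0 for anything
// unparseable (the behavior the malformed-filename tests pin down).
func extractMetadataVersion(name string) int {
	base := strings.TrimSuffix(name, ".metadata.json")
	if base == name || !strings.HasPrefix(base, "v") {
		return 0 // missing suffix or missing "v" prefix
	}
	num := strings.TrimPrefix(base, "v")
	if i := strings.IndexByte(num, '-'); i >= 0 {
		num = num[:i] // strip the timestamp nonce
	}
	n, err := strconv.Atoi(num)
	if err != nil || n < 0 {
		return 0
	}
	return n
}

func main() {
	fmt.Println(extractMetadataVersion("v3-1709766000.metadata.json")) // 3
	fmt.Println(extractMetadataVersion("v12.metadata.json"))           // 12
	fmt.Println(extractMetadataVersion("invalid.metadata.json"))       // 0
}
```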
* iceberg: add nonce-suffixed filename cases to TestExtractMetadataVersion * test: assert iceberg_maintenance is present in default job types * iceberg: validate operations config early in Detect Call parseOperations in Detect so typos in the operations config fail fast before emitting proposals, matching the validation already done in Execute. * iceberg: detect chunked files in loadFileByIcebergPath Return an explicit error when a file has chunks but no inline content, rather than silently returning empty data. Data files uploaded via S3 are stored as chunks, so compaction would otherwise produce corrupt merged files. --------- Co-authored-by: Copilot <copilot@github.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> |
18ccc9b773
Plugin scheduler: sequential iterations with max runtime (#8496)
* pb: add job type max runtime setting
* plugin: default job type max runtime
* plugin: redesign scheduler loop
* admin ui: update scheduler settings
* plugin: fix scheduler loop state name
* plugin scheduler: restore backlog skip
* plugin scheduler: drop legacy detection helper
* admin api: require scheduler config body
* admin ui: preserve detection interval on save
* plugin scheduler: use job context and drain cancels
* plugin scheduler: respect detection intervals
* plugin scheduler: gate runs and drain queue
* ec test: reuse req/resp vars
* ec test: add scheduler debug logs
* Adjust scheduler idle sleep and initial run delay
* Clear pending job queue before scheduler runs
* Log next detection time in EC integration test
* Improve plugin scheduler debug logging in EC test
* Expose scheduler next detection time
* Log scheduler next detection time in EC test
* Wake scheduler on config or worker updates
* Expose scheduler sleep interval in UI
* Fix scheduler sleep save value selection
* Set scheduler idle sleep default to 613s
* Show scheduler next run time in plugin UI

---------

Co-authored-by: Copilot <copilot@github.com>
e1e5b4a8a6
add admin script worker (#8491)
* admin: add plugin lock coordination
* shell: allow bypassing lock checks
* plugin worker: add admin script handler
* mini: include admin_script in plugin defaults
* admin script UI: drop name and enlarge text
* admin script: add default script
* admin_script: make run interval configurable
* plugin: gate other jobs during admin_script runs
* plugin: use last completed admin_script run
* admin: backfill plugin config defaults
* templ
  Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* comparable to default version
  Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* default to run
  Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* format
  Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* shell: respect pre-set noLock for fix.replication
* shell: add force no-lock mode for admin scripts
* volume balance worker already exists
  Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* admin: expose scheduler status JSON
* shell: add sleep command
* shell: restrict sleep syntax
* Revert "shell: respect pre-set noLock for fix.replication"

  This reverts commit 2b14e8b82602a740d3a473c085e3b3a14f1ddbb3.

* templ
  Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* fix import
  Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* less logs
  Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
* Reduce master client logs on canceled contexts
* Update mini default job type count

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>