seaweedfs/weed/admin/topology/storage_impact.go
Chris Lu a838661b83 feat(plugin): EC shard balance handler for plugin worker (#8629)
* feat(ec_balance): add TaskTypeECBalance constant and protobuf definitions

Add the ec_balance task type constant to both topology and worker type
systems. Define EcBalanceTaskParams, EcShardMoveSpec, and
EcBalanceTaskConfig protobuf messages for EC shard balance operations.

* feat(ec_balance): add configuration for EC shard balance task

Config includes imbalance threshold, min server count, collection
filter, disk type, and preferred tags for tag-aware placement.
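
A rough Go view of these knobs (illustrative field names and comments,
not the actual generated protobuf struct):

type EcBalanceTaskConfig struct {
	ImbalanceThreshold float64  // e.g. 0.10: ignore skew below 10%
	MinServerCount     int32    // skip detection below this many servers
	CollectionFilter   string   // empty means all collections
	DiskType           string   // e.g. "hdd" or "ssd"
	PreferredTags      []string // tag-aware placement hints
}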

* feat(ec_balance): add multi-phase EC shard balance detection algorithm

Implements four detection phases adapted from the ec.balance shell
command:
1. Duplicate shard detection and removal proposals
2. Cross-rack shard distribution balancing
3. Within-rack node-level shard balancing
4. Global shard count equalization across nodes

Detection is side-effect-free: it builds an EC topology view from
ActiveTopology and generates move proposals without executing them.
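
The phase ordering, in sketch form (function and type names here are
illustrative, not the actual detection.go symbols):

// Pure planning: reads the topology view, returns proposals, mutates nothing.
func detectECBalanceMoves(topo *ecTopologyView) []MoveProposal {
	var moves []MoveProposal
	moves = append(moves, detectDuplicateShards(topo)...)     // phase 1
	moves = append(moves, detectCrossRackImbalance(topo)...)  // phase 2
	moves = append(moves, detectWithinRackImbalance(topo)...) // phase 3
	moves = append(moves, detectGlobalImbalance(topo)...)     // phase 4
	return moves // no RPCs are issued here
}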

* feat(ec_balance): add EC shard move task execution

Implements the shard move sequence using the same VolumeEcShardsCopy,
VolumeEcShardsMount, VolumeEcShardsUnmount, and VolumeEcShardsDelete
RPCs as the shell ec.balance command. Supports both regular shard
moves and dedup-phase deletions (unmount+delete without copy).
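
The per-move call order, roughly (helper names, the MoveProposal type,
and the Dedup flag are illustrative; the four RPCs are the real ones
named above):

func executeShardMove(ctx context.Context, m MoveProposal) error {
	if m.Dedup { // duplicate removal: the shard already lives elsewhere
		if err := unmountShard(ctx, m.Source, m.VolumeID, m.ShardID); err != nil { // VolumeEcShardsUnmount
			return err
		}
		return deleteShard(ctx, m.Source, m.VolumeID, m.ShardID) // VolumeEcShardsDelete
	}
	if err := copyShard(ctx, m.Source, m.Target, m.VolumeID, m.ShardID); err != nil { // VolumeEcShardsCopy
		return err
	}
	if err := mountShard(ctx, m.Target, m.VolumeID, m.ShardID); err != nil { // VolumeEcShardsMount
		return err
	}
	if err := unmountShard(ctx, m.Source, m.VolumeID, m.ShardID); err != nil { // VolumeEcShardsUnmount
		return err
	}
	return deleteShard(ctx, m.Source, m.VolumeID, m.ShardID) // VolumeEcShardsDelete
}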

* feat(ec_balance): add task registration and scheduling

Register EC balance task definition with auto-config update support.
Scheduling respects max concurrent limits and worker capabilities.

* feat(ec_balance): add plugin handler for EC shard balance

Implements the full plugin handler with detection, execution, admin
and worker config forms, proposal building, and decision trace
reporting. Supports collection/DC/disk type filtering, preferred tag
placement, and configurable detection intervals. Auto-registered via
init() with the handler registry.
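
The init()-time self-registration, in sketch form (the registry
function name is assumed):

func init() {
	// Runs at package load so the worker can discover the handler by job type.
	RegisterHandler(&ECBalanceHandler{})
}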

* test(ec_balance): add tests for detection algorithm and plugin handler

Detection tests cover: duplicate shard detection, cross-rack imbalance,
within-rack imbalance, global rebalancing, topology building, collection
filtering, and edge cases. Handler tests cover: config derivation with
clamping, proposal building, protobuf encode/decode round-trip, fallback
parameter decoding, capability, and config policy round-trip.

* fix(ec_balance): address PR review feedback and fix CI test failure

- Update TestWorkerDefaultJobTypes to expect 6 handlers (was 5)
- Extract threshold constants (ecBalanceMinImbalanceThreshold, etc.)
  to eliminate magic numbers in Descriptor and config derivation
- Remove duplicate ShardIdsToUint32 helper (use erasure_coding package)
- Add bounds checks for int64→int/uint32 conversions to fix CodeQL
  integer conversion warnings
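
The bounds checks amount to clamping before narrowing, along these
lines (a sketch using the standard math constants, not the exact
helper):

func clampToInt32(v int64) int32 {
	if v > math.MaxInt32 {
		return math.MaxInt32
	}
	if v < math.MinInt32 {
		return math.MinInt32
	}
	return int32(v) // safe: v is now within int32 range
}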

* fix(ec_balance): address code review findings

storage_impact.go:
- Add TaskTypeECBalance case returning shard-level reservation
  (ShardSlots: -1/+1) instead of falling through to default which
  incorrectly reserves a full volume slot on target.

detection.go:
- Use a dc:rack composite key to avoid cross-DC rack name collisions
  (sketched after this list). Only create rack entries after confirming
  the node has matching disks.
- Add exceedsImbalanceThreshold check to cross-rack, within-rack,
  and global phases so trivial skews below the configured threshold
  are ignored. Dedup phase always runs since duplicates are errors.
- Reserve destination capacity after each planned move (decrement
  destNode.freeSlots, update rackShardCount/nodeShardCount) to
  prevent overbooking the same destination.
- Skip nodes with freeSlots <= 0 when selecting minNode in global
  balance to avoid proposing moves to full nodes.
- Include loop index and source/target node IDs in TaskID to
  guarantee uniqueness across moves with the same volumeID/shardID.
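
Two of these fixes in sketch form (the node type and map names are
illustrative):

type ecNode struct {
	id        string
	freeSlots int
}

// Composite key: "rack1" in dc1 must not collide with "rack1" in dc2.
func rackKey(dc, rack string) string { return dc + ":" + rack }

// After planning a move, reserve the destination so later candidate
// selection cannot overbook the same node.
func reserveMove(dc, rack string, dest *ecNode, rackShards, nodeShards map[string]int) {
	dest.freeSlots--
	rackShards[rackKey(dc, rack)]++
	nodeShards[dest.id]++
}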

ec_balance_handler.go:
- Fail fast with error when shard_id is absent in fallback parameter
  decoding instead of silently defaulting to shard 0.

ec_balance_task.go:
- Delegate GetProgress() to BaseTask.GetProgress() so progress
  updates from ReportProgressWithStage are visible to callers.
- Add fail-fast guard rejecting multiple sources/targets until
  batch execution is implemented.

Findings verified but not changed (each matches the existing codebase
pattern in the vacuum/balance/erasure_coding handlers):
- register.go globalTaskDef.Config race: same unsynchronized pattern
  in all 4 task packages.
- CreateTask using generated ID: same fmt.Sprintf pattern in all 4
  task packages.

* fix(ec_balance): harden parameter decoding, progress tracking, and validation

ec_balance_handler.go (decodeECBalanceTaskParams):
- Validate execution-critical fields (Sources[0].Node, ShardIds,
  Targets[0].Node, ShardIds) after protobuf deserialization; see the
  sketch after this list.
- Require source_disk_id and target_disk_id in legacy fallback path
  so Targets[0].DiskId is populated for VolumeEcShardsCopyRequest.
- All error messages reference decodeECBalanceTaskParams and the
  specific missing field (TaskParams, shard_id, Targets[0].DiskId,
  EcBalanceTaskParams) for debuggability.
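
The validation shape, roughly (the params struct mirrors the fields
named above; the exact generated type may differ):

func validateECBalanceParams(p *EcBalanceTaskParams) error {
	switch {
	case len(p.Sources) == 0 || p.Sources[0].Node == "":
		return fmt.Errorf("decodeECBalanceTaskParams: Sources[0].Node is required")
	case len(p.Sources[0].ShardIds) == 0:
		return fmt.Errorf("decodeECBalanceTaskParams: Sources[0].ShardIds is required")
	case len(p.Targets) == 0 || p.Targets[0].Node == "":
		return fmt.Errorf("decodeECBalanceTaskParams: Targets[0].Node is required")
	case len(p.Targets[0].ShardIds) == 0:
		return fmt.Errorf("decodeECBalanceTaskParams: Targets[0].ShardIds is required")
	}
	return nil
}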

ec_balance_task.go:
- Track progress in the ECBalanceTask.progress field, updated via a
  reportProgress() helper called before ReportProgressWithStage()
  (sketched after this list), so GetProgress() returns real stage
  progress instead of a stale 0.
- Validate: require exactly 1 source and 1 target (mirrors Execute
  guard), require ShardIds on both, with error messages referencing
  ECBalanceTask.Validate and the specific field.
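
The progress bookkeeping, in sketch form (signatures are assumed; the
real field may need synchronization if read concurrently):

func (t *ECBalanceTask) reportProgress(progress float64, stage string) {
	t.progress = progress // cache first, so GetProgress() never lags
	t.ReportProgressWithStage(progress, stage)
}

func (t *ECBalanceTask) GetProgress() float64 { return t.progress }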

* fix(ec_balance): fix dedup execution path, stale topology, collection filter, timeout, and dedupeKey

detection.go:
- Dedup moves now set target=source so isDedupPhase() triggers the
  unmount+delete-only execution path instead of attempting a copy.
- Apply moves to the in-memory topology between phases via
  applyMovesToTopology() (sketched after this list) so subsequent
  phases see updated shard placement and don't conflict with
  already-planned moves.
- detectGlobalImbalance now accepts allowedVids and filters both
  shard counting and shard selection to respect CollectionFilter.
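
A sketch of the between-phase bookkeeping (method names are
illustrative):

func applyMovesToTopology(topo *ecTopologyView, moves []MoveProposal) {
	for _, m := range moves {
		topo.removeShard(m.Source, m.VolumeID, m.ShardID)
		if m.Target != m.Source { // target == source marks a dedup deletion
			topo.addShard(m.Target, m.VolumeID, m.ShardID)
		}
	}
}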

ec_balance_task.go:
- Apply EcBalanceTaskParams.TimeoutSeconds to the context via
  context.WithTimeout so all RPC operations respect the configured
  timeout instead of hanging indefinitely.
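
Roughly, using the standard context API (the wrapper is illustrative):

func withConfiguredTimeout(ctx context.Context, timeoutSeconds int64) (context.Context, context.CancelFunc) {
	if timeoutSeconds <= 0 {
		return ctx, func() {} // no timeout configured; no-op cancel
	}
	return context.WithTimeout(ctx, time.Duration(timeoutSeconds)*time.Second)
}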

ec_balance_handler.go:
- Include the source node ID in dedupeKey (sketched after this list)
  so dedup deletions from different source nodes for the same shard
  aren't collapsed.
- Clamp minServerCountRaw and minIntervalRaw lower bounds on int64
  before narrowing to int, preventing silent wraparound where int is
  32 bits.
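
The dedupeKey shape, roughly (the actual format may differ):

func dedupeKey(volumeID, shardID uint32, sourceNode string) string {
	// The same shard deleted from two different source nodes stays distinct.
	return fmt.Sprintf("ec_balance:%d:%d:%s", volumeID, shardID, sourceNode)
}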

* fix(ec_balance): log warning before cancelling on progress send failure

Log the error, job ID, job type, progress percentage, and stage
before calling execCancel() in the progress callback so failed
progress sends are diagnosable instead of silently cancelling.
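
The callback shape, roughly (identifiers other than glog are
illustrative):

progressFn := func(u ProgressUpdate) {
	if err := sendProgress(u); err != nil {
		// Log before cancelling so the failure is diagnosable.
		glog.Warningf("job %s (%s): progress send failed at %.1f%% stage %q: %v",
			jobID, jobType, u.Percent, u.Stage, err)
		execCancel()
	}
}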

package topology

import (
	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding"
)

// CalculateTaskStorageImpact calculates storage impact for different task types
func CalculateTaskStorageImpact(taskType TaskType, volumeSize int64) (sourceChange, targetChange StorageSlotChange) {
	switch taskType {
	case TaskTypeErasureCoding:
		// EC task: distributes shards to MULTIPLE targets, source reserves with zero impact
		// Source reserves capacity but with zero StorageSlotChange (no actual capacity consumption during planning)
		// WARNING: EC has multiple targets! Use AddPendingTask with multiple destinations for proper multi-target handling
		// This simplified function returns zero impact; real EC requires specialized multi-destination calculation
		return StorageSlotChange{VolumeSlots: 0, ShardSlots: 0}, StorageSlotChange{VolumeSlots: 0, ShardSlots: 0}
	case TaskTypeBalance:
		// Balance task: moves volume from source to target
		// Source loses 1 volume, target gains 1 volume
		return StorageSlotChange{VolumeSlots: -1, ShardSlots: 0}, StorageSlotChange{VolumeSlots: 1, ShardSlots: 0}
	case TaskTypeVacuum:
		// Vacuum task: frees space by removing deleted entries, no slot change
		return StorageSlotChange{VolumeSlots: 0, ShardSlots: 0}, StorageSlotChange{VolumeSlots: 0, ShardSlots: 0}
	case TaskTypeReplication:
		// Replication task: creates new replica on target
		return StorageSlotChange{VolumeSlots: 0, ShardSlots: 0}, StorageSlotChange{VolumeSlots: 1, ShardSlots: 0}
	case TaskTypeECBalance:
		// EC balance task: moves individual shard from source to target
		// Source frees 1 shard slot, target gains 1 shard slot
		return StorageSlotChange{VolumeSlots: 0, ShardSlots: -1}, StorageSlotChange{VolumeSlots: 0, ShardSlots: 1}
	default:
		// Unknown task type, assume minimal impact
		glog.Warningf("unhandled task type %s in CalculateTaskStorageImpact, assuming default impact", taskType)
		return StorageSlotChange{VolumeSlots: 0, ShardSlots: 0}, StorageSlotChange{VolumeSlots: 1, ShardSlots: 0}
	}
}

// CalculateECShardStorageImpact calculates storage impact for EC shards specifically
func CalculateECShardStorageImpact(shardCount int32, expectedShardSize int64) StorageSlotChange {
	// EC shards are typically much smaller than full volumes
	// Use shard-level tracking for granular capacity planning
	return StorageSlotChange{VolumeSlots: 0, ShardSlots: shardCount}
}

// CalculateECShardCleanupImpact calculates storage impact for cleaning up existing EC shards
func CalculateECShardCleanupImpact(originalVolumeSize int64) StorageSlotChange {
	// Cleaning up existing EC shards frees shard slots
	// Use the actual EC configuration constants for accurate shard count
	return StorageSlotChange{VolumeSlots: 0, ShardSlots: -int32(erasure_coding.TotalShardsCount)} // Negative = freed capacity
}