* feat(ec_balance): add TaskTypeECBalance constant and protobuf definitions

  Add the ec_balance task type constant to both topology and worker type systems. Define EcBalanceTaskParams, EcShardMoveSpec, and EcBalanceTaskConfig protobuf messages for EC shard balance operations.

* feat(ec_balance): add configuration for EC shard balance task

  Config includes imbalance threshold, min server count, collection filter, disk type, and preferred tags for tag-aware placement.

* feat(ec_balance): add multi-phase EC shard balance detection algorithm

  Implements four detection phases adapted from the ec.balance shell command:

  1. Duplicate shard detection and removal proposals
  2. Cross-rack shard distribution balancing
  3. Within-rack node-level shard balancing
  4. Global shard count equalization across nodes

  Detection is side-effect-free: it builds an EC topology view from ActiveTopology and generates move proposals without executing them.

* feat(ec_balance): add EC shard move task execution

  Implements the shard move sequence using the same VolumeEcShardsCopy, VolumeEcShardsMount, VolumeEcShardsUnmount, and VolumeEcShardsDelete RPCs as the shell ec.balance command. Supports both regular shard moves and dedup-phase deletions (unmount+delete without copy).

* feat(ec_balance): add task registration and scheduling

  Register the EC balance task definition with auto-config update support. Scheduling respects max concurrent limits and worker capabilities.

* feat(ec_balance): add plugin handler for EC shard balance

  Implements the full plugin handler with detection, execution, admin and worker config forms, proposal building, and decision trace reporting. Supports collection/DC/disk type filtering, preferred tag placement, and configurable detection intervals. Auto-registered via init() with the handler registry.

* test(ec_balance): add tests for detection algorithm and plugin handler

  Detection tests cover: duplicate shard detection, cross-rack imbalance, within-rack imbalance, global rebalancing, topology building, collection filtering, and edge cases.

  Handler tests cover: config derivation with clamping, proposal building, protobuf encode/decode round-trip, fallback parameter decoding, capability, and config policy round-trip.

* fix(ec_balance): address PR review feedback and fix CI test failure

  - Update TestWorkerDefaultJobTypes to expect 6 handlers (was 5)
  - Extract threshold constants (ecBalanceMinImbalanceThreshold, etc.) to eliminate magic numbers in Descriptor and config derivation
  - Remove duplicate ShardIdsToUint32 helper (use the erasure_coding package)
  - Add bounds checks for int64→int/uint32 conversions to fix CodeQL integer conversion warnings

* fix(ec_balance): address code review findings

  storage_impact.go:
  - Add a TaskTypeECBalance case returning a shard-level reservation (ShardSlots: -1/+1) instead of falling through to the default, which incorrectly reserves a full volume slot on the target.

  detection.go:
  - Use a dc:rack composite key to avoid cross-DC rack name collisions. Only create rack entries after confirming the node has matching disks.
  - Add an exceedsImbalanceThreshold check to the cross-rack, within-rack, and global phases so trivial skews below the configured threshold are ignored. The dedup phase always runs since duplicates are errors.
  - Reserve destination capacity after each planned move (decrement destNode.freeSlots, update rackShardCount/nodeShardCount) to prevent overbooking the same destination.
  - Skip nodes with freeSlots <= 0 when selecting minNode in global balance to avoid proposing moves to full nodes.
  - Include the loop index and source/target node IDs in TaskID to guarantee uniqueness across moves with the same volumeID/shardID.

  ec_balance_handler.go:
  - Fail fast with an error when shard_id is absent in fallback parameter decoding instead of silently defaulting to shard 0.

  ec_balance_task.go:
  - Delegate GetProgress() to BaseTask.GetProgress() so progress updates from ReportProgressWithStage are visible to callers.
  - Add a fail-fast guard rejecting multiple sources/targets until batch execution is implemented.

  Findings verified but not changed (matches the existing codebase pattern in the vacuum/balance/erasure_coding handlers):
  - register.go globalTaskDef.Config race: same unsynchronized pattern in all 4 task packages.
  - CreateTask using a generated ID: same fmt.Sprintf pattern in all 4 task packages.

* fix(ec_balance): harden parameter decoding, progress tracking, and validation

  ec_balance_handler.go (decodeECBalanceTaskParams):
  - Validate execution-critical fields (Sources[0].Node, ShardIds, Targets[0].Node, ShardIds) after protobuf deserialization.
  - Require source_disk_id and target_disk_id in the legacy fallback path so Targets[0].DiskId is populated for VolumeEcShardsCopyRequest.
  - All error messages reference decodeECBalanceTaskParams and the specific missing field (TaskParams, shard_id, Targets[0].DiskId, EcBalanceTaskParams) for debuggability.

  ec_balance_task.go:
  - Track progress in the ECBalanceTask.progress field, updated via a reportProgress() helper called before ReportProgressWithStage(), so GetProgress() returns real stage progress instead of a stale 0.
  - Validate: require exactly 1 source and 1 target (mirrors the Execute guard), require ShardIds on both, with error messages referencing ECBalanceTask.Validate and the specific field.

* fix(ec_balance): fix dedup execution path, stale topology, collection filter, timeout, and dedupeKey

  detection.go:
  - Dedup moves now set target=source so isDedupPhase() triggers the unmount+delete-only execution path instead of attempting a copy.
  - Apply moves to the in-memory topology between phases via applyMovesToTopology() so subsequent phases see updated shard placement and don't conflict with already-planned moves.
  - detectGlobalImbalance now accepts allowedVids and filters both shard counting and shard selection to respect CollectionFilter.

  ec_balance_task.go:
  - Apply EcBalanceTaskParams.TimeoutSeconds to the context via context.WithTimeout so all RPC operations respect the configured timeout instead of hanging indefinitely.

  ec_balance_handler.go:
  - Include the source node ID in dedupeKey so dedup deletions from different source nodes for the same shard aren't collapsed.
  - Clamp minServerCountRaw and minIntervalRaw lower bounds on int64 before narrowing to int, preventing undefined overflow on 32-bit platforms.

* fix(ec_balance): log warning before cancelling on progress send failure

  Log the error, job ID, job type, progress percentage, and stage before calling execCancel() in the progress callback so failed progress sends are diagnosable instead of silently cancelling.
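As a quick orientation before the file itself, here is a minimal sketch of how a worker-side caller might drive this task; the taskID, params, ctx, and grpcDialOption values are assumed to be supplied by the surrounding handler/plugin code and are not defined in this file:

	// Sketch only: taskID, params, ctx, and grpcDialOption come from the handler.
	task := NewECBalanceTask(taskID, params.VolumeId, params.Collection, grpcDialOption)
	if err := task.Validate(params); err != nil {
		return fmt.Errorf("invalid EC balance params: %v", err)
	}
	glog.V(1).Infof("EC balance task %s estimated duration: %v", taskID, task.EstimateTime(params))
	return task.Execute(ctx, params)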
package ec_balance

import (
	"context"
	"fmt"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/operation"
	"github.com/seaweedfs/seaweedfs/weed/pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/volume_server_pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
	"github.com/seaweedfs/seaweedfs/weed/worker/types"
	"github.com/seaweedfs/seaweedfs/weed/worker/types/base"
	"google.golang.org/grpc"
)

// ECBalanceTask implements a single EC shard move operation.
// The move sequence is: copy+mount on dest → unmount on source → delete on source.
type ECBalanceTask struct {
	*base.BaseTask
	volumeID       uint32
	collection     string
	grpcDialOption grpc.DialOption
	progress       float64
}

// NewECBalanceTask creates a new EC balance task instance
func NewECBalanceTask(id string, volumeID uint32, collection string, grpcDialOption grpc.DialOption) *ECBalanceTask {
	return &ECBalanceTask{
		BaseTask:       base.NewBaseTask(id, types.TaskTypeECBalance),
		volumeID:       volumeID,
		collection:     collection,
		grpcDialOption: grpcDialOption,
	}
}

// Execute performs the EC shard move operation using the same RPC sequence
// as the shell ec.balance command's moveMountedShardToEcNode function.
func (t *ECBalanceTask) Execute(ctx context.Context, params *worker_pb.TaskParams) error {
	if params == nil {
		return fmt.Errorf("task parameters are required")
	}

	if len(params.Sources) == 0 || len(params.Targets) == 0 {
		return fmt.Errorf("sources and targets are required for EC shard move")
	}
	if len(params.Sources) > 1 || len(params.Targets) > 1 {
		return fmt.Errorf("batch EC shard moves not supported: got %d sources and %d targets, expected 1 each", len(params.Sources), len(params.Targets))
	}

	source := params.Sources[0]
	target := params.Targets[0]

	if len(source.ShardIds) == 0 || len(target.ShardIds) == 0 {
		return fmt.Errorf("shard IDs are required in sources and targets")
	}

	sourceAddr := pb.ServerAddress(source.Node)
	targetAddr := pb.ServerAddress(target.Node)

	ecParams := params.GetEcBalanceParams()

	// Apply configured timeout to the context for all RPC operations
	if ecParams != nil && ecParams.TimeoutSeconds > 0 {
		var cancel context.CancelFunc
		ctx, cancel = context.WithTimeout(ctx, time.Duration(ecParams.TimeoutSeconds)*time.Second)
		defer cancel()
	}

	isDedupDelete := ecParams != nil && isDedupPhase(params)

	glog.Infof("EC balance: moving shard(s) %v of volume %d from %s to %s",
		source.ShardIds, params.VolumeId, source.Node, target.Node)

	// For dedup, we only unmount+delete from source (no copy needed)
	if isDedupDelete {
		return t.executeDedupDelete(ctx, params.VolumeId, sourceAddr, source.ShardIds)
	}

	// Step 1: Copy shard to destination and mount
	t.reportProgress(10.0, "Copying EC shard to destination")
	if err := t.copyAndMountShard(ctx, params.VolumeId, sourceAddr, targetAddr, source.ShardIds, target.DiskId); err != nil {
		return fmt.Errorf("copy and mount shard: %v", err)
	}

	// Step 2: Unmount shard on source
	t.reportProgress(50.0, "Unmounting EC shard from source")
	if err := t.unmountShard(ctx, params.VolumeId, sourceAddr, source.ShardIds); err != nil {
		return fmt.Errorf("unmount shard on source: %v", err)
	}

	// Step 3: Delete shard from source
	t.reportProgress(75.0, "Deleting EC shard from source")
	if err := t.deleteShard(ctx, params.VolumeId, params.Collection, sourceAddr, source.ShardIds); err != nil {
		return fmt.Errorf("delete shard on source: %v", err)
	}

	t.reportProgress(100.0, "EC shard move complete")
	glog.Infof("EC balance: successfully moved shard(s) %v of volume %d from %s to %s",
		source.ShardIds, params.VolumeId, source.Node, target.Node)
	return nil
}

// executeDedupDelete removes a duplicate shard without copying
func (t *ECBalanceTask) executeDedupDelete(ctx context.Context, volumeID uint32, sourceAddr pb.ServerAddress, shardIDs []uint32) error {
	t.reportProgress(25.0, "Unmounting duplicate EC shard")
	if err := t.unmountShard(ctx, volumeID, sourceAddr, shardIDs); err != nil {
		return fmt.Errorf("unmount duplicate shard: %v", err)
	}

	t.reportProgress(75.0, "Deleting duplicate EC shard")
	if err := t.deleteShard(ctx, volumeID, t.collection, sourceAddr, shardIDs); err != nil {
		return fmt.Errorf("delete duplicate shard: %v", err)
	}

	t.reportProgress(100.0, "Duplicate shard removed")
	return nil
}

// copyAndMountShard copies EC shard from source to destination and mounts it
func (t *ECBalanceTask) copyAndMountShard(ctx context.Context, volumeID uint32, sourceAddr, targetAddr pb.ServerAddress, shardIDs []uint32, destDiskID uint32) error {
	return operation.WithVolumeServerClient(false, targetAddr, t.grpcDialOption,
		func(client volume_server_pb.VolumeServerClient) error {
			// Copy shard data (if source != target)
			if sourceAddr != targetAddr {
				_, err := client.VolumeEcShardsCopy(ctx, &volume_server_pb.VolumeEcShardsCopyRequest{
					VolumeId:       volumeID,
					Collection:     t.collection,
					ShardIds:       shardIDs,
					CopyEcxFile:    true,
					CopyEcjFile:    true,
					CopyVifFile:    true,
					SourceDataNode: string(sourceAddr),
					DiskId:         destDiskID,
				})
				if err != nil {
					return fmt.Errorf("copy shard(s) %v from %s to %s: %v", shardIDs, sourceAddr, targetAddr, err)
				}
			}

			// Mount the shard on destination
			_, err := client.VolumeEcShardsMount(ctx, &volume_server_pb.VolumeEcShardsMountRequest{
				VolumeId:   volumeID,
				Collection: t.collection,
				ShardIds:   shardIDs,
			})
			if err != nil {
				return fmt.Errorf("mount shard(s) %v on %s: %v", shardIDs, targetAddr, err)
			}

			return nil
		})
}

// unmountShard unmounts EC shards from a server
func (t *ECBalanceTask) unmountShard(ctx context.Context, volumeID uint32, addr pb.ServerAddress, shardIDs []uint32) error {
	return operation.WithVolumeServerClient(false, addr, t.grpcDialOption,
		func(client volume_server_pb.VolumeServerClient) error {
			_, err := client.VolumeEcShardsUnmount(ctx, &volume_server_pb.VolumeEcShardsUnmountRequest{
				VolumeId: volumeID,
				ShardIds: shardIDs,
			})
			return err
		})
}

// deleteShard deletes EC shards from a server
func (t *ECBalanceTask) deleteShard(ctx context.Context, volumeID uint32, collection string, addr pb.ServerAddress, shardIDs []uint32) error {
	return operation.WithVolumeServerClient(false, addr, t.grpcDialOption,
		func(client volume_server_pb.VolumeServerClient) error {
			_, err := client.VolumeEcShardsDelete(ctx, &volume_server_pb.VolumeEcShardsDeleteRequest{
				VolumeId:   volumeID,
				Collection: collection,
				ShardIds:   shardIDs,
			})
			return err
		})
}

// Validate validates the task parameters.
// ECBalanceTask handles exactly one source→target shard move per execution.
func (t *ECBalanceTask) Validate(params *worker_pb.TaskParams) error {
	if params == nil {
		return fmt.Errorf("ECBalanceTask.Validate: TaskParams are required")
	}
	if len(params.Sources) != 1 {
		return fmt.Errorf("ECBalanceTask.Validate: expected exactly 1 source, got %d", len(params.Sources))
	}
	if len(params.Targets) != 1 {
		return fmt.Errorf("ECBalanceTask.Validate: expected exactly 1 target, got %d", len(params.Targets))
	}
	if len(params.Sources[0].ShardIds) == 0 {
		return fmt.Errorf("ECBalanceTask.Validate: Sources[0].ShardIds is empty")
	}
	if len(params.Targets[0].ShardIds) == 0 {
		return fmt.Errorf("ECBalanceTask.Validate: Targets[0].ShardIds is empty")
	}
	return nil
}

// EstimateTime estimates the time for an EC shard move
func (t *ECBalanceTask) EstimateTime(params *worker_pb.TaskParams) time.Duration {
	return 30 * time.Second
}

// GetProgress returns current progress
func (t *ECBalanceTask) GetProgress() float64 {
	return t.progress
}

// reportProgress updates the stored progress and reports it via the base
// task's progress callback so GetProgress() reflects the current stage.
func (t *ECBalanceTask) reportProgress(progress float64, stage string) {
	t.progress = progress
	t.ReportProgressWithStage(progress, stage)
}

// isDedupPhase checks if this is a dedup-phase task (source and target are the same node)
func isDedupPhase(params *worker_pb.TaskParams) bool {
	if len(params.Sources) > 0 && len(params.Targets) > 0 {
		return params.Sources[0].Node == params.Targets[0].Node
	}
	return false
}