Refactor plugin system and migrate worker runtime (#8369)
* admin: add plugin runtime UI page and route wiring
* pb: add plugin gRPC contract and generated bindings
* admin/plugin: implement worker registry, runtime, monitoring, and config store
* admin/dash: wire plugin runtime and expose plugin workflow APIs
* command: add flags to enable plugin runtime
* admin: rename remaining plugin v2 wording to plugin
* admin/plugin: add detectable job type registry helper
* admin/plugin: add scheduled detection and dispatch orchestration
* admin/plugin: prefetch job type descriptors when workers connect
* admin/plugin: add known job type discovery API and UI
* admin/plugin: refresh design doc to match current implementation
* admin/plugin: enforce per-worker scheduler concurrency limits
* admin/plugin: use descriptor runtime defaults for scheduler policy
* admin/ui: auto-load first known plugin job type on page open
* admin/plugin: bootstrap persisted config from descriptor defaults
* admin/plugin: dedupe scheduled proposals by dedupe key
* admin/ui: add job type and state filters for plugin monitoring
* admin/ui: add per-job-type plugin activity summary
* admin/plugin: split descriptor read API from schema refresh
* admin/ui: keep plugin summary metrics global while tables are filtered
* admin/plugin: retry executor reservation before timing out
* admin/plugin: expose scheduler states for monitoring
* admin/ui: show per-job-type scheduler states in plugin monitor
* pb/plugin: rename protobuf package to plugin
* admin/plugin: rename pluginRuntime wiring to plugin
* admin/plugin: remove runtime naming from plugin APIs and UI
* admin/plugin: rename runtime files to plugin naming
* admin/plugin: persist jobs and activities for monitor recovery
* admin/plugin: lease one detector worker per job type
* admin/ui: show worker load from plugin heartbeats
* admin/plugin: skip stale workers for detector and executor picks
* plugin/worker: add plugin worker command and stream runtime scaffold
* plugin/worker: implement vacuum detect and execute handlers
* admin/plugin: document external vacuum plugin worker starter
* command: update plugin.worker help to reflect implemented flow
* command/admin: drop legacy Plugin V2 label
* plugin/worker: validate vacuum job type and respect min interval
* plugin/worker: test no-op detect when min interval not elapsed
* command/admin: document plugin.worker external process
* plugin/worker: advertise configured concurrency in hello
* command/plugin.worker: add jobType handler selection
* command/plugin.worker: test handler selection by job type
* command/plugin.worker: persist worker id in workingDir
* admin/plugin: document plugin.worker jobType and workingDir flags
* plugin/worker: support cancel request for in-flight work
* plugin/worker: test cancel request acknowledgements
* command/plugin.worker: document workingDir and jobType behavior
* plugin/worker: emit executor activity events for monitor
* plugin/worker: test executor activity builder
* admin/plugin: send last successful run in detection request
* admin/plugin: send cancel request when detect or execute context ends
* admin/plugin: document worker cancel request responsibility
* admin/handlers: expose plugin scheduler states API in no-auth mode
* admin/handlers: test plugin scheduler states route registration
* admin/plugin: keep worker id on worker-generated activity records
* admin/plugin: test worker id propagation in monitor activities
* admin/dash: always initialize plugin service
* command/admin: remove plugin enable flags and default to enabled
* admin/dash: drop pluginEnabled constructor parameter
* admin/plugin UI: stop checking plugin enabled state
* admin/plugin: remove docs for plugin enable flags
* admin/dash: remove unused plugin enabled check method
* admin/dash: fallback to in-memory plugin init when dataDir fails
* admin/plugin API: expose worker gRPC port in status
* command/plugin.worker: resolve admin gRPC port via plugin status
* split plugin UI into overview/configuration/monitoring pages
* Update layout_templ.go
* add volume_balance plugin worker handler
* wire plugin.worker CLI for volume_balance job type
* add erasure_coding plugin worker handler
* wire plugin.worker CLI for erasure_coding job type
* support multi-job handlers in plugin worker runtime
* allow plugin.worker jobType as comma-separated list
* admin/plugin UI: rename to Workers and simplify config view
* plugin worker: queue detection requests instead of capacity reject
* Update plugin_worker.go
* plugin volume_balance: remove force_move/timeout from worker config UI
* plugin erasure_coding: enforce local working dir and cleanup
* admin/plugin UI: rename admin settings to job scheduling
* admin/plugin UI: persist and robustly render detection results
* admin/plugin: record and return detection trace metadata
* admin/plugin UI: show detection process and decision trace
* plugin: surface detector decision trace as activities
* mini: start a plugin worker by default
* admin/plugin UI: split monitoring into detection and execution tabs
* plugin worker: emit detection decision trace for EC and balance
* admin workers UI: split monitoring into detection and execution pages
* plugin scheduler: skip proposals for active assigned/running jobs
* admin workers UI: add job queue tab
* plugin worker: add dummy stress detector and executor job type
* admin workers UI: reorder tabs to detection queue execution
* admin workers UI: regenerate plugin template
* plugin defaults: include dummy stress and add stress tests
* plugin dummy stress: rotate detection selections across runs
* plugin scheduler: remove cross-run proposal dedupe
* plugin queue: track pending scheduled jobs
* plugin scheduler: wait for executor capacity before dispatch
* plugin scheduler: skip detection when waiting backlog is high
* plugin: add disk-backed job detail API and persistence
* admin ui: show plugin job detail modal from job id links
* plugin: generate unique job ids instead of reusing proposal ids
* plugin worker: emit heartbeats on work state changes
* plugin registry: round-robin tied executor and detector picks
* add temporary EC overnight stress runner
* plugin job details: persist and render EC execution plans
* ec volume details: color data and parity shard badges
* shard labels: keep parity ids numeric and color-only distinction
* admin: remove legacy maintenance UI routes and templates
* admin: remove dead maintenance endpoint helpers
* Update layout_templ.go
* remove dummy_stress worker and command support
* refactor plugin UI to job-type top tabs and sub-tabs
* migrate weed worker command to plugin runtime
* remove plugin.worker command and keep worker runtime with metrics
* update helm worker args for jobType and execution flags
* set plugin scheduling defaults to global 16 and per-worker 4
* stress: fix RPC context reuse and remove redundant variables in ec_stress_runner
* admin/plugin: fix lifecycle races, safe channel operations, and terminal state constants
* admin/dash: randomize job IDs and fix priority zero-value overwrite in plugin API
* admin/handlers: implement buffered rendering to prevent response corruption
* admin/plugin: implement debounced persistence flusher and optimize BuildJobDetail memory lookups
* admin/plugin: fix priority overwrite and implement bounded wait in scheduler reserve
* admin/plugin: implement atomic file writes and fix run record side effects
* admin/plugin: use P prefix for parity shard labels in execution plans
* admin/plugin: enable parallel execution for cancellation tests
* admin: refactor time.Time fields to pointers for better JSON omitempty support
* admin/plugin: implement pointer-safe time assignments and comparisons in plugin core
* admin/plugin: fix time assignment and sorting logic in plugin monitor after pointer refactor
* admin/plugin: update scheduler activity tracking to use time pointers
* admin/plugin: fix time-based run history trimming after pointer refactor
* admin/dash: fix JobSpec struct literal in plugin API after pointer refactor
* admin/view: add D/P prefixes to EC shard badges for UI consistency
* admin/plugin: use lifecycle-aware context for schema prefetching
* Update ec_volume_details_templ.go
* admin/stress: fix proposal sorting and log volume cleanup errors
* stress: refine ec stress runner with math/rand and collection name
  - Added Collection field to VolumeEcShardsDeleteRequest for correct filename construction.
  - Replaced crypto/rand with seeded math/rand PRNG for bulk payloads.
  - Added documentation for EcMinAge zero-value behavior.
  - Added logging for ignored errors in volume/shard deletion.
* admin: return internal server error for plugin store failures
  Changed error status code from 400 Bad Request to 500 Internal Server Error for failures in GetPluginJobDetail to correctly reflect server-side errors.
* admin: implement safe channel sends and graceful shutdown sync
  - Added sync.WaitGroup to Plugin struct to manage background goroutines.
  - Implemented safeSendCh helper using recover() to prevent panics on closed channels.
  - Ensured Shutdown() waits for all background operations to complete.
* admin: robustify plugin monitor with nil-safe time and record init
  - Standardized nil-safe assignment for *time.Time pointers (CreatedAt, UpdatedAt, CompletedAt).
  - Ensured persistJobDetailSnapshot initializes new records correctly if they don't exist on disk.
  - Fixed debounced persistence to trigger immediate write on job completion.
* admin: improve scheduler shutdown behavior and logic guards
  - Replaced brittle error string matching with explicit r.shutdownCh selection for shutdown detection.
  - Removed redundant nil guard in buildScheduledJobSpec.
  - Standardized WaitGroup usage for schedulerLoop.
* admin: implement deep copy for job parameters and atomic write fixes
  - Implemented deepCopyGenericValue and used it in cloneTrackedJob to prevent shared state.
  - Ensured atomicWriteFile creates parent directories before writing.
* admin: remove unreachable branch in shard classification
  Removed an unreachable 'totalShards <= 0' check in classifyShardID as dataShards and parityShards are already guarded.
* admin: secure UI links and use canonical shard constants
  - Added rel="noopener noreferrer" to external links for security.
  - Replaced magic number 14 with erasure_coding.TotalShardsCount.
  - Used renderEcShardBadge for missing shard list consistency.
* admin: stabilize plugin tests and fix regressions
  - Composed a robust plugin_monitor_test.go to handle asynchronous persistence.
  - Updated all time.Time literals to use timeToPtr helper.
  - Added explicit Shutdown() calls in tests to synchronize with debounced writes.
  - Fixed syntax errors and orphaned struct literals in tests.
* Potential fix for code scanning alert no. 278: Slice memory allocation with excessive size value
  Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* Potential fix for code scanning alert no. 283: Uncontrolled data used in path expression
  Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* admin: finalize refinements for error handling, scheduler, and race fixes
  - Standardized HTTP 500 status codes for store failures in plugin_api.go.
  - Tracked scheduled detection goroutines with sync.WaitGroup for safe shutdown.
  - Fixed race condition in safeSendDetectionComplete by extracting channel under lock.
  - Implemented deep copy for JobActivity details.
  - Used defaultDirPerm constant in atomicWriteFile.
* test(ec): migrate admin dockertest to plugin APIs
* admin/plugin_api: fix RunPluginJobTypeAPI to return 500 for server-side detection/filter errors
* admin/plugin_api: fix ExecutePluginJobAPI to return 500 for job execution failures
* admin/plugin_api: limit parseProtoJSONBody request body to 1MB to prevent unbounded memory usage
* admin/plugin: consolidate regex to package-level validJobTypePattern; add char validation to sanitizeJobID
* admin/plugin: fix racy Shutdown channel close with sync.Once
* admin/plugin: track sendLoop and recv goroutines in WorkerStream with r.wg
* admin/plugin: document writeProtoFiles atomicity — .pb is source of truth, .json is human-readable only
* admin/plugin: extract activityLess helper to deduplicate nil-safe OccurredAt sort comparators
* test/ec: check http.NewRequest errors to prevent nil req panics
* test/ec: replace deprecated ioutil/math/rand, fix stale step comment 5.1→3.1
* plugin(ec): raise default detection and scheduling throughput limits
* topology: include empty disks in volume list and EC capacity fallback
* topology: remove hard 10-task cap for detection planning
* Update ec_volume_details_templ.go
* adjust default
* fix tests

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
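Two of the fixes above lean on standard Go patterns. The safeSendCh helper ("prevent panics on closed channels") can be sketched roughly as follows; this is an illustrative reconstruction from the commit message, and the generic signature is an assumption, not the exact code in this change:

    // safeSendCh sends v on ch and recovers from the panic raised if ch
    // was closed concurrently during shutdown, reporting whether the
    // send actually happened. (Hypothetical signature for illustration.)
    func safeSendCh[T any](ch chan<- T, v T) (sent bool) {
        defer func() {
            if recover() != nil {
                sent = false
            }
        }()
        ch <- v
        return true
    }

Likewise, the atomicWriteFile fix ("creates parent directories before writing") follows the usual temp-file-plus-rename shape. Again a hedged sketch, with the 0o755 directory permission standing in for the defaultDirPerm constant mentioned above:

    func atomicWriteFile(path string, data []byte, perm os.FileMode) error {
        dir := filepath.Dir(path)
        if err := os.MkdirAll(dir, 0o755); err != nil { // ensure parent dirs exist first
            return err
        }
        tmp, err := os.CreateTemp(dir, ".tmp-*")
        if err != nil {
            return err
        }
        defer os.Remove(tmp.Name()) // cleanup on error; a no-op once the rename succeeds
        if _, err := tmp.Write(data); err != nil {
            tmp.Close()
            return err
        }
        if err := tmp.Close(); err != nil {
            return err
        }
        if err := os.Chmod(tmp.Name(), perm); err != nil {
            return err
        }
        // Rename is atomic on the same filesystem, so readers never see a partial file.
        return os.Rename(tmp.Name(), path)
    }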
weed/plugin/worker/volume_balance_handler.go (new file, 826 lines)
@@ -0,0 +1,826 @@
package pluginworker

import (
	"context"
	"fmt"
	"sort"
	"strings"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/admin/topology"
	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
	balancetask "github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
	workertypes "github.com/seaweedfs/seaweedfs/weed/worker/types"
	"google.golang.org/grpc"
	"google.golang.org/protobuf/proto"
)

const (
	defaultBalanceTimeoutSeconds = int32(10 * 60)
)

type volumeBalanceWorkerConfig struct {
	TaskConfig         *balancetask.Config
	MinIntervalSeconds int
}

// VolumeBalanceHandler is the plugin job handler for volume balancing.
type VolumeBalanceHandler struct {
	grpcDialOption grpc.DialOption
}

func NewVolumeBalanceHandler(grpcDialOption grpc.DialOption) *VolumeBalanceHandler {
	return &VolumeBalanceHandler{grpcDialOption: grpcDialOption}
}

func (h *VolumeBalanceHandler) Capability() *plugin_pb.JobTypeCapability {
	return &plugin_pb.JobTypeCapability{
		JobType:                 "volume_balance",
		CanDetect:               true,
		CanExecute:              true,
		MaxDetectionConcurrency: 1,
		MaxExecutionConcurrency: 1,
		DisplayName:             "Volume Balance",
		Description:             "Moves volumes between servers to reduce skew in volume distribution",
	}
}

func (h *VolumeBalanceHandler) Descriptor() *plugin_pb.JobTypeDescriptor {
	return &plugin_pb.JobTypeDescriptor{
		JobType:           "volume_balance",
		DisplayName:       "Volume Balance",
		Description:       "Detect and execute volume moves to balance server load",
		Icon:              "fas fa-balance-scale",
		DescriptorVersion: 1,
		AdminConfigForm: &plugin_pb.ConfigForm{
			FormId:      "volume-balance-admin",
			Title:       "Volume Balance Admin Config",
			Description: "Admin-side controls for volume balance detection scope.",
			Sections: []*plugin_pb.ConfigSection{
				{
					SectionId:   "scope",
					Title:       "Scope",
					Description: "Optional filters applied before balance detection.",
					Fields: []*plugin_pb.ConfigField{
						{
							Name:        "collection_filter",
							Label:       "Collection Filter",
							Description: "Only detect balance opportunities in this collection when set.",
							Placeholder: "all collections",
							FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_STRING,
							Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_TEXT,
						},
					},
				},
			},
			DefaultValues: map[string]*plugin_pb.ConfigValue{
				"collection_filter": {
					Kind: &plugin_pb.ConfigValue_StringValue{StringValue: ""},
				},
			},
		},
		WorkerConfigForm: &plugin_pb.ConfigForm{
			FormId:      "volume-balance-worker",
			Title:       "Volume Balance Worker Config",
			Description: "Worker-side balance thresholds.",
			Sections: []*plugin_pb.ConfigSection{
				{
					SectionId:   "thresholds",
					Title:       "Detection Thresholds",
					Description: "Controls for when balance jobs should be proposed.",
					Fields: []*plugin_pb.ConfigField{
						{
							Name:        "imbalance_threshold",
							Label:       "Imbalance Threshold",
							Description: "Detect when skew exceeds this ratio.",
							FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_DOUBLE,
							Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_NUMBER,
							Required:    true,
							MinValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0}},
							MaxValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 1}},
						},
						{
							Name:        "min_server_count",
							Label:       "Minimum Server Count",
							Description: "Require at least this many servers for balancing.",
							FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_INT64,
							Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_NUMBER,
							Required:    true,
							MinValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 2}},
						},
						{
							Name:        "min_interval_seconds",
							Label:       "Minimum Detection Interval (s)",
							Description: "Skip detection if the last successful run is more recent than this interval.",
							FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_INT64,
							Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_NUMBER,
							Required:    true,
							MinValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 0}},
						},
					},
				},
			},
			DefaultValues: map[string]*plugin_pb.ConfigValue{
				"imbalance_threshold": {
					Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0.2},
				},
				"min_server_count": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 2},
				},
				"min_interval_seconds": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 30 * 60},
				},
			},
		},
		AdminRuntimeDefaults: &plugin_pb.AdminRuntimeDefaults{
			Enabled:                       true,
			DetectionIntervalSeconds:      30 * 60,
			DetectionTimeoutSeconds:       120,
			MaxJobsPerDetection:           100,
			GlobalExecutionConcurrency:    16,
			PerWorkerExecutionConcurrency: 4,
			RetryLimit:                    1,
			RetryBackoffSeconds:           15,
		},
		WorkerDefaultValues: map[string]*plugin_pb.ConfigValue{
			"imbalance_threshold": {
				Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0.2},
			},
			"min_server_count": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 2},
			},
			"min_interval_seconds": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 30 * 60},
			},
		},
	}
}
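
// Detect runs volume_balance detection: it honors the configured minimum
// interval, pulls volume metrics from the masters, delegates to the shared
// balance detection logic, and streams proposals plus a completion event
// back through the sender.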
func (h *VolumeBalanceHandler) Detect(
	ctx context.Context,
	request *plugin_pb.RunDetectionRequest,
	sender DetectionSender,
) error {
	if request == nil {
		return fmt.Errorf("run detection request is nil")
	}
	if sender == nil {
		return fmt.Errorf("detection sender is nil")
	}
	if request.JobType != "" && request.JobType != "volume_balance" {
		return fmt.Errorf("job type %q is not handled by volume_balance worker", request.JobType)
	}

	workerConfig := deriveBalanceWorkerConfig(request.GetWorkerConfigValues())
	if shouldSkipDetectionByInterval(request.GetLastSuccessfulRun(), workerConfig.MinIntervalSeconds) {
		minInterval := time.Duration(workerConfig.MinIntervalSeconds) * time.Second
		_ = sender.SendActivity(buildDetectorActivity(
			"skipped_by_interval",
			fmt.Sprintf("VOLUME BALANCE: Detection skipped due to min interval (%s)", minInterval),
			map[string]*plugin_pb.ConfigValue{
				"min_interval_seconds": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(workerConfig.MinIntervalSeconds)},
				},
			},
		))
		if err := sender.SendProposals(&plugin_pb.DetectionProposals{
			JobType:   "volume_balance",
			Proposals: []*plugin_pb.JobProposal{},
			HasMore:   false,
		}); err != nil {
			return err
		}
		return sender.SendComplete(&plugin_pb.DetectionComplete{
			JobType:        "volume_balance",
			Success:        true,
			TotalProposals: 0,
		})
	}

	collectionFilter := strings.TrimSpace(readStringConfig(request.GetAdminConfigValues(), "collection_filter", ""))
	masters := make([]string, 0)
	if request.ClusterContext != nil {
		masters = append(masters, request.ClusterContext.MasterGrpcAddresses...)
	}

	metrics, activeTopology, err := h.collectVolumeMetrics(ctx, masters, collectionFilter)
	if err != nil {
		return err
	}

	clusterInfo := &workertypes.ClusterInfo{ActiveTopology: activeTopology}
	results, err := balancetask.Detection(metrics, clusterInfo, workerConfig.TaskConfig)
	if err != nil {
		return err
	}
	if traceErr := emitVolumeBalanceDetectionDecisionTrace(sender, metrics, workerConfig.TaskConfig, results); traceErr != nil {
		glog.Warningf("Plugin worker failed to emit volume_balance detection trace: %v", traceErr)
	}

	maxResults := int(request.MaxResults)
	hasMore := false
	if maxResults > 0 && len(results) > maxResults {
		hasMore = true
		results = results[:maxResults]
	}

	proposals := make([]*plugin_pb.JobProposal, 0, len(results))
	for _, result := range results {
		proposal, proposalErr := buildVolumeBalanceProposal(result)
		if proposalErr != nil {
			glog.Warningf("Plugin worker skip invalid volume_balance proposal: %v", proposalErr)
			continue
		}
		proposals = append(proposals, proposal)
	}

	if err := sender.SendProposals(&plugin_pb.DetectionProposals{
		JobType:   "volume_balance",
		Proposals: proposals,
		HasMore:   hasMore,
	}); err != nil {
		return err
	}

	return sender.SendComplete(&plugin_pb.DetectionComplete{
		JobType:        "volume_balance",
		Success:        true,
		TotalProposals: int32(len(proposals)),
	})
}
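
// emitVolumeBalanceDetectionDecisionTrace sends a summary activity plus up
// to three per-disk-type detail activities so the admin monitor can show
// why balance tasks were or were not created.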
func emitVolumeBalanceDetectionDecisionTrace(
	sender DetectionSender,
	metrics []*workertypes.VolumeHealthMetrics,
	taskConfig *balancetask.Config,
	results []*workertypes.TaskDetectionResult,
) error {
	if sender == nil || taskConfig == nil {
		return nil
	}

	totalVolumes := len(metrics)
	summaryMessage := ""
	if len(results) == 0 {
		summaryMessage = fmt.Sprintf(
			"BALANCE: No tasks created for %d volumes across %d disk type(s). Threshold=%.1f%%, MinServers=%d",
			totalVolumes,
			countBalanceDiskTypes(metrics),
			taskConfig.ImbalanceThreshold*100,
			taskConfig.MinServerCount,
		)
	} else {
		summaryMessage = fmt.Sprintf(
			"BALANCE: Created %d task(s) for %d volumes across %d disk type(s). Threshold=%.1f%%, MinServers=%d",
			len(results),
			totalVolumes,
			countBalanceDiskTypes(metrics),
			taskConfig.ImbalanceThreshold*100,
			taskConfig.MinServerCount,
		)
	}

	if err := sender.SendActivity(buildDetectorActivity("decision_summary", summaryMessage, map[string]*plugin_pb.ConfigValue{
		"total_volumes": {
			Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(totalVolumes)},
		},
		"selected_tasks": {
			Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(len(results))},
		},
		"imbalance_threshold_percent": {
			Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: taskConfig.ImbalanceThreshold * 100},
		},
		"min_server_count": {
			Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(taskConfig.MinServerCount)},
		},
	})); err != nil {
		return err
	}

	volumesByDiskType := make(map[string][]*workertypes.VolumeHealthMetrics)
	for _, metric := range metrics {
		if metric == nil {
			continue
		}
		diskType := strings.TrimSpace(metric.DiskType)
		if diskType == "" {
			diskType = "unknown"
		}
		volumesByDiskType[diskType] = append(volumesByDiskType[diskType], metric)
	}

	diskTypes := make([]string, 0, len(volumesByDiskType))
	for diskType := range volumesByDiskType {
		diskTypes = append(diskTypes, diskType)
	}
	sort.Strings(diskTypes)

	const minVolumeCount = 2
	detailCount := 0
	for _, diskType := range diskTypes {
		diskMetrics := volumesByDiskType[diskType]
		volumeCount := len(diskMetrics)
		if volumeCount < minVolumeCount {
			message := fmt.Sprintf(
				"BALANCE [%s]: No tasks created - cluster too small (%d volumes, need ≥%d)",
				diskType,
				volumeCount,
				minVolumeCount,
			)
			if err := sender.SendActivity(buildDetectorActivity("decision_disk_type", message, map[string]*plugin_pb.ConfigValue{
				"disk_type": {
					Kind: &plugin_pb.ConfigValue_StringValue{StringValue: diskType},
				},
				"volume_count": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(volumeCount)},
				},
				"required_min_volume_count": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: minVolumeCount},
				},
			})); err != nil {
				return err
			}
			detailCount++
			if detailCount >= 3 {
				break
			}
			continue
		}

		serverVolumeCounts := make(map[string]int)
		for _, metric := range diskMetrics {
			serverVolumeCounts[metric.Server]++
		}
		if len(serverVolumeCounts) < taskConfig.MinServerCount {
			message := fmt.Sprintf(
				"BALANCE [%s]: No tasks created - too few servers (%d servers, need ≥%d)",
				diskType,
				len(serverVolumeCounts),
				taskConfig.MinServerCount,
			)
			if err := sender.SendActivity(buildDetectorActivity("decision_disk_type", message, map[string]*plugin_pb.ConfigValue{
				"disk_type": {
					Kind: &plugin_pb.ConfigValue_StringValue{StringValue: diskType},
				},
				"server_count": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(len(serverVolumeCounts))},
				},
				"required_min_server_count": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(taskConfig.MinServerCount)},
				},
			})); err != nil {
				return err
			}
			detailCount++
			if detailCount >= 3 {
				break
			}
			continue
		}

		totalDiskTypeVolumes := len(diskMetrics)
		avgVolumesPerServer := float64(totalDiskTypeVolumes) / float64(len(serverVolumeCounts))
		maxVolumes := 0
		minVolumes := totalDiskTypeVolumes
		maxServer := ""
		minServer := ""
		for server, count := range serverVolumeCounts {
			if count > maxVolumes {
				maxVolumes = count
				maxServer = server
			}
			if count < minVolumes {
				minVolumes = count
				minServer = server
			}
		}

		imbalanceRatio := 0.0
		if avgVolumesPerServer > 0 {
			imbalanceRatio = float64(maxVolumes-minVolumes) / avgVolumesPerServer
		}

		stage := "decision_disk_type"
		message := ""
		if imbalanceRatio <= taskConfig.ImbalanceThreshold {
			message = fmt.Sprintf(
				"BALANCE [%s]: No tasks created - cluster well balanced. Imbalance=%.1f%% (threshold=%.1f%%). Max=%d volumes on %s, Min=%d on %s, Avg=%.1f",
				diskType,
				imbalanceRatio*100,
				taskConfig.ImbalanceThreshold*100,
				maxVolumes,
				maxServer,
				minVolumes,
				minServer,
				avgVolumesPerServer,
			)
		} else {
			stage = "decision_candidate"
			message = fmt.Sprintf(
				"BALANCE [%s]: Candidate detected. Imbalance=%.1f%% (threshold=%.1f%%). Max=%d volumes on %s, Min=%d on %s, Avg=%.1f",
				diskType,
				imbalanceRatio*100,
				taskConfig.ImbalanceThreshold*100,
				maxVolumes,
				maxServer,
				minVolumes,
				minServer,
				avgVolumesPerServer,
			)
		}

		if err := sender.SendActivity(buildDetectorActivity(stage, message, map[string]*plugin_pb.ConfigValue{
			"disk_type": {
				Kind: &plugin_pb.ConfigValue_StringValue{StringValue: diskType},
			},
			"volume_count": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(totalDiskTypeVolumes)},
			},
			"server_count": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(len(serverVolumeCounts))},
			},
			"imbalance_percent": {
				Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: imbalanceRatio * 100},
			},
			"threshold_percent": {
				Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: taskConfig.ImbalanceThreshold * 100},
			},
			"max_volumes": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(maxVolumes)},
			},
			"min_volumes": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(minVolumes)},
			},
			"avg_volumes_per_server": {
				Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: avgVolumesPerServer},
			},
		})); err != nil {
			return err
		}

		detailCount++
		if detailCount >= 3 {
			break
		}
	}

	return nil
}
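
// countBalanceDiskTypes returns the number of distinct disk types seen in
// the metrics; metrics with an empty disk type are grouped as "unknown".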
func countBalanceDiskTypes(metrics []*workertypes.VolumeHealthMetrics) int {
	diskTypes := make(map[string]struct{})
	for _, metric := range metrics {
		if metric == nil {
			continue
		}
		diskType := strings.TrimSpace(metric.DiskType)
		if diskType == "" {
			diskType = "unknown"
		}
		diskTypes[diskType] = struct{}{}
	}
	return len(diskTypes)
}
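
// Execute decodes the task parameters carried by the job spec, streams
// progress updates while the balance task runs, and reports the final
// success or failure state to the sender.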
func (h *VolumeBalanceHandler) Execute(
	ctx context.Context,
	request *plugin_pb.ExecuteJobRequest,
	sender ExecutionSender,
) error {
	if request == nil || request.Job == nil {
		return fmt.Errorf("execute request/job is nil")
	}
	if sender == nil {
		return fmt.Errorf("execution sender is nil")
	}
	if request.Job.JobType != "" && request.Job.JobType != "volume_balance" {
		return fmt.Errorf("job type %q is not handled by volume_balance worker", request.Job.JobType)
	}

	params, err := decodeVolumeBalanceTaskParams(request.Job)
	if err != nil {
		return err
	}
	if len(params.Sources) == 0 || strings.TrimSpace(params.Sources[0].Node) == "" {
		return fmt.Errorf("volume balance source node is required")
	}
	if len(params.Targets) == 0 || strings.TrimSpace(params.Targets[0].Node) == "" {
		return fmt.Errorf("volume balance target node is required")
	}

	applyBalanceExecutionDefaults(params)

	task := balancetask.NewBalanceTask(
		request.Job.JobId,
		params.Sources[0].Node,
		params.VolumeId,
		params.Collection,
	)
	task.SetProgressCallback(func(progress float64, stage string) {
		message := fmt.Sprintf("balance progress %.0f%%", progress)
		if strings.TrimSpace(stage) != "" {
			message = stage
		}
		_ = sender.SendProgress(&plugin_pb.JobProgressUpdate{
			JobId:           request.Job.JobId,
			JobType:         request.Job.JobType,
			State:           plugin_pb.JobState_JOB_STATE_RUNNING,
			ProgressPercent: progress,
			Stage:           stage,
			Message:         message,
			Activities: []*plugin_pb.ActivityEvent{
				buildExecutorActivity(stage, message),
			},
		})
	})

	if err := sender.SendProgress(&plugin_pb.JobProgressUpdate{
		JobId:           request.Job.JobId,
		JobType:         request.Job.JobType,
		State:           plugin_pb.JobState_JOB_STATE_ASSIGNED,
		ProgressPercent: 0,
		Stage:           "assigned",
		Message:         "volume balance job accepted",
		Activities: []*plugin_pb.ActivityEvent{
			buildExecutorActivity("assigned", "volume balance job accepted"),
		},
	}); err != nil {
		return err
	}

	if err := task.Execute(ctx, params); err != nil {
		_ = sender.SendProgress(&plugin_pb.JobProgressUpdate{
			JobId:           request.Job.JobId,
			JobType:         request.Job.JobType,
			State:           plugin_pb.JobState_JOB_STATE_FAILED,
			ProgressPercent: 100,
			Stage:           "failed",
			Message:         err.Error(),
			Activities: []*plugin_pb.ActivityEvent{
				buildExecutorActivity("failed", err.Error()),
			},
		})
		return err
	}

	sourceNode := params.Sources[0].Node
	targetNode := params.Targets[0].Node
	resultSummary := fmt.Sprintf("volume %d moved from %s to %s", params.VolumeId, sourceNode, targetNode)

	return sender.SendCompleted(&plugin_pb.JobCompleted{
		JobId:   request.Job.JobId,
		JobType: request.Job.JobType,
		Success: true,
		Result: &plugin_pb.JobResult{
			Summary: resultSummary,
			OutputValues: map[string]*plugin_pb.ConfigValue{
				"volume_id": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(params.VolumeId)},
				},
				"source_server": {
					Kind: &plugin_pb.ConfigValue_StringValue{StringValue: sourceNode},
				},
				"target_server": {
					Kind: &plugin_pb.ConfigValue_StringValue{StringValue: targetNode},
				},
			},
		},
		Activities: []*plugin_pb.ActivityEvent{
			buildExecutorActivity("completed", resultSummary),
		},
	})
}

func (h *VolumeBalanceHandler) collectVolumeMetrics(
	ctx context.Context,
	masterAddresses []string,
	collectionFilter string,
) ([]*workertypes.VolumeHealthMetrics, *topology.ActiveTopology, error) {
	// Reuse the same master topology fetch/build flow used by the vacuum handler.
	helper := &VacuumHandler{grpcDialOption: h.grpcDialOption}
	return helper.collectVolumeMetrics(ctx, masterAddresses, collectionFilter)
}
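
// deriveBalanceWorkerConfig merges worker config values onto the default
// balance task config, clamping imbalance_threshold to [0, 1] and
// min_server_count to at least 2.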
func deriveBalanceWorkerConfig(values map[string]*plugin_pb.ConfigValue) *volumeBalanceWorkerConfig {
	taskConfig := balancetask.NewDefaultConfig()

	imbalanceThreshold := readDoubleConfig(values, "imbalance_threshold", taskConfig.ImbalanceThreshold)
	if imbalanceThreshold < 0 {
		imbalanceThreshold = 0
	}
	if imbalanceThreshold > 1 {
		imbalanceThreshold = 1
	}
	taskConfig.ImbalanceThreshold = imbalanceThreshold

	minServerCount := int(readInt64Config(values, "min_server_count", int64(taskConfig.MinServerCount)))
	if minServerCount < 2 {
		minServerCount = 2
	}
	taskConfig.MinServerCount = minServerCount

	minIntervalSeconds := int(readInt64Config(values, "min_interval_seconds", 0))
	if minIntervalSeconds < 0 {
		minIntervalSeconds = 0
	}

	return &volumeBalanceWorkerConfig{
		TaskConfig:         taskConfig,
		MinIntervalSeconds: minIntervalSeconds,
	}
}
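
// buildVolumeBalanceProposal converts one detection result into a job
// proposal, embedding the marshaled task params and a per-volume (and
// per-collection) dedupe key so duplicate proposals can be dropped.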
func buildVolumeBalanceProposal(
	result *workertypes.TaskDetectionResult,
) (*plugin_pb.JobProposal, error) {
	if result == nil {
		return nil, fmt.Errorf("task detection result is nil")
	}
	if result.TypedParams == nil {
		return nil, fmt.Errorf("missing typed params for volume %d", result.VolumeID)
	}

	params := proto.Clone(result.TypedParams).(*worker_pb.TaskParams)
	applyBalanceExecutionDefaults(params)

	paramsPayload, err := proto.Marshal(params)
	if err != nil {
		return nil, fmt.Errorf("marshal task params: %w", err)
	}

	proposalID := strings.TrimSpace(result.TaskID)
	if proposalID == "" {
		proposalID = fmt.Sprintf("volume-balance-%d-%d", result.VolumeID, time.Now().UnixNano())
	}

	dedupeKey := fmt.Sprintf("volume_balance:%d", result.VolumeID)
	if result.Collection != "" {
		dedupeKey += ":" + result.Collection
	}

	sourceNode := ""
	if len(params.Sources) > 0 {
		sourceNode = strings.TrimSpace(params.Sources[0].Node)
	}
	targetNode := ""
	if len(params.Targets) > 0 {
		targetNode = strings.TrimSpace(params.Targets[0].Node)
	}

	summary := fmt.Sprintf("Balance volume %d", result.VolumeID)
	if sourceNode != "" && targetNode != "" {
		summary = fmt.Sprintf("Move volume %d from %s to %s", result.VolumeID, sourceNode, targetNode)
	}

	return &plugin_pb.JobProposal{
		ProposalId: proposalID,
		DedupeKey:  dedupeKey,
		JobType:    "volume_balance",
		Priority:   mapTaskPriority(result.Priority),
		Summary:    summary,
		Detail:     strings.TrimSpace(result.Reason),
		Parameters: map[string]*plugin_pb.ConfigValue{
			"task_params_pb": {
				Kind: &plugin_pb.ConfigValue_BytesValue{BytesValue: paramsPayload},
			},
			"volume_id": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(result.VolumeID)},
			},
			"source_server": {
				Kind: &plugin_pb.ConfigValue_StringValue{StringValue: sourceNode},
			},
			"target_server": {
				Kind: &plugin_pb.ConfigValue_StringValue{StringValue: targetNode},
			},
			"collection": {
				Kind: &plugin_pb.ConfigValue_StringValue{StringValue: result.Collection},
			},
		},
		Labels: map[string]string{
			"task_type":     "balance",
			"volume_id":     fmt.Sprintf("%d", result.VolumeID),
			"collection":    result.Collection,
			"source_node":   sourceNode,
			"target_node":   targetNode,
			"source_server": sourceNode,
			"target_server": targetNode,
		},
	}, nil
}
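
// decodeVolumeBalanceTaskParams prefers the embedded task_params_pb payload
// and falls back to reconstructing TaskParams from individual job
// parameters (volume_id, source_server, target_server, collection, ...).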
func decodeVolumeBalanceTaskParams(job *plugin_pb.JobSpec) (*worker_pb.TaskParams, error) {
	if job == nil {
		return nil, fmt.Errorf("job spec is nil")
	}

	if payload := readBytesConfig(job.Parameters, "task_params_pb"); len(payload) > 0 {
		params := &worker_pb.TaskParams{}
		if err := proto.Unmarshal(payload, params); err != nil {
			return nil, fmt.Errorf("unmarshal task_params_pb: %w", err)
		}
		if params.TaskId == "" {
			params.TaskId = job.JobId
		}
		return params, nil
	}

	volumeID := readInt64Config(job.Parameters, "volume_id", 0)
	sourceNode := strings.TrimSpace(readStringConfig(job.Parameters, "source_server", ""))
	if sourceNode == "" {
		sourceNode = strings.TrimSpace(readStringConfig(job.Parameters, "server", ""))
	}
	targetNode := strings.TrimSpace(readStringConfig(job.Parameters, "target_server", ""))
	if targetNode == "" {
		targetNode = strings.TrimSpace(readStringConfig(job.Parameters, "target", ""))
	}
	collection := readStringConfig(job.Parameters, "collection", "")
	timeoutSeconds := int32(readInt64Config(job.Parameters, "timeout_seconds", int64(defaultBalanceTimeoutSeconds)))
	if timeoutSeconds <= 0 {
		timeoutSeconds = defaultBalanceTimeoutSeconds
	}
	forceMove := readBoolConfig(job.Parameters, "force_move", false)

	if volumeID <= 0 {
		return nil, fmt.Errorf("missing volume_id in job parameters")
	}
	if sourceNode == "" {
		return nil, fmt.Errorf("missing source_server in job parameters")
	}
	if targetNode == "" {
		return nil, fmt.Errorf("missing target_server in job parameters")
	}

	return &worker_pb.TaskParams{
		TaskId:     job.JobId,
		VolumeId:   uint32(volumeID),
		Collection: collection,
		Sources: []*worker_pb.TaskSource{
			{
				Node:     sourceNode,
				VolumeId: uint32(volumeID),
			},
		},
		Targets: []*worker_pb.TaskTarget{
			{
				Node:     targetNode,
				VolumeId: uint32(volumeID),
			},
		},
		TaskParams: &worker_pb.TaskParams_BalanceParams{
			BalanceParams: &worker_pb.BalanceTaskParams{
				ForceMove:      forceMove,
				TimeoutSeconds: timeoutSeconds,
			},
		},
	}, nil
}
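
// applyBalanceExecutionDefaults ensures the params carry balance-specific
// settings, filling in a default timeout when none is set.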
func applyBalanceExecutionDefaults(params *worker_pb.TaskParams) {
	if params == nil {
		return
	}

	balanceParams := params.GetBalanceParams()
	if balanceParams == nil {
		params.TaskParams = &worker_pb.TaskParams_BalanceParams{
			BalanceParams: &worker_pb.BalanceTaskParams{
				ForceMove:      false,
				TimeoutSeconds: defaultBalanceTimeoutSeconds,
			},
		}
		return
	}

	if balanceParams.TimeoutSeconds <= 0 {
		balanceParams.TimeoutSeconds = defaultBalanceTimeoutSeconds
	}
}
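
// readBoolConfig reads a boolean config value, coercing int64, double, and
// common string spellings ("1", "true", "yes", "on", ...) to bool, and
// returning the fallback for anything unrecognized.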
func readBoolConfig(values map[string]*plugin_pb.ConfigValue, field string, fallback bool) bool {
	if values == nil {
		return fallback
	}
	value := values[field]
	if value == nil {
		return fallback
	}
	switch kind := value.Kind.(type) {
	case *plugin_pb.ConfigValue_BoolValue:
		return kind.BoolValue
	case *plugin_pb.ConfigValue_Int64Value:
		return kind.Int64Value != 0
	case *plugin_pb.ConfigValue_DoubleValue:
		return kind.DoubleValue != 0
	case *plugin_pb.ConfigValue_StringValue:
		text := strings.TrimSpace(strings.ToLower(kind.StringValue))
		switch text {
		case "1", "true", "yes", "on":
			return true
		case "0", "false", "no", "off":
			return false
		}
	}
	return fallback
}