Refactor plugin system and migrate worker runtime (#8369)
* admin: add plugin runtime UI page and route wiring
* pb: add plugin gRPC contract and generated bindings
* admin/plugin: implement worker registry, runtime, monitoring, and config store
* admin/dash: wire plugin runtime and expose plugin workflow APIs
* command: add flags to enable plugin runtime
* admin: rename remaining plugin v2 wording to plugin
* admin/plugin: add detectable job type registry helper
* admin/plugin: add scheduled detection and dispatch orchestration
* admin/plugin: prefetch job type descriptors when workers connect
* admin/plugin: add known job type discovery API and UI
* admin/plugin: refresh design doc to match current implementation
* admin/plugin: enforce per-worker scheduler concurrency limits
* admin/plugin: use descriptor runtime defaults for scheduler policy
* admin/ui: auto-load first known plugin job type on page open
* admin/plugin: bootstrap persisted config from descriptor defaults
* admin/plugin: dedupe scheduled proposals by dedupe key (see the dedupe sketch after this list)
* admin/ui: add job type and state filters for plugin monitoring
* admin/ui: add per-job-type plugin activity summary
* admin/plugin: split descriptor read API from schema refresh
* admin/ui: keep plugin summary metrics global while tables are filtered
* admin/plugin: retry executor reservation before timing out
* admin/plugin: expose scheduler states for monitoring
* admin/ui: show per-job-type scheduler states in plugin monitor
* pb/plugin: rename protobuf package to plugin
* admin/plugin: rename pluginRuntime wiring to plugin
* admin/plugin: remove runtime naming from plugin APIs and UI
* admin/plugin: rename runtime files to plugin naming
* admin/plugin: persist jobs and activities for monitor recovery
* admin/plugin: lease one detector worker per job type
* admin/ui: show worker load from plugin heartbeats
* admin/plugin: skip stale workers for detector and executor picks
* plugin/worker: add plugin worker command and stream runtime scaffold
* plugin/worker: implement vacuum detect and execute handlers
* admin/plugin: document external vacuum plugin worker starter
* command: update plugin.worker help to reflect implemented flow
* command/admin: drop legacy Plugin V2 label
* plugin/worker: validate vacuum job type and respect min interval
* plugin/worker: test no-op detect when min interval not elapsed
* command/admin: document plugin.worker external process
* plugin/worker: advertise configured concurrency in hello
* command/plugin.worker: add jobType handler selection
* command/plugin.worker: test handler selection by job type
* command/plugin.worker: persist worker id in workingDir
* admin/plugin: document plugin.worker jobType and workingDir flags
* plugin/worker: support cancel request for in-flight work
* plugin/worker: test cancel request acknowledgements
* command/plugin.worker: document workingDir and jobType behavior
* plugin/worker: emit executor activity events for monitor
* plugin/worker: test executor activity builder
* admin/plugin: send last successful run in detection request
* admin/plugin: send cancel request when detect or execute context ends
* admin/plugin: document worker cancel request responsibility
* admin/handlers: expose plugin scheduler states API in no-auth mode
* admin/handlers: test plugin scheduler states route registration
* admin/plugin: keep worker id on worker-generated activity records
* admin/plugin: test worker id propagation in monitor activities
* admin/dash: always initialize plugin service
* command/admin: remove plugin enable flags and default to enabled
* admin/dash: drop pluginEnabled constructor parameter
* admin/plugin UI: stop checking plugin enabled state
* admin/plugin: remove docs for plugin enable flags
* admin/dash: remove unused plugin enabled check method
* admin/dash: fall back to in-memory plugin init when dataDir fails
* admin/plugin API: expose worker gRPC port in status
* command/plugin.worker: resolve admin gRPC port via plugin status
* split plugin UI into overview/configuration/monitoring pages
* Update layout_templ.go
* add volume_balance plugin worker handler
* wire plugin.worker CLI for volume_balance job type
* add erasure_coding plugin worker handler
* wire plugin.worker CLI for erasure_coding job type
* support multi-job handlers in plugin worker runtime
* allow plugin.worker jobType as comma-separated list
* admin/plugin UI: rename to Workers and simplify config view
* plugin worker: queue detection requests instead of rejecting at capacity
* Update plugin_worker.go
* plugin volume_balance: remove force_move/timeout from worker config UI
* plugin erasure_coding: enforce local working dir and cleanup
* admin/plugin UI: rename admin settings to job scheduling
* admin/plugin UI: persist and robustly render detection results
* admin/plugin: record and return detection trace metadata
* admin/plugin UI: show detection process and decision trace
* plugin: surface detector decision trace as activities
* mini: start a plugin worker by default
* admin/plugin UI: split monitoring into detection and execution tabs
* plugin worker: emit detection decision trace for EC and balance
* admin workers UI: split monitoring into detection and execution pages
* plugin scheduler: skip proposals for active assigned/running jobs
* admin workers UI: add job queue tab
* plugin worker: add dummy stress detector and executor job type
* admin workers UI: reorder tabs to detection, queue, execution
* admin workers UI: regenerate plugin template
* plugin defaults: include dummy stress and add stress tests
* plugin dummy stress: rotate detection selections across runs
* plugin scheduler: remove cross-run proposal dedupe
* plugin queue: track pending scheduled jobs
* plugin scheduler: wait for executor capacity before dispatch
* plugin scheduler: skip detection when waiting backlog is high
* plugin: add disk-backed job detail API and persistence
* admin ui: show plugin job detail modal from job id links
* plugin: generate unique job ids instead of reusing proposal ids
* plugin worker: emit heartbeats on work state changes
* plugin registry: round-robin tied executor and detector picks
* add temporary EC overnight stress runner
* plugin job details: persist and render EC execution plans
* ec volume details: color data and parity shard badges
* shard labels: keep parity ids numeric and color-only distinction
* admin: remove legacy maintenance UI routes and templates
* admin: remove dead maintenance endpoint helpers
* Update layout_templ.go
* remove dummy_stress worker and command support
* refactor plugin UI to job-type top tabs and sub-tabs
* migrate weed worker command to plugin runtime
* remove plugin.worker command and keep worker runtime with metrics
* update helm worker args for jobType and execution flags
* set plugin scheduling defaults to global 16 and per-worker 4
* stress: fix RPC context reuse and remove redundant variables in ec_stress_runner
* admin/plugin: fix lifecycle races, safe channel operations, and terminal state constants
* admin/dash: randomize job IDs and fix priority zero-value overwrite in plugin API
* admin/handlers: implement buffered rendering to prevent response corruption
* admin/plugin: implement debounced persistence flusher and optimize BuildJobDetail memory lookups
* admin/plugin: fix priority overwrite and implement bounded wait in scheduler reserve
* admin/plugin: implement atomic file writes and fix run record side effects (see the atomic-write sketch below)
* admin/plugin: use P prefix for parity shard labels in execution plans
* admin/plugin: enable parallel execution for cancellation tests
* admin: refactor time.Time fields to pointers for better JSON omitempty support (see the omitempty sketch below)
* admin/plugin: implement pointer-safe time assignments and comparisons in plugin core
* admin/plugin: fix time assignment and sorting logic in plugin monitor after pointer refactor
* admin/plugin: update scheduler activity tracking to use time pointers
* admin/plugin: fix time-based run history trimming after pointer refactor
* admin/dash: fix JobSpec struct literal in plugin API after pointer refactor
* admin/view: add D/P prefixes to EC shard badges for UI consistency
* admin/plugin: use lifecycle-aware context for schema prefetching
* Update ec_volume_details_templ.go
* admin/stress: fix proposal sorting and log volume cleanup errors
* stress: refine ec stress runner with math/rand and collection name
  - Added Collection field to VolumeEcShardsDeleteRequest for correct filename construction.
  - Replaced crypto/rand with a seeded math/rand PRNG for bulk payloads.
  - Added documentation for EcMinAge zero-value behavior.
  - Added logging for ignored errors in volume/shard deletion.
* admin: return internal server error for plugin store failures
  Changed the error status code from 400 Bad Request to 500 Internal Server Error for failures in GetPluginJobDetail to correctly reflect server-side errors.
* admin: implement safe channel sends and graceful shutdown sync (see the safeSendCh sketch below)
  - Added sync.WaitGroup to the Plugin struct to manage background goroutines.
  - Implemented a safeSendCh helper using recover() to prevent panics on closed channels.
  - Ensured Shutdown() waits for all background operations to complete.
* admin: harden plugin monitor with nil-safe time handling and record init
  - Standardized nil-safe assignment for *time.Time pointers (CreatedAt, UpdatedAt, CompletedAt).
  - Ensured persistJobDetailSnapshot initializes new records correctly if they don't exist on disk.
  - Fixed debounced persistence to trigger an immediate write on job completion.
* admin: improve scheduler shutdown behavior and logic guards
  - Replaced brittle error string matching with explicit r.shutdownCh selection for shutdown detection.
  - Removed a redundant nil guard in buildScheduledJobSpec.
  - Standardized WaitGroup usage for schedulerLoop.
* admin: implement deep copy for job parameters and atomic write fixes
  - Implemented deepCopyGenericValue and used it in cloneTrackedJob to prevent shared state.
  - Ensured atomicWriteFile creates parent directories before writing.
* admin: remove unreachable branch in shard classification
  Removed an unreachable 'totalShards <= 0' check in classifyShardID, as dataShards and parityShards are already guarded.
* admin: secure UI links and use canonical shard constants
  - Added rel="noopener noreferrer" to external links for security.
  - Replaced the magic number 14 with erasure_coding.TotalShardsCount.
  - Used renderEcShardBadge for missing-shard list consistency.
* admin: stabilize plugin tests and fix regressions
  - Rewrote plugin_monitor_test.go to handle asynchronous persistence robustly.
  - Updated all time.Time literals to use the timeToPtr helper.
  - Added explicit Shutdown() calls in tests to synchronize with debounced writes.
  - Fixed syntax errors and orphaned struct literals in tests.
* Potential fix for code scanning alert no. 278: Slice memory allocation with excessive size value
  Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* Potential fix for code scanning alert no. 283: Uncontrolled data used in path expression
  Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* admin: finalize refinements for error handling, scheduler, and race fixes
  - Standardized HTTP 500 status codes for store failures in plugin_api.go.
  - Tracked scheduled detection goroutines with sync.WaitGroup for safe shutdown.
  - Fixed a race condition in safeSendDetectionComplete by extracting the channel under lock.
  - Implemented deep copy for JobActivity details.
  - Used the defaultDirPerm constant in atomicWriteFile.
* test(ec): migrate admin dockertest to plugin APIs
* admin/plugin_api: fix RunPluginJobTypeAPI to return 500 for server-side detection/filter errors
* admin/plugin_api: fix ExecutePluginJobAPI to return 500 for job execution failures
* admin/plugin_api: limit parseProtoJSONBody request bodies to 1 MB to prevent unbounded memory usage (see the body-limit sketch below)
* admin/plugin: consolidate regexes into a package-level validJobTypePattern; add character validation to sanitizeJobID
* admin/plugin: fix racy Shutdown channel close with sync.Once (see the shutdown sketch below)
* admin/plugin: track sendLoop and recv goroutines in WorkerStream with r.wg
* admin/plugin: document writeProtoFiles atomicity: the .pb file is the source of truth, the .json file is human-readable only
* admin/plugin: extract an activityLess helper to deduplicate nil-safe OccurredAt sort comparators
* test/ec: check http.NewRequest errors to prevent nil request panics
* test/ec: replace deprecated ioutil/math/rand usage; fix stale step comment 5.1→3.1
* plugin(ec): raise default detection and scheduling throughput limits
* topology: include empty disks in the volume list and EC capacity fallback
* topology: remove the hard 10-task cap for detection planning
* Update ec_volume_details_templ.go
* adjust default
* fix tests
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
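The dedupe-key mechanism referenced in the history above can be pictured with a minimal sketch. This is illustrative only, with simplified stand-in types for the plugin_pb messages; per the later commit, dedupe applies within a single detection run, not across runs.

package sketch

// proposal is a simplified stand-in for plugin_pb.JobProposal.
type proposal struct{ DedupeKey, Summary string }

// dedupeProposals keeps the first proposal seen for each non-empty dedupe key.
func dedupeProposals(in []proposal) []proposal {
    seen := make(map[string]struct{}, len(in))
    out := make([]proposal, 0, len(in))
    for _, p := range in {
        if p.DedupeKey != "" {
            if _, dup := seen[p.DedupeKey]; dup {
                continue // already proposed in this run
            }
            seen[p.DedupeKey] = struct{}{}
        }
        out = append(out, p)
    }
    return out
}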
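The time.Time-to-pointer refactor is motivated by a quirk of encoding/json: omitempty never omits a zero struct value, so a zero time.Time is always serialized, while a nil *time.Time is dropped. A minimal demonstration (field names are illustrative, not the actual admin structs):

package main

import (
    "encoding/json"
    "fmt"
    "time"
)

type jobRecord struct {
    CreatedAt   time.Time  `json:"created_at,omitempty"`   // zero value is still serialized
    CompletedAt *time.Time `json:"completed_at,omitempty"` // nil pointer is omitted
}

func main() {
    out, _ := json.Marshal(jobRecord{})
    fmt.Println(string(out)) // {"created_at":"0001-01-01T00:00:00Z"}
}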
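A plausible shape for the safeSendCh helper described above; the real signature and channel element types are not shown in this diff, so this is an assumption-labeled sketch of the recover() pattern:

package main

import "fmt"

// safeSendCh sends v on ch, but converts a "send on closed channel" panic
// into a false return instead of crashing the goroutine. (Hypothetical
// generic form; the actual helper may be monomorphic.)
func safeSendCh[T any](ch chan<- T, v T) (sent bool) {
    defer func() {
        if recover() != nil {
            sent = false // channel was closed concurrently
        }
    }()
    ch <- v
    return true
}

func main() {
    ch := make(chan int, 1)
    fmt.Println(safeSendCh(ch, 1)) // true
    close(ch)
    fmt.Println(safeSendCh(ch, 2)) // false, no panic
}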
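The atomic-write fixes likely follow the usual temp-file-plus-rename pattern. A sketch under those assumptions (the actual atomicWriteFile signature and the defaultDirPerm value are not shown in this diff):

package sketch

import (
    "os"
    "path/filepath"
)

const defaultDirPerm = 0o755 // assumed value

func atomicWriteFile(path string, data []byte, perm os.FileMode) error {
    // Create parent directories first, per the commit description.
    if err := os.MkdirAll(filepath.Dir(path), defaultDirPerm); err != nil {
        return err
    }
    tmp, err := os.CreateTemp(filepath.Dir(path), ".tmp-*")
    if err != nil {
        return err
    }
    defer os.Remove(tmp.Name()) // no-op after a successful rename
    if _, err := tmp.Write(data); err != nil {
        tmp.Close()
        return err
    }
    if err := tmp.Sync(); err != nil {
        tmp.Close()
        return err
    }
    if err := tmp.Close(); err != nil {
        return err
    }
    if err := os.Chmod(tmp.Name(), perm); err != nil {
        return err
    }
    // Rename is atomic on POSIX filesystems: readers see either the old
    // file or the new one, never a partial write.
    return os.Rename(tmp.Name(), path)
}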
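The racy-Shutdown fix pairs a sync.Once around the channel close with the WaitGroup mentioned earlier, since closing a channel twice panics. A sketch with assumed field names mirroring the commit wording:

package sketch

import "sync"

type pluginRuntime struct {
    shutdownCh   chan struct{}
    shutdownOnce sync.Once
    wg           sync.WaitGroup // tracks background goroutines
}

func (r *pluginRuntime) Shutdown() {
    // Safe even when Shutdown is called from multiple goroutines.
    r.shutdownOnce.Do(func() { close(r.shutdownCh) })
    r.wg.Wait() // wait for tracked background operations to drain
}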
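Bounding the request body to 1 MB is typically done with http.MaxBytesReader; a sketch of how the parseProtoJSONBody guard could look (the helper name and error handling here are assumptions):

package sketch

import (
    "io"
    "net/http"
)

const maxBodyBytes = 1 << 20 // 1 MB

// readBoundedBody reads at most maxBodyBytes from the request body.
// Exceeding the limit surfaces as a *http.MaxBytesError from ReadAll.
func readBoundedBody(w http.ResponseWriter, r *http.Request) ([]byte, error) {
    r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)
    return io.ReadAll(r.Body)
}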
weed/plugin/worker/erasure_coding_handler.go (new file, 899 lines)
@@ -0,0 +1,899 @@
package pluginworker

import (
    "context"
    "fmt"
    "os"
    "path/filepath"
    "strings"
    "time"

    "github.com/seaweedfs/seaweedfs/weed/admin/topology"
    "github.com/seaweedfs/seaweedfs/weed/glog"
    "github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
    "github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
    ecstorage "github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding"
    erasurecodingtask "github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
    workertypes "github.com/seaweedfs/seaweedfs/weed/worker/types"
    "google.golang.org/grpc"
    "google.golang.org/protobuf/proto"
)

type erasureCodingWorkerConfig struct {
    TaskConfig         *erasurecodingtask.Config
    MinIntervalSeconds int
}

// ErasureCodingHandler is the plugin job handler for erasure coding.
type ErasureCodingHandler struct {
    grpcDialOption grpc.DialOption
}

func NewErasureCodingHandler(grpcDialOption grpc.DialOption) *ErasureCodingHandler {
    return &ErasureCodingHandler{grpcDialOption: grpcDialOption}
}

func (h *ErasureCodingHandler) Capability() *plugin_pb.JobTypeCapability {
    return &plugin_pb.JobTypeCapability{
        JobType:                 "erasure_coding",
        CanDetect:               true,
        CanExecute:              true,
        MaxDetectionConcurrency: 1,
        MaxExecutionConcurrency: 1,
        DisplayName:             "Erasure Coding",
        Description:             "Converts full and quiet volumes into EC shards",
    }
}

func (h *ErasureCodingHandler) Descriptor() *plugin_pb.JobTypeDescriptor {
    return &plugin_pb.JobTypeDescriptor{
        JobType:           "erasure_coding",
        DisplayName:       "Erasure Coding",
        Description:       "Detect and execute erasure coding for suitable volumes",
        Icon:              "fas fa-shield-alt",
        DescriptorVersion: 1,
        AdminConfigForm: &plugin_pb.ConfigForm{
            FormId:      "erasure-coding-admin",
            Title:       "Erasure Coding Admin Config",
            Description: "Admin-side controls for erasure coding detection scope.",
            Sections: []*plugin_pb.ConfigSection{
                {
                    SectionId:   "scope",
                    Title:       "Scope",
                    Description: "Optional filters applied before erasure coding detection.",
                    Fields: []*plugin_pb.ConfigField{
                        {
                            Name:        "collection_filter",
                            Label:       "Collection Filter",
                            Description: "Only detect erasure coding opportunities in this collection when set.",
                            Placeholder: "all collections",
                            FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_STRING,
                            Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_TEXT,
                        },
                    },
                },
            },
            DefaultValues: map[string]*plugin_pb.ConfigValue{
                "collection_filter": {
                    Kind: &plugin_pb.ConfigValue_StringValue{StringValue: ""},
                },
            },
        },
        WorkerConfigForm: &plugin_pb.ConfigForm{
            FormId:      "erasure-coding-worker",
            Title:       "Erasure Coding Worker Config",
            Description: "Worker-side detection thresholds.",
            Sections: []*plugin_pb.ConfigSection{
                {
                    SectionId:   "thresholds",
                    Title:       "Detection Thresholds",
                    Description: "Controls for when erasure coding jobs should be proposed.",
                    Fields: []*plugin_pb.ConfigField{
                        {
                            Name:        "quiet_for_seconds",
                            Label:       "Quiet Period (s)",
                            Description: "Volume must remain unmodified for at least this duration.",
                            FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_INT64,
                            Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_NUMBER,
                            Required:    true,
                            MinValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 0}},
                        },
                        {
                            Name:        "fullness_ratio",
                            Label:       "Fullness Ratio",
                            Description: "Minimum volume fullness ratio to trigger erasure coding.",
                            FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_DOUBLE,
                            Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_NUMBER,
                            Required:    true,
                            MinValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0}},
                            MaxValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 1}},
                        },
                        {
                            Name:        "min_size_mb",
                            Label:       "Minimum Volume Size (MB)",
                            Description: "Only volumes larger than this size are considered.",
                            FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_INT64,
                            Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_NUMBER,
                            Required:    true,
                            MinValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 1}},
                        },
                        {
                            Name:        "min_interval_seconds",
                            Label:       "Minimum Detection Interval (s)",
                            Description: "Skip detection if the last successful run is more recent than this interval.",
                            FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_INT64,
                            Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_NUMBER,
                            Required:    true,
                            MinValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 0}},
                        },
                    },
                },
            },
            DefaultValues: map[string]*plugin_pb.ConfigValue{
                "quiet_for_seconds": {
                    Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 300},
                },
                "fullness_ratio": {
                    Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0.8},
                },
                "min_size_mb": {
                    Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 30},
                },
                "min_interval_seconds": {
                    Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 60},
                },
            },
        },
        AdminRuntimeDefaults: &plugin_pb.AdminRuntimeDefaults{
            Enabled:                       true,
            DetectionIntervalSeconds:      60 * 5,
            DetectionTimeoutSeconds:       300,
            MaxJobsPerDetection:           500,
            GlobalExecutionConcurrency:    16,
            PerWorkerExecutionConcurrency: 4,
            RetryLimit:                    1,
            RetryBackoffSeconds:           30,
        },
        WorkerDefaultValues: map[string]*plugin_pb.ConfigValue{
            "quiet_for_seconds": {
                Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 300},
            },
            "fullness_ratio": {
                Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0.8},
            },
            "min_size_mb": {
                Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 30},
            },
            "min_interval_seconds": {
                Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 60},
            },
        },
    }
}

func (h *ErasureCodingHandler) Detect(
    ctx context.Context,
    request *plugin_pb.RunDetectionRequest,
    sender DetectionSender,
) error {
    if request == nil {
        return fmt.Errorf("run detection request is nil")
    }
    if sender == nil {
        return fmt.Errorf("detection sender is nil")
    }
    if request.JobType != "" && request.JobType != "erasure_coding" {
        return fmt.Errorf("job type %q is not handled by erasure_coding worker", request.JobType)
    }

    workerConfig := deriveErasureCodingWorkerConfig(request.GetWorkerConfigValues())
    if shouldSkipDetectionByInterval(request.GetLastSuccessfulRun(), workerConfig.MinIntervalSeconds) {
        minInterval := time.Duration(workerConfig.MinIntervalSeconds) * time.Second
        _ = sender.SendActivity(buildDetectorActivity(
            "skipped_by_interval",
            fmt.Sprintf("ERASURE CODING: Detection skipped due to min interval (%s)", minInterval),
            map[string]*plugin_pb.ConfigValue{
                "min_interval_seconds": {
                    Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(workerConfig.MinIntervalSeconds)},
                },
            },
        ))
        if err := sender.SendProposals(&plugin_pb.DetectionProposals{
            JobType:   "erasure_coding",
            Proposals: []*plugin_pb.JobProposal{},
            HasMore:   false,
        }); err != nil {
            return err
        }
        return sender.SendComplete(&plugin_pb.DetectionComplete{
            JobType:        "erasure_coding",
            Success:        true,
            TotalProposals: 0,
        })
    }

    collectionFilter := strings.TrimSpace(readStringConfig(request.GetAdminConfigValues(), "collection_filter", ""))
    if collectionFilter != "" {
        workerConfig.TaskConfig.CollectionFilter = collectionFilter
    }

    masters := make([]string, 0)
    if request.ClusterContext != nil {
        masters = append(masters, request.ClusterContext.MasterGrpcAddresses...)
    }

    metrics, activeTopology, err := h.collectVolumeMetrics(ctx, masters, collectionFilter)
    if err != nil {
        return err
    }

    clusterInfo := &workertypes.ClusterInfo{ActiveTopology: activeTopology}
    results, err := erasurecodingtask.Detection(metrics, clusterInfo, workerConfig.TaskConfig)
    if err != nil {
        return err
    }
    if traceErr := emitErasureCodingDetectionDecisionTrace(sender, metrics, workerConfig.TaskConfig, results); traceErr != nil {
        glog.Warningf("Plugin worker failed to emit erasure_coding detection trace: %v", traceErr)
    }

    maxResults := int(request.MaxResults)
    hasMore := false
    if maxResults > 0 && len(results) > maxResults {
        hasMore = true
        results = results[:maxResults]
    }

    proposals := make([]*plugin_pb.JobProposal, 0, len(results))
    for _, result := range results {
        proposal, proposalErr := buildErasureCodingProposal(result)
        if proposalErr != nil {
            glog.Warningf("Plugin worker skip invalid erasure_coding proposal: %v", proposalErr)
            continue
        }
        proposals = append(proposals, proposal)
    }

    if err := sender.SendProposals(&plugin_pb.DetectionProposals{
        JobType:   "erasure_coding",
        Proposals: proposals,
        HasMore:   hasMore,
    }); err != nil {
        return err
    }

    return sender.SendComplete(&plugin_pb.DetectionComplete{
        JobType:        "erasure_coding",
        Success:        true,
        TotalProposals: int32(len(proposals)),
    })
}

func emitErasureCodingDetectionDecisionTrace(
    sender DetectionSender,
    metrics []*workertypes.VolumeHealthMetrics,
    taskConfig *erasurecodingtask.Config,
    results []*workertypes.TaskDetectionResult,
) error {
    if sender == nil || taskConfig == nil {
        return nil
    }

    quietThreshold := time.Duration(taskConfig.QuietForSeconds) * time.Second
    minSizeBytes := uint64(taskConfig.MinSizeMB) * 1024 * 1024
    allowedCollections := make(map[string]bool)
    if strings.TrimSpace(taskConfig.CollectionFilter) != "" {
        for _, collection := range strings.Split(taskConfig.CollectionFilter, ",") {
            trimmed := strings.TrimSpace(collection)
            if trimmed != "" {
                allowedCollections[trimmed] = true
            }
        }
    }

    volumeGroups := make(map[uint32][]*workertypes.VolumeHealthMetrics)
    for _, metric := range metrics {
        if metric == nil {
            continue
        }
        volumeGroups[metric.VolumeID] = append(volumeGroups[metric.VolumeID], metric)
    }

    skippedAlreadyEC := 0
    skippedTooSmall := 0
    skippedCollectionFilter := 0
    skippedQuietTime := 0
    skippedFullness := 0

    for _, groupMetrics := range volumeGroups {
        if len(groupMetrics) == 0 {
            continue
        }
        metric := groupMetrics[0]
        for _, candidate := range groupMetrics {
            if candidate != nil && candidate.Server < metric.Server {
                metric = candidate
            }
        }
        if metric == nil {
            continue
        }

        if metric.IsECVolume {
            skippedAlreadyEC++
            continue
        }
        if metric.Size < minSizeBytes {
            skippedTooSmall++
            continue
        }
        if len(allowedCollections) > 0 && !allowedCollections[metric.Collection] {
            skippedCollectionFilter++
            continue
        }
        if metric.Age < quietThreshold {
            skippedQuietTime++
        }
        if metric.FullnessRatio < taskConfig.FullnessRatio {
            skippedFullness++
        }
    }

    totalVolumes := len(metrics)
    summaryMessage := ""
    if len(results) == 0 {
        summaryMessage = fmt.Sprintf(
            "EC detection: No tasks created for %d volumes (skipped: %d already EC, %d too small, %d filtered, %d not quiet, %d not full)",
            totalVolumes,
            skippedAlreadyEC,
            skippedTooSmall,
            skippedCollectionFilter,
            skippedQuietTime,
            skippedFullness,
        )
    } else {
        summaryMessage = fmt.Sprintf(
            "EC detection: Created %d task(s) from %d volumes (skipped: %d already EC, %d too small, %d filtered, %d not quiet, %d not full)",
            len(results),
            totalVolumes,
            skippedAlreadyEC,
            skippedTooSmall,
            skippedCollectionFilter,
            skippedQuietTime,
            skippedFullness,
        )
    }

    if err := sender.SendActivity(buildDetectorActivity("decision_summary", summaryMessage, map[string]*plugin_pb.ConfigValue{
        "total_volumes": {
            Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(totalVolumes)},
        },
        "selected_tasks": {
            Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(len(results))},
        },
        "skipped_already_ec": {
            Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(skippedAlreadyEC)},
        },
        "skipped_too_small": {
            Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(skippedTooSmall)},
        },
        "skipped_filtered": {
            Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(skippedCollectionFilter)},
        },
        "skipped_not_quiet": {
            Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(skippedQuietTime)},
        },
        "skipped_not_full": {
            Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(skippedFullness)},
        },
        "quiet_for_seconds": {
            Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(taskConfig.QuietForSeconds)},
        },
        "min_size_mb": {
            Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(taskConfig.MinSizeMB)},
        },
        "fullness_threshold_percent": {
            Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: taskConfig.FullnessRatio * 100},
        },
    })); err != nil {
        return err
    }

    detailsEmitted := 0
    for _, metric := range metrics {
        if metric == nil || metric.IsECVolume {
            continue
        }
        sizeMB := float64(metric.Size) / (1024 * 1024)
        message := fmt.Sprintf(
            "ERASURE CODING: Volume %d: size=%.1fMB (need ≥%dMB), age=%s (need ≥%s), fullness=%.1f%% (need ≥%.1f%%)",
            metric.VolumeID,
            sizeMB,
            taskConfig.MinSizeMB,
            metric.Age.Truncate(time.Minute),
            quietThreshold.Truncate(time.Minute),
            metric.FullnessRatio*100,
            taskConfig.FullnessRatio*100,
        )
        if err := sender.SendActivity(buildDetectorActivity("decision_volume", message, map[string]*plugin_pb.ConfigValue{
            "volume_id": {
                Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(metric.VolumeID)},
            },
            "size_mb": {
                Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: sizeMB},
            },
            "required_min_size_mb": {
                Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(taskConfig.MinSizeMB)},
            },
            "age_seconds": {
                Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(metric.Age.Seconds())},
            },
            "required_quiet_for_seconds": {
                Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(taskConfig.QuietForSeconds)},
            },
            "fullness_percent": {
                Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: metric.FullnessRatio * 100},
            },
            "required_fullness_percent": {
                Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: taskConfig.FullnessRatio * 100},
            },
        })); err != nil {
            return err
        }
        detailsEmitted++
        if detailsEmitted >= 3 {
            break
        }
    }

    return nil
}

func (h *ErasureCodingHandler) Execute(
    ctx context.Context,
    request *plugin_pb.ExecuteJobRequest,
    sender ExecutionSender,
) error {
    if request == nil || request.Job == nil {
        return fmt.Errorf("execute request/job is nil")
    }
    if sender == nil {
        return fmt.Errorf("execution sender is nil")
    }
    if request.Job.JobType != "" && request.Job.JobType != "erasure_coding" {
        return fmt.Errorf("job type %q is not handled by erasure_coding worker", request.Job.JobType)
    }

    params, err := decodeErasureCodingTaskParams(request.Job)
    if err != nil {
        return err
    }

    applyErasureCodingExecutionDefaults(params, request.GetClusterContext())

    if len(params.Sources) == 0 || strings.TrimSpace(params.Sources[0].Node) == "" {
        return fmt.Errorf("erasure coding source node is required")
    }
    if len(params.Targets) == 0 {
        return fmt.Errorf("erasure coding targets are required")
    }

    task := erasurecodingtask.NewErasureCodingTask(
        request.Job.JobId,
        params.Sources[0].Node,
        params.VolumeId,
        params.Collection,
    )
    task.SetProgressCallback(func(progress float64, stage string) {
        message := fmt.Sprintf("erasure coding progress %.0f%%", progress)
        if strings.TrimSpace(stage) != "" {
            message = stage
        }
        _ = sender.SendProgress(&plugin_pb.JobProgressUpdate{
            JobId:           request.Job.JobId,
            JobType:         request.Job.JobType,
            State:           plugin_pb.JobState_JOB_STATE_RUNNING,
            ProgressPercent: progress,
            Stage:           stage,
            Message:         message,
            Activities: []*plugin_pb.ActivityEvent{
                buildExecutorActivity(stage, message),
            },
        })
    })

    if err := sender.SendProgress(&plugin_pb.JobProgressUpdate{
        JobId:           request.Job.JobId,
        JobType:         request.Job.JobType,
        State:           plugin_pb.JobState_JOB_STATE_ASSIGNED,
        ProgressPercent: 0,
        Stage:           "assigned",
        Message:         "erasure coding job accepted",
        Activities: []*plugin_pb.ActivityEvent{
            buildExecutorActivity("assigned", "erasure coding job accepted"),
        },
    }); err != nil {
        return err
    }

    if err := task.Execute(ctx, params); err != nil {
        _ = sender.SendProgress(&plugin_pb.JobProgressUpdate{
            JobId:           request.Job.JobId,
            JobType:         request.Job.JobType,
            State:           plugin_pb.JobState_JOB_STATE_FAILED,
            ProgressPercent: 100,
            Stage:           "failed",
            Message:         err.Error(),
            Activities: []*plugin_pb.ActivityEvent{
                buildExecutorActivity("failed", err.Error()),
            },
        })
        return err
    }

    sourceNode := params.Sources[0].Node
    resultSummary := fmt.Sprintf("erasure coding completed for volume %d across %d targets", params.VolumeId, len(params.Targets))

    return sender.SendCompleted(&plugin_pb.JobCompleted{
        JobId:   request.Job.JobId,
        JobType: request.Job.JobType,
        Success: true,
        Result: &plugin_pb.JobResult{
            Summary: resultSummary,
            OutputValues: map[string]*plugin_pb.ConfigValue{
                "volume_id": {
                    Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(params.VolumeId)},
                },
                "source_server": {
                    Kind: &plugin_pb.ConfigValue_StringValue{StringValue: sourceNode},
                },
                "target_count": {
                    Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(len(params.Targets))},
                },
            },
        },
        Activities: []*plugin_pb.ActivityEvent{
            buildExecutorActivity("completed", resultSummary),
        },
    })
}

func (h *ErasureCodingHandler) collectVolumeMetrics(
    ctx context.Context,
    masterAddresses []string,
    collectionFilter string,
) ([]*workertypes.VolumeHealthMetrics, *topology.ActiveTopology, error) {
    // Reuse the same master topology fetch/build flow used by the vacuum handler.
    helper := &VacuumHandler{grpcDialOption: h.grpcDialOption}
    return helper.collectVolumeMetrics(ctx, masterAddresses, collectionFilter)
}

func deriveErasureCodingWorkerConfig(values map[string]*plugin_pb.ConfigValue) *erasureCodingWorkerConfig {
    taskConfig := erasurecodingtask.NewDefaultConfig()

    quietForSeconds := int(readInt64Config(values, "quiet_for_seconds", int64(taskConfig.QuietForSeconds)))
    if quietForSeconds < 0 {
        quietForSeconds = 0
    }
    taskConfig.QuietForSeconds = quietForSeconds

    fullnessRatio := readDoubleConfig(values, "fullness_ratio", taskConfig.FullnessRatio)
    if fullnessRatio < 0 {
        fullnessRatio = 0
    }
    if fullnessRatio > 1 {
        fullnessRatio = 1
    }
    taskConfig.FullnessRatio = fullnessRatio

    minSizeMB := int(readInt64Config(values, "min_size_mb", int64(taskConfig.MinSizeMB)))
    if minSizeMB < 1 {
        minSizeMB = 1
    }
    taskConfig.MinSizeMB = minSizeMB

    minIntervalSeconds := int(readInt64Config(values, "min_interval_seconds", 60*60))
    if minIntervalSeconds < 0 {
        minIntervalSeconds = 0
    }

    return &erasureCodingWorkerConfig{
        TaskConfig:         taskConfig,
        MinIntervalSeconds: minIntervalSeconds,
    }
}

func buildErasureCodingProposal(
    result *workertypes.TaskDetectionResult,
) (*plugin_pb.JobProposal, error) {
    if result == nil {
        return nil, fmt.Errorf("task detection result is nil")
    }
    if result.TypedParams == nil {
        return nil, fmt.Errorf("missing typed params for volume %d", result.VolumeID)
    }
    params := proto.Clone(result.TypedParams).(*worker_pb.TaskParams)
    applyErasureCodingExecutionDefaults(params, nil)

    paramsPayload, err := proto.Marshal(params)
    if err != nil {
        return nil, fmt.Errorf("marshal task params: %w", err)
    }

    proposalID := strings.TrimSpace(result.TaskID)
    if proposalID == "" {
        proposalID = fmt.Sprintf("erasure-coding-%d-%d", result.VolumeID, time.Now().UnixNano())
    }

    dedupeKey := fmt.Sprintf("erasure_coding:%d", result.VolumeID)
    if result.Collection != "" {
        dedupeKey += ":" + result.Collection
    }

    sourceNode := ""
    if len(params.Sources) > 0 {
        sourceNode = strings.TrimSpace(params.Sources[0].Node)
    }

    summary := fmt.Sprintf("Erasure code volume %d", result.VolumeID)
    if sourceNode != "" {
        summary = fmt.Sprintf("Erasure code volume %d from %s", result.VolumeID, sourceNode)
    }

    return &plugin_pb.JobProposal{
        ProposalId: proposalID,
        DedupeKey:  dedupeKey,
        JobType:    "erasure_coding",
        Priority:   mapTaskPriority(result.Priority),
        Summary:    summary,
        Detail:     strings.TrimSpace(result.Reason),
        Parameters: map[string]*plugin_pb.ConfigValue{
            "task_params_pb": {
                Kind: &plugin_pb.ConfigValue_BytesValue{BytesValue: paramsPayload},
            },
            "volume_id": {
                Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(result.VolumeID)},
            },
            "source_server": {
                Kind: &plugin_pb.ConfigValue_StringValue{StringValue: sourceNode},
            },
            "collection": {
                Kind: &plugin_pb.ConfigValue_StringValue{StringValue: result.Collection},
            },
            "target_count": {
                Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(len(params.Targets))},
            },
        },
        Labels: map[string]string{
            "task_type":    "erasure_coding",
            "volume_id":    fmt.Sprintf("%d", result.VolumeID),
            "collection":   result.Collection,
            "source_node":  sourceNode,
            "target_count": fmt.Sprintf("%d", len(params.Targets)),
        },
    }, nil
}

func decodeErasureCodingTaskParams(job *plugin_pb.JobSpec) (*worker_pb.TaskParams, error) {
    if job == nil {
        return nil, fmt.Errorf("job spec is nil")
    }

    if payload := readBytesConfig(job.Parameters, "task_params_pb"); len(payload) > 0 {
        params := &worker_pb.TaskParams{}
        if err := proto.Unmarshal(payload, params); err != nil {
            return nil, fmt.Errorf("unmarshal task_params_pb: %w", err)
        }
        if params.TaskId == "" {
            params.TaskId = job.JobId
        }
        return params, nil
    }

    volumeID := readInt64Config(job.Parameters, "volume_id", 0)
    sourceNode := strings.TrimSpace(readStringConfig(job.Parameters, "source_server", ""))
    if sourceNode == "" {
        sourceNode = strings.TrimSpace(readStringConfig(job.Parameters, "server", ""))
    }
    targetServers := readStringListConfig(job.Parameters, "target_servers")
    if len(targetServers) == 0 {
        targetServers = readStringListConfig(job.Parameters, "targets")
    }
    collection := readStringConfig(job.Parameters, "collection", "")

    dataShards := int32(readInt64Config(job.Parameters, "data_shards", int64(ecstorage.DataShardsCount)))
    if dataShards <= 0 {
        dataShards = ecstorage.DataShardsCount
    }
    parityShards := int32(readInt64Config(job.Parameters, "parity_shards", int64(ecstorage.ParityShardsCount)))
    if parityShards <= 0 {
        parityShards = ecstorage.ParityShardsCount
    }
    totalShards := int(dataShards + parityShards)

    if volumeID <= 0 {
        return nil, fmt.Errorf("missing volume_id in job parameters")
    }
    if sourceNode == "" {
        return nil, fmt.Errorf("missing source_server in job parameters")
    }
    if len(targetServers) == 0 {
        return nil, fmt.Errorf("missing target_servers in job parameters")
    }
    if len(targetServers) < totalShards {
        return nil, fmt.Errorf("insufficient target_servers: got %d, need at least %d", len(targetServers), totalShards)
    }

    shardAssignments := assignECShardIDs(totalShards, len(targetServers))
    targets := make([]*worker_pb.TaskTarget, 0, len(targetServers))
    for i := 0; i < len(targetServers); i++ {
        targetNode := strings.TrimSpace(targetServers[i])
        if targetNode == "" {
            continue
        }
        targets = append(targets, &worker_pb.TaskTarget{
            Node:     targetNode,
            VolumeId: uint32(volumeID),
            ShardIds: shardAssignments[i],
        })
    }
    if len(targets) < totalShards {
        return nil, fmt.Errorf("insufficient non-empty target_servers after normalization: got %d, need at least %d", len(targets), totalShards)
    }

    return &worker_pb.TaskParams{
        TaskId:     job.JobId,
        VolumeId:   uint32(volumeID),
        Collection: collection,
        Sources: []*worker_pb.TaskSource{
            {
                Node:     sourceNode,
                VolumeId: uint32(volumeID),
            },
        },
        Targets: targets,
        TaskParams: &worker_pb.TaskParams_ErasureCodingParams{
            ErasureCodingParams: &worker_pb.ErasureCodingTaskParams{
                DataShards:   dataShards,
                ParityShards: parityShards,
            },
        },
    }, nil
}

func applyErasureCodingExecutionDefaults(
    params *worker_pb.TaskParams,
    clusterContext *plugin_pb.ClusterContext,
) {
    if params == nil {
        return
    }

    ecParams := params.GetErasureCodingParams()
    if ecParams == nil {
        ecParams = &worker_pb.ErasureCodingTaskParams{
            DataShards:   ecstorage.DataShardsCount,
            ParityShards: ecstorage.ParityShardsCount,
        }
        params.TaskParams = &worker_pb.TaskParams_ErasureCodingParams{ErasureCodingParams: ecParams}
    }

    if ecParams.DataShards <= 0 {
        ecParams.DataShards = ecstorage.DataShardsCount
    }
    if ecParams.ParityShards <= 0 {
        ecParams.ParityShards = ecstorage.ParityShardsCount
    }
    ecParams.WorkingDir = defaultErasureCodingWorkingDir()
    ecParams.CleanupSource = true
    if strings.TrimSpace(ecParams.MasterClient) == "" && clusterContext != nil && len(clusterContext.MasterGrpcAddresses) > 0 {
        ecParams.MasterClient = clusterContext.MasterGrpcAddresses[0]
    }

    totalShards := int(ecParams.DataShards + ecParams.ParityShards)
    if totalShards <= 0 {
        totalShards = ecstorage.TotalShardsCount
    }
    needsShardAssignment := false
    for _, target := range params.Targets {
        if target == nil || len(target.ShardIds) == 0 {
            needsShardAssignment = true
            break
        }
    }
    if needsShardAssignment && len(params.Targets) > 0 {
        assignments := assignECShardIDs(totalShards, len(params.Targets))
        for i := 0; i < len(params.Targets); i++ {
            if params.Targets[i] == nil {
                continue
            }
            if len(params.Targets[i].ShardIds) == 0 {
                params.Targets[i].ShardIds = assignments[i]
            }
        }
    }
}

func readStringListConfig(values map[string]*plugin_pb.ConfigValue, field string) []string {
    if values == nil {
        return nil
    }
    value := values[field]
    if value == nil {
        return nil
    }

    switch kind := value.Kind.(type) {
    case *plugin_pb.ConfigValue_StringList:
        return normalizeStringList(kind.StringList.GetValues())
    case *plugin_pb.ConfigValue_ListValue:
        out := make([]string, 0, len(kind.ListValue.GetValues()))
        for _, item := range kind.ListValue.GetValues() {
            itemText := readStringFromConfigValue(item)
            if itemText != "" {
                out = append(out, itemText)
            }
        }
        return normalizeStringList(out)
    case *plugin_pb.ConfigValue_StringValue:
        return normalizeStringList(strings.Split(kind.StringValue, ","))
    }

    return nil
}
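// Illustrative note: readStringListConfig accepts three wire encodings of the
// same logical list. For example, each of the following yields
// ["a:8080", "b:8080"]:
//   - a StringList value with Values ["a:8080", "b:8080"]
//   - a ListValue whose items stringify to "a:8080" and "b:8080"
//   - a plain StringValue "a:8080,b:8080" (split on commas)
// Blank entries and duplicates are then removed by normalizeStringList.
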
func readStringFromConfigValue(value *plugin_pb.ConfigValue) string {
    if value == nil {
        return ""
    }
    switch kind := value.Kind.(type) {
    case *plugin_pb.ConfigValue_StringValue:
        return strings.TrimSpace(kind.StringValue)
    case *plugin_pb.ConfigValue_Int64Value:
        return fmt.Sprintf("%d", kind.Int64Value)
    case *plugin_pb.ConfigValue_DoubleValue:
        return fmt.Sprintf("%g", kind.DoubleValue)
    case *plugin_pb.ConfigValue_BoolValue:
        if kind.BoolValue {
            return "true"
        }
        return "false"
    }
    return ""
}

func normalizeStringList(values []string) []string {
    normalized := make([]string, 0, len(values))
    seen := make(map[string]struct{}, len(values))
    for _, value := range values {
        item := strings.TrimSpace(value)
        if item == "" {
            continue
        }
        if _, found := seen[item]; found {
            continue
        }
        seen[item] = struct{}{}
        normalized = append(normalized, item)
    }
    return normalized
}

func assignECShardIDs(totalShards int, targetCount int) [][]uint32 {
    if targetCount <= 0 {
        return nil
    }
    if totalShards <= 0 {
        totalShards = ecstorage.TotalShardsCount
    }

    assignments := make([][]uint32, targetCount)
    for i := 0; i < totalShards; i++ {
        targetIndex := i % targetCount
        assignments[targetIndex] = append(assignments[targetIndex], uint32(i))
    }
    return assignments
}
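// Worked example: with the default 10+4 EC layout (erasure_coding.TotalShardsCount
// == 14), assignECShardIDs(14, 4) round-robins the shard ids across 4 targets:
//   target 0: [0 4 8 12]
//   target 1: [1 5 9 13]
//   target 2: [2 6 10]
//   target 3: [3 7 11]
// so shard counts differ by at most one between targets.
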
func defaultErasureCodingWorkingDir() string {
    return filepath.Join(os.TempDir(), "seaweedfs-ec")
}
weed/plugin/worker/erasure_coding_handler_test.go (new file, 329 lines)
@@ -0,0 +1,329 @@
package pluginworker

import (
    "context"
    "strings"
    "testing"
    "time"

    "github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
    "github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
    ecstorage "github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding"
    erasurecodingtask "github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
    workertypes "github.com/seaweedfs/seaweedfs/weed/worker/types"
    "google.golang.org/protobuf/proto"
    "google.golang.org/protobuf/types/known/timestamppb"
)

func TestDecodeErasureCodingTaskParamsFromPayload(t *testing.T) {
    expected := &worker_pb.TaskParams{
        TaskId:     "task-ec-1",
        VolumeId:   88,
        Collection: "images",
        Sources: []*worker_pb.TaskSource{
            {
                Node:     "10.0.0.1:8080",
                VolumeId: 88,
            },
        },
        Targets: []*worker_pb.TaskTarget{
            {
                Node:     "10.0.0.2:8080",
                VolumeId: 88,
                ShardIds: []uint32{0, 10},
            },
        },
        TaskParams: &worker_pb.TaskParams_ErasureCodingParams{
            ErasureCodingParams: &worker_pb.ErasureCodingTaskParams{
                DataShards:    ecstorage.DataShardsCount,
                ParityShards:  ecstorage.ParityShardsCount,
                WorkingDir:    "/tmp/ec-work",
                CleanupSource: true,
            },
        },
    }
    payload, err := proto.Marshal(expected)
    if err != nil {
        t.Fatalf("marshal payload: %v", err)
    }

    job := &plugin_pb.JobSpec{
        JobId: "job-from-admin",
        Parameters: map[string]*plugin_pb.ConfigValue{
            "task_params_pb": {Kind: &plugin_pb.ConfigValue_BytesValue{BytesValue: payload}},
        },
    }

    actual, err := decodeErasureCodingTaskParams(job)
    if err != nil {
        t.Fatalf("decodeErasureCodingTaskParams() err = %v", err)
    }
    if !proto.Equal(expected, actual) {
        t.Fatalf("decoded params mismatch\nexpected: %+v\nactual: %+v", expected, actual)
    }
}

func TestDecodeErasureCodingTaskParamsFallback(t *testing.T) {
    targetServers := make([]string, 0, ecstorage.TotalShardsCount)
    for i := 0; i < ecstorage.TotalShardsCount; i++ {
        targetServers = append(targetServers, "10.0.0."+string(rune('a'+i))+":8080")
    }

    job := &plugin_pb.JobSpec{
        JobId: "job-ec-2",
        Parameters: map[string]*plugin_pb.ConfigValue{
            "volume_id": {
                Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 7},
            },
            "source_server": {
                Kind: &plugin_pb.ConfigValue_StringValue{StringValue: "127.0.0.1:8080"},
            },
            "target_servers": {
                Kind: &plugin_pb.ConfigValue_StringList{
                    StringList: &plugin_pb.StringList{Values: targetServers},
                },
            },
            "collection": {
                Kind: &plugin_pb.ConfigValue_StringValue{StringValue: "videos"},
            },
        },
    }

    params, err := decodeErasureCodingTaskParams(job)
    if err != nil {
        t.Fatalf("decodeErasureCodingTaskParams() err = %v", err)
    }
    if params.TaskId != "job-ec-2" || params.VolumeId != 7 || params.Collection != "videos" {
        t.Fatalf("unexpected basic params: %+v", params)
    }
    if len(params.Sources) != 1 || params.Sources[0].Node != "127.0.0.1:8080" {
        t.Fatalf("unexpected sources: %+v", params.Sources)
    }
    if len(params.Targets) != ecstorage.TotalShardsCount {
        t.Fatalf("unexpected target count: %d", len(params.Targets))
    }
    if params.GetErasureCodingParams() == nil {
        t.Fatalf("expected fallback erasure coding params")
    }
}

func TestDeriveErasureCodingWorkerConfig(t *testing.T) {
    values := map[string]*plugin_pb.ConfigValue{
        "quiet_for_seconds": {
            Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 720},
        },
        "fullness_ratio": {
            Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0.92},
        },
        "min_size_mb": {
            Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 128},
        },
        "min_interval_seconds": {
            Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 55},
        },
    }

    cfg := deriveErasureCodingWorkerConfig(values)
    if cfg.TaskConfig.QuietForSeconds != 720 {
        t.Fatalf("expected quiet_for_seconds 720, got %d", cfg.TaskConfig.QuietForSeconds)
    }
    if cfg.TaskConfig.FullnessRatio != 0.92 {
        t.Fatalf("expected fullness_ratio 0.92, got %v", cfg.TaskConfig.FullnessRatio)
    }
    if cfg.TaskConfig.MinSizeMB != 128 {
        t.Fatalf("expected min_size_mb 128, got %d", cfg.TaskConfig.MinSizeMB)
    }
    if cfg.MinIntervalSeconds != 55 {
        t.Fatalf("expected min_interval_seconds 55, got %d", cfg.MinIntervalSeconds)
    }
}

func TestBuildErasureCodingProposal(t *testing.T) {
    params := &worker_pb.TaskParams{
        TaskId:     "ec-task-1",
        VolumeId:   99,
        Collection: "c1",
        Sources: []*worker_pb.TaskSource{
            {
                Node:     "source-a:8080",
                VolumeId: 99,
            },
        },
        Targets: []*worker_pb.TaskTarget{
            {
                Node:     "target-a:8080",
                VolumeId: 99,
                ShardIds: []uint32{0, 10},
            },
            {
                Node:     "target-b:8080",
                VolumeId: 99,
                ShardIds: []uint32{1, 11},
            },
        },
        TaskParams: &worker_pb.TaskParams_ErasureCodingParams{
            ErasureCodingParams: &worker_pb.ErasureCodingTaskParams{
                DataShards:   ecstorage.DataShardsCount,
                ParityShards: ecstorage.ParityShardsCount,
            },
        },
    }
    result := &workertypes.TaskDetectionResult{
        TaskID:      "ec-task-1",
        TaskType:    workertypes.TaskTypeErasureCoding,
        VolumeID:    99,
        Server:      "source-a",
        Collection:  "c1",
        Priority:    workertypes.TaskPriorityLow,
        Reason:      "quiet and full",
        TypedParams: params,
    }

    proposal, err := buildErasureCodingProposal(result)
    if err != nil {
        t.Fatalf("buildErasureCodingProposal() err = %v", err)
    }
    if proposal.JobType != "erasure_coding" {
        t.Fatalf("unexpected job type %q", proposal.JobType)
    }
    if proposal.Parameters["task_params_pb"] == nil {
        t.Fatalf("expected serialized task params")
    }
    if proposal.Labels["source_node"] != "source-a:8080" {
        t.Fatalf("unexpected source label %q", proposal.Labels["source_node"])
    }
}

func TestErasureCodingHandlerRejectsUnsupportedJobType(t *testing.T) {
    handler := NewErasureCodingHandler(nil)
    err := handler.Detect(context.Background(), &plugin_pb.RunDetectionRequest{
        JobType: "vacuum",
    }, noopDetectionSender{})
    if err == nil {
        t.Fatalf("expected detect job type mismatch error")
    }

    err = handler.Execute(context.Background(), &plugin_pb.ExecuteJobRequest{
        Job: &plugin_pb.JobSpec{JobId: "job-1", JobType: "vacuum"},
    }, noopExecutionSender{})
    if err == nil {
        t.Fatalf("expected execute job type mismatch error")
    }
}

func TestErasureCodingHandlerDetectSkipsByMinInterval(t *testing.T) {
    handler := NewErasureCodingHandler(nil)
    sender := &recordingDetectionSender{}
    err := handler.Detect(context.Background(), &plugin_pb.RunDetectionRequest{
        JobType:           "erasure_coding",
        LastSuccessfulRun: timestamppb.New(time.Now().Add(-3 * time.Second)),
        WorkerConfigValues: map[string]*plugin_pb.ConfigValue{
            "min_interval_seconds": {Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 10}},
        },
    }, sender)
    if err != nil {
        t.Fatalf("detect returned err = %v", err)
    }
    if sender.proposals == nil {
        t.Fatalf("expected proposals message")
    }
    if len(sender.proposals.Proposals) != 0 {
        t.Fatalf("expected zero proposals, got %d", len(sender.proposals.Proposals))
    }
    if sender.complete == nil || !sender.complete.Success {
        t.Fatalf("expected successful completion message")
    }
    if len(sender.events) == 0 {
        t.Fatalf("expected detector activity events")
    }
    if !strings.Contains(sender.events[0].Message, "min interval") {
        t.Fatalf("unexpected skip-by-interval message: %q", sender.events[0].Message)
    }
}

func TestEmitErasureCodingDetectionDecisionTraceNoTasks(t *testing.T) {
    sender := &recordingDetectionSender{}
    config := erasurecodingtask.NewDefaultConfig()
    config.QuietForSeconds = 5 * 60
    config.MinSizeMB = 30
    config.FullnessRatio = 0.91

    metrics := []*workertypes.VolumeHealthMetrics{
        {
            VolumeID:      20,
            Size:          0,
            Age:           218*time.Hour + 41*time.Minute,
            FullnessRatio: 0,
        },
        {
            VolumeID:      27,
            Size:          uint64(16 * 1024 * 1024 / 10),
            Age:           91*time.Hour + time.Minute,
            FullnessRatio: 0.002,
        },
        {
            VolumeID:      12,
            Size:          0,
            Age:           219*time.Hour + 49*time.Minute,
            FullnessRatio: 0,
        },
    }

    if err := emitErasureCodingDetectionDecisionTrace(sender, metrics, config, nil); err != nil {
        t.Fatalf("emitErasureCodingDetectionDecisionTrace error: %v", err)
    }
    if len(sender.events) < 4 {
        t.Fatalf("expected at least 4 detection events, got %d", len(sender.events))
    }

    if sender.events[0].Source != plugin_pb.ActivitySource_ACTIVITY_SOURCE_DETECTOR {
        t.Fatalf("expected detector source, got %v", sender.events[0].Source)
    }
    if !strings.Contains(sender.events[0].Message, "EC detection: No tasks created for 3 volumes") {
        t.Fatalf("unexpected summary message: %q", sender.events[0].Message)
    }
    if !strings.Contains(sender.events[1].Message, "ERASURE CODING: Volume 20: size=0.0MB") {
        t.Fatalf("unexpected first detail message: %q", sender.events[1].Message)
    }
}

func TestErasureCodingDescriptorOmitsLocalExecutionFields(t *testing.T) {
    descriptor := NewErasureCodingHandler(nil).Descriptor()
    if descriptor == nil || descriptor.WorkerConfigForm == nil {
        t.Fatalf("expected worker config form in descriptor")
    }
    if workerConfigFormHasField(descriptor.WorkerConfigForm, "working_dir") {
        t.Fatalf("unexpected working_dir in erasure coding worker config form")
    }
    if workerConfigFormHasField(descriptor.WorkerConfigForm, "cleanup_source") {
        t.Fatalf("unexpected cleanup_source in erasure coding worker config form")
    }
}

func TestApplyErasureCodingExecutionDefaultsForcesLocalFields(t *testing.T) {
    params := &worker_pb.TaskParams{
        TaskId:   "ec-test",
        VolumeId: 100,
        TaskParams: &worker_pb.TaskParams_ErasureCodingParams{
            ErasureCodingParams: &worker_pb.ErasureCodingTaskParams{
                DataShards:    ecstorage.DataShardsCount,
                ParityShards:  ecstorage.ParityShardsCount,
                WorkingDir:    "/tmp/custom-from-job",
                CleanupSource: false,
            },
        },
    }

    applyErasureCodingExecutionDefaults(params, nil)

    ecParams := params.GetErasureCodingParams()
    if ecParams == nil {
        t.Fatalf("expected erasure coding params")
    }
    if ecParams.WorkingDir != defaultErasureCodingWorkingDir() {
        t.Fatalf("expected local working_dir %q, got %q", defaultErasureCodingWorkingDir(), ecParams.WorkingDir)
    }
    if !ecParams.CleanupSource {
        t.Fatalf("expected cleanup_source true")
    }
}
weed/plugin/worker/vacuum_handler.go (new file, 870 lines)
@@ -0,0 +1,870 @@
package pluginworker

import (
	"context"
	"fmt"
	"sort"
	"strconv"
	"strings"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/admin/topology"
	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/master_pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
	vacuumtask "github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"
	workertypes "github.com/seaweedfs/seaweedfs/weed/worker/types"
	"google.golang.org/grpc"
	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/timestamppb"
)

const (
	defaultVacuumTaskBatchSize = int32(1000)
)

// VacuumHandler is the plugin job handler for the vacuum job type.
type VacuumHandler struct {
	grpcDialOption grpc.DialOption
}

func NewVacuumHandler(grpcDialOption grpc.DialOption) *VacuumHandler {
	return &VacuumHandler{grpcDialOption: grpcDialOption}
}

// Capability advertises what the vacuum handler can do to the admin runtime.
func (h *VacuumHandler) Capability() *plugin_pb.JobTypeCapability {
	return &plugin_pb.JobTypeCapability{
		JobType:                 "vacuum",
		CanDetect:               true,
		CanExecute:              true,
		MaxDetectionConcurrency: 1,
		MaxExecutionConcurrency: 2,
		DisplayName:             "Volume Vacuum",
		Description:             "Reclaims disk space by removing deleted files from volumes",
	}
}

// Descriptor describes the vacuum job type, its config forms, and its runtime defaults.
func (h *VacuumHandler) Descriptor() *plugin_pb.JobTypeDescriptor {
	return &plugin_pb.JobTypeDescriptor{
		JobType:           "vacuum",
		DisplayName:       "Volume Vacuum",
		Description:       "Detect and vacuum volumes with high garbage ratio",
		Icon:              "fas fa-broom",
		DescriptorVersion: 1,
		AdminConfigForm: &plugin_pb.ConfigForm{
			FormId:      "vacuum-admin",
			Title:       "Vacuum Admin Config",
			Description: "Admin-side controls for vacuum detection scope.",
			Sections: []*plugin_pb.ConfigSection{
				{
					SectionId:   "scope",
					Title:       "Scope",
					Description: "Optional filter to restrict detection.",
					Fields: []*plugin_pb.ConfigField{
						{
							Name:        "collection_filter",
							Label:       "Collection Filter",
							Description: "Only scan this collection when set.",
							Placeholder: "all collections",
							FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_STRING,
							Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_TEXT,
						},
					},
				},
			},
			DefaultValues: map[string]*plugin_pb.ConfigValue{
				"collection_filter": {
					Kind: &plugin_pb.ConfigValue_StringValue{StringValue: ""},
				},
			},
		},
		WorkerConfigForm: &plugin_pb.ConfigForm{
			FormId:      "vacuum-worker",
			Title:       "Vacuum Worker Config",
			Description: "Worker-side vacuum thresholds.",
			Sections: []*plugin_pb.ConfigSection{
				{
					SectionId:   "thresholds",
					Title:       "Thresholds",
					Description: "Detection thresholds and timing constraints.",
					Fields: []*plugin_pb.ConfigField{
						{
							Name:        "garbage_threshold",
							Label:       "Garbage Threshold",
							Description: "Detect volumes with garbage ratio >= threshold.",
							FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_DOUBLE,
							Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_NUMBER,
							Required:    true,
							MinValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0}},
							MaxValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 1}},
						},
						{
							Name:        "min_volume_age_seconds",
							Label:       "Min Volume Age (s)",
							Description: "Only detect volumes older than this age.",
							FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_INT64,
							Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_NUMBER,
							Required:    true,
							MinValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 0}},
						},
						{
							Name:        "min_interval_seconds",
							Label:       "Min Interval (s)",
							Description: "Minimum interval between vacuum runs on the same volume.",
							FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_INT64,
							Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_NUMBER,
							Required:    true,
							MinValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 0}},
						},
					},
				},
			},
			DefaultValues: map[string]*plugin_pb.ConfigValue{
				"garbage_threshold": {
					Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0.3},
				},
				"min_volume_age_seconds": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 24 * 60 * 60},
				},
				"min_interval_seconds": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 7 * 24 * 60 * 60},
				},
			},
		},
		AdminRuntimeDefaults: &plugin_pb.AdminRuntimeDefaults{
			Enabled:                       true,
			DetectionIntervalSeconds:      2 * 60 * 60,
			DetectionTimeoutSeconds:       120,
			MaxJobsPerDetection:           200,
			GlobalExecutionConcurrency:    16,
			PerWorkerExecutionConcurrency: 4,
			RetryLimit:                    1,
			RetryBackoffSeconds:           10,
		},
		WorkerDefaultValues: map[string]*plugin_pb.ConfigValue{
			"garbage_threshold": {
				Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0.3},
			},
			"min_volume_age_seconds": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 24 * 60 * 60},
			},
			"min_interval_seconds": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 7 * 24 * 60 * 60},
			},
		},
	}
}

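// NOTE (added commentary, assumed intent): WorkerDefaultValues mirrors
// WorkerConfigForm.DefaultValues above, which lets the admin bootstrap a
// persisted per-job-type config from the descriptor before any form has
// been saved. If the two maps drift apart, the form UI and the scheduler
// would disagree on defaults, so keep them identical (or derive one from
// the other).
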
// Detect scans the cluster for volumes worth vacuuming and streams proposals
// back through the sender.
func (h *VacuumHandler) Detect(ctx context.Context, request *plugin_pb.RunDetectionRequest, sender DetectionSender) error {
	if request == nil {
		return fmt.Errorf("run detection request is nil")
	}
	if sender == nil {
		return fmt.Errorf("detection sender is nil")
	}
	if request.JobType != "" && request.JobType != "vacuum" {
		return fmt.Errorf("job type %q is not handled by vacuum worker", request.JobType)
	}

	workerConfig := deriveVacuumConfig(request.GetWorkerConfigValues())
	if shouldSkipDetectionByInterval(request.GetLastSuccessfulRun(), workerConfig.MinIntervalSeconds) {
		minInterval := time.Duration(workerConfig.MinIntervalSeconds) * time.Second
		_ = sender.SendActivity(buildDetectorActivity(
			"skipped_by_interval",
			fmt.Sprintf("VACUUM: Detection skipped due to min interval (%s)", minInterval),
			map[string]*plugin_pb.ConfigValue{
				"min_interval_seconds": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(workerConfig.MinIntervalSeconds)},
				},
			},
		))
		if err := sender.SendProposals(&plugin_pb.DetectionProposals{
			JobType:   "vacuum",
			Proposals: []*plugin_pb.JobProposal{},
			HasMore:   false,
		}); err != nil {
			return err
		}
		return sender.SendComplete(&plugin_pb.DetectionComplete{
			JobType:        "vacuum",
			Success:        true,
			TotalProposals: 0,
		})
	}

	collectionFilter := strings.TrimSpace(readStringConfig(request.GetAdminConfigValues(), "collection_filter", ""))
	masters := make([]string, 0)
	if request.ClusterContext != nil {
		masters = append(masters, request.ClusterContext.MasterGrpcAddresses...)
	}
	metrics, activeTopology, err := h.collectVolumeMetrics(ctx, masters, collectionFilter)
	if err != nil {
		return err
	}

	clusterInfo := &workertypes.ClusterInfo{ActiveTopology: activeTopology}
	results, err := vacuumtask.Detection(metrics, clusterInfo, workerConfig)
	if err != nil {
		return err
	}
	if traceErr := emitVacuumDetectionDecisionTrace(sender, metrics, workerConfig, results); traceErr != nil {
		glog.Warningf("Plugin worker failed to emit vacuum detection trace: %v", traceErr)
	}

	maxResults := int(request.MaxResults)
	hasMore := false
	if maxResults > 0 && len(results) > maxResults {
		hasMore = true
		results = results[:maxResults]
	}

	proposals := make([]*plugin_pb.JobProposal, 0, len(results))
	for _, result := range results {
		proposal, proposalErr := buildVacuumProposal(result)
		if proposalErr != nil {
			glog.Warningf("Plugin worker skipped an invalid vacuum proposal: %v", proposalErr)
			continue
		}
		proposals = append(proposals, proposal)
	}

	if err := sender.SendProposals(&plugin_pb.DetectionProposals{
		JobType:   "vacuum",
		Proposals: proposals,
		HasMore:   hasMore,
	}); err != nil {
		return err
	}

	return sender.SendComplete(&plugin_pb.DetectionComplete{
		JobType:        "vacuum",
		Success:        true,
		TotalProposals: int32(len(proposals)),
	})
}

// emitVacuumDetectionDecisionTrace sends a human-readable summary of the
// detection decision, plus per-volume detail for the first few volumes.
func emitVacuumDetectionDecisionTrace(
	sender DetectionSender,
	metrics []*workertypes.VolumeHealthMetrics,
	workerConfig *vacuumtask.Config,
	results []*workertypes.TaskDetectionResult,
) error {
	if sender == nil || workerConfig == nil {
		return nil
	}

	minVolumeAge := time.Duration(workerConfig.MinVolumeAgeSeconds) * time.Second
	totalVolumes := len(metrics)

	// Note: the skip counters only sample the first five skipped volumes;
	// they are a debugging aid rather than exact totals.
	debugCount := 0
	skippedDueToGarbage := 0
	skippedDueToAge := 0
	for _, metric := range metrics {
		if metric == nil {
			continue
		}
		if metric.GarbageRatio >= workerConfig.GarbageThreshold && metric.Age >= minVolumeAge {
			continue
		}
		if debugCount < 5 {
			if metric.GarbageRatio < workerConfig.GarbageThreshold {
				skippedDueToGarbage++
			}
			if metric.Age < minVolumeAge {
				skippedDueToAge++
			}
		}
		debugCount++
	}

	summaryMessage := ""
	summaryStage := "decision_summary"
	if len(results) == 0 {
		summaryMessage = fmt.Sprintf(
			"VACUUM: No tasks created for %d volumes. Threshold=%.2f%%, MinAge=%s. Skipped: %d (garbage<threshold), %d (age<minimum)",
			totalVolumes,
			workerConfig.GarbageThreshold*100,
			minVolumeAge,
			skippedDueToGarbage,
			skippedDueToAge,
		)
	} else {
		summaryMessage = fmt.Sprintf(
			"VACUUM: Created %d task(s) from %d volumes. Threshold=%.2f%%, MinAge=%s",
			len(results),
			totalVolumes,
			workerConfig.GarbageThreshold*100,
			minVolumeAge,
		)
	}

	if err := sender.SendActivity(buildDetectorActivity(summaryStage, summaryMessage, map[string]*plugin_pb.ConfigValue{
		"total_volumes": {
			Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(totalVolumes)},
		},
		"selected_tasks": {
			Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(len(results))},
		},
		"garbage_threshold_percent": {
			Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: workerConfig.GarbageThreshold * 100},
		},
		"min_volume_age_seconds": {
			Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(workerConfig.MinVolumeAgeSeconds)},
		},
		"skipped_garbage": {
			Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(skippedDueToGarbage)},
		},
		"skipped_age": {
			Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(skippedDueToAge)},
		},
	})); err != nil {
		return err
	}

	limit := 3
	if len(metrics) < limit {
		limit = len(metrics)
	}
	for i := 0; i < limit; i++ {
		metric := metrics[i]
		if metric == nil {
			continue
		}
		message := fmt.Sprintf(
			"VACUUM: Volume %d: garbage=%.2f%% (need ≥%.2f%%), age=%s (need ≥%s)",
			metric.VolumeID,
			metric.GarbageRatio*100,
			workerConfig.GarbageThreshold*100,
			metric.Age.Truncate(time.Minute),
			minVolumeAge.Truncate(time.Minute),
		)
		if err := sender.SendActivity(buildDetectorActivity("decision_volume", message, map[string]*plugin_pb.ConfigValue{
			"volume_id": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(metric.VolumeID)},
			},
			"garbage_percent": {
				Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: metric.GarbageRatio * 100},
			},
			"required_garbage_percent": {
				Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: workerConfig.GarbageThreshold * 100},
			},
			"age_seconds": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(metric.Age.Seconds())},
			},
			"required_age_seconds": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(minVolumeAge.Seconds())},
			},
		})); err != nil {
			return err
		}
	}

	return nil
}

// Execute runs a vacuum job against the volume named in the job parameters,
// streaming progress updates and a final completion message through the sender.
func (h *VacuumHandler) Execute(ctx context.Context, request *plugin_pb.ExecuteJobRequest, sender ExecutionSender) error {
	if request == nil || request.Job == nil {
		return fmt.Errorf("execute request/job is nil")
	}
	if sender == nil {
		return fmt.Errorf("execution sender is nil")
	}
	if request.Job.JobType != "" && request.Job.JobType != "vacuum" {
		return fmt.Errorf("job type %q is not handled by vacuum worker", request.Job.JobType)
	}

	params, err := decodeVacuumTaskParams(request.Job)
	if err != nil {
		return err
	}
	if len(params.Sources) == 0 || strings.TrimSpace(params.Sources[0].Node) == "" {
		return fmt.Errorf("vacuum task source node is required")
	}

	workerConfig := deriveVacuumConfig(request.GetWorkerConfigValues())
	if vacuumParams := params.GetVacuumParams(); vacuumParams != nil {
		if vacuumParams.GarbageThreshold <= 0 {
			vacuumParams.GarbageThreshold = workerConfig.GarbageThreshold
		}
	} else {
		params.TaskParams = &worker_pb.TaskParams_VacuumParams{
			VacuumParams: &worker_pb.VacuumTaskParams{
				GarbageThreshold: workerConfig.GarbageThreshold,
				BatchSize:        defaultVacuumTaskBatchSize,
				VerifyChecksum:   true,
			},
		}
	}

	task := vacuumtask.NewVacuumTask(
		request.Job.JobId,
		params.Sources[0].Node,
		params.VolumeId,
		params.Collection,
	)
	task.SetProgressCallback(func(progress float64, stage string) {
		message := fmt.Sprintf("vacuum progress %.0f%%", progress)
		if strings.TrimSpace(stage) != "" {
			message = stage
		}
		_ = sender.SendProgress(&plugin_pb.JobProgressUpdate{
			JobId:           request.Job.JobId,
			JobType:         request.Job.JobType,
			State:           plugin_pb.JobState_JOB_STATE_RUNNING,
			ProgressPercent: progress,
			Stage:           stage,
			Message:         message,
			Activities: []*plugin_pb.ActivityEvent{
				buildExecutorActivity(stage, message),
			},
		})
	})

	if err := sender.SendProgress(&plugin_pb.JobProgressUpdate{
		JobId:           request.Job.JobId,
		JobType:         request.Job.JobType,
		State:           plugin_pb.JobState_JOB_STATE_ASSIGNED,
		ProgressPercent: 0,
		Stage:           "assigned",
		Message:         "vacuum job accepted",
		Activities: []*plugin_pb.ActivityEvent{
			buildExecutorActivity("assigned", "vacuum job accepted"),
		},
	}); err != nil {
		return err
	}

	if err := task.Execute(ctx, params); err != nil {
		_ = sender.SendProgress(&plugin_pb.JobProgressUpdate{
			JobId:           request.Job.JobId,
			JobType:         request.Job.JobType,
			State:           plugin_pb.JobState_JOB_STATE_FAILED,
			ProgressPercent: 100,
			Stage:           "failed",
			Message:         err.Error(),
			Activities: []*plugin_pb.ActivityEvent{
				buildExecutorActivity("failed", err.Error()),
			},
		})
		return err
	}

	resultSummary := fmt.Sprintf("vacuum completed for volume %d", params.VolumeId)
	return sender.SendCompleted(&plugin_pb.JobCompleted{
		JobId:   request.Job.JobId,
		JobType: request.Job.JobType,
		Success: true,
		Result: &plugin_pb.JobResult{
			Summary: resultSummary,
			OutputValues: map[string]*plugin_pb.ConfigValue{
				"volume_id": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(params.VolumeId)},
				},
				"server": {
					Kind: &plugin_pb.ConfigValue_StringValue{StringValue: params.Sources[0].Node},
				},
			},
		},
		Activities: []*plugin_pb.ActivityEvent{
			buildExecutorActivity("completed", resultSummary),
		},
	})
}

// collectVolumeMetrics loads the volume list from the first reachable master
// and converts it into per-volume health metrics plus an active topology.
func (h *VacuumHandler) collectVolumeMetrics(
	ctx context.Context,
	masterAddresses []string,
	collectionFilter string,
) ([]*workertypes.VolumeHealthMetrics, *topology.ActiveTopology, error) {
	if h.grpcDialOption == nil {
		return nil, nil, fmt.Errorf("grpc dial option is not configured")
	}
	if len(masterAddresses) == 0 {
		return nil, nil, fmt.Errorf("no master addresses provided in cluster context")
	}

	for _, masterAddress := range masterAddresses {
		response, err := h.fetchVolumeList(ctx, masterAddress)
		if err != nil {
			glog.Warningf("Plugin worker failed to fetch the volume list from master %s: %v", masterAddress, err)
			continue
		}

		metrics, activeTopology, buildErr := buildVolumeMetrics(response, collectionFilter)
		if buildErr != nil {
			glog.Warningf("Plugin worker failed to build metrics from master %s: %v", masterAddress, buildErr)
			continue
		}
		return metrics, activeTopology, nil
	}

	return nil, nil, fmt.Errorf("failed to load topology from all provided masters")
}

// fetchVolumeList tries each address candidate for the given master until one
// answers the VolumeList call.
func (h *VacuumHandler) fetchVolumeList(ctx context.Context, address string) (*master_pb.VolumeListResponse, error) {
	var lastErr error
	for _, candidate := range masterAddressCandidates(address) {
		if ctx.Err() != nil {
			return nil, ctx.Err()
		}

		dialCtx, cancelDial := context.WithTimeout(ctx, 5*time.Second)
		conn, err := pb.GrpcDial(dialCtx, candidate, false, h.grpcDialOption)
		cancelDial()
		if err != nil {
			lastErr = err
			continue
		}

		client := master_pb.NewSeaweedClient(conn)
		callCtx, cancelCall := context.WithTimeout(ctx, 10*time.Second)
		response, callErr := client.VolumeList(callCtx, &master_pb.VolumeListRequest{})
		cancelCall()
		_ = conn.Close()

		if callErr == nil {
			return response, nil
		}
		lastErr = callErr
	}

	if lastErr == nil {
		lastErr = fmt.Errorf("no valid master address candidate")
	}
	return nil, lastErr
}

// deriveVacuumConfig overlays worker-provided config values on the defaults.
func deriveVacuumConfig(values map[string]*plugin_pb.ConfigValue) *vacuumtask.Config {
	config := vacuumtask.NewDefaultConfig()
	config.GarbageThreshold = readDoubleConfig(values, "garbage_threshold", config.GarbageThreshold)
	config.MinVolumeAgeSeconds = int(readInt64Config(values, "min_volume_age_seconds", int64(config.MinVolumeAgeSeconds)))
	config.MinIntervalSeconds = int(readInt64Config(values, "min_interval_seconds", int64(config.MinIntervalSeconds)))
	return config
}

// buildVolumeMetrics flattens the master topology into per-volume health
// metrics, optionally restricted to a single collection.
func buildVolumeMetrics(
	response *master_pb.VolumeListResponse,
	collectionFilter string,
) ([]*workertypes.VolumeHealthMetrics, *topology.ActiveTopology, error) {
	if response == nil || response.TopologyInfo == nil {
		return nil, nil, fmt.Errorf("volume list response has no topology info")
	}

	activeTopology := topology.NewActiveTopology(10)
	if err := activeTopology.UpdateTopology(response.TopologyInfo); err != nil {
		return nil, nil, err
	}

	filter := strings.TrimSpace(collectionFilter)
	volumeSizeLimitBytes := uint64(response.VolumeSizeLimitMb) * 1024 * 1024
	now := time.Now()
	metrics := make([]*workertypes.VolumeHealthMetrics, 0, 256)

	for _, dc := range response.TopologyInfo.DataCenterInfos {
		for _, rack := range dc.RackInfos {
			for _, node := range rack.DataNodeInfos {
				for diskType, diskInfo := range node.DiskInfos {
					for _, volume := range diskInfo.VolumeInfos {
						if filter != "" && volume.Collection != filter {
							continue
						}

						metric := &workertypes.VolumeHealthMetrics{
							VolumeID:         volume.Id,
							Server:           node.Id,
							ServerAddress:    node.Address,
							DiskType:         diskType,
							DiskId:           volume.DiskId,
							DataCenter:       dc.Id,
							Rack:             rack.Id,
							Collection:       volume.Collection,
							Size:             volume.Size,
							DeletedBytes:     volume.DeletedByteCount,
							LastModified:     time.Unix(volume.ModifiedAtSecond, 0),
							ReplicaCount:     1,
							ExpectedReplicas: int(volume.ReplicaPlacement),
							IsReadOnly:       volume.ReadOnly,
						}
						if metric.Size > 0 {
							metric.GarbageRatio = float64(metric.DeletedBytes) / float64(metric.Size)
						}
						if volumeSizeLimitBytes > 0 {
							metric.FullnessRatio = float64(metric.Size) / float64(volumeSizeLimitBytes)
						}
						metric.Age = now.Sub(metric.LastModified)
						metrics = append(metrics, metric)
					}
				}
			}
		}
	}

	// Replicas of the same volume show up once per server; tally them so the
	// detection logic sees the true replica count.
	replicaCounts := make(map[uint32]int)
	for _, metric := range metrics {
		replicaCounts[metric.VolumeID]++
	}
	for _, metric := range metrics {
		metric.ReplicaCount = replicaCounts[metric.VolumeID]
	}

	return metrics, activeTopology, nil
}

// buildVacuumProposal converts a detection result into a job proposal. The
// fully typed task params travel with the proposal as marshaled bytes.
func buildVacuumProposal(result *workertypes.TaskDetectionResult) (*plugin_pb.JobProposal, error) {
	if result == nil {
		return nil, fmt.Errorf("task detection result is nil")
	}
	if result.TypedParams == nil {
		return nil, fmt.Errorf("missing typed params for volume %d", result.VolumeID)
	}

	paramsPayload, err := proto.Marshal(result.TypedParams)
	if err != nil {
		return nil, fmt.Errorf("marshal task params: %w", err)
	}

	proposalID := strings.TrimSpace(result.TaskID)
	if proposalID == "" {
		proposalID = fmt.Sprintf("vacuum-%d-%d", result.VolumeID, time.Now().UnixNano())
	}

	dedupeKey := fmt.Sprintf("vacuum:%d", result.VolumeID)
	if result.Collection != "" {
		dedupeKey = dedupeKey + ":" + result.Collection
	}

	summary := fmt.Sprintf("Vacuum volume %d", result.VolumeID)
	if strings.TrimSpace(result.Server) != "" {
		summary = summary + " on " + result.Server
	}

	return &plugin_pb.JobProposal{
		ProposalId: proposalID,
		DedupeKey:  dedupeKey,
		JobType:    "vacuum",
		Priority:   mapTaskPriority(result.Priority),
		Summary:    summary,
		Detail:     strings.TrimSpace(result.Reason),
		Parameters: map[string]*plugin_pb.ConfigValue{
			"task_params_pb": {
				Kind: &plugin_pb.ConfigValue_BytesValue{BytesValue: paramsPayload},
			},
			"volume_id": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(result.VolumeID)},
			},
			"server": {
				Kind: &plugin_pb.ConfigValue_StringValue{StringValue: result.Server},
			},
			"collection": {
				Kind: &plugin_pb.ConfigValue_StringValue{StringValue: result.Collection},
			},
		},
		Labels: map[string]string{
			"task_type":   "vacuum",
			"volume_id":   fmt.Sprintf("%d", result.VolumeID),
			"collection":  result.Collection,
			"source_node": result.Server,
		},
	}, nil
}

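// For example, volume 42 in collection "photos" yields the dedupe key
// "vacuum:42:photos", so repeated detection rounds propose the same logical
// job at most once while an earlier proposal is still pending.
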
// decodeVacuumTaskParams reconstructs worker_pb.TaskParams from a job spec.
// It prefers the marshaled "task_params_pb" payload and falls back to the
// individual volume_id/server/collection parameters.
func decodeVacuumTaskParams(job *plugin_pb.JobSpec) (*worker_pb.TaskParams, error) {
	if job == nil {
		return nil, fmt.Errorf("job spec is nil")
	}

	if payload := readBytesConfig(job.Parameters, "task_params_pb"); len(payload) > 0 {
		params := &worker_pb.TaskParams{}
		if err := proto.Unmarshal(payload, params); err != nil {
			return nil, fmt.Errorf("unmarshal task_params_pb: %w", err)
		}
		if params.TaskId == "" {
			params.TaskId = job.JobId
		}
		return params, nil
	}

	volumeID := readInt64Config(job.Parameters, "volume_id", 0)
	server := readStringConfig(job.Parameters, "server", "")
	collection := readStringConfig(job.Parameters, "collection", "")
	if volumeID <= 0 {
		return nil, fmt.Errorf("missing volume_id in job parameters")
	}
	if strings.TrimSpace(server) == "" {
		return nil, fmt.Errorf("missing server in job parameters")
	}

	return &worker_pb.TaskParams{
		TaskId:     job.JobId,
		VolumeId:   uint32(volumeID),
		Collection: collection,
		Sources: []*worker_pb.TaskSource{
			{
				Node:     server,
				VolumeId: uint32(volumeID),
			},
		},
		TaskParams: &worker_pb.TaskParams_VacuumParams{
			VacuumParams: &worker_pb.VacuumTaskParams{
				GarbageThreshold: 0.3,
				BatchSize:        defaultVacuumTaskBatchSize,
				VerifyChecksum:   true,
			},
		},
	}, nil
}

// readStringConfig returns the string form of a config value, coercing
// numeric and boolean kinds, or the fallback when the field is absent.
func readStringConfig(values map[string]*plugin_pb.ConfigValue, field string, fallback string) string {
	if values == nil {
		return fallback
	}
	value := values[field]
	if value == nil {
		return fallback
	}
	switch kind := value.Kind.(type) {
	case *plugin_pb.ConfigValue_StringValue:
		return kind.StringValue
	case *plugin_pb.ConfigValue_Int64Value:
		return strconv.FormatInt(kind.Int64Value, 10)
	case *plugin_pb.ConfigValue_DoubleValue:
		return strconv.FormatFloat(kind.DoubleValue, 'f', -1, 64)
	case *plugin_pb.ConfigValue_BoolValue:
		return strconv.FormatBool(kind.BoolValue)
	}
	return fallback
}

// readDoubleConfig returns a float64 config value, coercing integer, string,
// and boolean kinds, or the fallback when the field is absent or unparsable.
func readDoubleConfig(values map[string]*plugin_pb.ConfigValue, field string, fallback float64) float64 {
	if values == nil {
		return fallback
	}
	value := values[field]
	if value == nil {
		return fallback
	}
	switch kind := value.Kind.(type) {
	case *plugin_pb.ConfigValue_DoubleValue:
		return kind.DoubleValue
	case *plugin_pb.ConfigValue_Int64Value:
		return float64(kind.Int64Value)
	case *plugin_pb.ConfigValue_StringValue:
		parsed, err := strconv.ParseFloat(strings.TrimSpace(kind.StringValue), 64)
		if err == nil {
			return parsed
		}
	case *plugin_pb.ConfigValue_BoolValue:
		if kind.BoolValue {
			return 1
		}
		return 0
	}
	return fallback
}

// readInt64Config returns an int64 config value, coercing double, string,
// and boolean kinds, or the fallback when the field is absent or unparsable.
func readInt64Config(values map[string]*plugin_pb.ConfigValue, field string, fallback int64) int64 {
	if values == nil {
		return fallback
	}
	value := values[field]
	if value == nil {
		return fallback
	}
	switch kind := value.Kind.(type) {
	case *plugin_pb.ConfigValue_Int64Value:
		return kind.Int64Value
	case *plugin_pb.ConfigValue_DoubleValue:
		return int64(kind.DoubleValue)
	case *plugin_pb.ConfigValue_StringValue:
		parsed, err := strconv.ParseInt(strings.TrimSpace(kind.StringValue), 10, 64)
		if err == nil {
			return parsed
		}
	case *plugin_pb.ConfigValue_BoolValue:
		if kind.BoolValue {
			return 1
		}
		return 0
	}
	return fallback
}

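// Coercion examples for the helpers above: a value stored as the string
// "0.25" is read back as 0.25 by readDoubleConfig, an int64 86400 is read
// back as the string "86400" by readStringConfig, and a malformed string
// such as "abc" falls through to the caller-supplied fallback.
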
// readBytesConfig returns a bytes config value, or nil when the field is
// absent or holds a different kind.
func readBytesConfig(values map[string]*plugin_pb.ConfigValue, field string) []byte {
	if values == nil {
		return nil
	}
	value := values[field]
	if value == nil {
		return nil
	}
	if kind, ok := value.Kind.(*plugin_pb.ConfigValue_BytesValue); ok {
		return kind.BytesValue
	}
	return nil
}

// mapTaskPriority maps legacy worker task priorities onto plugin job
// priorities, defaulting to normal for unknown values.
func mapTaskPriority(priority workertypes.TaskPriority) plugin_pb.JobPriority {
	switch strings.ToLower(string(priority)) {
	case "low":
		return plugin_pb.JobPriority_JOB_PRIORITY_LOW
	case "medium", "normal":
		return plugin_pb.JobPriority_JOB_PRIORITY_NORMAL
	case "high":
		return plugin_pb.JobPriority_JOB_PRIORITY_HIGH
	case "critical":
		return plugin_pb.JobPriority_JOB_PRIORITY_CRITICAL
	default:
		return plugin_pb.JobPriority_JOB_PRIORITY_NORMAL
	}
}

// masterAddressCandidates returns the deduplicated, sorted set of addresses
// to try for a master: the address as given plus its gRPC form.
func masterAddressCandidates(address string) []string {
	trimmed := strings.TrimSpace(address)
	if trimmed == "" {
		return nil
	}
	candidateSet := map[string]struct{}{
		trimmed: {},
	}
	converted := pb.ServerToGrpcAddress(trimmed)
	candidateSet[converted] = struct{}{}

	candidates := make([]string, 0, len(candidateSet))
	for candidate := range candidateSet {
		candidates = append(candidates, candidate)
	}
	sort.Strings(candidates)
	return candidates
}

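// For example, "localhost:9333" expands to the candidates "localhost:19333"
// and "localhost:9333" (sorted): pb.ServerToGrpcAddress derives the gRPC
// port from the HTTP port by the usual +10000 convention, and the unit test
// for this helper asserts exactly these two candidates.
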
// shouldSkipDetectionByInterval reports whether detection should be skipped
// because the last successful run is more recent than the minimum interval.
func shouldSkipDetectionByInterval(lastSuccessfulRun *timestamppb.Timestamp, minIntervalSeconds int) bool {
	if lastSuccessfulRun == nil || minIntervalSeconds <= 0 {
		return false
	}
	lastRun := lastSuccessfulRun.AsTime()
	if lastRun.IsZero() {
		return false
	}
	return time.Since(lastRun) < time.Duration(minIntervalSeconds)*time.Second
}

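// Worked example: with min_interval_seconds=10, a run that succeeded 5s ago
// skips the current detection round, while one that succeeded 30s ago does
// not; a missing timestamp or a zero interval always allows detection.
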
// buildExecutorActivity creates a timestamped executor-side activity event.
func buildExecutorActivity(stage string, message string) *plugin_pb.ActivityEvent {
	return &plugin_pb.ActivityEvent{
		Source:    plugin_pb.ActivitySource_ACTIVITY_SOURCE_EXECUTOR,
		Stage:     stage,
		Message:   message,
		CreatedAt: timestamppb.Now(),
	}
}

// buildDetectorActivity creates a timestamped detector-side activity event
// with optional structured details.
func buildDetectorActivity(stage string, message string, details map[string]*plugin_pb.ConfigValue) *plugin_pb.ActivityEvent {
	return &plugin_pb.ActivityEvent{
		Source:    plugin_pb.ActivitySource_ACTIVITY_SOURCE_DETECTOR,
		Stage:     stage,
		Message:   message,
		Details:   details,
		CreatedAt: timestamppb.Now(),
	}
}

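Note on handler selection (added commentary): the plugin.worker command routes detect and execute requests to a handler by job type, which is why each handler above also guards its own JobType. A minimal sketch of that wiring, using illustrative names (jobHandler, newHandlerRegistry) rather than the actual command code, which may differ:

	// Sketch only: not the real plugin.worker wiring.
	type jobHandler interface {
		Capability() *plugin_pb.JobTypeCapability
		Descriptor() *plugin_pb.JobTypeDescriptor
	}

	func newHandlerRegistry(grpcDialOption grpc.DialOption) map[string]jobHandler {
		return map[string]jobHandler{
			"vacuum":         NewVacuumHandler(grpcDialOption),
			"volume_balance": NewVolumeBalanceHandler(grpcDialOption),
		}
	}

Looking up handlers[request.JobType] and rejecting unknown types keeps the per-handler guards as a second line of defense.
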
weed/plugin/worker/vacuum_handler_test.go (new file, 277 lines)
@@ -0,0 +1,277 @@
package pluginworker

import (
	"context"
	"strings"
	"testing"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
	vacuumtask "github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"
	workertypes "github.com/seaweedfs/seaweedfs/weed/worker/types"
	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func TestDecodeVacuumTaskParamsFromPayload(t *testing.T) {
	expected := &worker_pb.TaskParams{
		TaskId:     "task-1",
		VolumeId:   42,
		Collection: "photos",
		Sources: []*worker_pb.TaskSource{
			{
				Node:     "10.0.0.1:8080",
				VolumeId: 42,
			},
		},
		TaskParams: &worker_pb.TaskParams_VacuumParams{
			VacuumParams: &worker_pb.VacuumTaskParams{
				GarbageThreshold: 0.33,
				BatchSize:        500,
				VerifyChecksum:   true,
			},
		},
	}
	payload, err := proto.Marshal(expected)
	if err != nil {
		t.Fatalf("marshal payload: %v", err)
	}

	job := &plugin_pb.JobSpec{
		JobId: "job-from-admin",
		Parameters: map[string]*plugin_pb.ConfigValue{
			"task_params_pb": {Kind: &plugin_pb.ConfigValue_BytesValue{BytesValue: payload}},
		},
	}

	actual, err := decodeVacuumTaskParams(job)
	if err != nil {
		t.Fatalf("decodeVacuumTaskParams() err = %v", err)
	}
	if !proto.Equal(expected, actual) {
		t.Fatalf("decoded params mismatch\nexpected: %+v\nactual: %+v", expected, actual)
	}
}

func TestDecodeVacuumTaskParamsFallback(t *testing.T) {
	job := &plugin_pb.JobSpec{
		JobId: "job-2",
		Parameters: map[string]*plugin_pb.ConfigValue{
			"volume_id":  {Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 7}},
			"server":     {Kind: &plugin_pb.ConfigValue_StringValue{StringValue: "127.0.0.1:8080"}},
			"collection": {Kind: &plugin_pb.ConfigValue_StringValue{StringValue: "videos"}},
		},
	}

	params, err := decodeVacuumTaskParams(job)
	if err != nil {
		t.Fatalf("decodeVacuumTaskParams() err = %v", err)
	}
	if params.TaskId != "job-2" || params.VolumeId != 7 || params.Collection != "videos" {
		t.Fatalf("unexpected basic params: %+v", params)
	}
	if len(params.Sources) != 1 || params.Sources[0].Node != "127.0.0.1:8080" {
		t.Fatalf("unexpected sources: %+v", params.Sources)
	}
	if params.GetVacuumParams() == nil {
		t.Fatalf("expected fallback vacuum params")
	}
}

func TestDeriveVacuumConfigAllowsZeroValues(t *testing.T) {
	values := map[string]*plugin_pb.ConfigValue{
		"garbage_threshold": {
			Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0},
		},
		"min_volume_age_seconds": {
			Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 0},
		},
		"min_interval_seconds": {
			Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 0},
		},
	}

	cfg := deriveVacuumConfig(values)
	if cfg.GarbageThreshold != 0 {
		t.Fatalf("expected garbage_threshold 0, got %v", cfg.GarbageThreshold)
	}
	if cfg.MinVolumeAgeSeconds != 0 {
		t.Fatalf("expected min_volume_age_seconds 0, got %d", cfg.MinVolumeAgeSeconds)
	}
	if cfg.MinIntervalSeconds != 0 {
		t.Fatalf("expected min_interval_seconds 0, got %d", cfg.MinIntervalSeconds)
	}
}

func TestMasterAddressCandidates(t *testing.T) {
	candidates := masterAddressCandidates("localhost:9333")
	if len(candidates) != 2 {
		t.Fatalf("expected 2 candidates, got %d: %v", len(candidates), candidates)
	}
	seen := map[string]bool{}
	for _, candidate := range candidates {
		seen[candidate] = true
	}
	if !seen["localhost:9333"] {
		t.Fatalf("expected original address in candidates: %v", candidates)
	}
	if !seen["localhost:19333"] {
		t.Fatalf("expected grpc address in candidates: %v", candidates)
	}
}

func TestShouldSkipDetectionByInterval(t *testing.T) {
	if shouldSkipDetectionByInterval(nil, 10) {
		t.Fatalf("expected false when timestamp is nil")
	}
	if shouldSkipDetectionByInterval(timestamppb.Now(), 0) {
		t.Fatalf("expected false when min interval is zero")
	}

	recent := timestamppb.New(time.Now().Add(-5 * time.Second))
	if !shouldSkipDetectionByInterval(recent, 10) {
		t.Fatalf("expected true for recent successful run")
	}

	old := timestamppb.New(time.Now().Add(-30 * time.Second))
	if shouldSkipDetectionByInterval(old, 10) {
		t.Fatalf("expected false for old successful run")
	}
}

func TestVacuumHandlerRejectsUnsupportedJobType(t *testing.T) {
	handler := NewVacuumHandler(nil)
	err := handler.Detect(context.Background(), &plugin_pb.RunDetectionRequest{
		JobType: "balance",
	}, noopDetectionSender{})
	if err == nil {
		t.Fatalf("expected detect job type mismatch error")
	}

	err = handler.Execute(context.Background(), &plugin_pb.ExecuteJobRequest{
		Job: &plugin_pb.JobSpec{JobId: "job-1", JobType: "balance"},
	}, noopExecutionSender{})
	if err == nil {
		t.Fatalf("expected execute job type mismatch error")
	}
}

func TestVacuumHandlerDetectSkipsByMinInterval(t *testing.T) {
	handler := NewVacuumHandler(nil)
	sender := &recordingDetectionSender{}
	err := handler.Detect(context.Background(), &plugin_pb.RunDetectionRequest{
		JobType:           "vacuum",
		LastSuccessfulRun: timestamppb.New(time.Now().Add(-3 * time.Second)),
		WorkerConfigValues: map[string]*plugin_pb.ConfigValue{
			"min_interval_seconds": {Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 10}},
		},
	}, sender)
	if err != nil {
		t.Fatalf("detect returned err = %v", err)
	}
	if sender.proposals == nil {
		t.Fatalf("expected proposals message")
	}
	if len(sender.proposals.Proposals) != 0 {
		t.Fatalf("expected zero proposals, got %d", len(sender.proposals.Proposals))
	}
	if sender.complete == nil || !sender.complete.Success {
		t.Fatalf("expected successful completion message")
	}
}

func TestBuildExecutorActivity(t *testing.T) {
	activity := buildExecutorActivity("running", "vacuum in progress")
	if activity == nil {
		t.Fatalf("expected non-nil activity")
	}
	if activity.Source != plugin_pb.ActivitySource_ACTIVITY_SOURCE_EXECUTOR {
		t.Fatalf("unexpected source: %v", activity.Source)
	}
	if activity.Stage != "running" {
		t.Fatalf("unexpected stage: %q", activity.Stage)
	}
	if activity.Message != "vacuum in progress" {
		t.Fatalf("unexpected message: %q", activity.Message)
	}
	if activity.CreatedAt == nil {
		t.Fatalf("expected created_at timestamp")
	}
}

func TestEmitVacuumDetectionDecisionTraceNoTasks(t *testing.T) {
	sender := &recordingDetectionSender{}
	config := vacuumtask.NewDefaultConfig()
	config.GarbageThreshold = 0.3
	config.MinVolumeAgeSeconds = int((24 * time.Hour).Seconds())

	metrics := []*workertypes.VolumeHealthMetrics{
		{
			VolumeID:     17,
			GarbageRatio: 0,
			Age:          218*time.Hour + 23*time.Minute,
		},
		{
			VolumeID:     16,
			GarbageRatio: 0,
			Age:          218*time.Hour + 22*time.Minute,
		},
		{
			VolumeID:     6,
			GarbageRatio: 0,
			Age:          90*time.Hour + 42*time.Minute,
		},
	}

	if err := emitVacuumDetectionDecisionTrace(sender, metrics, config, nil); err != nil {
		t.Fatalf("emitVacuumDetectionDecisionTrace error: %v", err)
	}
	if len(sender.events) < 4 {
		t.Fatalf("expected at least 4 detection events, got %d", len(sender.events))
	}

	if sender.events[0].Source != plugin_pb.ActivitySource_ACTIVITY_SOURCE_DETECTOR {
		t.Fatalf("expected detector source, got %v", sender.events[0].Source)
	}
	if !strings.Contains(sender.events[0].Message, "VACUUM: No tasks created for 3 volumes") {
		t.Fatalf("unexpected summary message: %q", sender.events[0].Message)
	}
	if !strings.Contains(sender.events[1].Message, "VACUUM: Volume 17: garbage=0.00%") {
		t.Fatalf("unexpected first detail message: %q", sender.events[1].Message)
	}
}

type noopDetectionSender struct{}

func (noopDetectionSender) SendProposals(*plugin_pb.DetectionProposals) error { return nil }
func (noopDetectionSender) SendComplete(*plugin_pb.DetectionComplete) error   { return nil }
func (noopDetectionSender) SendActivity(*plugin_pb.ActivityEvent) error       { return nil }

type noopExecutionSender struct{}

func (noopExecutionSender) SendProgress(*plugin_pb.JobProgressUpdate) error { return nil }
func (noopExecutionSender) SendCompleted(*plugin_pb.JobCompleted) error     { return nil }

// recordingDetectionSender captures the messages a detection run emits so
// tests can assert on proposals, completion, and activity events.
type recordingDetectionSender struct {
	proposals *plugin_pb.DetectionProposals
	complete  *plugin_pb.DetectionComplete
	events    []*plugin_pb.ActivityEvent
}

func (r *recordingDetectionSender) SendProposals(proposals *plugin_pb.DetectionProposals) error {
	r.proposals = proposals
	return nil
}

func (r *recordingDetectionSender) SendComplete(complete *plugin_pb.DetectionComplete) error {
	r.complete = complete
	return nil
}

func (r *recordingDetectionSender) SendActivity(event *plugin_pb.ActivityEvent) error {
	if event != nil {
		r.events = append(r.events, event)
	}
	return nil
}
weed/plugin/worker/volume_balance_handler.go (new file, 826 lines)
@@ -0,0 +1,826 @@
package pluginworker

import (
	"context"
	"fmt"
	"sort"
	"strings"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/admin/topology"
	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
	balancetask "github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
	workertypes "github.com/seaweedfs/seaweedfs/weed/worker/types"
	"google.golang.org/grpc"
	"google.golang.org/protobuf/proto"
)

const (
	defaultBalanceTimeoutSeconds = int32(10 * 60)
)

// volumeBalanceWorkerConfig pairs the balance task config with the
// detection interval gate used by this handler.
type volumeBalanceWorkerConfig struct {
	TaskConfig         *balancetask.Config
	MinIntervalSeconds int
}

// VolumeBalanceHandler is the plugin job handler for volume balancing.
type VolumeBalanceHandler struct {
	grpcDialOption grpc.DialOption
}

func NewVolumeBalanceHandler(grpcDialOption grpc.DialOption) *VolumeBalanceHandler {
	return &VolumeBalanceHandler{grpcDialOption: grpcDialOption}
}

// Capability advertises what the volume balance handler can do to the admin runtime.
func (h *VolumeBalanceHandler) Capability() *plugin_pb.JobTypeCapability {
	return &plugin_pb.JobTypeCapability{
		JobType:                 "volume_balance",
		CanDetect:               true,
		CanExecute:              true,
		MaxDetectionConcurrency: 1,
		MaxExecutionConcurrency: 1,
		DisplayName:             "Volume Balance",
		Description:             "Moves volumes between servers to reduce skew in volume distribution",
	}
}

// Descriptor describes the volume balance job type, its config forms, and its runtime defaults.
func (h *VolumeBalanceHandler) Descriptor() *plugin_pb.JobTypeDescriptor {
	return &plugin_pb.JobTypeDescriptor{
		JobType:           "volume_balance",
		DisplayName:       "Volume Balance",
		Description:       "Detect and execute volume moves to balance server load",
		Icon:              "fas fa-balance-scale",
		DescriptorVersion: 1,
		AdminConfigForm: &plugin_pb.ConfigForm{
			FormId:      "volume-balance-admin",
			Title:       "Volume Balance Admin Config",
			Description: "Admin-side controls for volume balance detection scope.",
			Sections: []*plugin_pb.ConfigSection{
				{
					SectionId:   "scope",
					Title:       "Scope",
					Description: "Optional filters applied before balance detection.",
					Fields: []*plugin_pb.ConfigField{
						{
							Name:        "collection_filter",
							Label:       "Collection Filter",
							Description: "Only detect balance opportunities in this collection when set.",
							Placeholder: "all collections",
							FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_STRING,
							Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_TEXT,
						},
					},
				},
			},
			DefaultValues: map[string]*plugin_pb.ConfigValue{
				"collection_filter": {
					Kind: &plugin_pb.ConfigValue_StringValue{StringValue: ""},
				},
			},
		},
		WorkerConfigForm: &plugin_pb.ConfigForm{
			FormId:      "volume-balance-worker",
			Title:       "Volume Balance Worker Config",
			Description: "Worker-side balance thresholds.",
			Sections: []*plugin_pb.ConfigSection{
				{
					SectionId:   "thresholds",
					Title:       "Detection Thresholds",
					Description: "Controls for when balance jobs should be proposed.",
					Fields: []*plugin_pb.ConfigField{
						{
							Name:        "imbalance_threshold",
							Label:       "Imbalance Threshold",
							Description: "Detect when skew exceeds this ratio.",
							FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_DOUBLE,
							Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_NUMBER,
							Required:    true,
							MinValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0}},
							MaxValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 1}},
						},
						{
							Name:        "min_server_count",
							Label:       "Minimum Server Count",
							Description: "Require at least this many servers for balancing.",
							FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_INT64,
							Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_NUMBER,
							Required:    true,
							MinValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 2}},
						},
						{
							Name:        "min_interval_seconds",
							Label:       "Minimum Detection Interval (s)",
							Description: "Skip detection if the last successful run is more recent than this interval.",
							FieldType:   plugin_pb.ConfigFieldType_CONFIG_FIELD_TYPE_INT64,
							Widget:      plugin_pb.ConfigWidget_CONFIG_WIDGET_NUMBER,
							Required:    true,
							MinValue:    &plugin_pb.ConfigValue{Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 0}},
						},
					},
				},
			},
			DefaultValues: map[string]*plugin_pb.ConfigValue{
				"imbalance_threshold": {
					Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0.2},
				},
				"min_server_count": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 2},
				},
				"min_interval_seconds": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 30 * 60},
				},
			},
		},
		AdminRuntimeDefaults: &plugin_pb.AdminRuntimeDefaults{
			Enabled:                       true,
			DetectionIntervalSeconds:      30 * 60,
			DetectionTimeoutSeconds:       120,
			MaxJobsPerDetection:           100,
			GlobalExecutionConcurrency:    16,
			PerWorkerExecutionConcurrency: 4,
			RetryLimit:                    1,
			RetryBackoffSeconds:           15,
		},
		WorkerDefaultValues: map[string]*plugin_pb.ConfigValue{
			"imbalance_threshold": {
				Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0.2},
			},
			"min_server_count": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 2},
			},
			"min_interval_seconds": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 30 * 60},
			},
		},
	}
}

// Detect looks for volume distribution skew and streams balance proposals
// back through the sender.
func (h *VolumeBalanceHandler) Detect(
	ctx context.Context,
	request *plugin_pb.RunDetectionRequest,
	sender DetectionSender,
) error {
	if request == nil {
		return fmt.Errorf("run detection request is nil")
	}
	if sender == nil {
		return fmt.Errorf("detection sender is nil")
	}
	if request.JobType != "" && request.JobType != "volume_balance" {
		return fmt.Errorf("job type %q is not handled by volume_balance worker", request.JobType)
	}

	workerConfig := deriveBalanceWorkerConfig(request.GetWorkerConfigValues())
	if shouldSkipDetectionByInterval(request.GetLastSuccessfulRun(), workerConfig.MinIntervalSeconds) {
		minInterval := time.Duration(workerConfig.MinIntervalSeconds) * time.Second
		_ = sender.SendActivity(buildDetectorActivity(
			"skipped_by_interval",
			fmt.Sprintf("VOLUME BALANCE: Detection skipped due to min interval (%s)", minInterval),
			map[string]*plugin_pb.ConfigValue{
				"min_interval_seconds": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(workerConfig.MinIntervalSeconds)},
				},
			},
		))
		if err := sender.SendProposals(&plugin_pb.DetectionProposals{
			JobType:   "volume_balance",
			Proposals: []*plugin_pb.JobProposal{},
			HasMore:   false,
		}); err != nil {
			return err
		}
		return sender.SendComplete(&plugin_pb.DetectionComplete{
			JobType:        "volume_balance",
			Success:        true,
			TotalProposals: 0,
		})
	}

	collectionFilter := strings.TrimSpace(readStringConfig(request.GetAdminConfigValues(), "collection_filter", ""))
	masters := make([]string, 0)
	if request.ClusterContext != nil {
		masters = append(masters, request.ClusterContext.MasterGrpcAddresses...)
	}

	metrics, activeTopology, err := h.collectVolumeMetrics(ctx, masters, collectionFilter)
	if err != nil {
		return err
	}

	clusterInfo := &workertypes.ClusterInfo{ActiveTopology: activeTopology}
	results, err := balancetask.Detection(metrics, clusterInfo, workerConfig.TaskConfig)
	if err != nil {
		return err
	}
	if traceErr := emitVolumeBalanceDetectionDecisionTrace(sender, metrics, workerConfig.TaskConfig, results); traceErr != nil {
		glog.Warningf("Plugin worker failed to emit volume_balance detection trace: %v", traceErr)
	}

	maxResults := int(request.MaxResults)
	hasMore := false
	if maxResults > 0 && len(results) > maxResults {
		hasMore = true
		results = results[:maxResults]
	}

	proposals := make([]*plugin_pb.JobProposal, 0, len(results))
	for _, result := range results {
		proposal, proposalErr := buildVolumeBalanceProposal(result)
		if proposalErr != nil {
			glog.Warningf("Plugin worker skipped an invalid volume_balance proposal: %v", proposalErr)
			continue
		}
		proposals = append(proposals, proposal)
	}

	if err := sender.SendProposals(&plugin_pb.DetectionProposals{
		JobType:   "volume_balance",
		Proposals: proposals,
		HasMore:   hasMore,
	}); err != nil {
		return err
	}

	return sender.SendComplete(&plugin_pb.DetectionComplete{
		JobType:        "volume_balance",
		Success:        true,
		TotalProposals: int32(len(proposals)),
	})
}

func emitVolumeBalanceDetectionDecisionTrace(
|
||||
sender DetectionSender,
|
||||
metrics []*workertypes.VolumeHealthMetrics,
|
||||
taskConfig *balancetask.Config,
|
||||
results []*workertypes.TaskDetectionResult,
|
||||
) error {
|
||||
if sender == nil || taskConfig == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
totalVolumes := len(metrics)
|
||||
summaryMessage := ""
|
||||
if len(results) == 0 {
|
||||
summaryMessage = fmt.Sprintf(
|
||||
"BALANCE: No tasks created for %d volumes across %d disk type(s). Threshold=%.1f%%, MinServers=%d",
|
||||
totalVolumes,
|
||||
countBalanceDiskTypes(metrics),
|
||||
taskConfig.ImbalanceThreshold*100,
|
||||
taskConfig.MinServerCount,
|
||||
)
|
||||
} else {
|
||||
summaryMessage = fmt.Sprintf(
|
||||
"BALANCE: Created %d task(s) for %d volumes across %d disk type(s). Threshold=%.1f%%, MinServers=%d",
|
||||
len(results),
|
||||
totalVolumes,
|
||||
countBalanceDiskTypes(metrics),
|
||||
taskConfig.ImbalanceThreshold*100,
|
||||
taskConfig.MinServerCount,
|
||||
)
|
||||
}
|
||||
|
||||
if err := sender.SendActivity(buildDetectorActivity("decision_summary", summaryMessage, map[string]*plugin_pb.ConfigValue{
|
||||
"total_volumes": {
|
||||
Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(totalVolumes)},
|
||||
},
|
||||
"selected_tasks": {
|
||||
Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(len(results))},
|
||||
},
|
||||
"imbalance_threshold_percent": {
|
||||
Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: taskConfig.ImbalanceThreshold * 100},
|
||||
},
|
||||
"min_server_count": {
|
||||
Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(taskConfig.MinServerCount)},
|
||||
},
|
||||
})); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
volumesByDiskType := make(map[string][]*workertypes.VolumeHealthMetrics)
|
||||
for _, metric := range metrics {
|
||||
if metric == nil {
|
||||
continue
|
||||
}
|
||||
diskType := strings.TrimSpace(metric.DiskType)
|
||||
if diskType == "" {
|
||||
diskType = "unknown"
|
||||
}
|
||||
volumesByDiskType[diskType] = append(volumesByDiskType[diskType], metric)
|
||||
}
|
||||
|
||||
diskTypes := make([]string, 0, len(volumesByDiskType))
|
||||
for diskType := range volumesByDiskType {
|
||||
diskTypes = append(diskTypes, diskType)
|
||||
}
|
||||
sort.Strings(diskTypes)
|
||||
|
||||
const minVolumeCount = 2
|
||||
detailCount := 0
|
||||
for _, diskType := range diskTypes {
|
||||
diskMetrics := volumesByDiskType[diskType]
|
||||
volumeCount := len(diskMetrics)
|
||||
if volumeCount < minVolumeCount {
|
||||
message := fmt.Sprintf(
|
||||
"BALANCE [%s]: No tasks created - cluster too small (%d volumes, need ≥%d)",
|
||||
diskType,
|
||||
volumeCount,
|
||||
minVolumeCount,
|
||||
)
|
||||
if err := sender.SendActivity(buildDetectorActivity("decision_disk_type", message, map[string]*plugin_pb.ConfigValue{
|
||||
"disk_type": {
|
||||
Kind: &plugin_pb.ConfigValue_StringValue{StringValue: diskType},
|
||||
},
|
||||
"volume_count": {
|
||||
Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(volumeCount)},
|
||||
},
|
||||
"required_min_volume_count": {
|
||||
Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: minVolumeCount},
|
||||
},
|
||||
})); err != nil {
|
||||
return err
|
||||
}
|
||||
detailCount++
|
||||
if detailCount >= 3 {
|
||||
break
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
serverVolumeCounts := make(map[string]int)
|
||||
for _, metric := range diskMetrics {
|
||||
serverVolumeCounts[metric.Server]++
|
||||
}
|
||||
if len(serverVolumeCounts) < taskConfig.MinServerCount {
|
||||
message := fmt.Sprintf(
|
||||
"BALANCE [%s]: No tasks created - too few servers (%d servers, need ≥%d)",
|
||||
diskType,
|
||||
len(serverVolumeCounts),
|
||||
taskConfig.MinServerCount,
|
||||
)
|
||||
if err := sender.SendActivity(buildDetectorActivity("decision_disk_type", message, map[string]*plugin_pb.ConfigValue{
|
||||
"disk_type": {
|
||||
Kind: &plugin_pb.ConfigValue_StringValue{StringValue: diskType},
|
||||
},
|
||||
"server_count": {
|
||||
Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(len(serverVolumeCounts))},
|
||||
},
|
||||
"required_min_server_count": {
|
||||
Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(taskConfig.MinServerCount)},
|
||||
},
|
||||
})); err != nil {
|
||||
return err
|
||||
}
|
||||
detailCount++
|
||||
if detailCount >= 3 {
|
||||
break
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
totalDiskTypeVolumes := len(diskMetrics)
|
||||
avgVolumesPerServer := float64(totalDiskTypeVolumes) / float64(len(serverVolumeCounts))
|
||||
maxVolumes := 0
|
||||
minVolumes := totalDiskTypeVolumes
|
||||
maxServer := ""
|
||||
minServer := ""
|
||||
for server, count := range serverVolumeCounts {
|
||||
if count > maxVolumes {
|
||||
maxVolumes = count
|
||||
maxServer = server
|
||||
}
|
||||
if count < minVolumes {
|
||||
minVolumes = count
|
||||
minServer = server
|
||||
}
|
||||
}
|
||||
|
||||
imbalanceRatio := 0.0
|
||||
if avgVolumesPerServer > 0 {
|
||||
imbalanceRatio = float64(maxVolumes-minVolumes) / avgVolumesPerServer
|
||||
}

		stage := "decision_disk_type"
		message := ""
		if imbalanceRatio <= taskConfig.ImbalanceThreshold {
			message = fmt.Sprintf(
				"BALANCE [%s]: No tasks created - cluster well balanced. Imbalance=%.1f%% (threshold=%.1f%%). Max=%d volumes on %s, Min=%d on %s, Avg=%.1f",
				diskType,
				imbalanceRatio*100,
				taskConfig.ImbalanceThreshold*100,
				maxVolumes,
				maxServer,
				minVolumes,
				minServer,
				avgVolumesPerServer,
			)
		} else {
			stage = "decision_candidate"
			message = fmt.Sprintf(
				"BALANCE [%s]: Candidate detected. Imbalance=%.1f%% (threshold=%.1f%%). Max=%d volumes on %s, Min=%d on %s, Avg=%.1f",
				diskType,
				imbalanceRatio*100,
				taskConfig.ImbalanceThreshold*100,
				maxVolumes,
				maxServer,
				minVolumes,
				minServer,
				avgVolumesPerServer,
			)
		}

		if err := sender.SendActivity(buildDetectorActivity(stage, message, map[string]*plugin_pb.ConfigValue{
			"disk_type": {
				Kind: &plugin_pb.ConfigValue_StringValue{StringValue: diskType},
			},
			"volume_count": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(totalDiskTypeVolumes)},
			},
			"server_count": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(len(serverVolumeCounts))},
			},
			"imbalance_percent": {
				Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: imbalanceRatio * 100},
			},
			"threshold_percent": {
				Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: taskConfig.ImbalanceThreshold * 100},
			},
			"max_volumes": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(maxVolumes)},
			},
			"min_volumes": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(minVolumes)},
			},
			"avg_volumes_per_server": {
				Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: avgVolumesPerServer},
			},
		})); err != nil {
			return err
		}

		detailCount++
		if detailCount >= 3 {
			break
		}
	}

	return nil
}
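// Note: detailCount caps the trace at three detailed per-disk-type activity
// events per detection run, so clusters with many disk types do not flood the
// monitor with decision messages.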

func countBalanceDiskTypes(metrics []*workertypes.VolumeHealthMetrics) int {
	diskTypes := make(map[string]struct{})
	for _, metric := range metrics {
		if metric == nil {
			continue
		}
		diskType := strings.TrimSpace(metric.DiskType)
		if diskType == "" {
			diskType = "unknown"
		}
		diskTypes[diskType] = struct{}{}
	}
	return len(diskTypes)
}

func (h *VolumeBalanceHandler) Execute(
	ctx context.Context,
	request *plugin_pb.ExecuteJobRequest,
	sender ExecutionSender,
) error {
	if request == nil || request.Job == nil {
		return fmt.Errorf("execute request/job is nil")
	}
	if sender == nil {
		return fmt.Errorf("execution sender is nil")
	}
	if request.Job.JobType != "" && request.Job.JobType != "volume_balance" {
		return fmt.Errorf("job type %q is not handled by volume_balance worker", request.Job.JobType)
	}

	params, err := decodeVolumeBalanceTaskParams(request.Job)
	if err != nil {
		return err
	}
	if len(params.Sources) == 0 || strings.TrimSpace(params.Sources[0].Node) == "" {
		return fmt.Errorf("volume balance source node is required")
	}
	if len(params.Targets) == 0 || strings.TrimSpace(params.Targets[0].Node) == "" {
		return fmt.Errorf("volume balance target node is required")
	}

	applyBalanceExecutionDefaults(params)

	task := balancetask.NewBalanceTask(
		request.Job.JobId,
		params.Sources[0].Node,
		params.VolumeId,
		params.Collection,
	)
	task.SetProgressCallback(func(progress float64, stage string) {
		message := fmt.Sprintf("balance progress %.0f%%", progress)
		if strings.TrimSpace(stage) != "" {
			message = stage
		}
		_ = sender.SendProgress(&plugin_pb.JobProgressUpdate{
			JobId:           request.Job.JobId,
			JobType:         request.Job.JobType,
			State:           plugin_pb.JobState_JOB_STATE_RUNNING,
			ProgressPercent: progress,
			Stage:           stage,
			Message:         message,
			Activities: []*plugin_pb.ActivityEvent{
				buildExecutorActivity(stage, message),
			},
		})
	})

	if err := sender.SendProgress(&plugin_pb.JobProgressUpdate{
		JobId:           request.Job.JobId,
		JobType:         request.Job.JobType,
		State:           plugin_pb.JobState_JOB_STATE_ASSIGNED,
		ProgressPercent: 0,
		Stage:           "assigned",
		Message:         "volume balance job accepted",
		Activities: []*plugin_pb.ActivityEvent{
			buildExecutorActivity("assigned", "volume balance job accepted"),
		},
	}); err != nil {
		return err
	}

	if err := task.Execute(ctx, params); err != nil {
		_ = sender.SendProgress(&plugin_pb.JobProgressUpdate{
			JobId:           request.Job.JobId,
			JobType:         request.Job.JobType,
			State:           plugin_pb.JobState_JOB_STATE_FAILED,
			ProgressPercent: 100,
			Stage:           "failed",
			Message:         err.Error(),
			Activities: []*plugin_pb.ActivityEvent{
				buildExecutorActivity("failed", err.Error()),
			},
		})
		return err
	}

	sourceNode := params.Sources[0].Node
	targetNode := params.Targets[0].Node
	resultSummary := fmt.Sprintf("volume %d moved from %s to %s", params.VolumeId, sourceNode, targetNode)

	return sender.SendCompleted(&plugin_pb.JobCompleted{
		JobId:   request.Job.JobId,
		JobType: request.Job.JobType,
		Success: true,
		Result: &plugin_pb.JobResult{
			Summary: resultSummary,
			OutputValues: map[string]*plugin_pb.ConfigValue{
				"volume_id": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(params.VolumeId)},
				},
				"source_server": {
					Kind: &plugin_pb.ConfigValue_StringValue{StringValue: sourceNode},
				},
				"target_server": {
					Kind: &plugin_pb.ConfigValue_StringValue{StringValue: targetNode},
				},
			},
		},
		Activities: []*plugin_pb.ActivityEvent{
			buildExecutorActivity("completed", resultSummary),
		},
	})
}

func (h *VolumeBalanceHandler) collectVolumeMetrics(
	ctx context.Context,
	masterAddresses []string,
	collectionFilter string,
) ([]*workertypes.VolumeHealthMetrics, *topology.ActiveTopology, error) {
	// Reuse the same master topology fetch/build flow used by the vacuum handler.
	helper := &VacuumHandler{grpcDialOption: h.grpcDialOption}
	return helper.collectVolumeMetrics(ctx, masterAddresses, collectionFilter)
}

func deriveBalanceWorkerConfig(values map[string]*plugin_pb.ConfigValue) *volumeBalanceWorkerConfig {
	taskConfig := balancetask.NewDefaultConfig()

	imbalanceThreshold := readDoubleConfig(values, "imbalance_threshold", taskConfig.ImbalanceThreshold)
	if imbalanceThreshold < 0 {
		imbalanceThreshold = 0
	}
	if imbalanceThreshold > 1 {
		imbalanceThreshold = 1
	}
	taskConfig.ImbalanceThreshold = imbalanceThreshold

	minServerCount := int(readInt64Config(values, "min_server_count", int64(taskConfig.MinServerCount)))
	if minServerCount < 2 {
		minServerCount = 2
	}
	taskConfig.MinServerCount = minServerCount

	minIntervalSeconds := int(readInt64Config(values, "min_interval_seconds", 0))
	if minIntervalSeconds < 0 {
		minIntervalSeconds = 0
	}

	return &volumeBalanceWorkerConfig{
		TaskConfig:         taskConfig,
		MinIntervalSeconds: minIntervalSeconds,
	}
}

func buildVolumeBalanceProposal(
	result *workertypes.TaskDetectionResult,
) (*plugin_pb.JobProposal, error) {
	if result == nil {
		return nil, fmt.Errorf("task detection result is nil")
	}
	if result.TypedParams == nil {
		return nil, fmt.Errorf("missing typed params for volume %d", result.VolumeID)
	}

	params := proto.Clone(result.TypedParams).(*worker_pb.TaskParams)
	applyBalanceExecutionDefaults(params)

	paramsPayload, err := proto.Marshal(params)
	if err != nil {
		return nil, fmt.Errorf("marshal task params: %w", err)
	}

	proposalID := strings.TrimSpace(result.TaskID)
	if proposalID == "" {
		proposalID = fmt.Sprintf("volume-balance-%d-%d", result.VolumeID, time.Now().UnixNano())
	}

	dedupeKey := fmt.Sprintf("volume_balance:%d", result.VolumeID)
	if result.Collection != "" {
		dedupeKey += ":" + result.Collection
	}

	sourceNode := ""
	if len(params.Sources) > 0 {
		sourceNode = strings.TrimSpace(params.Sources[0].Node)
	}
	targetNode := ""
	if len(params.Targets) > 0 {
		targetNode = strings.TrimSpace(params.Targets[0].Node)
	}

	summary := fmt.Sprintf("Balance volume %d", result.VolumeID)
	if sourceNode != "" && targetNode != "" {
		summary = fmt.Sprintf("Move volume %d from %s to %s", result.VolumeID, sourceNode, targetNode)
	}

	return &plugin_pb.JobProposal{
		ProposalId: proposalID,
		DedupeKey:  dedupeKey,
		JobType:    "volume_balance",
		Priority:   mapTaskPriority(result.Priority),
		Summary:    summary,
		Detail:     strings.TrimSpace(result.Reason),
		Parameters: map[string]*plugin_pb.ConfigValue{
			"task_params_pb": {
				Kind: &plugin_pb.ConfigValue_BytesValue{BytesValue: paramsPayload},
			},
			"volume_id": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: int64(result.VolumeID)},
			},
			"source_server": {
				Kind: &plugin_pb.ConfigValue_StringValue{StringValue: sourceNode},
			},
			"target_server": {
				Kind: &plugin_pb.ConfigValue_StringValue{StringValue: targetNode},
			},
			"collection": {
				Kind: &plugin_pb.ConfigValue_StringValue{StringValue: result.Collection},
			},
		},
		Labels: map[string]string{
			"task_type":     "balance",
			"volume_id":     fmt.Sprintf("%d", result.VolumeID),
			"collection":    result.Collection,
			"source_node":   sourceNode,
			"target_node":   targetNode,
			"source_server": sourceNode,
			"target_server": targetNode,
		},
	}, nil
}
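// Dedupe example (illustrative values): a proposal for volume 42 in collection
// "photos" gets the key "volume_balance:42:photos", so repeated detection runs
// that rediscover the same move collapse into one scheduled job.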

func decodeVolumeBalanceTaskParams(job *plugin_pb.JobSpec) (*worker_pb.TaskParams, error) {
	if job == nil {
		return nil, fmt.Errorf("job spec is nil")
	}

	if payload := readBytesConfig(job.Parameters, "task_params_pb"); len(payload) > 0 {
		params := &worker_pb.TaskParams{}
		if err := proto.Unmarshal(payload, params); err != nil {
			return nil, fmt.Errorf("unmarshal task_params_pb: %w", err)
		}
		if params.TaskId == "" {
			params.TaskId = job.JobId
		}
		return params, nil
	}

	volumeID := readInt64Config(job.Parameters, "volume_id", 0)
	sourceNode := strings.TrimSpace(readStringConfig(job.Parameters, "source_server", ""))
	if sourceNode == "" {
		sourceNode = strings.TrimSpace(readStringConfig(job.Parameters, "server", ""))
	}
	targetNode := strings.TrimSpace(readStringConfig(job.Parameters, "target_server", ""))
	if targetNode == "" {
		targetNode = strings.TrimSpace(readStringConfig(job.Parameters, "target", ""))
	}
	collection := readStringConfig(job.Parameters, "collection", "")
	timeoutSeconds := int32(readInt64Config(job.Parameters, "timeout_seconds", int64(defaultBalanceTimeoutSeconds)))
	if timeoutSeconds <= 0 {
		timeoutSeconds = defaultBalanceTimeoutSeconds
	}
	forceMove := readBoolConfig(job.Parameters, "force_move", false)

	if volumeID <= 0 {
		return nil, fmt.Errorf("missing volume_id in job parameters")
	}
	if sourceNode == "" {
		return nil, fmt.Errorf("missing source_server in job parameters")
	}
	if targetNode == "" {
		return nil, fmt.Errorf("missing target_server in job parameters")
	}

	return &worker_pb.TaskParams{
		TaskId:     job.JobId,
		VolumeId:   uint32(volumeID),
		Collection: collection,
		Sources: []*worker_pb.TaskSource{
			{
				Node:     sourceNode,
				VolumeId: uint32(volumeID),
			},
		},
		Targets: []*worker_pb.TaskTarget{
			{
				Node:     targetNode,
				VolumeId: uint32(volumeID),
			},
		},
		TaskParams: &worker_pb.TaskParams_BalanceParams{
			BalanceParams: &worker_pb.BalanceTaskParams{
				ForceMove:      forceMove,
				TimeoutSeconds: timeoutSeconds,
			},
		},
	}, nil
}

func applyBalanceExecutionDefaults(params *worker_pb.TaskParams) {
	if params == nil {
		return
	}

	balanceParams := params.GetBalanceParams()
	if balanceParams == nil {
		params.TaskParams = &worker_pb.TaskParams_BalanceParams{
			BalanceParams: &worker_pb.BalanceTaskParams{
				ForceMove:      false,
				TimeoutSeconds: defaultBalanceTimeoutSeconds,
			},
		}
		return
	}

	if balanceParams.TimeoutSeconds <= 0 {
		balanceParams.TimeoutSeconds = defaultBalanceTimeoutSeconds
	}
}

func readBoolConfig(values map[string]*plugin_pb.ConfigValue, field string, fallback bool) bool {
	if values == nil {
		return fallback
	}
	value := values[field]
	if value == nil {
		return fallback
	}
	switch kind := value.Kind.(type) {
	case *plugin_pb.ConfigValue_BoolValue:
		return kind.BoolValue
	case *plugin_pb.ConfigValue_Int64Value:
		return kind.Int64Value != 0
	case *plugin_pb.ConfigValue_DoubleValue:
		return kind.DoubleValue != 0
	case *plugin_pb.ConfigValue_StringValue:
		text := strings.TrimSpace(strings.ToLower(kind.StringValue))
		switch text {
		case "1", "true", "yes", "on":
			return true
		case "0", "false", "no", "off":
			return false
		}
	}
	return fallback
}
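// Coercion sketch (illustrative calls, grounded in the switch above):
//
//	readBoolConfig(map[string]*plugin_pb.ConfigValue{
//		"force_move": {Kind: &plugin_pb.ConfigValue_StringValue{StringValue: " Yes "}},
//	}, "force_move", false) // true: trimmed and lowered to "yes"
//	readBoolConfig(nil, "force_move", true) // true: a nil map returns the fallback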

weed/plugin/worker/volume_balance_handler_test.go (new file, 283 lines)
@@ -0,0 +1,283 @@
package pluginworker

import (
	"context"
	"strings"
	"testing"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
	balancetask "github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
	workertypes "github.com/seaweedfs/seaweedfs/weed/worker/types"
	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func TestDecodeVolumeBalanceTaskParamsFromPayload(t *testing.T) {
	expected := &worker_pb.TaskParams{
		TaskId:     "task-1",
		VolumeId:   42,
		Collection: "photos",
		Sources: []*worker_pb.TaskSource{
			{
				Node:     "10.0.0.1:8080",
				VolumeId: 42,
			},
		},
		Targets: []*worker_pb.TaskTarget{
			{
				Node:     "10.0.0.2:8080",
				VolumeId: 42,
			},
		},
		TaskParams: &worker_pb.TaskParams_BalanceParams{
			BalanceParams: &worker_pb.BalanceTaskParams{
				ForceMove:      true,
				TimeoutSeconds: 1200,
			},
		},
	}
	payload, err := proto.Marshal(expected)
	if err != nil {
		t.Fatalf("marshal payload: %v", err)
	}

	job := &plugin_pb.JobSpec{
		JobId: "job-from-admin",
		Parameters: map[string]*plugin_pb.ConfigValue{
			"task_params_pb": {Kind: &plugin_pb.ConfigValue_BytesValue{BytesValue: payload}},
		},
	}

	actual, err := decodeVolumeBalanceTaskParams(job)
	if err != nil {
		t.Fatalf("decodeVolumeBalanceTaskParams() err = %v", err)
	}
	if !proto.Equal(expected, actual) {
		t.Fatalf("decoded params mismatch\nexpected: %+v\nactual: %+v", expected, actual)
	}
}

func TestDecodeVolumeBalanceTaskParamsFallback(t *testing.T) {
	job := &plugin_pb.JobSpec{
		JobId: "job-2",
		Parameters: map[string]*plugin_pb.ConfigValue{
			"volume_id":     {Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 7}},
			"source_server": {Kind: &plugin_pb.ConfigValue_StringValue{StringValue: "127.0.0.1:8080"}},
			"target_server": {Kind: &plugin_pb.ConfigValue_StringValue{StringValue: "127.0.0.2:8080"}},
			"collection":    {Kind: &plugin_pb.ConfigValue_StringValue{StringValue: "videos"}},
		},
	}

	params, err := decodeVolumeBalanceTaskParams(job)
	if err != nil {
		t.Fatalf("decodeVolumeBalanceTaskParams() err = %v", err)
	}
	if params.TaskId != "job-2" || params.VolumeId != 7 || params.Collection != "videos" {
		t.Fatalf("unexpected basic params: %+v", params)
	}
	if len(params.Sources) != 1 || params.Sources[0].Node != "127.0.0.1:8080" {
		t.Fatalf("unexpected sources: %+v", params.Sources)
	}
	if len(params.Targets) != 1 || params.Targets[0].Node != "127.0.0.2:8080" {
		t.Fatalf("unexpected targets: %+v", params.Targets)
	}
	if params.GetBalanceParams() == nil {
		t.Fatalf("expected fallback balance params")
	}
}

func TestDeriveBalanceWorkerConfig(t *testing.T) {
	values := map[string]*plugin_pb.ConfigValue{
		"imbalance_threshold": {
			Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0.45},
		},
		"min_server_count": {
			Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 5},
		},
		"min_interval_seconds": {
			Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 33},
		},
	}

	cfg := deriveBalanceWorkerConfig(values)
	if cfg.TaskConfig.ImbalanceThreshold != 0.45 {
		t.Fatalf("expected imbalance_threshold 0.45, got %v", cfg.TaskConfig.ImbalanceThreshold)
	}
	if cfg.TaskConfig.MinServerCount != 5 {
		t.Fatalf("expected min_server_count 5, got %d", cfg.TaskConfig.MinServerCount)
	}
	if cfg.MinIntervalSeconds != 33 {
		t.Fatalf("expected min_interval_seconds 33, got %d", cfg.MinIntervalSeconds)
	}
}

func TestBuildVolumeBalanceProposal(t *testing.T) {
	params := &worker_pb.TaskParams{
		TaskId:     "balance-task-1",
		VolumeId:   55,
		Collection: "images",
		Sources: []*worker_pb.TaskSource{
			{
				Node:     "source-a:8080",
				VolumeId: 55,
			},
		},
		Targets: []*worker_pb.TaskTarget{
			{
				Node:     "target-b:8080",
				VolumeId: 55,
			},
		},
		TaskParams: &worker_pb.TaskParams_BalanceParams{
			BalanceParams: &worker_pb.BalanceTaskParams{
				TimeoutSeconds: 600,
			},
		},
	}
	result := &workertypes.TaskDetectionResult{
		TaskID:      "balance-task-1",
		TaskType:    workertypes.TaskTypeBalance,
		VolumeID:    55,
		Server:      "source-a",
		Collection:  "images",
		Priority:    workertypes.TaskPriorityHigh,
		Reason:      "imbalanced load",
		TypedParams: params,
	}

	proposal, err := buildVolumeBalanceProposal(result)
	if err != nil {
		t.Fatalf("buildVolumeBalanceProposal() err = %v", err)
	}
	if proposal.JobType != "volume_balance" {
		t.Fatalf("unexpected job type %q", proposal.JobType)
	}
	if proposal.DedupeKey == "" {
		t.Fatalf("expected dedupe key")
	}
	if proposal.Parameters["task_params_pb"] == nil {
		t.Fatalf("expected serialized task params")
	}
	if proposal.Labels["source_node"] != "source-a:8080" {
		t.Fatalf("unexpected source label %q", proposal.Labels["source_node"])
	}
	if proposal.Labels["target_node"] != "target-b:8080" {
		t.Fatalf("unexpected target label %q", proposal.Labels["target_node"])
	}
}

func TestVolumeBalanceHandlerRejectsUnsupportedJobType(t *testing.T) {
	handler := NewVolumeBalanceHandler(nil)
	err := handler.Detect(context.Background(), &plugin_pb.RunDetectionRequest{
		JobType: "vacuum",
	}, noopDetectionSender{})
	if err == nil {
		t.Fatalf("expected detect job type mismatch error")
	}

	err = handler.Execute(context.Background(), &plugin_pb.ExecuteJobRequest{
		Job: &plugin_pb.JobSpec{JobId: "job-1", JobType: "vacuum"},
	}, noopExecutionSender{})
	if err == nil {
		t.Fatalf("expected execute job type mismatch error")
	}
}

func TestVolumeBalanceHandlerDetectSkipsByMinInterval(t *testing.T) {
	handler := NewVolumeBalanceHandler(nil)
	sender := &recordingDetectionSender{}
	err := handler.Detect(context.Background(), &plugin_pb.RunDetectionRequest{
		JobType:           "volume_balance",
		LastSuccessfulRun: timestamppb.New(time.Now().Add(-3 * time.Second)),
		WorkerConfigValues: map[string]*plugin_pb.ConfigValue{
			"min_interval_seconds": {Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 10}},
		},
	}, sender)
	if err != nil {
		t.Fatalf("detect returned err = %v", err)
	}
	if sender.proposals == nil {
		t.Fatalf("expected proposals message")
	}
	if len(sender.proposals.Proposals) != 0 {
		t.Fatalf("expected zero proposals, got %d", len(sender.proposals.Proposals))
	}
	if sender.complete == nil || !sender.complete.Success {
		t.Fatalf("expected successful completion message")
	}
	if len(sender.events) == 0 {
		t.Fatalf("expected detector activity events")
	}
	if !strings.Contains(sender.events[0].Message, "min interval") {
		t.Fatalf("unexpected skip-by-interval message: %q", sender.events[0].Message)
	}
}

func TestEmitVolumeBalanceDetectionDecisionTraceNoTasks(t *testing.T) {
	sender := &recordingDetectionSender{}
	config := balancetask.NewDefaultConfig()
	config.ImbalanceThreshold = 0.2
	config.MinServerCount = 2

	metrics := []*workertypes.VolumeHealthMetrics{
		{VolumeID: 1, Server: "server-a", DiskType: "hdd"},
		{VolumeID: 2, Server: "server-a", DiskType: "hdd"},
		{VolumeID: 3, Server: "server-b", DiskType: "hdd"},
		{VolumeID: 4, Server: "server-b", DiskType: "hdd"},
	}

	if err := emitVolumeBalanceDetectionDecisionTrace(sender, metrics, config, nil); err != nil {
		t.Fatalf("emitVolumeBalanceDetectionDecisionTrace error: %v", err)
	}
	if len(sender.events) < 2 {
		t.Fatalf("expected at least 2 detection events, got %d", len(sender.events))
	}
	if sender.events[0].Source != plugin_pb.ActivitySource_ACTIVITY_SOURCE_DETECTOR {
		t.Fatalf("expected detector source, got %v", sender.events[0].Source)
	}
	if !strings.Contains(sender.events[0].Message, "BALANCE: No tasks created for 4 volumes") {
		t.Fatalf("unexpected summary message: %q", sender.events[0].Message)
	}
	foundDiskTypeDecision := false
	for _, event := range sender.events {
		if strings.Contains(event.Message, "BALANCE [hdd]: No tasks created - cluster well balanced") {
			foundDiskTypeDecision = true
			break
		}
	}
	if !foundDiskTypeDecision {
		t.Fatalf("expected per-disk-type decision message")
	}
}

func TestVolumeBalanceDescriptorOmitsExecutionTuningFields(t *testing.T) {
	descriptor := NewVolumeBalanceHandler(nil).Descriptor()
	if descriptor == nil || descriptor.WorkerConfigForm == nil {
		t.Fatalf("expected worker config form in descriptor")
	}
	if workerConfigFormHasField(descriptor.WorkerConfigForm, "timeout_seconds") {
		t.Fatalf("unexpected timeout_seconds in volume balance worker config form")
	}
	if workerConfigFormHasField(descriptor.WorkerConfigForm, "force_move") {
		t.Fatalf("unexpected force_move in volume balance worker config form")
	}
}

func workerConfigFormHasField(form *plugin_pb.ConfigForm, fieldName string) bool {
	if form == nil {
		return false
	}
	for _, section := range form.Sections {
		if section == nil {
			continue
		}
		for _, field := range section.Fields {
			if field != nil && field.Name == fieldName {
				return true
			}
		}
	}
	return false
}

weed/plugin/worker/worker.go (new file, 939 lines)
@@ -0,0 +1,939 @@
package pluginworker

import (
	"context"
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"os"
	"sort"
	"strings"
	"sync"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
	"google.golang.org/grpc"
	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/timestamppb"
)

const (
	defaultHeartbeatInterval = 15 * time.Second
	defaultReconnectDelay    = 5 * time.Second
	defaultSendBufferSize    = 256
)

// DetectionSender sends detection responses for one request.
type DetectionSender interface {
	SendProposals(*plugin_pb.DetectionProposals) error
	SendComplete(*plugin_pb.DetectionComplete) error
	SendActivity(*plugin_pb.ActivityEvent) error
}

// ExecutionSender sends execution progress/completion responses for one request.
type ExecutionSender interface {
	SendProgress(*plugin_pb.JobProgressUpdate) error
	SendCompleted(*plugin_pb.JobCompleted) error
}

// JobHandler implements one plugin job type on the worker side.
type JobHandler interface {
	Capability() *plugin_pb.JobTypeCapability
	Descriptor() *plugin_pb.JobTypeDescriptor
	Detect(context.Context, *plugin_pb.RunDetectionRequest, DetectionSender) error
	Execute(context.Context, *plugin_pb.ExecuteJobRequest, ExecutionSender) error
}

// WorkerOptions configures one plugin worker process.
type WorkerOptions struct {
	AdminServer             string
	WorkerID                string
	WorkerVersion           string
	WorkerAddress           string
	HeartbeatInterval       time.Duration
	ReconnectDelay          time.Duration
	MaxDetectionConcurrency int
	MaxExecutionConcurrency int
	GrpcDialOption          grpc.DialOption
	Handlers                []JobHandler
	Handler                 JobHandler
}
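// Minimal configuration sketch (the address is a placeholder; insecure
// credentials are for illustration only). Only AdminServer, GrpcDialOption,
// and at least one handler are required; everything else falls back to the
// defaults applied in NewWorker:
//
//	opts := WorkerOptions{
//		AdminServer:    "localhost:23646",
//		GrpcDialOption: grpc.WithTransportCredentials(insecure.NewCredentials()),
//		Handler:        NewVolumeBalanceHandler(nil),
//	}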

// Worker runs one plugin job handler over plugin.proto stream.
type Worker struct {
	opts WorkerOptions

	detectSlots chan struct{}
	execSlots   chan struct{}

	handlers map[string]JobHandler

	runningMu   sync.RWMutex
	runningWork map[string]*plugin_pb.RunningWork

	workCancelMu sync.Mutex
	workCancel   map[string]context.CancelFunc

	workerID string

	connectionMu sync.RWMutex
	connected    bool
}

// NewWorker creates a plugin worker instance.
func NewWorker(options WorkerOptions) (*Worker, error) {
	if strings.TrimSpace(options.AdminServer) == "" {
		return nil, fmt.Errorf("admin server is required")
	}
	if options.GrpcDialOption == nil {
		return nil, fmt.Errorf("grpc dial option is required")
	}
	if options.HeartbeatInterval <= 0 {
		options.HeartbeatInterval = defaultHeartbeatInterval
	}
	if options.ReconnectDelay <= 0 {
		options.ReconnectDelay = defaultReconnectDelay
	}
	if options.MaxDetectionConcurrency <= 0 {
		options.MaxDetectionConcurrency = 1
	}
	if options.MaxExecutionConcurrency <= 0 {
		options.MaxExecutionConcurrency = 1
	}
	if strings.TrimSpace(options.WorkerVersion) == "" {
		options.WorkerVersion = "dev"
	}

	workerID := strings.TrimSpace(options.WorkerID)
	if workerID == "" {
		workerID = generateWorkerID()
	}

	workerAddress := strings.TrimSpace(options.WorkerAddress)
	if workerAddress == "" {
		hostname, _ := os.Hostname()
		workerAddress = hostname
	}
	opts := options
	opts.WorkerAddress = workerAddress

	allHandlers := make([]JobHandler, 0, len(opts.Handlers)+1)
	if opts.Handler != nil {
		allHandlers = append(allHandlers, opts.Handler)
	}
	allHandlers = append(allHandlers, opts.Handlers...)
	if len(allHandlers) == 0 {
		return nil, fmt.Errorf("at least one job handler is required")
	}

	handlers := make(map[string]JobHandler, len(allHandlers))
	for i, handler := range allHandlers {
		if handler == nil {
			return nil, fmt.Errorf("job handler at index %d is nil", i)
		}
		handlerJobType, err := resolveHandlerJobType(handler)
		if err != nil {
			return nil, fmt.Errorf("resolve job handler at index %d: %w", i, err)
		}
		key := normalizeJobTypeKey(handlerJobType)
		if key == "" {
			return nil, fmt.Errorf("job handler at index %d has empty job type", i)
		}
		if _, found := handlers[key]; found {
			return nil, fmt.Errorf("duplicate job handler for job type %q", handlerJobType)
		}
		handlers[key] = handler
	}
	if opts.Handler == nil {
		opts.Handler = allHandlers[0]
	}

	w := &Worker{
		opts:        opts,
		detectSlots: make(chan struct{}, opts.MaxDetectionConcurrency),
		execSlots:   make(chan struct{}, opts.MaxExecutionConcurrency),
		handlers:    handlers,
		runningWork: make(map[string]*plugin_pb.RunningWork),
		workCancel:  make(map[string]context.CancelFunc),
		workerID:    workerID,
	}
	return w, nil
}

// Run keeps the plugin worker connected and reconnects on stream failures.
func (w *Worker) Run(ctx context.Context) error {
	adminAddress := pb.ServerToGrpcAddress(w.opts.AdminServer)

	for {
		select {
		case <-ctx.Done():
			return nil
		default:
		}

		if err := w.runOnce(ctx, adminAddress); err != nil {
			if ctx.Err() != nil {
				return nil
			}
			glog.Warningf("Plugin worker %s stream ended: %v", w.workerID, err)
		}

		select {
		case <-ctx.Done():
			return nil
		case <-time.After(w.opts.ReconnectDelay):
		}
	}
}

func (w *Worker) runOnce(ctx context.Context, adminAddress string) error {
	defer w.setConnected(false)

	dialCtx, cancelDial := context.WithTimeout(ctx, 5*time.Second)
	defer cancelDial()

	conn, err := pb.GrpcDial(dialCtx, adminAddress, false, w.opts.GrpcDialOption)
	if err != nil {
		return fmt.Errorf("dial admin %s: %w", adminAddress, err)
	}
	defer conn.Close()

	client := plugin_pb.NewPluginControlServiceClient(conn)
	connCtx, cancelConn := context.WithCancel(ctx)
	defer cancelConn()

	stream, err := client.WorkerStream(connCtx)
	if err != nil {
		return fmt.Errorf("open worker stream: %w", err)
	}
	w.setConnected(true)

	sendCh := make(chan *plugin_pb.WorkerToAdminMessage, defaultSendBufferSize)
	sendErrCh := make(chan error, 1)

	send := func(msg *plugin_pb.WorkerToAdminMessage) bool {
		if msg == nil {
			return false
		}
		msg.WorkerId = w.workerID
		if msg.SentAt == nil {
			msg.SentAt = timestamppb.Now()
		}
		select {
		case <-connCtx.Done():
			return false
		case sendCh <- msg:
			return true
		}
	}

	go func() {
		for {
			select {
			case <-connCtx.Done():
				return
			case msg := <-sendCh:
				if msg == nil {
					continue
				}
				if err := stream.Send(msg); err != nil {
					select {
					case sendErrCh <- err:
					default:
					}
					cancelConn()
					return
				}
			}
		}
	}()

	if !send(&plugin_pb.WorkerToAdminMessage{
		Body: &plugin_pb.WorkerToAdminMessage_Hello{Hello: w.buildHello()},
	}) {
		return fmt.Errorf("send worker hello: stream closed")
	}

	heartbeatTicker := time.NewTicker(w.opts.HeartbeatInterval)
	defer heartbeatTicker.Stop()

	go func() {
		for {
			select {
			case <-connCtx.Done():
				return
			case <-heartbeatTicker.C:
				w.sendHeartbeat(send)
			}
		}
	}()

	for {
		select {
		case <-connCtx.Done():
			return connCtx.Err()
		case err := <-sendErrCh:
			return fmt.Errorf("send to admin stream: %w", err)
		default:
		}

		message, err := stream.Recv()
		if err != nil {
			return fmt.Errorf("recv admin message: %w", err)
		}

		w.handleAdminMessage(connCtx, message, send)
	}
}

// IsConnected reports whether the worker currently has an active stream to admin.
func (w *Worker) IsConnected() bool {
	w.connectionMu.RLock()
	defer w.connectionMu.RUnlock()
	return w.connected
}

func (w *Worker) setConnected(connected bool) {
	w.connectionMu.Lock()
	w.connected = connected
	w.connectionMu.Unlock()
}

func (w *Worker) handleAdminMessage(
	ctx context.Context,
	message *plugin_pb.AdminToWorkerMessage,
	send func(*plugin_pb.WorkerToAdminMessage) bool,
) {
	if message == nil {
		return
	}

	switch body := message.Body.(type) {
	case *plugin_pb.AdminToWorkerMessage_Hello:
		_ = body
	case *plugin_pb.AdminToWorkerMessage_RequestConfigSchema:
		w.handleSchemaRequest(message.GetRequestId(), body.RequestConfigSchema, send)
	case *plugin_pb.AdminToWorkerMessage_RunDetectionRequest:
		w.handleDetectionRequest(ctx, message.GetRequestId(), body.RunDetectionRequest, send)
	case *plugin_pb.AdminToWorkerMessage_ExecuteJobRequest:
		w.handleExecuteRequest(ctx, message.GetRequestId(), body.ExecuteJobRequest, send)
	case *plugin_pb.AdminToWorkerMessage_CancelRequest:
		cancel := body.CancelRequest
		targetID := ""
		if cancel != nil {
			targetID = strings.TrimSpace(cancel.TargetId)
		}
		accepted := false
		ackMessage := "cancel target is required"
		if targetID != "" {
			if w.cancelWork(targetID) {
				accepted = true
				ackMessage = "cancel request accepted"
			} else {
				ackMessage = "cancel target not found"
			}
		}
		send(&plugin_pb.WorkerToAdminMessage{
			Body: &plugin_pb.WorkerToAdminMessage_Acknowledge{Acknowledge: &plugin_pb.WorkerAcknowledge{
				RequestId: message.GetRequestId(),
				Accepted:  accepted,
				Message:   ackMessage,
			}},
		})
	case *plugin_pb.AdminToWorkerMessage_Shutdown:
		send(&plugin_pb.WorkerToAdminMessage{
			Body: &plugin_pb.WorkerToAdminMessage_Acknowledge{Acknowledge: &plugin_pb.WorkerAcknowledge{
				RequestId: message.GetRequestId(),
				Accepted:  true,
				Message:   "shutdown acknowledged",
			}},
		})
	default:
		send(&plugin_pb.WorkerToAdminMessage{
			Body: &plugin_pb.WorkerToAdminMessage_Acknowledge{Acknowledge: &plugin_pb.WorkerAcknowledge{
				RequestId: message.GetRequestId(),
				Accepted:  false,
				Message:   "unsupported request body",
			}},
		})
	}
}

func (w *Worker) handleSchemaRequest(requestID string, request *plugin_pb.RequestConfigSchema, send func(*plugin_pb.WorkerToAdminMessage) bool) {
	jobType := ""
	if request != nil {
		jobType = strings.TrimSpace(request.JobType)
	}

	handler, resolvedJobType, err := w.findHandler(jobType)
	if err != nil {
		send(&plugin_pb.WorkerToAdminMessage{
			Body: &plugin_pb.WorkerToAdminMessage_ConfigSchemaResponse{ConfigSchemaResponse: &plugin_pb.ConfigSchemaResponse{
				RequestId:    requestID,
				JobType:      jobType,
				Success:      false,
				ErrorMessage: err.Error(),
			}},
		})
		return
	}

	descriptor := handler.Descriptor()
	if descriptor == nil || descriptor.JobType == "" {
		send(&plugin_pb.WorkerToAdminMessage{
			Body: &plugin_pb.WorkerToAdminMessage_ConfigSchemaResponse{ConfigSchemaResponse: &plugin_pb.ConfigSchemaResponse{
				RequestId:    requestID,
				JobType:      resolvedJobType,
				Success:      false,
				ErrorMessage: "handler descriptor is not configured",
			}},
		})
		return
	}

	send(&plugin_pb.WorkerToAdminMessage{
		Body: &plugin_pb.WorkerToAdminMessage_ConfigSchemaResponse{ConfigSchemaResponse: &plugin_pb.ConfigSchemaResponse{
			RequestId:         requestID,
			JobType:           descriptor.JobType,
			Success:           true,
			JobTypeDescriptor: descriptor,
		}},
	})
}

func (w *Worker) handleDetectionRequest(
	ctx context.Context,
	requestID string,
	request *plugin_pb.RunDetectionRequest,
	send func(*plugin_pb.WorkerToAdminMessage) bool,
) {
	if request == nil {
		send(&plugin_pb.WorkerToAdminMessage{
			Body: &plugin_pb.WorkerToAdminMessage_DetectionComplete{DetectionComplete: &plugin_pb.DetectionComplete{
				RequestId:    requestID,
				Success:      false,
				ErrorMessage: "run detection request is nil",
			}},
		})
		return
	}

	handler, resolvedJobType, err := w.findHandler(request.JobType)
	if err != nil {
		send(&plugin_pb.WorkerToAdminMessage{
			Body: &plugin_pb.WorkerToAdminMessage_DetectionComplete{DetectionComplete: &plugin_pb.DetectionComplete{
				RequestId:    requestID,
				JobType:      request.JobType,
				Success:      false,
				ErrorMessage: err.Error(),
			}},
		})
		return
	}

	workKey := "detect:" + requestID
	w.setRunningWork(workKey, &plugin_pb.RunningWork{
		WorkId:          requestID,
		Kind:            plugin_pb.WorkKind_WORK_KIND_DETECTION,
		JobType:         resolvedJobType,
		State:           plugin_pb.JobState_JOB_STATE_ASSIGNED,
		ProgressPercent: 0,
		Stage:           "queued",
	})
	w.sendHeartbeat(send)

	requestCtx, cancelRequest := context.WithCancel(ctx)
	w.setWorkCancel(cancelRequest, requestID)

	send(&plugin_pb.WorkerToAdminMessage{
		Body: &plugin_pb.WorkerToAdminMessage_Acknowledge{Acknowledge: &plugin_pb.WorkerAcknowledge{
			RequestId: requestID,
			Accepted:  true,
			Message:   "detection request accepted",
		}},
	})

	go func() {
		detectionSender := &detectionSender{
			requestID: requestID,
			jobType:   resolvedJobType,
			send:      send,
		}
		defer func() {
			w.clearWorkCancel(requestID)
			cancelRequest()
			w.clearRunningWork(workKey)
			w.sendHeartbeat(send)
		}()

		select {
		case <-requestCtx.Done():
			detectionSender.SendComplete(&plugin_pb.DetectionComplete{
				Success:      false,
				ErrorMessage: requestCtx.Err().Error(),
			})
			return
		case w.detectSlots <- struct{}{}:
		}
		defer func() {
			<-w.detectSlots
			w.sendHeartbeat(send)
		}()

		w.setRunningWork(workKey, &plugin_pb.RunningWork{
			WorkId:          requestID,
			Kind:            plugin_pb.WorkKind_WORK_KIND_DETECTION,
			JobType:         resolvedJobType,
			State:           plugin_pb.JobState_JOB_STATE_RUNNING,
			ProgressPercent: 0,
			Stage:           "detecting",
		})
		w.sendHeartbeat(send)

		if err := handler.Detect(requestCtx, request, detectionSender); err != nil {
			detectionSender.SendComplete(&plugin_pb.DetectionComplete{
				Success:      false,
				ErrorMessage: err.Error(),
			})
		}
	}()
}

func (w *Worker) handleExecuteRequest(
	ctx context.Context,
	requestID string,
	request *plugin_pb.ExecuteJobRequest,
	send func(*plugin_pb.WorkerToAdminMessage) bool,
) {
	if request == nil || request.Job == nil {
		send(&plugin_pb.WorkerToAdminMessage{
			Body: &plugin_pb.WorkerToAdminMessage_JobCompleted{JobCompleted: &plugin_pb.JobCompleted{
				RequestId:    requestID,
				Success:      false,
				ErrorMessage: "execute request/job is nil",
			}},
		})
		return
	}

	handler, resolvedJobType, err := w.findHandler(request.Job.JobType)
	if err != nil {
		send(&plugin_pb.WorkerToAdminMessage{
			Body: &plugin_pb.WorkerToAdminMessage_JobCompleted{JobCompleted: &plugin_pb.JobCompleted{
				RequestId:    requestID,
				JobId:        request.Job.JobId,
				JobType:      request.Job.JobType,
				Success:      false,
				ErrorMessage: err.Error(),
			}},
		})
		return
	}

	select {
	case w.execSlots <- struct{}{}:
	default:
		send(&plugin_pb.WorkerToAdminMessage{
			Body: &plugin_pb.WorkerToAdminMessage_JobCompleted{JobCompleted: &plugin_pb.JobCompleted{
				RequestId:    requestID,
				JobId:        request.Job.JobId,
				JobType:      resolvedJobType,
				Success:      false,
				ErrorMessage: "executor is at capacity",
			}},
		})
		return
	}
	w.sendHeartbeat(send)

	workKey := "exec:" + requestID
	w.setRunningWork(workKey, &plugin_pb.RunningWork{
		WorkId:          request.Job.JobId,
		Kind:            plugin_pb.WorkKind_WORK_KIND_EXECUTION,
		JobType:         resolvedJobType,
		State:           plugin_pb.JobState_JOB_STATE_RUNNING,
		ProgressPercent: 0,
		Stage:           "starting",
	})
	w.sendHeartbeat(send)

	send(&plugin_pb.WorkerToAdminMessage{
		Body: &plugin_pb.WorkerToAdminMessage_Acknowledge{Acknowledge: &plugin_pb.WorkerAcknowledge{
			RequestId: requestID,
			Accepted:  true,
			Message:   "execute request accepted",
		}},
	})

	go func() {
		requestCtx, cancelRequest := context.WithCancel(ctx)
		w.setWorkCancel(cancelRequest, requestID, request.Job.JobId)
		defer func() {
			w.clearWorkCancel(requestID, request.Job.JobId)
			cancelRequest()
			<-w.execSlots
			w.clearRunningWork(workKey)
			w.sendHeartbeat(send)
		}()

		executionSender := &executionSender{
			requestID: requestID,
			jobID:     request.Job.JobId,
			jobType:   resolvedJobType,
			send:      send,
			onProgress: func(progress float64, stage string) {
				w.updateRunningExecution(workKey, progress, stage)
			},
		}
		if err := handler.Execute(requestCtx, request, executionSender); err != nil {
			executionSender.SendCompleted(&plugin_pb.JobCompleted{
				Success:      false,
				ErrorMessage: err.Error(),
			})
		}
	}()
}

func (w *Worker) buildHello() *plugin_pb.WorkerHello {
	jobTypeKeys := make([]string, 0, len(w.handlers))
	for key := range w.handlers {
		jobTypeKeys = append(jobTypeKeys, key)
	}
	sort.Strings(jobTypeKeys)

	capabilities := make([]*plugin_pb.JobTypeCapability, 0, len(jobTypeKeys))
	jobTypes := make([]string, 0, len(jobTypeKeys))

	for _, key := range jobTypeKeys {
		handler := w.handlers[key]
		if handler == nil {
			continue
		}
		jobType, _ := resolveHandlerJobType(handler)
		capability := handler.Capability()
		if capability == nil {
			capability = &plugin_pb.JobTypeCapability{}
		} else {
			capability = proto.Clone(capability).(*plugin_pb.JobTypeCapability)
		}
		if strings.TrimSpace(capability.JobType) == "" {
			capability.JobType = jobType
		}
		capability.MaxDetectionConcurrency = int32(cap(w.detectSlots))
		capability.MaxExecutionConcurrency = int32(cap(w.execSlots))
		capabilities = append(capabilities, capability)
		if capability.JobType != "" {
			jobTypes = append(jobTypes, capability.JobType)
		}
	}

	instanceID := generateWorkerID()
	return &plugin_pb.WorkerHello{
		WorkerId:         w.workerID,
		WorkerInstanceId: "inst-" + instanceID,
		Address:          w.opts.WorkerAddress,
		WorkerVersion:    w.opts.WorkerVersion,
		ProtocolVersion:  "plugin.v1",
		Capabilities:     capabilities,
		Metadata: map[string]string{
			"runtime":   "plugin",
			"job_types": strings.Join(jobTypes, ","),
		},
	}
}

func (w *Worker) buildHeartbeat() *plugin_pb.WorkerHeartbeat {
	w.runningMu.RLock()
	running := make([]*plugin_pb.RunningWork, 0, len(w.runningWork))
	for _, work := range w.runningWork {
		if work == nil {
			continue
		}
		cloned := *work
		running = append(running, &cloned)
	}
	w.runningMu.RUnlock()

	detectUsed := len(w.detectSlots)
	execUsed := len(w.execSlots)
	return &plugin_pb.WorkerHeartbeat{
		WorkerId:            w.workerID,
		RunningWork:         running,
		DetectionSlotsUsed:  int32(detectUsed),
		DetectionSlotsTotal: int32(cap(w.detectSlots)),
		ExecutionSlotsUsed:  int32(execUsed),
		ExecutionSlotsTotal: int32(cap(w.execSlots)),
		QueuedJobsByType:    map[string]int32{},
		Metadata: map[string]string{
			"runtime": "plugin",
		},
	}
}
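// Slot accounting note: detectSlots and execSlots are buffered channels used as
// counting semaphores, so len() is the number of in-flight requests and cap()
// is the configured limit; the heartbeat reports both so the admin side can
// observe worker load.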

func (w *Worker) sendHeartbeat(send func(*plugin_pb.WorkerToAdminMessage) bool) {
	if send == nil {
		return
	}
	send(&plugin_pb.WorkerToAdminMessage{
		Body: &plugin_pb.WorkerToAdminMessage_Heartbeat{
			Heartbeat: w.buildHeartbeat(),
		},
	})
}

func (w *Worker) setRunningWork(key string, work *plugin_pb.RunningWork) {
	if strings.TrimSpace(key) == "" || work == nil {
		return
	}
	w.runningMu.Lock()
	w.runningWork[key] = work
	w.runningMu.Unlock()
}

func (w *Worker) clearRunningWork(key string) {
	w.runningMu.Lock()
	delete(w.runningWork, key)
	w.runningMu.Unlock()
}

func (w *Worker) updateRunningExecution(key string, progress float64, stage string) {
	w.runningMu.Lock()
	if running := w.runningWork[key]; running != nil {
		running.ProgressPercent = progress
		if strings.TrimSpace(stage) != "" {
			running.Stage = stage
		}
		running.State = plugin_pb.JobState_JOB_STATE_RUNNING
	}
	w.runningMu.Unlock()
}

type detectionSender struct {
	requestID string
	jobType   string
	send      func(*plugin_pb.WorkerToAdminMessage) bool
}

func (s *detectionSender) SendProposals(proposals *plugin_pb.DetectionProposals) error {
	if proposals == nil {
		return fmt.Errorf("detection proposals are nil")
	}
	if proposals.RequestId == "" {
		proposals.RequestId = s.requestID
	}
	if proposals.JobType == "" {
		proposals.JobType = s.jobType
	}
	if !s.send(&plugin_pb.WorkerToAdminMessage{
		Body: &plugin_pb.WorkerToAdminMessage_DetectionProposals{DetectionProposals: proposals},
	}) {
		return fmt.Errorf("stream closed")
	}
	return nil
}

func (s *detectionSender) SendComplete(complete *plugin_pb.DetectionComplete) error {
	if complete == nil {
		return fmt.Errorf("detection complete is nil")
	}
	if complete.RequestId == "" {
		complete.RequestId = s.requestID
	}
	if complete.JobType == "" {
		complete.JobType = s.jobType
	}
	if !s.send(&plugin_pb.WorkerToAdminMessage{
		Body: &plugin_pb.WorkerToAdminMessage_DetectionComplete{DetectionComplete: complete},
	}) {
		return fmt.Errorf("stream closed")
	}
	return nil
}

func (s *detectionSender) SendActivity(activity *plugin_pb.ActivityEvent) error {
	if activity == nil {
		return fmt.Errorf("detection activity is nil")
	}
	if activity.CreatedAt == nil {
		activity.CreatedAt = timestamppb.Now()
	}
	if activity.Source == plugin_pb.ActivitySource_ACTIVITY_SOURCE_UNSPECIFIED {
		activity.Source = plugin_pb.ActivitySource_ACTIVITY_SOURCE_DETECTOR
	}

	update := &plugin_pb.JobProgressUpdate{
		RequestId:       s.requestID,
		JobType:         s.jobType,
		State:           plugin_pb.JobState_JOB_STATE_RUNNING,
		ProgressPercent: 0,
		Stage:           activity.Stage,
		Message:         activity.Message,
		Activities:      []*plugin_pb.ActivityEvent{activity},
		UpdatedAt:       timestamppb.Now(),
	}

	if !s.send(&plugin_pb.WorkerToAdminMessage{
		Body: &plugin_pb.WorkerToAdminMessage_JobProgressUpdate{JobProgressUpdate: update},
	}) {
		return fmt.Errorf("stream closed")
	}
	return nil
}

type executionSender struct {
	requestID  string
	jobID      string
	jobType    string
	send       func(*plugin_pb.WorkerToAdminMessage) bool
	onProgress func(progress float64, stage string)
}

func (s *executionSender) SendProgress(progress *plugin_pb.JobProgressUpdate) error {
	if progress == nil {
		return fmt.Errorf("job progress is nil")
	}
	if progress.RequestId == "" {
		progress.RequestId = s.requestID
	}
	if progress.JobId == "" {
		progress.JobId = s.jobID
	}
	if progress.JobType == "" {
		progress.JobType = s.jobType
	}
	if progress.UpdatedAt == nil {
		progress.UpdatedAt = timestamppb.Now()
	}
	if s.onProgress != nil {
		s.onProgress(progress.ProgressPercent, progress.Stage)
	}
	if !s.send(&plugin_pb.WorkerToAdminMessage{
		Body: &plugin_pb.WorkerToAdminMessage_JobProgressUpdate{JobProgressUpdate: progress},
	}) {
		return fmt.Errorf("stream closed")
	}
	return nil
}

func (s *executionSender) SendCompleted(completed *plugin_pb.JobCompleted) error {
	if completed == nil {
		return fmt.Errorf("job completed is nil")
	}
	if completed.RequestId == "" {
		completed.RequestId = s.requestID
	}
	if completed.JobId == "" {
		completed.JobId = s.jobID
	}
	if completed.JobType == "" {
		completed.JobType = s.jobType
	}
	if completed.CompletedAt == nil {
		completed.CompletedAt = timestamppb.Now()
	}
	if !s.send(&plugin_pb.WorkerToAdminMessage{
		Body: &plugin_pb.WorkerToAdminMessage_JobCompleted{JobCompleted: completed},
	}) {
		return fmt.Errorf("stream closed")
	}
	return nil
}

func generateWorkerID() string {
	random := make([]byte, 3)
	if _, err := rand.Read(random); err != nil {
		return fmt.Sprintf("plugin-%d", time.Now().UnixNano())
	}
	return "plugin-" + hex.EncodeToString(random)
}

func (w *Worker) setWorkCancel(cancel context.CancelFunc, keys ...string) {
	if cancel == nil {
		return
	}
	w.workCancelMu.Lock()
	defer w.workCancelMu.Unlock()
	for _, key := range keys {
		key = strings.TrimSpace(key)
		if key == "" {
			continue
		}
		w.workCancel[key] = cancel
	}
}

func (w *Worker) clearWorkCancel(keys ...string) {
	w.workCancelMu.Lock()
	defer w.workCancelMu.Unlock()
	for _, key := range keys {
		key = strings.TrimSpace(key)
		if key == "" {
			continue
		}
		delete(w.workCancel, key)
	}
}

func (w *Worker) cancelWork(targetID string) bool {
	targetID = strings.TrimSpace(targetID)
	if targetID == "" {
		return false
	}

	w.workCancelMu.Lock()
	cancel := w.workCancel[targetID]
	w.workCancelMu.Unlock()
	if cancel == nil {
		return false
	}
	cancel()
	return true
}

func (w *Worker) findHandler(jobType string) (JobHandler, string, error) {
	trimmed := strings.TrimSpace(jobType)
	if trimmed == "" {
		if len(w.handlers) == 1 {
			for _, handler := range w.handlers {
				resolvedJobType, err := resolveHandlerJobType(handler)
				return handler, resolvedJobType, err
			}
		}
		return nil, "", fmt.Errorf("job type is required when worker serves multiple job types")
	}

	key := normalizeJobTypeKey(trimmed)
	handler := w.handlers[key]
	if handler == nil {
		return nil, "", fmt.Errorf("job type %q is not handled by this worker", trimmed)
	}
	resolvedJobType, err := resolveHandlerJobType(handler)
	if err != nil {
		return nil, "", err
	}
	return handler, resolvedJobType, nil
}

func resolveHandlerJobType(handler JobHandler) (string, error) {
	if handler == nil {
		return "", fmt.Errorf("job handler is nil")
	}

	if descriptor := handler.Descriptor(); descriptor != nil {
		if jobType := strings.TrimSpace(descriptor.JobType); jobType != "" {
			return jobType, nil
		}
	}
	if capability := handler.Capability(); capability != nil {
		if jobType := strings.TrimSpace(capability.JobType); jobType != "" {
			return jobType, nil
		}
	}
	return "", fmt.Errorf("handler job type is not configured")
}

func normalizeJobTypeKey(jobType string) string {
	return strings.ToLower(strings.TrimSpace(jobType))
}
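End-to-end, a minimal external worker process wires a handler into this runtime roughly as in the sketch below. This is a non-authoritative illustration: the admin address is a placeholder and the plaintext credentials are for local experimentation only.

	// Sketch: serve the volume_balance job type from a standalone process.
	worker, err := NewWorker(WorkerOptions{
		AdminServer:    "localhost:23646",
		GrpcDialOption: grpc.WithTransportCredentials(insecure.NewCredentials()),
		Handler:        NewVolumeBalanceHandler(nil), // nil gRPC dial option here only for the sketch
	})
	if err != nil {
		glog.Fatalf("new plugin worker: %v", err)
	}
	// Run blocks, reconnecting after ReconnectDelay until the context is canceled.
	_ = worker.Run(context.Background())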

weed/plugin/worker/worker_test.go (new file, 599 lines)
@@ -0,0 +1,599 @@
|
||||
package pluginworker
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
|
||||
"google.golang.org/grpc"
|
||||
"google.golang.org/grpc/credentials/insecure"
|
||||
)
|
||||
|
||||
func TestWorkerBuildHelloUsesConfiguredConcurrency(t *testing.T) {
|
||||
handler := &testJobHandler{
|
||||
capability: &plugin_pb.JobTypeCapability{
|
||||
JobType: "vacuum",
|
||||
CanDetect: true,
|
||||
CanExecute: true,
|
||||
MaxDetectionConcurrency: 99,
|
||||
MaxExecutionConcurrency: 88,
|
||||
},
|
||||
descriptor: &plugin_pb.JobTypeDescriptor{JobType: "vacuum"},
|
||||
}
|
||||
|
||||
worker, err := NewWorker(WorkerOptions{
|
||||
AdminServer: "localhost:23646",
|
||||
GrpcDialOption: grpc.WithTransportCredentials(insecure.NewCredentials()),
|
||||
Handler: handler,
|
||||
MaxDetectionConcurrency: 3,
|
||||
MaxExecutionConcurrency: 4,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatalf("NewWorker error = %v", err)
|
||||
}
|
||||
|
||||
hello := worker.buildHello()
|
||||
if hello == nil || len(hello.Capabilities) != 1 {
|
||||
t.Fatalf("expected one capability in hello")
|
||||
}
|
||||
capability := hello.Capabilities[0]
|
||||
if capability.MaxDetectionConcurrency != 3 {
|
||||
t.Fatalf("expected max_detection_concurrency=3, got=%d", capability.MaxDetectionConcurrency)
|
||||
}
|
||||
if capability.MaxExecutionConcurrency != 4 {
|
||||
t.Fatalf("expected max_execution_concurrency=4, got=%d", capability.MaxExecutionConcurrency)
|
||||
}
|
||||
if capability.JobType != "vacuum" {
|
||||
t.Fatalf("expected job type vacuum, got=%q", capability.JobType)
|
||||
}
|
||||
}
|
||||
|
||||
func TestWorkerBuildHelloIncludesMultipleCapabilities(t *testing.T) {
    worker, err := NewWorker(WorkerOptions{
        AdminServer: "localhost:23646",
        GrpcDialOption: grpc.WithTransportCredentials(insecure.NewCredentials()),
        Handlers: []JobHandler{
            &testJobHandler{
                capability: &plugin_pb.JobTypeCapability{JobType: "vacuum", CanDetect: true, CanExecute: true},
                descriptor: &plugin_pb.JobTypeDescriptor{JobType: "vacuum"},
            },
            &testJobHandler{
                capability: &plugin_pb.JobTypeCapability{JobType: "volume_balance", CanDetect: true, CanExecute: true},
                descriptor: &plugin_pb.JobTypeDescriptor{JobType: "volume_balance"},
            },
        },
        MaxDetectionConcurrency: 2,
        MaxExecutionConcurrency: 3,
    })
    if err != nil {
        t.Fatalf("NewWorker error = %v", err)
    }

    hello := worker.buildHello()
    if hello == nil || len(hello.Capabilities) != 2 {
        t.Fatalf("expected two capabilities in hello")
    }

    found := map[string]bool{}
    for _, capability := range hello.Capabilities {
        found[capability.JobType] = true
        if capability.MaxDetectionConcurrency != 2 {
            t.Fatalf("expected max_detection_concurrency=2, got=%d", capability.MaxDetectionConcurrency)
        }
        if capability.MaxExecutionConcurrency != 3 {
            t.Fatalf("expected max_execution_concurrency=3, got=%d", capability.MaxExecutionConcurrency)
        }
    }
    if !found["vacuum"] || !found["volume_balance"] {
        t.Fatalf("expected capabilities for vacuum and volume_balance, got=%v", found)
    }
}

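// TestWorkerCancelWorkByTargetID verifies that in-flight work can be canceled
// by request id or job id, and that unknown targets are rejected.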
func TestWorkerCancelWorkByTargetID(t *testing.T) {
    worker, err := NewWorker(WorkerOptions{
        AdminServer: "localhost:23646",
        GrpcDialOption: grpc.WithTransportCredentials(insecure.NewCredentials()),
        Handler: &testJobHandler{
            capability: &plugin_pb.JobTypeCapability{JobType: "vacuum"},
            descriptor: &plugin_pb.JobTypeDescriptor{JobType: "vacuum"},
        },
    })
    if err != nil {
        t.Fatalf("NewWorker error = %v", err)
    }

    ctx, cancel := context.WithCancel(context.Background())
    worker.setWorkCancel(cancel, "request-1", "job-1")

    if !worker.cancelWork("request-1") {
        t.Fatalf("expected cancel by request id to succeed")
    }
    select {
    case <-ctx.Done():
    case <-time.After(100 * time.Millisecond):
        t.Fatalf("expected context to be canceled")
    }

    if !worker.cancelWork("job-1") {
        t.Fatalf("expected cancel by job id to succeed")
    }
    if worker.cancelWork("unknown-target") {
        t.Fatalf("expected cancel unknown target to fail")
    }
}

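// TestWorkerHandleCancelRequestAck verifies that a CancelRequest is
// acknowledged as accepted for a registered target and rejected for a
// missing one.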
func TestWorkerHandleCancelRequestAck(t *testing.T) {
    worker, err := NewWorker(WorkerOptions{
        AdminServer: "localhost:23646",
        GrpcDialOption: grpc.WithTransportCredentials(insecure.NewCredentials()),
        Handler: &testJobHandler{
            capability: &plugin_pb.JobTypeCapability{JobType: "vacuum"},
            descriptor: &plugin_pb.JobTypeDescriptor{JobType: "vacuum"},
        },
    })
    if err != nil {
        t.Fatalf("NewWorker error = %v", err)
    }

    canceled := false
    worker.setWorkCancel(func() { canceled = true }, "job-42")

    var response *plugin_pb.WorkerToAdminMessage
    ok := worker.handleAdminMessageForTest(&plugin_pb.AdminToWorkerMessage{
        RequestId: "cancel-req-1",
        Body: &plugin_pb.AdminToWorkerMessage_CancelRequest{
            CancelRequest: &plugin_pb.CancelRequest{TargetId: "job-42"},
        },
    }, func(msg *plugin_pb.WorkerToAdminMessage) bool {
        response = msg
        return true
    })
    if !ok {
        t.Fatalf("expected send callback to be invoked")
    }
    if !canceled {
        t.Fatalf("expected registered work cancel function to be called")
    }
    if response == nil || response.GetAcknowledge() == nil || !response.GetAcknowledge().Accepted {
        t.Fatalf("expected accepted acknowledge response, got=%+v", response)
    }

    response = nil
    ok = worker.handleAdminMessageForTest(&plugin_pb.AdminToWorkerMessage{
        RequestId: "cancel-req-2",
        Body: &plugin_pb.AdminToWorkerMessage_CancelRequest{
            CancelRequest: &plugin_pb.CancelRequest{TargetId: "missing"},
        },
    }, func(msg *plugin_pb.WorkerToAdminMessage) bool {
        response = msg
        return true
    })
    if !ok {
        t.Fatalf("expected send callback to be invoked")
    }
    if response == nil || response.GetAcknowledge() == nil || response.GetAcknowledge().Accepted {
        t.Fatalf("expected rejected acknowledge for missing target, got=%+v", response)
    }
}

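// TestWorkerSchemaRequestRequiresJobTypeWhenMultipleHandlers verifies that a
// config schema request without a job type fails when the worker hosts more
// than one handler.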
func TestWorkerSchemaRequestRequiresJobTypeWhenMultipleHandlers(t *testing.T) {
    worker, err := NewWorker(WorkerOptions{
        AdminServer: "localhost:23646",
        GrpcDialOption: grpc.WithTransportCredentials(insecure.NewCredentials()),
        Handlers: []JobHandler{
            &testJobHandler{
                capability: &plugin_pb.JobTypeCapability{JobType: "vacuum"},
                descriptor: &plugin_pb.JobTypeDescriptor{JobType: "vacuum"},
            },
            &testJobHandler{
                capability: &plugin_pb.JobTypeCapability{JobType: "erasure_coding"},
                descriptor: &plugin_pb.JobTypeDescriptor{JobType: "erasure_coding"},
            },
        },
    })
    if err != nil {
        t.Fatalf("NewWorker error = %v", err)
    }

    var response *plugin_pb.WorkerToAdminMessage
    ok := worker.handleAdminMessageForTest(&plugin_pb.AdminToWorkerMessage{
        RequestId: "schema-req-1",
        Body: &plugin_pb.AdminToWorkerMessage_RequestConfigSchema{
            RequestConfigSchema: &plugin_pb.RequestConfigSchema{},
        },
    }, func(msg *plugin_pb.WorkerToAdminMessage) bool {
        response = msg
        return true
    })
    if !ok {
        t.Fatalf("expected send callback to be invoked")
    }
    schema := response.GetConfigSchemaResponse()
    if schema == nil || schema.Success {
        t.Fatalf("expected schema error response, got=%+v", response)
    }
}

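// TestWorkerHandleDetectionQueuesWhenAtCapacity verifies that a detection
// request arriving while the single detection slot is busy is acknowledged
// immediately but only runs after the slot frees up.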
func TestWorkerHandleDetectionQueuesWhenAtCapacity(t *testing.T) {
    handler := &detectionQueueTestHandler{
        capability: &plugin_pb.JobTypeCapability{
            JobType: "vacuum",
            CanDetect: true,
            CanExecute: false,
        },
        descriptor: &plugin_pb.JobTypeDescriptor{JobType: "vacuum"},
        detectEntered: make(chan struct{}, 2),
        detectContinue: make(chan struct{}, 2),
    }

    worker, err := NewWorker(WorkerOptions{
        AdminServer: "localhost:23646",
        GrpcDialOption: grpc.WithTransportCredentials(insecure.NewCredentials()),
        Handler: handler,
        MaxDetectionConcurrency: 1,
    })
    if err != nil {
        t.Fatalf("NewWorker error = %v", err)
    }

    msgCh := make(chan *plugin_pb.WorkerToAdminMessage, 8)
    send := func(msg *plugin_pb.WorkerToAdminMessage) bool {
        msgCh <- msg
        return true
    }

    sendDetection := func(requestID string) {
        worker.handleAdminMessage(context.Background(), &plugin_pb.AdminToWorkerMessage{
            RequestId: requestID,
            Body: &plugin_pb.AdminToWorkerMessage_RunDetectionRequest{
                RunDetectionRequest: &plugin_pb.RunDetectionRequest{
                    JobType: "vacuum",
                },
            },
        }, send)
    }

    sendDetection("detect-1")
    waitForWorkerMessage(t, msgCh, func(message *plugin_pb.WorkerToAdminMessage) bool {
        ack := message.GetAcknowledge()
        return ack != nil && ack.RequestId == "detect-1" && ack.Accepted
    }, "detection acknowledge detect-1")
    <-handler.detectEntered

    sendDetection("detect-2")
    waitForWorkerMessage(t, msgCh, func(message *plugin_pb.WorkerToAdminMessage) bool {
        ack := message.GetAcknowledge()
        return ack != nil && ack.RequestId == "detect-2" && ack.Accepted
    }, "detection acknowledge detect-2")

    select {
    case unexpected := <-msgCh:
        t.Fatalf("did not expect detection completion before slot is available, got=%+v", unexpected)
    case <-time.After(100 * time.Millisecond):
    }

    handler.detectContinue <- struct{}{}
    waitForWorkerMessage(t, msgCh, func(message *plugin_pb.WorkerToAdminMessage) bool {
        complete := message.GetDetectionComplete()
        return complete != nil && complete.RequestId == "detect-1" && complete.Success
    }, "detection complete detect-1")

    <-handler.detectEntered
    handler.detectContinue <- struct{}{}
    waitForWorkerMessage(t, msgCh, func(message *plugin_pb.WorkerToAdminMessage) bool {
        complete := message.GetDetectionComplete()
        return complete != nil && complete.RequestId == "detect-2" && complete.Success
    }, "detection complete detect-2")
}

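// TestWorkerHeartbeatReflectsActiveDetectionLoad verifies that heartbeats
// report a used detection slot and the running work entry while a detection
// is in flight, then return to idle after completion.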
func TestWorkerHeartbeatReflectsActiveDetectionLoad(t *testing.T) {
    handler := &detectionQueueTestHandler{
        capability: &plugin_pb.JobTypeCapability{
            JobType: "vacuum",
            CanDetect: true,
            CanExecute: false,
        },
        descriptor: &plugin_pb.JobTypeDescriptor{JobType: "vacuum"},
        detectEntered: make(chan struct{}, 1),
        detectContinue: make(chan struct{}, 1),
    }

    worker, err := NewWorker(WorkerOptions{
        AdminServer: "localhost:23646",
        GrpcDialOption: grpc.WithTransportCredentials(insecure.NewCredentials()),
        Handler: handler,
        MaxDetectionConcurrency: 1,
    })
    if err != nil {
        t.Fatalf("NewWorker error = %v", err)
    }

    msgCh := make(chan *plugin_pb.WorkerToAdminMessage, 16)
    send := func(msg *plugin_pb.WorkerToAdminMessage) bool {
        msgCh <- msg
        return true
    }

    requestID := "detect-heartbeat-1"
    worker.handleAdminMessage(context.Background(), &plugin_pb.AdminToWorkerMessage{
        RequestId: requestID,
        Body: &plugin_pb.AdminToWorkerMessage_RunDetectionRequest{
            RunDetectionRequest: &plugin_pb.RunDetectionRequest{
                JobType: "vacuum",
            },
        },
    }, send)

    <-handler.detectEntered

    waitForWorkerMessage(t, msgCh, func(message *plugin_pb.WorkerToAdminMessage) bool {
        heartbeat := message.GetHeartbeat()
        return heartbeat != nil &&
            heartbeat.DetectionSlotsUsed > 0 &&
            heartbeatHasRunningWork(heartbeat, requestID, plugin_pb.WorkKind_WORK_KIND_DETECTION)
    }, "active detection heartbeat")

    handler.detectContinue <- struct{}{}
    waitForWorkerMessage(t, msgCh, func(message *plugin_pb.WorkerToAdminMessage) bool {
        complete := message.GetDetectionComplete()
        return complete != nil && complete.RequestId == requestID && complete.Success
    }, "detection complete")

    waitForWorkerMessage(t, msgCh, func(message *plugin_pb.WorkerToAdminMessage) bool {
        heartbeat := message.GetHeartbeat()
        return heartbeat != nil && heartbeat.DetectionSlotsUsed == 0 &&
            !heartbeatHasRunningWork(heartbeat, requestID, plugin_pb.WorkKind_WORK_KIND_DETECTION)
    }, "idle detection heartbeat")
}

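// TestWorkerHeartbeatReflectsActiveExecutionLoad verifies the same for
// execution: heartbeats report a used execution slot and the running job,
// then return to idle after the job completes.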
func TestWorkerHeartbeatReflectsActiveExecutionLoad(t *testing.T) {
    handler := &executionHeartbeatTestHandler{
        capability: &plugin_pb.JobTypeCapability{
            JobType: "vacuum",
            CanDetect: false,
            CanExecute: true,
        },
        descriptor: &plugin_pb.JobTypeDescriptor{JobType: "vacuum"},
        executeEntered: make(chan struct{}, 1),
        executeDone: make(chan struct{}, 1),
    }

    worker, err := NewWorker(WorkerOptions{
        AdminServer: "localhost:23646",
        GrpcDialOption: grpc.WithTransportCredentials(insecure.NewCredentials()),
        Handler: handler,
        MaxExecutionConcurrency: 1,
    })
    if err != nil {
        t.Fatalf("NewWorker error = %v", err)
    }

    msgCh := make(chan *plugin_pb.WorkerToAdminMessage, 16)
    send := func(msg *plugin_pb.WorkerToAdminMessage) bool {
        msgCh <- msg
        return true
    }

    requestID := "exec-heartbeat-1"
    jobID := "job-heartbeat-1"
    worker.handleAdminMessage(context.Background(), &plugin_pb.AdminToWorkerMessage{
        RequestId: requestID,
        Body: &plugin_pb.AdminToWorkerMessage_ExecuteJobRequest{
            ExecuteJobRequest: &plugin_pb.ExecuteJobRequest{
                Job: &plugin_pb.JobSpec{
                    JobId: jobID,
                    JobType: "vacuum",
                },
            },
        },
    }, send)

    <-handler.executeEntered

    waitForWorkerMessage(t, msgCh, func(message *plugin_pb.WorkerToAdminMessage) bool {
        heartbeat := message.GetHeartbeat()
        return heartbeat != nil &&
            heartbeat.ExecutionSlotsUsed > 0 &&
            heartbeatHasRunningWork(heartbeat, jobID, plugin_pb.WorkKind_WORK_KIND_EXECUTION)
    }, "active execution heartbeat")

    handler.executeDone <- struct{}{}
    waitForWorkerMessage(t, msgCh, func(message *plugin_pb.WorkerToAdminMessage) bool {
        completed := message.GetJobCompleted()
        return completed != nil && completed.RequestId == requestID && completed.Success
    }, "execution complete")

    waitForWorkerMessage(t, msgCh, func(message *plugin_pb.WorkerToAdminMessage) bool {
        heartbeat := message.GetHeartbeat()
        return heartbeat != nil && heartbeat.ExecutionSlotsUsed == 0 &&
            !heartbeatHasRunningWork(heartbeat, jobID, plugin_pb.WorkKind_WORK_KIND_EXECUTION)
    }, "idle execution heartbeat")
}

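// testJobHandler is a minimal JobHandler stub returning fixed capability and
// descriptor values; Detect and Execute are no-ops.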
type testJobHandler struct {
    capability *plugin_pb.JobTypeCapability
    descriptor *plugin_pb.JobTypeDescriptor
}

func (h *testJobHandler) Capability() *plugin_pb.JobTypeCapability {
    return h.capability
}

func (h *testJobHandler) Descriptor() *plugin_pb.JobTypeDescriptor {
    return h.descriptor
}

func (h *testJobHandler) Detect(context.Context, *plugin_pb.RunDetectionRequest, DetectionSender) error {
    return nil
}

func (h *testJobHandler) Execute(context.Context, *plugin_pb.ExecuteJobRequest, ExecutionSender) error {
    return nil
}

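// detectionQueueTestHandler signals detectEntered when Detect starts and
// blocks until detectContinue fires, letting tests hold a detection slot open.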
type detectionQueueTestHandler struct {
    capability *plugin_pb.JobTypeCapability
    descriptor *plugin_pb.JobTypeDescriptor

    detectEntered  chan struct{}
    detectContinue chan struct{}
}

func (h *detectionQueueTestHandler) Capability() *plugin_pb.JobTypeCapability {
    return h.capability
}

func (h *detectionQueueTestHandler) Descriptor() *plugin_pb.JobTypeDescriptor {
    return h.descriptor
}

func (h *detectionQueueTestHandler) Detect(ctx context.Context, _ *plugin_pb.RunDetectionRequest, sender DetectionSender) error {
    select {
    case h.detectEntered <- struct{}{}:
    default:
    }

    select {
    case <-ctx.Done():
        return ctx.Err()
    case <-h.detectContinue:
    }

    return sender.SendComplete(&plugin_pb.DetectionComplete{
        Success: true,
    })
}

func (h *detectionQueueTestHandler) Execute(context.Context, *plugin_pb.ExecuteJobRequest, ExecutionSender) error {
    return nil
}

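// executionHeartbeatTestHandler signals executeEntered when Execute starts and
// blocks until executeDone fires, letting tests observe heartbeats mid-job.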
type executionHeartbeatTestHandler struct {
    capability *plugin_pb.JobTypeCapability
    descriptor *plugin_pb.JobTypeDescriptor

    executeEntered chan struct{}
    executeDone    chan struct{}
}

func (h *executionHeartbeatTestHandler) Capability() *plugin_pb.JobTypeCapability {
    return h.capability
}

func (h *executionHeartbeatTestHandler) Descriptor() *plugin_pb.JobTypeDescriptor {
    return h.descriptor
}

func (h *executionHeartbeatTestHandler) Detect(context.Context, *plugin_pb.RunDetectionRequest, DetectionSender) error {
    return nil
}

func (h *executionHeartbeatTestHandler) Execute(ctx context.Context, request *plugin_pb.ExecuteJobRequest, sender ExecutionSender) error {
    select {
    case h.executeEntered <- struct{}{}:
    default:
    }

    select {
    case <-ctx.Done():
        return ctx.Err()
    case <-h.executeDone:
    }

    return sender.SendCompleted(&plugin_pb.JobCompleted{
        JobId: request.Job.JobId,
        JobType: request.Job.JobType,
        Success: true,
    })
}

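// recvWorkerMessage returns the next worker-to-admin message, failing the
// test after two seconds.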
func recvWorkerMessage(t *testing.T, msgCh <-chan *plugin_pb.WorkerToAdminMessage) *plugin_pb.WorkerToAdminMessage {
    t.Helper()
    select {
    case msg := <-msgCh:
        return msg
    case <-time.After(2 * time.Second):
        t.Fatal("timed out waiting for worker message")
        return nil
    }
}

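// expectDetectionAck fails the test unless the message is an accepted
// acknowledge for the given request id.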
func expectDetectionAck(t *testing.T, message *plugin_pb.WorkerToAdminMessage, requestID string) {
    t.Helper()
    ack := message.GetAcknowledge()
    if ack == nil {
        t.Fatalf("expected acknowledge for request %q, got=%+v", requestID, message)
    }
    if ack.RequestId != requestID {
        t.Fatalf("expected acknowledge request_id=%q, got=%q", requestID, ack.RequestId)
    }
    if !ack.Accepted {
        t.Fatalf("expected acknowledge accepted for request %q, got=%+v", requestID, ack)
    }
}

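// expectDetectionCompleteSuccess fails the test unless the message is a
// successful DetectionComplete for the given request id.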
func expectDetectionCompleteSuccess(t *testing.T, message *plugin_pb.WorkerToAdminMessage, requestID string) {
    t.Helper()
    complete := message.GetDetectionComplete()
    if complete == nil {
        t.Fatalf("expected detection complete for request %q, got=%+v", requestID, message)
    }
    if complete.RequestId != requestID {
        t.Fatalf("expected detection complete request_id=%q, got=%q", requestID, complete.RequestId)
    }
    if !complete.Success {
        t.Fatalf("expected successful detection complete for request %q, got=%+v", requestID, complete)
    }
}

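// waitForWorkerMessage drains messages until one satisfies the predicate,
// failing the test after three seconds.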
func waitForWorkerMessage(
    t *testing.T,
    msgCh <-chan *plugin_pb.WorkerToAdminMessage,
    predicate func(*plugin_pb.WorkerToAdminMessage) bool,
    description string,
) *plugin_pb.WorkerToAdminMessage {
    t.Helper()

    timeout := time.NewTimer(3 * time.Second)
    defer timeout.Stop()

    for {
        select {
        case message := <-msgCh:
            if predicate(message) {
                return message
            }
        case <-timeout.C:
            t.Fatalf("timed out waiting for %s", description)
            return nil
        }
    }
}

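// heartbeatHasRunningWork reports whether the heartbeat lists the given work
// id with the given kind among its running work entries.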
func heartbeatHasRunningWork(heartbeat *plugin_pb.WorkerHeartbeat, workID string, kind plugin_pb.WorkKind) bool {
    if heartbeat == nil || workID == "" {
        return false
    }
    for _, work := range heartbeat.RunningWork {
        if work == nil {
            continue
        }
        if work.WorkId == workID && work.Kind == kind {
            return true
        }
    }
    return false
}

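// handleAdminMessageForTest routes a message through handleAdminMessage and
// reports whether the send callback was invoked.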
func (w *Worker) handleAdminMessageForTest(
    message *plugin_pb.AdminToWorkerMessage,
    send func(*plugin_pb.WorkerToAdminMessage) bool,
) bool {
    called := false
    w.handleAdminMessage(context.Background(), message, func(msg *plugin_pb.WorkerToAdminMessage) bool {
        called = true
        return send(msg)
    })
    return called
}