Refactor plugin system and migrate worker runtime (#8369)
* admin: add plugin runtime UI page and route wiring
* pb: add plugin gRPC contract and generated bindings
* admin/plugin: implement worker registry, runtime, monitoring, and config store
* admin/dash: wire plugin runtime and expose plugin workflow APIs
* command: add flags to enable plugin runtime
* admin: rename remaining plugin v2 wording to plugin
* admin/plugin: add detectable job type registry helper
* admin/plugin: add scheduled detection and dispatch orchestration
* admin/plugin: prefetch job type descriptors when workers connect
* admin/plugin: add known job type discovery API and UI
* admin/plugin: refresh design doc to match current implementation
* admin/plugin: enforce per-worker scheduler concurrency limits
* admin/plugin: use descriptor runtime defaults for scheduler policy
* admin/ui: auto-load first known plugin job type on page open
* admin/plugin: bootstrap persisted config from descriptor defaults
* admin/plugin: dedupe scheduled proposals by dedupe key
* admin/ui: add job type and state filters for plugin monitoring
* admin/ui: add per-job-type plugin activity summary
* admin/plugin: split descriptor read API from schema refresh
* admin/ui: keep plugin summary metrics global while tables are filtered
* admin/plugin: retry executor reservation before timing out
* admin/plugin: expose scheduler states for monitoring
* admin/ui: show per-job-type scheduler states in plugin monitor
* pb/plugin: rename protobuf package to plugin
* admin/plugin: rename pluginRuntime wiring to plugin
* admin/plugin: remove runtime naming from plugin APIs and UI
* admin/plugin: rename runtime files to plugin naming
* admin/plugin: persist jobs and activities for monitor recovery
* admin/plugin: lease one detector worker per job type
* admin/ui: show worker load from plugin heartbeats
* admin/plugin: skip stale workers for detector and executor picks
* plugin/worker: add plugin worker command and stream runtime scaffold
* plugin/worker: implement vacuum detect and execute handlers
* admin/plugin: document external vacuum plugin worker starter
* command: update plugin.worker help to reflect implemented flow
* command/admin: drop legacy Plugin V2 label
* plugin/worker: validate vacuum job type and respect min interval
* plugin/worker: test no-op detect when min interval not elapsed
* command/admin: document plugin.worker external process
* plugin/worker: advertise configured concurrency in hello
* command/plugin.worker: add jobType handler selection
* command/plugin.worker: test handler selection by job type
* command/plugin.worker: persist worker id in workingDir
* admin/plugin: document plugin.worker jobType and workingDir flags
* plugin/worker: support cancel request for in-flight work
* plugin/worker: test cancel request acknowledgements
* command/plugin.worker: document workingDir and jobType behavior
* plugin/worker: emit executor activity events for monitor
* plugin/worker: test executor activity builder
* admin/plugin: send last successful run in detection request
* admin/plugin: send cancel request when detect or execute context ends
* admin/plugin: document worker cancel request responsibility
* admin/handlers: expose plugin scheduler states API in no-auth mode
* admin/handlers: test plugin scheduler states route registration
* admin/plugin: keep worker id on worker-generated activity records
* admin/plugin: test worker id propagation in monitor activities
* admin/dash: always initialize plugin service
* command/admin: remove plugin enable flags and default to enabled
* admin/dash: drop pluginEnabled constructor parameter
* admin/plugin UI: stop checking plugin enabled state
* admin/plugin: remove docs for plugin enable flags
* admin/dash: remove unused plugin enabled check method
* admin/dash: fallback to in-memory plugin init when dataDir fails
* admin/plugin API: expose worker gRPC port in status
* command/plugin.worker: resolve admin gRPC port via plugin status
* split plugin UI into overview/configuration/monitoring pages
* Update layout_templ.go
* add volume_balance plugin worker handler
* wire plugin.worker CLI for volume_balance job type
* add erasure_coding plugin worker handler
* wire plugin.worker CLI for erasure_coding job type
* support multi-job handlers in plugin worker runtime
* allow plugin.worker jobType as comma-separated list
* admin/plugin UI: rename to Workers and simplify config view
* plugin worker: queue detection requests instead of capacity reject
* Update plugin_worker.go
* plugin volume_balance: remove force_move/timeout from worker config UI
* plugin erasure_coding: enforce local working dir and cleanup
* admin/plugin UI: rename admin settings to job scheduling
* admin/plugin UI: persist and robustly render detection results
* admin/plugin: record and return detection trace metadata
* admin/plugin UI: show detection process and decision trace
* plugin: surface detector decision trace as activities
* mini: start a plugin worker by default
* admin/plugin UI: split monitoring into detection and execution tabs
* plugin worker: emit detection decision trace for EC and balance
* admin workers UI: split monitoring into detection and execution pages
* plugin scheduler: skip proposals for active assigned/running jobs
* admin workers UI: add job queue tab
* plugin worker: add dummy stress detector and executor job type
* admin workers UI: reorder tabs to detection queue execution
* admin workers UI: regenerate plugin template
* plugin defaults: include dummy stress and add stress tests
* plugin dummy stress: rotate detection selections across runs
* plugin scheduler: remove cross-run proposal dedupe
* plugin queue: track pending scheduled jobs
* plugin scheduler: wait for executor capacity before dispatch
* plugin scheduler: skip detection when waiting backlog is high
* plugin: add disk-backed job detail API and persistence
* admin ui: show plugin job detail modal from job id links
* plugin: generate unique job ids instead of reusing proposal ids
* plugin worker: emit heartbeats on work state changes
* plugin registry: round-robin tied executor and detector picks
* add temporary EC overnight stress runner
* plugin job details: persist and render EC execution plans
* ec volume details: color data and parity shard badges
* shard labels: keep parity ids numeric and color-only distinction
* admin: remove legacy maintenance UI routes and templates
* admin: remove dead maintenance endpoint helpers
* Update layout_templ.go
* remove dummy_stress worker and command support
* refactor plugin UI to job-type top tabs and sub-tabs
* migrate weed worker command to plugin runtime
* remove plugin.worker command and keep worker runtime with metrics
* update helm worker args for jobType and execution flags
* set plugin scheduling defaults to global 16 and per-worker 4
* stress: fix RPC context reuse and remove redundant variables in ec_stress_runner
* admin/plugin: fix lifecycle races, safe channel operations, and terminal state constants
* admin/dash: randomize job IDs and fix priority zero-value overwrite in plugin API
* admin/handlers: implement buffered rendering to prevent response corruption
* admin/plugin: implement debounced persistence flusher and optimize BuildJobDetail memory lookups
* admin/plugin: fix priority overwrite and implement bounded wait in scheduler reserve
* admin/plugin: implement atomic file writes and fix run record side effects
* admin/plugin: use P prefix for parity shard labels in execution plans
* admin/plugin: enable parallel execution for cancellation tests
* admin: refactor time.Time fields to pointers for better JSON omitempty support
* admin/plugin: implement pointer-safe time assignments and comparisons in plugin core
* admin/plugin: fix time assignment and sorting logic in plugin monitor after pointer refactor
* admin/plugin: update scheduler activity tracking to use time pointers
* admin/plugin: fix time-based run history trimming after pointer refactor
* admin/dash: fix JobSpec struct literal in plugin API after pointer refactor
* admin/view: add D/P prefixes to EC shard badges for UI consistency
* admin/plugin: use lifecycle-aware context for schema prefetching
* Update ec_volume_details_templ.go
* admin/stress: fix proposal sorting and log volume cleanup errors
* stress: refine ec stress runner with math/rand and collection name
  - Added Collection field to VolumeEcShardsDeleteRequest for correct filename construction.
  - Replaced crypto/rand with seeded math/rand PRNG for bulk payloads.
  - Added documentation for EcMinAge zero-value behavior.
  - Added logging for ignored errors in volume/shard deletion.
* admin: return internal server error for plugin store failures
  Changed error status code from 400 Bad Request to 500 Internal Server Error for failures in GetPluginJobDetail to correctly reflect server-side errors.
* admin: implement safe channel sends and graceful shutdown sync
  - Added sync.WaitGroup to Plugin struct to manage background goroutines.
  - Implemented safeSendCh helper using recover() to prevent panics on closed channels.
  - Ensured Shutdown() waits for all background operations to complete.
* admin: robustify plugin monitor with nil-safe time and record init
  - Standardized nil-safe assignment for *time.Time pointers (CreatedAt, UpdatedAt, CompletedAt).
  - Ensured persistJobDetailSnapshot initializes new records correctly if they don't exist on disk.
  - Fixed debounced persistence to trigger immediate write on job completion.
* admin: improve scheduler shutdown behavior and logic guards
  - Replaced brittle error string matching with explicit r.shutdownCh selection for shutdown detection.
  - Removed redundant nil guard in buildScheduledJobSpec.
  - Standardized WaitGroup usage for schedulerLoop.
* admin: implement deep copy for job parameters and atomic write fixes
  - Implemented deepCopyGenericValue and used it in cloneTrackedJob to prevent shared state.
  - Ensured atomicWriteFile creates parent directories before writing.
* admin: remove unreachable branch in shard classification
  Removed an unreachable 'totalShards <= 0' check in classifyShardID as dataShards and parityShards are already guarded.
* admin: secure UI links and use canonical shard constants
  - Added rel="noopener noreferrer" to external links for security.
  - Replaced magic number 14 with erasure_coding.TotalShardsCount.
  - Used renderEcShardBadge for missing shard list consistency.
* admin: stabilize plugin tests and fix regressions
  - Composed a robust plugin_monitor_test.go to handle asynchronous persistence.
  - Updated all time.Time literals to use timeToPtr helper.
  - Added explicit Shutdown() calls in tests to synchronize with debounced writes.
  - Fixed syntax errors and orphaned struct literals in tests.
* Potential fix for code scanning alert no. 278: Slice memory allocation with excessive size value
  Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* Potential fix for code scanning alert no. 283: Uncontrolled data used in path expression
  Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* admin: finalize refinements for error handling, scheduler, and race fixes
  - Standardized HTTP 500 status codes for store failures in plugin_api.go.
  - Tracked scheduled detection goroutines with sync.WaitGroup for safe shutdown.
  - Fixed race condition in safeSendDetectionComplete by extracting channel under lock.
  - Implemented deep copy for JobActivity details.
  - Used defaultDirPerm constant in atomicWriteFile.
* test(ec): migrate admin dockertest to plugin APIs
* admin/plugin_api: fix RunPluginJobTypeAPI to return 500 for server-side detection/filter errors
* admin/plugin_api: fix ExecutePluginJobAPI to return 500 for job execution failures
* admin/plugin_api: limit parseProtoJSONBody request body to 1MB to prevent unbounded memory usage
* admin/plugin: consolidate regex to package-level validJobTypePattern; add char validation to sanitizeJobID
* admin/plugin: fix racy Shutdown channel close with sync.Once
* admin/plugin: track sendLoop and recv goroutines in WorkerStream with r.wg
* admin/plugin: document writeProtoFiles atomicity: .pb is source of truth, .json is human-readable only
* admin/plugin: extract activityLess helper to deduplicate nil-safe OccurredAt sort comparators
* test/ec: check http.NewRequest errors to prevent nil req panics
* test/ec: replace deprecated ioutil/math/rand, fix stale step comment 5.1→3.1
* plugin(ec): raise default detection and scheduling throughput limits
* topology: include empty disks in volume list and EC capacity fallback
* topology: remove hard 10-task cap for detection planning
* Update ec_volume_details_templ.go
* adjust default
* fix tests

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
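Note on the time.Time-to-pointer refactor: encoding/json's omitempty never omits a zero time.Time value, but it does omit a nil *time.Time, which is what the refactor relies on. A minimal, self-contained illustration (the structs below are illustrative, not the actual admin types; timeToPtr is the helper named in the commit message):

package main

import (
    "encoding/json"
    "fmt"
    "time"
)

type withValue struct {
    // A zero time.Time is not "empty" to encoding/json, so it still marshals.
    CompletedAt time.Time `json:"completed_at,omitempty"`
}

type withPointer struct {
    // A nil pointer is omitted entirely, giving the desired omitempty behavior.
    CompletedAt *time.Time `json:"completed_at,omitempty"`
}

// timeToPtr is the small helper referenced in the commit message.
func timeToPtr(t time.Time) *time.Time { return &t }

func main() {
    a, _ := json.Marshal(withValue{})
    b, _ := json.Marshal(withPointer{})
    c, _ := json.Marshal(withPointer{CompletedAt: timeToPtr(time.Now())})
    fmt.Println(string(a)) // {"completed_at":"0001-01-01T00:00:00Z"}
    fmt.Println(string(b)) // {}
    fmt.Println(string(c)) // {"completed_at":"2024-..."}
}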
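Note on the safe channel sends and graceful shutdown sync item: a minimal sketch of the recover()-based send and WaitGroup-backed Shutdown described in the commit message. Type, field, and helper names here are assumptions, not the actual admin/plugin code:

package plugin

import "sync"

// Plugin tracks its background goroutines so Shutdown can wait for them.
type Plugin struct {
    wg     sync.WaitGroup
    doneCh chan struct{}
}

// Shutdown signals background loops to stop and blocks until every tracked
// goroutine has returned.
func (p *Plugin) Shutdown() {
    close(p.doneCh)
    p.wg.Wait()
}

// safeSendCh sends v on ch and converts the panic raised by sending on a
// closed channel into a false return instead of crashing the goroutine.
func safeSendCh[T any](ch chan<- T, v T) (sent bool) {
    defer func() {
        if recover() != nil {
            sent = false // channel was closed while we were sending
        }
    }()
    ch <- v
    return true
}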
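Note on the atomic file writes item: a sketch of the usual temp-file-plus-rename approach with parent-directory creation, assuming the atomicWriteFile and defaultDirPerm names mentioned in the commit message; the real implementation may differ in details:

package plugin

import (
    "os"
    "path/filepath"
)

const defaultDirPerm = 0o755 // assumed value

func atomicWriteFile(path string, data []byte, perm os.FileMode) error {
    // Create the parent directory first so first-time writes do not fail.
    if err := os.MkdirAll(filepath.Dir(path), defaultDirPerm); err != nil {
        return err
    }
    // Write to a temp file in the same directory, then rename it into place;
    // rename is atomic on POSIX filesystems, so readers never see a torn file.
    tmp, err := os.CreateTemp(filepath.Dir(path), filepath.Base(path)+".tmp-*")
    if err != nil {
        return err
    }
    defer os.Remove(tmp.Name()) // no-op once the rename has succeeded
    if _, err := tmp.Write(data); err != nil {
        tmp.Close()
        return err
    }
    if err := tmp.Chmod(perm); err != nil {
        tmp.Close()
        return err
    }
    if err := tmp.Close(); err != nil {
        return err
    }
    return os.Rename(tmp.Name(), path)
}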
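Note on the 1MB cap for parseProtoJSONBody: the standard-library way to enforce such a limit is http.MaxBytesReader. The helper below is an illustrative sketch of that pattern, not the actual handler code:

package admin

import (
    "io"
    "net/http"
)

// maxPluginBodyBytes caps request bodies at 1MB (illustrative constant name).
const maxPluginBodyBytes = 1 << 20

func readLimitedBody(w http.ResponseWriter, r *http.Request) ([]byte, error) {
    // MaxBytesReader returns an error once the limit is exceeded, so a huge
    // or malicious body cannot exhaust memory before decoding.
    limited := http.MaxBytesReader(w, r.Body, maxPluginBodyBytes)
    defer limited.Close()
    return io.ReadAll(limited)
}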
@@ -1,76 +1,54 @@
package command

import (
    "net/http"
    "os"
    "os/signal"
    "path/filepath"
    "strings"
    "syscall"
    "time"

    "github.com/seaweedfs/seaweedfs/weed/glog"
    "github.com/seaweedfs/seaweedfs/weed/security"
    statsCollect "github.com/seaweedfs/seaweedfs/weed/stats"
    "github.com/seaweedfs/seaweedfs/weed/util"
    "github.com/seaweedfs/seaweedfs/weed/util/grace"
    "github.com/seaweedfs/seaweedfs/weed/util/version"
    "github.com/seaweedfs/seaweedfs/weed/worker"
    "github.com/seaweedfs/seaweedfs/weed/worker/tasks"
    "github.com/seaweedfs/seaweedfs/weed/worker/types"

    // Import task packages to trigger their auto-registration
    _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
    _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
    _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"

    // TODO: Implement additional task packages (add to default capabilities when ready):
    // _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/remote" - for uploading volumes to remote/cloud storage
    // _ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/replication" - for fixing replication issues and maintaining data consistency
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

var cmdWorker = &Command{
    UsageLine: "worker -admin=<admin_server> [-capabilities=<task_types>] [-maxConcurrent=<num>] [-workingDir=<path>] [-metricsPort=<port>] [-debug]",
    Short: "start a maintenance worker to process cluster maintenance tasks",
    Long: `Start a maintenance worker that connects to an admin server to process
maintenance tasks like vacuum, erasure coding, remote upload, and replication fixes.
    UsageLine: "worker -admin=<admin_server> [-id=<worker_id>] [-jobType=vacuum,volume_balance,erasure_coding] [-workingDir=<path>] [-heartbeat=15s] [-reconnect=5s] [-maxDetect=1] [-maxExecute=4] [-metricsPort=<port>] [-metricsIp=<ip>] [-debug]",
    Short: "start a plugin.proto worker process",
    Long: `Start an external plugin worker using weed/pb/plugin.proto over gRPC.

The worker ID and address are automatically generated.
The worker connects to the admin server via gRPC (admin HTTP port + 10000).
This command provides vacuum, volume_balance, and erasure_coding job type
contracts with the plugin stream runtime, including descriptor delivery,
heartbeat/load reporting, detection, and execution.

Behavior:
- Use -jobType to choose one or more plugin job handlers (comma-separated list)
- Use -workingDir to persist plugin.worker.id for stable worker identity across restarts
- Use -metricsPort/-metricsIp to expose /health, /ready, and /metrics

Examples:
weed worker -admin=localhost:23646
weed worker -admin=admin.example.com:23646
weed worker -admin=localhost:23646 -capabilities=vacuum,replication
weed worker -admin=localhost:23646 -maxConcurrent=4
weed worker -admin=localhost:23646 -workingDir=/tmp/worker
weed worker -admin=localhost:23646 -metricsPort=9327
weed worker -admin=localhost:23646 -debug -debug.port=6060
weed worker -admin=localhost:23646 -jobType=volume_balance
weed worker -admin=localhost:23646 -jobType=vacuum,volume_balance
weed worker -admin=localhost:23646 -jobType=erasure_coding
weed worker -admin=admin.example.com:23646 -id=plugin-vacuum-a -heartbeat=10s
weed worker -admin=localhost:23646 -workingDir=/var/lib/seaweedfs-plugin
weed worker -admin=localhost:23646 -metricsPort=9327 -metricsIp=0.0.0.0
`,
}

var (
    workerAdminServer = cmdWorker.Flag.String("admin", "localhost:23646", "admin server address")
    workerCapabilities = cmdWorker.Flag.String("capabilities", "vacuum,ec,balance", "comma-separated list of task types this worker can handle")
    workerMaxConcurrent = cmdWorker.Flag.Int("maxConcurrent", 2, "maximum number of concurrent tasks")
    workerHeartbeatInterval = cmdWorker.Flag.Duration("heartbeat", 30*time.Second, "heartbeat interval")
    workerTaskRequestInterval = cmdWorker.Flag.Duration("taskInterval", 5*time.Second, "task request interval")
    workerWorkingDir = cmdWorker.Flag.String("workingDir", "", "working directory for the worker")
    workerMetricsPort = cmdWorker.Flag.Int("metricsPort", 0, "Prometheus metrics listen port")
    workerMetricsIp = cmdWorker.Flag.String("metricsIp", "0.0.0.0", "Prometheus metrics listen IP")
    workerDebug = cmdWorker.Flag.Bool("debug", false, "serves runtime profiling data via pprof on the port specified by -debug.port")
    workerDebugPort = cmdWorker.Flag.Int("debug.port", 6060, "http port for debugging")

    workerServerHeader = "SeaweedFS Worker " + version.VERSION
    workerAdminServer = cmdWorker.Flag.String("admin", "localhost:23646", "admin server address")
    workerID = cmdWorker.Flag.String("id", "", "worker ID (auto-generated when empty)")
    workerWorkingDir = cmdWorker.Flag.String("workingDir", "", "working directory for persistent worker state")
    workerJobType = cmdWorker.Flag.String("jobType", defaultPluginWorkerJobTypes, "job types to serve (comma-separated list)")
    workerHeartbeat = cmdWorker.Flag.Duration("heartbeat", 15*time.Second, "heartbeat interval")
    workerReconnect = cmdWorker.Flag.Duration("reconnect", 5*time.Second, "reconnect delay")
    workerMaxDetect = cmdWorker.Flag.Int("maxDetect", 1, "max concurrent detection requests")
    workerMaxExecute = cmdWorker.Flag.Int("maxExecute", 4, "max concurrent execute requests")
    workerAddress = cmdWorker.Flag.String("address", "", "worker address advertised to admin")
    workerMetricsPort = cmdWorker.Flag.Int("metricsPort", 0, "Prometheus metrics listen port")
    workerMetricsIp = cmdWorker.Flag.String("metricsIp", "0.0.0.0", "Prometheus metrics listen IP")
    workerDebug = cmdWorker.Flag.Bool("debug", false, "serves runtime profiling data via pprof on the port specified by -debug.port")
    workerDebugPort = cmdWorker.Flag.Int("debug.port", 6060, "http port for debugging")
)

func init() {
    cmdWorker.Run = runWorker

    // Set default capabilities from registered task types
    // This happens after package imports have triggered auto-registration
    tasks.SetDefaultCapabilitiesFromRegistry()
}

func runWorker(cmd *Command, args []string) bool {
@@ -78,218 +56,17 @@ func runWorker(cmd *Command, args []string) bool {
        grace.StartDebugServer(*workerDebugPort)
    }

    util.LoadConfiguration("security", false)

    glog.Infof("Starting maintenance worker")
    glog.Infof("Admin server: %s", *workerAdminServer)
    glog.Infof("Capabilities: %s", *workerCapabilities)

    // Parse capabilities
    capabilities := parseCapabilities(*workerCapabilities)
    if len(capabilities) == 0 {
        glog.Fatalf("No valid capabilities specified")
        return false
    }

    // Set working directory and create task-specific subdirectories
    var baseWorkingDir string
    if *workerWorkingDir != "" {
        glog.Infof("Setting working directory to: %s", *workerWorkingDir)
        if err := os.Chdir(*workerWorkingDir); err != nil {
            glog.Fatalf("Failed to change working directory: %v", err)
            return false
        }
        wd, err := os.Getwd()
        if err != nil {
            glog.Fatalf("Failed to get working directory: %v", err)
            return false
        }
        baseWorkingDir = wd
        glog.Infof("Current working directory: %s", baseWorkingDir)
    } else {
        // Use default working directory when not specified
        wd, err := os.Getwd()
        if err != nil {
            glog.Fatalf("Failed to get current working directory: %v", err)
            return false
        }
        baseWorkingDir = wd
        glog.Infof("Using current working directory: %s", baseWorkingDir)
    }

    // Create task-specific subdirectories
    for _, capability := range capabilities {
        taskDir := filepath.Join(baseWorkingDir, string(capability))
        if err := os.MkdirAll(taskDir, 0755); err != nil {
            glog.Fatalf("Failed to create task directory %s: %v", taskDir, err)
            return false
        }
        glog.Infof("Created task directory: %s", taskDir)
    }

    // Create gRPC dial option using TLS configuration
    grpcDialOption := security.LoadClientTLS(util.GetViper(), "grpc.worker")

    // Create worker configuration
    config := &types.WorkerConfig{
        AdminServer:         *workerAdminServer,
        Capabilities:        capabilities,
        MaxConcurrent:       *workerMaxConcurrent,
        HeartbeatInterval:   *workerHeartbeatInterval,
        TaskRequestInterval: *workerTaskRequestInterval,
        BaseWorkingDir:      baseWorkingDir,
        GrpcDialOption:      grpcDialOption,
    }

    // Create worker instance
    workerInstance, err := worker.NewWorker(config)
    if err != nil {
        glog.Fatalf("Failed to create worker: %v", err)
        return false
    }
    adminClient, err := worker.CreateAdminClient(*workerAdminServer, workerInstance.ID(), grpcDialOption)
    if err != nil {
        glog.Fatalf("Failed to create admin client: %v", err)
        return false
    }

    // Set admin client
    workerInstance.SetAdminClient(adminClient)

    // Set working directory
    if *workerWorkingDir != "" {
        glog.Infof("Setting working directory to: %s", *workerWorkingDir)
        if err := os.Chdir(*workerWorkingDir); err != nil {
            glog.Fatalf("Failed to change working directory: %v", err)
            return false
        }
        wd, err := os.Getwd()
        if err != nil {
            glog.Fatalf("Failed to get working directory: %v", err)
            return false
        }
        glog.Infof("Current working directory: %s", wd)
    }

    // Start metrics HTTP server if port is specified
    if *workerMetricsPort > 0 {
        go startWorkerMetricsServer(*workerMetricsIp, *workerMetricsPort, workerInstance)
    }

    // Start the worker
    err = workerInstance.Start()
    if err != nil {
        glog.Errorf("Failed to start worker: %v", err)
        return false
    }

    // Set up signal handling
    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)

    glog.Infof("Maintenance worker %s started successfully", workerInstance.ID())
    glog.Infof("Press Ctrl+C to stop the worker")

    // Wait for shutdown signal
    <-sigChan
    glog.Infof("Shutdown signal received, stopping worker...")

    // Gracefully stop the worker
    err = workerInstance.Stop()
    if err != nil {
        glog.Errorf("Error stopping worker: %v", err)
    }
    glog.Infof("Worker stopped")

    return true
}

// parseCapabilities converts comma-separated capability string to task types
func parseCapabilities(capabilityStr string) []types.TaskType {
    if capabilityStr == "" {
        return nil
    }

    capabilityMap := map[string]types.TaskType{}

    // Populate capabilityMap with registered task types
    typesRegistry := tasks.GetGlobalTypesRegistry()
    for taskType := range typesRegistry.GetAllDetectors() {
        // Use the task type string directly as the key
        capabilityMap[strings.ToLower(string(taskType))] = taskType
    }

    // Add common aliases for convenience
    if taskType, exists := capabilityMap["erasure_coding"]; exists {
        capabilityMap["ec"] = taskType
    }
    if taskType, exists := capabilityMap["remote_upload"]; exists {
        capabilityMap["remote"] = taskType
    }
    if taskType, exists := capabilityMap["fix_replication"]; exists {
        capabilityMap["replication"] = taskType
    }

    var capabilities []types.TaskType
    parts := strings.Split(capabilityStr, ",")

    for _, part := range parts {
        part = strings.TrimSpace(part)
        if taskType, exists := capabilityMap[part]; exists {
            capabilities = append(capabilities, taskType)
        } else {
            glog.Warningf("Unknown capability: %s", part)
        }
    }

    return capabilities
}

// Legacy compatibility types for backward compatibility
// These will be deprecated in future versions

// WorkerStatus represents the current status of a worker (deprecated)
type WorkerStatus struct {
    WorkerID       string           `json:"worker_id"`
    Address        string           `json:"address"`
    Status         string           `json:"status"`
    Capabilities   []types.TaskType `json:"capabilities"`
    MaxConcurrent  int              `json:"max_concurrent"`
    CurrentLoad    int              `json:"current_load"`
    LastHeartbeat  time.Time        `json:"last_heartbeat"`
    CurrentTasks   []types.Task     `json:"current_tasks"`
    Uptime         time.Duration    `json:"uptime"`
    TasksCompleted int              `json:"tasks_completed"`
    TasksFailed    int              `json:"tasks_failed"`
}

func workerHealthHandler(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Server", workerServerHeader)
    w.WriteHeader(http.StatusOK)
}

func workerReadyHandler(workerInstance *worker.Worker) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Server", workerServerHeader)

        admin := workerInstance.GetAdmin()
        if admin == nil || !admin.IsConnected() {
            w.WriteHeader(http.StatusServiceUnavailable)
            return
        }

        w.WriteHeader(http.StatusOK)
    }
}

func startWorkerMetricsServer(ip string, port int, w *worker.Worker) {
    mux := http.NewServeMux()
    mux.HandleFunc("/health", workerHealthHandler)
    mux.HandleFunc("/ready", workerReadyHandler(w))
    mux.Handle("/metrics", promhttp.HandlerFor(statsCollect.Gather, promhttp.HandlerOpts{}))

    glog.V(0).Infof("Starting worker metrics server at %s", statsCollect.JoinHostPort(ip, port))
    if err := http.ListenAndServe(statsCollect.JoinHostPort(ip, port), mux); err != nil {
        glog.Errorf("Worker metrics server failed to start: %v", err)
    }
    return runPluginWorkerWithOptions(pluginWorkerRunOptions{
        AdminServer: *workerAdminServer,
        WorkerID:    *workerID,
        WorkingDir:  *workerWorkingDir,
        JobTypes:    *workerJobType,
        Heartbeat:   *workerHeartbeat,
        Reconnect:   *workerReconnect,
        MaxDetect:   *workerMaxDetect,
        MaxExecute:  *workerMaxExecute,
        Address:     *workerAddress,
        MetricsPort: *workerMetricsPort,
        MetricsIP:   *workerMetricsIp,
    })
}