Refactor plugin system and migrate worker runtime (#8369)
* admin: add plugin runtime UI page and route wiring
* pb: add plugin gRPC contract and generated bindings
* admin/plugin: implement worker registry, runtime, monitoring, and config store
* admin/dash: wire plugin runtime and expose plugin workflow APIs
* command: add flags to enable plugin runtime
* admin: rename remaining plugin v2 wording to plugin
* admin/plugin: add detectable job type registry helper
* admin/plugin: add scheduled detection and dispatch orchestration
* admin/plugin: prefetch job type descriptors when workers connect
* admin/plugin: add known job type discovery API and UI
* admin/plugin: refresh design doc to match current implementation
* admin/plugin: enforce per-worker scheduler concurrency limits
* admin/plugin: use descriptor runtime defaults for scheduler policy
* admin/ui: auto-load first known plugin job type on page open
* admin/plugin: bootstrap persisted config from descriptor defaults
* admin/plugin: dedupe scheduled proposals by dedupe key
* admin/ui: add job type and state filters for plugin monitoring
* admin/ui: add per-job-type plugin activity summary
* admin/plugin: split descriptor read API from schema refresh
* admin/ui: keep plugin summary metrics global while tables are filtered
* admin/plugin: retry executor reservation before timing out
* admin/plugin: expose scheduler states for monitoring
* admin/ui: show per-job-type scheduler states in plugin monitor
* pb/plugin: rename protobuf package to plugin
* admin/plugin: rename pluginRuntime wiring to plugin
* admin/plugin: remove runtime naming from plugin APIs and UI
* admin/plugin: rename runtime files to plugin naming
* admin/plugin: persist jobs and activities for monitor recovery
* admin/plugin: lease one detector worker per job type
* admin/ui: show worker load from plugin heartbeats
* admin/plugin: skip stale workers for detector and executor picks
* plugin/worker: add plugin worker command and stream runtime scaffold
* plugin/worker: implement vacuum detect and execute handlers
* admin/plugin: document external vacuum plugin worker starter
* command: update plugin.worker help to reflect implemented flow
* command/admin: drop legacy Plugin V2 label
* plugin/worker: validate vacuum job type and respect min interval
* plugin/worker: test no-op detect when min interval not elapsed
* command/admin: document plugin.worker external process
* plugin/worker: advertise configured concurrency in hello
* command/plugin.worker: add jobType handler selection
* command/plugin.worker: test handler selection by job type
* command/plugin.worker: persist worker id in workingDir
* admin/plugin: document plugin.worker jobType and workingDir flags
* plugin/worker: support cancel request for in-flight work
* plugin/worker: test cancel request acknowledgements
* command/plugin.worker: document workingDir and jobType behavior
* plugin/worker: emit executor activity events for monitor
* plugin/worker: test executor activity builder
* admin/plugin: send last successful run in detection request
* admin/plugin: send cancel request when detect or execute context ends
* admin/plugin: document worker cancel request responsibility
* admin/handlers: expose plugin scheduler states API in no-auth mode
* admin/handlers: test plugin scheduler states route registration
* admin/plugin: keep worker id on worker-generated activity records
* admin/plugin: test worker id propagation in monitor activities
* admin/dash: always initialize plugin service
* command/admin: remove plugin enable flags and default to enabled
* admin/dash: drop pluginEnabled constructor parameter
* admin/plugin UI: stop checking plugin enabled state
* admin/plugin: remove docs for plugin enable flags
* admin/dash: remove unused plugin enabled check method
* admin/dash: fallback to in-memory plugin init when dataDir fails
* admin/plugin API: expose worker gRPC port in status
* command/plugin.worker: resolve admin gRPC port via plugin status
* split plugin UI into overview/configuration/monitoring pages
* Update layout_templ.go
* add volume_balance plugin worker handler
* wire plugin.worker CLI for volume_balance job type
* add erasure_coding plugin worker handler
* wire plugin.worker CLI for erasure_coding job type
* support multi-job handlers in plugin worker runtime
* allow plugin.worker jobType as comma-separated list
* admin/plugin UI: rename to Workers and simplify config view
* plugin worker: queue detection requests instead of capacity reject
* Update plugin_worker.go
* plugin volume_balance: remove force_move/timeout from worker config UI
* plugin erasure_coding: enforce local working dir and cleanup
* admin/plugin UI: rename admin settings to job scheduling
* admin/plugin UI: persist and robustly render detection results
* admin/plugin: record and return detection trace metadata
* admin/plugin UI: show detection process and decision trace
* plugin: surface detector decision trace as activities
* mini: start a plugin worker by default
* admin/plugin UI: split monitoring into detection and execution tabs
* plugin worker: emit detection decision trace for EC and balance
* admin workers UI: split monitoring into detection and execution pages
* plugin scheduler: skip proposals for active assigned/running jobs
* admin workers UI: add job queue tab
* plugin worker: add dummy stress detector and executor job type
* admin workers UI: reorder tabs to detection queue execution
* admin workers UI: regenerate plugin template
* plugin defaults: include dummy stress and add stress tests
* plugin dummy stress: rotate detection selections across runs
* plugin scheduler: remove cross-run proposal dedupe
* plugin queue: track pending scheduled jobs
* plugin scheduler: wait for executor capacity before dispatch
* plugin scheduler: skip detection when waiting backlog is high
* plugin: add disk-backed job detail API and persistence
* admin ui: show plugin job detail modal from job id links
* plugin: generate unique job ids instead of reusing proposal ids
* plugin worker: emit heartbeats on work state changes
* plugin registry: round-robin tied executor and detector picks
* add temporary EC overnight stress runner
* plugin job details: persist and render EC execution plans
* ec volume details: color data and parity shard badges
* shard labels: keep parity ids numeric and color-only distinction
* admin: remove legacy maintenance UI routes and templates
* admin: remove dead maintenance endpoint helpers
* Update layout_templ.go
* remove dummy_stress worker and command support
* refactor plugin UI to job-type top tabs and sub-tabs
* migrate weed worker command to plugin runtime
* remove plugin.worker command and keep worker runtime with metrics
* update helm worker args for jobType and execution flags
* set plugin scheduling defaults to global 16 and per-worker 4
* stress: fix RPC context reuse and remove redundant variables in ec_stress_runner
* admin/plugin: fix lifecycle races, safe channel operations, and terminal state constants
* admin/dash: randomize job IDs and fix priority zero-value overwrite in plugin API
* admin/handlers: implement buffered rendering to prevent response corruption
* admin/plugin: implement debounced persistence flusher and optimize BuildJobDetail memory lookups
* admin/plugin: fix priority overwrite and implement bounded wait in scheduler reserve
* admin/plugin: implement atomic file writes and fix run record side effects
* admin/plugin: use P prefix for parity shard labels in execution plans
* admin/plugin: enable parallel execution for cancellation tests
* admin: refactor time.Time fields to pointers for better JSON omitempty support
* admin/plugin: implement pointer-safe time assignments and comparisons in plugin core
* admin/plugin: fix time assignment and sorting logic in plugin monitor after pointer refactor
* admin/plugin: update scheduler activity tracking to use time pointers
* admin/plugin: fix time-based run history trimming after pointer refactor
* admin/dash: fix JobSpec struct literal in plugin API after pointer refactor
* admin/view: add D/P prefixes to EC shard badges for UI consistency
* admin/plugin: use lifecycle-aware context for schema prefetching
* Update ec_volume_details_templ.go
* admin/stress: fix proposal sorting and log volume cleanup errors
* stress: refine ec stress runner with math/rand and collection name
  - Added Collection field to VolumeEcShardsDeleteRequest for correct filename construction.
  - Replaced crypto/rand with seeded math/rand PRNG for bulk payloads.
  - Added documentation for EcMinAge zero-value behavior.
  - Added logging for ignored errors in volume/shard deletion.
* admin: return internal server error for plugin store failures
  Changed the error status code from 400 Bad Request to 500 Internal Server Error for failures in GetPluginJobDetail to correctly reflect server-side errors.
* admin: implement safe channel sends and graceful shutdown sync
  - Added sync.WaitGroup to Plugin struct to manage background goroutines.
  - Implemented safeSendCh helper using recover() to prevent panics on closed channels.
  - Ensured Shutdown() waits for all background operations to complete.
* admin: robustify plugin monitor with nil-safe time and record init
  - Standardized nil-safe assignment for *time.Time pointers (CreatedAt, UpdatedAt, CompletedAt).
  - Ensured persistJobDetailSnapshot initializes new records correctly if they don't exist on disk.
  - Fixed debounced persistence to trigger an immediate write on job completion.
* admin: improve scheduler shutdown behavior and logic guards
  - Replaced brittle error string matching with explicit r.shutdownCh selection for shutdown detection.
  - Removed redundant nil guard in buildScheduledJobSpec.
  - Standardized WaitGroup usage for schedulerLoop.
* admin: implement deep copy for job parameters and atomic write fixes
  - Implemented deepCopyGenericValue and used it in cloneTrackedJob to prevent shared state.
  - Ensured atomicWriteFile creates parent directories before writing.
* admin: remove unreachable branch in shard classification
  Removed an unreachable 'totalShards <= 0' check in classifyShardID, as dataShards and parityShards are already guarded.
* admin: secure UI links and use canonical shard constants
  - Added rel="noopener noreferrer" to external links for security.
  - Replaced the magic number 14 with erasure_coding.TotalShardsCount.
  - Used renderEcShardBadge for missing shard list consistency.
* admin: stabilize plugin tests and fix regressions
  - Composed a robust plugin_monitor_test.go to handle asynchronous persistence.
  - Updated all time.Time literals to use the timeToPtr helper.
  - Added explicit Shutdown() calls in tests to synchronize with debounced writes.
  - Fixed syntax errors and orphaned struct literals in tests.
* Potential fix for code scanning alert no. 278: Slice memory allocation with excessive size value
  Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* Potential fix for code scanning alert no. 283: Uncontrolled data used in path expression
  Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* admin: finalize refinements for error handling, scheduler, and race fixes
  - Standardized HTTP 500 status codes for store failures in plugin_api.go.
  - Tracked scheduled detection goroutines with sync.WaitGroup for safe shutdown.
  - Fixed race condition in safeSendDetectionComplete by extracting the channel under lock.
  - Implemented deep copy for JobActivity details.
  - Used the defaultDirPerm constant in atomicWriteFile.
* test(ec): migrate admin dockertest to plugin APIs
* admin/plugin_api: fix RunPluginJobTypeAPI to return 500 for server-side detection/filter errors
* admin/plugin_api: fix ExecutePluginJobAPI to return 500 for job execution failures
* admin/plugin_api: limit parseProtoJSONBody request body to 1MB to prevent unbounded memory usage
* admin/plugin: consolidate regex to package-level validJobTypePattern; add char validation to sanitizeJobID
* admin/plugin: fix racy Shutdown channel close with sync.Once
* admin/plugin: track sendLoop and recv goroutines in WorkerStream with r.wg
* admin/plugin: document writeProtoFiles atomicity: .pb is the source of truth, .json is human-readable only
* test/ec: check http.NewRequest errors to prevent nil req panics
* test/ec: replace deprecated ioutil/math/rand, fix stale step comment 5.1→3.1
* plugin(ec): raise default detection and scheduling throughput limits
* topology: include empty disks in volume list and EC capacity fallback
* topology: remove hard 10-task cap for detection planning
* Update ec_volume_details_templ.go
* adjust default
* fix tests

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
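The commit list above mentions a safeSendCh helper that uses recover() to avoid panics when sending on an already-closed channel during shutdown. A minimal sketch of that pattern follows; the name safeSend and its generic signature are illustrative assumptions, not the actual SeaweedFS implementation:

```go
package main

import "fmt"

// safeSend attempts to send v on ch and reports whether the send
// succeeded. Sending on a closed channel panics in Go, so the
// deferred recover() converts that panic into a false return.
// (Illustrative sketch only.)
func safeSend[T any](ch chan T, v T) (ok bool) {
	defer func() {
		if recover() != nil {
			ok = false
		}
	}()
	ch <- v // panics if ch is closed; blocks if unbuffered with no receiver
	return true
}

func main() {
	ch := make(chan int, 1)
	fmt.Println(safeSend(ch, 1)) // buffered send: succeeds

	closed := make(chan int)
	close(closed)
	fmt.Println(safeSend(closed, 2)) // closed channel: panic recovered
}
```

Note the trade-off: recover-based sends mask the close/send race rather than eliminating it, which is why the commits also add sync.WaitGroup tracking and a sync.Once around the channel close.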
weed/admin/dash/plugin_api.go (new file, 735 lines)
@@ -0,0 +1,735 @@
|
||||
package dash
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/rand"
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/seaweedfs/seaweedfs/weed/admin/plugin"
|
||||
"github.com/seaweedfs/seaweedfs/weed/glog"
|
||||
"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
|
||||
"google.golang.org/protobuf/encoding/protojson"
|
||||
"google.golang.org/protobuf/proto"
|
||||
"google.golang.org/protobuf/types/known/timestamppb"
|
||||
)
|
||||
|
||||
const (
|
||||
defaultPluginDetectionTimeout = 45 * time.Second
|
||||
defaultPluginExecutionTimeout = 90 * time.Second
|
||||
maxPluginDetectionTimeout = 5 * time.Minute
|
||||
maxPluginExecutionTimeout = 10 * time.Minute
|
||||
defaultPluginRunTimeout = 5 * time.Minute
|
||||
maxPluginRunTimeout = 30 * time.Minute
|
||||
)
|
||||
|
||||
// GetPluginStatusAPI returns plugin status.
|
||||
func (s *AdminServer) GetPluginStatusAPI(c *gin.Context) {
|
||||
plugin := s.GetPlugin()
|
||||
if plugin == nil {
|
||||
c.JSON(http.StatusOK, gin.H{
|
||||
"enabled": false,
|
||||
"worker_grpc_port": s.GetWorkerGrpcPort(),
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, gin.H{
|
||||
"enabled": true,
|
||||
"configured": plugin.IsConfigured(),
|
||||
"base_dir": plugin.BaseDir(),
|
||||
"worker_count": len(plugin.ListWorkers()),
|
||||
"worker_grpc_port": s.GetWorkerGrpcPort(),
|
||||
})
|
||||
}
|
||||
|
||||
// GetPluginWorkersAPI returns currently connected plugin workers.
|
||||
func (s *AdminServer) GetPluginWorkersAPI(c *gin.Context) {
|
||||
workers := s.GetPluginWorkers()
|
||||
if workers == nil {
|
||||
c.JSON(http.StatusOK, []interface{}{})
|
||||
return
|
||||
}
|
||||
c.JSON(http.StatusOK, workers)
|
||||
}
|
||||
|
||||
// GetPluginJobTypesAPI returns known plugin job types from workers and persisted data.
|
||||
func (s *AdminServer) GetPluginJobTypesAPI(c *gin.Context) {
|
||||
jobTypes, err := s.ListPluginJobTypes()
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
if jobTypes == nil {
|
||||
c.JSON(http.StatusOK, []interface{}{})
|
||||
return
|
||||
}
|
||||
c.JSON(http.StatusOK, jobTypes)
|
||||
}
|
||||
|
||||
// GetPluginJobsAPI returns tracked jobs for monitoring.
|
||||
func (s *AdminServer) GetPluginJobsAPI(c *gin.Context) {
|
||||
jobType := strings.TrimSpace(c.Query("job_type"))
|
||||
state := strings.TrimSpace(c.Query("state"))
|
||||
limit := parsePositiveInt(c.Query("limit"), 200)
|
||||
jobs := s.ListPluginJobs(jobType, state, limit)
|
||||
if jobs == nil {
|
||||
c.JSON(http.StatusOK, []interface{}{})
|
||||
return
|
||||
}
|
||||
c.JSON(http.StatusOK, jobs)
|
||||
}
|
||||
|
||||
// GetPluginJobAPI returns one tracked job.
|
||||
func (s *AdminServer) GetPluginJobAPI(c *gin.Context) {
|
||||
jobID := strings.TrimSpace(c.Param("jobId"))
|
||||
if jobID == "" {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "jobId is required"})
|
||||
return
|
||||
}
|
||||
|
||||
job, found := s.GetPluginJob(jobID)
|
||||
if !found {
|
||||
c.JSON(http.StatusNotFound, gin.H{"error": "job not found"})
|
||||
return
|
||||
}
|
||||
c.JSON(http.StatusOK, job)
|
||||
}
|
||||
|
||||
// GetPluginJobDetailAPI returns detailed information for one tracked plugin job.
|
||||
func (s *AdminServer) GetPluginJobDetailAPI(c *gin.Context) {
|
||||
jobID := strings.TrimSpace(c.Param("jobId"))
|
||||
if jobID == "" {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "jobId is required"})
|
||||
return
|
||||
}
|
||||
|
||||
activityLimit := parsePositiveInt(c.Query("activity_limit"), 500)
|
||||
relatedLimit := parsePositiveInt(c.Query("related_limit"), 20)
|
||||
|
||||
detail, found, err := s.GetPluginJobDetail(jobID, activityLimit, relatedLimit)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
if !found || detail == nil {
|
||||
c.JSON(http.StatusNotFound, gin.H{"error": "job detail not found"})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, detail)
|
||||
}
|
||||
|
||||
// GetPluginActivitiesAPI returns recent plugin activities.
|
||||
func (s *AdminServer) GetPluginActivitiesAPI(c *gin.Context) {
|
||||
jobType := strings.TrimSpace(c.Query("job_type"))
|
||||
limit := parsePositiveInt(c.Query("limit"), 500)
|
||||
activities := s.ListPluginActivities(jobType, limit)
|
||||
if activities == nil {
|
||||
c.JSON(http.StatusOK, []interface{}{})
|
||||
return
|
||||
}
|
||||
c.JSON(http.StatusOK, activities)
|
||||
}
|
||||
|
||||
// GetPluginSchedulerStatesAPI returns per-job-type scheduler status for monitoring.
|
||||
func (s *AdminServer) GetPluginSchedulerStatesAPI(c *gin.Context) {
|
||||
jobTypeFilter := strings.TrimSpace(c.Query("job_type"))
|
||||
|
||||
states, err := s.ListPluginSchedulerStates()
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
if jobTypeFilter != "" {
|
||||
filtered := make([]interface{}, 0, len(states))
|
||||
for _, state := range states {
|
||||
if state.JobType == jobTypeFilter {
|
||||
filtered = append(filtered, state)
|
||||
}
|
||||
}
|
||||
c.JSON(http.StatusOK, filtered)
|
||||
return
|
||||
}
|
||||
|
||||
if states == nil {
|
||||
c.JSON(http.StatusOK, []interface{}{})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, states)
|
||||
}
|
||||
|
||||
// RequestPluginJobTypeSchemaAPI asks a worker for one job type schema.
|
||||
func (s *AdminServer) RequestPluginJobTypeSchemaAPI(c *gin.Context) {
|
||||
jobType := strings.TrimSpace(c.Param("jobType"))
|
||||
if jobType == "" {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "jobType is required"})
|
||||
return
|
||||
}
|
||||
|
||||
forceRefresh := c.DefaultQuery("force_refresh", "false") == "true"
|
||||
|
||||
ctx, cancel := context.WithTimeout(c.Request.Context(), defaultPluginDetectionTimeout)
|
||||
defer cancel()
|
||||
descriptor, err := s.RequestPluginJobTypeDescriptor(ctx, jobType, forceRefresh)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
renderProtoJSON(c, http.StatusOK, descriptor)
|
||||
}
|
||||
|
||||
// GetPluginJobTypeDescriptorAPI returns persisted descriptor for a job type.
|
||||
func (s *AdminServer) GetPluginJobTypeDescriptorAPI(c *gin.Context) {
|
||||
jobType := strings.TrimSpace(c.Param("jobType"))
|
||||
if jobType == "" {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "jobType is required"})
|
||||
return
|
||||
}
|
||||
|
||||
descriptor, err := s.LoadPluginJobTypeDescriptor(jobType)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
if descriptor == nil {
|
||||
c.JSON(http.StatusNotFound, gin.H{"error": "descriptor not found"})
|
||||
return
|
||||
}
|
||||
|
||||
renderProtoJSON(c, http.StatusOK, descriptor)
|
||||
}
|
||||
|
||||
// GetPluginJobTypeConfigAPI loads persisted config for a job type.
|
||||
func (s *AdminServer) GetPluginJobTypeConfigAPI(c *gin.Context) {
|
||||
jobType := strings.TrimSpace(c.Param("jobType"))
|
||||
if jobType == "" {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "jobType is required"})
|
||||
return
|
||||
}
|
||||
|
||||
config, err := s.LoadPluginJobTypeConfig(jobType)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
if config == nil {
|
||||
config = &plugin_pb.PersistedJobTypeConfig{
|
||||
JobType: jobType,
|
||||
AdminConfigValues: map[string]*plugin_pb.ConfigValue{},
|
||||
WorkerConfigValues: map[string]*plugin_pb.ConfigValue{},
|
||||
AdminRuntime: &plugin_pb.AdminRuntimeConfig{},
|
||||
}
|
||||
}
|
||||
|
||||
renderProtoJSON(c, http.StatusOK, config)
|
||||
}
|
||||
|
||||
// UpdatePluginJobTypeConfigAPI stores persisted config for a job type.
|
||||
func (s *AdminServer) UpdatePluginJobTypeConfigAPI(c *gin.Context) {
|
||||
jobType := strings.TrimSpace(c.Param("jobType"))
|
||||
if jobType == "" {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "jobType is required"})
|
||||
return
|
||||
}
|
||||
|
||||
config := &plugin_pb.PersistedJobTypeConfig{}
|
||||
if err := parseProtoJSONBody(c, config); err != nil {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
config.JobType = jobType
|
||||
if config.UpdatedAt == nil {
|
||||
config.UpdatedAt = timestamppb.Now()
|
||||
}
|
||||
if config.AdminRuntime == nil {
|
||||
config.AdminRuntime = &plugin_pb.AdminRuntimeConfig{}
|
||||
}
|
||||
if config.AdminConfigValues == nil {
|
||||
config.AdminConfigValues = map[string]*plugin_pb.ConfigValue{}
|
||||
}
|
||||
if config.WorkerConfigValues == nil {
|
||||
config.WorkerConfigValues = map[string]*plugin_pb.ConfigValue{}
|
||||
}
|
||||
|
||||
username := c.GetString("username")
|
||||
if username == "" {
|
||||
username = "admin"
|
||||
}
|
||||
config.UpdatedBy = username
|
||||
|
||||
if err := s.SavePluginJobTypeConfig(config); err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
renderProtoJSON(c, http.StatusOK, config)
|
||||
}
|
||||
|
||||
// GetPluginRunHistoryAPI returns bounded run history for a job type.
|
||||
func (s *AdminServer) GetPluginRunHistoryAPI(c *gin.Context) {
|
||||
jobType := strings.TrimSpace(c.Param("jobType"))
|
||||
if jobType == "" {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "jobType is required"})
|
||||
return
|
||||
}
|
||||
|
||||
history, err := s.GetPluginRunHistory(jobType)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
if history == nil {
|
||||
c.JSON(http.StatusOK, gin.H{
|
||||
"job_type": jobType,
|
||||
"successful_runs": []interface{}{},
|
||||
"error_runs": []interface{}{},
|
||||
"last_updated_time": nil,
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, history)
|
||||
}
|
||||
|
||||
// TriggerPluginDetectionAPI runs one detector for this job type and returns proposals.
|
||||
func (s *AdminServer) TriggerPluginDetectionAPI(c *gin.Context) {
|
||||
jobType := strings.TrimSpace(c.Param("jobType"))
|
||||
if jobType == "" {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "jobType is required"})
|
||||
return
|
||||
}
|
||||
|
||||
var req struct {
|
||||
ClusterContext json.RawMessage `json:"cluster_context"`
|
||||
MaxResults int32 `json:"max_results"`
|
||||
TimeoutSeconds int `json:"timeout_seconds"`
|
||||
}
|
||||
|
||||
if err := c.ShouldBindJSON(&req); err != nil && err != io.EOF {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request body: " + err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
clusterContext, err := s.parseOrBuildClusterContext(req.ClusterContext)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
timeout := normalizeTimeout(req.TimeoutSeconds, defaultPluginDetectionTimeout, maxPluginDetectionTimeout)
|
||||
ctx, cancel := context.WithTimeout(c.Request.Context(), timeout)
|
||||
defer cancel()
|
||||
|
||||
report, err := s.RunPluginDetectionWithReport(ctx, jobType, clusterContext, req.MaxResults)
|
||||
proposals := make([]*plugin_pb.JobProposal, 0)
|
||||
requestID := ""
|
||||
detectorWorkerID := ""
|
||||
totalProposals := int32(0)
|
||||
if report != nil {
|
||||
proposals = report.Proposals
|
||||
requestID = report.RequestID
|
||||
detectorWorkerID = report.WorkerID
|
||||
if report.Complete != nil {
|
||||
totalProposals = report.Complete.TotalProposals
|
||||
}
|
||||
}
|
||||
|
||||
proposalPayloads := make([]map[string]interface{}, 0, len(proposals))
|
||||
for _, proposal := range proposals {
|
||||
payload, marshalErr := protoMessageToMap(proposal)
|
||||
if marshalErr != nil {
|
||||
glog.Warningf("failed to marshal proposal for jobType=%s: %v", jobType, marshalErr)
|
||||
continue
|
||||
}
|
||||
proposalPayloads = append(proposalPayloads, payload)
|
||||
}
|
||||
|
||||
sort.Slice(proposalPayloads, func(i, j int) bool {
|
||||
iPriorityStr, _ := proposalPayloads[i]["priority"].(string)
|
||||
jPriorityStr, _ := proposalPayloads[j]["priority"].(string)
|
||||
|
||||
iPriority := plugin_pb.JobPriority_value[iPriorityStr]
|
||||
jPriority := plugin_pb.JobPriority_value[jPriorityStr]
|
||||
|
||||
if iPriority != jPriority {
|
||||
return iPriority > jPriority
|
||||
}
|
||||
iID, _ := proposalPayloads[i]["proposal_id"].(string)
|
||||
jID, _ := proposalPayloads[j]["proposal_id"].(string)
|
||||
return iID < jID
|
||||
})
|
||||
|
||||
activities := s.ListPluginActivities(jobType, 500)
|
||||
filteredActivities := make([]interface{}, 0, len(activities))
|
||||
if requestID != "" {
|
||||
for i := len(activities) - 1; i >= 0; i-- {
|
||||
activity := activities[i]
|
||||
if activity.RequestID != requestID {
|
||||
continue
|
||||
}
|
||||
filteredActivities = append(filteredActivities, activity)
|
||||
}
|
||||
}
|
||||
|
||||
response := gin.H{
|
||||
"job_type": jobType,
|
||||
"request_id": requestID,
|
||||
"detector_worker_id": detectorWorkerID,
|
||||
"total_proposals": totalProposals,
|
||||
"count": len(proposalPayloads),
|
||||
"proposals": proposalPayloads,
|
||||
"activities": filteredActivities,
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
response["error"] = err.Error()
|
||||
c.JSON(http.StatusInternalServerError, response)
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, response)
|
||||
}
|
||||
|
||||
// RunPluginJobTypeAPI runs full workflow for one job type: detect then dispatch detected jobs.
|
||||
func (s *AdminServer) RunPluginJobTypeAPI(c *gin.Context) {
|
||||
jobType := strings.TrimSpace(c.Param("jobType"))
|
||||
if jobType == "" {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "jobType is required"})
|
||||
return
|
||||
}
|
||||
|
||||
var req struct {
|
||||
ClusterContext json.RawMessage `json:"cluster_context"`
|
||||
MaxResults int32 `json:"max_results"`
|
||||
TimeoutSeconds int `json:"timeout_seconds"`
|
||||
Attempt int32 `json:"attempt"`
|
||||
}
|
||||
|
||||
if err := c.ShouldBindJSON(&req); err != nil && err != io.EOF {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request body: " + err.Error()})
|
||||
return
|
||||
}
|
||||
if req.Attempt < 1 {
|
||||
req.Attempt = 1
|
||||
}
|
||||
|
||||
clusterContext, err := s.parseOrBuildClusterContext(req.ClusterContext)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
timeout := normalizeTimeout(req.TimeoutSeconds, defaultPluginRunTimeout, maxPluginRunTimeout)
|
||||
ctx, cancel := context.WithTimeout(c.Request.Context(), timeout)
|
||||
defer cancel()
|
||||
|
||||
proposals, err := s.RunPluginDetection(ctx, jobType, clusterContext, req.MaxResults)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
detectedCount := len(proposals)
|
||||
|
||||
filteredProposals, skippedActiveCount, err := s.FilterPluginProposalsWithActiveJobs(jobType, proposals)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
type executionResult struct {
|
||||
JobID string `json:"job_id"`
|
||||
Success bool `json:"success"`
|
||||
Error string `json:"error,omitempty"`
|
||||
Completion map[string]interface{} `json:"completion,omitempty"`
|
||||
}
|
||||
|
||||
results := make([]executionResult, 0, len(filteredProposals))
|
||||
successCount := 0
|
||||
errorCount := 0
|
||||
|
||||
for index, proposal := range filteredProposals {
|
||||
job := buildJobSpecFromProposal(jobType, proposal, index)
|
||||
completed, execErr := s.ExecutePluginJob(ctx, job, clusterContext, req.Attempt)
|
||||
|
||||
result := executionResult{
|
||||
JobID: job.JobId,
|
||||
Success: execErr == nil,
|
||||
}
|
||||
|
||||
if completed != nil {
|
||||
if payload, marshalErr := protoMessageToMap(completed); marshalErr == nil {
|
||||
result.Completion = payload
|
||||
}
|
||||
}
|
||||
|
||||
if execErr != nil {
|
||||
result.Error = execErr.Error()
|
||||
errorCount++
|
||||
} else {
|
||||
successCount++
|
||||
}
|
||||
|
||||
results = append(results, result)
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, gin.H{
|
||||
"job_type": jobType,
|
||||
"detected_count": detectedCount,
|
||||
"ready_to_execute_count": len(filteredProposals),
|
||||
"skipped_active_count": skippedActiveCount,
|
||||
"executed_count": len(results),
|
||||
		"success_count":     successCount,
		"error_count":       errorCount,
		"execution_results": results,
	})
}

// ExecutePluginJobAPI executes one job on a capable worker and waits for completion.
func (s *AdminServer) ExecutePluginJobAPI(c *gin.Context) {
	var req struct {
		Job            json.RawMessage `json:"job"`
		ClusterContext json.RawMessage `json:"cluster_context"`
		Attempt        int32           `json:"attempt"`
		TimeoutSeconds int             `json:"timeout_seconds"`
	}

	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request body: " + err.Error()})
		return
	}
	if len(req.Job) == 0 {
		c.JSON(http.StatusBadRequest, gin.H{"error": "job is required"})
		return
	}

	job := &plugin_pb.JobSpec{}
	if err := (protojson.UnmarshalOptions{DiscardUnknown: true}).Unmarshal(req.Job, job); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid job payload: " + err.Error()})
		return
	}

	clusterContext, err := s.parseOrBuildClusterContext(req.ClusterContext)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	if req.Attempt < 1 {
		req.Attempt = 1
	}

	timeout := normalizeTimeout(req.TimeoutSeconds, defaultPluginExecutionTimeout, maxPluginExecutionTimeout)
	ctx, cancel := context.WithTimeout(c.Request.Context(), timeout)
	defer cancel()

	completed, err := s.ExecutePluginJob(ctx, job, clusterContext, req.Attempt)
	if err != nil {
		if completed != nil {
			payload, marshalErr := protoMessageToMap(completed)
			if marshalErr == nil {
				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error(), "completion": payload})
				return
			}
		}
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	renderProtoJSON(c, http.StatusOK, completed)
}

func (s *AdminServer) parseOrBuildClusterContext(raw json.RawMessage) (*plugin_pb.ClusterContext, error) {
	if len(raw) == 0 {
		return s.buildDefaultPluginClusterContext(), nil
	}

	contextMessage := &plugin_pb.ClusterContext{}
	if err := (protojson.UnmarshalOptions{DiscardUnknown: true}).Unmarshal(raw, contextMessage); err != nil {
		return nil, fmt.Errorf("invalid cluster_context payload: %w", err)
	}

	fallback := s.buildDefaultPluginClusterContext()
	if len(contextMessage.MasterGrpcAddresses) == 0 {
		contextMessage.MasterGrpcAddresses = append(contextMessage.MasterGrpcAddresses, fallback.MasterGrpcAddresses...)
	}
	if len(contextMessage.FilerGrpcAddresses) == 0 {
		contextMessage.FilerGrpcAddresses = append(contextMessage.FilerGrpcAddresses, fallback.FilerGrpcAddresses...)
	}
	if len(contextMessage.VolumeGrpcAddresses) == 0 {
		contextMessage.VolumeGrpcAddresses = append(contextMessage.VolumeGrpcAddresses, fallback.VolumeGrpcAddresses...)
	}
	if contextMessage.Metadata == nil {
		contextMessage.Metadata = map[string]string{}
	}
	contextMessage.Metadata["source"] = "admin"

	return contextMessage, nil
}

func (s *AdminServer) buildDefaultPluginClusterContext() *plugin_pb.ClusterContext {
	clusterContext := &plugin_pb.ClusterContext{
		MasterGrpcAddresses: make([]string, 0),
		FilerGrpcAddresses:  make([]string, 0),
		VolumeGrpcAddresses: make([]string, 0),
		Metadata: map[string]string{
			"source": "admin",
		},
	}

	masterAddress := string(s.masterClient.GetMaster(context.Background()))
	if masterAddress != "" {
		clusterContext.MasterGrpcAddresses = append(clusterContext.MasterGrpcAddresses, masterAddress)
	}

	filerSeen := map[string]struct{}{}
	for _, filer := range s.GetAllFilers() {
		filer = strings.TrimSpace(filer)
		if filer == "" {
			continue
		}
		if _, exists := filerSeen[filer]; exists {
			continue
		}
		filerSeen[filer] = struct{}{}
		clusterContext.FilerGrpcAddresses = append(clusterContext.FilerGrpcAddresses, filer)
	}

	volumeSeen := map[string]struct{}{}
	if volumeServers, err := s.GetClusterVolumeServers(); err == nil {
		for _, server := range volumeServers.VolumeServers {
			address := strings.TrimSpace(server.GetDisplayAddress())
			if address == "" {
				address = strings.TrimSpace(server.Address)
			}
			if address == "" {
				continue
			}
			if _, exists := volumeSeen[address]; exists {
				continue
			}
			volumeSeen[address] = struct{}{}
			clusterContext.VolumeGrpcAddresses = append(clusterContext.VolumeGrpcAddresses, address)
		}
	} else {
		glog.V(1).Infof("failed to build default plugin volume context: %v", err)
	}

	sort.Strings(clusterContext.MasterGrpcAddresses)
	sort.Strings(clusterContext.FilerGrpcAddresses)
	sort.Strings(clusterContext.VolumeGrpcAddresses)

	return clusterContext
}

const parseProtoJSONBodyMaxBytes = 1 << 20 // 1 MB

func parseProtoJSONBody(c *gin.Context, message proto.Message) error {
	limitedBody := http.MaxBytesReader(c.Writer, c.Request.Body, parseProtoJSONBodyMaxBytes)
	data, err := io.ReadAll(limitedBody)
	if err != nil {
		return fmt.Errorf("failed to read request body: %w", err)
	}
	if len(data) == 0 {
		return fmt.Errorf("request body is empty")
	}
	if err := (protojson.UnmarshalOptions{DiscardUnknown: true}).Unmarshal(data, message); err != nil {
		return fmt.Errorf("invalid protobuf json: %w", err)
	}
	return nil
}

func renderProtoJSON(c *gin.Context, statusCode int, message proto.Message) {
	payload, err := protojson.MarshalOptions{
		UseProtoNames:   true,
		EmitUnpopulated: true,
	}.Marshal(message)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to encode response: " + err.Error()})
		return
	}

	c.Data(statusCode, "application/json", payload)
}

func protoMessageToMap(message proto.Message) (map[string]interface{}, error) {
	payload, err := protojson.MarshalOptions{UseProtoNames: true}.Marshal(message)
	if err != nil {
		return nil, err
	}
	out := map[string]interface{}{}
	if err := json.Unmarshal(payload, &out); err != nil {
		return nil, err
	}
	return out, nil
}

func normalizeTimeout(timeoutSeconds int, defaultTimeout, maxTimeout time.Duration) time.Duration {
	if timeoutSeconds <= 0 {
		return defaultTimeout
	}
	timeout := time.Duration(timeoutSeconds) * time.Second
	if timeout > maxTimeout {
		return maxTimeout
	}
	return timeout
}

func buildJobSpecFromProposal(jobType string, proposal *plugin_pb.JobProposal, index int) *plugin_pb.JobSpec {
	now := timestamppb.Now()
	suffix := make([]byte, 4)
	if _, err := rand.Read(suffix); err != nil {
		// Fallback to simpler ID if rand fails
		suffix = []byte(fmt.Sprintf("%d", index))
	}
	jobID := fmt.Sprintf("%s-%d-%s", jobType, now.AsTime().UnixNano(), hex.EncodeToString(suffix))

	jobSpec := &plugin_pb.JobSpec{
		JobId:      jobID,
		JobType:    jobType,
		Priority:   plugin_pb.JobPriority_JOB_PRIORITY_NORMAL,
		CreatedAt:  now,
		Labels:     make(map[string]string),
		Parameters: make(map[string]*plugin_pb.ConfigValue),
		DedupeKey:  "",
	}

	if proposal != nil {
		jobSpec.Summary = proposal.Summary
		jobSpec.Detail = proposal.Detail
		if proposal.Priority != plugin_pb.JobPriority_JOB_PRIORITY_UNSPECIFIED {
			jobSpec.Priority = proposal.Priority
		}
		jobSpec.DedupeKey = proposal.DedupeKey
		jobSpec.Parameters = plugin.CloneConfigValueMap(proposal.Parameters)
		if proposal.Labels != nil {
			for k, v := range proposal.Labels {
				jobSpec.Labels[k] = v
			}
		}
	}

	return jobSpec
}

func parsePositiveInt(raw string, defaultValue int) int {
	value, err := strconv.Atoi(strings.TrimSpace(raw))
	if err != nil || value <= 0 {
		return defaultValue
	}
	return value
}
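The `parsePositiveInt` helper above gives query-parameter parsing a single fallback rule. A self-contained sketch of the same behavior (the default value 50 is an arbitrary example, not one used by the handlers):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePositiveInt mirrors the handler helper: surrounding whitespace is
// trimmed, and anything that is not a strictly positive integer yields the
// caller-supplied default.
func parsePositiveInt(raw string, defaultValue int) int {
	value, err := strconv.Atoi(strings.TrimSpace(raw))
	if err != nil || value <= 0 {
		return defaultValue
	}
	return value
}

func main() {
	fmt.Println(parsePositiveInt(" 25 ", 50)) // 25 (trimmed, parsed)
	fmt.Println(parsePositiveInt("-3", 50))   // 50 (non-positive)
	fmt.Println(parsePositiveInt("abc", 50))  // 50 (not a number)
}
```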

// cloneConfigValueMap is now exported by the plugin package as CloneConfigValueMap
33 weed/admin/dash/plugin_api_test.go Normal file
@@ -0,0 +1,33 @@
package dash

import (
	"testing"

	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
)

func TestBuildJobSpecFromProposalDoesNotReuseProposalID(t *testing.T) {
	t.Parallel()

	proposal := &plugin_pb.JobProposal{
		ProposalId: "vacuum-2",
		DedupeKey:  "vacuum:2",
		JobType:    "vacuum",
	}

	jobA := buildJobSpecFromProposal("vacuum", proposal, 0)
	jobB := buildJobSpecFromProposal("vacuum", proposal, 1)

	if jobA.JobId == proposal.ProposalId {
		t.Fatalf("job id must not reuse proposal id: %s", jobA.JobId)
	}
	if jobB.JobId == proposal.ProposalId {
		t.Fatalf("job id must not reuse proposal id: %s", jobB.JobId)
	}
	if jobA.JobId == jobB.JobId {
		t.Fatalf("job ids must be unique across jobs: %s", jobA.JobId)
	}
	if jobA.DedupeKey != proposal.DedupeKey {
		t.Fatalf("dedupe key must be preserved: got=%s want=%s", jobA.DedupeKey, proposal.DedupeKey)
	}
}
@@ -5,12 +5,14 @@ import (
	"fmt"
	"io"
	"net"
	"strconv"
	"sync"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
	"github.com/seaweedfs/seaweedfs/weed/security"
	"github.com/seaweedfs/seaweedfs/weed/util"
@@ -93,6 +95,10 @@ func (s *WorkerGrpcServer) StartWithTLS(port int) error {
	grpcServer := pb.NewGrpcServer(security.LoadServerTLS(util.GetViper(), "grpc.admin"))

	worker_pb.RegisterWorkerServiceServer(grpcServer, s)
	if plugin := s.adminServer.GetPlugin(); plugin != nil {
		plugin_pb.RegisterPluginControlServiceServer(grpcServer, plugin)
		glog.V(0).Infof("Plugin gRPC service registered on worker gRPC server")
	}

	s.grpcServer = grpcServer
	s.listener = listener
@@ -114,6 +120,25 @@ func (s *WorkerGrpcServer) StartWithTLS(port int) error {
	return nil
}

// ListenPort returns the currently bound worker gRPC listen port.
func (s *WorkerGrpcServer) ListenPort() int {
	if s == nil || s.listener == nil {
		return 0
	}
	if tcpAddr, ok := s.listener.Addr().(*net.TCPAddr); ok {
		return tcpAddr.Port
	}
	_, portStr, err := net.SplitHostPort(s.listener.Addr().String())
	if err != nil {
		return 0
	}
	port, err := strconv.Atoi(portStr)
	if err != nil {
		return 0
	}
	return port
}

// Stop stops the gRPC server
func (s *WorkerGrpcServer) Stop() error {
	if !s.running {

@@ -23,7 +23,7 @@ type AdminHandlers struct {
	fileBrowserHandlers    *FileBrowserHandlers
	userHandlers           *UserHandlers
	policyHandlers         *PolicyHandlers
	maintenanceHandlers    *MaintenanceHandlers
	pluginHandlers         *PluginHandlers
	mqHandlers             *MessageQueueHandlers
	serviceAccountHandlers *ServiceAccountHandlers
}
@@ -35,7 +35,7 @@ func NewAdminHandlers(adminServer *dash.AdminServer) *AdminHandlers {
	fileBrowserHandlers := NewFileBrowserHandlers(adminServer)
	userHandlers := NewUserHandlers(adminServer)
	policyHandlers := NewPolicyHandlers(adminServer)
	maintenanceHandlers := NewMaintenanceHandlers(adminServer)
	pluginHandlers := NewPluginHandlers(adminServer)
	mqHandlers := NewMessageQueueHandlers(adminServer)
	serviceAccountHandlers := NewServiceAccountHandlers(adminServer)
	return &AdminHandlers{
@@ -45,7 +45,7 @@ func NewAdminHandlers(adminServer *dash.AdminServer) *AdminHandlers {
	fileBrowserHandlers:    fileBrowserHandlers,
	userHandlers:           userHandlers,
	policyHandlers:         policyHandlers,
	maintenanceHandlers:    maintenanceHandlers,
	pluginHandlers:         pluginHandlers,
	mqHandlers:             mqHandlers,
	serviceAccountHandlers: serviceAccountHandlers,
}
@@ -119,14 +119,12 @@ func (h *AdminHandlers) SetupRoutes(r *gin.Engine, authRequired bool, adminUser,
	protected.GET("/mq/topics", h.mqHandlers.ShowTopics)
	protected.GET("/mq/topics/:namespace/:topic", h.mqHandlers.ShowTopicDetails)

	// Maintenance system routes
	protected.GET("/maintenance", h.maintenanceHandlers.ShowMaintenanceQueue)
	protected.GET("/maintenance/workers", h.maintenanceHandlers.ShowMaintenanceWorkers)
	protected.GET("/maintenance/config", h.maintenanceHandlers.ShowMaintenanceConfig)
	protected.POST("/maintenance/config", dash.RequireWriteAccess(), h.maintenanceHandlers.UpdateMaintenanceConfig)
	protected.GET("/maintenance/config/:taskType", h.maintenanceHandlers.ShowTaskConfig)
	protected.POST("/maintenance/config/:taskType", dash.RequireWriteAccess(), h.maintenanceHandlers.UpdateTaskConfig)
	protected.GET("/maintenance/tasks/:id", h.maintenanceHandlers.ShowTaskDetail)
	protected.GET("/plugin", h.pluginHandlers.ShowPlugin)
	protected.GET("/plugin/configuration", h.pluginHandlers.ShowPluginConfiguration)
	protected.GET("/plugin/queue", h.pluginHandlers.ShowPluginQueue)
	protected.GET("/plugin/detection", h.pluginHandlers.ShowPluginDetection)
	protected.GET("/plugin/execution", h.pluginHandlers.ShowPluginExecution)
	protected.GET("/plugin/monitoring", h.pluginHandlers.ShowPluginMonitoring)

	// API routes for AJAX calls
	api := r.Group("/api")
@@ -226,20 +224,25 @@ func (h *AdminHandlers) SetupRoutes(r *gin.Engine, authRequired bool, adminUser,
	volumeApi.POST("/:id/:server/vacuum", dash.RequireWriteAccess(), h.clusterHandlers.VacuumVolume)
}

// Maintenance API routes
maintenanceApi := api.Group("/maintenance")
// Plugin API routes
pluginApi := api.Group("/plugin")
{
	maintenanceApi.POST("/scan", dash.RequireWriteAccess(), h.adminServer.TriggerMaintenanceScan)
	maintenanceApi.GET("/tasks", h.adminServer.GetMaintenanceTasks)
	maintenanceApi.GET("/tasks/:id", h.adminServer.GetMaintenanceTask)
	maintenanceApi.GET("/tasks/:id/detail", h.adminServer.GetMaintenanceTaskDetailAPI)
	maintenanceApi.POST("/tasks/:id/cancel", dash.RequireWriteAccess(), h.adminServer.CancelMaintenanceTask)
	maintenanceApi.GET("/workers", h.adminServer.GetMaintenanceWorkersAPI)
	maintenanceApi.GET("/workers/:id", h.adminServer.GetMaintenanceWorker)
	maintenanceApi.GET("/workers/:id/logs", h.adminServer.GetWorkerLogs)
	maintenanceApi.GET("/stats", h.adminServer.GetMaintenanceStats)
	maintenanceApi.GET("/config", h.adminServer.GetMaintenanceConfigAPI)
	maintenanceApi.PUT("/config", dash.RequireWriteAccess(), h.adminServer.UpdateMaintenanceConfigAPI)
	pluginApi.GET("/status", h.adminServer.GetPluginStatusAPI)
	pluginApi.GET("/workers", h.adminServer.GetPluginWorkersAPI)
	pluginApi.GET("/job-types", h.adminServer.GetPluginJobTypesAPI)
	pluginApi.GET("/jobs", h.adminServer.GetPluginJobsAPI)
	pluginApi.GET("/jobs/:jobId", h.adminServer.GetPluginJobAPI)
	pluginApi.GET("/jobs/:jobId/detail", h.adminServer.GetPluginJobDetailAPI)
	pluginApi.GET("/activities", h.adminServer.GetPluginActivitiesAPI)
	pluginApi.GET("/scheduler-states", h.adminServer.GetPluginSchedulerStatesAPI)
	pluginApi.GET("/job-types/:jobType/descriptor", h.adminServer.GetPluginJobTypeDescriptorAPI)
	pluginApi.POST("/job-types/:jobType/schema", h.adminServer.RequestPluginJobTypeSchemaAPI)
	pluginApi.GET("/job-types/:jobType/config", h.adminServer.GetPluginJobTypeConfigAPI)
	pluginApi.PUT("/job-types/:jobType/config", dash.RequireWriteAccess(), h.adminServer.UpdatePluginJobTypeConfigAPI)
	pluginApi.GET("/job-types/:jobType/runs", h.adminServer.GetPluginRunHistoryAPI)
	pluginApi.POST("/job-types/:jobType/detect", dash.RequireWriteAccess(), h.adminServer.TriggerPluginDetectionAPI)
	pluginApi.POST("/job-types/:jobType/run", dash.RequireWriteAccess(), h.adminServer.RunPluginJobTypeAPI)
	pluginApi.POST("/jobs/execute", dash.RequireWriteAccess(), h.adminServer.ExecutePluginJobAPI)
}

// Message Queue API routes
@@ -292,14 +295,12 @@ func (h *AdminHandlers) SetupRoutes(r *gin.Engine, authRequired bool, adminUser,
	r.GET("/mq/topics", h.mqHandlers.ShowTopics)
	r.GET("/mq/topics/:namespace/:topic", h.mqHandlers.ShowTopicDetails)

	// Maintenance system routes
	r.GET("/maintenance", h.maintenanceHandlers.ShowMaintenanceQueue)
	r.GET("/maintenance/workers", h.maintenanceHandlers.ShowMaintenanceWorkers)
	r.GET("/maintenance/config", h.maintenanceHandlers.ShowMaintenanceConfig)
	r.POST("/maintenance/config", h.maintenanceHandlers.UpdateMaintenanceConfig)
	r.GET("/maintenance/config/:taskType", h.maintenanceHandlers.ShowTaskConfig)
	r.POST("/maintenance/config/:taskType", h.maintenanceHandlers.UpdateTaskConfig)
	r.GET("/maintenance/tasks/:id", h.maintenanceHandlers.ShowTaskDetail)
	r.GET("/plugin", h.pluginHandlers.ShowPlugin)
	r.GET("/plugin/configuration", h.pluginHandlers.ShowPluginConfiguration)
	r.GET("/plugin/queue", h.pluginHandlers.ShowPluginQueue)
	r.GET("/plugin/detection", h.pluginHandlers.ShowPluginDetection)
	r.GET("/plugin/execution", h.pluginHandlers.ShowPluginExecution)
	r.GET("/plugin/monitoring", h.pluginHandlers.ShowPluginMonitoring)

	// API routes for AJAX calls
	api := r.Group("/api")
@@ -398,20 +399,25 @@ func (h *AdminHandlers) SetupRoutes(r *gin.Engine, authRequired bool, adminUser,
	volumeApi.POST("/:id/:server/vacuum", h.clusterHandlers.VacuumVolume)
}

// Maintenance API routes
maintenanceApi := api.Group("/maintenance")
// Plugin API routes
pluginApi := api.Group("/plugin")
{
	maintenanceApi.POST("/scan", h.adminServer.TriggerMaintenanceScan)
	maintenanceApi.GET("/tasks", h.adminServer.GetMaintenanceTasks)
	maintenanceApi.GET("/tasks/:id", h.adminServer.GetMaintenanceTask)
	maintenanceApi.GET("/tasks/:id/detail", h.adminServer.GetMaintenanceTaskDetailAPI)
	maintenanceApi.POST("/tasks/:id/cancel", h.adminServer.CancelMaintenanceTask)
	maintenanceApi.GET("/workers", h.adminServer.GetMaintenanceWorkersAPI)
	maintenanceApi.GET("/workers/:id", h.adminServer.GetMaintenanceWorker)
	maintenanceApi.GET("/workers/:id/logs", h.adminServer.GetWorkerLogs)
	maintenanceApi.GET("/stats", h.adminServer.GetMaintenanceStats)
	maintenanceApi.GET("/config", h.adminServer.GetMaintenanceConfigAPI)
	maintenanceApi.PUT("/config", h.adminServer.UpdateMaintenanceConfigAPI)
	pluginApi.GET("/status", h.adminServer.GetPluginStatusAPI)
	pluginApi.GET("/workers", h.adminServer.GetPluginWorkersAPI)
	pluginApi.GET("/job-types", h.adminServer.GetPluginJobTypesAPI)
	pluginApi.GET("/jobs", h.adminServer.GetPluginJobsAPI)
	pluginApi.GET("/jobs/:jobId", h.adminServer.GetPluginJobAPI)
	pluginApi.GET("/jobs/:jobId/detail", h.adminServer.GetPluginJobDetailAPI)
	pluginApi.GET("/activities", h.adminServer.GetPluginActivitiesAPI)
	pluginApi.GET("/scheduler-states", h.adminServer.GetPluginSchedulerStatesAPI)
	pluginApi.GET("/job-types/:jobType/descriptor", h.adminServer.GetPluginJobTypeDescriptorAPI)
	pluginApi.POST("/job-types/:jobType/schema", h.adminServer.RequestPluginJobTypeSchemaAPI)
	pluginApi.GET("/job-types/:jobType/config", h.adminServer.GetPluginJobTypeConfigAPI)
	pluginApi.PUT("/job-types/:jobType/config", h.adminServer.UpdatePluginJobTypeConfigAPI)
	pluginApi.GET("/job-types/:jobType/runs", h.adminServer.GetPluginRunHistoryAPI)
	pluginApi.POST("/job-types/:jobType/detect", h.adminServer.TriggerPluginDetectionAPI)
	pluginApi.POST("/job-types/:jobType/run", h.adminServer.RunPluginJobTypeAPI)
	pluginApi.POST("/jobs/execute", h.adminServer.ExecutePluginJobAPI)
}

// Message Queue API routes

95 weed/admin/handlers/admin_handlers_routes_test.go Normal file
@@ -0,0 +1,95 @@
package handlers

import (
	"testing"

	"github.com/gin-gonic/gin"
	"github.com/seaweedfs/seaweedfs/weed/admin/dash"
)

func TestSetupRoutes_RegistersPluginSchedulerStatesAPI_NoAuth(t *testing.T) {
	gin.SetMode(gin.TestMode)
	router := gin.New()

	newRouteTestAdminHandlers().SetupRoutes(router, false, "", "", "", "", true)

	if !hasRoute(router, "GET", "/api/plugin/scheduler-states") {
		t.Fatalf("expected GET /api/plugin/scheduler-states to be registered in no-auth mode")
	}
	if !hasRoute(router, "GET", "/api/plugin/jobs/:jobId/detail") {
		t.Fatalf("expected GET /api/plugin/jobs/:jobId/detail to be registered in no-auth mode")
	}
}

func TestSetupRoutes_RegistersPluginSchedulerStatesAPI_WithAuth(t *testing.T) {
	gin.SetMode(gin.TestMode)
	router := gin.New()

	newRouteTestAdminHandlers().SetupRoutes(router, true, "admin", "password", "", "", true)

	if !hasRoute(router, "GET", "/api/plugin/scheduler-states") {
		t.Fatalf("expected GET /api/plugin/scheduler-states to be registered in auth mode")
	}
	if !hasRoute(router, "GET", "/api/plugin/jobs/:jobId/detail") {
		t.Fatalf("expected GET /api/plugin/jobs/:jobId/detail to be registered in auth mode")
	}
}

func TestSetupRoutes_RegistersPluginPages_NoAuth(t *testing.T) {
	gin.SetMode(gin.TestMode)
	router := gin.New()

	newRouteTestAdminHandlers().SetupRoutes(router, false, "", "", "", "", true)

	assertHasRoute(t, router, "GET", "/plugin")
	assertHasRoute(t, router, "GET", "/plugin/configuration")
	assertHasRoute(t, router, "GET", "/plugin/queue")
	assertHasRoute(t, router, "GET", "/plugin/detection")
	assertHasRoute(t, router, "GET", "/plugin/execution")
	assertHasRoute(t, router, "GET", "/plugin/monitoring")
}

func TestSetupRoutes_RegistersPluginPages_WithAuth(t *testing.T) {
	gin.SetMode(gin.TestMode)
	router := gin.New()

	newRouteTestAdminHandlers().SetupRoutes(router, true, "admin", "password", "", "", true)

	assertHasRoute(t, router, "GET", "/plugin")
	assertHasRoute(t, router, "GET", "/plugin/configuration")
	assertHasRoute(t, router, "GET", "/plugin/queue")
	assertHasRoute(t, router, "GET", "/plugin/detection")
	assertHasRoute(t, router, "GET", "/plugin/execution")
	assertHasRoute(t, router, "GET", "/plugin/monitoring")
}

func newRouteTestAdminHandlers() *AdminHandlers {
	adminServer := &dash.AdminServer{}
	return &AdminHandlers{
		adminServer:            adminServer,
		authHandlers:           &AuthHandlers{adminServer: adminServer},
		clusterHandlers:        &ClusterHandlers{adminServer: adminServer},
		fileBrowserHandlers:    &FileBrowserHandlers{adminServer: adminServer},
		userHandlers:           &UserHandlers{adminServer: adminServer},
		policyHandlers:         &PolicyHandlers{adminServer: adminServer},
		pluginHandlers:         &PluginHandlers{adminServer: adminServer},
		mqHandlers:             &MessageQueueHandlers{adminServer: adminServer},
		serviceAccountHandlers: &ServiceAccountHandlers{adminServer: adminServer},
	}
}

func hasRoute(router *gin.Engine, method string, path string) bool {
	for _, route := range router.Routes() {
		if route.Method == method && route.Path == path {
			return true
		}
	}
	return false
}

func assertHasRoute(t *testing.T, router *gin.Engine, method string, path string) {
	t.Helper()
	if !hasRoute(router, method, path) {
		t.Fatalf("expected %s %s to be registered", method, path)
	}
}
@@ -1,550 +0,0 @@
|
||||
package handlers
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"reflect"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/seaweedfs/seaweedfs/weed/admin/config"
|
||||
"github.com/seaweedfs/seaweedfs/weed/admin/dash"
|
||||
"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
|
||||
"github.com/seaweedfs/seaweedfs/weed/admin/view/app"
|
||||
"github.com/seaweedfs/seaweedfs/weed/admin/view/layout"
|
||||
"github.com/seaweedfs/seaweedfs/weed/glog"
|
||||
"github.com/seaweedfs/seaweedfs/weed/worker/tasks"
|
||||
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
|
||||
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
|
||||
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"
|
||||
"github.com/seaweedfs/seaweedfs/weed/worker/types"
|
||||
)
|
||||
|
||||
// MaintenanceHandlers handles maintenance-related HTTP requests
|
||||
type MaintenanceHandlers struct {
|
||||
adminServer *dash.AdminServer
|
||||
}
|
||||
|
||||
// NewMaintenanceHandlers creates a new instance of MaintenanceHandlers
|
||||
func NewMaintenanceHandlers(adminServer *dash.AdminServer) *MaintenanceHandlers {
|
||||
return &MaintenanceHandlers{
|
||||
adminServer: adminServer,
|
||||
}
|
||||
}
|
||||
|
||||
// ShowTaskDetail displays the task detail page
|
||||
func (h *MaintenanceHandlers) ShowTaskDetail(c *gin.Context) {
|
||||
taskID := c.Param("id")
|
||||
|
||||
if h.adminServer == nil {
|
||||
c.String(http.StatusInternalServerError, "Admin server not initialized")
|
||||
return
|
||||
}
|
||||
|
||||
taskDetail, err := h.adminServer.GetMaintenanceTaskDetail(taskID)
|
||||
if err != nil {
|
||||
glog.Errorf("DEBUG ShowTaskDetail: error getting task detail for %s: %v", taskID, err)
|
||||
c.String(http.StatusNotFound, "Task not found: %s (Error: %v)", taskID, err)
|
||||
return
|
||||
}
|
||||
|
||||
c.Header("Content-Type", "text/html")
|
||||
taskDetailComponent := app.TaskDetail(taskDetail)
|
||||
layoutComponent := layout.Layout(c, taskDetailComponent)
|
||||
err = layoutComponent.Render(c.Request.Context(), c.Writer)
|
||||
if err != nil {
|
||||
glog.Errorf("DEBUG ShowTaskDetail: render error: %v", err)
|
||||
c.String(http.StatusInternalServerError, "Failed to render template: %v", err)
|
||||
return
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
// ShowMaintenanceQueue displays the maintenance queue page
|
||||
func (h *MaintenanceHandlers) ShowMaintenanceQueue(c *gin.Context) {
|
||||
// Add timeout to prevent hanging
|
||||
ctx, cancel := context.WithTimeout(c.Request.Context(), 30*time.Second)
|
||||
defer cancel()
|
||||
|
||||
// Use a channel to handle timeout for data retrieval
|
||||
type result struct {
|
||||
data *maintenance.MaintenanceQueueData
|
||||
err error
|
||||
}
|
||||
resultChan := make(chan result, 1)
|
||||
|
||||
go func() {
|
||||
data, err := h.getMaintenanceQueueData()
|
||||
resultChan <- result{data: data, err: err}
|
||||
}()
|
||||
|
||||
select {
|
||||
case res := <-resultChan:
|
||||
if res.err != nil {
|
||||
glog.V(1).Infof("ShowMaintenanceQueue: error getting data: %v", res.err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": res.err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
glog.V(2).Infof("ShowMaintenanceQueue: got data with %d tasks", len(res.data.Tasks))
|
||||
|
||||
// Render HTML template
|
||||
c.Header("Content-Type", "text/html")
|
||||
maintenanceComponent := app.MaintenanceQueue(res.data)
|
||||
layoutComponent := layout.Layout(c, maintenanceComponent)
|
||||
err := layoutComponent.Render(ctx, c.Writer)
|
||||
if err != nil {
|
||||
glog.V(1).Infof("ShowMaintenanceQueue: render error: %v", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to render template: " + err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
glog.V(3).Infof("ShowMaintenanceQueue: template rendered successfully")
|
||||
|
||||
case <-ctx.Done():
|
||||
glog.Warningf("ShowMaintenanceQueue: timeout waiting for data")
|
||||
c.JSON(http.StatusRequestTimeout, gin.H{
|
||||
"error": "Request timeout - maintenance data retrieval took too long. This may indicate a system issue.",
|
||||
"suggestion": "Try refreshing the page or contact system administrator if the problem persists.",
|
||||
})
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// ShowMaintenanceWorkers displays the maintenance workers page
|
||||
func (h *MaintenanceHandlers) ShowMaintenanceWorkers(c *gin.Context) {
|
||||
if h.adminServer == nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": "Admin server not initialized"})
|
||||
return
|
||||
}
|
||||
workersData, err := h.adminServer.GetMaintenanceWorkersData()
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
// Render HTML template
|
||||
c.Header("Content-Type", "text/html")
|
||||
workersComponent := app.MaintenanceWorkers(workersData)
|
||||
layoutComponent := layout.Layout(c, workersComponent)
|
||||
err = layoutComponent.Render(c.Request.Context(), c.Writer)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to render template: " + err.Error()})
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// ShowMaintenanceConfig displays the maintenance configuration page
|
||||
func (h *MaintenanceHandlers) ShowMaintenanceConfig(c *gin.Context) {
|
||||
config, err := h.getMaintenanceConfig()
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
// Get the schema for dynamic form rendering
|
||||
schema := maintenance.GetMaintenanceConfigSchema()
|
||||
|
||||
// Render HTML template using schema-driven approach
|
||||
c.Header("Content-Type", "text/html")
|
||||
configComponent := app.MaintenanceConfigSchema(config, schema)
|
||||
layoutComponent := layout.Layout(c, configComponent)
|
||||
err = layoutComponent.Render(c.Request.Context(), c.Writer)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to render template: " + err.Error()})
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// ShowTaskConfig displays the configuration page for a specific task type
|
||||
func (h *MaintenanceHandlers) ShowTaskConfig(c *gin.Context) {
|
||||
taskTypeName := c.Param("taskType")
|
||||
|
||||
// Get the schema for this task type
|
||||
schema := tasks.GetTaskConfigSchema(taskTypeName)
|
||||
if schema == nil {
|
||||
c.JSON(http.StatusNotFound, gin.H{"error": "Task type not found or no schema available"})
|
||||
return
|
||||
}
|
||||
|
||||
// Get the UI provider for current configuration
|
||||
uiRegistry := tasks.GetGlobalUIRegistry()
|
||||
typesRegistry := tasks.GetGlobalTypesRegistry()
|
||||
|
||||
var provider types.TaskUIProvider
|
||||
for workerTaskType := range typesRegistry.GetAllDetectors() {
|
||||
if string(workerTaskType) == taskTypeName {
|
||||
provider = uiRegistry.GetProvider(workerTaskType)
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if provider == nil {
|
||||
c.JSON(http.StatusNotFound, gin.H{"error": "UI provider not found for task type"})
|
||||
return
|
||||
}
|
||||
|
||||
// Get current configuration
|
||||
currentConfig := provider.GetCurrentConfig()
|
||||
|
||||
// Note: Do NOT apply schema defaults to current config as it overrides saved values
|
||||
// Only apply defaults when creating new configs, not when displaying existing ones
|
||||
|
||||
// Create task configuration data
|
||||
configData := &maintenance.TaskConfigData{
|
||||
TaskType: maintenance.MaintenanceTaskType(taskTypeName),
|
||||
TaskName: schema.DisplayName,
|
||||
TaskIcon: schema.Icon,
|
||||
Description: schema.Description,
|
||||
}
|
||||
|
||||
// Render HTML template using schema-based approach
|
||||
c.Header("Content-Type", "text/html")
|
||||
taskConfigComponent := app.TaskConfigSchema(configData, schema, currentConfig)
|
||||
layoutComponent := layout.Layout(c, taskConfigComponent)
|
||||
err := layoutComponent.Render(c.Request.Context(), c.Writer)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to render template: " + err.Error()})
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// UpdateTaskConfig updates task configuration from form data
func (h *MaintenanceHandlers) UpdateTaskConfig(c *gin.Context) {
	taskTypeName := c.Param("taskType")
	taskType := types.TaskType(taskTypeName)

	// Parse the form data
	err := c.Request.ParseForm()
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "Failed to parse form data: " + err.Error()})
		return
	}

	// Debug logging: show the received form data
	glog.V(1).Infof("Received form data for task type %s:", taskTypeName)
	for key, values := range c.Request.PostForm {
		glog.V(1).Infof("  %s: %v", key, values)
	}

	// Get the task configuration schema
	schema := tasks.GetTaskConfigSchema(taskTypeName)
	if schema == nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "Schema not found for task type: " + taskTypeName})
		return
	}

	// Create a new config instance based on the task type
	var config TaskConfig
	switch taskType {
	case types.TaskTypeVacuum:
		config = &vacuum.Config{}
	case types.TaskTypeBalance:
		config = &balance.Config{}
	case types.TaskTypeErasureCoding:
		config = &erasure_coding.Config{}
	default:
		c.JSON(http.StatusBadRequest, gin.H{"error": "Unsupported task type: " + taskTypeName})
		return
	}

	// Apply schema defaults first using the type-safe method
	if err := schema.ApplyDefaultsToConfig(config); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to apply defaults: " + err.Error()})
		return
	}

	// Load the current configuration so existing values are preserved
	currentUIRegistry := tasks.GetGlobalUIRegistry()
	currentTypesRegistry := tasks.GetGlobalTypesRegistry()

	var currentProvider types.TaskUIProvider
	for workerTaskType := range currentTypesRegistry.GetAllDetectors() {
		if string(workerTaskType) == string(taskType) {
			currentProvider = currentUIRegistry.GetProvider(workerTaskType)
			break
		}
	}

	if currentProvider != nil {
		// Copy current config values onto the new config
		currentConfig := currentProvider.GetCurrentConfig()
		if currentConfigProtobuf, ok := currentConfig.(TaskConfig); ok {
			// Apply current values via the protobuf representation; no map conversion needed
			currentPolicy := currentConfigProtobuf.ToTaskPolicy()
			if err := config.FromTaskPolicy(currentPolicy); err != nil {
				glog.Warningf("Failed to load current config for %s: %v", taskTypeName, err)
			}
		}
	}

	// Parse the form data using the schema-based approach (overrides with the new values)
	err = h.parseTaskConfigFromForm(c.Request.PostForm, schema, config)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "Failed to parse configuration: " + err.Error()})
		return
	}

	// Debug logging: show the parsed config values
	switch taskType {
	case types.TaskTypeVacuum:
		if vacuumConfig, ok := config.(*vacuum.Config); ok {
			glog.V(1).Infof("Parsed vacuum config - GarbageThreshold: %f, MinVolumeAgeSeconds: %d, MinIntervalSeconds: %d",
				vacuumConfig.GarbageThreshold, vacuumConfig.MinVolumeAgeSeconds, vacuumConfig.MinIntervalSeconds)
		}
	case types.TaskTypeErasureCoding:
		if ecConfig, ok := config.(*erasure_coding.Config); ok {
			glog.V(1).Infof("Parsed EC config - FullnessRatio: %f, QuietForSeconds: %d, MinSizeMB: %d, CollectionFilter: '%s'",
				ecConfig.FullnessRatio, ecConfig.QuietForSeconds, ecConfig.MinSizeMB, ecConfig.CollectionFilter)
		}
	case types.TaskTypeBalance:
		if balanceConfig, ok := config.(*balance.Config); ok {
			glog.V(1).Infof("Parsed balance config - Enabled: %v, MaxConcurrent: %d, ScanIntervalSeconds: %d, ImbalanceThreshold: %f, MinServerCount: %d",
				balanceConfig.Enabled, balanceConfig.MaxConcurrent, balanceConfig.ScanIntervalSeconds, balanceConfig.ImbalanceThreshold, balanceConfig.MinServerCount)
		}
	}

	// Validate the configuration
	if validationErrors := schema.ValidateConfig(config); len(validationErrors) > 0 {
		errorMessages := make([]string, len(validationErrors))
		for i, err := range validationErrors {
			errorMessages[i] = err.Error()
		}
		c.JSON(http.StatusBadRequest, gin.H{"error": "Configuration validation failed", "details": errorMessages})
		return
	}

	// Apply the configuration through the UI provider
	uiRegistry := tasks.GetGlobalUIRegistry()
	typesRegistry := tasks.GetGlobalTypesRegistry()

	var provider types.TaskUIProvider
	for workerTaskType := range typesRegistry.GetAllDetectors() {
		if string(workerTaskType) == string(taskType) {
			provider = uiRegistry.GetProvider(workerTaskType)
			break
		}
	}

	if provider == nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "UI provider not found for task type"})
		return
	}

	err = provider.ApplyTaskConfig(config)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to apply configuration: " + err.Error()})
		return
	}

	// Save the task configuration to a protobuf file using ConfigPersistence
	if h.adminServer != nil && h.adminServer.GetConfigPersistence() != nil {
		err = h.saveTaskConfigToProtobuf(taskType, config)
		if err != nil {
			// Don't fail the request; just log the warning
			glog.Warningf("Failed to save task config to protobuf file: %v", err)
		}
	} else if h.adminServer == nil {
		glog.Warningf("Failed to save task config: admin server not initialized")
	}

	// Trigger a configuration reload in the maintenance manager
	if h.adminServer != nil {
		if manager := h.adminServer.GetMaintenanceManager(); manager != nil {
			err = manager.ReloadTaskConfigurations()
			if err != nil {
				glog.Warningf("Failed to reload task configurations: %v", err)
			} else {
				glog.V(1).Infof("Successfully reloaded task configurations after updating %s", taskTypeName)
			}
		}
	}

	// Redirect back to the task configuration page
	c.Redirect(http.StatusSeeOther, "/maintenance/config/"+taskTypeName)
}

// parseTaskConfigFromForm parses form data using the schema definitions
func (h *MaintenanceHandlers) parseTaskConfigFromForm(formData map[string][]string, schema *tasks.TaskConfigSchema, config interface{}) error {
	configValue := reflect.ValueOf(config)
	if configValue.Kind() == reflect.Ptr {
		configValue = configValue.Elem()
	}

	if configValue.Kind() != reflect.Struct {
		return fmt.Errorf("config must be a struct or pointer to struct")
	}

	configType := configValue.Type()

	for i := 0; i < configValue.NumField(); i++ {
		field := configValue.Field(i)
		fieldType := configType.Field(i)

		// Handle embedded structs recursively
		if fieldType.Anonymous && field.Kind() == reflect.Struct {
			err := h.parseTaskConfigFromForm(formData, schema, field.Addr().Interface())
			if err != nil {
				return fmt.Errorf("error parsing embedded struct %s: %w", fieldType.Name, err)
			}
			continue
		}

		// Get the JSON tag name
		jsonTag := fieldType.Tag.Get("json")
		if jsonTag == "" {
			continue
		}

		// Strip options such as ",omitempty"
		if commaIdx := strings.Index(jsonTag, ","); commaIdx > 0 {
			jsonTag = jsonTag[:commaIdx]
		}

		// Find the corresponding schema field
		schemaField := schema.GetFieldByName(jsonTag)
		if schemaField == nil {
			continue
		}

		// Parse the value based on the field type
		if err := h.parseFieldFromForm(formData, schemaField, field); err != nil {
			return fmt.Errorf("error parsing field %s: %w", schemaField.DisplayName, err)
		}
	}

	return nil
}

// parseFieldFromForm parses a single field value from form data
func (h *MaintenanceHandlers) parseFieldFromForm(formData map[string][]string, schemaField *config.Field, fieldValue reflect.Value) error {
	if !fieldValue.CanSet() {
		return nil
	}

	switch schemaField.Type {
	case config.FieldTypeBool:
		// Checkbox fields: present means true, absent means false
		_, exists := formData[schemaField.JSONName]
		fieldValue.SetBool(exists)

	case config.FieldTypeInt:
		if values, ok := formData[schemaField.JSONName]; ok && len(values) > 0 {
			intVal, err := strconv.Atoi(values[0])
			if err != nil {
				return fmt.Errorf("invalid integer value: %s", values[0])
			}
			fieldValue.SetInt(int64(intVal))
		}

	case config.FieldTypeFloat:
		if values, ok := formData[schemaField.JSONName]; ok && len(values) > 0 {
			floatVal, err := strconv.ParseFloat(values[0], 64)
			if err != nil {
				return fmt.Errorf("invalid float value: %s", values[0])
			}
			fieldValue.SetFloat(floatVal)
		}

	case config.FieldTypeString:
		if values, ok := formData[schemaField.JSONName]; ok && len(values) > 0 {
			fieldValue.SetString(values[0])
		}

	case config.FieldTypeInterval:
		// Interval fields are submitted as a value plus a unit
		valueKey := schemaField.JSONName + "_value"
		unitKey := schemaField.JSONName + "_unit"

		if valueStrs, ok := formData[valueKey]; ok && len(valueStrs) > 0 {
			value, err := strconv.Atoi(valueStrs[0])
			if err != nil {
				return fmt.Errorf("invalid interval value: %s", valueStrs[0])
			}

			unit := "minutes" // default
			if unitStrs, ok := formData[unitKey]; ok && len(unitStrs) > 0 {
				unit = unitStrs[0]
			}

			// Convert to seconds
			seconds := config.IntervalValueUnitToSeconds(value, unit)
			fieldValue.SetInt(int64(seconds))
		}

	default:
		return fmt.Errorf("unsupported field type: %s", schemaField.Type)
	}

	return nil
}

// UpdateMaintenanceConfig updates maintenance configuration from form data
func (h *MaintenanceHandlers) UpdateMaintenanceConfig(c *gin.Context) {
	var config maintenance.MaintenanceConfig
	if err := c.ShouldBind(&config); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	err := h.updateMaintenanceConfig(&config)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.Redirect(http.StatusSeeOther, "/maintenance/config")
}

// Helper methods that delegate to AdminServer

func (h *MaintenanceHandlers) getMaintenanceQueueData() (*maintenance.MaintenanceQueueData, error) {
	if h.adminServer == nil {
		return nil, fmt.Errorf("admin server not initialized")
	}
	// Use the exported AdminServer method that also backs the JSON API
	return h.adminServer.GetMaintenanceQueueData()
}

func (h *MaintenanceHandlers) getMaintenanceConfig() (*maintenance.MaintenanceConfigData, error) {
	if h.adminServer == nil {
		return nil, fmt.Errorf("admin server not initialized")
	}
	// Delegate to AdminServer's persistence method
	return h.adminServer.GetMaintenanceConfigData()
}

func (h *MaintenanceHandlers) updateMaintenanceConfig(config *maintenance.MaintenanceConfig) error {
	if h.adminServer == nil {
		return fmt.Errorf("admin server not initialized")
	}
	// Delegate to AdminServer's persistence method
	return h.adminServer.UpdateMaintenanceConfigData(config)
}

// saveTaskConfigToProtobuf saves task configuration to a protobuf file
func (h *MaintenanceHandlers) saveTaskConfigToProtobuf(taskType types.TaskType, config TaskConfig) error {
	configPersistence := h.adminServer.GetConfigPersistence()
	if configPersistence == nil {
		return fmt.Errorf("config persistence not available")
	}

	// ToTaskPolicy keeps this conversion simple and maintainable
	taskPolicy := config.ToTaskPolicy()

	// Save using the task-specific methods
	switch taskType {
	case types.TaskTypeVacuum:
		return configPersistence.SaveVacuumTaskPolicy(taskPolicy)
	case types.TaskTypeErasureCoding:
		return configPersistence.SaveErasureCodingTaskPolicy(taskPolicy)
	case types.TaskTypeBalance:
		return configPersistence.SaveBalanceTaskPolicy(taskPolicy)
	default:
		return fmt.Errorf("unsupported task type for protobuf persistence: %s", taskType)
	}
}

@@ -1,389 +0,0 @@
package handlers

import (
	"net/url"
	"testing"

	"github.com/seaweedfs/seaweedfs/weed/admin/config"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks/base"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"
)

func TestParseTaskConfigFromForm_WithEmbeddedStruct(t *testing.T) {
	// Create a maintenance handlers instance for testing
	h := &MaintenanceHandlers{}

	// Test with the balance config
	t.Run("Balance Config", func(t *testing.T) {
		// Simulate form data
		formData := url.Values{
			"enabled":                     {"on"},      // checkbox field
			"scan_interval_seconds_value": {"30"},      // interval field
			"scan_interval_seconds_unit":  {"minutes"}, // interval unit
			"max_concurrent":              {"2"},       // number field
			"imbalance_threshold":         {"0.15"},    // float field
			"min_server_count":            {"3"},       // number field
		}

		// Get the schema
		schema := tasks.GetTaskConfigSchema("balance")
		if schema == nil {
			t.Fatal("Failed to get balance schema")
		}

		// Create a config instance
		config := &balance.Config{}

		// Parse the form data
		err := h.parseTaskConfigFromForm(formData, schema, config)
		if err != nil {
			t.Fatalf("Failed to parse form data: %v", err)
		}

		// Verify the embedded struct fields were set correctly
		if !config.Enabled {
			t.Errorf("Expected Enabled=true, got %v", config.Enabled)
		}
		if config.ScanIntervalSeconds != 1800 { // 30 minutes * 60
			t.Errorf("Expected ScanIntervalSeconds=1800, got %v", config.ScanIntervalSeconds)
		}
		if config.MaxConcurrent != 2 {
			t.Errorf("Expected MaxConcurrent=2, got %v", config.MaxConcurrent)
		}

		// Verify the balance-specific fields were set correctly
		if config.ImbalanceThreshold != 0.15 {
			t.Errorf("Expected ImbalanceThreshold=0.15, got %v", config.ImbalanceThreshold)
		}
		if config.MinServerCount != 3 {
			t.Errorf("Expected MinServerCount=3, got %v", config.MinServerCount)
		}
	})

	// Test with the vacuum config
	t.Run("Vacuum Config", func(t *testing.T) {
		// Simulate form data
		formData := url.Values{
			// "enabled" field omitted to simulate an unchecked checkbox
			"scan_interval_seconds_value":  {"4"},     // interval field
			"scan_interval_seconds_unit":   {"hours"}, // interval unit
			"max_concurrent":               {"3"},     // number field
			"garbage_threshold":            {"0.4"},   // float field
			"min_volume_age_seconds_value": {"2"},     // interval field
			"min_volume_age_seconds_unit":  {"days"},  // interval unit
			"min_interval_seconds_value":   {"1"},     // interval field
			"min_interval_seconds_unit":    {"days"},  // interval unit
		}

		// Get the schema
		schema := tasks.GetTaskConfigSchema("vacuum")
		if schema == nil {
			t.Fatal("Failed to get vacuum schema")
		}

		// Create a config instance
		config := &vacuum.Config{}

		// Parse the form data
		err := h.parseTaskConfigFromForm(formData, schema, config)
		if err != nil {
			t.Fatalf("Failed to parse form data: %v", err)
		}

		// Verify the embedded struct fields were set correctly
		if config.Enabled {
			t.Errorf("Expected Enabled=false, got %v", config.Enabled)
		}
		if config.ScanIntervalSeconds != 14400 { // 4 hours * 3600
			t.Errorf("Expected ScanIntervalSeconds=14400, got %v", config.ScanIntervalSeconds)
		}
		if config.MaxConcurrent != 3 {
			t.Errorf("Expected MaxConcurrent=3, got %v", config.MaxConcurrent)
		}

		// Verify the vacuum-specific fields were set correctly
		if config.GarbageThreshold != 0.4 {
			t.Errorf("Expected GarbageThreshold=0.4, got %v", config.GarbageThreshold)
		}
		if config.MinVolumeAgeSeconds != 172800 { // 2 days * 86400
			t.Errorf("Expected MinVolumeAgeSeconds=172800, got %v", config.MinVolumeAgeSeconds)
		}
		if config.MinIntervalSeconds != 86400 { // 1 day * 86400
			t.Errorf("Expected MinIntervalSeconds=86400, got %v", config.MinIntervalSeconds)
		}
	})

	// Test with the erasure coding config
	t.Run("Erasure Coding Config", func(t *testing.T) {
		// Simulate form data
		formData := url.Values{
			"enabled":                     {"on"},              // checkbox field
			"scan_interval_seconds_value": {"2"},               // interval field
			"scan_interval_seconds_unit":  {"hours"},           // interval unit
			"max_concurrent":              {"1"},               // number field
			"quiet_for_seconds_value":     {"10"},              // interval field
			"quiet_for_seconds_unit":      {"minutes"},         // interval unit
			"fullness_ratio":              {"0.85"},            // float field
			"collection_filter":           {"test_collection"}, // string field
			"min_size_mb":                 {"50"},              // number field
		}

		// Get the schema
		schema := tasks.GetTaskConfigSchema("erasure_coding")
		if schema == nil {
			t.Fatal("Failed to get erasure_coding schema")
		}

		// Create a config instance
		config := &erasure_coding.Config{}

		// Parse the form data
		err := h.parseTaskConfigFromForm(formData, schema, config)
		if err != nil {
			t.Fatalf("Failed to parse form data: %v", err)
		}

		// Verify the embedded struct fields were set correctly
		if !config.Enabled {
			t.Errorf("Expected Enabled=true, got %v", config.Enabled)
		}
		if config.ScanIntervalSeconds != 7200 { // 2 hours * 3600
			t.Errorf("Expected ScanIntervalSeconds=7200, got %v", config.ScanIntervalSeconds)
		}
		if config.MaxConcurrent != 1 {
			t.Errorf("Expected MaxConcurrent=1, got %v", config.MaxConcurrent)
		}

		// Verify the erasure-coding-specific fields were set correctly
		if config.QuietForSeconds != 600 { // 10 minutes * 60
			t.Errorf("Expected QuietForSeconds=600, got %v", config.QuietForSeconds)
		}
		if config.FullnessRatio != 0.85 {
			t.Errorf("Expected FullnessRatio=0.85, got %v", config.FullnessRatio)
		}
		if config.CollectionFilter != "test_collection" {
			t.Errorf("Expected CollectionFilter='test_collection', got %v", config.CollectionFilter)
		}
		if config.MinSizeMB != 50 {
			t.Errorf("Expected MinSizeMB=50, got %v", config.MinSizeMB)
		}
	})
}

func TestConfigurationValidation(t *testing.T) {
	// Test that config structs can be validated and converted to protobuf format
	taskTypes := []struct {
		name   string
		config interface{}
	}{
		{
			"balance",
			&balance.Config{
				BaseConfig: base.BaseConfig{
					Enabled:             true,
					ScanIntervalSeconds: 2400,
					MaxConcurrent:       3,
				},
				ImbalanceThreshold: 0.18,
				MinServerCount:     4,
			},
		},
		{
			"vacuum",
			&vacuum.Config{
				BaseConfig: base.BaseConfig{
					Enabled:             false,
					ScanIntervalSeconds: 7200,
					MaxConcurrent:       2,
				},
				GarbageThreshold:    0.35,
				MinVolumeAgeSeconds: 86400,
				MinIntervalSeconds:  604800,
			},
		},
		{
			"erasure_coding",
			&erasure_coding.Config{
				BaseConfig: base.BaseConfig{
					Enabled:             true,
					ScanIntervalSeconds: 3600,
					MaxConcurrent:       1,
				},
				QuietForSeconds:  900,
				FullnessRatio:    0.9,
				CollectionFilter: "important",
				MinSizeMB:        100,
			},
		},
	}

	for _, test := range taskTypes {
		t.Run(test.name, func(t *testing.T) {
			// Test that configs can be converted to a protobuf TaskPolicy
			switch cfg := test.config.(type) {
			case *balance.Config:
				policy := cfg.ToTaskPolicy()
				if policy == nil {
					t.Fatal("ToTaskPolicy returned nil")
				}
				if policy.Enabled != cfg.Enabled {
					t.Errorf("Expected Enabled=%v, got %v", cfg.Enabled, policy.Enabled)
				}
				if policy.MaxConcurrent != int32(cfg.MaxConcurrent) {
					t.Errorf("Expected MaxConcurrent=%v, got %v", cfg.MaxConcurrent, policy.MaxConcurrent)
				}
			case *vacuum.Config:
				policy := cfg.ToTaskPolicy()
				if policy == nil {
					t.Fatal("ToTaskPolicy returned nil")
				}
				if policy.Enabled != cfg.Enabled {
					t.Errorf("Expected Enabled=%v, got %v", cfg.Enabled, policy.Enabled)
				}
				if policy.MaxConcurrent != int32(cfg.MaxConcurrent) {
					t.Errorf("Expected MaxConcurrent=%v, got %v", cfg.MaxConcurrent, policy.MaxConcurrent)
				}
			case *erasure_coding.Config:
				policy := cfg.ToTaskPolicy()
				if policy == nil {
					t.Fatal("ToTaskPolicy returned nil")
				}
				if policy.Enabled != cfg.Enabled {
					t.Errorf("Expected Enabled=%v, got %v", cfg.Enabled, policy.Enabled)
				}
				if policy.MaxConcurrent != int32(cfg.MaxConcurrent) {
					t.Errorf("Expected MaxConcurrent=%v, got %v", cfg.MaxConcurrent, policy.MaxConcurrent)
				}
			default:
				t.Fatalf("Unknown config type: %T", test.config)
			}

			// Test that configs can be validated
			switch cfg := test.config.(type) {
			case *balance.Config:
				if err := cfg.Validate(); err != nil {
					t.Errorf("Validation failed: %v", err)
				}
			case *vacuum.Config:
				if err := cfg.Validate(); err != nil {
					t.Errorf("Validation failed: %v", err)
				}
			case *erasure_coding.Config:
				if err := cfg.Validate(); err != nil {
					t.Errorf("Validation failed: %v", err)
				}
			}
		})
	}
}

func TestParseFieldFromForm_EdgeCases(t *testing.T) {
	h := &MaintenanceHandlers{}

	// Test checkbox parsing (boolean fields)
	t.Run("Checkbox Fields", func(t *testing.T) {
		tests := []struct {
			name          string
			formData      url.Values
			expectedValue bool
		}{
			{"Checked checkbox", url.Values{"test_field": {"on"}}, true},
			{"Unchecked checkbox", url.Values{}, false},
			{"Empty value checkbox", url.Values{"test_field": {""}}, true}, // present but empty still means checked
		}

		for _, test := range tests {
			t.Run(test.name, func(t *testing.T) {
				schema := &tasks.TaskConfigSchema{
					Schema: config.Schema{
						Fields: []*config.Field{
							{
								JSONName:  "test_field",
								Type:      config.FieldTypeBool,
								InputType: "checkbox",
							},
						},
					},
				}

				type TestConfig struct {
					TestField bool `json:"test_field"`
				}

				config := &TestConfig{}
				err := h.parseTaskConfigFromForm(test.formData, schema, config)
				if err != nil {
					t.Fatalf("parseTaskConfigFromForm failed: %v", err)
				}

				if config.TestField != test.expectedValue {
					t.Errorf("Expected %v, got %v", test.expectedValue, config.TestField)
				}
			})
		}
	})

	// Test interval parsing
	t.Run("Interval Fields", func(t *testing.T) {
		tests := []struct {
			name         string
			value        string
			unit         string
			expectedSecs int
		}{
			{"Minutes", "30", "minutes", 1800},
			{"Hours", "2", "hours", 7200},
			{"Days", "1", "days", 86400},
		}

		for _, test := range tests {
			t.Run(test.name, func(t *testing.T) {
				formData := url.Values{
					"test_field_value": {test.value},
					"test_field_unit":  {test.unit},
				}

				schema := &tasks.TaskConfigSchema{
					Schema: config.Schema{
						Fields: []*config.Field{
							{
								JSONName:  "test_field",
								Type:      config.FieldTypeInterval,
								InputType: "interval",
							},
						},
					},
				}

				type TestConfig struct {
					TestField int `json:"test_field"`
				}

				config := &TestConfig{}
				err := h.parseTaskConfigFromForm(formData, schema, config)
				if err != nil {
					t.Fatalf("parseTaskConfigFromForm failed: %v", err)
				}

				if config.TestField != test.expectedSecs {
					t.Errorf("Expected %d seconds, got %d", test.expectedSecs, config.TestField)
				}
			})
		}
	})
}
weed/admin/handlers/plugin_handlers.go (new file, 67 lines)
@@ -0,0 +1,67 @@
package handlers

import (
	"bytes"
	"net/http"

	"github.com/gin-gonic/gin"
	"github.com/seaweedfs/seaweedfs/weed/admin/dash"
	"github.com/seaweedfs/seaweedfs/weed/admin/view/app"
	"github.com/seaweedfs/seaweedfs/weed/admin/view/layout"
)

// PluginHandlers handles plugin UI pages.
type PluginHandlers struct {
	adminServer *dash.AdminServer
}

// NewPluginHandlers creates a new instance of PluginHandlers.
func NewPluginHandlers(adminServer *dash.AdminServer) *PluginHandlers {
	return &PluginHandlers{
		adminServer: adminServer,
	}
}

// ShowPlugin displays the plugin overview page.
func (h *PluginHandlers) ShowPlugin(c *gin.Context) {
	h.renderPluginPage(c, "overview")
}

// ShowPluginConfiguration displays the plugin configuration page.
func (h *PluginHandlers) ShowPluginConfiguration(c *gin.Context) {
	h.renderPluginPage(c, "configuration")
}

// ShowPluginDetection displays the plugin detection jobs page.
func (h *PluginHandlers) ShowPluginDetection(c *gin.Context) {
	h.renderPluginPage(c, "detection")
}

// ShowPluginQueue displays the plugin job queue page.
func (h *PluginHandlers) ShowPluginQueue(c *gin.Context) {
	h.renderPluginPage(c, "queue")
}

// ShowPluginExecution displays the plugin execution jobs page.
func (h *PluginHandlers) ShowPluginExecution(c *gin.Context) {
	h.renderPluginPage(c, "execution")
}

// ShowPluginMonitoring displays the plugin monitoring page.
func (h *PluginHandlers) ShowPluginMonitoring(c *gin.Context) {
	// Backward-compatible alias for the old monitoring URL.
	h.renderPluginPage(c, "detection")
}

func (h *PluginHandlers) renderPluginPage(c *gin.Context, page string) {
	component := app.Plugin(page)
	layoutComponent := layout.Layout(c, component)

	var buf bytes.Buffer
	if err := layoutComponent.Render(c.Request.Context(), &buf); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to render template: " + err.Error()})
		return
	}

	c.Data(http.StatusOK, "text/html; charset=utf-8", buf.Bytes())
}
weed/admin/plugin/DESIGN.md (new file, 205 lines)
@@ -0,0 +1,205 @@
# Admin Worker Plugin System (Design)

This document describes the plugin system for admin-managed workers, implemented in parallel with the current maintenance/worker mechanism.

## Scope

- Add a new plugin protocol and runtime model for multi-language workers.
- Keep all current admin + worker code paths untouched.
- Use gRPC for all admin-worker communication.
- Let workers describe job configuration UI declaratively via protobuf.
- Persist all job type configuration under the admin server data directory.
- Support detector workers and executor workers per job type.
- Add end-to-end workflow observability (activities, active jobs, progress).

## New Contract

- Proto file: `weed/pb/plugin.proto`
- gRPC service: `PluginControlService.WorkerStream`
- Connection model: worker-initiated long-lived bidirectional stream.

Why this model:

- Works for workers in any language with gRPC support.
- Avoids admin dialing constraints in NAT/private networks.
- Allows command/response, progress streaming, and heartbeat over one channel.

## Core Runtime Components (Admin Side)

1. `PluginRegistry`
   - Tracks connected workers and their per-job-type capabilities.
   - Maintains liveness via heartbeat timeout.

2. `SchemaCoordinator`
   - For each job type, asks one capable worker for its `JobTypeDescriptor`.
   - Caches the descriptor version and refresh timestamp.

3. `ConfigStore`
   - Persists descriptors + saved config values in `dataDir`.
   - Stores both:
     - Admin-owned runtime config (detection interval, dispatch concurrency, retry).
     - Worker-owned config values (plugin-specific detection/execution knobs).

4. `DetectorScheduler`
   - Per job type, chooses one detector worker (`can_detect=true`).
   - Sends `RunDetectionRequest` with saved configs + cluster context.
   - Accepts `DetectionProposals`, dedupes by `dedupe_key`, inserts jobs.

5. `JobDispatcher`
   - Chooses an executor worker (`can_execute=true`) for each pending job.
   - Sends `ExecuteJobRequest`.
   - Consumes `JobProgressUpdate` and `JobCompleted`.

6. `WorkflowMonitor`
   - Builds live counters and a timeline from events:
     - activities per job type,
     - active jobs,
     - per-job progress/state,
     - worker health/load.

## Worker Responsibilities

1. Register capabilities on connect (`WorkerHello`).
2. Expose the job type descriptor (`ConfigSchemaResponse`) including UI schemas:
   - admin config form,
   - worker config form,
   - defaults.
3. Run detection on demand (`RunDetectionRequest`) and return proposals.
4. Execute assigned jobs (`ExecuteJobRequest`) and stream progress.
5. Heartbeat regularly with slot usage and running work.
6. Handle cancellation requests (`CancelRequest`) for in-flight detection/execution.

## Declarative UI Model

UI is fully derived from the protobuf schema:

- `ConfigForm`
- `ConfigSection`
- `ConfigField`
- `ConfigOption`
- `ValidationRule`
- `ConfigValue` (typed scalar/list/map/object value container)

Result:

- Admin can render forms without hardcoded task structs.
- New job types can ship their UI schema from the worker binary alone.
- Worker language is irrelevant as long as it can emit protobuf messages.
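To make the "no hardcoded task structs" point concrete, here is a minimal sketch of rendering a descriptor-driven form. `configField`, `configSection`, and `renderForm` are illustrative stand-ins, not the generated types from `plugin.proto`:

```go
package main

import (
	"fmt"
	"strings"
)

// configField and configSection are illustrative stand-ins for the
// protobuf ConfigField/ConfigSection messages.
type configField struct {
	Label, Type, Default string
}

type configSection struct {
	Title  string
	Fields []configField
}

// renderForm produces a plain-text rendering purely from the schema,
// with no per-job-type structs compiled into the admin.
func renderForm(sections []configSection) string {
	var b strings.Builder
	for _, s := range sections {
		fmt.Fprintf(&b, "[%s]\n", s.Title)
		for _, f := range s.Fields {
			fmt.Fprintf(&b, "  %s (%s) = %s\n", f.Label, f.Type, f.Default)
		}
	}
	return b.String()
}

func main() {
	fmt.Print(renderForm([]configSection{{
		Title:  "Detection",
		Fields: []configField{{Label: "Garbage threshold", Type: "float", Default: "0.3"}},
	}}))
}
```

The real admin UI walks the same structure to emit HTML inputs instead of text, but the principle is identical: the form shape travels with the worker.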

## Detection and Dispatch Flow

1. Worker connects and registers capabilities.
2. Admin requests a descriptor per job type.
3. Admin persists the descriptor and editable config values.
4. On the detection interval (admin-owned setting):
   - Admin chooses one detector worker for that job type.
   - Sends `RunDetectionRequest` with:
     - `AdminRuntimeConfig`,
     - `admin_config_values`,
     - `worker_config_values`,
     - `ClusterContext` (master/filer/volume gRPC locations, metadata).
5. Detector emits `DetectionProposals` and `DetectionComplete`.
6. Admin dedupes and enqueues jobs.
7. Dispatcher assigns jobs to any eligible executor worker.
8. Executor emits `JobProgressUpdate` and `JobCompleted`.
9. Monitor updates the workflow UI in near-real-time.
|
||||
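The dedupe-and-enqueue step (6) can be sketched as follows. `Proposal` is a simplified stand-in for a detection proposal, not the actual `plugin.proto` message; the admin-side rule is that a (job type, dedupe key) pair is only enqueued once:

```go
package main

import "fmt"

// Proposal is a simplified stand-in for a detection proposal; only the
// fields needed for deduplication are shown.
type Proposal struct {
	JobType   string
	DedupeKey string
}

// dedupeProposals keeps the first proposal for each (job_type, dedupe_key)
// pair. The pending map survives across detection runs so re-detected
// work already enqueued is dropped, making detection idempotent.
func dedupeProposals(proposals []Proposal, pending map[string]bool) []Proposal {
	var accepted []Proposal
	for _, p := range proposals {
		key := p.JobType + "\x00" + p.DedupeKey
		if pending[key] {
			continue // already enqueued; drop the duplicate
		}
		pending[key] = true
		accepted = append(accepted, p)
	}
	return accepted
}

func main() {
	pending := map[string]bool{}
	batch := []Proposal{
		{JobType: "vacuum", DedupeKey: "volume-7"},
		{JobType: "vacuum", DedupeKey: "volume-7"}, // duplicate within a run
		{JobType: "vacuum", DedupeKey: "volume-9"},
	}
	fmt.Println(len(dedupeProposals(batch, pending))) // 2
}
```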
## Persistence Layout (Admin Data Dir)

Current layout under `<admin-data-dir>/plugin/`:

- `job_types/<job_type>/descriptor.pb`
- `job_types/<job_type>/descriptor.json`
- `job_types/<job_type>/config.pb`
- `job_types/<job_type>/config.json`
- `job_types/<job_type>/runs.json`
- `jobs/tracked_jobs.json`
- `activities/activities.json`

`config.pb` should use `PersistedJobTypeConfig` from `plugin.proto`.
## Admin UI

- Route: `/plugin`
- Includes:
  - runtime status,
  - workers/capabilities,
  - declarative descriptor-driven config forms,
  - run history (last 10 successes + last 10 errors),
  - tracked jobs and activity stream,
  - manual actions for schema refresh, detection, and detect+execute workflow.
## Scheduling Policy (Initial)

Detector selection per job type:

- only workers with `can_detect=true`.
- prefer the healthy worker with the highest free detection slots.
- lease ends on heartbeat timeout or stream drop.

Execution dispatch:

- only workers with `can_execute=true`.
- select by available execution slots and least active jobs.
- retry on failure using the admin runtime retry config.
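The detector-selection rule can be sketched as below. `Worker` is a simplified stand-in for the registry's per-worker state (the real registry also tracks heartbeats, streams, and leases):

```go
package main

import "fmt"

// Worker is a simplified view of registry state used for scheduling.
type Worker struct {
	ID        string
	CanDetect bool
	Healthy   bool
	FreeSlots int
}

// pickDetector applies the stated policy: consider only healthy workers with
// can_detect=true, and prefer the one with the most free detection slots.
func pickDetector(workers []Worker) (Worker, bool) {
	best := Worker{FreeSlots: -1}
	found := false
	for _, w := range workers {
		if !w.CanDetect || !w.Healthy {
			continue
		}
		if w.FreeSlots > best.FreeSlots {
			best = w
			found = true
		}
	}
	return best, found
}

func main() {
	workers := []Worker{
		{ID: "a", CanDetect: true, Healthy: false, FreeSlots: 9}, // stale, skipped
		{ID: "b", CanDetect: true, Healthy: true, FreeSlots: 2},
		{ID: "c", CanDetect: false, Healthy: true, FreeSlots: 8}, // executor only
	}
	if w, ok := pickDetector(workers); ok {
		fmt.Println(w.ID) // b
	}
}
```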
## Safety and Reliability

- Idempotency: dedupe proposals by (`job_type`, `dedupe_key`).
- Backpressure: enforce max jobs per detection run.
- Timeouts: detection and execution timeouts come from admin runtime config.
- Replay-safe persistence: write job state changes before emitting UI events.
- Heartbeat-based failover for detector/executor reassignment.
## Backward Compatibility

- Legacy `worker.proto` runtime remains internally available where still referenced.
- External CLI worker path is moved to plugin runtime behavior.
- Runtime is enabled by default on the admin worker gRPC server.
## Incremental Rollout Plan

Phase 1
- Introduce protocol and storage models only.

Phase 2
- Build admin registry/scheduler/dispatcher behind a feature flag.

Phase 3
- Add dedicated plugin UI pages and metrics.

Phase 4
- Port one existing job type (e.g. vacuum) as an external worker plugin.

Phase 4 status (starter)
- Added `weed worker` command as an external `plugin.proto` worker process.
- Initial handler implements the `vacuum` job type with:
  - declarative descriptor/config form response (`ConfigSchemaResponse`),
  - detection via master topology scan (`RunDetectionRequest`),
  - execution via existing vacuum task logic (`ExecuteJobRequest`),
  - heartbeat/load reporting for the monitor UI.
- Legacy maintenance-worker-specific CLI path is removed.

Run example:

- Start admin: `weed admin -master=localhost:9333`
- Start worker: `weed worker -admin=localhost:23646`
- Optional explicit job type: `weed worker -admin=localhost:23646 -jobType=vacuum`
- Optional stable worker ID persistence: `weed worker -admin=localhost:23646 -workingDir=/var/lib/seaweedfs-plugin`

Phase 5
- Migrate remaining job types and deprecate the old mechanism.
## Agreed Defaults

1. Detector multiplicity
   - Exactly one detector worker per job type at a time. The admin selects one worker and runs detection there.

2. Secret handling
   - No encryption at rest is required for plugin config in this phase.

3. Schema compatibility
   - No migration policy is required yet; this is a new system.

4. Execution ownership
   - The same worker is allowed to do both detection and execution.

5. Retention
   - Keep the last 10 successful runs and the last 10 error runs per job type.
`weed/admin/plugin/config_store.go` (new file, 739 lines):

```go
package plugin

import (
	"encoding/json"
	"fmt"
	"net/url"
	"os"
	"path/filepath"
	"regexp"
	"sort"
	"strings"
	"sync"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
	"google.golang.org/protobuf/encoding/protojson"
	"google.golang.org/protobuf/proto"
)

const (
	pluginDirName     = "plugin"
	jobTypesDirName   = "job_types"
	jobsDirName       = "jobs"
	jobDetailsDirName = "job_details"
	activitiesDirName = "activities"

	descriptorPBFileName    = "descriptor.pb"
	descriptorJSONFileName  = "descriptor.json"
	configPBFileName        = "config.pb"
	configJSONFileName      = "config.json"
	runsJSONFileName        = "runs.json"
	trackedJobsJSONFileName = "tracked_jobs.json"
	activitiesJSONFileName  = "activities.json"

	defaultDirPerm  = 0o755
	defaultFilePerm = 0o644
)

// validJobTypePattern is the canonical pattern for safe job type names.
// Only letters, digits, underscore, dash, and dot are allowed, which prevents
// path traversal because '/', '\\', and whitespace are rejected.
var validJobTypePattern = regexp.MustCompile(`^[A-Za-z0-9_.-]+$`)

// ConfigStore persists plugin configuration and bounded run history.
// If admin data dir is empty, it transparently falls back to in-memory mode.
type ConfigStore struct {
	configured bool
	baseDir    string

	mu sync.RWMutex

	memDescriptors map[string]*plugin_pb.JobTypeDescriptor
	memConfigs     map[string]*plugin_pb.PersistedJobTypeConfig
	memRunHistory  map[string]*JobTypeRunHistory
	memTrackedJobs []TrackedJob
	memActivities  []JobActivity
	memJobDetails  map[string]TrackedJob
}

func NewConfigStore(adminDataDir string) (*ConfigStore, error) {
	store := &ConfigStore{
		configured:     adminDataDir != "",
		memDescriptors: make(map[string]*plugin_pb.JobTypeDescriptor),
		memConfigs:     make(map[string]*plugin_pb.PersistedJobTypeConfig),
		memRunHistory:  make(map[string]*JobTypeRunHistory),
		memJobDetails:  make(map[string]TrackedJob),
	}

	if adminDataDir == "" {
		return store, nil
	}

	store.baseDir = filepath.Join(adminDataDir, pluginDirName)
	if err := os.MkdirAll(filepath.Join(store.baseDir, jobTypesDirName), defaultDirPerm); err != nil {
		return nil, fmt.Errorf("create plugin job_types dir: %w", err)
	}
	if err := os.MkdirAll(filepath.Join(store.baseDir, jobsDirName), defaultDirPerm); err != nil {
		return nil, fmt.Errorf("create plugin jobs dir: %w", err)
	}
	if err := os.MkdirAll(filepath.Join(store.baseDir, jobsDirName, jobDetailsDirName), defaultDirPerm); err != nil {
		return nil, fmt.Errorf("create plugin job_details dir: %w", err)
	}
	if err := os.MkdirAll(filepath.Join(store.baseDir, activitiesDirName), defaultDirPerm); err != nil {
		return nil, fmt.Errorf("create plugin activities dir: %w", err)
	}

	return store, nil
}

func (s *ConfigStore) IsConfigured() bool {
	return s.configured
}

func (s *ConfigStore) BaseDir() string {
	return s.baseDir
}

func (s *ConfigStore) SaveDescriptor(jobType string, descriptor *plugin_pb.JobTypeDescriptor) error {
	if descriptor == nil {
		return fmt.Errorf("descriptor is nil")
	}
	if _, err := sanitizeJobType(jobType); err != nil {
		return err
	}

	clone := proto.Clone(descriptor).(*plugin_pb.JobTypeDescriptor)
	if clone.JobType == "" {
		clone.JobType = jobType
	}

	s.mu.Lock()
	defer s.mu.Unlock()

	if !s.configured {
		s.memDescriptors[jobType] = clone
		return nil
	}

	jobTypeDir, err := s.ensureJobTypeDir(jobType)
	if err != nil {
		return err
	}

	pbPath := filepath.Join(jobTypeDir, descriptorPBFileName)
	jsonPath := filepath.Join(jobTypeDir, descriptorJSONFileName)

	if err := writeProtoFiles(clone, pbPath, jsonPath); err != nil {
		return fmt.Errorf("save descriptor for %s: %w", jobType, err)
	}

	return nil
}

func (s *ConfigStore) LoadDescriptor(jobType string) (*plugin_pb.JobTypeDescriptor, error) {
	if _, err := sanitizeJobType(jobType); err != nil {
		return nil, err
	}

	s.mu.RLock()
	if !s.configured {
		d := s.memDescriptors[jobType]
		s.mu.RUnlock()
		if d == nil {
			return nil, nil
		}
		return proto.Clone(d).(*plugin_pb.JobTypeDescriptor), nil
	}
	s.mu.RUnlock()

	pbPath := filepath.Join(s.baseDir, jobTypesDirName, jobType, descriptorPBFileName)
	data, err := os.ReadFile(pbPath)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("read descriptor for %s: %w", jobType, err)
	}

	var descriptor plugin_pb.JobTypeDescriptor
	if err := proto.Unmarshal(data, &descriptor); err != nil {
		return nil, fmt.Errorf("unmarshal descriptor for %s: %w", jobType, err)
	}
	return &descriptor, nil
}

func (s *ConfigStore) SaveJobTypeConfig(config *plugin_pb.PersistedJobTypeConfig) error {
	if config == nil {
		return fmt.Errorf("job type config is nil")
	}
	if config.JobType == "" {
		return fmt.Errorf("job type config has empty job_type")
	}
	sanitizedJobType, err := sanitizeJobType(config.JobType)
	if err != nil {
		return err
	}
	// Use the sanitized job type going forward to ensure it is safe for filesystem paths.
	config.JobType = sanitizedJobType

	clone := proto.Clone(config).(*plugin_pb.PersistedJobTypeConfig)

	s.mu.Lock()
	defer s.mu.Unlock()

	if !s.configured {
		s.memConfigs[config.JobType] = clone
		return nil
	}

	jobTypeDir, err := s.ensureJobTypeDir(config.JobType)
	if err != nil {
		return err
	}

	pbPath := filepath.Join(jobTypeDir, configPBFileName)
	jsonPath := filepath.Join(jobTypeDir, configJSONFileName)

	if err := writeProtoFiles(clone, pbPath, jsonPath); err != nil {
		return fmt.Errorf("save job type config for %s: %w", config.JobType, err)
	}

	return nil
}

func (s *ConfigStore) LoadJobTypeConfig(jobType string) (*plugin_pb.PersistedJobTypeConfig, error) {
	if _, err := sanitizeJobType(jobType); err != nil {
		return nil, err
	}

	s.mu.RLock()
	if !s.configured {
		cfg := s.memConfigs[jobType]
		s.mu.RUnlock()
		if cfg == nil {
			return nil, nil
		}
		return proto.Clone(cfg).(*plugin_pb.PersistedJobTypeConfig), nil
	}
	s.mu.RUnlock()

	pbPath := filepath.Join(s.baseDir, jobTypesDirName, jobType, configPBFileName)
	data, err := os.ReadFile(pbPath)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("read job type config for %s: %w", jobType, err)
	}

	var config plugin_pb.PersistedJobTypeConfig
	if err := proto.Unmarshal(data, &config); err != nil {
		return nil, fmt.Errorf("unmarshal job type config for %s: %w", jobType, err)
	}

	return &config, nil
}

func (s *ConfigStore) AppendRunRecord(jobType string, record *JobRunRecord) error {
	if record == nil {
		return fmt.Errorf("run record is nil")
	}
	if _, err := sanitizeJobType(jobType); err != nil {
		return err
	}

	safeRecord := *record
	if safeRecord.JobType == "" {
		safeRecord.JobType = jobType
	}
	if safeRecord.CompletedAt == nil || safeRecord.CompletedAt.IsZero() {
		safeRecord.CompletedAt = timeToPtr(time.Now().UTC())
	}

	s.mu.Lock()
	defer s.mu.Unlock()

	history, err := s.loadRunHistoryLocked(jobType)
	if err != nil {
		return err
	}

	if safeRecord.Outcome == RunOutcomeSuccess {
		history.SuccessfulRuns = append(history.SuccessfulRuns, safeRecord)
	} else {
		safeRecord.Outcome = RunOutcomeError
		history.ErrorRuns = append(history.ErrorRuns, safeRecord)
	}

	history.SuccessfulRuns = trimRuns(history.SuccessfulRuns, MaxSuccessfulRunHistory)
	history.ErrorRuns = trimRuns(history.ErrorRuns, MaxErrorRunHistory)
	history.LastUpdatedTime = timeToPtr(time.Now().UTC())

	return s.saveRunHistoryLocked(jobType, history)
}

func (s *ConfigStore) LoadRunHistory(jobType string) (*JobTypeRunHistory, error) {
	if _, err := sanitizeJobType(jobType); err != nil {
		return nil, err
	}

	s.mu.Lock()
	defer s.mu.Unlock()

	history, err := s.loadRunHistoryLocked(jobType)
	if err != nil {
		return nil, err
	}
	return cloneRunHistory(history), nil
}

func (s *ConfigStore) SaveTrackedJobs(jobs []TrackedJob) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	clone := cloneTrackedJobs(jobs)

	if !s.configured {
		s.memTrackedJobs = clone
		return nil
	}

	encoded, err := json.MarshalIndent(clone, "", " ")
	if err != nil {
		return fmt.Errorf("encode tracked jobs: %w", err)
	}

	path := filepath.Join(s.baseDir, jobsDirName, trackedJobsJSONFileName)
	if err := atomicWriteFile(path, encoded, defaultFilePerm); err != nil {
		return fmt.Errorf("write tracked jobs: %w", err)
	}
	return nil
}

func (s *ConfigStore) LoadTrackedJobs() ([]TrackedJob, error) {
	s.mu.RLock()
	if !s.configured {
		out := cloneTrackedJobs(s.memTrackedJobs)
		s.mu.RUnlock()
		return out, nil
	}
	s.mu.RUnlock()

	path := filepath.Join(s.baseDir, jobsDirName, trackedJobsJSONFileName)
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("read tracked jobs: %w", err)
	}

	var jobs []TrackedJob
	if err := json.Unmarshal(data, &jobs); err != nil {
		return nil, fmt.Errorf("parse tracked jobs: %w", err)
	}
	return cloneTrackedJobs(jobs), nil
}

func (s *ConfigStore) SaveJobDetail(job TrackedJob) error {
	jobID, err := sanitizeJobID(job.JobID)
	if err != nil {
		return err
	}

	s.mu.Lock()
	defer s.mu.Unlock()

	clone := cloneTrackedJob(job)
	clone.JobID = jobID

	if !s.configured {
		s.memJobDetails[jobID] = clone
		return nil
	}

	encoded, err := json.MarshalIndent(clone, "", " ")
	if err != nil {
		return fmt.Errorf("encode job detail: %w", err)
	}

	path := filepath.Join(s.baseDir, jobsDirName, jobDetailsDirName, jobDetailFileName(jobID))
	if err := atomicWriteFile(path, encoded, defaultFilePerm); err != nil {
		return fmt.Errorf("write job detail: %w", err)
	}

	return nil
}

func (s *ConfigStore) LoadJobDetail(jobID string) (*TrackedJob, error) {
	jobID, err := sanitizeJobID(jobID)
	if err != nil {
		return nil, err
	}

	s.mu.RLock()
	if !s.configured {
		job, ok := s.memJobDetails[jobID]
		s.mu.RUnlock()
		if !ok {
			return nil, nil
		}
		clone := cloneTrackedJob(job)
		return &clone, nil
	}
	s.mu.RUnlock()

	path := filepath.Join(s.baseDir, jobsDirName, jobDetailsDirName, jobDetailFileName(jobID))
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("read job detail: %w", err)
	}

	var job TrackedJob
	if err := json.Unmarshal(data, &job); err != nil {
		return nil, fmt.Errorf("parse job detail: %w", err)
	}
	clone := cloneTrackedJob(job)
	return &clone, nil
}

func (s *ConfigStore) SaveActivities(activities []JobActivity) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	clone := cloneActivities(activities)

	if !s.configured {
		s.memActivities = clone
		return nil
	}

	encoded, err := json.MarshalIndent(clone, "", " ")
	if err != nil {
		return fmt.Errorf("encode activities: %w", err)
	}

	path := filepath.Join(s.baseDir, activitiesDirName, activitiesJSONFileName)
	if err := atomicWriteFile(path, encoded, defaultFilePerm); err != nil {
		return fmt.Errorf("write activities: %w", err)
	}
	return nil
}

func (s *ConfigStore) LoadActivities() ([]JobActivity, error) {
	s.mu.RLock()
	if !s.configured {
		out := cloneActivities(s.memActivities)
		s.mu.RUnlock()
		return out, nil
	}
	s.mu.RUnlock()

	path := filepath.Join(s.baseDir, activitiesDirName, activitiesJSONFileName)
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("read activities: %w", err)
	}

	var activities []JobActivity
	if err := json.Unmarshal(data, &activities); err != nil {
		return nil, fmt.Errorf("parse activities: %w", err)
	}
	return cloneActivities(activities), nil
}

func (s *ConfigStore) ListJobTypes() ([]string, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()

	jobTypeSet := make(map[string]struct{})

	if !s.configured {
		for jobType := range s.memDescriptors {
			jobTypeSet[jobType] = struct{}{}
		}
		for jobType := range s.memConfigs {
			jobTypeSet[jobType] = struct{}{}
		}
		for jobType := range s.memRunHistory {
			jobTypeSet[jobType] = struct{}{}
		}
	} else {
		jobTypesPath := filepath.Join(s.baseDir, jobTypesDirName)
		entries, err := os.ReadDir(jobTypesPath)
		if err != nil {
			if os.IsNotExist(err) {
				return []string{}, nil
			}
			return nil, fmt.Errorf("list job types: %w", err)
		}
		for _, entry := range entries {
			if !entry.IsDir() {
				continue
			}
			jobType := strings.TrimSpace(entry.Name())
			if _, err := sanitizeJobType(jobType); err != nil {
				continue
			}
			jobTypeSet[jobType] = struct{}{}
		}
	}

	jobTypes := make([]string, 0, len(jobTypeSet))
	for jobType := range jobTypeSet {
		jobTypes = append(jobTypes, jobType)
	}
	sort.Strings(jobTypes)
	return jobTypes, nil
}

func (s *ConfigStore) loadRunHistoryLocked(jobType string) (*JobTypeRunHistory, error) {
	if !s.configured {
		history, ok := s.memRunHistory[jobType]
		if !ok {
			history = &JobTypeRunHistory{JobType: jobType}
			s.memRunHistory[jobType] = history
		}
		return cloneRunHistory(history), nil
	}

	runsPath := filepath.Join(s.baseDir, jobTypesDirName, jobType, runsJSONFileName)
	data, err := os.ReadFile(runsPath)
	if err != nil {
		if os.IsNotExist(err) {
			return &JobTypeRunHistory{JobType: jobType}, nil
		}
		return nil, fmt.Errorf("read run history for %s: %w", jobType, err)
	}

	var history JobTypeRunHistory
	if err := json.Unmarshal(data, &history); err != nil {
		return nil, fmt.Errorf("parse run history for %s: %w", jobType, err)
	}
	if history.JobType == "" {
		history.JobType = jobType
	}
	return &history, nil
}

func (s *ConfigStore) saveRunHistoryLocked(jobType string, history *JobTypeRunHistory) error {
	if !s.configured {
		s.memRunHistory[jobType] = cloneRunHistory(history)
		return nil
	}

	jobTypeDir, err := s.ensureJobTypeDir(jobType)
	if err != nil {
		return err
	}

	encoded, err := json.MarshalIndent(history, "", " ")
	if err != nil {
		return fmt.Errorf("encode run history for %s: %w", jobType, err)
	}

	runsPath := filepath.Join(jobTypeDir, runsJSONFileName)
	if err := atomicWriteFile(runsPath, encoded, defaultFilePerm); err != nil {
		return fmt.Errorf("write run history for %s: %w", jobType, err)
	}
	return nil
}

func (s *ConfigStore) ensureJobTypeDir(jobType string) (string, error) {
	if !s.configured {
		return "", nil
	}
	jobTypeDir := filepath.Join(s.baseDir, jobTypesDirName, jobType)
	if err := os.MkdirAll(jobTypeDir, defaultDirPerm); err != nil {
		return "", fmt.Errorf("create job type dir for %s: %w", jobType, err)
	}
	return jobTypeDir, nil
}

func sanitizeJobType(jobType string) (string, error) {
	jobType = strings.TrimSpace(jobType)
	if jobType == "" {
		return "", fmt.Errorf("job type is empty")
	}
	// Enforce a strict, path-safe pattern for job types: only letters, digits, underscore, dash and dot.
	// This prevents path traversal because '/', '\\' and whitespace are rejected.
	if !validJobTypePattern.MatchString(jobType) {
		return "", fmt.Errorf("invalid job type %q: must match %s", jobType, validJobTypePattern.String())
	}
	return jobType, nil
}

// validJobIDPattern allows letters, digits, dash, underscore, and dot.
// url.PathEscape in jobDetailFileName provides a second layer of defense.
var validJobIDPattern = regexp.MustCompile(`^[A-Za-z0-9_.-]+$`)

func sanitizeJobID(jobID string) (string, error) {
	jobID = strings.TrimSpace(jobID)
	if jobID == "" {
		return "", fmt.Errorf("job id is empty")
	}
	if !validJobIDPattern.MatchString(jobID) {
		return "", fmt.Errorf("invalid job id %q: must match %s", jobID, validJobIDPattern.String())
	}
	return jobID, nil
}

func jobDetailFileName(jobID string) string {
	return url.PathEscape(jobID) + ".json"
}

func trimRuns(runs []JobRunRecord, maxKeep int) []JobRunRecord {
	if len(runs) == 0 {
		return runs
	}
	sort.Slice(runs, func(i, j int) bool {
		ti := time.Time{}
		if runs[i].CompletedAt != nil {
			ti = *runs[i].CompletedAt
		}
		tj := time.Time{}
		if runs[j].CompletedAt != nil {
			tj = *runs[j].CompletedAt
		}
		return ti.After(tj)
	})
	if len(runs) > maxKeep {
		runs = runs[:maxKeep]
	}
	return runs
}

func cloneRunHistory(in *JobTypeRunHistory) *JobTypeRunHistory {
	if in == nil {
		return nil
	}
	out := *in
	if in.SuccessfulRuns != nil {
		out.SuccessfulRuns = append([]JobRunRecord(nil), in.SuccessfulRuns...)
	}
	if in.ErrorRuns != nil {
		out.ErrorRuns = append([]JobRunRecord(nil), in.ErrorRuns...)
	}
	return &out
}

func cloneTrackedJobs(in []TrackedJob) []TrackedJob {
	if len(in) == 0 {
		return nil
	}

	out := make([]TrackedJob, len(in))
	for i := range in {
		out[i] = cloneTrackedJob(in[i])
	}
	return out
}

func cloneTrackedJob(in TrackedJob) TrackedJob {
	out := in
	if in.Parameters != nil {
		out.Parameters = make(map[string]interface{}, len(in.Parameters))
		for key, value := range in.Parameters {
			out.Parameters[key] = deepCopyGenericValue(value)
		}
	}
	if in.Labels != nil {
		out.Labels = make(map[string]string, len(in.Labels))
		for key, value := range in.Labels {
			out.Labels[key] = value
		}
	}
	if in.ResultOutputValues != nil {
		out.ResultOutputValues = make(map[string]interface{}, len(in.ResultOutputValues))
		for key, value := range in.ResultOutputValues {
			out.ResultOutputValues[key] = deepCopyGenericValue(value)
		}
	}
	return out
}

func deepCopyGenericValue(val interface{}) interface{} {
	switch v := val.(type) {
	case map[string]interface{}:
		res := make(map[string]interface{}, len(v))
		for k, val := range v {
			res[k] = deepCopyGenericValue(val)
		}
		return res
	case []interface{}:
		res := make([]interface{}, len(v))
		for i, val := range v {
			res[i] = deepCopyGenericValue(val)
		}
		return res
	default:
		return v
	}
}

func cloneActivities(in []JobActivity) []JobActivity {
	if len(in) == 0 {
		return nil
	}

	out := make([]JobActivity, len(in))
	for i := range in {
		out[i] = in[i]
		if in[i].Details != nil {
			out[i].Details = make(map[string]interface{}, len(in[i].Details))
			for key, value := range in[i].Details {
				out[i].Details[key] = deepCopyGenericValue(value)
			}
		}
	}
	return out
}

// writeProtoFiles writes message to both a binary protobuf file (pbPath) and a
// human-readable JSON file (jsonPath) using atomicWriteFile for each.
// The .pb file is the authoritative source of truth: all reads use proto.Unmarshal
// on the .pb file. The .json file is for human inspection only, so a partial
// failure where .pb succeeds but .json fails leaves the store in a consistent state.
func writeProtoFiles(message proto.Message, pbPath string, jsonPath string) error {
	pbData, err := proto.Marshal(message)
	if err != nil {
		return fmt.Errorf("marshal protobuf: %w", err)
	}
	if err := atomicWriteFile(pbPath, pbData, defaultFilePerm); err != nil {
		return fmt.Errorf("write protobuf file: %w", err)
	}

	jsonData, err := protojson.MarshalOptions{
		Multiline:       true,
		Indent:          " ",
		EmitUnpopulated: true,
	}.Marshal(message)
	if err != nil {
		return fmt.Errorf("marshal json: %w", err)
	}
	if err := atomicWriteFile(jsonPath, jsonData, defaultFilePerm); err != nil {
		return fmt.Errorf("write json file: %w", err)
	}

	return nil
}

func atomicWriteFile(filename string, data []byte, perm os.FileMode) error {
	dir := filepath.Dir(filename)
	if err := os.MkdirAll(dir, defaultDirPerm); err != nil {
		return fmt.Errorf("create directory %s: %w", dir, err)
	}
	tmpFile := filename + ".tmp"
	if err := os.WriteFile(tmpFile, data, perm); err != nil {
		return err
	}
	if err := os.Rename(tmpFile, filename); err != nil {
		_ = os.Remove(tmpFile)
		return err
	}
	return nil
}
```
`weed/admin/plugin/config_store_test.go` (new file, 257 lines):

```go
package plugin

import (
	"reflect"
	"testing"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
)

func TestConfigStoreDescriptorRoundTrip(t *testing.T) {
	t.Parallel()

	tempDir := t.TempDir()
	store, err := NewConfigStore(tempDir)
	if err != nil {
		t.Fatalf("NewConfigStore: %v", err)
	}

	descriptor := &plugin_pb.JobTypeDescriptor{
		JobType:           "vacuum",
		DisplayName:       "Vacuum",
		Description:       "Vacuum volumes",
		DescriptorVersion: 1,
	}

	if err := store.SaveDescriptor("vacuum", descriptor); err != nil {
		t.Fatalf("SaveDescriptor: %v", err)
	}

	got, err := store.LoadDescriptor("vacuum")
	if err != nil {
		t.Fatalf("LoadDescriptor: %v", err)
	}
	if got == nil {
		t.Fatalf("LoadDescriptor: nil descriptor")
	}
	if got.DisplayName != descriptor.DisplayName {
		t.Fatalf("unexpected display name: got %q want %q", got.DisplayName, descriptor.DisplayName)
	}
}

func TestConfigStoreRunHistoryRetention(t *testing.T) {
	t.Parallel()

	store, err := NewConfigStore(t.TempDir())
	if err != nil {
		t.Fatalf("NewConfigStore: %v", err)
	}

	base := time.Now().UTC().Add(-24 * time.Hour)
	for i := 0; i < 15; i++ {
		err := store.AppendRunRecord("balance", &JobRunRecord{
			RunID:       "s" + time.Duration(i).String(),
			JobID:       "job-success",
			JobType:     "balance",
			WorkerID:    "worker-a",
			Outcome:     RunOutcomeSuccess,
			CompletedAt: timeToPtr(base.Add(time.Duration(i) * time.Minute)),
		})
		if err != nil {
			t.Fatalf("AppendRunRecord success[%d]: %v", i, err)
		}
	}

	for i := 0; i < 12; i++ {
		err := store.AppendRunRecord("balance", &JobRunRecord{
			RunID:       "e" + time.Duration(i).String(),
			JobID:       "job-error",
			JobType:     "balance",
			WorkerID:    "worker-b",
			Outcome:     RunOutcomeError,
			CompletedAt: timeToPtr(base.Add(time.Duration(i) * time.Minute)),
		})
		if err != nil {
			t.Fatalf("AppendRunRecord error[%d]: %v", i, err)
		}
	}

	history, err := store.LoadRunHistory("balance")
	if err != nil {
		t.Fatalf("LoadRunHistory: %v", err)
	}
	if len(history.SuccessfulRuns) != MaxSuccessfulRunHistory {
		t.Fatalf("successful retention mismatch: got %d want %d", len(history.SuccessfulRuns), MaxSuccessfulRunHistory)
	}
	if len(history.ErrorRuns) != MaxErrorRunHistory {
		t.Fatalf("error retention mismatch: got %d want %d", len(history.ErrorRuns), MaxErrorRunHistory)
	}

	for i := 1; i < len(history.SuccessfulRuns); i++ {
		t1 := time.Time{}
		if history.SuccessfulRuns[i-1].CompletedAt != nil {
			t1 = *history.SuccessfulRuns[i-1].CompletedAt
		}
		t2 := time.Time{}
		if history.SuccessfulRuns[i].CompletedAt != nil {
			t2 = *history.SuccessfulRuns[i].CompletedAt
		}
		if t1.Before(t2) {
			t.Fatalf("successful run order not descending at %d", i)
		}
	}
	for i := 1; i < len(history.ErrorRuns); i++ {
		t1 := time.Time{}
		if history.ErrorRuns[i-1].CompletedAt != nil {
			t1 = *history.ErrorRuns[i-1].CompletedAt
		}
		t2 := time.Time{}
		if history.ErrorRuns[i].CompletedAt != nil {
			t2 = *history.ErrorRuns[i].CompletedAt
		}
		if t1.Before(t2) {
			t.Fatalf("error run order not descending at %d", i)
		}
	}
}

func TestConfigStoreListJobTypes(t *testing.T) {
	t.Parallel()

	store, err := NewConfigStore("")
	if err != nil {
		t.Fatalf("NewConfigStore: %v", err)
	}

	if err := store.SaveDescriptor("vacuum", &plugin_pb.JobTypeDescriptor{JobType: "vacuum"}); err != nil {
		t.Fatalf("SaveDescriptor: %v", err)
	}
	if err := store.SaveJobTypeConfig(&plugin_pb.PersistedJobTypeConfig{
		JobType:      "balance",
		AdminRuntime: &plugin_pb.AdminRuntimeConfig{Enabled: true},
	}); err != nil {
		t.Fatalf("SaveJobTypeConfig: %v", err)
	}
	if err := store.AppendRunRecord("ec", &JobRunRecord{Outcome: RunOutcomeSuccess, CompletedAt: timeToPtr(time.Now().UTC())}); err != nil {
		t.Fatalf("AppendRunRecord: %v", err)
	}

	got, err := store.ListJobTypes()
	if err != nil {
		t.Fatalf("ListJobTypes: %v", err)
	}
	want := []string{"balance", "ec", "vacuum"}
	if !reflect.DeepEqual(got, want) {
		t.Fatalf("unexpected job types: got=%v want=%v", got, want)
	}
}

func TestConfigStoreMonitorStateRoundTrip(t *testing.T) {
	t.Parallel()

	store, err := NewConfigStore(t.TempDir())
	if err != nil {
		t.Fatalf("NewConfigStore: %v", err)
	}

	tracked := []TrackedJob{
		{
			JobID:     "job-1",
			JobType:   "vacuum",
			State:     "running",
			Progress:  55,
			WorkerID:  "worker-a",
			CreatedAt: timeToPtr(time.Now().UTC().Add(-2 * time.Minute)),
			UpdatedAt: timeToPtr(time.Now().UTC().Add(-1 * time.Minute)),
		},
	}
	activities := []JobActivity{
		{
			JobID:   "job-1",
			JobType: "vacuum",
			Source:  "worker_progress",
			Message: "processing",
```
Stage: "running",
|
||||
OccurredAt: timeToPtr(time.Now().UTC()),
|
||||
Details: map[string]interface{}{
|
||||
"step": "scan",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
if err := store.SaveTrackedJobs(tracked); err != nil {
|
||||
t.Fatalf("SaveTrackedJobs: %v", err)
|
||||
}
|
||||
if err := store.SaveActivities(activities); err != nil {
|
||||
t.Fatalf("SaveActivities: %v", err)
|
||||
}
|
||||
|
||||
gotTracked, err := store.LoadTrackedJobs()
|
||||
if err != nil {
|
||||
t.Fatalf("LoadTrackedJobs: %v", err)
|
||||
}
|
||||
if len(gotTracked) != 1 || gotTracked[0].JobID != tracked[0].JobID {
|
||||
t.Fatalf("unexpected tracked jobs: %+v", gotTracked)
|
||||
}
|
||||
|
||||
gotActivities, err := store.LoadActivities()
|
||||
if err != nil {
|
||||
t.Fatalf("LoadActivities: %v", err)
|
||||
}
|
||||
if len(gotActivities) != 1 || gotActivities[0].Message != activities[0].Message {
|
||||
t.Fatalf("unexpected activities: %+v", gotActivities)
|
||||
}
|
||||
if gotActivities[0].Details["step"] != "scan" {
|
||||
t.Fatalf("unexpected activity details: %+v", gotActivities[0].Details)
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfigStoreJobDetailRoundTrip(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
store, err := NewConfigStore(t.TempDir())
|
||||
if err != nil {
|
||||
t.Fatalf("NewConfigStore: %v", err)
|
||||
}
|
||||
|
||||
input := TrackedJob{
|
||||
JobID: "job-detail-1",
|
||||
JobType: "vacuum",
|
||||
Summary: "detail summary",
|
||||
Detail: "detail payload",
|
||||
CreatedAt: timeToPtr(time.Now().UTC().Add(-2 * time.Minute)),
|
||||
UpdatedAt: timeToPtr(time.Now().UTC()),
|
||||
Parameters: map[string]interface{}{
|
||||
"volume_id": map[string]interface{}{"int64_value": "3"},
|
||||
},
|
||||
Labels: map[string]string{
|
||||
"source": "detector",
|
||||
},
|
||||
ResultOutputValues: map[string]interface{}{
|
||||
"moved": map[string]interface{}{"bool_value": true},
|
||||
},
|
||||
}
|
||||
|
||||
if err := store.SaveJobDetail(input); err != nil {
|
||||
t.Fatalf("SaveJobDetail: %v", err)
|
||||
}
|
||||
|
||||
got, err := store.LoadJobDetail(input.JobID)
|
||||
if err != nil {
|
||||
t.Fatalf("LoadJobDetail: %v", err)
|
||||
}
|
||||
if got == nil {
|
||||
t.Fatalf("LoadJobDetail returned nil")
|
||||
}
|
||||
if got.Detail != input.Detail {
|
||||
t.Fatalf("unexpected detail: got=%q want=%q", got.Detail, input.Detail)
|
||||
}
|
||||
if got.Labels["source"] != "detector" {
|
||||
t.Fatalf("unexpected labels: %+v", got.Labels)
|
||||
}
|
||||
if got.ResultOutputValues == nil {
|
||||
t.Fatalf("expected result output values")
|
||||
}
|
||||
}
231
weed/admin/plugin/job_execution_plan.go
Normal file
@@ -0,0 +1,231 @@
package plugin

import (
	"encoding/base64"
	"sort"
	"strconv"
	"strings"

	"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
	"github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding"
	"google.golang.org/protobuf/proto"
)

func enrichTrackedJobParameters(jobType string, parameters map[string]interface{}) map[string]interface{} {
	if len(parameters) == 0 {
		return parameters
	}
	if _, exists := parameters["execution_plan"]; exists {
		return parameters
	}

	taskParams, ok := decodeTaskParamsFromPlainParameters(parameters)
	if !ok || taskParams == nil {
		return parameters
	}

	plan := buildExecutionPlan(strings.TrimSpace(jobType), taskParams)
	if plan == nil {
		return parameters
	}

	enriched := make(map[string]interface{}, len(parameters)+1)
	for key, value := range parameters {
		enriched[key] = value
	}
	enriched["execution_plan"] = plan
	return enriched
}

func decodeTaskParamsFromPlainParameters(parameters map[string]interface{}) (*worker_pb.TaskParams, bool) {
	rawField, ok := parameters["task_params_pb"]
	if !ok || rawField == nil {
		return nil, false
	}

	fieldMap, ok := rawField.(map[string]interface{})
	if !ok {
		return nil, false
	}

	bytesValue, _ := fieldMap["bytes_value"].(string)
	bytesValue = strings.TrimSpace(bytesValue)
	if bytesValue == "" {
		return nil, false
	}

	payload, err := base64.StdEncoding.DecodeString(bytesValue)
	if err != nil {
		return nil, false
	}

	params := &worker_pb.TaskParams{}
	if err := proto.Unmarshal(payload, params); err != nil {
		return nil, false
	}

	return params, true
}
func buildExecutionPlan(jobType string, params *worker_pb.TaskParams) map[string]interface{} {
	if params == nil {
		return nil
	}

	normalizedJobType := strings.TrimSpace(jobType)
	if normalizedJobType == "" && params.GetErasureCodingParams() != nil {
		normalizedJobType = "erasure_coding"
	}

	switch normalizedJobType {
	case "erasure_coding":
		return buildErasureCodingExecutionPlan(params)
	default:
		return nil
	}
}

func buildErasureCodingExecutionPlan(params *worker_pb.TaskParams) map[string]interface{} {
	if params == nil {
		return nil
	}

	ecParams := params.GetErasureCodingParams()
	if ecParams == nil {
		return nil
	}

	dataShards := int(ecParams.DataShards)
	if dataShards <= 0 {
		dataShards = int(erasure_coding.DataShardsCount)
	}
	parityShards := int(ecParams.ParityShards)
	if parityShards <= 0 {
		parityShards = int(erasure_coding.ParityShardsCount)
	}
	totalShards := dataShards + parityShards

	sources := make([]map[string]interface{}, 0, len(params.Sources))
	for _, source := range params.Sources {
		if source == nil {
			continue
		}
		sources = append(sources, buildExecutionEndpoint(
			source.Node,
			source.DataCenter,
			source.Rack,
			source.VolumeId,
			source.ShardIds,
			dataShards,
		))
	}

	targets := make([]map[string]interface{}, 0, len(params.Targets))
	shardAssignments := make([]map[string]interface{}, 0, totalShards)
	for targetIndex, target := range params.Targets {
		if target == nil {
			continue
		}

		targets = append(targets, buildExecutionEndpoint(
			target.Node,
			target.DataCenter,
			target.Rack,
			target.VolumeId,
			target.ShardIds,
			dataShards,
		))

		for _, shardID := range normalizeShardIDs(target.ShardIds) {
			kind, label := classifyShardID(shardID, dataShards)
			shardAssignments = append(shardAssignments, map[string]interface{}{
				"shard_id":           shardID,
				"kind":               kind,
				"label":              label,
				"target_index":       targetIndex + 1,
				"target_node":        strings.TrimSpace(target.Node),
				"target_data_center": strings.TrimSpace(target.DataCenter),
				"target_rack":        strings.TrimSpace(target.Rack),
				"target_volume_id":   int(target.VolumeId),
			})
		}
	}
	sort.Slice(shardAssignments, func(i, j int) bool {
		left, _ := shardAssignments[i]["shard_id"].(int)
		right, _ := shardAssignments[j]["shard_id"].(int)
		return left < right
	})

	plan := map[string]interface{}{
		"job_type":      "erasure_coding",
		"task_id":       strings.TrimSpace(params.TaskId),
		"volume_id":     int(params.VolumeId),
		"collection":    strings.TrimSpace(params.Collection),
		"data_shards":   dataShards,
		"parity_shards": parityShards,
		"total_shards":  totalShards,
		"sources":       sources,
		"targets":       targets,
		"source_count":  len(sources),
		"target_count":  len(targets),
	}

	if len(shardAssignments) > 0 {
		plan["shard_assignments"] = shardAssignments
	}

	return plan
}

func buildExecutionEndpoint(
	node string,
	dataCenter string,
	rack string,
	volumeID uint32,
	shardIDs []uint32,
	dataShardCount int,
) map[string]interface{} {
	allShards := normalizeShardIDs(shardIDs)
	dataShards := make([]int, 0, len(allShards))
	parityShards := make([]int, 0, len(allShards))
	for _, shardID := range allShards {
		if shardID < dataShardCount {
			dataShards = append(dataShards, shardID)
		} else {
			parityShards = append(parityShards, shardID)
		}
	}

	return map[string]interface{}{
		"node":             strings.TrimSpace(node),
		"data_center":      strings.TrimSpace(dataCenter),
		"rack":             strings.TrimSpace(rack),
		"volume_id":        int(volumeID),
		"shard_ids":        allShards,
		"data_shard_ids":   dataShards,
		"parity_shard_ids": parityShards,
	}
}

func normalizeShardIDs(shardIDs []uint32) []int {
	if len(shardIDs) == 0 {
		return nil
	}

	out := make([]int, 0, len(shardIDs))
	for _, shardID := range shardIDs {
		out = append(out, int(shardID))
	}
	sort.Ints(out)
	return out
}

func classifyShardID(shardID int, dataShardCount int) (kind string, label string) {
	if dataShardCount <= 0 {
		dataShardCount = int(erasure_coding.DataShardsCount)
	}
	if shardID < dataShardCount {
		return "data", "D" + strconv.Itoa(shardID)
	}
	return "parity", "P" + strconv.Itoa(shardID)
}
1243
weed/admin/plugin/plugin.go
Normal file
File diff suppressed because it is too large
112
weed/admin/plugin/plugin_cancel_test.go
Normal file
@@ -0,0 +1,112 @@
package plugin

import (
	"context"
	"errors"
	"testing"

	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
)

func TestRunDetectionSendsCancelOnContextDone(t *testing.T) {
	t.Parallel()
	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New plugin error: %v", err)
	}
	defer pluginSvc.Shutdown()

	const workerID = "worker-detect"
	const jobType = "vacuum"
	pluginSvc.registry.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: workerID,
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: jobType, CanDetect: true, MaxDetectionConcurrency: 1},
		},
	})
	session := &streamSession{workerID: workerID, outgoing: make(chan *plugin_pb.AdminToWorkerMessage, 4)}
	pluginSvc.putSession(session)

	ctx, cancel := context.WithCancel(context.Background())
	errCh := make(chan error, 1)
	go func() {
		_, runErr := pluginSvc.RunDetection(ctx, jobType, &plugin_pb.ClusterContext{}, 10)
		errCh <- runErr
	}()

	first := <-session.outgoing
	if first.GetRunDetectionRequest() == nil {
		t.Fatalf("expected first message to be run_detection_request")
	}

	cancel()

	second := <-session.outgoing
	cancelReq := second.GetCancelRequest()
	if cancelReq == nil {
		t.Fatalf("expected second message to be cancel_request")
	}
	if cancelReq.TargetId != first.RequestId {
		t.Fatalf("unexpected cancel target id: got=%s want=%s", cancelReq.TargetId, first.RequestId)
	}
	if cancelReq.TargetKind != plugin_pb.WorkKind_WORK_KIND_DETECTION {
		t.Fatalf("unexpected cancel target kind: %v", cancelReq.TargetKind)
	}

	runErr := <-errCh
	if !errors.Is(runErr, context.Canceled) {
		t.Fatalf("expected context canceled error, got %v", runErr)
	}
}

func TestExecuteJobSendsCancelOnContextDone(t *testing.T) {
	t.Parallel()
	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New plugin error: %v", err)
	}
	defer pluginSvc.Shutdown()

	const workerID = "worker-exec"
	const jobType = "vacuum"
	pluginSvc.registry.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: workerID,
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: jobType, CanExecute: true, MaxExecutionConcurrency: 1},
		},
	})
	session := &streamSession{workerID: workerID, outgoing: make(chan *plugin_pb.AdminToWorkerMessage, 4)}
	pluginSvc.putSession(session)

	job := &plugin_pb.JobSpec{JobId: "job-1", JobType: jobType}
	ctx, cancel := context.WithCancel(context.Background())
	errCh := make(chan error, 1)
	go func() {
		_, runErr := pluginSvc.ExecuteJob(ctx, job, &plugin_pb.ClusterContext{}, 1)
		errCh <- runErr
	}()

	first := <-session.outgoing
	if first.GetExecuteJobRequest() == nil {
		t.Fatalf("expected first message to be execute_job_request")
	}

	cancel()

	second := <-session.outgoing
	cancelReq := second.GetCancelRequest()
	if cancelReq == nil {
		t.Fatalf("expected second message to be cancel_request")
	}
	if cancelReq.TargetId != first.RequestId {
		t.Fatalf("unexpected cancel target id: got=%s want=%s", cancelReq.TargetId, first.RequestId)
	}
	if cancelReq.TargetKind != plugin_pb.WorkKind_WORK_KIND_EXECUTION {
		t.Fatalf("unexpected cancel target kind: %v", cancelReq.TargetKind)
	}

	runErr := <-errCh
	if !errors.Is(runErr, context.Canceled) {
		t.Fatalf("expected context canceled error, got %v", runErr)
	}
}
125
weed/admin/plugin/plugin_config_bootstrap_test.go
Normal file
@@ -0,0 +1,125 @@
package plugin

import (
	"testing"

	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
)

func TestEnsureJobTypeConfigFromDescriptorBootstrapsDefaults(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	descriptor := &plugin_pb.JobTypeDescriptor{
		JobType:           "vacuum",
		DescriptorVersion: 3,
		AdminConfigForm: &plugin_pb.ConfigForm{
			DefaultValues: map[string]*plugin_pb.ConfigValue{
				"scan_scope": {Kind: &plugin_pb.ConfigValue_StringValue{StringValue: "all"}},
			},
		},
		WorkerConfigForm: &plugin_pb.ConfigForm{
			DefaultValues: map[string]*plugin_pb.ConfigValue{
				"threshold": {Kind: &plugin_pb.ConfigValue_DoubleValue{DoubleValue: 0.3}},
			},
		},
		AdminRuntimeDefaults: &plugin_pb.AdminRuntimeDefaults{
			Enabled:                       true,
			DetectionIntervalSeconds:      60,
			DetectionTimeoutSeconds:       20,
			MaxJobsPerDetection:           30,
			GlobalExecutionConcurrency:    4,
			PerWorkerExecutionConcurrency: 2,
			RetryLimit:                    3,
			RetryBackoffSeconds:           5,
		},
	}

	if err := pluginSvc.ensureJobTypeConfigFromDescriptor("vacuum", descriptor); err != nil {
		t.Fatalf("ensureJobTypeConfigFromDescriptor: %v", err)
	}

	cfg, err := pluginSvc.LoadJobTypeConfig("vacuum")
	if err != nil {
		t.Fatalf("LoadJobTypeConfig: %v", err)
	}
	if cfg == nil {
		t.Fatalf("expected non-nil config")
	}
	if cfg.DescriptorVersion != 3 {
		t.Fatalf("unexpected descriptor version: got=%d", cfg.DescriptorVersion)
	}
	if cfg.AdminRuntime == nil || !cfg.AdminRuntime.Enabled {
		t.Fatalf("expected enabled admin settings")
	}
	if cfg.AdminRuntime.GlobalExecutionConcurrency != 4 {
		t.Fatalf("unexpected global execution concurrency: %d", cfg.AdminRuntime.GlobalExecutionConcurrency)
	}
	if _, ok := cfg.AdminConfigValues["scan_scope"]; !ok {
		t.Fatalf("missing admin default value")
	}
	if _, ok := cfg.WorkerConfigValues["threshold"]; !ok {
		t.Fatalf("missing worker default value")
	}
}

func TestEnsureJobTypeConfigFromDescriptorDoesNotOverwriteExisting(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	if err := pluginSvc.SaveJobTypeConfig(&plugin_pb.PersistedJobTypeConfig{
		JobType: "balance",
		AdminRuntime: &plugin_pb.AdminRuntimeConfig{
			Enabled:                    true,
			GlobalExecutionConcurrency: 9,
		},
		AdminConfigValues: map[string]*plugin_pb.ConfigValue{
			"custom": {Kind: &plugin_pb.ConfigValue_StringValue{StringValue: "keep"}},
		},
	}); err != nil {
		t.Fatalf("SaveJobTypeConfig: %v", err)
	}

	descriptor := &plugin_pb.JobTypeDescriptor{
		JobType:           "balance",
		DescriptorVersion: 7,
		AdminConfigForm: &plugin_pb.ConfigForm{
			DefaultValues: map[string]*plugin_pb.ConfigValue{
				"custom": {Kind: &plugin_pb.ConfigValue_StringValue{StringValue: "overwrite"}},
			},
		},
		AdminRuntimeDefaults: &plugin_pb.AdminRuntimeDefaults{
			Enabled:                    true,
			GlobalExecutionConcurrency: 1,
		},
	}

	if err := pluginSvc.ensureJobTypeConfigFromDescriptor("balance", descriptor); err != nil {
		t.Fatalf("ensureJobTypeConfigFromDescriptor: %v", err)
	}

	cfg, err := pluginSvc.LoadJobTypeConfig("balance")
	if err != nil {
		t.Fatalf("LoadJobTypeConfig: %v", err)
	}
	if cfg == nil {
		t.Fatalf("expected config")
	}
	if cfg.AdminRuntime == nil || cfg.AdminRuntime.GlobalExecutionConcurrency != 9 {
		t.Fatalf("existing admin settings should be preserved, got=%v", cfg.AdminRuntime)
	}
	custom := cfg.AdminConfigValues["custom"]
	if custom == nil || custom.GetStringValue() != "keep" {
		t.Fatalf("existing admin config should be preserved")
	}
}
197
weed/admin/plugin/plugin_detection_test.go
Normal file
@@ -0,0 +1,197 @@
package plugin

import (
	"context"
	"testing"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
)

func TestRunDetectionIncludesLatestSuccessfulRun(t *testing.T) {
	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New plugin error: %v", err)
	}
	defer pluginSvc.Shutdown()

	jobType := "vacuum"
	pluginSvc.registry.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-a",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: jobType, CanDetect: true, MaxDetectionConcurrency: 1},
		},
	})
	session := &streamSession{workerID: "worker-a", outgoing: make(chan *plugin_pb.AdminToWorkerMessage, 1)}
	pluginSvc.putSession(session)

	oldSuccess := time.Date(2026, 1, 1, 0, 0, 0, 0, time.UTC)
	latestSuccess := time.Date(2026, 2, 1, 0, 0, 0, 0, time.UTC)
	if err := pluginSvc.store.AppendRunRecord(jobType, &JobRunRecord{Outcome: RunOutcomeSuccess, CompletedAt: timeToPtr(oldSuccess)}); err != nil {
		t.Fatalf("AppendRunRecord old success: %v", err)
	}
	if err := pluginSvc.store.AppendRunRecord(jobType, &JobRunRecord{Outcome: RunOutcomeError, CompletedAt: timeToPtr(latestSuccess.Add(2 * time.Hour))}); err != nil {
		t.Fatalf("AppendRunRecord error run: %v", err)
	}
	if err := pluginSvc.store.AppendRunRecord(jobType, &JobRunRecord{Outcome: RunOutcomeSuccess, CompletedAt: timeToPtr(latestSuccess)}); err != nil {
		t.Fatalf("AppendRunRecord latest success: %v", err)
	}

	resultCh := make(chan error, 1)
	go func() {
		_, runErr := pluginSvc.RunDetection(context.Background(), jobType, &plugin_pb.ClusterContext{}, 10)
		resultCh <- runErr
	}()

	message := <-session.outgoing
	detectRequest := message.GetRunDetectionRequest()
	if detectRequest == nil {
		t.Fatalf("expected run detection request message")
	}
	if detectRequest.LastSuccessfulRun == nil {
		t.Fatalf("expected last_successful_run to be set")
	}
	if got := detectRequest.LastSuccessfulRun.AsTime().UTC(); !got.Equal(latestSuccess) {
		t.Fatalf("unexpected last_successful_run, got=%s want=%s", got, latestSuccess)
	}

	pluginSvc.handleDetectionComplete("worker-a", &plugin_pb.DetectionComplete{
		RequestId: message.RequestId,
		JobType:   jobType,
		Success:   true,
	})

	if runErr := <-resultCh; runErr != nil {
		t.Fatalf("RunDetection error: %v", runErr)
	}
}

func TestRunDetectionOmitsLastSuccessfulRunWhenNoSuccessHistory(t *testing.T) {
	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New plugin error: %v", err)
	}
	defer pluginSvc.Shutdown()

	jobType := "vacuum"
	pluginSvc.registry.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-a",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: jobType, CanDetect: true, MaxDetectionConcurrency: 1},
		},
	})
	session := &streamSession{workerID: "worker-a", outgoing: make(chan *plugin_pb.AdminToWorkerMessage, 1)}
	pluginSvc.putSession(session)

	if err := pluginSvc.store.AppendRunRecord(jobType, &JobRunRecord{
		Outcome:     RunOutcomeError,
		CompletedAt: timeToPtr(time.Date(2026, 2, 10, 0, 0, 0, 0, time.UTC)),
	}); err != nil {
		t.Fatalf("AppendRunRecord error run: %v", err)
	}

	resultCh := make(chan error, 1)
	go func() {
		_, runErr := pluginSvc.RunDetection(context.Background(), jobType, &plugin_pb.ClusterContext{}, 10)
		resultCh <- runErr
	}()

	message := <-session.outgoing
	detectRequest := message.GetRunDetectionRequest()
	if detectRequest == nil {
		t.Fatalf("expected run detection request message")
	}
	if detectRequest.LastSuccessfulRun != nil {
		t.Fatalf("expected last_successful_run to be nil when no success history")
	}

	pluginSvc.handleDetectionComplete("worker-a", &plugin_pb.DetectionComplete{
		RequestId: message.RequestId,
		JobType:   jobType,
		Success:   true,
	})

	if runErr := <-resultCh; runErr != nil {
		t.Fatalf("RunDetection error: %v", runErr)
	}
}

func TestRunDetectionWithReportCapturesDetectionActivities(t *testing.T) {
	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New plugin error: %v", err)
	}
	defer pluginSvc.Shutdown()

	jobType := "vacuum"
	pluginSvc.registry.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-a",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: jobType, CanDetect: true, MaxDetectionConcurrency: 1},
		},
	})
	session := &streamSession{workerID: "worker-a", outgoing: make(chan *plugin_pb.AdminToWorkerMessage, 1)}
	pluginSvc.putSession(session)

	reportCh := make(chan *DetectionReport, 1)
	errCh := make(chan error, 1)
	go func() {
		report, runErr := pluginSvc.RunDetectionWithReport(context.Background(), jobType, &plugin_pb.ClusterContext{}, 10)
		reportCh <- report
		errCh <- runErr
	}()

	message := <-session.outgoing
	requestID := message.GetRequestId()
	if requestID == "" {
		t.Fatalf("expected request id in detection request")
	}

	pluginSvc.handleDetectionProposals("worker-a", &plugin_pb.DetectionProposals{
		RequestId: requestID,
		JobType:   jobType,
		Proposals: []*plugin_pb.JobProposal{
			{
				ProposalId: "proposal-1",
				JobType:    jobType,
				Summary:    "vacuum proposal",
				Detail:     "based on garbage ratio",
			},
		},
	})
	pluginSvc.handleDetectionComplete("worker-a", &plugin_pb.DetectionComplete{
		RequestId:      requestID,
		JobType:        jobType,
		Success:        true,
		TotalProposals: 1,
	})

	report := <-reportCh
	if report == nil {
		t.Fatalf("expected detection report")
	}
	if report.RequestID == "" {
		t.Fatalf("expected detection report request id")
	}
	if report.WorkerID != "worker-a" {
		t.Fatalf("expected worker-a, got %q", report.WorkerID)
	}
	if len(report.Proposals) != 1 {
		t.Fatalf("expected one proposal in report, got %d", len(report.Proposals))
	}
	if runErr := <-errCh; runErr != nil {
		t.Fatalf("RunDetectionWithReport error: %v", runErr)
	}

	activities := pluginSvc.ListActivities(jobType, 0)
	stages := map[string]bool{}
	for _, activity := range activities {
		if activity.RequestID != report.RequestID {
			continue
		}
		stages[activity.Stage] = true
	}
	if !stages["requested"] || !stages["proposal"] || !stages["completed"] {
		t.Fatalf("expected requested/proposal/completed activities, got stages=%v", stages)
	}
}
896
weed/admin/plugin/plugin_monitor.go
Normal file
@@ -0,0 +1,896 @@
package plugin

import (
	"encoding/json"
	"sort"
	"strings"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
	"google.golang.org/protobuf/encoding/protojson"
)

const (
	maxTrackedJobsTotal = 1000
	maxActivityRecords  = 4000
	maxRelatedJobs      = 100
)

var (
	StateSucceeded = strings.ToLower(plugin_pb.JobState_JOB_STATE_SUCCEEDED.String())
	StateFailed    = strings.ToLower(plugin_pb.JobState_JOB_STATE_FAILED.String())
	StateCanceled  = strings.ToLower(plugin_pb.JobState_JOB_STATE_CANCELED.String())
)

// activityLess reports whether activity a occurred after activity b (newest-first order).
// A nil OccurredAt is treated as the zero time.
func activityLess(a, b JobActivity) bool {
	ta := time.Time{}
	if a.OccurredAt != nil {
		ta = *a.OccurredAt
	}
	tb := time.Time{}
	if b.OccurredAt != nil {
		tb = *b.OccurredAt
	}
	return ta.After(tb)
}
func (r *Plugin) loadPersistedMonitorState() error {
|
||||
trackedJobs, err := r.store.LoadTrackedJobs()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
activities, err := r.store.LoadActivities()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if len(trackedJobs) > 0 {
|
||||
r.jobsMu.Lock()
|
||||
for i := range trackedJobs {
|
||||
job := trackedJobs[i]
|
||||
if strings.TrimSpace(job.JobID) == "" {
|
||||
continue
|
||||
}
|
||||
// Backward compatibility: migrate older inline detail payloads
|
||||
// out of tracked_jobs.json into dedicated per-job detail files.
|
||||
if hasTrackedJobRichDetails(job) {
|
||||
if err := r.store.SaveJobDetail(job); err != nil {
|
||||
glog.Warningf("Plugin failed to migrate detail snapshot for job %s: %v", job.JobID, err)
|
||||
}
|
||||
}
|
||||
stripTrackedJobDetailFields(&job)
|
||||
jobCopy := job
|
||||
r.jobs[job.JobID] = &jobCopy
|
||||
}
|
||||
r.pruneTrackedJobsLocked()
|
||||
r.jobsMu.Unlock()
|
||||
}
|
||||
|
||||
if len(activities) > maxActivityRecords {
|
||||
activities = activities[len(activities)-maxActivityRecords:]
|
||||
}
|
||||
if len(activities) > 0 {
|
||||
r.activitiesMu.Lock()
|
||||
r.activities = append([]JobActivity(nil), activities...)
|
||||
r.activitiesMu.Unlock()
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (r *Plugin) ListTrackedJobs(jobType string, state string, limit int) []TrackedJob {
|
||||
r.jobsMu.RLock()
|
||||
defer r.jobsMu.RUnlock()
|
||||
|
||||
normalizedJobType := strings.TrimSpace(jobType)
|
||||
normalizedState := strings.TrimSpace(strings.ToLower(state))
|
||||
|
||||
items := make([]TrackedJob, 0, len(r.jobs))
|
||||
for _, job := range r.jobs {
|
||||
if job == nil {
|
||||
continue
|
||||
}
|
||||
if normalizedJobType != "" && job.JobType != normalizedJobType {
|
||||
continue
|
||||
}
|
||||
if normalizedState != "" && strings.ToLower(job.State) != normalizedState {
|
||||
continue
|
||||
}
|
||||
items = append(items, cloneTrackedJob(*job))
|
||||
}
|
||||
|
||||
sort.Slice(items, func(i, j int) bool {
|
||||
ti := time.Time{}
|
||||
if items[i].UpdatedAt != nil {
|
||||
ti = *items[i].UpdatedAt
|
||||
}
|
||||
tj := time.Time{}
|
||||
if items[j].UpdatedAt != nil {
|
||||
tj = *items[j].UpdatedAt
|
||||
}
|
||||
if !ti.Equal(tj) {
|
||||
return ti.After(tj)
|
||||
}
|
||||
return items[i].JobID < items[j].JobID
|
||||
})
|
||||
|
||||
if limit > 0 && len(items) > limit {
|
||||
items = items[:limit]
|
||||
}
|
||||
return items
|
||||
}
|
||||
|
||||
func (r *Plugin) GetTrackedJob(jobID string) (*TrackedJob, bool) {
|
||||
r.jobsMu.RLock()
|
||||
defer r.jobsMu.RUnlock()
|
||||
|
||||
job, ok := r.jobs[jobID]
|
||||
if !ok || job == nil {
|
||||
return nil, false
|
||||
}
|
||||
clone := cloneTrackedJob(*job)
|
||||
return &clone, true
|
||||
}
|
||||
|
||||
func (r *Plugin) ListActivities(jobType string, limit int) []JobActivity {
	r.activitiesMu.RLock()
	defer r.activitiesMu.RUnlock()

	normalized := strings.TrimSpace(jobType)
	activities := make([]JobActivity, 0, len(r.activities))
	for _, activity := range r.activities {
		if normalized != "" && activity.JobType != normalized {
			continue
		}
		activities = append(activities, activity)
	}

	sort.Slice(activities, func(i, j int) bool {
		return activityLess(activities[i], activities[j])
	})
	if limit > 0 && len(activities) > limit {
		activities = activities[:limit]
	}
	return activities
}

func (r *Plugin) ListJobActivities(jobID string, limit int) []JobActivity {
	normalizedJobID := strings.TrimSpace(jobID)
	if normalizedJobID == "" {
		return nil
	}

	r.activitiesMu.RLock()
	activities := make([]JobActivity, 0, len(r.activities))
	for _, activity := range r.activities {
		if strings.TrimSpace(activity.JobID) != normalizedJobID {
			continue
		}
		activities = append(activities, activity)
	}
	r.activitiesMu.RUnlock()

	sort.Slice(activities, func(i, j int) bool {
		return !activityLess(activities[i], activities[j]) // oldest-first for job timeline
	})
	if limit > 0 && len(activities) > limit {
		activities = activities[len(activities)-limit:]
	}
	return activities
}

func (r *Plugin) BuildJobDetail(jobID string, activityLimit int, relatedLimit int) (*JobDetail, bool, error) {
	normalizedJobID := strings.TrimSpace(jobID)
	if normalizedJobID == "" {
		return nil, false, nil
	}

	// Clamp relatedLimit to a safe range to avoid excessive memory allocation from untrusted input.
	if relatedLimit <= 0 {
		relatedLimit = 0
	} else if relatedLimit > maxRelatedJobs {
		relatedLimit = maxRelatedJobs
	}

	r.jobsMu.RLock()
	trackedSnapshot, ok := r.jobs[normalizedJobID]
	if ok && trackedSnapshot != nil {
		candidate := cloneTrackedJob(*trackedSnapshot)
		stripTrackedJobDetailFields(&candidate)
		trackedSnapshot = &candidate
	} else {
		trackedSnapshot = nil
	}
	r.jobsMu.RUnlock()

	detailJob, err := r.store.LoadJobDetail(normalizedJobID)
	if err != nil {
		return nil, false, err
	}

	if trackedSnapshot == nil && detailJob == nil {
		return nil, false, nil
	}
	if detailJob == nil && trackedSnapshot != nil {
		clone := cloneTrackedJob(*trackedSnapshot)
		detailJob = &clone
	}
	if detailJob == nil {
		return nil, false, nil
	}
	if trackedSnapshot != nil {
		mergeTrackedStatusIntoDetail(detailJob, trackedSnapshot)
	}
	detailJob.Parameters = enrichTrackedJobParameters(detailJob.JobType, detailJob.Parameters)

	r.activitiesMu.RLock()
	activities := append([]JobActivity(nil), r.activities...)
	r.activitiesMu.RUnlock()

	detail := &JobDetail{
		Job:         detailJob,
		Activities:  filterJobActivitiesFromSlice(activities, normalizedJobID, activityLimit),
		LastUpdated: timeToPtr(time.Now().UTC()),
	}

	if history, err := r.store.LoadRunHistory(detailJob.JobType); err != nil {
		return nil, true, err
	} else if history != nil {
		for i := range history.SuccessfulRuns {
			record := history.SuccessfulRuns[i]
			if strings.TrimSpace(record.JobID) == normalizedJobID {
				recordCopy := record
				detail.RunRecord = &recordCopy
				break
			}
		}
		if detail.RunRecord == nil {
			for i := range history.ErrorRuns {
				record := history.ErrorRuns[i]
				if strings.TrimSpace(record.JobID) == normalizedJobID {
					recordCopy := record
					detail.RunRecord = &recordCopy
					break
				}
			}
		}
	}

	if relatedLimit > 0 {
		related := make([]TrackedJob, 0, relatedLimit)
		r.jobsMu.RLock()
		for _, candidate := range r.jobs {
			// Guard against nil entries, matching the other r.jobs iterations above.
			if candidate == nil {
				continue
			}
			if strings.TrimSpace(candidate.JobType) != strings.TrimSpace(detailJob.JobType) {
				continue
			}
			if strings.TrimSpace(candidate.JobID) == normalizedJobID {
				continue
			}
			cloned := cloneTrackedJob(*candidate)
			stripTrackedJobDetailFields(&cloned)
			related = append(related, cloned)
			if len(related) >= relatedLimit {
				break
			}
		}
		r.jobsMu.RUnlock()
		detail.RelatedJobs = related
	}

	return detail, true, nil
}

func filterJobActivitiesFromSlice(all []JobActivity, jobID string, limit int) []JobActivity {
	if strings.TrimSpace(jobID) == "" || len(all) == 0 {
		return nil
	}

	activities := make([]JobActivity, 0, len(all))
	for _, activity := range all {
		if strings.TrimSpace(activity.JobID) != jobID {
			continue
		}
		activities = append(activities, activity)
	}

	sort.Slice(activities, func(i, j int) bool {
		return !activityLess(activities[i], activities[j]) // oldest-first for job timeline
	})
	if limit > 0 && len(activities) > limit {
		activities = activities[len(activities)-limit:]
	}
	return activities
}

func stripTrackedJobDetailFields(job *TrackedJob) {
	if job == nil {
		return
	}
	job.Detail = ""
	job.Parameters = nil
	job.Labels = nil
	job.ResultOutputValues = nil
}

func hasTrackedJobRichDetails(job TrackedJob) bool {
	return strings.TrimSpace(job.Detail) != "" ||
		len(job.Parameters) > 0 ||
		len(job.Labels) > 0 ||
		len(job.ResultOutputValues) > 0
}

func mergeTrackedStatusIntoDetail(detail *TrackedJob, tracked *TrackedJob) {
	if detail == nil || tracked == nil {
		return
	}

	if detail.JobType == "" {
		detail.JobType = tracked.JobType
	}
	if detail.RequestID == "" {
		detail.RequestID = tracked.RequestID
	}
	if detail.WorkerID == "" {
		detail.WorkerID = tracked.WorkerID
	}
	if detail.DedupeKey == "" {
		detail.DedupeKey = tracked.DedupeKey
	}
	if detail.Summary == "" {
		detail.Summary = tracked.Summary
	}
	if detail.State == "" {
		detail.State = tracked.State
	}
	if detail.Progress == 0 {
		detail.Progress = tracked.Progress
	}
	if detail.Stage == "" {
		detail.Stage = tracked.Stage
	}
	if detail.Message == "" {
		detail.Message = tracked.Message
	}
	if detail.Attempt == 0 {
		detail.Attempt = tracked.Attempt
	}
	if detail.CreatedAt == nil || detail.CreatedAt.IsZero() {
		detail.CreatedAt = tracked.CreatedAt
	}
	if detail.UpdatedAt == nil || detail.UpdatedAt.IsZero() {
		detail.UpdatedAt = tracked.UpdatedAt
	}
	if detail.CompletedAt == nil || detail.CompletedAt.IsZero() {
		detail.CompletedAt = tracked.CompletedAt
	}
	if detail.ErrorMessage == "" {
		detail.ErrorMessage = tracked.ErrorMessage
	}
	if detail.ResultSummary == "" {
		detail.ResultSummary = tracked.ResultSummary
	}
}

func (r *Plugin) handleJobProgressUpdate(workerID string, update *plugin_pb.JobProgressUpdate) {
	if update == nil {
		return
	}

	now := time.Now().UTC()
	resolvedWorkerID := strings.TrimSpace(workerID)

	if strings.TrimSpace(update.JobId) != "" {
		r.jobsMu.Lock()
		job := r.jobs[update.JobId]
		if job == nil {
			job = &TrackedJob{
				JobID:     update.JobId,
				JobType:   update.JobType,
				RequestID: update.RequestId,
				WorkerID:  resolvedWorkerID,
				CreatedAt: timeToPtr(now),
			}
			r.jobs[update.JobId] = job
		}

		if update.JobType != "" {
			job.JobType = update.JobType
		}
		if update.RequestId != "" {
			job.RequestID = update.RequestId
		}
		if job.WorkerID != "" {
			resolvedWorkerID = job.WorkerID
		} else if resolvedWorkerID != "" {
			job.WorkerID = resolvedWorkerID
		}
		job.State = strings.ToLower(update.State.String())
		job.Progress = update.ProgressPercent
		job.Stage = update.Stage
		job.Message = update.Message
		job.UpdatedAt = timeToPtr(now)
		r.pruneTrackedJobsLocked()
		r.dirtyJobs = true
		r.jobsMu.Unlock()
	}

	r.trackWorkerActivities(update.JobType, update.JobId, update.RequestId, resolvedWorkerID, update.Activities)
	if update.Message != "" || update.Stage != "" {
		source := "worker_progress"
		if strings.TrimSpace(update.JobId) == "" {
			source = "worker_detection"
		}
		r.appendActivity(JobActivity{
			JobID:      update.JobId,
			JobType:    update.JobType,
			RequestID:  update.RequestId,
			WorkerID:   resolvedWorkerID,
			Source:     source,
			Message:    update.Message,
			Stage:      update.Stage,
			OccurredAt: timeToPtr(now),
		})
	}
}

func (r *Plugin) trackExecutionStart(requestID, workerID string, job *plugin_pb.JobSpec, attempt int32) {
	if job == nil || strings.TrimSpace(job.JobId) == "" {
		return
	}

	now := time.Now().UTC()

	r.jobsMu.Lock()
	tracked := r.jobs[job.JobId]
	if tracked == nil {
		tracked = &TrackedJob{
			JobID:     job.JobId,
			CreatedAt: timeToPtr(now),
		}
		r.jobs[job.JobId] = tracked
	}

	tracked.JobType = job.JobType
	tracked.RequestID = requestID
	tracked.WorkerID = workerID
	tracked.DedupeKey = job.DedupeKey
	tracked.Summary = job.Summary
	tracked.State = strings.ToLower(plugin_pb.JobState_JOB_STATE_ASSIGNED.String())
	tracked.Progress = 0
	tracked.Stage = "assigned"
	tracked.Message = "job assigned to worker"
	tracked.Attempt = attempt
	if tracked.CreatedAt == nil || tracked.CreatedAt.IsZero() {
		tracked.CreatedAt = timeToPtr(now)
	}
	tracked.UpdatedAt = timeToPtr(now)
	trackedSnapshot := cloneTrackedJob(*tracked)
	r.pruneTrackedJobsLocked()
	r.dirtyJobs = true
	r.jobsMu.Unlock()
	r.persistJobDetailSnapshot(job.JobId, func(detail *TrackedJob) {
		detail.JobID = job.JobId
		detail.JobType = job.JobType
		detail.RequestID = requestID
		detail.WorkerID = workerID
		detail.DedupeKey = job.DedupeKey
		detail.Summary = job.Summary
		detail.Detail = job.Detail
		detail.Parameters = enrichTrackedJobParameters(job.JobType, configValueMapToPlain(job.Parameters))
		if len(job.Labels) > 0 {
			labels := make(map[string]string, len(job.Labels))
			for key, value := range job.Labels {
				labels[key] = value
			}
			detail.Labels = labels
		} else {
			detail.Labels = nil
		}
		detail.State = trackedSnapshot.State
		detail.Progress = trackedSnapshot.Progress
		detail.Stage = trackedSnapshot.Stage
		detail.Message = trackedSnapshot.Message
		detail.Attempt = attempt
		if detail.CreatedAt == nil || detail.CreatedAt.IsZero() {
			detail.CreatedAt = trackedSnapshot.CreatedAt
		}
		detail.UpdatedAt = trackedSnapshot.UpdatedAt
	})

	r.appendActivity(JobActivity{
		JobID:      job.JobId,
		JobType:    job.JobType,
		RequestID:  requestID,
		WorkerID:   workerID,
		Source:     "admin_dispatch",
		Message:    "job assigned",
		Stage:      "assigned",
		OccurredAt: timeToPtr(now),
	})
}

func (r *Plugin) trackExecutionQueued(job *plugin_pb.JobSpec) {
	if job == nil || strings.TrimSpace(job.JobId) == "" {
		return
	}

	now := time.Now().UTC()

	r.jobsMu.Lock()
	tracked := r.jobs[job.JobId]
	if tracked == nil {
		tracked = &TrackedJob{
			JobID:     job.JobId,
			CreatedAt: timeToPtr(now),
		}
		r.jobs[job.JobId] = tracked
	}

	tracked.JobType = job.JobType
	tracked.DedupeKey = job.DedupeKey
	tracked.Summary = job.Summary
	tracked.State = strings.ToLower(plugin_pb.JobState_JOB_STATE_PENDING.String())
	tracked.Progress = 0
	tracked.Stage = "queued"
	tracked.Message = "waiting for available executor"
	if tracked.CreatedAt == nil || tracked.CreatedAt.IsZero() {
		tracked.CreatedAt = timeToPtr(now)
	}
	tracked.UpdatedAt = timeToPtr(now)
	trackedSnapshot := cloneTrackedJob(*tracked)
	r.pruneTrackedJobsLocked()
	r.dirtyJobs = true
	r.jobsMu.Unlock()
	r.persistJobDetailSnapshot(job.JobId, func(detail *TrackedJob) {
		detail.JobID = job.JobId
		detail.JobType = job.JobType
		detail.DedupeKey = job.DedupeKey
		detail.Summary = job.Summary
		detail.Detail = job.Detail
		detail.Parameters = enrichTrackedJobParameters(job.JobType, configValueMapToPlain(job.Parameters))
		if len(job.Labels) > 0 {
			labels := make(map[string]string, len(job.Labels))
			for key, value := range job.Labels {
				labels[key] = value
			}
			detail.Labels = labels
		} else {
			detail.Labels = nil
		}
		detail.State = trackedSnapshot.State
		detail.Progress = trackedSnapshot.Progress
		detail.Stage = trackedSnapshot.Stage
		detail.Message = trackedSnapshot.Message
		if detail.CreatedAt == nil || detail.CreatedAt.IsZero() {
			detail.CreatedAt = trackedSnapshot.CreatedAt
		}
		detail.UpdatedAt = trackedSnapshot.UpdatedAt
	})

	r.appendActivity(JobActivity{
		JobID:      job.JobId,
		JobType:    job.JobType,
		Source:     "admin_scheduler",
		Message:    "job queued for execution",
		Stage:      "queued",
		OccurredAt: timeToPtr(now),
	})
}

func (r *Plugin) trackExecutionCompletion(completed *plugin_pb.JobCompleted) *TrackedJob {
	if completed == nil || strings.TrimSpace(completed.JobId) == "" {
		return nil
	}

	now := time.Now().UTC()
	if completed.CompletedAt != nil {
		now = completed.CompletedAt.AsTime().UTC()
	}

	r.jobsMu.Lock()
	tracked := r.jobs[completed.JobId]
	if tracked == nil {
		tracked = &TrackedJob{
			JobID:     completed.JobId,
			CreatedAt: timeToPtr(now),
		}
		r.jobs[completed.JobId] = tracked
	}

	if completed.JobType != "" {
		tracked.JobType = completed.JobType
	}
	if completed.RequestId != "" {
		tracked.RequestID = completed.RequestId
	}
	if completed.Success {
		tracked.State = strings.ToLower(plugin_pb.JobState_JOB_STATE_SUCCEEDED.String())
		tracked.Progress = 100
		tracked.Stage = "completed"
		if completed.Result != nil {
			tracked.ResultSummary = completed.Result.Summary
		}
		tracked.Message = tracked.ResultSummary
		if tracked.Message == "" {
			tracked.Message = "completed"
		}
		tracked.ErrorMessage = ""
	} else {
		tracked.State = strings.ToLower(plugin_pb.JobState_JOB_STATE_FAILED.String())
		tracked.Stage = "failed"
		tracked.ErrorMessage = completed.ErrorMessage
		tracked.Message = completed.ErrorMessage
	}

	tracked.UpdatedAt = timeToPtr(now)
	tracked.CompletedAt = timeToPtr(now)
	r.pruneTrackedJobsLocked()
	clone := cloneTrackedJob(*tracked)
	r.dirtyJobs = true
	r.jobsMu.Unlock()
	r.persistJobDetailSnapshot(completed.JobId, func(detail *TrackedJob) {
		detail.JobID = completed.JobId
		if completed.JobType != "" {
			detail.JobType = completed.JobType
		}
		if completed.RequestId != "" {
			detail.RequestID = completed.RequestId
		}
		detail.State = clone.State
		detail.Progress = clone.Progress
		detail.Stage = clone.Stage
		detail.Message = clone.Message
		detail.ErrorMessage = clone.ErrorMessage
		detail.ResultSummary = clone.ResultSummary
		if completed.Success && completed.Result != nil {
			detail.ResultOutputValues = configValueMapToPlain(completed.Result.OutputValues)
		} else {
			detail.ResultOutputValues = nil
		}
		if detail.CreatedAt == nil || detail.CreatedAt.IsZero() {
			detail.CreatedAt = clone.CreatedAt
		}
		if detail.UpdatedAt == nil || detail.UpdatedAt.IsZero() {
			detail.UpdatedAt = clone.UpdatedAt
		}
		if detail.CompletedAt == nil || detail.CompletedAt.IsZero() {
			detail.CompletedAt = clone.CompletedAt
		}
	})

	r.appendActivity(JobActivity{
		JobID:      completed.JobId,
		JobType:    completed.JobType,
		RequestID:  completed.RequestId,
		WorkerID:   clone.WorkerID,
		Source:     "worker_completion",
		Message:    clone.Message,
		Stage:      clone.Stage,
		OccurredAt: timeToPtr(now),
	})

	return &clone
}

func (r *Plugin) trackWorkerActivities(jobType, jobID, requestID, workerID string, events []*plugin_pb.ActivityEvent) {
	if len(events) == 0 {
		return
	}
	for _, event := range events {
		if event == nil {
			continue
		}
		timestamp := time.Now().UTC()
		if event.CreatedAt != nil {
			timestamp = event.CreatedAt.AsTime().UTC()
		}
		r.appendActivity(JobActivity{
			JobID:      jobID,
			JobType:    jobType,
			RequestID:  requestID,
			WorkerID:   workerID,
			Source:     strings.ToLower(event.Source.String()),
			Message:    event.Message,
			Stage:      event.Stage,
			Details:    configValueMapToPlain(event.Details),
			OccurredAt: timeToPtr(timestamp),
		})
	}
}

func (r *Plugin) appendActivity(activity JobActivity) {
	if activity.OccurredAt == nil || activity.OccurredAt.IsZero() {
		activity.OccurredAt = timeToPtr(time.Now().UTC())
	}

	r.activitiesMu.Lock()
	r.activities = append(r.activities, activity)
	if len(r.activities) > maxActivityRecords {
		r.activities = r.activities[len(r.activities)-maxActivityRecords:]
	}
	r.dirtyActivities = true
	r.activitiesMu.Unlock()
}

func (r *Plugin) pruneTrackedJobsLocked() {
	if len(r.jobs) <= maxTrackedJobsTotal {
		return
	}

	type sortableJob struct {
		jobID     string
		updatedAt time.Time
	}
	terminalJobs := make([]sortableJob, 0)
	for jobID, job := range r.jobs {
		// Guard against nil entries, matching the other r.jobs iterations.
		if job == nil {
			continue
		}
		if job.State == StateSucceeded ||
			job.State == StateFailed ||
			job.State == StateCanceled {
			updAt := time.Time{}
			if job.UpdatedAt != nil {
				updAt = *job.UpdatedAt
			}
			terminalJobs = append(terminalJobs, sortableJob{jobID, updAt})
		}
	}

	if len(terminalJobs) == 0 {
		return
	}

	sort.Slice(terminalJobs, func(i, j int) bool {
		return terminalJobs[i].updatedAt.Before(terminalJobs[j].updatedAt)
	})

	toDelete := len(r.jobs) - maxTrackedJobsTotal
	if toDelete <= 0 {
		return
	}
	if toDelete > len(terminalJobs) {
		toDelete = len(terminalJobs)
	}

	for i := 0; i < toDelete; i++ {
		delete(r.jobs, terminalJobs[i].jobID)
	}
}

func configValueMapToPlain(values map[string]*plugin_pb.ConfigValue) map[string]interface{} {
	if len(values) == 0 {
		return nil
	}

	payload, err := protojson.MarshalOptions{UseProtoNames: true}.Marshal(&plugin_pb.ValueMap{Fields: values})
	if err != nil {
		return nil
	}

	decoded := map[string]interface{}{}
	if err := json.Unmarshal(payload, &decoded); err != nil {
		return nil
	}

	fields, ok := decoded["fields"].(map[string]interface{})
	if !ok {
		return nil
	}
	return fields
}

func (r *Plugin) persistTrackedJobsSnapshot() {
	r.jobsMu.Lock()
	r.dirtyJobs = false
	jobs := make([]TrackedJob, 0, len(r.jobs))
	for _, job := range r.jobs {
		if job == nil || strings.TrimSpace(job.JobID) == "" {
			continue
		}
		clone := cloneTrackedJob(*job)
		stripTrackedJobDetailFields(&clone)
		jobs = append(jobs, clone)
	}
	r.jobsMu.Unlock()

	if len(jobs) == 0 {
		return
	}

	sort.Slice(jobs, func(i, j int) bool {
		ti := time.Time{}
		if jobs[i].UpdatedAt != nil {
			ti = *jobs[i].UpdatedAt
		}
		tj := time.Time{}
		if jobs[j].UpdatedAt != nil {
			tj = *jobs[j].UpdatedAt
		}
		if !ti.Equal(tj) {
			return ti.After(tj)
		}
		return jobs[i].JobID < jobs[j].JobID
	})
	if len(jobs) > maxTrackedJobsTotal {
		jobs = jobs[:maxTrackedJobsTotal]
	}

	if err := r.store.SaveTrackedJobs(jobs); err != nil {
		glog.Warningf("Plugin failed to persist tracked jobs: %v", err)
	}
}

func (r *Plugin) persistJobDetailSnapshot(jobID string, apply func(detail *TrackedJob)) {
	normalizedJobID, _ := sanitizeJobID(jobID)
	if normalizedJobID == "" {
		return
	}

	r.jobDetailsMu.Lock()
	defer r.jobDetailsMu.Unlock()

	detail, err := r.store.LoadJobDetail(normalizedJobID)
	if err != nil {
		glog.Warningf("Plugin failed to load job detail snapshot for %s: %v", normalizedJobID, err)
		return
	}
	if detail == nil {
		detail = &TrackedJob{
			JobID: normalizedJobID,
		}
	}

	if apply != nil {
		apply(detail)
	}

	if err := r.store.SaveJobDetail(*detail); err != nil {
		glog.Warningf("Plugin failed to persist job detail snapshot for %s: %v", normalizedJobID, err)
	}
}

func (r *Plugin) persistActivitiesSnapshot() {
	r.activitiesMu.Lock()
	r.dirtyActivities = false
	activities := append([]JobActivity(nil), r.activities...)
	r.activitiesMu.Unlock()

	if len(activities) == 0 {
		return
	}

	if len(activities) > maxActivityRecords {
		activities = activities[len(activities)-maxActivityRecords:]
	}

	if err := r.store.SaveActivities(activities); err != nil {
		glog.Warningf("Plugin failed to persist activities: %v", err)
	}
}

func (r *Plugin) persistenceLoop() {
	defer r.wg.Done()
	for {
		select {
		case <-r.shutdownCh:
			r.persistTrackedJobsSnapshot()
			r.persistActivitiesSnapshot()
			return
		case <-r.persistTicker.C:
			r.jobsMu.RLock()
			needsJobsFlush := r.dirtyJobs
			r.jobsMu.RUnlock()
			if needsJobsFlush {
				r.persistTrackedJobsSnapshot()
			}

			r.activitiesMu.RLock()
			needsActivitiesFlush := r.dirtyActivities
			r.activitiesMu.RUnlock()
			if needsActivitiesFlush {
				r.persistActivitiesSnapshot()
			}
		}
	}
}

weed/admin/plugin/plugin_monitor_test.go (new file, 600 lines)
@@ -0,0 +1,600 @@
package plugin

import (
	"testing"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func TestPluginLoadsPersistedMonitorStateOnStart(t *testing.T) {
	t.Parallel()

	dataDir := t.TempDir()
	store, err := NewConfigStore(dataDir)
	if err != nil {
		t.Fatalf("NewConfigStore: %v", err)
	}

	seedJobs := []TrackedJob{
		{
			JobID:     "job-seeded",
			JobType:   "vacuum",
			State:     "running",
			CreatedAt: timeToPtr(time.Now().UTC().Add(-2 * time.Minute)),
			UpdatedAt: timeToPtr(time.Now().UTC().Add(-1 * time.Minute)),
		},
	}
	seedActivities := []JobActivity{
		{
			JobID:      "job-seeded",
			JobType:    "vacuum",
			Source:     "worker_progress",
			Message:    "seeded",
			OccurredAt: timeToPtr(time.Now().UTC().Add(-30 * time.Second)),
		},
	}

	if err := store.SaveTrackedJobs(seedJobs); err != nil {
		t.Fatalf("SaveTrackedJobs: %v", err)
	}
	if err := store.SaveActivities(seedActivities); err != nil {
		t.Fatalf("SaveActivities: %v", err)
	}

	pluginSvc, err := New(Options{DataDir: dataDir})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	gotJobs := pluginSvc.ListTrackedJobs("", "", 0)
	if len(gotJobs) != 1 || gotJobs[0].JobID != "job-seeded" {
		t.Fatalf("unexpected loaded jobs: %+v", gotJobs)
	}

	gotActivities := pluginSvc.ListActivities("", 0)
	if len(gotActivities) != 1 || gotActivities[0].Message != "seeded" {
		t.Fatalf("unexpected loaded activities: %+v", gotActivities)
	}
}

func TestPluginPersistsMonitorStateAfterJobUpdates(t *testing.T) {
	t.Parallel()

	dataDir := t.TempDir()
	pluginSvc, err := New(Options{DataDir: dataDir})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	job := &plugin_pb.JobSpec{
		JobId:   "job-persist",
		JobType: "vacuum",
		Summary: "persist test",
	}
	pluginSvc.trackExecutionStart("req-persist", "worker-a", job, 1)

	pluginSvc.trackExecutionCompletion(&plugin_pb.JobCompleted{
		RequestId:   "req-persist",
		JobId:       "job-persist",
		JobType:     "vacuum",
		Success:     true,
		Result:      &plugin_pb.JobResult{Summary: "done"},
		CompletedAt: timestamppb.New(time.Now().UTC()),
	})
	pluginSvc.Shutdown()

	store, err := NewConfigStore(dataDir)
	if err != nil {
		t.Fatalf("NewConfigStore: %v", err)
	}

	trackedJobs, err := store.LoadTrackedJobs()
	if err != nil {
		t.Fatalf("LoadTrackedJobs: %v", err)
	}
	if len(trackedJobs) == 0 {
		t.Fatalf("expected persisted tracked jobs")
	}

	found := false
	for _, tracked := range trackedJobs {
		if tracked.JobID == "job-persist" {
			found = true
			if tracked.State == "" {
				t.Fatalf("persisted job state should not be empty")
			}
		}
	}
	if !found {
		t.Fatalf("persisted tracked jobs missing job-persist")
	}

	activities, err := store.LoadActivities()
	if err != nil {
		t.Fatalf("LoadActivities: %v", err)
	}
	if len(activities) == 0 {
		t.Fatalf("expected persisted activities")
	}
}

func TestTrackExecutionQueuedMarksPendingState(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	pluginSvc.trackExecutionQueued(&plugin_pb.JobSpec{
		JobId:     "job-pending-1",
		JobType:   "vacuum",
		DedupeKey: "vacuum:1",
		Summary:   "pending queue item",
	})

	jobs := pluginSvc.ListTrackedJobs("vacuum", "", 10)
	if len(jobs) != 1 {
		t.Fatalf("expected one tracked pending job, got=%d", len(jobs))
	}
	job := jobs[0]
	if job.JobID != "job-pending-1" {
		t.Fatalf("unexpected pending job id: %s", job.JobID)
	}
	if job.State != "job_state_pending" {
		t.Fatalf("unexpected pending job state: %s", job.State)
	}
	if job.Stage != "queued" {
		t.Fatalf("unexpected pending job stage: %s", job.Stage)
	}

	activities := pluginSvc.ListActivities("vacuum", 50)
	found := false
	for _, activity := range activities {
		if activity.JobID == "job-pending-1" && activity.Stage == "queued" && activity.Source == "admin_scheduler" {
			found = true
			break
		}
	}
	if !found {
		t.Fatalf("expected queued activity for pending job")
	}
}

func TestHandleJobProgressUpdateCarriesWorkerIDInActivities(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	job := &plugin_pb.JobSpec{
		JobId:   "job-progress-worker",
		JobType: "vacuum",
	}
	pluginSvc.trackExecutionStart("req-progress-worker", "worker-a", job, 1)

	pluginSvc.handleJobProgressUpdate("worker-a", &plugin_pb.JobProgressUpdate{
		RequestId:       "req-progress-worker",
		JobId:           "job-progress-worker",
		JobType:         "vacuum",
		State:           plugin_pb.JobState_JOB_STATE_RUNNING,
		ProgressPercent: 42.0,
		Stage:           "scan",
		Message:         "in progress",
		Activities: []*plugin_pb.ActivityEvent{
			{
				Source:  plugin_pb.ActivitySource_ACTIVITY_SOURCE_EXECUTOR,
				Message: "volume scanned",
				Stage:   "scan",
			},
		},
	})

	activities := pluginSvc.ListActivities("vacuum", 0)
	if len(activities) == 0 {
		t.Fatalf("expected activity entries")
	}

	foundProgress := false
	foundEvent := false
	for _, activity := range activities {
		if activity.Source == "worker_progress" && activity.Message == "in progress" {
			foundProgress = true
			if activity.WorkerID != "worker-a" {
				t.Fatalf("worker_progress activity worker mismatch: got=%q want=%q", activity.WorkerID, "worker-a")
			}
		}
		if activity.Message == "volume scanned" {
			foundEvent = true
			if activity.WorkerID != "worker-a" {
				t.Fatalf("worker event worker mismatch: got=%q want=%q", activity.WorkerID, "worker-a")
			}
		}
	}

	if !foundProgress {
		t.Fatalf("expected worker_progress activity")
	}
	if !foundEvent {
		t.Fatalf("expected worker activity event")
	}
}

func TestHandleJobProgressUpdateWithoutJobIDTracksDetectionActivities(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	pluginSvc.handleJobProgressUpdate("worker-detector", &plugin_pb.JobProgressUpdate{
		RequestId: "detect-req-1",
		JobType:   "vacuum",
		State:     plugin_pb.JobState_JOB_STATE_RUNNING,
		Stage:     "decision_summary",
		Message:   "VACUUM: No tasks created for 3 volumes",
		Activities: []*plugin_pb.ActivityEvent{
			{
				Source:  plugin_pb.ActivitySource_ACTIVITY_SOURCE_DETECTOR,
				Stage:   "decision_summary",
				Message: "VACUUM: No tasks created for 3 volumes",
			},
		},
	})

	activities := pluginSvc.ListActivities("vacuum", 0)
	if len(activities) == 0 {
		t.Fatalf("expected activity entries")
	}

	foundDetectionProgress := false
	foundDetectorEvent := false
	for _, activity := range activities {
		if activity.RequestID != "detect-req-1" {
			continue
		}
		if activity.Source == "worker_detection" {
			foundDetectionProgress = true
			if activity.WorkerID != "worker-detector" {
				t.Fatalf("worker_detection worker mismatch: got=%q want=%q", activity.WorkerID, "worker-detector")
			}
		}
		if activity.Source == "activity_source_detector" {
			foundDetectorEvent = true
			if activity.WorkerID != "worker-detector" {
				t.Fatalf("detector event worker mismatch: got=%q want=%q", activity.WorkerID, "worker-detector")
			}
		}
	}

	if !foundDetectionProgress {
		t.Fatalf("expected worker_detection activity")
	}
	if !foundDetectorEvent {
		t.Fatalf("expected detector activity event")
	}
}

func TestHandleJobCompletedCarriesWorkerIDInActivitiesAndRunHistory(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	job := &plugin_pb.JobSpec{
		JobId:   "job-complete-worker",
		JobType: "vacuum",
	}
	pluginSvc.trackExecutionStart("req-complete-worker", "worker-b", job, 1)

	pluginSvc.handleJobCompleted(&plugin_pb.JobCompleted{
		RequestId: "req-complete-worker",
		JobId:     "job-complete-worker",
		JobType:   "vacuum",
		Success:   true,
		Activities: []*plugin_pb.ActivityEvent{
			{
				Source:  plugin_pb.ActivitySource_ACTIVITY_SOURCE_EXECUTOR,
				Message: "finalizer done",
				Stage:   "finalize",
			},
		},
		CompletedAt: timestamppb.Now(),
	})
	pluginSvc.Shutdown()

	activities := pluginSvc.ListActivities("vacuum", 0)
	foundWorkerEvent := false
	for _, activity := range activities {
		if activity.Message == "finalizer done" {
			foundWorkerEvent = true
			if activity.WorkerID != "worker-b" {
				t.Fatalf("worker completion event worker mismatch: got=%q want=%q", activity.WorkerID, "worker-b")
			}
		}
	}
	if !foundWorkerEvent {
		t.Fatalf("expected completion worker event activity")
	}

	history, err := pluginSvc.LoadRunHistory("vacuum")
	if err != nil {
		t.Fatalf("LoadRunHistory: %v", err)
	}
	if history == nil || len(history.SuccessfulRuns) == 0 {
		t.Fatalf("expected successful run history entry")
	}
	if history.SuccessfulRuns[0].WorkerID != "worker-b" {
		t.Fatalf("run history worker mismatch: got=%q want=%q", history.SuccessfulRuns[0].WorkerID, "worker-b")
	}
}

func TestTrackExecutionStartStoresJobPayloadDetails(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{DataDir: t.TempDir()})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	pluginSvc.trackExecutionStart("req-payload", "worker-c", &plugin_pb.JobSpec{
		JobId:   "job-payload",
		JobType: "vacuum",
		Summary: "payload summary",
		Detail:  "payload detail",
		Parameters: map[string]*plugin_pb.ConfigValue{
			"volume_id": {
				Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 9},
			},
		},
		Labels: map[string]string{
			"source": "detector",
		},
	}, 2)
	pluginSvc.Shutdown()

	job, found := pluginSvc.GetTrackedJob("job-payload")
	if !found || job == nil {
		t.Fatalf("expected tracked job")
	}
	if job.Detail != "" {
		t.Fatalf("expected in-memory tracked job detail to be stripped, got=%q", job.Detail)
	}
	if job.Attempt != 2 {
		t.Fatalf("unexpected attempt: %d", job.Attempt)
	}
	if len(job.Labels) != 0 {
		t.Fatalf("expected in-memory labels to be stripped, got=%+v", job.Labels)
	}
	if len(job.Parameters) != 0 {
		t.Fatalf("expected in-memory parameters to be stripped, got=%+v", job.Parameters)
	}

	detail, found, err := pluginSvc.BuildJobDetail("job-payload", 100, 0)
	if err != nil {
		t.Fatalf("BuildJobDetail: %v", err)
	}
	if !found || detail == nil || detail.Job == nil {
		t.Fatalf("expected disk-backed job detail")
	}
	if detail.Job.Detail != "payload detail" {
		t.Fatalf("unexpected disk-backed detail: %q", detail.Job.Detail)
	}
	if got := detail.Job.Labels["source"]; got != "detector" {
		t.Fatalf("unexpected disk-backed label source: %q", got)
	}
	if got, ok := detail.Job.Parameters["volume_id"].(map[string]interface{}); !ok || got["int64_value"] != "9" {
		t.Fatalf("unexpected disk-backed parameters payload: %#v", detail.Job.Parameters["volume_id"])
	}
}

func TestTrackExecutionStartStoresErasureCodingExecutionPlan(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{DataDir: t.TempDir()})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	taskParams := &worker_pb.TaskParams{
		TaskId:     "task-ec-1",
		VolumeId:   29,
		Collection: "photos",
		Sources: []*worker_pb.TaskSource{
			{
				Node:       "source-a:8080",
				DataCenter: "dc1",
				Rack:       "rack1",
				VolumeId:   29,
			},
		},
		Targets: []*worker_pb.TaskTarget{
			{
				Node:       "target-a:8080",
				DataCenter: "dc1",
				Rack:       "rack2",
				VolumeId:   29,
				ShardIds:   []uint32{0, 10},
			},
			{
				Node:       "target-b:8080",
				DataCenter: "dc2",
				Rack:       "rack3",
				VolumeId:   29,
				ShardIds:   []uint32{1, 11},
			},
		},
		TaskParams: &worker_pb.TaskParams_ErasureCodingParams{
			ErasureCodingParams: &worker_pb.ErasureCodingTaskParams{
				DataShards:   10,
				ParityShards: 4,
			},
		},
	}
	payload, err := proto.Marshal(taskParams)
	if err != nil {
		t.Fatalf("Marshal task params: %v", err)
	}

	pluginSvc.trackExecutionStart("req-ec-plan", "worker-ec", &plugin_pb.JobSpec{
		JobId:   "job-ec-plan",
		JobType: "erasure_coding",
		Parameters: map[string]*plugin_pb.ConfigValue{
			"task_params_pb": {
				Kind: &plugin_pb.ConfigValue_BytesValue{BytesValue: payload},
			},
		},
	}, 1)
	pluginSvc.Shutdown()

	detail, found, err := pluginSvc.BuildJobDetail("job-ec-plan", 100, 0)
	if err != nil {
		t.Fatalf("BuildJobDetail: %v", err)
	}
	if !found || detail == nil || detail.Job == nil {
		t.Fatalf("expected disk-backed detail")
	}

	rawPlan, ok := detail.Job.Parameters["execution_plan"]
	if !ok {
		t.Fatalf("expected execution_plan in parameters, got=%+v", detail.Job.Parameters)
	}
	plan, ok := rawPlan.(map[string]interface{})
	if !ok {
		t.Fatalf("unexpected execution_plan type: %T", rawPlan)
	}
	if plan["job_type"] != "erasure_coding" {
		t.Fatalf("unexpected execution plan job type: %+v", plan["job_type"])
	}
	if plan["volume_id"] != float64(29) {
		t.Fatalf("unexpected execution plan volume id: %+v", plan["volume_id"])
	}
	targets, ok := plan["targets"].([]interface{})
	if !ok || len(targets) != 2 {
		t.Fatalf("unexpected targets in execution plan: %+v", plan["targets"])
	}
	assignments, ok := plan["shard_assignments"].([]interface{})
	if !ok || len(assignments) != 4 {
		t.Fatalf("unexpected shard assignments in execution plan: %+v", plan["shard_assignments"])
	}
	firstAssignment, ok := assignments[0].(map[string]interface{})
	if !ok {
		t.Fatalf("unexpected first assignment payload: %+v", assignments[0])
	}
	if firstAssignment["shard_id"] != float64(0) || firstAssignment["kind"] != "data" {
		t.Fatalf("unexpected first assignment: %+v", firstAssignment)
	}
}

func TestBuildJobDetailIncludesActivitiesAndRunRecord(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{DataDir: t.TempDir()})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	pluginSvc.trackExecutionStart("req-detail", "worker-z", &plugin_pb.JobSpec{
		JobId:   "job-detail",
		JobType: "vacuum",
		Summary: "detail summary",
	}, 1)
	pluginSvc.handleJobProgressUpdate("worker-z", &plugin_pb.JobProgressUpdate{
		RequestId: "req-detail",
		JobId:     "job-detail",
		JobType:   "vacuum",
		State:     plugin_pb.JobState_JOB_STATE_RUNNING,
		Stage:     "scan",
		Message:   "scanning volume",
	})
	pluginSvc.handleJobCompleted(&plugin_pb.JobCompleted{
		RequestId: "req-detail",
		JobId:     "job-detail",
		JobType:   "vacuum",
		Success:   true,
		Result: &plugin_pb.JobResult{
			Summary: "done",
			OutputValues: map[string]*plugin_pb.ConfigValue{
				"affected": {
					Kind: &plugin_pb.ConfigValue_Int64Value{Int64Value: 1},
				},
			},
		},
		CompletedAt: timestamppb.Now(),
	})
	pluginSvc.Shutdown()

	detail, found, err := pluginSvc.BuildJobDetail("job-detail", 100, 5)
	if err != nil {
		t.Fatalf("BuildJobDetail error: %v", err)
	}
	if !found || detail == nil {
		t.Fatalf("expected job detail")
	}
	if detail.Job == nil || detail.Job.JobID != "job-detail" {
		t.Fatalf("unexpected job detail payload: %+v", detail.Job)
	}
	if detail.RunRecord == nil || detail.RunRecord.JobID != "job-detail" {
		t.Fatalf("expected run record for job-detail, got=%+v", detail.RunRecord)
	}
	if len(detail.Activities) == 0 {
		t.Fatalf("expected activity timeline entries")
	}
	if detail.Job.ResultOutputValues == nil {
		t.Fatalf("expected result output values")
	}
}

func TestBuildJobDetailLoadsFromDiskWhenMemoryCleared(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{DataDir: t.TempDir()})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	pluginSvc.trackExecutionStart("req-disk", "worker-d", &plugin_pb.JobSpec{
		JobId:   "job-disk",
		JobType: "vacuum",
		Summary: "disk summary",
		Detail:  "disk detail payload",
	}, 1)
	pluginSvc.Shutdown()

	pluginSvc.jobsMu.Lock()
	pluginSvc.jobs = map[string]*TrackedJob{}
	pluginSvc.jobsMu.Unlock()
	pluginSvc.activitiesMu.Lock()
	pluginSvc.activities = nil
	pluginSvc.activitiesMu.Unlock()

	detail, found, err := pluginSvc.BuildJobDetail("job-disk", 100, 0)
	if err != nil {
		t.Fatalf("BuildJobDetail: %v", err)
	}
	if !found || detail == nil || detail.Job == nil {
		t.Fatalf("expected detail from disk")
	}
	if detail.Job.Detail != "disk detail payload" {
		t.Fatalf("unexpected disk detail payload: %q", detail.Job.Detail)
	}
}

945
weed/admin/plugin/plugin_scheduler.go
Normal file
@@ -0,0 +1,945 @@
package plugin

import (
	"context"
	"errors"
	"fmt"
	"strings"
	"sync"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
	"google.golang.org/protobuf/types/known/timestamppb"
)

var errExecutorAtCapacity = errors.New("executor is at capacity")

const (
	defaultSchedulerTick                       = 5 * time.Second
	defaultScheduledDetectionInterval          = 300 * time.Second
	defaultScheduledDetectionTimeout           = 45 * time.Second
	defaultScheduledExecutionTimeout           = 90 * time.Second
	defaultScheduledMaxResults           int32 = 1000
	defaultScheduledExecutionConcurrency       = 1
	defaultScheduledPerWorkerConcurrency       = 1
	maxScheduledExecutionConcurrency           = 128
	defaultScheduledRetryBackoff               = 5 * time.Second
	defaultClusterContextTimeout               = 10 * time.Second
	defaultWaitingBacklogFloor                 = 8
	defaultWaitingBacklogMultiplier            = 4
)

type schedulerPolicy struct {
	DetectionInterval      time.Duration
	DetectionTimeout       time.Duration
	ExecutionTimeout       time.Duration
	RetryBackoff           time.Duration
	MaxResults             int32
	ExecutionConcurrency   int
	PerWorkerConcurrency   int
	RetryLimit             int
	ExecutorReserveBackoff time.Duration
}

func (r *Plugin) schedulerLoop() {
	defer r.wg.Done()
	ticker := time.NewTicker(r.schedulerTick)
	defer ticker.Stop()

	// Try once immediately on startup.
	r.runSchedulerTick()

	for {
		select {
		case <-r.shutdownCh:
			return
		case <-ticker.C:
			r.runSchedulerTick()
		}
	}
}

func (r *Plugin) runSchedulerTick() {
	jobTypes := r.registry.DetectableJobTypes()
	if len(jobTypes) == 0 {
		return
	}

	active := make(map[string]struct{}, len(jobTypes))
	for _, jobType := range jobTypes {
		active[jobType] = struct{}{}

		policy, enabled, err := r.loadSchedulerPolicy(jobType)
		if err != nil {
			glog.Warningf("Plugin scheduler failed to load policy for %s: %v", jobType, err)
			continue
		}
		if !enabled {
			r.clearSchedulerJobType(jobType)
			continue
		}

		if !r.markDetectionDue(jobType, policy.DetectionInterval) {
			continue
		}

		r.wg.Add(1)
		go func(jt string, p schedulerPolicy) {
			defer r.wg.Done()
			r.runScheduledDetection(jt, p)
		}(jobType, policy)
	}

	r.pruneSchedulerState(active)
	r.pruneDetectorLeases(active)
}

func (r *Plugin) loadSchedulerPolicy(jobType string) (schedulerPolicy, bool, error) {
	cfg, err := r.store.LoadJobTypeConfig(jobType)
	if err != nil {
		return schedulerPolicy{}, false, err
	}
	descriptor, err := r.store.LoadDescriptor(jobType)
	if err != nil {
		return schedulerPolicy{}, false, err
	}

	adminRuntime := deriveSchedulerAdminRuntime(cfg, descriptor)
	if adminRuntime == nil {
		return schedulerPolicy{}, false, nil
	}
	if !adminRuntime.Enabled {
		return schedulerPolicy{}, false, nil
	}

	policy := schedulerPolicy{
		DetectionInterval:      durationFromSeconds(adminRuntime.DetectionIntervalSeconds, defaultScheduledDetectionInterval),
		DetectionTimeout:       durationFromSeconds(adminRuntime.DetectionTimeoutSeconds, defaultScheduledDetectionTimeout),
		ExecutionTimeout:       defaultScheduledExecutionTimeout,
		RetryBackoff:           durationFromSeconds(adminRuntime.RetryBackoffSeconds, defaultScheduledRetryBackoff),
		MaxResults:             adminRuntime.MaxJobsPerDetection,
		ExecutionConcurrency:   int(adminRuntime.GlobalExecutionConcurrency),
		PerWorkerConcurrency:   int(adminRuntime.PerWorkerExecutionConcurrency),
		RetryLimit:             int(adminRuntime.RetryLimit),
		ExecutorReserveBackoff: 200 * time.Millisecond,
	}

	if policy.DetectionInterval < r.schedulerTick {
		policy.DetectionInterval = r.schedulerTick
	}
	if policy.MaxResults <= 0 {
		policy.MaxResults = defaultScheduledMaxResults
	}
	if policy.ExecutionConcurrency <= 0 {
		policy.ExecutionConcurrency = defaultScheduledExecutionConcurrency
	}
	if policy.ExecutionConcurrency > maxScheduledExecutionConcurrency {
		policy.ExecutionConcurrency = maxScheduledExecutionConcurrency
	}
	if policy.PerWorkerConcurrency <= 0 {
		policy.PerWorkerConcurrency = defaultScheduledPerWorkerConcurrency
	}
	if policy.PerWorkerConcurrency > policy.ExecutionConcurrency {
		policy.PerWorkerConcurrency = policy.ExecutionConcurrency
	}
	if policy.RetryLimit < 0 {
		policy.RetryLimit = 0
	}

	// The plugin protocol currently only carries a detection timeout in admin
	// settings, so derive the execution timeout from it with a floor.
	execTimeout := time.Duration(adminRuntime.DetectionTimeoutSeconds*2) * time.Second
	if execTimeout < defaultScheduledExecutionTimeout {
		execTimeout = defaultScheduledExecutionTimeout
	}
	policy.ExecutionTimeout = execTimeout

	return policy, true, nil
}

func (r *Plugin) ListSchedulerStates() ([]SchedulerJobTypeState, error) {
	jobTypes, err := r.ListKnownJobTypes()
	if err != nil {
		return nil, err
	}

	r.schedulerMu.Lock()
	nextDetectionAt := make(map[string]time.Time, len(r.nextDetectionAt))
	for jobType, nextRun := range r.nextDetectionAt {
		nextDetectionAt[jobType] = nextRun
	}
	detectionInFlight := make(map[string]bool, len(r.detectionInFlight))
	for jobType, inFlight := range r.detectionInFlight {
		detectionInFlight[jobType] = inFlight
	}
	r.schedulerMu.Unlock()

	states := make([]SchedulerJobTypeState, 0, len(jobTypes))
	for _, jobType := range jobTypes {
		state := SchedulerJobTypeState{
			JobType:           jobType,
			DetectionInFlight: detectionInFlight[jobType],
		}

		if nextRun, ok := nextDetectionAt[jobType]; ok && !nextRun.IsZero() {
			nextRunUTC := nextRun.UTC()
			state.NextDetectionAt = &nextRunUTC
		}

		policy, enabled, loadErr := r.loadSchedulerPolicy(jobType)
		if loadErr != nil {
			state.PolicyError = loadErr.Error()
		} else {
			state.Enabled = enabled
			if enabled {
				state.DetectionIntervalSeconds = secondsFromDuration(policy.DetectionInterval)
				state.DetectionTimeoutSeconds = secondsFromDuration(policy.DetectionTimeout)
				state.ExecutionTimeoutSeconds = secondsFromDuration(policy.ExecutionTimeout)
				state.MaxJobsPerDetection = policy.MaxResults
				state.GlobalExecutionConcurrency = policy.ExecutionConcurrency
				state.PerWorkerExecutionConcurrency = policy.PerWorkerConcurrency
				state.RetryLimit = policy.RetryLimit
				state.RetryBackoffSeconds = secondsFromDuration(policy.RetryBackoff)
			}
		}

		leasedWorkerID := r.getDetectorLease(jobType)
		if leasedWorkerID != "" {
			state.DetectorWorkerID = leasedWorkerID
			if worker, ok := r.registry.Get(leasedWorkerID); ok {
				if capability := worker.Capabilities[jobType]; capability != nil && capability.CanDetect {
					state.DetectorAvailable = true
				}
			}
		}
		if state.DetectorWorkerID == "" {
			detector, detectorErr := r.registry.PickDetector(jobType)
			if detectorErr == nil && detector != nil {
				state.DetectorAvailable = true
				state.DetectorWorkerID = detector.WorkerID
			}
		}

		executors, executorErr := r.registry.ListExecutors(jobType)
		if executorErr == nil {
			state.ExecutorWorkerCount = len(executors)
		}

		states = append(states, state)
	}

	return states, nil
}

func deriveSchedulerAdminRuntime(
	cfg *plugin_pb.PersistedJobTypeConfig,
	descriptor *plugin_pb.JobTypeDescriptor,
) *plugin_pb.AdminRuntimeConfig {
	if cfg != nil && cfg.AdminRuntime != nil {
		adminConfig := *cfg.AdminRuntime
		return &adminConfig
	}

	if descriptor == nil || descriptor.AdminRuntimeDefaults == nil {
		return nil
	}

	defaults := descriptor.AdminRuntimeDefaults
	return &plugin_pb.AdminRuntimeConfig{
		Enabled:                       defaults.Enabled,
		DetectionIntervalSeconds:      defaults.DetectionIntervalSeconds,
		DetectionTimeoutSeconds:       defaults.DetectionTimeoutSeconds,
		MaxJobsPerDetection:           defaults.MaxJobsPerDetection,
		GlobalExecutionConcurrency:    defaults.GlobalExecutionConcurrency,
		PerWorkerExecutionConcurrency: defaults.PerWorkerExecutionConcurrency,
		RetryLimit:                    defaults.RetryLimit,
		RetryBackoffSeconds:           defaults.RetryBackoffSeconds,
	}
}

func (r *Plugin) markDetectionDue(jobType string, interval time.Duration) bool {
	now := time.Now().UTC()

	r.schedulerMu.Lock()
	defer r.schedulerMu.Unlock()

	if r.detectionInFlight[jobType] {
		return false
	}

	nextRun, exists := r.nextDetectionAt[jobType]
	if exists && now.Before(nextRun) {
		return false
	}

	r.nextDetectionAt[jobType] = now.Add(interval)
	r.detectionInFlight[jobType] = true
	return true
}

func (r *Plugin) finishDetection(jobType string) {
	r.schedulerMu.Lock()
	delete(r.detectionInFlight, jobType)
	r.schedulerMu.Unlock()
}

func (r *Plugin) pruneSchedulerState(activeJobTypes map[string]struct{}) {
	r.schedulerMu.Lock()
	defer r.schedulerMu.Unlock()

	for jobType := range r.nextDetectionAt {
		if _, ok := activeJobTypes[jobType]; !ok {
			delete(r.nextDetectionAt, jobType)
			delete(r.detectionInFlight, jobType)
		}
	}
}

func (r *Plugin) clearSchedulerJobType(jobType string) {
	r.schedulerMu.Lock()
	delete(r.nextDetectionAt, jobType)
	delete(r.detectionInFlight, jobType)
	r.schedulerMu.Unlock()
	r.clearDetectorLease(jobType, "")
}

func (r *Plugin) pruneDetectorLeases(activeJobTypes map[string]struct{}) {
	r.detectorLeaseMu.Lock()
	defer r.detectorLeaseMu.Unlock()

	for jobType := range r.detectorLeases {
		if _, ok := activeJobTypes[jobType]; !ok {
			delete(r.detectorLeases, jobType)
		}
	}
}

func (r *Plugin) runScheduledDetection(jobType string, policy schedulerPolicy) {
	defer r.finishDetection(jobType)

	start := time.Now().UTC()
	r.appendActivity(JobActivity{
		JobType:    jobType,
		Source:     "admin_scheduler",
		Message:    "scheduled detection started",
		Stage:      "detecting",
		OccurredAt: timeToPtr(start),
	})

	if skip, waitingCount, waitingThreshold := r.shouldSkipDetectionForWaitingJobs(jobType, policy); skip {
		r.appendActivity(JobActivity{
			JobType:    jobType,
			Source:     "admin_scheduler",
			Message:    fmt.Sprintf("scheduled detection skipped: waiting backlog %d reached threshold %d", waitingCount, waitingThreshold),
			Stage:      "skipped_waiting_backlog",
			OccurredAt: timeToPtr(time.Now().UTC()),
		})
		return
	}

	clusterContext, err := r.loadSchedulerClusterContext()
	if err != nil {
		r.appendActivity(JobActivity{
			JobType:    jobType,
			Source:     "admin_scheduler",
			Message:    fmt.Sprintf("scheduled detection aborted: %v", err),
			Stage:      "failed",
			OccurredAt: timeToPtr(time.Now().UTC()),
		})
		return
	}

	ctx, cancel := context.WithTimeout(context.Background(), policy.DetectionTimeout)
	proposals, err := r.RunDetection(ctx, jobType, clusterContext, policy.MaxResults)
	cancel()
	if err != nil {
		r.appendActivity(JobActivity{
			JobType:    jobType,
			Source:     "admin_scheduler",
			Message:    fmt.Sprintf("scheduled detection failed: %v", err),
			Stage:      "failed",
			OccurredAt: timeToPtr(time.Now().UTC()),
		})
		return
	}

	r.appendActivity(JobActivity{
		JobType:    jobType,
		Source:     "admin_scheduler",
		Message:    fmt.Sprintf("scheduled detection completed: %d proposal(s)", len(proposals)),
		Stage:      "detected",
		OccurredAt: timeToPtr(time.Now().UTC()),
	})

	filteredByActive, skippedActive := r.filterProposalsWithActiveJobs(jobType, proposals)
	if skippedActive > 0 {
		r.appendActivity(JobActivity{
			JobType:    jobType,
			Source:     "admin_scheduler",
			Message:    fmt.Sprintf("scheduled detection skipped %d proposal(s) due to active assigned/running jobs", skippedActive),
			Stage:      "deduped_active_jobs",
			OccurredAt: timeToPtr(time.Now().UTC()),
		})
	}

	if len(filteredByActive) == 0 {
		return
	}

	filtered := r.filterScheduledProposals(filteredByActive)
	if len(filtered) != len(filteredByActive) {
		r.appendActivity(JobActivity{
			JobType:    jobType,
			Source:     "admin_scheduler",
			Message:    fmt.Sprintf("scheduled detection deduped %d proposal(s) within this run", len(filteredByActive)-len(filtered)),
			Stage:      "deduped",
			OccurredAt: timeToPtr(time.Now().UTC()),
		})
	}

	if len(filtered) == 0 {
		return
	}

	r.dispatchScheduledProposals(jobType, filtered, clusterContext, policy)
}

func (r *Plugin) loadSchedulerClusterContext() (*plugin_pb.ClusterContext, error) {
	if r.clusterContextProvider == nil {
		return nil, fmt.Errorf("cluster context provider is not configured")
	}

	ctx, cancel := context.WithTimeout(context.Background(), defaultClusterContextTimeout)
	defer cancel()

	clusterContext, err := r.clusterContextProvider(ctx)
	if err != nil {
		return nil, err
	}
	if clusterContext == nil {
		return nil, fmt.Errorf("cluster context provider returned nil")
	}
	return clusterContext, nil
}

func (r *Plugin) dispatchScheduledProposals(
	jobType string,
	proposals []*plugin_pb.JobProposal,
	clusterContext *plugin_pb.ClusterContext,
	policy schedulerPolicy,
) {
	jobQueue := make(chan *plugin_pb.JobSpec, len(proposals))
	for index, proposal := range proposals {
		job := buildScheduledJobSpec(jobType, proposal, index)
		r.trackExecutionQueued(job)
		select {
		case <-r.shutdownCh:
			close(jobQueue)
			return
		default:
			jobQueue <- job
		}
	}
	close(jobQueue)

	var wg sync.WaitGroup
	var statsMu sync.Mutex
	successCount := 0
	errorCount := 0

	workerCount := policy.ExecutionConcurrency
	if workerCount < 1 {
		workerCount = 1
	}

	for i := 0; i < workerCount; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()

			for job := range jobQueue {
				select {
				case <-r.shutdownCh:
					return
				default:
				}

				for {
					select {
					case <-r.shutdownCh:
						return
					default:
					}

					executor, release, reserveErr := r.reserveScheduledExecutor(jobType, policy)
					if reserveErr != nil {
						select {
						case <-r.shutdownCh:
							return
						default:
						}
						statsMu.Lock()
						errorCount++
						statsMu.Unlock()
						r.appendActivity(JobActivity{
							JobType:    jobType,
							Source:     "admin_scheduler",
							Message:    fmt.Sprintf("scheduled execution reservation failed: %v", reserveErr),
							Stage:      "failed",
							OccurredAt: timeToPtr(time.Now().UTC()),
						})
						break
					}

					err := r.executeScheduledJobWithExecutor(executor, job, clusterContext, policy)
					release()
					if errors.Is(err, errExecutorAtCapacity) {
						r.trackExecutionQueued(job)
						if !waitForShutdownOrTimer(r.shutdownCh, policy.ExecutorReserveBackoff) {
							return
						}
						continue
					}
					if err != nil {
						statsMu.Lock()
						errorCount++
						statsMu.Unlock()
						r.appendActivity(JobActivity{
							JobID:      job.JobId,
							JobType:    job.JobType,
							Source:     "admin_scheduler",
							Message:    fmt.Sprintf("scheduled execution failed: %v", err),
							Stage:      "failed",
							OccurredAt: timeToPtr(time.Now().UTC()),
						})
						break
					}

					statsMu.Lock()
					successCount++
					statsMu.Unlock()
					break
				}
			}
		}()
	}

	wg.Wait()

	r.appendActivity(JobActivity{
		JobType:    jobType,
		Source:     "admin_scheduler",
		Message:    fmt.Sprintf("scheduled execution finished: success=%d error=%d", successCount, errorCount),
		Stage:      "executed",
		OccurredAt: timeToPtr(time.Now().UTC()),
	})
}

func (r *Plugin) reserveScheduledExecutor(
	jobType string,
	policy schedulerPolicy,
) (*WorkerSession, func(), error) {
	deadline := time.Now().Add(policy.ExecutionTimeout)
	if policy.ExecutionTimeout <= 0 {
		deadline = time.Now().Add(10 * time.Minute) // Default cap.
	}

	for {
		select {
		case <-r.shutdownCh:
			return nil, nil, fmt.Errorf("plugin is shutting down")
		default:
		}

		if time.Now().After(deadline) {
			return nil, nil, fmt.Errorf("timed out waiting for executor capacity for %s", jobType)
		}

		executors, err := r.registry.ListExecutors(jobType)
		if err != nil {
			if !waitForShutdownOrTimer(r.shutdownCh, policy.ExecutorReserveBackoff) {
				return nil, nil, fmt.Errorf("plugin is shutting down")
			}
			continue
		}

		for _, executor := range executors {
			release, ok := r.tryReserveExecutorCapacity(executor, jobType, policy)
			if !ok {
				continue
			}
			return executor, release, nil
		}

		if !waitForShutdownOrTimer(r.shutdownCh, policy.ExecutorReserveBackoff) {
			return nil, nil, fmt.Errorf("plugin is shutting down")
		}
	}
}

func (r *Plugin) tryReserveExecutorCapacity(
	executor *WorkerSession,
	jobType string,
	policy schedulerPolicy,
) (func(), bool) {
	if executor == nil || strings.TrimSpace(executor.WorkerID) == "" {
		return nil, false
	}

	limit := schedulerWorkerExecutionLimit(executor, jobType, policy)
	if limit <= 0 {
		return nil, false
	}
	heartbeatUsed := 0
	if executor.Heartbeat != nil && executor.Heartbeat.ExecutionSlotsUsed > 0 {
		heartbeatUsed = int(executor.Heartbeat.ExecutionSlotsUsed)
	}

	workerID := strings.TrimSpace(executor.WorkerID)

	r.schedulerExecMu.Lock()
	reserved := r.schedulerExecReservations[workerID]
	if heartbeatUsed+reserved >= limit {
		r.schedulerExecMu.Unlock()
		return nil, false
	}
	r.schedulerExecReservations[workerID] = reserved + 1
	r.schedulerExecMu.Unlock()

	release := func() {
		r.releaseExecutorCapacity(workerID)
	}
	return release, true
}

func (r *Plugin) releaseExecutorCapacity(workerID string) {
	workerID = strings.TrimSpace(workerID)
	if workerID == "" {
		return
	}

	r.schedulerExecMu.Lock()
	defer r.schedulerExecMu.Unlock()

	current := r.schedulerExecReservations[workerID]
	if current <= 1 {
		delete(r.schedulerExecReservations, workerID)
		return
	}
	r.schedulerExecReservations[workerID] = current - 1
}

func schedulerWorkerExecutionLimit(executor *WorkerSession, jobType string, policy schedulerPolicy) int {
	limit := policy.PerWorkerConcurrency
	if limit <= 0 {
		limit = defaultScheduledPerWorkerConcurrency
	}

	if capability := executor.Capabilities[jobType]; capability != nil && capability.MaxExecutionConcurrency > 0 {
		capLimit := int(capability.MaxExecutionConcurrency)
		if capLimit < limit {
			limit = capLimit
		}
	}

	if executor.Heartbeat != nil && executor.Heartbeat.ExecutionSlotsTotal > 0 {
		heartbeatLimit := int(executor.Heartbeat.ExecutionSlotsTotal)
		if heartbeatLimit < limit {
			limit = heartbeatLimit
		}
	}

	if limit < 0 {
		return 0
	}
	return limit
}

func (r *Plugin) executeScheduledJobWithExecutor(
|
||||
executor *WorkerSession,
|
||||
job *plugin_pb.JobSpec,
|
||||
clusterContext *plugin_pb.ClusterContext,
|
||||
policy schedulerPolicy,
|
||||
) error {
|
||||
maxAttempts := policy.RetryLimit + 1
|
||||
if maxAttempts < 1 {
|
||||
maxAttempts = 1
|
||||
}
|
||||
|
||||
var lastErr error
|
||||
for attempt := 1; attempt <= maxAttempts; attempt++ {
|
||||
select {
|
||||
case <-r.shutdownCh:
|
||||
return fmt.Errorf("plugin is shutting down")
|
||||
default:
|
||||
}
|
||||
|
||||
execCtx, cancel := context.WithTimeout(context.Background(), policy.ExecutionTimeout)
|
||||
_, err := r.executeJobWithExecutor(execCtx, executor, job, clusterContext, int32(attempt))
|
||||
cancel()
|
||||
if err == nil {
|
||||
return nil
|
||||
}
|
||||
if isExecutorAtCapacityError(err) {
|
||||
return errExecutorAtCapacity
|
||||
}
|
||||
lastErr = err
|
||||
|
||||
if attempt < maxAttempts {
|
||||
r.appendActivity(JobActivity{
|
||||
JobID: job.JobId,
|
||||
JobType: job.JobType,
|
||||
Source: "admin_scheduler",
|
||||
Message: fmt.Sprintf("retrying job attempt %d/%d after error: %v", attempt, maxAttempts, err),
|
||||
Stage: "retry",
|
||||
OccurredAt: timeToPtr(time.Now().UTC()),
|
||||
})
|
||||
if !waitForShutdownOrTimer(r.shutdownCh, policy.RetryBackoff) {
|
||||
return fmt.Errorf("plugin is shutting down")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if lastErr == nil {
|
||||
lastErr = fmt.Errorf("execution failed without an explicit error")
|
||||
}
|
||||
return lastErr
|
||||
}
|
||||
|
||||
func (r *Plugin) shouldSkipDetectionForWaitingJobs(jobType string, policy schedulerPolicy) (bool, int, int) {
|
||||
waitingCount := r.countWaitingTrackedJobs(jobType)
|
||||
threshold := waitingBacklogThreshold(policy)
|
||||
if threshold <= 0 {
|
||||
return false, waitingCount, threshold
|
||||
}
|
||||
return waitingCount >= threshold, waitingCount, threshold
|
||||
}
|
||||
|
||||
func (r *Plugin) countWaitingTrackedJobs(jobType string) int {
|
||||
normalizedJobType := strings.TrimSpace(jobType)
|
||||
if normalizedJobType == "" {
|
||||
return 0
|
||||
}
|
||||
|
||||
waiting := 0
|
||||
r.jobsMu.RLock()
|
||||
for _, job := range r.jobs {
|
||||
if job == nil {
|
||||
continue
|
||||
}
|
||||
if strings.TrimSpace(job.JobType) != normalizedJobType {
|
||||
continue
|
||||
}
|
||||
if !isWaitingTrackedJobState(job.State) {
|
||||
continue
|
||||
}
|
||||
waiting++
|
||||
}
|
||||
r.jobsMu.RUnlock()
|
||||
|
||||
return waiting
|
||||
}
|
||||
|
||||
func waitingBacklogThreshold(policy schedulerPolicy) int {
|
||||
concurrency := policy.ExecutionConcurrency
|
||||
if concurrency <= 0 {
|
||||
concurrency = defaultScheduledExecutionConcurrency
|
||||
}
|
||||
threshold := concurrency * defaultWaitingBacklogMultiplier
|
||||
if threshold < defaultWaitingBacklogFloor {
|
||||
threshold = defaultWaitingBacklogFloor
|
||||
}
|
||||
if policy.MaxResults > 0 && threshold > int(policy.MaxResults) {
|
||||
threshold = int(policy.MaxResults)
|
||||
}
|
||||
return threshold
|
||||
}
|
||||
|
||||
func isExecutorAtCapacityError(err error) bool {
|
||||
if err == nil {
|
||||
return false
|
||||
}
|
||||
if errors.Is(err, errExecutorAtCapacity) {
|
||||
return true
|
||||
}
|
||||
return strings.Contains(strings.ToLower(err.Error()), "executor is at capacity")
|
||||
}
|
||||
|
||||
func buildScheduledJobSpec(jobType string, proposal *plugin_pb.JobProposal, index int) *plugin_pb.JobSpec {
|
||||
now := timestamppb.Now()
|
||||
|
||||
jobID := fmt.Sprintf("%s-scheduled-%d-%d", jobType, now.AsTime().UnixNano(), index)
|
||||
|
||||
job := &plugin_pb.JobSpec{
|
||||
JobId: jobID,
|
||||
JobType: jobType,
|
||||
Priority: plugin_pb.JobPriority_JOB_PRIORITY_NORMAL,
|
||||
Parameters: map[string]*plugin_pb.ConfigValue{},
|
||||
Labels: map[string]string{},
|
||||
CreatedAt: now,
|
||||
ScheduledAt: now,
|
||||
}
|
||||
|
||||
if proposal == nil {
|
||||
return job
|
||||
}
|
||||
|
||||
if proposal.JobType != "" {
|
||||
job.JobType = proposal.JobType
|
||||
}
|
||||
job.Summary = proposal.Summary
|
||||
job.Detail = proposal.Detail
|
||||
if proposal.Priority != plugin_pb.JobPriority_JOB_PRIORITY_UNSPECIFIED {
|
||||
job.Priority = proposal.Priority
|
||||
}
|
||||
job.DedupeKey = proposal.DedupeKey
|
||||
job.Parameters = CloneConfigValueMap(proposal.Parameters)
|
||||
if proposal.Labels != nil {
|
||||
job.Labels = make(map[string]string, len(proposal.Labels))
|
||||
for k, v := range proposal.Labels {
|
||||
job.Labels[k] = v
|
||||
}
|
||||
}
|
||||
if proposal.NotBefore != nil {
|
||||
job.ScheduledAt = proposal.NotBefore
|
||||
}
|
||||
|
||||
return job
|
||||
}
|
||||
|
||||
func durationFromSeconds(seconds int32, defaultValue time.Duration) time.Duration {
|
||||
if seconds <= 0 {
|
||||
return defaultValue
|
||||
}
|
||||
return time.Duration(seconds) * time.Second
|
||||
}
|
||||
|
||||
func secondsFromDuration(duration time.Duration) int32 {
|
||||
if duration <= 0 {
|
||||
return 0
|
||||
}
|
||||
return int32(duration / time.Second)
|
||||
}
|
||||
|
||||
func waitForShutdownOrTimer(shutdown <-chan struct{}, duration time.Duration) bool {
|
||||
if duration <= 0 {
|
||||
return true
|
||||
}
|
||||
|
||||
timer := time.NewTimer(duration)
|
||||
defer timer.Stop()
|
||||
|
||||
select {
|
||||
case <-shutdown:
|
||||
return false
|
||||
case <-timer.C:
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
func (r *Plugin) filterProposalsWithActiveJobs(jobType string, proposals []*plugin_pb.JobProposal) ([]*plugin_pb.JobProposal, int) {
|
||||
if len(proposals) == 0 {
|
||||
return proposals, 0
|
||||
}
|
||||
|
||||
activeKeys := make(map[string]struct{})
|
||||
r.jobsMu.RLock()
|
||||
for _, job := range r.jobs {
|
||||
if job == nil {
|
||||
continue
|
||||
}
|
||||
if strings.TrimSpace(job.JobType) != strings.TrimSpace(jobType) {
|
||||
continue
|
||||
}
|
||||
if !isActiveTrackedJobState(job.State) {
|
||||
continue
|
||||
}
|
||||
|
||||
key := strings.TrimSpace(job.DedupeKey)
|
||||
if key == "" {
|
||||
key = strings.TrimSpace(job.JobID)
|
||||
}
|
||||
if key == "" {
|
||||
continue
|
||||
}
|
||||
activeKeys[key] = struct{}{}
|
||||
}
|
||||
r.jobsMu.RUnlock()
|
||||
|
||||
if len(activeKeys) == 0 {
|
||||
return proposals, 0
|
||||
}
|
||||
|
||||
filtered := make([]*plugin_pb.JobProposal, 0, len(proposals))
|
||||
skipped := 0
|
||||
for _, proposal := range proposals {
|
||||
if proposal == nil {
|
||||
continue
|
||||
}
|
||||
key := proposalExecutionKey(proposal)
|
||||
if key != "" {
|
||||
if _, exists := activeKeys[key]; exists {
|
||||
skipped++
|
||||
continue
|
||||
}
|
||||
}
|
||||
filtered = append(filtered, proposal)
|
||||
}
|
||||
|
||||
return filtered, skipped
|
||||
}
|
||||
|
||||
func proposalExecutionKey(proposal *plugin_pb.JobProposal) string {
|
||||
if proposal == nil {
|
||||
return ""
|
||||
}
|
||||
key := strings.TrimSpace(proposal.DedupeKey)
|
||||
if key != "" {
|
||||
return key
|
||||
}
|
||||
return strings.TrimSpace(proposal.ProposalId)
|
||||
}
|
||||
|
||||
func isActiveTrackedJobState(state string) bool {
|
||||
normalized := strings.ToLower(strings.TrimSpace(state))
|
||||
switch normalized {
|
||||
case "pending", "assigned", "running", "in_progress", "job_state_pending", "job_state_assigned", "job_state_running":
|
||||
return true
|
||||
default:
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
func isWaitingTrackedJobState(state string) bool {
|
||||
normalized := strings.ToLower(strings.TrimSpace(state))
|
||||
return normalized == "pending" || normalized == "job_state_pending"
|
||||
}
|
||||
|
||||
func (r *Plugin) filterScheduledProposals(proposals []*plugin_pb.JobProposal) []*plugin_pb.JobProposal {
|
||||
filtered := make([]*plugin_pb.JobProposal, 0, len(proposals))
|
||||
seenInRun := make(map[string]struct{}, len(proposals))
|
||||
|
||||
for _, proposal := range proposals {
|
||||
if proposal == nil {
|
||||
continue
|
||||
}
|
||||
|
||||
key := proposal.DedupeKey
|
||||
if key == "" {
|
||||
key = proposal.ProposalId
|
||||
}
|
||||
if key == "" {
|
||||
filtered = append(filtered, proposal)
|
||||
continue
|
||||
}
|
||||
|
||||
if _, exists := seenInRun[key]; exists {
|
||||
continue
|
||||
}
|
||||
|
||||
seenInRun[key] = struct{}{}
|
||||
filtered = append(filtered, proposal)
|
||||
}
|
||||
|
||||
return filtered
|
||||
}
|
||||
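In isolation, the reserve/release pair above amounts to a mutex-guarded counter per worker, checked against a limit that already accounts for heartbeat-reported slot usage. A minimal standalone sketch of that accounting (`reservationTracker` and its method names are illustrative, not the package's API):

```go
package main

import (
	"fmt"
	"sync"
)

// reservationTracker mirrors the scheduler's per-worker slot accounting:
// a reservation succeeds only while used+reserved stays below the limit,
// and releasing the last reservation removes the map entry entirely.
type reservationTracker struct {
	mu           sync.Mutex
	reservations map[string]int
}

func newReservationTracker() *reservationTracker {
	return &reservationTracker{reservations: make(map[string]int)}
}

// reserve returns a release func and true when capacity remains.
func (t *reservationTracker) reserve(workerID string, heartbeatUsed, limit int) (func(), bool) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if heartbeatUsed+t.reservations[workerID] >= limit {
		return nil, false
	}
	t.reservations[workerID]++
	return func() { t.release(workerID) }, true
}

func (t *reservationTracker) release(workerID string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.reservations[workerID] <= 1 {
		delete(t.reservations, workerID)
		return
	}
	t.reservations[workerID]--
}

func main() {
	t := newReservationTracker()
	release, ok := t.reserve("worker-a", 1, 2) // 1 heartbeat slot used, limit 2
	fmt.Println(ok)                            // true: one slot left
	_, ok = t.reserve("worker-a", 1, 2)
	fmt.Println(ok) // false: at capacity
	release()
	_, ok = t.reserve("worker-a", 1, 2)
	fmt.Println(ok) // true again after release
}
```

Handing the caller a closure that releases exactly one slot keeps the unlock path symmetric even when the scheduled execution later fails or retries.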
weed/admin/plugin/plugin_scheduler_test.go (new file, 583 lines)
@@ -0,0 +1,583 @@
package plugin

import (
	"fmt"
	"testing"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
)

func TestLoadSchedulerPolicyUsesAdminConfig(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	err = pluginSvc.SaveJobTypeConfig(&plugin_pb.PersistedJobTypeConfig{
		JobType: "vacuum",
		AdminRuntime: &plugin_pb.AdminRuntimeConfig{
			Enabled:                       true,
			DetectionIntervalSeconds:      30,
			DetectionTimeoutSeconds:       20,
			MaxJobsPerDetection:           123,
			GlobalExecutionConcurrency:    5,
			PerWorkerExecutionConcurrency: 2,
			RetryLimit:                    4,
			RetryBackoffSeconds:           7,
		},
	})
	if err != nil {
		t.Fatalf("SaveJobTypeConfig: %v", err)
	}

	policy, enabled, err := pluginSvc.loadSchedulerPolicy("vacuum")
	if err != nil {
		t.Fatalf("loadSchedulerPolicy: %v", err)
	}
	if !enabled {
		t.Fatalf("expected enabled policy")
	}
	if policy.MaxResults != 123 {
		t.Fatalf("unexpected max results: got=%d", policy.MaxResults)
	}
	if policy.ExecutionConcurrency != 5 {
		t.Fatalf("unexpected global concurrency: got=%d", policy.ExecutionConcurrency)
	}
	if policy.PerWorkerConcurrency != 2 {
		t.Fatalf("unexpected per-worker concurrency: got=%d", policy.PerWorkerConcurrency)
	}
	if policy.RetryLimit != 4 {
		t.Fatalf("unexpected retry limit: got=%d", policy.RetryLimit)
	}
}

func TestLoadSchedulerPolicyUsesDescriptorDefaultsWhenConfigMissing(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	err = pluginSvc.store.SaveDescriptor("ec", &plugin_pb.JobTypeDescriptor{
		JobType: "ec",
		AdminRuntimeDefaults: &plugin_pb.AdminRuntimeDefaults{
			Enabled:                       true,
			DetectionIntervalSeconds:      60,
			DetectionTimeoutSeconds:       25,
			MaxJobsPerDetection:           30,
			GlobalExecutionConcurrency:    4,
			PerWorkerExecutionConcurrency: 2,
			RetryLimit:                    3,
			RetryBackoffSeconds:           6,
		},
	})
	if err != nil {
		t.Fatalf("SaveDescriptor: %v", err)
	}

	policy, enabled, err := pluginSvc.loadSchedulerPolicy("ec")
	if err != nil {
		t.Fatalf("loadSchedulerPolicy: %v", err)
	}
	if !enabled {
		t.Fatalf("expected enabled policy from descriptor defaults")
	}
	if policy.MaxResults != 30 {
		t.Fatalf("unexpected max results: got=%d", policy.MaxResults)
	}
	if policy.ExecutionConcurrency != 4 {
		t.Fatalf("unexpected global concurrency: got=%d", policy.ExecutionConcurrency)
	}
	if policy.PerWorkerConcurrency != 2 {
		t.Fatalf("unexpected per-worker concurrency: got=%d", policy.PerWorkerConcurrency)
	}
}

func TestReserveScheduledExecutorRespectsPerWorkerLimit(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	pluginSvc.registry.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-a",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "balance", CanExecute: true, MaxExecutionConcurrency: 4},
		},
	})
	pluginSvc.registry.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-b",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "balance", CanExecute: true, MaxExecutionConcurrency: 2},
		},
	})

	policy := schedulerPolicy{
		PerWorkerConcurrency:   1,
		ExecutorReserveBackoff: time.Millisecond,
	}

	executor1, release1, err := pluginSvc.reserveScheduledExecutor("balance", policy)
	if err != nil {
		t.Fatalf("reserve executor 1: %v", err)
	}
	defer release1()

	executor2, release2, err := pluginSvc.reserveScheduledExecutor("balance", policy)
	if err != nil {
		t.Fatalf("reserve executor 2: %v", err)
	}
	defer release2()

	if executor1.WorkerID == executor2.WorkerID {
		t.Fatalf("expected different executors due to per-worker limit, got same worker %s", executor1.WorkerID)
	}
}

func TestFilterScheduledProposalsDedupe(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	proposals := []*plugin_pb.JobProposal{
		{ProposalId: "p1", DedupeKey: "d1"},
		{ProposalId: "p2", DedupeKey: "d1"}, // same dedupe key
		{ProposalId: "p3", DedupeKey: "d3"},
		{ProposalId: "p3"}, // fallback dedupe by proposal id
		{ProposalId: "p4"},
		{ProposalId: "p4"}, // same proposal id, no dedupe key
	}

	filtered := pluginSvc.filterScheduledProposals(proposals)
	if len(filtered) != 4 {
		t.Fatalf("unexpected filtered size: got=%d want=4", len(filtered))
	}

	filtered2 := pluginSvc.filterScheduledProposals(proposals)
	if len(filtered2) != 4 {
		t.Fatalf("expected second run dedupe to be per-run only, got=%d", len(filtered2))
	}
}

func TestBuildScheduledJobSpecDoesNotReuseProposalID(t *testing.T) {
	t.Parallel()

	proposal := &plugin_pb.JobProposal{
		ProposalId: "vacuum-2",
		DedupeKey:  "vacuum:2",
		JobType:    "vacuum",
	}

	jobA := buildScheduledJobSpec("vacuum", proposal, 0)
	jobB := buildScheduledJobSpec("vacuum", proposal, 1)

	if jobA.JobId == proposal.ProposalId {
		t.Fatalf("scheduled job id must not reuse proposal id: %s", jobA.JobId)
	}
	if jobB.JobId == proposal.ProposalId {
		t.Fatalf("scheduled job id must not reuse proposal id: %s", jobB.JobId)
	}
	if jobA.JobId == jobB.JobId {
		t.Fatalf("scheduled job ids must be unique across jobs: %s", jobA.JobId)
	}
}

func TestFilterProposalsWithActiveJobs(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	pluginSvc.trackExecutionStart("req-1", "worker-a", &plugin_pb.JobSpec{
		JobId:     "job-1",
		JobType:   "vacuum",
		DedupeKey: "vacuum:k1",
	}, 1)
	pluginSvc.trackExecutionStart("req-2", "worker-b", &plugin_pb.JobSpec{
		JobId:   "job-2",
		JobType: "vacuum",
	}, 1)
	pluginSvc.trackExecutionQueued(&plugin_pb.JobSpec{
		JobId:     "job-3",
		JobType:   "vacuum",
		DedupeKey: "vacuum:k4",
	})

	filtered, skipped := pluginSvc.filterProposalsWithActiveJobs("vacuum", []*plugin_pb.JobProposal{
		{ProposalId: "proposal-1", JobType: "vacuum", DedupeKey: "vacuum:k1"},
		{ProposalId: "job-2", JobType: "vacuum"},
		{ProposalId: "proposal-2b", JobType: "vacuum", DedupeKey: "vacuum:k4"},
		{ProposalId: "proposal-3", JobType: "vacuum", DedupeKey: "vacuum:k3"},
		{ProposalId: "proposal-4", JobType: "balance", DedupeKey: "balance:k1"},
	})
	if skipped != 3 {
		t.Fatalf("unexpected skipped count: got=%d want=3", skipped)
	}
	if len(filtered) != 2 {
		t.Fatalf("unexpected filtered size: got=%d want=2", len(filtered))
	}
	if filtered[0].ProposalId != "proposal-3" || filtered[1].ProposalId != "proposal-4" {
		t.Fatalf("unexpected filtered proposals: got=%s,%s", filtered[0].ProposalId, filtered[1].ProposalId)
	}
}

func TestReserveScheduledExecutorTimesOutWhenNoExecutor(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	policy := schedulerPolicy{
		ExecutionTimeout:       30 * time.Millisecond,
		ExecutorReserveBackoff: 5 * time.Millisecond,
		PerWorkerConcurrency:   1,
	}

	start := time.Now()
	pluginSvc.Shutdown()
	_, _, err = pluginSvc.reserveScheduledExecutor("missing-job-type", policy)
	if err == nil {
		t.Fatalf("expected reservation shutdown error")
	}
	if time.Since(start) > 50*time.Millisecond {
		t.Fatalf("reservation returned too late after shutdown: duration=%v", time.Since(start))
	}
}

func TestReserveScheduledExecutorWaitsForWorkerCapacity(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	pluginSvc.registry.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-a",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "balance", CanExecute: true, MaxExecutionConcurrency: 1},
		},
	})

	policy := schedulerPolicy{
		ExecutionTimeout:       time.Second,
		PerWorkerConcurrency:   8,
		ExecutorReserveBackoff: 5 * time.Millisecond,
	}

	_, release1, err := pluginSvc.reserveScheduledExecutor("balance", policy)
	if err != nil {
		t.Fatalf("reserve executor 1: %v", err)
	}
	defer release1()

	type reserveResult struct {
		err error
	}
	secondReserveCh := make(chan reserveResult, 1)
	go func() {
		_, release2, reserveErr := pluginSvc.reserveScheduledExecutor("balance", policy)
		if release2 != nil {
			release2()
		}
		secondReserveCh <- reserveResult{err: reserveErr}
	}()

	select {
	case result := <-secondReserveCh:
		t.Fatalf("expected second reservation to wait for capacity, got=%v", result.err)
	case <-time.After(25 * time.Millisecond):
		// Expected: still waiting.
	}

	release1()

	select {
	case result := <-secondReserveCh:
		if result.err != nil {
			t.Fatalf("second reservation error: %v", result.err)
		}
	case <-time.After(200 * time.Millisecond):
		t.Fatalf("second reservation did not acquire after capacity release")
	}
}

func TestShouldSkipDetectionForWaitingJobs(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	policy := schedulerPolicy{
		ExecutionConcurrency: 2,
		MaxResults:           100,
	}
	threshold := waitingBacklogThreshold(policy)
	if threshold <= 0 {
		t.Fatalf("expected positive waiting threshold")
	}

	for i := 0; i < threshold; i++ {
		pluginSvc.trackExecutionQueued(&plugin_pb.JobSpec{
			JobId:     fmt.Sprintf("job-waiting-%d", i),
			JobType:   "vacuum",
			DedupeKey: fmt.Sprintf("vacuum:%d", i),
		})
	}

	skip, waitingCount, waitingThreshold := pluginSvc.shouldSkipDetectionForWaitingJobs("vacuum", policy)
	if !skip {
		t.Fatalf("expected detection to skip when waiting backlog reaches threshold")
	}
	if waitingCount != threshold {
		t.Fatalf("unexpected waiting count: got=%d want=%d", waitingCount, threshold)
	}
	if waitingThreshold != threshold {
		t.Fatalf("unexpected waiting threshold: got=%d want=%d", waitingThreshold, threshold)
	}
}

func TestWaitingBacklogThresholdHonorsMaxResultsCap(t *testing.T) {
	t.Parallel()

	policy := schedulerPolicy{
		ExecutionConcurrency: 8,
		MaxResults:           6,
	}
	threshold := waitingBacklogThreshold(policy)
	if threshold != 6 {
		t.Fatalf("expected threshold to be capped by max results, got=%d", threshold)
	}
}

func TestListSchedulerStatesIncludesPolicyAndState(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	const jobType = "vacuum"
	err = pluginSvc.SaveJobTypeConfig(&plugin_pb.PersistedJobTypeConfig{
		JobType: jobType,
		AdminRuntime: &plugin_pb.AdminRuntimeConfig{
			Enabled:                       true,
			DetectionIntervalSeconds:      45,
			DetectionTimeoutSeconds:       30,
			MaxJobsPerDetection:           80,
			GlobalExecutionConcurrency:    3,
			PerWorkerExecutionConcurrency: 2,
			RetryLimit:                    1,
			RetryBackoffSeconds:           9,
		},
	})
	if err != nil {
		t.Fatalf("SaveJobTypeConfig: %v", err)
	}

	pluginSvc.registry.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-a",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: jobType, CanDetect: true, CanExecute: true},
		},
	})

	nextDetectionAt := time.Now().UTC().Add(2 * time.Minute).Round(time.Second)
	pluginSvc.schedulerMu.Lock()
	pluginSvc.nextDetectionAt[jobType] = nextDetectionAt
	pluginSvc.detectionInFlight[jobType] = true
	pluginSvc.schedulerMu.Unlock()

	states, err := pluginSvc.ListSchedulerStates()
	if err != nil {
		t.Fatalf("ListSchedulerStates: %v", err)
	}

	state := findSchedulerState(states, jobType)
	if state == nil {
		t.Fatalf("missing scheduler state for %s", jobType)
	}
	if !state.Enabled {
		t.Fatalf("expected enabled scheduler state")
	}
	if state.PolicyError != "" {
		t.Fatalf("unexpected policy error: %s", state.PolicyError)
	}
	if !state.DetectionInFlight {
		t.Fatalf("expected detection in flight")
	}
	if state.NextDetectionAt == nil {
		t.Fatalf("expected next detection time")
	}
	if state.NextDetectionAt.Unix() != nextDetectionAt.Unix() {
		t.Fatalf("unexpected next detection time: got=%v want=%v", state.NextDetectionAt, nextDetectionAt)
	}
	if state.DetectionIntervalSeconds != 45 {
		t.Fatalf("unexpected detection interval: got=%d", state.DetectionIntervalSeconds)
	}
	if state.DetectionTimeoutSeconds != 30 {
		t.Fatalf("unexpected detection timeout: got=%d", state.DetectionTimeoutSeconds)
	}
	if state.ExecutionTimeoutSeconds != 90 {
		t.Fatalf("unexpected execution timeout: got=%d", state.ExecutionTimeoutSeconds)
	}
	if state.MaxJobsPerDetection != 80 {
		t.Fatalf("unexpected max jobs per detection: got=%d", state.MaxJobsPerDetection)
	}
	if state.GlobalExecutionConcurrency != 3 {
		t.Fatalf("unexpected global execution concurrency: got=%d", state.GlobalExecutionConcurrency)
	}
	if state.PerWorkerExecutionConcurrency != 2 {
		t.Fatalf("unexpected per worker execution concurrency: got=%d", state.PerWorkerExecutionConcurrency)
	}
	if state.RetryLimit != 1 {
		t.Fatalf("unexpected retry limit: got=%d", state.RetryLimit)
	}
	if state.RetryBackoffSeconds != 9 {
		t.Fatalf("unexpected retry backoff: got=%d", state.RetryBackoffSeconds)
	}
	if !state.DetectorAvailable || state.DetectorWorkerID != "worker-a" {
		t.Fatalf("unexpected detector assignment: available=%v worker=%s", state.DetectorAvailable, state.DetectorWorkerID)
	}
	if state.ExecutorWorkerCount != 1 {
		t.Fatalf("unexpected executor worker count: got=%d", state.ExecutorWorkerCount)
	}
}

func TestListSchedulerStatesShowsDisabledWhenNoPolicy(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	const jobType = "balance"
	pluginSvc.registry.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-b",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: jobType, CanDetect: true, CanExecute: true},
		},
	})

	states, err := pluginSvc.ListSchedulerStates()
	if err != nil {
		t.Fatalf("ListSchedulerStates: %v", err)
	}

	state := findSchedulerState(states, jobType)
	if state == nil {
		t.Fatalf("missing scheduler state for %s", jobType)
	}
	if state.Enabled {
		t.Fatalf("expected disabled scheduler state")
	}
	if state.PolicyError != "" {
		t.Fatalf("unexpected policy error: %s", state.PolicyError)
	}
	if !state.DetectorAvailable || state.DetectorWorkerID != "worker-b" {
		t.Fatalf("unexpected detector details: available=%v worker=%s", state.DetectorAvailable, state.DetectorWorkerID)
	}
	if state.ExecutorWorkerCount != 1 {
		t.Fatalf("unexpected executor worker count: got=%d", state.ExecutorWorkerCount)
	}
}

func findSchedulerState(states []SchedulerJobTypeState, jobType string) *SchedulerJobTypeState {
	for i := range states {
		if states[i].JobType == jobType {
			return &states[i]
		}
	}
	return nil
}

func TestPickDetectorPrefersLeasedWorker(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	pluginSvc.registry.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-a",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "vacuum", CanDetect: true},
		},
	})
	pluginSvc.registry.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-b",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "vacuum", CanDetect: true},
		},
	})

	pluginSvc.setDetectorLease("vacuum", "worker-b")

	detector, err := pluginSvc.pickDetector("vacuum")
	if err != nil {
		t.Fatalf("pickDetector: %v", err)
	}
	if detector.WorkerID != "worker-b" {
		t.Fatalf("expected leased detector worker-b, got=%s", detector.WorkerID)
	}
}

func TestPickDetectorReassignsWhenLeaseIsStale(t *testing.T) {
	t.Parallel()

	pluginSvc, err := New(Options{})
	if err != nil {
		t.Fatalf("New: %v", err)
	}
	defer pluginSvc.Shutdown()

	pluginSvc.registry.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-a",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "vacuum", CanDetect: true},
		},
	})
	pluginSvc.setDetectorLease("vacuum", "worker-stale")

	detector, err := pluginSvc.pickDetector("vacuum")
	if err != nil {
		t.Fatalf("pickDetector: %v", err)
	}
	if detector.WorkerID != "worker-a" {
		t.Fatalf("expected reassigned detector worker-a, got=%s", detector.WorkerID)
	}

	lease := pluginSvc.getDetectorLease("vacuum")
	if lease != "worker-a" {
		t.Fatalf("expected detector lease to be updated to worker-a, got=%s", lease)
	}
}
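The dedupe behavior exercised by TestFilterScheduledProposalsDedupe (DedupeKey first, ProposalId as fallback, keyless proposals always kept, per-run only) can be sketched in isolation; `proposal` and `dedupe` below are illustrative stand-ins, not the package's types:

```go
package main

import "fmt"

// proposal stands in for plugin_pb.JobProposal with only the
// fields the dedupe pass inspects.
type proposal struct {
	ProposalId string
	DedupeKey  string
}

// dedupe keeps the first proposal per key within a single run,
// preferring DedupeKey and falling back to ProposalId; proposals
// with neither key are always kept.
func dedupe(proposals []proposal) []proposal {
	seen := make(map[string]struct{}, len(proposals))
	out := make([]proposal, 0, len(proposals))
	for _, p := range proposals {
		key := p.DedupeKey
		if key == "" {
			key = p.ProposalId
		}
		if key == "" {
			out = append(out, p)
			continue
		}
		if _, dup := seen[key]; dup {
			continue
		}
		seen[key] = struct{}{}
		out = append(out, p)
	}
	return out
}

func main() {
	in := []proposal{
		{ProposalId: "p1", DedupeKey: "d1"},
		{ProposalId: "p2", DedupeKey: "d1"}, // dropped: duplicate DedupeKey
		{ProposalId: "p3"},
		{ProposalId: "p3"}, // dropped: duplicate ProposalId fallback
	}
	fmt.Println(len(dedupe(in))) // 2
}
```

Because the `seen` map is allocated per call, the same proposal can be scheduled again on a later detection run; cross-run suppression is handled separately by the active-job filter.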
weed/admin/plugin/plugin_schema_prefetch.go (new file, 66 lines)
@@ -0,0 +1,66 @@
package plugin

import (
	"context"
	"sort"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
)

const descriptorPrefetchTimeout = 20 * time.Second

func (r *Plugin) prefetchDescriptorsFromHello(hello *plugin_pb.WorkerHello) {
	if hello == nil || len(hello.Capabilities) == 0 {
		return
	}

	jobTypeSet := make(map[string]struct{})
	for _, capability := range hello.Capabilities {
		if capability == nil || capability.JobType == "" {
			continue
		}
		if !capability.CanDetect && !capability.CanExecute {
			continue
		}
		jobTypeSet[capability.JobType] = struct{}{}
	}

	if len(jobTypeSet) == 0 {
		return
	}

	jobTypes := make([]string, 0, len(jobTypeSet))
	for jobType := range jobTypeSet {
		jobTypes = append(jobTypes, jobType)
	}
	sort.Strings(jobTypes)

	for _, jobType := range jobTypes {
		select {
		case <-r.shutdownCh:
			return
		default:
		}

		descriptor, err := r.store.LoadDescriptor(jobType)
		if err != nil {
			glog.Warningf("Plugin descriptor prefetch check failed for %s: %v", jobType, err)
			continue
		}
		if descriptor != nil {
			continue
		}

		ctx, cancel := context.WithTimeout(r.ctx, descriptorPrefetchTimeout)
		_, err = r.RequestConfigSchema(ctx, jobType, false)
		cancel()
		if err != nil {
			glog.V(1).Infof("Plugin descriptor prefetch skipped for %s: %v", jobType, err)
			continue
		}

		glog.V(1).Infof("Plugin descriptor prefetched for job_type=%s", jobType)
	}
}
weed/admin/plugin/registry.go (new file, 465 lines)
@@ -0,0 +1,465 @@
package plugin

import (
	"fmt"
	"sort"
	"strings"
	"sync"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
)

const defaultWorkerStaleTimeout = 2 * time.Minute

// WorkerSession contains tracked worker metadata and plugin status.
type WorkerSession struct {
	WorkerID        string
	WorkerInstance  string
	Address         string
	WorkerVersion   string
	ProtocolVersion string
	ConnectedAt     time.Time
	LastSeenAt      time.Time
	Capabilities    map[string]*plugin_pb.JobTypeCapability
	Heartbeat       *plugin_pb.WorkerHeartbeat
}

// Registry tracks connected plugin workers and capability-based selection.
type Registry struct {
	mu             sync.RWMutex
	sessions       map[string]*WorkerSession
	staleAfter     time.Duration
	detectorCursor map[string]int
	executorCursor map[string]int
}

func NewRegistry() *Registry {
	return &Registry{
		sessions:       make(map[string]*WorkerSession),
		staleAfter:     defaultWorkerStaleTimeout,
		detectorCursor: make(map[string]int),
		executorCursor: make(map[string]int),
	}
}

func (r *Registry) UpsertFromHello(hello *plugin_pb.WorkerHello) *WorkerSession {
	now := time.Now()
	caps := make(map[string]*plugin_pb.JobTypeCapability, len(hello.Capabilities))
	for _, c := range hello.Capabilities {
		if c == nil || c.JobType == "" {
			continue
		}
		caps[c.JobType] = cloneJobTypeCapability(c)
	}

	r.mu.Lock()
	defer r.mu.Unlock()

	session, ok := r.sessions[hello.WorkerId]
	if !ok {
		session = &WorkerSession{
			WorkerID:    hello.WorkerId,
			ConnectedAt: now,
		}
		r.sessions[hello.WorkerId] = session
	}

	session.WorkerInstance = hello.WorkerInstanceId
	session.Address = hello.Address
	session.WorkerVersion = hello.WorkerVersion
	session.ProtocolVersion = hello.ProtocolVersion
	session.LastSeenAt = now
	session.Capabilities = caps

	return cloneWorkerSession(session)
}

func (r *Registry) Remove(workerID string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.sessions, workerID)
}

func (r *Registry) UpdateHeartbeat(workerID string, heartbeat *plugin_pb.WorkerHeartbeat) {
	r.mu.Lock()
	defer r.mu.Unlock()

	session, ok := r.sessions[workerID]
	if !ok {
		return
	}
	session.Heartbeat = cloneWorkerHeartbeat(heartbeat)
	session.LastSeenAt = time.Now()
}

func (r *Registry) Get(workerID string) (*WorkerSession, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	session, ok := r.sessions[workerID]
	if !ok || r.isSessionStaleLocked(session, time.Now()) {
		return nil, false
	}
	return cloneWorkerSession(session), true
}

func (r *Registry) List() []*WorkerSession {
	r.mu.RLock()
	defer r.mu.RUnlock()
	out := make([]*WorkerSession, 0, len(r.sessions))
	now := time.Now()
	for _, s := range r.sessions {
		if r.isSessionStaleLocked(s, now) {
			continue
		}
		out = append(out, cloneWorkerSession(s))
	}
	sort.Slice(out, func(i, j int) bool {
		return out[i].WorkerID < out[j].WorkerID
	})
	return out
}

// DetectableJobTypes returns sorted job types that currently have at least one detect-capable worker.
func (r *Registry) DetectableJobTypes() []string {
	r.mu.RLock()
	defer r.mu.RUnlock()

	jobTypes := make(map[string]struct{})
|
||||
now := time.Now()
|
||||
for _, session := range r.sessions {
|
||||
if r.isSessionStaleLocked(session, now) {
|
||||
continue
|
||||
}
|
||||
for jobType, capability := range session.Capabilities {
|
||||
if capability == nil || !capability.CanDetect {
|
||||
continue
|
||||
}
|
||||
jobTypes[jobType] = struct{}{}
|
||||
}
|
||||
}
|
||||
|
||||
out := make([]string, 0, len(jobTypes))
|
||||
for jobType := range jobTypes {
|
||||
out = append(out, jobType)
|
||||
}
|
||||
sort.Strings(out)
|
||||
return out
|
||||
}
|
||||
|
||||
// JobTypes returns sorted job types known by connected workers regardless of capability kind.
|
||||
func (r *Registry) JobTypes() []string {
|
||||
r.mu.RLock()
|
||||
defer r.mu.RUnlock()
|
||||
|
||||
jobTypes := make(map[string]struct{})
|
||||
now := time.Now()
|
||||
for _, session := range r.sessions {
|
||||
if r.isSessionStaleLocked(session, now) {
|
||||
continue
|
||||
}
|
||||
for jobType := range session.Capabilities {
|
||||
if jobType == "" {
|
||||
continue
|
||||
}
|
||||
jobTypes[jobType] = struct{}{}
|
||||
}
|
||||
}
|
||||
|
||||
out := make([]string, 0, len(jobTypes))
|
||||
for jobType := range jobTypes {
|
||||
out = append(out, jobType)
|
||||
}
|
||||
sort.Strings(out)
|
||||
return out
|
||||
}
|
||||
|
||||
// PickSchemaProvider picks one worker for schema requests.
|
||||
// Preference order:
|
||||
// 1) workers that can detect this job type
|
||||
// 2) workers that can execute this job type
|
||||
// tie-break: more free slots, then lexical worker ID.
|
||||
func (r *Registry) PickSchemaProvider(jobType string) (*WorkerSession, error) {
|
||||
r.mu.RLock()
|
||||
defer r.mu.RUnlock()
|
||||
|
||||
var candidates []*WorkerSession
|
||||
now := time.Now()
|
||||
for _, s := range r.sessions {
|
||||
if r.isSessionStaleLocked(s, now) {
|
||||
continue
|
||||
}
|
||||
capability := s.Capabilities[jobType]
|
||||
if capability == nil {
|
||||
continue
|
||||
}
|
||||
if capability.CanDetect || capability.CanExecute {
|
||||
candidates = append(candidates, s)
|
||||
}
|
||||
}
|
||||
|
||||
if len(candidates) == 0 {
|
||||
return nil, fmt.Errorf("no worker available for schema job_type=%s", jobType)
|
||||
}
|
||||
|
||||
sort.Slice(candidates, func(i, j int) bool {
|
||||
a := candidates[i]
|
||||
b := candidates[j]
|
||||
ac := a.Capabilities[jobType]
|
||||
bc := b.Capabilities[jobType]
|
||||
|
||||
// Prefer detect-capable providers first.
|
||||
if ac.CanDetect != bc.CanDetect {
|
||||
return ac.CanDetect
|
||||
}
|
||||
|
||||
aSlots := availableDetectionSlots(a, ac) + availableExecutionSlots(a, ac)
|
||||
bSlots := availableDetectionSlots(b, bc) + availableExecutionSlots(b, bc)
|
||||
if aSlots != bSlots {
|
||||
return aSlots > bSlots
|
||||
}
|
||||
return a.WorkerID < b.WorkerID
|
||||
})
|
||||
|
||||
return cloneWorkerSession(candidates[0]), nil
|
||||
}
|
||||
|
||||
// PickDetector picks one detector worker for a job type.
|
||||
func (r *Registry) PickDetector(jobType string) (*WorkerSession, error) {
|
||||
return r.pickByKind(jobType, true)
|
||||
}
|
||||
|
||||
// PickExecutor picks one executor worker for a job type.
|
||||
func (r *Registry) PickExecutor(jobType string) (*WorkerSession, error) {
|
||||
return r.pickByKind(jobType, false)
|
||||
}
|
||||
|
||||
// ListExecutors returns sorted executor candidates for one job type.
|
||||
// Ordering is by most available execution slots, then lexical worker ID.
|
||||
// The top tie group is rotated round-robin to prevent sticky assignment.
|
||||
func (r *Registry) ListExecutors(jobType string) ([]*WorkerSession, error) {
|
||||
r.mu.Lock()
|
||||
defer r.mu.Unlock()
|
||||
|
||||
candidates := r.collectByKindLocked(jobType, false, time.Now())
|
||||
if len(candidates) == 0 {
|
||||
return nil, fmt.Errorf("no executor worker available for job_type=%s", jobType)
|
||||
}
|
||||
|
||||
sortByKind(candidates, jobType, false)
|
||||
r.rotateTopCandidatesLocked(candidates, jobType, false)
|
||||
|
||||
out := make([]*WorkerSession, 0, len(candidates))
|
||||
for _, candidate := range candidates {
|
||||
out = append(out, cloneWorkerSession(candidate))
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (r *Registry) pickByKind(jobType string, detect bool) (*WorkerSession, error) {
|
||||
r.mu.Lock()
|
||||
defer r.mu.Unlock()
|
||||
|
||||
candidates := r.collectByKindLocked(jobType, detect, time.Now())
|
||||
|
||||
if len(candidates) == 0 {
|
||||
kind := "executor"
|
||||
if detect {
|
||||
kind = "detector"
|
||||
}
|
||||
return nil, fmt.Errorf("no %s worker available for job_type=%s", kind, jobType)
|
||||
}
|
||||
|
||||
sortByKind(candidates, jobType, detect)
|
||||
r.rotateTopCandidatesLocked(candidates, jobType, detect)
|
||||
|
||||
return cloneWorkerSession(candidates[0]), nil
|
||||
}
|
||||
|
||||
func (r *Registry) collectByKindLocked(jobType string, detect bool, now time.Time) []*WorkerSession {
|
||||
var candidates []*WorkerSession
|
||||
for _, session := range r.sessions {
|
||||
if r.isSessionStaleLocked(session, now) {
|
||||
continue
|
||||
}
|
||||
capability := session.Capabilities[jobType]
|
||||
if capability == nil {
|
||||
continue
|
||||
}
|
||||
if detect && capability.CanDetect {
|
||||
candidates = append(candidates, session)
|
||||
}
|
||||
if !detect && capability.CanExecute {
|
||||
candidates = append(candidates, session)
|
||||
}
|
||||
}
|
||||
return candidates
|
||||
}
|
||||
|
||||
func (r *Registry) isSessionStaleLocked(session *WorkerSession, now time.Time) bool {
|
||||
if session == nil {
|
||||
return true
|
||||
}
|
||||
if r.staleAfter <= 0 {
|
||||
return false
|
||||
}
|
||||
|
||||
lastSeen := session.LastSeenAt
|
||||
if lastSeen.IsZero() {
|
||||
lastSeen = session.ConnectedAt
|
||||
}
|
||||
if lastSeen.IsZero() {
|
||||
return false
|
||||
}
|
||||
return now.Sub(lastSeen) > r.staleAfter
|
||||
}
|
||||
|
||||
func sortByKind(candidates []*WorkerSession, jobType string, detect bool) {
|
||||
sort.Slice(candidates, func(i, j int) bool {
|
||||
a := candidates[i]
|
||||
b := candidates[j]
|
||||
ac := a.Capabilities[jobType]
|
||||
bc := b.Capabilities[jobType]
|
||||
|
||||
aSlots := availableSlotsByKind(a, ac, detect)
|
||||
bSlots := availableSlotsByKind(b, bc, detect)
|
||||
|
||||
if aSlots != bSlots {
|
||||
return aSlots > bSlots
|
||||
}
|
||||
return a.WorkerID < b.WorkerID
|
||||
})
|
||||
}
|
||||
|
||||
func (r *Registry) rotateTopCandidatesLocked(candidates []*WorkerSession, jobType string, detect bool) {
|
||||
if len(candidates) < 2 {
|
||||
return
|
||||
}
|
||||
|
||||
capability := candidates[0].Capabilities[jobType]
|
||||
topSlots := availableSlotsByKind(candidates[0], capability, detect)
|
||||
tieEnd := 1
|
||||
for tieEnd < len(candidates) {
|
||||
nextCapability := candidates[tieEnd].Capabilities[jobType]
|
||||
if availableSlotsByKind(candidates[tieEnd], nextCapability, detect) != topSlots {
|
||||
break
|
||||
}
|
||||
tieEnd++
|
||||
}
|
||||
if tieEnd <= 1 {
|
||||
return
|
||||
}
|
||||
|
||||
cursorKey := strings.TrimSpace(jobType)
|
||||
if cursorKey == "" {
|
||||
cursorKey = "*"
|
||||
}
|
||||
|
||||
var offset int
|
||||
if detect {
|
||||
offset = r.detectorCursor[cursorKey] % tieEnd
|
||||
r.detectorCursor[cursorKey] = (offset + 1) % tieEnd
|
||||
} else {
|
||||
offset = r.executorCursor[cursorKey] % tieEnd
|
||||
r.executorCursor[cursorKey] = (offset + 1) % tieEnd
|
||||
}
|
||||
|
||||
if offset == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
prefix := append([]*WorkerSession(nil), candidates[:tieEnd]...)
|
||||
for i := 0; i < tieEnd; i++ {
|
||||
candidates[i] = prefix[(i+offset)%tieEnd]
|
||||
}
|
||||
}
|
||||
|
||||
func availableSlotsByKind(
|
||||
session *WorkerSession,
|
||||
capability *plugin_pb.JobTypeCapability,
|
||||
detect bool,
|
||||
) int {
|
||||
if detect {
|
||||
return availableDetectionSlots(session, capability)
|
||||
}
|
||||
return availableExecutionSlots(session, capability)
|
||||
}
|
||||
|
||||
func availableDetectionSlots(session *WorkerSession, capability *plugin_pb.JobTypeCapability) int {
|
||||
if session.Heartbeat != nil && session.Heartbeat.DetectionSlotsTotal > 0 {
|
||||
free := int(session.Heartbeat.DetectionSlotsTotal - session.Heartbeat.DetectionSlotsUsed)
|
||||
if free < 0 {
|
||||
return 0
|
||||
}
|
||||
return free
|
||||
}
|
||||
if capability.MaxDetectionConcurrency > 0 {
|
||||
return int(capability.MaxDetectionConcurrency)
|
||||
}
|
||||
return 1
|
||||
}
|
||||
|
||||
func availableExecutionSlots(session *WorkerSession, capability *plugin_pb.JobTypeCapability) int {
|
||||
if session.Heartbeat != nil && session.Heartbeat.ExecutionSlotsTotal > 0 {
|
||||
free := int(session.Heartbeat.ExecutionSlotsTotal - session.Heartbeat.ExecutionSlotsUsed)
|
||||
if free < 0 {
|
||||
return 0
|
||||
}
|
||||
return free
|
||||
}
|
||||
if capability.MaxExecutionConcurrency > 0 {
|
||||
return int(capability.MaxExecutionConcurrency)
|
||||
}
|
||||
return 1
|
||||
}
|
||||
|
||||
func cloneWorkerSession(in *WorkerSession) *WorkerSession {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := *in
|
||||
out.Capabilities = make(map[string]*plugin_pb.JobTypeCapability, len(in.Capabilities))
|
||||
for jobType, cap := range in.Capabilities {
|
||||
out.Capabilities[jobType] = cloneJobTypeCapability(cap)
|
||||
}
|
||||
out.Heartbeat = cloneWorkerHeartbeat(in.Heartbeat)
|
||||
return &out
|
||||
}
|
||||
|
||||
func cloneJobTypeCapability(in *plugin_pb.JobTypeCapability) *plugin_pb.JobTypeCapability {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := *in
|
||||
return &out
|
||||
}
|
||||
|
||||
func cloneWorkerHeartbeat(in *plugin_pb.WorkerHeartbeat) *plugin_pb.WorkerHeartbeat {
|
||||
if in == nil {
|
||||
return nil
|
||||
}
|
||||
out := *in
|
||||
if in.RunningWork != nil {
|
||||
out.RunningWork = make([]*plugin_pb.RunningWork, 0, len(in.RunningWork))
|
||||
for _, rw := range in.RunningWork {
|
||||
if rw == nil {
|
||||
continue
|
||||
}
|
||||
clone := *rw
|
||||
out.RunningWork = append(out.RunningWork, &clone)
|
||||
}
|
||||
}
|
||||
if in.QueuedJobsByType != nil {
|
||||
out.QueuedJobsByType = make(map[string]int32, len(in.QueuedJobsByType))
|
||||
for k, v := range in.QueuedJobsByType {
|
||||
out.QueuedJobsByType[k] = v
|
||||
}
|
||||
}
|
||||
if in.Metadata != nil {
|
||||
out.Metadata = make(map[string]string, len(in.Metadata))
|
||||
for k, v := range in.Metadata {
|
||||
out.Metadata[k] = v
|
||||
}
|
||||
}
|
||||
return &out
|
||||
}
|
||||
weed/admin/plugin/registry_test.go (321 lines, new file)
@@ -0,0 +1,321 @@
package plugin

import (
	"reflect"
	"testing"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
)

func TestRegistryPickDetectorPrefersMoreFreeSlots(t *testing.T) {
	t.Parallel()

	r := NewRegistry()

	r.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-a",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "vacuum", CanDetect: true, CanExecute: true, MaxDetectionConcurrency: 2, MaxExecutionConcurrency: 2},
		},
	})
	r.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-b",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "vacuum", CanDetect: true, CanExecute: true, MaxDetectionConcurrency: 4, MaxExecutionConcurrency: 4},
		},
	})

	r.UpdateHeartbeat("worker-a", &plugin_pb.WorkerHeartbeat{
		WorkerId:            "worker-a",
		DetectionSlotsUsed:  1,
		DetectionSlotsTotal: 2,
	})
	r.UpdateHeartbeat("worker-b", &plugin_pb.WorkerHeartbeat{
		WorkerId:            "worker-b",
		DetectionSlotsUsed:  1,
		DetectionSlotsTotal: 4,
	})

	picked, err := r.PickDetector("vacuum")
	if err != nil {
		t.Fatalf("PickDetector: %v", err)
	}
	if picked.WorkerID != "worker-b" {
		t.Fatalf("unexpected detector picked: got %s want worker-b", picked.WorkerID)
	}
}

func TestRegistryPickExecutorAllowsSameWorker(t *testing.T) {
	t.Parallel()

	r := NewRegistry()
	r.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-x",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "balance", CanDetect: true, CanExecute: true, MaxDetectionConcurrency: 1, MaxExecutionConcurrency: 1},
		},
	})

	detector, err := r.PickDetector("balance")
	if err != nil {
		t.Fatalf("PickDetector: %v", err)
	}
	executor, err := r.PickExecutor("balance")
	if err != nil {
		t.Fatalf("PickExecutor: %v", err)
	}

	if detector.WorkerID != "worker-x" || executor.WorkerID != "worker-x" {
		t.Fatalf("expected same worker for detect/execute, got detector=%s executor=%s", detector.WorkerID, executor.WorkerID)
	}
}

func TestRegistryDetectableJobTypes(t *testing.T) {
	t.Parallel()

	r := NewRegistry()
	r.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-a",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "vacuum", CanDetect: true, CanExecute: true},
			{JobType: "balance", CanDetect: false, CanExecute: true},
		},
	})
	r.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-b",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "ec", CanDetect: true, CanExecute: false},
			{JobType: "vacuum", CanDetect: true, CanExecute: false},
		},
	})

	got := r.DetectableJobTypes()
	want := []string{"ec", "vacuum"}
	if !reflect.DeepEqual(got, want) {
		t.Fatalf("unexpected detectable job types: got=%v want=%v", got, want)
	}
}

func TestRegistryJobTypes(t *testing.T) {
	t.Parallel()

	r := NewRegistry()
	r.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-a",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "vacuum", CanDetect: true},
			{JobType: "balance", CanExecute: true},
		},
	})
	r.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-b",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "ec", CanDetect: true},
		},
	})

	got := r.JobTypes()
	want := []string{"balance", "ec", "vacuum"}
	if !reflect.DeepEqual(got, want) {
		t.Fatalf("unexpected job types: got=%v want=%v", got, want)
	}
}

func TestRegistryListExecutorsSortedBySlots(t *testing.T) {
	t.Parallel()

	r := NewRegistry()
	r.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-a",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "balance", CanExecute: true, MaxExecutionConcurrency: 2},
		},
	})
	r.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-b",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "balance", CanExecute: true, MaxExecutionConcurrency: 4},
		},
	})

	r.UpdateHeartbeat("worker-a", &plugin_pb.WorkerHeartbeat{
		WorkerId:            "worker-a",
		ExecutionSlotsUsed:  1,
		ExecutionSlotsTotal: 2,
	})
	r.UpdateHeartbeat("worker-b", &plugin_pb.WorkerHeartbeat{
		WorkerId:            "worker-b",
		ExecutionSlotsUsed:  1,
		ExecutionSlotsTotal: 4,
	})

	executors, err := r.ListExecutors("balance")
	if err != nil {
		t.Fatalf("ListExecutors: %v", err)
	}
	if len(executors) != 2 {
		t.Fatalf("unexpected candidate count: got=%d", len(executors))
	}
	if executors[0].WorkerID != "worker-b" || executors[1].WorkerID != "worker-a" {
		t.Fatalf("unexpected executor order: got=%s,%s", executors[0].WorkerID, executors[1].WorkerID)
	}
}

func TestRegistryPickExecutorRoundRobinForTopTie(t *testing.T) {
	t.Parallel()

	r := NewRegistry()
	for _, workerID := range []string{"worker-a", "worker-b", "worker-c"} {
		r.UpsertFromHello(&plugin_pb.WorkerHello{
			WorkerId: workerID,
			Capabilities: []*plugin_pb.JobTypeCapability{
				{JobType: "balance", CanExecute: true, MaxExecutionConcurrency: 1},
			},
		})
	}

	got := make([]string, 0, 6)
	for i := 0; i < 6; i++ {
		executor, err := r.PickExecutor("balance")
		if err != nil {
			t.Fatalf("PickExecutor: %v", err)
		}
		got = append(got, executor.WorkerID)
	}

	want := []string{"worker-a", "worker-b", "worker-c", "worker-a", "worker-b", "worker-c"}
	if !reflect.DeepEqual(got, want) {
		t.Fatalf("unexpected pick order: got=%v want=%v", got, want)
	}
}

func TestRegistryListExecutorsRoundRobinForTopTie(t *testing.T) {
	t.Parallel()

	r := NewRegistry()
	r.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-a",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "balance", CanExecute: true, MaxExecutionConcurrency: 2},
		},
	})
	r.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-b",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "balance", CanExecute: true, MaxExecutionConcurrency: 2},
		},
	})
	r.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-c",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "balance", CanExecute: true, MaxExecutionConcurrency: 1},
		},
	})

	r.UpdateHeartbeat("worker-a", &plugin_pb.WorkerHeartbeat{
		WorkerId:            "worker-a",
		ExecutionSlotsUsed:  0,
		ExecutionSlotsTotal: 2,
	})
	r.UpdateHeartbeat("worker-b", &plugin_pb.WorkerHeartbeat{
		WorkerId:            "worker-b",
		ExecutionSlotsUsed:  0,
		ExecutionSlotsTotal: 2,
	})
	r.UpdateHeartbeat("worker-c", &plugin_pb.WorkerHeartbeat{
		WorkerId:            "worker-c",
		ExecutionSlotsUsed:  0,
		ExecutionSlotsTotal: 1,
	})

	firstCall, err := r.ListExecutors("balance")
	if err != nil {
		t.Fatalf("ListExecutors first call: %v", err)
	}
	secondCall, err := r.ListExecutors("balance")
	if err != nil {
		t.Fatalf("ListExecutors second call: %v", err)
	}
	thirdCall, err := r.ListExecutors("balance")
	if err != nil {
		t.Fatalf("ListExecutors third call: %v", err)
	}

	if firstCall[0].WorkerID != "worker-a" || firstCall[1].WorkerID != "worker-b" || firstCall[2].WorkerID != "worker-c" {
		t.Fatalf("unexpected first executor order: got=%s,%s,%s", firstCall[0].WorkerID, firstCall[1].WorkerID, firstCall[2].WorkerID)
	}
	if secondCall[0].WorkerID != "worker-b" || secondCall[1].WorkerID != "worker-a" || secondCall[2].WorkerID != "worker-c" {
		t.Fatalf("unexpected second executor order: got=%s,%s,%s", secondCall[0].WorkerID, secondCall[1].WorkerID, secondCall[2].WorkerID)
	}
	if thirdCall[0].WorkerID != "worker-a" || thirdCall[1].WorkerID != "worker-b" || thirdCall[2].WorkerID != "worker-c" {
		t.Fatalf("unexpected third executor order: got=%s,%s,%s", thirdCall[0].WorkerID, thirdCall[1].WorkerID, thirdCall[2].WorkerID)
	}
}

func TestRegistrySkipsStaleWorkersForSelectionAndListing(t *testing.T) {
	t.Parallel()

	r := NewRegistry()
	r.staleAfter = 2 * time.Second

	r.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-stale",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "vacuum", CanDetect: true, CanExecute: true},
		},
	})
	r.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-fresh",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "vacuum", CanDetect: true, CanExecute: true},
		},
	})

	r.mu.Lock()
	r.sessions["worker-stale"].LastSeenAt = time.Now().Add(-10 * time.Second)
	r.sessions["worker-fresh"].LastSeenAt = time.Now()
	r.mu.Unlock()

	picked, err := r.PickDetector("vacuum")
	if err != nil {
		t.Fatalf("PickDetector: %v", err)
	}
	if picked.WorkerID != "worker-fresh" {
		t.Fatalf("unexpected detector: got=%s want=worker-fresh", picked.WorkerID)
	}

	if _, ok := r.Get("worker-stale"); ok {
		t.Fatalf("expected stale worker to be hidden from Get")
	}
	if _, ok := r.Get("worker-fresh"); !ok {
		t.Fatalf("expected fresh worker from Get")
	}

	listed := r.List()
	if len(listed) != 1 || listed[0].WorkerID != "worker-fresh" {
		t.Fatalf("unexpected listed workers: %+v", listed)
	}
}

func TestRegistryReturnsNoDetectorWhenAllWorkersStale(t *testing.T) {
	t.Parallel()

	r := NewRegistry()
	r.staleAfter = 2 * time.Second

	r.UpsertFromHello(&plugin_pb.WorkerHello{
		WorkerId: "worker-a",
		Capabilities: []*plugin_pb.JobTypeCapability{
			{JobType: "vacuum", CanDetect: true},
		},
	})

	r.mu.Lock()
	r.sessions["worker-a"].LastSeenAt = time.Now().Add(-10 * time.Second)
	r.mu.Unlock()

	if _, err := r.PickDetector("vacuum"); err == nil {
		t.Fatalf("expected no detector when all workers are stale")
	}
}
weed/admin/plugin/types.go (103 lines, new file)
@@ -0,0 +1,103 @@
package plugin

import "time"

const (
	// Keep exactly the last 10 successful and last 10 error runs per job type.
	MaxSuccessfulRunHistory = 10
	MaxErrorRunHistory      = 10
)

type RunOutcome string

const (
	RunOutcomeSuccess RunOutcome = "success"
	RunOutcomeError   RunOutcome = "error"
)

type JobRunRecord struct {
	RunID       string     `json:"run_id"`
	JobID       string     `json:"job_id"`
	JobType     string     `json:"job_type"`
	WorkerID    string     `json:"worker_id"`
	Outcome     RunOutcome `json:"outcome"`
	Message     string     `json:"message,omitempty"`
	DurationMs  int64      `json:"duration_ms,omitempty"`
	CompletedAt *time.Time `json:"completed_at,omitempty"`
}

type JobTypeRunHistory struct {
	JobType         string         `json:"job_type"`
	SuccessfulRuns  []JobRunRecord `json:"successful_runs"`
	ErrorRuns       []JobRunRecord `json:"error_runs"`
	LastUpdatedTime *time.Time     `json:"last_updated_time,omitempty"`
}

type TrackedJob struct {
	JobID              string                 `json:"job_id"`
	JobType            string                 `json:"job_type"`
	RequestID          string                 `json:"request_id"`
	WorkerID           string                 `json:"worker_id"`
	DedupeKey          string                 `json:"dedupe_key,omitempty"`
	Summary            string                 `json:"summary,omitempty"`
	Detail             string                 `json:"detail,omitempty"`
	Parameters         map[string]interface{} `json:"parameters,omitempty"`
	Labels             map[string]string      `json:"labels,omitempty"`
	State              string                 `json:"state"`
	Progress           float64                `json:"progress"`
	Stage              string                 `json:"stage,omitempty"`
	Message            string                 `json:"message,omitempty"`
	Attempt            int32                  `json:"attempt,omitempty"`
	CreatedAt          *time.Time             `json:"created_at,omitempty"`
	UpdatedAt          *time.Time             `json:"updated_at,omitempty"`
	CompletedAt        *time.Time             `json:"completed_at,omitempty"`
	ErrorMessage       string                 `json:"error_message,omitempty"`
	ResultSummary      string                 `json:"result_summary,omitempty"`
	ResultOutputValues map[string]interface{} `json:"result_output_values,omitempty"`
}

type JobActivity struct {
	JobID      string                 `json:"job_id"`
	JobType    string                 `json:"job_type"`
	RequestID  string                 `json:"request_id,omitempty"`
	WorkerID   string                 `json:"worker_id,omitempty"`
	Source     string                 `json:"source"`
	Message    string                 `json:"message"`
	Stage      string                 `json:"stage,omitempty"`
	Details    map[string]interface{} `json:"details,omitempty"`
	OccurredAt *time.Time             `json:"occurred_at,omitempty"`
}

type JobDetail struct {
	Job         *TrackedJob   `json:"job"`
	RunRecord   *JobRunRecord `json:"run_record,omitempty"`
	Activities  []JobActivity `json:"activities"`
	RelatedJobs []TrackedJob  `json:"related_jobs,omitempty"`
	LastUpdated *time.Time    `json:"last_updated,omitempty"`
}

type SchedulerJobTypeState struct {
	JobType                       string     `json:"job_type"`
	Enabled                       bool       `json:"enabled"`
	PolicyError                   string     `json:"policy_error,omitempty"`
	DetectionInFlight             bool       `json:"detection_in_flight"`
	NextDetectionAt               *time.Time `json:"next_detection_at,omitempty"`
	DetectionIntervalSeconds      int32      `json:"detection_interval_seconds,omitempty"`
	DetectionTimeoutSeconds       int32      `json:"detection_timeout_seconds,omitempty"`
	ExecutionTimeoutSeconds       int32      `json:"execution_timeout_seconds,omitempty"`
	MaxJobsPerDetection           int32      `json:"max_jobs_per_detection,omitempty"`
	GlobalExecutionConcurrency    int        `json:"global_execution_concurrency,omitempty"`
	PerWorkerExecutionConcurrency int        `json:"per_worker_execution_concurrency,omitempty"`
	RetryLimit                    int        `json:"retry_limit,omitempty"`
	RetryBackoffSeconds           int32      `json:"retry_backoff_seconds,omitempty"`
	DetectorAvailable             bool       `json:"detector_available"`
	DetectorWorkerID              string     `json:"detector_worker_id,omitempty"`
	ExecutorWorkerCount           int        `json:"executor_worker_count"`
}

func timeToPtr(t time.Time) *time.Time {
	if t.IsZero() {
		return nil
	}
	return &t
}
@@ -129,21 +129,6 @@ function setupSubmenuBehavior() {
        }
    }

-    // If we're on a maintenance page, expand the maintenance submenu
-    if (currentPath.startsWith('/maintenance')) {
-        const maintenanceSubmenu = document.getElementById('maintenanceSubmenu');
-        if (maintenanceSubmenu) {
-            maintenanceSubmenu.classList.add('show');
-
-            // Update the parent toggle button state
-            const toggleButton = document.querySelector('[data-bs-target="#maintenanceSubmenu"]');
-            if (toggleButton) {
-                toggleButton.classList.remove('collapsed');
-                toggleButton.setAttribute('aria-expanded', 'true');
-            }
-        }
-    }
-
    // Prevent submenu from collapsing when clicking on submenu items
    const clusterSubmenuLinks = document.querySelectorAll('#clusterSubmenu .nav-link');
    clusterSubmenuLinks.forEach(function (link) {
@@ -161,14 +146,6 @@ function setupSubmenuBehavior() {
        });
    });

-    const maintenanceSubmenuLinks = document.querySelectorAll('#maintenanceSubmenu .nav-link');
-    maintenanceSubmenuLinks.forEach(function (link) {
-        link.addEventListener('click', function (e) {
-            // Don't prevent the navigation, just stop the collapse behavior
-            e.stopPropagation();
-        });
-    });
-
    // Handle the main cluster toggle
    const clusterToggle = document.querySelector('[data-bs-target="#clusterSubmenu"]');
    if (clusterToggle) {
@@ -215,28 +192,6 @@ function setupSubmenuBehavior() {
        });
    }

-    // Handle the main maintenance toggle
-    const maintenanceToggle = document.querySelector('[data-bs-target="#maintenanceSubmenu"]');
-    if (maintenanceToggle) {
-        maintenanceToggle.addEventListener('click', function (e) {
-            e.preventDefault();
-
-            const submenu = document.getElementById('maintenanceSubmenu');
-            const isExpanded = submenu.classList.contains('show');
-
-            if (isExpanded) {
-                // Collapse
-                submenu.classList.remove('show');
-                this.classList.add('collapsed');
-                this.setAttribute('aria-expanded', 'false');
-            } else {
-                // Expand
-                submenu.classList.add('show');
-                this.classList.remove('collapsed');
-                this.setAttribute('aria-expanded', 'true');
-            }
-        });
-    }
}

// Loading indicator functions
@@ -238,7 +238,7 @@ func (at *ActiveTopology) getPlanningCapacityUnsafe(disk *activeDisk) StorageSlo
 func (at *ActiveTopology) isDiskAvailableForPlanning(disk *activeDisk, taskType TaskType) bool {
 	// Check total load including pending tasks
 	totalLoad := len(disk.pendingTasks) + len(disk.assignedTasks)
-	if totalLoad >= MaxTotalTaskLoadPerDisk {
+	if MaxTotalTaskLoadPerDisk > 0 && totalLoad >= MaxTotalTaskLoadPerDisk {
 		return false
 	}

@@ -299,6 +299,16 @@ func (at *ActiveTopology) getEffectiveAvailableCapacityUnsafe(disk *activeDisk)
 	}

 	baseAvailable := disk.DiskInfo.DiskInfo.MaxVolumeCount - disk.DiskInfo.DiskInfo.VolumeCount
+	if baseAvailable <= 0 &&
+		disk.DiskInfo.DiskInfo.MaxVolumeCount == 0 &&
+		disk.DiskInfo.DiskInfo.VolumeCount == 0 &&
+		len(disk.DiskInfo.DiskInfo.VolumeInfos) == 0 &&
+		len(disk.DiskInfo.DiskInfo.EcShardInfos) == 0 {
+		// Some empty volume servers can report max_volume_counts=0 before
+		// publishing concrete slot limits. Keep one provisional slot so EC
+		// detection still sees the disk for placement planning.
+		baseAvailable = 1
+	}
 	netImpact := at.getEffectiveCapacityUnsafe(disk)

 	// Calculate available volume slots (negative impact reduces availability)
82
weed/admin/topology/capacity_limits_test.go
Normal file
82
weed/admin/topology/capacity_limits_test.go
Normal file
@@ -0,0 +1,82 @@
|
||||
package topology

import (
	"fmt"
	"testing"

	"github.com/seaweedfs/seaweedfs/weed/pb/master_pb"
)

func TestGetDisksWithEffectiveCapacityNotCappedAtTenByLoad(t *testing.T) {
	t.Parallel()

	activeTopology := NewActiveTopology(0)
	if err := activeTopology.UpdateTopology(singleDiskTopologyInfoForCapacityTest()); err != nil {
		t.Fatalf("UpdateTopology: %v", err)
	}

	const pendingTasks = 32
	for i := 0; i < pendingTasks; i++ {
		taskID := fmt.Sprintf("ec-capacity-%d", i)
		err := activeTopology.AddPendingTask(TaskSpec{
			TaskID:     taskID,
			TaskType:   TaskTypeErasureCoding,
			VolumeID:   uint32(i + 1),
			VolumeSize: 1,
			Sources: []TaskSourceSpec{
				{
					ServerID:      "node-a",
					DiskID:        0,
					StorageImpact: &StorageSlotChange{},
				},
			},
			Destinations: []TaskDestinationSpec{
				{
					ServerID:      "node-a",
					DiskID:        0,
					StorageImpact: &StorageSlotChange{},
				},
			},
		})
		if err != nil {
			t.Fatalf("AddPendingTask(%s): %v", taskID, err)
		}
	}

	disks := activeTopology.GetDisksWithEffectiveCapacity(TaskTypeErasureCoding, "", 1)
	if len(disks) != 1 {
		t.Fatalf("expected disk to remain available after %d pending tasks, got %d", pendingTasks, len(disks))
	}
	if disks[0].LoadCount != pendingTasks {
		t.Fatalf("unexpected load count: got=%d want=%d", disks[0].LoadCount, pendingTasks)
	}
}

func singleDiskTopologyInfoForCapacityTest() *master_pb.TopologyInfo {
	return &master_pb.TopologyInfo{
		Id: "topology-test",
		DataCenterInfos: []*master_pb.DataCenterInfo{
			{
				Id: "dc1",
				RackInfos: []*master_pb.RackInfo{
					{
						Id: "rack1",
						DataNodeInfos: []*master_pb.DataNodeInfo{
							{
								Id: "node-a",
								DiskInfos: map[string]*master_pb.DiskInfo{
									"hdd": {
										DiskId:         0,
										Type:           "hdd",
										VolumeCount:    0,
										MaxVolumeCount: 1000,
									},
								},
							},
						},
					},
				},
			},
		},
	}
}
@@ -68,7 +68,7 @@ func (at *ActiveTopology) assignTaskToDisk(task *taskState) {
 func (at *ActiveTopology) isDiskAvailable(disk *activeDisk, taskType TaskType) bool {
 	// Check if disk has too many pending and active tasks
 	activeLoad := len(disk.pendingTasks) + len(disk.assignedTasks)
-	if activeLoad >= MaxConcurrentTasksPerDisk {
+	if MaxConcurrentTasksPerDisk > 0 && activeLoad >= MaxConcurrentTasksPerDisk {
 		return false
 	}
@@ -317,6 +317,60 @@ func TestStorageSlotChangeCapacityCalculation(t *testing.T) {
 	assert.Equal(t, int32(0), reservedShard, "Should show 0 reserved shard slots")
 }

+func TestGetDisksWithEffectiveCapacity_UnknownEmptyDiskFallback(t *testing.T) {
+	activeTopology := NewActiveTopology(10)
+
+	topologyInfo := &master_pb.TopologyInfo{
+		DataCenterInfos: []*master_pb.DataCenterInfo{
+			{
+				Id: "dc1",
+				RackInfos: []*master_pb.RackInfo{
+					{
+						Id: "rack1",
+						DataNodeInfos: []*master_pb.DataNodeInfo{
+							{
+								Id: "empty-node",
+								DiskInfos: map[string]*master_pb.DiskInfo{
+									"hdd": {
+										DiskId:         0,
+										Type:           "hdd",
+										VolumeCount:    0,
+										MaxVolumeCount: 0,
+									},
+								},
+							},
+							{
+								Id: "used-node",
+								DiskInfos: map[string]*master_pb.DiskInfo{
+									"hdd": {
+										DiskId:         0,
+										Type:           "hdd",
+										VolumeCount:    1,
+										MaxVolumeCount: 0,
+									},
+								},
+							},
+						},
+					},
+				},
+			},
+		},
+	}
+
+	err := activeTopology.UpdateTopology(topologyInfo)
+	assert.NoError(t, err)
+
+	available := activeTopology.GetDisksWithEffectiveCapacity(TaskTypeErasureCoding, "", 1)
+	assert.Len(t, available, 1, "only the empty unknown-capacity disk should be treated as provisionally available")
+	if len(available) == 1 {
+		assert.Equal(t, "empty-node", available[0].NodeID)
+		assert.Equal(t, uint32(0), available[0].DiskID)
+	}
+
+	assert.Equal(t, int64(1), activeTopology.GetEffectiveAvailableCapacity("empty-node", 0))
+	assert.Equal(t, int64(0), activeTopology.GetEffectiveAvailableCapacity("used-node", 0))
+}
+
 // TestECMultipleTargets demonstrates proper handling of EC operations with multiple targets
 func TestECMultipleTargets(t *testing.T) {
 	activeTopology := NewActiveTopology(10)
@@ -26,17 +26,17 @@ const (

 // Task and capacity management configuration constants
 const (
-	// MaxConcurrentTasksPerDisk defines the maximum number of concurrent tasks per disk
-	// This prevents overloading a single disk with too many simultaneous operations
-	MaxConcurrentTasksPerDisk = 10
+	// MaxConcurrentTasksPerDisk defines the maximum number of pending+assigned tasks per disk.
+	// Set to 0 to disable hard load capping and rely on effective capacity checks.
+	MaxConcurrentTasksPerDisk = 0

-	// MaxTotalTaskLoadPerDisk defines the maximum total task load (pending + active) per disk
-	// This allows more tasks to be queued but limits the total pipeline depth
-	MaxTotalTaskLoadPerDisk = 20
+	// MaxTotalTaskLoadPerDisk defines the maximum total planning load (pending + active) per disk.
+	// Set to 0 to disable hard load capping for planning.
+	MaxTotalTaskLoadPerDisk = 0

-	// MaxTaskLoadForECPlacement defines the maximum task load to consider a disk for EC placement
-	// This threshold ensures disks aren't overloaded when planning EC operations
-	MaxTaskLoadForECPlacement = 10
+	// MaxTaskLoadForECPlacement defines the maximum task load to consider a disk for EC placement.
+	// Set to 0 to disable this filter.
+	MaxTaskLoadForECPlacement = 0
 )

 // StorageSlotChange represents storage impact at both volume and shard levels
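The constant changes above switch the hard caps to a zero sentinel meaning "disabled", which is why each comparison in the availability checks gains a `> 0` guard: with a zero cap, a bare `load >= cap` would be true for every disk and reject all of them. A standalone sketch of the pattern (the names here are illustrative, not SeaweedFS API):

```go
package main

import "fmt"

// overLimit reports whether load violates a cap, where limit == 0 means
// "no cap configured". Without the limit > 0 guard, a zero limit would
// make load >= limit true for every disk and reject all of them.
func overLimit(load, limit int) bool {
	return limit > 0 && load >= limit
}

func main() {
	fmt.Println(overLimit(32, 0))  // cap disabled: false
	fmt.Println(overLimit(32, 10)) // capped at 10: true
	fmt.Println(overLimit(5, 10))  // under cap: false
}
```

This is the same guard applied in `isDiskAvailableForPlanning` and `isDiskAvailable` above.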
@@ -3,6 +3,7 @@ package app
 import (
 	"fmt"
 	"github.com/seaweedfs/seaweedfs/weed/admin/dash"
+	"github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding"
 )

 templ EcVolumeDetails(data dash.EcVolumeDetailsData) {
@@ -61,11 +62,11 @@ templ EcVolumeDetails(data dash.EcVolumeDetailsData) {
 <td>
 	if data.IsComplete {
 		<span class="badge bg-success">
-			<i class="fas fa-check me-1"></i>Complete ({data.TotalShards}/14 shards)
+			<i class="fas fa-check me-1"></i>Complete ({data.TotalShards}/{fmt.Sprintf("%d", erasure_coding.TotalShardsCount)} shards)
 		</span>
 	} else {
 		<span class="badge bg-warning">
-			<i class="fas fa-exclamation-triangle me-1"></i>Incomplete ({data.TotalShards}/14 shards)
+			<i class="fas fa-exclamation-triangle me-1"></i>Incomplete ({data.TotalShards}/{fmt.Sprintf("%d", erasure_coding.TotalShardsCount)} shards)
 		</span>
 	}
 </td>
@@ -78,7 +79,7 @@ templ EcVolumeDetails(data dash.EcVolumeDetailsData) {
 	if i > 0 {
 		<span>, </span>
 	}
-	<span class="badge bg-danger">{fmt.Sprintf("%02d", shardID)}</span>
+	@renderEcShardBadge(uint32(shardID), true)
 }
 </td>
 </tr>
@@ -145,14 +146,19 @@ templ EcVolumeDetails(data dash.EcVolumeDetailsData) {
 <h6>Present Shards:</h6>
 <div class="d-flex flex-wrap gap-1">
 	for _, shard := range data.Shards {
-		<span class="badge bg-success me-1 mb-1">{fmt.Sprintf("%02d", shard.ShardID)}</span>
+		@renderEcShardBadge(shard.ShardID, false)
 	}
 </div>
+<div class="small text-muted mt-2">
+	<span class="badge bg-primary me-1">Data</span>
+	<span class="badge bg-warning text-dark me-2">Parity</span>
+	Data shards are blue, parity shards are yellow.
+</div>
 if len(data.MissingShards) > 0 {
 	<h6 class="mt-2">Missing Shards:</h6>
 	<div class="d-flex flex-wrap gap-1">
 		for _, shardID := range data.MissingShards {
-			<span class="badge bg-secondary me-1 mb-1">{fmt.Sprintf("%02d", shardID)}</span>
+			@renderEcShardBadge(uint32(shardID), true)
 		}
 	</div>
 }
@@ -240,7 +246,7 @@ templ EcVolumeDetails(data dash.EcVolumeDetailsData) {
 for _, shard := range data.Shards {
 	<tr>
 		<td>
-			<span class="badge bg-primary">{fmt.Sprintf("%02d", shard.ShardID)}</span>
+			@renderEcShardBadge(shard.ShardID, false)
 		</td>
 		<td>
 			<a href={ templ.URL("/cluster/volume-servers/" + shard.Server) } class="text-primary text-decoration-none">
@@ -260,7 +266,7 @@ templ EcVolumeDetails(data dash.EcVolumeDetailsData) {
 	<span class="text-success">{bytesToHumanReadableUint64(shard.Size)}</span>
 </td>
 <td>
-	<a href={ templ.SafeURL(fmt.Sprintf("http://%s/ui/index.html", shard.Server)) } target="_blank" class="btn btn-sm btn-primary">
+	<a href={ templ.SafeURL(fmt.Sprintf("http://%s/ui/index.html", shard.Server)) } target="_blank" rel="noopener noreferrer" class="btn btn-sm btn-primary">
 		<i class="fas fa-external-link-alt me-1"></i>Volume Server
 	</a>
 </td>
@@ -298,6 +304,22 @@ templ EcVolumeDetails(data dash.EcVolumeDetailsData) {
 </script>
 }

+templ renderEcShardBadge(shardID uint32, missing bool) {
+	if shardID < erasure_coding.DataShardsCount {
+		if missing {
+			<span class="badge bg-primary opacity-50 me-1 mb-1" title={ fmt.Sprintf("Missing data shard %d", shardID) }>{ fmt.Sprintf("D%02d", shardID) }</span>
+		} else {
+			<span class="badge bg-primary me-1 mb-1" title={ fmt.Sprintf("Data shard %d", shardID) }>{ fmt.Sprintf("D%02d", shardID) }</span>
+		}
+	} else {
+		if missing {
+			<span class="badge bg-warning text-dark opacity-50 me-1 mb-1" title={ fmt.Sprintf("Missing parity shard %d", shardID) }>{ fmt.Sprintf("P%02d", shardID) }</span>
+		} else {
+			<span class="badge bg-warning text-dark me-1 mb-1" title={ fmt.Sprintf("Parity shard %d", shardID) }>{ fmt.Sprintf("P%02d", shardID) }</span>
+		}
+	}
+}
+
 // Helper function to convert bytes to human readable format (uint64 version)
 func bytesToHumanReadableUint64(bytes uint64) string {
 	const unit = 1024
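The new `renderEcShardBadge` component classifies shards purely by index: IDs below `erasure_coding.DataShardsCount` are data shards, the rest are parity. A minimal sketch of the same labeling, assuming SeaweedFS's default 10+4 EC layout (the real constants live in `weed/storage/erasure_coding`):

```go
package main

import "fmt"

// Default SeaweedFS EC layout is 10 data + 4 parity shards; these
// mirror erasure_coding.DataShardsCount and TotalShardsCount.
const (
	dataShardsCount  = 10
	totalShardsCount = 14
)

// shardLabel reproduces the badge text: D%02d for data shards,
// P%02d for parity shards.
func shardLabel(shardID uint32) string {
	if shardID < dataShardsCount {
		return fmt.Sprintf("D%02d", shardID)
	}
	return fmt.Sprintf("P%02d", shardID)
}

func main() {
	// Prints D00 ... D09 P10 ... P13 for the default layout.
	for id := uint32(0); id < totalShardsCount; id++ {
		fmt.Print(shardLabel(id), " ")
	}
	fmt.Println()
}
```

Keying the badge style off the index is also why the templates could drop the hard-coded `/14` in favor of `erasure_coding.TotalShardsCount`.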
@@ -310,4 +332,4 @@ func bytesToHumanReadableUint64(bytes uint64) string {
 		exp++
 	}
 	return fmt.Sprintf("%.1f%cB", float64(bytes)/float64(div), "KMGTPE"[exp])
 }
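Only the tail of `bytesToHumanReadableUint64` appears in the hunk above. It follows the standard binary-units loop; a self-contained sketch of the whole helper, reconstructed on the usual pattern (the omitted head is assumed to handle values below 1024):

```go
package main

import "fmt"

// humanReadableBytes mirrors the helper's shape: divide by 1024 until
// the value fits, tracking the unit prefix in "KMGTPE".
func humanReadableBytes(bytes uint64) string {
	const unit = 1024
	if bytes < unit {
		return fmt.Sprintf("%dB", bytes)
	}
	div, exp := uint64(unit), 0
	for n := bytes / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f%cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}

func main() {
	fmt.Println(humanReadableBytes(512))        // 512B
	fmt.Println(humanReadableBytes(1536))       // 1.5KB
	fmt.Println(humanReadableBytes(1073741824)) // 1.0GB
}
```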
@@ -11,6 +11,7 @@ import templruntime "github.com/a-h/templ/runtime"
 import (
 	"fmt"
 	"github.com/seaweedfs/seaweedfs/weed/admin/dash"
+	"github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding"
 )

 func EcVolumeDetails(data dash.EcVolumeDetailsData) templ.Component {
@@ -41,7 +42,7 @@ func EcVolumeDetails(data dash.EcVolumeDetailsData) templ.Component {
 var templ_7745c5c3_Var2 string
 templ_7745c5c3_Var2, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.VolumeID))
 if templ_7745c5c3_Err != nil {
-	return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 18, Col: 115}
+	return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 19, Col: 115}
 }
 _, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var2))
 if templ_7745c5c3_Err != nil {
@@ -54,7 +55,7 @@ func EcVolumeDetails(data dash.EcVolumeDetailsData) templ.Component {
 var templ_7745c5c3_Var3 string
 templ_7745c5c3_Var3, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.VolumeID))
 if templ_7745c5c3_Err != nil {
-	return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 47, Col: 65}
+	return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 48, Col: 65}
 }
 _, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var3))
 if templ_7745c5c3_Err != nil {
@@ -72,7 +73,7 @@ func EcVolumeDetails(data dash.EcVolumeDetailsData) templ.Component {
 var templ_7745c5c3_Var4 string
 templ_7745c5c3_Var4, templ_7745c5c3_Err = templ.JoinStringErrs(data.Collection)
 if templ_7745c5c3_Err != nil {
-	return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 53, Col: 80}
+	return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 54, Col: 80}
 }
 _, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var4))
 if templ_7745c5c3_Err != nil {
@@ -100,445 +101,585 @@ func EcVolumeDetails(data dash.EcVolumeDetailsData) templ.Component {
var templ_7745c5c3_Var5 string
templ_7745c5c3_Var5, templ_7745c5c3_Err = templ.JoinStringErrs(data.TotalShards)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 64, Col: 100}
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 65, Col: 100}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var5))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 9, "/14 shards)</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 10, "<span class=\"badge bg-warning\"><i class=\"fas fa-exclamation-triangle me-1\"></i>Incomplete (")
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 9, "/")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var6 string
templ_7745c5c3_Var6, templ_7745c5c3_Err = templ.JoinStringErrs(data.TotalShards)
templ_7745c5c3_Var6, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", erasure_coding.TotalShardsCount))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 68, Col: 117}
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 65, Col: 153}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var6))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 11, "/14 shards)</span>")
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 10, " shards)</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 12, "</td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if !data.IsComplete {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 13, "<tr><td><strong>Missing Shards:</strong></td><td>")
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 11, "<span class=\"badge bg-warning\"><i class=\"fas fa-exclamation-triangle me-1\"></i>Incomplete (")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for i, shardID := range data.MissingShards {
if i > 0 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 14, "<span>, </span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 15, " <span class=\"badge bg-danger\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var7 string
templ_7745c5c3_Var7, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%02d", shardID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 81, Col: 99}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var7))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 16, "</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var7 string
templ_7745c5c3_Var7, templ_7745c5c3_Err = templ.JoinStringErrs(data.TotalShards)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 69, Col: 117}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 17, "</td></tr>")
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var7))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 18, "<tr><td><strong>Data Centers:</strong></td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for i, dc := range data.DataCenters {
if i > 0 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 19, "<span>, </span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 20, " <span class=\"badge bg-primary\">")
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 12, "/")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var8 string
templ_7745c5c3_Var8, templ_7745c5c3_Err = templ.JoinStringErrs(dc)
templ_7745c5c3_Var8, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", erasure_coding.TotalShardsCount))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 93, Col: 70}
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 69, Col: 170}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var8))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 21, "</span>")
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 13, " shards)</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 22, "</td></tr><tr><td><strong>Servers:</strong></td><td><span class=\"text-muted\">")
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 14, "</td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var9 string
templ_7745c5c3_Var9, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d servers", len(data.Servers)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 100, Col: 102}
if !data.IsComplete {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 15, "<tr><td><strong>Missing Shards:</strong></td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for i, shardID := range data.MissingShards {
if i > 0 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 16, "<span>, </span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 17, " ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = renderEcShardBadge(uint32(shardID), true).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 18, "</td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var9))
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 19, "<tr><td><strong>Data Centers:</strong></td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 23, "</span></td></tr><tr><td><strong>Last Updated:</strong></td><td><span class=\"text-muted\">")
for i, dc := range data.DataCenters {
if i > 0 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 20, "<span>, </span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 21, " <span class=\"badge bg-primary\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var9 string
templ_7745c5c3_Var9, templ_7745c5c3_Err = templ.JoinStringErrs(dc)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 94, Col: 70}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var9))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 22, "</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 23, "</td></tr><tr><td><strong>Servers:</strong></td><td><span class=\"text-muted\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var10 string
templ_7745c5c3_Var10, templ_7745c5c3_Err = templ.JoinStringErrs(data.LastUpdated.Format("2006-01-02 15:04:05"))
templ_7745c5c3_Var10, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d servers", len(data.Servers)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 106, Col: 104}
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 101, Col: 102}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var10))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 24, "</span></td></tr></table></div></div></div><div class=\"col-md-6\"><div class=\"card\"><div class=\"card-header\"><h5 class=\"card-title mb-0\"><i class=\"fas fa-chart-pie me-2\"></i>Shard Distribution</h5></div><div class=\"card-body\"><div class=\"row text-center\"><div class=\"col-4\"><div class=\"border rounded p-3\"><h3 class=\"text-primary mb-1\">")
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 24, "</span></td></tr><tr><td><strong>Last Updated:</strong></td><td><span class=\"text-muted\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var11 string
templ_7745c5c3_Var11, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.TotalShards))
templ_7745c5c3_Var11, templ_7745c5c3_Err = templ.JoinStringErrs(data.LastUpdated.Format("2006-01-02 15:04:05"))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 125, Col: 98}
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 107, Col: 104}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var11))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 25, "</h3><small class=\"text-muted\">Total Shards</small></div></div><div class=\"col-4\"><div class=\"border rounded p-3\"><h3 class=\"text-success mb-1\">")
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 25, "</span></td></tr></table></div></div></div><div class=\"col-md-6\"><div class=\"card\"><div class=\"card-header\"><h5 class=\"card-title mb-0\"><i class=\"fas fa-chart-pie me-2\"></i>Shard Distribution</h5></div><div class=\"card-body\"><div class=\"row text-center\"><div class=\"col-4\"><div class=\"border rounded p-3\"><h3 class=\"text-primary mb-1\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var12 string
templ_7745c5c3_Var12, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", len(data.DataCenters)))
templ_7745c5c3_Var12, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.TotalShards))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 131, Col: 103}
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 126, Col: 98}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var12))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 26, "</h3><small class=\"text-muted\">Data Centers</small></div></div><div class=\"col-4\"><div class=\"border rounded p-3\"><h3 class=\"text-info mb-1\">")
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 26, "</h3><small class=\"text-muted\">Total Shards</small></div></div><div class=\"col-4\"><div class=\"border rounded p-3\"><h3 class=\"text-success mb-1\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var13 string
templ_7745c5c3_Var13, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", len(data.Servers)))
templ_7745c5c3_Var13, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", len(data.DataCenters)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 137, Col: 96}
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 132, Col: 103}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var13))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 27, "</h3><small class=\"text-muted\">Servers</small></div></div></div><!-- Shard Distribution Visualization --><div class=\"mt-3\"><h6>Present Shards:</h6><div class=\"d-flex flex-wrap gap-1\">")
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 27, "</h3><small class=\"text-muted\">Data Centers</small></div></div><div class=\"col-4\"><div class=\"border rounded p-3\"><h3 class=\"text-info mb-1\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var14 string
templ_7745c5c3_Var14, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", len(data.Servers)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 138, Col: 96}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var14))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 28, "</h3><small class=\"text-muted\">Servers</small></div></div></div><!-- Shard Distribution Visualization --><div class=\"mt-3\"><h6>Present Shards:</h6><div class=\"d-flex flex-wrap gap-1\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, shard := range data.Shards {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 28, "<span class=\"badge bg-success me-1 mb-1\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var14 string
templ_7745c5c3_Var14, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%02d", shard.ShardID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 148, Col: 108}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var14))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 29, "</span>")
templ_7745c5c3_Err = renderEcShardBadge(shard.ShardID, false).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 30, "</div>")
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 29, "</div><div class=\"small text-muted mt-2\"><span class=\"badge bg-primary me-1\">Data</span> <span class=\"badge bg-warning text-dark me-2\">Parity</span> Data shards are blue, parity shards are yellow.</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if len(data.MissingShards) > 0 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 31, "<h6 class=\"mt-2\">Missing Shards:</h6><div class=\"d-flex flex-wrap gap-1\">")
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 30, "<h6 class=\"mt-2\">Missing Shards:</h6><div class=\"d-flex flex-wrap gap-1\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, shardID := range data.MissingShards {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 32, "<span class=\"badge bg-secondary me-1 mb-1\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var15 string
templ_7745c5c3_Var15, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%02d", shardID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 155, Col: 108}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var15))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 33, "</span>")
templ_7745c5c3_Err = renderEcShardBadge(uint32(shardID), true).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 34, "</div>")
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 31, "</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 35, "</div></div></div></div></div><!-- Shard Details Table --><div class=\"card\"><div class=\"card-header\"><h5 class=\"card-title mb-0\"><i class=\"fas fa-list me-2\"></i>Shard Details</h5></div><div class=\"card-body\">")
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 32, "</div></div></div></div></div><!-- Shard Details Table --><div class=\"card\"><div class=\"card-header\"><h5 class=\"card-title mb-0\"><i class=\"fas fa-list me-2\"></i>Shard Details</h5></div><div class=\"card-body\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
if len(data.Shards) > 0 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 33, "<div class=\"table-responsive\"><table class=\"table table-striped table-hover\"><thead><tr><th><a href=\"#\" onclick=\"sortBy('shard_id')\" class=\"text-dark text-decoration-none\">Shard ID ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.SortBy == "shard_id" {
if data.SortOrder == "asc" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 34, "<i class=\"fas fa-sort-up ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 35, "<i class=\"fas fa-sort-down ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 36, "<i class=\"fas fa-sort ms-1 text-muted\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 37, "</a></th><th><a href=\"#\" onclick=\"sortBy('server')\" class=\"text-dark text-decoration-none\">Server ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.SortBy == "server" {
if data.SortOrder == "asc" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 38, "<i class=\"fas fa-sort-up ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 39, "<i class=\"fas fa-sort-down ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 40, "<i class=\"fas fa-sort ms-1 text-muted\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 41, "</a></th><th><a href=\"#\" onclick=\"sortBy('data_center')\" class=\"text-dark text-decoration-none\">Data Center ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.SortBy == "data_center" {
if data.SortOrder == "asc" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 42, "<i class=\"fas fa-sort-up ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 43, "<i class=\"fas fa-sort-down ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 44, "<i class=\"fas fa-sort ms-1 text-muted\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 45, "</a></th><th><a href=\"#\" onclick=\"sortBy('rack')\" class=\"text-dark text-decoration-none\">Rack ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.SortBy == "rack" {
if data.SortOrder == "asc" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 46, "<i class=\"fas fa-sort-up ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 47, "<i class=\"fas fa-sort-down ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 48, "<i class=\"fas fa-sort ms-1 text-muted\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 49, "</a></th><th class=\"text-dark\">Disk Type</th><th class=\"text-dark\">Shard Size</th><th class=\"text-dark\">Actions</th></tr></thead> <tbody>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, shard := range data.Shards {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 50, "<tr><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = renderEcShardBadge(shard.ShardID, false).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 51, "</td><td><a href=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var15 templ.SafeURL
templ_7745c5c3_Var15, templ_7745c5c3_Err = templ.JoinURLErrs(templ.URL("/cluster/volume-servers/" + shard.Server))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 252, Col: 106}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(string(templ_7745c5c3_Var15)))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 52, "\" class=\"text-primary text-decoration-none\"><code class=\"small\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var16 string
templ_7745c5c3_Var16, templ_7745c5c3_Err = templ.JoinStringErrs(shard.Server)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 253, Col: 81}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var16))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 53, "</code></a></td><td><span class=\"badge bg-primary text-white\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var17 string
templ_7745c5c3_Var17, templ_7745c5c3_Err = templ.JoinStringErrs(shard.DataCenter)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 257, Col: 103}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var17))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 54, "</span></td><td><span class=\"badge bg-secondary text-white\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var18 string
templ_7745c5c3_Var18, templ_7745c5c3_Err = templ.JoinStringErrs(shard.Rack)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 260, Col: 99}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var18))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 55, "</span></td><td><span class=\"text-dark\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var19 string
templ_7745c5c3_Var19, templ_7745c5c3_Err = templ.JoinStringErrs(shard.DiskType)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 263, Col: 83}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var19))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 56, "</span></td><td><span class=\"text-success\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var20 string
templ_7745c5c3_Var20, templ_7745c5c3_Err = templ.JoinStringErrs(bytesToHumanReadableUint64(shard.Size))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 266, Col: 110}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var20))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 57, "</span></td><td><a href=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var21 templ.SafeURL
templ_7745c5c3_Var21, templ_7745c5c3_Err = templ.JoinURLErrs(templ.SafeURL(fmt.Sprintf("http://%s/ui/index.html", shard.Server)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 269, Col: 121}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(string(templ_7745c5c3_Var21)))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 58, "\" target=\"_blank\" rel=\"noopener noreferrer\" class=\"btn btn-sm btn-primary\"><i class=\"fas fa-external-link-alt me-1\"></i>Volume Server</a></td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 59, "</tbody></table></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 60, "<div class=\"text-center py-4\"><i class=\"fas fa-exclamation-triangle fa-3x text-warning mb-3\"></i><h5>No EC shards found</h5><p class=\"text-muted\">This volume may not be EC encoded yet.</p></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 61, "</div></div><script>\n        // Sorting functionality\n        function sortBy(field) {\n            const currentSort = new URLSearchParams(window.location.search).get('sort_by');\n            const currentOrder = new URLSearchParams(window.location.search).get('sort_order') || 'asc';\n            \n            let newOrder = 'asc';\n            if (currentSort === field && currentOrder === 'asc') {\n                newOrder = 'desc';\n            }\n            \n            const url = new URL(window.location);\n            url.searchParams.set('sort_by', field);\n            url.searchParams.set('sort_order', newOrder);\n            window.location.href = url.toString();\n        }\n    </script>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
return nil
})
}

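The embedded `sortBy(field)` script above flips `sort_order` between `asc` and `desc` when the currently sorted column is clicked again, and resets to ascending otherwise. A standalone sketch of that toggle rule in Go (a hypothetical helper mirroring the inline JavaScript, not part of the generated file):

```go
package main

import "fmt"

// nextSortOrder mirrors the sortBy() script: clicking the column that is
// already sorted ascending flips it to descending; any other click
// (different column, or currently descending) yields ascending.
func nextSortOrder(currentSort, currentOrder, clicked string) string {
	if currentSort == clicked && currentOrder == "asc" {
		return "desc"
	}
	return "asc"
}

func main() {
	fmt.Println(nextSortOrder("shard_id", "asc", "shard_id"))  // desc
	fmt.Println(nextSortOrder("shard_id", "desc", "shard_id")) // asc
	fmt.Println(nextSortOrder("shard_id", "asc", "server"))    // asc
}
```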
func renderEcShardBadge(shardID uint32, missing bool) templ.Component {
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
return templ_7745c5c3_CtxErr
}
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
if !templ_7745c5c3_IsBuffer {
defer func() {
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
if templ_7745c5c3_Err == nil {
templ_7745c5c3_Err = templ_7745c5c3_BufErr
}
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var22 := templ.GetChildren(ctx)
if templ_7745c5c3_Var22 == nil {
templ_7745c5c3_Var22 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
if shardID < erasure_coding.DataShardsCount {
if missing {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 62, "<span class=\"badge bg-primary opacity-50 me-1 mb-1\" title=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var23 string
templ_7745c5c3_Var23, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("Missing data shard %d", shardID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 310, Col: 108}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var23))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 63, "\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var24 string
templ_7745c5c3_Var24, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("D%02d", shardID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 310, Col: 142}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var24))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 64, "</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 65, "<span class=\"badge bg-primary me-1 mb-1\" title=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var25 string
templ_7745c5c3_Var25, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("Data shard %d", shardID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 312, Col: 89}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var25))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 66, "\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var26 string
templ_7745c5c3_Var26, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("D%02d", shardID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 312, Col: 123}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var26))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 67, "</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
if missing {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 68, "<span class=\"badge bg-warning text-dark opacity-50 me-1 mb-1\" title=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var27 string
templ_7745c5c3_Var27, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("Missing parity shard %d", shardID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 316, Col: 120}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var27))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 69, "\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var28 string
templ_7745c5c3_Var28, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("P%02d", shardID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 316, Col: 154}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var28))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 70, "</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 71, "<span class=\"badge bg-warning text-dark me-1 mb-1\" title=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var29 string
templ_7745c5c3_Var29, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("Parity shard %d", shardID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 318, Col: 101}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var29))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 72, "\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var30 string
templ_7745c5c3_Var30, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("P%02d", shardID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/ec_volume_details.templ`, Line: 318, Col: 135}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var30))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 73, "</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
}
return nil
})
}

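The badge helper above branches on `erasure_coding.DataShardsCount` to label shards `D%02d` (blue data badges) or `P%02d` (yellow parity badges). A minimal standalone sketch of that classification, assuming the standard SeaweedFS layout of 10 data and 4 parity shards (the `dataShardsCount` constant here is a local stand-in for the real `erasure_coding` package constant):

```go
package main

import "fmt"

// Assumed constant mirroring weed/storage/erasure_coding.DataShardsCount;
// SeaweedFS's default EC layout is 10 data + 4 parity shards.
const dataShardsCount = 10

// shardLabel reproduces the D%02d / P%02d naming used by renderEcShardBadge:
// shard IDs below the data-shard count are data shards, the rest are parity.
func shardLabel(shardID uint32) string {
	if shardID < dataShardsCount {
		return fmt.Sprintf("D%02d", shardID)
	}
	return fmt.Sprintf("P%02d", shardID)
}

func main() {
	for _, id := range []uint32{0, 9, 10, 13} {
		fmt.Println(shardLabel(id)) // D00, D09, P10, P13
	}
}
```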
|
||||
@@ -1,267 +0,0 @@
|
||||
package app
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
|
||||
)
|
||||
|
||||
templ MaintenanceConfig(data *maintenance.MaintenanceConfigData) {
|
||||
<div class="container-fluid">
|
||||
<div class="row mb-4">
|
||||
<div class="col-12">
|
||||
<div class="d-flex justify-content-between align-items-center">
|
||||
<h2 class="mb-0">
|
||||
<i class="fas fa-cog me-2"></i>
|
||||
Maintenance Configuration
|
||||
</h2>
|
||||
<div class="btn-group">
|
||||
<a href="/maintenance" class="btn btn-outline-secondary">
|
||||
<i class="fas fa-arrow-left me-1"></i>
|
||||
Back to Queue
|
||||
</a>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="row">
|
||||
<div class="col-12">
|
||||
<div class="card">
|
||||
<div class="card-header">
|
||||
<h5 class="mb-0">System Settings</h5>
|
||||
</div>
|
||||
<div class="card-body">
|
||||
<form>
|
||||
<div class="mb-3">
|
||||
<div class="form-check form-switch">
|
||||
<input class="form-check-input" type="checkbox" id="enabled" checked?={data.IsEnabled}>
|
||||
<label class="form-check-label" for="enabled">
|
||||
<strong>Enable Maintenance System</strong>
|
||||
</label>
|
||||
</div>
|
||||
<small class="form-text text-muted">
|
||||
When enabled, the system will automatically scan for and execute maintenance tasks.
|
||||
</small>
|
||||
</div>
|
||||
|
||||
<div class="mb-3">
|
||||
<label for="scanInterval" class="form-label">Scan Interval (minutes)</label>
|
||||
<input type="number" class="form-control" id="scanInterval"
|
||||
value={fmt.Sprintf("%.0f", float64(data.Config.ScanIntervalSeconds)/60)}
|
||||
placeholder="30 (default)" min="1" max="1440">
|
||||
<small class="form-text text-muted">
|
||||
How often to scan for maintenance tasks (1-1440 minutes). <strong>Default: 30 minutes</strong>
|
||||
</small>
|
||||
</div>
|
||||
|
||||
<div class="mb-3">
|
||||
<label for="workerTimeout" class="form-label">Worker Timeout (minutes)</label>
|
||||
<input type="number" class="form-control" id="workerTimeout"
|
||||
value={fmt.Sprintf("%.0f", float64(data.Config.WorkerTimeoutSeconds)/60)}
|
||||
placeholder="5 (default)" min="1" max="60">
|
||||
<small class="form-text text-muted">
|
||||
How long to wait for worker heartbeat before considering it inactive (1-60 minutes). <strong>Default: 5 minutes</strong>
|
||||
</small>
|
||||
</div>
|
||||
|
||||
<div class="mb-3">
|
||||
<label for="taskTimeout" class="form-label">Task Timeout (hours)</label>
|
||||
<input type="number" class="form-control" id="taskTimeout"
|
||||
value={fmt.Sprintf("%.0f", float64(data.Config.TaskTimeoutSeconds)/3600)}
|
||||
placeholder="2 (default)" min="1" max="24">
|
||||
<small class="form-text text-muted">
|
||||
Maximum time allowed for a single task to complete (1-24 hours). <strong>Default: 2 hours</strong>
|
||||
</small>
|
||||
</div>
|
||||
|
||||
<div class="mb-3">
|
||||
<label for="globalMaxConcurrent" class="form-label">Global Concurrent Limit</label>
|
||||
<input type="number" class="form-control" id="globalMaxConcurrent"
|
||||
value={fmt.Sprintf("%d", data.Config.Policy.GlobalMaxConcurrent)}
|
||||
placeholder="4 (default)" min="1" max="20">
|
||||
<small class="form-text text-muted">
|
||||
Maximum number of maintenance tasks that can run simultaneously across all workers (1-20). <strong>Default: 4</strong>
|
||||
</small>
|
||||
</div>
|
||||
|
||||
<div class="mb-3">
|
||||
<label for="maxRetries" class="form-label">Default Max Retries</label>
|
||||
<input type="number" class="form-control" id="maxRetries"
|
||||
value={fmt.Sprintf("%d", data.Config.MaxRetries)}
|
||||
placeholder="3 (default)" min="0" max="10">
|
||||
<small class="form-text text-muted">
|
||||
Default number of times to retry failed tasks (0-10). <strong>Default: 3</strong>
|
||||
</small>
|
||||
</div>
|
||||
|
||||
<div class="mb-3">
|
||||
<label for="retryDelay" class="form-label">Retry Delay (minutes)</label>
|
||||
<input type="number" class="form-control" id="retryDelay"
|
||||
value={fmt.Sprintf("%.0f", float64(data.Config.RetryDelaySeconds)/60)}
|
||||
placeholder="15 (default)" min="1" max="120">
|
||||
<small class="form-text text-muted">
|
||||
Time to wait before retrying failed tasks (1-120 minutes). <strong>Default: 15 minutes</strong>
|
||||
</small>
|
||||
</div>
|
||||
|
||||
<div class="mb-3">
|
||||
<label for="taskRetention" class="form-label">Task Retention (days)</label>
|
||||
<input type="number" class="form-control" id="taskRetention"
|
||||
value={fmt.Sprintf("%.0f", float64(data.Config.TaskRetentionSeconds)/(24*3600))}
|
||||
placeholder="7 (default)" min="1" max="30">
|
||||
<small class="form-text text-muted">
|
||||
How long to keep completed/failed task records (1-30 days). <strong>Default: 7 days</strong>
|
||||
</small>
|
||||
</div>
|
||||
|
||||
<div class="d-flex gap-2">
|
||||
<button type="button" class="btn btn-primary" onclick="saveConfiguration()">
|
||||
<i class="fas fa-save me-1"></i>
|
||||
Save Configuration
|
||||
</button>
|
||||
<button type="button" class="btn btn-secondary" onclick="resetToDefaults()">
|
||||
<i class="fas fa-undo me-1"></i>
|
||||
Reset to Defaults
|
||||
</button>
|
||||
</div>
|
||||
</form>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Individual Task Configuration Menu -->
|
||||
<div class="row mt-4">
|
||||
<div class="col-12">
|
||||
<div class="card">
|
||||
<div class="card-header">
|
||||
<h5 class="mb-0">
|
||||
<i class="fas fa-cogs me-2"></i>
|
||||
Task Configuration
|
||||
</h5>
|
||||
</div>
|
||||
<div class="card-body">
|
||||
<p class="text-muted mb-3">Configure specific settings for each maintenance task type.</p>
|
||||
<div class="list-group">
|
||||
for _, menuItem := range data.MenuItems {
|
||||
<a href={templ.SafeURL(menuItem.Path)} class="list-group-item list-group-item-action">
|
||||
<div class="d-flex w-100 justify-content-between">
|
||||
<h6 class="mb-1">
|
||||
<i class={menuItem.Icon + " me-2"}></i>
|
||||
{menuItem.DisplayName}
|
||||
</h6>
|
||||
if menuItem.IsEnabled {
|
||||
<span class="badge bg-success">Enabled</span>
|
||||
} else {
|
||||
<span class="badge bg-secondary">Disabled</span>
|
||||
}
|
||||
</div>
|
||||
<p class="mb-1 small text-muted">{menuItem.Description}</p>
|
||||
</a>
|
||||
}
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Statistics Overview -->
|
||||
<div class="row mt-4">
|
||||
<div class="col-12">
|
||||
<div class="card">
|
||||
<div class="card-header">
|
||||
<h5 class="mb-0">System Statistics</h5>
|
||||
</div>
|
||||
<div class="card-body">
|
||||
<div class="row">
|
||||
<div class="col-md-3">
|
||||
<div class="text-center">
|
||||
<h6 class="text-muted">Last Scan</h6>
|
||||
<p class="mb-0">{data.LastScanTime.Format("2006-01-02 15:04:05")}</p>
|
||||
</div>
|
||||
</div>
|
||||
<div class="col-md-3">
|
||||
<div class="text-center">
|
||||
<h6 class="text-muted">Next Scan</h6>
|
||||
<p class="mb-0">{data.NextScanTime.Format("2006-01-02 15:04:05")}</p>
|
||||
</div>
|
||||
</div>
|
||||
<div class="col-md-3">
|
||||
<div class="text-center">
|
||||
<h6 class="text-muted">Total Tasks</h6>
|
||||
<p class="mb-0">{fmt.Sprintf("%d", data.SystemStats.TotalTasks)}</p>
|
||||
</div>
|
||||
</div>
|
||||
<div class="col-md-3">
|
||||
<div class="text-center">
|
||||
<h6 class="text-muted">Active Workers</h6>
|
||||
<p class="mb-0">{fmt.Sprintf("%d", data.SystemStats.ActiveWorkers)}</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<script>
|
||||
function saveConfiguration() {
|
||||
// First, get current configuration to preserve existing values
|
||||
fetch('/api/maintenance/config')
|
||||
.then(response => response.json())
|
||||
.then(currentConfig => {
|
||||
// Update only the fields from the form
|
||||
const updatedConfig = {
|
||||
...currentConfig.config, // Preserve existing config
|
||||
enabled: document.getElementById('enabled').checked,
|
||||
scan_interval_seconds: parseInt(document.getElementById('scanInterval').value) * 60, // Convert to seconds
|
||||
worker_timeout_seconds: parseInt(document.getElementById('workerTimeout').value) * 60, // Convert to seconds
|
||||
task_timeout_seconds: parseInt(document.getElementById('taskTimeout').value) * 3600, // Convert to seconds
|
||||
retry_delay_seconds: parseInt(document.getElementById('retryDelay').value) * 60, // Convert to seconds
|
||||
max_retries: parseInt(document.getElementById('maxRetries').value),
|
||||
task_retention_seconds: parseInt(document.getElementById('taskRetention').value) * 24 * 3600, // Convert to seconds
|
||||
policy: {
|
||||
...currentConfig.config.policy, // Preserve existing policy
|
||||
global_max_concurrent: parseInt(document.getElementById('globalMaxConcurrent').value)
|
||||
}
|
||||
};
|
||||
|
||||
// Send the updated configuration
|
||||
return fetch('/api/maintenance/config', {
|
||||
method: 'PUT',
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
},
|
||||
body: JSON.stringify(updatedConfig)
|
||||
});
|
||||
})
|
||||
.then(response => response.json())
|
||||
.then(data => {
|
||||
if (data.success) {
|
||||
alert('Configuration saved successfully');
|
||||
location.reload(); // Reload to show updated values
|
||||
} else {
|
||||
alert('Failed to save configuration: ' + (data.error || 'Unknown error'));
|
||||
}
|
||||
})
|
||||
.catch(error => {
|
||||
alert('Error: ' + error.message);
|
||||
});
|
||||
}
|
||||
|
||||
function resetToDefaults() {
|
||||
showConfirm('Are you sure you want to reset to default configuration? This will overwrite your current settings.', function() {
|
||||
// Reset form to defaults (matching DefaultMaintenanceConfig values)
|
||||
document.getElementById('enabled').checked = false;
|
||||
document.getElementById('scanInterval').value = '30';
|
||||
document.getElementById('workerTimeout').value = '5';
|
||||
document.getElementById('taskTimeout').value = '2';
|
||||
document.getElementById('globalMaxConcurrent').value = '4';
|
||||
document.getElementById('maxRetries').value = '3';
|
||||
document.getElementById('retryDelay').value = '15';
|
||||
document.getElementById('taskRetention').value = '7';
|
||||
});
|
||||
}
|
||||
</script>
|
||||
}
|
||||
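The form above renders durations in minutes, hours, or days but persists everything in seconds, so the save handler multiplies each field by the matching factor before the PUT. A minimal Go sketch of that conversion, for illustration only (the real conversion happens client-side in the JavaScript; `displayToSeconds` is a hypothetical helper, not part of the codebase):

```go
package main

import "fmt"

// displayToSeconds mirrors the form's client-side unit conversion:
// the config stores seconds, while the UI shows minutes/hours/days.
func displayToSeconds(value int, unit string) int {
	switch unit {
	case "minutes":
		return value * 60
	case "hours":
		return value * 3600
	case "days":
		return value * 24 * 3600
	}
	// Unknown unit: treat the value as already being in seconds.
	return value
}

func main() {
	fmt.Println(displayToSeconds(30, "minutes")) // scan interval default
	fmt.Println(displayToSeconds(2, "hours"))    // task timeout default
	fmt.Println(displayToSeconds(7, "days"))     // task retention default
}
```

The reverse direction (seconds back to a display value and unit) is what `ConvertInt32SecondsToDisplayValue` and `GetInt32DisplayUnit` handle in the schema-driven template below.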
@@ -1,383 +0,0 @@
package app

import (
	"fmt"
	"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
	"github.com/seaweedfs/seaweedfs/weed/admin/config"
	"github.com/seaweedfs/seaweedfs/weed/admin/view/components"
)

templ MaintenanceConfigSchema(data *maintenance.MaintenanceConfigData, schema *maintenance.MaintenanceConfigSchema) {
  <div class="container-fluid">
    <div class="row mb-4">
      <div class="col-12">
        <div class="d-flex justify-content-between align-items-center">
          <h2 class="mb-0">
            <i class="fas fa-cogs me-2"></i>
            Maintenance Configuration
          </h2>
          <div class="btn-group">
            <a href="/maintenance/tasks" class="btn btn-outline-primary">
              <i class="fas fa-tasks me-1"></i>
              View Tasks
            </a>
          </div>
        </div>
      </div>
    </div>

    <div class="row">
      <div class="col-12">
        <div class="card">
          <div class="card-header">
            <h5 class="mb-0">System Settings</h5>
          </div>
          <div class="card-body">
            <form id="maintenanceConfigForm">
              <!-- Dynamically render all schema fields in order -->
              for _, field := range schema.Fields {
                @ConfigField(field, data.Config)
              }

              <div class="d-flex gap-2">
                <button type="button" class="btn btn-primary" onclick="saveConfiguration()">
                  <i class="fas fa-save me-1"></i>
                  Save Configuration
                </button>
                <button type="button" class="btn btn-secondary" onclick="resetToDefaults()">
                  <i class="fas fa-undo me-1"></i>
                  Reset to Defaults
                </button>
              </div>
            </form>
          </div>
        </div>
      </div>
    </div>

    <!-- Task Configuration Cards -->
    <div class="row mt-4">
      <div class="col-md-4">
        <div class="card">
          <div class="card-header">
            <h5 class="mb-0">
              <i class="fas fa-broom me-2"></i>
              Volume Vacuum
            </h5>
          </div>
          <div class="card-body">
            <p class="card-text">Reclaims disk space by removing deleted files from volumes.</p>
            <a href="/maintenance/config/vacuum" class="btn btn-primary">Configure</a>
          </div>
        </div>
      </div>
      <div class="col-md-4">
        <div class="card">
          <div class="card-header">
            <h5 class="mb-0">
              <i class="fas fa-balance-scale me-2"></i>
              Volume Balance
            </h5>
          </div>
          <div class="card-body">
            <p class="card-text">Redistributes volumes across servers to optimize storage utilization.</p>
            <a href="/maintenance/config/balance" class="btn btn-primary">Configure</a>
          </div>
        </div>
      </div>
      <div class="col-md-4">
        <div class="card">
          <div class="card-header">
            <h5 class="mb-0">
              <i class="fas fa-shield-alt me-2"></i>
              Erasure Coding
            </h5>
          </div>
          <div class="card-body">
            <p class="card-text">Converts volumes to erasure coded format for improved durability.</p>
            <a href="/maintenance/config/erasure_coding" class="btn btn-primary">Configure</a>
          </div>
        </div>
      </div>
    </div>
  </div>

  <script>
    function saveConfiguration() {
      const form = document.getElementById('maintenanceConfigForm');
      const formData = new FormData(form);

      // Convert form data to JSON, handling interval fields specially
      const config = {};

      for (let [key, value] of formData.entries()) {
        if (key.endsWith('_value')) {
          // This is an interval value part
          const baseKey = key.replace('_value', '');
          const unitKey = baseKey + '_unit';
          const unitValue = formData.get(unitKey);

          if (unitValue) {
            // Convert to seconds based on unit
            const numValue = parseInt(value) || 0;
            let seconds = numValue;
            switch(unitValue) {
              case 'minutes':
                seconds = numValue * 60;
                break;
              case 'hours':
                seconds = numValue * 3600;
                break;
              case 'days':
                seconds = numValue * 24 * 3600;
                break;
            }
            config[baseKey] = seconds;
          }
        } else if (key.endsWith('_unit')) {
          // Skip unit keys - they're handled with their corresponding value
          continue;
        } else {
          // Regular field
          if (form.querySelector(`[name="${key}"]`).type === 'checkbox') {
            config[key] = form.querySelector(`[name="${key}"]`).checked;
          } else {
            const numValue = parseFloat(value);
            config[key] = isNaN(numValue) ? value : numValue;
          }
        }
      }

      fetch('/api/maintenance/config', {
        method: 'PUT',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify(config)
      })
      .then(response => {
        if (response.status === 401) {
          showAlert('Authentication required. Please log in first.', 'warning');
          setTimeout(() => {
            window.location.href = '/login';
          }, 2000);
          return;
        }
        return response.json();
      })
      .then(data => {
        if (!data) return; // Skip if redirected to login
        if (data.success) {
          showAlert('Configuration saved successfully!', 'success');
          location.reload();
        } else {
          showAlert('Error saving configuration: ' + (data.error || 'Unknown error'), 'error');
        }
      })
      .catch(error => {
        console.error('Error:', error);
        showAlert('Error saving configuration: ' + error.message, 'error');
      });
    }

    function resetToDefaults() {
      showConfirm('Are you sure you want to reset to default configuration? This will overwrite your current settings.', function() {
        fetch('/maintenance/config/defaults', {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
          }
        })
        .then(response => response.json())
        .then(data => {
          if (data.success) {
            showAlert('Configuration reset to defaults!', 'success');
            location.reload();
          } else {
            showAlert('Error resetting configuration: ' + (data.error || 'Unknown error'), 'error');
          }
        })
        .catch(error => {
          console.error('Error:', error);
          showAlert('Error resetting configuration: ' + error.message, 'error');
        });
      });
    }
  </script>
}

// ConfigField renders a single configuration field based on schema with typed value lookup
templ ConfigField(field *config.Field, config *maintenance.MaintenanceConfig) {
  if field.InputType == "interval" {
    <!-- Interval field with number input + unit dropdown -->
    <div class="mb-3">
      <label for={ field.JSONName } class="form-label">
        { field.DisplayName }
        if field.Required {
          <span class="text-danger">*</span>
        }
      </label>
      <div class="input-group">
        <input
          type="number"
          class="form-control"
          id={ field.JSONName + "_value" }
          name={ field.JSONName + "_value" }
          value={ fmt.Sprintf("%.0f", components.ConvertInt32SecondsToDisplayValue(getMaintenanceInt32Field(config, field.JSONName))) }
          step="1"
          min="1"
          if field.Required {
            required
          }
        />
        <select
          class="form-select"
          id={ field.JSONName + "_unit" }
          name={ field.JSONName + "_unit" }
          style="max-width: 120px;"
          if field.Required {
            required
          }
        >
          <option
            value="minutes"
            if components.GetInt32DisplayUnit(getMaintenanceInt32Field(config, field.JSONName)) == "minutes" {
              selected
            }
          >
            Minutes
          </option>
          <option
            value="hours"
            if components.GetInt32DisplayUnit(getMaintenanceInt32Field(config, field.JSONName)) == "hours" {
              selected
            }
          >
            Hours
          </option>
          <option
            value="days"
            if components.GetInt32DisplayUnit(getMaintenanceInt32Field(config, field.JSONName)) == "days" {
              selected
            }
          >
            Days
          </option>
        </select>
      </div>
      if field.Description != "" {
        <div class="form-text text-muted">{ field.Description }</div>
      }
    </div>
  } else if field.InputType == "checkbox" {
    <!-- Checkbox field -->
    <div class="mb-3">
      <div class="form-check form-switch">
        <input
          class="form-check-input"
          type="checkbox"
          id={ field.JSONName }
          name={ field.JSONName }
          if getMaintenanceBoolField(config, field.JSONName) {
            checked
          }
        />
        <label class="form-check-label" for={ field.JSONName }>
          <strong>{ field.DisplayName }</strong>
        </label>
      </div>
      if field.Description != "" {
        <div class="form-text text-muted">{ field.Description }</div>
      }
    </div>
  } else {
    <!-- Number field -->
    <div class="mb-3">
      <label for={ field.JSONName } class="form-label">
        { field.DisplayName }
        if field.Required {
          <span class="text-danger">*</span>
        }
      </label>
      <input
        type="number"
        class="form-control"
        id={ field.JSONName }
        name={ field.JSONName }
        value={ fmt.Sprintf("%d", getMaintenanceInt32Field(config, field.JSONName)) }
        placeholder={ field.Placeholder }
        if field.MinValue != nil {
          min={ fmt.Sprintf("%v", field.MinValue) }
        }
        if field.MaxValue != nil {
          max={ fmt.Sprintf("%v", field.MaxValue) }
        }
        step={ getNumberStep(field) }
        if field.Required {
          required
        }
      />
      if field.Description != "" {
        <div class="form-text text-muted">{ field.Description }</div>
      }
    </div>
  }
}

// Helper functions for form field types

func getNumberStep(field *config.Field) string {
	if field.Type == config.FieldTypeFloat {
		return "0.01"
	}
	return "1"
}

// Typed field getters for MaintenanceConfig - no interface{} needed
func getMaintenanceInt32Field(config *maintenance.MaintenanceConfig, fieldName string) int32 {
	if config == nil {
		return 0
	}

	switch fieldName {
	case "scan_interval_seconds":
		return config.ScanIntervalSeconds
	case "worker_timeout_seconds":
		return config.WorkerTimeoutSeconds
	case "task_timeout_seconds":
		return config.TaskTimeoutSeconds
	case "retry_delay_seconds":
		return config.RetryDelaySeconds
	case "max_retries":
		return config.MaxRetries
	case "cleanup_interval_seconds":
		return config.CleanupIntervalSeconds
	case "task_retention_seconds":
		return config.TaskRetentionSeconds
	case "global_max_concurrent":
		if config.Policy != nil {
			return config.Policy.GlobalMaxConcurrent
		}
		return 0
	default:
		return 0
	}
}

func getMaintenanceBoolField(config *maintenance.MaintenanceConfig, fieldName string) bool {
	if config == nil {
		return false
	}

	switch fieldName {
	case "enabled":
		return config.Enabled
	default:
		return false
	}
}

// Helper function to convert schema to JSON for JavaScript
templ schemaToJSON(schema *maintenance.MaintenanceConfigSchema) {
	{`{}`}
}
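The deleted template resolved field values with typed getters (`getMaintenanceInt32Field`, `getMaintenanceBoolField`) that switch on the schema's JSON name instead of using `interface{}` or reflection. A self-contained sketch of that pattern, with a stand-in `Config` type (the struct and helper names here are illustrative, not the real `maintenance.MaintenanceConfig`):

```go
package main

import "fmt"

// Config is a minimal stand-in for maintenance.MaintenanceConfig,
// just to illustrate the typed-getter pattern the template used.
type Config struct {
	ScanIntervalSeconds int32
	MaxRetries          int32
	Enabled             bool
}

// getInt32Field maps a schema JSON name to a concrete struct field.
// Unknown names and nil configs fall back to zero, matching the
// nil-safe behavior of the template helpers.
func getInt32Field(c *Config, name string) int32 {
	if c == nil {
		return 0
	}
	switch name {
	case "scan_interval_seconds":
		return c.ScanIntervalSeconds
	case "max_retries":
		return c.MaxRetries
	default:
		return 0
	}
}

func main() {
	c := &Config{ScanIntervalSeconds: 1800, MaxRetries: 3}
	fmt.Println(getInt32Field(c, "scan_interval_seconds")) // 1800
	fmt.Println(getInt32Field(nil, "max_retries"))         // 0
}
```

The trade-off is that every new config field needs a new `case` arm, but in exchange the template stays fully type-checked at compile time.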
File diff suppressed because one or more lines are too long
@@ -1,284 +0,0 @@
// Code generated by templ - DO NOT EDIT.

// templ: version: v0.3.977
package app

//lint:file-ignore SA4006 This context is only used if a nested component is present.

import "github.com/a-h/templ"
import templruntime "github.com/a-h/templ/runtime"

import (
	"fmt"
	"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
)

func MaintenanceConfig(data *maintenance.MaintenanceConfigData) templ.Component {
	return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
		templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
		if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
			return templ_7745c5c3_CtxErr
		}
		templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
		if !templ_7745c5c3_IsBuffer {
			defer func() {
				templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
				if templ_7745c5c3_Err == nil {
					templ_7745c5c3_Err = templ_7745c5c3_BufErr
				}
			}()
		}
		ctx = templ.InitializeContext(ctx)
		templ_7745c5c3_Var1 := templ.GetChildren(ctx)
		if templ_7745c5c3_Var1 == nil {
			templ_7745c5c3_Var1 = templ.NopComponent
		}
		ctx = templ.ClearChildren(ctx)
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 1, "<div class=\"container-fluid\"><div class=\"row mb-4\"><div class=\"col-12\"><div class=\"d-flex justify-content-between align-items-center\"><h2 class=\"mb-0\"><i class=\"fas fa-cog me-2\"></i> Maintenance Configuration</h2><div class=\"btn-group\"><a href=\"/maintenance\" class=\"btn btn-outline-secondary\"><i class=\"fas fa-arrow-left me-1\"></i> Back to Queue</a></div></div></div></div><div class=\"row\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header\"><h5 class=\"mb-0\">System Settings</h5></div><div class=\"card-body\"><form><div class=\"mb-3\"><div class=\"form-check form-switch\"><input class=\"form-check-input\" type=\"checkbox\" id=\"enabled\"")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		if data.IsEnabled {
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 2, " checked")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 3, "> <label class=\"form-check-label\" for=\"enabled\"><strong>Enable Maintenance System</strong></label></div><small class=\"form-text text-muted\">When enabled, the system will automatically scan for and execute maintenance tasks.</small></div><div class=\"mb-3\"><label for=\"scanInterval\" class=\"form-label\">Scan Interval (minutes)</label> <input type=\"number\" class=\"form-control\" id=\"scanInterval\" value=\"")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		var templ_7745c5c3_Var2 string
		templ_7745c5c3_Var2, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.0f", float64(data.Config.ScanIntervalSeconds)/60))
		if templ_7745c5c3_Err != nil {
			return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_config.templ`, Line: 50, Col: 110}
		}
		_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var2))
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 4, "\" placeholder=\"30 (default)\" min=\"1\" max=\"1440\"> <small class=\"form-text text-muted\">How often to scan for maintenance tasks (1-1440 minutes). <strong>Default: 30 minutes</strong></small></div><div class=\"mb-3\"><label for=\"workerTimeout\" class=\"form-label\">Worker Timeout (minutes)</label> <input type=\"number\" class=\"form-control\" id=\"workerTimeout\" value=\"")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		var templ_7745c5c3_Var3 string
		templ_7745c5c3_Var3, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.0f", float64(data.Config.WorkerTimeoutSeconds)/60))
		if templ_7745c5c3_Err != nil {
			return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_config.templ`, Line: 60, Col: 111}
		}
		_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var3))
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 5, "\" placeholder=\"5 (default)\" min=\"1\" max=\"60\"> <small class=\"form-text text-muted\">How long to wait for worker heartbeat before considering it inactive (1-60 minutes). <strong>Default: 5 minutes</strong></small></div><div class=\"mb-3\"><label for=\"taskTimeout\" class=\"form-label\">Task Timeout (hours)</label> <input type=\"number\" class=\"form-control\" id=\"taskTimeout\" value=\"")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		var templ_7745c5c3_Var4 string
		templ_7745c5c3_Var4, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.0f", float64(data.Config.TaskTimeoutSeconds)/3600))
		if templ_7745c5c3_Err != nil {
			return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_config.templ`, Line: 70, Col: 111}
		}
		_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var4))
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 6, "\" placeholder=\"2 (default)\" min=\"1\" max=\"24\"> <small class=\"form-text text-muted\">Maximum time allowed for a single task to complete (1-24 hours). <strong>Default: 2 hours</strong></small></div><div class=\"mb-3\"><label for=\"globalMaxConcurrent\" class=\"form-label\">Global Concurrent Limit</label> <input type=\"number\" class=\"form-control\" id=\"globalMaxConcurrent\" value=\"")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		var templ_7745c5c3_Var5 string
		templ_7745c5c3_Var5, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.Config.Policy.GlobalMaxConcurrent))
		if templ_7745c5c3_Err != nil {
			return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_config.templ`, Line: 80, Col: 103}
		}
		_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var5))
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 7, "\" placeholder=\"4 (default)\" min=\"1\" max=\"20\"> <small class=\"form-text text-muted\">Maximum number of maintenance tasks that can run simultaneously across all workers (1-20). <strong>Default: 4</strong></small></div><div class=\"mb-3\"><label for=\"maxRetries\" class=\"form-label\">Default Max Retries</label> <input type=\"number\" class=\"form-control\" id=\"maxRetries\" value=\"")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		var templ_7745c5c3_Var6 string
		templ_7745c5c3_Var6, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.Config.MaxRetries))
		if templ_7745c5c3_Err != nil {
			return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_config.templ`, Line: 90, Col: 87}
		}
		_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var6))
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 8, "\" placeholder=\"3 (default)\" min=\"0\" max=\"10\"> <small class=\"form-text text-muted\">Default number of times to retry failed tasks (0-10). <strong>Default: 3</strong></small></div><div class=\"mb-3\"><label for=\"retryDelay\" class=\"form-label\">Retry Delay (minutes)</label> <input type=\"number\" class=\"form-control\" id=\"retryDelay\" value=\"")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		var templ_7745c5c3_Var7 string
		templ_7745c5c3_Var7, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.0f", float64(data.Config.RetryDelaySeconds)/60))
		if templ_7745c5c3_Err != nil {
			return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_config.templ`, Line: 100, Col: 108}
		}
		_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var7))
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 9, "\" placeholder=\"15 (default)\" min=\"1\" max=\"120\"> <small class=\"form-text text-muted\">Time to wait before retrying failed tasks (1-120 minutes). <strong>Default: 15 minutes</strong></small></div><div class=\"mb-3\"><label for=\"taskRetention\" class=\"form-label\">Task Retention (days)</label> <input type=\"number\" class=\"form-control\" id=\"taskRetention\" value=\"")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		var templ_7745c5c3_Var8 string
		templ_7745c5c3_Var8, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.0f", float64(data.Config.TaskRetentionSeconds)/(24*3600)))
		if templ_7745c5c3_Err != nil {
			return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_config.templ`, Line: 110, Col: 118}
		}
		_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var8))
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 10, "\" placeholder=\"7 (default)\" min=\"1\" max=\"30\"> <small class=\"form-text text-muted\">How long to keep completed/failed task records (1-30 days). <strong>Default: 7 days</strong></small></div><div class=\"d-flex gap-2\"><button type=\"button\" class=\"btn btn-primary\" onclick=\"saveConfiguration()\"><i class=\"fas fa-save me-1\"></i> Save Configuration</button> <button type=\"button\" class=\"btn btn-secondary\" onclick=\"resetToDefaults()\"><i class=\"fas fa-undo me-1\"></i> Reset to Defaults</button></div></form></div></div></div></div><!-- Individual Task Configuration Menu --><div class=\"row mt-4\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header\"><h5 class=\"mb-0\"><i class=\"fas fa-cogs me-2\"></i> Task Configuration</h5></div><div class=\"card-body\"><p class=\"text-muted mb-3\">Configure specific settings for each maintenance task type.</p><div class=\"list-group\">")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		for _, menuItem := range data.MenuItems {
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 11, "<a href=\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var9 templ.SafeURL
			templ_7745c5c3_Var9, templ_7745c5c3_Err = templ.JoinURLErrs(templ.SafeURL(menuItem.Path))
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_config.templ`, Line: 147, Col: 69}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var9))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 12, "\" class=\"list-group-item list-group-item-action\"><div class=\"d-flex w-100 justify-content-between\"><h6 class=\"mb-1\">")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var10 = []any{menuItem.Icon + " me-2"}
			templ_7745c5c3_Err = templ.RenderCSSItems(ctx, templ_7745c5c3_Buffer, templ_7745c5c3_Var10...)
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 13, "<i class=\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var11 string
			templ_7745c5c3_Var11, templ_7745c5c3_Err = templ.JoinStringErrs(templ.CSSClasses(templ_7745c5c3_Var10).String())
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_config.templ`, Line: 1, Col: 0}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var11))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 14, "\"></i> ")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var12 string
			templ_7745c5c3_Var12, templ_7745c5c3_Err = templ.JoinStringErrs(menuItem.DisplayName)
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_config.templ`, Line: 151, Col: 65}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var12))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 15, "</h6>")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			if menuItem.IsEnabled {
				templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 16, "<span class=\"badge bg-success\">Enabled</span>")
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
			} else {
				templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 17, "<span class=\"badge bg-secondary\">Disabled</span>")
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 18, "</div><p class=\"mb-1 small text-muted\">")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var13 string
			templ_7745c5c3_Var13, templ_7745c5c3_Err = templ.JoinStringErrs(menuItem.Description)
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_config.templ`, Line: 159, Col: 90}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var13))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 19, "</p></a>")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 20, "</div></div></div></div></div><!-- Statistics Overview --><div class=\"row mt-4\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header\"><h5 class=\"mb-0\">System Statistics</h5></div><div class=\"card-body\"><div class=\"row\"><div class=\"col-md-3\"><div class=\"text-center\"><h6 class=\"text-muted\">Last Scan</h6><p class=\"mb-0\">")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		var templ_7745c5c3_Var14 string
		templ_7745c5c3_Var14, templ_7745c5c3_Err = templ.JoinStringErrs(data.LastScanTime.Format("2006-01-02 15:04:05"))
		if templ_7745c5c3_Err != nil {
			return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_config.templ`, Line: 180, Col: 100}
		}
		_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var14))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 21, "</p></div></div><div class=\"col-md-3\"><div class=\"text-center\"><h6 class=\"text-muted\">Next Scan</h6><p class=\"mb-0\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var15 string
|
||||
templ_7745c5c3_Var15, templ_7745c5c3_Err = templ.JoinStringErrs(data.NextScanTime.Format("2006-01-02 15:04:05"))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_config.templ`, Line: 186, Col: 100}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var15))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 22, "</p></div></div><div class=\"col-md-3\"><div class=\"text-center\"><h6 class=\"text-muted\">Total Tasks</h6><p class=\"mb-0\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var16 string
|
||||
templ_7745c5c3_Var16, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.SystemStats.TotalTasks))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_config.templ`, Line: 192, Col: 99}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var16))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 23, "</p></div></div><div class=\"col-md-3\"><div class=\"text-center\"><h6 class=\"text-muted\">Active Workers</h6><p class=\"mb-0\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var17 string
|
||||
templ_7745c5c3_Var17, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.SystemStats.ActiveWorkers))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_config.templ`, Line: 198, Col: 102}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var17))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 24, "</p></div></div></div></div></div></div></div></div><script>\n function saveConfiguration() {\n // First, get current configuration to preserve existing values\n fetch('/api/maintenance/config')\n .then(response => response.json())\n .then(currentConfig => {\n // Update only the fields from the form\n const updatedConfig = {\n ...currentConfig.config, // Preserve existing config\n enabled: document.getElementById('enabled').checked,\n scan_interval_seconds: parseInt(document.getElementById('scanInterval').value) * 60, // Convert to seconds\n worker_timeout_seconds: parseInt(document.getElementById('workerTimeout').value) * 60, // Convert to seconds\n task_timeout_seconds: parseInt(document.getElementById('taskTimeout').value) * 3600, // Convert to seconds\n retry_delay_seconds: parseInt(document.getElementById('retryDelay').value) * 60, // Convert to seconds\n max_retries: parseInt(document.getElementById('maxRetries').value),\n task_retention_seconds: parseInt(document.getElementById('taskRetention').value) * 24 * 3600, // Convert to seconds\n policy: {\n ...currentConfig.config.policy, // Preserve existing policy\n global_max_concurrent: parseInt(document.getElementById('globalMaxConcurrent').value)\n }\n };\n\n // Send the updated configuration\n return fetch('/api/maintenance/config', {\n method: 'PUT',\n headers: {\n 'Content-Type': 'application/json',\n },\n body: JSON.stringify(updatedConfig)\n });\n })\n .then(response => response.json())\n .then(data => {\n if (data.success) {\n alert('Configuration saved successfully');\n location.reload(); // Reload to show updated values\n } else {\n alert('Failed to save configuration: ' + (data.error || 'Unknown error'));\n }\n })\n .catch(error => {\n alert('Error: ' + error.message);\n });\n }\n\n function resetToDefaults() {\n showConfirm('Are you sure you want to reset to default configuration? 
This will overwrite your current settings.', function() {\n // Reset form to defaults (matching DefaultMaintenanceConfig values)\n document.getElementById('enabled').checked = false;\n document.getElementById('scanInterval').value = '30';\n document.getElementById('workerTimeout').value = '5';\n document.getElementById('taskTimeout').value = '2';\n document.getElementById('globalMaxConcurrent').value = '4';\n document.getElementById('maxRetries').value = '3';\n document.getElementById('retryDelay').value = '15';\n document.getElementById('taskRetention').value = '7';\n });\n }\n </script>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
return nil
|
||||
})
|
||||
}
|
||||
|
||||
var _ = templruntime.GeneratedTemplate
|
||||
@@ -1,405 +0,0 @@
package app

import (
"fmt"
"time"
"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
)

templ MaintenanceQueue(data *maintenance.MaintenanceQueueData) {
<div class="container-fluid">
<!-- Header -->
<div class="row mb-4">
<div class="col-12">
<div class="d-flex justify-content-between align-items-center">
<h2 class="mb-0">
<i class="fas fa-tasks me-2"></i>
Maintenance Queue
</h2>
<div class="btn-group">
<button type="button" class="btn btn-primary" onclick="triggerScan()">
<i class="fas fa-search me-1"></i>
Trigger Scan
</button>
<button type="button" class="btn btn-secondary" onclick="refreshPage()">
<i class="fas fa-sync-alt me-1"></i>
Refresh
</button>
</div>
</div>
</div>
</div>

<!-- Statistics Cards -->
<div class="row mb-4">
<div class="col-md-3">
<div class="card border-primary">
<div class="card-body text-center">
<i class="fas fa-clock fa-2x text-primary mb-2"></i>
<h4 class="mb-1">{fmt.Sprintf("%d", data.Stats.PendingTasks)}</h4>
<p class="text-muted mb-0">Pending Tasks</p>
</div>
</div>
</div>
<div class="col-md-3">
<div class="card border-warning">
<div class="card-body text-center">
<i class="fas fa-running fa-2x text-warning mb-2"></i>
<h4 class="mb-1">{fmt.Sprintf("%d", data.Stats.RunningTasks)}</h4>
<p class="text-muted mb-0">Running Tasks</p>
</div>
</div>
</div>
<div class="col-md-3">
<div class="card border-success">
<div class="card-body text-center">
<i class="fas fa-check-circle fa-2x text-success mb-2"></i>
<h4 class="mb-1">{fmt.Sprintf("%d", data.Stats.CompletedToday)}</h4>
<p class="text-muted mb-0">Completed Today</p>
</div>
</div>
</div>
<div class="col-md-3">
<div class="card border-danger">
<div class="card-body text-center">
<i class="fas fa-exclamation-triangle fa-2x text-danger mb-2"></i>
<h4 class="mb-1">{fmt.Sprintf("%d", data.Stats.FailedToday)}</h4>
<p class="text-muted mb-0">Failed Today</p>
</div>
</div>
</div>
</div>

<!-- Completed Tasks -->
<div class="row mb-4">
<div class="col-12">
<div class="card">
<div class="card-header bg-success text-white">
<h5 class="mb-0">
<i class="fas fa-check-circle me-2"></i>
Completed Tasks
</h5>
</div>
<div class="card-body">
if data.Stats.CompletedToday == 0 && data.Stats.FailedToday == 0 {
<div class="text-center text-muted py-4">
<i class="fas fa-check-circle fa-3x mb-3"></i>
<p>No completed maintenance tasks today</p>
<small>Completed tasks will appear here after workers finish processing them</small>
</div>
} else {
<div class="table-responsive">
<table class="table table-hover">
<thead>
<tr>
<th>Type</th>
<th>Status</th>
<th>Volume</th>
<th>Worker</th>
<th>Duration</th>
<th>Completed</th>
</tr>
</thead>
<tbody>
for _, task := range data.Tasks {
if string(task.Status) == "completed" || string(task.Status) == "failed" || string(task.Status) == "cancelled" {
if string(task.Status) == "failed" {
<tr class="table-danger clickable-row" data-task-id={task.ID} onclick="navigateToTask(this)" style="cursor: pointer;">
<td>
@TaskTypeIcon(task.Type)
{string(task.Type)}
</td>
<td>@StatusBadge(task.Status)</td>
<td>{fmt.Sprintf("%d", task.VolumeID)}</td>
<td>
if task.WorkerID != "" {
<small>{task.WorkerID}</small>
} else {
<span class="text-muted">-</span>
}
</td>
<td>
if task.StartedAt != nil && task.CompletedAt != nil {
{formatDuration(task.CompletedAt.Sub(*task.StartedAt))}
} else {
<span class="text-muted">-</span>
}
</td>
<td>
if task.CompletedAt != nil {
{task.CompletedAt.Format("2006-01-02 15:04")}
} else {
<span class="text-muted">-</span>
}
</td>
</tr>
} else {
<tr class="clickable-row" data-task-id={task.ID} onclick="navigateToTask(this)" style="cursor: pointer;">
<td>
@TaskTypeIcon(task.Type)
{string(task.Type)}
</td>
<td>@StatusBadge(task.Status)</td>
<td>{fmt.Sprintf("%d", task.VolumeID)}</td>
<td>
if task.WorkerID != "" {
<small>{task.WorkerID}</small>
} else {
<span class="text-muted">-</span>
}
</td>
<td>
if task.StartedAt != nil && task.CompletedAt != nil {
{formatDuration(task.CompletedAt.Sub(*task.StartedAt))}
} else {
<span class="text-muted">-</span>
}
</td>
<td>
if task.CompletedAt != nil {
{task.CompletedAt.Format("2006-01-02 15:04")}
} else {
<span class="text-muted">-</span>
}
</td>
</tr>
}
}
}
</tbody>
</table>
</div>
}
</div>
</div>
</div>
</div>

<!-- Pending Tasks -->
<div class="row mb-4">
<div class="col-12">
<div class="card">
<div class="card-header bg-primary text-white">
<h5 class="mb-0">
<i class="fas fa-clock me-2"></i>
Pending Tasks
</h5>
</div>
<div class="card-body">
if data.Stats.PendingTasks == 0 {
<div class="text-center text-muted py-4">
<i class="fas fa-clipboard-list fa-3x mb-3"></i>
<p>No pending maintenance tasks</p>
<small>Pending tasks will appear here when the system detects maintenance needs</small>
</div>
} else {
<div class="table-responsive">
<table class="table table-hover">
<thead>
<tr>
<th>Type</th>
<th>Priority</th>
<th>Volume</th>
<th>Server</th>
<th>Reason</th>
<th>Created</th>
</tr>
</thead>
<tbody>
for _, task := range data.Tasks {
if string(task.Status) == "pending" {
<tr class="clickable-row" data-task-id={task.ID} onclick="navigateToTask(this)" style="cursor: pointer;">
<td>
@TaskTypeIcon(task.Type)
{string(task.Type)}
</td>
<td>@PriorityBadge(task.Priority)</td>
<td>{fmt.Sprintf("%d", task.VolumeID)}</td>
<td><small>{task.Server}</small></td>
<td><small>{task.Reason}</small></td>
<td>{task.CreatedAt.Format("2006-01-02 15:04")}</td>
</tr>
}
}
</tbody>
</table>
</div>
}
</div>
</div>
</div>
</div>

<!-- Active Tasks -->
<div class="row mb-4">
<div class="col-12">
<div class="card">
<div class="card-header bg-warning text-dark">
<h5 class="mb-0">
<i class="fas fa-running me-2"></i>
Active Tasks
</h5>
</div>
<div class="card-body">
if data.Stats.RunningTasks == 0 {
<div class="text-center text-muted py-4">
<i class="fas fa-tasks fa-3x mb-3"></i>
<p>No active maintenance tasks</p>
<small>Active tasks will appear here when workers start processing them</small>
</div>
} else {
<div class="table-responsive">
<table class="table table-hover">
<thead>
<tr>
<th>Type</th>
<th>Status</th>
<th>Progress</th>
<th>Volume</th>
<th>Worker</th>
<th>Started</th>
</tr>
</thead>
<tbody>
for _, task := range data.Tasks {
if string(task.Status) == "assigned" || string(task.Status) == "in_progress" {
<tr class="clickable-row" data-task-id={task.ID} onclick="navigateToTask(this)" style="cursor: pointer;">
<td>
@TaskTypeIcon(task.Type)
{string(task.Type)}
</td>
<td>@StatusBadge(task.Status)</td>
<td>@ProgressBar(task.Progress, task.Status)</td>
<td>{fmt.Sprintf("%d", task.VolumeID)}</td>
<td>
if task.WorkerID != "" {
<small>{task.WorkerID}</small>
} else {
<span class="text-muted">-</span>
}
</td>
<td>
if task.StartedAt != nil {
{task.StartedAt.Format("2006-01-02 15:04")}
} else {
<span class="text-muted">-</span>
}
</td>
</tr>
}
}
</tbody>
</table>
</div>
}
</div>
</div>
</div>
</div>
</div>

<script>
window.triggerScan = function() {
console.log("triggerScan called");
fetch('/api/maintenance/scan', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
}
})
.then(response => response.json())
.then(data => {
if (data.success) {
showToast('Success', 'Maintenance scan triggered successfully', 'success');
setTimeout(() => window.location.reload(), 2000);
} else {
showToast('Error', 'Failed to trigger scan: ' + (data.error || 'Unknown error'), 'danger');
}
})
.catch(error => {
showToast('Error', 'Error: ' + error.message, 'danger');
});
};

window.refreshPage = function() {
console.log("refreshPage called");
window.location.reload();
};

window.navigateToTask = function(element) {
const taskId = element.getAttribute('data-task-id');
if (taskId) {
window.location.href = '/maintenance/tasks/' + taskId;
}
};
</script>
}

// Helper components
templ TaskTypeIcon(taskType maintenance.MaintenanceTaskType) {
<i class={maintenance.GetTaskIcon(taskType) + " me-1"}></i>
}

templ PriorityBadge(priority maintenance.MaintenanceTaskPriority) {
switch priority {
case maintenance.PriorityCritical:
<span class="badge bg-danger">Critical</span>
case maintenance.PriorityHigh:
<span class="badge bg-warning">High</span>
case maintenance.PriorityNormal:
<span class="badge bg-primary">Normal</span>
case maintenance.PriorityLow:
<span class="badge bg-secondary">Low</span>
default:
<span class="badge bg-light text-dark">Unknown</span>
}
}

templ StatusBadge(status maintenance.MaintenanceTaskStatus) {
switch status {
case maintenance.TaskStatusPending:
<span class="badge bg-secondary">Pending</span>
case maintenance.TaskStatusAssigned:
<span class="badge bg-info">Assigned</span>
case maintenance.TaskStatusInProgress:
<span class="badge bg-warning">Running</span>
case maintenance.TaskStatusCompleted:
<span class="badge bg-success">Completed</span>
case maintenance.TaskStatusFailed:
<span class="badge bg-danger">Failed</span>
case maintenance.TaskStatusCancelled:
<span class="badge bg-light text-dark">Cancelled</span>
default:
<span class="badge bg-light text-dark">Unknown</span>
}
}

templ ProgressBar(progress float64, status maintenance.MaintenanceTaskStatus) {
if status == maintenance.TaskStatusInProgress || status == maintenance.TaskStatusAssigned {
<div class="progress" style="height: 8px; min-width: 100px;">
<div class="progress-bar" role="progressbar" style={fmt.Sprintf("width: %.1f%%", progress)}>
</div>
</div>
<small class="text-muted">{fmt.Sprintf("%.1f%%", progress)}</small>
} else if status == maintenance.TaskStatusCompleted {
<div class="progress" style="height: 8px; min-width: 100px;">
<div class="progress-bar bg-success" role="progressbar" style="width: 100%">
</div>
</div>
<small class="text-success">100%</small>
} else {
<span class="text-muted">-</span>
}
}

func formatDuration(d time.Duration) string {
if d < time.Minute {
return fmt.Sprintf("%.0fs", d.Seconds())
} else if d < time.Hour {
return fmt.Sprintf("%.1fm", d.Minutes())
} else {
return fmt.Sprintf("%.1fh", d.Hours())
}
}

@@ -1,860 +0,0 @@
|
||||
// Code generated by templ - DO NOT EDIT.
|
||||
|
||||
// templ: version: v0.3.977
|
||||
package app
|
||||
|
||||
//lint:file-ignore SA4006 This context is only used if a nested component is present.
|
||||
|
||||
import "github.com/a-h/templ"
|
||||
import templruntime "github.com/a-h/templ/runtime"
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
|
||||
"time"
|
||||
)
|
||||
|
||||
func MaintenanceQueue(data *maintenance.MaintenanceQueueData) templ.Component {
|
||||
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
|
||||
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
|
||||
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
|
||||
return templ_7745c5c3_CtxErr
|
||||
}
|
||||
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
|
||||
if !templ_7745c5c3_IsBuffer {
|
||||
defer func() {
|
||||
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
|
||||
if templ_7745c5c3_Err == nil {
|
||||
templ_7745c5c3_Err = templ_7745c5c3_BufErr
|
||||
}
|
||||
}()
|
||||
}
|
||||
ctx = templ.InitializeContext(ctx)
|
||||
templ_7745c5c3_Var1 := templ.GetChildren(ctx)
|
||||
if templ_7745c5c3_Var1 == nil {
|
||||
templ_7745c5c3_Var1 = templ.NopComponent
|
||||
}
|
||||
ctx = templ.ClearChildren(ctx)
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 1, "<div class=\"container-fluid\"><!-- Header --><div class=\"row mb-4\"><div class=\"col-12\"><div class=\"d-flex justify-content-between align-items-center\"><h2 class=\"mb-0\"><i class=\"fas fa-tasks me-2\"></i> Maintenance Queue</h2><div class=\"btn-group\"><button type=\"button\" class=\"btn btn-primary\" onclick=\"triggerScan()\"><i class=\"fas fa-search me-1\"></i> Trigger Scan</button> <button type=\"button\" class=\"btn btn-secondary\" onclick=\"refreshPage()\"><i class=\"fas fa-sync-alt me-1\"></i> Refresh</button></div></div></div></div><!-- Statistics Cards --><div class=\"row mb-4\"><div class=\"col-md-3\"><div class=\"card border-primary\"><div class=\"card-body text-center\"><i class=\"fas fa-clock fa-2x text-primary mb-2\"></i><h4 class=\"mb-1\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var2 string
|
||||
templ_7745c5c3_Var2, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.Stats.PendingTasks))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 39, Col: 84}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var2))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 2, "</h4><p class=\"text-muted mb-0\">Pending Tasks</p></div></div></div><div class=\"col-md-3\"><div class=\"card border-warning\"><div class=\"card-body text-center\"><i class=\"fas fa-running fa-2x text-warning mb-2\"></i><h4 class=\"mb-1\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var3 string
|
||||
templ_7745c5c3_Var3, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.Stats.RunningTasks))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 48, Col: 84}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var3))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 3, "</h4><p class=\"text-muted mb-0\">Running Tasks</p></div></div></div><div class=\"col-md-3\"><div class=\"card border-success\"><div class=\"card-body text-center\"><i class=\"fas fa-check-circle fa-2x text-success mb-2\"></i><h4 class=\"mb-1\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var4 string
|
||||
templ_7745c5c3_Var4, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.Stats.CompletedToday))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 57, Col: 86}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var4))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 4, "</h4><p class=\"text-muted mb-0\">Completed Today</p></div></div></div><div class=\"col-md-3\"><div class=\"card border-danger\"><div class=\"card-body text-center\"><i class=\"fas fa-exclamation-triangle fa-2x text-danger mb-2\"></i><h4 class=\"mb-1\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var5 string
|
||||
templ_7745c5c3_Var5, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.Stats.FailedToday))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 66, Col: 83}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var5))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 5, "</h4><p class=\"text-muted mb-0\">Failed Today</p></div></div></div></div><!-- Completed Tasks --><div class=\"row mb-4\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header bg-success text-white\"><h5 class=\"mb-0\"><i class=\"fas fa-check-circle me-2\"></i> Completed Tasks</h5></div><div class=\"card-body\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
if data.Stats.CompletedToday == 0 && data.Stats.FailedToday == 0 {
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 6, "<div class=\"text-center text-muted py-4\"><i class=\"fas fa-check-circle fa-3x mb-3\"></i><p>No completed maintenance tasks today</p><small>Completed tasks will appear here after workers finish processing them</small></div>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
} else {
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 7, "<div class=\"table-responsive\"><table class=\"table table-hover\"><thead><tr><th>Type</th><th>Status</th><th>Volume</th><th>Worker</th><th>Duration</th><th>Completed</th></tr></thead> <tbody>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
for _, task := range data.Tasks {
|
||||
if string(task.Status) == "completed" || string(task.Status) == "failed" || string(task.Status) == "cancelled" {
|
||||
if string(task.Status) == "failed" {
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 8, "<tr class=\"table-danger clickable-row\" data-task-id=\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var6 string
|
||||
templ_7745c5c3_Var6, templ_7745c5c3_Err = templ.JoinStringErrs(task.ID)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 107, Col: 112}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var6))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 9, "\" onclick=\"navigateToTask(this)\" style=\"cursor: pointer;\"><td>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = TaskTypeIcon(task.Type).Render(ctx, templ_7745c5c3_Buffer)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var7 string
|
||||
templ_7745c5c3_Var7, templ_7745c5c3_Err = templ.JoinStringErrs(string(task.Type))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 110, Col: 78}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var7))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 10, "</td><td>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = StatusBadge(task.Status).Render(ctx, templ_7745c5c3_Buffer)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 11, "</td><td>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var8 string
|
||||
templ_7745c5c3_Var8, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", task.VolumeID))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 113, Col: 93}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var8))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 12, "</td><td>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
}
if task.WorkerID != "" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 13, "<small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var9 string
templ_7745c5c3_Var9, templ_7745c5c3_Err = templ.JoinStringErrs(task.WorkerID)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 116, Col: 85}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var9))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 14, "</small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 15, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 16, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if task.StartedAt != nil && task.CompletedAt != nil {
var templ_7745c5c3_Var10 string
templ_7745c5c3_Var10, templ_7745c5c3_Err = templ.JoinStringErrs(formatDuration(task.CompletedAt.Sub(*task.StartedAt)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 123, Col: 118}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var10))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 17, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 18, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if task.CompletedAt != nil {
var templ_7745c5c3_Var11 string
templ_7745c5c3_Var11, templ_7745c5c3_Err = templ.JoinStringErrs(task.CompletedAt.Format("2006-01-02 15:04"))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 130, Col: 108}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var11))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 19, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 20, "</td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 21, "<tr class=\"clickable-row\" data-task-id=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var12 string
templ_7745c5c3_Var12, templ_7745c5c3_Err = templ.JoinStringErrs(task.ID)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 137, Col: 99}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var12))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 22, "\" onclick=\"navigateToTask(this)\" style=\"cursor: pointer;\"><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = TaskTypeIcon(task.Type).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var13 string
templ_7745c5c3_Var13, templ_7745c5c3_Err = templ.JoinStringErrs(string(task.Type))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 140, Col: 78}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var13))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 23, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = StatusBadge(task.Status).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 24, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var14 string
templ_7745c5c3_Var14, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", task.VolumeID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 143, Col: 93}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var14))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 25, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if task.WorkerID != "" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 26, "<small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var15 string
templ_7745c5c3_Var15, templ_7745c5c3_Err = templ.JoinStringErrs(task.WorkerID)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 146, Col: 85}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var15))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 27, "</small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 28, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 29, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if task.StartedAt != nil && task.CompletedAt != nil {
var templ_7745c5c3_Var16 string
templ_7745c5c3_Var16, templ_7745c5c3_Err = templ.JoinStringErrs(formatDuration(task.CompletedAt.Sub(*task.StartedAt)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 153, Col: 118}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var16))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 30, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 31, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if task.CompletedAt != nil {
var templ_7745c5c3_Var17 string
templ_7745c5c3_Var17, templ_7745c5c3_Err = templ.JoinStringErrs(task.CompletedAt.Format("2006-01-02 15:04"))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 160, Col: 108}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var17))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 32, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 33, "</td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 34, "</tbody></table></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 35, "</div></div></div></div><!-- Pending Tasks --><div class=\"row mb-4\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header bg-primary text-white\"><h5 class=\"mb-0\"><i class=\"fas fa-clock me-2\"></i> Pending Tasks</h5></div><div class=\"card-body\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.Stats.PendingTasks == 0 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 36, "<div class=\"text-center text-muted py-4\"><i class=\"fas fa-clipboard-list fa-3x mb-3\"></i><p>No pending maintenance tasks</p><small>Pending tasks will appear here when the system detects maintenance needs</small></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 37, "<div class=\"table-responsive\"><table class=\"table table-hover\"><thead><tr><th>Type</th><th>Priority</th><th>Volume</th><th>Server</th><th>Reason</th><th>Created</th></tr></thead> <tbody>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, task := range data.Tasks {
if string(task.Status) == "pending" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 38, "<tr class=\"clickable-row\" data-task-id=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var18 string
templ_7745c5c3_Var18, templ_7745c5c3_Err = templ.JoinStringErrs(task.ID)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 211, Col: 95}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var18))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 39, "\" onclick=\"navigateToTask(this)\" style=\"cursor: pointer;\"><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = TaskTypeIcon(task.Type).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var19 string
templ_7745c5c3_Var19, templ_7745c5c3_Err = templ.JoinStringErrs(string(task.Type))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 214, Col: 74}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var19))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 40, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = PriorityBadge(task.Priority).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 41, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var20 string
templ_7745c5c3_Var20, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", task.VolumeID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 217, Col: 89}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var20))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 42, "</td><td><small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var21 string
templ_7745c5c3_Var21, templ_7745c5c3_Err = templ.JoinStringErrs(task.Server)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 218, Col: 75}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var21))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 43, "</small></td><td><small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var22 string
templ_7745c5c3_Var22, templ_7745c5c3_Err = templ.JoinStringErrs(task.Reason)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 219, Col: 75}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var22))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 44, "</small></td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var23 string
templ_7745c5c3_Var23, templ_7745c5c3_Err = templ.JoinStringErrs(task.CreatedAt.Format("2006-01-02 15:04"))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 220, Col: 98}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var23))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 45, "</td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 46, "</tbody></table></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 47, "</div></div></div></div><!-- Active Tasks --><div class=\"row mb-4\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header bg-warning text-dark\"><h5 class=\"mb-0\"><i class=\"fas fa-running me-2\"></i> Active Tasks</h5></div><div class=\"card-body\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.Stats.RunningTasks == 0 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 48, "<div class=\"text-center text-muted py-4\"><i class=\"fas fa-tasks fa-3x mb-3\"></i><p>No active maintenance tasks</p><small>Active tasks will appear here when workers start processing them</small></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 49, "<div class=\"table-responsive\"><table class=\"table table-hover\"><thead><tr><th>Type</th><th>Status</th><th>Progress</th><th>Volume</th><th>Worker</th><th>Started</th></tr></thead> <tbody>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, task := range data.Tasks {
if string(task.Status) == "assigned" || string(task.Status) == "in_progress" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 50, "<tr class=\"clickable-row\" data-task-id=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var24 string
templ_7745c5c3_Var24, templ_7745c5c3_Err = templ.JoinStringErrs(task.ID)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 266, Col: 95}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var24))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 51, "\" onclick=\"navigateToTask(this)\" style=\"cursor: pointer;\"><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = TaskTypeIcon(task.Type).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var25 string
templ_7745c5c3_Var25, templ_7745c5c3_Err = templ.JoinStringErrs(string(task.Type))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 269, Col: 74}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var25))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 52, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = StatusBadge(task.Status).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 53, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = ProgressBar(task.Progress, task.Status).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 54, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var26 string
templ_7745c5c3_Var26, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", task.VolumeID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 273, Col: 89}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var26))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 55, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if task.WorkerID != "" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 56, "<small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var27 string
templ_7745c5c3_Var27, templ_7745c5c3_Err = templ.JoinStringErrs(task.WorkerID)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 276, Col: 81}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var27))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 57, "</small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 58, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 59, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if task.StartedAt != nil {
var templ_7745c5c3_Var28 string
templ_7745c5c3_Var28, templ_7745c5c3_Err = templ.JoinStringErrs(task.StartedAt.Format("2006-01-02 15:04"))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 283, Col: 102}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var28))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 60, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 61, "</td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 62, "</tbody></table></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 63, "</div></div></div></div></div><script>\n window.triggerScan = function() {\n console.log(\"triggerScan called\");\n fetch('/api/maintenance/scan', {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n }\n })\n .then(response => response.json())\n .then(data => {\n if (data.success) {\n showToast('Success', 'Maintenance scan triggered successfully', 'success');\n setTimeout(() => window.location.reload(), 2000);\n } else {\n showToast('Error', 'Failed to trigger scan: ' + (data.error || 'Unknown error'), 'danger');\n }\n })\n .catch(error => {\n showToast('Error', 'Error: ' + error.message, 'danger');\n });\n };\n\n window.refreshPage = function() {\n console.log(\"refreshPage called\");\n window.location.reload();\n };\n\n window.navigateToTask = function(element) {\n const taskId = element.getAttribute('data-task-id');\n if (taskId) {\n window.location.href = '/maintenance/tasks/' + taskId;\n }\n };\n </script>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
return nil
})
}

// Helper components
func TaskTypeIcon(taskType maintenance.MaintenanceTaskType) templ.Component {
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
return templ_7745c5c3_CtxErr
}
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
if !templ_7745c5c3_IsBuffer {
defer func() {
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
if templ_7745c5c3_Err == nil {
templ_7745c5c3_Err = templ_7745c5c3_BufErr
}
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var29 := templ.GetChildren(ctx)
if templ_7745c5c3_Var29 == nil {
templ_7745c5c3_Var29 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
var templ_7745c5c3_Var30 = []any{maintenance.GetTaskIcon(taskType) + " me-1"}
templ_7745c5c3_Err = templ.RenderCSSItems(ctx, templ_7745c5c3_Buffer, templ_7745c5c3_Var30...)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 64, "<i class=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var31 string
templ_7745c5c3_Var31, templ_7745c5c3_Err = templ.JoinStringErrs(templ.CSSClasses(templ_7745c5c3_Var30).String())
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 1, Col: 0}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var31))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 65, "\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
return nil
})
}

func PriorityBadge(priority maintenance.MaintenanceTaskPriority) templ.Component {
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
return templ_7745c5c3_CtxErr
}
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
if !templ_7745c5c3_IsBuffer {
defer func() {
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
if templ_7745c5c3_Err == nil {
templ_7745c5c3_Err = templ_7745c5c3_BufErr
}
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var32 := templ.GetChildren(ctx)
if templ_7745c5c3_Var32 == nil {
templ_7745c5c3_Var32 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
switch priority {
case maintenance.PriorityCritical:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 66, "<span class=\"badge bg-danger\">Critical</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.PriorityHigh:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 67, "<span class=\"badge bg-warning\">High</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.PriorityNormal:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 68, "<span class=\"badge bg-primary\">Normal</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.PriorityLow:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 69, "<span class=\"badge bg-secondary\">Low</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
default:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 70, "<span class=\"badge bg-light text-dark\">Unknown</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
return nil
})
}

func StatusBadge(status maintenance.MaintenanceTaskStatus) templ.Component {
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
return templ_7745c5c3_CtxErr
}
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
if !templ_7745c5c3_IsBuffer {
defer func() {
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
if templ_7745c5c3_Err == nil {
templ_7745c5c3_Err = templ_7745c5c3_BufErr
}
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var33 := templ.GetChildren(ctx)
if templ_7745c5c3_Var33 == nil {
templ_7745c5c3_Var33 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
switch status {
case maintenance.TaskStatusPending:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 71, "<span class=\"badge bg-secondary\">Pending</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.TaskStatusAssigned:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 72, "<span class=\"badge bg-info\">Assigned</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.TaskStatusInProgress:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 73, "<span class=\"badge bg-warning\">Running</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.TaskStatusCompleted:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 74, "<span class=\"badge bg-success\">Completed</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.TaskStatusFailed:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 75, "<span class=\"badge bg-danger\">Failed</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.TaskStatusCancelled:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 76, "<span class=\"badge bg-light text-dark\">Cancelled</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
default:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 77, "<span class=\"badge bg-light text-dark\">Unknown</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
return nil
})
}

func ProgressBar(progress float64, status maintenance.MaintenanceTaskStatus) templ.Component {
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
return templ_7745c5c3_CtxErr
}
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
if !templ_7745c5c3_IsBuffer {
defer func() {
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
if templ_7745c5c3_Err == nil {
templ_7745c5c3_Err = templ_7745c5c3_BufErr
}
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var34 := templ.GetChildren(ctx)
if templ_7745c5c3_Var34 == nil {
templ_7745c5c3_Var34 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
if status == maintenance.TaskStatusInProgress || status == maintenance.TaskStatusAssigned {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 78, "<div class=\"progress\" style=\"height: 8px; min-width: 100px;\"><div class=\"progress-bar\" role=\"progressbar\" style=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var35 string
templ_7745c5c3_Var35, templ_7745c5c3_Err = templruntime.SanitizeStyleAttributeValues(fmt.Sprintf("width: %.1f%%", progress))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 380, Col: 102}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var35))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 79, "\"></div></div><small class=\"text-muted\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var36 string
templ_7745c5c3_Var36, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.1f%%", progress))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/maintenance_queue.templ`, Line: 383, Col: 66}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var36))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 80, "</small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else if status == maintenance.TaskStatusCompleted {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 81, "<div class=\"progress\" style=\"height: 8px; min-width: 100px;\"><div class=\"progress-bar bg-success\" role=\"progressbar\" style=\"width: 100%\"></div></div><small class=\"text-success\">100%</small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 82, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
return nil
})
}

func formatDuration(d time.Duration) string {
|
||||
if d < time.Minute {
|
||||
return fmt.Sprintf("%.0fs", d.Seconds())
|
||||
} else if d < time.Hour {
|
||||
return fmt.Sprintf("%.1fm", d.Minutes())
|
||||
} else {
|
||||
return fmt.Sprintf("%.1fh", d.Hours())
|
||||
}
|
||||
}
|
||||
|
||||
var _ = templruntime.GeneratedTemplate
|
||||
@@ -1,343 +0,0 @@
package app

import (
"fmt"
"github.com/seaweedfs/seaweedfs/weed/admin/dash"
"time"
)

templ MaintenanceWorkers(data *dash.MaintenanceWorkersData) {
<div class="container-fluid">
<div class="row">
<div class="col-12">
<div class="d-flex justify-content-between align-items-center mb-4">
<div>
<h1 class="h3 mb-0 text-gray-800">Maintenance Workers</h1>
<p class="text-muted">Monitor and manage maintenance workers</p>
</div>
<div class="text-end">
<small class="text-muted">Last updated: { data.LastUpdated.Format("2006-01-02 15:04:05") }</small>
</div>
</div>
</div>
</div>

<!-- Summary Cards -->
<div class="row mb-4">
<div class="col-xl-3 col-md-6 mb-4">
<div class="card border-left-primary shadow h-100 py-2">
<div class="card-body">
<div class="row no-gutters align-items-center">
<div class="col mr-2">
<div class="text-xs font-weight-bold text-primary text-uppercase mb-1">
Total Workers
</div>
<div class="h5 mb-0 font-weight-bold text-gray-800">{ fmt.Sprintf("%d", len(data.Workers)) }</div>
</div>
<div class="col-auto">
<i class="fas fa-users fa-2x text-gray-300"></i>
</div>
</div>
</div>
</div>
</div>

<div class="col-xl-3 col-md-6 mb-4">
<div class="card border-left-success shadow h-100 py-2">
<div class="card-body">
<div class="row no-gutters align-items-center">
<div class="col mr-2">
<div class="text-xs font-weight-bold text-success text-uppercase mb-1">
Active Workers
</div>
<div class="h5 mb-0 font-weight-bold text-gray-800">
{ fmt.Sprintf("%d", data.ActiveWorkers) }
</div>
</div>
<div class="col-auto">
<i class="fas fa-check-circle fa-2x text-gray-300"></i>
</div>
</div>
</div>
</div>
</div>

<div class="col-xl-3 col-md-6 mb-4">
<div class="card border-left-info shadow h-100 py-2">
<div class="card-body">
<div class="row no-gutters align-items-center">
<div class="col mr-2">
<div class="text-xs font-weight-bold text-info text-uppercase mb-1">
Busy Workers
</div>
<div class="h5 mb-0 font-weight-bold text-gray-800">
{ fmt.Sprintf("%d", data.BusyWorkers) }
</div>
</div>
<div class="col-auto">
<i class="fas fa-spinner fa-2x text-gray-300"></i>
</div>
</div>
</div>
</div>
</div>

<div class="col-xl-3 col-md-6 mb-4">
<div class="card border-left-warning shadow h-100 py-2">
<div class="card-body">
<div class="row no-gutters align-items-center">
<div class="col mr-2">
<div class="text-xs font-weight-bold text-warning text-uppercase mb-1">
Total Load
</div>
<div class="h5 mb-0 font-weight-bold text-gray-800">
{ fmt.Sprintf("%d", data.TotalLoad) }
</div>
</div>
<div class="col-auto">
<i class="fas fa-tasks fa-2x text-gray-300"></i>
</div>
</div>
</div>
</div>
</div>
</div>

<!-- Workers Table -->
<div class="row">
<div class="col-12">
<div class="card shadow mb-4">
<div class="card-header py-3">
<h6 class="m-0 font-weight-bold text-primary">Worker Details</h6>
</div>
<div class="card-body">
if len(data.Workers) == 0 {
<div class="text-center py-4">
<i class="fas fa-users fa-3x text-gray-300 mb-3"></i>
<h5 class="text-gray-600">No Workers Found</h5>
<p class="text-muted">No maintenance workers are currently registered.</p>
<div class="alert alert-info mt-3">
<strong>Tip:</strong> To start a worker, run:
<br><code>weed worker -admin=<admin_server> -capabilities=vacuum,ec,balance</code>
</div>
</div>
} else {
<div class="table-responsive">
<table class="table table-bordered table-hover" id="workersTable">
<thead class="table-light">
<tr>
<th>Worker ID</th>
<th>Address</th>
<th>Status</th>
<th>Capabilities</th>
<th>Load</th>
<th>Current Tasks</th>
<th>Performance</th>
<th>Last Heartbeat</th>
<th>Actions</th>
</tr>
</thead>
<tbody>
for _, worker := range data.Workers {
<tr>
<td>
<code>{ worker.Worker.ID }</code>
</td>
<td>
<code>{ worker.Worker.Address }</code>
</td>
<td>
if worker.Worker.Status == "active" {
<span class="badge bg-success">Active</span>
} else if worker.Worker.Status == "busy" {
<span class="badge bg-warning">Busy</span>
} else {
<span class="badge bg-danger">Inactive</span>
}
</td>
<td>
<div class="d-flex flex-wrap gap-1">
for _, capability := range worker.Worker.Capabilities {
<span class="badge bg-secondary rounded-pill">{ string(capability) }</span>
}
</div>
</td>
<td>
<div class="progress" style="height: 20px;">
if worker.Worker.MaxConcurrent > 0 {
<div class="progress-bar" role="progressbar"
style={ fmt.Sprintf("width: %d%%", (worker.Worker.CurrentLoad*100)/worker.Worker.MaxConcurrent) }
aria-valuenow={ fmt.Sprintf("%d", worker.Worker.CurrentLoad) }
aria-valuemin="0"
aria-valuemax={ fmt.Sprintf("%d", worker.Worker.MaxConcurrent) }>
{ fmt.Sprintf("%d/%d", worker.Worker.CurrentLoad, worker.Worker.MaxConcurrent) }
</div>
} else {
<div class="progress-bar" role="progressbar" style="width: 0%">0/0</div>
}
</div>
</td>
<td>
{ fmt.Sprintf("%d", len(worker.CurrentTasks)) }
</td>
<td>
<small>
<div>Completed: { fmt.Sprintf("%d", worker.Performance.TasksCompleted) }</div>
<div>Failed: { fmt.Sprintf("%d", worker.Performance.TasksFailed) }</div>
<div>Success Rate: { fmt.Sprintf("%.1f%%", worker.Performance.SuccessRate) }</div>
</small>
</td>
<td>
if time.Since(worker.Worker.LastHeartbeat) < 2*time.Minute {
<span class="text-success">
<i class="fas fa-heartbeat"></i>
{ worker.Worker.LastHeartbeat.Format("15:04:05") }
</span>
} else {
<span class="text-danger">
<i class="fas fa-exclamation-triangle"></i>
{ worker.Worker.LastHeartbeat.Format("15:04:05") }
</span>
}
</td>
<td>
<div class="btn-group btn-group-sm" role="group">
<button type="button" class="btn btn-outline-info" onclick="showWorkerDetails(event)" data-worker-id={ worker.Worker.ID }>
<i class="fas fa-info-circle"></i>
</button>
if worker.Worker.Status == "active" {
<button type="button" class="btn btn-outline-warning" onclick="pauseWorker(event)" data-worker-id={ worker.Worker.ID }>
<i class="fas fa-pause"></i>
</button>
}
</div>
</td>
</tr>
}
</tbody>
</table>
</div>
}
</div>
</div>
</div>
</div>
</div>

<!-- Worker Details Modal -->
<div class="modal fade" id="workerDetailsModal" tabindex="-1" aria-labelledby="workerDetailsModalLabel" aria-hidden="true">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title" id="workerDetailsModalLabel">Worker Details</h5>
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
</div>
<div class="modal-body" id="workerDetailsContent">
<!-- Content will be loaded dynamically -->
</div>
</div>
</div>
</div>

<script>
function showWorkerDetails(event) {
const workerID = event.target.closest('button').getAttribute('data-worker-id');

// Show modal
var modal = new bootstrap.Modal(document.getElementById('workerDetailsModal'));

// Load worker details
const encodedWorkerId = encodeURIComponent(workerID);
fetch('/api/maintenance/workers/' + encodedWorkerId)
.then(response => response.json())
.then(data => {
const content = document.getElementById('workerDetailsContent');
content.innerHTML = '<div class="row">' +
'<div class="col-md-6">' +
'<h6>Worker Information</h6>' +
'<ul class="list-unstyled">' +
'<li><strong>ID:</strong> ' + data.worker.id + '</li>' +
'<li><strong>Address:</strong> ' + data.worker.address + '</li>' +
'<li><strong>Status:</strong> ' + data.worker.status + '</li>' +
'<li><strong>Max Concurrent:</strong> ' + data.worker.max_concurrent + '</li>' +
'<li><strong>Current Load:</strong> ' + data.worker.current_load + '</li>' +
'</ul>' +
'</div>' +
'<div class="col-md-6">' +
'<h6>Performance Metrics</h6>' +
'<ul class="list-unstyled">' +
'<li><strong>Tasks Completed:</strong> ' + data.performance.tasks_completed + '</li>' +
'<li><strong>Tasks Failed:</strong> ' + data.performance.tasks_failed + '</li>' +
'<li><strong>Success Rate:</strong> ' + data.performance.success_rate.toFixed(1) + '%</li>' +
'<li><strong>Average Task Time:</strong> ' + formatDuration(data.performance.average_task_time) + '</li>' +
'<li><strong>Uptime:</strong> ' + formatDuration(data.performance.uptime) + '</li>' +
'</ul>' +
'</div>' +
'</div>' +
'<hr>' +
'<h6>Current Tasks</h6>' +
(data.current_tasks === null || data.current_tasks.length === 0 ?
'<p class="text-muted">No current tasks</p>' :
data.current_tasks.map(task =>
'<div class="card mb-2">' +
'<div class="card-body py-2">' +
'<div class="d-flex justify-content-between">' +
'<span><strong>' + task.type + '</strong> - Volume ' + task.volume_id + '</span>' +
'<span class="badge bg-info">' + task.status + '</span>' +
'</div>' +
'<small class="text-muted">' + task.reason + '</small>' +
'</div>' +
'</div>'
).join('')
);
modal.show();
})
.catch(error => {
console.error('Error loading worker details:', error);
const content = document.getElementById('workerDetailsContent');
content.innerHTML = '<div class="alert alert-danger">Failed to load worker details</div>';
modal.show();
});
}

function pauseWorker(event) {
const workerID = event.target.closest('button').getAttribute('data-worker-id');

showConfirm(`Are you sure you want to pause worker ${workerID}?`, function() {
const encodedWorkerId = encodeURIComponent(workerID);
fetch('/api/maintenance/workers/' + encodedWorkerId + '/pause', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
}
})
.then(response => response.json())
.then(data => {
if (data.success) {
location.reload();
} else {
showAlert('Failed to pause worker: ' + data.error, 'error');
}
})
.catch(error => {
console.error('Error pausing worker:', error);
showAlert('Failed to pause worker', 'error');
});
});
}

function formatDuration(nanoseconds) {
const seconds = Math.floor(nanoseconds / 1000000000);
const minutes = Math.floor(seconds / 60);
const hours = Math.floor(minutes / 60);

if (hours > 0) {
return hours + 'h ' + (minutes % 60) + 'm';
} else if (minutes > 0) {
return minutes + 'm ' + (seconds % 60) + 's';
} else {
return seconds + 's';
}
}
</script>
}
File diff suppressed because one or more lines are too long

weed/admin/view/app/plugin.templ (new file, 2876 lines): file diff suppressed because it is too large

weed/admin/view/app/plugin_templ.go (new file, 57 lines): file diff suppressed because one or more lines are too long
@@ -1,160 +0,0 @@
package app

import (
"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
)

templ TaskConfig(data *maintenance.TaskConfigData) {
<div class="container-fluid">
<div class="row mb-4">
<div class="col-12">
<div class="d-flex justify-content-between align-items-center">
<h2 class="mb-0">
<i class={data.TaskIcon + " me-2"}></i>
{data.TaskName} Configuration
</h2>
<div class="btn-group">
<a href="/maintenance/config" class="btn btn-outline-secondary">
<i class="fas fa-arrow-left me-1"></i>
Back to Configuration
</a>
<a href="/maintenance" class="btn btn-outline-primary">
<i class="fas fa-list me-1"></i>
View Queue
</a>
</div>
</div>
</div>
</div>

<div class="row">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0">
<i class={data.TaskIcon + " me-2"}></i>
{data.TaskName} Settings
</h5>
</div>
<div class="card-body">
<p class="text-muted mb-4">{data.Description}</p>

<!-- Task-specific configuration form -->
<form method="POST">
<div class="task-config-form">
@templ.Raw(string(data.ConfigFormHTML))
</div>

<hr class="my-4">

<div class="d-flex gap-2">
<button type="submit" class="btn btn-primary">
<i class="fas fa-save me-1"></i>
Save Configuration
</button>
<button type="button" class="btn btn-secondary" onclick="resetForm()">
<i class="fas fa-undo me-1"></i>
Reset to Defaults
</button>
<a href="/maintenance/config" class="btn btn-outline-secondary">
<i class="fas fa-times me-1"></i>
Cancel
</a>
</div>
</form>
</div>
</div>
</div>
</div>

<!-- Task Information -->
<div class="row mt-4">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0">
<i class="fas fa-info-circle me-2"></i>
Task Information
</h5>
</div>
<div class="card-body">
<div class="row">
<div class="col-md-6">
<h6 class="text-muted">Task Type</h6>
<p class="mb-3">
<span class="badge bg-secondary">{string(data.TaskType)}</span>
</p>
</div>
<div class="col-md-6">
<h6 class="text-muted">Display Name</h6>
<p class="mb-3">{data.TaskName}</p>
</div>
</div>
<div class="row">
<div class="col-12">
<h6 class="text-muted">Description</h6>
<p class="mb-0">{data.Description}</p>
</div>
</div>
</div>
</div>
</div>
</div>
</div>

<script>
function resetForm() {
showConfirm('Are you sure you want to reset all settings to their default values?', function() {
// Find all form inputs and reset them
const form = document.querySelector('form');
if (form) {
form.reset();
}
});
}

// Auto-save form data to localStorage for recovery
document.addEventListener('DOMContentLoaded', function() {
const form = document.querySelector('form');
if (form) {
const taskType = '{string(data.TaskType)}';
const storageKey = 'taskConfig_' + taskType;

// Load saved data
const savedData = localStorage.getItem(storageKey);
if (savedData) {
try {
const data = JSON.parse(savedData);
Object.keys(data).forEach(key => {
const input = form.querySelector(`[name="${key}"]`);
if (input) {
if (input.type === 'checkbox') {
input.checked = data[key];
} else {
input.value = data[key];
}
}
});
} catch (e) {
console.warn('Failed to load saved configuration:', e);
}
}

// Save data on input change
form.addEventListener('input', function() {
const formData = new FormData(form);
const data = {};
for (let [key, value] of formData.entries()) {
data[key] = value;
}
localStorage.setItem(storageKey, JSON.stringify(data));
});

// Clear saved data on successful submit
form.addEventListener('submit', function() {
localStorage.removeItem(storageKey);
});
}
});
</script>
}
@@ -1,487 +0,0 @@
package app

import (
"encoding/base64"
"encoding/json"
"fmt"
"reflect"
"strings"
"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks"
"github.com/seaweedfs/seaweedfs/weed/admin/config"
"github.com/seaweedfs/seaweedfs/weed/admin/view/components"
"github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding"
)

// Helper function to convert task schema to JSON string
func taskSchemaToJSON(schema *tasks.TaskConfigSchema) string {
if schema == nil {
return "{}"
}

data := map[string]interface{}{
"fields": schema.Fields,
}

jsonBytes, err := json.Marshal(data)
if err != nil {
return "{}"
}

return string(jsonBytes)
}

// Helper function to base64 encode the JSON to avoid HTML escaping issues
func taskSchemaToBase64JSON(schema *tasks.TaskConfigSchema) string {
jsonStr := taskSchemaToJSON(schema)
return base64.StdEncoding.EncodeToString([]byte(jsonStr))
}

templ TaskConfigSchema(data *maintenance.TaskConfigData, schema *tasks.TaskConfigSchema, config interface{}) {
<div class="container-fluid">
<div class="row mb-4">
<div class="col-12">
<div class="d-flex justify-content-between align-items-center">
<h2 class="mb-0">
<i class={schema.Icon + " me-2"}></i>
{schema.DisplayName} Configuration
</h2>
<div class="btn-group">
<a href="/maintenance/config" class="btn btn-outline-secondary">
<i class="fas fa-arrow-left me-1"></i>
Back to System Config
</a>
</div>
</div>
</div>
</div>

<!-- Configuration Card -->
<div class="row">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0">
<i class="fas fa-cogs me-2"></i>
Task Configuration
</h5>
<p class="mb-0 text-muted small">{schema.Description}</p>
</div>
<div class="card-body">
<form id="taskConfigForm" method="POST">
<!-- Dynamically render all schema fields in defined order -->
for _, field := range schema.Fields {
@TaskConfigField(field, config)
}

<div class="d-flex gap-2">
<button type="submit" class="btn btn-primary">
<i class="fas fa-save me-1"></i>
Save Configuration
</button>
<button type="button" class="btn btn-secondary" onclick="resetToDefaults()">
<i class="fas fa-undo me-1"></i>
Reset to Defaults
</button>
</div>
</form>
</div>
</div>
</div>
</div>

<!-- Performance Notes Card -->
<div class="row mt-4">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0">
<i class="fas fa-info-circle me-2"></i>
Important Notes
</h5>
</div>
<div class="card-body">
<div class="alert alert-info" role="alert">
if schema.TaskName == "vacuum" {
<h6 class="alert-heading">Vacuum Operations:</h6>
<p class="mb-2"><strong>Performance:</strong> Vacuum operations are I/O intensive and may impact cluster performance.</p>
<p class="mb-2"><strong>Safety:</strong> Only volumes meeting age and garbage thresholds will be processed.</p>
<p class="mb-0"><strong>Recommendation:</strong> Monitor cluster load and adjust concurrent limits accordingly.</p>
} else if schema.TaskName == "balance" {
<h6 class="alert-heading">Balance Operations:</h6>
<p class="mb-2"><strong>Performance:</strong> Volume balancing involves data movement and can impact cluster performance.</p>
<p class="mb-2"><strong>Safety:</strong> Requires adequate server count to ensure data safety during moves.</p>
<p class="mb-0"><strong>Recommendation:</strong> Run during off-peak hours to minimize impact on production workloads.</p>
} else if schema.TaskName == "erasure_coding" {
<h6 class="alert-heading">Erasure Coding Operations:</h6>
<p class="mb-2"><strong>Performance:</strong> Erasure coding is CPU and I/O intensive. Consider running during off-peak hours.</p>
<p class="mb-2"><strong>Durability:</strong> With { fmt.Sprintf("%d+%d", erasure_coding.DataShardsCount, erasure_coding.ParityShardsCount) } configuration, can tolerate up to { fmt.Sprintf("%d", erasure_coding.ParityShardsCount) } shard failures.</p>
<p class="mb-0"><strong>Configuration:</strong> Fullness ratio should be between 0.5 and 1.0 (e.g., 0.90 for 90%).</p>
}
</div>
</div>
</div>
</div>
</div>
</div>

<script>
function resetToDefaults() {
showConfirm('Are you sure you want to reset to default configuration? This will overwrite your current settings.', function() {
// Reset form fields to their default values
const form = document.getElementById('taskConfigForm');
const schemaFields = window.taskConfigSchema ? window.taskConfigSchema.fields : {};

Object.keys(schemaFields).forEach(fieldName => {
const field = schemaFields[fieldName];
const element = document.getElementById(fieldName);

if (element && field.default_value !== undefined) {
if (field.input_type === 'checkbox') {
element.checked = field.default_value;
} else if (field.input_type === 'interval') {
// Handle interval fields with value and unit
const valueElement = document.getElementById(fieldName + '_value');
const unitElement = document.getElementById(fieldName + '_unit');
if (valueElement && unitElement && field.default_value) {
const defaultSeconds = field.default_value;
const { value, unit } = convertSecondsToTaskIntervalValueUnit(defaultSeconds);
valueElement.value = value;
unitElement.value = unit;
}
} else {
element.value = field.default_value;
}
}
});
});
}

function convertSecondsToTaskIntervalValueUnit(totalSeconds) {
if (totalSeconds === 0) {
return { value: 0, unit: 'minutes' };
}

// Check if it's evenly divisible by days
if (totalSeconds % (24 * 3600) === 0) {
return { value: totalSeconds / (24 * 3600), unit: 'days' };
}

// Check if it's evenly divisible by hours
if (totalSeconds % 3600 === 0) {
return { value: totalSeconds / 3600, unit: 'hours' };
}

// Default to minutes
return { value: totalSeconds / 60, unit: 'minutes' };
}

// Store schema data for JavaScript access (moved to after div is created)
</script>

<!-- Hidden element to store schema data -->
<div data-task-schema={ taskSchemaToBase64JSON(schema) } style="display: none;"></div>

<script>
// Load schema data now that the div exists
const base64Data = document.querySelector('[data-task-schema]').getAttribute('data-task-schema');
const jsonStr = atob(base64Data);
window.taskConfigSchema = JSON.parse(jsonStr);
</script>
}

// TaskConfigField renders a single task configuration field based on schema with typed field lookup
templ TaskConfigField(field *config.Field, config interface{}) {
if field.InputType == "interval" {
<!-- Interval field with number input + unit dropdown -->
<div class="mb-3">
<label for={ field.JSONName } class="form-label">
{ field.DisplayName }
if field.Required {
<span class="text-danger">*</span>
}
</label>
<div class="input-group">
<input
type="number"
class="form-control"
id={ field.JSONName + "_value" }
name={ field.JSONName + "_value" }
value={ fmt.Sprintf("%.0f", components.ConvertInt32SecondsToDisplayValue(getTaskConfigInt32Field(config, field.JSONName))) }
step="1"
min="1"
if field.Required {
required
}
/>
<select
class="form-select"
id={ field.JSONName + "_unit" }
name={ field.JSONName + "_unit" }
style="max-width: 120px;"
if field.Required {
required
}
>
<option
value="minutes"
if components.GetInt32DisplayUnit(getTaskConfigInt32Field(config, field.JSONName)) == "minutes" {
selected
}
>
Minutes
</option>
<option
value="hours"
if components.GetInt32DisplayUnit(getTaskConfigInt32Field(config, field.JSONName)) == "hours" {
selected
}
>
Hours
</option>
<option
value="days"
if components.GetInt32DisplayUnit(getTaskConfigInt32Field(config, field.JSONName)) == "days" {
selected
}
>
Days
</option>
</select>
</div>
if field.Description != "" {
<div class="form-text text-muted">{ field.Description }</div>
}
</div>
} else if field.InputType == "checkbox" {
<!-- Checkbox field -->
<div class="mb-3">
<div class="form-check form-switch">
<input
class="form-check-input"
type="checkbox"
id={ field.JSONName }
name={ field.JSONName }
value="on"
if getTaskConfigBoolField(config, field.JSONName) {
checked
}
/>
<label class="form-check-label" for={ field.JSONName }>
<strong>{ field.DisplayName }</strong>
</label>
</div>
if field.Description != "" {
<div class="form-text text-muted">{ field.Description }</div>
}
</div>
} else if field.InputType == "text" {
<!-- Text field -->
<div class="mb-3">
<label for={ field.JSONName } class="form-label">
{ field.DisplayName }
if field.Required {
<span class="text-danger">*</span>
}
</label>
<input
type="text"
class="form-control"
id={ field.JSONName }
name={ field.JSONName }
value={ getTaskConfigStringField(config, field.JSONName) }
placeholder={ field.Placeholder }
if field.Required {
required
}
/>
if field.Description != "" {
<div class="form-text text-muted">{ field.Description }</div>
}
</div>
} else {
<!-- Number field -->
<div class="mb-3">
<label for={ field.JSONName } class="form-label">
{ field.DisplayName }
if field.Required {
<span class="text-danger">*</span>
}
</label>
<input
type="number"
class="form-control"
id={ field.JSONName }
name={ field.JSONName }
value={ fmt.Sprintf("%.6g", getTaskConfigFloatField(config, field.JSONName)) }
placeholder={ field.Placeholder }
if field.MinValue != nil {
min={ fmt.Sprintf("%v", field.MinValue) }
}
if field.MaxValue != nil {
max={ fmt.Sprintf("%v", field.MaxValue) }
}
step={ getTaskNumberStep(field) }
if field.Required {
required
}
/>
if field.Description != "" {
<div class="form-text text-muted">{ field.Description }</div>
}
</div>
}
}
// Typed field getters for task configs - avoiding interface{} where possible
|
||||
func getTaskConfigBoolField(config interface{}, fieldName string) bool {
|
||||
switch fieldName {
|
||||
case "enabled":
|
||||
// Use reflection only for the common 'enabled' field in BaseConfig
|
||||
if value := getTaskFieldValue(config, fieldName); value != nil {
|
||||
if boolVal, ok := value.(bool); ok {
|
||||
return boolVal
|
||||
}
|
||||
}
|
||||
return false
|
||||
default:
|
||||
// For other boolean fields, use reflection
|
||||
if value := getTaskFieldValue(config, fieldName); value != nil {
|
||||
if boolVal, ok := value.(bool); ok {
|
||||
return boolVal
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
func getTaskConfigInt32Field(config interface{}, fieldName string) int32 {
|
||||
switch fieldName {
|
||||
case "scan_interval_seconds", "max_concurrent":
|
||||
// Common fields that should be int/int32
|
||||
if value := getTaskFieldValue(config, fieldName); value != nil {
|
||||
switch v := value.(type) {
|
||||
case int32:
|
||||
return v
|
||||
case int:
|
||||
return int32(v)
|
||||
case int64:
|
||||
return int32(v)
|
||||
}
|
||||
}
|
||||
return 0
|
||||
default:
|
||||
// For other int fields, use reflection
|
||||
if value := getTaskFieldValue(config, fieldName); value != nil {
|
||||
switch v := value.(type) {
|
||||
case int32:
|
||||
return v
|
||||
case int:
|
||||
return int32(v)
|
||||
case int64:
|
||||
return int32(v)
|
||||
case float64:
|
||||
return int32(v)
|
||||
}
|
||||
}
|
||||
return 0
|
||||
}
|
||||
}
|
||||
|
||||
func getTaskConfigFloatField(config interface{}, fieldName string) float64 {
|
||||
if value := getTaskFieldValue(config, fieldName); value != nil {
|
||||
switch v := value.(type) {
|
||||
case float64:
|
||||
return v
|
||||
case float32:
|
||||
return float64(v)
|
||||
case int:
|
||||
return float64(v)
|
||||
case int32:
|
||||
return float64(v)
|
||||
case int64:
|
||||
return float64(v)
|
||||
}
|
||||
}
|
||||
return 0.0
|
||||
}
|
||||
|
||||
func getTaskConfigStringField(config interface{}, fieldName string) string {
	if value := getTaskFieldValue(config, fieldName); value != nil {
		if strVal, ok := value.(string); ok {
			return strVal
		}
		// Convert numbers to strings for form display
		switch v := value.(type) {
		case int:
			return fmt.Sprintf("%d", v)
		case int32:
			return fmt.Sprintf("%d", v)
		case int64:
			return fmt.Sprintf("%d", v)
		case float64:
			return fmt.Sprintf("%.6g", v)
		case float32:
			return fmt.Sprintf("%.6g", v)
		}
	}
	return ""
}

func getTaskNumberStep(field *config.Field) string {
	if field.Type == config.FieldTypeFloat {
		return "any"
	}
	return "1"
}

func getTaskFieldValue(config interface{}, fieldName string) interface{} {
	if config == nil {
		return nil
	}

	// Use reflection to get the field value from the config struct
	configValue := reflect.ValueOf(config)
	if configValue.Kind() == reflect.Ptr {
		configValue = configValue.Elem()
	}

	if configValue.Kind() != reflect.Struct {
		return nil
	}

	configType := configValue.Type()

	for i := 0; i < configValue.NumField(); i++ {
		field := configValue.Field(i)
		fieldType := configType.Field(i)

		// Handle embedded structs recursively (before JSON tag check)
		if field.Kind() == reflect.Struct && fieldType.Anonymous {
			if value := getTaskFieldValue(field.Interface(), fieldName); value != nil {
				return value
			}
			continue
		}

		// Get JSON tag name
		jsonTag := fieldType.Tag.Get("json")
		if jsonTag == "" {
			continue
		}

		// Remove options like ",omitempty"
		if commaIdx := strings.Index(jsonTag, ","); commaIdx > 0 {
			jsonTag = jsonTag[:commaIdx]
		}

		// Check if this is the field we're looking for
		if jsonTag == fieldName {
			return field.Interface()
		}
	}

	return nil
}

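The JSON-tag walk in `getTaskFieldValue` — reflect over struct fields, recurse into anonymous embeds before checking tags, strip `,omitempty`, match the tag name — can be sketched as a standalone program. `lookupByJSONTag`, `Base`, and `Cfg` are hypothetical names for illustration, not repository code:

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// Base plays the role of an embedded BaseConfig-style struct.
type Base struct {
	Enabled bool `json:"enabled"`
}

// Cfg embeds Base, so "enabled" is only reachable via recursion.
type Cfg struct {
	Base
	MaxConcurrent int `json:"max_concurrent,omitempty"`
}

// lookupByJSONTag returns the field whose json tag matches name, or nil.
func lookupByJSONTag(cfg interface{}, name string) interface{} {
	v := reflect.ValueOf(cfg)
	if v.Kind() == reflect.Ptr {
		v = v.Elem()
	}
	if v.Kind() != reflect.Struct {
		return nil
	}
	t := v.Type()
	for i := 0; i < v.NumField(); i++ {
		f, ft := v.Field(i), t.Field(i)
		// Descend into embedded structs before the tag check.
		if f.Kind() == reflect.Struct && ft.Anonymous {
			if val := lookupByJSONTag(f.Interface(), name); val != nil {
				return val
			}
			continue
		}
		tag := ft.Tag.Get("json")
		if idx := strings.Index(tag, ","); idx > 0 {
			tag = tag[:idx] // strip ",omitempty" and friends
		}
		if tag == name {
			return f.Interface()
		}
	}
	return nil
}

func main() {
	c := &Cfg{Base: Base{Enabled: true}, MaxConcurrent: 4}
	fmt.Println(lookupByJSONTag(c, "enabled"))        // true
	fmt.Println(lookupByJSONTag(c, "max_concurrent")) // 4
}
```

Note one consequence of the nil-based recursion: an embedded field holding its zero value (e.g. `Enabled: false`) still returns a non-nil `interface{}`, so the lookup terminates correctly.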
@@ -1,948 +0,0 @@
// Code generated by templ - DO NOT EDIT.

// templ: version: v0.3.977
package app

//lint:file-ignore SA4006 This context is only used if a nested component is present.

import "github.com/a-h/templ"
import templruntime "github.com/a-h/templ/runtime"

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"github.com/seaweedfs/seaweedfs/weed/admin/config"
	"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
	"github.com/seaweedfs/seaweedfs/weed/admin/view/components"
	"github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks"
	"reflect"
	"strings"
)

// Helper function to convert task schema to JSON string
func taskSchemaToJSON(schema *tasks.TaskConfigSchema) string {
	if schema == nil {
		return "{}"
	}

	data := map[string]interface{}{
		"fields": schema.Fields,
	}

	jsonBytes, err := json.Marshal(data)
	if err != nil {
		return "{}"
	}

	return string(jsonBytes)
}

// Helper function to base64 encode the JSON to avoid HTML escaping issues
func taskSchemaToBase64JSON(schema *tasks.TaskConfigSchema) string {
	jsonStr := taskSchemaToJSON(schema)
	return base64.StdEncoding.EncodeToString([]byte(jsonStr))
}

func TaskConfigSchema(data *maintenance.TaskConfigData, schema *tasks.TaskConfigSchema, config interface{}) templ.Component {
	return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
		templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
		if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
			return templ_7745c5c3_CtxErr
		}
		templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
		if !templ_7745c5c3_IsBuffer {
			defer func() {
				templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
				if templ_7745c5c3_Err == nil {
					templ_7745c5c3_Err = templ_7745c5c3_BufErr
				}
			}()
		}
		ctx = templ.InitializeContext(ctx)
		templ_7745c5c3_Var1 := templ.GetChildren(ctx)
		if templ_7745c5c3_Var1 == nil {
			templ_7745c5c3_Var1 = templ.NopComponent
		}
		ctx = templ.ClearChildren(ctx)
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 1, "<div class=\"container-fluid\"><div class=\"row mb-4\"><div class=\"col-12\"><div class=\"d-flex justify-content-between align-items-center\"><h2 class=\"mb-0\">")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		var templ_7745c5c3_Var2 = []any{schema.Icon + " me-2"}
		templ_7745c5c3_Err = templ.RenderCSSItems(ctx, templ_7745c5c3_Buffer, templ_7745c5c3_Var2...)
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 2, "<i class=\"")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		var templ_7745c5c3_Var3 string
		templ_7745c5c3_Var3, templ_7745c5c3_Err = templ.JoinStringErrs(templ.CSSClasses(templ_7745c5c3_Var2).String())
		if templ_7745c5c3_Err != nil {
			return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 1, Col: 0}
		}
		_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var3))
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 3, "\"></i> ")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		var templ_7745c5c3_Var4 string
		templ_7745c5c3_Var4, templ_7745c5c3_Err = templ.JoinStringErrs(schema.DisplayName)
		if templ_7745c5c3_Err != nil {
			return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 47, Col: 43}
		}
		_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var4))
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 4, " Configuration</h2><div class=\"btn-group\"><a href=\"/maintenance/config\" class=\"btn btn-outline-secondary\"><i class=\"fas fa-arrow-left me-1\"></i> Back to System Config</a></div></div></div></div><!-- Configuration Card --><div class=\"row\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header\"><h5 class=\"mb-0\"><i class=\"fas fa-cogs me-2\"></i> Task Configuration</h5><p class=\"mb-0 text-muted small\">")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		var templ_7745c5c3_Var5 string
		templ_7745c5c3_Var5, templ_7745c5c3_Err = templ.JoinStringErrs(schema.Description)
		if templ_7745c5c3_Err != nil {
			return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 68, Col: 76}
		}
		_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var5))
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 5, "</p></div><div class=\"card-body\"><form id=\"taskConfigForm\" method=\"POST\"><!-- Dynamically render all schema fields in defined order -->")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		for _, field := range schema.Fields {
			templ_7745c5c3_Err = TaskConfigField(field, config).Render(ctx, templ_7745c5c3_Buffer)
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 6, "<div class=\"d-flex gap-2\"><button type=\"submit\" class=\"btn btn-primary\"><i class=\"fas fa-save me-1\"></i> Save Configuration</button> <button type=\"button\" class=\"btn btn-secondary\" onclick=\"resetToDefaults()\"><i class=\"fas fa-undo me-1\"></i> Reset to Defaults</button></div></form></div></div></div></div><!-- Performance Notes Card --><div class=\"row mt-4\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header\"><h5 class=\"mb-0\"><i class=\"fas fa-info-circle me-2\"></i> Important Notes</h5></div><div class=\"card-body\"><div class=\"alert alert-info\" role=\"alert\">")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		if schema.TaskName == "vacuum" {
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 7, "<h6 class=\"alert-heading\">Vacuum Operations:</h6><p class=\"mb-2\"><strong>Performance:</strong> Vacuum operations are I/O intensive and may impact cluster performance.</p><p class=\"mb-2\"><strong>Safety:</strong> Only volumes meeting age and garbage thresholds will be processed.</p><p class=\"mb-0\"><strong>Recommendation:</strong> Monitor cluster load and adjust concurrent limits accordingly.</p>")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
		} else if schema.TaskName == "balance" {
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 8, "<h6 class=\"alert-heading\">Balance Operations:</h6><p class=\"mb-2\"><strong>Performance:</strong> Volume balancing involves data movement and can impact cluster performance.</p><p class=\"mb-2\"><strong>Safety:</strong> Requires adequate server count to ensure data safety during moves.</p><p class=\"mb-0\"><strong>Recommendation:</strong> Run during off-peak hours to minimize impact on production workloads.</p>")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
		} else if schema.TaskName == "erasure_coding" {
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 9, "<h6 class=\"alert-heading\">Erasure Coding Operations:</h6><p class=\"mb-2\"><strong>Performance:</strong> Erasure coding is CPU and I/O intensive. Consider running during off-peak hours.</p><p class=\"mb-2\"><strong>Durability:</strong> With ")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var6 string
			templ_7745c5c3_Var6, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d+%d", erasure_coding.DataShardsCount, erasure_coding.ParityShardsCount))
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 118, Col: 170}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var6))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 10, " configuration, can tolerate up to ")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var7 string
			templ_7745c5c3_Var7, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", erasure_coding.ParityShardsCount))
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 118, Col: 260}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var7))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 11, " shard failures.</p><p class=\"mb-0\"><strong>Configuration:</strong> Fullness ratio should be between 0.5 and 1.0 (e.g., 0.90 for 90%).</p>")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 12, "</div></div></div></div></div></div><script>\n    function resetToDefaults() {\n        showConfirm('Are you sure you want to reset to default configuration? This will overwrite your current settings.', function() {\n            // Reset form fields to their default values\n            const form = document.getElementById('taskConfigForm');\n            const schemaFields = window.taskConfigSchema ? window.taskConfigSchema.fields : {};\n            \n            Object.keys(schemaFields).forEach(fieldName => {\n                const field = schemaFields[fieldName];\n                const element = document.getElementById(fieldName);\n                \n                if (element && field.default_value !== undefined) {\n                    if (field.input_type === 'checkbox') {\n                        element.checked = field.default_value;\n                    } else if (field.input_type === 'interval') {\n                        // Handle interval fields with value and unit\n                        const valueElement = document.getElementById(fieldName + '_value');\n                        const unitElement = document.getElementById(fieldName + '_unit');\n                        if (valueElement && unitElement && field.default_value) {\n                            const defaultSeconds = field.default_value;\n                            const { value, unit } = convertSecondsToTaskIntervalValueUnit(defaultSeconds);\n                            valueElement.value = value;\n                            unitElement.value = unit;\n                        }\n                    } else {\n                        element.value = field.default_value;\n                    }\n                }\n            });\n        });\n    }\n\n    function convertSecondsToTaskIntervalValueUnit(totalSeconds) {\n        if (totalSeconds === 0) {\n            return { value: 0, unit: 'minutes' };\n        }\n\n        // Check if it's evenly divisible by days\n        if (totalSeconds % (24 * 3600) === 0) {\n            return { value: totalSeconds / (24 * 3600), unit: 'days' };\n        }\n\n        // Check if it's evenly divisible by hours\n        if (totalSeconds % 3600 === 0) {\n            return { value: totalSeconds / 3600, unit: 'hours' };\n        }\n\n        // Default to minutes\n        return { value: totalSeconds / 60, unit: 'minutes' };\n    }\n\n    // Store schema data for JavaScript access (moved to after div is created)\n    </script><!-- Hidden element to store schema data --><div data-task-schema=\"")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		var templ_7745c5c3_Var8 string
		templ_7745c5c3_Var8, templ_7745c5c3_Err = templ.JoinStringErrs(taskSchemaToBase64JSON(schema))
		if templ_7745c5c3_Err != nil {
			return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 183, Col: 58}
		}
		_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var8))
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 13, "\" style=\"display: none;\"></div><script>\n    // Load schema data now that the div exists\n    const base64Data = document.querySelector('[data-task-schema]').getAttribute('data-task-schema');\n    const jsonStr = atob(base64Data);\n    window.taskConfigSchema = JSON.parse(jsonStr);\n    </script>")
		if templ_7745c5c3_Err != nil {
			return templ_7745c5c3_Err
		}
		return nil
	})
}

// TaskConfigField renders a single task configuration field based on schema with typed field lookup
func TaskConfigField(field *config.Field, config interface{}) templ.Component {
	return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
		templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
		if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
			return templ_7745c5c3_CtxErr
		}
		templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
		if !templ_7745c5c3_IsBuffer {
			defer func() {
				templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
				if templ_7745c5c3_Err == nil {
					templ_7745c5c3_Err = templ_7745c5c3_BufErr
				}
			}()
		}
		ctx = templ.InitializeContext(ctx)
		templ_7745c5c3_Var9 := templ.GetChildren(ctx)
		if templ_7745c5c3_Var9 == nil {
			templ_7745c5c3_Var9 = templ.NopComponent
		}
		ctx = templ.ClearChildren(ctx)
		if field.InputType == "interval" {
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 14, "<!-- Interval field with number input + unit dropdown --> <div class=\"mb-3\"><label for=\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var10 string
			templ_7745c5c3_Var10, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 198, Col: 39}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var10))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 15, "\" class=\"form-label\">")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var11 string
			templ_7745c5c3_Var11, templ_7745c5c3_Err = templ.JoinStringErrs(field.DisplayName)
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 199, Col: 35}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var11))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 16, " ")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			if field.Required {
				templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 17, "<span class=\"text-danger\">*</span>")
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 18, "</label><div class=\"input-group\"><input type=\"number\" class=\"form-control\" id=\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var12 string
			templ_7745c5c3_Var12, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName + "_value")
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 208, Col: 50}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var12))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 19, "\" name=\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var13 string
			templ_7745c5c3_Var13, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName + "_value")
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 209, Col: 52}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var13))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 20, "\" value=\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var14 string
			templ_7745c5c3_Var14, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.0f", components.ConvertInt32SecondsToDisplayValue(getTaskConfigInt32Field(config, field.JSONName))))
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 210, Col: 142}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var14))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 21, "\" step=\"1\" min=\"1\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			if field.Required {
				templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 22, " required")
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 23, "> <select class=\"form-select\" id=\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var15 string
			templ_7745c5c3_Var15, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName + "_unit")
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 219, Col: 49}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var15))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 24, "\" name=\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var16 string
			templ_7745c5c3_Var16, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName + "_unit")
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 220, Col: 51}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var16))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 25, "\" style=\"max-width: 120px;\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			if field.Required {
				templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 26, " required")
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 27, "><option value=\"minutes\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			if components.GetInt32DisplayUnit(getTaskConfigInt32Field(config, field.JSONName)) == "minutes" {
				templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 28, " selected")
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 29, ">Minutes</option> <option value=\"hours\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			if components.GetInt32DisplayUnit(getTaskConfigInt32Field(config, field.JSONName)) == "hours" {
				templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 30, " selected")
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 31, ">Hours</option> <option value=\"days\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			if components.GetInt32DisplayUnit(getTaskConfigInt32Field(config, field.JSONName)) == "days" {
				templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 32, " selected")
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 33, ">Days</option></select></div>")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			if field.Description != "" {
				templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 34, "<div class=\"form-text text-muted\">")
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
				var templ_7745c5c3_Var17 string
				templ_7745c5c3_Var17, templ_7745c5c3_Err = templ.JoinStringErrs(field.Description)
				if templ_7745c5c3_Err != nil {
					return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 253, Col: 69}
				}
				_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var17))
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
				templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 35, "</div>")
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 36, "</div>")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
		} else if field.InputType == "checkbox" {
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 37, "<!-- Checkbox field --> <div class=\"mb-3\"><div class=\"form-check form-switch\"><input class=\"form-check-input\" type=\"checkbox\" id=\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var18 string
			templ_7745c5c3_Var18, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 263, Col: 39}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var18))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 38, "\" name=\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var19 string
			templ_7745c5c3_Var19, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 264, Col: 41}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var19))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 39, "\" value=\"on\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			if getTaskConfigBoolField(config, field.JSONName) {
				templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 40, " checked")
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 41, "> <label class=\"form-check-label\" for=\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var20 string
			templ_7745c5c3_Var20, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 270, Col: 68}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var20))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 42, "\"><strong>")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var21 string
			templ_7745c5c3_Var21, templ_7745c5c3_Err = templ.JoinStringErrs(field.DisplayName)
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 271, Col: 47}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var21))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 43, "</strong></label></div>")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			if field.Description != "" {
				templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 44, "<div class=\"form-text text-muted\">")
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
				var templ_7745c5c3_Var22 string
				templ_7745c5c3_Var22, templ_7745c5c3_Err = templ.JoinStringErrs(field.Description)
				if templ_7745c5c3_Err != nil {
					return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 275, Col: 69}
				}
				_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var22))
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
				templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 45, "</div>")
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 46, "</div>")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
		} else if field.InputType == "text" {
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 47, "<!-- Text field --> <div class=\"mb-3\"><label for=\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var23 string
			templ_7745c5c3_Var23, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 281, Col: 39}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var23))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 48, "\" class=\"form-label\">")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var24 string
			templ_7745c5c3_Var24, templ_7745c5c3_Err = templ.JoinStringErrs(field.DisplayName)
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 282, Col: 35}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var24))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 49, " ")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			if field.Required {
				templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 50, "<span class=\"text-danger\">*</span>")
				if templ_7745c5c3_Err != nil {
					return templ_7745c5c3_Err
				}
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 51, "</label> <input type=\"text\" class=\"form-control\" id=\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var25 string
			templ_7745c5c3_Var25, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 290, Col: 35}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var25))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 52, "\" name=\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var26 string
			templ_7745c5c3_Var26, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 291, Col: 37}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var26))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 53, "\" value=\"")
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			var templ_7745c5c3_Var27 string
			templ_7745c5c3_Var27, templ_7745c5c3_Err = templ.JoinStringErrs(getTaskConfigStringField(config, field.JSONName))
			if templ_7745c5c3_Err != nil {
				return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 292, Col: 72}
			}
			_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var27))
			if templ_7745c5c3_Err != nil {
				return templ_7745c5c3_Err
			}
			templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 54, "\" placeholder=\"")
			if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var28 string
|
||||
templ_7745c5c3_Var28, templ_7745c5c3_Err = templ.JoinStringErrs(field.Placeholder)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 293, Col: 47}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var28))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 55, "\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
if field.Required {
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 56, " required")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 57, "> ")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
if field.Description != "" {
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 58, "<div class=\"form-text text-muted\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var29 string
|
||||
templ_7745c5c3_Var29, templ_7745c5c3_Err = templ.JoinStringErrs(field.Description)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 299, Col: 69}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var29))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 59, "</div>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 60, "</div>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
} else {
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 61, "<!-- Number field --> <div class=\"mb-3\"><label for=\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var30 string
|
||||
templ_7745c5c3_Var30, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 305, Col: 39}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var30))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 62, "\" class=\"form-label\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var31 string
|
||||
templ_7745c5c3_Var31, templ_7745c5c3_Err = templ.JoinStringErrs(field.DisplayName)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 306, Col: 35}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var31))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 63, " ")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
if field.Required {
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 64, "<span class=\"text-danger\">*</span>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 65, "</label> <input type=\"number\" class=\"form-control\" id=\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var32 string
|
||||
templ_7745c5c3_Var32, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 314, Col: 35}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var32))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 66, "\" name=\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var33 string
|
||||
templ_7745c5c3_Var33, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 315, Col: 37}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var33))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 67, "\" value=\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var34 string
|
||||
templ_7745c5c3_Var34, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.6g", getTaskConfigFloatField(config, field.JSONName)))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 316, Col: 92}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var34))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 68, "\" placeholder=\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var35 string
|
||||
templ_7745c5c3_Var35, templ_7745c5c3_Err = templ.JoinStringErrs(field.Placeholder)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 317, Col: 47}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var35))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 69, "\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
if field.MinValue != nil {
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 70, " min=\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var36 string
|
||||
templ_7745c5c3_Var36, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%v", field.MinValue))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 319, Col: 59}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var36))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 71, "\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
}
|
||||
if field.MaxValue != nil {
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 72, " max=\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var37 string
|
||||
templ_7745c5c3_Var37, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%v", field.MaxValue))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 322, Col: 59}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var37))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 73, "\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 74, " step=\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var38 string
|
||||
templ_7745c5c3_Var38, templ_7745c5c3_Err = templ.JoinStringErrs(getTaskNumberStep(field))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 324, Col: 47}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var38))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 75, "\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
if field.Required {
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 76, " required")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 77, "> ")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
if field.Description != "" {
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 78, "<div class=\"form-text text-muted\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var39 string
|
||||
templ_7745c5c3_Var39, templ_7745c5c3_Err = templ.JoinStringErrs(field.Description)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_schema.templ`, Line: 330, Col: 69}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var39))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 79, "</div>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 80, "</div>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
})
|
||||
}
|
||||
|
||||
// Typed field getters for task configs - avoiding interface{} where possible

func getTaskConfigBoolField(config interface{}, fieldName string) bool {
	// All boolean fields, including the common BaseConfig "enabled" field,
	// are resolved the same way: look the field up by its JSON tag.
	if value := getTaskFieldValue(config, fieldName); value != nil {
		if boolVal, ok := value.(bool); ok {
			return boolVal
		}
	}
	return false
}

func getTaskConfigInt32Field(config interface{}, fieldName string) int32 {
	// Integer fields such as "scan_interval_seconds" and "max_concurrent" may
	// carry any numeric type after a JSON round trip, so convert broadly.
	if value := getTaskFieldValue(config, fieldName); value != nil {
		switch v := value.(type) {
		case int32:
			return v
		case int:
			return int32(v)
		case int64:
			return int32(v)
		case float64:
			return int32(v)
		}
	}
	return 0
}

func getTaskConfigFloatField(config interface{}, fieldName string) float64 {
	if value := getTaskFieldValue(config, fieldName); value != nil {
		switch v := value.(type) {
		case float64:
			return v
		case float32:
			return float64(v)
		case int:
			return float64(v)
		case int32:
			return float64(v)
		case int64:
			return float64(v)
		}
	}
	return 0.0
}

func getTaskConfigStringField(config interface{}, fieldName string) string {
	if value := getTaskFieldValue(config, fieldName); value != nil {
		if strVal, ok := value.(string); ok {
			return strVal
		}
		// Convert numbers to strings for form display
		switch v := value.(type) {
		case int:
			return fmt.Sprintf("%d", v)
		case int32:
			return fmt.Sprintf("%d", v)
		case int64:
			return fmt.Sprintf("%d", v)
		case float64:
			return fmt.Sprintf("%.6g", v)
		case float32:
			return fmt.Sprintf("%.6g", v)
		}
	}
	return ""
}

func getTaskNumberStep(field *config.Field) string {
	if field.Type == config.FieldTypeFloat {
		return "any"
	}
	return "1"
}

// getTaskFieldValue looks up a struct field by its JSON tag name (e.g.
// "scan_interval_seconds") and returns its value, or nil if no field with
// that tag exists. Embedded structs are searched recursively, so tags defined
// on a shared BaseConfig are found through any task config that embeds it.
func getTaskFieldValue(config interface{}, fieldName string) interface{} {
	if config == nil {
		return nil
	}

	// Use reflection to get the field value from the config struct
	configValue := reflect.ValueOf(config)
	if configValue.Kind() == reflect.Ptr {
		configValue = configValue.Elem()
	}

	if configValue.Kind() != reflect.Struct {
		return nil
	}

	configType := configValue.Type()

	for i := 0; i < configValue.NumField(); i++ {
		field := configValue.Field(i)
		fieldType := configType.Field(i)

		// Handle embedded structs recursively (before JSON tag check)
		if field.Kind() == reflect.Struct && fieldType.Anonymous {
			if value := getTaskFieldValue(field.Interface(), fieldName); value != nil {
				return value
			}
			continue
		}

		// Get JSON tag name
		jsonTag := fieldType.Tag.Get("json")
		if jsonTag == "" {
			continue
		}

		// Remove options like ",omitempty"
		if commaIdx := strings.Index(jsonTag, ","); commaIdx > 0 {
			jsonTag = jsonTag[:commaIdx]
		}

		// Check if this is the field we're looking for
		if jsonTag == fieldName {
			return field.Interface()
		}
	}

	return nil
}

var _ = templruntime.GeneratedTemplate

@@ -1,232 +0,0 @@
package app

import (
	"testing"
)

// Test structs that mirror the actual configuration structure
type TestBaseConfigForTemplate struct {
	Enabled             bool `json:"enabled"`
	ScanIntervalSeconds int  `json:"scan_interval_seconds"`
	MaxConcurrent       int  `json:"max_concurrent"`
}

type TestTaskConfigForTemplate struct {
	TestBaseConfigForTemplate
	TaskSpecificField    float64 `json:"task_specific_field"`
	AnotherSpecificField string  `json:"another_specific_field"`
}

func TestGetTaskFieldValue_EmbeddedStructFields(t *testing.T) {
	config := &TestTaskConfigForTemplate{
		TestBaseConfigForTemplate: TestBaseConfigForTemplate{
			Enabled:             true,
			ScanIntervalSeconds: 2400,
			MaxConcurrent:       5,
		},
		TaskSpecificField:    0.18,
		AnotherSpecificField: "test_value",
	}

	// Test embedded struct fields
	tests := []struct {
		fieldName     string
		expectedValue interface{}
		description   string
	}{
		{"enabled", true, "BaseConfig boolean field"},
		{"scan_interval_seconds", 2400, "BaseConfig integer field"},
		{"max_concurrent", 5, "BaseConfig integer field"},
		{"task_specific_field", 0.18, "Task-specific float field"},
		{"another_specific_field", "test_value", "Task-specific string field"},
	}

	for _, test := range tests {
		t.Run(test.description, func(t *testing.T) {
			result := getTaskFieldValue(config, test.fieldName)

			if result != test.expectedValue {
				t.Errorf("Field %s: expected %v (%T), got %v (%T)",
					test.fieldName, test.expectedValue, test.expectedValue, result, result)
			}
		})
	}
}

func TestGetTaskFieldValue_NonExistentField(t *testing.T) {
	config := &TestTaskConfigForTemplate{
		TestBaseConfigForTemplate: TestBaseConfigForTemplate{
			Enabled:             true,
			ScanIntervalSeconds: 1800,
			MaxConcurrent:       3,
		},
	}

	result := getTaskFieldValue(config, "non_existent_field")

	if result != nil {
		t.Errorf("Expected nil for non-existent field, got %v", result)
	}
}

func TestGetTaskFieldValue_NilConfig(t *testing.T) {
	var config *TestTaskConfigForTemplate

	result := getTaskFieldValue(config, "enabled")

	if result != nil {
		t.Errorf("Expected nil for nil config, got %v", result)
	}
}

func TestGetTaskFieldValue_EmptyStruct(t *testing.T) {
	config := &TestTaskConfigForTemplate{}

	// Test that we can extract zero values
	tests := []struct {
		fieldName     string
		expectedValue interface{}
		description   string
	}{
		{"enabled", false, "Zero value boolean"},
		{"scan_interval_seconds", 0, "Zero value integer"},
		{"max_concurrent", 0, "Zero value integer"},
		{"task_specific_field", 0.0, "Zero value float"},
		{"another_specific_field", "", "Zero value string"},
	}

	for _, test := range tests {
		t.Run(test.description, func(t *testing.T) {
			result := getTaskFieldValue(config, test.fieldName)

			if result != test.expectedValue {
				t.Errorf("Field %s: expected %v (%T), got %v (%T)",
					test.fieldName, test.expectedValue, test.expectedValue, result, result)
			}
		})
	}
}

func TestGetTaskFieldValue_NonStructConfig(t *testing.T) {
	var config interface{} = "not a struct"

	result := getTaskFieldValue(config, "enabled")

	if result != nil {
		t.Errorf("Expected nil for non-struct config, got %v", result)
	}
}

func TestGetTaskFieldValue_PointerToStruct(t *testing.T) {
	config := &TestTaskConfigForTemplate{
		TestBaseConfigForTemplate: TestBaseConfigForTemplate{
			Enabled:             false,
			ScanIntervalSeconds: 900,
			MaxConcurrent:       2,
		},
		TaskSpecificField: 0.35,
	}

	// Test that pointers are handled correctly
	enabledResult := getTaskFieldValue(config, "enabled")
	if enabledResult != false {
		t.Errorf("Expected false for enabled field, got %v", enabledResult)
	}

	intervalResult := getTaskFieldValue(config, "scan_interval_seconds")
	if intervalResult != 900 {
		t.Errorf("Expected 900 for scan_interval_seconds field, got %v", intervalResult)
	}
}

func TestGetTaskFieldValue_FieldsWithJSONOmitempty(t *testing.T) {
	// Test struct with omitempty tags
	type TestConfigWithOmitempty struct {
		TestBaseConfigForTemplate
		OptionalField string `json:"optional_field,omitempty"`
	}

	config := &TestConfigWithOmitempty{
		TestBaseConfigForTemplate: TestBaseConfigForTemplate{
			Enabled:             true,
			ScanIntervalSeconds: 1200,
			MaxConcurrent:       4,
		},
		OptionalField: "optional_value",
	}

	// Test that fields with omitempty are still found
	result := getTaskFieldValue(config, "optional_field")
	if result != "optional_value" {
		t.Errorf("Expected 'optional_value' for optional_field, got %v", result)
	}

	// Test embedded fields still work
	enabledResult := getTaskFieldValue(config, "enabled")
	if enabledResult != true {
		t.Errorf("Expected true for enabled field, got %v", enabledResult)
	}
}

func TestGetTaskFieldValue_DeepEmbedding(t *testing.T) {
	// Test with multiple levels of embedding
	type DeepBaseConfig struct {
		DeepField string `json:"deep_field"`
	}

	type MiddleConfig struct {
		DeepBaseConfig
		MiddleField int `json:"middle_field"`
	}

	type TopConfig struct {
		MiddleConfig
		TopField bool `json:"top_field"`
	}

	config := &TopConfig{
		MiddleConfig: MiddleConfig{
			DeepBaseConfig: DeepBaseConfig{
				DeepField: "deep_value",
			},
			MiddleField: 123,
		},
		TopField: true,
	}

	// Test that deeply embedded fields are found
	deepResult := getTaskFieldValue(config, "deep_field")
	if deepResult != "deep_value" {
		t.Errorf("Expected 'deep_value' for deep_field, got %v", deepResult)
	}

	middleResult := getTaskFieldValue(config, "middle_field")
	if middleResult != 123 {
		t.Errorf("Expected 123 for middle_field, got %v", middleResult)
	}

	topResult := getTaskFieldValue(config, "top_field")
	if topResult != true {
		t.Errorf("Expected true for top_field, got %v", topResult)
	}
}

// Benchmark to ensure performance is reasonable
func BenchmarkGetTaskFieldValue(b *testing.B) {
	config := &TestTaskConfigForTemplate{
		TestBaseConfigForTemplate: TestBaseConfigForTemplate{
			Enabled:             true,
			ScanIntervalSeconds: 1800,
			MaxConcurrent:       3,
		},
		TaskSpecificField:    0.25,
		AnotherSpecificField: "benchmark_test",
	}

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		// Test both embedded and regular fields
		_ = getTaskFieldValue(config, "enabled")
		_ = getTaskFieldValue(config, "task_specific_field")
	}
}

@@ -1,174 +0,0 @@
|
||||
// Code generated by templ - DO NOT EDIT.
|
||||
|
||||
// templ: version: v0.3.977
|
||||
package app
|
||||
|
||||
//lint:file-ignore SA4006 This context is only used if a nested component is present.
|
||||
|
||||
import "github.com/a-h/templ"
|
||||
import templruntime "github.com/a-h/templ/runtime"
|
||||
|
||||
import (
|
||||
"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
|
||||
)
|
||||
|
||||
func TaskConfig(data *maintenance.TaskConfigData) templ.Component {
|
||||
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
|
||||
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
|
||||
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
|
||||
return templ_7745c5c3_CtxErr
|
||||
}
|
||||
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
|
||||
if !templ_7745c5c3_IsBuffer {
|
||||
defer func() {
|
||||
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
|
||||
if templ_7745c5c3_Err == nil {
|
||||
templ_7745c5c3_Err = templ_7745c5c3_BufErr
|
||||
}
|
||||
}()
|
||||
}
|
||||
ctx = templ.InitializeContext(ctx)
|
||||
templ_7745c5c3_Var1 := templ.GetChildren(ctx)
|
||||
if templ_7745c5c3_Var1 == nil {
|
||||
templ_7745c5c3_Var1 = templ.NopComponent
|
||||
}
|
||||
ctx = templ.ClearChildren(ctx)
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 1, "<div class=\"container-fluid\"><div class=\"row mb-4\"><div class=\"col-12\"><div class=\"d-flex justify-content-between align-items-center\"><h2 class=\"mb-0\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var2 = []any{data.TaskIcon + " me-2"}
|
||||
templ_7745c5c3_Err = templ.RenderCSSItems(ctx, templ_7745c5c3_Buffer, templ_7745c5c3_Var2...)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 2, "<i class=\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var3 string
|
||||
templ_7745c5c3_Var3, templ_7745c5c3_Err = templ.JoinStringErrs(templ.CSSClasses(templ_7745c5c3_Var2).String())
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config.templ`, Line: 1, Col: 0}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var3))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 3, "\"></i> ")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var4 string
|
||||
templ_7745c5c3_Var4, templ_7745c5c3_Err = templ.JoinStringErrs(data.TaskName)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config.templ`, Line: 14, Col: 38}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var4))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 4, " Configuration</h2><div class=\"btn-group\"><a href=\"/maintenance/config\" class=\"btn btn-outline-secondary\"><i class=\"fas fa-arrow-left me-1\"></i> Back to Configuration</a> <a href=\"/maintenance\" class=\"btn btn-outline-primary\"><i class=\"fas fa-list me-1\"></i> View Queue</a></div></div></div></div><div class=\"row\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header\"><h5 class=\"mb-0\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var5 = []any{data.TaskIcon + " me-2"}
|
||||
templ_7745c5c3_Err = templ.RenderCSSItems(ctx, templ_7745c5c3_Buffer, templ_7745c5c3_Var5...)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 5, "<i class=\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var6 string
|
||||
templ_7745c5c3_Var6, templ_7745c5c3_Err = templ.JoinStringErrs(templ.CSSClasses(templ_7745c5c3_Var5).String())
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config.templ`, Line: 1, Col: 0}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var6))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 6, "\"></i> ")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var7 string
|
||||
templ_7745c5c3_Var7, templ_7745c5c3_Err = templ.JoinStringErrs(data.TaskName)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config.templ`, Line: 36, Col: 42}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var7))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 7, " Settings</h5></div><div class=\"card-body\"><p class=\"text-muted mb-4\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var8 string
|
||||
templ_7745c5c3_Var8, templ_7745c5c3_Err = templ.JoinStringErrs(data.Description)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config.templ`, Line: 40, Col: 68}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var8))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 8, "</p><!-- Task-specific configuration form --><form method=\"POST\"><div class=\"task-config-form\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templ.Raw(string(data.ConfigFormHTML)).Render(ctx, templ_7745c5c3_Buffer)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 9, "</div><hr class=\"my-4\"><div class=\"d-flex gap-2\"><button type=\"submit\" class=\"btn btn-primary\"><i class=\"fas fa-save me-1\"></i> Save Configuration</button> <button type=\"button\" class=\"btn btn-secondary\" onclick=\"resetForm()\"><i class=\"fas fa-undo me-1\"></i> Reset to Defaults</button> <a href=\"/maintenance/config\" class=\"btn btn-outline-secondary\"><i class=\"fas fa-times me-1\"></i> Cancel</a></div></form></div></div></div></div><!-- Task Information --><div class=\"row mt-4\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header\"><h5 class=\"mb-0\"><i class=\"fas fa-info-circle me-2\"></i> Task Information</h5></div><div class=\"card-body\"><div class=\"row\"><div class=\"col-md-6\"><h6 class=\"text-muted\">Task Type</h6><p class=\"mb-3\"><span class=\"badge bg-secondary\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var9 string
templ_7745c5c3_Var9, templ_7745c5c3_Err = templ.JoinStringErrs(string(data.TaskType))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config.templ`, Line: 85, Col: 91}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var9))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 10, "</span></p></div><div class=\"col-md-6\"><h6 class=\"text-muted\">Display Name</h6><p class=\"mb-3\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var10 string
templ_7745c5c3_Var10, templ_7745c5c3_Err = templ.JoinStringErrs(data.TaskName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config.templ`, Line: 90, Col: 62}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var10))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 11, "</p></div></div><div class=\"row\"><div class=\"col-12\"><h6 class=\"text-muted\">Description</h6><p class=\"mb-0\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var11 string
templ_7745c5c3_Var11, templ_7745c5c3_Err = templ.JoinStringErrs(data.Description)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config.templ`, Line: 96, Col: 65}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var11))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 12, "</p></div></div></div></div></div></div></div><script>\n function resetForm() {\n showConfirm('Are you sure you want to reset all settings to their default values?', function() {\n // Find all form inputs and reset them\n const form = document.querySelector('form');\n if (form) {\n form.reset();\n }\n });\n }\n\n // Auto-save form data to localStorage for recovery\n document.addEventListener('DOMContentLoaded', function() {\n const form = document.querySelector('form');\n if (form) {\n const taskType = '{string(data.TaskType)}';\n const storageKey = 'taskConfig_' + taskType;\n\n // Load saved data\n const savedData = localStorage.getItem(storageKey);\n if (savedData) {\n try {\n const data = JSON.parse(savedData);\n Object.keys(data).forEach(key => {\n const input = form.querySelector(`[name=\"${key}\"]`);\n if (input) {\n if (input.type === 'checkbox') {\n input.checked = data[key];\n } else {\n input.value = data[key];\n }\n }\n });\n } catch (e) {\n console.warn('Failed to load saved configuration:', e);\n }\n }\n\n // Save data on input change\n form.addEventListener('input', function() {\n const formData = new FormData(form);\n const data = {};\n for (let [key, value] of formData.entries()) {\n data[key] = value;\n }\n localStorage.setItem(storageKey, JSON.stringify(data));\n });\n\n // Clear saved data on successful submit\n form.addEventListener('submit', function() {\n localStorage.removeItem(storageKey);\n });\n }\n });\n </script>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
return nil
})
}

var _ = templruntime.GeneratedTemplate
@@ -1,160 +0,0 @@
package app

import (
"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
"github.com/seaweedfs/seaweedfs/weed/admin/view/components"
)

// TaskConfigTemplData represents data for templ-based task configuration
type TaskConfigTemplData struct {
TaskType maintenance.MaintenanceTaskType
TaskName string
TaskIcon string
Description string
ConfigSections []components.ConfigSectionData
}

templ TaskConfigTempl(data *TaskConfigTemplData) {
<div class="container-fluid">
<div class="row mb-4">
<div class="col-12">
<div class="d-flex justify-content-between align-items-center">
<h2 class="mb-0">
<i class={data.TaskIcon + " me-2"}></i>
{data.TaskName} Configuration
</h2>
<div class="btn-group">
<a href="/maintenance/config" class="btn btn-outline-secondary">
<i class="fas fa-arrow-left me-1"></i>
Back to Configuration
</a>
<a href="/maintenance/queue" class="btn btn-outline-info">
<i class="fas fa-list me-1"></i>
View Queue
</a>
</div>
</div>
</div>
</div>

<div class="row mb-4">
<div class="col-12">
<div class="alert alert-info" role="alert">
<i class="fas fa-info-circle me-2"></i>
{data.Description}
</div>
</div>
</div>

<form method="POST" class="needs-validation" novalidate>
<!-- Render all configuration sections -->
for _, section := range data.ConfigSections {
@components.ConfigSection(section)
}

<!-- Form actions -->
<div class="row">
<div class="col-12">
<div class="card">
<div class="card-body">
<div class="d-flex justify-content-between">
<div>
<button type="submit" class="btn btn-primary">
<i class="fas fa-save me-1"></i>
Save Configuration
</button>
<button type="button" class="btn btn-outline-secondary ms-2" onclick="resetForm()">
<i class="fas fa-undo me-1"></i>
Reset
</button>
</div>
<div>
<button type="button" class="btn btn-outline-info" onclick="testConfiguration()">
<i class="fas fa-play me-1"></i>
Test Configuration
</button>
</div>
</div>
</div>
</div>
</div>
</div>
</form>
</div>

<script>
// Form validation
(function() {
'use strict';
window.addEventListener('load', function() {
var forms = document.getElementsByClassName('needs-validation');
var validation = Array.prototype.filter.call(forms, function(form) {
form.addEventListener('submit', function(event) {
if (form.checkValidity() === false) {
event.preventDefault();
event.stopPropagation();
}
form.classList.add('was-validated');
}, false);
});
}, false);
})();

// Auto-save functionality
let autoSaveTimeout;
function autoSave() {
clearTimeout(autoSaveTimeout);
autoSaveTimeout = setTimeout(function() {
const formData = new FormData(document.querySelector('form'));
localStorage.setItem('task_config_' + '{data.TaskType}', JSON.stringify(Object.fromEntries(formData)));
}, 1000);
}

// Add auto-save listeners to all form inputs
document.addEventListener('DOMContentLoaded', function() {
const form = document.querySelector('form');
if (form) {
form.addEventListener('input', autoSave);
form.addEventListener('change', autoSave);
}
});

// Reset form function
function resetForm() {
showConfirm('Are you sure you want to reset all changes?', function() {
location.reload();
});
}

// Test configuration function
function testConfiguration() {
const formData = new FormData(document.querySelector('form'));

// Show loading state
const testBtn = document.querySelector('button[onclick="testConfiguration()"]');
const originalContent = testBtn.innerHTML;
testBtn.innerHTML = '<i class="fas fa-spinner fa-spin me-1"></i>Testing...';
testBtn.disabled = true;

fetch('/maintenance/config/{data.TaskType}/test', {
method: 'POST',
body: formData
})
.then(response => response.json())
.then(data => {
if (data.success) {
alert('Configuration test successful!');
} else {
alert('Configuration test failed: ' + data.error);
}
})
.catch(error => {
alert('Test failed: ' + error);
})
.finally(() => {
testBtn.innerHTML = originalContent;
testBtn.disabled = false;
});
}
</script>
}
@@ -1,112 +0,0 @@
// Code generated by templ - DO NOT EDIT.

// templ: version: v0.3.977
package app

//lint:file-ignore SA4006 This context is only used if a nested component is present.

import "github.com/a-h/templ"
import templruntime "github.com/a-h/templ/runtime"

import (
"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
"github.com/seaweedfs/seaweedfs/weed/admin/view/components"
)

// TaskConfigTemplData represents data for templ-based task configuration
type TaskConfigTemplData struct {
TaskType maintenance.MaintenanceTaskType
TaskName string
TaskIcon string
Description string
ConfigSections []components.ConfigSectionData
}

func TaskConfigTempl(data *TaskConfigTemplData) templ.Component {
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
return templ_7745c5c3_CtxErr
}
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
if !templ_7745c5c3_IsBuffer {
defer func() {
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
if templ_7745c5c3_Err == nil {
templ_7745c5c3_Err = templ_7745c5c3_BufErr
}
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var1 := templ.GetChildren(ctx)
if templ_7745c5c3_Var1 == nil {
templ_7745c5c3_Var1 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 1, "<div class=\"container-fluid\"><div class=\"row mb-4\"><div class=\"col-12\"><div class=\"d-flex justify-content-between align-items-center\"><h2 class=\"mb-0\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var2 = []any{data.TaskIcon + " me-2"}
templ_7745c5c3_Err = templ.RenderCSSItems(ctx, templ_7745c5c3_Buffer, templ_7745c5c3_Var2...)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 2, "<i class=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var3 string
templ_7745c5c3_Var3, templ_7745c5c3_Err = templ.JoinStringErrs(templ.CSSClasses(templ_7745c5c3_Var2).String())
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_templ.templ`, Line: 1, Col: 0}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var3))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 3, "\"></i> ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var4 string
templ_7745c5c3_Var4, templ_7745c5c3_Err = templ.JoinStringErrs(data.TaskName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_templ.templ`, Line: 24, Col: 38}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var4))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 4, " Configuration</h2><div class=\"btn-group\"><a href=\"/maintenance/config\" class=\"btn btn-outline-secondary\"><i class=\"fas fa-arrow-left me-1\"></i> Back to Configuration</a> <a href=\"/maintenance/queue\" class=\"btn btn-outline-info\"><i class=\"fas fa-list me-1\"></i> View Queue</a></div></div></div></div><div class=\"row mb-4\"><div class=\"col-12\"><div class=\"alert alert-info\" role=\"alert\"><i class=\"fas fa-info-circle me-2\"></i> ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var5 string
templ_7745c5c3_Var5, templ_7745c5c3_Err = templ.JoinStringErrs(data.Description)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/app/task_config_templ.templ`, Line: 44, Col: 37}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var5))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 5, "</div></div></div><form method=\"POST\" class=\"needs-validation\" novalidate><!-- Render all configuration sections -->")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, section := range data.ConfigSections {
templ_7745c5c3_Err = components.ConfigSection(section).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 6, "<!-- Form actions --><div class=\"row\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-body\"><div class=\"d-flex justify-content-between\"><div><button type=\"submit\" class=\"btn btn-primary\"><i class=\"fas fa-save me-1\"></i> Save Configuration</button> <button type=\"button\" class=\"btn btn-outline-secondary ms-2\" onclick=\"resetForm()\"><i class=\"fas fa-undo me-1\"></i> Reset</button></div><div><button type=\"button\" class=\"btn btn-outline-info\" onclick=\"testConfiguration()\"><i class=\"fas fa-play me-1\"></i> Test Configuration</button></div></div></div></div></div></div></form></div><script>\n // Form validation\n (function() {\n 'use strict';\n window.addEventListener('load', function() {\n var forms = document.getElementsByClassName('needs-validation');\n var validation = Array.prototype.filter.call(forms, function(form) {\n form.addEventListener('submit', function(event) {\n if (form.checkValidity() === false) {\n event.preventDefault();\n event.stopPropagation();\n }\n form.classList.add('was-validated');\n }, false);\n });\n }, false);\n })();\n\n // Auto-save functionality\n let autoSaveTimeout;\n function autoSave() {\n clearTimeout(autoSaveTimeout);\n autoSaveTimeout = setTimeout(function() {\n const formData = new FormData(document.querySelector('form'));\n localStorage.setItem('task_config_' + '{data.TaskType}', JSON.stringify(Object.fromEntries(formData)));\n }, 1000);\n }\n\n // Add auto-save listeners to all form inputs\n document.addEventListener('DOMContentLoaded', function() {\n const form = document.querySelector('form');\n if (form) {\n form.addEventListener('input', autoSave);\n form.addEventListener('change', autoSave);\n }\n });\n\n // Reset form function\n function resetForm() {\n showConfirm('Are you sure you want to reset all changes?', function() {\n location.reload();\n });\n }\n\n // Test configuration function\n function testConfiguration() {\n const formData = new FormData(document.querySelector('form'));\n \n // Show loading state\n const testBtn = document.querySelector('button[onclick=\"testConfiguration()\"]');\n const originalContent = testBtn.innerHTML;\n testBtn.innerHTML = '<i class=\"fas fa-spinner fa-spin me-1\"></i>Testing...';\n testBtn.disabled = true;\n \n fetch('/maintenance/config/{data.TaskType}/test', {\n method: 'POST',\n body: formData\n })\n .then(response => response.json())\n .then(data => {\n if (data.success) {\n alert('Configuration test successful!');\n } else {\n alert('Configuration test failed: ' + data.error);\n }\n })\n .catch(error => {\n alert('Test failed: ' + error);\n })\n .finally(() => {\n testBtn.innerHTML = originalContent;\n testBtn.disabled = false;\n });\n }\n </script>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
return nil
})
}

var _ = templruntime.GeneratedTemplate
@@ -16,13 +16,14 @@ templ Layout(c *gin.Context, content templ.Component) {
}
csrfToken := c.GetString("csrf_token")

// Detect if we're on a configuration page to keep submenu expanded
currentPath := c.Request.URL.Path
isConfigPage := strings.HasPrefix(currentPath, "/maintenance/config") || currentPath == "/config"

// Detect if we're on a message queue page to keep submenu expanded
isMQPage := strings.HasPrefix(currentPath, "/mq/")

// Detect if we're on plugin page.
isPluginPage := strings.HasPrefix(currentPath, "/plugin")

// Detect if we're on a storage page to keep submenu expanded
isStoragePage := strings.HasPrefix(currentPath, "/storage/volumes") || strings.HasPrefix(currentPath, "/storage/ec-shards") || strings.HasPrefix(currentPath, "/storage/collections")

@@ -258,75 +259,61 @@ templ Layout(c *gin.Context, content templ.Component) {
</ul>

<h6 class="sidebar-heading px-3 mt-4 mb-1 text-muted">
<span>MAINTENANCE</span>
<span>WORKERS</span>
</h6>
<ul class="nav flex-column">
<li class="nav-item">
if isConfigPage {
<a class="nav-link" href="#" data-bs-toggle="collapse" data-bs-target="#configurationSubmenu" aria-expanded="true" aria-controls="configurationSubmenu">
<i class="fas fa-cogs me-2"></i>Configuration
<i class="fas fa-chevron-down ms-auto"></i>
if isPluginPage {
<a class="nav-link active" href="/plugin">
<i class="fas fa-plug me-2"></i>Workers
</a>
} else {
<a class="nav-link collapsed" href="#" data-bs-toggle="collapse" data-bs-target="#configurationSubmenu" aria-expanded="false" aria-controls="configurationSubmenu">
<i class="fas fa-cogs me-2"></i>Configuration
<i class="fas fa-chevron-right ms-auto"></i>
<a class="nav-link" href="/plugin">
<i class="fas fa-plug me-2"></i>Workers
</a>
}
if isConfigPage {
<div class="collapse show" id="configurationSubmenu">
<ul class="nav flex-column ms-3">
for _, menuItem := range GetConfigurationMenuItems() {
{{
isActiveItem := currentPath == menuItem.URL
}}
<li class="nav-item">
if isActiveItem {
<a class="nav-link py-2 active" href={templ.SafeURL(menuItem.URL)}>
<i class={menuItem.Icon + " me-2"}></i>{menuItem.Name}
</a>
} else {
<a class="nav-link py-2" href={templ.SafeURL(menuItem.URL)}>
<i class={menuItem.Icon + " me-2"}></i>{menuItem.Name}
</a>
}
</li>
}
</ul>
</div>
} else {
<div class="collapse" id="configurationSubmenu">
<ul class="nav flex-column ms-3">
for _, menuItem := range GetConfigurationMenuItems() {
<li class="nav-item">
<a class="nav-link py-2" href={templ.SafeURL(menuItem.URL)}>
<i class={menuItem.Icon + " me-2"}></i>{menuItem.Name}
</a>
</li>
}
</ul>
</div>
}
</li>
<li class="nav-item">
if currentPath == "/maintenance" {
<a class="nav-link active" href="/maintenance">
if currentPath == "/plugin/detection" {
<a class="nav-link active" href="/plugin/detection">
<i class="fas fa-search me-2"></i>Job Detection
</a>
} else {
<a class="nav-link" href="/plugin/detection">
<i class="fas fa-search me-2"></i>Job Detection
</a>
}
</li>
<li class="nav-item">
if currentPath == "/plugin/queue" {
<a class="nav-link active" href="/plugin/queue">
<i class="fas fa-list me-2"></i>Job Queue
</a>
} else {
<a class="nav-link" href="/maintenance">
<a class="nav-link" href="/plugin/queue">
<i class="fas fa-list me-2"></i>Job Queue
</a>
}
</li>
<li class="nav-item">
if currentPath == "/maintenance/workers" {
<a class="nav-link active" href="/maintenance/workers">
<i class="fas fa-user-cog me-2"></i>Workers
if currentPath == "/plugin/execution" {
<a class="nav-link active" href="/plugin/execution">
<i class="fas fa-tasks me-2"></i>Job Execution
</a>
} else {
<a class="nav-link" href="/maintenance/workers">
<i class="fas fa-user-cog me-2"></i>Workers
<a class="nav-link" href="/plugin/execution">
<i class="fas fa-tasks me-2"></i>Job Execution
</a>
}
</li>
<li class="nav-item">
if currentPath == "/plugin/configuration" {
<a class="nav-link active" href="/plugin/configuration">
<i class="fas fa-sliders-h me-2"></i>Configuration
</a>
} else {
<a class="nav-link" href="/plugin/configuration">
<i class="fas fa-sliders-h me-2"></i>Configuration
</a>
}
</li>

@@ -43,13 +43,14 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
}
csrfToken := c.GetString("csrf_token")

// Detect if we're on a configuration page to keep submenu expanded
currentPath := c.Request.URL.Path
isConfigPage := strings.HasPrefix(currentPath, "/maintenance/config") || currentPath == "/config"

// Detect if we're on a message queue page to keep submenu expanded
isMQPage := strings.HasPrefix(currentPath, "/mq/")

// Detect if we're on plugin page.
isPluginPage := strings.HasPrefix(currentPath, "/plugin")

// Detect if we're on a storage page to keep submenu expanded
isStoragePage := strings.HasPrefix(currentPath, "/storage/volumes") || strings.HasPrefix(currentPath, "/storage/ec-shards") || strings.HasPrefix(currentPath, "/storage/collections")

@@ -62,7 +63,7 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
var templ_7745c5c3_Var2 string
templ_7745c5c3_Var2, templ_7745c5c3_Err = templ.JoinStringErrs(csrfToken)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 38, Col: 47}
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 39, Col: 47}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var2))
if templ_7745c5c3_Err != nil {
@@ -75,7 +76,7 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
var templ_7745c5c3_Var3 string
templ_7745c5c3_Var3, templ_7745c5c3_Err = templ.JoinStringErrs(username)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 69, Col: 73}
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 70, Col: 73}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var3))
if templ_7745c5c3_Err != nil {
@@ -110,7 +111,7 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
var templ_7745c5c3_Var6 string
templ_7745c5c3_Var6, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%t", isClusterPage))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 96, Col: 207}
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 97, Col: 207}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var6))
if templ_7745c5c3_Err != nil {
@@ -167,7 +168,7 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
var templ_7745c5c3_Var11 string
templ_7745c5c3_Var11, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%t", isStoragePage))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 121, Col: 207}
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 122, Col: 207}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var11))
if templ_7745c5c3_Err != nil {
@@ -251,243 +252,82 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 24, "</li><!-- Commented out for later --><!--\n <li class=\"nav-item\">\n <a class=\"nav-link\" href=\"/metrics\">\n <i class=\"fas fa-chart-line me-2\"></i>Metrics\n </a>\n </li>\n <li class=\"nav-item\">\n <a class=\"nav-link\" href=\"/logs\">\n <i class=\"fas fa-file-alt me-2\"></i>Logs\n </a>\n </li>\n --></ul><h6 class=\"sidebar-heading px-3 mt-4 mb-1 text-muted\"><span>MAINTENANCE</span></h6><ul class=\"nav flex-column\"><li class=\"nav-item\">")
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 24, "</li><!-- Commented out for later --><!--\n <li class=\"nav-item\">\n <a class=\"nav-link\" href=\"/metrics\">\n <i class=\"fas fa-chart-line me-2\"></i>Metrics\n </a>\n </li>\n <li class=\"nav-item\">\n <a class=\"nav-link\" href=\"/logs\">\n <i class=\"fas fa-file-alt me-2\"></i>Logs\n </a>\n </li>\n --></ul><h6 class=\"sidebar-heading px-3 mt-4 mb-1 text-muted\"><span>WORKERS</span></h6><ul class=\"nav flex-column\"><li class=\"nav-item\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if isConfigPage {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 25, "<a class=\"nav-link\" href=\"#\" data-bs-toggle=\"collapse\" data-bs-target=\"#configurationSubmenu\" aria-expanded=\"true\" aria-controls=\"configurationSubmenu\"><i class=\"fas fa-cogs me-2\"></i>Configuration <i class=\"fas fa-chevron-down ms-auto\"></i></a> ")
if isPluginPage {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 25, "<a class=\"nav-link active\" href=\"/plugin\"><i class=\"fas fa-plug me-2\"></i>Workers</a>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 26, "<a class=\"nav-link collapsed\" href=\"#\" data-bs-toggle=\"collapse\" data-bs-target=\"#configurationSubmenu\" aria-expanded=\"false\" aria-controls=\"configurationSubmenu\"><i class=\"fas fa-cogs me-2\"></i>Configuration <i class=\"fas fa-chevron-right ms-auto\"></i></a> ")
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 26, "<a class=\"nav-link\" href=\"/plugin\"><i class=\"fas fa-plug me-2\"></i>Workers</a>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
if isConfigPage {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 27, "<div class=\"collapse show\" id=\"configurationSubmenu\"><ul class=\"nav flex-column ms-3\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, menuItem := range GetConfigurationMenuItems() {
isActiveItem := currentPath == menuItem.URL
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 28, "<li class=\"nav-item\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if isActiveItem {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 29, "<a class=\"nav-link py-2 active\" href=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var14 templ.SafeURL
templ_7745c5c3_Var14, templ_7745c5c3_Err = templ.JoinURLErrs(templ.SafeURL(menuItem.URL))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 285, Col: 117}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var14))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 30, "\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var15 = []any{menuItem.Icon + " me-2"}
templ_7745c5c3_Err = templ.RenderCSSItems(ctx, templ_7745c5c3_Buffer, templ_7745c5c3_Var15...)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 31, "<i class=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var16 string
templ_7745c5c3_Var16, templ_7745c5c3_Err = templ.JoinStringErrs(templ.CSSClasses(templ_7745c5c3_Var15).String())
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 1, Col: 0}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var16))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 32, "\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var17 string
templ_7745c5c3_Var17, templ_7745c5c3_Err = templ.JoinStringErrs(menuItem.Name)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 286, Col: 109}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var17))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 33, "</a>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 34, "<a class=\"nav-link py-2\" href=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var18 templ.SafeURL
templ_7745c5c3_Var18, templ_7745c5c3_Err = templ.JoinURLErrs(templ.SafeURL(menuItem.URL))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 289, Col: 110}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var18))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 35, "\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var19 = []any{menuItem.Icon + " me-2"}
templ_7745c5c3_Err = templ.RenderCSSItems(ctx, templ_7745c5c3_Buffer, templ_7745c5c3_Var19...)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 36, "<i class=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var20 string
templ_7745c5c3_Var20, templ_7745c5c3_Err = templ.JoinStringErrs(templ.CSSClasses(templ_7745c5c3_Var19).String())
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 1, Col: 0}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var20))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 37, "\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var21 string
templ_7745c5c3_Var21, templ_7745c5c3_Err = templ.JoinStringErrs(menuItem.Name)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 290, Col: 109}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var21))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 38, "</a>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 39, "</li>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 40, "</ul></div>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
} else {
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 41, "<div class=\"collapse\" id=\"configurationSubmenu\"><ul class=\"nav flex-column ms-3\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
for _, menuItem := range GetConfigurationMenuItems() {
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 42, "<li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var22 templ.SafeURL
|
||||
templ_7745c5c3_Var22, templ_7745c5c3_Err = templ.JoinURLErrs(templ.SafeURL(menuItem.URL))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 302, Col: 106}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var22))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 43, "\">")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var23 = []any{menuItem.Icon + " me-2"}
|
||||
templ_7745c5c3_Err = templ.RenderCSSItems(ctx, templ_7745c5c3_Buffer, templ_7745c5c3_Var23...)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 44, "<i class=\"")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var24 string
|
||||
templ_7745c5c3_Var24, templ_7745c5c3_Err = templ.JoinStringErrs(templ.CSSClasses(templ_7745c5c3_Var23).String())
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 1, Col: 0}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var24))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 45, "\"></i>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
var templ_7745c5c3_Var25 string
|
||||
templ_7745c5c3_Var25, templ_7745c5c3_Err = templ.JoinStringErrs(menuItem.Name)
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 303, Col: 105}
|
||||
}
|
||||
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var25))
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 46, "</a></li>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
}
|
||||
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 47, "</ul></div>")
|
||||
if templ_7745c5c3_Err != nil {
|
||||
return templ_7745c5c3_Err
|
||||
}
|
||||
}
|
||||
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 48, "</li><li class=\"nav-item\">")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 27, "</li><li class=\"nav-item\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
-if currentPath == "/maintenance" {
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 49, "<a class=\"nav-link active\" href=\"/maintenance\"><i class=\"fas fa-list me-2\"></i>Job Queue</a>")
+if currentPath == "/plugin/detection" {
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 28, "<a class=\"nav-link active\" href=\"/plugin/detection\"><i class=\"fas fa-search me-2\"></i>Job Detection</a>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 50, "<a class=\"nav-link\" href=\"/maintenance\"><i class=\"fas fa-list me-2\"></i>Job Queue</a>")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 29, "<a class=\"nav-link\" href=\"/plugin/detection\"><i class=\"fas fa-search me-2\"></i>Job Detection</a>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 51, "</li><li class=\"nav-item\">")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 30, "</li><li class=\"nav-item\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
-if currentPath == "/maintenance/workers" {
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 52, "<a class=\"nav-link active\" href=\"/maintenance/workers\"><i class=\"fas fa-user-cog me-2\"></i>Workers</a>")
+if currentPath == "/plugin/queue" {
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 31, "<a class=\"nav-link active\" href=\"/plugin/queue\"><i class=\"fas fa-list me-2\"></i>Job Queue</a>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 53, "<a class=\"nav-link\" href=\"/maintenance/workers\"><i class=\"fas fa-user-cog me-2\"></i>Workers</a>")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 32, "<a class=\"nav-link\" href=\"/plugin/queue\"><i class=\"fas fa-list me-2\"></i>Job Queue</a>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 54, "</li></ul></div></div><!-- Main content --><main class=\"col-md-9 ms-sm-auto col-lg-10 px-md-4\"><div class=\"pt-3\">")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 33, "</li><li class=\"nav-item\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
+if currentPath == "/plugin/execution" {
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 34, "<a class=\"nav-link active\" href=\"/plugin/execution\"><i class=\"fas fa-tasks me-2\"></i>Job Execution</a>")
+if templ_7745c5c3_Err != nil {
+return templ_7745c5c3_Err
+}
+} else {
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 35, "<a class=\"nav-link\" href=\"/plugin/execution\"><i class=\"fas fa-tasks me-2\"></i>Job Execution</a>")
+if templ_7745c5c3_Err != nil {
+return templ_7745c5c3_Err
+}
+}
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 36, "</li><li class=\"nav-item\">")
+if templ_7745c5c3_Err != nil {
+return templ_7745c5c3_Err
+}
+if currentPath == "/plugin/configuration" {
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 37, "<a class=\"nav-link active\" href=\"/plugin/configuration\"><i class=\"fas fa-sliders-h me-2\"></i>Configuration</a>")
+if templ_7745c5c3_Err != nil {
+return templ_7745c5c3_Err
+}
+} else {
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 38, "<a class=\"nav-link\" href=\"/plugin/configuration\"><i class=\"fas fa-sliders-h me-2\"></i>Configuration</a>")
+if templ_7745c5c3_Err != nil {
+return templ_7745c5c3_Err
+}
+}
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 39, "</li></ul></div></div><!-- Main content --><main class=\"col-md-9 ms-sm-auto col-lg-10 px-md-4\"><div class=\"pt-3\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
@@ -495,43 +335,43 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 55, "</div></main></div></div><!-- Footer --><footer class=\"footer mt-auto py-3 bg-light\"><div class=\"container-fluid text-center\"><small class=\"text-muted\">© ")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 40, "</div></main></div></div><!-- Footer --><footer class=\"footer mt-auto py-3 bg-light\"><div class=\"container-fluid text-center\"><small class=\"text-muted\">© ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
-var templ_7745c5c3_Var26 string
-templ_7745c5c3_Var26, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", time.Now().Year()))
+var templ_7745c5c3_Var14 string
+templ_7745c5c3_Var14, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", time.Now().Year()))
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 350, Col: 60}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 337, Col: 60}
}
-_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var26))
+_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var14))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 56, " SeaweedFS Admin v")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 41, " SeaweedFS Admin v")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
-var templ_7745c5c3_Var27 string
-templ_7745c5c3_Var27, templ_7745c5c3_Err = templ.JoinStringErrs(version.VERSION_NUMBER)
+var templ_7745c5c3_Var15 string
+templ_7745c5c3_Var15, templ_7745c5c3_Err = templ.JoinStringErrs(version.VERSION_NUMBER)
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 350, Col: 102}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 337, Col: 102}
}
-_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var27))
+_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var15))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 57, " ")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 42, " ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if !strings.Contains(version.VERSION, "enterprise") {
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 58, "<span class=\"mx-2\">•</span> <a href=\"https://seaweedfs.com\" target=\"_blank\" class=\"text-decoration-none\"><i class=\"fas fa-star me-1\"></i>Enterprise Version Available</a>")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 43, "<span class=\"mx-2\">•</span> <a href=\"https://seaweedfs.com\" target=\"_blank\" class=\"text-decoration-none\"><i class=\"fas fa-star me-1\"></i>Enterprise Version Available</a>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 59, "</small></div></footer><!-- Bootstrap JS --><script src=\"/static/js/bootstrap.bundle.min.js\"></script><!-- Modal Alerts JS (replaces native alert/confirm) --><script src=\"/static/js/modal-alerts.js\"></script><!-- Custom JS --><script src=\"/static/js/admin.js\"></script><script src=\"/static/js/iam-utils.js\"></script><script src=\"/static/js/s3tables.js\"></script></body></html>")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 44, "</small></div></footer><!-- Bootstrap JS --><script src=\"/static/js/bootstrap.bundle.min.js\"></script><!-- Modal Alerts JS (replaces native alert/confirm) --><script src=\"/static/js/modal-alerts.js\"></script><!-- Custom JS --><script src=\"/static/js/admin.js\"></script><script src=\"/static/js/iam-utils.js\"></script><script src=\"/static/js/s3tables.js\"></script></body></html>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
@@ -555,61 +395,61 @@ func LoginForm(c *gin.Context, title string, errorMessage string) templ.Componen
}()
}
ctx = templ.InitializeContext(ctx)
-templ_7745c5c3_Var28 := templ.GetChildren(ctx)
-if templ_7745c5c3_Var28 == nil {
-templ_7745c5c3_Var28 = templ.NopComponent
+templ_7745c5c3_Var16 := templ.GetChildren(ctx)
+if templ_7745c5c3_Var16 == nil {
+templ_7745c5c3_Var16 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 60, "<!doctype html><html lang=\"en\"><head><meta charset=\"UTF-8\"><title>")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 45, "<!doctype html><html lang=\"en\"><head><meta charset=\"UTF-8\"><title>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
-var templ_7745c5c3_Var29 string
-templ_7745c5c3_Var29, templ_7745c5c3_Err = templ.JoinStringErrs(title)
+var templ_7745c5c3_Var17 string
+templ_7745c5c3_Var17, templ_7745c5c3_Err = templ.JoinStringErrs(title)
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 378, Col: 17}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 365, Col: 17}
}
-_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var29))
+_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var17))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 61, " - Login</title><link rel=\"icon\" href=\"/static/favicon.ico\" type=\"image/x-icon\"><meta name=\"viewport\" content=\"width=device-width, initial-scale=1\"><link href=\"/static/css/bootstrap.min.css\" rel=\"stylesheet\"><link href=\"/static/css/fontawesome.min.css\" rel=\"stylesheet\"></head><body class=\"bg-light\"><div class=\"container\"><div class=\"row justify-content-center min-vh-100 align-items-center\"><div class=\"col-md-6 col-lg-4\"><div class=\"card shadow\"><div class=\"card-body p-5\"><div class=\"text-center mb-4\"><i class=\"fas fa-server fa-3x text-primary mb-3\"></i><h4 class=\"card-title\">")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 46, " - Login</title><link rel=\"icon\" href=\"/static/favicon.ico\" type=\"image/x-icon\"><meta name=\"viewport\" content=\"width=device-width, initial-scale=1\"><link href=\"/static/css/bootstrap.min.css\" rel=\"stylesheet\"><link href=\"/static/css/fontawesome.min.css\" rel=\"stylesheet\"></head><body class=\"bg-light\"><div class=\"container\"><div class=\"row justify-content-center min-vh-100 align-items-center\"><div class=\"col-md-6 col-lg-4\"><div class=\"card shadow\"><div class=\"card-body p-5\"><div class=\"text-center mb-4\"><i class=\"fas fa-server fa-3x text-primary mb-3\"></i><h4 class=\"card-title\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
-var templ_7745c5c3_Var30 string
-templ_7745c5c3_Var30, templ_7745c5c3_Err = templ.JoinStringErrs(title)
+var templ_7745c5c3_Var18 string
+templ_7745c5c3_Var18, templ_7745c5c3_Err = templ.JoinStringErrs(title)
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 392, Col: 57}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 379, Col: 57}
}
-_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var30))
+_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var18))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 62, "</h4><p class=\"text-muted\">Please sign in to continue</p></div>")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 47, "</h4><p class=\"text-muted\">Please sign in to continue</p></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if errorMessage != "" {
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 63, "<div class=\"alert alert-danger\" role=\"alert\"><i class=\"fas fa-exclamation-triangle me-2\"></i> ")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 48, "<div class=\"alert alert-danger\" role=\"alert\"><i class=\"fas fa-exclamation-triangle me-2\"></i> ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
-var templ_7745c5c3_Var31 string
-templ_7745c5c3_Var31, templ_7745c5c3_Err = templ.JoinStringErrs(errorMessage)
+var templ_7745c5c3_Var19 string
+templ_7745c5c3_Var19, templ_7745c5c3_Err = templ.JoinStringErrs(errorMessage)
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 399, Col: 45}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `weed/admin/view/layout/layout.templ`, Line: 386, Col: 45}
}
-_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var31))
+_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var19))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 64, "</div>")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 49, "</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 65, "<form method=\"POST\" action=\"/login\"><div class=\"mb-3\"><label for=\"username\" class=\"form-label\">Username</label><div class=\"input-group\"><span class=\"input-group-text\"><i class=\"fas fa-user\"></i></span> <input type=\"text\" class=\"form-control\" id=\"username\" name=\"username\" required></div></div><div class=\"mb-4\"><label for=\"password\" class=\"form-label\">Password</label><div class=\"input-group\"><span class=\"input-group-text\"><i class=\"fas fa-lock\"></i></span> <input type=\"password\" class=\"form-control\" id=\"password\" name=\"password\" required></div></div><button type=\"submit\" class=\"btn btn-primary w-100\"><i class=\"fas fa-sign-in-alt me-2\"></i>Sign In</button></form></div></div></div></div></div><script src=\"/static/js/bootstrap.bundle.min.js\"></script></body></html>")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 50, "<form method=\"POST\" action=\"/login\"><div class=\"mb-3\"><label for=\"username\" class=\"form-label\">Username</label><div class=\"input-group\"><span class=\"input-group-text\"><i class=\"fas fa-user\"></i></span> <input type=\"text\" class=\"form-control\" id=\"username\" name=\"username\" required></div></div><div class=\"mb-4\"><label for=\"password\" class=\"form-label\">Password</label><div class=\"input-group\"><span class=\"input-group-text\"><i class=\"fas fa-lock\"></i></span> <input type=\"password\" class=\"form-control\" id=\"password\" name=\"password\" required></div></div><button type=\"submit\" class=\"btn btn-primary w-100\"><i class=\"fas fa-sign-in-alt me-2\"></i>Sign In</button></form></div></div></div></div></div><script src=\"/static/js/bootstrap.bundle.min.js\"></script></body></html>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
@@ -1,47 +0,0 @@
package layout

import (
	"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"

	// Import task packages to trigger their auto-registration
	_ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
	_ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
	_ "github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"
)

// MenuItemData represents a menu item
type MenuItemData struct {
	Name        string
	URL         string
	Icon        string
	Description string
}

// GetConfigurationMenuItems returns the dynamic configuration menu items
func GetConfigurationMenuItems() []*MenuItemData {
	var menuItems []*MenuItemData

	// Add system configuration item
	menuItems = append(menuItems, &MenuItemData{
		Name:        "System",
		URL:         "/maintenance/config",
		Icon:        "fas fa-cogs",
		Description: "System-level configuration",
	})

	// Get all registered task types and add them as submenu items
	registeredTypes := maintenance.GetRegisteredMaintenanceTaskTypes()

	for _, taskType := range registeredTypes {
		menuItem := &MenuItemData{
			Name:        maintenance.GetTaskDisplayName(taskType),
			URL:         "/maintenance/config/" + string(taskType),
			Icon:        maintenance.GetTaskIcon(taskType),
			Description: maintenance.GetTaskDescription(taskType),
		}

		menuItems = append(menuItems, menuItem)
	}

	return menuItems
}
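The deleted `layout_menu.go` above shows the registry-driven menu pattern this PR removes: a fixed "System" entry followed by one entry per registered maintenance task type. A minimal, self-contained sketch of that pattern is below; `buildConfigurationMenu` and its string task list are hypothetical stand-ins, since the real code pulled task types from the `maintenance` package registry and imported the task packages for their side-effect registration.

```go
package main

import "fmt"

// MenuItemData mirrors the struct from the deleted layout_menu.go.
type MenuItemData struct {
	Name        string
	URL         string
	Icon        string
	Description string
}

// buildConfigurationMenu sketches GetConfigurationMenuItems: a static
// "System" entry, then one entry per registered task type. The taskTypes
// parameter replaces the maintenance registry for illustration.
func buildConfigurationMenu(taskTypes []string) []*MenuItemData {
	items := []*MenuItemData{{
		Name:        "System",
		URL:         "/maintenance/config",
		Icon:        "fas fa-cogs",
		Description: "System-level configuration",
	}}
	for _, t := range taskTypes {
		items = append(items, &MenuItemData{
			Name: t,
			URL:  "/maintenance/config/" + t,
		})
	}
	return items
}

func main() {
	for _, item := range buildConfigurationMenu([]string{"vacuum", "balance"}) {
		fmt.Println(item.Name, item.URL)
	}
}
```

Because the menu is derived from a registry rather than hard-coded, adding a task type (vacuum, balance, erasure_coding) automatically adds its configuration page to the sidebar, which is why the imports above exist only for their registration side effects.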