Refactor plugin system and migrate worker runtime (#8369)
* admin: add plugin runtime UI page and route wiring
* pb: add plugin gRPC contract and generated bindings
* admin/plugin: implement worker registry, runtime, monitoring, and config store
* admin/dash: wire plugin runtime and expose plugin workflow APIs
* command: add flags to enable plugin runtime
* admin: rename remaining plugin v2 wording to plugin
* admin/plugin: add detectable job type registry helper
* admin/plugin: add scheduled detection and dispatch orchestration
* admin/plugin: prefetch job type descriptors when workers connect
* admin/plugin: add known job type discovery API and UI
* admin/plugin: refresh design doc to match current implementation
* admin/plugin: enforce per-worker scheduler concurrency limits
* admin/plugin: use descriptor runtime defaults for scheduler policy
* admin/ui: auto-load first known plugin job type on page open
* admin/plugin: bootstrap persisted config from descriptor defaults
* admin/plugin: dedupe scheduled proposals by dedupe key
* admin/ui: add job type and state filters for plugin monitoring
* admin/ui: add per-job-type plugin activity summary
* admin/plugin: split descriptor read API from schema refresh
* admin/ui: keep plugin summary metrics global while tables are filtered
* admin/plugin: retry executor reservation before timing out
* admin/plugin: expose scheduler states for monitoring
* admin/ui: show per-job-type scheduler states in plugin monitor
* pb/plugin: rename protobuf package to plugin
* admin/plugin: rename pluginRuntime wiring to plugin
* admin/plugin: remove runtime naming from plugin APIs and UI
* admin/plugin: rename runtime files to plugin naming
* admin/plugin: persist jobs and activities for monitor recovery
* admin/plugin: lease one detector worker per job type
* admin/ui: show worker load from plugin heartbeats
* admin/plugin: skip stale workers for detector and executor picks
* plugin/worker: add plugin worker command and stream runtime scaffold
* plugin/worker: implement vacuum detect and execute handlers
* admin/plugin: document external vacuum plugin worker starter
* command: update plugin.worker help to reflect implemented flow
* command/admin: drop legacy Plugin V2 label
* plugin/worker: validate vacuum job type and respect min interval
* plugin/worker: test no-op detect when min interval not elapsed
* command/admin: document plugin.worker external process
* plugin/worker: advertise configured concurrency in hello
* command/plugin.worker: add jobType handler selection
* command/plugin.worker: test handler selection by job type
* command/plugin.worker: persist worker id in workingDir
* admin/plugin: document plugin.worker jobType and workingDir flags
* plugin/worker: support cancel request for in-flight work
* plugin/worker: test cancel request acknowledgements
* command/plugin.worker: document workingDir and jobType behavior
* plugin/worker: emit executor activity events for monitor
* plugin/worker: test executor activity builder
* admin/plugin: send last successful run in detection request
* admin/plugin: send cancel request when detect or execute context ends
* admin/plugin: document worker cancel request responsibility
* admin/handlers: expose plugin scheduler states API in no-auth mode
* admin/handlers: test plugin scheduler states route registration
* admin/plugin: keep worker id on worker-generated activity records
* admin/plugin: test worker id propagation in monitor activities
* admin/dash: always initialize plugin service
* command/admin: remove plugin enable flags and default to enabled
* admin/dash: drop pluginEnabled constructor parameter
* admin/plugin UI: stop checking plugin enabled state
* admin/plugin: remove docs for plugin enable flags
* admin/dash: remove unused plugin enabled check method
* admin/dash: fallback to in-memory plugin init when dataDir fails
* admin/plugin API: expose worker gRPC port in status
* command/plugin.worker: resolve admin gRPC port via plugin status
* split plugin UI into overview/configuration/monitoring pages
* Update layout_templ.go
* add volume_balance plugin worker handler
* wire plugin.worker CLI for volume_balance job type
* add erasure_coding plugin worker handler
* wire plugin.worker CLI for erasure_coding job type
* support multi-job handlers in plugin worker runtime
* allow plugin.worker jobType as comma-separated list
* admin/plugin UI: rename to Workers and simplify config view
* plugin worker: queue detection requests instead of capacity reject
* Update plugin_worker.go
* plugin volume_balance: remove force_move/timeout from worker config UI
* plugin erasure_coding: enforce local working dir and cleanup
* admin/plugin UI: rename admin settings to job scheduling
* admin/plugin UI: persist and robustly render detection results
* admin/plugin: record and return detection trace metadata
* admin/plugin UI: show detection process and decision trace
* plugin: surface detector decision trace as activities
* mini: start a plugin worker by default
* admin/plugin UI: split monitoring into detection and execution tabs
* plugin worker: emit detection decision trace for EC and balance
* admin workers UI: split monitoring into detection and execution pages
* plugin scheduler: skip proposals for active assigned/running jobs
* admin workers UI: add job queue tab
* plugin worker: add dummy stress detector and executor job type
* admin workers UI: reorder tabs to detection queue execution
* admin workers UI: regenerate plugin template
* plugin defaults: include dummy stress and add stress tests
* plugin dummy stress: rotate detection selections across runs
* plugin scheduler: remove cross-run proposal dedupe
* plugin queue: track pending scheduled jobs
* plugin scheduler: wait for executor capacity before dispatch
* plugin scheduler: skip detection when waiting backlog is high
* plugin: add disk-backed job detail API and persistence
* admin ui: show plugin job detail modal from job id links
* plugin: generate unique job ids instead of reusing proposal ids
* plugin worker: emit heartbeats on work state changes
* plugin registry: round-robin tied executor and detector picks
* add temporary EC overnight stress runner
* plugin job details: persist and render EC execution plans
* ec volume details: color data and parity shard badges
* shard labels: keep parity ids numeric and color-only distinction
* admin: remove legacy maintenance UI routes and templates
* admin: remove dead maintenance endpoint helpers
* Update layout_templ.go
* remove dummy_stress worker and command support
* refactor plugin UI to job-type top tabs and sub-tabs
* migrate weed worker command to plugin runtime
* remove plugin.worker command and keep worker runtime with metrics
* update helm worker args for jobType and execution flags
* set plugin scheduling defaults to global 16 and per-worker 4
* stress: fix RPC context reuse and remove redundant variables in ec_stress_runner
* admin/plugin: fix lifecycle races, safe channel operations, and terminal state constants
* admin/dash: randomize job IDs and fix priority zero-value overwrite in plugin API
* admin/handlers: implement buffered rendering to prevent response corruption
* admin/plugin: implement debounced persistence flusher and optimize BuildJobDetail memory lookups
* admin/plugin: fix priority overwrite and implement bounded wait in scheduler reserve
* admin/plugin: implement atomic file writes and fix run record side effects
* admin/plugin: use P prefix for parity shard labels in execution plans
* admin/plugin: enable parallel execution for cancellation tests
* admin: refactor time.Time fields to pointers for better JSON omitempty support
* admin/plugin: implement pointer-safe time assignments and comparisons in plugin core
* admin/plugin: fix time assignment and sorting logic in plugin monitor after pointer refactor
* admin/plugin: update scheduler activity tracking to use time pointers
* admin/plugin: fix time-based run history trimming after pointer refactor
* admin/dash: fix JobSpec struct literal in plugin API after pointer refactor
* admin/view: add D/P prefixes to EC shard badges for UI consistency
* admin/plugin: use lifecycle-aware context for schema prefetching
* Update ec_volume_details_templ.go
* admin/stress: fix proposal sorting and log volume cleanup errors
* stress: refine ec stress runner with math/rand and collection name
  - Added Collection field to VolumeEcShardsDeleteRequest for correct filename construction.
  - Replaced crypto/rand with seeded math/rand PRNG for bulk payloads.
  - Added documentation for EcMinAge zero-value behavior.
  - Added logging for ignored errors in volume/shard deletion.
* admin: return internal server error for plugin store failures
  Changed error status code from 400 Bad Request to 500 Internal Server Error for failures in GetPluginJobDetail to correctly reflect server-side errors.
* admin: implement safe channel sends and graceful shutdown sync (see the sketch after this list)
  - Added sync.WaitGroup to Plugin struct to manage background goroutines.
  - Implemented safeSendCh helper using recover() to prevent panics on closed channels.
  - Ensured Shutdown() waits for all background operations to complete.
* admin: robustify plugin monitor with nil-safe time and record init
  - Standardized nil-safe assignment for *time.Time pointers (CreatedAt, UpdatedAt, CompletedAt).
  - Ensured persistJobDetailSnapshot initializes new records correctly if they don't exist on disk.
  - Fixed debounced persistence to trigger immediate write on job completion.
* admin: improve scheduler shutdown behavior and logic guards
  - Replaced brittle error string matching with explicit r.shutdownCh selection for shutdown detection.
  - Removed redundant nil guard in buildScheduledJobSpec.
  - Standardized WaitGroup usage for schedulerLoop.
* admin: implement deep copy for job parameters and atomic write fixes
  - Implemented deepCopyGenericValue and used it in cloneTrackedJob to prevent shared state.
  - Ensured atomicWriteFile creates parent directories before writing.
* admin: remove unreachable branch in shard classification
  Removed an unreachable 'totalShards <= 0' check in classifyShardID as dataShards and parityShards are already guarded.
* admin: secure UI links and use canonical shard constants
  - Added rel="noopener noreferrer" to external links for security.
  - Replaced magic number 14 with erasure_coding.TotalShardsCount.
  - Used renderEcShardBadge for missing shard list consistency.
* admin: stabilize plugin tests and fix regressions
  - Composed a robust plugin_monitor_test.go to handle asynchronous persistence.
  - Updated all time.Time literals to use timeToPtr helper.
  - Added explicit Shutdown() calls in tests to synchronize with debounced writes.
  - Fixed syntax errors and orphaned struct literals in tests.
* Potential fix for code scanning alert no. 278: Slice memory allocation with excessive size value
  Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* Potential fix for code scanning alert no. 283: Uncontrolled data used in path expression
  Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* admin: finalize refinements for error handling, scheduler, and race fixes
  - Standardized HTTP 500 status codes for store failures in plugin_api.go.
  - Tracked scheduled detection goroutines with sync.WaitGroup for safe shutdown.
  - Fixed race condition in safeSendDetectionComplete by extracting channel under lock.
  - Implemented deep copy for JobActivity details.
  - Used defaultDirPerm constant in atomicWriteFile.
* test(ec): migrate admin dockertest to plugin APIs
* admin/plugin_api: fix RunPluginJobTypeAPI to return 500 for server-side detection/filter errors
* admin/plugin_api: fix ExecutePluginJobAPI to return 500 for job execution failures
* admin/plugin_api: limit parseProtoJSONBody request body to 1MB to prevent unbounded memory usage
* admin/plugin: consolidate regex to package-level validJobTypePattern; add char validation to sanitizeJobID
* admin/plugin: fix racy Shutdown channel close with sync.Once
* admin/plugin: track sendLoop and recv goroutines in WorkerStream with r.wg
* admin/plugin: document writeProtoFiles atomicity — .pb is source of truth, .json is human-readable only
* admin/plugin: extract activityLess helper to deduplicate nil-safe OccurredAt sort comparators
* test/ec: check http.NewRequest errors to prevent nil req panics
* test/ec: replace deprecated ioutil/math/rand, fix stale step comment 5.1→3.1
* plugin(ec): raise default detection and scheduling throughput limits
* topology: include empty disks in volume list and EC capacity fallback
* topology: remove hard 10-task cap for detection planning
* Update ec_volume_details_templ.go
* adjust default
* fix tests

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
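The "safe channel sends and graceful shutdown sync" item above combines two patterns: a send helper that recovers from the panic raised by sending on a channel closed during shutdown, and a sync.WaitGroup that Shutdown() waits on before closing anything. The snippet below is a minimal, self-contained sketch of that combination, not the actual SeaweedFS code; the bgWorker type and its field names are illustrative only.

package main

import (
	"fmt"
	"sync"
)

// bgWorker is a hypothetical stand-in for a component that owns background
// goroutines and an internal event channel.
type bgWorker struct {
	wg         sync.WaitGroup
	events     chan string
	shutdownCh chan struct{}
	closeOnce  sync.Once
}

// safeSend recovers from the panic raised by sending on a closed channel,
// so a late sender during shutdown cannot crash the process.
func (w *bgWorker) safeSend(msg string) (ok bool) {
	defer func() {
		if recover() != nil {
			ok = false
		}
	}()
	select {
	case w.events <- msg:
		return true
	case <-w.shutdownCh:
		return false
	}
}

// run is one tracked background goroutine.
func (w *bgWorker) run() {
	defer w.wg.Done()
	w.safeSend("tick")
}

// shutdown closes the shutdown channel exactly once (sync.Once avoids a racy
// double close), waits for all tracked goroutines, then closes the event channel.
func (w *bgWorker) shutdown() {
	w.closeOnce.Do(func() { close(w.shutdownCh) })
	w.wg.Wait()
	close(w.events)
}

func main() {
	w := &bgWorker{events: make(chan string, 4), shutdownCh: make(chan struct{})}
	w.wg.Add(1)
	go w.run()
	w.shutdown()
	for msg := range w.events {
		fmt.Println("event:", msg)
	}
}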
weed/admin/plugin/config_store.go (new file, 739 lines added)
@@ -0,0 +1,739 @@
package plugin

import (
	"encoding/json"
	"fmt"
	"net/url"
	"os"
	"path/filepath"
	"regexp"
	"sort"
	"strings"
	"sync"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
	"google.golang.org/protobuf/encoding/protojson"
	"google.golang.org/protobuf/proto"
)

const (
	pluginDirName = "plugin"
	jobTypesDirName = "job_types"
	jobsDirName = "jobs"
	jobDetailsDirName = "job_details"
	activitiesDirName = "activities"
	descriptorPBFileName = "descriptor.pb"
	descriptorJSONFileName = "descriptor.json"
	configPBFileName = "config.pb"
	configJSONFileName = "config.json"
	runsJSONFileName = "runs.json"
	trackedJobsJSONFileName = "tracked_jobs.json"
	activitiesJSONFileName = "activities.json"
	defaultDirPerm = 0o755
	defaultFilePerm = 0o644
)

// validJobTypePattern is the canonical pattern for safe job type names.
// Only letters, digits, underscore, dash, and dot are allowed, which prevents
// path traversal because '/', '\\', and whitespace are rejected.
var validJobTypePattern = regexp.MustCompile(`^[A-Za-z0-9_.-]+$`)

// ConfigStore persists plugin configuration and bounded run history.
// If admin data dir is empty, it transparently falls back to in-memory mode.
type ConfigStore struct {
	configured bool
	baseDir string

	mu sync.RWMutex

	memDescriptors map[string]*plugin_pb.JobTypeDescriptor
	memConfigs map[string]*plugin_pb.PersistedJobTypeConfig
	memRunHistory map[string]*JobTypeRunHistory
	memTrackedJobs []TrackedJob
	memActivities []JobActivity
	memJobDetails map[string]TrackedJob
}

func NewConfigStore(adminDataDir string) (*ConfigStore, error) {
	store := &ConfigStore{
		configured: adminDataDir != "",
		memDescriptors: make(map[string]*plugin_pb.JobTypeDescriptor),
		memConfigs: make(map[string]*plugin_pb.PersistedJobTypeConfig),
		memRunHistory: make(map[string]*JobTypeRunHistory),
		memJobDetails: make(map[string]TrackedJob),
	}

	if adminDataDir == "" {
		return store, nil
	}

	store.baseDir = filepath.Join(adminDataDir, pluginDirName)
	if err := os.MkdirAll(filepath.Join(store.baseDir, jobTypesDirName), defaultDirPerm); err != nil {
		return nil, fmt.Errorf("create plugin job_types dir: %w", err)
	}
	if err := os.MkdirAll(filepath.Join(store.baseDir, jobsDirName), defaultDirPerm); err != nil {
		return nil, fmt.Errorf("create plugin jobs dir: %w", err)
	}
	if err := os.MkdirAll(filepath.Join(store.baseDir, jobsDirName, jobDetailsDirName), defaultDirPerm); err != nil {
		return nil, fmt.Errorf("create plugin job_details dir: %w", err)
	}
	if err := os.MkdirAll(filepath.Join(store.baseDir, activitiesDirName), defaultDirPerm); err != nil {
		return nil, fmt.Errorf("create plugin activities dir: %w", err)
	}

	return store, nil
}

func (s *ConfigStore) IsConfigured() bool {
	return s.configured
}

func (s *ConfigStore) BaseDir() string {
	return s.baseDir
}

func (s *ConfigStore) SaveDescriptor(jobType string, descriptor *plugin_pb.JobTypeDescriptor) error {
	if descriptor == nil {
		return fmt.Errorf("descriptor is nil")
	}
	if _, err := sanitizeJobType(jobType); err != nil {
		return err
	}

	clone := proto.Clone(descriptor).(*plugin_pb.JobTypeDescriptor)
	if clone.JobType == "" {
		clone.JobType = jobType
	}

	s.mu.Lock()
	defer s.mu.Unlock()

	if !s.configured {
		s.memDescriptors[jobType] = clone
		return nil
	}

	jobTypeDir, err := s.ensureJobTypeDir(jobType)
	if err != nil {
		return err
	}

	pbPath := filepath.Join(jobTypeDir, descriptorPBFileName)
	jsonPath := filepath.Join(jobTypeDir, descriptorJSONFileName)

	if err := writeProtoFiles(clone, pbPath, jsonPath); err != nil {
		return fmt.Errorf("save descriptor for %s: %w", jobType, err)
	}

	return nil
}

func (s *ConfigStore) LoadDescriptor(jobType string) (*plugin_pb.JobTypeDescriptor, error) {
	if _, err := sanitizeJobType(jobType); err != nil {
		return nil, err
	}

	s.mu.RLock()
	if !s.configured {
		d := s.memDescriptors[jobType]
		s.mu.RUnlock()
		if d == nil {
			return nil, nil
		}
		return proto.Clone(d).(*plugin_pb.JobTypeDescriptor), nil
	}
	s.mu.RUnlock()

	pbPath := filepath.Join(s.baseDir, jobTypesDirName, jobType, descriptorPBFileName)
	data, err := os.ReadFile(pbPath)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("read descriptor for %s: %w", jobType, err)
	}

	var descriptor plugin_pb.JobTypeDescriptor
	if err := proto.Unmarshal(data, &descriptor); err != nil {
		return nil, fmt.Errorf("unmarshal descriptor for %s: %w", jobType, err)
	}
	return &descriptor, nil
}

func (s *ConfigStore) SaveJobTypeConfig(config *plugin_pb.PersistedJobTypeConfig) error {
	if config == nil {
		return fmt.Errorf("job type config is nil")
	}
	if config.JobType == "" {
		return fmt.Errorf("job type config has empty job_type")
	}
	sanitizedJobType, err := sanitizeJobType(config.JobType)
	if err != nil {
		return err
	}
	// Use the sanitized job type going forward to ensure it is safe for filesystem paths.
	config.JobType = sanitizedJobType

	clone := proto.Clone(config).(*plugin_pb.PersistedJobTypeConfig)

	s.mu.Lock()
	defer s.mu.Unlock()

	if !s.configured {
		s.memConfigs[config.JobType] = clone
		return nil
	}

	jobTypeDir, err := s.ensureJobTypeDir(config.JobType)
	if err != nil {
		return err
	}

	pbPath := filepath.Join(jobTypeDir, configPBFileName)
	jsonPath := filepath.Join(jobTypeDir, configJSONFileName)

	if err := writeProtoFiles(clone, pbPath, jsonPath); err != nil {
		return fmt.Errorf("save job type config for %s: %w", config.JobType, err)
	}

	return nil
}

func (s *ConfigStore) LoadJobTypeConfig(jobType string) (*plugin_pb.PersistedJobTypeConfig, error) {
	if _, err := sanitizeJobType(jobType); err != nil {
		return nil, err
	}

	s.mu.RLock()
	if !s.configured {
		cfg := s.memConfigs[jobType]
		s.mu.RUnlock()
		if cfg == nil {
			return nil, nil
		}
		return proto.Clone(cfg).(*plugin_pb.PersistedJobTypeConfig), nil
	}
	s.mu.RUnlock()

	pbPath := filepath.Join(s.baseDir, jobTypesDirName, jobType, configPBFileName)
	data, err := os.ReadFile(pbPath)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("read job type config for %s: %w", jobType, err)
	}

	var config plugin_pb.PersistedJobTypeConfig
	if err := proto.Unmarshal(data, &config); err != nil {
		return nil, fmt.Errorf("unmarshal job type config for %s: %w", jobType, err)
	}

	return &config, nil
}

func (s *ConfigStore) AppendRunRecord(jobType string, record *JobRunRecord) error {
	if record == nil {
		return fmt.Errorf("run record is nil")
	}
	if _, err := sanitizeJobType(jobType); err != nil {
		return err
	}

	safeRecord := *record
	if safeRecord.JobType == "" {
		safeRecord.JobType = jobType
	}
	if safeRecord.CompletedAt == nil || safeRecord.CompletedAt.IsZero() {
		safeRecord.CompletedAt = timeToPtr(time.Now().UTC())
	}

	s.mu.Lock()
	defer s.mu.Unlock()

	history, err := s.loadRunHistoryLocked(jobType)
	if err != nil {
		return err
	}

	if safeRecord.Outcome == RunOutcomeSuccess {
		history.SuccessfulRuns = append(history.SuccessfulRuns, safeRecord)
	} else {
		safeRecord.Outcome = RunOutcomeError
		history.ErrorRuns = append(history.ErrorRuns, safeRecord)
	}

	history.SuccessfulRuns = trimRuns(history.SuccessfulRuns, MaxSuccessfulRunHistory)
	history.ErrorRuns = trimRuns(history.ErrorRuns, MaxErrorRunHistory)
	history.LastUpdatedTime = timeToPtr(time.Now().UTC())

	return s.saveRunHistoryLocked(jobType, history)
}

func (s *ConfigStore) LoadRunHistory(jobType string) (*JobTypeRunHistory, error) {
	if _, err := sanitizeJobType(jobType); err != nil {
		return nil, err
	}

	s.mu.Lock()
	defer s.mu.Unlock()

	history, err := s.loadRunHistoryLocked(jobType)
	if err != nil {
		return nil, err
	}
	return cloneRunHistory(history), nil
}

func (s *ConfigStore) SaveTrackedJobs(jobs []TrackedJob) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	clone := cloneTrackedJobs(jobs)

	if !s.configured {
		s.memTrackedJobs = clone
		return nil
	}

	encoded, err := json.MarshalIndent(clone, "", " ")
	if err != nil {
		return fmt.Errorf("encode tracked jobs: %w", err)
	}

	path := filepath.Join(s.baseDir, jobsDirName, trackedJobsJSONFileName)
	if err := atomicWriteFile(path, encoded, defaultFilePerm); err != nil {
		return fmt.Errorf("write tracked jobs: %w", err)
	}
	return nil
}

func (s *ConfigStore) LoadTrackedJobs() ([]TrackedJob, error) {
	s.mu.RLock()
	if !s.configured {
		out := cloneTrackedJobs(s.memTrackedJobs)
		s.mu.RUnlock()
		return out, nil
	}
	s.mu.RUnlock()

	path := filepath.Join(s.baseDir, jobsDirName, trackedJobsJSONFileName)
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("read tracked jobs: %w", err)
	}

	var jobs []TrackedJob
	if err := json.Unmarshal(data, &jobs); err != nil {
		return nil, fmt.Errorf("parse tracked jobs: %w", err)
	}
	return cloneTrackedJobs(jobs), nil
}

func (s *ConfigStore) SaveJobDetail(job TrackedJob) error {
	jobID, err := sanitizeJobID(job.JobID)
	if err != nil {
		return err
	}

	s.mu.Lock()
	defer s.mu.Unlock()

	clone := cloneTrackedJob(job)
	clone.JobID = jobID

	if !s.configured {
		s.memJobDetails[jobID] = clone
		return nil
	}

	encoded, err := json.MarshalIndent(clone, "", " ")
	if err != nil {
		return fmt.Errorf("encode job detail: %w", err)
	}

	path := filepath.Join(s.baseDir, jobsDirName, jobDetailsDirName, jobDetailFileName(jobID))
	if err := atomicWriteFile(path, encoded, defaultFilePerm); err != nil {
		return fmt.Errorf("write job detail: %w", err)
	}

	return nil
}

func (s *ConfigStore) LoadJobDetail(jobID string) (*TrackedJob, error) {
	jobID, err := sanitizeJobID(jobID)
	if err != nil {
		return nil, err
	}

	s.mu.RLock()
	if !s.configured {
		job, ok := s.memJobDetails[jobID]
		s.mu.RUnlock()
		if !ok {
			return nil, nil
		}
		clone := cloneTrackedJob(job)
		return &clone, nil
	}
	s.mu.RUnlock()

	path := filepath.Join(s.baseDir, jobsDirName, jobDetailsDirName, jobDetailFileName(jobID))
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("read job detail: %w", err)
	}

	var job TrackedJob
	if err := json.Unmarshal(data, &job); err != nil {
		return nil, fmt.Errorf("parse job detail: %w", err)
	}
	clone := cloneTrackedJob(job)
	return &clone, nil
}

func (s *ConfigStore) SaveActivities(activities []JobActivity) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	clone := cloneActivities(activities)

	if !s.configured {
		s.memActivities = clone
		return nil
	}

	encoded, err := json.MarshalIndent(clone, "", " ")
	if err != nil {
		return fmt.Errorf("encode activities: %w", err)
	}

	path := filepath.Join(s.baseDir, activitiesDirName, activitiesJSONFileName)
	if err := atomicWriteFile(path, encoded, defaultFilePerm); err != nil {
		return fmt.Errorf("write activities: %w", err)
	}
	return nil
}

func (s *ConfigStore) LoadActivities() ([]JobActivity, error) {
	s.mu.RLock()
	if !s.configured {
		out := cloneActivities(s.memActivities)
		s.mu.RUnlock()
		return out, nil
	}
	s.mu.RUnlock()

	path := filepath.Join(s.baseDir, activitiesDirName, activitiesJSONFileName)
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("read activities: %w", err)
	}

	var activities []JobActivity
	if err := json.Unmarshal(data, &activities); err != nil {
		return nil, fmt.Errorf("parse activities: %w", err)
	}
	return cloneActivities(activities), nil
}

func (s *ConfigStore) ListJobTypes() ([]string, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()

	jobTypeSet := make(map[string]struct{})

	if !s.configured {
		for jobType := range s.memDescriptors {
			jobTypeSet[jobType] = struct{}{}
		}
		for jobType := range s.memConfigs {
			jobTypeSet[jobType] = struct{}{}
		}
		for jobType := range s.memRunHistory {
			jobTypeSet[jobType] = struct{}{}
		}
	} else {
		jobTypesPath := filepath.Join(s.baseDir, jobTypesDirName)
		entries, err := os.ReadDir(jobTypesPath)
		if err != nil {
			if os.IsNotExist(err) {
				return []string{}, nil
			}
			return nil, fmt.Errorf("list job types: %w", err)
		}
		for _, entry := range entries {
			if !entry.IsDir() {
				continue
			}
			jobType := strings.TrimSpace(entry.Name())
			if _, err := sanitizeJobType(jobType); err != nil {
				continue
			}
			jobTypeSet[jobType] = struct{}{}
		}
	}

	jobTypes := make([]string, 0, len(jobTypeSet))
	for jobType := range jobTypeSet {
		jobTypes = append(jobTypes, jobType)
	}
	sort.Strings(jobTypes)
	return jobTypes, nil
}

func (s *ConfigStore) loadRunHistoryLocked(jobType string) (*JobTypeRunHistory, error) {
	if !s.configured {
		history, ok := s.memRunHistory[jobType]
		if !ok {
			history = &JobTypeRunHistory{JobType: jobType}
			s.memRunHistory[jobType] = history
		}
		return cloneRunHistory(history), nil
	}

	runsPath := filepath.Join(s.baseDir, jobTypesDirName, jobType, runsJSONFileName)
	data, err := os.ReadFile(runsPath)
	if err != nil {
		if os.IsNotExist(err) {
			return &JobTypeRunHistory{JobType: jobType}, nil
		}
		return nil, fmt.Errorf("read run history for %s: %w", jobType, err)
	}

	var history JobTypeRunHistory
	if err := json.Unmarshal(data, &history); err != nil {
		return nil, fmt.Errorf("parse run history for %s: %w", jobType, err)
	}
	if history.JobType == "" {
		history.JobType = jobType
	}
	return &history, nil
}

func (s *ConfigStore) saveRunHistoryLocked(jobType string, history *JobTypeRunHistory) error {
	if !s.configured {
		s.memRunHistory[jobType] = cloneRunHistory(history)
		return nil
	}

	jobTypeDir, err := s.ensureJobTypeDir(jobType)
	if err != nil {
		return err
	}

	encoded, err := json.MarshalIndent(history, "", " ")
	if err != nil {
		return fmt.Errorf("encode run history for %s: %w", jobType, err)
	}

	runsPath := filepath.Join(jobTypeDir, runsJSONFileName)
	if err := atomicWriteFile(runsPath, encoded, defaultFilePerm); err != nil {
		return fmt.Errorf("write run history for %s: %w", jobType, err)
	}
	return nil
}

func (s *ConfigStore) ensureJobTypeDir(jobType string) (string, error) {
	if !s.configured {
		return "", nil
	}
	jobTypeDir := filepath.Join(s.baseDir, jobTypesDirName, jobType)
	if err := os.MkdirAll(jobTypeDir, defaultDirPerm); err != nil {
		return "", fmt.Errorf("create job type dir for %s: %w", jobType, err)
	}
	return jobTypeDir, nil
}

func sanitizeJobType(jobType string) (string, error) {
	jobType = strings.TrimSpace(jobType)
	if jobType == "" {
		return "", fmt.Errorf("job type is empty")
	}
	// Enforce a strict, path-safe pattern for job types: only letters, digits, underscore, dash and dot.
	// This prevents path traversal because '/', '\\' and whitespace are rejected.
	if !validJobTypePattern.MatchString(jobType) {
		return "", fmt.Errorf("invalid job type %q: must match %s", jobType, validJobTypePattern.String())
	}
	return jobType, nil
}

// validJobIDPattern allows letters, digits, dash, underscore, and dot.
// url.PathEscape in jobDetailFileName provides a second layer of defense.
var validJobIDPattern = regexp.MustCompile(`^[A-Za-z0-9_.-]+$`)

func sanitizeJobID(jobID string) (string, error) {
	jobID = strings.TrimSpace(jobID)
	if jobID == "" {
		return "", fmt.Errorf("job id is empty")
	}
	if !validJobIDPattern.MatchString(jobID) {
		return "", fmt.Errorf("invalid job id %q: must match %s", jobID, validJobIDPattern.String())
	}
	return jobID, nil
}

func jobDetailFileName(jobID string) string {
	return url.PathEscape(jobID) + ".json"
}

func trimRuns(runs []JobRunRecord, maxKeep int) []JobRunRecord {
	if len(runs) == 0 {
		return runs
	}
	sort.Slice(runs, func(i, j int) bool {
		ti := time.Time{}
		if runs[i].CompletedAt != nil {
			ti = *runs[i].CompletedAt
		}
		tj := time.Time{}
		if runs[j].CompletedAt != nil {
			tj = *runs[j].CompletedAt
		}
		return ti.After(tj)
	})
	if len(runs) > maxKeep {
		runs = runs[:maxKeep]
	}
	return runs
}

func cloneRunHistory(in *JobTypeRunHistory) *JobTypeRunHistory {
	if in == nil {
		return nil
	}
	out := *in
	if in.SuccessfulRuns != nil {
		out.SuccessfulRuns = append([]JobRunRecord(nil), in.SuccessfulRuns...)
	}
	if in.ErrorRuns != nil {
		out.ErrorRuns = append([]JobRunRecord(nil), in.ErrorRuns...)
	}
	return &out
}

func cloneTrackedJobs(in []TrackedJob) []TrackedJob {
	if len(in) == 0 {
		return nil
	}

	out := make([]TrackedJob, len(in))
	for i := range in {
		out[i] = cloneTrackedJob(in[i])
	}
	return out
}

func cloneTrackedJob(in TrackedJob) TrackedJob {
	out := in
	if in.Parameters != nil {
		out.Parameters = make(map[string]interface{}, len(in.Parameters))
		for key, value := range in.Parameters {
			out.Parameters[key] = deepCopyGenericValue(value)
		}
	}
	if in.Labels != nil {
		out.Labels = make(map[string]string, len(in.Labels))
		for key, value := range in.Labels {
			out.Labels[key] = value
		}
	}
	if in.ResultOutputValues != nil {
		out.ResultOutputValues = make(map[string]interface{}, len(in.ResultOutputValues))
		for key, value := range in.ResultOutputValues {
			out.ResultOutputValues[key] = deepCopyGenericValue(value)
		}
	}
	return out
}

func deepCopyGenericValue(val interface{}) interface{} {
	switch v := val.(type) {
	case map[string]interface{}:
		res := make(map[string]interface{}, len(v))
		for k, val := range v {
			res[k] = deepCopyGenericValue(val)
		}
		return res
	case []interface{}:
		res := make([]interface{}, len(v))
		for i, val := range v {
			res[i] = deepCopyGenericValue(val)
		}
		return res
	default:
		return v
	}
}

func cloneActivities(in []JobActivity) []JobActivity {
	if len(in) == 0 {
		return nil
	}

	out := make([]JobActivity, len(in))
	for i := range in {
		out[i] = in[i]
		if in[i].Details != nil {
			out[i].Details = make(map[string]interface{}, len(in[i].Details))
			for key, value := range in[i].Details {
				out[i].Details[key] = deepCopyGenericValue(value)
			}
		}
	}
	return out
}

// writeProtoFiles writes message to both a binary protobuf file (pbPath) and a
// human-readable JSON file (jsonPath) using atomicWriteFile for each.
// The .pb file is the authoritative source of truth: all reads use proto.Unmarshal
// on the .pb file. The .json file is for human inspection only, so a partial
// failure where .pb succeeds but .json fails leaves the store in a consistent state.
func writeProtoFiles(message proto.Message, pbPath string, jsonPath string) error {
	pbData, err := proto.Marshal(message)
	if err != nil {
		return fmt.Errorf("marshal protobuf: %w", err)
	}
	if err := atomicWriteFile(pbPath, pbData, defaultFilePerm); err != nil {
		return fmt.Errorf("write protobuf file: %w", err)
	}

	jsonData, err := protojson.MarshalOptions{
		Multiline: true,
		Indent: " ",
		EmitUnpopulated: true,
	}.Marshal(message)
	if err != nil {
		return fmt.Errorf("marshal json: %w", err)
	}
	if err := atomicWriteFile(jsonPath, jsonData, defaultFilePerm); err != nil {
		return fmt.Errorf("write json file: %w", err)
	}

	return nil
}

func atomicWriteFile(filename string, data []byte, perm os.FileMode) error {
	dir := filepath.Dir(filename)
	if err := os.MkdirAll(dir, defaultDirPerm); err != nil {
		return fmt.Errorf("create directory %s: %w", dir, err)
	}
	tmpFile := filename + ".tmp"
	if err := os.WriteFile(tmpFile, data, perm); err != nil {
		return err
	}
	if err := os.Rename(tmpFile, filename); err != nil {
		_ = os.Remove(tmpFile)
		return err
	}
	return nil
}
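For orientation, the sketch below walks through the ConfigStore API defined above. It is an illustrative example, not part of the commit: it assumes it lives in a sibling file of the same plugin package (so it can reference JobRunRecord, RunOutcomeSuccess, and the other package-level types directly), "vacuum" is only a sample job type name, and JobRunRecord is assumed to need no fields beyond the ones shown.

package plugin

import (
	"fmt"

	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
)

// ExampleConfigStoreUsage shows a typical write-then-read cycle against the
// ConfigStore above. Passing an empty adminDataDir exercises the in-memory
// fallback documented on the type.
func ExampleConfigStoreUsage(adminDataDir string) error {
	store, err := NewConfigStore(adminDataDir)
	if err != nil {
		return err
	}

	// Persist a per-job-type configuration; with a data dir configured this is
	// written atomically as config.pb plus a human-readable config.json.
	cfg := &plugin_pb.PersistedJobTypeConfig{JobType: "vacuum"}
	if err := store.SaveJobTypeConfig(cfg); err != nil {
		return err
	}

	// Record a completed run; AppendRunRecord fills CompletedAt when it is nil
	// and trims the bounded run history.
	if err := store.AppendRunRecord("vacuum", &JobRunRecord{
		JobType: "vacuum",
		Outcome: RunOutcomeSuccess,
	}); err != nil {
		return err
	}

	// Read everything back.
	jobTypes, err := store.ListJobTypes()
	if err != nil {
		return err
	}
	history, err := store.LoadRunHistory("vacuum")
	if err != nil {
		return err
	}
	fmt.Printf("job types: %v, successful runs: %d\n", jobTypes, len(history.SuccessfulRuns))
	return nil
}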