Refactor plugin system and migrate worker runtime (#8369)
* admin: add plugin runtime UI page and route wiring
* pb: add plugin gRPC contract and generated bindings
* admin/plugin: implement worker registry, runtime, monitoring, and config store
* admin/dash: wire plugin runtime and expose plugin workflow APIs
* command: add flags to enable plugin runtime
* admin: rename remaining plugin v2 wording to plugin
* admin/plugin: add detectable job type registry helper
* admin/plugin: add scheduled detection and dispatch orchestration
* admin/plugin: prefetch job type descriptors when workers connect
* admin/plugin: add known job type discovery API and UI
* admin/plugin: refresh design doc to match current implementation
* admin/plugin: enforce per-worker scheduler concurrency limits
* admin/plugin: use descriptor runtime defaults for scheduler policy
* admin/ui: auto-load first known plugin job type on page open
* admin/plugin: bootstrap persisted config from descriptor defaults
* admin/plugin: dedupe scheduled proposals by dedupe key
* admin/ui: add job type and state filters for plugin monitoring
* admin/ui: add per-job-type plugin activity summary
* admin/plugin: split descriptor read API from schema refresh
* admin/ui: keep plugin summary metrics global while tables are filtered
* admin/plugin: retry executor reservation before timing out
* admin/plugin: expose scheduler states for monitoring
* admin/ui: show per-job-type scheduler states in plugin monitor
* pb/plugin: rename protobuf package to plugin
* admin/plugin: rename pluginRuntime wiring to plugin
* admin/plugin: remove runtime naming from plugin APIs and UI
* admin/plugin: rename runtime files to plugin naming
* admin/plugin: persist jobs and activities for monitor recovery
* admin/plugin: lease one detector worker per job type
* admin/ui: show worker load from plugin heartbeats
* admin/plugin: skip stale workers for detector and executor picks
* plugin/worker: add plugin worker command and stream runtime scaffold
* plugin/worker: implement vacuum detect and execute handlers
* admin/plugin: document external vacuum plugin worker starter
* command: update plugin.worker help to reflect implemented flow
* command/admin: drop legacy Plugin V2 label
* plugin/worker: validate vacuum job type and respect min interval
* plugin/worker: test no-op detect when min interval not elapsed
* command/admin: document plugin.worker external process
* plugin/worker: advertise configured concurrency in hello
* command/plugin.worker: add jobType handler selection
* command/plugin.worker: test handler selection by job type
* command/plugin.worker: persist worker id in workingDir
* admin/plugin: document plugin.worker jobType and workingDir flags
* plugin/worker: support cancel request for in-flight work
* plugin/worker: test cancel request acknowledgements
* command/plugin.worker: document workingDir and jobType behavior
* plugin/worker: emit executor activity events for monitor
* plugin/worker: test executor activity builder
* admin/plugin: send last successful run in detection request
* admin/plugin: send cancel request when detect or execute context ends
* admin/plugin: document worker cancel request responsibility
* admin/handlers: expose plugin scheduler states API in no-auth mode
* admin/handlers: test plugin scheduler states route registration
* admin/plugin: keep worker id on worker-generated activity records
* admin/plugin: test worker id propagation in monitor activities
* admin/dash: always initialize plugin service
* command/admin: remove plugin enable flags and default to enabled
* admin/dash: drop pluginEnabled constructor parameter
* admin/plugin UI: stop checking plugin enabled state
* admin/plugin: remove docs for plugin enable flags
* admin/dash: remove unused plugin enabled check method
* admin/dash: fall back to in-memory plugin init when dataDir fails
* admin/plugin API: expose worker gRPC port in status
* command/plugin.worker: resolve admin gRPC port via plugin status
* split plugin UI into overview/configuration/monitoring pages
* Update layout_templ.go
* add volume_balance plugin worker handler
* wire plugin.worker CLI for volume_balance job type
* add erasure_coding plugin worker handler
* wire plugin.worker CLI for erasure_coding job type
* support multi-job handlers in plugin worker runtime
* allow plugin.worker jobType as comma-separated list
* admin/plugin UI: rename to Workers and simplify config view
* plugin worker: queue detection requests instead of capacity reject
* Update plugin_worker.go
* plugin volume_balance: remove force_move/timeout from worker config UI
* plugin erasure_coding: enforce local working dir and cleanup
* admin/plugin UI: rename admin settings to job scheduling
* admin/plugin UI: persist and robustly render detection results
* admin/plugin: record and return detection trace metadata
* admin/plugin UI: show detection process and decision trace
* plugin: surface detector decision trace as activities
* mini: start a plugin worker by default
* admin/plugin UI: split monitoring into detection and execution tabs
* plugin worker: emit detection decision trace for EC and balance
* admin workers UI: split monitoring into detection and execution pages
* plugin scheduler: skip proposals for active assigned/running jobs
* admin workers UI: add job queue tab
* plugin worker: add dummy stress detector and executor job type
* admin workers UI: reorder tabs to detection, queue, execution
* admin workers UI: regenerate plugin template
* plugin defaults: include dummy stress and add stress tests
* plugin dummy stress: rotate detection selections across runs
* plugin scheduler: remove cross-run proposal dedupe
* plugin queue: track pending scheduled jobs
* plugin scheduler: wait for executor capacity before dispatch
* plugin scheduler: skip detection when waiting backlog is high
* plugin: add disk-backed job detail API and persistence
* admin ui: show plugin job detail modal from job id links
* plugin: generate unique job ids instead of reusing proposal ids
* plugin worker: emit heartbeats on work state changes
* plugin registry: round-robin tied executor and detector picks
* add temporary EC overnight stress runner
* plugin job details: persist and render EC execution plans
* ec volume details: color data and parity shard badges
* shard labels: keep parity ids numeric and color-only distinction
* admin: remove legacy maintenance UI routes and templates
* admin: remove dead maintenance endpoint helpers
* Update layout_templ.go
* remove dummy_stress worker and command support
* refactor plugin UI to job-type top tabs and sub-tabs
* migrate weed worker command to plugin runtime
* remove plugin.worker command and keep worker runtime with metrics
* update helm worker args for jobType and execution flags
* set plugin scheduling defaults to global 16 and per-worker 4
* stress: fix RPC context reuse and remove redundant variables in ec_stress_runner
* admin/plugin: fix lifecycle races, safe channel operations, and terminal state constants
* admin/dash: randomize job IDs and fix priority zero-value overwrite in plugin API
* admin/handlers: implement buffered rendering to prevent response corruption
* admin/plugin: implement debounced persistence flusher and optimize BuildJobDetail memory lookups
* admin/plugin: fix priority overwrite and implement bounded wait in scheduler reserve
* admin/plugin: implement atomic file writes and fix run record side effects
* admin/plugin: use P prefix for parity shard labels in execution plans
* admin/plugin: enable parallel execution for cancellation tests
* admin: refactor time.Time fields to pointers for better JSON omitempty support
* admin/plugin: implement pointer-safe time assignments and comparisons in plugin core
* admin/plugin: fix time assignment and sorting logic in plugin monitor after pointer refactor
* admin/plugin: update scheduler activity tracking to use time pointers
* admin/plugin: fix time-based run history trimming after pointer refactor
* admin/dash: fix JobSpec struct literal in plugin API after pointer refactor
* admin/view: add D/P prefixes to EC shard badges for UI consistency
* admin/plugin: use lifecycle-aware context for schema prefetching
* Update ec_volume_details_templ.go
* admin/stress: fix proposal sorting and log volume cleanup errors
* stress: refine ec stress runner with math/rand and collection name
  - Added Collection field to VolumeEcShardsDeleteRequest for correct filename construction.
  - Replaced crypto/rand with a seeded math/rand PRNG for bulk payloads.
  - Added documentation for EcMinAge zero-value behavior.
  - Added logging for ignored errors in volume/shard deletion.
* admin: return internal server error for plugin store failures
  Changed the error status code from 400 Bad Request to 500 Internal Server Error for failures in GetPluginJobDetail to correctly reflect server-side errors.
* admin: implement safe channel sends and graceful shutdown sync
  - Added sync.WaitGroup to the Plugin struct to manage background goroutines.
  - Implemented a safeSendCh helper using recover() to prevent panics on closed channels.
  - Ensured Shutdown() waits for all background operations to complete.
* admin: robustify plugin monitor with nil-safe time and record init
  - Standardized nil-safe assignment for *time.Time pointers (CreatedAt, UpdatedAt, CompletedAt).
  - Ensured persistJobDetailSnapshot initializes new records correctly if they don't exist on disk.
  - Fixed debounced persistence to trigger an immediate write on job completion.
* admin: improve scheduler shutdown behavior and logic guards
  - Replaced brittle error string matching with explicit r.shutdownCh selection for shutdown detection.
  - Removed a redundant nil guard in buildScheduledJobSpec.
  - Standardized WaitGroup usage for schedulerLoop.
* admin: implement deep copy for job parameters and atomic write fixes
  - Implemented deepCopyGenericValue and used it in cloneTrackedJob to prevent shared state.
  - Ensured atomicWriteFile creates parent directories before writing.
* admin: remove unreachable branch in shard classification
  Removed an unreachable 'totalShards <= 0' check in classifyShardID, as dataShards and parityShards are already guarded.
* admin: secure UI links and use canonical shard constants
  - Added rel="noopener noreferrer" to external links for security.
  - Replaced the magic number 14 with erasure_coding.TotalShardsCount.
  - Used renderEcShardBadge for missing-shard list consistency.
* admin: stabilize plugin tests and fix regressions
  - Composed a robust plugin_monitor_test.go to handle asynchronous persistence.
  - Updated all time.Time literals to use the timeToPtr helper.
  - Added explicit Shutdown() calls in tests to synchronize with debounced writes.
  - Fixed syntax errors and orphaned struct literals in tests.
* Potential fix for code scanning alert no. 278: Slice memory allocation with excessive size value
  Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* Potential fix for code scanning alert no. 283: Uncontrolled data used in path expression
  Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* admin: finalize refinements for error handling, scheduler, and race fixes
  - Standardized HTTP 500 status codes for store failures in plugin_api.go.
  - Tracked scheduled detection goroutines with sync.WaitGroup for safe shutdown.
  - Fixed a race condition in safeSendDetectionComplete by extracting the channel under lock.
  - Implemented deep copy for JobActivity details.
  - Used the defaultDirPerm constant in atomicWriteFile.
* test(ec): migrate admin dockertest to plugin APIs
* admin/plugin_api: fix RunPluginJobTypeAPI to return 500 for server-side detection/filter errors
* admin/plugin_api: fix ExecutePluginJobAPI to return 500 for job execution failures
* admin/plugin_api: limit parseProtoJSONBody request body to 1MB to prevent unbounded memory usage
* admin/plugin: consolidate regex to package-level validJobTypePattern; add char validation to sanitizeJobID
* admin/plugin: fix racy Shutdown channel close with sync.Once
* admin/plugin: track sendLoop and recv goroutines in WorkerStream with r.wg
* admin/plugin: document writeProtoFiles atomicity: the .pb file is the source of truth, the .json file is human-readable only
* admin/plugin: extract activityLess helper to deduplicate nil-safe OccurredAt sort comparators
* test/ec: check http.NewRequest errors to prevent nil req panics
* test/ec: replace deprecated ioutil/math/rand, fix stale step comment 5.1→3.1
* plugin(ec): raise default detection and scheduling throughput limits
* topology: include empty disks in volume list and EC capacity fallback
* topology: remove hard 10-task cap for detection planning
* Update ec_volume_details_templ.go
* adjust default
* fix tests

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
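Several bullets above ("fix lifecycle races, safe channel operations", "implement safe channel sends and graceful shutdown sync") describe the same two-part pattern: a send helper that recovers from the panic raised by sending on a closed channel, and a sync.WaitGroup that lets Shutdown() wait for background goroutines before channels are closed. A minimal sketch of that pattern follows; the Plugin, events, and startWorker names are illustrative, not the actual SeaweedFS code:

package plugin

import "sync"

type Plugin struct {
	wg     sync.WaitGroup
	events chan string
}

// safeSendCh recovers from the "send on closed channel" panic and
// reports whether the value was actually delivered.
func safeSendCh[T any](ch chan T, v T) (sent bool) {
	defer func() {
		if recover() != nil {
			sent = false
		}
	}()
	ch <- v
	return true
}

// startWorker tracks the goroutine so Shutdown can wait for it.
func (p *Plugin) startWorker() {
	p.wg.Add(1)
	go func() {
		defer p.wg.Done()
		safeSendCh(p.events, "tick") // silently dropped once shut down
	}()
}

// Shutdown waits for all background goroutines before closing channels.
func (p *Plugin) Shutdown() {
	p.wg.Wait()
	close(p.events)
}

Dropping a late send instead of panicking trades delivery guarantees for shutdown safety, which suits fire-and-forget monitor events.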
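The atomic-write bullets ("implement atomic file writes", "ensured atomicWriteFile creates parent directories before writing", "used the defaultDirPerm constant in atomicWriteFile") point at the standard temp-file-then-rename pattern. A sketch under those assumptions: the function name follows the commit message, while the permission value and temp-file naming are guesses:

package plugin

import (
	"os"
	"path/filepath"
)

const defaultDirPerm = 0o755 // assumed value of the constant named in the commit

// atomicWriteFile writes data to a temp file in the target directory,
// syncs it, then renames it over the destination so readers never see
// a partially written file.
func atomicWriteFile(path string, data []byte) error {
	if err := os.MkdirAll(filepath.Dir(path), defaultDirPerm); err != nil {
		return err
	}
	tmp, err := os.CreateTemp(filepath.Dir(path), ".tmp-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // no-op after a successful rename
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Sync(); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), path)
}

The temp file must live in the same directory as the target, since rename is only atomic within one filesystem.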
@@ -23,7 +23,7 @@ type AdminHandlers struct {
	fileBrowserHandlers    *FileBrowserHandlers
	userHandlers           *UserHandlers
	policyHandlers         *PolicyHandlers
	maintenanceHandlers    *MaintenanceHandlers
	pluginHandlers         *PluginHandlers
	mqHandlers             *MessageQueueHandlers
	serviceAccountHandlers *ServiceAccountHandlers
}
@@ -35,7 +35,7 @@ func NewAdminHandlers(adminServer *dash.AdminServer) *AdminHandlers {
	fileBrowserHandlers := NewFileBrowserHandlers(adminServer)
	userHandlers := NewUserHandlers(adminServer)
	policyHandlers := NewPolicyHandlers(adminServer)
	maintenanceHandlers := NewMaintenanceHandlers(adminServer)
	pluginHandlers := NewPluginHandlers(adminServer)
	mqHandlers := NewMessageQueueHandlers(adminServer)
	serviceAccountHandlers := NewServiceAccountHandlers(adminServer)
	return &AdminHandlers{
@@ -45,7 +45,7 @@ func NewAdminHandlers(adminServer *dash.AdminServer) *AdminHandlers {
		fileBrowserHandlers:    fileBrowserHandlers,
		userHandlers:           userHandlers,
		policyHandlers:         policyHandlers,
		maintenanceHandlers:    maintenanceHandlers,
		pluginHandlers:         pluginHandlers,
		mqHandlers:             mqHandlers,
		serviceAccountHandlers: serviceAccountHandlers,
	}
@@ -119,14 +119,12 @@ func (h *AdminHandlers) SetupRoutes(r *gin.Engine, authRequired bool, adminUser,
		protected.GET("/mq/topics", h.mqHandlers.ShowTopics)
		protected.GET("/mq/topics/:namespace/:topic", h.mqHandlers.ShowTopicDetails)

		// Maintenance system routes
		protected.GET("/maintenance", h.maintenanceHandlers.ShowMaintenanceQueue)
		protected.GET("/maintenance/workers", h.maintenanceHandlers.ShowMaintenanceWorkers)
		protected.GET("/maintenance/config", h.maintenanceHandlers.ShowMaintenanceConfig)
		protected.POST("/maintenance/config", dash.RequireWriteAccess(), h.maintenanceHandlers.UpdateMaintenanceConfig)
		protected.GET("/maintenance/config/:taskType", h.maintenanceHandlers.ShowTaskConfig)
		protected.POST("/maintenance/config/:taskType", dash.RequireWriteAccess(), h.maintenanceHandlers.UpdateTaskConfig)
		protected.GET("/maintenance/tasks/:id", h.maintenanceHandlers.ShowTaskDetail)
		protected.GET("/plugin", h.pluginHandlers.ShowPlugin)
		protected.GET("/plugin/configuration", h.pluginHandlers.ShowPluginConfiguration)
		protected.GET("/plugin/queue", h.pluginHandlers.ShowPluginQueue)
		protected.GET("/plugin/detection", h.pluginHandlers.ShowPluginDetection)
		protected.GET("/plugin/execution", h.pluginHandlers.ShowPluginExecution)
		protected.GET("/plugin/monitoring", h.pluginHandlers.ShowPluginMonitoring)

		// API routes for AJAX calls
		api := r.Group("/api")
@@ -226,20 +224,25 @@ func (h *AdminHandlers) SetupRoutes(r *gin.Engine, authRequired bool, adminUser,
			volumeApi.POST("/:id/:server/vacuum", dash.RequireWriteAccess(), h.clusterHandlers.VacuumVolume)
		}

		// Maintenance API routes
		maintenanceApi := api.Group("/maintenance")
		// Plugin API routes
		pluginApi := api.Group("/plugin")
		{
			maintenanceApi.POST("/scan", dash.RequireWriteAccess(), h.adminServer.TriggerMaintenanceScan)
			maintenanceApi.GET("/tasks", h.adminServer.GetMaintenanceTasks)
			maintenanceApi.GET("/tasks/:id", h.adminServer.GetMaintenanceTask)
			maintenanceApi.GET("/tasks/:id/detail", h.adminServer.GetMaintenanceTaskDetailAPI)
			maintenanceApi.POST("/tasks/:id/cancel", dash.RequireWriteAccess(), h.adminServer.CancelMaintenanceTask)
			maintenanceApi.GET("/workers", h.adminServer.GetMaintenanceWorkersAPI)
			maintenanceApi.GET("/workers/:id", h.adminServer.GetMaintenanceWorker)
			maintenanceApi.GET("/workers/:id/logs", h.adminServer.GetWorkerLogs)
			maintenanceApi.GET("/stats", h.adminServer.GetMaintenanceStats)
			maintenanceApi.GET("/config", h.adminServer.GetMaintenanceConfigAPI)
			maintenanceApi.PUT("/config", dash.RequireWriteAccess(), h.adminServer.UpdateMaintenanceConfigAPI)
			pluginApi.GET("/status", h.adminServer.GetPluginStatusAPI)
			pluginApi.GET("/workers", h.adminServer.GetPluginWorkersAPI)
			pluginApi.GET("/job-types", h.adminServer.GetPluginJobTypesAPI)
			pluginApi.GET("/jobs", h.adminServer.GetPluginJobsAPI)
			pluginApi.GET("/jobs/:jobId", h.adminServer.GetPluginJobAPI)
			pluginApi.GET("/jobs/:jobId/detail", h.adminServer.GetPluginJobDetailAPI)
			pluginApi.GET("/activities", h.adminServer.GetPluginActivitiesAPI)
			pluginApi.GET("/scheduler-states", h.adminServer.GetPluginSchedulerStatesAPI)
			pluginApi.GET("/job-types/:jobType/descriptor", h.adminServer.GetPluginJobTypeDescriptorAPI)
			pluginApi.POST("/job-types/:jobType/schema", h.adminServer.RequestPluginJobTypeSchemaAPI)
			pluginApi.GET("/job-types/:jobType/config", h.adminServer.GetPluginJobTypeConfigAPI)
			pluginApi.PUT("/job-types/:jobType/config", dash.RequireWriteAccess(), h.adminServer.UpdatePluginJobTypeConfigAPI)
			pluginApi.GET("/job-types/:jobType/runs", h.adminServer.GetPluginRunHistoryAPI)
			pluginApi.POST("/job-types/:jobType/detect", dash.RequireWriteAccess(), h.adminServer.TriggerPluginDetectionAPI)
			pluginApi.POST("/job-types/:jobType/run", dash.RequireWriteAccess(), h.adminServer.RunPluginJobTypeAPI)
			pluginApi.POST("/jobs/execute", dash.RequireWriteAccess(), h.adminServer.ExecutePluginJobAPI)
		}

		// Message Queue API routes
@@ -292,14 +295,12 @@ func (h *AdminHandlers) SetupRoutes(r *gin.Engine, authRequired bool, adminUser,
		r.GET("/mq/topics", h.mqHandlers.ShowTopics)
		r.GET("/mq/topics/:namespace/:topic", h.mqHandlers.ShowTopicDetails)

		// Maintenance system routes
		r.GET("/maintenance", h.maintenanceHandlers.ShowMaintenanceQueue)
		r.GET("/maintenance/workers", h.maintenanceHandlers.ShowMaintenanceWorkers)
		r.GET("/maintenance/config", h.maintenanceHandlers.ShowMaintenanceConfig)
		r.POST("/maintenance/config", h.maintenanceHandlers.UpdateMaintenanceConfig)
		r.GET("/maintenance/config/:taskType", h.maintenanceHandlers.ShowTaskConfig)
		r.POST("/maintenance/config/:taskType", h.maintenanceHandlers.UpdateTaskConfig)
		r.GET("/maintenance/tasks/:id", h.maintenanceHandlers.ShowTaskDetail)
		r.GET("/plugin", h.pluginHandlers.ShowPlugin)
		r.GET("/plugin/configuration", h.pluginHandlers.ShowPluginConfiguration)
		r.GET("/plugin/queue", h.pluginHandlers.ShowPluginQueue)
		r.GET("/plugin/detection", h.pluginHandlers.ShowPluginDetection)
		r.GET("/plugin/execution", h.pluginHandlers.ShowPluginExecution)
		r.GET("/plugin/monitoring", h.pluginHandlers.ShowPluginMonitoring)

		// API routes for AJAX calls
		api := r.Group("/api")
@@ -398,20 +399,25 @@ func (h *AdminHandlers) SetupRoutes(r *gin.Engine, authRequired bool, adminUser,
			volumeApi.POST("/:id/:server/vacuum", h.clusterHandlers.VacuumVolume)
		}

		// Maintenance API routes
		maintenanceApi := api.Group("/maintenance")
		// Plugin API routes
		pluginApi := api.Group("/plugin")
		{
			maintenanceApi.POST("/scan", h.adminServer.TriggerMaintenanceScan)
			maintenanceApi.GET("/tasks", h.adminServer.GetMaintenanceTasks)
			maintenanceApi.GET("/tasks/:id", h.adminServer.GetMaintenanceTask)
			maintenanceApi.GET("/tasks/:id/detail", h.adminServer.GetMaintenanceTaskDetailAPI)
			maintenanceApi.POST("/tasks/:id/cancel", h.adminServer.CancelMaintenanceTask)
			maintenanceApi.GET("/workers", h.adminServer.GetMaintenanceWorkersAPI)
			maintenanceApi.GET("/workers/:id", h.adminServer.GetMaintenanceWorker)
			maintenanceApi.GET("/workers/:id/logs", h.adminServer.GetWorkerLogs)
			maintenanceApi.GET("/stats", h.adminServer.GetMaintenanceStats)
			maintenanceApi.GET("/config", h.adminServer.GetMaintenanceConfigAPI)
			maintenanceApi.PUT("/config", h.adminServer.UpdateMaintenanceConfigAPI)
			pluginApi.GET("/status", h.adminServer.GetPluginStatusAPI)
			pluginApi.GET("/workers", h.adminServer.GetPluginWorkersAPI)
			pluginApi.GET("/job-types", h.adminServer.GetPluginJobTypesAPI)
			pluginApi.GET("/jobs", h.adminServer.GetPluginJobsAPI)
			pluginApi.GET("/jobs/:jobId", h.adminServer.GetPluginJobAPI)
			pluginApi.GET("/jobs/:jobId/detail", h.adminServer.GetPluginJobDetailAPI)
			pluginApi.GET("/activities", h.adminServer.GetPluginActivitiesAPI)
			pluginApi.GET("/scheduler-states", h.adminServer.GetPluginSchedulerStatesAPI)
			pluginApi.GET("/job-types/:jobType/descriptor", h.adminServer.GetPluginJobTypeDescriptorAPI)
			pluginApi.POST("/job-types/:jobType/schema", h.adminServer.RequestPluginJobTypeSchemaAPI)
			pluginApi.GET("/job-types/:jobType/config", h.adminServer.GetPluginJobTypeConfigAPI)
			pluginApi.PUT("/job-types/:jobType/config", h.adminServer.UpdatePluginJobTypeConfigAPI)
			pluginApi.GET("/job-types/:jobType/runs", h.adminServer.GetPluginRunHistoryAPI)
			pluginApi.POST("/job-types/:jobType/detect", h.adminServer.TriggerPluginDetectionAPI)
			pluginApi.POST("/job-types/:jobType/run", h.adminServer.RunPluginJobTypeAPI)
			pluginApi.POST("/jobs/execute", h.adminServer.ExecutePluginJobAPI)
		}

		// Message Queue API routes
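The routes above serve JSON, which is where the "refactor time.Time fields to pointers for better JSON omitempty support" bullet bites: omitempty never fires for a value-typed time.Time because its zero value is a non-empty struct, so responses carry "0001-01-01T00:00:00Z" placeholders; with pointers, nil fields are simply omitted. An illustrative sketch (JobRecord is a made-up type; timeToPtr matches the helper named in the commit message):

package plugin

import (
	"encoding/json"
	"fmt"
	"time"
)

type JobRecord struct {
	CreatedAt   *time.Time `json:"createdAt,omitempty"`
	CompletedAt *time.Time `json:"completedAt,omitempty"`
}

// timeToPtr makes a copy so the pointer does not alias a loop variable.
func timeToPtr(t time.Time) *time.Time { return &t }

func main() {
	now := time.Now()
	b, _ := json.Marshal(JobRecord{CreatedAt: timeToPtr(now)})
	fmt.Println(string(b)) // completedAt is omitted instead of a zero timestamp
}

The cost of the refactor is that every assignment, comparison, and sort now needs a nil check, which is what the follow-up "pointer-safe time assignments" and "activityLess helper" bullets clean up.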
weed/admin/handlers/admin_handlers_routes_test.go (new file, 95 lines)
@@ -0,0 +1,95 @@
package handlers

import (
	"testing"

	"github.com/gin-gonic/gin"
	"github.com/seaweedfs/seaweedfs/weed/admin/dash"
)

func TestSetupRoutes_RegistersPluginSchedulerStatesAPI_NoAuth(t *testing.T) {
	gin.SetMode(gin.TestMode)
	router := gin.New()

	newRouteTestAdminHandlers().SetupRoutes(router, false, "", "", "", "", true)

	if !hasRoute(router, "GET", "/api/plugin/scheduler-states") {
		t.Fatalf("expected GET /api/plugin/scheduler-states to be registered in no-auth mode")
	}
	if !hasRoute(router, "GET", "/api/plugin/jobs/:jobId/detail") {
		t.Fatalf("expected GET /api/plugin/jobs/:jobId/detail to be registered in no-auth mode")
	}
}

func TestSetupRoutes_RegistersPluginSchedulerStatesAPI_WithAuth(t *testing.T) {
	gin.SetMode(gin.TestMode)
	router := gin.New()

	newRouteTestAdminHandlers().SetupRoutes(router, true, "admin", "password", "", "", true)

	if !hasRoute(router, "GET", "/api/plugin/scheduler-states") {
		t.Fatalf("expected GET /api/plugin/scheduler-states to be registered in auth mode")
	}
	if !hasRoute(router, "GET", "/api/plugin/jobs/:jobId/detail") {
		t.Fatalf("expected GET /api/plugin/jobs/:jobId/detail to be registered in auth mode")
	}
}

func TestSetupRoutes_RegistersPluginPages_NoAuth(t *testing.T) {
	gin.SetMode(gin.TestMode)
	router := gin.New()

	newRouteTestAdminHandlers().SetupRoutes(router, false, "", "", "", "", true)

	assertHasRoute(t, router, "GET", "/plugin")
	assertHasRoute(t, router, "GET", "/plugin/configuration")
	assertHasRoute(t, router, "GET", "/plugin/queue")
	assertHasRoute(t, router, "GET", "/plugin/detection")
	assertHasRoute(t, router, "GET", "/plugin/execution")
	assertHasRoute(t, router, "GET", "/plugin/monitoring")
}

func TestSetupRoutes_RegistersPluginPages_WithAuth(t *testing.T) {
	gin.SetMode(gin.TestMode)
	router := gin.New()

	newRouteTestAdminHandlers().SetupRoutes(router, true, "admin", "password", "", "", true)

	assertHasRoute(t, router, "GET", "/plugin")
	assertHasRoute(t, router, "GET", "/plugin/configuration")
	assertHasRoute(t, router, "GET", "/plugin/queue")
	assertHasRoute(t, router, "GET", "/plugin/detection")
	assertHasRoute(t, router, "GET", "/plugin/execution")
	assertHasRoute(t, router, "GET", "/plugin/monitoring")
}

func newRouteTestAdminHandlers() *AdminHandlers {
	adminServer := &dash.AdminServer{}
	return &AdminHandlers{
		adminServer:            adminServer,
		authHandlers:           &AuthHandlers{adminServer: adminServer},
		clusterHandlers:        &ClusterHandlers{adminServer: adminServer},
		fileBrowserHandlers:    &FileBrowserHandlers{adminServer: adminServer},
		userHandlers:           &UserHandlers{adminServer: adminServer},
		policyHandlers:         &PolicyHandlers{adminServer: adminServer},
		pluginHandlers:         &PluginHandlers{adminServer: adminServer},
		mqHandlers:             &MessageQueueHandlers{adminServer: adminServer},
		serviceAccountHandlers: &ServiceAccountHandlers{adminServer: adminServer},
	}
}

func hasRoute(router *gin.Engine, method string, path string) bool {
	for _, route := range router.Routes() {
		if route.Method == method && route.Path == path {
			return true
		}
	}
	return false
}

func assertHasRoute(t *testing.T, router *gin.Engine, method string, path string) {
	t.Helper()
	if !hasRoute(router, method, path) {
		t.Fatalf("expected %s %s to be registered", method, path)
	}
}
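These tests only check route registration; the write endpoints they cover are also hardened by the "limit parseProtoJSONBody request body to 1MB to prevent unbounded memory usage" bullet. A sketch of how such a cap is typically implemented with the standard library; the helper name and constant are illustrative, not the actual SeaweedFS code:

package handlers

import (
	"io"
	"net/http"
)

const maxBodyBytes = 1 << 20 // 1 MiB

// readLimitedBody wraps the request body so that reading past the cap
// fails instead of buffering an arbitrarily large payload in memory.
func readLimitedBody(w http.ResponseWriter, r *http.Request) ([]byte, error) {
	r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)
	defer r.Body.Close()
	return io.ReadAll(r.Body) // returns an error once the cap is exceeded
}

http.MaxBytesReader also closes the connection on overflow, so a misbehaving client cannot keep streaming after the limit is hit.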
@@ -1,550 +0,0 @@
package handlers

import (
	"context"
	"fmt"
	"net/http"
	"reflect"
	"strconv"
	"strings"
	"time"

	"github.com/gin-gonic/gin"
	"github.com/seaweedfs/seaweedfs/weed/admin/config"
	"github.com/seaweedfs/seaweedfs/weed/admin/dash"
	"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
	"github.com/seaweedfs/seaweedfs/weed/admin/view/app"
	"github.com/seaweedfs/seaweedfs/weed/admin/view/layout"
	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"
	"github.com/seaweedfs/seaweedfs/weed/worker/types"
)

// MaintenanceHandlers handles maintenance-related HTTP requests
type MaintenanceHandlers struct {
	adminServer *dash.AdminServer
}

// NewMaintenanceHandlers creates a new instance of MaintenanceHandlers
func NewMaintenanceHandlers(adminServer *dash.AdminServer) *MaintenanceHandlers {
	return &MaintenanceHandlers{
		adminServer: adminServer,
	}
}

// ShowTaskDetail displays the task detail page
func (h *MaintenanceHandlers) ShowTaskDetail(c *gin.Context) {
	taskID := c.Param("id")

	if h.adminServer == nil {
		c.String(http.StatusInternalServerError, "Admin server not initialized")
		return
	}

	taskDetail, err := h.adminServer.GetMaintenanceTaskDetail(taskID)
	if err != nil {
		glog.Errorf("DEBUG ShowTaskDetail: error getting task detail for %s: %v", taskID, err)
		c.String(http.StatusNotFound, "Task not found: %s (Error: %v)", taskID, err)
		return
	}

	c.Header("Content-Type", "text/html")
	taskDetailComponent := app.TaskDetail(taskDetail)
	layoutComponent := layout.Layout(c, taskDetailComponent)
	err = layoutComponent.Render(c.Request.Context(), c.Writer)
	if err != nil {
		glog.Errorf("DEBUG ShowTaskDetail: render error: %v", err)
		c.String(http.StatusInternalServerError, "Failed to render template: %v", err)
		return
	}
}

// ShowMaintenanceQueue displays the maintenance queue page
func (h *MaintenanceHandlers) ShowMaintenanceQueue(c *gin.Context) {
	// Add timeout to prevent hanging
	ctx, cancel := context.WithTimeout(c.Request.Context(), 30*time.Second)
	defer cancel()

	// Use a channel to handle timeout for data retrieval
	type result struct {
		data *maintenance.MaintenanceQueueData
		err  error
	}
	resultChan := make(chan result, 1)

	go func() {
		data, err := h.getMaintenanceQueueData()
		resultChan <- result{data: data, err: err}
	}()

	select {
	case res := <-resultChan:
		if res.err != nil {
			glog.V(1).Infof("ShowMaintenanceQueue: error getting data: %v", res.err)
			c.JSON(http.StatusInternalServerError, gin.H{"error": res.err.Error()})
			return
		}

		glog.V(2).Infof("ShowMaintenanceQueue: got data with %d tasks", len(res.data.Tasks))

		// Render HTML template
		c.Header("Content-Type", "text/html")
		maintenanceComponent := app.MaintenanceQueue(res.data)
		layoutComponent := layout.Layout(c, maintenanceComponent)
		err := layoutComponent.Render(ctx, c.Writer)
		if err != nil {
			glog.V(1).Infof("ShowMaintenanceQueue: render error: %v", err)
			c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to render template: " + err.Error()})
			return
		}

		glog.V(3).Infof("ShowMaintenanceQueue: template rendered successfully")

	case <-ctx.Done():
		glog.Warningf("ShowMaintenanceQueue: timeout waiting for data")
		c.JSON(http.StatusRequestTimeout, gin.H{
			"error":      "Request timeout - maintenance data retrieval took too long. This may indicate a system issue.",
			"suggestion": "Try refreshing the page or contact system administrator if the problem persists.",
		})
		return
	}
}

// ShowMaintenanceWorkers displays the maintenance workers page
func (h *MaintenanceHandlers) ShowMaintenanceWorkers(c *gin.Context) {
	if h.adminServer == nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "Admin server not initialized"})
		return
	}
	workersData, err := h.adminServer.GetMaintenanceWorkersData()
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	// Render HTML template
	c.Header("Content-Type", "text/html")
	workersComponent := app.MaintenanceWorkers(workersData)
	layoutComponent := layout.Layout(c, workersComponent)
	err = layoutComponent.Render(c.Request.Context(), c.Writer)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to render template: " + err.Error()})
		return
	}
}

// ShowMaintenanceConfig displays the maintenance configuration page
func (h *MaintenanceHandlers) ShowMaintenanceConfig(c *gin.Context) {
	config, err := h.getMaintenanceConfig()
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	// Get the schema for dynamic form rendering
	schema := maintenance.GetMaintenanceConfigSchema()

	// Render HTML template using schema-driven approach
	c.Header("Content-Type", "text/html")
	configComponent := app.MaintenanceConfigSchema(config, schema)
	layoutComponent := layout.Layout(c, configComponent)
	err = layoutComponent.Render(c.Request.Context(), c.Writer)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to render template: " + err.Error()})
		return
	}
}

// ShowTaskConfig displays the configuration page for a specific task type
func (h *MaintenanceHandlers) ShowTaskConfig(c *gin.Context) {
	taskTypeName := c.Param("taskType")

	// Get the schema for this task type
	schema := tasks.GetTaskConfigSchema(taskTypeName)
	if schema == nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "Task type not found or no schema available"})
		return
	}

	// Get the UI provider for current configuration
	uiRegistry := tasks.GetGlobalUIRegistry()
	typesRegistry := tasks.GetGlobalTypesRegistry()

	var provider types.TaskUIProvider
	for workerTaskType := range typesRegistry.GetAllDetectors() {
		if string(workerTaskType) == taskTypeName {
			provider = uiRegistry.GetProvider(workerTaskType)
			break
		}
	}

	if provider == nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "UI provider not found for task type"})
		return
	}

	// Get current configuration
	currentConfig := provider.GetCurrentConfig()

	// Note: Do NOT apply schema defaults to current config as it overrides saved values
	// Only apply defaults when creating new configs, not when displaying existing ones

	// Create task configuration data
	configData := &maintenance.TaskConfigData{
		TaskType:    maintenance.MaintenanceTaskType(taskTypeName),
		TaskName:    schema.DisplayName,
		TaskIcon:    schema.Icon,
		Description: schema.Description,
	}

	// Render HTML template using schema-based approach
	c.Header("Content-Type", "text/html")
	taskConfigComponent := app.TaskConfigSchema(configData, schema, currentConfig)
	layoutComponent := layout.Layout(c, taskConfigComponent)
	err := layoutComponent.Render(c.Request.Context(), c.Writer)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to render template: " + err.Error()})
		return
	}
}

// UpdateTaskConfig updates task configuration from form
func (h *MaintenanceHandlers) UpdateTaskConfig(c *gin.Context) {
	taskTypeName := c.Param("taskType")
	taskType := types.TaskType(taskTypeName)

	// Parse form data
	err := c.Request.ParseForm()
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "Failed to parse form data: " + err.Error()})
		return
	}

	// Debug logging - show received form data
	glog.V(1).Infof("Received form data for task type %s:", taskTypeName)
	for key, values := range c.Request.PostForm {
		glog.V(1).Infof("  %s: %v", key, values)
	}

	// Get the task configuration schema
	schema := tasks.GetTaskConfigSchema(taskTypeName)
	if schema == nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "Schema not found for task type: " + taskTypeName})
		return
	}

	// Create a new config instance based on task type and apply schema defaults
	var config TaskConfig
	switch taskType {
	case types.TaskTypeVacuum:
		config = &vacuum.Config{}
	case types.TaskTypeBalance:
		config = &balance.Config{}
	case types.TaskTypeErasureCoding:
		config = &erasure_coding.Config{}
	default:
		c.JSON(http.StatusBadRequest, gin.H{"error": "Unsupported task type: " + taskTypeName})
		return
	}

	// Apply schema defaults first using type-safe method
	if err := schema.ApplyDefaultsToConfig(config); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to apply defaults: " + err.Error()})
		return
	}

	// First, get the current configuration to preserve existing values
	currentUIRegistry := tasks.GetGlobalUIRegistry()
	currentTypesRegistry := tasks.GetGlobalTypesRegistry()

	var currentProvider types.TaskUIProvider
	for workerTaskType := range currentTypesRegistry.GetAllDetectors() {
		if string(workerTaskType) == string(taskType) {
			currentProvider = currentUIRegistry.GetProvider(workerTaskType)
			break
		}
	}

	if currentProvider != nil {
		// Copy current config values to the new config
		currentConfig := currentProvider.GetCurrentConfig()
		if currentConfigProtobuf, ok := currentConfig.(TaskConfig); ok {
			// Apply current values using protobuf directly - no map conversion needed!
			currentPolicy := currentConfigProtobuf.ToTaskPolicy()
			if err := config.FromTaskPolicy(currentPolicy); err != nil {
				glog.Warningf("Failed to load current config for %s: %v", taskTypeName, err)
			}
		}
	}

	// Parse form data using schema-based approach (this will override with new values)
	err = h.parseTaskConfigFromForm(c.Request.PostForm, schema, config)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "Failed to parse configuration: " + err.Error()})
		return
	}

	// Debug logging - show parsed config values
	switch taskType {
	case types.TaskTypeVacuum:
		if vacuumConfig, ok := config.(*vacuum.Config); ok {
			glog.V(1).Infof("Parsed vacuum config - GarbageThreshold: %f, MinVolumeAgeSeconds: %d, MinIntervalSeconds: %d",
				vacuumConfig.GarbageThreshold, vacuumConfig.MinVolumeAgeSeconds, vacuumConfig.MinIntervalSeconds)
		}
	case types.TaskTypeErasureCoding:
		if ecConfig, ok := config.(*erasure_coding.Config); ok {
			glog.V(1).Infof("Parsed EC config - FullnessRatio: %f, QuietForSeconds: %d, MinSizeMB: %d, CollectionFilter: '%s'",
				ecConfig.FullnessRatio, ecConfig.QuietForSeconds, ecConfig.MinSizeMB, ecConfig.CollectionFilter)
		}
	case types.TaskTypeBalance:
		if balanceConfig, ok := config.(*balance.Config); ok {
			glog.V(1).Infof("Parsed balance config - Enabled: %v, MaxConcurrent: %d, ScanIntervalSeconds: %d, ImbalanceThreshold: %f, MinServerCount: %d",
				balanceConfig.Enabled, balanceConfig.MaxConcurrent, balanceConfig.ScanIntervalSeconds, balanceConfig.ImbalanceThreshold, balanceConfig.MinServerCount)
		}
	}

	// Validate the configuration
	if validationErrors := schema.ValidateConfig(config); len(validationErrors) > 0 {
		errorMessages := make([]string, len(validationErrors))
		for i, err := range validationErrors {
			errorMessages[i] = err.Error()
		}
		c.JSON(http.StatusBadRequest, gin.H{"error": "Configuration validation failed", "details": errorMessages})
		return
	}

	// Apply configuration using UIProvider
	uiRegistry := tasks.GetGlobalUIRegistry()
	typesRegistry := tasks.GetGlobalTypesRegistry()

	var provider types.TaskUIProvider
	for workerTaskType := range typesRegistry.GetAllDetectors() {
		if string(workerTaskType) == string(taskType) {
			provider = uiRegistry.GetProvider(workerTaskType)
			break
		}
	}

	if provider == nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "UI provider not found for task type"})
		return
	}

	// Apply configuration using provider
	err = provider.ApplyTaskConfig(config)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to apply configuration: " + err.Error()})
		return
	}

	// Save task configuration to protobuf file using ConfigPersistence
	if h.adminServer != nil && h.adminServer.GetConfigPersistence() != nil {
		err = h.saveTaskConfigToProtobuf(taskType, config)
		if err != nil {
			glog.Warningf("Failed to save task config to protobuf file: %v", err)
			// Don't fail the request, just log the warning
		}
	} else if h.adminServer == nil {
		glog.Warningf("Failed to save task config: admin server not initialized")
	}

	// Trigger a configuration reload in the maintenance manager
	if h.adminServer != nil {
		if manager := h.adminServer.GetMaintenanceManager(); manager != nil {
			err = manager.ReloadTaskConfigurations()
			if err != nil {
				glog.Warningf("Failed to reload task configurations: %v", err)
			} else {
				glog.V(1).Infof("Successfully reloaded task configurations after updating %s", taskTypeName)
			}
		}
	}

	// Redirect back to task configuration page
	c.Redirect(http.StatusSeeOther, "/maintenance/config/"+taskTypeName)
}

// parseTaskConfigFromForm parses form data using schema definitions
func (h *MaintenanceHandlers) parseTaskConfigFromForm(formData map[string][]string, schema *tasks.TaskConfigSchema, config interface{}) error {
	configValue := reflect.ValueOf(config)
	if configValue.Kind() == reflect.Ptr {
		configValue = configValue.Elem()
	}

	if configValue.Kind() != reflect.Struct {
		return fmt.Errorf("config must be a struct or pointer to struct")
	}

	configType := configValue.Type()

	for i := 0; i < configValue.NumField(); i++ {
		field := configValue.Field(i)
		fieldType := configType.Field(i)

		// Handle embedded structs recursively
		if fieldType.Anonymous && field.Kind() == reflect.Struct {
			err := h.parseTaskConfigFromForm(formData, schema, field.Addr().Interface())
			if err != nil {
				return fmt.Errorf("error parsing embedded struct %s: %w", fieldType.Name, err)
			}
			continue
		}

		// Get JSON tag name
		jsonTag := fieldType.Tag.Get("json")
		if jsonTag == "" {
			continue
		}

		// Remove options like ",omitempty"
		if commaIdx := strings.Index(jsonTag, ","); commaIdx > 0 {
			jsonTag = jsonTag[:commaIdx]
		}

		// Find corresponding schema field
		schemaField := schema.GetFieldByName(jsonTag)
		if schemaField == nil {
			continue
		}

		// Parse value based on field type
		if err := h.parseFieldFromForm(formData, schemaField, field); err != nil {
			return fmt.Errorf("error parsing field %s: %w", schemaField.DisplayName, err)
		}
	}

	return nil
}

// parseFieldFromForm parses a single field value from form data
func (h *MaintenanceHandlers) parseFieldFromForm(formData map[string][]string, schemaField *config.Field, fieldValue reflect.Value) error {
	if !fieldValue.CanSet() {
		return nil
	}

	switch schemaField.Type {
	case config.FieldTypeBool:
		// Checkbox fields - present means true, absent means false
		_, exists := formData[schemaField.JSONName]
		fieldValue.SetBool(exists)

	case config.FieldTypeInt:
		if values, ok := formData[schemaField.JSONName]; ok && len(values) > 0 {
			if intVal, err := strconv.Atoi(values[0]); err != nil {
				return fmt.Errorf("invalid integer value: %s", values[0])
			} else {
				fieldValue.SetInt(int64(intVal))
			}
		}

	case config.FieldTypeFloat:
		if values, ok := formData[schemaField.JSONName]; ok && len(values) > 0 {
			if floatVal, err := strconv.ParseFloat(values[0], 64); err != nil {
				return fmt.Errorf("invalid float value: %s", values[0])
			} else {
				fieldValue.SetFloat(floatVal)
			}
		}

	case config.FieldTypeString:
		if values, ok := formData[schemaField.JSONName]; ok && len(values) > 0 {
			fieldValue.SetString(values[0])
		}

	case config.FieldTypeInterval:
		// Parse interval fields with value + unit
		valueKey := schemaField.JSONName + "_value"
		unitKey := schemaField.JSONName + "_unit"

		if valueStrs, ok := formData[valueKey]; ok && len(valueStrs) > 0 {
			value, err := strconv.Atoi(valueStrs[0])
			if err != nil {
				return fmt.Errorf("invalid interval value: %s", valueStrs[0])
			}

			unit := "minutes" // default
			if unitStrs, ok := formData[unitKey]; ok && len(unitStrs) > 0 {
				unit = unitStrs[0]
			}

			// Convert to seconds
			seconds := config.IntervalValueUnitToSeconds(value, unit)
			fieldValue.SetInt(int64(seconds))
		}

	default:
		return fmt.Errorf("unsupported field type: %s", schemaField.Type)
	}

	return nil
}

// UpdateMaintenanceConfig updates maintenance configuration from form
func (h *MaintenanceHandlers) UpdateMaintenanceConfig(c *gin.Context) {
	var config maintenance.MaintenanceConfig
	if err := c.ShouldBind(&config); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	err := h.updateMaintenanceConfig(&config)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.Redirect(http.StatusSeeOther, "/maintenance/config")
}

// Helper methods that delegate to AdminServer

func (h *MaintenanceHandlers) getMaintenanceQueueData() (*maintenance.MaintenanceQueueData, error) {
	if h.adminServer == nil {
		return nil, fmt.Errorf("admin server not initialized")
	}
	// Use the exported method from AdminServer used by the JSON API
	return h.adminServer.GetMaintenanceQueueData()
}

func (h *MaintenanceHandlers) getMaintenanceConfig() (*maintenance.MaintenanceConfigData, error) {
	if h.adminServer == nil {
		return nil, fmt.Errorf("admin server not initialized")
	}
	// Delegate to AdminServer's real persistence method
	return h.adminServer.GetMaintenanceConfigData()
}

func (h *MaintenanceHandlers) updateMaintenanceConfig(config *maintenance.MaintenanceConfig) error {
	if h.adminServer == nil {
		return fmt.Errorf("admin server not initialized")
	}
	// Delegate to AdminServer's real persistence method
	return h.adminServer.UpdateMaintenanceConfigData(config)
}

// saveTaskConfigToProtobuf saves task configuration to protobuf file
func (h *MaintenanceHandlers) saveTaskConfigToProtobuf(taskType types.TaskType, config TaskConfig) error {
	configPersistence := h.adminServer.GetConfigPersistence()
	if configPersistence == nil {
		return fmt.Errorf("config persistence not available")
	}

	// Use the new ToTaskPolicy method - much simpler and more maintainable!
	taskPolicy := config.ToTaskPolicy()

	// Save using task-specific methods
	switch taskType {
	case types.TaskTypeVacuum:
		return configPersistence.SaveVacuumTaskPolicy(taskPolicy)
	case types.TaskTypeErasureCoding:
		return configPersistence.SaveErasureCodingTaskPolicy(taskPolicy)
	case types.TaskTypeBalance:
		return configPersistence.SaveBalanceTaskPolicy(taskPolicy)
	default:
		return fmt.Errorf("unsupported task type for protobuf persistence: %s", taskType)
	}
}
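The plugin runtime that replaces this deleted maintenance handler has its own shutdown hazards, which the "fix racy Shutdown channel close with sync.Once" bullet addresses: two concurrent Shutdown calls racing on close(ch) will panic on the second close. sync.Once makes the close idempotent. A minimal sketch with illustrative field names, not the actual SeaweedFS code:

package plugin

import "sync"

type Runtime struct {
	shutdownOnce sync.Once
	shutdownCh   chan struct{}
}

// Shutdown can be called from multiple goroutines; only the first call
// actually closes the channel, the rest are no-ops.
func (r *Runtime) Shutdown() {
	r.shutdownOnce.Do(func() { close(r.shutdownCh) })
}

Loops then select on shutdownCh instead of matching error strings, which is the same idea as the "replaced brittle error string matching with explicit r.shutdownCh selection" bullet.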
@@ -1,389 +0,0 @@
package handlers

import (
	"net/url"
	"testing"

	"github.com/seaweedfs/seaweedfs/weed/admin/config"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks/base"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
	"github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"
)

func TestParseTaskConfigFromForm_WithEmbeddedStruct(t *testing.T) {
	// Create a maintenance handlers instance for testing
	h := &MaintenanceHandlers{}

	// Test with balance config
	t.Run("Balance Config", func(t *testing.T) {
		// Simulate form data
		formData := url.Values{
			"enabled":                     {"on"},      // checkbox field
			"scan_interval_seconds_value": {"30"},      // interval field
			"scan_interval_seconds_unit":  {"minutes"}, // interval unit
			"max_concurrent":              {"2"},       // number field
			"imbalance_threshold":         {"0.15"},    // float field
			"min_server_count":            {"3"},       // number field
		}

		// Get schema
		schema := tasks.GetTaskConfigSchema("balance")
		if schema == nil {
			t.Fatal("Failed to get balance schema")
		}

		// Create config instance
		config := &balance.Config{}

		// Parse form data
		err := h.parseTaskConfigFromForm(formData, schema, config)
		if err != nil {
			t.Fatalf("Failed to parse form data: %v", err)
		}

		// Verify embedded struct fields were set correctly
		if !config.Enabled {
			t.Errorf("Expected Enabled=true, got %v", config.Enabled)
		}

		if config.ScanIntervalSeconds != 1800 { // 30 minutes * 60
			t.Errorf("Expected ScanIntervalSeconds=1800, got %v", config.ScanIntervalSeconds)
		}

		if config.MaxConcurrent != 2 {
			t.Errorf("Expected MaxConcurrent=2, got %v", config.MaxConcurrent)
		}

		// Verify balance-specific fields were set correctly
		if config.ImbalanceThreshold != 0.15 {
			t.Errorf("Expected ImbalanceThreshold=0.15, got %v", config.ImbalanceThreshold)
		}

		if config.MinServerCount != 3 {
			t.Errorf("Expected MinServerCount=3, got %v", config.MinServerCount)
		}
	})

	// Test with vacuum config
	t.Run("Vacuum Config", func(t *testing.T) {
		// Simulate form data
		formData := url.Values{
			// "enabled" field omitted to simulate unchecked checkbox
			"scan_interval_seconds_value":  {"4"},     // interval field
			"scan_interval_seconds_unit":   {"hours"}, // interval unit
			"max_concurrent":               {"3"},     // number field
			"garbage_threshold":            {"0.4"},   // float field
			"min_volume_age_seconds_value": {"2"},     // interval field
			"min_volume_age_seconds_unit":  {"days"},  // interval unit
			"min_interval_seconds_value":   {"1"},     // interval field
			"min_interval_seconds_unit":    {"days"},  // interval unit
		}

		// Get schema
		schema := tasks.GetTaskConfigSchema("vacuum")
		if schema == nil {
			t.Fatal("Failed to get vacuum schema")
		}

		// Create config instance
		config := &vacuum.Config{}

		// Parse form data
		err := h.parseTaskConfigFromForm(formData, schema, config)
		if err != nil {
			t.Fatalf("Failed to parse form data: %v", err)
		}

		// Verify embedded struct fields were set correctly
		if config.Enabled {
			t.Errorf("Expected Enabled=false, got %v", config.Enabled)
		}

		if config.ScanIntervalSeconds != 14400 { // 4 hours * 3600
			t.Errorf("Expected ScanIntervalSeconds=14400, got %v", config.ScanIntervalSeconds)
		}

		if config.MaxConcurrent != 3 {
			t.Errorf("Expected MaxConcurrent=3, got %v", config.MaxConcurrent)
		}

		// Verify vacuum-specific fields were set correctly
		if config.GarbageThreshold != 0.4 {
			t.Errorf("Expected GarbageThreshold=0.4, got %v", config.GarbageThreshold)
		}

		if config.MinVolumeAgeSeconds != 172800 { // 2 days * 86400
			t.Errorf("Expected MinVolumeAgeSeconds=172800, got %v", config.MinVolumeAgeSeconds)
		}

		if config.MinIntervalSeconds != 86400 { // 1 day * 86400
			t.Errorf("Expected MinIntervalSeconds=86400, got %v", config.MinIntervalSeconds)
		}
	})

	// Test with erasure coding config
	t.Run("Erasure Coding Config", func(t *testing.T) {
		// Simulate form data
		formData := url.Values{
			"enabled":                     {"on"},              // checkbox field
			"scan_interval_seconds_value": {"2"},               // interval field
			"scan_interval_seconds_unit":  {"hours"},           // interval unit
			"max_concurrent":              {"1"},               // number field
			"quiet_for_seconds_value":     {"10"},              // interval field
			"quiet_for_seconds_unit":      {"minutes"},         // interval unit
			"fullness_ratio":              {"0.85"},            // float field
			"collection_filter":           {"test_collection"}, // string field
			"min_size_mb":                 {"50"},              // number field
		}

		// Get schema
		schema := tasks.GetTaskConfigSchema("erasure_coding")
		if schema == nil {
			t.Fatal("Failed to get erasure_coding schema")
		}

		// Create config instance
		config := &erasure_coding.Config{}

		// Parse form data
		err := h.parseTaskConfigFromForm(formData, schema, config)
		if err != nil {
			t.Fatalf("Failed to parse form data: %v", err)
		}

		// Verify embedded struct fields were set correctly
		if !config.Enabled {
			t.Errorf("Expected Enabled=true, got %v", config.Enabled)
		}

		if config.ScanIntervalSeconds != 7200 { // 2 hours * 3600
			t.Errorf("Expected ScanIntervalSeconds=7200, got %v", config.ScanIntervalSeconds)
		}

		if config.MaxConcurrent != 1 {
			t.Errorf("Expected MaxConcurrent=1, got %v", config.MaxConcurrent)
		}

		// Verify erasure coding-specific fields were set correctly
		if config.QuietForSeconds != 600 { // 10 minutes * 60
			t.Errorf("Expected QuietForSeconds=600, got %v", config.QuietForSeconds)
		}

		if config.FullnessRatio != 0.85 {
			t.Errorf("Expected FullnessRatio=0.85, got %v", config.FullnessRatio)
		}

		if config.CollectionFilter != "test_collection" {
			t.Errorf("Expected CollectionFilter='test_collection', got %v", config.CollectionFilter)
		}

		if config.MinSizeMB != 50 {
			t.Errorf("Expected MinSizeMB=50, got %v", config.MinSizeMB)
		}
	})
}

func TestConfigurationValidation(t *testing.T) {
	// Test that config structs can be validated and converted to protobuf format
	taskTypes := []struct {
		name   string
		config interface{}
	}{
		{
			"balance",
			&balance.Config{
				BaseConfig: base.BaseConfig{
					Enabled:             true,
					ScanIntervalSeconds: 2400,
					MaxConcurrent:       3,
				},
				ImbalanceThreshold: 0.18,
				MinServerCount:     4,
			},
		},
		{
			"vacuum",
			&vacuum.Config{
				BaseConfig: base.BaseConfig{
					Enabled:             false,
					ScanIntervalSeconds: 7200,
					MaxConcurrent:       2,
				},
				GarbageThreshold:    0.35,
				MinVolumeAgeSeconds: 86400,
				MinIntervalSeconds:  604800,
			},
		},
		{
			"erasure_coding",
			&erasure_coding.Config{
				BaseConfig: base.BaseConfig{
					Enabled:             true,
					ScanIntervalSeconds: 3600,
					MaxConcurrent:       1,
				},
				QuietForSeconds:  900,
				FullnessRatio:    0.9,
				CollectionFilter: "important",
				MinSizeMB:        100,
			},
		},
	}

	for _, test := range taskTypes {
		t.Run(test.name, func(t *testing.T) {
			// Test that configs can be converted to protobuf TaskPolicy
			switch cfg := test.config.(type) {
			case *balance.Config:
				policy := cfg.ToTaskPolicy()
				if policy == nil {
					t.Fatal("ToTaskPolicy returned nil")
				}
				if policy.Enabled != cfg.Enabled {
					t.Errorf("Expected Enabled=%v, got %v", cfg.Enabled, policy.Enabled)
				}
				if policy.MaxConcurrent != int32(cfg.MaxConcurrent) {
					t.Errorf("Expected MaxConcurrent=%v, got %v", cfg.MaxConcurrent, policy.MaxConcurrent)
				}
			case *vacuum.Config:
				policy := cfg.ToTaskPolicy()
				if policy == nil {
					t.Fatal("ToTaskPolicy returned nil")
				}
				if policy.Enabled != cfg.Enabled {
					t.Errorf("Expected Enabled=%v, got %v", cfg.Enabled, policy.Enabled)
				}
				if policy.MaxConcurrent != int32(cfg.MaxConcurrent) {
					t.Errorf("Expected MaxConcurrent=%v, got %v", cfg.MaxConcurrent, policy.MaxConcurrent)
				}
			case *erasure_coding.Config:
				policy := cfg.ToTaskPolicy()
				if policy == nil {
					t.Fatal("ToTaskPolicy returned nil")
				}
				if policy.Enabled != cfg.Enabled {
					t.Errorf("Expected Enabled=%v, got %v", cfg.Enabled, policy.Enabled)
				}
				if policy.MaxConcurrent != int32(cfg.MaxConcurrent) {
					t.Errorf("Expected MaxConcurrent=%v, got %v", cfg.MaxConcurrent, policy.MaxConcurrent)
				}
			default:
				t.Fatalf("Unknown config type: %T", test.config)
			}

			// Test that configs can be validated
			switch cfg := test.config.(type) {
			case *balance.Config:
				if err := cfg.Validate(); err != nil {
					t.Errorf("Validation failed: %v", err)
				}
			case *vacuum.Config:
				if err := cfg.Validate(); err != nil {
					t.Errorf("Validation failed: %v", err)
				}
			case *erasure_coding.Config:
				if err := cfg.Validate(); err != nil {
					t.Errorf("Validation failed: %v", err)
				}
			}
		})
	}
}

func TestParseFieldFromForm_EdgeCases(t *testing.T) {
	h := &MaintenanceHandlers{}

	// Test checkbox parsing (boolean fields)
	t.Run("Checkbox Fields", func(t *testing.T) {
		tests := []struct {
			name          string
			formData      url.Values
			expectedValue bool
		}{
			{"Checked checkbox", url.Values{"test_field": {"on"}}, true},
			{"Unchecked checkbox", url.Values{}, false},
			{"Empty value checkbox", url.Values{"test_field": {""}}, true}, // Present but empty means checked
		}

		for _, test := range tests {
			t.Run(test.name, func(t *testing.T) {
				schema := &tasks.TaskConfigSchema{
					Schema: config.Schema{
						Fields: []*config.Field{
							{
								JSONName:  "test_field",
								Type:      config.FieldTypeBool,
								InputType: "checkbox",
							},
						},
					},
				}

				type TestConfig struct {
					TestField bool `json:"test_field"`
				}

				config := &TestConfig{}
				err := h.parseTaskConfigFromForm(test.formData, schema, config)
				if err != nil {
					t.Fatalf("parseTaskConfigFromForm failed: %v", err)
				}

				if config.TestField != test.expectedValue {
					t.Errorf("Expected %v, got %v", test.expectedValue, config.TestField)
				}
			})
		}
	})

	// Test interval parsing
	t.Run("Interval Fields", func(t *testing.T) {
		tests := []struct {
			name         string
			value        string
			unit         string
			expectedSecs int
		}{
			{"Minutes", "30", "minutes", 1800},
			{"Hours", "2", "hours", 7200},
			{"Days", "1", "days", 86400},
		}

		for _, test := range tests {
			t.Run(test.name, func(t *testing.T) {
				formData := url.Values{
					"test_field_value": {test.value},
					"test_field_unit":  {test.unit},
				}

				schema := &tasks.TaskConfigSchema{
					Schema: config.Schema{
						Fields: []*config.Field{
							{
								JSONName:  "test_field",
								Type:      config.FieldTypeInterval,
								InputType: "interval",
							},
						},
					},
				}

				type TestConfig struct {
					TestField int `json:"test_field"`
				}

				config := &TestConfig{}
				err := h.parseTaskConfigFromForm(formData, schema, config)
				if err != nil {
					t.Fatalf("parseTaskConfigFromForm failed: %v", err)
				}

				if config.TestField != test.expectedSecs {
					t.Errorf("Expected %d seconds, got %d", test.expectedSecs, config.TestField)
				}
			})
		}
	})
}
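On the persistence side, the "implement debounced persistence flusher" and "fixed debounced persistence to trigger immediate write on job completion" bullets describe coalescing disk writes on a timer while flushing terminal events right away. A sketch of one way to structure that; all names are illustrative, not the actual SeaweedFS code:

package plugin

import (
	"sync"
	"time"
)

type flusher struct {
	mu      sync.Mutex
	timer   *time.Timer
	persist func() // the actual write, e.g. an atomic snapshot to disk
}

// markDirty schedules a write after the debounce window, resetting the
// window if another change arrives first.
func (f *flusher) markDirty(delay time.Duration) {
	f.mu.Lock()
	defer f.mu.Unlock()
	if f.timer != nil {
		f.timer.Stop()
	}
	f.timer = time.AfterFunc(delay, f.persist)
}

// flushNow bypasses the debounce, e.g. on job completion, so a crash
// right after a terminal state cannot lose the final record.
func (f *flusher) flushNow() {
	f.mu.Lock()
	if f.timer != nil {
		f.timer.Stop()
		f.timer = nil
	}
	f.mu.Unlock()
	f.persist()
}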
weed/admin/handlers/plugin_handlers.go (new file, 67 lines)
@@ -0,0 +1,67 @@
package handlers

import (
	"bytes"
	"net/http"

	"github.com/gin-gonic/gin"
	"github.com/seaweedfs/seaweedfs/weed/admin/dash"
	"github.com/seaweedfs/seaweedfs/weed/admin/view/app"
	"github.com/seaweedfs/seaweedfs/weed/admin/view/layout"
)

// PluginHandlers handles plugin UI pages.
type PluginHandlers struct {
	adminServer *dash.AdminServer
}

// NewPluginHandlers creates a new instance of PluginHandlers.
func NewPluginHandlers(adminServer *dash.AdminServer) *PluginHandlers {
	return &PluginHandlers{
		adminServer: adminServer,
	}
}

// ShowPlugin displays plugin overview page.
func (h *PluginHandlers) ShowPlugin(c *gin.Context) {
	h.renderPluginPage(c, "overview")
}

// ShowPluginConfiguration displays plugin configuration page.
func (h *PluginHandlers) ShowPluginConfiguration(c *gin.Context) {
	h.renderPluginPage(c, "configuration")
}

// ShowPluginDetection displays plugin detection jobs page.
func (h *PluginHandlers) ShowPluginDetection(c *gin.Context) {
	h.renderPluginPage(c, "detection")
}

// ShowPluginQueue displays plugin job queue page.
func (h *PluginHandlers) ShowPluginQueue(c *gin.Context) {
	h.renderPluginPage(c, "queue")
}

// ShowPluginExecution displays plugin execution jobs page.
func (h *PluginHandlers) ShowPluginExecution(c *gin.Context) {
	h.renderPluginPage(c, "execution")
}

// ShowPluginMonitoring displays plugin monitoring page.
func (h *PluginHandlers) ShowPluginMonitoring(c *gin.Context) {
	// Backward-compatible alias for the old monitoring URL.
	h.renderPluginPage(c, "detection")
}

func (h *PluginHandlers) renderPluginPage(c *gin.Context, page string) {
	component := app.Plugin(page)
	layoutComponent := layout.Layout(c, component)

	var buf bytes.Buffer
	if err := layoutComponent.Render(c.Request.Context(), &buf); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to render template: " + err.Error()})
		return
	}

	c.Data(http.StatusOK, "text/html; charset=utf-8", buf.Bytes())
}
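renderPluginPage above is the concrete shape of the "implement buffered rendering to prevent response corruption" bullet: rendering straight into c.Writer commits the 200 status on the first write, so a template error halfway through would leave the client a truncated page with no way to switch to an error response. Rendering into a bytes.Buffer first costs one extra copy per page but guarantees the handler sends either the complete page or a clean 500.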