Admin: misc improvements on admin server and workers. EC now works. (#7055)

* initial design

* added simulation as tests

* reorganized the codebase to move the simulation framework and tests into their own dedicated package

* integration test. ec worker task

* remove "enhanced" reference

* start master, volume servers, filer

Current Status
✅ Master: Healthy and running (port 9333)
✅ Filer: Healthy and running (port 8888)
✅ Volume Servers: All 6 servers running (ports 8080-8085)
🔄 Admin/Workers: Will start when dependencies are ready

* generate write load

* tasks are assigned

* admin starts with grpc port. worker has its own working directory

* Update .gitignore

* working worker and admin. Task detection is not working yet.

* compiles, detection uses volumeSizeLimitMB from master

* compiles

* worker retries connecting to admin

* build and restart

* rendering pending tasks

* skip task ID column

* sticky worker id

* test canScheduleTaskNow

* worker reconnect to admin

* clean up logs

* worker registers itself first

* worker can run ec work and report status

but:
1. one volume should not be repeatedly worked on.
2. ec shards need to be distributed and the source data should be deleted.

* move ec task logic

* listing ec shards

* local copy, ec. Need to distribute.

* ec is mostly working now

* distribution of ec shards needs improvement
* need configuration to enable ec

* show ec volumes

* interval field UI component

* rename

* integration test with vacuuming

* garbage percentage threshold

* fix warning

* display ec shard sizes

* fix ec volumes list

* Update ui.go

* show default values

* ensure correct default value

* MaintenanceConfig use ConfigField

* use schema defined defaults

* config

* reduce duplication

* refactor to use BaseUIProvider

* each task registers its schema

* checkECEncodingCandidate use ecDetector

* use vacuumDetector

* use volumeSizeLimitMB

* remove

* remove unused

* refactor

* use new framework

* remove v2 reference

* refactor

* left menu can scroll now

* The maintenance manager was not being initialized when no data directory was configured for persistent storage.

* saving config

* Update task_config_schema_templ.go

* enable/disable tasks

* protobuf encoded task configurations

* fix system settings

* use ui component

* remove logs

* interface{} Reduction

* reduce interface{}

* reduce interface{}

* avoid from/to map

* reduce interface{}

* refactor

* keep it DRY

* added logging

* debug messages

* debug level

* debug

* show the log caller line

* use configured task policy

* log level

* handle admin heartbeat response

* Update worker.go

* fix EC rack and dc count

* Report task status to admin server

* fix task logging, simplify interface checking, use erasure_coding constants

* factor in empty volume server during task planning

* volume.list adds disk id

* track disk id also

* fix locking scheduled and manual scanning

* add active topology

* simplify task detector

* ec task completed, but shards are not showing up

* implement ec in ec_typed.go

* adjust log level

* dedup

* implementing ec copying shards and only ecx files

* use disk id when distributing ec shards

🎯 Planning: ActiveTopology creates DestinationPlan with specific TargetDisk
📦 Task Creation: maintenance_integration.go creates ECDestination with DiskId
🚀 Task Execution: EC task passes DiskId in VolumeEcShardsCopyRequest
💾 Volume Server: Receives disk_id and stores shards on specific disk (vs.store.Locations[req.DiskId])
📂 File System: EC shards and metadata land in the exact disk directory planned
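
A minimal, self-contained sketch of that flow (the types here are hypothetical stand-ins, not the real protobuf messages; only the DiskId plumbing mirrors the commit):

package main

import "fmt"

// Hypothetical stand-ins for DestinationPlan and VolumeEcShardsCopyRequest.
type DestinationPlan struct{ TargetDisk uint32 }
type VolumeEcShardsCopyRequest struct{ DiskId uint32 }

func main() {
	// Planning: ActiveTopology picks a specific target disk.
	plan := DestinationPlan{TargetDisk: 2}
	// Task execution: the EC task carries the planned disk id in the copy request.
	req := VolumeEcShardsCopyRequest{DiskId: plan.TargetDisk}
	// Volume server: req.DiskId indexes into store.Locations to pick the disk directory.
	fmt.Println("EC shards stored on disk", req.DiskId)
}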

* Delete original volume from all locations

* clean up existing shard locations

* local encoding and distributing

* Update docker/admin_integration/EC-TESTING-README.md

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* check volume id range

* simplify

* fix tests

* fix types

* clean up logs and tests

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Commit 891a2fb6eb by Chris Lu, committed via GitHub on 2025-07-30 12:38:03 -07:00 (parent 64198dad83). 130 changed files with 27737 additions and 4429 deletions.

weed/admin/config/schema.go (new file, 360 lines)

@@ -0,0 +1,360 @@
package config
import (
"fmt"
"reflect"
"strings"
"time"
)
// ConfigWithDefaults defines an interface for configurations that can apply their own defaults
type ConfigWithDefaults interface {
// ApplySchemaDefaults applies default values using the provided schema
ApplySchemaDefaults(schema *Schema) error
// Validate validates the configuration
Validate() error
}
// FieldType defines the type of a configuration field
type FieldType string
const (
FieldTypeBool FieldType = "bool"
FieldTypeInt FieldType = "int"
FieldTypeDuration FieldType = "duration"
FieldTypeInterval FieldType = "interval"
FieldTypeString FieldType = "string"
FieldTypeFloat FieldType = "float"
)
// FieldUnit defines the unit for display purposes
type FieldUnit string
const (
UnitSeconds FieldUnit = "seconds"
UnitMinutes FieldUnit = "minutes"
UnitHours FieldUnit = "hours"
UnitDays FieldUnit = "days"
UnitCount FieldUnit = "count"
UnitNone FieldUnit = ""
)
// Field defines a configuration field with all its metadata
type Field struct {
// Field identification
Name string `json:"name"`
JSONName string `json:"json_name"`
Type FieldType `json:"type"`
// Default value and validation
DefaultValue interface{} `json:"default_value"`
MinValue interface{} `json:"min_value,omitempty"`
MaxValue interface{} `json:"max_value,omitempty"`
Required bool `json:"required"`
// UI display
DisplayName string `json:"display_name"`
Description string `json:"description"`
HelpText string `json:"help_text"`
Placeholder string `json:"placeholder"`
Unit FieldUnit `json:"unit"`
// Form rendering
InputType string `json:"input_type"` // "checkbox", "number", "text", "interval", etc.
CSSClasses string `json:"css_classes,omitempty"`
}
// GetDisplayValue returns the value formatted for display in the specified unit
func (f *Field) GetDisplayValue(value interface{}) interface{} {
if (f.Type == FieldTypeDuration || f.Type == FieldTypeInterval) && f.Unit != UnitSeconds {
if duration, ok := value.(time.Duration); ok {
switch f.Unit {
case UnitMinutes:
return int(duration.Minutes())
case UnitHours:
return int(duration.Hours())
case UnitDays:
return int(duration.Hours() / 24)
}
}
if seconds, ok := value.(int); ok {
switch f.Unit {
case UnitMinutes:
return seconds / 60
case UnitHours:
return seconds / 3600
case UnitDays:
return seconds / (24 * 3600)
}
}
}
return value
}
// GetIntervalDisplayValue returns the value and unit for interval fields
func (f *Field) GetIntervalDisplayValue(value interface{}) (int, string) {
if f.Type != FieldTypeInterval {
return 0, "minutes"
}
seconds := 0
if duration, ok := value.(time.Duration); ok {
seconds = int(duration.Seconds())
} else if s, ok := value.(int); ok {
seconds = s
}
return SecondsToIntervalValueUnit(seconds)
}
// SecondsToIntervalValueUnit converts seconds to the most appropriate interval unit
func SecondsToIntervalValueUnit(totalSeconds int) (int, string) {
if totalSeconds == 0 {
return 0, "minutes"
}
// Check if it's evenly divisible by days
if totalSeconds%(24*3600) == 0 {
return totalSeconds / (24 * 3600), "days"
}
// Check if it's evenly divisible by hours
if totalSeconds%3600 == 0 {
return totalSeconds / 3600, "hours"
}
// Default to minutes
return totalSeconds / 60, "minutes"
}
// IntervalValueUnitToSeconds converts interval value and unit to seconds
func IntervalValueUnitToSeconds(value int, unit string) int {
switch unit {
case "days":
return value * 24 * 3600
case "hours":
return value * 3600
case "minutes":
return value * 60
default:
return value * 60 // Default to minutes
}
}
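// Example (illustrative): the two helpers round-trip cleanly:
//   secs := IntervalValueUnitToSeconds(2, "hours") // 7200
//   v, u := SecondsToIntervalValueUnit(secs)       // v == 2, u == "hours"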
// ParseDisplayValue converts a display value back to the storage format
func (f *Field) ParseDisplayValue(displayValue interface{}) interface{} {
if (f.Type == FieldTypeDuration || f.Type == FieldTypeInterval) && f.Unit != UnitSeconds {
if val, ok := displayValue.(int); ok {
switch f.Unit {
case UnitMinutes:
return val * 60
case UnitHours:
return val * 3600
case UnitDays:
return val * 24 * 3600
}
}
}
return displayValue
}
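// Example (illustrative): with Unit == UnitHours, a submitted display value
// of 2 is stored as 2 * 3600 = 7200 seconds.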
// ParseIntervalFormData parses form data for interval fields (value + unit)
func (f *Field) ParseIntervalFormData(valueStr, unitStr string) (int, error) {
if f.Type != FieldTypeInterval {
return 0, fmt.Errorf("field %s is not an interval field", f.Name)
}
value := 0
if valueStr != "" {
// Note: fmt.Sscanf returns the count of scanned items; discard it so the
// parsed value is not overwritten.
if _, err := fmt.Sscanf(valueStr, "%d", &value); err != nil {
return 0, fmt.Errorf("invalid interval value: %s", valueStr)
}
}
return IntervalValueUnitToSeconds(value, unitStr), nil
}
// ValidateValue validates a value against the field constraints
func (f *Field) ValidateValue(value interface{}) error {
if f.Required && (value == nil || value == "" || value == 0) {
return fmt.Errorf("%s is required", f.DisplayName)
}
if f.MinValue != nil {
if !f.compareValues(value, f.MinValue, ">=") {
return fmt.Errorf("%s must be >= %v", f.DisplayName, f.MinValue)
}
}
if f.MaxValue != nil {
if !f.compareValues(value, f.MaxValue, "<=") {
return fmt.Errorf("%s must be <= %v", f.DisplayName, f.MaxValue)
}
}
return nil
}
// compareValues compares two values based on the operator
func (f *Field) compareValues(a, b interface{}, op string) bool {
switch f.Type {
case FieldTypeInt:
aVal, aOk := a.(int)
bVal, bOk := b.(int)
if !aOk || !bOk {
return false
}
switch op {
case ">=":
return aVal >= bVal
case "<=":
return aVal <= bVal
}
case FieldTypeFloat:
aVal, aOk := a.(float64)
bVal, bOk := b.(float64)
if !aOk || !bOk {
return false
}
switch op {
case ">=":
return aVal >= bVal
case "<=":
return aVal <= bVal
}
}
return true
}
// Schema provides common functionality for configuration schemas
type Schema struct {
Fields []*Field `json:"fields"`
}
// GetFieldByName returns a field by its JSON name
func (s *Schema) GetFieldByName(jsonName string) *Field {
for _, field := range s.Fields {
if field.JSONName == jsonName {
return field
}
}
return nil
}
// ApplyDefaultsToConfig applies defaults to a configuration that implements ConfigWithDefaults
func (s *Schema) ApplyDefaultsToConfig(config ConfigWithDefaults) error {
return config.ApplySchemaDefaults(s)
}
// ApplyDefaultsToProtobuf applies defaults to protobuf types using reflection
func (s *Schema) ApplyDefaultsToProtobuf(config interface{}) error {
return s.applyDefaultsReflection(config)
}
// applyDefaultsReflection applies default values using reflection (internal use only)
// Used for protobuf types and embedded struct handling
func (s *Schema) applyDefaultsReflection(config interface{}) error {
configValue := reflect.ValueOf(config)
if configValue.Kind() == reflect.Ptr {
configValue = configValue.Elem()
}
if configValue.Kind() != reflect.Struct {
return fmt.Errorf("config must be a struct or pointer to struct")
}
configType := configValue.Type()
for i := 0; i < configValue.NumField(); i++ {
field := configValue.Field(i)
fieldType := configType.Field(i)
// Handle embedded structs recursively (before JSON tag check)
if field.Kind() == reflect.Struct && fieldType.Anonymous {
if !field.CanAddr() {
return fmt.Errorf("embedded struct %s is not addressable - config must be a pointer", fieldType.Name)
}
err := s.applyDefaultsReflection(field.Addr().Interface())
if err != nil {
return fmt.Errorf("failed to apply defaults to embedded struct %s: %v", fieldType.Name, err)
}
continue
}
// Get JSON tag name
jsonTag := fieldType.Tag.Get("json")
if jsonTag == "" {
continue
}
// Remove options like ",omitempty"
if commaIdx := strings.Index(jsonTag, ","); commaIdx >= 0 {
jsonTag = jsonTag[:commaIdx]
}
// Find corresponding schema field
schemaField := s.GetFieldByName(jsonTag)
if schemaField == nil {
continue
}
// Apply default if field is zero value
if field.CanSet() && field.IsZero() {
defaultValue := reflect.ValueOf(schemaField.DefaultValue)
if defaultValue.Type().ConvertibleTo(field.Type()) {
field.Set(defaultValue.Convert(field.Type()))
}
}
}
return nil
}
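// Example (illustrative): zero-valued fields pick up schema defaults while
// pre-set fields are preserved:
//   cfg := &SomeTaskConfig{MaxConcurrent: 5}  // hypothetical struct with json tags
//   _ = schema.ApplyDefaultsToProtobuf(cfg)
//   // cfg.MaxConcurrent stays 5; fields still at their zero value get defaults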
// ValidateConfig validates a configuration against the schema
func (s *Schema) ValidateConfig(config interface{}) []error {
var errors []error
configValue := reflect.ValueOf(config)
if configValue.Kind() == reflect.Ptr {
configValue = configValue.Elem()
}
if configValue.Kind() != reflect.Struct {
errors = append(errors, fmt.Errorf("config must be a struct or pointer to struct"))
return errors
}
configType := configValue.Type()
for i := 0; i < configValue.NumField(); i++ {
field := configValue.Field(i)
fieldType := configType.Field(i)
// Get JSON tag name
jsonTag := fieldType.Tag.Get("json")
if jsonTag == "" {
continue
}
// Remove options like ",omitempty"
if commaIdx := strings.Index(jsonTag, ","); commaIdx >= 0 {
jsonTag = jsonTag[:commaIdx]
}
// Find corresponding schema field
schemaField := s.GetFieldByName(jsonTag)
if schemaField == nil {
continue
}
// Validate field value
fieldValue := field.Interface()
if err := schemaField.ValidateValue(fieldValue); err != nil {
errors = append(errors, err)
}
}
return errors
}
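
As a quick illustration of the schema API above (a minimal sketch; the field and struct are made up for the example):

schema := &Schema{Fields: []*Field{{
	Name: "max_concurrent", JSONName: "max_concurrent",
	Type: FieldTypeInt, DisplayName: "Max Concurrent",
	DefaultValue: 3, MinValue: 1, MaxValue: 10,
}}}
type demoConfig struct {
	MaxConcurrent int `json:"max_concurrent"`
}
errs := schema.ValidateConfig(&demoConfig{MaxConcurrent: 0})
// errs contains: "Max Concurrent must be >= 1"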


@@ -0,0 +1,226 @@
package config
import (
"testing"
)
// Test structs that mirror the actual configuration structure
type TestBaseConfigForSchema struct {
Enabled bool `json:"enabled"`
ScanIntervalSeconds int `json:"scan_interval_seconds"`
MaxConcurrent int `json:"max_concurrent"`
}
// ApplySchemaDefaults implements ConfigWithDefaults for test struct
func (c *TestBaseConfigForSchema) ApplySchemaDefaults(schema *Schema) error {
return schema.ApplyDefaultsToProtobuf(c)
}
// Validate implements ConfigWithDefaults for test struct
func (c *TestBaseConfigForSchema) Validate() error {
return nil
}
type TestTaskConfigForSchema struct {
TestBaseConfigForSchema
TaskSpecificField float64 `json:"task_specific_field"`
AnotherSpecificField string `json:"another_specific_field"`
}
// ApplySchemaDefaults implements ConfigWithDefaults for test struct
func (c *TestTaskConfigForSchema) ApplySchemaDefaults(schema *Schema) error {
return schema.ApplyDefaultsToProtobuf(c)
}
// Validate implements ConfigWithDefaults for test struct
func (c *TestTaskConfigForSchema) Validate() error {
return nil
}
func createTestSchema() *Schema {
return &Schema{
Fields: []*Field{
{
Name: "enabled",
JSONName: "enabled",
Type: FieldTypeBool,
DefaultValue: true,
},
{
Name: "scan_interval_seconds",
JSONName: "scan_interval_seconds",
Type: FieldTypeInt,
DefaultValue: 1800,
},
{
Name: "max_concurrent",
JSONName: "max_concurrent",
Type: FieldTypeInt,
DefaultValue: 3,
},
{
Name: "task_specific_field",
JSONName: "task_specific_field",
Type: FieldTypeFloat,
DefaultValue: 0.25,
},
{
Name: "another_specific_field",
JSONName: "another_specific_field",
Type: FieldTypeString,
DefaultValue: "default_value",
},
},
}
}
func TestApplyDefaults_WithEmbeddedStruct(t *testing.T) {
schema := createTestSchema()
// Start with zero values
config := &TestTaskConfigForSchema{}
err := schema.ApplyDefaultsToConfig(config)
if err != nil {
t.Fatalf("ApplyDefaultsToConfig failed: %v", err)
}
// Verify embedded struct fields got default values
if config.Enabled != true {
t.Errorf("Expected Enabled=true (default), got %v", config.Enabled)
}
if config.ScanIntervalSeconds != 1800 {
t.Errorf("Expected ScanIntervalSeconds=1800 (default), got %v", config.ScanIntervalSeconds)
}
if config.MaxConcurrent != 3 {
t.Errorf("Expected MaxConcurrent=3 (default), got %v", config.MaxConcurrent)
}
// Verify task-specific fields got default values
if config.TaskSpecificField != 0.25 {
t.Errorf("Expected TaskSpecificField=0.25 (default), got %v", config.TaskSpecificField)
}
if config.AnotherSpecificField != "default_value" {
t.Errorf("Expected AnotherSpecificField='default_value' (default), got %v", config.AnotherSpecificField)
}
}
func TestApplyDefaults_PartiallySet(t *testing.T) {
schema := createTestSchema()
// Start with some pre-set values
config := &TestTaskConfigForSchema{
TestBaseConfigForSchema: TestBaseConfigForSchema{
Enabled: true, // Non-zero value, should not be overridden
ScanIntervalSeconds: 0, // Should get default
MaxConcurrent: 5, // Non-zero value, should not be overridden
},
TaskSpecificField: 0.0, // Should get default
AnotherSpecificField: "custom", // Non-zero value, should not be overridden
}
err := schema.ApplyDefaultsToConfig(config)
if err != nil {
t.Fatalf("ApplyDefaultsToConfig failed: %v", err)
}
// Verify already-set values are preserved
if config.Enabled != true {
t.Errorf("Expected Enabled=true (pre-set), got %v", config.Enabled)
}
if config.MaxConcurrent != 5 {
t.Errorf("Expected MaxConcurrent=5 (pre-set), got %v", config.MaxConcurrent)
}
if config.AnotherSpecificField != "custom" {
t.Errorf("Expected AnotherSpecificField='custom' (pre-set), got %v", config.AnotherSpecificField)
}
// Verify zero values got defaults
if config.ScanIntervalSeconds != 1800 {
t.Errorf("Expected ScanIntervalSeconds=1800 (default), got %v", config.ScanIntervalSeconds)
}
if config.TaskSpecificField != 0.25 {
t.Errorf("Expected TaskSpecificField=0.25 (default), got %v", config.TaskSpecificField)
}
}
func TestApplyDefaults_NonPointer(t *testing.T) {
schema := createTestSchema()
config := TestTaskConfigForSchema{}
// This should fail since we need a pointer to modify the struct
err := schema.ApplyDefaultsToProtobuf(config)
if err == nil {
t.Fatal("Expected error for non-pointer config, but got nil")
}
}
func TestApplyDefaults_NonStruct(t *testing.T) {
schema := createTestSchema()
var config interface{} = "not a struct"
err := schema.ApplyDefaultsToProtobuf(config)
if err == nil {
t.Fatal("Expected error for non-struct config, but got nil")
}
}
func TestApplyDefaults_EmptySchema(t *testing.T) {
schema := &Schema{Fields: []*Field{}}
config := &TestTaskConfigForSchema{}
err := schema.ApplyDefaultsToConfig(config)
if err != nil {
t.Fatalf("ApplyDefaultsToConfig failed for empty schema: %v", err)
}
// All fields should remain at zero values since no defaults are defined
if config.Enabled != false {
t.Errorf("Expected Enabled=false (zero value), got %v", config.Enabled)
}
}
func TestApplyDefaults_MissingSchemaField(t *testing.T) {
// Schema with fewer fields than the struct
schema := &Schema{
Fields: []*Field{
{
Name: "enabled",
JSONName: "enabled",
Type: FieldTypeBool,
DefaultValue: true,
},
// Note: missing scan_interval_seconds and other fields
},
}
config := &TestTaskConfigForSchema{}
err := schema.ApplyDefaultsToConfig(config)
if err != nil {
t.Fatalf("ApplyDefaultsToConfig failed: %v", err)
}
// Only the field with a schema definition should get a default
if config.Enabled != true {
t.Errorf("Expected Enabled=true (has schema), got %v", config.Enabled)
}
// Fields without schema should remain at zero values
if config.ScanIntervalSeconds != 0 {
t.Errorf("Expected ScanIntervalSeconds=0 (no schema), got %v", config.ScanIntervalSeconds)
}
}
func BenchmarkApplyDefaults(b *testing.B) {
schema := createTestSchema()
config := &TestTaskConfigForSchema{}
b.ResetTimer()
for i := 0; i < b.N; i++ {
_ = schema.ApplyDefaultsToConfig(config)
}
}


@@ -25,6 +25,7 @@ import (
"google.golang.org/grpc"
"github.com/seaweedfs/seaweedfs/weed/s3api"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks"
)
type AdminServer struct {
@@ -126,30 +127,67 @@ func NewAdminServer(masters string, templateFS http.FileSystem, dataDir string)
}
}
// Initialize maintenance system with persistent configuration
// Initialize maintenance system - always initialize even without persistent storage
var maintenanceConfig *maintenance.MaintenanceConfig
if server.configPersistence.IsConfigured() {
maintenanceConfig, err := server.configPersistence.LoadMaintenanceConfig()
var err error
maintenanceConfig, err = server.configPersistence.LoadMaintenanceConfig()
if err != nil {
glog.Errorf("Failed to load maintenance configuration: %v", err)
maintenanceConfig = maintenance.DefaultMaintenanceConfig()
}
server.InitMaintenanceManager(maintenanceConfig)
// Start maintenance manager if enabled
if maintenanceConfig.Enabled {
go func() {
if err := server.StartMaintenanceManager(); err != nil {
glog.Errorf("Failed to start maintenance manager: %v", err)
}
}()
// Apply new defaults to handle schema changes (like enabling by default)
schema := maintenance.GetMaintenanceConfigSchema()
if err := schema.ApplyDefaultsToProtobuf(maintenanceConfig); err != nil {
glog.Warningf("Failed to apply schema defaults to loaded config: %v", err)
}
// Force enable maintenance system for new default behavior
// This handles the case where old configs had Enabled=false as default
if !maintenanceConfig.Enabled {
glog.V(1).Infof("Enabling maintenance system (new default behavior)")
maintenanceConfig.Enabled = true
}
glog.V(1).Infof("Maintenance system initialized with persistent configuration (enabled: %v)", maintenanceConfig.Enabled)
} else {
glog.V(1).Infof("No data directory configured, maintenance system will run in memory-only mode")
maintenanceConfig = maintenance.DefaultMaintenanceConfig()
glog.V(1).Infof("No data directory configured, maintenance system will run in memory-only mode (enabled: %v)", maintenanceConfig.Enabled)
}
// Always initialize maintenance manager
server.InitMaintenanceManager(maintenanceConfig)
// Load saved task configurations from persistence
server.loadTaskConfigurationsFromPersistence()
// Start maintenance manager if enabled
if maintenanceConfig.Enabled {
go func() {
// Give master client a bit of time to connect before starting scans
time.Sleep(2 * time.Second)
if err := server.StartMaintenanceManager(); err != nil {
glog.Errorf("Failed to start maintenance manager: %v", err)
}
}()
}
return server
}
// loadTaskConfigurationsFromPersistence loads saved task configurations from protobuf files
func (s *AdminServer) loadTaskConfigurationsFromPersistence() {
if s.configPersistence == nil || !s.configPersistence.IsConfigured() {
glog.V(1).Infof("Config persistence not available, using default task configurations")
return
}
// Load task configurations dynamically using the config update registry
configUpdateRegistry := tasks.GetGlobalConfigUpdateRegistry()
configUpdateRegistry.UpdateAllConfigs(s.configPersistence)
}
// GetCredentialManager returns the credential manager
func (s *AdminServer) GetCredentialManager() *credential.CredentialManager {
return s.credentialManager
@@ -852,6 +890,15 @@ func (as *AdminServer) CancelMaintenanceTask(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{"success": true, "message": "Task cancelled"})
}
// cancelMaintenanceTask cancels a pending maintenance task
func (as *AdminServer) cancelMaintenanceTask(taskID string) error {
if as.maintenanceManager == nil {
return fmt.Errorf("maintenance manager not initialized")
}
return as.maintenanceManager.CancelTask(taskID)
}
// GetMaintenanceWorkersAPI returns all maintenance workers
func (as *AdminServer) GetMaintenanceWorkersAPI(c *gin.Context) {
workers, err := as.getMaintenanceWorkers()
@@ -899,13 +946,21 @@ func (as *AdminServer) GetMaintenanceConfigAPI(c *gin.Context) {
// UpdateMaintenanceConfigAPI updates maintenance configuration via API
func (as *AdminServer) UpdateMaintenanceConfigAPI(c *gin.Context) {
var config MaintenanceConfig
if err := c.ShouldBindJSON(&config); err != nil {
// Parse JSON into a generic map first to handle type conversions
var jsonConfig map[string]interface{}
if err := c.ShouldBindJSON(&jsonConfig); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
err := as.updateMaintenanceConfig(&config)
// Convert JSON map to protobuf configuration
config, err := convertJSONToMaintenanceConfig(jsonConfig)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "Failed to parse configuration: " + err.Error()})
return
}
err = as.updateMaintenanceConfig(config)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
@@ -951,17 +1006,36 @@ func (as *AdminServer) getMaintenanceQueueData() (*maintenance.MaintenanceQueueD
}, nil
}
// GetMaintenanceQueueStats returns statistics for the maintenance queue (exported for handlers)
func (as *AdminServer) GetMaintenanceQueueStats() (*maintenance.QueueStats, error) {
return as.getMaintenanceQueueStats()
}
// getMaintenanceQueueStats returns statistics for the maintenance queue
func (as *AdminServer) getMaintenanceQueueStats() (*maintenance.QueueStats, error) {
// This would integrate with the maintenance queue to get real statistics
// For now, return mock data
return &maintenance.QueueStats{
PendingTasks: 5,
RunningTasks: 2,
CompletedToday: 15,
FailedToday: 1,
TotalTasks: 23,
}, nil
if as.maintenanceManager == nil {
return &maintenance.QueueStats{
PendingTasks: 0,
RunningTasks: 0,
CompletedToday: 0,
FailedToday: 0,
TotalTasks: 0,
}, nil
}
// Get real statistics from maintenance manager
stats := as.maintenanceManager.GetStats()
// Convert MaintenanceStats to QueueStats
queueStats := &maintenance.QueueStats{
PendingTasks: stats.TasksByStatus[maintenance.TaskStatusPending],
RunningTasks: stats.TasksByStatus[maintenance.TaskStatusAssigned] + stats.TasksByStatus[maintenance.TaskStatusInProgress],
CompletedToday: stats.CompletedToday,
FailedToday: stats.FailedToday,
TotalTasks: stats.TotalTasks,
}
return queueStats, nil
}
// getMaintenanceTasks returns all maintenance tasks
@@ -1000,15 +1074,6 @@ func (as *AdminServer) getMaintenanceTask(taskID string) (*MaintenanceTask, erro
return nil, fmt.Errorf("task %s not found", taskID)
}
// cancelMaintenanceTask cancels a pending maintenance task
func (as *AdminServer) cancelMaintenanceTask(taskID string) error {
if as.maintenanceManager == nil {
return fmt.Errorf("maintenance manager not initialized")
}
return as.maintenanceManager.CancelTask(taskID)
}
// getMaintenanceWorkers returns all maintenance workers
func (as *AdminServer) getMaintenanceWorkers() ([]*maintenance.MaintenanceWorker, error) {
if as.maintenanceManager == nil {
@@ -1110,11 +1175,14 @@ func (as *AdminServer) getMaintenanceConfig() (*maintenance.MaintenanceConfigDat
// Load configuration from persistent storage
config, err := as.configPersistence.LoadMaintenanceConfig()
if err != nil {
glog.Errorf("Failed to load maintenance configuration: %v", err)
// Fallback to default configuration
config = DefaultMaintenanceConfig()
config = maintenance.DefaultMaintenanceConfig()
}
// Note: Do NOT apply schema defaults to existing config as it overrides saved values
// Only apply defaults when creating new configs or handling fallback cases
// The schema defaults should only be used in the UI for new installations
// Get system stats from maintenance manager if available
var systemStats *MaintenanceStats
if as.maintenanceManager != nil {
@@ -1139,18 +1207,25 @@ func (as *AdminServer) getMaintenanceConfig() (*maintenance.MaintenanceConfigDat
}
}
return &MaintenanceConfigData{
configData := &MaintenanceConfigData{
Config: config,
IsEnabled: config.Enabled,
LastScanTime: systemStats.LastScanTime,
NextScanTime: systemStats.NextScanTime,
SystemStats: systemStats,
MenuItems: maintenance.BuildMaintenanceMenuItems(),
}, nil
}
return configData, nil
}
// updateMaintenanceConfig updates maintenance configuration
func (as *AdminServer) updateMaintenanceConfig(config *maintenance.MaintenanceConfig) error {
// Use ConfigField validation instead of standalone validation
if err := maintenance.ValidateMaintenanceConfigWithSchema(config); err != nil {
return fmt.Errorf("configuration validation failed: %v", err)
}
// Save configuration to persistent storage
if err := as.configPersistence.SaveMaintenanceConfig(config); err != nil {
return fmt.Errorf("failed to save maintenance configuration: %w", err)
@@ -1175,7 +1250,14 @@ func (as *AdminServer) triggerMaintenanceScan() error {
return fmt.Errorf("maintenance manager not initialized")
}
return as.maintenanceManager.TriggerScan()
glog.V(1).Infof("Triggering maintenance scan")
err := as.maintenanceManager.TriggerScan()
if err != nil {
glog.Errorf("Failed to trigger maintenance scan: %v", err)
return err
}
glog.V(1).Infof("Maintenance scan triggered successfully")
return nil
}
// TriggerTopicRetentionPurgeAPI triggers topic retention purge via HTTP API
@@ -1265,14 +1347,11 @@ func (as *AdminServer) GetMaintenanceWorkersData() (*MaintenanceWorkersData, err
}
// StartWorkerGrpcServer starts the worker gRPC server
func (s *AdminServer) StartWorkerGrpcServer(httpPort int) error {
func (s *AdminServer) StartWorkerGrpcServer(grpcPort int) error {
if s.workerGrpcServer != nil {
return fmt.Errorf("worker gRPC server is already running")
}
// Calculate gRPC port (HTTP port + 10000)
grpcPort := httpPort + 10000
s.workerGrpcServer = NewWorkerGrpcServer(s)
return s.workerGrpcServer.StartWithTLS(grpcPort)
}
@@ -1412,7 +1491,7 @@ func (s *AdminServer) UpdateTopicRetention(namespace, name string, enabled bool,
}
// Create gRPC connection
conn, err := grpc.Dial(brokerAddress, s.grpcDialOption)
conn, err := grpc.NewClient(brokerAddress, s.grpcDialOption)
if err != nil {
return fmt.Errorf("failed to connect to broker: %w", err)
}
@@ -1501,3 +1580,161 @@ func extractVersioningFromEntry(entry *filer_pb.Entry) bool {
enabled, _ := s3api.LoadVersioningFromExtended(entry)
return enabled
}
// GetConfigPersistence returns the config persistence manager
func (as *AdminServer) GetConfigPersistence() *ConfigPersistence {
return as.configPersistence
}
// convertJSONToMaintenanceConfig converts JSON map to protobuf MaintenanceConfig
func convertJSONToMaintenanceConfig(jsonConfig map[string]interface{}) (*maintenance.MaintenanceConfig, error) {
config := &maintenance.MaintenanceConfig{}
// Helper functions to read typed values from the JSON map
// (delegating to the shared map helpers below keeps this DRY)
getInt32 := func(key string) (int32, error) {
return getInt32FromMap(jsonConfig, key)
}
getBool := func(key string) bool {
return getBoolFromMap(jsonConfig, key)
}
var err error
// Convert basic fields
config.Enabled = getBool("enabled")
if config.ScanIntervalSeconds, err = getInt32("scan_interval_seconds"); err != nil {
return nil, err
}
if config.WorkerTimeoutSeconds, err = getInt32("worker_timeout_seconds"); err != nil {
return nil, err
}
if config.TaskTimeoutSeconds, err = getInt32("task_timeout_seconds"); err != nil {
return nil, err
}
if config.RetryDelaySeconds, err = getInt32("retry_delay_seconds"); err != nil {
return nil, err
}
if config.MaxRetries, err = getInt32("max_retries"); err != nil {
return nil, err
}
if config.CleanupIntervalSeconds, err = getInt32("cleanup_interval_seconds"); err != nil {
return nil, err
}
if config.TaskRetentionSeconds, err = getInt32("task_retention_seconds"); err != nil {
return nil, err
}
// Convert policy if present
if policyData, ok := jsonConfig["policy"]; ok {
if policyMap, ok := policyData.(map[string]interface{}); ok {
policy := &maintenance.MaintenancePolicy{}
if globalMaxConcurrent, err := getInt32FromMap(policyMap, "global_max_concurrent"); err != nil {
return nil, err
} else {
policy.GlobalMaxConcurrent = globalMaxConcurrent
}
if defaultRepeatIntervalSeconds, err := getInt32FromMap(policyMap, "default_repeat_interval_seconds"); err != nil {
return nil, err
} else {
policy.DefaultRepeatIntervalSeconds = defaultRepeatIntervalSeconds
}
if defaultCheckIntervalSeconds, err := getInt32FromMap(policyMap, "default_check_interval_seconds"); err != nil {
return nil, err
} else {
policy.DefaultCheckIntervalSeconds = defaultCheckIntervalSeconds
}
// Convert task policies if present
if taskPoliciesData, ok := policyMap["task_policies"]; ok {
if taskPoliciesMap, ok := taskPoliciesData.(map[string]interface{}); ok {
policy.TaskPolicies = make(map[string]*maintenance.TaskPolicy)
for taskType, taskPolicyData := range taskPoliciesMap {
if taskPolicyMap, ok := taskPolicyData.(map[string]interface{}); ok {
taskPolicy := &maintenance.TaskPolicy{}
taskPolicy.Enabled = getBoolFromMap(taskPolicyMap, "enabled")
if maxConcurrent, err := getInt32FromMap(taskPolicyMap, "max_concurrent"); err != nil {
return nil, err
} else {
taskPolicy.MaxConcurrent = maxConcurrent
}
if repeatIntervalSeconds, err := getInt32FromMap(taskPolicyMap, "repeat_interval_seconds"); err != nil {
return nil, err
} else {
taskPolicy.RepeatIntervalSeconds = repeatIntervalSeconds
}
if checkIntervalSeconds, err := getInt32FromMap(taskPolicyMap, "check_interval_seconds"); err != nil {
return nil, err
} else {
taskPolicy.CheckIntervalSeconds = checkIntervalSeconds
}
policy.TaskPolicies[taskType] = taskPolicy
}
}
}
}
config.Policy = policy
}
}
return config, nil
}
// Helper functions for map conversion
func getInt32FromMap(m map[string]interface{}, key string) (int32, error) {
if val, ok := m[key]; ok {
switch v := val.(type) {
case int:
return int32(v), nil
case int32:
return v, nil
case int64:
return int32(v), nil
case float64:
return int32(v), nil
default:
return 0, fmt.Errorf("invalid type for %s: expected number, got %T", key, v)
}
}
return 0, nil
}
func getBoolFromMap(m map[string]interface{}, key string) bool {
if val, ok := m[key]; ok {
if b, ok := val.(bool); ok {
return b
}
}
return false
}
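// Note (illustrative): encoding/json decodes every JSON number into float64
// when the target is map[string]interface{}, which is why the helpers above
// accept float64 alongside the integer types:
//   var m map[string]interface{}
//   _ = json.Unmarshal([]byte(`{"max_retries": 3}`), &m)
//   _, isFloat64 := m["max_retries"].(float64) // isFloat64 == true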


@@ -12,6 +12,7 @@ import (
func (s *AdminServer) GetClusterCollections() (*ClusterCollectionsData, error) {
var collections []CollectionInfo
var totalVolumes int
var totalEcVolumes int
var totalFiles int64
var totalSize int64
collectionMap := make(map[string]*CollectionInfo)
@@ -28,6 +29,7 @@ func (s *AdminServer) GetClusterCollections() (*ClusterCollectionsData, error) {
for _, rack := range dc.RackInfos {
for _, node := range rack.DataNodeInfos {
for _, diskInfo := range node.DiskInfos {
// Process regular volumes
for _, volInfo := range diskInfo.VolumeInfos {
// Extract collection name from volume info
collectionName := volInfo.Collection
@@ -69,12 +71,13 @@ func (s *AdminServer) GetClusterCollections() (*ClusterCollectionsData, error) {
totalSize += int64(volInfo.Size)
} else {
newCollection := CollectionInfo{
Name: collectionName,
DataCenter: dc.Id,
VolumeCount: 1,
FileCount: int64(volInfo.FileCount),
TotalSize: int64(volInfo.Size),
DiskTypes: []string{diskType},
Name: collectionName,
DataCenter: dc.Id,
VolumeCount: 1,
EcVolumeCount: 0,
FileCount: int64(volInfo.FileCount),
TotalSize: int64(volInfo.Size),
DiskTypes: []string{diskType},
}
collectionMap[collectionName] = &newCollection
totalVolumes++
@@ -82,6 +85,63 @@ func (s *AdminServer) GetClusterCollections() (*ClusterCollectionsData, error) {
totalSize += int64(volInfo.Size)
}
}
// Process EC volumes
ecVolumeMap := make(map[uint32]bool) // Track unique EC volumes to avoid double counting
for _, ecShardInfo := range diskInfo.EcShardInfos {
// Extract collection name from EC shard info
collectionName := ecShardInfo.Collection
if collectionName == "" {
collectionName = "default" // Default collection for EC volumes without explicit collection
}
// Only count each EC volume once (not per shard)
if !ecVolumeMap[ecShardInfo.Id] {
ecVolumeMap[ecShardInfo.Id] = true
// Get disk type from disk info, default to hdd if empty
diskType := diskInfo.Type
if diskType == "" {
diskType = "hdd"
}
// Get or create collection info
if collection, exists := collectionMap[collectionName]; exists {
collection.EcVolumeCount++
// Update data center if this collection spans multiple DCs
if collection.DataCenter != dc.Id && collection.DataCenter != "multi" {
collection.DataCenter = "multi"
}
// Add disk type if not already present
diskTypeExists := false
for _, existingDiskType := range collection.DiskTypes {
if existingDiskType == diskType {
diskTypeExists = true
break
}
}
if !diskTypeExists {
collection.DiskTypes = append(collection.DiskTypes, diskType)
}
totalEcVolumes++
} else {
newCollection := CollectionInfo{
Name: collectionName,
DataCenter: dc.Id,
VolumeCount: 0,
EcVolumeCount: 1,
FileCount: 0,
TotalSize: 0,
DiskTypes: []string{diskType},
}
collectionMap[collectionName] = &newCollection
totalEcVolumes++
}
}
}
}
}
}
@@ -112,6 +172,7 @@ func (s *AdminServer) GetClusterCollections() (*ClusterCollectionsData, error) {
Collections: []CollectionInfo{},
TotalCollections: 0,
TotalVolumes: 0,
TotalEcVolumes: 0,
TotalFiles: 0,
TotalSize: 0,
LastUpdated: time.Now(),
@@ -122,8 +183,203 @@ func (s *AdminServer) GetClusterCollections() (*ClusterCollectionsData, error) {
Collections: collections,
TotalCollections: len(collections),
TotalVolumes: totalVolumes,
TotalEcVolumes: totalEcVolumes,
TotalFiles: totalFiles,
TotalSize: totalSize,
LastUpdated: time.Now(),
}, nil
}
// GetCollectionDetails retrieves detailed information for a specific collection including volumes and EC volumes
func (s *AdminServer) GetCollectionDetails(collectionName string, page int, pageSize int, sortBy string, sortOrder string) (*CollectionDetailsData, error) {
// Set defaults
if page < 1 {
page = 1
}
if pageSize < 1 || pageSize > 1000 {
pageSize = 25
}
if sortBy == "" {
sortBy = "volume_id"
}
if sortOrder == "" {
sortOrder = "asc"
}
var regularVolumes []VolumeWithTopology
var ecVolumes []EcVolumeWithShards
var totalFiles int64
var totalSize int64
dataCenters := make(map[string]bool)
diskTypes := make(map[string]bool)
// Get regular volumes for this collection
regularVolumeData, err := s.GetClusterVolumes(1, 10000, "volume_id", "asc", collectionName) // Get all volumes
if err != nil {
return nil, err
}
regularVolumes = regularVolumeData.Volumes
totalSize = regularVolumeData.TotalSize
// Calculate total files from regular volumes
for _, vol := range regularVolumes {
totalFiles += int64(vol.FileCount)
}
// Collect data centers and disk types from regular volumes
for _, vol := range regularVolumes {
dataCenters[vol.DataCenter] = true
diskTypes[vol.DiskType] = true
}
// Get EC volumes for this collection
ecVolumeData, err := s.GetClusterEcVolumes(1, 10000, "volume_id", "asc", collectionName) // Get all EC volumes
if err != nil {
return nil, err
}
ecVolumes = ecVolumeData.EcVolumes
// Collect data centers from EC volumes
for _, ecVol := range ecVolumes {
for _, dc := range ecVol.DataCenters {
dataCenters[dc] = true
}
}
// Combine all volumes for sorting and pagination
type VolumeForSorting struct {
Type string // "regular" or "ec"
RegularVolume *VolumeWithTopology
EcVolume *EcVolumeWithShards
}
var allVolumes []VolumeForSorting
for i := range regularVolumes {
allVolumes = append(allVolumes, VolumeForSorting{
Type: "regular",
RegularVolume: &regularVolumes[i],
})
}
for i := range ecVolumes {
allVolumes = append(allVolumes, VolumeForSorting{
Type: "ec",
EcVolume: &ecVolumes[i],
})
}
// Sort all volumes
sort.Slice(allVolumes, func(i, j int) bool {
var less bool
switch sortBy {
case "volume_id":
var idI, idJ uint32
if allVolumes[i].Type == "regular" {
idI = allVolumes[i].RegularVolume.Id
} else {
idI = allVolumes[i].EcVolume.VolumeID
}
if allVolumes[j].Type == "regular" {
idJ = allVolumes[j].RegularVolume.Id
} else {
idJ = allVolumes[j].EcVolume.VolumeID
}
less = idI < idJ
case "type":
// Sort by type first (regular before ec), then by volume ID
if allVolumes[i].Type == allVolumes[j].Type {
var idI, idJ uint32
if allVolumes[i].Type == "regular" {
idI = allVolumes[i].RegularVolume.Id
} else {
idI = allVolumes[i].EcVolume.VolumeID
}
if allVolumes[j].Type == "regular" {
idJ = allVolumes[j].RegularVolume.Id
} else {
idJ = allVolumes[j].EcVolume.VolumeID
}
less = idI < idJ
} else {
less = allVolumes[i].Type < allVolumes[j].Type // "ec" < "regular"
}
default:
// Default to volume ID sort
var idI, idJ uint32
if allVolumes[i].Type == "regular" {
idI = allVolumes[i].RegularVolume.Id
} else {
idI = allVolumes[i].EcVolume.VolumeID
}
if allVolumes[j].Type == "regular" {
idJ = allVolumes[j].RegularVolume.Id
} else {
idJ = allVolumes[j].EcVolume.VolumeID
}
less = idI < idJ
}
if sortOrder == "desc" {
return !less
}
return less
})
// Apply pagination
totalVolumesAndEc := len(allVolumes)
totalPages := (totalVolumesAndEc + pageSize - 1) / pageSize
startIndex := (page - 1) * pageSize
endIndex := startIndex + pageSize
if endIndex > totalVolumesAndEc {
endIndex = totalVolumesAndEc
}
if startIndex >= totalVolumesAndEc {
startIndex = 0
endIndex = 0
}
// Extract paginated results
var paginatedRegularVolumes []VolumeWithTopology
var paginatedEcVolumes []EcVolumeWithShards
for i := startIndex; i < endIndex; i++ {
if allVolumes[i].Type == "regular" {
paginatedRegularVolumes = append(paginatedRegularVolumes, *allVolumes[i].RegularVolume)
} else {
paginatedEcVolumes = append(paginatedEcVolumes, *allVolumes[i].EcVolume)
}
}
// Convert maps to slices
var dcList []string
for dc := range dataCenters {
dcList = append(dcList, dc)
}
sort.Strings(dcList)
var diskTypeList []string
for diskType := range diskTypes {
diskTypeList = append(diskTypeList, diskType)
}
sort.Strings(diskTypeList)
return &CollectionDetailsData{
CollectionName: collectionName,
RegularVolumes: paginatedRegularVolumes,
EcVolumes: paginatedEcVolumes,
TotalVolumes: len(regularVolumes),
TotalEcVolumes: len(ecVolumes),
TotalFiles: totalFiles,
TotalSize: totalSize,
DataCenters: dcList,
DiskTypes: diskTypeList,
LastUpdated: time.Now(),
Page: page,
PageSize: pageSize,
TotalPages: totalPages,
SortBy: sortBy,
SortOrder: sortOrder,
}, nil
}


@@ -1,23 +1,50 @@
package dash
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"time"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"
"google.golang.org/protobuf/encoding/protojson"
"google.golang.org/protobuf/proto"
)
const (
// Configuration file names
MaintenanceConfigFile = "maintenance.json"
AdminConfigFile = "admin.json"
// Configuration subdirectory
ConfigSubdir = "conf"
// Configuration file names (protobuf binary)
MaintenanceConfigFile = "maintenance.pb"
VacuumTaskConfigFile = "task_vacuum.pb"
ECTaskConfigFile = "task_erasure_coding.pb"
BalanceTaskConfigFile = "task_balance.pb"
ReplicationTaskConfigFile = "task_replication.pb"
// JSON reference files
MaintenanceConfigJSONFile = "maintenance.json"
VacuumTaskConfigJSONFile = "task_vacuum.json"
ECTaskConfigJSONFile = "task_erasure_coding.json"
BalanceTaskConfigJSONFile = "task_balance.json"
ReplicationTaskConfigJSONFile = "task_replication.json"
ConfigDirPermissions = 0755
ConfigFilePermissions = 0644
)
// Task configuration types
type (
VacuumTaskConfig = worker_pb.VacuumTaskConfig
ErasureCodingTaskConfig = worker_pb.ErasureCodingTaskConfig
BalanceTaskConfig = worker_pb.BalanceTaskConfig
ReplicationTaskConfig = worker_pb.ReplicationTaskConfig
)
// ConfigPersistence handles saving and loading configuration files
type ConfigPersistence struct {
dataDir string
@@ -30,122 +57,67 @@ func NewConfigPersistence(dataDir string) *ConfigPersistence {
}
}
// SaveMaintenanceConfig saves maintenance configuration to JSON file
// SaveMaintenanceConfig saves maintenance configuration to protobuf file and JSON reference
func (cp *ConfigPersistence) SaveMaintenanceConfig(config *MaintenanceConfig) error {
if cp.dataDir == "" {
return fmt.Errorf("no data directory specified, cannot save configuration")
}
configPath := filepath.Join(cp.dataDir, MaintenanceConfigFile)
// Create directory if it doesn't exist
if err := os.MkdirAll(cp.dataDir, ConfigDirPermissions); err != nil {
confDir := filepath.Join(cp.dataDir, ConfigSubdir)
if err := os.MkdirAll(confDir, ConfigDirPermissions); err != nil {
return fmt.Errorf("failed to create config directory: %w", err)
}
// Marshal configuration to JSON
configData, err := json.MarshalIndent(config, "", " ")
// Save as protobuf (primary format)
pbConfigPath := filepath.Join(confDir, MaintenanceConfigFile)
pbData, err := proto.Marshal(config)
if err != nil {
return fmt.Errorf("failed to marshal maintenance config: %w", err)
return fmt.Errorf("failed to marshal maintenance config to protobuf: %w", err)
}
// Write to file
if err := os.WriteFile(configPath, configData, ConfigFilePermissions); err != nil {
return fmt.Errorf("failed to write maintenance config file: %w", err)
if err := os.WriteFile(pbConfigPath, pbData, ConfigFilePermissions); err != nil {
return fmt.Errorf("failed to write protobuf config file: %w", err)
}
// Save JSON reference copy for debugging
jsonConfigPath := filepath.Join(confDir, MaintenanceConfigJSONFile)
jsonData, err := protojson.MarshalOptions{
Multiline: true,
Indent: " ",
EmitUnpopulated: true,
}.Marshal(config)
if err != nil {
return fmt.Errorf("failed to marshal maintenance config to JSON: %w", err)
}
if err := os.WriteFile(jsonConfigPath, jsonData, ConfigFilePermissions); err != nil {
return fmt.Errorf("failed to write JSON reference file: %w", err)
}
glog.V(1).Infof("Saved maintenance configuration to %s", configPath)
return nil
}
// LoadMaintenanceConfig loads maintenance configuration from JSON file
// LoadMaintenanceConfig loads maintenance configuration from protobuf file
func (cp *ConfigPersistence) LoadMaintenanceConfig() (*MaintenanceConfig, error) {
if cp.dataDir == "" {
glog.V(1).Infof("No data directory specified, using default maintenance configuration")
return DefaultMaintenanceConfig(), nil
}
configPath := filepath.Join(cp.dataDir, MaintenanceConfigFile)
confDir := filepath.Join(cp.dataDir, ConfigSubdir)
configPath := filepath.Join(confDir, MaintenanceConfigFile)
// Check if file exists
if _, err := os.Stat(configPath); os.IsNotExist(err) {
glog.V(1).Infof("Maintenance config file does not exist, using defaults: %s", configPath)
return DefaultMaintenanceConfig(), nil
// Try to load from protobuf file
if configData, err := os.ReadFile(configPath); err == nil {
var config MaintenanceConfig
if err := proto.Unmarshal(configData, &config); err == nil {
// Always populate policy from separate task configuration files
config.Policy = buildPolicyFromTaskConfigs()
return &config, nil
}
}
// Read file
configData, err := os.ReadFile(configPath)
if err != nil {
return nil, fmt.Errorf("failed to read maintenance config file: %w", err)
}
// Unmarshal JSON
var config MaintenanceConfig
if err := json.Unmarshal(configData, &config); err != nil {
return nil, fmt.Errorf("failed to unmarshal maintenance config: %w", err)
}
glog.V(1).Infof("Loaded maintenance configuration from %s", configPath)
return &config, nil
}
// SaveAdminConfig saves general admin configuration to JSON file
func (cp *ConfigPersistence) SaveAdminConfig(config map[string]interface{}) error {
if cp.dataDir == "" {
return fmt.Errorf("no data directory specified, cannot save configuration")
}
configPath := filepath.Join(cp.dataDir, AdminConfigFile)
// Create directory if it doesn't exist
if err := os.MkdirAll(cp.dataDir, ConfigDirPermissions); err != nil {
return fmt.Errorf("failed to create config directory: %w", err)
}
// Marshal configuration to JSON
configData, err := json.MarshalIndent(config, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal admin config: %w", err)
}
// Write to file
if err := os.WriteFile(configPath, configData, ConfigFilePermissions); err != nil {
return fmt.Errorf("failed to write admin config file: %w", err)
}
glog.V(1).Infof("Saved admin configuration to %s", configPath)
return nil
}
// LoadAdminConfig loads general admin configuration from JSON file
func (cp *ConfigPersistence) LoadAdminConfig() (map[string]interface{}, error) {
if cp.dataDir == "" {
glog.V(1).Infof("No data directory specified, using default admin configuration")
return make(map[string]interface{}), nil
}
configPath := filepath.Join(cp.dataDir, AdminConfigFile)
// Check if file exists
if _, err := os.Stat(configPath); os.IsNotExist(err) {
glog.V(1).Infof("Admin config file does not exist, using defaults: %s", configPath)
return make(map[string]interface{}), nil
}
// Read file
configData, err := os.ReadFile(configPath)
if err != nil {
return nil, fmt.Errorf("failed to read admin config file: %w", err)
}
// Unmarshal JSON
var config map[string]interface{}
if err := json.Unmarshal(configData, &config); err != nil {
return nil, fmt.Errorf("failed to unmarshal admin config: %w", err)
}
glog.V(1).Infof("Loaded admin configuration from %s", configPath)
return config, nil
// File doesn't exist or failed to load, use defaults
return DefaultMaintenanceConfig(), nil
}
// GetConfigPath returns the path to a configuration file
@@ -153,24 +125,35 @@ func (cp *ConfigPersistence) GetConfigPath(filename string) string {
if cp.dataDir == "" {
return ""
}
return filepath.Join(cp.dataDir, filename)
// All configs go in conf subdirectory
confDir := filepath.Join(cp.dataDir, ConfigSubdir)
return filepath.Join(confDir, filename)
}
// ListConfigFiles returns all configuration files in the data directory
// ListConfigFiles returns all configuration files in the conf subdirectory
func (cp *ConfigPersistence) ListConfigFiles() ([]string, error) {
if cp.dataDir == "" {
return nil, fmt.Errorf("no data directory specified")
}
files, err := os.ReadDir(cp.dataDir)
confDir := filepath.Join(cp.dataDir, ConfigSubdir)
files, err := os.ReadDir(confDir)
if err != nil {
// If conf directory doesn't exist, return empty list
if os.IsNotExist(err) {
return []string{}, nil
}
return nil, fmt.Errorf("failed to read config directory: %w", err)
}
var configFiles []string
for _, file := range files {
if !file.IsDir() && filepath.Ext(file.Name()) == ".json" {
configFiles = append(configFiles, file.Name())
if !file.IsDir() {
ext := filepath.Ext(file.Name())
if ext == ".json" || ext == ".pb" {
configFiles = append(configFiles, file.Name())
}
}
}
@@ -183,7 +166,7 @@ func (cp *ConfigPersistence) BackupConfig(filename string) error {
return fmt.Errorf("no data directory specified")
}
configPath := filepath.Join(cp.dataDir, filename)
configPath := cp.GetConfigPath(filename)
if _, err := os.Stat(configPath); os.IsNotExist(err) {
return fmt.Errorf("config file does not exist: %s", filename)
}
@@ -191,7 +174,10 @@ func (cp *ConfigPersistence) BackupConfig(filename string) error {
// Create backup filename with timestamp
timestamp := time.Now().Format("2006-01-02_15-04-05")
backupName := fmt.Sprintf("%s.backup_%s", filename, timestamp)
backupPath := filepath.Join(cp.dataDir, backupName)
// Determine backup directory (conf subdirectory)
confDir := filepath.Join(cp.dataDir, ConfigSubdir)
backupPath := filepath.Join(confDir, backupName)
// Copy file
configData, err := os.ReadFile(configPath)
@@ -213,7 +199,10 @@ func (cp *ConfigPersistence) RestoreConfig(filename, backupName string) error {
return fmt.Errorf("no data directory specified")
}
backupPath := filepath.Join(cp.dataDir, backupName)
// Determine backup path (conf subdirectory)
confDir := filepath.Join(cp.dataDir, ConfigSubdir)
backupPath := filepath.Join(confDir, backupName)
if _, err := os.Stat(backupPath); os.IsNotExist(err) {
return fmt.Errorf("backup file does not exist: %s", backupName)
}
@@ -225,7 +214,7 @@ func (cp *ConfigPersistence) RestoreConfig(filename, backupName string) error {
}
// Write to config file
configPath := filepath.Join(cp.dataDir, filename)
configPath := cp.GetConfigPath(filename)
if err := os.WriteFile(configPath, backupData, ConfigFilePermissions); err != nil {
return fmt.Errorf("failed to restore config: %w", err)
}
@@ -234,6 +223,364 @@ func (cp *ConfigPersistence) RestoreConfig(filename, backupName string) error {
return nil
}
// SaveVacuumTaskConfig saves vacuum task configuration to protobuf file
func (cp *ConfigPersistence) SaveVacuumTaskConfig(config *VacuumTaskConfig) error {
return cp.saveTaskConfig(VacuumTaskConfigFile, config)
}
// SaveVacuumTaskPolicy saves complete vacuum task policy to protobuf file
func (cp *ConfigPersistence) SaveVacuumTaskPolicy(policy *worker_pb.TaskPolicy) error {
return cp.saveTaskConfig(VacuumTaskConfigFile, policy)
}
// LoadVacuumTaskConfig loads vacuum task configuration from protobuf file
func (cp *ConfigPersistence) LoadVacuumTaskConfig() (*VacuumTaskConfig, error) {
// Load as TaskPolicy and extract vacuum config
if taskPolicy, err := cp.LoadVacuumTaskPolicy(); err == nil && taskPolicy != nil {
if vacuumConfig := taskPolicy.GetVacuumConfig(); vacuumConfig != nil {
return vacuumConfig, nil
}
}
// Return default config if no valid config found
return &VacuumTaskConfig{
GarbageThreshold: 0.3,
MinVolumeAgeHours: 24,
MinIntervalSeconds: 7 * 24 * 60 * 60, // 7 days
}, nil
}
// LoadVacuumTaskPolicy loads complete vacuum task policy from protobuf file
func (cp *ConfigPersistence) LoadVacuumTaskPolicy() (*worker_pb.TaskPolicy, error) {
if cp.dataDir == "" {
// Return default policy if no data directory
return &worker_pb.TaskPolicy{
Enabled: true,
MaxConcurrent: 2,
RepeatIntervalSeconds: 24 * 3600, // 24 hours in seconds
CheckIntervalSeconds: 6 * 3600, // 6 hours in seconds
TaskConfig: &worker_pb.TaskPolicy_VacuumConfig{
VacuumConfig: &worker_pb.VacuumTaskConfig{
GarbageThreshold: 0.3,
MinVolumeAgeHours: 24,
MinIntervalSeconds: 7 * 24 * 60 * 60, // 7 days
},
},
}, nil
}
confDir := filepath.Join(cp.dataDir, ConfigSubdir)
configPath := filepath.Join(confDir, VacuumTaskConfigFile)
// Check if file exists
if _, err := os.Stat(configPath); os.IsNotExist(err) {
// Return default policy if file doesn't exist
return &worker_pb.TaskPolicy{
Enabled: true,
MaxConcurrent: 2,
RepeatIntervalSeconds: 24 * 3600, // 24 hours in seconds
CheckIntervalSeconds: 6 * 3600, // 6 hours in seconds
TaskConfig: &worker_pb.TaskPolicy_VacuumConfig{
VacuumConfig: &worker_pb.VacuumTaskConfig{
GarbageThreshold: 0.3,
MinVolumeAgeHours: 24,
MinIntervalSeconds: 7 * 24 * 60 * 60, // 7 days
},
},
}, nil
}
// Read file
configData, err := os.ReadFile(configPath)
if err != nil {
return nil, fmt.Errorf("failed to read vacuum task config file: %w", err)
}
// Try to unmarshal as TaskPolicy
var policy worker_pb.TaskPolicy
if err := proto.Unmarshal(configData, &policy); err == nil {
// Validate that it's actually a TaskPolicy with vacuum config
if policy.GetVacuumConfig() != nil {
glog.V(1).Infof("Loaded vacuum task policy from %s", configPath)
return &policy, nil
}
}
return nil, fmt.Errorf("failed to unmarshal vacuum task configuration")
}
// SaveErasureCodingTaskConfig saves EC task configuration to protobuf file
func (cp *ConfigPersistence) SaveErasureCodingTaskConfig(config *ErasureCodingTaskConfig) error {
return cp.saveTaskConfig(ECTaskConfigFile, config)
}
// SaveErasureCodingTaskPolicy saves complete EC task policy to protobuf file
func (cp *ConfigPersistence) SaveErasureCodingTaskPolicy(policy *worker_pb.TaskPolicy) error {
return cp.saveTaskConfig(ECTaskConfigFile, policy)
}
// LoadErasureCodingTaskConfig loads EC task configuration from protobuf file
func (cp *ConfigPersistence) LoadErasureCodingTaskConfig() (*ErasureCodingTaskConfig, error) {
// Load as TaskPolicy and extract EC config
if taskPolicy, err := cp.LoadErasureCodingTaskPolicy(); err == nil && taskPolicy != nil {
if ecConfig := taskPolicy.GetErasureCodingConfig(); ecConfig != nil {
return ecConfig, nil
}
}
// Return default config if no valid config found
return &ErasureCodingTaskConfig{
FullnessRatio: 0.9,
QuietForSeconds: 3600,
MinVolumeSizeMb: 1024,
CollectionFilter: "",
}, nil
}
// LoadErasureCodingTaskPolicy loads complete EC task policy from protobuf file
func (cp *ConfigPersistence) LoadErasureCodingTaskPolicy() (*worker_pb.TaskPolicy, error) {
if cp.dataDir == "" {
// Return default policy if no data directory
return &worker_pb.TaskPolicy{
Enabled: true,
MaxConcurrent: 1,
RepeatIntervalSeconds: 168 * 3600, // 1 week in seconds
CheckIntervalSeconds: 24 * 3600, // 24 hours in seconds
TaskConfig: &worker_pb.TaskPolicy_ErasureCodingConfig{
ErasureCodingConfig: &worker_pb.ErasureCodingTaskConfig{
FullnessRatio: 0.9,
QuietForSeconds: 3600,
MinVolumeSizeMb: 1024,
CollectionFilter: "",
},
},
}, nil
}
confDir := filepath.Join(cp.dataDir, ConfigSubdir)
configPath := filepath.Join(confDir, ECTaskConfigFile)
// Check if file exists
if _, err := os.Stat(configPath); os.IsNotExist(err) {
// Return default policy if file doesn't exist
return &worker_pb.TaskPolicy{
Enabled: true,
MaxConcurrent: 1,
RepeatIntervalSeconds: 168 * 3600, // 1 week in seconds
CheckIntervalSeconds: 24 * 3600, // 24 hours in seconds
TaskConfig: &worker_pb.TaskPolicy_ErasureCodingConfig{
ErasureCodingConfig: &worker_pb.ErasureCodingTaskConfig{
FullnessRatio: 0.9,
QuietForSeconds: 3600,
MinVolumeSizeMb: 1024,
CollectionFilter: "",
},
},
}, nil
}
// Read file
configData, err := os.ReadFile(configPath)
if err != nil {
return nil, fmt.Errorf("failed to read EC task config file: %w", err)
}
// Try to unmarshal as TaskPolicy
var policy worker_pb.TaskPolicy
if err := proto.Unmarshal(configData, &policy); err == nil {
// Validate that it's actually a TaskPolicy with EC config
if policy.GetErasureCodingConfig() != nil {
glog.V(1).Infof("Loaded EC task policy from %s", configPath)
return &policy, nil
}
}
return nil, fmt.Errorf("failed to unmarshal EC task configuration")
}
// SaveBalanceTaskConfig saves balance task configuration to protobuf file
func (cp *ConfigPersistence) SaveBalanceTaskConfig(config *BalanceTaskConfig) error {
return cp.saveTaskConfig(BalanceTaskConfigFile, config)
}
// SaveBalanceTaskPolicy saves complete balance task policy to protobuf file
func (cp *ConfigPersistence) SaveBalanceTaskPolicy(policy *worker_pb.TaskPolicy) error {
return cp.saveTaskConfig(BalanceTaskConfigFile, policy)
}
// LoadBalanceTaskConfig loads balance task configuration from protobuf file
func (cp *ConfigPersistence) LoadBalanceTaskConfig() (*BalanceTaskConfig, error) {
// Load as TaskPolicy and extract balance config
if taskPolicy, err := cp.LoadBalanceTaskPolicy(); err == nil && taskPolicy != nil {
if balanceConfig := taskPolicy.GetBalanceConfig(); balanceConfig != nil {
return balanceConfig, nil
}
}
// Return default config if no valid config found
return &BalanceTaskConfig{
ImbalanceThreshold: 0.1,
MinServerCount: 2,
}, nil
}
// LoadBalanceTaskPolicy loads complete balance task policy from protobuf file
func (cp *ConfigPersistence) LoadBalanceTaskPolicy() (*worker_pb.TaskPolicy, error) {
if cp.dataDir == "" {
// Return default policy if no data directory
return &worker_pb.TaskPolicy{
Enabled: true,
MaxConcurrent: 1,
RepeatIntervalSeconds: 6 * 3600, // 6 hours in seconds
CheckIntervalSeconds: 12 * 3600, // 12 hours in seconds
TaskConfig: &worker_pb.TaskPolicy_BalanceConfig{
BalanceConfig: &worker_pb.BalanceTaskConfig{
ImbalanceThreshold: 0.1,
MinServerCount: 2,
},
},
}, nil
}
confDir := filepath.Join(cp.dataDir, ConfigSubdir)
configPath := filepath.Join(confDir, BalanceTaskConfigFile)
// Check if file exists
if _, err := os.Stat(configPath); os.IsNotExist(err) {
// Return default policy if file doesn't exist
return &worker_pb.TaskPolicy{
Enabled: true,
MaxConcurrent: 1,
RepeatIntervalSeconds: 6 * 3600, // 6 hours in seconds
CheckIntervalSeconds: 12 * 3600, // 12 hours in seconds
TaskConfig: &worker_pb.TaskPolicy_BalanceConfig{
BalanceConfig: &worker_pb.BalanceTaskConfig{
ImbalanceThreshold: 0.1,
MinServerCount: 2,
},
},
}, nil
}
// Read file
configData, err := os.ReadFile(configPath)
if err != nil {
return nil, fmt.Errorf("failed to read balance task config file: %w", err)
}
// Try to unmarshal as TaskPolicy
var policy worker_pb.TaskPolicy
if err := proto.Unmarshal(configData, &policy); err == nil {
// Validate that it's actually a TaskPolicy with balance config
if policy.GetBalanceConfig() != nil {
glog.V(1).Infof("Loaded balance task policy from %s", configPath)
return &policy, nil
}
}
return nil, fmt.Errorf("failed to unmarshal balance task configuration")
}
// SaveReplicationTaskConfig saves replication task configuration to protobuf file
func (cp *ConfigPersistence) SaveReplicationTaskConfig(config *ReplicationTaskConfig) error {
return cp.saveTaskConfig(ReplicationTaskConfigFile, config)
}
// LoadReplicationTaskConfig loads replication task configuration from protobuf file
func (cp *ConfigPersistence) LoadReplicationTaskConfig() (*ReplicationTaskConfig, error) {
var config ReplicationTaskConfig
err := cp.loadTaskConfig(ReplicationTaskConfigFile, &config)
if err != nil {
// Return default config if file doesn't exist
if os.IsNotExist(err) {
return &ReplicationTaskConfig{
TargetReplicaCount: 1,
}, nil
}
return nil, err
}
return &config, nil
}
// saveTaskConfig is a generic helper for saving task configurations with both protobuf and JSON reference
func (cp *ConfigPersistence) saveTaskConfig(filename string, config proto.Message) error {
if cp.dataDir == "" {
return fmt.Errorf("no data directory specified, cannot save task configuration")
}
// Create conf subdirectory path
confDir := filepath.Join(cp.dataDir, ConfigSubdir)
configPath := filepath.Join(confDir, filename)
// Generate JSON reference filename
jsonFilename := filename[:len(filename)-3] + ".json" // Swap the trailing ".pb" for ".json" (assumes filename ends in ".pb")
jsonPath := filepath.Join(confDir, jsonFilename)
// Create conf directory if it doesn't exist
if err := os.MkdirAll(confDir, ConfigDirPermissions); err != nil {
return fmt.Errorf("failed to create config directory: %w", err)
}
// Marshal configuration to protobuf binary format
configData, err := proto.Marshal(config)
if err != nil {
return fmt.Errorf("failed to marshal task config: %w", err)
}
// Write protobuf file
if err := os.WriteFile(configPath, configData, ConfigFilePermissions); err != nil {
return fmt.Errorf("failed to write task config file: %w", err)
}
// Marshal configuration to JSON for reference
marshaler := protojson.MarshalOptions{
Multiline: true,
Indent: " ",
EmitUnpopulated: true,
}
jsonData, err := marshaler.Marshal(config)
if err != nil {
glog.Warningf("Failed to marshal task config to JSON reference: %v", err)
} else {
// Write JSON reference file
if err := os.WriteFile(jsonPath, jsonData, ConfigFilePermissions); err != nil {
glog.Warningf("Failed to write task config JSON reference: %v", err)
}
}
glog.V(1).Infof("Saved task configuration to %s (with JSON reference)", configPath)
return nil
}
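// Illustrative sketch (not part of this change): persisting a policy through
// SaveErasureCodingTaskPolicy leaves two files under <dataDir>/conf, the
// authoritative protobuf blob plus a human-readable JSON reference. The data
// directory and file names below are hypothetical.
//
//	cp := &ConfigPersistence{dataDir: "/var/lib/seaweedfs-admin"}
//	policy := &worker_pb.TaskPolicy{Enabled: true, MaxConcurrent: 1}
//	if err := cp.SaveErasureCodingTaskPolicy(policy); err != nil {
//		glog.Fatalf("save failed: %v", err)
//	}
//	// <dataDir>/conf/task_erasure_coding.pb   <- read back by LoadErasureCodingTaskPolicy
//	// <dataDir>/conf/task_erasure_coding.json <- reference copy only, never read back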
// loadTaskConfig is a generic helper for loading task configurations from conf subdirectory
func (cp *ConfigPersistence) loadTaskConfig(filename string, config proto.Message) error {
if cp.dataDir == "" {
return os.ErrNotExist // Will trigger default config return
}
confDir := filepath.Join(cp.dataDir, ConfigSubdir)
configPath := filepath.Join(confDir, filename)
// Check if file exists
if _, err := os.Stat(configPath); os.IsNotExist(err) {
return err // Will trigger default config return
}
// Read file
configData, err := os.ReadFile(configPath)
if err != nil {
return fmt.Errorf("failed to read task config file: %w", err)
}
// Unmarshal protobuf binary data
if err := proto.Unmarshal(configData, config); err != nil {
return fmt.Errorf("failed to unmarshal task config: %w", err)
}
glog.V(1).Infof("Loaded task configuration from %s", configPath)
return nil
}
// GetDataDir returns the data directory path
func (cp *ConfigPersistence) GetDataDir() string {
return cp.dataDir
@@ -249,6 +596,7 @@ func (cp *ConfigPersistence) GetConfigInfo() map[string]interface{} {
info := map[string]interface{}{
"data_dir_configured": cp.IsConfigured(),
"data_dir": cp.dataDir,
"config_subdir": ConfigSubdir,
}
if cp.IsConfigured() {
@@ -256,10 +604,18 @@ func (cp *ConfigPersistence) GetConfigInfo() map[string]interface{} {
if _, err := os.Stat(cp.dataDir); err == nil {
info["data_dir_exists"] = true
// List config files
configFiles, err := cp.ListConfigFiles()
if err == nil {
info["config_files"] = configFiles
// Check if conf subdirectory exists
confDir := filepath.Join(cp.dataDir, ConfigSubdir)
if _, err := os.Stat(confDir); err == nil {
info["conf_dir_exists"] = true
// List config files
configFiles, err := cp.ListConfigFiles()
if err == nil {
info["config_files"] = configFiles
}
} else {
info["conf_dir_exists"] = false
}
} else {
info["data_dir_exists"] = false
@@ -268,3 +624,67 @@ func (cp *ConfigPersistence) GetConfigInfo() map[string]interface{} {
return info
}
// buildPolicyFromTaskConfigs loads task configurations from separate files and builds a MaintenancePolicy
func buildPolicyFromTaskConfigs() *worker_pb.MaintenancePolicy {
policy := &worker_pb.MaintenancePolicy{
GlobalMaxConcurrent: 4,
DefaultRepeatIntervalSeconds: 6 * 3600, // 6 hours in seconds
DefaultCheckIntervalSeconds: 12 * 3600, // 12 hours in seconds
TaskPolicies: make(map[string]*worker_pb.TaskPolicy),
}
// Load vacuum task configuration
if vacuumConfig := vacuum.LoadConfigFromPersistence(nil); vacuumConfig != nil {
policy.TaskPolicies["vacuum"] = &worker_pb.TaskPolicy{
Enabled: vacuumConfig.Enabled,
MaxConcurrent: int32(vacuumConfig.MaxConcurrent),
RepeatIntervalSeconds: int32(vacuumConfig.ScanIntervalSeconds),
CheckIntervalSeconds: int32(vacuumConfig.ScanIntervalSeconds),
TaskConfig: &worker_pb.TaskPolicy_VacuumConfig{
VacuumConfig: &worker_pb.VacuumTaskConfig{
GarbageThreshold: float64(vacuumConfig.GarbageThreshold),
MinVolumeAgeHours: int32(vacuumConfig.MinVolumeAgeSeconds / 3600), // Convert seconds to hours
MinIntervalSeconds: int32(vacuumConfig.MinIntervalSeconds),
},
},
}
}
// Load erasure coding task configuration
if ecConfig := erasure_coding.LoadConfigFromPersistence(nil); ecConfig != nil {
policy.TaskPolicies["erasure_coding"] = &worker_pb.TaskPolicy{
Enabled: ecConfig.Enabled,
MaxConcurrent: int32(ecConfig.MaxConcurrent),
RepeatIntervalSeconds: int32(ecConfig.ScanIntervalSeconds),
CheckIntervalSeconds: int32(ecConfig.ScanIntervalSeconds),
TaskConfig: &worker_pb.TaskPolicy_ErasureCodingConfig{
ErasureCodingConfig: &worker_pb.ErasureCodingTaskConfig{
FullnessRatio: float64(ecConfig.FullnessRatio),
QuietForSeconds: int32(ecConfig.QuietForSeconds),
MinVolumeSizeMb: int32(ecConfig.MinSizeMB),
CollectionFilter: ecConfig.CollectionFilter,
},
},
}
}
// Load balance task configuration
if balanceConfig := balance.LoadConfigFromPersistence(nil); balanceConfig != nil {
policy.TaskPolicies["balance"] = &worker_pb.TaskPolicy{
Enabled: balanceConfig.Enabled,
MaxConcurrent: int32(balanceConfig.MaxConcurrent),
RepeatIntervalSeconds: int32(balanceConfig.ScanIntervalSeconds),
CheckIntervalSeconds: int32(balanceConfig.ScanIntervalSeconds),
TaskConfig: &worker_pb.TaskPolicy_BalanceConfig{
BalanceConfig: &worker_pb.BalanceTaskConfig{
ImbalanceThreshold: float64(balanceConfig.ImbalanceThreshold),
MinServerCount: int32(balanceConfig.MinServerCount),
},
},
}
}
glog.V(1).Infof("Built maintenance policy from separate task configs - %d task policies loaded", len(policy.TaskPolicies))
return policy
}
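// Illustrative sketch (not part of this change): a consumer of the built
// policy looks up the per-task entry by name and falls back to the
// policy-level defaults when a task type has no entry or no override.
//
//	policy := buildPolicyFromTaskConfigs()
//	repeat := policy.DefaultRepeatIntervalSeconds
//	if tp, ok := policy.TaskPolicies["erasure_coding"]; ok && tp.RepeatIntervalSeconds > 0 {
//		repeat = tp.RepeatIntervalSeconds
//	}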

View File

@@ -0,0 +1,734 @@
package dash
import (
"context"
"fmt"
"sort"
"time"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/pb"
"github.com/seaweedfs/seaweedfs/weed/pb/master_pb"
"github.com/seaweedfs/seaweedfs/weed/pb/volume_server_pb"
"github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding"
)
// GetClusterEcShards retrieves cluster EC shards data with pagination, sorting, and filtering
func (s *AdminServer) GetClusterEcShards(page int, pageSize int, sortBy string, sortOrder string, collection string) (*ClusterEcShardsData, error) {
// Set defaults
if page < 1 {
page = 1
}
if pageSize < 1 || pageSize > 1000 {
pageSize = 100
}
if sortBy == "" {
sortBy = "volume_id"
}
if sortOrder == "" {
sortOrder = "asc"
}
var ecShards []EcShardWithInfo
volumeShardsMap := make(map[uint32]map[int]bool) // volumeId -> set of shards present
volumesWithAllShards := 0
volumesWithMissingShards := 0
// Get detailed EC shard information via gRPC
err := s.WithMasterClient(func(client master_pb.SeaweedClient) error {
resp, err := client.VolumeList(context.Background(), &master_pb.VolumeListRequest{})
if err != nil {
return err
}
if resp.TopologyInfo != nil {
for _, dc := range resp.TopologyInfo.DataCenterInfos {
for _, rack := range dc.RackInfos {
for _, node := range rack.DataNodeInfos {
for _, diskInfo := range node.DiskInfos {
// Process EC shard information
for _, ecShardInfo := range diskInfo.EcShardInfos {
volumeId := ecShardInfo.Id
// Initialize volume shards map if needed
if volumeShardsMap[volumeId] == nil {
volumeShardsMap[volumeId] = make(map[int]bool)
}
// Create individual shard entries for each shard this server has
shardBits := ecShardInfo.EcIndexBits
for shardId := 0; shardId < erasure_coding.TotalShardsCount; shardId++ {
if (shardBits & (1 << uint(shardId))) != 0 {
// Mark this shard as present for this volume
volumeShardsMap[volumeId][shardId] = true
ecShard := EcShardWithInfo{
VolumeID: volumeId,
ShardID: uint32(shardId),
Collection: ecShardInfo.Collection,
Size: 0, // EC shards don't have individual size in the API response
Server: node.Id,
DataCenter: dc.Id,
Rack: rack.Id,
DiskType: diskInfo.Type,
ModifiedTime: 0, // Not available in current API
EcIndexBits: ecShardInfo.EcIndexBits,
ShardCount: getShardCount(ecShardInfo.EcIndexBits),
}
ecShards = append(ecShards, ecShard)
}
}
}
}
}
}
}
}
return nil
})
if err != nil {
return nil, err
}
// Calculate volume-level completeness (across all servers)
volumeCompleteness := make(map[uint32]bool)
volumeMissingShards := make(map[uint32][]int)
for volumeId, shardsPresent := range volumeShardsMap {
var missingShards []int
shardCount := len(shardsPresent)
// Find which shards are missing for this volume across ALL servers
for shardId := 0; shardId < erasure_coding.TotalShardsCount; shardId++ {
if !shardsPresent[shardId] {
missingShards = append(missingShards, shardId)
}
}
isComplete := (shardCount == erasure_coding.TotalShardsCount)
volumeCompleteness[volumeId] = isComplete
volumeMissingShards[volumeId] = missingShards
if isComplete {
volumesWithAllShards++
} else {
volumesWithMissingShards++
}
}
// Update completeness info for each shard based on volume-level completeness
for i := range ecShards {
volumeId := ecShards[i].VolumeID
ecShards[i].IsComplete = volumeCompleteness[volumeId]
ecShards[i].MissingShards = volumeMissingShards[volumeId]
}
// Filter by collection if specified
if collection != "" {
var filteredShards []EcShardWithInfo
for _, shard := range ecShards {
if shard.Collection == collection {
filteredShards = append(filteredShards, shard)
}
}
ecShards = filteredShards
}
// Sort the results
sortEcShards(ecShards, sortBy, sortOrder)
// Calculate statistics for conditional display
dataCenters := make(map[string]bool)
racks := make(map[string]bool)
collections := make(map[string]bool)
for _, shard := range ecShards {
dataCenters[shard.DataCenter] = true
racks[shard.Rack] = true
if shard.Collection != "" {
collections[shard.Collection] = true
}
}
// Pagination
totalShards := len(ecShards)
totalPages := (totalShards + pageSize - 1) / pageSize
startIndex := (page - 1) * pageSize
endIndex := startIndex + pageSize
if endIndex > totalShards {
endIndex = totalShards
}
if startIndex >= totalShards {
startIndex = 0
endIndex = 0
}
paginatedShards := ecShards[startIndex:endIndex]
// Build response
data := &ClusterEcShardsData{
EcShards: paginatedShards,
TotalShards: totalShards,
TotalVolumes: len(volumeShardsMap),
LastUpdated: time.Now(),
// Pagination
CurrentPage: page,
TotalPages: totalPages,
PageSize: pageSize,
// Sorting
SortBy: sortBy,
SortOrder: sortOrder,
// Statistics
DataCenterCount: len(dataCenters),
RackCount: len(racks),
CollectionCount: len(collections),
// Conditional display flags
ShowDataCenterColumn: len(dataCenters) > 1,
ShowRackColumn: len(racks) > 1,
ShowCollectionColumn: len(collections) > 1 || collection != "",
// Filtering
FilterCollection: collection,
// EC specific statistics
ShardsPerVolume: make(map[uint32]int), // Populated below from volumeShardsMap
VolumesWithAllShards: volumesWithAllShards,
VolumesWithMissingShards: volumesWithMissingShards,
}
// Populate ShardsPerVolume for the response
for volumeId, shardsPresent := range volumeShardsMap {
data.ShardsPerVolume[volumeId] = len(shardsPresent)
}
// Set single values when only one exists
if len(dataCenters) == 1 {
for dc := range dataCenters {
data.SingleDataCenter = dc
break
}
}
if len(racks) == 1 {
for rack := range racks {
data.SingleRack = rack
break
}
}
if len(collections) == 1 {
for col := range collections {
data.SingleCollection = col
break
}
}
return data, nil
}
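// Worked example of the pagination clamp above (illustrative): with
// totalShards=250 and pageSize=100, totalPages=(250+99)/100=3. page=3 gives
// startIndex=200, endIndex=250 (a 50-entry final page); page=4 computes
// startIndex=300 >= 250, so both indices clamp to 0 and an empty page is
// returned instead of slicing out of range.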
// GetClusterEcVolumes retrieves cluster EC volumes data grouped by volume ID with shard locations
func (s *AdminServer) GetClusterEcVolumes(page int, pageSize int, sortBy string, sortOrder string, collection string) (*ClusterEcVolumesData, error) {
// Set defaults
if page < 1 {
page = 1
}
if pageSize < 1 || pageSize > 1000 {
pageSize = 100
}
if sortBy == "" {
sortBy = "volume_id"
}
if sortOrder == "" {
sortOrder = "asc"
}
volumeData := make(map[uint32]*EcVolumeWithShards)
totalShards := 0
// Get detailed EC shard information via gRPC
err := s.WithMasterClient(func(client master_pb.SeaweedClient) error {
resp, err := client.VolumeList(context.Background(), &master_pb.VolumeListRequest{})
if err != nil {
return err
}
if resp.TopologyInfo != nil {
for _, dc := range resp.TopologyInfo.DataCenterInfos {
for _, rack := range dc.RackInfos {
for _, node := range rack.DataNodeInfos {
for _, diskInfo := range node.DiskInfos {
// Process EC shard information
for _, ecShardInfo := range diskInfo.EcShardInfos {
volumeId := ecShardInfo.Id
// Initialize volume data if needed
if volumeData[volumeId] == nil {
volumeData[volumeId] = &EcVolumeWithShards{
VolumeID: volumeId,
Collection: ecShardInfo.Collection,
TotalShards: 0,
IsComplete: false,
MissingShards: []int{},
ShardLocations: make(map[int]string),
ShardSizes: make(map[int]int64),
DataCenters: []string{},
Servers: []string{},
Racks: []string{},
}
}
volume := volumeData[volumeId]
// Track data centers and servers
dcExists := false
for _, existingDc := range volume.DataCenters {
if existingDc == dc.Id {
dcExists = true
break
}
}
if !dcExists {
volume.DataCenters = append(volume.DataCenters, dc.Id)
}
serverExists := false
for _, existingServer := range volume.Servers {
if existingServer == node.Id {
serverExists = true
break
}
}
if !serverExists {
volume.Servers = append(volume.Servers, node.Id)
}
// Track racks
rackExists := false
for _, existingRack := range volume.Racks {
if existingRack == rack.Id {
rackExists = true
break
}
}
if !rackExists {
volume.Racks = append(volume.Racks, rack.Id)
}
// Process each shard this server has for this volume
shardBits := ecShardInfo.EcIndexBits
for shardId := 0; shardId < erasure_coding.TotalShardsCount; shardId++ {
if (shardBits & (1 << uint(shardId))) != 0 {
// Record shard location
volume.ShardLocations[shardId] = node.Id
totalShards++ // counts shard copies: a shard held by multiple servers increments this once per server
}
}
}
}
}
}
}
}
return nil
})
if err != nil {
return nil, err
}
// Collect shard size information from volume servers
for volumeId, volume := range volumeData {
// Deduplicate the servers holding shards of this volume so each is queried once
serverHasVolume := make(map[string]bool)
for _, server := range volume.Servers {
serverHasVolume[server] = true
}
// Query each server for shard sizes
for server := range serverHasVolume {
err := s.WithVolumeServerClient(pb.ServerAddress(server), func(client volume_server_pb.VolumeServerClient) error {
resp, err := client.VolumeEcShardsInfo(context.Background(), &volume_server_pb.VolumeEcShardsInfoRequest{
VolumeId: volumeId,
})
if err != nil {
glog.V(1).Infof("Failed to get EC shard info from %s for volume %d: %v", server, volumeId, err)
return nil // Continue with other servers, don't fail the entire request
}
// Update shard sizes
for _, shardInfo := range resp.EcShardInfos {
volume.ShardSizes[int(shardInfo.ShardId)] = shardInfo.Size
}
return nil
})
if err != nil {
glog.V(1).Infof("Failed to connect to volume server %s: %v", server, err)
}
}
}
// Calculate completeness for each volume
completeVolumes := 0
incompleteVolumes := 0
for _, volume := range volumeData {
volume.TotalShards = len(volume.ShardLocations)
// Find missing shards
var missingShards []int
for shardId := 0; shardId < erasure_coding.TotalShardsCount; shardId++ {
if _, exists := volume.ShardLocations[shardId]; !exists {
missingShards = append(missingShards, shardId)
}
}
volume.MissingShards = missingShards
volume.IsComplete = (len(missingShards) == 0)
if volume.IsComplete {
completeVolumes++
} else {
incompleteVolumes++
}
}
// Convert map to slice
var ecVolumes []EcVolumeWithShards
for _, volume := range volumeData {
// Filter by collection if specified
if collection == "" || volume.Collection == collection {
ecVolumes = append(ecVolumes, *volume)
}
}
// Sort the results
sortEcVolumes(ecVolumes, sortBy, sortOrder)
// Calculate statistics for conditional display
dataCenters := make(map[string]bool)
collections := make(map[string]bool)
for _, volume := range ecVolumes {
for _, dc := range volume.DataCenters {
dataCenters[dc] = true
}
if volume.Collection != "" {
collections[volume.Collection] = true
}
}
// Pagination
totalVolumes := len(ecVolumes)
totalPages := (totalVolumes + pageSize - 1) / pageSize
startIndex := (page - 1) * pageSize
endIndex := startIndex + pageSize
if endIndex > totalVolumes {
endIndex = totalVolumes
}
if startIndex >= totalVolumes {
startIndex = 0
endIndex = 0
}
paginatedVolumes := ecVolumes[startIndex:endIndex]
// Build response
data := &ClusterEcVolumesData{
EcVolumes: paginatedVolumes,
TotalVolumes: totalVolumes,
LastUpdated: time.Now(),
// Pagination
Page: page,
PageSize: pageSize,
TotalPages: totalPages,
// Sorting
SortBy: sortBy,
SortOrder: sortOrder,
// Filtering
Collection: collection,
// Conditional display flags
ShowDataCenterColumn: len(dataCenters) > 1,
ShowRackColumn: false, // Racks are tracked per volume but not rendered as a column in this view
ShowCollectionColumn: len(collections) > 1 || collection != "",
// Statistics
CompleteVolumes: completeVolumes,
IncompleteVolumes: incompleteVolumes,
TotalShards: totalShards,
}
return data, nil
}
// sortEcVolumes sorts EC volumes based on the specified field and order
func sortEcVolumes(volumes []EcVolumeWithShards, sortBy string, sortOrder string) {
sort.Slice(volumes, func(i, j int) bool {
var less bool
switch sortBy {
case "volume_id":
less = volumes[i].VolumeID < volumes[j].VolumeID
case "collection":
if volumes[i].Collection == volumes[j].Collection {
less = volumes[i].VolumeID < volumes[j].VolumeID
} else {
less = volumes[i].Collection < volumes[j].Collection
}
case "total_shards":
if volumes[i].TotalShards == volumes[j].TotalShards {
less = volumes[i].VolumeID < volumes[j].VolumeID
} else {
less = volumes[i].TotalShards < volumes[j].TotalShards
}
case "completeness":
// Complete volumes first, then by volume ID
if volumes[i].IsComplete == volumes[j].IsComplete {
less = volumes[i].VolumeID < volumes[j].VolumeID
} else {
less = volumes[i].IsComplete && !volumes[j].IsComplete
}
default:
less = volumes[i].VolumeID < volumes[j].VolumeID
}
if sortOrder == "desc" {
return !less
}
return less
})
}
// getShardCount returns the number of shards represented by the bitmap
func getShardCount(ecIndexBits uint32) int {
count := 0
for i := 0; i < erasure_coding.TotalShardsCount; i++ {
if (ecIndexBits & (1 << uint(i))) != 0 {
count++
}
}
return count
}
// getMissingShards returns a slice of missing shard IDs for a volume
func getMissingShards(ecIndexBits uint32) []int {
var missing []int
for i := 0; i < erasure_coding.TotalShardsCount; i++ {
if (ecIndexBits & (1 << uint(i))) == 0 {
missing = append(missing, i)
}
}
return missing
}
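// Worked example (illustrative): EcIndexBits is a bitmap over the
// erasure_coding.TotalShardsCount shard slots (14 = 10 data + 4 parity).
// For ecIndexBits = 0x0005 (binary 101), shards 0 and 2 are present:
//
//	getShardCount(0x0005)    // -> 2
//	getMissingShards(0x0005) // -> [1 3 4 5 6 7 8 9 10 11 12 13]
//
// A server holding every shard reports 0x3FFF, for which getShardCount
// returns 14 and getMissingShards returns nil.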
// sortEcShards sorts EC shards based on the specified field and order
func sortEcShards(shards []EcShardWithInfo, sortBy string, sortOrder string) {
sort.Slice(shards, func(i, j int) bool {
var less bool
switch sortBy {
case "shard_id":
less = shards[i].ShardID < shards[j].ShardID
case "server":
if shards[i].Server == shards[j].Server {
less = shards[i].ShardID < shards[j].ShardID // Secondary sort by shard ID
} else {
less = shards[i].Server < shards[j].Server
}
case "data_center":
if shards[i].DataCenter == shards[j].DataCenter {
less = shards[i].ShardID < shards[j].ShardID // Secondary sort by shard ID
} else {
less = shards[i].DataCenter < shards[j].DataCenter
}
case "rack":
if shards[i].Rack == shards[j].Rack {
less = shards[i].ShardID < shards[j].ShardID // Secondary sort by shard ID
} else {
less = shards[i].Rack < shards[j].Rack
}
default:
less = shards[i].ShardID < shards[j].ShardID
}
if sortOrder == "desc" {
return !less
}
return less
})
}
// GetEcVolumeDetails retrieves detailed information about a specific EC volume
func (s *AdminServer) GetEcVolumeDetails(volumeID uint32, sortBy string, sortOrder string) (*EcVolumeDetailsData, error) {
// Set defaults
if sortBy == "" {
sortBy = "shard_id"
}
if sortOrder == "" {
sortOrder = "asc"
}
var shards []EcShardWithInfo
var collection string
dataCenters := make(map[string]bool)
servers := make(map[string]bool)
// Get detailed EC shard information for the specific volume via gRPC
err := s.WithMasterClient(func(client master_pb.SeaweedClient) error {
resp, err := client.VolumeList(context.Background(), &master_pb.VolumeListRequest{})
if err != nil {
return err
}
if resp.TopologyInfo != nil {
for _, dc := range resp.TopologyInfo.DataCenterInfos {
for _, rack := range dc.RackInfos {
for _, node := range rack.DataNodeInfos {
for _, diskInfo := range node.DiskInfos {
// Process EC shard information for this specific volume
for _, ecShardInfo := range diskInfo.EcShardInfos {
if ecShardInfo.Id == volumeID {
collection = ecShardInfo.Collection
dataCenters[dc.Id] = true
servers[node.Id] = true
// Create individual shard entries for each shard this server has
shardBits := ecShardInfo.EcIndexBits
for shardId := 0; shardId < erasure_coding.TotalShardsCount; shardId++ {
if (shardBits & (1 << uint(shardId))) != 0 {
ecShard := EcShardWithInfo{
VolumeID: ecShardInfo.Id,
ShardID: uint32(shardId),
Collection: ecShardInfo.Collection,
Size: 0, // Not in the master API response; filled in below from volume server queries
Server: node.Id,
DataCenter: dc.Id,
Rack: rack.Id,
DiskType: diskInfo.Type,
ModifiedTime: 0, // Not available in current API
EcIndexBits: ecShardInfo.EcIndexBits,
ShardCount: getShardCount(ecShardInfo.EcIndexBits),
}
shards = append(shards, ecShard)
}
}
}
}
}
}
}
}
}
return nil
})
if err != nil {
return nil, err
}
if len(shards) == 0 {
return nil, fmt.Errorf("EC volume %d not found", volumeID)
}
// Collect shard size information from volume servers
shardSizeMap := make(map[string]map[uint32]uint64) // server -> shardId -> size
for _, shard := range shards {
server := shard.Server
if _, exists := shardSizeMap[server]; !exists {
// Query this server for shard sizes
err := s.WithVolumeServerClient(pb.ServerAddress(server), func(client volume_server_pb.VolumeServerClient) error {
resp, err := client.VolumeEcShardsInfo(context.Background(), &volume_server_pb.VolumeEcShardsInfoRequest{
VolumeId: volumeID,
})
if err != nil {
glog.V(1).Infof("Failed to get EC shard info from %s for volume %d: %v", server, volumeID, err)
return nil // Continue with other servers, don't fail the entire request
}
// Store shard sizes for this server
shardSizeMap[server] = make(map[uint32]uint64)
for _, shardInfo := range resp.EcShardInfos {
shardSizeMap[server][shardInfo.ShardId] = uint64(shardInfo.Size)
}
return nil
})
if err != nil {
glog.V(1).Infof("Failed to connect to volume server %s: %v", server, err)
}
}
}
// Update shard sizes in the shards array
for i := range shards {
server := shards[i].Server
shardId := shards[i].ShardID
if serverSizes, exists := shardSizeMap[server]; exists {
if size, exists := serverSizes[shardId]; exists {
shards[i].Size = size
}
}
}
// Calculate completeness based on unique shard IDs
foundShards := make(map[int]bool)
for _, shard := range shards {
foundShards[int(shard.ShardID)] = true
}
totalUniqueShards := len(foundShards)
isComplete := (totalUniqueShards == erasure_coding.TotalShardsCount)
// Calculate missing shards
var missingShards []int
for i := 0; i < erasure_coding.TotalShardsCount; i++ {
if !foundShards[i] {
missingShards = append(missingShards, i)
}
}
// Update completeness info for each shard
for i := range shards {
shards[i].IsComplete = isComplete
shards[i].MissingShards = missingShards
}
// Sort shards based on parameters
sortEcShards(shards, sortBy, sortOrder)
// Convert maps to slices
var dcList []string
for dc := range dataCenters {
dcList = append(dcList, dc)
}
var serverList []string
for server := range servers {
serverList = append(serverList, server)
}
data := &EcVolumeDetailsData{
VolumeID: volumeID,
Collection: collection,
Shards: shards,
TotalShards: totalUniqueShards,
IsComplete: isComplete,
MissingShards: missingShards,
DataCenters: dcList,
Servers: serverList,
LastUpdated: time.Now(),
SortBy: sortBy,
SortOrder: sortOrder,
}
return data, nil
}

View File

@@ -25,3 +25,26 @@ func RequireAuth() gin.HandlerFunc {
c.Next()
}
}
// RequireAuthAPI checks if user is authenticated for API endpoints
// Returns JSON error instead of redirecting to login page
func RequireAuthAPI() gin.HandlerFunc {
return func(c *gin.Context) {
session := sessions.Default(c)
authenticated := session.Get("authenticated")
username := session.Get("username")
if authenticated != true || username == nil {
c.JSON(http.StatusUnauthorized, gin.H{
"error": "Authentication required",
"message": "Please log in to access this endpoint",
})
c.Abort()
return
}
// Set username in context for use in handlers
c.Set("username", username)
c.Next()
}
}
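// Minimal usage sketch (illustrative, not part of this change): unlike
// RequireAuth, an unauthenticated request to a route guarded by
// RequireAuthAPI gets a 401 JSON body rather than a redirect to the login
// page, so AJAX callers can branch on the status code. The session store
// setup below is hypothetical.
//
//	r := gin.New()
//	r.Use(sessions.Sessions("session", cookie.NewStore([]byte("secret"))))
//	api := r.Group("/api")
//	api.Use(RequireAuthAPI())
//	api.GET("/ping", func(c *gin.Context) { c.JSON(http.StatusOK, gin.H{"ok": true}) })
//
//	w := httptest.NewRecorder()
//	r.ServeHTTP(w, httptest.NewRequest(http.MethodGet, "/api/ping", nil))
//	// w.Code == http.StatusUnauthorized
//	// w.Body: {"error":"Authentication required","message":"Please log in to access this endpoint"}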

View File

@@ -135,6 +135,84 @@ type ClusterVolumesData struct {
FilterCollection string `json:"filter_collection"`
}
// ClusterEcShardsData represents the data for the cluster EC shards page
type ClusterEcShardsData struct {
Username string `json:"username"`
EcShards []EcShardWithInfo `json:"ec_shards"`
TotalShards int `json:"total_shards"`
TotalVolumes int `json:"total_volumes"`
LastUpdated time.Time `json:"last_updated"`
// Pagination
CurrentPage int `json:"current_page"`
TotalPages int `json:"total_pages"`
PageSize int `json:"page_size"`
// Sorting
SortBy string `json:"sort_by"`
SortOrder string `json:"sort_order"`
// Statistics
DataCenterCount int `json:"datacenter_count"`
RackCount int `json:"rack_count"`
CollectionCount int `json:"collection_count"`
// Conditional display flags
ShowDataCenterColumn bool `json:"show_datacenter_column"`
ShowRackColumn bool `json:"show_rack_column"`
ShowCollectionColumn bool `json:"show_collection_column"`
// Single values when only one exists
SingleDataCenter string `json:"single_datacenter"`
SingleRack string `json:"single_rack"`
SingleCollection string `json:"single_collection"`
// Filtering
FilterCollection string `json:"filter_collection"`
// EC specific statistics
ShardsPerVolume map[uint32]int `json:"shards_per_volume"` // VolumeID -> shard count
VolumesWithAllShards int `json:"volumes_with_all_shards"` // Volumes with all 14 shards
VolumesWithMissingShards int `json:"volumes_with_missing_shards"` // Volumes missing shards
}
// EcShardWithInfo represents an EC shard with its topology information
type EcShardWithInfo struct {
VolumeID uint32 `json:"volume_id"`
ShardID uint32 `json:"shard_id"`
Collection string `json:"collection"`
Size uint64 `json:"size"`
Server string `json:"server"`
DataCenter string `json:"datacenter"`
Rack string `json:"rack"`
DiskType string `json:"disk_type"`
ModifiedTime int64 `json:"modified_time"`
// EC specific fields
EcIndexBits uint32 `json:"ec_index_bits"` // Bitmap of which shards this server has
ShardCount int `json:"shard_count"` // Number of shards this server has for this volume
IsComplete bool `json:"is_complete"` // True if this volume has all 14 shards
MissingShards []int `json:"missing_shards"` // List of missing shard IDs
}
// EcVolumeDetailsData represents the data for the EC volume details page
type EcVolumeDetailsData struct {
Username string `json:"username"`
VolumeID uint32 `json:"volume_id"`
Collection string `json:"collection"`
Shards []EcShardWithInfo `json:"shards"`
TotalShards int `json:"total_shards"`
IsComplete bool `json:"is_complete"`
MissingShards []int `json:"missing_shards"`
DataCenters []string `json:"datacenters"`
Servers []string `json:"servers"`
LastUpdated time.Time `json:"last_updated"`
// Sorting
SortBy string `json:"sort_by"`
SortOrder string `json:"sort_order"`
}
type VolumeDetailsData struct {
Volume VolumeWithTopology `json:"volume"`
Replicas []VolumeWithTopology `json:"replicas"`
@@ -145,12 +223,13 @@ type VolumeDetailsData struct {
// Collection management structures
type CollectionInfo struct {
Name string `json:"name"`
DataCenter string `json:"datacenter"`
VolumeCount int `json:"volume_count"`
FileCount int64 `json:"file_count"`
TotalSize int64 `json:"total_size"`
DiskTypes []string `json:"disk_types"`
Name string `json:"name"`
DataCenter string `json:"datacenter"`
VolumeCount int `json:"volume_count"`
EcVolumeCount int `json:"ec_volume_count"`
FileCount int64 `json:"file_count"`
TotalSize int64 `json:"total_size"`
DiskTypes []string `json:"disk_types"`
}
type ClusterCollectionsData struct {
@@ -158,6 +237,7 @@ type ClusterCollectionsData struct {
Collections []CollectionInfo `json:"collections"`
TotalCollections int `json:"total_collections"`
TotalVolumes int `json:"total_volumes"`
TotalEcVolumes int `json:"total_ec_volumes"`
TotalFiles int64 `json:"total_files"`
TotalSize int64 `json:"total_size"`
LastUpdated time.Time `json:"last_updated"`
@@ -376,3 +456,74 @@ type MaintenanceWorkersData struct {
}
// Maintenance system types are now in weed/admin/maintenance package
// EcVolumeWithShards represents an EC volume with its shard distribution
type EcVolumeWithShards struct {
VolumeID uint32 `json:"volume_id"`
Collection string `json:"collection"`
TotalShards int `json:"total_shards"`
IsComplete bool `json:"is_complete"`
MissingShards []int `json:"missing_shards"`
ShardLocations map[int]string `json:"shard_locations"` // shardId -> server
ShardSizes map[int]int64 `json:"shard_sizes"` // shardId -> size in bytes
DataCenters []string `json:"data_centers"`
Servers []string `json:"servers"`
Racks []string `json:"racks"`
ModifiedTime int64 `json:"modified_time"`
}
// ClusterEcVolumesData represents the response for clustered EC volumes view
type ClusterEcVolumesData struct {
EcVolumes []EcVolumeWithShards `json:"ec_volumes"`
TotalVolumes int `json:"total_volumes"`
LastUpdated time.Time `json:"last_updated"`
// Pagination
Page int `json:"page"`
PageSize int `json:"page_size"`
TotalPages int `json:"total_pages"`
// Sorting
SortBy string `json:"sort_by"`
SortOrder string `json:"sort_order"`
// Filtering
Collection string `json:"collection"`
// Conditional display flags
ShowDataCenterColumn bool `json:"show_datacenter_column"`
ShowRackColumn bool `json:"show_rack_column"`
ShowCollectionColumn bool `json:"show_collection_column"`
// Statistics
CompleteVolumes int `json:"complete_volumes"`
IncompleteVolumes int `json:"incomplete_volumes"`
TotalShards int `json:"total_shards"`
// User context
Username string `json:"username"`
}
// Collection detail page structures
type CollectionDetailsData struct {
Username string `json:"username"`
CollectionName string `json:"collection_name"`
RegularVolumes []VolumeWithTopology `json:"regular_volumes"`
EcVolumes []EcVolumeWithShards `json:"ec_volumes"`
TotalVolumes int `json:"total_volumes"`
TotalEcVolumes int `json:"total_ec_volumes"`
TotalFiles int64 `json:"total_files"`
TotalSize int64 `json:"total_size"`
DataCenters []string `json:"data_centers"`
DiskTypes []string `json:"disk_types"`
LastUpdated time.Time `json:"last_updated"`
// Pagination
Page int `json:"page"`
PageSize int `json:"page_size"`
TotalPages int `json:"total_pages"`
// Sorting
SortBy string `json:"sort_by"`
SortOrder string `json:"sort_order"`
}

View File

@@ -319,27 +319,41 @@ func (s *WorkerGrpcServer) handleHeartbeat(conn *WorkerConnection, heartbeat *wo
// handleTaskRequest processes task requests from workers
func (s *WorkerGrpcServer) handleTaskRequest(conn *WorkerConnection, request *worker_pb.TaskRequest) {
// glog.Infof("DEBUG handleTaskRequest: Worker %s requesting tasks with capabilities %v", conn.workerID, conn.capabilities)
if s.adminServer.maintenanceManager == nil {
glog.Infof("DEBUG handleTaskRequest: maintenance manager is nil")
return
}
// Get next task from maintenance manager
task := s.adminServer.maintenanceManager.GetNextTask(conn.workerID, conn.capabilities)
// glog.Infof("DEBUG handleTaskRequest: GetNextTask returned task: %v", task != nil)
if task != nil {
glog.Infof("DEBUG handleTaskRequest: Assigning task %s (type: %s) to worker %s", task.ID, task.Type, conn.workerID)
// Use typed params directly - master client should already be configured in the params
var taskParams *worker_pb.TaskParams
if task.TypedParams != nil {
taskParams = task.TypedParams
} else {
// Create basic params if none exist
taskParams = &worker_pb.TaskParams{
VolumeId: task.VolumeID,
Server: task.Server,
Collection: task.Collection,
}
}
// Send task assignment
assignment := &worker_pb.AdminMessage{
Timestamp: time.Now().Unix(),
Message: &worker_pb.AdminMessage_TaskAssignment{
TaskAssignment: &worker_pb.TaskAssignment{
TaskId: task.ID,
TaskType: string(task.Type),
Params: &worker_pb.TaskParams{
VolumeId: task.VolumeID,
Server: task.Server,
Collection: task.Collection,
Parameters: convertTaskParameters(task.Parameters),
},
TaskId: task.ID,
TaskType: string(task.Type),
Params: taskParams,
Priority: int32(task.Priority),
CreatedTime: time.Now().Unix(),
},
@@ -348,10 +362,12 @@ func (s *WorkerGrpcServer) handleTaskRequest(conn *WorkerConnection, request *wo
select {
case conn.outgoing <- assignment:
glog.V(2).Infof("Assigned task %s to worker %s", task.ID, conn.workerID)
glog.Infof("DEBUG handleTaskRequest: Successfully assigned task %s to worker %s", task.ID, conn.workerID)
case <-time.After(time.Second):
glog.Warningf("Failed to send task assignment to worker %s", conn.workerID)
}
} else {
// glog.Infof("DEBUG handleTaskRequest: No tasks available for worker %s", conn.workerID)
}
}

View File

@@ -78,6 +78,9 @@ func (h *AdminHandlers) SetupRoutes(r *gin.Engine, authRequired bool, username,
protected.GET("/cluster/volumes", h.clusterHandlers.ShowClusterVolumes)
protected.GET("/cluster/volumes/:id/:server", h.clusterHandlers.ShowVolumeDetails)
protected.GET("/cluster/collections", h.clusterHandlers.ShowClusterCollections)
protected.GET("/cluster/collections/:name", h.clusterHandlers.ShowCollectionDetails)
protected.GET("/cluster/ec-shards", h.clusterHandlers.ShowClusterEcShards)
protected.GET("/cluster/ec-volumes/:id", h.clusterHandlers.ShowEcVolumeDetails)
// Message Queue management routes
protected.GET("/mq/brokers", h.mqHandlers.ShowBrokers)
@@ -93,7 +96,8 @@ func (h *AdminHandlers) SetupRoutes(r *gin.Engine, authRequired bool, username,
protected.POST("/maintenance/config/:taskType", h.maintenanceHandlers.UpdateTaskConfig)
// API routes for AJAX calls
api := protected.Group("/api")
api := r.Group("/api")
api.Use(dash.RequireAuthAPI()) // Use API-specific auth middleware
{
api.GET("/cluster/topology", h.clusterHandlers.GetClusterTopology)
api.GET("/cluster/masters", h.clusterHandlers.GetMasters)
@@ -198,6 +202,9 @@ func (h *AdminHandlers) SetupRoutes(r *gin.Engine, authRequired bool, username,
r.GET("/cluster/volumes", h.clusterHandlers.ShowClusterVolumes)
r.GET("/cluster/volumes/:id/:server", h.clusterHandlers.ShowVolumeDetails)
r.GET("/cluster/collections", h.clusterHandlers.ShowClusterCollections)
r.GET("/cluster/collections/:name", h.clusterHandlers.ShowCollectionDetails)
r.GET("/cluster/ec-shards", h.clusterHandlers.ShowClusterEcShards)
r.GET("/cluster/ec-volumes/:id", h.clusterHandlers.ShowEcVolumeDetails)
// Message Queue management routes
r.GET("/mq/brokers", h.mqHandlers.ShowBrokers)

View File

@@ -1,6 +1,7 @@
package handlers
import (
"math"
"net/http"
"strconv"
@@ -161,6 +162,129 @@ func (h *ClusterHandlers) ShowClusterCollections(c *gin.Context) {
}
}
// ShowCollectionDetails renders the collection detail page
func (h *ClusterHandlers) ShowCollectionDetails(c *gin.Context) {
collectionName := c.Param("name")
if collectionName == "" {
c.JSON(http.StatusBadRequest, gin.H{"error": "Collection name is required"})
return
}
// Parse query parameters
page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
pageSize, _ := strconv.Atoi(c.DefaultQuery("page_size", "25"))
sortBy := c.DefaultQuery("sort_by", "volume_id")
sortOrder := c.DefaultQuery("sort_order", "asc")
// Get collection details data (volumes and EC volumes)
collectionDetailsData, err := h.adminServer.GetCollectionDetails(collectionName, page, pageSize, sortBy, sortOrder)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to get collection details: " + err.Error()})
return
}
// Set username
username := c.GetString("username")
if username == "" {
username = "admin"
}
collectionDetailsData.Username = username
// Render HTML template
c.Header("Content-Type", "text/html")
collectionDetailsComponent := app.CollectionDetails(*collectionDetailsData)
layoutComponent := layout.Layout(c, collectionDetailsComponent)
err = layoutComponent.Render(c.Request.Context(), c.Writer)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to render template: " + err.Error()})
return
}
}
// ShowClusterEcShards handles the cluster EC shards page (individual shards view)
func (h *ClusterHandlers) ShowClusterEcShards(c *gin.Context) {
// Parse query parameters
page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
pageSize, _ := strconv.Atoi(c.DefaultQuery("page_size", "100"))
sortBy := c.DefaultQuery("sort_by", "volume_id")
sortOrder := c.DefaultQuery("sort_order", "asc")
collection := c.DefaultQuery("collection", "")
// Get data from admin server
data, err := h.adminServer.GetClusterEcVolumes(page, pageSize, sortBy, sortOrder, collection)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
// Set username
username := c.GetString("username")
if username == "" {
username = "admin"
}
data.Username = username
// Render template
c.Header("Content-Type", "text/html")
ecVolumesComponent := app.ClusterEcVolumes(*data)
layoutComponent := layout.Layout(c, ecVolumesComponent)
err = layoutComponent.Render(c.Request.Context(), c.Writer)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
}
// ShowEcVolumeDetails renders the EC volume details page
func (h *ClusterHandlers) ShowEcVolumeDetails(c *gin.Context) {
volumeIDStr := c.Param("id")
if volumeIDStr == "" {
c.JSON(http.StatusBadRequest, gin.H{"error": "Volume ID is required"})
return
}
volumeID, err := strconv.Atoi(volumeIDStr)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid volume ID"})
return
}
// Check that volumeID is within uint32 range
if volumeID < 0 || volumeID > int(math.MaxUint32) {
c.JSON(http.StatusBadRequest, gin.H{"error": "Volume ID out of range"})
return
}
// Parse sorting parameters
sortBy := c.DefaultQuery("sort_by", "shard_id")
sortOrder := c.DefaultQuery("sort_order", "asc")
// Get EC volume details
ecVolumeDetails, err := h.adminServer.GetEcVolumeDetails(uint32(volumeID), sortBy, sortOrder)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to get EC volume details: " + err.Error()})
return
}
// Set username
username := c.GetString("username")
if username == "" {
username = "admin"
}
ecVolumeDetails.Username = username
// Render HTML template
c.Header("Content-Type", "text/html")
ecVolumeDetailsComponent := app.EcVolumeDetails(*ecVolumeDetails)
layoutComponent := layout.Layout(c, ecVolumeDetailsComponent)
err = layoutComponent.Render(c.Request.Context(), c.Writer)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to render template: " + err.Error()})
return
}
}
// ShowClusterMasters renders the cluster masters page
func (h *ClusterHandlers) ShowClusterMasters(c *gin.Context) {
// Get cluster masters data

View File

@@ -1,16 +1,24 @@
package handlers
import (
"fmt"
"net/http"
"reflect"
"strconv"
"strings"
"time"
"github.com/gin-gonic/gin"
"github.com/seaweedfs/seaweedfs/weed/admin/config"
"github.com/seaweedfs/seaweedfs/weed/admin/dash"
"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
"github.com/seaweedfs/seaweedfs/weed/admin/view/app"
"github.com/seaweedfs/seaweedfs/weed/admin/view/components"
"github.com/seaweedfs/seaweedfs/weed/admin/view/layout"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"
"github.com/seaweedfs/seaweedfs/weed/worker/types"
)
@@ -30,19 +38,31 @@ func NewMaintenanceHandlers(adminServer *dash.AdminServer) *MaintenanceHandlers
func (h *MaintenanceHandlers) ShowMaintenanceQueue(c *gin.Context) {
data, err := h.getMaintenanceQueueData()
if err != nil {
glog.Infof("DEBUG ShowMaintenanceQueue: error getting data: %v", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
glog.Infof("DEBUG ShowMaintenanceQueue: got data with %d tasks", len(data.Tasks))
if data.Stats != nil {
glog.Infof("DEBUG ShowMaintenanceQueue: stats = {pending: %d, running: %d, completed: %d}",
data.Stats.PendingTasks, data.Stats.RunningTasks, data.Stats.CompletedToday)
} else {
glog.Infof("DEBUG ShowMaintenanceQueue: stats is nil")
}
// Render HTML template
c.Header("Content-Type", "text/html")
maintenanceComponent := app.MaintenanceQueue(data)
layoutComponent := layout.Layout(c, maintenanceComponent)
err = layoutComponent.Render(c.Request.Context(), c.Writer)
if err != nil {
glog.Infof("DEBUG ShowMaintenanceQueue: render error: %v", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to render template: " + err.Error()})
return
}
glog.Infof("DEBUG ShowMaintenanceQueue: template rendered successfully")
}
// ShowMaintenanceWorkers displays the maintenance workers page
@@ -72,9 +92,12 @@ func (h *MaintenanceHandlers) ShowMaintenanceConfig(c *gin.Context) {
return
}
// Render HTML template
// Get the schema for dynamic form rendering
schema := maintenance.GetMaintenanceConfigSchema()
// Render HTML template using schema-driven approach
c.Header("Content-Type", "text/html")
configComponent := app.MaintenanceConfig(config)
configComponent := app.MaintenanceConfigSchema(config, schema)
layoutComponent := layout.Layout(c, configComponent)
err = layoutComponent.Render(c.Request.Context(), c.Writer)
if err != nil {
@@ -87,20 +110,20 @@ func (h *MaintenanceHandlers) ShowMaintenanceConfig(c *gin.Context) {
func (h *MaintenanceHandlers) ShowTaskConfig(c *gin.Context) {
taskTypeName := c.Param("taskType")
// Get the task type
taskType := maintenance.GetMaintenanceTaskType(taskTypeName)
if taskType == "" {
c.JSON(http.StatusNotFound, gin.H{"error": "Task type not found"})
// Get the schema for this task type
schema := tasks.GetTaskConfigSchema(taskTypeName)
if schema == nil {
c.JSON(http.StatusNotFound, gin.H{"error": "Task type not found or no schema available"})
return
}
// Get the UI provider for this task type
// Get the UI provider for current configuration
uiRegistry := tasks.GetGlobalUIRegistry()
typesRegistry := tasks.GetGlobalTypesRegistry()
var provider types.TaskUIProvider
for workerTaskType := range typesRegistry.GetAllDetectors() {
if string(workerTaskType) == string(taskType) {
if string(workerTaskType) == taskTypeName {
provider = uiRegistry.GetProvider(workerTaskType)
break
}
@@ -111,73 +134,23 @@ func (h *MaintenanceHandlers) ShowTaskConfig(c *gin.Context) {
return
}
// Try to get templ UI provider first - temporarily disabled
// templUIProvider := getTemplUIProvider(taskType)
var configSections []components.ConfigSectionData
// Get current configuration
currentConfig := provider.GetCurrentConfig()
// Temporarily disabled templ UI provider
// if templUIProvider != nil {
// // Use the new templ-based UI provider
// currentConfig := templUIProvider.GetCurrentConfig()
// sections, err := templUIProvider.RenderConfigSections(currentConfig)
// if err != nil {
// c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to render configuration sections: " + err.Error()})
// return
// }
// configSections = sections
// } else {
// Fallback to basic configuration for providers that haven't been migrated yet
configSections = []components.ConfigSectionData{
{
Title: "Configuration Settings",
Icon: "fas fa-cogs",
Description: "Configure task detection and scheduling parameters",
Fields: []interface{}{
components.CheckboxFieldData{
FormFieldData: components.FormFieldData{
Name: "enabled",
Label: "Enable Task",
Description: "Whether this task type should be enabled",
},
Checked: true,
},
components.NumberFieldData{
FormFieldData: components.FormFieldData{
Name: "max_concurrent",
Label: "Max Concurrent Tasks",
Description: "Maximum number of concurrent tasks",
Required: true,
},
Value: 2,
Step: "1",
Min: floatPtr(1),
},
components.DurationFieldData{
FormFieldData: components.FormFieldData{
Name: "scan_interval",
Label: "Scan Interval",
Description: "How often to scan for tasks",
Required: true,
},
Value: "30m",
},
},
},
}
// } // End of disabled templ UI provider else block
// Note: Do NOT apply schema defaults to current config as it overrides saved values
// Only apply defaults when creating new configs, not when displaying existing ones
// Create task configuration data using templ components
configData := &app.TaskConfigTemplData{
TaskType: taskType,
TaskName: provider.GetDisplayName(),
TaskIcon: provider.GetIcon(),
Description: provider.GetDescription(),
ConfigSections: configSections,
// Create task configuration data
configData := &maintenance.TaskConfigData{
TaskType: maintenance.MaintenanceTaskType(taskTypeName),
TaskName: schema.DisplayName,
TaskIcon: schema.Icon,
Description: schema.Description,
}
// Render HTML template using templ components
// Render HTML template using schema-based approach
c.Header("Content-Type", "text/html")
taskConfigComponent := app.TaskConfigTempl(configData)
taskConfigComponent := app.TaskConfigSchema(configData, schema, currentConfig)
layoutComponent := layout.Layout(c, taskConfigComponent)
err := layoutComponent.Render(c.Request.Context(), c.Writer)
if err != nil {
@@ -186,19 +159,10 @@ func (h *MaintenanceHandlers) ShowTaskConfig(c *gin.Context) {
}
}
// UpdateTaskConfig updates configuration for a specific task type
// UpdateTaskConfig updates task configuration from form
func (h *MaintenanceHandlers) UpdateTaskConfig(c *gin.Context) {
taskTypeName := c.Param("taskType")
// Get the task type
taskType := maintenance.GetMaintenanceTaskType(taskTypeName)
if taskType == "" {
c.JSON(http.StatusNotFound, gin.H{"error": "Task type not found"})
return
}
// Try to get templ UI provider first - temporarily disabled
// templUIProvider := getTemplUIProvider(taskType)
taskType := types.TaskType(taskTypeName)
// Parse form data
err := c.Request.ParseForm()
@@ -207,31 +171,100 @@ func (h *MaintenanceHandlers) UpdateTaskConfig(c *gin.Context) {
return
}
// Convert form data to map
formData := make(map[string][]string)
// Debug logging - show received form data
glog.V(1).Infof("Received form data for task type %s:", taskTypeName)
for key, values := range c.Request.PostForm {
formData[key] = values
glog.V(1).Infof(" %s: %v", key, values)
}
var config interface{}
// Get the task configuration schema
schema := tasks.GetTaskConfigSchema(taskTypeName)
if schema == nil {
c.JSON(http.StatusNotFound, gin.H{"error": "Schema not found for task type: " + taskTypeName})
return
}
// Temporarily disabled templ UI provider
// if templUIProvider != nil {
// // Use the new templ-based UI provider
// config, err = templUIProvider.ParseConfigForm(formData)
// if err != nil {
// c.JSON(http.StatusBadRequest, gin.H{"error": "Failed to parse configuration: " + err.Error()})
// return
// }
// // Apply configuration using templ provider
// err = templUIProvider.ApplyConfig(config)
// if err != nil {
// c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to apply configuration: " + err.Error()})
// return
// }
// } else {
// Fallback to old UI provider for tasks that haven't been migrated yet
// Create a new config instance based on task type and apply schema defaults
var config TaskConfig
switch taskType {
case types.TaskTypeVacuum:
config = &vacuum.Config{}
case types.TaskTypeBalance:
config = &balance.Config{}
case types.TaskTypeErasureCoding:
config = &erasure_coding.Config{}
default:
c.JSON(http.StatusBadRequest, gin.H{"error": "Unsupported task type: " + taskTypeName})
return
}
// Apply schema defaults first using type-safe method
if err := schema.ApplyDefaultsToConfig(config); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to apply defaults: " + err.Error()})
return
}
// First, get the current configuration to preserve existing values
currentUIRegistry := tasks.GetGlobalUIRegistry()
currentTypesRegistry := tasks.GetGlobalTypesRegistry()
var currentProvider types.TaskUIProvider
for workerTaskType := range currentTypesRegistry.GetAllDetectors() {
if string(workerTaskType) == string(taskType) {
currentProvider = currentUIRegistry.GetProvider(workerTaskType)
break
}
}
if currentProvider != nil {
// Copy current config values to the new config
currentConfig := currentProvider.GetCurrentConfig()
if currentConfigProtobuf, ok := currentConfig.(TaskConfig); ok {
// Apply current values using protobuf directly - no map conversion needed!
currentPolicy := currentConfigProtobuf.ToTaskPolicy()
if err := config.FromTaskPolicy(currentPolicy); err != nil {
glog.Warningf("Failed to load current config for %s: %v", taskTypeName, err)
}
}
}
// Parse form data using schema-based approach (this will override with new values)
err = h.parseTaskConfigFromForm(c.Request.PostForm, schema, config)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "Failed to parse configuration: " + err.Error()})
return
}
// Debug logging - show parsed config values
switch taskType {
case types.TaskTypeVacuum:
if vacuumConfig, ok := config.(*vacuum.Config); ok {
glog.V(1).Infof("Parsed vacuum config - GarbageThreshold: %f, MinVolumeAgeSeconds: %d, MinIntervalSeconds: %d",
vacuumConfig.GarbageThreshold, vacuumConfig.MinVolumeAgeSeconds, vacuumConfig.MinIntervalSeconds)
}
case types.TaskTypeErasureCoding:
if ecConfig, ok := config.(*erasure_coding.Config); ok {
glog.V(1).Infof("Parsed EC config - FullnessRatio: %f, QuietForSeconds: %d, MinSizeMB: %d, CollectionFilter: '%s'",
ecConfig.FullnessRatio, ecConfig.QuietForSeconds, ecConfig.MinSizeMB, ecConfig.CollectionFilter)
}
case types.TaskTypeBalance:
if balanceConfig, ok := config.(*balance.Config); ok {
glog.V(1).Infof("Parsed balance config - Enabled: %v, MaxConcurrent: %d, ScanIntervalSeconds: %d, ImbalanceThreshold: %f, MinServerCount: %d",
balanceConfig.Enabled, balanceConfig.MaxConcurrent, balanceConfig.ScanIntervalSeconds, balanceConfig.ImbalanceThreshold, balanceConfig.MinServerCount)
}
}
// Validate the configuration
if validationErrors := schema.ValidateConfig(config); len(validationErrors) > 0 {
errorMessages := make([]string, len(validationErrors))
for i, err := range validationErrors {
errorMessages[i] = err.Error()
}
c.JSON(http.StatusBadRequest, gin.H{"error": "Configuration validation failed", "details": errorMessages})
return
}
// Apply configuration using UIProvider
uiRegistry := tasks.GetGlobalUIRegistry()
typesRegistry := tasks.GetGlobalTypesRegistry()
@@ -248,25 +281,153 @@ func (h *MaintenanceHandlers) UpdateTaskConfig(c *gin.Context) {
return
}
// Parse configuration from form using old provider
config, err = provider.ParseConfigForm(formData)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "Failed to parse configuration: " + err.Error()})
return
}
// Apply configuration using old provider
err = provider.ApplyConfig(config)
// Apply configuration using provider
err = provider.ApplyTaskConfig(config)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to apply configuration: " + err.Error()})
return
}
// } // End of disabled templ UI provider else block
// Save task configuration to protobuf file using ConfigPersistence
if h.adminServer != nil && h.adminServer.GetConfigPersistence() != nil {
err = h.saveTaskConfigToProtobuf(taskType, config)
if err != nil {
glog.Warningf("Failed to save task config to protobuf file: %v", err)
// Don't fail the request, just log the warning
}
}
// Trigger a configuration reload in the maintenance manager
if h.adminServer != nil {
if manager := h.adminServer.GetMaintenanceManager(); manager != nil {
err = manager.ReloadTaskConfigurations()
if err != nil {
glog.Warningf("Failed to reload task configurations: %v", err)
} else {
glog.V(1).Infof("Successfully reloaded task configurations after updating %s", taskTypeName)
}
}
}
// Redirect back to task configuration page
c.Redirect(http.StatusSeeOther, "/maintenance/config/"+taskTypeName)
}
// parseTaskConfigFromForm parses form data using schema definitions
func (h *MaintenanceHandlers) parseTaskConfigFromForm(formData map[string][]string, schema *tasks.TaskConfigSchema, config interface{}) error {
configValue := reflect.ValueOf(config)
if configValue.Kind() == reflect.Ptr {
configValue = configValue.Elem()
}
if configValue.Kind() != reflect.Struct {
return fmt.Errorf("config must be a struct or pointer to struct")
}
configType := configValue.Type()
for i := 0; i < configValue.NumField(); i++ {
field := configValue.Field(i)
fieldType := configType.Field(i)
// Handle embedded structs recursively
if fieldType.Anonymous && field.Kind() == reflect.Struct {
err := h.parseTaskConfigFromForm(formData, schema, field.Addr().Interface())
if err != nil {
return fmt.Errorf("error parsing embedded struct %s: %w", fieldType.Name, err)
}
continue
}
// Get JSON tag name
jsonTag := fieldType.Tag.Get("json")
if jsonTag == "" {
continue
}
// Remove options like ",omitempty"
if commaIdx := strings.Index(jsonTag, ","); commaIdx > 0 {
jsonTag = jsonTag[:commaIdx]
}
// Find corresponding schema field
schemaField := schema.GetFieldByName(jsonTag)
if schemaField == nil {
continue
}
// Parse value based on field type
if err := h.parseFieldFromForm(formData, schemaField, field); err != nil {
return fmt.Errorf("error parsing field %s: %w", schemaField.DisplayName, err)
}
}
return nil
}
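// Illustrative sketch (not part of this change): the struct shape the
// reflection walk above handles. An embedded struct such as base.BaseConfig
// is recursed into, and the json tags name the form fields; the shape below
// mirrors balance.Config from the tests later in this change (assumes the
// worker/tasks/base import).
type exampleBalanceShape struct {
base.BaseConfig // embedded: "enabled", "scan_interval_seconds", "max_concurrent"
ImbalanceThreshold float64 `json:"imbalance_threshold"`
MinServerCount int `json:"min_server_count"`
}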
// parseFieldFromForm parses a single field value from form data
func (h *MaintenanceHandlers) parseFieldFromForm(formData map[string][]string, schemaField *config.Field, fieldValue reflect.Value) error {
if !fieldValue.CanSet() {
return nil
}
switch schemaField.Type {
case config.FieldTypeBool:
// Checkbox fields - present means true, absent means false
_, exists := formData[schemaField.JSONName]
fieldValue.SetBool(exists)
case config.FieldTypeInt:
if values, ok := formData[schemaField.JSONName]; ok && len(values) > 0 {
if intVal, err := strconv.Atoi(values[0]); err != nil {
return fmt.Errorf("invalid integer value: %s", values[0])
} else {
fieldValue.SetInt(int64(intVal))
}
}
case config.FieldTypeFloat:
if values, ok := formData[schemaField.JSONName]; ok && len(values) > 0 {
if floatVal, err := strconv.ParseFloat(values[0], 64); err != nil {
return fmt.Errorf("invalid float value: %s", values[0])
} else {
fieldValue.SetFloat(floatVal)
}
}
case config.FieldTypeString:
if values, ok := formData[schemaField.JSONName]; ok && len(values) > 0 {
fieldValue.SetString(values[0])
}
case config.FieldTypeInterval:
// Parse interval fields with value + unit
valueKey := schemaField.JSONName + "_value"
unitKey := schemaField.JSONName + "_unit"
if valueStrs, ok := formData[valueKey]; ok && len(valueStrs) > 0 {
value, err := strconv.Atoi(valueStrs[0])
if err != nil {
return fmt.Errorf("invalid interval value: %s", valueStrs[0])
}
unit := "minutes" // default
if unitStrs, ok := formData[unitKey]; ok && len(unitStrs) > 0 {
unit = unitStrs[0]
}
// Convert to seconds
seconds := config.IntervalValueUnitToSeconds(value, unit)
fieldValue.SetInt(int64(seconds))
}
default:
return fmt.Errorf("unsupported field type: %s", schemaField.Type)
}
return nil
}
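// Illustrative sketch (not part of this change): the two-key convention the
// FieldTypeInterval case above expects, assuming net/url is imported and
// config.IntervalValueUnitToSeconds behaves as used in this file.
func exampleIntervalForm() int {
formData := url.Values{
"scan_interval_seconds_value": {"30"},
"scan_interval_seconds_unit": {"minutes"},
}
value, _ := strconv.Atoi(formData.Get("scan_interval_seconds_value"))
unit := formData.Get("scan_interval_seconds_unit")
return config.IntervalValueUnitToSeconds(value, unit) // 30 minutes -> 1800 seconds
}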
// UpdateMaintenanceConfig updates maintenance configuration from form
func (h *MaintenanceHandlers) UpdateMaintenanceConfig(c *gin.Context) {
var config maintenance.MaintenanceConfig
@@ -302,36 +463,50 @@ func (h *MaintenanceHandlers) getMaintenanceQueueData() (*maintenance.Maintenanc
return nil, err
}
data := &maintenance.MaintenanceQueueData{
Tasks: tasks,
Workers: workers,
Stats: stats,
LastUpdated: time.Now(),
}
return data, nil
}
func (h *MaintenanceHandlers) getMaintenanceQueueStats() (*maintenance.QueueStats, error) {
// Use the exported method from AdminServer
return h.adminServer.GetMaintenanceQueueStats()
}
func (h *MaintenanceHandlers) getMaintenanceTasks() ([]*maintenance.MaintenanceTask, error) {
// Call the maintenance manager directly to get all tasks
if h.adminServer == nil {
return []*maintenance.MaintenanceTask{}, nil
}
manager := h.adminServer.GetMaintenanceManager()
if manager == nil {
return []*maintenance.MaintenanceTask{}, nil
}
// Get ALL tasks using empty parameters - this should match what the API returns
allTasks := manager.GetTasks("", "", 0)
return allTasks, nil
}
func (h *MaintenanceHandlers) getMaintenanceWorkers() ([]*maintenance.MaintenanceWorker, error) {
// Get workers from the admin server's maintenance manager
if h.adminServer == nil {
return []*maintenance.MaintenanceWorker{}, nil
}
if h.adminServer.GetMaintenanceManager() == nil {
return []*maintenance.MaintenanceWorker{}, nil
}
// Get workers from the maintenance manager
workers := h.adminServer.GetMaintenanceManager().GetWorkers()
return workers, nil
}
func (h *MaintenanceHandlers) getMaintenanceConfig() (*maintenance.MaintenanceConfigData, error) {
@@ -344,40 +519,25 @@ func (h *MaintenanceHandlers) updateMaintenanceConfig(config *maintenance.Mainte
return h.adminServer.UpdateMaintenanceConfigData(config)
}
// saveTaskConfigToProtobuf saves task configuration to protobuf file
func (h *MaintenanceHandlers) saveTaskConfigToProtobuf(taskType types.TaskType, config TaskConfig) error {
configPersistence := h.adminServer.GetConfigPersistence()
if configPersistence == nil {
return fmt.Errorf("config persistence not available")
}
// Use the new ToTaskPolicy method - much simpler and more maintainable!
taskPolicy := config.ToTaskPolicy()
// Save using task-specific methods
switch taskType {
case types.TaskTypeVacuum:
return configPersistence.SaveVacuumTaskPolicy(taskPolicy)
case types.TaskTypeErasureCoding:
return configPersistence.SaveErasureCodingTaskPolicy(taskPolicy)
case types.TaskTypeBalance:
return configPersistence.SaveBalanceTaskPolicy(taskPolicy)
default:
return fmt.Errorf("unsupported task type for protobuf persistence: %s", taskType)
}
}

View File

@@ -0,0 +1,389 @@
package handlers
import (
"net/url"
"testing"
"github.com/seaweedfs/seaweedfs/weed/admin/config"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/base"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"
)
func TestParseTaskConfigFromForm_WithEmbeddedStruct(t *testing.T) {
// Create a maintenance handlers instance for testing
h := &MaintenanceHandlers{}
// Test with balance config
t.Run("Balance Config", func(t *testing.T) {
// Simulate form data
formData := url.Values{
"enabled": {"on"}, // checkbox field
"scan_interval_seconds_value": {"30"}, // interval field
"scan_interval_seconds_unit": {"minutes"}, // interval unit
"max_concurrent": {"2"}, // number field
"imbalance_threshold": {"0.15"}, // float field
"min_server_count": {"3"}, // number field
}
// Get schema
schema := tasks.GetTaskConfigSchema("balance")
if schema == nil {
t.Fatal("Failed to get balance schema")
}
// Create config instance
config := &balance.Config{}
// Parse form data
err := h.parseTaskConfigFromForm(formData, schema, config)
if err != nil {
t.Fatalf("Failed to parse form data: %v", err)
}
// Verify embedded struct fields were set correctly
if !config.Enabled {
t.Errorf("Expected Enabled=true, got %v", config.Enabled)
}
if config.ScanIntervalSeconds != 1800 { // 30 minutes * 60
t.Errorf("Expected ScanIntervalSeconds=1800, got %v", config.ScanIntervalSeconds)
}
if config.MaxConcurrent != 2 {
t.Errorf("Expected MaxConcurrent=2, got %v", config.MaxConcurrent)
}
// Verify balance-specific fields were set correctly
if config.ImbalanceThreshold != 0.15 {
t.Errorf("Expected ImbalanceThreshold=0.15, got %v", config.ImbalanceThreshold)
}
if config.MinServerCount != 3 {
t.Errorf("Expected MinServerCount=3, got %v", config.MinServerCount)
}
})
// Test with vacuum config
t.Run("Vacuum Config", func(t *testing.T) {
// Simulate form data
formData := url.Values{
// "enabled" field omitted to simulate unchecked checkbox
"scan_interval_seconds_value": {"4"}, // interval field
"scan_interval_seconds_unit": {"hours"}, // interval unit
"max_concurrent": {"3"}, // number field
"garbage_threshold": {"0.4"}, // float field
"min_volume_age_seconds_value": {"2"}, // interval field
"min_volume_age_seconds_unit": {"days"}, // interval unit
"min_interval_seconds_value": {"1"}, // interval field
"min_interval_seconds_unit": {"days"}, // interval unit
}
// Get schema
schema := tasks.GetTaskConfigSchema("vacuum")
if schema == nil {
t.Fatal("Failed to get vacuum schema")
}
// Create config instance
config := &vacuum.Config{}
// Parse form data
err := h.parseTaskConfigFromForm(formData, schema, config)
if err != nil {
t.Fatalf("Failed to parse form data: %v", err)
}
// Verify embedded struct fields were set correctly
if config.Enabled {
t.Errorf("Expected Enabled=false, got %v", config.Enabled)
}
if config.ScanIntervalSeconds != 14400 { // 4 hours * 3600
t.Errorf("Expected ScanIntervalSeconds=14400, got %v", config.ScanIntervalSeconds)
}
if config.MaxConcurrent != 3 {
t.Errorf("Expected MaxConcurrent=3, got %v", config.MaxConcurrent)
}
// Verify vacuum-specific fields were set correctly
if config.GarbageThreshold != 0.4 {
t.Errorf("Expected GarbageThreshold=0.4, got %v", config.GarbageThreshold)
}
if config.MinVolumeAgeSeconds != 172800 { // 2 days * 86400
t.Errorf("Expected MinVolumeAgeSeconds=172800, got %v", config.MinVolumeAgeSeconds)
}
if config.MinIntervalSeconds != 86400 { // 1 day * 86400
t.Errorf("Expected MinIntervalSeconds=86400, got %v", config.MinIntervalSeconds)
}
})
// Test with erasure coding config
t.Run("Erasure Coding Config", func(t *testing.T) {
// Simulate form data
formData := url.Values{
"enabled": {"on"}, // checkbox field
"scan_interval_seconds_value": {"2"}, // interval field
"scan_interval_seconds_unit": {"hours"}, // interval unit
"max_concurrent": {"1"}, // number field
"quiet_for_seconds_value": {"10"}, // interval field
"quiet_for_seconds_unit": {"minutes"}, // interval unit
"fullness_ratio": {"0.85"}, // float field
"collection_filter": {"test_collection"}, // string field
"min_size_mb": {"50"}, // number field
}
// Get schema
schema := tasks.GetTaskConfigSchema("erasure_coding")
if schema == nil {
t.Fatal("Failed to get erasure_coding schema")
}
// Create config instance
config := &erasure_coding.Config{}
// Parse form data
err := h.parseTaskConfigFromForm(formData, schema, config)
if err != nil {
t.Fatalf("Failed to parse form data: %v", err)
}
// Verify embedded struct fields were set correctly
if !config.Enabled {
t.Errorf("Expected Enabled=true, got %v", config.Enabled)
}
if config.ScanIntervalSeconds != 7200 { // 2 hours * 3600
t.Errorf("Expected ScanIntervalSeconds=7200, got %v", config.ScanIntervalSeconds)
}
if config.MaxConcurrent != 1 {
t.Errorf("Expected MaxConcurrent=1, got %v", config.MaxConcurrent)
}
// Verify erasure coding-specific fields were set correctly
if config.QuietForSeconds != 600 { // 10 minutes * 60
t.Errorf("Expected QuietForSeconds=600, got %v", config.QuietForSeconds)
}
if config.FullnessRatio != 0.85 {
t.Errorf("Expected FullnessRatio=0.85, got %v", config.FullnessRatio)
}
if config.CollectionFilter != "test_collection" {
t.Errorf("Expected CollectionFilter='test_collection', got %v", config.CollectionFilter)
}
if config.MinSizeMB != 50 {
t.Errorf("Expected MinSizeMB=50, got %v", config.MinSizeMB)
}
})
}
func TestConfigurationValidation(t *testing.T) {
// Test that config structs can be validated and converted to protobuf format
taskTypes := []struct {
name string
config interface{}
}{
{
"balance",
&balance.Config{
BaseConfig: base.BaseConfig{
Enabled: true,
ScanIntervalSeconds: 2400,
MaxConcurrent: 3,
},
ImbalanceThreshold: 0.18,
MinServerCount: 4,
},
},
{
"vacuum",
&vacuum.Config{
BaseConfig: base.BaseConfig{
Enabled: false,
ScanIntervalSeconds: 7200,
MaxConcurrent: 2,
},
GarbageThreshold: 0.35,
MinVolumeAgeSeconds: 86400,
MinIntervalSeconds: 604800,
},
},
{
"erasure_coding",
&erasure_coding.Config{
BaseConfig: base.BaseConfig{
Enabled: true,
ScanIntervalSeconds: 3600,
MaxConcurrent: 1,
},
QuietForSeconds: 900,
FullnessRatio: 0.9,
CollectionFilter: "important",
MinSizeMB: 100,
},
},
}
for _, test := range taskTypes {
t.Run(test.name, func(t *testing.T) {
// Test that configs can be converted to protobuf TaskPolicy
switch cfg := test.config.(type) {
case *balance.Config:
policy := cfg.ToTaskPolicy()
if policy == nil {
t.Fatal("ToTaskPolicy returned nil")
}
if policy.Enabled != cfg.Enabled {
t.Errorf("Expected Enabled=%v, got %v", cfg.Enabled, policy.Enabled)
}
if policy.MaxConcurrent != int32(cfg.MaxConcurrent) {
t.Errorf("Expected MaxConcurrent=%v, got %v", cfg.MaxConcurrent, policy.MaxConcurrent)
}
case *vacuum.Config:
policy := cfg.ToTaskPolicy()
if policy == nil {
t.Fatal("ToTaskPolicy returned nil")
}
if policy.Enabled != cfg.Enabled {
t.Errorf("Expected Enabled=%v, got %v", cfg.Enabled, policy.Enabled)
}
if policy.MaxConcurrent != int32(cfg.MaxConcurrent) {
t.Errorf("Expected MaxConcurrent=%v, got %v", cfg.MaxConcurrent, policy.MaxConcurrent)
}
case *erasure_coding.Config:
policy := cfg.ToTaskPolicy()
if policy == nil {
t.Fatal("ToTaskPolicy returned nil")
}
if policy.Enabled != cfg.Enabled {
t.Errorf("Expected Enabled=%v, got %v", cfg.Enabled, policy.Enabled)
}
if policy.MaxConcurrent != int32(cfg.MaxConcurrent) {
t.Errorf("Expected MaxConcurrent=%v, got %v", cfg.MaxConcurrent, policy.MaxConcurrent)
}
default:
t.Fatalf("Unknown config type: %T", test.config)
}
// Test that configs can be validated
switch cfg := test.config.(type) {
case *balance.Config:
if err := cfg.Validate(); err != nil {
t.Errorf("Validation failed: %v", err)
}
case *vacuum.Config:
if err := cfg.Validate(); err != nil {
t.Errorf("Validation failed: %v", err)
}
case *erasure_coding.Config:
if err := cfg.Validate(); err != nil {
t.Errorf("Validation failed: %v", err)
}
}
})
}
}
func TestParseFieldFromForm_EdgeCases(t *testing.T) {
h := &MaintenanceHandlers{}
// Test checkbox parsing (boolean fields)
t.Run("Checkbox Fields", func(t *testing.T) {
tests := []struct {
name string
formData url.Values
expectedValue bool
}{
{"Checked checkbox", url.Values{"test_field": {"on"}}, true},
{"Unchecked checkbox", url.Values{}, false},
{"Empty value checkbox", url.Values{"test_field": {""}}, true}, // Present but empty means checked
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
schema := &tasks.TaskConfigSchema{
Schema: config.Schema{
Fields: []*config.Field{
{
JSONName: "test_field",
Type: config.FieldTypeBool,
InputType: "checkbox",
},
},
},
}
type TestConfig struct {
TestField bool `json:"test_field"`
}
config := &TestConfig{}
err := h.parseTaskConfigFromForm(test.formData, schema, config)
if err != nil {
t.Fatalf("parseTaskConfigFromForm failed: %v", err)
}
if config.TestField != test.expectedValue {
t.Errorf("Expected %v, got %v", test.expectedValue, config.TestField)
}
})
}
})
// Test interval parsing
t.Run("Interval Fields", func(t *testing.T) {
tests := []struct {
name string
value string
unit string
expectedSecs int
}{
{"Minutes", "30", "minutes", 1800},
{"Hours", "2", "hours", 7200},
{"Days", "1", "days", 86400},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
formData := url.Values{
"test_field_value": {test.value},
"test_field_unit": {test.unit},
}
schema := &tasks.TaskConfigSchema{
Schema: config.Schema{
Fields: []*config.Field{
{
JSONName: "test_field",
Type: config.FieldTypeInterval,
InputType: "interval",
},
},
},
}
type TestConfig struct {
TestField int `json:"test_field"`
}
config := &TestConfig{}
err := h.parseTaskConfigFromForm(formData, schema, config)
if err != nil {
t.Fatalf("parseTaskConfigFromForm failed: %v", err)
}
if config.TestField != test.expectedSecs {
t.Errorf("Expected %d seconds, got %d", test.expectedSecs, config.TestField)
}
})
}
})
}

View File

@@ -0,0 +1,25 @@
package handlers
import (
"github.com/seaweedfs/seaweedfs/weed/admin/config"
"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
)
// TaskConfig defines the interface that all task configuration types must implement
type TaskConfig interface {
config.ConfigWithDefaults // Extends ConfigWithDefaults for type-safe schema operations
// Common methods from BaseConfig
IsEnabled() bool
SetEnabled(enabled bool)
// Protobuf serialization methods - no more map[string]interface{}!
ToTaskPolicy() *worker_pb.TaskPolicy
FromTaskPolicy(policy *worker_pb.TaskPolicy) error
}
// TaskConfigProvider defines the interface for creating specific task config types
type TaskConfigProvider interface {
NewConfig() TaskConfig
GetTaskType() string
}
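// Illustrative sketch (not part of this change): a possible provider for the
// vacuum task type. It assumes vacuum.Config satisfies TaskConfig (the tests
// in this change show it exposing ToTaskPolicy and the BaseConfig methods)
// and that the worker/tasks/vacuum package is imported.
type vacuumConfigProvider struct{}

func (vacuumConfigProvider) NewConfig() TaskConfig { return &vacuum.Config{} }
func (vacuumConfigProvider) GetTaskType() string { return "vacuum" }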

View File

@@ -0,0 +1,190 @@
package maintenance
import (
"github.com/seaweedfs/seaweedfs/weed/admin/config"
)
// Type aliases for backward compatibility
type ConfigFieldType = config.FieldType
type ConfigFieldUnit = config.FieldUnit
type ConfigField = config.Field
// Constant aliases for backward compatibility
const (
FieldTypeBool = config.FieldTypeBool
FieldTypeInt = config.FieldTypeInt
FieldTypeDuration = config.FieldTypeDuration
FieldTypeInterval = config.FieldTypeInterval
FieldTypeString = config.FieldTypeString
FieldTypeFloat = config.FieldTypeFloat
)
const (
UnitSeconds = config.UnitSeconds
UnitMinutes = config.UnitMinutes
UnitHours = config.UnitHours
UnitDays = config.UnitDays
UnitCount = config.UnitCount
UnitNone = config.UnitNone
)
// Function aliases for backward compatibility
var (
SecondsToIntervalValueUnit = config.SecondsToIntervalValueUnit
IntervalValueUnitToSeconds = config.IntervalValueUnitToSeconds
)
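// Illustrative sketch (not part of this change): existing maintenance code
// can keep using the aliases above unchanged, e.g. building a field and
// converting an interval.
func exampleAliasUsage() *ConfigField {
seconds := IntervalValueUnitToSeconds(30, "minutes") // 1800
return &ConfigField{
Name: "scan_interval_seconds",
JSONName: "scan_interval_seconds",
Type: FieldTypeInterval,
DefaultValue: seconds,
Unit: UnitMinutes,
}
}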
// MaintenanceConfigSchema defines the schema for maintenance configuration
type MaintenanceConfigSchema struct {
config.Schema // Embed common schema functionality
}
// GetMaintenanceConfigSchema returns the schema for maintenance configuration
func GetMaintenanceConfigSchema() *MaintenanceConfigSchema {
return &MaintenanceConfigSchema{
Schema: config.Schema{
Fields: []*config.Field{
{
Name: "enabled",
JSONName: "enabled",
Type: config.FieldTypeBool,
DefaultValue: true,
Required: false,
DisplayName: "Enable Maintenance System",
Description: "When enabled, the system will automatically scan for and execute maintenance tasks",
HelpText: "Toggle this to enable or disable the entire maintenance system",
InputType: "checkbox",
CSSClasses: "form-check-input",
},
{
Name: "scan_interval_seconds",
JSONName: "scan_interval_seconds",
Type: config.FieldTypeInterval,
DefaultValue: 30 * 60, // 30 minutes in seconds
MinValue: 1 * 60, // 1 minute
MaxValue: 24 * 60 * 60, // 24 hours
Required: true,
DisplayName: "Scan Interval",
Description: "How often to scan for maintenance tasks",
HelpText: "The system will check for new maintenance tasks at this interval",
Placeholder: "30",
Unit: config.UnitMinutes,
InputType: "interval",
CSSClasses: "form-control",
},
{
Name: "worker_timeout_seconds",
JSONName: "worker_timeout_seconds",
Type: config.FieldTypeInterval,
DefaultValue: 5 * 60, // 5 minutes
MinValue: 1 * 60, // 1 minute
MaxValue: 60 * 60, // 1 hour
Required: true,
DisplayName: "Worker Timeout",
Description: "How long to wait for worker heartbeat before considering it inactive",
HelpText: "Workers that don't send heartbeats within this time are considered offline",
Placeholder: "5",
Unit: config.UnitMinutes,
InputType: "interval",
CSSClasses: "form-control",
},
{
Name: "task_timeout_seconds",
JSONName: "task_timeout_seconds",
Type: config.FieldTypeInterval,
DefaultValue: 2 * 60 * 60, // 2 hours
MinValue: 10 * 60, // 10 minutes
MaxValue: 24 * 60 * 60, // 24 hours
Required: true,
DisplayName: "Task Timeout",
Description: "Maximum time allowed for a task to complete",
HelpText: "Tasks that exceed this duration will be marked as failed",
Placeholder: "2",
Unit: config.UnitHours,
InputType: "interval",
CSSClasses: "form-control",
},
{
Name: "retry_delay_seconds",
JSONName: "retry_delay_seconds",
Type: config.FieldTypeInterval,
DefaultValue: 15 * 60, // 15 minutes
MinValue: 1 * 60, // 1 minute
MaxValue: 24 * 60 * 60, // 24 hours
Required: true,
DisplayName: "Retry Delay",
Description: "How long to wait before retrying a failed task",
HelpText: "Failed tasks will be retried after this delay",
Placeholder: "15",
Unit: config.UnitMinutes,
InputType: "interval",
CSSClasses: "form-control",
},
{
Name: "max_retries",
JSONName: "max_retries",
Type: config.FieldTypeInt,
DefaultValue: 3,
MinValue: 0,
MaxValue: 10,
Required: true,
DisplayName: "Max Retries",
Description: "Maximum number of times to retry a failed task",
HelpText: "Tasks that fail more than this many times will be marked as permanently failed",
Placeholder: "3",
Unit: config.UnitCount,
InputType: "number",
CSSClasses: "form-control",
},
{
Name: "cleanup_interval_seconds",
JSONName: "cleanup_interval_seconds",
Type: config.FieldTypeInterval,
DefaultValue: 24 * 60 * 60, // 24 hours
MinValue: 1 * 60 * 60, // 1 hour
MaxValue: 7 * 24 * 60 * 60, // 7 days
Required: true,
DisplayName: "Cleanup Interval",
Description: "How often to run maintenance cleanup operations",
HelpText: "Removes old task records and temporary files at this interval",
Placeholder: "24",
Unit: config.UnitHours,
InputType: "interval",
CSSClasses: "form-control",
},
{
Name: "task_retention_seconds",
JSONName: "task_retention_seconds",
Type: config.FieldTypeInterval,
DefaultValue: 7 * 24 * 60 * 60, // 7 days
MinValue: 1 * 24 * 60 * 60, // 1 day
MaxValue: 30 * 24 * 60 * 60, // 30 days
Required: true,
DisplayName: "Task Retention",
Description: "How long to keep completed task records",
HelpText: "Task history older than this duration will be automatically deleted",
Placeholder: "7",
Unit: config.UnitDays,
InputType: "interval",
CSSClasses: "form-control",
},
{
Name: "global_max_concurrent",
JSONName: "global_max_concurrent",
Type: config.FieldTypeInt,
DefaultValue: 10,
MinValue: 1,
MaxValue: 100,
Required: true,
DisplayName: "Global Max Concurrent Tasks",
Description: "Maximum number of maintenance tasks that can run simultaneously across all workers",
HelpText: "Limits the total number of maintenance operations to control system load",
Placeholder: "10",
Unit: config.UnitCount,
InputType: "number",
CSSClasses: "form-control",
},
},
},
}
}
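// Illustrative sketch (not part of this change): individual values can be
// checked against the bounds above via the same Field helpers used by
// ValidateMaintenanceConfigWithSchema in this package.
func exampleSchemaValidation() error {
schema := GetMaintenanceConfigSchema()
if field := schema.GetFieldByName("max_retries"); field != nil {
// 15 exceeds the MaxValue of 10 defined above, so an error is expected.
return field.ValidateValue(15)
}
return nil
}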

View File

@@ -0,0 +1,124 @@
package maintenance
import (
"fmt"
"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
)
// VerifyProtobufConfig demonstrates that the protobuf configuration system is working
func VerifyProtobufConfig() error {
// Create configuration manager
configManager := NewMaintenanceConfigManager()
config := configManager.GetConfig()
// Verify basic configuration
if !config.Enabled {
return fmt.Errorf("expected config to be enabled by default")
}
if config.ScanIntervalSeconds != 30*60 {
return fmt.Errorf("expected scan interval to be 1800 seconds, got %d", config.ScanIntervalSeconds)
}
// Verify policy configuration
if config.Policy == nil {
return fmt.Errorf("expected policy to be configured")
}
if config.Policy.GlobalMaxConcurrent != 4 {
return fmt.Errorf("expected global max concurrent to be 4, got %d", config.Policy.GlobalMaxConcurrent)
}
// Verify task policies
vacuumPolicy := config.Policy.TaskPolicies["vacuum"]
if vacuumPolicy == nil {
return fmt.Errorf("expected vacuum policy to be configured")
}
if !vacuumPolicy.Enabled {
return fmt.Errorf("expected vacuum policy to be enabled")
}
// Verify typed configuration access
vacuumConfig := vacuumPolicy.GetVacuumConfig()
if vacuumConfig == nil {
return fmt.Errorf("expected vacuum config to be accessible")
}
if vacuumConfig.GarbageThreshold != 0.3 {
return fmt.Errorf("expected garbage threshold to be 0.3, got %f", vacuumConfig.GarbageThreshold)
}
// Verify helper functions work
if !IsTaskEnabled(config.Policy, "vacuum") {
return fmt.Errorf("expected vacuum task to be enabled via helper function")
}
maxConcurrent := GetMaxConcurrent(config.Policy, "vacuum")
if maxConcurrent != 2 {
return fmt.Errorf("expected vacuum max concurrent to be 2, got %d", maxConcurrent)
}
// Verify erasure coding configuration
ecPolicy := config.Policy.TaskPolicies["erasure_coding"]
if ecPolicy == nil {
return fmt.Errorf("expected EC policy to be configured")
}
ecConfig := ecPolicy.GetErasureCodingConfig()
if ecConfig == nil {
return fmt.Errorf("expected EC config to be accessible")
}
// Verify configurable EC fields only
if ecConfig.FullnessRatio <= 0 || ecConfig.FullnessRatio > 1 {
return fmt.Errorf("expected EC config to have valid fullness ratio (0-1), got %f", ecConfig.FullnessRatio)
}
return nil
}
// GetProtobufConfigSummary returns a summary of the current protobuf configuration
func GetProtobufConfigSummary() string {
configManager := NewMaintenanceConfigManager()
config := configManager.GetConfig()
summary := "SeaweedFS Protobuf Maintenance Configuration:\n"
summary += fmt.Sprintf(" Enabled: %v\n", config.Enabled)
summary += fmt.Sprintf(" Scan Interval: %d seconds\n", config.ScanIntervalSeconds)
summary += fmt.Sprintf(" Max Retries: %d\n", config.MaxRetries)
summary += fmt.Sprintf(" Global Max Concurrent: %d\n", config.Policy.GlobalMaxConcurrent)
summary += fmt.Sprintf(" Task Policies: %d configured\n", len(config.Policy.TaskPolicies))
for taskType, policy := range config.Policy.TaskPolicies {
summary += fmt.Sprintf(" %s: enabled=%v, max_concurrent=%d\n",
taskType, policy.Enabled, policy.MaxConcurrent)
}
return summary
}
// CreateCustomConfig demonstrates creating a custom protobuf configuration
func CreateCustomConfig() *worker_pb.MaintenanceConfig {
return &worker_pb.MaintenanceConfig{
Enabled: true,
ScanIntervalSeconds: 60 * 60, // 1 hour
MaxRetries: 5,
Policy: &worker_pb.MaintenancePolicy{
GlobalMaxConcurrent: 8,
TaskPolicies: map[string]*worker_pb.TaskPolicy{
"custom_vacuum": {
Enabled: true,
MaxConcurrent: 4,
TaskConfig: &worker_pb.TaskPolicy_VacuumConfig{
VacuumConfig: &worker_pb.VacuumTaskConfig{
GarbageThreshold: 0.5,
MinVolumeAgeHours: 48,
},
},
},
},
},
}
}
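// Illustrative sketch (not part of this change): exercising the helpers above.
func exampleVerify() {
if err := VerifyProtobufConfig(); err != nil {
fmt.Printf("protobuf config verification failed: %v\n", err)
return
}
fmt.Print(GetProtobufConfigSummary())
}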

View File

@@ -0,0 +1,287 @@
package maintenance
import (
"fmt"
"time"
"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
)
// MaintenanceConfigManager handles protobuf-based configuration
type MaintenanceConfigManager struct {
config *worker_pb.MaintenanceConfig
}
// NewMaintenanceConfigManager creates a new config manager with defaults
func NewMaintenanceConfigManager() *MaintenanceConfigManager {
return &MaintenanceConfigManager{
config: DefaultMaintenanceConfigProto(),
}
}
// DefaultMaintenanceConfigProto returns default configuration as protobuf
func DefaultMaintenanceConfigProto() *worker_pb.MaintenanceConfig {
return &worker_pb.MaintenanceConfig{
Enabled: true,
ScanIntervalSeconds: 30 * 60, // 30 minutes
WorkerTimeoutSeconds: 5 * 60, // 5 minutes
TaskTimeoutSeconds: 2 * 60 * 60, // 2 hours
RetryDelaySeconds: 15 * 60, // 15 minutes
MaxRetries: 3,
CleanupIntervalSeconds: 24 * 60 * 60, // 24 hours
TaskRetentionSeconds: 7 * 24 * 60 * 60, // 7 days
// Policy field will be populated dynamically from separate task configuration files
Policy: nil,
}
}
// GetConfig returns the current configuration
func (mcm *MaintenanceConfigManager) GetConfig() *worker_pb.MaintenanceConfig {
return mcm.config
}
// Type-safe configuration accessors
// GetVacuumConfig returns vacuum-specific configuration for a task type
func (mcm *MaintenanceConfigManager) GetVacuumConfig(taskType string) *worker_pb.VacuumTaskConfig {
if policy := mcm.getTaskPolicy(taskType); policy != nil {
if vacuumConfig := policy.GetVacuumConfig(); vacuumConfig != nil {
return vacuumConfig
}
}
// Return defaults if not configured
return &worker_pb.VacuumTaskConfig{
GarbageThreshold: 0.3,
MinVolumeAgeHours: 24,
MinIntervalSeconds: 7 * 24 * 60 * 60, // 7 days
}
}
// GetErasureCodingConfig returns EC-specific configuration for a task type
func (mcm *MaintenanceConfigManager) GetErasureCodingConfig(taskType string) *worker_pb.ErasureCodingTaskConfig {
if policy := mcm.getTaskPolicy(taskType); policy != nil {
if ecConfig := policy.GetErasureCodingConfig(); ecConfig != nil {
return ecConfig
}
}
// Return defaults if not configured
return &worker_pb.ErasureCodingTaskConfig{
FullnessRatio: 0.95,
QuietForSeconds: 3600,
MinVolumeSizeMb: 100,
CollectionFilter: "",
}
}
// GetBalanceConfig returns balance-specific configuration for a task type
func (mcm *MaintenanceConfigManager) GetBalanceConfig(taskType string) *worker_pb.BalanceTaskConfig {
if policy := mcm.getTaskPolicy(taskType); policy != nil {
if balanceConfig := policy.GetBalanceConfig(); balanceConfig != nil {
return balanceConfig
}
}
// Return defaults if not configured
return &worker_pb.BalanceTaskConfig{
ImbalanceThreshold: 0.2,
MinServerCount: 2,
}
}
// GetReplicationConfig returns replication-specific configuration for a task type
func (mcm *MaintenanceConfigManager) GetReplicationConfig(taskType string) *worker_pb.ReplicationTaskConfig {
if policy := mcm.getTaskPolicy(taskType); policy != nil {
if replicationConfig := policy.GetReplicationConfig(); replicationConfig != nil {
return replicationConfig
}
}
// Return defaults if not configured
return &worker_pb.ReplicationTaskConfig{
TargetReplicaCount: 2,
}
}
// Typed convenience methods for getting task configurations
// GetVacuumTaskConfigForType returns vacuum configuration for a specific task type
func (mcm *MaintenanceConfigManager) GetVacuumTaskConfigForType(taskType string) *worker_pb.VacuumTaskConfig {
return GetVacuumTaskConfig(mcm.config.Policy, MaintenanceTaskType(taskType))
}
// GetErasureCodingTaskConfigForType returns erasure coding configuration for a specific task type
func (mcm *MaintenanceConfigManager) GetErasureCodingTaskConfigForType(taskType string) *worker_pb.ErasureCodingTaskConfig {
return GetErasureCodingTaskConfig(mcm.config.Policy, MaintenanceTaskType(taskType))
}
// GetBalanceTaskConfigForType returns balance configuration for a specific task type
func (mcm *MaintenanceConfigManager) GetBalanceTaskConfigForType(taskType string) *worker_pb.BalanceTaskConfig {
return GetBalanceTaskConfig(mcm.config.Policy, MaintenanceTaskType(taskType))
}
// GetReplicationTaskConfigForType returns replication configuration for a specific task type
func (mcm *MaintenanceConfigManager) GetReplicationTaskConfigForType(taskType string) *worker_pb.ReplicationTaskConfig {
return GetReplicationTaskConfig(mcm.config.Policy, MaintenanceTaskType(taskType))
}
// Helper methods
func (mcm *MaintenanceConfigManager) getTaskPolicy(taskType string) *worker_pb.TaskPolicy {
if mcm.config.Policy != nil && mcm.config.Policy.TaskPolicies != nil {
return mcm.config.Policy.TaskPolicies[taskType]
}
return nil
}
// IsTaskEnabled returns whether a task type is enabled
func (mcm *MaintenanceConfigManager) IsTaskEnabled(taskType string) bool {
if policy := mcm.getTaskPolicy(taskType); policy != nil {
return policy.Enabled
}
return false
}
// GetMaxConcurrent returns the max concurrent limit for a task type
func (mcm *MaintenanceConfigManager) GetMaxConcurrent(taskType string) int32 {
if policy := mcm.getTaskPolicy(taskType); policy != nil {
return policy.MaxConcurrent
}
return 1 // Default
}
// GetRepeatInterval returns the repeat interval for a task type in seconds
func (mcm *MaintenanceConfigManager) GetRepeatInterval(taskType string) int32 {
if policy := mcm.getTaskPolicy(taskType); policy != nil {
return policy.RepeatIntervalSeconds
}
if mcm.config.Policy != nil {
return mcm.config.Policy.DefaultRepeatIntervalSeconds
}
return 0 // Policy may be nil (see DefaultMaintenanceConfigProto); avoid a nil dereference
}
// GetCheckInterval returns the check interval for a task type in seconds
func (mcm *MaintenanceConfigManager) GetCheckInterval(taskType string) int32 {
if policy := mcm.getTaskPolicy(taskType); policy != nil {
return policy.CheckIntervalSeconds
}
if mcm.config.Policy != nil {
return mcm.config.Policy.DefaultCheckIntervalSeconds
}
return 0 // Policy may be nil (see DefaultMaintenanceConfigProto); avoid a nil dereference
}
// Duration accessor methods
// GetScanInterval returns the scan interval as a time.Duration
func (mcm *MaintenanceConfigManager) GetScanInterval() time.Duration {
return time.Duration(mcm.config.ScanIntervalSeconds) * time.Second
}
// GetWorkerTimeout returns the worker timeout as a time.Duration
func (mcm *MaintenanceConfigManager) GetWorkerTimeout() time.Duration {
return time.Duration(mcm.config.WorkerTimeoutSeconds) * time.Second
}
// GetTaskTimeout returns the task timeout as a time.Duration
func (mcm *MaintenanceConfigManager) GetTaskTimeout() time.Duration {
return time.Duration(mcm.config.TaskTimeoutSeconds) * time.Second
}
// GetRetryDelay returns the retry delay as a time.Duration
func (mcm *MaintenanceConfigManager) GetRetryDelay() time.Duration {
return time.Duration(mcm.config.RetryDelaySeconds) * time.Second
}
// GetCleanupInterval returns the cleanup interval as a time.Duration
func (mcm *MaintenanceConfigManager) GetCleanupInterval() time.Duration {
return time.Duration(mcm.config.CleanupIntervalSeconds) * time.Second
}
// GetTaskRetention returns the task retention period as a time.Duration
func (mcm *MaintenanceConfigManager) GetTaskRetention() time.Duration {
return time.Duration(mcm.config.TaskRetentionSeconds) * time.Second
}
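// Illustrative sketch (not part of this change): the duration accessors plug
// directly into timers, e.g. a periodic scan loop driven by the configured
// scan interval.
func exampleScanLoop(mcm *MaintenanceConfigManager, scan func()) {
ticker := time.NewTicker(mcm.GetScanInterval())
defer ticker.Stop()
for range ticker.C {
scan()
}
}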
// ValidateMaintenanceConfigWithSchema validates protobuf maintenance configuration using ConfigField rules
func ValidateMaintenanceConfigWithSchema(config *worker_pb.MaintenanceConfig) error {
if config == nil {
return fmt.Errorf("configuration cannot be nil")
}
// Get the schema to access field validation rules
schema := GetMaintenanceConfigSchema()
// Validate each field individually using the ConfigField rules
if err := validateFieldWithSchema(schema, "enabled", config.Enabled); err != nil {
return err
}
if err := validateFieldWithSchema(schema, "scan_interval_seconds", int(config.ScanIntervalSeconds)); err != nil {
return err
}
if err := validateFieldWithSchema(schema, "worker_timeout_seconds", int(config.WorkerTimeoutSeconds)); err != nil {
return err
}
if err := validateFieldWithSchema(schema, "task_timeout_seconds", int(config.TaskTimeoutSeconds)); err != nil {
return err
}
if err := validateFieldWithSchema(schema, "retry_delay_seconds", int(config.RetryDelaySeconds)); err != nil {
return err
}
if err := validateFieldWithSchema(schema, "max_retries", int(config.MaxRetries)); err != nil {
return err
}
if err := validateFieldWithSchema(schema, "cleanup_interval_seconds", int(config.CleanupIntervalSeconds)); err != nil {
return err
}
if err := validateFieldWithSchema(schema, "task_retention_seconds", int(config.TaskRetentionSeconds)); err != nil {
return err
}
// Validate policy fields if present
if config.Policy != nil {
// Policy fields use simplified range checks (see validatePolicyField below)
if err := validatePolicyField("global_max_concurrent", int(config.Policy.GlobalMaxConcurrent)); err != nil {
return err
}
if err := validatePolicyField("default_repeat_interval_seconds", int(config.Policy.DefaultRepeatIntervalSeconds)); err != nil {
return err
}
if err := validatePolicyField("default_check_interval_seconds", int(config.Policy.DefaultCheckIntervalSeconds)); err != nil {
return err
}
}
return nil
}
// validateFieldWithSchema validates a single field using its ConfigField definition
func validateFieldWithSchema(schema *MaintenanceConfigSchema, fieldName string, value interface{}) error {
field := schema.GetFieldByName(fieldName)
if field == nil {
// Field not in schema, skip validation
return nil
}
return field.ValidateValue(value)
}
// validatePolicyField validates policy fields (simplified validation for now).
// The case names match the field names passed by ValidateMaintenanceConfigWithSchema
// above, so these checks actually run; interval values arrive in seconds.
func validatePolicyField(fieldName string, value int) error {
switch fieldName {
case "global_max_concurrent":
if value < 1 || value > 20 {
return fmt.Errorf("Global Max Concurrent must be between 1 and 20, got %d", value)
}
case "default_repeat_interval_seconds":
if value < 1*3600 || value > 168*3600 {
return fmt.Errorf("Default Repeat Interval must be between 1 and 168 hours, got %d seconds", value)
}
case "default_check_interval_seconds":
if value < 1*3600 || value > 168*3600 {
return fmt.Errorf("Default Check Interval must be between 1 and 168 hours, got %d seconds", value)
}
}
return nil
}
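// Illustrative sketch (not part of this change): validating the default
// configuration end to end with the schema-driven checks above.
func exampleValidateDefaults() error {
return ValidateMaintenanceConfigWithSchema(DefaultMaintenanceConfigProto())
}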

View File

@@ -1,11 +1,20 @@
package maintenance
import (
"context"
"fmt"
"time"
"github.com/seaweedfs/seaweedfs/weed/admin/topology"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/operation"
"github.com/seaweedfs/seaweedfs/weed/pb"
"github.com/seaweedfs/seaweedfs/weed/pb/master_pb"
"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks"
"github.com/seaweedfs/seaweedfs/weed/worker/types"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
)
// MaintenanceIntegration bridges the task system with existing maintenance
@@ -17,6 +26,12 @@ type MaintenanceIntegration struct {
maintenanceQueue *MaintenanceQueue
maintenancePolicy *MaintenancePolicy
// Pending operations tracker
pendingOperations *PendingOperations
// Active topology for task detection and target selection
activeTopology *topology.ActiveTopology
// Type conversion maps
taskTypeMap map[types.TaskType]MaintenanceTaskType
revTaskTypeMap map[MaintenanceTaskType]types.TaskType
@@ -31,8 +46,12 @@ func NewMaintenanceIntegration(queue *MaintenanceQueue, policy *MaintenancePolic
uiRegistry: tasks.GetGlobalUIRegistry(), // Use global UI registry with auto-registered UI providers
maintenanceQueue: queue,
maintenancePolicy: policy,
pendingOperations: NewPendingOperations(),
}
// Initialize active topology with 10 second recent task window
integration.activeTopology = topology.NewActiveTopology(10)
// Initialize type conversion maps
integration.initializeTypeMaps()
@@ -96,7 +115,7 @@ func (s *MaintenanceIntegration) registerAllTasks() {
s.buildTaskTypeMappings()
// Configure tasks from policy
s.ConfigureTasksFromPolicy()
registeredTaskTypes := make([]string, 0, len(s.taskTypeMap))
for _, maintenanceTaskType := range s.taskTypeMap {
@@ -105,8 +124,8 @@ func (s *MaintenanceIntegration) registerAllTasks() {
glog.V(1).Infof("Registered tasks: %v", registeredTaskTypes)
}
// ConfigureTasksFromPolicy dynamically configures all registered tasks based on the maintenance policy
func (s *MaintenanceIntegration) ConfigureTasksFromPolicy() {
if s.maintenancePolicy == nil {
return
}
@@ -143,7 +162,7 @@ func (s *MaintenanceIntegration) configureDetectorFromPolicy(taskType types.Task
// Convert task system type to maintenance task type for policy lookup
maintenanceTaskType, exists := s.taskTypeMap[taskType]
if exists {
enabled := IsTaskEnabled(s.maintenancePolicy, maintenanceTaskType)
basicDetector.SetEnabled(enabled)
glog.V(3).Infof("Set enabled=%v for detector %s", enabled, taskType)
}
@@ -172,14 +191,14 @@ func (s *MaintenanceIntegration) configureSchedulerFromPolicy(taskType types.Tas
// Set enabled status if scheduler supports it
if enableableScheduler, ok := scheduler.(interface{ SetEnabled(bool) }); ok {
enabled := IsTaskEnabled(s.maintenancePolicy, maintenanceTaskType)
enableableScheduler.SetEnabled(enabled)
glog.V(3).Infof("Set enabled=%v for scheduler %s", enabled, taskType)
}
// Set max concurrent if scheduler supports it
if concurrentScheduler, ok := scheduler.(interface{ SetMaxConcurrent(int) }); ok {
maxConcurrent := GetMaxConcurrent(s.maintenancePolicy, maintenanceTaskType)
if maxConcurrent > 0 {
concurrentScheduler.SetMaxConcurrent(maxConcurrent)
glog.V(3).Infof("Set max concurrent=%d for scheduler %s", maxConcurrent, taskType)
@@ -193,11 +212,20 @@ func (s *MaintenanceIntegration) configureSchedulerFromPolicy(taskType types.Tas
// ScanWithTaskDetectors performs a scan using the task system
func (s *MaintenanceIntegration) ScanWithTaskDetectors(volumeMetrics []*types.VolumeHealthMetrics) ([]*TaskDetectionResult, error) {
// Note: ActiveTopology gets updated from topology info instead of volume metrics
glog.V(2).Infof("Processed %d volume metrics for task detection", len(volumeMetrics))
// Filter out volumes with pending operations to avoid duplicates
filteredMetrics := s.pendingOperations.FilterVolumeMetricsExcludingPending(volumeMetrics)
glog.V(1).Infof("Scanning %d volumes (filtered from %d) excluding pending operations",
len(filteredMetrics), len(volumeMetrics))
var allResults []*TaskDetectionResult
// Create cluster info
clusterInfo := &types.ClusterInfo{
TotalVolumes: len(filteredMetrics),
LastUpdated: time.Now(),
}
@@ -209,17 +237,26 @@ func (s *MaintenanceIntegration) ScanWithTaskDetectors(volumeMetrics []*types.Vo
glog.V(2).Infof("Running detection for task type: %s", taskType)
results, err := detector.ScanForTasks(filteredMetrics, clusterInfo)
if err != nil {
glog.Errorf("Failed to scan for %s tasks: %v", taskType, err)
continue
}
// Convert results to existing system format and check for conflicts
for _, result := range results {
existingResult := s.convertToExistingFormat(result)
if existingResult != nil {
// Double-check for conflicts with pending operations
opType := s.mapMaintenanceTaskTypeToPendingOperationType(existingResult.TaskType)
if !s.pendingOperations.WouldConflictWithPending(existingResult.VolumeID, opType) {
// Plan destination for operations that need it
s.planDestinationForTask(existingResult, opType)
allResults = append(allResults, existingResult)
} else {
glog.V(2).Infof("Skipping task %s for volume %d due to conflict with pending operation",
existingResult.TaskType, existingResult.VolumeID)
}
}
}
@@ -229,6 +266,11 @@ func (s *MaintenanceIntegration) ScanWithTaskDetectors(volumeMetrics []*types.Vo
return allResults, nil
}
// UpdateTopologyInfo updates the volume shard tracker with topology information for empty servers
func (s *MaintenanceIntegration) UpdateTopologyInfo(topologyInfo *master_pb.TopologyInfo) error {
return s.activeTopology.UpdateTopology(topologyInfo)
}
// convertToExistingFormat converts task results to existing system format using dynamic mapping
func (s *MaintenanceIntegration) convertToExistingFormat(result *types.TaskDetectionResult) *TaskDetectionResult {
// Convert types using mapping tables
@@ -241,49 +283,62 @@ func (s *MaintenanceIntegration) convertToExistingFormat(result *types.TaskDetec
existingPriority, exists := s.priorityMap[result.Priority]
if !exists {
glog.Warningf("Unknown priority %d, defaulting to normal", result.Priority)
glog.Warningf("Unknown priority %s, defaulting to normal", result.Priority)
existingPriority = PriorityNormal
}
return &TaskDetectionResult{
TaskType: existingType,
VolumeID: result.VolumeID,
Server: result.Server,
Collection: result.Collection,
Priority: existingPriority,
Reason: result.Reason,
TypedParams: result.TypedParams,
ScheduleAt: result.ScheduleAt,
}
}
// CanScheduleWithTaskSchedulers determines if a task can be scheduled using task schedulers with dynamic type conversion
func (s *MaintenanceIntegration) CanScheduleWithTaskSchedulers(task *MaintenanceTask, runningTasks []*MaintenanceTask, availableWorkers []*MaintenanceWorker) bool {
glog.Infof("DEBUG CanScheduleWithTaskSchedulers: Checking task %s (type: %s)", task.ID, task.Type)
// Convert existing types to task types using mapping
taskType, exists := s.revTaskTypeMap[task.Type]
if !exists {
glog.V(2).Infof("Unknown task type %s for scheduling, falling back to existing logic", task.Type)
glog.Infof("DEBUG CanScheduleWithTaskSchedulers: Unknown task type %s for scheduling, falling back to existing logic", task.Type)
return false // Fallback to existing logic for unknown types
}
glog.Infof("DEBUG CanScheduleWithTaskSchedulers: Mapped task type %s to %s", task.Type, taskType)
// Convert task objects
taskObject := s.convertTaskToTaskSystem(task)
if taskObject == nil {
glog.V(2).Infof("Failed to convert task %s for scheduling", task.ID)
glog.Infof("DEBUG CanScheduleWithTaskSchedulers: Failed to convert task %s for scheduling", task.ID)
return false
}
glog.Infof("DEBUG CanScheduleWithTaskSchedulers: Successfully converted task %s", task.ID)
runningTaskObjects := s.convertTasksToTaskSystem(runningTasks)
workerObjects := s.convertWorkersToTaskSystem(availableWorkers)
glog.Infof("DEBUG CanScheduleWithTaskSchedulers: Converted %d running tasks and %d workers", len(runningTaskObjects), len(workerObjects))
// Get the appropriate scheduler
scheduler := s.taskRegistry.GetScheduler(taskType)
if scheduler == nil {
glog.V(2).Infof("No scheduler found for task type %s", taskType)
glog.Infof("DEBUG CanScheduleWithTaskSchedulers: No scheduler found for task type %s", taskType)
return false
}
glog.Infof("DEBUG CanScheduleWithTaskSchedulers: Found scheduler for task type %s", taskType)
canSchedule := scheduler.CanScheduleNow(taskObject, runningTaskObjects, workerObjects)
glog.Infof("DEBUG CanScheduleWithTaskSchedulers: Scheduler decision for task %s: %v", task.ID, canSchedule)
return canSchedule
}
// convertTaskToTaskSystem converts existing task to task system format using dynamic mapping
@@ -304,14 +359,14 @@ func (s *MaintenanceIntegration) convertTaskToTaskSystem(task *MaintenanceTask)
}
return &types.Task{
ID: task.ID,
Type: taskType,
Priority: priority,
VolumeID: task.VolumeID,
Server: task.Server,
Collection: task.Collection,
TypedParams: task.TypedParams,
CreatedAt: task.CreatedAt,
}
}
@@ -407,3 +462,463 @@ func (s *MaintenanceIntegration) GetAllTaskStats() []*types.TaskStats {
return stats
}
// mapMaintenanceTaskTypeToPendingOperationType converts a maintenance task type to a pending operation type
func (s *MaintenanceIntegration) mapMaintenanceTaskTypeToPendingOperationType(taskType MaintenanceTaskType) PendingOperationType {
switch taskType {
case MaintenanceTaskType("balance"):
return OpTypeVolumeBalance
case MaintenanceTaskType("erasure_coding"):
return OpTypeErasureCoding
case MaintenanceTaskType("vacuum"):
return OpTypeVacuum
case MaintenanceTaskType("replication"):
return OpTypeReplication
default:
// For other task types, assume they're volume operations
return OpTypeVolumeMove
}
}
// GetPendingOperations returns the pending operations tracker
func (s *MaintenanceIntegration) GetPendingOperations() *PendingOperations {
return s.pendingOperations
}
// GetActiveTopology returns the active topology for task detection
func (s *MaintenanceIntegration) GetActiveTopology() *topology.ActiveTopology {
return s.activeTopology
}
// planDestinationForTask plans the destination for a task that requires it and creates typed protobuf parameters
func (s *MaintenanceIntegration) planDestinationForTask(task *TaskDetectionResult, opType PendingOperationType) {
// Only plan destinations for operations that move volumes/shards
if opType == OpTypeVacuum {
// For vacuum tasks, create VacuumTaskParams
s.createVacuumTaskParams(task)
return
}
glog.V(1).Infof("Planning destination for %s task on volume %d (server: %s)", task.TaskType, task.VolumeID, task.Server)
// Use ActiveTopology for destination planning
destinationPlan, err := s.planDestinationWithActiveTopology(task, opType)
if err != nil {
glog.Warningf("Failed to plan primary destination for %s task volume %d: %v",
task.TaskType, task.VolumeID, err)
// Don't return here - still try to create task params which might work with multiple destinations
}
// Create typed protobuf parameters based on operation type
switch opType {
case OpTypeErasureCoding:
if destinationPlan == nil {
glog.Warningf("Cannot create EC task for volume %d: destination planning failed", task.VolumeID)
return
}
s.createErasureCodingTaskParams(task, destinationPlan)
case OpTypeVolumeMove, OpTypeVolumeBalance:
if destinationPlan == nil {
glog.Warningf("Cannot create balance task for volume %d: destination planning failed", task.VolumeID)
return
}
s.createBalanceTaskParams(task, destinationPlan.(*topology.DestinationPlan))
case OpTypeReplication:
if destinationPlan == nil {
glog.Warningf("Cannot create replication task for volume %d: destination planning failed", task.VolumeID)
return
}
s.createReplicationTaskParams(task, destinationPlan.(*topology.DestinationPlan))
default:
glog.V(2).Infof("Unknown operation type for task %s: %v", task.TaskType, opType)
}
if destinationPlan != nil {
switch plan := destinationPlan.(type) {
case *topology.DestinationPlan:
glog.V(1).Infof("Completed destination planning for %s task on volume %d: %s -> %s",
task.TaskType, task.VolumeID, task.Server, plan.TargetNode)
case *topology.MultiDestinationPlan:
glog.V(1).Infof("Completed EC destination planning for volume %d: %s -> %d destinations (racks: %d, DCs: %d)",
task.VolumeID, task.Server, len(plan.Plans), plan.SuccessfulRack, plan.SuccessfulDCs)
}
} else {
glog.V(1).Infof("Completed destination planning for %s task on volume %d: no destination planned",
task.TaskType, task.VolumeID)
}
}
// createVacuumTaskParams creates typed parameters for vacuum tasks
func (s *MaintenanceIntegration) createVacuumTaskParams(task *TaskDetectionResult) {
// Get configuration from policy instead of using hard-coded values
vacuumConfig := GetVacuumTaskConfig(s.maintenancePolicy, MaintenanceTaskType("vacuum"))
// Use configured values or defaults if config is not available
garbageThreshold := 0.3 // Default 30%
verifyChecksum := true // Default to verify
batchSize := int32(1000) // Default batch size
workingDir := "/tmp/seaweedfs_vacuum_work" // Default working directory
if vacuumConfig != nil {
garbageThreshold = vacuumConfig.GarbageThreshold
// Note: VacuumTaskConfig has GarbageThreshold, MinVolumeAgeHours, MinIntervalSeconds
// Other fields like VerifyChecksum, BatchSize, WorkingDir would need to be added
// to the protobuf definition if they should be configurable
}
// Create typed protobuf parameters
task.TypedParams = &worker_pb.TaskParams{
VolumeId: task.VolumeID,
Server: task.Server,
Collection: task.Collection,
TaskParams: &worker_pb.TaskParams_VacuumParams{
VacuumParams: &worker_pb.VacuumTaskParams{
GarbageThreshold: garbageThreshold,
ForceVacuum: false,
BatchSize: batchSize,
WorkingDir: workingDir,
VerifyChecksum: verifyChecksum,
},
},
}
}
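// Illustrative sketch (not part of this change): a worker can read the typed
// vacuum parameters back out of the oneof; the GetVacuumParams accessor name
// is assumed from the TaskParams_VacuumParams wrapper used above.
func exampleReadVacuumParams(params *worker_pb.TaskParams) float64 {
if vp := params.GetVacuumParams(); vp != nil {
return vp.GarbageThreshold
}
return 0
}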
// planDestinationWithActiveTopology uses ActiveTopology to plan destinations
func (s *MaintenanceIntegration) planDestinationWithActiveTopology(task *TaskDetectionResult, opType PendingOperationType) (interface{}, error) {
// Get source node information from topology
var sourceRack, sourceDC string
// Extract rack and DC from topology info
topologyInfo := s.activeTopology.GetTopologyInfo()
if topologyInfo != nil {
for _, dc := range topologyInfo.DataCenterInfos {
for _, rack := range dc.RackInfos {
for _, dataNodeInfo := range rack.DataNodeInfos {
if dataNodeInfo.Id == task.Server {
sourceDC = dc.Id
sourceRack = rack.Id
break
}
}
if sourceRack != "" {
break
}
}
if sourceDC != "" {
break
}
}
}
switch opType {
case OpTypeVolumeBalance, OpTypeVolumeMove:
// Plan single destination for balance operation
return s.activeTopology.PlanBalanceDestination(task.VolumeID, task.Server, sourceRack, sourceDC, 0)
case OpTypeErasureCoding:
// Plan multiple destinations for EC operation using adaptive shard counts
// Start with the default configuration, but fall back to smaller configurations if insufficient disks
totalShards := s.getOptimalECShardCount()
multiPlan, err := s.activeTopology.PlanECDestinations(task.VolumeID, task.Server, sourceRack, sourceDC, totalShards)
if err != nil {
return nil, err
}
if multiPlan != nil && len(multiPlan.Plans) > 0 {
// Return the multi-destination plan for EC
return multiPlan, nil
}
return nil, fmt.Errorf("no EC destinations found")
default:
return nil, fmt.Errorf("unsupported operation type for destination planning: %v", opType)
}
}
// createErasureCodingTaskParams creates typed parameters for EC tasks
func (s *MaintenanceIntegration) createErasureCodingTaskParams(task *TaskDetectionResult, destinationPlan interface{}) {
// Determine EC shard counts based on the number of planned destinations
multiPlan, ok := destinationPlan.(*topology.MultiDestinationPlan)
if !ok {
glog.Warningf("EC task for volume %d received unexpected destination plan type", task.VolumeID)
task.TypedParams = nil
return
}
// Use adaptive shard configuration based on actual planned destinations
totalShards := len(multiPlan.Plans)
dataShards, parityShards := s.getECShardCounts(totalShards)
// Extract disk-aware destinations from the multi-destination plan
var destinations []*worker_pb.ECDestination
var allConflicts []string
for _, plan := range multiPlan.Plans {
allConflicts = append(allConflicts, plan.Conflicts...)
// Create disk-aware destination
destinations = append(destinations, &worker_pb.ECDestination{
Node: plan.TargetNode,
DiskId: plan.TargetDisk,
Rack: plan.TargetRack,
DataCenter: plan.TargetDC,
PlacementScore: plan.PlacementScore,
})
}
glog.V(1).Infof("EC destination planning for volume %d: got %d destinations (%d+%d shards) across %d racks and %d DCs",
task.VolumeID, len(destinations), dataShards, parityShards, multiPlan.SuccessfulRack, multiPlan.SuccessfulDCs)
if len(destinations) == 0 {
glog.Warningf("No destinations available for EC task volume %d - rejecting task", task.VolumeID)
task.TypedParams = nil
return
}
// Collect existing EC shard locations for cleanup
existingShardLocations := s.collectExistingEcShardLocations(task.VolumeID)
// Create EC task parameters
ecParams := &worker_pb.ErasureCodingTaskParams{
Destinations: destinations, // Disk-aware destinations
DataShards: dataShards,
ParityShards: parityShards,
WorkingDir: "/tmp/seaweedfs_ec_work",
MasterClient: "localhost:9333",
CleanupSource: true,
ExistingShardLocations: existingShardLocations, // Pass existing shards for cleanup
}
// Add placement conflicts if any
if len(allConflicts) > 0 {
// Remove duplicates
conflictMap := make(map[string]bool)
var uniqueConflicts []string
for _, conflict := range allConflicts {
if !conflictMap[conflict] {
conflictMap[conflict] = true
uniqueConflicts = append(uniqueConflicts, conflict)
}
}
ecParams.PlacementConflicts = uniqueConflicts
}
// Wrap in TaskParams
task.TypedParams = &worker_pb.TaskParams{
VolumeId: task.VolumeID,
Server: task.Server,
Collection: task.Collection,
TaskParams: &worker_pb.TaskParams_ErasureCodingParams{
ErasureCodingParams: ecParams,
},
}
glog.V(1).Infof("Created EC task params with %d destinations for volume %d",
len(destinations), task.VolumeID)
}
// createBalanceTaskParams creates typed parameters for balance/move tasks
func (s *MaintenanceIntegration) createBalanceTaskParams(task *TaskDetectionResult, destinationPlan *topology.DestinationPlan) {
// balanceConfig could be used for future config options like ImbalanceThreshold, MinServerCount
// Create balance task parameters
balanceParams := &worker_pb.BalanceTaskParams{
DestNode: destinationPlan.TargetNode,
EstimatedSize: destinationPlan.ExpectedSize,
DestRack: destinationPlan.TargetRack,
DestDc: destinationPlan.TargetDC,
PlacementScore: destinationPlan.PlacementScore,
ForceMove: false, // Default to false
TimeoutSeconds: 300, // Default 5 minutes
}
// Add placement conflicts if any
if len(destinationPlan.Conflicts) > 0 {
balanceParams.PlacementConflicts = destinationPlan.Conflicts
}
// Note: balanceConfig would have ImbalanceThreshold, MinServerCount if needed for future enhancements
// Wrap in TaskParams
task.TypedParams = &worker_pb.TaskParams{
VolumeId: task.VolumeID,
Server: task.Server,
Collection: task.Collection,
TaskParams: &worker_pb.TaskParams_BalanceParams{
BalanceParams: balanceParams,
},
}
glog.V(1).Infof("Created balance task params for volume %d: %s -> %s (score: %.2f)",
task.VolumeID, task.Server, destinationPlan.TargetNode, destinationPlan.PlacementScore)
}
// createReplicationTaskParams creates typed parameters for replication tasks
func (s *MaintenanceIntegration) createReplicationTaskParams(task *TaskDetectionResult, destinationPlan *topology.DestinationPlan) {
// replicationConfig could be used for future config options like TargetReplicaCount
// Create replication task parameters
replicationParams := &worker_pb.ReplicationTaskParams{
DestNode: destinationPlan.TargetNode,
DestRack: destinationPlan.TargetRack,
DestDc: destinationPlan.TargetDC,
PlacementScore: destinationPlan.PlacementScore,
}
// Add placement conflicts if any
if len(destinationPlan.Conflicts) > 0 {
replicationParams.PlacementConflicts = destinationPlan.Conflicts
}
// Note: replicationConfig would have TargetReplicaCount if needed for future enhancements
// Wrap in TaskParams
task.TypedParams = &worker_pb.TaskParams{
VolumeId: task.VolumeID,
Server: task.Server,
Collection: task.Collection,
TaskParams: &worker_pb.TaskParams_ReplicationParams{
ReplicationParams: replicationParams,
},
}
glog.V(1).Infof("Created replication task params for volume %d: %s -> %s",
task.VolumeID, task.Server, destinationPlan.TargetNode)
}
// getOptimalECShardCount returns the optimal number of EC shards based on available disks
// Uses a simplified approach to avoid blocking during UI access
func (s *MaintenanceIntegration) getOptimalECShardCount() int {
// Try to get available disks quickly, but don't block if topology is busy
availableDisks := s.getAvailableDisksQuickly()
// EC configurations in order of preference: (data+parity=total)
// Use smaller configurations for smaller clusters
if availableDisks >= 14 {
glog.V(1).Infof("Using default EC configuration: 10+4=14 shards for %d available disks", availableDisks)
return 14 // Default: 10+4
} else if availableDisks >= 6 {
glog.V(1).Infof("Using small cluster EC configuration: 4+2=6 shards for %d available disks", availableDisks)
return 6 // Small cluster: 4+2
} else if availableDisks >= 4 {
glog.V(1).Infof("Using minimal EC configuration: 3+1=4 shards for %d available disks", availableDisks)
return 4 // Minimal: 3+1
} else {
glog.V(1).Infof("Using very small cluster EC configuration: 2+1=3 shards for %d available disks", availableDisks)
return 3 // Very small: 2+1
}
}
// getAvailableDisksQuickly returns available disk count with a fast path to avoid UI blocking
func (s *MaintenanceIntegration) getAvailableDisksQuickly() int {
// Use ActiveTopology's optimized disk counting if available
// Pass the EC task type with an empty node filter for a general availability check
allDisks := s.activeTopology.GetAvailableDisks(topology.TaskTypeErasureCoding, "")
if len(allDisks) > 0 {
return len(allDisks)
}
// Fallback: try to count from topology but don't hold locks for too long
topologyInfo := s.activeTopology.GetTopologyInfo()
return s.countAvailableDisks(topologyInfo)
}
// countAvailableDisks counts the total number of available disks in the topology
func (s *MaintenanceIntegration) countAvailableDisks(topologyInfo *master_pb.TopologyInfo) int {
if topologyInfo == nil {
return 0
}
diskCount := 0
for _, dc := range topologyInfo.DataCenterInfos {
for _, rack := range dc.RackInfos {
for _, node := range rack.DataNodeInfos {
diskCount += len(node.DiskInfos)
}
}
}
return diskCount
}
// getECShardCounts determines data and parity shard counts for a given total
func (s *MaintenanceIntegration) getECShardCounts(totalShards int) (int32, int32) {
// Map total shards to (data, parity) configurations
switch totalShards {
case 14:
return 10, 4 // Default: 10+4
case 9:
return 6, 3 // Medium: 6+3
case 6:
return 4, 2 // Small: 4+2
case 4:
return 3, 1 // Minimal: 3+1
case 3:
return 2, 1 // Very small: 2+1
default:
// For any other total, try to maintain roughly 3:1 or 4:1 ratio
if totalShards >= 4 {
parityShards := totalShards / 4
if parityShards < 1 {
parityShards = 1
}
dataShards := totalShards - parityShards
return int32(dataShards), int32(parityShards)
}
// Fallback for very small clusters
return int32(totalShards - 1), 1
}
}
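A quick table-driven sketch of the mapping above, written as an in-package test; it assumes a zero-value MaintenanceIntegration is usable here, which holds because getECShardCounts reads no receiver state.

func TestGetECShardCounts_Mapping(t *testing.T) {
    s := &MaintenanceIntegration{} // safe: the method reads no receiver state
    cases := []struct {
        total  int
        data   int32
        parity int32
    }{
        {14, 10, 4},
        {9, 6, 3},
        {6, 4, 2},
        {4, 3, 1},
        {3, 2, 1},
        {8, 6, 2}, // default branch: parity = 8/4
        {2, 1, 1}, // fallback: total-1 data shards, 1 parity
    }
    for _, c := range cases {
        data, parity := s.getECShardCounts(c.total)
        if data != c.data || parity != c.parity {
            t.Errorf("getECShardCounts(%d) = %d+%d, want %d+%d", c.total, data, parity, c.data, c.parity)
        }
    }
}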
// collectExistingEcShardLocations queries the master for existing EC shard locations during planning
func (s *MaintenanceIntegration) collectExistingEcShardLocations(volumeId uint32) []*worker_pb.ExistingECShardLocation {
var existingShardLocations []*worker_pb.ExistingECShardLocation
// Use insecure connection for simplicity - in production this might be configurable
grpcDialOption := grpc.WithTransportCredentials(insecure.NewCredentials())
err := operation.WithMasterServerClient(false, pb.ServerAddress("localhost:9333"), grpcDialOption,
func(masterClient master_pb.SeaweedClient) error {
req := &master_pb.LookupEcVolumeRequest{
VolumeId: volumeId,
}
resp, err := masterClient.LookupEcVolume(context.Background(), req)
if err != nil {
// If volume doesn't exist as EC volume, that's fine - just no existing shards
glog.V(1).Infof("LookupEcVolume for volume %d returned: %v (this is normal if no existing EC shards)", volumeId, err)
return nil
}
// Group shard locations by server
serverShardMap := make(map[string][]uint32)
for _, shardIdLocation := range resp.ShardIdLocations {
shardId := uint32(shardIdLocation.ShardId)
for _, location := range shardIdLocation.Locations {
serverAddr := pb.NewServerAddressFromLocation(location)
serverShardMap[string(serverAddr)] = append(serverShardMap[string(serverAddr)], shardId)
}
}
// Convert to protobuf format
for serverAddr, shardIds := range serverShardMap {
existingShardLocations = append(existingShardLocations, &worker_pb.ExistingECShardLocation{
Node: serverAddr,
ShardIds: shardIds,
})
}
return nil
})
if err != nil {
glog.Errorf("Failed to lookup existing EC shards from master for volume %d: %v", volumeId, err)
// Return empty list - cleanup will be skipped but task can continue
return []*worker_pb.ExistingECShardLocation{}
}
if len(existingShardLocations) > 0 {
glog.V(1).Infof("Found existing EC shards for volume %d on %d servers during planning", volumeId, len(existingShardLocations))
}
return existingShardLocations
}
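The grouping step in the middle of this function (per-shard locations inverted into per-server shard lists) is easy to verify in isolation; a standalone test sketch with assumed inputs:

func TestGroupShardsByServer_Sketch(t *testing.T) {
    // shardId -> servers holding that shard (assumed inputs)
    locations := map[uint32][]string{
        0: {"10.0.0.1:8080"},
        1: {"10.0.0.1:8080", "10.0.0.2:8080"},
    }
    serverShardMap := make(map[string][]uint32)
    for shardId, servers := range locations {
        for _, s := range servers {
            serverShardMap[s] = append(serverShardMap[s], shardId)
        }
    }
    if len(serverShardMap["10.0.0.1:8080"]) != 2 || len(serverShardMap["10.0.0.2:8080"]) != 1 {
        t.Errorf("unexpected grouping: %v", serverShardMap)
    }
}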

View File

@@ -7,8 +7,76 @@ import (
"time"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/balance"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/erasure_coding"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks/vacuum"
)
// buildPolicyFromTaskConfigs loads task configurations from separate files and builds a MaintenancePolicy
func buildPolicyFromTaskConfigs() *worker_pb.MaintenancePolicy {
policy := &worker_pb.MaintenancePolicy{
GlobalMaxConcurrent: 4,
DefaultRepeatIntervalSeconds: 6 * 3600, // 6 hours in seconds
DefaultCheckIntervalSeconds: 12 * 3600, // 12 hours in seconds
TaskPolicies: make(map[string]*worker_pb.TaskPolicy),
}
// Load vacuum task configuration
if vacuumConfig := vacuum.LoadConfigFromPersistence(nil); vacuumConfig != nil {
policy.TaskPolicies["vacuum"] = &worker_pb.TaskPolicy{
Enabled: vacuumConfig.Enabled,
MaxConcurrent: int32(vacuumConfig.MaxConcurrent),
RepeatIntervalSeconds: int32(vacuumConfig.ScanIntervalSeconds),
CheckIntervalSeconds: int32(vacuumConfig.ScanIntervalSeconds),
TaskConfig: &worker_pb.TaskPolicy_VacuumConfig{
VacuumConfig: &worker_pb.VacuumTaskConfig{
GarbageThreshold: float64(vacuumConfig.GarbageThreshold),
MinVolumeAgeHours: int32(vacuumConfig.MinVolumeAgeSeconds / 3600), // Convert seconds to hours
MinIntervalSeconds: int32(vacuumConfig.MinIntervalSeconds),
},
},
}
}
// Load erasure coding task configuration
if ecConfig := erasure_coding.LoadConfigFromPersistence(nil); ecConfig != nil {
policy.TaskPolicies["erasure_coding"] = &worker_pb.TaskPolicy{
Enabled: ecConfig.Enabled,
MaxConcurrent: int32(ecConfig.MaxConcurrent),
RepeatIntervalSeconds: int32(ecConfig.ScanIntervalSeconds),
CheckIntervalSeconds: int32(ecConfig.ScanIntervalSeconds),
TaskConfig: &worker_pb.TaskPolicy_ErasureCodingConfig{
ErasureCodingConfig: &worker_pb.ErasureCodingTaskConfig{
FullnessRatio: float64(ecConfig.FullnessRatio),
QuietForSeconds: int32(ecConfig.QuietForSeconds),
MinVolumeSizeMb: int32(ecConfig.MinSizeMB),
CollectionFilter: ecConfig.CollectionFilter,
},
},
}
}
// Load balance task configuration
if balanceConfig := balance.LoadConfigFromPersistence(nil); balanceConfig != nil {
policy.TaskPolicies["balance"] = &worker_pb.TaskPolicy{
Enabled: balanceConfig.Enabled,
MaxConcurrent: int32(balanceConfig.MaxConcurrent),
RepeatIntervalSeconds: int32(balanceConfig.ScanIntervalSeconds),
CheckIntervalSeconds: int32(balanceConfig.ScanIntervalSeconds),
TaskConfig: &worker_pb.TaskPolicy_BalanceConfig{
BalanceConfig: &worker_pb.BalanceTaskConfig{
ImbalanceThreshold: float64(balanceConfig.ImbalanceThreshold),
MinServerCount: int32(balanceConfig.MinServerCount),
},
},
}
}
glog.V(1).Infof("Built maintenance policy from separate task configs - %d task policies loaded", len(policy.TaskPolicies))
return policy
}
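A hypothetical caller, shown only to illustrate the shape of the returned policy; "erasure_coding" is one of the keys populated above.

func logECPolicySketch() {
    policy := buildPolicyFromTaskConfigs()
    if tp := policy.TaskPolicies["erasure_coding"]; tp != nil && tp.Enabled {
        glog.V(1).Infof("EC maintenance enabled: max %d concurrent, repeat every %ds",
            tp.MaxConcurrent, tp.RepeatIntervalSeconds)
    }
}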
// MaintenanceManager coordinates the maintenance system
type MaintenanceManager struct {
config *MaintenanceConfig
@@ -18,11 +86,12 @@ type MaintenanceManager struct {
running bool
stopChan chan struct{}
// Error handling and backoff
errorCount int
lastError error
lastErrorTime time.Time
backoffDelay time.Duration
mutex sync.RWMutex
scanInProgress bool
}
// NewMaintenanceManager creates a new maintenance manager
@@ -31,8 +100,15 @@ func NewMaintenanceManager(adminClient AdminClient, config *MaintenanceConfig) *
config = DefaultMaintenanceConfig()
}
// Use the policy from the config (which is populated from separate task files in LoadMaintenanceConfig)
policy := config.Policy
if policy == nil {
// Fallback: build policy from separate task configuration files if not already populated
policy = buildPolicyFromTaskConfigs()
}
queue := NewMaintenanceQueue(policy)
scanner := NewMaintenanceScanner(adminClient, policy, queue)
return &MaintenanceManager{
config: config,
@@ -125,23 +201,14 @@ func (mm *MaintenanceManager) scanLoop() {
return
case <-ticker.C:
glog.V(1).Infof("Performing maintenance scan every %v", scanInterval)
// Use the same synchronization as TriggerScan to prevent concurrent scans
if err := mm.triggerScanInternal(false); err != nil {
glog.V(1).Infof("Scheduled scan skipped: %v", err)
}
// Adjust ticker interval based on error state (read error state safely)
currentInterval := mm.getScanInterval(scanInterval)
// Reset ticker with new interval if needed
if currentInterval != scanInterval {
@@ -152,6 +219,26 @@ func (mm *MaintenanceManager) scanLoop() {
}
}
// getScanInterval safely reads the current scan interval with error backoff
func (mm *MaintenanceManager) getScanInterval(baseInterval time.Duration) time.Duration {
mm.mutex.RLock()
defer mm.mutex.RUnlock()
if mm.errorCount > 0 {
// Use backoff delay when there are errors
currentInterval := mm.backoffDelay
if currentInterval > baseInterval {
// Cap the backoff so it never exceeds 10x the configured interval
maxInterval := baseInterval * 10
if currentInterval > maxInterval {
currentInterval = maxInterval
}
}
return currentInterval
}
return baseInterval
}
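A sketch test for the capping behavior above, assuming in-package access to the unexported fields; the zero value of the embedded sync.RWMutex is ready to use, so a bare struct literal suffices.

func TestGetScanInterval_BackoffCap(t *testing.T) {
    mm := &MaintenanceManager{errorCount: 3, backoffDelay: 2 * time.Hour}
    base := 10 * time.Minute
    // backoffDelay (2h) exceeds base*10 (100m), so it should be capped
    if got := mm.getScanInterval(base); got != base*10 {
        t.Errorf("expected backoff capped at %v, got %v", base*10, got)
    }
    mm.errorCount = 0
    if got := mm.getScanInterval(base); got != base {
        t.Errorf("expected base interval %v with no errors, got %v", base, got)
    }
}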
// cleanupLoop periodically cleans up old tasks and stale workers
func (mm *MaintenanceManager) cleanupLoop() {
cleanupInterval := time.Duration(mm.config.CleanupIntervalSeconds) * time.Second
@@ -170,25 +257,54 @@ func (mm *MaintenanceManager) cleanupLoop() {
// performScan executes a maintenance scan with error handling and backoff
func (mm *MaintenanceManager) performScan() {
defer func() {
// Always reset scan in progress flag when done
mm.mutex.Lock()
mm.scanInProgress = false
mm.mutex.Unlock()
}()
glog.V(2).Infof("Starting maintenance scan")
glog.Infof("Starting maintenance scan...")
results, err := mm.scanner.ScanForMaintenanceTasks()
if err != nil {
// Handle scan error
mm.mutex.Lock()
mm.handleScanError(err)
mm.mutex.Unlock()
glog.Warningf("Maintenance scan failed: %v", err)
return
}
// Scan succeeded - update state and process results
mm.handleScanSuccess(results)
}
// handleScanSuccess processes successful scan results with proper lock management
func (mm *MaintenanceManager) handleScanSuccess(results []*TaskDetectionResult) {
// Update manager state first
mm.mutex.Lock()
mm.resetErrorTracking()
taskCount := len(results)
mm.mutex.Unlock()
if taskCount > 0 {
// Count tasks by type for logging (outside of lock)
taskCounts := make(map[MaintenanceTaskType]int)
for _, result := range results {
taskCounts[result.TaskType]++
}
// Add tasks to queue (no manager lock held)
mm.queue.AddTasksFromResults(results)
glog.V(1).Infof("Maintenance scan completed: added %d tasks", len(results))
// Log detailed scan results
glog.Infof("Maintenance scan completed: found %d tasks", taskCount)
for taskType, count := range taskCounts {
glog.Infof(" - %s: %d tasks", taskType, count)
}
} else {
glog.V(2).Infof("Maintenance scan completed: no tasks needed")
glog.Infof("Maintenance scan completed: no maintenance tasks needed")
}
}
@@ -272,8 +388,19 @@ func (mm *MaintenanceManager) performCleanup() {
removedTasks := mm.queue.CleanupOldTasks(taskRetention)
removedWorkers := mm.queue.RemoveStaleWorkers(workerTimeout)
// Clean up stale pending operations (operations running for more than 4 hours)
staleOperationTimeout := 4 * time.Hour
removedOperations := 0
if mm.scanner != nil && mm.scanner.integration != nil {
pendingOps := mm.scanner.integration.GetPendingOperations()
if pendingOps != nil {
removedOperations = pendingOps.CleanupStaleOperations(staleOperationTimeout)
}
}
if removedTasks > 0 || removedWorkers > 0 || removedOperations > 0 {
glog.V(1).Infof("Cleanup completed: removed %d old tasks, %d stale workers, and %d stale operations",
removedTasks, removedWorkers, removedOperations)
}
}
@@ -311,6 +438,21 @@ func (mm *MaintenanceManager) GetStats() *MaintenanceStats {
return stats
}
// ReloadTaskConfigurations reloads task configurations from the current policy
func (mm *MaintenanceManager) ReloadTaskConfigurations() error {
mm.mutex.Lock()
defer mm.mutex.Unlock()
// Trigger configuration reload in the integration layer
if mm.scanner != nil && mm.scanner.integration != nil {
mm.scanner.integration.ConfigureTasksFromPolicy()
glog.V(1).Infof("Task configurations reloaded from policy")
return nil
}
return fmt.Errorf("integration not available for configuration reload")
}
// GetErrorState returns the current error state for monitoring
func (mm *MaintenanceManager) GetErrorState() (errorCount int, lastError error, backoffDelay time.Duration) {
mm.mutex.RLock()
@@ -330,10 +472,29 @@ func (mm *MaintenanceManager) GetWorkers() []*MaintenanceWorker {
// TriggerScan manually triggers a maintenance scan
func (mm *MaintenanceManager) TriggerScan() error {
return mm.triggerScanInternal(true)
}
// triggerScanInternal handles both manual and automatic scan triggers
func (mm *MaintenanceManager) triggerScanInternal(isManual bool) error {
if !mm.running {
return fmt.Errorf("maintenance manager is not running")
}
// Prevent multiple concurrent scans
mm.mutex.Lock()
if mm.scanInProgress {
mm.mutex.Unlock()
if isManual {
glog.V(1).Infof("Manual scan already in progress, ignoring trigger request")
} else {
glog.V(2).Infof("Automatic scan already in progress, ignoring scheduled scan")
}
return fmt.Errorf("scan already in progress")
}
mm.scanInProgress = true
mm.mutex.Unlock()
go mm.performScan()
return nil
}

View File

@@ -1,10 +1,13 @@
package maintenance
import (
"crypto/rand"
"fmt"
"sort"
"time"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
)
// NewMaintenanceQueue creates a new maintenance queue
@@ -24,11 +27,18 @@ func (mq *MaintenanceQueue) SetIntegration(integration *MaintenanceIntegration)
glog.V(1).Infof("Maintenance queue configured with integration")
}
// AddTask adds a new maintenance task to the queue with deduplication
func (mq *MaintenanceQueue) AddTask(task *MaintenanceTask) {
mq.mutex.Lock()
defer mq.mutex.Unlock()
// Check for duplicate tasks (same type + volume + not completed)
if mq.hasDuplicateTask(task) {
glog.V(1).Infof("Task skipped (duplicate): %s for volume %d on %s (already queued or running)",
task.Type, task.VolumeID, task.Server)
return
}
task.ID = generateTaskID()
task.Status = TaskStatusPending
task.CreatedAt = time.Now()
@@ -45,19 +55,48 @@ func (mq *MaintenanceQueue) AddTask(task *MaintenanceTask) {
return mq.pendingTasks[i].ScheduledAt.Before(mq.pendingTasks[j].ScheduledAt)
})
glog.V(2).Infof("Added maintenance task %s: %s for volume %d", task.ID, task.Type, task.VolumeID)
scheduleInfo := ""
if !task.ScheduledAt.IsZero() && time.Until(task.ScheduledAt) > time.Minute {
scheduleInfo = fmt.Sprintf(", scheduled for %v", task.ScheduledAt.Format("15:04:05"))
}
glog.Infof("Task queued: %s (%s) volume %d on %s, priority %d%s, reason: %s",
task.ID, task.Type, task.VolumeID, task.Server, task.Priority, scheduleInfo, task.Reason)
}
// hasDuplicateTask checks if a similar task already exists (same type, volume, and not completed)
func (mq *MaintenanceQueue) hasDuplicateTask(newTask *MaintenanceTask) bool {
for _, existingTask := range mq.tasks {
if existingTask.Type == newTask.Type &&
existingTask.VolumeID == newTask.VolumeID &&
existingTask.Server == newTask.Server &&
(existingTask.Status == TaskStatusPending ||
existingTask.Status == TaskStatusAssigned ||
existingTask.Status == TaskStatusInProgress) {
return true
}
}
return false
}
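A minimal in-package test sketch for the deduplication check; only the tasks map is needed, since hasDuplicateTask reads nothing else.

func TestHasDuplicateTask(t *testing.T) {
    mq := &MaintenanceQueue{tasks: map[string]*MaintenanceTask{
        "t1": {Type: MaintenanceTaskType("vacuum"), VolumeID: 7, Server: "srv1", Status: TaskStatusPending},
    }}
    dup := &MaintenanceTask{Type: MaintenanceTaskType("vacuum"), VolumeID: 7, Server: "srv1"}
    if !mq.hasDuplicateTask(dup) {
        t.Error("expected pending task to be detected as duplicate")
    }
    other := &MaintenanceTask{Type: MaintenanceTaskType("vacuum"), VolumeID: 8, Server: "srv1"}
    if mq.hasDuplicateTask(other) {
        t.Error("different volume should not be a duplicate")
    }
}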
// AddTasksFromResults converts detection results to tasks and adds them to the queue
func (mq *MaintenanceQueue) AddTasksFromResults(results []*TaskDetectionResult) {
for _, result := range results {
// Validate that task has proper typed parameters
if result.TypedParams == nil {
glog.Warningf("Rejecting invalid task: %s for volume %d on %s - no typed parameters (insufficient destinations or planning failed)",
result.TaskType, result.VolumeID, result.Server)
continue
}
task := &MaintenanceTask{
Type: result.TaskType,
Priority: result.Priority,
VolumeID: result.VolumeID,
Server: result.Server,
Collection: result.Collection,
// Copy typed protobuf parameters
TypedParams: result.TypedParams,
Reason: result.Reason,
ScheduledAt: result.ScheduleAt,
}
@@ -67,57 +106,92 @@ func (mq *MaintenanceQueue) AddTasksFromResults(results []*TaskDetectionResult)
// GetNextTask returns the next available task for a worker
func (mq *MaintenanceQueue) GetNextTask(workerID string, capabilities []MaintenanceTaskType) *MaintenanceTask {
// Use read lock for initial checks and search
mq.mutex.RLock()
worker, exists := mq.workers[workerID]
if !exists {
mq.mutex.RUnlock()
glog.V(2).Infof("Task assignment failed for worker %s: worker not registered", workerID)
return nil
}
// Check if worker has capacity
if worker.CurrentLoad >= worker.MaxConcurrent {
mq.mutex.RUnlock()
glog.V(2).Infof("Task assignment failed for worker %s: at capacity (%d/%d)", workerID, worker.CurrentLoad, worker.MaxConcurrent)
return nil
}
now := time.Now()
var selectedTask *MaintenanceTask
var selectedIndex int = -1
// Find the next suitable task (using read lock)
for i, task := range mq.pendingTasks {
// Check if it's time to execute the task
if task.ScheduledAt.After(now) {
glog.V(3).Infof("Task %s skipped for worker %s: scheduled for future (%v)", task.ID, workerID, task.ScheduledAt)
continue
}
// Check if worker can handle this task type
if !mq.workerCanHandle(task.Type, capabilities) {
glog.V(3).Infof("Task %s (%s) skipped for worker %s: capability mismatch (worker has: %v)", task.ID, task.Type, workerID, capabilities)
continue
}
// Check if this task type needs a cooldown period
if !mq.canScheduleTaskNow(task) {
glog.V(3).Infof("Task %s (%s) skipped for worker %s: scheduling constraints not met", task.ID, task.Type, workerID)
continue
}
// Found a suitable task
selectedTask = task
selectedIndex = i
break
}
// Release read lock
mq.mutex.RUnlock()
// If no task found, return nil
if selectedTask == nil {
glog.V(2).Infof("No suitable tasks available for worker %s (checked %d pending tasks)", workerID, len(mq.pendingTasks))
return nil
}
// Now acquire write lock to actually assign the task
mq.mutex.Lock()
defer mq.mutex.Unlock()
// Re-check that the task is still available (it might have been assigned to another worker)
if selectedIndex >= len(mq.pendingTasks) || mq.pendingTasks[selectedIndex].ID != selectedTask.ID {
glog.V(2).Infof("Task %s no longer available for worker %s: assigned to another worker", selectedTask.ID, workerID)
return nil
}
// Assign the task
selectedTask.Status = TaskStatusAssigned
selectedTask.WorkerID = workerID
selectedTask.StartedAt = &now
// Remove from pending tasks
mq.pendingTasks = append(mq.pendingTasks[:selectedIndex], mq.pendingTasks[selectedIndex+1:]...)
// Update worker load
if worker, exists := mq.workers[workerID]; exists {
worker.CurrentLoad++
}
// Track pending operation
mq.trackPendingOperation(selectedTask)
glog.Infof("Task assigned: %s (%s) → worker %s (volume %d, server %s)",
selectedTask.ID, selectedTask.Type, workerID, selectedTask.VolumeID, selectedTask.Server)
return selectedTask
}
// CompleteTask marks a task as completed
@@ -127,12 +201,19 @@ func (mq *MaintenanceQueue) CompleteTask(taskID string, error string) {
task, exists := mq.tasks[taskID]
if !exists {
glog.Warningf("Attempted to complete non-existent task: %s", taskID)
return
}
completedTime := time.Now()
task.CompletedAt = &completedTime
// Calculate task duration
var duration time.Duration
if task.StartedAt != nil {
duration = completedTime.Sub(*task.StartedAt)
}
if error != "" {
task.Status = TaskStatusFailed
task.Error = error
@@ -148,14 +229,17 @@ func (mq *MaintenanceQueue) CompleteTask(taskID string, error string) {
task.ScheduledAt = time.Now().Add(15 * time.Minute) // Retry delay
mq.pendingTasks = append(mq.pendingTasks, task)
glog.V(2).Infof("Retrying task %s (attempt %d/%d)", taskID, task.RetryCount, task.MaxRetries)
glog.Warningf("Task failed, scheduling retry: %s (%s) attempt %d/%d, worker %s, duration %v, error: %s",
taskID, task.Type, task.RetryCount, task.MaxRetries, task.WorkerID, duration, error)
} else {
glog.Errorf("Task %s failed permanently after %d retries: %s", taskID, task.MaxRetries, error)
glog.Errorf("Task failed permanently: %s (%s) worker %s, duration %v, after %d retries: %s",
taskID, task.Type, task.WorkerID, duration, task.MaxRetries, error)
}
} else {
task.Status = TaskStatusCompleted
task.Progress = 100
glog.V(2).Infof("Task %s completed successfully", taskID)
glog.Infof("Task completed: %s (%s) worker %s, duration %v, volume %d",
taskID, task.Type, task.WorkerID, duration, task.VolumeID)
}
// Update worker
@@ -168,6 +252,11 @@ func (mq *MaintenanceQueue) CompleteTask(taskID string, error string) {
}
}
}
// Remove pending operation (unless it's being retried)
if task.Status != TaskStatusPending {
mq.removePendingOperation(taskID)
}
}
// UpdateTaskProgress updates the progress of a running task
@@ -176,8 +265,26 @@ func (mq *MaintenanceQueue) UpdateTaskProgress(taskID string, progress float64)
defer mq.mutex.RUnlock()
if task, exists := mq.tasks[taskID]; exists {
oldProgress := task.Progress
task.Progress = progress
task.Status = TaskStatusInProgress
// Update pending operation status
mq.updatePendingOperationStatus(taskID, "in_progress")
// Log progress at significant milestones or changes
if progress == 0 {
glog.V(1).Infof("Task started: %s (%s) worker %s, volume %d",
taskID, task.Type, task.WorkerID, task.VolumeID)
} else if progress >= 100 {
glog.V(1).Infof("Task progress: %s (%s) worker %s, %.1f%% complete",
taskID, task.Type, task.WorkerID, progress)
} else if progress-oldProgress >= 25 { // Log every 25% increment
glog.V(1).Infof("Task progress: %s (%s) worker %s, %.1f%% complete",
taskID, task.Type, task.WorkerID, progress)
}
} else {
glog.V(2).Infof("Progress update for unknown task: %s (%.1f%%)", taskID, progress)
}
}
@@ -186,12 +293,25 @@ func (mq *MaintenanceQueue) RegisterWorker(worker *MaintenanceWorker) {
mq.mutex.Lock()
defer mq.mutex.Unlock()
isNewWorker := true
if existingWorker, exists := mq.workers[worker.ID]; exists {
isNewWorker = false
glog.Infof("Worker reconnected: %s at %s (capabilities: %v, max concurrent: %d)",
worker.ID, worker.Address, worker.Capabilities, worker.MaxConcurrent)
// Preserve current load when reconnecting
worker.CurrentLoad = existingWorker.CurrentLoad
} else {
glog.Infof("Worker registered: %s at %s (capabilities: %v, max concurrent: %d)",
worker.ID, worker.Address, worker.Capabilities, worker.MaxConcurrent)
}
worker.LastHeartbeat = time.Now()
worker.Status = "active"
if isNewWorker {
worker.CurrentLoad = 0
}
mq.workers[worker.ID] = worker
glog.V(1).Infof("Registered maintenance worker %s at %s", worker.ID, worker.Address)
}
// UpdateWorkerHeartbeat updates worker heartbeat
@@ -200,7 +320,15 @@ func (mq *MaintenanceQueue) UpdateWorkerHeartbeat(workerID string) {
defer mq.mutex.Unlock()
if worker, exists := mq.workers[workerID]; exists {
lastSeen := worker.LastHeartbeat
worker.LastHeartbeat = time.Now()
// Log if worker was offline for a while
if time.Since(lastSeen) > 2*time.Minute {
glog.Infof("Worker %s heartbeat resumed after %v", workerID, time.Since(lastSeen))
}
} else {
glog.V(2).Infof("Heartbeat from unknown worker: %s", workerID)
}
}
@@ -255,7 +383,7 @@ func (mq *MaintenanceQueue) getRepeatPreventionInterval(taskType MaintenanceTask
// Fallback to policy configuration if no scheduler available or scheduler doesn't provide default
if mq.policy != nil {
repeatIntervalSeconds := GetRepeatInterval(mq.policy, taskType)
if repeatIntervalSeconds > 0 {
interval := time.Duration(repeatIntervalSeconds) * time.Second
glog.V(3).Infof("Using policy configuration repeat interval for %s: %v", taskType, interval)
@@ -311,10 +439,23 @@ func (mq *MaintenanceQueue) GetWorkers() []*MaintenanceWorker {
func generateTaskID() string {
const charset = "abcdefghijklmnopqrstuvwxyz0123456789"
b := make([]byte, 8)
randBytes := make([]byte, 8)
// Generate random bytes
if _, err := rand.Read(randBytes); err != nil {
// Fallback to timestamp-based ID if crypto/rand fails
timestamp := time.Now().UnixNano()
return fmt.Sprintf("task-%d", timestamp)
}
// Convert random bytes to charset
for i := range b {
b[i] = charset[int(randBytes[i])%len(charset)]
}
// Add timestamp suffix to ensure uniqueness
timestamp := time.Now().Unix() % 10000 // last 4 digits of timestamp
return fmt.Sprintf("%s-%04d", string(b), timestamp)
}
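A sketch test pinning down the ID shape produced above (ignoring the rare crypto/rand failure path, which yields "task-<nanos>" instead):

func TestGenerateTaskID_Shape(t *testing.T) {
    id := generateTaskID()
    // eight charset characters, a dash, then a four-digit timestamp suffix
    if ok, _ := regexp.MatchString(`^[a-z0-9]{8}-\d{4}$`, id); !ok {
        t.Errorf("unexpected task ID format: %q", id)
    }
}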
// CleanupOldTasks removes old completed and failed tasks
@@ -427,19 +568,31 @@ func (mq *MaintenanceQueue) workerCanHandle(taskType MaintenanceTaskType, capabi
// canScheduleTaskNow determines if a task can be scheduled using task schedulers or fallback logic
func (mq *MaintenanceQueue) canScheduleTaskNow(task *MaintenanceTask) bool {
glog.V(2).Infof("Checking if task %s (type: %s) can be scheduled", task.ID, task.Type)
// TEMPORARY FIX: Skip integration task scheduler which is being overly restrictive
// Use fallback logic directly for now
glog.V(2).Infof("Using fallback logic for task scheduling")
canExecute := mq.canExecuteTaskType(task.Type)
glog.V(2).Infof("Fallback decision for task %s: %v", task.ID, canExecute)
return canExecute
// NOTE: Original integration code disabled temporarily
/*
if mq.integration != nil {
glog.Infof("DEBUG canScheduleTaskNow: Using integration task scheduler")
// Get all running tasks and available workers
runningTasks := mq.getRunningTasks()
availableWorkers := mq.getAvailableWorkers()
glog.Infof("DEBUG canScheduleTaskNow: Running tasks: %d, Available workers: %d", len(runningTasks), len(availableWorkers))
canSchedule := mq.integration.CanScheduleWithTaskSchedulers(task, runningTasks, availableWorkers)
glog.Infof("DEBUG canScheduleTaskNow: Task scheduler decision for task %s (%s): %v", task.ID, task.Type, canSchedule)
return canSchedule
}
// Fallback to hardcoded logic
return mq.canExecuteTaskType(task.Type)
*/
}
// canExecuteTaskType checks if we can execute more tasks of this type (concurrency limits) - fallback logic
@@ -465,7 +618,7 @@ func (mq *MaintenanceQueue) getMaxConcurrentForTaskType(taskType MaintenanceTask
// Fallback to policy configuration if no scheduler available or scheduler doesn't provide default
if mq.policy != nil {
maxConcurrent := GetMaxConcurrent(mq.policy, taskType)
if maxConcurrent > 0 {
glog.V(3).Infof("Using policy configuration max concurrent for %s: %d", taskType, maxConcurrent)
return maxConcurrent
@@ -498,3 +651,108 @@ func (mq *MaintenanceQueue) getAvailableWorkers() []*MaintenanceWorker {
}
return availableWorkers
}
// trackPendingOperation adds a task to the pending operations tracker
func (mq *MaintenanceQueue) trackPendingOperation(task *MaintenanceTask) {
if mq.integration == nil {
return
}
pendingOps := mq.integration.GetPendingOperations()
if pendingOps == nil {
return
}
// Skip tracking for tasks without proper typed parameters
if task.TypedParams == nil {
glog.V(2).Infof("Skipping pending operation tracking for task %s - no typed parameters", task.ID)
return
}
// Map maintenance task type to pending operation type
var opType PendingOperationType
switch task.Type {
case MaintenanceTaskType("balance"):
opType = OpTypeVolumeBalance
case MaintenanceTaskType("erasure_coding"):
opType = OpTypeErasureCoding
case MaintenanceTaskType("vacuum"):
opType = OpTypeVacuum
case MaintenanceTaskType("replication"):
opType = OpTypeReplication
default:
opType = OpTypeVolumeMove
}
// Determine destination node and estimated size from typed parameters
destNode := ""
estimatedSize := uint64(1024 * 1024 * 1024) // Default 1GB estimate
switch params := task.TypedParams.TaskParams.(type) {
case *worker_pb.TaskParams_ErasureCodingParams:
if params.ErasureCodingParams != nil {
if len(params.ErasureCodingParams.Destinations) > 0 {
destNode = params.ErasureCodingParams.Destinations[0].Node
}
if params.ErasureCodingParams.EstimatedShardSize > 0 {
estimatedSize = params.ErasureCodingParams.EstimatedShardSize
}
}
case *worker_pb.TaskParams_BalanceParams:
if params.BalanceParams != nil {
destNode = params.BalanceParams.DestNode
if params.BalanceParams.EstimatedSize > 0 {
estimatedSize = params.BalanceParams.EstimatedSize
}
}
case *worker_pb.TaskParams_ReplicationParams:
if params.ReplicationParams != nil {
destNode = params.ReplicationParams.DestNode
if params.ReplicationParams.EstimatedSize > 0 {
estimatedSize = params.ReplicationParams.EstimatedSize
}
}
}
operation := &PendingOperation{
VolumeID: task.VolumeID,
OperationType: opType,
SourceNode: task.Server,
DestNode: destNode,
TaskID: task.ID,
StartTime: time.Now(),
EstimatedSize: estimatedSize,
Collection: task.Collection,
Status: "assigned",
}
pendingOps.AddOperation(operation)
}
// removePendingOperation removes a task from the pending operations tracker
func (mq *MaintenanceQueue) removePendingOperation(taskID string) {
if mq.integration == nil {
return
}
pendingOps := mq.integration.GetPendingOperations()
if pendingOps == nil {
return
}
pendingOps.RemoveOperation(taskID)
}
// updatePendingOperationStatus updates the status of a pending operation
func (mq *MaintenanceQueue) updatePendingOperationStatus(taskID string, status string) {
if mq.integration == nil {
return
}
pendingOps := mq.integration.GetPendingOperations()
if pendingOps == nil {
return
}
pendingOps.UpdateOperationStatus(taskID, status)
}

View File

@@ -0,0 +1,353 @@
package maintenance
import (
"testing"
"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
)
// Test suite for canScheduleTaskNow() function and related scheduling logic
//
// This test suite ensures that:
// 1. The fallback scheduling logic works correctly when no integration is present
// 2. Task concurrency limits are properly enforced per task type
// 3. Different task types don't interfere with each other's concurrency limits
// 4. Custom policies with higher concurrency limits work correctly
// 5. Edge cases (nil tasks, empty task types) are handled gracefully
// 6. Helper functions (GetRunningTaskCount, canExecuteTaskType, etc.) work correctly
//
// Background: The canScheduleTaskNow() function is critical for task assignment.
// It was previously failing due to an overly restrictive integration scheduler,
// so we implemented a temporary fix that bypasses the integration and uses
// fallback logic based on simple concurrency limits per task type.
func TestCanScheduleTaskNow_FallbackLogic(t *testing.T) {
// Test the current implementation which uses fallback logic
mq := &MaintenanceQueue{
tasks: make(map[string]*MaintenanceTask),
pendingTasks: []*MaintenanceTask{},
workers: make(map[string]*MaintenanceWorker),
policy: nil, // No policy for default behavior
integration: nil, // No integration to force fallback
}
task := &MaintenanceTask{
ID: "test-task-1",
Type: MaintenanceTaskType("erasure_coding"),
Status: TaskStatusPending,
}
// Should return true with fallback logic (no running tasks, default max concurrent = 1)
result := mq.canScheduleTaskNow(task)
if !result {
t.Errorf("Expected canScheduleTaskNow to return true with fallback logic, got false")
}
}
func TestCanScheduleTaskNow_FallbackWithRunningTasks(t *testing.T) {
// Test fallback logic when there are already running tasks
mq := &MaintenanceQueue{
tasks: map[string]*MaintenanceTask{
"running-task": {
ID: "running-task",
Type: MaintenanceTaskType("erasure_coding"),
Status: TaskStatusInProgress,
},
},
pendingTasks: []*MaintenanceTask{},
workers: make(map[string]*MaintenanceWorker),
policy: nil,
integration: nil,
}
task := &MaintenanceTask{
ID: "test-task-2",
Type: MaintenanceTaskType("erasure_coding"),
Status: TaskStatusPending,
}
// Should return false because max concurrent is 1 and we have 1 running task
result := mq.canScheduleTaskNow(task)
if result {
t.Errorf("Expected canScheduleTaskNow to return false when at capacity, got true")
}
}
func TestCanScheduleTaskNow_DifferentTaskTypes(t *testing.T) {
// Test that different task types don't interfere with each other
mq := &MaintenanceQueue{
tasks: map[string]*MaintenanceTask{
"running-ec-task": {
ID: "running-ec-task",
Type: MaintenanceTaskType("erasure_coding"),
Status: TaskStatusInProgress,
},
},
pendingTasks: []*MaintenanceTask{},
workers: make(map[string]*MaintenanceWorker),
policy: nil,
integration: nil,
}
// Test vacuum task when EC task is running
vacuumTask := &MaintenanceTask{
ID: "vacuum-task",
Type: MaintenanceTaskType("vacuum"),
Status: TaskStatusPending,
}
// Should return true because vacuum and erasure_coding are different task types
result := mq.canScheduleTaskNow(vacuumTask)
if !result {
t.Errorf("Expected canScheduleTaskNow to return true for different task type, got false")
}
// Test another EC task when one is already running
ecTask := &MaintenanceTask{
ID: "ec-task",
Type: MaintenanceTaskType("erasure_coding"),
Status: TaskStatusPending,
}
// Should return false because max concurrent for EC is 1 and we have 1 running
result = mq.canScheduleTaskNow(ecTask)
if result {
t.Errorf("Expected canScheduleTaskNow to return false for same task type at capacity, got true")
}
}
func TestCanScheduleTaskNow_WithIntegration(t *testing.T) {
// Test with a real MaintenanceIntegration (will use fallback logic in current implementation)
policy := &MaintenancePolicy{
TaskPolicies: make(map[string]*worker_pb.TaskPolicy),
GlobalMaxConcurrent: 10,
DefaultRepeatIntervalSeconds: 24 * 60 * 60, // 24 hours in seconds
DefaultCheckIntervalSeconds: 60 * 60, // 1 hour in seconds
}
mq := NewMaintenanceQueue(policy)
// Create a basic integration (this would normally be more complex)
integration := NewMaintenanceIntegration(mq, policy)
mq.SetIntegration(integration)
task := &MaintenanceTask{
ID: "test-task-3",
Type: MaintenanceTaskType("erasure_coding"),
Status: TaskStatusPending,
}
// With our current implementation (fallback logic), this should return true
result := mq.canScheduleTaskNow(task)
if !result {
t.Errorf("Expected canScheduleTaskNow to return true with fallback logic, got false")
}
}
func TestGetRunningTaskCount(t *testing.T) {
// Test the helper function used by fallback logic
mq := &MaintenanceQueue{
tasks: map[string]*MaintenanceTask{
"task1": {
ID: "task1",
Type: MaintenanceTaskType("erasure_coding"),
Status: TaskStatusInProgress,
},
"task2": {
ID: "task2",
Type: MaintenanceTaskType("erasure_coding"),
Status: TaskStatusAssigned,
},
"task3": {
ID: "task3",
Type: MaintenanceTaskType("vacuum"),
Status: TaskStatusInProgress,
},
"task4": {
ID: "task4",
Type: MaintenanceTaskType("erasure_coding"),
Status: TaskStatusCompleted,
},
},
pendingTasks: []*MaintenanceTask{},
workers: make(map[string]*MaintenanceWorker),
}
// Should count 2 running EC tasks (in_progress + assigned)
ecCount := mq.GetRunningTaskCount(MaintenanceTaskType("erasure_coding"))
if ecCount != 2 {
t.Errorf("Expected 2 running EC tasks, got %d", ecCount)
}
// Should count 1 running vacuum task
vacuumCount := mq.GetRunningTaskCount(MaintenanceTaskType("vacuum"))
if vacuumCount != 1 {
t.Errorf("Expected 1 running vacuum task, got %d", vacuumCount)
}
// Should count 0 running balance tasks
balanceCount := mq.GetRunningTaskCount(MaintenanceTaskType("balance"))
if balanceCount != 0 {
t.Errorf("Expected 0 running balance tasks, got %d", balanceCount)
}
}
func TestCanExecuteTaskType(t *testing.T) {
// Test the fallback logic helper function
mq := &MaintenanceQueue{
tasks: map[string]*MaintenanceTask{
"running-task": {
ID: "running-task",
Type: MaintenanceTaskType("erasure_coding"),
Status: TaskStatusInProgress,
},
},
pendingTasks: []*MaintenanceTask{},
workers: make(map[string]*MaintenanceWorker),
policy: nil, // Will use default max concurrent = 1
integration: nil,
}
// Should return false for EC (1 running, max = 1)
result := mq.canExecuteTaskType(MaintenanceTaskType("erasure_coding"))
if result {
t.Errorf("Expected canExecuteTaskType to return false for EC at capacity, got true")
}
// Should return true for vacuum (0 running, max = 1)
result = mq.canExecuteTaskType(MaintenanceTaskType("vacuum"))
if !result {
t.Errorf("Expected canExecuteTaskType to return true for vacuum, got false")
}
}
func TestGetMaxConcurrentForTaskType_DefaultBehavior(t *testing.T) {
// Test the default behavior when no policy or integration is set
mq := &MaintenanceQueue{
tasks: make(map[string]*MaintenanceTask),
pendingTasks: []*MaintenanceTask{},
workers: make(map[string]*MaintenanceWorker),
policy: nil,
integration: nil,
}
// Should return default value of 1
maxConcurrent := mq.getMaxConcurrentForTaskType(MaintenanceTaskType("erasure_coding"))
if maxConcurrent != 1 {
t.Errorf("Expected default max concurrent to be 1, got %d", maxConcurrent)
}
maxConcurrent = mq.getMaxConcurrentForTaskType(MaintenanceTaskType("vacuum"))
if maxConcurrent != 1 {
t.Errorf("Expected default max concurrent to be 1, got %d", maxConcurrent)
}
}
// Test edge cases and error conditions
func TestCanScheduleTaskNow_NilTask(t *testing.T) {
mq := &MaintenanceQueue{
tasks: make(map[string]*MaintenanceTask),
pendingTasks: []*MaintenanceTask{},
workers: make(map[string]*MaintenanceWorker),
policy: nil,
integration: nil,
}
// This should panic with a nil task, so we expect and catch the panic
defer func() {
if r := recover(); r == nil {
t.Errorf("Expected canScheduleTaskNow to panic with nil task, but it didn't")
}
}()
// This should panic
mq.canScheduleTaskNow(nil)
}
func TestCanScheduleTaskNow_EmptyTaskType(t *testing.T) {
mq := &MaintenanceQueue{
tasks: make(map[string]*MaintenanceTask),
pendingTasks: []*MaintenanceTask{},
workers: make(map[string]*MaintenanceWorker),
policy: nil,
integration: nil,
}
task := &MaintenanceTask{
ID: "empty-type-task",
Type: MaintenanceTaskType(""), // Empty task type
Status: TaskStatusPending,
}
// Should handle empty task type gracefully
result := mq.canScheduleTaskNow(task)
if !result {
t.Errorf("Expected canScheduleTaskNow to handle empty task type, got false")
}
}
func TestCanScheduleTaskNow_WithPolicy(t *testing.T) {
// Test with a policy that allows higher concurrency
policy := &MaintenancePolicy{
TaskPolicies: map[string]*worker_pb.TaskPolicy{
string(MaintenanceTaskType("erasure_coding")): {
Enabled: true,
MaxConcurrent: 3,
RepeatIntervalSeconds: 60 * 60, // 1 hour
CheckIntervalSeconds: 60 * 60, // 1 hour
},
string(MaintenanceTaskType("vacuum")): {
Enabled: true,
MaxConcurrent: 2,
RepeatIntervalSeconds: 60 * 60, // 1 hour
CheckIntervalSeconds: 60 * 60, // 1 hour
},
},
GlobalMaxConcurrent: 10,
DefaultRepeatIntervalSeconds: 24 * 60 * 60, // 24 hours in seconds
DefaultCheckIntervalSeconds: 60 * 60, // 1 hour in seconds
}
mq := &MaintenanceQueue{
tasks: map[string]*MaintenanceTask{
"running-task-1": {
ID: "running-task-1",
Type: MaintenanceTaskType("erasure_coding"),
Status: TaskStatusInProgress,
},
"running-task-2": {
ID: "running-task-2",
Type: MaintenanceTaskType("erasure_coding"),
Status: TaskStatusAssigned,
},
},
pendingTasks: []*MaintenanceTask{},
workers: make(map[string]*MaintenanceWorker),
policy: policy,
integration: nil,
}
task := &MaintenanceTask{
ID: "test-task-policy",
Type: MaintenanceTaskType("erasure_coding"),
Status: TaskStatusPending,
}
// Should return true because we have 2 running EC tasks but max is 3
result := mq.canScheduleTaskNow(task)
if !result {
t.Errorf("Expected canScheduleTaskNow to return true with policy allowing 3 concurrent, got false")
}
// Add one more running task to reach the limit
mq.tasks["running-task-3"] = &MaintenanceTask{
ID: "running-task-3",
Type: MaintenanceTaskType("erasure_coding"),
Status: TaskStatusInProgress,
}
// Should return false because we now have 3 running EC tasks (at limit)
result = mq.canScheduleTaskNow(task)
if result {
t.Errorf("Expected canScheduleTaskNow to return false when at policy limit, got true")
}
}

View File

@@ -43,7 +43,18 @@ func (ms *MaintenanceScanner) ScanForMaintenanceTasks() ([]*TaskDetectionResult,
// Convert metrics to task system format
taskMetrics := ms.convertToTaskMetrics(volumeMetrics)
// Update topology information for complete cluster view (including empty servers)
// This must happen before task detection to ensure EC placement can consider all servers
if ms.lastTopologyInfo != nil {
if err := ms.integration.UpdateTopologyInfo(ms.lastTopologyInfo); err != nil {
glog.Errorf("Failed to update topology info for empty servers: %v", err)
// Don't fail the scan - continue with just volume-bearing servers
} else {
glog.V(1).Infof("Updated topology info for complete cluster view including empty servers")
}
}
// Use task detection system with complete cluster information
results, err := ms.integration.ScanWithTaskDetectors(taskMetrics)
if err != nil {
glog.Errorf("Task scanning failed: %v", err)
@@ -62,25 +73,60 @@ func (ms *MaintenanceScanner) ScanForMaintenanceTasks() ([]*TaskDetectionResult,
// getVolumeHealthMetrics collects health information for all volumes
func (ms *MaintenanceScanner) getVolumeHealthMetrics() ([]*VolumeHealthMetrics, error) {
var metrics []*VolumeHealthMetrics
var volumeSizeLimitMB uint64
glog.V(1).Infof("Collecting volume health metrics from master")
err := ms.adminClient.WithMasterClient(func(client master_pb.SeaweedClient) error {
// First, get volume size limit from master configuration
configResp, err := client.GetMasterConfiguration(context.Background(), &master_pb.GetMasterConfigurationRequest{})
if err != nil {
glog.Warningf("Failed to get volume size limit from master: %v", err)
volumeSizeLimitMB = 30000 // Default to 30GB if we can't get from master
} else {
volumeSizeLimitMB = uint64(configResp.VolumeSizeLimitMB)
}
// Now get volume list
resp, err := client.VolumeList(context.Background(), &master_pb.VolumeListRequest{})
if err != nil {
return err
}
if resp.TopologyInfo == nil {
glog.Warningf("No topology info received from master")
return nil
}
volumeSizeLimitBytes := volumeSizeLimitMB * 1024 * 1024 // Convert MB to bytes
// Track all nodes discovered in topology
var allNodesInTopology []string
var nodesWithVolumes []string
var nodesWithoutVolumes []string
for _, dc := range resp.TopologyInfo.DataCenterInfos {
glog.V(2).Infof("Processing datacenter: %s", dc.Id)
for _, rack := range dc.RackInfos {
glog.V(2).Infof("Processing rack: %s in datacenter: %s", rack.Id, dc.Id)
for _, node := range rack.DataNodeInfos {
allNodesInTopology = append(allNodesInTopology, node.Id)
glog.V(2).Infof("Found volume server in topology: %s (disks: %d)", node.Id, len(node.DiskInfos))
hasVolumes := false
// Process each disk on this node
for diskType, diskInfo := range node.DiskInfos {
if len(diskInfo.VolumeInfos) > 0 {
hasVolumes = true
glog.V(2).Infof("Volume server %s disk %s has %d volumes", node.Id, diskType, len(diskInfo.VolumeInfos))
}
// Process volumes on this specific disk
for _, volInfo := range diskInfo.VolumeInfos {
metric := &VolumeHealthMetrics{
VolumeID: volInfo.Id,
Server: node.Id,
DiskType: diskType, // Track which disk this volume is on
DiskId: volInfo.DiskId, // Use disk ID from volume info
Collection: volInfo.Collection,
Size: volInfo.Size,
DeletedBytes: volInfo.DeletedByteCount,
@@ -94,31 +140,58 @@ func (ms *MaintenanceScanner) getVolumeHealthMetrics() ([]*VolumeHealthMetrics,
// Calculate derived metrics
if metric.Size > 0 {
metric.GarbageRatio = float64(metric.DeletedBytes) / float64(metric.Size)
// Calculate fullness ratio using actual volume size limit from master
metric.FullnessRatio = float64(metric.Size) / float64(volumeSizeLimitBytes)
}
metric.Age = time.Since(metric.LastModified)
glog.V(3).Infof("Volume %d on %s:%s (ID %d): size=%d, limit=%d, fullness=%.2f",
metric.VolumeID, metric.Server, metric.DiskType, metric.DiskId, metric.Size, volumeSizeLimitBytes, metric.FullnessRatio)
metrics = append(metrics, metric)
}
}
if hasVolumes {
nodesWithVolumes = append(nodesWithVolumes, node.Id)
} else {
nodesWithoutVolumes = append(nodesWithoutVolumes, node.Id)
glog.V(1).Infof("Volume server %s found in topology but has no volumes", node.Id)
}
}
}
}
glog.Infof("Topology discovery complete:")
glog.Infof(" - Total volume servers in topology: %d (%v)", len(allNodesInTopology), allNodesInTopology)
glog.Infof(" - Volume servers with volumes: %d (%v)", len(nodesWithVolumes), nodesWithVolumes)
glog.Infof(" - Volume servers without volumes: %d (%v)", len(nodesWithoutVolumes), nodesWithoutVolumes)
glog.Infof("Note: Maintenance system will track empty servers separately from volume metrics.")
// Store topology info for volume shard tracker
ms.lastTopologyInfo = resp.TopologyInfo
return nil
})
if err != nil {
glog.Errorf("Failed to get volume health metrics: %v", err)
return nil, err
}
glog.V(1).Infof("Successfully collected metrics for %d actual volumes with disk ID information", len(metrics))
// Count actual replicas and identify EC volumes
ms.enrichVolumeMetrics(metrics)
return metrics, nil
}
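A worked example of the fullness computation above, as a standalone test sketch with assumed numbers: with the 30000 MB default limit, a hypothetical 15 GiB volume is 15360 MiB / 30000 MiB = 0.512 full.

func TestFullnessRatio_Sketch(t *testing.T) {
    volumeSizeLimitMB := uint64(30000) // the default used when the master config call fails
    volumeSizeLimitBytes := volumeSizeLimitMB * 1024 * 1024
    size := uint64(15) * 1024 * 1024 * 1024 // a hypothetical 15 GiB volume
    fullness := float64(size) / float64(volumeSizeLimitBytes)
    if fullness != 0.512 {
        t.Errorf("expected fullness 0.512, got %.3f", fullness)
    }
}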
// getTopologyInfo returns the last collected topology information
func (ms *MaintenanceScanner) getTopologyInfo() *master_pb.TopologyInfo {
return ms.lastTopologyInfo
}
// enrichVolumeMetrics adds additional information like replica counts
func (ms *MaintenanceScanner) enrichVolumeMetrics(metrics []*VolumeHealthMetrics) {
// Group volumes by ID to count replicas
@@ -127,13 +200,17 @@ func (ms *MaintenanceScanner) enrichVolumeMetrics(metrics []*VolumeHealthMetrics
volumeGroups[metric.VolumeID] = append(volumeGroups[metric.VolumeID], metric)
}
// Update replica counts for actual volumes
for volumeID, replicas := range volumeGroups {
replicaCount := len(replicas)
for _, replica := range replicas {
replica.ReplicaCount = replicaCount
}
glog.V(3).Infof("Volume %d has %d replicas", volumeID, replicaCount)
}
// TODO: Identify EC volumes by checking volume structure
// This would require querying volume servers for EC shard information
}
// convertToTaskMetrics converts existing volume metrics to task system format
@@ -144,6 +221,8 @@ func (ms *MaintenanceScanner) convertToTaskMetrics(metrics []*VolumeHealthMetric
simplified = append(simplified, &types.VolumeHealthMetrics{
VolumeID: metric.VolumeID,
Server: metric.Server,
DiskType: metric.DiskType,
DiskId: metric.DiskId,
Collection: metric.Collection,
Size: metric.Size,
DeletedBytes: metric.DeletedBytes,
@@ -159,5 +238,6 @@ func (ms *MaintenanceScanner) convertToTaskMetrics(metrics []*VolumeHealthMetric
})
}
glog.V(2).Infof("Converted %d volume metrics with disk ID information for task detection", len(simplified))
return simplified
}

View File

@@ -8,6 +8,7 @@ import (
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/pb/master_pb"
"github.com/seaweedfs/seaweedfs/weed/pb/worker_pb"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks"
"github.com/seaweedfs/seaweedfs/weed/worker/types"
)
@@ -96,7 +97,7 @@ type MaintenanceTask struct {
VolumeID uint32 `json:"volume_id,omitempty"`
Server string `json:"server,omitempty"`
Collection string `json:"collection,omitempty"`
TypedParams *worker_pb.TaskParams `json:"typed_params,omitempty"`
Reason string `json:"reason"`
CreatedAt time.Time `json:"created_at"`
ScheduledAt time.Time `json:"scheduled_at"`
@@ -109,90 +110,149 @@ type MaintenanceTask struct {
MaxRetries int `json:"max_retries"`
}
// MaintenanceConfig holds configuration for the maintenance system
// DEPRECATED: Use worker_pb.MaintenanceConfig instead
type MaintenanceConfig = worker_pb.MaintenanceConfig
// MaintenancePolicy defines policies for maintenance operations
// DEPRECATED: Use worker_pb.MaintenancePolicy instead
type MaintenancePolicy = worker_pb.MaintenancePolicy
// TaskPolicy represents configuration for a specific task type
// DEPRECATED: Use worker_pb.TaskPolicy instead
type TaskPolicy = worker_pb.TaskPolicy
// Default configuration values
func DefaultMaintenanceConfig() *MaintenanceConfig {
return DefaultMaintenanceConfigProto()
}
// Policy helper functions (since we can't add methods to type aliases)
// GetTaskPolicy returns the policy for a specific task type
func GetTaskPolicy(mp *MaintenancePolicy, taskType MaintenanceTaskType) *TaskPolicy {
if mp.TaskPolicies == nil {
return nil
}
return mp.TaskPolicies[string(taskType)]
}
// SetTaskPolicy sets the policy for a specific task type
func SetTaskPolicy(mp *MaintenancePolicy, taskType MaintenanceTaskType, policy *TaskPolicy) {
if mp.TaskPolicies == nil {
mp.TaskPolicies = make(map[string]*TaskPolicy)
}
mp.TaskPolicies[string(taskType)] = policy
}
// IsTaskEnabled returns whether a task type is enabled
func IsTaskEnabled(mp *MaintenancePolicy, taskType MaintenanceTaskType) bool {
policy := GetTaskPolicy(mp, taskType)
if policy == nil {
return false
}
return policy.Enabled
}
// GetMaxConcurrent returns the max concurrent limit for a task type
func GetMaxConcurrent(mp *MaintenancePolicy, taskType MaintenanceTaskType) int {
policy := GetTaskPolicy(mp, taskType)
if policy == nil {
return 1
}
return int(policy.MaxConcurrent)
}
// GetRepeatInterval returns the repeat interval for a task type
func GetRepeatInterval(mp *MaintenancePolicy, taskType MaintenanceTaskType) int {
policy := GetTaskPolicy(mp, taskType)
if policy == nil {
return int(mp.DefaultRepeatIntervalSeconds)
}
return int(policy.RepeatIntervalSeconds)
}
// GetVacuumTaskConfig returns the vacuum task configuration
func GetVacuumTaskConfig(mp *MaintenancePolicy, taskType MaintenanceTaskType) *worker_pb.VacuumTaskConfig {
policy := GetTaskPolicy(mp, taskType)
if policy == nil {
return nil
}
return policy.GetVacuumConfig()
}
// GetErasureCodingTaskConfig returns the erasure coding task configuration
func GetErasureCodingTaskConfig(mp *MaintenancePolicy, taskType MaintenanceTaskType) *worker_pb.ErasureCodingTaskConfig {
policy := GetTaskPolicy(mp, taskType)
if policy == nil {
return nil
}
return policy.GetErasureCodingConfig()
}
// GetBalanceTaskConfig returns the balance task configuration
func GetBalanceTaskConfig(mp *MaintenancePolicy, taskType MaintenanceTaskType) *worker_pb.BalanceTaskConfig {
policy := GetTaskPolicy(mp, taskType)
if policy == nil {
return nil
}
return policy.GetBalanceConfig()
}
// GetReplicationTaskConfig returns the replication task configuration
func GetReplicationTaskConfig(mp *MaintenancePolicy, taskType MaintenanceTaskType) *worker_pb.ReplicationTaskConfig {
policy := GetTaskPolicy(mp, taskType)
if policy == nil {
return nil
}
return policy.GetReplicationConfig()
}
// Note: GetTaskConfig was removed - use typed getters: GetVacuumTaskConfig, GetErasureCodingTaskConfig, GetBalanceTaskConfig, or GetReplicationTaskConfig
// SetVacuumTaskConfig sets the vacuum task configuration
func SetVacuumTaskConfig(mp *MaintenancePolicy, taskType MaintenanceTaskType, config *worker_pb.VacuumTaskConfig) {
policy := GetTaskPolicy(mp, taskType)
if policy != nil {
policy.TaskConfig = &worker_pb.TaskPolicy_VacuumConfig{
VacuumConfig: config,
}
}
}
// SetErasureCodingTaskConfig sets the erasure coding task configuration
func SetErasureCodingTaskConfig(mp *MaintenancePolicy, taskType MaintenanceTaskType, config *worker_pb.ErasureCodingTaskConfig) {
policy := GetTaskPolicy(mp, taskType)
if policy != nil {
policy.TaskConfig = &worker_pb.TaskPolicy_ErasureCodingConfig{
ErasureCodingConfig: config,
}
}
}
// SetBalanceTaskConfig sets the balance task configuration
func SetBalanceTaskConfig(mp *MaintenancePolicy, taskType MaintenanceTaskType, config *worker_pb.BalanceTaskConfig) {
policy := GetTaskPolicy(mp, taskType)
if policy != nil {
policy.TaskConfig = &worker_pb.TaskPolicy_BalanceConfig{
BalanceConfig: config,
}
}
}
// SetReplicationTaskConfig sets the replication task configuration
func SetReplicationTaskConfig(mp *MaintenancePolicy, taskType MaintenanceTaskType, config *worker_pb.ReplicationTaskConfig) {
policy := GetTaskPolicy(mp, taskType)
if policy != nil {
policy.TaskConfig = &worker_pb.TaskPolicy_ReplicationConfig{
ReplicationConfig: config,
}
}
}
// Note: SetTaskConfig was removed - use typed setters: SetVacuumTaskConfig, SetErasureCodingTaskConfig, SetBalanceTaskConfig, or SetReplicationTaskConfig
// MaintenanceWorker represents a worker instance
type MaintenanceWorker struct {
ID string `json:"id"`
@@ -217,29 +277,32 @@ type MaintenanceQueue struct {
// MaintenanceScanner analyzes the cluster and generates maintenance tasks
type MaintenanceScanner struct {
	adminClient      AdminClient
	policy           *MaintenancePolicy
	queue            *MaintenanceQueue
	lastScan         map[MaintenanceTaskType]time.Time
	integration      *MaintenanceIntegration
	lastTopologyInfo *master_pb.TopologyInfo
}
// TaskDetectionResult represents the result of scanning for maintenance needs
type TaskDetectionResult struct {
	TaskType    MaintenanceTaskType     `json:"task_type"`
	VolumeID    uint32                  `json:"volume_id,omitempty"`
	Server      string                  `json:"server,omitempty"`
	Collection  string                  `json:"collection,omitempty"`
	Priority    MaintenanceTaskPriority `json:"priority"`
	Reason      string                  `json:"reason"`
	TypedParams *worker_pb.TaskParams   `json:"typed_params,omitempty"`
	ScheduleAt  time.Time               `json:"schedule_at"`
}
// VolumeHealthMetrics represents the health metrics for a volume
type VolumeHealthMetrics struct {
VolumeID uint32 `json:"volume_id"`
Server string `json:"server"`
DiskType string `json:"disk_type"` // Disk type (e.g., "hdd", "ssd") or disk path (e.g., "/data1")
DiskId uint32 `json:"disk_id"` // ID of the disk in Store.Locations array
Collection string `json:"collection"`
Size uint64 `json:"size"`
DeletedBytes uint64 `json:"deleted_bytes"`
@@ -267,38 +330,6 @@ type MaintenanceStats struct {
NextScanTime time.Time `json:"next_scan_time"`
}
// MaintenanceConfig holds configuration for the maintenance system
type MaintenanceConfig struct {
Enabled bool `json:"enabled"`
ScanIntervalSeconds int `json:"scan_interval_seconds"` // How often to scan for maintenance needs (in seconds)
WorkerTimeoutSeconds int `json:"worker_timeout_seconds"` // Worker heartbeat timeout (in seconds)
TaskTimeoutSeconds int `json:"task_timeout_seconds"` // Individual task timeout (in seconds)
RetryDelaySeconds int `json:"retry_delay_seconds"` // Delay between retries (in seconds)
MaxRetries int `json:"max_retries"` // Default max retries for tasks
CleanupIntervalSeconds int `json:"cleanup_interval_seconds"` // How often to clean up old tasks (in seconds)
TaskRetentionSeconds int `json:"task_retention_seconds"` // How long to keep completed/failed tasks (in seconds)
Policy *MaintenancePolicy `json:"policy"`
}
// Default configuration values
func DefaultMaintenanceConfig() *MaintenanceConfig {
return &MaintenanceConfig{
Enabled: false, // Disabled by default for safety
ScanIntervalSeconds: 30 * 60, // 30 minutes
WorkerTimeoutSeconds: 5 * 60, // 5 minutes
TaskTimeoutSeconds: 2 * 60 * 60, // 2 hours
RetryDelaySeconds: 15 * 60, // 15 minutes
MaxRetries: 3,
CleanupIntervalSeconds: 24 * 60 * 60, // 24 hours
TaskRetentionSeconds: 7 * 24 * 60 * 60, // 7 days
Policy: &MaintenancePolicy{
GlobalMaxConcurrent: 4,
DefaultRepeatInterval: 6,
DefaultCheckInterval: 12,
},
}
}
// MaintenanceQueueData represents data for the queue visualization UI
type MaintenanceQueueData struct {
Tasks []*MaintenanceTask `json:"tasks"`
@@ -380,10 +411,10 @@ type ClusterReplicationTask struct {
// from all registered tasks using their UI providers
func BuildMaintenancePolicyFromTasks() *MaintenancePolicy {
	policy := &MaintenancePolicy{
		TaskPolicies:                 make(map[string]*TaskPolicy),
		GlobalMaxConcurrent:          4,
		DefaultRepeatIntervalSeconds: 6 * 3600,  // 6 hours in seconds
		DefaultCheckIntervalSeconds:  12 * 3600, // 12 hours in seconds
	}
// Get all registered task types from the UI registry
@@ -399,32 +430,23 @@ func BuildMaintenancePolicyFromTasks() *MaintenancePolicy {
	// Create task policy from UI configuration
	taskPolicy := &TaskPolicy{
		Enabled:               true, // Default enabled
		MaxConcurrent:         2,    // Default concurrency
		RepeatIntervalSeconds: policy.DefaultRepeatIntervalSeconds,
		CheckIntervalSeconds:  policy.DefaultCheckIntervalSeconds,
	}
	// Extract configuration using TaskConfig interface - no more map conversions!
	if taskConfig, ok := defaultConfig.(interface{ ToTaskPolicy() *worker_pb.TaskPolicy }); ok {
		// Use protobuf directly for clean, type-safe config extraction
		pbTaskPolicy := taskConfig.ToTaskPolicy()
		taskPolicy.Enabled = pbTaskPolicy.Enabled
		taskPolicy.MaxConcurrent = pbTaskPolicy.MaxConcurrent
		if pbTaskPolicy.RepeatIntervalSeconds > 0 {
			taskPolicy.RepeatIntervalSeconds = pbTaskPolicy.RepeatIntervalSeconds
		}
		if pbTaskPolicy.CheckIntervalSeconds > 0 {
			taskPolicy.CheckIntervalSeconds = pbTaskPolicy.CheckIntervalSeconds
		}
	}
@@ -432,24 +454,24 @@ func BuildMaintenancePolicyFromTasks() *MaintenancePolicy {
var scheduler types.TaskScheduler = typesRegistry.GetScheduler(taskType)
if scheduler != nil {
		if taskPolicy.MaxConcurrent <= 0 {
			taskPolicy.MaxConcurrent = int32(scheduler.GetMaxConcurrent())
		}
		// Convert default repeat interval to seconds
		if repeatInterval := scheduler.GetDefaultRepeatInterval(); repeatInterval > 0 {
			taskPolicy.RepeatIntervalSeconds = int32(repeatInterval.Seconds())
}
}
// Also get defaults from detector if available (using types.TaskDetector explicitly)
var detector types.TaskDetector = typesRegistry.GetDetector(taskType)
if detector != nil {
		// Convert scan interval to check interval (seconds)
		if scanInterval := detector.ScanInterval(); scanInterval > 0 {
			taskPolicy.CheckIntervalSeconds = int32(scanInterval.Seconds())
}
}
	policy.TaskPolicies[string(maintenanceTaskType)] = taskPolicy
glog.V(3).Infof("Built policy for task type %s: enabled=%v, max_concurrent=%d",
maintenanceTaskType, taskPolicy.Enabled, taskPolicy.MaxConcurrent)
}
@@ -558,3 +580,8 @@ func BuildMaintenanceMenuItems() []*MaintenanceMenuItem {
return menuItems
}
// Helper functions to extract configuration fields
// Note: Removed getVacuumConfigField, getErasureCodingConfigField, getBalanceConfigField, getReplicationConfigField
// These were orphaned after removing GetTaskConfig - use typed getters instead
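
For orientation, here is a minimal usage sketch of the free-function helpers above. This is not part of the diff; it assumes MaintenanceTaskType is a string-based type and that "erasure_coding" is a registered task type name:

// Sketch only, under the assumptions stated above.
func exampleUsage() {
	policy := BuildMaintenancePolicyFromTasks()
	ecType := MaintenanceTaskType("erasure_coding")
	if IsTaskEnabled(policy, ecType) {
		maxConcurrent := GetMaxConcurrent(policy, ecType) // falls back to 1 when no policy exists
		repeatSeconds := GetRepeatInterval(policy, ecType) // falls back to the global default
		if cfg := GetErasureCodingTaskConfig(policy, ecType); cfg != nil {
			// typed protobuf config; no map[string]interface{} casts needed
			_ = cfg
		}
		_, _ = maxConcurrent, repeatSeconds
	}
}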

View File

@@ -7,6 +7,7 @@ import (
"time"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/worker"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks"
"github.com/seaweedfs/seaweedfs/weed/worker/types"
@@ -145,15 +146,20 @@ func NewMaintenanceWorkerService(workerID, address, adminServer string) *Mainten
func (mws *MaintenanceWorkerService) executeGenericTask(task *MaintenanceTask) error {
glog.V(2).Infof("Executing generic task %s: %s for volume %d", task.ID, task.Type, task.VolumeID)
// Validate that task has proper typed parameters
if task.TypedParams == nil {
return fmt.Errorf("task %s has no typed parameters - task was not properly planned (insufficient destinations)", task.ID)
}
// Convert MaintenanceTask to types.TaskType
taskType := types.TaskType(string(task.Type))
	// Create task parameters
	taskParams := types.TaskParams{
		VolumeID:    task.VolumeID,
		Server:      task.Server,
		Collection:  task.Collection,
		TypedParams: task.TypedParams,
	}
// Create task instance using the registry
@@ -396,10 +402,19 @@ func NewMaintenanceWorkerCommand(workerID, address, adminServer string) *Mainten
// Run starts the maintenance worker as a standalone service
func (mwc *MaintenanceWorkerCommand) Run() error {
	// Generate or load persistent worker ID if not provided
	if mwc.workerService.workerID == "" {
// Get current working directory for worker ID persistence
wd, err := os.Getwd()
if err != nil {
return fmt.Errorf("failed to get working directory: %w", err)
}
workerID, err := worker.GenerateOrLoadWorkerID(wd)
if err != nil {
return fmt.Errorf("failed to generate or load worker ID: %w", err)
}
mwc.workerService.workerID = workerID
}
// Start the worker service
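
The actual worker.GenerateOrLoadWorkerID is not shown in this diff; one plausible shape for it (a sketch under that assumption, reusing the worker-%s-%d naming from the removed code) reads or creates a small ID file in the working directory:

// Sketch only: hypothetical implementation of a persistent worker ID helper.
func generateOrLoadWorkerID(dir string) (string, error) {
	idFile := filepath.Join(dir, "worker.id") // hypothetical file name
	if data, err := os.ReadFile(idFile); err == nil && len(data) > 0 {
		return strings.TrimSpace(string(data)), nil
	}
	hostname, _ := os.Hostname()
	id := fmt.Sprintf("worker-%s-%d", hostname, time.Now().Unix())
	if err := os.WriteFile(idFile, []byte(id), 0o644); err != nil {
		return "", err
	}
	return id, nil
}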

View File

@@ -0,0 +1,311 @@
package maintenance
import (
"sync"
"time"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/worker/types"
)
// PendingOperationType represents the type of pending operation
type PendingOperationType string
const (
OpTypeVolumeMove PendingOperationType = "volume_move"
OpTypeVolumeBalance PendingOperationType = "volume_balance"
OpTypeErasureCoding PendingOperationType = "erasure_coding"
OpTypeVacuum PendingOperationType = "vacuum"
OpTypeReplication PendingOperationType = "replication"
)
// PendingOperation represents a pending volume/shard operation
type PendingOperation struct {
VolumeID uint32 `json:"volume_id"`
OperationType PendingOperationType `json:"operation_type"`
SourceNode string `json:"source_node"`
DestNode string `json:"dest_node,omitempty"` // Empty for non-movement operations
TaskID string `json:"task_id"`
StartTime time.Time `json:"start_time"`
EstimatedSize uint64 `json:"estimated_size"` // Bytes
Collection string `json:"collection"`
Status string `json:"status"` // "assigned", "in_progress", "completing"
}
// PendingOperations tracks all pending volume/shard operations
type PendingOperations struct {
// Operations by volume ID for conflict detection
byVolumeID map[uint32]*PendingOperation
// Operations by task ID for updates
byTaskID map[string]*PendingOperation
// Operations by node for capacity calculations
bySourceNode map[string][]*PendingOperation
byDestNode map[string][]*PendingOperation
mutex sync.RWMutex
}
// NewPendingOperations creates a new pending operations tracker
func NewPendingOperations() *PendingOperations {
return &PendingOperations{
byVolumeID: make(map[uint32]*PendingOperation),
byTaskID: make(map[string]*PendingOperation),
bySourceNode: make(map[string][]*PendingOperation),
byDestNode: make(map[string][]*PendingOperation),
}
}
// AddOperation adds a pending operation
func (po *PendingOperations) AddOperation(op *PendingOperation) {
po.mutex.Lock()
defer po.mutex.Unlock()
// Check for existing operation on this volume
if existing, exists := po.byVolumeID[op.VolumeID]; exists {
glog.V(1).Infof("Replacing existing pending operation on volume %d: %s -> %s",
op.VolumeID, existing.TaskID, op.TaskID)
po.removeOperationUnlocked(existing)
}
// Add new operation
po.byVolumeID[op.VolumeID] = op
po.byTaskID[op.TaskID] = op
// Add to node indexes
po.bySourceNode[op.SourceNode] = append(po.bySourceNode[op.SourceNode], op)
if op.DestNode != "" {
po.byDestNode[op.DestNode] = append(po.byDestNode[op.DestNode], op)
}
glog.V(2).Infof("Added pending operation: volume %d, type %s, task %s, %s -> %s",
op.VolumeID, op.OperationType, op.TaskID, op.SourceNode, op.DestNode)
}
// RemoveOperation removes a completed operation
func (po *PendingOperations) RemoveOperation(taskID string) {
po.mutex.Lock()
defer po.mutex.Unlock()
if op, exists := po.byTaskID[taskID]; exists {
po.removeOperationUnlocked(op)
glog.V(2).Infof("Removed completed operation: volume %d, task %s", op.VolumeID, taskID)
}
}
// removeOperationUnlocked removes an operation (must hold lock)
func (po *PendingOperations) removeOperationUnlocked(op *PendingOperation) {
delete(po.byVolumeID, op.VolumeID)
delete(po.byTaskID, op.TaskID)
// Remove from source node list
if ops, exists := po.bySourceNode[op.SourceNode]; exists {
for i, other := range ops {
if other.TaskID == op.TaskID {
po.bySourceNode[op.SourceNode] = append(ops[:i], ops[i+1:]...)
break
}
}
}
// Remove from dest node list
if op.DestNode != "" {
if ops, exists := po.byDestNode[op.DestNode]; exists {
for i, other := range ops {
if other.TaskID == op.TaskID {
po.byDestNode[op.DestNode] = append(ops[:i], ops[i+1:]...)
break
}
}
}
}
}
// HasPendingOperationOnVolume checks if a volume has a pending operation
func (po *PendingOperations) HasPendingOperationOnVolume(volumeID uint32) bool {
po.mutex.RLock()
defer po.mutex.RUnlock()
_, exists := po.byVolumeID[volumeID]
return exists
}
// GetPendingOperationOnVolume returns the pending operation on a volume
func (po *PendingOperations) GetPendingOperationOnVolume(volumeID uint32) *PendingOperation {
po.mutex.RLock()
defer po.mutex.RUnlock()
return po.byVolumeID[volumeID]
}
// WouldConflictWithPending checks if a new operation would conflict with pending ones
func (po *PendingOperations) WouldConflictWithPending(volumeID uint32, opType PendingOperationType) bool {
po.mutex.RLock()
defer po.mutex.RUnlock()
if existing, exists := po.byVolumeID[volumeID]; exists {
// Volume already has a pending operation
glog.V(3).Infof("Volume %d conflict: already has %s operation (task %s)",
volumeID, existing.OperationType, existing.TaskID)
return true
}
return false
}
// GetPendingCapacityImpactForNode calculates pending capacity changes for a node
func (po *PendingOperations) GetPendingCapacityImpactForNode(nodeID string) (incoming uint64, outgoing uint64) {
po.mutex.RLock()
defer po.mutex.RUnlock()
// Calculate outgoing capacity (volumes leaving this node)
if ops, exists := po.bySourceNode[nodeID]; exists {
for _, op := range ops {
// Only count movement operations
if op.DestNode != "" {
outgoing += op.EstimatedSize
}
}
}
// Calculate incoming capacity (volumes coming to this node)
if ops, exists := po.byDestNode[nodeID]; exists {
for _, op := range ops {
incoming += op.EstimatedSize
}
}
return incoming, outgoing
}
// FilterVolumeMetricsExcludingPending filters out volumes with pending operations
func (po *PendingOperations) FilterVolumeMetricsExcludingPending(metrics []*types.VolumeHealthMetrics) []*types.VolumeHealthMetrics {
po.mutex.RLock()
defer po.mutex.RUnlock()
var filtered []*types.VolumeHealthMetrics
excludedCount := 0
for _, metric := range metrics {
if _, hasPending := po.byVolumeID[metric.VolumeID]; !hasPending {
filtered = append(filtered, metric)
} else {
excludedCount++
glog.V(3).Infof("Excluding volume %d from scan due to pending operation", metric.VolumeID)
}
}
if excludedCount > 0 {
glog.V(1).Infof("Filtered out %d volumes with pending operations from %d total volumes",
excludedCount, len(metrics))
}
return filtered
}
// GetNodeCapacityProjection calculates projected capacity for a node
func (po *PendingOperations) GetNodeCapacityProjection(nodeID string, currentUsed uint64, totalCapacity uint64) NodeCapacityProjection {
incoming, outgoing := po.GetPendingCapacityImpactForNode(nodeID)
projectedUsed := currentUsed + incoming - outgoing
projectedFree := totalCapacity - projectedUsed
return NodeCapacityProjection{
NodeID: nodeID,
CurrentUsed: currentUsed,
TotalCapacity: totalCapacity,
PendingIncoming: incoming,
PendingOutgoing: outgoing,
ProjectedUsed: projectedUsed,
ProjectedFree: projectedFree,
}
}
// GetAllPendingOperations returns all pending operations
func (po *PendingOperations) GetAllPendingOperations() []*PendingOperation {
po.mutex.RLock()
defer po.mutex.RUnlock()
var operations []*PendingOperation
for _, op := range po.byVolumeID {
operations = append(operations, op)
}
return operations
}
// UpdateOperationStatus updates the status of a pending operation
func (po *PendingOperations) UpdateOperationStatus(taskID string, status string) {
po.mutex.Lock()
defer po.mutex.Unlock()
if op, exists := po.byTaskID[taskID]; exists {
op.Status = status
glog.V(3).Infof("Updated operation status: task %s, volume %d -> %s", taskID, op.VolumeID, status)
}
}
// CleanupStaleOperations removes operations that have been running too long
func (po *PendingOperations) CleanupStaleOperations(maxAge time.Duration) int {
po.mutex.Lock()
defer po.mutex.Unlock()
cutoff := time.Now().Add(-maxAge)
var staleOps []*PendingOperation
for _, op := range po.byVolumeID {
if op.StartTime.Before(cutoff) {
staleOps = append(staleOps, op)
}
}
for _, op := range staleOps {
po.removeOperationUnlocked(op)
glog.Warningf("Removed stale pending operation: volume %d, task %s, age %v",
op.VolumeID, op.TaskID, time.Since(op.StartTime))
}
return len(staleOps)
}
// NodeCapacityProjection represents projected capacity for a node
type NodeCapacityProjection struct {
NodeID string `json:"node_id"`
CurrentUsed uint64 `json:"current_used"`
TotalCapacity uint64 `json:"total_capacity"`
PendingIncoming uint64 `json:"pending_incoming"`
PendingOutgoing uint64 `json:"pending_outgoing"`
ProjectedUsed uint64 `json:"projected_used"`
ProjectedFree uint64 `json:"projected_free"`
}
// GetStats returns statistics about pending operations
func (po *PendingOperations) GetStats() PendingOperationsStats {
po.mutex.RLock()
defer po.mutex.RUnlock()
stats := PendingOperationsStats{
TotalOperations: len(po.byVolumeID),
ByType: make(map[PendingOperationType]int),
ByStatus: make(map[string]int),
}
var totalSize uint64
for _, op := range po.byVolumeID {
stats.ByType[op.OperationType]++
stats.ByStatus[op.Status]++
totalSize += op.EstimatedSize
}
stats.TotalEstimatedSize = totalSize
return stats
}
// PendingOperationsStats provides statistics about pending operations
type PendingOperationsStats struct {
TotalOperations int `json:"total_operations"`
ByType map[PendingOperationType]int `json:"by_type"`
ByStatus map[string]int `json:"by_status"`
TotalEstimatedSize uint64 `json:"total_estimated_size"`
}
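
A scanner would typically combine the metric filter with a final conflict re-check before enqueueing new work. A minimal sketch of that calling pattern (an assumption about the caller, not part of the diff):

// Sketch only: how a detector might consult PendingOperations before
// proposing new erasure-coding work.
func pickECCandidates(po *PendingOperations, metrics []*types.VolumeHealthMetrics) []*types.VolumeHealthMetrics {
	// Drop volumes that already have an operation in flight.
	candidates := po.FilterVolumeMetricsExcludingPending(metrics)
	var out []*types.VolumeHealthMetrics
	for _, m := range candidates {
		// Re-check in case an operation was registered between filtering and scheduling.
		if !po.WouldConflictWithPending(m.VolumeID, OpTypeErasureCoding) {
			out = append(out, m)
		}
	}
	return out
}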

View File

@@ -0,0 +1,250 @@
package maintenance
import (
"testing"
"time"
"github.com/seaweedfs/seaweedfs/weed/worker/types"
)
func TestPendingOperations_ConflictDetection(t *testing.T) {
pendingOps := NewPendingOperations()
// Add a pending erasure coding operation on volume 123
op := &PendingOperation{
VolumeID: 123,
OperationType: OpTypeErasureCoding,
SourceNode: "node1",
TaskID: "task-001",
StartTime: time.Now(),
EstimatedSize: 1024 * 1024 * 1024, // 1GB
Collection: "test",
Status: "assigned",
}
pendingOps.AddOperation(op)
// Test conflict detection
if !pendingOps.HasPendingOperationOnVolume(123) {
t.Errorf("Expected volume 123 to have pending operation")
}
if !pendingOps.WouldConflictWithPending(123, OpTypeVacuum) {
t.Errorf("Expected conflict when trying to add vacuum operation on volume 123")
}
if pendingOps.HasPendingOperationOnVolume(124) {
t.Errorf("Expected volume 124 to have no pending operation")
}
if pendingOps.WouldConflictWithPending(124, OpTypeVacuum) {
t.Errorf("Expected no conflict for volume 124")
}
}
func TestPendingOperations_CapacityProjection(t *testing.T) {
pendingOps := NewPendingOperations()
// Add operation moving volume from node1 to node2
op1 := &PendingOperation{
VolumeID: 100,
OperationType: OpTypeVolumeMove,
SourceNode: "node1",
DestNode: "node2",
TaskID: "task-001",
StartTime: time.Now(),
EstimatedSize: 2 * 1024 * 1024 * 1024, // 2GB
Collection: "test",
Status: "in_progress",
}
// Add operation moving volume from node3 to node1
op2 := &PendingOperation{
VolumeID: 101,
OperationType: OpTypeVolumeMove,
SourceNode: "node3",
DestNode: "node1",
TaskID: "task-002",
StartTime: time.Now(),
EstimatedSize: 1 * 1024 * 1024 * 1024, // 1GB
Collection: "test",
Status: "assigned",
}
pendingOps.AddOperation(op1)
pendingOps.AddOperation(op2)
// Test capacity impact for node1
incoming, outgoing := pendingOps.GetPendingCapacityImpactForNode("node1")
expectedIncoming := uint64(1 * 1024 * 1024 * 1024) // 1GB incoming
expectedOutgoing := uint64(2 * 1024 * 1024 * 1024) // 2GB outgoing
if incoming != expectedIncoming {
t.Errorf("Expected incoming capacity %d, got %d", expectedIncoming, incoming)
}
if outgoing != expectedOutgoing {
t.Errorf("Expected outgoing capacity %d, got %d", expectedOutgoing, outgoing)
}
// Test projection for node1
currentUsed := uint64(10 * 1024 * 1024 * 1024) // 10GB current
totalCapacity := uint64(50 * 1024 * 1024 * 1024) // 50GB total
projection := pendingOps.GetNodeCapacityProjection("node1", currentUsed, totalCapacity)
expectedProjectedUsed := currentUsed + incoming - outgoing // 10 + 1 - 2 = 9GB
expectedProjectedFree := totalCapacity - expectedProjectedUsed // 50 - 9 = 41GB
if projection.ProjectedUsed != expectedProjectedUsed {
t.Errorf("Expected projected used %d, got %d", expectedProjectedUsed, projection.ProjectedUsed)
}
if projection.ProjectedFree != expectedProjectedFree {
t.Errorf("Expected projected free %d, got %d", expectedProjectedFree, projection.ProjectedFree)
}
}
func TestPendingOperations_VolumeFiltering(t *testing.T) {
pendingOps := NewPendingOperations()
// Create volume metrics
metrics := []*types.VolumeHealthMetrics{
{VolumeID: 100, Server: "node1"},
{VolumeID: 101, Server: "node2"},
{VolumeID: 102, Server: "node3"},
{VolumeID: 103, Server: "node1"},
}
// Add pending operations on volumes 101 and 103
op1 := &PendingOperation{
VolumeID: 101,
OperationType: OpTypeVacuum,
SourceNode: "node2",
TaskID: "task-001",
StartTime: time.Now(),
EstimatedSize: 1024 * 1024 * 1024,
Status: "in_progress",
}
op2 := &PendingOperation{
VolumeID: 103,
OperationType: OpTypeErasureCoding,
SourceNode: "node1",
TaskID: "task-002",
StartTime: time.Now(),
EstimatedSize: 2 * 1024 * 1024 * 1024,
Status: "assigned",
}
pendingOps.AddOperation(op1)
pendingOps.AddOperation(op2)
// Filter metrics
filtered := pendingOps.FilterVolumeMetricsExcludingPending(metrics)
// Should only have volumes 100 and 102 (101 and 103 are filtered out)
if len(filtered) != 2 {
t.Errorf("Expected 2 filtered metrics, got %d", len(filtered))
}
// Check that correct volumes remain
foundVolumes := make(map[uint32]bool)
for _, metric := range filtered {
foundVolumes[metric.VolumeID] = true
}
if !foundVolumes[100] || !foundVolumes[102] {
t.Errorf("Expected volumes 100 and 102 to remain after filtering")
}
if foundVolumes[101] || foundVolumes[103] {
t.Errorf("Expected volumes 101 and 103 to be filtered out")
}
}
func TestPendingOperations_OperationLifecycle(t *testing.T) {
pendingOps := NewPendingOperations()
// Add operation
op := &PendingOperation{
VolumeID: 200,
OperationType: OpTypeVolumeBalance,
SourceNode: "node1",
DestNode: "node2",
TaskID: "task-balance-001",
StartTime: time.Now(),
EstimatedSize: 1024 * 1024 * 1024,
Status: "assigned",
}
pendingOps.AddOperation(op)
// Check it exists
if !pendingOps.HasPendingOperationOnVolume(200) {
t.Errorf("Expected volume 200 to have pending operation")
}
// Update status
pendingOps.UpdateOperationStatus("task-balance-001", "in_progress")
retrievedOp := pendingOps.GetPendingOperationOnVolume(200)
if retrievedOp == nil {
t.Errorf("Expected to retrieve pending operation for volume 200")
} else if retrievedOp.Status != "in_progress" {
t.Errorf("Expected operation status to be 'in_progress', got '%s'", retrievedOp.Status)
}
// Complete operation
pendingOps.RemoveOperation("task-balance-001")
if pendingOps.HasPendingOperationOnVolume(200) {
t.Errorf("Expected volume 200 to have no pending operation after removal")
}
}
func TestPendingOperations_StaleCleanup(t *testing.T) {
pendingOps := NewPendingOperations()
// Add recent operation
recentOp := &PendingOperation{
VolumeID: 300,
OperationType: OpTypeVacuum,
SourceNode: "node1",
TaskID: "task-recent",
StartTime: time.Now(),
EstimatedSize: 1024 * 1024 * 1024,
Status: "in_progress",
}
// Add stale operation (24 hours ago)
staleOp := &PendingOperation{
VolumeID: 301,
OperationType: OpTypeErasureCoding,
SourceNode: "node2",
TaskID: "task-stale",
StartTime: time.Now().Add(-24 * time.Hour),
EstimatedSize: 2 * 1024 * 1024 * 1024,
Status: "in_progress",
}
pendingOps.AddOperation(recentOp)
pendingOps.AddOperation(staleOp)
// Clean up operations older than 1 hour
removedCount := pendingOps.CleanupStaleOperations(1 * time.Hour)
if removedCount != 1 {
t.Errorf("Expected to remove 1 stale operation, removed %d", removedCount)
}
// Recent operation should still exist
if !pendingOps.HasPendingOperationOnVolume(300) {
t.Errorf("Expected recent operation on volume 300 to still exist")
}
// Stale operation should be removed
if pendingOps.HasPendingOperationOnVolume(301) {
t.Errorf("Expected stale operation on volume 301 to be removed")
}
}

View File

@@ -9,6 +9,7 @@
z-index: 100;
padding: 48px 0 0;
box-shadow: inset -1px 0 0 rgba(0, 0, 0, .1);
overflow-y: auto;
}
.sidebar-heading {

View File

@@ -0,0 +1,741 @@
package topology
import (
"fmt"
"sync"
"time"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/pb/master_pb"
)
// TaskType represents different types of maintenance operations
type TaskType string
// TaskStatus represents the current status of a task
type TaskStatus string
// Common task type constants
const (
TaskTypeVacuum TaskType = "vacuum"
TaskTypeBalance TaskType = "balance"
TaskTypeErasureCoding TaskType = "erasure_coding"
TaskTypeReplication TaskType = "replication"
)
// Common task status constants
const (
TaskStatusPending TaskStatus = "pending"
TaskStatusInProgress TaskStatus = "in_progress"
TaskStatusCompleted TaskStatus = "completed"
)
// taskState represents the current state of tasks affecting the topology (internal)
type taskState struct {
VolumeID uint32 `json:"volume_id"`
TaskType TaskType `json:"task_type"`
SourceServer string `json:"source_server"`
SourceDisk uint32 `json:"source_disk"`
TargetServer string `json:"target_server,omitempty"`
TargetDisk uint32 `json:"target_disk,omitempty"`
Status TaskStatus `json:"status"`
StartedAt time.Time `json:"started_at"`
CompletedAt time.Time `json:"completed_at,omitempty"`
}
// DiskInfo represents a disk with its current state and ongoing tasks (public for external access)
type DiskInfo struct {
NodeID string `json:"node_id"`
DiskID uint32 `json:"disk_id"`
DiskType string `json:"disk_type"`
DataCenter string `json:"data_center"`
Rack string `json:"rack"`
DiskInfo *master_pb.DiskInfo `json:"disk_info"`
LoadCount int `json:"load_count"` // Number of active tasks
}
// activeDisk represents internal disk state (private)
type activeDisk struct {
*DiskInfo
pendingTasks []*taskState
assignedTasks []*taskState
recentTasks []*taskState // Completed in last N seconds
}
// activeNode represents a node with its disks (private)
type activeNode struct {
nodeID string
dataCenter string
rack string
nodeInfo *master_pb.DataNodeInfo
disks map[uint32]*activeDisk // DiskID -> activeDisk
}
// ActiveTopology provides a real-time view of cluster state with task awareness
type ActiveTopology struct {
// Core topology from master
topologyInfo *master_pb.TopologyInfo
lastUpdated time.Time
// Structured topology for easy access (private)
nodes map[string]*activeNode // NodeID -> activeNode
disks map[string]*activeDisk // "NodeID:DiskID" -> activeDisk
// Task states affecting the topology (private)
pendingTasks map[string]*taskState
assignedTasks map[string]*taskState
recentTasks map[string]*taskState
// Configuration
recentTaskWindowSeconds int
// Synchronization
mutex sync.RWMutex
}
// NewActiveTopology creates a new ActiveTopology instance
func NewActiveTopology(recentTaskWindowSeconds int) *ActiveTopology {
if recentTaskWindowSeconds <= 0 {
recentTaskWindowSeconds = 10 // Default 10 seconds
}
return &ActiveTopology{
nodes: make(map[string]*activeNode),
disks: make(map[string]*activeDisk),
pendingTasks: make(map[string]*taskState),
assignedTasks: make(map[string]*taskState),
recentTasks: make(map[string]*taskState),
recentTaskWindowSeconds: recentTaskWindowSeconds,
}
}
// UpdateTopology updates the topology information from master
func (at *ActiveTopology) UpdateTopology(topologyInfo *master_pb.TopologyInfo) error {
at.mutex.Lock()
defer at.mutex.Unlock()
at.topologyInfo = topologyInfo
at.lastUpdated = time.Now()
// Rebuild structured topology
at.nodes = make(map[string]*activeNode)
at.disks = make(map[string]*activeDisk)
for _, dc := range topologyInfo.DataCenterInfos {
for _, rack := range dc.RackInfos {
for _, nodeInfo := range rack.DataNodeInfos {
node := &activeNode{
nodeID: nodeInfo.Id,
dataCenter: dc.Id,
rack: rack.Id,
nodeInfo: nodeInfo,
disks: make(map[uint32]*activeDisk),
}
// Add disks for this node
for diskType, diskInfo := range nodeInfo.DiskInfos {
disk := &activeDisk{
DiskInfo: &DiskInfo{
NodeID: nodeInfo.Id,
DiskID: diskInfo.DiskId,
DiskType: diskType,
DataCenter: dc.Id,
Rack: rack.Id,
DiskInfo: diskInfo,
},
}
diskKey := fmt.Sprintf("%s:%d", nodeInfo.Id, diskInfo.DiskId)
node.disks[diskInfo.DiskId] = disk
at.disks[diskKey] = disk
}
at.nodes[nodeInfo.Id] = node
}
}
}
// Reassign task states to updated topology
at.reassignTaskStates()
glog.V(1).Infof("ActiveTopology updated: %d nodes, %d disks", len(at.nodes), len(at.disks))
return nil
}
// AddPendingTask adds a pending task to the topology
func (at *ActiveTopology) AddPendingTask(taskID string, taskType TaskType, volumeID uint32,
sourceServer string, sourceDisk uint32, targetServer string, targetDisk uint32) {
at.mutex.Lock()
defer at.mutex.Unlock()
task := &taskState{
VolumeID: volumeID,
TaskType: taskType,
SourceServer: sourceServer,
SourceDisk: sourceDisk,
TargetServer: targetServer,
TargetDisk: targetDisk,
Status: TaskStatusPending,
StartedAt: time.Now(),
}
at.pendingTasks[taskID] = task
at.assignTaskToDisk(task)
}
// AssignTask moves a task from pending to assigned
func (at *ActiveTopology) AssignTask(taskID string) error {
at.mutex.Lock()
defer at.mutex.Unlock()
task, exists := at.pendingTasks[taskID]
if !exists {
return fmt.Errorf("pending task %s not found", taskID)
}
delete(at.pendingTasks, taskID)
task.Status = TaskStatusInProgress
at.assignedTasks[taskID] = task
at.reassignTaskStates()
return nil
}
// CompleteTask moves a task from assigned to recent
func (at *ActiveTopology) CompleteTask(taskID string) error {
at.mutex.Lock()
defer at.mutex.Unlock()
task, exists := at.assignedTasks[taskID]
if !exists {
return fmt.Errorf("assigned task %s not found", taskID)
}
delete(at.assignedTasks, taskID)
task.Status = TaskStatusCompleted
task.CompletedAt = time.Now()
at.recentTasks[taskID] = task
at.reassignTaskStates()
// Clean up old recent tasks
at.cleanupRecentTasks()
return nil
}
// GetAvailableDisks returns disks that can accept new tasks of the given type
func (at *ActiveTopology) GetAvailableDisks(taskType TaskType, excludeNodeID string) []*DiskInfo {
at.mutex.RLock()
defer at.mutex.RUnlock()
var available []*DiskInfo
for _, disk := range at.disks {
if disk.NodeID == excludeNodeID {
continue // Skip excluded node
}
if at.isDiskAvailable(disk, taskType) {
// Create a copy with current load count
diskCopy := *disk.DiskInfo
diskCopy.LoadCount = len(disk.pendingTasks) + len(disk.assignedTasks)
available = append(available, &diskCopy)
}
}
return available
}
// GetDiskLoad returns the current load on a disk (number of active tasks)
func (at *ActiveTopology) GetDiskLoad(nodeID string, diskID uint32) int {
at.mutex.RLock()
defer at.mutex.RUnlock()
diskKey := fmt.Sprintf("%s:%d", nodeID, diskID)
disk, exists := at.disks[diskKey]
if !exists {
return 0
}
return len(disk.pendingTasks) + len(disk.assignedTasks)
}
// HasRecentTaskForVolume checks if a volume had a recent task (to avoid immediate re-detection)
func (at *ActiveTopology) HasRecentTaskForVolume(volumeID uint32, taskType TaskType) bool {
at.mutex.RLock()
defer at.mutex.RUnlock()
for _, task := range at.recentTasks {
if task.VolumeID == volumeID && task.TaskType == taskType {
return true
}
}
return false
}
// GetAllNodes returns information about all nodes (public interface)
func (at *ActiveTopology) GetAllNodes() map[string]*master_pb.DataNodeInfo {
at.mutex.RLock()
defer at.mutex.RUnlock()
result := make(map[string]*master_pb.DataNodeInfo)
for nodeID, node := range at.nodes {
result[nodeID] = node.nodeInfo
}
return result
}
// GetTopologyInfo returns the current topology information (read-only access)
func (at *ActiveTopology) GetTopologyInfo() *master_pb.TopologyInfo {
at.mutex.RLock()
defer at.mutex.RUnlock()
return at.topologyInfo
}
// GetNodeDisks returns all disks for a specific node
func (at *ActiveTopology) GetNodeDisks(nodeID string) []*DiskInfo {
at.mutex.RLock()
defer at.mutex.RUnlock()
node, exists := at.nodes[nodeID]
if !exists {
return nil
}
var disks []*DiskInfo
for _, disk := range node.disks {
diskCopy := *disk.DiskInfo
diskCopy.LoadCount = len(disk.pendingTasks) + len(disk.assignedTasks)
disks = append(disks, &diskCopy)
}
return disks
}
// DestinationPlan represents a planned destination for a volume/shard operation
type DestinationPlan struct {
TargetNode string `json:"target_node"`
TargetDisk uint32 `json:"target_disk"`
TargetRack string `json:"target_rack"`
TargetDC string `json:"target_dc"`
ExpectedSize uint64 `json:"expected_size"`
PlacementScore float64 `json:"placement_score"`
Conflicts []string `json:"conflicts"`
}
// MultiDestinationPlan represents multiple planned destinations for operations like EC
type MultiDestinationPlan struct {
Plans []*DestinationPlan `json:"plans"`
TotalShards int `json:"total_shards"`
SuccessfulRack int `json:"successful_racks"`
SuccessfulDCs int `json:"successful_dcs"`
}
// PlanBalanceDestination finds the best destination for a balance operation
func (at *ActiveTopology) PlanBalanceDestination(volumeID uint32, sourceNode string, sourceRack string, sourceDC string, volumeSize uint64) (*DestinationPlan, error) {
at.mutex.RLock()
defer at.mutex.RUnlock()
// Get available disks, excluding the source node
availableDisks := at.getAvailableDisksForPlanning(TaskTypeBalance, sourceNode)
if len(availableDisks) == 0 {
return nil, fmt.Errorf("no available disks for balance operation")
}
// Score each disk for balance placement
bestDisk := at.selectBestBalanceDestination(availableDisks, sourceRack, sourceDC, volumeSize)
if bestDisk == nil {
return nil, fmt.Errorf("no suitable destination found for balance operation")
}
return &DestinationPlan{
TargetNode: bestDisk.NodeID,
TargetDisk: bestDisk.DiskID,
TargetRack: bestDisk.Rack,
TargetDC: bestDisk.DataCenter,
ExpectedSize: volumeSize,
PlacementScore: at.calculatePlacementScore(bestDisk, sourceRack, sourceDC),
Conflicts: at.checkPlacementConflicts(bestDisk, TaskTypeBalance),
}, nil
}
// PlanECDestinations finds multiple destinations for EC shard distribution
func (at *ActiveTopology) PlanECDestinations(volumeID uint32, sourceNode string, sourceRack string, sourceDC string, shardsNeeded int) (*MultiDestinationPlan, error) {
at.mutex.RLock()
defer at.mutex.RUnlock()
// Get available disks for EC placement
availableDisks := at.getAvailableDisksForPlanning(TaskTypeErasureCoding, "")
if len(availableDisks) < shardsNeeded {
return nil, fmt.Errorf("insufficient disks for EC placement: need %d, have %d", shardsNeeded, len(availableDisks))
}
// Select best disks for EC placement with rack/DC diversity
selectedDisks := at.selectBestECDestinations(availableDisks, sourceRack, sourceDC, shardsNeeded)
if len(selectedDisks) < shardsNeeded {
return nil, fmt.Errorf("could not find %d suitable destinations for EC placement", shardsNeeded)
}
var plans []*DestinationPlan
rackCount := make(map[string]int)
dcCount := make(map[string]int)
for _, disk := range selectedDisks {
plan := &DestinationPlan{
TargetNode: disk.NodeID,
TargetDisk: disk.DiskID,
TargetRack: disk.Rack,
TargetDC: disk.DataCenter,
ExpectedSize: 0, // EC shards don't have predetermined size
PlacementScore: at.calculatePlacementScore(disk, sourceRack, sourceDC),
Conflicts: at.checkPlacementConflicts(disk, TaskTypeErasureCoding),
}
plans = append(plans, plan)
// Count rack and DC diversity
rackKey := fmt.Sprintf("%s:%s", disk.DataCenter, disk.Rack)
rackCount[rackKey]++
dcCount[disk.DataCenter]++
}
return &MultiDestinationPlan{
Plans: plans,
TotalShards: len(plans),
SuccessfulRack: len(rackCount),
SuccessfulDCs: len(dcCount),
}, nil
}
// getAvailableDisksForPlanning returns disks available for destination planning
func (at *ActiveTopology) getAvailableDisksForPlanning(taskType TaskType, excludeNodeID string) []*activeDisk {
var available []*activeDisk
for _, disk := range at.disks {
if excludeNodeID != "" && disk.NodeID == excludeNodeID {
continue // Skip excluded node
}
if at.isDiskAvailable(disk, taskType) {
available = append(available, disk)
}
}
return available
}
// selectBestBalanceDestination selects the best disk for balance operation
func (at *ActiveTopology) selectBestBalanceDestination(disks []*activeDisk, sourceRack string, sourceDC string, volumeSize uint64) *activeDisk {
if len(disks) == 0 {
return nil
}
var bestDisk *activeDisk
bestScore := -1.0
for _, disk := range disks {
score := at.calculateBalanceScore(disk, sourceRack, sourceDC, volumeSize)
if score > bestScore {
bestScore = score
bestDisk = disk
}
}
return bestDisk
}
// selectBestECDestinations selects multiple disks for EC shard placement with diversity
func (at *ActiveTopology) selectBestECDestinations(disks []*activeDisk, sourceRack string, sourceDC string, shardsNeeded int) []*activeDisk {
if len(disks) == 0 {
return nil
}
// Group disks by rack and DC for diversity
rackGroups := make(map[string][]*activeDisk)
for _, disk := range disks {
rackKey := fmt.Sprintf("%s:%s", disk.DataCenter, disk.Rack)
rackGroups[rackKey] = append(rackGroups[rackKey], disk)
}
var selected []*activeDisk
usedRacks := make(map[string]bool)
// First pass: select one disk from each rack for maximum diversity
for rackKey, rackDisks := range rackGroups {
if len(selected) >= shardsNeeded {
break
}
// Select best disk from this rack
bestDisk := at.selectBestFromRack(rackDisks, sourceRack, sourceDC)
if bestDisk != nil {
selected = append(selected, bestDisk)
usedRacks[rackKey] = true
}
}
// Second pass: if we need more disks, select from racks we've already used
if len(selected) < shardsNeeded {
for _, disk := range disks {
if len(selected) >= shardsNeeded {
break
}
// Skip if already selected
alreadySelected := false
for _, sel := range selected {
if sel.NodeID == disk.NodeID && sel.DiskID == disk.DiskID {
alreadySelected = true
break
}
}
if !alreadySelected && at.isDiskAvailable(disk, TaskTypeErasureCoding) {
selected = append(selected, disk)
}
}
}
return selected
}
// selectBestFromRack selects the best disk from a rack
func (at *ActiveTopology) selectBestFromRack(disks []*activeDisk, sourceRack string, sourceDC string) *activeDisk {
if len(disks) == 0 {
return nil
}
var bestDisk *activeDisk
bestScore := -1.0
for _, disk := range disks {
if !at.isDiskAvailable(disk, TaskTypeErasureCoding) {
continue
}
score := at.calculateECScore(disk, sourceRack, sourceDC)
if score > bestScore {
bestScore = score
bestDisk = disk
}
}
return bestDisk
}
// calculateBalanceScore calculates placement score for balance operations
func (at *ActiveTopology) calculateBalanceScore(disk *activeDisk, sourceRack string, sourceDC string, volumeSize uint64) float64 {
score := 0.0
// Prefer disks with lower load
activeLoad := len(disk.pendingTasks) + len(disk.assignedTasks)
score += (2.0 - float64(activeLoad)) * 40.0 // Max 80 points for load
// Prefer disks with more free space
if disk.DiskInfo.DiskInfo.MaxVolumeCount > 0 {
freeRatio := float64(disk.DiskInfo.DiskInfo.MaxVolumeCount-disk.DiskInfo.DiskInfo.VolumeCount) / float64(disk.DiskInfo.DiskInfo.MaxVolumeCount)
score += freeRatio * 20.0 // Max 20 points for free space
}
// Rack diversity bonus (prefer different rack)
if disk.Rack != sourceRack {
score += 10.0
}
// DC diversity bonus (prefer different DC)
if disk.DataCenter != sourceDC {
score += 5.0
}
return score
}
// calculateECScore calculates placement score for EC operations
func (at *ActiveTopology) calculateECScore(disk *activeDisk, sourceRack string, sourceDC string) float64 {
score := 0.0
// Prefer disks with lower load
activeLoad := len(disk.pendingTasks) + len(disk.assignedTasks)
score += (2.0 - float64(activeLoad)) * 30.0 // Max 60 points for load
// Prefer disks with more free space
if disk.DiskInfo.DiskInfo.MaxVolumeCount > 0 {
freeRatio := float64(disk.DiskInfo.DiskInfo.MaxVolumeCount-disk.DiskInfo.DiskInfo.VolumeCount) / float64(disk.DiskInfo.DiskInfo.MaxVolumeCount)
score += freeRatio * 20.0 // Max 20 points for free space
}
// Strong rack diversity preference for EC
if disk.Rack != sourceRack {
score += 20.0
}
// Strong DC diversity preference for EC
if disk.DataCenter != sourceDC {
score += 15.0
}
return score
}
// calculatePlacementScore calculates overall placement quality score
func (at *ActiveTopology) calculatePlacementScore(disk *activeDisk, sourceRack string, sourceDC string) float64 {
score := 0.0
// Load factor
activeLoad := len(disk.pendingTasks) + len(disk.assignedTasks)
loadScore := (2.0 - float64(activeLoad)) / 2.0 // Normalize to 0-1
score += loadScore * 0.4
// Capacity factor
if disk.DiskInfo.DiskInfo.MaxVolumeCount > 0 {
freeRatio := float64(disk.DiskInfo.DiskInfo.MaxVolumeCount-disk.DiskInfo.DiskInfo.VolumeCount) / float64(disk.DiskInfo.DiskInfo.MaxVolumeCount)
score += freeRatio * 0.3
}
// Diversity factor
diversityScore := 0.0
if disk.Rack != sourceRack {
diversityScore += 0.5
}
if disk.DataCenter != sourceDC {
diversityScore += 0.5
}
score += diversityScore * 0.3
return score // Score between 0.0 and 1.0
}
// checkPlacementConflicts checks for placement rule violations
func (at *ActiveTopology) checkPlacementConflicts(disk *activeDisk, taskType TaskType) []string {
var conflicts []string
// Check load limits
activeLoad := len(disk.pendingTasks) + len(disk.assignedTasks)
if activeLoad >= 2 {
conflicts = append(conflicts, fmt.Sprintf("disk_load_high_%d", activeLoad))
}
// Check capacity limits
if disk.DiskInfo.DiskInfo.MaxVolumeCount > 0 {
usageRatio := float64(disk.DiskInfo.DiskInfo.VolumeCount) / float64(disk.DiskInfo.DiskInfo.MaxVolumeCount)
if usageRatio > 0.9 {
conflicts = append(conflicts, "disk_capacity_high")
}
}
// Check for conflicting task types
for _, task := range disk.assignedTasks {
if at.areTaskTypesConflicting(task.TaskType, taskType) {
conflicts = append(conflicts, fmt.Sprintf("task_conflict_%s", task.TaskType))
}
}
return conflicts
}
// Private methods
// reassignTaskStates assigns tasks to the appropriate disks
func (at *ActiveTopology) reassignTaskStates() {
// Clear existing task assignments
for _, disk := range at.disks {
disk.pendingTasks = nil
disk.assignedTasks = nil
disk.recentTasks = nil
}
// Reassign pending tasks
for _, task := range at.pendingTasks {
at.assignTaskToDisk(task)
}
// Reassign assigned tasks
for _, task := range at.assignedTasks {
at.assignTaskToDisk(task)
}
// Reassign recent tasks
for _, task := range at.recentTasks {
at.assignTaskToDisk(task)
}
}
// assignTaskToDisk assigns a task to the appropriate disk(s)
func (at *ActiveTopology) assignTaskToDisk(task *taskState) {
// Assign to source disk
sourceKey := fmt.Sprintf("%s:%d", task.SourceServer, task.SourceDisk)
if sourceDisk, exists := at.disks[sourceKey]; exists {
switch task.Status {
case TaskStatusPending:
sourceDisk.pendingTasks = append(sourceDisk.pendingTasks, task)
case TaskStatusInProgress:
sourceDisk.assignedTasks = append(sourceDisk.assignedTasks, task)
case TaskStatusCompleted:
sourceDisk.recentTasks = append(sourceDisk.recentTasks, task)
}
}
// Assign to target disk if it exists and is different from source
if task.TargetServer != "" && (task.TargetServer != task.SourceServer || task.TargetDisk != task.SourceDisk) {
targetKey := fmt.Sprintf("%s:%d", task.TargetServer, task.TargetDisk)
if targetDisk, exists := at.disks[targetKey]; exists {
switch task.Status {
case TaskStatusPending:
targetDisk.pendingTasks = append(targetDisk.pendingTasks, task)
case TaskStatusInProgress:
targetDisk.assignedTasks = append(targetDisk.assignedTasks, task)
case TaskStatusCompleted:
targetDisk.recentTasks = append(targetDisk.recentTasks, task)
}
}
}
}
// isDiskAvailable checks if a disk can accept new tasks
func (at *ActiveTopology) isDiskAvailable(disk *activeDisk, taskType TaskType) bool {
// Check if disk has too many active tasks
activeLoad := len(disk.pendingTasks) + len(disk.assignedTasks)
if activeLoad >= 2 { // Max 2 concurrent tasks per disk
return false
}
// Check for conflicting task types
for _, task := range disk.assignedTasks {
if at.areTaskTypesConflicting(task.TaskType, taskType) {
return false
}
}
return true
}
// areTaskTypesConflicting checks if two task types conflict
func (at *ActiveTopology) areTaskTypesConflicting(existing, new TaskType) bool {
// Examples of conflicting task types
conflictMap := map[TaskType][]TaskType{
TaskTypeVacuum: {TaskTypeBalance, TaskTypeErasureCoding},
TaskTypeBalance: {TaskTypeVacuum, TaskTypeErasureCoding},
TaskTypeErasureCoding: {TaskTypeVacuum, TaskTypeBalance},
}
if conflicts, exists := conflictMap[existing]; exists {
for _, conflictType := range conflicts {
if conflictType == new {
return true
}
}
}
return false
}
// cleanupRecentTasks removes old recent tasks
func (at *ActiveTopology) cleanupRecentTasks() {
cutoff := time.Now().Add(-time.Duration(at.recentTaskWindowSeconds) * time.Second)
for taskID, task := range at.recentTasks {
if task.CompletedAt.Before(cutoff) {
delete(at.recentTasks, taskID)
}
}
}
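
To tie the pieces together, a rough sketch of the planning flow (not code from this change): a scheduler plans EC destinations, then registers each planned shard movement as a pending task so later load and conflict checks account for it. The source location strings, the task-ID scheme, and the 14-shard count (10 data + 4 parity, the SeaweedFS EC default) are assumptions here:

// Sketch only, under the assumptions stated above.
func planAndTrackEC(at *ActiveTopology, info *master_pb.TopologyInfo, volumeID uint32) error {
	if err := at.UpdateTopology(info); err != nil {
		return err
	}
	plan, err := at.PlanECDestinations(volumeID, "10.0.0.1:8080", "rack1", "dc1", 14)
	if err != nil {
		return err
	}
	for i, p := range plan.Plans {
		taskID := fmt.Sprintf("ec-%d-%d", volumeID, i) // hypothetical task ID scheme
		at.AddPendingTask(taskID, TaskTypeErasureCoding, volumeID,
			"10.0.0.1:8080", 0, p.TargetNode, p.TargetDisk)
	}
	return nil
}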

View File

@@ -0,0 +1,654 @@
package topology
import (
"testing"
"time"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/pb/master_pb"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// TestActiveTopologyBasicOperations tests basic topology management
func TestActiveTopologyBasicOperations(t *testing.T) {
topology := NewActiveTopology(10)
assert.NotNil(t, topology)
assert.Equal(t, 10, topology.recentTaskWindowSeconds)
// Test empty topology
assert.Equal(t, 0, len(topology.nodes))
assert.Equal(t, 0, len(topology.disks))
assert.Equal(t, 0, len(topology.pendingTasks))
}
// TestActiveTopologyUpdate tests topology updates from master
func TestActiveTopologyUpdate(t *testing.T) {
topology := NewActiveTopology(10)
// Create sample topology info
topologyInfo := createSampleTopology()
err := topology.UpdateTopology(topologyInfo)
require.NoError(t, err)
// Verify topology structure
assert.Equal(t, 2, len(topology.nodes)) // 2 nodes
assert.Equal(t, 4, len(topology.disks)) // 4 disks total (2 per node)
// Verify node structure
node1, exists := topology.nodes["10.0.0.1:8080"]
require.True(t, exists)
assert.Equal(t, "dc1", node1.dataCenter)
assert.Equal(t, "rack1", node1.rack)
assert.Equal(t, 2, len(node1.disks))
// Verify disk structure
disk1, exists := topology.disks["10.0.0.1:8080:0"]
require.True(t, exists)
assert.Equal(t, uint32(0), disk1.DiskID)
assert.Equal(t, "hdd", disk1.DiskType)
assert.Equal(t, "dc1", disk1.DataCenter)
}
// TestTaskLifecycle tests the complete task lifecycle
func TestTaskLifecycle(t *testing.T) {
topology := NewActiveTopology(10)
topology.UpdateTopology(createSampleTopology())
taskID := "balance-001"
// 1. Add pending task
topology.AddPendingTask(taskID, TaskTypeBalance, 1001,
"10.0.0.1:8080", 0, "10.0.0.2:8080", 1)
// Verify pending state
assert.Equal(t, 1, len(topology.pendingTasks))
assert.Equal(t, 0, len(topology.assignedTasks))
assert.Equal(t, 0, len(topology.recentTasks))
task := topology.pendingTasks[taskID]
assert.Equal(t, TaskStatusPending, task.Status)
assert.Equal(t, uint32(1001), task.VolumeID)
// Verify task assigned to disks
sourceDisk := topology.disks["10.0.0.1:8080:0"]
targetDisk := topology.disks["10.0.0.2:8080:1"]
assert.Equal(t, 1, len(sourceDisk.pendingTasks))
assert.Equal(t, 1, len(targetDisk.pendingTasks))
// 2. Assign task
err := topology.AssignTask(taskID)
require.NoError(t, err)
// Verify assigned state
assert.Equal(t, 0, len(topology.pendingTasks))
assert.Equal(t, 1, len(topology.assignedTasks))
assert.Equal(t, 0, len(topology.recentTasks))
task = topology.assignedTasks[taskID]
assert.Equal(t, TaskStatusInProgress, task.Status)
// Verify task moved to assigned on disks
assert.Equal(t, 0, len(sourceDisk.pendingTasks))
assert.Equal(t, 1, len(sourceDisk.assignedTasks))
assert.Equal(t, 0, len(targetDisk.pendingTasks))
assert.Equal(t, 1, len(targetDisk.assignedTasks))
// 3. Complete task
err = topology.CompleteTask(taskID)
require.NoError(t, err)
// Verify completed state
assert.Equal(t, 0, len(topology.pendingTasks))
assert.Equal(t, 0, len(topology.assignedTasks))
assert.Equal(t, 1, len(topology.recentTasks))
task = topology.recentTasks[taskID]
assert.Equal(t, TaskStatusCompleted, task.Status)
assert.False(t, task.CompletedAt.IsZero())
}
// TestTaskDetectionScenarios tests various task detection scenarios
func TestTaskDetectionScenarios(t *testing.T) {
tests := []struct {
name string
scenario func() *ActiveTopology
expectedTasks map[string]bool // taskType -> shouldDetect
}{
{
name: "Empty cluster - no tasks needed",
scenario: func() *ActiveTopology {
topology := NewActiveTopology(10)
topology.UpdateTopology(createEmptyTopology())
return topology
},
expectedTasks: map[string]bool{
"balance": false,
"vacuum": false,
"ec": false,
},
},
{
name: "Unbalanced cluster - balance task needed",
scenario: func() *ActiveTopology {
topology := NewActiveTopology(10)
topology.UpdateTopology(createUnbalancedTopology())
return topology
},
expectedTasks: map[string]bool{
"balance": true,
"vacuum": false,
"ec": false,
},
},
{
name: "High garbage ratio - vacuum task needed",
scenario: func() *ActiveTopology {
topology := NewActiveTopology(10)
topology.UpdateTopology(createHighGarbageTopology())
return topology
},
expectedTasks: map[string]bool{
"balance": false,
"vacuum": true,
"ec": false,
},
},
{
name: "Large volumes - EC task needed",
scenario: func() *ActiveTopology {
topology := NewActiveTopology(10)
topology.UpdateTopology(createLargeVolumeTopology())
return topology
},
expectedTasks: map[string]bool{
"balance": false,
"vacuum": false,
"ec": true,
},
},
{
name: "Recent tasks - no immediate re-detection",
scenario: func() *ActiveTopology {
topology := NewActiveTopology(10)
topology.UpdateTopology(createUnbalancedTopology())
// Add recent balance task
topology.recentTasks["recent-balance"] = &taskState{
VolumeID: 1001,
TaskType: TaskTypeBalance,
Status: TaskStatusCompleted,
CompletedAt: time.Now().Add(-5 * time.Second), // 5 seconds ago
}
return topology
},
expectedTasks: map[string]bool{
"balance": false, // Should not detect due to recent task
"vacuum": false,
"ec": false,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
topology := tt.scenario()
// Test balance task detection
shouldDetectBalance := tt.expectedTasks["balance"]
actualDetectBalance := !topology.HasRecentTaskForVolume(1001, TaskTypeBalance)
if shouldDetectBalance {
assert.True(t, actualDetectBalance, "Should detect balance task")
} else {
// Note: In real implementation, task detection would be more sophisticated
// This is a simplified test of the recent task prevention mechanism
}
// Test that recent tasks prevent re-detection
if len(topology.recentTasks) > 0 {
for _, task := range topology.recentTasks {
hasRecent := topology.HasRecentTaskForVolume(task.VolumeID, task.TaskType)
assert.True(t, hasRecent, "Should find recent task for volume %d", task.VolumeID)
}
}
})
}
}
// TestTargetSelectionScenarios tests target selection for different task types
func TestTargetSelectionScenarios(t *testing.T) {
tests := []struct {
name string
topology *ActiveTopology
taskType TaskType
excludeNode string
expectedTargets int
expectedBestTarget string
}{
{
name: "Balance task - find least loaded disk",
topology: createTopologyWithLoad(),
taskType: TaskTypeBalance,
excludeNode: "10.0.0.1:8080", // Exclude source node
expectedTargets: 2, // 2 disks on other node
},
{
name: "EC task - find multiple available disks",
topology: createTopologyForEC(),
taskType: TaskTypeErasureCoding,
excludeNode: "", // Don't exclude any nodes
expectedTargets: 4, // All 4 disks available
},
{
name: "Vacuum task - avoid conflicting disks",
topology: createTopologyWithConflicts(),
taskType: TaskTypeVacuum,
excludeNode: "",
expectedTargets: 1, // Only 1 disk without conflicts (conflicts exclude more disks)
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
availableDisks := tt.topology.GetAvailableDisks(tt.taskType, tt.excludeNode)
assert.Equal(t, tt.expectedTargets, len(availableDisks),
"Expected %d available disks, got %d", tt.expectedTargets, len(availableDisks))
// Verify disks are actually available
for _, disk := range availableDisks {
assert.NotEqual(t, tt.excludeNode, disk.NodeID,
"Available disk should not be on excluded node")
load := tt.topology.GetDiskLoad(disk.NodeID, disk.DiskID)
assert.Less(t, load, 2, "Disk load should be less than 2")
}
})
}
}
// TestDiskLoadCalculation tests disk load calculation
func TestDiskLoadCalculation(t *testing.T) {
topology := NewActiveTopology(10)
topology.UpdateTopology(createSampleTopology())
// Initially no load
load := topology.GetDiskLoad("10.0.0.1:8080", 0)
assert.Equal(t, 0, load)
// Add pending task
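// (argument order, as used throughout these tests: taskID, taskType,
// volumeID, sourceServer, sourceDiskID, targetServer, targetDiskID)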
topology.AddPendingTask("task1", TaskTypeBalance, 1001,
"10.0.0.1:8080", 0, "10.0.0.2:8080", 1)
// Check load increased
load = topology.GetDiskLoad("10.0.0.1:8080", 0)
assert.Equal(t, 1, load)
// Add another task to same disk
topology.AddPendingTask("task2", TaskTypeVacuum, 1002,
"10.0.0.1:8080", 0, "", 0)
load = topology.GetDiskLoad("10.0.0.1:8080", 0)
assert.Equal(t, 2, load)
// Move one task to assigned
topology.AssignTask("task1")
// Load should still be 2 (1 pending + 1 assigned)
load = topology.GetDiskLoad("10.0.0.1:8080", 0)
assert.Equal(t, 2, load)
// Complete one task
topology.CompleteTask("task1")
// Load should decrease to 1
load = topology.GetDiskLoad("10.0.0.1:8080", 0)
assert.Equal(t, 1, load)
}
// TestTaskConflictDetection verifies that a disk carrying an in-flight task is excluded from scheduling of conflicting task types
func TestTaskConflictDetection(t *testing.T) {
topology := NewActiveTopology(10)
topology.UpdateTopology(createSampleTopology())
// Add a balance task
topology.AddPendingTask("balance1", TaskTypeBalance, 1001,
"10.0.0.1:8080", 0, "10.0.0.2:8080", 1)
topology.AssignTask("balance1")
// Try to get available disks for vacuum (conflicts with balance)
availableDisks := topology.GetAvailableDisks(TaskTypeVacuum, "")
// Source disk should not be available due to conflict
sourceDiskAvailable := false
for _, disk := range availableDisks {
if disk.NodeID == "10.0.0.1:8080" && disk.DiskID == 0 {
sourceDiskAvailable = true
break
}
}
assert.False(t, sourceDiskAvailable, "Source disk should not be available due to task conflict")
}
// TestPublicInterfaces tests the public interface methods
func TestPublicInterfaces(t *testing.T) {
topology := NewActiveTopology(10)
topology.UpdateTopology(createSampleTopology())
// Test GetAllNodes
nodes := topology.GetAllNodes()
assert.Equal(t, 2, len(nodes))
assert.Contains(t, nodes, "10.0.0.1:8080")
assert.Contains(t, nodes, "10.0.0.2:8080")
// Test GetNodeDisks
disks := topology.GetNodeDisks("10.0.0.1:8080")
assert.Equal(t, 2, len(disks))
// Test with non-existent node
disks = topology.GetNodeDisks("non-existent")
assert.Nil(t, disks)
}
// Helper functions to create test topologies
func createSampleTopology() *master_pb.TopologyInfo {
return &master_pb.TopologyInfo{
DataCenterInfos: []*master_pb.DataCenterInfo{
{
Id: "dc1",
RackInfos: []*master_pb.RackInfo{
{
Id: "rack1",
DataNodeInfos: []*master_pb.DataNodeInfo{
{
Id: "10.0.0.1:8080",
DiskInfos: map[string]*master_pb.DiskInfo{
"hdd": {DiskId: 0, VolumeCount: 10, MaxVolumeCount: 100},
"ssd": {DiskId: 1, VolumeCount: 5, MaxVolumeCount: 50},
},
},
{
Id: "10.0.0.2:8080",
DiskInfos: map[string]*master_pb.DiskInfo{
"hdd": {DiskId: 0, VolumeCount: 8, MaxVolumeCount: 100},
"ssd": {DiskId: 1, VolumeCount: 3, MaxVolumeCount: 50},
},
},
},
},
},
},
},
}
}
func createEmptyTopology() *master_pb.TopologyInfo {
return &master_pb.TopologyInfo{
DataCenterInfos: []*master_pb.DataCenterInfo{
{
Id: "dc1",
RackInfos: []*master_pb.RackInfo{
{
Id: "rack1",
DataNodeInfos: []*master_pb.DataNodeInfo{
{
Id: "10.0.0.1:8080",
DiskInfos: map[string]*master_pb.DiskInfo{
"hdd": {DiskId: 0, VolumeCount: 0, MaxVolumeCount: 100},
},
},
},
},
},
},
},
}
}
func createUnbalancedTopology() *master_pb.TopologyInfo {
return &master_pb.TopologyInfo{
DataCenterInfos: []*master_pb.DataCenterInfo{
{
Id: "dc1",
RackInfos: []*master_pb.RackInfo{
{
Id: "rack1",
DataNodeInfos: []*master_pb.DataNodeInfo{
{
Id: "10.0.0.1:8080",
DiskInfos: map[string]*master_pb.DiskInfo{
"hdd": {DiskId: 0, VolumeCount: 90, MaxVolumeCount: 100}, // Very loaded
},
},
{
Id: "10.0.0.2:8080",
DiskInfos: map[string]*master_pb.DiskInfo{
"hdd": {DiskId: 0, VolumeCount: 10, MaxVolumeCount: 100}, // Lightly loaded
},
},
},
},
},
},
},
}
}
func createHighGarbageTopology() *master_pb.TopologyInfo {
// In a real implementation, this would include volume-level garbage metrics
return createSampleTopology()
}
func createLargeVolumeTopology() *master_pb.TopologyInfo {
// In a real implementation, this would include volume-level size metrics
return createSampleTopology()
}
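// Both stubs above fall back to createSampleTopology() because these tests only
// exercise the recent-task bookkeeping, not the metric thresholds themselves.
// A rough sketch of what a metric-bearing variant could look like, assuming the
// master_pb fields VolumeInfos, Size, and DeletedByteCount exist as named
// (verify against the actual proto before relying on this):
//
// func createHighGarbageTopologyWithMetrics() *master_pb.TopologyInfo {
// topo := createSampleTopology()
// disk := topo.DataCenterInfos[0].RackInfos[0].DataNodeInfos[0].DiskInfos["hdd"]
// disk.VolumeInfos = append(disk.VolumeInfos, &master_pb.VolumeInformationMessage{
// Id: 1001,
// Size: 900 * 1024 * 1024, // ~900MB of data
// DeletedByteCount: 400 * 1024 * 1024, // ~44% garbage, above a typical 30% threshold
// })
// return topo
// }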
func createTopologyWithLoad() *ActiveTopology {
topology := NewActiveTopology(10)
topology.UpdateTopology(createSampleTopology())
// Add some existing tasks to create load
topology.AddPendingTask("existing1", TaskTypeVacuum, 2001,
"10.0.0.1:8080", 0, "", 0)
topology.AssignTask("existing1")
return topology
}
func createTopologyForEC() *ActiveTopology {
topology := NewActiveTopology(10)
topology.UpdateTopology(createSampleTopology())
return topology
}
func createTopologyWithConflicts() *ActiveTopology {
topology := NewActiveTopology(10)
topology.UpdateTopology(createSampleTopology())
// Add conflicting tasks
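// balance1 occupies node1/disk0 (source) and node2/disk0 (target), and ec1
// occupies node1/disk1, so only node2/disk1 stays conflict-free; this is why
// the vacuum scenario above expects exactly one available target.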
topology.AddPendingTask("balance1", TaskTypeBalance, 3001,
"10.0.0.1:8080", 0, "10.0.0.2:8080", 0)
topology.AssignTask("balance1")
topology.AddPendingTask("ec1", TaskTypeErasureCoding, 3002,
"10.0.0.1:8080", 1, "", 0)
topology.AssignTask("ec1")
return topology
}
// TestDestinationPlanning tests destination planning functionality
func TestDestinationPlanning(t *testing.T) {
topology := NewActiveTopology(10)
topology.UpdateTopology(createSampleTopology())
// Test balance destination planning
t.Run("Balance destination planning", func(t *testing.T) {
plan, err := topology.PlanBalanceDestination(1001, "10.0.0.1:8080", "rack1", "dc1", 1024*1024) // 1MB
require.NoError(t, err)
require.NotNil(t, plan)
// Should not target the source node
assert.NotEqual(t, "10.0.0.1:8080", plan.TargetNode)
assert.Equal(t, "10.0.0.2:8080", plan.TargetNode)
assert.NotEmpty(t, plan.TargetRack)
assert.NotEmpty(t, plan.TargetDC)
assert.Greater(t, plan.PlacementScore, 0.0)
})
// Test EC destination planning
t.Run("EC destination planning", func(t *testing.T) {
multiPlan, err := topology.PlanECDestinations(1002, "10.0.0.1:8080", "rack1", "dc1", 3) // Ask for 3 shards - source node can be included
require.NoError(t, err)
require.NotNil(t, multiPlan)
assert.Greater(t, len(multiPlan.Plans), 0)
assert.LessOrEqual(t, len(multiPlan.Plans), 3) // Should get at most 3 shards
assert.Equal(t, len(multiPlan.Plans), multiPlan.TotalShards)
// Check that all plans have valid target nodes
for _, plan := range multiPlan.Plans {
assert.NotEmpty(t, plan.TargetNode)
assert.NotEmpty(t, plan.TargetRack)
assert.NotEmpty(t, plan.TargetDC)
assert.GreaterOrEqual(t, plan.PlacementScore, 0.0)
}
// Check diversity metrics
assert.GreaterOrEqual(t, multiPlan.SuccessfulRack, 1)
assert.GreaterOrEqual(t, multiPlan.SuccessfulDCs, 1)
})
// Test destination planning with load
t.Run("Destination planning considers load", func(t *testing.T) {
// Add load to one disk
topology.AddPendingTask("task1", TaskTypeBalance, 2001,
"10.0.0.2:8080", 0, "", 0)
plan, err := topology.PlanBalanceDestination(1003, "10.0.0.1:8080", "rack1", "dc1", 1024*1024)
require.NoError(t, err)
require.NotNil(t, plan)
// Should prefer less loaded disk (disk 1 over disk 0 on node2)
assert.Equal(t, "10.0.0.2:8080", plan.TargetNode)
assert.Equal(t, uint32(1), plan.TargetDisk) // Should prefer SSD (disk 1) which has no load
})
// Test insufficient destinations
t.Run("Handle insufficient destinations", func(t *testing.T) {
// Try to plan for more EC shards than available disks
multiPlan, err := topology.PlanECDestinations(1004, "10.0.0.1:8080", "rack1", "dc1", 100)
// Should get an error for insufficient disks
assert.Error(t, err)
assert.Nil(t, multiPlan)
})
}
// TestDestinationPlanningWithActiveTopology tests the integration between task detection and destination planning
func TestDestinationPlanningWithActiveTopology(t *testing.T) {
topology := NewActiveTopology(10)
topology.UpdateTopology(createUnbalancedTopology())
// Test that tasks are created with destinations
t.Run("Balance task with destination", func(t *testing.T) {
// Simulate what the balance detector would create
sourceNode := "10.0.0.1:8080" // Overloaded node
volumeID := uint32(1001)
plan, err := topology.PlanBalanceDestination(volumeID, sourceNode, "rack1", "dc1", 1024*1024)
require.NoError(t, err)
require.NotNil(t, plan)
// Verify the destination is different from source
assert.NotEqual(t, sourceNode, plan.TargetNode)
assert.Equal(t, "10.0.0.2:8080", plan.TargetNode) // Should be the lightly loaded node
// Verify placement quality
assert.Greater(t, plan.PlacementScore, 0.0)
assert.LessOrEqual(t, plan.PlacementScore, 1.0)
})
// Test task state integration
t.Run("Task state affects future planning", func(t *testing.T) {
volumeID := uint32(1002)
sourceNode := "10.0.0.1:8080"
targetNode := "10.0.0.2:8080"
// Plan first destination
plan1, err := topology.PlanBalanceDestination(volumeID, sourceNode, "rack1", "dc1", 1024*1024)
require.NoError(t, err)
require.NotNil(t, plan1)
// Add a pending task to the target
topology.AddPendingTask("task1", TaskTypeBalance, volumeID, sourceNode, 0, targetNode, 0)
// Plan another destination - should consider the pending task load
plan2, err := topology.PlanBalanceDestination(1003, sourceNode, "rack1", "dc1", 1024*1024)
require.NoError(t, err)
require.NotNil(t, plan2)
// The placement score should reflect the increased load
// (This test might need adjustment based on the actual scoring algorithm)
glog.V(1).Infof("Plan1 score: %.3f, Plan2 score: %.3f", plan1.PlacementScore, plan2.PlacementScore)
})
}
// TestECDestinationPlanningDetailed tests the EC destination planning with multiple shards
func TestECDestinationPlanningDetailed(t *testing.T) {
topology := NewActiveTopology(10)
topology.UpdateTopology(createSampleTopology())
t.Run("EC multiple destinations", func(t *testing.T) {
// Plan for 3 EC shards (now including source node, we have 4 disks total)
multiPlan, err := topology.PlanECDestinations(1005, "10.0.0.1:8080", "rack1", "dc1", 3)
require.NoError(t, err)
require.NotNil(t, multiPlan)
// Should get 3 destinations (can include source node's disks)
assert.Equal(t, 3, len(multiPlan.Plans))
assert.Equal(t, 3, multiPlan.TotalShards)
// Count node distribution - source node can now be included
nodeCount := make(map[string]int)
for _, plan := range multiPlan.Plans {
nodeCount[plan.TargetNode]++
}
// Should distribute across available nodes (both nodes can be used)
assert.GreaterOrEqual(t, len(nodeCount), 1, "Should use at least 1 node")
assert.LessOrEqual(t, len(nodeCount), 2, "Should use at most 2 nodes")
glog.V(1).Infof("EC destinations node distribution: %v", nodeCount)
glog.V(1).Infof("EC destinations: %d plans across %d racks, %d DCs",
multiPlan.TotalShards, multiPlan.SuccessfulRack, multiPlan.SuccessfulDCs)
})
t.Run("EC destination planning with task conflicts", func(t *testing.T) {
// Create a fresh topology for this test to avoid conflicts from previous test
freshTopology := NewActiveTopology(10)
freshTopology.UpdateTopology(createSampleTopology())
// Add tasks to create conflicts on some disks
freshTopology.AddPendingTask("conflict1", TaskTypeVacuum, 2001, "10.0.0.2:8080", 0, "", 0)
freshTopology.AddPendingTask("conflict2", TaskTypeBalance, 2002, "10.0.0.1:8080", 0, "", 0)
freshTopology.AssignTask("conflict1")
freshTopology.AssignTask("conflict2")
// Plan EC destinations - should still succeed using available disks
multiPlan, err := freshTopology.PlanECDestinations(1006, "10.0.0.1:8080", "rack1", "dc1", 2)
require.NoError(t, err)
require.NotNil(t, multiPlan)
// Should get destinations (using disks that don't have conflicts)
assert.GreaterOrEqual(t, len(multiPlan.Plans), 1)
assert.LessOrEqual(t, len(multiPlan.Plans), 2)
// Available disks should be node1/disk1 and node2/disk1, since disk0 on both nodes has conflicts
for _, plan := range multiPlan.Plans {
assert.Equal(t, uint32(1), plan.TargetDisk, "Should prefer disk 1 which has no conflicts")
}
glog.V(1).Infof("EC destination planning with conflicts: found %d destinations", len(multiPlan.Plans))
})
}
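// Taken together, a detector would typically gate on recent-task history and
// then plan a destination. A minimal sketch using only the methods exercised
// above (a hypothetical helper, with error handling elided):
//
// func maybePlanBalance(t *ActiveTopology, volumeID uint32, source string) *DestinationPlan {
// if t.HasRecentTaskForVolume(volumeID, TaskTypeBalance) {
// return nil // recently handled; skip re-detection
// }
// plan, err := t.PlanBalanceDestination(volumeID, source, "rack1", "dc1", 1024*1024)
// if err != nil {
// return nil
// }
// return plan
// }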


@@ -22,7 +22,7 @@ templ ClusterCollections(data dash.ClusterCollectionsData) {
<div id="collections-content">
<!-- Summary Cards -->
<div class="row mb-4">
<div class="col-xl-3 col-md-6 mb-4">
<div class="col-xl-2 col-lg-3 col-md-4 col-sm-6 mb-4">
<div class="card border-left-primary shadow h-100 py-2">
<div class="card-body">
<div class="row no-gutters align-items-center">
@@ -42,13 +42,13 @@ templ ClusterCollections(data dash.ClusterCollectionsData) {
</div>
</div>
<div class="col-xl-3 col-md-6 mb-4">
<div class="col-xl-2 col-lg-3 col-md-4 col-sm-6 mb-4">
<div class="card border-left-info shadow h-100 py-2">
<div class="card-body">
<div class="row no-gutters align-items-center">
<div class="col mr-2">
<div class="text-xs font-weight-bold text-info text-uppercase mb-1">
-Total Volumes
+Regular Volumes
</div>
<div class="h5 mb-0 font-weight-bold text-gray-800">
{fmt.Sprintf("%d", data.TotalVolumes)}
@@ -62,7 +62,27 @@ templ ClusterCollections(data dash.ClusterCollectionsData) {
</div>
</div>
<div class="col-xl-3 col-md-6 mb-4">
<div class="col-xl-2 col-lg-3 col-md-4 col-sm-6 mb-4">
<div class="card border-left-success shadow h-100 py-2">
<div class="card-body">
<div class="row no-gutters align-items-center">
<div class="col mr-2">
<div class="text-xs font-weight-bold text-success text-uppercase mb-1">
EC Volumes
</div>
<div class="h5 mb-0 font-weight-bold text-gray-800">
{fmt.Sprintf("%d", data.TotalEcVolumes)}
</div>
</div>
<div class="col-auto">
<i class="fas fa-th-large fa-2x text-gray-300"></i>
</div>
</div>
</div>
</div>
</div>
<div class="col-xl-2 col-lg-3 col-md-4 col-sm-6 mb-4">
<div class="card border-left-warning shadow h-100 py-2">
<div class="card-body">
<div class="row no-gutters align-items-center">
@@ -76,19 +96,19 @@ templ ClusterCollections(data dash.ClusterCollectionsData) {
</div>
<div class="col-auto">
<i class="fas fa-file fa-2x text-gray-300"></i>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="col-xl-3 col-md-6 mb-4">
<div class="col-xl-2 col-lg-3 col-md-4 col-sm-6 mb-4">
<div class="card border-left-secondary shadow h-100 py-2">
<div class="card-body">
<div class="row no-gutters align-items-center">
<div class="col mr-2">
<div class="text-xs font-weight-bold text-secondary text-uppercase mb-1">
-Total Storage Size
+Total Storage Size (Logical)
</div>
<div class="h5 mb-0 font-weight-bold text-gray-800">
{formatBytes(data.TotalSize)}
@@ -117,9 +137,10 @@ templ ClusterCollections(data dash.ClusterCollectionsData) {
<thead>
<tr>
<th>Collection Name</th>
-<th>Volumes</th>
+<th>Regular Volumes</th>
+<th>EC Volumes</th>
<th>Files</th>
-<th>Size</th>
+<th>Size (Logical)</th>
<th>Disk Types</th>
<th>Actions</th>
</tr>
@@ -128,7 +149,7 @@ templ ClusterCollections(data dash.ClusterCollectionsData) {
for _, collection := range data.Collections {
<tr>
<td>
<a href={templ.SafeURL(fmt.Sprintf("/cluster/volumes?collection=%s", collection.Name))} class="text-decoration-none">
<a href={templ.SafeURL(fmt.Sprintf("/cluster/collections/%s", collection.Name))} class="text-decoration-none">
<strong>{collection.Name}</strong>
</a>
</td>
@@ -136,7 +157,23 @@ templ ClusterCollections(data dash.ClusterCollectionsData) {
<a href={templ.SafeURL(fmt.Sprintf("/cluster/volumes?collection=%s", collection.Name))} class="text-decoration-none">
<div class="d-flex align-items-center">
<i class="fas fa-database me-2 text-muted"></i>
{fmt.Sprintf("%d", collection.VolumeCount)}
if collection.VolumeCount > 0 {
{fmt.Sprintf("%d", collection.VolumeCount)}
} else {
<span class="text-muted">0</span>
}
</div>
</a>
</td>
+<td>
+<a href={templ.SafeURL(fmt.Sprintf("/cluster/ec-shards?collection=%s", collection.Name))} class="text-decoration-none">
+<div class="d-flex align-items-center">
+<i class="fas fa-th-large me-2 text-muted"></i>
+if collection.EcVolumeCount > 0 {
+{fmt.Sprintf("%d", collection.EcVolumeCount)}
+} else {
+<span class="text-muted">0</span>
+}
+</div>
+</a>
+</td>
@@ -171,6 +208,7 @@ templ ClusterCollections(data dash.ClusterCollectionsData) {
data-name={collection.Name}
data-datacenter={collection.DataCenter}
data-volume-count={fmt.Sprintf("%d", collection.VolumeCount)}
+data-ec-volume-count={fmt.Sprintf("%d", collection.EcVolumeCount)}
data-file-count={fmt.Sprintf("%d", collection.FileCount)}
data-total-size={fmt.Sprintf("%d", collection.TotalSize)}
data-disk-types={formatDiskTypes(collection.DiskTypes)}>
@@ -223,6 +261,7 @@ templ ClusterCollections(data dash.ClusterCollectionsData) {
name: button.getAttribute('data-name'),
datacenter: button.getAttribute('data-datacenter'),
volumeCount: parseInt(button.getAttribute('data-volume-count')),
+ecVolumeCount: parseInt(button.getAttribute('data-ec-volume-count')),
fileCount: parseInt(button.getAttribute('data-file-count')),
totalSize: parseInt(button.getAttribute('data-total-size')),
diskTypes: button.getAttribute('data-disk-types')
@@ -260,19 +299,25 @@ templ ClusterCollections(data dash.ClusterCollectionsData) {
'<div class="col-md-6">' +
'<h6 class="text-primary"><i class="fas fa-chart-bar me-1"></i>Storage Statistics</h6>' +
'<table class="table table-sm">' +
-'<tr><td><strong>Total Volumes:</strong></td><td>' +
+'<tr><td><strong>Regular Volumes:</strong></td><td>' +
'<div class="d-flex align-items-center">' +
'<i class="fas fa-database me-2 text-muted"></i>' +
'<span>' + collection.volumeCount.toLocaleString() + '</span>' +
'</div>' +
'</td></tr>' +
+'<tr><td><strong>EC Volumes:</strong></td><td>' +
+'<div class="d-flex align-items-center">' +
+'<i class="fas fa-th-large me-2 text-muted"></i>' +
+'<span>' + collection.ecVolumeCount.toLocaleString() + '</span>' +
+'</div>' +
+'</td></tr>' +
'<tr><td><strong>Total Files:</strong></td><td>' +
'<div class="d-flex align-items-center">' +
'<i class="fas fa-file me-2 text-muted"></i>' +
'<span>' + collection.fileCount.toLocaleString() + '</span>' +
'</div>' +
'</td></tr>' +
-'<tr><td><strong>Total Size:</strong></td><td>' +
+'<tr><td><strong>Total Size (Logical):</strong></td><td>' +
'<div class="d-flex align-items-center">' +
'<i class="fas fa-hdd me-2 text-muted"></i>' +
'<span>' + formatBytes(collection.totalSize) + '</span>' +
@@ -288,6 +333,9 @@ templ ClusterCollections(data dash.ClusterCollectionsData) {
'<a href="/cluster/volumes?collection=' + encodeURIComponent(collection.name) + '" class="btn btn-outline-primary">' +
'<i class="fas fa-database me-1"></i>View Volumes' +
'</a>' +
'<a href="/cluster/ec-shards?collection=' + encodeURIComponent(collection.name) + '" class="btn btn-outline-secondary">' +
'<i class="fas fa-th-large me-1"></i>View EC Volumes' +
'</a>' +
'<a href="/files?collection=' + encodeURIComponent(collection.name) + '" class="btn btn-outline-info">' +
'<i class="fas fa-folder me-1"></i>Browse Files' +
'</a>' +
@@ -295,6 +343,7 @@ templ ClusterCollections(data dash.ClusterCollectionsData) {
'</div>' +
'</div>' +
'</div>' +
+'</div>' +
'<div class="modal-footer">' +
'<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Close</button>' +
'</div>' +

File diff suppressed because one or more lines are too long


@@ -0,0 +1,455 @@
package app
import (
"fmt"
"github.com/seaweedfs/seaweedfs/weed/admin/dash"
)
templ ClusterEcShards(data dash.ClusterEcShardsData) {
<div class="d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pt-3 pb-2 mb-3 border-bottom">
<div>
<h1 class="h2">
<i class="fas fa-th-large me-2"></i>EC Shards
</h1>
if data.FilterCollection != "" {
<div class="d-flex align-items-center mt-2">
if data.FilterCollection == "default" {
<span class="badge bg-secondary text-white me-2">
<i class="fas fa-filter me-1"></i>Collection: default
</span>
} else {
<span class="badge bg-info text-white me-2">
<i class="fas fa-filter me-1"></i>Collection: {data.FilterCollection}
</span>
}
<a href="/cluster/ec-shards" class="btn btn-sm btn-outline-secondary">
<i class="fas fa-times me-1"></i>Clear Filter
</a>
</div>
}
</div>
<div class="btn-toolbar mb-2 mb-md-0">
<div class="btn-group me-2">
<select class="form-select form-select-sm me-2" id="pageSizeSelect" onchange="changePageSize()" style="width: auto;">
<option value="50" if data.PageSize == 50 { selected="selected" }>50 per page</option>
<option value="100" if data.PageSize == 100 { selected="selected" }>100 per page</option>
<option value="200" if data.PageSize == 200 { selected="selected" }>200 per page</option>
<option value="500" if data.PageSize == 500 { selected="selected" }>500 per page</option>
</select>
<button type="button" class="btn btn-sm btn-outline-primary" onclick="exportEcShards()">
<i class="fas fa-download me-1"></i>Export
</button>
</div>
</div>
</div>
<!-- Statistics Cards -->
<div class="row mb-4">
<div class="col-md-3">
<div class="card text-bg-primary">
<div class="card-body">
<div class="d-flex justify-content-between">
<div>
<h6 class="card-title">Total Shards</h6>
<h4 class="mb-0">{fmt.Sprintf("%d", data.TotalShards)}</h4>
</div>
<div class="align-self-center">
<i class="fas fa-puzzle-piece fa-2x"></i>
</div>
</div>
</div>
</div>
</div>
<div class="col-md-3">
<div class="card text-bg-info">
<div class="card-body">
<div class="d-flex justify-content-between">
<div>
<h6 class="card-title">EC Volumes</h6>
<h4 class="mb-0">{fmt.Sprintf("%d", data.TotalVolumes)}</h4>
</div>
<div class="align-self-center">
<i class="fas fa-database fa-2x"></i>
</div>
</div>
</div>
</div>
</div>
<div class="col-md-3">
<div class="card text-bg-success">
<div class="card-body">
<div class="d-flex justify-content-between">
<div>
<h6 class="card-title">Healthy Volumes</h6>
<h4 class="mb-0">{fmt.Sprintf("%d", data.VolumesWithAllShards)}</h4>
<small>Complete (14/14 shards)</small>
</div>
<div class="align-self-center">
<i class="fas fa-check-circle fa-2x"></i>
</div>
</div>
</div>
</div>
</div>
<div class="col-md-3">
<div class="card text-bg-warning">
<div class="card-body">
<div class="d-flex justify-content-between">
<div>
<h6 class="card-title">Degraded Volumes</h6>
<h4 class="mb-0">{fmt.Sprintf("%d", data.VolumesWithMissingShards)}</h4>
<small>Incomplete/Critical</small>
</div>
<div class="align-self-center">
<i class="fas fa-exclamation-triangle fa-2x"></i>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Shards Table -->
<div class="table-responsive">
<table class="table table-striped table-hover" id="ecShardsTable">
<thead>
<tr>
<th>
<a href="#" onclick="sortBy('volume_id')" class="text-dark text-decoration-none">
Volume ID
if data.SortBy == "volume_id" {
if data.SortOrder == "asc" {
<i class="fas fa-sort-up ms-1"></i>
} else {
<i class="fas fa-sort-down ms-1"></i>
}
} else {
<i class="fas fa-sort ms-1 text-muted"></i>
}
</a>
</th>
if data.ShowCollectionColumn {
<th>
<a href="#" onclick="sortBy('collection')" class="text-dark text-decoration-none">
Collection
if data.SortBy == "collection" {
if data.SortOrder == "asc" {
<i class="fas fa-sort-up ms-1"></i>
} else {
<i class="fas fa-sort-down ms-1"></i>
}
} else {
<i class="fas fa-sort ms-1 text-muted"></i>
}
</a>
</th>
}
<th>
<a href="#" onclick="sortBy('server')" class="text-dark text-decoration-none">
Server
if data.SortBy == "server" {
if data.SortOrder == "asc" {
<i class="fas fa-sort-up ms-1"></i>
} else {
<i class="fas fa-sort-down ms-1"></i>
}
} else {
<i class="fas fa-sort ms-1 text-muted"></i>
}
</a>
</th>
if data.ShowDataCenterColumn {
<th>
<a href="#" onclick="sortBy('datacenter')" class="text-dark text-decoration-none">
Data Center
if data.SortBy == "datacenter" {
if data.SortOrder == "asc" {
<i class="fas fa-sort-up ms-1"></i>
} else {
<i class="fas fa-sort-down ms-1"></i>
}
} else {
<i class="fas fa-sort ms-1 text-muted"></i>
}
</a>
</th>
}
if data.ShowRackColumn {
<th>
<a href="#" onclick="sortBy('rack')" class="text-dark text-decoration-none">
Rack
if data.SortBy == "rack" {
if data.SortOrder == "asc" {
<i class="fas fa-sort-up ms-1"></i>
} else {
<i class="fas fa-sort-down ms-1"></i>
}
} else {
<i class="fas fa-sort ms-1 text-muted"></i>
}
</a>
</th>
}
<th class="text-dark">Distribution</th>
<th class="text-dark">Status</th>
<th class="text-dark">Actions</th>
</tr>
</thead>
<tbody>
for _, shard := range data.EcShards {
<tr>
<td>
<span class="fw-bold">{fmt.Sprintf("%d", shard.VolumeID)}</span>
</td>
if data.ShowCollectionColumn {
<td>
if shard.Collection != "" {
<a href="/cluster/ec-shards?collection={shard.Collection}" class="text-decoration-none">
<span class="badge bg-info text-white">{shard.Collection}</span>
</a>
} else {
<a href="/cluster/ec-shards?collection=default" class="text-decoration-none">
<span class="badge bg-secondary text-white">default</span>
</a>
}
</td>
}
<td>
<code class="small">{shard.Server}</code>
</td>
if data.ShowDataCenterColumn {
<td>
<span class="badge bg-outline-primary">{shard.DataCenter}</span>
</td>
}
if data.ShowRackColumn {
<td>
<span class="badge bg-outline-secondary">{shard.Rack}</span>
</td>
}
<td>
@displayShardDistribution(shard, data.EcShards)
</td>
<td>
@displayVolumeStatus(shard)
</td>
<td>
<div class="btn-group" role="group">
<button type="button" class="btn btn-sm btn-outline-primary"
onclick="showShardDetails(event)"
data-volume-id={ fmt.Sprintf("%d", shard.VolumeID) }
title="View EC volume details">
<i class="fas fa-info-circle"></i>
</button>
if !shard.IsComplete {
<button type="button" class="btn btn-sm btn-outline-warning"
onclick="repairVolume(event)"
data-volume-id={ fmt.Sprintf("%d", shard.VolumeID) }
title="Repair missing shards">
<i class="fas fa-wrench"></i>
</button>
}
</div>
</td>
</tr>
}
</tbody>
</table>
</div>
<!-- Pagination -->
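<!-- Windowed pagination: prev arrow; first page plus ellipsis when far from the start; previous/current/next; ellipsis plus last page when far from the end; next arrow -->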
if data.TotalPages > 1 {
<nav aria-label="EC Shards pagination">
<ul class="pagination justify-content-center">
if data.CurrentPage > 1 {
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.CurrentPage-1) }>
<i class="fas fa-chevron-left"></i>
</a>
</li>
}
<!-- First page -->
if data.CurrentPage > 3 {
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(1)">1</a>
</li>
if data.CurrentPage > 4 {
<li class="page-item disabled">
<span class="page-link">...</span>
</li>
}
}
<!-- Current page and neighbors -->
if data.CurrentPage > 1 && data.CurrentPage-1 >= 1 {
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.CurrentPage-1) }>{fmt.Sprintf("%d", data.CurrentPage-1)}</a>
</li>
}
<li class="page-item active">
<span class="page-link">{fmt.Sprintf("%d", data.CurrentPage)}</span>
</li>
if data.CurrentPage < data.TotalPages && data.CurrentPage+1 <= data.TotalPages {
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.CurrentPage+1) }>{fmt.Sprintf("%d", data.CurrentPage+1)}</a>
</li>
}
<!-- Last page -->
if data.CurrentPage < data.TotalPages-2 {
if data.CurrentPage < data.TotalPages-3 {
<li class="page-item disabled">
<span class="page-link">...</span>
</li>
}
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.TotalPages) }>{fmt.Sprintf("%d", data.TotalPages)}</a>
</li>
}
if data.CurrentPage < data.TotalPages {
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.CurrentPage+1) }>
<i class="fas fa-chevron-right"></i>
</a>
</li>
}
</ul>
</nav>
}
<!-- JavaScript -->
<script>
function sortBy(field) {
const currentSort = "{data.SortBy}";
const currentOrder = "{data.SortOrder}";
let newOrder = 'asc';
if (currentSort === field && currentOrder === 'asc') {
newOrder = 'desc';
}
updateUrl({
sortBy: field,
sortOrder: newOrder,
page: 1
});
}
function goToPage(event) {
// Get data from the link element (not any child elements)
const link = event.target.closest('a');
const page = link.getAttribute('data-page');
updateUrl({ page: page });
}
function changePageSize() {
const pageSize = document.getElementById('pageSizeSelect').value;
updateUrl({ pageSize: pageSize, page: 1 });
}
function updateUrl(params) {
const url = new URL(window.location);
Object.keys(params).forEach(key => {
if (params[key]) {
url.searchParams.set(key, params[key]);
} else {
url.searchParams.delete(key);
}
});
window.location.href = url.toString();
}
function exportEcShards() {
const url = new URL('/api/cluster/ec-shards/export', window.location.origin);
const params = new URLSearchParams(window.location.search);
params.forEach((value, key) => {
url.searchParams.set(key, value);
});
window.open(url.toString(), '_blank');
}
function showShardDetails(event) {
// Get data from the button element (not the icon inside it)
const button = event.target.closest('button');
const volumeId = button.getAttribute('data-volume-id');
// Navigate to the EC volume details page
window.location.href = `/cluster/ec-volumes/${volumeId}`;
}
function repairVolume(event) {
// Get data from the button element (not the icon inside it)
const button = event.target.closest('button');
const volumeId = button.getAttribute('data-volume-id');
if (confirm(`Are you sure you want to repair missing shards for volume ${volumeId}?`)) {
fetch(`/api/cluster/volumes/${volumeId}/repair`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
}
})
.then(response => response.json())
.then(data => {
if (data.success) {
alert('Repair initiated successfully');
location.reload();
} else {
alert('Failed to initiate repair: ' + data.error);
}
})
.catch(error => {
alert('Error: ' + error.message);
});
}
}
</script>
}
// displayShardDistribution shows the distribution summary for a volume's shards
templ displayShardDistribution(shard dash.EcShardWithInfo, allShards []dash.EcShardWithInfo) {
<div class="small">
<i class="fas fa-sitemap me-1"></i>
{ calculateDistributionSummary(shard.VolumeID, allShards) }
</div>
}
// displayVolumeStatus shows an improved status display
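// The severity buckets below assume the default 10+4 erasure coding layout
// (14 shards total, matching the "14/14" label above). With 4 parity shards,
// a volume missing more than 4 shards is already unrecoverable, so these
// thresholds express triage urgency rather than recoverability.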
templ displayVolumeStatus(shard dash.EcShardWithInfo) {
if shard.IsComplete {
<span class="badge bg-success"><i class="fas fa-check me-1"></i>Complete</span>
} else {
if len(shard.MissingShards) > 10 {
<span class="badge bg-danger"><i class="fas fa-skull me-1"></i>Critical ({fmt.Sprintf("%d", len(shard.MissingShards))} missing)</span>
} else if len(shard.MissingShards) > 6 {
<span class="badge bg-warning"><i class="fas fa-exclamation-triangle me-1"></i>Degraded ({fmt.Sprintf("%d", len(shard.MissingShards))} missing)</span>
} else if len(shard.MissingShards) > 2 {
<span class="badge bg-warning"><i class="fas fa-info-circle me-1"></i>Incomplete ({fmt.Sprintf("%d", len(shard.MissingShards))} missing)</span>
} else {
<span class="badge bg-info"><i class="fas fa-info-circle me-1"></i>Minor Issues ({fmt.Sprintf("%d", len(shard.MissingShards))} missing)</span>
}
}
}
// calculateDistributionSummary calculates and formats the distribution summary
func calculateDistributionSummary(volumeID uint32, allShards []dash.EcShardWithInfo) string {
dataCenters := make(map[string]bool)
racks := make(map[string]bool)
servers := make(map[string]bool)
for _, s := range allShards {
if s.VolumeID == volumeID {
dataCenters[s.DataCenter] = true
racks[s.Rack] = true
servers[s.Server] = true
}
}
return fmt.Sprintf("%d DCs, %d racks, %d servers", len(dataCenters), len(racks), len(servers))
}
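// Note: this summary is recomputed for every rendered row and scans the full
// shard slice each time (O(rows × shards)); if page sizes grow, the per-volume
// counts could be precomputed once into a map keyed by VolumeID.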


@@ -0,0 +1,840 @@
// Code generated by templ - DO NOT EDIT.
// templ: version: v0.3.906
package app
//lint:file-ignore SA4006 This context is only used if a nested component is present.
import "github.com/a-h/templ"
import templruntime "github.com/a-h/templ/runtime"
import (
"fmt"
"github.com/seaweedfs/seaweedfs/weed/admin/dash"
)
func ClusterEcShards(data dash.ClusterEcShardsData) templ.Component {
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
return templ_7745c5c3_CtxErr
}
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
if !templ_7745c5c3_IsBuffer {
defer func() {
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
if templ_7745c5c3_Err == nil {
templ_7745c5c3_Err = templ_7745c5c3_BufErr
}
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var1 := templ.GetChildren(ctx)
if templ_7745c5c3_Var1 == nil {
templ_7745c5c3_Var1 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 1, "<div class=\"d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pt-3 pb-2 mb-3 border-bottom\"><div><h1 class=\"h2\"><i class=\"fas fa-th-large me-2\"></i>EC Shards</h1>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.FilterCollection != "" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 2, "<div class=\"d-flex align-items-center mt-2\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.FilterCollection == "default" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 3, "<span class=\"badge bg-secondary text-white me-2\"><i class=\"fas fa-filter me-1\"></i>Collection: default</span> ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 4, "<span class=\"badge bg-info text-white me-2\"><i class=\"fas fa-filter me-1\"></i>Collection: ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var2 string
templ_7745c5c3_Var2, templ_7745c5c3_Err = templ.JoinStringErrs(data.FilterCollection)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 22, Col: 96}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var2))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 5, "</span> ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 6, "<a href=\"/cluster/ec-shards\" class=\"btn btn-sm btn-outline-secondary\"><i class=\"fas fa-times me-1\"></i>Clear Filter</a></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 7, "</div><div class=\"btn-toolbar mb-2 mb-md-0\"><div class=\"btn-group me-2\"><select class=\"form-select form-select-sm me-2\" id=\"pageSizeSelect\" onchange=\"changePageSize()\" style=\"width: auto;\"><option value=\"50\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.PageSize == 50 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 8, " selected=\"selected\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 9, ">50 per page</option> <option value=\"100\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.PageSize == 100 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 10, " selected=\"selected\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 11, ">100 per page</option> <option value=\"200\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.PageSize == 200 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 12, " selected=\"selected\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 13, ">200 per page</option> <option value=\"500\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.PageSize == 500 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 14, " selected=\"selected\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 15, ">500 per page</option></select> <button type=\"button\" class=\"btn btn-sm btn-outline-primary\" onclick=\"exportEcShards()\"><i class=\"fas fa-download me-1\"></i>Export</button></div></div></div><!-- Statistics Cards --><div class=\"row mb-4\"><div class=\"col-md-3\"><div class=\"card text-bg-primary\"><div class=\"card-body\"><div class=\"d-flex justify-content-between\"><div><h6 class=\"card-title\">Total Shards</h6><h4 class=\"mb-0\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var3 string
templ_7745c5c3_Var3, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.TotalShards))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 54, Col: 81}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var3))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 16, "</h4></div><div class=\"align-self-center\"><i class=\"fas fa-puzzle-piece fa-2x\"></i></div></div></div></div></div><div class=\"col-md-3\"><div class=\"card text-bg-info\"><div class=\"card-body\"><div class=\"d-flex justify-content-between\"><div><h6 class=\"card-title\">EC Volumes</h6><h4 class=\"mb-0\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var4 string
templ_7745c5c3_Var4, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.TotalVolumes))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 69, Col: 82}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var4))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 17, "</h4></div><div class=\"align-self-center\"><i class=\"fas fa-database fa-2x\"></i></div></div></div></div></div><div class=\"col-md-3\"><div class=\"card text-bg-success\"><div class=\"card-body\"><div class=\"d-flex justify-content-between\"><div><h6 class=\"card-title\">Healthy Volumes</h6><h4 class=\"mb-0\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var5 string
templ_7745c5c3_Var5, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.VolumesWithAllShards))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 84, Col: 90}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var5))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 18, "</h4><small>Complete (14/14 shards)</small></div><div class=\"align-self-center\"><i class=\"fas fa-check-circle fa-2x\"></i></div></div></div></div></div><div class=\"col-md-3\"><div class=\"card text-bg-warning\"><div class=\"card-body\"><div class=\"d-flex justify-content-between\"><div><h6 class=\"card-title\">Degraded Volumes</h6><h4 class=\"mb-0\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var6 string
templ_7745c5c3_Var6, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.VolumesWithMissingShards))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 100, Col: 94}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var6))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 19, "</h4><small>Incomplete/Critical</small></div><div class=\"align-self-center\"><i class=\"fas fa-exclamation-triangle fa-2x\"></i></div></div></div></div></div></div><!-- Shards Table --><div class=\"table-responsive\"><table class=\"table table-striped table-hover\" id=\"ecShardsTable\"><thead><tr><th><a href=\"#\" onclick=\"sortBy('volume_id')\" class=\"text-dark text-decoration-none\">Volume ID ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.SortBy == "volume_id" {
if data.SortOrder == "asc" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 20, "<i class=\"fas fa-sort-up ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 21, "<i class=\"fas fa-sort-down ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 22, "<i class=\"fas fa-sort ms-1 text-muted\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 23, "</a></th>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.ShowCollectionColumn {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 24, "<th><a href=\"#\" onclick=\"sortBy('collection')\" class=\"text-dark text-decoration-none\">Collection ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.SortBy == "collection" {
if data.SortOrder == "asc" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 25, "<i class=\"fas fa-sort-up ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 26, "<i class=\"fas fa-sort-down ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 27, "<i class=\"fas fa-sort ms-1 text-muted\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 28, "</a></th>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 29, "<th><a href=\"#\" onclick=\"sortBy('server')\" class=\"text-dark text-decoration-none\">Server ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.SortBy == "server" {
if data.SortOrder == "asc" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 30, "<i class=\"fas fa-sort-up ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 31, "<i class=\"fas fa-sort-down ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 32, "<i class=\"fas fa-sort ms-1 text-muted\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 33, "</a></th>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.ShowDataCenterColumn {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 34, "<th><a href=\"#\" onclick=\"sortBy('datacenter')\" class=\"text-dark text-decoration-none\">Data Center ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.SortBy == "datacenter" {
if data.SortOrder == "asc" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 35, "<i class=\"fas fa-sort-up ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 36, "<i class=\"fas fa-sort-down ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 37, "<i class=\"fas fa-sort ms-1 text-muted\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 38, "</a></th>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
if data.ShowRackColumn {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 39, "<th><a href=\"#\" onclick=\"sortBy('rack')\" class=\"text-dark text-decoration-none\">Rack ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.SortBy == "rack" {
if data.SortOrder == "asc" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 40, "<i class=\"fas fa-sort-up ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 41, "<i class=\"fas fa-sort-down ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 42, "<i class=\"fas fa-sort ms-1 text-muted\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 43, "</a></th>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 44, "<th class=\"text-dark\">Distribution</th><th class=\"text-dark\">Status</th><th class=\"text-dark\">Actions</th></tr></thead> <tbody>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, shard := range data.EcShards {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 45, "<tr><td><span class=\"fw-bold\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var7 string
templ_7745c5c3_Var7, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", shard.VolumeID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 203, Col: 84}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var7))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 46, "</span></td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.ShowCollectionColumn {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 47, "<td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if shard.Collection != "" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 48, "<a href=\"/cluster/ec-shards?collection={shard.Collection}\" class=\"text-decoration-none\"><span class=\"badge bg-info text-white\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var8 string
templ_7745c5c3_Var8, templ_7745c5c3_Err = templ.JoinStringErrs(shard.Collection)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 209, Col: 96}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var8))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 49, "</span></a>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 50, "<a href=\"/cluster/ec-shards?collection=default\" class=\"text-decoration-none\"><span class=\"badge bg-secondary text-white\">default</span></a>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 51, "</td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 52, "<td><code class=\"small\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var9 string
templ_7745c5c3_Var9, templ_7745c5c3_Err = templ.JoinStringErrs(shard.Server)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 219, Col: 61}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var9))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 53, "</code></td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.ShowDataCenterColumn {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 54, "<td><span class=\"badge bg-outline-primary\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var10 string
templ_7745c5c3_Var10, templ_7745c5c3_Err = templ.JoinStringErrs(shard.DataCenter)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 223, Col: 88}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var10))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 55, "</span></td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
if data.ShowRackColumn {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 56, "<td><span class=\"badge bg-outline-secondary\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var11 string
templ_7745c5c3_Var11, templ_7745c5c3_Err = templ.JoinStringErrs(shard.Rack)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 228, Col: 84}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var11))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 57, "</span></td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 58, "<td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = displayShardDistribution(shard, data.EcShards).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 59, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = displayVolumeStatus(shard).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 60, "</td><td><div class=\"btn-group\" role=\"group\"><button type=\"button\" class=\"btn btn-sm btn-outline-primary\" onclick=\"showShardDetails(event)\" data-volume-id=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var12 string
templ_7745c5c3_Var12, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", shard.VolumeID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 241, Col: 90}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var12))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 61, "\" title=\"View EC volume details\"><i class=\"fas fa-info-circle\"></i></button> ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if !shard.IsComplete {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 62, "<button type=\"button\" class=\"btn btn-sm btn-outline-warning\" onclick=\"repairVolume(event)\" data-volume-id=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var13 string
templ_7745c5c3_Var13, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", shard.VolumeID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 248, Col: 94}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var13))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 63, "\" title=\"Repair missing shards\"><i class=\"fas fa-wrench\"></i></button>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 64, "</div></td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 65, "</tbody></table></div><!-- Pagination -->")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.TotalPages > 1 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 66, "<nav aria-label=\"EC Shards pagination\"><ul class=\"pagination justify-content-center\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.CurrentPage > 1 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 67, "<li class=\"page-item\"><a class=\"page-link\" href=\"#\" onclick=\"goToPage(event)\" data-page=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var14 string
templ_7745c5c3_Var14, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.CurrentPage-1))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 267, Col: 129}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var14))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 68, "\"><i class=\"fas fa-chevron-left\"></i></a></li>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 69, "<!-- First page -->")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.CurrentPage > 3 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 70, "<li class=\"page-item\"><a class=\"page-link\" href=\"#\" onclick=\"goToPage(1)\">1</a></li>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.CurrentPage > 4 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 71, "<li class=\"page-item disabled\"><span class=\"page-link\">...</span></li>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 72, "<!-- Current page and neighbors -->")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.CurrentPage > 1 && data.CurrentPage-1 >= 1 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 73, "<li class=\"page-item\"><a class=\"page-link\" href=\"#\" onclick=\"goToPage(event)\" data-page=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var15 string
templ_7745c5c3_Var15, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.CurrentPage-1))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 288, Col: 129}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var15))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 74, "\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var16 string
templ_7745c5c3_Var16, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.CurrentPage-1))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 288, Col: 170}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var16))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 75, "</a></li>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 76, "<li class=\"page-item active\"><span class=\"page-link\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var17 string
templ_7745c5c3_Var17, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.CurrentPage))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 293, Col: 80}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var17))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 77, "</span></li>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.CurrentPage < data.TotalPages && data.CurrentPage+1 <= data.TotalPages {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 78, "<li class=\"page-item\"><a class=\"page-link\" href=\"#\" onclick=\"goToPage(event)\" data-page=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var18 string
templ_7745c5c3_Var18, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.CurrentPage+1))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 298, Col: 129}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var18))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 79, "\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var19 string
templ_7745c5c3_Var19, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.CurrentPage+1))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 298, Col: 170}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var19))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 80, "</a></li>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 81, "<!-- Last page -->")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.CurrentPage < data.TotalPages-2 {
if data.CurrentPage < data.TotalPages-3 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 82, "<li class=\"page-item disabled\"><span class=\"page-link\">...</span></li>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 83, " <li class=\"page-item\"><a class=\"page-link\" href=\"#\" onclick=\"goToPage(event)\" data-page=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var20 string
templ_7745c5c3_Var20, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.TotalPages))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 310, Col: 126}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var20))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 84, "\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var21 string
templ_7745c5c3_Var21, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.TotalPages))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 310, Col: 164}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var21))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 85, "</a></li>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
if data.CurrentPage < data.TotalPages {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 86, "<li class=\"page-item\"><a class=\"page-link\" href=\"#\" onclick=\"goToPage(event)\" data-page=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var22 string
templ_7745c5c3_Var22, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.CurrentPage+1))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 316, Col: 129}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var22))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 87, "\"><i class=\"fas fa-chevron-right\"></i></a></li>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 88, "</ul></nav>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 89, "<!-- JavaScript --><script>\n function sortBy(field) {\n // Read current sort state from the URL; templ does not interpolate {expr} inside <script> blocks\n const params = new URLSearchParams(window.location.search);\n const currentSort = params.get('sortBy');\n const currentOrder = params.get('sortOrder') || 'asc';\n let newOrder = 'asc';\n \n if (currentSort === field && currentOrder === 'asc') {\n newOrder = 'desc';\n }\n \n updateUrl({\n sortBy: field,\n sortOrder: newOrder,\n page: 1\n });\n }\n\n function goToPage(event) {\n event.preventDefault();\n // Get data from the link element (not any child elements)\n const link = event.target.closest('a');\n const page = link.getAttribute('data-page');\n updateUrl({ page: page });\n }\n\n function changePageSize() {\n const pageSize = document.getElementById('pageSizeSelect').value;\n updateUrl({ pageSize: pageSize, page: 1 });\n }\n\n function updateUrl(params) {\n const url = new URL(window.location);\n Object.keys(params).forEach(key => {\n if (params[key]) {\n url.searchParams.set(key, params[key]);\n } else {\n url.searchParams.delete(key);\n }\n });\n window.location.href = url.toString();\n }\n\n function exportEcShards() {\n const url = new URL('/api/cluster/ec-shards/export', window.location.origin);\n const params = new URLSearchParams(window.location.search);\n params.forEach((value, key) => {\n url.searchParams.set(key, value);\n });\n window.open(url.toString(), '_blank');\n }\n\n function showShardDetails(event) {\n // Get data from the button element (not the icon inside it)\n const button = event.target.closest('button');\n const volumeId = button.getAttribute('data-volume-id');\n \n // Navigate to the EC volume details page\n window.location.href = `/cluster/ec-volumes/${volumeId}`;\n }\n\n function repairVolume(event) {\n // Get data from the button element (not the icon inside it)\n const button = event.target.closest('button');\n const volumeId = button.getAttribute('data-volume-id');\n if (confirm(`Are you sure you want to repair missing shards for volume ${volumeId}?`)) {\n fetch(`/api/cluster/volumes/${volumeId}/repair`, {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n }\n })\n .then(response => response.json())\n .then(data => {\n if (data.success) {\n alert('Repair initiated successfully');\n location.reload();\n } else {\n alert('Failed to initiate repair: ' + data.error);\n }\n })\n .catch(error => {\n alert('Error: ' + error.message);\n });\n }\n }\n </script>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
return nil
})
}
// displayShardDistribution shows the distribution summary for a volume's shards
func displayShardDistribution(shard dash.EcShardWithInfo, allShards []dash.EcShardWithInfo) templ.Component {
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
return templ_7745c5c3_CtxErr
}
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
if !templ_7745c5c3_IsBuffer {
defer func() {
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
if templ_7745c5c3_Err == nil {
templ_7745c5c3_Err = templ_7745c5c3_BufErr
}
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var23 := templ.GetChildren(ctx)
if templ_7745c5c3_Var23 == nil {
templ_7745c5c3_Var23 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 90, "<div class=\"small\"><i class=\"fas fa-sitemap me-1\"></i> ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var24 string
templ_7745c5c3_Var24, templ_7745c5c3_Err = templ.JoinStringErrs(calculateDistributionSummary(shard.VolumeID, allShards))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 418, Col: 65}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var24))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 91, "</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
return nil
})
}
// displayVolumeStatus shows an improved status display
func displayVolumeStatus(shard dash.EcShardWithInfo) templ.Component {
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
return templ_7745c5c3_CtxErr
}
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
if !templ_7745c5c3_IsBuffer {
defer func() {
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
if templ_7745c5c3_Err == nil {
templ_7745c5c3_Err = templ_7745c5c3_BufErr
}
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var25 := templ.GetChildren(ctx)
if templ_7745c5c3_Var25 == nil {
templ_7745c5c3_Var25 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
if shard.IsComplete {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 92, "<span class=\"badge bg-success\"><i class=\"fas fa-check me-1\"></i>Complete</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
if len(shard.MissingShards) > 10 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 93, "<span class=\"badge bg-danger\"><i class=\"fas fa-skull me-1\"></i>Critical (")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var26 string
templ_7745c5c3_Var26, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", len(shard.MissingShards)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 428, Col: 129}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var26))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 94, " missing)</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else if len(shard.MissingShards) > 6 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 95, "<span class=\"badge bg-warning\"><i class=\"fas fa-exclamation-triangle me-1\"></i>Degraded (")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var27 string
templ_7745c5c3_Var27, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", len(shard.MissingShards)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 430, Col: 145}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var27))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 96, " missing)</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else if len(shard.MissingShards) > 2 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 97, "<span class=\"badge bg-warning\"><i class=\"fas fa-info-circle me-1\"></i>Incomplete (")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var28 string
templ_7745c5c3_Var28, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", len(shard.MissingShards)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 432, Col: 138}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var28))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 98, " missing)</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 99, "<span class=\"badge bg-info\"><i class=\"fas fa-info-circle me-1\"></i>Minor Issues (")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var29 string
templ_7745c5c3_Var29, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", len(shard.MissingShards)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/cluster_ec_shards.templ`, Line: 434, Col: 137}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var29))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 100, " missing)</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
}
return nil
})
}
// calculateDistributionSummary calculates and formats the distribution summary
func calculateDistributionSummary(volumeID uint32, allShards []dash.EcShardWithInfo) string {
dataCenters := make(map[string]bool)
racks := make(map[string]bool)
servers := make(map[string]bool)
for _, s := range allShards {
if s.VolumeID == volumeID {
dataCenters[s.DataCenter] = true
racks[s.Rack] = true
servers[s.Server] = true
}
}
return fmt.Sprintf("%d DCs, %d racks, %d servers", len(dataCenters), len(racks), len(servers))
}
var _ = templruntime.GeneratedTemplate


@@ -0,0 +1,775 @@
package app
import (
"fmt"
"strings"
"github.com/seaweedfs/seaweedfs/weed/admin/dash"
)
templ ClusterEcVolumes(data dash.ClusterEcVolumesData) {
<!DOCTYPE html>
<html lang="en">
<head>
<title>EC Volumes - SeaweedFS</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet">
<link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.4.0/css/all.min.css" rel="stylesheet">
</head>
<body>
<div class="container-fluid">
<div class="row">
<div class="col-12">
<h2 class="mb-4">
<i class="fas fa-database me-2"></i>EC Volumes
<small class="text-muted">({fmt.Sprintf("%d", data.TotalVolumes)} volumes)</small>
</h2>
</div>
</div>
<!-- Statistics Cards -->
<div class="row mb-4">
<div class="col-md-3">
<div class="card text-bg-primary">
<div class="card-body">
<div class="d-flex justify-content-between">
<div>
<h6 class="card-title">Total Volumes</h6>
<h4 class="mb-0">{fmt.Sprintf("%d", data.TotalVolumes)}</h4>
<small>EC encoded volumes</small>
</div>
<div class="align-self-center">
<i class="fas fa-cubes fa-2x"></i>
</div>
</div>
</div>
</div>
</div>
<div class="col-md-3">
<div class="card text-bg-info">
<div class="card-body">
<div class="d-flex justify-content-between">
<div>
<h6 class="card-title">Total Shards</h6>
<h4 class="mb-0">{fmt.Sprintf("%d", data.TotalShards)}</h4>
<small>Distributed shards</small>
</div>
<div class="align-self-center">
<i class="fas fa-puzzle-piece fa-2x"></i>
</div>
</div>
</div>
</div>
</div>
<div class="col-md-3">
<div class="card text-bg-success">
<div class="card-body">
<div class="d-flex justify-content-between">
<div>
<h6 class="card-title">Complete Volumes</h6>
<h4 class="mb-0">{fmt.Sprintf("%d", data.CompleteVolumes)}</h4>
<small>All shards present</small>
</div>
<div class="align-self-center">
<i class="fas fa-check-circle fa-2x"></i>
</div>
</div>
</div>
</div>
</div>
<div class="col-md-3">
<div class="card text-bg-warning">
<div class="card-body">
<div class="d-flex justify-content-between">
<div>
<h6 class="card-title">Incomplete Volumes</h6>
<h4 class="mb-0">{fmt.Sprintf("%d", data.IncompleteVolumes)}</h4>
<small>Missing shards</small>
</div>
<div class="align-self-center">
<i class="fas fa-exclamation-triangle fa-2x"></i>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- EC Storage Information Note -->
<div class="alert alert-info mb-4" role="alert">
<i class="fas fa-info-circle me-2"></i>
<strong>EC Storage Note:</strong>
EC volumes use (10+4) erasure coding: each volume is split into 10 data shards plus 4 parity shards, 14 in total.
Physical storage is approximately 1.4x the logical data size because of the 4 parity shards.
</div>
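<!-- Illustrative arithmetic: 14 shards / 10 data shards = 1.4x physical-to-logical ratio;
     with Reed-Solomon (10,4), any 10 of the 14 shards are sufficient to reconstruct the volume. -->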
<!-- Volumes Table -->
<div class="d-flex justify-content-between align-items-center mb-3">
<div class="d-flex align-items-center">
<span class="me-3">
Showing {fmt.Sprintf("%d", (data.Page-1)*data.PageSize + 1)} to {fmt.Sprintf("%d", func() int {
end := data.Page * data.PageSize
if end > data.TotalVolumes {
return data.TotalVolumes
}
return end
}())} of {fmt.Sprintf("%d", data.TotalVolumes)} volumes
</span>
<div class="d-flex align-items-center">
<label for="pageSize" class="form-label me-2 mb-0">Show:</label>
<select id="pageSize" class="form-select form-select-sm" style="width: auto;" onchange="changePageSize(this.value)">
<option value="5" if data.PageSize == 5 { selected }>5</option>
<option value="10" if data.PageSize == 10 { selected }>10</option>
<option value="25" if data.PageSize == 25 { selected }>25</option>
<option value="50" if data.PageSize == 50 { selected }>50</option>
<option value="100" if data.PageSize == 100 { selected }>100</option>
</select>
<span class="ms-2">per page</span>
</div>
</div>
if data.Collection != "" {
<div>
if data.Collection == "default" {
<span class="badge bg-secondary text-white">Collection: default</span>
} else {
<span class="badge bg-info text-white">Collection: {data.Collection}</span>
}
<a href="/cluster/ec-shards" class="btn btn-sm btn-outline-secondary ms-2">Clear Filter</a>
</div>
}
</div>
<div class="table-responsive">
<table class="table table-striped table-hover" id="ecVolumesTable">
<thead>
<tr>
<th>
<a href="#" onclick="sortBy('volume_id')" class="text-dark text-decoration-none">
Volume ID
if data.SortBy == "volume_id" {
if data.SortOrder == "asc" {
<i class="fas fa-sort-up ms-1"></i>
} else {
<i class="fas fa-sort-down ms-1"></i>
}
} else {
<i class="fas fa-sort ms-1 text-muted"></i>
}
</a>
</th>
if data.ShowCollectionColumn {
<th>
<a href="#" onclick="sortBy('collection')" class="text-dark text-decoration-none">
Collection
if data.SortBy == "collection" {
if data.SortOrder == "asc" {
<i class="fas fa-sort-up ms-1"></i>
} else {
<i class="fas fa-sort-down ms-1"></i>
}
} else {
<i class="fas fa-sort ms-1 text-muted"></i>
}
</a>
</th>
}
<th>
<a href="#" onclick="sortBy('total_shards')" class="text-dark text-decoration-none">
Shard Count
if data.SortBy == "total_shards" {
if data.SortOrder == "asc" {
<i class="fas fa-sort-up ms-1"></i>
} else {
<i class="fas fa-sort-down ms-1"></i>
}
} else {
<i class="fas fa-sort ms-1 text-muted"></i>
}
</a>
</th>
<th class="text-dark">Shard Size</th>
<th class="text-dark">Shard Locations</th>
<th>
<a href="#" onclick="sortBy('completeness')" class="text-dark text-decoration-none">
Status
if data.SortBy == "completeness" {
if data.SortOrder == "asc" {
<i class="fas fa-sort-up ms-1"></i>
} else {
<i class="fas fa-sort-down ms-1"></i>
}
} else {
<i class="fas fa-sort ms-1 text-muted"></i>
}
</a>
</th>
if data.ShowDataCenterColumn {
<th class="text-dark">Data Centers</th>
}
<th class="text-dark">Actions</th>
</tr>
</thead>
<tbody>
for _, volume := range data.EcVolumes {
<tr>
<td>
<strong>{fmt.Sprintf("%d", volume.VolumeID)}</strong>
</td>
if data.ShowCollectionColumn {
<td>
if volume.Collection != "" {
<a href="/cluster/ec-shards?collection={volume.Collection}" class="text-decoration-none">
<span class="badge bg-info text-white">{volume.Collection}</span>
</a>
} else {
<a href="/cluster/ec-shards?collection=default" class="text-decoration-none">
<span class="badge bg-secondary text-white">default</span>
</a>
}
</td>
}
<td>
<span class="badge bg-primary">{fmt.Sprintf("%d/14", volume.TotalShards)}</span>
</td>
<td>
@displayShardSizes(volume.ShardSizes)
</td>
<td>
@displayVolumeDistribution(volume)
</td>
<td>
@displayEcVolumeStatus(volume)
</td>
if data.ShowDataCenterColumn {
<td>
for i, dc := range volume.DataCenters {
if i > 0 {
<span>, </span>
}
<span class="badge bg-primary text-white">{dc}</span>
}
</td>
}
<td>
<div class="btn-group" role="group">
<button type="button" class="btn btn-sm btn-outline-primary"
onclick="showVolumeDetails(event)"
data-volume-id={ fmt.Sprintf("%d", volume.VolumeID) }
title="View EC volume details">
<i class="fas fa-info-circle"></i>
</button>
if !volume.IsComplete {
<button type="button" class="btn btn-sm btn-outline-warning"
onclick="repairVolume(event)"
data-volume-id={ fmt.Sprintf("%d", volume.VolumeID) }
title="Repair missing shards">
<i class="fas fa-wrench"></i>
</button>
}
</div>
</td>
</tr>
}
</tbody>
</table>
</div>
<!-- Pagination -->
if data.TotalPages > 1 {
<nav aria-label="EC Volumes pagination">
<ul class="pagination justify-content-center">
if data.Page > 1 {
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(event)" data-page="1">First</a>
</li>
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.Page-1) }>Previous</a>
</li>
}
for i := 1; i <= data.TotalPages; i++ {
if i == data.Page {
<li class="page-item active">
<span class="page-link">{fmt.Sprintf("%d", i)}</span>
</li>
} else if i <= 3 || i > data.TotalPages-3 || (i >= data.Page-2 && i <= data.Page+2) {
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", i) }>{fmt.Sprintf("%d", i)}</a>
</li>
} else if i == 4 && data.Page > 6 {
<li class="page-item disabled">
<span class="page-link">...</span>
</li>
} else if i == data.TotalPages-3 && data.Page < data.TotalPages-5 {
<li class="page-item disabled">
<span class="page-link">...</span>
</li>
}
}
if data.Page < data.TotalPages {
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.Page+1) }>Next</a>
</li>
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.TotalPages) }>Last</a>
</li>
}
</ul>
</nav>
}
</div>
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js"></script>
<script>
// Sorting functionality
function sortBy(field) {
const currentSort = new URLSearchParams(window.location.search).get('sort_by');
const currentOrder = new URLSearchParams(window.location.search).get('sort_order') || 'asc';
let newOrder = 'asc';
if (currentSort === field && currentOrder === 'asc') {
newOrder = 'desc';
}
const url = new URL(window.location);
url.searchParams.set('sort_by', field);
url.searchParams.set('sort_order', newOrder);
url.searchParams.set('page', '1'); // Reset to first page
window.location.href = url.toString();
}
// Pagination functionality
function goToPage(event) {
event.preventDefault();
const page = event.target.closest('a').getAttribute('data-page');
const url = new URL(window.location);
url.searchParams.set('page', page);
window.location.href = url.toString();
}
// Page size functionality
function changePageSize(newPageSize) {
const url = new URL(window.location);
url.searchParams.set('page_size', newPageSize);
url.searchParams.set('page', '1'); // Reset to first page when changing page size
window.location.href = url.toString();
}
// Volume details
function showVolumeDetails(event) {
const volumeId = event.target.closest('button').getAttribute('data-volume-id');
window.location.href = `/cluster/ec-volumes/${volumeId}`;
}
// Repair volume
function repairVolume(event) {
const volumeId = event.target.closest('button').getAttribute('data-volume-id');
if (confirm(`Are you sure you want to repair missing shards for volume ${volumeId}?`)) {
// TODO: Implement repair functionality
alert('Repair functionality will be implemented soon.');
}
}
</script>
</body>
</html>
}
// displayShardLocationsHTML renders shard locations as proper HTML
templ displayShardLocationsHTML(shardLocations map[int]string) {
if len(shardLocations) == 0 {
<span class="text-muted">No shards</span>
} else {
for i, serverInfo := range groupShardsByServer(shardLocations) {
if i > 0 {
<br/>
}
<strong>
<a href={ templ.URL("/cluster/volume-servers/" + serverInfo.Server) } class="text-primary text-decoration-none">
{ serverInfo.Server }
</a>:
</strong> { serverInfo.ShardRanges }
}
}
}
// displayShardSizes renders shard sizes in a compact format
templ displayShardSizes(shardSizes map[int]int64) {
if len(shardSizes) == 0 {
<span class="text-muted">-</span>
} else {
@renderShardSizesContent(shardSizes)
}
}
// renderShardSizesContent renders the content of shard sizes
templ renderShardSizesContent(shardSizes map[int]int64) {
if areAllShardSizesSame(shardSizes) {
// All shards have the same size, show just the common size
<span class="text-success">{getCommonShardSize(shardSizes)}</span>
} else {
// Shards have different sizes, show individual sizes
<div class="shard-sizes" style="max-width: 300px;">
{ formatIndividualShardSizes(shardSizes) }
</div>
}
}
// ServerShardInfo represents server and its shard ranges with sizes
type ServerShardInfo struct {
Server string
ShardRanges string
}
// groupShardsByServer groups shards by server and formats ranges
func groupShardsByServer(shardLocations map[int]string) []ServerShardInfo {
if len(shardLocations) == 0 {
return []ServerShardInfo{}
}
// Group shards by server
serverShards := make(map[string][]int)
for shardId, server := range shardLocations {
serverShards[server] = append(serverShards[server], shardId)
}
var serverInfos []ServerShardInfo
for server, shards := range serverShards {
// Sort shard IDs for each server
sort.Ints(shards)
// Format shard ranges compactly
shardRanges := formatShardRanges(shards)
serverInfos = append(serverInfos, ServerShardInfo{
Server: server,
ShardRanges: shardRanges,
})
}
// Sort results by server name for stable display
sort.Slice(serverInfos, func(i, j int) bool {
return serverInfos[i].Server < serverInfos[j].Server
})
return serverInfos
}
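// Example (illustrative, hypothetical data): shardLocations of
//   {0: "srvA", 1: "srvA", 3: "srvA", 2: "srvB"}
// groups to [{Server: "srvA", ShardRanges: "0-1,3"}, {Server: "srvB", ShardRanges: "2"}],
// since consecutive shard IDs are collapsed into ranges by formatShardRanges below.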
// groupShardsByServerWithSizes groups shards by server and formats ranges with sizes
func groupShardsByServerWithSizes(shardLocations map[int]string, shardSizes map[int]int64) []ServerShardInfo {
if len(shardLocations) == 0 {
return []ServerShardInfo{}
}
// Group shards by server
serverShards := make(map[string][]int)
for shardId, server := range shardLocations {
serverShards[server] = append(serverShards[server], shardId)
}
var serverInfos []ServerShardInfo
for server, shards := range serverShards {
// Sort shard IDs for each server
sort.Ints(shards)
// Format shard ranges compactly with sizes
shardRanges := formatShardRangesWithSizes(shards, shardSizes)
serverInfos = append(serverInfos, ServerShardInfo{
Server: server,
ShardRanges: shardRanges,
})
}
// Sort results by server name for stable display
sort.Slice(serverInfos, func(i, j int) bool {
return serverInfos[i].Server < serverInfos[j].Server
})
return serverInfos
}
// Helper function to format shard ranges compactly (e.g., "0-3,7,9-11")
func formatShardRanges(shards []int) string {
if len(shards) == 0 {
return ""
}
var ranges []string
start := shards[0]
end := shards[0]
for i := 1; i < len(shards); i++ {
if shards[i] == end+1 {
end = shards[i]
} else {
if start == end {
ranges = append(ranges, fmt.Sprintf("%d", start))
} else {
ranges = append(ranges, fmt.Sprintf("%d-%d", start, end))
}
start = shards[i]
end = shards[i]
}
}
// Add the last range
if start == end {
ranges = append(ranges, fmt.Sprintf("%d", start))
} else {
ranges = append(ranges, fmt.Sprintf("%d-%d", start, end))
}
return strings.Join(ranges, ",")
}
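// Example (illustrative): formatShardRanges([]int{0, 1, 2, 3, 7, 9, 10, 11}) returns "0-3,7,9-11".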
// Helper function to format shard ranges with sizes (e.g., "0(1.2MB),1-3(2.5MB),7(800KB)")
func formatShardRangesWithSizes(shards []int, shardSizes map[int]int64) string {
if len(shards) == 0 {
return ""
}
var ranges []string
start := shards[0]
end := shards[0]
for i := 1; i < len(shards); i++ {
if shards[i] == end+1 {
end = shards[i]
} else {
// Add current range with size
if start == end {
size := shardSizes[start]
if size > 0 {
ranges = append(ranges, fmt.Sprintf("%d(%s)", start, bytesToHumanReadable(size)))
} else {
ranges = append(ranges, fmt.Sprintf("%d", start))
}
} else {
// Calculate total size for the range
rangeSize := shardSizes[start]
for j := start + 1; j <= end; j++ {
rangeSize += shardSizes[j]
}
if rangeSize > 0 {
ranges = append(ranges, fmt.Sprintf("%d-%d(%s)", start, end, bytesToHumanReadable(rangeSize)))
} else {
ranges = append(ranges, fmt.Sprintf("%d-%d", start, end))
}
}
start = shards[i]
end = shards[i]
}
}
// Add the last range
if start == end {
size := shardSizes[start]
if size > 0 {
ranges = append(ranges, fmt.Sprintf("%d(%s)", start, bytesToHumanReadable(size)))
} else {
ranges = append(ranges, fmt.Sprintf("%d", start))
}
} else {
// Calculate total size for the range
rangeSize := shardSizes[start]
for j := start + 1; j <= end; j++ {
rangeSize += shardSizes[j]
}
if rangeSize > 0 {
ranges = append(ranges, fmt.Sprintf("%d-%d(%s)", start, end, bytesToHumanReadable(rangeSize)))
} else {
ranges = append(ranges, fmt.Sprintf("%d-%d", start, end))
}
}
return strings.Join(ranges, ",")
}
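// Example (illustrative): shards []int{0, 1, 2, 3, 7} with each of 0-3 at 1.0MB and
// shard 7 at 800KB formats as "0-3(4.0MB),7(800.0KB)".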
// Helper function to convert bytes to human readable format
func bytesToHumanReadable(bytes int64) string {
const unit = 1024
if bytes < unit {
return fmt.Sprintf("%dB", bytes)
}
div, exp := int64(unit), 0
for n := bytes / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f%cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
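// Examples (illustrative): bytesToHumanReadable(512) == "512B",
// bytesToHumanReadable(1536) == "1.5KB", bytesToHumanReadable(1572864) == "1.5MB".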
// Helper function to format missing shards
func formatMissingShards(missingShards []int) string {
if len(missingShards) == 0 {
return ""
}
var shardStrs []string
for _, shard := range missingShards {
shardStrs = append(shardStrs, fmt.Sprintf("%d", shard))
}
return strings.Join(shardStrs, ", ")
}
// Helper function to check if all shard sizes are the same
func areAllShardSizesSame(shardSizes map[int]int64) bool {
if len(shardSizes) <= 1 {
return true
}
var firstSize int64 = -1
for _, size := range shardSizes {
if firstSize == -1 {
firstSize = size
} else if size != firstSize {
return false
}
}
return true
}
// Helper function to get the common shard size (when all shards are the same size)
func getCommonShardSize(shardSizes map[int]int64) string {
for _, size := range shardSizes {
return bytesToHumanReadable(size)
}
return "-"
}
// Helper function to format individual shard sizes
func formatIndividualShardSizes(shardSizes map[int]int64) string {
if len(shardSizes) == 0 {
return ""
}
// Group shards by size for more compact display
sizeGroups := make(map[int64][]int)
for shardId, size := range shardSizes {
sizeGroups[size] = append(sizeGroups[size], shardId)
}
// If there are at most three different sizes, show them grouped
if len(sizeGroups) <= 3 {
var groupStrs []string
for size, shardIds := range sizeGroups {
// Sort shard IDs
sort.Ints(shardIds)
var idRanges []string
if len(shardIds) <= 4 {
// Show individual IDs if few shards
for _, id := range shardIds {
idRanges = append(idRanges, fmt.Sprintf("%d", id))
}
} else {
// Show count if many shards
idRanges = append(idRanges, fmt.Sprintf("%d shards", len(shardIds)))
}
groupStrs = append(groupStrs, fmt.Sprintf("%s: %s", strings.Join(idRanges, ","), bytesToHumanReadable(size)))
}
return strings.Join(groupStrs, " | ")
}
// If too many different sizes, show summary
return fmt.Sprintf("%d different sizes", len(sizeGroups))
}
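// Example (illustrative): shardSizes {0: 1.0MB, 1: 1.0MB, 2: 2.0MB} renders as
// "0,1: 1.0MB | 2: 2.0MB" (group order may vary because map iteration order is unspecified).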
// displayVolumeDistribution shows the distribution summary for a volume
templ displayVolumeDistribution(volume dash.EcVolumeWithShards) {
<div class="small">
<i class="fas fa-sitemap me-1"></i>
{ calculateVolumeDistributionSummary(volume) }
</div>
}
// displayEcVolumeStatus shows an improved status display for EC volumes
templ displayEcVolumeStatus(volume dash.EcVolumeWithShards) {
if volume.IsComplete {
<span class="badge bg-success"><i class="fas fa-check me-1"></i>Complete</span>
} else {
if len(volume.MissingShards) > 10 {
<span class="badge bg-danger"><i class="fas fa-skull me-1"></i>Critical ({fmt.Sprintf("%d", len(volume.MissingShards))} missing)</span>
} else if len(volume.MissingShards) > 6 {
<span class="badge bg-warning"><i class="fas fa-exclamation-triangle me-1"></i>Degraded ({fmt.Sprintf("%d", len(volume.MissingShards))} missing)</span>
} else if len(volume.MissingShards) > 2 {
<span class="badge bg-warning"><i class="fas fa-info-circle me-1"></i>Incomplete ({fmt.Sprintf("%d", len(volume.MissingShards))} missing)</span>
} else {
<span class="badge bg-info"><i class="fas fa-info-circle me-1"></i>Minor Issues ({fmt.Sprintf("%d", len(volume.MissingShards))} missing)</span>
}
}
}
// calculateVolumeDistributionSummary calculates and formats the distribution summary for a volume
func calculateVolumeDistributionSummary(volume dash.EcVolumeWithShards) string {
dataCenters := make(map[string]bool)
racks := make(map[string]bool)
servers := make(map[string]bool)
// Count unique servers from shard locations
for _, server := range volume.ShardLocations {
servers[server] = true
}
// Use the DataCenters field if available
for _, dc := range volume.DataCenters {
dataCenters[dc] = true
}
// Use the Servers field if available
for _, server := range volume.Servers {
servers[server] = true
}
// Use the Racks field if available
for _, rack := range volume.Racks {
racks[rack] = true
}
// If we don't have rack information, estimate it from servers as fallback
rackCount := len(racks)
if rackCount == 0 {
// Fallback estimation - assume each server might be in a different rack
rackCount = len(servers)
if len(dataCenters) > 0 {
// With DC info, estimate racks as the ceiling of servers per data center
rackCount = (len(servers) + len(dataCenters) - 1) / len(dataCenters)
if rackCount == 0 {
rackCount = 1
}
}
}
return fmt.Sprintf("%d DCs, %d racks, %d servers", len(dataCenters), rackCount, len(servers))
}
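// Worked example (sketch): 6 known servers across 2 known DCs with no rack info falls back to
// rackCount = (6+2-1)/2 = 3 via ceiling division, yielding "2 DCs, 3 racks, 6 servers".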

(File diff suppressed because it is too large.)


@@ -277,7 +277,7 @@ templ ClusterVolumes(data dash.ClusterVolumesData) {
@getSortIcon("size", data.SortBy, data.SortOrder)
</a>
</th>
-<th>Storage Usage</th>
+<th>Volume Utilization</th>
<th>
<a href="#" onclick="sortTable('filecount')" class="text-decoration-none text-dark">
File Count


@@ -399,7 +399,7 @@ func ClusterVolumes(data dash.ClusterVolumesData) templ.Component {
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
-templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 45, "</a></th><th>Storage Usage</th><th><a href=\"#\" onclick=\"sortTable('filecount')\" class=\"text-decoration-none text-dark\">File Count")
+templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 45, "</a></th><th>Volume Utilization</th><th><a href=\"#\" onclick=\"sortTable('filecount')\" class=\"text-decoration-none text-dark\">File Count")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}


@@ -0,0 +1,371 @@
package app
import (
"fmt"
"github.com/seaweedfs/seaweedfs/weed/admin/dash"
"github.com/seaweedfs/seaweedfs/weed/util"
)
templ CollectionDetails(data dash.CollectionDetailsData) {
<div class="d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pt-3 pb-2 mb-3 border-bottom">
<div>
<h1 class="h2">
<i class="fas fa-layer-group me-2"></i>Collection Details: {data.CollectionName}
</h1>
<nav aria-label="breadcrumb">
<ol class="breadcrumb">
<li class="breadcrumb-item"><a href="/admin" class="text-decoration-none">Dashboard</a></li>
<li class="breadcrumb-item"><a href="/cluster/collections" class="text-decoration-none">Collections</a></li>
<li class="breadcrumb-item active" aria-current="page">{data.CollectionName}</li>
</ol>
</nav>
</div>
<div class="btn-toolbar mb-2 mb-md-0">
<div class="btn-group me-2">
<button type="button" class="btn btn-sm btn-outline-secondary" onclick="history.back()">
<i class="fas fa-arrow-left me-1"></i>Back
</button>
<button type="button" class="btn btn-sm btn-outline-primary" onclick="window.location.reload()">
<i class="fas fa-refresh me-1"></i>Refresh
</button>
</div>
</div>
</div>
<!-- Collection Summary -->
<div class="row mb-4">
<div class="col-md-3">
<div class="card text-bg-primary">
<div class="card-body">
<div class="d-flex justify-content-between">
<div>
<h6 class="card-title">Regular Volumes</h6>
<h4 class="mb-0">{fmt.Sprintf("%d", data.TotalVolumes)}</h4>
<small>Traditional volumes</small>
</div>
<div class="align-self-center">
<i class="fas fa-database fa-2x"></i>
</div>
</div>
</div>
</div>
</div>
<div class="col-md-3">
<div class="card text-bg-info">
<div class="card-body">
<div class="d-flex justify-content-between">
<div>
<h6 class="card-title">EC Volumes</h6>
<h4 class="mb-0">{fmt.Sprintf("%d", data.TotalEcVolumes)}</h4>
<small>Erasure coded volumes</small>
</div>
<div class="align-self-center">
<i class="fas fa-th-large fa-2x"></i>
</div>
</div>
</div>
</div>
</div>
<div class="col-md-3">
<div class="card text-bg-success">
<div class="card-body">
<div class="d-flex justify-content-between">
<div>
<h6 class="card-title">Total Files</h6>
<h4 class="mb-0">{fmt.Sprintf("%d", data.TotalFiles)}</h4>
<small>Files stored</small>
</div>
<div class="align-self-center">
<i class="fas fa-file fa-2x"></i>
</div>
</div>
</div>
</div>
</div>
<div class="col-md-3">
<div class="card text-bg-warning">
<div class="card-body">
<div class="d-flex justify-content-between">
<div>
<h6 class="card-title">Total Size (Logical)</h6>
<h4 class="mb-0">{util.BytesToHumanReadable(uint64(data.TotalSize))}</h4>
<small>Data stored (regular volumes only)</small>
</div>
<div class="align-self-center">
<i class="fas fa-hdd fa-2x"></i>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- Size Information Note -->
<div class="alert alert-info" role="alert">
<i class="fas fa-info-circle me-2"></i>
<strong>Size Information:</strong>
Logical size represents the actual data stored (regular volumes only).
EC volumes show shard counts instead of size - physical storage for EC volumes is approximately 1.4x the original data due to erasure coding redundancy.
</div>
<!-- Pagination Info -->
<div class="d-flex justify-content-between align-items-center mb-3">
<div class="d-flex align-items-center">
<span class="me-3">
Showing {fmt.Sprintf("%d", (data.Page-1)*data.PageSize + 1)} to {fmt.Sprintf("%d", func() int {
end := data.Page * data.PageSize
totalItems := data.TotalVolumes + data.TotalEcVolumes
if end > totalItems {
return totalItems
}
return end
}())} of {fmt.Sprintf("%d", data.TotalVolumes + data.TotalEcVolumes)} items
</span>
<div class="d-flex align-items-center">
<label for="pageSize" class="form-label me-2 mb-0">Show:</label>
<select id="pageSize" class="form-select form-select-sm" style="width: auto;" onchange="changePageSize(this.value)">
<option value="10" if data.PageSize == 10 { selected }>10</option>
<option value="25" if data.PageSize == 25 { selected }>25</option>
<option value="50" if data.PageSize == 50 { selected }>50</option>
<option value="100" if data.PageSize == 100 { selected }>100</option>
</select>
<span class="ms-2">per page</span>
</div>
</div>
</div>
<!-- Volumes Table -->
<div class="table-responsive">
<table class="table table-striped table-hover" id="volumesTable">
<thead>
<tr>
<th>
<a href="#" onclick="sortBy('volume_id')" class="text-dark text-decoration-none">
Volume ID
if data.SortBy == "volume_id" {
if data.SortOrder == "asc" {
<i class="fas fa-sort-up ms-1"></i>
} else {
<i class="fas fa-sort-down ms-1"></i>
}
} else {
<i class="fas fa-sort ms-1 text-muted"></i>
}
</a>
</th>
<th>
<a href="#" onclick="sortBy('type')" class="text-dark text-decoration-none">
Type
if data.SortBy == "type" {
if data.SortOrder == "asc" {
<i class="fas fa-sort-up ms-1"></i>
} else {
<i class="fas fa-sort-down ms-1"></i>
}
} else {
<i class="fas fa-sort ms-1 text-muted"></i>
}
</a>
</th>
<th class="text-dark">Logical Size / Shard Count</th>
<th class="text-dark">Files</th>
<th class="text-dark">Status</th>
<th class="text-dark">Actions</th>
</tr>
</thead>
<tbody>
// Display regular volumes
for _, volume := range data.RegularVolumes {
<tr>
<td>
<strong>{fmt.Sprintf("%d", volume.Id)}</strong>
</td>
<td>
<span class="badge bg-primary">
<i class="fas fa-database me-1"></i>Regular
</span>
</td>
<td>
{util.BytesToHumanReadable(volume.Size)}
</td>
<td>
{fmt.Sprintf("%d", volume.FileCount)}
</td>
<td>
if volume.ReadOnly {
<span class="badge bg-warning">Read Only</span>
} else {
<span class="badge bg-success">Read/Write</span>
}
</td>
<td>
<div class="btn-group" role="group">
<button type="button" class="btn btn-sm btn-outline-primary"
onclick="showVolumeDetails(event)"
data-volume-id={ fmt.Sprintf("%d", volume.Id) }
data-server={ volume.Server }
title="View volume details">
<i class="fas fa-info-circle"></i>
</button>
</div>
</td>
</tr>
}
// Display EC volumes
for _, ecVolume := range data.EcVolumes {
<tr>
<td>
<strong>{fmt.Sprintf("%d", ecVolume.VolumeID)}</strong>
</td>
<td>
<span class="badge bg-info">
<i class="fas fa-th-large me-1"></i>EC
</span>
</td>
<td>
<span class="badge bg-primary">{fmt.Sprintf("%d/14", ecVolume.TotalShards)}</span>
</td>
<td>
<span class="text-muted">-</span>
</td>
<td>
if ecVolume.IsComplete {
<span class="badge bg-success">
<i class="fas fa-check me-1"></i>Complete
</span>
} else {
<span class="badge bg-warning">
<i class="fas fa-exclamation-triangle me-1"></i>
Missing {fmt.Sprintf("%d", len(ecVolume.MissingShards))} shards
</span>
}
</td>
<td>
<div class="btn-group" role="group">
<button type="button" class="btn btn-sm btn-outline-info"
onclick="showEcVolumeDetails(event)"
data-volume-id={ fmt.Sprintf("%d", ecVolume.VolumeID) }
title="View EC volume details">
<i class="fas fa-info-circle"></i>
</button>
if !ecVolume.IsComplete {
<button type="button" class="btn btn-sm btn-outline-warning"
onclick="repairEcVolume(event)"
data-volume-id={ fmt.Sprintf("%d", ecVolume.VolumeID) }
title="Repair missing shards">
<i class="fas fa-wrench"></i>
</button>
}
</div>
</td>
</tr>
}
</tbody>
</table>
</div>
<!-- Pagination -->
if data.TotalPages > 1 {
<nav aria-label="Collection volumes pagination">
<ul class="pagination justify-content-center">
if data.Page > 1 {
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(event)" data-page="1">First</a>
</li>
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.Page-1) }>Previous</a>
</li>
}
for i := 1; i <= data.TotalPages; i++ {
if i == data.Page {
<li class="page-item active">
<span class="page-link">{fmt.Sprintf("%d", i)}</span>
</li>
} else if i <= 3 || i > data.TotalPages-3 || (i >= data.Page-2 && i <= data.Page+2) {
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", i) }>{fmt.Sprintf("%d", i)}</a>
</li>
} else if i == 4 && data.Page > 6 {
<li class="page-item disabled">
<span class="page-link">...</span>
</li>
} else if i == data.TotalPages-3 && data.Page < data.TotalPages-5 {
<li class="page-item disabled">
<span class="page-link">...</span>
</li>
}
}
if data.Page < data.TotalPages {
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.Page+1) }>Next</a>
</li>
<li class="page-item">
<a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.TotalPages) }>Last</a>
</li>
}
</ul>
</nav>
}
<script>
// Sorting functionality
function sortBy(field) {
const currentSort = new URLSearchParams(window.location.search).get('sort_by');
const currentOrder = new URLSearchParams(window.location.search).get('sort_order') || 'asc';
let newOrder = 'asc';
if (currentSort === field && currentOrder === 'asc') {
newOrder = 'desc';
}
const url = new URL(window.location);
url.searchParams.set('sort_by', field);
url.searchParams.set('sort_order', newOrder);
url.searchParams.set('page', '1'); // Reset to first page
window.location.href = url.toString();
}
// Pagination functionality
function goToPage(event) {
event.preventDefault();
const page = event.target.closest('a').getAttribute('data-page');
const url = new URL(window.location);
url.searchParams.set('page', page);
window.location.href = url.toString();
}
// Page size functionality
function changePageSize(newPageSize) {
const url = new URL(window.location);
url.searchParams.set('page_size', newPageSize);
url.searchParams.set('page', '1'); // Reset to first page when changing page size
window.location.href = url.toString();
}
// Volume details
function showVolumeDetails(event) {
const volumeId = event.target.closest('button').getAttribute('data-volume-id');
const server = event.target.closest('button').getAttribute('data-server');
window.location.href = `/cluster/volumes/${volumeId}/${server}`;
}
// EC Volume details
function showEcVolumeDetails(event) {
const volumeId = event.target.closest('button').getAttribute('data-volume-id');
window.location.href = `/cluster/ec-volumes/${volumeId}`;
}
// Repair EC Volume
function repairEcVolume(event) {
const volumeId = event.target.closest('button').getAttribute('data-volume-id');
if (confirm(`Are you sure you want to repair missing shards for EC volume ${volumeId}?`)) {
// TODO: Implement repair functionality
alert('Repair functionality will be implemented soon.');
}
}
</script>
}


@@ -0,0 +1,567 @@
// Code generated by templ - DO NOT EDIT.
// templ: version: v0.3.906
package app
//lint:file-ignore SA4006 This context is only used if a nested component is present.
import "github.com/a-h/templ"
import templruntime "github.com/a-h/templ/runtime"
import (
"fmt"
"github.com/seaweedfs/seaweedfs/weed/admin/dash"
"github.com/seaweedfs/seaweedfs/weed/util"
)
func CollectionDetails(data dash.CollectionDetailsData) templ.Component {
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
return templ_7745c5c3_CtxErr
}
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
if !templ_7745c5c3_IsBuffer {
defer func() {
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
if templ_7745c5c3_Err == nil {
templ_7745c5c3_Err = templ_7745c5c3_BufErr
}
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var1 := templ.GetChildren(ctx)
if templ_7745c5c3_Var1 == nil {
templ_7745c5c3_Var1 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 1, "<div class=\"d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pt-3 pb-2 mb-3 border-bottom\"><div><h1 class=\"h2\"><i class=\"fas fa-layer-group me-2\"></i>Collection Details: ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var2 string
templ_7745c5c3_Var2, templ_7745c5c3_Err = templ.JoinStringErrs(data.CollectionName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 13, Col: 83}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var2))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 2, "</h1><nav aria-label=\"breadcrumb\"><ol class=\"breadcrumb\"><li class=\"breadcrumb-item\"><a href=\"/admin\" class=\"text-decoration-none\">Dashboard</a></li><li class=\"breadcrumb-item\"><a href=\"/cluster/collections\" class=\"text-decoration-none\">Collections</a></li><li class=\"breadcrumb-item active\" aria-current=\"page\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var3 string
templ_7745c5c3_Var3, templ_7745c5c3_Err = templ.JoinStringErrs(data.CollectionName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 19, Col: 80}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var3))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 3, "</li></ol></nav></div><div class=\"btn-toolbar mb-2 mb-md-0\"><div class=\"btn-group me-2\"><button type=\"button\" class=\"btn btn-sm btn-outline-secondary\" onclick=\"history.back()\"><i class=\"fas fa-arrow-left me-1\"></i>Back</button> <button type=\"button\" class=\"btn btn-sm btn-outline-primary\" onclick=\"window.location.reload()\"><i class=\"fas fa-refresh me-1\"></i>Refresh</button></div></div></div><!-- Collection Summary --><div class=\"row mb-4\"><div class=\"col-md-3\"><div class=\"card text-bg-primary\"><div class=\"card-body\"><div class=\"d-flex justify-content-between\"><div><h6 class=\"card-title\">Regular Volumes</h6><h4 class=\"mb-0\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var4 string
templ_7745c5c3_Var4, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.TotalVolumes))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 43, Col: 61}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var4))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 4, "</h4><small>Traditional volumes</small></div><div class=\"align-self-center\"><i class=\"fas fa-database fa-2x\"></i></div></div></div></div></div><div class=\"col-md-3\"><div class=\"card text-bg-info\"><div class=\"card-body\"><div class=\"d-flex justify-content-between\"><div><h6 class=\"card-title\">EC Volumes</h6><h4 class=\"mb-0\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var5 string
templ_7745c5c3_Var5, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.TotalEcVolumes))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 59, Col: 63}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var5))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 5, "</h4><small>Erasure coded volumes</small></div><div class=\"align-self-center\"><i class=\"fas fa-th-large fa-2x\"></i></div></div></div></div></div><div class=\"col-md-3\"><div class=\"card text-bg-success\"><div class=\"card-body\"><div class=\"d-flex justify-content-between\"><div><h6 class=\"card-title\">Total Files</h6><h4 class=\"mb-0\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var6 string
templ_7745c5c3_Var6, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.TotalFiles))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 75, Col: 59}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var6))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 6, "</h4><small>Files stored</small></div><div class=\"align-self-center\"><i class=\"fas fa-file fa-2x\"></i></div></div></div></div></div><div class=\"col-md-3\"><div class=\"card text-bg-warning\"><div class=\"card-body\"><div class=\"d-flex justify-content-between\"><div><h6 class=\"card-title\">Total Size (Logical)</h6><h4 class=\"mb-0\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var7 string
templ_7745c5c3_Var7, templ_7745c5c3_Err = templ.JoinStringErrs(util.BytesToHumanReadable(uint64(data.TotalSize)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 91, Col: 74}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var7))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 7, "</h4><small>Data stored (regular volumes only)</small></div><div class=\"align-self-center\"><i class=\"fas fa-hdd fa-2x\"></i></div></div></div></div></div></div><!-- Size Information Note --><div class=\"alert alert-info\" role=\"alert\"><i class=\"fas fa-info-circle me-2\"></i> <strong>Size Information:</strong> Logical size represents the actual data stored (regular volumes only). EC volumes show shard counts instead of size - physical storage for EC volumes is approximately 1.4x the original data due to erasure coding redundancy.</div><!-- Pagination Info --><div class=\"d-flex justify-content-between align-items-center mb-3\"><div class=\"d-flex align-items-center\"><span class=\"me-3\">Showing ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var8 string
templ_7745c5c3_Var8, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", (data.Page-1)*data.PageSize+1))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 115, Col: 63}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var8))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 8, " to ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var9 string
templ_7745c5c3_Var9, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", func() int {
end := data.Page * data.PageSize
totalItems := data.TotalVolumes + data.TotalEcVolumes
if end > totalItems {
return totalItems
}
return end
}()))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 122, Col: 8}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var9))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 9, " of ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var10 string
templ_7745c5c3_Var10, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.TotalVolumes+data.TotalEcVolumes))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 122, Col: 72}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var10))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 10, " items</span><div class=\"d-flex align-items-center\"><label for=\"pageSize\" class=\"form-label me-2 mb-0\">Show:</label> <select id=\"pageSize\" class=\"form-select form-select-sm\" style=\"width: auto;\" onchange=\"changePageSize(this.value)\"><option value=\"10\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.PageSize == 10 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 11, " selected")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 12, ">10</option> <option value=\"25\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.PageSize == 25 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 13, " selected")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 14, ">25</option> <option value=\"50\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.PageSize == 50 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 15, " selected")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 16, ">50</option> <option value=\"100\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.PageSize == 100 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 17, " selected")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 18, ">100</option></select> <span class=\"ms-2\">per page</span></div></div></div><!-- Volumes Table --><div class=\"table-responsive\"><table class=\"table table-striped table-hover\" id=\"volumesTable\"><thead><tr><th><a href=\"#\" onclick=\"sortBy('volume_id')\" class=\"text-dark text-decoration-none\">Volume ID ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.SortBy == "volume_id" {
if data.SortOrder == "asc" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 19, "<i class=\"fas fa-sort-up ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 20, "<i class=\"fas fa-sort-down ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 21, "<i class=\"fas fa-sort ms-1 text-muted\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 22, "</a></th><th><a href=\"#\" onclick=\"sortBy('type')\" class=\"text-dark text-decoration-none\">Type ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.SortBy == "type" {
if data.SortOrder == "asc" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 23, "<i class=\"fas fa-sort-up ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 24, "<i class=\"fas fa-sort-down ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 25, "<i class=\"fas fa-sort ms-1 text-muted\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 26, "</a></th><th class=\"text-dark\">Logical Size / Shard Count</th><th class=\"text-dark\">Files</th><th class=\"text-dark\">Status</th><th class=\"text-dark\">Actions</th></tr></thead> <tbody>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, volume := range data.RegularVolumes {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 27, "<tr><td><strong>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var11 string
templ_7745c5c3_Var11, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", volume.Id))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 182, Col: 44}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var11))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 28, "</strong></td><td><span class=\"badge bg-primary\"><i class=\"fas fa-database me-1\"></i>Regular</span></td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var12 string
templ_7745c5c3_Var12, templ_7745c5c3_Err = templ.JoinStringErrs(util.BytesToHumanReadable(volume.Size))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 190, Col: 46}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var12))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 29, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var13 string
templ_7745c5c3_Var13, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", volume.FileCount))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 193, Col: 43}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var13))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 30, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if volume.ReadOnly {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 31, "<span class=\"badge bg-warning\">Read Only</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 32, "<span class=\"badge bg-success\">Read/Write</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 33, "</td><td><div class=\"btn-group\" role=\"group\"><button type=\"button\" class=\"btn btn-sm btn-outline-primary\" onclick=\"showVolumeDetails(event)\" data-volume-id=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var14 string
templ_7745c5c3_Var14, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", volume.Id))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 206, Col: 55}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var14))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 34, "\" data-server=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var15 string
templ_7745c5c3_Var15, templ_7745c5c3_Err = templ.JoinStringErrs(volume.Server)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 207, Col: 37}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var15))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 35, "\" title=\"View volume details\"><i class=\"fas fa-info-circle\"></i></button></div></td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
for _, ecVolume := range data.EcVolumes {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 36, "<tr><td><strong>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var16 string
templ_7745c5c3_Var16, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", ecVolume.VolumeID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 220, Col: 52}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var16))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 37, "</strong></td><td><span class=\"badge bg-info\"><i class=\"fas fa-th-large me-1\"></i>EC</span></td><td><span class=\"badge bg-primary\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var17 string
templ_7745c5c3_Var17, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d/14", ecVolume.TotalShards))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 228, Col: 81}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var17))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 38, "</span></td><td><span class=\"text-muted\">-</span></td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if ecVolume.IsComplete {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 39, "<span class=\"badge bg-success\"><i class=\"fas fa-check me-1\"></i>Complete</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 40, "<span class=\"badge bg-warning\"><i class=\"fas fa-exclamation-triangle me-1\"></i> Missing ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var18 string
templ_7745c5c3_Var18, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", len(ecVolume.MissingShards)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 241, Col: 64}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var18))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 41, " shards</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 42, "</td><td><div class=\"btn-group\" role=\"group\"><button type=\"button\" class=\"btn btn-sm btn-outline-info\" onclick=\"showEcVolumeDetails(event)\" data-volume-id=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var19 string
templ_7745c5c3_Var19, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", ecVolume.VolumeID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 249, Col: 63}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var19))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 43, "\" title=\"View EC volume details\"><i class=\"fas fa-info-circle\"></i></button> ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if !ecVolume.IsComplete {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 44, "<button type=\"button\" class=\"btn btn-sm btn-outline-warning\" onclick=\"repairEcVolume(event)\" data-volume-id=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var20 string
templ_7745c5c3_Var20, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", ecVolume.VolumeID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 256, Col: 64}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var20))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 45, "\" title=\"Repair missing shards\"><i class=\"fas fa-wrench\"></i></button>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 46, "</div></td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 47, "</tbody></table></div><!-- Pagination -->")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.TotalPages > 1 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 48, "<nav aria-label=\"Collection volumes pagination\"><ul class=\"pagination justify-content-center\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.Page > 1 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 49, "<li class=\"page-item\"><a class=\"page-link\" href=\"#\" onclick=\"goToPage(event)\" data-page=\"1\">First</a></li><li class=\"page-item\"><a class=\"page-link\" href=\"#\" onclick=\"goToPage(event)\" data-page=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var21 string
templ_7745c5c3_Var21, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.Page-1))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 278, Col: 104}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var21))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 50, "\">Previous</a></li>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
for i := 1; i <= data.TotalPages; i++ {
if i == data.Page {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 51, "<li class=\"page-item active\"><span class=\"page-link\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var22 string
templ_7745c5c3_Var22, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", i))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 285, Col: 52}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var22))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 52, "</span></li>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else if i <= 3 || i > data.TotalPages-3 || (i >= data.Page-2 && i <= data.Page+2) {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 53, "<li class=\"page-item\"><a class=\"page-link\" href=\"#\" onclick=\"goToPage(event)\" data-page=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var23 string
templ_7745c5c3_Var23, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", i))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 289, Col: 95}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var23))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 54, "\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var24 string
templ_7745c5c3_Var24, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", i))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 289, Col: 119}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var24))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 55, "</a></li>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else if i == 4 && data.Page > 6 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 56, "<li class=\"page-item disabled\"><span class=\"page-link\">...</span></li>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else if i == data.TotalPages-3 && data.Page < data.TotalPages-5 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 57, "<li class=\"page-item disabled\"><span class=\"page-link\">...</span></li>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
}
if data.Page < data.TotalPages {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 58, "<li class=\"page-item\"><a class=\"page-link\" href=\"#\" onclick=\"goToPage(event)\" data-page=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var25 string
templ_7745c5c3_Var25, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.Page+1))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 304, Col: 104}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var25))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 59, "\">Next</a></li><li class=\"page-item\"><a class=\"page-link\" href=\"#\" onclick=\"goToPage(event)\" data-page=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var26 string
templ_7745c5c3_Var26, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.TotalPages))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/collection_details.templ`, Line: 307, Col: 108}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var26))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 60, "\">Last</a></li>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 61, "</ul></nav>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 62, "<script>\n\t\t// Sorting functionality\n\t\tfunction sortBy(field) {\n\t\t\tconst currentSort = new URLSearchParams(window.location.search).get('sort_by');\n\t\t\tconst currentOrder = new URLSearchParams(window.location.search).get('sort_order') || 'asc';\n\t\t\t\n\t\t\tlet newOrder = 'asc';\n\t\t\tif (currentSort === field && currentOrder === 'asc') {\n\t\t\t\tnewOrder = 'desc';\n\t\t\t}\n\t\t\t\n\t\t\tconst url = new URL(window.location);\n\t\t\turl.searchParams.set('sort_by', field);\n\t\t\turl.searchParams.set('sort_order', newOrder);\n\t\t\turl.searchParams.set('page', '1'); // Reset to first page\n\t\t\twindow.location.href = url.toString();\n\t\t}\n\n\t\t// Pagination functionality\n\t\tfunction goToPage(event) {\n\t\t\tevent.preventDefault();\n\t\t\tconst page = event.target.closest('a').getAttribute('data-page');\n\t\t\tconst url = new URL(window.location);\n\t\t\turl.searchParams.set('page', page);\n\t\t\twindow.location.href = url.toString();\n\t\t}\n\n\t\t// Page size functionality\n\t\tfunction changePageSize(newPageSize) {\n\t\t\tconst url = new URL(window.location);\n\t\t\turl.searchParams.set('page_size', newPageSize);\n\t\t\turl.searchParams.set('page', '1'); // Reset to first page when changing page size\n\t\t\twindow.location.href = url.toString();\n\t\t}\n\n\t\t// Volume details\n\t\tfunction showVolumeDetails(event) {\n\t\t\tconst volumeId = event.target.closest('button').getAttribute('data-volume-id');\n\t\t\tconst server = event.target.closest('button').getAttribute('data-server');\n\t\t\twindow.location.href = `/cluster/volumes/${volumeId}/${server}`;\n\t\t}\n\n\t\t// EC Volume details\n\t\tfunction showEcVolumeDetails(event) {\n\t\t\tconst volumeId = event.target.closest('button').getAttribute('data-volume-id');\n\t\t\twindow.location.href = `/cluster/ec-volumes/${volumeId}`;\n\t\t}\n\n\t\t// Repair EC Volume\n\t\tfunction repairEcVolume(event) {\n\t\t\tconst volumeId = event.target.closest('button').getAttribute('data-volume-id');\n\t\t\tif (confirm(`Are you sure you want to repair missing shards for EC volume ${volumeId}?`)) {\n\t\t\t\t// TODO: Implement repair functionality\n\t\t\t\talert('Repair functionality will be implemented soon.');\n\t\t\t}\n\t\t}\n\t</script>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
return nil
})
}
var _ = templruntime.GeneratedTemplate
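Note: the page-number loop above implements a sliding-window pager: the first three and last three pages always render, a ±2 window follows the current page, and each gap collapses into a single "..." entry. A minimal standalone sketch of the same rule (the function name and sample values are illustrative, not part of the PR):

package main

import "fmt"

// visiblePages mirrors the template's condition: keep pages 1-3, the last 3,
// and a ±2 window around the current page; collapse each gap into "...".
func visiblePages(page, totalPages int) []string {
	var out []string
	for i := 1; i <= totalPages; i++ {
		switch {
		case i == page:
			out = append(out, fmt.Sprintf("[%d]", i)) // active page
		case i <= 3 || i > totalPages-3 || (i >= page-2 && i <= page+2):
			out = append(out, fmt.Sprintf("%d", i))
		case i == 4 && page > 6:
			out = append(out, "...") // gap after the leading pages
		case i == totalPages-3 && page < totalPages-5:
			out = append(out, "...") // gap before the trailing pages
		}
	}
	return out
}

func main() {
	fmt.Println(visiblePages(10, 20)) // [1 2 3 ... 8 9 [10] 11 12 ... 18 19 20]
}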

View File

@@ -0,0 +1,313 @@
package app
import (
"fmt"
"github.com/seaweedfs/seaweedfs/weed/admin/dash"
)
templ EcVolumeDetails(data dash.EcVolumeDetailsData) {
<div class="d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pt-3 pb-2 mb-3 border-bottom">
<div>
<h1 class="h2">
<i class="fas fa-th-large me-2"></i>EC Volume Details
</h1>
<nav aria-label="breadcrumb">
<ol class="breadcrumb">
<li class="breadcrumb-item"><a href="/admin" class="text-decoration-none">Dashboard</a></li>
<li class="breadcrumb-item"><a href="/cluster/ec-shards" class="text-decoration-none">EC Volumes</a></li>
<li class="breadcrumb-item active" aria-current="page">Volume {fmt.Sprintf("%d", data.VolumeID)}</li>
</ol>
</nav>
</div>
<div class="btn-toolbar mb-2 mb-md-0">
<div class="btn-group me-2">
<button type="button" class="btn btn-sm btn-outline-secondary" onclick="history.back()">
<i class="fas fa-arrow-left me-1"></i>Back
</button>
<button type="button" class="btn btn-sm btn-outline-primary" onclick="window.location.reload()">
<i class="fas fa-refresh me-1"></i>Refresh
</button>
</div>
</div>
</div>
<!-- EC Volume Summary -->
<div class="row mb-4">
<div class="col-md-6">
<div class="card">
<div class="card-header">
<h5 class="card-title mb-0">
<i class="fas fa-info-circle me-2"></i>Volume Information
</h5>
</div>
<div class="card-body">
<table class="table table-borderless">
<tr>
<td><strong>Volume ID:</strong></td>
<td>{fmt.Sprintf("%d", data.VolumeID)}</td>
</tr>
<tr>
<td><strong>Collection:</strong></td>
<td>
if data.Collection != "" {
<span class="badge bg-info">{data.Collection}</span>
} else {
<span class="text-muted">default</span>
}
</td>
</tr>
<tr>
<td><strong>Status:</strong></td>
<td>
if data.IsComplete {
<span class="badge bg-success">
<i class="fas fa-check me-1"></i>Complete ({data.TotalShards}/14 shards)
</span>
} else {
<span class="badge bg-warning">
<i class="fas fa-exclamation-triangle me-1"></i>Incomplete ({data.TotalShards}/14 shards)
</span>
}
</td>
</tr>
if !data.IsComplete {
<tr>
<td><strong>Missing Shards:</strong></td>
<td>
for i, shardID := range data.MissingShards {
if i > 0 {
<span>, </span>
}
<span class="badge bg-danger">{fmt.Sprintf("%02d", shardID)}</span>
}
</td>
</tr>
}
<tr>
<td><strong>Data Centers:</strong></td>
<td>
for i, dc := range data.DataCenters {
if i > 0 {
<span>, </span>
}
<span class="badge bg-primary">{dc}</span>
}
</td>
</tr>
<tr>
<td><strong>Servers:</strong></td>
<td>
<span class="text-muted">{fmt.Sprintf("%d servers", len(data.Servers))}</span>
</td>
</tr>
<tr>
<td><strong>Last Updated:</strong></td>
<td>
<span class="text-muted">{data.LastUpdated.Format("2006-01-02 15:04:05")}</span>
</td>
</tr>
</table>
</div>
</div>
</div>
<div class="col-md-6">
<div class="card">
<div class="card-header">
<h5 class="card-title mb-0">
<i class="fas fa-chart-pie me-2"></i>Shard Distribution
</h5>
</div>
<div class="card-body">
<div class="row text-center">
<div class="col-4">
<div class="border rounded p-3">
<h3 class="text-primary mb-1">{fmt.Sprintf("%d", data.TotalShards)}</h3>
<small class="text-muted">Total Shards</small>
</div>
</div>
<div class="col-4">
<div class="border rounded p-3">
<h3 class="text-success mb-1">{fmt.Sprintf("%d", len(data.DataCenters))}</h3>
<small class="text-muted">Data Centers</small>
</div>
</div>
<div class="col-4">
<div class="border rounded p-3">
<h3 class="text-info mb-1">{fmt.Sprintf("%d", len(data.Servers))}</h3>
<small class="text-muted">Servers</small>
</div>
</div>
</div>
<!-- Shard Distribution Visualization -->
<div class="mt-3">
<h6>Present Shards:</h6>
<div class="d-flex flex-wrap gap-1">
for _, shard := range data.Shards {
<span class="badge bg-success me-1 mb-1">{fmt.Sprintf("%02d", shard.ShardID)}</span>
}
</div>
if len(data.MissingShards) > 0 {
<h6 class="mt-2">Missing Shards:</h6>
<div class="d-flex flex-wrap gap-1">
for _, shardID := range data.MissingShards {
<span class="badge bg-secondary me-1 mb-1">{fmt.Sprintf("%02d", shardID)}</span>
}
</div>
}
</div>
</div>
</div>
</div>
</div>
<!-- Shard Details Table -->
<div class="card">
<div class="card-header">
<h5 class="card-title mb-0">
<i class="fas fa-list me-2"></i>Shard Details
</h5>
</div>
<div class="card-body">
if len(data.Shards) > 0 {
<div class="table-responsive">
<table class="table table-striped table-hover">
<thead>
<tr>
<th>
<a href="#" onclick="sortBy('shard_id')" class="text-dark text-decoration-none">
Shard ID
if data.SortBy == "shard_id" {
if data.SortOrder == "asc" {
<i class="fas fa-sort-up ms-1"></i>
} else {
<i class="fas fa-sort-down ms-1"></i>
}
} else {
<i class="fas fa-sort ms-1 text-muted"></i>
}
</a>
</th>
<th>
<a href="#" onclick="sortBy('server')" class="text-dark text-decoration-none">
Server
if data.SortBy == "server" {
if data.SortOrder == "asc" {
<i class="fas fa-sort-up ms-1"></i>
} else {
<i class="fas fa-sort-down ms-1"></i>
}
} else {
<i class="fas fa-sort ms-1 text-muted"></i>
}
</a>
</th>
<th>
<a href="#" onclick="sortBy('data_center')" class="text-dark text-decoration-none">
Data Center
if data.SortBy == "data_center" {
if data.SortOrder == "asc" {
<i class="fas fa-sort-up ms-1"></i>
} else {
<i class="fas fa-sort-down ms-1"></i>
}
} else {
<i class="fas fa-sort ms-1 text-muted"></i>
}
</a>
</th>
<th>
<a href="#" onclick="sortBy('rack')" class="text-dark text-decoration-none">
Rack
if data.SortBy == "rack" {
if data.SortOrder == "asc" {
<i class="fas fa-sort-up ms-1"></i>
} else {
<i class="fas fa-sort-down ms-1"></i>
}
} else {
<i class="fas fa-sort ms-1 text-muted"></i>
}
</a>
</th>
<th class="text-dark">Disk Type</th>
<th class="text-dark">Shard Size</th>
<th class="text-dark">Actions</th>
</tr>
</thead>
<tbody>
for _, shard := range data.Shards {
<tr>
<td>
<span class="badge bg-primary">{fmt.Sprintf("%02d", shard.ShardID)}</span>
</td>
<td>
<a href={ templ.URL("/cluster/volume-servers/" + shard.Server) } class="text-primary text-decoration-none">
<code class="small">{shard.Server}</code>
</a>
</td>
<td>
<span class="badge bg-primary text-white">{shard.DataCenter}</span>
</td>
<td>
<span class="badge bg-secondary text-white">{shard.Rack}</span>
</td>
<td>
<span class="text-dark">{shard.DiskType}</span>
</td>
<td>
<span class="text-success">{bytesToHumanReadableUint64(shard.Size)}</span>
</td>
<td>
<a href={ templ.SafeURL(fmt.Sprintf("http://%s/ui/index.html", shard.Server)) } target="_blank" class="btn btn-sm btn-primary">
<i class="fas fa-external-link-alt me-1"></i>Volume Server
</a>
</td>
</tr>
}
</tbody>
</table>
</div>
} else {
<div class="text-center py-4">
<i class="fas fa-exclamation-triangle fa-3x text-warning mb-3"></i>
<h5>No EC shards found</h5>
<p class="text-muted">This volume may not be EC encoded yet.</p>
</div>
}
</div>
</div>
<script>
// Sorting functionality
function sortBy(field) {
const currentSort = new URLSearchParams(window.location.search).get('sort_by');
const currentOrder = new URLSearchParams(window.location.search).get('sort_order') || 'asc';
let newOrder = 'asc';
if (currentSort === field && currentOrder === 'asc') {
newOrder = 'desc';
}
const url = new URL(window.location);
url.searchParams.set('sort_by', field);
url.searchParams.set('sort_order', newOrder);
window.location.href = url.toString();
}
</script>
}
// Helper function to convert bytes to human readable format (uint64 version)
func bytesToHumanReadableUint64(bytes uint64) string {
const unit = 1024
if bytes < unit {
return fmt.Sprintf("%dB", bytes)
}
div, exp := uint64(unit), 0
for n := bytes / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f%cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
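As a quick sanity check of the helper above (values worked out by hand, not output captured from the PR): it divides by 1024 until the value drops below one unit, then formats with one decimal and a unit letter from "KMGTPE".

// bytesToHumanReadableUint64(512)     -> "512B"
// bytesToHumanReadableUint64(1536)    -> "1.5KB"
// bytesToHumanReadableUint64(1048576) -> "1.0MB"
// bytesToHumanReadableUint64(5 << 30) -> "5.0GB"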

View File

@@ -0,0 +1,560 @@
// Code generated by templ - DO NOT EDIT.
// templ: version: v0.3.906
package app
//lint:file-ignore SA4006 This context is only used if a nested component is present.
import "github.com/a-h/templ"
import templruntime "github.com/a-h/templ/runtime"
import (
"fmt"
"github.com/seaweedfs/seaweedfs/weed/admin/dash"
)
func EcVolumeDetails(data dash.EcVolumeDetailsData) templ.Component {
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
return templ_7745c5c3_CtxErr
}
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
if !templ_7745c5c3_IsBuffer {
defer func() {
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
if templ_7745c5c3_Err == nil {
templ_7745c5c3_Err = templ_7745c5c3_BufErr
}
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var1 := templ.GetChildren(ctx)
if templ_7745c5c3_Var1 == nil {
templ_7745c5c3_Var1 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 1, "<div class=\"d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pt-3 pb-2 mb-3 border-bottom\"><div><h1 class=\"h2\"><i class=\"fas fa-th-large me-2\"></i>EC Volume Details</h1><nav aria-label=\"breadcrumb\"><ol class=\"breadcrumb\"><li class=\"breadcrumb-item\"><a href=\"/admin\" class=\"text-decoration-none\">Dashboard</a></li><li class=\"breadcrumb-item\"><a href=\"/cluster/ec-shards\" class=\"text-decoration-none\">EC Volumes</a></li><li class=\"breadcrumb-item active\" aria-current=\"page\">Volume ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var2 string
templ_7745c5c3_Var2, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.VolumeID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 18, Col: 115}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var2))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 2, "</li></ol></nav></div><div class=\"btn-toolbar mb-2 mb-md-0\"><div class=\"btn-group me-2\"><button type=\"button\" class=\"btn btn-sm btn-outline-secondary\" onclick=\"history.back()\"><i class=\"fas fa-arrow-left me-1\"></i>Back</button> <button type=\"button\" class=\"btn btn-sm btn-outline-primary\" onclick=\"window.location.reload()\"><i class=\"fas fa-refresh me-1\"></i>Refresh</button></div></div></div><!-- EC Volume Summary --><div class=\"row mb-4\"><div class=\"col-md-6\"><div class=\"card\"><div class=\"card-header\"><h5 class=\"card-title mb-0\"><i class=\"fas fa-info-circle me-2\"></i>Volume Information</h5></div><div class=\"card-body\"><table class=\"table table-borderless\"><tr><td><strong>Volume ID:</strong></td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var3 string
templ_7745c5c3_Var3, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.VolumeID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 47, Col: 65}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var3))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 3, "</td></tr><tr><td><strong>Collection:</strong></td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.Collection != "" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 4, "<span class=\"badge bg-info\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var4 string
templ_7745c5c3_Var4, templ_7745c5c3_Err = templ.JoinStringErrs(data.Collection)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 53, Col: 80}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var4))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 5, "</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 6, "<span class=\"text-muted\">default</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 7, "</td></tr><tr><td><strong>Status:</strong></td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.IsComplete {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 8, "<span class=\"badge bg-success\"><i class=\"fas fa-check me-1\"></i>Complete (")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var5 string
templ_7745c5c3_Var5, templ_7745c5c3_Err = templ.JoinStringErrs(data.TotalShards)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 64, Col: 100}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var5))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 9, "/14 shards)</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 10, "<span class=\"badge bg-warning\"><i class=\"fas fa-exclamation-triangle me-1\"></i>Incomplete (")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var6 string
templ_7745c5c3_Var6, templ_7745c5c3_Err = templ.JoinStringErrs(data.TotalShards)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 68, Col: 117}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var6))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 11, "/14 shards)</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 12, "</td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if !data.IsComplete {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 13, "<tr><td><strong>Missing Shards:</strong></td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for i, shardID := range data.MissingShards {
if i > 0 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 14, "<span>, </span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 15, " <span class=\"badge bg-danger\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var7 string
templ_7745c5c3_Var7, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%02d", shardID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 81, Col: 99}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var7))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 16, "</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 17, "</td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 18, "<tr><td><strong>Data Centers:</strong></td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for i, dc := range data.DataCenters {
if i > 0 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 19, "<span>, </span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 20, " <span class=\"badge bg-primary\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var8 string
templ_7745c5c3_Var8, templ_7745c5c3_Err = templ.JoinStringErrs(dc)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 93, Col: 70}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var8))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 21, "</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 22, "</td></tr><tr><td><strong>Servers:</strong></td><td><span class=\"text-muted\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var9 string
templ_7745c5c3_Var9, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d servers", len(data.Servers)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 100, Col: 102}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var9))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 23, "</span></td></tr><tr><td><strong>Last Updated:</strong></td><td><span class=\"text-muted\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var10 string
templ_7745c5c3_Var10, templ_7745c5c3_Err = templ.JoinStringErrs(data.LastUpdated.Format("2006-01-02 15:04:05"))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 106, Col: 104}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var10))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 24, "</span></td></tr></table></div></div></div><div class=\"col-md-6\"><div class=\"card\"><div class=\"card-header\"><h5 class=\"card-title mb-0\"><i class=\"fas fa-chart-pie me-2\"></i>Shard Distribution</h5></div><div class=\"card-body\"><div class=\"row text-center\"><div class=\"col-4\"><div class=\"border rounded p-3\"><h3 class=\"text-primary mb-1\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var11 string
templ_7745c5c3_Var11, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.TotalShards))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 125, Col: 98}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var11))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 25, "</h3><small class=\"text-muted\">Total Shards</small></div></div><div class=\"col-4\"><div class=\"border rounded p-3\"><h3 class=\"text-success mb-1\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var12 string
templ_7745c5c3_Var12, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", len(data.DataCenters)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 131, Col: 103}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var12))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 26, "</h3><small class=\"text-muted\">Data Centers</small></div></div><div class=\"col-4\"><div class=\"border rounded p-3\"><h3 class=\"text-info mb-1\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var13 string
templ_7745c5c3_Var13, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", len(data.Servers)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 137, Col: 96}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var13))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 27, "</h3><small class=\"text-muted\">Servers</small></div></div></div><!-- Shard Distribution Visualization --><div class=\"mt-3\"><h6>Present Shards:</h6><div class=\"d-flex flex-wrap gap-1\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, shard := range data.Shards {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 28, "<span class=\"badge bg-success me-1 mb-1\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var14 string
templ_7745c5c3_Var14, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%02d", shard.ShardID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 148, Col: 108}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var14))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 29, "</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 30, "</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if len(data.MissingShards) > 0 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 31, "<h6 class=\"mt-2\">Missing Shards:</h6><div class=\"d-flex flex-wrap gap-1\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, shardID := range data.MissingShards {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 32, "<span class=\"badge bg-secondary me-1 mb-1\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var15 string
templ_7745c5c3_Var15, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%02d", shardID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 155, Col: 108}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var15))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 33, "</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 34, "</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 35, "</div></div></div></div></div><!-- Shard Details Table --><div class=\"card\"><div class=\"card-header\"><h5 class=\"card-title mb-0\"><i class=\"fas fa-list me-2\"></i>Shard Details</h5></div><div class=\"card-body\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if len(data.Shards) > 0 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 36, "<div class=\"table-responsive\"><table class=\"table table-striped table-hover\"><thead><tr><th><a href=\"#\" onclick=\"sortBy('shard_id')\" class=\"text-dark text-decoration-none\">Shard ID ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.SortBy == "shard_id" {
if data.SortOrder == "asc" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 37, "<i class=\"fas fa-sort-up ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 38, "<i class=\"fas fa-sort-down ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 39, "<i class=\"fas fa-sort ms-1 text-muted\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 40, "</a></th><th><a href=\"#\" onclick=\"sortBy('server')\" class=\"text-dark text-decoration-none\">Server ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.SortBy == "server" {
if data.SortOrder == "asc" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 41, "<i class=\"fas fa-sort-up ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 42, "<i class=\"fas fa-sort-down ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 43, "<i class=\"fas fa-sort ms-1 text-muted\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 44, "</a></th><th><a href=\"#\" onclick=\"sortBy('data_center')\" class=\"text-dark text-decoration-none\">Data Center ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.SortBy == "data_center" {
if data.SortOrder == "asc" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 45, "<i class=\"fas fa-sort-up ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 46, "<i class=\"fas fa-sort-down ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 47, "<i class=\"fas fa-sort ms-1 text-muted\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 48, "</a></th><th><a href=\"#\" onclick=\"sortBy('rack')\" class=\"text-dark text-decoration-none\">Rack ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.SortBy == "rack" {
if data.SortOrder == "asc" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 49, "<i class=\"fas fa-sort-up ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 50, "<i class=\"fas fa-sort-down ms-1\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 51, "<i class=\"fas fa-sort ms-1 text-muted\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 52, "</a></th><th class=\"text-dark\">Disk Type</th><th class=\"text-dark\">Shard Size</th><th class=\"text-dark\">Actions</th></tr></thead> <tbody>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, shard := range data.Shards {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 53, "<tr><td><span class=\"badge bg-primary\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var16 string
templ_7745c5c3_Var16, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%02d", shard.ShardID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 243, Col: 110}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var16))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 54, "</span></td><td><a href=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var17 templ.SafeURL
templ_7745c5c3_Var17, templ_7745c5c3_Err = templ.JoinURLErrs(templ.URL("/cluster/volume-servers/" + shard.Server))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 246, Col: 106}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var17))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 55, "\" class=\"text-primary text-decoration-none\"><code class=\"small\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var18 string
templ_7745c5c3_Var18, templ_7745c5c3_Err = templ.JoinStringErrs(shard.Server)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 247, Col: 81}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var18))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 56, "</code></a></td><td><span class=\"badge bg-primary text-white\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var19 string
templ_7745c5c3_Var19, templ_7745c5c3_Err = templ.JoinStringErrs(shard.DataCenter)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 251, Col: 103}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var19))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 57, "</span></td><td><span class=\"badge bg-secondary text-white\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var20 string
templ_7745c5c3_Var20, templ_7745c5c3_Err = templ.JoinStringErrs(shard.Rack)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 254, Col: 99}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var20))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 58, "</span></td><td><span class=\"text-dark\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var21 string
templ_7745c5c3_Var21, templ_7745c5c3_Err = templ.JoinStringErrs(shard.DiskType)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 257, Col: 83}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var21))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 59, "</span></td><td><span class=\"text-success\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var22 string
templ_7745c5c3_Var22, templ_7745c5c3_Err = templ.JoinStringErrs(bytesToHumanReadableUint64(shard.Size))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 260, Col: 110}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var22))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 60, "</span></td><td><a href=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var23 templ.SafeURL
templ_7745c5c3_Var23, templ_7745c5c3_Err = templ.JoinURLErrs(templ.SafeURL(fmt.Sprintf("http://%s/ui/index.html", shard.Server)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/ec_volume_details.templ`, Line: 263, Col: 121}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var23))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 61, "\" target=\"_blank\" class=\"btn btn-sm btn-primary\"><i class=\"fas fa-external-link-alt me-1\"></i>Volume Server</a></td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 62, "</tbody></table></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 63, "<div class=\"text-center py-4\"><i class=\"fas fa-exclamation-triangle fa-3x text-warning mb-3\"></i><h5>No EC shards found</h5><p class=\"text-muted\">This volume may not be EC encoded yet.</p></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 64, "</div></div><script>\n // Sorting functionality\n function sortBy(field) {\n const currentSort = new URLSearchParams(window.location.search).get('sort_by');\n const currentOrder = new URLSearchParams(window.location.search).get('sort_order') || 'asc';\n \n let newOrder = 'asc';\n if (currentSort === field && currentOrder === 'asc') {\n newOrder = 'desc';\n }\n \n const url = new URL(window.location);\n url.searchParams.set('sort_by', field);\n url.searchParams.set('sort_order', newOrder);\n window.location.href = url.toString();\n }\n </script>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
return nil
})
}
// Helper function to convert bytes to human readable format (uint64 version)
func bytesToHumanReadableUint64(bytes uint64) string {
const unit = 1024
if bytes < unit {
return fmt.Sprintf("%dB", bytes)
}
div, exp := uint64(unit), 0
for n := bytes / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f%cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}
var _ = templruntime.GeneratedTemplate
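Both generated files repeat one expansion pattern: each `{expr}` in the .templ source becomes stringify, error-check, escape, write. A hand-written sketch of that sequence for a single interpolation (the buffer variable is shortened for readability; the location values are copied from the shard.Server case above):

v, err := templ.JoinStringErrs(shard.Server) // stringify the expression, keeping any conversion error
if err != nil {
	// the error carries the .templ source location, so failures point at the template, not the generated file
	return templ.Error{Err: err, FileName: `view/app/ec_volume_details.templ`, Line: 247, Col: 81}
}
if _, err = buf.WriteString(templ.EscapeString(v)); err != nil { // HTML-escape before writing
	return err
}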

View File

@@ -47,63 +47,70 @@ templ MaintenanceConfig(data *maintenance.MaintenanceConfigData) {
<div class="mb-3">
<label for="scanInterval" class="form-label">Scan Interval (minutes)</label>
<input type="number" class="form-control" id="scanInterval"
value={fmt.Sprintf("%.0f", float64(data.Config.ScanIntervalSeconds)/60)} min="1" max="1440">
value={fmt.Sprintf("%.0f", float64(data.Config.ScanIntervalSeconds)/60)}
placeholder="30 (default)" min="1" max="1440">
<small class="form-text text-muted">
How often to scan for maintenance tasks (1-1440 minutes).
How often to scan for maintenance tasks (1-1440 minutes). <strong>Default: 30 minutes</strong>
</small>
</div>
<div class="mb-3">
<label for="workerTimeout" class="form-label">Worker Timeout (minutes)</label>
<input type="number" class="form-control" id="workerTimeout"
value={fmt.Sprintf("%.0f", float64(data.Config.WorkerTimeoutSeconds)/60)} min="1" max="60">
value={fmt.Sprintf("%.0f", float64(data.Config.WorkerTimeoutSeconds)/60)}
placeholder="5 (default)" min="1" max="60">
<small class="form-text text-muted">
How long to wait for worker heartbeat before considering it inactive (1-60 minutes).
How long to wait for worker heartbeat before considering it inactive (1-60 minutes). <strong>Default: 5 minutes</strong>
</small>
</div>
<div class="mb-3">
<label for="taskTimeout" class="form-label">Task Timeout (hours)</label>
<input type="number" class="form-control" id="taskTimeout"
value={fmt.Sprintf("%.0f", float64(data.Config.TaskTimeoutSeconds)/3600)} min="1" max="24">
value={fmt.Sprintf("%.0f", float64(data.Config.TaskTimeoutSeconds)/3600)}
placeholder="2 (default)" min="1" max="24">
<small class="form-text text-muted">
Maximum time allowed for a single task to complete (1-24 hours).
Maximum time allowed for a single task to complete (1-24 hours). <strong>Default: 2 hours</strong>
</small>
</div>
<div class="mb-3">
<label for="globalMaxConcurrent" class="form-label">Global Concurrent Limit</label>
<input type="number" class="form-control" id="globalMaxConcurrent"
value={fmt.Sprintf("%d", data.Config.Policy.GlobalMaxConcurrent)} min="1" max="20">
value={fmt.Sprintf("%d", data.Config.Policy.GlobalMaxConcurrent)}
placeholder="4 (default)" min="1" max="20">
<small class="form-text text-muted">
Maximum number of maintenance tasks that can run simultaneously across all workers (1-20).
Maximum number of maintenance tasks that can run simultaneously across all workers (1-20). <strong>Default: 4</strong>
</small>
</div>
<div class="mb-3">
<label for="maxRetries" class="form-label">Default Max Retries</label>
<input type="number" class="form-control" id="maxRetries"
value={fmt.Sprintf("%d", data.Config.MaxRetries)} min="0" max="10">
value={fmt.Sprintf("%d", data.Config.MaxRetries)}
placeholder="3 (default)" min="0" max="10">
<small class="form-text text-muted">
Default number of times to retry failed tasks (0-10).
Default number of times to retry failed tasks (0-10). <strong>Default: 3</strong>
</small>
</div>
<div class="mb-3">
<label for="retryDelay" class="form-label">Retry Delay (minutes)</label>
<input type="number" class="form-control" id="retryDelay"
value={fmt.Sprintf("%.0f", float64(data.Config.RetryDelaySeconds)/60)} min="1" max="120">
value={fmt.Sprintf("%.0f", float64(data.Config.RetryDelaySeconds)/60)}
placeholder="15 (default)" min="1" max="120">
<small class="form-text text-muted">
Time to wait before retrying failed tasks (1-120 minutes).
Time to wait before retrying failed tasks (1-120 minutes). <strong>Default: 15 minutes</strong>
</small>
</div>
<div class="mb-3">
<label for="taskRetention" class="form-label">Task Retention (days)</label>
<input type="number" class="form-control" id="taskRetention"
value={fmt.Sprintf("%.0f", float64(data.Config.TaskRetentionSeconds)/(24*3600))} min="1" max="30">
value={fmt.Sprintf("%.0f", float64(data.Config.TaskRetentionSeconds)/(24*3600))}
placeholder="7 (default)" min="1" max="30">
<small class="form-text text-muted">
How long to keep completed/failed task records (1-30 days).
How long to keep completed/failed task records (1-30 days). <strong>Default: 7 days</strong>
</small>
</div>
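The pattern across all seven fields is the same: the config stores raw seconds, the template divides for display, and the save handler further below multiplies back. A sketch of the round trip for the scan interval (variable names are illustrative):

// render: seconds -> minutes for the form field (1800 -> "30")
displayMinutes := fmt.Sprintf("%.0f", float64(config.ScanIntervalSeconds)/60)
// save: the JS handler converts back with parseInt(value) * 60, so "30" -> 1800 seconds
_ = displayMinutes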
@@ -143,7 +150,7 @@ templ MaintenanceConfig(data *maintenance.MaintenanceConfigData) {
<i class={menuItem.Icon + " me-2"}></i>
{menuItem.DisplayName}
</h6>
if menuItem.IsEnabled {
<span class="badge bg-success">Enabled</span>
} else {
<span class="badge bg-secondary">Disabled</span>
@@ -200,44 +207,60 @@ templ MaintenanceConfig(data *maintenance.MaintenanceConfigData) {
<script>
function saveConfiguration() {
// First, get current configuration to preserve existing values
fetch('/api/maintenance/config')
.then(response => response.json())
.then(currentConfig => {
// Update only the fields from the form
const updatedConfig = {
...currentConfig.config, // Preserve existing config
enabled: document.getElementById('enabled').checked,
scan_interval_seconds: parseInt(document.getElementById('scanInterval').value) * 60, // Convert to seconds
worker_timeout_seconds: parseInt(document.getElementById('workerTimeout').value) * 60, // Convert to seconds
task_timeout_seconds: parseInt(document.getElementById('taskTimeout').value) * 3600, // Convert to seconds
retry_delay_seconds: parseInt(document.getElementById('retryDelay').value) * 60, // Convert to seconds
max_retries: parseInt(document.getElementById('maxRetries').value),
task_retention_seconds: parseInt(document.getElementById('taskRetention').value) * 24 * 3600, // Convert to seconds
policy: {
...currentConfig.config.policy, // Preserve existing policy
global_max_concurrent: parseInt(document.getElementById('globalMaxConcurrent').value)
}
};
// Send the updated configuration
return fetch('/api/maintenance/config', {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(updatedConfig)
});
})
.then(response => response.json())
.then(data => {
if (data.success) {
alert('Configuration saved successfully');
location.reload(); // Reload to show updated values
} else {
alert('Failed to save configuration: ' + (data.error || 'Unknown error'));
}
})
.catch(error => {
alert('Error: ' + error.message);
});
}
function resetToDefaults() {
if (confirm('Are you sure you want to reset to default configuration? This will overwrite your current settings.')) {
// Reset form to defaults (matching DefaultMaintenanceConfig values)
document.getElementById('enabled').checked = false;
document.getElementById('scanInterval').value = '30';
document.getElementById('workerTimeout').value = '5';
document.getElementById('taskTimeout').value = '2';
document.getElementById('globalMaxConcurrent').value = '4';
document.getElementById('maxRetries').value = '3';
document.getElementById('retryDelay').value = '15';
document.getElementById('taskRetention').value = '7';
}
}
</script>
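For reference, the JSON body assembled by saveConfiguration() above maps onto a Go shape along these lines. This is a sketch only: the field names mirror the JSON keys the form sends and the defaults mirror resetToDefaults(), but the real MaintenanceConfig/MaintenancePolicy definitions live in weed/admin/maintenance and may carry more fields.

```go
// Sketch: field names mirror the JSON keys sent by saveConfiguration();
// the actual types in weed/admin/maintenance may differ.
package maintenance

type MaintenancePolicy struct {
	GlobalMaxConcurrent int32 `json:"global_max_concurrent"`
}

type MaintenanceConfig struct {
	Enabled              bool               `json:"enabled"`
	ScanIntervalSeconds  int32              `json:"scan_interval_seconds"`
	WorkerTimeoutSeconds int32              `json:"worker_timeout_seconds"`
	TaskTimeoutSeconds   int32              `json:"task_timeout_seconds"`
	RetryDelaySeconds    int32              `json:"retry_delay_seconds"`
	MaxRetries           int32              `json:"max_retries"`
	TaskRetentionSeconds int32              `json:"task_retention_seconds"`
	Policy               *MaintenancePolicy `json:"policy"`
}

// Defaults matching the values resetToDefaults() writes into the form:
// scan 30m, worker timeout 5m, task timeout 2h, retry delay 15m,
// retention 7d, 3 retries, 4 concurrent tasks.
func DefaultMaintenanceConfigSketch() *MaintenanceConfig {
	return &MaintenanceConfig{
		Enabled:              false,
		ScanIntervalSeconds:  30 * 60,
		WorkerTimeoutSeconds: 5 * 60,
		TaskTimeoutSeconds:   2 * 3600,
		RetryDelaySeconds:    15 * 60,
		MaxRetries:           3,
		TaskRetentionSeconds: 7 * 24 * 3600,
		Policy:               &MaintenancePolicy{GlobalMaxConcurrent: 4},
	}
}
```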

View File

@@ -0,0 +1,381 @@
package app
import (
"fmt"
"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
"github.com/seaweedfs/seaweedfs/weed/admin/config"
"github.com/seaweedfs/seaweedfs/weed/admin/view/components"
)
templ MaintenanceConfigSchema(data *maintenance.MaintenanceConfigData, schema *maintenance.MaintenanceConfigSchema) {
<div class="container-fluid">
<div class="row mb-4">
<div class="col-12">
<div class="d-flex justify-content-between align-items-center">
<h2 class="mb-0">
<i class="fas fa-cogs me-2"></i>
Maintenance Configuration
</h2>
<div class="btn-group">
<a href="/maintenance/tasks" class="btn btn-outline-primary">
<i class="fas fa-tasks me-1"></i>
View Tasks
</a>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0">System Settings</h5>
</div>
<div class="card-body">
<form id="maintenanceConfigForm">
<!-- Dynamically render all schema fields in order -->
for _, field := range schema.Fields {
@ConfigField(field, data.Config)
}
<div class="d-flex gap-2">
<button type="button" class="btn btn-primary" onclick="saveConfiguration()">
<i class="fas fa-save me-1"></i>
Save Configuration
</button>
<button type="button" class="btn btn-secondary" onclick="resetToDefaults()">
<i class="fas fa-undo me-1"></i>
Reset to Defaults
</button>
</div>
</form>
</div>
</div>
</div>
</div>
<!-- Task Configuration Cards -->
<div class="row mt-4">
<div class="col-md-4">
<div class="card">
<div class="card-header">
<h5 class="mb-0">
<i class="fas fa-broom me-2"></i>
Volume Vacuum
</h5>
</div>
<div class="card-body">
<p class="card-text">Reclaims disk space by removing deleted files from volumes.</p>
<a href="/maintenance/config/vacuum" class="btn btn-primary">Configure</a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="card">
<div class="card-header">
<h5 class="mb-0">
<i class="fas fa-balance-scale me-2"></i>
Volume Balance
</h5>
</div>
<div class="card-body">
<p class="card-text">Redistributes volumes across servers to optimize storage utilization.</p>
<a href="/maintenance/config/balance" class="btn btn-primary">Configure</a>
</div>
</div>
</div>
<div class="col-md-4">
<div class="card">
<div class="card-header">
<h5 class="mb-0">
<i class="fas fa-shield-alt me-2"></i>
Erasure Coding
</h5>
</div>
<div class="card-body">
<p class="card-text">Converts volumes to erasure coded format for improved durability.</p>
<a href="/maintenance/config/erasure_coding" class="btn btn-primary">Configure</a>
</div>
</div>
</div>
</div>
</div>
<script>
function saveConfiguration() {
const form = document.getElementById('maintenanceConfigForm');
const formData = new FormData(form);
// Convert form data to JSON, handling interval fields specially
const config = {};
for (let [key, value] of formData.entries()) {
if (key.endsWith('_value')) {
// This is an interval value part
const baseKey = key.replace('_value', '');
const unitKey = baseKey + '_unit';
const unitValue = formData.get(unitKey);
if (unitValue) {
// Convert to seconds based on unit
const numValue = parseInt(value) || 0;
let seconds = numValue;
switch(unitValue) {
case 'minutes':
seconds = numValue * 60;
break;
case 'hours':
seconds = numValue * 3600;
break;
case 'days':
seconds = numValue * 24 * 3600;
break;
}
config[baseKey] = seconds;
}
} else if (key.endsWith('_unit')) {
// Skip unit keys - they're handled with their corresponding value
continue;
} else {
// Regular field
if (form.querySelector(`[name="${key}"]`).type === 'checkbox') {
config[key] = form.querySelector(`[name="${key}"]`).checked;
} else {
const numValue = parseFloat(value);
config[key] = isNaN(numValue) ? value : numValue;
}
}
}
fetch('/api/maintenance/config', {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(config)
})
.then(response => {
if (response.status === 401) {
alert('Authentication required. Please log in first.');
window.location.href = '/login';
return;
}
return response.json();
})
.then(data => {
if (!data) return; // Skip if redirected to login
if (data.success) {
alert('Configuration saved successfully!');
location.reload();
} else {
alert('Error saving configuration: ' + (data.error || 'Unknown error'));
}
})
.catch(error => {
console.error('Error:', error);
alert('Error saving configuration: ' + error.message);
});
}
function resetToDefaults() {
if (confirm('Are you sure you want to reset to default configuration? This will overwrite your current settings.')) {
fetch('/maintenance/config/defaults', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
}
})
.then(response => response.json())
.then(data => {
if (data.success) {
alert('Configuration reset to defaults!');
location.reload();
} else {
alert('Error resetting configuration: ' + (data.error || 'Unknown error'));
}
})
.catch(error => {
console.error('Error:', error);
alert('Error resetting configuration: ' + error.message);
});
}
}
</script>
}
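The `_value`/`_unit` pairing handled above is the contract produced by the interval input type rendered in ConfigField below. The same unit-to-seconds conversion, expressed server-side as a sketch:

```go
// Sketch: mirrors the minutes/hours/days switch in saveConfiguration() above.
func intervalToSeconds(value int32, unit string) int32 {
	switch unit {
	case "minutes":
		return value * 60
	case "hours":
		return value * 3600
	case "days":
		return value * 24 * 3600
	default:
		return value // treat unknown units as raw seconds
	}
}
```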
// ConfigField renders a single configuration field based on schema with typed value lookup
templ ConfigField(field *config.Field, config *maintenance.MaintenanceConfig) {
if field.InputType == "interval" {
<!-- Interval field with number input + unit dropdown -->
<div class="mb-3">
<label for={ field.JSONName } class="form-label">
{ field.DisplayName }
if field.Required {
<span class="text-danger">*</span>
}
</label>
<div class="input-group">
<input
type="number"
class="form-control"
id={ field.JSONName + "_value" }
name={ field.JSONName + "_value" }
value={ fmt.Sprintf("%.0f", components.ConvertInt32SecondsToDisplayValue(getMaintenanceInt32Field(config, field.JSONName))) }
step="1"
min="1"
if field.Required {
required
}
/>
<select
class="form-select"
id={ field.JSONName + "_unit" }
name={ field.JSONName + "_unit" }
style="max-width: 120px;"
if field.Required {
required
}
>
<option
value="minutes"
if components.GetInt32DisplayUnit(getMaintenanceInt32Field(config, field.JSONName)) == "minutes" {
selected
}
>
Minutes
</option>
<option
value="hours"
if components.GetInt32DisplayUnit(getMaintenanceInt32Field(config, field.JSONName)) == "hours" {
selected
}
>
Hours
</option>
<option
value="days"
if components.GetInt32DisplayUnit(getMaintenanceInt32Field(config, field.JSONName)) == "days" {
selected
}
>
Days
</option>
</select>
</div>
if field.Description != "" {
<div class="form-text text-muted">{ field.Description }</div>
}
</div>
} else if field.InputType == "checkbox" {
<!-- Checkbox field -->
<div class="mb-3">
<div class="form-check form-switch">
<input
class="form-check-input"
type="checkbox"
id={ field.JSONName }
name={ field.JSONName }
if getMaintenanceBoolField(config, field.JSONName) {
checked
}
/>
<label class="form-check-label" for={ field.JSONName }>
<strong>{ field.DisplayName }</strong>
</label>
</div>
if field.Description != "" {
<div class="form-text text-muted">{ field.Description }</div>
}
</div>
} else {
<!-- Number field -->
<div class="mb-3">
<label for={ field.JSONName } class="form-label">
{ field.DisplayName }
if field.Required {
<span class="text-danger">*</span>
}
</label>
<input
type="number"
class="form-control"
id={ field.JSONName }
name={ field.JSONName }
value={ fmt.Sprintf("%d", getMaintenanceInt32Field(config, field.JSONName)) }
placeholder={ field.Placeholder }
if field.MinValue != nil {
min={ fmt.Sprintf("%v", field.MinValue) }
}
if field.MaxValue != nil {
max={ fmt.Sprintf("%v", field.MaxValue) }
}
step={ getNumberStep(field) }
if field.Required {
required
}
/>
if field.Description != "" {
<div class="form-text text-muted">{ field.Description }</div>
}
</div>
}
}
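For context, a field consumed by ConfigField might be declared like this. The values here are hypothetical; only the config.Field struct and the field names used by the template above are taken from the source.

```go
// Hypothetical example of a schema field as rendered by ConfigField above.
scanField := &config.Field{
	JSONName:    "scan_interval_seconds",
	DisplayName: "Scan Interval",
	InputType:   "interval", // number input plus minutes/hours/days select
	Required:    true,
	Description: "How often to scan for maintenance tasks",
}
```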
// Helper functions for form field types
func getNumberStep(field *config.Field) string {
if field.Type == config.FieldTypeFloat {
return "0.01"
}
return "1"
}
// Typed field getters for MaintenanceConfig - no interface{} needed
func getMaintenanceInt32Field(config *maintenance.MaintenanceConfig, fieldName string) int32 {
if config == nil {
return 0
}
switch fieldName {
case "scan_interval_seconds":
return config.ScanIntervalSeconds
case "worker_timeout_seconds":
return config.WorkerTimeoutSeconds
case "task_timeout_seconds":
return config.TaskTimeoutSeconds
case "retry_delay_seconds":
return config.RetryDelaySeconds
case "max_retries":
return config.MaxRetries
case "cleanup_interval_seconds":
return config.CleanupIntervalSeconds
case "task_retention_seconds":
return config.TaskRetentionSeconds
case "global_max_concurrent":
if config.Policy != nil {
return config.Policy.GlobalMaxConcurrent
}
return 0
default:
return 0
}
}
func getMaintenanceBoolField(config *maintenance.MaintenanceConfig, fieldName string) bool {
if config == nil {
return false
}
switch fieldName {
case "enabled":
return config.Enabled
default:
return false
}
}
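The interval branch of ConfigField pairs these getters with components.ConvertInt32SecondsToDisplayValue and components.GetInt32DisplayUnit. Their actual implementations live in weed/admin/view/components; the following is only a plausible sketch consistent with the minutes/hours/days options the template renders.

```go
// Sketch only: the real helpers in weed/admin/view/components may differ.
func GetInt32DisplayUnit(seconds int32) string {
	switch {
	case seconds >= 24*3600 && seconds%(24*3600) == 0:
		return "days"
	case seconds >= 3600 && seconds%3600 == 0:
		return "hours"
	default:
		return "minutes"
	}
}

func ConvertInt32SecondsToDisplayValue(seconds int32) float64 {
	switch GetInt32DisplayUnit(seconds) {
	case "days":
		return float64(seconds) / (24 * 3600)
	case "hours":
		return float64(seconds) / 3600
	default:
		return float64(seconds) / 60
	}
}
```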
// Helper function to convert schema to JSON for JavaScript
templ schemaToJSON(schema *maintenance.MaintenanceConfigSchema) {
{`{}`}
}


View File

@@ -57,85 +57,85 @@ func MaintenanceConfig(data *maintenance.MaintenanceConfigData) templ.Component
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 4, "\" placeholder=\"30 (default)\" min=\"1\" max=\"1440\"> <small class=\"form-text text-muted\">How often to scan for maintenance tasks (1-1440 minutes). <strong>Default: 30 minutes</strong></small></div><div class=\"mb-3\"><label for=\"workerTimeout\" class=\"form-label\">Worker Timeout (minutes)</label> <input type=\"number\" class=\"form-control\" id=\"workerTimeout\" value=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var3 string
templ_7745c5c3_Var3, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.0f", float64(data.Config.WorkerTimeoutSeconds)/60))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_config.templ`, Line: 60, Col: 111}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var3))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 5, "\" placeholder=\"5 (default)\" min=\"1\" max=\"60\"> <small class=\"form-text text-muted\">How long to wait for worker heartbeat before considering it inactive (1-60 minutes). <strong>Default: 5 minutes</strong></small></div><div class=\"mb-3\"><label for=\"taskTimeout\" class=\"form-label\">Task Timeout (hours)</label> <input type=\"number\" class=\"form-control\" id=\"taskTimeout\" value=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var4 string
templ_7745c5c3_Var4, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.0f", float64(data.Config.TaskTimeoutSeconds)/3600))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_config.templ`, Line: 70, Col: 111}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var4))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 6, "\" placeholder=\"2 (default)\" min=\"1\" max=\"24\"> <small class=\"form-text text-muted\">Maximum time allowed for a single task to complete (1-24 hours). <strong>Default: 2 hours</strong></small></div><div class=\"mb-3\"><label for=\"globalMaxConcurrent\" class=\"form-label\">Global Concurrent Limit</label> <input type=\"number\" class=\"form-control\" id=\"globalMaxConcurrent\" value=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var5 string
templ_7745c5c3_Var5, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.Config.Policy.GlobalMaxConcurrent))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_config.templ`, Line: 80, Col: 103}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var5))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 7, "\" placeholder=\"4 (default)\" min=\"1\" max=\"20\"> <small class=\"form-text text-muted\">Maximum number of maintenance tasks that can run simultaneously across all workers (1-20). <strong>Default: 4</strong></small></div><div class=\"mb-3\"><label for=\"maxRetries\" class=\"form-label\">Default Max Retries</label> <input type=\"number\" class=\"form-control\" id=\"maxRetries\" value=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var6 string
templ_7745c5c3_Var6, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.Config.MaxRetries))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_config.templ`, Line: 90, Col: 87}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var6))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 8, "\" placeholder=\"3 (default)\" min=\"0\" max=\"10\"> <small class=\"form-text text-muted\">Default number of times to retry failed tasks (0-10). <strong>Default: 3</strong></small></div><div class=\"mb-3\"><label for=\"retryDelay\" class=\"form-label\">Retry Delay (minutes)</label> <input type=\"number\" class=\"form-control\" id=\"retryDelay\" value=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var7 string
templ_7745c5c3_Var7, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.0f", float64(data.Config.RetryDelaySeconds)/60))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_config.templ`, Line: 100, Col: 108}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var7))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 9, "\" placeholder=\"15 (default)\" min=\"1\" max=\"120\"> <small class=\"form-text text-muted\">Time to wait before retrying failed tasks (1-120 minutes). <strong>Default: 15 minutes</strong></small></div><div class=\"mb-3\"><label for=\"taskRetention\" class=\"form-label\">Task Retention (days)</label> <input type=\"number\" class=\"form-control\" id=\"taskRetention\" value=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var8 string
templ_7745c5c3_Var8, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.0f", float64(data.Config.TaskRetentionSeconds)/(24*3600)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_config.templ`, Line: 110, Col: 118}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var8))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 10, "\" placeholder=\"7 (default)\" min=\"1\" max=\"30\"> <small class=\"form-text text-muted\">How long to keep completed/failed task records (1-30 days). <strong>Default: 7 days</strong></small></div><div class=\"d-flex gap-2\"><button type=\"button\" class=\"btn btn-primary\" onclick=\"saveConfiguration()\"><i class=\"fas fa-save me-1\"></i> Save Configuration</button> <button type=\"button\" class=\"btn btn-secondary\" onclick=\"resetToDefaults()\"><i class=\"fas fa-undo me-1\"></i> Reset to Defaults</button></div></form></div></div></div></div><!-- Individual Task Configuration Menu --><div class=\"row mt-4\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header\"><h5 class=\"mb-0\"><i class=\"fas fa-cogs me-2\"></i> Task Configuration</h5></div><div class=\"card-body\"><p class=\"text-muted mb-3\">Configure specific settings for each maintenance task type.</p><div class=\"list-group\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
@@ -147,7 +147,7 @@ func MaintenanceConfig(data *maintenance.MaintenanceConfigData) templ.Component
var templ_7745c5c3_Var9 templ.SafeURL
templ_7745c5c3_Var9, templ_7745c5c3_Err = templ.JoinURLErrs(templ.SafeURL(menuItem.Path))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_config.templ`, Line: 147, Col: 69}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var9))
if templ_7745c5c3_Err != nil {
@@ -182,7 +182,7 @@ func MaintenanceConfig(data *maintenance.MaintenanceConfigData) templ.Component
var templ_7745c5c3_Var12 string
templ_7745c5c3_Var12, templ_7745c5c3_Err = templ.JoinStringErrs(menuItem.DisplayName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_config.templ`, Line: 151, Col: 65}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var12))
if templ_7745c5c3_Err != nil {
@@ -192,7 +192,7 @@ func MaintenanceConfig(data *maintenance.MaintenanceConfigData) templ.Component
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if menuItem.IsEnabled {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 16, "<span class=\"badge bg-success\">Enabled</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
@@ -210,7 +210,7 @@ func MaintenanceConfig(data *maintenance.MaintenanceConfigData) templ.Component
var templ_7745c5c3_Var13 string
templ_7745c5c3_Var13, templ_7745c5c3_Err = templ.JoinStringErrs(menuItem.Description)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_config.templ`, Line: 159, Col: 90}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var13))
if templ_7745c5c3_Err != nil {
@@ -228,7 +228,7 @@ func MaintenanceConfig(data *maintenance.MaintenanceConfigData) templ.Component
var templ_7745c5c3_Var14 string
templ_7745c5c3_Var14, templ_7745c5c3_Err = templ.JoinStringErrs(data.LastScanTime.Format("2006-01-02 15:04:05"))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_config.templ`, Line: 180, Col: 100}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var14))
if templ_7745c5c3_Err != nil {
@@ -241,7 +241,7 @@ func MaintenanceConfig(data *maintenance.MaintenanceConfigData) templ.Component
var templ_7745c5c3_Var15 string
templ_7745c5c3_Var15, templ_7745c5c3_Err = templ.JoinStringErrs(data.NextScanTime.Format("2006-01-02 15:04:05"))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_config.templ`, Line: 186, Col: 100}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var15))
if templ_7745c5c3_Err != nil {
@@ -254,7 +254,7 @@ func MaintenanceConfig(data *maintenance.MaintenanceConfigData) templ.Component
var templ_7745c5c3_Var16 string
templ_7745c5c3_Var16, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.SystemStats.TotalTasks))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_config.templ`, Line: 192, Col: 99}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var16))
if templ_7745c5c3_Err != nil {
@@ -267,13 +267,13 @@ func MaintenanceConfig(data *maintenance.MaintenanceConfigData) templ.Component
var templ_7745c5c3_Var17 string
templ_7745c5c3_Var17, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", data.SystemStats.ActiveWorkers))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_config.templ`, Line: 198, Col: 102}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var17))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 24, "</p></div></div></div></div></div></div></div></div><script>\n function saveConfiguration() {\n // First, get current configuration to preserve existing values\n fetch('/api/maintenance/config')\n .then(response => response.json())\n .then(currentConfig => {\n // Update only the fields from the form\n const updatedConfig = {\n ...currentConfig.config, // Preserve existing config\n enabled: document.getElementById('enabled').checked,\n scan_interval_seconds: parseInt(document.getElementById('scanInterval').value) * 60, // Convert to seconds\n worker_timeout_seconds: parseInt(document.getElementById('workerTimeout').value) * 60, // Convert to seconds\n task_timeout_seconds: parseInt(document.getElementById('taskTimeout').value) * 3600, // Convert to seconds\n retry_delay_seconds: parseInt(document.getElementById('retryDelay').value) * 60, // Convert to seconds\n max_retries: parseInt(document.getElementById('maxRetries').value),\n task_retention_seconds: parseInt(document.getElementById('taskRetention').value) * 24 * 3600, // Convert to seconds\n policy: {\n ...currentConfig.config.policy, // Preserve existing policy\n global_max_concurrent: parseInt(document.getElementById('globalMaxConcurrent').value)\n }\n };\n\n // Send the updated configuration\n return fetch('/api/maintenance/config', {\n method: 'PUT',\n headers: {\n 'Content-Type': 'application/json',\n },\n body: JSON.stringify(updatedConfig)\n });\n })\n .then(response => response.json())\n .then(data => {\n if (data.success) {\n alert('Configuration saved successfully');\n location.reload(); // Reload to show updated values\n } else {\n alert('Failed to save configuration: ' + (data.error || 'Unknown error'));\n }\n })\n .catch(error => {\n alert('Error: ' + error.message);\n });\n }\n\n function resetToDefaults() {\n if (confirm('Are you sure you want to reset to default configuration? This will overwrite your current settings.')) {\n // Reset form to defaults (matching DefaultMaintenanceConfig values)\n document.getElementById('enabled').checked = false;\n document.getElementById('scanInterval').value = '30';\n document.getElementById('workerTimeout').value = '5';\n document.getElementById('taskTimeout').value = '2';\n document.getElementById('globalMaxConcurrent').value = '4';\n document.getElementById('maxRetries').value = '3';\n document.getElementById('retryDelay').value = '15';\n document.getElementById('taskRetention').value = '7';\n }\n }\n </script>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}

View File

@@ -70,43 +70,51 @@ templ MaintenanceQueue(data *maintenance.MaintenanceQueueData) {
</div>
</div>
<!-- Pending Tasks -->
<div class="row mb-4">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0">Task Queue</h5>
<div class="card-header bg-primary text-white">
<h5 class="mb-0">
<i class="fas fa-clock me-2"></i>
Pending Tasks
</h5>
</div>
<div class="card-body">
if data.Stats.PendingTasks == 0 {
<div class="text-center text-muted py-4">
<i class="fas fa-clipboard-list fa-3x mb-3"></i>
<p>No pending maintenance tasks</p>
<small>Pending tasks will appear here when the system detects maintenance needs</small>
</div>
} else {
<div class="table-responsive">
<table class="table table-hover">
<thead>
<tr>
<th>Type</th>
<th>Priority</th>
<th>Volume</th>
<th>Server</th>
<th>Reason</th>
<th>Created</th>
</tr>
</thead>
<tbody>
for _, task := range data.Tasks {
if string(task.Status) == "pending" {
<tr>
<td>
@TaskTypeIcon(task.Type)
{string(task.Type)}
</td>
<td>@PriorityBadge(task.Priority)</td>
<td>{fmt.Sprintf("%d", task.VolumeID)}</td>
<td><small>{task.Server}</small></td>
<td><small>{task.Reason}</small></td>
<td>{task.CreatedAt.Format("2006-01-02 15:04")}</td>
</tr>
}
}
</tbody>
</table>
@@ -117,36 +125,171 @@ templ MaintenanceQueue(data *maintenance.MaintenanceQueueData) {
</div>
</div>
<!-- Active Tasks -->
<div class="row mb-4">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0">Active Workers</h5>
<div class="card-header bg-warning text-dark">
<h5 class="mb-0">
<i class="fas fa-running me-2"></i>
Active Tasks
</h5>
</div>
<div class="card-body">
if data.Stats.RunningTasks == 0 {
<div class="text-center text-muted py-4">
<i class="fas fa-robot fa-3x mb-3"></i>
<p>No workers are currently active</p>
<small>Start workers using: <code>weed worker -admin=localhost:9333</code></small>
<i class="fas fa-tasks fa-3x mb-3"></i>
<p>No active maintenance tasks</p>
<small>Active tasks will appear here when workers start processing them</small>
</div>
} else {
<div class="row">
for _, worker := range data.Workers {
<div class="col-md-4 mb-3">
<div class="card">
<div class="card-body">
<h6 class="card-title">{worker.ID}</h6>
<p class="card-text">
<small class="text-muted">{worker.Address}</small><br/>
Status: {worker.Status}<br/>
Load: {fmt.Sprintf("%d/%d", worker.CurrentLoad, worker.MaxConcurrent)}
</p>
</div>
</div>
</div>
}
<div class="table-responsive">
<table class="table table-hover">
<thead>
<tr>
<th>Type</th>
<th>Status</th>
<th>Progress</th>
<th>Volume</th>
<th>Worker</th>
<th>Started</th>
</tr>
</thead>
<tbody>
for _, task := range data.Tasks {
if string(task.Status) == "assigned" || string(task.Status) == "in_progress" {
<tr>
<td>
@TaskTypeIcon(task.Type)
{string(task.Type)}
</td>
<td>@StatusBadge(task.Status)</td>
<td>@ProgressBar(task.Progress, task.Status)</td>
<td>{fmt.Sprintf("%d", task.VolumeID)}</td>
<td>
if task.WorkerID != "" {
<small>{task.WorkerID}</small>
} else {
<span class="text-muted">-</span>
}
</td>
<td>
if task.StartedAt != nil {
{task.StartedAt.Format("2006-01-02 15:04")}
} else {
<span class="text-muted">-</span>
}
</td>
</tr>
}
}
</tbody>
</table>
</div>
}
</div>
</div>
</div>
</div>
<!-- Completed Tasks -->
<div class="row mb-4">
<div class="col-12">
<div class="card">
<div class="card-header bg-success text-white">
<h5 class="mb-0">
<i class="fas fa-check-circle me-2"></i>
Completed Tasks
</h5>
</div>
<div class="card-body">
if data.Stats.CompletedToday == 0 && data.Stats.FailedToday == 0 {
<div class="text-center text-muted py-4">
<i class="fas fa-check-circle fa-3x mb-3"></i>
<p>No completed maintenance tasks today</p>
<small>Completed tasks will appear here after workers finish processing them</small>
</div>
} else {
<div class="table-responsive">
<table class="table table-hover">
<thead>
<tr>
<th>Type</th>
<th>Status</th>
<th>Volume</th>
<th>Worker</th>
<th>Duration</th>
<th>Completed</th>
</tr>
</thead>
<tbody>
for _, task := range data.Tasks {
if string(task.Status) == "completed" || string(task.Status) == "failed" || string(task.Status) == "cancelled" {
if string(task.Status) == "failed" {
<tr class="table-danger">
<td>
@TaskTypeIcon(task.Type)
{string(task.Type)}
</td>
<td>@StatusBadge(task.Status)</td>
<td>{fmt.Sprintf("%d", task.VolumeID)}</td>
<td>
if task.WorkerID != "" {
<small>{task.WorkerID}</small>
} else {
<span class="text-muted">-</span>
}
</td>
<td>
if task.StartedAt != nil && task.CompletedAt != nil {
{formatDuration(task.CompletedAt.Sub(*task.StartedAt))}
} else {
<span class="text-muted">-</span>
}
</td>
<td>
if task.CompletedAt != nil {
{task.CompletedAt.Format("2006-01-02 15:04")}
} else {
<span class="text-muted">-</span>
}
</td>
</tr>
} else {
<tr>
<td>
@TaskTypeIcon(task.Type)
{string(task.Type)}
</td>
<td>@StatusBadge(task.Status)</td>
<td>{fmt.Sprintf("%d", task.VolumeID)}</td>
<td>
if task.WorkerID != "" {
<small>{task.WorkerID}</small>
} else {
<span class="text-muted">-</span>
}
</td>
<td>
if task.StartedAt != nil && task.CompletedAt != nil {
{formatDuration(task.CompletedAt.Sub(*task.StartedAt))}
} else {
<span class="text-muted">-</span>
}
</td>
<td>
if task.CompletedAt != nil {
{task.CompletedAt.Format("2006-01-02 15:04")}
} else {
<span class="text-muted">-</span>
}
</td>
</tr>
}
}
}
</tbody>
</table>
</div>
}
</div>
@@ -156,6 +299,9 @@ templ MaintenanceQueue(data *maintenance.MaintenanceQueueData) {
</div>
<script>
// Debug output to browser console
console.log("DEBUG: Maintenance Queue Template loaded");
// Auto-refresh every 10 seconds
setInterval(function() {
if (!document.hidden) {
@@ -163,7 +309,8 @@ templ MaintenanceQueue(data *maintenance.MaintenanceQueueData) {
}
}, 10000);
window.triggerScan = function() {
console.log("triggerScan called");
fetch('/api/maintenance/scan', {
method: 'POST',
headers: {
@@ -182,7 +329,12 @@ templ MaintenanceQueue(data *maintenance.MaintenanceQueueData) {
.catch(error => {
alert('Error: ' + error.message);
});
};
window.refreshPage = function() {
console.log("refreshPage called");
window.location.reload();
};
</script>
}
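The template above walks data.Tasks once per card, filtering on the status string. The same partition as a standalone sketch (assuming the element type is maintenance.MaintenanceTask; the exact name may differ):

```go
// Sketch: partitions tasks the way the three tables above do.
func partitionTasks(tasks []*maintenance.MaintenanceTask) (pending, active, done []*maintenance.MaintenanceTask) {
	for _, t := range tasks {
		switch string(t.Status) {
		case "pending":
			pending = append(pending, t)
		case "assigned", "in_progress":
			active = append(active, t)
		case "completed", "failed", "cancelled":
			done = append(done, t)
		}
	}
	return pending, active, done
}
```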
@@ -243,32 +395,13 @@ templ ProgressBar(progress float64, status maintenance.MaintenanceTaskStatus) {
}
}
// Helper functions
func formatDuration(d time.Duration) string {
if d < time.Minute {
return fmt.Sprintf("%.0fs", d.Seconds())
} else if d < time.Hour {
return fmt.Sprintf("%.1fm", d.Minutes())
} else {
return fmt.Sprintf("%.1fh", d.Hours())
}
}
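Note that formatDuration requires the time import in this templ file. A table-driven test pins down the three thresholds (a sketch, assuming it lives alongside formatDuration in package app):

```go
package app

import (
	"testing"
	"time"
)

// Sketch: exercises each formatDuration branch above.
func TestFormatDuration(t *testing.T) {
	cases := []struct {
		d    time.Duration
		want string
	}{
		{45 * time.Second, "45s"},              // < 1 minute
		{90 * time.Second, "1.5m"},             // < 1 hour
		{2*time.Hour + 30*time.Minute, "2.5h"}, // >= 1 hour
	}
	for _, c := range cases {
		if got := formatDuration(c.d); got != c.want {
			t.Errorf("formatDuration(%v) = %q, want %q", c.d, got, c.want)
		}
	}
}
```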

View File

@@ -87,102 +87,103 @@ func MaintenanceQueue(data *maintenance.MaintenanceQueueData) templ.Component {
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 5, "</h4><p class=\"text-muted mb-0\">Failed Today</p></div></div></div></div><!-- Pending Tasks --><div class=\"row mb-4\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header bg-primary text-white\"><h5 class=\"mb-0\"><i class=\"fas fa-clock me-2\"></i> Pending Tasks</h5></div><div class=\"card-body\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.Stats.PendingTasks == 0 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 6, "<div class=\"text-center text-muted py-4\"><i class=\"fas fa-clipboard-list fa-3x mb-3\"></i><p>No pending maintenance tasks</p><small>Pending tasks will appear here when the system detects maintenance needs</small></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 7, "<div class=\"table-responsive\"><table class=\"table table-hover\"><thead><tr><th>Type</th><th>Priority</th><th>Volume</th><th>Server</th><th>Reason</th><th>Created</th></tr></thead> <tbody>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, task := range data.Tasks {
if string(task.Status) == "pending" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 8, "<tr><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = TaskTypeIcon(task.Type).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var6 string
templ_7745c5c3_Var6, templ_7745c5c3_Err = templ.JoinStringErrs(string(task.Type))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 109, Col: 74}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var6))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 9, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = PriorityBadge(task.Priority).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 10, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var7 string
templ_7745c5c3_Var7, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", task.VolumeID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 112, Col: 89}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var7))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 11, "</td><td><small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var8 string
templ_7745c5c3_Var8, templ_7745c5c3_Err = templ.JoinStringErrs(task.Server)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 113, Col: 75}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var8))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 12, "</small></td><td><small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var9 string
templ_7745c5c3_Var9, templ_7745c5c3_Err = templ.JoinStringErrs(task.Reason)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 114, Col: 75}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var9))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 13, "</small></td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var10 string
templ_7745c5c3_Var10, templ_7745c5c3_Err = templ.JoinStringErrs(task.CreatedAt.Format("2006-01-02 15:04"))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 115, Col: 98}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var10))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 14, "</td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 15, "</tbody></table></div>")
@@ -190,84 +191,374 @@ func MaintenanceQueue(data *maintenance.MaintenanceQueueData) templ.Component {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 16, "</div></div></div></div><!-- Active Tasks --><div class=\"row mb-4\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header bg-warning text-dark\"><h5 class=\"mb-0\"><i class=\"fas fa-running me-2\"></i> Active Tasks</h5></div><div class=\"card-body\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.Stats.RunningTasks == 0 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 17, "<div class=\"text-center text-muted py-4\"><i class=\"fas fa-tasks fa-3x mb-3\"></i><p>No active maintenance tasks</p><small>Active tasks will appear here when workers start processing them</small></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 18, "<div class=\"table-responsive\"><table class=\"table table-hover\"><thead><tr><th>Type</th><th>Status</th><th>Progress</th><th>Volume</th><th>Worker</th><th>Started</th></tr></thead> <tbody>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, task := range data.Tasks {
if string(task.Status) == "assigned" || string(task.Status) == "in_progress" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 19, "<tr><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = TaskTypeIcon(task.Type).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var11 string
templ_7745c5c3_Var11, templ_7745c5c3_Err = templ.JoinStringErrs(string(task.Type))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 164, Col: 74}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var11))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 20, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = StatusBadge(task.Status).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 21, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = ProgressBar(task.Progress, task.Status).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 22, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var12 string
templ_7745c5c3_Var12, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", task.VolumeID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 168, Col: 89}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var12))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 23, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if task.WorkerID != "" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 24, "<small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var13 string
templ_7745c5c3_Var13, templ_7745c5c3_Err = templ.JoinStringErrs(task.WorkerID)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 171, Col: 81}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var13))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 25, "</small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 26, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 27, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if task.StartedAt != nil {
var templ_7745c5c3_Var14 string
templ_7745c5c3_Var14, templ_7745c5c3_Err = templ.JoinStringErrs(task.StartedAt.Format("2006-01-02 15:04"))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 178, Col: 102}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var14))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 28, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 29, "</td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 30, "</tbody></table></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 31, "</div></div></div></div><!-- Completed Tasks --><div class=\"row mb-4\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header bg-success text-white\"><h5 class=\"mb-0\"><i class=\"fas fa-check-circle me-2\"></i> Completed Tasks</h5></div><div class=\"card-body\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.Stats.CompletedToday == 0 && data.Stats.FailedToday == 0 {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 32, "<div class=\"text-center text-muted py-4\"><i class=\"fas fa-check-circle fa-3x mb-3\"></i><p>No completed maintenance tasks today</p><small>Completed tasks will appear here after workers finish processing them</small></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 33, "<div class=\"table-responsive\"><table class=\"table table-hover\"><thead><tr><th>Type</th><th>Status</th><th>Volume</th><th>Worker</th><th>Duration</th><th>Completed</th></tr></thead> <tbody>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, task := range data.Tasks {
if string(task.Status) == "completed" || string(task.Status) == "failed" || string(task.Status) == "cancelled" {
if string(task.Status) == "failed" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 34, "<tr class=\"table-danger\"><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = TaskTypeIcon(task.Type).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var15 string
templ_7745c5c3_Var15, templ_7745c5c3_Err = templ.JoinStringErrs(string(task.Type))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 232, Col: 78}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var15))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 35, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = StatusBadge(task.Status).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 36, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var16 string
templ_7745c5c3_Var16, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", task.VolumeID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 235, Col: 93}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var16))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 37, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if task.WorkerID != "" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 38, "<small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var17 string
templ_7745c5c3_Var17, templ_7745c5c3_Err = templ.JoinStringErrs(task.WorkerID)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 238, Col: 85}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var17))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 39, "</small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 40, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 41, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if task.StartedAt != nil && task.CompletedAt != nil {
var templ_7745c5c3_Var18 string
templ_7745c5c3_Var18, templ_7745c5c3_Err = templ.JoinStringErrs(formatDuration(task.CompletedAt.Sub(*task.StartedAt)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 245, Col: 118}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var18))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 42, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 43, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if task.CompletedAt != nil {
var templ_7745c5c3_Var19 string
templ_7745c5c3_Var19, templ_7745c5c3_Err = templ.JoinStringErrs(task.CompletedAt.Format("2006-01-02 15:04"))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 252, Col: 108}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var19))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 44, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 45, "</td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 46, "<tr><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = TaskTypeIcon(task.Type).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var20 string
templ_7745c5c3_Var20, templ_7745c5c3_Err = templ.JoinStringErrs(string(task.Type))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 262, Col: 78}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var20))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 47, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = StatusBadge(task.Status).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 48, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var21 string
templ_7745c5c3_Var21, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", task.VolumeID))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 265, Col: 93}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var21))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 49, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if task.WorkerID != "" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 50, "<small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var22 string
templ_7745c5c3_Var22, templ_7745c5c3_Err = templ.JoinStringErrs(task.WorkerID)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 268, Col: 85}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var22))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 51, "</small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 52, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 53, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if task.StartedAt != nil && task.CompletedAt != nil {
var templ_7745c5c3_Var23 string
templ_7745c5c3_Var23, templ_7745c5c3_Err = templ.JoinStringErrs(formatDuration(task.CompletedAt.Sub(*task.StartedAt)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 275, Col: 118}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var23))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 54, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 55, "</td><td>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if task.CompletedAt != nil {
var templ_7745c5c3_Var24 string
templ_7745c5c3_Var24, templ_7745c5c3_Err = templ.JoinStringErrs(task.CompletedAt.Format("2006-01-02 15:04"))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 282, Col: 108}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var24))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 56, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 57, "</td></tr>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 58, "</tbody></table></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 59, "</div></div></div></div></div><script>\n // Debug output to browser console\n console.log(\"DEBUG: Maintenance Queue Template loaded\");\n \n // Auto-refresh every 10 seconds\n setInterval(function() {\n if (!document.hidden) {\n window.location.reload();\n }\n }, 10000);\n\n window.triggerScan = function() {\n console.log(\"triggerScan called\");\n fetch('/api/maintenance/scan', {\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n }\n })\n .then(response => response.json())\n .then(data => {\n if (data.success) {\n alert('Maintenance scan triggered successfully');\n setTimeout(() => window.location.reload(), 2000);\n } else {\n alert('Failed to trigger scan: ' + (data.error || 'Unknown error'));\n }\n })\n .catch(error => {\n alert('Error: ' + error.message);\n });\n };\n\n window.refreshPage = function() {\n console.log(\"refreshPage called\");\n window.location.reload();\n };\n </script>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
@@ -292,30 +583,30 @@ func TaskTypeIcon(taskType maintenance.MaintenanceTaskType) templ.Component {
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var25 := templ.GetChildren(ctx)
if templ_7745c5c3_Var25 == nil {
templ_7745c5c3_Var25 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
var templ_7745c5c3_Var26 = []any{maintenance.GetTaskIcon(taskType) + " me-1"}
templ_7745c5c3_Err = templ.RenderCSSItems(ctx, templ_7745c5c3_Buffer, templ_7745c5c3_Var26...)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 60, "<i class=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var27 string
templ_7745c5c3_Var27, templ_7745c5c3_Err = templ.JoinStringErrs(templ.CSSClasses(templ_7745c5c3_Var26).String())
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 1, Col: 0}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var27))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 61, "\"></i>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
@@ -339,34 +630,34 @@ func PriorityBadge(priority maintenance.MaintenanceTaskPriority) templ.Component
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var28 := templ.GetChildren(ctx)
if templ_7745c5c3_Var28 == nil {
templ_7745c5c3_Var28 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
switch priority {
case maintenance.PriorityCritical:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 62, "<span class=\"badge bg-danger\">Critical</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.PriorityHigh:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 63, "<span class=\"badge bg-warning\">High</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.PriorityNormal:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 64, "<span class=\"badge bg-primary\">Normal</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.PriorityLow:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 65, "<span class=\"badge bg-secondary\">Low</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
default:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 66, "<span class=\"badge bg-light text-dark\">Unknown</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
@@ -391,44 +682,44 @@ func StatusBadge(status maintenance.MaintenanceTaskStatus) templ.Component {
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var29 := templ.GetChildren(ctx)
if templ_7745c5c3_Var29 == nil {
templ_7745c5c3_Var29 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
switch status {
case maintenance.TaskStatusPending:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 67, "<span class=\"badge bg-secondary\">Pending</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.TaskStatusAssigned:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 68, "<span class=\"badge bg-info\">Assigned</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.TaskStatusInProgress:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 69, "<span class=\"badge bg-warning\">Running</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.TaskStatusCompleted:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 70, "<span class=\"badge bg-success\">Completed</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.TaskStatusFailed:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 71, "<span class=\"badge bg-danger\">Failed</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
case maintenance.TaskStatusCancelled:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 72, "<span class=\"badge bg-light text-dark\">Cancelled</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
default:
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 73, "<span class=\"badge bg-light text-dark\">Unknown</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
@@ -453,49 +744,49 @@ func ProgressBar(progress float64, status maintenance.MaintenanceTaskStatus) tem
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var30 := templ.GetChildren(ctx)
if templ_7745c5c3_Var30 == nil {
templ_7745c5c3_Var30 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
if status == maintenance.TaskStatusInProgress || status == maintenance.TaskStatusAssigned {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 74, "<div class=\"progress\" style=\"height: 8px; min-width: 100px;\"><div class=\"progress-bar\" role=\"progressbar\" style=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var31 string
templ_7745c5c3_Var31, templ_7745c5c3_Err = templruntime.SanitizeStyleAttributeValues(fmt.Sprintf("width: %.1f%%", progress))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 383, Col: 102}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var31))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 75, "\"></div></div><small class=\"text-muted\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var32 string
templ_7745c5c3_Var32, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.1f%%", progress))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/maintenance_queue.templ`, Line: 386, Col: 66}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var32))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 76, "</small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else if status == maintenance.TaskStatusCompleted {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 77, "<div class=\"progress\" style=\"height: 8px; min-width: 100px;\"><div class=\"progress-bar bg-success\" role=\"progressbar\" style=\"width: 100%\"></div></div><small class=\"text-success\">100%</small>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 78, "<span class=\"text-muted\">-</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
@@ -504,65 +795,13 @@ func ProgressBar(progress float64, status maintenance.MaintenanceTaskStatus) tem
})
}
func formatDuration(d time.Duration) string {
if d < time.Minute {
return fmt.Sprintf("%.0fs", d.Seconds())
} else if d < time.Hour {
return fmt.Sprintf("%.1fm", d.Minutes())
} else {
return fmt.Sprintf("%.1fh", d.Hours())
}
}
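// Sanity check of the thresholds above (illustrative values):
//   formatDuration(45*time.Second)  -> "45s"
//   formatDuration(90*time.Second)  -> "1.5m"
//   formatDuration(150*time.Minute) -> "2.5h"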


@@ -0,0 +1,486 @@
package app
import (
"encoding/base64"
"encoding/json"
"fmt"
"reflect"
"strings"
"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks"
"github.com/seaweedfs/seaweedfs/weed/admin/config"
"github.com/seaweedfs/seaweedfs/weed/admin/view/components"
)
// Helper function to convert task schema to JSON string
func taskSchemaToJSON(schema *tasks.TaskConfigSchema) string {
if schema == nil {
return "{}"
}
data := map[string]interface{}{
"fields": schema.Fields,
}
jsonBytes, err := json.Marshal(data)
if err != nil {
return "{}"
}
return string(jsonBytes)
}
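// Produces a payload like {"fields":[ ... ]}; the exact keys inside each field
// entry come from the JSON tags on tasks.TaskConfigSchema's field type.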
// Helper function to base64 encode the JSON to avoid HTML escaping issues
func taskSchemaToBase64JSON(schema *tasks.TaskConfigSchema) string {
jsonStr := taskSchemaToJSON(schema)
return base64.StdEncoding.EncodeToString([]byte(jsonStr))
}
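// Round-trip sketch for this base64 hand-off (element names as used further below):
//   Go:  <div data-task-schema={ taskSchemaToBase64JSON(schema) }></div>
//   JS:  window.taskConfigSchema = JSON.parse(atob(el.getAttribute('data-task-schema')));
// Base64 keeps quotes and angle brackets out of the attribute value, so templ's
// HTML escaping cannot corrupt the embedded JSON.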
templ TaskConfigSchema(data *maintenance.TaskConfigData, schema *tasks.TaskConfigSchema, config interface{}) {
<div class="container-fluid">
<div class="row mb-4">
<div class="col-12">
<div class="d-flex justify-content-between align-items-center">
<h2 class="mb-0">
<i class={schema.Icon + " me-2"}></i>
{schema.DisplayName} Configuration
</h2>
<div class="btn-group">
<a href="/maintenance/config" class="btn btn-outline-secondary">
<i class="fas fa-arrow-left me-1"></i>
Back to System Config
</a>
</div>
</div>
</div>
</div>
<!-- Configuration Card -->
<div class="row">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0">
<i class="fas fa-cogs me-2"></i>
Task Configuration
</h5>
<p class="mb-0 text-muted small">{schema.Description}</p>
</div>
<div class="card-body">
<form id="taskConfigForm" method="POST">
<!-- Dynamically render all schema fields in defined order -->
for _, field := range schema.Fields {
@TaskConfigField(field, config)
}
<div class="d-flex gap-2">
<button type="submit" class="btn btn-primary">
<i class="fas fa-save me-1"></i>
Save Configuration
</button>
<button type="button" class="btn btn-secondary" onclick="resetToDefaults()">
<i class="fas fa-undo me-1"></i>
Reset to Defaults
</button>
</div>
</form>
</div>
</div>
</div>
</div>
<!-- Performance Notes Card -->
<div class="row mt-4">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0">
<i class="fas fa-info-circle me-2"></i>
Important Notes
</h5>
</div>
<div class="card-body">
<div class="alert alert-info" role="alert">
if schema.TaskName == "vacuum" {
<h6 class="alert-heading">Vacuum Operations:</h6>
<p class="mb-2"><strong>Performance:</strong> Vacuum operations are I/O intensive and may impact cluster performance.</p>
<p class="mb-2"><strong>Safety:</strong> Only volumes meeting age and garbage thresholds will be processed.</p>
<p class="mb-0"><strong>Recommendation:</strong> Monitor cluster load and adjust concurrent limits accordingly.</p>
} else if schema.TaskName == "balance" {
<h6 class="alert-heading">Balance Operations:</h6>
<p class="mb-2"><strong>Performance:</strong> Volume balancing involves data movement and can impact cluster performance.</p>
<p class="mb-2"><strong>Safety:</strong> Requires adequate server count to ensure data safety during moves.</p>
<p class="mb-0"><strong>Recommendation:</strong> Run during off-peak hours to minimize impact on production workloads.</p>
} else if schema.TaskName == "erasure_coding" {
<h6 class="alert-heading">Erasure Coding Operations:</h6>
<p class="mb-2"><strong>Performance:</strong> Erasure coding is CPU and I/O intensive. Consider running during off-peak hours.</p>
<p class="mb-2"><strong>Durability:</strong> With 10+4 configuration, can tolerate up to 4 shard failures.</p>
<p class="mb-0"><strong>Configuration:</strong> Fullness ratio should be between 0.5 and 1.0 (e.g., 0.90 for 90%).</p>
}
</div>
</div>
</div>
</div>
</div>
</div>
<script>
function resetToDefaults() {
if (confirm('Are you sure you want to reset to default configuration? This will overwrite your current settings.')) {
// Reset form fields to their default values
const form = document.getElementById('taskConfigForm');
const schemaFields = window.taskConfigSchema ? window.taskConfigSchema.fields : {};
Object.keys(schemaFields).forEach(fieldName => {
const field = schemaFields[fieldName];
const element = document.getElementById(fieldName);
if (element && field.default_value !== undefined) {
if (field.input_type === 'checkbox') {
element.checked = field.default_value;
} else if (field.input_type === 'interval') {
// Handle interval fields with value and unit
const valueElement = document.getElementById(fieldName + '_value');
const unitElement = document.getElementById(fieldName + '_unit');
if (valueElement && unitElement && field.default_value) {
const defaultSeconds = field.default_value;
const { value, unit } = convertSecondsToTaskIntervalValueUnit(defaultSeconds);
valueElement.value = value;
unitElement.value = unit;
}
} else {
element.value = field.default_value;
}
}
});
}
}
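// Example: a field declared as { input_type: 'interval', default_value: 86400 }
// resets its '<name>_value' input to 1 and its '<name>_unit' select to 'days',
// via convertSecondsToTaskIntervalValueUnit below.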
function convertSecondsToTaskIntervalValueUnit(totalSeconds) {
if (totalSeconds === 0) {
return { value: 0, unit: 'minutes' };
}
// Check if it's evenly divisible by days
if (totalSeconds % (24 * 3600) === 0) {
return { value: totalSeconds / (24 * 3600), unit: 'days' };
}
// Check if it's evenly divisible by hours
if (totalSeconds % 3600 === 0) {
return { value: totalSeconds / 3600, unit: 'hours' };
}
// Default to minutes
return { value: totalSeconds / 60, unit: 'minutes' };
}
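// Sample conversions: 172800 -> { value: 2, unit: 'days' },
// 7200 -> { value: 2, unit: 'hours' }, 90 -> { value: 1.5, unit: 'minutes' }.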
// Store schema data for JavaScript access (moved to after div is created)
</script>
<!-- Hidden element to store schema data -->
<div data-task-schema={ taskSchemaToBase64JSON(schema) } style="display: none;"></div>
<script>
// Load schema data now that the div exists
const base64Data = document.querySelector('[data-task-schema]').getAttribute('data-task-schema');
const jsonStr = atob(base64Data);
window.taskConfigSchema = JSON.parse(jsonStr);
</script>
}
// TaskConfigField renders a single task configuration field based on schema with typed field lookup
templ TaskConfigField(field *config.Field, config interface{}) {
if field.InputType == "interval" {
<!-- Interval field with number input + unit dropdown -->
<div class="mb-3">
<label for={ field.JSONName } class="form-label">
{ field.DisplayName }
if field.Required {
<span class="text-danger">*</span>
}
</label>
<div class="input-group">
<input
type="number"
class="form-control"
id={ field.JSONName + "_value" }
name={ field.JSONName + "_value" }
value={ fmt.Sprintf("%.0f", components.ConvertInt32SecondsToDisplayValue(getTaskConfigInt32Field(config, field.JSONName))) }
step="1"
min="1"
if field.Required {
required
}
/>
<select
class="form-select"
id={ field.JSONName + "_unit" }
name={ field.JSONName + "_unit" }
style="max-width: 120px;"
if field.Required {
required
}
>
<option
value="minutes"
if components.GetInt32DisplayUnit(getTaskConfigInt32Field(config, field.JSONName)) == "minutes" {
selected
}
>
Minutes
</option>
<option
value="hours"
if components.GetInt32DisplayUnit(getTaskConfigInt32Field(config, field.JSONName)) == "hours" {
selected
}
>
Hours
</option>
<option
value="days"
if components.GetInt32DisplayUnit(getTaskConfigInt32Field(config, field.JSONName)) == "days" {
selected
}
>
Days
</option>
</select>
</div>
if field.Description != "" {
<div class="form-text text-muted">{ field.Description }</div>
}
</div>
} else if field.InputType == "checkbox" {
<!-- Checkbox field -->
<div class="mb-3">
<div class="form-check form-switch">
<input
class="form-check-input"
type="checkbox"
id={ field.JSONName }
name={ field.JSONName }
value="on"
if getTaskConfigBoolField(config, field.JSONName) {
checked
}
/>
<label class="form-check-label" for={ field.JSONName }>
<strong>{ field.DisplayName }</strong>
</label>
</div>
if field.Description != "" {
<div class="form-text text-muted">{ field.Description }</div>
}
</div>
} else if field.InputType == "text" {
<!-- Text field -->
<div class="mb-3">
<label for={ field.JSONName } class="form-label">
{ field.DisplayName }
if field.Required {
<span class="text-danger">*</span>
}
</label>
<input
type="text"
class="form-control"
id={ field.JSONName }
name={ field.JSONName }
value={ getTaskConfigStringField(config, field.JSONName) }
placeholder={ field.Placeholder }
if field.Required {
required
}
/>
if field.Description != "" {
<div class="form-text text-muted">{ field.Description }</div>
}
</div>
} else {
<!-- Number field -->
<div class="mb-3">
<label for={ field.JSONName } class="form-label">
{ field.DisplayName }
if field.Required {
<span class="text-danger">*</span>
}
</label>
<input
type="number"
class="form-control"
id={ field.JSONName }
name={ field.JSONName }
value={ fmt.Sprintf("%.6g", getTaskConfigFloatField(config, field.JSONName)) }
placeholder={ field.Placeholder }
if field.MinValue != nil {
min={ fmt.Sprintf("%v", field.MinValue) }
}
if field.MaxValue != nil {
max={ fmt.Sprintf("%v", field.MaxValue) }
}
step={ getTaskNumberStep(field) }
if field.Required {
required
}
/>
if field.Description != "" {
<div class="form-text text-muted">{ field.Description }</div>
}
</div>
}
}
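// Summary of the dispatch above: "interval" renders a number input plus a unit
// select, "checkbox" renders a form switch, "text" renders a text input, and
// any other InputType falls through to a number input whose min/max/step come
// from the field definition.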
// Typed field getters for task configs - avoiding interface{} where possible
func getTaskConfigBoolField(config interface{}, fieldName string) bool {
switch fieldName {
case "enabled":
// Use reflection only for the common 'enabled' field in BaseConfig
if value := getTaskFieldValue(config, fieldName); value != nil {
if boolVal, ok := value.(bool); ok {
return boolVal
}
}
return false
default:
// For other boolean fields, use reflection
if value := getTaskFieldValue(config, fieldName); value != nil {
if boolVal, ok := value.(bool); ok {
return boolVal
}
}
return false
}
}
func getTaskConfigInt32Field(config interface{}, fieldName string) int32 {
switch fieldName {
case "scan_interval_seconds", "max_concurrent":
// Common fields that should be int/int32
if value := getTaskFieldValue(config, fieldName); value != nil {
switch v := value.(type) {
case int32:
return v
case int:
return int32(v)
case int64:
return int32(v)
}
}
return 0
default:
// For other int fields, use reflection
if value := getTaskFieldValue(config, fieldName); value != nil {
switch v := value.(type) {
case int32:
return v
case int:
return int32(v)
case int64:
return int32(v)
case float64:
return int32(v)
}
}
return 0
}
}
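// Note: the default branch also narrows float64, because numeric values that
// round-trip through JSON decode as float64 (e.g. float64(1800) -> int32(1800)).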
func getTaskConfigFloatField(config interface{}, fieldName string) float64 {
if value := getTaskFieldValue(config, fieldName); value != nil {
switch v := value.(type) {
case float64:
return v
case float32:
return float64(v)
case int:
return float64(v)
case int32:
return float64(v)
case int64:
return float64(v)
}
}
return 0.0
}
func getTaskConfigStringField(config interface{}, fieldName string) string {
if value := getTaskFieldValue(config, fieldName); value != nil {
if strVal, ok := value.(string); ok {
return strVal
}
// Convert numbers to strings for form display
switch v := value.(type) {
case int:
return fmt.Sprintf("%d", v)
case int32:
return fmt.Sprintf("%d", v)
case int64:
return fmt.Sprintf("%d", v)
case float64:
return fmt.Sprintf("%.6g", v)
case float32:
return fmt.Sprintf("%.6g", v)
}
}
return ""
}
func getTaskNumberStep(field *config.Field) string {
if field.Type == config.FieldTypeFloat {
return "0.01"
}
return "1"
}
func getTaskFieldValue(config interface{}, fieldName string) interface{} {
if config == nil {
return nil
}
// Use reflection to get the field value from the config struct
configValue := reflect.ValueOf(config)
if configValue.Kind() == reflect.Ptr {
configValue = configValue.Elem()
}
if configValue.Kind() != reflect.Struct {
return nil
}
configType := configValue.Type()
for i := 0; i < configValue.NumField(); i++ {
field := configValue.Field(i)
fieldType := configType.Field(i)
// Handle embedded structs recursively (before JSON tag check)
if field.Kind() == reflect.Struct && fieldType.Anonymous {
if value := getTaskFieldValue(field.Interface(), fieldName); value != nil {
return value
}
continue
}
// Get JSON tag name
jsonTag := fieldType.Tag.Get("json")
if jsonTag == "" {
continue
}
// Remove options like ",omitempty"
if commaIdx := strings.Index(jsonTag, ","); commaIdx > 0 {
jsonTag = jsonTag[:commaIdx]
}
// Check if this is the field we're looking for
if jsonTag == fieldName {
return field.Interface()
}
}
return nil
}
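// Illustration of the reflection lookup (hypothetical config types, not defined here):
//
//	type BaseConfig struct {
//		Enabled             bool  `json:"enabled"`
//		ScanIntervalSeconds int32 `json:"scan_interval_seconds"`
//	}
//	type VacuumTaskConfig struct {
//		BaseConfig
//		GarbageThreshold float64 `json:"garbage_threshold"`
//	}
//
// getTaskFieldValue(&VacuumTaskConfig{...}, "enabled") is resolved inside the
// embedded BaseConfig, which is recursed into before the JSON-tag comparison,
// while "garbage_threshold" is matched directly by its trimmed `json` tag.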


@@ -0,0 +1,921 @@
// Code generated by templ - DO NOT EDIT.
// templ: version: v0.3.906
package app
//lint:file-ignore SA4006 This context is only used if a nested component is present.
import "github.com/a-h/templ"
import templruntime "github.com/a-h/templ/runtime"
import (
"encoding/base64"
"encoding/json"
"fmt"
"github.com/seaweedfs/seaweedfs/weed/admin/config"
"github.com/seaweedfs/seaweedfs/weed/admin/maintenance"
"github.com/seaweedfs/seaweedfs/weed/admin/view/components"
"github.com/seaweedfs/seaweedfs/weed/worker/tasks"
"reflect"
"strings"
)
// Helper function to convert task schema to JSON string
func taskSchemaToJSON(schema *tasks.TaskConfigSchema) string {
if schema == nil {
return "{}"
}
data := map[string]interface{}{
"fields": schema.Fields,
}
jsonBytes, err := json.Marshal(data)
if err != nil {
return "{}"
}
return string(jsonBytes)
}
// Helper function to base64 encode the JSON to avoid HTML escaping issues
func taskSchemaToBase64JSON(schema *tasks.TaskConfigSchema) string {
jsonStr := taskSchemaToJSON(schema)
return base64.StdEncoding.EncodeToString([]byte(jsonStr))
}
func TaskConfigSchema(data *maintenance.TaskConfigData, schema *tasks.TaskConfigSchema, config interface{}) templ.Component {
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
return templ_7745c5c3_CtxErr
}
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
if !templ_7745c5c3_IsBuffer {
defer func() {
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
if templ_7745c5c3_Err == nil {
templ_7745c5c3_Err = templ_7745c5c3_BufErr
}
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var1 := templ.GetChildren(ctx)
if templ_7745c5c3_Var1 == nil {
templ_7745c5c3_Var1 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 1, "<div class=\"container-fluid\"><div class=\"row mb-4\"><div class=\"col-12\"><div class=\"d-flex justify-content-between align-items-center\"><h2 class=\"mb-0\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var2 = []any{schema.Icon + " me-2"}
templ_7745c5c3_Err = templ.RenderCSSItems(ctx, templ_7745c5c3_Buffer, templ_7745c5c3_Var2...)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 2, "<i class=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var3 string
templ_7745c5c3_Var3, templ_7745c5c3_Err = templ.JoinStringErrs(templ.CSSClasses(templ_7745c5c3_Var2).String())
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 1, Col: 0}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var3))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 3, "\"></i> ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var4 string
templ_7745c5c3_Var4, templ_7745c5c3_Err = templ.JoinStringErrs(schema.DisplayName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 46, Col: 43}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var4))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 4, " Configuration</h2><div class=\"btn-group\"><a href=\"/maintenance/config\" class=\"btn btn-outline-secondary\"><i class=\"fas fa-arrow-left me-1\"></i> Back to System Config</a></div></div></div></div><!-- Configuration Card --><div class=\"row\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header\"><h5 class=\"mb-0\"><i class=\"fas fa-cogs me-2\"></i> Task Configuration</h5><p class=\"mb-0 text-muted small\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var5 string
templ_7745c5c3_Var5, templ_7745c5c3_Err = templ.JoinStringErrs(schema.Description)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 67, Col: 76}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var5))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 5, "</p></div><div class=\"card-body\"><form id=\"taskConfigForm\" method=\"POST\"><!-- Dynamically render all schema fields in defined order -->")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
for _, field := range schema.Fields {
templ_7745c5c3_Err = TaskConfigField(field, config).Render(ctx, templ_7745c5c3_Buffer)
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 6, "<div class=\"d-flex gap-2\"><button type=\"submit\" class=\"btn btn-primary\"><i class=\"fas fa-save me-1\"></i> Save Configuration</button> <button type=\"button\" class=\"btn btn-secondary\" onclick=\"resetToDefaults()\"><i class=\"fas fa-undo me-1\"></i> Reset to Defaults</button></div></form></div></div></div></div><!-- Performance Notes Card --><div class=\"row mt-4\"><div class=\"col-12\"><div class=\"card\"><div class=\"card-header\"><h5 class=\"mb-0\"><i class=\"fas fa-info-circle me-2\"></i> Important Notes</h5></div><div class=\"card-body\"><div class=\"alert alert-info\" role=\"alert\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if schema.TaskName == "vacuum" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 7, "<h6 class=\"alert-heading\">Vacuum Operations:</h6><p class=\"mb-2\"><strong>Performance:</strong> Vacuum operations are I/O intensive and may impact cluster performance.</p><p class=\"mb-2\"><strong>Safety:</strong> Only volumes meeting age and garbage thresholds will be processed.</p><p class=\"mb-0\"><strong>Recommendation:</strong> Monitor cluster load and adjust concurrent limits accordingly.</p>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else if schema.TaskName == "balance" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 8, "<h6 class=\"alert-heading\">Balance Operations:</h6><p class=\"mb-2\"><strong>Performance:</strong> Volume balancing involves data movement and can impact cluster performance.</p><p class=\"mb-2\"><strong>Safety:</strong> Requires adequate server count to ensure data safety during moves.</p><p class=\"mb-0\"><strong>Recommendation:</strong> Run during off-peak hours to minimize impact on production workloads.</p>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else if schema.TaskName == "erasure_coding" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 9, "<h6 class=\"alert-heading\">Erasure Coding Operations:</h6><p class=\"mb-2\"><strong>Performance:</strong> Erasure coding is CPU and I/O intensive. Consider running during off-peak hours.</p><p class=\"mb-2\"><strong>Durability:</strong> With 10+4 configuration, can tolerate up to 4 shard failures.</p><p class=\"mb-0\"><strong>Configuration:</strong> Fullness ratio should be between 0.5 and 1.0 (e.g., 0.90 for 90%).</p>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 10, "</div></div></div></div></div></div><script>\n function resetToDefaults() {\n if (confirm('Are you sure you want to reset to default configuration? This will overwrite your current settings.')) {\n // Reset form fields to their default values\n const form = document.getElementById('taskConfigForm');\n const schemaFields = window.taskConfigSchema ? window.taskConfigSchema.fields : {};\n \n Object.keys(schemaFields).forEach(fieldName => {\n const field = schemaFields[fieldName];\n const element = document.getElementById(fieldName);\n \n if (element && field.default_value !== undefined) {\n if (field.input_type === 'checkbox') {\n element.checked = field.default_value;\n } else if (field.input_type === 'interval') {\n // Handle interval fields with value and unit\n const valueElement = document.getElementById(fieldName + '_value');\n const unitElement = document.getElementById(fieldName + '_unit');\n if (valueElement && unitElement && field.default_value) {\n const defaultSeconds = field.default_value;\n const { value, unit } = convertSecondsToTaskIntervalValueUnit(defaultSeconds);\n valueElement.value = value;\n unitElement.value = unit;\n }\n } else {\n element.value = field.default_value;\n }\n }\n });\n }\n }\n\n function convertSecondsToTaskIntervalValueUnit(totalSeconds) {\n if (totalSeconds === 0) {\n return { value: 0, unit: 'minutes' };\n }\n\n // Check if it's evenly divisible by days\n if (totalSeconds % (24 * 3600) === 0) {\n return { value: totalSeconds / (24 * 3600), unit: 'days' };\n }\n\n // Check if it's evenly divisible by hours\n if (totalSeconds % 3600 === 0) {\n return { value: totalSeconds / 3600, unit: 'hours' };\n }\n\n // Default to minutes\n return { value: totalSeconds / 60, unit: 'minutes' };\n }\n\n // Store schema data for JavaScript access (moved to after div is created)\n </script><!-- Hidden element to store schema data --><div data-task-schema=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var6 string
templ_7745c5c3_Var6, templ_7745c5c3_Err = templ.JoinStringErrs(taskSchemaToBase64JSON(schema))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 182, Col: 58}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var6))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 11, "\" style=\"display: none;\"></div><script>\n // Load schema data now that the div exists\n const base64Data = document.querySelector('[data-task-schema]').getAttribute('data-task-schema');\n const jsonStr = atob(base64Data);\n window.taskConfigSchema = JSON.parse(jsonStr);\n </script>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
return nil
})
}
// TaskConfigField renders a single task configuration field based on schema with typed field lookup
func TaskConfigField(field *config.Field, config interface{}) templ.Component {
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
return templ_7745c5c3_CtxErr
}
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
if !templ_7745c5c3_IsBuffer {
defer func() {
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
if templ_7745c5c3_Err == nil {
templ_7745c5c3_Err = templ_7745c5c3_BufErr
}
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var7 := templ.GetChildren(ctx)
if templ_7745c5c3_Var7 == nil {
templ_7745c5c3_Var7 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
if field.InputType == "interval" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 12, "<!-- Interval field with number input + unit dropdown --> <div class=\"mb-3\"><label for=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var8 string
templ_7745c5c3_Var8, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 197, Col: 39}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var8))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 13, "\" class=\"form-label\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var9 string
templ_7745c5c3_Var9, templ_7745c5c3_Err = templ.JoinStringErrs(field.DisplayName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 198, Col: 35}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var9))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 14, " ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if field.Required {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 15, "<span class=\"text-danger\">*</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 16, "</label><div class=\"input-group\"><input type=\"number\" class=\"form-control\" id=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var10 string
templ_7745c5c3_Var10, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName + "_value")
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 207, Col: 50}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var10))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 17, "\" name=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var11 string
templ_7745c5c3_Var11, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName + "_value")
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 208, Col: 52}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var11))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 18, "\" value=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var12 string
templ_7745c5c3_Var12, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.0f", components.ConvertInt32SecondsToDisplayValue(getTaskConfigInt32Field(config, field.JSONName))))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 209, Col: 142}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var12))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 19, "\" step=\"1\" min=\"1\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if field.Required {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 20, " required")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 21, "> <select class=\"form-select\" id=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var13 string
templ_7745c5c3_Var13, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName + "_unit")
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 218, Col: 49}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var13))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 22, "\" name=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var14 string
templ_7745c5c3_Var14, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName + "_unit")
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 219, Col: 51}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var14))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 23, "\" style=\"max-width: 120px;\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if field.Required {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 24, " required")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 25, "><option value=\"minutes\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if components.GetInt32DisplayUnit(getTaskConfigInt32Field(config, field.JSONName)) == "minutes" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 26, " selected")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 27, ">Minutes</option> <option value=\"hours\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if components.GetInt32DisplayUnit(getTaskConfigInt32Field(config, field.JSONName)) == "hours" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 28, " selected")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 29, ">Hours</option> <option value=\"days\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if components.GetInt32DisplayUnit(getTaskConfigInt32Field(config, field.JSONName)) == "days" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 30, " selected")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 31, ">Days</option></select></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if field.Description != "" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 32, "<div class=\"form-text text-muted\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var15 string
templ_7745c5c3_Var15, templ_7745c5c3_Err = templ.JoinStringErrs(field.Description)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 252, Col: 69}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var15))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 33, "</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 34, "</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else if field.InputType == "checkbox" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 35, "<!-- Checkbox field --> <div class=\"mb-3\"><div class=\"form-check form-switch\"><input class=\"form-check-input\" type=\"checkbox\" id=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var16 string
templ_7745c5c3_Var16, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 262, Col: 39}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var16))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 36, "\" name=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var17 string
templ_7745c5c3_Var17, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 263, Col: 41}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var17))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 37, "\" value=\"on\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if getTaskConfigBoolField(config, field.JSONName) {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 38, " checked")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 39, "> <label class=\"form-check-label\" for=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var18 string
templ_7745c5c3_Var18, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 269, Col: 68}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var18))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 40, "\"><strong>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var19 string
templ_7745c5c3_Var19, templ_7745c5c3_Err = templ.JoinStringErrs(field.DisplayName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 270, Col: 47}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var19))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 41, "</strong></label></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if field.Description != "" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 42, "<div class=\"form-text text-muted\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var20 string
templ_7745c5c3_Var20, templ_7745c5c3_Err = templ.JoinStringErrs(field.Description)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 274, Col: 69}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var20))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 43, "</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 44, "</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else if field.InputType == "text" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 45, "<!-- Text field --> <div class=\"mb-3\"><label for=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var21 string
templ_7745c5c3_Var21, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 280, Col: 39}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var21))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 46, "\" class=\"form-label\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var22 string
templ_7745c5c3_Var22, templ_7745c5c3_Err = templ.JoinStringErrs(field.DisplayName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 281, Col: 35}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var22))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 47, " ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if field.Required {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 48, "<span class=\"text-danger\">*</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 49, "</label> <input type=\"text\" class=\"form-control\" id=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var23 string
templ_7745c5c3_Var23, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 289, Col: 35}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var23))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 50, "\" name=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var24 string
templ_7745c5c3_Var24, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 290, Col: 37}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var24))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 51, "\" value=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var25 string
templ_7745c5c3_Var25, templ_7745c5c3_Err = templ.JoinStringErrs(getTaskConfigStringField(config, field.JSONName))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 291, Col: 72}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var25))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 52, "\" placeholder=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var26 string
templ_7745c5c3_Var26, templ_7745c5c3_Err = templ.JoinStringErrs(field.Placeholder)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 292, Col: 47}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var26))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 53, "\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if field.Required {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 54, " required")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 55, "> ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if field.Description != "" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 56, "<div class=\"form-text text-muted\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var27 string
templ_7745c5c3_Var27, templ_7745c5c3_Err = templ.JoinStringErrs(field.Description)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 298, Col: 69}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var27))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 57, "</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 58, "</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
} else {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 59, "<!-- Number field --> <div class=\"mb-3\"><label for=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var28 string
templ_7745c5c3_Var28, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 304, Col: 39}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var28))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 60, "\" class=\"form-label\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var29 string
templ_7745c5c3_Var29, templ_7745c5c3_Err = templ.JoinStringErrs(field.DisplayName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 305, Col: 35}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var29))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 61, " ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if field.Required {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 62, "<span class=\"text-danger\">*</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 63, "</label> <input type=\"number\" class=\"form-control\" id=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var30 string
templ_7745c5c3_Var30, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 313, Col: 35}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var30))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 64, "\" name=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var31 string
templ_7745c5c3_Var31, templ_7745c5c3_Err = templ.JoinStringErrs(field.JSONName)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 314, Col: 37}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var31))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 65, "\" value=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var32 string
templ_7745c5c3_Var32, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.6g", getTaskConfigFloatField(config, field.JSONName)))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 315, Col: 92}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var32))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 66, "\" placeholder=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var33 string
templ_7745c5c3_Var33, templ_7745c5c3_Err = templ.JoinStringErrs(field.Placeholder)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 316, Col: 47}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var33))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 67, "\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if field.MinValue != nil {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 68, " min=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var34 string
templ_7745c5c3_Var34, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%v", field.MinValue))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 318, Col: 59}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var34))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 69, "\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
if field.MaxValue != nil {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 70, " max=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var35 string
templ_7745c5c3_Var35, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%v", field.MaxValue))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 321, Col: 59}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var35))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 71, "\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 72, " step=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var36 string
templ_7745c5c3_Var36, templ_7745c5c3_Err = templ.JoinStringErrs(getTaskNumberStep(field))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 323, Col: 47}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var36))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 73, "\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if field.Required {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 74, " required")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 75, "> ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if field.Description != "" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 76, "<div class=\"form-text text-muted\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var37 string
templ_7745c5c3_Var37, templ_7745c5c3_Err = templ.JoinStringErrs(field.Description)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/app/task_config_schema.templ`, Line: 329, Col: 69}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var37))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 77, "</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 78, "</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
return nil
})
}
// Typed field getters for task configs - avoiding interface{} where possible
func getTaskConfigBoolField(config interface{}, fieldName string) bool {
// All boolean fields, including the common 'enabled' field in BaseConfig, are resolved via reflection
if value := getTaskFieldValue(config, fieldName); value != nil {
if boolVal, ok := value.(bool); ok {
return boolVal
}
}
return false
}
func getTaskConfigInt32Field(config interface{}, fieldName string) int32 {
switch fieldName {
case "scan_interval_seconds", "max_concurrent":
// Common fields that should be int/int32
if value := getTaskFieldValue(config, fieldName); value != nil {
switch v := value.(type) {
case int32:
return v
case int:
return int32(v)
case int64:
return int32(v)
}
}
return 0
default:
// For other int fields, use reflection
if value := getTaskFieldValue(config, fieldName); value != nil {
switch v := value.(type) {
case int32:
return v
case int:
return int32(v)
case int64:
return int32(v)
case float64:
return int32(v)
}
}
return 0
}
}
func getTaskConfigFloatField(config interface{}, fieldName string) float64 {
if value := getTaskFieldValue(config, fieldName); value != nil {
switch v := value.(type) {
case float64:
return v
case float32:
return float64(v)
case int:
return float64(v)
case int32:
return float64(v)
case int64:
return float64(v)
}
}
return 0.0
}
func getTaskConfigStringField(config interface{}, fieldName string) string {
if value := getTaskFieldValue(config, fieldName); value != nil {
if strVal, ok := value.(string); ok {
return strVal
}
// Convert numbers to strings for form display
switch v := value.(type) {
case int:
return fmt.Sprintf("%d", v)
case int32:
return fmt.Sprintf("%d", v)
case int64:
return fmt.Sprintf("%d", v)
case float64:
return fmt.Sprintf("%.6g", v)
case float32:
return fmt.Sprintf("%.6g", v)
}
}
return ""
}
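// Usage sketch (illustrative; the struct, field names, and values below are
// hypothetical, not part of this package): the typed getters resolve a field by
// its JSON tag and fall back to the zero value when the field is missing or has
// an unexpected type.
//
//	type exampleTaskConfig struct {
//		Enabled             bool    `json:"enabled"`
//		ScanIntervalSeconds int32   `json:"scan_interval_seconds"`
//		GarbageThreshold    float64 `json:"garbage_threshold"`
//	}
//
//	cfg := &exampleTaskConfig{Enabled: true, ScanIntervalSeconds: 1800, GarbageThreshold: 0.3}
//	getTaskConfigBoolField(cfg, "enabled")                // true
//	getTaskConfigInt32Field(cfg, "scan_interval_seconds") // 1800
//	getTaskConfigFloatField(cfg, "garbage_threshold")     // 0.3
//	getTaskConfigStringField(cfg, "no_such_field")        // ""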
func getTaskNumberStep(field *config.Field) string {
if field.Type == config.FieldTypeFloat {
return "0.01"
}
return "1"
}
func getTaskFieldValue(config interface{}, fieldName string) interface{} {
if config == nil {
return nil
}
// Use reflection to get the field value from the config struct
configValue := reflect.ValueOf(config)
if configValue.Kind() == reflect.Ptr {
configValue = configValue.Elem()
}
if configValue.Kind() != reflect.Struct {
return nil
}
configType := configValue.Type()
for i := 0; i < configValue.NumField(); i++ {
field := configValue.Field(i)
fieldType := configType.Field(i)
// Handle embedded structs recursively (before JSON tag check)
if field.Kind() == reflect.Struct && fieldType.Anonymous {
if value := getTaskFieldValue(field.Interface(), fieldName); value != nil {
return value
}
continue
}
// Get JSON tag name
jsonTag := fieldType.Tag.Get("json")
if jsonTag == "" {
continue
}
// Remove options like ",omitempty"
if commaIdx := strings.Index(jsonTag, ","); commaIdx > 0 {
jsonTag = jsonTag[:commaIdx]
}
// Check if this is the field we're looking for
if jsonTag == fieldName {
return field.Interface()
}
}
return nil
}
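// Resolution sketch (hypothetical struct for illustration): embedded structs are
// walked recursively before the JSON tag comparison, so fields promoted from a
// base config and fields declared on the outer struct are both reachable by tag:
//
//	type taskConfig struct {
//		BaseConfig        // embeds Enabled bool `json:"enabled"`, etc.
//		Threshold float64 `json:"threshold"`
//	}
//
//	getTaskFieldValue(cfg, "enabled")   // found inside the embedded BaseConfig
//	getTaskFieldValue(cfg, "threshold") // matched on the outer struct
//
// The accompanying tests exercise this for multiple levels of embedding.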
var _ = templruntime.GeneratedTemplate

View File

@@ -0,0 +1,232 @@
package app
import (
"testing"
)
// Test structs that mirror the actual configuration structure
type TestBaseConfigForTemplate struct {
Enabled bool `json:"enabled"`
ScanIntervalSeconds int `json:"scan_interval_seconds"`
MaxConcurrent int `json:"max_concurrent"`
}
type TestTaskConfigForTemplate struct {
TestBaseConfigForTemplate
TaskSpecificField float64 `json:"task_specific_field"`
AnotherSpecificField string `json:"another_specific_field"`
}
func TestGetTaskFieldValue_EmbeddedStructFields(t *testing.T) {
config := &TestTaskConfigForTemplate{
TestBaseConfigForTemplate: TestBaseConfigForTemplate{
Enabled: true,
ScanIntervalSeconds: 2400,
MaxConcurrent: 5,
},
TaskSpecificField: 0.18,
AnotherSpecificField: "test_value",
}
// Test embedded struct fields
tests := []struct {
fieldName string
expectedValue interface{}
description string
}{
{"enabled", true, "BaseConfig boolean field"},
{"scan_interval_seconds", 2400, "BaseConfig integer field"},
{"max_concurrent", 5, "BaseConfig integer field"},
{"task_specific_field", 0.18, "Task-specific float field"},
{"another_specific_field", "test_value", "Task-specific string field"},
}
for _, test := range tests {
t.Run(test.description, func(t *testing.T) {
result := getTaskFieldValue(config, test.fieldName)
if result != test.expectedValue {
t.Errorf("Field %s: expected %v (%T), got %v (%T)",
test.fieldName, test.expectedValue, test.expectedValue, result, result)
}
})
}
}
func TestGetTaskFieldValue_NonExistentField(t *testing.T) {
config := &TestTaskConfigForTemplate{
TestBaseConfigForTemplate: TestBaseConfigForTemplate{
Enabled: true,
ScanIntervalSeconds: 1800,
MaxConcurrent: 3,
},
}
result := getTaskFieldValue(config, "non_existent_field")
if result != nil {
t.Errorf("Expected nil for non-existent field, got %v", result)
}
}
func TestGetTaskFieldValue_NilConfig(t *testing.T) {
var config *TestTaskConfigForTemplate = nil
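// Note: a typed nil pointer stored in an interface{} is non-nil, so the early
// nil check in getTaskFieldValue does not fire; the reflect path still returns
// nil because Elem() of a nil pointer is not a struct value.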
result := getTaskFieldValue(config, "enabled")
if result != nil {
t.Errorf("Expected nil for nil config, got %v", result)
}
}
func TestGetTaskFieldValue_EmptyStruct(t *testing.T) {
config := &TestTaskConfigForTemplate{}
// Test that we can extract zero values
tests := []struct {
fieldName string
expectedValue interface{}
description string
}{
{"enabled", false, "Zero value boolean"},
{"scan_interval_seconds", 0, "Zero value integer"},
{"max_concurrent", 0, "Zero value integer"},
{"task_specific_field", 0.0, "Zero value float"},
{"another_specific_field", "", "Zero value string"},
}
for _, test := range tests {
t.Run(test.description, func(t *testing.T) {
result := getTaskFieldValue(config, test.fieldName)
if result != test.expectedValue {
t.Errorf("Field %s: expected %v (%T), got %v (%T)",
test.fieldName, test.expectedValue, test.expectedValue, result, result)
}
})
}
}
func TestGetTaskFieldValue_NonStructConfig(t *testing.T) {
var config interface{} = "not a struct"
result := getTaskFieldValue(config, "enabled")
if result != nil {
t.Errorf("Expected nil for non-struct config, got %v", result)
}
}
func TestGetTaskFieldValue_PointerToStruct(t *testing.T) {
config := &TestTaskConfigForTemplate{
TestBaseConfigForTemplate: TestBaseConfigForTemplate{
Enabled: false,
ScanIntervalSeconds: 900,
MaxConcurrent: 2,
},
TaskSpecificField: 0.35,
}
// Test that pointers are handled correctly
enabledResult := getTaskFieldValue(config, "enabled")
if enabledResult != false {
t.Errorf("Expected false for enabled field, got %v", enabledResult)
}
intervalResult := getTaskFieldValue(config, "scan_interval_seconds")
if intervalResult != 900 {
t.Errorf("Expected 900 for scan_interval_seconds field, got %v", intervalResult)
}
}
func TestGetTaskFieldValue_FieldsWithJSONOmitempty(t *testing.T) {
// Test struct with omitempty tags
type TestConfigWithOmitempty struct {
TestBaseConfigForTemplate
OptionalField string `json:"optional_field,omitempty"`
}
config := &TestConfigWithOmitempty{
TestBaseConfigForTemplate: TestBaseConfigForTemplate{
Enabled: true,
ScanIntervalSeconds: 1200,
MaxConcurrent: 4,
},
OptionalField: "optional_value",
}
// Test that fields with omitempty are still found
result := getTaskFieldValue(config, "optional_field")
if result != "optional_value" {
t.Errorf("Expected 'optional_value' for optional_field, got %v", result)
}
// Test embedded fields still work
enabledResult := getTaskFieldValue(config, "enabled")
if enabledResult != true {
t.Errorf("Expected true for enabled field, got %v", enabledResult)
}
}
func TestGetTaskFieldValue_DeepEmbedding(t *testing.T) {
// Test with multiple levels of embedding
type DeepBaseConfig struct {
DeepField string `json:"deep_field"`
}
type MiddleConfig struct {
DeepBaseConfig
MiddleField int `json:"middle_field"`
}
type TopConfig struct {
MiddleConfig
TopField bool `json:"top_field"`
}
config := &TopConfig{
MiddleConfig: MiddleConfig{
DeepBaseConfig: DeepBaseConfig{
DeepField: "deep_value",
},
MiddleField: 123,
},
TopField: true,
}
// Test that deeply embedded fields are found
deepResult := getTaskFieldValue(config, "deep_field")
if deepResult != "deep_value" {
t.Errorf("Expected 'deep_value' for deep_field, got %v", deepResult)
}
middleResult := getTaskFieldValue(config, "middle_field")
if middleResult != 123 {
t.Errorf("Expected 123 for middle_field, got %v", middleResult)
}
topResult := getTaskFieldValue(config, "top_field")
if topResult != true {
t.Errorf("Expected true for top_field, got %v", topResult)
}
}
// Benchmark to ensure performance is reasonable
func BenchmarkGetTaskFieldValue(b *testing.B) {
config := &TestTaskConfigForTemplate{
TestBaseConfigForTemplate: TestBaseConfigForTemplate{
Enabled: true,
ScanIntervalSeconds: 1800,
MaxConcurrent: 3,
},
TaskSpecificField: 0.25,
AnotherSpecificField: "benchmark_test",
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
// Test both embedded and regular fields
_ = getTaskFieldValue(config, "enabled")
_ = getTaskFieldValue(config, "task_specific_field")
}
}

View File

@@ -269,6 +269,55 @@ templ DurationInputField(data DurationInputFieldData) {
}
// Helper functions for duration conversion (used by DurationInputField)
// Typed conversion functions for protobuf int32 (most common) - EXPORTED
func ConvertInt32SecondsToDisplayValue(seconds int32) float64 {
return convertIntSecondsToDisplayValue(int(seconds))
}
func GetInt32DisplayUnit(seconds int32) string {
return getIntDisplayUnit(int(seconds))
}
// Typed conversion functions for regular int
func convertIntSecondsToDisplayValue(seconds int) float64 {
if seconds == 0 {
return 0
}
// Check if it's evenly divisible by days
if seconds%(24*3600) == 0 {
return float64(seconds / (24 * 3600))
}
// Check if it's evenly divisible by hours
if seconds%3600 == 0 {
return float64(seconds / 3600)
}
// Default to minutes
return float64(seconds / 60)
}
func getIntDisplayUnit(seconds int) string {
if seconds == 0 {
return "minutes"
}
// Check if it's evenly divisible by days
if seconds%(24*3600) == 0 {
return "days"
}
// Check if it's evenly divisible by hours
if seconds%3600 == 0 {
return "hours"
}
// Default to minutes
return "minutes"
}
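// Worked values (derived from the functions above): the largest unit that
// divides the stored seconds evenly is chosen for display.
//
//	getIntDisplayUnit(172800)               // "days"
//	convertIntSecondsToDisplayValue(172800) // 2
//	getIntDisplayUnit(7200)                 // "hours"
//	convertIntSecondsToDisplayValue(7200)   // 2
//	getIntDisplayUnit(90)                   // "minutes"
//	convertIntSecondsToDisplayValue(90)     // 1 (integer division truncates the remainder)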
func convertSecondsToUnit(seconds int) string {
if seconds == 0 {
return "minutes"
@@ -303,4 +352,73 @@ func convertSecondsToValue(seconds int, unit string) float64 {
default:
return float64(seconds / 60) // Default to minutes
}
}
// IntervalFieldData represents interval input field data with separate value and unit
type IntervalFieldData struct {
FormFieldData
Seconds int // The interval value in seconds
}
// IntervalField renders a Bootstrap interval input with number + unit dropdown (like task config)
templ IntervalField(data IntervalFieldData) {
<div class="mb-3">
<label for={ data.Name } class="form-label">
{ data.Label }
if data.Required {
<span class="text-danger">*</span>
}
</label>
<div class="input-group">
<input
type="number"
class="form-control"
id={ data.Name + "_value" }
name={ data.Name + "_value" }
value={ fmt.Sprintf("%.0f", convertSecondsToValue(data.Seconds, convertSecondsToUnit(data.Seconds))) }
step="1"
min="1"
if data.Required {
required
}
/>
<select
class="form-select"
id={ data.Name + "_unit" }
name={ data.Name + "_unit" }
style="max-width: 120px;"
if data.Required {
required
}
>
<option
value="minutes"
if convertSecondsToUnit(data.Seconds) == "minutes" {
selected
}
>
Minutes
</option>
<option
value="hours"
if convertSecondsToUnit(data.Seconds) == "hours" {
selected
}
>
Hours
</option>
<option
value="days"
if convertSecondsToUnit(data.Seconds) == "days" {
selected
}
>
Days
</option>
</select>
</div>
if data.Description != "" {
<div class="form-text text-muted">{ data.Description }</div>
}
</div>
}
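// Call-site sketch (hypothetical field name and value): renders a number input
// named "scan_interval_value" plus a unit select named "scan_interval_unit",
// pre-populated as 30 minutes (1800s falls back to the minutes unit).
//
//	@IntervalField(IntervalFieldData{
//		FormFieldData: FormFieldData{Name: "scan_interval", Label: "Scan Interval", Required: true},
//		Seconds:       1800,
//	})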

View File

@@ -1065,6 +1065,55 @@ func DurationInputField(data DurationInputFieldData) templ.Component {
}
// Helper functions for duration conversion (used by DurationInputField)
// Typed conversion functions for protobuf int32 (most common) - EXPORTED
func ConvertInt32SecondsToDisplayValue(seconds int32) float64 {
return convertIntSecondsToDisplayValue(int(seconds))
}
func GetInt32DisplayUnit(seconds int32) string {
return getIntDisplayUnit(int(seconds))
}
// Typed conversion functions for regular int
func convertIntSecondsToDisplayValue(seconds int) float64 {
if seconds == 0 {
return 0
}
// Check if it's evenly divisible by days
if seconds%(24*3600) == 0 {
return float64(seconds / (24 * 3600))
}
// Check if it's evenly divisible by hours
if seconds%3600 == 0 {
return float64(seconds / 3600)
}
// Default to minutes
return float64(seconds / 60)
}
func getIntDisplayUnit(seconds int) string {
if seconds == 0 {
return "minutes"
}
// Check if it's evenly divisible by days
if seconds%(24*3600) == 0 {
return "days"
}
// Check if it's evenly divisible by hours
if seconds%3600 == 0 {
return "hours"
}
// Default to minutes
return "minutes"
}
func convertSecondsToUnit(seconds int) string {
if seconds == 0 {
return "minutes"
@@ -1101,4 +1150,214 @@ func convertSecondsToValue(seconds int, unit string) float64 {
}
}
// IntervalFieldData represents interval input field data with separate value and unit
type IntervalFieldData struct {
FormFieldData
Seconds int // The interval value in seconds
}
// IntervalField renders a Bootstrap interval input with number + unit dropdown (like task config)
func IntervalField(data IntervalFieldData) templ.Component {
return templruntime.GeneratedTemplate(func(templ_7745c5c3_Input templruntime.GeneratedComponentInput) (templ_7745c5c3_Err error) {
templ_7745c5c3_W, ctx := templ_7745c5c3_Input.Writer, templ_7745c5c3_Input.Context
if templ_7745c5c3_CtxErr := ctx.Err(); templ_7745c5c3_CtxErr != nil {
return templ_7745c5c3_CtxErr
}
templ_7745c5c3_Buffer, templ_7745c5c3_IsBuffer := templruntime.GetBuffer(templ_7745c5c3_W)
if !templ_7745c5c3_IsBuffer {
defer func() {
templ_7745c5c3_BufErr := templruntime.ReleaseBuffer(templ_7745c5c3_Buffer)
if templ_7745c5c3_Err == nil {
templ_7745c5c3_Err = templ_7745c5c3_BufErr
}
}()
}
ctx = templ.InitializeContext(ctx)
templ_7745c5c3_Var50 := templ.GetChildren(ctx)
if templ_7745c5c3_Var50 == nil {
templ_7745c5c3_Var50 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 101, "<div class=\"mb-3\"><label for=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var51 string
templ_7745c5c3_Var51, templ_7745c5c3_Err = templ.JoinStringErrs(data.Name)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/components/form_fields.templ`, Line: 366, Col: 24}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var51))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 102, "\" class=\"form-label\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var52 string
templ_7745c5c3_Var52, templ_7745c5c3_Err = templ.JoinStringErrs(data.Label)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/components/form_fields.templ`, Line: 367, Col: 15}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var52))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 103, " ")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.Required {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 104, "<span class=\"text-danger\">*</span>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 105, "</label><div class=\"input-group\"><input type=\"number\" class=\"form-control\" id=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var53 string
templ_7745c5c3_Var53, templ_7745c5c3_Err = templ.JoinStringErrs(data.Name + "_value")
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/components/form_fields.templ`, Line: 376, Col: 29}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var53))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 106, "\" name=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var54 string
templ_7745c5c3_Var54, templ_7745c5c3_Err = templ.JoinStringErrs(data.Name + "_value")
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/components/form_fields.templ`, Line: 377, Col: 31}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var54))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 107, "\" value=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var55 string
templ_7745c5c3_Var55, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%.0f", convertSecondsToValue(data.Seconds, convertSecondsToUnit(data.Seconds))))
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/components/form_fields.templ`, Line: 378, Col: 104}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var55))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 108, "\" step=\"1\" min=\"1\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.Required {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 109, " required")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 110, "> <select class=\"form-select\" id=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var56 string
templ_7745c5c3_Var56, templ_7745c5c3_Err = templ.JoinStringErrs(data.Name + "_unit")
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/components/form_fields.templ`, Line: 387, Col: 28}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var56))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 111, "\" name=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var57 string
templ_7745c5c3_Var57, templ_7745c5c3_Err = templ.JoinStringErrs(data.Name + "_unit")
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/components/form_fields.templ`, Line: 388, Col: 30}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var57))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 112, "\" style=\"max-width: 120px;\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.Required {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 113, " required")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 114, "><option value=\"minutes\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if convertSecondsToUnit(data.Seconds) == "minutes" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 115, " selected")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 116, ">Minutes</option> <option value=\"hours\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if convertSecondsToUnit(data.Seconds) == "hours" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 117, " selected")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 118, ">Hours</option> <option value=\"days\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if convertSecondsToUnit(data.Seconds) == "days" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 119, " selected")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 120, ">Days</option></select></div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
if data.Description != "" {
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 121, "<div class=\"form-text text-muted\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var58 string
templ_7745c5c3_Var58, templ_7745c5c3_Err = templ.JoinStringErrs(data.Description)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/components/form_fields.templ`, Line: 421, Col: 55}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var58))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 122, "</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 123, "</div>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
return nil
})
}
var _ = templruntime.GeneratedTemplate

View File

@@ -111,6 +111,11 @@ templ Layout(c *gin.Context, content templ.Component) {
<i class="fas fa-database me-2"></i>Volumes
</a>
</li>
<li class="nav-item">
<a class="nav-link py-2" href="/cluster/ec-shards">
<i class="fas fa-th-large me-2"></i>EC Volumes
</a>
</li>
<li class="nav-item">
<a class="nav-link py-2" href="/cluster/collections">
<i class="fas fa-layer-group me-2"></i>Collections

View File

@@ -62,7 +62,7 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 2, "</a><ul class=\"dropdown-menu\"><li><a class=\"dropdown-item\" href=\"/logout\"><i class=\"fas fa-sign-out-alt me-2\"></i>Logout</a></li></ul></li></ul></div></div></header><div class=\"row g-0\"><!-- Sidebar --><div class=\"col-md-3 col-lg-2 d-md-block bg-light sidebar collapse\"><div class=\"position-sticky pt-3\"><h6 class=\"sidebar-heading px-3 mt-4 mb-1 text-muted\"><span>MAIN</span></h6><ul class=\"nav flex-column\"><li class=\"nav-item\"><a class=\"nav-link\" href=\"/admin\"><i class=\"fas fa-tachometer-alt me-2\"></i>Dashboard</a></li><li class=\"nav-item\"><a class=\"nav-link collapsed\" href=\"#\" data-bs-toggle=\"collapse\" data-bs-target=\"#clusterSubmenu\" aria-expanded=\"false\" aria-controls=\"clusterSubmenu\"><i class=\"fas fa-sitemap me-2\"></i>Cluster <i class=\"fas fa-chevron-down ms-auto\"></i></a><div class=\"collapse\" id=\"clusterSubmenu\"><ul class=\"nav flex-column ms-3\"><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/cluster/masters\"><i class=\"fas fa-crown me-2\"></i>Masters</a></li><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/cluster/volume-servers\"><i class=\"fas fa-server me-2\"></i>Volume Servers</a></li><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/cluster/filers\"><i class=\"fas fa-folder-open me-2\"></i>Filers</a></li><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/cluster/volumes\"><i class=\"fas fa-database me-2\"></i>Volumes</a></li><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/cluster/collections\"><i class=\"fas fa-layer-group me-2\"></i>Collections</a></li></ul></div></li></ul><h6 class=\"sidebar-heading px-3 mt-4 mb-1 text-muted\"><span>MANAGEMENT</span></h6><ul class=\"nav flex-column\"><li class=\"nav-item\"><a class=\"nav-link\" href=\"/files\"><i class=\"fas fa-folder me-2\"></i>File Browser</a></li><li class=\"nav-item\"><a class=\"nav-link collapsed\" href=\"#\" data-bs-toggle=\"collapse\" data-bs-target=\"#objectStoreSubmenu\" aria-expanded=\"false\" aria-controls=\"objectStoreSubmenu\"><i class=\"fas fa-cloud me-2\"></i>Object Store <i class=\"fas fa-chevron-down ms-auto\"></i></a><div class=\"collapse\" id=\"objectStoreSubmenu\"><ul class=\"nav flex-column ms-3\"><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/object-store/buckets\"><i class=\"fas fa-cube me-2\"></i>Buckets</a></li><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/object-store/users\"><i class=\"fas fa-users me-2\"></i>Users</a></li><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/object-store/policies\"><i class=\"fas fa-shield-alt me-2\"></i>Policies</a></li></ul></div></li><li class=\"nav-item\">")
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 2, "</a><ul class=\"dropdown-menu\"><li><a class=\"dropdown-item\" href=\"/logout\"><i class=\"fas fa-sign-out-alt me-2\"></i>Logout</a></li></ul></li></ul></div></div></header><div class=\"row g-0\"><!-- Sidebar --><div class=\"col-md-3 col-lg-2 d-md-block bg-light sidebar collapse\"><div class=\"position-sticky pt-3\"><h6 class=\"sidebar-heading px-3 mt-4 mb-1 text-muted\"><span>MAIN</span></h6><ul class=\"nav flex-column\"><li class=\"nav-item\"><a class=\"nav-link\" href=\"/admin\"><i class=\"fas fa-tachometer-alt me-2\"></i>Dashboard</a></li><li class=\"nav-item\"><a class=\"nav-link collapsed\" href=\"#\" data-bs-toggle=\"collapse\" data-bs-target=\"#clusterSubmenu\" aria-expanded=\"false\" aria-controls=\"clusterSubmenu\"><i class=\"fas fa-sitemap me-2\"></i>Cluster <i class=\"fas fa-chevron-down ms-auto\"></i></a><div class=\"collapse\" id=\"clusterSubmenu\"><ul class=\"nav flex-column ms-3\"><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/cluster/masters\"><i class=\"fas fa-crown me-2\"></i>Masters</a></li><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/cluster/volume-servers\"><i class=\"fas fa-server me-2\"></i>Volume Servers</a></li><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/cluster/filers\"><i class=\"fas fa-folder-open me-2\"></i>Filers</a></li><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/cluster/volumes\"><i class=\"fas fa-database me-2\"></i>Volumes</a></li><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/cluster/ec-shards\"><i class=\"fas fa-th-large me-2\"></i>EC Volumes</a></li><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/cluster/collections\"><i class=\"fas fa-layer-group me-2\"></i>Collections</a></li></ul></div></li></ul><h6 class=\"sidebar-heading px-3 mt-4 mb-1 text-muted\"><span>MANAGEMENT</span></h6><ul class=\"nav flex-column\"><li class=\"nav-item\"><a class=\"nav-link\" href=\"/files\"><i class=\"fas fa-folder me-2\"></i>File Browser</a></li><li class=\"nav-item\"><a class=\"nav-link collapsed\" href=\"#\" data-bs-toggle=\"collapse\" data-bs-target=\"#objectStoreSubmenu\" aria-expanded=\"false\" aria-controls=\"objectStoreSubmenu\"><i class=\"fas fa-cloud me-2\"></i>Object Store <i class=\"fas fa-chevron-down ms-auto\"></i></a><div class=\"collapse\" id=\"objectStoreSubmenu\"><ul class=\"nav flex-column ms-3\"><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/object-store/buckets\"><i class=\"fas fa-cube me-2\"></i>Buckets</a></li><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/object-store/users\"><i class=\"fas fa-users me-2\"></i>Users</a></li><li class=\"nav-item\"><a class=\"nav-link py-2\" href=\"/object-store/policies\"><i class=\"fas fa-shield-alt me-2\"></i>Policies</a></li></ul></div></li><li class=\"nav-item\">")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
@@ -153,7 +153,7 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
var templ_7745c5c3_Var3 templ.SafeURL
templ_7745c5c3_Var3, templ_7745c5c3_Err = templ.JoinURLErrs(templ.SafeURL(menuItem.URL))
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 253, Col: 117}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 258, Col: 117}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var3))
if templ_7745c5c3_Err != nil {
@@ -188,7 +188,7 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
var templ_7745c5c3_Var6 string
templ_7745c5c3_Var6, templ_7745c5c3_Err = templ.JoinStringErrs(menuItem.Name)
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 254, Col: 109}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 259, Col: 109}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var6))
if templ_7745c5c3_Err != nil {
@@ -206,7 +206,7 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
var templ_7745c5c3_Var7 templ.SafeURL
templ_7745c5c3_Var7, templ_7745c5c3_Err = templ.JoinURLErrs(templ.SafeURL(menuItem.URL))
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 257, Col: 110}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 262, Col: 110}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var7))
if templ_7745c5c3_Err != nil {
@@ -241,7 +241,7 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
var templ_7745c5c3_Var10 string
templ_7745c5c3_Var10, templ_7745c5c3_Err = templ.JoinStringErrs(menuItem.Name)
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 258, Col: 109}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 263, Col: 109}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var10))
if templ_7745c5c3_Err != nil {
@@ -274,7 +274,7 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
var templ_7745c5c3_Var11 templ.SafeURL
templ_7745c5c3_Var11, templ_7745c5c3_Err = templ.JoinURLErrs(templ.SafeURL(menuItem.URL))
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 270, Col: 106}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 275, Col: 106}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var11))
if templ_7745c5c3_Err != nil {
@@ -309,7 +309,7 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
var templ_7745c5c3_Var14 string
templ_7745c5c3_Var14, templ_7745c5c3_Err = templ.JoinStringErrs(menuItem.Name)
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 271, Col: 105}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 276, Col: 105}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var14))
if templ_7745c5c3_Err != nil {
@@ -370,7 +370,7 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
var templ_7745c5c3_Var15 string
templ_7745c5c3_Var15, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%d", time.Now().Year()))
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 318, Col: 60}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 323, Col: 60}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var15))
if templ_7745c5c3_Err != nil {
@@ -383,7 +383,7 @@ func Layout(c *gin.Context, content templ.Component) templ.Component {
var templ_7745c5c3_Var16 string
templ_7745c5c3_Var16, templ_7745c5c3_Err = templ.JoinStringErrs(version.VERSION_NUMBER)
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 318, Col: 102}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 323, Col: 102}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var16))
if templ_7745c5c3_Err != nil {
@@ -435,7 +435,7 @@ func LoginForm(c *gin.Context, title string, errorMessage string) templ.Componen
var templ_7745c5c3_Var18 string
templ_7745c5c3_Var18, templ_7745c5c3_Err = templ.JoinStringErrs(title)
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 342, Col: 17}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 347, Col: 17}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var18))
if templ_7745c5c3_Err != nil {
@@ -448,7 +448,7 @@ func LoginForm(c *gin.Context, title string, errorMessage string) templ.Componen
var templ_7745c5c3_Var19 string
templ_7745c5c3_Var19, templ_7745c5c3_Err = templ.JoinStringErrs(title)
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 356, Col: 57}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 361, Col: 57}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var19))
if templ_7745c5c3_Err != nil {
@@ -466,7 +466,7 @@ func LoginForm(c *gin.Context, title string, errorMessage string) templ.Componen
var templ_7745c5c3_Var20 string
templ_7745c5c3_Var20, templ_7745c5c3_Err = templ.JoinStringErrs(errorMessage)
if templ_7745c5c3_Err != nil {
-return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 363, Col: 45}
+return templ.Error{Err: templ_7745c5c3_Err, FileName: `view/layout/layout.templ`, Line: 368, Col: 45}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var20))
if templ_7745c5c3_Err != nil {