* Fix EC Volumes page header styling to match admin theme
Fixes #7779
The EC Volumes page was rendering with bright Bootstrap default colors
instead of the admin dark theme because it was structured as a standalone
HTML document with its own DOCTYPE, head, and body tags.
This fix converts the template to be a content fragment (like other
properly styled templates such as cluster_ec_shards.templ) so it
correctly inherits the admin.css styling when rendered within the layout.
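A minimal sketch of the fragment shape in templ syntax (the component name and markup below are illustrative, not the actual file contents):

    // Content fragment only: no DOCTYPE, <head>, or <body>, so the admin
    // layout wraps it and admin.css applies when it renders.
    templ EcVolumes(data EcVolumesData) {
        <div class="container-fluid">
            <h2>EC Volumes</h2>
            <!-- volume table markup -->
        </div>
    }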
* Address review comments: fix URL interpolation and falsy value check
- Fix collection filter link to use templ.URL() for proper interpolation
- Change updateUrl() falsy check from 'if (params[key])' to
'if (params[key] != null)' to handle 0 and false values correctly
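A hedged sketch of the link-building side in Go (the helper name is hypothetical; templ.URL is the real templ helper that marks a string as a safe URL):

    import (
        "net/url"

        "github.com/a-h/templ"
    )

    // Build the collection filter href through templ.URL so the collection
    // value is escaped and interpolated rather than concatenated raw.
    func collectionLink(collection string) templ.SafeURL {
        return templ.URL("/cluster/ec-volumes?collection=" + url.QueryEscape(collection))
    }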
* Address additional review comments
- Use erasure_coding.TotalShardsCount constant instead of hardcoded '14'
for shard count displays (lines 88 and 214)
- Improve error handling in repairVolume() to check response.ok before
parsing JSON, preventing confusing errors on non-JSON responses
- Remove unused totalSize variable in formatShardRangesWithSizes()
- Simplify redundant pagination conditions
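For the shard-count change, a minimal sketch (the helper and label format are illustrative; the constant is the real one from weed/storage/erasure_coding):

    import (
        "fmt"

        "github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding"
    )

    // Derive the denominator from the package constant instead of a
    // hardcoded 14, so the display tracks any future shard-count change.
    func shardCountLabel(present int) string {
        return fmt.Sprintf("%d/%d shards", present, erasure_coding.TotalShardsCount)
    }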
* Remove unused code: displayShardLocationsHTML, groupShardsByServerWithSizes, formatShardRangesWithSizes
These functions and templates were defined but never called anywhere
in the codebase. Removing them reduces code maintenance burden.
* Address review feedback: improve code quality
- Add defensive JSON response validation in repairVolume function
- Replace O(n²) bubble sorts with Go's standard sort.Ints and sort.Slice
- Document volume status thresholds explaining EC recovery logic:
* Critical: unrecoverable (more than DataShardsCount missing)
* Degraded: high risk (more than half DataShardsCount missing)
* Incomplete: reduced redundancy (more than half ParityShardsCount missing)
* Minor: fully recoverable with good margin
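Taken together, the thresholds reduce to a comparison on the number of missing shards. A sketch assuming SeaweedFS's RS(10,4) layout (DataShardsCount = 10, ParityShardsCount = 4); the function and status strings are hypothetical:

    // Classify an EC volume by how many of its shards are missing,
    // mirroring the documented thresholds.
    func volumeStatus(missing int) string {
        switch {
        case missing > erasure_coding.DataShardsCount:
            return "critical" // unrecoverable
        case missing > erasure_coding.DataShardsCount/2:
            return "degraded" // high risk
        case missing > erasure_coding.ParityShardsCount/2:
            return "incomplete" // reduced redundancy
        default:
            return "minor" // fully recoverable with good margin
        }
    }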
* Fix redundant shard count display in Healthy Volumes card
Changed from 'Complete (14/14 shards)' to 'All 14 shards present' since
the numerator and denominator were always the same value.
* Use templ.URL for default collection link for consistency
* Fix Clear Filter link to stay on EC Volumes page
Changed href from /cluster/ec-shards to /cluster/ec-volumes so clearing
the filter stays on the current page instead of navigating away.
* initial design
* added simulation as tests
* reorganized the codebase to move the simulation framework and tests into their own dedicated package
* integration test for the ec worker task
* remove "enhanced" reference
* start master, volume servers, filer
Current Status
✅ Master: Healthy and running (port 9333)
✅ Filer: Healthy and running (port 8888)
✅ Volume Servers: All 6 servers running (ports 8080-8085)
🔄 Admin/Workers: Will start when dependencies are ready
* generate write load
* tasks are assigned
* admin starts with a gRPC port; worker has its own working directory
* Update .gitignore
* working worker and admin. Task detection is not working yet.
* compiles, detection uses volumeSizeLimitMB from master
* compiles
* worker retries connecting to admin
* build and restart
* rendering pending tasks
* skip task ID column
* sticky worker id
* test canScheduleTaskNow
* worker reconnect to admin
* clean up logs
* worker registers itself first
* worker can run ec work and report status
but:
1. one volume should not be repeatedly worked on.
2. ec shards need to be distributed and the source data should be deleted.
* move ec task logic
* listing ec shards
* local copy and ec encoding; still need to distribute
* ec is mostly working now
* distribution of ec shards needs improvement
* need configuration to enable ec
* show ec volumes
* interval field UI component
* rename
* integration test with vacuuming
* garbage percentage threshold
* fix warning
* display ec shard sizes
* fix ec volumes list
* Update ui.go
* show default values
* ensure correct default value
* MaintenanceConfig uses ConfigField
* use schema defined defaults
* config
* reduce duplication
* refactor to use BaseUIProvider
* each task register its schema
* checkECEncodingCandidate uses ecDetector
* use vacuumDetector
* use volumeSizeLimitMB
* remove
* remove unused
* refactor
* use new framework
* remove v2 reference
* refactor
* left menu can scroll now
* The maintenance manager was not being initialized when no data directory was configured for persistent storage.
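A loose sketch of the likely shape of the fix (every name below is hypothetical, not the PR's actual API): construct the manager unconditionally and attach persistent storage only when a data directory exists.

    // Hypothetical: initialization no longer depends on dataDir being set.
    manager := NewMaintenanceManager()
    if dataDir != "" {
        manager.AttachStore(NewFileStore(dataDir)) // persistence is optional
    }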
* saving config
* Update task_config_schema_templ.go
* enable/disable tasks
* protobuf encoded task configurations
* fix system settings
* use ui component
* remove logs
* interface{} reduction
* reduce interface{}
* reduce interface{}
* avoid from/to map
* reduce interface{}
* refactor
* keep it DRY
* added logging
* debug messages
* debug level
* debug
* show the log caller line
* use configured task policy
* log level
* handle admin heartbeat response
* Update worker.go
* fix EC rack and DC count
* Report task status to admin server
* fix task logging, simplify interface checking, use erasure_coding constants
* factor in empty volume server during task planning
* volume.list adds disk id
* track disk id also
* fix locking for scheduled and manual scanning
* add active topology
* simplify task detector
* ec task completed, but shards are not showing up
* implement ec in ec_typed.go
* adjust log level
* dedup
* implement copying ec shards and only the ecx files
* use disk id when distributing ec shards
🎯 Planning: ActiveTopology creates DestinationPlan with specific TargetDisk
📦 Task Creation: maintenance_integration.go creates ECDestination with DiskId
🚀 Task Execution: EC task passes DiskId in VolumeEcShardsCopyRequest
💾 Volume Server: Receives disk_id and stores shards on specific disk (vs.store.Locations[req.DiskId])
📂 File System: EC shards and metadata land in the exact disk directory planned
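A hedged sketch of the task-execution step (disk_id is the field this PR introduces; the other fields follow volume_server_pb, and the helper plus its arguments are illustrative):

    import "github.com/seaweedfs/seaweedfs/weed/pb/volume_server_pb"

    // Forward the DiskId chosen by ActiveTopology so the receiving volume
    // server stores the shards on vs.store.Locations[req.DiskId].
    func buildCopyRequest(volumeID uint32, collection string, shardIDs []uint32, sourceNode string, targetDisk uint32) *volume_server_pb.VolumeEcShardsCopyRequest {
        return &volume_server_pb.VolumeEcShardsCopyRequest{
            VolumeId:       volumeID,
            Collection:     collection,
            ShardIds:       shardIDs,
            CopyEcxFile:    true, // copy the .ecx index along with the shards
            SourceDataNode: sourceNode,
            DiskId:         targetDisk, // target disk planned by ActiveTopology
        }
    }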
* Delete original volume from all locations
* clean up existing shard locations
* local encoding and distributing
* Update docker/admin_integration/EC-TESTING-README.md
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* check volume id range
* simplify
* fix tests
* fix types
* clean up logs and tests
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>