Commit Graph

24 Commits

Author SHA1 Message Date
Chris Lu
e9da64f62a fix: volume server healthz now checks local conditions only (#7610)
This fixes issue #6823 where a single volume server shutdown would cause
other healthy volume servers to fail their health checks and get restarted
by Kubernetes, causing a cascading failure.

Previously, the healthz handler checked if all replicated volumes could
reach their remote replicas via GetWritableRemoteReplications(). When a
volume server went down, the master would remove it from the volume
location list. Other volume servers would then fail their healthz checks
because they couldn't find all required replicas, causing Kubernetes to
restart them.

The healthz endpoint now only checks local conditions:
1. Is the server shutting down?
2. Is the server heartbeating with the master?

This follows the principle that a health check should only verify the
health of THIS server, not the overall cluster state.
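
A minimal sketch of what such a local-only liveness check could look like; the type, field, and handler names below are illustrative assumptions, not the actual SeaweedFS code:

```go
package sketch

import "net/http"

// VolumeServerSketch is a stand-in for the real volume server type; the
// callbacks here are assumptions used only to illustrate the two checks.
type VolumeServerSketch struct {
	stopping          func() bool // reports whether shutdown has started
	heartbeatToMaster func() bool // reports whether heartbeats are reaching the master
}

// healthzHandler checks only local conditions: shutdown state and the master
// heartbeat. It never looks up remote replicas, so one peer going down cannot
// fail this server's health check.
func (vs *VolumeServerSketch) healthzHandler(w http.ResponseWriter, r *http.Request) {
	if vs.stopping() {
		http.Error(w, "shutting down", http.StatusServiceUnavailable)
		return
	}
	if !vs.heartbeatToMaster() {
		http.Error(w, "no master heartbeat", http.StatusServiceUnavailable)
		return
	}
	w.WriteHeader(http.StatusOK)
}
```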

Fixes #6823
2025-12-02 23:19:14 -08:00
Chris Lu
891a2fb6eb Admin: misc improvements on admin server and workers. EC now works. (#7055)
* initial design

* added simulation as tests

* reorganized the codebase to move the simulation framework and tests into their own dedicated package

* integration test. ec worker task

* remove "enhanced" reference

* start master, volume servers, filer

Current Status
- Master: Healthy and running (port 9333)
- Filer: Healthy and running (port 8888)
- Volume Servers: All 6 servers running (ports 8080-8085)
- 🔄 Admin/Workers: Will start when dependencies are ready

* generate write load

* tasks are assigned

* admin starts with grpc port. worker has its own working directory

* Update .gitignore

* working worker and admin. Task detection is not working yet.

* compiles, detection uses volumeSizeLimitMB from master

* compiles

* worker retries connecting to admin

* build and restart

* rendering pending tasks

* skip task ID column

* sticky worker id

* test canScheduleTaskNow

* worker reconnect to admin

* clean up logs

* worker register itself first

* worker can run ec work and report status

but:
1. one volume should not be repeatedly worked on.
2. ec shards need to be distributed and the source data should be deleted.

* move ec task logic

* listing ec shards

* local copy, ec. Need to distribute.

* ec is mostly working now

* distribution of ec shards needs improvement
* need configuration to enable ec

* show ec volumes

* interval field UI component

* rename

* integration test with vacuuming

* garbage percentage threshold

* fix warning

* display ec shard sizes

* fix ec volumes list

* Update ui.go

* show default values

* ensure correct default value

* MaintenanceConfig use ConfigField

* use schema defined defaults

* config

* reduce duplication

* refactor to use BaseUIProvider

* each task register its schema

* checkECEncodingCandidate use ecDetector

* use vacuumDetector

* use volumeSizeLimitMB

* remove

remove

* remove unused

* refactor

* use new framework

* remove v2 reference

* refactor

* left menu can scroll now

* The maintenance manager was not being initialized when no data directory was configured for persistent storage.

* saving config

* Update task_config_schema_templ.go

* enable/disable tasks

* protobuf encoded task configurations

* fix system settings

* use ui component

* remove logs

* interface{} Reduction

* reduce interface{}

* reduce interface{}

* avoid from/to map

* reduce interface{}

* refactor

* keep it DRY

* added logging

* debug messages

* debug level

* debug

* show the log caller line

* use configured task policy

* log level

* handle admin heartbeat response

* Update worker.go

* fix EC rack and dc count

* Report task status to admin server

* fix task logging, simplify interface checking, use erasure_coding constants

* factor in empty volume server during task planning

* volume.list adds disk id

* track disk id also

* fix locking scheduled and manual scanning

* add active topology

* simplify task detector

* ec task completed, but shards are not showing up

* implement ec in ec_typed.go

* adjust log level

* dedup

* implementing ec copying shards and only ecx files

* use disk id when distributing ec shards

🎯 Planning: ActiveTopology creates DestinationPlan with specific TargetDisk
📦 Task Creation: maintenance_integration.go creates ECDestination with DiskId
🚀 Task Execution: EC task passes DiskId in VolumeEcShardsCopyRequest
💾 Volume Server: Receives disk_id and stores shards on specific disk (vs.store.Locations[req.DiskId])
📂 File System: EC shards and metadata land in the exact disk directory planned
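
A rough sketch of the disk-id plumbing described above; the type and field names here are hypothetical stand-ins for the real request proto and store types, mirroring only the vs.store.Locations[req.DiskId] lookup mentioned in the commit:

```go
package ecsketch

import "fmt"

// DiskLocation and Store are simplified stand-ins for the volume server's
// storage types; only the parts needed to show disk selection appear.
type DiskLocation struct{ Directory string }

type Store struct{ Locations []*DiskLocation }

// ShardsCopyRequest mimics a request that carries the planner's chosen disk.
type ShardsCopyRequest struct {
	VolumeId uint32
	DiskId   uint32 // index into Store.Locations, assigned during planning
}

// targetDirectory resolves the planned disk id to a concrete directory, so
// EC shards and metadata land on the exact disk chosen by the planner.
func targetDirectory(store *Store, req *ShardsCopyRequest) (string, error) {
	if int(req.DiskId) >= len(store.Locations) {
		return "", fmt.Errorf("disk id %d out of range (%d disks)", req.DiskId, len(store.Locations))
	}
	return store.Locations[req.DiskId].Directory, nil
}
```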

* Delete original volume from all locations

* clean up existing shard locations

* local encoding and distributing

* Update docker/admin_integration/EC-TESTING-README.md

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* check volume id range

* simplify

* fix tests

* fix types

* clean up logs and tests

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-07-30 12:38:03 -07:00
chrislu
bd4891a117 change version directory 2025-06-03 22:46:10 -07:00
chrislu
26dbc6c905 move to https://github.com/seaweedfs/seaweedfs 2022-07-29 00:17:28 -07:00
Konstantin Lebedev
9ea09cc41c healthz check to avoid drain pod with last replicas 2022-02-16 14:18:36 +05:00
Chris Lu
7ce647f27e support customizable disk type 2021-02-13 15:42:42 -08:00
Chris Lu
2e8dba571b adjust volume server UI 2020-12-14 00:51:57 -08:00
Chris Lu
62563a895a refactoring 2020-09-20 16:00:01 -07:00
Chris Lu
bc2ec6774d inject git version into build 2020-06-02 00:10:38 -07:00
LazyDBA247-Anyvision
4ff513d64d status route: add DiskStatuses for disks in the volume server status
when monitoring the server, it is better to know the status of the disks & volumes in a single route.
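
A small hedged example of consuming such a route; the server address, the status path, and the per-disk field names are assumptions for illustration, only the DiskStatuses key comes from the commit itself:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// diskStatus holds the per-disk fields we assume the route exposes.
type diskStatus struct {
	Dir  string
	All  uint64
	Used uint64
	Free uint64
}

// volumeServerStatus models just the part of the response used here.
type volumeServerStatus struct {
	DiskStatuses []diskStatus
}

func main() {
	// Assumed volume server address and status path.
	resp, err := http.Get("http://localhost:8080/status")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var status volumeServerStatus
	if err := json.NewDecoder(resp.Body).Decode(&status); err != nil {
		panic(err)
	}
	for _, d := range status.DiskStatuses {
		fmt.Printf("%s: %d used / %d total bytes\n", d.Dir, d.Used, d.All)
	}
}
```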
2020-02-23 23:27:09 +02:00
Chris Lu
6383b45bd0 add lock variable 2019-12-02 20:49:50 -08:00
Chris Lu
6a756136ef go fmt 2018-10-23 23:59:49 -07:00
Chris Lu
46eb77f9bb move DiskStatus and MemStatus to protobuf 2018-10-15 22:25:28 -07:00
Chris Lu
eec951cad2 migrate volume sync to gRpc 2018-10-15 21:44:41 -07:00
Chris Lu
f8b2d3cacc move volume mount/unmount on volume server to grpc 2018-10-15 01:48:15 -07:00
Chris Lu
66a353dcb5 remove volume server /admin/volume/delete 2018-10-15 01:26:49 -07:00
Chris Lu
fda771c83f migrate volume sync status to grpc API on volume server 2018-10-15 01:19:15 -07:00
Chris Lu
b423bb9e2d migrate assign volume to grpc API on volume server 2018-10-15 00:40:46 -07:00
Chris Lu
8301519fb0 migrate delete collection to grpc API on volume server 2018-10-15 00:03:55 -07:00
Chris Lu
7efeb146c5 fix log 2018-05-31 22:49:55 -07:00
Chris Lu
458ada173e go fmt 2018-05-27 11:52:26 -07:00
brstgt
e074a54a20 Delete volumes online without restarting volume server 2017-01-20 13:02:37 +01:00
Chris Lu
ed44f12f6d support Fallocate on linux 2017-01-08 11:01:46 -08:00
Chris Lu
5ce6bbf076 directory structure change to work with glide
glide has its own requirements. My previous workaround caused me some
code checkin errors. Need to fix this.
2016-06-02 18:09:14 -07:00