Commit Graph

293 Commits

Author SHA1 Message Date
Chris Lu
b3f7472fd3 4.15 2026-03-04 22:13:57 -08:00
Chris Lu
7799804200 4.14
Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-04 19:22:39 -08:00
Chris Lu
1a3e3100d0 Helm: set serviceAccountName independent of cluster role (#8495)
* Add stale job expiry and expire API

* Add expire job button

* helm: decouple serviceAccountName from cluster role

---------

Co-authored-by: Copilot <copilot@github.com>
2026-03-03 12:13:18 -08:00
Surote
3db05f59f0 Feat: update openshift helm value to support seaweed s3 (#8494)
feat: update openshift helm values

Update helm values for openshift to enable/disable s3 and change the log volume to `emptyDir` instead of `hostPath`
2026-03-03 01:11:01 -08:00
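A values override along the lines this commit describes might look like the following sketch (key names are assumptions, not taken from the chart):

```
# Hypothetical sketch of the OpenShift-oriented overrides described above;
# the exact key names in the chart may differ.
s3:
  enabled: true          # toggle the S3 gateway on or off
volume:
  logs:
    type: "emptyDir"     # was "hostPath"; emptyDir avoids host filesystem access
```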
Chris Lu
2644816692 helm: avoid duplicate env var keys in workload env lists (#8488)
* helm: dedupe merged extraEnvironmentVars in workloads

* address comments

Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>

* range

Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>

* helm: reuse merge helper for extraEnvironmentVars

---------

Co-authored-by: Copilot <copilot@github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-02 12:10:57 -08:00
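Deduplicating a merged env list by key in a Helm template is commonly done with a Sprig dict used as a seen-set; a minimal sketch, not the chart's actual helper (the template name and structure are illustrative):

```
{{- /* Illustrative dedupe helper: the first occurrence of each name wins. */ -}}
{{- define "seaweedfs.dedupeEnv" -}}
{{- $seen := dict -}}
{{- range $env := . -}}
{{- if not (hasKey $seen $env.name) -}}
{{- $_ := set $seen $env.name true }}
- name: {{ $env.name }}
  value: {{ $env.value | quote }}
{{- end -}}
{{- end -}}
{{- end -}}
```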
Kirill Ilin
ae02d47433 helm: add optional parameters to COSI BucketClass (#8453)
Add cosi.bucketClassParameters to allow passing arbitrary parameters
to the default BucketClass resource. This enables use cases like
tiered storage where a diskType parameter needs to be set on the
BucketClass to route objects to specific volume servers.

When bucketClassParameters is empty (default), the BucketClass is
rendered without a parameters block, preserving backward compatibility.

Signed-off-by: Kirill Ilin <stitch14@yandex.ru>
Co-authored-by: Claude <noreply@anthropic.com>
2026-02-26 12:19:07 -08:00
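Based on the description above, usage might look like this values sketch (the diskType parameter and its value are illustrative; parameters are free-form and passed through to the BucketClass):

```
# Hypothetical values.yaml excerpt for the tiered-storage use case.
cosi:
  bucketClassParameters:
    diskType: "ssd"   # example: route objects to SSD-backed volume servers
```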
Chris Lu
9b6fc49946 Chart createBuckets config #8368: Add TTL, Object Lock, and Versioning support (#8375)
* Chart createBuckets config #8368: Add TTL, Object Lock, and Versioning support

* Update weed/shell/command_s3_bucket_versioning.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* address comments

* address comments

* go fmt

* fix: failures were still treated like “bucket not found”

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-02-26 11:56:10 -08:00
Peter Dodd
f4af1cc0ba feat(helm): annotations for service account (#8429) 2026-02-24 07:35:13 -08:00
Sheya Bernstein
d8b8f0dffd fix(helm): add missing app.kubernetes.io/instance label to volume service (#8403) 2026-02-22 07:20:38 -08:00
Chris Lu
2a1ae896e4 helm: refine openshift-values.yaml for assigned UID ranges (#8396)
* helm: refine openshift-values.yaml to remove hardcoded UIDs

Remove hardcoded runAsUser, runAsGroup, and fsGroup from the
openshift-values.yaml example. This allows OpenShift's admission
controller to automatically assign a valid UID from the namespace's
allocated range, avoiding "forbidden" errors when UID 1000 is
outside the permissible range.

Updates #8381, #8390.

* helm: fix volume.logs and add consistent security context comments

* Update README.md
2026-02-20 12:05:57 -08:00
Richard Chen Zheng
964a8f5fde Allow user to define access and secret key via values (#8389)
* Allow user to define admin access and secret key via values

* Add comments to values.yaml

* Add support for read for consistency

* Simplify templating

* Add checksum to s3 config

* Update comments

* Revert "Add checksum to s3 config"

This reverts commit d21a7038a86ae2adf547730b2cb6f455dcd4ce70.
2026-02-20 00:37:54 -08:00
Chris Lu
40cc0e04a6 docker: fix entrypoint chown guard; helm: add openshift-values.yaml (#8390)
* Enforce IAM for s3tables bucket creation

* Prefer IAM path when policies exist

* Ensure IAM enforcement honors default allow

* address comments

* Reused the precomputed principal when setting tableBucketMetadata.OwnerAccountID, avoiding the redundant getAccountID call.

* get identity

* fix

* dedup

* fix

* comments

* fix tests

* update iam config

* go fmt

* fix ports

* fix flags

* mini clean shutdown

* Revert "update iam config"

This reverts commit ca48fdbb0afa45657823d98657556c0bbf24f239.

Revert "mini clean shutdown"

This reverts commit 9e17f6baffd5dd7cc404d831d18dd618b9fe5049.

Revert "fix flags"

This reverts commit e9e7b29d2f77ee5cb82147d50621255410695ee3.

Revert "go fmt"

This reverts commit bd3241960b1d9484b7900190773b0ecb3f762c9a.

* test/s3tables: share single weed mini per test package via TestMain

Previously each top-level test function in the catalog and s3tables
package started and stopped its own weed mini instance. This caused
failures when a prior instance wasn't cleanly stopped before the next
one started (port conflicts, leaked global state).

Changes:
- catalog/iceberg_catalog_test.go: introduce TestMain that starts one
  shared TestEnvironment (external weed binary) before all tests and
  tears it down after. All individual test functions now use sharedEnv.
  Added randomSuffix() for unique resource names across tests.
- catalog/pyiceberg_test.go: updated to use sharedEnv instead of
  per-test environments.
- catalog/pyiceberg_test_helpers.go -> pyiceberg_test_helpers_test.go:
  renamed to a _test.go file so it can access TestEnvironment which is
  defined in a test file.
- table-buckets/setup.go: add package-level sharedCluster variable.
- table-buckets/s3tables_integration_test.go: introduce TestMain that
  starts one shared TestCluster before all tests. TestS3TablesIntegration
  now uses sharedCluster. Extract startMiniClusterInDir (no *testing.T)
  for TestMain use. TestS3TablesCreateBucketIAMPolicy keeps its own
  cluster (different IAM config). Remove miniClusterMutex (no longer
  needed). Fix Stop() to not panic when t is nil.

* delete

* parse

* default allow should work with anonymous

* fix port

* iceberg route

The failures came from Iceberg REST using the default bucket warehouse when no prefix is provided. The tests create random buckets, so /v1/namespaces was looking in warehouse and failing. The tests were updated to use the prefixed Iceberg routes (/v1/{bucket}/...) via a small helper.

* test(s3tables): fix port conflicts and IAM ARN matching in integration tests

- Pass -master.dir explicitly to prevent filer store directory collision
  between shared cluster and per-test clusters running in the same process
- Pass -volume.port.public and -volume.publicUrl to prevent the global
  publicPort flag (mutated from 0 → concrete port by first cluster) from
  being reused by a second cluster, causing 'address already in use'
- Remove the flag-reset loop in Stop() that reset global flag values while
  other goroutines were reading them (race → panic)
- Fix IAM policy Resource ARN in TestS3TablesCreateBucketIAMPolicy to use
  wildcards (arn:aws:s3tables:*:*:bucket/<name>) because the handler
  generates ARNs with its own DefaultRegion (us-east-1) and principal name
  ('admin'), not the test constants testRegion/testAccountID

* docker: fix entrypoint chown guard; helm: add openshift-values.yaml

Fix a regression in entrypoint.sh where the DATA_UID/DATA_GID
ownership comparison was dropped, causing chown -R /data to run
unconditionally on every container start even when ownership was
already correct. Restore the guard so the recursive chown is
skipped when the seaweed user already owns /data — making startup
faster on subsequent runs and a no-op on OpenShift/PVC deployments
where fsGroup has already set correct ownership.

Add k8s/charts/seaweedfs/openshift-values.yaml: an example Helm
overrides file for deploying SeaweedFS on OpenShift (or any cluster
enforcing the Kubernetes restricted Pod Security Standard). Replaces
hostPath volumes with PVCs, sets runAsUser/fsGroup to 1000
(the seaweed user baked into the image), drops all capabilities,
disables privilege escalation, and enables RuntimeDefault seccomp —
satisfying OpenShift's default restricted SCC without needing a
custom SCC or root access.

Fixes #8381
2026-02-20 00:35:42 -08:00
Chris Lu
8ec9ff4a12 Refactor plugin system and migrate worker runtime (#8369)
* admin: add plugin runtime UI page and route wiring

* pb: add plugin gRPC contract and generated bindings

* admin/plugin: implement worker registry, runtime, monitoring, and config store

* admin/dash: wire plugin runtime and expose plugin workflow APIs

* command: add flags to enable plugin runtime

* admin: rename remaining plugin v2 wording to plugin

* admin/plugin: add detectable job type registry helper

* admin/plugin: add scheduled detection and dispatch orchestration

* admin/plugin: prefetch job type descriptors when workers connect

* admin/plugin: add known job type discovery API and UI

* admin/plugin: refresh design doc to match current implementation

* admin/plugin: enforce per-worker scheduler concurrency limits

* admin/plugin: use descriptor runtime defaults for scheduler policy

* admin/ui: auto-load first known plugin job type on page open

* admin/plugin: bootstrap persisted config from descriptor defaults

* admin/plugin: dedupe scheduled proposals by dedupe key

* admin/ui: add job type and state filters for plugin monitoring

* admin/ui: add per-job-type plugin activity summary

* admin/plugin: split descriptor read API from schema refresh

* admin/ui: keep plugin summary metrics global while tables are filtered

* admin/plugin: retry executor reservation before timing out

* admin/plugin: expose scheduler states for monitoring

* admin/ui: show per-job-type scheduler states in plugin monitor

* pb/plugin: rename protobuf package to plugin

* admin/plugin: rename pluginRuntime wiring to plugin

* admin/plugin: remove runtime naming from plugin APIs and UI

* admin/plugin: rename runtime files to plugin naming

* admin/plugin: persist jobs and activities for monitor recovery

* admin/plugin: lease one detector worker per job type

* admin/ui: show worker load from plugin heartbeats

* admin/plugin: skip stale workers for detector and executor picks

* plugin/worker: add plugin worker command and stream runtime scaffold

* plugin/worker: implement vacuum detect and execute handlers

* admin/plugin: document external vacuum plugin worker starter

* command: update plugin.worker help to reflect implemented flow

* command/admin: drop legacy Plugin V2 label

* plugin/worker: validate vacuum job type and respect min interval

* plugin/worker: test no-op detect when min interval not elapsed

* command/admin: document plugin.worker external process

* plugin/worker: advertise configured concurrency in hello

* command/plugin.worker: add jobType handler selection

* command/plugin.worker: test handler selection by job type

* command/plugin.worker: persist worker id in workingDir

* admin/plugin: document plugin.worker jobType and workingDir flags

* plugin/worker: support cancel request for in-flight work

* plugin/worker: test cancel request acknowledgements

* command/plugin.worker: document workingDir and jobType behavior

* plugin/worker: emit executor activity events for monitor

* plugin/worker: test executor activity builder

* admin/plugin: send last successful run in detection request

* admin/plugin: send cancel request when detect or execute context ends

* admin/plugin: document worker cancel request responsibility

* admin/handlers: expose plugin scheduler states API in no-auth mode

* admin/handlers: test plugin scheduler states route registration

* admin/plugin: keep worker id on worker-generated activity records

* admin/plugin: test worker id propagation in monitor activities

* admin/dash: always initialize plugin service

* command/admin: remove plugin enable flags and default to enabled

* admin/dash: drop pluginEnabled constructor parameter

* admin/plugin UI: stop checking plugin enabled state

* admin/plugin: remove docs for plugin enable flags

* admin/dash: remove unused plugin enabled check method

* admin/dash: fallback to in-memory plugin init when dataDir fails

* admin/plugin API: expose worker gRPC port in status

* command/plugin.worker: resolve admin gRPC port via plugin status

* split plugin UI into overview/configuration/monitoring pages

* Update layout_templ.go

* add volume_balance plugin worker handler

* wire plugin.worker CLI for volume_balance job type

* add erasure_coding plugin worker handler

* wire plugin.worker CLI for erasure_coding job type

* support multi-job handlers in plugin worker runtime

* allow plugin.worker jobType as comma-separated list

* admin/plugin UI: rename to Workers and simplify config view

* plugin worker: queue detection requests instead of capacity reject

* Update plugin_worker.go

* plugin volume_balance: remove force_move/timeout from worker config UI

* plugin erasure_coding: enforce local working dir and cleanup

* admin/plugin UI: rename admin settings to job scheduling

* admin/plugin UI: persist and robustly render detection results

* admin/plugin: record and return detection trace metadata

* admin/plugin UI: show detection process and decision trace

* plugin: surface detector decision trace as activities

* mini: start a plugin worker by default

* admin/plugin UI: split monitoring into detection and execution tabs

* plugin worker: emit detection decision trace for EC and balance

* admin workers UI: split monitoring into detection and execution pages

* plugin scheduler: skip proposals for active assigned/running jobs

* admin workers UI: add job queue tab

* plugin worker: add dummy stress detector and executor job type

* admin workers UI: reorder tabs to detection queue execution

* admin workers UI: regenerate plugin template

* plugin defaults: include dummy stress and add stress tests

* plugin dummy stress: rotate detection selections across runs

* plugin scheduler: remove cross-run proposal dedupe

* plugin queue: track pending scheduled jobs

* plugin scheduler: wait for executor capacity before dispatch

* plugin scheduler: skip detection when waiting backlog is high

* plugin: add disk-backed job detail API and persistence

* admin ui: show plugin job detail modal from job id links

* plugin: generate unique job ids instead of reusing proposal ids

* plugin worker: emit heartbeats on work state changes

* plugin registry: round-robin tied executor and detector picks

* add temporary EC overnight stress runner

* plugin job details: persist and render EC execution plans

* ec volume details: color data and parity shard badges

* shard labels: keep parity ids numeric and color-only distinction

* admin: remove legacy maintenance UI routes and templates

* admin: remove dead maintenance endpoint helpers

* Update layout_templ.go

* remove dummy_stress worker and command support

* refactor plugin UI to job-type top tabs and sub-tabs

* migrate weed worker command to plugin runtime

* remove plugin.worker command and keep worker runtime with metrics

* update helm worker args for jobType and execution flags

* set plugin scheduling defaults to global 16 and per-worker 4

* stress: fix RPC context reuse and remove redundant variables in ec_stress_runner

* admin/plugin: fix lifecycle races, safe channel operations, and terminal state constants

* admin/dash: randomize job IDs and fix priority zero-value overwrite in plugin API

* admin/handlers: implement buffered rendering to prevent response corruption

* admin/plugin: implement debounced persistence flusher and optimize BuildJobDetail memory lookups

* admin/plugin: fix priority overwrite and implement bounded wait in scheduler reserve

* admin/plugin: implement atomic file writes and fix run record side effects

* admin/plugin: use P prefix for parity shard labels in execution plans

* admin/plugin: enable parallel execution for cancellation tests

* admin: refactor time.Time fields to pointers for better JSON omitempty support

* admin/plugin: implement pointer-safe time assignments and comparisons in plugin core

* admin/plugin: fix time assignment and sorting logic in plugin monitor after pointer refactor

* admin/plugin: update scheduler activity tracking to use time pointers

* admin/plugin: fix time-based run history trimming after pointer refactor

* admin/dash: fix JobSpec struct literal in plugin API after pointer refactor

* admin/view: add D/P prefixes to EC shard badges for UI consistency

* admin/plugin: use lifecycle-aware context for schema prefetching

* Update ec_volume_details_templ.go

* admin/stress: fix proposal sorting and log volume cleanup errors

* stress: refine ec stress runner with math/rand and collection name

- Added Collection field to VolumeEcShardsDeleteRequest for correct filename construction.
- Replaced crypto/rand with seeded math/rand PRNG for bulk payloads.
- Added documentation for EcMinAge zero-value behavior.
- Added logging for ignored errors in volume/shard deletion.

* admin: return internal server error for plugin store failures

Changed error status code from 400 Bad Request to 500 Internal Server Error for failures in GetPluginJobDetail to correctly reflect server-side errors.

* admin: implement safe channel sends and graceful shutdown sync

- Added sync.WaitGroup to Plugin struct to manage background goroutines.
- Implemented safeSendCh helper using recover() to prevent panics on closed channels.
- Ensured Shutdown() waits for all background operations to complete.

* admin: robustify plugin monitor with nil-safe time and record init

- Standardized nil-safe assignment for *time.Time pointers (CreatedAt, UpdatedAt, CompletedAt).
- Ensured persistJobDetailSnapshot initializes new records correctly if they don't exist on disk.
- Fixed debounced persistence to trigger immediate write on job completion.

* admin: improve scheduler shutdown behavior and logic guards

- Replaced brittle error string matching with explicit r.shutdownCh selection for shutdown detection.
- Removed redundant nil guard in buildScheduledJobSpec.
- Standardized WaitGroup usage for schedulerLoop.

* admin: implement deep copy for job parameters and atomic write fixes

- Implemented deepCopyGenericValue and used it in cloneTrackedJob to prevent shared state.
- Ensured atomicWriteFile creates parent directories before writing.

* admin: remove unreachable branch in shard classification

Removed an unreachable 'totalShards <= 0' check in classifyShardID as dataShards and parityShards are already guarded.

* admin: secure UI links and use canonical shard constants

- Added rel="noopener noreferrer" to external links for security.
- Replaced magic number 14 with erasure_coding.TotalShardsCount.
- Used renderEcShardBadge for missing shard list consistency.

* admin: stabilize plugin tests and fix regressions

- Composed a robust plugin_monitor_test.go to handle asynchronous persistence.
- Updated all time.Time literals to use timeToPtr helper.
- Added explicit Shutdown() calls in tests to synchronize with debounced writes.
- Fixed syntax errors and orphaned struct literals in tests.

* Potential fix for code scanning alert no. 278: Slice memory allocation with excessive size value

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* Potential fix for code scanning alert no. 283: Uncontrolled data used in path expression

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* admin: finalize refinements for error handling, scheduler, and race fixes

- Standardized HTTP 500 status codes for store failures in plugin_api.go.
- Tracked scheduled detection goroutines with sync.WaitGroup for safe shutdown.
- Fixed race condition in safeSendDetectionComplete by extracting channel under lock.
- Implemented deep copy for JobActivity details.
- Used defaultDirPerm constant in atomicWriteFile.

* test(ec): migrate admin dockertest to plugin APIs

* admin/plugin_api: fix RunPluginJobTypeAPI to return 500 for server-side detection/filter errors

* admin/plugin_api: fix ExecutePluginJobAPI to return 500 for job execution failures

* admin/plugin_api: limit parseProtoJSONBody request body to 1MB to prevent unbounded memory usage

* admin/plugin: consolidate regex to package-level validJobTypePattern; add char validation to sanitizeJobID

* admin/plugin: fix racy Shutdown channel close with sync.Once

* admin/plugin: track sendLoop and recv goroutines in WorkerStream with r.wg

* admin/plugin: document writeProtoFiles atomicity — .pb is source of truth, .json is human-readable only

* admin/plugin: extract activityLess helper to deduplicate nil-safe OccurredAt sort comparators

* test/ec: check http.NewRequest errors to prevent nil req panics

* test/ec: replace deprecated ioutil/math/rand, fix stale step comment 5.1→3.1

* plugin(ec): raise default detection and scheduling throughput limits

* topology: include empty disks in volume list and EC capacity fallback

* topology: remove hard 10-task cap for detection planning

* Update ec_volume_details_templ.go

* adjust default

* fix tests

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2026-02-18 13:42:41 -08:00
Chris Lu
5919f519fd fix: allow overriding Enterprise image name using Helm #8361 (#8363)
* fix: allow overriding Enterprise image name using Helm #8361

* refactor: flatten image name construction logic for better readability
2026-02-17 13:49:16 -08:00
Chris Lu
3c3a78d08e 4.13 2026-02-16 17:01:19 -08:00
Lukas
abd681b54b Fix service name in the worker deployment (seaweedfs#8314) (#8315)
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
2026-02-12 14:22:42 -08:00
Chris Lu
6bd6bba594 Fix inconsistent admin argument in worker pods (#8316)
* Fix inconsistent admin argument in worker pods

* Use seaweedfs.componentName for admin service naming
2026-02-12 09:50:53 -08:00
Chris Lu
af8273386d 4.12 2026-02-09 18:15:19 -08:00
Chris Lu
cb9e21cdc5 Normalize hashicorp raft peer ids (#8253)
* Normalize raft voter ids

* 4.11

* Update raft_hashicorp.go
2026-02-09 07:46:34 -08:00
Chris Lu
5a279c4d2f fmt 2026-02-08 21:19:00 -08:00
Chris Lu
0c89185291 4.10 2026-02-08 21:16:58 -08:00
Nikita
c44716f9af helm: add a trafficDistribution field to an s3 service (#8232)
helm: add trafficDistribution field to s3 service

Signed-off-by: nbykov0 <166552198+nbykov0@users.noreply.github.com>
2026-02-06 10:47:39 -08:00
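trafficDistribution is a standard Kubernetes Service spec field (beta since 1.31); a sketch of what the values entry might look like (placement within the chart's values is an assumption):

```
s3:
  # Prefers routing to endpoints in the same zone as the client
  trafficDistribution: PreferClose
```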
Yalın Doğu Şahin
ef3b5f7efa helm/add iceberg rest catalog ingress for s3 (#8205)
* helm: add Iceberg REST catalog support to S3 service

* helm: add Iceberg REST catalog support to S3 service

* add ingress for iceberg catalog endpoint

* helm: conditionally render ingressClassName in s3-iceberg-ingress.yaml

* helm: refactor s3-iceberg-ingress.yaml to use named template for paths

* helm: remove unused $serviceName variable in s3-iceberg-ingress.yaml

---------

Co-authored-by: yalin.sahin <yalin.sahin@tradition.ch>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
2026-02-04 12:00:59 -08:00
Chris Lu
5a5cc38692 4.09 2026-02-03 17:56:25 -08:00
Yalın Doğu Şahin
47fc9e771f helm: add Iceberg REST catalog support to S3 service (#8193)
* helm: add Iceberg REST catalog support to S3 service

* helm: add Iceberg REST catalog support to S3 service

---------

Co-authored-by: yalin.sahin <yalin.sahin@tradition.ch>
2026-02-03 13:44:52 -08:00
Chris Lu
ba8816e2e1 4.08 2026-02-02 20:36:03 -08:00
Emanuele Leopardi
51ef39fc76 Update Helm hook annotations for post-install and upgrade (#8150)
* Update Helm hook annotations for post-install and upgrade

I believe it makes sense to allow this job to run also after installation. Assuming weed shell is idempotent, and assuming someone wants to add a new bucket after the initial installation, it makes sense to trigger the job again.

* Add check for existing buckets before creation

* Enhances S3 bucket existence check

Improves the reliability of checking for existing S3 buckets in the post-install hook.

The previous `grep -w` command could lead to imprecise matches. This update extracts only the bucket name and performs an exact, whole-line match to ensure accurate detection of existing buckets. This prevents potential issues with redundant creation attempts or false negatives.

* Currently bucket creation is ignored if filer.s3.enabled is disabled

This commit enables bucket creation in both scenarios, i.e. if either filer.s3.enabled or s3.enabled is used.

---------

Co-authored-by: Emanuele <emanuele.leopardi@tset.com>
2026-01-28 13:08:20 -08:00
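The exact whole-line match described above could be sketched like this (the weed shell commands s3.bucket.list and s3.bucket.create exist, but the hook structure, bucket name, and list output parsing are assumptions):

```
# Hypothetical post-install hook command sketch.
command:
  - /bin/sh
  - -c
  - |
    if ! echo "s3.bucket.list" | weed shell | awk '{print $1}' | grep -Fxq "my-bucket"; then
      echo "s3.bucket.create -name my-bucket" | weed shell
    fi
```

grep -Fxq gives a fixed-string, whole-line, quiet match, which is the "exact, whole-line match" the commit message describes.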
Chris Lu
4f5f1f6be7 refactor(helm): Unified Naming Truncation and Bug Fixes (#8143)
* refactor(helm): add componentName helper for truncation

* fix(helm): unify ingress backend naming with truncation

* fix(helm): unify statefulset/deployment naming with truncation

* fix(helm): add missing labels to services for servicemonitor discovery

* chore(helm): secure secrets and add upgrade notes

* fix(helm): truncate context instead of suffix in componentName

* revert(docs): remove upgrade notes per feedback

* fix(helm): use componentName for COSI serviceAccountName

* helm: update master -ip to use component name for correct truncation

* helm: refactor masterServers helper to use truncated component names

* helm: update volume -ip to use component name and cleanup redundant printf

* helm: refine helpers with robustness check and updated docs
2026-01-27 17:45:16 -08:00
MorezMartin
20952aa514 Fix jwt error in admin UI (#8140)
* add jwt token in weed admin headers requests

* add jwt token to header for download

* :s/upload/download

* filer_signing.read despite of filer_signing key

* finalize filer_browser_handlers.go

* admin: add JWT authorization to file browser handlers

* security: fix typos in JWT read validation descriptions

* Move security.toml to example and secure keys

* security: address PR feedback on JWT enforcement and example keys

* security: refactor JWT logic and improve example keys readability

* Update docker/Dockerfile.local

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Chris Lu <chris.lu@gmail.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-27 17:27:02 -08:00
Chris Lu
c9c91ba568 Refactor Helm chart to use dynamic names for resources (#8142)
* Refactor Helm chart to use dynamic names for resources

* ensure name length
2026-01-27 12:52:06 -08:00
Chris Lu
b40551f960 helm: seaweedfs admin should not allow setting multiple admin servers 2026-01-24 13:43:08 -08:00
Yalın Doğu Şahin
d345752e3d Feature/volume ingress (#8084) 2026-01-22 06:48:29 -08:00
Chris Lu
bc853bdee5 4.07 2026-01-18 15:48:09 -08:00
Vladimir Shishkaryov
b49f3ce6d3 fix(chart): place backoffLimit correctly in resize hook (#8036)
Signed-off-by: Vladimir Shishkaryov <vladimir@jckls.com>
2026-01-15 12:45:49 -08:00
Sheya Bernstein
8740a087b9 fix: apply tpl function to all component extraEnvironmentVars (#8001) 2026-01-11 12:14:16 -08:00
Chris Lu
ce6e9be66b 4.06 2026-01-10 12:08:16 -08:00
Nicholas Boyd Isacsson
88e9e2c471 fix: Invalid volume mount conditional in filer template (#7992)
There is a mismatch in the conditionals for the definition and mounting of the `config-users` volume in the filer's template.

Volume definition:
```
        {{- if and .Values.filer.s3.enabled .Values.filer.s3.enableAuth }}
```
Mount:
```
            {{- if .Values.filer.s3.enableAuth }}
```

This leads to an invalid specification in the case where s3 is disabled but the enableAuth value is set to true, as it tries to mount in an undefined volume. I've fixed it here by adding the extra check to the latter conditional.
2026-01-09 12:10:40 -08:00
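Per the description above, the fix makes the mount conditional match the volume definition:

```
            {{- if and .Values.filer.s3.enabled .Values.filer.s3.enableAuth }}
```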
MorezMartin
629d9479a1 Fix jwt error in Filer pod (k8s) (#7960)
* Avoid JWT error on liveness probe

* fix jwt error

* address comments

* lint

---------

Co-authored-by: Chris Lu <chris.lu@gmail.com>
2026-01-04 12:05:31 -08:00
Sheya Bernstein
c0188db7cc chart: Set admin metrics port to http port (#7936)
* chart: Set admin metrics port to http port

* remove metrics reference
2026-01-02 12:15:33 -08:00
Chris Lu
87b71029f7 4.05 2026-01-01 20:39:22 -08:00
Sheya Bernstein
6f28cb7f87 helm: Support multiple hosts for S3 ingress (#7931) 2026-01-01 07:41:53 -08:00
Chris Lu
60707f99d8 customizable adminServer 2025-12-31 12:02:16 -08:00
Chris Lu
31a4f57cd9 Fix: Add -admin.grpc flag to worker for explicit gRPC port (#7926) (#7927)
* Fix: Add -admin.grpc flag to worker for explicit gRPC port configuration

* Fix(helm): Add adminGrpcServer to worker configuration

* Refactor: Support host:port.grpcPort address format, revert -admin.grpc flag

* Helm: Conditionally append grpcPort to worker admin address

* weed/admin: fix "send on closed channel" panic in worker gRPC server

Make unregisterWorker connection-aware to prevent closing channels
belonging to newer connections.

* weed/worker: improve gRPC client stability and logging

- Fix goroutine leak in reconnection logic
- Refactor reconnection loop to exit on success and prevent busy-waiting
- Add session identification and enhanced logging to client handlers
- Use constant for internal reset action and remove unused variables

* weed/worker: fix worker state initialization and add lifecycle logs

- Revert workerState to use running boolean correctly
- Prevent handleStart failing by checking running state instead of startTime
- Add more detailed logs for worker startup events
2025-12-31 11:55:09 -08:00
Sheya Bernstein
915a7d4a54 feat: Add probes to worker service (#7896)
* feat: Add probes to worker service

* feat: Add probes to worker service

* Merge branch 'master' into pr/7896

* refactor

---------

Co-authored-by: Chris Lu <chris.lu@gmail.com>
2025-12-27 13:40:05 -08:00
Sheya Bernstein
7f611f5d3a fix: Correct admin server port in Helm worker deployment (#7872)
The worker deployment was incorrectly passing the admin gRPC port (33646)
to the -admin flag. However, the SeaweedFS worker command automatically
calculates the gRPC port by adding 10000 to the HTTP port provided.

This caused workers to attempt connection to port 43646 (33646 + 10000)
instead of the correct gRPC port 33646 (23646 + 10000).

Changes:
- Update worker-deployment.yaml to use admin.port instead of admin.grpcPort
- Workers now correctly connect to admin HTTP port, allowing the binary
  to calculate the gRPC port automatically

Fixes workers failing with:
"dial tcp <admin-ip>:43646: connect: no route to host"

Related:
- Worker code: weed/pb/grpc_client_server.go:272 (grpcPort = port + 10000)
- Worker docs: weed/command/worker.go:36 (admin HTTP port + 10000)
2025-12-24 12:22:37 -08:00
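The port relationship described above, as a values sketch (numbers taken from the commit message; comments summarize the derivation, which the binary performs automatically):

```
admin:
  port: 23646       # HTTP port; pass this to the worker's -admin flag
  grpcPort: 33646   # derived by the binary as port + 10000; not for -admin
```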
Sheya Bernstein
911aca74f3 Support volume server ID in Helm chart (#7867)
helm: Support volume server ID
2025-12-24 10:52:40 -08:00
Chris Lu
88ed187c27 fix(worker): add metrics HTTP server and health checks for Kubernetes (#7860)
* feat(worker): add metrics HTTP server and debug profiling support

- Add -metricsPort flag to enable Prometheus metrics endpoint
- Add -metricsIp flag to configure metrics server bind address
- Implement /metrics endpoint for Prometheus-compatible metrics
- Implement /health endpoint for Kubernetes readiness/liveness probes
- Add -debug flag to enable pprof debugging server
- Add -debug.port flag to configure debug server port
- Fix stats package import naming conflict by using alias
- Update usage examples to show new flags

Fixes #7843

* feat(helm): add worker metrics and health check support

- Update worker readiness probe to use httpGet on /health endpoint
- Update worker liveness probe to use httpGet on /health endpoint
- Add metricsPort flag to worker command in deployment template
- Support both httpGet and tcpSocket probe types for backward compatibility
- Update values.yaml with health check configuration

This enables Kubernetes pod lifecycle management for worker components through
proper health checks on the new metrics HTTP endpoint.

* feat(mini): align all services to share single debug and metrics servers

- Disable S3's separate debug server in mini mode (port 6060 now shared by all)
- Add metrics server startup to embedded worker for health monitoring
- All services now share the single metrics port (9327) and single debug port (6060)
- Consistent pattern with master, filer, volume, webdav services

* fix(worker): fix variable shadowing in health check handler

- Rename http.ResponseWriter parameter from 'w' to 'rw' to avoid shadowing
  the outer 'w *worker.Worker' parameter
- Prevents potential bugs if future code tries to use worker state in handler
- Improves code clarity and follows Go best practices

* fix(worker): remove unused worker parameter in metrics server

- Change 'w *worker.Worker' parameter to '_' as it's not used
- Clarifies intent that parameter is intentionally unused
- Follows Go best practices and improves code clarity

* fix(helm): fix trailing backslash syntax errors in worker command

- Fix conditional backslash placement to prevent shell syntax errors
- Only add backslash when metricsPort OR extraArgs are present
- Prevents worker pod startup failures due to malformed command arguments
- Ensures proper shell command parsing regardless of configuration state
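Helm charts render through Go's text/template, so the dangling-backslash pitfall can be reproduced outside the chart. A simplified sketch (this template is illustrative, not the chart's actual worker command):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// The backslash that continues a shell line must appear only when
// another argument follows; emitting it unconditionally leaves a
// dangling continuation when metricsPort is absent, which is the
// malformed command this commit fixes.
const cmd = `weed worker -admin={{ .Admin }}{{ if .MetricsPort }} \
-metricsPort={{ .MetricsPort }}{{ end }}`

func render(data any) string {
	var buf bytes.Buffer
	template.Must(template.New("cmd").Parse(cmd)).Execute(&buf, data)
	return buf.String()
}

func main() {
	// With metricsPort set, the backslash and the extra line render.
	fmt.Println(render(map[string]any{"Admin": "seaweedfs-admin:23646", "MetricsPort": 9327}))
	// Without it, the command ends cleanly with no trailing backslash.
	fmt.Println(render(map[string]any{"Admin": "seaweedfs-admin:23646"}))
}
```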

* refactor(worker): use standard stats.StartMetricsServer for consistency

- Replace custom metrics server implementation with stats.StartMetricsServer
  to match pattern used in master, volume, s3, filer_sync components
- Simplifies code and improves maintainability
- Uses glog.Fatal for errors (consistent with other SeaweedFS components)
- Remove unused net/http and prometheus/promhttp imports
- Automatically provides /metrics and /health endpoints via standard implementation
2025-12-23 11:46:34 -08:00
Chris Lu
8d75290601 4.04 2025-12-22 23:46:30 -08:00
MorezMartin
22271358c6 Fix worker and admin ca (#7807)
* Fix Worker and Admin CA in helm chart

* Fix Worker and Admin CA in helm chart - add security.toml modification

* Fix Worker and Admin CA in helm chart - fix security.toml modification error

* Fix Worker and Admin CA in helm chart - fix errors in volume mounts

* Fix Worker and Admin CA in helm chart - address review comments

- Remove worker-cert from admin pod (principle of least privilege)
- Remove admin-cert from worker pod (principle of least privilege)
- Remove overly broad namespace wildcards from admin-cert dnsNames
- Remove overly broad namespace wildcards from worker-cert dnsNames

---------

Co-authored-by: chrislu <chris.lu@gmail.com>
2025-12-17 12:51:45 -08:00
Chris Lu
f5c666052e feat: add S3 bucket size and object count metrics (#7776)
* feat: add S3 bucket size and object count metrics

Adds periodic collection of bucket size metrics:
- SeaweedFS_s3_bucket_size_bytes: logical size (deduplicated across replicas)
- SeaweedFS_s3_bucket_physical_size_bytes: physical size (including replicas)
- SeaweedFS_s3_bucket_object_count: object count (deduplicated)

Collection runs every minute via a background goroutine that queries
the filer Statistics RPC for each bucket's collection.

Also adds Grafana dashboard panels for:
- S3 Bucket Size (logical vs physical)
- S3 Bucket Object Count

* address PR comments: fix bucket size metrics collection

1. Fix collectCollectionInfoFromMaster to use master VolumeList API
   - Now properly queries master for topology info
   - Uses WithMasterClient to get volume list from master
   - Correctly calculates logical vs physical size based on replication

2. Return error when filerClient is nil to trigger fallback
   - Changed from 'return nil, nil' to 'return nil, error'
   - Ensures fallback to filer stats is properly triggered

3. Implement pagination in listBucketNames
   - Added listBucketPageSize constant (1000)
   - Uses StartFromFileName for pagination
   - Continues fetching until fewer entries than limit returned

4. Handle NewReplicaPlacementFromByte error and prevent division by zero
   - Check error return from NewReplicaPlacementFromByte
   - Default to 1 copy if error occurs
   - Add explicit check for copyCount == 0
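The pagination scheme from point 3 can be sketched roughly as follows (the `list` callback is a simplified stand-in for the filer ListEntries RPC, and `listAllBuckets` is a hypothetical name):

```go
package main

import "fmt"

const listBucketPageSize = 1000

// listAllBuckets pages through entries using the last file name of
// each page as the cursor (StartFromFileName), stopping once a page
// comes back with fewer entries than the limit.
func listAllBuckets(list func(startFrom string, limit int) []string) []string {
	var all []string
	startFrom := ""
	for {
		page := list(startFrom, listBucketPageSize)
		all = append(all, page...)
		if len(page) < listBucketPageSize {
			return all // short page: no more entries
		}
		startFrom = page[len(page)-1] // cursor for the next page
	}
}

func main() {
	// Fake backing store with 2500 bucket names.
	names := make([]string, 2500)
	for i := range names {
		names[i] = fmt.Sprintf("bucket-%04d", i)
	}
	// list returns up to 'limit' names strictly after 'startFrom'.
	list := func(startFrom string, limit int) []string {
		start := 0
		if startFrom != "" {
			for i, n := range names {
				if n == startFrom {
					start = i + 1
					break
				}
			}
		}
		end := start + limit
		if end > len(names) {
			end = len(names)
		}
		return names[start:end]
	}
	fmt.Println(len(listAllBuckets(list))) // 2500
}
```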

* simplify bucket size metrics: remove filer fallback, align with quota enforcement

- Remove fallback to filer Statistics RPC
- Use only master topology for collection info (same as s3.bucket.quota.enforce)
- Updated comments to clarify this runs the same collection logic as quota enforcement
- Simplified code by removing collectBucketSizeFromFilerStats

* use s3a.option.Masters directly instead of querying filer

* address PR comments: fix dashboard overlaps and improve metrics collection

Grafana dashboard fixes:
- Fix overlapping panels 55 and 59 in grafana_seaweedfs.json (moved 59 to y=30)
- Fix grid collision in k8s dashboard (moved panel 72 to y=48)
- Aggregate bucket metrics with max() by (bucket) for multi-instance S3 gateways

Go code improvements:
- Add graceful shutdown support via context cancellation
- Use ticker instead of time.Sleep for better shutdown responsiveness
- Distinguish EOF from actual errors in stream handling

* improve bucket size metrics: multi-master failover and proper error handling

- Initial delay now respects context cancellation using select with time.After
- Use WithOneOfGrpcMasterClients for multi-master failover instead of hardcoding Masters[0]
- Properly propagate stream errors instead of just logging them (EOF vs real errors)

* improve bucket size metrics: distributed lock and volume ID deduplication

- Add distributed lock (LiveLock) so only one S3 instance collects metrics at a time
- Add IsLocked() method to LiveLock for checking lock status
- Fix deduplication: use volume ID tracking instead of dividing by copyCount
  - Previous approach gave wrong results if replicas were missing
  - Now tracks seen volume IDs and counts each volume only once
- Physical size still includes all replicas for accurate disk usage reporting
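The volume-ID deduplication can be sketched with simplified types (the real code walks the master topology; `dedupSizes` is a hypothetical name):

```go
package main

import "fmt"

type volume struct {
	id   uint32
	size uint64
}

// dedupSizes returns (logical, physical): physical sums every
// replica for accurate disk usage, while logical counts each volume
// ID only once by tracking seen IDs. Unlike dividing by copyCount,
// this stays correct even when some replicas are missing.
func dedupSizes(replicas []volume) (logical, physical uint64) {
	seen := make(map[uint32]bool)
	for _, v := range replicas {
		physical += v.size
		if !seen[v.id] {
			seen[v.id] = true
			logical += v.size
		}
	}
	return
}

func main() {
	// Volume 1 has both replicas; volume 2 has lost one replica.
	replicas := []volume{{1, 100}, {1, 100}, {2, 50}}
	l, p := dedupSizes(replicas)
	fmt.Println(l, p) // 150 250
}
```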

* rename lock to s3.leader

* simplify: remove StartBucketSizeMetricsCollection wrapper function

* fix data race: use atomic operations for LiveLock.isLocked field

- Change isLocked from bool to int32
- Use atomic.LoadInt32/StoreInt32 for all reads/writes
- Sync shared isLocked field in StartLongLivedLock goroutine
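The race fix amounts to replacing a plain bool with atomic loads and stores (a minimal sketch; SeaweedFS's LiveLock carries more state than shown here):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// LiveLock sketches the isLocked field as an int32 so concurrent
// goroutines can read and write it without a data race.
type LiveLock struct {
	isLocked int32 // 0 = unlocked, 1 = locked; atomic access only
}

func (l *LiveLock) setLocked(v bool) {
	var n int32
	if v {
		n = 1
	}
	atomic.StoreInt32(&l.isLocked, n)
}

// IsLocked reports lock status with an atomic load, mirroring the
// accessor added in this commit.
func (l *LiveLock) IsLocked() bool {
	return atomic.LoadInt32(&l.isLocked) == 1
}

func main() {
	lock := &LiveLock{}
	var wg sync.WaitGroup
	// Concurrent writers and readers: safe under the race detector.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			lock.setLocked(true)
			_ = lock.IsLocked()
		}()
	}
	wg.Wait()
	fmt.Println(lock.IsLocked()) // true
}
```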

* add nil check for topology info to prevent panic

* fix bucket metrics: use Ticker for consistent intervals, fix pagination logic

- Use time.Ticker instead of time.After for consistent interval execution
- Fix pagination: count all entries (not just directories) for proper termination
- Update lastFileName for all entries to prevent pagination issues

* address PR comments: remove redundant atomic store, propagate context

- Remove redundant atomic.StoreInt32 in StartLongLivedLock (AttemptToLock already sets it)
- Propagate context through metrics collection for proper cancellation on shutdown
  - collectAndUpdateBucketSizeMetrics now accepts ctx
  - collectCollectionInfoFromMaster uses ctx for VolumeList RPC
  - listBucketNames uses ctx for ListEntries RPC
2025-12-15 19:23:25 -08:00