* fix(helm): namespace app-specific values under global.seaweedfs
Move all app-specific values from the global namespace to
global.seaweedfs.* to avoid polluting the shared .Values.global
namespace when the chart is used as a subchart.
Standard Helm conventions (global.imageRegistry, global.imagePullSecrets)
remain at the global level as they are designed to be shared across
subcharts.
Fixes seaweedfs/seaweedfs#8699
BREAKING CHANGE: global values have been restructured. Users must update
their values files to use the new paths:
- global.registry → global.imageRegistry
- global.repository → global.seaweedfs.image.repository
- global.imageName → global.seaweedfs.image.name
- global.<key> → global.seaweedfs.<key> (for all other app-specific values)
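Illustratively, an override that used the old paths migrates roughly like this (a sketch; `enableSecurity` stands in for any other app-specific value):
```
# before
global:
  registry: my.registry.example/
  repository: mirror/
  imageName: chrislusf/seaweedfs
  enableSecurity: true

# after
global:
  imageRegistry: my.registry.example/
  seaweedfs:
    image:
      repository: mirror/
      name: chrislusf/seaweedfs
    enableSecurity: true
```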
* fix(ci): update helm CI tests to use new global.seaweedfs.* value paths
Update all --set flags in helm_ci.yml to use the new namespaced
global.seaweedfs.* paths matching the values.yaml restructuring.
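For example, a test step that used `--set global.enableSecurity=true` now reads along these lines (sketch; exact invocations in helm_ci.yml vary):
```
- name: helm template test
  run: |
    helm template k8s/charts/seaweedfs \
      --set global.seaweedfs.enableSecurity=true   # was: global.enableSecurity=true
```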
* fix(ci): install Claude Code via npm to avoid install.sh 403
The claude-code-action's built-in installer uses
`curl https://claude.ai/install.sh | bash`, which can fail with HTTP 403.
Because the script is piped, bash receives empty input and exits 0,
masking the curl failure and leaving the `claude` binary missing.
Work around this by installing Claude Code via npm before invoking the
action, and passing the executable path via path_to_claude_code_executable.
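A minimal sketch of the workflow change (step names, the version tag, and the installed binary path are assumptions; the input name is from this change):
```
- name: Install Claude Code
  run: npm install -g @anthropic-ai/claude-code

- uses: anthropics/claude-code-action@v1
  with:
    # path assumed; "which claude" on the runner can confirm it
    path_to_claude_code_executable: /usr/local/bin/claude
```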
* revert: remove claude-code-review.yml changes from this PR
The claude-code-action OIDC token exchange validates that the workflow
file matches the version on the default branch. Modifying it in a PR
causes the review job to fail with "Workflow validation failed".
The Claude Code install fix will need to be applied directly to master
or in a separate PR.
* fix: update stale references to old global.* value paths
- admin-statefulset.yaml: fix fail message to reference
global.seaweedfs.masterServer
- values.yaml: fix comment to reference image.name instead of imageName
- helm_ci.yml: fix diagnostic message to reference
global.seaweedfs.enableSecurity
* feat(helm): add backward-compat shim for old global.* value paths
Add _compat.tpl with a seaweedfs.compat helper that detects old-style
global.* keys (e.g. global.enableSecurity, global.registry) and merges
them into the new global.seaweedfs.* namespace.
Since the old keys no longer have defaults in values.yaml, their
presence means the user explicitly provided them. The helper uses
in-place mutation via `set` so all templates see the merged values.
This ensures existing deployments using old value paths continue to
work without changes after upgrading.
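A minimal sketch of the helper's shape, assuming sprig's `hasKey` and `set` (only one key shown; the real helper covers every migrated key):
```
{{- define "seaweedfs.compat" -}}
{{- $g := .Values.global -}}
{{- /* an old flat key present means the user set it explicitly; copy it in place */ -}}
{{- if hasKey $g "enableSecurity" -}}
{{- $_ := set $g.seaweedfs "enableSecurity" $g.enableSecurity -}}
{{- end -}}
{{- end -}}
```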
* fix: update stale comment references in values.yaml
Update comments referencing global.enableSecurity and global.masterServer
to the new global.seaweedfs.* paths.
---------
Co-authored-by: Copilot <copilot@github.com>
* fix(helm): trim whitespace before s3 TLS args to prevent command breakage (#8613)
When global.enableSecurity is enabled, the `{{ include }}` call for
s3 TLS args lacked the leading dash (`{{-`), producing an extra blank
line in the rendered shell command. This broke shell continuation and
caused the filer (and s3/all-in-one) to crash because arguments after
the blank line were silently dropped.
* ci(helm): assert no blank lines in security+S3 command blocks
Renders the chart with global.enableSecurity=true and S3 enabled for
normal mode (filer + s3 deployments) and all-in-one mode, then parses
every /bin/sh -ec command block and fails if any contains blank lines.
This catches the whitespace regression from #8613 where a missing {{-
dash on the seaweedfs.s3.tlsArgs include produced a blank line that
broke shell continuation.
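A rough sketch of the assertion step (the checker script here is hypothetical; the real job may be structured differently):
```
- name: assert no blank lines in rendered command blocks
  run: |
    helm template k8s/charts/seaweedfs \
      --set global.enableSecurity=true \
      --set s3.enabled=true > rendered.yaml
    # hypothetical helper: parses every /bin/sh -ec block, exits 1 on blank lines
    python3 ci/check_command_blocks.py rendered.yaml
```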
* ci(helm): enable S3 in all-in-one security render test
The s3.tlsArgs include is gated by allInOne.s3.enabled, so without
this flag the all-in-one command block wasn't actually exercising the
TLS args path.
* helm: add s3.tlsSecret to allow custom TLS certificate for S3 HTTPS endpoint
Allow users to specify an external Kubernetes TLS secret for the S3
HTTPS endpoint instead of using the internal self-signed client
certificate. This enables using publicly trusted certificates (e.g.
from Let's Encrypt) so S3 clients don't need to trust the internal CA.
The new s3.tlsSecret value is supported in the standalone S3 gateway,
filer with embedded S3, and all-in-one deployment templates.
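Illustrative values (the secret name is an example; it should reference an existing `kubernetes.io/tls` secret):
```
s3:
  enabled: true
  tlsSecret: seaweedfs-s3-tls   # e.g. issued by cert-manager from Let's Encrypt
```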
Closes #8581
* refactor: extract S3 TLS helpers to reduce duplication
Move repeated S3 TLS cert/key logic into shared helper templates
(seaweedfs.s3.tlsArgs, seaweedfs.s3.tlsVolumeMount, seaweedfs.s3.tlsVolume)
in _helpers.tpl, and use them across all three deployment templates.
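Call sites then collapse to includes along these lines (indentation values are illustrative):
```
{{- include "seaweedfs.s3.tlsArgs" . }}
{{- include "seaweedfs.s3.tlsVolumeMount" . | nindent 12 }}
{{- include "seaweedfs.s3.tlsVolume" . | nindent 8 }}
```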
* helm: add allInOne.s3.trafficDistribution support
Add the missing allInOne.s3.trafficDistribution branch to the
seaweedfs.trafficDistribution helper and wire it into the all-in-one
service template, mirroring the existing s3-service.yaml behavior.
PreferClose is auto-converted to PreferSameZone on k8s >=1.35.
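Illustrative values:
```
allInOne:
  s3:
    enabled: true
    trafficDistribution: PreferClose   # rendered as PreferSameZone on k8s >= 1.35
```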
* fix: scope S3 TLS mounts to S3-enabled pods and simplify trafficDistribution helper
- Wrap S3 TLS volume/volumeMount includes in allInOne.s3.enabled and
filer.s3.enabled guards so the custom TLS secret is only mounted
when S3 is actually enabled in that deployment mode.
- Refactor seaweedfs.trafficDistribution helper to accept an explicit
value+Capabilities dict instead of walking multiple .Values paths,
making each call site responsible for passing its own setting.
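A call site then presumably looks something like this (the dict key names are assumptions):
```
{{- include "seaweedfs.trafficDistribution"
      (dict "value" .Values.allInOne.s3.trafficDistribution
            "Capabilities" .Capabilities) }}
```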
* refactor(helm): add componentName helper for truncation
* fix(helm): unify ingress backend naming with truncation
* fix(helm): unify statefulset/deployment naming with truncation
* fix(helm): add missing labels to services for servicemonitor discovery
* chore(helm): secure secrets and add upgrade notes
* fix(helm): truncate context instead of suffix in componentName
* revert(docs): remove upgrade notes per feedback
* fix(helm): use componentName for COSI serviceAccountName
* helm: update master -ip to use component name for correct truncation
* helm: refactor masterServers helper to use truncated component names
* helm: update volume -ip to use component name and cleanup redundant printf
* helm: refine helpers with robustness check and updated docs
There is a mismatch between the conditionals for the definition and mounting of the `config-users` volume in the filer's template.
Volume definition:
```
{{- if and .Values.filer.s3.enabled .Values.filer.s3.enableAuth }}
```
Mount:
```
{{- if .Values.filer.s3.enableAuth }}
```
This leads to an invalid specification when s3 is disabled but enableAuth is set to true, as the pod tries to mount an undefined volume. I've fixed it here by adding the extra check to the latter conditional.
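The mount conditional now mirrors the volume definition:
```
{{- if and .Values.filer.s3.enabled .Values.filer.s3.enableAuth }}
```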
Fixes #7467
The -mserver argument line in volume-statefulset.yaml was missing a
trailing backslash, which prevented extraArgs from being passed to
the weed volume process.
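Schematically (master list elided):
```
# before: no continuation, so the extraArgs line never reached the weed process
-mserver=...
{{ .Values.volume.extraArgs }}

# after: flag renamed and continuation restored
-master=... \
{{ .Values.volume.extraArgs }}
```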
Also:
- Extracted master server list generation logic into shared helper
templates in _helpers.tpl for better maintainability
- Updated all occurrences of deprecated -mserver flag to -master
across docker-compose files, test files, and documentation
The allowEmptyFolder option is no longer functional because:
1. The code that used it was already commented out
2. Empty folder cleanup is now handled asynchronously by EmptyFolderCleaner
The CLI flags are kept for backward compatibility but marked as deprecated
and ignored. This removes:
- S3ApiServerOption.AllowEmptyFolder field
- The actual usage in s3api_object_handlers_list.go
- Helm chart values and template references
- References in test Makefiles and docker-compose files
Fix the templates to read scheme from httpGet.scheme instead of the
probe level, matching the structure defined in values.yaml.
This ensures that changing *.livenessProbe.httpGet.scheme or
*.readinessProbe.httpGet.scheme in values.yaml now correctly affects
the rendered manifests.
Affected components: master, filer, volume, s3, all-in-one
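Sketch of the corrected lookup for one component (master shown; the others follow the same pattern):
```
# values.yaml
master:
  livenessProbe:
    httpGet:
      scheme: HTTP

# template
scheme: {{ .Values.master.livenessProbe.httpGet.scheme }}
```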
Fixes #7615