fix(helm): namespace app-specific global values under global.seaweedfs (#8700)

* fix(helm): namespace app-specific values under global.seaweedfs

Move all app-specific values from the global namespace to
global.seaweedfs.* to avoid polluting the shared .Values.global
namespace when the chart is used as a subchart.

Standard Helm conventions (global.imageRegistry, global.imagePullSecrets)
remain at the global level as they are designed to be shared across
subcharts.

Fixes seaweedfs/seaweedfs#8699

BREAKING CHANGE: global values have been restructured. Users must update
their values files to use the new paths:
- global.registry → global.imageRegistry
- global.repository → global.seaweedfs.image.repository
- global.imageName → global.seaweedfs.image.name
- global.<key> → global.seaweedfs.<key> (for all other app-specific values)
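
As an illustration, a values override written for the old layout migrates like this (key names taken from the list above; the registry and repository values are placeholders):

```yaml
# Before (old layout):
global:
  registry: "my-registry.example.com"
  repository: "myorg/seaweedfs"
  imageName: "seaweedfs"
  enableSecurity: true

# After (new layout):
global:
  imageRegistry: "my-registry.example.com"
  seaweedfs:
    image:
      repository: "myorg/seaweedfs"
      name: "seaweedfs"
    enableSecurity: true
```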

* fix(ci): update helm CI tests to use new global.seaweedfs.* value paths

Update all --set flags in helm_ci.yml to use the new namespaced
global.seaweedfs.* paths matching the values.yaml restructuring.

* fix(ci): install Claude Code via npm to avoid install.sh 403

The claude-code-action's built-in installer runs
`curl https://claude.ai/install.sh | bash`, and the curl step can fail
with an HTTP 403. Because a pipeline's exit status is that of its last
command, bash exits 0 on the resulting empty input, masking the curl
failure and leaving the `claude` binary missing.

Work around this by installing Claude Code via npm before invoking the
action, and passing the executable path via path_to_claude_code_executable.
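
The masking behaviour can be reproduced without curl at all; in this sketch `false` stands in for the failing download (not the actual CI step):

```shell
# `false` simulates a curl that fails and produces no output; bash reads
# the empty stream, runs nothing, and exits 0, so the pipeline as a
# whole reports success and the failure is masked.
false | bash
no_pipefail_status=$?

# Under `set -o pipefail` (a bash option), the pipeline instead
# propagates the failing command's status, so the broken install
# would be caught.
bash -c 'set -o pipefail; false | bash'
pipefail_status=$?

echo "without pipefail: $no_pipefail_status"
echo "with pipefail: $pipefail_status"
```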

* revert: remove claude-code-review.yml changes from this PR

The claude-code-action OIDC token exchange validates that the workflow
file matches the version on the default branch. Modifying it in a PR
causes the review job to fail with "Workflow validation failed".

The Claude Code install fix will need to be applied directly to master
or in a separate PR.

* fix: update stale references to old global.* value paths

- admin-statefulset.yaml: fix fail message to reference
  global.seaweedfs.masterServer
- values.yaml: fix comment to reference image.name instead of imageName
- helm_ci.yml: fix diagnostic message to reference
  global.seaweedfs.enableSecurity

* feat(helm): add backward-compat shim for old global.* value paths

Add _compat.tpl with a seaweedfs.compat helper that detects old-style
global.* keys (e.g. global.enableSecurity, global.registry) and merges
them into the new global.seaweedfs.* namespace.

Since the old keys no longer have defaults in values.yaml, their
presence means the user explicitly provided them. The helper uses
in-place mutation via `set` so all templates see the merged values.

This ensures existing deployments using old value paths continue to
work without changes after upgrading.
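
Concretely, a pre-upgrade override like the following keeps working unchanged (illustrative values; the shim copies them under global.seaweedfs.* at render time):

```yaml
# Old-style keys, explicitly provided by the user; values.yaml no longer
# defaults them, so their presence triggers the compat merge.
global:
  enableSecurity: true
  masterServer: "seaweedfs-master:9333"
```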

* fix: update stale comment references in values.yaml

Update comments referencing global.enableSecurity and global.masterServer
to the new global.seaweedfs.* paths.

---------

Co-authored-by: Copilot <copilot@github.com>
Author: Chris Lu
Date: 2026-03-19 13:00:48 -07:00 (committed by GitHub)
Parent: 55bc363228
Commit: 5e76f55077
37 changed files with 288 additions and 190 deletions


@@ -0,0 +1,59 @@
{{/*
Backward-compatibility shim for the global.* → global.seaweedfs.* migration.
When the chart is used as a subchart, .Values.global is shared with sibling
charts. To avoid namespace pollution, app-specific values were moved under
global.seaweedfs.* (and global.registry was renamed to global.imageRegistry).
If a user still passes the OLD key paths (e.g. --set global.enableSecurity=true),
those keys will no longer have defaults in values.yaml, so their mere presence in
.Values.global means the user explicitly provided them. This helper merges them
into global.seaweedfs.* so the rest of the templates see a single, canonical
location.
The helper mutates .Values.global.seaweedfs in-place via `set` and produces no
output. It is idempotent (safe to call more than once in the same render).
Usage: {{- include "seaweedfs.compat" . -}}
*/}}
{{- define "seaweedfs.compat" -}}
{{- $g := .Values.global -}}
{{- $sw := $g.seaweedfs | default dict -}}
{{/* --- image-related renames --- */}}
{{- if hasKey $g "registry" -}}
{{- $_ := set $g "imageRegistry" (default $g.imageRegistry $g.registry) -}}
{{- end -}}
{{- if hasKey $g "repository" -}}
{{- $img := $sw.image | default dict -}}
{{- $_ := set $img "repository" (default $img.repository $g.repository) -}}
{{- $_ := set $sw "image" $img -}}
{{- end -}}
{{- if hasKey $g "imageName" -}}
{{- $img := $sw.image | default dict -}}
{{- $_ := set $img "name" (default $img.name $g.imageName) -}}
{{- $_ := set $sw "image" $img -}}
{{- end -}}
{{/* --- scalar keys that moved 1:1 under global.seaweedfs --- */}}
{{- range $key := list "createClusterRole" "imagePullPolicy" "restartPolicy" "loggingLevel" "enableSecurity" "masterServer" "serviceAccountName" "automountServiceAccountToken" "enableReplication" "replicationPlacement" -}}
{{- if hasKey $g $key -}}
{{- $_ := set $sw $key (index $g $key) -}}
{{- end -}}
{{- end -}}
{{/* --- nested dict keys: deep-merge so partial overrides work --- */}}
{{- range $key := list "securityConfig" "certificates" "monitoring" "serviceAccountAnnotations" "extraEnvironmentVars" -}}
{{- if hasKey $g $key -}}
{{- $old := index $g $key | default dict -}}
{{- $new := index $sw $key | default dict -}}
{{- if and (kindIs "map" $old) (kindIs "map" $new) -}}
{{- $_ := set $sw $key (merge $old $new) -}}
{{- else -}}
{{- $_ := set $sw $key $old -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{- $_ := set $g "seaweedfs" $sw -}}
{{- end -}}


@@ -143,9 +143,9 @@ Inject extra environment vars in the format key:value, if populated
 {{/* Computes the container image name for all components (if they are not overridden) */}}
 {{- define "common.image" -}}
-{{- $registryName := default .Values.image.registry .Values.global.registry | toString -}}
-{{- $repositoryName := default .Values.image.repository .Values.global.repository | toString -}}
-{{- $name := .Values.global.imageName | toString -}}
+{{- $registryName := default .Values.image.registry .Values.global.imageRegistry | toString -}}
+{{- $repositoryName := default .Values.image.repository .Values.global.seaweedfs.image.repository | toString -}}
+{{- $name := .Values.global.seaweedfs.image.name | toString -}}
 {{- $tag := default .Chart.AppVersion .Values.image.tag | toString -}}
 {{- if .Values.image.repository -}}
 {{- $name = $repositoryName -}}
@@ -318,8 +318,8 @@ Generate master server argument value, using global.masterServer if set, otherwi
 Usage: {{ include "seaweedfs.masterServerArg" . }}
 */}}
 {{- define "seaweedfs.masterServerArg" -}}
-{{- if .Values.global.masterServer -}}
-{{- .Values.global.masterServer -}}
+{{- if .Values.global.seaweedfs.masterServer -}}
+{{- .Values.global.seaweedfs.masterServer -}}
 {{- else -}}
 {{- include "seaweedfs.masterServers" . -}}
 {{- end -}}
@@ -329,7 +329,7 @@ Usage: {{ include "seaweedfs.masterServerArg" . }}
 Create the name of the service account to use
 */}}
 {{- define "seaweedfs.serviceAccountName" -}}
-{{- .Values.global.serviceAccountName | default "seaweedfs" -}}
+{{- .Values.global.seaweedfs.serviceAccountName | default "seaweedfs" -}}
 {{- end -}}
 {{/* S3 TLS cert/key arguments, using custom secret if s3.tlsSecret is set */}}


@@ -1,4 +1,5 @@
-{{- if .Values.global.createClusterRole }}
+{{- include "seaweedfs.compat" . -}}
+{{- if .Values.global.seaweedfs.createClusterRole }}
 #hack for delete pod master after migration
 ---
 kind: ClusterRole


@@ -1,10 +1,11 @@
+{{- include "seaweedfs.compat" . -}}
 {{- /* Support bucket creation for both standalone filer.s3 and allInOne modes */}}
 {{- $createBuckets := list }}
 {{- $s3Enabled := false }}
 {{- $enableAuth := false }}
 {{- $existingConfigSecret := "" }}
 {{- $bucketsFolder := "/buckets" }}
-{{- $bucketEnvVars := merge (dict) (.Values.global.extraEnvironmentVars | default dict) }}
+{{- $bucketEnvVars := merge (dict) (.Values.global.seaweedfs.extraEnvironmentVars | default dict) }}
 {{- if .Values.allInOne.enabled }}
 {{- $bucketEnvVars = merge (.Values.allInOne.extraEnvironmentVars | default dict) $bucketEnvVars }}
 {{- else }}
@@ -68,7 +69,7 @@ spec:
 containers:
 - name: post-install-job
 image: {{ template "master.image" . }}
-imagePullPolicy: {{ $.Values.global.imagePullPolicy | default "IfNotPresent" }}
+imagePullPolicy: {{ $.Values.global.seaweedfs.imagePullPolicy | default "IfNotPresent" }}
 env:
 - name: WEED_CLUSTER_DEFAULT
 value: "sw"
@@ -183,7 +184,7 @@ spec:
 ports:
 - containerPort: {{ .Values.master.port }}
 name: swfs-master
-{{- if and .Values.global.monitoring.enabled .Values.master.metricsPort }}
+{{- if and .Values.global.seaweedfs.monitoring.enabled .Values.master.metricsPort }}
 - containerPort: {{ .Values.master.metricsPort }}
 name: metrics
 {{- end }}


@@ -1,4 +1,5 @@
-{{- if .Values.global.monitoring.enabled }}
+{{- include "seaweedfs.compat" . -}}
+{{- if .Values.global.seaweedfs.monitoring.enabled }}
 {{- $files := .Files.Glob "dashboards/*.json" }}
 {{- if $files }}
 {{- range $path, $file := $files }}


@@ -1,4 +1,5 @@
-{{- if .Values.global.enableSecurity }}
+{{- include "seaweedfs.compat" . -}}
+{{- if .Values.global.seaweedfs.enableSecurity }}
 apiVersion: v1
 kind: ConfigMap
 metadata:
@@ -21,14 +22,14 @@ data:
 security.toml: |-
 # this file is read by master, volume server, and filer
-{{- if .Values.global.securityConfig.jwtSigning.volumeWrite }}
+{{- if .Values.global.seaweedfs.securityConfig.jwtSigning.volumeWrite }}
 # the jwt signing key is read by master and volume server
 # a jwt expires in 10 seconds
 [jwt.signing]
 key = "{{ dig "jwt" "signing" "key" (randAlphaNum 10 | b64enc) $securityConfig }}"
 {{- end }}
-{{- if .Values.global.securityConfig.jwtSigning.volumeRead }}
+{{- if .Values.global.seaweedfs.securityConfig.jwtSigning.volumeRead }}
 # this jwt signing key is read by master and volume server, and it is used for read operations:
 # - the Master server generates the JWT, which can be used to read a certain file on a volume server
 # - the Volume server validates the JWT on reading
@@ -36,7 +37,7 @@ data:
 key = "{{ dig "jwt" "signing" "read" "key" (randAlphaNum 10 | b64enc) $securityConfig }}"
 {{- end }}
-{{- if .Values.global.securityConfig.jwtSigning.filerWrite }}
+{{- if .Values.global.seaweedfs.securityConfig.jwtSigning.filerWrite }}
 # If this JWT key is configured, Filer only accepts writes over HTTP if they are signed with this JWT:
 # - f.e. the S3 API Shim generates the JWT
 # - the Filer server validates the JWT on writing
@@ -45,7 +46,7 @@ data:
 key = "{{ dig "jwt" "filer_signing" "key" (randAlphaNum 10 | b64enc) $securityConfig }}"
 {{- end }}
-{{- if .Values.global.securityConfig.jwtSigning.filerRead }}
+{{- if .Values.global.seaweedfs.securityConfig.jwtSigning.filerRead }}
 # If this JWT key is configured, Filer only accepts reads over HTTP if they are signed with this JWT:
 # - f.e. the S3 API Shim generates the JWT
 # - the Filer server validates the JWT on reading


@@ -1,9 +1,10 @@
+{{- include "seaweedfs.compat" . -}}
 apiVersion: v1
 kind: ServiceAccount
 metadata:
 name: {{ include "seaweedfs.serviceAccountName" . }}
 namespace: {{ .Release.Namespace }}
-{{- with .Values.global.serviceAccountAnnotations }}
+{{- with .Values.global.seaweedfs.serviceAccountAnnotations }}
 annotations:
 {{- toYaml . | nindent 4 }}
 {{- end }}
@@ -12,4 +13,4 @@ metadata:
 helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
 app.kubernetes.io/managed-by: {{ .Release.Service }}
 app.kubernetes.io/instance: {{ .Release.Name }}
-automountServiceAccountToken: {{ .Values.global.automountServiceAccountToken }}
+automountServiceAccountToken: {{ .Values.global.seaweedfs.automountServiceAccountToken }}