fix(helm): namespace app-specific global values under global.seaweedfs (#8700)

* fix(helm): namespace app-specific values under global.seaweedfs

Move all app-specific values from the global namespace to
global.seaweedfs.* to avoid polluting the shared .Values.global
namespace when the chart is used as a subchart.

Standard Helm conventions (global.imageRegistry, global.imagePullSecrets)
remain at the global level as they are designed to be shared across
subcharts.

Fixes seaweedfs/seaweedfs#8699

BREAKING CHANGE: global values have been restructured. Users must update
their values files to use the new paths:
- global.registry → global.imageRegistry
- global.repository → global.seaweedfs.image.repository
- global.imageName → global.seaweedfs.image.name
- global.<key> → global.seaweedfs.<key> (for all other app-specific values)

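Concretely, a values override would change as follows. This is an illustrative sketch based on the mapping above; the placeholder values and exact nesting of other app-specific keys are assumptions, not the chart's actual defaults:

```
# Before (old paths):
global:
  registry: "myregistry.example.com"
  repository: ""
  imageName: "chrislusf/seaweedfs"
  enableSecurity: true

# After (new paths):
global:
  imageRegistry: "myregistry.example.com"
  seaweedfs:
    image:
      repository: ""
      name: "chrislusf/seaweedfs"
    enableSecurity: true
```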
* fix(ci): update helm CI tests to use new global.seaweedfs.* value paths

Update all --set flags in helm_ci.yml to use the new namespaced
global.seaweedfs.* paths matching the values.yaml restructuring.

* fix(ci): install Claude Code via npm to avoid install.sh 403

The claude-code-action's built-in installer uses
`curl https://claude.ai/install.sh | bash` which can fail with 403.
Due to the pipe, bash exits 0 on empty input, masking the curl failure
and leaving the `claude` binary missing.

Work around this by installing Claude Code via npm before invoking the
action, and passing the executable path via path_to_claude_code_executable.

* revert: remove claude-code-review.yml changes from this PR

The claude-code-action OIDC token exchange validates that the workflow
file matches the version on the default branch. Modifying it in a PR
causes the review job to fail with "Workflow validation failed".

The Claude Code install fix will need to be applied directly to master
or in a separate PR.

* fix: update stale references to old global.* value paths

- admin-statefulset.yaml: fix fail message to reference
  global.seaweedfs.masterServer
- values.yaml: fix comment to reference image.name instead of imageName
- helm_ci.yml: fix diagnostic message to reference
  global.seaweedfs.enableSecurity

* feat(helm): add backward-compat shim for old global.* value paths

Add _compat.tpl with a seaweedfs.compat helper that detects old-style
global.* keys (e.g. global.enableSecurity, global.registry) and merges
them into the new global.seaweedfs.* namespace.

Since the old keys no longer have defaults in values.yaml, their
presence means the user explicitly provided them. The helper uses
in-place mutation via `set` so all templates see the merged values.

This ensures existing deployments using old value paths continue to
work without changes after upgrading.
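The shape of such a helper might look like the following. This is a sketch only: the helper name `seaweedfs.compat` comes from the commit, but the key list and implementation details are assumptions (`hasKey` and `set` are standard Sprig functions available in Helm templates):

```
{{/* _compat.tpl -- illustrative sketch, not the chart's actual helper.
     Copies an old-style global.* key into global.seaweedfs.*,
     mutating .Values in place via `set` so all templates see it. */}}
{{- define "seaweedfs.compat" -}}
{{- $g := .Values.global | default dict -}}
{{- if not (hasKey $g "seaweedfs") -}}
{{- $_ := set $g "seaweedfs" dict -}}
{{- end -}}
{{- if hasKey $g "enableSecurity" -}}
{{- $_ := set $g.seaweedfs "enableSecurity" $g.enableSecurity -}}
{{- end -}}
{{- end -}}
```

Because the old keys no longer have defaults, `hasKey` only fires when the user explicitly supplied the old path, which is exactly the case the shim needs to cover.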

* fix: update stale comment references in values.yaml

Update comments referencing global.enableSecurity and global.masterServer
to the new global.seaweedfs.* paths.

---------

Co-authored-by: Copilot <copilot@github.com>
Author: Chris Lu
Date: 2026-03-19 13:00:48 -07:00 (committed by GitHub)
Parent: 55bc363228
Commit: 5e76f55077
37 changed files with 288 additions and 190 deletions

@@ -64,15 +64,15 @@ jobs:
 echo "✓ All-in-one deployment renders correctly"
 echo "=== Testing with security enabled ==="
-helm template test $CHART_DIR --set global.enableSecurity=true > /tmp/security.yaml
+helm template test $CHART_DIR --set global.seaweedfs.enableSecurity=true > /tmp/security.yaml
 grep -q "security-config" /tmp/security.yaml
 echo "✓ Security configuration renders correctly"
 echo "=== Testing with monitoring enabled ==="
 helm template test $CHART_DIR \
-  --set global.monitoring.enabled=true \
-  --set global.monitoring.gatewayHost=prometheus \
-  --set global.monitoring.gatewayPort=9091 > /tmp/monitoring.yaml
+  --set global.seaweedfs.monitoring.enabled=true \
+  --set global.seaweedfs.monitoring.gatewayHost=prometheus \
+  --set global.seaweedfs.monitoring.gatewayPort=9091 > /tmp/monitoring.yaml
 echo "✓ Monitoring configuration renders correctly"
 echo "=== Testing with PVC storage ==="
@@ -124,7 +124,7 @@ jobs:
 # --- Normal mode: master + filer-client services vs helper-produced addresses ---
 helm template "$LONG_RELEASE" $CHART_DIR \
   --set s3.enabled=true \
-  --set global.createBuckets[0].name=test > /tmp/longname.yaml
+  --set global.seaweedfs.createBuckets[0].name=test > /tmp/longname.yaml
 # Extract Service names from metadata
 MASTER_SVC=$(awk '/kind: Service/{found=1} found && /^ *name:/{print $2; found=0}' /tmp/longname.yaml \
@@ -161,7 +161,7 @@ jobs:
 # --- All-in-one mode: all-in-one service vs both helper addresses ---
 helm template "$LONG_RELEASE" $CHART_DIR \
   --set allInOne.enabled=true \
-  --set global.createBuckets[0].name=test > /tmp/longname-aio.yaml
+  --set global.seaweedfs.createBuckets[0].name=test > /tmp/longname-aio.yaml
 AIO_SVC=$(awk '/kind: Service/{found=1} found && /^ *name:/{print $2; found=0}' /tmp/longname-aio.yaml \
   | grep -- '-all-in-one$')
@@ -183,11 +183,11 @@ jobs:
 # Render the three manifests that include seaweedfs.s3.tlsArgs:
 # filer-statefulset, s3-deployment, all-in-one-deployment
 helm template test $CHART_DIR \
-  --set global.enableSecurity=true \
+  --set global.seaweedfs.enableSecurity=true \
   --set filer.s3.enabled=true \
   --set s3.enabled=true > /tmp/security-s3.yaml
 helm template test $CHART_DIR \
-  --set global.enableSecurity=true \
+  --set global.seaweedfs.enableSecurity=true \
   --set allInOne.enabled=true \
   --set allInOne.s3.enabled=true > /tmp/security-aio.yaml
@@ -212,7 +212,7 @@ jobs:
 if errors:
     for e in errors:
         print(f"FAIL: {e}", file=sys.stderr)
-    print("Rendered with: global.enableSecurity=true, filer.s3.enabled=true, s3.enabled=true, allInOne.enabled=true", file=sys.stderr)
+    print("Rendered with: global.seaweedfs.enableSecurity=true, filer.s3.enabled=true, s3.enabled=true, allInOne.enabled=true", file=sys.stderr)
     sys.exit(1)
 print("✓ No blank lines in security+S3 command blocks")
 PYEOF