* helm: add s3.tlsSecret to allow custom TLS certificate for S3 HTTPS endpoint
Allow users to specify an external Kubernetes TLS secret for the S3
HTTPS endpoint instead of using the internal self-signed client
certificate. This enables using publicly trusted certificates (e.g.
from Let's Encrypt) so S3 clients don't need to trust the internal CA.
The new s3.tlsSecret value is supported in the standalone S3 gateway,
filer with embedded S3, and all-in-one deployment templates.
Closes #8581
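As a usage sketch (the secret name is illustrative, and the secret is assumed to be a standard `kubernetes.io/tls` secret created beforehand), the new value might be set like this:
```
# values.yaml — illustrative; assumes an existing kubernetes.io/tls secret, e.g.:
#   kubectl create secret tls my-s3-tls --cert=fullchain.pem --key=privkey.pem
s3:
  enabled: true
  tlsSecret: "my-s3-tls"   # use this cert instead of the internal self-signed one
```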
* refactor: extract S3 TLS helpers to reduce duplication
Move repeated S3 TLS cert/key logic into shared helper templates
(seaweedfs.s3.tlsArgs, seaweedfs.s3.tlsVolumeMount, seaweedfs.s3.tlsVolume)
in _helpers.tpl, and use them across all three deployment templates.
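One of the shared helpers could look roughly like the sketch below (the exact body, volume name, and guard in the chart may differ):
```
{{/* _helpers.tpl — illustrative sketch of one of the shared S3 TLS helpers */}}
{{- define "seaweedfs.s3.tlsVolume" -}}
{{- if .Values.s3.tlsSecret }}
- name: s3-tls
  secret:
    secretName: {{ .Values.s3.tlsSecret }}
{{- end }}
{{- end -}}
```
Each deployment template then renders the same block via `{{ include "seaweedfs.s3.tlsVolume" . }}` instead of repeating the cert/key wiring.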
* helm: add allInOne.s3.trafficDistribution support
Add the missing allInOne.s3.trafficDistribution branch to the
seaweedfs.trafficDistribution helper and wire it into the all-in-one
service template, mirroring the existing s3-service.yaml behavior.
PreferClose is auto-converted to PreferSameZone on k8s >=1.35.
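In values terms (a minimal sketch, assuming the all-in-one mode is enabled), the new branch is driven by:
```
allInOne:
  enabled: true
  s3:
    enabled: true
    trafficDistribution: PreferClose   # rendered as PreferSameZone on k8s >= 1.35
```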
* fix: scope S3 TLS mounts to S3-enabled pods and simplify trafficDistribution helper
- Wrap S3 TLS volume/volumeMount includes in allInOne.s3.enabled and
filer.s3.enabled guards so the custom TLS secret is only mounted
when S3 is actually enabled in that deployment mode.
- Refactor seaweedfs.trafficDistribution helper to accept an explicit
value+Capabilities dict instead of walking multiple .Values paths,
making each call site responsible for passing its own setting.
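A call site would then look something like this (the dict key names are illustrative):
```
{{- include "seaweedfs.trafficDistribution"
      (dict "value" .Values.s3.trafficDistribution
            "Capabilities" .Capabilities) }}
```
This keeps the helper free of hard-coded `.Values` paths, so adding a new component only requires a new call site, not a new branch in the helper.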
* refactor(helm): add componentName helper for truncation
* fix(helm): unify ingress backend naming with truncation
* fix(helm): unify statefulset/deployment naming with truncation
* fix(helm): add missing labels to services for servicemonitor discovery
* chore(helm): secure secrets and add upgrade notes
* fix(helm): truncate context instead of suffix in componentName
* revert(docs): remove upgrade notes per feedback
* fix(helm): use componentName for COSI serviceAccountName
* helm: update master -ip to use component name for correct truncation
* helm: refactor masterServers helper to use truncated component names
* helm: update volume -ip to use component name and cleanup redundant printf
* helm: refine helpers with robustness check and updated docs
There is a mismatch between the conditionals guarding the definition and the mounting of the `config-users` volume in the filer's template.
Volume definition:
```
{{- if and .Values.filer.s3.enabled .Values.filer.s3.enableAuth }}
```
Mount:
```
{{- if .Values.filer.s3.enableAuth }}
```
This produces an invalid specification when S3 is disabled but enableAuth is set to true: the template tries to mount a volume that was never defined. I've fixed it here by adding the extra check to the latter conditional.
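After the fix, the mount conditional matches the volume definition:
```
{{- if and .Values.filer.s3.enabled .Values.filer.s3.enableAuth }}
```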
Fixes #7467
The -mserver argument line in volume-statefulset.yaml was missing a
trailing backslash, which prevented extraArgs from being passed to
the weed volume process.
Also:
- Extracted master server list generation logic into shared helper
templates in _helpers.tpl for better maintainability
- Updated all occurrences of deprecated -mserver flag to -master
across docker-compose files, test files, and documentation
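The shape of the fix is sketched below (the surrounding args are illustrative, and the helper name follows the masterServers helper mentioned in the commits above); without the trailing backslash, the argument list ended at `-master` and the templated extraArgs were never appended:
```
-master={{ include "seaweedfs.masterServers" . }} \
{{- range .Values.volume.extraArgs }}
{{ . }} \
{{- end }}
```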
The allowEmptyFolder option is no longer functional because:
1. The code that used it was already commented out
2. Empty folder cleanup is now handled asynchronously by EmptyFolderCleaner
The CLI flags are kept for backward compatibility but marked as deprecated
and ignored. This removes:
- S3ApiServerOption.AllowEmptyFolder field
- The actual usage in s3api_object_handlers_list.go
- Helm chart values and template references
- References in test Makefiles and docker-compose files
Fix the templates to read scheme from httpGet.scheme instead of the
probe level, matching the structure defined in values.yaml.
This ensures that changing *.livenessProbe.httpGet.scheme or
*.readinessProbe.httpGet.scheme in values.yaml now correctly affects
the rendered manifests.
Affected components: master, filer, volume, s3, all-in-one
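After this change, the rendered manifests pick the scheme up from the nested field, matching the values.yaml structure (the probe path below is illustrative):
```
master:
  livenessProbe:
    httpGet:
      path: /cluster/status
      scheme: HTTPS   # read from httpGet.scheme, not from the probe level
```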
Fixes #7615