* admin: add plugin runtime UI page and route wiring
* pb: add plugin gRPC contract and generated bindings
* admin/plugin: implement worker registry, runtime, monitoring, and config store
* admin/dash: wire plugin runtime and expose plugin workflow APIs
* command: add flags to enable plugin runtime
* admin: rename remaining plugin v2 wording to plugin
* admin/plugin: add detectable job type registry helper
* admin/plugin: add scheduled detection and dispatch orchestration
* admin/plugin: prefetch job type descriptors when workers connect
* admin/plugin: add known job type discovery API and UI
* admin/plugin: refresh design doc to match current implementation
* admin/plugin: enforce per-worker scheduler concurrency limits
* admin/plugin: use descriptor runtime defaults for scheduler policy
* admin/ui: auto-load first known plugin job type on page open
* admin/plugin: bootstrap persisted config from descriptor defaults
* admin/plugin: dedupe scheduled proposals by dedupe key
* admin/ui: add job type and state filters for plugin monitoring
* admin/ui: add per-job-type plugin activity summary
* admin/plugin: split descriptor read API from schema refresh
* admin/ui: keep plugin summary metrics global while tables are filtered
* admin/plugin: retry executor reservation before timing out
* admin/plugin: expose scheduler states for monitoring
* admin/ui: show per-job-type scheduler states in plugin monitor
* pb/plugin: rename protobuf package to plugin
* admin/plugin: rename pluginRuntime wiring to plugin
* admin/plugin: remove runtime naming from plugin APIs and UI
* admin/plugin: rename runtime files to plugin naming
* admin/plugin: persist jobs and activities for monitor recovery
* admin/plugin: lease one detector worker per job type
* admin/ui: show worker load from plugin heartbeats
* admin/plugin: skip stale workers for detector and executor picks
* plugin/worker: add plugin worker command and stream runtime scaffold
* plugin/worker: implement vacuum detect and execute handlers
* admin/plugin: document external vacuum plugin worker starter
* command: update plugin.worker help to reflect implemented flow
* command/admin: drop legacy Plugin V2 label
* plugin/worker: validate vacuum job type and respect min interval
* plugin/worker: test no-op detect when min interval not elapsed
* command/admin: document plugin.worker external process
* plugin/worker: advertise configured concurrency in hello
* command/plugin.worker: add jobType handler selection
* command/plugin.worker: test handler selection by job type
* command/plugin.worker: persist worker id in workingDir
* admin/plugin: document plugin.worker jobType and workingDir flags
* plugin/worker: support cancel request for in-flight work
* plugin/worker: test cancel request acknowledgements
* command/plugin.worker: document workingDir and jobType behavior
* plugin/worker: emit executor activity events for monitor
* plugin/worker: test executor activity builder
* admin/plugin: send last successful run in detection request
* admin/plugin: send cancel request when detect or execute context ends
* admin/plugin: document worker cancel request responsibility
* admin/handlers: expose plugin scheduler states API in no-auth mode
* admin/handlers: test plugin scheduler states route registration
* admin/plugin: keep worker id on worker-generated activity records
* admin/plugin: test worker id propagation in monitor activities
* admin/dash: always initialize plugin service
* command/admin: remove plugin enable flags and default to enabled
* admin/dash: drop pluginEnabled constructor parameter
* admin/plugin UI: stop checking plugin enabled state
* admin/plugin: remove docs for plugin enable flags
* admin/dash: remove unused plugin enabled check method
* admin/dash: fallback to in-memory plugin init when dataDir fails
* admin/plugin API: expose worker gRPC port in status
* command/plugin.worker: resolve admin gRPC port via plugin status
* split plugin UI into overview/configuration/monitoring pages
* Update layout_templ.go
* add volume_balance plugin worker handler
* wire plugin.worker CLI for volume_balance job type
* add erasure_coding plugin worker handler
* wire plugin.worker CLI for erasure_coding job type
* support multi-job handlers in plugin worker runtime
* allow plugin.worker jobType as comma-separated list
* admin/plugin UI: rename to Workers and simplify config view
* plugin worker: queue detection requests instead of capacity reject
* Update plugin_worker.go
* plugin volume_balance: remove force_move/timeout from worker config UI
* plugin erasure_coding: enforce local working dir and cleanup
* admin/plugin UI: rename admin settings to job scheduling
* admin/plugin UI: persist and robustly render detection results
* admin/plugin: record and return detection trace metadata
* admin/plugin UI: show detection process and decision trace
* plugin: surface detector decision trace as activities
* mini: start a plugin worker by default
* admin/plugin UI: split monitoring into detection and execution tabs
* plugin worker: emit detection decision trace for EC and balance
* admin workers UI: split monitoring into detection and execution pages
* plugin scheduler: skip proposals for active assigned/running jobs
* admin workers UI: add job queue tab
* plugin worker: add dummy stress detector and executor job type
* admin workers UI: reorder tabs to detection queue execution
* admin workers UI: regenerate plugin template
* plugin defaults: include dummy stress and add stress tests
* plugin dummy stress: rotate detection selections across runs
* plugin scheduler: remove cross-run proposal dedupe
* plugin queue: track pending scheduled jobs
* plugin scheduler: wait for executor capacity before dispatch
* plugin scheduler: skip detection when waiting backlog is high
* plugin: add disk-backed job detail API and persistence
* admin ui: show plugin job detail modal from job id links
* plugin: generate unique job ids instead of reusing proposal ids
* plugin worker: emit heartbeats on work state changes
* plugin registry: round-robin tied executor and detector picks
* add temporary EC overnight stress runner
* plugin job details: persist and render EC execution plans
* ec volume details: color data and parity shard badges
* shard labels: keep parity ids numeric and color-only distinction
* admin: remove legacy maintenance UI routes and templates
* admin: remove dead maintenance endpoint helpers
* Update layout_templ.go
* remove dummy_stress worker and command support
* refactor plugin UI to job-type top tabs and sub-tabs
* migrate weed worker command to plugin runtime
* remove plugin.worker command and keep worker runtime with metrics
* update helm worker args for jobType and execution flags (see the example values sketch after this list)
* set plugin scheduling defaults to global 16 and per-worker 4
* stress: fix RPC context reuse and remove redundant variables in ec_stress_runner
* admin/plugin: fix lifecycle races, safe channel operations, and terminal state constants
* admin/dash: randomize job IDs and fix priority zero-value overwrite in plugin API
* admin/handlers: implement buffered rendering to prevent response corruption
* admin/plugin: implement debounced persistence flusher and optimize BuildJobDetail memory lookups
* admin/plugin: fix priority overwrite and implement bounded wait in scheduler reserve
* admin/plugin: implement atomic file writes and fix run record side effects
* admin/plugin: use P prefix for parity shard labels in execution plans
* admin/plugin: enable parallel execution for cancellation tests
* admin: refactor time.Time fields to pointers for better JSON omitempty support
* admin/plugin: implement pointer-safe time assignments and comparisons in plugin core
* admin/plugin: fix time assignment and sorting logic in plugin monitor after pointer refactor
* admin/plugin: update scheduler activity tracking to use time pointers
* admin/plugin: fix time-based run history trimming after pointer refactor
* admin/dash: fix JobSpec struct literal in plugin API after pointer refactor
* admin/view: add D/P prefixes to EC shard badges for UI consistency
* admin/plugin: use lifecycle-aware context for schema prefetching
* Update ec_volume_details_templ.go
* admin/stress: fix proposal sorting and log volume cleanup errors
* stress: refine ec stress runner with math/rand and collection name
  - Added Collection field to VolumeEcShardsDeleteRequest for correct filename construction.
  - Replaced crypto/rand with seeded math/rand PRNG for bulk payloads.
  - Added documentation for EcMinAge zero-value behavior.
  - Added logging for ignored errors in volume/shard deletion.
* admin: return internal server error for plugin store failures
  Changed error status code from 400 Bad Request to 500 Internal Server Error for failures in GetPluginJobDetail to correctly reflect server-side errors.
* admin: implement safe channel sends and graceful shutdown sync
  - Added sync.WaitGroup to Plugin struct to manage background goroutines.
  - Implemented safeSendCh helper using recover() to prevent panics on closed channels.
  - Ensured Shutdown() waits for all background operations to complete.
* admin: robustify plugin monitor with nil-safe time and record init
  - Standardized nil-safe assignment for *time.Time pointers (CreatedAt, UpdatedAt, CompletedAt).
  - Ensured persistJobDetailSnapshot initializes new records correctly if they don't exist on disk.
  - Fixed debounced persistence to trigger immediate write on job completion.
* admin: improve scheduler shutdown behavior and logic guards
  - Replaced brittle error string matching with explicit r.shutdownCh selection for shutdown detection.
  - Removed redundant nil guard in buildScheduledJobSpec.
  - Standardized WaitGroup usage for schedulerLoop.
* admin: implement deep copy for job parameters and atomic write fixes
  - Implemented deepCopyGenericValue and used it in cloneTrackedJob to prevent shared state.
  - Ensured atomicWriteFile creates parent directories before writing.
* admin: remove unreachable branch in shard classification
  Removed an unreachable 'totalShards <= 0' check in classifyShardID as dataShards and parityShards are already guarded.
* admin: secure UI links and use canonical shard constants
  - Added rel="noopener noreferrer" to external links for security.
  - Replaced magic number 14 with erasure_coding.TotalShardsCount.
  - Used renderEcShardBadge for missing shard list consistency.
* admin: stabilize plugin tests and fix regressions
  - Composed a robust plugin_monitor_test.go to handle asynchronous persistence.
  - Updated all time.Time literals to use timeToPtr helper.
  - Added explicit Shutdown() calls in tests to synchronize with debounced writes.
  - Fixed syntax errors and orphaned struct literals in tests.
* Potential fix for code scanning alert no. 278: Slice memory allocation with excessive size value
  Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* Potential fix for code scanning alert no. 283: Uncontrolled data used in path expression
  Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
* admin: finalize refinements for error handling, scheduler, and race fixes
  - Standardized HTTP 500 status codes for store failures in plugin_api.go.
  - Tracked scheduled detection goroutines with sync.WaitGroup for safe shutdown.
  - Fixed race condition in safeSendDetectionComplete by extracting channel under lock.
  - Implemented deep copy for JobActivity details.
  - Used defaultDirPerm constant in atomicWriteFile.
* test(ec): migrate admin dockertest to plugin APIs
* admin/plugin_api: fix RunPluginJobTypeAPI to return 500 for server-side detection/filter errors
* admin/plugin_api: fix ExecutePluginJobAPI to return 500 for job execution failures
* admin/plugin_api: limit parseProtoJSONBody request body to 1MB to prevent unbounded memory usage
* admin/plugin: consolidate regex to package-level validJobTypePattern; add char validation to sanitizeJobID
* admin/plugin: fix racy Shutdown channel close with sync.Once
* admin/plugin: track sendLoop and recv goroutines in WorkerStream with r.wg
* admin/plugin: document writeProtoFiles atomicity — .pb is source of truth, .json is human-readable only
* admin/plugin: extract activityLess helper to deduplicate nil-safe OccurredAt sort comparators
* test/ec: check http.NewRequest errors to prevent nil req panics
* test/ec: replace deprecated ioutil/math/rand, fix stale step comment 5.1→3.1
* plugin(ec): raise default detection and scheduling throughput limits
* topology: include empty disks in volume list and EC capacity fallback
* topology: remove hard 10-task cap for detection planning
* Update ec_volume_details_templ.go
* adjust default
* fix tests

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
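The Helm worker Deployment template below reads its settings from the chart's `worker` values. As a quick orientation, here is a minimal sketch of a `values.yaml` worker section that exercises the new `jobType`, `maxDetect`, `maxExecute`, and `workingDir` flags; the keys mirror what the template references, but the concrete values (replica count, job list, limits, port) are illustrative assumptions, not the chart's shipped defaults.

```yaml
# Illustrative values.yaml sketch for the worker Deployment template below.
# Keys mirror what the template reads; the specific values are assumptions.
worker:
  enabled: true
  replicas: 1
  # comma-separated list of job types handled by this worker
  jobType: "vacuum,erasure_coding,volume_balance"
  maxDetect: 8          # detection concurrency advertised to the admin
  maxExecute: 4         # execution concurrency
  workingDir: "/data"   # also used as the mountPath for worker-data
  metricsPort: 9327     # optional; exposed as the "metrics" container port
  adminServer: ""       # empty: target the in-release admin service
  data:
    type: "emptyDir"    # hostPath | emptyDir | existingClaim
  logs:
    type: "emptyDir"    # hostPath | emptyDir | existingClaim
```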
{{- if .Values.worker.enabled }}
{{- if and (not .Values.worker.adminServer) (not .Values.admin.enabled) }}
{{- fail "worker.adminServer must be set if admin.enabled is false within the same release" -}}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "seaweedfs.fullname" . }}-worker
  namespace: {{ .Release.Namespace }}
  labels:
    app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
    helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/component: worker
  {{- if .Values.worker.annotations }}
  annotations:
    {{- toYaml .Values.worker.annotations | nindent 4 }}
  {{- end }}
spec:
  replicas: {{ .Values.worker.replicas }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
      app.kubernetes.io/component: worker
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ template "seaweedfs.name" . }}
        helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        app.kubernetes.io/component: worker
        {{ with .Values.podLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
        {{- with .Values.worker.podLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
      annotations:
        {{ with .Values.podAnnotations }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
        {{- with .Values.worker.podAnnotations }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
    spec:
      restartPolicy: {{ default .Values.global.restartPolicy .Values.worker.restartPolicy }}
      {{- if .Values.worker.affinity }}
      affinity:
        {{ tpl .Values.worker.affinity . | nindent 8 | trim }}
      {{- end }}
      {{- if .Values.worker.topologySpreadConstraints }}
      topologySpreadConstraints:
        {{ tpl .Values.worker.topologySpreadConstraints . | nindent 8 | trim }}
      {{- end }}
      {{- if .Values.worker.tolerations }}
      tolerations:
        {{ tpl .Values.worker.tolerations . | nindent 8 | trim }}
      {{- end }}
      {{- include "seaweedfs.imagePullSecrets" . | nindent 6 }}
      terminationGracePeriodSeconds: 60
      {{- if .Values.worker.priorityClassName }}
      priorityClassName: {{ .Values.worker.priorityClassName | quote }}
      {{- end }}
      enableServiceLinks: false
      {{- if .Values.worker.serviceAccountName }}
      serviceAccountName: {{ .Values.worker.serviceAccountName | quote }}
      {{- end }}
      {{- if .Values.worker.initContainers }}
      initContainers:
        {{ tpl .Values.worker.initContainers . | nindent 8 | trim }}
      {{- end }}
      {{- if .Values.worker.podSecurityContext.enabled }}
      securityContext: {{- omit .Values.worker.podSecurityContext "enabled" | toYaml | nindent 8 }}
      {{- end }}
      containers:
        - name: seaweedfs
          image: {{ template "worker.image" . }}
          imagePullPolicy: {{ default "IfNotPresent" .Values.global.imagePullPolicy }}
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: SEAWEEDFS_FULLNAME
              value: "{{ include "seaweedfs.fullname" . }}"
            {{- if .Values.worker.extraEnvironmentVars }}
            {{- range $key, $value := .Values.worker.extraEnvironmentVars }}
            - name: {{ $key }}
              {{- if kindIs "string" $value }}
              value: {{ tpl $value $ | quote }}
              {{- else }}
              valueFrom:
                {{ toYaml $value | nindent 16 | trim }}
              {{- end -}}
            {{- end }}
            {{- end }}
            {{- if .Values.global.extraEnvironmentVars }}
            {{- range $key, $value := .Values.global.extraEnvironmentVars }}
            - name: {{ $key }}
              {{- if kindIs "string" $value }}
              value: {{ tpl $value $ | quote }}
              {{- else }}
              valueFrom:
                {{ toYaml $value | nindent 16 | trim }}
              {{- end -}}
            {{- end }}
            {{- end }}
          command:
            - "/bin/sh"
            - "-ec"
            - |
              exec /usr/bin/weed \
              {{- if or (eq .Values.worker.logs.type "hostPath") (eq .Values.worker.logs.type "emptyDir") (eq .Values.worker.logs.type "existingClaim") }}
              -logdir=/logs \
              {{- else }}
              -logtostderr=true \
              {{- end }}
              {{- if .Values.worker.loggingOverrideLevel }}
              -v={{ .Values.worker.loggingOverrideLevel }} \
              {{- else }}
              -v={{ .Values.global.loggingLevel }} \
              {{- end }}
              worker \
              {{- if .Values.worker.adminServer }}
              -admin={{ .Values.worker.adminServer }} \
              {{- else }}
              -admin={{ template "seaweedfs.fullname" . }}-admin.{{ .Release.Namespace }}:{{ .Values.admin.port }}{{ if .Values.admin.grpcPort }}.{{ .Values.admin.grpcPort }}{{ end }} \
              {{- end }}
              -jobType={{ .Values.worker.jobType }} \
              -maxDetect={{ .Values.worker.maxDetect }} \
              -maxExecute={{ .Values.worker.maxExecute }} \
              -workingDir={{ .Values.worker.workingDir }}{{- if or .Values.worker.metricsPort .Values.worker.metricsIp .Values.worker.extraArgs }} \{{ end }}
              {{- if .Values.worker.metricsPort }}
              -metricsPort={{ .Values.worker.metricsPort }}{{- if or .Values.worker.metricsIp .Values.worker.extraArgs }} \{{ end }}
              {{- end }}
              {{- if .Values.worker.metricsIp }}
              -metricsIp={{ .Values.worker.metricsIp }}{{- if .Values.worker.extraArgs }} \{{ end }}
              {{- end }}
              {{- range $index, $arg := .Values.worker.extraArgs }}
              {{ $arg }}{{- if lt $index (sub (len $.Values.worker.extraArgs) 1) }} \{{ end }}
              {{- end }}
          volumeMounts:
            {{- if or (eq .Values.worker.data.type "hostPath") (eq .Values.worker.data.type "emptyDir") (eq .Values.worker.data.type "existingClaim") }}
            - name: worker-data
              mountPath: {{ .Values.worker.workingDir }}
            {{- end }}
            {{- if or (eq .Values.worker.logs.type "hostPath") (eq .Values.worker.logs.type "emptyDir") (eq .Values.worker.logs.type "existingClaim") }}
            - name: worker-logs
              mountPath: /logs
            {{- end }}
            {{- if .Values.global.enableSecurity }}
            - name: security-config
              readOnly: true
              mountPath: /etc/seaweedfs/security.toml
              subPath: security.toml
            - name: ca-cert
              readOnly: true
              mountPath: /usr/local/share/ca-certificates/ca/
            - name: master-cert
              readOnly: true
              mountPath: /usr/local/share/ca-certificates/master/
            - name: volume-cert
              readOnly: true
              mountPath: /usr/local/share/ca-certificates/volume/
            - name: filer-cert
              readOnly: true
              mountPath: /usr/local/share/ca-certificates/filer/
            - name: client-cert
              readOnly: true
              mountPath: /usr/local/share/ca-certificates/client/
            - name: worker-cert
              readOnly: true
              mountPath: /usr/local/share/ca-certificates/worker/
            {{- end }}
            {{ tpl .Values.worker.extraVolumeMounts . | nindent 12 | trim }}
          ports:
            {{- if .Values.worker.metricsPort }}
            - containerPort: {{ .Values.worker.metricsPort }}
              name: metrics
            {{- end }}
          {{- with .Values.worker.resources }}
          resources:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- if .Values.worker.livenessProbe.enabled }}
          livenessProbe:
            {{- if .Values.worker.livenessProbe.httpGet }}
            httpGet:
              path: {{ .Values.worker.livenessProbe.httpGet.path }}
              port: {{ .Values.worker.livenessProbe.httpGet.port }}
            {{- else if .Values.worker.livenessProbe.tcpSocket }}
            tcpSocket:
              port: {{ .Values.worker.livenessProbe.tcpSocket.port }}
            {{- end }}
            initialDelaySeconds: {{ .Values.worker.livenessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.worker.livenessProbe.periodSeconds }}
            successThreshold: {{ .Values.worker.livenessProbe.successThreshold }}
            failureThreshold: {{ .Values.worker.livenessProbe.failureThreshold }}
            timeoutSeconds: {{ .Values.worker.livenessProbe.timeoutSeconds }}
          {{- end }}
          {{- if .Values.worker.readinessProbe.enabled }}
          readinessProbe:
            {{- if .Values.worker.readinessProbe.httpGet }}
            httpGet:
              path: {{ .Values.worker.readinessProbe.httpGet.path }}
              port: {{ .Values.worker.readinessProbe.httpGet.port }}
            {{- else if .Values.worker.readinessProbe.tcpSocket }}
            tcpSocket:
              port: {{ .Values.worker.readinessProbe.tcpSocket.port }}
            {{- end }}
            initialDelaySeconds: {{ .Values.worker.readinessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.worker.readinessProbe.periodSeconds }}
            successThreshold: {{ .Values.worker.readinessProbe.successThreshold }}
            failureThreshold: {{ .Values.worker.readinessProbe.failureThreshold }}
            timeoutSeconds: {{ .Values.worker.readinessProbe.timeoutSeconds }}
          {{- end }}
          {{- if .Values.worker.containerSecurityContext.enabled }}
          securityContext: {{- omit .Values.worker.containerSecurityContext "enabled" | toYaml | nindent 12 }}
          {{- end }}
        {{- if .Values.worker.sidecars }}
        {{- include "common.tplvalues.render" (dict "value" .Values.worker.sidecars "context" $) | nindent 8 }}
        {{- end }}
      volumes:
        {{- if eq .Values.worker.data.type "hostPath" }}
        - name: worker-data
          hostPath:
            path: {{ .Values.worker.data.hostPathPrefix }}/seaweedfs-worker-data
            type: DirectoryOrCreate
        {{- end }}
        {{- if eq .Values.worker.data.type "emptyDir" }}
        - name: worker-data
          emptyDir: {}
        {{- end }}
        {{- if eq .Values.worker.data.type "existingClaim" }}
        - name: worker-data
          persistentVolumeClaim:
            claimName: {{ .Values.worker.data.claimName }}
        {{- end }}
        {{- if eq .Values.worker.logs.type "hostPath" }}
        - name: worker-logs
          hostPath:
            path: {{ .Values.worker.logs.hostPathPrefix }}/logs/seaweedfs/worker
            type: DirectoryOrCreate
        {{- end }}
        {{- if eq .Values.worker.logs.type "emptyDir" }}
        - name: worker-logs
          emptyDir: {}
        {{- end }}
        {{- if eq .Values.worker.logs.type "existingClaim" }}
        - name: worker-logs
          persistentVolumeClaim:
            claimName: {{ .Values.worker.logs.claimName }}
        {{- end }}
        {{- if .Values.global.enableSecurity }}
        - name: security-config
          configMap:
            name: {{ include "seaweedfs.fullname" . }}-security-config
        - name: ca-cert
          secret:
            secretName: {{ include "seaweedfs.fullname" . }}-ca-cert
        - name: master-cert
          secret:
            secretName: {{ include "seaweedfs.fullname" . }}-master-cert
        - name: volume-cert
          secret:
            secretName: {{ include "seaweedfs.fullname" . }}-volume-cert
        - name: filer-cert
          secret:
            secretName: {{ include "seaweedfs.fullname" . }}-filer-cert
        - name: client-cert
          secret:
            secretName: {{ include "seaweedfs.fullname" . }}-client-cert
        - name: worker-cert
          secret:
            secretName: {{ include "seaweedfs.fullname" . }}-worker-cert
        {{- end }}
      {{ tpl .Values.worker.extraVolumes . | indent 8 | trim }}
      {{- if .Values.worker.nodeSelector }}
      nodeSelector:
        {{ tpl .Values.worker.nodeSelector . | indent 8 | trim }}
      {{- end }}
{{- end }}
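To make the flag-continuation logic in the `command:` block concrete, here is a hypothetical render under the example values sketched before the template (release `my-release`, namespace `default`, `admin.enabled=true` with `admin.port=23646` and no `admin.grpcPort`, `logs.type=emptyDir`, global logging level 1, no `metricsIp` or `extraArgs`; the hostname and port are illustrative, not actual chart output). Note how each flag line keeps a trailing backslash except the last one actually emitted, which is what the inline `{{- if or ... }} \{{ end }}` guards arrange:

```yaml
# Hypothetical render of the command block; values are assumptions.
command:
  - "/bin/sh"
  - "-ec"
  - |
    exec /usr/bin/weed \
    -logdir=/logs \
    -v=1 \
    worker \
    -admin=my-release-seaweedfs-admin.default:23646 \
    -jobType=vacuum,erasure_coding,volume_balance \
    -maxDetect=8 \
    -maxExecute=4 \
    -workingDir=/data \
    -metricsPort=9327
```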