seaweedFS/weed/plugin/worker/lifecycle/execution.go
Chris Lu d95df76bca feat: separate scheduler lanes for iceberg, lifecycle, and volume management (#8787)
* feat: introduce scheduler lanes for independent per-workload scheduling

Split the single plugin scheduler loop into independent per-lane
goroutines so that volume management, iceberg compaction, and lifecycle
operations never block each other.

Each lane has its own:
- Goroutine (laneSchedulerLoop)
- Wake channel for immediate scheduling
- Admin lock scope (e.g. "plugin scheduler:default")
- Configurable idle sleep duration
- Loop state tracking

Three lanes are defined:
- default: vacuum, volume_balance, ec_balance, erasure_coding, admin_script
- iceberg: iceberg_maintenance
- lifecycle: s3_lifecycle (new, handler coming in a later commit)

Job types are mapped to lanes via a hardcoded map with LaneDefault as
the fallback. The SchedulerJobTypeState and SchedulerStatus types now
include a Lane field for API consumers.
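The job-type-to-lane mapping with a default fallback can be sketched as follows. The lane names and job types come from the commit message; the function name `laneForJobType` and the exact map layout are illustrative, not the scheduler's actual API.

```go
package main

import "fmt"

const (
	LaneDefault   = "default"
	LaneIceberg   = "iceberg"
	LaneLifecycle = "lifecycle"
)

// jobTypeLaneMap mirrors the hardcoded mapping described above.
var jobTypeLaneMap = map[string]string{
	"vacuum":              LaneDefault,
	"volume_balance":      LaneDefault,
	"ec_balance":          LaneDefault,
	"erasure_coding":      LaneDefault,
	"admin_script":        LaneDefault,
	"iceberg_maintenance": LaneIceberg,
	"s3_lifecycle":        LaneLifecycle,
}

// laneForJobType returns the lane for a job type, falling back to
// LaneDefault for unknown types so new jobs never go unscheduled.
func laneForJobType(jobType string) string {
	if lane, ok := jobTypeLaneMap[jobType]; ok {
		return lane
	}
	return LaneDefault
}

func main() {
	fmt.Println(laneForJobType("s3_lifecycle"))    // lifecycle
	fmt.Println(laneForJobType("some_future_job")) // default
}
```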

* feat: per-lane execution reservation pools for resource isolation

Each scheduler lane now maintains its own execution reservation map
so that a busy volume lane cannot consume execution slots needed by
iceberg or lifecycle lanes. The per-lane pool is used by default when
dispatching jobs through the lane scheduler; the global pool remains
as a fallback for the public DispatchProposals API.
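The isolation property described above can be illustrated with a minimal per-lane reservation pool. The type and method names (`reservationPool`, `TryReserve`, `Release`) are hypothetical; they only show the shape of "each lane has its own slot counter, so exhausting one lane leaves the others untouched".

```go
package main

import (
	"fmt"
	"sync"
)

// reservationPool tracks execution slots for one lane.
type reservationPool struct {
	mu       sync.Mutex
	reserved int
	capacity int
}

// TryReserve claims a slot if one is free; it never blocks.
func (p *reservationPool) TryReserve() bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.reserved >= p.capacity {
		return false
	}
	p.reserved++
	return true
}

// Release returns a previously claimed slot to the pool.
func (p *reservationPool) Release() {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.reserved > 0 {
		p.reserved--
	}
}

func main() {
	// One independent pool per lane.
	pools := map[string]*reservationPool{
		"default":   {capacity: 4},
		"iceberg":   {capacity: 2},
		"lifecycle": {capacity: 2},
	}
	// Exhausting the default lane leaves the lifecycle lane untouched.
	for pools["default"].TryReserve() {
	}
	fmt.Println(pools["default"].TryReserve())   // false
	fmt.Println(pools["lifecycle"].TryReserve()) // true
}
```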

* feat: add per-lane scheduler status API and lane worker UI pages

- GET /api/plugin/lanes returns all lanes with status and job types
- GET /api/plugin/workers?lane=X filters workers by lane
- GET /api/plugin/scheduler-states?lane=X filters job types by lane
- GET /api/plugin/scheduler-status?lane=X returns lane-scoped status
- GET /plugin/lanes/{lane}/workers renders per-lane worker page
- SchedulerJobTypeState now includes a "lane" field

The lane worker pages show scheduler status, job type configuration,
and connected workers scoped to a single lane, with links back to
the main plugin overview.

* feat: add s3_lifecycle worker handler for object store lifecycle management

Implements a full plugin worker handler for S3 lifecycle management,
assigned to the new "lifecycle" scheduler lane.

Detection phase:
- Reads filer.conf to find buckets with TTL lifecycle rules
- Creates one job proposal per bucket with active lifecycle rules
- Supports bucket_filter wildcard pattern from admin config

Execution phase:
- Walks the bucket directory tree breadth-first
- Identifies expired objects by checking TtlSec + Crtime < now
- Deletes expired objects in configurable batches
- Reports progress with scanned/expired/error counts
- Supports dry_run mode for safe testing
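The expiry check in the execution phase (TtlSec + Crtime < now) can be sketched as a standalone predicate. The field names mirror the filer entry attributes named above, but this struct is illustrative rather than the actual `filer_pb` type.

```go
package main

import (
	"fmt"
	"time"
)

// attrs is a stand-in for the filer entry attributes consulted
// by the expiry check (creation time plus TTL from the rule).
type attrs struct {
	Crtime int64 // creation time, Unix seconds
	TtlSec int32 // TTL applied by the bucket's lifecycle rule
}

// isExpired reports whether an object's TTL window has elapsed:
// expired when Crtime + TtlSec < now, as described in the commit.
func isExpired(a attrs, now int64) bool {
	if a.TtlSec <= 0 {
		return false // no TTL rule applies to this entry
	}
	return a.Crtime+int64(a.TtlSec) < now
}

func main() {
	now := time.Now().Unix()
	fmt.Println(isExpired(attrs{Crtime: now - 7200, TtlSec: 3600}, now)) // true
	fmt.Println(isExpired(attrs{Crtime: now, TtlSec: 3600}, now))        // false
}
```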

Configurable via admin UI:
- batch_size: entries per filer listing page (default 1000)
- max_deletes_per_bucket: safety cap per run (default 10000)
- dry_run: detect without deleting
- delete_marker_cleanup: clean expired delete markers
- abort_mpu_days: abort stale multipart uploads

The handler integrates with the existing PutBucketLifecycle flow which
sets TtlSec on entries via filer.conf path rules.

* feat: add per-lane submenu items under Workers sidebar menu

Replace the single "Workers" sidebar link with a collapsible submenu
containing three lane entries:
- Default (volume management + admin scripts) -> /plugin
- Iceberg (table compaction) -> /plugin/lanes/iceberg/workers
- Lifecycle (S3 object expiration) -> /plugin/lanes/lifecycle/workers

The submenu auto-expands when on any /plugin page and highlights the
active lane. Icons match each lane's job type descriptor (server,
snowflake, hourglass).

* feat: scope plugin pages to their scheduler lane

The plugin overview, configuration, detection, queue, and execution
pages now filter workers, job types, scheduler states, and scheduler
status to only show data for their lane.

- Plugin() templ function accepts a lane parameter (default: "default")
- JavaScript appends ?lane= to /api/plugin/workers, /job-types,
  /scheduler-states, and /scheduler-status API calls
- GET /api/plugin/job-types now supports ?lane= filtering
- When ?job= is provided (e.g. ?job=iceberg_maintenance), the lane is
  auto-derived from the job type so the page scopes correctly

This ensures /plugin shows only default-lane workers and
/plugin/configuration?job=iceberg_maintenance scopes to the iceberg lane.

* fix: remove "Lane" from lane worker page titles and capitalize properly

"lifecycle Lane Workers" -> "Lifecycle Workers"
"iceberg Lane Workers" -> "Iceberg Workers"

* refactor: promote lane items to top-level sidebar menu entries

Move Default, Iceberg, and Lifecycle from a collapsible submenu to
direct top-level items under the WORKERS heading. Removes the
intermediate "Workers" parent link and collapse toggle.

* admin: unify plugin lane routes and handlers

* admin: filter plugin jobs and activities by lane

* admin: reuse plugin UI for worker lane pages

* fix: use ServerAddress.ToGrpcAddress() for filer connections in lifecycle handler

ClusterContext addresses use ServerAddress format (host:httpPort.grpcPort).
Convert to the actual gRPC address via ToGrpcAddress() before dialing,
and add a Ping verification after connecting.

Fixes: "dial tcp: lookup tcp/8888.18888: unknown port"

* fix: resolve ServerAddress gRPC port in iceberg and lifecycle filer connections

ClusterContext addresses use ServerAddress format (host:httpPort.grpcPort).
Both the iceberg and lifecycle handlers now detect the compound format
and extract the gRPC port via ToGrpcAddress() before dialing. Plain
host:port addresses (e.g. from tests) are passed through unchanged.

Fixes: "dial tcp: lookup tcp/8888.18888: unknown port"
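The compound-format handling from these two fixes can be sketched as below. The real code calls `ServerAddress.ToGrpcAddress()`; this hypothetical `toGrpcAddress` helper only illustrates the detection of `host:httpPort.grpcPort` versus plain `host:port`.

```go
package main

import (
	"fmt"
	"strings"
)

// toGrpcAddress converts a compound "host:httpPort.grpcPort" address
// (e.g. "filer1:8888.18888") to the dialable "host:grpcPort" form.
// Plain "host:port" addresses, e.g. from tests, pass through unchanged.
func toGrpcAddress(addr string) string {
	colon := strings.LastIndex(addr, ":")
	if colon < 0 {
		return addr // no port at all; leave as-is
	}
	host, ports := addr[:colon], addr[colon+1:]
	if dot := strings.Index(ports, "."); dot >= 0 {
		return host + ":" + ports[dot+1:] // keep only the gRPC port
	}
	return addr
}

func main() {
	fmt.Println(toGrpcAddress("filer1:8888.18888")) // filer1:18888
	fmt.Println(toGrpcAddress("filer1:18888"))      // filer1:18888
}
```

Without this conversion the dialer sees the whole compound suffix as a port name, producing the "lookup tcp/8888.18888: unknown port" error quoted above.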

* align url

* Potential fix for code scanning alert no. 335: Incorrect conversion between integer types

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* fix: address PR review findings across scheduler lanes and lifecycle handler

- Fix variable shadowing: rename loop var `w` to `worker` in
  GetPluginWorkersAPI to avoid shadowing the http.ResponseWriter param
- Fix stale GetSchedulerStatus: aggregate loop states across all lanes
  instead of reading never-updated legacy schedulerLoopState
- Scope InProcessJobs to lane in GetLaneSchedulerStatus
- Fix AbortMPUDays=0 treated as unset: change <= 0 to < 0 so 0 disables
- Propagate listing errors in lifecycle bucket walk instead of swallowing
- Implement DeleteMarkerCleanup: scan for S3 delete marker entries and
  remove them
- Implement AbortMPUDays: scan .uploads directory and remove stale
  multipart uploads older than the configured threshold
- Fix success determination: mark job failed when result.errors > 0
  even if no fatal error occurred
- Add regression test for jobTypeLaneMap to catch drift from handler
  registrations

* fix: guard against nil result in lifecycle completion and trim filer addresses

- Guard result dereference in completion summary: use local vars
  defaulting to 0 when result is nil to prevent panic
- Append trimmed filer addresses instead of originals so whitespace
  is not passed to the gRPC dialer

* fix: propagate ctx cancellation from deleteExpiredObjects and add config logging

- deleteExpiredObjects now returns a third error value when the context
  is canceled mid-batch; the caller stops processing further batches
  and returns the cancellation error to the job completion handler
- readBoolConfig and readInt64Config now log unexpected ConfigValue
  types at V(1) for debugging, consistent with readStringConfig

* fix: propagate errors in lifecycle cleanup helpers and use correct delete marker key

- cleanupDeleteMarkers: return error on ctx cancellation and SeaweedList
  failures instead of silently continuing
- abortIncompleteMPUs: log SeaweedList errors instead of discarding
- isDeleteMarker: use ExtDeleteMarkerKey ("Seaweed-X-Amz-Delete-Marker")
  instead of ExtLatestVersionIsDeleteMarker which is for the parent entry
- batchSize cap: use math.MaxInt instead of math.MaxInt32

* fix: propagate ctx cancellation from abortIncompleteMPUs and log unrecognized bool strings

- abortIncompleteMPUs now returns (aborted, errors, ctxErr) matching
  cleanupDeleteMarkers; caller stops on cancellation or listing failure
- readBoolConfig logs unrecognized string values before falling back

* fix: shared per-bucket budget across lifecycle phases and allow cleanup without expired objects

- Thread a shared remaining counter through TTL deletion, delete marker
  cleanup, and MPU abort so the total operations per bucket never exceed
  MaxDeletesPerBucket
- Remove early return when no TTL-expired objects found so delete marker
  cleanup and MPU abort still run
- Add NOTE on cleanupDeleteMarkers about version-safety limitation

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2026-03-26 19:28:13 -07:00


package lifecycle

import (
	"context"
	"fmt"
	"math"
	"path"
	"strings"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/plugin_pb"
	pluginworker "github.com/seaweedfs/seaweedfs/weed/plugin/worker"
	"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
)
type executionResult struct {
	objectsExpired     int64
	objectsScanned     int64
	deleteMarkersClean int64
	mpuAborted         int64
	errors             int64
}
// executeLifecycleForBucket processes lifecycle rules for a single bucket:
//  1. Reads filer.conf to get TTL rules for the bucket's collection
//  2. Walks the bucket directory tree to find expired objects
//  3. Deletes expired objects (unless dry run)
func (h *Handler) executeLifecycleForBucket(
	ctx context.Context,
	filerClient filer_pb.SeaweedFilerClient,
	config Config,
	bucket, bucketsPath string,
	sender pluginworker.ExecutionSender,
	jobID string,
) (*executionResult, error) {
	result := &executionResult{}

	// Load filer.conf to verify TTL rules still exist.
	fc, err := loadFilerConf(ctx, filerClient)
	if err != nil {
		return result, fmt.Errorf("load filer conf: %w", err)
	}
	collection := bucket
	ttlRules := fc.GetCollectionTtls(collection)
	if len(ttlRules) == 0 {
		glog.V(1).Infof("s3_lifecycle: bucket %s has no lifecycle rules, skipping", bucket)
		return result, nil
	}

	_ = sender.SendProgress(&plugin_pb.JobProgressUpdate{
		JobId:           jobID,
		JobType:         jobType,
		State:           plugin_pb.JobState_JOB_STATE_RUNNING,
		ProgressPercent: 10,
		Stage:           "scanning",
		Message:         fmt.Sprintf("scanning bucket %s for expired objects (%d rules)", bucket, len(ttlRules)),
	})

	// Shared budget across all phases so we don't exceed MaxDeletesPerBucket.
	remaining := config.MaxDeletesPerBucket

	// Find expired objects.
	expired, scanned, err := listExpiredObjects(ctx, filerClient, bucketsPath, bucket, remaining)
	result.objectsScanned = scanned
	if err != nil {
		return result, fmt.Errorf("list expired objects: %w", err)
	}
	if len(expired) > 0 {
		glog.V(1).Infof("s3_lifecycle: bucket %s: found %d expired objects out of %d scanned", bucket, len(expired), scanned)
	} else {
		glog.V(1).Infof("s3_lifecycle: bucket %s: scanned %d objects, none expired", bucket, scanned)
	}

	if config.DryRun && len(expired) > 0 {
		result.objectsExpired = int64(len(expired))
		_ = sender.SendProgress(&plugin_pb.JobProgressUpdate{
			JobId:           jobID,
			JobType:         jobType,
			State:           plugin_pb.JobState_JOB_STATE_RUNNING,
			ProgressPercent: 100,
			Stage:           "dry_run",
			Message:         fmt.Sprintf("dry run: would delete %d expired objects", len(expired)),
		})
		return result, nil
	}

	// Delete expired objects in batches.
	if len(expired) > 0 {
		_ = sender.SendProgress(&plugin_pb.JobProgressUpdate{
			JobId:           jobID,
			JobType:         jobType,
			State:           plugin_pb.JobState_JOB_STATE_RUNNING,
			ProgressPercent: 50,
			Stage:           "deleting",
			Message:         fmt.Sprintf("deleting %d expired objects", len(expired)),
		})
		var batchSize int
		if config.BatchSize <= 0 {
			batchSize = defaultBatchSize
		} else if config.BatchSize > math.MaxInt {
			batchSize = math.MaxInt
		} else {
			batchSize = int(config.BatchSize)
		}
		for i := 0; i < len(expired); i += batchSize {
			select {
			case <-ctx.Done():
				return result, ctx.Err()
			default:
			}
			end := i + batchSize
			if end > len(expired) {
				end = len(expired)
			}
			batch := expired[i:end]
			deleted, errs, batchErr := deleteExpiredObjects(ctx, filerClient, batch)
			result.objectsExpired += int64(deleted)
			result.errors += int64(errs)
			if batchErr != nil {
				return result, batchErr
			}
			progress := float64(end)/float64(len(expired))*50 + 50 // 50-100%
			_ = sender.SendProgress(&plugin_pb.JobProgressUpdate{
				JobId:           jobID,
				JobType:         jobType,
				State:           plugin_pb.JobState_JOB_STATE_RUNNING,
				ProgressPercent: progress,
				Stage:           "deleting",
				Message:         fmt.Sprintf("deleted %d/%d expired objects", result.objectsExpired, len(expired)),
			})
		}
		remaining -= result.objectsExpired + result.errors
		if remaining < 0 {
			remaining = 0
		}
	}

	// Delete marker cleanup.
	if config.DeleteMarkerCleanup && remaining > 0 {
		_ = sender.SendProgress(&plugin_pb.JobProgressUpdate{
			JobId: jobID, JobType: jobType,
			State: plugin_pb.JobState_JOB_STATE_RUNNING,
			Stage: "cleaning_delete_markers", Message: "cleaning expired delete markers",
		})
		cleaned, cleanErrs, cleanCtxErr := cleanupDeleteMarkers(ctx, filerClient, bucketsPath, bucket, remaining)
		result.deleteMarkersClean = int64(cleaned)
		result.errors += int64(cleanErrs)
		if cleanCtxErr != nil {
			return result, cleanCtxErr
		}
		remaining -= int64(cleaned + cleanErrs)
		if remaining < 0 {
			remaining = 0
		}
	}

	// Abort incomplete multipart uploads.
	if config.AbortMPUDays > 0 && remaining > 0 {
		_ = sender.SendProgress(&plugin_pb.JobProgressUpdate{
			JobId: jobID, JobType: jobType,
			State: plugin_pb.JobState_JOB_STATE_RUNNING,
			Stage: "aborting_mpus", Message: fmt.Sprintf("aborting multipart uploads older than %d days", config.AbortMPUDays),
		})
		aborted, abortErrs, abortCtxErr := abortIncompleteMPUs(ctx, filerClient, bucketsPath, bucket, config.AbortMPUDays, remaining)
		result.mpuAborted = int64(aborted)
		result.errors += int64(abortErrs)
		if abortCtxErr != nil {
			return result, abortCtxErr
		}
	}
	return result, nil
}
// cleanupDeleteMarkers scans the bucket for entries marked as delete markers
// (via the S3 versioning extended attribute) and removes them.
//
// NOTE: This currently removes delete markers unconditionally without checking
// whether prior non-expired versions exist. In versioned buckets, removing a
// delete marker can resurface an older version. A future enhancement should
// query version metadata before removal to match AWS ExpiredObjectDeleteMarker
// semantics (only remove when no non-current versions remain).
func cleanupDeleteMarkers(
	ctx context.Context,
	client filer_pb.SeaweedFilerClient,
	bucketsPath, bucket string,
	limit int64,
) (cleaned, errors int, ctxErr error) {
	bucketPath := path.Join(bucketsPath, bucket)
	dirsToProcess := []string{bucketPath}
	for len(dirsToProcess) > 0 {
		if ctx.Err() != nil {
			return cleaned, errors, ctx.Err()
		}
		dir := dirsToProcess[0]
		dirsToProcess = dirsToProcess[1:]
		listErr := filer_pb.SeaweedList(ctx, client, dir, "", func(entry *filer_pb.Entry, isLast bool) error {
			if entry.IsDirectory {
				// Skip .uploads directories.
				if entry.Name != ".uploads" {
					dirsToProcess = append(dirsToProcess, path.Join(dir, entry.Name))
				}
				return nil
			}
			if isDeleteMarker(entry) {
				if err := filer_pb.DoRemove(ctx, client, dir, entry.Name, true, false, false, false, nil); err != nil {
					glog.V(1).Infof("s3_lifecycle: failed to remove delete marker %s/%s: %v", dir, entry.Name, err)
					errors++
				} else {
					cleaned++
				}
			}
			if limit > 0 && int64(cleaned+errors) >= limit {
				return fmt.Errorf("limit reached")
			}
			return nil
		}, "", false, 10000)
		if listErr != nil && !strings.Contains(listErr.Error(), "limit reached") {
			return cleaned, errors, fmt.Errorf("list %s: %w", dir, listErr)
		}
		if limit > 0 && int64(cleaned+errors) >= limit {
			break
		}
	}
	return cleaned, errors, nil
}
// isDeleteMarker checks if an entry is an S3 delete marker.
func isDeleteMarker(entry *filer_pb.Entry) bool {
	if entry == nil || entry.Extended == nil {
		return false
	}
	return string(entry.Extended[s3_constants.ExtDeleteMarkerKey]) == "true"
}
// abortIncompleteMPUs scans the .uploads directory under a bucket and
// removes multipart upload entries older than the specified number of days.
func abortIncompleteMPUs(
	ctx context.Context,
	client filer_pb.SeaweedFilerClient,
	bucketsPath, bucket string,
	olderThanDays, limit int64,
) (aborted, errors int, ctxErr error) {
	uploadsDir := path.Join(bucketsPath, bucket, ".uploads")
	cutoff := time.Now().Add(-time.Duration(olderThanDays) * 24 * time.Hour)
	listErr := filer_pb.SeaweedList(ctx, client, uploadsDir, "", func(entry *filer_pb.Entry, isLast bool) error {
		if ctx.Err() != nil {
			return ctx.Err()
		}
		if !entry.IsDirectory {
			return nil
		}
		// Each subdirectory under .uploads is one multipart upload.
		// Check the directory creation time.
		if entry.Attributes != nil && entry.Attributes.Crtime > 0 {
			created := time.Unix(entry.Attributes.Crtime, 0)
			if created.Before(cutoff) {
				uploadPath := path.Join(uploadsDir, entry.Name)
				if err := filer_pb.DoRemove(ctx, client, uploadsDir, entry.Name, true, true, true, false, nil); err != nil {
					glog.V(1).Infof("s3_lifecycle: failed to abort MPU %s: %v", uploadPath, err)
					errors++
				} else {
					aborted++
				}
			}
		}
		if limit > 0 && int64(aborted+errors) >= limit {
			return fmt.Errorf("limit reached")
		}
		return nil
	}, "", false, 10000)
	if listErr != nil && !strings.Contains(listErr.Error(), "limit reached") {
		return aborted, errors, fmt.Errorf("list uploads in %s: %w", uploadsDir, listErr)
	}
	return aborted, errors, nil
}
// deleteExpiredObjects deletes a batch of expired objects from the filer.
// Returns a non-nil error when the context is canceled mid-batch.
func deleteExpiredObjects(
	ctx context.Context,
	client filer_pb.SeaweedFilerClient,
	objects []expiredObject,
) (deleted, errors int, ctxErr error) {
	for _, obj := range objects {
		if ctx.Err() != nil {
			return deleted, errors, ctx.Err()
		}
		err := filer_pb.DoRemove(ctx, client, obj.dir, obj.name, true, false, false, false, nil)
		if err != nil {
			glog.V(1).Infof("s3_lifecycle: failed to delete %s/%s: %v", obj.dir, obj.name, err)
			errors++
			continue
		}
		deleted++
	}
	return deleted, errors, nil
}
// nowUnix returns the current time as a Unix timestamp.
func nowUnix() int64 {
	return time.Now().Unix()
}