Commit Graph

13358 Commits

Author SHA1 Message Date
Lars Lehtonen
c1acf9e479 Prune unused functions from weed/admin/dash. (#8871)
* chore(weed/admin/dash): prune unused functions

* chore(weed/admin/dash): prune test-only function
2026-04-01 09:22:49 -07:00
qzh
4c72512ea2 fix(shell): avoid marking skipped or unplaced volumes as fixed (#8866)
* fix(s3api): fix AWS Signature V2 format and validation

* fix(s3api): Skip space after "AWS" prefix (+1 offset)

* test(s3api): add unit tests for Signature V2 authentication fix

* fix(s3api): simplify signature comparison

* add validation for the colon extraction in expectedAuth

* fix(shell): avoid marking skipped or unplaced volumes as fixed

---------

Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
2026-04-01 01:20:25 -07:00
Chris Lu
af68449a26 Process .ecj deletions during EC decode and vacuum decoded volume (#8863)
* Process .ecj deletions during EC decode and vacuum decoded volume (#8798)

When decoding EC volumes back to normal volumes, deletions recorded in
the .ecj journal were not being applied before computing the dat file
size or checking for live needles. This caused the decoded volume to
include data for deleted files and could produce false positives in the
all-deleted check.

- Call RebuildEcxFile before HasLiveNeedles/FindDatFileSize in
  VolumeEcShardsToVolume so .ecj deletions are merged into .ecx first
- Vacuum the decoded volume after mounting in ec.decode to compact out
  deleted needle data from the .dat file
- Add integration tests for decoding with non-empty .ecj files
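
To make the ordering concrete, here is a minimal Go sketch of the corrected decode sequence. The function names mirror RebuildEcxFile, HasLiveNeedles, and FindDatFileSize as named above, but the stubs and signatures are hypothetical, not the actual weed/storage code:

```go
package main

import "fmt"

// Hypothetical stand-ins for the storage-layer calls named above;
// the real signatures in weed/storage/erasure_coding differ.
func rebuildEcxFile(base string) error           { return nil } // merges .ecj deletions into .ecx
func hasLiveNeedles(base string) (bool, error)   { return true, nil }
func findDatFileSize(base string) (int64, error) { return 1024, nil }

// decodeToVolume sketches the corrected ordering: apply .ecj deletions
// before the all-deleted check or the .dat size computation.
func decodeToVolume(base string) error {
	if err := rebuildEcxFile(base); err != nil { // deletions merged first
		return fmt.Errorf("rebuild ecx: %w", err)
	}
	live, err := hasLiveNeedles(base) // now sees the merged deletions
	if err != nil {
		return err
	}
	if !live {
		return fmt.Errorf("volume %s: no live needles after deletions", base)
	}
	size, err := findDatFileSize(base) // size excludes deleted entries
	if err != nil {
		return err
	}
	fmt.Printf("decoded %s to %d bytes\n", base, size)
	return nil
}

func main() { _ = decodeToVolume("1") }
```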

* storage: add offline volume compaction helper

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* ec: compact decoded volumes before deleting shards

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* ec: address PR review comments

- Fall back to data directory for .ecx when idx directory lacks it
- Make compaction failure non-fatal during EC decode
- Remove misleading "buffer: 10%" from space check error message

* ec: collect .ecj from all shard locations during decode

Each server's .ecj only contains deletions for needles whose data
resides in shards held by that server. Previously, sources with no
new data shards to contribute were skipped entirely, losing their
.ecj deletion entries. Now .ecj is always appended from every shard
location so RebuildEcxFile sees the full set of deletions.

* ec: add integration tests for .ecj collection during decode

TestEcDecodePreservesDeletedNeedles: verifies that needles deleted
via VolumeEcBlobDelete are excluded from the decoded volume.

TestEcDecodeCollectsEcjFromPeer: regression test for the fix in
collectEcShards. Deletes a needle only on a peer server that holds
no new data shards, then verifies the deletion survives decode via
.ecj collection.

* ec: address review nits in decode and tests

- Remove double error wrapping in mountDecodedVolume
- Check VolumeUnmount error in peer ecj test
- Assert 404 specifically for deleted needles, fail on 5xx

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-04-01 01:15:26 -07:00
Lars Lehtonen
80d3085d54 Prune Query Engine (#8865)
* chore(weed/query/engine): prune unused functions

* chore(weed/query/engine): prune unused test-only function
2026-03-31 20:53:41 -07:00
Chris Lu
75a6a34528 dlm: resilient distributed locks via consistent hashing + backup replication (#8860)
* dlm: replace modulo hashing with consistent hash ring

Introduce HashRing with virtual nodes (CRC32-based consistent hashing)
to replace the modulo-based hashKeyToServer. When a filer node is
removed, only keys that hashed to that node are remapped to the next
server on the ring, leaving all other mappings stable. This is the
foundation for backup replication — the successor on the ring is
always the natural takeover node.
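
For readers unfamiliar with the technique, a self-contained sketch of a CRC32 ring with virtual nodes follows. The struct and method names are assumptions, not the real HashRing, but the successor-lookup property it relies on is the standard one:

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
)

type hashRing struct {
	points []uint32          // sorted virtual-node hashes
	owners map[uint32]string // virtual-node hash -> server
}

func newHashRing(servers []string, vnodes int) *hashRing {
	r := &hashRing{owners: map[uint32]string{}}
	for _, s := range servers {
		for i := 0; i < vnodes; i++ {
			h := crc32.ChecksumIEEE([]byte(fmt.Sprintf("%s#%d", s, i)))
			r.points = append(r.points, h)
			r.owners[h] = s
		}
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// getPrimaryAndBackup walks clockwise from the key's hash and returns the
// first two distinct servers: the backup is the natural takeover node.
func (r *hashRing) getPrimaryAndBackup(key string) (primary, backup string) {
	h := crc32.ChecksumIEEE([]byte(key))
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	for n := 0; n < len(r.points); n++ {
		s := r.owners[r.points[(i+n)%len(r.points)]] // wrap around the ring
		if primary == "" {
			primary = s
		} else if s != primary {
			return primary, s
		}
	}
	return primary, ""
}

func main() {
	ring := newHashRing([]string{"filer1:8888", "filer2:8888", "filer3:8888"}, 100)
	fmt.Println(ring.getPrimaryAndBackup("/locks/my-job"))
}
```

Removing a server only remaps keys that hashed to its virtual nodes; everything else stays put, which is what keeps lock disruption minimal.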

* dlm: add Generation and IsBackup fields to Lock

Lock now carries IsBackup (whether this node holds the lock as a backup
replica) and Generation (a monotonic fencing token that increments on
each fresh acquisition, stays the same on renewal). Add helper methods:
AllLocks, PromoteLock, DemoteLock, InsertBackupLock, RemoveLock, GetLock.
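
A plausible shape of the lock record after this change; field names follow the commit message, and the actual struct in the lock manager may differ:

```go
// Illustrative only — not the actual weed lock-manager struct.
type Lock struct {
	Token       string
	Owner       string
	ExpiredAtNs int64
	IsBackup    bool  // held as a backup replica for the ring successor
	Generation  int64 // fencing token: bumped on fresh acquire, unchanged on renew
}
```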

* dlm: add ReplicateLock RPC and generation/is_backup proto fields

Add generation field to LockResponse for fencing tokens.
Add generation and is_backup fields to Lock message.
Add ReplicateLock RPC for primary-to-backup lock replication.
Add ReplicateLockRequest/ReplicateLockResponse messages.

* dlm: add async backup replication to DistributedLockManager

Route lock/unlock via consistent hash ring's GetPrimaryAndBackup().
After a successful lock or unlock on the primary, asynchronously
replicate the operation to the backup server via ReplicateFunc
callback. Single-server deployments skip replication.

* dlm: add ReplicateLock handler and backup-aware topology changes

Add ReplicateLock gRPC handler for primary-to-backup replication.
Revise OnDlmChangeSnapshot to handle three cases on topology change:
- Promote backup locks when this node becomes primary
- Demote primary locks when this node becomes backup
- Transfer locks when this node is neither primary nor backup
Wire up SetupDlmReplication during filer server initialization.

* dlm: expose generation fencing token in lock client

LiveLock now captures the generation from LockResponse and exposes it
via Generation() method. Consumers can use this as a fencing token to
detect stale lock holders.

* dlm: update empty folder cleaner to use consistent hash ring

Replace local modulo-based hashKeyToServer with LockRing.GetPrimary()
which uses the shared consistent hash ring for folder ownership.

* dlm: add unit tests for consistent hash ring

Test basic operations, consistency on server removal (only keys from
removed server move), backup-is-successor property (backup becomes
new primary when primary is removed), and key distribution balance.

* dlm: add integration tests for lock replication failure scenarios

Test cases:
- Primary crash with backup promotion (backup has valid token)
- Backup crash with primary continuing
- Both primary and backup crash (lock lost, re-acquirable)
- Rolling restart across all nodes
- Generation fencing token increments on new acquisition
- Replication failure (primary still works independently)
- Unlock replicates deletion to backup
- Lock survives server addition (topology change)
- Consistent hashing minimal disruption (only removed server's keys move)

* dlm: address PR review findings

1. Causal replication ordering: Add per-lock sequence number (Seq) that
   increments on every mutation. Backup rejects incoming mutations with
   seq <= current seq, preventing stale async replications from
   overwriting newer state. Unlock replication also carries seq and is
   rejected if stale.

2. Demote-after-handoff: OnDlmChangeSnapshot now transfers the lock to
   the new primary first and only demotes to backup after a successful
   TransferLocks RPC. If the transfer fails, the lock stays as primary
   on this node.

3. SetSnapshot candidateServers leak: Replace the candidateServers map
   entirely instead of appending, so removed servers don't linger.

4. TransferLocks preserves Generation and Seq: InsertLock now accepts
   generation and seq parameters. After accepting a transferred lock,
   the receiving node re-replicates to its backup.

5. Rolling restart test: Add re-replication step after promotion and
   assert survivedCount > 0. Add TestDLM_StaleReplicationRejected.

6. Mixed-version upgrade note: Add comment on HashRing documenting that
   all filer nodes must be upgraded together.

* dlm: serve renewals locally during transfer window on node join

When a new node joins and steals hash ranges from surviving nodes,
there's a window between ring update and lock transfer where the
client gets redirected to a node that doesn't have the lock yet.

Fix: if the ring says primary != self but we still hold the lock
locally (non-backup, matching token), serve the renewal/unlock here
rather than redirecting. The lock will be transferred by
OnDlmChangeSnapshot, and subsequent requests will go to the new
primary once the transfer completes.

Add tests:
- TestDLM_NodeDropAndJoin_OwnershipDisruption: measures disruption
  when a node drops and a new one joins (14/100 surviving-node locks
  disrupted, all handled by transfer logic)
- TestDLM_RenewalDuringTransferWindow: verifies renewal succeeds on
  old primary during the transfer window

* dlm: master-managed lock ring with stabilization batching

The master now owns the lock ring membership. Instead of filers
independently reacting to individual ClusterNodeUpdate add/remove
events, the master:

1. Tracks filer membership in LockRingManager
2. Batches rapid changes with a 1-second stabilization timer
   (e.g., a node drop + join within 1 second → single ring update)
3. Broadcasts the complete ring snapshot atomically via the new
   LockRingUpdate message in KeepConnectedResponse

Filers receive the ring as a complete snapshot and apply it via
SetSnapshot, ensuring all filers converge to the same ring state
without intermediate churn.

This eliminates the double-churn problem where a rapid drop+join
would fire two separate ring mutations, each triggering lock
transfers and disrupting ownership on surviving nodes.

* dlm: track ring version, reject stale updates, remove dead code

SetSnapshot now takes a version parameter from the master. Stale
updates (version < current) are rejected, preventing reordered
messages from overwriting a newer ring state. Version 0 is always
accepted for bootstrap.

Remove AddServer/RemoveServer from LockRing — the ring is now
exclusively managed by the master via SetSnapshot. Remove the
candidateServers map that was only used by those methods.

* dlm: fix SelectLocks data race, advance generation on backup insert

- SelectLocks: change RLock to Lock since the function deletes map
  entries, which is a write operation and causes a data race under RLock.
- InsertBackupLock: advance nextGeneration to at least the incoming
  generation so that after failover promotion, new lock acquisitions
  get a generation strictly greater than any replicated lock.
- Bump replication failure log from V(1) to Warningf for production
  visibility.
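
The SelectLocks fix is the classic read-lock-with-writes mistake; a minimal illustration with a hypothetical table type, not the real code:

```go
import "sync"

type lockTable struct {
	mu    sync.RWMutex
	locks map[string]int64 // key -> expiry in unix nanos
}

// selectExpired must take the write lock: delete() mutates the map,
// which is a data race under RLock even though the method "reads".
func (t *lockTable) selectExpired(nowNs int64) []string {
	t.mu.Lock() // was RLock — the race the commit fixes
	defer t.mu.Unlock()
	var expired []string
	for key, exp := range t.locks {
		if exp < nowNs {
			expired = append(expired, key)
			delete(t.locks, key) // a write operation
		}
	}
	return expired
}
```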

* dlm: fix SetSnapshot race, test reliability, timer edge cases

- SetSnapshot: hold LockRing lock through both version update and
  Ring.SetServers() so they're atomic. Prevents a concurrent caller
  from seeing the new version but applying stale servers.
- Transfer window test: search for a key that actually moves primary
  when filer4 joins, instead of relying on a fixed key that may not.
- renewLock redirect: pass the existing token to the new primary
  instead of empty string, so redirected renewals work correctly.
- scheduleBroadcast: check timer.Stop() return value. If the timer
  already fired, the callback picks up latest state.
- FlushPending: only broadcast if timer.Stop() returns true (timer
  was still pending). If false, the callback is already running.
- Fix test comment: "idempotent" → "accepted, state-changing".

* dlm: use wall-clock nanoseconds for lock ring version

The lock ring version was an in-memory counter that reset to 0 on
master restart. A filer that had seen version 5 would reject version 1
from the restarted master.

Fix: use time.Now().UnixNano() as the version. This survives master
restarts without persistence — the restarted master produces a
version greater than any pre-restart value.
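
A sketch of the version source, assuming roughly synchronized wall clocks across master restarts:

```go
import "time"

// nextRingVersion survives master restarts without persistence: any
// post-restart UnixNano exceeds every pre-restart value, so filers
// never reject the restarted master's first snapshot as stale.
func nextRingVersion() int64 {
	return time.Now().UnixNano()
}
```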

* dlm: treat expired lock owners as missing

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* dlm: reject stale lock transfers

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* dlm: order replication by generation

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* dlm: bootstrap lock ring on reconnect

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-30 23:29:56 -07:00
Lars Lehtonen
387b146edd Prune wdclient Functions (#8855)
* chore(weed/wdclient): prune unused functions

* chore(weed/wdclient): prune test-only functions and associated tests

* chore(weed/wdclient): remove dead cursor field

The cursor field and its initialization are no longer used after
the removal of getLocationIndex.

---------

Co-authored-by: Chris Lu <chris.lu@gmail.com>
2026-03-30 18:53:10 -07:00
Chris Lu
9205140bd5 Use Unix sockets for gRPC in weed server mode (#8858)
* Use Unix sockets for gRPC between co-located services in weed server

Extends the Unix socket gRPC optimization (added for mini mode in #8856)
to `weed server`. Registers Unix socket paths for each service's gRPC
port before startup, so co-located services (master, volume, filer, S3)
communicate via Unix sockets instead of TCP loopback.

Only services actually started in this process get registered. The gRPC
port is resolved early (port + 10000 if unset) so the socket path is
known before any service dials another.

* Refactor gRPC Unix socket registration into a data-driven loop
2026-03-30 18:52:15 -07:00
Chris Lu
4705d8b82b Fix stale admin lock metric when lock expires and is reacquired (#8859)
* Fix stale admin lock metric when lock expires and is reacquired (#8857)

When a lock expired without an explicit unlock and a different client
acquired it, the old client's metric was never cleared, causing
multiple clients to appear as simultaneously holding the lock.

* Use DeleteLabelValues instead of Set(0) to remove stale metric series

Avoids cardinality explosion from accumulated stale series when
client names are dynamic.
2026-03-30 18:51:38 -07:00
Chris Lu
ced2236cc6 Adjust rename events metadata format (#8854)
* rename metadata events

* fix subscription filter to use NewEntry.Name for rename path matching

The server-side subscription filter constructed the new path using
OldEntry.Name instead of NewEntry.Name when checking if a rename
event's destination matches the subscriber's path prefix. This could
cause events to be incorrectly filtered when a rename changes the
file name.

* fix bucket events to handle rename of bucket directories

onBucketEvents only checked IsCreate and IsDelete. A bucket directory
rename via AtomicRenameEntry now emits a single rename event (both
OldEntry and NewEntry non-nil), which matched neither check. Handle
IsRename by deleting the old bucket and creating the new one.

* fix replicator to handle rename events across directory boundaries

Two issues fixed:

1. The replicator filtered events by checking if the key (old path)
   was under the source directory. Rename events now use the old path
   as key, so renames from outside into the watched directory were
   silently dropped. Now both old and new paths are checked, and
   cross-boundary renames are converted to create or delete.

2. NewParentPath was passed to the sink without remapping to the
   sink's target directory structure, causing the sink to write
   entries at the wrong location. Now NewParentPath is remapped
   alongside the key.

* fix filer sync to handle rename events crossing directory boundaries

The early directory-prefix filter only checked resp.Directory (old
parent). Rename events now carry the old parent as Directory, so
renames from outside the source path into it were dropped before
reaching the existing cross-boundary handling logic. Check both old
and new directories against sourcePath and excludePaths so the
downstream old-key/new-key logic can properly convert these to
create or delete operations.

* fix metadata event path matching

* fix metadata event consumers for rename targets

* Fix replication rename target keys

Logical rename events now reach replication sinks with distinct source and target paths.

Handle non-filer sinks as delete-plus-create on the translated target key, and make the rename fallback path create at the translated target key too.

Add focused tests covering non-filer renames, filer rename updates, and the fallback path.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Fix filer sync rename path scoping

Use directory-boundary matching instead of raw prefix checks when classifying source and target paths during filer sync.

Also apply excludePaths per side so renames across excluded boundaries downgrade cleanly to create/delete instead of being misclassified as in-scope updates.

Add focused tests for boundary matching and rename classification.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Fix replicator directory boundary checks

Use directory-boundary matching instead of raw prefix checks when deciding whether a source or target path is inside the watched tree or an excluded subtree.

This prevents sibling paths such as /foo and /foobar from being misclassified during rename handling, and preserves the earlier rename-target-key fix.

Add focused tests for boundary matching and rename classification across sibling/excluded directories.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Fix etc-remote rename-out handling

Use boundary-safe source/target directory membership when classifying metadata events under DirectoryEtcRemote.

This prevents rename-out events from being processed as config updates, while still treating them as removals where appropriate for the remote sync and remote gateway command paths.

Add focused tests for update/removal classification and sibling-prefix handling.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Defer rename events until commit

Queue logical rename metadata events during atomic and streaming renames and publish them only after the transaction commits successfully.

This prevents subscribers from seeing delete or logical rename events for operations that later fail during delete or commit.

Also serialize notification.Queue swaps in rename tests and add failure-path coverage.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Skip descendant rename target lookups

Avoid redundant target lookups during recursive directory renames once the destination subtree is known absent.

The recursive move path now inserts known-absent descendants directly, and the test harness exercises prefixed directory listing so the optimization is covered by a directory rename regression test.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Tighten rename review tests

Return filer_pb.ErrNotFound from the bucket tracking store test stub so it follows the FilerStore contract, and add a webhook filter case for same-name renames across parent directories.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix HardLinkId format verb in InsertEntryKnownAbsent error

HardLinkId is a byte slice. %d prints each byte as a decimal number
which is not useful for an identifier. Use %x to match the log line
two lines above.
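
The difference is easy to see on any byte slice:

```go
package main

import "fmt"

func main() {
	id := []byte{0xde, 0xad, 0xbe, 0xef}
	fmt.Printf("%d\n", id) // [222 173 190 239] — per-byte decimals, useless as an identifier
	fmt.Printf("%x\n", id) // deadbeef — matches the neighboring log line
}
```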

* only skip descendant target lookup when source and dest use same store

moveFolderSubEntries unconditionally passed skipTargetLookup=true for
every descendant. This is safe when all paths resolve to the same
underlying store, but with path-specific store configuration a child's
destination may map to a different backend that already holds an entry
at that path. Use FilerStoreWrapper.SameActualStore to check per-child
and fall back to the full CreateEntry path when stores differ.

* add nil and create edge-case tests for metadata event scope helpers

* extract pathIsEqualOrUnder into util.IsEqualOrUnder

Identical implementations existed in both replication/replicator.go and
command/filer_sync.go. Move to util.IsEqualOrUnder (alongside the
existing FullPath.IsUnder) and remove the duplicates.
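
The consolidated helper plausibly reduces to a boundary-safe prefix check; a sketch, since the actual util.IsEqualOrUnder may differ in detail:

```go
import "strings"

// IsEqualOrUnder reports whether path is dir itself or inside it,
// without the sibling-prefix bug: /foobar is not under /foo.
func IsEqualOrUnder(path, dir string) bool {
	dir = strings.TrimSuffix(dir, "/")
	return path == dir || strings.HasPrefix(path, dir+"/")
}
```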

* use MetadataEventTargetDirectory for new-side directory in filer sync

The new-side directory checks and sourceNewKey computation used
message.NewParentPath directly. If NewParentPath were empty (legacy
events, older filer versions during rolling upgrades), sourceNewKey
would be wrong (/filename instead of /dir/filename) and the
UpdateEntry parent path rewrite would panic on slice bounds.

Derive targetDir once from MetadataEventTargetDirectory, which falls
back to resp.Directory when NewParentPath is empty, and use it
consistently for all new-side checks and the sink parent path.
2026-03-30 18:25:11 -07:00
Chris Lu
2eaf98a7a2 Use Unix sockets for gRPC in mini mode (#8856)
* Use Unix sockets for gRPC between co-located services in mini mode

In `weed mini`, all services run in one process. Previously, inter-service
gRPC traffic (volume↔master, filer↔master, S3↔filer, worker↔admin, etc.)
went through TCP loopback. This adds a gRPC Unix socket registry in the pb
package: mini mode registers a socket path per gRPC port at startup, each
gRPC server additionally listens on its socket, and GrpcDial transparently
routes to the socket via WithContextDialer when a match is found.

Standalone commands (weed master, weed filer, etc.) are unaffected since
no sockets are registered. TCP listeners are kept for external clients.
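
The dial-side routing can be sketched as follows. lookupSocket stands in for the pb-package registry (a hypothetical name), while WithContextDialer is the real grpc-go option used for this kind of redirection:

```go
import (
	"context"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// lookupSocket is a hypothetical stand-in for the registry: it maps a
// gRPC TCP address to a registered Unix socket path, if any.
func lookupSocket(target string) (string, bool) { return "", false }

func dialPossiblyLocal(target string) (*grpc.ClientConn, error) {
	opts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
	if path, ok := lookupSocket(target); ok {
		opts = append(opts, grpc.WithContextDialer(
			func(ctx context.Context, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", path) // bypass TCP loopback
			}))
	}
	return grpc.Dial(target, opts...) // TCP path remains for external clients
}
```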

* Handle Serve error and clean up socket file in ServeGrpcOnLocalSocket

Log non-expected errors from grpcServer.Serve (ignoring
grpc.ErrServerStopped) and always remove the Unix socket file
when Serve returns, ensuring cleanup on Stop/GracefulStop.
2026-03-30 18:18:52 -07:00
Chris Lu
0ce4a857e6 test: add dcache stabilisation check after FUSE git reset
After git reset --hard on a FUSE mount, the kernel dcache can
transiently show the directory then drop it moments later. Add a
1-second stabilisation delay and re-verification in
resetToCommitWithRecovery and tryPullFromCommit so that recovery
retries if the entry vanishes in that window.
2026-03-30 17:26:43 -07:00
Chris Lu
d5068b3ee6 test: harden weed mini readiness checks 2026-03-30 16:21:38 -07:00
Chris Lu
7d426d2a56 Retry uploader on volume full (#8853)
* retry uploader on volume full

* drop unused upload retry helper
2026-03-30 13:32:31 -07:00
dependabot[bot]
5961e44cfa build(deps): bump cloud.google.com/go/kms from 1.25.0 to 1.26.0 (#8850)
Bumps [cloud.google.com/go/kms](https://github.com/googleapis/google-cloud-go) from 1.25.0 to 1.26.0.
- [Release notes](https://github.com/googleapis/google-cloud-go/releases)
- [Changelog](https://github.com/googleapis/google-cloud-go/blob/main/documentai/CHANGES.md)
- [Commits](https://github.com/googleapis/google-cloud-go/compare/dlp/v1.25.0...dlp/v1.26.0)

---
updated-dependencies:
- dependency-name: cloud.google.com/go/kms
  dependency-version: 1.26.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 13:04:38 -07:00
dependabot[bot]
f8a2383a02 build(deps): bump github.com/getsentry/sentry-go from 0.43.0 to 0.44.1 (#8851)
Bumps [github.com/getsentry/sentry-go](https://github.com/getsentry/sentry-go) from 0.43.0 to 0.44.1.
- [Release notes](https://github.com/getsentry/sentry-go/releases)
- [Changelog](https://github.com/getsentry/sentry-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-go/compare/v0.43.0...v0.44.1)

---
updated-dependencies:
- dependency-name: github.com/getsentry/sentry-go
  dependency-version: 0.44.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 13:04:28 -07:00
dependabot[bot]
977b652ea1 build(deps): bump go.etcd.io/etcd/client/v3 from 3.6.7 to 3.6.9 (#8852)
Bumps [go.etcd.io/etcd/client/v3](https://github.com/etcd-io/etcd) from 3.6.7 to 3.6.9.
- [Release notes](https://github.com/etcd-io/etcd/releases)
- [Commits](https://github.com/etcd-io/etcd/compare/v3.6.7...v3.6.9)

---
updated-dependencies:
- dependency-name: go.etcd.io/etcd/client/v3
  dependency-version: 3.6.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 13:04:15 -07:00
dependabot[bot]
77e30af5fb build(deps): bump github.com/aws/aws-sdk-go-v2/config from 1.32.9 to 1.32.13 (#8849)
build(deps): bump github.com/aws/aws-sdk-go-v2/config

Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.32.9 to 1.32.13.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.32.9...config/v1.32.13)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.32.13
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 13:03:02 -07:00
dependabot[bot]
e598df0e81 build(deps): bump github.com/xdg-go/scram from 1.1.2 to 1.2.0 (#8848)
Bumps [github.com/xdg-go/scram](https://github.com/xdg-go/scram) from 1.1.2 to 1.2.0.
- [Release notes](https://github.com/xdg-go/scram/releases)
- [Changelog](https://github.com/xdg-go/scram/blob/master/CHANGELOG.md)
- [Commits](https://github.com/xdg-go/scram/compare/v1.1.2...v1.2.0)

---
updated-dependencies:
- dependency-name: github.com/xdg-go/scram
  dependency-version: 1.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 13:02:44 -07:00
dependabot[bot]
c0626c92f0 build(deps): bump azure/setup-helm from 4 to 5 (#8847)
Bumps [azure/setup-helm](https://github.com/azure/setup-helm) from 4 to 5.
- [Release notes](https://github.com/azure/setup-helm/releases)
- [Changelog](https://github.com/Azure/setup-helm/blob/main/CHANGELOG.md)
- [Commits](https://github.com/azure/setup-helm/compare/v4...v5)

---
updated-dependencies:
- dependency-name: azure/setup-helm
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 13:02:19 -07:00
dependabot[bot]
098184d01e build(deps): bump actions/cache from 4 to 5 (#8846)
Bumps [actions/cache](https://github.com/actions/cache) from 4 to 5.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 13:02:04 -07:00
Chris Lu
652b29af22 docker: upgrade zlib in alpine runtime images 2026-03-30 12:14:21 -07:00
msementsov
4c13a9ce65 Client disconnects create context cancelled errors, 5xx errors and Filer lookup failures (#8845)
* Update stream.go

Client disconnects create context cancelled errors and Filer lookup failures

* s3api: handle canceled stream requests cleanly

* s3api: address canceled streaming review feedback

---------

Co-authored-by: Chris Lu <chris.lu@gmail.com>
2026-03-30 12:11:30 -07:00
dependabot[bot]
d2723b75ca build(deps): bump golang.org/x/image from 0.36.0 to 0.38.0 (#8844)
Bumps [golang.org/x/image](https://github.com/golang/image) from 0.36.0 to 0.38.0.
- [Commits](https://github.com/golang/image/compare/v0.36.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/image
  dependency-version: 0.38.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 09:50:21 -07:00
dependabot[bot]
39ab2ef6ad build(deps): bump golang.org/x/image from 0.36.0 to 0.38.0 in /test/kafka (#8843)
build(deps): bump golang.org/x/image in /test/kafka

Bumps [golang.org/x/image](https://github.com/golang/image) from 0.36.0 to 0.38.0.
- [Commits](https://github.com/golang/image/compare/v0.36.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/image
  dependency-version: 0.38.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 09:50:06 -07:00
Lars Lehtonen
5c5d377277 weed/s3api: prune test-only functions (#8840)
weed/s3api: prune functions that are referenced only from tests and the tests that exercise them.
2026-03-30 09:43:33 -07:00
Chris Lu
955a011f39 fix large_disk container rust build (#8839)
* fix docker rust build inputs for large disk image

* simplify rust builder weed copy
2026-03-29 22:43:31 -07:00
Chris Lu
d074830016 fix(worker): pass compaction revision and file sizes in EC volume copy (#8835)
* fix(worker): pass compaction revision and file sizes in EC volume copy

The worker EC task was sending CopyFile requests without the current
compaction revision (defaulting to 0) and with StopOffset set to
math.MaxInt64.  After a vacuum compaction this caused the volume server
to reject the copy or return stale data.

Read the volume file status first and forward the compaction revision
and actual file sizes so the copy is consistent with the compacted
volume.

* propagate erasure coding task context

* fix(worker): validate volume file status and detect short copies

Reject zero dat file size from ReadVolumeFileStatus — a zero-sized
snapshot would produce 0-byte copies and broken EC shards.

After streaming, verify totalBytes matches the expected stopOffset
and return an error on short copies instead of logging success.

* fix(worker): reject zero idx file size in volume status validation

A non-empty dat with zero idx indicates an empty or corrupt volume.
Without this guard, copyFileFromSource gets stopOffset=0, produces a
0-byte .idx, passes the short-copy check, and generateEcShardsLocally
runs against a volume with no index.
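
The two status guards and the short-copy check condense to something like this; helper names are hypothetical, and the real worker reads these values from ReadVolumeFileStatus:

```go
import "fmt"

// validateVolumeStatus rejects snapshots that would produce broken shards.
func validateVolumeStatus(datSize, idxSize uint64) error {
	if datSize == 0 {
		return fmt.Errorf("zero dat size: refusing to copy an empty snapshot")
	}
	if idxSize == 0 {
		return fmt.Errorf("zero idx size with non-empty dat: empty or corrupt volume")
	}
	return nil
}

// checkCopied treats a short copy as an error, not a success to log.
func checkCopied(totalBytes, stopOffset int64) error {
	if totalBytes != stopOffset {
		return fmt.Errorf("short copy: got %d bytes, want %d", totalBytes, stopOffset)
	}
	return nil
}
```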

* fix fake plugin volume file status

* fix plugin volume balance test fixtures
2026-03-29 18:47:15 -07:00
Chris Lu
e3359badfc fix unsupported platform container builds (#8838)
* ci: reintroduce Trivy report and gate workflow

* ci: add dry-run mode to container release workflow

* docker: fix unsupported platform builds
2026-03-29 18:47:07 -07:00
Chris Lu
44fea49816 docker: use alpine packages for Rust builder to fix linux/386 builds (#8837)
The upstream rust:alpine manifest list no longer includes linux/386,
breaking multi-platform builds. Switch the Rust volume server builder
stage to alpine:3.23 and install Rust toolchain via apk instead.
Also adds openssl-dev which is needed for the build.
2026-03-29 14:20:18 -07:00
Chris Lu
e5ad5e8d4a fix(filer): apply default disk type after location-prefix resolution in gRPC AssignVolume (#8836)
* fix(filer): apply default disk type after location-prefix resolution in gRPC AssignVolume

The gRPC AssignVolume path was applying the filer's default DiskType to
the request before calling detectStorageOption. This caused the default
to shadow any disk type configured via a filer location-prefix rule,
diverging from the HTTP write path which applies the default only when
no rule matches.

Extract resolveAssignStorageOption to apply the filer default disk type
after detectStorageOption, so location-prefix rules take precedence.

* fix(filer): apply default disk type after location-prefix resolution in TUS upload path

Same class of bug as the gRPC AssignVolume fix: the TUS tusWriteData
handler called detectStorageOption0 but never applied the filer's
default DiskType when no location-prefix rule matched. This made TUS
uploads ignore the -disk flag entirely.
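
Both fixes reduce to the same precedence rule, sketched here with hypothetical parameters:

```go
// resolveDiskType applies the filer default only after location-prefix
// rule resolution, matching the HTTP write path's behavior.
func resolveDiskType(ruleDiskType, filerDefault string) string {
	if ruleDiskType != "" {
		return ruleDiskType // location-prefix rule wins
	}
	return filerDefault // default applies only when no rule matched
}
```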
2026-03-29 14:18:24 -07:00
Chris Lu
0761be58d3 fix(s3): preserve explicit directory markers during empty folder cleanup (#8831)
* fix(s3): preserve explicit directory markers during empty folder cleanup

PR #8292 switched empty-folder cleanup from per-folder implicit checks
to bucket-level policy, inadvertently dropping the check that preserved
explicitly created directories (e.g., PUT /bucket/folder/). This caused
user-created folders to be deleted when their last file was removed.

Add IsDirectoryKeyObject check in executeCleanup to skip folders that
have a MIME type set, matching the canonical pattern used throughout the
S3 listing and delete handlers.

* fix: handle ErrNotFound in IsDirectoryKeyObject for race safety

Entry may be deleted between the emptiness check and the directory
marker lookup. Treat not-found as false rather than propagating
the error, avoiding unnecessary error logging in the cleanup path.

* refactor: consolidate directory marker tests and tidy error handling

- Combine two separate test functions into a table-driven test
- Nest ErrNotFound check inside the err != nil block
2026-03-29 13:46:54 -07:00
Chris Lu
937a168d34 notification.kafka: add SASL authentication and TLS support (#8832)
* notification.kafka: add SASL authentication and TLS support (#8827)

Wire sarama SASL (PLAIN, SCRAM-SHA-256, SCRAM-SHA-512) and TLS
configuration into the Kafka notification producer and consumer,
enabling connections to secured Kafka clusters.

* notification.kafka: validate mTLS config

* kafka notification: validate partial mTLS config, replace panics with errors

- Reject when only one of tls_client_cert/tls_client_key is provided
- Replace three panic() calls in KafkaInput.initialize with returned errors

* kafka notification: enforce minimum TLS 1.2 for Kafka connections
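
The sarama side of this wiring looks roughly like the following (PLAIN shown for brevity; SCRAM additionally requires Net.SASL.SCRAMClientGeneratorFunc backed by xdg-go/scram, the same library bumped elsewhere in this log). The config fields are real sarama fields; the import path may differ in the vendored tree:

```go
import (
	"crypto/tls"

	"github.com/IBM/sarama" // import path may differ in the vendored tree
)

func secureKafkaConfig(user, pass string) *sarama.Config {
	cfg := sarama.NewConfig()
	cfg.Net.SASL.Enable = true
	cfg.Net.SASL.Mechanism = sarama.SASLTypePlaintext
	cfg.Net.SASL.User = user
	cfg.Net.SASL.Password = pass
	cfg.Net.TLS.Enable = true
	cfg.Net.TLS.Config = &tls.Config{MinVersion: tls.VersionTLS12} // enforce TLS 1.2+
	return cfg
}
```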
2026-03-29 13:45:54 -07:00
Simon Bråten
479e72b5ab mount: add option to show system entries (#8829)
* mount: add option to show system entries

* address gemini code review's suggested changes

* rename flag from -showSystemEntries to -includeSystemEntries

* meta_cache: purge hidden system entries on filer events

---------

Co-authored-by: Chris Lu <chris.lu@gmail.com>
2026-03-29 13:33:17 -07:00
Chris Lu
46567ac06c reintroduce Trivy reporting and dry-run mode (#8834)
* ci: reintroduce Trivy report and gate workflow

* ci: add dry-run mode to container release workflow
2026-03-29 13:20:05 -07:00
Chris Lu
00fcd5b828 revert temporary docker and trivy changes (#8833) 2026-03-29 13:01:51 -07:00
Chris Lu
a95b8396e4 plugin scheduler: run iceberg and lifecycle lanes concurrently (#8821)
* plugin scheduler: run iceberg and lifecycle lanes concurrently

The default lane serialises job types under a single admin lock
because volume-management operations share global state. Iceberg
and lifecycle lanes have no such constraint, so run each of their
job types independently in separate goroutines.

* Fix concurrent lane scheduler status

* plugin scheduler: address review feedback

- Extract collectDueJobTypes helper to deduplicate policy loading
  between locked and concurrent iteration paths.
- Use atomic.Bool instead of sync.Mutex for hadJobs in the concurrent
  path.
- Set lane loop state to "busy" before launching concurrent goroutines
  so the lane is not reported as idle while work runs.
- Convert TestLaneRequiresLock to table-driven style.
- Add TestRunLaneSchedulerIterationLockBehavior to verify the scheduler
  acquires the admin lock only for lanes that require it.
- Fix flaky TestGetLaneSchedulerStatusShowsActiveConcurrentLaneWork by
  not starting background scheduler goroutines that race with the
  direct runJobTypeIteration call.
2026-03-29 00:06:20 -07:00
Chris Lu
e8a6fcaafb s3api: skip TTL fast-path for versioned buckets (#8823)
* s3api: skip TTL fast-path for versioned buckets (#8757)

PutBucketLifecycleConfiguration was translating Expiration.Days into
filer.conf TTL entries for all buckets. For versioned buckets this is
wrong:

 1. TTL volumes expire as a unit, destroying all data — including
    noncurrent versions that should be preserved.
 2. Filer-backend TTL (RocksDB compaction filter, Redis key expiry)
    removes entries without triggering chunk deletion, leaving orphaned
    volume data with 0 deleted bytes.
 3. On AWS S3, Expiration.Days on a versioned bucket creates a delete
    marker — it does not hard-delete data. TTL has no such nuance.

Fix: skip the TTL fast-path when the bucket has versioning enabled or
suspended. All lifecycle rules are evaluated at scan time by the
lifecycle worker instead.

Also fix the lifecycle worker to evaluate Expiration rules against the
latest version in .versions/ directories, which was previously skipped
entirely — only NoncurrentVersionExpiration was handled.
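
The guard itself is small; a hedged sketch using the versioning status string as the commit describes it:

```go
// canUseTTLFastPath is illustrative: Expiration.Days is never lowered to
// a filer.conf TTL when the bucket is (or was) versioned.
func canUseTTLFastPath(versioningStatus string) bool {
	switch versioningStatus {
	case "Enabled", "Suspended":
		return false // worker evaluates rules at scan time instead
	}
	return true
}
```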

* lifecycle worker: handle SeaweedList error in versions dir cleanup

Do not assume the directory is empty when the list call fails — log
the error and skip the directory to avoid incorrect deletion.

* address review feedback

- Fetch version file for tag-based rules instead of reading tags from
  the .versions directory entry where they are not cached.
- Handle getBucketVersioningStatus error by failing closed (treat as
  versioned) to avoid creating TTL entries on transient failures.
- Capture and assert deleteExpiredObjects return values in test.
- Improve test documentation.
2026-03-29 00:05:53 -07:00
Chris Lu
9dd43ca006 fix balance fallback replica placement (#8824) 2026-03-29 00:05:42 -07:00
Chris Lu
ce02cf3c9d stabilize FUSE git reset recovery (#8825) 2026-03-29 00:04:32 -07:00
Chris Lu
43f5916a1d ci: add Trivy CVE scan to container release workflow (#8820)
* ci: add Trivy CVE scan to container release workflow

* ci: pin trivy-action version and fail on HIGH/CRITICAL CVEs

Address review feedback:
- Pin aquasecurity/trivy-action to v0.28.0 instead of @master
- Add exit-code: '1' so the scan fails the job on findings
- Add comment explaining why only amd64 is scanned

* ci: pin trivy-action to SHA for v0.35.0

Tags ≤0.34.2 were compromised (GHSA-69fq-xp46-6x23). Pin to the full
commit SHA of v0.35.0 to avoid mutable tag risks.
2026-03-28 21:10:57 -07:00
Chris Lu
056cf6fa5b docker: default published images to seaweed user (#8819)
* ci: add Trivy CVE scan to container release workflow

* docker: default published images to seaweed user

* Revert "ci: add Trivy CVE scan to container release workflow"

This reverts commit bc9b7e1cf7a0694e355c5d23b5e323a07e8ba670.
2026-03-28 21:03:24 -07:00
Chris Lu
0884acd70c ci: add S3 mutation regression coverage (#8804)
* test/s3: stabilize distributed lock regression

* ci: add S3 mutation regression workflow

* test: fix delete regression readiness probe

* test: address mutation regression review comments
2026-03-28 20:17:20 -07:00
Chris Lu
297cdef1a3 s3api: accept all supported lifecycle rule types (#8813)
* s3api: accept NoncurrentVersionExpiration, AbortIncompleteMultipartUpload, Expiration.Date

Update PutBucketLifecycleConfigurationHandler to accept all newly-supported
lifecycle rule types. Only Transition and NoncurrentVersionTransition are
still rejected (require storage class tier infrastructure).

Changes:
- Remove ErrNotImplemented for Expiration.Date (handled by worker at scan time)
- Only reject rules with Transition.set or NoncurrentVersionTransition.set
- Extract prefix from Filter.And when present
- Add comment explaining that non-Expiration.Days rules are evaluated by
  the lifecycle worker from stored lifecycle XML, not via filer.conf TTL

The lifecycle XML is already stored verbatim in bucket metadata, so new
rule types are preserved on Get even without explicit handler support.
Filer.conf TTL entries are only created for Expiration.Days (fast path).

* s3api: skip TTL fast path for rules with tag or size filters

Rules with tag or size constraints (Filter.Tag, Filter.And with tags
or size bounds, Filter.ObjectSizeGreaterThan/LessThan) must not be
lowered to filer.conf TTL entries, because TTL applies unconditionally
to all objects under the prefix. These rules are evaluated at scan
time by the lifecycle worker which checks each object's tags and size.

Only simple Expiration.Days rules with prefix-only filters use the
TTL fast path (RocksDB compaction filter).
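
The eligibility test for the fast path then collapses to a conjunction, sketched with illustrative parameters rather than the actual XML types:

```go
// eligibleForTTL: only a prefix-only Expiration.Days rule may be lowered
// to a filer.conf TTL entry; everything else is scan-time only.
func eligibleForTTL(expirationDays int, hasTagFilter, hasSizeFilter bool) bool {
	return expirationDays > 0 && !hasTagFilter && !hasSizeFilter
}
```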

---------

Co-authored-by: Copilot <copilot@github.com>
2026-03-28 19:39:21 -07:00
Chris Lu
55318fe5ec lifecycle worker: add integration tests with in-memory filer (#8818) 2026-03-28 15:19:58 -07:00
Chris Lu
782ab84f95 lifecycle worker: drive MPU abort from lifecycle rules (#8812)
* lifecycle worker: drive MPU abort from lifecycle rules

Update the multipart upload abort phase to read
AbortIncompleteMultipartUpload.DaysAfterInitiation from the parsed
lifecycle rules. Falls back to the worker config abort_mpu_days when
no lifecycle XML rule specifies the value.

This means per-bucket MPU abort thresholds are now respected when
set via PutBucketLifecycleConfiguration, instead of using a single
global worker config value for all buckets.

* lifecycle worker: only use config AbortMPUDays when no lifecycle XML exists

When a bucket has lifecycle XML (useRuleEval=true) but no
AbortIncompleteMultipartUpload rule, mpuAbortDays should be 0
(no abort), not the worker config default. The config fallback
should only apply to buckets without lifecycle XML.

* lifecycle worker: only skip .uploads at bucket root

* lifecycle worker: use per-upload rule evaluation for MPU abort

Replace the single bucket-wide mpuAbortDays with per-upload evaluation
using s3lifecycle.EvaluateMPUAbort, which respects each rule's prefix
filter and DaysAfterInitiation threshold.

Previously the code took the first enabled abort rule's days value
and applied it to all uploads, ignoring prefix scoping and multiple
rules with different thresholds.

Config fallback (abort_mpu_days) now only applies when lifecycle XML
is truly absent (xmlPresent=false), not when XML exists but has no
abort rules.

Also fix EvaluateMPUAbort to use expectedExpiryTime for midnight-UTC
semantics matching other lifecycle cutoffs.

---------

Co-authored-by: Copilot <copilot@github.com>
2026-03-28 13:50:33 -07:00
Chris Lu
f52a3c87ce lifecycle worker: fix ExpiredObjectDeleteMarker to match AWS semantics (#8811)
* lifecycle worker: add NoncurrentVersionExpiration support

Add version-aware scanning to the rule-based execution path. When the
walker encounters a .versions directory, processVersionsDirectory():
- Lists all version entries (v_<versionId>)
- Sorts by version timestamp (newest first)
- Walks non-current versions with ShouldExpireNoncurrentVersion()
  which handles both NoncurrentDays and NewerNoncurrentVersions
- Extracts successor time from version IDs (both old/new format)
- Skips delete markers in noncurrent version counting
- Falls back to entry Mtime when version ID timestamp is unavailable

Helper functions:
- sortVersionsByTimestamp: insertion sort by version ID timestamp
- getEntryVersionTimestamp: extracts timestamp with Mtime fallback

* lifecycle worker: address review feedback for noncurrent versions

- Use sentinel errLimitReached in versions directory handler
- Set NoncurrentIndex on ObjectInfo for proper NewerNoncurrentVersions
  evaluation

* lifecycle worker: fail closed on XML parse error, guard zero Mtime

- Fail closed when lifecycle XML exists but fails to parse, instead
  of falling back to TTL which could apply broader rules
- Guard Mtime > 0 before using time.Unix(mtime, 0) to avoid mapping
  unset Mtime to 1970, which would misorder versions and cause
  premature expiration

* lifecycle worker: count delete markers toward NoncurrentIndex

Noncurrent delete markers should count toward the
NewerNoncurrentVersions retention threshold so data versions
get the correct position index. Previously, skipping delete
markers without incrementing the index could retain too many
versions after delete/recreate cycles.

* lifecycle worker: fix version ordering, error propagation, and fail-closed scope

1. Use full version ID comparison (CompareVersionIds) for sorting
   .versions entries, not just decoded timestamps. Two versions with
   the same timestamp prefix but different random suffixes were
   previously misordered, potentially treating the newest version as
   noncurrent and deleting it.

2. Propagate .versions listing failures to the caller instead of
   swallowing them with (nil, 0). Transient filer errors on a
   .versions directory now surface in the job result.

3. Narrow the fail-closed path to only malformed lifecycle XML
   (errMalformedLifecycleXML). Transient filer LookupEntry errors
   now fall back to TTL with a warning, matching the original intent
   of "fail closed on bad config, not on network blips."

* lifecycle worker: only skip .uploads at bucket root

* lifecycle worker: sort.Slice, mixed-format test, XML presence tracking

- Replace manual insertion sort with sort.Slice in sortVersionsByVersionId
- Add TestCompareVersionIdsMixedFormats covering old/new format ordering
- Distinguish "no lifecycle XML" (nil) from "XML present but no effective
  rules" (non-nil empty slice) so buckets with all-disabled rules don't
  incorrectly fall back to filer.conf TTL expiration

* lifecycle worker: guard nil Attributes, use TrimSuffix in test

- Guard entry.Attributes != nil before accessing GetFileSize() and
  Mtime in both listExpiredObjectsByRules and processVersionsDirectory
- Use strings.TrimPrefix/TrimSuffix in TestVersionsDirectoryNaming
  to match the production code pattern

* lifecycle worker: skip TTL scan when XML present, fix test assertions

- When lifecycle XML is present but has no effective rules, skip
  object scanning entirely instead of falling back to TTL path
- Test sort output against concrete expected names instead of
  re-using the same comparator as the sort itself

* lifecycle worker: fix ExpiredObjectDeleteMarker to match AWS semantics

Rewrite cleanupDeleteMarkers() to only remove delete markers that are
the sole remaining version of an object. Previously, delete markers
were removed unconditionally which could resurface older versions in
versioned buckets.

New algorithm:
1. Walk bucket tree looking for .versions directories
2. Check ExtLatestVersionIsDeleteMarker from directory metadata
3. Count versions in the .versions directory
4. Only remove if count == 1 (delete marker is sole version)
5. Require an ExpiredObjectDeleteMarker=true rule (when lifecycle
   XML rules are present)
6. Remove the empty .versions directory after cleanup

This phase runs after NoncurrentVersionExpiration so version counts
are accurate.
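
The decision at steps 4-5 condenses to a single predicate (illustrative, not the actual worker code):

```go
// shouldRemoveDeleteMarker: a delete marker is expired only when it is
// the sole remaining version and a matching rule allows the cleanup.
func shouldRemoveDeleteMarker(latestIsDeleteMarker bool, versionCount int, ruleAllows bool) bool {
	return latestIsDeleteMarker && versionCount == 1 && ruleAllows
}
```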

* lifecycle worker: respect prefix filter in ExpiredObjectDeleteMarker rules

Previously hasDeleteMarkerRule was a bucket-wide boolean that ignored
rule prefixes. A prefix-scoped rule like "logs/" would incorrectly
clean up delete markers in all paths.

Add matchesDeleteMarkerRule() that checks if a matching enabled
ExpiredObjectDeleteMarker rule exists for the specific object key,
respecting the rule's prefix filter. Falls back to legacy behavior
(allow cleanup) when no lifecycle XML rules are provided.

* lifecycle worker: only skip .uploads at bucket root

Check dir == bucketPath before skipping directories named .uploads.
Previously a user-created directory like data/.uploads/ at any depth
would be incorrectly skipped during lifecycle scanning.

* lifecycle worker: fix delete marker cleanup with XML-present empty rules

1. matchesDeleteMarkerRule now uses nil check (not len==0) for legacy
   fallback. A non-nil empty slice means lifecycle XML was present but
   had no ExpiredObjectDeleteMarker rules, so cleanup is blocked.
   Previously, an empty slice triggered the legacy true path.

2. Use per-directory removedHere flag instead of cumulative cleaned
   counter when deciding to remove .versions directories. Previously,
   after the first successful cleanup anywhere in the bucket, every
   subsequent .versions directory would be removed even if its own
   delete marker was not actually deleted.

* lifecycle worker: use full filter matching for delete marker rules

matchesDeleteMarkerRule now uses s3lifecycle.MatchesFilter (exported)
instead of prefix-only matching. This ensures tag and size filters
on ExpiredObjectDeleteMarker rules are respected, preventing broader
deletions than the configured policy intends.

Add TestMatchesDeleteMarkerRule covering: nil rules (legacy), empty
rules (XML present), prefix match/mismatch, disabled rules, rules
without the flag, and tag-filtered rules against tagless markers.

---------

Co-authored-by: Copilot <copilot@github.com>
2026-03-28 13:26:57 -07:00
Lars Lehtonen
b01a74c6bb Prune Unused Functions from weed/s3api (#8815)
* weed/s3api: prune calculatePartOffset()

* weed/s3api: prune clearCachedListMetadata()

* weed/s3api: prune S3ApiServer.isObjectRetentionActive()

* weed/s3api: prune S3ApiServer.ensureDirectoryAllEmpty()

* weed/s3api: prune s3ApiServer.getEncryptionTypeString()

* weed/s3api: prune newStreamError()

* weed/s3api: prune S3ApiServer.rotateSSECKey()

weed/s3api: prune S3ApiServer.rotateSSEKMSKey()

weed/s3api: prune S3ApiServer.rotateSSEKMSMetadataOnly()

weed/s3api: prune S3ApiServer.rotateSSECChunks()

weed/s3api: prune S3ApiServer.rotateSSEKMSChunks()

weed/s3api: prune S3ApiServer.rotateSSECChunk()

weed/s3api: prune S3ApiServer.rotateSSEKMSChunk()

* weed/s3api: prune addCounterToIV()

* weed/s3api: prune minInt()

* weed/s3api: prune isMethodActionMismatch()

* weed/s3api: prune hasSpecificQueryParameters()

* weed/s3api: prune handlePutToFilerError()

weed/s3api: prune handlePutToFilerInternalError()

weed/s3api: prune logErrorAndReturn()

weed/s3api: prune logInternalError

weed/s3api: prune handleSSEError()

weed/s3api: prune handleSSEInternalError()

* weed/s3api: prune encryptionConfigToProto()

* weed/s3api: prune S3ApiServer.touch()
2026-03-28 13:24:11 -07:00
Chris Lu
f6ec9941cb lifecycle worker: NoncurrentVersionExpiration support (#8810)
* lifecycle worker: add NoncurrentVersionExpiration support

Add version-aware scanning to the rule-based execution path. When the
walker encounters a .versions directory, processVersionsDirectory():
- Lists all version entries (v_<versionId>)
- Sorts by version timestamp (newest first)
- Walks non-current versions with ShouldExpireNoncurrentVersion()
  which handles both NoncurrentDays and NewerNoncurrentVersions
- Extracts successor time from version IDs (both old/new format)
- Skips delete markers in noncurrent version counting
- Falls back to entry Mtime when version ID timestamp is unavailable

Helper functions:
- sortVersionsByTimestamp: insertion sort by version ID timestamp
- getEntryVersionTimestamp: extracts timestamp with Mtime fallback

* lifecycle worker: address review feedback for noncurrent versions

- Use sentinel errLimitReached in versions directory handler
- Set NoncurrentIndex on ObjectInfo for proper NewerNoncurrentVersions
  evaluation

* lifecycle worker: fail closed on XML parse error, guard zero Mtime

- Fail closed when lifecycle XML exists but fails to parse, instead
  of falling back to TTL which could apply broader rules
- Guard Mtime > 0 before using time.Unix(mtime, 0) to avoid mapping
  unset Mtime to 1970, which would misorder versions and cause
  premature expiration

* lifecycle worker: count delete markers toward NoncurrentIndex

Noncurrent delete markers should count toward the
NewerNoncurrentVersions retention threshold so data versions
get the correct position index. Previously, skipping delete
markers without incrementing the index could retain too many
versions after delete/recreate cycles.

* lifecycle worker: fix version ordering, error propagation, and fail-closed scope

1. Use full version ID comparison (CompareVersionIds) for sorting
   .versions entries, not just decoded timestamps. Two versions with
   the same timestamp prefix but different random suffixes were
   previously misordered, potentially treating the newest version as
   noncurrent and deleting it.

2. Propagate .versions listing failures to the caller instead of
   swallowing them with (nil, 0). Transient filer errors on a
   .versions directory now surface in the job result.

3. Narrow the fail-closed path to only malformed lifecycle XML
   (errMalformedLifecycleXML). Transient filer LookupEntry errors
   now fall back to TTL with a warning, matching the original intent
   of "fail closed on bad config, not on network blips."

* lifecycle worker: only skip .uploads at bucket root

* lifecycle worker: sort.Slice, mixed-format test, XML presence tracking

- Replace manual insertion sort with sort.Slice in sortVersionsByVersionId
- Add TestCompareVersionIdsMixedFormats covering old/new format ordering
- Distinguish "no lifecycle XML" (nil) from "XML present but no effective
  rules" (non-nil empty slice) so buckets with all-disabled rules don't
  incorrectly fall back to filer.conf TTL expiration

* lifecycle worker: guard nil Attributes, use TrimSuffix in test

- Guard entry.Attributes != nil before accessing GetFileSize() and
  Mtime in both listExpiredObjectsByRules and processVersionsDirectory
- Use strings.TrimPrefix/TrimSuffix in TestVersionsDirectoryNaming
  to match the production code pattern

* lifecycle worker: skip TTL scan when XML present, fix test assertions

- When lifecycle XML is present but has no effective rules, skip
  object scanning entirely instead of falling back to TTL path
- Test sort output against concrete expected names instead of
  re-using the same comparator as the sort itself

---------

Co-authored-by: Copilot <copilot@github.com>
2026-03-28 12:58:21 -07:00
Chris Lu
9c3bc138a0 lifecycle worker: scan-time rule evaluation for object expiration (#8809)
* s3api: extend lifecycle XML types with NoncurrentVersionExpiration, AbortIncompleteMultipartUpload

Add missing S3 lifecycle rule types to the XML data model:
- NoncurrentVersionExpiration with NoncurrentDays and NewerNoncurrentVersions
- NoncurrentVersionTransition with NoncurrentDays and StorageClass
- AbortIncompleteMultipartUpload with DaysAfterInitiation
- Filter.ObjectSizeGreaterThan and ObjectSizeLessThan
- And.ObjectSizeGreaterThan and ObjectSizeLessThan
- Filter.UnmarshalXML to properly parse Tag, And, and size filter elements

Each new type follows the existing set-field pattern for conditional
XML marshaling. No behavior changes - these types are not yet wired
into handlers or the lifecycle worker.

* s3lifecycle: add lifecycle rule evaluator package

New package weed/s3api/s3lifecycle/ provides a pure-function lifecycle
rule evaluation engine. The evaluator accepts flattened Rule structs and
ObjectInfo metadata, and returns the appropriate Action.

Components:
- evaluator.go: Evaluate() for per-object actions with S3 priority
  ordering (delete marker > noncurrent version > current expiration),
  ShouldExpireNoncurrentVersion() with NewerNoncurrentVersions support,
  EvaluateMPUAbort() for multipart upload rules
- filter.go: prefix, tag, and size-based filter matching
- tags.go: ExtractTags() extracts S3 tags from filer Extended metadata,
  HasTagRules() for scan-time optimization
- version_time.go: GetVersionTimestamp() extracts timestamps from
  SeaweedFS version IDs (both old and new format)

Comprehensive test coverage: 54 tests covering all action types,
filter combinations, edge cases, and version ID formats.
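
The priority ordering can be sketched as a small dispatcher; the Action names here are illustrative stand-ins for the package's own types:

```go
type Action int

const (
	ActionNone Action = iota
	ActionDeleteMarkerCleanup
	ActionExpireNoncurrent
	ActionExpireCurrent
)

// evaluate applies the S3 priority ordering described above:
// delete marker > noncurrent version > current expiration.
func evaluate(isDeleteMarkerSoleVersion, noncurrentExpired, currentExpired bool) Action {
	switch {
	case isDeleteMarkerSoleVersion:
		return ActionDeleteMarkerCleanup
	case noncurrentExpired:
		return ActionExpireNoncurrent
	case currentExpired:
		return ActionExpireCurrent
	}
	return ActionNone
}
```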

* s3api: add UnmarshalXML for Expiration, Transition, ExpireDeleteMarker

Add UnmarshalXML methods that set the internal 'set' flag during XML
parsing. Previously these flags were only set programmatically, causing
XML round-trip to drop elements. This ensures lifecycle configurations
stored as XML survive unmarshal/marshal cycles correctly.

Add comprehensive XML round-trip tests for all lifecycle rule types
including NoncurrentVersionExpiration, AbortIncompleteMultipartUpload,
Filter with Tag/And/size constraints, and a complete Terraform-style
lifecycle configuration.

* s3lifecycle: address review feedback

- Fix version_time.go overflow: guard timestampPart > MaxInt64 before
  the inversion subtraction to prevent uint64 wrap
- Make all expiry checks inclusive (!now.Before instead of now.After)
  so actions trigger at the exact scheduled instant
- Add NoncurrentIndex to ObjectInfo so Evaluate() can properly handle
  NewerNoncurrentVersions via ShouldExpireNoncurrentVersion()
- Add test for high-bit overflow version ID

* s3lifecycle: guard ShouldExpireNoncurrentVersion against zero SuccessorModTime

Add early return when obj.IsLatest or obj.SuccessorModTime.IsZero()
to prevent premature expiration of versions with uninitialized
successor timestamps (zero value would compute to epoch, always expired).
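
A minimal sketch of the guard with stand-in types; without it, a zero SuccessorModTime plus NoncurrentDays lands near the Unix epoch and always reads as expired:

```go
package main

import (
	"fmt"
	"time"
)

// Trimmed stand-in for the evaluator's version metadata.
type ObjectInfo struct {
	IsLatest         bool
	SuccessorModTime time.Time // when the next-newer version was created
}

// shouldExpireNoncurrent sketches the early return: latest versions and
// versions with an uninitialized successor timestamp never expire here.
func shouldExpireNoncurrent(obj ObjectInfo, noncurrentDays int, now time.Time) bool {
	if obj.IsLatest || obj.SuccessorModTime.IsZero() {
		return false
	}
	deadline := obj.SuccessorModTime.AddDate(0, 0, noncurrentDays)
	return !now.Before(deadline) // inclusive, per the earlier review fix
}

func main() {
	obj := ObjectInfo{} // zero SuccessorModTime
	fmt.Println(shouldExpireNoncurrent(obj, 1, time.Now())) // false, not "expired since 1970"
}
```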

* lifecycle worker: detect buckets with lifecycle XML, not just filer.conf TTLs

Update the detection phase to check for stored lifecycle XML in bucket
metadata (key: s3-bucket-lifecycle-configuration-xml) in addition to
filer.conf TTL entries. A bucket is proposed for lifecycle processing if
it has lifecycle XML OR filer.conf TTLs (backward compatible).

New proposal parameters:
- has_lifecycle_xml: whether the bucket has stored lifecycle XML
- versioning_status: the bucket's versioning state (Enabled/Suspended/"")

These parameters will be used by the execution phase (subsequent PR)
to determine which evaluation path to use.
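
A hypothetical sketch of the widened detection check; BucketEntry and shouldPropose are stand-ins for the worker's actual detection code:

```go
package main

import "fmt"

// Hypothetical trimmed view of a bucket's filer entry; the real worker
// reads the bucket entry's Extended metadata and the filer.conf TTL.
type BucketEntry struct {
	Extended map[string][]byte
	TtlSec   int32
}

// The metadata key is the one named in the commit message.
const lifecycleXMLKey = "s3-bucket-lifecycle-configuration-xml"

// shouldPropose sketches the widened detection: a bucket qualifies when
// it has stored lifecycle XML OR a filer.conf TTL (the old criterion),
// and the proposal records has_lifecycle_xml for the execution phase.
func shouldPropose(b BucketEntry) (bool, map[string]string) {
	_, hasXML := b.Extended[lifecycleXMLKey]
	hasTTL := b.TtlSec > 0
	if !hasXML && !hasTTL {
		return false, nil
	}
	return true, map[string]string{
		"has_lifecycle_xml": fmt.Sprintf("%t", hasXML),
	}
}

func main() {
	b := BucketEntry{Extended: map[string][]byte{
		lifecycleXMLKey: []byte("<LifecycleConfiguration/>"),
	}}
	ok, params := shouldPropose(b)
	fmt.Println(ok, params["has_lifecycle_xml"]) // true true
}
```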

* lifecycle worker: update detection function comment to reflect XML support

* lifecycle worker: add lifecycle XML parsing and rule conversion

Add rules.go with:
- parseLifecycleXML() converts stored lifecycle XML to evaluator-friendly
  s3lifecycle.Rule structs, handling Filter.Prefix, Filter.Tag, Filter.And,
  size constraints, NoncurrentVersionExpiration, AbortIncompleteMultipartUpload,
  Expiration.Date, and ExpiredObjectDeleteMarker
- loadLifecycleRulesFromBucket() reads lifecycle XML from bucket metadata
- parseExpirationDate() supports RFC3339 and ISO 8601 date-only formats

Comprehensive tests for all XML variants, filter types, and date formats.
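
A sketch of the two-format date parsing, assuming time.Parse with RFC3339 first and a date-only layout as the fallback:

```go
package main

import (
	"fmt"
	"time"
)

// parseExpirationDate sketches the dual-format parsing described above:
// full RFC3339 first, then the ISO 8601 date-only form. The real helper
// lives in the worker's rules.go and may differ in detail.
func parseExpirationDate(s string) (time.Time, error) {
	if t, err := time.Parse(time.RFC3339, s); err == nil {
		return t, nil
	}
	return time.Parse("2006-01-02", s)
}

func main() {
	for _, s := range []string{"2026-04-01T00:00:00Z", "2026-04-01"} {
		t, err := parseExpirationDate(s)
		fmt.Println(t.UTC(), err)
	}
}
```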

* lifecycle worker: add scan-time rule evaluation for object expiration

Update executeLifecycleForBucket to try lifecycle XML evaluation first,
falling back to TTL-only evaluation when no lifecycle XML exists.

New listExpiredObjectsByRules() function:
- Walks the bucket directory tree
- Builds s3lifecycle.ObjectInfo from each filer entry
- Calls s3lifecycle.Evaluate() to check lifecycle rules
- Skips objects already handled by TTL fast path (TtlSec set)
- Extracts tags only when rules use tag-based filters (optimization)
- Skips .uploads and .versions directories (handled by other phases)

Supports Expiration.Days, Expiration.Date, Filter.Prefix, Filter.Tag,
Filter.And, and Filter.ObjectSize* in the scan-time evaluation path.
Existing TTL-based path remains for backward compatibility.
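
A compressed sketch of the scan loop's skip logic; Entry and evaluate are stand-ins, and the limit handling anticipates the sentinel-error fix from the next commit:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// Trimmed stand-in for the scan inputs.
type Entry struct {
	Dir    string
	Name   string
	TtlSec int32
	IsDir  bool
}

var errLimitReached = errors.New("scan limit reached")

// listExpiredByRules sketches the walk described above: skip the
// .uploads and .versions trees (other phases own them), skip entries the
// TTL fast path already covers, and stop via the sentinel at the limit.
// evaluate() stands in for s3lifecycle.Evaluate plus filter matching.
func listExpiredByRules(entries []Entry, limit int, evaluate func(Entry) bool) ([]string, error) {
	var expired []string
	for _, e := range entries {
		if e.IsDir || strings.HasPrefix(e.Name, ".uploads") || strings.HasPrefix(e.Name, ".versions") {
			continue
		}
		if e.TtlSec > 0 {
			continue // already handled by the TTL fast path
		}
		if evaluate(e) {
			expired = append(expired, e.Dir+"/"+e.Name)
			if len(expired) >= limit {
				return expired, errLimitReached
			}
		}
	}
	return expired, nil
}

func main() {
	entries := []Entry{{Dir: "/buckets/b", Name: "old.txt"}, {Dir: "/buckets/b", Name: ".versions"}}
	out, err := listExpiredByRules(entries, 10, func(Entry) bool { return true })
	fmt.Println(out, err) // [/buckets/b/old.txt] <nil>
}
```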

* lifecycle worker: address review feedback

- Use a sentinel error (errLimitReached) instead of string matching
  for scan limit detection (see the sketch after this list)
- Fix loadLifecycleRulesFromBucket path: use bucketsPath directly
  as the directory for LookupEntry instead of path.Dir, which produced
  the wrong parent
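
A minimal sketch of the sentinel-error pattern from the first item; errors.Is keeps detection working even when the error is wrapped, unlike matching err.Error() text:

```go
package main

import (
	"errors"
	"fmt"
)

// errLimitReached is the sentinel named above; callers detect the
// early-stop condition with errors.Is instead of string comparison.
var errLimitReached = errors.New("scan limit reached")

func scan() error {
	// ... walk entries until the per-run limit is hit ...
	return fmt.Errorf("bucket b: %w", errLimitReached) // wrapping stays detectable
}

func main() {
	if err := scan(); errors.Is(err, errLimitReached) {
		fmt.Println("partial result: scan stopped at the limit")
	}
}
```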

* lifecycle worker: fix And filter detection for size-only constraints

The And branch condition only triggered when Prefix or Tags were present,
missing the case where And contains only ObjectSizeGreaterThan or
ObjectSizeLessThan without a prefix or tags.
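
A sketch of the corrected predicate; the And stand-in uses pointers to distinguish absent size constraints from zero, which may differ from the production types:

```go
package main

import "fmt"

// Trimmed stand-in for the XML And element.
type And struct {
	Prefix                string
	Tags                  map[string]string
	ObjectSizeGreaterThan *int64
	ObjectSizeLessThan    *int64
}

// hasAndConstraints shows the fixed condition: size-only And blocks
// (no prefix, no tags) now count as real constraints instead of being
// dropped. The exact production predicate may differ.
func hasAndConstraints(a *And) bool {
	if a == nil {
		return false
	}
	return a.Prefix != "" || len(a.Tags) > 0 ||
		a.ObjectSizeGreaterThan != nil || a.ObjectSizeLessThan != nil
}

func main() {
	minSize := int64(1024)
	fmt.Println(hasAndConstraints(&And{ObjectSizeGreaterThan: &minSize})) // true (previously missed)
}
```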

* lifecycle worker: address review feedback round 3

- rules.go: pass through Filter-level size constraints when Tag is
  present without And (Tag+size combination was dropping sizes)
- execution.go: add doc comment to listExpiredObjectsByRules noting
  that it handles non-versioned objects only; versioned objects are
  handled by processVersionsDirectory
- rules_test.go: add bounds checks before indexing rules[0]

---------

Co-authored-by: Copilot <copilot@github.com>
2026-03-28 11:39:50 -07:00
Chris Lu
98f545c7fa lifecycle worker: detect buckets via lifecycle XML metadata (#8808)
* s3api: extend lifecycle XML types with NoncurrentVersionExpiration, AbortIncompleteMultipartUpload

Add missing S3 lifecycle rule types to the XML data model:
- NoncurrentVersionExpiration with NoncurrentDays and NewerNoncurrentVersions
- NoncurrentVersionTransition with NoncurrentDays and StorageClass
- AbortIncompleteMultipartUpload with DaysAfterInitiation
- Filter.ObjectSizeGreaterThan and ObjectSizeLessThan
- And.ObjectSizeGreaterThan and ObjectSizeLessThan
- Filter.UnmarshalXML to properly parse Tag, And, and size filter elements

Each new type follows the existing set-field pattern for conditional
XML marshaling. No behavior changes: these types are not yet wired
into handlers or the lifecycle worker.

* s3lifecycle: add lifecycle rule evaluator package

New package weed/s3api/s3lifecycle/ provides a pure-function lifecycle
rule evaluation engine. The evaluator accepts flattened Rule structs and
ObjectInfo metadata, and returns the appropriate Action.

Components:
- evaluator.go: Evaluate() for per-object actions with S3 priority
  ordering (delete marker > noncurrent version > current expiration),
  ShouldExpireNoncurrentVersion() with NewerNoncurrentVersions support,
  EvaluateMPUAbort() for multipart upload rules
- filter.go: prefix, tag, and size-based filter matching
- tags.go: ExtractTags() extracts S3 tags from filer Extended metadata,
  HasTagRules() for scan-time optimization
- version_time.go: GetVersionTimestamp() extracts timestamps from
  SeaweedFS version IDs (both old and new formats)

Comprehensive test coverage: 54 tests covering all action types,
filter combinations, edge cases, and version ID formats.

* s3api: add UnmarshalXML for Expiration, Transition, ExpireDeleteMarker

Add UnmarshalXML methods that set the internal 'set' flag during XML
parsing. Previously these flags were only set programmatically, causing
XML round-trip to drop elements. This ensures lifecycle configurations
stored as XML survive unmarshal/marshal cycles correctly.

Add comprehensive XML round-trip tests for all lifecycle rule types
including NoncurrentVersionExpiration, AbortIncompleteMultipartUpload,
Filter with Tag/And/size constraints, and a complete Terraform-style
lifecycle configuration.

* s3lifecycle: address review feedback

- Fix version_time.go overflow: guard timestampPart > MaxInt64 before
  the inversion subtraction to prevent uint64 wrap
- Make all expiry checks inclusive (!now.Before instead of now.After)
  so actions trigger at the exact scheduled instant
- Add NoncurrentIndex to ObjectInfo so Evaluate() can properly handle
  NewerNoncurrentVersions via ShouldExpireNoncurrentVersion()
- Add test for high-bit overflow version ID

* s3lifecycle: guard ShouldExpireNoncurrentVersion against zero SuccessorModTime

Add early return when obj.IsLatest or obj.SuccessorModTime.IsZero()
to prevent premature expiration of versions with uninitialized
successor timestamps (zero value would compute to epoch, always expired).

* lifecycle worker: detect buckets with lifecycle XML, not just filer.conf TTLs

Update the detection phase to check for stored lifecycle XML in bucket
metadata (key: s3-bucket-lifecycle-configuration-xml) in addition to
filer.conf TTL entries. A bucket is proposed for lifecycle processing if
it has lifecycle XML OR filer.conf TTLs (backward compatible).

New proposal parameters:
- has_lifecycle_xml: whether the bucket has stored lifecycle XML
- versioning_status: the bucket's versioning state (Enabled/Suspended/"")

These parameters will be used by the execution phase (subsequent PR)
to determine which evaluation path to use.

* lifecycle worker: update detection function comment to reflect XML support

---------

Co-authored-by: Copilot <copilot@github.com>
2026-03-28 11:16:58 -07:00