do delete expired entries on s3 list request (#7426)

* do delete expired entries on s3 list request
https://github.com/seaweedfs/seaweedfs/issues/6837

* disable deleting expired s3 entries in filer

* pass opt allowDeleteObjectsByTTL to all servers

* delete on get and head

* add lifecycle expiration s3 tests

* fix opt allowDeleteObjectsByTTL for server

* fix test lifecycle expiration

* fix IsExpired

* fix locationPrefix for updateEntriesTTL

* fix s3tests

* resolve coderabbitai comments

* GetS3ExpireTime on filer

* go mod

* clear TtlSeconds for volume

* move s3 delete expired entry to filer

* filer delete meta and data

* delete unused func removeExpiredObject

* test s3 put

* test s3 put multipart

* allowDeleteObjectsByTTL by default

* fix pipeline tests

* remove duplicate SeaweedFSExpiresS3

* revert expiration tests

* fix updateTTL

* rm log

* resolve comment

* fix delete version object

* fix S3Versioning

* fix delete on FindEntry

* fix delete chunks

* fix sqlite not support concurrent writes/reads

* move deletion out of listing transaction; delete entries and empty folders
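
The bullet above splits listing from deletion. A minimal sketch of that two-phase pattern (hypothetical types, not the actual filer code): the listing pass only collects expired names, and deletion runs afterwards, outside the listing loop or transaction.

```go
package main

import "fmt"

// entry is an illustrative stand-in for a filer listing result.
type entry struct {
	name    string
	expired bool
}

// listAndCollect does the read-only "listing" phase: live entries are
// returned to the caller, expired ones are only queued for deletion.
// The actual deletes would run after this function (and any enclosing
// store transaction) returns, so reads and writes never interleave.
func listAndCollect(entries []entry) (listed, toDelete []string) {
	for _, e := range entries {
		if e.expired {
			toDelete = append(toDelete, e.name) // queued, not deleted yet
			continue
		}
		listed = append(listed, e.name)
	}
	return
}

func main() {
	listed, toDelete := listAndCollect([]entry{
		{"kept.txt", false},
		{"old.txt", true},
	})
	fmt.Println(listed, toDelete)
}
```

Separating the phases matters for stores like SQLite that dislike concurrent reads and writes in one transaction, which is the problem the surrounding bullets wrestle with.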

* Revert "fix sqlite not support concurrent writes/reads"

This reverts commit 5d5da14e0ed91c613fe5c0ed058f58bb04fba6f0.

* clearer handling on recursive empty directory deletion

* handle listing errors

* struct copying

* reuse code to delete empty folders

* use iterative approach with a queue to avoid recursive WithFilerClient calls

* to stop a gRPC stream from the client-side callback, return a specific error, e.g., io.EOF

* still issue UpdateEntry when the flag must be added

* errors join

* join path

* cleaner

* add context, sort directories by depth (deepest first) to avoid redundant checks
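
A minimal sketch of the deepest-first ordering mentioned above (helper names invented): sorting candidate directories by path depth, descending, guarantees a child is handled before its parent, so a directory emptied by its children's deletion is checked only once.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// depth counts path separators, a cheap proxy for directory depth.
func depth(p string) int { return strings.Count(p, "/") }

func main() {
	dirs := []string{"/b/x", "/b", "/b/x/y", "/b/z"}
	// Deepest first; stable sort keeps equal-depth entries in input order.
	sort.SliceStable(dirs, func(i, j int) bool {
		return depth(dirs[i]) > depth(dirs[j])
	})
	fmt.Println(dirs) // [/b/x/y /b/x /b/z /b]
}
```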

* batched operation, refactoring

* prevent deleting bucket

* constant

* reuse code

* more logging

* refactoring

* s3 TTL time

* Safety check

---------

Co-authored-by: chrislu <chris.lu@gmail.com>
Author: Konstantin Lebedev
Date: 2025-11-06 11:05:54 +05:00
Committed by: GitHub
Parent: cc444b1868
Commit: 084b377f87
18 changed files with 489 additions and 108 deletions


@@ -54,7 +54,7 @@ jobs:
shell: bash
run: |
cd weed
-go install -buildvcs=false
+go install -tags s3tests -buildvcs=false
set -x
# Create clean data directory for this test run
export WEED_DATA_DIR="/tmp/seaweedfs-s3tests-$(date +%s)"
@@ -308,7 +308,10 @@ jobs:
s3tests/functional/test_s3.py::test_copy_object_ifnonematch_good \
s3tests/functional/test_s3.py::test_lifecycle_set \
s3tests/functional/test_s3.py::test_lifecycle_get \
-s3tests/functional/test_s3.py::test_lifecycle_set_filter
+s3tests/functional/test_s3.py::test_lifecycle_set_filter \
+s3tests/functional/test_s3.py::test_lifecycle_expiration \
+s3tests/functional/test_s3.py::test_lifecyclev2_expiration \
+s3tests/functional/test_s3.py::test_lifecycle_expiration_versioning_enabled
kill -9 $pid || true
# Clean up data directory
rm -rf "$WEED_DATA_DIR" || true
@@ -791,7 +794,7 @@ jobs:
exit 1
fi
-go install -tags "sqlite" -buildvcs=false
+go install -tags "sqlite s3tests" -buildvcs=false
# Create clean data directory for this test run with unique timestamp and process ID
export WEED_DATA_DIR="/tmp/seaweedfs-sql-test-$(date +%s)-$$"
mkdir -p "$WEED_DATA_DIR"
@@ -1123,7 +1126,10 @@ jobs:
s3tests/functional/test_s3.py::test_copy_object_ifnonematch_good \
s3tests/functional/test_s3.py::test_lifecycle_set \
s3tests/functional/test_s3.py::test_lifecycle_get \
-s3tests/functional/test_s3.py::test_lifecycle_set_filter
+s3tests/functional/test_s3.py::test_lifecycle_set_filter \
+s3tests/functional/test_s3.py::test_lifecycle_expiration \
+s3tests/functional/test_s3.py::test_lifecyclev2_expiration \
+s3tests/functional/test_s3.py::test_lifecycle_expiration_versioning_enabled
kill -9 $pid || true
# Clean up data directory
rm -rf "$WEED_DATA_DIR" || true