* filer: expose metadata events and list snapshots

* mount: invalidate hot directory caches

* mount: read hot directories directly from filer

* mount: add sequenced metadata cache applier

* mount: apply metadata responses through cache applier

* mount: replay snapshot-consistent directory builds

* mount: dedupe self metadata events

* mount: factor directory build cleanup

* mount: replace proto marshal dedup with composite key and ring buffer

  The dedup logic was doing a full deterministic proto.Marshal on every metadata event just to produce a dedup key. Replace it with a cheap composite string key (TsNs|Directory|OldName|NewName). Also replace the sliding-window slice (which leaked the backing array unboundedly) with a fixed-size ring buffer that reuses the same array (sketched after this list).

* filer: remove mutex and proto.Clone from request-scoped MetadataEventSink

  MetadataEventSink is created per-request and only accessed by the goroutine handling the gRPC call. The mutex and double proto.Clone (once in Record, once in Last) were unnecessary overhead on every filer write operation. Store the pointer directly instead.

* mount: skip proto.Clone for caller-owned metadata events

  Add ApplyMetadataResponseOwned that takes ownership of the response without cloning. Local metadata events (mkdir, create, flush, etc.) are freshly constructed and never shared, so the clone is unnecessary.

* filer: only populate MetadataEvent on successful DeleteEntry

  Avoid calling eventSink.Last() on error paths, where the sink may contain a partial event from an intermediate child deletion during recursive deletes.

* mount: avoid map allocation in collectDirectoryNotifications

  Replace the map with a fixed-size array and linear dedup. There are at most 3 directories to notify (old parent, new parent, new child if directory), so a 3-element array avoids the heap allocation on every metadata event.

* mount: fix potential deadlock in enqueueApplyRequest

  Release applyStateMu before the blocking channel send. Previously, if the channel was full (cap 128), the send would block while holding the mutex, preventing Shutdown from acquiring it to set applyClosed.

* mount: restore signature-based self-event filtering as fast path

  Re-add the signature check that was removed when content-based dedup was introduced. Checking signatures is O(1) on a small slice and avoids enqueuing and processing events that originated from this mount instance. The content-based dedup remains as a fallback.

* filer: send snapshotTsNs only in first ListEntries response

  The snapshot timestamp is identical for every entry in a single ListEntries stream. Sending it in every response message wastes wire bandwidth for large directories. The client already reads it only from the first response.

* mount: exit read-through mode after successful full directory listing

  MarkDirectoryRefreshed was defined but never called, so directories that entered read-through mode (hot invalidation threshold) stayed there permanently, hitting the filer on every readdir even when cold. Call it after a complete read-through listing finishes.

* mount: include event shape and full paths in dedup key

  The previous dedup key only used Names, which could collapse distinct rename targets. Include the event shape (C/D/U/R), source directory, new parent path, and both entry names so structurally different events are never treated as duplicates.
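For reference, a minimal sketch of the composite-key ring-buffer dedup described above. The names here (`eventKey`, `dedupRing`) are illustrative, not the actual mount package API; the point is the shape: a cheap string key plus a fixed-size buffer whose backing array is reused, instead of a sliding-window slice that keeps growing.

```go
package main

import "fmt"

// eventKey builds the cheap composite dedup key
// (TsNs|Directory|OldName|NewName) without any proto marshalling.
func eventKey(tsNs int64, dir, oldName, newName string) string {
	return fmt.Sprintf("%d|%s|%s|%s", tsNs, dir, oldName, newName)
}

// dedupRing is a fixed-size ring of recently seen keys. Unlike a
// sliding-window slice, it never grows: the backing array is reused,
// so evicted keys cannot leak memory.
type dedupRing struct {
	keys []string
	seen map[string]struct{}
	next int
}

func newDedupRing(size int) *dedupRing {
	return &dedupRing{keys: make([]string, size), seen: make(map[string]struct{}, size)}
}

// Seen reports whether key was recently recorded, and records it,
// evicting the oldest entry when the ring is full.
func (r *dedupRing) Seen(key string) bool {
	if _, ok := r.seen[key]; ok {
		return true
	}
	if old := r.keys[r.next]; old != "" {
		delete(r.seen, old) // evict the slot being reused
	}
	r.keys[r.next] = key
	r.seen[key] = struct{}{}
	r.next = (r.next + 1) % len(r.keys)
	return false
}

func main() {
	ring := newDedupRing(2)
	k := eventKey(1, "/dir", "", "a.txt")
	fmt.Println(ring.Seen(k)) // false: first sighting
	fmt.Println(ring.Seen(k)) // true: duplicate
}
```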
* mount: drain pending requests on shutdown in runApplyLoop

  After receiving the shutdown sentinel, drain any remaining requests from applyCh non-blockingly and signal each with errMetaCacheClosed so callers waiting on req.done are released (sketched after this list).

* mount: include IsDirectory in synthetic delete events

  metadataDeleteEvent now accepts an isDirectory parameter so the applier can distinguish directory deletes from file deletes. Rmdir passes true, Unlink passes false.

* mount: fall back to synthetic event when MetadataEvent is nil

  In mknod and mkdir, if the filer response omits MetadataEvent (e.g. an older filer without the field), synthesize an equivalent local metadata event so the cache is always updated.

* mount: make Flush metadata apply best-effort after successful commit

  After filer_pb.CreateEntryWithResponse succeeds, the entry is persisted. Don't fail the Flush syscall if the local metadata cache apply fails; log and invalidate the directory cache instead. Also fall back to a synthetic event when MetadataEvent is nil.

* mount: make Rename metadata apply best-effort

  The rename has already succeeded on the filer by the time we apply the local metadata event. Log failures instead of returning errors that would be dropped by the caller anyway.

* mount: make saveEntry metadata apply best-effort with fallback

  After UpdateEntryWithResponse succeeds, treat local metadata apply as non-fatal. Log and invalidate the directory cache on failure. Also fall back to a synthetic event when MetadataEvent is nil.

* filer_pb: preserve snapshotTsNs on error in ReadDirAllEntriesWithSnapshot

  Return the snapshot timestamp even when the first page fails, so callers receive the snapshot boundary when partial data was received.

* filer: send snapshot token for empty directory listings

  When no entries are streamed, send a final ListEntriesResponse with only SnapshotTsNs so clients always receive the snapshot boundary.

* mount: distinguish not-found vs transient errors in lookupEntry

  Return fuse.EIO for non-not-found filer errors instead of unconditionally returning ENOENT, so transient failures don't masquerade as missing entries.

* mount: make CacheRemoteObject metadata apply best-effort

  The file content has already been cached successfully. Don't fail the read if the local metadata cache update fails.

* mount: use consistent snapshot for readdir in direct mode

  Capture the SnapshotTsNs from the first loadDirectoryEntriesDirect call and store it on the DirectoryHandle. Subsequent batch loads pass this stored timestamp so all batches use the same snapshot. Also export DoSeaweedListWithSnapshot so mount can use it directly with snapshot passthrough.

* filer_pb: fix test fake to send SnapshotTsNs only on first response

  Match the server behavior: only the first ListEntriesResponse in a page carries the snapshot timestamp; subsequent entries leave it zero.

* Fix nil pointer dereference in ListEntries stream consumers

  Remove the empty-directory snapshot-only response from ListEntries that sent a ListEntriesResponse with Entry==nil, which crashed every raw stream consumer that assumed resp.Entry is always non-nil. Also add defensive nil checks for resp.Entry in all raw ListEntries stream consumers across: S3 listing, broker topic lookup, broker topic config, admin dashboard, topic retention, hybrid message scanner, Kafka integration, and consumer offset storage.
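A sketch of the shutdown drain described in the first item above, under assumed types (`applyRequest` carrying a `done` channel the caller waits on); the non-blocking `select`/`default` loop is the essential shape:

```go
package main

import (
	"errors"
	"fmt"
)

var errMetaCacheClosed = errors.New("meta cache closed")

// applyRequest stands in for the mount's queued cache mutation;
// done is the channel the enqueuing caller blocks on.
type applyRequest struct {
	done chan error
}

// drainPending is the post-sentinel step: empty applyCh without
// blocking and release every waiting caller with errMetaCacheClosed.
func drainPending(applyCh chan *applyRequest) {
	for {
		select {
		case req := <-applyCh:
			req.done <- errMetaCacheClosed
		default:
			return // channel empty: every caller has been signaled
		}
	}
}

func main() {
	applyCh := make(chan *applyRequest, 128)
	req := &applyRequest{done: make(chan error, 1)}
	applyCh <- req
	drainPending(applyCh)
	fmt.Println(<-req.done) // meta cache closed
}
```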
* Add nil guards for resp.Entry in remaining ListEntries stream consumers

  Covers: S3 object lock check, MQ management dashboard (version/partition/offset loops), and topic retention version loop.

* Make applyLocalMetadataEvent best-effort in Link and Symlink

  The filer operations already succeeded; failing the syscall because the local cache apply failed is wrong. Log a warning and invalidate the parent directory cache instead.

* Make applyLocalMetadataEvent best-effort in Mkdir/Rmdir/Mknod/Unlink

  The filer RPC already committed; don't fail the syscall when the local metadata cache apply fails. Log a warning and invalidate the parent directory cache to force a re-fetch on next access.

* flushFileMetadata: add nil-fallback for metadata event and best-effort apply

  Synthesize a metadata event when resp.GetMetadataEvent() is nil (matching doFlush), and make the apply best-effort with cache invalidation on failure.

* Prevent double-invocation of cleanupBuild in doEnsureVisited

  Add a cleanupDone guard so the deferred cleanup and inline error-path cleanup don't both call DeleteFolderChildren/AbortDirectoryBuild.

* Fix comment: signature check is O(n) not O(1)

* Prevent deferred cleanup after successful CompleteDirectoryBuild

  Set cleanupDone before returning from the success path so the deferred context-cancellation check cannot undo a published build.

* Invalidate parent directory caches on rename metadata apply failure

  When applyLocalMetadataEvent fails during rename, invalidate the source and destination parent directory caches so subsequent accesses trigger a re-fetch from the filer.

* Add event nil-fallback and cache invalidation to Link and Symlink

  Synthesize metadata events when the server doesn't return one, and invalidate parent directory caches on apply failure.

* Match requested partition when scanning partition directories

  Parse the partition range format (NNNN-NNNN) and match it against the requested partition parameter instead of using the first directory.

* Preserve snapshot timestamp across empty directory listings

  Initialize actualSnapshotTsNs from the caller-requested value so it isn't lost when the server returns no entries. Re-add the server-side snapshot-only response for empty directories (all raw stream consumers now have nil guards for Entry).

* Fix CreateEntry error wrapping to support errors.Is/errors.As

  Use errors.New + %w instead of %v for resp.Error so callers can unwrap the underlying error (sketched after this list).

* Fix object lock pagination: only advance on non-nil entries

  Move entriesReceived inside the nil check so nil entries don't cause repeated ListEntries calls with the same lastFileName.

* Guard Attributes nil check before accessing Mtime in MQ management

* Do not send nil-Entry response for empty directory listings

  The snapshot-only ListEntriesResponse (with Entry == nil) for empty directories breaks consumers that treat any received response as an entry (Java FilerClient, S3 listing). The Go client-side DoSeaweedListWithSnapshot already preserves the caller-requested snapshot via actualSnapshotTsNs initialization, so the server-side send is unnecessary.
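The error-wrapping fix above in a nutshell; treating `resp.Error` as a plain string field is the assumption here. With `%v` the transported error is flattened into the message; `errors.New` plus `%w` keeps it reachable through `errors.Unwrap`, `errors.Is`, and `errors.As`:

```go
package main

import (
	"errors"
	"fmt"
)

// wrapOld mirrors the previous formatting: %v flattens resp.Error
// into the message, so nothing can be unwrapped later.
func wrapOld(respError string) error {
	return fmt.Errorf("CreateEntry: %v", respError)
}

// wrapNew is the fixed shape: errors.New turns the transported string
// into an error value, and %w keeps it on the unwrap chain.
func wrapNew(respError string) error {
	return fmt.Errorf("CreateEntry: %w", errors.New(respError))
}

func main() {
	fmt.Println(errors.Unwrap(wrapOld("exists"))) // <nil>: nothing wrapped
	fmt.Println(errors.Unwrap(wrapNew("exists"))) // exists
}
```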
* Fix review findings: subscriber dedup, invalidation normalization, nil guards, shutdown race

  - Remove self-signature early-return in processEventFn so all events flow through the applier (directory-build buffering sees self-originated events that arrive after a snapshot)
  - Normalize NewParentPath in collectEntryInvalidations to avoid duplicate invalidations when NewParentPath is empty (same-directory update)
  - Guard resp.Entry.Attributes for nil in admin_server.go and topic_retention.go to prevent panics on entries without attributes
  - Fix enqueueApplyRequest race with shutdown by using select on both applyCh and applyDone, preventing sends after the apply loop exits (sketched after this list)
  - Add cleanupDone check to deferred cleanup in meta_cache_init.go for clarity alongside the existing guard in cleanupBuild
  - Add empty directory test case for snapshot consistency

* Propagate authoritative metadata event from CacheRemoteObjectToLocalCluster and generate client-side snapshot for empty directories

  - Add metadata_event field to CacheRemoteObjectToLocalClusterResponse proto so the filer-emitted event is available to callers
  - Use WithMetadataEventSink in the server handler to capture the event from NotifyUpdateEvent and return it on the response
  - Update filehandle_read.go to prefer the RPC's metadata event over a locally fabricated one, falling back to metadataUpdateEvent when the server doesn't provide one (e.g., older filers)
  - Generate a client-side snapshot cutoff in DoSeaweedListWithSnapshot when the server sends no snapshot (empty directory), so callers like CompleteDirectoryBuild get a meaningful boundary for filtering buffered events

* Skip directory notifications for dirs being built to prevent mid-build cache wipe

  When a metadata event is buffered during a directory build, applyMetadataSideEffects was still firing noteDirectoryUpdate for the building directory. If the directory accumulated enough updates to become "hot", markDirectoryReadThrough would call DeleteFolderChildren, wiping entries that EnsureVisited had already inserted. The build would then complete and mark the directory cached with incomplete data. Fix by using applyMetadataSideEffectsSkippingBuildingDirs for buffered events, which suppresses directory notifications for dirs currently in buildingDirs while still applying entry invalidations.

* Add test for directory notification suppression during active build

  TestDirectoryNotificationsSuppressedDuringBuild verifies that metadata events targeting a directory under an active EnsureVisited build do NOT fire onDirectoryUpdate for that directory. In production, this prevents markDirectoryReadThrough from calling DeleteFolderChildren mid-build, which would wipe entries already inserted by the listing. The test inserts an entry during a build, sends multiple metadata events for the building directory, asserts no notifications fired for it, verifies the entry survives, and confirms buffered events are replayed after CompleteDirectoryBuild.
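A sketch of the enqueue/shutdown race fix from the first item above; `applyCh` and `applyDone` mirror the names in the bullet, everything else is illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

var errMetaCacheClosed = errors.New("meta cache closed")

type applier struct {
	applyCh   chan func()   // queued cache mutations
	applyDone chan struct{} // closed when the apply loop exits
}

// enqueue races the send against shutdown: once applyDone is closed,
// the second branch is always ready, so no send can block forever on
// (or race with) a loop that will never drain applyCh again.
func (a *applier) enqueue(req func()) error {
	select {
	case a.applyCh <- req:
		return nil
	case <-a.applyDone:
		return errMetaCacheClosed
	}
}

func main() {
	// Unbuffered channel with no receiver: only the shutdown branch
	// can proceed, which is exactly the post-shutdown situation.
	a := &applier{applyCh: make(chan func()), applyDone: make(chan struct{})}
	close(a.applyDone)
	fmt.Println(a.enqueue(func() {})) // meta cache closed
}
```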
* Fix create invalidations, build guard, event shape, context, and snapshot error path

  - collectEntryInvalidations: invalidate the FUSE kernel cache on pure create events (OldEntry==nil && NewEntry!=nil), not just updates and deletes
  - completeDirectoryBuildNow: only call markCachedFn when an active build existed (state != nil), preventing an unpopulated directory from being marked as cached
  - Add a metadataCreateEvent helper that produces a create-shaped event (NewEntry only, no OldEntry) and use it in the mkdir, mknod, symlink, and hardlink create fallback paths instead of metadataUpdateEvent, which incorrectly set both OldEntry and NewEntry (sketched after this list)
  - applyMetadataResponseEnqueue: use context.Background() for the queued mutation so a cancelled caller context cannot abort the apply loop mid-write
  - DoSeaweedListWithSnapshot: move snapshot initialization before the ListEntries call so the error path returns the preserved snapshot instead of 0

* Fix review findings: test loop, cache race, context safety, snapshot consistency

  - Fix the build test loop starting at i=1 instead of i=0, which missed new-0.txt verification
  - Re-check IsDirectoryCached after a cache miss to avoid an ENOENT race with markDirectoryReadThrough
  - Use context.Background() in enqueueAndWait so caller cancellation can't abort build/complete mid-way
  - Pass dh.snapshotTsNs in the skip-batch loadDirectoryEntriesDirect call for snapshot consistency
  - Prefer resp.MetadataEvent over the fallback in Unlink event derivation
  - Add a comment on the MetadataEventSink.Record single-event assumption

* Fix empty-directory snapshot clock skew and build cancellation race

  Empty-directory snapshot: remove the client-side time.Now() synthesis when the server returns no entries. Instead return snapshotTsNs=0, and in completeDirectoryBuildNow replay ALL buffered events when the snapshot is 0. This eliminates the clock-skew bug where a client ahead of the filer would filter out legitimate post-list events.

  Build cancellation: use context.Background() for the BeginDirectoryBuild and CompleteDirectoryBuild calls in doEnsureVisited, so errgroup cancellation doesn't cause enqueueAndWait to return early and trigger cleanupBuild while the operation is still queued.

* Add tests for empty-directory build replay and cancellation resilience

  TestEmptyDirectoryBuildReplaysAllBufferedEvents: verifies that when CompleteDirectoryBuild receives snapshotTsNs=0 (empty directory, no server snapshot), ALL buffered events are replayed regardless of their TsNs values; no clock-skew-sensitive filtering occurs.

  TestBuildCompletionSurvivesCallerCancellation: verifies that once CompleteDirectoryBuild is enqueued, a cancelled caller context does not prevent the build from completing. The apply loop runs with context.Background(), so the directory becomes cached and buffered events are replayed even when the caller gives up waiting.
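A sketch of the create-shaped event from the metadataCreateEvent item above. `filer_pb.SubscribeMetadataResponse` and `EventNotification` are the actual proto types; the helper body itself is an assumption about what the commit does. The shape is what matters: a create carries only NewEntry, whereas the update-shaped fallback used before set OldEntry as well, which downstream invalidation logic reads as an update.

```go
package main

import (
	"fmt"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
)

// metadataCreateEvent builds a create-shaped event: NewEntry set,
// OldEntry deliberately nil, so collectEntryInvalidations can tell
// a pure create apart from an update or delete.
func metadataCreateEvent(dir string, newEntry *filer_pb.Entry) *filer_pb.SubscribeMetadataResponse {
	return &filer_pb.SubscribeMetadataResponse{
		Directory: dir,
		TsNs:      time.Now().UnixNano(),
		EventNotification: &filer_pb.EventNotification{
			NewEntry: newEntry, // no OldEntry: pure create
		},
	}
}

func main() {
	ev := metadataCreateEvent("/data", &filer_pb.Entry{Name: "a.txt"})
	fmt.Println(ev.EventNotification.OldEntry == nil) // true: create shape
}
```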
* Fix directory subtree cleanup, Link rollback, test robustness

  - applyMetadataResponseLocked: when a directory entry is deleted or moved, call DeleteFolderChildren on the old path so cached descendants don't leak as stale entries.
  - Link: save the original HardLinkId/Counter before mutation. If CreateEntryWithResponse fails after the source was already updated, roll back the source entry to its original state via UpdateEntry.
  - TestBuildCompletionSurvivesCallerCancellation: replace the fixed time.Sleep(50ms) with a deadline-based poll that checks IsDirectoryCached in a loop, failing only after a 2s timeout.
  - TestReadDirAllEntriesWithSnapshotEmptyDirectory: assert that ListEntries was actually invoked on the mock client so the test exercises the RPC path.
  - newMetadataEvent: add an early return when both oldEntry and newEntry are nil to avoid emitting events with an empty Directory (sketched below).

---------

Co-authored-by: Copilot <copilot@github.com>
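Finally, a sketch of that newMetadataEvent guard; the signature and surrounding construction are assumed, only the nil/nil early return reflects the commit:

```go
package main

import (
	"fmt"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
)

// newMetadataEvent sketches the guard: with neither an old nor a new
// entry there is nothing to report, so return nil instead of emitting
// an event whose Directory would be empty.
func newMetadataEvent(dir string, oldEntry, newEntry *filer_pb.Entry) *filer_pb.SubscribeMetadataResponse {
	if oldEntry == nil && newEntry == nil {
		return nil // early return added by the fix
	}
	return &filer_pb.SubscribeMetadataResponse{
		Directory: dir,
		TsNs:      time.Now().UnixNano(),
		EventNotification: &filer_pb.EventNotification{
			OldEntry: oldEntry,
			NewEntry: newEntry,
		},
	}
}

func main() {
	fmt.Println(newMetadataEvent("/d", nil, nil) == nil) // true: guarded
}
```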
package s3api

import (
	"context"
	"encoding/xml"
	"errors"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"sort"
	"strconv"
	"strings"

	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
	"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
	"github.com/seaweedfs/seaweedfs/weed/s3api/s3err"
)

type OptionalString struct {
	string
	set bool
}

func (o OptionalString) MarshalXML(e *xml.Encoder, startElement xml.StartElement) error {
	if !o.set {
		return nil
	}
	return e.EncodeElement(o.string, startElement)
}

type ListBucketResultV2 struct {
	XMLName               xml.Name       `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult"`
	Name                  string         `xml:"Name"`
	Prefix                string         `xml:"Prefix"`
	MaxKeys               uint16         `xml:"MaxKeys"`
	Delimiter             string         `xml:"Delimiter,omitempty"`
	IsTruncated           bool           `xml:"IsTruncated"`
	Contents              []ListEntry    `xml:"Contents,omitempty"`
	CommonPrefixes        []PrefixEntry  `xml:"CommonPrefixes,omitempty"`
	ContinuationToken     OptionalString `xml:"ContinuationToken,omitempty"`
	NextContinuationToken string         `xml:"NextContinuationToken,omitempty"`
	EncodingType          string         `xml:"EncodingType,omitempty"`
	KeyCount              int            `xml:"KeyCount"`
	StartAfter            string         `xml:"StartAfter,omitempty"`
}

type listBucketResultV1 struct {
	XMLName        xml.Name        `xml:"http://s3.amazonaws.com/doc/2006-03-01/ ListBucketResult"`
	Metadata       []MetadataEntry `xml:"Metadata,omitempty"`
	Name           string          `xml:"Name"`
	Prefix         string          `xml:"Prefix"`
	Marker         string          `xml:"Marker"`
	NextMarker     string          `xml:"NextMarker,omitempty"`
	MaxKeys        int             `xml:"MaxKeys"`
	Delimiter      string          `xml:"Delimiter,omitempty"`
	IsTruncated    bool            `xml:"IsTruncated"`
	Contents       []ListEntry     `xml:"Contents,omitempty"`
	CommonPrefixes []PrefixEntry   `xml:"CommonPrefixes,omitempty"`
	EncodingType   string          `xml:"EncodingType,omitempty"`
}

func toListBucketResultV1(in ListBucketResult) listBucketResultV1 {
	return listBucketResultV1{
		Metadata:       in.Metadata,
		Name:           in.Name,
		Prefix:         in.Prefix,
		Marker:         in.Marker,
		NextMarker:     in.NextMarker,
		MaxKeys:        in.MaxKeys,
		Delimiter:      in.Delimiter,
		IsTruncated:    in.IsTruncated,
		Contents:       in.Contents,
		CommonPrefixes: in.CommonPrefixes,
		EncodingType:   in.EncodingType,
	}
}

func (s3a *S3ApiServer) ListObjectsV2Handler(w http.ResponseWriter, r *http.Request) {

	// https://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html

	// collect parameters
	bucket, _ := s3_constants.GetBucketAndObject(r)
	originalPrefix, startAfter, delimiter, continuationToken, encodingTypeUrl, fetchOwner, maxKeys, allowUnordered, errCode := getListObjectsV2Args(r.URL.Query())

	glog.V(2).Infof("ListObjectsV2Handler bucket=%s prefix=%s marker=%s", bucket, originalPrefix, continuationToken.string)

	if errCode != s3err.ErrNone {
		s3err.WriteErrorResponse(w, r, errCode)
		return
	}

	// maxKeys is uint16 here; negative values are rejected during parsing.

	// AWS S3 compatibility: allow-unordered cannot be used with delimiter
	if allowUnordered && delimiter != "" {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidUnorderedWithDelimiter)
		return
	}

	marker := continuationToken.string
	if !continuationToken.set {
		marker = startAfter
	}

	// Adjust marker if it ends with delimiter to skip all entries with that prefix
	marker = adjustMarkerForDelimiter(marker, delimiter)

	response, err := s3a.listFilerEntries(bucket, originalPrefix, maxKeys, marker, delimiter, encodingTypeUrl, fetchOwner)

	if err != nil {
		s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
		return
	}

	if len(response.Contents) == 0 {
		if exists, existErr := s3a.bucketExists(bucket); existErr == nil && !exists {
			s3err.WriteErrorResponse(w, r, s3err.ErrNoSuchBucket)
			return
		}
	}

	responseV2 := &ListBucketResultV2{
		Name:                  response.Name,
		CommonPrefixes:        response.CommonPrefixes,
		Contents:              response.Contents,
		ContinuationToken:     continuationToken,
		Delimiter:             response.Delimiter,
		IsTruncated:           response.IsTruncated,
		KeyCount:              len(response.Contents) + len(response.CommonPrefixes),
		MaxKeys:               uint16(response.MaxKeys),
		NextContinuationToken: response.NextMarker,
		Prefix:                response.Prefix,
		StartAfter:            startAfter,
	}
	if encodingTypeUrl {
		responseV2.EncodingType = s3.EncodingTypeUrl
	}

	glog.V(3).Infof("ListObjectsV2Handler response: %+v", responseV2)
	writeSuccessResponseXML(w, r, responseV2)
}

func (s3a *S3ApiServer) ListObjectsV1Handler(w http.ResponseWriter, r *http.Request) {

	// https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjects.html

	// collect parameters
	bucket, _ := s3_constants.GetBucketAndObject(r)
	originalPrefix, marker, delimiter, encodingTypeUrl, maxKeys, allowUnordered, errCode := getListObjectsV1Args(r.URL.Query())

	glog.V(2).Infof("ListObjectsV1Handler bucket=%s prefix=%s marker=%s delimiter=%s maxKeys=%d", bucket, originalPrefix, marker, delimiter, maxKeys)

	if errCode != s3err.ErrNone {
		s3err.WriteErrorResponse(w, r, errCode)
		return
	}

	if maxKeys < 0 {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidMaxKeys)
		return
	}

	// AWS S3 compatibility: allow-unordered cannot be used with delimiter
	if allowUnordered && delimiter != "" {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidUnorderedWithDelimiter)
		return
	}

	// Adjust marker if it ends with delimiter to skip all entries with that prefix
	marker = adjustMarkerForDelimiter(marker, delimiter)

	response, err := s3a.listFilerEntries(bucket, originalPrefix, uint16(maxKeys), marker, delimiter, encodingTypeUrl, true)

	if err != nil {
		s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
		return
	}
	sanitizeV1MarkerEcho(&response, marker, encodingTypeUrl)

	if len(response.Contents) == 0 {
		if exists, existErr := s3a.bucketExists(bucket); existErr == nil && !exists {
			s3err.WriteErrorResponse(w, r, s3err.ErrNoSuchBucket)
			return
		}
	}

	glog.V(3).Infof("ListObjectsV1Handler response: %+v", response)
	writeSuccessResponseXML(w, r, toListBucketResultV1(response))
}

func sanitizeV1MarkerEcho(response *ListBucketResult, marker string, encodingTypeUrl bool) {
	if marker == "" {
		return
	}

	markerCandidates := map[string]struct{}{
		marker:                          {},
		strings.TrimPrefix(marker, "/"): {},
	}
	if encodingTypeUrl {
		escapedMarker := urlPathEscape(strings.TrimPrefix(marker, "/"))
		markerCandidates[escapedMarker] = struct{}{}
	}
	matchesMarker := func(v string) bool {
		if _, ok := markerCandidates[v]; ok {
			return true
		}
		_, ok := markerCandidates[strings.TrimPrefix(v, "/")]
		return ok
	}

	if len(response.Contents) > 0 {
		filtered := response.Contents[:0]
		for _, content := range response.Contents {
			if matchesMarker(content.Key) {
				continue
			}
			filtered = append(filtered, content)
		}
		response.Contents = filtered
	}

	// doListFilerEntries advances nextMarker to the last emitted entry and skips
	// the marker in exclusive mode. So NextMarker==marker indicates no progress.
	if matchesMarker(response.NextMarker) && len(response.Contents) == 0 && len(response.CommonPrefixes) == 0 {
		response.NextMarker = ""
		response.IsTruncated = false
	}
}

func (s3a *S3ApiServer) listFilerEntries(bucket string, originalPrefix string, maxKeys uint16, originalMarker string, delimiter string, encodingTypeUrl bool, fetchOwner bool) (response ListBucketResult, err error) {
	// convert full path prefix into directory name and prefix for entry name
	requestDir, prefix, marker := normalizePrefixMarker(originalPrefix, originalMarker)
	bucketPrefix := s3a.bucketPrefix(bucket)
	reqDir := bucketPrefix[:len(bucketPrefix)-1]
	if requestDir != "" {
		reqDir = fmt.Sprintf("%s%s", bucketPrefix, requestDir)
	}

	var contents []ListEntry
	var commonPrefixes []PrefixEntry
	var doErr error
	var nextMarker string
	cursor := &ListingCursor{
		maxKeys:               maxKeys,
		prefixEndsOnDelimiter: strings.HasSuffix(originalPrefix, "/") && len(originalMarker) == 0,
	}

	// Special case: when maxKeys = 0, return empty results immediately with IsTruncated=false
	if maxKeys == 0 {
		response = ListBucketResult{
			Name:           bucket,
			Prefix:         originalPrefix,
			Marker:         originalMarker,
			NextMarker:     "",
			MaxKeys:        int(maxKeys),
			Delimiter:      delimiter,
			IsTruncated:    false,
			Contents:       contents,
			CommonPrefixes: commonPrefixes,
		}
		if encodingTypeUrl {
			response.EncodingType = s3.EncodingTypeUrl
		}
		return
	}

	// check filer
	err = s3a.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
		var lastEntryWasCommonPrefix bool
		var lastCommonPrefixName string

		// Hoist versioning check out of per-entry callback
		versioningState, _ := s3a.getVersioningState(bucket)
		versioningEnabled := versioningState == "Enabled"

		// Helper function to handle dedup/append logic
		appendOrDedup := func(newEntry ListEntry) {
			if versioningEnabled {
				// For versioned buckets, we need to handle duplicates between the main file and the .versions directory
				if len(contents) > 0 && contents[len(contents)-1].Key == newEntry.Key {
					glog.V(3).Infof("listFilerEntries deduplicating versioned entry: %s", newEntry.Key)
					contents[len(contents)-1] = newEntry
				} else {
					contents = append(contents, newEntry)
					cursor.maxKeys--
				}
			} else {
				contents = append(contents, newEntry)
				cursor.maxKeys--
			}
		}

		for {
			empty := true

			nextMarker, doErr = s3a.doListFilerEntries(client, reqDir, prefix, cursor, marker, delimiter, false, bucket, func(dir string, entry *filer_pb.Entry) {
				empty = false
				dirName, entryName, _ := entryUrlEncode(dir, entry.Name, encodingTypeUrl)
				if entry.IsDirectory {
					if originalPrefix != "" {
						normalizedPrefix := strings.TrimPrefix(strings.TrimSuffix(originalPrefix, "/"), "/")
						if normalizedPrefix != "" {
							relativePath := strings.TrimPrefix(fmt.Sprintf("%s/%s", dir, entry.Name), bucketPrefix)
							relativePath = strings.TrimPrefix(relativePath, "/")
							if normalizedPrefix == relativePath && !s3a.hasChildren(bucket, relativePath) && !entry.IsDirectoryKeyObject() {
								return
							}
						}
					}
					// When delimiter is specified, apply delimiter logic to directory key objects too
					if delimiter != "" && entry.IsDirectoryKeyObject() {
						// Apply the same delimiter logic as for regular files
						var delimiterFound bool
						// Use raw dir and entry.Name (not encoded) to ensure consistent handling
						// Encoding will be applied after sorting if encodingTypeUrl is set
						undelimitedPath := fmt.Sprintf("%s/%s/", dir, entry.Name)[len(bucketPrefix):]

						// take into account a prefix if supplied while delimiting.
						undelimitedPath = strings.TrimPrefix(undelimitedPath, originalPrefix)

						delimitedPath := strings.SplitN(undelimitedPath, delimiter, 2)
						if len(delimitedPath) == 2 {
							// S3 clients expect the delimited prefix to contain the delimiter and prefix.
							delimitedPrefix := originalPrefix + delimitedPath[0] + delimiter

							// Check if this CommonPrefix already exists
							if !lastEntryWasCommonPrefix || lastCommonPrefixName != delimitedPath[0] {
								// New CommonPrefix found
								commonPrefixes = append(commonPrefixes, PrefixEntry{
									Prefix: delimitedPrefix,
								})
								cursor.maxKeys--
								delimiterFound = true
								lastEntryWasCommonPrefix = true
								lastCommonPrefixName = delimitedPath[0]
							} else {
								// This directory object belongs to an existing CommonPrefix, skip it
								delimiterFound = true
							}
						}

						// If no delimiter found in the directory object name, treat it as a regular key
						if !delimiterFound {
							newEntry := newListEntry(s3a, entry, "", dirName, entryName, bucketPrefix, fetchOwner, true, false)
							appendOrDedup(newEntry)
							lastEntryWasCommonPrefix = false
						}
					} else if entry.IsDirectoryKeyObject() {
						// No delimiter specified, or delimiter doesn't apply - treat as regular key
						newEntry := newListEntry(s3a, entry, "", dirName, entryName, bucketPrefix, fetchOwner, true, false)
						appendOrDedup(newEntry)
						lastEntryWasCommonPrefix = false
						// https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html
					} else if delimiter != "" { // A response can contain CommonPrefixes only if you specify a delimiter.
						// Use raw dir and entry.Name (not encoded) to ensure consistent handling
						// Encoding will be applied after sorting if encodingTypeUrl is set
						commonPrefixes = append(commonPrefixes, PrefixEntry{
							Prefix: fmt.Sprintf("%s/%s/", dir, entry.Name)[len(bucketPrefix):],
						})
						// All of the keys (up to 1,000) rolled up into a common prefix count as a single return when calculating the number of returns.
						cursor.maxKeys--
						lastEntryWasCommonPrefix = true
						lastCommonPrefixName = entry.Name
					}
				} else {
					var delimiterFound bool
					if delimiter != "" {
						// keys that contain the same string between the prefix and the first occurrence of the delimiter are grouped together as a commonPrefix.
						// extract the string between the prefix and the delimiter and add it to the commonPrefixes if it's unique.
						undelimitedPath := fmt.Sprintf("%s/%s", dir, entry.Name)[len(bucketPrefix):]

						// take into account a prefix if supplied while delimiting.
						undelimitedPath = strings.TrimPrefix(undelimitedPath, originalPrefix)

						delimitedPath := strings.SplitN(undelimitedPath, delimiter, 2)

						if len(delimitedPath) == 2 {
							// S3 clients expect the delimited prefix to contain the delimiter and prefix.
							delimitedPrefix := originalPrefix + delimitedPath[0] + delimiter

							for i := range commonPrefixes {
								if commonPrefixes[i].Prefix == delimitedPrefix {
									delimiterFound = true
									break
								}
							}

							if !delimiterFound {
								commonPrefixes = append(commonPrefixes, PrefixEntry{
									Prefix: delimitedPrefix,
								})
								cursor.maxKeys--
								delimiterFound = true
								lastEntryWasCommonPrefix = true
								lastCommonPrefixName = delimitedPath[0]
							} else {
								// This object belongs to an existing CommonPrefix, skip it
								// but continue processing to maintain correct flow
								delimiterFound = true
							}
						}
					}
					if !delimiterFound {
						glog.V(4).Infof("Adding file to contents: %s", entryName)
						newEntry := newListEntry(s3a, entry, "", dirName, entryName, bucketPrefix, fetchOwner, false, false)
						appendOrDedup(newEntry)
						lastEntryWasCommonPrefix = false
					}
				}
			})
			if doErr != nil {
				if errors.Is(doErr, filer_pb.ErrNotFound) {
					empty = true
					nextMarker = ""
					break
				}
				return doErr
			}

			// Adjust nextMarker for CommonPrefixes to include trailing slash (AWS S3 compliance)
			if cursor.isTruncated {
				nextMarker = buildTruncatedNextMarker(requestDir, prefix, nextMarker, lastEntryWasCommonPrefix, lastCommonPrefixName)
			}

			if cursor.isTruncated {
				break
			} else if empty || strings.HasSuffix(originalPrefix, "/") {
				nextMarker = ""
				break
			} else {
				// start next loop
				marker = nextMarker
			}
		}

		response = ListBucketResult{
			Name:           bucket,
			Prefix:         originalPrefix,
			Marker:         originalMarker,
			NextMarker:     nextMarker,
			MaxKeys:        int(maxKeys),
			Delimiter:      delimiter,
			IsTruncated:    cursor.isTruncated,
			Contents:       contents,
			CommonPrefixes: commonPrefixes,
		}
		// Sort CommonPrefixes to match AWS S3 behavior
		// AWS S3 treats the delimiter character specially for sorting common prefixes.
		// For example, with delimiter '/', 'foo/' should come before 'foo+1/' even though '+' (ASCII 43) < '/' (ASCII 47).
		// This custom comparison ensures correct S3-compatible lexicographical ordering.
		sort.Slice(response.CommonPrefixes, func(i, j int) bool {
			return compareWithDelimiter(response.CommonPrefixes[i].Prefix, response.CommonPrefixes[j].Prefix, delimiter)
		})

		// URL-encode CommonPrefixes AFTER sorting (if EncodingType=url)
		// This ensures proper sort order (on decoded values) and correct encoding in response
		if encodingTypeUrl {
			response.EncodingType = s3.EncodingTypeUrl
			for i := range response.CommonPrefixes {
				response.CommonPrefixes[i].Prefix = urlPathEscape(response.CommonPrefixes[i].Prefix)
			}
		}
		return nil
	})

	return
}

type ListingCursor struct {
	maxKeys               uint16
	isTruncated           bool
	prefixEndsOnDelimiter bool
}

// the prefix and marker may be in different directories
// normalizePrefixMarker ensures the prefix and marker both start from the same directory
func normalizePrefixMarker(prefix, marker string) (alignedDir, alignedPrefix, alignedMarker string) {
	// alignedDir should not end with "/"
	// alignedDir, alignedPrefix, alignedMarker should only have "/" in middle
	if len(marker) == 0 {
		prefix = strings.Trim(prefix, "/")
	} else {
		prefix = strings.TrimLeft(prefix, "/")
	}
	marker = strings.TrimLeft(marker, "/")
	if prefix == "" {
		return "", "", marker
	}
	if marker == "" {
		alignedDir, alignedPrefix = toDirAndName(prefix)
		return
	}
	if !strings.HasPrefix(marker, prefix) {
		// something wrong
		return "", prefix, marker
	}
	if strings.HasPrefix(marker, prefix+"/") {
		alignedDir = prefix
		alignedPrefix = ""
		alignedMarker = marker[len(alignedDir)+1:]
		return
	}

	alignedDir, alignedPrefix = toDirAndName(prefix)
	if alignedDir != "" {
		alignedMarker = marker[len(alignedDir)+1:]
	} else {
		alignedMarker = marker
	}
	return
}

func toDirAndName(dirAndName string) (dir, name string) {
	sepIndex := strings.LastIndex(dirAndName, "/")
	if sepIndex >= 0 {
		dir, name = dirAndName[0:sepIndex], dirAndName[sepIndex+1:]
	} else {
		name = dirAndName
	}
	return
}

func toParentAndDescendants(dirAndName string) (dir, name string) {
	sepIndex := strings.Index(dirAndName, "/")
	if sepIndex >= 0 {
		dir, name = dirAndName[0:sepIndex], dirAndName[sepIndex+1:]
	} else {
		name = dirAndName
	}
	return
}

func buildTruncatedNextMarker(requestDir, prefix, nextMarker string, lastEntryWasCommonPrefix bool, lastCommonPrefixName string) string {
	if lastEntryWasCommonPrefix && lastCommonPrefixName != "" {
		// For CommonPrefixes, NextMarker should include the trailing slash
		if requestDir != "" {
			if prefix != "" {
				return requestDir + "/" + prefix + "/" + lastCommonPrefixName + "/"
			}
			return requestDir + "/" + lastCommonPrefixName + "/"
		}
		if prefix != "" {
			return prefix + "/" + lastCommonPrefixName + "/"
		}
		return lastCommonPrefixName + "/"
	}

	if requestDir != "" {
		return requestDir + "/" + nextMarker
	}

	return nextMarker
}

func (s3a *S3ApiServer) doListFilerEntries(client filer_pb.SeaweedFilerClient, dir, prefix string, cursor *ListingCursor, marker, delimiter string, inclusiveStartFrom bool, bucket string, eachEntryFn func(dir string, entry *filer_pb.Entry)) (nextMarker string, err error) {
	// invariants
	// prefix and marker should be under dir, marker may contain "/"
	// maxKeys should be updated for each recursion
	// glog.V(4).Infof("doListFilerEntries dir: %s, prefix: %s, marker %s, maxKeys: %d, prefixEndsOnDelimiter: %+v", dir, prefix, marker, cursor.maxKeys, cursor.prefixEndsOnDelimiter)
	// When listing at bucket root with delimiter '/', prefix can be "/" after normalization.
	// Returning early here would incorrectly hide all top-level entries (folders like "Veeam/").
	if cursor.maxKeys <= 0 {
		return // Don't set isTruncated here - let caller decide based on whether more entries exist
	}

	if strings.Contains(marker, "/") {
		subDir, subMarker := toParentAndDescendants(marker)
		// println("doListFilerEntries dir", dir+"/"+subDir, "subMarker", subMarker)
		subNextMarker, subErr := s3a.doListFilerEntries(client, dir+"/"+subDir, "", cursor, subMarker, delimiter, false, bucket, eachEntryFn)
		if subErr != nil {
			err = subErr
			return
		}
		nextMarker = subDir + "/" + subNextMarker
		// finished processing this subdirectory
		marker = subDir
	}
	if cursor.isTruncated {
		return
	}

	// now marker is also a direct child of dir
	request := &filer_pb.ListEntriesRequest{
		Directory:          dir,
		Prefix:             prefix,
		Limit:              uint32(cursor.maxKeys + 2), // bucket root directory needs to skip additional s3_constants.MultipartUploadsFolder folder
		StartFromFileName:  marker,
		InclusiveStartFrom: inclusiveStartFrom,
	}

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	stream, listErr := client.ListEntries(ctx, request)
	if listErr != nil {
		if errors.Is(listErr, filer_pb.ErrNotFound) {
			return
		}
		err = fmt.Errorf("list entries %+v: %w", request, listErr)
		return
	}

	for {
		resp, recvErr := stream.Recv()
		if recvErr != nil {
			if recvErr == io.EOF {
				break
			} else {
				err = fmt.Errorf("iterating entries %+v: %w", request, recvErr)
				return
			}
		}
		entry := resp.Entry
		if entry == nil {
			continue
		}
		// listFilerEntries always calls doListFilerEntries with inclusiveStartFrom=false
		// (S3 marker semantics are exclusive), but keep the guard explicit to preserve
		// behavior if inclusive callers are introduced in the future.
		if !inclusiveStartFrom && marker != "" && entry.Name == marker {
			continue
		}

		if cursor.maxKeys <= 0 {
			cursor.isTruncated = true
			break
		}

		// Set nextMarker only when we have quota to process this entry
		nextMarker = entry.Name
		if cursor.prefixEndsOnDelimiter {
			if entry.Name == prefix && entry.IsDirectory {
				if delimiter != "/" {
					cursor.prefixEndsOnDelimiter = false
				}
			} else {
				continue
			}
		}
		if entry.IsDirectory {
			// glog.V(4).Infof("List Dir Entries %s, file: %s, maxKeys %d", dir, entry.Name, cursor.maxKeys)
			if entry.Name == s3_constants.MultipartUploadsFolder { // FIXME no need to apply to all directories. this extra also affects maxKeys
				continue
			}

			// Process .versions directories immediately to create logical versioned object entries
			// These directories are never traversed (we continue here), so each is only encountered once
			if strings.HasSuffix(entry.Name, s3_constants.VersionsFolder) {
				// Extract object name from .versions directory name
				baseObjectName := strings.TrimSuffix(entry.Name, s3_constants.VersionsFolder)
				// Construct full object path relative to bucket
				bucketFullPath := s3a.bucketDir(bucket)
				bucketRelativePath := strings.TrimPrefix(dir, bucketFullPath)
				bucketRelativePath = strings.TrimPrefix(bucketRelativePath, "/")
				var fullObjectPath string
				if bucketRelativePath == "" {
					fullObjectPath = baseObjectName
				} else {
					fullObjectPath = bucketRelativePath + "/" + baseObjectName
				}
				// Use metadata from the already-fetched .versions directory entry
				if latestVersionEntry, err := s3a.getLatestVersionEntryFromDirectoryEntry(bucket, fullObjectPath, entry); err == nil {
					eachEntryFn(dir, latestVersionEntry)
				} else if !errors.Is(err, ErrDeleteMarker) {
					// Log unexpected errors (delete markers are expected)
					glog.V(2).Infof("Skipping versioned object %s due to error: %v", fullObjectPath, err)
				}
				continue
			}

			if delimiter != "/" || cursor.prefixEndsOnDelimiter {
				// When delimiter is empty (recursive mode), recurse into directories but don't add them to results
				// Only files and versioned objects should appear in results
				if cursor.prefixEndsOnDelimiter {
					cursor.prefixEndsOnDelimiter = false
					if entry.IsDirectoryKeyObject() {
						eachEntryFn(dir, entry)
					}
				}
				// Recurse into subdirectory - don't add the directory itself to results
				subNextMarker, subErr := s3a.doListFilerEntries(client, dir+"/"+entry.Name, "", cursor, "", delimiter, false, bucket, eachEntryFn)
				if subErr != nil {
					err = fmt.Errorf("doListFilerEntries2: %w", subErr)
					return
				}
				// println("doListFilerEntries2 dir", dir+"/"+entry.Name, "subNextMarker", subNextMarker)
				nextMarker = entry.Name + "/" + subNextMarker
				if cursor.isTruncated {
					return
				}
				// println("doListFilerEntries2 nextMarker", nextMarker)
			} else {
				eachEntryFn(dir, entry)
			}
		} else {
			eachEntryFn(dir, entry)
			// glog.V(4).Infof("List File Entries %s, file: %s, maxKeys %d", dir, entry.Name, cursor.maxKeys)
		}
		if cursor.prefixEndsOnDelimiter {
			cursor.prefixEndsOnDelimiter = false
		}
	}

	// Versioned directories are handled in the .versions branch above
	return
}

func getListObjectsV2Args(values url.Values) (prefix, startAfter, delimiter string, token OptionalString, encodingTypeUrl bool, fetchOwner bool, maxkeys uint16, allowUnordered bool, errCode s3err.ErrorCode) {
	prefix = values.Get("prefix")
	token = OptionalString{set: values.Has("continuation-token"), string: values.Get("continuation-token")}
	startAfter = values.Get("start-after")
	delimiter = values.Get("delimiter")
	encodingTypeUrl = values.Get("encoding-type") == s3.EncodingTypeUrl
	if values.Get("max-keys") != "" {
		if maxKeys, err := strconv.ParseUint(values.Get("max-keys"), 10, 16); err == nil {
			maxkeys = uint16(maxKeys)
		} else {
			// Invalid max-keys value (non-numeric)
			errCode = s3err.ErrInvalidMaxKeys
			return
		}
	} else {
		maxkeys = maxObjectListSizeLimit
	}
	fetchOwner = values.Get("fetch-owner") == "true"
	allowUnordered = values.Get("allow-unordered") == "true"
	errCode = s3err.ErrNone
	return
}

func getListObjectsV1Args(values url.Values) (prefix, marker, delimiter string, encodingTypeUrl bool, maxkeys int16, allowUnordered bool, errCode s3err.ErrorCode) {
	prefix = values.Get("prefix")
	marker = values.Get("marker")
	delimiter = values.Get("delimiter")
	encodingTypeUrl = values.Get("encoding-type") == "url"
	if values.Get("max-keys") != "" {
		if maxKeys, err := strconv.ParseInt(values.Get("max-keys"), 10, 16); err == nil {
			maxkeys = int16(maxKeys)
		} else {
			// Invalid max-keys value (non-numeric)
			errCode = s3err.ErrInvalidMaxKeys
			return
		}
	} else {
		maxkeys = maxObjectListSizeLimit
	}
	allowUnordered = values.Get("allow-unordered") == "true"
	errCode = s3err.ErrNone
	return
}

func (s3a *S3ApiServer) ensureDirectoryAllEmpty(filerClient filer_pb.SeaweedFilerClient, parentDir, name string) (isEmpty bool, err error) {
	// println("+ ensureDirectoryAllEmpty", dir, name)
	glog.V(4).Infof("+ isEmpty %s/%s", parentDir, name)
	defer glog.V(4).Infof("- isEmpty %s/%s %v", parentDir, name, isEmpty)
	var fileCounter int
	var subDirs []string
	currentDir := parentDir + "/" + name
	var startFrom string
	var isExhausted bool
	var foundEntry bool
	for fileCounter == 0 && !isExhausted && err == nil {
		err = filer_pb.SeaweedList(context.Background(), filerClient, currentDir, "", func(entry *filer_pb.Entry, isLast bool) error {
			foundEntry = true
			if entry.IsOlderDir() {
				subDirs = append(subDirs, entry.Name)
			} else {
				fileCounter++
			}
			startFrom = entry.Name
			isExhausted = isExhausted || isLast
			glog.V(4).Infof(" * %s/%s isLast: %t", currentDir, startFrom, isLast)
			return nil
		}, startFrom, false, 8)
		if !foundEntry {
			break
		}
	}

	if err != nil {
		return false, err
	}

	if fileCounter > 0 {
		return false, nil
	}

	for _, subDir := range subDirs {
		isSubEmpty, subErr := s3a.ensureDirectoryAllEmpty(filerClient, currentDir, subDir)
		if subErr != nil {
			return false, subErr
		}
		if !isSubEmpty {
			return false, nil
		}
	}

	glog.V(1).Infof("deleting empty folder %s", currentDir)
	if err = doDeleteEntry(filerClient, parentDir, name, true, false); err != nil {
		return
	}

	return true, nil
}

// compareWithDelimiter compares two strings for sorting, treating the delimiter character
// as having lower precedence than other characters to match AWS S3 behavior.
// For example, with delimiter '/', 'foo/' should come before 'foo+1/' even though '+' < '/' in ASCII.
// Note: This function assumes delimiter is a single character. Multi-character delimiters will fall back to standard comparison.
func compareWithDelimiter(a, b, delimiter string) bool {
	if delimiter == "" {
		return a < b
	}

	// Multi-character delimiters are not supported by AWS S3 in practice,
	// but if encountered, fall back to standard byte-wise comparison
	if len(delimiter) != 1 {
		return a < b
	}

	delimByte := delimiter[0]
	minLen := len(a)
	if len(b) < minLen {
		minLen = len(b)
	}

	// Compare character by character
	for i := 0; i < minLen; i++ {
		charA := a[i]
		charB := b[i]

		if charA == charB {
			continue
		}

		// Check if either character is the delimiter
		isDelimA := charA == delimByte
		isDelimB := charB == delimByte

		if isDelimA && !isDelimB {
			// Delimiter in 'a' should come first
			return true
		}
		if !isDelimA && isDelimB {
			// Delimiter in 'b' should come first
			return false
		}

		// Neither or both are delimiters, use normal comparison
		return charA < charB
	}

	// If we get here, one string is a prefix of the other
	return len(a) < len(b)
}

// adjustMarkerForDelimiter handles delimiter-ending markers by trimming the trailing
// delimiter so the marker lands on the prefix entry itself. For example, when the
// continuation token is "boo/", this returns "boo": since list markers are exclusive,
// the directory entry "boo" (and with it every "boo/*" entry) is skipped, while later
// entries such as "boo0" or "bop" are still found.
// This is essential for correct S3 list operations with delimiters and CommonPrefixes.
func adjustMarkerForDelimiter(marker, delimiter string) string {
	if delimiter == "" || !strings.HasSuffix(marker, delimiter) {
		return marker
	}

	// Remove the trailing delimiter
	// This ensures we skip all entries under the prefix but don't skip
	// potential directory entries that start with a similar prefix
	prefix := strings.TrimSuffix(marker, delimiter)
	if len(prefix) == 0 {
		return marker
	}

	return prefix
}