filer: add structured error codes to CreateEntryResponse (#8767)

* filer: add FilerError enum and error_code field to CreateEntryResponse

Add a machine-readable error code alongside the existing string error
field. This follows the precedent set by PublishMessageResponse in the
MQ broker proto. The string field is kept for human readability and
backward compatibility.

Defined codes: OK, ENTRY_NAME_TOO_LONG, PARENT_IS_FILE,
EXISTING_IS_DIRECTORY, EXISTING_IS_FILE, ENTRY_ALREADY_EXISTS.

* filer: add sentinel errors and error code mapping in filer_pb

Define sentinel errors (ErrEntryNameTooLong, ErrParentIsFile, etc.) in
the filer_pb package so both the filer and consumers can reference them
without circular imports.

Add FilerErrorToSentinel() to map proto error codes to sentinels, and
update CreateEntryWithResponse() to check error_code first, falling back
to the string-based path for backward compatibility with old servers.

* filer: return wrapped sentinel errors and set proto error codes

Replace fmt.Errorf string errors in filer.CreateEntry, UpdateEntry, and
ensureParentDirectoryEntry with wrapped filer_pb sentinel errors (using
%w). This preserves errors.Is() traversal on the server side.

In the gRPC CreateEntry handler, map sentinel errors to the
corresponding FilerError proto codes using errors.Is(), setting both
resp.Error (string, for backward compat) and resp.ErrorCode (enum).

* S3: use errors.Is() with filer sentinels instead of string matching

Replace fragile string-based error matching in filerErrorToS3Error and
other S3 API consumers with errors.Is() checks against filer_pb sentinel
errors. This works because the updated CreateEntryWithResponse helper
reconstructs sentinel errors from the proto FilerError code.

Update iceberg stage_create and metadata_files to check resp.ErrorCode
instead of parsing resp.Error strings. Update SSE-S3 to use errors.Is()
for the already-exists check.

String matching is retained only for non-filer errors (gRPC transport
errors, checksum validation) that don't go through CreateEntryResponse.

* filer: remove backward-compat string fallbacks for error codes

Clients and servers are always deployed together, so there is no need
for backward-compatibility fallback paths that parse resp.Error strings
when resp.ErrorCode is unset. Simplify all consumers to rely solely on
the structured error code.

* iceberg: ensure unknown non-OK error codes are not silently ignored

When FilerErrorToSentinel returns nil for an unrecognized error code,
return an error including the code and message rather than falling
through to return nil.

* filer: fix redundant error message and restore error wrapping in helper

Use request path instead of resp.Error in the sentinel error format
string to avoid duplicating the sentinel message (e.g. "entry already
exists: entry already exists"). Restore %w wrapping with errors.New()
in the fallback paths so callers can use errors.Is()/errors.As().

* filer: promote file to directory on path conflict instead of erroring

S3 allows both "foo/bar" (object) and "foo/bar/xyzzy" (another object)
to coexist because S3 has a flat key space. When ensureParentDirectoryEntry
finds a parent path that is a file instead of a directory, promote it to
a directory by setting ModeDir while preserving the original content and
chunks. Use Store.UpdateEntry directly to bypass the Filer.UpdateEntry
type-change guard.

This fixes the S3 compatibility test failures where creating overlapping
keys (e.g. "foo/bar" then "foo/bar/xyzzy") returned ExistingObjectIsFile.
Author: Chris Lu
Date: 2026-03-24 17:08:22 -07:00 (committed by GitHub)
Parent: 152884eff2
Commit: 0b3867dca3
11 changed files with 376 additions and 204 deletions


@@ -203,9 +203,21 @@ message CreateEntryRequest {
     bool skip_check_parent_directory = 6;
 }
 
+// Structured error codes for filer entry operations.
+// Values are stable — do not reorder or reuse numbers.
+enum FilerError {
+    OK = 0;
+    ENTRY_NAME_TOO_LONG = 1;   // name exceeds max_file_name_length
+    PARENT_IS_FILE = 2;        // parent path component is a file, not a directory
+    EXISTING_IS_DIRECTORY = 3; // cannot overwrite directory with file
+    EXISTING_IS_FILE = 4;      // cannot overwrite file with directory
+    ENTRY_ALREADY_EXISTS = 5;  // O_EXCL and entry already exists
+}
+
 message CreateEntryResponse {
-    string error = 1;
+    string error = 1; // kept for human readability + backward compat
     SubscribeMetadataResponse metadata_event = 2;
+    FilerError error_code = 3; // machine-readable error code
 }
 
 message UpdateEntryRequest {


@@ -23,7 +23,6 @@ import (
 	"github.com/seaweedfs/seaweedfs/weed/glog"
 	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
 	"github.com/seaweedfs/seaweedfs/weed/util"
-	"github.com/seaweedfs/seaweedfs/weed/util/constants"
 	"github.com/seaweedfs/seaweedfs/weed/util/log_buffer"
 	"github.com/seaweedfs/seaweedfs/weed/wdclient"
 	"golang.org/x/sync/singleflight"
@@ -204,7 +203,7 @@ func (f *Filer) CreateEntry(ctx context.Context, entry *Entry, o_excl bool, isFr
} }
if entry.FullPath.IsLongerFileName(maxFilenameLength) { if entry.FullPath.IsLongerFileName(maxFilenameLength) {
return fmt.Errorf(constants.ErrMsgEntryNameTooLong) return filer_pb.ErrEntryNameTooLong
} }
if entry.IsDirectory() { if entry.IsDirectory() {
@@ -238,7 +237,7 @@ func (f *Filer) CreateEntry(ctx context.Context, entry *Entry, o_excl bool, isFr
 	} else {
 		if o_excl {
 			glog.V(3).InfofCtx(ctx, "EEXIST: entry %s already exists", entry.FullPath)
-			return fmt.Errorf("EEXIST: entry %s already exists", entry.FullPath)
+			return fmt.Errorf("%s: %w", entry.FullPath, filer_pb.ErrEntryAlreadyExists)
 		}
 		glog.V(4).InfofCtx(ctx, "UpdateEntry %s: old entry: %v", entry.FullPath, oldEntry.Name())
 		if err := f.UpdateEntry(ctx, oldEntry, entry); err != nil {
@@ -324,8 +323,16 @@ func (f *Filer) ensureParentDirectoryEntry(ctx context.Context, entry *Entry, di
 		}
 	} else if !dirEntry.IsDirectory() {
-		glog.ErrorfCtx(ctx, "CreateEntry %s: %s should be a directory", entry.FullPath, dirPath)
-		return fmt.Errorf("%s%s", dirPath, constants.ErrMsgIsAFile)
+		// S3 allows both "foo/bar" (object) and "foo/bar/xyzzy" (another
+		// object) to coexist because S3 has a flat key space. Promote the
+		// existing file to a directory, preserving its content/chunks so
+		// the original object data remains accessible.
+		glog.V(2).InfofCtx(ctx, "promoting %s from file to directory for %s", dirPath, entry.FullPath)
+		dirEntry.Attr.Mode |= os.ModeDir | 0111
+		if updateErr := f.Store.UpdateEntry(ctx, dirEntry); updateErr != nil {
+			return fmt.Errorf("promote %s to directory: %v", dirPath, updateErr)
+		}
+		f.NotifyUpdateEvent(ctx, nil, dirEntry, false, isFromOtherCluster, nil)
 	}
 
 	return nil
@@ -336,11 +343,11 @@ func (f *Filer) UpdateEntry(ctx context.Context, oldEntry, entry *Entry) (err er
 		entry.Attr.Crtime = oldEntry.Attr.Crtime
 		if oldEntry.IsDirectory() && !entry.IsDirectory() {
 			glog.ErrorfCtx(ctx, "existing %s is a directory", oldEntry.FullPath)
-			return fmt.Errorf("%s%s%s", constants.ErrMsgExistingPrefix, oldEntry.FullPath, constants.ErrMsgIsADirectory)
+			return fmt.Errorf("%s: %w", oldEntry.FullPath, filer_pb.ErrExistingIsDirectory)
 		}
 		if !oldEntry.IsDirectory() && entry.IsDirectory() {
 			glog.ErrorfCtx(ctx, "existing %s is a file", oldEntry.FullPath)
-			return fmt.Errorf("%s%s%s", constants.ErrMsgExistingPrefix, oldEntry.FullPath, constants.ErrMsgIsAFile)
+			return fmt.Errorf("%s: %w", oldEntry.FullPath, filer_pb.ErrExistingIsFile)
 		}
 	}
 	return f.Store.UpdateEntry(ctx, entry)


@@ -203,9 +203,21 @@ message CreateEntryRequest {
     bool skip_check_parent_directory = 6;
 }
 
+// Structured error codes for filer entry operations.
+// Values are stable — do not reorder or reuse numbers.
+enum FilerError {
+    OK = 0;
+    ENTRY_NAME_TOO_LONG = 1;   // name exceeds max_file_name_length
+    PARENT_IS_FILE = 2;        // parent path component is a file, not a directory
+    EXISTING_IS_DIRECTORY = 3; // cannot overwrite directory with file
+    EXISTING_IS_FILE = 4;      // cannot overwrite file with directory
+    ENTRY_ALREADY_EXISTS = 5;  // O_EXCL and entry already exists
+}
+
 message CreateEntryResponse {
-    string error = 1;
+    string error = 1; // kept for human readability + backward compat
     SubscribeMetadataResponse metadata_event = 2;
+    FilerError error_code = 3; // machine-readable error code
 }
 
 message UpdateEntryRequest {


@@ -73,6 +73,66 @@ func (SSEType) EnumDescriptor() ([]byte, []int) {
 	return file_filer_proto_rawDescGZIP(), []int{0}
 }
 
+// Structured error codes for filer entry operations.
+// Values are stable — do not reorder or reuse numbers.
+type FilerError int32
+
+const (
+	FilerError_OK                    FilerError = 0
+	FilerError_ENTRY_NAME_TOO_LONG   FilerError = 1 // name exceeds max_file_name_length
+	FilerError_PARENT_IS_FILE        FilerError = 2 // parent path component is a file, not a directory
+	FilerError_EXISTING_IS_DIRECTORY FilerError = 3 // cannot overwrite directory with file
+	FilerError_EXISTING_IS_FILE      FilerError = 4 // cannot overwrite file with directory
+	FilerError_ENTRY_ALREADY_EXISTS  FilerError = 5 // O_EXCL and entry already exists
+)
+
+// Enum value maps for FilerError.
+var (
+	FilerError_name = map[int32]string{
+		0: "OK",
+		1: "ENTRY_NAME_TOO_LONG",
+		2: "PARENT_IS_FILE",
+		3: "EXISTING_IS_DIRECTORY",
+		4: "EXISTING_IS_FILE",
+		5: "ENTRY_ALREADY_EXISTS",
+	}
+	FilerError_value = map[string]int32{
+		"OK":                    0,
+		"ENTRY_NAME_TOO_LONG":   1,
+		"PARENT_IS_FILE":        2,
+		"EXISTING_IS_DIRECTORY": 3,
+		"EXISTING_IS_FILE":      4,
+		"ENTRY_ALREADY_EXISTS":  5,
+	}
+)
+
+func (x FilerError) Enum() *FilerError {
+	p := new(FilerError)
+	*p = x
+	return p
+}
+
+func (x FilerError) String() string {
+	return protoimpl.X.EnumStringOf(x.Descriptor(), protoreflect.EnumNumber(x))
+}
+
+func (FilerError) Descriptor() protoreflect.EnumDescriptor {
+	return file_filer_proto_enumTypes[1].Descriptor()
+}
+
+func (FilerError) Type() protoreflect.EnumType {
+	return &file_filer_proto_enumTypes[1]
+}
+
+func (x FilerError) Number() protoreflect.EnumNumber {
+	return protoreflect.EnumNumber(x)
+}
+
+// Deprecated: Use FilerError.Descriptor instead.
+func (FilerError) EnumDescriptor() ([]byte, []int) {
+	return file_filer_proto_rawDescGZIP(), []int{1}
+}
+
 type LookupDirectoryEntryRequest struct {
 	state protoimpl.MessageState `protogen:"open.v1"`
 	Directory string `protobuf:"bytes,1,opt,name=directory,proto3" json:"directory,omitempty"`
@@ -1119,8 +1179,9 @@ func (x *CreateEntryRequest) GetSkipCheckParentDirectory() bool {
 type CreateEntryResponse struct {
 	state protoimpl.MessageState `protogen:"open.v1"`
-	Error string `protobuf:"bytes,1,opt,name=error,proto3" json:"error,omitempty"`
+	Error string `protobuf:"bytes,1,opt,name=error,proto3" json:"error,omitempty"` // kept for human readability + backward compat
 	MetadataEvent *SubscribeMetadataResponse `protobuf:"bytes,2,opt,name=metadata_event,json=metadataEvent,proto3" json:"metadata_event,omitempty"`
+	ErrorCode FilerError `protobuf:"varint,3,opt,name=error_code,json=errorCode,proto3,enum=filer_pb.FilerError" json:"error_code,omitempty"` // machine-readable error code
 	unknownFields protoimpl.UnknownFields
 	sizeCache protoimpl.SizeCache
 }
@@ -1169,6 +1230,13 @@ func (x *CreateEntryResponse) GetMetadataEvent() *SubscribeMetadataResponse {
 	return nil
 }
 
+func (x *CreateEntryResponse) GetErrorCode() FilerError {
+	if x != nil {
+		return x.ErrorCode
+	}
+	return FilerError_OK
+}
+
 type UpdateEntryRequest struct {
 	state protoimpl.MessageState `protogen:"open.v1"`
 	Directory string `protobuf:"bytes,1,opt,name=directory,proto3" json:"directory,omitempty"`
@@ -4571,10 +4639,12 @@ const file_filer_proto_rawDesc = "" +
 	"\n" +
 	"signatures\x18\x05 \x03(\x05R\n" +
 	"signatures\x12=\n" +
-	"\x1bskip_check_parent_directory\x18\x06 \x01(\bR\x18skipCheckParentDirectory\"w\n" +
+	"\x1bskip_check_parent_directory\x18\x06 \x01(\bR\x18skipCheckParentDirectory\"\xac\x01\n" +
 	"\x13CreateEntryResponse\x12\x14\n" +
 	"\x05error\x18\x01 \x01(\tR\x05error\x12J\n" +
-	"\x0emetadata_event\x18\x02 \x01(\v2#.filer_pb.SubscribeMetadataResponseR\rmetadataEvent\"\xd2\x02\n" +
+	"\x0emetadata_event\x18\x02 \x01(\v2#.filer_pb.SubscribeMetadataResponseR\rmetadataEvent\x123\n" +
+	"\n" +
+	"error_code\x18\x03 \x01(\x0e2\x14.filer_pb.FilerErrorR\terrorCode\"\xd2\x02\n" +
 	"\x12UpdateEntryRequest\x12\x1c\n" +
 	"\tdirectory\x18\x01 \x01(\tR\tdirectory\x12%\n" +
 	"\x05entry\x18\x02 \x01(\v2\x0f.filer_pb.EntryR\x05entry\x121\n" +
@@ -4851,7 +4921,15 @@ const file_filer_proto_rawDesc = "" +
 	"\x05SSE_C\x10\x01\x12\v\n" +
 	"\aSSE_KMS\x10\x02\x12\n" +
 	"\n" +
-	"\x06SSE_S3\x10\x032\xf7\x10\n" +
+	"\x06SSE_S3\x10\x03*\x8c\x01\n" +
+	"\n" +
+	"FilerError\x12\x06\n" +
+	"\x02OK\x10\x00\x12\x17\n" +
+	"\x13ENTRY_NAME_TOO_LONG\x10\x01\x12\x12\n" +
+	"\x0ePARENT_IS_FILE\x10\x02\x12\x19\n" +
+	"\x15EXISTING_IS_DIRECTORY\x10\x03\x12\x14\n" +
+	"\x10EXISTING_IS_FILE\x10\x04\x12\x18\n" +
+	"\x14ENTRY_ALREADY_EXISTS\x10\x052\xf7\x10\n" +
 	"\fSeaweedFiler\x12g\n" +
 	"\x14LookupDirectoryEntry\x12%.filer_pb.LookupDirectoryEntryRequest\x1a&.filer_pb.LookupDirectoryEntryResponse\"\x00\x12N\n" +
 	"\vListEntries\x12\x1c.filer_pb.ListEntriesRequest\x1a\x1d.filer_pb.ListEntriesResponse\"\x000\x01\x12L\n" +
@@ -4894,171 +4972,173 @@ func file_filer_proto_rawDescGZIP() []byte {
 	return file_filer_proto_rawDescData
 }
 
-var file_filer_proto_enumTypes = make([]protoimpl.EnumInfo, 1)
+var file_filer_proto_enumTypes = make([]protoimpl.EnumInfo, 2)
 var file_filer_proto_msgTypes = make([]protoimpl.MessageInfo, 71)
 var file_filer_proto_goTypes = []any{
 	(SSEType)(0), // 0: filer_pb.SSEType
-	(*LookupDirectoryEntryRequest)(nil), // 1: filer_pb.LookupDirectoryEntryRequest
-	(*LookupDirectoryEntryResponse)(nil), // 2: filer_pb.LookupDirectoryEntryResponse
-	(*ListEntriesRequest)(nil), // 3: filer_pb.ListEntriesRequest
-	(*ListEntriesResponse)(nil), // 4: filer_pb.ListEntriesResponse
-	(*RemoteEntry)(nil), // 5: filer_pb.RemoteEntry
-	(*Entry)(nil), // 6: filer_pb.Entry
-	(*FullEntry)(nil), // 7: filer_pb.FullEntry
-	(*EventNotification)(nil), // 8: filer_pb.EventNotification
-	(*FileChunk)(nil), // 9: filer_pb.FileChunk
-	(*FileChunkManifest)(nil), // 10: filer_pb.FileChunkManifest
-	(*FileId)(nil), // 11: filer_pb.FileId
-	(*FuseAttributes)(nil), // 12: filer_pb.FuseAttributes
-	(*CreateEntryRequest)(nil), // 13: filer_pb.CreateEntryRequest
-	(*CreateEntryResponse)(nil), // 14: filer_pb.CreateEntryResponse
-	(*UpdateEntryRequest)(nil), // 15: filer_pb.UpdateEntryRequest
-	(*UpdateEntryResponse)(nil), // 16: filer_pb.UpdateEntryResponse
-	(*AppendToEntryRequest)(nil), // 17: filer_pb.AppendToEntryRequest
-	(*AppendToEntryResponse)(nil), // 18: filer_pb.AppendToEntryResponse
-	(*DeleteEntryRequest)(nil), // 19: filer_pb.DeleteEntryRequest
-	(*DeleteEntryResponse)(nil), // 20: filer_pb.DeleteEntryResponse
-	(*AtomicRenameEntryRequest)(nil), // 21: filer_pb.AtomicRenameEntryRequest
-	(*AtomicRenameEntryResponse)(nil), // 22: filer_pb.AtomicRenameEntryResponse
-	(*StreamRenameEntryRequest)(nil), // 23: filer_pb.StreamRenameEntryRequest
-	(*StreamRenameEntryResponse)(nil), // 24: filer_pb.StreamRenameEntryResponse
-	(*AssignVolumeRequest)(nil), // 25: filer_pb.AssignVolumeRequest
-	(*AssignVolumeResponse)(nil), // 26: filer_pb.AssignVolumeResponse
-	(*LookupVolumeRequest)(nil), // 27: filer_pb.LookupVolumeRequest
-	(*Locations)(nil), // 28: filer_pb.Locations
-	(*Location)(nil), // 29: filer_pb.Location
-	(*LookupVolumeResponse)(nil), // 30: filer_pb.LookupVolumeResponse
-	(*Collection)(nil), // 31: filer_pb.Collection
-	(*CollectionListRequest)(nil), // 32: filer_pb.CollectionListRequest
-	(*CollectionListResponse)(nil), // 33: filer_pb.CollectionListResponse
-	(*DeleteCollectionRequest)(nil), // 34: filer_pb.DeleteCollectionRequest
-	(*DeleteCollectionResponse)(nil), // 35: filer_pb.DeleteCollectionResponse
-	(*StatisticsRequest)(nil), // 36: filer_pb.StatisticsRequest
-	(*StatisticsResponse)(nil), // 37: filer_pb.StatisticsResponse
-	(*PingRequest)(nil), // 38: filer_pb.PingRequest
-	(*PingResponse)(nil), // 39: filer_pb.PingResponse
-	(*GetFilerConfigurationRequest)(nil), // 40: filer_pb.GetFilerConfigurationRequest
-	(*GetFilerConfigurationResponse)(nil), // 41: filer_pb.GetFilerConfigurationResponse
-	(*SubscribeMetadataRequest)(nil), // 42: filer_pb.SubscribeMetadataRequest
-	(*SubscribeMetadataResponse)(nil), // 43: filer_pb.SubscribeMetadataResponse
-	(*TraverseBfsMetadataRequest)(nil), // 44: filer_pb.TraverseBfsMetadataRequest
-	(*TraverseBfsMetadataResponse)(nil), // 45: filer_pb.TraverseBfsMetadataResponse
-	(*LogEntry)(nil), // 46: filer_pb.LogEntry
-	(*KeepConnectedRequest)(nil), // 47: filer_pb.KeepConnectedRequest
-	(*KeepConnectedResponse)(nil), // 48: filer_pb.KeepConnectedResponse
-	(*LocateBrokerRequest)(nil), // 49: filer_pb.LocateBrokerRequest
-	(*LocateBrokerResponse)(nil), // 50: filer_pb.LocateBrokerResponse
-	(*KvGetRequest)(nil), // 51: filer_pb.KvGetRequest
-	(*KvGetResponse)(nil), // 52: filer_pb.KvGetResponse
-	(*KvPutRequest)(nil), // 53: filer_pb.KvPutRequest
-	(*KvPutResponse)(nil), // 54: filer_pb.KvPutResponse
-	(*FilerConf)(nil), // 55: filer_pb.FilerConf
-	(*CacheRemoteObjectToLocalClusterRequest)(nil), // 56: filer_pb.CacheRemoteObjectToLocalClusterRequest
-	(*CacheRemoteObjectToLocalClusterResponse)(nil), // 57: filer_pb.CacheRemoteObjectToLocalClusterResponse
-	(*LockRequest)(nil), // 58: filer_pb.LockRequest
-	(*LockResponse)(nil), // 59: filer_pb.LockResponse
-	(*UnlockRequest)(nil), // 60: filer_pb.UnlockRequest
-	(*UnlockResponse)(nil), // 61: filer_pb.UnlockResponse
-	(*FindLockOwnerRequest)(nil), // 62: filer_pb.FindLockOwnerRequest
-	(*FindLockOwnerResponse)(nil), // 63: filer_pb.FindLockOwnerResponse
-	(*Lock)(nil), // 64: filer_pb.Lock
-	(*TransferLocksRequest)(nil), // 65: filer_pb.TransferLocksRequest
-	(*TransferLocksResponse)(nil), // 66: filer_pb.TransferLocksResponse
-	nil, // 67: filer_pb.Entry.ExtendedEntry
-	nil, // 68: filer_pb.UpdateEntryRequest.ExpectedExtendedEntry
-	nil, // 69: filer_pb.LookupVolumeResponse.LocationsMapEntry
-	(*LocateBrokerResponse_Resource)(nil), // 70: filer_pb.LocateBrokerResponse.Resource
-	(*FilerConf_PathConf)(nil), // 71: filer_pb.FilerConf.PathConf
+	(FilerError)(0), // 1: filer_pb.FilerError
+	(*LookupDirectoryEntryRequest)(nil), // 2: filer_pb.LookupDirectoryEntryRequest
+	(*LookupDirectoryEntryResponse)(nil), // 3: filer_pb.LookupDirectoryEntryResponse
+	(*ListEntriesRequest)(nil), // 4: filer_pb.ListEntriesRequest
+	(*ListEntriesResponse)(nil), // 5: filer_pb.ListEntriesResponse
+	(*RemoteEntry)(nil), // 6: filer_pb.RemoteEntry
+	(*Entry)(nil), // 7: filer_pb.Entry
+	(*FullEntry)(nil), // 8: filer_pb.FullEntry
+	(*EventNotification)(nil), // 9: filer_pb.EventNotification
+	(*FileChunk)(nil), // 10: filer_pb.FileChunk
+	(*FileChunkManifest)(nil), // 11: filer_pb.FileChunkManifest
+	(*FileId)(nil), // 12: filer_pb.FileId
+	(*FuseAttributes)(nil), // 13: filer_pb.FuseAttributes
+	(*CreateEntryRequest)(nil), // 14: filer_pb.CreateEntryRequest
+	(*CreateEntryResponse)(nil), // 15: filer_pb.CreateEntryResponse
+	(*UpdateEntryRequest)(nil), // 16: filer_pb.UpdateEntryRequest
+	(*UpdateEntryResponse)(nil), // 17: filer_pb.UpdateEntryResponse
+	(*AppendToEntryRequest)(nil), // 18: filer_pb.AppendToEntryRequest
+	(*AppendToEntryResponse)(nil), // 19: filer_pb.AppendToEntryResponse
+	(*DeleteEntryRequest)(nil), // 20: filer_pb.DeleteEntryRequest
+	(*DeleteEntryResponse)(nil), // 21: filer_pb.DeleteEntryResponse
+	(*AtomicRenameEntryRequest)(nil), // 22: filer_pb.AtomicRenameEntryRequest
+	(*AtomicRenameEntryResponse)(nil), // 23: filer_pb.AtomicRenameEntryResponse
+	(*StreamRenameEntryRequest)(nil), // 24: filer_pb.StreamRenameEntryRequest
+	(*StreamRenameEntryResponse)(nil), // 25: filer_pb.StreamRenameEntryResponse
+	(*AssignVolumeRequest)(nil), // 26: filer_pb.AssignVolumeRequest
+	(*AssignVolumeResponse)(nil), // 27: filer_pb.AssignVolumeResponse
+	(*LookupVolumeRequest)(nil), // 28: filer_pb.LookupVolumeRequest
+	(*Locations)(nil), // 29: filer_pb.Locations
+	(*Location)(nil), // 30: filer_pb.Location
+	(*LookupVolumeResponse)(nil), // 31: filer_pb.LookupVolumeResponse
+	(*Collection)(nil), // 32: filer_pb.Collection
+	(*CollectionListRequest)(nil), // 33: filer_pb.CollectionListRequest
+	(*CollectionListResponse)(nil), // 34: filer_pb.CollectionListResponse
+	(*DeleteCollectionRequest)(nil), // 35: filer_pb.DeleteCollectionRequest
+	(*DeleteCollectionResponse)(nil), // 36: filer_pb.DeleteCollectionResponse
+	(*StatisticsRequest)(nil), // 37: filer_pb.StatisticsRequest
+	(*StatisticsResponse)(nil), // 38: filer_pb.StatisticsResponse
+	(*PingRequest)(nil), // 39: filer_pb.PingRequest
+	(*PingResponse)(nil), // 40: filer_pb.PingResponse
+	(*GetFilerConfigurationRequest)(nil), // 41: filer_pb.GetFilerConfigurationRequest
+	(*GetFilerConfigurationResponse)(nil), // 42: filer_pb.GetFilerConfigurationResponse
+	(*SubscribeMetadataRequest)(nil), // 43: filer_pb.SubscribeMetadataRequest
+	(*SubscribeMetadataResponse)(nil), // 44: filer_pb.SubscribeMetadataResponse
+	(*TraverseBfsMetadataRequest)(nil), // 45: filer_pb.TraverseBfsMetadataRequest
+	(*TraverseBfsMetadataResponse)(nil), // 46: filer_pb.TraverseBfsMetadataResponse
+	(*LogEntry)(nil), // 47: filer_pb.LogEntry
+	(*KeepConnectedRequest)(nil), // 48: filer_pb.KeepConnectedRequest
+	(*KeepConnectedResponse)(nil), // 49: filer_pb.KeepConnectedResponse
+	(*LocateBrokerRequest)(nil), // 50: filer_pb.LocateBrokerRequest
+	(*LocateBrokerResponse)(nil), // 51: filer_pb.LocateBrokerResponse
+	(*KvGetRequest)(nil), // 52: filer_pb.KvGetRequest
+	(*KvGetResponse)(nil), // 53: filer_pb.KvGetResponse
+	(*KvPutRequest)(nil), // 54: filer_pb.KvPutRequest
+	(*KvPutResponse)(nil), // 55: filer_pb.KvPutResponse
+	(*FilerConf)(nil), // 56: filer_pb.FilerConf
+	(*CacheRemoteObjectToLocalClusterRequest)(nil), // 57: filer_pb.CacheRemoteObjectToLocalClusterRequest
+	(*CacheRemoteObjectToLocalClusterResponse)(nil), // 58: filer_pb.CacheRemoteObjectToLocalClusterResponse
+	(*LockRequest)(nil), // 59: filer_pb.LockRequest
+	(*LockResponse)(nil), // 60: filer_pb.LockResponse
+	(*UnlockRequest)(nil), // 61: filer_pb.UnlockRequest
+	(*UnlockResponse)(nil), // 62: filer_pb.UnlockResponse
+	(*FindLockOwnerRequest)(nil), // 63: filer_pb.FindLockOwnerRequest
+	(*FindLockOwnerResponse)(nil), // 64: filer_pb.FindLockOwnerResponse
+	(*Lock)(nil), // 65: filer_pb.Lock
+	(*TransferLocksRequest)(nil), // 66: filer_pb.TransferLocksRequest
+	(*TransferLocksResponse)(nil), // 67: filer_pb.TransferLocksResponse
+	nil, // 68: filer_pb.Entry.ExtendedEntry
+	nil, // 69: filer_pb.UpdateEntryRequest.ExpectedExtendedEntry
+	nil, // 70: filer_pb.LookupVolumeResponse.LocationsMapEntry
+	(*LocateBrokerResponse_Resource)(nil), // 71: filer_pb.LocateBrokerResponse.Resource
+	(*FilerConf_PathConf)(nil), // 72: filer_pb.FilerConf.PathConf
 }
 var file_filer_proto_depIdxs = []int32{
-	6, // 0: filer_pb.LookupDirectoryEntryResponse.entry:type_name -> filer_pb.Entry
-	6, // 1: filer_pb.ListEntriesResponse.entry:type_name -> filer_pb.Entry
-	9, // 2: filer_pb.Entry.chunks:type_name -> filer_pb.FileChunk
-	12, // 3: filer_pb.Entry.attributes:type_name -> filer_pb.FuseAttributes
-	67, // 4: filer_pb.Entry.extended:type_name -> filer_pb.Entry.ExtendedEntry
-	5, // 5: filer_pb.Entry.remote_entry:type_name -> filer_pb.RemoteEntry
-	6, // 6: filer_pb.FullEntry.entry:type_name -> filer_pb.Entry
-	6, // 7: filer_pb.EventNotification.old_entry:type_name -> filer_pb.Entry
-	6, // 8: filer_pb.EventNotification.new_entry:type_name -> filer_pb.Entry
-	11, // 9: filer_pb.FileChunk.fid:type_name -> filer_pb.FileId
-	11, // 10: filer_pb.FileChunk.source_fid:type_name -> filer_pb.FileId
-	0, // 11: filer_pb.FileChunk.sse_type:type_name -> filer_pb.SSEType
-	9, // 12: filer_pb.FileChunkManifest.chunks:type_name -> filer_pb.FileChunk
-	6, // 13: filer_pb.CreateEntryRequest.entry:type_name -> filer_pb.Entry
-	43, // 14: filer_pb.CreateEntryResponse.metadata_event:type_name -> filer_pb.SubscribeMetadataResponse
-	6, // 15: filer_pb.UpdateEntryRequest.entry:type_name -> filer_pb.Entry
-	68, // 16: filer_pb.UpdateEntryRequest.expected_extended:type_name -> filer_pb.UpdateEntryRequest.ExpectedExtendedEntry
-	43, // 17: filer_pb.UpdateEntryResponse.metadata_event:type_name -> filer_pb.SubscribeMetadataResponse
-	9, // 18: filer_pb.AppendToEntryRequest.chunks:type_name -> filer_pb.FileChunk
-	43, // 19: filer_pb.DeleteEntryResponse.metadata_event:type_name -> filer_pb.SubscribeMetadataResponse
-	8, // 20: filer_pb.StreamRenameEntryResponse.event_notification:type_name -> filer_pb.EventNotification
-	29, // 21: filer_pb.AssignVolumeResponse.location:type_name -> filer_pb.Location
-	29, // 22: filer_pb.Locations.locations:type_name -> filer_pb.Location
-	69, // 23: filer_pb.LookupVolumeResponse.locations_map:type_name -> filer_pb.LookupVolumeResponse.LocationsMapEntry
-	31, // 24: filer_pb.CollectionListResponse.collections:type_name -> filer_pb.Collection
-	8, // 25: filer_pb.SubscribeMetadataResponse.event_notification:type_name -> filer_pb.EventNotification
-	6, // 26: filer_pb.TraverseBfsMetadataResponse.entry:type_name -> filer_pb.Entry
-	70, // 27: filer_pb.LocateBrokerResponse.resources:type_name -> filer_pb.LocateBrokerResponse.Resource
-	71, // 28: filer_pb.FilerConf.locations:type_name -> filer_pb.FilerConf.PathConf
-	6, // 29: filer_pb.CacheRemoteObjectToLocalClusterResponse.entry:type_name -> filer_pb.Entry
-	43, // 30: filer_pb.CacheRemoteObjectToLocalClusterResponse.metadata_event:type_name -> filer_pb.SubscribeMetadataResponse
-	64, // 31: filer_pb.TransferLocksRequest.locks:type_name -> filer_pb.Lock
-	28, // 32: filer_pb.LookupVolumeResponse.LocationsMapEntry.value:type_name -> filer_pb.Locations
-	1, // 33: filer_pb.SeaweedFiler.LookupDirectoryEntry:input_type -> filer_pb.LookupDirectoryEntryRequest
-	3, // 34: filer_pb.SeaweedFiler.ListEntries:input_type -> filer_pb.ListEntriesRequest
-	13, // 35: filer_pb.SeaweedFiler.CreateEntry:input_type -> filer_pb.CreateEntryRequest
-	15, // 36: filer_pb.SeaweedFiler.UpdateEntry:input_type -> filer_pb.UpdateEntryRequest
-	17, // 37: filer_pb.SeaweedFiler.AppendToEntry:input_type -> filer_pb.AppendToEntryRequest
-	19, // 38: filer_pb.SeaweedFiler.DeleteEntry:input_type -> filer_pb.DeleteEntryRequest
-	21, // 39: filer_pb.SeaweedFiler.AtomicRenameEntry:input_type -> filer_pb.AtomicRenameEntryRequest
-	23, // 40: filer_pb.SeaweedFiler.StreamRenameEntry:input_type -> filer_pb.StreamRenameEntryRequest
-	25, // 41: filer_pb.SeaweedFiler.AssignVolume:input_type -> filer_pb.AssignVolumeRequest
-	27, // 42: filer_pb.SeaweedFiler.LookupVolume:input_type -> filer_pb.LookupVolumeRequest
-	32, // 43: filer_pb.SeaweedFiler.CollectionList:input_type -> filer_pb.CollectionListRequest
-	34, // 44: filer_pb.SeaweedFiler.DeleteCollection:input_type -> filer_pb.DeleteCollectionRequest
-	36, // 45: filer_pb.SeaweedFiler.Statistics:input_type -> filer_pb.StatisticsRequest
-	38, // 46: filer_pb.SeaweedFiler.Ping:input_type -> filer_pb.PingRequest
-	40, // 47: filer_pb.SeaweedFiler.GetFilerConfiguration:input_type -> filer_pb.GetFilerConfigurationRequest
-	44, // 48: filer_pb.SeaweedFiler.TraverseBfsMetadata:input_type -> filer_pb.TraverseBfsMetadataRequest
-	42, // 49: filer_pb.SeaweedFiler.SubscribeMetadata:input_type -> filer_pb.SubscribeMetadataRequest
-	42, // 50: filer_pb.SeaweedFiler.SubscribeLocalMetadata:input_type -> filer_pb.SubscribeMetadataRequest
+	7, // 0: filer_pb.LookupDirectoryEntryResponse.entry:type_name -> filer_pb.Entry
+	7, // 1: filer_pb.ListEntriesResponse.entry:type_name -> filer_pb.Entry
+	10, // 2: filer_pb.Entry.chunks:type_name -> filer_pb.FileChunk
+	13, // 3: filer_pb.Entry.attributes:type_name -> filer_pb.FuseAttributes
+	68, // 4: filer_pb.Entry.extended:type_name -> filer_pb.Entry.ExtendedEntry
+	6, // 5: filer_pb.Entry.remote_entry:type_name -> filer_pb.RemoteEntry
+	7, // 6: filer_pb.FullEntry.entry:type_name -> filer_pb.Entry
+	7, // 7: filer_pb.EventNotification.old_entry:type_name -> filer_pb.Entry
+	7, // 8: filer_pb.EventNotification.new_entry:type_name -> filer_pb.Entry
+	12, // 9: filer_pb.FileChunk.fid:type_name -> filer_pb.FileId
+	12, // 10: filer_pb.FileChunk.source_fid:type_name -> filer_pb.FileId
+	0, // 11: filer_pb.FileChunk.sse_type:type_name -> filer_pb.SSEType
+	10, // 12: filer_pb.FileChunkManifest.chunks:type_name -> filer_pb.FileChunk
+	7, // 13: filer_pb.CreateEntryRequest.entry:type_name -> filer_pb.Entry
+	44, // 14: filer_pb.CreateEntryResponse.metadata_event:type_name -> filer_pb.SubscribeMetadataResponse
+	1, // 15: filer_pb.CreateEntryResponse.error_code:type_name -> filer_pb.FilerError
+	7, // 16: filer_pb.UpdateEntryRequest.entry:type_name -> filer_pb.Entry
+	69, // 17: filer_pb.UpdateEntryRequest.expected_extended:type_name -> filer_pb.UpdateEntryRequest.ExpectedExtendedEntry
+	44, // 18: filer_pb.UpdateEntryResponse.metadata_event:type_name -> filer_pb.SubscribeMetadataResponse
+	10, // 19: filer_pb.AppendToEntryRequest.chunks:type_name -> filer_pb.FileChunk
+	44, // 20: filer_pb.DeleteEntryResponse.metadata_event:type_name -> filer_pb.SubscribeMetadataResponse
+	9, // 21: filer_pb.StreamRenameEntryResponse.event_notification:type_name -> filer_pb.EventNotification
+	30, // 22: filer_pb.AssignVolumeResponse.location:type_name -> filer_pb.Location
+	30, // 23: filer_pb.Locations.locations:type_name -> filer_pb.Location
+	70, // 24: filer_pb.LookupVolumeResponse.locations_map:type_name -> filer_pb.LookupVolumeResponse.LocationsMapEntry
+	32, // 25: filer_pb.CollectionListResponse.collections:type_name -> filer_pb.Collection
+	9, // 26: filer_pb.SubscribeMetadataResponse.event_notification:type_name -> filer_pb.EventNotification
+	7, // 27: filer_pb.TraverseBfsMetadataResponse.entry:type_name -> filer_pb.Entry
+	71, // 28: filer_pb.LocateBrokerResponse.resources:type_name -> filer_pb.LocateBrokerResponse.Resource
+	72, // 29: filer_pb.FilerConf.locations:type_name -> filer_pb.FilerConf.PathConf
+	7, // 30: filer_pb.CacheRemoteObjectToLocalClusterResponse.entry:type_name -> filer_pb.Entry
+	44, // 31: filer_pb.CacheRemoteObjectToLocalClusterResponse.metadata_event:type_name -> filer_pb.SubscribeMetadataResponse
+	65, // 32: filer_pb.TransferLocksRequest.locks:type_name -> filer_pb.Lock
+	29, // 33: filer_pb.LookupVolumeResponse.LocationsMapEntry.value:type_name -> filer_pb.Locations
+	2, // 34: filer_pb.SeaweedFiler.LookupDirectoryEntry:input_type -> filer_pb.LookupDirectoryEntryRequest
+	4, // 35: filer_pb.SeaweedFiler.ListEntries:input_type -> filer_pb.ListEntriesRequest
+	14, // 36: filer_pb.SeaweedFiler.CreateEntry:input_type -> filer_pb.CreateEntryRequest
+	16, // 37: filer_pb.SeaweedFiler.UpdateEntry:input_type -> filer_pb.UpdateEntryRequest
+	18, // 38: filer_pb.SeaweedFiler.AppendToEntry:input_type -> filer_pb.AppendToEntryRequest
+	20, // 39: filer_pb.SeaweedFiler.DeleteEntry:input_type -> filer_pb.DeleteEntryRequest
+	22, // 40: filer_pb.SeaweedFiler.AtomicRenameEntry:input_type -> filer_pb.AtomicRenameEntryRequest
+	24, // 41: filer_pb.SeaweedFiler.StreamRenameEntry:input_type -> filer_pb.StreamRenameEntryRequest
+	26, // 42: filer_pb.SeaweedFiler.AssignVolume:input_type -> filer_pb.AssignVolumeRequest
+	28, // 43: filer_pb.SeaweedFiler.LookupVolume:input_type -> filer_pb.LookupVolumeRequest
+	33, // 44: filer_pb.SeaweedFiler.CollectionList:input_type -> filer_pb.CollectionListRequest
+	35, // 45: filer_pb.SeaweedFiler.DeleteCollection:input_type -> filer_pb.DeleteCollectionRequest
+	37, // 46: filer_pb.SeaweedFiler.Statistics:input_type -> filer_pb.StatisticsRequest
+	39, // 47: filer_pb.SeaweedFiler.Ping:input_type -> filer_pb.PingRequest
+	41, // 48: filer_pb.SeaweedFiler.GetFilerConfiguration:input_type -> filer_pb.GetFilerConfigurationRequest
+	45, // 49: filer_pb.SeaweedFiler.TraverseBfsMetadata:input_type -> filer_pb.TraverseBfsMetadataRequest
+	43, // 50: filer_pb.SeaweedFiler.SubscribeMetadata:input_type -> filer_pb.SubscribeMetadataRequest
51, // 51: filer_pb.SeaweedFiler.KvGet:input_type -> filer_pb.KvGetRequest 43, // 51: filer_pb.SeaweedFiler.SubscribeLocalMetadata:input_type -> filer_pb.SubscribeMetadataRequest
53, // 52: filer_pb.SeaweedFiler.KvPut:input_type -> filer_pb.KvPutRequest 52, // 52: filer_pb.SeaweedFiler.KvGet:input_type -> filer_pb.KvGetRequest
56, // 53: filer_pb.SeaweedFiler.CacheRemoteObjectToLocalCluster:input_type -> filer_pb.CacheRemoteObjectToLocalClusterRequest 54, // 53: filer_pb.SeaweedFiler.KvPut:input_type -> filer_pb.KvPutRequest
58, // 54: filer_pb.SeaweedFiler.DistributedLock:input_type -> filer_pb.LockRequest 57, // 54: filer_pb.SeaweedFiler.CacheRemoteObjectToLocalCluster:input_type -> filer_pb.CacheRemoteObjectToLocalClusterRequest
60, // 55: filer_pb.SeaweedFiler.DistributedUnlock:input_type -> filer_pb.UnlockRequest 59, // 55: filer_pb.SeaweedFiler.DistributedLock:input_type -> filer_pb.LockRequest
62, // 56: filer_pb.SeaweedFiler.FindLockOwner:input_type -> filer_pb.FindLockOwnerRequest 61, // 56: filer_pb.SeaweedFiler.DistributedUnlock:input_type -> filer_pb.UnlockRequest
65, // 57: filer_pb.SeaweedFiler.TransferLocks:input_type -> filer_pb.TransferLocksRequest 63, // 57: filer_pb.SeaweedFiler.FindLockOwner:input_type -> filer_pb.FindLockOwnerRequest
2, // 58: filer_pb.SeaweedFiler.LookupDirectoryEntry:output_type -> filer_pb.LookupDirectoryEntryResponse 66, // 58: filer_pb.SeaweedFiler.TransferLocks:input_type -> filer_pb.TransferLocksRequest
4, // 59: filer_pb.SeaweedFiler.ListEntries:output_type -> filer_pb.ListEntriesResponse 3, // 59: filer_pb.SeaweedFiler.LookupDirectoryEntry:output_type -> filer_pb.LookupDirectoryEntryResponse
14, // 60: filer_pb.SeaweedFiler.CreateEntry:output_type -> filer_pb.CreateEntryResponse 5, // 60: filer_pb.SeaweedFiler.ListEntries:output_type -> filer_pb.ListEntriesResponse
16, // 61: filer_pb.SeaweedFiler.UpdateEntry:output_type -> filer_pb.UpdateEntryResponse 15, // 61: filer_pb.SeaweedFiler.CreateEntry:output_type -> filer_pb.CreateEntryResponse
18, // 62: filer_pb.SeaweedFiler.AppendToEntry:output_type -> filer_pb.AppendToEntryResponse 17, // 62: filer_pb.SeaweedFiler.UpdateEntry:output_type -> filer_pb.UpdateEntryResponse
20, // 63: filer_pb.SeaweedFiler.DeleteEntry:output_type -> filer_pb.DeleteEntryResponse 19, // 63: filer_pb.SeaweedFiler.AppendToEntry:output_type -> filer_pb.AppendToEntryResponse
22, // 64: filer_pb.SeaweedFiler.AtomicRenameEntry:output_type -> filer_pb.AtomicRenameEntryResponse 21, // 64: filer_pb.SeaweedFiler.DeleteEntry:output_type -> filer_pb.DeleteEntryResponse
24, // 65: filer_pb.SeaweedFiler.StreamRenameEntry:output_type -> filer_pb.StreamRenameEntryResponse 23, // 65: filer_pb.SeaweedFiler.AtomicRenameEntry:output_type -> filer_pb.AtomicRenameEntryResponse
26, // 66: filer_pb.SeaweedFiler.AssignVolume:output_type -> filer_pb.AssignVolumeResponse 25, // 66: filer_pb.SeaweedFiler.StreamRenameEntry:output_type -> filer_pb.StreamRenameEntryResponse
30, // 67: filer_pb.SeaweedFiler.LookupVolume:output_type -> filer_pb.LookupVolumeResponse 27, // 67: filer_pb.SeaweedFiler.AssignVolume:output_type -> filer_pb.AssignVolumeResponse
33, // 68: filer_pb.SeaweedFiler.CollectionList:output_type -> filer_pb.CollectionListResponse 31, // 68: filer_pb.SeaweedFiler.LookupVolume:output_type -> filer_pb.LookupVolumeResponse
35, // 69: filer_pb.SeaweedFiler.DeleteCollection:output_type -> filer_pb.DeleteCollectionResponse 34, // 69: filer_pb.SeaweedFiler.CollectionList:output_type -> filer_pb.CollectionListResponse
37, // 70: filer_pb.SeaweedFiler.Statistics:output_type -> filer_pb.StatisticsResponse 36, // 70: filer_pb.SeaweedFiler.DeleteCollection:output_type -> filer_pb.DeleteCollectionResponse
39, // 71: filer_pb.SeaweedFiler.Ping:output_type -> filer_pb.PingResponse 38, // 71: filer_pb.SeaweedFiler.Statistics:output_type -> filer_pb.StatisticsResponse
41, // 72: filer_pb.SeaweedFiler.GetFilerConfiguration:output_type -> filer_pb.GetFilerConfigurationResponse 40, // 72: filer_pb.SeaweedFiler.Ping:output_type -> filer_pb.PingResponse
45, // 73: filer_pb.SeaweedFiler.TraverseBfsMetadata:output_type -> filer_pb.TraverseBfsMetadataResponse 42, // 73: filer_pb.SeaweedFiler.GetFilerConfiguration:output_type -> filer_pb.GetFilerConfigurationResponse
43, // 74: filer_pb.SeaweedFiler.SubscribeMetadata:output_type -> filer_pb.SubscribeMetadataResponse 46, // 74: filer_pb.SeaweedFiler.TraverseBfsMetadata:output_type -> filer_pb.TraverseBfsMetadataResponse
43, // 75: filer_pb.SeaweedFiler.SubscribeLocalMetadata:output_type -> filer_pb.SubscribeMetadataResponse 44, // 75: filer_pb.SeaweedFiler.SubscribeMetadata:output_type -> filer_pb.SubscribeMetadataResponse
52, // 76: filer_pb.SeaweedFiler.KvGet:output_type -> filer_pb.KvGetResponse 44, // 76: filer_pb.SeaweedFiler.SubscribeLocalMetadata:output_type -> filer_pb.SubscribeMetadataResponse
54, // 77: filer_pb.SeaweedFiler.KvPut:output_type -> filer_pb.KvPutResponse 53, // 77: filer_pb.SeaweedFiler.KvGet:output_type -> filer_pb.KvGetResponse
57, // 78: filer_pb.SeaweedFiler.CacheRemoteObjectToLocalCluster:output_type -> filer_pb.CacheRemoteObjectToLocalClusterResponse 55, // 78: filer_pb.SeaweedFiler.KvPut:output_type -> filer_pb.KvPutResponse
59, // 79: filer_pb.SeaweedFiler.DistributedLock:output_type -> filer_pb.LockResponse 58, // 79: filer_pb.SeaweedFiler.CacheRemoteObjectToLocalCluster:output_type -> filer_pb.CacheRemoteObjectToLocalClusterResponse
61, // 80: filer_pb.SeaweedFiler.DistributedUnlock:output_type -> filer_pb.UnlockResponse 60, // 80: filer_pb.SeaweedFiler.DistributedLock:output_type -> filer_pb.LockResponse
63, // 81: filer_pb.SeaweedFiler.FindLockOwner:output_type -> filer_pb.FindLockOwnerResponse 62, // 81: filer_pb.SeaweedFiler.DistributedUnlock:output_type -> filer_pb.UnlockResponse
66, // 82: filer_pb.SeaweedFiler.TransferLocks:output_type -> filer_pb.TransferLocksResponse 64, // 82: filer_pb.SeaweedFiler.FindLockOwner:output_type -> filer_pb.FindLockOwnerResponse
58, // [58:83] is the sub-list for method output_type 67, // 83: filer_pb.SeaweedFiler.TransferLocks:output_type -> filer_pb.TransferLocksResponse
33, // [33:58] is the sub-list for method input_type 59, // [59:84] is the sub-list for method output_type
33, // [33:33] is the sub-list for extension type_name 34, // [34:59] is the sub-list for method input_type
33, // [33:33] is the sub-list for extension extendee 34, // [34:34] is the sub-list for extension type_name
0, // [0:33] is the sub-list for field type_name 34, // [34:34] is the sub-list for extension extendee
0, // [0:34] is the sub-list for field type_name
} }
 func init() { file_filer_proto_init() }
@@ -5071,7 +5151,7 @@ func file_filer_proto_init() {
 		File: protoimpl.DescBuilder{
 			GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
 			RawDescriptor: unsafe.Slice(unsafe.StringData(file_filer_proto_rawDesc), len(file_filer_proto_rawDesc)),
-			NumEnums:      1,
+			NumEnums:      2,
 			NumMessages:   71,
 			NumExtensions: 0,
 			NumServices:   1,

View File

@@ -145,6 +145,13 @@ func CreateEntryWithResponse(ctx context.Context, client SeaweedFilerClient, req
 		glog.V(1).InfofCtx(ctx, "create entry %s/%s %v: %v", request.Directory, request.Entry.Name, request.OExcl, err)
 		return nil, fmt.Errorf("CreateEntry: %w", err)
 	}
+	if resp.ErrorCode != FilerError_OK {
+		glog.V(1).InfofCtx(ctx, "create entry %s/%s %v: %v (code %v)", request.Directory, request.Entry.Name, request.OExcl, resp.Error, resp.ErrorCode)
+		if sentinel := FilerErrorToSentinel(resp.ErrorCode); sentinel != nil {
+			return nil, fmt.Errorf("CreateEntry %s/%s: %w", request.Directory, request.Entry.Name, sentinel)
+		}
+		return nil, fmt.Errorf("CreateEntry: %w", errors.New(resp.Error))
+	}
 	if resp.Error != "" {
 		glog.V(1).InfofCtx(ctx, "create entry %s/%s %v: %v", request.Directory, request.Entry.Name, request.OExcl, resp.Error)
 		return nil, fmt.Errorf("CreateEntry: %w", errors.New(resp.Error))
@@ -183,6 +190,37 @@ func LookupEntry(ctx context.Context, client SeaweedFilerClient, request *Lookup
 var ErrNotFound = errors.New("filer: no entry is found in filer store")
+
+// Sentinel errors for filer entry operations.
+// These are set by the filer and reconstructed from FilerError codes after
+// crossing the gRPC boundary, so consumers can use errors.Is() instead of
+// parsing error strings.
+var (
+	ErrEntryNameTooLong    = errors.New("entry name too long")
+	ErrParentIsFile        = errors.New("parent path is a file")
+	ErrExistingIsDirectory = errors.New("existing entry is a directory")
+	ErrExistingIsFile      = errors.New("existing entry is a file")
+	ErrEntryAlreadyExists  = errors.New("entry already exists")
+)
+
+// FilerErrorToSentinel maps a proto FilerError code to its sentinel error.
+// Returns nil for OK or unknown codes.
+func FilerErrorToSentinel(code FilerError) error {
+	switch code {
+	case FilerError_ENTRY_NAME_TOO_LONG:
+		return ErrEntryNameTooLong
+	case FilerError_PARENT_IS_FILE:
+		return ErrParentIsFile
+	case FilerError_EXISTING_IS_DIRECTORY:
+		return ErrExistingIsDirectory
+	case FilerError_EXISTING_IS_FILE:
+		return ErrExistingIsFile
+	case FilerError_ENTRY_ALREADY_EXISTS:
+		return ErrEntryAlreadyExists
+	default:
+		return nil
+	}
+}
+
 func IsEmpty(event *SubscribeMetadataResponse) bool {
 	return event.EventNotification.NewEntry == nil && event.EventNotification.OldEntry == nil
 }

View File

@@ -51,7 +51,7 @@ func (s *Server) saveMetadataFile(ctx context.Context, bucketName, tablePath, me
 	if createErr != nil {
 		return fmt.Errorf("failed to create %s: %w", errorContext, createErr)
 	}
-	if resp.Error != "" && !strings.Contains(resp.Error, "exist") {
+	if resp.ErrorCode != filer_pb.FilerError_OK && resp.ErrorCode != filer_pb.FilerError_ENTRY_ALREADY_EXISTS {
 		return fmt.Errorf("failed to create %s: %s", errorContext, resp.Error)
 	}
 	return nil
@@ -104,8 +104,11 @@ func (s *Server) saveMetadataFile(ctx context.Context, bucketName, tablePath, me
 	if err != nil {
 		return fmt.Errorf("failed to write metadata file: %w", err)
 	}
-	if resp.Error != "" {
-		return fmt.Errorf("failed to write metadata file: %s", resp.Error)
+	if resp.ErrorCode != filer_pb.FilerError_OK {
+		if sentinel := filer_pb.FilerErrorToSentinel(resp.ErrorCode); sentinel != nil {
+			return fmt.Errorf("failed to write metadata file: %w", sentinel)
+		}
+		return fmt.Errorf("failed to write metadata file: code=%v %s", resp.ErrorCode, resp.Error)
 	}
 	return nil
 })

View File

@@ -198,7 +198,7 @@ func (s *Server) writeStageCreateMarker(ctx context.Context, bucketName string,
 	if createErr != nil {
 		return fmt.Errorf("failed to create %s: %w", errorContext, createErr)
 	}
-	if resp.Error != "" && !strings.Contains(resp.Error, "exist") {
+	if resp.ErrorCode != filer_pb.FilerError_OK && resp.ErrorCode != filer_pb.FilerError_ENTRY_ALREADY_EXISTS {
 		return fmt.Errorf("failed to create %s: %s", errorContext, resp.Error)
 	}
 	return nil
@@ -236,8 +236,11 @@ func (s *Server) writeStageCreateMarker(ctx context.Context, bucketName string,
 	if createErr != nil {
 		return createErr
 	}
-	if resp.Error != "" {
-		return errors.New(resp.Error)
+	if resp.ErrorCode != filer_pb.FilerError_OK {
+		if sentinel := filer_pb.FilerErrorToSentinel(resp.ErrorCode); sentinel != nil {
+			return fmt.Errorf("create stage marker: %w", sentinel)
+		}
+		return fmt.Errorf("create stage marker: code=%v %s", resp.ErrorCode, resp.Error)
 	}
 	return nil
 })

View File

@@ -15,7 +15,6 @@ import (
 	mathrand "math/rand"
 	"net/http"
 	"os"
-	"strings"
 	"sync"
 	"time"
@@ -359,8 +358,8 @@ func (km *SSES3KeyManager) generateAndSaveSuperKeyToFiler() error {
 		// Set appropriate permissions for the directory
 		entry.Attributes.FileMode = uint32(0700 | os.ModeDir)
 	}); err != nil {
-		// Only ignore "file exists" errors.
-		if !strings.Contains(err.Error(), "file exists") {
+		// Only ignore "already exists" errors.
+		if !errors.Is(err, filer_pb.ErrEntryAlreadyExists) {
 			return fmt.Errorf("failed to create KEK directory %s: %w", SSES3KEKDirectory, err)
 		}
 		glog.V(3).Infof("Parent directory %s already exists, continuing.", SSES3KEKDirectory)

View File

@@ -793,21 +793,25 @@ func filerErrorToS3Error(err error) s3err.ErrorCode {
 		return s3err.ErrNone
 	}
-	errString := err.Error()
+	// Filer sentinel errors — matched via errors.Is() after crossing gRPC boundary
+	switch {
+	case errors.Is(err, filer_pb.ErrEntryNameTooLong):
+		return s3err.ErrKeyTooLongError
+	case errors.Is(err, filer_pb.ErrParentIsFile), errors.Is(err, filer_pb.ErrExistingIsFile):
+		return s3err.ErrExistingObjectIsFile
+	case errors.Is(err, filer_pb.ErrExistingIsDirectory):
+		return s3err.ErrExistingObjectIsDirectory
+	case errors.Is(err, weed_server.ErrReadOnly):
+		return s3err.ErrAccessDenied
+	}
+
+	// Non-filer errors that don't go through CreateEntryResponse — string matching required
+	errString := err.Error()
 	switch {
 	case errString == constants.ErrMsgBadDigest:
 		return s3err.ErrBadDigest
-	case errors.Is(err, weed_server.ErrReadOnly):
-		return s3err.ErrAccessDenied
 	case strings.Contains(errString, "context canceled") || strings.Contains(errString, "code = Canceled"):
 		return s3err.ErrInvalidRequest
-	case strings.Contains(errString, constants.ErrMsgExistingPrefix) && strings.HasSuffix(errString, constants.ErrMsgIsADirectory):
-		return s3err.ErrExistingObjectIsDirectory
-	case strings.HasSuffix(errString, constants.ErrMsgIsAFile):
-		return s3err.ErrExistingObjectIsFile
-	case strings.Contains(errString, constants.ErrMsgEntryNameTooLong):
-		return s3err.ErrKeyTooLongError
 	default:
 		return s3err.ErrInternalError
 	}

View File

@@ -10,6 +10,7 @@ import (
 	"testing"
 
 	"github.com/gorilla/mux"
+	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
 	"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
 	"github.com/seaweedfs/seaweedfs/weed/s3api/s3err"
 	weed_server "github.com/seaweedfs/seaweedfs/weed/server"
@@ -53,28 +54,28 @@ func TestFilerErrorToS3Error(t *testing.T) {
 			expectedErr: s3err.ErrInvalidRequest,
 		},
 		{
-			name:        "Directory exists error",
-			err:         errors.New("existing /path/to/file is a directory"),
+			name:        "Directory exists error (sentinel)",
+			err:         fmt.Errorf("CreateEntry /path: %w", filer_pb.ErrExistingIsDirectory),
 			expectedErr: s3err.ErrExistingObjectIsDirectory,
 		},
 		{
-			name:        "Directory exists error (CreateEntry-wrapped)",
-			err:         errors.New("CreateEntry: existing /path/to/file is a directory"),
-			expectedErr: s3err.ErrExistingObjectIsDirectory,
-		},
-		{
-			name:        "File exists error",
-			err:         errors.New("/path/to/file is a file"),
+			name:        "Parent is file error (sentinel)",
+			err:         fmt.Errorf("CreateEntry /path: %w", filer_pb.ErrParentIsFile),
 			expectedErr: s3err.ErrExistingObjectIsFile,
 		},
 		{
-			name:        "Entry name too long error",
-			err:         errors.New("CreateEntry: entry name too long"),
+			name:        "Existing is file error (sentinel)",
+			err:         fmt.Errorf("CreateEntry /path: %w", filer_pb.ErrExistingIsFile),
+			expectedErr: s3err.ErrExistingObjectIsFile,
+		},
+		{
+			name:        "Entry name too long (sentinel)",
+			err:         fmt.Errorf("CreateEntry: %w", filer_pb.ErrEntryNameTooLong),
 			expectedErr: s3err.ErrKeyTooLongError,
 		},
 		{
-			name:        "Entry name too long error (unwrapped)",
-			err:         errors.New("entry name too long"),
+			name:        "Entry name too long (bare sentinel)",
+			err:         filer_pb.ErrEntryNameTooLong,
 			expectedErr: s3err.ErrKeyTooLongError,
 		},
 		{

View File

@@ -3,6 +3,7 @@ package weed_server
 import (
 	"bytes"
 	"context"
+	"errors"
 	"fmt"
 	"os"
 	"path/filepath"
@@ -191,6 +192,18 @@ func (fs *FilerServer) CreateEntry(ctx context.Context, req *filer_pb.CreateEntr
 	} else {
 		glog.V(3).InfofCtx(ctx, "CreateEntry %s: %v", filepath.Join(req.Directory, req.Entry.Name), createErr)
 		resp.Error = createErr.Error()
+		switch {
+		case errors.Is(createErr, filer_pb.ErrEntryNameTooLong):
+			resp.ErrorCode = filer_pb.FilerError_ENTRY_NAME_TOO_LONG
+		case errors.Is(createErr, filer_pb.ErrParentIsFile):
+			resp.ErrorCode = filer_pb.FilerError_PARENT_IS_FILE
+		case errors.Is(createErr, filer_pb.ErrExistingIsDirectory):
+			resp.ErrorCode = filer_pb.FilerError_EXISTING_IS_DIRECTORY
+		case errors.Is(createErr, filer_pb.ErrExistingIsFile):
+			resp.ErrorCode = filer_pb.FilerError_EXISTING_IS_FILE
+		case errors.Is(createErr, filer_pb.ErrEntryAlreadyExists):
+			resp.ErrorCode = filer_pb.FilerError_ENTRY_ALREADY_EXISTS
+		}
 	}
 	return