* feat(ec_balance): add TaskTypeECBalance constant and protobuf definitions

  Add the ec_balance task type constant to both topology and worker type
  systems. Define EcBalanceTaskParams, EcShardMoveSpec, and
  EcBalanceTaskConfig protobuf messages for EC shard balance operations.

* feat(ec_balance): add configuration for EC shard balance task

  Config includes imbalance threshold, min server count, collection filter,
  disk type, and preferred tags for tag-aware placement.

* feat(ec_balance): add multi-phase EC shard balance detection algorithm

  Implements four detection phases adapted from the ec.balance shell command:

  1. Duplicate shard detection and removal proposals
  2. Cross-rack shard distribution balancing
  3. Within-rack node-level shard balancing
  4. Global shard count equalization across nodes

  Detection is side-effect-free: it builds an EC topology view from
  ActiveTopology and generates move proposals without executing them.

* feat(ec_balance): add EC shard move task execution

  Implements the shard move sequence using the same VolumeEcShardsCopy,
  VolumeEcShardsMount, VolumeEcShardsUnmount, and VolumeEcShardsDelete RPCs
  as the shell ec.balance command. Supports both regular shard moves and
  dedup-phase deletions (unmount+delete without copy).

* feat(ec_balance): add task registration and scheduling

  Register EC balance task definition with auto-config update support.
  Scheduling respects max concurrent limits and worker capabilities.

* feat(ec_balance): add plugin handler for EC shard balance

  Implements the full plugin handler with detection, execution, admin and
  worker config forms, proposal building, and decision trace reporting.
  Supports collection/DC/disk type filtering, preferred tag placement, and
  configurable detection intervals. Auto-registered via init() with the
  handler registry.
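The side-effect-free, phase-based detection described above can be sketched as follows. This is a simplified illustration, not the actual SeaweedFS code: the types `topoView`, `moveProposal`, and `dedupPhase` are hypothetical stand-ins for the real topology view built from ActiveTopology, and only the duplicate-removal phase is shown.

```go
package main

import "fmt"

// moveProposal is a hypothetical, simplified stand-in for the protobuf
// EcShardMoveSpec: which shard to move, from where, to where.
type moveProposal struct {
	VolumeID uint32
	ShardID  uint32
	Source   string
	Target   string
}

// topoView is a toy in-memory view of EC shard placement, keyed by
// "volume:shard" -> node IDs currently holding that shard.
type topoView struct {
	placements map[string][]string
}

// phase is one detection pass; each pass only reads the view and
// returns proposals, it never executes moves.
type phase func(v *topoView) []moveProposal

// detect runs each phase in order and collects proposals without
// executing them.
func detect(v *topoView, phases []phase) []moveProposal {
	var all []moveProposal
	for _, p := range phases {
		all = append(all, p(v)...)
	}
	return all
}

// dedupPhase proposes removal of duplicate shard copies: any placement
// with more than one holder keeps the first and proposes deleting the
// rest. Setting target == source marks a delete-only (dedup) move.
func dedupPhase(v *topoView) []moveProposal {
	var out []moveProposal
	for key, nodes := range v.placements {
		for _, extra := range nodes[1:] {
			var vid, sid uint32
			fmt.Sscanf(key, "%d:%d", &vid, &sid)
			out = append(out, moveProposal{vid, sid, extra, extra})
		}
	}
	return out
}

func main() {
	v := &topoView{placements: map[string][]string{
		"1:0": {"nodeA", "nodeB"}, // duplicate shard copy on nodeB
		"1:1": {"nodeA"},
	}}
	moves := detect(v, []phase{dedupPhase})
	fmt.Println(len(moves))
}
```

The real implementation adds the three balancing phases after dedup and (per a later fix in this PR) applies planned moves to the in-memory view between phases.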
* test(ec_balance): add tests for detection algorithm and plugin handler

  Detection tests cover: duplicate shard detection, cross-rack imbalance,
  within-rack imbalance, global rebalancing, topology building, collection
  filtering, and edge cases.

  Handler tests cover: config derivation with clamping, proposal building,
  protobuf encode/decode round-trip, fallback parameter decoding, capability,
  and config policy round-trip.

* fix(ec_balance): address PR review feedback and fix CI test failure

  - Update TestWorkerDefaultJobTypes to expect 6 handlers (was 5)
  - Extract threshold constants (ecBalanceMinImbalanceThreshold, etc.) to
    eliminate magic numbers in Descriptor and config derivation
  - Remove duplicate ShardIdsToUint32 helper (use erasure_coding package)
  - Add bounds checks for int64→int/uint32 conversions to fix CodeQL integer
    conversion warnings

* fix(ec_balance): address code review findings

  storage_impact.go:
  - Add TaskTypeECBalance case returning shard-level reservation
    (ShardSlots: -1/+1) instead of falling through to default, which
    incorrectly reserves a full volume slot on target.

  detection.go:
  - Use dc:rack composite key to avoid cross-DC rack name collisions. Only
    create rack entries after confirming node has matching disks.
  - Add exceedsImbalanceThreshold check to cross-rack, within-rack, and
    global phases so trivial skews below the configured threshold are
    ignored. Dedup phase always runs since duplicates are errors.
  - Reserve destination capacity after each planned move (decrement
    destNode.freeSlots, update rackShardCount/nodeShardCount) to prevent
    overbooking the same destination.
  - Skip nodes with freeSlots <= 0 when selecting minNode in global balance
    to avoid proposing moves to full nodes.
  - Include loop index and source/target node IDs in TaskID to guarantee
    uniqueness across moves with the same volumeID/shardID.
  ec_balance_handler.go:
  - Fail fast with error when shard_id is absent in fallback parameter
    decoding instead of silently defaulting to shard 0.

  ec_balance_task.go:
  - Delegate GetProgress() to BaseTask.GetProgress() so progress updates from
    ReportProgressWithStage are visible to callers.
  - Add fail-fast guard rejecting multiple sources/targets until batch
    execution is implemented.

  Findings verified but not changed (matches existing codebase pattern in
  vacuum/balance/erasure_coding handlers):
  - register.go globalTaskDef.Config race: same unsynchronized pattern in
    all 4 task packages.
  - CreateTask using generated ID: same fmt.Sprintf pattern in all 4 task
    packages.

* fix(ec_balance): harden parameter decoding, progress tracking, and
  validation

  ec_balance_handler.go (decodeECBalanceTaskParams):
  - Validate execution-critical fields (Sources[0].Node, ShardIds,
    Targets[0].Node, ShardIds) after protobuf deserialization.
  - Require source_disk_id and target_disk_id in legacy fallback path so
    Targets[0].DiskId is populated for VolumeEcShardsCopyRequest.
  - All error messages reference decodeECBalanceTaskParams and the specific
    missing field (TaskParams, shard_id, Targets[0].DiskId,
    EcBalanceTaskParams) for debuggability.

  ec_balance_task.go:
  - Track progress in ECBalanceTask.progress field, updated via
    reportProgress() helper called before ReportProgressWithStage(), so
    GetProgress() returns real stage progress instead of stale 0.
  - Validate: require exactly 1 source and 1 target (mirrors Execute guard),
    require ShardIds on both, with error messages referencing
    ECBalanceTask.Validate and the specific field.

* fix(ec_balance): fix dedup execution path, stale topology, collection
  filter, timeout, and dedupeKey

  detection.go:
  - Dedup moves now set target=source so isDedupPhase() triggers the
    unmount+delete-only execution path instead of attempting a copy.
  - Apply moves to in-memory topology between phases via
    applyMovesToTopology() so subsequent phases see updated shard placement
    and don't conflict with already-planned moves.
  - detectGlobalImbalance now accepts allowedVids and filters both shard
    counting and shard selection to respect CollectionFilter.

  ec_balance_task.go:
  - Apply EcBalanceTaskParams.TimeoutSeconds to the context via
    context.WithTimeout so all RPC operations respect the configured timeout
    instead of hanging indefinitely.

  ec_balance_handler.go:
  - Include source node ID in dedupeKey so dedup deletions from different
    source nodes for the same shard aren't collapsed.
  - Clamp minServerCountRaw and minIntervalRaw lower bounds on int64 before
    narrowing to int, preventing undefined overflow on 32-bit.

* fix(ec_balance): log warning before cancelling on progress send failure

  Log the error, job ID, job type, progress percentage, and stage before
  calling execCancel() in the progress callback so failed progress sends are
  diagnosable instead of silently cancelling.
4234 lines
137 KiB
Go
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// 	protoc-gen-go v1.36.6
// 	protoc        v6.33.4
// source: worker.proto

package worker_pb

import (
	protoreflect "google.golang.org/protobuf/reflect/protoreflect"
	protoimpl "google.golang.org/protobuf/runtime/protoimpl"
	reflect "reflect"
	sync "sync"
	unsafe "unsafe"
)

const (
	// Verify that this generated code is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
	// Verify that runtime/protoimpl is sufficiently up-to-date.
	_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
// WorkerMessage represents messages from worker to admin
type WorkerMessage struct {
	state     protoimpl.MessageState `protogen:"open.v1"`
	WorkerId  string                 `protobuf:"bytes,1,opt,name=worker_id,json=workerId,proto3" json:"worker_id,omitempty"`
	Timestamp int64                  `protobuf:"varint,2,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
	// Types that are valid to be assigned to Message:
	//
	//	*WorkerMessage_Registration
	//	*WorkerMessage_Heartbeat
	//	*WorkerMessage_TaskRequest
	//	*WorkerMessage_TaskUpdate
	//	*WorkerMessage_TaskComplete
	//	*WorkerMessage_Shutdown
	//	*WorkerMessage_TaskLogResponse
	Message       isWorkerMessage_Message `protobuf_oneof:"message"`
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}

func (x *WorkerMessage) Reset() {
	*x = WorkerMessage{}
	mi := &file_worker_proto_msgTypes[0]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *WorkerMessage) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*WorkerMessage) ProtoMessage() {}

func (x *WorkerMessage) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[0]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use WorkerMessage.ProtoReflect.Descriptor instead.
func (*WorkerMessage) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{0}
}

func (x *WorkerMessage) GetWorkerId() string {
	if x != nil {
		return x.WorkerId
	}
	return ""
}

func (x *WorkerMessage) GetTimestamp() int64 {
	if x != nil {
		return x.Timestamp
	}
	return 0
}

func (x *WorkerMessage) GetMessage() isWorkerMessage_Message {
	if x != nil {
		return x.Message
	}
	return nil
}

func (x *WorkerMessage) GetRegistration() *WorkerRegistration {
	if x != nil {
		if x, ok := x.Message.(*WorkerMessage_Registration); ok {
			return x.Registration
		}
	}
	return nil
}

func (x *WorkerMessage) GetHeartbeat() *WorkerHeartbeat {
	if x != nil {
		if x, ok := x.Message.(*WorkerMessage_Heartbeat); ok {
			return x.Heartbeat
		}
	}
	return nil
}

func (x *WorkerMessage) GetTaskRequest() *TaskRequest {
	if x != nil {
		if x, ok := x.Message.(*WorkerMessage_TaskRequest); ok {
			return x.TaskRequest
		}
	}
	return nil
}

func (x *WorkerMessage) GetTaskUpdate() *TaskUpdate {
	if x != nil {
		if x, ok := x.Message.(*WorkerMessage_TaskUpdate); ok {
			return x.TaskUpdate
		}
	}
	return nil
}

func (x *WorkerMessage) GetTaskComplete() *TaskComplete {
	if x != nil {
		if x, ok := x.Message.(*WorkerMessage_TaskComplete); ok {
			return x.TaskComplete
		}
	}
	return nil
}

func (x *WorkerMessage) GetShutdown() *WorkerShutdown {
	if x != nil {
		if x, ok := x.Message.(*WorkerMessage_Shutdown); ok {
			return x.Shutdown
		}
	}
	return nil
}

func (x *WorkerMessage) GetTaskLogResponse() *TaskLogResponse {
	if x != nil {
		if x, ok := x.Message.(*WorkerMessage_TaskLogResponse); ok {
			return x.TaskLogResponse
		}
	}
	return nil
}
type isWorkerMessage_Message interface {
	isWorkerMessage_Message()
}

type WorkerMessage_Registration struct {
	Registration *WorkerRegistration `protobuf:"bytes,3,opt,name=registration,proto3,oneof"`
}

type WorkerMessage_Heartbeat struct {
	Heartbeat *WorkerHeartbeat `protobuf:"bytes,4,opt,name=heartbeat,proto3,oneof"`
}

type WorkerMessage_TaskRequest struct {
	TaskRequest *TaskRequest `protobuf:"bytes,5,opt,name=task_request,json=taskRequest,proto3,oneof"`
}

type WorkerMessage_TaskUpdate struct {
	TaskUpdate *TaskUpdate `protobuf:"bytes,6,opt,name=task_update,json=taskUpdate,proto3,oneof"`
}

type WorkerMessage_TaskComplete struct {
	TaskComplete *TaskComplete `protobuf:"bytes,7,opt,name=task_complete,json=taskComplete,proto3,oneof"`
}

type WorkerMessage_Shutdown struct {
	Shutdown *WorkerShutdown `protobuf:"bytes,8,opt,name=shutdown,proto3,oneof"`
}

type WorkerMessage_TaskLogResponse struct {
	TaskLogResponse *TaskLogResponse `protobuf:"bytes,9,opt,name=task_log_response,json=taskLogResponse,proto3,oneof"`
}

func (*WorkerMessage_Registration) isWorkerMessage_Message() {}

func (*WorkerMessage_Heartbeat) isWorkerMessage_Message() {}

func (*WorkerMessage_TaskRequest) isWorkerMessage_Message() {}

func (*WorkerMessage_TaskUpdate) isWorkerMessage_Message() {}

func (*WorkerMessage_TaskComplete) isWorkerMessage_Message() {}

func (*WorkerMessage_Shutdown) isWorkerMessage_Message() {}

func (*WorkerMessage_TaskLogResponse) isWorkerMessage_Message() {}
// AdminMessage represents messages from admin to worker
type AdminMessage struct {
	state     protoimpl.MessageState `protogen:"open.v1"`
	AdminId   string                 `protobuf:"bytes,1,opt,name=admin_id,json=adminId,proto3" json:"admin_id,omitempty"`
	Timestamp int64                  `protobuf:"varint,2,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
	// Types that are valid to be assigned to Message:
	//
	//	*AdminMessage_RegistrationResponse
	//	*AdminMessage_HeartbeatResponse
	//	*AdminMessage_TaskAssignment
	//	*AdminMessage_TaskCancellation
	//	*AdminMessage_AdminShutdown
	//	*AdminMessage_TaskLogRequest
	Message       isAdminMessage_Message `protobuf_oneof:"message"`
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}

func (x *AdminMessage) Reset() {
	*x = AdminMessage{}
	mi := &file_worker_proto_msgTypes[1]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *AdminMessage) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*AdminMessage) ProtoMessage() {}

func (x *AdminMessage) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[1]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use AdminMessage.ProtoReflect.Descriptor instead.
func (*AdminMessage) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{1}
}

func (x *AdminMessage) GetAdminId() string {
	if x != nil {
		return x.AdminId
	}
	return ""
}

func (x *AdminMessage) GetTimestamp() int64 {
	if x != nil {
		return x.Timestamp
	}
	return 0
}

func (x *AdminMessage) GetMessage() isAdminMessage_Message {
	if x != nil {
		return x.Message
	}
	return nil
}

func (x *AdminMessage) GetRegistrationResponse() *RegistrationResponse {
	if x != nil {
		if x, ok := x.Message.(*AdminMessage_RegistrationResponse); ok {
			return x.RegistrationResponse
		}
	}
	return nil
}

func (x *AdminMessage) GetHeartbeatResponse() *HeartbeatResponse {
	if x != nil {
		if x, ok := x.Message.(*AdminMessage_HeartbeatResponse); ok {
			return x.HeartbeatResponse
		}
	}
	return nil
}

func (x *AdminMessage) GetTaskAssignment() *TaskAssignment {
	if x != nil {
		if x, ok := x.Message.(*AdminMessage_TaskAssignment); ok {
			return x.TaskAssignment
		}
	}
	return nil
}

func (x *AdminMessage) GetTaskCancellation() *TaskCancellation {
	if x != nil {
		if x, ok := x.Message.(*AdminMessage_TaskCancellation); ok {
			return x.TaskCancellation
		}
	}
	return nil
}

func (x *AdminMessage) GetAdminShutdown() *AdminShutdown {
	if x != nil {
		if x, ok := x.Message.(*AdminMessage_AdminShutdown); ok {
			return x.AdminShutdown
		}
	}
	return nil
}

func (x *AdminMessage) GetTaskLogRequest() *TaskLogRequest {
	if x != nil {
		if x, ok := x.Message.(*AdminMessage_TaskLogRequest); ok {
			return x.TaskLogRequest
		}
	}
	return nil
}
type isAdminMessage_Message interface {
	isAdminMessage_Message()
}

type AdminMessage_RegistrationResponse struct {
	RegistrationResponse *RegistrationResponse `protobuf:"bytes,3,opt,name=registration_response,json=registrationResponse,proto3,oneof"`
}

type AdminMessage_HeartbeatResponse struct {
	HeartbeatResponse *HeartbeatResponse `protobuf:"bytes,4,opt,name=heartbeat_response,json=heartbeatResponse,proto3,oneof"`
}

type AdminMessage_TaskAssignment struct {
	TaskAssignment *TaskAssignment `protobuf:"bytes,5,opt,name=task_assignment,json=taskAssignment,proto3,oneof"`
}

type AdminMessage_TaskCancellation struct {
	TaskCancellation *TaskCancellation `protobuf:"bytes,6,opt,name=task_cancellation,json=taskCancellation,proto3,oneof"`
}

type AdminMessage_AdminShutdown struct {
	AdminShutdown *AdminShutdown `protobuf:"bytes,7,opt,name=admin_shutdown,json=adminShutdown,proto3,oneof"`
}

type AdminMessage_TaskLogRequest struct {
	TaskLogRequest *TaskLogRequest `protobuf:"bytes,8,opt,name=task_log_request,json=taskLogRequest,proto3,oneof"`
}

func (*AdminMessage_RegistrationResponse) isAdminMessage_Message() {}

func (*AdminMessage_HeartbeatResponse) isAdminMessage_Message() {}

func (*AdminMessage_TaskAssignment) isAdminMessage_Message() {}

func (*AdminMessage_TaskCancellation) isAdminMessage_Message() {}

func (*AdminMessage_AdminShutdown) isAdminMessage_Message() {}

func (*AdminMessage_TaskLogRequest) isAdminMessage_Message() {}
// WorkerRegistration message when worker connects
type WorkerRegistration struct {
	state         protoimpl.MessageState `protogen:"open.v1"`
	WorkerId      string                 `protobuf:"bytes,1,opt,name=worker_id,json=workerId,proto3" json:"worker_id,omitempty"`
	Address       string                 `protobuf:"bytes,2,opt,name=address,proto3" json:"address,omitempty"`
	Capabilities  []string               `protobuf:"bytes,3,rep,name=capabilities,proto3" json:"capabilities,omitempty"`
	MaxConcurrent int32                  `protobuf:"varint,4,opt,name=max_concurrent,json=maxConcurrent,proto3" json:"max_concurrent,omitempty"`
	Metadata      map[string]string      `protobuf:"bytes,5,rep,name=metadata,proto3" json:"metadata,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}

func (x *WorkerRegistration) Reset() {
	*x = WorkerRegistration{}
	mi := &file_worker_proto_msgTypes[2]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *WorkerRegistration) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*WorkerRegistration) ProtoMessage() {}

func (x *WorkerRegistration) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[2]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use WorkerRegistration.ProtoReflect.Descriptor instead.
func (*WorkerRegistration) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{2}
}

func (x *WorkerRegistration) GetWorkerId() string {
	if x != nil {
		return x.WorkerId
	}
	return ""
}

func (x *WorkerRegistration) GetAddress() string {
	if x != nil {
		return x.Address
	}
	return ""
}

func (x *WorkerRegistration) GetCapabilities() []string {
	if x != nil {
		return x.Capabilities
	}
	return nil
}

func (x *WorkerRegistration) GetMaxConcurrent() int32 {
	if x != nil {
		return x.MaxConcurrent
	}
	return 0
}

func (x *WorkerRegistration) GetMetadata() map[string]string {
	if x != nil {
		return x.Metadata
	}
	return nil
}
// RegistrationResponse confirms worker registration
type RegistrationResponse struct {
	state            protoimpl.MessageState `protogen:"open.v1"`
	Success          bool                   `protobuf:"varint,1,opt,name=success,proto3" json:"success,omitempty"`
	Message          string                 `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"`
	AssignedWorkerId string                 `protobuf:"bytes,3,opt,name=assigned_worker_id,json=assignedWorkerId,proto3" json:"assigned_worker_id,omitempty"`
	unknownFields    protoimpl.UnknownFields
	sizeCache        protoimpl.SizeCache
}

func (x *RegistrationResponse) Reset() {
	*x = RegistrationResponse{}
	mi := &file_worker_proto_msgTypes[3]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *RegistrationResponse) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*RegistrationResponse) ProtoMessage() {}

func (x *RegistrationResponse) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[3]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use RegistrationResponse.ProtoReflect.Descriptor instead.
func (*RegistrationResponse) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{3}
}

func (x *RegistrationResponse) GetSuccess() bool {
	if x != nil {
		return x.Success
	}
	return false
}

func (x *RegistrationResponse) GetMessage() string {
	if x != nil {
		return x.Message
	}
	return ""
}

func (x *RegistrationResponse) GetAssignedWorkerId() string {
	if x != nil {
		return x.AssignedWorkerId
	}
	return ""
}
// WorkerHeartbeat sent periodically by worker
type WorkerHeartbeat struct {
	state          protoimpl.MessageState `protogen:"open.v1"`
	WorkerId       string                 `protobuf:"bytes,1,opt,name=worker_id,json=workerId,proto3" json:"worker_id,omitempty"`
	Status         string                 `protobuf:"bytes,2,opt,name=status,proto3" json:"status,omitempty"`
	CurrentLoad    int32                  `protobuf:"varint,3,opt,name=current_load,json=currentLoad,proto3" json:"current_load,omitempty"`
	MaxConcurrent  int32                  `protobuf:"varint,4,opt,name=max_concurrent,json=maxConcurrent,proto3" json:"max_concurrent,omitempty"`
	CurrentTaskIds []string               `protobuf:"bytes,5,rep,name=current_task_ids,json=currentTaskIds,proto3" json:"current_task_ids,omitempty"`
	TasksCompleted int32                  `protobuf:"varint,6,opt,name=tasks_completed,json=tasksCompleted,proto3" json:"tasks_completed,omitempty"`
	TasksFailed    int32                  `protobuf:"varint,7,opt,name=tasks_failed,json=tasksFailed,proto3" json:"tasks_failed,omitempty"`
	UptimeSeconds  int64                  `protobuf:"varint,8,opt,name=uptime_seconds,json=uptimeSeconds,proto3" json:"uptime_seconds,omitempty"`
	unknownFields  protoimpl.UnknownFields
	sizeCache      protoimpl.SizeCache
}

func (x *WorkerHeartbeat) Reset() {
	*x = WorkerHeartbeat{}
	mi := &file_worker_proto_msgTypes[4]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *WorkerHeartbeat) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*WorkerHeartbeat) ProtoMessage() {}

func (x *WorkerHeartbeat) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[4]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use WorkerHeartbeat.ProtoReflect.Descriptor instead.
func (*WorkerHeartbeat) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{4}
}

func (x *WorkerHeartbeat) GetWorkerId() string {
	if x != nil {
		return x.WorkerId
	}
	return ""
}

func (x *WorkerHeartbeat) GetStatus() string {
	if x != nil {
		return x.Status
	}
	return ""
}

func (x *WorkerHeartbeat) GetCurrentLoad() int32 {
	if x != nil {
		return x.CurrentLoad
	}
	return 0
}

func (x *WorkerHeartbeat) GetMaxConcurrent() int32 {
	if x != nil {
		return x.MaxConcurrent
	}
	return 0
}

func (x *WorkerHeartbeat) GetCurrentTaskIds() []string {
	if x != nil {
		return x.CurrentTaskIds
	}
	return nil
}

func (x *WorkerHeartbeat) GetTasksCompleted() int32 {
	if x != nil {
		return x.TasksCompleted
	}
	return 0
}

func (x *WorkerHeartbeat) GetTasksFailed() int32 {
	if x != nil {
		return x.TasksFailed
	}
	return 0
}

func (x *WorkerHeartbeat) GetUptimeSeconds() int64 {
	if x != nil {
		return x.UptimeSeconds
	}
	return 0
}
// HeartbeatResponse acknowledges heartbeat
type HeartbeatResponse struct {
	state         protoimpl.MessageState `protogen:"open.v1"`
	Success       bool                   `protobuf:"varint,1,opt,name=success,proto3" json:"success,omitempty"`
	Message       string                 `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"`
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}

func (x *HeartbeatResponse) Reset() {
	*x = HeartbeatResponse{}
	mi := &file_worker_proto_msgTypes[5]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *HeartbeatResponse) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*HeartbeatResponse) ProtoMessage() {}

func (x *HeartbeatResponse) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[5]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use HeartbeatResponse.ProtoReflect.Descriptor instead.
func (*HeartbeatResponse) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{5}
}

func (x *HeartbeatResponse) GetSuccess() bool {
	if x != nil {
		return x.Success
	}
	return false
}

func (x *HeartbeatResponse) GetMessage() string {
	if x != nil {
		return x.Message
	}
	return ""
}
// TaskRequest from worker asking for new tasks
type TaskRequest struct {
	state          protoimpl.MessageState `protogen:"open.v1"`
	WorkerId       string                 `protobuf:"bytes,1,opt,name=worker_id,json=workerId,proto3" json:"worker_id,omitempty"`
	Capabilities   []string               `protobuf:"bytes,2,rep,name=capabilities,proto3" json:"capabilities,omitempty"`
	AvailableSlots int32                  `protobuf:"varint,3,opt,name=available_slots,json=availableSlots,proto3" json:"available_slots,omitempty"`
	unknownFields  protoimpl.UnknownFields
	sizeCache      protoimpl.SizeCache
}

func (x *TaskRequest) Reset() {
	*x = TaskRequest{}
	mi := &file_worker_proto_msgTypes[6]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *TaskRequest) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*TaskRequest) ProtoMessage() {}

func (x *TaskRequest) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[6]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use TaskRequest.ProtoReflect.Descriptor instead.
func (*TaskRequest) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{6}
}

func (x *TaskRequest) GetWorkerId() string {
	if x != nil {
		return x.WorkerId
	}
	return ""
}

func (x *TaskRequest) GetCapabilities() []string {
	if x != nil {
		return x.Capabilities
	}
	return nil
}

func (x *TaskRequest) GetAvailableSlots() int32 {
	if x != nil {
		return x.AvailableSlots
	}
	return 0
}
// TaskAssignment from admin to worker
type TaskAssignment struct {
	state         protoimpl.MessageState `protogen:"open.v1"`
	TaskId        string                 `protobuf:"bytes,1,opt,name=task_id,json=taskId,proto3" json:"task_id,omitempty"`
	TaskType      string                 `protobuf:"bytes,2,opt,name=task_type,json=taskType,proto3" json:"task_type,omitempty"`
	Params        *TaskParams            `protobuf:"bytes,3,opt,name=params,proto3" json:"params,omitempty"`
	Priority      int32                  `protobuf:"varint,4,opt,name=priority,proto3" json:"priority,omitempty"`
	CreatedTime   int64                  `protobuf:"varint,5,opt,name=created_time,json=createdTime,proto3" json:"created_time,omitempty"`
	Metadata      map[string]string      `protobuf:"bytes,6,rep,name=metadata,proto3" json:"metadata,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}

func (x *TaskAssignment) Reset() {
	*x = TaskAssignment{}
	mi := &file_worker_proto_msgTypes[7]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *TaskAssignment) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*TaskAssignment) ProtoMessage() {}

func (x *TaskAssignment) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[7]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use TaskAssignment.ProtoReflect.Descriptor instead.
func (*TaskAssignment) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{7}
}

func (x *TaskAssignment) GetTaskId() string {
	if x != nil {
		return x.TaskId
	}
	return ""
}

func (x *TaskAssignment) GetTaskType() string {
	if x != nil {
		return x.TaskType
	}
	return ""
}

func (x *TaskAssignment) GetParams() *TaskParams {
	if x != nil {
		return x.Params
	}
	return nil
}

func (x *TaskAssignment) GetPriority() int32 {
	if x != nil {
		return x.Priority
	}
	return 0
}

func (x *TaskAssignment) GetCreatedTime() int64 {
	if x != nil {
		return x.CreatedTime
	}
	return 0
}

func (x *TaskAssignment) GetMetadata() map[string]string {
	if x != nil {
		return x.Metadata
	}
	return nil
}
// TaskParams contains task-specific parameters with typed variants
type TaskParams struct {
	state      protoimpl.MessageState `protogen:"open.v1"`
	TaskId     string                 `protobuf:"bytes,1,opt,name=task_id,json=taskId,proto3" json:"task_id,omitempty"`             // ActiveTopology task ID for lifecycle management
	VolumeId   uint32                 `protobuf:"varint,2,opt,name=volume_id,json=volumeId,proto3" json:"volume_id,omitempty"`      // Primary volume ID for the task
	Collection string                 `protobuf:"bytes,3,opt,name=collection,proto3" json:"collection,omitempty"`                   // Collection name
	DataCenter string                 `protobuf:"bytes,4,opt,name=data_center,json=dataCenter,proto3" json:"data_center,omitempty"` // Primary data center
	Rack       string                 `protobuf:"bytes,5,opt,name=rack,proto3" json:"rack,omitempty"`                               // Primary rack
	VolumeSize uint64                 `protobuf:"varint,6,opt,name=volume_size,json=volumeSize,proto3" json:"volume_size,omitempty"` // Original volume size in bytes for tracking size changes
	// Unified source and target arrays for all task types
	Sources []*TaskSource `protobuf:"bytes,7,rep,name=sources,proto3" json:"sources,omitempty"` // Source locations (volume replicas, EC shards, etc.)
	Targets []*TaskTarget `protobuf:"bytes,8,rep,name=targets,proto3" json:"targets,omitempty"` // Target locations (destinations, new replicas, etc.)
	// Typed task parameters
	//
	// Types that are valid to be assigned to TaskParams:
	//
	//	*TaskParams_VacuumParams
	//	*TaskParams_ErasureCodingParams
	//	*TaskParams_BalanceParams
	//	*TaskParams_ReplicationParams
	//	*TaskParams_EcBalanceParams
	TaskParams    isTaskParams_TaskParams `protobuf_oneof:"task_params"`
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}

func (x *TaskParams) Reset() {
	*x = TaskParams{}
	mi := &file_worker_proto_msgTypes[8]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *TaskParams) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*TaskParams) ProtoMessage() {}

func (x *TaskParams) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[8]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use TaskParams.ProtoReflect.Descriptor instead.
func (*TaskParams) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{8}
}

func (x *TaskParams) GetTaskId() string {
	if x != nil {
		return x.TaskId
	}
	return ""
}

func (x *TaskParams) GetVolumeId() uint32 {
	if x != nil {
		return x.VolumeId
	}
	return 0
}

func (x *TaskParams) GetCollection() string {
	if x != nil {
		return x.Collection
	}
	return ""
}

func (x *TaskParams) GetDataCenter() string {
	if x != nil {
		return x.DataCenter
	}
	return ""
}

func (x *TaskParams) GetRack() string {
	if x != nil {
		return x.Rack
	}
	return ""
}

func (x *TaskParams) GetVolumeSize() uint64 {
	if x != nil {
		return x.VolumeSize
	}
	return 0
}

func (x *TaskParams) GetSources() []*TaskSource {
	if x != nil {
		return x.Sources
	}
	return nil
}

func (x *TaskParams) GetTargets() []*TaskTarget {
	if x != nil {
		return x.Targets
	}
	return nil
}

func (x *TaskParams) GetTaskParams() isTaskParams_TaskParams {
|
|
if x != nil {
|
|
return x.TaskParams
|
|
}
|
|
return nil
|
|
}
|
|
|
|
func (x *TaskParams) GetVacuumParams() *VacuumTaskParams {
|
|
if x != nil {
|
|
if x, ok := x.TaskParams.(*TaskParams_VacuumParams); ok {
|
|
return x.VacuumParams
|
|
}
|
|
}
|
|
return nil
|
|
}
|
|
|
|
func (x *TaskParams) GetErasureCodingParams() *ErasureCodingTaskParams {
|
|
if x != nil {
|
|
if x, ok := x.TaskParams.(*TaskParams_ErasureCodingParams); ok {
|
|
return x.ErasureCodingParams
|
|
}
|
|
}
|
|
return nil
|
|
}
|
|
|
|
func (x *TaskParams) GetBalanceParams() *BalanceTaskParams {
|
|
if x != nil {
|
|
if x, ok := x.TaskParams.(*TaskParams_BalanceParams); ok {
|
|
return x.BalanceParams
|
|
}
|
|
}
|
|
return nil
|
|
}
|
|
|
|
func (x *TaskParams) GetReplicationParams() *ReplicationTaskParams {
|
|
if x != nil {
|
|
if x, ok := x.TaskParams.(*TaskParams_ReplicationParams); ok {
|
|
return x.ReplicationParams
|
|
}
|
|
}
|
|
return nil
|
|
}
|
|
|
|
func (x *TaskParams) GetEcBalanceParams() *EcBalanceTaskParams {
|
|
if x != nil {
|
|
if x, ok := x.TaskParams.(*TaskParams_EcBalanceParams); ok {
|
|
return x.EcBalanceParams
|
|
}
|
|
}
|
|
return nil
|
|
}
|
|
|
|
type isTaskParams_TaskParams interface {
|
|
isTaskParams_TaskParams()
|
|
}
|
|
|
|
type TaskParams_VacuumParams struct {
|
|
VacuumParams *VacuumTaskParams `protobuf:"bytes,9,opt,name=vacuum_params,json=vacuumParams,proto3,oneof"`
|
|
}
|
|
|
|
type TaskParams_ErasureCodingParams struct {
|
|
ErasureCodingParams *ErasureCodingTaskParams `protobuf:"bytes,10,opt,name=erasure_coding_params,json=erasureCodingParams,proto3,oneof"`
|
|
}
|
|
|
|
type TaskParams_BalanceParams struct {
|
|
BalanceParams *BalanceTaskParams `protobuf:"bytes,11,opt,name=balance_params,json=balanceParams,proto3,oneof"`
|
|
}
|
|
|
|
type TaskParams_ReplicationParams struct {
|
|
ReplicationParams *ReplicationTaskParams `protobuf:"bytes,12,opt,name=replication_params,json=replicationParams,proto3,oneof"`
|
|
}
|
|
|
|
type TaskParams_EcBalanceParams struct {
|
|
EcBalanceParams *EcBalanceTaskParams `protobuf:"bytes,13,opt,name=ec_balance_params,json=ecBalanceParams,proto3,oneof"`
|
|
}
|
|
|
|
func (*TaskParams_VacuumParams) isTaskParams_TaskParams() {}
|
|
|
|
func (*TaskParams_ErasureCodingParams) isTaskParams_TaskParams() {}
|
|
|
|
func (*TaskParams_BalanceParams) isTaskParams_TaskParams() {}
|
|
|
|
func (*TaskParams_ReplicationParams) isTaskParams_TaskParams() {}
|
|
|
|
func (*TaskParams_EcBalanceParams) isTaskParams_TaskParams() {}

// VacuumTaskParams for vacuum operations
type VacuumTaskParams struct {
	state            protoimpl.MessageState `protogen:"open.v1"`
	GarbageThreshold float64                `protobuf:"fixed64,1,opt,name=garbage_threshold,json=garbageThreshold,proto3" json:"garbage_threshold,omitempty"` // Minimum garbage ratio to trigger vacuum
	ForceVacuum      bool                   `protobuf:"varint,2,opt,name=force_vacuum,json=forceVacuum,proto3" json:"force_vacuum,omitempty"`                // Force vacuum even if below threshold
	BatchSize        int32                  `protobuf:"varint,3,opt,name=batch_size,json=batchSize,proto3" json:"batch_size,omitempty"`                     // Number of files to process per batch
	WorkingDir       string                 `protobuf:"bytes,4,opt,name=working_dir,json=workingDir,proto3" json:"working_dir,omitempty"`                   // Working directory for temporary files
	VerifyChecksum   bool                   `protobuf:"varint,5,opt,name=verify_checksum,json=verifyChecksum,proto3" json:"verify_checksum,omitempty"`       // Verify file checksums during vacuum
	unknownFields    protoimpl.UnknownFields
	sizeCache        protoimpl.SizeCache
}

func (x *VacuumTaskParams) Reset() {
	*x = VacuumTaskParams{}
	mi := &file_worker_proto_msgTypes[9]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *VacuumTaskParams) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*VacuumTaskParams) ProtoMessage() {}

func (x *VacuumTaskParams) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[9]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use VacuumTaskParams.ProtoReflect.Descriptor instead.
func (*VacuumTaskParams) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{9}
}

func (x *VacuumTaskParams) GetGarbageThreshold() float64 {
	if x != nil {
		return x.GarbageThreshold
	}
	return 0
}

func (x *VacuumTaskParams) GetForceVacuum() bool {
	if x != nil {
		return x.ForceVacuum
	}
	return false
}

func (x *VacuumTaskParams) GetBatchSize() int32 {
	if x != nil {
		return x.BatchSize
	}
	return 0
}

func (x *VacuumTaskParams) GetWorkingDir() string {
	if x != nil {
		return x.WorkingDir
	}
	return ""
}

func (x *VacuumTaskParams) GetVerifyChecksum() bool {
	if x != nil {
		return x.VerifyChecksum
	}
	return false
}

// ErasureCodingTaskParams for EC encoding operations
type ErasureCodingTaskParams struct {
	state              protoimpl.MessageState `protogen:"open.v1"`
	EstimatedShardSize uint64                 `protobuf:"varint,1,opt,name=estimated_shard_size,json=estimatedShardSize,proto3" json:"estimated_shard_size,omitempty"` // Estimated size per shard
	DataShards         int32                  `protobuf:"varint,2,opt,name=data_shards,json=dataShards,proto3" json:"data_shards,omitempty"`                          // Number of data shards (default: 10)
	ParityShards       int32                  `protobuf:"varint,3,opt,name=parity_shards,json=parityShards,proto3" json:"parity_shards,omitempty"`                    // Number of parity shards (default: 4)
	WorkingDir         string                 `protobuf:"bytes,4,opt,name=working_dir,json=workingDir,proto3" json:"working_dir,omitempty"`                           // Working directory for EC processing
	MasterClient       string                 `protobuf:"bytes,5,opt,name=master_client,json=masterClient,proto3" json:"master_client,omitempty"`                     // Master server address
	CleanupSource      bool                   `protobuf:"varint,6,opt,name=cleanup_source,json=cleanupSource,proto3" json:"cleanup_source,omitempty"`                 // Whether to cleanup source volume after EC
	unknownFields      protoimpl.UnknownFields
	sizeCache          protoimpl.SizeCache
}

func (x *ErasureCodingTaskParams) Reset() {
	*x = ErasureCodingTaskParams{}
	mi := &file_worker_proto_msgTypes[10]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *ErasureCodingTaskParams) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*ErasureCodingTaskParams) ProtoMessage() {}

func (x *ErasureCodingTaskParams) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[10]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use ErasureCodingTaskParams.ProtoReflect.Descriptor instead.
func (*ErasureCodingTaskParams) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{10}
}

func (x *ErasureCodingTaskParams) GetEstimatedShardSize() uint64 {
	if x != nil {
		return x.EstimatedShardSize
	}
	return 0
}

func (x *ErasureCodingTaskParams) GetDataShards() int32 {
	if x != nil {
		return x.DataShards
	}
	return 0
}

func (x *ErasureCodingTaskParams) GetParityShards() int32 {
	if x != nil {
		return x.ParityShards
	}
	return 0
}

func (x *ErasureCodingTaskParams) GetWorkingDir() string {
	if x != nil {
		return x.WorkingDir
	}
	return ""
}

func (x *ErasureCodingTaskParams) GetMasterClient() string {
	if x != nil {
		return x.MasterClient
	}
	return ""
}

func (x *ErasureCodingTaskParams) GetCleanupSource() bool {
	if x != nil {
		return x.CleanupSource
	}
	return false
}

// TaskSource represents a unified source location for any task type
type TaskSource struct {
	state         protoimpl.MessageState `protogen:"open.v1"`
	Node          string                 `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"`                                  // Source server address
	DiskId        uint32                 `protobuf:"varint,2,opt,name=disk_id,json=diskId,proto3" json:"disk_id,omitempty"`               // Source disk ID
	Rack          string                 `protobuf:"bytes,3,opt,name=rack,proto3" json:"rack,omitempty"`                                  // Source rack for tracking
	DataCenter    string                 `protobuf:"bytes,4,opt,name=data_center,json=dataCenter,proto3" json:"data_center,omitempty"`    // Source data center for tracking
	VolumeId      uint32                 `protobuf:"varint,5,opt,name=volume_id,json=volumeId,proto3" json:"volume_id,omitempty"`         // Volume ID (for volume operations)
	ShardIds      []uint32               `protobuf:"varint,6,rep,packed,name=shard_ids,json=shardIds,proto3" json:"shard_ids,omitempty"`  // Shard IDs (for EC shard operations)
	EstimatedSize uint64                 `protobuf:"varint,7,opt,name=estimated_size,json=estimatedSize,proto3" json:"estimated_size,omitempty"` // Estimated size to be processed
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}

func (x *TaskSource) Reset() {
	*x = TaskSource{}
	mi := &file_worker_proto_msgTypes[11]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *TaskSource) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*TaskSource) ProtoMessage() {}

func (x *TaskSource) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[11]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use TaskSource.ProtoReflect.Descriptor instead.
func (*TaskSource) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{11}
}

func (x *TaskSource) GetNode() string {
	if x != nil {
		return x.Node
	}
	return ""
}

func (x *TaskSource) GetDiskId() uint32 {
	if x != nil {
		return x.DiskId
	}
	return 0
}

func (x *TaskSource) GetRack() string {
	if x != nil {
		return x.Rack
	}
	return ""
}

func (x *TaskSource) GetDataCenter() string {
	if x != nil {
		return x.DataCenter
	}
	return ""
}

func (x *TaskSource) GetVolumeId() uint32 {
	if x != nil {
		return x.VolumeId
	}
	return 0
}

func (x *TaskSource) GetShardIds() []uint32 {
	if x != nil {
		return x.ShardIds
	}
	return nil
}

func (x *TaskSource) GetEstimatedSize() uint64 {
	if x != nil {
		return x.EstimatedSize
	}
	return 0
}

// TaskTarget represents a unified target location for any task type
type TaskTarget struct {
	state         protoimpl.MessageState `protogen:"open.v1"`
	Node          string                 `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"`                                  // Target server address
	DiskId        uint32                 `protobuf:"varint,2,opt,name=disk_id,json=diskId,proto3" json:"disk_id,omitempty"`               // Target disk ID
	Rack          string                 `protobuf:"bytes,3,opt,name=rack,proto3" json:"rack,omitempty"`                                  // Target rack for tracking
	DataCenter    string                 `protobuf:"bytes,4,opt,name=data_center,json=dataCenter,proto3" json:"data_center,omitempty"`    // Target data center for tracking
	VolumeId      uint32                 `protobuf:"varint,5,opt,name=volume_id,json=volumeId,proto3" json:"volume_id,omitempty"`         // Volume ID (for volume operations)
	ShardIds      []uint32               `protobuf:"varint,6,rep,packed,name=shard_ids,json=shardIds,proto3" json:"shard_ids,omitempty"`  // Shard IDs (for EC shard operations)
	EstimatedSize uint64                 `protobuf:"varint,7,opt,name=estimated_size,json=estimatedSize,proto3" json:"estimated_size,omitempty"` // Estimated size to be created
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}

func (x *TaskTarget) Reset() {
	*x = TaskTarget{}
	mi := &file_worker_proto_msgTypes[12]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *TaskTarget) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*TaskTarget) ProtoMessage() {}

func (x *TaskTarget) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[12]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use TaskTarget.ProtoReflect.Descriptor instead.
func (*TaskTarget) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{12}
}

func (x *TaskTarget) GetNode() string {
	if x != nil {
		return x.Node
	}
	return ""
}

func (x *TaskTarget) GetDiskId() uint32 {
	if x != nil {
		return x.DiskId
	}
	return 0
}

func (x *TaskTarget) GetRack() string {
	if x != nil {
		return x.Rack
	}
	return ""
}

func (x *TaskTarget) GetDataCenter() string {
	if x != nil {
		return x.DataCenter
	}
	return ""
}

func (x *TaskTarget) GetVolumeId() uint32 {
	if x != nil {
		return x.VolumeId
	}
	return 0
}

func (x *TaskTarget) GetShardIds() []uint32 {
	if x != nil {
		return x.ShardIds
	}
	return nil
}

func (x *TaskTarget) GetEstimatedSize() uint64 {
	if x != nil {
		return x.EstimatedSize
	}
	return 0
}

// BalanceMoveSpec describes a single volume move within a batch balance job
type BalanceMoveSpec struct {
	state         protoimpl.MessageState `protogen:"open.v1"`
	VolumeId      uint32                 `protobuf:"varint,1,opt,name=volume_id,json=volumeId,proto3" json:"volume_id,omitempty"`       // Volume to move
	SourceNode    string                 `protobuf:"bytes,2,opt,name=source_node,json=sourceNode,proto3" json:"source_node,omitempty"`  // Source server address (host:port)
	TargetNode    string                 `protobuf:"bytes,3,opt,name=target_node,json=targetNode,proto3" json:"target_node,omitempty"`  // Destination server address (host:port)
	Collection    string                 `protobuf:"bytes,4,opt,name=collection,proto3" json:"collection,omitempty"`                    // Collection name
	VolumeSize    uint64                 `protobuf:"varint,5,opt,name=volume_size,json=volumeSize,proto3" json:"volume_size,omitempty"` // Volume size in bytes (informational)
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}

func (x *BalanceMoveSpec) Reset() {
	*x = BalanceMoveSpec{}
	mi := &file_worker_proto_msgTypes[13]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *BalanceMoveSpec) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*BalanceMoveSpec) ProtoMessage() {}

func (x *BalanceMoveSpec) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[13]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use BalanceMoveSpec.ProtoReflect.Descriptor instead.
func (*BalanceMoveSpec) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{13}
}

func (x *BalanceMoveSpec) GetVolumeId() uint32 {
	if x != nil {
		return x.VolumeId
	}
	return 0
}

func (x *BalanceMoveSpec) GetSourceNode() string {
	if x != nil {
		return x.SourceNode
	}
	return ""
}

func (x *BalanceMoveSpec) GetTargetNode() string {
	if x != nil {
		return x.TargetNode
	}
	return ""
}

func (x *BalanceMoveSpec) GetCollection() string {
	if x != nil {
		return x.Collection
	}
	return ""
}

func (x *BalanceMoveSpec) GetVolumeSize() uint64 {
	if x != nil {
		return x.VolumeSize
	}
	return 0
}

// BalanceTaskParams for volume balancing operations
type BalanceTaskParams struct {
	state              protoimpl.MessageState `protogen:"open.v1"`
	ForceMove          bool                   `protobuf:"varint,1,opt,name=force_move,json=forceMove,proto3" json:"force_move,omitempty"`                                  // Force move even with conflicts
	TimeoutSeconds     int32                  `protobuf:"varint,2,opt,name=timeout_seconds,json=timeoutSeconds,proto3" json:"timeout_seconds,omitempty"`                   // Operation timeout
	MaxConcurrentMoves int32                  `protobuf:"varint,3,opt,name=max_concurrent_moves,json=maxConcurrentMoves,proto3" json:"max_concurrent_moves,omitempty"`     // Max concurrent moves in a batch job (0 = default 5)
	Moves              []*BalanceMoveSpec     `protobuf:"bytes,4,rep,name=moves,proto3" json:"moves,omitempty"`                                                            // Batch: multiple volume moves in one job
	unknownFields      protoimpl.UnknownFields
	sizeCache          protoimpl.SizeCache
}

func (x *BalanceTaskParams) Reset() {
	*x = BalanceTaskParams{}
	mi := &file_worker_proto_msgTypes[14]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *BalanceTaskParams) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*BalanceTaskParams) ProtoMessage() {}

func (x *BalanceTaskParams) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[14]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use BalanceTaskParams.ProtoReflect.Descriptor instead.
func (*BalanceTaskParams) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{14}
}

func (x *BalanceTaskParams) GetForceMove() bool {
	if x != nil {
		return x.ForceMove
	}
	return false
}

func (x *BalanceTaskParams) GetTimeoutSeconds() int32 {
	if x != nil {
		return x.TimeoutSeconds
	}
	return 0
}

func (x *BalanceTaskParams) GetMaxConcurrentMoves() int32 {
	if x != nil {
		return x.MaxConcurrentMoves
	}
	return 0
}

func (x *BalanceTaskParams) GetMoves() []*BalanceMoveSpec {
	if x != nil {
		return x.Moves
	}
	return nil
}

// ReplicationTaskParams for adding replicas
type ReplicationTaskParams struct {
	state             protoimpl.MessageState `protogen:"open.v1"`
	ReplicaCount      int32                  `protobuf:"varint,1,opt,name=replica_count,json=replicaCount,proto3" json:"replica_count,omitempty"`                // Target replica count
	VerifyConsistency bool                   `protobuf:"varint,2,opt,name=verify_consistency,json=verifyConsistency,proto3" json:"verify_consistency,omitempty"` // Verify replica consistency after creation
	unknownFields     protoimpl.UnknownFields
	sizeCache         protoimpl.SizeCache
}

func (x *ReplicationTaskParams) Reset() {
	*x = ReplicationTaskParams{}
	mi := &file_worker_proto_msgTypes[15]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *ReplicationTaskParams) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*ReplicationTaskParams) ProtoMessage() {}

func (x *ReplicationTaskParams) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[15]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use ReplicationTaskParams.ProtoReflect.Descriptor instead.
func (*ReplicationTaskParams) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{15}
}

func (x *ReplicationTaskParams) GetReplicaCount() int32 {
	if x != nil {
		return x.ReplicaCount
	}
	return 0
}

func (x *ReplicationTaskParams) GetVerifyConsistency() bool {
	if x != nil {
		return x.VerifyConsistency
	}
	return false
}

// TaskUpdate reports task progress
type TaskUpdate struct {
	state         protoimpl.MessageState `protogen:"open.v1"`
	TaskId        string                 `protobuf:"bytes,1,opt,name=task_id,json=taskId,proto3" json:"task_id,omitempty"`
	WorkerId      string                 `protobuf:"bytes,2,opt,name=worker_id,json=workerId,proto3" json:"worker_id,omitempty"`
	Status        string                 `protobuf:"bytes,3,opt,name=status,proto3" json:"status,omitempty"`
	Progress      float32                `protobuf:"fixed32,4,opt,name=progress,proto3" json:"progress,omitempty"`
	Message       string                 `protobuf:"bytes,5,opt,name=message,proto3" json:"message,omitempty"`
	Metadata      map[string]string      `protobuf:"bytes,6,rep,name=metadata,proto3" json:"metadata,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}

func (x *TaskUpdate) Reset() {
	*x = TaskUpdate{}
	mi := &file_worker_proto_msgTypes[16]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *TaskUpdate) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*TaskUpdate) ProtoMessage() {}

func (x *TaskUpdate) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[16]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use TaskUpdate.ProtoReflect.Descriptor instead.
func (*TaskUpdate) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{16}
}

func (x *TaskUpdate) GetTaskId() string {
	if x != nil {
		return x.TaskId
	}
	return ""
}

func (x *TaskUpdate) GetWorkerId() string {
	if x != nil {
		return x.WorkerId
	}
	return ""
}

func (x *TaskUpdate) GetStatus() string {
	if x != nil {
		return x.Status
	}
	return ""
}

func (x *TaskUpdate) GetProgress() float32 {
	if x != nil {
		return x.Progress
	}
	return 0
}

func (x *TaskUpdate) GetMessage() string {
	if x != nil {
		return x.Message
	}
	return ""
}

func (x *TaskUpdate) GetMetadata() map[string]string {
	if x != nil {
		return x.Metadata
	}
	return nil
}

// TaskComplete reports task completion
type TaskComplete struct {
	state          protoimpl.MessageState `protogen:"open.v1"`
	TaskId         string                 `protobuf:"bytes,1,opt,name=task_id,json=taskId,proto3" json:"task_id,omitempty"`
	WorkerId       string                 `protobuf:"bytes,2,opt,name=worker_id,json=workerId,proto3" json:"worker_id,omitempty"`
	Success        bool                   `protobuf:"varint,3,opt,name=success,proto3" json:"success,omitempty"`
	ErrorMessage   string                 `protobuf:"bytes,4,opt,name=error_message,json=errorMessage,proto3" json:"error_message,omitempty"`
	CompletionTime int64                  `protobuf:"varint,5,opt,name=completion_time,json=completionTime,proto3" json:"completion_time,omitempty"`
	ResultMetadata map[string]string      `protobuf:"bytes,6,rep,name=result_metadata,json=resultMetadata,proto3" json:"result_metadata,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
	unknownFields  protoimpl.UnknownFields
	sizeCache      protoimpl.SizeCache
}

func (x *TaskComplete) Reset() {
	*x = TaskComplete{}
	mi := &file_worker_proto_msgTypes[17]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *TaskComplete) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*TaskComplete) ProtoMessage() {}

func (x *TaskComplete) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[17]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use TaskComplete.ProtoReflect.Descriptor instead.
func (*TaskComplete) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{17}
}

func (x *TaskComplete) GetTaskId() string {
	if x != nil {
		return x.TaskId
	}
	return ""
}

func (x *TaskComplete) GetWorkerId() string {
	if x != nil {
		return x.WorkerId
	}
	return ""
}

func (x *TaskComplete) GetSuccess() bool {
	if x != nil {
		return x.Success
	}
	return false
}

func (x *TaskComplete) GetErrorMessage() string {
	if x != nil {
		return x.ErrorMessage
	}
	return ""
}

func (x *TaskComplete) GetCompletionTime() int64 {
	if x != nil {
		return x.CompletionTime
	}
	return 0
}

func (x *TaskComplete) GetResultMetadata() map[string]string {
	if x != nil {
		return x.ResultMetadata
	}
	return nil
}

// TaskCancellation from admin to cancel a task
type TaskCancellation struct {
	state         protoimpl.MessageState `protogen:"open.v1"`
	TaskId        string                 `protobuf:"bytes,1,opt,name=task_id,json=taskId,proto3" json:"task_id,omitempty"`
	Reason        string                 `protobuf:"bytes,2,opt,name=reason,proto3" json:"reason,omitempty"`
	Force         bool                   `protobuf:"varint,3,opt,name=force,proto3" json:"force,omitempty"`
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}

func (x *TaskCancellation) Reset() {
	*x = TaskCancellation{}
	mi := &file_worker_proto_msgTypes[18]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *TaskCancellation) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*TaskCancellation) ProtoMessage() {}

func (x *TaskCancellation) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[18]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use TaskCancellation.ProtoReflect.Descriptor instead.
func (*TaskCancellation) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{18}
}

func (x *TaskCancellation) GetTaskId() string {
	if x != nil {
		return x.TaskId
	}
	return ""
}

func (x *TaskCancellation) GetReason() string {
	if x != nil {
		return x.Reason
	}
	return ""
}

func (x *TaskCancellation) GetForce() bool {
	if x != nil {
		return x.Force
	}
	return false
}

// WorkerShutdown notifies admin that worker is shutting down
type WorkerShutdown struct {
	state          protoimpl.MessageState `protogen:"open.v1"`
	WorkerId       string                 `protobuf:"bytes,1,opt,name=worker_id,json=workerId,proto3" json:"worker_id,omitempty"`
	Reason         string                 `protobuf:"bytes,2,opt,name=reason,proto3" json:"reason,omitempty"`
	PendingTaskIds []string               `protobuf:"bytes,3,rep,name=pending_task_ids,json=pendingTaskIds,proto3" json:"pending_task_ids,omitempty"`
	unknownFields  protoimpl.UnknownFields
	sizeCache      protoimpl.SizeCache
}

func (x *WorkerShutdown) Reset() {
	*x = WorkerShutdown{}
	mi := &file_worker_proto_msgTypes[19]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *WorkerShutdown) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*WorkerShutdown) ProtoMessage() {}

func (x *WorkerShutdown) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[19]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use WorkerShutdown.ProtoReflect.Descriptor instead.
func (*WorkerShutdown) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{19}
}

func (x *WorkerShutdown) GetWorkerId() string {
	if x != nil {
		return x.WorkerId
	}
	return ""
}

func (x *WorkerShutdown) GetReason() string {
	if x != nil {
		return x.Reason
	}
	return ""
}

func (x *WorkerShutdown) GetPendingTaskIds() []string {
	if x != nil {
		return x.PendingTaskIds
	}
	return nil
}

// AdminShutdown notifies worker that admin is shutting down
type AdminShutdown struct {
	state                   protoimpl.MessageState `protogen:"open.v1"`
	Reason                  string                 `protobuf:"bytes,1,opt,name=reason,proto3" json:"reason,omitempty"`
	GracefulShutdownSeconds int32                  `protobuf:"varint,2,opt,name=graceful_shutdown_seconds,json=gracefulShutdownSeconds,proto3" json:"graceful_shutdown_seconds,omitempty"`
	unknownFields           protoimpl.UnknownFields
	sizeCache               protoimpl.SizeCache
}

func (x *AdminShutdown) Reset() {
	*x = AdminShutdown{}
	mi := &file_worker_proto_msgTypes[20]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *AdminShutdown) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*AdminShutdown) ProtoMessage() {}

func (x *AdminShutdown) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[20]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use AdminShutdown.ProtoReflect.Descriptor instead.
func (*AdminShutdown) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{20}
}

func (x *AdminShutdown) GetReason() string {
	if x != nil {
		return x.Reason
	}
	return ""
}

func (x *AdminShutdown) GetGracefulShutdownSeconds() int32 {
	if x != nil {
		return x.GracefulShutdownSeconds
	}
	return 0
}

// TaskLogRequest requests logs for a specific task
type TaskLogRequest struct {
	state           protoimpl.MessageState `protogen:"open.v1"`
	TaskId          string                 `protobuf:"bytes,1,opt,name=task_id,json=taskId,proto3" json:"task_id,omitempty"`
	WorkerId        string                 `protobuf:"bytes,2,opt,name=worker_id,json=workerId,proto3" json:"worker_id,omitempty"`
	IncludeMetadata bool                   `protobuf:"varint,3,opt,name=include_metadata,json=includeMetadata,proto3" json:"include_metadata,omitempty"` // Include task metadata
	MaxEntries      int32                  `protobuf:"varint,4,opt,name=max_entries,json=maxEntries,proto3" json:"max_entries,omitempty"`               // Maximum number of log entries (0 = all)
	LogLevel        string                 `protobuf:"bytes,5,opt,name=log_level,json=logLevel,proto3" json:"log_level,omitempty"`                      // Filter by log level (INFO, WARNING, ERROR, DEBUG)
	StartTime       int64                  `protobuf:"varint,6,opt,name=start_time,json=startTime,proto3" json:"start_time,omitempty"`                  // Unix timestamp for start time filter
	EndTime         int64                  `protobuf:"varint,7,opt,name=end_time,json=endTime,proto3" json:"end_time,omitempty"`                        // Unix timestamp for end time filter
	unknownFields   protoimpl.UnknownFields
	sizeCache       protoimpl.SizeCache
}

func (x *TaskLogRequest) Reset() {
	*x = TaskLogRequest{}
	mi := &file_worker_proto_msgTypes[21]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *TaskLogRequest) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*TaskLogRequest) ProtoMessage() {}

func (x *TaskLogRequest) ProtoReflect() protoreflect.Message {
|
|
mi := &file_worker_proto_msgTypes[21]
|
|
if x != nil {
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
if ms.LoadMessageInfo() == nil {
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
return ms
|
|
}
|
|
return mi.MessageOf(x)
|
|
}
|
|
|
|
// Deprecated: Use TaskLogRequest.ProtoReflect.Descriptor instead.
|
|
func (*TaskLogRequest) Descriptor() ([]byte, []int) {
|
|
return file_worker_proto_rawDescGZIP(), []int{21}
|
|
}
|
|
|
|
func (x *TaskLogRequest) GetTaskId() string {
|
|
if x != nil {
|
|
return x.TaskId
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *TaskLogRequest) GetWorkerId() string {
|
|
if x != nil {
|
|
return x.WorkerId
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *TaskLogRequest) GetIncludeMetadata() bool {
|
|
if x != nil {
|
|
return x.IncludeMetadata
|
|
}
|
|
return false
|
|
}
|
|
|
|
func (x *TaskLogRequest) GetMaxEntries() int32 {
|
|
if x != nil {
|
|
return x.MaxEntries
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *TaskLogRequest) GetLogLevel() string {
|
|
if x != nil {
|
|
return x.LogLevel
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *TaskLogRequest) GetStartTime() int64 {
|
|
if x != nil {
|
|
return x.StartTime
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *TaskLogRequest) GetEndTime() int64 {
|
|
if x != nil {
|
|
return x.EndTime
|
|
}
|
|
return 0
|
|
}
|
|
|
|
// TaskLogResponse returns task logs and metadata
|
|
type TaskLogResponse struct {
|
|
state protoimpl.MessageState `protogen:"open.v1"`
|
|
TaskId string `protobuf:"bytes,1,opt,name=task_id,json=taskId,proto3" json:"task_id,omitempty"`
|
|
WorkerId string `protobuf:"bytes,2,opt,name=worker_id,json=workerId,proto3" json:"worker_id,omitempty"`
|
|
Success bool `protobuf:"varint,3,opt,name=success,proto3" json:"success,omitempty"`
|
|
ErrorMessage string `protobuf:"bytes,4,opt,name=error_message,json=errorMessage,proto3" json:"error_message,omitempty"`
|
|
Metadata *TaskLogMetadata `protobuf:"bytes,5,opt,name=metadata,proto3" json:"metadata,omitempty"`
|
|
LogEntries []*TaskLogEntry `protobuf:"bytes,6,rep,name=log_entries,json=logEntries,proto3" json:"log_entries,omitempty"`
|
|
unknownFields protoimpl.UnknownFields
|
|
sizeCache protoimpl.SizeCache
|
|
}
|
|
|
|
func (x *TaskLogResponse) Reset() {
|
|
*x = TaskLogResponse{}
|
|
mi := &file_worker_proto_msgTypes[22]
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
|
|
func (x *TaskLogResponse) String() string {
|
|
return protoimpl.X.MessageStringOf(x)
|
|
}
|
|
|
|
func (*TaskLogResponse) ProtoMessage() {}
|
|
|
|
func (x *TaskLogResponse) ProtoReflect() protoreflect.Message {
|
|
mi := &file_worker_proto_msgTypes[22]
|
|
if x != nil {
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
if ms.LoadMessageInfo() == nil {
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
return ms
|
|
}
|
|
return mi.MessageOf(x)
|
|
}
|
|
|
|
// Deprecated: Use TaskLogResponse.ProtoReflect.Descriptor instead.
|
|
func (*TaskLogResponse) Descriptor() ([]byte, []int) {
|
|
return file_worker_proto_rawDescGZIP(), []int{22}
|
|
}
|
|
|
|
func (x *TaskLogResponse) GetTaskId() string {
|
|
if x != nil {
|
|
return x.TaskId
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *TaskLogResponse) GetWorkerId() string {
|
|
if x != nil {
|
|
return x.WorkerId
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *TaskLogResponse) GetSuccess() bool {
|
|
if x != nil {
|
|
return x.Success
|
|
}
|
|
return false
|
|
}
|
|
|
|
func (x *TaskLogResponse) GetErrorMessage() string {
|
|
if x != nil {
|
|
return x.ErrorMessage
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *TaskLogResponse) GetMetadata() *TaskLogMetadata {
|
|
if x != nil {
|
|
return x.Metadata
|
|
}
|
|
return nil
|
|
}
|
|
|
|
func (x *TaskLogResponse) GetLogEntries() []*TaskLogEntry {
|
|
if x != nil {
|
|
return x.LogEntries
|
|
}
|
|
return nil
|
|
}
|
|
|
|
// TaskLogMetadata contains metadata about task execution
|
|
type TaskLogMetadata struct {
|
|
state protoimpl.MessageState `protogen:"open.v1"`
|
|
TaskId string `protobuf:"bytes,1,opt,name=task_id,json=taskId,proto3" json:"task_id,omitempty"`
|
|
TaskType string `protobuf:"bytes,2,opt,name=task_type,json=taskType,proto3" json:"task_type,omitempty"`
|
|
WorkerId string `protobuf:"bytes,3,opt,name=worker_id,json=workerId,proto3" json:"worker_id,omitempty"`
|
|
StartTime int64 `protobuf:"varint,4,opt,name=start_time,json=startTime,proto3" json:"start_time,omitempty"`
|
|
EndTime int64 `protobuf:"varint,5,opt,name=end_time,json=endTime,proto3" json:"end_time,omitempty"`
|
|
DurationMs int64 `protobuf:"varint,6,opt,name=duration_ms,json=durationMs,proto3" json:"duration_ms,omitempty"`
|
|
Status string `protobuf:"bytes,7,opt,name=status,proto3" json:"status,omitempty"`
|
|
Progress float32 `protobuf:"fixed32,8,opt,name=progress,proto3" json:"progress,omitempty"`
|
|
VolumeId uint32 `protobuf:"varint,9,opt,name=volume_id,json=volumeId,proto3" json:"volume_id,omitempty"`
|
|
Server string `protobuf:"bytes,10,opt,name=server,proto3" json:"server,omitempty"`
|
|
Collection string `protobuf:"bytes,11,opt,name=collection,proto3" json:"collection,omitempty"`
|
|
LogFilePath string `protobuf:"bytes,12,opt,name=log_file_path,json=logFilePath,proto3" json:"log_file_path,omitempty"`
|
|
CreatedAt int64 `protobuf:"varint,13,opt,name=created_at,json=createdAt,proto3" json:"created_at,omitempty"`
|
|
CustomData map[string]string `protobuf:"bytes,14,rep,name=custom_data,json=customData,proto3" json:"custom_data,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
|
|
unknownFields protoimpl.UnknownFields
|
|
sizeCache protoimpl.SizeCache
|
|
}
|
|
|
|
func (x *TaskLogMetadata) Reset() {
|
|
*x = TaskLogMetadata{}
|
|
mi := &file_worker_proto_msgTypes[23]
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
|
|
func (x *TaskLogMetadata) String() string {
|
|
return protoimpl.X.MessageStringOf(x)
|
|
}
|
|
|
|
func (*TaskLogMetadata) ProtoMessage() {}
|
|
|
|
func (x *TaskLogMetadata) ProtoReflect() protoreflect.Message {
|
|
mi := &file_worker_proto_msgTypes[23]
|
|
if x != nil {
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
if ms.LoadMessageInfo() == nil {
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
return ms
|
|
}
|
|
return mi.MessageOf(x)
|
|
}
|
|
|
|
// Deprecated: Use TaskLogMetadata.ProtoReflect.Descriptor instead.
|
|
func (*TaskLogMetadata) Descriptor() ([]byte, []int) {
|
|
return file_worker_proto_rawDescGZIP(), []int{23}
|
|
}
|
|
|
|
func (x *TaskLogMetadata) GetTaskId() string {
|
|
if x != nil {
|
|
return x.TaskId
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *TaskLogMetadata) GetTaskType() string {
|
|
if x != nil {
|
|
return x.TaskType
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *TaskLogMetadata) GetWorkerId() string {
|
|
if x != nil {
|
|
return x.WorkerId
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *TaskLogMetadata) GetStartTime() int64 {
|
|
if x != nil {
|
|
return x.StartTime
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *TaskLogMetadata) GetEndTime() int64 {
|
|
if x != nil {
|
|
return x.EndTime
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *TaskLogMetadata) GetDurationMs() int64 {
|
|
if x != nil {
|
|
return x.DurationMs
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *TaskLogMetadata) GetStatus() string {
|
|
if x != nil {
|
|
return x.Status
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *TaskLogMetadata) GetProgress() float32 {
|
|
if x != nil {
|
|
return x.Progress
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *TaskLogMetadata) GetVolumeId() uint32 {
|
|
if x != nil {
|
|
return x.VolumeId
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *TaskLogMetadata) GetServer() string {
|
|
if x != nil {
|
|
return x.Server
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *TaskLogMetadata) GetCollection() string {
|
|
if x != nil {
|
|
return x.Collection
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *TaskLogMetadata) GetLogFilePath() string {
|
|
if x != nil {
|
|
return x.LogFilePath
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *TaskLogMetadata) GetCreatedAt() int64 {
|
|
if x != nil {
|
|
return x.CreatedAt
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *TaskLogMetadata) GetCustomData() map[string]string {
|
|
if x != nil {
|
|
return x.CustomData
|
|
}
|
|
return nil
|
|
}
|
|
|
|
// TaskLogEntry represents a single log entry
|
|
type TaskLogEntry struct {
|
|
state protoimpl.MessageState `protogen:"open.v1"`
|
|
Timestamp int64 `protobuf:"varint,1,opt,name=timestamp,proto3" json:"timestamp,omitempty"`
|
|
Level string `protobuf:"bytes,2,opt,name=level,proto3" json:"level,omitempty"`
|
|
Message string `protobuf:"bytes,3,opt,name=message,proto3" json:"message,omitempty"`
|
|
Fields map[string]string `protobuf:"bytes,4,rep,name=fields,proto3" json:"fields,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
|
|
Progress float32 `protobuf:"fixed32,5,opt,name=progress,proto3" json:"progress,omitempty"`
|
|
Status string `protobuf:"bytes,6,opt,name=status,proto3" json:"status,omitempty"`
|
|
unknownFields protoimpl.UnknownFields
|
|
sizeCache protoimpl.SizeCache
|
|
}
|
|
|
|
func (x *TaskLogEntry) Reset() {
|
|
*x = TaskLogEntry{}
|
|
mi := &file_worker_proto_msgTypes[24]
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
|
|
func (x *TaskLogEntry) String() string {
|
|
return protoimpl.X.MessageStringOf(x)
|
|
}
|
|
|
|
func (*TaskLogEntry) ProtoMessage() {}
|
|
|
|
func (x *TaskLogEntry) ProtoReflect() protoreflect.Message {
|
|
mi := &file_worker_proto_msgTypes[24]
|
|
if x != nil {
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
if ms.LoadMessageInfo() == nil {
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
return ms
|
|
}
|
|
return mi.MessageOf(x)
|
|
}
|
|
|
|
// Deprecated: Use TaskLogEntry.ProtoReflect.Descriptor instead.
|
|
func (*TaskLogEntry) Descriptor() ([]byte, []int) {
|
|
return file_worker_proto_rawDescGZIP(), []int{24}
|
|
}
|
|
|
|
func (x *TaskLogEntry) GetTimestamp() int64 {
|
|
if x != nil {
|
|
return x.Timestamp
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *TaskLogEntry) GetLevel() string {
|
|
if x != nil {
|
|
return x.Level
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *TaskLogEntry) GetMessage() string {
|
|
if x != nil {
|
|
return x.Message
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *TaskLogEntry) GetFields() map[string]string {
|
|
if x != nil {
|
|
return x.Fields
|
|
}
|
|
return nil
|
|
}
|
|
|
|
func (x *TaskLogEntry) GetProgress() float32 {
|
|
if x != nil {
|
|
return x.Progress
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *TaskLogEntry) GetStatus() string {
|
|
if x != nil {
|
|
return x.Status
|
|
}
|
|
return ""
|
|
}
|
|
|
|
// MaintenanceConfig holds configuration for the maintenance system
|
|
type MaintenanceConfig struct {
|
|
state protoimpl.MessageState `protogen:"open.v1"`
|
|
Enabled bool `protobuf:"varint,1,opt,name=enabled,proto3" json:"enabled,omitempty"`
|
|
ScanIntervalSeconds int32 `protobuf:"varint,2,opt,name=scan_interval_seconds,json=scanIntervalSeconds,proto3" json:"scan_interval_seconds,omitempty"` // How often to scan for maintenance needs
|
|
WorkerTimeoutSeconds int32 `protobuf:"varint,3,opt,name=worker_timeout_seconds,json=workerTimeoutSeconds,proto3" json:"worker_timeout_seconds,omitempty"` // Worker heartbeat timeout
|
|
TaskTimeoutSeconds int32 `protobuf:"varint,4,opt,name=task_timeout_seconds,json=taskTimeoutSeconds,proto3" json:"task_timeout_seconds,omitempty"` // Individual task timeout
|
|
RetryDelaySeconds int32 `protobuf:"varint,5,opt,name=retry_delay_seconds,json=retryDelaySeconds,proto3" json:"retry_delay_seconds,omitempty"` // Delay between retries
|
|
MaxRetries int32 `protobuf:"varint,6,opt,name=max_retries,json=maxRetries,proto3" json:"max_retries,omitempty"` // Default max retries for tasks
|
|
CleanupIntervalSeconds int32 `protobuf:"varint,7,opt,name=cleanup_interval_seconds,json=cleanupIntervalSeconds,proto3" json:"cleanup_interval_seconds,omitempty"` // How often to clean up old tasks
|
|
TaskRetentionSeconds int32 `protobuf:"varint,8,opt,name=task_retention_seconds,json=taskRetentionSeconds,proto3" json:"task_retention_seconds,omitempty"` // How long to keep completed/failed tasks
|
|
Policy *MaintenancePolicy `protobuf:"bytes,9,opt,name=policy,proto3" json:"policy,omitempty"`
|
|
unknownFields protoimpl.UnknownFields
|
|
sizeCache protoimpl.SizeCache
|
|
}
|
|
|
|
func (x *MaintenanceConfig) Reset() {
|
|
*x = MaintenanceConfig{}
|
|
mi := &file_worker_proto_msgTypes[25]
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
|
|
func (x *MaintenanceConfig) String() string {
|
|
return protoimpl.X.MessageStringOf(x)
|
|
}
|
|
|
|
func (*MaintenanceConfig) ProtoMessage() {}
|
|
|
|
func (x *MaintenanceConfig) ProtoReflect() protoreflect.Message {
|
|
mi := &file_worker_proto_msgTypes[25]
|
|
if x != nil {
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
if ms.LoadMessageInfo() == nil {
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
return ms
|
|
}
|
|
return mi.MessageOf(x)
|
|
}
|
|
|
|
// Deprecated: Use MaintenanceConfig.ProtoReflect.Descriptor instead.
|
|
func (*MaintenanceConfig) Descriptor() ([]byte, []int) {
|
|
return file_worker_proto_rawDescGZIP(), []int{25}
|
|
}
|
|
|
|
func (x *MaintenanceConfig) GetEnabled() bool {
|
|
if x != nil {
|
|
return x.Enabled
|
|
}
|
|
return false
|
|
}
|
|
|
|
func (x *MaintenanceConfig) GetScanIntervalSeconds() int32 {
|
|
if x != nil {
|
|
return x.ScanIntervalSeconds
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *MaintenanceConfig) GetWorkerTimeoutSeconds() int32 {
|
|
if x != nil {
|
|
return x.WorkerTimeoutSeconds
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *MaintenanceConfig) GetTaskTimeoutSeconds() int32 {
|
|
if x != nil {
|
|
return x.TaskTimeoutSeconds
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *MaintenanceConfig) GetRetryDelaySeconds() int32 {
|
|
if x != nil {
|
|
return x.RetryDelaySeconds
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *MaintenanceConfig) GetMaxRetries() int32 {
|
|
if x != nil {
|
|
return x.MaxRetries
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *MaintenanceConfig) GetCleanupIntervalSeconds() int32 {
|
|
if x != nil {
|
|
return x.CleanupIntervalSeconds
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *MaintenanceConfig) GetTaskRetentionSeconds() int32 {
|
|
if x != nil {
|
|
return x.TaskRetentionSeconds
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *MaintenanceConfig) GetPolicy() *MaintenancePolicy {
|
|
if x != nil {
|
|
return x.Policy
|
|
}
|
|
return nil
|
|
}
|
|
|
|
// MaintenancePolicy defines policies for maintenance operations
|
|
type MaintenancePolicy struct {
|
|
state protoimpl.MessageState `protogen:"open.v1"`
|
|
TaskPolicies map[string]*TaskPolicy `protobuf:"bytes,1,rep,name=task_policies,json=taskPolicies,proto3" json:"task_policies,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"` // Task type -> policy mapping
|
|
GlobalMaxConcurrent int32 `protobuf:"varint,2,opt,name=global_max_concurrent,json=globalMaxConcurrent,proto3" json:"global_max_concurrent,omitempty"` // Overall limit across all task types
|
|
DefaultRepeatIntervalSeconds int32 `protobuf:"varint,3,opt,name=default_repeat_interval_seconds,json=defaultRepeatIntervalSeconds,proto3" json:"default_repeat_interval_seconds,omitempty"` // Default seconds if task doesn't specify
|
|
DefaultCheckIntervalSeconds int32 `protobuf:"varint,4,opt,name=default_check_interval_seconds,json=defaultCheckIntervalSeconds,proto3" json:"default_check_interval_seconds,omitempty"` // Default seconds for periodic checks
|
|
unknownFields protoimpl.UnknownFields
|
|
sizeCache protoimpl.SizeCache
|
|
}
|
|
|
|
func (x *MaintenancePolicy) Reset() {
|
|
*x = MaintenancePolicy{}
|
|
mi := &file_worker_proto_msgTypes[26]
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
|
|
func (x *MaintenancePolicy) String() string {
|
|
return protoimpl.X.MessageStringOf(x)
|
|
}
|
|
|
|
func (*MaintenancePolicy) ProtoMessage() {}
|
|
|
|
func (x *MaintenancePolicy) ProtoReflect() protoreflect.Message {
|
|
mi := &file_worker_proto_msgTypes[26]
|
|
if x != nil {
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
if ms.LoadMessageInfo() == nil {
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
return ms
|
|
}
|
|
return mi.MessageOf(x)
|
|
}
|
|
|
|
// Deprecated: Use MaintenancePolicy.ProtoReflect.Descriptor instead.
|
|
func (*MaintenancePolicy) Descriptor() ([]byte, []int) {
|
|
return file_worker_proto_rawDescGZIP(), []int{26}
|
|
}
|
|
|
|
func (x *MaintenancePolicy) GetTaskPolicies() map[string]*TaskPolicy {
|
|
if x != nil {
|
|
return x.TaskPolicies
|
|
}
|
|
return nil
|
|
}
|
|
|
|
func (x *MaintenancePolicy) GetGlobalMaxConcurrent() int32 {
|
|
if x != nil {
|
|
return x.GlobalMaxConcurrent
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *MaintenancePolicy) GetDefaultRepeatIntervalSeconds() int32 {
|
|
if x != nil {
|
|
return x.DefaultRepeatIntervalSeconds
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *MaintenancePolicy) GetDefaultCheckIntervalSeconds() int32 {
|
|
if x != nil {
|
|
return x.DefaultCheckIntervalSeconds
|
|
}
|
|
return 0
|
|
}
|
|
|
|
// TaskPolicy represents configuration for a specific task type
|
|
type TaskPolicy struct {
|
|
state protoimpl.MessageState `protogen:"open.v1"`
|
|
Enabled bool `protobuf:"varint,1,opt,name=enabled,proto3" json:"enabled,omitempty"`
|
|
MaxConcurrent int32 `protobuf:"varint,2,opt,name=max_concurrent,json=maxConcurrent,proto3" json:"max_concurrent,omitempty"`
|
|
RepeatIntervalSeconds int32 `protobuf:"varint,3,opt,name=repeat_interval_seconds,json=repeatIntervalSeconds,proto3" json:"repeat_interval_seconds,omitempty"` // Seconds to wait before repeating
|
|
CheckIntervalSeconds int32 `protobuf:"varint,4,opt,name=check_interval_seconds,json=checkIntervalSeconds,proto3" json:"check_interval_seconds,omitempty"` // Seconds between checks
|
|
// Typed task-specific configuration (replaces generic map)
|
|
//
|
|
// Types that are valid to be assigned to TaskConfig:
|
|
//
|
|
// *TaskPolicy_VacuumConfig
|
|
// *TaskPolicy_ErasureCodingConfig
|
|
// *TaskPolicy_BalanceConfig
|
|
// *TaskPolicy_ReplicationConfig
|
|
// *TaskPolicy_EcBalanceConfig
|
|
TaskConfig isTaskPolicy_TaskConfig `protobuf_oneof:"task_config"`
|
|
unknownFields protoimpl.UnknownFields
|
|
sizeCache protoimpl.SizeCache
|
|
}
|
|
|
|
func (x *TaskPolicy) Reset() {
|
|
*x = TaskPolicy{}
|
|
mi := &file_worker_proto_msgTypes[27]
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
|
|
func (x *TaskPolicy) String() string {
|
|
return protoimpl.X.MessageStringOf(x)
|
|
}
|
|
|
|
func (*TaskPolicy) ProtoMessage() {}
|
|
|
|
func (x *TaskPolicy) ProtoReflect() protoreflect.Message {
|
|
mi := &file_worker_proto_msgTypes[27]
|
|
if x != nil {
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
if ms.LoadMessageInfo() == nil {
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
return ms
|
|
}
|
|
return mi.MessageOf(x)
|
|
}
|
|
|
|
// Deprecated: Use TaskPolicy.ProtoReflect.Descriptor instead.
|
|
func (*TaskPolicy) Descriptor() ([]byte, []int) {
|
|
return file_worker_proto_rawDescGZIP(), []int{27}
|
|
}
|
|
|
|
func (x *TaskPolicy) GetEnabled() bool {
|
|
if x != nil {
|
|
return x.Enabled
|
|
}
|
|
return false
|
|
}
|
|
|
|
func (x *TaskPolicy) GetMaxConcurrent() int32 {
|
|
if x != nil {
|
|
return x.MaxConcurrent
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *TaskPolicy) GetRepeatIntervalSeconds() int32 {
|
|
if x != nil {
|
|
return x.RepeatIntervalSeconds
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *TaskPolicy) GetCheckIntervalSeconds() int32 {
|
|
if x != nil {
|
|
return x.CheckIntervalSeconds
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *TaskPolicy) GetTaskConfig() isTaskPolicy_TaskConfig {
|
|
if x != nil {
|
|
return x.TaskConfig
|
|
}
|
|
return nil
|
|
}
|
|
|
|
func (x *TaskPolicy) GetVacuumConfig() *VacuumTaskConfig {
|
|
if x != nil {
|
|
if x, ok := x.TaskConfig.(*TaskPolicy_VacuumConfig); ok {
|
|
return x.VacuumConfig
|
|
}
|
|
}
|
|
return nil
|
|
}
|
|
|
|
func (x *TaskPolicy) GetErasureCodingConfig() *ErasureCodingTaskConfig {
|
|
if x != nil {
|
|
if x, ok := x.TaskConfig.(*TaskPolicy_ErasureCodingConfig); ok {
|
|
return x.ErasureCodingConfig
|
|
}
|
|
}
|
|
return nil
|
|
}
|
|
|
|
func (x *TaskPolicy) GetBalanceConfig() *BalanceTaskConfig {
|
|
if x != nil {
|
|
if x, ok := x.TaskConfig.(*TaskPolicy_BalanceConfig); ok {
|
|
return x.BalanceConfig
|
|
}
|
|
}
|
|
return nil
|
|
}
|
|
|
|
func (x *TaskPolicy) GetReplicationConfig() *ReplicationTaskConfig {
|
|
if x != nil {
|
|
if x, ok := x.TaskConfig.(*TaskPolicy_ReplicationConfig); ok {
|
|
return x.ReplicationConfig
|
|
}
|
|
}
|
|
return nil
|
|
}
|
|
|
|
func (x *TaskPolicy) GetEcBalanceConfig() *EcBalanceTaskConfig {
|
|
if x != nil {
|
|
if x, ok := x.TaskConfig.(*TaskPolicy_EcBalanceConfig); ok {
|
|
return x.EcBalanceConfig
|
|
}
|
|
}
|
|
return nil
|
|
}
|
|
|
|
type isTaskPolicy_TaskConfig interface {
|
|
isTaskPolicy_TaskConfig()
|
|
}
|
|
|
|
type TaskPolicy_VacuumConfig struct {
|
|
VacuumConfig *VacuumTaskConfig `protobuf:"bytes,5,opt,name=vacuum_config,json=vacuumConfig,proto3,oneof"`
|
|
}
|
|
|
|
type TaskPolicy_ErasureCodingConfig struct {
|
|
ErasureCodingConfig *ErasureCodingTaskConfig `protobuf:"bytes,6,opt,name=erasure_coding_config,json=erasureCodingConfig,proto3,oneof"`
|
|
}
|
|
|
|
type TaskPolicy_BalanceConfig struct {
|
|
BalanceConfig *BalanceTaskConfig `protobuf:"bytes,7,opt,name=balance_config,json=balanceConfig,proto3,oneof"`
|
|
}
|
|
|
|
type TaskPolicy_ReplicationConfig struct {
|
|
ReplicationConfig *ReplicationTaskConfig `protobuf:"bytes,8,opt,name=replication_config,json=replicationConfig,proto3,oneof"`
|
|
}
|
|
|
|
type TaskPolicy_EcBalanceConfig struct {
|
|
EcBalanceConfig *EcBalanceTaskConfig `protobuf:"bytes,9,opt,name=ec_balance_config,json=ecBalanceConfig,proto3,oneof"`
|
|
}
|
|
|
|
func (*TaskPolicy_VacuumConfig) isTaskPolicy_TaskConfig() {}
|
|
|
|
func (*TaskPolicy_ErasureCodingConfig) isTaskPolicy_TaskConfig() {}
|
|
|
|
func (*TaskPolicy_BalanceConfig) isTaskPolicy_TaskConfig() {}
|
|
|
|
func (*TaskPolicy_ReplicationConfig) isTaskPolicy_TaskConfig() {}
|
|
|
|
func (*TaskPolicy_EcBalanceConfig) isTaskPolicy_TaskConfig() {}
|
|
|
|
// VacuumTaskConfig contains vacuum-specific configuration
|
|
type VacuumTaskConfig struct {
|
|
state protoimpl.MessageState `protogen:"open.v1"`
|
|
GarbageThreshold float64 `protobuf:"fixed64,1,opt,name=garbage_threshold,json=garbageThreshold,proto3" json:"garbage_threshold,omitempty"` // Minimum garbage ratio to trigger vacuum (0.0-1.0)
|
|
MinVolumeAgeHours int32 `protobuf:"varint,2,opt,name=min_volume_age_hours,json=minVolumeAgeHours,proto3" json:"min_volume_age_hours,omitempty"` // Minimum age before vacuum is considered
|
|
MinIntervalSeconds int32 `protobuf:"varint,3,opt,name=min_interval_seconds,json=minIntervalSeconds,proto3" json:"min_interval_seconds,omitempty"` // Minimum time between vacuum operations on the same volume
|
|
unknownFields protoimpl.UnknownFields
|
|
sizeCache protoimpl.SizeCache
|
|
}
|
|
|
|
func (x *VacuumTaskConfig) Reset() {
|
|
*x = VacuumTaskConfig{}
|
|
mi := &file_worker_proto_msgTypes[28]
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
|
|
func (x *VacuumTaskConfig) String() string {
|
|
return protoimpl.X.MessageStringOf(x)
|
|
}
|
|
|
|
func (*VacuumTaskConfig) ProtoMessage() {}
|
|
|
|
func (x *VacuumTaskConfig) ProtoReflect() protoreflect.Message {
|
|
mi := &file_worker_proto_msgTypes[28]
|
|
if x != nil {
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
if ms.LoadMessageInfo() == nil {
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
return ms
|
|
}
|
|
return mi.MessageOf(x)
|
|
}
|
|
|
|
// Deprecated: Use VacuumTaskConfig.ProtoReflect.Descriptor instead.
|
|
func (*VacuumTaskConfig) Descriptor() ([]byte, []int) {
|
|
return file_worker_proto_rawDescGZIP(), []int{28}
|
|
}
|
|
|
|
func (x *VacuumTaskConfig) GetGarbageThreshold() float64 {
|
|
if x != nil {
|
|
return x.GarbageThreshold
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *VacuumTaskConfig) GetMinVolumeAgeHours() int32 {
|
|
if x != nil {
|
|
return x.MinVolumeAgeHours
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *VacuumTaskConfig) GetMinIntervalSeconds() int32 {
|
|
if x != nil {
|
|
return x.MinIntervalSeconds
|
|
}
|
|
return 0
|
|
}
|
|
|
|
// ErasureCodingTaskConfig contains EC-specific configuration
|
|
type ErasureCodingTaskConfig struct {
|
|
state protoimpl.MessageState `protogen:"open.v1"`
|
|
FullnessRatio float64 `protobuf:"fixed64,1,opt,name=fullness_ratio,json=fullnessRatio,proto3" json:"fullness_ratio,omitempty"` // Minimum fullness ratio to trigger EC (0.0-1.0)
|
|
QuietForSeconds int32 `protobuf:"varint,2,opt,name=quiet_for_seconds,json=quietForSeconds,proto3" json:"quiet_for_seconds,omitempty"` // Minimum quiet time before EC
|
|
MinVolumeSizeMb int32 `protobuf:"varint,3,opt,name=min_volume_size_mb,json=minVolumeSizeMb,proto3" json:"min_volume_size_mb,omitempty"` // Minimum volume size for EC
|
|
CollectionFilter string `protobuf:"bytes,4,opt,name=collection_filter,json=collectionFilter,proto3" json:"collection_filter,omitempty"` // Only process volumes from specific collections
|
|
PreferredTags []string `protobuf:"bytes,5,rep,name=preferred_tags,json=preferredTags,proto3" json:"preferred_tags,omitempty"` // Disk tags to prioritize for EC shard placement
|
|
unknownFields protoimpl.UnknownFields
|
|
sizeCache protoimpl.SizeCache
|
|
}
|
|
|
|
func (x *ErasureCodingTaskConfig) Reset() {
|
|
*x = ErasureCodingTaskConfig{}
|
|
mi := &file_worker_proto_msgTypes[29]
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
|
|
func (x *ErasureCodingTaskConfig) String() string {
|
|
return protoimpl.X.MessageStringOf(x)
|
|
}
|
|
|
|
func (*ErasureCodingTaskConfig) ProtoMessage() {}
|
|
|
|
func (x *ErasureCodingTaskConfig) ProtoReflect() protoreflect.Message {
|
|
mi := &file_worker_proto_msgTypes[29]
|
|
if x != nil {
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
if ms.LoadMessageInfo() == nil {
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
return ms
|
|
}
|
|
return mi.MessageOf(x)
|
|
}
|
|
|
|
// Deprecated: Use ErasureCodingTaskConfig.ProtoReflect.Descriptor instead.
|
|
func (*ErasureCodingTaskConfig) Descriptor() ([]byte, []int) {
|
|
return file_worker_proto_rawDescGZIP(), []int{29}
|
|
}
|
|
|
|
func (x *ErasureCodingTaskConfig) GetFullnessRatio() float64 {
|
|
if x != nil {
|
|
return x.FullnessRatio
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *ErasureCodingTaskConfig) GetQuietForSeconds() int32 {
|
|
if x != nil {
|
|
return x.QuietForSeconds
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *ErasureCodingTaskConfig) GetMinVolumeSizeMb() int32 {
|
|
if x != nil {
|
|
return x.MinVolumeSizeMb
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *ErasureCodingTaskConfig) GetCollectionFilter() string {
|
|
if x != nil {
|
|
return x.CollectionFilter
|
|
}
|
|
return ""
|
|
}
|
|
|
|
func (x *ErasureCodingTaskConfig) GetPreferredTags() []string {
|
|
if x != nil {
|
|
return x.PreferredTags
|
|
}
|
|
return nil
|
|
}
|
|
|
|
// BalanceTaskConfig contains balance-specific configuration
|
|
type BalanceTaskConfig struct {
|
|
state protoimpl.MessageState `protogen:"open.v1"`
|
|
ImbalanceThreshold float64 `protobuf:"fixed64,1,opt,name=imbalance_threshold,json=imbalanceThreshold,proto3" json:"imbalance_threshold,omitempty"` // Threshold for triggering rebalancing (0.0-1.0)
|
|
MinServerCount int32 `protobuf:"varint,2,opt,name=min_server_count,json=minServerCount,proto3" json:"min_server_count,omitempty"` // Minimum number of servers required for balancing
|
|
unknownFields protoimpl.UnknownFields
|
|
sizeCache protoimpl.SizeCache
|
|
}
|
|
|
|
func (x *BalanceTaskConfig) Reset() {
|
|
*x = BalanceTaskConfig{}
|
|
mi := &file_worker_proto_msgTypes[30]
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
|
|
func (x *BalanceTaskConfig) String() string {
|
|
return protoimpl.X.MessageStringOf(x)
|
|
}
|
|
|
|
func (*BalanceTaskConfig) ProtoMessage() {}
|
|
|
|
func (x *BalanceTaskConfig) ProtoReflect() protoreflect.Message {
|
|
mi := &file_worker_proto_msgTypes[30]
|
|
if x != nil {
|
|
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
|
|
if ms.LoadMessageInfo() == nil {
|
|
ms.StoreMessageInfo(mi)
|
|
}
|
|
return ms
|
|
}
|
|
return mi.MessageOf(x)
|
|
}
|
|
|
|
// Deprecated: Use BalanceTaskConfig.ProtoReflect.Descriptor instead.
|
|
func (*BalanceTaskConfig) Descriptor() ([]byte, []int) {
|
|
return file_worker_proto_rawDescGZIP(), []int{30}
|
|
}
|
|
|
|
func (x *BalanceTaskConfig) GetImbalanceThreshold() float64 {
|
|
if x != nil {
|
|
return x.ImbalanceThreshold
|
|
}
|
|
return 0
|
|
}
|
|
|
|
func (x *BalanceTaskConfig) GetMinServerCount() int32 {
|
|
if x != nil {
|
|
return x.MinServerCount
|
|
}
|
|
return 0
|
|
}
|
|
|
|
// ReplicationTaskConfig contains replication-specific configuration
type ReplicationTaskConfig struct {
	state              protoimpl.MessageState `protogen:"open.v1"`
	TargetReplicaCount int32                  `protobuf:"varint,1,opt,name=target_replica_count,json=targetReplicaCount,proto3" json:"target_replica_count,omitempty"` // Target number of replicas
	unknownFields      protoimpl.UnknownFields
	sizeCache          protoimpl.SizeCache
}

func (x *ReplicationTaskConfig) Reset() {
	*x = ReplicationTaskConfig{}
	mi := &file_worker_proto_msgTypes[31]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *ReplicationTaskConfig) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*ReplicationTaskConfig) ProtoMessage() {}

func (x *ReplicationTaskConfig) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[31]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use ReplicationTaskConfig.ProtoReflect.Descriptor instead.
func (*ReplicationTaskConfig) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{31}
}

func (x *ReplicationTaskConfig) GetTargetReplicaCount() int32 {
	if x != nil {
		return x.TargetReplicaCount
	}
	return 0
}

// EcBalanceTaskParams for EC shard balancing operations
type EcBalanceTaskParams struct {
	state              protoimpl.MessageState `protogen:"open.v1"`
	DiskType           string                 `protobuf:"bytes,1,opt,name=disk_type,json=diskType,proto3" json:"disk_type,omitempty"`                              // Disk type filter (hdd, ssd, "")
	MaxParallelization int32                  `protobuf:"varint,2,opt,name=max_parallelization,json=maxParallelization,proto3" json:"max_parallelization,omitempty"` // Max parallel shard moves within a batch
	TimeoutSeconds     int32                  `protobuf:"varint,3,opt,name=timeout_seconds,json=timeoutSeconds,proto3" json:"timeout_seconds,omitempty"`             // Operation timeout per move
	Moves              []*EcShardMoveSpec     `protobuf:"bytes,4,rep,name=moves,proto3" json:"moves,omitempty"`                                                      // Batch: multiple shard moves in one job
	unknownFields      protoimpl.UnknownFields
	sizeCache          protoimpl.SizeCache
}

func (x *EcBalanceTaskParams) Reset() {
	*x = EcBalanceTaskParams{}
	mi := &file_worker_proto_msgTypes[32]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *EcBalanceTaskParams) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*EcBalanceTaskParams) ProtoMessage() {}

func (x *EcBalanceTaskParams) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[32]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use EcBalanceTaskParams.ProtoReflect.Descriptor instead.
func (*EcBalanceTaskParams) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{32}
}

func (x *EcBalanceTaskParams) GetDiskType() string {
	if x != nil {
		return x.DiskType
	}
	return ""
}

func (x *EcBalanceTaskParams) GetMaxParallelization() int32 {
	if x != nil {
		return x.MaxParallelization
	}
	return 0
}

func (x *EcBalanceTaskParams) GetTimeoutSeconds() int32 {
	if x != nil {
		return x.TimeoutSeconds
	}
	return 0
}

func (x *EcBalanceTaskParams) GetMoves() []*EcShardMoveSpec {
	if x != nil {
		return x.Moves
	}
	return nil
}

// EcShardMoveSpec describes a single EC shard move within a batch
type EcShardMoveSpec struct {
	state         protoimpl.MessageState `protogen:"open.v1"`
	VolumeId      uint32                 `protobuf:"varint,1,opt,name=volume_id,json=volumeId,proto3" json:"volume_id,omitempty"`             // EC volume ID
	ShardId       uint32                 `protobuf:"varint,2,opt,name=shard_id,json=shardId,proto3" json:"shard_id,omitempty"`                // Shard ID (0-13)
	Collection    string                 `protobuf:"bytes,3,opt,name=collection,proto3" json:"collection,omitempty"`                          // Collection name
	SourceNode    string                 `protobuf:"bytes,4,opt,name=source_node,json=sourceNode,proto3" json:"source_node,omitempty"`        // Source server address
	SourceDiskId  uint32                 `protobuf:"varint,5,opt,name=source_disk_id,json=sourceDiskId,proto3" json:"source_disk_id,omitempty"` // Source disk ID
	TargetNode    string                 `protobuf:"bytes,6,opt,name=target_node,json=targetNode,proto3" json:"target_node,omitempty"`        // Target server address
	TargetDiskId  uint32                 `protobuf:"varint,7,opt,name=target_disk_id,json=targetDiskId,proto3" json:"target_disk_id,omitempty"` // Target disk ID
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}

func (x *EcShardMoveSpec) Reset() {
	*x = EcShardMoveSpec{}
	mi := &file_worker_proto_msgTypes[33]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *EcShardMoveSpec) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*EcShardMoveSpec) ProtoMessage() {}

func (x *EcShardMoveSpec) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[33]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use EcShardMoveSpec.ProtoReflect.Descriptor instead.
func (*EcShardMoveSpec) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{33}
}

func (x *EcShardMoveSpec) GetVolumeId() uint32 {
	if x != nil {
		return x.VolumeId
	}
	return 0
}

func (x *EcShardMoveSpec) GetShardId() uint32 {
	if x != nil {
		return x.ShardId
	}
	return 0
}

func (x *EcShardMoveSpec) GetCollection() string {
	if x != nil {
		return x.Collection
	}
	return ""
}

func (x *EcShardMoveSpec) GetSourceNode() string {
	if x != nil {
		return x.SourceNode
	}
	return ""
}

func (x *EcShardMoveSpec) GetSourceDiskId() uint32 {
	if x != nil {
		return x.SourceDiskId
	}
	return 0
}

func (x *EcShardMoveSpec) GetTargetNode() string {
	if x != nil {
		return x.TargetNode
	}
	return ""
}

func (x *EcShardMoveSpec) GetTargetDiskId() uint32 {
	if x != nil {
		return x.TargetDiskId
	}
	return 0
}

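`EcShardMoveSpec` carries one shard move; the detection phases batch several of these into one `EcBalanceTaskParams` job. A self-contained sketch of the kind of pre-execution sanity check a worker might run over such a batch, using a hypothetical local `ecShardMoveSpec` stand-in and a hypothetical `validateMoves` helper (neither is part of the generated code):

```go
package main

import "fmt"

// ecShardMoveSpec is a local stand-in for the generated EcShardMoveSpec,
// carrying only the fields used below (hypothetical illustration).
type ecShardMoveSpec struct {
	VolumeId   uint32
	ShardId    uint32 // valid EC shard IDs are 0-13 per the field comment
	SourceNode string
	TargetNode string
}

// validateMoves sketches a batch sanity check: shard IDs in range and
// source distinct from target. Not the actual ec_balance validation code.
func validateMoves(moves []*ecShardMoveSpec) error {
	for i, m := range moves {
		if m.ShardId > 13 {
			return fmt.Errorf("move %d: shard id %d out of range [0,13]", i, m.ShardId)
		}
		if m.SourceNode == m.TargetNode {
			return fmt.Errorf("move %d: source and target are the same node", i)
		}
	}
	return nil
}

func main() {
	batch := []*ecShardMoveSpec{
		{VolumeId: 42, ShardId: 3, SourceNode: "10.0.0.1:8080", TargetNode: "10.0.0.2:8080"},
		{VolumeId: 42, ShardId: 9, SourceNode: "10.0.0.1:8080", TargetNode: "10.0.0.3:8080"},
	}
	fmt.Println(validateMoves(batch)) // nil for a well-formed batch
}
```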
// EcBalanceTaskConfig contains EC balance-specific configuration
type EcBalanceTaskConfig struct {
	state              protoimpl.MessageState `protogen:"open.v1"`
	ImbalanceThreshold float64                `protobuf:"fixed64,1,opt,name=imbalance_threshold,json=imbalanceThreshold,proto3" json:"imbalance_threshold,omitempty"` // Threshold for triggering EC shard rebalancing
	MinServerCount     int32                  `protobuf:"varint,2,opt,name=min_server_count,json=minServerCount,proto3" json:"min_server_count,omitempty"`            // Minimum number of servers required
	CollectionFilter   string                 `protobuf:"bytes,3,opt,name=collection_filter,json=collectionFilter,proto3" json:"collection_filter,omitempty"`         // Collection filter
	DiskType           string                 `protobuf:"bytes,4,opt,name=disk_type,json=diskType,proto3" json:"disk_type,omitempty"`                                 // Disk type filter
	PreferredTags      []string               `protobuf:"bytes,5,rep,name=preferred_tags,json=preferredTags,proto3" json:"preferred_tags,omitempty"`                  // Preferred disk tags for placement
	unknownFields      protoimpl.UnknownFields
	sizeCache          protoimpl.SizeCache
}

func (x *EcBalanceTaskConfig) Reset() {
	*x = EcBalanceTaskConfig{}
	mi := &file_worker_proto_msgTypes[34]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *EcBalanceTaskConfig) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*EcBalanceTaskConfig) ProtoMessage() {}

func (x *EcBalanceTaskConfig) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[34]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use EcBalanceTaskConfig.ProtoReflect.Descriptor instead.
func (*EcBalanceTaskConfig) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{34}
}

func (x *EcBalanceTaskConfig) GetImbalanceThreshold() float64 {
	if x != nil {
		return x.ImbalanceThreshold
	}
	return 0
}

func (x *EcBalanceTaskConfig) GetMinServerCount() int32 {
	if x != nil {
		return x.MinServerCount
	}
	return 0
}

func (x *EcBalanceTaskConfig) GetCollectionFilter() string {
	if x != nil {
		return x.CollectionFilter
	}
	return ""
}

func (x *EcBalanceTaskConfig) GetDiskType() string {
	if x != nil {
		return x.DiskType
	}
	return ""
}

func (x *EcBalanceTaskConfig) GetPreferredTags() []string {
	if x != nil {
		return x.PreferredTags
	}
	return nil
}

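The change log notes that config derivation clamps `imbalance_threshold` against extracted constants (`ecBalanceMinImbalanceThreshold` and friends) to eliminate magic numbers. A minimal sketch of that clamping, with hypothetical bound values (the actual constants live in the ec_balance package, not in this generated file):

```go
package main

import "fmt"

// Hypothetical bounds standing in for the ecBalanceMinImbalanceThreshold-style
// constants; the real values are defined alongside the task handler.
const (
	minImbalanceThreshold = 0.01
	maxImbalanceThreshold = 1.0
)

// clampThreshold sketches config derivation: an out-of-range
// imbalance_threshold from EcBalanceTaskConfig is pulled back into bounds.
func clampThreshold(v float64) float64 {
	if v < minImbalanceThreshold {
		return minImbalanceThreshold
	}
	if v > maxImbalanceThreshold {
		return maxImbalanceThreshold
	}
	return v
}

func main() {
	for _, v := range []float64{-0.5, 0.1, 2.0} {
		fmt.Println(clampThreshold(v)) // below-min, in-range, above-max
	}
}
```

Clamping at derivation time means a bad admin-form value degrades to a sane default instead of disabling detection or flooding the queue with trivial moves.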
// MaintenanceTaskData represents complete task state for persistence
type MaintenanceTaskData struct {
	state       protoimpl.MessageState `protogen:"open.v1"`
	Id          string                 `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
	Type        string                 `protobuf:"bytes,2,opt,name=type,proto3" json:"type,omitempty"`
	Priority    string                 `protobuf:"bytes,3,opt,name=priority,proto3" json:"priority,omitempty"`
	Status      string                 `protobuf:"bytes,4,opt,name=status,proto3" json:"status,omitempty"`
	VolumeId    uint32                 `protobuf:"varint,5,opt,name=volume_id,json=volumeId,proto3" json:"volume_id,omitempty"`
	Server      string                 `protobuf:"bytes,6,opt,name=server,proto3" json:"server,omitempty"`
	Collection  string                 `protobuf:"bytes,7,opt,name=collection,proto3" json:"collection,omitempty"`
	TypedParams *TaskParams            `protobuf:"bytes,8,opt,name=typed_params,json=typedParams,proto3" json:"typed_params,omitempty"`
	Reason      string                 `protobuf:"bytes,9,opt,name=reason,proto3" json:"reason,omitempty"`
	CreatedAt   int64                  `protobuf:"varint,10,opt,name=created_at,json=createdAt,proto3" json:"created_at,omitempty"`
	ScheduledAt int64                  `protobuf:"varint,11,opt,name=scheduled_at,json=scheduledAt,proto3" json:"scheduled_at,omitempty"`
	StartedAt   int64                  `protobuf:"varint,12,opt,name=started_at,json=startedAt,proto3" json:"started_at,omitempty"`
	CompletedAt int64                  `protobuf:"varint,13,opt,name=completed_at,json=completedAt,proto3" json:"completed_at,omitempty"`
	WorkerId    string                 `protobuf:"bytes,14,opt,name=worker_id,json=workerId,proto3" json:"worker_id,omitempty"`
	Error       string                 `protobuf:"bytes,15,opt,name=error,proto3" json:"error,omitempty"`
	Progress    float64                `protobuf:"fixed64,16,opt,name=progress,proto3" json:"progress,omitempty"`
	RetryCount  int32                  `protobuf:"varint,17,opt,name=retry_count,json=retryCount,proto3" json:"retry_count,omitempty"`
	MaxRetries  int32                  `protobuf:"varint,18,opt,name=max_retries,json=maxRetries,proto3" json:"max_retries,omitempty"`
	// Enhanced fields for detailed task tracking
	CreatedBy         string                  `protobuf:"bytes,19,opt,name=created_by,json=createdBy,proto3" json:"created_by,omitempty"`
	CreationContext   string                  `protobuf:"bytes,20,opt,name=creation_context,json=creationContext,proto3" json:"creation_context,omitempty"`
	AssignmentHistory []*TaskAssignmentRecord `protobuf:"bytes,21,rep,name=assignment_history,json=assignmentHistory,proto3" json:"assignment_history,omitempty"`
	DetailedReason    string                  `protobuf:"bytes,22,opt,name=detailed_reason,json=detailedReason,proto3" json:"detailed_reason,omitempty"`
	Tags              map[string]string       `protobuf:"bytes,23,rep,name=tags,proto3" json:"tags,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
	CreationMetrics   *TaskCreationMetrics    `protobuf:"bytes,24,opt,name=creation_metrics,json=creationMetrics,proto3" json:"creation_metrics,omitempty"`
	unknownFields     protoimpl.UnknownFields
	sizeCache         protoimpl.SizeCache
}

func (x *MaintenanceTaskData) Reset() {
	*x = MaintenanceTaskData{}
	mi := &file_worker_proto_msgTypes[35]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *MaintenanceTaskData) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*MaintenanceTaskData) ProtoMessage() {}

func (x *MaintenanceTaskData) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[35]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use MaintenanceTaskData.ProtoReflect.Descriptor instead.
func (*MaintenanceTaskData) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{35}
}

func (x *MaintenanceTaskData) GetId() string {
	if x != nil {
		return x.Id
	}
	return ""
}

func (x *MaintenanceTaskData) GetType() string {
	if x != nil {
		return x.Type
	}
	return ""
}

func (x *MaintenanceTaskData) GetPriority() string {
	if x != nil {
		return x.Priority
	}
	return ""
}

func (x *MaintenanceTaskData) GetStatus() string {
	if x != nil {
		return x.Status
	}
	return ""
}

func (x *MaintenanceTaskData) GetVolumeId() uint32 {
	if x != nil {
		return x.VolumeId
	}
	return 0
}

func (x *MaintenanceTaskData) GetServer() string {
	if x != nil {
		return x.Server
	}
	return ""
}

func (x *MaintenanceTaskData) GetCollection() string {
	if x != nil {
		return x.Collection
	}
	return ""
}

func (x *MaintenanceTaskData) GetTypedParams() *TaskParams {
	if x != nil {
		return x.TypedParams
	}
	return nil
}

func (x *MaintenanceTaskData) GetReason() string {
	if x != nil {
		return x.Reason
	}
	return ""
}

func (x *MaintenanceTaskData) GetCreatedAt() int64 {
	if x != nil {
		return x.CreatedAt
	}
	return 0
}

func (x *MaintenanceTaskData) GetScheduledAt() int64 {
	if x != nil {
		return x.ScheduledAt
	}
	return 0
}

func (x *MaintenanceTaskData) GetStartedAt() int64 {
	if x != nil {
		return x.StartedAt
	}
	return 0
}

func (x *MaintenanceTaskData) GetCompletedAt() int64 {
	if x != nil {
		return x.CompletedAt
	}
	return 0
}

func (x *MaintenanceTaskData) GetWorkerId() string {
	if x != nil {
		return x.WorkerId
	}
	return ""
}

func (x *MaintenanceTaskData) GetError() string {
	if x != nil {
		return x.Error
	}
	return ""
}

func (x *MaintenanceTaskData) GetProgress() float64 {
	if x != nil {
		return x.Progress
	}
	return 0
}

func (x *MaintenanceTaskData) GetRetryCount() int32 {
	if x != nil {
		return x.RetryCount
	}
	return 0
}

func (x *MaintenanceTaskData) GetMaxRetries() int32 {
	if x != nil {
		return x.MaxRetries
	}
	return 0
}

func (x *MaintenanceTaskData) GetCreatedBy() string {
	if x != nil {
		return x.CreatedBy
	}
	return ""
}

func (x *MaintenanceTaskData) GetCreationContext() string {
	if x != nil {
		return x.CreationContext
	}
	return ""
}

func (x *MaintenanceTaskData) GetAssignmentHistory() []*TaskAssignmentRecord {
	if x != nil {
		return x.AssignmentHistory
	}
	return nil
}

func (x *MaintenanceTaskData) GetDetailedReason() string {
	if x != nil {
		return x.DetailedReason
	}
	return ""
}

func (x *MaintenanceTaskData) GetTags() map[string]string {
	if x != nil {
		return x.Tags
	}
	return nil
}

func (x *MaintenanceTaskData) GetCreationMetrics() *TaskCreationMetrics {
	if x != nil {
		return x.CreationMetrics
	}
	return nil
}

// TaskAssignmentRecord tracks worker assignments for a task
type TaskAssignmentRecord struct {
	state         protoimpl.MessageState `protogen:"open.v1"`
	WorkerId      string                 `protobuf:"bytes,1,opt,name=worker_id,json=workerId,proto3" json:"worker_id,omitempty"`
	WorkerAddress string                 `protobuf:"bytes,2,opt,name=worker_address,json=workerAddress,proto3" json:"worker_address,omitempty"`
	AssignedAt    int64                  `protobuf:"varint,3,opt,name=assigned_at,json=assignedAt,proto3" json:"assigned_at,omitempty"`
	UnassignedAt  int64                  `protobuf:"varint,4,opt,name=unassigned_at,json=unassignedAt,proto3" json:"unassigned_at,omitempty"` // Optional: when worker was unassigned
	Reason        string                 `protobuf:"bytes,5,opt,name=reason,proto3" json:"reason,omitempty"`                                  // Reason for assignment/unassignment
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}

func (x *TaskAssignmentRecord) Reset() {
	*x = TaskAssignmentRecord{}
	mi := &file_worker_proto_msgTypes[36]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *TaskAssignmentRecord) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*TaskAssignmentRecord) ProtoMessage() {}

func (x *TaskAssignmentRecord) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[36]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use TaskAssignmentRecord.ProtoReflect.Descriptor instead.
func (*TaskAssignmentRecord) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{36}
}

func (x *TaskAssignmentRecord) GetWorkerId() string {
	if x != nil {
		return x.WorkerId
	}
	return ""
}

func (x *TaskAssignmentRecord) GetWorkerAddress() string {
	if x != nil {
		return x.WorkerAddress
	}
	return ""
}

func (x *TaskAssignmentRecord) GetAssignedAt() int64 {
	if x != nil {
		return x.AssignedAt
	}
	return 0
}

func (x *TaskAssignmentRecord) GetUnassignedAt() int64 {
	if x != nil {
		return x.UnassignedAt
	}
	return 0
}

func (x *TaskAssignmentRecord) GetReason() string {
	if x != nil {
		return x.Reason
	}
	return ""
}

// TaskCreationMetrics tracks why and how a task was created
type TaskCreationMetrics struct {
	state          protoimpl.MessageState `protogen:"open.v1"`
	TriggerMetric  string                 `protobuf:"bytes,1,opt,name=trigger_metric,json=triggerMetric,proto3" json:"trigger_metric,omitempty"`  // Name of metric that triggered creation
	MetricValue    float64                `protobuf:"fixed64,2,opt,name=metric_value,json=metricValue,proto3" json:"metric_value,omitempty"`      // Value that triggered creation
	Threshold      float64                `protobuf:"fixed64,3,opt,name=threshold,proto3" json:"threshold,omitempty"`                             // Threshold that was exceeded
	VolumeMetrics  *VolumeHealthMetrics   `protobuf:"bytes,4,opt,name=volume_metrics,json=volumeMetrics,proto3" json:"volume_metrics,omitempty"` // Volume health at creation time
	AdditionalData map[string]string      `protobuf:"bytes,5,rep,name=additional_data,json=additionalData,proto3" json:"additional_data,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"` // Additional context data
	unknownFields  protoimpl.UnknownFields
	sizeCache      protoimpl.SizeCache
}

func (x *TaskCreationMetrics) Reset() {
	*x = TaskCreationMetrics{}
	mi := &file_worker_proto_msgTypes[37]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *TaskCreationMetrics) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*TaskCreationMetrics) ProtoMessage() {}

func (x *TaskCreationMetrics) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[37]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use TaskCreationMetrics.ProtoReflect.Descriptor instead.
func (*TaskCreationMetrics) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{37}
}

func (x *TaskCreationMetrics) GetTriggerMetric() string {
	if x != nil {
		return x.TriggerMetric
	}
	return ""
}

func (x *TaskCreationMetrics) GetMetricValue() float64 {
	if x != nil {
		return x.MetricValue
	}
	return 0
}

func (x *TaskCreationMetrics) GetThreshold() float64 {
	if x != nil {
		return x.Threshold
	}
	return 0
}

func (x *TaskCreationMetrics) GetVolumeMetrics() *VolumeHealthMetrics {
	if x != nil {
		return x.VolumeMetrics
	}
	return nil
}

func (x *TaskCreationMetrics) GetAdditionalData() map[string]string {
	if x != nil {
		return x.AdditionalData
	}
	return nil
}

// VolumeHealthMetrics captures volume state at task creation
type VolumeHealthMetrics struct {
	state            protoimpl.MessageState `protogen:"open.v1"`
	TotalSize        uint64                 `protobuf:"varint,1,opt,name=total_size,json=totalSize,proto3" json:"total_size,omitempty"`
	UsedSize         uint64                 `protobuf:"varint,2,opt,name=used_size,json=usedSize,proto3" json:"used_size,omitempty"`
	GarbageSize      uint64                 `protobuf:"varint,3,opt,name=garbage_size,json=garbageSize,proto3" json:"garbage_size,omitempty"`
	GarbageRatio     float64                `protobuf:"fixed64,4,opt,name=garbage_ratio,json=garbageRatio,proto3" json:"garbage_ratio,omitempty"`
	FileCount        int32                  `protobuf:"varint,5,opt,name=file_count,json=fileCount,proto3" json:"file_count,omitempty"`
	DeletedFileCount int32                  `protobuf:"varint,6,opt,name=deleted_file_count,json=deletedFileCount,proto3" json:"deleted_file_count,omitempty"`
	LastModified     int64                  `protobuf:"varint,7,opt,name=last_modified,json=lastModified,proto3" json:"last_modified,omitempty"`
	ReplicaCount     int32                  `protobuf:"varint,8,opt,name=replica_count,json=replicaCount,proto3" json:"replica_count,omitempty"`
	IsEcVolume       bool                   `protobuf:"varint,9,opt,name=is_ec_volume,json=isEcVolume,proto3" json:"is_ec_volume,omitempty"`
	Collection       string                 `protobuf:"bytes,10,opt,name=collection,proto3" json:"collection,omitempty"`
	unknownFields    protoimpl.UnknownFields
	sizeCache        protoimpl.SizeCache
}

func (x *VolumeHealthMetrics) Reset() {
	*x = VolumeHealthMetrics{}
	mi := &file_worker_proto_msgTypes[38]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *VolumeHealthMetrics) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*VolumeHealthMetrics) ProtoMessage() {}

func (x *VolumeHealthMetrics) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[38]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use VolumeHealthMetrics.ProtoReflect.Descriptor instead.
func (*VolumeHealthMetrics) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{38}
}

func (x *VolumeHealthMetrics) GetTotalSize() uint64 {
	if x != nil {
		return x.TotalSize
	}
	return 0
}

func (x *VolumeHealthMetrics) GetUsedSize() uint64 {
	if x != nil {
		return x.UsedSize
	}
	return 0
}

func (x *VolumeHealthMetrics) GetGarbageSize() uint64 {
	if x != nil {
		return x.GarbageSize
	}
	return 0
}

func (x *VolumeHealthMetrics) GetGarbageRatio() float64 {
	if x != nil {
		return x.GarbageRatio
	}
	return 0
}

func (x *VolumeHealthMetrics) GetFileCount() int32 {
	if x != nil {
		return x.FileCount
	}
	return 0
}

func (x *VolumeHealthMetrics) GetDeletedFileCount() int32 {
	if x != nil {
		return x.DeletedFileCount
	}
	return 0
}

func (x *VolumeHealthMetrics) GetLastModified() int64 {
	if x != nil {
		return x.LastModified
	}
	return 0
}

func (x *VolumeHealthMetrics) GetReplicaCount() int32 {
	if x != nil {
		return x.ReplicaCount
	}
	return 0
}

func (x *VolumeHealthMetrics) GetIsEcVolume() bool {
	if x != nil {
		return x.IsEcVolume
	}
	return false
}

func (x *VolumeHealthMetrics) GetCollection() string {
	if x != nil {
		return x.Collection
	}
	return ""
}

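`VolumeHealthMetrics` carries both `garbage_size` and a precomputed `garbage_ratio`. A minimal sketch of how that ratio is presumably derived from the two size fields (an assumption on the producer side; this generated code only transports the values):

```go
package main

import "fmt"

// garbageRatio sketches the assumed derivation of the garbage_ratio field
// from garbage_size and total_size. Hypothetical helper, not SeaweedFS code.
func garbageRatio(garbageSize, totalSize uint64) float64 {
	if totalSize == 0 {
		return 0 // avoid division by zero on empty volumes
	}
	return float64(garbageSize) / float64(totalSize)
}

func main() {
	fmt.Println(garbageRatio(250, 1000)) // 0.25
	fmt.Println(garbageRatio(0, 0))      // empty volume: 0
}
```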
// TaskStateFile wraps task data with metadata for persistence
type TaskStateFile struct {
	state         protoimpl.MessageState `protogen:"open.v1"`
	Task          *MaintenanceTaskData   `protobuf:"bytes,1,opt,name=task,proto3" json:"task,omitempty"`
	LastUpdated   int64                  `protobuf:"varint,2,opt,name=last_updated,json=lastUpdated,proto3" json:"last_updated,omitempty"`
	AdminVersion  string                 `protobuf:"bytes,3,opt,name=admin_version,json=adminVersion,proto3" json:"admin_version,omitempty"`
	unknownFields protoimpl.UnknownFields
	sizeCache     protoimpl.SizeCache
}

func (x *TaskStateFile) Reset() {
	*x = TaskStateFile{}
	mi := &file_worker_proto_msgTypes[39]
	ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
	ms.StoreMessageInfo(mi)
}

func (x *TaskStateFile) String() string {
	return protoimpl.X.MessageStringOf(x)
}

func (*TaskStateFile) ProtoMessage() {}

func (x *TaskStateFile) ProtoReflect() protoreflect.Message {
	mi := &file_worker_proto_msgTypes[39]
	if x != nil {
		ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
		if ms.LoadMessageInfo() == nil {
			ms.StoreMessageInfo(mi)
		}
		return ms
	}
	return mi.MessageOf(x)
}

// Deprecated: Use TaskStateFile.ProtoReflect.Descriptor instead.
func (*TaskStateFile) Descriptor() ([]byte, []int) {
	return file_worker_proto_rawDescGZIP(), []int{39}
}

func (x *TaskStateFile) GetTask() *MaintenanceTaskData {
	if x != nil {
		return x.Task
	}
	return nil
}

func (x *TaskStateFile) GetLastUpdated() int64 {
	if x != nil {
		return x.LastUpdated
	}
	return 0
}

func (x *TaskStateFile) GetAdminVersion() string {
	if x != nil {
		return x.AdminVersion
	}
	return ""
}

var File_worker_proto protoreflect.FileDescriptor
|
|
|
|
const file_worker_proto_rawDesc = "" +
|
|
"\n" +
|
|
"\fworker.proto\x12\tworker_pb\"\x90\x04\n" +
|
|
"\rWorkerMessage\x12\x1b\n" +
|
|
"\tworker_id\x18\x01 \x01(\tR\bworkerId\x12\x1c\n" +
|
|
"\ttimestamp\x18\x02 \x01(\x03R\ttimestamp\x12C\n" +
|
|
"\fregistration\x18\x03 \x01(\v2\x1d.worker_pb.WorkerRegistrationH\x00R\fregistration\x12:\n" +
|
|
"\theartbeat\x18\x04 \x01(\v2\x1a.worker_pb.WorkerHeartbeatH\x00R\theartbeat\x12;\n" +
|
|
"\ftask_request\x18\x05 \x01(\v2\x16.worker_pb.TaskRequestH\x00R\vtaskRequest\x128\n" +
|
|
"\vtask_update\x18\x06 \x01(\v2\x15.worker_pb.TaskUpdateH\x00R\n" +
|
|
"taskUpdate\x12>\n" +
|
|
"\rtask_complete\x18\a \x01(\v2\x17.worker_pb.TaskCompleteH\x00R\ftaskComplete\x127\n" +
|
|
"\bshutdown\x18\b \x01(\v2\x19.worker_pb.WorkerShutdownH\x00R\bshutdown\x12H\n" +
|
|
"\x11task_log_response\x18\t \x01(\v2\x1a.worker_pb.TaskLogResponseH\x00R\x0ftaskLogResponseB\t\n" +
|
|
"\amessage\"\x95\x04\n" +
|
|
"\fAdminMessage\x12\x19\n" +
|
|
"\badmin_id\x18\x01 \x01(\tR\aadminId\x12\x1c\n" +
|
|
"\ttimestamp\x18\x02 \x01(\x03R\ttimestamp\x12V\n" +
|
|
"\x15registration_response\x18\x03 \x01(\v2\x1f.worker_pb.RegistrationResponseH\x00R\x14registrationResponse\x12M\n" +
|
|
"\x12heartbeat_response\x18\x04 \x01(\v2\x1c.worker_pb.HeartbeatResponseH\x00R\x11heartbeatResponse\x12D\n" +
|
|
"\x0ftask_assignment\x18\x05 \x01(\v2\x19.worker_pb.TaskAssignmentH\x00R\x0etaskAssignment\x12J\n" +
|
|
"\x11task_cancellation\x18\x06 \x01(\v2\x1b.worker_pb.TaskCancellationH\x00R\x10taskCancellation\x12A\n" +
|
|
"\x0eadmin_shutdown\x18\a \x01(\v2\x18.worker_pb.AdminShutdownH\x00R\radminShutdown\x12E\n" +
|
|
"\x10task_log_request\x18\b \x01(\v2\x19.worker_pb.TaskLogRequestH\x00R\x0etaskLogRequestB\t\n" +
|
|
"\amessage\"\x9c\x02\n" +
|
|
"\x12WorkerRegistration\x12\x1b\n" +
|
|
"\tworker_id\x18\x01 \x01(\tR\bworkerId\x12\x18\n" +
|
|
"\aaddress\x18\x02 \x01(\tR\aaddress\x12\"\n" +
|
|
"\fcapabilities\x18\x03 \x03(\tR\fcapabilities\x12%\n" +
|
|
"\x0emax_concurrent\x18\x04 \x01(\x05R\rmaxConcurrent\x12G\n" +
|
|
"\bmetadata\x18\x05 \x03(\v2+.worker_pb.WorkerRegistration.MetadataEntryR\bmetadata\x1a;\n" +
|
|
"\rMetadataEntry\x12\x10\n" +
|
|
"\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" +
|
|
"\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"x\n" +
|
|
"\x14RegistrationResponse\x12\x18\n" +
|
|
"\asuccess\x18\x01 \x01(\bR\asuccess\x12\x18\n" +
|
|
"\amessage\x18\x02 \x01(\tR\amessage\x12,\n" +
|
|
"\x12assigned_worker_id\x18\x03 \x01(\tR\x10assignedWorkerId\"\xad\x02\n" +
|
|
"\x0fWorkerHeartbeat\x12\x1b\n" +
|
|
"\tworker_id\x18\x01 \x01(\tR\bworkerId\x12\x16\n" +
|
|
"\x06status\x18\x02 \x01(\tR\x06status\x12!\n" +
|
|
"\fcurrent_load\x18\x03 \x01(\x05R\vcurrentLoad\x12%\n" +
|
|
"\x0emax_concurrent\x18\x04 \x01(\x05R\rmaxConcurrent\x12(\n" +
|
|
"\x10current_task_ids\x18\x05 \x03(\tR\x0ecurrentTaskIds\x12'\n" +
|
|
"\x0ftasks_completed\x18\x06 \x01(\x05R\x0etasksCompleted\x12!\n" +
|
|
"\ftasks_failed\x18\a \x01(\x05R\vtasksFailed\x12%\n" +
|
|
"\x0euptime_seconds\x18\b \x01(\x03R\ruptimeSeconds\"G\n" +
|
|
"\x11HeartbeatResponse\x12\x18\n" +
|
|
"\asuccess\x18\x01 \x01(\bR\asuccess\x12\x18\n" +
|
|
"\amessage\x18\x02 \x01(\tR\amessage\"w\n" +
|
|
"\vTaskRequest\x12\x1b\n" +
|
|
"\tworker_id\x18\x01 \x01(\tR\bworkerId\x12\"\n" +
|
|
"\fcapabilities\x18\x02 \x03(\tR\fcapabilities\x12'\n" +
|
|
"\x0favailable_slots\x18\x03 \x01(\x05R\x0eavailableSlots\"\xb6\x02\n" +
|
|
"\x0eTaskAssignment\x12\x17\n" +
|
|
"\atask_id\x18\x01 \x01(\tR\x06taskId\x12\x1b\n" +
|
|
"\ttask_type\x18\x02 \x01(\tR\btaskType\x12-\n" +
|
|
"\x06params\x18\x03 \x01(\v2\x15.worker_pb.TaskParamsR\x06params\x12\x1a\n" +
|
|
"\bpriority\x18\x04 \x01(\x05R\bpriority\x12!\n" +
|
|
"\fcreated_time\x18\x05 \x01(\x03R\vcreatedTime\x12C\n" +
|
|
"\bmetadata\x18\x06 \x03(\v2'.worker_pb.TaskAssignment.MetadataEntryR\bmetadata\x1a;\n" +
|
|
"\rMetadataEntry\x12\x10\n" +
|
|
"\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" +
|
|
"\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"\xaf\x05\n" +
|
|
"\n" +
"TaskParams\x12\x17\n" +
"\atask_id\x18\x01 \x01(\tR\x06taskId\x12\x1b\n" +
"\tvolume_id\x18\x02 \x01(\rR\bvolumeId\x12\x1e\n" +
"\n" +
"collection\x18\x03 \x01(\tR\n" +
"collection\x12\x1f\n" +
"\vdata_center\x18\x04 \x01(\tR\n" +
"dataCenter\x12\x12\n" +
"\x04rack\x18\x05 \x01(\tR\x04rack\x12\x1f\n" +
"\vvolume_size\x18\x06 \x01(\x04R\n" +
"volumeSize\x12/\n" +
"\asources\x18\a \x03(\v2\x15.worker_pb.TaskSourceR\asources\x12/\n" +
"\atargets\x18\b \x03(\v2\x15.worker_pb.TaskTargetR\atargets\x12B\n" +
"\rvacuum_params\x18\t \x01(\v2\x1b.worker_pb.VacuumTaskParamsH\x00R\fvacuumParams\x12X\n" +
"\x15erasure_coding_params\x18\n" +
" \x01(\v2\".worker_pb.ErasureCodingTaskParamsH\x00R\x13erasureCodingParams\x12E\n" +
"\x0ebalance_params\x18\v \x01(\v2\x1c.worker_pb.BalanceTaskParamsH\x00R\rbalanceParams\x12Q\n" +
"\x12replication_params\x18\f \x01(\v2 .worker_pb.ReplicationTaskParamsH\x00R\x11replicationParams\x12L\n" +
"\x11ec_balance_params\x18\r \x01(\v2\x1e.worker_pb.EcBalanceTaskParamsH\x00R\x0fecBalanceParamsB\r\n" +
"\vtask_params\"\xcb\x01\n" +
"\x10VacuumTaskParams\x12+\n" +
"\x11garbage_threshold\x18\x01 \x01(\x01R\x10garbageThreshold\x12!\n" +
"\fforce_vacuum\x18\x02 \x01(\bR\vforceVacuum\x12\x1d\n" +
"\n" +
"batch_size\x18\x03 \x01(\x05R\tbatchSize\x12\x1f\n" +
"\vworking_dir\x18\x04 \x01(\tR\n" +
"workingDir\x12'\n" +
"\x0fverify_checksum\x18\x05 \x01(\bR\x0everifyChecksum\"\xfe\x01\n" +
"\x17ErasureCodingTaskParams\x120\n" +
"\x14estimated_shard_size\x18\x01 \x01(\x04R\x12estimatedShardSize\x12\x1f\n" +
"\vdata_shards\x18\x02 \x01(\x05R\n" +
"dataShards\x12#\n" +
"\rparity_shards\x18\x03 \x01(\x05R\fparityShards\x12\x1f\n" +
"\vworking_dir\x18\x04 \x01(\tR\n" +
"workingDir\x12#\n" +
"\rmaster_client\x18\x05 \x01(\tR\fmasterClient\x12%\n" +
"\x0ecleanup_source\x18\x06 \x01(\bR\rcleanupSource\"\xcf\x01\n" +
"\n" +
"TaskSource\x12\x12\n" +
"\x04node\x18\x01 \x01(\tR\x04node\x12\x17\n" +
"\adisk_id\x18\x02 \x01(\rR\x06diskId\x12\x12\n" +
"\x04rack\x18\x03 \x01(\tR\x04rack\x12\x1f\n" +
"\vdata_center\x18\x04 \x01(\tR\n" +
"dataCenter\x12\x1b\n" +
"\tvolume_id\x18\x05 \x01(\rR\bvolumeId\x12\x1b\n" +
"\tshard_ids\x18\x06 \x03(\rR\bshardIds\x12%\n" +
"\x0eestimated_size\x18\a \x01(\x04R\restimatedSize\"\xcf\x01\n" +
"\n" +
"TaskTarget\x12\x12\n" +
"\x04node\x18\x01 \x01(\tR\x04node\x12\x17\n" +
"\adisk_id\x18\x02 \x01(\rR\x06diskId\x12\x12\n" +
"\x04rack\x18\x03 \x01(\tR\x04rack\x12\x1f\n" +
"\vdata_center\x18\x04 \x01(\tR\n" +
"dataCenter\x12\x1b\n" +
"\tvolume_id\x18\x05 \x01(\rR\bvolumeId\x12\x1b\n" +
"\tshard_ids\x18\x06 \x03(\rR\bshardIds\x12%\n" +
"\x0eestimated_size\x18\a \x01(\x04R\restimatedSize\"\xb1\x01\n" +
"\x0fBalanceMoveSpec\x12\x1b\n" +
"\tvolume_id\x18\x01 \x01(\rR\bvolumeId\x12\x1f\n" +
"\vsource_node\x18\x02 \x01(\tR\n" +
"sourceNode\x12\x1f\n" +
"\vtarget_node\x18\x03 \x01(\tR\n" +
"targetNode\x12\x1e\n" +
"\n" +
"collection\x18\x04 \x01(\tR\n" +
"collection\x12\x1f\n" +
"\vvolume_size\x18\x05 \x01(\x04R\n" +
"volumeSize\"\xbf\x01\n" +
"\x11BalanceTaskParams\x12\x1d\n" +
"\n" +
"force_move\x18\x01 \x01(\bR\tforceMove\x12'\n" +
"\x0ftimeout_seconds\x18\x02 \x01(\x05R\x0etimeoutSeconds\x120\n" +
"\x14max_concurrent_moves\x18\x03 \x01(\x05R\x12maxConcurrentMoves\x120\n" +
"\x05moves\x18\x04 \x03(\v2\x1a.worker_pb.BalanceMoveSpecR\x05moves\"k\n" +
"\x15ReplicationTaskParams\x12#\n" +
"\rreplica_count\x18\x01 \x01(\x05R\freplicaCount\x12-\n" +
"\x12verify_consistency\x18\x02 \x01(\bR\x11verifyConsistency\"\x8e\x02\n" +
"\n" +
"TaskUpdate\x12\x17\n" +
"\atask_id\x18\x01 \x01(\tR\x06taskId\x12\x1b\n" +
"\tworker_id\x18\x02 \x01(\tR\bworkerId\x12\x16\n" +
"\x06status\x18\x03 \x01(\tR\x06status\x12\x1a\n" +
"\bprogress\x18\x04 \x01(\x02R\bprogress\x12\x18\n" +
"\amessage\x18\x05 \x01(\tR\amessage\x12?\n" +
"\bmetadata\x18\x06 \x03(\v2#.worker_pb.TaskUpdate.MetadataEntryR\bmetadata\x1a;\n" +
"\rMetadataEntry\x12\x10\n" +
"\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" +
"\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"\xc5\x02\n" +
"\fTaskComplete\x12\x17\n" +
"\atask_id\x18\x01 \x01(\tR\x06taskId\x12\x1b\n" +
"\tworker_id\x18\x02 \x01(\tR\bworkerId\x12\x18\n" +
"\asuccess\x18\x03 \x01(\bR\asuccess\x12#\n" +
"\rerror_message\x18\x04 \x01(\tR\ferrorMessage\x12'\n" +
"\x0fcompletion_time\x18\x05 \x01(\x03R\x0ecompletionTime\x12T\n" +
"\x0fresult_metadata\x18\x06 \x03(\v2+.worker_pb.TaskComplete.ResultMetadataEntryR\x0eresultMetadata\x1aA\n" +
"\x13ResultMetadataEntry\x12\x10\n" +
"\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" +
"\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"Y\n" +
"\x10TaskCancellation\x12\x17\n" +
"\atask_id\x18\x01 \x01(\tR\x06taskId\x12\x16\n" +
"\x06reason\x18\x02 \x01(\tR\x06reason\x12\x14\n" +
"\x05force\x18\x03 \x01(\bR\x05force\"o\n" +
"\x0eWorkerShutdown\x12\x1b\n" +
"\tworker_id\x18\x01 \x01(\tR\bworkerId\x12\x16\n" +
"\x06reason\x18\x02 \x01(\tR\x06reason\x12(\n" +
"\x10pending_task_ids\x18\x03 \x03(\tR\x0ependingTaskIds\"c\n" +
"\rAdminShutdown\x12\x16\n" +
"\x06reason\x18\x01 \x01(\tR\x06reason\x12:\n" +
"\x19graceful_shutdown_seconds\x18\x02 \x01(\x05R\x17gracefulShutdownSeconds\"\xe9\x01\n" +
"\x0eTaskLogRequest\x12\x17\n" +
"\atask_id\x18\x01 \x01(\tR\x06taskId\x12\x1b\n" +
"\tworker_id\x18\x02 \x01(\tR\bworkerId\x12)\n" +
"\x10include_metadata\x18\x03 \x01(\bR\x0fincludeMetadata\x12\x1f\n" +
"\vmax_entries\x18\x04 \x01(\x05R\n" +
"maxEntries\x12\x1b\n" +
"\tlog_level\x18\x05 \x01(\tR\blogLevel\x12\x1d\n" +
"\n" +
"start_time\x18\x06 \x01(\x03R\tstartTime\x12\x19\n" +
"\bend_time\x18\a \x01(\x03R\aendTime\"\xf8\x01\n" +
"\x0fTaskLogResponse\x12\x17\n" +
"\atask_id\x18\x01 \x01(\tR\x06taskId\x12\x1b\n" +
"\tworker_id\x18\x02 \x01(\tR\bworkerId\x12\x18\n" +
"\asuccess\x18\x03 \x01(\bR\asuccess\x12#\n" +
"\rerror_message\x18\x04 \x01(\tR\ferrorMessage\x126\n" +
"\bmetadata\x18\x05 \x01(\v2\x1a.worker_pb.TaskLogMetadataR\bmetadata\x128\n" +
"\vlog_entries\x18\x06 \x03(\v2\x17.worker_pb.TaskLogEntryR\n" +
"logEntries\"\x97\x04\n" +
"\x0fTaskLogMetadata\x12\x17\n" +
"\atask_id\x18\x01 \x01(\tR\x06taskId\x12\x1b\n" +
"\ttask_type\x18\x02 \x01(\tR\btaskType\x12\x1b\n" +
"\tworker_id\x18\x03 \x01(\tR\bworkerId\x12\x1d\n" +
"\n" +
"start_time\x18\x04 \x01(\x03R\tstartTime\x12\x19\n" +
"\bend_time\x18\x05 \x01(\x03R\aendTime\x12\x1f\n" +
"\vduration_ms\x18\x06 \x01(\x03R\n" +
"durationMs\x12\x16\n" +
"\x06status\x18\a \x01(\tR\x06status\x12\x1a\n" +
"\bprogress\x18\b \x01(\x02R\bprogress\x12\x1b\n" +
"\tvolume_id\x18\t \x01(\rR\bvolumeId\x12\x16\n" +
"\x06server\x18\n" +
" \x01(\tR\x06server\x12\x1e\n" +
"\n" +
"collection\x18\v \x01(\tR\n" +
"collection\x12\"\n" +
"\rlog_file_path\x18\f \x01(\tR\vlogFilePath\x12\x1d\n" +
"\n" +
"created_at\x18\r \x01(\x03R\tcreatedAt\x12K\n" +
"\vcustom_data\x18\x0e \x03(\v2*.worker_pb.TaskLogMetadata.CustomDataEntryR\n" +
"customData\x1a=\n" +
"\x0fCustomDataEntry\x12\x10\n" +
"\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" +
"\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"\x88\x02\n" +
"\fTaskLogEntry\x12\x1c\n" +
"\ttimestamp\x18\x01 \x01(\x03R\ttimestamp\x12\x14\n" +
"\x05level\x18\x02 \x01(\tR\x05level\x12\x18\n" +
"\amessage\x18\x03 \x01(\tR\amessage\x12;\n" +
"\x06fields\x18\x04 \x03(\v2#.worker_pb.TaskLogEntry.FieldsEntryR\x06fields\x12\x1a\n" +
"\bprogress\x18\x05 \x01(\x02R\bprogress\x12\x16\n" +
"\x06status\x18\x06 \x01(\tR\x06status\x1a9\n" +
"\vFieldsEntry\x12\x10\n" +
"\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" +
"\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"\xc0\x03\n" +
"\x11MaintenanceConfig\x12\x18\n" +
"\aenabled\x18\x01 \x01(\bR\aenabled\x122\n" +
"\x15scan_interval_seconds\x18\x02 \x01(\x05R\x13scanIntervalSeconds\x124\n" +
"\x16worker_timeout_seconds\x18\x03 \x01(\x05R\x14workerTimeoutSeconds\x120\n" +
"\x14task_timeout_seconds\x18\x04 \x01(\x05R\x12taskTimeoutSeconds\x12.\n" +
"\x13retry_delay_seconds\x18\x05 \x01(\x05R\x11retryDelaySeconds\x12\x1f\n" +
"\vmax_retries\x18\x06 \x01(\x05R\n" +
"maxRetries\x128\n" +
"\x18cleanup_interval_seconds\x18\a \x01(\x05R\x16cleanupIntervalSeconds\x124\n" +
"\x16task_retention_seconds\x18\b \x01(\x05R\x14taskRetentionSeconds\x124\n" +
"\x06policy\x18\t \x01(\v2\x1c.worker_pb.MaintenancePolicyR\x06policy\"\x80\x03\n" +
"\x11MaintenancePolicy\x12S\n" +
"\rtask_policies\x18\x01 \x03(\v2..worker_pb.MaintenancePolicy.TaskPoliciesEntryR\ftaskPolicies\x122\n" +
"\x15global_max_concurrent\x18\x02 \x01(\x05R\x13globalMaxConcurrent\x12E\n" +
"\x1fdefault_repeat_interval_seconds\x18\x03 \x01(\x05R\x1cdefaultRepeatIntervalSeconds\x12C\n" +
"\x1edefault_check_interval_seconds\x18\x04 \x01(\x05R\x1bdefaultCheckIntervalSeconds\x1aV\n" +
"\x11TaskPoliciesEntry\x12\x10\n" +
"\x03key\x18\x01 \x01(\tR\x03key\x12+\n" +
"\x05value\x18\x02 \x01(\v2\x15.worker_pb.TaskPolicyR\x05value:\x028\x01\"\xd0\x04\n" +
"\n" +
"TaskPolicy\x12\x18\n" +
"\aenabled\x18\x01 \x01(\bR\aenabled\x12%\n" +
"\x0emax_concurrent\x18\x02 \x01(\x05R\rmaxConcurrent\x126\n" +
"\x17repeat_interval_seconds\x18\x03 \x01(\x05R\x15repeatIntervalSeconds\x124\n" +
"\x16check_interval_seconds\x18\x04 \x01(\x05R\x14checkIntervalSeconds\x12B\n" +
"\rvacuum_config\x18\x05 \x01(\v2\x1b.worker_pb.VacuumTaskConfigH\x00R\fvacuumConfig\x12X\n" +
"\x15erasure_coding_config\x18\x06 \x01(\v2\".worker_pb.ErasureCodingTaskConfigH\x00R\x13erasureCodingConfig\x12E\n" +
"\x0ebalance_config\x18\a \x01(\v2\x1c.worker_pb.BalanceTaskConfigH\x00R\rbalanceConfig\x12Q\n" +
"\x12replication_config\x18\b \x01(\v2 .worker_pb.ReplicationTaskConfigH\x00R\x11replicationConfig\x12L\n" +
"\x11ec_balance_config\x18\t \x01(\v2\x1e.worker_pb.EcBalanceTaskConfigH\x00R\x0fecBalanceConfigB\r\n" +
"\vtask_config\"\xa2\x01\n" +
"\x10VacuumTaskConfig\x12+\n" +
"\x11garbage_threshold\x18\x01 \x01(\x01R\x10garbageThreshold\x12/\n" +
"\x14min_volume_age_hours\x18\x02 \x01(\x05R\x11minVolumeAgeHours\x120\n" +
"\x14min_interval_seconds\x18\x03 \x01(\x05R\x12minIntervalSeconds\"\xed\x01\n" +
"\x17ErasureCodingTaskConfig\x12%\n" +
"\x0efullness_ratio\x18\x01 \x01(\x01R\rfullnessRatio\x12*\n" +
"\x11quiet_for_seconds\x18\x02 \x01(\x05R\x0fquietForSeconds\x12+\n" +
"\x12min_volume_size_mb\x18\x03 \x01(\x05R\x0fminVolumeSizeMb\x12+\n" +
"\x11collection_filter\x18\x04 \x01(\tR\x10collectionFilter\x12%\n" +
"\x0epreferred_tags\x18\x05 \x03(\tR\rpreferredTags\"n\n" +
"\x11BalanceTaskConfig\x12/\n" +
"\x13imbalance_threshold\x18\x01 \x01(\x01R\x12imbalanceThreshold\x12(\n" +
"\x10min_server_count\x18\x02 \x01(\x05R\x0eminServerCount\"I\n" +
"\x15ReplicationTaskConfig\x120\n" +
"\x14target_replica_count\x18\x01 \x01(\x05R\x12targetReplicaCount\"\xbe\x01\n" +
"\x13EcBalanceTaskParams\x12\x1b\n" +
"\tdisk_type\x18\x01 \x01(\tR\bdiskType\x12/\n" +
"\x13max_parallelization\x18\x02 \x01(\x05R\x12maxParallelization\x12'\n" +
"\x0ftimeout_seconds\x18\x03 \x01(\x05R\x0etimeoutSeconds\x120\n" +
"\x05moves\x18\x04 \x03(\v2\x1a.worker_pb.EcShardMoveSpecR\x05moves\"\xf7\x01\n" +
"\x0fEcShardMoveSpec\x12\x1b\n" +
"\tvolume_id\x18\x01 \x01(\rR\bvolumeId\x12\x19\n" +
"\bshard_id\x18\x02 \x01(\rR\ashardId\x12\x1e\n" +
"\n" +
"collection\x18\x03 \x01(\tR\n" +
"collection\x12\x1f\n" +
"\vsource_node\x18\x04 \x01(\tR\n" +
"sourceNode\x12$\n" +
"\x0esource_disk_id\x18\x05 \x01(\rR\fsourceDiskId\x12\x1f\n" +
"\vtarget_node\x18\x06 \x01(\tR\n" +
"targetNode\x12$\n" +
"\x0etarget_disk_id\x18\a \x01(\rR\ftargetDiskId\"\xe1\x01\n" +
"\x13EcBalanceTaskConfig\x12/\n" +
"\x13imbalance_threshold\x18\x01 \x01(\x01R\x12imbalanceThreshold\x12(\n" +
"\x10min_server_count\x18\x02 \x01(\x05R\x0eminServerCount\x12+\n" +
"\x11collection_filter\x18\x03 \x01(\tR\x10collectionFilter\x12\x1b\n" +
"\tdisk_type\x18\x04 \x01(\tR\bdiskType\x12%\n" +
"\x0epreferred_tags\x18\x05 \x03(\tR\rpreferredTags\"\xae\a\n" +
"\x13MaintenanceTaskData\x12\x0e\n" +
"\x02id\x18\x01 \x01(\tR\x02id\x12\x12\n" +
"\x04type\x18\x02 \x01(\tR\x04type\x12\x1a\n" +
"\bpriority\x18\x03 \x01(\tR\bpriority\x12\x16\n" +
"\x06status\x18\x04 \x01(\tR\x06status\x12\x1b\n" +
"\tvolume_id\x18\x05 \x01(\rR\bvolumeId\x12\x16\n" +
"\x06server\x18\x06 \x01(\tR\x06server\x12\x1e\n" +
"\n" +
"collection\x18\a \x01(\tR\n" +
"collection\x128\n" +
"\ftyped_params\x18\b \x01(\v2\x15.worker_pb.TaskParamsR\vtypedParams\x12\x16\n" +
"\x06reason\x18\t \x01(\tR\x06reason\x12\x1d\n" +
"\n" +
"created_at\x18\n" +
" \x01(\x03R\tcreatedAt\x12!\n" +
"\fscheduled_at\x18\v \x01(\x03R\vscheduledAt\x12\x1d\n" +
"\n" +
"started_at\x18\f \x01(\x03R\tstartedAt\x12!\n" +
"\fcompleted_at\x18\r \x01(\x03R\vcompletedAt\x12\x1b\n" +
"\tworker_id\x18\x0e \x01(\tR\bworkerId\x12\x14\n" +
"\x05error\x18\x0f \x01(\tR\x05error\x12\x1a\n" +
"\bprogress\x18\x10 \x01(\x01R\bprogress\x12\x1f\n" +
"\vretry_count\x18\x11 \x01(\x05R\n" +
"retryCount\x12\x1f\n" +
"\vmax_retries\x18\x12 \x01(\x05R\n" +
"maxRetries\x12\x1d\n" +
"\n" +
"created_by\x18\x13 \x01(\tR\tcreatedBy\x12)\n" +
"\x10creation_context\x18\x14 \x01(\tR\x0fcreationContext\x12N\n" +
"\x12assignment_history\x18\x15 \x03(\v2\x1f.worker_pb.TaskAssignmentRecordR\x11assignmentHistory\x12'\n" +
"\x0fdetailed_reason\x18\x16 \x01(\tR\x0edetailedReason\x12<\n" +
"\x04tags\x18\x17 \x03(\v2(.worker_pb.MaintenanceTaskData.TagsEntryR\x04tags\x12I\n" +
"\x10creation_metrics\x18\x18 \x01(\v2\x1e.worker_pb.TaskCreationMetricsR\x0fcreationMetrics\x1a7\n" +
"\tTagsEntry\x12\x10\n" +
"\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" +
"\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"\xb8\x01\n" +
"\x14TaskAssignmentRecord\x12\x1b\n" +
"\tworker_id\x18\x01 \x01(\tR\bworkerId\x12%\n" +
"\x0eworker_address\x18\x02 \x01(\tR\rworkerAddress\x12\x1f\n" +
"\vassigned_at\x18\x03 \x01(\x03R\n" +
"assignedAt\x12#\n" +
"\runassigned_at\x18\x04 \x01(\x03R\funassignedAt\x12\x16\n" +
"\x06reason\x18\x05 \x01(\tR\x06reason\"\xe4\x02\n" +
"\x13TaskCreationMetrics\x12%\n" +
"\x0etrigger_metric\x18\x01 \x01(\tR\rtriggerMetric\x12!\n" +
"\fmetric_value\x18\x02 \x01(\x01R\vmetricValue\x12\x1c\n" +
"\tthreshold\x18\x03 \x01(\x01R\tthreshold\x12E\n" +
"\x0evolume_metrics\x18\x04 \x01(\v2\x1e.worker_pb.VolumeHealthMetricsR\rvolumeMetrics\x12[\n" +
"\x0fadditional_data\x18\x05 \x03(\v22.worker_pb.TaskCreationMetrics.AdditionalDataEntryR\x0eadditionalData\x1aA\n" +
"\x13AdditionalDataEntry\x12\x10\n" +
"\x03key\x18\x01 \x01(\tR\x03key\x12\x14\n" +
"\x05value\x18\x02 \x01(\tR\x05value:\x028\x01\"\xf2\x02\n" +
"\x13VolumeHealthMetrics\x12\x1d\n" +
"\n" +
"total_size\x18\x01 \x01(\x04R\ttotalSize\x12\x1b\n" +
"\tused_size\x18\x02 \x01(\x04R\busedSize\x12!\n" +
"\fgarbage_size\x18\x03 \x01(\x04R\vgarbageSize\x12#\n" +
"\rgarbage_ratio\x18\x04 \x01(\x01R\fgarbageRatio\x12\x1d\n" +
"\n" +
"file_count\x18\x05 \x01(\x05R\tfileCount\x12,\n" +
"\x12deleted_file_count\x18\x06 \x01(\x05R\x10deletedFileCount\x12#\n" +
"\rlast_modified\x18\a \x01(\x03R\flastModified\x12#\n" +
"\rreplica_count\x18\b \x01(\x05R\freplicaCount\x12 \n" +
"\fis_ec_volume\x18\t \x01(\bR\n" +
"isEcVolume\x12\x1e\n" +
"\n" +
"collection\x18\n" +
" \x01(\tR\n" +
"collection\"\x8b\x01\n" +
"\rTaskStateFile\x122\n" +
"\x04task\x18\x01 \x01(\v2\x1e.worker_pb.MaintenanceTaskDataR\x04task\x12!\n" +
"\flast_updated\x18\x02 \x01(\x03R\vlastUpdated\x12#\n" +
"\radmin_version\x18\x03 \x01(\tR\fadminVersion2V\n" +
"\rWorkerService\x12E\n" +
"\fWorkerStream\x12\x18.worker_pb.WorkerMessage\x1a\x17.worker_pb.AdminMessage(\x010\x01B2Z0github.com/seaweedfs/seaweedfs/weed/pb/worker_pbb\x06proto3"
var (
	file_worker_proto_rawDescOnce sync.Once
	file_worker_proto_rawDescData []byte
)

func file_worker_proto_rawDescGZIP() []byte {
	file_worker_proto_rawDescOnce.Do(func() {
		file_worker_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_worker_proto_rawDesc), len(file_worker_proto_rawDesc)))
	})
	return file_worker_proto_rawDescData
}

var file_worker_proto_msgTypes = make([]protoimpl.MessageInfo, 49)
var file_worker_proto_goTypes = []any{
	(*WorkerMessage)(nil),           // 0: worker_pb.WorkerMessage
	(*AdminMessage)(nil),            // 1: worker_pb.AdminMessage
	(*WorkerRegistration)(nil),      // 2: worker_pb.WorkerRegistration
	(*RegistrationResponse)(nil),    // 3: worker_pb.RegistrationResponse
	(*WorkerHeartbeat)(nil),         // 4: worker_pb.WorkerHeartbeat
	(*HeartbeatResponse)(nil),       // 5: worker_pb.HeartbeatResponse
	(*TaskRequest)(nil),             // 6: worker_pb.TaskRequest
	(*TaskAssignment)(nil),          // 7: worker_pb.TaskAssignment
	(*TaskParams)(nil),              // 8: worker_pb.TaskParams
	(*VacuumTaskParams)(nil),        // 9: worker_pb.VacuumTaskParams
	(*ErasureCodingTaskParams)(nil), // 10: worker_pb.ErasureCodingTaskParams
	(*TaskSource)(nil),              // 11: worker_pb.TaskSource
	(*TaskTarget)(nil),              // 12: worker_pb.TaskTarget
	(*BalanceMoveSpec)(nil),         // 13: worker_pb.BalanceMoveSpec
	(*BalanceTaskParams)(nil),       // 14: worker_pb.BalanceTaskParams
	(*ReplicationTaskParams)(nil),   // 15: worker_pb.ReplicationTaskParams
	(*TaskUpdate)(nil),              // 16: worker_pb.TaskUpdate
	(*TaskComplete)(nil),            // 17: worker_pb.TaskComplete
	(*TaskCancellation)(nil),        // 18: worker_pb.TaskCancellation
	(*WorkerShutdown)(nil),          // 19: worker_pb.WorkerShutdown
	(*AdminShutdown)(nil),           // 20: worker_pb.AdminShutdown
	(*TaskLogRequest)(nil),          // 21: worker_pb.TaskLogRequest
	(*TaskLogResponse)(nil),         // 22: worker_pb.TaskLogResponse
	(*TaskLogMetadata)(nil),         // 23: worker_pb.TaskLogMetadata
	(*TaskLogEntry)(nil),            // 24: worker_pb.TaskLogEntry
	(*MaintenanceConfig)(nil),       // 25: worker_pb.MaintenanceConfig
	(*MaintenancePolicy)(nil),       // 26: worker_pb.MaintenancePolicy
	(*TaskPolicy)(nil),              // 27: worker_pb.TaskPolicy
	(*VacuumTaskConfig)(nil),        // 28: worker_pb.VacuumTaskConfig
	(*ErasureCodingTaskConfig)(nil), // 29: worker_pb.ErasureCodingTaskConfig
	(*BalanceTaskConfig)(nil),       // 30: worker_pb.BalanceTaskConfig
	(*ReplicationTaskConfig)(nil),   // 31: worker_pb.ReplicationTaskConfig
	(*EcBalanceTaskParams)(nil),     // 32: worker_pb.EcBalanceTaskParams
	(*EcShardMoveSpec)(nil),         // 33: worker_pb.EcShardMoveSpec
	(*EcBalanceTaskConfig)(nil),     // 34: worker_pb.EcBalanceTaskConfig
	(*MaintenanceTaskData)(nil),     // 35: worker_pb.MaintenanceTaskData
	(*TaskAssignmentRecord)(nil),    // 36: worker_pb.TaskAssignmentRecord
	(*TaskCreationMetrics)(nil),     // 37: worker_pb.TaskCreationMetrics
	(*VolumeHealthMetrics)(nil),     // 38: worker_pb.VolumeHealthMetrics
	(*TaskStateFile)(nil),           // 39: worker_pb.TaskStateFile
	nil,                             // 40: worker_pb.WorkerRegistration.MetadataEntry
	nil,                             // 41: worker_pb.TaskAssignment.MetadataEntry
	nil,                             // 42: worker_pb.TaskUpdate.MetadataEntry
	nil,                             // 43: worker_pb.TaskComplete.ResultMetadataEntry
	nil,                             // 44: worker_pb.TaskLogMetadata.CustomDataEntry
	nil,                             // 45: worker_pb.TaskLogEntry.FieldsEntry
	nil,                             // 46: worker_pb.MaintenancePolicy.TaskPoliciesEntry
	nil,                             // 47: worker_pb.MaintenanceTaskData.TagsEntry
	nil,                             // 48: worker_pb.TaskCreationMetrics.AdditionalDataEntry
}
var file_worker_proto_depIdxs = []int32{
	2,  // 0: worker_pb.WorkerMessage.registration:type_name -> worker_pb.WorkerRegistration
	4,  // 1: worker_pb.WorkerMessage.heartbeat:type_name -> worker_pb.WorkerHeartbeat
	6,  // 2: worker_pb.WorkerMessage.task_request:type_name -> worker_pb.TaskRequest
	16, // 3: worker_pb.WorkerMessage.task_update:type_name -> worker_pb.TaskUpdate
	17, // 4: worker_pb.WorkerMessage.task_complete:type_name -> worker_pb.TaskComplete
	19, // 5: worker_pb.WorkerMessage.shutdown:type_name -> worker_pb.WorkerShutdown
	22, // 6: worker_pb.WorkerMessage.task_log_response:type_name -> worker_pb.TaskLogResponse
	3,  // 7: worker_pb.AdminMessage.registration_response:type_name -> worker_pb.RegistrationResponse
	5,  // 8: worker_pb.AdminMessage.heartbeat_response:type_name -> worker_pb.HeartbeatResponse
	7,  // 9: worker_pb.AdminMessage.task_assignment:type_name -> worker_pb.TaskAssignment
	18, // 10: worker_pb.AdminMessage.task_cancellation:type_name -> worker_pb.TaskCancellation
	20, // 11: worker_pb.AdminMessage.admin_shutdown:type_name -> worker_pb.AdminShutdown
	21, // 12: worker_pb.AdminMessage.task_log_request:type_name -> worker_pb.TaskLogRequest
	40, // 13: worker_pb.WorkerRegistration.metadata:type_name -> worker_pb.WorkerRegistration.MetadataEntry
	8,  // 14: worker_pb.TaskAssignment.params:type_name -> worker_pb.TaskParams
	41, // 15: worker_pb.TaskAssignment.metadata:type_name -> worker_pb.TaskAssignment.MetadataEntry
	11, // 16: worker_pb.TaskParams.sources:type_name -> worker_pb.TaskSource
	12, // 17: worker_pb.TaskParams.targets:type_name -> worker_pb.TaskTarget
	9,  // 18: worker_pb.TaskParams.vacuum_params:type_name -> worker_pb.VacuumTaskParams
	10, // 19: worker_pb.TaskParams.erasure_coding_params:type_name -> worker_pb.ErasureCodingTaskParams
	14, // 20: worker_pb.TaskParams.balance_params:type_name -> worker_pb.BalanceTaskParams
	15, // 21: worker_pb.TaskParams.replication_params:type_name -> worker_pb.ReplicationTaskParams
	32, // 22: worker_pb.TaskParams.ec_balance_params:type_name -> worker_pb.EcBalanceTaskParams
	13, // 23: worker_pb.BalanceTaskParams.moves:type_name -> worker_pb.BalanceMoveSpec
	42, // 24: worker_pb.TaskUpdate.metadata:type_name -> worker_pb.TaskUpdate.MetadataEntry
	43, // 25: worker_pb.TaskComplete.result_metadata:type_name -> worker_pb.TaskComplete.ResultMetadataEntry
	23, // 26: worker_pb.TaskLogResponse.metadata:type_name -> worker_pb.TaskLogMetadata
	24, // 27: worker_pb.TaskLogResponse.log_entries:type_name -> worker_pb.TaskLogEntry
	44, // 28: worker_pb.TaskLogMetadata.custom_data:type_name -> worker_pb.TaskLogMetadata.CustomDataEntry
	45, // 29: worker_pb.TaskLogEntry.fields:type_name -> worker_pb.TaskLogEntry.FieldsEntry
	26, // 30: worker_pb.MaintenanceConfig.policy:type_name -> worker_pb.MaintenancePolicy
	46, // 31: worker_pb.MaintenancePolicy.task_policies:type_name -> worker_pb.MaintenancePolicy.TaskPoliciesEntry
	28, // 32: worker_pb.TaskPolicy.vacuum_config:type_name -> worker_pb.VacuumTaskConfig
	29, // 33: worker_pb.TaskPolicy.erasure_coding_config:type_name -> worker_pb.ErasureCodingTaskConfig
	30, // 34: worker_pb.TaskPolicy.balance_config:type_name -> worker_pb.BalanceTaskConfig
	31, // 35: worker_pb.TaskPolicy.replication_config:type_name -> worker_pb.ReplicationTaskConfig
	34, // 36: worker_pb.TaskPolicy.ec_balance_config:type_name -> worker_pb.EcBalanceTaskConfig
	33, // 37: worker_pb.EcBalanceTaskParams.moves:type_name -> worker_pb.EcShardMoveSpec
	8,  // 38: worker_pb.MaintenanceTaskData.typed_params:type_name -> worker_pb.TaskParams
	36, // 39: worker_pb.MaintenanceTaskData.assignment_history:type_name -> worker_pb.TaskAssignmentRecord
	47, // 40: worker_pb.MaintenanceTaskData.tags:type_name -> worker_pb.MaintenanceTaskData.TagsEntry
	37, // 41: worker_pb.MaintenanceTaskData.creation_metrics:type_name -> worker_pb.TaskCreationMetrics
	38, // 42: worker_pb.TaskCreationMetrics.volume_metrics:type_name -> worker_pb.VolumeHealthMetrics
	48, // 43: worker_pb.TaskCreationMetrics.additional_data:type_name -> worker_pb.TaskCreationMetrics.AdditionalDataEntry
	35, // 44: worker_pb.TaskStateFile.task:type_name -> worker_pb.MaintenanceTaskData
	27, // 45: worker_pb.MaintenancePolicy.TaskPoliciesEntry.value:type_name -> worker_pb.TaskPolicy
	0,  // 46: worker_pb.WorkerService.WorkerStream:input_type -> worker_pb.WorkerMessage
	1,  // 47: worker_pb.WorkerService.WorkerStream:output_type -> worker_pb.AdminMessage
	47, // [47:48] is the sub-list for method output_type
	46, // [46:47] is the sub-list for method input_type
	46, // [46:46] is the sub-list for extension type_name
	46, // [46:46] is the sub-list for extension extendee
	0,  // [0:46] is the sub-list for field type_name
}

func init() { file_worker_proto_init() }
func file_worker_proto_init() {
	if File_worker_proto != nil {
		return
	}
	file_worker_proto_msgTypes[0].OneofWrappers = []any{
		(*WorkerMessage_Registration)(nil),
		(*WorkerMessage_Heartbeat)(nil),
		(*WorkerMessage_TaskRequest)(nil),
		(*WorkerMessage_TaskUpdate)(nil),
		(*WorkerMessage_TaskComplete)(nil),
		(*WorkerMessage_Shutdown)(nil),
		(*WorkerMessage_TaskLogResponse)(nil),
	}
	file_worker_proto_msgTypes[1].OneofWrappers = []any{
		(*AdminMessage_RegistrationResponse)(nil),
		(*AdminMessage_HeartbeatResponse)(nil),
		(*AdminMessage_TaskAssignment)(nil),
		(*AdminMessage_TaskCancellation)(nil),
		(*AdminMessage_AdminShutdown)(nil),
		(*AdminMessage_TaskLogRequest)(nil),
	}
	file_worker_proto_msgTypes[8].OneofWrappers = []any{
		(*TaskParams_VacuumParams)(nil),
		(*TaskParams_ErasureCodingParams)(nil),
		(*TaskParams_BalanceParams)(nil),
		(*TaskParams_ReplicationParams)(nil),
		(*TaskParams_EcBalanceParams)(nil),
	}
	file_worker_proto_msgTypes[27].OneofWrappers = []any{
		(*TaskPolicy_VacuumConfig)(nil),
		(*TaskPolicy_ErasureCodingConfig)(nil),
		(*TaskPolicy_BalanceConfig)(nil),
		(*TaskPolicy_ReplicationConfig)(nil),
		(*TaskPolicy_EcBalanceConfig)(nil),
	}
	type x struct{}
	out := protoimpl.TypeBuilder{
		File: protoimpl.DescBuilder{
			GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
			RawDescriptor: unsafe.Slice(unsafe.StringData(file_worker_proto_rawDesc), len(file_worker_proto_rawDesc)),
			NumEnums:      0,
			NumMessages:   49,
			NumExtensions: 0,
			NumServices:   1,
		},
		GoTypes:           file_worker_proto_goTypes,
		DependencyIndexes: file_worker_proto_depIdxs,
		MessageInfos:      file_worker_proto_msgTypes,
	}.Build()
	File_worker_proto = out.File
	file_worker_proto_goTypes = nil
	file_worker_proto_depIdxs = nil
}