* refactor: add ECContext structure to encapsulate EC parameters
- Create ec_context.go with ECContext struct
- NewDefaultECContext() creates context with default 10+4 configuration
- Helper methods: CreateEncoder(), ToExt(), String()
- Foundation for cleaner function signatures
- No behavior change, still uses hardcoded 10+4
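A minimal sketch of the ECContext described above (assuming the default constants from ec_encoder.go and the helper names listed in this commit; the PR's actual file may differ in detail):

```go
package erasure_coding

import (
	"fmt"

	"github.com/klauspost/reedsolomon"
	"github.com/seaweedfs/seaweedfs/weed/storage/needle"
)

// ECContext encapsulates the erasure-coding parameters for one volume.
type ECContext struct {
	Collection   string
	VolumeId     needle.VolumeId
	DataShards   int
	ParityShards int
}

// NewDefaultECContext returns a context with the default 10+4 layout.
func NewDefaultECContext(collection string, vid needle.VolumeId) *ECContext {
	return &ECContext{
		Collection:   collection,
		VolumeId:     vid,
		DataShards:   DataShardsCount,   // 10
		ParityShards: ParityShardsCount, // 4
	}
}

// CreateEncoder builds a Reed-Solomon encoder for this configuration.
func (c *ECContext) CreateEncoder() (reedsolomon.Encoder, error) {
	return reedsolomon.New(c.DataShards, c.ParityShards)
}

// ToExt returns the shard file extension for a shard index, e.g. ".ec03".
func (c *ECContext) ToExt(shardIndex int) string {
	return fmt.Sprintf(".ec%02d", shardIndex)
}

// String renders the layout, e.g. "10+4".
func (c *ECContext) String() string {
	return fmt.Sprintf("%d+%d", c.DataShards, c.ParityShards)
}
```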
* refactor: update ec_encoder.go to use ECContext
- Add WriteEcFilesWithContext() and RebuildEcFilesWithContext() functions
- Keep old functions for backward compatibility (call new versions)
- Update all internal functions to accept ECContext parameter
- Use ctx.DataShards, ctx.ParityShards, ctx.TotalShards consistently
- Use ctx.CreateEncoder() instead of hardcoded reedsolomon.New()
- Use ctx.ToExt() for shard file extensions
- No behavior change, still uses default 10+4 configuration
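The backward-compatible wrappers might look roughly like this (signatures are assumptions; only the delegation pattern is taken from the commit notes):

```go
// WriteEcFiles keeps the old entry point and delegates to the
// context-aware version with the default 10+4 layout.
func WriteEcFiles(baseFileName string) error {
	return WriteEcFilesWithContext(baseFileName, NewDefaultECContext("", 0))
}

// RebuildEcFiles does the same for shard rebuilding.
func RebuildEcFiles(baseFileName string) ([]uint32, error) {
	return RebuildEcFilesWithContext(baseFileName, NewDefaultECContext("", 0))
}
```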
* refactor: update ec_volume.go to use ECContext
- Add ECContext field to EcVolume struct
- Initialize ECContext with default configuration in NewEcVolume()
- Update LocateEcShardNeedleInterval() to use ECContext.DataShards
- Phase 1: Always uses default 10+4 configuration
- No behavior change
* refactor: add EC shard count fields to VolumeInfo protobuf
- Add data_shards_count field (field 8) to VolumeInfo message
- Add parity_shards_count field (field 9) to VolumeInfo message
- Fields are optional, 0 means use default (10+4)
- Backward compatible: fields added at end
- Phase 1: Foundation for future customization
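A hypothetical reader illustrating the "0 means use default" rule (the helper name is made up; the accessors follow from the field names above):

```go
func ecShardCountsFromVolumeInfo(vi *volume_server_pb.VolumeInfo) (dataShards, parityShards int) {
	dataShards = int(vi.GetDataShardsCount())
	parityShards = int(vi.GetParityShardsCount())
	if dataShards == 0 || parityShards == 0 {
		// Unset or legacy .vif: fall back to the default 10+4 layout.
		return DataShardsCount, ParityShardsCount
	}
	return dataShards, parityShards
}
```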
* refactor: regenerate protobuf Go files with EC shard count fields
- Regenerated volume_server_pb/*.go with new EC fields
- DataShardsCount and ParityShardsCount accessors added to VolumeInfo
- No behavior change, fields not yet used
* refactor: update VolumeEcShardsGenerate to use ECContext
- Create ECContext with default configuration in VolumeEcShardsGenerate
- Use ecCtx.TotalShards and ecCtx.ToExt() in cleanup
- Call WriteEcFilesWithContext() instead of WriteEcFiles()
- Save EC configuration (DataShardsCount, ParityShardsCount) to VolumeInfo
- Log EC context being used
- Phase 1: Always uses default 10+4 configuration
- No behavior change
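A sketch of the persistence step, assuming the separate count fields from the earlier commit (later consolidated into EcShardConfig) and a hypothetical helper name:

```go
// saveEcLayout records the shard layout next to the volume so later loads
// and rebuilds use the same ratio.
func saveEcLayout(baseFileName string, ecCtx *erasure_coding.ECContext, vi *volume_server_pb.VolumeInfo) error {
	vi.DataShardsCount = uint32(ecCtx.DataShards)
	vi.ParityShardsCount = uint32(ecCtx.ParityShards)
	return volume_info.SaveVolumeInfo(baseFileName+".vif", vi)
}
```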
* fmt
* refactor: update ec_test.go to use ECContext
- Update TestEncodingDecoding to create and use ECContext
- Update validateFiles() to accept ECContext parameter
- Update removeGeneratedFiles() to use ctx.TotalShards and ctx.ToExt()
- Test passes with default 10+4 configuration
* refactor: use EcShardConfig message instead of separate fields
* optimize: pre-calculate row sizes in EC encoding loop
* refactor: replace TotalShards field with Total() method
- Remove TotalShards field from ECContext to avoid field drift
- Add Total() method that computes DataShards + ParityShards
- Update all references to use ctx.Total() instead of ctx.TotalShards
- Read EC config from VolumeInfo when loading EC volumes
- Read data shard count from .vif in VolumeEcShardsToVolume
- Use >= instead of > for exact boundary handling in encoding loops
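The derived total is trivial, which is the point; with no stored field there is nothing to drift:

```go
// Total returns the shard count implied by the configuration.
func (c *ECContext) Total() int {
	return c.DataShards + c.ParityShards
}
```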
* optimize: simplify VolumeEcShardsToVolume to use existing EC context
- Remove redundant CollectEcShards call
- Remove redundant .vif file loading
- Use v.ECContext.DataShards directly (already loaded by NewEcVolume)
- Slice tempShards instead of collecting again
* refactor: rename MaxShardId to MaxShardCount for clarity
- Change from MaxShardId=31 to MaxShardCount=32
- Eliminates confusing +1 arithmetic (MaxShardId+1)
- More intuitive: MaxShardCount directly represents the limit
* fix: support custom EC ratios beyond 14 shards in VolumeEcShardsToVolume
- Add MaxShardId constant (31, since ShardBits is uint32)
- Use MaxShardId+1 (32) instead of TotalShardsCount (14) for tempShards buffer
- Prevents panic when slicing for volumes with >14 total shards
- Critical fix for custom EC configurations like 20+10
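For reference, the constants these commits distinguish between (values as stated above; the declarations end up in ec_encoder.go after a later move), plus the buffer-sizing idea:

```go
const (
	DataShardsCount   = 10
	ParityShardsCount = 4
	TotalShardsCount  = DataShardsCount + ParityShardsCount // default 10+4 layout
	MaxShardCount     = 32                                  // ShardBits is a uint32 bitmap
)

// Sizing scratch buffers by the bitmap capacity rather than the default
// ratio keeps custom layouts such as 20+10 from indexing out of range:
//
//	tempShards := make([][]byte, MaxShardCount)
```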
* fix: add validation for EC shard counts from VolumeInfo
- Validate DataShards/ParityShards are positive and within MaxShardCount
- Prevent zero or invalid values that could cause divide-by-zero
- Fallback to defaults if validation fails, with warning log
- VolumeEcShardsGenerate now preserves existing EC config when regenerating
- Critical safety fix for corrupted or legacy .vif files
* fix: RebuildEcFiles now loads EC config from .vif file
- Critical: RebuildEcFiles was always using default 10+4 config
- Now loads actual EC config from .vif file when rebuilding shards
- Validates config before use (positive shards, within MaxShardCount)
- Falls back to default if .vif missing or invalid
- Prevents data corruption when rebuilding custom EC volumes
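Roughly, the rebuild path now resolves its context like this (a sketch reusing the same .vif loading and validation pattern that NewEcVolume uses):

```go
// ecContextForRebuild loads the EC layout from the .vif file, falling back
// to the default 10+4 when the file is missing or the config is invalid.
func ecContextForRebuild(baseFileName, collection string, vid needle.VolumeId) *ECContext {
	ctx := NewDefaultECContext(collection, vid)
	volumeInfo, _, found, _ := volume_info.MaybeLoadVolumeInfo(baseFileName + ".vif")
	if !found || volumeInfo.EcShardConfig == nil {
		return ctx
	}
	ds, ps := int(volumeInfo.EcShardConfig.DataShards), int(volumeInfo.EcShardConfig.ParityShards)
	if ds <= 0 || ps <= 0 || ds+ps > MaxShardCount {
		glog.Warningf("invalid EC config in %s.vif (data=%d, parity=%d), using defaults", baseFileName, ds, ps)
		return ctx
	}
	ctx.DataShards, ctx.ParityShards = ds, ps
	return ctx
}
```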
* add: defensive validation for dataShards in VolumeEcShardsToVolume
- Validate dataShards > 0 and <= MaxShardCount before use
- Prevents panic from corrupted or uninitialized ECContext
- Returns clear error message instead of panic
- Defense-in-depth: validates even though upstream should catch issues
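The check itself is small; something along these lines (hypothetical helper, mirroring the bullet points above):

```go
// validateDataShards rejects corrupted or uninitialized shard counts with a
// clear error instead of letting a later slice operation panic.
func validateDataShards(dataShards int, vid needle.VolumeId) error {
	if dataShards <= 0 || dataShards > MaxShardCount {
		return fmt.Errorf("invalid data shard count %d for volume %d (valid range 1..%d)",
			dataShards, vid, MaxShardCount)
	}
	return nil
}
```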
* fix: replace TotalShardsCount with MaxShardCount for custom EC ratio support
Critical fixes to support custom EC ratios > 14 shards:
disk_location_ec.go:
- validateEcVolume: Check shards 0-31 instead of 0-13 during validation
- removeEcVolumeFiles: Remove shards 0-31 instead of 0-13 during cleanup
ec_volume_info.go ShardBits methods:
- ShardIds(): Iterate up to MaxShardCount (32) instead of TotalShardsCount (14)
- ToUint32Slice(): Iterate up to MaxShardCount (32)
- IndexToShardId(): Iterate up to MaxShardCount (32)
- MinusParityShards(): Remove shards 10-31 instead of 10-13 (added note about Phase 2)
- Minus() shard size copy: Iterate up to MaxShardCount (32)
- resizeShardSizes(): Iterate up to MaxShardCount (32)
Without these changes:
- Custom EC ratios > 14 total shards would fail validation on startup
- Shards 14-31 would never be discovered or cleaned up
- ShardBits operations would miss shards >= 14
These changes are backward compatible - MaxShardCount (32) includes
the default TotalShardsCount (14), so existing 10+4 volumes work as before.
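The iteration change is the same in each method; ShardIds() as a representative sketch:

```go
// ShardIds reports every shard present in the bitmap, scanning the full
// 32-slot capacity so IDs 14-31 are not silently skipped.
func (b ShardBits) ShardIds() (ids []ShardId) {
	for i := ShardId(0); i < ShardId(MaxShardCount); i++ {
		if b.HasShardId(i) {
			ids = append(ids, i)
		}
	}
	return
}
```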
* fix: replace TotalShardsCount with MaxShardCount in critical data structures
Critical fixes for buffer allocations and loops that must support
custom EC ratios up to 32 shards:
Data Structures:
- store_ec.go:354: Buffer allocation for shard recovery (bufs array)
- topology_ec.go:14: EcShardLocations.Locations fixed array size
- command_ec_rebuild.go:268: EC shard map allocation
- command_ec_common.go:626: Shard-to-locations map allocation
Shard Discovery Loops:
- ec_task.go:378: Loop to find generated shard files
- ec_shard_management.go: All 8 loops that check/count EC shards
These changes are critical because:
1. Buffer allocations sized to 14 would cause index-out-of-bounds panics
when accessing shards 14-31
2. Fixed arrays sized to 14 would truncate shard location data
3. Loops limited to 0-13 would never discover/manage shards 14-31
Note: command_ec_encode.go:208 intentionally NOT changed - it creates
shard IDs to mount after encoding. In Phase 1 we always generate 14
shards, so this remains TotalShardsCount and will be made dynamic in
Phase 2 based on actual EC context.
Without these fixes, custom EC ratios > 14 total shards would cause:
- Runtime panics (array index out of bounds)
- Data loss (shards 14-31 never discovered/tracked)
- Incomplete shard management (missing shards not detected)
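As an illustration of the sizing rule (hypothetical helper; the real call sites are the files listed above):

```go
// newPerShardBuffers allocates one slot per possible shard ID. Sizing this
// to TotalShardsCount (14) would panic on shard IDs 14-31.
func newPerShardBuffers() [][]byte {
	return make([][]byte, MaxShardCount)
}
```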
* refactor: move MaxShardCount constant to ec_encoder.go
Moved MaxShardCount from ec_volume_info.go to ec_encoder.go to group it
with other shard count constants (DataShardsCount, ParityShardsCount,
TotalShardsCount). This improves code organization and makes it easier
to understand the relationship between these constants.
Location: ec_encoder.go line 22, between TotalShardsCount and MinTotalDisks
* improve: add defensive programming and better error messages for EC
Code review improvements from CodeRabbit:
1. ShardBits Guardrails (ec_volume_info.go):
- AddShardId, RemoveShardId: Reject shard IDs >= MaxShardCount
- HasShardId: Return false for out-of-range shard IDs
- Prevents silent no-ops from bit shifts with invalid IDs
2. Future-Proof Regex (disk_location_ec.go):
- Updated regex from \.ec[0-9][0-9] to \.ec\d{2,3}
- Now matches .ec00 through .ec999 (currently .ec00-.ec31 used)
- Supports future increases to MaxShardCount beyond 99
3. Better Error Messages (volume_grpc_erasure_coding.go):
- Include valid range (1..32) in dataShards validation error
- Helps operators quickly identify the problem
4. Validation Before Save (volume_grpc_erasure_coding.go):
- Validate ECContext (DataShards > 0, ParityShards > 0, Total <= MaxShardCount)
- Log EC config being saved to .vif for debugging
- Prevents writing invalid configs to disk
These changes improve robustness and debuggability without changing
core functionality.
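The guardrail itself, sketched for AddShardId (RemoveShardId and HasShardId follow the same pattern):

```go
// AddShardId explicitly rejects shard IDs beyond the uint32 bitmap
// capacity instead of relying on out-of-range shift behavior.
func (b ShardBits) AddShardId(id ShardId) ShardBits {
	if id >= ShardId(MaxShardCount) {
		return b
	}
	return b | (1 << id)
}
```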
* fmt
* fix: critical bugs from code review + clean up comments
Critical bug fixes:
1. command_ec_rebuild.go: Fixed indentation causing compilation error
- Properly nested if/for blocks in registerEcNode
2. ec_shard_management.go: Fixed isComplete logic incorrectly using MaxShardCount
- Changed from MaxShardCount (32) back to TotalShardsCount (14)
- Default 10+4 volumes were being incorrectly reported as incomplete
- Shards 14-31 (which a 10+4 volume never has) were being incorrectly reported as missing
- Fixed in 4 locations: volume completeness checks and getMissingShards
3. ec_volume_info.go: Fixed MinusParityShards removing too many shards
- Changed from MaxShardCount (32) back to TotalShardsCount (14)
- Was incorrectly removing shard IDs 10-31 instead of just 10-13
Comment cleanup:
- Removed Phase 1/Phase 2 references (development plan context)
- Replaced with clear statements about default 10+4 configuration
- SeaweedFS repo uses fixed 10+4 EC ratio, no phases needed
Root cause: Over-aggressive replacement of TotalShardsCount with MaxShardCount.
MaxShardCount (32) is the limit for buffer allocations and shard ID loops,
but TotalShardsCount (14) must be used for default EC configuration logic.
* fix: add defensive bounds checks and compute actual shard counts
Critical fixes from code review:
1. topology_ec.go: Add defensive bounds checks to AddShard/DeleteShard
- Prevent panic when shardId >= MaxShardCount (32)
- Return false instead of crashing on out-of-range shard IDs
2. command_ec_common.go: Fix doBalanceEcShardsAcrossRacks
- Was using hardcoded TotalShardsCount (14) for all volumes
- Now computes actual totalShardsForVolume from rackToShardCount
- Fixes incorrect rebalancing for volumes with custom EC ratios
- Example: 5+2=7 shards would incorrectly use 14 as average
These fixes improve robustness and prepare for future custom EC ratios
without changing current behavior for default 10+4 volumes.
Note: MinusParityShards and ec_task.go intentionally NOT changed for
seaweedfs repo - these will be enhanced in seaweed-enterprise repo
where custom EC ratio configuration is added.
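A sketch of the per-volume total computation (helper name is illustrative):

```go
// totalShardsForVolume sums the shards actually observed for one volume
// instead of assuming the default 14, so a 5+2 volume averages over 7.
func totalShardsForVolume(rackToShardCount map[string]int) (total int) {
	for _, count := range rackToShardCount {
		total += count
	}
	return
}
```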
* fmt
* style: make MaxShardCount type casting explicit in loops
Improved code clarity by explicitly casting MaxShardCount to the
appropriate type when used in loop comparisons:
- ShardId comparisons: Cast to ShardId(MaxShardCount)
- uint32 comparisons: Cast to uint32(MaxShardCount)
Changed in 5 locations:
- Minus() loop (line 90)
- ShardIds() loop (line 143)
- ToUint32Slice() loop (line 152)
- IndexToShardId() loop (line 219)
- resizeShardSizes() loop (line 248)
This makes the intent explicit and improves type safety readability.
No functional changes - purely a style improvement.
package erasure_coding

import (
	"errors"
	"fmt"
	"math"
	"os"
	"slices"
	"sync"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/master_pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/volume_server_pb"
	"github.com/seaweedfs/seaweedfs/weed/storage/idx"
	"github.com/seaweedfs/seaweedfs/weed/storage/needle"
	"github.com/seaweedfs/seaweedfs/weed/storage/types"
	"github.com/seaweedfs/seaweedfs/weed/storage/volume_info"
)

var (
	NotFoundError             = errors.New("needle not found")
	destroyDelaySeconds int64 = 0
)

type EcVolume struct {
	VolumeId                  needle.VolumeId
	Collection                string
	dir                       string
	dirIdx                    string
	ecxFile                   *os.File
	ecxFileSize               int64
	ecxCreatedAt              time.Time
	Shards                    []*EcVolumeShard
	ShardLocations            map[ShardId][]pb.ServerAddress
	ShardLocationsRefreshTime time.Time
	ShardLocationsLock        sync.RWMutex
	Version                   needle.Version
	ecjFile                   *os.File
	ecjFileAccessLock         sync.Mutex
	diskType                  types.DiskType
	datFileSize               int64
	ExpireAtSec               uint64     //ec volume destroy time, calculated from the ec volume was created
	ECContext                 *ECContext // EC encoding parameters
}

func NewEcVolume(diskType types.DiskType, dir string, dirIdx string, collection string, vid needle.VolumeId) (ev *EcVolume, err error) {
	ev = &EcVolume{dir: dir, dirIdx: dirIdx, Collection: collection, VolumeId: vid, diskType: diskType}

	dataBaseFileName := EcShardFileName(collection, dir, int(vid))
	indexBaseFileName := EcShardFileName(collection, dirIdx, int(vid))

	// open ecx file
	if ev.ecxFile, err = os.OpenFile(indexBaseFileName+".ecx", os.O_RDWR, 0644); err != nil {
		return nil, fmt.Errorf("cannot open ec volume index %s.ecx: %v", indexBaseFileName, err)
	}
	ecxFi, statErr := ev.ecxFile.Stat()
	if statErr != nil {
		_ = ev.ecxFile.Close()
		return nil, fmt.Errorf("can not stat ec volume index %s.ecx: %v", indexBaseFileName, statErr)
	}
	ev.ecxFileSize = ecxFi.Size()
	ev.ecxCreatedAt = ecxFi.ModTime()

	// open ecj file
	if ev.ecjFile, err = os.OpenFile(indexBaseFileName+".ecj", os.O_RDWR|os.O_CREATE, 0644); err != nil {
		return nil, fmt.Errorf("cannot open ec volume journal %s.ecj: %v", indexBaseFileName, err)
	}

	// read volume info
	ev.Version = needle.Version3
	if volumeInfo, _, found, _ := volume_info.MaybeLoadVolumeInfo(dataBaseFileName + ".vif"); found {
		ev.Version = needle.Version(volumeInfo.Version)
		ev.datFileSize = volumeInfo.DatFileSize
		ev.ExpireAtSec = volumeInfo.ExpireAtSec

		// Initialize EC context from .vif if present; fallback to defaults
		if volumeInfo.EcShardConfig != nil {
			ds := int(volumeInfo.EcShardConfig.DataShards)
			ps := int(volumeInfo.EcShardConfig.ParityShards)

			// Validate shard counts to prevent zero or invalid values
			if ds <= 0 || ps <= 0 || ds+ps > MaxShardCount {
				glog.Warningf("Invalid EC config in VolumeInfo for volume %d (data=%d, parity=%d), using defaults", vid, ds, ps)
				ev.ECContext = NewDefaultECContext(collection, vid)
			} else {
				ev.ECContext = &ECContext{
					Collection:   collection,
					VolumeId:     vid,
					DataShards:   ds,
					ParityShards: ps,
				}
				glog.V(1).Infof("Loaded EC config from VolumeInfo for volume %d: %s", vid, ev.ECContext.String())
			}
		} else {
			ev.ECContext = NewDefaultECContext(collection, vid)
		}
	} else {
		glog.Warningf("vif file not found,volumeId:%d, filename:%s", vid, dataBaseFileName)
		volume_info.SaveVolumeInfo(dataBaseFileName+".vif", &volume_server_pb.VolumeInfo{Version: uint32(ev.Version)})
		ev.ECContext = NewDefaultECContext(collection, vid)
	}

	ev.ShardLocations = make(map[ShardId][]pb.ServerAddress)

	return
}

func (ev *EcVolume) AddEcVolumeShard(ecVolumeShard *EcVolumeShard) bool {
	for _, s := range ev.Shards {
		if s.ShardId == ecVolumeShard.ShardId {
			return false
		}
	}
	ev.Shards = append(ev.Shards, ecVolumeShard)
	slices.SortFunc(ev.Shards, func(a, b *EcVolumeShard) int {
		if a.VolumeId != b.VolumeId {
			return int(a.VolumeId - b.VolumeId)
		}
		return int(a.ShardId - b.ShardId)
	})
	return true
}

func (ev *EcVolume) DeleteEcVolumeShard(shardId ShardId) (ecVolumeShard *EcVolumeShard, deleted bool) {
	foundPosition := -1
	for i, s := range ev.Shards {
		if s.ShardId == shardId {
			foundPosition = i
		}
	}
	if foundPosition < 0 {
		return nil, false
	}

	ecVolumeShard = ev.Shards[foundPosition]
	ecVolumeShard.Unmount()
	ev.Shards = append(ev.Shards[:foundPosition], ev.Shards[foundPosition+1:]...)
	return ecVolumeShard, true
}

func (ev *EcVolume) FindEcVolumeShard(shardId ShardId) (ecVolumeShard *EcVolumeShard, found bool) {
	for _, s := range ev.Shards {
		if s.ShardId == shardId {
			return s, true
		}
	}
	return nil, false
}

func (ev *EcVolume) Close() {
	for _, s := range ev.Shards {
		s.Close()
	}
	if ev.ecjFile != nil {
		ev.ecjFileAccessLock.Lock()
		_ = ev.ecjFile.Close()
		ev.ecjFile = nil
		ev.ecjFileAccessLock.Unlock()
	}
	if ev.ecxFile != nil {
		_ = ev.ecxFile.Sync()
		_ = ev.ecxFile.Close()
		ev.ecxFile = nil
	}
}

func (ev *EcVolume) Destroy() {

	ev.Close()

	for _, s := range ev.Shards {
		s.Destroy()
	}
	os.Remove(ev.FileName(".ecx"))
	os.Remove(ev.FileName(".ecj"))
	os.Remove(ev.FileName(".vif"))
}

func (ev *EcVolume) FileName(ext string) string {
	switch ext {
	case ".ecx", ".ecj":
		return ev.IndexBaseFileName() + ext
	}
	// .vif
	return ev.DataBaseFileName() + ext
}

func (ev *EcVolume) DataBaseFileName() string {
	return EcShardFileName(ev.Collection, ev.dir, int(ev.VolumeId))
}

func (ev *EcVolume) IndexBaseFileName() string {
	return EcShardFileName(ev.Collection, ev.dirIdx, int(ev.VolumeId))
}

func (ev *EcVolume) ShardSize() uint64 {
	if len(ev.Shards) > 0 {
		return uint64(ev.Shards[0].Size())
	}
	return 0
}

func (ev *EcVolume) Size() (size uint64) {
	for _, shard := range ev.Shards {
		if shardSize := shard.Size(); shardSize > 0 {
			size += uint64(shardSize)
		}
	}
	return
}

func (ev *EcVolume) CreatedAt() time.Time {
	return ev.ecxCreatedAt
}

func (ev *EcVolume) ShardIdList() (shardIds []ShardId) {
	for _, s := range ev.Shards {
		shardIds = append(shardIds, s.ShardId)
	}
	return
}

type ShardInfo struct {
	ShardId ShardId
	Size    uint64
}

func (ev *EcVolume) ShardDetails() (shards []ShardInfo) {
	for _, s := range ev.Shards {
		shardSize := s.Size()
		if shardSize >= 0 {
			shards = append(shards, ShardInfo{
				ShardId: s.ShardId,
				Size:    uint64(shardSize),
			})
		}
	}
	return
}

func (ev *EcVolume) ToVolumeEcShardInformationMessage(diskId uint32) (messages []*master_pb.VolumeEcShardInformationMessage) {
	prevVolumeId := needle.VolumeId(math.MaxUint32)
	var m *master_pb.VolumeEcShardInformationMessage
	for _, s := range ev.Shards {
		if s.VolumeId != prevVolumeId {
			m = &master_pb.VolumeEcShardInformationMessage{
				Id:          uint32(s.VolumeId),
				Collection:  s.Collection,
				DiskType:    string(ev.diskType),
				ExpireAtSec: ev.ExpireAtSec,
				DiskId:      diskId,
			}
			messages = append(messages, m)
		}
		prevVolumeId = s.VolumeId
		m.EcIndexBits = uint32(ShardBits(m.EcIndexBits).AddShardId(s.ShardId))

		// Add shard size information using the optimized format
		SetShardSize(m, s.ShardId, s.Size())
	}
	return
}

func (ev *EcVolume) LocateEcShardNeedle(needleId types.NeedleId, version needle.Version) (offset types.Offset, size types.Size, intervals []Interval, err error) {

	// find the needle from ecx file
	offset, size, err = ev.FindNeedleFromEcx(needleId)
	if err != nil {
		return types.Offset{}, 0, nil, fmt.Errorf("FindNeedleFromEcx: %w", err)
	}

	intervals = ev.LocateEcShardNeedleInterval(version, offset.ToActualOffset(), types.Size(needle.GetActualSize(size, version)))
	return
}

func (ev *EcVolume) LocateEcShardNeedleInterval(version needle.Version, offset int64, size types.Size) (intervals []Interval) {
	shard := ev.Shards[0]
	// Usually shard will be padded to round of ErasureCodingSmallBlockSize.
	// So in most cases, if shardSize equals to n * ErasureCodingLargeBlockSize,
	// the data would be in small blocks.
	shardSize := shard.ecdFileSize - 1
	if ev.datFileSize > 0 {
		// To get the correct LargeBlockRowsCount
		// use datFileSize to calculate the shardSize to match the EC encoding logic.
		shardSize = ev.datFileSize / int64(ev.ECContext.DataShards)
	}
	// calculate the locations in the ec shards
	intervals = LocateData(ErasureCodingLargeBlockSize, ErasureCodingSmallBlockSize, shardSize, offset, types.Size(needle.GetActualSize(size, version)))

	return
}

func (ev *EcVolume) FindNeedleFromEcx(needleId types.NeedleId) (offset types.Offset, size types.Size, err error) {
	return SearchNeedleFromSortedIndex(ev.ecxFile, ev.ecxFileSize, needleId, nil)
}

func SearchNeedleFromSortedIndex(ecxFile *os.File, ecxFileSize int64, needleId types.NeedleId, processNeedleFn func(file *os.File, offset int64) error) (offset types.Offset, size types.Size, err error) {
	var key types.NeedleId
	buf := make([]byte, types.NeedleMapEntrySize)
	l, h := int64(0), ecxFileSize/types.NeedleMapEntrySize
	for l < h {
		m := (l + h) / 2
		if n, err := ecxFile.ReadAt(buf, m*types.NeedleMapEntrySize); err != nil {
			if n != types.NeedleMapEntrySize {
				return types.Offset{}, types.TombstoneFileSize, fmt.Errorf("ecx file %d read at %d: %v", ecxFileSize, m*types.NeedleMapEntrySize, err)
			}
		}
		key, offset, size = idx.IdxFileEntry(buf)
		if key == needleId {
			if processNeedleFn != nil {
				err = processNeedleFn(ecxFile, m*types.NeedleMapEntrySize)
			}
			return
		}
		if key < needleId {
			l = m + 1
		} else {
			h = m
		}
	}

	err = NotFoundError
	return
}

func (ev *EcVolume) IsTimeToDestroy() bool {
	return ev.ExpireAtSec > 0 && time.Now().Unix() > (int64(ev.ExpireAtSec)+destroyDelaySeconds)
}