* refactor: add ECContext structure to encapsulate EC parameters
- Create ec_context.go with ECContext struct
- NewDefaultECContext() creates context with default 10+4 configuration
- Helper methods: CreateEncoder(), ToExt(), String()
- Foundation for cleaner function signatures
- No behavior change, still uses hardcoded 10+4
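A minimal sketch of what ec_context.go plausibly looks like at this point. The helper names come from the commit above; the TotalShards field appears here because later commits reference ctx.TotalShards before replacing it with a Total() method. The exact layout is an assumption, not the repo's verbatim code.

```go
package erasure_coding

import (
	"fmt"

	"github.com/klauspost/reedsolomon"
)

// ECContext encapsulates the erasure-coding parameters that were
// previously passed around as hardcoded constants.
type ECContext struct {
	DataShards   int
	ParityShards int
	TotalShards  int // later replaced by the Total() method (see below)
}

// NewDefaultECContext returns the default 10+4 configuration.
func NewDefaultECContext() *ECContext {
	return &ECContext{DataShards: 10, ParityShards: 4, TotalShards: 14}
}

// CreateEncoder builds a Reed-Solomon encoder for this configuration,
// replacing hardcoded reedsolomon.New(10, 4) call sites.
func (c *ECContext) CreateEncoder() (reedsolomon.Encoder, error) {
	return reedsolomon.New(c.DataShards, c.ParityShards)
}

// ToExt returns the shard file extension for a shard index, e.g. ".ec03".
func (c *ECContext) ToExt(shardIndex int) string {
	return fmt.Sprintf(".ec%02d", shardIndex)
}

// String renders the configuration for logs, e.g. "10+4".
func (c *ECContext) String() string {
	return fmt.Sprintf("%d+%d", c.DataShards, c.ParityShards)
}
```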
* refactor: update ec_encoder.go to use ECContext
- Add WriteEcFilesWithContext() and RebuildEcFilesWithContext() functions
- Keep old functions for backward compatibility (call new versions)
- Update all internal functions to accept ECContext parameter
- Use ctx.DataShards, ctx.ParityShards, ctx.TotalShards consistently
- Use ctx.CreateEncoder() instead of hardcoded reedsolomon.New()
- Use ctx.ToExt() for shard file extensions
- No behavior change, still uses default 10+4 configuration
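Continuing the sketch above, the backward-compatibility pattern is the usual delegate-to-the-new-function idiom; the encoding body is elided:

```go
// WriteEcFilesWithContext encodes baseFileName into ctx.TotalShards shard
// files named baseFileName + ctx.ToExt(i), using ctx.CreateEncoder().
func WriteEcFilesWithContext(baseFileName string, ctx *ECContext) error {
	// ... encoding loop elided ...
	return nil
}

// WriteEcFiles keeps the old signature; existing callers are unaffected
// and implicitly get the default 10+4 configuration.
func WriteEcFiles(baseFileName string) error {
	return WriteEcFilesWithContext(baseFileName, NewDefaultECContext())
}
```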
* refactor: update ec_volume.go to use ECContext
- Add ECContext field to EcVolume struct
- Initialize ECContext with default configuration in NewEcVolume()
- Update LocateEcShardNeedleInterval() to use ECContext.DataShards
- Phase 1: Always uses default 10+4 configuration
- No behavior change
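Roughly the following, assuming names beyond ECContext itself (the real EcVolume struct and NewEcVolume signature carry many more fields and parameters):

```go
type EcVolume struct {
	VolumeId   needle.VolumeId
	Collection string
	ECContext  *ECContext // shard layout; always the default 10+4 at this stage
	// ... remaining fields elided ...
}

func NewEcVolume(vid needle.VolumeId, collection string) *EcVolume {
	ev := &EcVolume{VolumeId: vid, Collection: collection}
	ev.ECContext = NewDefaultECContext()
	return ev
}
```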
* refactor: add EC shard count fields to VolumeInfo protobuf
- Add data_shards_count field (field 8) to VolumeInfo message
- Add parity_shards_count field (field 9) to VolumeInfo message
- Fields are optional, 0 means use default (10+4)
- Backward compatible: fields added at end
- Phase 1: Foundation for future customization
* refactor: regenerate protobuf Go files with EC shard count fields
- Regenerated volume_server_pb/*.go with new EC fields
- DataShardsCount and ParityShardsCount accessors added to VolumeInfo
- No behavior change, fields not yet used
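On the Go side the generated getters read naturally; since 0 means "use default", legacy .vif files fall back cleanly. A hedged sketch of the eventual consumer (a later commit consolidates these fields into an EcShardConfig message, so accessor names change again afterwards):

```go
// ecContextFromVolumeInfo is a sketch; the surrounding plumbing is assumed.
func ecContextFromVolumeInfo(vi *volume_server_pb.VolumeInfo) *erasure_coding.ECContext {
	ds, ps := int(vi.GetDataShardsCount()), int(vi.GetParityShardsCount())
	if ds == 0 || ps == 0 {
		// legacy .vif without the new fields: fall back to the default 10+4
		return erasure_coding.NewDefaultECContext()
	}
	return &erasure_coding.ECContext{DataShards: ds, ParityShards: ps}
}
```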
* refactor: update VolumeEcShardsGenerate to use ECContext
- Create ECContext with default configuration in VolumeEcShardsGenerate
- Use ecCtx.TotalShards and ecCtx.ToExt() in cleanup
- Call WriteEcFilesWithContext() instead of WriteEcFiles()
- Save EC configuration (DataShardsCount, ParityShardsCount) to VolumeInfo
- Log EC context being used
- Phase 1: Always uses default 10+4 configuration
- No behavior change
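A hedged sketch of the generate path; request and variable names beyond those in the commit message are assumptions:

```go
ecCtx := erasure_coding.NewDefaultECContext()
glog.V(0).Infof("VolumeEcShardsGenerate: volume %d using EC context %s", req.VolumeId, ecCtx.String())

if err := erasure_coding.WriteEcFilesWithContext(baseFileName, ecCtx); err != nil {
	return nil, fmt.Errorf("WriteEcFilesWithContext %s: %v", baseFileName, err)
}

// Persist the layout so later loads and rebuilds can recover it from the .vif file.
volumeInfo.DataShardsCount = uint32(ecCtx.DataShards)
volumeInfo.ParityShardsCount = uint32(ecCtx.ParityShards)
```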
* fmt
* refactor: update ec_test.go to use ECContext
- Update TestEncodingDecoding to create and use ECContext
- Update validateFiles() to accept ECContext parameter
- Update removeGeneratedFiles() to use ctx.TotalShards and ctx.ToExt()
- Test passes with default 10+4 configuration
* refactor: use EcShardConfig message instead of separate fields
* optimize: pre-calculate row sizes in EC encoding loop
* refactor: replace TotalShards field with Total() method
- Remove TotalShards field from ECContext to avoid field drift
- Add Total() method that computes DataShards + ParityShards
- Update all references to use ctx.Total() instead of ctx.TotalShards
- Read EC config from VolumeInfo when loading EC volumes
- Read data shard count from .vif in VolumeEcShardsToVolume
- Use >= instead of > for exact boundary handling in encoding loops
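The method itself is trivial, which is the point: computed on demand, it cannot drift from the two fields it derives from.

```go
// Total returns the shard count for this configuration.
func (c *ECContext) Total() int {
	return c.DataShards + c.ParityShards
}
```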
* optimize: simplify VolumeEcShardsToVolume to use existing EC context
- Remove redundant CollectEcShards call
- Remove redundant .vif file loading
- Use v.ECContext.DataShards directly (already loaded by NewEcVolume)
- Slice tempShards instead of collecting again
* refactor: rename MaxShardId to MaxShardCount for clarity
- Change from MaxShardId=31 to MaxShardCount=32
- Eliminates confusing +1 arithmetic (MaxShardId+1)
- More intuitive: MaxShardCount directly represents the limit
* fix: support custom EC ratios beyond 14 shards in VolumeEcShardsToVolume
- Add MaxShardId constant (31, since ShardBits is uint32)
- Use MaxShardId+1 (32) instead of TotalShardsCount (14) for tempShards buffer
- Prevents panic when slicing for volumes with >14 total shards
- Critical fix for custom EC configurations like 20+10
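The essence of the fix, sketched (tempShards and v.ECContext.DataShards are named in these commits; the surrounding code is assumed):

```go
// Before: make([][]byte, TotalShardsCount) — slicing past 14 shards panicked.
tempShards := make([][]byte, MaxShardId+1) // 32 slots, one per representable shard ID
dataShards := tempShards[:v.ECContext.DataShards] // safe for e.g. a 20+10 layout
```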
* fix: add validation for EC shard counts from VolumeInfo
- Validate DataShards/ParityShards are positive and within MaxShardCount
- Prevent zero or invalid values that could cause divide-by-zero
- Fallback to defaults if validation fails, with warning log
- VolumeEcShardsGenerate now preserves existing EC config when regenerating
- Critical safety fix for corrupted or legacy .vif files
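Approximately the following; glog is the logger SeaweedFS uses, and the message wording is an assumption:

```go
// ds, ps were read from the volume's .vif file (accessors elided; they
// changed with the EcShardConfig consolidation above).
if ds <= 0 || ps <= 0 || ds+ps > erasure_coding.MaxShardCount {
	glog.Warningf("invalid EC config %d+%d in .vif, falling back to default %d+%d",
		ds, ps, erasure_coding.DataShardsCount, erasure_coding.ParityShardsCount)
	ds, ps = erasure_coding.DataShardsCount, erasure_coding.ParityShardsCount
}
```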
* fix: RebuildEcFiles now loads EC config from .vif file
- Critical: RebuildEcFiles was always using default 10+4 config
- Now loads actual EC config from .vif file when rebuilding shards
- Validates config before use (positive shards, within MaxShardCount)
- Falls back to default if .vif missing or invalid
- Prevents data corruption when rebuilding custom EC volumes
* add: defensive validation for dataShards in VolumeEcShardsToVolume
- Validate dataShards > 0 and <= MaxShardCount before use
- Prevents panic from corrupted or uninitialized ECContext
- Returns clear error message instead of panic
- Defense-in-depth: validates even though upstream should catch issues
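Unlike the .vif path above, which falls back to defaults, this path fails loudly, since a bad ECContext here indicates an upstream bug. A sketch with surrounding names assumed:

```go
dataShards := v.ECContext.DataShards
if dataShards <= 0 || dataShards > erasure_coding.MaxShardCount {
	return nil, fmt.Errorf("invalid dataShards %d (valid range 1..%d)",
		dataShards, erasure_coding.MaxShardCount)
}
```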
* fix: replace TotalShardsCount with MaxShardCount for custom EC ratio support
Critical fixes to support custom EC ratios > 14 shards:
disk_location_ec.go:
- validateEcVolume: Check shards 0-31 instead of 0-13 during validation
- removeEcVolumeFiles: Remove shards 0-31 instead of 0-13 during cleanup
ec_volume_info.go ShardBits methods:
- ShardIds(): Iterate up to MaxShardCount (32) instead of TotalShardsCount (14)
- ToUint32Slice(): Iterate up to MaxShardCount (32)
- IndexToShardId(): Iterate up to MaxShardCount (32)
- MinusParityShards(): Remove shards 10-31 instead of 10-13 (added note about Phase 2)
- Minus() shard size copy: Iterate up to MaxShardCount (32)
- resizeShardSizes(): Iterate up to MaxShardCount (32)
Without these changes:
- Custom EC ratios > 14 total shards would fail validation on startup
- Shards 14-31 would never be discovered or cleaned up
- ShardBits operations would miss shards >= 14
These changes are backward compatible - MaxShardCount (32) includes
the default TotalShardsCount (14), so existing 10+4 volumes work as before.
* fix: replace TotalShardsCount with MaxShardCount in critical data structures
Critical fixes for buffer allocations and loops that must support
custom EC ratios up to 32 shards:
Data Structures:
- store_ec.go:354: Buffer allocation for shard recovery (bufs array)
- topology_ec.go:14: EcShardLocations.Locations fixed array size
- command_ec_rebuild.go:268: EC shard map allocation
- command_ec_common.go:626: Shard-to-locations map allocation
Shard Discovery Loops:
- ec_task.go:378: Loop to find generated shard files
- ec_shard_management.go: All 8 loops that check/count EC shards
These changes are critical because:
1. Buffer allocations sized to 14 would cause index-out-of-bounds panics
when accessing shards 14-31
2. Fixed arrays sized to 14 would truncate shard location data
3. Loops limited to 0-13 would never discover/manage shards 14-31
Note: command_ec_encode.go:208 intentionally NOT changed - it creates
shard IDs to mount after encoding. In Phase 1 we always generate 14
shards, so this remains TotalShardsCount and will be made dynamic in
Phase 2 based on actual EC context.
Without these fixes, custom EC ratios > 14 total shards would cause:
- Runtime panics (array index out of bounds)
- Data loss (shards 14-31 never discovered/tracked)
- Incomplete shard management (missing shards not detected)
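A self-contained illustration of failure mode 1, assuming a hypothetical 20+10 layout:

```go
package main

import "fmt"

const (
	TotalShardsCount = 14 // default 10+4 layout
	MaxShardCount    = 32 // ceiling imposed by the uint32 ShardBits
)

func main() {
	bufs := make([][]byte, TotalShardsCount)
	shardId := 25 // a 20+10 layout produces shard IDs up to 29
	fmt.Println(shardId < len(bufs)) // false: bufs[shardId] would panic

	bufs = make([][]byte, MaxShardCount)
	bufs[shardId] = []byte("shard data") // fine once sized to the ceiling
	fmt.Println(len(bufs[shardId]))
}
```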
* refactor: move MaxShardCount constant to ec_encoder.go
Moved MaxShardCount from ec_volume_info.go to ec_encoder.go to group it
with other shard count constants (DataShardsCount, ParityShardsCount,
TotalShardsCount). This improves code organization and makes it easier
to understand the relationship between these constants.
Location: ec_encoder.go line 22, between TotalShardsCount and MinTotalDisks
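Grouped, the constants read roughly as below; MinTotalDisks' definition is inferred from context and may not match the repo exactly:

```go
const (
	DataShardsCount   = 10                                  // default data shards
	ParityShardsCount = 4                                   // default parity shards
	TotalShardsCount  = DataShardsCount + ParityShardsCount // 14, the default layout
	MaxShardCount     = 32                                  // ceiling: shard IDs must fit in the uint32 ShardBits
	MinTotalDisks     = TotalShardsCount/ParityShardsCount + 1
)
```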
* improve: add defensive programming and better error messages for EC
Code review improvements from CodeRabbit:
1. ShardBits Guardrails (ec_volume_info.go):
- AddShardId, RemoveShardId: Reject shard IDs >= MaxShardCount
- HasShardId: Return false for out-of-range shard IDs
- Prevents silent no-ops from bit shifts with invalid IDs
2. Future-Proof Regex (disk_location_ec.go):
- Updated regex from \.ec[0-9][0-9] to \.ec\d{2,3}
- Now matches .ec00 through .ec999 (currently .ec00-.ec31 used)
- Supports future increases to MaxShardCount beyond 99
3. Better Error Messages (volume_grpc_erasure_coding.go):
- Include valid range (1..32) in dataShards validation error
- Helps operators quickly identify the problem
4. Validation Before Save (volume_grpc_erasure_coding.go):
- Validate ECContext (DataShards > 0, ParityShards > 0, Total <= MaxShardCount)
- Log EC config being saved to .vif for debugging
- Prevents writing invalid configs to disk
These changes improve robustness and debuggability without changing
core functionality.
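Point 2 in isolation, as a runnable check:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`\.ec\d{2,3}`) // previously `\.ec[0-9][0-9]`
	fmt.Println(re.MatchString("123.ec00"))  // true
	fmt.Println(re.MatchString("123.ec31"))  // true: highest shard ID in use today
	fmt.Println(re.MatchString("123.ec100")) // true: room beyond 99 shards
	fmt.Println(re.MatchString("123.idx"))   // false
}
```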
* fmt
* fix: critical bugs from code review + clean up comments
Critical bug fixes:
1. command_ec_rebuild.go: Fixed indentation causing compilation error
- Properly nested if/for blocks in registerEcNode
2. ec_shard_management.go: Fixed isComplete logic incorrectly using MaxShardCount
- Changed from MaxShardCount (32) back to TotalShardsCount (14)
- Default 10+4 volumes were being incorrectly reported as incomplete
- Missing shards 14-31 were being incorrectly reported as missing
- Fixed in 4 locations: volume completeness checks and getMissingShards
3. ec_volume_info.go: Fixed MinusParityShards removing too many shards
- Changed from MaxShardCount (32) back to TotalShardsCount (14)
- Was incorrectly removing shard IDs 10-31 instead of just 10-13
Comment cleanup:
- Removed Phase 1/Phase 2 references (development plan context)
- Replaced with clear statements about default 10+4 configuration
- SeaweedFS repo uses fixed 10+4 EC ratio, no phases needed
Root cause: Over-aggressive replacement of TotalShardsCount with MaxShardCount.
MaxShardCount (32) is the limit for buffer allocations and shard ID loops,
but TotalShardsCount (14) must be used for default EC configuration logic.
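The distilled rule, as two hedged one-liners (bufs is named in the commits above; the completeness check is paraphrased, not the repo's verbatim code):

```go
// Capacity concerns (buffers, shard-ID loops) size to the 32-shard ceiling:
bufs := make([][]byte, MaxShardCount)

// Layout concerns (completeness, parity ranges) compare against the 14-shard default:
isComplete := shardBits.ShardIdCount() == TotalShardsCount
```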
* fix: add defensive bounds checks and compute actual shard counts
Critical fixes from code review:
1. topology_ec.go: Add defensive bounds checks to AddShard/DeleteShard
- Prevent panic when shardId >= MaxShardCount (32)
- Return false instead of crashing on out-of-range shard IDs
2. command_ec_common.go: Fix doBalanceEcShardsAcrossRacks
- Was using hardcoded TotalShardsCount (14) for all volumes
- Now computes actual totalShardsForVolume from rackToShardCount
- Fixes incorrect rebalancing for volumes with custom EC ratios
- Example: 5+2=7 shards would incorrectly use 14 as average
These fixes improve robustness and prepare for future custom EC ratios
without changing current behavior for default 10+4 volumes.
Note: MinusParityShards and ec_task.go intentionally NOT changed for
seaweedfs repo - these will be enhanced in seaweed-enterprise repo
where custom EC ratio configuration is added.
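The corrected average, sketched (rackToShardCount is named in the commit message; the rest is assumed):

```go
// Sum the shards this volume actually has instead of assuming 14.
totalShardsForVolume := 0
for _, count := range rackToShardCount {
	totalShardsForVolume += count
}
// A 5+2 volume now averages 7 shards across racks, not 14.
averageShardsPerRack := (totalShardsForVolume + rackCount - 1) / rackCount
```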
* fmt
* style: make MaxShardCount type casting explicit in loops
Improved code clarity by explicitly casting MaxShardCount to the
appropriate type when used in loop comparisons:
- ShardId comparisons: Cast to ShardId(MaxShardCount)
- uint32 comparisons: Cast to uint32(MaxShardCount)
Changed in 5 locations:
- Minus() loop (line 90)
- ShardIds() loop (line 143)
- ToUint32Slice() loop (line 152)
- IndexToShardId() loop (line 219)
- resizeShardSizes() loop (line 248)
This makes the intent explicit and the type conversions easier to read.
No functional changes - purely a style improvement.
weed/storage/erasure_coding/ec_volume_info.go (261 lines, Go):
```go
package erasure_coding

import (
	"math/bits"

	"github.com/seaweedfs/seaweedfs/weed/pb/master_pb"
	"github.com/seaweedfs/seaweedfs/weed/storage/needle"
)

// data structure used in master
type EcVolumeInfo struct {
	VolumeId    needle.VolumeId
	Collection  string
	ShardBits   ShardBits
	DiskType    string
	DiskId      uint32  // ID of the disk this EC volume is on
	ExpireAtSec uint64  // ec volume destroy time, calculated from the ec volume was created
	ShardSizes  []int64 // optimized: sizes for shards in order of set bits in ShardBits
}

func (ecInfo *EcVolumeInfo) AddShardId(id ShardId) {
	oldBits := ecInfo.ShardBits
	ecInfo.ShardBits = ecInfo.ShardBits.AddShardId(id)

	// If shard was actually added, resize ShardSizes array
	if oldBits != ecInfo.ShardBits {
		ecInfo.resizeShardSizes(oldBits)
	}
}

func (ecInfo *EcVolumeInfo) RemoveShardId(id ShardId) {
	oldBits := ecInfo.ShardBits
	ecInfo.ShardBits = ecInfo.ShardBits.RemoveShardId(id)

	// If shard was actually removed, resize ShardSizes array
	if oldBits != ecInfo.ShardBits {
		ecInfo.resizeShardSizes(oldBits)
	}
}

func (ecInfo *EcVolumeInfo) SetShardSize(id ShardId, size int64) {
	ecInfo.ensureShardSizesInitialized()
	if index, found := ecInfo.ShardBits.ShardIdToIndex(id); found && index < len(ecInfo.ShardSizes) {
		ecInfo.ShardSizes[index] = size
	}
}

func (ecInfo *EcVolumeInfo) GetShardSize(id ShardId) (int64, bool) {
	if index, found := ecInfo.ShardBits.ShardIdToIndex(id); found && index < len(ecInfo.ShardSizes) {
		return ecInfo.ShardSizes[index], true
	}
	return 0, false
}

func (ecInfo *EcVolumeInfo) GetTotalSize() int64 {
	var total int64
	for _, size := range ecInfo.ShardSizes {
		total += size
	}
	return total
}

func (ecInfo *EcVolumeInfo) HasShardId(id ShardId) bool {
	return ecInfo.ShardBits.HasShardId(id)
}

func (ecInfo *EcVolumeInfo) ShardIds() (ret []ShardId) {
	return ecInfo.ShardBits.ShardIds()
}

func (ecInfo *EcVolumeInfo) ShardIdCount() (count int) {
	return ecInfo.ShardBits.ShardIdCount()
}

func (ecInfo *EcVolumeInfo) Minus(other *EcVolumeInfo) *EcVolumeInfo {
	ret := &EcVolumeInfo{
		VolumeId:    ecInfo.VolumeId,
		Collection:  ecInfo.Collection,
		ShardBits:   ecInfo.ShardBits.Minus(other.ShardBits),
		DiskType:    ecInfo.DiskType,
		DiskId:      ecInfo.DiskId,
		ExpireAtSec: ecInfo.ExpireAtSec,
	}

	// Initialize optimized ShardSizes for the result
	ret.ensureShardSizesInitialized()

	// Copy shard sizes for remaining shards
	retIndex := 0
	for shardId := ShardId(0); shardId < ShardId(MaxShardCount) && retIndex < len(ret.ShardSizes); shardId++ {
		if ret.ShardBits.HasShardId(shardId) {
			if size, exists := ecInfo.GetShardSize(shardId); exists {
				ret.ShardSizes[retIndex] = size
			}
			retIndex++
		}
	}

	return ret
}

func (ecInfo *EcVolumeInfo) ToVolumeEcShardInformationMessage() (ret *master_pb.VolumeEcShardInformationMessage) {
	t := &master_pb.VolumeEcShardInformationMessage{
		Id:          uint32(ecInfo.VolumeId),
		EcIndexBits: uint32(ecInfo.ShardBits),
		Collection:  ecInfo.Collection,
		DiskType:    ecInfo.DiskType,
		ExpireAtSec: ecInfo.ExpireAtSec,
		DiskId:      ecInfo.DiskId,
	}

	// Directly set the optimized ShardSizes
	t.ShardSizes = make([]int64, len(ecInfo.ShardSizes))
	copy(t.ShardSizes, ecInfo.ShardSizes)

	return t
}

type ShardBits uint32 // use bits to indicate the shard id, use 32 bits just for possible future extension

func (b ShardBits) AddShardId(id ShardId) ShardBits {
	if id >= MaxShardCount {
		return b // Reject out-of-range shard IDs
	}
	return b | (1 << id)
}

func (b ShardBits) RemoveShardId(id ShardId) ShardBits {
	if id >= MaxShardCount {
		return b // Reject out-of-range shard IDs
	}
	return b &^ (1 << id)
}

func (b ShardBits) HasShardId(id ShardId) bool {
	if id >= MaxShardCount {
		return false // Out-of-range shard IDs are never present
	}
	return b&(1<<id) > 0
}

func (b ShardBits) ShardIds() (ret []ShardId) {
	for i := ShardId(0); i < ShardId(MaxShardCount); i++ {
		if b.HasShardId(i) {
			ret = append(ret, i)
		}
	}
	return
}

func (b ShardBits) ToUint32Slice() (ret []uint32) {
	for i := uint32(0); i < uint32(MaxShardCount); i++ {
		if b.HasShardId(ShardId(i)) {
			ret = append(ret, i)
		}
	}
	return
}

func (b ShardBits) ShardIdCount() (count int) {
	for count = 0; b > 0; count++ {
		b &= b - 1
	}
	return
}

func (b ShardBits) Minus(other ShardBits) ShardBits {
	return b &^ other
}

func (b ShardBits) Plus(other ShardBits) ShardBits {
	return b | other
}

func (b ShardBits) MinusParityShards() ShardBits {
	// Removes parity shards from the bit mask
	// Assumes default 10+4 EC layout where parity shards are IDs 10-13
	for i := DataShardsCount; i < TotalShardsCount; i++ {
		b = b.RemoveShardId(ShardId(i))
	}
	return b
}

// ShardIdToIndex converts a shard ID to its index position in the ShardSizes slice
// Returns the index and true if the shard is present, -1 and false if not present
func (b ShardBits) ShardIdToIndex(shardId ShardId) (index int, found bool) {
	if !b.HasShardId(shardId) {
		return -1, false
	}

	// Create a mask for bits before the shardId
	mask := uint32((1 << shardId) - 1)
	// Count set bits before the shardId using efficient bit manipulation
	index = bits.OnesCount32(uint32(b) & mask)
	return index, true
}

// EachSetIndex iterates over all set shard IDs and calls the provided function for each
// This is highly efficient using bit manipulation - only iterates over actual set bits
func (b ShardBits) EachSetIndex(fn func(shardId ShardId)) {
	bitsValue := uint32(b)
	for bitsValue != 0 {
		// Find the position of the least significant set bit
		shardId := ShardId(bits.TrailingZeros32(bitsValue))
		fn(shardId)
		// Clear the least significant set bit
		bitsValue &= bitsValue - 1
	}
}

// IndexToShardId converts an index position in ShardSizes slice to the corresponding shard ID
// Returns the shard ID and true if valid index, -1 and false if invalid index
func (b ShardBits) IndexToShardId(index int) (shardId ShardId, found bool) {
	if index < 0 {
		return 0, false
	}

	currentIndex := 0
	for i := ShardId(0); i < ShardId(MaxShardCount); i++ {
		if b.HasShardId(i) {
			if currentIndex == index {
				return i, true
			}
			currentIndex++
		}
	}
	return 0, false // index out of range
}

// Helper methods for EcVolumeInfo to manage the optimized ShardSizes slice
func (ecInfo *EcVolumeInfo) ensureShardSizesInitialized() {
	expectedLength := ecInfo.ShardBits.ShardIdCount()
	if ecInfo.ShardSizes == nil {
		ecInfo.ShardSizes = make([]int64, expectedLength)
	} else if len(ecInfo.ShardSizes) != expectedLength {
		// Resize and preserve existing data
		ecInfo.resizeShardSizes(ecInfo.ShardBits)
	}
}

func (ecInfo *EcVolumeInfo) resizeShardSizes(prevShardBits ShardBits) {
	expectedLength := ecInfo.ShardBits.ShardIdCount()
	newSizes := make([]int64, expectedLength)

	// Copy existing sizes to new positions based on current ShardBits
	if len(ecInfo.ShardSizes) > 0 {
		newIndex := 0
		for shardId := ShardId(0); shardId < ShardId(MaxShardCount) && newIndex < expectedLength; shardId++ {
			if ecInfo.ShardBits.HasShardId(shardId) {
				// Try to find the size for this shard in the old array using previous ShardBits
				if oldIndex, found := prevShardBits.ShardIdToIndex(shardId); found && oldIndex < len(ecInfo.ShardSizes) {
					newSizes[newIndex] = ecInfo.ShardSizes[oldIndex]
				}
				newIndex++
			}
		}
	}

	ecInfo.ShardSizes = newSizes
}
```