Erasure Coding Integration Tests

This directory contains integration tests for the fix to the volume location timing bug in EC (Erasure Coding) encoding.

The Bug

The bug caused double storage usage during EC encoding because:

  1. Silent failure: Functions returned nil instead of proper error messages
  2. Timing race condition: Volume locations were collected AFTER EC encoding when master metadata was already updated
  3. Missing cleanup: Original volumes weren't being deleted after EC encoding

This resulted in both original .dat files AND EC .ec00-.ec13 files coexisting, effectively doubling storage usage.
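The silent-failure half of the bug can be sketched as follows. This is a hedged illustration with hypothetical names (`doDeleteVolumeSilent`, `doDeleteVolumeFixed`, `lookupLocations`), not the real SeaweedFS code: the bug pattern swallows a failed location lookup and returns nil, so the caller assumes the delete succeeded and the original .dat files are left behind.

```go
package main

import "fmt"

// lookupLocations stands in for asking the master where a volume lives.
func lookupLocations(volumeID uint32, master map[uint32][]string) []string {
	return master[volumeID]
}

// Bug pattern: a failed lookup is swallowed and nil is returned, so the
// caller assumes the delete succeeded.
func doDeleteVolumeSilent(volumeID uint32, master map[uint32][]string) error {
	locs := lookupLocations(volumeID, master)
	if len(locs) == 0 {
		return nil // silent failure: nothing was deleted, no error reported
	}
	return nil // deletes on each location would happen here
}

// Fix pattern: the failure is surfaced to the caller.
func doDeleteVolumeFixed(volumeID uint32, master map[uint32][]string) error {
	locs := lookupLocations(volumeID, master)
	if len(locs) == 0 {
		return fmt.Errorf("volume %d: no locations found, cannot delete", volumeID)
	}
	return nil
}

func main() {
	master := map[uint32][]string{1: {"srv-a"}} // volume 2 is unknown
	fmt.Println(doDeleteVolumeSilent(2, master)) // <nil>: caller never learns
	fmt.Println(doDeleteVolumeFixed(2, master))  // error is propagated
}
```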

The Fix

The fix addresses all three issues:

  1. Fixed silent failures: Updated doDeleteVolumes() and doEcEncode() to return proper errors
  2. Fixed timing race condition: Created doDeleteVolumesWithLocations() that uses pre-collected volume locations
  3. Enhanced cleanup: Volume locations are now collected BEFORE EC encoding, preventing the race condition
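The ordering that the fix enforces can be sketched like this. All names here are illustrative, not the real SeaweedFS API: locations are snapshotted before EC encoding rewrites the master's metadata, and the delete step consumes the snapshot rather than re-querying the (now stale) master.

```go
package main

import "fmt"

type cluster struct {
	locations map[uint32][]string // master's view: volume id -> servers
}

// collectVolumeLocations snapshots the master's view for the given volumes.
func (c *cluster) collectVolumeLocations(ids []uint32) map[uint32][]string {
	snapshot := make(map[uint32][]string, len(ids))
	for _, id := range ids {
		snapshot[id] = append([]string(nil), c.locations[id]...)
	}
	return snapshot
}

// ecEncode simulates the side effect behind the race: after encoding,
// the master no longer reports normal-volume locations.
func (c *cluster) ecEncode(ids []uint32) {
	for _, id := range ids {
		delete(c.locations, id)
	}
}

// deleteVolumesWithLocations uses the pre-collected snapshot, so cleanup
// still finds every replica even though the master's metadata changed.
func deleteVolumesWithLocations(snapshot map[uint32][]string) int {
	deleted := 0
	for _, servers := range snapshot {
		deleted += len(servers)
	}
	return deleted
}

func main() {
	c := &cluster{locations: map[uint32][]string{1: {"srv-a"}, 2: {"srv-b"}}}
	ids := []uint32{1, 2}
	snapshot := c.collectVolumeLocations(ids)         // step 1: before encoding
	c.ecEncode(ids)                                   // step 2: metadata updated
	fmt.Println(deleteVolumesWithLocations(snapshot)) // step 3: prints 2
}
```

Collecting after `ecEncode` would find nothing to delete, which is exactly the double-storage failure mode.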

Integration Tests

TestECEncodingVolumeLocationTimingBug

The main integration test that:

  • Simulates master timing race condition: Tests what happens when volume locations are read from master AFTER EC encoding has updated the metadata
  • Verifies fix effectiveness: Checks for the "Collecting volume locations...before EC encoding" message that proves the fix is working
  • Tests multi-server distribution: Runs EC encoding with 6 volume servers to test shard distribution
  • Validates cleanup: Ensures original volumes are properly cleaned up after EC encoding
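The output-validation step can be sketched as a simple substring check. This is an assumed simplification of the test's internals (the helper name `fixDetected` is hypothetical); the message text is the one this README describes.

```go
package main

import (
	"fmt"
	"strings"
)

// fixDetected treats the fix as present when the ec.encode output contains
// the volume-location collection message.
func fixDetected(output string) bool {
	return strings.Contains(output, "Collecting volume locations") &&
		strings.Contains(output, "before EC encoding")
}

func main() {
	withFix := "Collecting volume locations for 3 volumes before EC encoding..."
	withoutFix := "ec encode started"
	fmt.Println(fixDetected(withFix), fixDetected(withoutFix)) // true false
}
```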

TestECEncodingMasterTimingRaceCondition

A focused test that specifically targets the master metadata timing race condition:

  • Simulates the exact race condition: Tests volume location collection timing relative to master metadata updates
  • Detects timing fix: Verifies that volume locations are collected BEFORE EC encoding starts
  • Demonstrates bug impact: Shows what happens when volume locations are unavailable after master metadata update

TestECEncodingRegressionPrevention

Regression tests that ensure:

  • Function signatures: Fixed functions still exist and return proper errors
  • Timing patterns: Volume location collection happens in the correct order

Test Architecture

The tests use:

  • Real SeaweedFS cluster: 1 master server + 6 volume servers
  • Multi-server setup: Tests realistic EC shard distribution across multiple servers
  • Timing simulation: Goroutines and delays to simulate race conditions
  • Output validation: Checks for specific log messages that prove the fix is working

Why Integration Tests Were Necessary

Unit tests could not catch this bug because:

  1. Race condition: The bug only occurred in real-world timing scenarios
  2. Master-volume server interaction: Required actual master metadata updates
  3. File system operations: Needed real volume creation and EC shard generation
  4. Cleanup timing: Required testing the sequence of operations in correct order

The integration tests successfully catch the timing bug by:

  • Testing real command execution: Uses actual ec.encode shell command
  • Simulating race conditions: Creates timing scenarios that expose the bug
  • Validating output messages: Checks for the key "Collecting volume locations...before EC encoding" message
  • Monitoring cleanup behavior: Ensures original volumes are properly deleted

Running the Tests

# Run all integration tests
go test -v

# Run only the main timing test
go test -v -run TestECEncodingVolumeLocationTimingBug

# Run only the race condition test
go test -v -run TestECEncodingMasterTimingRaceCondition

# Skip integration tests (short mode)
go test -v -short

Test Results

With the fix: shows the "Collecting volume locations for N volumes before EC encoding..." message.
Without the fix: no collection message; the timing race condition can occur.

The tests demonstrate that the fix prevents the volume location timing bug that caused double storage usage in EC encoding operations.