* fix: prevent filer.backup stall in single-filer setups (#4977)

  When `MetaAggregator.MetaLogBuffer` is empty (which happens in single-filer setups with no peers), `ReadFromBuffer` was returning a nil error, causing `LoopProcessLogData` to enter an infinite wait loop on `ListenersCond`. This fix returns `ResumeFromDiskError` instead, allowing `SubscribeMetadata` to loop back and read from the persisted logs on disk. This ensures filer.backup continues processing events even when the in-memory aggregator buffer is empty. Fixes #4977

* test: add integration tests for metadata subscription

  - `TestMetadataSubscribeBasic`: tests basic subscription and event receiving
  - `TestMetadataSubscribeSingleFilerNoStall`: regression test for #4977; verifies the subscription doesn't stall under high load in single-filer setups
  - `TestMetadataSubscribeResumeFromDisk`: tests resuming a subscription from disk

  Related to #4977

* ci: add GitHub Actions workflow for metadata subscribe tests

  - Runs on push/PR to master affecting filer, log_buffer, or metadata subscribe code
  - Runs the integration tests for metadata subscription
  - Uploads logs on failure for debugging

  Related to #4977

* fix: use multipart form-data for file uploads in integration tests

  The filer expects multipart/form-data for file uploads, not a raw POST body. This fixes the "Content-Type isn't multipart/form-data" error.

* test: use -peers=none for faster master startup

* test: add -peers=none to remaining master startup in ec tests

* fix: use filer HTTP port 8888; WithFilerClient adds 10000 for gRPC

  `WithFilerClient` calls `ToGrpcAddress()`, which adds 10000 to the port. Passing 18888 resulted in connecting to 28888. Use 8888 instead.

* test: add concurrent writes and million updates tests

  - `TestMetadataSubscribeConcurrentWrites`: 50 goroutines writing 20 files each
  - `TestMetadataSubscribeMillionUpdates`: 1 million metadata entries via gRPC (metadata only, no actual file content, for speed)

* fix: address PR review comments

  - Handle `os.MkdirAll` errors explicitly instead of ignoring them
  - Handle log file creation errors with proper error messages
  - Replace silent event dropping with a 100ms timeout and a warning log

* Update metadata_subscribe_integration_test.go
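The buffer-empty fallback described in the first commit follows a sentinel-error pattern. The sketch below is illustrative only: `errResumeFromDisk`, `readFromBuffer`, and `subscribe` are hypothetical stand-ins for `ResumeFromDiskError`, `ReadFromBuffer`, and `SubscribeMetadata`, not the actual SeaweedFS code.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical sentinel mirroring the role of ResumeFromDiskError:
// returned when the in-memory buffer has nothing to serve, telling the
// caller to fall back to the persisted log on disk instead of blocking.
var errResumeFromDisk = errors.New("resume from disk")

// readFromBuffer is a stand-in for a MetaLogBuffer read. With no peers
// (single-filer setup) the buffer stays empty forever; returning a nil
// error here would make the caller wait on a condition variable that is
// never signaled — the stall fixed in #4977.
func readFromBuffer(bufferEmpty bool) ([]byte, error) {
	if bufferEmpty {
		return nil, errResumeFromDisk // instead of nil, nil
	}
	return []byte("event"), nil
}

// subscribe loops the way the commit describes: on the sentinel it
// re-reads persisted logs rather than waiting forever.
func subscribe() string {
	_, err := readFromBuffer(true)
	if errors.Is(err, errResumeFromDisk) {
		return "reading persisted logs from disk"
	}
	return "processing in-memory events"
}

func main() {
	fmt.Println(subscribe())
}
```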
# Erasure Coding Integration Tests

This directory contains integration tests for the fix to the EC (Erasure Coding) volume location timing bug.
## The Bug

The bug caused double storage usage during EC encoding because of three issues:

- **Silent failure**: Functions returned `nil` instead of proper error messages
- **Timing race condition**: Volume locations were collected AFTER EC encoding, when master metadata had already been updated
- **Missing cleanup**: Original volumes weren't being deleted after EC encoding

This resulted in both the original `.dat` files AND the EC `.ec00` through `.ec13` shard files coexisting, effectively doubling storage usage.
## The Fix

The fix addresses all three issues:

- **Fixed silent failures**: Updated `doDeleteVolumes()` and `doEcEncode()` to return proper errors
- **Fixed the timing race condition**: Created `doDeleteVolumesWithLocations()`, which uses pre-collected volume locations
- **Enhanced cleanup**: Volume locations are now collected BEFORE EC encoding, preventing the race condition
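The collect-before-encode ordering can be sketched as follows. This is a minimal illustration of the pattern, not the real shell command code: the `location` type, `collectVolumeLocations`, and the bodies of both functions are hypothetical; only the name `doDeleteVolumesWithLocations` and the ordering come from the fix description.

```go
package main

import "fmt"

// location is a hypothetical stand-in for a volume server address.
type location struct{ server string }

// collectVolumeLocations models the fixed ordering: locations are read
// from the master BEFORE EC encoding mutates its metadata.
func collectVolumeLocations(volumeIds []uint32) map[uint32][]location {
	locs := make(map[uint32][]location)
	for _, vid := range volumeIds {
		locs[vid] = []location{{server: fmt.Sprintf("volume-server-%d", vid%6)}}
	}
	return locs
}

// doDeleteVolumesWithLocations mirrors the fix: it deletes original
// volumes using the pre-collected locations instead of querying the
// master again, and it returns an error rather than failing silently.
func doDeleteVolumesWithLocations(locs map[uint32][]location) error {
	for vid, ll := range locs {
		if len(ll) == 0 {
			return fmt.Errorf("no locations for volume %d", vid)
		}
		// delete the original volume on each server in ll ...
	}
	return nil
}

func main() {
	vids := []uint32{1, 2, 3}
	locs := collectVolumeLocations(vids) // BEFORE EC encoding
	// ... EC encoding would run here, updating master metadata ...
	if err := doDeleteVolumesWithLocations(locs); err != nil {
		fmt.Println("cleanup failed:", err)
		return
	}
	fmt.Printf("deleted %d original volumes\n", len(locs))
}
```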
## Integration Tests

### TestECEncodingVolumeLocationTimingBug

The main integration test, which:

- **Simulates the master timing race condition**: Tests what happens when volume locations are read from the master AFTER EC encoding has updated the metadata
- **Verifies fix effectiveness**: Checks for the "Collecting volume locations...before EC encoding" message that proves the fix is working
- **Tests multi-server distribution**: Runs EC encoding with 6 volume servers to exercise shard distribution
- **Validates cleanup**: Ensures original volumes are properly cleaned up after EC encoding
### TestECEncodingMasterTimingRaceCondition

A focused test that specifically targets the master metadata timing race condition:

- **Simulates the exact race condition**: Tests volume location collection timing relative to master metadata updates
- **Detects the timing fix**: Verifies that volume locations are collected BEFORE EC encoding starts
- **Demonstrates the bug's impact**: Shows what happens when volume locations are unavailable after the master metadata update
### TestECEncodingRegressionPrevention

Regression tests that ensure:

- **Function signatures**: The fixed functions still exist and return proper errors
- **Timing patterns**: Volume location collection happens in the correct order
## Test Architecture

The tests use:

- **A real SeaweedFS cluster**: 1 master server + 6 volume servers
- **A multi-server setup**: Tests realistic EC shard distribution across multiple servers
- **Timing simulation**: Goroutines and delays to simulate race conditions
- **Output validation**: Checks for specific log messages that prove the fix is working
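The goroutine-and-delay timing simulation can be sketched as below. This is an illustrative model of the approach, not the actual test code: `simulateRace`, its delay, and the flag name are all made up for the example.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// simulateRace models the tests' timing simulation: one goroutine plays
// the master finishing EC encoding and updating its metadata, while the
// caller plays the shell command collecting volume locations first.
// It reports whether the locations were captured before the update.
func simulateRace() bool {
	var mu sync.Mutex
	metadataUpdated := false

	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		time.Sleep(10 * time.Millisecond) // EC encoding takes a moment
		mu.Lock()
		metadataUpdated = true // master metadata now reflects EC shards
		mu.Unlock()
	}()

	// Fixed ordering: read locations BEFORE the metadata update lands.
	mu.Lock()
	collectedBeforeUpdate := !metadataUpdated
	mu.Unlock()

	wg.Wait()
	return collectedBeforeUpdate
}

func main() {
	fmt.Println("locations collected before metadata update:", simulateRace())
}
```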
## Why Integration Tests Were Necessary

Unit tests could not catch this bug because:

- **Race condition**: The bug only occurred in real-world timing scenarios
- **Master-volume server interaction**: It required actual master metadata updates
- **File system operations**: It needed real volume creation and EC shard generation
- **Cleanup timing**: It required testing the sequence of operations in the correct order

The integration tests successfully catch the timing bug by:

- **Testing real command execution**: Uses the actual `ec.encode` shell command
- **Simulating race conditions**: Creates timing scenarios that expose the bug
- **Validating output messages**: Checks for the key "Collecting volume locations...before EC encoding" message
- **Monitoring cleanup behavior**: Ensures original volumes are properly deleted
## Running the Tests

```sh
# Run all integration tests
go test -v

# Run only the main timing test
go test -v -run TestECEncodingVolumeLocationTimingBug

# Run only the race condition test
go test -v -run TestECEncodingMasterTimingRaceCondition

# Skip integration tests (short mode)
go test -v -short
```
## Test Results

- **With the fix**: Shows the "Collecting volume locations for N volumes before EC encoding..." message
- **Without the fix**: No collection message, and a potential timing race condition

The tests demonstrate that the fix prevents the volume location timing bug that caused double storage usage during EC encoding operations.