# S3 Server-Side Encryption (SSE) Integration Tests
This directory contains comprehensive integration tests for SeaweedFS S3 API Server-Side Encryption functionality. These tests validate the complete end-to-end encryption/decryption pipeline from S3 API requests through filer metadata storage.
## Overview
The SSE integration tests cover three main encryption methods:
- SSE-C (Customer-Provided Keys): Client provides encryption keys via request headers
- SSE-KMS (Key Management Service): Server manages encryption keys through a KMS provider
- SSE-S3 (Server-Managed Keys): Server automatically manages encryption keys
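The three modes differ only in who supplies the key, which shows up directly in the request fields. As a hedged sketch (the bucket name, object keys, KMS key ID, and the surrounding function are illustrative placeholders, not identifiers from the test suite), this is how each mode is requested with AWS SDK Go v2, the client library these tests use:

```go
package ssetest

import (
	"bytes"
	"context"
	"crypto/md5"
	"crypto/rand"
	"encoding/base64"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func putWithEachSSEMode(ctx context.Context, client *s3.Client, data []byte) error {
	// SSE-C: the client supplies a 256-bit key (base64) plus its MD5 on every request.
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		return err
	}
	keyMD5 := md5.Sum(key)
	if _, err := client.PutObject(ctx, &s3.PutObjectInput{
		Bucket:               aws.String("test-sse-bucket"),
		Key:                  aws.String("object-ssec"),
		Body:                 bytes.NewReader(data),
		SSECustomerAlgorithm: aws.String("AES256"),
		SSECustomerKey:       aws.String(base64.StdEncoding.EncodeToString(key)),
		SSECustomerKeyMD5:    aws.String(base64.StdEncoding.EncodeToString(keyMD5[:])),
	}); err != nil {
		return err
	}

	// SSE-KMS: the server encrypts with a KMS-managed key, named by ID.
	if _, err := client.PutObject(ctx, &s3.PutObjectInput{
		Bucket:               aws.String("test-sse-bucket"),
		Key:                  aws.String("object-ssekms"),
		Body:                 bytes.NewReader(data),
		ServerSideEncryption: types.ServerSideEncryptionAwsKms,
		SSEKMSKeyId:          aws.String("test-key-id"), // illustrative key ID
	}); err != nil {
		return err
	}

	// SSE-S3: the server manages the key entirely; only the algorithm is named.
	_, err := client.PutObject(ctx, &s3.PutObjectInput{
		Bucket:               aws.String("test-sse-bucket"),
		Key:                  aws.String("object-sses3"),
		Body:                 bytes.NewReader(data),
		ServerSideEncryption: types.ServerSideEncryptionAes256,
	})
	return err
}
```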
## 🆕 Real KMS Integration
The tests now include real KMS integration with OpenBao, providing:
- ✅ Actual encryption/decryption operations (not mock keys)
- ✅ Multiple KMS keys for different security levels
- ✅ Per-bucket KMS configuration testing
- ✅ Performance benchmarking with real KMS operations
See README_KMS.md for detailed KMS integration documentation.
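For the per-bucket configuration point above, the standard S3 API for per-bucket default encryption is `PutBucketEncryption`; whether SeaweedFS wires its per-bucket KMS configuration through this call or through filer-side configuration is an assumption here, and README_KMS.md is authoritative. A sketch with an illustrative bucket and key ID:

```go
// Hedged sketch: per-bucket default SSE-KMS via the standard S3 API.
// "test-sse-bucket" and "test-key-id" are illustrative placeholders.
_, err := client.PutBucketEncryption(ctx, &s3.PutBucketEncryptionInput{
	Bucket: aws.String("test-sse-bucket"),
	ServerSideEncryptionConfiguration: &types.ServerSideEncryptionConfiguration{
		Rules: []types.ServerSideEncryptionRule{{
			ApplyServerSideEncryptionByDefault: &types.ServerSideEncryptionByDefault{
				SSEAlgorithm:   types.ServerSideEncryptionAwsKms,
				KMSMasterKeyID: aws.String("test-key-id"),
			},
		}},
	},
})
```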
## Why Integration Tests Matter
These integration tests were created to close a critical gap in test coverage. While the SeaweedFS codebase had comprehensive unit tests for SSE components, it lacked integration tests that validated the complete request flow:
```
Client Request → S3 API → Filer Storage → Metadata Persistence → Retrieval → Decryption
```
### The Bug These Tests Would Have Caught
A critical bug was discovered where:
- ✅ S3 API correctly encrypted data and sent metadata headers to the filer
- ❌ Filer did not process SSE metadata headers, losing all encryption metadata
- ❌ Objects could be encrypted but never decrypted (metadata was lost)
Unit tests passed because they tested components in isolation, but the integration was broken. These integration tests specifically validate that:
- Encryption metadata is correctly sent to the filer
- Filer properly processes and stores the metadata
- Objects can be successfully retrieved and decrypted
- Copy operations preserve encryption metadata
- Multipart uploads maintain encryption consistency
## Test Structure
### Core Integration Tests
#### Basic Functionality
- `TestSSECIntegrationBasic` - Basic SSE-C PUT/GET cycle
- `TestSSEKMSIntegrationBasic` - Basic SSE-KMS PUT/GET cycle
#### Data Size Validation
- `TestSSECIntegrationVariousDataSizes` - SSE-C with various data sizes (0B to 1MB)
- `TestSSEKMSIntegrationVariousDataSizes` - SSE-KMS with various data sizes
#### Object Copy Operations
- `TestSSECObjectCopyIntegration` - SSE-C object copying (key rotation, encryption changes)
- `TestSSEKMSObjectCopyIntegration` - SSE-KMS object copying
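The key-rotation case is the interesting one: the old key decrypts the source while a new key re-encrypts the destination. A hedged sketch (the `oldKey*`/`newKey*` values stand for base64-encoded keys and MD5s generated as in the earlier example; bucket and object names are placeholders):

```go
// Sketch of SSE-C key rotation via CopyObject.
_, err := client.CopyObject(ctx, &s3.CopyObjectInput{
	Bucket:     aws.String("test-sse-bucket"),
	Key:        aws.String("object-rotated"),
	CopySource: aws.String("test-sse-bucket/object-ssec"),

	// Old key: required to decrypt the copy source.
	CopySourceSSECustomerAlgorithm: aws.String("AES256"),
	CopySourceSSECustomerKey:       aws.String(oldKeyB64),
	CopySourceSSECustomerKeyMD5:    aws.String(oldKeyMD5B64),

	// New key: used to re-encrypt the destination object.
	SSECustomerAlgorithm: aws.String("AES256"),
	SSECustomerKey:       aws.String(newKeyB64),
	SSECustomerKeyMD5:    aws.String(newKeyMD5B64),
})
```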
#### Multipart Uploads
- `TestSSEMultipartUploadIntegration` - SSE multipart uploads for large objects
#### Error Conditions
- `TestSSEErrorConditions` - Invalid keys, malformed requests, error handling
### Performance Tests
- `BenchmarkSSECThroughput` - SSE-C performance benchmarking
- `BenchmarkSSEKMSThroughput` - SSE-KMS performance benchmarking
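As a sketch of the shape such a benchmark takes (not the suite's actual code: `putSSEC` is an assumed helper that uploads data with a fixed SSE-C key, and `ctx`/`client` are assumed package-level fixtures), `b.SetBytes` is what turns the loop into a throughput measurement:

```go
// Hedged sketch of an SSE-C throughput benchmark.
func BenchmarkSSECThroughput(b *testing.B) {
	data := make([]byte, 1<<20)  // 1 MB payload per iteration
	b.SetBytes(int64(len(data))) // report MB/s alongside ns/op
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if err := putSSEC(ctx, client, data); err != nil {
			b.Fatal(err)
		}
	}
}
```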
## Running Tests
### Prerequisites
1. **Build SeaweedFS**: Ensure the `weed` binary is built and available in PATH:

   ```bash
   cd /path/to/seaweedfs
   make
   ```

2. **Dependencies**: Tests use AWS SDK Go v2 and testify - these are handled by Go modules
### Quick Test
Run basic SSE integration tests:
```bash
make test-basic
```
### Comprehensive Testing
Run all SSE integration tests:
```bash
make test
```
### Specific Test Categories
```bash
make test-ssec      # SSE-C tests only
make test-ssekms    # SSE-KMS tests only
make test-copy      # Copy operation tests
make test-multipart # Multipart upload tests
make test-errors    # Error condition tests
```
### Performance Testing
```bash
make benchmark # Performance benchmarks
make perf      # Various data size performance tests
```
### KMS Integration Testing
```bash
make setup-openbao           # Set up OpenBao KMS
make test-with-kms           # Run all SSE tests with real KMS
make test-ssekms-integration # Run SSE-KMS with OpenBao only
make clean-kms               # Clean up KMS environment
```
### Development Testing
```bash
make manual-start # Start SeaweedFS for manual testing
# ... run manual tests ...
make manual-stop  # Stop and cleanup
```
## Test Configuration
### Default Configuration
The tests use these default settings:
- **S3 Endpoint**: `http://127.0.0.1:8333`
- **Access Key**: `some_access_key1`
- **Secret Key**: `some_secret_key1`
- **Region**: `us-east-1`
- **Bucket Prefix**: `test-sse-`
### Custom Configuration
Override defaults via environment variables:
```bash
S3_PORT=8444 FILER_PORT=8889 make test
```
### Test Environment
Each test run:
- Starts a complete SeaweedFS cluster (master, volume, filer, s3)
- Configures KMS support for SSE-KMS tests
- Creates temporary buckets with unique names
- Runs tests with real HTTP requests
- Cleans up all test artifacts
## Test Data Coverage
### Data Sizes Tested
- 0 bytes: Empty files (edge case)
- 1 byte: Minimal data
- 16 bytes: Single AES block
- 31 bytes: Just under two blocks
- 32 bytes: Exactly two blocks
- 100 bytes: Small file
- 1 KB: Small text file
- 8 KB: Medium file
- 64 KB: Large file
- 1 MB: Very large file
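A table-driven loop is the natural shape for covering the sizes above. As a hedged sketch (`putAndGetSSEC` is an assumed helper that uploads with SSE-C, downloads with the same key, and returns the decrypted bytes; it is not an identifier from the actual tests):

```go
// Sketch: round-trip each tested size through SSE-C and compare bytes.
sizes := []int{0, 1, 16, 31, 32, 100, 1 << 10, 8 << 10, 64 << 10, 1 << 20}
for _, n := range sizes {
	data := make([]byte, n)
	if _, err := rand.Read(data); err != nil {
		t.Fatal(err)
	}
	got, err := putAndGetSSEC(ctx, client, data)
	if err != nil {
		t.Fatalf("size %d: %v", n, err)
	}
	if !bytes.Equal(got, data) {
		t.Fatalf("size %d: round-tripped bytes differ", n)
	}
}
```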
### Encryption Key Scenarios
- SSE-C: Random 256-bit keys, key rotation, wrong keys
- SSE-KMS: Various key IDs, encryption contexts, bucket keys
- Copy Operations: Same key, different keys, encryption transitions
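The "wrong keys" scenario in the SSE-C item above deserves a concrete illustration: reading an SSE-C object with any key other than the upload key must fail rather than return plaintext. A hedged sketch (bucket and object names are placeholders):

```go
// Sketch: GET with a freshly generated (wrong) SSE-C key should error out.
wrongKey := make([]byte, 32)
if _, err := rand.Read(wrongKey); err != nil {
	t.Fatal(err)
}
wrongMD5 := md5.Sum(wrongKey)
_, err := client.GetObject(ctx, &s3.GetObjectInput{
	Bucket:               aws.String("test-sse-bucket"),
	Key:                  aws.String("object-ssec"),
	SSECustomerAlgorithm: aws.String("AES256"),
	SSECustomerKey:       aws.String(base64.StdEncoding.EncodeToString(wrongKey)),
	SSECustomerKeyMD5:    aws.String(base64.StdEncoding.EncodeToString(wrongMD5[:])),
})
if err == nil {
	t.Fatal("expected GetObject with the wrong SSE-C key to fail")
}
```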
## Critical Test Scenarios
### Metadata Persistence Validation
The integration tests specifically validate scenarios that would catch metadata storage bugs:
```go
// 1. Upload with SSE-C
client.PutObject(..., SSECustomerKey: key)    // ← Metadata sent to filer

// 2. Retrieve with SSE-C
client.GetObject(..., SSECustomerKey: key)    // ← Metadata retrieved from filer

// 3. Verify decryption works
assert.Equal(originalData, decryptedData)     // ← Would fail if metadata lost
```
### Content-Length Validation
Tests verify that Content-Length headers are correct, which would catch bugs related to IV handling:
```go
assert.Equal(int64(originalSize), resp.ContentLength) // ← Would catch IV-in-stream bugs
```
## Debugging
### View Logs
```bash
make debug-logs   # Show recent log entries
make debug-status # Show process and port status
```
### Manual Testing
```bash
make manual-start # Start SeaweedFS
# Test with S3 clients, curl, etc.
make manual-stop  # Cleanup
```
## Integration Test Benefits
These integration tests provide:
- End-to-End Validation: Complete request pipeline testing
- Metadata Persistence: Validates filer storage/retrieval of encryption metadata
- Real Network Communication: Uses actual HTTP requests and responses
- Production-Like Environment: Full SeaweedFS cluster with all components
- Regression Protection: Prevents critical integration bugs
- Performance Baselines: Benchmarking for performance monitoring
## Continuous Integration
For CI/CD pipelines, use:
```bash
make ci-test # Quick tests suitable for CI
make stress  # Stress testing for stability validation
```
## Key Differences from Unit Tests
| Aspect | Unit Tests | Integration Tests |
|---|---|---|
| Scope | Individual functions | Complete request pipeline |
| Dependencies | Mocked/simulated | Real SeaweedFS cluster |
| Network | None | Real HTTP requests |
| Storage | In-memory | Real filer database |
| Metadata | Manual simulation | Actual storage/retrieval |
| Speed | Fast (milliseconds) | Slower (seconds) |
| Coverage | Component logic | System integration |
## Conclusion
These integration tests ensure that SeaweedFS SSE functionality works correctly in production-like environments. They complement the existing unit tests by validating that all components work together properly, providing confidence that encryption/decryption operations will succeed for real users.
Most importantly, these tests would have immediately caught the critical filer metadata storage bug that was previously undetected, demonstrating the crucial importance of integration testing for distributed systems.