Add TUS protocol support for resumable uploads (#7592)

* Add TUS protocol integration tests

This commit adds integration tests for the TUS (resumable upload) protocol
in preparation for implementing TUS support in the filer.

Test coverage includes:
- OPTIONS handler for capability discovery
- Basic single-request upload
- Chunked/resumable uploads
- HEAD requests for offset tracking
- DELETE for upload cancellation
- Error handling (invalid offsets, missing uploads)
- Creation-with-upload extension
- Resume after interruption simulation

Tests are skipped in short mode and require a running SeaweedFS cluster.

* Add TUS session storage types and utilities

Implements TUS upload session management:
- TusSession struct for tracking upload state
- Session creation with directory-based storage
- Session persistence using filer entries
- Session retrieval and offset updates
- Session deletion with chunk cleanup
- Upload completion with chunk assembly into final file

Session data is stored in /.uploads.tus/{upload-id}/ directory,
following the pattern used by S3 multipart uploads.
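
A rough sketch of the session state this implies. The field names and JSON layout here are illustrative assumptions, not the actual SeaweedFS definitions; only the `/.uploads.tus/{upload-id}/` directory layout comes from the commit:

```go
import (
	"path"
	"time"
)

// TusSession mirrors the upload state described above (illustrative fields).
type TusSession struct {
	ID         string            `json:"id"`         // upload id; also names the session directory
	TargetPath string            `json:"targetPath"` // final file location on completion
	Size       int64             `json:"size"`       // total Upload-Length declared by the client
	Offset     int64             `json:"offset"`     // bytes received so far
	Metadata   map[string]string `json:"metadata"`   // parsed from Upload-Metadata
	CreatedAt  time.Time         `json:"createdAt"`
}

// tusSessionDir returns the per-upload directory, e.g. /.uploads.tus/{upload-id}/.
func tusSessionDir(uploadID string) string {
	return path.Join("/.uploads.tus", uploadID)
}
```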

* Add TUS HTTP handlers

Implements TUS protocol HTTP handlers:
- tusHandler: Main entry point routing requests
- tusOptionsHandler: Capability discovery (OPTIONS)
- tusCreateHandler: Create new upload (POST)
- tusHeadHandler: Get upload offset (HEAD)
- tusPatchHandler: Upload data at offset (PATCH)
- tusDeleteHandler: Cancel upload (DELETE)
- tusWriteData: Upload data to volume servers

Features:
- Supports creation-with-upload extension
- Validates TUS protocol headers
- Offset conflict detection
- Automatic upload completion when size is reached
- Metadata parsing from Upload-Metadata header
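
A minimal sketch of how a method-based dispatcher like tusHandler typically looks; the real handlers are methods on the filer server and carry more state:

```go
// tusHandler dispatches to the per-method handlers listed above (sketch).
func tusHandler(w http.ResponseWriter, r *http.Request) {
	switch r.Method {
	case http.MethodOptions:
		tusOptionsHandler(w, r)
	case http.MethodPost:
		tusCreateHandler(w, r)
	case http.MethodHead:
		tusHeadHandler(w, r)
	case http.MethodPatch:
		tusPatchHandler(w, r)
	case http.MethodDelete:
		tusDeleteHandler(w, r)
	default:
		w.WriteHeader(http.StatusMethodNotAllowed)
	}
}
```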

* Wire up TUS protocol routes in filer server

Add TUS handler route (/.tus/) to the filer HTTP server.
The TUS route is registered before the catch-all route so that TUS
protocol requests are not swallowed by the generic filer handler.

TUS protocol is now accessible at:
- OPTIONS /.tus/ - Capability discovery
- POST /.tus/{path} - Create upload
- HEAD /.tus/.uploads/{id} - Get offset
- PATCH /.tus/.uploads/{id} - Upload data
- DELETE /.tus/.uploads/{id} - Cancel upload
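
What the wiring amounts to, sketched with assumed handler names:

```go
// fs.tusHandler and fs.filerHandler are assumed method names. With an
// ordered router, earlier registrations match first, so the TUS prefix
// must come before the catch-all; net/http's ServeMux would pick the more
// specific "/.tus/" prefix regardless of registration order.
router.HandleFunc("/.tus/", fs.tusHandler)
router.HandleFunc("/", fs.filerHandler) // catch-all filer route
```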

* Improve TUS integration test setup

Add comprehensive Makefile for TUS tests with targets:
- test-with-server: Run tests with automatic server management
- test-basic/chunked/resume/errors: Specific test categories
- manual-start/stop: For development testing
- debug-logs/status: For debugging
- ci-test: For CI/CD pipelines

Update README.md with:
- Detailed TUS protocol documentation
- All endpoint descriptions with headers
- Usage examples with curl commands
- Architecture diagram
- Comparison with S3 multipart uploads

Follows the pattern established by other tests in the test/ folder.

* Fix TUS integration tests and creation-with-upload

- Fix test URLs to use full URLs instead of relative paths
- Fix creation-with-upload to refresh session before completing
- Fix Makefile to properly handle test cleanup
- Add FullURL helper function to TestCluster

* Add TUS protocol tests to GitHub Actions CI

- Add tus-tests.yml workflow that runs on PRs and pushes
- Runs when TUS-related files are modified
- Automatic server management for integration testing
- Upload logs on failure for debugging

* Make TUS base path configurable via CLI

- Add -tus.path CLI flag to filer command
- TUS is disabled by default (empty path)
- Example: -tus.path=/.tus to enable TUS at the /.tus endpoint
- Update test Makefile to use -tus.path flag
- Update README with TUS enabling instructions

* Rename -tus.path to -tusBasePath with default .tus

- Rename CLI flag from -tus.path to -tusBasePath
- Default to .tus (TUS enabled by default)
- Add -filer.tusBasePath option to weed server command
- Properly handle path prefix (prepend / if missing)
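
A sketch of the prefix handling described here (a later commit also guards against `/`, which would capture every filer route, so that check is folded in):

```go
import (
	"fmt"
	"strings"
)

// normalizeTusBasePath is a sketch of the flag handling described above.
func normalizeTusBasePath(p string) (string, error) {
	if p == "" {
		return "", nil // empty path disables TUS
	}
	if !strings.HasPrefix(p, "/") {
		p = "/" + p // prepend / if missing, so ".tus" becomes "/.tus"
	}
	if p == "/" {
		return "", fmt.Errorf("tusBasePath cannot be %q: it would hijack all filer routes", p)
	}
	return p, nil
}
```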

* Address code review comments

- Sort chunks by offset before assembling final file
- Use chunk.Offset directly instead of recalculating
- Return error on invalid file ID instead of skipping
- Require Content-Length header for PATCH requests
- Use fs.option.Cipher for encryption setting
- Detect MIME type from data using http.DetectContentType
- Fix concurrency group for push events in workflow
- Use os.Interrupt instead of Kill for graceful shutdown in tests

* fmt

* Address remaining code review comments

- Fix potential open redirect vulnerability by sanitizing uploadLocation path
- Add language specifier to README code block
- Handle os.Create errors in test setup
- Use waitForHTTPServer instead of time.Sleep for master/volume readiness
- Improve test reliability and debugging
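
One plausible shape for the Location sanitization (an assumption, not the committed code): `path.Clean` both collapses duplicate slashes, which defeats scheme-relative `//host` redirects, and resolves `..` segments against the root.

```go
import (
	"path"
	"strings"
)

// sanitizeLocation keeps the Location header a same-origin absolute path.
func sanitizeLocation(uploadLocation string) string {
	// Clean collapses "//" runs and resolves "..", so "//evil.example/x"
	// becomes "/evil.example/x" and stays on-origin.
	return path.Clean("/" + strings.TrimPrefix(uploadLocation, "/"))
}
```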

* Address critical and high-priority review comments

- Add per-session locking to prevent race conditions in updateTusSessionOffset
- Stream data directly to volume server instead of buffering entire chunk
- Only buffer 512 bytes for MIME type detection, then stream remaining data
- Clean up session locks when session is deleted
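
The sniff-then-stream idea, sketched with the standard library (variable names assumed; a later commit additionally caps the sniff at Content-Length):

```go
// Read at most 512 bytes for MIME detection, then stitch the prefix back
// onto the unread remainder so the full body still streams through.
sniff := make([]byte, 512)
n, err := io.ReadFull(r.Body, sniff)
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
	return err
}
mimeType := http.DetectContentType(sniff[:n]) // handles short inputs too
body := io.MultiReader(bytes.NewReader(sniff[:n]), r.Body)
// mimeType and body then feed the streaming upload path.
```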

* Fix race condition to work across multiple filer instances

- Store each chunk as a separate file entry instead of updating session JSON
- Chunk file names encode offset, size, and fileId for atomic storage
- getTusSession loads chunks from directory listing (atomic read)
- Eliminates read-modify-write race condition across multiple filers
- Remove in-memory mutex that only worked for single filer instance
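
An illustrative encoding for those chunk file names, assuming the `{offset}_{size}_{fileId}` layout implied above; later commits base64-encode the fileId (see the parsing sketch further down):

```go
import (
	"encoding/base64"
	"fmt"
)

// tusChunkFileName packs chunk metadata into the entry name so each PATCH
// is a single atomic filer write (illustrative format, not the exact one).
func tusChunkFileName(offset, size int64, fileId string) string {
	encoded := base64.RawURLEncoding.EncodeToString([]byte(fileId))
	return fmt.Sprintf("%d_%d_%s", offset, size, encoded)
}
```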

* Address code review comments: fix variable shadowing, sniff size, and test stability

- Rename path variable to reqPath to avoid shadowing path package
- Make sniff buffer size respect contentLength (read at most contentLength bytes)
- Handle Content-Length < 0 in creation-with-upload (return error for chunked encoding)
- Fix test cluster: use temp directory for filer store, add startup delay

* Fix test stability: increase cluster stabilization delay to 5 seconds

The tests were intermittently failing because the volume server needed more
time to create volumes and register with the master. Increasing the delay
from 2 to 5 seconds fixes the flaky test behavior.

* Address PR review comments for TUS protocol support

- Fix strconv.Atoi error handling in test file (lines 386, 747)
- Fix lossy fileId encoding: use base64 instead of underscore replacement
- Add pagination support for ListDirectoryEntries in getTusSession
- Batch delete chunks instead of one-by-one in deleteTusSession
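
A generic shape for the pagination fix, with `listDirectoryEntries` standing in for the filer API (names, signature, and page size are assumptions):

```go
// Collect all chunk entries, paging until a short page signals the end.
const pageSize = 1024
startFrom := ""
var all []*Entry // Entry stands in for the filer's entry type
for {
	page, err := listDirectoryEntries(sessionDir, startFrom, pageSize)
	if err != nil {
		return nil, err
	}
	all = append(all, page...)
	if len(page) < pageSize {
		break
	}
	startFrom = page[len(page)-1].Name // resume after the last entry seen
}
```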

* Address additional PR review comments for TUS protocol

- Fix UploadAt timestamp: use entry.Crtime instead of time.Now()
- Remove redundant JSON content in chunk entry (metadata in filename)
- Refactor tusWriteData to stream in 4MB chunks to avoid OOM on large uploads
- Pass filer.Entry to parseTusChunkPath to preserve actual upload time
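
The streaming refactor from the third bullet, sketched as a standalone loop; `uploadChunk` stands in for the real volume-server upload call:

```go
import "io"

const tusChunkSize = 4 * 1024 * 1024 // 4MB balances memory use vs. request overhead

// streamInChunks copies the request body to storage in fixed-size
// sub-chunks instead of buffering the entire upload in memory.
func streamInChunks(body io.Reader, uploadChunk func([]byte) error) error {
	buf := make([]byte, tusChunkSize)
	for {
		n, err := io.ReadFull(body, buf)
		if n > 0 {
			if upErr := uploadChunk(buf[:n]); upErr != nil {
				return upErr
			}
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return nil // body exhausted
		}
		if err != nil {
			return err
		}
	}
}
```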

* Address more PR review comments for TUS protocol

- Normalize TUS path once in filer_server.go, store in option.TusPath
- Remove redundant path normalization from TUS handlers
- Remove goto statement in tusCreateHandler, simplify control flow

* Remove unnecessary mutexes in tusWriteData

The upload loop is sequential, so uploadErrLock and chunksLock are not needed.

* Rename updateTusSessionOffset to saveTusChunk

Remove unused newOffset parameter and rename function to better reflect its purpose.

* Improve TUS upload performance and add path validation

- Reuse operation.Uploader across sub-chunks for better connection reuse
- Guard against TusPath='/' to prevent hijacking all filer routes

* Address PR review comments for TUS protocol

- Fix critical chunk filename parsing: use strings.Cut instead of SplitN
  to correctly handle base64-encoded fileIds that may contain underscores
- Rename tusPath to tusBasePath for naming consistency across codebase
- Add background garbage collection for expired TUS sessions (runs hourly)
- Improve error messages with %w wrapping for better debuggability
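
Why `strings.Cut` matters here, sketched against the assumed `{offset}_{size}_{base64(fileId)}` name format: URL-safe base64 can itself contain `_`, so only the first two separators may be split on.

```go
import (
	"encoding/base64"
	"fmt"
	"strconv"
	"strings"
)

// parseTusChunkName is a sketch; Cut splits on the leftmost "_" only,
// leaving any underscores inside the base64 fileId intact.
func parseTusChunkName(name string) (offset, size int64, fileId string, err error) {
	offStr, rest, ok := strings.Cut(name, "_")
	if !ok {
		return 0, 0, "", fmt.Errorf("malformed chunk name %q", name)
	}
	sizeStr, encoded, ok := strings.Cut(rest, "_")
	if !ok {
		return 0, 0, "", fmt.Errorf("malformed chunk name %q", name)
	}
	if offset, err = strconv.ParseInt(offStr, 10, 64); err != nil {
		return 0, 0, "", fmt.Errorf("bad offset in %q: %w", name, err)
	}
	if size, err = strconv.ParseInt(sizeStr, 10, 64); err != nil {
		return 0, 0, "", fmt.Errorf("bad size in %q: %w", name, err)
	}
	raw, err := base64.RawURLEncoding.DecodeString(encoded)
	if err != nil {
		return 0, 0, "", fmt.Errorf("bad fileId in %q: %w", name, err)
	}
	return offset, size, string(raw), nil
}
```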

* Address additional TUS PR review comments

- Fix tusBasePath default to use leading slash (/.tus) for consistency
- Add chunk contiguity validation in completeTusUpload to detect gaps/overlaps
- Fix offset calculation to find maximum contiguous range from 0, not just last chunk
- Return 413 Request Entity Too Large instead of silently truncating content
- Document tusChunkSize rationale (4MB balances memory vs request overhead)
- Fix Makefile xargs portability by removing GNU-specific -r flag
- Add explicit -tusBasePath flag to integration test for robustness
- Fix README example to use /.uploads/tus path format
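
A sketch of the validation from the first three bullets (chunk struct assumed). For HEAD, the same walk would stop at the first gap and report `contiguous` as the offset; completion rejects gaps and overlaps outright:

```go
import (
	"fmt"
	"sort"
)

type tusChunk struct{ Offset, Size int64 }

// maxContiguousOffset sorts chunks by offset and returns how many bytes
// from 0 are covered without a gap; gaps and overlaps are rejected.
func maxContiguousOffset(chunks []tusChunk) (int64, error) {
	sort.Slice(chunks, func(i, j int) bool { return chunks[i].Offset < chunks[j].Offset })
	var contiguous int64
	for _, c := range chunks {
		switch {
		case c.Offset > contiguous:
			return contiguous, fmt.Errorf("gap: bytes [%d,%d) missing", contiguous, c.Offset)
		case c.Offset < contiguous:
			return contiguous, fmt.Errorf("overlap: chunk at %d overlaps up to %d", c.Offset, contiguous)
		}
		contiguous += c.Size
	}
	return contiguous, nil
}
```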

* Revert log_buffer changes (moved to separate PR)

* Minor style fixes from PR review

- Simplify tusBasePath flag description to use example format
- Add 'TUS upload' prefix to session not found error message
- Remove duplicate tusChunkSize comment
- Capitalize warning message for consistency
- Add grep filter to Makefile xargs for better empty input handling
Author: Chris Lu
Date: 2025-12-14 21:56:07 -08:00
Committed by: GitHub
Parent: 221b352593
Commit: 1b1e5f69a2
12 changed files with 3058 additions and 13 deletions


@@ -0,0 +1,505 @@
package sse_test
import (
"bytes"
"context"
"fmt"
"io"
"testing"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// TestGitHub7562CopyFromEncryptedToTempToEncrypted reproduces the exact scenario from
// GitHub issue #7562: copying from an encrypted bucket to a temp bucket, then to another
// encrypted bucket fails with InternalError.
//
// Reproduction steps:
// 1. Create source bucket with SSE-S3 encryption enabled
// 2. Upload object (automatically encrypted)
// 3. Create temp bucket (no encryption)
// 4. Copy object from source to temp (decrypts)
// 5. Delete source bucket
// 6. Create destination bucket with SSE-S3 encryption
// 7. Copy object from temp to dest (should re-encrypt) - THIS FAILS
func TestGitHub7562CopyFromEncryptedToTempToEncrypted(t *testing.T) {
ctx := context.Background()
client, err := createS3Client(ctx, defaultConfig)
require.NoError(t, err, "Failed to create S3 client")
// Create three buckets
srcBucket, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"7562-src-")
require.NoError(t, err, "Failed to create source bucket")
tempBucket, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"7562-temp-")
require.NoError(t, err, "Failed to create temp bucket")
destBucket, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"7562-dest-")
require.NoError(t, err, "Failed to create destination bucket")
// Cleanup at the end
defer func() {
// Clean up in reverse order of creation
cleanupTestBucket(ctx, client, destBucket)
cleanupTestBucket(ctx, client, tempBucket)
// Note: srcBucket is deleted during the test
}()
testData := []byte("Test data for GitHub issue #7562 - copy from encrypted to temp to encrypted bucket")
objectKey := "demo-file.txt"
t.Logf("[1] Creating source bucket with SSE-S3 default encryption: %s", srcBucket)
// Step 1: Enable SSE-S3 default encryption on source bucket
_, err = client.PutBucketEncryption(ctx, &s3.PutBucketEncryptionInput{
Bucket: aws.String(srcBucket),
ServerSideEncryptionConfiguration: &types.ServerSideEncryptionConfiguration{
Rules: []types.ServerSideEncryptionRule{
{
ApplyServerSideEncryptionByDefault: &types.ServerSideEncryptionByDefault{
SSEAlgorithm: types.ServerSideEncryptionAes256,
},
},
},
},
})
require.NoError(t, err, "Failed to set source bucket default encryption")
t.Log("[2] Uploading demo object to source bucket")
// Step 2: Upload object to source bucket (will be automatically encrypted)
_, err = client.PutObject(ctx, &s3.PutObjectInput{
Bucket: aws.String(srcBucket),
Key: aws.String(objectKey),
Body: bytes.NewReader(testData),
// No encryption header - bucket default applies
})
require.NoError(t, err, "Failed to upload to source bucket")
// Verify source object is encrypted
srcHead, err := client.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: aws.String(srcBucket),
Key: aws.String(objectKey),
})
require.NoError(t, err, "Failed to HEAD source object")
assert.Equal(t, types.ServerSideEncryptionAes256, srcHead.ServerSideEncryption,
"Source object should be SSE-S3 encrypted")
t.Logf("Source object encryption: %v", srcHead.ServerSideEncryption)
t.Logf("[3] Creating temp bucket (no encryption): %s", tempBucket)
// Temp bucket already created without encryption
t.Log("[4] Copying object from source to temp (should decrypt)")
// Step 4: Copy to temp bucket (no encryption = decrypts)
_, err = client.CopyObject(ctx, &s3.CopyObjectInput{
Bucket: aws.String(tempBucket),
Key: aws.String(objectKey),
CopySource: aws.String(fmt.Sprintf("%s/%s", srcBucket, objectKey)),
// No encryption header - data stored unencrypted
})
require.NoError(t, err, "Failed to copy to temp bucket")
// Verify temp object is NOT encrypted
tempHead, err := client.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: aws.String(tempBucket),
Key: aws.String(objectKey),
})
require.NoError(t, err, "Failed to HEAD temp object")
assert.Empty(t, tempHead.ServerSideEncryption, "Temp object should NOT be encrypted")
t.Logf("Temp object encryption: %v (should be empty)", tempHead.ServerSideEncryption)
// Verify temp object content
tempGet, err := client.GetObject(ctx, &s3.GetObjectInput{
Bucket: aws.String(tempBucket),
Key: aws.String(objectKey),
})
require.NoError(t, err, "Failed to GET temp object")
tempData, err := io.ReadAll(tempGet.Body)
tempGet.Body.Close()
require.NoError(t, err, "Failed to read temp object")
assertDataEqual(t, testData, tempData, "Temp object data should match original")
t.Log("[5] Deleting original source bucket")
// Step 5: Delete source bucket
err = cleanupTestBucket(ctx, client, srcBucket)
require.NoError(t, err, "Failed to delete source bucket")
t.Logf("[6] Creating destination bucket with SSE-S3 encryption: %s", destBucket)
// Step 6: Enable SSE-S3 default encryption on destination bucket
_, err = client.PutBucketEncryption(ctx, &s3.PutBucketEncryptionInput{
Bucket: aws.String(destBucket),
ServerSideEncryptionConfiguration: &types.ServerSideEncryptionConfiguration{
Rules: []types.ServerSideEncryptionRule{
{
ApplyServerSideEncryptionByDefault: &types.ServerSideEncryptionByDefault{
SSEAlgorithm: types.ServerSideEncryptionAes256,
},
},
},
},
})
require.NoError(t, err, "Failed to set destination bucket default encryption")
t.Log("[7] Copying object from temp to dest (should re-encrypt) - THIS IS WHERE #7562 FAILS")
// Step 7: Copy from temp to dest bucket (should re-encrypt with SSE-S3)
// THIS IS THE STEP THAT FAILS IN GITHUB ISSUE #7562
_, err = client.CopyObject(ctx, &s3.CopyObjectInput{
Bucket: aws.String(destBucket),
Key: aws.String(objectKey),
CopySource: aws.String(fmt.Sprintf("%s/%s", tempBucket, objectKey)),
// No encryption header - bucket default should apply
})
require.NoError(t, err, "GitHub #7562: Failed to copy from temp to encrypted dest bucket")
// Verify destination object is encrypted
destHead, err := client.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: aws.String(destBucket),
Key: aws.String(objectKey),
})
require.NoError(t, err, "Failed to HEAD destination object")
assert.Equal(t, types.ServerSideEncryptionAes256, destHead.ServerSideEncryption,
"Destination object should be SSE-S3 encrypted via bucket default")
t.Logf("Destination object encryption: %v", destHead.ServerSideEncryption)
// Verify destination object content is correct
destGet, err := client.GetObject(ctx, &s3.GetObjectInput{
Bucket: aws.String(destBucket),
Key: aws.String(objectKey),
})
require.NoError(t, err, "Failed to GET destination object")
destData, err := io.ReadAll(destGet.Body)
destGet.Body.Close()
require.NoError(t, err, "Failed to read destination object")
assertDataEqual(t, testData, destData, "GitHub #7562: Destination object data mismatch after re-encryption")
t.Log("[done] GitHub #7562 reproduction test completed successfully!")
}
// TestGitHub7562SimpleScenario tests the simpler variant: just copy unencrypted to encrypted bucket
func TestGitHub7562SimpleScenario(t *testing.T) {
ctx := context.Background()
client, err := createS3Client(ctx, defaultConfig)
require.NoError(t, err, "Failed to create S3 client")
// Create two buckets
srcBucket, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"7562-simple-src-")
require.NoError(t, err, "Failed to create source bucket")
defer cleanupTestBucket(ctx, client, srcBucket)
destBucket, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"7562-simple-dest-")
require.NoError(t, err, "Failed to create destination bucket")
defer cleanupTestBucket(ctx, client, destBucket)
testData := []byte("Simple test for unencrypted to encrypted copy")
objectKey := "test-object.txt"
t.Logf("Source bucket (no encryption): %s", srcBucket)
t.Logf("Dest bucket (SSE-S3 default): %s", destBucket)
// Upload to unencrypted source bucket
_, err = client.PutObject(ctx, &s3.PutObjectInput{
Bucket: aws.String(srcBucket),
Key: aws.String(objectKey),
Body: bytes.NewReader(testData),
})
require.NoError(t, err, "Failed to upload to source bucket")
// Enable SSE-S3 on destination bucket
_, err = client.PutBucketEncryption(ctx, &s3.PutBucketEncryptionInput{
Bucket: aws.String(destBucket),
ServerSideEncryptionConfiguration: &types.ServerSideEncryptionConfiguration{
Rules: []types.ServerSideEncryptionRule{
{
ApplyServerSideEncryptionByDefault: &types.ServerSideEncryptionByDefault{
SSEAlgorithm: types.ServerSideEncryptionAes256,
},
},
},
},
})
require.NoError(t, err, "Failed to set dest bucket encryption")
// Copy to encrypted bucket (should use bucket default encryption)
_, err = client.CopyObject(ctx, &s3.CopyObjectInput{
Bucket: aws.String(destBucket),
Key: aws.String(objectKey),
CopySource: aws.String(fmt.Sprintf("%s/%s", srcBucket, objectKey)),
})
require.NoError(t, err, "Failed to copy to encrypted bucket")
// Verify destination is encrypted
destHead, err := client.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: aws.String(destBucket),
Key: aws.String(objectKey),
})
require.NoError(t, err, "Failed to HEAD dest object")
assert.Equal(t, types.ServerSideEncryptionAes256, destHead.ServerSideEncryption,
"Object should be encrypted via bucket default")
// Verify content
destGet, err := client.GetObject(ctx, &s3.GetObjectInput{
Bucket: aws.String(destBucket),
Key: aws.String(objectKey),
})
require.NoError(t, err, "Failed to GET dest object")
destData, err := io.ReadAll(destGet.Body)
destGet.Body.Close()
require.NoError(t, err, "Failed to read dest object")
assertDataEqual(t, testData, destData, "Data mismatch")
}
// TestGitHub7562DebugMetadata helps debug what metadata is present on objects at each step
func TestGitHub7562DebugMetadata(t *testing.T) {
ctx := context.Background()
client, err := createS3Client(ctx, defaultConfig)
require.NoError(t, err, "Failed to create S3 client")
// Create three buckets
srcBucket, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"7562-debug-src-")
require.NoError(t, err, "Failed to create source bucket")
tempBucket, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"7562-debug-temp-")
require.NoError(t, err, "Failed to create temp bucket")
destBucket, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"7562-debug-dest-")
require.NoError(t, err, "Failed to create destination bucket")
defer func() {
cleanupTestBucket(ctx, client, destBucket)
cleanupTestBucket(ctx, client, tempBucket)
}()
testData := []byte("Debug metadata test for GitHub #7562")
objectKey := "debug-file.txt"
// Enable SSE-S3 on source
_, err = client.PutBucketEncryption(ctx, &s3.PutBucketEncryptionInput{
Bucket: aws.String(srcBucket),
ServerSideEncryptionConfiguration: &types.ServerSideEncryptionConfiguration{
Rules: []types.ServerSideEncryptionRule{
{
ApplyServerSideEncryptionByDefault: &types.ServerSideEncryptionByDefault{
SSEAlgorithm: types.ServerSideEncryptionAes256,
},
},
},
},
})
require.NoError(t, err, "Failed to set source bucket encryption")
// Upload
_, err = client.PutObject(ctx, &s3.PutObjectInput{
Bucket: aws.String(srcBucket),
Key: aws.String(objectKey),
Body: bytes.NewReader(testData),
})
require.NoError(t, err, "Failed to upload")
// Log source object headers
srcHead, err := client.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: aws.String(srcBucket),
Key: aws.String(objectKey),
})
require.NoError(t, err, "Failed to HEAD source")
t.Logf("=== SOURCE OBJECT (encrypted) ===")
t.Logf("ServerSideEncryption: %v", srcHead.ServerSideEncryption)
t.Logf("Metadata: %v", srcHead.Metadata)
t.Logf("ContentLength: %d", aws.ToInt64(srcHead.ContentLength))
// Copy to temp
_, err = client.CopyObject(ctx, &s3.CopyObjectInput{
Bucket: aws.String(tempBucket),
Key: aws.String(objectKey),
CopySource: aws.String(fmt.Sprintf("%s/%s", srcBucket, objectKey)),
})
require.NoError(t, err, "Failed to copy to temp")
// Log temp object headers
tempHead, err := client.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: aws.String(tempBucket),
Key: aws.String(objectKey),
})
require.NoError(t, err, "Failed to HEAD temp")
t.Logf("=== TEMP OBJECT (should be unencrypted) ===")
t.Logf("ServerSideEncryption: %v (should be empty)", tempHead.ServerSideEncryption)
t.Logf("Metadata: %v", tempHead.Metadata)
t.Logf("ContentLength: %d", aws.ToInt64(tempHead.ContentLength))
// Verify temp is NOT encrypted
if tempHead.ServerSideEncryption != "" {
t.Logf("WARNING: Temp object unexpectedly has encryption: %v", tempHead.ServerSideEncryption)
}
// Delete source bucket
cleanupTestBucket(ctx, client, srcBucket)
// Enable SSE-S3 on dest
_, err = client.PutBucketEncryption(ctx, &s3.PutBucketEncryptionInput{
Bucket: aws.String(destBucket),
ServerSideEncryptionConfiguration: &types.ServerSideEncryptionConfiguration{
Rules: []types.ServerSideEncryptionRule{
{
ApplyServerSideEncryptionByDefault: &types.ServerSideEncryptionByDefault{
SSEAlgorithm: types.ServerSideEncryptionAes256,
},
},
},
},
})
require.NoError(t, err, "Failed to set dest bucket encryption")
// Copy to dest - THIS IS WHERE #7562 FAILS
t.Log("=== COPYING TO ENCRYPTED DEST ===")
_, err = client.CopyObject(ctx, &s3.CopyObjectInput{
Bucket: aws.String(destBucket),
Key: aws.String(objectKey),
CopySource: aws.String(fmt.Sprintf("%s/%s", tempBucket, objectKey)),
})
if err != nil {
t.Logf("!!! COPY FAILED (GitHub #7562): %v", err)
t.FailNow()
}
// Log dest object headers
destHead, err := client.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: aws.String(destBucket),
Key: aws.String(objectKey),
})
require.NoError(t, err, "Failed to HEAD dest")
t.Logf("=== DEST OBJECT (should be encrypted) ===")
t.Logf("ServerSideEncryption: %v", destHead.ServerSideEncryption)
t.Logf("Metadata: %v", destHead.Metadata)
t.Logf("ContentLength: %d", aws.ToInt64(destHead.ContentLength))
// Verify dest IS encrypted
assert.Equal(t, types.ServerSideEncryptionAes256, destHead.ServerSideEncryption,
"Dest object should be encrypted")
// Verify content is readable
destGet, err := client.GetObject(ctx, &s3.GetObjectInput{
Bucket: aws.String(destBucket),
Key: aws.String(objectKey),
})
require.NoError(t, err, "Failed to GET dest")
destData, err := io.ReadAll(destGet.Body)
destGet.Body.Close()
require.NoError(t, err, "Failed to read dest")
assertDataEqual(t, testData, destData, "Data mismatch")
t.Log("=== DEBUG TEST PASSED ===")
}
// TestGitHub7562LargeFile tests the issue with larger files that might trigger multipart handling
func TestGitHub7562LargeFile(t *testing.T) {
ctx := context.Background()
client, err := createS3Client(ctx, defaultConfig)
require.NoError(t, err, "Failed to create S3 client")
srcBucket, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"7562-large-src-")
require.NoError(t, err, "Failed to create source bucket")
tempBucket, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"7562-large-temp-")
require.NoError(t, err, "Failed to create temp bucket")
destBucket, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"7562-large-dest-")
require.NoError(t, err, "Failed to create destination bucket")
defer func() {
cleanupTestBucket(ctx, client, destBucket)
cleanupTestBucket(ctx, client, tempBucket)
}()
// Use larger file to potentially trigger different code paths
testData := generateTestData(5 * 1024 * 1024) // 5MB
objectKey := "large-file.bin"
t.Logf("Testing with %d byte file", len(testData))
// Enable SSE-S3 on source
_, err = client.PutBucketEncryption(ctx, &s3.PutBucketEncryptionInput{
Bucket: aws.String(srcBucket),
ServerSideEncryptionConfiguration: &types.ServerSideEncryptionConfiguration{
Rules: []types.ServerSideEncryptionRule{
{
ApplyServerSideEncryptionByDefault: &types.ServerSideEncryptionByDefault{
SSEAlgorithm: types.ServerSideEncryptionAes256,
},
},
},
},
})
require.NoError(t, err, "Failed to set source bucket encryption")
// Upload
_, err = client.PutObject(ctx, &s3.PutObjectInput{
Bucket: aws.String(srcBucket),
Key: aws.String(objectKey),
Body: bytes.NewReader(testData),
})
require.NoError(t, err, "Failed to upload")
// Copy to temp (decrypt)
_, err = client.CopyObject(ctx, &s3.CopyObjectInput{
Bucket: aws.String(tempBucket),
Key: aws.String(objectKey),
CopySource: aws.String(fmt.Sprintf("%s/%s", srcBucket, objectKey)),
})
require.NoError(t, err, "Failed to copy to temp")
// Delete source
cleanupTestBucket(ctx, client, srcBucket)
// Enable SSE-S3 on dest
_, err = client.PutBucketEncryption(ctx, &s3.PutBucketEncryptionInput{
Bucket: aws.String(destBucket),
ServerSideEncryptionConfiguration: &types.ServerSideEncryptionConfiguration{
Rules: []types.ServerSideEncryptionRule{
{
ApplyServerSideEncryptionByDefault: &types.ServerSideEncryptionByDefault{
SSEAlgorithm: types.ServerSideEncryptionAes256,
},
},
},
},
})
require.NoError(t, err, "Failed to set dest bucket encryption")
// Copy to dest (re-encrypt) - GitHub #7562
_, err = client.CopyObject(ctx, &s3.CopyObjectInput{
Bucket: aws.String(destBucket),
Key: aws.String(objectKey),
CopySource: aws.String(fmt.Sprintf("%s/%s", tempBucket, objectKey)),
})
require.NoError(t, err, "GitHub #7562: Large file copy to encrypted bucket failed")
// Verify
destHead, err := client.HeadObject(ctx, &s3.HeadObjectInput{
Bucket: aws.String(destBucket),
Key: aws.String(objectKey),
})
require.NoError(t, err, "Failed to HEAD dest")
assert.Equal(t, types.ServerSideEncryptionAes256, destHead.ServerSideEncryption)
assert.Equal(t, int64(len(testData)), aws.ToInt64(destHead.ContentLength))
// Verify content
destGet, err := client.GetObject(ctx, &s3.GetObjectInput{
Bucket: aws.String(destBucket),
Key: aws.String(objectKey),
})
require.NoError(t, err, "Failed to GET dest")
destData, err := io.ReadAll(destGet.Body)
destGet.Body.Close()
require.NoError(t, err, "Failed to read dest")
assertDataEqual(t, testData, destData, "Large file data mismatch")
t.Log("Large file test passed!")
}

test/tus/Makefile (new file, 226 lines)

@@ -0,0 +1,226 @@
# Makefile for TUS Protocol Integration Tests
# This Makefile provides targets for running TUS (resumable upload) integration tests
# Default values
SEAWEEDFS_BINARY ?= weed
FILER_PORT ?= 18888
VOLUME_PORT ?= 18080
MASTER_PORT ?= 19333
TEST_TIMEOUT ?= 10m
VOLUME_MAX_SIZE_MB ?= 50
VOLUME_MAX_COUNT ?= 100
# Test directory
TEST_DIR := $(shell pwd)
SEAWEEDFS_ROOT := $(shell cd ../.. && pwd)
# Colors for output
RED := \033[0;31m
GREEN := \033[0;32m
YELLOW := \033[1;33m
NC := \033[0m # No Color
.PHONY: all test clean start-seaweedfs stop-seaweedfs check-binary build-weed help test-basic test-chunked test-resume test-errors test-with-server
all: test
# Build SeaweedFS binary
build-weed:
@echo "Building SeaweedFS binary..."
@cd $(SEAWEEDFS_ROOT)/weed && go build -o weed
@echo "$(GREEN)SeaweedFS binary built successfully$(NC)"
help:
@echo "SeaweedFS TUS Protocol Integration Tests"
@echo ""
@echo "Available targets:"
@echo " test - Run all TUS integration tests"
@echo " test-basic - Run basic TUS upload tests"
@echo " test-chunked - Run chunked upload tests"
@echo " test-resume - Run upload resume tests"
@echo " test-errors - Run error handling tests"
@echo " test-with-server - Run tests with automatic server management"
@echo " start-seaweedfs - Start SeaweedFS server for testing"
@echo " stop-seaweedfs - Stop SeaweedFS server"
@echo " clean - Clean up test artifacts"
@echo " check-binary - Check if SeaweedFS binary exists"
@echo " build-weed - Build SeaweedFS binary"
@echo ""
@echo "Configuration:"
@echo " SEAWEEDFS_BINARY=$(SEAWEEDFS_BINARY)"
@echo " FILER_PORT=$(FILER_PORT)"
@echo " VOLUME_PORT=$(VOLUME_PORT)"
@echo " MASTER_PORT=$(MASTER_PORT)"
@echo " TEST_TIMEOUT=$(TEST_TIMEOUT)"
check-binary:
@if ! command -v $(SEAWEEDFS_BINARY) > /dev/null 2>&1 && [ ! -f "$(SEAWEEDFS_ROOT)/weed/weed" ]; then \
echo "$(RED)Error: SeaweedFS binary not found$(NC)"; \
echo "Please build SeaweedFS first: make build-weed"; \
exit 1; \
fi
@echo "$(GREEN)SeaweedFS binary found$(NC)"
start-seaweedfs: check-binary
@echo "$(YELLOW)Starting SeaweedFS server for TUS testing...$(NC)"
@# Clean up any existing processes on our test ports
@lsof -ti :$(MASTER_PORT) | grep -v '^$$' | xargs kill -TERM 2>/dev/null || true
@lsof -ti :$(VOLUME_PORT) | grep -v '^$$' | xargs kill -TERM 2>/dev/null || true
@lsof -ti :$(FILER_PORT) | grep -v '^$$' | xargs kill -TERM 2>/dev/null || true
@sleep 2
# Create necessary directories
@mkdir -p /tmp/seaweedfs-test-tus-master
@mkdir -p /tmp/seaweedfs-test-tus-volume
@mkdir -p /tmp/seaweedfs-test-tus-filer
# Start master server (use freshly built binary)
@echo "Starting master server..."
@nohup $(SEAWEEDFS_ROOT)/weed/weed master \
-port=$(MASTER_PORT) \
-mdir=/tmp/seaweedfs-test-tus-master \
-volumeSizeLimitMB=$(VOLUME_MAX_SIZE_MB) \
-ip=127.0.0.1 \
> /tmp/seaweedfs-tus-master.log 2>&1 &
@sleep 3
# Start volume server
@echo "Starting volume server..."
@nohup $(SEAWEEDFS_ROOT)/weed/weed volume \
-port=$(VOLUME_PORT) \
-mserver=127.0.0.1:$(MASTER_PORT) \
-dir=/tmp/seaweedfs-test-tus-volume \
-max=$(VOLUME_MAX_COUNT) \
-ip=127.0.0.1 \
> /tmp/seaweedfs-tus-volume.log 2>&1 &
@sleep 3
# Start filer server with TUS enabled (default tusBasePath is .tus)
@echo "Starting filer server..."
@nohup $(SEAWEEDFS_ROOT)/weed/weed filer \
-port=$(FILER_PORT) \
-master=127.0.0.1:$(MASTER_PORT) \
-ip=127.0.0.1 \
> /tmp/seaweedfs-tus-filer.log 2>&1 &
@sleep 5
# Wait for filer to be ready
@echo "$(YELLOW)Waiting for filer to be ready...$(NC)"
@for i in $$(seq 1 30); do \
if curl -s -f http://127.0.0.1:$(FILER_PORT)/ > /dev/null 2>&1; then \
echo "$(GREEN)Filer is ready$(NC)"; \
break; \
fi; \
if [ $$i -eq 30 ]; then \
echo "$(RED)Filer failed to start within 30 seconds$(NC)"; \
$(MAKE) debug-logs; \
exit 1; \
fi; \
echo "Waiting for filer... ($$i/30)"; \
sleep 1; \
done
@echo "$(GREEN)SeaweedFS server started successfully for TUS testing$(NC)"
@echo "Master: http://localhost:$(MASTER_PORT)"
@echo "Volume: http://localhost:$(VOLUME_PORT)"
@echo "Filer: http://localhost:$(FILER_PORT)"
@echo "TUS Endpoint: http://localhost:$(FILER_PORT)/.tus/"
stop-seaweedfs:
@echo "$(YELLOW)Stopping SeaweedFS server...$(NC)"
@lsof -ti :$(MASTER_PORT) | grep -v '^$$' | xargs kill -TERM 2>/dev/null || true
@lsof -ti :$(VOLUME_PORT) | grep -v '^$$' | xargs kill -TERM 2>/dev/null || true
@lsof -ti :$(FILER_PORT) | grep -v '^$$' | xargs kill -TERM 2>/dev/null || true
@sleep 2
@echo "$(GREEN)SeaweedFS server stopped$(NC)"
clean:
@echo "$(YELLOW)Cleaning up TUS test artifacts...$(NC)"
@rm -rf /tmp/seaweedfs-test-tus-*
@rm -f /tmp/seaweedfs-tus-*.log
@echo "$(GREEN)TUS test cleanup completed$(NC)"
# Run all tests
test: check-binary
@echo "$(YELLOW)Running all TUS integration tests...$(NC)"
@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) ./test/tus/...
@echo "$(GREEN)All TUS tests completed$(NC)"
# Run basic upload tests
test-basic: check-binary
@echo "$(YELLOW)Running basic TUS upload tests...$(NC)"
@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) -run "TestTusBasicUpload|TestTusOptionsHandler" ./test/tus/...
@echo "$(GREEN)Basic TUS tests completed$(NC)"
# Run chunked upload tests
test-chunked: check-binary
@echo "$(YELLOW)Running chunked TUS upload tests...$(NC)"
@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) -run "TestTusChunkedUpload" ./test/tus/...
@echo "$(GREEN)Chunked TUS tests completed$(NC)"
# Run resume tests
test-resume: check-binary
@echo "$(YELLOW)Running TUS upload resume tests...$(NC)"
@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) -run "TestTusResumeAfterInterruption|TestTusHeadRequest" ./test/tus/...
@echo "$(GREEN)TUS resume tests completed$(NC)"
# Run error handling tests
test-errors: check-binary
@echo "$(YELLOW)Running TUS error handling tests...$(NC)"
@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) -run "TestTusInvalidOffset|TestTusUploadNotFound|TestTusDeleteUpload" ./test/tus/...
@echo "$(GREEN)TUS error tests completed$(NC)"
# Run tests with automatic server management
test-with-server: build-weed
@echo "$(YELLOW)Running TUS tests with automatic server management...$(NC)"
@$(MAKE) -C $(TEST_DIR) start-seaweedfs && \
sleep 3 && \
cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) ./test/tus/...; \
TEST_RESULT=$$?; \
$(MAKE) -C $(TEST_DIR) stop-seaweedfs; \
$(MAKE) -C $(TEST_DIR) clean; \
if [ $$TEST_RESULT -eq 0 ]; then echo "$(GREEN)All TUS tests passed!$(NC)"; fi; \
exit $$TEST_RESULT
# Debug targets
debug-logs:
@echo "$(YELLOW)=== Master Log ===$(NC)"
@tail -n 50 /tmp/seaweedfs-tus-master.log 2>/dev/null || echo "No master log found"
@echo "$(YELLOW)=== Volume Log ===$(NC)"
@tail -n 50 /tmp/seaweedfs-tus-volume.log 2>/dev/null || echo "No volume log found"
@echo "$(YELLOW)=== Filer Log ===$(NC)"
@tail -n 50 /tmp/seaweedfs-tus-filer.log 2>/dev/null || echo "No filer log found"
debug-status:
@echo "$(YELLOW)=== Process Status ===$(NC)"
@ps aux | grep -E "(weed|seaweedfs)" | grep -v grep || echo "No SeaweedFS processes found"
@echo "$(YELLOW)=== Port Status ===$(NC)"
@lsof -i :$(MASTER_PORT) -i :$(VOLUME_PORT) -i :$(FILER_PORT) 2>/dev/null || echo "No ports in use"
# Manual testing targets
manual-start: start-seaweedfs
@echo "$(GREEN)SeaweedFS is now running for manual TUS testing$(NC)"
@echo ""
@echo "TUS Endpoints:"
@echo " OPTIONS /.tus/ - Capability discovery"
@echo " POST /.tus/{path} - Create upload"
@echo " HEAD /.tus/.uploads/{id} - Get offset"
@echo " PATCH /.tus/.uploads/{id} - Upload data"
@echo " DELETE /.tus/.uploads/{id} - Cancel upload"
@echo ""
@echo "Example curl commands:"
@echo " curl -X OPTIONS http://localhost:$(FILER_PORT)/.tus/ -H 'Tus-Resumable: 1.0.0'"
@echo ""
@echo "Run 'make manual-stop' when finished"
manual-stop: stop-seaweedfs clean
# CI targets
ci-test: test-with-server
# Skip integration tests (short mode)
test-short:
@echo "$(YELLOW)Running TUS tests in short mode (skipping integration tests)...$(NC)"
@cd $(SEAWEEDFS_ROOT) && go test -v -short ./test/tus/...
@echo "$(GREEN)Short tests completed$(NC)"

test/tus/README.md (new file, 241 lines)

@@ -0,0 +1,241 @@
# TUS Protocol Integration Tests
This directory contains integration tests for the TUS (resumable upload) protocol support in SeaweedFS Filer.
## Overview
TUS is an open protocol for resumable file uploads over HTTP. It allows clients to upload files in chunks and resume uploads after network failures or interruptions.
### Why TUS?
- **Resumable uploads**: Resume interrupted uploads without re-sending data
- **Chunked uploads**: Upload large files in smaller pieces
- **Simple protocol**: Standard HTTP methods with custom headers
- **Wide client support**: Libraries available for JavaScript, Python, Go, and more
## TUS Protocol Endpoints
| Method | Path | Description |
|--------|------|-------------|
| `OPTIONS` | `/.tus/` | Server capability discovery |
| `POST` | `/.tus/{path}` | Create new upload session |
| `HEAD` | `/.tus/.uploads/{id}` | Get current upload offset |
| `PATCH` | `/.tus/.uploads/{id}` | Upload data at offset |
| `DELETE` | `/.tus/.uploads/{id}` | Cancel upload |
### TUS Headers
**Request Headers:**
- `Tus-Resumable: 1.0.0` - Protocol version (required)
- `Upload-Length` - Total file size in bytes (required on POST)
- `Upload-Offset` - Current byte offset (required on PATCH)
- `Upload-Metadata` - Base64-encoded key-value pairs (optional)
- `Content-Type: application/offset+octet-stream` (required on PATCH)
**Response Headers:**
- `Tus-Resumable` - Protocol version
- `Tus-Version` - Supported versions
- `Tus-Extension` - Supported extensions
- `Tus-Max-Size` - Maximum upload size
- `Upload-Offset` - Current byte offset
- `Location` - Upload URL (on POST)
## Enabling TUS
TUS protocol support is enabled by default at the `/.tus` path. You can customize the path using the `-tusBasePath` flag:
```bash
# Start filer with default TUS path (/.tus)
weed filer -master=localhost:9333
# Use a custom path (leading slash added automatically if missing)
weed filer -master=localhost:9333 -tusBasePath=/.uploads/tus
# Disable TUS by setting empty path
weed filer -master=localhost:9333 -tusBasePath=
```
## Test Structure
### Integration Tests
The tests cover:
1. **Basic Functionality**
- `TestTusOptionsHandler` - Capability discovery
- `TestTusBasicUpload` - Simple complete upload
- `TestTusCreationWithUpload` - Creation-with-upload extension
2. **Chunked Uploads**
- `TestTusChunkedUpload` - Upload in multiple chunks
3. **Resumable Uploads**
- `TestTusHeadRequest` - Offset tracking
- `TestTusResumeAfterInterruption` - Resume after failure
4. **Error Handling**
- `TestTusInvalidOffset` - Offset mismatch (409 Conflict)
- `TestTusUploadNotFound` - Missing upload (404 Not Found)
- `TestTusDeleteUpload` - Upload cancellation
## Running Tests
### Prerequisites
1. **Build SeaweedFS**:
```bash
make build-weed
# or
cd ../../weed && go build -o weed
```
### Using Makefile
```bash
# Show available targets
make help
# Run all tests with automatic server management
make test-with-server
# Run all tests (requires running server)
make test
# Run specific test categories
make test-basic # Basic upload tests
make test-chunked # Chunked upload tests
make test-resume # Resume/HEAD tests
make test-errors # Error handling tests
# Manual testing
make manual-start # Start SeaweedFS for manual testing
make manual-stop # Stop and cleanup
```
### Using Go Test Directly
```bash
# Run all TUS tests
go test -v ./test/tus/...
# Run specific test
go test -v ./test/tus -run TestTusBasicUpload
# Skip integration tests (short mode)
go test -v -short ./test/tus/...
```
### Debug
```bash
# View server logs
make debug-logs
# Check process and port status
make debug-status
```
## Test Environment
Each test run:
1. Starts a SeaweedFS cluster (master, volume, filer)
2. Creates uploads using TUS protocol
3. Verifies files are stored correctly
4. Cleans up test data
### Default Ports
| Service | Port |
|---------|------|
| Master | 19333 |
| Volume | 18080 |
| Filer | 18888 |
### Configuration
Override defaults via environment or Makefile variables:
```bash
FILER_PORT=8889 MASTER_PORT=9334 make test
```
## Example Usage
### Create Upload
```bash
curl -X POST http://localhost:18888/.tus/mydir/file.txt \
-H "Tus-Resumable: 1.0.0" \
-H "Upload-Length: 1000" \
-H "Upload-Metadata: filename dGVzdC50eHQ="
```
### Upload Data
```bash
curl -X PATCH http://localhost:18888/.tus/.uploads/{upload-id} \
-H "Tus-Resumable: 1.0.0" \
-H "Upload-Offset: 0" \
-H "Content-Type: application/offset+octet-stream" \
--data-binary @file.txt
```
### Check Offset
```bash
curl -I http://localhost:18888/.tus/.uploads/{upload-id} \
-H "Tus-Resumable: 1.0.0"
```
### Cancel Upload
```bash
curl -X DELETE http://localhost:18888/.tus/.uploads/{upload-id} \
-H "Tus-Resumable: 1.0.0"
```
## TUS Extensions Supported
- **creation**: Create new uploads with POST
- **creation-with-upload**: Send data in creation request
- **termination**: Cancel uploads with DELETE
## Architecture
```text
Client Filer Volume Servers
| | |
|-- POST /.tus/path/file.mp4 ->| |
| |-- Create session dir ------->|
|<-- 201 Location: /.../{id} --| |
| | |
|-- PATCH /.tus/.uploads/{id} >| |
| Upload-Offset: 0 |-- Assign volume ------------>|
| [chunk data] |-- Upload chunk ------------->|
|<-- 204 Upload-Offset: N -----| |
| | |
| (network failure) | |
| | |
|-- HEAD /.tus/.uploads/{id} ->| |
|<-- Upload-Offset: N ---------| |
| | |
|-- PATCH (resume) ----------->|-- Upload remaining -------->|
|<-- 204 (complete) -----------|-- Assemble final file ----->|
```
## Comparison with S3 Multipart
| Feature | TUS | S3 Multipart |
|---------|-----|--------------|
| Protocol | Custom HTTP headers | S3 API |
| Session Init | POST with Upload-Length | CreateMultipartUpload |
| Upload Data | PATCH with offset | UploadPart with partNumber |
| Resume | HEAD to get offset | ListParts |
| Complete | Automatic at final offset | CompleteMultipartUpload |
| Ordering | Sequential (offset-based) | Parallel (part numbers) |
## Related Resources
- [TUS Protocol Specification](https://tus.io/protocols/resumable-upload)
- [tus-js-client](https://github.com/tus/tus-js-client) - JavaScript client
- [go-tus](https://github.com/eventials/go-tus) - Go client
- [SeaweedFS S3 API](../../weed/s3api) - Alternative multipart upload


@@ -0,0 +1,775 @@
package tus
import (
"bytes"
"context"
"encoding/base64"
"fmt"
"io"
"net/http"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
const (
TusVersion = "1.0.0"
testFilerPort = "18888"
testMasterPort = "19333"
testVolumePort = "18080"
)
// TestCluster represents a running SeaweedFS cluster for testing
type TestCluster struct {
masterCmd *exec.Cmd
volumeCmd *exec.Cmd
filerCmd *exec.Cmd
dataDir string
}
func (c *TestCluster) Stop() {
if c.filerCmd != nil && c.filerCmd.Process != nil {
c.filerCmd.Process.Signal(os.Interrupt)
c.filerCmd.Wait()
}
if c.volumeCmd != nil && c.volumeCmd.Process != nil {
c.volumeCmd.Process.Signal(os.Interrupt)
c.volumeCmd.Wait()
}
if c.masterCmd != nil && c.masterCmd.Process != nil {
c.masterCmd.Process.Signal(os.Interrupt)
c.masterCmd.Wait()
}
}
func (c *TestCluster) FilerURL() string {
return fmt.Sprintf("http://127.0.0.1:%s", testFilerPort)
}
func (c *TestCluster) TusURL() string {
return fmt.Sprintf("%s/.tus", c.FilerURL())
}
// FullURL converts a relative path to a full URL
func (c *TestCluster) FullURL(path string) string {
if strings.HasPrefix(path, "http://") || strings.HasPrefix(path, "https://") {
return path
}
return fmt.Sprintf("http://127.0.0.1:%s%s", testFilerPort, path)
}
// startTestCluster starts a SeaweedFS cluster for testing
func startTestCluster(t *testing.T, ctx context.Context) (*TestCluster, error) {
weedBinary := findWeedBinary()
if weedBinary == "" {
return nil, fmt.Errorf("weed binary not found - please build it first: cd weed && go build")
}
dataDir, err := os.MkdirTemp("", "seaweedfs_tus_test_")
if err != nil {
return nil, err
}
cluster := &TestCluster{dataDir: dataDir}
// Create subdirectories
masterDir := filepath.Join(dataDir, "master")
volumeDir := filepath.Join(dataDir, "volume")
filerDir := filepath.Join(dataDir, "filer")
os.MkdirAll(masterDir, 0755)
os.MkdirAll(volumeDir, 0755)
os.MkdirAll(filerDir, 0755)
// Start master
masterCmd := exec.CommandContext(ctx, weedBinary, "master",
"-port", testMasterPort,
"-mdir", masterDir,
"-ip", "127.0.0.1",
)
masterLogFile, err := os.Create(filepath.Join(masterDir, "master.log"))
if err != nil {
os.RemoveAll(dataDir)
return nil, fmt.Errorf("failed to create master log: %v", err)
}
masterCmd.Stdout = masterLogFile
masterCmd.Stderr = masterLogFile
if err := masterCmd.Start(); err != nil {
os.RemoveAll(dataDir)
return nil, fmt.Errorf("failed to start master: %v", err)
}
cluster.masterCmd = masterCmd
// Wait for master to be ready
if err := waitForHTTPServer("http://127.0.0.1:"+testMasterPort+"/dir/status", 30*time.Second); err != nil {
cluster.Stop()
os.RemoveAll(dataDir)
return nil, fmt.Errorf("master not ready: %v", err)
}
// Start volume server
volumeCmd := exec.CommandContext(ctx, weedBinary, "volume",
"-port", testVolumePort,
"-dir", volumeDir,
"-mserver", "127.0.0.1:"+testMasterPort,
"-ip", "127.0.0.1",
)
volumeLogFile, err := os.Create(filepath.Join(volumeDir, "volume.log"))
if err != nil {
cluster.Stop()
os.RemoveAll(dataDir)
return nil, fmt.Errorf("failed to create volume log: %v", err)
}
volumeCmd.Stdout = volumeLogFile
volumeCmd.Stderr = volumeLogFile
if err := volumeCmd.Start(); err != nil {
cluster.Stop()
os.RemoveAll(dataDir)
return nil, fmt.Errorf("failed to start volume server: %v", err)
}
cluster.volumeCmd = volumeCmd
// Wait for volume server to register with master
if err := waitForHTTPServer("http://127.0.0.1:"+testVolumePort+"/status", 30*time.Second); err != nil {
cluster.Stop()
os.RemoveAll(dataDir)
return nil, fmt.Errorf("volume server not ready: %v", err)
}
// Start filer with TUS enabled
filerCmd := exec.CommandContext(ctx, weedBinary, "filer",
"-port", testFilerPort,
"-master", "127.0.0.1:"+testMasterPort,
"-ip", "127.0.0.1",
"-defaultStoreDir", filerDir,
"-tusBasePath", "/.tus",
)
filerLogFile, err := os.Create(filepath.Join(filerDir, "filer.log"))
if err != nil {
cluster.Stop()
os.RemoveAll(dataDir)
return nil, fmt.Errorf("failed to create filer log: %v", err)
}
filerCmd.Stdout = filerLogFile
filerCmd.Stderr = filerLogFile
if err := filerCmd.Start(); err != nil {
cluster.Stop()
os.RemoveAll(dataDir)
return nil, fmt.Errorf("failed to start filer: %v", err)
}
cluster.filerCmd = filerCmd
// Wait for filer
if err := waitForHTTPServer("http://127.0.0.1:"+testFilerPort+"/", 30*time.Second); err != nil {
cluster.Stop()
os.RemoveAll(dataDir)
return nil, fmt.Errorf("filer not ready: %v", err)
}
// Wait a bit more for the cluster to fully stabilize
// Volumes are created lazily, and we need to ensure the master topology is ready
time.Sleep(5 * time.Second)
return cluster, nil
}
func findWeedBinary() string {
candidates := []string{
"../../weed/weed",
"../weed/weed",
"./weed/weed",
"weed",
}
for _, candidate := range candidates {
if _, err := os.Stat(candidate); err == nil {
return candidate
}
}
if path, err := exec.LookPath("weed"); err == nil {
return path
}
return ""
}
func waitForHTTPServer(url string, timeout time.Duration) error {
start := time.Now()
client := &http.Client{Timeout: 1 * time.Second}
for time.Since(start) < timeout {
resp, err := client.Get(url)
if err == nil {
resp.Body.Close()
return nil
}
time.Sleep(500 * time.Millisecond)
}
return fmt.Errorf("timeout waiting for %s", url)
}
// encodeTusMetadata encodes key-value pairs for Upload-Metadata header
func encodeTusMetadata(metadata map[string]string) string {
var parts []string
for k, v := range metadata {
encoded := base64.StdEncoding.EncodeToString([]byte(v))
parts = append(parts, fmt.Sprintf("%s %s", k, encoded))
}
return strings.Join(parts, ",")
}
// TestTusOptionsHandler tests the OPTIONS endpoint for capability discovery
func TestTusOptionsHandler(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()
cluster, err := startTestCluster(t, ctx)
require.NoError(t, err)
defer func() {
cluster.Stop()
os.RemoveAll(cluster.dataDir)
}()
// Test OPTIONS request
req, err := http.NewRequest(http.MethodOptions, cluster.TusURL()+"/", nil)
require.NoError(t, err)
req.Header.Set("Tus-Resumable", TusVersion)
client := &http.Client{}
resp, err := client.Do(req)
require.NoError(t, err)
defer resp.Body.Close()
// Verify TUS headers
assert.Equal(t, http.StatusOK, resp.StatusCode, "OPTIONS should return 200 OK")
assert.Equal(t, TusVersion, resp.Header.Get("Tus-Resumable"), "Should return Tus-Resumable header")
assert.NotEmpty(t, resp.Header.Get("Tus-Version"), "Should return Tus-Version header")
assert.NotEmpty(t, resp.Header.Get("Tus-Extension"), "Should return Tus-Extension header")
assert.NotEmpty(t, resp.Header.Get("Tus-Max-Size"), "Should return Tus-Max-Size header")
}
// TestTusBasicUpload tests a simple complete upload
func TestTusBasicUpload(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()
cluster, err := startTestCluster(t, ctx)
require.NoError(t, err)
defer func() {
cluster.Stop()
os.RemoveAll(cluster.dataDir)
}()
testData := []byte("Hello, TUS Protocol! This is a test file.")
targetPath := "/testdir/testfile.txt"
// Step 1: Create upload (POST)
createReq, err := http.NewRequest(http.MethodPost, cluster.TusURL()+targetPath, nil)
require.NoError(t, err)
createReq.Header.Set("Tus-Resumable", TusVersion)
createReq.Header.Set("Upload-Length", strconv.Itoa(len(testData)))
createReq.Header.Set("Upload-Metadata", encodeTusMetadata(map[string]string{
"filename": "testfile.txt",
"content-type": "text/plain",
}))
client := &http.Client{}
createResp, err := client.Do(createReq)
require.NoError(t, err)
defer createResp.Body.Close()
assert.Equal(t, http.StatusCreated, createResp.StatusCode, "POST should return 201 Created")
uploadLocation := createResp.Header.Get("Location")
assert.NotEmpty(t, uploadLocation, "Should return Location header with upload URL")
t.Logf("Upload location: %s", uploadLocation)
// Step 2: Upload data (PATCH)
patchReq, err := http.NewRequest(http.MethodPatch, cluster.FullURL(uploadLocation), bytes.NewReader(testData))
require.NoError(t, err)
patchReq.Header.Set("Tus-Resumable", TusVersion)
patchReq.Header.Set("Upload-Offset", "0")
patchReq.Header.Set("Content-Type", "application/offset+octet-stream")
patchReq.Header.Set("Content-Length", strconv.Itoa(len(testData)))
patchResp, err := client.Do(patchReq)
require.NoError(t, err)
defer patchResp.Body.Close()
assert.Equal(t, http.StatusNoContent, patchResp.StatusCode, "PATCH should return 204 No Content")
newOffset := patchResp.Header.Get("Upload-Offset")
assert.Equal(t, strconv.Itoa(len(testData)), newOffset, "Upload-Offset should equal total file size")
// Step 3: Verify the file was created
getResp, err := client.Get(cluster.FilerURL() + targetPath)
require.NoError(t, err)
defer getResp.Body.Close()
assert.Equal(t, http.StatusOK, getResp.StatusCode, "GET should return 200 OK")
body, err := io.ReadAll(getResp.Body)
require.NoError(t, err)
assert.Equal(t, testData, body, "File content should match uploaded data")
}
// TestTusChunkedUpload tests uploading a file in multiple chunks
func TestTusChunkedUpload(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()
cluster, err := startTestCluster(t, ctx)
require.NoError(t, err)
defer func() {
cluster.Stop()
os.RemoveAll(cluster.dataDir)
}()
// Create test data (100KB)
testData := make([]byte, 100*1024)
for i := range testData {
testData[i] = byte(i % 256)
}
chunkSize := 32 * 1024 // 32KB chunks
targetPath := "/chunked/largefile.bin"
client := &http.Client{}
// Step 1: Create upload
createReq, err := http.NewRequest(http.MethodPost, cluster.TusURL()+targetPath, nil)
require.NoError(t, err)
createReq.Header.Set("Tus-Resumable", TusVersion)
createReq.Header.Set("Upload-Length", strconv.Itoa(len(testData)))
createResp, err := client.Do(createReq)
require.NoError(t, err)
defer createResp.Body.Close()
require.Equal(t, http.StatusCreated, createResp.StatusCode)
uploadLocation := createResp.Header.Get("Location")
require.NotEmpty(t, uploadLocation)
t.Logf("Upload location: %s", uploadLocation)
// Step 2: Upload in chunks
offset := 0
for offset < len(testData) {
end := offset + chunkSize
if end > len(testData) {
end = len(testData)
}
chunk := testData[offset:end]
patchReq, err := http.NewRequest(http.MethodPatch, cluster.FullURL(uploadLocation), bytes.NewReader(chunk))
require.NoError(t, err)
patchReq.Header.Set("Tus-Resumable", TusVersion)
patchReq.Header.Set("Upload-Offset", strconv.Itoa(offset))
patchReq.Header.Set("Content-Type", "application/offset+octet-stream")
patchReq.Header.Set("Content-Length", strconv.Itoa(len(chunk)))
patchResp, err := client.Do(patchReq)
require.NoError(t, err)
patchResp.Body.Close()
require.Equal(t, http.StatusNoContent, patchResp.StatusCode,
"PATCH chunk at offset %d should return 204", offset)
newOffset, err := strconv.Atoi(patchResp.Header.Get("Upload-Offset"))
require.NoError(t, err, "Upload-Offset header should be a valid integer")
require.Equal(t, end, newOffset, "New offset should be %d", end)
t.Logf("Uploaded chunk: offset=%d, size=%d, newOffset=%d", offset, len(chunk), newOffset)
offset = end
}
// Step 3: Verify the complete file
getResp, err := client.Get(cluster.FilerURL() + targetPath)
require.NoError(t, err)
defer getResp.Body.Close()
assert.Equal(t, http.StatusOK, getResp.StatusCode)
body, err := io.ReadAll(getResp.Body)
require.NoError(t, err)
assert.Equal(t, testData, body, "File content should match uploaded data")
}
// TestTusHeadRequest tests the HEAD endpoint to get upload offset
func TestTusHeadRequest(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()
cluster, err := startTestCluster(t, ctx)
require.NoError(t, err)
defer func() {
cluster.Stop()
os.RemoveAll(cluster.dataDir)
}()
testData := []byte("Test data for HEAD request verification")
targetPath := "/headtest/file.txt"
client := &http.Client{}
// Create upload
createReq, err := http.NewRequest(http.MethodPost, cluster.TusURL()+targetPath, nil)
require.NoError(t, err)
createReq.Header.Set("Tus-Resumable", TusVersion)
createReq.Header.Set("Upload-Length", strconv.Itoa(len(testData)))
createResp, err := client.Do(createReq)
require.NoError(t, err)
defer createResp.Body.Close()
require.Equal(t, http.StatusCreated, createResp.StatusCode)
uploadLocation := createResp.Header.Get("Location")
// HEAD before any data uploaded - offset should be 0
headReq1, err := http.NewRequest(http.MethodHead, cluster.FullURL(uploadLocation), nil)
require.NoError(t, err)
headReq1.Header.Set("Tus-Resumable", TusVersion)
headResp1, err := client.Do(headReq1)
require.NoError(t, err)
defer headResp1.Body.Close()
assert.Equal(t, http.StatusOK, headResp1.StatusCode)
assert.Equal(t, "0", headResp1.Header.Get("Upload-Offset"), "Initial offset should be 0")
assert.Equal(t, strconv.Itoa(len(testData)), headResp1.Header.Get("Upload-Length"))
// Upload half the data
halfLen := len(testData) / 2
patchReq, err := http.NewRequest(http.MethodPatch, cluster.FullURL(uploadLocation), bytes.NewReader(testData[:halfLen]))
require.NoError(t, err)
patchReq.Header.Set("Tus-Resumable", TusVersion)
patchReq.Header.Set("Upload-Offset", "0")
patchReq.Header.Set("Content-Type", "application/offset+octet-stream")
patchResp, err := client.Do(patchReq)
require.NoError(t, err)
patchResp.Body.Close()
require.Equal(t, http.StatusNoContent, patchResp.StatusCode)
// HEAD after partial upload - offset should be halfLen
headReq2, err := http.NewRequest(http.MethodHead, cluster.FullURL(uploadLocation), nil)
require.NoError(t, err)
headReq2.Header.Set("Tus-Resumable", TusVersion)
headResp2, err := client.Do(headReq2)
require.NoError(t, err)
defer headResp2.Body.Close()
assert.Equal(t, http.StatusOK, headResp2.StatusCode)
assert.Equal(t, strconv.Itoa(halfLen), headResp2.Header.Get("Upload-Offset"),
"Offset should be %d after partial upload", halfLen)
}
// TestTusDeleteUpload tests canceling an in-progress upload
func TestTusDeleteUpload(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()
cluster, err := startTestCluster(t, ctx)
require.NoError(t, err)
defer func() {
cluster.Stop()
os.RemoveAll(cluster.dataDir)
}()
testData := []byte("Data to be deleted")
targetPath := "/deletetest/file.txt"
client := &http.Client{}
// Create upload
createReq, err := http.NewRequest(http.MethodPost, cluster.TusURL()+targetPath, nil)
require.NoError(t, err)
createReq.Header.Set("Tus-Resumable", TusVersion)
createReq.Header.Set("Upload-Length", strconv.Itoa(len(testData)))
createResp, err := client.Do(createReq)
require.NoError(t, err)
defer createResp.Body.Close()
require.Equal(t, http.StatusCreated, createResp.StatusCode)
uploadLocation := createResp.Header.Get("Location")
// Upload some data
patchReq, err := http.NewRequest(http.MethodPatch, cluster.FullURL(uploadLocation), bytes.NewReader(testData[:10]))
require.NoError(t, err)
patchReq.Header.Set("Tus-Resumable", TusVersion)
patchReq.Header.Set("Upload-Offset", "0")
patchReq.Header.Set("Content-Type", "application/offset+octet-stream")
patchResp, err := client.Do(patchReq)
require.NoError(t, err)
patchResp.Body.Close()
// Delete the upload
deleteReq, err := http.NewRequest(http.MethodDelete, cluster.FullURL(uploadLocation), nil)
require.NoError(t, err)
deleteReq.Header.Set("Tus-Resumable", TusVersion)
deleteResp, err := client.Do(deleteReq)
require.NoError(t, err)
defer deleteResp.Body.Close()
assert.Equal(t, http.StatusNoContent, deleteResp.StatusCode, "DELETE should return 204")
// Verify upload is gone - HEAD should return 404
headReq, err := http.NewRequest(http.MethodHead, cluster.FullURL(uploadLocation), nil)
require.NoError(t, err)
headReq.Header.Set("Tus-Resumable", TusVersion)
headResp, err := client.Do(headReq)
require.NoError(t, err)
defer headResp.Body.Close()
assert.Equal(t, http.StatusNotFound, headResp.StatusCode, "HEAD after DELETE should return 404")
}

// TestTusInvalidOffset tests error handling for mismatched offsets
func TestTusInvalidOffset(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()
cluster, err := startTestCluster(t, ctx)
require.NoError(t, err)
defer func() {
cluster.Stop()
os.RemoveAll(cluster.dataDir)
}()
testData := []byte("Test data for offset validation")
targetPath := "/offsettest/file.txt"
client := &http.Client{}
// Create upload
createReq, err := http.NewRequest(http.MethodPost, cluster.TusURL()+targetPath, nil)
require.NoError(t, err)
createReq.Header.Set("Tus-Resumable", TusVersion)
createReq.Header.Set("Upload-Length", strconv.Itoa(len(testData)))
createResp, err := client.Do(createReq)
require.NoError(t, err)
defer createResp.Body.Close()
require.Equal(t, http.StatusCreated, createResp.StatusCode)
uploadLocation := createResp.Header.Get("Location")
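	// The TUS core protocol requires the server to compare the request's
	// Upload-Offset against its recorded offset and reject any mismatch
	// with 409 Conflict, letting the client resynchronize via HEAD.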
// Try to upload with wrong offset (should be 0, but we send 100)
patchReq, err := http.NewRequest(http.MethodPatch, cluster.FullURL(uploadLocation), bytes.NewReader(testData))
require.NoError(t, err)
patchReq.Header.Set("Tus-Resumable", TusVersion)
patchReq.Header.Set("Upload-Offset", "100") // Wrong offset!
patchReq.Header.Set("Content-Type", "application/offset+octet-stream")
patchResp, err := client.Do(patchReq)
require.NoError(t, err)
defer patchResp.Body.Close()
assert.Equal(t, http.StatusConflict, patchResp.StatusCode,
"PATCH with wrong offset should return 409 Conflict")
}

// TestTusUploadNotFound tests accessing a non-existent upload
func TestTusUploadNotFound(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()
cluster, err := startTestCluster(t, ctx)
require.NoError(t, err)
defer func() {
cluster.Stop()
os.RemoveAll(cluster.dataDir)
}()
client := &http.Client{}
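	// Upload sessions are addressed as {tusBasePath}/.uploads/{id}, so a
	// made-up id produces a well-formed session URL that matches no upload.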
fakeUploadURL := cluster.TusURL() + "/.uploads/nonexistent-upload-id"
// HEAD on non-existent upload
headReq, err := http.NewRequest(http.MethodHead, fakeUploadURL, nil)
require.NoError(t, err)
headReq.Header.Set("Tus-Resumable", TusVersion)
headResp, err := client.Do(headReq)
require.NoError(t, err)
defer headResp.Body.Close()
assert.Equal(t, http.StatusNotFound, headResp.StatusCode,
"HEAD on non-existent upload should return 404")
// PATCH on non-existent upload
patchReq, err := http.NewRequest(http.MethodPatch, fakeUploadURL, bytes.NewReader([]byte("data")))
require.NoError(t, err)
patchReq.Header.Set("Tus-Resumable", TusVersion)
patchReq.Header.Set("Upload-Offset", "0")
patchReq.Header.Set("Content-Type", "application/offset+octet-stream")
patchResp, err := client.Do(patchReq)
require.NoError(t, err)
defer patchResp.Body.Close()
assert.Equal(t, http.StatusNotFound, patchResp.StatusCode,
"PATCH on non-existent upload should return 404")
}

// TestTusCreationWithUpload tests the creation-with-upload extension
func TestTusCreationWithUpload(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()
cluster, err := startTestCluster(t, ctx)
require.NoError(t, err)
defer func() {
cluster.Stop()
os.RemoveAll(cluster.dataDir)
}()
testData := []byte("Small file uploaded in creation request")
targetPath := "/creationwithupload/smallfile.txt"
client := &http.Client{}
// Create upload with data in the same request
createReq, err := http.NewRequest(http.MethodPost, cluster.TusURL()+targetPath, bytes.NewReader(testData))
require.NoError(t, err)
createReq.Header.Set("Tus-Resumable", TusVersion)
createReq.Header.Set("Upload-Length", strconv.Itoa(len(testData)))
createReq.Header.Set("Content-Type", "application/offset+octet-stream")
createResp, err := client.Do(createReq)
require.NoError(t, err)
defer createResp.Body.Close()
	require.Equal(t, http.StatusCreated, createResp.StatusCode)
uploadLocation := createResp.Header.Get("Location")
assert.NotEmpty(t, uploadLocation)
// Check Upload-Offset header - should indicate all data was received
uploadOffset := createResp.Header.Get("Upload-Offset")
assert.Equal(t, strconv.Itoa(len(testData)), uploadOffset,
"Upload-Offset should equal file size for complete upload")
// Verify the file
getResp, err := client.Get(cluster.FilerURL() + targetPath)
require.NoError(t, err)
defer getResp.Body.Close()
assert.Equal(t, http.StatusOK, getResp.StatusCode)
body, err := io.ReadAll(getResp.Body)
require.NoError(t, err)
assert.Equal(t, testData, body)
}
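
// For reference, the creation-with-upload exchange above collapses to a
// single curl call (assuming the filer listens on localhost:8888, the TUS
// base path is /.tus, and TusVersion is "1.0.0"):
//
//	curl -i -X POST http://localhost:8888/.tus/creationwithupload/smallfile.txt \
//	  -H "Tus-Resumable: 1.0.0" \
//	  -H "Upload-Length: 39" \
//	  -H "Content-Type: application/offset+octet-stream" \
//	  --data-binary "Small file uploaded in creation request"
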
// TestTusResumeAfterInterruption simulates resuming an upload after failure
func TestTusResumeAfterInterruption(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
defer cancel()
cluster, err := startTestCluster(t, ctx)
require.NoError(t, err)
defer func() {
cluster.Stop()
os.RemoveAll(cluster.dataDir)
}()
// 50KB test data
testData := make([]byte, 50*1024)
for i := range testData {
testData[i] = byte(i % 256)
}
targetPath := "/resume/interrupted.bin"
client := &http.Client{}
// Create upload
createReq, err := http.NewRequest(http.MethodPost, cluster.TusURL()+targetPath, nil)
require.NoError(t, err)
createReq.Header.Set("Tus-Resumable", TusVersion)
createReq.Header.Set("Upload-Length", strconv.Itoa(len(testData)))
createResp, err := client.Do(createReq)
require.NoError(t, err)
defer createResp.Body.Close()
require.Equal(t, http.StatusCreated, createResp.StatusCode)
uploadLocation := createResp.Header.Get("Location")
// Upload first 20KB
firstChunkSize := 20 * 1024
patchReq1, err := http.NewRequest(http.MethodPatch, cluster.FullURL(uploadLocation), bytes.NewReader(testData[:firstChunkSize]))
require.NoError(t, err)
patchReq1.Header.Set("Tus-Resumable", TusVersion)
patchReq1.Header.Set("Upload-Offset", "0")
patchReq1.Header.Set("Content-Type", "application/offset+octet-stream")
patchResp1, err := client.Do(patchReq1)
require.NoError(t, err)
patchResp1.Body.Close()
require.Equal(t, http.StatusNoContent, patchResp1.StatusCode)
t.Log("Simulating network interruption...")
	// Simulate resumption: query the current offset with HEAD
headReq, err := http.NewRequest(http.MethodHead, cluster.FullURL(uploadLocation), nil)
require.NoError(t, err)
headReq.Header.Set("Tus-Resumable", TusVersion)
headResp, err := client.Do(headReq)
require.NoError(t, err)
defer headResp.Body.Close()
require.Equal(t, http.StatusOK, headResp.StatusCode)
currentOffset, err := strconv.Atoi(headResp.Header.Get("Upload-Offset"))
require.NoError(t, err, "Upload-Offset header should be a valid integer")
t.Logf("Resumed upload at offset: %d", currentOffset)
require.Equal(t, firstChunkSize, currentOffset)
// Resume upload from current offset
patchReq2, err := http.NewRequest(http.MethodPatch, cluster.FullURL(uploadLocation), bytes.NewReader(testData[currentOffset:]))
require.NoError(t, err)
patchReq2.Header.Set("Tus-Resumable", TusVersion)
patchReq2.Header.Set("Upload-Offset", strconv.Itoa(currentOffset))
patchReq2.Header.Set("Content-Type", "application/offset+octet-stream")
patchResp2, err := client.Do(patchReq2)
require.NoError(t, err)
patchResp2.Body.Close()
require.Equal(t, http.StatusNoContent, patchResp2.StatusCode)
// Verify complete file
getResp, err := client.Get(cluster.FilerURL() + targetPath)
require.NoError(t, err)
defer getResp.Body.Close()
assert.Equal(t, http.StatusOK, getResp.StatusCode)
body, err := io.ReadAll(getResp.Body)
require.NoError(t, err)
assert.Equal(t, testData, body, "Resumed upload should produce complete file")
}
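
// resumableUpload is a hypothetical client-side sketch, not part of the test
// suite (and it assumes "fmt" is imported), of the generic resume loop the
// test above drives by hand: HEAD to learn the server's acknowledged offset,
// then PATCH the remaining bytes from that offset until every byte is stored.
// A caller that hits a network error can simply invoke it again; the HEAD
// resynchronizes the offset before any data is resent.
func resumableUpload(client *http.Client, uploadURL string, data []byte) error {
	for {
		// Ask the server how many bytes it has already persisted.
		headReq, err := http.NewRequest(http.MethodHead, uploadURL, nil)
		if err != nil {
			return err
		}
		headReq.Header.Set("Tus-Resumable", TusVersion)
		headResp, err := client.Do(headReq)
		if err != nil {
			return err
		}
		headResp.Body.Close()
		offset, err := strconv.Atoi(headResp.Header.Get("Upload-Offset"))
		if err != nil {
			return err
		}
		if offset >= len(data) {
			return nil // the server has acknowledged every byte
		}
		// Send everything past the acknowledged offset in a single PATCH.
		patchReq, err := http.NewRequest(http.MethodPatch, uploadURL, bytes.NewReader(data[offset:]))
		if err != nil {
			return err
		}
		patchReq.Header.Set("Tus-Resumable", TusVersion)
		patchReq.Header.Set("Upload-Offset", strconv.Itoa(offset))
		patchReq.Header.Set("Content-Type", "application/offset+octet-stream")
		patchResp, err := client.Do(patchReq)
		if err != nil {
			return err
		}
		patchResp.Body.Close()
		if patchResp.StatusCode != http.StatusNoContent {
			return fmt.Errorf("unexpected PATCH status %d", patchResp.StatusCode)
		}
	}
}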