Files
seaweedFS/test/tus/tus_integration_test.go
Chris Lu 1b1e5f69a2 Add TUS protocol support for resumable uploads (#7592)
* Add TUS protocol integration tests

This commit adds integration tests for the TUS (resumable upload) protocol
in preparation for implementing TUS support in the filer.

Test coverage includes:
- OPTIONS handler for capability discovery
- Basic single-request upload
- Chunked/resumable uploads
- HEAD requests for offset tracking
- DELETE for upload cancellation
- Error handling (invalid offsets, missing uploads)
- Creation-with-upload extension
- Resume after interruption simulation

Tests are skipped in short mode and require a running SeaweedFS cluster.

* Add TUS session storage types and utilities

Implements TUS upload session management:
- TusSession struct for tracking upload state
- Session creation with directory-based storage
- Session persistence using filer entries
- Session retrieval and offset updates
- Session deletion with chunk cleanup
- Upload completion with chunk assembly into final file

Session data is stored in the /.uploads.tus/{upload-id}/ directory,
following the pattern used by S3 multipart uploads.
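The session layout above can be sketched as follows. This is illustrative only: the struct fields, names, and helper below are assumptions for exposition, not the filer's actual types.

```go
package main

import (
	"fmt"
	"path"
)

// tusUploadsDir mirrors the directory layout described above.
const tusUploadsDir = "/.uploads.tus"

// TusSession is a sketch of the per-upload state persisted as a filer entry;
// the real struct in the filer may carry different fields.
type TusSession struct {
	UploadID   string            // identifier returned in the Location header
	TargetPath string            // final file path once the upload completes
	Size       int64             // declared Upload-Length
	Offset     int64             // contiguous bytes received so far
	Metadata   map[string]string // decoded Upload-Metadata pairs
}

// sessionDir returns the directory holding a session's state and chunk entries.
func sessionDir(uploadID string) string {
	return path.Join(tusUploadsDir, uploadID)
}

func main() {
	fmt.Println(sessionDir("abc123")) // /.uploads.tus/abc123
}
```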

* Add TUS HTTP handlers

Implements TUS protocol HTTP handlers:
- tusHandler: Main entry point routing requests
- tusOptionsHandler: Capability discovery (OPTIONS)
- tusCreateHandler: Create new upload (POST)
- tusHeadHandler: Get upload offset (HEAD)
- tusPatchHandler: Upload data at offset (PATCH)
- tusDeleteHandler: Cancel upload (DELETE)
- tusWriteData: Upload data to volume servers

Features:
- Supports creation-with-upload extension
- Validates TUS protocol headers
- Offset conflict detection
- Automatic upload completion when size is reached
- Metadata parsing from Upload-Metadata header
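Per the tus 1.0.0 specification, Upload-Metadata is a comma-separated list of "key base64value" pairs (a key may also appear with no value). A minimal parser for that format might look like the sketch below; the filer's actual parser may differ in details such as validation.

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// parseTusMetadata decodes an Upload-Metadata header into a key/value map.
func parseTusMetadata(header string) (map[string]string, error) {
	meta := make(map[string]string)
	for _, pair := range strings.Split(header, ",") {
		pair = strings.TrimSpace(pair)
		if pair == "" {
			continue
		}
		key, encoded, hasValue := strings.Cut(pair, " ")
		if !hasValue {
			meta[key] = "" // key-only entries are allowed by the spec
			continue
		}
		value, err := base64.StdEncoding.DecodeString(encoded)
		if err != nil {
			return nil, fmt.Errorf("invalid base64 for key %q: %w", key, err)
		}
		meta[key] = string(value)
	}
	return meta, nil
}

func main() {
	m, err := parseTusMetadata("filename dGVzdGZpbGUudHh0,content-type dGV4dC9wbGFpbg==")
	if err != nil {
		panic(err)
	}
	fmt.Println(m["filename"], m["content-type"]) // testfile.txt text/plain
}
```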

* Wire up TUS protocol routes in filer server

Add TUS handler route (/.tus/) to the filer HTTP server.
The TUS route is registered before the catch-all route to ensure
proper routing of TUS protocol requests.

TUS protocol is now accessible at:
- OPTIONS /.tus/ - Capability discovery
- POST /.tus/{path} - Create upload
- HEAD /.tus/.uploads/{id} - Get offset
- PATCH /.tus/.uploads/{id} - Upload data
- DELETE /.tus/.uploads/{id} - Cancel upload

* Improve TUS integration test setup

Add comprehensive Makefile for TUS tests with targets:
- test-with-server: Run tests with automatic server management
- test-basic/chunked/resume/errors: Specific test categories
- manual-start/stop: For development testing
- debug-logs/status: For debugging
- ci-test: For CI/CD pipelines

Update README.md with:
- Detailed TUS protocol documentation
- All endpoint descriptions with headers
- Usage examples with curl commands
- Architecture diagram
- Comparison with S3 multipart uploads

Follows the pattern established by other tests in test/ folder.

* Fix TUS integration tests and creation-with-upload

- Fix test URLs to use full URLs instead of relative paths
- Fix creation-with-upload to refresh session before completing
- Fix Makefile to properly handle test cleanup
- Add FullURL helper function to TestCluster

* Add TUS protocol tests to GitHub Actions CI

- Add tus-tests.yml workflow that runs on PRs and pushes
- Runs when TUS-related files are modified
- Automatic server management for integration testing
- Upload logs on failure for debugging

* Make TUS base path configurable via CLI

- Add -tus.path CLI flag to filer command
- TUS is disabled by default (empty path)
- Example: -tus.path=/.tus to enable at /.tus endpoint
- Update test Makefile to use -tus.path flag
- Update README with TUS enabling instructions

* Rename -tus.path to -tusBasePath with default .tus

- Rename CLI flag from -tus.path to -tusBasePath
- Default to .tus (TUS enabled by default)
- Add -filer.tusBasePath option to weed server command
- Properly handle path prefix (prepend / if missing)
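The prefix handling described above might be implemented along these lines. The function name, error text, and the guard against "/" (which a later commit adds to avoid capturing every filer route) are illustrative assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeTusBasePath sketches the flag handling: prepend a leading "/" if
// missing, and reject "/" itself since it would hijack all filer routes.
func normalizeTusBasePath(p string) (string, error) {
	if p == "" {
		return "", nil // empty means TUS is disabled
	}
	if !strings.HasPrefix(p, "/") {
		p = "/" + p
	}
	if p == "/" {
		return "", fmt.Errorf("tusBasePath must not be \"/\": it would capture all filer routes")
	}
	return strings.TrimSuffix(p, "/"), nil
}

func main() {
	p, err := normalizeTusBasePath(".tus")
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // /.tus
}
```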

* Address code review comments

- Sort chunks by offset before assembling final file
- Use chunk.Offset directly instead of recalculating
- Return error on invalid file ID instead of skipping
- Require Content-Length header for PATCH requests
- Use fs.option.Cipher for encryption setting
- Detect MIME type from data using http.DetectContentType
- Fix concurrency group for push events in workflow
- Use os.Interrupt instead of Kill for graceful shutdown in tests

* fmt

* Address remaining code review comments

- Fix potential open redirect vulnerability by sanitizing uploadLocation path
- Add language specifier to README code block
- Handle os.Create errors in test setup
- Use waitForHTTPServer instead of time.Sleep for master/volume readiness
- Improve test reliability and debugging

* Address critical and high-priority review comments

- Add per-session locking to prevent race conditions in updateTusSessionOffset
- Stream data directly to volume server instead of buffering entire chunk
- Only buffer 512 bytes for MIME type detection, then stream remaining data
- Clean up session locks when session is deleted

* Fix race condition to work across multiple filer instances

- Store each chunk as a separate file entry instead of updating session JSON
- Chunk file names encode offset, size, and fileId for atomic storage
- getTusSession loads chunks from directory listing (atomic read)
- Eliminates read-modify-write race condition across multiple filers
- Remove in-memory mutex that only worked for single filer instance

* Address code review comments: fix variable shadowing, sniff size, and test stability

- Rename the path variable to reqPath to avoid shadowing the path package
- Make sniff buffer size respect contentLength (read at most contentLength bytes)
- Handle Content-Length < 0 in creation-with-upload (return error for chunked encoding)
- Fix test cluster: use temp directory for filer store, add startup delay

* Fix test stability: increase cluster stabilization delay to 5 seconds

The tests were intermittently failing because the volume server needed more
time to create volumes and register with the master. Increasing the delay
from 2 to 5 seconds fixes the flaky test behavior.

* Address PR review comments for TUS protocol support

- Fix strconv.Atoi error handling in test file (lines 386, 747)
- Fix lossy fileId encoding: use base64 instead of underscore replacement
- Add pagination support for ListDirectoryEntries in getTusSession
- Batch delete chunks instead of one-by-one in deleteTusSession

* Address additional PR review comments for TUS protocol

- Fix UploadAt timestamp: use entry.Crtime instead of time.Now()
- Remove redundant JSON content in chunk entry (metadata in filename)
- Refactor tusWriteData to stream in 4MB chunks to avoid OOM on large uploads
- Pass filer.Entry to parseTusChunkPath to preserve actual upload time

* Address more PR review comments for TUS protocol

- Normalize TUS path once in filer_server.go, store in option.TusPath
- Remove redundant path normalization from TUS handlers
- Remove goto statement in tusCreateHandler, simplify control flow

* Remove unnecessary mutexes in tusWriteData

The upload loop is sequential, so uploadErrLock and chunksLock are not needed.

* Rename updateTusSessionOffset to saveTusChunk

Remove unused newOffset parameter and rename function to better reflect its purpose.

* Improve TUS upload performance and add path validation

- Reuse operation.Uploader across sub-chunks for better connection reuse
- Guard against TusPath='/' to prevent hijacking all filer routes

* Address PR review comments for TUS protocol

- Fix critical chunk filename parsing: use strings.Cut instead of SplitN
  to correctly handle base64-encoded fileIds that may contain underscores
- Rename tusPath to tusBasePath for naming consistency across codebase
- Add background garbage collection for expired TUS sessions (runs hourly)
- Improve error messages with %w wrapping for better debuggability
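The parsing fix above can be illustrated with a round-trip sketch. The "offset_size_fileIdBase64" layout is an assumption for exposition; the real filename format may differ. The key point is that a URL-safe base64 alphabet includes '_', so the parser must cut exactly two separators from the left rather than splitting on every underscore.

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strconv"
	"strings"
)

// encodeChunkName packs a chunk's offset, size, and fileId into an entry name.
func encodeChunkName(offset, size int64, fileId string) string {
	return fmt.Sprintf("%d_%d_%s", offset, size,
		base64.URLEncoding.EncodeToString([]byte(fileId)))
}

// parseChunkName recovers the fields, cutting only the first two underscores
// so underscores inside the base64 fileId are preserved.
func parseChunkName(name string) (offset, size int64, fileId string, err error) {
	offsetStr, rest, ok := strings.Cut(name, "_")
	if !ok {
		return 0, 0, "", fmt.Errorf("malformed chunk name %q", name)
	}
	sizeStr, encoded, ok := strings.Cut(rest, "_")
	if !ok {
		return 0, 0, "", fmt.Errorf("malformed chunk name %q", name)
	}
	if offset, err = strconv.ParseInt(offsetStr, 10, 64); err != nil {
		return 0, 0, "", err
	}
	if size, err = strconv.ParseInt(sizeStr, 10, 64); err != nil {
		return 0, 0, "", err
	}
	decoded, err := base64.URLEncoding.DecodeString(encoded)
	if err != nil {
		return 0, 0, "", err
	}
	return offset, size, string(decoded), nil
}

func main() {
	name := encodeChunkName(32768, 4096, "3,01637037d6")
	off, size, fid, err := parseChunkName(name)
	if err != nil {
		panic(err)
	}
	fmt.Println(off, size, fid) // 32768 4096 3,01637037d6
}
```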

* Address additional TUS PR review comments

- Fix tusBasePath default to use leading slash (/.tus) for consistency
- Add chunk contiguity validation in completeTusUpload to detect gaps/overlaps
- Fix offset calculation to find maximum contiguous range from 0, not just last chunk
- Return 413 Request Entity Too Large instead of silently truncating content
- Document tusChunkSize rationale (4MB balances memory vs request overhead)
- Fix Makefile xargs portability by removing GNU-specific -r flag
- Add explicit -tusBasePath flag to integration test for robustness
- Fix README example to use /.uploads/tus path format
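The contiguity logic above amounts to sorting chunks by offset and walking forward from 0. A minimal sketch, with a hypothetical chunkInfo type standing in for the parsed chunk entries:

```go
package main

import (
	"fmt"
	"sort"
)

// chunkInfo is a stand-in for the parsed chunk entries; real types differ.
type chunkInfo struct {
	Offset int64
	Size   int64
}

// contiguousOffset returns how many contiguous bytes starting at 0 have been
// received. A gap stops the walk (the upload can only resume at the gap);
// an overlap is reported as an error.
func contiguousOffset(chunks []chunkInfo) (int64, error) {
	sort.Slice(chunks, func(i, j int) bool { return chunks[i].Offset < chunks[j].Offset })
	var next int64
	for _, c := range chunks {
		if c.Offset > next {
			break // gap: bytes [next, c.Offset) are missing
		}
		if c.Offset < next {
			return 0, fmt.Errorf("overlapping chunk at offset %d (expected %d)", c.Offset, next)
		}
		next += c.Size
	}
	return next, nil
}

func main() {
	// Chunks at 0..100 and 100..250 are contiguous; the chunk at 400 is not.
	got, err := contiguousOffset([]chunkInfo{{400, 50}, {0, 100}, {100, 150}})
	if err != nil {
		panic(err)
	}
	fmt.Println(got) // 250
}
```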

* Revert log_buffer changes (moved to separate PR)

* Minor style fixes from PR review

- Simplify tusBasePath flag description to use example format
- Add 'TUS upload' prefix to session not found error message
- Remove duplicate tusChunkSize comment
- Capitalize warning message for consistency
- Add grep filter to Makefile xargs for better empty input handling
2025-12-14 21:56:07 -08:00


package tus

import (
	"bytes"
	"context"
	"encoding/base64"
	"fmt"
	"io"
	"net/http"
	"os"
	"os/exec"
	"path/filepath"
	"strconv"
	"strings"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

const (
	TusVersion     = "1.0.0"
	testFilerPort  = "18888"
	testMasterPort = "19333"
	testVolumePort = "18080"
)
// TestCluster represents a running SeaweedFS cluster for testing
type TestCluster struct {
	masterCmd *exec.Cmd
	volumeCmd *exec.Cmd
	filerCmd  *exec.Cmd
	dataDir   string
}

func (c *TestCluster) Stop() {
	if c.filerCmd != nil && c.filerCmd.Process != nil {
		c.filerCmd.Process.Signal(os.Interrupt)
		c.filerCmd.Wait()
	}
	if c.volumeCmd != nil && c.volumeCmd.Process != nil {
		c.volumeCmd.Process.Signal(os.Interrupt)
		c.volumeCmd.Wait()
	}
	if c.masterCmd != nil && c.masterCmd.Process != nil {
		c.masterCmd.Process.Signal(os.Interrupt)
		c.masterCmd.Wait()
	}
}

func (c *TestCluster) FilerURL() string {
	return fmt.Sprintf("http://127.0.0.1:%s", testFilerPort)
}

func (c *TestCluster) TusURL() string {
	return fmt.Sprintf("%s/.tus", c.FilerURL())
}

// FullURL converts a relative path to a full URL
func (c *TestCluster) FullURL(path string) string {
	if strings.HasPrefix(path, "http://") || strings.HasPrefix(path, "https://") {
		return path
	}
	return fmt.Sprintf("http://127.0.0.1:%s%s", testFilerPort, path)
}
// startTestCluster starts a SeaweedFS cluster for testing
func startTestCluster(t *testing.T, ctx context.Context) (*TestCluster, error) {
	weedBinary := findWeedBinary()
	if weedBinary == "" {
		return nil, fmt.Errorf("weed binary not found - please build it first: cd weed && go build")
	}

	dataDir, err := os.MkdirTemp("", "seaweedfs_tus_test_")
	if err != nil {
		return nil, err
	}
	cluster := &TestCluster{dataDir: dataDir}

	// Create subdirectories
	masterDir := filepath.Join(dataDir, "master")
	volumeDir := filepath.Join(dataDir, "volume")
	filerDir := filepath.Join(dataDir, "filer")
	for _, dir := range []string{masterDir, volumeDir, filerDir} {
		if err := os.MkdirAll(dir, 0755); err != nil {
			os.RemoveAll(dataDir)
			return nil, fmt.Errorf("failed to create %s: %v", dir, err)
		}
	}

	// Start master
	masterCmd := exec.CommandContext(ctx, weedBinary, "master",
		"-port", testMasterPort,
		"-mdir", masterDir,
		"-ip", "127.0.0.1",
	)
	masterLogFile, err := os.Create(filepath.Join(masterDir, "master.log"))
	if err != nil {
		os.RemoveAll(dataDir)
		return nil, fmt.Errorf("failed to create master log: %v", err)
	}
	masterCmd.Stdout = masterLogFile
	masterCmd.Stderr = masterLogFile
	if err := masterCmd.Start(); err != nil {
		os.RemoveAll(dataDir)
		return nil, fmt.Errorf("failed to start master: %v", err)
	}
	cluster.masterCmd = masterCmd

	// Wait for master to be ready
	if err := waitForHTTPServer("http://127.0.0.1:"+testMasterPort+"/dir/status", 30*time.Second); err != nil {
		cluster.Stop()
		os.RemoveAll(dataDir)
		return nil, fmt.Errorf("master not ready: %v", err)
	}

	// Start volume server
	volumeCmd := exec.CommandContext(ctx, weedBinary, "volume",
		"-port", testVolumePort,
		"-dir", volumeDir,
		"-mserver", "127.0.0.1:"+testMasterPort,
		"-ip", "127.0.0.1",
	)
	volumeLogFile, err := os.Create(filepath.Join(volumeDir, "volume.log"))
	if err != nil {
		cluster.Stop()
		os.RemoveAll(dataDir)
		return nil, fmt.Errorf("failed to create volume log: %v", err)
	}
	volumeCmd.Stdout = volumeLogFile
	volumeCmd.Stderr = volumeLogFile
	if err := volumeCmd.Start(); err != nil {
		cluster.Stop()
		os.RemoveAll(dataDir)
		return nil, fmt.Errorf("failed to start volume server: %v", err)
	}
	cluster.volumeCmd = volumeCmd

	// Wait for volume server to register with master
	if err := waitForHTTPServer("http://127.0.0.1:"+testVolumePort+"/status", 30*time.Second); err != nil {
		cluster.Stop()
		os.RemoveAll(dataDir)
		return nil, fmt.Errorf("volume server not ready: %v", err)
	}

	// Start filer with TUS enabled
	filerCmd := exec.CommandContext(ctx, weedBinary, "filer",
		"-port", testFilerPort,
		"-master", "127.0.0.1:"+testMasterPort,
		"-ip", "127.0.0.1",
		"-defaultStoreDir", filerDir,
		"-tusBasePath", "/.tus",
	)
	filerLogFile, err := os.Create(filepath.Join(filerDir, "filer.log"))
	if err != nil {
		cluster.Stop()
		os.RemoveAll(dataDir)
		return nil, fmt.Errorf("failed to create filer log: %v", err)
	}
	filerCmd.Stdout = filerLogFile
	filerCmd.Stderr = filerLogFile
	if err := filerCmd.Start(); err != nil {
		cluster.Stop()
		os.RemoveAll(dataDir)
		return nil, fmt.Errorf("failed to start filer: %v", err)
	}
	cluster.filerCmd = filerCmd

	// Wait for filer
	if err := waitForHTTPServer("http://127.0.0.1:"+testFilerPort+"/", 30*time.Second); err != nil {
		cluster.Stop()
		os.RemoveAll(dataDir)
		return nil, fmt.Errorf("filer not ready: %v", err)
	}

	// Wait a bit more for the cluster to fully stabilize
	// Volumes are created lazily, and we need to ensure the master topology is ready
	time.Sleep(5 * time.Second)

	return cluster, nil
}
func findWeedBinary() string {
	candidates := []string{
		"../../weed/weed",
		"../weed/weed",
		"./weed/weed",
		"weed",
	}
	for _, candidate := range candidates {
		if _, err := os.Stat(candidate); err == nil {
			return candidate
		}
	}
	if path, err := exec.LookPath("weed"); err == nil {
		return path
	}
	return ""
}

func waitForHTTPServer(url string, timeout time.Duration) error {
	start := time.Now()
	client := &http.Client{Timeout: 1 * time.Second}
	for time.Since(start) < timeout {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timeout waiting for %s", url)
}

// encodeTusMetadata encodes key-value pairs for Upload-Metadata header
func encodeTusMetadata(metadata map[string]string) string {
	var parts []string
	for k, v := range metadata {
		encoded := base64.StdEncoding.EncodeToString([]byte(v))
		parts = append(parts, fmt.Sprintf("%s %s", k, encoded))
	}
	return strings.Join(parts, ",")
}
// TestTusOptionsHandler tests the OPTIONS endpoint for capability discovery
func TestTusOptionsHandler(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test in short mode")
	}
	ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
	defer cancel()
	cluster, err := startTestCluster(t, ctx)
	require.NoError(t, err)
	defer func() {
		cluster.Stop()
		os.RemoveAll(cluster.dataDir)
	}()

	// Test OPTIONS request
	req, err := http.NewRequest(http.MethodOptions, cluster.TusURL()+"/", nil)
	require.NoError(t, err)
	req.Header.Set("Tus-Resumable", TusVersion)
	client := &http.Client{}
	resp, err := client.Do(req)
	require.NoError(t, err)
	defer resp.Body.Close()

	// Verify TUS headers
	assert.Equal(t, http.StatusOK, resp.StatusCode, "OPTIONS should return 200 OK")
	assert.Equal(t, TusVersion, resp.Header.Get("Tus-Resumable"), "Should return Tus-Resumable header")
	assert.NotEmpty(t, resp.Header.Get("Tus-Version"), "Should return Tus-Version header")
	assert.NotEmpty(t, resp.Header.Get("Tus-Extension"), "Should return Tus-Extension header")
	assert.NotEmpty(t, resp.Header.Get("Tus-Max-Size"), "Should return Tus-Max-Size header")
}

// TestTusBasicUpload tests a simple complete upload
func TestTusBasicUpload(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test in short mode")
	}
	ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
	defer cancel()
	cluster, err := startTestCluster(t, ctx)
	require.NoError(t, err)
	defer func() {
		cluster.Stop()
		os.RemoveAll(cluster.dataDir)
	}()

	testData := []byte("Hello, TUS Protocol! This is a test file.")
	targetPath := "/testdir/testfile.txt"

	// Step 1: Create upload (POST)
	createReq, err := http.NewRequest(http.MethodPost, cluster.TusURL()+targetPath, nil)
	require.NoError(t, err)
	createReq.Header.Set("Tus-Resumable", TusVersion)
	createReq.Header.Set("Upload-Length", strconv.Itoa(len(testData)))
	createReq.Header.Set("Upload-Metadata", encodeTusMetadata(map[string]string{
		"filename":     "testfile.txt",
		"content-type": "text/plain",
	}))
	client := &http.Client{}
	createResp, err := client.Do(createReq)
	require.NoError(t, err)
	defer createResp.Body.Close()
	assert.Equal(t, http.StatusCreated, createResp.StatusCode, "POST should return 201 Created")
	uploadLocation := createResp.Header.Get("Location")
	assert.NotEmpty(t, uploadLocation, "Should return Location header with upload URL")
	t.Logf("Upload location: %s", uploadLocation)

	// Step 2: Upload data (PATCH)
	patchReq, err := http.NewRequest(http.MethodPatch, cluster.FullURL(uploadLocation), bytes.NewReader(testData))
	require.NoError(t, err)
	patchReq.Header.Set("Tus-Resumable", TusVersion)
	patchReq.Header.Set("Upload-Offset", "0")
	patchReq.Header.Set("Content-Type", "application/offset+octet-stream")
	patchReq.Header.Set("Content-Length", strconv.Itoa(len(testData)))
	patchResp, err := client.Do(patchReq)
	require.NoError(t, err)
	defer patchResp.Body.Close()
	assert.Equal(t, http.StatusNoContent, patchResp.StatusCode, "PATCH should return 204 No Content")
	newOffset := patchResp.Header.Get("Upload-Offset")
	assert.Equal(t, strconv.Itoa(len(testData)), newOffset, "Upload-Offset should equal total file size")

	// Step 3: Verify the file was created
	getResp, err := client.Get(cluster.FilerURL() + targetPath)
	require.NoError(t, err)
	defer getResp.Body.Close()
	assert.Equal(t, http.StatusOK, getResp.StatusCode, "GET should return 200 OK")
	body, err := io.ReadAll(getResp.Body)
	require.NoError(t, err)
	assert.Equal(t, testData, body, "File content should match uploaded data")
}
// TestTusChunkedUpload tests uploading a file in multiple chunks
func TestTusChunkedUpload(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test in short mode")
	}
	ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
	defer cancel()
	cluster, err := startTestCluster(t, ctx)
	require.NoError(t, err)
	defer func() {
		cluster.Stop()
		os.RemoveAll(cluster.dataDir)
	}()

	// Create test data (100KB)
	testData := make([]byte, 100*1024)
	for i := range testData {
		testData[i] = byte(i % 256)
	}
	chunkSize := 32 * 1024 // 32KB chunks
	targetPath := "/chunked/largefile.bin"
	client := &http.Client{}

	// Step 1: Create upload
	createReq, err := http.NewRequest(http.MethodPost, cluster.TusURL()+targetPath, nil)
	require.NoError(t, err)
	createReq.Header.Set("Tus-Resumable", TusVersion)
	createReq.Header.Set("Upload-Length", strconv.Itoa(len(testData)))
	createResp, err := client.Do(createReq)
	require.NoError(t, err)
	defer createResp.Body.Close()
	require.Equal(t, http.StatusCreated, createResp.StatusCode)
	uploadLocation := createResp.Header.Get("Location")
	require.NotEmpty(t, uploadLocation)
	t.Logf("Upload location: %s", uploadLocation)

	// Step 2: Upload in chunks
	offset := 0
	for offset < len(testData) {
		end := offset + chunkSize
		if end > len(testData) {
			end = len(testData)
		}
		chunk := testData[offset:end]
		patchReq, err := http.NewRequest(http.MethodPatch, cluster.FullURL(uploadLocation), bytes.NewReader(chunk))
		require.NoError(t, err)
		patchReq.Header.Set("Tus-Resumable", TusVersion)
		patchReq.Header.Set("Upload-Offset", strconv.Itoa(offset))
		patchReq.Header.Set("Content-Type", "application/offset+octet-stream")
		patchReq.Header.Set("Content-Length", strconv.Itoa(len(chunk)))
		patchResp, err := client.Do(patchReq)
		require.NoError(t, err)
		patchResp.Body.Close()
		require.Equal(t, http.StatusNoContent, patchResp.StatusCode,
			"PATCH chunk at offset %d should return 204", offset)
		newOffset, err := strconv.Atoi(patchResp.Header.Get("Upload-Offset"))
		require.NoError(t, err, "Upload-Offset header should be a valid integer")
		require.Equal(t, end, newOffset, "New offset should be %d", end)
		t.Logf("Uploaded chunk: offset=%d, size=%d, newOffset=%d", offset, len(chunk), newOffset)
		offset = end
	}

	// Step 3: Verify the complete file
	getResp, err := client.Get(cluster.FilerURL() + targetPath)
	require.NoError(t, err)
	defer getResp.Body.Close()
	assert.Equal(t, http.StatusOK, getResp.StatusCode)
	body, err := io.ReadAll(getResp.Body)
	require.NoError(t, err)
	assert.Equal(t, testData, body, "File content should match uploaded data")
}
// TestTusHeadRequest tests the HEAD endpoint to get upload offset
func TestTusHeadRequest(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test in short mode")
	}
	ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
	defer cancel()
	cluster, err := startTestCluster(t, ctx)
	require.NoError(t, err)
	defer func() {
		cluster.Stop()
		os.RemoveAll(cluster.dataDir)
	}()

	testData := []byte("Test data for HEAD request verification")
	targetPath := "/headtest/file.txt"
	client := &http.Client{}

	// Create upload
	createReq, err := http.NewRequest(http.MethodPost, cluster.TusURL()+targetPath, nil)
	require.NoError(t, err)
	createReq.Header.Set("Tus-Resumable", TusVersion)
	createReq.Header.Set("Upload-Length", strconv.Itoa(len(testData)))
	createResp, err := client.Do(createReq)
	require.NoError(t, err)
	defer createResp.Body.Close()
	require.Equal(t, http.StatusCreated, createResp.StatusCode)
	uploadLocation := createResp.Header.Get("Location")

	// HEAD before any data uploaded - offset should be 0
	headReq1, err := http.NewRequest(http.MethodHead, cluster.FullURL(uploadLocation), nil)
	require.NoError(t, err)
	headReq1.Header.Set("Tus-Resumable", TusVersion)
	headResp1, err := client.Do(headReq1)
	require.NoError(t, err)
	defer headResp1.Body.Close()
	assert.Equal(t, http.StatusOK, headResp1.StatusCode)
	assert.Equal(t, "0", headResp1.Header.Get("Upload-Offset"), "Initial offset should be 0")
	assert.Equal(t, strconv.Itoa(len(testData)), headResp1.Header.Get("Upload-Length"))

	// Upload half the data
	halfLen := len(testData) / 2
	patchReq, err := http.NewRequest(http.MethodPatch, cluster.FullURL(uploadLocation), bytes.NewReader(testData[:halfLen]))
	require.NoError(t, err)
	patchReq.Header.Set("Tus-Resumable", TusVersion)
	patchReq.Header.Set("Upload-Offset", "0")
	patchReq.Header.Set("Content-Type", "application/offset+octet-stream")
	patchResp, err := client.Do(patchReq)
	require.NoError(t, err)
	patchResp.Body.Close()
	require.Equal(t, http.StatusNoContent, patchResp.StatusCode)

	// HEAD after partial upload - offset should be halfLen
	headReq2, err := http.NewRequest(http.MethodHead, cluster.FullURL(uploadLocation), nil)
	require.NoError(t, err)
	headReq2.Header.Set("Tus-Resumable", TusVersion)
	headResp2, err := client.Do(headReq2)
	require.NoError(t, err)
	defer headResp2.Body.Close()
	assert.Equal(t, http.StatusOK, headResp2.StatusCode)
	assert.Equal(t, strconv.Itoa(halfLen), headResp2.Header.Get("Upload-Offset"),
		"Offset should be %d after partial upload", halfLen)
}
// TestTusDeleteUpload tests canceling an in-progress upload
func TestTusDeleteUpload(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test in short mode")
	}
	ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
	defer cancel()
	cluster, err := startTestCluster(t, ctx)
	require.NoError(t, err)
	defer func() {
		cluster.Stop()
		os.RemoveAll(cluster.dataDir)
	}()

	testData := []byte("Data to be deleted")
	targetPath := "/deletetest/file.txt"
	client := &http.Client{}

	// Create upload
	createReq, err := http.NewRequest(http.MethodPost, cluster.TusURL()+targetPath, nil)
	require.NoError(t, err)
	createReq.Header.Set("Tus-Resumable", TusVersion)
	createReq.Header.Set("Upload-Length", strconv.Itoa(len(testData)))
	createResp, err := client.Do(createReq)
	require.NoError(t, err)
	defer createResp.Body.Close()
	require.Equal(t, http.StatusCreated, createResp.StatusCode)
	uploadLocation := createResp.Header.Get("Location")

	// Upload some data
	patchReq, err := http.NewRequest(http.MethodPatch, cluster.FullURL(uploadLocation), bytes.NewReader(testData[:10]))
	require.NoError(t, err)
	patchReq.Header.Set("Tus-Resumable", TusVersion)
	patchReq.Header.Set("Upload-Offset", "0")
	patchReq.Header.Set("Content-Type", "application/offset+octet-stream")
	patchResp, err := client.Do(patchReq)
	require.NoError(t, err)
	patchResp.Body.Close()

	// Delete the upload
	deleteReq, err := http.NewRequest(http.MethodDelete, cluster.FullURL(uploadLocation), nil)
	require.NoError(t, err)
	deleteReq.Header.Set("Tus-Resumable", TusVersion)
	deleteResp, err := client.Do(deleteReq)
	require.NoError(t, err)
	defer deleteResp.Body.Close()
	assert.Equal(t, http.StatusNoContent, deleteResp.StatusCode, "DELETE should return 204")

	// Verify upload is gone - HEAD should return 404
	headReq, err := http.NewRequest(http.MethodHead, cluster.FullURL(uploadLocation), nil)
	require.NoError(t, err)
	headReq.Header.Set("Tus-Resumable", TusVersion)
	headResp, err := client.Do(headReq)
	require.NoError(t, err)
	defer headResp.Body.Close()
	assert.Equal(t, http.StatusNotFound, headResp.StatusCode, "HEAD after DELETE should return 404")
}
// TestTusInvalidOffset tests error handling for mismatched offsets
func TestTusInvalidOffset(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test in short mode")
	}
	ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
	defer cancel()
	cluster, err := startTestCluster(t, ctx)
	require.NoError(t, err)
	defer func() {
		cluster.Stop()
		os.RemoveAll(cluster.dataDir)
	}()

	testData := []byte("Test data for offset validation")
	targetPath := "/offsettest/file.txt"
	client := &http.Client{}

	// Create upload
	createReq, err := http.NewRequest(http.MethodPost, cluster.TusURL()+targetPath, nil)
	require.NoError(t, err)
	createReq.Header.Set("Tus-Resumable", TusVersion)
	createReq.Header.Set("Upload-Length", strconv.Itoa(len(testData)))
	createResp, err := client.Do(createReq)
	require.NoError(t, err)
	defer createResp.Body.Close()
	require.Equal(t, http.StatusCreated, createResp.StatusCode)
	uploadLocation := createResp.Header.Get("Location")

	// Try to upload with wrong offset (should be 0, but we send 100)
	patchReq, err := http.NewRequest(http.MethodPatch, cluster.FullURL(uploadLocation), bytes.NewReader(testData))
	require.NoError(t, err)
	patchReq.Header.Set("Tus-Resumable", TusVersion)
	patchReq.Header.Set("Upload-Offset", "100") // Wrong offset!
	patchReq.Header.Set("Content-Type", "application/offset+octet-stream")
	patchResp, err := client.Do(patchReq)
	require.NoError(t, err)
	defer patchResp.Body.Close()
	assert.Equal(t, http.StatusConflict, patchResp.StatusCode,
		"PATCH with wrong offset should return 409 Conflict")
}
// TestTusUploadNotFound tests accessing a non-existent upload
func TestTusUploadNotFound(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test in short mode")
	}
	ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
	defer cancel()
	cluster, err := startTestCluster(t, ctx)
	require.NoError(t, err)
	defer func() {
		cluster.Stop()
		os.RemoveAll(cluster.dataDir)
	}()

	client := &http.Client{}
	fakeUploadURL := cluster.TusURL() + "/.uploads/nonexistent-upload-id"

	// HEAD on non-existent upload
	headReq, err := http.NewRequest(http.MethodHead, fakeUploadURL, nil)
	require.NoError(t, err)
	headReq.Header.Set("Tus-Resumable", TusVersion)
	headResp, err := client.Do(headReq)
	require.NoError(t, err)
	defer headResp.Body.Close()
	assert.Equal(t, http.StatusNotFound, headResp.StatusCode,
		"HEAD on non-existent upload should return 404")

	// PATCH on non-existent upload
	patchReq, err := http.NewRequest(http.MethodPatch, fakeUploadURL, bytes.NewReader([]byte("data")))
	require.NoError(t, err)
	patchReq.Header.Set("Tus-Resumable", TusVersion)
	patchReq.Header.Set("Upload-Offset", "0")
	patchReq.Header.Set("Content-Type", "application/offset+octet-stream")
	patchResp, err := client.Do(patchReq)
	require.NoError(t, err)
	defer patchResp.Body.Close()
	assert.Equal(t, http.StatusNotFound, patchResp.StatusCode,
		"PATCH on non-existent upload should return 404")
}
// TestTusCreationWithUpload tests the creation-with-upload extension
func TestTusCreationWithUpload(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test in short mode")
	}
	ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
	defer cancel()
	cluster, err := startTestCluster(t, ctx)
	require.NoError(t, err)
	defer func() {
		cluster.Stop()
		os.RemoveAll(cluster.dataDir)
	}()

	testData := []byte("Small file uploaded in creation request")
	targetPath := "/creationwithupload/smallfile.txt"
	client := &http.Client{}

	// Create upload with data in the same request
	createReq, err := http.NewRequest(http.MethodPost, cluster.TusURL()+targetPath, bytes.NewReader(testData))
	require.NoError(t, err)
	createReq.Header.Set("Tus-Resumable", TusVersion)
	createReq.Header.Set("Upload-Length", strconv.Itoa(len(testData)))
	createReq.Header.Set("Content-Type", "application/offset+octet-stream")
	createResp, err := client.Do(createReq)
	require.NoError(t, err)
	defer createResp.Body.Close()
	assert.Equal(t, http.StatusCreated, createResp.StatusCode)
	uploadLocation := createResp.Header.Get("Location")
	assert.NotEmpty(t, uploadLocation)

	// Check Upload-Offset header - should indicate all data was received
	uploadOffset := createResp.Header.Get("Upload-Offset")
	assert.Equal(t, strconv.Itoa(len(testData)), uploadOffset,
		"Upload-Offset should equal file size for complete upload")

	// Verify the file
	getResp, err := client.Get(cluster.FilerURL() + targetPath)
	require.NoError(t, err)
	defer getResp.Body.Close()
	assert.Equal(t, http.StatusOK, getResp.StatusCode)
	body, err := io.ReadAll(getResp.Body)
	require.NoError(t, err)
	assert.Equal(t, testData, body)
}
// TestTusResumeAfterInterruption simulates resuming an upload after failure
func TestTusResumeAfterInterruption(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test in short mode")
	}
	ctx, cancel := context.WithTimeout(context.Background(), 120*time.Second)
	defer cancel()
	cluster, err := startTestCluster(t, ctx)
	require.NoError(t, err)
	defer func() {
		cluster.Stop()
		os.RemoveAll(cluster.dataDir)
	}()

	// 50KB test data
	testData := make([]byte, 50*1024)
	for i := range testData {
		testData[i] = byte(i % 256)
	}
	targetPath := "/resume/interrupted.bin"
	client := &http.Client{}

	// Create upload
	createReq, err := http.NewRequest(http.MethodPost, cluster.TusURL()+targetPath, nil)
	require.NoError(t, err)
	createReq.Header.Set("Tus-Resumable", TusVersion)
	createReq.Header.Set("Upload-Length", strconv.Itoa(len(testData)))
	createResp, err := client.Do(createReq)
	require.NoError(t, err)
	defer createResp.Body.Close()
	require.Equal(t, http.StatusCreated, createResp.StatusCode)
	uploadLocation := createResp.Header.Get("Location")

	// Upload first 20KB
	firstChunkSize := 20 * 1024
	patchReq1, err := http.NewRequest(http.MethodPatch, cluster.FullURL(uploadLocation), bytes.NewReader(testData[:firstChunkSize]))
	require.NoError(t, err)
	patchReq1.Header.Set("Tus-Resumable", TusVersion)
	patchReq1.Header.Set("Upload-Offset", "0")
	patchReq1.Header.Set("Content-Type", "application/offset+octet-stream")
	patchResp1, err := client.Do(patchReq1)
	require.NoError(t, err)
	patchResp1.Body.Close()
	require.Equal(t, http.StatusNoContent, patchResp1.StatusCode)
	t.Log("Simulating network interruption...")

	// Simulate resumption: Query current offset with HEAD
	headReq, err := http.NewRequest(http.MethodHead, cluster.FullURL(uploadLocation), nil)
	require.NoError(t, err)
	headReq.Header.Set("Tus-Resumable", TusVersion)
	headResp, err := client.Do(headReq)
	require.NoError(t, err)
	defer headResp.Body.Close()
	require.Equal(t, http.StatusOK, headResp.StatusCode)
	currentOffset, err := strconv.Atoi(headResp.Header.Get("Upload-Offset"))
	require.NoError(t, err, "Upload-Offset header should be a valid integer")
	t.Logf("Resumed upload at offset: %d", currentOffset)
	require.Equal(t, firstChunkSize, currentOffset)

	// Resume upload from current offset
	patchReq2, err := http.NewRequest(http.MethodPatch, cluster.FullURL(uploadLocation), bytes.NewReader(testData[currentOffset:]))
	require.NoError(t, err)
	patchReq2.Header.Set("Tus-Resumable", TusVersion)
	patchReq2.Header.Set("Upload-Offset", strconv.Itoa(currentOffset))
	patchReq2.Header.Set("Content-Type", "application/offset+octet-stream")
	patchResp2, err := client.Do(patchReq2)
	require.NoError(t, err)
	patchResp2.Body.Close()
	require.Equal(t, http.StatusNoContent, patchResp2.StatusCode)

	// Verify complete file
	getResp, err := client.Get(cluster.FilerURL() + targetPath)
	require.NoError(t, err)
	defer getResp.Body.Close()
	assert.Equal(t, http.StatusOK, getResp.StatusCode)
	body, err := io.ReadAll(getResp.Body)
	require.NoError(t, err)
	assert.Equal(t, testData, body, "Resumed upload should produce complete file")
}