# Kafka Gateway Tests with SMQ Integration
This directory contains tests for the SeaweedFS Kafka Gateway with full SeaweedMQ (SMQ) integration.
## Test Types
### Unit Tests (`./unit/`)
- Basic gateway functionality
- Protocol compatibility
- No SeaweedFS backend required
- Uses mock handlers
### Integration Tests (`./integration/`)
- **Mock Mode** (default): Uses in-memory handlers for protocol testing
- **SMQ Mode** (with `SEAWEEDFS_MASTERS`): Uses real SeaweedFS backend for full integration
### E2E Tests (`./e2e/`)
- End-to-end workflows
- Automatically detects SMQ availability
- Falls back to mock mode if SMQ unavailable
## Running Tests Locally
### Quick Protocol Testing (Mock Mode)
```shell
# Run all integration tests with mock backend
cd test/kafka
go test ./integration/...

# Run a specific test
go test -v ./integration/ -run TestClientCompatibility
```
### Full Integration Testing (SMQ Mode)
Requires a running SeaweedFS instance:

1. Start SeaweedFS with MQ support:
```shell
# Terminal 1: Start SeaweedFS server
weed server -ip="127.0.0.1" -ip.bind="0.0.0.0" -dir=/tmp/seaweedfs-data -master.port=9333 -volume.port=8081 -filer.port=8888 -filer=true

# Terminal 2: Start MQ broker
weed mq.broker -master="127.0.0.1:9333" -ip="127.0.0.1" -port=17777
```
2. Run tests with the SMQ backend:
```shell
cd test/kafka
SEAWEEDFS_MASTERS=127.0.0.1:9333 go test ./integration/...

# Run specific SMQ integration tests
SEAWEEDFS_MASTERS=127.0.0.1:9333 go test -v ./integration/ -run TestSMQIntegration
```
### Test Broker Startup
If you're having broker startup issues:
```shell
# Debug broker startup locally
./scripts/test-broker-startup.sh
```
## CI/CD Integration
### GitHub Actions Jobs
- **Unit Tests**: Fast protocol tests with mock backend
- **Integration Tests**: Mock mode by default
- **E2E Tests (with SMQ)**: Full SeaweedFS + MQ broker stack
- **Client Compatibility (with SMQ)**: Tests different Kafka clients against a real backend
- **Consumer Group Tests (with SMQ)**: Tests consumer group persistence
- **SMQ Integration Tests**: Dedicated SMQ-specific functionality tests
### What Gets Tested with SMQ
When `SEAWEEDFS_MASTERS` is set, tests exercise:
- **Real Message Persistence**: Messages stored in SeaweedFS volumes
- **Offset Persistence**: Consumer group offsets stored in the SeaweedFS filer
- **Topic Persistence**: Topic metadata persisted in the SeaweedFS filer
- **Consumer Group Coordination**: Distributed coordinator assignment
- **Cross-Client Compatibility**: Sarama and kafka-go with a real backend
- **Broker Discovery**: Gateway discovers MQ brokers via the masters
## Test Infrastructure
### `testutil.NewGatewayTestServerWithSMQ(t, mode)`
Smart gateway creation that automatically:
- Detects SMQ availability via `SEAWEEDFS_MASTERS`
- Uses the production handler when available
- Falls back to mock when unavailable
- Provides timeout protection against hanging
Modes:

- `SMQRequired`: Skip the test if SMQ is unavailable
- `SMQAvailable`: Use SMQ if available, otherwise mock
- `SMQUnavailable`: Always use mock
### Timeout Protection
Gateway creation includes timeout protection to prevent CI hanging:
- 20-second timeout for `SMQRequired` mode
- 15-second timeout for `SMQAvailable` mode
- Clear error messages when broker discovery fails
## Debugging Failed Tests
### CI Logs to Check
- "SeaweedFS master is up" - Master started successfully
- "SeaweedFS filer is up" - Filer ready
- "SeaweedFS MQ broker is up" - Broker started successfully
- Broker/Server logs - Shown on broker startup failure
### Local Debugging
1. Run `./scripts/test-broker-startup.sh` to test broker startup
2. Check logs at `/tmp/weed-*.log`
3. Test individual components:

```shell
# Test master
curl http://127.0.0.1:9333/cluster/status

# Test filer
curl http://127.0.0.1:8888/status

# Test broker
nc -z 127.0.0.1 17777
```
### Common Issues
- **Broker fails to start**: Check that the filer is ready before starting the broker
- **Gateway timeout**: Broker discovery failed; check that the broker is accessible
- **Test hangs**: Timeout protection is not working; reduce timeout values
## Architecture
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  Kafka Client   │───▶│  Kafka Gateway  │───▶│ SeaweedMQ Broker│
│   (Sarama,      │     │   (Protocol     │     │   (Message      │
│   kafka-go)     │     │    Handler)     │     │   Persistence)  │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                               │                        │
                               ▼                        ▼
                        ┌─────────────────┐     ┌─────────────────┐
                        │ SeaweedFS Filer │     │ SeaweedFS Master│
                        │ (Offset Storage)│     │ (Coordination)  │
                        └─────────────────┘     └─────────────────┘
                               │                        │
                               ▼                        ▼
                        ┌─────────────────────────────────────────┐
                        │           SeaweedFS Volumes             │
                        │          (Message Storage)              │
                        └─────────────────────────────────────────┘
```
This architecture ensures full integration testing of the entire Kafka → SeaweedFS message path.