seaweedFS/.github/workflows/s3-proxy-signature-tests.yml
blitt001 3d81d5bef7 Fix S3 signature verification behind reverse proxies (#8444)
* Fix S3 signature verification behind reverse proxies

When SeaweedFS is deployed behind a reverse proxy (e.g. nginx, Kong,
Traefik), AWS S3 Signature V4 verification fails because the Host header
the client signed with (e.g. "localhost:9000") differs from the Host
header SeaweedFS receives on the backend (e.g. "seaweedfs:8333").

This commit adds a new -s3.externalUrl parameter (and S3_EXTERNAL_URL
environment variable) that tells SeaweedFS what public-facing URL clients
use to connect. When set, SeaweedFS uses this host value for signature
verification instead of the Host header from the incoming request.

New parameter:
  -s3.externalUrl  (flag) or S3_EXTERNAL_URL (environment variable)
  Example: -s3.externalUrl=http://localhost:9000
  Example: S3_EXTERNAL_URL=https://s3.example.com

The environment variable is particularly useful in Docker/Kubernetes
deployments where the external URL is injected via container config.
The flag takes precedence over the environment variable when both are set.

At startup, the URL is parsed and default ports are stripped to match
AWS SDK behavior (port 80 for HTTP, port 443 for HTTPS), so
"http://s3.example.com:80" and "http://s3.example.com" are equivalent.
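The default-port stripping can be sketched as follows; this is an illustration of the stated rule, not the actual SeaweedFS implementation.

```go
package main

import (
	"fmt"
	"net"
	"net/url"
	"strings"
)

// stripDefaultPort drops :80 for http and :443 for https, so that
// "http://s3.example.com:80" and "http://s3.example.com" produce the
// same host value for signature verification.
func stripDefaultPort(rawURL string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	host, port, err := net.SplitHostPort(u.Host)
	if err != nil {
		// No explicit port; nothing to strip.
		return u.Host, nil
	}
	scheme := strings.ToLower(u.Scheme)
	if (scheme == "http" && port == "80") || (scheme == "https" && port == "443") {
		return host, nil
	}
	return u.Host, nil
}

func main() {
	for _, s := range []string{
		"http://s3.example.com:80",
		"https://s3.example.com:443",
		"http://localhost:9000",
	} {
		h, _ := stripDefaultPort(s)
		fmt.Println(h)
	}
}
```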

Bugs fixed:
- Default port stripping was removed by a prior PR, causing signature
  mismatches when clients connect on standard ports (80/443)
- X-Forwarded-Port was ignored when X-Forwarded-Host was not present
- Scheme detection now uses proper precedence: X-Forwarded-Proto >
  TLS connection > URL scheme > "http"
- Test expectations for standard port stripping were incorrect
- expectedHost field in TestSignatureV4WithForwardedPort was declared
  but never actually checked (self-referential test)

* Add Docker integration test for S3 proxy signature verification

Docker Compose setup with nginx reverse proxy to validate that the
-s3.externalUrl parameter (or S3_EXTERNAL_URL env var) correctly
resolves S3 signature verification when SeaweedFS runs behind a proxy.

The test uses nginx proxying port 9000 to SeaweedFS on port 8333,
with X-Forwarded-Host/Port/Proto headers set. SeaweedFS is configured
with -s3.externalUrl=http://localhost:9000 so it uses "localhost:9000"
for signature verification, matching what the AWS CLI signs with.

The test can be run with the AWS CLI installed on the host, or without a
local install by using the amazon/aws-cli Docker image with --network host.

Test covers: create-bucket, list-buckets, put-object, head-object,
list-objects-v2, get-object, content round-trip integrity,
delete-object, and delete-bucket — all through the reverse proxy.

* Create s3-proxy-signature-tests.yml

* fix CLI

* fix CI

* Update s3-proxy-signature-tests.yml

* address comments

* Update Dockerfile

* add user

* no need for fuse

* Update s3-proxy-signature-tests.yml

* debug

* weed mini

* fix health check

* health check

* fix health checking

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
2026-02-26 14:20:42 -08:00


name: "S3 Proxy Signature Tests"

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

concurrency:
  group: ${{ github.head_ref || github.ref }}/s3-proxy-signature-tests
  cancel-in-progress: true

permissions:
  contents: read

jobs:
  proxy-signature-tests:
    name: S3 Proxy Signature Verification Tests
    runs-on: ubuntu-22.04
    timeout-minutes: 15
    steps:
      - name: Check out code into the Go module directory
        uses: actions/checkout@v6

      - name: Set up Go 1.x
        uses: actions/setup-go@v6
        with:
          go-version-file: 'go.mod'
        id: go

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build SeaweedFS binary for Linux
        run: |
          set -x
          CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -buildvcs=false -v -o test/s3/proxy_signature/weed ./weed

      - name: Run S3 Proxy Signature Tests
        timeout-minutes: 10
        working-directory: test/s3/proxy_signature
        run: |
          set -x
          echo "Starting Docker Compose services..."
          docker compose up -d --build

          # Check if containers are running
          echo "Checking container status..."
          docker compose ps

          # Wait for the nginx proxy to be ready
          echo "Waiting for nginx proxy to be ready..."
          PROXY_READY=0
          for i in $(seq 1 30); do
            if curl -s http://localhost:9000/ > /dev/null 2>&1; then
              echo "Proxy is ready"
              PROXY_READY=1
              break
            fi
            echo "Waiting for proxy... ($i/30)"
            sleep 1
          done
          if [ $PROXY_READY -eq 0 ]; then
            echo "ERROR: Proxy failed to become ready after 30 seconds"
            echo "Docker compose logs:"
            docker compose logs --no-color || true
            exit 1
          fi

          # Wait for SeaweedFS to be ready
          echo "Waiting for SeaweedFS S3 gateway to be ready via proxy..."
          S3_READY=0
          for i in $(seq 1 30); do
            # Check logs first for the startup message ("weed mini" logs "S3 service is ready")
            if docker compose logs seaweedfs 2>&1 | grep -qE "S3 (gateway|service).*(started|ready)"; then
              echo "SeaweedFS S3 gateway is ready"
              S3_READY=1
              break
            fi
            # Fallback: check response headers via the proxy (which is already up)
            if curl -s -I http://localhost:9000/ | grep -qi "SeaweedFS"; then
              echo "SeaweedFS S3 gateway is responding via proxy"
              S3_READY=1
              break
            fi
            echo "Waiting for S3 gateway... ($i/30)"
            sleep 1
          done
          if [ $S3_READY -eq 0 ]; then
            echo "ERROR: SeaweedFS S3 gateway failed to become ready after 30 seconds"
            echo "Latest seaweedfs logs:"
            docker compose logs --no-color --tail 20 seaweedfs || true
            exit 1
          fi

          # Run the test script inside the AWS CLI container.
          # Capture the exit code with "|| TEST_RESULT=$?" so the step's
          # default "bash -e" shell does not abort before cleanup runs.
          echo "Running test script..."
          TEST_RESULT=0
          docker run --rm --network host \
            --entrypoint bash \
            amazon/aws-cli:latest \
            -c "$(cat test.sh)" || TEST_RESULT=$?

          # Cleanup
          docker compose down
          exit $TEST_RESULT

      - name: Cleanup on failure
        if: failure()
        working-directory: test/s3/proxy_signature
        run: |
          echo "Cleaning up Docker containers..."
          ls -al weed || true
          ldd weed || true
          echo "Docker compose logs:"
          docker compose logs --no-color || true
          echo "Container status before cleanup:"
          docker ps -a
          echo "Stopping services..."
          docker compose down || true