Fix S3 signature verification behind reverse proxies (#8444)

* Fix S3 signature verification behind reverse proxies

When SeaweedFS is deployed behind a reverse proxy (e.g. nginx, Kong,
Traefik), AWS S3 Signature V4 verification fails because the Host header
the client signed with (e.g. "localhost:9000") differs from the Host
header SeaweedFS receives on the backend (e.g. "seaweedfs:8333").

This commit adds a new -s3.externalUrl parameter (and S3_EXTERNAL_URL
environment variable) that tells SeaweedFS the public-facing URL clients
use to connect. When set, SeaweedFS verifies signatures against this host
instead of the Host header of the incoming request.

New parameter:
  -s3.externalUrl  (flag) or S3_EXTERNAL_URL (environment variable)
  Example: -s3.externalUrl=http://localhost:9000
  Example: S3_EXTERNAL_URL=https://s3.example.com

The environment variable is particularly useful in Docker/Kubernetes
deployments where the external URL is injected via container config.
The flag takes precedence over the environment variable when both are set.

At startup, the URL is parsed and default ports are stripped to match
AWS SDK behavior (port 80 for HTTP, port 443 for HTTPS), so
"http://s3.example.com:80" and "http://s3.example.com" are equivalent.

Bugs fixed:
- Default port stripping was removed by a prior PR, causing signature
  mismatches when clients connect on standard ports (80/443)
- X-Forwarded-Port was ignored when X-Forwarded-Host was not present
- Scheme detection now uses proper precedence: X-Forwarded-Proto >
  TLS connection > URL scheme > "http"
- Test expectations for standard port stripping were incorrect
- expectedHost field in TestSignatureV4WithForwardedPort was declared
  but never actually checked (self-referential test)

* Add Docker integration test for S3 proxy signature verification

Docker Compose setup with nginx reverse proxy to validate that the
-s3.externalUrl parameter (or S3_EXTERNAL_URL env var) correctly
resolves S3 signature verification when SeaweedFS runs behind a proxy.

The test uses nginx proxying port 9000 to SeaweedFS on port 8333,
with X-Forwarded-Host/Port/Proto headers set. SeaweedFS is configured
with -s3.externalUrl=http://localhost:9000 so it uses "localhost:9000"
for signature verification, matching what the AWS CLI signs with.

The test can be run with aws CLI on the host or without it by using
the amazon/aws-cli Docker image with --network host.

Test covers: create-bucket, list-buckets, put-object, head-object,
list-objects-v2, get-object, content round-trip integrity,
delete-object, and delete-bucket — all through the reverse proxy.

* Create s3-proxy-signature-tests.yml

* fix CLI

* fix CI

* Update s3-proxy-signature-tests.yml

* address comments

* Update Dockerfile

* add user

* no need for fuse

* Update s3-proxy-signature-tests.yml

* debug

* weed mini

* fix health check

* health check

* fix health checking

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
Author: blitt001 (2026-02-26 14:20:42 -08:00), committed by GitHub
Parent: ae02d47433
Commit: 3d81d5bef7
19 changed files with 1157 additions and 38 deletions


@@ -0,0 +1,12 @@
FROM alpine:3.20
RUN apk add --no-cache curl && \
    addgroup -S seaweed && \
    adduser -S seaweed -G seaweed
COPY weed /usr/bin/weed
RUN chmod +x /usr/bin/weed && \
    chown seaweed:seaweed /usr/bin/weed && \
    mkdir -p /etc/seaweedfs /data/filerldb2 && \
    chown -R seaweed:seaweed /etc/seaweedfs /data && \
    chmod 755 /data /etc/seaweedfs /data/filerldb2
WORKDIR /data
USER seaweed


@@ -0,0 +1,79 @@
# S3 Proxy Signature Verification Test

Integration test that verifies S3 signature verification works correctly when
SeaweedFS is deployed behind a reverse proxy (nginx).

## What it tests

- S3 operations (create bucket, put/get/head/list/delete) through an nginx
reverse proxy with `X-Forwarded-Host`, `X-Forwarded-Port`, and
`X-Forwarded-Proto` headers
- SeaweedFS configured with `-s3.externalUrl=http://localhost:9000` so the
signature verification uses the client-facing host instead of the internal
backend address

## Architecture

```text
AWS CLI (signs with Host: localhost:9000)
|
v
nginx (:9000)
| proxy_pass → seaweedfs:8333
| Sets: X-Forwarded-Host: localhost
| X-Forwarded-Port: 9000
| X-Forwarded-Proto: http
v
SeaweedFS S3 (:8333, -s3.externalUrl=http://localhost:9000)
| externalHost = "localhost:9000" (parsed at startup)
| extractHostHeader() returns "localhost:9000"
| Matches what AWS CLI signed with
v
Signature verification succeeds
```
**Note:** When `-s3.externalUrl` is configured, direct access to the backend
port (8333) will fail signature verification because the client signs with a
different Host header than what `externalUrl` specifies. This is expected —
all S3 traffic should go through the proxy.
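How the forwarded headers in the diagram combine into the expected host can be sketched as below. `forwardedHost` is an illustrative helper, not the actual `extractHostHeader` implementation; default ports are dropped to match the stripping done at startup:

```go
package main

import "fmt"

// forwardedHost reconstructs the client-facing host from the
// X-Forwarded-Host / X-Forwarded-Port / X-Forwarded-Proto values
// set by nginx, omitting default ports for the scheme.
func forwardedHost(fwdHost, fwdPort, scheme string) string {
	if fwdHost == "" {
		return ""
	}
	if fwdPort == "" ||
		(scheme == "http" && fwdPort == "80") ||
		(scheme == "https" && fwdPort == "443") {
		return fwdHost
	}
	return fwdHost + ":" + fwdPort
}

func main() {
	// The values nginx sets in this test:
	fmt.Println(forwardedHost("localhost", "9000", "http")) // localhost:9000
	// A proxy on the default HTTPS port:
	fmt.Println(forwardedHost("s3.example.com", "443", "https")) // s3.example.com
}
```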

## Prerequisites

- Docker and Docker Compose
- AWS CLI v2 (on host or via Docker, see below)

## Running

```bash
# Build the weed binary first (from repo root):
cd /path/to/seaweedfs
go build -o test/s3/proxy_signature/weed ./weed
cd test/s3/proxy_signature
# Start services
docker compose up -d --build
# Option A: Run test with aws CLI installed locally
./test.sh
# Option B: Run test without aws CLI (uses Docker container)
docker run --rm --network host --entrypoint "" amazon/aws-cli:latest \
bash < test.sh
# Tear down
docker compose down
# Clean up the weed binary
rm -f weed
```

## Troubleshooting

If signature verification fails through the proxy, check:

1. nginx is setting `X-Forwarded-Host` and `X-Forwarded-Port` correctly
2. SeaweedFS is started with `-s3.externalUrl` matching the client endpoint
3. The AWS CLI endpoint URL matches the proxy address
You can also set the `S3_EXTERNAL_URL` environment variable instead of the
`-s3.externalUrl` flag.


@@ -0,0 +1,28 @@
services:
  seaweedfs:
    build:
      context: .
      dockerfile: Dockerfile
    command: >
      /usr/bin/weed mini
      -s3.config=/etc/seaweedfs/s3.json
      -s3.externalUrl=http://localhost:9000
      -ip=seaweedfs
    volumes:
      - ./s3.json:/etc/seaweedfs/s3.json:ro
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://seaweedfs:8333/status"]
      interval: 3s
      timeout: 2s
      retries: 20
      start_period: 5s

  nginx:
    image: nginx:alpine
    ports:
      - "9000:9000"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      seaweedfs:
        condition: service_healthy


@@ -0,0 +1,23 @@
server {
    listen 9000;
    server_name localhost;

    # Allow large uploads
    client_max_body_size 64m;

    location / {
        proxy_pass http://seaweedfs:8333;

        # Standard reverse proxy headers — this is what Kong, Traefik, etc. do
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_request_buffering off;
    }
}


@@ -0,0 +1,20 @@
{
  "identities": [
    {
      "name": "test_admin",
      "credentials": [
        {
          "accessKey": "test_access_key",
          "secretKey": "test_secret_key"
        }
      ],
      "actions": [
        "Admin",
        "Read",
        "List",
        "Tagging",
        "Write"
      ]
    }
  ]
}

test/s3/proxy_signature/test.sh (executable file)

@@ -0,0 +1,132 @@
#!/usr/bin/env bash
#
# Integration test for S3 signature verification behind a reverse proxy.
#
# Usage:
# # With aws CLI installed locally:
# docker compose up -d --build && ./test.sh && docker compose down
#
# # Without aws CLI (runs test inside a container):
# docker compose up -d --build
# docker run --rm --network host --entrypoint "" amazon/aws-cli:latest \
# bash < test.sh
# docker compose down
#
# This script tests S3 operations through an nginx reverse proxy to verify
# that signature verification works correctly when SeaweedFS is configured
# with -s3.externalUrl=http://localhost:9000.
#
set -euo pipefail
PROXY_ENDPOINT="http://localhost:9000"
ACCESS_KEY="test_access_key"
SECRET_KEY="test_secret_key"
REGION="us-east-1"
BUCKET="test-proxy-sig-$$"
RED='\033[0;31m'
GREEN='\033[0;32m'
NC='\033[0m'
pass() { echo -e "${GREEN}PASS${NC}: $1"; }
fail() { echo -e "${RED}FAIL${NC}: $1"; exit 1; }
# Helper: run aws s3api command against a given endpoint
s3() {
    local endpoint="$1"
    shift
    aws s3api \
        --endpoint-url "$endpoint" \
        --region "$REGION" \
        --no-verify-ssl \
        "$@" 2>&1
}
export AWS_ACCESS_KEY_ID="$ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="$SECRET_KEY"
echo "=== S3 Proxy Signature Verification Test ==="
echo ""
echo "Testing S3 access through nginx reverse proxy at $PROXY_ENDPOINT"
echo "SeaweedFS configured with -s3.externalUrl=http://localhost:9000"
echo "AWS CLI signs requests with Host: localhost:9000"
echo ""
# Wait for proxy to be ready
echo "Waiting for nginx proxy to be ready..."
for i in $(seq 1 30); do
    # Use aws CLI for health check if curl is missing
    if command -v curl >/dev/null 2>&1; then
        http_code=$(curl -s -o /dev/null -w "%{http_code}" "$PROXY_ENDPOINT/" 2>/dev/null || echo "000")
        case $http_code in
            200|403|405) break ;;
        esac
    else
        if aws s3api list-buckets --endpoint-url "$PROXY_ENDPOINT" --no-sign-request >/dev/null 2>&1; then
            break
        fi
    fi
    if [ "$i" -eq 30 ]; then
        fail "Proxy did not become ready in time"
    fi
    echo "Waiting for proxy $i/30..."
    sleep 1
done
echo "Proxy is ready."
echo ""
# --- Test 1: Bucket operations through proxy ---
echo "--- Test 1: Bucket operations through proxy ---"
s3 "$PROXY_ENDPOINT" create-bucket --bucket "$BUCKET" > /dev/null \
&& pass "create-bucket" \
|| fail "create-bucket — signature verification likely failed"
s3 "$PROXY_ENDPOINT" list-buckets > /dev/null \
&& pass "list-buckets" \
|| fail "list-buckets"
echo ""
# --- Test 2: Object CRUD through proxy ---
echo "--- Test 2: Object CRUD through proxy ---"
echo "hello-from-proxy" > /tmp/test-proxy-sig.txt
s3 "$PROXY_ENDPOINT" put-object --bucket "$BUCKET" --key "test.txt" --body /tmp/test-proxy-sig.txt > /dev/null \
&& pass "put-object" \
|| fail "put-object"
s3 "$PROXY_ENDPOINT" head-object --bucket "$BUCKET" --key "test.txt" > /dev/null \
&& pass "head-object" \
|| fail "head-object"
s3 "$PROXY_ENDPOINT" list-objects-v2 --bucket "$BUCKET" > /dev/null \
&& pass "list-objects-v2" \
|| fail "list-objects-v2"
s3 "$PROXY_ENDPOINT" get-object --bucket "$BUCKET" --key "test.txt" /tmp/test-proxy-sig-get.txt > /dev/null \
&& pass "get-object" \
|| fail "get-object"
# Verify content round-trip
CONTENT=$(cat /tmp/test-proxy-sig-get.txt)
if [ "$CONTENT" = "hello-from-proxy" ]; then
pass "content integrity (round-trip)"
else
fail "content mismatch: got \"$CONTENT\", expected \"hello-from-proxy\""
fi
echo ""
# --- Test 3: Delete operations through proxy ---
echo "--- Test 3: Delete through proxy ---"
s3 "$PROXY_ENDPOINT" delete-object --bucket "$BUCKET" --key "test.txt" > /dev/null \
&& pass "delete-object" \
|| fail "delete-object"
s3 "$PROXY_ENDPOINT" delete-bucket --bucket "$BUCKET" > /dev/null \
&& pass "delete-bucket" \
|| fail "delete-bucket"
echo ""
# Cleanup temp files
rm -f /tmp/test-proxy-sig.txt /tmp/test-proxy-sig-get.txt
echo "=== All tests passed ==="