seaweedFS/.github/workflows/java_integration_tests.yml
Chris Lu 0d8588e3ae S3: Implement IAM defaults and STS signing key fallback (#8348)
* S3: Implement IAM defaults and STS signing key fallback logic

* S3: Refactor startup order to init SSE-S3 key manager before IAM

* S3: Derive STS signing key from KEK using HKDF for security isolation

* S3: Document STS signing key fallback in security.toml

* fix(s3api): refine anonymous access logic and secure-by-default behavior

- Initialize anonymous identity by default in `NewIdentityAccessManagement` to prevent nil pointer dereferences.
- Ensure `ReplaceS3ApiConfiguration` preserves the anonymous identity if not present in the new configuration.
- Update `NewIdentityAccessManagement` signature to accept `filerClient`.
- In legacy mode (no policy engine), anonymous defaults to Deny (no actions), preserving secure-by-default behavior.
- Use specific `LookupAnonymous` method instead of generic map lookup.
- Update tests to accommodate signature changes and verify improved anonymous handling.
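The behavior described in these bullets can be sketched roughly as follows; the types and method bodies are simplified stand-ins, not the actual SeaweedFS implementation:

```go
package main

import "fmt"

// Identity is a simplified stand-in for an IAM identity.
type Identity struct {
	Name    string
	Actions []string // empty in legacy mode: anonymous is denied everything
}

type IdentityAccessManagement struct {
	identities map[string]*Identity
	anonymous  *Identity
}

// NewIdentityAccessManagement always creates an anonymous identity up front,
// so later lookups never dereference nil. With no actions attached, it
// effectively denies all anonymous requests (secure by default).
func NewIdentityAccessManagement() *IdentityAccessManagement {
	return &IdentityAccessManagement{
		identities: map[string]*Identity{},
		anonymous:  &Identity{Name: "anonymous"}, // no actions => Deny
	}
}

// LookupAnonymous returns the dedicated anonymous identity instead of
// consulting the generic identity map.
func (iam *IdentityAccessManagement) LookupAnonymous() *Identity {
	return iam.anonymous
}

// ReplaceS3ApiConfiguration installs new identities but keeps the existing
// anonymous identity when the new configuration doesn't define one.
func (iam *IdentityAccessManagement) ReplaceS3ApiConfiguration(newIdentities map[string]*Identity) {
	if anon, ok := newIdentities["anonymous"]; ok {
		iam.anonymous = anon
	} // otherwise preserve the previous anonymous identity
	iam.identities = newIdentities
}

func main() {
	iam := NewIdentityAccessManagement()
	// Reload a config that doesn't mention anonymous at all.
	iam.ReplaceS3ApiConfiguration(map[string]*Identity{"alice": {Name: "alice"}})
	fmt.Println(iam.LookupAnonymous().Name) // anonymous identity survives the reload
}
```

The key invariant is that `LookupAnonymous` can never return nil, and a configuration reload cannot accidentally widen anonymous access.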

* feat(s3api): make IAM configuration optional

- Start S3 API server without a configuration file if `EnableIam` option is set.
- Default to `Allow` effect for policy engine when no configuration is provided (Zero-Config mode).
- Handle empty configuration path gracefully in `loadIAMManagerFromConfig`.
- Add integration test `iam_optional_test.go` to verify empty config behavior.

* fix(iamapi): fix signature mismatch in NewIdentityAccessManagementWithStore

* fix(iamapi): properly initialize FilerClient instead of passing nil

* fix(iamapi): properly initialize filer client for IAM management

- Instead of passing `nil`, construct a `wdclient.FilerClient` using the provided `Filers` addresses.
- Ensure `NewIdentityAccessManagementWithStore` receives a valid `filerClient` to avoid potential nil pointer dereferences or limited functionality.

* clean: remove dead code in s3api_server.go

* refactor(s3api): improve IAM initialization, safety and anonymous access security

* fix(s3api): ensure IAM config loads from filer after client init

* fix(s3): resolve test failures in integration, CORS, and tagging tests

- Fix CORS tests by providing explicit anonymous permissions config
- Fix S3 integration tests by setting admin credentials in init
- Align tagging test credentials in CI with IAM defaults
- Add a goroutine to retry IAM config loading in the iamapi server

* fix(s3): allow anonymous access to health targets and S3 Tables when identities are present

* fix(ci): use /healthz for Caddy health check in awscli tests

* iam, s3api: expose DefaultAllow from IAM and Policy Engine

This allows checking the global "Open by Default" configuration from
other components like S3 Tables.

* s3api/s3tables: support DefaultAllow in permission logic and handler

Updated CheckPermissionWithContext to respect the DefaultAllow flag
in PolicyContext. This enables "Open by Default" behavior for
unauthenticated access in zero-config environments. Added a targeted
unit test to verify the logic.
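The decision logic described here can be sketched as follows; the types, field names, and policy representation are simplified illustrations, not the actual SeaweedFS API:

```go
package main

import "fmt"

// PolicyContext carries the inputs to a permission check. DefaultAllow
// mirrors the global "Open by Default" IAM setting; these names are
// illustrative stand-ins.
type PolicyContext struct {
	Authenticated bool
	DefaultAllow  bool
	Policies      map[string]string // action -> "Allow" or "Deny"
}

// CheckPermissionWithContext sketches the described behavior: an explicit
// policy always wins; otherwise unauthenticated access falls back to the
// DefaultAllow flag, which is true only in zero-config environments.
func CheckPermissionWithContext(ctx PolicyContext, action string) bool {
	if effect, ok := ctx.Policies[action]; ok {
		return effect == "Allow" // explicit policy takes precedence
	}
	if !ctx.Authenticated {
		return ctx.DefaultAllow // zero-config: open by default
	}
	return false // authenticated but no matching policy: deny
}

func main() {
	zeroConfig := PolicyContext{Authenticated: false, DefaultAllow: true}
	locked := PolicyContext{Authenticated: false, DefaultAllow: false}
	fmt.Println(CheckPermissionWithContext(zeroConfig, "GetTable")) // true
	fmt.Println(CheckPermissionWithContext(locked, "GetTable"))     // false
}
```

Threading the flag through the context rather than a global keeps each handler's permission check testable in isolation, which is what the targeted unit test exercises.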

* s3api/s3tables: propagate DefaultAllow through handlers

Propagated the DefaultAllow flag to individual handlers for
namespaces, buckets, tables, policies, and tagging. This ensures
consistent "Open by Default" behavior across all S3 Tables API
endpoints.

* s3api: wire up DefaultAllow for S3 Tables API initialization

Updated registerS3TablesRoutes to query the global IAM configuration
and set the DefaultAllow flag on the S3 Tables API server. This
completes the end-to-end propagation required for anonymous access in
zero-config environments. Added a SetDefaultAllow method to
S3TablesApiServer to facilitate this.

* s3api: fix tests by adding DefaultAllow to mock IAM integrations

The IAMIntegration interface was updated to include DefaultAllow(),
breaking several mock implementations in tests. This commit fixes
the build errors by adding the missing method to the mocks.

* env

* ensure ports

* env

* env

* fix default allow

* add one more test using non-anonymous user

* debug

* add more debug

* less logs
2026-02-16 13:59:13 -08:00


name: Java Client Integration Tests

on:
  push:
    branches: [ master ]
    paths:
      - 'other/java/**'
      - 'weed/**'
      - '.github/workflows/java_integration_tests.yml'
  pull_request:
    branches: [ master ]
    paths:
      - 'other/java/**'
      - 'weed/**'
      - '.github/workflows/java_integration_tests.yml'

jobs:
  test:
    name: Java Integration Tests
    runs-on: ubuntu-latest
    strategy:
      matrix:
        java: ['11', '17']
    steps:
      - name: Checkout code
        uses: actions/checkout@v6

      - name: Set up Go
        uses: actions/setup-go@v6
        with:
          go-version-file: 'go.mod'
        id: go

      - name: Set up Java
        uses: actions/setup-java@v5
        with:
          java-version: ${{ matrix.java }}
          distribution: 'temurin'
          cache: 'maven'

      - name: Build SeaweedFS
        run: |
          cd weed
          go install -buildvcs=false
          weed version

      - name: Start SeaweedFS Server
        run: |
          # Create clean data directory
          export WEED_DATA_DIR="/tmp/seaweedfs-java-tests-$(date +%s)"
          mkdir -p "$WEED_DATA_DIR"

          # Start SeaweedFS with optimized settings for CI
          # Include S3 API for s3copier integration tests
          weed server -dir="$WEED_DATA_DIR" \
            -master.raftHashicorp \
            -master.electionTimeout=1s \
            -master.volumeSizeLimitMB=100 \
            -volume.max=100 \
            -volume.preStopSeconds=1 \
            -master.peers=none \
            -filer -filer.maxMB=64 \
            -s3 -s3.port=8333 \
            -s3.config="$GITHUB_WORKSPACE/docker/compose/s3.json" \
            -s3.allowDeleteBucketNotEmpty=true \
            -master.port=9333 \
            -volume.port=8080 \
            -filer.port=8888 \
            -metricsPort=9324 > seaweedfs.log 2>&1 &
          SERVER_PID=$!
          echo "SERVER_PID=$SERVER_PID" >> $GITHUB_ENV
          echo "WEED_DATA_DIR=$WEED_DATA_DIR" >> $GITHUB_ENV
          echo "SeaweedFS server started with PID: $SERVER_PID"

      - name: Wait for SeaweedFS Components
        run: |
          echo "Waiting for SeaweedFS components to start..."

          # Wait for master
          for i in {1..30}; do
            if curl -s http://localhost:9333/cluster/status > /dev/null 2>&1; then
              echo "✓ Master server is ready"
              break
            fi
            echo "Waiting for master server... ($i/30)"
            sleep 2
          done

          # Wait for volume
          for i in {1..30}; do
            if curl -s http://localhost:8080/status > /dev/null 2>&1; then
              echo "✓ Volume server is ready"
              break
            fi
            echo "Waiting for volume server... ($i/30)"
            sleep 2
          done

          # Wait for filer
          for i in {1..30}; do
            if curl -s http://localhost:8888/ > /dev/null 2>&1; then
              echo "✓ Filer is ready"
              break
            fi
            echo "Waiting for filer... ($i/30)"
            sleep 2
          done

          # Wait for S3 API
          for i in {1..30}; do
            if curl -s http://localhost:8333/healthz > /dev/null 2>&1; then
              echo "✓ S3 API is ready"
              break
            fi
            echo "Waiting for S3 API... ($i/30)"
            sleep 2
          done

          echo "✓ All SeaweedFS components are ready!"

          # Display cluster status
          echo "Cluster status:"
          curl -s http://localhost:9333/cluster/status | head -20

      - name: Build and Install SeaweedFS Client
        working-directory: other/java/client
        run: |
          mvn clean install -DskipTests -Dmaven.javadoc.skip=true -Dgpg.skip=true

      - name: Run Client Unit Tests
        working-directory: other/java/client
        run: |
          mvn test -Dtest=SeaweedReadTest,SeaweedCipherTest

      - name: Run Client Integration Tests
        working-directory: other/java/client
        env:
          SEAWEEDFS_TEST_ENABLED: true
        run: |
          mvn test -Dtest=*IntegrationTest

      - name: Run HDFS3 Configuration Tests
        working-directory: other/java/hdfs3
        run: |
          mvn test -Dtest=SeaweedFileSystemConfigTest -Dmaven.javadoc.skip=true -Dgpg.skip=true

      - name: Run S3 ETag Validation Tests (Issue #7768)
        working-directory: other/java/s3copier
        env:
          S3_ENDPOINT: http://127.0.0.1:8333
          S3_ACCESS_KEY: some_access_key1
          S3_SECRET_KEY: some_secret_key1
        run: |
          echo "Running S3 ETag validation tests against $S3_ENDPOINT"
          mvn test -Dtest=ETagValidationTest \
            -DS3_ENDPOINT=$S3_ENDPOINT \
            -DS3_ACCESS_KEY=$S3_ACCESS_KEY \
            -DS3_SECRET_KEY=$S3_SECRET_KEY \
            -Dmaven.javadoc.skip=true -Dgpg.skip=true

      - name: Display logs on failure
        if: failure()
        run: |
          echo "=== SeaweedFS Server Log ==="
          tail -100 seaweedfs.log || echo "No server log"
          echo ""
          echo "=== Cluster Status ==="
          curl -s http://localhost:9333/cluster/status || echo "Cannot reach cluster"
          echo ""
          echo "=== Process Status ==="
          ps aux | grep weed || echo "No weed processes"

      - name: Cleanup
        if: always()
        run: |
          # Stop server using stored PID
          if [ -n "$SERVER_PID" ]; then
            echo "Stopping SeaweedFS server (PID: $SERVER_PID)"
            kill -9 $SERVER_PID 2>/dev/null || true
          fi

          # Fallback: kill any remaining weed processes
          pkill -f "weed server" || true

          # Clean up data directory
          if [ -n "$WEED_DATA_DIR" ]; then
            echo "Cleaning up data directory: $WEED_DATA_DIR"
            rm -rf "$WEED_DATA_DIR" || true
          fi