test: add Trino Iceberg catalog integration test (#8228)

* test: add Trino Iceberg catalog integration test

- Create test/s3/catalog_trino/trino_catalog_test.go with TestTrinoIcebergCatalog
- Tests integration between Trino SQL engine and SeaweedFS Iceberg REST catalog
- Starts weed mini with all services and runs Trino in a Docker container
- Validates Iceberg catalog schema creation and listing operations
- Uses native S3 filesystem support in Trino with path-style access
- Add workflow job to s3-tables-tests.yml for CI execution

* fix: preserve AWS environment credentials when replacing S3 configuration

When S3 configuration is loaded from filer/db, it replaces the identities list
and inadvertently removes AWS_ACCESS_KEY_ID credentials that were added from
environment variables. This caused auth to remain disabled even though valid
credentials were present.

Fix by preserving environment-based identities when replacing the configuration
and re-adding them after the replacement. This ensures environment credentials
persist across configuration reloads and properly enable authentication.

* fix: use correct ServerAddress format with gRPC port encoding

The admin server couldn't connect to master because the master address
was missing the gRPC port information. Use pb.NewServerAddress() which
properly encodes both HTTP and gRPC ports in the address string.

Changes:
- weed/command/mini.go: Use pb.NewServerAddress for master address in admin
- test/s3/policy/policy_test.go: Store and use gRPC ports for master/filer addresses

This fix applies to:
1. Admin server connection to master (mini.go)
2. Test shell commands that need master/filer addresses (policy_test.go)

* move

* move

* fix: always include gRPC port in server address encoding

The NewServerAddress() function was omitting the gRPC port from the address
string when it matched the port+10000 convention. However, gRPC port allocation
doesn't always follow this convention - when the calculated port is busy, an
alternative port is allocated.

This caused a bug where:
1. Master's gRPC port was allocated as 50661 (sequential, not port+10000)
2. Address was encoded as '192.168.1.66:50660' (gRPC port omitted)
3. Admin client called ToGrpcAddress() which assumed port+10000 offset
4. Admin tried to connect to 60660 but master was on 50661 → connection failed

Fix: Always include explicit gRPC port in address format (host:httpPort.grpcPort)
unless gRPC port is 0. This makes addresses unambiguous and works regardless of
the port allocation strategy used.

Impacts: All server-to-server gRPC connections now use properly formatted addresses.

* test: fix Iceberg REST API readiness check

The Iceberg REST API endpoints require authentication. When checked without
credentials, the API returns 403 Forbidden (not 401 Unauthorized). The
readiness check now accepts both auth error codes (401/403) as indicators
that the service is up and merely waiting for credentials.

This fixes the 'Iceberg REST API did not become ready' test failure.

* Fix AWS SigV4 signature verification for base64-encoded payload hashes

   AWS SigV4 canonical requests must use hex-encoded SHA256 hashes,
   but the X-Amz-Content-Sha256 header may be transmitted as base64.

   Changes:
   - Added normalizePayloadHash() function to convert base64 to hex
   - Call normalizePayloadHash() in extractV4AuthInfoFromHeader()
   - Added encoding/base64 import

   Fixes 403 Forbidden errors on POST requests to Iceberg REST API
   when clients send base64-encoded content hashes in the header.

   Impacted services: Iceberg REST API, S3Tables

* Fix AWS SigV4 signature verification for base64-encoded payload hashes

   AWS SigV4 canonical requests must use hex-encoded SHA256 hashes,
   but the X-Amz-Content-Sha256 header may be transmitted as base64.

   Changes:
   - Added normalizePayloadHash() function to convert base64 to hex
   - Call normalizePayloadHash() in extractV4AuthInfoFromHeader()
   - Added encoding/base64 import
   - Removed unused fmt import

   Fixes 403 Forbidden errors on POST requests to Iceberg REST API
   when clients send base64-encoded content hashes in the header.

   Impacted services: Iceberg REST API, S3Tables

* pass sigv4

* s3api: fix identity preservation and logging levels

- Ensure environment-based identities are preserved during config replacement
- Update accessKeyIdent and nameToIdentity maps correctly
- Downgrade informational logs to V(2) to reduce noise

* test: fix trino integration test and s3 policy test

- Pin Trino image version to 479
- Fix port binding to 0.0.0.0 for Docker connectivity
- Fix S3 policy test hang by correctly assigning MiniClusterCtx
- Improve port finding robustness in policy tests

* ci: pre-pull trino image to avoid timeouts

- Pull trinodb/trino:479 after Docker setup
- Ensure image is ready before integration tests start

* iceberg: remove unused checkAuth and improve logging

- Remove unused checkAuth method
- Downgrade informational logs to V(2)
- Ensure loggingMiddleware uses a status writer for accurate reported codes
- Narrow catch-all route to avoid interfering with other subsystems

* iceberg: fix build failure by removing unused s3api import

* Update iceberg.go

* use warehouse

* Update trino_catalog_test.go
Author: Chris Lu
Date: 2026-02-06 13:12:25 -08:00 (committed by GitHub)
Commit: a3b83f8808 (parent: a04e8dd00b)
13 changed files with 839 additions and 135 deletions

test/s3/policy/policy_test.go

@@ -16,22 +16,25 @@ import (
 	"github.com/seaweedfs/seaweedfs/weed/command"
 	"github.com/seaweedfs/seaweedfs/weed/glog"
+	"github.com/seaweedfs/seaweedfs/weed/pb"
 	flag "github.com/seaweedfs/seaweedfs/weed/util/fla9"
 	"github.com/stretchr/testify/require"
 )
 
 // TestCluster manages the weed mini instance for integration testing
 type TestCluster struct {
-	dataDir    string
-	ctx        context.Context
-	cancel     context.CancelFunc
-	isRunning  bool
-	wg         sync.WaitGroup
-	masterPort int
-	volumePort int
-	filerPort  int
-	s3Port     int
-	s3Endpoint string
+	dataDir        string
+	ctx            context.Context
+	cancel         context.CancelFunc
+	isRunning      bool
+	wg             sync.WaitGroup
+	masterPort     int
+	masterGrpcPort int
+	volumePort     int
+	filerPort      int
+	filerGrpcPort  int
+	s3Port         int
+	s3Endpoint     string
 }
func TestS3PolicyShellRevised(t *testing.T) {
@@ -53,8 +56,8 @@ func TestS3PolicyShellRevised(t *testing.T) {
 	require.NoError(t, tmpPolicyFile.Close())
 
 	weedCmd := "weed"
-	masterAddr := fmt.Sprintf("127.0.0.1:%d", cluster.masterPort)
-	filerAddr := fmt.Sprintf("127.0.0.1:%d", cluster.filerPort)
+	masterAddr := string(pb.NewServerAddress("127.0.0.1", cluster.masterPort, cluster.masterGrpcPort))
+	filerAddr := string(pb.NewServerAddress("127.0.0.1", cluster.filerPort, cluster.filerGrpcPort))
 
 	// Put
 	execShell(t, weedCmd, masterAddr, filerAddr, fmt.Sprintf("s3.policy -put -name=testpolicy -file=%s", tmpPolicyFile.Name()))
@@ -156,17 +159,16 @@ func findAvailablePort() (int, error) {
 // findAvailablePortPair finds an available http port P such that P and P+10000 (grpc) are both available
 func findAvailablePortPair() (int, int, error) {
+	httpPort, err := findAvailablePort()
+	if err != nil {
+		return 0, 0, err
+	}
 	for i := 0; i < 100; i++ {
-		httpPort, err := findAvailablePort()
+		grpcPort, err := findAvailablePort()
 		if err != nil {
 			return 0, 0, err
 		}
-		grpcPort := httpPort + 10000
-		// check if grpc port is available
-		listener, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", grpcPort))
-		if err == nil {
-			listener.Close()
+		if grpcPort != httpPort {
 			return httpPort, grpcPort, nil
 		}
 	}
@@ -188,14 +190,16 @@ func startMiniCluster(t *testing.T) (*TestCluster, error) {
 	ctx, cancel := context.WithCancel(context.Background())
 	s3Endpoint := fmt.Sprintf("http://127.0.0.1:%d", s3Port)
 
 	cluster := &TestCluster{
-		dataDir:    testDir,
-		ctx:        ctx,
-		cancel:     cancel,
-		masterPort: masterPort,
-		volumePort: volumePort,
-		filerPort:  filerPort,
-		s3Port:     s3Port,
-		s3Endpoint: s3Endpoint,
+		dataDir:        testDir,
+		ctx:            ctx,
+		cancel:         cancel,
+		masterPort:     masterPort,
+		masterGrpcPort: masterGrpcPort,
+		volumePort:     volumePort,
+		filerPort:      filerPort,
+		filerGrpcPort:  filerGrpcPort,
+		s3Port:         s3Port,
+		s3Endpoint:     s3Endpoint,
 	}
 
 	// Disable authentication for tests


@@ -275,7 +275,7 @@ func TestIcebergNamespaces(t *testing.T) {
 	env.StartSeaweedFS(t)
 
 	// Create the default table bucket first via S3
-	createTableBucket(t, env, "default")
+	createTableBucket(t, env, "warehouse")
 
 	// Test GET /v1/namespaces (should return empty list initially)
 	resp, err := http.Get(env.IcebergURL() + "/v1/namespaces")

test/s3/catalog_trino/trino_catalog_test.go

@@ -0,0 +1,536 @@
package catalog_trino

import (
	"context"
	"crypto/rand"
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"testing"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

type TestEnvironment struct {
	seaweedDir      string
	weedBinary      string
	dataDir         string
	bindIP          string
	s3Port          int
	s3GrpcPort      int
	icebergPort     int
	masterPort      int
	masterGrpcPort  int
	filerPort       int
	filerGrpcPort   int
	volumePort      int
	volumeGrpcPort  int
	weedProcess     *exec.Cmd
	weedCancel      context.CancelFunc
	trinoContainer  string
	dockerAvailable bool
	accessKey       string
	secretKey       string
}
func TestTrinoIcebergCatalog(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test in short mode")
	}

	env := NewTestEnvironment(t)
	defer env.Cleanup(t)

	if !env.dockerAvailable {
		t.Skip("Docker not available, skipping Trino integration test")
	}

	fmt.Printf(">>> Starting SeaweedFS...\n")
	env.StartSeaweedFS(t)
	fmt.Printf(">>> SeaweedFS started.\n")

	catalogBucket := "warehouse"
	tableBucket := "iceberg-tables"
	fmt.Printf(">>> Creating table bucket: %s\n", tableBucket)
	createTableBucket(t, env, tableBucket)
	fmt.Printf(">>> Creating table bucket: %s\n", catalogBucket)
	createTableBucket(t, env, catalogBucket)
	fmt.Printf(">>> All buckets created.\n")

	// Test Iceberg REST API directly
	testIcebergRestAPI(t, env)

	configDir := env.writeTrinoConfig(t, catalogBucket)
	env.startTrinoContainer(t, configDir)
	waitForTrino(t, env.trinoContainer, 60*time.Second)

	schemaName := "trino_" + randomString(6)
	runTrinoSQL(t, env.trinoContainer, fmt.Sprintf("CREATE SCHEMA IF NOT EXISTS iceberg.%s", schemaName))

	output := runTrinoSQL(t, env.trinoContainer, "SHOW SCHEMAS FROM iceberg")
	if !strings.Contains(output, schemaName) {
		t.Fatalf("Expected schema %s in output:\n%s", schemaName, output)
	}

	runTrinoSQL(t, env.trinoContainer, fmt.Sprintf("SHOW TABLES FROM iceberg.%s", schemaName))
}
func NewTestEnvironment(t *testing.T) *TestEnvironment {
	t.Helper()

	wd, err := os.Getwd()
	if err != nil {
		t.Fatalf("Failed to get working directory: %v", err)
	}
	seaweedDir := wd
	for i := 0; i < 6; i++ {
		if _, err := os.Stat(filepath.Join(seaweedDir, "go.mod")); err == nil {
			break
		}
		seaweedDir = filepath.Dir(seaweedDir)
	}

	weedBinary := filepath.Join(seaweedDir, "weed", "weed")
	if _, err := os.Stat(weedBinary); os.IsNotExist(err) {
		weedBinary = "weed"
		if _, err := exec.LookPath(weedBinary); err != nil {
			t.Skip("weed binary not found, skipping integration test")
		}
	}

	dataDir, err := os.MkdirTemp("", "seaweed-trino-test-*")
	if err != nil {
		t.Fatalf("Failed to create temp dir: %v", err)
	}

	bindIP := findBindIP()
	masterPort, masterGrpcPort := mustFreePortPair(t, "Master")
	volumePort, volumeGrpcPort := mustFreePortPair(t, "Volume")
	filerPort, filerGrpcPort := mustFreePortPair(t, "Filer")
	s3Port, s3GrpcPort := mustFreePortPair(t, "S3")
	icebergPort := mustFreePort(t, "Iceberg")

	return &TestEnvironment{
		seaweedDir:      seaweedDir,
		weedBinary:      weedBinary,
		dataDir:         dataDir,
		bindIP:          bindIP,
		s3Port:          s3Port,
		s3GrpcPort:      s3GrpcPort,
		icebergPort:     icebergPort,
		masterPort:      masterPort,
		masterGrpcPort:  masterGrpcPort,
		filerPort:       filerPort,
		filerGrpcPort:   filerGrpcPort,
		volumePort:      volumePort,
		volumeGrpcPort:  volumeGrpcPort,
		dockerAvailable: hasDocker(),
		accessKey:       "AKIAIOSFODNN7EXAMPLE",
		secretKey:       "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
	}
}
func (env *TestEnvironment) StartSeaweedFS(t *testing.T) {
	t.Helper()

	// Create IAM config file
	iamConfigPath := filepath.Join(env.dataDir, "iam_config.json")
	iamConfig := fmt.Sprintf(`{
  "identities": [
    {
      "name": "admin",
      "credentials": [
        {
          "accessKey": "%s",
          "secretKey": "%s"
        }
      ],
      "actions": [
        "Admin",
        "Read",
        "List",
        "Tagging",
        "Write"
      ]
    }
  ]
}`, env.accessKey, env.secretKey)
	if err := os.WriteFile(iamConfigPath, []byte(iamConfig), 0644); err != nil {
		t.Fatalf("Failed to create IAM config: %v", err)
	}

	securityToml := filepath.Join(env.dataDir, "security.toml")
	if err := os.WriteFile(securityToml, []byte("# Empty security config for testing\n"), 0644); err != nil {
		t.Fatalf("Failed to create security.toml: %v", err)
	}

	ctx, cancel := context.WithCancel(context.Background())
	env.weedCancel = cancel

	cmd := exec.CommandContext(ctx, env.weedBinary, "mini",
		"-master.port", fmt.Sprintf("%d", env.masterPort),
		"-master.port.grpc", fmt.Sprintf("%d", env.masterGrpcPort),
		"-volume.port", fmt.Sprintf("%d", env.volumePort),
		"-volume.port.grpc", fmt.Sprintf("%d", env.volumeGrpcPort),
		"-filer.port", fmt.Sprintf("%d", env.filerPort),
		"-filer.port.grpc", fmt.Sprintf("%d", env.filerGrpcPort),
		"-s3.port", fmt.Sprintf("%d", env.s3Port),
		"-s3.port.grpc", fmt.Sprintf("%d", env.s3GrpcPort),
		"-s3.port.iceberg", fmt.Sprintf("%d", env.icebergPort),
		"-s3.config", iamConfigPath,
		"-ip", env.bindIP,
		"-ip.bind", "0.0.0.0",
		"-dir", env.dataDir,
	)
	cmd.Dir = env.dataDir
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	// Set AWS credentials in environment (for compatibility)
	cmd.Env = append(os.Environ(),
		"AWS_ACCESS_KEY_ID="+env.accessKey,
		"AWS_SECRET_ACCESS_KEY="+env.secretKey,
	)

	if err := cmd.Start(); err != nil {
		t.Fatalf("Failed to start SeaweedFS: %v", err)
	}
	env.weedProcess = cmd

	// Try to check if Iceberg API is ready
	// First try checking the /v1/config endpoint (requires auth, so will return 401 if server is up)
	icebergURL := fmt.Sprintf("http://%s:%d/v1/config", env.bindIP, env.icebergPort)
	if !env.waitForService(icebergURL, 30*time.Second) {
		// Try to get more info about why it failed
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get(icebergURL)
		if err != nil {
			t.Logf("WARNING: Could not connect to Iceberg service at %s: %v", icebergURL, err)
		} else {
			t.Logf("WARNING: Iceberg service returned status %d at %s", resp.StatusCode, icebergURL)
			resp.Body.Close()
		}
		t.Fatalf("Iceberg REST API did not become ready")
	}
}
func (env *TestEnvironment) Cleanup(t *testing.T) {
	t.Helper()
	if env.trinoContainer != "" {
		_ = exec.Command("docker", "rm", "-f", env.trinoContainer).Run()
	}
	if env.weedCancel != nil {
		env.weedCancel()
	}
	if env.weedProcess != nil {
		time.Sleep(2 * time.Second)
		_ = env.weedProcess.Wait()
	}
	if env.dataDir != "" {
		_ = os.RemoveAll(env.dataDir)
	}
}
func (env *TestEnvironment) waitForService(url string, timeout time.Duration) bool {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// Service not responding yet
			time.Sleep(500 * time.Millisecond)
			continue
		}
		statusCode := resp.StatusCode
		resp.Body.Close()
		// Accept 2xx status codes (successful responses)
		if statusCode >= 200 && statusCode < 300 {
			return true
		}
		// Also accept 401/403 (auth errors mean service is up, just needs credentials)
		if statusCode == http.StatusUnauthorized || statusCode == http.StatusForbidden {
			return true
		}
		// For other status codes, keep trying
		time.Sleep(500 * time.Millisecond)
	}
	return false
}
func testIcebergRestAPI(t *testing.T, env *TestEnvironment) {
	t.Helper()
	fmt.Printf(">>> Testing Iceberg REST API directly...\n")

	// First, verify the service is listening
	conn, err := net.Dial("tcp", fmt.Sprintf("%s:%d", env.bindIP, env.icebergPort))
	if err != nil {
		t.Fatalf("Cannot connect to Iceberg service at %s:%d: %v", env.bindIP, env.icebergPort, err)
	}
	conn.Close()
	t.Logf("Successfully connected to Iceberg service at %s:%d", env.bindIP, env.icebergPort)

	// Test /v1/config endpoint
	url := fmt.Sprintf("http://%s:%d/v1/config", env.bindIP, env.icebergPort)
	t.Logf("Testing Iceberg REST API at %s", url)
	resp, err := http.Get(url)
	if err != nil {
		t.Fatalf("Failed to connect to Iceberg REST API at %s: %v", url, err)
	}
	defer resp.Body.Close()

	t.Logf("Iceberg REST API response status: %d", resp.StatusCode)
	body, _ := io.ReadAll(resp.Body)
	t.Logf("Iceberg REST API response body: %s", string(body))

	if resp.StatusCode != http.StatusOK {
		t.Fatalf("Expected 200 OK from /v1/config, got %d", resp.StatusCode)
	}
}
func (env *TestEnvironment) writeTrinoConfig(t *testing.T, warehouseBucket string) string {
	t.Helper()
	configDir := filepath.Join(env.dataDir, "trino")
	if err := os.MkdirAll(configDir, 0755); err != nil {
		t.Fatalf("Failed to create Trino config dir: %v", err)
	}

	config := fmt.Sprintf(`connector.name=iceberg
iceberg.catalog.type=rest
iceberg.rest-catalog.uri=http://host.docker.internal:%d
iceberg.rest-catalog.warehouse=s3://%s/
iceberg.file-format=PARQUET
# S3 storage config
fs.native-s3.enabled=true
s3.endpoint=http://host.docker.internal:%d
s3.path-style-access=true
s3.signer-type=AwsS3V4Signer
s3.aws-access-key=%s
s3.aws-secret-key=%s
s3.region=us-west-2
# REST catalog authentication
iceberg.rest-catalog.security=SIGV4
`, env.icebergPort, warehouseBucket, env.s3Port, env.accessKey, env.secretKey)

	if err := os.WriteFile(filepath.Join(configDir, "iceberg.properties"), []byte(config), 0644); err != nil {
		t.Fatalf("Failed to write Trino config: %v", err)
	}
	return configDir
}
func (env *TestEnvironment) startTrinoContainer(t *testing.T, configDir string) {
	t.Helper()
	containerName := "seaweed-trino-" + randomString(8)
	env.trinoContainer = containerName

	cmd := exec.Command("docker", "run", "-d",
		"--name", containerName,
		"--add-host", "host.docker.internal:host-gateway",
		"-v", fmt.Sprintf("%s:/etc/trino/catalog", configDir),
		"-v", fmt.Sprintf("%s:/test", env.dataDir),
		"-e", "AWS_ACCESS_KEY_ID="+env.accessKey,
		"-e", "AWS_SECRET_ACCESS_KEY="+env.secretKey,
		"-e", "AWS_REGION=us-west-2",
		"trinodb/trino:479",
	)
	if output, err := cmd.CombinedOutput(); err != nil {
		t.Fatalf("Failed to start Trino container: %v\n%s", err, string(output))
	}
}
func waitForTrino(t *testing.T, containerName string, timeout time.Duration) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	var lastOutput []byte
	retryCount := 0

	for time.Now().Before(deadline) {
		// Try system catalog query as a readiness check
		cmd := exec.Command("docker", "exec", containerName,
			"trino", "--catalog", "system", "--schema", "runtime",
			"--execute", "SELECT 1",
		)
		if output, err := cmd.CombinedOutput(); err == nil {
			return
		} else {
			lastOutput = output
			outputStr := string(output)
			if strings.Contains(outputStr, "No such container") ||
				strings.Contains(outputStr, "is not running") {
				break
			}
			retryCount++
		}
		time.Sleep(1 * time.Second)
	}

	// If we can't connect to system catalog, try to at least connect to Trino server
	cmd := exec.Command("docker", "exec", containerName, "trino", "--version")
	if err := cmd.Run(); err == nil {
		// Trino process is running, even if catalog isn't ready yet
		// Give it a bit more time
		time.Sleep(5 * time.Second)
		return
	}

	logs, _ := exec.Command("docker", "logs", containerName).CombinedOutput()
	t.Fatalf("Timed out waiting for Trino to be ready\nLast output:\n%s\nTrino logs:\n%s", string(lastOutput), string(logs))
}
func runTrinoSQL(t *testing.T, containerName, sql string) string {
	t.Helper()
	cmd := exec.Command("docker", "exec", containerName,
		"trino", "--catalog", "iceberg",
		"--output-format", "CSV",
		"--execute", sql,
	)
	output, err := cmd.CombinedOutput()
	if err != nil {
		logs, _ := exec.Command("docker", "logs", containerName).CombinedOutput()
		t.Fatalf("Trino command failed: %v\nSQL: %s\nOutput:\n%s\nTrino logs:\n%s", err, sql, string(output), string(logs))
	}
	return string(output)
}
func createTableBucket(t *testing.T, env *TestEnvironment, bucketName string) {
	t.Helper()
	// Use weed shell to create the table bucket
	// Create with "000000000000" account ID (matches AccountAdmin.Id from auth_credentials.go)
	// This ensures bucket owner matches authenticated identity's Account.Id
	cmd := exec.Command(env.weedBinary, "shell",
		fmt.Sprintf("-master=%s:%d.%d", env.bindIP, env.masterPort, env.masterGrpcPort),
	)
	cmd.Stdin = strings.NewReader(fmt.Sprintf("s3tables.bucket -create -name %s -account 000000000000\nexit\n", bucketName))
	fmt.Printf(">>> EXECUTING: %v\n", cmd.Args)
	output, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf(">>> ERROR Output: %s\n", string(output))
		t.Fatalf("Failed to create table bucket %s via weed shell: %v\nOutput: %s", bucketName, err, string(output))
	}
	fmt.Printf(">>> SUCCESS: Created table bucket %s\n", bucketName)
	t.Logf("Created table bucket: %s", bucketName)
}

func createObjectBucket(t *testing.T, env *TestEnvironment, bucketName string) {
	t.Helper()
	// Create an AWS S3 client with the test credentials pointing to our local server
	cfg := aws.Config{
		Region:       "us-east-1",
		Credentials:  aws.NewCredentialsCache(credentials.NewStaticCredentialsProvider(env.accessKey, env.secretKey, "")),
		BaseEndpoint: aws.String(fmt.Sprintf("http://%s:%d", env.bindIP, env.s3Port)),
	}
	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.UsePathStyle = true
	})

	// Create the bucket using standard S3 API
	_, err := client.CreateBucket(context.Background(), &s3.CreateBucketInput{
		Bucket: aws.String(bucketName),
	})
	if err != nil {
		t.Fatalf("Failed to create object bucket %s: %v", bucketName, err)
	}
}
func hasDocker() bool {
	cmd := exec.Command("docker", "version")
	return cmd.Run() == nil
}

func mustFreePort(t *testing.T, name string) int {
	t.Helper()
	port, err := getFreePort()
	if err != nil {
		t.Fatalf("Failed to get free port for %s: %v", name, err)
	}
	return port
}

func mustFreePortPair(t *testing.T, name string) (int, int) {
	t.Helper()
	httpPort, grpcPort, err := findAvailablePortPair()
	if err != nil {
		t.Fatalf("Failed to get free port pair for %s: %v", name, err)
	}
	return httpPort, grpcPort
}

func findAvailablePortPair() (int, int, error) {
	httpPort, err := getFreePort()
	if err != nil {
		return 0, 0, err
	}
	grpcPort, err := getFreePort()
	if err != nil {
		return 0, 0, err
	}
	return httpPort, grpcPort, nil
}

func getFreePort() (int, error) {
	listener, err := net.Listen("tcp", "0.0.0.0:0")
	if err != nil {
		return 0, err
	}
	defer listener.Close()
	addr := listener.Addr().(*net.TCPAddr)
	return addr.Port, nil
}

func findBindIP() string {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "127.0.0.1"
	}
	for _, addr := range addrs {
		ipNet, ok := addr.(*net.IPNet)
		if !ok || ipNet.IP == nil {
			continue
		}
		ip := ipNet.IP.To4()
		if ip == nil || ip.IsLoopback() || ip.IsLinkLocalUnicast() {
			continue
		}
		return ip.String()
	}
	return "127.0.0.1"
}

func randomString(length int) string {
	const charset = "abcdefghijklmnopqrstuvwxyz0123456789"
	b := make([]byte, length)
	if _, err := rand.Read(b); err != nil {
		panic("failed to generate random string: " + err.Error())
	}
	for i := range b {
		b[i] = charset[int(b[i])%len(charset)]
	}
	return string(b)
}