Add Spark Iceberg catalog integration tests and CI support (#8242)
* Add Spark Iceberg catalog integration tests and CI support

  Implement comprehensive integration tests for Spark with the SeaweedFS Iceberg REST catalog:
  - Basic CRUD operations (Create, Read, Update, Delete) on Iceberg tables
  - Namespace (database) management
  - Data insertion, querying, and deletion
  - Time travel capabilities via snapshot versioning
  - Compatible with SeaweedFS S3 and Iceberg REST endpoints

  Tests mirror the structure of the existing Trino integration tests but use Spark's SQL API and PySpark for testing.

  Add a GitHub Actions CI job for spark-iceberg-catalog-tests in s3-tables-tests.yml to automatically run the Spark integration tests on pull requests.

* fmt

* Fix Spark integration tests - code review feedback

* go mod tidy

* Add go mod tidy step to integration test jobs

  Add a 'go mod tidy' step before the test runs for all integration test jobs:
  - s3-tables-tests
  - iceberg-catalog-tests
  - trino-iceberg-catalog-tests
  - spark-iceberg-catalog-tests

  This ensures dependencies are clean before running tests.

* Fix remaining Spark operations test issues

  Address final code review comments.

  Setup & initialization:
  - Add a waitForSparkReady() helper that polls Spark readiness with backoff instead of a hardcoded 10-second sleep (see the readiness-polling sketch after this message)
  - Extract a setupSparkTestEnv() helper to reduce the boilerplate duplicated between TestSparkCatalogBasicOperations and TestSparkTimeTravel
  - Both tests now use the helpers for consistent, reliable setup

  Assertions & validation:
  - Make setup-critical operations (namespace, table creation, initial insert) use t.Fatalf instead of t.Errorf to fail fast
  - Validate the setupSQL output in TestSparkTimeTravel and fail if it is not 'Setup complete'
  - Add validation after the second INSERT in TestSparkTimeTravel: verify the row count increased to 2 before the time travel test
  - Add context to error messages with the namespace and tableName params

  Code quality:
  - Remove code duplication between the test functions
  - All critical paths are now properly validated
  - Consistent error handling throughout

* Fix go vet errors in S3 Tables tests

  Fixes:
  1. setup_test.go (Spark):
     - Add the missing import github.com/testcontainers/testcontainers-go/wait
     - Use wait.ForLog instead of the undefined testcontainers.NewLogStrategy
     - Remove the unused strings import
  2. trino_catalog_test.go:
     - Use net.JoinHostPort instead of fmt.Sprintf for address formatting; it properly handles IPv6 addresses by wrapping them in brackets (see the address-formatting example below)

* Use weed mini for simpler SeaweedFS startup

  Replace the complex multi-process startup (master, volume, filer, s3) with a single 'weed mini' command that starts all services together.

  Benefits:
  - Simpler, more reliable startup
  - A single weed mini process instead of 4 separate processes
  - Automatic coordination between components
  - Better port management with no manual coordination

  Changes:
  - Remove the separate master, volume, and filer process startup
  - Use weed mini with the -master.port, -filer.port, and -s3.port flags
  - Keep Iceberg REST as a separate service (still needed)
  - Increase the timeout to 15s for port readiness (weed mini startup)
  - Remove the volumePort and filerProcess fields from TestEnvironment
  - Simplify cleanup to handle only two processes (mini, iceberg rest)

* Clean up dead code and temp directory leaks

  Fixes:
  1. Remove the dead s3Process field and its cleanup:
     - weed mini bundles the S3 gateway, so no separate process is needed
     - Removed the s3Process field from TestEnvironment
     - Removed the unnecessary s3Process cleanup code
  2. Fix the temp config directory leak:
     - Add a sparkConfigDir field to TestEnvironment
     - Store the configDir returned by writeSparkConfig
     - Clean up sparkConfigDir in Cleanup() with os.RemoveAll
     - Prevents accumulation of temp directories across test runs
  3. Simplify Cleanup:
     - Now handles only the necessary processes (weed mini, iceberg rest)
     - Removes both seaweedfsDataDir and sparkConfigDir
     - Cleaner shutdown sequence

* Use weed mini's built-in Iceberg REST and fix the python binary

  Changes:
  - Add the -s3.port.iceberg flag to weed mini for the built-in Iceberg REST Catalog
  - Remove the separate 'weed server' process for Iceberg REST
  - Remove the icebergRestProcess field from TestEnvironment
  - Simplify Cleanup() to manage only weed mini + Spark
  - Add a port readiness check for the Iceberg REST port served by weed mini
  - Set the Spark container Cmd to '/bin/sh -c sleep 3600' to keep it running
  - Change python to python3 in the container.Exec calls

  This simplifies the setup to truly one all-in-one weed mini process (master, filer, s3, iceberg-rest) plus just the Spark container.

* go fmt

* clean up

* Bind on a non-loopback IP for container access, align Iceberg metadata save locations with table locations, and rework Spark time travel to use TIMESTAMP AS OF with safe timestamp extraction (see the time-travel sketch below)

* shared mini start

* Fix internal directory creation under /buckets so .objects paths can auto-create without failing bucket-name validation, which restores table bucket object writes

* fix path

  Update table bucket objects to write under `/buckets/<bucket>` and save Iceberg metadata there, adjusting the Spark time-travel timestamp to committed_at + 1s. Rebuilt the weed binary (`go install ./weed`) and confirmed passing tests for Spark and Trino with focused test commands.

* Update table bucket creation to stop creating /buckets/.objects and switch the Trino REST warehouse to s3://<bucket> to match the Iceberg layout

* Stabilize S3Tables integration tests

* Fix timestamp extraction and remove dead code in bucketDir

* Use table bucket as warehouse in s3tables tests

* Update trino_blog_operations_test.go

* Add the CASCADE option to handle any remaining table metadata/files in the schema directory

* Skip namespace not empty
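The commit message above describes replacing fixed sleeps with readiness polling (waitForSparkReady and the weed mini port checks). The following is a minimal sketch of that pattern, assuming a hypothetical waitForPort helper; the name, signature, and timings are illustrative and not the exact helpers used in the tests.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForPort retries a TCP dial with exponential backoff until the service
    // accepts connections or the deadline expires, instead of sleeping a fixed time.
    func waitForPort(host string, port int, timeout time.Duration) error {
    	addr := net.JoinHostPort(host, fmt.Sprintf("%d", port))
    	deadline := time.Now().Add(timeout)
    	backoff := 100 * time.Millisecond
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, backoff)
    		if err == nil {
    			conn.Close()
    			return nil // port is accepting connections
    		}
    		time.Sleep(backoff)
    		if backoff < 2*time.Second {
    			backoff *= 2 // simple exponential backoff
    		}
    	}
    	return fmt.Errorf("port %s not ready within %s", addr, timeout)
    }

    func main() {
    	// Example: wait up to 30s for a hypothetical Iceberg REST port on localhost.
    	if err := waitForPort("127.0.0.1", 8181, 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }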
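The go vet fix in trino_catalog_test.go swaps fmt.Sprintf for net.JoinHostPort when building dial addresses. A small illustration of why, with made-up host and port values:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	host, port := "::1", "8181" // IPv6 loopback and an example port, made up for the illustration
    	fmt.Println(host + ":" + port)            // "::1:8181"  - ambiguous, cannot be dialed
    	fmt.Println(net.JoinHostPort(host, port)) // "[::1]:8181" - brackets make it a valid dial address
    }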
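The time-travel rework queries the table with TIMESTAMP AS OF, using the snapshot's committed_at plus one second so the timestamp lands safely after the first commit. A sketch of how such queries could be built; the table identifier and helper are placeholders, not the exact code used in the Spark tests.

    package main

    import (
    	"fmt"
    	"time"
    )

    // timeTravelSQL builds a current-state query and a Spark time-travel query
    // pinned just after the given snapshot commit time.
    func timeTravelSQL(table string, committedAt time.Time) (current, asOf string) {
    	current = fmt.Sprintf("SELECT COUNT(*) FROM %s", table)
    	// committed_at + 1s, formatted for Spark SQL's TIMESTAMP AS OF clause.
    	ts := committedAt.Add(1 * time.Second).UTC().Format("2006-01-02 15:04:05")
    	asOf = fmt.Sprintf("SELECT COUNT(*) FROM %s TIMESTAMP AS OF '%s'", table, ts)
    	return current, asOf
    }

    func main() {
    	cur, asOf := timeTravelSQL("iceberg.testdb.events", time.Now().Add(-time.Minute))
    	fmt.Println(cur)
    	fmt.Println(asOf)
    }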
@@ -89,9 +89,6 @@ func (s3a *S3ApiServer) bucketDir(bucket string) string {
	if tablePath, ok := s3a.tableLocationDir(bucket); ok {
		return tablePath
	}
	if s3a.isTableBucket(bucket) {
		return s3tables.GetTableObjectBucketPath(bucket)
	}
	return path.Join(s3a.bucketRoot(bucket), bucket)
}
@@ -217,13 +217,14 @@ func (s *Server) saveMetadataFile(ctx context.Context, bucketName, tablePath, me
		return nil
	}

	bucketDir := path.Join(bucketsPath, bucketName)
	// 1. Ensure bucket directory exists: <bucketsPath>/<bucket>
	if err := ensureDir(bucketsPath, bucketName, "bucket directory"); err != nil {
		return err
	}

	// 2. Ensure table path exists: <bucketsPath>/<bucket>/<tablePath>
	tableDir := path.Join(bucketsPath, bucketName)
	// 2. Ensure table path exists under the bucket directory
	tableDir := bucketDir
	if tablePath != "" {
		segments := strings.Split(tablePath, "/")
		for _, segment := range segments {

@@ -354,6 +355,9 @@ func getBucketFromPrefix(r *http.Request) string {
	if prefix := vars["prefix"]; prefix != "" {
		return prefix
	}
	if bucket := os.Getenv("S3TABLES_DEFAULT_BUCKET"); bucket != "" {
		return bucket
	}
	// Default bucket if no prefix - use "warehouse" for Iceberg
	return "warehouse"
}
@@ -680,24 +684,32 @@ func (s *Server) handleCreateTable(w http.ResponseWriter, r *http.Request) {

	// Generate UUID for the new table
	tableUUID := uuid.New()
	location := strings.TrimSuffix(req.Location, "/")
	tablePath := path.Join(encodeNamespace(namespace), req.Name)
	storageBucket := bucketName
	tableLocationBucket := ""
	if location != "" {
	location := strings.TrimSuffix(req.Location, "/")
	if location == "" {
		if req.Properties != nil {
			if warehouse := strings.TrimSuffix(req.Properties["warehouse"], "/"); warehouse != "" {
				location = fmt.Sprintf("%s/%s", warehouse, tablePath)
			}
		}
		if location == "" {
			if warehouse := strings.TrimSuffix(os.Getenv("ICEBERG_WAREHOUSE"), "/"); warehouse != "" {
				location = fmt.Sprintf("%s/%s", warehouse, tablePath)
			}
		}
		if location == "" {
			location = fmt.Sprintf("s3://%s/%s", bucketName, tablePath)
		}
	} else {
		parsedBucket, parsedPath, err := parseS3Location(location)
		if err != nil {
			writeError(w, http.StatusBadRequest, "BadRequestException", "Invalid table location: "+err.Error())
			return
		}
		if strings.HasSuffix(parsedBucket, "--table-s3") && parsedPath == "" {
			tableLocationBucket = parsedBucket
			if parsedPath == "" {
				location = fmt.Sprintf("s3://%s/%s", parsedBucket, tablePath)
			}
		}
	if tableLocationBucket == "" {
		tableLocationBucket = fmt.Sprintf("%s--table-s3", tableUUID.String())
	}
	location = fmt.Sprintf("s3://%s", tableLocationBucket)

	// Build proper Iceberg table metadata using iceberg-go types
	metadata := newTableMetadata(tableUUID, location, req.Schema, req.PartitionSpec, req.WriteOrder, req.Properties)
@@ -713,15 +725,21 @@ func (s *Server) handleCreateTable(w http.ResponseWriter, r *http.Request) {
		return
	}

	// 1. Save metadata file to filer
	tableName := req.Name
	metadataFileName := "v1.metadata.json" // Initial version is always 1
	if err := s.saveMetadataFile(r.Context(), storageBucket, tablePath, metadataFileName, metadataBytes); err != nil {
		writeError(w, http.StatusInternalServerError, "InternalServerError", "Failed to save metadata file: "+err.Error())
		return
	}

	metadataLocation := fmt.Sprintf("%s/metadata/%s", location, metadataFileName)
	if !req.StageCreate {
		// Save metadata file to filer for immediate table creation.
		metadataBucket, metadataPath, err := parseS3Location(location)
		if err != nil {
			writeError(w, http.StatusInternalServerError, "InternalServerError", "Invalid table location: "+err.Error())
			return
		}
		if err := s.saveMetadataFile(r.Context(), metadataBucket, metadataPath, metadataFileName, metadataBytes); err != nil {
			writeError(w, http.StatusInternalServerError, "InternalServerError", "Failed to save metadata file: "+err.Error())
			return
		}
	}

	// Use S3 Tables manager to create table
	createReq := &s3tables.CreateTableRequest{
@@ -746,8 +764,42 @@ func (s *Server) handleCreateTable(w http.ResponseWriter, r *http.Request) {
	})

	if err != nil {
		if tableErr, ok := err.(*s3tables.S3TablesError); ok && tableErr.Type == s3tables.ErrCodeTableAlreadyExists {
			getReq := &s3tables.GetTableRequest{
				TableBucketARN: bucketARN,
				Namespace: namespace,
				Name: tableName,
			}
			var getResp s3tables.GetTableResponse
			getErr := s.filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
				mgrClient := s3tables.NewManagerClient(client)
				return s.tablesManager.Execute(r.Context(), mgrClient, "GetTable", getReq, &getResp, identityName)
			})
			if getErr != nil {
				writeError(w, http.StatusConflict, "AlreadyExistsException", err.Error())
				return
			}
			result := buildLoadTableResult(getResp, bucketName, namespace, tableName)
			writeJSON(w, http.StatusOK, result)
			return
		}
		if strings.Contains(err.Error(), "already exists") {
			writeError(w, http.StatusConflict, "AlreadyExistsException", err.Error())
			getReq := &s3tables.GetTableRequest{
				TableBucketARN: bucketARN,
				Namespace: namespace,
				Name: tableName,
			}
			var getResp s3tables.GetTableResponse
			getErr := s.filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
				mgrClient := s3tables.NewManagerClient(client)
				return s.tablesManager.Execute(r.Context(), mgrClient, "GetTable", getReq, &getResp, identityName)
			})
			if getErr != nil {
				writeError(w, http.StatusConflict, "AlreadyExistsException", err.Error())
				return
			}
			result := buildLoadTableResult(getResp, bucketName, namespace, tableName)
			writeJSON(w, http.StatusOK, result)
			return
		}
		glog.V(1).Infof("Iceberg: CreateTable error: %v", err)
@@ -809,7 +861,11 @@ func (s *Server) handleLoadTable(w http.ResponseWriter, r *http.Request) {
		return
	}

	// Build table metadata using iceberg-go types
	result := buildLoadTableResult(getResp, bucketName, namespace, tableName)
	writeJSON(w, http.StatusOK, result)
}

func buildLoadTableResult(getResp s3tables.GetTableResponse, bucketName string, namespace []string, tableName string) LoadTableResult {
	location := tableLocationFromMetadataLocation(getResp.MetadataLocation)
	if location == "" {
		location = fmt.Sprintf("s3://%s/%s/%s", bucketName, encodeNamespace(namespace), tableName)

@@ -840,12 +896,11 @@ func (s *Server) handleLoadTable(w http.ResponseWriter, r *http.Request) {
		metadata = newTableMetadata(tableUUID, location, nil, nil, nil, nil)
	}

	result := LoadTableResult{
	return LoadTableResult{
		MetadataLocation: getResp.MetadataLocation,
		Metadata: metadata,
		Config: make(iceberg.Properties),
	}
	writeJSON(w, http.StatusOK, result)
}

// handleTableExists checks if a table exists.
@@ -943,13 +998,53 @@ func (s *Server) handleUpdateTable(w http.ResponseWriter, r *http.Request) {
	// Extract identity from context
	identityName := s3_constants.GetIdentityNameFromContext(r)

	// Parse the commit request
	var req CommitTableRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
	// Parse the commit request, skipping update actions not supported by iceberg-go.
	var raw struct {
		Identifier *TableIdentifier `json:"identifier,omitempty"`
		Requirements json.RawMessage `json:"requirements"`
		Updates []json.RawMessage `json:"updates"`
	}
	if err := json.NewDecoder(r.Body).Decode(&raw); err != nil {
		writeError(w, http.StatusBadRequest, "BadRequestException", "Invalid request body: "+err.Error())
		return
	}

	var req CommitTableRequest
	req.Identifier = raw.Identifier
	if len(raw.Requirements) > 0 {
		if err := json.Unmarshal(raw.Requirements, &req.Requirements); err != nil {
			writeError(w, http.StatusBadRequest, "BadRequestException", "Invalid requirements: "+err.Error())
			return
		}
	}
	if len(raw.Updates) > 0 {
		filtered := make([]json.RawMessage, 0, len(raw.Updates))
		for _, update := range raw.Updates {
			var action struct {
				Action string `json:"action"`
			}
			if err := json.Unmarshal(update, &action); err != nil {
				writeError(w, http.StatusBadRequest, "BadRequestException", "Invalid update: "+err.Error())
				return
			}
			if action.Action == "set-statistics" {
				continue
			}
			filtered = append(filtered, update)
		}
		if len(filtered) > 0 {
			updatesBytes, err := json.Marshal(filtered)
			if err != nil {
				writeError(w, http.StatusInternalServerError, "InternalServerError", "Failed to parse updates: "+err.Error())
				return
			}
			if err := json.Unmarshal(updatesBytes, &req.Updates); err != nil {
				writeError(w, http.StatusBadRequest, "BadRequestException", "Invalid updates: "+err.Error())
				return
			}
		}
	}

	// First, load current table metadata
	getReq := &s3tables.GetTableRequest{
		TableBucketARN: bucketARN,

@@ -1049,8 +1144,12 @@ func (s *Server) handleUpdateTable(w http.ResponseWriter, r *http.Request) {
	}

	// 1. Save metadata file to filer
	tablePath := path.Join(encodeNamespace(namespace), tableName)
	if err := s.saveMetadataFile(r.Context(), bucketName, tablePath, metadataFileName, metadataBytes); err != nil {
	metadataBucket, metadataPath, err := parseS3Location(location)
	if err != nil {
		writeError(w, http.StatusInternalServerError, "InternalServerError", "Invalid table location: "+err.Error())
		return
	}
	if err := s.saveMetadataFile(r.Context(), metadataBucket, metadataPath, metadataFileName, metadataBytes); err != nil {
		writeError(w, http.StatusInternalServerError, "InternalServerError", "Failed to save metadata file: "+err.Error())
		return
	}
@@ -524,7 +524,6 @@ func (s3a *S3ApiServer) doListFilerEntries(client filer_pb.SeaweedFilerClient, d
	stream, listErr := client.ListEntries(ctx, request)
	if listErr != nil {
		if errors.Is(listErr, filer_pb.ErrNotFound) {
			err = filer_pb.ErrNotFound
			return
		}
		err = fmt.Errorf("list entries %+v: %w", request, listErr)

@@ -105,14 +105,6 @@ func (h *S3TablesHandler) handleCreateTableBucket(w http.ResponseWriter, r *http
		}
	}

	// Ensure object root directory exists for table bucket S3 operations
	if err := h.ensureDirectory(r.Context(), client, GetTableObjectRootDir()); err != nil {
		return fmt.Errorf("failed to create table object root directory: %w", err)
	}
	if err := h.ensureDirectory(r.Context(), client, GetTableObjectBucketPath(req.Name)); err != nil {
		return fmt.Errorf("failed to create table object bucket directory: %w", err)
	}

	// Create bucket directory
	if err := h.createDirectory(r.Context(), client, bucketPath); err != nil {
		return err
@@ -50,12 +50,38 @@ func (h *S3TablesHandler) handleCreateNamespace(w http.ResponseWriter, r *http.R
	var bucketMetadata tableBucketMetadata
	var bucketPolicy string
	var bucketTags map[string]string
	ownerAccountID := h.getAccountID(r)
	err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
		data, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyMetadata)
		if err != nil {
			return err
		}
		if err := json.Unmarshal(data, &bucketMetadata); err != nil {
		if errors.Is(err, ErrAttributeNotFound) {
			dir, name := splitPath(bucketPath)
			entryResp, lookupErr := filer_pb.LookupEntry(r.Context(), client, &filer_pb.LookupDirectoryEntryRequest{
				Directory: dir,
				Name: name,
			})
			if lookupErr != nil {
				return lookupErr
			}
			if entryResp.Entry == nil || !IsTableBucketEntry(entryResp.Entry) {
				return filer_pb.ErrNotFound
			}
			bucketMetadata = tableBucketMetadata{
				Name: bucketName,
				CreatedAt: time.Now(),
				OwnerAccountID: ownerAccountID,
			}
			metadataBytes, err := json.Marshal(&bucketMetadata)
			if err != nil {
				return fmt.Errorf("failed to marshal bucket metadata: %w", err)
			}
			if err := h.setExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyMetadata, metadataBytes); err != nil {
				return err
			}
		} else {
			return err
		}
		} else if err := json.Unmarshal(data, &bucketMetadata); err != nil {
			return fmt.Errorf("failed to unmarshal bucket metadata: %w", err)
		}
@@ -164,14 +164,26 @@ func (h *S3TablesHandler) handleCreateTable(w http.ResponseWriter, r *http.Reque
	tablePath := GetTablePath(bucketName, namespaceName, tableName)

	// Check if table already exists
	var existingMetadata tableMetadataInternal
	err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
		_, err := h.getExtendedAttribute(r.Context(), client, tablePath, ExtendedKeyMetadata)
		return err
		data, err := h.getExtendedAttribute(r.Context(), client, tablePath, ExtendedKeyMetadata)
		if err != nil {
			return err
		}
		if unmarshalErr := json.Unmarshal(data, &existingMetadata); unmarshalErr != nil {
			return fmt.Errorf("failed to parse existing table metadata: %w", unmarshalErr)
		}
		return nil
	})

	if err == nil {
		h.writeError(w, http.StatusConflict, ErrCodeTableAlreadyExists, fmt.Sprintf("table %s already exists", tableName))
		return fmt.Errorf("table already exists")
		tableARN := h.generateTableARN(existingMetadata.OwnerAccountID, bucketName, namespaceName+"/"+tableName)
		h.writeJSON(w, http.StatusOK, &CreateTableResponse{
			TableARN: tableARN,
			VersionToken: existingMetadata.VersionToken,
			MetadataLocation: existingMetadata.MetadataLocation,
		})
		return nil
	} else if !errors.Is(err, filer_pb.ErrNotFound) && !errors.Is(err, ErrAttributeNotFound) {
		h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to check table: %v", err))
		return err

@@ -201,14 +213,14 @@ func (h *S3TablesHandler) handleCreateTable(w http.ResponseWriter, r *http.Reque
	}

	err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
		// Create table directory
		if err := h.createDirectory(r.Context(), client, tablePath); err != nil {
		// Ensure table directory exists (may already be created by object storage clients)
		if err := h.ensureDirectory(r.Context(), client, tablePath); err != nil {
			return err
		}

		// Create data subdirectory for Iceberg files
		dataPath := tablePath + "/data"
		if err := h.createDirectory(r.Context(), client, dataPath); err != nil {
		if err := h.ensureDirectory(r.Context(), client, dataPath); err != nil {
			return err
		}