* admin: remove misleading "secret key only shown once" warning
The access key details modal already allows viewing both the access key
and secret key at any time, so the warning about the secret key only
being displayed once is incorrect and misleading.
* admin: allow specifying custom access key and secret key
Add optional access_key and secret_key fields to the create access key
API. When provided, the specified keys are used instead of generating
random ones. The UI now shows a form with optional fields when creating
a new key, with a note that leaving them blank auto-generates keys.
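A minimal sketch of the fallback behavior; generateRandomKey is a hypothetical helper standing in for the actual key-generation code:

    // Use caller-supplied keys when present; otherwise fall back to random ones.
    accessKey := req.AccessKey
    if accessKey == "" {
        accessKey = generateRandomKey(20) // hypothetical helper
    }
    secretKey := req.SecretKey
    if secretKey == "" {
        secretKey = generateRandomKey(40) // hypothetical helper
    }
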
* admin: check access key uniqueness before creating
Access keys must be globally unique across all users since S3 auth
looks them up in a single global map. Add an explicit check using
GetUserByAccessKey before creating, so the user gets a clear error
("access key is already in use") rather than a generic store error.
* Update object_store_users_templ.go
* admin: address review feedback for access key creation
Handler:
- Use decodeJSONBody/newJSONMaxReader instead of raw json.Decode to
enforce request size limits and handle malformed JSON properly
- Return 409 Conflict for duplicate access keys, 400 Bad Request for
validation errors, instead of generic 500
Backend:
- Validate access key length (4-128 chars) and secret key length
(8-128 chars) when user-provided
Frontend:
- Extract resetCreateKeyForm() helper to avoid duplicated cleanup logic
- Wire resetCreateKeyForm to accessKeysModal hidden.bs.modal event so
form state is always cleared when modal is dismissed
- Change secret key input to type="password" with a visibility toggle
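A sketch of the backend length checks under those constraints; the error wording is illustrative:

    if req.AccessKey != "" && (len(req.AccessKey) < 4 || len(req.AccessKey) > 128) {
        return fmt.Errorf("access key must be between 4 and 128 characters")
    }
    if req.SecretKey != "" && (len(req.SecretKey) < 8 || len(req.SecretKey) > 128) {
        return fmt.Errorf("secret key must be between 8 and 128 characters")
    }
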
* admin: guard against nil request and handle GetUserByAccessKey errors
- Add nil check for the CreateAccessKeyRequest pointer before
dereferencing, defaulting to an empty request (auto-generate both
keys).
- Handle non-"not found" errors from GetUserByAccessKey explicitly
instead of silently proceeding, so store errors (e.g. db connection
failures) surface rather than being swallowed.
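A minimal sketch of the nil guard, assuming the backend receives a *CreateAccessKeyRequest:

    if req == nil {
        // No body supplied: auto-generate both the access key and the secret key.
        req = &CreateAccessKeyRequest{}
    }
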
* Update object_store_users_templ.go
* admin: fix access key uniqueness check with gRPC store
GetUserByAccessKey returns a gRPC NotFound status error (not the
sentinel credential.ErrAccessKeyNotFound) when using the gRPC store,
causing the uniqueness check to fail with a spurious error.
Treat the lookup as best-effort: only reject when a user is found
(err == nil). Any error (not-found via any store, connectivity issues)
falls through to the store's own CreateAccessKey which enforces
uniqueness definitively.
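A sketch of the best-effort check; the credentialManager field name and GetUserByAccessKey call shape are assumptions:

    // Only a successful lookup proves a collision. Any error (sentinel not-found,
    // gRPC NotFound, connectivity failure) falls through to CreateAccessKey,
    // which enforces uniqueness definitively.
    if req.AccessKey != "" {
        if _, err := s.credentialManager.GetUserByAccessKey(ctx, req.AccessKey); err == nil {
            return fmt.Errorf("access key is already in use")
        }
    }
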
* admin: fix error handling and input validation for access key creation
Backend:
- Remove access key value from the duplicate-key error message to avoid
logging the caller-supplied identifier.
Handler:
- Handle empty POST body (io.EOF) as a valid request that auto-generates
both keys, instead of rejecting it as malformed JSON.
- Return 404 for "not found" errors (e.g. non-existent user) instead of
collapsing them into a 500.
Frontend:
- Add minlength/maxlength attributes matching backend constraints
(access key 4-128, secret key 8-128).
- Call reportValidity() before submitting so invalid lengths are caught
client-side without a round trip.
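A sketch of the empty-body handling, assuming a decodeJSONBody(r, &dst) error helper; the real helper's signature may differ:

    var req CreateAccessKeyRequest
    if err := decodeJSONBody(r, &req); err != nil {
        if errors.Is(err, io.EOF) {
            // Empty POST body: auto-generate both keys.
            req = CreateAccessKeyRequest{}
        } else {
            writeJSONError(w, http.StatusBadRequest, "invalid request body: "+err.Error())
            return
        }
    }
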
* admin: use sentinel errors and fix GetUserByAccessKey error handling
Backend (user_management.go):
- Define sentinel errors (ErrAccessKeyInUse, ErrUserNotFound,
ErrInvalidInput) and wrap them in returned errors so callers can use
errors.Is.
- Handle GetUserByAccessKey errors properly: check the sentinel
credential.ErrAccessKeyNotFound first, then fall back to string
matching for stores (gRPC) that return non-sentinel not-found errors.
Surface unexpected errors instead of silently proceeding.
Handler (user_handlers.go):
- Replace fragile strings.Contains error matching with errors.Is
against the new sentinel errors in the dash package.
Frontend (object_store_users.templ):
- Add double-submit guard (isCreatingKey flag + button disabling) to
prevent duplicate access key creation requests.
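A sketch of the sentinel-error pattern; the variable names follow the commit, while the exact wrapping and status mapping are assumptions:

    // user_management.go
    var (
        ErrAccessKeyInUse = errors.New("access key is already in use")
        ErrUserNotFound   = errors.New("user not found")
        ErrInvalidInput   = errors.New("invalid input")
    )

    // Wrap the sentinel so callers can match with errors.Is while keeping context.
    return fmt.Errorf("%w: access key must be 4-128 characters", ErrInvalidInput)

    // user_handlers.go
    switch {
    case errors.Is(err, ErrAccessKeyInUse):
        writeJSONError(w, http.StatusConflict, err.Error())
    case errors.Is(err, ErrUserNotFound):
        writeJSONError(w, http.StatusNotFound, err.Error())
    case errors.Is(err, ErrInvalidInput):
        writeJSONError(w, http.StatusBadRequest, err.Error())
    default:
        writeJSONError(w, http.StatusInternalServerError, err.Error())
    }
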
package dash

import (
	"context"
	"net/http"
	"sort"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/cluster"
	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/iam"
	"github.com/seaweedfs/seaweedfs/weed/pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/master_pb"
)

// Access key status constants
const (
	AccessKeyStatusActive   = iam.AccessKeyStatusActive
	AccessKeyStatusInactive = iam.AccessKeyStatusInactive
)

type AdminData struct {
	Username          string              `json:"username"`
	TotalVolumes      int                 `json:"total_volumes"`
	TotalFiles        int64               `json:"total_files"`
	TotalSize         int64               `json:"total_size"`
	VolumeSizeLimitMB uint64              `json:"volume_size_limit_mb"`
	MasterNodes       []MasterNode        `json:"master_nodes"`
	VolumeServers     []VolumeServer      `json:"volume_servers"`
	FilerNodes        []FilerNode         `json:"filer_nodes"`
	MessageBrokers    []MessageBrokerNode `json:"message_brokers"`
	DataCenters       []DataCenter        `json:"datacenters"`
	LastUpdated       time.Time           `json:"last_updated"`

	// EC shard totals for dashboard
	TotalEcVolumes int `json:"total_ec_volumes"` // Total number of EC volumes across all servers
	TotalEcShards  int `json:"total_ec_shards"`  // Total number of EC shards across all servers
}

// Object Store Users management structures
type ObjectStoreUser struct {
	Username    string   `json:"username"`
	Email       string   `json:"email"`
	AccessKey   string   `json:"access_key"`
	SecretKey   string   `json:"secret_key"`
	Permissions []string `json:"permissions"`
	PolicyNames []string `json:"policy_names"`
}

type ObjectStoreUsersData struct {
	Username    string            `json:"username"`
	Users       []ObjectStoreUser `json:"users"`
	TotalUsers  int               `json:"total_users"`
	LastUpdated time.Time         `json:"last_updated"`
}

// User management request structures
type CreateUserRequest struct {
	Username    string   `json:"username" binding:"required"`
	Email       string   `json:"email"`
	Actions     []string `json:"actions"`
	GenerateKey bool     `json:"generate_key"`
	PolicyNames []string `json:"policy_names"`
}

type UpdateUserRequest struct {
	Email       string   `json:"email"`
	Actions     []string `json:"actions"`
	PolicyNames []string `json:"policy_names"`
}

type UpdateUserPoliciesRequest struct {
	Actions []string `json:"actions" binding:"required"`
}

type AccessKeyInfo struct {
	AccessKey string    `json:"access_key"`
	SecretKey string    `json:"secret_key"`
	Status    string    `json:"status"`
	CreatedAt time.Time `json:"created_at"`
}

type CreateAccessKeyRequest struct {
	AccessKey string `json:"access_key"`
	SecretKey string `json:"secret_key"`
}

type UpdateAccessKeyStatusRequest struct {
	Status string `json:"status" binding:"required"`
}

type UserDetails struct {
	Username    string          `json:"username"`
	Email       string          `json:"email"`
	Actions     []string        `json:"actions"`
	PolicyNames []string        `json:"policy_names"`
	AccessKeys  []AccessKeyInfo `json:"access_keys"`
	Groups      []string        `json:"groups"`
}

type FilerNode struct {
	Address     string    `json:"address"`
	DataCenter  string    `json:"datacenter"`
	Rack        string    `json:"rack"`
	LastUpdated time.Time `json:"last_updated"`
}

type MessageBrokerNode struct {
	Address     string    `json:"address"`
	DataCenter  string    `json:"datacenter"`
	Rack        string    `json:"rack"`
	LastUpdated time.Time `json:"last_updated"`
}

// GetAdminData retrieves admin data as a struct (for reuse by both JSON and HTML handlers)
func (s *AdminServer) GetAdminData(username string) (AdminData, error) {
	if username == "" {
		username = "admin"
	}

	// Get cluster topology
	topology, err := s.GetClusterTopology()
	if err != nil {
		glog.Errorf("Failed to get cluster topology: %v", err)
		return AdminData{}, err
	}

	// Get volume servers data with EC shard information
	volumeServersData, err := s.GetClusterVolumeServers()
	if err != nil {
		glog.Errorf("Failed to get cluster volume servers: %v", err)
		return AdminData{}, err
	}

	// Get master nodes status
	masterNodes := s.getMasterNodesStatus()

	// Get filer nodes status
	filerNodes := s.getFilerNodesStatus()

	// Get message broker nodes status
	messageBrokers := s.getMessageBrokerNodesStatus()

	// Get volume size limit from master configuration
	var volumeSizeLimitMB uint64 = 30000 // Default to 30GB
	err = s.WithMasterClient(func(client master_pb.SeaweedClient) error {
		resp, err := client.GetMasterConfiguration(context.Background(), &master_pb.GetMasterConfigurationRequest{})
		if err != nil {
			return err
		}
		volumeSizeLimitMB = uint64(resp.VolumeSizeLimitMB)
		return nil
	})
	if err != nil {
		glog.Warningf("Failed to get volume size limit from master: %v", err)
		// Keep default value on error
	}

	// Calculate EC shard totals
	var totalEcVolumes, totalEcShards int
	ecVolumeSet := make(map[uint32]bool) // To avoid counting the same EC volume multiple times

	for _, vs := range volumeServersData.VolumeServers {
		totalEcShards += vs.EcShards
		// Count unique EC volumes across all servers
		for _, ecInfo := range vs.EcShardDetails {
			ecVolumeSet[ecInfo.VolumeID] = true
		}
	}
	totalEcVolumes = len(ecVolumeSet)

	// Prepare admin data
	adminData := AdminData{
		Username:          username,
		TotalVolumes:      topology.TotalVolumes,
		TotalFiles:        topology.TotalFiles,
		TotalSize:         topology.TotalSize,
		VolumeSizeLimitMB: volumeSizeLimitMB,
		MasterNodes:       masterNodes,
		VolumeServers:     volumeServersData.VolumeServers,
		FilerNodes:        filerNodes,
		MessageBrokers:    messageBrokers,
		DataCenters:       topology.DataCenters,
		LastUpdated:       topology.UpdatedAt,
		TotalEcVolumes:    totalEcVolumes,
		TotalEcShards:     totalEcShards,
	}

	return adminData, nil
}

// ShowAdmin displays the main admin page (now uses GetAdminData)
func (s *AdminServer) ShowAdmin(w http.ResponseWriter, r *http.Request) {
	username := UsernameFromContext(r.Context())

	adminData, err := s.GetAdminData(username)
	if err != nil {
		writeJSONError(w, http.StatusInternalServerError, "Failed to get admin data: "+err.Error())
		return
	}

	// Return JSON for API calls
	writeJSON(w, http.StatusOK, adminData)
}

// ShowOverview displays cluster overview
func (s *AdminServer) ShowOverview(w http.ResponseWriter, r *http.Request) {
	topology, err := s.GetClusterTopology()
	if err != nil {
		writeJSONError(w, http.StatusInternalServerError, err.Error())
		return
	}

	writeJSON(w, http.StatusOK, topology)
}

// getMasterNodesStatus checks status of all master nodes
func (s *AdminServer) getMasterNodesStatus() []MasterNode {
	var masterNodes []MasterNode

	// Since we have a single master address, create one entry
	var isLeader bool = true // Assume leader since it's the only master we know about

	// Try to get leader info from this master
	err := s.WithMasterClient(func(client master_pb.SeaweedClient) error {
		_, err := client.GetMasterConfiguration(context.Background(), &master_pb.GetMasterConfigurationRequest{})
		if err != nil {
			return err
		}
		// For now, assume this master is the leader since we can connect to it
		isLeader = true
		return nil
	})

	if err != nil {
		isLeader = false
	}

	currentMaster := s.masterClient.GetMaster(context.Background())
	if currentMaster != "" {
		masterNodes = append(masterNodes, MasterNode{
			Address:  pb.ServerAddress(currentMaster).ToHttpAddress(),
			IsLeader: isLeader,
		})
	}

	return masterNodes
}

// getFilerNodesStatus checks status of all filer nodes using master's ListClusterNodes
func (s *AdminServer) getFilerNodesStatus() []FilerNode {
	var filerNodes []FilerNode

	// Get filer nodes from master using ListClusterNodes
	err := s.WithMasterClient(func(client master_pb.SeaweedClient) error {
		resp, err := client.ListClusterNodes(context.Background(), &master_pb.ListClusterNodesRequest{
			ClientType: cluster.FilerType,
		})
		if err != nil {
			return err
		}

		// Process each filer node
		for _, node := range resp.ClusterNodes {
			filerNodes = append(filerNodes, FilerNode{
				Address:     pb.ServerAddress(node.Address).ToHttpAddress(),
				DataCenter:  node.DataCenter,
				Rack:        node.Rack,
				LastUpdated: time.Now(),
			})
		}

		return nil
	})

	if err != nil {
		currentMaster := s.masterClient.GetMaster(context.Background())
		glog.Errorf("Failed to get filer nodes from master %s: %v", currentMaster, err)
		// Return empty list if we can't get filer info from master
		return []FilerNode{}
	}

	// Sort filer nodes by address for consistent ordering on page refresh
	sort.Slice(filerNodes, func(i, j int) bool {
		return filerNodes[i].Address < filerNodes[j].Address
	})

	return filerNodes
}

// getMessageBrokerNodesStatus checks status of all message broker nodes using master's ListClusterNodes
func (s *AdminServer) getMessageBrokerNodesStatus() []MessageBrokerNode {
	var messageBrokers []MessageBrokerNode

	// Get message broker nodes from master using ListClusterNodes
	err := s.WithMasterClient(func(client master_pb.SeaweedClient) error {
		resp, err := client.ListClusterNodes(context.Background(), &master_pb.ListClusterNodesRequest{
			ClientType: cluster.BrokerType,
		})
		if err != nil {
			return err
		}

		// Process each message broker node
		for _, node := range resp.ClusterNodes {
			messageBrokers = append(messageBrokers, MessageBrokerNode{
				Address:     node.Address,
				DataCenter:  node.DataCenter,
				Rack:        node.Rack,
				LastUpdated: time.Now(),
			})
		}

		return nil
	})

	if err != nil {
		currentMaster := s.masterClient.GetMaster(context.Background())
		glog.Errorf("Failed to get message broker nodes from master %s: %v", currentMaster, err)
		// Return empty list if we can't get broker info from master
		return []MessageBrokerNode{}
	}

	// Sort message broker nodes by address for consistent ordering on page refresh
	sort.Slice(messageBrokers, func(i, j int) bool {
		return messageBrokers[i].Address < messageBrokers[j].Address
	})

	return messageBrokers
}