* Add IAM gRPC service definition - Add GetConfiguration/PutConfiguration for config management - Add CreateUser/GetUser/UpdateUser/DeleteUser/ListUsers for user management - Add CreateAccessKey/DeleteAccessKey/GetUserByAccessKey for access key management - Methods mirror existing IAM HTTP API functionality
* Add IAM gRPC handlers on filer server - Implement IamGrpcServer with CredentialManager integration - Handle configuration get/put operations - Handle user CRUD operations - Handle access key create/delete operations - All methods delegate to CredentialManager for actual storage
* Wire IAM gRPC service to filer server - Add CredentialManager field to FilerOption and FilerServer - Import credential store implementations in filer command - Initialize CredentialManager from credential.toml if available - Register IAM gRPC service on filer gRPC server - Enable credential management via gRPC alongside existing filer services
* Regenerate IAM protobuf with gRPC service methods
* fix: compilation error in DeleteUser
* fix: address code review comments for IAM migration
* feat: migrate policies to multi-file layout and fix identity duplicated content
* refactor: remove configuration.json and migrate Service Accounts to multi-file layout
* refactor: standardize Service Accounts as distinct store entities and fix Admin Server persistence
* config: set ServiceAccountsDirectory to /etc/iam/service_accounts
* Fix Chrome dialog auto-dismiss with Bootstrap modals - Add modal-alerts.js library with Bootstrap modal replacements - Replace all 15 confirm() calls with showConfirm/showDeleteConfirm - Auto-override window.alert() for all alert() calls - Fixes Chrome 132+ aggressively blocking native dialogs
* Upgrade Bootstrap from 5.3.2 to 5.3.8
* Fix syntax error in object_store_users.templ - remove duplicate closing braces
* create policy
* display errors
* migrate to multi-file policies
* address PR feedback: use showDeleteConfirm and showErrorMessage in policies.templ, refine migration check
* Update policies_templ.go
* add service account to iam grpc
* iam: fix potential path traversal in policy names by validating name pattern
* iam: add GetServiceAccountByAccessKey to CredentialStore interface
* iam: implement service account support for PostgresStore. Includes full CRUD operations and efficient lookup by access key.
* iam: implement GetServiceAccountByAccessKey for filer_etc, grpc, and memory stores. Provides efficient lookup of service accounts by access key where possible, with linear scan fallbacks for file-based stores.
* iam: remove filer_multiple support. Deleted its implementation and references in imports, scaffold config, and core interface constants. Redundant with filer_etc.
* clear comment
* dash: robustify service account construction - Guard against nil sa.Credential when constructing responses - Fix Expiration logic to only set if > 0, avoiding Unix epoch 1970 - Ensure consistency across Get, Create, and Update handlers
* credential/filer_etc: improve error propagation in configuration handlers - Return error from loadServiceAccountsFromMultiFile to callers - Ensure listEntries errors in SaveConfiguration (cleanup logic) are propagated unless they are "not found" failures - Fixes potential silent failures during IAM configuration sync
* credential/filer_etc: add existence check to CreateServiceAccount. Ensures consistency with other stores by preventing accidental overwrite of existing service accounts during creation.
* credential/memory: improve store robustness and Reset logic - Enforce ID immutability in UpdateServiceAccount to prevent orphans - Update Reset() to also clear the policies map, ensuring full state cleanup for tests
* dash: improve service account robustness and policy docs - Wrap parent user lookup errors to preserve context - Strictly validate Status field in UpdateServiceAccount - Add deprecation comments to legacy policy management methods
* credential/filer_etc: protect against path traversal in service accounts. Implemented ID validation (alphanumeric, underscores, hyphens) and applied it to Get, Save, and Delete operations to ensure no directory traversal via saId.json filenames. (See the sketch after this list.)
* credential/postgres: improve robustness and cleanup comments - Removed brainstorming comments in GetServiceAccountByAccessKey - Added missing rows.Err() check during iteration - Properly propagate Scan and Unmarshal errors instead of swallowing them
* admin: unify UI alerts and confirmations using Bootstrap modals - Updated modal-alerts.js with improved automated alert type detection - Replaced native alert() and confirm() with showAlert(), showConfirm(), and showDeleteConfirm() across various Templ components - Improved UX for delete operations by providing better context and styling - Ensured consistent error reporting across IAM and Maintenance views
* admin: additional UI consistency fixes for alerts and confirmations - Replaced native alert() and confirm() with Bootstrap modals in EC volumes (repair flow), Collection details (repair flow), File browser (properties and delete), and Maintenance config schema (save and reset) - Improved delete confirmation in file browser with item context - Ensured consistent success/error/info styling for all feedback
* make
* iam: add GetServiceAccountByAccessKey RPC and update GetConfiguration
* iam: implement GetServiceAccountByAccessKey on server and client
* iam: centralize policy and service account validation
* iam: optimize MemoryStore service account lookups with indexing
* iam: fix postgres service_accounts table and optimize lookups
* admin: refactor modal alerts and clean up dashboard logic
* admin: fix EC shards table layout mismatch
* admin: URL-encode IAM path parameters for safety
* admin: implement pauseWorker logic in maintenance view
* iam: add rows.Err() check to postgres ListServiceAccounts
* iam: standardize ErrServiceAccountNotFound across credential stores
* iam: map ErrServiceAccountNotFound to codes.NotFound in DeleteServiceAccount
* iam: refine service account store logic, errors and schema
* iam: add validation to GetServiceAccountByAccessKey
* admin: refine modal titles and ensure URL safety
* admin: address bot review comments for alerts and async usage
* iam: fix syntax error by restoring missing function declaration
* [FilerEtcStore] improve error handling in CreateServiceAccount. Refine error handling to provide clearer messages when checking for existing service accounts.
* [PostgresStore] add nil guards and validation to service account methods. Ensure input parameters are not nil and required IDs are present to prevent runtime panics and ensure data integrity.
* [JS] add shared IAM utility script. Consolidate common IAM operations like deleteUser and deleteAccessKey into a shared utility script for better maintainability.
* [View] include shared IAM utilities in layout. Include iam-utils.js in the main layout to make IAM functions available across all administrative pages.
* [View] refactor IAM logic and restore async in EC Shards view. Remove redundant local IAM functions and ensure that delete confirmation callbacks are properly marked as async.
* [View] consolidate IAM logic in Object Store Users view. Remove redundant local definitions of deleteUser and deleteAccessKey, relying on the shared utilities instead.
* [View] update generated templ files for UI consistency
* credential/postgres: remove redundant name column from service_accounts table. The id is already used as the unique identifier and was being copied to the name column. This removes the name column from the schema and updates the INSERT/UPDATE queries.
* credential/filer_etc: improve logging for policy migration failures. Added Errorf log if AtomicRenameEntry fails during migration to ensure visibility of common failure points.
* credential: allow uppercase characters in service account ID username. Updated ServiceAccountIdPattern to allow [A-Za-z0-9_-]+ for the username component, matching the actual service account creation logic which uses the parent user name directly.
* Update object_store_users_templ.go
* admin: fix ec_shards pagination to handle numeric page arguments. Updated goToPage in cluster_ec_shards.templ to accept either an Event or a numeric page argument. This prevents errors when goToPage(1) is called directly. Corrected both the .templ source and generated Go code.
* credential/filer_etc: improve service account storage robustness. Added nil guard to saveServiceAccount, updated GetServiceAccount to return ErrServiceAccountNotFound for empty data, and improved deleteServiceAccount to handle response-level Filer errors.
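The path-traversal protection described in the service-account commits above can be illustrated with a minimal sketch: IDs are restricted to letters, digits, underscores, and hyphens before they are used to build `<saId>.json` paths under the service accounts directory. This is only an illustration of that approach; the package, variable, and function names below (`serviceAccountIDPattern`, `validateServiceAccountID`) are hypothetical, not the actual SeaweedFS identifiers (the commits refer to `ServiceAccountIdPattern`).

```go
package credential

import (
	"fmt"
	"regexp"
)

// Hypothetical pattern: letters, digits, underscores, and hyphens only,
// which rules out path separators and ".." sequences in <saId>.json filenames.
var serviceAccountIDPattern = regexp.MustCompile(`^[A-Za-z0-9_-]+$`)

// validateServiceAccountID rejects IDs that could escape the service accounts directory.
func validateServiceAccountID(id string) error {
	if id == "" {
		return fmt.Errorf("service account id is required")
	}
	if !serviceAccountIDPattern.MatchString(id) {
		return fmt.Errorf("invalid service account id %q: only letters, digits, '_' and '-' are allowed", id)
	}
	return nil
}
```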
671 lines · 27 KiB · Plaintext
package app

import (
    "fmt"
    "sort"
    "strings"

    "github.com/seaweedfs/seaweedfs/weed/admin/dash"
    "github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding"
)

templ ClusterEcVolumes(data dash.ClusterEcVolumesData) {
    <div class="d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pt-3 pb-2 mb-3 border-bottom">
        <div>
            <h1 class="h2">
                <i class="fas fa-database me-2"></i>EC Volumes
            </h1>
            if data.Collection != "" {
                <div class="d-flex align-items-center mt-2">
                    if data.Collection == "default" {
                        <span class="badge bg-secondary text-white me-2">
                            <i class="fas fa-filter me-1"></i>Collection: default
                        </span>
                    } else {
                        <span class="badge bg-info text-white me-2">
                            <i class="fas fa-filter me-1"></i>Collection: {data.Collection}
                        </span>
                    }
                    <a href="/storage/ec-volumes" class="btn btn-sm btn-outline-secondary">
                        <i class="fas fa-times me-1"></i>Clear Filter
                    </a>
                </div>
            }
        </div>
        <div class="btn-toolbar mb-2 mb-md-0">
            <div class="btn-group me-2">
                <select class="form-select form-select-sm me-2" id="pageSizeSelect" onchange="changePageSize(this.value)" style="width: auto;">
                    <option value="5" if data.PageSize == 5 { selected="selected" }>5 per page</option>
                    <option value="10" if data.PageSize == 10 { selected="selected" }>10 per page</option>
                    <option value="25" if data.PageSize == 25 { selected="selected" }>25 per page</option>
                    <option value="50" if data.PageSize == 50 { selected="selected" }>50 per page</option>
                    <option value="100" if data.PageSize == 100 { selected="selected" }>100 per page</option>
                </select>
                <button type="button" class="btn btn-sm btn-outline-primary" onclick="window.location.reload()">
                    <i class="fas fa-refresh me-1"></i>Refresh
                </button>
            </div>
        </div>
    </div>

    <!-- Statistics Cards -->
    <div class="row mb-4">
        <div class="col-md-3">
            <div class="card text-bg-primary">
                <div class="card-body">
                    <div class="d-flex justify-content-between">
                        <div>
                            <h6 class="card-title">Total Volumes</h6>
                            <h4 class="mb-0">{fmt.Sprintf("%d", data.TotalVolumes)}</h4>
                        </div>
                        <div class="align-self-center">
                            <i class="fas fa-cubes fa-2x"></i>
                        </div>
                    </div>
                </div>
            </div>
        </div>
        <div class="col-md-3">
            <div class="card text-bg-info">
                <div class="card-body">
                    <div class="d-flex justify-content-between">
                        <div>
                            <h6 class="card-title">Total Shards</h6>
                            <h4 class="mb-0">{fmt.Sprintf("%d", data.TotalShards)}</h4>
                        </div>
                        <div class="align-self-center">
                            <i class="fas fa-puzzle-piece fa-2x"></i>
                        </div>
                    </div>
                </div>
            </div>
        </div>
        <div class="col-md-3">
            <div class="card text-bg-success">
                <div class="card-body">
                    <div class="d-flex justify-content-between">
                        <div>
                            <h6 class="card-title">Healthy Volumes</h6>
                            <h4 class="mb-0">{fmt.Sprintf("%d", data.CompleteVolumes)}</h4>
                            <small>All { fmt.Sprintf("%d", erasure_coding.TotalShardsCount) } shards present</small>
                        </div>
                        <div class="align-self-center">
                            <i class="fas fa-check-circle fa-2x"></i>
                        </div>
                    </div>
                </div>
            </div>
        </div>
        <div class="col-md-3">
            <div class="card text-bg-warning">
                <div class="card-body">
                    <div class="d-flex justify-content-between">
                        <div>
                            <h6 class="card-title">Degraded Volumes</h6>
                            <h4 class="mb-0">{fmt.Sprintf("%d", data.IncompleteVolumes)}</h4>
                            <small>Incomplete/Critical</small>
                        </div>
                        <div class="align-self-center">
                            <i class="fas fa-exclamation-triangle fa-2x"></i>
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>

    <!-- EC Storage Information Note -->
    <div class="alert alert-info mb-4" role="alert">
        <i class="fas fa-info-circle me-2"></i>
        <strong>EC Storage Note:</strong>
        EC volumes use erasure coding ({ fmt.Sprintf("%d+%d", erasure_coding.DataShardsCount, erasure_coding.ParityShardsCount) }) which stores data across { fmt.Sprintf("%d", erasure_coding.TotalShardsCount) } shards with redundancy.
        Physical storage is approximately { fmt.Sprintf("%.1fx", float64(erasure_coding.TotalShardsCount)/float64(erasure_coding.DataShardsCount)) } the original logical data size due to { fmt.Sprintf("%d", erasure_coding.ParityShardsCount) } parity shards.
    </div>
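
    <!-- Worked example (assuming the default 10+4 layout): DataShardsCount=10 and ParityShardsCount=4,
         so TotalShardsCount=14 and physical usage is roughly 14/10 = 1.4x the logical size. -->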

    <!-- Volumes Table -->
    <div class="table-responsive">
        <table class="table table-striped table-hover" id="ecVolumesTable">
            <thead>
                <tr>
                    <th>
                        <a href="#" onclick="sortBy('volume_id')" class="text-dark text-decoration-none">
                            Volume ID
                            if data.SortBy == "volume_id" {
                                if data.SortOrder == "asc" {
                                    <i class="fas fa-sort-up ms-1"></i>
                                } else {
                                    <i class="fas fa-sort-down ms-1"></i>
                                }
                            } else {
                                <i class="fas fa-sort ms-1 text-muted"></i>
                            }
                        </a>
                    </th>
                    if data.ShowCollectionColumn {
                        <th>
                            <a href="#" onclick="sortBy('collection')" class="text-dark text-decoration-none">
                                Collection
                                if data.SortBy == "collection" {
                                    if data.SortOrder == "asc" {
                                        <i class="fas fa-sort-up ms-1"></i>
                                    } else {
                                        <i class="fas fa-sort-down ms-1"></i>
                                    }
                                } else {
                                    <i class="fas fa-sort ms-1 text-muted"></i>
                                }
                            </a>
                        </th>
                    }
                    <th>
                        <a href="#" onclick="sortBy('total_shards')" class="text-dark text-decoration-none">
                            Shard Count
                            if data.SortBy == "total_shards" {
                                if data.SortOrder == "asc" {
                                    <i class="fas fa-sort-up ms-1"></i>
                                } else {
                                    <i class="fas fa-sort-down ms-1"></i>
                                }
                            } else {
                                <i class="fas fa-sort ms-1 text-muted"></i>
                            }
                        </a>
                    </th>
                    <th class="text-dark">Shard Size</th>
                    <th class="text-dark">Shard Locations</th>
                    <th>
                        <a href="#" onclick="sortBy('completeness')" class="text-dark text-decoration-none">
                            Status
                            if data.SortBy == "completeness" {
                                if data.SortOrder == "asc" {
                                    <i class="fas fa-sort-up ms-1"></i>
                                } else {
                                    <i class="fas fa-sort-down ms-1"></i>
                                }
                            } else {
                                <i class="fas fa-sort ms-1 text-muted"></i>
                            }
                        </a>
                    </th>
                    if data.ShowDataCenterColumn {
                        <th class="text-dark">Data Centers</th>
                    }
                    <th class="text-dark">Actions</th>
                </tr>
            </thead>
            <tbody>
                for _, volume := range data.EcVolumes {
                    <tr>
                        <td>
                            <span class="fw-bold">{fmt.Sprintf("%d", volume.VolumeID)}</span>
                        </td>
                        if data.ShowCollectionColumn {
                            <td>
                                if volume.Collection != "" {
                                    <a href={ templ.URL(fmt.Sprintf("/storage/ec-shards?collection=%s", volume.Collection)) } class="text-decoration-none">
                                        <span class="badge bg-info text-white">{volume.Collection}</span>
                                    </a>
                                } else {
                                    <a href={ templ.URL("/storage/ec-shards?collection=default") } class="text-decoration-none">
                                        <span class="badge bg-secondary text-white">default</span>
                                    </a>
                                }
                            </td>
                        }
                        <td>
                            <span class="badge bg-primary">{fmt.Sprintf("%d/%d", volume.TotalShards, erasure_coding.TotalShardsCount)}</span>
                        </td>
                        <td>
                            @displayShardSizes(volume.ShardSizes)
                        </td>
                        <td>
                            @displayVolumeDistribution(volume)
                        </td>
                        <td>
                            @displayEcVolumeStatus(volume)
                        </td>
                        if data.ShowDataCenterColumn {
                            <td>
                                for i, dc := range volume.DataCenters {
                                    if i > 0 {
                                        <span>, </span>
                                    }
                                    <span class="badge bg-primary text-white">{dc}</span>
                                }
                            </td>
                        }
                        <td>
                            <div class="btn-group" role="group">
                                <button type="button" class="btn btn-sm btn-outline-primary"
                                    onclick="showVolumeDetails(event)"
                                    data-volume-id={ fmt.Sprintf("%d", volume.VolumeID) }
                                    title="View EC volume details">
                                    <i class="fas fa-info-circle"></i>
                                </button>
                                if !volume.IsComplete {
                                    <button type="button" class="btn btn-sm btn-outline-warning"
                                        onclick="repairVolume(event)"
                                        data-volume-id={ fmt.Sprintf("%d", volume.VolumeID) }
                                        title="Repair missing shards">
                                        <i class="fas fa-wrench"></i>
                                    </button>
                                }
                            </div>
                        </td>
                    </tr>
                }
            </tbody>
        </table>
    </div>

    <!-- Pagination -->
    if data.TotalPages > 1 {
        <nav aria-label="EC Volumes pagination">
            <ul class="pagination justify-content-center">
                if data.Page > 1 {
                    <li class="page-item">
                        <a class="page-link" href="#" onclick="goToPage(event)" data-page="1">
                            <i class="fas fa-angle-double-left"></i>
                        </a>
                    </li>
                    <li class="page-item">
                        <a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.Page-1) }>
                            <i class="fas fa-chevron-left"></i>
                        </a>
                    </li>
                }

                <!-- First page -->
                if data.Page > 3 {
                    <li class="page-item">
                        <a class="page-link" href="#" onclick="goToPage(event)" data-page="1">1</a>
                    </li>
                    if data.Page > 4 {
                        <li class="page-item disabled">
                            <span class="page-link">...</span>
                        </li>
                    }
                }

                <!-- Current page neighbors -->
                if data.Page > 1 {
                    <li class="page-item">
                        <a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.Page-1) }>{fmt.Sprintf("%d", data.Page-1)}</a>
                    </li>
                }

                <li class="page-item active">
                    <span class="page-link">{fmt.Sprintf("%d", data.Page)}</span>
                </li>

                if data.Page < data.TotalPages {
                    <li class="page-item">
                        <a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.Page+1) }>{fmt.Sprintf("%d", data.Page+1)}</a>
                    </li>
                }

                <!-- Last page -->
                if data.Page < data.TotalPages-2 {
                    if data.Page < data.TotalPages-3 {
                        <li class="page-item disabled">
                            <span class="page-link">...</span>
                        </li>
                    }
                    <li class="page-item">
                        <a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.TotalPages) }>{fmt.Sprintf("%d", data.TotalPages)}</a>
                    </li>
                }

                if data.Page < data.TotalPages {
                    <li class="page-item">
                        <a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.Page+1) }>
                            <i class="fas fa-chevron-right"></i>
                        </a>
                    </li>
                    <li class="page-item">
                        <a class="page-link" href="#" onclick="goToPage(event)" data-page={ fmt.Sprintf("%d", data.TotalPages) }>
                            <i class="fas fa-angle-double-right"></i>
                        </a>
                    </li>
                }
            </ul>
        </nav>
    }

    <!-- JavaScript -->
    <script>
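        // Toggle the sort order for the given column and reload the page with the updated sort parameters.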
        function sortBy(field) {
            const currentSort = new URLSearchParams(window.location.search).get('sort_by');
            const currentOrder = new URLSearchParams(window.location.search).get('sort_order') || 'asc';

            let newOrder = 'asc';
            if (currentSort === field && currentOrder === 'asc') {
                newOrder = 'desc';
            }

            updateUrl({
                sort_by: field,
                sort_order: newOrder,
                page: 1
            });
        }

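        // Navigate to the page stored in the clicked pagination link's data-page attribute.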
        function goToPage(event) {
            event.preventDefault();
            const page = event.target.closest('a').getAttribute('data-page');
            updateUrl({ page: page });
        }

        function changePageSize(newPageSize) {
            updateUrl({ page_size: newPageSize, page: 1 });
        }

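        // Merge the given parameters into the current URL's query string and navigate to it; null values remove the parameter.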
        function updateUrl(params) {
            const url = new URL(window.location);
            Object.keys(params).forEach(key => {
                if (params[key] != null) {
                    url.searchParams.set(key, params[key]);
                } else {
                    url.searchParams.delete(key);
                }
            });
            window.location.href = url.toString();
        }

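        // Open the details page for the EC volume identified by the clicked button's data-volume-id.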
        function showVolumeDetails(event) {
            const volumeId = event.target.closest('button').getAttribute('data-volume-id');
            window.location.href = `/storage/ec-volumes/${volumeId}`;
        }

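        // Ask for confirmation, POST to the repair endpoint, and report the result via the Bootstrap modal helpers (showConfirm/showAlert from modal-alerts.js).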
        function repairVolume(event) {
            const volumeId = event.target.closest('button').getAttribute('data-volume-id');
            showConfirm(`Are you sure you want to repair missing shards for volume ${volumeId}?`, function() {
                fetch(`/api/storage/ec-volumes/${volumeId}/repair`, {
                    method: 'POST',
                    headers: {
                        'Content-Type': 'application/json',
                    }
                })
                .then(response => {
                    if (!response.ok) {
                        throw new Error(`HTTP ${response.status}: ${response.statusText}`);
                    }
                    return response.json();
                })
                .then(data => {
                    if (data && data.success) {
                        showAlert('Repair initiated successfully', 'success');
                        location.reload();
                    } else {
                        showAlert('Failed to initiate repair: ' + (data && data.error ? data.error : 'Unknown error'), 'error');
                    }
                })
                .catch(error => {
                    showAlert('Error: ' + error.message, 'error');
                });
            });
        }
    </script>
}

// displayShardSizes renders shard sizes in a compact format
templ displayShardSizes(shardSizes map[int]int64) {
    if len(shardSizes) == 0 {
        <span class="text-muted">-</span>
    } else {
        @renderShardSizesContent(shardSizes)
    }
}

// renderShardSizesContent renders the content of shard sizes
templ renderShardSizesContent(shardSizes map[int]int64) {
    if areAllShardSizesSame(shardSizes) {
        // All shards have the same size, show just the common size
        <span class="text-success">{getCommonShardSize(shardSizes)}</span>
    } else {
        // Shards have different sizes, show individual sizes
        <div class="shard-sizes" style="max-width: 300px;">
            { formatIndividualShardSizes(shardSizes) }
        </div>
    }
}

// ServerShardInfo represents a server and its shard ID ranges
type ServerShardInfo struct {
    Server      string
    ShardRanges string
}

// groupShardsByServer groups shards by server and formats ranges
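// Example: shardLocations {0: "serverA", 1: "serverA", 2: "serverB"} yields
// [{serverA "0-1"}, {serverB "2"}], sorted by server name.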
func groupShardsByServer(shardLocations map[int]string) []ServerShardInfo {
    if len(shardLocations) == 0 {
        return []ServerShardInfo{}
    }

    // Group shards by server
    serverShards := make(map[string][]int)
    for shardId, server := range shardLocations {
        serverShards[server] = append(serverShards[server], shardId)
    }

    var serverInfos []ServerShardInfo
    for server, shards := range serverShards {
        // Sort shards for each server
        sort.Ints(shards)

        // Format shard ranges compactly
        shardRanges := formatShardRanges(shards)
        serverInfos = append(serverInfos, ServerShardInfo{
            Server:      server,
            ShardRanges: shardRanges,
        })
    }

    // Sort by server name
    sort.Slice(serverInfos, func(i, j int) bool {
        return serverInfos[i].Server < serverInfos[j].Server
    })

    return serverInfos
}

// Helper function to format shard ranges compactly (e.g., "0-3,7,9-11")
func formatShardRanges(shards []int) string {
    if len(shards) == 0 {
        return ""
    }

    var ranges []string
    start := shards[0]
    end := shards[0]

    for i := 1; i < len(shards); i++ {
        if shards[i] == end+1 {
            end = shards[i]
        } else {
            if start == end {
                ranges = append(ranges, fmt.Sprintf("%d", start))
            } else {
                ranges = append(ranges, fmt.Sprintf("%d-%d", start, end))
            }
            start = shards[i]
            end = shards[i]
        }
    }

    // Add the last range
    if start == end {
        ranges = append(ranges, fmt.Sprintf("%d", start))
    } else {
        ranges = append(ranges, fmt.Sprintf("%d-%d", start, end))
    }

    return strings.Join(ranges, ",")
}

// Helper function to convert bytes to human readable format
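// Example: bytesToHumanReadable(1536) returns "1.5KB"; values below 1024 are returned as plain bytes, e.g. "512B".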
func bytesToHumanReadable(bytes int64) string {
    const unit = 1024
    if bytes < unit {
        return fmt.Sprintf("%dB", bytes)
    }
    div, exp := int64(unit), 0
    for n := bytes / unit; n >= unit; n /= unit {
        div *= unit
        exp++
    }
    return fmt.Sprintf("%.1f%cB", float64(bytes)/float64(div), "KMGTPE"[exp])
}

// Helper function to format missing shards
func formatMissingShards(missingShards []int) string {
    if len(missingShards) == 0 {
        return ""
    }

    var shardStrs []string
    for _, shard := range missingShards {
        shardStrs = append(shardStrs, fmt.Sprintf("%d", shard))
    }

    return strings.Join(shardStrs, ", ")
}

// Helper function to check if all shard sizes are the same
func areAllShardSizesSame(shardSizes map[int]int64) bool {
    if len(shardSizes) <= 1 {
        return true
    }

    var firstSize int64 = -1
    for _, size := range shardSizes {
        if firstSize == -1 {
            firstSize = size
        } else if size != firstSize {
            return false
        }
    }
    return true
}

// Helper function to get the common shard size (when all shards are the same size)
func getCommonShardSize(shardSizes map[int]int64) string {
    for _, size := range shardSizes {
        return bytesToHumanReadable(size)
    }
    return "-"
}

// Helper function to format individual shard sizes
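// Example: {0: 1048576, 1: 1048576, 5: 524288} renders as "0,1: 1.0MB | 5: 512.0KB"
// (the group order follows Go's randomized map iteration, so it is not deterministic).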
func formatIndividualShardSizes(shardSizes map[int]int64) string {
    if len(shardSizes) == 0 {
        return ""
    }

    // Group shards by size for more compact display
    sizeGroups := make(map[int64][]int)
    for shardId, size := range shardSizes {
        sizeGroups[size] = append(sizeGroups[size], shardId)
    }

    // If there are only a few (up to three) distinct sizes, show them grouped
    if len(sizeGroups) <= 3 {
        var groupStrs []string
        for size, shardIds := range sizeGroups {
            // Sort shard IDs
            sort.Ints(shardIds)

            var idRanges []string
            if len(shardIds) <= erasure_coding.ParityShardsCount {
                // Show individual IDs if few shards
                for _, id := range shardIds {
                    idRanges = append(idRanges, fmt.Sprintf("%d", id))
                }
            } else {
                // Show count if many shards
                idRanges = append(idRanges, fmt.Sprintf("%d shards", len(shardIds)))
            }
            groupStrs = append(groupStrs, fmt.Sprintf("%s: %s", strings.Join(idRanges, ","), bytesToHumanReadable(size)))
        }
        return strings.Join(groupStrs, " | ")
    }

    // If there are too many different sizes, show a summary instead
    return fmt.Sprintf("%d different sizes", len(sizeGroups))
}

// displayVolumeDistribution shows the distribution summary for a volume
templ displayVolumeDistribution(volume dash.EcVolumeWithShards) {
    <div class="small">
        <i class="fas fa-sitemap me-1"></i>
        { calculateVolumeDistributionSummary(volume) }
    </div>
}

// displayEcVolumeStatus shows an improved status display for EC volumes.
// Status thresholds are based on how many shards are missing:
// - Critical: more than DataShardsCount shards missing
// - Degraded: more than half of DataShardsCount shards missing
// - Incomplete: more than half of ParityShardsCount shards missing
// - Minor Issues: only a few shards missing
// Note: a volume can only be rebuilt while at least DataShardsCount shards remain,
// i.e. at most ParityShardsCount shards may be missing.
templ displayEcVolumeStatus(volume dash.EcVolumeWithShards) {
    if volume.IsComplete {
        <span class="badge bg-success"><i class="fas fa-check me-1"></i>Complete</span>
    } else {
        if len(volume.MissingShards) > erasure_coding.DataShardsCount {
            // Unrecoverable: more shards missing than EC can reconstruct
            <span class="badge bg-danger"><i class="fas fa-skull me-1"></i>Critical ({fmt.Sprintf("%d", len(volume.MissingShards))} missing)</span>
        } else if len(volume.MissingShards) > (erasure_coding.DataShardsCount/2) {
            // Severely degraded: most data shards are missing
            <span class="badge bg-warning"><i class="fas fa-exclamation-triangle me-1"></i>Degraded ({fmt.Sprintf("%d", len(volume.MissingShards))} missing)</span>
        } else if len(volume.MissingShards) > (erasure_coding.ParityShardsCount/2) {
            // Redundancy reduced; repair recommended
            <span class="badge bg-warning"><i class="fas fa-info-circle me-1"></i>Incomplete ({fmt.Sprintf("%d", len(volume.MissingShards))} missing)</span>
        } else {
            // Minor: few shards missing, still recoverable with a good margin
            <span class="badge bg-info"><i class="fas fa-info-circle me-1"></i>Minor Issues ({fmt.Sprintf("%d", len(volume.MissingShards))} missing)</span>
        }
    }
}

// calculateVolumeDistributionSummary calculates and formats the distribution summary for a volume
func calculateVolumeDistributionSummary(volume dash.EcVolumeWithShards) string {
    dataCenters := make(map[string]bool)
    racks := make(map[string]bool)
    servers := make(map[string]bool)

    // Count unique servers from shard locations
    for _, server := range volume.ShardLocations {
        servers[server] = true
    }

    // Use the DataCenters field if available
    for _, dc := range volume.DataCenters {
        dataCenters[dc] = true
    }

    // Use the Servers field if available
    for _, server := range volume.Servers {
        servers[server] = true
    }

    // Use the Racks field if available
    for _, rack := range volume.Racks {
        racks[rack] = true
    }

    // If we don't have rack information, estimate it from servers as a fallback
    rackCount := len(racks)
    if rackCount == 0 {
        // Fallback estimation - assume each server might be in a different rack
        rackCount = len(servers)
        if len(dataCenters) > 0 {
            // More conservative estimate if we have DC info
            rackCount = (len(servers) + len(dataCenters) - 1) / len(dataCenters)
            if rackCount == 0 {
                rackCount = 1
            }
        }
    }

    return fmt.Sprintf("%d DCs, %d racks, %d servers", len(dataCenters), rackCount, len(servers))
}