master
5 Commits
076d504044
fix(admin): reduce memory usage and verbose logging for large clusters (#8927)

* fix(admin): reduce memory usage and verbose logging for large clusters (#8919)

  The admin server used excessive memory and produced thousands of log lines on clusters with many volumes (e.g., 33k volumes). Three root causes:

  1. Scanner duplicated all volume metrics: getVolumeHealthMetrics() created VolumeHealthMetrics objects, then convertToTaskMetrics() copied them all into identical types.VolumeHealthMetrics. Now uses the task-system type directly, eliminating the duplicate allocation and removing convertToTaskMetrics.
  2. All previous task states loaded at startup: LoadTasksFromPersistence read and deserialized every .pb file from disk, logging each one. With thousands of balance tasks persisted, this caused massive startup I/O, memory usage, and log noise (including an unguarded DEBUG glog.Infof per task). Now starts with an empty queue — the scanner re-detects current needs from live cluster state. Terminal tasks are purged from memory and disk when new scan results arrive.
  3. Verbose per-volume/per-node logging: V(2) and V(3) logs produced thousands of lines per scan. Per-volume logs bumped to V(4); per-node/rack/disk logs bumped to V(3). The topology summary now logs counts instead of full node ID arrays.

  Also removes the lastTopologyInfo field from MaintenanceScanner — the raw protobuf topology is returned as a local value and not retained between 30-minute scans.

* fix(admin): delete stale task files at startup, add DeleteAllTaskStates

  Old task .pb files from previous runs were left on disk. The periodic CleanupCompletedTasks still loads all files to find completed ones — the same expensive 4GB path from the pprof profile. Now at startup, DeleteAllTaskStates removes all .pb files by scanning the directory without reading or deserializing them (see the sketch after this entry). The scanner will re-detect any tasks still needed from live cluster state.

* fix(admin): don't persist terminal tasks to disk

  CompleteTask was saving failed/completed tasks to disk, where they would accumulate. The periodic cleanup only triggered for completed tasks, not failed ones. Now terminal tasks are deleted from disk immediately and kept only in memory for the current session's UI.

* fix(admin): cap in-memory tasks to 100 per job type

  Without a limit, the task map grows unbounded — balance could create thousands of pending tasks for a cluster with many imbalanced volumes. Now AddTask rejects new tasks when a job type already has 100 in the queue. The scanner will re-detect skipped volumes on the next scan.

* fix(admin): address PR review - memory-only purge, active-only capacity

  - purgeTerminalTasks now only cleans the in-memory map (terminal tasks are already deleted from disk by CompleteTask)
  - The per-type capacity limit counts only active tasks (pending/assigned/in_progress), not terminal ones
  - When at capacity, purge terminal tasks first before rejecting

* fix(admin): fix orphaned comment, add TaskStatusCancelled to terminal switch

  - Move the hasQueuedOrActiveTaskForVolume comment to its function definition
  - Add TaskStatusCancelled to the terminal-state switch in CompleteTask so cancelled task files are deleted from disk
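Below is a minimal sketch of the DeleteAllTaskStates idea described above, assuming a flat directory of per-task `.pb` files; the function name, directory path, and error handling are illustrative, not the actual SeaweedFS code. The point is that stale task state can be cleared by file name alone, without reading or unmarshalling any protobuf payloads:

```go
package main

import (
	"os"
	"path/filepath"
	"strings"
)

// deleteAllTaskStates removes every *.pb file directly under dir. It never
// reads or unmarshals file contents, so memory stays flat even with tens of
// thousands of persisted tasks.
func deleteAllTaskStates(dir string) (int, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		if os.IsNotExist(err) {
			return 0, nil // nothing persisted yet
		}
		return 0, err
	}
	removed := 0
	for _, entry := range entries {
		if entry.IsDir() || !strings.HasSuffix(entry.Name(), ".pb") {
			continue
		}
		if err := os.Remove(filepath.Join(dir, entry.Name())); err != nil {
			return removed, err
		}
		removed++
	}
	return removed, nil
}

func main() {
	// Hypothetical task-state directory; after the wipe, the scanner re-detects
	// any still-needed tasks from live cluster state.
	if _, err := deleteAllTaskStates("/var/lib/weed-admin/tasks"); err != nil {
		panic(err)
	}
}
```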
c19f88eef1
fix: resolve ServerAddress to NodeId in maintenance task sync (#8508)

* fix: maintenance task topology lookup, retry, and stale task cleanup

  1. Strip the gRPC port from ServerAddress in SyncTask using ToHttpAddress() so task targets match topology disk keys (NodeId format); a sketch of the idea follows this entry.
  2. Skip the capacity check when the topology has no disks yet (a startup race where tasks are loaded from persistence before the first topology update).
  3. Don't retry permanent errors like "volume not found" - these will never succeed on retry.
  4. Cancel all pending tasks for each task type before re-detection, ensuring stale proposals from previous cycles are cleaned up. This prevents stale tasks from blocking new detection and from repeatedly failing.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* logs

  Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>

* less lock scope

  Co-Authored-By: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
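For illustration, here is a sketch of the address-normalization idea behind the first fix, assuming the common SeaweedFS convention that a node's gRPC port is its HTTP port plus 10000; the helper name and the offset handling are assumptions, while the actual fix simply calls `ToHttpAddress()` on the ServerAddress:

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// Assumed convention: gRPC port = HTTP port + 10000.
const grpcPortOffset = 10000

// normalizeToNodeId converts a gRPC-style "host:port" into the HTTP-style
// address used as the NodeId key in the topology's disk map, so task targets
// match topology entries.
func normalizeToNodeId(serverAddress string) (string, error) {
	host, portStr, err := net.SplitHostPort(serverAddress)
	if err != nil {
		return "", err
	}
	port, err := strconv.Atoi(portStr)
	if err != nil {
		return "", err
	}
	if port > grpcPortOffset {
		port -= grpcPortOffset // strip the gRPC offset
	}
	return net.JoinHostPort(host, strconv.Itoa(port)), nil
}

func main() {
	nodeId, _ := normalizeToNodeId("10.0.0.5:18080")
	fmt.Println(nodeId) // 10.0.0.5:8080, the form used as a topology disk key
}
```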
5b86d33c3c
Fix worker reconnection race condition causing context canceled errors (#7825)

* Fix worker reconnection race condition causing context canceled errors

  Fixes #7824

  This commit fixes critical connection-stability issues between the admin server and workers that manifested as rapid reconnection cycles with 'context canceled' errors, particularly after 24+ hours of operation in containerized environments.

  Root Cause:
  A race condition where TWO goroutines were calling stream.Recv() on the same gRPC bidirectional stream concurrently:
  1. sendRegistrationSync() started a goroutine that calls stream.Recv()
  2. handleIncoming() also calls stream.Recv() in a loop

  Per the gRPC specification, only ONE goroutine may call Recv() on a stream at a time. Concurrent Recv() calls cause undefined behavior, manifesting as 'context canceled' errors and stream corruption.

  The race occurred during worker reconnection:
  - Sometimes the sendRegistrationSync goroutine read the registration response first (success)
  - Sometimes handleIncoming read it first, causing sendRegistrationSync to time out
  - This left the stream in an inconsistent state, triggering a 'context canceled' error
  - The error triggered rapid reconnection attempts, creating a reconnection storm

  Why it happened after 24 hours: container orchestration systems (Docker Swarm/Kubernetes) periodically restart pods. Over time, workers reconnect multiple times, and each reconnection had a chance of hitting the race condition. Eventually the race manifested and caused the connection storm.

  Changes:
  weed/worker/client.go:
  - Start the handleIncoming and handleOutgoing goroutines BEFORE sending registration
  - Use sendRegistration() instead of sendRegistrationSync()
  - Ensures only ONE goroutine (handleIncoming) calls stream.Recv()
  - Eliminates the race condition entirely (see the sketch after this entry)

  weed/admin/dash/worker_grpc_server.go:
  - Clean up the old connection when a worker reconnects with the same ID
  - Cancel the old connection's context to stop its goroutines
  - Prevents resource leaks and stale connection accumulation

  Impact:
  Before: random 'context canceled' errors during reconnection, rapid reconnection cycles, resource leaks, manual restart required to recover.
  After: reliable reconnection, a single Recv() goroutine, proper cleanup, stable operation over 24+ hours.

  Testing:
  Build verified successfully with no compilation errors.

  How to reproduce the bug:
  1. Start the admin server and a worker
  2. Restart the admin server (simulates container recreation)
  3. The worker reconnects
  4. The race condition may manifest, causing a 'context canceled' error
  5. Observe rapid reconnection cycles in the logs

  The fix is backward compatible and requires no configuration changes.
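The following sketch shows the single-Recv-goroutine pattern the fix adopts; it is not the actual weed/worker/client.go code, and the type, field, and method names are assumptions for illustration. Registration is written through the normal outgoing path, and its response is read by the same loop that reads every other admin message, so only one goroutine ever calls Recv():

```go
package worker

import "context"

// bidiStream abstracts the gRPC bidirectional stream between worker and admin.
type bidiStream interface {
	Send(*Message) error
	Recv() (*Message, error)
}

type Message struct {
	Registration *Registration
	// other message kinds elided
}

type Registration struct{ WorkerID string }

type client struct {
	outgoing chan *Message // created by the client's constructor (not shown)
}

func (c *client) run(ctx context.Context, stream bidiStream) {
	// Start both loops BEFORE sending registration, so handleIncoming is the
	// only caller of Recv() for the lifetime of the stream.
	go c.handleOutgoing(ctx, stream)
	go c.handleIncoming(stream)

	// Registration goes through the normal send path; its response will be
	// observed by handleIncoming rather than by a second Recv() goroutine.
	c.outgoing <- &Message{Registration: &Registration{WorkerID: "worker-1"}}
}

func (c *client) handleOutgoing(ctx context.Context, stream bidiStream) {
	for {
		select {
		case <-ctx.Done():
			return
		case msg, ok := <-c.outgoing:
			if !ok {
				return
			}
			if err := stream.Send(msg); err != nil {
				return
			}
		}
	}
}

func (c *client) handleIncoming(stream bidiStream) {
	for {
		msg, err := stream.Recv() // sole Recv() caller; no concurrent-Recv race
		if err != nil {
			return
		}
		_ = msg // dispatch registration responses and admin messages here
	}
}
```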
* Add MaxConnectionAge to gRPC server for Docker Swarm DNS handling
  - Configure MaxConnectionAge and MaxConnectionAgeGrace for the gRPC server (see the keepalive sketch after this list)
  - Expand error detection in shouldInvalidateConnection for better cache invalidation
  - Add connection lifecycle logging for debugging

* Add topology validation and nil-safety checks
  - Add validation guards in UpdateTopology to prevent invalid updates
  - Add nil-safety checks in rebuildIndexes
  - Add GetDiskCount method for diagnostic purposes

* Fix worker registration race condition
  - Reorder goroutine startup in WorkerStream to prevent race conditions
  - Add defensive cleanup in unregisterWorker with panic-safe channel closing

* Add comprehensive topology update logging
  - Enhance UpdateTopologyInfo with detailed logging of datacenter/node/disk counts
  - Add metrics logging for topology changes

* Add periodic diagnostic status logging
  - Implement topologyStatusLoop running every 5 minutes
  - Add a logTopologyStatus function reporting system metrics
  - Run as a background goroutine in the maintenance manager

* Enhance master client connection logging
  - Add connection timing logs in tryConnectToMaster
  - Add reconnection attempt counting in KeepConnectedToMaster
  - Improve diagnostic visibility for connection issues

* Remove unused sendRegistrationSync function
  - The function is no longer called after switching to the asynchronous sendRegistration
  - It contained the problematic concurrent stream.Recv() pattern that caused race conditions
  - Cleanup as suggested in PR review

* Clarify comment for channel closing during disconnection
  - Improve the comment to explain why channels are closed and their effect
  - Make the code more self-documenting as suggested in PR review

* Address code review feedback: refactor and improvements
  - Extract topology counting logic into a shared helper function CountTopologyResources() to eliminate duplication between topology_management.go and maintenance_integration.go
  - Use gRPC status codes for more robust error detection in shouldInvalidateConnection(), falling back to string matching for transport-level errors
  - Add a recover wrapper for channel-close consistency in cleanupStaleConnections() to match the unregisterWorker() pattern

* Update grpc_client_server.go

* Fix data race on lastSeen field access
  - Add mutex protection around conn.lastSeen = time.Now() in the WorkerStream method
  - Ensures thread-safe access consistent with cleanupStaleConnections

* Fix goroutine leaks in worker reconnection logic
  - Close streamExit in reconnect() before creating a new connection
  - Close streamExit in attemptConnection() when sendRegistration fails
  - Prevents orphaned handleOutgoing/handleIncoming goroutines from previous connections
  - Ensures proper cleanup of goroutines competing for the shared outgoing channel

* Minor cleanup improvements for consistency and clarity
  - Remove redundant string checks in shouldInvalidateConnection that overlap with gRPC status codes
  - Add a recover block to the Stop() method for consistency with other channel-close operations
  - Maintains the valuable DNS- and transport-specific error detection while eliminating redundancy

* Improve topology update error handling
  - Return descriptive errors instead of silently preserving topology for invalid updates
  - Change the nil topologyInfo case to return 'rejected invalid topology update: nil topologyInfo'
  - Change the empty DataCenterInfos case to return 'rejected invalid topology update: empty DataCenterInfos (had X nodes, Y disks)'
  - Keep the existing glog.Warningf calls but append error details to the logs before returning errors
  - Allows callers to distinguish rejected updates and handle them appropriately

* Refactor safe channel closing into helper method
  - Add a safeCloseOutgoingChannel helper method to eliminate code duplication
  - Replace the repeated recover blocks in Stop, unregisterWorker, and cleanupStaleConnections
  - Improves maintainability and ensures consistent error handling across all channel-close operations
  - Maintains the same panic-recovery behavior with contextual source identification

* Make connection invalidation string matching case-insensitive
  - Convert the error string to lowercase once for all strings.Contains checks
  - Improves robustness by catching error-message variations from different sources
  - Eliminates the need for separate 'DNS resolution' and 'dns' checks
  - Maintains the same error-detection coverage with better reliability

* Clean up warning logs in UpdateTopology to avoid duplicating error text
  - Remove duplicated error phrases from glog.Warningf messages
  - Keep concise contextual warnings that don't repeat the fmt.Errorf content
  - Maintain the same error returns for backward compatibility

* Add robust validation to prevent topology wipeout during master restart
  - Reject topology updates with 0 nodes when the current topology has nodes
  - Prevents a transient empty topology from overwriting valid state
  - Improves resilience during master restart scenarios
  - Maintains backward compatibility for legitimate empty topology updates
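Here is a minimal sketch of the MaxConnectionAge / MaxConnectionAgeGrace change, using the standard google.golang.org/grpc keepalive server parameters; the specific durations are assumptions for illustration rather than the values used in SeaweedFS:

```go
package main

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func newGrpcServer() *grpc.Server {
	return grpc.NewServer(
		grpc.KeepaliveParams(keepalive.ServerParameters{
			// Force connections to be re-established periodically so clients
			// re-resolve DNS, which helps when Docker Swarm or Kubernetes
			// reschedules the admin server behind a stable service name.
			MaxConnectionAge:      10 * time.Minute, // assumed value
			MaxConnectionAgeGrace: 30 * time.Second, // assumed value
		}),
	)
}

func main() {
	_ = newGrpcServer()
}
```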
25bbf4c3d4
Admin UI: Fetch task logs (#7114)

* show task details
* loading tasks
* task UI works
* generic rendering
* rendering the export link
* removing placementConflicts from task parameters
* remove TaskSourceLocation
* remove "Server ID" column
* rendering balance task source
* sources and targets
* fix ec task generation
* move info
* render timeline
* simplified worker id
* simplify
* read task logs from worker
* isValidTaskID (see the validation sketch after this entry)
* address comments
* Update weed/worker/tasks/balance/execution.go (Co-authored-by: Copilot)
* Update weed/worker/tasks/erasure_coding/ec_task.go (Co-authored-by: Copilot)
* Update weed/worker/tasks/task_log_handler.go (Co-authored-by: Copilot)
* fix shard ids
* plan distributing shard id
* rendering planned shards in task details
* remove Conflicts
* worker logs correctly
* pass in dc and rack
* task logging
* Update weed/admin/maintenance/maintenance_queue.go (Co-authored-by: gemini-code-assist[bot])
* display log details
* logs have fields now
* sort field keys
* fix link
* fix collection filtering
* avoid hard-coded ec shard counts

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
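The sketch below shows what a task-ID check like the isValidTaskID mentioned above might look like; the pattern and length limit are assumptions for illustration, not the actual implementation. Validating the ID before it is used to build a log-file path keeps request input from escaping the worker's log directory:

```go
package main

import (
	"fmt"
	"regexp"
)

// taskIDPattern allows only the characters a generated task ID is expected to
// contain, ruling out path separators and "..".
var taskIDPattern = regexp.MustCompile(`^[A-Za-z0-9_-]{1,64}$`)

func isValidTaskID(id string) bool {
	return taskIDPattern.MatchString(id)
}

func main() {
	fmt.Println(isValidTaskID("balance-12345"))    // true
	fmt.Println(isValidTaskID("../../etc/passwd")) // false
}
```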
0ecb466eda
Admin: refactoring active topology (#7073)

* refactoring
* add ec shard size
* address comments
* passing task id

  There seems to be a disconnect between the pending tasks created in ActiveTopology and the TaskDetectionResult returned by this function. A taskID is generated locally and used to create pending tasks via AddPendingECShardTask, but this taskID is not stored in the TaskDetectionResult or passed along in any way. This makes it impossible for the worker that eventually executes the task to know which pending task in ActiveTopology it corresponds to. Without the correct taskID, the worker cannot call AssignTask or CompleteTask on the master, breaking the entire task lifecycle and the capacity-management feature. A potential solution is to add a TaskID field to TaskDetectionResult and worker_pb.TaskParams, ensuring the ID is propagated from detection to execution.

* 1 source, multiple destinations
* task supports multiple sources and destinations
* ec needs to clean up previous shards
* use erasure coding constants
* getPlanningCapacityUnsafe and getEffectiveAvailableCapacityUnsafe should return StorageSlotChange for calculation
* use CanAccommodate to calculate
* remove dead code
* address comments
* fix mutex copying in protobuf structs
* use constants
* fix estimatedSize

  The calculation for estimatedSize only considers source.EstimatedSize and dest.StorageChange, but omits dest.EstimatedSize. The TaskDestination struct has an EstimatedSize field, which seems to be ignored here. This could lead to an incorrect estimation of the total size of data involved in tasks on a disk. The loop should probably also include estimatedSize += dest.EstimatedSize (a sketch follows this entry).

* at.assignTaskToDisk(task)
* refactoring
* Update weed/admin/topology/internal.go (Co-authored-by: gemini-code-assist[bot])
* fail fast
* fix compilation
* Update weed/worker/tasks/erasure_coding/detection.go (Co-authored-by: gemini-code-assist[bot])
* indexes for volume and shard locations
* dedup with ToVolumeSlots
* return an additional boolean to indicate success, or an error
* Update abstract_sql_store.go
* fix
* Update weed/worker/tasks/erasure_coding/detection.go (Co-authored-by: gemini-code-assist[bot])
* Update weed/admin/topology/task_management.go (Co-authored-by: gemini-code-assist[bot])
* faster findVolumeDisk
* Update weed/worker/tasks/erasure_coding/detection.go (Co-authored-by: Copilot)
* Update weed/admin/topology/storage_slot_test.go (Co-authored-by: Copilot)
* refactor
* simplify
* remove unused GetDiskStorageImpact function
* refactor
* add comments
* Update weed/admin/topology/storage_impact.go (Co-authored-by: gemini-code-assist[bot])
* Update weed/admin/topology/storage_slot_test.go (Co-authored-by: gemini-code-assist[bot])
* Update storage_impact.go
* AddPendingTask

  The unified AddPendingTask function now serves as the single entry point for all task creation, consolidating the previously separate functions while maintaining full functionality and improving code organization.

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
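A minimal sketch of the "fix estimatedSize" item, using simplified stand-in types rather than the actual weed/admin/topology structs: the per-disk total should include each destination's own EstimatedSize in addition to the source size and the destination storage change:

```go
package main

import "fmt"

type TaskSource struct {
	EstimatedSize int64
}

type TaskDestination struct {
	EstimatedSize int64
	StorageChange int64 // simplified stand-in for the destination's slot/byte change
}

type Task struct {
	Sources      []TaskSource
	Destinations []TaskDestination
}

// estimatedSizeOnDisk sums everything a task may place on or move through a disk.
func estimatedSizeOnDisk(t Task) int64 {
	var estimatedSize int64
	for _, src := range t.Sources {
		estimatedSize += src.EstimatedSize
	}
	for _, dest := range t.Destinations {
		estimatedSize += dest.StorageChange
		estimatedSize += dest.EstimatedSize // the previously omitted term the review calls out
	}
	return estimatedSize
}

func main() {
	t := Task{
		Sources:      []TaskSource{{EstimatedSize: 1 << 30}},
		Destinations: []TaskDestination{{EstimatedSize: 1 << 30, StorageChange: 1}},
	}
	fmt.Println(estimatedSizeOnDisk(t))
}
```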