feat: add S3 bucket size and object count metrics (#7776)
* feat: add S3 bucket size and object count metrics

  Adds periodic collection of bucket size metrics:
  - SeaweedFS_s3_bucket_size_bytes: logical size (deduplicated across replicas)
  - SeaweedFS_s3_bucket_physical_size_bytes: physical size (including replicas)
  - SeaweedFS_s3_bucket_object_count: object count (deduplicated)

  Collection runs every 1 minute via a background goroutine that queries the filer Statistics RPC for each bucket's collection.

  Also adds Grafana dashboard panels for:
  - S3 Bucket Size (logical vs physical)
  - S3 Bucket Object Count

* address PR comments: fix bucket size metrics collection

  1. Fix collectCollectionInfoFromMaster to use the master VolumeList API
     - Now properly queries the master for topology info
     - Uses WithMasterClient to get the volume list from the master
     - Correctly calculates logical vs physical size based on replication
  2. Return an error when filerClient is nil to trigger the fallback
     - Changed from 'return nil, nil' to 'return nil, error'
     - Ensures the fallback to filer stats is properly triggered
  3. Implement pagination in listBucketNames
     - Added listBucketPageSize constant (1000)
     - Uses StartFromFileName for pagination
     - Continues fetching until fewer entries than the limit are returned
  4. Handle NewReplicaPlacementFromByte errors and prevent division by zero
     - Check the error return from NewReplicaPlacementFromByte
     - Default to 1 copy if an error occurs
     - Add an explicit check for copyCount == 0

* simplify bucket size metrics: remove filer fallback, align with quota enforcement
  - Remove the fallback to the filer Statistics RPC
  - Use only master topology for collection info (same as s3.bucket.quota.enforce)
  - Update comments to clarify this runs the same collection logic as quota enforcement
  - Simplify code by removing collectBucketSizeFromFilerStats

* use s3a.option.Masters directly instead of querying the filer

* address PR comments: fix dashboard overlaps and improve metrics collection

  Grafana dashboard fixes:
  - Fix overlapping panels 55 and 59 in grafana_seaweedfs.json (moved 59 to y=30)
  - Fix a grid collision in the k8s dashboard (moved panel 72 to y=48)
  - Aggregate bucket metrics with max() by (bucket) for multi-instance S3 gateways

  Go code improvements:
  - Add graceful shutdown support via context cancellation
  - Use a ticker instead of time.Sleep for better shutdown responsiveness
  - Distinguish EOF from actual errors in stream handling

* improve bucket size metrics: multi-master failover and proper error handling
  - The initial delay now respects context cancellation, using select with time.After
  - Use WithOneOfGrpcMasterClients for multi-master failover instead of hardcoding Masters[0]
  - Properly propagate stream errors instead of just logging them (EOF vs real errors)

* improve bucket size metrics: distributed lock and volume ID deduplication
  - Add a distributed lock (LiveLock) so only one S3 instance collects metrics at a time
  - Add an IsLocked() method to LiveLock for checking lock status
  - Fix deduplication: use volume ID tracking instead of dividing by copyCount
    - The previous approach gave wrong results if replicas were missing
    - Now tracks seen volume IDs and counts each volume only once
    - Physical size still includes all replicas for accurate disk usage reporting

* rename lock to s3.leader

* simplify: remove the StartBucketSizeMetricsCollection wrapper function

* fix data race: use atomic operations for the LiveLock.isLocked field
  - Change isLocked from bool to int32
  - Use atomic.LoadInt32/StoreInt32 for all reads and writes
  - Sync the shared isLocked field in the StartLongLivedLock goroutine

* add a nil check for topology info to prevent a panic

* fix bucket metrics: use a Ticker for consistent intervals, fix pagination logic
  - Use time.Ticker instead of time.After for consistent interval execution
  - Fix pagination: count all entries (not just directories) for proper termination
  - Update lastFileName for all entries to prevent pagination issues

* address PR comments: remove a redundant atomic store, propagate context
  - Remove the redundant atomic.StoreInt32 in StartLongLivedLock (AttemptToLock already sets it)
  - Propagate context through metrics collection for proper cancellation on shutdown
    - collectAndUpdateBucketSizeMetrics now accepts ctx
    - collectCollectionInfoFromMaster uses ctx for the VolumeList RPC
    - listBucketNames uses ctx for the ListEntries RPC
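The shutdown-aware collection loop described above (an initial delay interruptible by the context, then a time.Ticker for consistent intervals) follows a standard Go pattern; this is a minimal sketch where runBucketMetricsLoop, collectOnce, and runDemo are illustrative names, not the actual SeaweedFS functions:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// runBucketMetricsLoop sketches the pattern from the commits: an initial
// delay that can be interrupted by ctx, then a time.Ticker for consistent
// intervals, and a prompt return when the context is cancelled.
func runBucketMetricsLoop(ctx context.Context, initialDelay, interval time.Duration, collectOnce func(context.Context) error) {
	// Initial delay that respects context cancellation.
	select {
	case <-ctx.Done():
		return
	case <-time.After(initialDelay):
	}

	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		if err := collectOnce(ctx); err != nil {
			fmt.Println("collect error:", err) // the real code logs via glog
		}
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}
	}
}

// runDemo drives the loop briefly and reports how many collections ran.
func runDemo() int {
	ctx, cancel := context.WithCancel(context.Background())
	calls := 0
	done := make(chan struct{})
	go func() {
		runBucketMetricsLoop(ctx, time.Millisecond, 5*time.Millisecond, func(context.Context) error {
			calls++
			return nil
		})
		close(done)
	}()
	time.Sleep(50 * time.Millisecond)
	cancel()
	<-done // the channel close orders the writes to calls before the read below
	return calls
}

func main() {
	fmt.Println("collections:", runDemo())
}
```

Using a ticker rather than time.Sleep means cancellation is observed between iterations instead of only after a full sleep, which is the shutdown-responsiveness improvement the commit describes.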
@@ -434,6 +434,30 @@ var (
 		Name: "uploaded_objects",
 		Help: "Number of objects uploaded in each bucket.",
 	}, []string{"bucket"})
+
+	S3BucketSizeBytesGauge = prometheus.NewGaugeVec(
+		prometheus.GaugeOpts{
+			Namespace: Namespace,
+			Subsystem: "s3",
+			Name:      "bucket_size_bytes",
+			Help:      "Current size of each S3 bucket in bytes (logical size, deduplicated across replicas).",
+		}, []string{"bucket"})
+
+	S3BucketPhysicalSizeBytesGauge = prometheus.NewGaugeVec(
+		prometheus.GaugeOpts{
+			Namespace: Namespace,
+			Subsystem: "s3",
+			Name:      "bucket_physical_size_bytes",
+			Help:      "Current physical size of each S3 bucket in bytes (including all replicas).",
+		}, []string{"bucket"})
+
+	S3BucketObjectCountGauge = prometheus.NewGaugeVec(
+		prometheus.GaugeOpts{
+			Namespace: Namespace,
+			Subsystem: "s3",
+			Name:      "bucket_object_count",
+			Help:      "Current number of objects in each S3 bucket (logical count, deduplicated across replicas).",
+		}, []string{"bucket"})
 )
 
 func init() {
@@ -491,6 +515,9 @@ func init() {
 	Gather.MustRegister(S3BucketTrafficSentBytesCounter)
 	Gather.MustRegister(S3DeletedObjectsCounter)
 	Gather.MustRegister(S3UploadedObjectsCounter)
+	Gather.MustRegister(S3BucketSizeBytesGauge)
+	Gather.MustRegister(S3BucketPhysicalSizeBytesGauge)
+	Gather.MustRegister(S3BucketObjectCountGauge)
 
 	go bucketMetricTTLControl()
 }
@@ -576,6 +603,9 @@ func bucketMetricTTLControl() {
 			c += S3BucketTrafficSentBytesCounter.DeletePartialMatch(labels)
 			c += S3DeletedObjectsCounter.DeletePartialMatch(labels)
 			c += S3UploadedObjectsCounter.DeletePartialMatch(labels)
+			c += S3BucketSizeBytesGauge.DeletePartialMatch(labels)
+			c += S3BucketPhysicalSizeBytesGauge.DeletePartialMatch(labels)
+			c += S3BucketObjectCountGauge.DeletePartialMatch(labels)
 			glog.V(0).Infof("delete inactive bucket metrics, %s: %d", bucket, c)
 		}
 	}
@@ -585,3 +615,14 @@ func bucketMetricTTLControl() {
 		}
 	}
 }
+
+// UpdateBucketSizeMetrics updates the bucket size gauges
+// logicalSize is the deduplicated size (accounting for replication)
+// physicalSize is the raw size including all replicas
+// objectCount is the number of objects in the bucket (deduplicated)
+func UpdateBucketSizeMetrics(bucket string, logicalSize, physicalSize float64, objectCount float64) {
+	S3BucketSizeBytesGauge.WithLabelValues(bucket).Set(logicalSize)
+	S3BucketPhysicalSizeBytesGauge.WithLabelValues(bucket).Set(physicalSize)
+	S3BucketObjectCountGauge.WithLabelValues(bucket).Set(objectCount)
+	RecordBucketActiveTime(bucket)
+}
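The volume-ID deduplication the later commits describe (count each volume ID once for the logical numbers, but let every replica contribute to the physical size) might look roughly like this; volumeReplica and aggregateBucket are hypothetical stand-ins for the per-volume info returned by the master's VolumeList RPC, not SeaweedFS types:

```go
package main

import "fmt"

// volumeReplica is a hypothetical slice of the master topology: one record
// per replica of a volume in the bucket's collection.
type volumeReplica struct {
	ID        uint32 // volume id, shared by all replicas of the same volume
	Size      uint64 // bytes on disk for this replica
	FileCount uint64 // objects stored in this replica
}

// aggregateBucket deduplicates by volume ID: logical size and object count
// take each volume once (correct even when some replicas are missing, unlike
// dividing totals by copyCount), while physical size sums every replica.
func aggregateBucket(replicas []volumeReplica) (logicalSize, physicalSize, objectCount uint64) {
	seen := make(map[uint32]bool)
	for _, v := range replicas {
		physicalSize += v.Size // every replica counts toward disk usage
		if seen[v.ID] {
			continue
		}
		seen[v.ID] = true
		logicalSize += v.Size
		objectCount += v.FileCount
	}
	return
}

func main() {
	// Volume 1 has two replicas; volume 2 has lost one of its replicas.
	replicas := []volumeReplica{
		{ID: 1, Size: 100, FileCount: 10},
		{ID: 1, Size: 100, FileCount: 10},
		{ID: 2, Size: 50, FileCount: 5},
	}
	logical, physical, count := aggregateBucket(replicas)
	fmt.Println(logical, physical, count) // 150 250 15
}
```

With the old divide-by-copyCount approach, volume 2's missing replica would have undercounted its logical size; tracking seen IDs gives the right answer regardless of how many replicas survive.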
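The data-race fix for LiveLock.isLocked (a bool replaced by an int32 accessed only through sync/atomic) follows a standard Go pattern; this sketch uses illustrative names rather than the real LiveLock type:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// liveLock sketches the fix: isLocked is an int32 so that concurrent readers
// (IsLocked) and the lock-renewal goroutine never race on a plain bool.
type liveLock struct {
	isLocked int32 // 1 = held, 0 = not held; atomic access only
}

func (l *liveLock) setLocked(held bool) {
	var v int32
	if held {
		v = 1
	}
	atomic.StoreInt32(&l.isLocked, v)
}

// IsLocked reports whether the lock is held, safely from any goroutine.
func (l *liveLock) IsLocked() bool {
	return atomic.LoadInt32(&l.isLocked) == 1
}

func main() {
	l := &liveLock{}
	var wg sync.WaitGroup
	// Concurrent writers and readers: clean under the race detector because
	// every access goes through sync/atomic.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			l.setLocked(i%2 == 0)
			_ = l.IsLocked()
		}(i)
	}
	wg.Wait()
	l.setLocked(true)
	fmt.Println("locked:", l.IsLocked()) // locked: true
}
```

On Go 1.19+, the atomic.Bool type would express the same thing more directly; the int32 form shown here matches what the commit message describes.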