14 Commits

Author SHA1 Message Date
mtmn
c9100a7213 fix(grafana): unify datasource usage in grafana_seaweedfs.json (#8635)
Some panels used `Prometheus` instead of `${DS_PROMETHEUS}`, causing
missing data when another datasource (e.g. VictoriaMetrics) is in use.
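The fix boils down to one field per panel. A sketch of the before/after in the dashboard JSON (field name as in standard Grafana dashboards; panel context omitted):

```
"datasource": "Prometheus"          <- hardcoded name; breaks when the configured source is not literally named "Prometheus"
"datasource": "${DS_PROMETHEUS}"    <- template variable; follows whichever datasource the dashboard user selected
```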
2026-03-15 08:45:42 -07:00
Moray Baruh
3fe5a7d761 Fix misuse of $__interval instead of $__rate_interval in Grafana panels (#8617) 2026-03-13 07:54:03 -07:00
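The corrected query pattern, sketched in PromQL (metric name taken from later commits in this log): `$__interval` can shrink below the scrape interval at narrow time ranges, producing gaps, while `$__rate_interval` is guaranteed to cover at least four scrape intervals.

```promql
# problematic: window may be smaller than one scrape interval
rate(SeaweedFS_s3_request_total[$__interval])

# corrected: window is always wide enough for rate() to see two samples
rate(SeaweedFS_s3_request_total[$__rate_interval])
```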
Chris Lu
f5c666052e feat: add S3 bucket size and object count metrics (#7776)
* feat: add S3 bucket size and object count metrics

Adds periodic collection of bucket size metrics:
- SeaweedFS_s3_bucket_size_bytes: logical size (deduplicated across replicas)
- SeaweedFS_s3_bucket_physical_size_bytes: physical size (including replicas)
- SeaweedFS_s3_bucket_object_count: object count (deduplicated)

Collection runs every minute via a background goroutine that queries the
filer Statistics RPC for each bucket's collection.

Also adds Grafana dashboard panels for:
- S3 Bucket Size (logical vs physical)
- S3 Bucket Object Count

* address PR comments: fix bucket size metrics collection

1. Fix collectCollectionInfoFromMaster to use master VolumeList API
   - Now properly queries master for topology info
   - Uses WithMasterClient to get volume list from master
   - Correctly calculates logical vs physical size based on replication

2. Return error when filerClient is nil to trigger fallback
   - Changed 'return nil, nil' to return a non-nil error
   - Ensures fallback to filer stats is properly triggered

3. Implement pagination in listBucketNames
   - Added listBucketPageSize constant (1000)
   - Uses StartFromFileName for pagination
   - Continues fetching until fewer entries than the limit are returned

4. Handle NewReplicaPlacementFromByte error and prevent division by zero
   - Check error return from NewReplicaPlacementFromByte
   - Default to 1 copy if error occurs
   - Add explicit check for copyCount == 0

* simplify bucket size metrics: remove filer fallback, align with quota enforcement

- Remove fallback to filer Statistics RPC
- Use only master topology for collection info (same as s3.bucket.quota.enforce)
- Update comments to clarify this runs the same collection logic as quota enforcement
- Simplify code by removing collectBucketSizeFromFilerStats

* use s3a.option.Masters directly instead of querying filer

* address PR comments: fix dashboard overlaps and improve metrics collection

Grafana dashboard fixes:
- Fix overlapping panels 55 and 59 in grafana_seaweedfs.json (moved 59 to y=30)
- Fix grid collision in k8s dashboard (moved panel 72 to y=48)
- Aggregate bucket metrics with max() by (bucket) for multi-instance S3 gateways

Go code improvements:
- Add graceful shutdown support via context cancellation
- Use ticker instead of time.Sleep for better shutdown responsiveness
- Distinguish EOF from actual errors in stream handling

* improve bucket size metrics: multi-master failover and proper error handling

- Initial delay now respects context cancellation using select with time.After
- Use WithOneOfGrpcMasterClients for multi-master failover instead of hardcoding Masters[0]
- Properly propagate stream errors instead of just logging them (EOF vs real errors)

* improve bucket size metrics: distributed lock and volume ID deduplication

- Add distributed lock (LiveLock) so only one S3 instance collects metrics at a time
- Add IsLocked() method to LiveLock for checking lock status
- Fix deduplication: use volume ID tracking instead of dividing by copyCount
  - Previous approach gave wrong results if replicas were missing
  - Now tracks seen volume IDs and counts each volume only once
- Physical size still includes all replicas for accurate disk usage reporting

* rename lock to s3.leader

* simplify: remove StartBucketSizeMetricsCollection wrapper function

* fix data race: use atomic operations for LiveLock.isLocked field

- Change isLocked from bool to int32
- Use atomic.LoadInt32/StoreInt32 for all reads/writes
- Sync shared isLocked field in StartLongLivedLock goroutine

* add nil check for topology info to prevent panic

* fix bucket metrics: use Ticker for consistent intervals, fix pagination logic

- Use time.Ticker instead of time.After for consistent interval execution
- Fix pagination: count all entries (not just directories) for proper termination
- Update lastFileName for all entries to prevent pagination issues

* address PR comments: remove redundant atomic store, propagate context

- Remove redundant atomic.StoreInt32 in StartLongLivedLock (AttemptToLock already sets it)
- Propagate context through metrics collection for proper cancellation on shutdown
  - collectAndUpdateBucketSizeMetrics now accepts ctx
  - collectCollectionInfoFromMaster uses ctx for VolumeList RPC
  - listBucketNames uses ctx for ListEntries RPC
2025-12-15 19:23:25 -08:00
Chris Lu
93d0779318 fix: add S3 bucket traffic sent metric tracking (#7774)
* fix: add S3 bucket traffic sent metric tracking

The BucketTrafficSent() function was defined but never called, so the
S3 Bucket Traffic Sent Grafana dashboard panel displayed no data.

Added BucketTrafficSent() calls in the streaming functions:
- streamFromVolumeServers: for inline and chunked content
- streamFromVolumeServersWithSSE: for encrypted range and full object requests

The traffic received metric already worked because BucketTrafficReceived()
was properly called in putToFiler() for both regular and multipart uploads.

* feat: add S3 API Calls per Bucket panel to Grafana dashboards

Added a new panel showing API calls per bucket using the existing
SeaweedFS_s3_request_total metric aggregated by bucket.

Updated all Grafana dashboard files:
- other/metrics/grafana_seaweedfs.json
- other/metrics/grafana_seaweedfs_k8s.json
- other/metrics/grafana_seaweedfs_heartbeat.json
- k8s/charts/seaweedfs/dashboards/seaweedfs-grafana-dashboard.json

* address PR comments: use actual bytes written for traffic metrics

- Use actual bytes written from w.Write instead of expected size for inline content
- Add countingWriter wrapper to track actual bytes for chunked content streaming
- Update streamDecryptedRangeFromChunks to return actual bytes written for SSE
- Remove redundant nil check that caused linter warning
- Fix duplicate panel id 86 in grafana_seaweedfs.json (changed to 90)
- Fix overlapping panel positions in grafana_seaweedfs_k8s.json (rebalanced x positions)

* fix grafana k8s dashboard: rebalance S3 panels to avoid overlap

- Panel 86 (S3 API Calls per Bucket): w:6, x:0, y:15
- Panel 67 (S3 Request Duration 95th): w:6, x:6, y:15
- Panel 68 (S3 Request Duration 80th): w:6, x:12, y:15
- Panel 65 (S3 Request Duration 99th): w:6, x:18, y:15

All four S3 panels now fit in a single row (y:15) with width 6 each.
Filer row header at y:22 and subsequent panels remain correctly positioned.

* add input validation and clarify comments in adjustRangeForPart

- Add validation that partStartOffset <= partEndOffset at function start
- Add clarifying comments for suffix-range handling where clientEnd
  temporarily holds the suffix length before being reassigned

* align pluginVersion for panel 86 to 10.3.1 in k8s dashboard

* track partial writes for accurate egress traffic accounting

- Change condition from 'err == nil' to 'written > 0' for inline content
- Move BucketTrafficSent before error check for chunked content streaming
- Track traffic even on partial SSE range writes
- Track traffic even on partial full SSE object copies

This ensures egress traffic is counted even when writes fail partway through,
providing more accurate bandwidth metrics.
2025-12-15 17:36:35 -08:00
Chris Lu
848bec6d24 Metrics: Add Prometheus metrics for concurrent upload tracking (#7555)
* metrics: add Prometheus metrics for concurrent upload tracking

Add Prometheus metrics to monitor concurrent upload activity for both
filer and S3 servers. This provides visibility into the upload limiting
feature added in the previous PR.

New Metrics:
- SeaweedFS_filer_in_flight_upload_bytes: Current bytes being uploaded to filer
- SeaweedFS_filer_in_flight_upload_count: Current number of uploads to filer
- SeaweedFS_s3_in_flight_upload_bytes: Current bytes being uploaded to S3
- SeaweedFS_s3_in_flight_upload_count: Current number of uploads to S3

The metrics are updated atomically whenever uploads start or complete,
providing real-time visibility into upload concurrency levels.

This helps operators:
- Monitor upload concurrency in real-time
- Set appropriate limits based on actual usage patterns
- Detect potential bottlenecks or capacity issues
- Track the effectiveness of upload limiting configuration

* grafana: add dashboard panels for concurrent upload metrics

Add 4 new panels to the Grafana dashboard to visualize the concurrent
upload metrics added in this PR:

Filer Section:
- Filer Concurrent Uploads: Shows current number of concurrent uploads
- Filer Concurrent Upload Bytes: Shows current bytes being uploaded

S3 Gateway Section:
- S3 Concurrent Uploads: Shows current number of concurrent uploads
- S3 Concurrent Upload Bytes: Shows current bytes being uploaded

These panels help operators monitor upload concurrency in real-time and
tune the upload limiting configuration based on actual usage patterns.

* more efficient
2025-11-26 15:51:38 -08:00
Hadi Zamani
c7ae969c06 Add bucket's traffic metrics (#6444)
* Add bucket's traffic metrics

* Add bucket traffic to dashboards

* Fix bucket metrics help messages

* Fix variable names
2025-01-16 08:23:35 -08:00
Brad Murray
7bd638de47 Fix invalid metric name (#6141)
Replaced `SeaweedFS_filer_` with `SeaweedFS_filerStore_` because the dashboard referenced a metric name that does not exist.
2024-10-17 09:44:57 -07:00
Alby Hernández
75f7893c11 feat: Add datasource as variable (#4584) 2023-06-16 10:46:02 -07:00
zzq09494
6449114e5e format 2022-06-16 13:52:36 +08:00
zzq09494
0a613876ca add bucket label to the grafana dashboard 2022-06-16 13:50:16 +08:00
nivekuil
a7383a8a1c grafana dashboard updates 2021-08-28 16:50:09 -07:00
Jonas Falck
829b195084 Add process metrics of weed itself 2021-06-22 13:09:42 +02:00
Thilo-Alexander Ginkel
ec51d77dcf grafana: remove incorrect QPS factor 2020-11-23 12:00:36 +01:00
Chris Lu
a34bad2cee moving grafana dashboard here 2020-09-30 13:30:21 -07:00