Lisandro Pin
|
f400fb44a0
|
Update cluster.status to resolve file details on EC volumes. (#8268)
Also parallelizes queries for file metrics collection when the `--files`
flag is specified, and improves the command's output for readability:
```
> cluster.status --files
collecting file stats: 100%
cluster:
id: topo
status: LOCKED
nodes: 10
topology: 1 DC, 10 disks on 1 rack
volumes:
total: 3 volumes, 1 collection
max size: 32 GB
regular: 1/80 volume on 3 replicas, 3 writable (100%), 0 read-only (0%)
EC: 2 EC volumes on 28 shards (14 shards/volume)
storage:
total: 269 MB (522 MB raw, 193.95%)
regular volumes: 91 MB (272 MB raw, 300%)
EC volumes: 178 MB (250 MB raw, 140%)
files:
total: 363 files, 300 readable (82.64%), 63 deleted (17.35%), avg 522 kB per file
regular: 168 files, 105 readable (62.5%), 63 deleted (37.5%), avg 540 kB per file
EC: 195 files, 195 readable (100%), 0 deleted (0%), avg 506 kB per file
```
|
2026-02-09 17:52:43 -08:00 |
|
Lisandro Pin
|
6a1b9ce8cd
|
Give cluster.status detailed file metrics for regular volumes (#7791)
* Implement a `weed shell` command to return a status overview of the cluster.
Detailed file information will be implemented in a follow-up MR. Note also
that masters are currently not reporting back EC shard sizes correctly, via
`master_pb.VolumeEcShardInformationMessage.shard_sizes`.
For example:
```
> status
cluster:
id: topo
status: LOCKED
nodes: 10
topology: 1 DC(s)s, 1 disk(s) on 1 rack(s)
volumes:
total: 3 volumes on 1 collections
max size: 31457280000 bytes
regular: 2/80 volumes on 6 replicas, 6 writable (100.00%), 0 read-only (0.00%)
EC: 1 EC volumes on 14 shards (14.00 shards/volume)
storage:
total: 186024424 bytes
regular volumes: 186024424 bytes
EC volumes: 0 bytes
raw: 558073152 bytes on volume replicas, 0 bytes on EC shard files
```
* Humanize output for `weed.server` by default.
Makes things more readable :)
```
> cluster.status
cluster:
id: topo
status: LOCKED
nodes: 10
topology: 1 DC, 10 disks on 1 rack
volumes:
total: 3 volumes, 1 collection
max size: 32 GB
regular: 2/80 volumes on 6 replicas, 6 writable (100%), 0 read-only (0%)
EC: 1 EC volume on 14 shards (14 shards/volume)
storage:
total: 172 MB
regular volumes: 172 MB
EC volumes: 0 B
raw: 516 MB on volume replicas, 0 B on EC shards
```
```
> cluster.status --humanize=false
cluster:
id: topo
status: LOCKED
nodes: 10
topology: 1 DC(s), 10 disk(s) on 1 rack(s)
volumes:
total: 3 volume(s), 1 collection(s)
max size: 31457280000 byte(s)
regular: 2/80 volume(s) on 6 replica(s), 5 writable (83.33%), 1 read-only (16.67%)
EC: 1 EC volume(s) on 14 shard(s) (14.00 shards/volume)
storage:
total: 172128072 byte(s)
regular volumes: 172128072 byte(s)
EC volumes: 0 byte(s)
raw: 516384216 byte(s) on volume replicas, 0 byte(s) on EC shards
```
Also adds unit tests, and reshuffles test file handling for clarity.
* `cluster.status`: Add detailed file metrics for regular volumes.
|
2025-12-17 16:40:27 -08:00 |
|
Lisandro Pin
|
187ef65e8f
|
Humanize output for weed.server by default (#7758)
* Implement a `weed shell` command to return a status overview of the cluster.
Detailed file information will be implemented in a follow-up MR. Note also
that masters are currently not reporting back EC shard sizes correctly, via
`master_pb.VolumeEcShardInformationMessage.shard_sizes`.
For example:
```
> status
cluster:
id: topo
status: LOCKED
nodes: 10
topology: 1 DC(s)s, 1 disk(s) on 1 rack(s)
volumes:
total: 3 volumes on 1 collections
max size: 31457280000 bytes
regular: 2/80 volumes on 6 replicas, 6 writable (100.00%), 0 read-only (0.00%)
EC: 1 EC volumes on 14 shards (14.00 shards/volume)
storage:
total: 186024424 bytes
regular volumes: 186024424 bytes
EC volumes: 0 bytes
raw: 558073152 bytes on volume replicas, 0 bytes on EC shard files
```
* Humanize output for `weed.server` by default.
Makes things more readable :)
```
> cluster.status
cluster:
id: topo
status: LOCKED
nodes: 10
topology: 1 DC, 10 disks on 1 rack
volumes:
total: 3 volumes, 1 collection
max size: 32 GB
regular: 2/80 volumes on 6 replicas, 6 writable (100%), 0 read-only (0%)
EC: 1 EC volume on 14 shards (14 shards/volume)
storage:
total: 172 MB
regular volumes: 172 MB
EC volumes: 0 B
raw: 516 MB on volume replicas, 0 B on EC shards
```
```
> cluster.status --humanize=false
cluster:
id: topo
status: LOCKED
nodes: 10
topology: 1 DC(s), 10 disk(s) on 1 rack(s)
volumes:
total: 3 volume(s), 1 collection(s)
max size: 31457280000 byte(s)
regular: 2/80 volume(s) on 6 replica(s), 5 writable (83.33%), 1 read-only (16.67%)
EC: 1 EC volume(s) on 14 shard(s) (14.00 shards/volume)
storage:
total: 172128072 byte(s)
regular volumes: 172128072 byte(s)
EC volumes: 0 byte(s)
raw: 516384216 byte(s) on volume replicas, 0 byte(s) on EC shards
```
Also adds unit tests, and reshuffles test file handling for clarity.
|
2025-12-15 11:18:45 -08:00 |
|