Files
seaweedFS/VOLUME_SERVER_RUST_PLAN.md
Chris Lu ba624f1f34 Rust volume server implementation with CI (#8539)
* Match Go gRPC client transport defaults

* Honor Go HTTP idle timeout

* Honor maintenanceMBps during volume copy

* Honor images.fix.orientation on uploads

* Honor cpuprofile when pprof is disabled

* Match Go memory status payloads

* Propagate request IDs across gRPC calls

* Format pending Rust source updates

* Match Go stats endpoint payloads

* Serve Go volume server UI assets

* Enforce Go HTTP whitelist guards

* Align Rust metrics admin-port test with Go behavior

* Format pending Rust server updates

* Honor access.ui without per-request JWT checks

* Honor keepLocalDatFile in tier upload shortcut

* Honor Go remote volume write mode

* Load tier backends from master config

* Check master config before loading volumes

* Remove vif files on volume destroy

* Delete remote tier data on volume destroy

* Honor vif version defaults and overrides

* Reject mismatched vif bytes offsets

* Load remote-only tiered volumes

* Report Go tail offsets in sync status

* Stream remote dat in incremental copy

* Honor collection vif for EC shard config

* Persist EC expireAtSec in vif metadata

* Stream remote volume reads through HTTP

* Serve HTTP ranges from backend source

* Match Go ReadAllNeedles scan order

* Match Go CopyFile zero-stop metadata

* Delete EC volumes with collection cleanup

* Drop deleted collection metrics

* Match Go tombstone ReadNeedleMeta

* Match Go TTL parsing: all-digit values default to minutes, two-pass fit algorithm

* Match Go needle ID/cookie formatting and name size computation

* Match Go image ext checks: webp resize only, no crop; empty healthz body

* Match Go Prometheus metric names and add missing handler counter constants

* Match Go ReplicaPlacement short string parsing with zero-padding

* Add missing EC constants MAX_SHARD_COUNT and MIN_TOTAL_DISKS

* Add walk_ecx_stats for accurate EC volume file counts and size

* Match Go VolumeStatus dat file size, EC shard stats, and disk pct precision

* Match Go needle map: unconditional delete counter, fix redb idx walk offset

* Add CompactMapSegment overflow panic guard matching Go

* Match Go volume: vif creation, version from superblock, TTL expiry, dedup data_size, garbage_level fallback

* Match Go 304 Not Modified: return bare status with no headers

* Match Go JWT error message: use "wrong jwt" instead of detailed error

* Match Go read handler bare 400, delete error prefix, download throttle timeout

* Match Go pretty JSON 1-space indent and "Deletion Failed:" error prefix

* Match Go heartbeat: keep is_heartbeating on error, add EC shard identification

* Match Go needle ReadBytes V2: tolerate EOF on truncated body

* Match Go volume: cookie check on any existing needle, return DataSize, 128KB meta guard

* Match Go DeleteCollection: propagate destroy errors

* Match Go gRPC: BatchDelete no flag, IncrementalCopy error, FetchAndWrite concurrent, VolumeUnmount/DeleteCollection errors, tail draining, query error code

* Match Go Content-Disposition RFC 6266 formatting with RFC 2231 encoding

* Match Go Guard isWriteActive: combine whitelist and signing key check

* Match Go DeleteCollectionMetrics: use partial label matching

* Match Go heartbeat: send state-only delta on volume state changes

* Match Go ReadNeedleMeta paged I/O: read header+tail only, skip data; add EIO tracking

* Match Go ScrubVolume INDEX mode dispatch; add VolumeCopy preallocation and EC NeedleStatus TODOs

* Add read_ec_shard_needle for full needle reconstruction from local EC shards

* Make heartbeat master config helpers pub for VolumeCopy preallocation

* Match Go gRPC: VolumeCopy preallocation, EC NeedleStatus full read, error message wording

* Match Go HTTP responses: omitempty fields, 2-space JSON indent, JWT JSON error, delete pretty/JSONP, 304 Last-Modified, raw write error

* Match Go WriteNeedleBlob V3 timestamp patching, fix makeup_diff double padding, count==0 read handling

* Add rebuild_ecx_file for EC index reconstruction from data shards

* Match Go gRPC: tail header first-chunk-only, EC cleanup on failure, copy append mode, ecx rebuild, compact cancellation

* Add EC volume read and delete support in HTTP handlers

* Add per-shard EC mount/unmount, location predicate search, idx directory for EC

* Add CheckVolumeDataIntegrity on volume load matching Go

* Match Go gRPC: EC multi-disk placement, per-shard mount/unmount, no auto-mount on reconstruct, streaming ReadAll/EcShardRead, ReceiveFile cleanup, version check, proxy streaming, redirect Content-Type

* Match Go heartbeat metric accounting

* Match Go duplicate UUID heartbeat retries

* Delete expired EC volumes during heartbeat

* Match Go volume heartbeat pruning

* Honor master preallocate in volume max

* Report remote storage info in heartbeats

* Emit EC heartbeat deltas on shard changes

* Match Go throttle boundary: use <= instead of <, fix pretty JSON to 1-space

* Match Go write_needle_blob monotonic appendAtNs via get_append_at_ns

* Match Go VolumeUnmount: idempotent success when volume not found

* Match Go TTL Display: return empty string when unit is Empty

Go checks `t.Unit == Empty` separately and returns "" for TTLs
with nonzero count but Empty unit. Rust only checked is_empty()
(count==0 && unit==0), so count>0 with unit=0 would format as
"5 " instead of "".

* Match Go error behavior for truncated needle data in read_body_v2

Go's readNeedleDataVersion2 returns "index out of range %d" errors
(indices 1-7) when needle body or metadata fields are truncated.
Rust was silently tolerating truncation and returning Ok; it now returns
NeedleError::IndexOutOfRange with the matching index for each field.

* Match Go download throttle: return JSON error instead of plain text

* Match Go crop params: default x1/y1 to 0 when not provided

* Match Go ScrubEcVolume: accumulate total_files from EC shards

* Match Go ScrubVolume: count total_files even on scrub error

* Match Go VolumeEcShardsCopy: set ignore_source_file_not_found for .vif

* Match Go VolumeTailSender: send needle_header on every chunk

* Match Go read_super_block: apply replication override from .vif

* Match Go check_volume_data_integrity: verify all 10 entries, detect trailing corruption

* Match Go WriteNeedleBlob: dedup check before writing during replication

* handlers: use meta-only reads for HEAD

* handlers: align range parsing and responses with Go

* handlers: align upload parsing with Go

* deps: enable webp support

* Make 5bytes the default feature for idx entry compatibility

* Match Go TTL: preserve original unit when count fits in byte

* Fix EC locate_needle: use get_actual_size for full needle size

* Fix raw body POST: only parse multipart when Content-Type contains form-data

* Match Go ReceiveFile: return protocol errors in response body, not gRPC status

* add docs

* Match Go VolumeEcShardsCopy: append to .ecj file instead of truncating

* Match Go ParsePath: support _delta suffix on file IDs for sub-file addressing

* Match Go chunk manifest: add Accept-Ranges, Content-Disposition, filename fallback, MIME detection

* Match Go privateStoreHandler: use proper JSON error for unsupported methods

* Match Go Destroy: add only_empty parameter to reject non-empty volume deletion

* Fix compilation: set_read_only_persist and set_writable return ()

These methods fire-and-forget save_vif internally, so gRPC callers
should not try to chain .map_err() on the unit return type.

* Match Go SaveVolumeInfo: check writability and propagate errors in save_vif

* Match Go VolumeDelete: propagate only_empty to delete_volume for defense in depth

The gRPC VolumeDelete handler had a pre-check for only_empty but then
passed false to store.delete_volume(), bypassing the store-level check.
Go passes req.OnlyEmpty directly to DeleteVolume. Now Rust does the same
for defense in depth against TOCTOU races (though the store write lock
makes this unlikely).

* Match Go ProcessRangeRequest: return full content for empty/oversized ranges

Go returns nil from ProcessRangeRequest when ranges are empty or total
range size exceeds content length, causing the caller to serve the full
content as a normal 200 response. Rust was returning an empty 200 body.
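
The fallback rule can be sketched as follows, treating ranges as inclusive (start, end) byte pairs; the function name is illustrative:

```rust
// Serve the full content as a plain 200 when the parsed ranges are empty
// or their combined size exceeds the content length (mirroring Go's nil
// return from ProcessRangeRequest).
fn should_serve_full_content(ranges: &[(u64, u64)], content_length: u64) -> bool {
    if ranges.is_empty() {
        return true;
    }
    let total: u64 = ranges.iter().map(|&(start, end)| end - start + 1).sum();
    total > content_length
}

fn main() {
    assert!(should_serve_full_content(&[], 100));
    assert!(!should_serve_full_content(&[(0, 49)], 100));
    // Overlapping ranges whose combined size exceeds the content length.
    assert!(should_serve_full_content(&[(0, 99), (0, 99)], 100));
}
```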

* Match Go Query: quote JSON keys in output records

Go's ToJson produces valid JSON with quoted keys like {"name":"Alice"}.
Rust was producing invalid JSON with unquoted keys like {name:"Alice"}.

* Match Go VolumeCopy: reject when no suitable disk location exists

Go returns ErrVolumeNoSpaceLeft when no location matches the disk type
and has sufficient space. Rust had an unsafe fallback that silently
picked the first location regardless of type or available space.

* Match Go DeleteVolumeNeedle: check noWriteOrDelete before allowing delete

Go checks v.noWriteOrDelete before proceeding with needle deletion,
returning "volume is read only" if true. Rust was skipping this check.

* Match Go ReceiveFile: prefer HardDrive location for EC and use response-level write errors

Two fixes: (1) Go prefers HardDriveType disk location for EC volumes,
falling back to first location. Returns "no storage location available"
when no locations exist. (2) Write failures are now response-level
errors (in response body) instead of gRPC status errors, matching Go.

* Match Go CopyFile: sync EC volume journal to disk before copying

Go calls ecVolume.Sync() before copying EC volume files to ensure the
.ecj journal is flushed to disk. Added sync_to_disk() to EcVolume and
call it in the CopyFile EC branch.

* Match Go readSuperBlock: propagate replication parse errors

Go returns an error when parsing the replication string from the .vif
file fails. Rust was silently ignoring the parse failure and using the
super block's replication as-is.

* Match Go TTL expiry: remove append_at_ns > 0 guard

Go computes TTL expiry from AppendAtNs without guarding against zero.
When append_at_ns is 0, the expiry is epoch + TTL which is in the past,
correctly returning NotFound. Rust's extra guard skipped the check,
incorrectly returning success for such needles.
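
A sketch of the expiry rule without the guard; names and the second-resolution comparison are illustrative:

```rust
// An append_at_ns of 0 dates the needle at the epoch, so any nonzero TTL
// is already past and the needle correctly reads as not found.
fn is_ttl_expired(append_at_ns: u64, ttl_seconds: u64, now_secs: u64) -> bool {
    if ttl_seconds == 0 {
        return false; // no TTL configured
    }
    let append_secs = append_at_ns / 1_000_000_000;
    append_secs + ttl_seconds < now_secs
}

fn main() {
    let now = 1_700_000_000u64;
    // Zero timestamp: epoch + TTL is long past, so the needle is expired.
    assert!(is_ttl_expired(0, 60, now));
    // A fresh needle within its TTL window is not expired.
    assert!(!is_ttl_expired(now * 1_000_000_000, 60, now + 30));
}
```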

* Match Go delete_collection: skip volumes with compaction in progress

Go checks !v.isCompactionInProgress.Load() before destroying a volume
during collection deletion, skipping compacting volumes. Also changed
destroy errors to log instead of aborting the entire collection delete.

* Match Go MarkReadonly/MarkWritable: always notify master even on local error

Go always notifies the master regardless of whether the local
set_read_only_persist or set_writable step fails. The Rust code was
using `?` which short-circuited on error, skipping the final master
notification. Save the result and defer the `?` until after the
notify call.

* Match Go PostHandler: return 500 for all write errors

Go returns 500 (InternalServerError) for all write failures. Rust was
returning 404 for volume-not-found and 403 for read-only volumes.

* Match Go makeupDiff: validate .cpd compaction revision is old + 1

Go reads the new .cpd file's super block and verifies the compaction
revision is exactly old + 1. Rust only validated the old revision.

* Match Go VolumeStatus: check data backend before returning status

Go checks v.DataBackend != nil before building the status response,
returning an error if missing. Rust was silently returning size 0.

* Match Go PostHandler: always include mime field in upload response JSON

Go always serializes the mime field even when empty ("mime":""). Rust was
omitting it when empty due to Option<String> with skip_serializing_if.

* Match Go FindFreeLocation: account for EC shards in free slot calculation

Go subtracts EC shard equivalents when computing available volume slots.
Rust was only comparing volume count, potentially over-counting free
slots on locations with many EC shards.

* Match Go privateStoreHandler: use INVALID as metrics label for unsupported methods

Go records the method as INVALID in metrics for unsupported HTTP methods.
Rust was using the actual method name.

* Match Go volume: add commit_compact guard and scrub data size validation

Two fixes: (1) commit_compact now checks/sets is_compacting flag to
prevent concurrent commits, matching Go's CompareAndSwap guard.
(2) scrub now validates total needle sizes against .dat file size.

* Match Go gRPC: fix TailSender error propagation, EcShardsInfo all slots, EcShardRead .ecx check

Three fixes: (1) VolumeTailSender now propagates binary search errors
instead of silently falling back to start. (2) VolumeEcShardsInfo
returns entries for all shard slots including unmounted. (3)
VolumeEcShardRead checks .ecx index for deletions instead of .ecj.

* Match Go metrics: add BuildInfo gauge and connection tracking functions

Go exposes a BuildInfo Prometheus metric with version labels, and tracks
open connections via stats.ConnectionOpen/Close. Added both to Rust.

* Match Go NeedleMap.Delete: use !is_deleted() instead of is_valid()

Go's CompactMap.Delete checks !IsDeleted() not IsValid(), so needles
with size==0 (live but anomalous) can still be deleted. The Rust code
was using is_valid() which returns false for size==0, preventing
deletion of such needles.
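
The distinction can be sketched with a tombstone sentinel on the size field (the sentinel value and struct are illustrative, not the actual on-disk encoding):

```rust
const TOMBSTONE_SIZE: i32 = -1;

struct IndexEntry {
    size: i32,
}

impl IndexEntry {
    fn is_deleted(&self) -> bool {
        self.size == TOMBSTONE_SIZE
    }
    fn is_valid(&self) -> bool {
        self.size > 0
    }
    // Delete must be gated on !is_deleted(), not is_valid(): a live entry
    // with size == 0 is anomalous but must still be deletable.
    fn can_delete(&self) -> bool {
        !self.is_deleted()
    }
}

fn main() {
    let zero_sized = IndexEntry { size: 0 };
    assert!(!zero_sized.is_valid());
    assert!(zero_sized.can_delete());
    assert!(!IndexEntry { size: TOMBSTONE_SIZE }.can_delete());
}
```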

* Match Go fitTtlCount: always normalize TTL to coarsest unit

Go's fitTtlCount always converts to seconds first, then finds the
coarsest unit that fits in one byte (e.g., 120m → 2h). Rust had an
early return for count<=255 that skipped normalization, producing
different binary encodings for the same duration.
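
An illustrative sketch of the coarsest-unit normalization: convert to seconds, then pick the coarsest unit that divides evenly with a count fitting in one byte. The thresholds are standard durations; the real fitTtlCount may differ in detail (e.g. month/year handling):

```rust
fn fit_ttl(total_seconds: u64) -> (u64, &'static str) {
    // Coarsest first: week, day, hour, minute.
    const UNITS: [(u64, &'static str); 4] =
        [(604_800, "w"), (86_400, "d"), (3_600, "h"), (60, "m")];
    for &(unit_secs, name) in &UNITS {
        if total_seconds % unit_secs == 0 && total_seconds / unit_secs <= 255 {
            return (total_seconds / unit_secs, name);
        }
    }
    (total_seconds, "s")
}

fn main() {
    assert_eq!(fit_ttl(120 * 60), (2, "h")); // 120m -> 2h
    assert_eq!(fit_ttl(24 * 3_600), (1, "d")); // 24h -> 1d
    assert_eq!(fit_ttl(7 * 86_400), (1, "w")); // 7d -> 1w
}
```

Note how 120m normalizes to 2h even though 120 already fits in a byte, which is exactly the early return this commit removes.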

* Match Go BuildInfo metric: correct name and add missing labels

Go uses SeaweedFS_build_info (Namespace=SeaweedFS, Subsystem=build,
Name=info) with labels [version, commit, sizelimit, goos, goarch].
Rust had SeaweedFS_volumeServer_buildInfo with only [version].

* Match Go HTTP handlers: fix UploadResult fields, DiskStatus JSON, chunk manifest ETag

- UploadResult.mime: add skip_serializing_if to omit empty MIME (Go uses omitempty)
- UploadResult.contentMd5: only include when request provided Content-MD5 header
- Content-MD5 response header: only set when request provided it
- DiskStatuses: use camelCase field names (percentFree, percentUsed, diskType)
  to match Go's protobuf JSON marshaling
- Chunk manifest: preserve needle ETag in expanded response headers

* Match Go volume: fix version(), integrity check, scrub, and commit_compact

- version(): use self.version() instead of self.super_block.version in
  read_all_needles, check_volume_data_integrity, scan_raw_needles_from
  to respect volumeInfo.version override
- check_volume_data_integrity: initialize healthy_index_size to idx_size
  (matching Go) and continue on EOF instead of returning error
- scrub(): count deleted needles in total_read since they still occupy
  space in the .dat file (matches Go's totalRead += actualSize for deleted)
- commit_compact: clean up .cpd/.cpx files on makeup_diff failure
  (matches Go's error path cleanup)

* Match Go write queue: add 4MB batch byte limit

Go's startWorker breaks the batch at either 128 requests or 4MB of
accumulated write data. Rust only had the 128-request limit, allowing
large writes to accumulate unbounded latency.
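
A minimal sketch of the dual batch limit (128 requests or 4 MB of payload), using a VecDeque in place of the real async channel; whether the request that crosses the byte limit joins the batch is an assumption here:

```rust
use std::collections::VecDeque;

const MAX_BATCH_REQUESTS: usize = 128;
const MAX_BATCH_BYTES: usize = 4 * 1024 * 1024;

fn take_batch(queue: &mut VecDeque<Vec<u8>>) -> Vec<Vec<u8>> {
    let mut batch = Vec::new();
    let mut bytes = 0usize;
    while let Some(front) = queue.front() {
        if batch.len() >= MAX_BATCH_REQUESTS {
            break;
        }
        // Always admit the first request so oversized writes still progress.
        if !batch.is_empty() && bytes + front.len() > MAX_BATCH_BYTES {
            break;
        }
        bytes += front.len();
        batch.push(queue.pop_front().unwrap());
    }
    batch
}

fn main() {
    // 200 small requests: the 128-request cap applies.
    let mut q: VecDeque<Vec<u8>> = (0..200).map(|_| vec![0u8; 10]).collect();
    assert_eq!(take_batch(&mut q).len(), MAX_BATCH_REQUESTS);
    // Three 3 MB requests: the 4 MB byte cap stops the batch after one.
    let mut q: VecDeque<Vec<u8>> = (0..3).map(|_| vec![0u8; 3 << 20]).collect();
    assert_eq!(take_batch(&mut q).len(), 1);
}
```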

* Add TTL normalization tests for Go parity verification

Test that fit_ttl_count normalizes 120m→2h, 24h→1d, 7d→1w even
when count fits in a byte, matching Go's fitTtlCount behavior.

* Match Go FindFreeLocation: account for EC shards in free slot calculation

Go's free volume count subtracts both regular volumes and EC volumes
from max_volume_count. Rust was only counting regular volumes, which
could over-report available slots when EC shards are mounted.

* Match Go EC volume: mark deletions in .ecx and replay .ecj at startup

Go's DeleteNeedleFromEcx marks needles as deleted in the .ecx index
in-place (writing TOMBSTONE_FILE_SIZE at the size field) in addition
to appending to the .ecj journal. Go's RebuildEcxFile replays .ecj
entries into .ecx on startup, then removes the .ecj file.

Rust was only appending to .ecj without marking .ecx, which meant
deleted EC needles remained readable via .ecx binary search. This
fix:
- Opens .ecx in read/write mode (was read-only)
- Adds mark_needle_deleted_in_ecx: binary search + in-place write
- Calls it from journal_delete before appending to .ecj
- Adds rebuild_ecx_from_journal: replays .ecj into .ecx on startup

* Match Go check_all_ec_shards_deleted: use MAX_SHARD_COUNT instead of hardcoded 14

Go's TotalShardsCount is DataShardsCount + ParityShardsCount = 14 by
default, but custom EC configs via .vif can have more shards (up to
MaxShardCount = 32). Using MAX_SHARD_COUNT ensures all shard files
are checked regardless of EC configuration.

* Match Go EC locate: subtract 1 from shard size and use datFileSize override

Go's LocateEcShardNeedleInterval passes shard.ecdFileSize-1 to
LocateData (shards are padded, -1 avoids overcounting large block
rows). When datFileSize is known, Go uses datFileSize/DataShards
instead. Rust was passing the raw shard file size without adjustment.
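
The size adjustment described above can be sketched as (function name is illustrative):

```rust
// Shards are padded, so ecdFileSize - 1 avoids counting a phantom trailing
// large-block row; when the original .dat size is known it takes precedence.
fn effective_shard_size(ecd_file_size: u64, dat_file_size: Option<u64>, data_shards: u64) -> u64 {
    match dat_file_size {
        Some(dat) => dat / data_shards,
        None => ecd_file_size.saturating_sub(1),
    }
}

fn main() {
    assert_eq!(effective_shard_size(1001, None, 10), 1000);
    // datFileSize override: divide across the data shards instead.
    assert_eq!(effective_shard_size(5000, Some(40_000), 10), 4000);
}
```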

* Fix TTL parsing and DiskStatus field names to match Go exactly

TTL::read: Go's ReadTTL preserves the original unit (7d stays 7d,
not 1w) and errors on count > 255. The previous normalization change
was incorrect — Go only normalizes internally via fitTtlCount, not
during string parsing.

DiskStatus: Go uses encoding/json on protobuf structs, which reads
the json struct tags (snake_case: percent_free, percent_used,
disk_type), not the protobuf JSON names (camelCase). Revert to
snake_case to match Go's actual output.

* Fix heartbeat: check leader != current master before redirect, process duplicated UUIDs first

Match Go's volume_grpc_client_to_master.go behavior:
1. Only trigger leader redirect when the leader address differs from the
   current master (prevents unnecessary reconnect loops when master confirms
   its own address).
2. Process duplicated_uuids before leader redirect check, matching Go's
   ordering where duplicate UUID detection takes priority.

* Remove SetState version check to match Go behavior

Go's SetState unconditionally applies the state without any version
mismatch check. The Rust version had an extra optimistic concurrency
check that would reject valid requests from Go clients that don't
track versions.

* Fix TTL::read() to normalize via fit_ttl_count matching Go's ReadTTL

Go's ReadTTL calls fitTtlCount which converts to seconds and normalizes
to the coarsest unit that fits in a byte count (e.g. 120m->2h, 7d->1w,
24h->1d). The Rust version was preserving the original unit, producing
different binary encodings on disk and in heartbeat messages.

* Always return Content-MD5 header and JSON field on successful writes

Go always sets Content-MD5 in the response regardless of whether the
request included it. The Rust version was conditionally including it
only when the request provided Content-MD5.

* Include name and size in UploadResult JSON even when empty/zero

Go's encoding/json always includes empty strings and zero values in
the upload response. The Rust version was using skip_serializing_if
to omit them, causing JSON structure differences.

* Include deleted needles in scan_raw_needles_from to match Go

Go's ScanVolumeFileFrom visits ALL needles including deleted ones.
Skipping deleted entries during incremental copy would cause tombstones
to not be propagated, making deleted files reappear on the receiving side.

* Match Go NeedleMap.Delete: always write tombstone to idx file

Go's NeedleMap.Delete unconditionally writes a tombstone entry to the
idx file and updates metrics, even if the needle doesn't exist or is
already deleted. This is important for replication where every delete
operation must produce an idx write. The Rust version was skipping the
tombstone write for non-existent or already-deleted needles.

* Limit MIME type to 255 bytes matching Go's CreateNeedleFromRequest

* Title-case Seaweed-* pair keys to match Go HTTP header canonicalization

* Unify DiskType::Hdd into HardDrive to match Go's single HardDriveType

* Skip tombstone entries in walk_ecx_stats total_size matching Go's Raw()

* Return EMPTY TTL when computed seconds is zero matching Go's fitTtlCount

* Include disk-space-low in Volume.is_read_only() matching Go

* Log error on CIDR parse failure in whitelist matching Go's glog.Errorf

* Log cookie mismatch in gRPC Query matching Go's V(0).Infof

* Fix is_expired volume_size comparison to use < matching Go

Go checks `volumeSize < super_block.SuperBlockSize` (strict less-than),
but Rust used `<=`. This meant Rust would fail to expire a volume that
is exactly SUPER_BLOCK_SIZE bytes.

* Apply Go's JWT expiry defaults: 10s write, 60s read

Go calls v.SetDefault("jwt.signing.expires_after_seconds", 10) and
v.SetDefault("jwt.signing.read.expires_after_seconds", 60). Rust
defaulted to 0 for both, which meant tokens would never expire when
security.toml has a signing key but omits expires_after_seconds.

* Stop [grpc.volume].ca from overriding [grpc].ca matching Go

Go reads the gRPC CA file only from config.GetString("grpc.ca"), i.e.
the [grpc] section. The [grpc.volume] section only provides cert and
key. Rust was also reading ca from [grpc.volume] which would silently
override the [grpc].ca value when both were present.

* Fix free_volume_count to use EC shard count matching Go

Was counting EC volumes instead of EC shards, which underestimates EC
space usage. One EC volume with 14 shards uses ~1.4 volume slots, not 1.
Now uses Go's formula: ((max - volumes) * DataShardsCount - ecShardCount) / DataShardsCount.
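
The quoted formula, assuming the default erasure coding layout of 10 data shards (so each EC shard consumes 1/10 of a volume slot):

```rust
const DATA_SHARDS_COUNT: i64 = 10;

fn free_volume_count(max_volume_count: i64, volume_count: i64, ec_shard_count: i64) -> i64 {
    ((max_volume_count - volume_count) * DATA_SHARDS_COUNT - ec_shard_count) / DATA_SHARDS_COUNT
}

fn main() {
    // One EC volume's 14 shards consume ~1.4 slots: 5 raw free slots
    // minus 14 shards leaves 36/10 = 3 whole free slots.
    assert_eq!(free_volume_count(10, 5, 14), 3);
    assert_eq!(free_volume_count(10, 5, 0), 5);
}
```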

* Include preallocate in compaction space check matching Go

Go uses max(preallocate, estimatedCompactSize) for the free space check.
Rust was only using the estimated volume size, which could start a
compaction that fails mid-way if preallocate exceeds the volume size.

* Check gzip magic bytes before setting Content-Encoding matching Go

Go checks both Accept-Encoding contains "gzip" AND IsGzippedContent
(data starts with 0x1f 0x8b) before setting Content-Encoding: gzip.
Rust only checked Accept-Encoding, which could incorrectly declare
gzip encoding for non-gzip compressed data.
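
A sketch of the combined check (function name is illustrative):

```rust
// The client must accept gzip AND the stored bytes must actually start
// with the gzip magic header 0x1f 0x8b before Content-Encoding is set.
fn should_set_gzip_encoding(accept_encoding: &str, data: &[u8]) -> bool {
    accept_encoding.contains("gzip") && data.starts_with(&[0x1f, 0x8b])
}

fn main() {
    assert!(should_set_gzip_encoding("gzip, deflate", &[0x1f, 0x8b, 0x08]));
    // Client accepts gzip, but the stored data is not gzip-compressed.
    assert!(!should_set_gzip_encoding("gzip", b"plain text"));
    // Gzip data, but the client did not ask for it.
    assert!(!should_set_gzip_encoding("identity", &[0x1f, 0x8b]));
}
```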

* Only set upload response name when needle HasName matching Go

Go checks reqNeedle.HasName() before setting ret.Name. Rust always set
the name from the filename variable, which could return the fid portion
of the path as the name for raw PUT requests without a filename.

* Treat MaxVolumeCount==0 as unlimited matching Go's hasFreeDiskLocation

Go's hasFreeDiskLocation returns true immediately when MaxVolumeCount
is 0, treating it as unlimited. Rust was computing effective_free as
<= 0 for max==0, rejecting the location. This could fail volume
creation during early startup before the first heartbeat adjusts max.

* Read lastAppendAtNs from deleted V3 entries in integrity check

Go's doCheckAndFixVolumeData reads AppendAtNs from both live entries
(verifyNeedleIntegrity) and deleted tombstones (verifyDeletedNeedleIntegrity).
Rust was skipping deleted entries, which could result in a stale
last_append_at_ns if the last index entry is a deletion.

* Return empty body for empty/oversized range requests matching Go

Go's ProcessRangeRequest returns nil (empty body, 200 OK) when
parsed ranges are empty or combined range size exceeds total content
size. The Rust buffered path incorrectly returned the full file data
for both cases. The streaming path already handled this correctly.

* Dispatch ScrubEcVolume by mode matching Go's INDEX/LOCAL/FULL

Go's ScrubEcVolume switches on mode: INDEX calls v.ScrubIndex()
(ecx integrity only), LOCAL calls v.ScrubLocal(), FULL calls
vs.store.ScrubEcVolume(). Rust was ignoring the mode and always
running verify_ec_shards. Now INDEX mode checks ecx index integrity
(sorted overlap detection + file size validation) without shard I/O,
while LOCAL/FULL modes run the existing shard verification.

* Fix TTL test expectation: 7d normalizes to 1w matching Go's fitTtlCount

Go's ReadTTL calls fitTtlCount which normalizes to the coarsest unit
that fits: 7 days = 1 week, so "7d" becomes {Count:1, Unit:Week}
which displays as "1w". Both Go and Rust normalize identically.

* Add version mismatch check to SetState matching Go's State.Update

Go's State.Update compares the incoming version with the stored
version and returns "version mismatch" error if they differ. This
provides optimistic concurrency control. The Rust implementation
was accepting any version unconditionally.

* Use unquoted keys in Query JSON output matching Go's json.ToJson

Go's json.ToJson produces records with unquoted keys like
{score:12} not {"score":12}. This is a custom format used
internally by SeaweedFS for query results.

* Fix TTL test expectation in VolumeNeedleStatus: 7d normalizes to 1w

Same normalization as the HTTP test: Go's ReadTTL calls fitTtlCount
which converts 7 days to 1 week.

* Include ETag header in 304 Not Modified responses matching Go behavior

Go sets ETag on the response writer (via SetEtag) before the
If-Modified-Since and If-None-Match conditional checks, so both
304 response paths include the ETag header. The Rust implementation
was only adding ETag to 200 responses.

* Remove needle-name fallback in chunk manifest filename resolution

Go's tryHandleChunkedFile only falls back from URL filename to
manifest name. Rust had an extra fallback to needle.name that
Go does not perform, which could produce different
Content-Disposition filenames for chunk manifests.

* Validate JWT nbf (Not Before) claim matching Go's jwt-go/v5

Go's jwt.ParseWithClaims validates the nbf claim when present,
rejecting tokens whose nbf is in the future. The Rust jsonwebtoken
crate defaults validate_nbf to false, so tokens with future nbf
were incorrectly accepted.

* Set isHeartbeating to true at startup matching Go's VolumeServer init

Go unconditionally sets isHeartbeating: true in the VolumeServer
struct literal. Rust was starting with false when masters are
configured, causing /healthz to return 503 until the first
heartbeat succeeds.

* Call store.close() on shutdown matching Go's Shutdown()

Go's Shutdown() calls vs.store.Close() which closes all volumes
and flushes file handles. The Rust server was relying on process
exit for cleanup, which could leave data unflushed.

* Include server ID in maintenance mode error matching Go's format

Go returns "volume server %s is in maintenance mode" with the
store ID. Rust was returning a generic "maintenance mode" message.

* Fix DiskType test: use HardDrive variant matching Go's HddType=""

Go maps both "" and "hdd" to HardDriveType (empty string). The
Rust enum variant is HardDrive, not Hdd. The test referenced a
nonexistent Hdd variant causing compilation failure.

* Do not include ETag in 304 responses matching Go's GetOrHeadHandler

Go sets ETag at L235 AFTER the If-Modified-Since and If-None-Match
304 return paths, so Go's 304 responses do not include the ETag header.
The Rust code was incorrectly including ETag in both 304 response paths.

* Return 400 on malformed query strings in PostHandler matching Go's ParseForm

Go's r.ParseForm() returns HTTP 400 with "form parse error: ..." when
the query string is malformed. Rust was silently falling back to empty
query params via unwrap_or_default().

* Load EC volume version from .vif matching Go's NewEcVolume

Go sets ev.Version = needle.Version(volumeInfo.Version) from the .vif
file. Rust was always using Version::current() (V3), which would produce
wrong needle actual size calculations for volumes created with V1 or V2.

* Sync .ecx file before close matching Go's EcVolume.Close

Go calls ev.ecxFile.Sync() before closing to ensure in-place deletion
marks are flushed to disk. Without this, deletion marks written via
MarkNeedleDeleted could be lost on crash.

* Validate SuperBlock extra data size matching Go's Bytes() guard

Go checks extraSize > 256*256-2 and calls glog.Fatalf to prevent
corrupt super block headers. Rust was silently truncating via u16 cast,
which would write an incorrect extra_size field.
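
A sketch of the guard, returning an error where Go calls glog.Fatalf (names are illustrative):

```rust
// The extra-data length is stored in a u16 field (2 bytes reserved), so
// anything over 256*256 - 2 must be rejected explicitly instead of being
// silently truncated by an `as u16` cast.
const MAX_EXTRA_SIZE: usize = 256 * 256 - 2;

fn encode_extra_size(extra: &[u8]) -> Result<u16, String> {
    if extra.len() > MAX_EXTRA_SIZE {
        return Err(format!("extra data too large: {} > {}", extra.len(), MAX_EXTRA_SIZE));
    }
    Ok(extra.len() as u16)
}

fn main() {
    assert_eq!(encode_extra_size(&[0u8; 10]), Ok(10));
    // 70_000 exceeds the 65_534-byte limit and must be rejected.
    assert!(encode_extra_size(&vec![0u8; 70_000]).is_err());
}
```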

* Update quinn-proto 0.11.13 -> 0.11.14 to fix GHSA-6xvm-j4wr-6v98

Fixes Dependency Review CI failure: quinn-proto < 0.11.14 is vulnerable
to unauthenticated remote DoS via panic in QUIC transport parameter
parsing.

* Skip TestMultipartUploadUsesFormFieldsForTimestampAndTTL for Go server

Go's r.FormValue() cannot read multipart text fields after
r.MultipartReader() consumes the body, so ts/ttl sent as multipart
form fields only work with the Rust volume server. Skip this test
when VOLUME_SERVER_IMPL != "rust" to fix CI failure.

* Flush .ecx in EC volume sync_to_disk matching Go's Sync()

Go's EcVolume.Sync() flushes both the .ecj journal and the .ecx index
to disk. The Rust version only flushed .ecj, leaving in-place deletion
marks in .ecx unpersisted until close(). This could cause data
inconsistency if the server crashes after marking a needle deleted in
.ecx but before close().

* Remove .vif file in EC volume destroy matching Go's Destroy()

Go's EcVolume.Destroy() removes .ecx, .ecj, and .vif files. The Rust
version only removed .ecx and .ecj, leaving orphaned .vif files on
disk after EC volume destruction (e.g., after TTL expiry).

* Fix is_expired to use <= for SuperBlockSize check matching Go

Go checks contentSize <= SuperBlockSize to detect empty volumes (no
needles). Rust used < which would incorrectly allow a volume with
exactly SuperBlockSize bytes (header only, no data) to proceed to
the TTL expiry check and potentially be marked as expired.

* Fix read_append_at_ns to read timestamps from tombstone entries

Go reads the full needle body for all entries including tombstones
(deleted needles with size=0) to extract the actual AppendAtNs
timestamp. The Rust version returned 0 early for size <= 0 entries,
which would cause the binary search in incremental copy to produce
incorrect results for positions containing deleted needles.

Now uses get_actual_size to compute the on-disk size (which handles
tombstones correctly) and only returns 0 when the actual size is 0.

* Add X-Request-Id response header matching Go's requestIDMiddleware

Go sets both X-Request-Id and x-amz-request-id response headers.
The Rust server only set x-amz-request-id, missing X-Request-Id.

* Add skip_serializing_if for UploadResult name and size fields

Go's UploadResult uses json:"name,omitempty" and json:"size,omitempty",
omitting these fields from JSON when they are zero values (empty
string / 0). The Rust struct always serialized them, producing
"name":"" and "size":0 where Go would omit them.

* Support JSONP/pretty-print for write success responses

Go's writeJsonQuiet checks for callback (JSONP) and pretty query
parameters on all JSON responses including write success. The Rust
write success path used axum::Json directly, bypassing JSONP and
pretty-print support. Now uses json_result_with_query to match Go.

* Include actual limit in file size limit error message

Go returns "file over the limited %d bytes" with the actual limit
value included. Rust returned a generic "file size limit exceeded"
without the limit value, making it harder to debug.

* Extract extension from 2-segment URL paths for image operations

Go's parseURLPath extracts the file extension from all URL formats
including 2-segment paths like /vid,fid.jpg. The Rust version only
handled 3-segment paths (/vid/fid/filename.ext), so extensions in
2-segment paths were lost. This caused image resize/crop operations
requested via query params to be silently skipped for those paths.

* Add size_hint to TrackedBody so throttled downloads get Content-Length

TrackedBody (used for download throttling) did not implement
size_hint(), causing HTTP/1.1 to fall back to chunked transfer
encoding instead of setting Content-Length. Go always sets
Content-Length explicitly for non-range responses.

* Add Last-Modified, pairs, and S3 headers to chunk manifest responses

Go sets Last-Modified, needle pairs, and S3 pass-through headers on
the response writer BEFORE calling tryHandleChunkedFile. Since the
Rust chunk manifest handler created fresh response headers and
returned early, these headers were missing from chunk manifest
responses. Now passes last_modified_str into the chunk manifest
handler and applies pairs and S3 pass-through query params
(response-cache-control, response-content-encoding, etc.) to the
chunk manifest response headers.

* Fix multipart fallback to use first part data when no filename

Go reads the first part's data unconditionally, then looks for a
part with a filename. If none found, Go uses the first part's data
(with empty filename). Rust only captured parts with filenames, so
when no part had a filename it fell back to the raw multipart body
bytes (including boundary delimiters), producing corrupt needle data.

* Set HasName and HasMime flags for empty values matching Go

Go's CreateNeedleFromRequest sets HasName and HasMime flags even when
the filename or MIME type is empty (len < 256 is true for len 0).
Rust skipped empty values, causing the on-disk needle format to
differ: Go-written needles include extra bytes for the empty name/mime
size fields, changing the serialized needle size in the idx entry.
This ensures binary format compatibility between Go and Rust servers.

* Add is_stopping guard to vacuum_volume_commit matching Go

Go's CommitCompactVolume (store_vacuum.go L53-54) checks
s.isStopping before committing compaction to prevent file
swaps during shutdown. The Rust handler was missing this
check, which could allow compaction commits while the
server is stopping.

* Remove disk_type from required status fields since Go omits it

Go's default DiskType is "" (HardDriveType), and protobuf's omitempty
tag causes empty strings to be dropped from JSON output.

* test: honor rust env in dual volume harness

* grpc: notify master after volume lifecycle changes

* http: proxy to replicas before download-limit timeout

* test: pass readMode to rust volume harnesses

* fix store free-location predicate selection

* fix volume copy disk placement and heartbeat notification

* fix chunk manifest delete replication

* fix write replication to survive client disconnects

* fix download limit proxy and wait flow

* fix crop gating for streamed reads

* fix upload limit wait counter behavior

* fix chunk manifest image transforms

* fix has_resize_ops to check width/height > 0 instead of is_some()

Go's shouldResizeImages condition is `width > 0 || height > 0`, so
`?width=0` correctly evaluates to false. Rust was using `is_some()`
which made `?width=0` evaluate to true, unnecessarily disabling
streaming reads for those requests.

* fix Content-MD5 to only compute and return when provided by client

Go only computes the MD5 of uncompressed data when a Content-MD5
header or multipart field is provided. Rust was always computing and
returning it. Also fix the mismatch error message to include size,
matching Go's format.

* fix save_vif to compute ExpireAtSec from TTL

Go's SaveVolumeInfo always computes ExpireAtSec = now + ttlSeconds
when the volume has a TTL. The save_vif path (used by set_read_only
and set_writable) was missing this computation, causing .vif files
to be written without the correct expiration timestamp for TTL volumes.

* fix set_writable to not modify no_write_can_delete

Go's MarkVolumeWritable only sets noWriteOrDelete=false and persists.
Rust was additionally setting no_write_can_delete=has_remote_file,
which could incorrectly change the write mode for remote-file volumes
when the master explicitly asks to make the volume writable.

* fix write_needle_blob_and_index to error on too-small V3 blob

Go returns an error when the needle blob is too small for timestamp
patching. Rust was silently skipping the patch and writing the blob
with a stale/zero timestamp, which could cause data integrity issues
during incremental replication that relies on AppendAtNs ordering.

* fix VolumeEcShardsToVolume to validate dataShards range

Go validates that dataShards is > 0 and <= MaxShardCount before
proceeding with EC-to-volume reconstruction. Without this check,
a zero or excessively large data_shards value could cause confusing
downstream failures.

* fix destroy to use VolumeError::NotEmpty instead of generic Io error

The dedicated NotEmpty variant exists in the enum but was not being
used. This makes error matching consistent with Go's ErrVolumeNotEmpty.

* fix SetState to persist state to disk with rollback on failure

Go's State.Update saves VolumeServerState to a state.pb file after
each SetState call, and rolls back the in-memory state if persistence
fails. Rust was only updating in-memory atomics, so maintenance mode
would be lost on server restart. Now saves protobuf-encoded state.pb
and loads it on startup.

* fix VolumeTierMoveDatToRemote to close local dat backend after upload

Go calls v.LoadRemoteFile() after saving volume info, which closes
the local DataBackend before transitioning to remote storage. Without
this, the volume holds a stale file handle to the deleted local .dat
file, causing reads to fail until server restart.

* fix VolumeTierMoveDatFromRemote to close remote dat backend after download

Go calls v.DataBackend.Close() and sets DataBackend=nil after removing
the remote file reference. Without this, the stale remote backend
state lingers and reads may not discover the newly downloaded local
.dat file until server restart.

* fix redirect to use internal url instead of public_url

Go's proxyReqToTargetServer builds the redirect Location header from
loc.Url (the internal URL), not publicUrl. Using public_url could
cause redirect failures when internal and external URLs differ.

* fix redirect test and add state_file_path to integration test

Update redirect unit test to expect internal url (matching the
previous fix). Add missing state_file_path field to the integration
test VolumeServerState constructor.

* fix FetchAndWriteNeedle to await all writes before checking errors

Go uses a WaitGroup to await all writes (local + replicas) before
checking errors. Rust was short-circuiting on local write failure,
which could leave replica writes in-flight without waiting for
completion.

* fix shutdown to send deregister heartbeat before pre_stop delay

Go's StopHeartbeat() closes stopChan immediately on interrupt, causing
the heartbeat goroutine to send the deregister heartbeat right away,
before the preStopSeconds delay. Rust was only setting is_stopping=true
without waking the heartbeat loop, so the deregister was delayed until
after the pre_stop sleep. Now we call volume_state_notify.notify_one()
to wake the heartbeat immediately.

* fix heartbeat response ordering to check duplicate UUIDs first

Go processes heartbeat responses in this order: DuplicatedUuids first,
then volume options (prealloc/size limit), then leader redirect. Rust
was applying volume options before checking for duplicate UUIDs, which
meant volume option changes would take effect even when the response
contained a duplicate UUID error that should cause an immediate return.

* the test thread was blocked

* fix(deps): update aws-lc-sys 0.38.0 → 0.39.0 to resolve security advisories

Bumps aws-lc-rs 1.16.1 → 1.16.2, pulling in aws-lc-sys 0.39.0 which
fixes GHSA-394x-vwmw-crm3 (X.509 Name Constraints wildcard/unicode
bypass) and GHSA-9f94-5g5w-gf6r (CRL Distribution Point scope check
logic error).

* fix: match Go Content-MD5 mismatch error message format

Go uses "Content-MD5 did not match md5 of file data expected [X]
received [Y] size Z" while Rust had a shorter format. Match the
exact Go error string so clients see identical messages.

* fix: match Go Bearer token length check (> 7, not >= 7)

Go requires len(bearer) > 7 ensuring at least one char after
"Bearer ". Rust used >= 7 which would accept an empty token.

* fix(deps): drop legacy rustls 0.21 to resolve rustls-webpki GHSA-pwjx-qhcg-rvj4

aws-sdk-s3's default "rustls" feature enables tls-rustls in
aws-smithy-runtime, which pulls in legacy-rustls-ring (rustls 0.21
→ rustls-webpki 0.101.7, moderate CRL advisory). Replace with
explicit default-https-client which uses only rustls 0.23 /
rustls-webpki 0.103.9.

* fix: use uploaded filename for auto-compression extension detection

Go extracts the file extension from pu.FileName (the uploaded
filename) for auto-compression decisions. Rust was using the URL
path, which typically has no extension for SeaweedFS file IDs.

* fix: add CRC legacy Value() backward-compat check on needle read

Go double-checks CRC: n.Checksum != crc && uint32(n.Checksum) !=
crc.Value(). The Value() path is a deprecated transform for compat
with seaweed versions prior to commit 056c480eb. Rust had the
legacy_value() method but wasn't using it in validation.

* fix: remove /stats/* endpoints to match Go (commented out since L130)

Go's volume_server.go has the /stats/counter, /stats/memory, and
/stats/disk endpoints commented out (lines 130-134). Remove them
from the Rust router along with the now-unused whitelist_guard
middleware.

* fix: filter application/octet-stream MIME for chunk manifests

Go's tryHandleChunkedFile (L334) filters out application/octet-stream
from chunk manifest MIME types, falling back to extension-based
detection. Rust was returning the stored MIME as-is for manifests.

* fix: VolumeMarkWritable returns error before notifying master

Go returns early at L200 if MarkVolumeWritable fails, before
reaching the master notification at L206. Rust was notifying master
even on failure, creating inconsistent state where master thinks
the volume is writable but local marking failed.

* fix: check volume existence before maintenance in MarkReadonly/Writable

Go's VolumeMarkReadonly (L239-241) and VolumeMarkWritable (L253-255)
look up the volume first, then call makeVolumeReadonly/Writable which
checks maintenance. Rust was checking maintenance first, returning
"maintenance mode" instead of "not found" for missing volumes.

* feat: implement ScrubVolume mark_broken_volumes_readonly (PR #8360)

Add the mark_broken_volumes_readonly flag from PR #8360:
- Sync proto field (tag 3) to local volume_server.proto
- After scrubbing, if flag is set, call makeVolumeReadonly on each
  broken volume (notify master, mark local readonly, notify again)
- Collect errors via joined error semantics matching Go's errors.Join
- Factor out make_volume_readonly helper reused by both
  VolumeMarkReadonly and ScrubVolume

Also refactors VolumeMarkReadonly to use the shared helper.

* fix(deps): update rustls-webpki 0.103.9 → 0.103.10 (GHSA-pwjx-qhcg-rvj4)

CRL Distribution Point matching logic fix for moderate severity
advisory about CRLs not considered authoritative.

* test: update integration tests for removed /stats/* endpoints

Replace tests that expected /stats/* routes to return 200/401 with
tests confirming they now fall through to the store handler (400),
matching Go's commented-out stats endpoints.

* docs: fix misleading comment about default offset feature

The comment said "4-byte offsets unless explicitly built with 5-byte
support" but the default feature enables 5bytes. This is intentional
for production parity with Go -tags 5BytesOffset builds. Fix the
comment to match reality.
2026-03-26 17:24:35 -07:00


Execution Plan: SeaweedFS Volume Server — Go to Rust Port

Scope Summary

| Component | Go Source | Lines (non-test) | Description |
|---|---|---|---|
| CLI & startup | weed/command/volume.go | 476 | ~40 CLI flags, server bootstrap |
| HTTP server + handlers | weed/server/volume_server*.go | 1,517 | Struct, routes, read/write/delete handlers |
| gRPC handlers | weed/server/volume_grpc_*.go | 3,073 | 40 RPC method implementations |
| Storage engine | weed/storage/ | 15,271 | Volumes, needles, index, compaction, EC, backend |
| Protobuf definitions | weed/pb/volume_server.proto | 759 | Service + message definitions |
| Shared utilities | weed/security/, weed/stats/, weed/util/ | ~2,000+ | JWT, TLS, metrics, helpers |
| Total | | ~23,000+ | |

Rust Crate & Dependency Strategy

seaweed-volume/
├── Cargo.toml
├── build.rs                    # protobuf codegen
├── proto/
│   ├── volume_server.proto     # copied from Go, adapted
│   └── remote.proto
├── src/
│   ├── main.rs                 # CLI entry point
│   ├── config.rs               # CLI flags + config
│   ├── server/
│   │   ├── mod.rs
│   │   ├── volume_server.rs    # VolumeServer struct + lifecycle
│   │   ├── http_handlers.rs    # HTTP route dispatch
│   │   ├── http_read.rs        # GET/HEAD handlers
│   │   ├── http_write.rs       # POST/PUT handlers
│   │   ├── http_delete.rs      # DELETE handler
│   │   ├── http_admin.rs       # /status, /healthz, /ui
│   │   ├── grpc_service.rs     # gRPC trait impl dispatch
│   │   ├── grpc_vacuum.rs
│   │   ├── grpc_copy.rs
│   │   ├── grpc_erasure_coding.rs
│   │   ├── grpc_tail.rs
│   │   ├── grpc_admin.rs
│   │   ├── grpc_read_write.rs
│   │   ├── grpc_batch_delete.rs
│   │   ├── grpc_scrub.rs
│   │   ├── grpc_tier.rs
│   │   ├── grpc_remote.rs
│   │   ├── grpc_query.rs
│   │   ├── grpc_state.rs
│   │   └── grpc_client_to_master.rs  # heartbeat
│   ├── storage/
│   │   ├── mod.rs
│   │   ├── store.rs            # Store (multi-disk manager)
│   │   ├── volume.rs           # Volume struct + lifecycle
│   │   ├── volume_read.rs
│   │   ├── volume_write.rs
│   │   ├── volume_compact.rs
│   │   ├── volume_info.rs
│   │   ├── needle/
│   │   │   ├── mod.rs
│   │   │   ├── needle.rs       # Needle struct + serialization
│   │   │   ├── needle_read.rs
│   │   │   ├── needle_write.rs
│   │   │   ├── needle_map.rs   # in-memory NeedleMap
│   │   │   ├── needle_value.rs
│   │   │   └── crc.rs
│   │   ├── super_block.rs
│   │   ├── idx/
│   │   │   ├── mod.rs
│   │   │   └── idx.rs          # .idx file format read/write
│   │   ├── needle_map_leveldb.rs
│   │   ├── types.rs            # NeedleId, Offset, Size, DiskType
│   │   ├── disk_location.rs    # DiskLocation per-directory
│   │   ├── erasure_coding/
│   │   │   ├── mod.rs
│   │   │   ├── ec_volume.rs
│   │   │   ├── ec_shard.rs
│   │   │   ├── ec_encoder.rs   # Reed-Solomon encoding
│   │   │   └── ec_decoder.rs
│   │   └── backend/
│   │       ├── mod.rs
│   │       ├── disk.rs
│   │       └── s3_backend.rs   # tiered storage to S3
│   ├── topology/
│   │   └── volume_layout.rs    # replication placement
│   ├── security/
│   │   ├── mod.rs
│   │   ├── guard.rs            # whitelist + JWT gate
│   │   ├── jwt.rs
│   │   └── tls.rs
│   ├── stats/
│   │   ├── mod.rs
│   │   └── metrics.rs          # Prometheus counters/gauges
│   └── util/
│       ├── mod.rs
│       ├── grpc.rs
│       ├── http.rs
│       └── file.rs
└── tests/
    ├── integration/
    │   ├── http_read_test.rs
    │   ├── http_write_test.rs
    │   ├── grpc_test.rs
    │   └── storage_test.rs
    └── unit/
        ├── needle_test.rs
        ├── idx_test.rs
        ├── super_block_test.rs
        └── ec_test.rs

Key Rust dependencies

| Purpose | Crate |
|---|---|
| Async runtime | tokio |
| gRPC | tonic + prost |
| HTTP server | hyper + axum |
| CLI parsing | clap (derive) |
| Prometheus metrics | prometheus |
| JWT | jsonwebtoken |
| TLS | rustls + tokio-rustls |
| LevelDB | rusty-leveldb or rocksdb |
| Reed-Solomon EC | reed-solomon-erasure |
| Logging | tracing + tracing-subscriber |
| Config (security.toml) | toml + serde |
| CRC32 | crc32fast |
| Memory-mapped files | memmap2 |

Phased Execution Plan

Phase 1: Project Skeleton & Protobuf Codegen

Goal: Cargo project compiles, proto codegen works, CLI parses all flags.

Steps:

1.1. Create seaweed-volume/Cargo.toml with all dependencies listed above.

1.2. Copy volume_server.proto and remote.proto into proto/. Adjust package paths for Rust codegen.

1.3. Create build.rs using tonic-build to compile .proto files into Rust types.

1.4. Create src/main.rs with clap derive structs mirroring all 40 CLI flags from weed/command/volume.go:

  • --port (default 8080)
  • --port.grpc (default 0 → 10000+port)
  • --port.public (default 0 → same as port)
  • --ip (auto-detect)
  • --id (default empty → ip:port)
  • --publicUrl
  • --ip.bind
  • --master (default "localhost:9333")
  • --mserver (deprecated compat)
  • --preStopSeconds (default 10)
  • --idleTimeout (default 30)
  • --dataCenter
  • --rack
  • --index [memory|leveldb|leveldbMedium|leveldbLarge]
  • --disk [hdd|ssd|]
  • --tags
  • --dir (default temp dir)
  • --dir.idx
  • --max (default "8")
  • --whiteList
  • --minFreeSpacePercent (default "1")
  • --minFreeSpace
  • --images.fix.orientation (default false)
  • --readMode [local|proxy|redirect] (default "proxy")
  • --cpuprofile
  • --memprofile
  • --compactionMBps (default 0)
  • --maintenanceMBps (default 0)
  • --fileSizeLimitMB (default 256)
  • --concurrentUploadLimitMB (default 0)
  • --concurrentDownloadLimitMB (default 0)
  • --pprof (default false)
  • --metricsPort (default 0)
  • --metricsIp
  • --inflightUploadDataTimeout (default 60s)
  • --inflightDownloadDataTimeout (default 60s)
  • --hasSlowRead (default true)
  • --readBufferSizeMB (default 4)
  • --index.leveldbTimeout (default 0)
  • --debug (default false)
  • --debug.port (default 6060)

1.5. Implement the same flag validation logic from startVolumeServer():

  • Parse comma-separated --dir, --max, --minFreeSpace, --disk, --tags
  • Replicate single-value-to-all-dirs expansion
  • Validate count matches between dirs and limits
  • --mserver backward compat

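The expansion rule in 1.5 can be sketched as a small std-only helper; `expand_per_dir` is an illustrative name, not the final API:

```rust
// Sketch of the single-value-to-all-dirs expansion used when parsing the
// comma-separated --dir / --max / --minFreeSpace / --disk flags. If one value
// is given for N directories it is replicated; otherwise counts must match.
fn expand_per_dir(values: &str, dir_count: usize) -> Result<Vec<String>, String> {
    let parts: Vec<String> = values.split(',').map(|s| s.trim().to_string()).collect();
    if parts.len() == 1 {
        // One value applies to every directory, mirroring Go's behavior.
        return Ok(vec![parts[0].clone(); dir_count]);
    }
    if parts.len() != dir_count {
        return Err(format!("expected {} values, got {}", dir_count, parts.len()));
    }
    Ok(parts)
}

fn main() {
    // --dir /d1,/d2,/d3 --max 8  => max expands to ["8", "8", "8"]
    assert_eq!(expand_per_dir("8", 3).unwrap(), vec!["8", "8", "8"]);
    // Mismatched counts are rejected at startup.
    assert!(expand_per_dir("8,16", 3).is_err());
}
```
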
1.6. Test: cargo build succeeds. cargo run -- --help shows all flags. Proto types generated.

Verification: Run with --port 8080 --dir /tmp --master localhost:9333 — should parse without error and print config.


Phase 2: Core Storage Types & On-Disk Format

Goal: Read and write the SeaweedFS needle/volume binary format bit-for-bit compatible with Go.

Source files to port:

  • weed/storage/types/needle_types.go → src/storage/types.rs
  • weed/storage/needle/needle.go → src/storage/needle/needle.rs
  • weed/storage/needle/needle_read.go → src/storage/needle/needle_read.rs
  • weed/storage/needle/needle_write.go (partial) → src/storage/needle/needle_write.rs
  • weed/storage/needle/crc.go → src/storage/needle/crc.rs
  • weed/storage/needle/needle_value_map.go → src/storage/needle/needle_value.rs
  • weed/storage/super_block/super_block.go → src/storage/super_block.rs
  • weed/storage/idx/ → src/storage/idx/

Steps:

2.1. Fundamental types (types.rs):

  • NeedleId (u64), Offset (u32 or u64 depending on version), Size (i32, negative = deleted)
  • Cookie (u32)
  • DiskType enum (HDD, SSD, Custom)
  • Version constants (Version1=1, Version2=2, Version3=3, CurrentVersion=3)
  • Byte serialization matching Go's binary.BigEndian encoding

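A minimal sketch of the fundamental types, assuming the big-endian byte order noted above (the exact newtype shapes are a design choice, not prescribed by the Go source):

```rust
// Sketch of types.rs. NeedleId serializes big-endian to match Go's
// binary.BigEndian; Size is signed, with negative values marking deleted
// needles (tombstones).
type NeedleId = u64;

#[derive(Clone, Copy, PartialEq, Debug)]
struct Size(i32);

impl Size {
    fn is_deleted(self) -> bool {
        self.0 < 0
    }
}

fn needle_id_to_bytes(id: NeedleId) -> [u8; 8] {
    id.to_be_bytes() // matches Go's binary.BigEndian.PutUint64
}

fn needle_id_from_bytes(b: &[u8; 8]) -> NeedleId {
    u64::from_be_bytes(*b)
}

fn main() {
    let id: NeedleId = 0x0102_0304_0506_0708;
    let bytes = needle_id_to_bytes(id);
    assert_eq!(bytes[0], 0x01); // most significant byte first
    assert_eq!(needle_id_from_bytes(&bytes), id);
    assert!(Size(-1).is_deleted());
    assert!(!Size(10).is_deleted());
}
```
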
2.2. SuperBlock (super_block.rs):

  • 8-byte header: Version(1) + ReplicaPlacement(1) + TTL(2) + CompactRevision(2) + Reserved(2)
  • ReplicaPlacement struct with same/diff rack/dc counts
  • TTL struct with count + unit
  • Read/write from first 8 bytes of .dat file
  • Match exact byte layout from super_block.go

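The 8-byte header parse can be sketched directly from the layout above; byte-level details (especially the compact revision's endianness) should be verified against Go's super_block.go:

```rust
// Hedged sketch of reading the super block from the first 8 bytes of a .dat
// file: Version(1) + ReplicaPlacement(1) + TTL(2) + CompactRevision(2) +
// Reserved(2).
#[derive(Debug, PartialEq)]
struct SuperBlock {
    version: u8,
    replica_placement: u8, // decoded further into same-rack/diff-rack/diff-dc counts
    ttl: [u8; 2],          // count + unit, parsed by the TTL type
    compact_revision: u16,
}

fn parse_super_block(header: &[u8; 8]) -> SuperBlock {
    SuperBlock {
        version: header[0],
        replica_placement: header[1],
        ttl: [header[2], header[3]],
        compact_revision: u16::from_be_bytes([header[4], header[5]]),
    }
}

fn main() {
    let raw = [3u8, 0, 0, 0, 0, 7, 0, 0]; // version 3, compact revision 7
    let sb = parse_super_block(&raw);
    assert_eq!(sb.version, 3);
    assert_eq!(sb.compact_revision, 7);
}
```
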
2.3. Needle binary format (needle.rs, needle_read.rs):

  • Version 2/3 header: Cookie(4) + NeedleId(8) + Size(4)
  • Body: Data, Flags, Name, Mime, PairsSize, Pairs, LastModified, TTL, Checksum, AppendAtNs, Padding
  • CRC32 checksum (matching Go's crc32.ChecksumIEEE)
  • Padding to 8-byte alignment
  • Read path: read header → compute body length → read body → verify CRC

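The padding rule can be illustrated with a size computation; the field sizes below (16-byte header, 4-byte CRC, 8-byte AppendAtNs for V3) are assumptions for the sketch and must be confirmed against Go's needle constants:

```rust
// Illustrative computation of a V3 needle record's on-disk size, padded to an
// 8-byte boundary. Constants here are assumed, not copied from Go.
const NEEDLE_HEADER_SIZE: u64 = 16; // Cookie(4) + NeedleId(8) + Size(4)
const CHECKSUM_SIZE: u64 = 4;       // CRC32
const TIMESTAMP_SIZE: u64 = 8;      // AppendAtNs, V3 only
const PADDING: u64 = 8;

/// Total on-disk size of a V3 needle record, rounded up to the alignment.
fn actual_size_v3(data_size: u64) -> u64 {
    let unpadded = NEEDLE_HEADER_SIZE + data_size + CHECKSUM_SIZE + TIMESTAMP_SIZE;
    (unpadded + PADDING - 1) / PADDING * PADDING
}

fn main() {
    assert_eq!(actual_size_v3(1), 32); // 16 + 1 + 4 + 8 = 29, padded up to 32
    assert_eq!(actual_size_v3(4), 32); // 16 + 4 + 4 + 8 = 32, already aligned
}
```
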
2.4. Idx file format (idx/):

  • Fixed 16-byte records: NeedleId(8) + Offset(4) + Size(4)
  • Sequential append-only file
  • Walk/iterate all entries
  • Binary search not used (loaded into memory map)

2.5. NeedleMap (in-memory) (needle_map.rs):

  • HashMap<NeedleId, NeedleValue> where NeedleValue = {Offset, Size}
  • Load from .idx file on volume mount
  • Support Get, Set, Delete operations
  • Track file count, deleted count, deleted byte count

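Loading the .idx records into the map can be sketched as below. Treating a negative Size as the deletion marker is an assumption for illustration; the real tombstone encoding must match Go's idx code:

```rust
use std::collections::HashMap;

// Sketch of loading fixed 16-byte .idx records — NeedleId(8) + Offset(4) +
// Size(4), big-endian — into the in-memory NeedleMap, with counter bookkeeping.
#[derive(Clone, Copy, Debug, PartialEq)]
struct NeedleValue {
    offset: u32,
    size: i32,
}

#[derive(Default)]
struct NeedleMap {
    entries: HashMap<u64, NeedleValue>,
    file_count: u64,
    deleted_count: u64,
}

impl NeedleMap {
    fn load(idx_bytes: &[u8]) -> Self {
        let mut m = NeedleMap::default();
        for rec in idx_bytes.chunks_exact(16) {
            let id = u64::from_be_bytes(rec[0..8].try_into().unwrap());
            let offset = u32::from_be_bytes(rec[8..12].try_into().unwrap());
            let size = i32::from_be_bytes(rec[12..16].try_into().unwrap());
            if size < 0 {
                // Tombstone: a later record supersedes an earlier live entry.
                if m.entries.remove(&id).is_some() {
                    m.deleted_count += 1;
                }
            } else {
                m.entries.insert(id, NeedleValue { offset, size });
                m.file_count += 1;
            }
        }
        m
    }
}

fn main() {
    let mut idx = Vec::new();
    // live entry: id=1, offset=8, size=100
    idx.extend_from_slice(&1u64.to_be_bytes());
    idx.extend_from_slice(&8u32.to_be_bytes());
    idx.extend_from_slice(&100i32.to_be_bytes());
    // tombstone for id=1
    idx.extend_from_slice(&1u64.to_be_bytes());
    idx.extend_from_slice(&0u32.to_be_bytes());
    idx.extend_from_slice(&(-1i32).to_be_bytes());
    let m = NeedleMap::load(&idx);
    assert!(m.entries.get(&1).is_none());
    assert_eq!(m.deleted_count, 1);
}
```
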
2.6. Tests:

  • Unit test: write a needle to bytes → read it back → verify fields match
  • Unit test: write/read SuperBlock round-trip
  • Unit test: write/read idx entries round-trip
  • Cross-compat test: Use Go volume server to create a small volume with known data. Read it from Rust and verify all needles decoded correctly. (Keep test fixture .dat/.idx files in tests/fixtures/)

Phase 3: Volume Struct & Lifecycle

Goal: Mount, read from, write to, and unmount a volume.

Source files to port:

  • weed/storage/volume.go → src/storage/volume.rs
  • weed/storage/volume_read.go → src/storage/volume_read.rs
  • weed/storage/volume_write.go → src/storage/volume_write.rs
  • weed/storage/volume_loading.go
  • weed/storage/volume_vacuum.go → src/storage/volume_compact.rs
  • weed/storage/volume_info/volume_info.go → src/storage/volume_info.rs
  • weed/storage/volume_super_block.go

Steps:

3.1. Volume struct (volume.rs):

  • Fields: Id, dir, dataFile, nm (NeedleMap), SuperBlock, readOnly, lastModifiedTs, lastCompactIndexOffset, lastCompactRevision
  • noWriteOrDelete / noWriteCanDelete / readOnly state flags
  • File handles for .dat file (read + append)
  • Lock strategy: RwLock for concurrent reads, exclusive writes

3.2. Volume loading — exact logic from volume_loading.go:

  • Open .dat file, read SuperBlock from first 8 bytes
  • Load .idx file into NeedleMap
  • Handle .vif (VolumeInfo) JSON sidecar file
  • Set volume state based on SuperBlock + VolumeInfo

3.3. Volume read (volume_read.rs) — from volume_read.go:

  • ReadNeedle(needleId, cookie): lookup in NeedleMap → seek in .dat → read needle bytes → verify cookie + CRC → return data
  • Handle deleted needles (Size < 0)
  • ReadNeedleBlob(offset, size): raw blob read
  • ReadNeedleMeta(needleId, offset, size): read metadata only

3.4. Volume write (volume_write.rs) — from volume_write.go:

  • WriteNeedle(needle): serialize needle → append to .dat → update .idx → update NeedleMap
  • DeleteNeedle(needleId): mark as deleted in NeedleMap + append tombstone to .idx
  • File size limit check
  • Concurrent write serialization (mutex on write path)

3.5. Volume compaction (volume_compact.rs) — from volume_vacuum.go:

  • CheckCompact(): compute garbage ratio
  • Compact(): create new .dat/.idx, copy only live needles, update compact revision
  • CommitCompact(): rename compacted files over originals
  • CleanupCompact(): remove temp files
  • Throttle by compactionBytePerSecond

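The CheckCompact decision reduces to a garbage-ratio comparison; the 0.3 threshold below is only illustrative (in practice the threshold arrives with the master's vacuum request):

```rust
// Sketch of the CheckCompact garbage-ratio test: the fraction of bytes
// occupied by deleted needles relative to the volume's content size.
fn garbage_ratio(deleted_bytes: u64, content_size: u64) -> f64 {
    if content_size == 0 {
        return 0.0;
    }
    deleted_bytes as f64 / content_size as f64
}

fn should_compact(deleted_bytes: u64, content_size: u64, threshold: f64) -> bool {
    garbage_ratio(deleted_bytes, content_size) > threshold
}

fn main() {
    assert!(should_compact(40, 100, 0.3));  // 40% garbage exceeds threshold
    assert!(!should_compact(10, 100, 0.3)); // 10% garbage does not
    assert!(!should_compact(0, 0, 0.3));    // empty volume never compacts
}
```
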
3.6. Volume info (volume_info.rs):

  • Read/write .vif JSON sidecar
  • VolumeInfo protobuf struct mapping
  • Remote file references for tiered storage

3.7. Tests:

  • Mount a volume, write 100 needles, read them all back, verify content
  • Delete 50 needles, verify they return "deleted"
  • Compact, verify only 50 remain, verify content
  • Read Go-created volume fixtures

Phase 4: Store (Multi-Volume, Multi-Disk Manager)

Goal: Manage multiple volumes across multiple disk directories.

Source files to port:

  • weed/storage/store.go → src/storage/store.rs
  • weed/storage/disk_location.go → src/storage/disk_location.rs
  • weed/storage/store_ec.go
  • weed/storage/store_state.go

Steps:

4.1. DiskLocation (disk_location.rs):

  • Directory path, max volume count, min free space, disk type, tags
  • Load all volumes from directory on startup
  • Track free space, check writable

4.2. Store (store.rs):

  • Vector of DiskLocations
  • GetVolume(volumeId) → lookup across all locations
  • HasVolume(volumeId) check
  • AllocateVolume(...) — create new volume in appropriate location
  • DeleteVolume(...), MountVolume(...), UnmountVolume(...)
  • DeleteCollection(collection) — delete all volumes of a collection
  • Collect volume status for heartbeat
  • SetStopping(), Close()
  • Persistent state (maintenance mode) via store_state.go

4.3. Store state — VolumeServerState protobuf with maintenance flag, persisted to disk.

4.4. Tests:

  • Create store with 2 dirs, allocate volumes in each, verify load balancing
  • Mount/unmount/delete lifecycle
  • State persistence across restart

Phase 5: Erasure Coding

Goal: Full EC shard encode/decode/read/write/rebuild.

Source files to port:

  • weed/storage/erasure_coding/ (3,599 lines)

Steps:

5.1. EC volume + shard structs — EcVolume, EcShard with file handles for .ec00-.ec13 shard files + .ecx index + .ecj journal.

5.2. EC encoder — Reed-Solomon 10+4 (configurable) encoding using reed-solomon-erasure crate:

  • VolumeEcShardsGenerate: read .dat → split into data shards → compute parity → write .ec00-.ec13 + .ecx

5.3. EC decoder/reader — reconstruct data from any 10 of 14 shards:

  • EcShardRead: read range from a specific shard
  • Locate needle in EC volume via .ecx index
  • Handle cross-shard needle reads

5.4. EC shard operations:

  • Copy, delete, mount, unmount shards
  • VolumeEcShardsRebuild: rebuild missing shards from remaining
  • VolumeEcShardsToVolume: reconstruct .dat from EC shards
  • VolumeEcBlobDelete: mark deleted in EC journal
  • VolumeEcShardsInfo: report shard metadata

5.5. Tests:

  • Encode a volume → verify 14 shards created
  • Delete 4 shards → rebuild → verify data intact
  • Read individual needles from EC volume
  • Cross-compat with Go-generated EC shards

Phase 6: Backend / Tiered Storage

Goal: Support tiered storage to remote backends (S3, etc).

Source files to port:

  • weed/storage/backend/ (1,850 lines)

Steps:

6.1. Backend trait — abstract BackendStorage trait with ReadAt, WriteAt, Truncate, Close, Name.

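A minimal sketch of such a trait with a local-disk implementation; the method signatures here are simplified assumptions (the Go interface also carries sizes and errors differently):

```rust
use std::fs::{File, OpenOptions};
use std::io::{Read, Seek, SeekFrom, Write};

// Sketch of a BackendStorage abstraction mirroring the ReadAt/WriteAt/Name
// surface described above, with a portable seek-then-read disk implementation.
trait BackendStorage {
    fn read_at(&mut self, buf: &mut [u8], offset: u64) -> std::io::Result<usize>;
    fn write_at(&mut self, buf: &[u8], offset: u64) -> std::io::Result<usize>;
    fn name(&self) -> String;
}

struct DiskFile {
    file: File,
    path: String,
}

impl BackendStorage for DiskFile {
    fn read_at(&mut self, buf: &mut [u8], offset: u64) -> std::io::Result<usize> {
        self.file.seek(SeekFrom::Start(offset))?;
        self.file.read(buf)
    }
    fn write_at(&mut self, buf: &[u8], offset: u64) -> std::io::Result<usize> {
        self.file.seek(SeekFrom::Start(offset))?;
        self.file.write(buf)
    }
    fn name(&self) -> String {
        self.path.clone()
    }
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("backend_sketch.dat");
    let file = OpenOptions::new().create(true).read(true).write(true).open(&path)?;
    let mut backend = DiskFile { file, path: path.display().to_string() };
    backend.write_at(b"needle", 8)?;
    let mut buf = [0u8; 6];
    backend.read_at(&mut buf, 8)?;
    assert_eq!(&buf, b"needle");
    std::fs::remove_file(&path)?;
    Ok(())
}
```

On Unix, `std::os::unix::fs::FileExt::read_at`/`write_at` would avoid mutating the file cursor and allow `&self` methods; the seek-based version is kept here for portability.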
6.2. Disk backend — default local disk implementation.

6.3. S3 backend — upload .dat to S3, read ranges via S3 range requests.

6.4. Tier move operations:

  • VolumeTierMoveDatToRemote: upload .dat to remote, optionally delete local
  • VolumeTierMoveDatFromRemote: download .dat from remote

6.5. Tests:

  • Disk backend read/write round-trip
  • S3 backend with mock/localstack

Phase 7: Security Layer

Goal: JWT authentication, whitelist guard, TLS configuration.

Source files to port:

  • weed/security/guard.go → src/security/guard.rs
  • weed/security/jwt.go → src/security/jwt.rs
  • weed/security/tls.go → src/security/tls.rs

Steps:

7.1. Guard (guard.rs):

  • Whitelist IP check (exact match on r.RemoteAddr)
  • Wrap handlers with whitelist enforcement
  • UpdateWhiteList() for live reload

7.2. JWT (jwt.rs):

  • SeaweedFileIdClaims with fid field
  • Sign with HMAC-SHA256
  • Verify + decode with expiry check
  • Separate signing keys for read vs write
  • GetJwt(request) — extract from Authorization: Bearer header or jwt query param

7.3. TLS (tls.rs):

  • Load server TLS cert/key for gRPC and HTTPS
  • Load client TLS for mutual TLS
  • Read from security.toml config (same format as Go's viper config)

7.4. Tests:

  • JWT sign → verify round-trip
  • JWT with wrong key → reject
  • JWT with expired token → reject
  • JWT fid mismatch → reject
  • Whitelist allow/deny

Phase 8: Prometheus Metrics

Goal: Export same metric names as Go for dashboard compatibility.

Source files to port:

  • weed/stats/metrics.go (volume server counters/gauges/histograms)

Steps:

8.1. Define all Prometheus metrics matching Go names:

  • VolumeServerRequestCounter (labels: method, status)
  • VolumeServerRequestHistogram (labels: method)
  • VolumeServerInFlightRequestsGauge (labels: method)
  • VolumeServerInFlightUploadSize
  • VolumeServerInFlightDownloadSize
  • VolumeServerConcurrentUploadLimit
  • VolumeServerConcurrentDownloadLimit
  • VolumeServerHandlerCounter (labels: type — UploadLimitCond, DownloadLimitCond)
  • Read/Write/Delete request counters

8.2. Metrics HTTP endpoint on --metricsPort.

8.3. Optional push-based metrics loop (LoopPushingMetric).

8.4. Test: Verify metric names and labels match Go output.


Phase 9: HTTP Server & Handlers

Goal: All HTTP endpoints with exact same behavior as Go.

Source files to port:

  • weed/server/volume_server.go → src/server/volume_server.rs
  • weed/server/volume_server_handlers.go → src/server/http_handlers.rs
  • weed/server/volume_server_handlers_read.go → src/server/http_read.rs
  • weed/server/volume_server_handlers_write.go → src/server/http_write.rs
  • weed/server/volume_server_handlers_admin.go → src/server/http_admin.rs
  • weed/server/volume_server_handlers_helper.go (URL parsing, proxy, JSON responses)
  • weed/server/volume_server_handlers_ui.go → src/server/http_admin.rs

Steps:

9.1. URL path parsing — from handlers_helper.go:

  • Parse /<vid>,<fid> and /<vid>/<fid> patterns
  • Extract volume ID, file ID, filename, ext

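The parsing in 9.1 can be sketched as below. Edge cases handled by Go's parseURLPath (2-segment /vid/fid paths, commas inside filenames, the isLast flag) are omitted; note the commit history above shows the extension must be extracted from the /vid,fid.ext form too:

```rust
// Sketch of URL path parsing: split /<vid>,<fid>[.ext] and
// /<vid>/<fid>/<filename> forms into (volume id, file id, filename, extension).
fn parse_url_path(path: &str) -> Option<(String, String, String, String)> {
    let segments: Vec<&str> = path.trim_start_matches('/').split('/').collect();
    match segments.as_slice() {
        // /vid/fid/filename.ext
        [vid, fid, filename] => {
            let ext = filename
                .rsplit_once('.')
                .map(|(_, e)| format!(".{}", e))
                .unwrap_or_default();
            Some((vid.to_string(), fid.to_string(), filename.to_string(), ext))
        }
        // /vid,fid[.ext] — the extension must be split off the fid here as well
        [one] if one.contains(',') => {
            let (vid, fid_full) = one.split_once(',')?;
            let (fid, ext) = match fid_full.rsplit_once('.') {
                Some((f, e)) => (f.to_string(), format!(".{}", e)),
                None => (fid_full.to_string(), String::new()),
            };
            Some((vid.to_string(), fid, String::new(), ext))
        }
        _ => None,
    }
}

fn main() {
    let (vid, fid, _, ext) = parse_url_path("/3,01637037d6.jpg").unwrap();
    assert_eq!((vid.as_str(), fid.as_str(), ext.as_str()), ("3", "01637037d6", ".jpg"));
    let (vid, _, name, ext) = parse_url_path("/3/01637037d6/photo.png").unwrap();
    assert_eq!((vid.as_str(), name.as_str(), ext.as_str()), ("3", "photo.png", ".png"));
}
```
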
9.2. Route dispatch — from privateStoreHandler and publicReadOnlyHandler:

  • GET / → GetOrHeadHandler
  • HEAD / → GetOrHeadHandler
  • POST / → PostHandler (whitelist gated)
  • PUT / → PostHandler (whitelist gated)
  • DELETE / → DeleteHandler (whitelist gated)
  • OPTIONS / → CORS preflight
  • GET /status → JSON status
  • GET /healthz → health check
  • GET /ui/index.html → HTML UI page
  • Static resources (CSS/JS for UI)

9.3. GET/HEAD handler (http_read.rs) — from handlers_read.go (468 lines):

  • JWT read authorization check
  • Lookup needle by volume ID + needle ID + cookie
  • ETag / If-None-Match / If-Modified-Since conditional responses
  • Content-Type from stored MIME or filename extension
  • Content-Disposition header
  • Content-Encoding (gzip/zstd stored data)
  • Range request support (HTTP 206 Partial Content)
  • JPEG orientation fix (if configured)
  • Proxy to replica on local miss (readMode=proxy)
  • Redirect to replica (readMode=redirect)
  • Download tracking (in-flight size accounting)

9.4. POST/PUT handler (http_write.rs) — from handlers_write.go (170 lines):

  • JWT write authorization check
  • Multipart form parsing
  • Extract file data, filename, content type, TTL, last-modified
  • Optional gzip/zstd compression
  • Write needle to volume
  • Replicate to peers (same logic as Go's DistributedOperation)
  • Return JSON: {name, size, eTag, error}

9.5. DELETE handler — already in handlers.go:

  • JWT authorization
  • Delete from local volume
  • Replicate delete to peers
  • Return JSON result

9.6. Admin handlers (http_admin.rs):

  • /status → JSON with volumes, version, disk status
  • /healthz → 200 OK if serving
  • /ui/index.html → HTML dashboard

9.7. Concurrency limiting — from handlers.go:

  • Upload concurrency limit with sync::Condvar + timeout
  • Download concurrency limit with proxy fallback to replicas
  • HTTP 429 on timeout, 499 on client cancel
  • Replication traffic bypasses upload limits

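The condvar-based upload limit can be sketched with std primitives; the struct and field names are illustrative, and the real handler would map the timeout to HTTP 429 and subtract in-flight bytes on client cancel:

```rust
use std::sync::{Condvar, Mutex};
use std::time::Duration;

// Sketch of the upload concurrency limit: block new uploads while in-flight
// bytes would exceed the limit, waking on a condvar, and fail after the
// inflight-upload timeout elapses.
struct UploadLimiter {
    in_flight: Mutex<i64>,
    cond: Condvar,
    limit: i64,
}

impl UploadLimiter {
    fn try_acquire(&self, size: i64, timeout: Duration) -> Result<(), &'static str> {
        let mut in_flight = self.in_flight.lock().unwrap();
        while *in_flight + size > self.limit {
            let (guard, wait) = self.cond.wait_timeout(in_flight, timeout).unwrap();
            in_flight = guard;
            if wait.timed_out() && *in_flight + size > self.limit {
                return Err("upload limit wait timed out (would map to HTTP 429)");
            }
        }
        *in_flight += size;
        Ok(())
    }

    fn release(&self, size: i64) {
        *self.in_flight.lock().unwrap() -= size;
        self.cond.notify_all();
    }
}

fn main() {
    let limiter = UploadLimiter { in_flight: Mutex::new(0), cond: Condvar::new(), limit: 100 };
    assert!(limiter.try_acquire(80, Duration::from_millis(10)).is_ok());
    // A second large upload cannot fit within the limit and times out.
    assert!(limiter.try_acquire(50, Duration::from_millis(10)).is_err());
    limiter.release(80);
    assert!(limiter.try_acquire(50, Duration::from_millis(10)).is_ok());
}
```
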
9.8. Public port — if configured, separate listener with read-only routes (GET/HEAD/OPTIONS only).

9.9. Request ID middleware — generate unique request ID per request.

9.10. Tests:

  • Integration: start server → upload file via POST → GET it back → verify content
  • Integration: upload → DELETE → GET returns 404
  • Integration: conditional GET with ETag → 304
  • Integration: range request → 206 with correct bytes
  • Integration: exceed upload limit → 429
  • Integration: whitelist enforcement
  • Integration: JWT enforcement

Phase 10: gRPC Service Implementation

Goal: All 40 gRPC methods with exact logic.

Source files to port:

  • weed/server/volume_grpc_admin.go (380 lines)
  • weed/server/volume_grpc_vacuum.go (124 lines)
  • weed/server/volume_grpc_copy.go (636 lines)
  • weed/server/volume_grpc_copy_incremental.go (66 lines)
  • weed/server/volume_grpc_read_write.go (74 lines)
  • weed/server/volume_grpc_batch_delete.go (124 lines)
  • weed/server/volume_grpc_tail.go (140 lines)
  • weed/server/volume_grpc_erasure_coding.go (619 lines)
  • weed/server/volume_grpc_scrub.go (121 lines)
  • weed/server/volume_grpc_tier_upload.go (98 lines)
  • weed/server/volume_grpc_tier_download.go (85 lines)
  • weed/server/volume_grpc_remote.go (95 lines)
  • weed/server/volume_grpc_query.go (69 lines)
  • weed/server/volume_grpc_state.go (26 lines)
  • weed/server/volume_grpc_read_all.go (35 lines)
  • weed/server/volume_grpc_client_to_master.go (325 lines)

Steps (grouped by functional area):

10.1. Implement tonic::Service for VolumeServer — the generated trait from proto.

10.2. Admin RPCs (grpc_admin.rs):

  • AllocateVolume — create volume on appropriate disk location
  • VolumeMount / VolumeUnmount / VolumeDelete
  • VolumeMarkReadonly / VolumeMarkWritable
  • VolumeConfigure — change replication
  • VolumeStatus — return read-only, size, file counts
  • VolumeServerStatus — disk statuses, memory, version, DC, rack
  • VolumeServerLeave — deregister from master
  • DeleteCollection
  • VolumeNeedleStatus — get needle metadata by ID
  • Ping — latency measurement
  • GetState / SetState — maintenance mode

10.3. Vacuum RPCs (grpc_vacuum.rs):

  • VacuumVolumeCheck — return garbage ratio
  • VacuumVolumeCompact — stream progress (streaming response)
  • VacuumVolumeCommit — finalize compaction
  • VacuumVolumeCleanup — remove temp files
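
The decision input for VacuumVolumeCheck can be sketched as a simple ratio of deleted bytes to content bytes (the signature here is hypothetical; the master compares this against its garbage threshold to decide whether to compact):

```rust
/// Fraction of a volume's content bytes that belong to deleted needles.
/// Guard against empty volumes to avoid a division by zero.
fn garbage_ratio(deleted_bytes: u64, content_bytes: u64) -> f64 {
    if content_bytes == 0 {
        0.0
    } else {
        deleted_bytes as f64 / content_bytes as f64
    }
}
```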

10.4. Copy RPCs (grpc_copy.rs):

  • VolumeCopy — stream .dat/.idx from source to create local copy
  • VolumeSyncStatus — return sync metadata
  • VolumeIncrementalCopy — stream .dat delta since timestamp (streaming)
  • CopyFile — generic file copy by extension (streaming)
  • ReceiveFile — receive streamed file (client streaming)
  • ReadVolumeFileStatus — return file timestamps and sizes

10.5. Read/Write RPCs (grpc_read_write.rs):

  • ReadNeedleBlob — raw needle blob read
  • ReadNeedleMeta — needle metadata
  • WriteNeedleBlob — raw needle blob write
  • ReadAllNeedles — stream all needles from volume(s) (streaming)

10.6. Batch delete (grpc_batch_delete.rs):

  • BatchDelete — delete multiple file IDs, return per-ID results
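
The key property of BatchDelete is that each file ID succeeds or fails independently. A sketch of that shape (the struct fields, status codes, and the `volumeId,needleCookie` parsing shortcut are illustrative assumptions, not the proto's exact definitions):

```rust
/// Hypothetical per-ID result; the real proto's fields may differ.
struct DeleteResult {
    file_id: String,
    status: u32,
    error: String,
}

/// A malformed ID yields an error entry but never aborts the batch.
fn batch_delete(ids: &[&str]) -> Vec<DeleteResult> {
    ids.iter()
        .map(|id| match id.split_once(',') {
            // A file ID starts with a numeric volume ID before the comma.
            Some((vid, _)) if vid.parse::<u32>().is_ok() => DeleteResult {
                file_id: id.to_string(),
                status: 202, // accepted
                error: String::new(),
            },
            _ => DeleteResult {
                file_id: id.to_string(),
                status: 400,
                error: "invalid file id".to_string(),
            },
        })
        .collect()
}
```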

10.7. Tail RPCs (grpc_tail.rs):

  • VolumeTailSender — stream new needles since timestamp (streaming)
  • VolumeTailReceiver — connect to another volume server and tail its changes

10.8. Erasure coding RPCs (grpc_erasure_coding.rs):

  • VolumeEcShardsGenerate — generate EC shards from volume
  • VolumeEcShardsRebuild — rebuild missing shards
  • VolumeEcShardsCopy — copy shards from another server
  • VolumeEcShardsDelete — delete EC shards
  • VolumeEcShardsMount / VolumeEcShardsUnmount
  • VolumeEcShardRead — read from EC shard (streaming)
  • VolumeEcBlobDelete — mark blob deleted in EC volume
  • VolumeEcShardsToVolume — reconstruct volume from EC shards
  • VolumeEcShardsInfo — return shard metadata

10.9. Scrub RPCs (grpc_scrub.rs):

  • ScrubVolume — integrity check volumes (INDEX / FULL / LOCAL modes)
  • ScrubEcVolume — integrity check EC volumes

10.10. Tier RPCs (grpc_tier.rs):

  • VolumeTierMoveDatToRemote — upload to remote backend (streaming progress)
  • VolumeTierMoveDatFromRemote — download from remote (streaming progress)

10.11. Remote storage (grpc_remote.rs):

  • FetchAndWriteNeedle — fetch from remote storage, write locally, replicate

10.12. Query (grpc_query.rs):

  • Query — experimental CSV/JSON/Parquet select on stored data (streaming)

10.13. Master heartbeat (grpc_client_to_master.rs):

  • heartbeat() background task — periodic gRPC stream to master
  • Send: volume info, EC shard info, disk stats, has-no-space flags, deleted volumes
  • Receive: volume size limit, leader address, metrics config
  • Reconnect on failure with backoff
  • StopHeartbeat() for graceful shutdown
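
The reconnect backoff can be sketched as a capped exponential schedule (the base and cap values below are illustrative assumptions, not taken from the Go client):

```rust
use std::time::Duration;

/// Delay before reconnect attempt `attempt` (0-based): doubles on each
/// failure up to a cap. Base/cap values here are assumptions.
fn heartbeat_backoff(attempt: u32) -> Duration {
    let base = Duration::from_millis(500);
    let cap = Duration::from_secs(30);
    // Clamp the shift so the multiplier cannot overflow.
    base.saturating_mul(1u32 << attempt.min(10)).min(cap)
}
```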

10.14. Tests:

  • Integration test per RPC: call via tonic client → verify response
  • Streaming RPCs: verify all chunks received
  • Error cases: invalid volume ID, non-existent volume, etc.
  • Heartbeat: mock master gRPC server, verify registration

Phase 11: Startup, Lifecycle & Graceful Shutdown

Goal: Full server startup matching Go's runVolume() and startVolumeServer().

Steps:

11.1. Startup sequence (match volume.go exactly):

  1. Load security configuration from security.toml
  2. Start metrics server on metrics port
  3. Parse folder/max/minFreeSpace/diskType/tags
  4. Validate that all directories are writable
  5. Resolve IP, bind IP, public URL, gRPC port
  6. Create VolumeServer struct
  7. Check with master (initial handshake)
  8. Create Store (loads all existing volumes from disk)
  9. Create security Guard
  10. Register HTTP routes on admin mux
  11. Optionally register public mux
  12. Start gRPC server on gRPC port
  13. Start public HTTP server (if separated)
  14. Start cluster HTTP server (with optional TLS)
  15. Start heartbeat background task
  16. Start metrics push loop
  17. Register SIGHUP handler for config reload + new volume loading

11.2. Graceful shutdown (match Go exactly):

  On SIGINT/SIGTERM:

  1. Stop heartbeat (notify master we're leaving)
  2. Wait preStopSeconds
  3. Stop public HTTP server
  4. Stop cluster HTTP server
  5. Gracefully stop gRPC server
  6. volumeServer.Shutdown() → store.Close() (flush all volumes)

11.3. Reload (SIGHUP):

  • Reload security config
  • Update whitelist
  • Load newly appeared volumes from disk

11.4. Tests:

  • Start server → send SIGTERM → verify clean shutdown
  • Start server → SIGHUP → verify config reloaded

Phase 12: Integration & Cross-Compatibility Testing

Goal: Rust volume server is a drop-in replacement for Go volume server.

Steps:

12.1. Binary compatibility tests:

  • Create volumes with Go volume server
  • Start Rust volume server on same data directory
  • Read all data → verify identical
  • Write new data with Rust → read with Go → verify

12.2. API compatibility tests:

  • Run same HTTP requests against both Go and Rust servers
  • Compare response bodies, headers, status codes
  • Test all gRPC RPCs against both

12.3. Master interop test:

  • Start Go master server
  • Register Rust volume server
  • Verify heartbeat works
  • Verify volume assignment works
  • Upload via filer → stored on Rust volume server → read back

12.4. Performance benchmarks:

  • Throughput: sequential writes, sequential reads
  • Latency: p50/p99 for read/write
  • Concurrency: parallel reads/writes
  • Compare Rust vs Go numbers

12.5. Edge cases:

  • Volume at max size
  • Disk full handling
  • Corrupt .dat file recovery
  • Network partition during replication
  • EC shard loss + rebuild

Execution Order & Dependencies

Phase 1  (Skeleton + CLI)        ← no deps, start here
   ↓
Phase 2  (Storage types)         ← needs Phase 1 (types used everywhere)
   ↓
Phase 3  (Volume struct)         ← needs Phase 2
   ↓
Phase 4  (Store manager)         ← needs Phase 3
   ↓
Phase 7  (Security)              ← independent, can parallel with 3-4
Phase 8  (Metrics)               ← independent, can parallel with 3-4
   ↓
Phase 9  (HTTP server)           ← needs Phase 4 + 7 + 8
Phase 10 (gRPC server)           ← needs Phase 4 + 7 + 8
   ↓
Phase 5  (Erasure coding)        ← needs Phase 4, wire into Phase 10
Phase 6  (Tiered storage)        ← needs Phase 4, wire into Phase 10
   ↓
Phase 11 (Startup + shutdown)    ← needs Phase 9 + 10
   ↓
Phase 12 (Integration tests)     ← needs all above

Estimated Scope

| Phase                 | Estimated Rust Lines | Complexity                    |
|-----------------------|----------------------|-------------------------------|
| 1. Skeleton + CLI     | ~400                 | Low                           |
| 2. Storage types      | ~2,000               | High (binary compat critical) |
| 3. Volume struct      | ~2,500               | High                          |
| 4. Store manager      | ~1,000               | Medium                        |
| 5. Erasure coding     | ~3,000               | High                          |
| 6. Tiered storage     | ~1,500               | Medium                        |
| 7. Security           | ~500                 | Medium                        |
| 8. Metrics            | ~300                 | Low                           |
| 9. HTTP server        | ~2,000               | High                          |
| 10. gRPC server       | ~3,500               | High                          |
| 11. Startup/shutdown  | ~500                 | Medium                        |
| 12. Integration tests | ~2,000               | Medium                        |
| **Total**             | ~19,000              |                               |

Critical Invariants to Preserve

  1. Binary format compatibility — Rust must read/write .dat, .idx, .vif, .ecX files identically to Go. A single byte off = data loss.
  2. gRPC wire compatibility — Same proto, same field semantics. Go master must talk to Rust volume server seamlessly.
  3. HTTP API compatibility — Same URL patterns, same JSON response shapes, same headers, same status codes.
  4. Replication protocol — Write replication between Go and Rust volume servers must work bidirectionally.
  5. Heartbeat protocol — Rust volume server must register with Go master and maintain heartbeat.
  6. CRC32 algorithm — Must use IEEE polynomial (same as Go's crc32.ChecksumIEEE).
  7. JWT compatibility — Tokens signed by Go filer/master must be verifiable by Rust volume server and vice versa.
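
Invariant 6 is easy to verify in isolation: a bitwise CRC-32 over the IEEE polynomial (reflected form 0xEDB88320) must reproduce Go's crc32.ChecksumIEEE, whose well-known check value for the input "123456789" is 0xCBF43926. A table-free sketch for clarity, not speed:

```rust
/// CRC-32 with the IEEE polynomial, matching Go's crc32.ChecksumIEEE.
fn crc32_ieee(data: &[u8]) -> u32 {
    let mut crc = 0xFFFF_FFFF_u32;
    for &b in data {
        crc ^= b as u32;
        for _ in 0..8 {
            // All-ones mask if the low bit is set, else zero.
            let mask = (crc & 1).wrapping_neg();
            crc = (crc >> 1) ^ (0xEDB8_8320 & mask);
        }
    }
    !crc
}
```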