5 Commits
stage ... dev

35 changed files with 2685 additions and 15 deletions

Cargo.lock (generated new file, 1026 lines). Diff suppressed because it is too large.

Cargo.toml (new file, 31 lines)

@@ -0,0 +1,31 @@
[workspace]
resolver = "2"
members = [
"src/stratum-core",
"src/stratum-storage",
"src/stratum-metadata",
"src/stratum-auth",
"src/stratum-api-s3",
"src/stratum-tiering",
"src/stratum",
]
[workspace.dependencies]
axum = "0.8.8"
bytes = "1.11.1"
crc32fast = "1.5.0"
hex = "0.4.3"
hmac = "0.12.1"
hyper = "1.8.1"
md5 = "0.8.0"
quick-xml = "0.39.2"
reed-solomon-erasure = "6.0.0"
serde = { version = "1.0.228", features = ["derive"] }
sha2 = "0.10.9"
sled = "0.34.7"
tokio = "1.50.0"
tower = "0.5.3"
tracing = "0.1.44"
tracing-subscriber = "0.3.23"
uuid = { version = "1.22.0", features = ["v4"] }
anyhow = "1.0.102"

LICENSE (197 lines)

@@ -1,18 +1,187 @@
-MIT License
-Copyright (c) 2026 gsh-digital-services
-Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
-associated documentation files (the "Software"), to deal in the Software without restriction, including
-without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the
-following conditions:
-The above copyright notice and this permission notice shall be included in all copies or substantial
-portions of the Software.
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
-LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO
-EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
-IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
-USE OR OTHER DEALINGS IN THE SOFTWARE.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship made available under
the License, as indicated by a copyright notice that is included in
or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original
version of the Work and any modifications or additions to that Work or
Derivative Works thereof, that is intentionally submitted to Licensor for
inclusion in the Work by the copyright owner or by an individual or Legal
Entity authorized to submit on behalf of the copyright owner. For the purposes
of this definition, "submitted" means any form of electronic, verbal,
or written communication sent to the Licensor or its representatives,
including but not limited to communication on electronic mailing lists,
source code control systems, and issue tracking systems that are managed
by, or on behalf of, the Licensor for the purpose of discussing and
improving the Work, but excluding communication that is conspicuously
marked or designated in writing by the copyright owner as "Not a
Contribution."
"Contributor" shall mean Licensor and any Legal Entity on behalf of
whom a Contribution has been received by the Licensor and subsequently
incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a cross-claim
or counterclaim in a lawsuit) alleging that the Work or a Contribution
incorporated within the Work constitutes direct or contributory patent
infringement, then any patent licenses granted to You under this License
for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work
or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You meet
the following conditions:
(a) You must give any other recipients of the Work or Derivative Works
a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works that
You distribute, all copyright, patent, trademark, and attribution
notices from the Source form of the Work, excluding those notices
that do not pertain to any part of the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, You must include a readable copy of the attribution
notices contained within such NOTICE file, in at least one of the
following places: within a NOTICE text file distributed as part of
the Derivative Works; within the Source form or documentation, if
provided along with the Derivative Works; or, within a display
generated by the Derivative Works, if and wherever such third-party
notices normally appear. The contents of the NOTICE file are for
informational purposes only and do not modify the License. You may
add Your own attribution notices within Derivative Works that You
distribute, alongside or as an addendum to the NOTICE text from
the Work, provided that such additional attribution notices cannot
be construed as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions for
use, reproduction, or distribution of Your modifications, or for such
Derivative Works as a whole, provided Your use, reproduction, and
distribution of the Work otherwise complies with the conditions stated
in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed
to in writing, Licensor provides the Work (and each Contributor
provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES
OR CONDITIONS OF ANY KIND, either express or implied, including,
without limitation, any warranties or conditions of TITLE,
NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR
PURPOSE. You are solely responsible for determining the
appropriateness of using or reproducing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or exemplary damages of any character arising as a result
of this License or out of the use or inability to use the Work
(including but not limited to damages for loss of goodwill, work
stoppage, computer failure or malfunction, or all other commercial
damages or losses), even if such Contributor has been advised of the
possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the
Work or Derivative Works thereof, You may choose to offer, and charge
a fee for, acceptance of support, warranty, indemnity, or other
liability obligations and/or rights consistent with this License.
However, in accepting such obligations, You may offer only conditions
consistent with this License.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a file
or class name and description of purpose be included on the same
"printed page" as the copyright notice for easier identification within
third-party archives.
Copyright 2026 GSH Digital Services
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing
permissions and limitations under the License.

README.md (198 lines)

@@ -1,3 +1,199 @@
# Stratum
An S3-compatible object storage server written in Rust that autonomously moves objects between storage tiers based on observed access patterns, with optional developer-defined priority hints.
> Self-optimizing, S3-compatible object storage with autonomous intelligent tiering — built for EU data sovereignty.
---
## What Is Stratum?
Stratum is an open-source S3-compatible object storage server written in Rust. Unlike most S3-compatible storage solutions, Stratum autonomously moves objects between storage tiers (hot/warm/cold) based on observed access patterns, with zero configuration required.
Point any S3-compatible client at it. It gets smarter over time.
---
## Why Stratum?
| | AWS S3 Intelligent-Tiering | MinIO | Garage | **Stratum** |
|---|---|---|---|---|
| S3 compatible | ✅ | ✅ | ✅ | ✅ |
| Autonomous tiering | ✅ (black box) | ❌ | ❌ | ✅ (transparent) |
| EU sovereign | ❌ (CLOUD Act) | ❌ | ✅ | ✅ |
| Open source | ❌ | ☠️ Archived | ✅ | ✅ |
| Transparent tier reasoning | ❌ | ❌ | ❌ | ✅ |
| Self-hosted | ❌ | ✅ | ✅ | ✅ |
MinIO was archived in February 2026. RustFS is alpha. Garage targets geo-distribution only. **The space for a production-ready, intelligent, EU-sovereign S3 server is open.**
---
## Architecture
Stratum is a Cargo workspace split into focused crates:
```
stratum/
├── src/
│ ├── stratum/ → binary — wires everything together
│ ├── stratum-api-s3/ → S3 API layer (routes, handlers, auth)
│ ├── stratum-storage/ → volume management, tier logic, shard I/O
│ ├── stratum-metadata/ → bucket/key → volume mapping (sled)
│ ├── stratum-tiering/ → tier decision engine
│ ├── stratum-auth/ → AWS Signature V4 validation
│ └── stratum-core/ → shared types and config
```
### Storage Model
Objects are not stored directly by key. Keys point to **volumes**. Volumes hold the actual data and can live on any tier:
```
bucket/key → volume_id → Volume {
tier: Hot | Warm | Cold
location: Local(path) | Remote(url)
size, checksum
last_accessed, access_count ← tiering signals
}
```
When tiering promotes or demotes an object, only the volume location changes. The key never moves. Clients never know.
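The key-to-volume indirection above can be sketched in plain Rust. Type and field names here are illustrative assumptions, not the actual `stratum-storage` types:

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Tier { Hot, Warm, Cold }

#[derive(Debug, Clone, PartialEq, Eq)]
enum Location { Local(String), Remote(String) }

#[derive(Debug, Clone)]
struct Volume { tier: Tier, location: Location, size: u64, access_count: u64 }

struct Index {
    // (bucket, key) → volume_id: this mapping never changes on tier moves
    keys: HashMap<(String, String), u64>,
    volumes: HashMap<u64, Volume>,
}

impl Index {
    // A tier move rewrites only the volume record; clients keep resolving
    // the same (bucket, key) → volume_id mapping.
    fn retier(&mut self, volume_id: u64, tier: Tier, location: Location) {
        if let Some(v) = self.volumes.get_mut(&volume_id) {
            v.tier = tier;
            v.location = location;
        }
    }
}

fn main() {
    let mut idx = Index { keys: HashMap::new(), volumes: HashMap::new() };
    idx.keys.insert(("photos".into(), "beach.jpg".into()), 7);
    idx.volumes.insert(7, Volume {
        tier: Tier::Hot,
        location: Location::Local("/nvme/vol7".into()),
        size: 1024,
        access_count: 0,
    });
    idx.retier(7, Tier::Cold, Location::Remote("s3://backup/vol7".into()));
    // The key still resolves to the same volume id after the move.
    assert_eq!(idx.keys[&("photos".into(), "beach.jpg".into())], 7);
    assert_eq!(idx.volumes[&7].tier, Tier::Cold);
    println!("ok");
}
```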
### Storage Tiers
```
Hot → NVMe/SSD — frequently accessed objects, lowest latency
Warm → HDD — infrequently accessed, medium cost
Cold → Remote S3 — rarely accessed, cheapest (B2, R2, AWS, Garage...)
```
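The tiering signals tracked on each volume (`last_accessed`, `access_count`) could map objects onto these tiers with a heuristic along the following lines. The thresholds are invented purely for illustration, not planned defaults:

```rust
use std::time::{Duration, SystemTime};

#[derive(Debug, PartialEq, Eq)]
enum Tier { Hot, Warm, Cold }

// Hypothetical tiering heuristic; all thresholds are illustrative assumptions.
fn decide_tier(now: SystemTime, last_accessed: SystemTime, access_count: u64) -> Tier {
    let idle = now
        .duration_since(last_accessed)
        .unwrap_or(Duration::ZERO);
    if idle < Duration::from_secs(24 * 3600) && access_count >= 10 {
        Tier::Hot // touched within a day and read frequently
    } else if idle > Duration::from_secs(30 * 24 * 3600) {
        Tier::Cold // untouched for a month
    } else {
        Tier::Warm
    }
}

fn main() {
    let now = SystemTime::now();
    let recent = now - Duration::from_secs(12 * 3600);
    let last_year = now - Duration::from_secs(365 * 24 * 3600);
    assert_eq!(decide_tier(now, recent, 50), Tier::Hot);
    assert_eq!(decide_tier(now, last_year, 50), Tier::Cold);
    assert_eq!(decide_tier(now, recent, 1), Tier::Warm);
    println!("ok");
}
```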
### Erasure Coding
Stratum uses Reed-Solomon erasure coding (4 data + 2 parity shards) instead of replication. This gives:
```
3x replication: 3.0x storage overhead, lose 1 node
4+2 erasure: 1.5x storage overhead, lose any 2 nodes
```
Each object is split into shards. Shards are distributed across nodes/disks. Loss of any 2 shards is fully recoverable.
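As a deliberately simplified illustration of the shard/parity idea, here is a stdlib-only sketch using a single XOR parity shard. This is not Reed-Solomon: it survives only one lost shard, where the 4+2 scheme above survives any two.

```rust
// Toy single-parity erasure scheme: NOT Reed-Solomon, just byte-wise XOR,
// shown only to illustrate splitting data into shards and reconstructing one.
fn make_shards(data: &[u8], n: usize) -> Vec<Vec<u8>> {
    let shard_len = data.len().div_ceil(n);
    let mut shards: Vec<Vec<u8>> = data
        .chunks(shard_len)
        .map(|c| {
            let mut s = c.to_vec();
            s.resize(shard_len, 0); // zero-pad the last shard
            s
        })
        .collect();
    shards.resize(n, vec![0; shard_len]);
    // Parity shard: byte-wise XOR of all data shards.
    let mut parity = vec![0u8; shard_len];
    for s in &shards {
        for (p, b) in parity.iter_mut().zip(s) {
            *p ^= b;
        }
    }
    shards.push(parity);
    shards
}

// Recover one missing shard by XOR-ing all surviving shards together.
fn recover(shards: &[Option<Vec<u8>>]) -> Vec<u8> {
    let len = shards.iter().flatten().next().unwrap().len();
    let mut out = vec![0u8; len];
    for s in shards.iter().flatten() {
        for (o, b) in out.iter_mut().zip(s) {
            *o ^= b;
        }
    }
    out
}

fn main() {
    let data = b"hello stratum!!!"; // 16 bytes → 4 data shards of 4 bytes
    let mut shards: Vec<Option<Vec<u8>>> =
        make_shards(data, 4).into_iter().map(Some).collect();
    let lost = shards[1].take().unwrap(); // lose one data shard
    assert_eq!(recover(&shards), lost);   // reconstructed from the rest
    println!("recovered shard 1");
}
```

The real implementation uses the `reed-solomon-erasure` workspace dependency, which generalizes this to any `parity` lost shards via Galois-field arithmetic.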
---
## S3 API Coverage
### Implemented (routing layer)
All routes are defined and return `501 Not Implemented` until handlers are built.
| Operation | Method | Status |
|---|---|---|
| ListBuckets | GET / | 🔲 Stub |
| CreateBucket | PUT /{bucket} | 🔲 Stub |
| DeleteBucket | DELETE /{bucket} | 🔲 Stub |
| HeadBucket | HEAD /{bucket} | 🔲 Stub |
| ListObjectsV2 | GET /{bucket} | 🔲 Stub |
| GetObject | GET /{bucket}/{*key} | 🔲 Stub |
| PutObject | PUT /{bucket}/{*key} | 🔲 Stub |
| DeleteObject | DELETE /{bucket}/{*key} | 🔲 Stub |
| HeadObject | HEAD /{bucket}/{*key} | 🔲 Stub |
| CreateMultipartUpload | POST /{bucket}/{*key}?uploads | 🔲 Stub |
| UploadPart | PUT /{bucket}/{*key}?partNumber&uploadId | 🔲 Stub |
| CompleteMultipartUpload | POST /{bucket}/{*key}?uploadId | 🔲 Stub |
| AbortMultipartUpload | DELETE /{bucket}/{*key}?uploadId | 🔲 Stub |
### Endpoint Parser
All S3 endpoints are parsed from raw HTTP requests into typed `Endpoint` enum variants before reaching handlers. Query parameters disambiguate operations sharing the same route (e.g. `UploadPart` vs `PutObject`).
### Error Handling
S3-compatible error types defined:
- `BucketNotFound` → 404
- `ObjectNotFound` → 404
- `BucketAlreadyExists` → 409
- `InvalidArgument` → 400
- `InvalidBucketName` → 400
- `AuthorizationFailed` → 403
- `MissingAuthHeader` → 401
- `InternalError` → 500
- `NotImplemented` → 501
---
## Design Principles
- **KISS** — no macros where plain match arms work
- **Bottom-up** — storage layer before API layer
- **TDD** — tests written before implementation
- **One concern per file** — enum definitions separate from parsing logic
- **No lifetime annotations** — owned types throughout for maintainability
- **`cargo fmt` always** — enforced formatting
---
## Testing
```bash
# run all tests
cargo test
# run specific crate
cargo test -p stratum-api-s3
# coverage report
cargo tarpaulin -p stratum-api-s3 --out Html
```
### Test Layers
```
Unit tests → endpoint parser, individual functions
Integration tests → axum routes, full HTTP request/response
E2E tests → awscli + rclone against running server (planned)
```
---
## Development Setup
```bash
git clone https://github.com/gsh-digital/stratum
cd stratum
cargo build
cargo test
```
### Requirements
- Rust 1.85+ (required by edition 2024)
- cargo
### Tested On
- Linux x86_64
- Linux aarch64 (Raspberry Pi 4) ← primary dev/test bench
---
## Roadmap
### Phase 1 — Core S3 Server (current)
> Goal: pass MinIO s3-tests suite at >95%, work with awscli and rclone out of the box
### Phase 2 — Geo Distribution
> Goal: multi-node replication across geographic regions with Raft consensus
### Phase 3 — Intelligent Tiering
> Goal: autonomous object movement between hot/warm/cold based on access patterns
### Phase 4 — Managed Service
> Goal: GSH Digital Services hosted offering with Grafana monitoring
---
## License
Apache 2.0 — see LICENSE
---
## By
**GSH Digital Services**
Author: [Soliman, Ramez](mailto:r.soliman@gsh-services.com)
Building EU-sovereign infrastructure that doesn't cost like AWS and doesn't require a PhD to operate.


@@ -0,0 +1,10 @@
[package]
name = "stratum-api-s3"
version = "0.1.0"
edition = "2024"
[dependencies]
axum.workspace = true
hyper.workspace = true
tower.workspace = true
tokio.workspace = true


@@ -0,0 +1,53 @@
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Endpoint {
// Service level
ListBuckets,
// Bucket level
CreateBucket,
DeleteBucket,
HeadBucket,
ListObjectsV2 {
delimiter: Option<String>,
prefix: Option<String>,
max_keys: Option<usize>,
continuation_token: Option<String>,
},
// Object level
GetObject {
key: String,
version_id: Option<String>,
part_number: Option<u64>,
},
PutObject {
key: String,
},
DeleteObject {
key: String,
version_id: Option<String>,
},
HeadObject {
key: String,
version_id: Option<String>,
part_number: Option<u64>,
},
// Multipart
CreateMultipartUpload {
key: String,
},
UploadPart {
key: String,
part_number: u64,
upload_id: String,
},
CompleteMultipartUpload {
key: String,
upload_id: String,
},
AbortMultipartUpload {
key: String,
upload_id: String,
},
}


@@ -0,0 +1,4 @@
mod definitions;
mod parser;
pub use definitions::Endpoint;


@@ -0,0 +1,301 @@
use super::definitions::Endpoint;
use crate::errors::ApiError;
use hyper::Method;
use std::collections::HashMap;
pub fn parse_endpoint(
method: &Method,
path: &str,
query: &HashMap<String, String>,
) -> Result<Endpoint, ApiError> {
// "/bucket/some/nested/key" → ["bucket", "some/nested/key"]
let segments: Vec<&str> = path.trim_start_matches('/').splitn(2, '/').collect();
let bucket = segments.first().copied().unwrap_or("");
let key = segments.get(1).copied().unwrap_or("");
match (method, bucket, key) {
// Service level
(&Method::GET, "", "") => Ok(Endpoint::ListBuckets),
// Bucket level
(&Method::PUT, b, "") if !b.is_empty() => Ok(Endpoint::CreateBucket),
(&Method::DELETE, b, "") if !b.is_empty() => Ok(Endpoint::DeleteBucket),
(&Method::HEAD, b, "") if !b.is_empty() => Ok(Endpoint::HeadBucket),
(&Method::GET, b, "") if !b.is_empty() => Ok(Endpoint::ListObjectsV2 {
delimiter: query.get("delimiter").cloned(),
prefix: query.get("prefix").cloned(),
max_keys: query.get("max-keys").and_then(|v| v.parse().ok()),
continuation_token: query.get("continuation-token").cloned(),
}),
// Object level
(&Method::GET, _, k) if !k.is_empty() => Ok(Endpoint::GetObject {
key: k.to_string(),
version_id: query.get("versionId").cloned(),
part_number: query.get("partNumber").and_then(|v| v.parse().ok()),
}),
(&Method::PUT, _, k) if !k.is_empty() => {
// distinguish UploadPart from PutObject
if let Some(upload_id) = query.get("uploadId") {
Ok(Endpoint::UploadPart {
key: k.to_string(),
upload_id: upload_id.clone(),
part_number: query
.get("partNumber")
.and_then(|v| v.parse().ok())
.ok_or_else(|| ApiError::InvalidArgument("missing partNumber".into()))?,
})
} else {
Ok(Endpoint::PutObject { key: k.to_string() })
}
}
(&Method::DELETE, _, k) if !k.is_empty() => {
if let Some(upload_id) = query.get("uploadId") {
Ok(Endpoint::AbortMultipartUpload {
key: k.to_string(),
upload_id: upload_id.clone(),
})
} else {
Ok(Endpoint::DeleteObject {
key: k.to_string(),
version_id: query.get("versionId").cloned(),
})
}
}
(&Method::HEAD, _, k) if !k.is_empty() => Ok(Endpoint::HeadObject {
key: k.to_string(),
version_id: query.get("versionId").cloned(),
part_number: query.get("partNumber").and_then(|v| v.parse().ok()),
}),
(&Method::POST, _, k) if !k.is_empty() => {
if query.contains_key("uploads") {
Ok(Endpoint::CreateMultipartUpload { key: k.to_string() })
} else if let Some(upload_id) = query.get("uploadId") {
Ok(Endpoint::CompleteMultipartUpload {
key: k.to_string(),
upload_id: upload_id.clone(),
})
} else {
Err(ApiError::InvalidArgument("unknown POST operation".into()))
}
}
_ => Err(ApiError::InvalidArgument(format!(
"unknown endpoint: {} {}",
method, path
))),
}
}
#[cfg(test)]
mod tests {
use super::*;
use hyper::Method;
use std::collections::HashMap;
fn empty_query() -> HashMap<String, String> {
HashMap::new()
}
fn query(pairs: &[(&str, &str)]) -> HashMap<String, String> {
pairs
.iter()
.map(|(k, v)| (k.to_string(), v.to_string()))
.collect()
}
// Service level
#[test]
fn test_list_buckets() {
let result = parse_endpoint(&Method::GET, "/", &empty_query());
assert_eq!(result.unwrap(), Endpoint::ListBuckets);
}
// Bucket level
#[test]
fn test_create_bucket() {
let result = parse_endpoint(&Method::PUT, "/my-bucket", &empty_query());
assert_eq!(result.unwrap(), Endpoint::CreateBucket);
}
#[test]
fn test_delete_bucket() {
let result = parse_endpoint(&Method::DELETE, "/my-bucket", &empty_query());
assert_eq!(result.unwrap(), Endpoint::DeleteBucket);
}
#[test]
fn test_head_bucket() {
let result = parse_endpoint(&Method::HEAD, "/my-bucket", &empty_query());
assert_eq!(result.unwrap(), Endpoint::HeadBucket);
}
#[test]
fn test_list_objects_v2_empty() {
let result = parse_endpoint(&Method::GET, "/my-bucket", &empty_query());
assert_eq!(
result.unwrap(),
Endpoint::ListObjectsV2 {
delimiter: None,
prefix: None,
max_keys: None,
continuation_token: None,
}
);
}
#[test]
fn test_list_objects_v2_with_prefix() {
let q = query(&[("prefix", "photos/"), ("max-keys", "100")]);
let result = parse_endpoint(&Method::GET, "/my-bucket", &q);
assert_eq!(
result.unwrap(),
Endpoint::ListObjectsV2 {
delimiter: None,
prefix: Some("photos/".to_string()),
max_keys: Some(100),
continuation_token: None,
}
);
}
// Object level
#[test]
fn test_get_object() {
let result = parse_endpoint(&Method::GET, "/my-bucket/photo.jpg", &empty_query());
assert_eq!(
result.unwrap(),
Endpoint::GetObject {
key: "photo.jpg".to_string(),
version_id: None,
part_number: None,
}
);
}
#[test]
fn test_get_object_nested_key() {
let result = parse_endpoint(
&Method::GET,
"/my-bucket/photos/2024/beach.jpg",
&empty_query(),
);
assert_eq!(
result.unwrap(),
Endpoint::GetObject {
key: "photos/2024/beach.jpg".to_string(), // full path preserved
version_id: None,
part_number: None,
}
);
}
#[test]
fn test_put_object() {
let result = parse_endpoint(&Method::PUT, "/my-bucket/photo.jpg", &empty_query());
assert_eq!(
result.unwrap(),
Endpoint::PutObject {
key: "photo.jpg".to_string(),
}
);
}
#[test]
fn test_delete_object() {
let result = parse_endpoint(&Method::DELETE, "/my-bucket/photo.jpg", &empty_query());
assert_eq!(
result.unwrap(),
Endpoint::DeleteObject {
key: "photo.jpg".to_string(),
version_id: None,
}
);
}
#[test]
fn test_delete_object_with_version() {
let q = query(&[("versionId", "abc123")]);
let result = parse_endpoint(&Method::DELETE, "/my-bucket/photo.jpg", &q);
assert_eq!(
result.unwrap(),
Endpoint::DeleteObject {
key: "photo.jpg".to_string(),
version_id: Some("abc123".to_string()),
}
);
}
#[test]
fn test_head_object() {
let result = parse_endpoint(&Method::HEAD, "/my-bucket/photo.jpg", &empty_query());
assert_eq!(
result.unwrap(),
Endpoint::HeadObject {
key: "photo.jpg".to_string(),
version_id: None,
part_number: None,
}
);
}
// Multipart
#[test]
fn test_create_multipart_upload() {
let q = query(&[("uploads", "")]);
let result = parse_endpoint(&Method::POST, "/my-bucket/video.mp4", &q);
assert_eq!(
result.unwrap(),
Endpoint::CreateMultipartUpload {
key: "video.mp4".to_string(),
}
);
}
#[test]
fn test_upload_part() {
let q = query(&[("partNumber", "1"), ("uploadId", "abc123")]);
let result = parse_endpoint(&Method::PUT, "/my-bucket/video.mp4", &q);
assert_eq!(
result.unwrap(),
Endpoint::UploadPart {
key: "video.mp4".to_string(),
part_number: 1,
upload_id: "abc123".to_string(),
}
);
}
#[test]
fn test_complete_multipart_upload() {
let q = query(&[("uploadId", "abc123")]);
let result = parse_endpoint(&Method::POST, "/my-bucket/video.mp4", &q);
assert_eq!(
result.unwrap(),
Endpoint::CompleteMultipartUpload {
key: "video.mp4".to_string(),
upload_id: "abc123".to_string(),
}
);
}
#[test]
fn test_abort_multipart_upload() {
let q = query(&[("uploadId", "abc123")]);
let result = parse_endpoint(&Method::DELETE, "/my-bucket/video.mp4", &q);
assert_eq!(
result.unwrap(),
Endpoint::AbortMultipartUpload {
key: "video.mp4".to_string(),
upload_id: "abc123".to_string(),
}
);
}
// Error cases
#[test]
fn test_unknown_endpoint_returns_error() {
let result = parse_endpoint(&Method::PATCH, "/my-bucket/photo.jpg", &empty_query());
assert!(result.is_err());
}
}


@@ -0,0 +1,67 @@
use axum::{
http::StatusCode,
response::{IntoResponse, Response},
};
#[derive(Debug, Clone, PartialEq)]
pub enum ApiError {
// bucket errors
BucketNotFound,
BucketAlreadyExists,
// object errors
ObjectNotFound,
// request errors
InvalidArgument(String),
InvalidBucketName,
// auth errors
AuthorizationFailed,
MissingAuthHeader,
// server errors
InternalError(String),
// not implemented yet
NotImplemented,
}
impl std::fmt::Display for ApiError {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
match self {
//TODO: Add Api Error messages
ApiError::AuthorizationFailed => write!(f, "AuthorizationFailed"),
ApiError::BucketAlreadyExists => write!(f, "BucketAlreadyExists"),
ApiError::BucketNotFound => write!(f, "BucketNotFound"),
ApiError::InternalError(message) => write!(f, "InternalError: {}", message),
ApiError::InvalidArgument(message) => write!(f, "InvalidArgument: {}", message),
ApiError::InvalidBucketName => write!(f, "InvalidBucketName"),
ApiError::MissingAuthHeader => write!(f, "MissingAuthHeader"),
ApiError::NotImplemented => write!(f, "NotImplemented"),
ApiError::ObjectNotFound => write!(f, "ObjectNotFound"),
}
}
}
impl ApiError {
fn status_code(&self) -> StatusCode {
match self {
ApiError::BucketNotFound => StatusCode::NOT_FOUND,
ApiError::ObjectNotFound => StatusCode::NOT_FOUND,
ApiError::BucketAlreadyExists => StatusCode::CONFLICT,
ApiError::InvalidArgument(_) => StatusCode::BAD_REQUEST,
ApiError::InvalidBucketName => StatusCode::BAD_REQUEST,
ApiError::AuthorizationFailed => StatusCode::FORBIDDEN,
ApiError::MissingAuthHeader => StatusCode::UNAUTHORIZED,
ApiError::InternalError(_) => StatusCode::INTERNAL_SERVER_ERROR,
ApiError::NotImplemented => StatusCode::NOT_IMPLEMENTED,
}
}
}
impl IntoResponse for ApiError {
fn into_response(self) -> Response {
(self.status_code(), self.to_string()).into_response()
}
}


@@ -0,0 +1,3 @@
mod error;
pub use error::ApiError;


@@ -0,0 +1,79 @@
use axum::http::StatusCode;
use axum::response::IntoResponse;
pub async fn list_buckets() -> impl IntoResponse {
StatusCode::NOT_IMPLEMENTED
}
pub async fn create_bucket() -> impl IntoResponse {
StatusCode::NOT_IMPLEMENTED
}
pub async fn delete_bucket() -> impl IntoResponse {
StatusCode::NOT_IMPLEMENTED
}
pub async fn head_bucket() -> impl IntoResponse {
StatusCode::NOT_IMPLEMENTED
}
#[cfg(test)]
mod tests {
use axum::{
body::Body,
http::{Request, StatusCode},
};
use tower::ServiceExt; // for oneshot
use crate::router::s3_router;
#[tokio::test]
async fn test_list_buckets_returns_501() {
let app = s3_router();
let response = app
.oneshot(
Request::builder()
.method("GET")
.uri("/")
.body(Body::empty())
.unwrap()
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
}
#[tokio::test]
async fn test_create_bucket_returns_501() {
let app = s3_router();
let response = app
.oneshot(
Request::builder()
.method("PUT")
.uri("/my-bucket")
.body(Body::empty())
.unwrap()
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
}
#[tokio::test]
async fn test_get_object_returns_501() {
let app = s3_router();
let response = app
.oneshot(
Request::builder()
.method("GET")
.uri("/my-bucket/photo.jpg")
.body(Body::empty())
.unwrap()
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
}
}


@@ -0,0 +1,3 @@
pub mod bucket;
pub mod multipart;
pub mod object;


@@ -0,0 +1,96 @@
use axum::http::StatusCode;
use axum::response::IntoResponse;
pub async fn create_multipart_upload() -> impl IntoResponse {
StatusCode::NOT_IMPLEMENTED
}
pub async fn upload_part() -> impl IntoResponse {
StatusCode::NOT_IMPLEMENTED
}
pub async fn complete_multipart_upload() -> impl IntoResponse {
StatusCode::NOT_IMPLEMENTED
}
pub async fn abort_multipart_upload() -> impl IntoResponse {
StatusCode::NOT_IMPLEMENTED
}
#[cfg(test)]
mod tests {
use axum::{
body::Body,
http::{Request, StatusCode},
};
use tower::ServiceExt;
use crate::router::s3_router;
#[tokio::test]
async fn test_create_multipart_upload_returns_501() {
let app = s3_router();
let response = app
.oneshot(
Request::builder()
.method("POST")
.uri("/my-bucket/video.mp4?uploads")
.body(Body::empty())
.unwrap()
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
}
#[tokio::test]
async fn test_upload_part_returns_501() {
let app = s3_router();
let response = app
.oneshot(
Request::builder()
.method("PUT")
.uri("/my-bucket/video.mp4?partNumber=1&uploadId=abc123")
.body(Body::empty())
.unwrap()
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
}
#[tokio::test]
async fn test_complete_multipart_upload_returns_501() {
let app = s3_router();
let response = app
.oneshot(
Request::builder()
.method("POST")
.uri("/my-bucket/video.mp4?uploadId=abc123")
.body(Body::empty())
.unwrap()
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
}
#[tokio::test]
async fn test_abort_multipart_upload_returns_501() {
let app = s3_router();
let response = app
.oneshot(
Request::builder()
.method("DELETE")
.uri("/my-bucket/video.mp4?uploadId=abc123")
.body(Body::empty())
.unwrap()
)
.await
.unwrap();
assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
}
}


@@ -0,0 +1,136 @@
use axum::http::StatusCode;
use axum::response::IntoResponse;

pub async fn get_object() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

pub async fn put_object() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

pub async fn delete_object() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

pub async fn head_object() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

pub async fn list_objects_v2() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

#[cfg(test)]
mod tests {
    use axum::{
        body::Body,
        http::{Request, StatusCode},
    };
    use tower::ServiceExt;

    use crate::router::s3_router;

    #[tokio::test]
    async fn test_get_object_returns_501() {
        let app = s3_router();

        let response = app
            .oneshot(
                Request::builder()
                    .method("GET")
                    .uri("/my-bucket/photo.jpg")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }

    #[tokio::test]
    async fn test_put_object_returns_501() {
        let app = s3_router();

        let response = app
            .oneshot(
                Request::builder()
                    .method("PUT")
                    .uri("/my-bucket/photo.jpg")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }

    #[tokio::test]
    async fn test_delete_object_returns_501() {
        let app = s3_router();

        let response = app
            .oneshot(
                Request::builder()
                    .method("DELETE")
                    .uri("/my-bucket/photo.jpg")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }

    #[tokio::test]
    async fn test_head_object_returns_501() {
        let app = s3_router();

        let response = app
            .oneshot(
                Request::builder()
                    .method("HEAD")
                    .uri("/my-bucket/photo.jpg")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }

    #[tokio::test]
    async fn test_list_objects_v2_returns_501() {
        let app = s3_router();

        let response = app
            .oneshot(
                Request::builder()
                    .method("GET")
                    .uri("/my-bucket")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }

    #[tokio::test]
    async fn test_get_object_nested_key_returns_501() {
        let app = s3_router();

        let response = app
            .oneshot(
                Request::builder()
                    .method("GET")
                    .uri("/my-bucket/photos/2024/beach.jpg")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }
}


@@ -0,0 +1,6 @@
pub mod endpoint;
pub mod errors;
pub mod handlers;
pub mod router;

pub use endpoint::Endpoint;


@@ -0,0 +1,22 @@
use axum::{
    Router,
    routing::{delete, get, head, post, put},
};

use crate::handlers::{bucket, multipart, object};

pub fn s3_router() -> Router {
    Router::new()
        // Service level
        .route("/", get(bucket::list_buckets))
        // Bucket level
        .route("/{bucket}", put(bucket::create_bucket))
        .route("/{bucket}", delete(bucket::delete_bucket))
        .route("/{bucket}", head(bucket::head_bucket))
        // Object level
        .route("/{bucket}", get(object::list_objects_v2))
        .route("/{bucket}/{*key}", get(object::get_object))
        .route("/{bucket}/{*key}", head(object::head_object))
        .route("/{bucket}/{*key}", delete(object::delete_object))
        .route("/{bucket}/{*key}", post(multipart::create_multipart_upload))
        .route("/{bucket}/{*key}", put(object::put_object))
}


@@ -0,0 +1,6 @@
[package]
name = "stratum-auth"
version = "0.1.0"
edition = "2024"

[dependencies]


@@ -0,0 +1,14 @@
pub fn add(left: u64, right: u64) -> u64 {
    left + right
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_works() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }
}


@@ -0,0 +1,6 @@
[package]
name = "stratum-core"
version = "0.1.0"
edition = "2024"

[dependencies]


@@ -0,0 +1,14 @@
pub fn add(left: u64, right: u64) -> u64 {
    left + right
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_works() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }
}


@@ -0,0 +1,6 @@
[package]
name = "stratum-metadata"
version = "0.1.0"
edition = "2024"

[dependencies]


@@ -0,0 +1,14 @@
pub fn add(left: u64, right: u64) -> u64 {
    left + right
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_works() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }
}


@@ -0,0 +1,10 @@
[package]
name = "stratum-storage"
version = "0.1.0"
edition = "2024"

[dependencies]
reed-solomon-erasure.workspace = true
uuid.workspace = true
tokio.workspace = true
anyhow.workspace = true


@@ -0,0 +1,9 @@
use std::path::PathBuf;

pub struct StorageConfig {
    pub hot_path: PathBuf,
    pub warm_path: PathBuf,
    pub cold_endpoint: String,
    pub data_shards: usize,   // default: 4
    pub parity_shards: usize, // default: 2
}


@@ -0,0 +1,11 @@
use std::path::PathBuf;

#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Location {
    Local(Vec<PathBuf>),
    Remote {
        endpoint: String,
        bucket: String,
        keys: Vec<String>,
    },
}


@@ -0,0 +1,7 @@
mod location;
mod storage_tier;
mod object_manifest;
mod config;

pub use config::StorageConfig;
// Re-export Location and StorageTier too: they appear in ObjectManifest's
// public fields, so downstream crates need to be able to name them.
pub use location::Location;
pub use object_manifest::ObjectManifest;
pub use storage_tier::StorageTier;


@@ -0,0 +1,13 @@
use crate::definitions::{location::Location, storage_tier::StorageTier};

#[derive(Debug, Clone)]
pub struct ObjectManifest {
    pub id: String,
    pub tier: StorageTier,
    pub location: Location,
    pub size: u64,
    pub checksum: String,
    pub created_at: u64,
    pub last_accessed: u64,
    pub access_count: u64,
}


@@ -0,0 +1,6 @@
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
pub enum StorageTier {
    Hot = 1,
    Warm = 2,
    Cold = 3,
}
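Because the derive list includes `PartialOrd`/`Ord`, tiers compare in declaration order, which is what promotion and demotion logic can lean on. A minimal standalone sketch (the enum is repeated here so the snippet compiles on its own; the one-level demotion step is an illustration, not a committed design):

```rust
// Copy of the StorageTier enum above so this sketch is self-contained.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
pub enum StorageTier {
    Hot = 1,
    Warm = 2,
    Cold = 3,
}

// Hypothetical one-level demotion step for a tiering engine.
fn demote(tier: StorageTier) -> StorageTier {
    match tier {
        StorageTier::Hot => StorageTier::Warm,
        StorageTier::Warm => StorageTier::Cold,
        StorageTier::Cold => StorageTier::Cold, // already coldest
    }
}

fn main() {
    // Derived Ord follows declaration order: Hot < Warm < Cold.
    assert!(StorageTier::Hot < StorageTier::Warm);
    assert!(StorageTier::Warm < StorageTier::Cold);
    // Demotion always moves toward (or stays at) the colder end.
    assert!(demote(StorageTier::Hot) > StorageTier::Hot);
    assert_eq!(demote(StorageTier::Cold), StorageTier::Cold);
    println!("ok");
}
```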


@@ -0,0 +1,2 @@
pub mod definitions;
pub mod shard;


@@ -0,0 +1,198 @@
use anyhow::Result;
use reed_solomon_erasure::galois_8::ReedSolomon;

const DATA_SHARDS: usize = 4;
const PARITY_SHARDS: usize = 2;
const TOTAL_SHARDS: usize = DATA_SHARDS + PARITY_SHARDS;

pub struct ShardEncoder {
    rs: ReedSolomon,
}

impl ShardEncoder {
    pub fn new() -> Result<Self> {
        Ok(Self {
            rs: ReedSolomon::new(DATA_SHARDS, PARITY_SHARDS)?,
        })
    }

    /// Split bytes into 6 shards (4 data + 2 parity).
    pub fn encode(&self, data: &[u8]) -> Result<Vec<Vec<u8>>> {
        // Chunk size via ceiling division; trailing shards are zero-padded so
        // that all shards have equal length.
        let chunk_size = (data.len() + DATA_SHARDS - 1) / DATA_SHARDS;

        // Split data into DATA_SHARDS equal-length shards. Clamp `start` as
        // well as `end`: for short inputs, a later shard can begin past the
        // end of the data and must be entirely padding (an unclamped start
        // would panic when slicing).
        let mut shards: Vec<Vec<u8>> = (0..DATA_SHARDS)
            .map(|i| {
                let start = (i * chunk_size).min(data.len());
                let end = (start + chunk_size).min(data.len());
                let mut chunk = data[start..end].to_vec();
                chunk.resize(chunk_size, 0); // pad if needed
                chunk
            })
            .collect();

        // Add empty parity shards for the encoder to fill in.
        for _ in 0..PARITY_SHARDS {
            shards.push(vec![0u8; chunk_size]);
        }

        self.rs.encode(&mut shards)?;
        Ok(shards)
    }

    pub fn decode(
        &self,
        shards: Vec<Option<Vec<u8>>>,
        original_size: usize,
    ) -> Result<Vec<u8>> {
        // Reconstruct any missing shards (up to PARITY_SHARDS may be None).
        let mut shards = shards;
        self.rs.reconstruct(&mut shards)?;

        // Reassemble the data shards only (ignore parity).
        let mut result = Vec::with_capacity(original_size);
        for i in 0..DATA_SHARDS {
            if let Some(shard) = &shards[i] {
                result.extend_from_slice(shard);
            }
        }

        // Trim padding back to the original size.
        result.truncate(original_size);
        Ok(result)
    }
}
#[cfg(test)]
mod tests {
    use super::*;

    fn encoder() -> ShardEncoder {
        ShardEncoder::new().unwrap()
    }

    #[test]
    fn test_encode_produces_six_shards() {
        let encoder = encoder();
        let data = b"hello world this is test data!!";

        let shards = encoder.encode(data).unwrap();

        assert_eq!(shards.len(), TOTAL_SHARDS);
    }

    #[test]
    fn test_encode_decode_roundtrip() {
        let encoder = encoder();
        let data = b"hello world this is test data!!";

        let shards = encoder.encode(data).unwrap();
        let wrapped: Vec<Option<Vec<u8>>> = shards.into_iter().map(Some).collect();
        let decoded = encoder.decode(wrapped, data.len()).unwrap();

        assert_eq!(decoded, data);
    }

    #[test]
    fn test_decode_with_one_missing_shard() {
        let encoder = encoder();
        let data = b"hello world this is test data!!";
        let shards = encoder.encode(data).unwrap();

        // Lose shard 0.
        let mut wrapped: Vec<Option<Vec<u8>>> = shards.into_iter().map(Some).collect();
        wrapped[0] = None;

        let decoded = encoder.decode(wrapped, data.len()).unwrap();
        assert_eq!(decoded, data);
    }

    #[test]
    fn test_decode_with_two_missing_shards() {
        let encoder = encoder();
        let data = b"hello world this is test data!!";
        let shards = encoder.encode(data).unwrap();

        // Lose shard 0 and shard 2.
        let mut wrapped: Vec<Option<Vec<u8>>> = shards.into_iter().map(Some).collect();
        wrapped[0] = None;
        wrapped[2] = None;

        let decoded = encoder.decode(wrapped, data.len()).unwrap();
        assert_eq!(decoded, data);
    }

    #[test]
    fn test_decode_with_two_parity_shards_missing() {
        let encoder = encoder();
        let data = b"hello world this is test data!!";
        let shards = encoder.encode(data).unwrap();

        // Lose both parity shards.
        let mut wrapped: Vec<Option<Vec<u8>>> = shards.into_iter().map(Some).collect();
        wrapped[4] = None;
        wrapped[5] = None;

        let decoded = encoder.decode(wrapped, data.len()).unwrap();
        assert_eq!(decoded, data);
    }

    #[test]
    fn test_shards_are_equal_size() {
        let encoder = encoder();
        let data = b"hello world this is test data!!";

        let shards = encoder.encode(data).unwrap();

        let first_size = shards[0].len();
        for shard in &shards {
            assert_eq!(shard.len(), first_size);
        }
    }

    #[test]
    fn test_large_object() {
        let encoder = encoder();
        let data = vec![42u8; 1024 * 1024]; // 1 MiB

        let shards = encoder.encode(&data).unwrap();
        let wrapped: Vec<Option<Vec<u8>>> = shards.into_iter().map(Some).collect();
        let decoded = encoder.decode(wrapped, data.len()).unwrap();

        assert_eq!(decoded, data);
    }

    #[test]
    fn test_three_missing_shards_fails() {
        let encoder = encoder();
        let data = b"hello world this is test data!!";
        let shards = encoder.encode(data).unwrap();

        // Lose 3 shards - beyond what 2 parity shards can recover.
        let mut wrapped: Vec<Option<Vec<u8>>> = shards.into_iter().map(Some).collect();
        wrapped[0] = None;
        wrapped[1] = None;
        wrapped[2] = None;

        let result = encoder.decode(wrapped, data.len());
        assert!(result.is_err());
    }
}
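The padding arithmetic in `encode` can be checked with a small worked example. This standalone sketch restates the encoder's ceiling division as a helper (`chunk_size` is a name introduced here for illustration):

```rust
// Ceiling division: the last data shard is zero-padded up to this size.
fn chunk_size(len: usize, data_shards: usize) -> usize {
    (len + data_shards - 1) / data_shards
}

fn main() {
    // The 31-byte test string used above: ceil(31 / 4) = 8 bytes per shard.
    assert_eq!(chunk_size(31, 4), 8);
    // Six shards of 8 bytes are stored for a 31-byte object, i.e. 48 bytes:
    // a fixed 6/4 = 1.5x overhead plus up to data_shards - 1 bytes of padding.
    assert_eq!(chunk_size(31, 4) * 6, 48);
    // The 1 MiB test object divides evenly: 262144 bytes per shard, no padding.
    assert_eq!(chunk_size(1024 * 1024, 4), 262_144);
    println!("ok");
}
```

The 1.5x storage overhead is the trade for tolerating any two lost shards, versus 3x for naive triple replication with the same fault tolerance.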


@@ -0,0 +1,6 @@
[package]
name = "stratum-tiering"
version = "0.1.0"
edition = "2024"

[dependencies]


@@ -0,0 +1,14 @@
pub fn add(left: u64, right: u64) -> u64 {
    left + right
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_works() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }
}

src/stratum/Cargo.toml

@@ -0,0 +1,6 @@
[package]
name = "stratum"
version = "0.1.0"
edition = "2024"

[dependencies]

src/stratum/src/main.rs

@@ -0,0 +1,3 @@
fn main() {
    println!("Hello, world!");
}

todo.md

@@ -0,0 +1,123 @@
# Stratum — TODO
## Immediate Next Session
### 1. `stratum-storage` — Volume Layer
- [ ] `config.rs` — StorageConfig with hot/warm/cold paths
- [x] `tier.rs` — StorageTier enum (Hot, Warm, Cold)
- [x] `location.rs` — Location + ShardLocation enums (Local/Remote/Mixed)
- [x] `manifest.rs` — Volume struct with access tracking fields
- [ ] `store.rs` — VolumeStore (in-memory HashMap for now)
- [ ] `shard.rs` — async read/write/delete shard files via tokio::fs
- [ ] Tests for all of the above
### 2. `stratum-metadata` — Bucket + Key Mapping
- [ ] Sled-backed metadata store
- [ ] Bucket operations (create, delete, exists, list)
- [ ] Key → Volume ID mapping (put, get, delete, list)
- [ ] Tests for all of the above
### 3. Wire Storage Into API Handlers (bottom-up)
- [ ] `CreateBucket` → 200 (create metadata entry)
- [ ] `ListBuckets` → 200 + XML response
- [ ] `PutObject` → 200 (write shard, create volume, store mapping)
- [ ] `GetObject` → 200 + stream bytes (read shard via volume location)
- [ ] `DeleteObject` → 204 (delete shard + metadata)
- [ ] `HeadObject` → 200 + metadata headers only
- [ ] `ListObjectsV2` → 200 + XML response
- [ ] Multipart (last, most complex)
### 4. XML Responses
- [ ] `xml/responses.rs` — ListBuckets XML
- [ ] `xml/responses.rs` — ListObjectsV2 XML
- [ ] `xml/responses.rs` — Error XML (replace current plain text)
- [ ] `xml/responses.rs` — InitiateMultipartUploadResult XML
- [ ] `xml/responses.rs` — CompleteMultipartUploadResult XML
---
## Backlog (Implement After Core Works)
### S3 Compatibility
- [ ] AWS Signature V4 validation (`stratum-auth`)
- [ ] ETag generation (MD5 for single part, MD5-of-MD5s for multipart)
- [ ] Content-MD5 header validation on PUT
- [ ] Bucket naming validation (3-63 chars, lowercase, no underscores)
- [ ] `GetBucketLocation` endpoint
- [ ] `CopyObject` endpoint
- [ ] Virtual-hosted style URLs (bucket.host/key)
- [ ] Range request support (critical for video streaming)
- [ ] Conditional requests (If-None-Match, If-Modified-Since)
### Storage
- [ ] Erasure coding integration (reed-solomon-erasure)
- [ ] Shard distribution across multiple disks/directories
- [ ] Checksum verification on read
- [ ] Atomic writes (write to temp, rename to final)
- [ ] Multipart upload temporary shard storage
- [ ] Multipart upload cleanup on abort
### Testing
- [ ] Run MinIO s3-tests compliance suite against server
- [ ] Test with awscli (`--no-sign-request` flag)
- [ ] Test with rclone
- [ ] Test with aws-sdk-rust
- [ ] Coverage report via cargo-tarpaulin
- [ ] Helper function refactor for query param extraction (backlogged from parser)
### Binary (`stratum`)
- [ ] `main.rs` — start axum server
- [ ] Config file loading (toml)
- [ ] CLI args (port, config path, data dir)
- [ ] Graceful shutdown
- [ ] Structured logging via tracing
---
## Phase 2 Backlog — Geo Distribution
- [ ] Node discovery and membership
- [ ] Raft consensus via openraft (metadata only)
- [ ] Consistent hashing for object placement
- [ ] Shard distribution across geographic nodes
- [ ] Node failure detection and recovery
- [ ] Replication lag monitoring
---
## Phase 3 Backlog — Intelligent Tiering
- [ ] Access frequency tracking (exponential moving average)
- [ ] Spike detection (sudden 10x access increase → promote immediately)
- [ ] Time-of-day pattern recognition
- [ ] Decay function (not accessed in 48h → demote)
- [ ] MIME type classification (pre-trained ONNX model)
- [ ] Range request pattern detection (video streaming awareness)
- [ ] Tier promotion/demotion engine
- [ ] Warmup period (observe 7 days before making tier decisions)
- [ ] Developer priority hints via object metadata
- [ ] Transparency API (why is this object in this tier?)
- [ ] Prometheus metrics endpoint
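The access-frequency item above could be backed by an exponential moving average; the sketch below is a hypothetical illustration (the `ema_update` helper, the `alpha = 0.3` smoothing factor, and the thresholds are assumptions, not part of the codebase):

```rust
// Exponential moving average of an object's access rate (illustration only).
// `alpha` controls how fast old history decays; 0.3 is an arbitrary choice.
fn ema_update(previous: f64, observed: f64, alpha: f64) -> f64 {
    alpha * observed + (1.0 - alpha) * previous
}

fn main() {
    let alpha = 0.3;
    let mut rate = 0.0;
    // A burst of accesses drives the average up quickly (promotion signal)...
    for _ in 0..5 {
        rate = ema_update(rate, 10.0, alpha);
    }
    assert!(rate > 8.0);
    // ...and quiet periods decay it back toward zero (demotion signal).
    for _ in 0..10 {
        rate = ema_update(rate, 0.0, alpha);
    }
    assert!(rate < 0.5);
    println!("ok");
}
```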
---
## Phase 4 Backlog — Managed Service
- [ ] Multi-tenant isolation
- [ ] Grafana dashboard
- [ ] Alerting (disk usage, node health, replication lag)
- [ ] Billing metrics
- [ ] BSI C5 certification process
- [ ] ISO 27001 certification process
- [ ] SLA definition and monitoring
- [ ] Enterprise support tier
---
## Known Issues / Technical Debt
- [ ] `VolumeStore` is currently in-memory only — needs sled persistence
- [ ] Error responses return plain text — should return S3 XML format
- [ ] No auth middleware yet — all requests accepted unsigned
- [ ] `StorageConfig` cold tier credentials need secure storage solution
- [ ] Query param helper functions (opt_string, opt_parse) backlogged from parser refactor