Changed license from MIT to Apache + updated README + added TODO
Cargo.lock (generated): 2 additions

@@ -381,6 +381,8 @@ version = "0.1.0"
 dependencies = [
  "axum",
  "hyper",
+ "tokio",
+ "tower",
 ]

 [[package]]
LICENSE: MIT replaced with Apache 2.0 (18 → 187 lines)

@@ -1,18 +1,187 @@
-MIT License
-
-Copyright (c) 2026 gsh-digital-services
-
-Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
-associated documentation files (the "Software"), to deal in the Software without restriction, including
-without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the
-following conditions:
-
-The above copyright notice and this permission notice shall be included in all copies or substantial
-portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
-LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO
-EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
-IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
-USE OR OTHER DEALINGS IN THE SOFTWARE.
+Apache License
+Version 2.0, January 2004
+http://www.apache.org/licenses/
+
+TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+1. Definitions.
+
+"License" shall mean the terms and conditions for use, reproduction,
+and distribution as defined by Sections 1 through 9 of this document.
+
+"Licensor" shall mean the copyright owner or entity authorized by
+the copyright owner that is granting the License.
+
+"Legal Entity" shall mean the union of the acting entity and all
+other entities that control, are controlled by, or are under common
+control with that entity. For the purposes of this definition,
+"control" means (i) the power, direct or indirect, to cause the
+direction or management of such entity, whether by contract or
+otherwise, or (ii) ownership of fifty percent (50%) or more of the
+outstanding shares, or (iii) beneficial ownership of such entity.
+
+"You" (or "Your") shall mean an individual or Legal Entity
+exercising permissions granted by this License.
+
+"Source" form shall mean the preferred form for making modifications,
+including but not limited to software source code, documentation
+source, and configuration files.
+
+"Object" form shall mean any form resulting from mechanical
+transformation or translation of a Source form, including but
+not limited to compiled object code, generated documentation,
+and conversions to other media types.
+
+"Work" shall mean the work of authorship made available under
+the License, as indicated by a copyright notice that is included in
+or attached to the work (an example is provided in the Appendix below).
+
+"Derivative Works" shall mean any work, whether in Source or Object
+form, that is based on (or derived from) the Work and for which the
+editorial revisions, annotations, elaborations, or other modifications
+represent, as a whole, an original work of authorship. For the purposes
+of this License, Derivative Works shall not include works that remain
+separable from, or merely link (or bind by name) to the interfaces of,
+the Work and Derivative Works thereof.
+
+"Contribution" shall mean, as submitted to the Licensor for inclusion
+in the Work by the copyright owner or by an individual or Legal Entity
+authorized to submit on behalf of the copyright owner. For the purposes
+of this definition, "submitted" means any form of electronic, verbal,
+or written communication sent to the Licensor or its representatives,
+including but not limited to communication on electronic mailing lists,
+source code control systems, and issue tracking systems that are managed
+by, or on behalf of, the Licensor for the purpose of discussing and
+improving the Work, but excluding communication that is conspicuously
+marked or designated in writing by the copyright owner as "Not a
+Contribution."
+
+"Contributor" shall mean Licensor and any Legal Entity on behalf of
+whom a Contribution has been received by the Licensor and subsequently
+incorporated within the Work.
+
+2. Grant of Copyright License. Subject to the terms and conditions of
+this License, each Contributor hereby grants to You a perpetual,
+worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+copyright license to reproduce, prepare Derivative Works of,
+publicly display, publicly perform, sublicense, and distribute the
+Work and such Derivative Works in Source or Object form.
+
+3. Grant of Patent License. Subject to the terms and conditions of
+this License, each Contributor hereby grants to You a perpetual,
+worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+(except as stated in this section) patent license to make, have made,
+use, offer to sell, sell, import, and otherwise transfer the Work,
+where such license applies only to those patent claims licensable
+by such Contributor that are necessarily infringed by their
+Contribution(s) alone or by combination of their Contribution(s)
+with the Work to which such Contribution(s) was submitted. If You
+institute patent litigation against any entity (including a cross-claim
+or counterclaim in a lawsuit) alleging that the Work or a Contribution
+incorporated within the Work constitutes direct or contributory patent
+infringement, then any patent licenses granted to You under this License
+for that Work shall terminate as of the date such litigation is filed.
+
+4. Redistribution. You may reproduce and distribute copies of the Work
+or Derivative Works thereof in any medium, with or without
+modifications, and in Source or Object form, provided that You meet
+the following conditions:
+
+(a) You must give any other recipients of the Work or Derivative Works
+a copy of this License; and
+
+(b) You must cause any modified files to carry prominent notices
+stating that You changed the files; and
+
+(c) You must retain, in the Source form of any Derivative Works that
+You distribute, all copyright, patent, trademark, and attribution
+notices from the Source form of the Work, excluding those notices
+that do not pertain to any part of the Derivative Works; and
+
+(d) If the Work includes a "NOTICE" text file as part of its
+distribution, You must include a readable copy of the attribution
+notices contained within such NOTICE file, in at least one of the
+following places: within a NOTICE text file distributed as part of
+the Derivative Works; within the Source form or documentation, if
+provided along with the Derivative Works; or, within a display
+generated by the Derivative Works, if and wherever such third-party
+notices normally appear. The contents of the NOTICE file are for
+informational purposes only and do not modify the License. You may
+add Your own attribution notices within Derivative Works that You
+distribute, alongside or as an addendum to the NOTICE text from
+the Work, provided that such additional attribution notices cannot
+be construed as modifying the License.
+
+You may add Your own license statement for Your modifications and
+may provide additional or different license terms and conditions for
+use, reproduction, or distribution of Your modifications, or for such
+Derivative Works as a whole, provided Your use, reproduction, and
+distribution of the Work otherwise complies with the conditions stated
+in this License.
+
+5. Submission of Contributions. Unless You explicitly state otherwise,
+any Contribution intentionally submitted for inclusion in the Work
+by You to the Licensor shall be under the terms and conditions of
+this License, without any additional terms or conditions.
+Notwithstanding the above, nothing herein shall supersede or modify
+the terms of any separate license agreement you may have executed
+with Licensor regarding such Contributions.
+
+6. Trademarks. This License does not grant permission to use the trade
+names, trademarks, service marks, or product names of the Licensor,
+except as required for reasonable and customary use in describing the
+origin of the Work and reproducing the content of the NOTICE file.
+
+7. Disclaimer of Warranty. Unless required by applicable law or agreed
+to in writing, Licensor provides the Work (and each Contributor
+provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES
+OR CONDITIONS OF ANY KIND, either express or implied, including,
+without limitation, any warranties or conditions of TITLE,
+NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR
+PURPOSE. You are solely responsible for determining the
+appropriateness of using or reproducing the Work and assume any
+risks associated with Your exercise of permissions under this License.
+
+8. Limitation of Liability. In no event and under no legal theory,
+whether in tort (including negligence), contract, or otherwise,
+unless required by applicable law (such as deliberate and grossly
+negligent acts) or agreed to in writing, shall any Contributor be
+liable to You for damages, including any direct, indirect, special,
+incidental, or exemplary damages of any character arising as a result
+of this License or out of the use or inability to use the Work
+(including but not limited to damages for loss of goodwill, work
+stoppage, computer failure or malfunction, or all other commercial
+damages or losses), even if such Contributor has been advised of the
+possibility of such damages.
+
+9. Accepting Warranty or Additional Liability. While redistributing the
+Work or Derivative Works thereof, You may choose to offer, and charge
+a fee for, acceptance of support, warranty, indemnity, or other
+liability obligations and/or rights consistent with this License.
+However, in accepting such obligations, You may offer only conditions
+consistent with this License.
+
+END OF TERMS AND CONDITIONS
+
+APPENDIX: How to apply the Apache License to your work.
+
+To apply the Apache License to your work, attach the following
+boilerplate notice, with the fields enclosed by brackets "[]"
+replaced with your own identifying information. (Don't include
+the brackets!) The text should be enclosed in the appropriate
+comment syntax for the file format in question. It is recommended
+that a file be included in the same directory as the source files.
+
+Copyright 2026 GSH Digital Services
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+implied. See the License for the specific language governing
+permissions and limitations under the License.
README.md: rewritten (3 → 199 lines)

# Stratum

> Self-optimizing, S3-compatible object storage with autonomous intelligent tiering — built for EU data sovereignty.

---

## What Is Stratum?

Stratum is an open-source, S3-compatible object storage server written in Rust. Unlike most S3-compatible storage solutions, Stratum autonomously moves objects between storage tiers (hot/warm/cold) based on observed access patterns — with zero configuration required.

Point any S3-compatible client at it. It gets smarter over time.

---

## Why Stratum?

| | AWS S3 Intelligent-Tiering | MinIO | Garage | **Stratum** |
|---|---|---|---|---|
| S3 compatible | ✅ | ✅ | ✅ | ✅ |
| Autonomous tiering | ✅ (black box) | ❌ | ❌ | ✅ (transparent) |
| EU sovereign | ❌ (CLOUD Act) | ❌ | ✅ | ✅ |
| Open source | ❌ | ☠️ Archived | ✅ | ✅ |
| Transparent tier reasoning | ❌ | ❌ | ❌ | ✅ |
| Self-hosted | ❌ | ✅ | ✅ | ✅ |

MinIO was archived in February 2026. RustFS is alpha. Garage targets geo-distribution only. **The space for a production-ready, intelligent, EU-sovereign S3 server is open.**

---

## Architecture

Stratum is a Cargo workspace split into focused crates:

```
stratum/
├── src/
│   ├── stratum/          → binary — wires everything together
│   ├── stratum-api-s3/   → S3 API layer (routes, handlers, auth)
│   ├── stratum-storage/  → volume management, tier logic, shard I/O
│   ├── stratum-metadata/ → bucket/key → volume mapping (sled)
│   ├── stratum-tiering/  → tier decision engine
│   ├── stratum-auth/     → AWS Signature V4 validation
│   └── stratum-core/     → shared types and config
```

### Storage Model

Objects are not stored directly by key. Keys point to **volumes**. Volumes hold the actual data and can live on any tier:

```
bucket/key → volume_id → Volume {
    tier: Hot | Warm | Cold
    location: Local(path) | Remote(url)
    size, checksum
    last_accessed, access_count   ← tiering signals
}
```

When tiering promotes or demotes an object, only the volume location changes. The key never moves. Clients never know.
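
The indirection above can be sketched in Rust (a minimal illustration: only the `Volume` fields come from this README, while the `Metadata` store, the volume id, and the paths are invented for the example):

```rust
// Sketch of the key → volume_id → Volume indirection. Re-tiering mutates
// the Volume in place; the (bucket, key) → volume_id mapping is untouched.
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
enum Tier { Hot, Warm, Cold }

#[derive(Debug, Clone, PartialEq)]
enum Location { Local(String), Remote(String) }

#[derive(Debug, Clone)]
struct Volume {
    tier: Tier,
    location: Location,
    size: u64,
    checksum: String,
    last_accessed: u64, // tiering signal
    access_count: u64,  // tiering signal
}

struct Metadata {
    // (bucket, key) → volume_id: this mapping never changes on re-tiering
    keys: HashMap<(String, String), u64>,
    volumes: HashMap<u64, Volume>,
}

impl Metadata {
    fn demote(&mut self, volume_id: u64, to: Tier, location: Location) {
        if let Some(v) = self.volumes.get_mut(&volume_id) {
            v.tier = to;
            v.location = location; // only the volume's placement changes
        }
    }
}

// Build one entry, demote it, and show the key still resolves to the same id.
fn lookup_after_demote() -> (u64, Tier) {
    let mut md = Metadata { keys: HashMap::new(), volumes: HashMap::new() };
    md.keys.insert(("photos".into(), "2024/beach.jpg".into()), 7);
    md.volumes.insert(7, Volume {
        tier: Tier::Hot,
        location: Location::Local("/nvme/vol7".into()),
        size: 1024,
        checksum: "abc123".into(),
        last_accessed: 0,
        access_count: 42,
    });
    md.demote(7, Tier::Cold, Location::Remote("s3://cold/vol7".into()));
    let id = md.keys[&("photos".to_string(), "2024/beach.jpg".to_string())];
    (id, md.volumes[&id].tier.clone())
}

fn main() {
    let (id, tier) = lookup_after_demote();
    assert_eq!(id, 7); // key → volume_id unchanged after demotion
    assert_eq!(tier, Tier::Cold);
}
```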

### Storage Tiers

```
Hot  → NVMe/SSD  — frequently accessed objects, lowest latency
Warm → HDD       — infrequently accessed, medium cost
Cold → Remote S3 — rarely accessed, cheapest (B2, R2, AWS, Garage...)
```
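
As an illustration of how the two tiering signals might drive placement, here is a hedged sketch. The thresholds are invented for the example; the real rules live in `stratum-tiering`:

```rust
// Toy tier decision based on the signals named above: seconds since last
// access and total access count. Thresholds are illustrative only.
#[derive(Debug, PartialEq)]
enum Tier { Hot, Warm, Cold }

fn decide_tier(secs_since_access: u64, access_count: u64) -> Tier {
    const DAY: u64 = 86_400;
    match (secs_since_access, access_count) {
        (s, n) if s < DAY && n >= 10 => Tier::Hot, // recent and popular
        (s, _) if s < 30 * DAY => Tier::Warm,      // touched this month
        _ => Tier::Cold,                           // rarely accessed
    }
}

fn main() {
    assert_eq!(decide_tier(3_600, 50), Tier::Hot);
    assert_eq!(decide_tier(7 * 86_400, 2), Tier::Warm);
    assert_eq!(decide_tier(90 * 86_400, 1), Tier::Cold);
}
```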

### Erasure Coding

Stratum uses Reed-Solomon erasure coding (4 data + 2 parity shards) instead of replication. This gives:

```
3x replication: 3.0x storage overhead, lose 1 node
4+2 erasure:    1.5x storage overhead, lose any 2 nodes
```

Each object is split into shards. Shards are distributed across nodes/disks. Loss of any 2 shards is fully recoverable.
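
The overhead and durability arithmetic above can be checked with a small sketch (this is only the shard-count bookkeeping, not the Reed-Solomon field math):

```rust
// With k data + m parity shards, storage overhead is (k + m) / k, and any
// k surviving shards reconstruct the object, i.e. up to m losses tolerated.
fn overhead(data: usize, parity: usize) -> f64 {
    (data + parity) as f64 / data as f64
}

fn recoverable(surviving: usize, data: usize) -> bool {
    surviving >= data
}

fn main() {
    assert_eq!(overhead(4, 2), 1.5); // 4+2 erasure: 1.5x
    assert_eq!(overhead(1, 2), 3.0); // 3x replication as 1 data + 2 copies
    assert!(recoverable(4, 4));      // lose any 2 of 6 shards: still fine
    assert!(!recoverable(3, 4));     // lose 3 of 6: unrecoverable
}
```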

---

## S3 API Coverage

### Implemented (routing layer)

All routes are defined and return `501 Not Implemented` until handlers are built.
| Operation | Method | Status |
|---|---|---|
| ListBuckets | GET / | 🔲 Stub |
| CreateBucket | PUT /{bucket} | 🔲 Stub |
| DeleteBucket | DELETE /{bucket} | 🔲 Stub |
| HeadBucket | HEAD /{bucket} | 🔲 Stub |
| ListObjectsV2 | GET /{bucket} | 🔲 Stub |
| GetObject | GET /{bucket}/{*key} | 🔲 Stub |
| PutObject | PUT /{bucket}/{*key} | 🔲 Stub |
| DeleteObject | DELETE /{bucket}/{*key} | 🔲 Stub |
| HeadObject | HEAD /{bucket}/{*key} | 🔲 Stub |
| CreateMultipartUpload | POST /{bucket}/{*key}?uploads | 🔲 Stub |
| UploadPart | PUT /{bucket}/{*key}?partNumber&uploadId | 🔲 Stub |
| CompleteMultipartUpload | POST /{bucket}/{*key}?uploadId | 🔲 Stub |
| AbortMultipartUpload | DELETE /{bucket}/{*key}?uploadId | 🔲 Stub |
### Endpoint Parser

All S3 endpoints are parsed from raw HTTP requests into typed `Endpoint` enum variants before reaching handlers. Query parameters disambiguate operations sharing the same route (e.g. `UploadPart` vs `PutObject`).
### Error Handling

S3-compatible error types defined:

- `BucketNotFound` → 404
- `ObjectNotFound` → 404
- `BucketAlreadyExists` → 409
- `InvalidArgument` → 400
- `InvalidBucketName` → 400
- `AuthorizationFailed` → 403
- `MissingAuthHeader` → 401
- `InternalError` → 500
- `NotImplemented` → 501
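
A minimal sketch of this error-to-status mapping (the enum shape is simplified; only `InvalidArgument` is shown carrying a message, as it does in the parser code):

```rust
// The variants and status codes come from the list above; everything else
// is illustrative, not Stratum's actual ApiError type.
enum ApiError {
    BucketNotFound,
    ObjectNotFound,
    BucketAlreadyExists,
    InvalidArgument(String),
    InvalidBucketName,
    AuthorizationFailed,
    MissingAuthHeader,
    InternalError,
    NotImplemented,
}

fn status_code(e: &ApiError) -> u16 {
    match e {
        ApiError::BucketNotFound | ApiError::ObjectNotFound => 404,
        ApiError::BucketAlreadyExists => 409,
        ApiError::InvalidArgument(_) | ApiError::InvalidBucketName => 400,
        ApiError::AuthorizationFailed => 403,
        ApiError::MissingAuthHeader => 401,
        ApiError::InternalError => 500,
        ApiError::NotImplemented => 501,
    }
}

fn main() {
    assert_eq!(status_code(&ApiError::BucketNotFound), 404);
    assert_eq!(status_code(&ApiError::InvalidArgument("missing partNumber".into())), 400);
    assert_eq!(status_code(&ApiError::NotImplemented), 501);
}
```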

---

## Design Principles

- **KISS** — no macros where plain match arms work
- **Bottom-up** — storage layer before API layer
- **TDD** — tests written before implementation
- **One concern per file** — enum definitions separate from parsing logic
- **No lifetime annotations** — owned types throughout for maintainability
- **`cargo fmt` always** — enforced formatting
---

## Testing

```bash
# run all tests
cargo test

# run specific crate
cargo test -p stratum-api-s3

# coverage report
cargo tarpaulin -p stratum-api-s3 --out Html
```

### Test Layers

```
Unit tests        → endpoint parser, individual functions
Integration tests → axum routes, full HTTP request/response
E2E tests         → awscli + rclone against running server (planned)
```
---

## Development Setup

```bash
git clone https://github.com/gsh-digital/stratum
cd stratum
cargo build
cargo test
```

### Requirements

- Rust 1.75+
- cargo

### Tested On

- Linux x86_64
- Linux aarch64 (Raspberry Pi 4) ← primary dev/test bench
---

## Roadmap

### Phase 1 — Core S3 Server (current)
> Goal: pass MinIO s3-tests suite at >95%, work with awscli and rclone out of the box

### Phase 2 — Geo Distribution
> Goal: multi-node replication across geographic regions with Raft consensus

### Phase 3 — Intelligent Tiering
> Goal: autonomous object movement between hot/warm/cold based on access patterns

### Phase 4 — Managed Service
> Goal: GSH Digital Services hosted offering with Grafana monitoring
---

## License

Apache 2.0 — see LICENSE

---

## By

**GSH Digital Services**
Author: [Soliman, Ramez](mailto:r.soliman@gsh-services.com)

Building EU-sovereign infrastructure that doesn't cost like AWS and doesn't require a PhD to operate.
Cargo.toml: 2 dependencies added

@@ -5,4 +5,6 @@ edition = "2024"

 [dependencies]
 axum.workspace = true
 hyper.workspace = true
+tower.workspace = true
+tokio.workspace = true
Endpoint definitions (`pub enum Endpoint`): formatting-only change

@@ -50,4 +50,4 @@ pub enum Endpoint {
         key: String,
         upload_id: String,
     },
 }
Endpoint module (`mod.rs`): dropped a trailing blank line

@@ -2,4 +2,3 @@ mod definitions;
 mod parser;

 pub use definitions::Endpoint;
-
src/stratum-api-s3/src/endpoint/parser.rs: rustfmt cleanup

@@ -1,18 +1,14 @@
-use std::collections::HashMap;
-use hyper::Method;
 use super::definitions::Endpoint;
 use crate::errors::ApiError;
+use hyper::Method;
+use std::collections::HashMap;

 pub fn parse_endpoint(
     method: &Method,
     path: &str,
     query: &HashMap<String, String>,
 ) -> Result<Endpoint, ApiError> {
-    let segments: Vec<&str> = path
-        .trim_start_matches('/')
-        .splitn(2, '/')
-        .collect();
+    let segments: Vec<&str> = path.trim_start_matches('/').splitn(2, '/').collect();

     let bucket = segments.get(0).copied().unwrap_or("");
     let key = segments.get(1).copied().unwrap_or("");
@@ -28,8 +24,7 @@ pub fn parse_endpoint(
         (&Method::GET, b, "") if !b.is_empty() => Ok(Endpoint::ListObjectsV2 {
             delimiter: query.get("delimiter").cloned(),
             prefix: query.get("prefix").cloned(),
-            max_keys: query.get("max-keys")
-                .and_then(|v| v.parse().ok()),
+            max_keys: query.get("max-keys").and_then(|v| v.parse().ok()),
             continuation_token: query.get("continuation-token").cloned(),
         }),

@@ -37,8 +32,7 @@ pub fn parse_endpoint(
         (&Method::GET, _, k) if !k.is_empty() => Ok(Endpoint::GetObject {
             key: k.to_string(),
             version_id: query.get("versionId").cloned(),
-            part_number: query.get("partNumber")
-                .and_then(|v| v.parse().ok()),
+            part_number: query.get("partNumber").and_then(|v| v.parse().ok()),
         }),
         (&Method::PUT, _, k) if !k.is_empty() => {
             // distinguish UploadPart from PutObject
@@ -46,16 +40,15 @@ pub fn parse_endpoint(
                 Ok(Endpoint::UploadPart {
                     key: k.to_string(),
                     upload_id: upload_id.clone(),
-                    part_number: query.get("partNumber")
+                    part_number: query
+                        .get("partNumber")
                         .and_then(|v| v.parse().ok())
                         .ok_or(ApiError::InvalidArgument("missing partNumber".into()))?,
                 })
             } else {
-                Ok(Endpoint::PutObject {
-                    key: k.to_string(),
-                })
+                Ok(Endpoint::PutObject { key: k.to_string() })
             }
-        },
+        }
         (&Method::DELETE, _, k) if !k.is_empty() => {
             if let Some(upload_id) = query.get("uploadId") {
                 Ok(Endpoint::AbortMultipartUpload {
|
|||||||
version_id: query.get("versionId").cloned(),
|
version_id: query.get("versionId").cloned(),
|
||||||
})
|
})
|
||||||
}
|
}
|
||||||
},
|
}
|
||||||
(&Method::HEAD, _, k) if !k.is_empty() => Ok(Endpoint::HeadObject {
|
(&Method::HEAD, _, k) if !k.is_empty() => Ok(Endpoint::HeadObject {
|
||||||
key: k.to_string(),
|
key: k.to_string(),
|
||||||
version_id: query.get("versionId").cloned(),
|
version_id: query.get("versionId").cloned(),
|
||||||
part_number: query.get("partNumber")
|
part_number: query.get("partNumber").and_then(|v| v.parse().ok()),
|
||||||
.and_then(|v| v.parse().ok()),
|
|
||||||
}),
|
}),
|
||||||
(&Method::POST, _, k) if !k.is_empty() => {
|
(&Method::POST, _, k) if !k.is_empty() => {
|
||||||
if query.contains_key("uploads") {
|
if query.contains_key("uploads") {
|
||||||
Ok(Endpoint::CreateMultipartUpload {
|
Ok(Endpoint::CreateMultipartUpload { key: k.to_string() })
|
||||||
key: k.to_string(),
|
|
||||||
})
|
|
||||||
} else if let Some(upload_id) = query.get("uploadId") {
|
} else if let Some(upload_id) = query.get("uploadId") {
|
||||||
Ok(Endpoint::CompleteMultipartUpload {
|
Ok(Endpoint::CompleteMultipartUpload {
|
||||||
key: k.to_string(),
|
key: k.to_string(),
|
||||||
@@ -88,18 +78,15 @@ pub fn parse_endpoint(
             } else {
                 Err(ApiError::InvalidArgument("unknown POST operation".into()))
             }
-        },
+        }

-        _ => Err(ApiError::InvalidArgument(
-            format!("unknown endpoint: {} {}", method, path)
-        )),
+        _ => Err(ApiError::InvalidArgument(format!(
+            "unknown endpoint: {} {}",
+            method, path
+        ))),
     }
 }

-// src/stratum-api-s3/src/endpoint/parser.rs
-
-// ... your existing parse_endpoint function ...
-
 #[cfg(test)]
 mod tests {
     use super::*;
@@ -111,7 +98,10 @@ mod tests {
     }

     fn query(pairs: &[(&str, &str)]) -> HashMap<String, String> {
-        pairs.iter().map(|(k, v)| (k.to_string(), v.to_string())).collect()
+        pairs
+            .iter()
+            .map(|(k, v)| (k.to_string(), v.to_string()))
+            .collect()
     }

     // Service level
@@ -143,82 +133,110 @@ mod tests {
     #[test]
     fn test_list_objects_v2_empty() {
         let result = parse_endpoint(&Method::GET, "/my-bucket", &empty_query());
-        assert_eq!(result.unwrap(), Endpoint::ListObjectsV2 {
-            delimiter: None,
-            prefix: None,
-            max_keys: None,
-            continuation_token: None,
-        });
+        assert_eq!(
+            result.unwrap(),
+            Endpoint::ListObjectsV2 {
+                delimiter: None,
+                prefix: None,
+                max_keys: None,
+                continuation_token: None,
+            }
+        );
     }

     #[test]
     fn test_list_objects_v2_with_prefix() {
         let q = query(&[("prefix", "photos/"), ("max-keys", "100")]);
         let result = parse_endpoint(&Method::GET, "/my-bucket", &q);
-        assert_eq!(result.unwrap(), Endpoint::ListObjectsV2 {
-            delimiter: None,
-            prefix: Some("photos/".to_string()),
-            max_keys: Some(100),
-            continuation_token: None,
-        });
+        assert_eq!(
+            result.unwrap(),
+            Endpoint::ListObjectsV2 {
+                delimiter: None,
+                prefix: Some("photos/".to_string()),
+                max_keys: Some(100),
+                continuation_token: None,
+            }
+        );
     }

     // Object level
     #[test]
     fn test_get_object() {
         let result = parse_endpoint(&Method::GET, "/my-bucket/photo.jpg", &empty_query());
-        assert_eq!(result.unwrap(), Endpoint::GetObject {
-            key: "photo.jpg".to_string(),
-            version_id: None,
-            part_number: None,
-        });
+        assert_eq!(
+            result.unwrap(),
+            Endpoint::GetObject {
+                key: "photo.jpg".to_string(),
+                version_id: None,
+                part_number: None,
+            }
+        );
     }

     #[test]
     fn test_get_object_nested_key() {
-        let result = parse_endpoint(&Method::GET, "/my-bucket/photos/2024/beach.jpg", &empty_query());
-        assert_eq!(result.unwrap(), Endpoint::GetObject {
-            key: "photos/2024/beach.jpg".to_string(), // full path preserved
-            version_id: None,
-            part_number: None,
-        });
+        let result = parse_endpoint(
+            &Method::GET,
+            "/my-bucket/photos/2024/beach.jpg",
+            &empty_query(),
+        );
+        assert_eq!(
+            result.unwrap(),
+            Endpoint::GetObject {
+                key: "photos/2024/beach.jpg".to_string(), // full path preserved
+                version_id: None,
+                part_number: None,
+            }
+        );
     }

     #[test]
     fn test_put_object() {
         let result = parse_endpoint(&Method::PUT, "/my-bucket/photo.jpg", &empty_query());
-        assert_eq!(result.unwrap(), Endpoint::PutObject {
-            key: "photo.jpg".to_string(),
-        });
+        assert_eq!(
+            result.unwrap(),
+            Endpoint::PutObject {
+                key: "photo.jpg".to_string(),
+            }
+        );
     }

     #[test]
     fn test_delete_object() {
         let result = parse_endpoint(&Method::DELETE, "/my-bucket/photo.jpg", &empty_query());
-        assert_eq!(result.unwrap(), Endpoint::DeleteObject {
-            key: "photo.jpg".to_string(),
-            version_id: None,
-        });
+        assert_eq!(
+            result.unwrap(),
+            Endpoint::DeleteObject {
+                key: "photo.jpg".to_string(),
+                version_id: None,
+            }
+        );
     }

     #[test]
     fn test_delete_object_with_version() {
         let q = query(&[("versionId", "abc123")]);
         let result = parse_endpoint(&Method::DELETE, "/my-bucket/photo.jpg", &q);
-        assert_eq!(result.unwrap(), Endpoint::DeleteObject {
-            key: "photo.jpg".to_string(),
-            version_id: Some("abc123".to_string()),
-        });
+        assert_eq!(
+            result.unwrap(),
+            Endpoint::DeleteObject {
+                key: "photo.jpg".to_string(),
+                version_id: Some("abc123".to_string()),
+            }
+        );
     }

     #[test]
     fn test_head_object() {
         let result = parse_endpoint(&Method::HEAD, "/my-bucket/photo.jpg", &empty_query());
-        assert_eq!(result.unwrap(), Endpoint::HeadObject {
-            key: "photo.jpg".to_string(),
-            version_id: None,
-            part_number: None,
-        });
+        assert_eq!(
+            result.unwrap(),
+            Endpoint::HeadObject {
+                key: "photo.jpg".to_string(),
+                version_id: None,
+                part_number: None,
+            }
+        );
     }

     // Multipart
@@ -226,40 +244,52 @@ mod tests {
    fn test_create_multipart_upload() {
        let q = query(&[("uploads", "")]);
        let result = parse_endpoint(&Method::POST, "/my-bucket/video.mp4", &q);
        assert_eq!(
            result.unwrap(),
            Endpoint::CreateMultipartUpload {
                key: "video.mp4".to_string(),
            }
        );
    }

    #[test]
    fn test_upload_part() {
        let q = query(&[("partNumber", "1"), ("uploadId", "abc123")]);
        let result = parse_endpoint(&Method::PUT, "/my-bucket/video.mp4", &q);
        assert_eq!(
            result.unwrap(),
            Endpoint::UploadPart {
                key: "video.mp4".to_string(),
                part_number: 1,
                upload_id: "abc123".to_string(),
            }
        );
    }

    #[test]
    fn test_complete_multipart_upload() {
        let q = query(&[("uploadId", "abc123")]);
        let result = parse_endpoint(&Method::POST, "/my-bucket/video.mp4", &q);
        assert_eq!(
            result.unwrap(),
            Endpoint::CompleteMultipartUpload {
                key: "video.mp4".to_string(),
                upload_id: "abc123".to_string(),
            }
        );
    }

    #[test]
    fn test_abort_multipart_upload() {
        let q = query(&[("uploadId", "abc123")]);
        let result = parse_endpoint(&Method::DELETE, "/my-bucket/video.mp4", &q);
        assert_eq!(
            result.unwrap(),
            Endpoint::AbortMultipartUpload {
                key: "video.mp4".to_string(),
                upload_id: "abc123".to_string(),
            }
        );
    }

    // Error cases
@@ -268,4 +298,4 @@ mod tests {
        let result = parse_endpoint(&Method::PATCH, "/my-bucket/photo.jpg", &empty_query());
        assert!(result.is_err());
    }
}
@@ -1,8 +1,8 @@
use std::fmt::write;

use axum::{
    http::StatusCode,
    response::{IntoResponse, Response},
};

#[derive(Debug, Clone, PartialEq)]
@@ -40,8 +40,8 @@ impl std::fmt::Display for ApiError {
            ApiError::InvalidArgument(message) => write!(f, "InvalidArgument: {}", message),
            ApiError::InvalidBucketName => write!(f, "InvalidBucketName"),
            ApiError::MissingAuthHeader => write!(f, "MissingAuthHeader"),
            ApiError::NotImplemented => write!(f, "NotImplemented"),
            ApiError::ObjectNotFound => write!(f, "ObjectNotFound"),
        }
    }
}
@@ -66,4 +66,4 @@ impl IntoResponse for ApiError {
    fn into_response(self) -> Response {
        (self.status_code(), self.to_string()).into_response()
    }
}
src/stratum-api-s3/src/handlers/bucket.rs (new file, 81 lines)
@@ -0,0 +1,81 @@
// src/stratum-api-s3/src/handlers/bucket.rs
use axum::http::StatusCode;
use axum::response::IntoResponse;

pub async fn list_buckets() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

pub async fn create_bucket() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

pub async fn delete_bucket() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

pub async fn head_bucket() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

#[cfg(test)]
mod tests {
    use super::*;
    use axum::{
        body::Body,
        http::{Request, StatusCode},
    };
    use tower::ServiceExt; // for oneshot
    use crate::router::s3_router;

    #[tokio::test]
    async fn test_list_buckets_returns_501() {
        let app = s3_router();
        let response = app
            .oneshot(
                Request::builder()
                    .method("GET")
                    .uri("/")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }

    #[tokio::test]
    async fn test_create_bucket_returns_501() {
        let app = s3_router();
        let response = app
            .oneshot(
                Request::builder()
                    .method("PUT")
                    .uri("/my-bucket")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }

    #[tokio::test]
    async fn test_get_object_returns_501() {
        let app = s3_router();
        let response = app
            .oneshot(
                Request::builder()
                    .method("GET")
                    .uri("/my-bucket/photo.jpg")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }
}
src/stratum-api-s3/src/handlers/mod.rs (new file, 3 lines)
@@ -0,0 +1,3 @@
pub mod bucket;
pub mod multipart;
pub mod object;
src/stratum-api-s3/src/handlers/multipart.rs (new file, 96 lines)
@@ -0,0 +1,96 @@
use axum::http::StatusCode;
use axum::response::IntoResponse;

pub async fn create_multipart_upload() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

pub async fn upload_part() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

pub async fn complete_multipart_upload() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

pub async fn abort_multipart_upload() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

#[cfg(test)]
mod tests {
    use axum::{
        body::Body,
        http::{Request, StatusCode},
    };
    use tower::ServiceExt;
    use crate::router::s3_router;

    #[tokio::test]
    async fn test_create_multipart_upload_returns_501() {
        let app = s3_router();
        let response = app
            .oneshot(
                Request::builder()
                    .method("POST")
                    .uri("/my-bucket/video.mp4?uploads")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }

    #[tokio::test]
    async fn test_upload_part_returns_501() {
        let app = s3_router();
        let response = app
            .oneshot(
                Request::builder()
                    .method("PUT")
                    .uri("/my-bucket/video.mp4?partNumber=1&uploadId=abc123")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }

    #[tokio::test]
    async fn test_complete_multipart_upload_returns_501() {
        let app = s3_router();
        let response = app
            .oneshot(
                Request::builder()
                    .method("POST")
                    .uri("/my-bucket/video.mp4?uploadId=abc123")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }

    #[tokio::test]
    async fn test_abort_multipart_upload_returns_501() {
        let app = s3_router();
        let response = app
            .oneshot(
                Request::builder()
                    .method("DELETE")
                    .uri("/my-bucket/video.mp4?uploadId=abc123")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }
}
src/stratum-api-s3/src/handlers/object.rs (new file, 136 lines)
@@ -0,0 +1,136 @@
use axum::http::StatusCode;
use axum::response::IntoResponse;

pub async fn get_object() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

pub async fn put_object() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

pub async fn delete_object() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

pub async fn head_object() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

pub async fn list_objects_v2() -> impl IntoResponse {
    StatusCode::NOT_IMPLEMENTED
}

#[cfg(test)]
mod tests {
    use axum::{
        body::Body,
        http::{Request, StatusCode},
    };
    use tower::ServiceExt;
    use crate::router::s3_router;

    #[tokio::test]
    async fn test_get_object_returns_501() {
        let app = s3_router();
        let response = app
            .oneshot(
                Request::builder()
                    .method("GET")
                    .uri("/my-bucket/photo.jpg")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }

    #[tokio::test]
    async fn test_put_object_returns_501() {
        let app = s3_router();
        let response = app
            .oneshot(
                Request::builder()
                    .method("PUT")
                    .uri("/my-bucket/photo.jpg")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }

    #[tokio::test]
    async fn test_delete_object_returns_501() {
        let app = s3_router();
        let response = app
            .oneshot(
                Request::builder()
                    .method("DELETE")
                    .uri("/my-bucket/photo.jpg")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }

    #[tokio::test]
    async fn test_head_object_returns_501() {
        let app = s3_router();
        let response = app
            .oneshot(
                Request::builder()
                    .method("HEAD")
                    .uri("/my-bucket/photo.jpg")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }

    #[tokio::test]
    async fn test_list_objects_v2_returns_501() {
        let app = s3_router();
        let response = app
            .oneshot(
                Request::builder()
                    .method("GET")
                    .uri("/my-bucket")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }

    #[tokio::test]
    async fn test_get_object_nested_key_returns_501() {
        let app = s3_router();
        let response = app
            .oneshot(
                Request::builder()
                    .method("GET")
                    .uri("/my-bucket/photos/2024/beach.jpg")
                    .body(Body::empty())
                    .unwrap(),
            )
            .await
            .unwrap();

        assert_eq!(response.status(), StatusCode::NOT_IMPLEMENTED);
    }
}
@@ -1,4 +1,6 @@
pub mod endpoint;
pub mod errors;
pub mod handlers;
pub mod router;

pub use endpoint::Endpoint;
src/stratum-api-s3/src/router.rs (new file, 22 lines)
@@ -0,0 +1,22 @@
use axum::{
    Router,
    routing::{delete, get, head, post, put},
};
use crate::handlers::{bucket, object, multipart};

pub fn s3_router() -> Router {
    Router::new()
        // Service level
        .route("/", get(bucket::list_buckets))
        // Bucket level
        .route("/{bucket}", put(bucket::create_bucket))
        .route("/{bucket}", delete(bucket::delete_bucket))
        .route("/{bucket}", head(bucket::head_bucket))
        // Object level
        .route("/{bucket}", get(object::list_objects_v2))
        .route("/{bucket}/{*key}", get(object::get_object))
        .route("/{bucket}/{*key}", head(object::head_object))
        .route("/{bucket}/{*key}", delete(object::delete_object))
        .route("/{bucket}/{*key}", post(multipart::create_multipart_upload))
        .route("/{bucket}/{*key}", put(object::put_object))
}
todo.md (new file, 123 lines)
@@ -0,0 +1,123 @@
# Stratum — TODO

## Immediate Next Session

### 1. `stratum-storage` — Volume Layer
- [ ] `config.rs` — StorageConfig with hot/warm/cold paths
- [ ] `tier.rs` — StorageTier enum (Hot, Warm, Cold)
- [ ] `location.rs` — Location + ShardLocation enums (Local/Remote/Mixed)
- [ ] `volume.rs` — Volume struct with access tracking fields
- [ ] `store.rs` — VolumeStore (in-memory HashMap for now)
- [ ] `shard.rs` — async read/write/delete shard files via tokio::fs
- [ ] Tests for all of the above
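The `store.rs` item can start as little more than a map keyed by volume id. A minimal sketch under stated assumptions (the `Volume` fields and method names here are placeholders; only `VolumeStore` and `StorageTier` come from the checklist):

```rust
use std::collections::HashMap;

// Tier names from the checklist; Volume fields are illustrative only —
// the real struct will also carry access-tracking fields.
#[derive(Debug, Clone, Copy, PartialEq)]
enum StorageTier {
    Hot,
    Warm,
    Cold,
}

#[derive(Debug, Clone, PartialEq)]
struct Volume {
    id: u64,
    tier: StorageTier,
}

// In-memory VolumeStore: a HashMap for now, to be swapped for sled
// persistence later (see Known Issues).
#[derive(Default)]
struct VolumeStore {
    volumes: HashMap<u64, Volume>,
}

impl VolumeStore {
    fn insert(&mut self, v: Volume) {
        self.volumes.insert(v.id, v);
    }
    fn get(&self, id: u64) -> Option<&Volume> {
        self.volumes.get(&id)
    }
    fn remove(&mut self, id: u64) -> Option<Volume> {
        self.volumes.remove(&id)
    }
}
```

Keeping the map behind a struct means the later sled-backed version can keep the same method surface.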
### 2. `stratum-metadata` — Bucket + Key Mapping
- [ ] Sled-backed metadata store
- [ ] Bucket operations (create, delete, exists, list)
- [ ] Key → Volume ID mapping (put, get, delete, list)
- [ ] Tests for all of the above

### 3. Wire Storage Into API Handlers (bottom-up)
- [ ] `CreateBucket` → 200 (create metadata entry)
- [ ] `ListBuckets` → 200 + XML response
- [ ] `PutObject` → 200 (write shard, create volume, store mapping)
- [ ] `GetObject` → 200 + stream bytes (read shard via volume location)
- [ ] `DeleteObject` → 204 (delete shard + metadata)
- [ ] `HeadObject` → 200 + metadata headers only
- [ ] `ListObjectsV2` → 200 + XML response
- [ ] Multipart (last, most complex)

### 4. XML Responses
- [ ] `xml/responses.rs` — ListBuckets XML
- [ ] `xml/responses.rs` — ListObjectsV2 XML
- [ ] `xml/responses.rs` — Error XML (replace current plain text)
- [ ] `xml/responses.rs` — InitiateMultipartUploadResult XML
- [ ] `xml/responses.rs` — CompleteMultipartUploadResult XML
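For the Error XML item, the body S3 clients expect is a small `<Error>` document. A hand-rolled sketch with `format!` (the field set is trimmed: real responses also carry `RequestId`, and `Message`/`Resource` need XML escaping, both omitted here):

```rust
// Minimal S3-style error body to replace the current plain-text responses.
// Omitted in this sketch: <RequestId>, and XML-escaping of the inputs.
fn error_xml(code: &str, message: &str, resource: &str) -> String {
    format!(
        "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\
         <Error><Code>{code}</Code><Message>{message}</Message>\
         <Resource>{resource}</Resource></Error>"
    )
}
```

This could hang off `ApiError` via `IntoResponse` so every handler gets XML errors for free.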
---

## Backlog (Implement After Core Works)

### S3 Compatibility
- [ ] AWS Signature V4 validation (`stratum-auth`)
- [ ] ETag generation (MD5 for single part, MD5-of-MD5s for multipart)
- [ ] Content-MD5 header validation on PUT
- [ ] Bucket naming validation (3-63 chars, lowercase, no underscores)
- [ ] `GetBucketLocation` endpoint
- [ ] `CopyObject` endpoint
- [ ] Virtual-hosted style URLs (bucket.host/key)
- [ ] Range request support (critical for video streaming)
- [ ] Conditional requests (If-None-Match, If-Modified-Since)
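The bucket-naming item is self-contained enough to sketch now. This covers the rules named above (3-63 chars, lowercase, no underscores) plus the letter-or-digit edge rule; AWS has further restrictions (e.g. no IP-address-shaped names) that are left out:

```rust
// Bucket-name check: 3-63 bytes; lowercase letters, digits, '-' and '.';
// must start and end with a lowercase letter or digit. Underscores and
// uppercase are rejected. (Partial rule set — see AWS docs for the rest.)
fn is_valid_bucket_name(name: &str) -> bool {
    let len = name.len();
    if !(3..=63).contains(&len) {
        return false;
    }
    let bytes = name.as_bytes();
    let edge_ok = |b: u8| b.is_ascii_lowercase() || b.is_ascii_digit();
    if !edge_ok(bytes[0]) || !edge_ok(bytes[len - 1]) {
        return false;
    }
    bytes
        .iter()
        .all(|&b| b.is_ascii_lowercase() || b.is_ascii_digit() || b == b'-' || b == b'.')
}
```

A natural home would be the parser, returning `ApiError::InvalidBucketName` on failure.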
### Storage
- [ ] Erasure coding integration (reed-solomon-erasure)
- [ ] Shard distribution across multiple disks/directories
- [ ] Checksum verification on read
- [ ] Atomic writes (write to temp, rename to final)
- [ ] Multipart upload temporary shard storage
- [ ] Multipart upload cleanup on abort
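The atomic-write item follows the standard temp-then-rename pattern. A synchronous `std::fs` sketch (the real `shard.rs` would do the same via `tokio::fs`; the function name is a placeholder):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write to a temp file in the same directory, fsync, then rename.
// rename(2) is atomic on POSIX filesystems, so readers never observe a
// half-written shard; the temp file must live on the same filesystem.
fn write_shard_atomic(dir: &Path, name: &str, data: &[u8]) -> std::io::Result<()> {
    let tmp = dir.join(format!("{name}.tmp"));
    let final_path = dir.join(name);
    let mut f = fs::File::create(&tmp)?;
    f.write_all(data)?;
    f.sync_all()?; // flush to disk before the rename makes it visible
    fs::rename(&tmp, &final_path)
}
```

A crash before the rename leaves only a `.tmp` file behind, which a startup sweep can delete.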
### Testing
- [ ] Run MinIO s3-tests compliance suite against server
- [ ] Test with awscli (`--no-sign-request` flag)
- [ ] Test with rclone
- [ ] Test with aws-sdk-rust
- [ ] Coverage report via cargo-tarpaulin
- [ ] Helper function refactor for query param extraction (backlogged from parser)

### Binary (`stratum`)
- [ ] `main.rs` — start axum server
- [ ] Config file loading (toml)
- [ ] CLI args (port, config path, data dir)
- [ ] Graceful shutdown
- [ ] Structured logging via tracing

---

## Phase 2 Backlog — Geo Distribution

- [ ] Node discovery and membership
- [ ] Raft consensus via openraft (metadata only)
- [ ] Consistent hashing for object placement
- [ ] Shard distribution across geographic nodes
- [ ] Node failure detection and recovery
- [ ] Replication lag monitoring
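Consistent hashing for placement can be prototyped with a `BTreeMap` ring: each node gets several virtual points, and a key maps to the first point clockwise from its hash. The vnode count and the use of `DefaultHasher` are illustrative assumptions, not production choices:

```rust
use std::collections::BTreeMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn hash_of<T: Hash>(t: &T) -> u64 {
    let mut h = DefaultHasher::new();
    t.hash(&mut h);
    h.finish()
}

// Hash ring: point on the u64 circle -> node id. Virtual nodes smooth
// out the distribution when the node count is small.
struct Ring {
    points: BTreeMap<u64, String>,
}

impl Ring {
    fn new(nodes: &[&str], vnodes: u32) -> Self {
        let mut points = BTreeMap::new();
        for node in nodes {
            for i in 0..vnodes {
                points.insert(hash_of(&format!("{node}-{i}")), node.to_string());
            }
        }
        Ring { points }
    }

    // First point at or after the key's hash, wrapping to the ring start.
    fn node_for(&self, key: &str) -> Option<&str> {
        let h = hash_of(&key);
        self.points
            .range(h..)
            .next()
            .or_else(|| self.points.iter().next())
            .map(|(_, n)| n.as_str())
    }
}
```

The payoff is that adding or removing a node only remaps the keys adjacent to its points, not the whole keyspace.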
---

## Phase 3 Backlog — Intelligent Tiering

- [ ] Access frequency tracking (exponential moving average)
- [ ] Spike detection (sudden 10x access increase → promote immediately)
- [ ] Time-of-day pattern recognition
- [ ] Decay function (not accessed in 48h → demote)
- [ ] MIME type classification (pre-trained ONNX model)
- [ ] Range request pattern detection (video streaming awareness)
- [ ] Tier promotion/demotion engine
- [ ] Warmup period (observe 7 days before making tier decisions)
- [ ] Developer priority hints via object metadata
- [ ] Transparency API (why is this object in this tier?)
- [ ] Prometheus metrics endpoint
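The EMA, spike, and decay items above compose into one small state machine per object. A sketch where the smoothing factor and thresholds are illustrative assumptions (the 10x and 48h numbers come from the list):

```rust
// Per-object access stats updated once per hour.
struct AccessStats {
    ema_per_hour: f64,
    hours_since_access: f64,
}

impl AccessStats {
    // EMA update: new = alpha * observation + (1 - alpha) * old.
    fn record_hour(&mut self, accesses: u64) {
        const ALPHA: f64 = 0.2; // weight of the newest hour (assumed value)
        self.ema_per_hour = ALPHA * accesses as f64 + (1.0 - ALPHA) * self.ema_per_hour;
        if accesses > 0 {
            self.hours_since_access = 0.0;
        } else {
            self.hours_since_access += 1.0;
        }
    }

    // Decay rule from the checklist: not accessed in 48h -> demote.
    fn should_demote(&self) -> bool {
        self.hours_since_access >= 48.0
    }

    // Spike rule: current hour at 10x the running average -> promote now.
    fn is_spike(&self, accesses_this_hour: u64) -> bool {
        self.ema_per_hour > 0.0 && accesses_this_hour as f64 >= 10.0 * self.ema_per_hour
    }
}
```

Because the EMA keeps only two floats per object, this scales to millions of objects without a time-series store.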
---

## Phase 4 Backlog — Managed Service

- [ ] Multi-tenant isolation
- [ ] Grafana dashboard
- [ ] Alerting (disk usage, node health, replication lag)
- [ ] Billing metrics
- [ ] BSI C5 certification process
- [ ] ISO 27001 certification process
- [ ] SLA definition and monitoring
- [ ] Enterprise support tier

---

## Known Issues / Technical Debt

- [ ] `VolumeStore` is currently in-memory only — needs sled persistence
- [ ] Error responses return plain text — should return S3 XML format
- [ ] No auth middleware yet — all requests accepted unsigned
- [ ] `StorageConfig` cold tier credentials need secure storage solution
- [ ] Query param helper functions (opt_string, opt_parse) backlogged from parser refactor
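The backlogged `opt_string`/`opt_parse` helpers are small enough to pin down here. A sketch assuming the parser's query type is map-like (a `HashMap<String, String>` stands in for it):

```rust
use std::collections::HashMap;

// Optional string param: clone the value if the key is present.
fn opt_string(q: &HashMap<String, String>, key: &str) -> Option<String> {
    q.get(key).cloned()
}

// Optional parsed param: present AND parseable, else None.
// Note this silently drops parse failures; the real helper may want to
// surface them as ApiError::InvalidArgument instead.
fn opt_parse<T: std::str::FromStr>(q: &HashMap<String, String>, key: &str) -> Option<T> {
    q.get(key).and_then(|v| v.parse().ok())
}
```

In `parse_endpoint` this would collapse the repeated `versionId`/`partNumber`/`uploadId` extraction into one-liners like `opt_parse::<u32>(q, "partNumber")`.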