Chris Lu 5469b7c58f fix: resolve inconsistent S3 API authorization for DELETE operations (issue #7864) (#7865)
* fix(iam): add support for fine-grained S3 actions in IAM policies

Add support for fine-grained S3 actions like s3:DeleteObject, s3:PutObject,
and other specific S3 actions in IAM policy mapping. Previously, only
coarse-grained action patterns (Put*, Get*, etc.) were supported, causing
IAM policies with specific actions to be rejected with 'not a valid action'
error.

Fixes issue #7864 part 2: s3:DeleteObject IAM action is now supported.

Changes:
- Extended MapToStatementAction() to handle fine-grained S3 actions
- Maps S3-specific actions to appropriate internal action constants
- Supports 30+ S3 actions including DeleteObject, PutObject, GetObject, etc.
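
For illustration, a minimal sketch of the exact-match lookup this change introduces (the constant values and the coarse-grained fallback are assumptions, not the actual SeaweedFS source):

package policy

// Hypothetical stand-ins for the internal action constants used by the real code.
const (
	ACTION_READ  = "Read"
	ACTION_WRITE = "Write"
	ACTION_LIST  = "List"
)

// fineGrainedActionMap maps specific S3 actions to internal action constants;
// the real map covers 30+ actions plus the unprefixed spellings.
var fineGrainedActionMap = map[string]string{
	"s3:DeleteObject": ACTION_WRITE,
	"s3:PutObject":    ACTION_WRITE,
	"s3:GetObject":    ACTION_READ,
	"s3:ListBucket":   ACTION_LIST,
}

// MapToStatementAction resolves fine-grained actions by exact lookup; the
// coarse-grained patterns (Put*, Get*, ...) are still handled separately.
func MapToStatementAction(action string) (string, bool) {
	mapped, ok := fineGrainedActionMap[action]
	return mapped, ok
}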

* fix(s3api): correct resource ARN generation for subpath permissions

Fix convertSingleAction() to properly handle subpath patterns in legacy
actions. Previously, when a user was granted Write permission to a subpath
(e.g., Write:bucket/sub_path/*), the resource ARN was incorrectly generated,
causing DELETE operations to be denied even though s3:DeleteObject was
included in the Write action.

The fix:
- Extract bucket name and prefix path separately from patterns like
  'bucket/prefix/*'
- Generate correct S3 ARN format: arn:aws:s3:::bucket/prefix/*
- Ensure all permission checks (Read, Write, List, Tagging, etc.) work
  correctly with subpaths
- Support nested paths (e.g., bucket/a/b/c/*)

Fixes issue #7864 part 1: Write permission on subpath now allows DELETE.

Example:
- Permission: Write:mybucket/documents/*
- Objects under mybucket/documents/* can now be created (PUT), deleted (DELETE), and have their ACLs modified
- Objects outside this path are still denied
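
A simplified sketch of the intended parsing and ARN format (the helper name matches what a later commit in this PR calls extractBucketAndPrefix, but the exact normalization and per-action resource choices differ in the real code):

package policy

import "strings"

// extractBucketAndPrefix splits a legacy pattern like "mybucket/documents/*"
// into bucket "mybucket" and prefix "documents".
func extractBucketAndPrefix(pattern string) (bucket, prefix string) {
	parts := strings.SplitN(pattern, "/", 2)
	bucket = parts[0]
	if len(parts) == 2 {
		prefix = strings.TrimSuffix(strings.TrimSuffix(parts[1], "*"), "/")
	}
	return bucket, prefix
}

// legacyResourceARN renders the corrected ARN format for a subpath grant:
// Write:mybucket/documents/* -> arn:aws:s3:::mybucket/documents/*
func legacyResourceARN(pattern string) string {
	bucket, prefix := extractBucketAndPrefix(pattern)
	if prefix == "" {
		return "arn:aws:s3:::" + bucket + "/*"
	}
	return "arn:aws:s3:::" + bucket + "/" + prefix + "/*"
}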

* test(iam): add tests for fine-grained S3 action mappings

Extend TestMapToStatementAction with test cases for fine-grained S3 actions:
- s3:DeleteObject
- s3:PutObject
- s3:GetObject
- s3:ListBucket
- s3:PutObjectAcl
- s3:GetObjectAcl

Ensures the new action mapping support is working correctly.

* test(s3api): add comprehensive tests for subpath permission handling

Add new test file with comprehensive tests for convertSingleAction():

1. TestConvertSingleActionDeleteObject: Verifies s3:DeleteObject is
   included in Write actions (fixes issue #7864 part 2)

2. TestConvertSingleActionSubpath: Tests proper resource ARN generation
   for different permission patterns:
   - Bucket-level: Write:mybucket -> arn:aws:s3:::mybucket
   - Wildcard: Write:mybucket/* -> arn:aws:s3:::mybucket/*
   - Subpath: Write:mybucket/sub_path/* -> arn:aws:s3:::mybucket/sub_path/*
   - Nested: Read:mybucket/documents/* -> arn:aws:s3:::mybucket/documents/*

3. TestConvertSingleActionSubpathDeleteAllowed: Specifically validates
   that subpath Write permissions allow DELETE operations

4. TestConvertSingleActionNestedPaths: Tests deeply nested path handling
   (e.g., bucket/a/b/c/*)

All tests pass and validate the fixes for issue #7864.
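
As a rough illustration only, the table-driven shape of these tests; convertSingleAction is the function under test in this package, and its (actions, resources, error) signature is assumed here, not copied from the repository:

package policy

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestConvertSingleActionSubpathSketch(t *testing.T) {
	cases := []struct {
		legacyAction     string
		expectedResource string
	}{
		{"Write:mybucket", "arn:aws:s3:::mybucket"},
		{"Write:mybucket/*", "arn:aws:s3:::mybucket/*"},
		{"Write:mybucket/sub_path/*", "arn:aws:s3:::mybucket/sub_path/*"},
		{"Read:mybucket/documents/*", "arn:aws:s3:::mybucket/documents/*"},
	}
	for _, tc := range cases {
		_, resources, err := convertSingleAction(tc.legacyAction) // assumed signature
		assert.NoError(t, err)
		assert.Contains(t, resources, tc.expectedResource)
	}
}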

* fix: address review comments from PR #7865

- Fix critical bug: use parsed 'bucket' instead of 'resourcePattern' for GetObjectRetention, GetObjectLegalHold, and PutObjectLegalHold actions to avoid malformed ARNs like arn:aws:s3:::bucket/*/*
- Refactor large switch statement in MapToStatementAction() into a map-based lookup for better performance and maintainability

* fmt

* refactor: extract extractBucketAndPrefix helper and simplify convertSingleAction

- Extract extractBucketAndPrefix as a package-level function for better testability and reusability
- Remove unused bucketName parameter from convertSingleAction signature
- Update GetResourcesFromLegacyAction to use the extracted helper for consistent ARN generation
- Update all call sites in tests to match new function signature
- All tests pass and module compiles without errors

* fix: use extracted bucket variable consistently in all ARN generation branches

Replace resourcePattern with extracted bucket variable in else branches and bucket-level cases to avoid malformed ARNs like 'arn:aws:s3:::mybucket/*/*':

- Read case: bucket-level else branch
- Write case: bucket-level else branch
- Admin case: both bucket and object ARNs
- List case: bucket-level else branch
- GetBucketObjectLockConfiguration: bucket extraction
- PutBucketObjectLockConfiguration: bucket extraction

This ensures consistent ARN format: arn:aws:s3:::bucket or arn:aws:s3:::bucket/*

* fix: address remaining review comments from PR #7865

High priority fixes:
- Write action on bucket-level now generates arn:aws:s3:::mybucket/* instead of
  arn:aws:s3:::mybucket to enable object-level S3 actions (s3:PutObject, s3:DeleteObject)
- GetResourcesFromLegacyAction now generates both bucket and object ARNs for /*
  patterns to maintain backward compatibility with mixed action groups

Medium priority improvements:
- Remove unused 'bucket' field from TestConvertSingleActionSubpath test struct
- Update test to use assert.ElementsMatch instead of assert.Contains for more
  comprehensive resource ARN validation
- Clarify test expectations with expectedResources slice instead of single expectedResource

All tests pass, compilation verified

* test: improve TestConvertSingleActionNestedPaths with comprehensive assertions

Update test to use assert.ElementsMatch for more robust resource ARN verification:
- Change struct from single expectedResource to expectedResources slice
- Update Read nested path test to expect both bucket and prefix ARNs
- Use assert.ElementsMatch to verify all generated resources match exactly
- Provides complete coverage for nested path handling

This matches the improvement pattern used in TestConvertSingleActionSubpath

* refactor: simplify S3 action map and improve resource ARN detection

- Refactor fineGrainedActionMap to use init() function for programmatic population of both prefixed (s3:Action) and unprefixed (Action) variants, eliminating 70+ duplicate entries
- Add buildObjectResourceArn() helper to eliminate duplicated resource ARN generation logic across switch cases
- Fix bucket vs object-level access detection: only use HasSuffix(/*) check instead of Contains('/') which incorrectly matched patterns like 'bucket/prefix' without wildcard
- Apply buildObjectResourceArn() consistently to Tagging, BypassGovernanceRetention, GetObjectRetention, PutObjectRetention, GetObjectLegalHold, and PutObjectLegalHold cases
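
A hedged sketch of the helper and the suffix check described above (the parameter list is illustrative; extractBucketAndPrefix is the package-level helper introduced earlier in this PR):

package policy

import "strings"

// buildObjectResourceArn centralizes object-level ARN generation: only a
// pattern that actually ends in "/*" is treated as prefix-scoped, so a plain
// "bucket/prefix" (no wildcard) no longer matches the object-level branch.
func buildObjectResourceArn(pattern string) []string {
	bucket, prefix := extractBucketAndPrefix(pattern)
	if bucket == "" {
		return nil // refuse to emit malformed ARNs
	}
	if strings.HasSuffix(pattern, "/*") && prefix != "" {
		return []string{"arn:aws:s3:::" + bucket + "/" + prefix + "/*"}
	}
	return []string{"arn:aws:s3:::" + bucket + "/*"}
}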

* fmt

* fix: generate object-level ARNs for bucket-level read access

When bucket-level read access is granted (e.g., 'Read:mybucket'),
generate both bucket and object ARNs so that object-level actions
like s3:GetObject can properly authorize.

Similarly, in GetResourcesFromLegacyAction, bucket-level patterns
should generate both ARN levels for consistency with patterns that
include wildcards. This ensures that users with bucket-level
permissions can read objects, not just the bucket itself.

* fix: address Copilot code review comments

- Remove unused bucketName parameter from ConvertIdentityToPolicy signature
  - Update all callers in examples.go and engine_test.go
  - Bucket is now extracted from action string itself

- Update extractBucketAndPrefix documentation
  - Add nested path example (bucket/a/b/c/*)
  - Clarify that prefix can contain multiple path segments

- Make GetResourcesFromLegacyAction action-aware
  - Different action types have different resource requirements
  - List actions only need bucket ARN (bucket-only operations)
  - Read/Write/Tagging actions need both bucket and object ARNs
  - Aligns with convertSingleAction logic for consistency

All tests pass successfully

* test: add comprehensive tests for GetResourcesFromLegacyAction consistency

- Add TestGetResourcesFromLegacyAction to verify action-aware resource generation
- Validate consistency with convertSingleAction for all action types:
  * List actions: bucket-only ARNs (s3:ListBucket is bucket-level operation)
  * Read actions: both bucket and object ARNs
  * Write actions: object-only ARNs (subpaths) or object ARNs (bucket-level)
  * Admin actions: both bucket and object ARNs
- Update GetResourcesFromLegacyAction to generate Admin ARNs consistent with convertSingleAction
- All tests pass (35+ test cases across integration_test.go)

* refactor: eliminate code duplication in GetResourcesFromLegacyAction

- Simplify GetResourcesFromLegacyAction to delegate to convertSingleAction
- Eliminates ~50 lines of duplicated action-type-specific logic
- Ensures single source of truth for resource ARN generation
- Improves maintainability: changes to ARN logic only need to be made in one place
- All tests pass: any inconsistencies would be caught immediately
- Addresses Gemini Code Assist review comment about code duplication

* fix: remove fragile 'dummy' action type in CreatePolicyFromLegacyIdentity

- Replace hardcoded 'dummy:' prefix with proper representative action type
- Use first valid action type from the action list to determine resource requirements
- Ensures GetResourcesFromLegacyAction receives a valid action type
- Prevents silent failures when convertSingleAction encounters unknown action
- Improves code clarity: explains why representative action type is needed
- All tests pass: policy engine tests verify correct behavior

* security: prevent privilege escalation in Admin action subpath handling

- Admin action with subpath (e.g., Admin:bucket/admin/*) now correctly restricts
  to the specified subpath instead of granting full bucket access
- If prefix exists: resources restricted to bucket + bucket/prefix/*
- If no prefix: full bucket access (unchanged behavior for root Admin)
- Added test case Admin_on_subpath to validate the security fix
- All 40+ policy engine tests pass

* refactor: address Copilot code review comments on S3 authorization

- Fix GetObjectTagging mapping: change from ACTION_READ to ACTION_TAGGING
  (tagging operations should not be classified as general read operations)

- Enhance extractBucketAndPrefix edge case handling:
  - Add input validation (reject empty strings, whitespace, slash-only)
  - Normalize double slashes and trailing slashes
  - Return empty bucket/prefix for invalid patterns
  - Prevent generation of malformed ARNs

- Separate Read action from ListBucket (AWS S3 IAM semantics):
  - ListBucket is a bucket-level operation, not object-level
  - Read action now only includes s3:GetObject, s3:GetObjectVersion
  - This aligns with AWS S3 IAM policy best practices

- Update buildObjectResourceArn to handle invalid bucket names gracefully:
  - Return empty slice if bucket is empty after validation
  - Prevents malformed ARN generation

- Add comprehensive TestExtractBucketAndPrefixEdgeCases with 8 test cases:
  - Validates empty strings, whitespace, special characters
  - Confirms proper normalization of double/trailing slashes
  - Ensures robust parsing of nested paths

- Update existing tests to reflect removed ListBucket from Read action

All 40+ policy engine tests pass

* fix: aggregate resource ARNs from all action types in CreatePolicyFromLegacyIdentity

CRITICAL FIX: The previous implementation incorrectly used a single representative
action type to determine resource ARNs when multiple legacy actions targeted the
same resource pattern. This caused incorrect policy generation when action types
with different resource requirements (e.g., List vs Write) were grouped together.

Example of the bug:
- Input: List:mybucket/path/*, Write:mybucket/path/*
- Old behavior: Used only List's resources (bucket-level ARN)
- Result: Policy had Write actions (s3:PutObject) but only bucket ARN
- Consequence: s3:PutObject would be denied due to missing object-level ARN

Solution:
- Iterate through all action types for a given resource pattern
- For each action type, call GetResourcesFromLegacyAction to get required ARNs
- Aggregate all ARNs into a set to eliminate duplicates
- Use the merged set for the final policy statement
- Admin action short-circuits (always includes full permissions)

Example of correct behavior:
- Input: List:mybucket/path/*, Write:mybucket/path/*
- New behavior: Aggregates both List and Write resource requirements
- Result: Policy has Write actions with BOTH bucket and object-level ARNs
- Outcome: s3:PutObject works correctly on mybucket/path/*
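
A minimal sketch of the aggregation described in the solution above (GetResourcesFromLegacyAction is the real function name, but its exact signature is assumed, so it is passed in here as a stand-in):

package policy

// aggregateResources unions the ARNs required by every action type that
// targets the same resource pattern, instead of trusting a single
// representative action type. getResources stands in for
// GetResourcesFromLegacyAction.
func aggregateResources(actionTypes []string, resourcePattern string,
	getResources func(legacyAction string) []string) []string {

	seen := make(map[string]struct{})
	var merged []string
	for _, actionType := range actionTypes { // e.g. "List", then "Write"
		for _, arn := range getResources(actionType + ":" + resourcePattern) {
			if _, ok := seen[arn]; !ok {
				seen[arn] = struct{}{}
				merged = append(merged, arn)
			}
		}
	}
	// for List + Write on mybucket/path/*, merged now holds both the bucket ARN
	// and arn:aws:s3:::mybucket/path/*
	return merged
}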

Added TestCreatePolicyFromLegacyIdentityMultipleActions with 3 test cases:
1. List + Write on subpath: verifies bucket + object ARN aggregation
2. Read + Tagging on bucket: verifies action-specific ARN combinations
3. Admin with other actions: verifies Admin dominates resource ARNs

All 45+ policy engine tests pass

* fix: remove bucket-level ARN from Read action for consistency

ISSUE: The Read action was including bucket-level ARNs (arn:aws:s3:::bucket)
even though the only S3 actions in Read are s3:GetObject and s3:GetObjectVersion,
which are object-level operations. This created a mismatch between the actions
and resources in the policy statement.

ROOT CAUSE: s3:ListBucket was previously removed from the Read action, but the
bucket-level ARN was not removed, creating an inconsistency.

SOLUTION: Update Read action to only generate object-level ARNs using
buildObjectResourceArn, consistent with how Write and Tagging actions work.

This ensures:
- Read:mybucket generates arn:aws:s3:::mybucket/* (not bucket ARN)
- Read:bucket/prefix/* generates arn:aws:s3:::bucket/prefix/* (object-level only)
- Consistency: same actions, same resources, same logic across all object operations

Updated test expectations:
- TestConvertSingleActionSubpath: Read_on_subpath now expects only object ARN
- TestConvertSingleActionNestedPaths: Read nested path now expects only object ARN
- TestConvertIdentityToPolicy: Read resources now 1 instead of 2
- TestCreatePolicyFromLegacyIdentityMultipleActions: Read+Tagging aggregates to 1 ARN

All 45+ policy engine tests pass

* doc

* fmt

* fix: address Copilot code review on Read action consistency and missing S3 action mappings

- Clarify MapToStatementAction comment to reflect exact lookup (not pattern matching)
- Add missing S3 actions to baseS3ActionMap:
  - ListBucketVersions, ListAllMyBuckets for bucket operations
  - GetBucketCors, PutBucketCors, DeleteBucketCors for CORS
  - GetBucketNotification, PutBucketNotification for notifications
  - GetBucketObjectLockConfiguration, PutBucketObjectLockConfiguration for object lock
  - GetObjectVersionTagging for version tagging
  - GetObjectVersionAcl, PutBucketAcl for ACL operations
  - PutBucketTagging, DeleteBucketTagging for bucket tagging

- Fix Read action scope inconsistency with GetActionMappings():
  - Previously: only included GetObject, GetObjectVersion
  - Now: includes full Read set (14 actions) from GetActionMappings
  - Includes both bucket-level (ListBucket*, GetBucket*) and object-level (GetObject*) ARNs
  - Bucket ARN enables ListBucket operations, object ARN enables GetObject operations

- Update all test expectations:
  - TestConvertSingleActionSubpath: Read now returns 2 ARNs (bucket + objects)
  - TestConvertSingleActionNestedPaths: Read nested path now includes bucket ARN
  - TestGetResourcesFromLegacyAction: Read test cases updated for consistency
  - TestCreatePolicyFromLegacyIdentityMultipleActions: Read_and_Tagging now returns 2 ARNs
  - TestConvertIdentityToPolicy: Updated to expect 14 Read actions and 2 resources

Fixes: Inconsistency between convertSingleAction Read case and GetActionMappings function

* fmt

* fix: align convertSingleAction with GetActionMappings and add bucket validation

- Fix Write action: now includes all 16 actions from GetActionMappings (object and bucket operations)
  - Includes PutBucketVersioning, PutBucketCors, PutBucketAcl, PutBucketTagging, etc.
  - Generates both bucket and object ARNs to support bucket-level operations

- Fix List action: add ListAllMyBuckets from GetActionMappings
  - Previously: ListBucket, ListBucketVersions
  - Now: ListBucket, ListBucketVersions, ListAllMyBuckets
  - Add bucket validation to prevent malformed ARNs with empty bucket

- Fix Tagging action: include bucket-level tagging operations
  - Previously: only object-level (GetObjectTagging, PutObjectTagging, DeleteObjectTagging)
  - Now: includes bucket-level (GetBucketTagging, PutBucketTagging, DeleteBucketTagging)
  - Generates both bucket and object ARNs to support bucket-level operations

- Add bucket validation to prevent malformed ARNs:
  - Admin: return error if bucket is empty
  - List: generate empty resources if bucket is empty
  - Tagging: check bucket before generating ARNs
  - GetBucketObjectLockConfiguration, PutBucketObjectLockConfiguration: validate bucket

- Fix TrimRight issue in extractBucketAndPrefix:
  - Change from strings.TrimRight(pattern, "/") to remove only one trailing slash
  - Prevents loss of prefix when pattern has multiple trailing slashes

- Update all test cases:
  - TestConvertSingleActionSubpath: Write now returns 16 actions and bucket+object ARNs
  - TestConvertSingleActionNestedPaths: Write includes bucket ARN
  - TestGetResourcesFromLegacyAction: Updated Write and Tagging expectations
  - TestCreatePolicyFromLegacyIdentityMultipleActions: Updated action/resource counts

Fixes: Inconsistencies between convertSingleAction and GetActionMappings for Write/List/Tagging actions

* fmt

* fix: resolve ListMultipartUploads/ListParts mapping inconsistency and add action validation

- Fix ListMultipartUploads and ListParts mapping in helpers.go:
  - Changed from ACTION_LIST to ACTION_WRITE for consistency with GetActionMappings
  - These operations are part of the multipart write workflow and should map to Write action
  - Prevents inconsistent behavior when same actions processed through different code paths

- Add documentation to clarify multipart operations in Write action:
  - Explain why ListMultipartUploads and ListParts are part of Write permissions
  - These are required for meaningful multipart upload workflow management

- Add action validation to CreatePolicyFromLegacyIdentity:
  - Validates action format before processing using ValidateActionMapping
  - Logs warnings for invalid actions instead of silently skipping them
  - Provides clearer error messages when invalid action types are used
  - Ensures users know when their intended permissions weren't applied
  - Consistent with ConvertLegacyActions validation approach

Fixes: Inconsistent action type mappings and silent failure for invalid actions

SeaweedFS



Sponsor SeaweedFS via Patreon

SeaweedFS is an independent Apache-licensed open source project with its ongoing development made possible entirely thanks to the support of these awesome backers. If you'd like to grow SeaweedFS even stronger, please consider joining our sponsors on Patreon.

Your support will be really appreciated by me and other supporters!

Gold Sponsors

nodion piknik keepsec


Table of Contents

  • Quick Start
  • Introduction
  • Features
  • Example: Using Seaweed Object Store
  • Object Store Architecture
  • Compared to Other File Systems
  • Dev Plan
  • Installation Guide
  • Hard Drive Performance
  • Benchmark
  • Enterprise
  • License
  • Stargazers over time

Quick Start

Quick Start for S3 API on Docker

docker run -p 8333:8333 chrislusf/seaweedfs server -s3

Quick Start with Single Binary

  • Download the latest binary from https://github.com/seaweedfs/seaweedfs/releases and unzip a single binary file weed or weed.exe. Or run go install github.com/seaweedfs/seaweedfs/weed@latest.
  • export AWS_ACCESS_KEY_ID=admin ; export AWS_SECRET_ACCESS_KEY=key as the admin credentials to access the object store.
  • Run weed server -dir=/some/data/dir -s3 to start one master, one volume server, one filer, and one S3 gateway.

Also, to increase capacity, just add more volume servers by running weed volume -dir="/some/data/dir2" -master="<master_host>:9333" -port=8081 locally, or on a different machine, or on thousands of machines. That is it!

Quick Start SeaweedFS S3 on AWS

Introduction

SeaweedFS is a simple and highly scalable distributed file system. There are two objectives:

  1. to store billions of files!
  2. to serve the files fast!

SeaweedFS started as an Object Store to handle small files efficiently. Instead of managing all file metadata in a central master, the central master only manages volumes on volume servers, and these volume servers manage files and their metadata. This relieves concurrency pressure from the central master and spreads file metadata into volume servers, allowing faster file access (O(1), usually just one disk read operation).

There is only 40 bytes of disk storage overhead for each file's metadata. It is so simple with O(1) disk reads that you are welcome to challenge the performance with your actual use cases.

SeaweedFS started by implementing Facebook's Haystack design paper. Also, SeaweedFS implements erasure coding with ideas from f4: Facebook's Warm BLOB Storage System, and has a lot of similarities with Facebook's Tectonic Filesystem.

On top of the object store, optional Filer can support directories and POSIX attributes. Filer is a separate linearly-scalable stateless server with customizable metadata stores, e.g., MySql, Postgres, Redis, Cassandra, HBase, Mongodb, Elastic Search, LevelDB, RocksDB, Sqlite, MemSql, TiDB, Etcd, CockroachDB, YDB, etc.

For any distributed key-value store, large values can be offloaded to SeaweedFS. With the fast access speed and linearly scalable capacity, SeaweedFS can work as a distributed Key-Large-Value store.

SeaweedFS can transparently integrate with the cloud. With hot data on local cluster, and warm data on the cloud with O(1) access time, SeaweedFS can achieve both fast local access time and elastic cloud storage capacity. What's more, the cloud storage access API cost is minimized. Faster and cheaper than direct cloud storage!

Back to TOC

Features

Additional Features

  • Can choose no replication or different replication levels, rack and data center aware.
  • Automatic master servers failover - no single point of failure (SPOF).
  • Automatic Gzip compression depending on file MIME type.
  • Automatic compaction to reclaim disk space after deletion or update.
  • Automatic entry TTL expiration.
  • Any server with some disk space can add to the total storage space.
  • Adding/Removing servers does not cause any data re-balancing unless triggered by admin commands.
  • Optional picture resizing.
  • Support ETag, Accept-Range, Last-Modified, etc.
  • Support in-memory/leveldb/readonly mode tuning for memory/performance balance.
  • Support rebalancing the writable and readonly volumes.
  • Customizable Multiple Storage Tiers: Customizable storage disk types to balance performance and cost.
  • Transparent cloud integration: unlimited capacity via tiered cloud storage for warm data.
  • Erasure Coding for warm storage: Rack-Aware 10.4 erasure coding reduces storage cost and increases availability.

Back to TOC

Filer Features

Kubernetes

Back to TOC

Example: Using Seaweed Object Store

By default, the master node runs on port 9333, and the volume nodes run on port 8080. Let's start one master node, and two volume nodes on port 8080 and 8081. Ideally, they should be started from different machines. We'll use localhost as an example.

SeaweedFS uses HTTP REST operations to read, write, and delete. The responses are in JSON or JSONP format.

Start Master Server

> ./weed master

Start Volume Servers

> weed volume -dir="/tmp/data1" -max=5  -master="localhost:9333" -port=8080 &
> weed volume -dir="/tmp/data2" -max=10 -master="localhost:9333" -port=8081 &

Write File

To upload a file: first, send an HTTP POST, PUT, or GET request to /dir/assign to get an fid and a volume server URL:

> curl http://localhost:9333/dir/assign
{"count":1,"fid":"3,01637037d6","url":"127.0.0.1:8080","publicUrl":"localhost:8080"}

Second, to store the file content, send an HTTP multipart POST request to url + '/' + fid from the response:

> curl -F file=@/home/chris/myphoto.jpg http://127.0.0.1:8080/3,01637037d6
{"name":"myphoto.jpg","size":43234,"eTag":"1cc0118e"}

To update, send another POST request with updated file content.

For deletion, send an HTTP DELETE request to the same url + '/' + fid URL:

> curl -X DELETE http://127.0.0.1:8080/3,01637037d6
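
The same assign, upload, and delete flow can be driven from Go. This is a minimal sketch with error handling trimmed, using the JSON field names shown in the example responses above:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"os"
)

type assignResult struct {
	Fid       string `json:"fid"`
	URL       string `json:"url"`
	PublicURL string `json:"publicUrl"`
}

func main() {
	// 1. ask the master for a file id and a volume server
	resp, err := http.Get("http://localhost:9333/dir/assign")
	if err != nil {
		panic(err)
	}
	var a assignResult
	json.NewDecoder(resp.Body).Decode(&a)
	resp.Body.Close()

	// 2. store the file content with a multipart POST to url + "/" + fid
	var body bytes.Buffer
	w := multipart.NewWriter(&body)
	part, _ := w.CreateFormFile("file", "myphoto.jpg")
	f, _ := os.Open("/home/chris/myphoto.jpg")
	io.Copy(part, f)
	f.Close()
	w.Close()
	target := fmt.Sprintf("http://%s/%s", a.URL, a.Fid)
	http.Post(target, w.FormDataContentType(), &body)

	// 3. delete the file later with an HTTP DELETE to the same URL
	req, _ := http.NewRequest(http.MethodDelete, target, nil)
	http.DefaultClient.Do(req)
}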

Save File Id

Now, you can save the fid, 3,01637037d6 in this case, to a database field.

The number 3 at the start represents a volume id. After the comma, it's one file key, 01, and a file cookie, 637037d6.

The volume id is an unsigned 32-bit integer. The file key is an unsigned 64-bit integer. The file cookie is an unsigned 32-bit integer, used to prevent URL guessing.

The file key and file cookie are both coded in hex. You can store the <volume id, file key, file cookie> tuple in your own format, or simply store the fid as a string.

If stored as a string, in theory, you would need 8+1+16+8=33 bytes. A char(33) would be enough, if not more than enough, since most uses will not need 2^32 volumes.

If space is really a concern, you can store the file id in your own format. You would need one 4-byte integer for volume id, 8-byte long number for file key, and a 4-byte integer for the file cookie. So 16 bytes are more than enough.
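
If you do want the compact form, here is a rough Go sketch of parsing a fid like 3,01637037d6 into the <volume id, file key, file cookie> tuple described above (the struct is illustrative, not the real SeaweedFS type):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// fileID is an illustrative 16-byte layout: 4-byte volume id, 8-byte key, 4-byte cookie.
type fileID struct {
	VolumeID uint32
	Key      uint64
	Cookie   uint32
}

func parseFid(fid string) (fileID, error) {
	var id fileID
	parts := strings.SplitN(fid, ",", 2)
	if len(parts) != 2 || len(parts[1]) <= 8 {
		return id, fmt.Errorf("malformed fid %q", fid)
	}
	vid, err := strconv.ParseUint(parts[0], 10, 32)
	if err != nil {
		return id, err
	}
	// the last 8 hex characters are the cookie, the rest is the file key
	keyHex, cookieHex := parts[1][:len(parts[1])-8], parts[1][len(parts[1])-8:]
	key, err := strconv.ParseUint(keyHex, 16, 64)
	if err != nil {
		return id, err
	}
	cookie, err := strconv.ParseUint(cookieHex, 16, 32)
	if err != nil {
		return id, err
	}
	return fileID{VolumeID: uint32(vid), Key: key, Cookie: uint32(cookie)}, nil
}

func main() {
	id, _ := parseFid("3,01637037d6")
	fmt.Printf("volume=%d key=%d cookie=%x\n", id.VolumeID, id.Key, id.Cookie)
	// prints: volume=3 key=1 cookie=637037d6
}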

Read File

Here is an example of how to render the URL.

First look up the volume server's URLs by the file's volumeId:

> curl http://localhost:9333/dir/lookup?volumeId=3
{"volumeId":"3","locations":[{"publicUrl":"localhost:8080","url":"localhost:8080"}]}

Since (usually) there are not too many volume servers, and volumes don't move often, you can cache the results most of the time. Depending on the replication type, one volume can have multiple replica locations. Just randomly pick one location to read.

Now you can take the public URL, render the URL or directly read from the volume server via URL:

 http://localhost:8080/3,01637037d6.jpg

Notice we add a file extension ".jpg" here. It's optional and just one way for the client to specify the file content type.

If you want a nicer URL, you can use one of these alternative URL formats:

 http://localhost:8080/3/01637037d6/my_preferred_name.jpg
 http://localhost:8080/3/01637037d6.jpg
 http://localhost:8080/3,01637037d6.jpg
 http://localhost:8080/3/01637037d6
 http://localhost:8080/3,01637037d6

If you want to get a scaled version of an image, you can add some params:

http://localhost:8080/3/01637037d6.jpg?height=200&width=200
http://localhost:8080/3/01637037d6.jpg?height=200&width=200&mode=fit
http://localhost:8080/3/01637037d6.jpg?height=200&width=200&mode=fill
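
A minimal Go sketch of the lookup-then-read flow described above; the JSON field names follow the example response, and one replica location is picked at random as suggested:

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"math/rand"
	"net/http"
	"os"
)

type lookupResult struct {
	VolumeID  string `json:"volumeId"`
	Locations []struct {
		URL       string `json:"url"`
		PublicURL string `json:"publicUrl"`
	} `json:"locations"`
}

func main() {
	// 1. resolve the volume id ("3" in fid "3,01637037d6") to volume servers
	resp, err := http.Get("http://localhost:9333/dir/lookup?volumeId=3")
	if err != nil {
		panic(err)
	}
	var lr lookupResult
	json.NewDecoder(resp.Body).Decode(&lr)
	resp.Body.Close()

	// 2. pick any replica location and fetch the file content directly
	loc := lr.Locations[rand.Intn(len(lr.Locations))]
	fileResp, err := http.Get(fmt.Sprintf("http://%s/3,01637037d6.jpg", loc.PublicURL))
	if err != nil {
		panic(err)
	}
	defer fileResp.Body.Close()
	io.Copy(os.Stdout, fileResp.Body)
}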

Rack-Aware and Data Center-Aware Replication

SeaweedFS applies the replication strategy at a volume level. So, when you are getting a file id, you can specify the replication strategy. For example:

curl http://localhost:9333/dir/assign?replication=001

The replication parameter options are:

000: no replication
001: replicate once on the same rack
010: replicate once on a different rack, but same data center
100: replicate once on a different data center
200: replicate twice on two different data centers
110: replicate once on a different rack, and once on a different data center

More details about replication can be found on the wiki.

You can also set the default replication strategy when starting the master server.

Allocate File Key on Specific Data Center

Volume servers can be started with a specific data center name:

 weed volume -dir=/tmp/1 -port=8080 -dataCenter=dc1
 weed volume -dir=/tmp/2 -port=8081 -dataCenter=dc2

When requesting a file key, an optional "dataCenter" parameter can limit the assigned volume to the specific data center. For example, this specifies that the assigned volume should be limited to 'dc1':

 http://localhost:9333/dir/assign?dataCenter=dc1

Other Features

Back to TOC

Object Store Architecture

Usually, distributed file systems split each file into chunks; a central master keeps a mapping from filenames and chunk indices to chunk handles, and also tracks which chunks each chunk server has.

The main drawback is that the central master can't handle many small files efficiently, and since all read requests need to go through the chunk master, it might not scale well for many concurrent users.

Instead of managing chunks, SeaweedFS manages data volumes in the master server. Each data volume is 32GB in size, and can hold a lot of files. And each storage node can have many data volumes. So the master node only needs to store the metadata about the volumes, which is a fairly small amount of data and is generally stable.

The actual file metadata is stored in each volume on volume servers. Since each volume server only manages metadata of files on its own disk, with only 16 bytes for each file, all file access can read file metadata just from memory and only needs one disk operation to actually read file data.

For comparison, consider that an xfs inode structure in Linux is 536 bytes.

Master Server and Volume Server

The architecture is fairly simple. The actual data is stored in volumes on storage nodes. One volume server can have multiple volumes, and can support both read and write access with basic authentication.

All volumes are managed by a master server. The master server contains the volume id to volume server mapping. This is fairly static information, and can be easily cached.

On each write request, the master server also generates a file key, which is a growing 64-bit unsigned integer. Since write requests are not generally as frequent as read requests, one master server should be able to handle the concurrency well.

Write and Read files

When a client sends a write request, the master server returns (volume id, file key, file cookie, volume node URL) for the file. The client then contacts the volume node and POSTs the file content.

When a client needs to read a file based on (volume id, file key, file cookie), it asks the master server by the volume id for the (volume node URL, volume node public URL), or retrieves this from a cache. Then the client can GET the content, or just render the URL on web pages and let browsers fetch the content.

Please see the example for details on the write-read process.

Storage Size

In the current implementation, each volume can hold 32 gibibytes (32GiB or 8x2^32 bytes). This is because we align content to 8 bytes. We can easily increase this to 64GiB, or 128GiB, or more, by changing 2 lines of code, at the cost of some wasted padding space due to alignment.

There can be 2^32 (about 4 billion) volumes, since the volume id is a 32-bit integer. So the total system size is 8 bytes x 2^32 x 2^32, which is 128 exbibytes (128EiB or 2^67 bytes).
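
Spelled out, using the 8-byte alignment and the 32-bit ids described here:

per-volume capacity: 8 bytes x 2^32 offsets   = 2^35 bytes = 32GiB
number of volumes:   2^32 (32-bit volume id)
total capacity:      2^35 bytes x 2^32 volumes = 2^67 bytes = 128EiB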

Each individual file size is limited to the volume size.

Saving memory

All file meta information stored on a volume server is readable from memory without disk access. Each file takes just a 16-byte map entry of <64bit key, 32bit offset, 32bit size>. Of course, each map entry has its own space cost for the map. But usually the disk space runs out before the memory does.
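
For illustration, the in-memory entry amounts to something like this (a sketch, not the actual SeaweedFS needle-map type):

package main

import "fmt"

// needleEntry mirrors the 16-byte <64bit key, 32bit offset, 32bit size> layout.
type needleEntry struct {
	Key    uint64 // file key
	Offset uint32 // where the file's needle starts in the volume file (8-byte aligned)
	Size   uint32 // needle size
}

func main() {
	// a million files cost on the order of 16 MB of index memory, plus map overhead
	fmt.Println(1_000_000*16, "bytes")
}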

Tiered Storage to the cloud

The local volume servers are much faster, while cloud storages have elastic capacity and are actually more cost-efficient if not accessed often (usually free to upload, but relatively costly to access). With the append-only structure and O(1) access time, SeaweedFS can take advantage of both local and cloud storage by offloading the warm data to the cloud.

Usually hot data are fresh and warm data are old. SeaweedFS puts the newly created volumes on local servers, and optionally uploads the older volumes to the cloud. If the older data are accessed less often, this literally gives you unlimited capacity with limited local servers, while staying fast for new data.

With the O(1) access time, the network latency cost is kept to a minimum.

If the hot/warm data is split as 20/80, with 20 servers, you can achieve storage capacity of 100 servers. That's a cost saving of 80%! Or you can repurpose the 80 servers to store new data also, and get 5X storage throughput.

Back to TOC

Compared to Other File Systems

Most other distributed file systems seem more complicated than necessary.

SeaweedFS is meant to be fast and simple, in both setup and operation. If you do not understand how it works when you reach here, we've failed! Please raise an issue with any questions or update this file with clarifications.

SeaweedFS is constantly moving forward. Same with other systems. These comparisons can be outdated quickly. Please help to keep them updated.

Back to TOC

Compared to HDFS

HDFS uses the chunk approach for each file, and is ideal for storing large files.

SeaweedFS is ideal for serving relatively smaller files quickly and concurrently.

SeaweedFS can also store extra large files by splitting them into manageable data chunks, and storing the file ids of the data chunks in a meta chunk. This is managed by the "weed upload/download" tool, and the weed master and volume servers are agnostic about it.

Back to TOC

Compared to GlusterFS, Ceph

The architectures are mostly the same. SeaweedFS aims to store and read files fast, with a simple and flat architecture. The main differences are

  • SeaweedFS optimizes for small files, ensuring O(1) disk seek operation, and can also handle large files.
  • SeaweedFS statically assigns a volume id for a file. Locating file content becomes just a lookup of the volume id, which can be easily cached.
  • SeaweedFS Filer metadata store can be any well-known and proven data store, e.g., Redis, Cassandra, HBase, Mongodb, Elastic Search, MySql, Postgres, Sqlite, MemSql, TiDB, CockroachDB, Etcd, YDB etc, and is easy to customize.
  • SeaweedFS Volume server also communicates directly with clients via HTTP, supporting range queries, direct uploads, etc.
System          | File Metadata                    | File Content Read | POSIX     | REST API | Optimized for large number of small files
SeaweedFS       | lookup volume id, cacheable      | O(1) disk seek    |           | Yes      | Yes
SeaweedFS Filer | Linearly Scalable, Customizable  | O(1) disk seek    | FUSE      | Yes      | Yes
GlusterFS       | hashing                          |                   | FUSE, NFS |          |
Ceph            | hashing + rules                  |                   | FUSE      | Yes      |
MooseFS         | in memory                        |                   | FUSE      |          | No
MinIO           | separate meta file for each file |                   |           | Yes      | No

Back to TOC

Compared to GlusterFS

GlusterFS stores files, both directories and content, in configurable volumes called "bricks".

GlusterFS hashes the path and filename into ids, assigns them to virtual volumes, which are then mapped to "bricks".

Back to TOC

Compared to MooseFS

MooseFS chooses to neglect the small file issue. From the moosefs 3.0 manual: "even a small file will occupy 64KiB plus additionally 4KiB of checksums and 1KiB for the header", because it "was initially designed for keeping large amounts (like several thousands) of very big files".

MooseFS Master Server keeps all meta data in memory. Same issue as HDFS namenode.

Back to TOC

Compared to Ceph

Ceph can be set up similarly to SeaweedFS as a key->blob store. It is much more complicated, with the need to support layers on top of it. Here is a more detailed comparison.

SeaweedFS has a centralized master group to look up free volumes, while Ceph uses hashing and metadata servers to locate its objects. Having a centralized master makes it easy to code and manage.

Ceph, like SeaweedFS, is based on the object store RADOS. Ceph is rather complicated with mixed reviews.

Ceph uses CRUSH hashing to automatically manage data placement, which is efficient for locating the data. But the data has to be placed according to the CRUSH algorithm. Any wrong configuration would cause data loss. Topology changes, such as adding new servers to increase capacity, will cause data migration with high IO cost to fit the CRUSH algorithm. SeaweedFS places data by assigning it to any writable volume. If writes to one volume fail, just pick another volume to write to. Adding more volumes is also as simple as it can be.

SeaweedFS is optimized for small files. Small files are stored as one continuous block of content, with at most 8 unused bytes between files. Small file access is O(1) disk read.

SeaweedFS Filer uses off-the-shelf stores, such as MySql, Postgres, Sqlite, Mongodb, Redis, Elastic Search, Cassandra, HBase, MemSql, TiDB, CockroachDB, Etcd, YDB, to manage file directories. These stores are proven, scalable, and easier to manage.

SeaweedFS | comparable to Ceph | advantage
Master    | MDS                | simpler
Volume    | OSD                | optimized for small files
Filer     | Ceph FS            | linearly scalable, Customizable, O(1) or O(logN)

Back to TOC

Compared to MinIO

MinIO follows AWS S3 closely and is ideal for testing the S3 API. It has a good UI, policies, versioning, etc. SeaweedFS is trying to catch up here. It is also possible to put MinIO as a gateway in front of SeaweedFS later.

MinIO metadata are kept in simple files. Each file write incurs extra writes to the corresponding meta file.

MinIO does not have optimizations for lots of small files. Files are simply stored as-is on local disks. Together with the extra meta file and erasure-coding shards, this only amplifies the LOSF (lots of small files) problem.

MinIO has multiple disk IO to read one file. SeaweedFS has O(1) disk reads, even for erasure coded files.

MinIO has full-time erasure coding. SeaweedFS uses replication on hot data for faster speed and optionally applies erasure coding on warm data.

MinIO does not have POSIX-like API support.

MinIO has specific requirements on storage layout. It is not flexible to adjust capacity. In SeaweedFS, just start one volume server pointing to the master. That's all.

Dev Plan

  • More tools and documentation, on how to manage and scale the system.
  • Read and write stream data.
  • Support structured data.

This is a super exciting project! And we need helpers and support!

Back to TOC

Installation Guide

Installation guide for users who are not familiar with golang

Step 1: install Go on your machine and set up the environment by following the instructions at:

https://golang.org/doc/install

make sure to define your $GOPATH

Step 2: checkout this repo:

git clone https://github.com/seaweedfs/seaweedfs.git

Step 3: download, compile, and install the project by executing the following command

cd seaweedfs/weed && make install

Once this is done, you will find the executable "weed" in your $GOPATH/bin directory

For more installation options, including how to run with Docker, see the Getting Started guide.

Back to TOC

Hard Drive Performance

When testing read performance on SeaweedFS, it basically becomes a performance test of your hard drive's random read speed. Hard drives usually get 100MB/s~200MB/s.

Solid State Disk

To modify or delete small files, an SSD must erase a whole block at a time and move content in existing blocks to a new block. SSDs are fast when brand new, but get fragmented over time, requiring garbage collection and block compaction. SeaweedFS is friendly to SSDs since it is append-only. Deletion and compaction are done at the volume level in the background, neither slowing reads nor causing fragmentation.

Back to TOC

Benchmark

My Own Unscientific Single Machine Results on Mac Book with Solid State Disk, CPU: 1 Intel Core i7 2.6GHz.

Write 1 million 1KB file:

Concurrency Level:      16
Time taken for tests:   66.753 seconds
Completed requests:      1048576
Failed requests:        0
Total transferred:      1106789009 bytes
Requests per second:    15708.23 [#/sec]
Transfer rate:          16191.69 [Kbytes/sec]

Connection Times (ms)
              min      avg        max      std
Total:        0.3      1.0       84.3      0.9

Percentage of the requests served within a certain time (ms)
   50%      0.8 ms
   66%      1.0 ms
   75%      1.1 ms
   80%      1.2 ms
   90%      1.4 ms
   95%      1.7 ms
   98%      2.1 ms
   99%      2.6 ms
  100%     84.3 ms

Randomly read 1 million files:

Concurrency Level:      16
Time taken for tests:   22.301 seconds
Completed requests:      1048576
Failed requests:        0
Total transferred:      1106812873 bytes
Requests per second:    47019.38 [#/sec]
Transfer rate:          48467.57 [Kbytes/sec]

Connection Times (ms)
              min      avg        max      std
Total:        0.0      0.3       54.1      0.2

Percentage of the requests served within a certain time (ms)
   50%      0.3 ms
   90%      0.4 ms
   98%      0.6 ms
   99%      0.7 ms
  100%     54.1 ms

Run WARP and launch a mixed benchmark.

make benchmark
warp: Benchmark data written to "warp-mixed-2025-12-05[194844]-kBpU.csv.zst"

Mixed operations.
Operation: DELETE, 10%, Concurrency: 20, Ran 42s.
 * Throughput: 55.13 obj/s

Operation: GET, 45%, Concurrency: 20, Ran 42s.
 * Throughput: 2477.45 MiB/s, 247.75 obj/s

Operation: PUT, 15%, Concurrency: 20, Ran 42s.
 * Throughput: 825.85 MiB/s, 82.59 obj/s

Operation: STAT, 30%, Concurrency: 20, Ran 42s.
 * Throughput: 165.27 obj/s

Cluster Total: 3302.88 MiB/s, 550.51 obj/s over 43s.

Back to TOC

Enterprise

For enterprise users, please visit seaweedfs.com for the SeaweedFS Enterprise Edition, which has a self-healing storage format with better data protection.

Back to TOC

License

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

The text of this page is available for modification and reuse under the terms of the Creative Commons Attribution-Sharealike 3.0 Unported License and the GNU Free Documentation License (unversioned, with no invariant sections, front-cover texts, or back-cover texts).

Back to TOC

Stargazers over time

