* filer: improve FoundationDB performance by disabling batch by default
This PR addresses a performance issue where FoundationDB filer was achieving
only ~757 ops/sec with 12 concurrent S3 clients, despite FDB being capable
of 17,000+ ops/sec.
Root cause: the write batcher waited up to 5ms for each operation to join a
batch, but S3 semantics require each PUT to block until its write is durably
committed, so that wait was added straight to per-request latency and
defeated the purpose of batching.
Changes:
- Disable write batching by default (batch_enabled = false)
- Each write now commits immediately in its own transaction
- Reduce batch interval from 5ms to 1ms when batching is enabled
- Add batch_enabled config option to toggle behavior
- Improve batcher to collect available ops without blocking
- Add benchmarks comparing batch vs no-batch performance
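The "collect available ops without blocking" change can be sketched as a channel drain loop: block for the first op, then take only what is already queued and commit immediately. This is a simplified illustration, not the PR's actual batcher; the `op` type and `drainBatch` helper are hypothetical names.

```go
package main

import "fmt"

// op is a stand-in for a single filer write operation (hypothetical type).
type op struct{ key string }

// drainBatch starts from one op already received and collects any ops
// that are immediately available on ch, without waiting for more to arrive.
func drainBatch(first op, ch <-chan op, max int) []op {
	batch := []op{first}
	for len(batch) < max {
		select {
		case o := <-ch:
			batch = append(batch, o)
		default:
			return batch // nothing else queued: commit what we have now
		}
	}
	return batch
}

func main() {
	ch := make(chan op, 8)
	for i := 0; i < 3; i++ {
		ch <- op{key: fmt.Sprintf("k%d", i)}
	}
	first := <-ch // blocking receive for the first op only
	batch := drainBatch(first, ch, 16)
	fmt.Println(len(batch)) // 3: the first op plus the two already queued
}
```

The key point is the `default` branch: under light load each op commits alone with no added delay, while under heavy load ops naturally accumulate into batches.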
Benchmark results (16 concurrent goroutines):
- With batch: 2,924 ops/sec (342,032 ns/op)
- Without batch: 4,625 ops/sec (216,219 ns/op)
- Improvement: +58% throughput
Configuration:
- Default: batch_enabled = false (optimal for S3 PUT latency)
- For bulk ingestion: set batch_enabled = true
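As a rough illustration, the `filer.toml` stanza might look like the following; `batch_enabled` is the option this PR adds, while the other key names here are assumptions for illustration only:

```toml
[foundationdb]
enabled = true
cluster_file = "/etc/foundationdb/fdb.cluster"  # assumed key name

# Default: each write commits immediately in its own transaction,
# which is optimal for S3 PUT latency.
batch_enabled = false

# For bulk ingestion, re-enable batching (1ms batch interval):
# batch_enabled = true
```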
Also fixes ARM64 Docker test setup (shell compatibility, fdbserver path).
* fix: address review comments - use atomic counter and remove duplicate batcher
- Use sync/atomic.Uint64 for unique filenames in concurrent benchmarks
- Remove duplicate batcher creation in createBenchmarkStoreWithBatching
(initialize() already creates batcher when batchEnabled=true)
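The atomic-counter fix for unique filenames can be illustrated with a minimal sketch (`fileSeq` and `nextName` are hypothetical names, not the benchmark's actual identifiers):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// fileSeq hands out collision-free sequence numbers across concurrent
// benchmark goroutines; atomic.Uint64 (Go 1.19+) needs no mutex.
var fileSeq atomic.Uint64

func nextName() string {
	return fmt.Sprintf("bench-file-%d", fileSeq.Add(1))
}

func main() {
	var wg sync.WaitGroup
	names := make(chan string, 100)
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			names <- nextName()
		}()
	}
	wg.Wait()
	close(names)
	seen := map[string]bool{}
	for n := range names {
		seen[n] = true
	}
	fmt.Println(len(seen)) // 100: every goroutine got a unique name
}
```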
* fix: add realistic default values to benchmark store helper
Set directoryPrefix, timeout, and maxRetryDelay to reasonable defaults
for more realistic benchmark conditions.
* add foundationdb
* Update foundationdb_store.go
* fix
* apply the patch
* avoid panic on error
* address comments
* remove extra data
* address comments
* adds more debug messages
* fix range listing
* delete with prefix range; list with right start key
* fix docker files
* use the more idiomatic FoundationDB KeySelectors
* address comments
* proper errors
* fix API versions
* more efficient
* recursive deletion
* clean up
* clean up
* pagination, one transaction for deletion
* error checking
* Use fdb.Strinc() to compute the lexicographically next string and create a proper range
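For reference, a self-contained sketch of the logic `fdb.Strinc()` implements: strip trailing 0xFF bytes, then increment the last remaining byte, yielding the first key that does not have the given key as a prefix, so `[key, strinc(key))` covers exactly the keys with that prefix. The real code should call the binding itself; this re-implementation is only illustrative.

```go
package main

import (
	"errors"
	"fmt"
)

// strinc mirrors what fdb.Strinc computes: the lexicographically first
// key NOT prefixed by key. Trailing 0xFF bytes are dropped (they cannot
// be incremented) and the last remaining byte is incremented.
func strinc(key []byte) ([]byte, error) {
	for i := len(key) - 1; i >= 0; i-- {
		if key[i] != 0xFF {
			out := make([]byte, i+1)
			copy(out, key[:i+1])
			out[i]++
			return out, nil
		}
	}
	return nil, errors.New("key must contain at least one byte not equal to 0xFF")
}

func main() {
	end, _ := strinc([]byte("/seaweedfs/dir/"))
	fmt.Printf("%q\n", end) // "/seaweedfs/dir0" — '/' + 1 == '0'
}
```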
* fix docker
* Update README.md
* delete in batches
* delete in batches
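The batched-deletion approach can be sketched against an in-memory stand-in for the key-value store; the map and the `deleteRangeBatched` helper are illustrative only, and in the real commits each batch runs in its own FoundationDB transaction so no single transaction hits FDB's size and time limits.

```go
package main

import (
	"fmt"
	"sort"
)

// deleteRangeBatched removes keys in [begin, end) in fixed-size batches,
// one batch per "transaction" in the real store (hypothetical helper).
func deleteRangeBatched(store map[string][]byte, begin, end string, batchSize int) int {
	deleted := 0
	for {
		// Collect the keys still inside the range.
		var batch []string
		for k := range store {
			if k >= begin && k < end {
				batch = append(batch, k)
			}
		}
		if len(batch) == 0 {
			return deleted
		}
		sort.Strings(batch)
		if len(batch) > batchSize {
			batch = batch[:batchSize]
		}
		// In the real store: begin transaction, clear batch, commit.
		for _, k := range batch {
			delete(store, k)
			deleted++
		}
	}
}

func main() {
	store := map[string][]byte{}
	for i := 0; i < 10; i++ {
		store[fmt.Sprintf("/dir/f%02d", i)] = nil
	}
	n := deleteRangeBatched(store, "/dir/", "/dir0", 4) // 3 batches: 4 + 4 + 2
	fmt.Println(n, len(store))                          // 10 0
}
```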
* fix build
* add foundationdb build
* Updated FoundationDB Version
* Fixed glibc/musl Incompatibility (Alpine → Debian)
* Update container_foundationdb_version.yml
* build SeaweedFS
* build tag
* address comments
* separate transaction
* address comments
* fix build
* empty vs no data
* fixes
* add go test
* Install FoundationDB client libraries
* nil compare