93 Commits

Author SHA1 Message Date
Chris Lu
9552e80b58 filer.sync: show active chunk transfers when sync progress stalls (#8889)
* filer.sync: show active chunk transfers when sync progress stalls

When the sync watermark is not advancing, print each in-progress chunk
transfer with its file path, bytes received so far, and current status
(downloading, uploading, or waiting with backoff duration). This helps
diagnose which files are blocking progress during replication.

Closes #8542

* filer.sync: include last error in stall diagnostics

* filer.sync: fix data races in ChunkTransferStatus

Add sync.RWMutex to ChunkTransferStatus and lock around all field
mutations in fetchAndWrite. ActiveTransfers now returns value copies
under RLock so callers get immutable snapshots.
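
The commit puts the sync.RWMutex on ChunkTransferStatus itself; the sketch below shows the same snapshot-under-RLock idea with a simpler tracker-level lock. Names other than ChunkTransferStatus and ActiveTransfers are hypothetical:

    package syncdiag

    import (
        "sync"
        "time"
    )

    // ChunkTransferStatus is a snapshot of one in-progress chunk transfer.
    type ChunkTransferStatus struct {
        FilePath      string
        BytesReceived int64
        State         string        // "downloading", "uploading", or "waiting"
        Backoff       time.Duration // only meaningful when State == "waiting"
    }

    // transferTracker records in-flight transfers so the sync loop can
    // print them when the watermark stops advancing.
    type transferTracker struct {
        mu     sync.RWMutex
        active map[string]ChunkTransferStatus // keyed by file path
    }

    func (t *transferTracker) update(s ChunkTransferStatus) {
        t.mu.Lock()
        defer t.mu.Unlock()
        t.active[s.FilePath] = s
    }

    // ActiveTransfers returns value copies taken under RLock, so callers
    // get immutable snapshots rather than pointers into mutating state.
    func (t *transferTracker) ActiveTransfers() []ChunkTransferStatus {
        t.mu.RLock()
        defer t.mu.RUnlock()
        out := make([]ChunkTransferStatus, 0, len(t.active))
        for _, s := range t.active {
            out = append(out, s)
        }
        return out
    }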
2026-04-02 13:08:24 -07:00
Chris Lu
81369b8a83 improve: large file sync throughput for remote.cache and filer.sync (#8676)
* improve large file sync throughput for remote.cache and filer.sync

Three main throughput improvements:

1. Adaptive chunk sizing for remote.cache: targets ~32 chunks per file
   instead of always starting at 5MB. A 500MB file now uses ~16MB chunks
   (32 chunks) instead of 5MB chunks (100 chunks), reducing per-chunk
   overhead (volume assign, gRPC call, needle write) by 3x.

2. Configurable concurrency at every layer:
   - remote.cache chunk concurrency: -chunkConcurrency flag (default 8)
   - remote.cache S3 download concurrency: -downloadConcurrency flag
     (default raised from 1 to 5 per chunk)
   - filer.sync chunk concurrency: -chunkConcurrency flag (default 32)

3. S3 multipart download concurrency raised from 1 to 5: the S3 manager
   downloader was using Concurrency=1, serializing all part downloads
   within each chunk. This alone can 5x per-chunk download speed.

The concurrency values flow through the gRPC request chain:
  shell command → CacheRemoteObjectToLocalClusterRequest →
  FetchAndWriteNeedleRequest → S3 downloader

Zero values in the request mean "use server defaults", maintaining
full backward compatibility with existing callers.

Ref #8481

* fix: use full maxMB for chunk size cap and remove loop guard

Address review feedback:
- Use full maxMB instead of maxMB/2 for maxChunkSize to avoid
  unnecessarily limiting chunk size for very large files.
- Remove chunkSize < maxChunkSize guard from the safety loop so it
  can always grow past maxChunkSize when needed to stay under 1000
  chunks (e.g., extremely large files with small maxMB).

* address review feedback: help text, validation, naming, docs

- Fix help text for -chunkConcurrency and -downloadConcurrency flags
  to say "0 = server default" instead of advertising specific numeric
  defaults that could drift from the server implementation.
- Validate chunkConcurrency and downloadConcurrency are within int32
  range before narrowing, returning a user-facing error if out of range.
- Rename ReadRemoteErr to readRemoteErr to follow Go naming conventions.
- Add doc comment to SetChunkConcurrency noting it must be called
  during initialization before replication goroutines start.
- Replace doubling loop in chunk size safety check with direct
  ceil(remoteSize/1000) computation to guarantee the 1000-chunk cap.
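
Pulling together the chunk-size logic described above (target ~32 chunks, the full maxMB cap, and the direct ceiling division for the 1000-chunk safety cap), a minimal sketch; the function name and exact clamping order are assumptions:

    package main

    import "fmt"

    const (
        targetChunksPerFile = 32
        maxChunkCount       = 1000
    )

    // adaptiveChunkSize aims for roughly 32 chunks per file, clamps to
    // [minChunkSize, maxChunkSize], then enforces at most 1000 chunks via
    // ceil(remoteSize/1000) instead of a doubling loop -- which may grow
    // the chunk size past maxChunkSize for extremely large files.
    func adaptiveChunkSize(remoteSize, minChunkSize, maxChunkSize int64) int64 {
        chunkSize := (remoteSize + targetChunksPerFile - 1) / targetChunksPerFile
        if chunkSize < minChunkSize {
            chunkSize = minChunkSize
        }
        if chunkSize > maxChunkSize {
            chunkSize = maxChunkSize
        }
        if minForCap := (remoteSize + maxChunkCount - 1) / maxChunkCount; chunkSize < minForCap {
            chunkSize = minForCap
        }
        return chunkSize
    }

    func main() {
        // 500MB file, 5MB floor, 64MB ceiling: ~16MB chunks, 32 chunks total.
        fmt.Println(adaptiveChunkSize(500<<20, 5<<20, 64<<20))
    }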

* address Copilot review: clamp concurrency, fix chunk count, clarify proto docs

- Use ceiling division for chunk count check to avoid overcounting
  when file size is an exact multiple of chunk size.
- Clamp chunkConcurrency (max 1024) and downloadConcurrency (max 1024
  at filer, max 64 at volume server) to prevent excessive goroutines.
- Always use ReadFileWithConcurrency when the client supports it,
  falling back to the implementation's default when value is 0.
- Clarify proto comments that download_concurrency only applies when
  the remote storage client supports it (currently S3).
- Include specific server defaults in help text (e.g., "0 = server
  default 8") so users see the actual values in -h output.

* fix data race on executionErr and use %w for error wrapping

- Protect concurrent writes to executionErr in remote.cache worker
  goroutines with a sync.Mutex to eliminate the data race.
- Use %w instead of %v in volume_grpc_remote.go error formatting
  to preserve the error chain for errors.Is/errors.As callers.
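
A minimal sketch of the race fix, with hypothetical worker and job names; the pattern is a shared error guarded by a sync.Mutex, plus %w wrapping:

    package main

    import (
        "fmt"
        "sync"
    )

    type chunkJob struct{ index int }

    // cacheChunks runs one goroutine per chunk; concurrent failures write
    // to a single executionErr under a mutex, keeping the first error and
    // wrapping it with %w so errors.Is/errors.As still see the cause.
    func cacheChunks(chunks []chunkJob, cacheChunk func(chunkJob) error) error {
        var (
            mu           sync.Mutex
            executionErr error
            wg           sync.WaitGroup
        )
        for _, c := range chunks {
            wg.Add(1)
            go func(c chunkJob) {
                defer wg.Done()
                if err := cacheChunk(c); err != nil {
                    mu.Lock()
                    if executionErr == nil { // keep the first failure
                        executionErr = fmt.Errorf("cache chunk %d: %w", c.index, err)
                    }
                    mu.Unlock()
                }
            }(c)
        }
        wg.Wait()
        return executionErr
    }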
2026-03-17 16:49:56 -07:00
Chris Lu
b665c329bc fix(replication): resume partial chunk reads on EOF instead of re-downloading (#8607)
* fix(replication): resume partial chunk reads on EOF instead of re-downloading

When replicating chunks and the source connection drops mid-transfer,
accumulate the bytes already received and retry with a Range header
to fetch only the remaining bytes. This avoids re-downloading
potentially large chunks from scratch on each retry, reducing load
on busy source servers and speeding up recovery.
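
A sketch of the resume idea; the real helper is the consolidated DownloadFile with an optional offset (per the review notes below), so the name and details here are illustrative, and a real implementation would bound retries or require forward progress:

    package main

    import (
        "bytes"
        "errors"
        "fmt"
        "io"
        "net/http"
    )

    func downloadResumable(client *http.Client, url string) ([]byte, error) {
        var buf bytes.Buffer
        for {
            req, err := http.NewRequest(http.MethodGet, url, nil)
            if err != nil {
                return nil, err
            }
            if buf.Len() > 0 {
                // ask only for the bytes we have not received yet
                req.Header.Set("Range", fmt.Sprintf("bytes=%d-", buf.Len()))
            }
            resp, err := client.Do(req)
            if err != nil {
                return nil, err
            }
            // when resuming, a 200 means the server ignored Range and would
            // replay bytes we already hold, so reject it
            if buf.Len() > 0 && resp.StatusCode != http.StatusPartialContent {
                resp.Body.Close()
                return nil, fmt.Errorf("server ignored Range header: %s", resp.Status)
            }
            _, err = io.Copy(&buf, resp.Body)
            resp.Body.Close()
            if err == nil {
                return buf.Bytes(), nil
            }
            if !errors.Is(err, io.ErrUnexpectedEOF) {
                return nil, err
            }
            // connection dropped mid-transfer: loop and resume from buf.Len()
        }
    }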

* test(replication): add tests for downloadWithRange including gzip partial reads

Tests cover:
- No offset (no Range header sent)
- With offset (Range header verified)
- Content-Disposition filename extraction
- Partial read + resume: server drops connection mid-transfer, client
  resumes with Range from the offset of received bytes
- Gzip partial read + resume: first response is gzip-encoded (Go auto-
  decompresses), connection drops, resume request gets decompressed data
  (Go doesn't add Accept-Encoding when Range is set, so the server
  decompresses), combined bytes match original

* fix(replication): address PR review comments

- Consolidate downloadWithRange into DownloadFile with optional offset
  parameter (variadic), eliminating code duplication (DRY)
- Validate HTTP response status: require 206 + correct Content-Range
  when offset > 0, reject when server ignores Range header
- Use if/else for fullData assignment for clarity
- Add test for rejected Range (server returns 200 instead of 206)

* refactor(replication): remove unused ReplicationSource interface

The interface was never referenced and its signature didn't match
the actual FilerSource.ReadPart method.

---------

Co-authored-by: Copilot <copilot@github.com>
2026-03-11 22:38:22 -07:00
Chris Lu
0647f66bb5 filer.sync: add exponential backoff on unexpected EOF during replication (#8557)
* filer.sync: add exponential backoff on unexpected EOF during replication

When the source volume server drops connections under high traffic,
filer.sync retries aggressively (every 1-6s), hammering the already
overloaded source. This adds a longer exponential backoff (10s to 2min)
specifically for "unexpected EOF" errors, reducing pressure on the
source while still retrying indefinitely until success.

Also adds more logging throughout the replication path:
- Log source URL and error at V(0) when ReadPart or io.ReadAll fails
- Log content-length and byte counts at V(4) on success
- Log backoff duration in retry messages

Fixes #8542

* filer.sync: extract backoff helper and fix 2-minute cap

- Extract nextEofBackoff() and isEofError() helpers to deduplicate
  the backoff logic between fetchAndWrite and uploadManifestChunk
- Fix the cap: previously 80s passed the < 2min check and then doubled
  to 160s, escaping the cap. Now double first, then clamp to 2min.

* filer.sync: log source URL instead of empty upload URL on read errors

UploadUrl is not populated until after the reader is consumed, so the
V(0) and V(4) logs were printing an empty string. Add SourceUrl field
to UploadOption and populate it from the HTTP response in fetchAndWrite.

* filer.sync: guard isEofError against nil error

* filer.sync: use errors.Is for EOF detection, fix log wording

- Replace broad substring matching ("read input", "unexpected EOF")
  with errors.Is(err, io.ErrUnexpectedEOF) and errors.Is(err, io.EOF)
  so only actual EOF errors trigger the longer backoff
- Fix awkward log phrasing: "interrupted replicate" → "interrupted
  while replicating"
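
The two helpers, as a minimal sketch consistent with the messages above (the names nextEofBackoff and isEofError come from this commit; the exact signatures are guesses):

    package main

    import (
        "errors"
        "io"
        "time"
    )

    const (
        eofBackoffMin = 10 * time.Second
        eofBackoffMax = 2 * time.Minute
    )

    // isEofError matches only real EOF errors via errors.Is, not substrings,
    // and is safe on a nil error.
    func isEofError(err error) bool {
        return err != nil &&
            (errors.Is(err, io.ErrUnexpectedEOF) || errors.Is(err, io.EOF))
    }

    // nextEofBackoff doubles first and then clamps, so 80s becomes 120s
    // instead of escaping the cap as 160s.
    func nextEofBackoff(current time.Duration) time.Duration {
        if current < eofBackoffMin {
            return eofBackoffMin
        }
        next := current * 2
        if next > eofBackoffMax {
            next = eofBackoffMax
        }
        return next
    }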

* filer.sync: remove EOF backoff from uploadManifestChunk

uploadManifestChunk reads from an in-memory bytes.Reader, so any EOF
errors there are from the destination side, not a broken source stream.
The long source-oriented backoff is inappropriate; let RetryUntil
handle destination retries at its normal cadence.

---------

Co-authored-by: Copilot <copilot@github.com>
2026-03-08 14:33:37 -07:00
Chris Lu
7fcbffed7f filer.sync: support manifest chunks (#8299)
* filer.sync support manifest chunks

* filersink: address manifest sync review feedback
2026-02-10 20:18:35 -08:00
Chris Lu
be0379f6fd Fix filer.sync retry on stale chunk (#8298)
* Fix filer.sync stale chunk uploads

* Tweak filersink stale logging
2026-02-10 19:06:35 -08:00
promalert
9012069bd7 chore: execute goimports to format the code (#7983)
* chore: execute goimports to format the code

Signed-off-by: promalert <promalert@outlook.com>

* goimports -w .

---------

Signed-off-by: promalert <promalert@outlook.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
2026-01-07 13:06:08 -08:00
Chris Lu
cc2edfaf68 fix: enable RetryForever for active-active cluster sync to prevent out-of-sync (#7840)
Fixes #7230

When a cluster goes down during file replication, the chunk upload process
would fail after a limited number of retries. Once the remote cluster came
back online, those failed uploads were never retried, leaving the clusters
out-of-sync.

This change enables the RetryForever flag in the UploadOption when
replicating chunks between filers. This ensures that upload operations
will keep retrying indefinitely, and once the remote cluster comes back
online, the pending uploads will automatically succeed.

Users no longer need to manually run fs.meta.save and fs.meta.load as
a workaround for out-of-sync clusters.
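
A sketch of the retry behavior this enables; RetryForever and UploadOption are named in the commit, but the loop and backoff below are illustrative:

    package main

    import "time"

    // uploadWithRetry keeps retrying indefinitely when retryForever is set,
    // so a chunk upload stalled by a remote-cluster outage eventually
    // succeeds once the cluster returns; otherwise it gives up after
    // maxAttempts.
    func uploadWithRetry(upload func() error, retryForever bool, maxAttempts int) error {
        backoff := time.Second
        for attempt := 1; ; attempt++ {
            err := upload()
            if err == nil {
                return nil
            }
            if !retryForever && attempt >= maxAttempts {
                return err
            }
            time.Sleep(backoff)
            if backoff < time.Minute {
                backoff *= 2
            }
        }
    }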
2025-12-22 00:58:23 -08:00
Chris Lu
69553e5ba6 convert error formatting to %w everywhere (#6995) 2025-07-16 23:39:27 -07:00
Aleksey Kosov
283d9e0079 Add context with request (#6824) 2025-05-28 11:34:02 -07:00
Aleksey Kosov
165af32d6b added context to filer_client method calls (#6808)
Co-authored-by: akosov <a.kosov@kryptonite.ru>
2025-05-22 09:46:49 -07:00
vadimartynov
86d92a42b4 Added tls for http clients (#5766)
* Added global http client

* Added Do func for global http client

* Changed the code to use the global http client

* Fix http client in volume uploader

* Fixed pkg name

* Fixed http util funcs

* Fixed http client for bench_filer_upload

* Fixed http client for stress_filer_upload

* Fixed http client for filer_server_handlers_proxy

* Fixed http client for command_fs_merge_volumes

* Fixed http client for command_fs_merge_volumes and command_volume_fsck

* Fixed http client for s3api_server

* Added init global client for main funcs

* Rename global_client to client

* Changed:
- fixed NewHttpClient
- added CheckIsHttpsClientEnabled func
- updated security.toml in scaffold

* Reduce the visibility of some functions in the util/http/client pkg

* Added the loadSecurityConfig function

* Use util.LoadSecurityConfiguration() in NewHttpClient func
2024-07-16 23:14:09 -07:00
chrislu
c6dec11ea5 [filer.sync] skip overwriting existing fresh entry 2024-07-16 09:38:10 -07:00
chrislu
81fdf3651b grpc connection to filer add sw-client-id header 2023-01-20 01:48:12 -08:00
chrislu
6ede19e825 add a simple file replication progress bar 2022-12-20 19:47:21 -08:00
chrislu
6c7fe40305 filer sink retries reading file chunks, skipping missing chunks
if a file chunk is not available at replication time, the file is skipped
2022-12-19 11:31:58 -08:00
chrislu
70a4c98b00 refactor filer_pb.Entry and filer.Entry to use GetChunks()
for later locking on reading chunks
2022-11-15 06:33:36 -08:00
chrislu
ea2637734a refactor filer proto chunk variable from mtime to modified_ts_ns 2022-10-28 12:53:19 -07:00
chrislu
0d817bc347 fix invalid memory address or nil pointer dereference on filer.sync
fix https://github.com/seaweedfs/seaweedfs/issues/3826
2022-10-11 21:58:17 -07:00
chrislu
ea271600ec fix parameters 2022-10-04 12:36:05 -07:00
chrislu
0452ae6a6c filer.sync: limit concurrency when fetching file chunks
fix https://github.com/seaweedfs/seaweedfs/issues/3787
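The usual Go pattern for this kind of limit, as a sketch (the actual limit value and function names in filer.sync may differ):

    package main

    import "sync"

    // fetchChunksBounded fetches chunks with at most `limit` fetches in
    // flight, using a buffered channel as a semaphore.
    func fetchChunksBounded(chunks []string, limit int, fetch func(string)) {
        sem := make(chan struct{}, limit)
        var wg sync.WaitGroup
        for _, c := range chunks {
            wg.Add(1)
            sem <- struct{}{} // acquire a slot
            go func(c string) {
                defer wg.Done()
                defer func() { <-sem }() // release the slot
                fetch(c)
            }(c)
        }
        wg.Wait()
    }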
2022-10-04 11:35:07 -07:00
chrislu
b463ca1a2f filer replication: compare content changes directly
Fix https://github.com/seaweedfs/seaweedfs/issues/3714

The destination chunks may be empty. For example, the file is updated and the volume is vacuumed. In this case, the sync would miss the old chunks. This is fine. However, the entry would have correct metadata but missing chunks.

For this case, the simple metadata comparison would wrongly skip data changes, and the file would stay empty unless the file content MD5 changed.
2022-09-20 08:35:10 -07:00
Ryan Russell
d734fff322 docs: replicte -> replicate (#3664) 2022-09-14 10:01:18 -07:00
chrislu
cb6cf331ca filer.backup and filer.sync: include headers during backup and sync
fix https://github.com/seaweedfs/seaweedfs/issues/3532
2022-09-04 18:26:36 -07:00
askeipx
2e78a522ab remove old raft servers if they don't answer to pings for too long (#3398)
* remove old raft servers if they don't answer to pings for too long

add ping durations as options

rename ping fields

fix some todos

get masters through masterclient

raft remove server from leader

use raft servers to ping them

CheckMastersAlive for hashicorp raft only

* prepare blocking ping

* pass waitForReady as param

* pass waitForReady through all functions

* waitForReady works

* refactor

* remove unneeded params

* rollback unneeded changes

* fix
2022-08-23 23:18:21 -07:00
chrislu
4081d50607 filer sink: retryable data chunk uploading 2022-08-20 19:09:15 -07:00
Konstantin Lebedev
4d08393b7c filer prefer volume server in same data center (#3405)
* initial prefer same data center
https://github.com/seaweedfs/seaweedfs/issues/3404

* GetDataCenter

* prefer same data center for ReplicationSource

* GetDataCenterId

* remove glog
2022-08-04 17:35:00 -07:00
chrislu
26dbc6c905 move to https://github.com/seaweedfs/seaweedfs 2022-07-29 00:17:28 -07:00
chrislu
139e039c44 filer.sync: pass attributes for mount
fix https://github.com/chrislusf/seaweedfs/issues/3012
2022-05-06 03:54:12 -07:00
chrislu
9405eaefdb filer.sync: fix replicating partially updated file
Run two servers with volumes and filers:
server -dir=Server1alpha -master.port=11000 -filer -filer.port=11001 -volume.port=11002
server -dir=Server1sigma -master.port=11006 -filer -filer.port=11007 -volume.port=11008

Run Active-Passive filer.sync:
filer.sync -a localhost:11007 -b localhost:11001 -isActivePassive

Upload file to 11007 port:
curl -F file=@/Desktop/9.xml "http://localhost:11007/testFacebook/"

If we now request the file from both servers, everything is correct, even if we append data to the file and upload it again:
curl "http://localhost:11007/testFacebook/9.xml"
EQUALS
curl "http://localhost:11001/testFacebook/9.xml"

However, if we change already-existing data in the file (for example, shortening the first line), the file on the second server will not be valid and will not be equivalent to the first file.

[Screenshot attached: 2022-02-07 14:21:11]

This problem occurs at line 202 of filer_sink.go, and is due to incorrect mapping of chunk names in the DoMinusChunks function. The names of deletedChunks do not match the chunks of existingEntry.Chunks, since the deleted chunks come from the other server and use different addressing (names) than the server where the file is being overwritten.

Deleted chunks are not actually deleted on the server to which the file is replicated.
2022-02-07 03:46:28 -08:00
chrislu
9f9ef1340c use streaming mode for long poll grpc calls
streaming mode creates a separate grpc connection for each call.
this ensures the long poll connections are properly closed.
2021-12-26 00:15:03 -08:00
Chris Lu
e5fc35ed0c change server address from string to a type 2021-09-12 22:47:52 -07:00
Chris Lu
6923af7280 refactoring 2021-09-06 16:20:49 -07:00
Chris Lu
99b599aa8a remote.mount 2021-07-26 22:53:44 -07:00
Chris Lu
7359193e97 go fmt 2021-07-21 14:38:12 -07:00
Chris Lu
7ab389e7ec optimization: improve random range query for large files 2021-07-19 23:07:22 -07:00
Chris Lu
450222dd64 add remote to filer.Entry and filer_pb entry, add RemoteConf 2021-07-19 02:47:27 -07:00
Chris Lu
8f8738867f add retry to assign volume
fix https://github.com/chrislusf/seaweedfs/issues/2056
2021-05-07 07:29:26 -07:00
Chris Lu
540441fd38 go fmt 2021-02-28 20:34:14 -08:00
Chris Lu
678c54d705 data sink: add incremental mode 2021-02-28 16:19:03 -08:00
Chris Lu
a0e84c4fbc go fmt 2021-02-10 23:41:05 -08:00
Chris Lu
821c46edf1 Merge branch 'master' into support_ssd_volume 2021-02-09 11:37:07 -08:00
Chris Lu
990fa69bfe add back AdjustedUrl() related code 2021-01-28 14:36:29 -08:00
Chris Lu
00707ec00f mount: outsideContainerClusterMode proxy through filer
Running mount outside of the cluster no longer requires exposing all the volume servers outside the cluster. Chunk reads and writes go through the filer.
2021-01-24 19:01:58 -08:00
Chris Lu
6ca10725b8 Revert "mount: when outside cluster network, use filer as proxy to access volume servers"
This reverts commit 096e088d7b.
2021-01-24 03:15:19 -08:00
Chris Lu
096e088d7b mount: when outside cluster network, use filer as proxy to access volume servers 2021-01-24 01:41:38 -08:00
Chris Lu
80b8692688 filer.sync: replicate outside of either cluster, only need to see filers 2021-01-24 00:01:44 -08:00
Chris Lu
2b76854641 add "weed filer.cat" to read files directly from volume servers 2021-01-06 04:22:00 -08:00
Chris Lu
1bf22c0b5b go fmt 2020-12-16 09:14:05 -08:00
Chris Lu
51eadaf2b6 rename parameter name to "disk" 2020-12-13 12:05:31 -08:00