fix: S3 downloads failing after idle timeout (#7626)

* fix: S3 downloads failing after idle timeout (#7618)

The idle timeout was incorrectly terminating active downloads because
read and write deadlines were managed independently. During a download,
the server writes data but rarely reads, so the read deadline would
expire even though the connection was actively being used.

Changes:
1. Simplify to a single Timeout field - since this is a 'no activity timeout'
   where any activity extends the deadline, separate read/write timeouts
   are unnecessary. The wrapper now uses SetDeadline(), which sets both
   deadlines at once.

2. Implement proper 'no activity timeout' - any activity (read or write)
   now extends the deadline. The connection only times out when there's
   genuinely no activity in either direction.

3. Increase default S3 idleTimeout from 10s to 120s for additional safety
   margin when fetching chunks from slow storage backends.

Fixes #7618

* Update weed/util/net_timeout.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Author: Chris Lu
Date: 2025-12-04 18:31:46 -08:00
Committed by: GitHub
Parent: f9b4a4c396
Commit: 3183a49698
5 changed files with 32 additions and 99 deletions

@@ -84,7 +84,7 @@ func init() {
 	s3StandaloneOptions.allowDeleteBucketNotEmpty = cmdS3.Flag.Bool("allowDeleteBucketNotEmpty", true, "allow recursive deleting all entries along with bucket")
 	s3StandaloneOptions.localFilerSocket = cmdS3.Flag.String("localFilerSocket", "", "local filer socket path")
 	s3StandaloneOptions.localSocket = cmdS3.Flag.String("localSocket", "", "default to /tmp/seaweedfs-s3-<port>.sock")
-	s3StandaloneOptions.idleTimeout = cmdS3.Flag.Int("idleTimeout", 10, "connection idle seconds")
+	s3StandaloneOptions.idleTimeout = cmdS3.Flag.Int("idleTimeout", 120, "connection idle seconds")
 	s3StandaloneOptions.concurrentUploadLimitMB = cmdS3.Flag.Int("concurrentUploadLimitMB", 128, "limit total concurrent upload size")
 	s3StandaloneOptions.concurrentFileUploadLimit = cmdS3.Flag.Int("concurrentFileUploadLimit", 0, "limit number of concurrent file uploads, 0 means unlimited")
 }