* Add volume dir tags to topology
* Add preferred tag config for EC
* Prioritize EC destinations by tags
* Add EC placement planner tag tests
* Refactor EC placement tests to reuse buildActiveTopology

  Remove the buildActiveTopologyWithDiskTags helper function and consolidate tag setup inline in the test cases. Tests now use UpdateTopology to apply tags after topology creation, reusing the existing buildActiveTopology function rather than duplicating its logic. All tag scenario tests pass:
  - TestECPlacementPlannerPrefersTaggedDisks
  - TestECPlacementPlannerFallsBackWhenTagsInsufficient

* Consolidate normalizeTagList into shared util package

  Extract normalizeTagList from three locations (volume.go, detection.go, erasure_coding_handler.go) into the new weed/util/tag.go as the exported NormalizeTagList function, and replace all duplicate implementations with calls to util.NormalizeTagList. This centralizes tag normalization logic, improving code reuse and maintainability.

* Add PreferredTags to EC config persistence

  Add a preferred_tags field to the ErasureCodingTaskConfig protobuf with field number 5. Update GetConfigSpec to include the preferred_tags field in the UI configuration schema, add PreferredTags to ToTaskPolicy to serialize the config to protobuf, and add PreferredTags to FromTaskPolicy to deserialize it from protobuf with a defensive copy that prevents external mutation. This allows EC preferred tags to be persisted and restored across worker restarts.
* Add defensive copy for Tags slice in DiskLocation

  Copy the incoming tags slice in NewDiskLocation instead of storing it by reference. This prevents external callers from mutating the DiskLocation.Tags slice after construction, improving encapsulation and avoiding unexpected changes to disk metadata.

* Add doc comment to buildCandidateSets method

  Document the tiered candidate selection and fallback behavior: for a planner with preferredTags, the method accumulates disks matching each tag in order into progressively larger tiers, emits a candidate set once a tier reaches shardsNeeded, and finally falls back to the full candidate set if the preferred-tag tiers are insufficient. This clarifies the intended semantics for future maintainers.

* Apply final PR review fixes

  1. Update parseVolumeTags to replicate a single tag entry to all folders instead of leaving some folders with nil tags. This prevents nil pointer dereferences when processing folders without explicit tags.
  2. Add a defensive copy in ToTaskPolicy for the PreferredTags slice to match the pattern used in FromTaskPolicy, preventing external mutation of the returned TaskPolicy.
  3. Add a clarifying comment in buildCandidateSets explaining that the shardsNeeded <= 0 branch is a defensive check for direct callers, since selectDestinations guarantees shardsNeeded > 0.

* Fix nil pointer dereference in parseVolumeTags

  Ensure all folder tags are initialized to either normalized tags or empty slices, never nil. When multiple tag entries are provided and there are more folders than entries, the remaining folders now get empty slices instead of nil, preventing nil pointer dereferences in downstream code.
* Fix NormalizeTagList to return empty slice instead of nil

  Change NormalizeTagList to always return a non-nil slice. When all tags are empty or whitespace after normalization, it returns an empty slice instead of nil, so downstream code can rely on a valid (possibly empty) slice.

* Add nil safety check for v.tags pointer

  Add a safety check for the case where v.tags might be nil, falling back to an empty string instead of dereferencing it. This is defensive programming to prevent panics in edge cases.

* Add volume.tags flag to weed server and weed mini commands

  Add the volume.tags CLI option to both the 'weed server' and 'weed mini' commands, so users can specify disk tags when running the combined server modes, just as with 'weed volume'. The flag uses the same format and description as the volume command: comma-separated tag groups per data dir with ':' separators (e.g. fast:ssd,archive).

---------

Co-authored-by: Copilot <copilot@github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
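The tag normalization and tiered EC candidate selection described above can be sketched as follows. This is a minimal, self-contained illustration, not the SeaweedFS implementation: `normalizeTagList` and `buildCandidateSets` here are simplified stand-ins for `util.NormalizeTagList` and the planner method of the same name, and the disk names are hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeTagList trims whitespace and drops empty entries, always
// returning a non-nil (possibly empty) slice, matching the contract
// described in the commit list above.
func normalizeTagList(tags []string) []string {
	normalized := []string{}
	for _, t := range tags {
		if t = strings.TrimSpace(t); t != "" {
			normalized = append(normalized, t)
		}
	}
	return normalized
}

// buildCandidateSets sketches the tiered selection: disks matching each
// preferred tag are accumulated in order into progressively larger tiers,
// a tier is emitted once it can hold shardsNeeded disks, and the full
// candidate list is always appended as the fallback.
func buildCandidateSets(candidates []string, diskTags map[string][]string, preferredTags []string, shardsNeeded int) [][]string {
	var sets [][]string
	tier := []string{}
	seen := map[string]bool{}
	for _, tag := range preferredTags {
		for _, disk := range candidates {
			if seen[disk] {
				continue
			}
			for _, t := range diskTags[disk] {
				if t == tag {
					tier = append(tier, disk)
					seen[disk] = true
					break
				}
			}
		}
		if len(tier) >= shardsNeeded {
			sets = append(sets, append([]string(nil), tier...))
		}
	}
	// fall back to every candidate if the preferred tiers are insufficient
	sets = append(sets, candidates)
	return sets
}

func main() {
	diskTags := map[string][]string{
		"diskA": {"fast", "ssd"},
		"diskB": {"fast"},
		"diskC": {"archive"},
	}
	preferred := normalizeTagList([]string{" fast ", "archive"})
	sets := buildCandidateSets([]string{"diskA", "diskB", "diskC"}, diskTags, preferred, 2)
	fmt.Println(len(sets), sets[0]) // prints: 3 [diskA diskB]
}
```

Planners then try each emitted set in order, so "fast"-tagged disks are preferred before the untagged fallback, which is the prioritization behavior the tests above exercise.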
477 lines · 18 KiB · Go
package command

import (
	"fmt"
	"net/http"
	httppprof "net/http/pprof"
	"os"
	"runtime/pprof"
	"strconv"
	"strings"
	"time"

	"github.com/spf13/viper"
	"google.golang.org/grpc"
	"google.golang.org/grpc/reflection"

	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/volume_server_pb"
	"github.com/seaweedfs/seaweedfs/weed/security"
	weed_server "github.com/seaweedfs/seaweedfs/weed/server"
	"github.com/seaweedfs/seaweedfs/weed/server/constants"
	stats_collect "github.com/seaweedfs/seaweedfs/weed/stats"
	"github.com/seaweedfs/seaweedfs/weed/storage"
	"github.com/seaweedfs/seaweedfs/weed/storage/types"
	"github.com/seaweedfs/seaweedfs/weed/util"
	"github.com/seaweedfs/seaweedfs/weed/util/grace"
	"github.com/seaweedfs/seaweedfs/weed/util/httpdown"
	"github.com/seaweedfs/seaweedfs/weed/util/version"
)
var (
	v VolumeServerOptions
)

type VolumeServerOptions struct {
	port                        *int
	portGrpc                    *int
	publicPort                  *int
	folders                     []string
	folderMaxLimits             []int32
	idxFolder                   *string
	ip                          *string
	id                          *string
	publicUrl                   *string
	bindIp                      *string
	mastersString               *string
	mserverString               *string // deprecated, for backward compatibility
	masters                     []pb.ServerAddress
	idleConnectionTimeout       *int
	dataCenter                  *string
	rack                        *string
	whiteList                   []string
	indexType                   *string
	diskType                    *string
	tags                        *string
	fixJpgOrientation           *bool
	readMode                    *string
	cpuProfile                  *string
	memProfile                  *string
	compactionMBPerSecond       *int
	maintenanceMBPerSecond      *int
	fileSizeLimitMB             *int
	concurrentUploadLimitMB     *int
	concurrentDownloadLimitMB   *int
	pprof                       *bool
	preStopSeconds              *int
	metricsHttpPort             *int
	metricsHttpIp               *string
	// pulseSeconds *int
	inflightUploadDataTimeout   *time.Duration
	inflightDownloadDataTimeout *time.Duration
	hasSlowRead                 *bool
	readBufferSizeMB            *int
	ldbTimeout                  *int64
	debug                       *bool
	debugPort                   *int
}
func init() {
	cmdVolume.Run = runVolume // break init cycle
	v.port = cmdVolume.Flag.Int("port", 8080, "http listen port")
	v.portGrpc = cmdVolume.Flag.Int("port.grpc", 0, "grpc listen port")
	v.publicPort = cmdVolume.Flag.Int("port.public", 0, "port opened to public")
	v.ip = cmdVolume.Flag.String("ip", util.DetectedHostAddress(), "ip or server name, also used as identifier")
	v.id = cmdVolume.Flag.String("id", "", "volume server id. If empty, default to ip:port")
	v.publicUrl = cmdVolume.Flag.String("publicUrl", "", "Publicly accessible address")
	v.bindIp = cmdVolume.Flag.String("ip.bind", "", "ip address to bind to. If empty, default to same as -ip option.")
	v.mastersString = cmdVolume.Flag.String("master", "localhost:9333", "comma-separated master servers")
	v.mserverString = cmdVolume.Flag.String("mserver", "", "comma-separated master servers (deprecated, use -master instead)")
	v.preStopSeconds = cmdVolume.Flag.Int("preStopSeconds", 10, "number of seconds between stop send heartbeats and stop volume server")
	// v.pulseSeconds = cmdVolume.Flag.Int("pulseSeconds", 5, "number of seconds between heartbeats, must be smaller than or equal to the master's setting")
	v.idleConnectionTimeout = cmdVolume.Flag.Int("idleTimeout", 30, "connection idle seconds")
	v.dataCenter = cmdVolume.Flag.String("dataCenter", "", "current volume server's data center name")
	v.rack = cmdVolume.Flag.String("rack", "", "current volume server's rack name")
	v.indexType = cmdVolume.Flag.String("index", "memory", "Choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance.")
	v.diskType = cmdVolume.Flag.String("disk", "", "[hdd|ssd|<tag>] hard drive or solid state drive or any tag")
	v.tags = cmdVolume.Flag.String("tags", "", "comma-separated tag groups per data dir; each group uses ':' (e.g. fast:ssd,archive)")
	v.fixJpgOrientation = cmdVolume.Flag.Bool("images.fix.orientation", false, "Adjust jpg orientation when uploading.")
	v.readMode = cmdVolume.Flag.String("readMode", "proxy", "[local|proxy|redirect] how to deal with non-local volume: 'not found|proxy to remote node|redirect volume location'.")
	v.cpuProfile = cmdVolume.Flag.String("cpuprofile", "", "cpu profile output file")
	v.memProfile = cmdVolume.Flag.String("memprofile", "", "memory profile output file")
	v.compactionMBPerSecond = cmdVolume.Flag.Int("compactionMBps", 0, "limit background compaction or copying speed in mega bytes per second")
	v.maintenanceMBPerSecond = cmdVolume.Flag.Int("maintenanceMBps", 0, "limit maintenance (replication / balance) IO rate in MB/s. Unset is 0, no limitation.")
	v.fileSizeLimitMB = cmdVolume.Flag.Int("fileSizeLimitMB", 256, "limit file size to avoid out of memory")
	v.ldbTimeout = cmdVolume.Flag.Int64("index.leveldbTimeout", 0, "alive time for leveldb (default to 0). If leveldb of volume is not accessed in ldbTimeout hours, it will be off loaded to reduce opened files and memory consumption.")
	v.concurrentUploadLimitMB = cmdVolume.Flag.Int("concurrentUploadLimitMB", 0, "limit total concurrent upload size, 0 means unlimited")
	v.concurrentDownloadLimitMB = cmdVolume.Flag.Int("concurrentDownloadLimitMB", 0, "limit total concurrent download size, 0 means unlimited")
	v.pprof = cmdVolume.Flag.Bool("pprof", false, "enable pprof http handlers. precludes -memprofile and -cpuprofile")
	v.metricsHttpPort = cmdVolume.Flag.Int("metricsPort", 0, "Prometheus metrics listen port")
	v.metricsHttpIp = cmdVolume.Flag.String("metricsIp", "", "metrics listen ip. If empty, default to same as -ip.bind option.")
	v.idxFolder = cmdVolume.Flag.String("dir.idx", "", "directory to store .idx files")
	v.inflightUploadDataTimeout = cmdVolume.Flag.Duration("inflightUploadDataTimeout", 60*time.Second, "inflight upload data wait timeout of volume servers")
	v.inflightDownloadDataTimeout = cmdVolume.Flag.Duration("inflightDownloadDataTimeout", 60*time.Second, "inflight download data wait timeout of volume servers")
	v.hasSlowRead = cmdVolume.Flag.Bool("hasSlowRead", true, "<experimental> if true, this prevents slow reads from blocking other requests, but large file read P99 latency will increase.")
	v.readBufferSizeMB = cmdVolume.Flag.Int("readBufferSizeMB", 4, "<experimental> larger values can optimize query performance but will increase memory usage. Use together with hasSlowRead normally.")
	v.debug = cmdVolume.Flag.Bool("debug", false, "serves runtime profiling data via pprof on the port specified by -debug.port")
	v.debugPort = cmdVolume.Flag.Int("debug.port", 6060, "http port for debugging")
}
var cmdVolume = &Command{
	UsageLine: "volume -port=8080 -dir=/tmp -max=5 -ip=server_name -master=localhost:9333",
	Short:     "start a volume server",
	Long: `start a volume server to provide storage spaces

`,
}

var (
	volumeFolders         = cmdVolume.Flag.String("dir", os.TempDir(), "directories to store data files. dir[,dir]...")
	maxVolumeCounts       = cmdVolume.Flag.String("max", "8", "maximum numbers of volumes, count[,count]... If set to zero, the limit will be auto configured as free disk space divided by volume size.")
	volumeWhiteListOption = cmdVolume.Flag.String("whiteList", "", "comma separated Ip addresses having write permission. No limit if empty.")
	minFreeSpacePercent   = cmdVolume.Flag.String("minFreeSpacePercent", "1", "minimum free disk space (default to 1%). Low disk space will mark all volumes as ReadOnly (deprecated, use minFreeSpace instead).")
	minFreeSpace          = cmdVolume.Flag.String("minFreeSpace", "", "min free disk space (value<=100 as percentage like 1, other as human readable bytes, like 10GiB). Low disk space will mark all volumes as ReadOnly.")
)
func runVolume(cmd *Command, args []string) bool {
	if *v.debug {
		grace.StartDebugServer(*v.debugPort)
	}

	util.LoadSecurityConfiguration()

	// If --pprof is set we assume the caller wants to be able to collect
	// cpu and memory profiles via go tool pprof
	if !*v.pprof {
		grace.SetupProfiling(*v.cpuProfile, *v.memProfile)
	}

	switch {
	case *v.metricsHttpIp != "":
		// nothing to do, use v.metricsHttpIp as is
	case *v.bindIp != "":
		*v.metricsHttpIp = *v.bindIp
	case *v.ip != "":
		*v.metricsHttpIp = *v.ip
	}
	go stats_collect.StartMetricsServer(*v.metricsHttpIp, *v.metricsHttpPort)

	// Backward compatibility: if -mserver is provided, use it
	if *v.mserverString != "" {
		*v.mastersString = *v.mserverString
	}

	minFreeSpaces := util.MustParseMinFreeSpace(*minFreeSpace, *minFreeSpacePercent)
	v.masters = pb.ServerAddresses(*v.mastersString).ToAddresses()
	v.startVolumeServer(*volumeFolders, *maxVolumeCounts, *volumeWhiteListOption, minFreeSpaces)

	return true
}
func (v VolumeServerOptions) startVolumeServer(volumeFolders, maxVolumeCounts, volumeWhiteListOption string, minFreeSpaces []util.MinFreeSpace) {

	// Set multiple folders and each folder's max volume count limit
	v.folders = strings.Split(volumeFolders, ",")
	for _, folder := range v.folders {
		if err := util.TestFolderWritable(util.ResolvePath(folder)); err != nil {
			glog.Fatalf("Check Data Folder(-dir) Writable %s : %s", folder, err)
		}
	}

	// set max
	maxCountStrings := strings.Split(maxVolumeCounts, ",")
	for _, maxString := range maxCountStrings {
		if max, e := strconv.ParseInt(maxString, 10, 64); e == nil {
			v.folderMaxLimits = append(v.folderMaxLimits, int32(max))
		} else {
			glog.Fatalf("The max specified in -max is not a valid number: %s", maxString)
		}
	}
	if len(v.folderMaxLimits) == 1 && len(v.folders) > 1 {
		for i := 0; i < len(v.folders)-1; i++ {
			v.folderMaxLimits = append(v.folderMaxLimits, v.folderMaxLimits[0])
		}
	}
	if len(v.folders) != len(v.folderMaxLimits) {
		glog.Fatalf("%d directories by -dir, but only %d max is set by -max", len(v.folders), len(v.folderMaxLimits))
	}

	if len(minFreeSpaces) == 1 && len(v.folders) > 1 {
		for i := 0; i < len(v.folders)-1; i++ {
			minFreeSpaces = append(minFreeSpaces, minFreeSpaces[0])
		}
	}
	if len(v.folders) != len(minFreeSpaces) {
		glog.Fatalf("%d directories by -dir, but only %d minFreeSpacePercent is set by -minFreeSpacePercent", len(v.folders), len(minFreeSpaces))
	}

	// set disk types
	var diskTypes []types.DiskType
	diskTypeStrings := strings.Split(*v.diskType, ",")
	for _, diskTypeString := range diskTypeStrings {
		diskTypes = append(diskTypes, types.ToDiskType(diskTypeString))
	}
	if len(diskTypes) == 1 && len(v.folders) > 1 {
		for i := 0; i < len(v.folders)-1; i++ {
			diskTypes = append(diskTypes, diskTypes[0])
		}
	}
	if len(v.folders) != len(diskTypes) {
		glog.Fatalf("%d directories by -dir, but only %d disk types are set by -disk", len(v.folders), len(diskTypes))
	}

	var tagsArg string
	if v.tags != nil {
		tagsArg = *v.tags
	}
	folderTags := parseVolumeTags(tagsArg, len(v.folders))

	// security related white list configuration
	v.whiteList = util.StringSplit(volumeWhiteListOption, ",")

	if *v.ip == "" {
		*v.ip = util.DetectedHostAddress()
		glog.V(0).Infof("detected volume server ip address: %v", *v.ip)
	}
	if *v.bindIp == "" {
		*v.bindIp = *v.ip
	}

	if *v.publicPort == 0 {
		*v.publicPort = *v.port
	}
	if *v.portGrpc == 0 {
		*v.portGrpc = 10000 + *v.port
	}
	if *v.publicUrl == "" {
		*v.publicUrl = util.JoinHostPort(*v.ip, *v.publicPort)
	}

	volumeMux := http.NewServeMux()
	publicVolumeMux := volumeMux
	if v.isSeparatedPublicPort() {
		publicVolumeMux = http.NewServeMux()
	}

	if *v.pprof {
		volumeMux.HandleFunc("/debug/pprof/", httppprof.Index)
		volumeMux.HandleFunc("/debug/pprof/cmdline", httppprof.Cmdline)
		volumeMux.HandleFunc("/debug/pprof/profile", httppprof.Profile)
		volumeMux.HandleFunc("/debug/pprof/symbol", httppprof.Symbol)
		volumeMux.HandleFunc("/debug/pprof/trace", httppprof.Trace)
	}

	volumeNeedleMapKind := storage.NeedleMapInMemory
	switch *v.indexType {
	case "leveldb":
		volumeNeedleMapKind = storage.NeedleMapLevelDb
	case "leveldbMedium":
		volumeNeedleMapKind = storage.NeedleMapLevelDbMedium
	case "leveldbLarge":
		volumeNeedleMapKind = storage.NeedleMapLevelDbLarge
	}

	// Determine volume server ID: if not specified, use ip:port
	volumeServerId := util.GetVolumeServerId(*v.id, *v.ip, *v.port)

	volumeServer := weed_server.NewVolumeServer(volumeMux, publicVolumeMux,
		*v.ip, *v.port, *v.portGrpc, *v.publicUrl, volumeServerId,
		v.folders, v.folderMaxLimits, minFreeSpaces, diskTypes, folderTags,
		*v.idxFolder,
		volumeNeedleMapKind,
		v.masters, constants.VolumePulsePeriod, *v.dataCenter, *v.rack,
		v.whiteList,
		*v.fixJpgOrientation, *v.readMode,
		*v.compactionMBPerSecond,
		*v.maintenanceMBPerSecond,
		*v.fileSizeLimitMB,
		int64(*v.concurrentUploadLimitMB)*1024*1024,
		int64(*v.concurrentDownloadLimitMB)*1024*1024,
		*v.inflightUploadDataTimeout,
		*v.inflightDownloadDataTimeout,
		*v.hasSlowRead,
		*v.readBufferSizeMB,
		*v.ldbTimeout,
	)

	// starting grpc server
	grpcS := v.startGrpcService(volumeServer)

	// starting public http server
	var publicHttpDown httpdown.Server
	if v.isSeparatedPublicPort() {
		publicHttpDown = v.startPublicHttpService(publicVolumeMux)
		if nil == publicHttpDown {
			glog.Fatalf("start public http service failed")
		}
	}

	// starting the cluster http server
	clusterHttpServer := v.startClusterHttpService(volumeMux)

	grace.OnReload(volumeServer.LoadNewVolumes)
	grace.OnReload(volumeServer.Reload)

	stopChan := make(chan bool)
	grace.OnInterrupt(func() {
		fmt.Println("volume server has been killed")

		// Stop heartbeats
		if !volumeServer.StopHeartbeat() {
			volumeServer.SetStopping()
			glog.V(0).Infof("stop send heartbeat and wait %d seconds until shutdown ...", *v.preStopSeconds)
			time.Sleep(time.Duration(*v.preStopSeconds) * time.Second)
		}

		shutdown(publicHttpDown, clusterHttpServer, grpcS, volumeServer)
		stopChan <- true
	})

	ctx := MiniClusterCtx
	if ctx != nil {
		select {
		case <-stopChan:
		case <-ctx.Done():
			shutdown(publicHttpDown, clusterHttpServer, grpcS, volumeServer)
		}
	} else {
		<-stopChan
	}

}
// parseVolumeTags converts the -tags argument into per-folder tag lists.
// A single tag group is replicated to every folder; with multiple groups,
// each folder takes its matching entry and any remaining folders get an
// empty (non-nil) slice. For example, "fast:ssd,archive" with two folders
// yields [][]string{{"fast", "ssd"}, {"archive"}}.
func parseVolumeTags(tagsArg string, folderCount int) [][]string {
	if folderCount <= 0 {
		return nil
	}
	tagEntries := []string{}
	if strings.TrimSpace(tagsArg) != "" {
		tagEntries = strings.Split(tagsArg, ",")
	}
	folderTags := make([][]string, folderCount)

	// If exactly one tag entry is provided, replicate it to all folders
	if len(tagEntries) == 1 {
		normalized := util.NormalizeTagList(strings.Split(tagEntries[0], ":"))
		for i := 0; i < folderCount; i++ {
			folderTags[i] = append([]string(nil), normalized...)
		}
	} else {
		// Otherwise, assign tags to folders that have explicit entries
		for i := 0; i < folderCount; i++ {
			if i < len(tagEntries) {
				folderTags[i] = util.NormalizeTagList(strings.Split(tagEntries[i], ":"))
			} else {
				// Initialize remaining folders with an empty tag slice
				folderTags[i] = []string{}
			}
		}
	}
	return folderTags
}
func shutdown(publicHttpDown httpdown.Server, clusterHttpServer httpdown.Server, grpcS *grpc.Server, volumeServer *weed_server.VolumeServer) {

	// first, stop the public http service so no new user requests are accepted
	if nil != publicHttpDown {
		glog.V(0).Infof("stop public http server ... ")
		if err := publicHttpDown.Stop(); err != nil {
			glog.Warningf("stop the public http server failed, %v", err)
		}
	}

	glog.V(0).Infof("graceful stop cluster http server ... ")
	if err := clusterHttpServer.Stop(); err != nil {
		glog.Warningf("stop the cluster http server failed, %v", err)
	}

	glog.V(0).Infof("graceful stop gRPC ...")
	grpcS.GracefulStop()

	volumeServer.Shutdown()

	pprof.StopCPUProfile()

}
// isSeparatedPublicPort reports whether a separate public port is configured
func (v VolumeServerOptions) isSeparatedPublicPort() bool {
	return *v.publicPort != *v.port
}
func (v VolumeServerOptions) startGrpcService(vs volume_server_pb.VolumeServerServer) *grpc.Server {
	grpcPort := *v.portGrpc
	grpcL, err := util.NewListener(util.JoinHostPort(*v.bindIp, grpcPort), 0)
	if err != nil {
		glog.Fatalf("failed to listen on grpc port %d: %v", grpcPort, err)
	}
	grpcS := pb.NewGrpcServer(security.LoadServerTLS(util.GetViper(), "grpc.volume"))
	volume_server_pb.RegisterVolumeServerServer(grpcS, vs)
	reflection.Register(grpcS)
	go func() {
		if err := grpcS.Serve(grpcL); err != nil {
			glog.Fatalf("start gRPC service failed, %s", err)
		}
	}()
	return grpcS
}
func (v VolumeServerOptions) startPublicHttpService(handler http.Handler) httpdown.Server {
	publicListeningAddress := util.JoinHostPort(*v.bindIp, *v.publicPort)
	glog.V(0).Infoln("Start Seaweed volume server", version.Version(), "public at", publicListeningAddress)
	publicListener, e := util.NewListener(publicListeningAddress, time.Duration(*v.idleConnectionTimeout)*time.Second)
	if e != nil {
		glog.Fatalf("Volume server listener error: %v", e)
	}

	pubHttp := httpdown.HTTP{StopTimeout: 5 * time.Minute, KillTimeout: 5 * time.Minute}
	publicHttpDown := pubHttp.Serve(&http.Server{Handler: handler}, publicListener)
	go func() {
		if err := publicHttpDown.Wait(); err != nil {
			glog.Errorf("public http down wait failed, %v", err)
		}
	}()

	return publicHttpDown
}
func (v VolumeServerOptions) startClusterHttpService(handler http.Handler) httpdown.Server {
	var (
		certFile, keyFile string
	)
	if viper.GetString("https.volume.key") != "" {
		certFile = viper.GetString("https.volume.cert")
		keyFile = viper.GetString("https.volume.key")
	}

	listeningAddress := util.JoinHostPort(*v.bindIp, *v.port)
	glog.V(0).Infof("Start Seaweed volume server %s at %s", version.Version(), listeningAddress)
	listener, e := util.NewListener(listeningAddress, time.Duration(*v.idleConnectionTimeout)*time.Second)
	if e != nil {
		glog.Fatalf("Volume server listener error: %v", e)
	}

	httpDown := httpdown.HTTP{
		KillTimeout: time.Minute,
		StopTimeout: 30 * time.Second,
		CertFile:    certFile,
		KeyFile:     keyFile}
	httpS := &http.Server{Handler: handler}

	if viper.GetString("https.volume.ca") != "" {
		clientCertFile := viper.GetString("https.volume.ca")
		httpS.TLSConfig = security.LoadClientTLSHTTP(clientCertFile)
		security.FixTlsConfig(util.GetViper(), httpS.TLSConfig)
	}

	clusterHttpServer := httpDown.Serve(httpS, listener)
	go func() {
		if e := clusterHttpServer.Wait(); e != nil {
			glog.Fatalf("Volume server fail to serve: %v", e)
		}
	}()
	return clusterHttpServer
}