seaweedFS/weed/s3api/object_lock_utils.go
Chris Lu f77e6ed2d4 fix: admin UI bucket delete now properly deletes collection and checks Object Lock (#7734)
* fix: admin UI bucket delete now properly deletes collection and checks Object Lock

Fixes #7711

The admin UI's DeleteS3Bucket function was missing two critical behaviors:

1. It did not delete the collection from the master (unlike s3.bucket.delete
   shell command), leaving orphaned volume data that caused fs.verify errors.

2. It did not check for Object Lock protections before deletion, potentially
   allowing deletion of buckets with locked objects.

Changes:
- Add shared Object Lock checking utilities to object_lock_utils.go:
  - EntryHasActiveLock: standalone function to check if an entry has an active lock
  - HasObjectsWithActiveLocks: shared function to scan bucket for locked objects
- Refactor S3 API entryHasActiveLock to use shared EntryHasActiveLock function
- Update admin UI DeleteS3Bucket to:
  - Check Object Lock using shared HasObjectsWithActiveLocks utility
  - Delete the collection before deleting filer entries (matching s3.bucket.delete)

* refactor: S3 API uses shared Object Lock utilities

Removes 114 lines of duplicated code from s3api_bucket_handlers.go by
having hasObjectsWithActiveLocks delegate to the shared HasObjectsWithActiveLocks
function in object_lock_utils.go.

Now both S3 API and Admin UI use the same shared utilities:
- EntryHasActiveLock
- HasObjectsWithActiveLocks
- recursivelyCheckLocksWithClient
- checkVersionsForLocksWithClient

* feat: s3.bucket.delete shell command now checks Object Lock

Add Object Lock protection to the s3.bucket.delete shell command.
If the bucket has Object Lock enabled and contains objects with active
retention or legal hold, deletion is prevented.

Also refactors Object Lock checking utilities into a new s3_objectlock
package to avoid import cycles between shell, s3api, and admin packages.

All three components now share the same logic:
- S3 API (DeleteBucketHandler)
- Admin UI (DeleteS3Bucket)
- Shell command (s3.bucket.delete)

* refactor: unified Object Lock checking and consistent deletion parameters

1. Add CheckBucketForLockedObjects() - a unified function that combines:
   - Bucket entry lookup
   - Object Lock enabled check
   - Scan for locked objects

2. All three components now use this single function:
   - S3 API (via s3api.CheckBucketForLockedObjects)
   - Admin UI (via s3api.CheckBucketForLockedObjects)
   - Shell command (via s3_objectlock.CheckBucketForLockedObjects)

3. Aligned deletion parameters across all components:
   - isDeleteData: false (collection already deleted separately)
   - isRecursive: true
   - ignoreRecursiveError: true
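The control flow of the unified check can be sketched as follows. This is a structural sketch only: the real `CheckBucketForLockedObjects` takes a filer client and paths, while here the three steps are injected as hypothetical closures so the lookup → enabled-check → scan ordering is visible.

```go
package main

import (
	"errors"
	"fmt"
)

// checkBucketForLockedObjects sketches the unified function's structure:
//  1. look up the bucket entry,
//  2. skip the scan entirely if Object Lock is not enabled,
//  3. otherwise scan for objects with active locks.
// The closure parameters are illustrative stand-ins, not the real signature.
func checkBucketForLockedObjects(
	lookupBucket func() (found bool, err error),
	lockEnabled func() bool,
	hasLockedObjects func() (bool, error),
) error {
	found, err := lookupBucket()
	if err != nil {
		return err
	}
	if !found {
		return errors.New("bucket not found")
	}
	if !lockEnabled() {
		return nil // nothing to protect, deletion may proceed
	}
	locked, err := hasLockedObjects()
	if err != nil {
		return err
	}
	if locked {
		return errors.New("bucket has objects with active retention or legal hold")
	}
	return nil
}

func main() {
	err := checkBucketForLockedObjects(
		func() (bool, error) { return true, nil },
		func() bool { return true },
		func() (bool, error) { return true, nil },
	)
	fmt.Println(err) // bucket has objects with active retention or legal hold
}
```

Because the enabled check comes before the scan, buckets without Object Lock pay no listing cost.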

* fix: properly handle non-EOF errors in Recv() loops

The Recv() loops in recursivelyCheckLocksWithClient and
checkVersionsForLocksWithClient were breaking on any error, which
could hide real stream errors and incorrectly report 'no locks found'.

Now:
- io.EOF: break loop (normal end of stream)
- any other error: return it so caller knows the stream failed

* fix: address PR review comments

1. Add path traversal protection - validate entry names before building
   subdirectory paths. Skip entries with empty names, '.', '..', or
   containing path separators.

2. Use an exact match for the .versions folder instead of HasSuffix() to
   avoid wrongly matching unrelated directories like 'foo.versions'.

3. Replace path.Join with simple string concatenation since we now
   validate entry names.
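The validation described in point 1 can be sketched as a small predicate; `isSafeEntryName` is a hypothetical helper name, but the rejected cases match the list above, so concatenating `dir + "/" + name` can never escape `dir`.

```go
package main

import (
	"fmt"
	"strings"
)

// isSafeEntryName rejects entry names that could be used for path
// traversal when building subdirectory paths by concatenation:
// empty names, ".", "..", and anything containing a path separator.
func isSafeEntryName(name string) bool {
	if name == "" || name == "." || name == ".." {
		return false
	}
	if strings.ContainsAny(name, `/\`) {
		return false
	}
	return true
}

func main() {
	for _, name := range []string{"photo.jpg", ".versions", "..", "a/b", ""} {
		fmt.Printf("%q -> %v\n", name, isSafeEntryName(name))
	}
}
```

Note that `.versions` itself passes this predicate; point 2 handles it separately with an exact string comparison rather than `HasSuffix`.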

* refactor: extract paginateEntries helper to reduce duplication

The recursivelyCheckLocksWithClient and checkVersionsForLocksWithClient
functions shared significant structural similarity. Extracted a generic
paginateEntries helper that:
- Handles pagination logic (lastFileName tracking, Limit)
- Handles stream receiving with proper EOF vs error handling
- Validates entry names (path traversal protection)
- Calls a processEntry callback for business logic

This centralizes pagination logic and makes the code more maintainable.

* feat: add context propagation for timeout and cancellation support

All Object Lock checking functions now accept context.Context parameter:
- paginateEntries(ctx, client, dir, processEntry)
- recursivelyCheckLocksWithClient(ctx, client, dir, hasLocks, currentTime)
- checkVersionsForLocksWithClient(ctx, client, versionsDir, hasLocks, currentTime)
- HasObjectsWithActiveLocks(ctx, client, bucketPath)
- CheckBucketForLockedObjects(ctx, client, bucketsPath, bucketName)

This enables:
- Timeout support for large bucket scans
- Cancellation propagation from HTTP requests
- The S3 API handler now uses r.Context() for proper request lifecycle

* fix: address PR review comments

1. Add DefaultBucketsPath constant in admin_server.go instead of
   hardcoding "/buckets" in multiple places.

2. Add defensive normalization in EntryHasActiveLock:
   - TrimSpace to handle whitespace around values
   - ToUpper for case-insensitive comparison of legal hold and
     retention mode values
   - TrimSpace on retention date before parsing

* fix: use ctx variable consistently instead of context.Background()

In both DeleteS3Bucket and command_s3_bucket_delete, use the ctx
variable defined at the start of the function for all gRPC calls
instead of creating new context.Background() instances.
2025-12-13 13:41:25 -08:00


package s3api

import (
	"context"
	"encoding/xml"
	"fmt"
	"strconv"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
	"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
	"github.com/seaweedfs/seaweedfs/weed/s3api/s3_objectlock"
)
// ObjectLockUtils provides shared utilities for Object Lock configuration
// These functions are used by both Admin UI and S3 API handlers to ensure consistency

// VersioningUtils provides shared utilities for bucket versioning configuration
// These functions ensure Admin UI and S3 API use the same versioning keys

// StoreVersioningInExtended stores versioning configuration in entry extended attributes
func StoreVersioningInExtended(entry *filer_pb.Entry, enabled bool) error {
	if entry.Extended == nil {
		entry.Extended = make(map[string][]byte)
	}
	if enabled {
		entry.Extended[s3_constants.ExtVersioningKey] = []byte(s3_constants.VersioningEnabled)
	} else {
		entry.Extended[s3_constants.ExtVersioningKey] = []byte(s3_constants.VersioningSuspended)
	}
	return nil
}
// LoadVersioningFromExtended loads versioning configuration from entry extended attributes
func LoadVersioningFromExtended(entry *filer_pb.Entry) (bool, bool) {
	if entry == nil || entry.Extended == nil {
		return false, false // not found, default to suspended
	}
	// Check for S3 API compatible key
	if versioningBytes, exists := entry.Extended[s3_constants.ExtVersioningKey]; exists {
		enabled := string(versioningBytes) == s3_constants.VersioningEnabled
		return enabled, true
	}
	return false, false // not found
}
// CreateObjectLockConfiguration creates a new ObjectLockConfiguration with the specified parameters
func CreateObjectLockConfiguration(enabled bool, mode string, days int, years int) *ObjectLockConfiguration {
	if !enabled {
		return nil
	}
	config := &ObjectLockConfiguration{
		ObjectLockEnabled: s3_constants.ObjectLockEnabled,
	}
	// Add default retention rule if mode and period are specified
	if mode != "" && (days > 0 || years > 0) {
		config.Rule = &ObjectLockRule{
			DefaultRetention: &DefaultRetention{
				Mode:     mode,
				Days:     days,
				Years:    years,
				DaysSet:  days > 0,
				YearsSet: years > 0,
			},
		}
	}
	return config
}
// ObjectLockConfigurationToXML converts ObjectLockConfiguration to XML bytes
func ObjectLockConfigurationToXML(config *ObjectLockConfiguration) ([]byte, error) {
	if config == nil {
		return nil, fmt.Errorf("object lock configuration is nil")
	}
	return xml.Marshal(config)
}
// StoreObjectLockConfigurationInExtended stores Object Lock configuration in entry extended attributes
func StoreObjectLockConfigurationInExtended(entry *filer_pb.Entry, config *ObjectLockConfiguration) error {
	if entry.Extended == nil {
		entry.Extended = make(map[string][]byte)
	}
	if config == nil {
		// Remove Object Lock configuration
		delete(entry.Extended, s3_constants.ExtObjectLockEnabledKey)
		delete(entry.Extended, s3_constants.ExtObjectLockDefaultModeKey)
		delete(entry.Extended, s3_constants.ExtObjectLockDefaultDaysKey)
		delete(entry.Extended, s3_constants.ExtObjectLockDefaultYearsKey)
		return nil
	}
	// Store the enabled flag
	entry.Extended[s3_constants.ExtObjectLockEnabledKey] = []byte(config.ObjectLockEnabled)
	// Store default retention configuration if present
	if config.Rule != nil && config.Rule.DefaultRetention != nil {
		defaultRetention := config.Rule.DefaultRetention
		// Store mode
		if defaultRetention.Mode != "" {
			entry.Extended[s3_constants.ExtObjectLockDefaultModeKey] = []byte(defaultRetention.Mode)
		}
		// Store days
		if defaultRetention.DaysSet && defaultRetention.Days > 0 {
			entry.Extended[s3_constants.ExtObjectLockDefaultDaysKey] = []byte(strconv.Itoa(defaultRetention.Days))
		}
		// Store years
		if defaultRetention.YearsSet && defaultRetention.Years > 0 {
			entry.Extended[s3_constants.ExtObjectLockDefaultYearsKey] = []byte(strconv.Itoa(defaultRetention.Years))
		}
	} else {
		// Remove default retention if not present
		delete(entry.Extended, s3_constants.ExtObjectLockDefaultModeKey)
		delete(entry.Extended, s3_constants.ExtObjectLockDefaultDaysKey)
		delete(entry.Extended, s3_constants.ExtObjectLockDefaultYearsKey)
	}
	return nil
}
// LoadObjectLockConfigurationFromExtended loads Object Lock configuration from entry extended attributes
func LoadObjectLockConfigurationFromExtended(entry *filer_pb.Entry) (*ObjectLockConfiguration, bool) {
	if entry == nil || entry.Extended == nil {
		return nil, false
	}
	// Check if Object Lock is enabled
	enabledBytes, exists := entry.Extended[s3_constants.ExtObjectLockEnabledKey]
	if !exists {
		return nil, false
	}
	enabled := string(enabledBytes)
	if enabled != s3_constants.ObjectLockEnabled && enabled != "true" {
		return nil, false
	}
	// Create basic configuration
	config := &ObjectLockConfiguration{
		ObjectLockEnabled: s3_constants.ObjectLockEnabled,
	}
	// Load default retention configuration if present
	if modeBytes, exists := entry.Extended[s3_constants.ExtObjectLockDefaultModeKey]; exists {
		mode := string(modeBytes)
		// Parse days and years
		var days, years int
		if daysBytes, exists := entry.Extended[s3_constants.ExtObjectLockDefaultDaysKey]; exists {
			if parsed, err := strconv.Atoi(string(daysBytes)); err == nil {
				days = parsed
			}
		}
		if yearsBytes, exists := entry.Extended[s3_constants.ExtObjectLockDefaultYearsKey]; exists {
			if parsed, err := strconv.Atoi(string(yearsBytes)); err == nil {
				years = parsed
			}
		}
		// Create rule if we have a mode and at least days or years
		if mode != "" && (days > 0 || years > 0) {
			config.Rule = &ObjectLockRule{
				DefaultRetention: &DefaultRetention{
					Mode:     mode,
					Days:     days,
					Years:    years,
					DaysSet:  days > 0,
					YearsSet: years > 0,
				},
			}
		}
	}
	return config, true
}
// ExtractObjectLockInfoFromConfig extracts basic Object Lock information from configuration
// Returns: enabled, mode, duration (for UI display)
func ExtractObjectLockInfoFromConfig(config *ObjectLockConfiguration) (bool, string, int32) {
	if config == nil || config.ObjectLockEnabled != s3_constants.ObjectLockEnabled {
		return false, "", 0
	}
	if config.Rule == nil || config.Rule.DefaultRetention == nil {
		return true, "", 0
	}
	defaultRetention := config.Rule.DefaultRetention
	// Convert years to days for consistent representation
	days := 0
	if defaultRetention.DaysSet {
		days = defaultRetention.Days
	}
	if defaultRetention.YearsSet && defaultRetention.Years > 0 {
		days += defaultRetention.Years * 365
	}
	return true, defaultRetention.Mode, int32(days)
}
// CreateObjectLockConfigurationFromParams creates ObjectLockConfiguration from individual parameters
// This is a convenience function for Admin UI usage
func CreateObjectLockConfigurationFromParams(enabled bool, mode string, duration int32) *ObjectLockConfiguration {
	if !enabled {
		return nil
	}
	return CreateObjectLockConfiguration(enabled, mode, int(duration), 0)
}
// ValidateObjectLockParameters validates Object Lock parameters before creating configuration
func ValidateObjectLockParameters(enabled bool, mode string, duration int32) error {
	if !enabled {
		return nil
	}
	if mode != s3_constants.RetentionModeGovernance && mode != s3_constants.RetentionModeCompliance {
		return ErrInvalidObjectLockMode
	}
	if duration <= 0 {
		return ErrInvalidObjectLockDuration
	}
	if duration > MaxRetentionDays {
		return ErrObjectLockDurationExceeded
	}
	return nil
}
// ====================================================================
// OBJECT LOCK VALIDATION FUNCTIONS
// ====================================================================
// These validation functions provide comprehensive validation for
// all Object Lock related configurations and requests.

// ValidateRetention validates retention configuration for object-level retention
func ValidateRetention(retention *ObjectRetention) error {
	// Check if mode is specified
	if retention.Mode == "" {
		return ErrRetentionMissingMode
	}
	// Check if retain until date is specified
	if retention.RetainUntilDate == nil {
		return ErrRetentionMissingRetainUntilDate
	}
	// Check if mode is valid
	if retention.Mode != s3_constants.RetentionModeGovernance && retention.Mode != s3_constants.RetentionModeCompliance {
		return ErrInvalidRetentionModeValue
	}
	// Check if retain until date is in the future
	if retention.RetainUntilDate.Before(time.Now()) {
		return ErrRetentionDateMustBeFuture
	}
	return nil
}
// ValidateLegalHold validates legal hold configuration
func ValidateLegalHold(legalHold *ObjectLegalHold) error {
	// Check if status is valid
	if legalHold.Status != s3_constants.LegalHoldOn && legalHold.Status != s3_constants.LegalHoldOff {
		return ErrInvalidLegalHoldStatus
	}
	return nil
}
// ValidateObjectLockConfiguration validates object lock configuration at bucket level
func ValidateObjectLockConfiguration(config *ObjectLockConfiguration) error {
	// ObjectLockEnabled is required for bucket-level configuration
	if config.ObjectLockEnabled == "" {
		return ErrObjectLockConfigurationMissingEnabled
	}
	// Validate ObjectLockEnabled value
	if config.ObjectLockEnabled != s3_constants.ObjectLockEnabled {
		// ObjectLockEnabled can only be 'Enabled', any other value (including 'Disabled') is malformed XML
		return ErrInvalidObjectLockEnabledValue
	}
	// Validate Rule if present
	if config.Rule != nil {
		if config.Rule.DefaultRetention == nil {
			return ErrRuleMissingDefaultRetention
		}
		return validateDefaultRetention(config.Rule.DefaultRetention)
	}
	return nil
}
// validateDefaultRetention validates default retention configuration for bucket-level settings
func validateDefaultRetention(retention *DefaultRetention) error {
	glog.V(2).Infof("validateDefaultRetention: Mode=%s, Days=%d (set=%v), Years=%d (set=%v)",
		retention.Mode, retention.Days, retention.DaysSet, retention.Years, retention.YearsSet)
	// Mode is required
	if retention.Mode == "" {
		return ErrDefaultRetentionMissingMode
	}
	// Mode must be valid
	if retention.Mode != s3_constants.RetentionModeGovernance && retention.Mode != s3_constants.RetentionModeCompliance {
		return ErrInvalidDefaultRetentionMode
	}
	// Check for invalid Years value (negative values are always invalid)
	if retention.YearsSet && retention.Years < 0 {
		return ErrInvalidRetentionPeriod
	}
	// Check for invalid Days value (negative values are invalid)
	if retention.DaysSet && retention.Days < 0 {
		return ErrInvalidRetentionPeriod
	}
	// Check for invalid Days value (zero is invalid when explicitly provided)
	if retention.DaysSet && retention.Days == 0 {
		return ErrInvalidRetentionPeriod
	}
	// Check for neither Days nor Years being specified
	if !retention.DaysSet && !retention.YearsSet {
		return ErrDefaultRetentionMissingPeriod
	}
	// Check for both Days and Years being specified
	if retention.DaysSet && retention.YearsSet {
		return ErrDefaultRetentionBothDaysAndYears
	}
	// Validate Days if specified
	if retention.DaysSet && retention.Days > 0 {
		if retention.Days > MaxRetentionDays {
			return ErrDefaultRetentionDaysOutOfRange
		}
	}
	// Validate Years if specified
	if retention.YearsSet && retention.Years > 0 {
		if retention.Years > MaxRetentionYears {
			return ErrDefaultRetentionYearsOutOfRange
		}
	}
	return nil
}
// ====================================================================
// SHARED OBJECT LOCK CHECKING FUNCTIONS
// ====================================================================
// These functions delegate to s3_objectlock package to avoid code duplication.
// They are kept here for backward compatibility with existing callers.

// EntryHasActiveLock checks if an entry has an active retention or legal hold
// Delegates to s3_objectlock.EntryHasActiveLock
func EntryHasActiveLock(entry *filer_pb.Entry, currentTime time.Time) bool {
	return s3_objectlock.EntryHasActiveLock(entry, currentTime)
}

// HasObjectsWithActiveLocks checks if any objects in the bucket have active retention or legal hold
// Delegates to s3_objectlock.HasObjectsWithActiveLocks
func HasObjectsWithActiveLocks(ctx context.Context, client filer_pb.SeaweedFilerClient, bucketPath string) (bool, error) {
	return s3_objectlock.HasObjectsWithActiveLocks(ctx, client, bucketPath)
}

// CheckBucketForLockedObjects is a unified function that checks if a bucket has Object Lock enabled
// and if so, scans for objects with active locks.
// Delegates to s3_objectlock.CheckBucketForLockedObjects
func CheckBucketForLockedObjects(ctx context.Context, client filer_pb.SeaweedFilerClient, bucketsPath, bucketName string) error {
	return s3_objectlock.CheckBucketForLockedObjects(ctx, client, bucketsPath, bucketName)
}