Beitrag von Holger Jakob, April 2023

ECS Access Control Settings do not prevent Object Deletion

When trying to implement a simple way to prevent the deletion of objects, we ran into the issue that the bucket ACL setting did not get reflected down to the object level.

ECS Access Control List

Dell ECS allows you to configure an Access Control List (ACL) on a bucket. The following Bucket ACL permissions are available:

  • Read: Allows user to list the objects in the bucket.
  • Read ACL: Allows user to read the bucket ACL.
  • Write: Allows user to create or update any object in the bucket.
  • Write ACL: Allows user to write the ACL for the bucket.
  • Execute: Sets the execute permission when accessed as a file system. This permission has no effect when the object is accessed using the ECS object protocols.
  • Full Control: Allows user to Read, Write, Read ACL, and Write ACL.
    NOTE: Non-owners can Read, Write, Read ACL, and Write ACL only if the corresponding permission has been granted; otherwise they can only list the objects.
  • Privileged Write: Allows user to perform writes to a bucket or object when the user does not have normal write permission. Required for CAS buckets.
  • Delete: Allows user to delete buckets and objects. Required for CAS buckets.
  • None: User has no privileges on the bucket.
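As a quick illustrative model (plain Python, not an ECS API), the bucket ACL can be thought of as a set of grants per user, with Full Control expanding into the four basic permissions:

```python
# Toy model of the bucket ACL grants listed above; the grant names
# are taken from the list, the helper itself is purely illustrative.
FULL_CONTROL = {"Read", "Write", "Read ACL", "Write ACL"}

def effective_permissions(grants):
    """Expand Full Control into its constituent permissions."""
    perms = set(grants)
    if "Full Control" in perms:
        perms |= FULL_CONTROL
    if "None" in perms:
        return set()  # "None" means no privileges on the bucket
    return perms

print(effective_permissions(["Full Control"]))
print(effective_permissions(["Read", "Delete"]))
```

Note that, as the rest of this post shows, the presence or absence of the Delete grant in this bucket-level set is not what ultimately decides whether an object can be deleted.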

Denying Deletion Scenario

Our scenario was an application that writes data to a bucket using S3. The application does not support the Object Lock functionality to set retention on the objects it writes. Still, the application should generally not be able to delete data, except when an administrator cleans up older data, e.g. at the beginning of the year the application removes data older than 15 years.
Our approach was to remove the Delete permission from the bucket ACL. The expectation was that ECS would then prevent the deletion of objects.
However, as you can guess by now, the deletion of objects was not prevented, and our test data was deleted from the bucket.

How deletions can be prevented

There are two ways to prevent the deletion of objects in such a case:

  • Use the S3 Identity and Access Management (IAM) functionality
  • Use a bucket policy

S3 Identity and Access Management

S3 IAM allows granular control over which S3 object calls a user may perform in a bucket. It is the best-practice approach and should really be implemented; we'll detail it in an upcoming blog post.

S3 Bucket Policy

Here we want to focus on preventing the deletion of objects in the case where the application is already integrated and objects were already stored using the traditional ECS user and bucket configuration.

ECS lets you configure a bucket policy in the ECS UI, which comes in handy for this case. The policy functionality is similar to what can be done with S3 IAM policies. Typical use cases are granting bucket permissions to all users or assigning permissions to newly created objects (such as public read).

In our example, we'll implement what we had expected from the bucket ACL: prevent the user from deleting any object in the bucket.

The following example policy denies the user nodelete the deletion of all objects in the bucket named bucketwithoutdeletepermission.

{
  "Version": "2012-10-17",
  "Id": "S3PreventObjectDelete",
  "Statement": [
    {
      "Sid": "Prevent Object Deletion",
      "Effect": "Deny",
      "Principal": [
        "nodelete"
      ],
      "Action": [
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource": [
        "bucketwihtoutdeletepermission/*"
      ]
    }
  ]
}
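To avoid typos such as a mismatched bucket name in the Resource field, the policy document can also be generated programmatically before pasting it into the UI. A minimal Python sketch (the bucket and user names are the examples from this post; the helper function is our own, not an ECS tool):

```python
import json

def make_deny_delete_policy(bucket: str, user: str) -> str:
    """Build a bucket policy that denies object deletion for one user."""
    policy = {
        "Version": "2012-10-17",  # fixed policy-language version, do not change
        "Id": "S3PreventObjectDelete",
        "Statement": [
            {
                "Sid": "Prevent Object Deletion",
                "Effect": "Deny",
                "Principal": [user],
                "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
                # object-level resource: every key in the bucket
                "Resource": [f"{bucket}/*"],
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(make_deny_delete_policy("bucketwithoutdeletepermission", "nodelete"))
```

Deriving the Resource entry from the bucket name guarantees the two can never drift apart.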

Edit the bucket policy, paste the JSON policy above into the policy editor window, and save the policy.
Side note: Do not change the Version string 😊
The Id can absolutely be different, but the Version must stay exactly as shown: even though it looks like a date, it identifies the policy language version.
Check out https://docs.aws.amazon.com for more information.
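The reason the Deny statement works even though the user otherwise has full access to the bucket is the standard policy evaluation rule: an explicit Deny always wins over any Allow. A toy illustration of that rule (plain Python, not ECS code, just the evaluation logic):

```python
def evaluate(statements, user, action):
    """Toy deny-overrides evaluation: an explicit Deny beats any Allow."""
    decision = "ImplicitDeny"  # default when no statement matches
    for s in statements:
        if user in s["Principal"] and action in s["Action"]:
            if s["Effect"] == "Deny":
                return "Deny"  # explicit deny wins immediately
            decision = "Allow"
    return decision

# Example: a user with broad Allow permissions plus the Deny from this post.
statements = [
    {"Effect": "Allow", "Principal": ["nodelete"],
     "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"]},
    {"Effect": "Deny", "Principal": ["nodelete"],
     "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"]},
]

print(evaluate(statements, "nodelete", "s3:PutObject"))     # Allow
print(evaluate(statements, "nodelete", "s3:DeleteObject"))  # Deny
```

This is why the policy above does not need to touch any of the user's other permissions: reads and writes keep working, only the two delete actions are blocked.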

Upcoming Blog Posts

At the time of writing this post, we plan to write up what we have learned implementing IAM for customers. The focus is on use cases where a namespace represents a customer with multiple applications, and access to buckets needs to be limited per application. This service-provider-focused question required some investigation and structure to implement a secure way to change each application's permissions independently.

When we came across the need to migrate from a disk-based ECS to an all-flash EXF900 environment, we struggled to get all the components of an ECSSync environment working that supports CAS-to-CAS migrations as well, not only S3-to-S3. The procedure and the required downloads will be described in a second upcoming blog post.

Looking for other ECS related information?

Do not hesitate to reach out to Backup ONE AG using the contact form on our website. We'll be happy to assist. Ideas for additional posts are welcome 😊.