AWS Storage Blog
Using job tags to manage permissions for Amazon S3 Batch Operations jobs
As organizations grow their use of AWS, they often find that a variety of teams and applications begin to use the data stored in Amazon S3. While customers love the agility benefits of this, they also seek to govern their data’s security and cost while keeping teams productive.
Earlier this year we announced support for job tags with Amazon S3 Batch Operations. In this post, we demonstrate how an organization can use S3 Batch Operations job tags to grant employees permission to create and edit only their departments’ jobs. The organization could also choose to assign job permissions based on the stage of development a job corresponds to, such as quality assurance (QA) or Production.
Having the level of granular control over S3 Batch Operations jobs that we demonstrate in this post is especially valuable in shared AWS accounts and data lakes, where multiple teams and departments may be accessing similar or overlapping datasets. Using job tags and the policies outlined in this blog post augments your existing IT governance, improving your understanding of which teams or applications are driving costs and adding a new means to audit data access. It extends the controls and granular permissions you’ve configured in other AWS services and gives you the ability to set the precedence of different types of S3 Batch Operations jobs, ensuring that the most critical work proceeds with the highest priority. We start by requiring tags on job creation and enforcing the set of allowable tag values, then cover examples of restricting edit permissions based on a job’s tags, and finish by limiting the priority users can assign to jobs at each stage of development.
S3 Batch Operations and job tags
S3 Batch Operations is a feature that makes it easy to perform repetitive or bulk actions like copying, adding S3 Object Lock retention dates, or updating tag sets across millions of objects with a single request. All you provide is the list of objects, and S3 Batch Operations handles the repetitive work, including managing retries and displaying progress. You can learn more about how customers are using S3 Batch Operations by reading this blog post from Autodesk or hearing from Capital One, Teespring, and ePlus here.
Job tags provide a new way for you to control access to your jobs and enable you to require that users apply specific tags when creating jobs. You can apply up to 50 job tags to each S3 Batch Operations job. This enables you to set granular policies restricting the set of users that can edit the job. Specifically, the presence of job tags can grant or limit a user’s ability to cancel a job, activate a job in the confirmation state, or change a job’s priority level.
As an administrator, you can also enforce the application of tags for all new jobs and specify the allowed key-value pairs for the tags. You can express all of these conditions using the same IAM policy language that you use with tags across AWS, making it easy to get started.
Prerequisites
To follow along with the process outlined in this post, all you need is an AWS account. This post is relevant whether you have petabytes of existing storage or are planning a migration to the cloud. You might also find much of the existing S3 Batch Operations documentation useful, including documentation on S3 Batch Operations basics, operations, and managing S3 Batch Operations jobs.
Getting started: Deciding how you want to use job tags
Each organization is different, so you will want to customize the following details to fit your specific use of Amazon S3. For our sample organization, we are setting out to divide S3 Batch Operations create and edit permissions based on the department and the stage of development. The departments we highlight are Finance, Compliance, Business Intelligence, and Engineering, with each using S3 Batch Operations in different ways. In this company, administrators use attribute-based access control (ABAC), an IAM authorization strategy that defines permissions by attaching tags to both IAM users and AWS resources.
Since the company has four departments, we assigned users and jobs one of the following department tags:
Tag key: Tag value
- department: Finance
- department: Compliance
- department: BusinessIntelligence
- department: Engineering
Please note that these job tag keys and values are case-sensitive.
Using the ABAC access control strategy, an administrator could grant a user in their company’s Finance department permission to create and manage S3 Batch Operations jobs within their department. All the administrator must do is associate the tag department=Finance with their IAM user.
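If you prefer to script that step rather than use the console, a minimal boto3 sketch might look like the following; the user name finance-analyst is hypothetical, and the caller needs iam:TagUser permission:

import boto3

iam = boto3.client("iam")

# Associate the department tag with the IAM user.
# "finance-analyst" is a hypothetical user name used for illustration.
iam.tag_user(
    UserName="finance-analyst",
    Tags=[{"Key": "department", "Value": "Finance"}],
)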
The administrator could also create and attach a managed policy to the IAM user. Because the policy references the principal’s department tag rather than hardcoding a department, the same policy enables any user in the company to create or modify S3 Batch Operations jobs within their respective department:
The following departmental-bops-user managed policy is composed of three separate statements. We examine each statement in turn in the sections that follow.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:CreateJob", "Resource": "*", "Condition": { "StringEquals": { "aws:RequestTag/department": "${aws:PrincipalTag/department}" } } }, { "Effect": "Allow", "Action": [ "s3:UpdateJobPriority", "s3:UpdateJobStatus" ], "Resource": "*", "Condition": { "StringEquals": { "aws:ResourceTag/department": "${aws:PrincipalTag/department}" } } }, { "Effect": "Allow", "Action": "s3:PutJobTagging", "Resource": "*", "Condition": { "StringEquals": { "aws:RequestTag/department": "${aws:PrincipalTag/department}", "aws:ResourceTag/department": "${aws:PrincipalTag/department}" } } } ] }
Requiring tags on job creation
The first policy statement allows the user to create an S3 Batch Operations job if the job creation request includes a job tag that matches their respective department. This is expressed using the "${aws:PrincipalTag/department}" syntax, which is replaced by the IAM user’s department tag at policy evaluation time. The condition is satisfied when the value provided for the department tag in the request ("aws:RequestTag/department") matches the user’s department:
{ "Effect": "Allow", "Action": "s3:CreateJob", "Resource": "*", "Condition": { "StringEquals": { "aws:RequestTag/department": "${aws:PrincipalTag/department}" } } }
Controlling job edit permissions based on tags
The second statement in the policy enables users to change the priority of jobs or update a job’s status. The permission is granted if the job the user is updating matches the user’s department:
{ "Effect": "Allow", "Action": [ "s3:UpdateJobPriority", "s3:UpdateJobStatus" ], "Resource": "*", "Condition": { "StringEquals": { "aws:ResourceTag/department": "${aws:PrincipalTag/department}" } } }
Finally, users can update a job’s tags at any time via a PutJobTagging request. Given that, it is important that these updates prevent users from changing the job’s department tag, while still allowing users the flexibility to label jobs with other tags that make sense for their use case. To achieve this, the third statement allows users to update a job’s tags as long as their department tag is preserved and the job they’re updating is within their department:
{ "Effect": "Allow", "Action": "s3:PutJobTagging", "Resource": "*", "Condition": { "StringEquals": { "aws:RequestTag/department": "${aws:PrincipalTag/department}", "aws:ResourceTag/department": "${aws:PrincipalTag/department}" } } }
Tagging jobs by stage and enforcing limits on job priority
All S3 Batch Operations jobs have a numeric priority, which S3 Batch Operations uses to prioritize job execution. For this example, administrators want to prevent lower priority QA stage jobs from disrupting high priority Production stage jobs. To meet this goal, administrators want to restrict the maximum priority most users can assign to jobs, with higher priority ranges reserved for a limited set of privileged users:
- QA stage priority range (low): 1–100
- Production stage priority range (high): 1–300
To do this, we introduce a new tag representing the stage of the job:
Tag key: Tag value
- stage: QA
- stage: Production
Creating and updating low-priority jobs
This policy introduces two new restrictions on job creation and update, in addition to the department-based restriction:
- It allows users to create or update jobs in their department with a new condition that requires the job to include the tag stage=QA.
- It allows users to set a job’s priority, at creation or via an update, up to a new maximum of 100.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:CreateJob", "Resource": "*", "Condition": { "StringEquals": { "aws:RequestTag/department": "${aws:PrincipalTag/department}", "aws:RequestTag/stage": "QA" }, "NumericLessThanEquals": { "s3:RequestJobPriority": 100 } } }, { "Effect": "Allow", "Action": [ "s3:UpdateJobStatus" ], "Resource": "*", "Condition": { "StringEquals": { "aws:ResourceTag/department": "${aws:PrincipalTag/department}" } } }, { "Effect": "Allow", "Action": "s3:UpdateJobPriority", "Resource": "*", "Condition": { "StringEquals": { "aws:ResourceTag/department": "${aws:PrincipalTag/department}", "aws:ResourceTag/stage": "QA" }, "NumericLessThanEquals": { "s3:RequestJobPriority": 100 } } }, { "Effect": "Allow", "Action": "s3:PutJobTagging", "Resource": "*", "Condition": { "StringEquals": { "aws:RequestTag/department" : "${aws:PrincipalTag/department}", "aws:ResourceTag/department": "${aws:PrincipalTag/department}", "aws:RequestTag/stage": "QA", "aws:ResourceTag/stage": "QA" } } }, { "Effect": "Allow", "Action": "s3:GetJobTagging", "Resource": "*" } ] }
Creating and updating high-priority jobs
A small number of users within the company require the ability to create high-priority jobs in either QA or Production. To support this need, we create a managed policy that is adapted from the low-priority departmental policy in the previous section. The new policy:
- Allows users to create or update jobs in their department with either the tag stage=QA or stage=Production.
- Allows users to set a job’s priority, at creation or via an update, up to a higher maximum of 300.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:CreateJob", "Resource": "*", "Condition": { "ForAnyValue:StringEquals" : { "aws:RequestTag/stage": [ "QA", "Production" ] }, "StringEquals": { "aws:RequestTag/department": "${aws:PrincipalTag/department}" }, "NumericLessThanEquals": { "s3:RequestJobPriority": 300 } } }, { "Effect": "Allow", "Action": [ "s3:UpdateJobStatus" ], "Resource": "*", "Condition": { "StringEquals": { "aws:ResourceTag/department": "${aws:PrincipalTag/department}" } } }, { "Effect": "Allow", "Action": "s3:UpdateJobPriority", "Resource": "*", "Condition": { "ForAnyValue:StringEquals" : { "aws:ResourceTag/stage": [ "QA", "Production" ] }, "StringEquals": { "aws:ResourceTag/department": "${aws:PrincipalTag/department}" }, "NumericLessThanEquals": { "s3:RequestJobPriority": 300 } } }, { "Effect": "Allow", "Action": "s3:PutJobTagging", "Resource": "*", "Condition": { "StringEquals": { "aws:RequestTag/department" : "${aws:PrincipalTag/department}", "aws:ResourceTag/department": "${aws:PrincipalTag/department}" }, "ForAnyValue:StringEquals" : { "aws:RequestTag/stage": [ "QA", "Production" ], "aws:ResourceTag/stage": [ "QA", "Production" ] } } } ] }
Things to know
S3 Batch Operations performs the same operation across all the objects listed in the manifest. If you want your copied objects to have different storage classes or other properties, create multiple jobs. You can also use an AWS Lambda function and have it assign specific properties to each object.
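As a rough sketch of that last option: a job that invokes a Lambda function sends the function a list of tasks and expects a matching list of results, as described in the S3 Batch Operations documentation. The handler below follows that request/response shape as we understand it; the per-object rule (choosing a storage class from the key prefix) is purely illustrative and would be replaced with whatever logic fits your data:

import urllib.parse

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    # S3 Batch Operations passes the task (or tasks) for this invocation.
    task = event["tasks"][0]
    bucket = task["s3BucketArn"].split(":::")[-1]
    key = urllib.parse.unquote_plus(task["s3Key"])  # keys arrive URL-encoded

    # Hypothetical per-object rule: archive log objects, keep the rest in
    # STANDARD_IA.
    storage_class = "GLACIER" if key.startswith("logs/") else "STANDARD_IA"

    # Copy the object in place with the chosen storage class.
    s3.copy_object(
        CopySource={"Bucket": bucket, "Key": key},
        Bucket=bucket,
        Key=key,
        StorageClass=storage_class,
        MetadataDirective="COPY",
    )

    # Return the per-task result so it appears in the job's completion report.
    return {
        "invocationSchemaVersion": event["invocationSchemaVersion"],
        "treatMissingKeysAs": "PermanentFailure",
        "invocationId": event["invocationId"],
        "results": [
            {
                "taskId": task["taskId"],
                "resultCode": "Succeeded",
                "resultString": f"Copied with storage class {storage_class}",
            }
        ],
    }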
Cleaning up
To clean up the resources you created as part of this blog post, remove the applicable set of IAM tags or IAM managed policies. If you’ve run any S3 Batch Operations jobs and want to remove the completion report generated by the job, you can do so directly in the Amazon S3 bucket where the report is stored.
Conclusion
In this post, we demonstrated how S3 Batch Operations job tags can grant or limit users’ access to create and update jobs within their departments and across a range of job priorities. In particular, we covered how you can require job tags on all new S3 Batch Operations jobs, assign each user’s department using IAM tags, and use IAM managed policies to grant granular access to S3 Batch Operations jobs based on those departments.
Using job tags along with the user and resource policies you already use in other AWS services today can give your teams and applications a consistent experience in running large-scale S3 Batch Operations jobs, enforcing IT governance policies, and enhancing traceability and reporting of the jobs performed. We hope you can draw from the examples above to implement permission controls for users across your organization, especially when multiple teams are accessing shared datasets or data lakes within the same account.
Thanks for reading this post and getting started with job tags for S3 Batch Operations. We can’t wait to hear your feedback and feature requests for S3 Batch Operations in the comments.