AWS Security Blog

Analyzing OS-Related Security Events on EC2 with SplunkStorm

September 3, 2021: This blog post was updated to clarify that the S3 bucket name DOC-EXAMPLE-BUCKET is a placeholder name that readers should replace with their own S3 bucket name.


An important objective of analyzing OS-generated data is to detect, correlate, and report on potential security events. Several partner solutions available in AWS Marketplace provide this functionality, including Splunk. Splunk is also used for many other AWS-relevant use cases, including DevOps, where developers and operations teams use Splunk to analyze logs for better performance and availability within AWS environments. Splunk has been a longtime AWS partner and has recently developed an AWS CloudFormation template to make it easier to deploy Splunk Storm on AWS. In this post, Bill Shinn, an AWS Security Solutions Architect, describes how to deploy Splunk Storm using AWS CloudFormation and then use it to analyze user activity on Amazon EC2 instances.

Introduction

One reason that customers review security and audit logs is to detect and analyze security risks—specifically, to look for evidence of unauthorized access and actions. In fact, the entire security information and event management (SIEM) market is based on log management. Collecting, correlating, and diagnosing logs based on rules prescribed by IT and security is the core function of these systems.

For SIEM in AWS, customers can examine user activity at the operating system level, where alerts triggered by Windows or Linux events include access denied, files deleted by unauthorized users, employment termination, and unscheduled password changes. Alerts are then correlated with other events generated from different systems, users, and instances.

DevOps is another popular use case for Splunk. Many developers who write applications on AWS use Splunk Storm at no additional cost to analyze performance and availability issues before putting an application into production. Once in production, the application can be managed by the operations team and troubleshot if anything goes wrong. This use case is based on the logs generated by the application and by AWS services, together with the analysis contributed by the development team.

Splunk Storm indexes and stores machine data in real time from virtually any source, format, platform, or cloud provider, without the need for custom parsers or connectors. “Machine data” here includes web and application logs, syslog streams from applications, load balancers, OS-related security events from Amazon EC2 instances, database logs, stack traces, and more. Whether your application is written in Ruby, Java, Python, PHP, .NET, Node.js, or any other framework, you can send data to Splunk Storm.

Sending system log messages from Amazon EC2 instances to Splunk Storm lets you visualize and take action on important events. Setting this up can be automated using AWS CloudFormation. In this post, I’ll walk you through a CloudFormation template you can use to get started sending system log information to Splunk Storm after launching an EC2 instance. The diagram below provides a high-level overview of how the AWS CloudFormation template creates an Amazon EC2 instance, pulls content from Amazon S3, and then configures the instance to forward logs to Splunk Storm.

Diagram showing a high-level overview of the process

Setting Up Splunk Storm with CloudFormation

To start, download the sample CloudFormation template for using Splunk.

Launch the template. This automatically installs the Splunk Universal Forwarder, a data collection agent, into an EC2 instance and configures it to start logging security events into a Splunk Storm project for analysis. Before you can install the Universal Forwarder, you need a Splunk Storm account and at least one project. If you have not done this already, go to www.splunkstorm.com and create your Splunk account and your first project.

Navigate to the Universal Forwarder site and download the RPM and credentials packages for your project:

Screenshot showing the download of Splunk universal forwarder and credentials package

The credentials package lets your Universal Forwarder connect and send logs to your Splunk Storm project.

For this walkthrough, we’re using the 64-bit 2.6+ Linux distribution. If you want to use a different operating system, or a package management system that does not work with RPMs, you’ll need to modify the sample template. You can modify the template in a text editor or by using the AWS Toolkits for Visual Studio or Eclipse.

Upload the Universal Forwarder installation package to a publicly accessible Amazon S3 bucket in your AWS account. Be sure to note the URL where this installation package is stored. (For information on how to create a bucket and make it available, see Access Control in the Amazon S3 documentation.)

Next, store the project’s credential package (.spl file) in your S3 bucket and ensure it is only accessible to your AWS account.

Important: The credentials package contains private credential material for your Splunk account, which allows a user to add forwarders and potentially make other configuration changes to your Splunk Storm project. Never store this file in a location that allows unauthorized access.

By default, S3 objects are not shared and are accessible only to the AWS account that stores the object. Take note of the Amazon Resource Name (ARN) for this S3 object; you’ll need to supply it as an input parameter when you launch the CloudFormation template.
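As an illustration, S3 object ARNs have the form `arn:aws:s3:::bucket/key`. The following shell sketch shows how the ARN parameter value is constructed (the bucket and key names here are placeholders, not values from the accompanying template; replace them with your own):

```shell
# Placeholder names for illustration; replace with your own bucket and .spl key.
BUCKET="DOC-EXAMPLE-BUCKET"
SPL_KEY="stormforwarder_your_spl_filename.spl"

# Upload the credentials package privately; the default ACL keeps the object
# accessible only to your AWS account. For example:
#   aws s3 cp "stormforwarder_your_spl_filename.spl" "s3://${BUCKET}/${SPL_KEY}"

# S3 object ARNs take the form arn:aws:s3:::bucket/key. This value becomes the
# SplunkSPLFileS3ARN input parameter when you launch the CloudFormation template.
SPL_ARN="arn:aws:s3:::${BUCKET}/${SPL_KEY}"
echo "${SPL_ARN}"
```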

What the CloudFormation Template Does

The CloudFormation template performs the following actions:

  1. Creates an EC2 instance using the Amazon Linux AMIs for 64-bit operating systems. The AMI IDs are up to date for all regions and instance types as of today’s posting. These periodically change, but you can find the latest AMI IDs here. If an ID changes, you need to update the CloudFormation template accordingly.
  2. Creates an EC2 security group that allows only SSH access to the instance. This should be modified to account for any application requirements of the instance. Customers can find more information on managing Security Groups here.
  3. Creates an IAM role, instance profile, and policy combination with permissions sufficient to access the S3 bucket that stores the Splunk Storm credentials and to download the content. The template showcases the capability of CloudFormation to automate the creation of IAM resources. The template also demonstrates how to use CloudFormation metadata resources (AWS::CloudFormation::Init files and AWS::CloudFormation::Authentication) to configure an Amazon EC2 instance when it starts.

The snippets below are from the CloudFormation template accompanying this article and help demonstrate how an access model is constructed.

The Amazon EC2 instance will be launched with an IAM role. The following snippet creates the role and the associated access policy. The policy allows access to retrieve only a single S3 object; in this case, only the ListBucket and GetObject actions may be used, specifically on the S3 object containing the Splunk Storm credentials package.

"splunkstormIAMRole" : {
   "Type" : "AWS::IAM::Role",
   "Properties" : {
    "AssumeRolePolicyDocument" : {
     "Statement" : [
      {
       "Effect" : "Allow",
       "Principal" : {
        "Service" : [
         "ec2.amazonaws.com"
        ]
       },
       "Action" : [
        "sts:AssumeRole"
       ]
      }
     ]
    },
    "Path" : "/"
   }
  },
  "splunkstormIAMPolicy" : {
   "Type" : "AWS::IAM::Policy",
   "Properties" : {
    "PolicyName" : "splunkstormPolicy",
    "PolicyDocument" : {
     "Statement" : [
      {
       "Effect" : "Allow",
       "Action" : [
        "s3:ListBucket",
        "s3:GetObject"
       ],
       "Resource" : {
        "Ref" : "SplunkSPLFileS3ARN"
       }
      }
     ]
    },
    "Roles" : [
     {
      "Ref" : "splunkstormIAMRole"
     }
    ]
   }
  },

 "splunkstormInstanceProfile" : {
   "Type" : "AWS::IAM::InstanceProfile",
   "Properties" : {
    "Path" : "/",
    "Roles" : [
     {
      "Ref" : "splunkstormIAMRole"
     }
    ]
   }
  },

This subsection of the EC2 instance resource instructs EC2 to launch the instance using the IAM instance profile (role) you created. This makes temporary security credentials available to applications and scripts running on the instance:

 "IamInstanceProfile" : {
     "Ref" : "splunkstormInstanceProfile"
    },

This section is referenced by cfn-init, a helper script running on the instance just after it is launched. It tells the helper script to use the role when obtaining temporary security credentials necessary to obtain the Splunk Storm credentials package:

  "AWS::CloudFormation::Authentication" : {
     "s3credsForSPL" : {
      "buckets" : [
       {
        "Ref" : "SplunkSPLFileS3BucketName"
       }
      ],
      "type" : "S3",
      "roleName" : {
       "Ref" : "splunkstormIAMRole"
      }
     }
    }

  4. Downloads and installs the Splunk Universal Forwarder on the EC2 instance. This is done via a setup script that is created using CloudFormation metadata resources (AWS::CloudFormation::Init files and commands). The setup script takes the following actions:

a. Changes the default admin password on the Splunk Universal Forwarder to the value you supply as an input parameter when launching the CloudFormation template. Before starting the forwarder, the script changes the forwarder’s default hostname to the EC2 instance ID. It does this by retrieving the instance ID from the EC2 metadata service and writing it into $SPLUNK_HOME/etc/system/local/inputs.conf. Instance IDs persist across restarts of EBS-backed EC2 instances, while internal and external IP addresses may not (except for VPC instances, where you can assign IPs that remain the same across restarts). Without this change, the HOSTNAME field in a compliant syslog message could change when the instance restarts, leaving no clear way to associate messages from the same instance across restarts.

Note: You may wish to modify this functionality to replace the instance ID with any other value that meets your persistence goals, such as a CMDB identifier, the EC2 instance ID combined with some AWS account property, or any value that allows you to uniquely identify the log messages. The script then starts the Splunk agent.

b. Adds the credentials package as a Splunk “app.” This configures the agent to forward events to the Splunk Storm project.

c. Adds the file /var/log/secure as a monitored file. You may want to extend this functionality with a CloudFormation parameter that supports one or more file and directory paths to monitor, or even import a list of objects stored in S3 buckets containing the directories and files to monitor. A great use case is one where an application manifest or configuration file contains log4j configuration data or other metadata about where an application logs.

d. Ensures that the agent is configured to start up across EC2 instance restarts (in the case of EBS-backed instances). Instead of using the CloudFormation metadata services resource, the template uses the enable-boot script included in the Universal Forwarder package.
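The per-instance hostname step in (a) and the monitored-file step in (c) can be sketched roughly as follows. This is an illustration, not the template’s actual script: the paths and stanza names follow Splunk Universal Forwarder conventions, and the instance ID is a placeholder, since on a real instance the value comes from the EC2 metadata service:

```shell
# On a real EC2 instance, the ID comes from the metadata service:
#   INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
INSTANCE_ID="i-0123abcd"                            # placeholder for illustration
SPLUNK_HOME="${SPLUNK_HOME:-/tmp/splunkforwarder}"  # typically /opt/splunkforwarder

# Write the instance ID as the forwarder's host value and monitor /var/log/secure.
mkdir -p "${SPLUNK_HOME}/etc/system/local"
cat > "${SPLUNK_HOME}/etc/system/local/inputs.conf" <<EOF
[default]
host = ${INSTANCE_ID}

[monitor:///var/log/secure]
EOF

cat "${SPLUNK_HOME}/etc/system/local/inputs.conf"
```

Because the host value is written before the forwarder starts, every event the agent ships is tagged with the instance ID rather than a transient hostname or IP address.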

You can launch the CloudFormation stack from the AWS Management Console, or upload the template to an S3 bucket location and launch it from the command line. For example, the following command uses the AWS CLI tools to launch the CloudFormation stack. (Substitute your own values, such as your S3 bucket name where you see DOC-EXAMPLE-BUCKET, and make sure you launch the stack in the same region where you stored the template.)

aws cloudformation create-stack --stack-name yourstackname \
    --capabilities CAPABILITY_IAM --disable-rollback \
    --parameters \
    '{"parameter_key" : "KeyName", "parameter_value" : "yourkeypair"}' \
    '{"parameter_key" : "EC2InstanceType", "parameter_value" : "m1.xlarge"}' \
    '{"parameter_key" : "SplunkForwarderAdminPassword", "parameter_value" : "agoodpassword"}' \
    '{"parameter_key" : "SplunkForwarderPackageLocation", "parameter_value" : "https://s3-us-west-2.amazonaws.com/DOC-EXAMPLE-BUCKET/splunkforwarder-5.0.2-149561-linux-2.6-x86_64.rpm"}' \
    '{"parameter_key" : "SplunkSPLFileLocation", "parameter_value" : "https://s3-us-west-2.amazonaws.com/DOC-EXAMPLE-BUCKET/stormforwarder_replacewithyour-spl-value.spl"}' \
    '{"parameter_key" : "SplunkSPLFileS3ARN", "parameter_value" : "arn:aws:s3:::DOC-EXAMPLE-BUCKET/stormforwarder_your_spl_filename1234567.spl"}' \
    '{"parameter_key" : "SplunkSPLFileS3BucketName", "parameter_value" : "DOC-EXAMPLE-BUCKET"}' \
    --template-url https://s3-us-west-2.amazonaws.com/DOC-EXAMPLE-BUCKET/splunkstorm.template --debug

In this example, the CloudFormation stack is launched in the US West (Oregon) AWS region, and all associated files are stored in the S3 endpoint in that region; you’ll need to adjust accordingly for different regions.

After the CloudFormation stack has finished launching, the Splunk Universal Forwarder will begin forwarding log messages from /var/log/secure to Splunk Storm for analysis. Log in to the instance using SSH, and you will see the audit trail stored in Splunk Storm. In the following example, you can see failed SSH login activity for the instance that you just created.
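For example, a search along the following lines surfaces failed SSH logins. “Failed password” is sshd’s standard log message for a rejected login; adjust the source field to match how your events are indexed:

```
source="/var/log/secure" "Failed password" | stats count by host
```

Because the forwarder sets each instance’s ID as its host value, the stats clause groups the failures by instance.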

Screenshot showing failed SSH login activity

You can customize this template for use in your applications. Be sure to capture log messages from your applications related to accessing sensitive data, making critical configuration changes, and attempted access to interfaces that could permit unauthorized access.

Conclusion

Many AWS customers already have log management solutions, but those solutions are primarily for on-premises environments. There are now several AWS Marketplace solution partners, including Splunk Storm, that make it possible to gather EC2 logs for security and compliance use cases. To learn more about Splunk Storm or other solutions, please visit the AWS Marketplace website.

– Ben