AWS Official Blog

Two Architectures for Quickly Building Security Posture Aggregation Scenarios with AWS Security Hub

Overview

To keep their systems secure and stable, many enterprises deploy a wide range of powerful security tools, from firewalls and endpoint protection to vulnerability and compliance scanners. Yet your team may have to handle thousands upon thousands of security alerts (Security Findings) every day, which often leaves an already small security staff scrambling back and forth between these tools. Facing a constant flood of noisy alerts, analysts become overwhelmed, develop "alert fatigue," and eventually give up on analysis and response. Even an expensive SIEM (Security Information and Event Management) deployment can end up shelved: the numbers keep scrolling across a big screen, but alerts are not handled in time and the underlying risks are never resolved.

AWS Security Hub is a service from Amazon Web Services that provides a central place to view and manage security findings and to run automated security checks.

This post walks you through an automated security-event correlation solution built on Security Hub, and provides a CloudFormation template for automated deployment along with example CLI commands.

Prerequisites

Before deploying the example in this post, make sure that Security Hub, GuardDuty, and AWS Config are already enabled in the target AWS Region, and that the AWS CLI is configured with credentials that allow you to manage Security Hub, AWS Config, EventBridge, Lambda, IAM, and CloudFormation.

Architecture and How It Works

  • Architecture 1
    As shown on the left side of the figure below, this architecture writes findings that match user-defined scenario conditions into Amazon DynamoDB, and an AWS Lambda function then queries DynamoDB against the correlation conditions; when they are met, it generates a Critical-severity Security Hub finding. The drawback is that every time a security engineer designs a new scenario (use case), the correlation logic has to be added to the Lambda function as another function, so the code base keeps growing and the approach demands fairly strong development skills from the security team. For more details, see the blog post: correlate-security-findings-with-aws-security-hub-and-amazon-eventbridge
  • Architecture 2
    As shown on the right side of the figure below, this architecture replaces the DynamoDB and Lambda correlation logic of Architecture 1 with Security Hub's built-in insight feature, so only a very simple Lambda function is needed. Adding or updating a use case requires no changes to the Lambda code, which makes deploying use cases simpler and removes the coding burden from security engineers, saving them time. A minimal sketch of this correlation logic appears right after this list.
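To illustrate the core idea of Architecture 2, here is a minimal boto3 sketch (a simplified version of the Lambda function in the appendix; the two insight ARNs are placeholders): it reads the result sets of two custom insights and treats the resources that appear in both as matches for the use case.

import boto3

sh = boto3.client("securityhub")

def insight_resource_ids(insight_arn):
    # Return the set of resource IDs grouped by a Security Hub custom insight.
    results = sh.get_insight_results(InsightArn=insight_arn)
    return {r["GroupByAttributeValue"] for r in results["InsightResults"]["ResultValues"]}

# Placeholder ARNs; replace them with the ARNs returned by create-insight in Step 1.
arn1 = "arn:aws:securityhub:us-east-1:111122223333:insight/111122223333/custom/example-1"
arn2 = "arn:aws:securityhub:us-east-1:111122223333:insight/111122223333/custom/example-2"

# A resource that appears in both insights satisfies all conditions of the use case.
matched = insight_resource_ids(arn1) & insight_resource_ids(arn2)
print(matched)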

Comparison of the two architectures

Deployment

Let's use the second architecture to deploy an example security scenario (use case): S3 data loss.

The correlation conditions for this scenario are:

The S3 bucket does not have Versioning enabled
GuardDuty generates the finding Impact:S3/AnomalousBehavior.Delete

When both conditions are triggered at the same time, it means that data in the S3 bucket has been deleted and may be unrecoverable, putting it at risk of permanent loss, so the security team should respond immediately. The solution therefore has Lambda automatically generate a Critical-severity Security Hub finding.

Step 1: Create two Custom Insights

Set the parameters with the following example commands:

region=(the AWS Region to deploy to, for example: us-east-1)
insight1='s3versioning'
insight2='guarddutys3delete'

Run the following example commands to create the insights:

arn1=$(aws securityhub create-insight \
--filters \
 '{"RecordState": [{ "Comparison": "EQUALS", "Value": "ACTIVE"}], "WorkflowStatus": [{"Comparison": "EQUALS", "Value": "NEW"}],"ProductName": [{"Comparison": "EQUALS", "Value": "Config"}], "ComplianceStatus": [{"Comparison": "EQUALS", "Value": "FAILED"}]}' \
 --group-by-attribute "ResourceId" \
--name $insight1 \
--query 'InsightArn' --output text --region=$region)
echo $arn1
arn2=$(aws securityhub create-insight \
--filters \
 '{"RecordState": [{ "Comparison": "EQUALS", "Value": "ACTIVE"}], "WorkflowStatus": [{"Comparison": "EQUALS", "Value": "NEW"}],"ResourceType": [{"Comparison": "EQUALS", "Value": "AwsS3Bucket"}], "Type": [{"Comparison": "EQUALS", "Value": "TTPs/Impact:S3-AnomalousBehavior.Delete"}]}' \
 --group-by-attribute "ResourceId" \
--name $insight2 \
--query 'InsightArn' --output text --region=$region)
echo $arn2
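Optionally, you can confirm that the two insights were created and inspect how they group findings. A minimal boto3 sketch (assuming, purely for illustration, that the two ARNs printed above have been exported as environment variables ARN1 and ARN2):

import os
import boto3

sh = boto3.client("securityhub", region_name=os.environ.get("AWS_REGION", "us-east-1"))

# ARN1 and ARN2 are assumed to hold the ARNs printed by the create-insight commands above.
response = sh.get_insights(InsightArns=[os.environ["ARN1"], os.environ["ARN2"]])
for insight in response["Insights"]:
    print(insight["Name"], insight["GroupByAttribute"])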

Step 2: Run the CloudFormation Stack

Save the template provided in the appendix to the local directory from which you run the CLI. The parameters are described below:

stackname=(the name of the CloudFormation stack, for example 'usercase-s3dataloss')
templatename=(the CloudFormation YAML file saved from the appendix, for example 'blogtemplate.yaml')
findingtype=(the type of the generated finding; it must follow the Security Hub type taxonomy; for this example use 'Effects/Data Exposure/S3DataLost')
title=(the title of the generated finding, for example 'SIEM Alert-S3 data lost')
resourcetype=(the resource type in the generated finding; for this example use 'AwsS3Bucket')

Run the following example CLI commands to set the parameters:

stackname='usercase-s3dataloss'
templatename='blogtemplate.yaml'
findingtype='Effects/Data Exposure/S3DataLost'
title='SIEM Alert-S3 data lost'
resourcetype='AwsS3Bucket'

Run the following example CLI command to create the CloudFormation stack:

aws cloudformation create-stack --stack-name $stackname --template-body file://$templatename \
--parameters \
ParameterKey=arn1,ParameterValue=$arn1 \
ParameterKey=arn2,ParameterValue=$arn2 \
ParameterKey=findingtype,ParameterValue="$findingtype" \
ParameterKey=title,ParameterValue="$title" \
ParameterKey=resourcetype,ParameterValue=$resourcetype \
--capabilities CAPABILITY_IAM \
--region=$region
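Stack creation takes a few minutes. If you prefer to wait for it and check the result from a script, a minimal boto3 sketch (assuming the stack name and Region used above) looks like this:

import boto3

# Replace the Region and stack name with the values you used above.
cfn = boto3.client("cloudformation", region_name="us-east-1")
stack_name = "usercase-s3dataloss"

# Block until the stack reaches CREATE_COMPLETE (raises an error if creation fails).
cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

status = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]["StackStatus"]
print(stack_name, status)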

Viewing the Results

Once the scenario's trigger conditions are met, you can see a Critical finding generated automatically by Lambda in the Security Hub console, as shown in the figure below:

Example alert for the use case
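Besides the console, you can also check programmatically whether the finding was created. A minimal boto3 sketch that filters findings by the title used in this example:

import boto3

# Replace the Region with your deployment Region.
sh = boto3.client("securityhub", region_name="us-east-1")

# Filter on the finding title passed to the CloudFormation stack in Step 2.
response = sh.get_findings(
    Filters={"Title": [{"Value": "SIEM Alert-S3 data lost", "Comparison": "EQUALS"}]}
)
for finding in response["Findings"]:
    print(finding["Id"], finding["Severity"]["Label"], finding["Resources"][0]["Id"])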

Summary

In this post, we showed how to use the native Security Hub and Lambda services from Amazon Web Services to quickly correlate security events and automatically analyze and raise alerts. The example use case is built from two conditions, but you can extend it to three or four. Deploying 5-10 correlation scenarios as part of your security operations can save your security team the time spent manually analyzing large volumes of logs and let it focus on the security events that are truly urgent and high-risk.
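For reference, here is a minimal sketch of how the intersection logic in the appendix Lambda function could be generalized to more than two conditions. It assumes, purely for illustration, a hypothetical insightarns environment variable holding a comma-separated list of insight ARNs.

import os
import boto3

sh = boto3.client("securityhub")

def insight_resource_ids(insight_arn):
    # Return the set of resource IDs grouped by a Security Hub custom insight.
    results = sh.get_insight_results(InsightArn=insight_arn)
    return {r["GroupByAttributeValue"] for r in results["InsightResults"]["ResultValues"]}

# Hypothetical environment variable, e.g. insightarns="arn1,arn2,arn3"
arns = [a.strip() for a in os.environ["insightarns"].split(",") if a.strip()]

# A resource must appear in every insight to trigger the use case.
matched = set.intersection(*(insight_resource_ids(a) for a in arns)) if arns else set()
print(matched)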

Appendix

Example CloudFormation template

AWSTemplateFormatVersion: 2010-09-09
Parameters:
  arn1:
    Type: String
    Default: arn:aws:securityhub:::insight/securityhub/default/10
    Description: the first securityhub insight arn
  arn2:
    Type: String
    Default: arn:aws:securityhub:::insight/securityhub/default/12
    Description: the second securityhub insight arn
  findingtype:
    Type: String
    Default: Software and Configuration Checks/Amazon Security Best Practices
    Description: the finding type; see https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format-type-taxonomy.html
  title:
    Type: String
    Default: SIEM Alert-Publicly shared S3 with sensitive data
    Description: the finding title
  resourcetype:
    Type: String
    Default: AwsS3Bucket
    Description: the resource type used to group by in the insight
  ConfigRuleName:
    Type: String
    Default: s3-bucket-versioning-enabled
    Description: The name that you assign to the AWS Config rule.
    MinLength: '1'
    ConstraintDescription: This parameter is required.
  isMfaDeleteEnabled:
    Type: String
    Default: ''
    Description: MFA delete is enabled for your S3 buckets.
Resources:
  AWSConfigRule:
    Type: 'AWS::Config::ConfigRule'
    Properties:
      ConfigRuleName: !Ref ConfigRuleName
      Description: Checks whether versioning is enabled for your S3 buckets.
      InputParameters:
        isMfaDeleteEnabled: !If 
          - isMfaDeleteEnabled
          - !Ref isMfaDeleteEnabled
          - !Ref 'AWS::NoValue'
      Scope:
        ComplianceResourceTypes:
          - 'AWS::S3::Bucket'
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_VERSIONING_ENABLED  
  EventRule:
    Type: 'AWS::Events::Rule'
    Properties:
      Description: EventBridge rule that matches GuardDuty findings imported into Security Hub and triggers the Lambda function
      EventPattern:
        source:
          - aws.securityhub
        detail-type:
          - Security Hub Findings - Imported
        detail:
          findings:
            ProductName:
              - GuardDuty
            Workflow:
              Status:
                - NEW
            RecordState:
              - ACTIVE
      State: ENABLED
      Targets:
        - Arn:
            'Fn::GetAtt':
              - LambdaFunction
              - Arn
          Id: '1'
  PermissionForEventsToInvokeLambda:
    Type: 'AWS::Lambda::Permission'
    Properties:
      FunctionName:
        Ref: LambdaFunction
      Action: 'lambda:InvokeFunction'
      Principal: events.amazonaws.com
      SourceArn:
        'Fn::GetAtt':
          - EventRule
          - Arn
  LambdaFunction:
    Type: 'AWS::Lambda::Function'
    Properties:
      Runtime: python3.9
      Role: !GetAtt IAMRole.Arn
      Handler: index.lambda_handler
      Timeout: 600
      Environment:
        Variables:
          arn1: !Ref arn1
          arn2: !Ref arn2
          findingtype: !Ref findingtype
          title: !Ref title
          resourcetype: !Ref resourcetype

      Code:
        ZipFile: |
          import json
          import os
          import boto3
          import datetime
          from datetime import date
          import logging
          logger=logging.getLogger()
          logger.setLevel(logging.INFO)
          sh = boto3.client('securityhub')
          #get insights result
          def getinsight(insight):
              condition=[]
              response = sh.get_insight_results(
              InsightArn=insight)
              result=response["InsightResults"]["ResultValues"]
              if len(result)>0:
                  for i in result:
                      condition.append(i['GroupByAttributeValue'])
              return(condition)
              # create a Critical Security Hub finding; you may modify fields such as title and alertdes
          def create_securityhub_finding (aws_account_id,region,resourceid):
              alerttype=os.environ['findingtype']
              title=os.environ['title']
              resourcetype=os.environ['resourcetype']
              alertdes='This alert is generated by Lambda for the use case: S3 is publicly accessible and contains sensitive data. Details can be checked in managed insights 2 and 10'
              # you may modify the parameters above for different use cases

              d = datetime.datetime.now()
              new_recorded_time = d.isoformat() + "Z"
              findings=[]
              sh_payload = {
              "SchemaVersion": '2018-10-08',
              "Title": title,
              "AwsAccountId":aws_account_id ,
              "CreatedAt":new_recorded_time ,
              "UpdatedAt":new_recorded_time ,
              "Description": alertdes,
              "FindingProviderFields": {
                  "Severity": {
                      "Label": "CRITICAL",
                      "Original":"CRITICAL"
                  },
                  "Types": [alerttype]
              },
              "GeneratorId": "SIEM Alert generated by insights",
              "Id": 'arn:aws:siem:'+region+':'+aws_account_id+':finding/insight/'+resourceid+str(d),
              "ProductArn": 'arn:aws:securityhub:' + region + ':'+aws_account_id+':product/'+aws_account_id+'/default', # in China Regions, change the partition to aws-cn
              "Resources": [{
                  'Type': resourcetype,
                  'Region': region,
                  'Id': resourceid
              }],
              "Note": {
                  "Text": "Please review the incident and take actions",
                  "UpdatedBy":"lambda",
                  "UpdatedAt":new_recorded_time
              }
          }
              findings.append(sh_payload)
              
              logger.info('Creating custom Security Hub finding...')
              try:
                  response = sh.batch_import_findings(
                  Findings=findings
                  )
                  logger.info("Successfully imported {} Security Hub findings".format(response['SuccessCount']))
              except Exception as e:
                  print(e)

          def lambda_handler(event, context):
              group1=[]
              group2=[]
              aws_account_id = event["account"]
              region = event['region']
              # query the results of insight 1 and insight 2
              logger.info('querying insight results for this use case...')
              group1=getinsight(os.environ['arn1'])
              group2=getinsight(os.environ['arn2'])
              resourceids=[x for x in group1 if x in group2]
              if len(resourceids)>0:
                  logger.info("SIEM is triggered")
                  for each in resourceids:
                      create_securityhub_finding (aws_account_id,region,each)  # for each matched resource id, write a new Critical finding to Security Hub
              else:
                  logger.info('The use case is not triggered')
              return(resourceids)
      Description: Lambda function for the use case. It correlates the results of the two insights and creates a Critical Security Hub finding when both conditions match, and is invoked by the EventBridge rule.
      TracingConfig:
        Mode: Active
  IAMRole:
    Type: 'AWS::IAM::Role'
    Properties:
      Description: basic Lambda role plus Security Hub write policy
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Policies:
        - PolicyName: siem2-macie-eb-lambda-policy
          PolicyDocument:
            Statement:
              - Effect: Allow
                Action:
                  - 'securityhub:GetInsightResults'
                  - 'securityhub:BatchImportFindings'
                  - 's3:PutObjectTagging'
                  - 'logs:CreateLogGroup'
                  - 'logs:CreateLogStream'
                  - 'logs:PutLogEvents'
                Resource:
                  - '*'
Conditions:
  isMfaDeleteEnabled: !Not 
    - !Equals 
      - ''
      - !Ref isMfaDeleteEnabled

About the Authors

Jessica Wang

Senior Security Consultant on the AWS Professional Services team, responsible for providing customers with cloud security consulting, architecture design, and implementation services.

王枫楠

Security Consultant on the AWS Professional Services team, focusing on overall cloud security architecture design, best practices, and implementation for enterprises.