AWS Messaging & Targeting Blog
Creating custom Pinpoint dashboards using Amazon QuickSight, part 1
Note: This post was written by Manan Nayar and Aprajita Arora, Software Development Engineers on the AWS Digital User Engagement team.
Amazon Pinpoint helps you create customer-centric engagement experiences across mobile, web, and other messaging channels. It also provides a variety of Key Performance Indicators (KPIs) that you can use to track the performance of your messaging programs.
You can access these KPIs through the console, or by using the Amazon Pinpoint API. In some cases, you might want to create custom dashboards that go beyond what the console provides by default, or even combine these metrics with other data. Over the next few days, we'll discuss several different methods that you can use to create your own custom dashboards.
In this post, you’ll learn how to use the Amazon Pinpoint API to retrieve metrics, and then display them in visualizations that you create in Amazon QuickSight. This option is ideal for creating custom dashboards that highlight a specific set of metrics, or for embedding these metrics in your existing application or website.
In the next post (which we’ll post on Monday, August 19), you’ll learn how to export raw event data to an S3 bucket, and use that data to create dashboards by using QuickSight’s Super-fast, Parallel, In-memory Calculation Engine (SPICE). This option enables you to perform in-depth analyses and quickly update visualizations. It’s also cost-effective, because all of the event data is stored in an S3 bucket.
The final post (which we’ll post on Wednesday, August 21) will also discuss the process of creating visualizations from event stream data. However, in this solution, the data will be sent from Amazon Kinesis to a Redshift cluster. This option is ideal if you need to process very large volumes of event data.
Creating a QuickSight dashboard that uses specific metrics
You can use the Amazon Pinpoint API to programmatically access many of the metrics that are shown on the Analytics pages of the Amazon Pinpoint console. You can learn more about using the API to obtain specific KPIs in our recent blog post, Tracking Campaign Performance Using the Metrics APIs.
The following sections show you how to parse and store those results in Amazon S3, and then create custom dashboards by using Amazon QuickSight. The steps below are meant to provide general guidance, rather than specific procedures. If you've used other AWS services in the past, most of the concepts here will be familiar. If not, don't worry—we've included links to the documentation to make things easier.
Step 1: Package the Dependencies
The version of the AWS SDK for Python that's built into Lambda is a few releases behind the latest one. However, the ability to retrieve Pinpoint metrics programmatically is a relatively new feature. For this reason, you have to download the latest versions of the SDK libraries to your computer, create a .zip archive, and then upload that archive to Lambda.
To package the dependencies
- Paste the following code into a text editor:
from datetime import datetime
import boto3
import json

AWS_REGION = "<us-east-1>"
PROJECT_ID = "<projectId>"
BUCKET_NAME = "<bucketName>"
BUCKET_PREFIX = "quicksight-data"
DATE = datetime.now()

# Get today's values for the specified KPI.
def get_kpi(kpi_name):
    client = boto3.client('pinpoint', region_name=AWS_REGION)
    response = client.get_application_date_range_kpi(
        ApplicationId=PROJECT_ID,
        EndTime=DATE.strftime("%Y-%m-%d"),
        KpiName=kpi_name,
        StartTime=DATE.strftime("%Y-%m-%d")
    )
    rows = response['ApplicationDateRangeKpiResponse']['KpiResult']['Rows'][0]['Values']

    # Create a JSON object that contains the values we'll use to build
    # QuickSight visualizations.
    data = construct_json_object(rows[0]['Key'], rows[0]['Value'])

    # Send the data to the S3 bucket.
    write_results_to_s3(kpi_name, json.dumps(data).encode('UTF-8'))

# Create the JSON object that we'll send to S3.
def construct_json_object(kpi_name, value):
    data = {
        "applicationId": PROJECT_ID,
        "kpiName": kpi_name,
        "date": str(DATE),
        "value": value
    }
    return data

# Send the data to the designated S3 bucket.
def write_results_to_s3(kpi_name, data):
    # Create a file path with folders for year, month, date, and hour.
    path = (
        BUCKET_PREFIX + "/"
        + DATE.strftime("%Y") + "/"
        + DATE.strftime("%m") + "/"
        + DATE.strftime("%d") + "/"
        + DATE.strftime("%H") + "/"
        + kpi_name
    )
    client = boto3.client('s3')

    # Send the data to the S3 bucket.
    response = client.put_object(
        Bucket=BUCKET_NAME,
        Key=path,
        Body=bytes(data)
    )

def lambda_handler(event, context):
    get_kpi('email-open-rate')
    get_kpi('successful-delivery-rate')
    get_kpi('unique-deliveries')
In the preceding code, make the following changes:
- Replace <us-east-1> with the name of the AWS Region that you use Amazon Pinpoint in.
- Replace <projectId> with the ID of the Amazon Pinpoint project that the metrics are associated with.
- Replace <bucketName> with the name of the Amazon S3 bucket that you want to use to store the data. For more information about creating S3 buckets, see Create a Bucket in the Amazon S3 Getting Started Guide.
- Optionally, modify the lambda_handler function so that it calls the get_kpi function for the specific metrics that you want to retrieve.
When you finish, save the file as retrieve_pinpoint_kpis.py.
- Use pip to download the latest versions of the boto3 and botocore libraries. Add these libraries to a .zip file. Also add retrieve_pinpoint_kpis.py to the .zip file. You can learn more about all of these tasks in Updating a Function with Additional Dependencies With a Virtual Environment in the AWS Lambda Developer Guide.
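If you prefer to script the packaging step, the following is a rough sketch that uses Python's standard library. It assumes that pip is on your PATH, that retrieve_pinpoint_kpis.py is in the current directory, and that the "package" directory and archive name are arbitrary choices.

import shutil
import subprocess

# Install the latest boto3 and botocore into a local "package" directory.
subprocess.run(
    ["pip", "install", "boto3", "botocore", "--target", "package"],
    check=True
)

# Add the function code so everything lands in a single archive.
shutil.copy("retrieve_pinpoint_kpis.py", "package/")

# Zip the *contents* of the package directory. Lambda expects the files
# at the root of the archive, not nested inside a folder.
shutil.make_archive("retrieve_pinpoint_kpis", "zip", root_dir="package")

This produces retrieve_pinpoint_kpis.zip, which you upload in the next step.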
Step 2: Set up the Lambda function
In this section, you upload the package that you created in the previous section to Lambda.
To set up the Lambda function
- In the Lambda console, create a new function from scratch. Choose the Python 3.7 runtime.
- Choose a Lambda execution role that contains the following permissions:
- Allows the action mobiletargeting:GetApplicationDateRangeKpi for the resource arn:aws:mobiletargeting:<awsRegion>:<yourAwsAccountId>:apps/*/kpis/*/*, where <awsRegion> is the Region where you use Amazon Pinpoint, and <yourAwsAccountId> is your AWS account number.
- Allows the action s3:PutObject for the resource arn:aws:s3:::<my_bucket>/*, where <my_bucket> is the name of the S3 bucket where you want to store the metrics.
For more information about creating IAM roles, see Creating a Role to Delegate Permissions to an AWS Service in the AWS Identity and Access Management User Guide. A sample policy document that grants both of these permissions appears after this procedure.
- Upload the .zip file that you created in the previous section.
- Change the Handler value to retrieve_pinpoint_kpis.lambda_handler.
- Save your changes.
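For reference, here's a minimal IAM policy document that grants both of the permissions described in the procedure above. It uses the same placeholders; replace <awsRegion>, <yourAwsAccountId>, and <my_bucket> with your own values.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "mobiletargeting:GetApplicationDateRangeKpi",
            "Resource": "arn:aws:mobiletargeting:<awsRegion>:<yourAwsAccountId>:apps/*/kpis/*/*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<my_bucket>/*"
        }
    ]
}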
Step 3: Schedule the execution of the function
At this point, the Lambda function is ready to run. The next step is to set up a trigger that runs it on a schedule. In this case, since we're retrieving an entire day's worth of data, we'll set up a scheduled trigger that runs every day at 11:59 PM UTC.
To set up the trigger
- In the Lambda console, in the Designer section, choose Add trigger.
- Create a new CloudWatch Events rule that uses the Schedule expression rule type.
- For the schedule expression, enter cron(59 23 ? * * *).
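In the expression above, the six fields are minutes, hours, day-of-month, month, day-of-week, and year, so cron(59 23 ? * * *) fires at 11:59 PM UTC every day. If you'd rather create the schedule programmatically, the following is a rough sketch that does the same thing with boto3. It assumes your function is named retrieve_pinpoint_kpis; the rule name is a hypothetical placeholder.

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

FUNCTION_NAME = "retrieve_pinpoint_kpis"  # assumed function name
RULE_NAME = "daily-pinpoint-kpi-export"   # hypothetical rule name

# Create (or update) a rule that fires at 23:59 UTC every day.
rule = events.put_rule(
    Name=RULE_NAME,
    ScheduleExpression="cron(59 23 ? * * *)"
)

# Allow CloudWatch Events to invoke the Lambda function.
lambda_client.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId=RULE_NAME,
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule['RuleArn']
)

# Point the rule at the function.
function_arn = lambda_client.get_function(
    FunctionName=FUNCTION_NAME
)['Configuration']['FunctionArn']

events.put_targets(
    Rule=RULE_NAME,
    Targets=[{"Id": "1", "Arn": function_arn}]
)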
Step 4: Create QuickSight Analyses
Once the data is populated in S3, you can start creating analyses in Amazon QuickSight. The process of creating new analyses involves a couple of tasks: creating a new data set, and creating your visualizations.
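Each object that the Lambda function stores under the quicksight-data prefix contains a single JSON document with the shape shown below. The values here are illustrative; yours will reflect your own project ID, metrics, and dates.

{
    "applicationId": "<projectId>",
    "kpiName": "email-open-rate",
    "date": "2019-08-15 23:59:00.000000",
    "value": "0.42"
}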
To create analyses in QuickSight
1. In a text editor, create a new file. Paste the following code:
{
"fileLocations": [
{
"URIPrefixes": [
"s3://<bucketName>/quicksight-data/"
]
}
],
"globalUploadSettings": {
"format": "JSON"
}
}
In the preceding code, replace <bucketName> with the name of the S3 bucket that you’re using to store the metrics data. Save the file as manifest.json.
2. Sign in to the QuickSight console at https://quicksight.aws.amazon.com.
3. Create a new S3 data set. When prompted, choose the manifest file that you created in step 1. For more information about creating S3 data sets, see Creating a Data Set Using Amazon S3 Files in the Amazon QuickSight User Guide.
4. Create a new analysis. From here, you can start creating visualizations of your data. To learn more, see Creating an Analysis in the Amazon QuickSight User Guide.
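The console flow above is the easiest way to get started, but if you want to automate this step, the S3 data source can also be created through the QuickSight API. The following is a minimal sketch, assuming you've uploaded manifest.json to the root of your metrics bucket; the account ID, bucket name, data source ID, and display name are all placeholders.

import boto3

quicksight = boto3.client('quicksight')

# Placeholders: substitute your own account ID and bucket name.
quicksight.create_data_source(
    AwsAccountId="<yourAwsAccountId>",
    DataSourceId="pinpoint-kpi-metrics",  # hypothetical identifier
    Name="Pinpoint KPI metrics",
    Type="S3",
    DataSourceParameters={
        "S3Parameters": {
            "ManifestFileLocation": {
                "Bucket": "<bucketName>",
                "Key": "manifest.json"
            }
        }
    }
)

From there, you can build data sets and analyses on top of this data source in the QuickSight console, as described above.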