The Internet of Things on AWS – Official Blog
Using AWS IoT Greengrass Version 2 with Amazon SageMaker Neo and NVIDIA DeepStream Applications
AWS IoT Greengrass Version 2 was released for general availability during re:Invent 2020. AWS IoT Greengrass is an Internet of Things (IoT) open source edge runtime and cloud service that helps you build, deploy, and manage device software. Customers use AWS IoT Greengrass for their IoT applications on millions of devices in homes, factories, vehicles, and businesses. AWS IoT Greengrass V2 offers an open source edge runtime, improved modularity, new local development tools, and improved fleet deployment features. This new version provides a component framework that manages dependencies and lets you reduce the size of deployments, since you only deploy the components your application requires. A component is defined with a YAML or JSON formatted recipe. Additionally, applications no longer have to be AWS Lambda based; you can now package a command-line application directly in your recipe in whatever language you choose. Of course, AWS IoT Greengrass V2 also provides components that enable you to run Lambda applications. You can access the AWS IoT Greengrass V2 open source project on GitHub and learn more about what's new in AWS IoT Greengrass Version 2 in the AWS IoT Greengrass Developer Guide.
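To illustrate the recipe format mentioned above, a minimal JSON component recipe might look like the following sketch; the component name, publisher, and script path are hypothetical placeholders, not values from the samples in this post:

```json
{
  "RecipeFormatVersion": "2020-01-25",
  "ComponentName": "com.example.HelloWorld",
  "ComponentVersion": "1.0.0",
  "ComponentDescription": "Example command-line component.",
  "ComponentPublisher": "Example",
  "Manifests": [
    {
      "Platform": { "os": "linux" },
      "Lifecycle": {
        "Run": "python3 -u {artifacts:path}/hello_world.py"
      }
    }
  ]
}
```

The `Lifecycle.Run` command is how a plain command-line application, in any language, is launched by the AWS IoT Greengrass Core software.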
This post will walk you through the following two use cases of the integration between AWS IoT Greengrass V2 and NVIDIA Jetson modules:
- How to deploy and use GPU accelerated Image Classification on NVIDIA Jetson modules (Nano, TX2, Xavier NX and AGX Xavier supported) with Amazon SageMaker Neo and
- How to deploy a Video Analytics Pipeline with NVIDIA DeepStream on Jetson modules.
The two use case walkthrough sections are not dependent on each other. If you want to only deploy the NVIDIA DeepStream application sample, you can skip Section 1 and go directly to Section 2.
Pre-requisites
- NVIDIA Jetson module (Nano, TX2, Xavier NX or AGX Xavier)
- NVIDIA JetPack 4.4
- Git
- opencv-python (included if you are using an SD card image; otherwise, install it with NVIDIA SDK Manager)
- NumPy (install this ahead of time as it can take some time to install)
- AWS IoT Greengrass V2
- AWS account with administrator access – If you don’t have one, see Set Up an AWS Account
- AWS Command Line Interface (CLI) with AWS IoT Greengrass V2 support
- [For part 2] NVIDIA DeepStream SDK installed on Jetson module
Section 0: AWS IoT Greengrass Version 2 Installation
This post is not an introduction to AWS IoT Greengrass V2. For detailed steps for installing and running AWS IoT Greengrass V2 on edge devices, refer to the getting started section of the developer guide.
If you just want to install AWS IoT Greengrass Version 2 on Jetson modules and get started quickly, you can use the installation script we prepared in GitHub and run the bash script on your Jetson module. Once it installs successfully, you can move on to the next sections.
Section 1: Image classification with Amazon SageMaker Neo compiled models
In this section, we are going to walk you through how to run an Amazon SageMaker Neo-compiled and optimized image classification neural network model. This is common in use cases such as animal image classification.
This example will take a pre-made JPEG image of a dog converted to a NPY file, perform inference (classification) on it, and send the results as a message to AWS IoT Core via MQTT.
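The JPEG-to-NPY step can be sketched as follows. This is a hypothetical helper (the actual `inference.py` in the sample repo may differ): it assumes a decoded 224x224 RGB image and applies standard ImageNet normalization for a ResNet18-style input, then saves the tensor as a NPY file:

```python
import numpy as np

# Hypothetical preprocessing sketch; the sample repo's script may differ.
# ResNet18 (ImageNet) models typically expect a 224x224 RGB image,
# normalized per channel and laid out as NCHW float32.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def to_npy_input(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB image into a 1x3x224x224 float32 tensor."""
    img = image_hwc_uint8.astype(np.float32) / 255.0   # scale to [0, 1]
    img = (img - IMAGENET_MEAN) / IMAGENET_STD         # per-channel normalize
    img = np.transpose(img, (2, 0, 1))                 # HWC -> CHW
    return img[np.newaxis, ...]                        # add batch dimension

if __name__ == "__main__":
    # Stand-in for a decoded, resized JPEG (e.g. via cv2.imread + cv2.resize).
    fake_image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
    tensor = to_npy_input(fake_image)
    np.save("dog.npy", tensor)
    print(tensor.shape)  # (1, 3, 224, 224)
```

In the walkthrough, the NPY file is already provided, so you do not need to run this conversion yourself.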
In the context of AWS IoT Greengrass V2, this section will deploy three AWS IoT Greengrass components on your Jetson module:
- variant.Jetson.DLR – installs the appropriate Amazon SageMaker Neo DLR on your device. Learn more about AWS IoT Greengrass V2 DLR Installer in the Developer Guide.
- variant.Jetson.ImageClassification.ModelStore – installs ResNet18 image classification models optimized for Jetson modules
- aws.greengrass.JetsonDLRImageClassification – Contains the Python example that does image classification and sends a message to AWS IoT Core using the MQTT protocol.
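To illustrate what the third component does with the model output, here is a hedged sketch of turning raw classification scores into an MQTT payload. `build_payload` and the label list are hypothetical, and the real component also publishes the message via the Greengrass IPC/MQTT client, which is omitted here:

```python
import json
import numpy as np

def build_payload(scores: np.ndarray, labels: list, k: int = 3) -> str:
    """Pick the top-k classes from raw scores and serialize a JSON message."""
    top = np.argsort(scores)[::-1][:k]                 # indices of best scores
    results = [{"label": labels[i], "score": float(scores[i])} for i in top]
    return json.dumps({"predictions": results})

if __name__ == "__main__":
    labels = ["cat", "dog", "bird", "fish"]
    scores = np.array([0.05, 0.90, 0.03, 0.02])
    # This JSON string would then be published to a topic such as demo/topic.
    print(build_payload(scores, labels, k=2))
```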
PLEASE NOTE: This example deployment will install some Python packages outside of a virtual environment. In particular, opencv-python is installed as part of JetPack 4.4, so installing the Debian package may take an extended period of time. NumPy can also take a long time to install.
Checkout components for deployment
In this section, we will clone the sample repository from GitHub and prepare the components for deployment. You will need Git installed to proceed.
To prepare the samples for deployment:
1. From your Jetson module, check out the GitHub repository with the following command:
git clone https://github.com/aws-samples/aws-iot-greengrass-v2-deploy-nvidia-deepstream.git
2. In the GitHub repository, copy the recipes in jetson_inference/recipes into your local GreengrassCore directory (i.e., ~/GreengrassCore/recipes). See the directory trees below, which show the source paths in GitHub versus what your GreengrassCore home directory should look like after you copy them.
Directory structure for deployment
3. Copy the directory contents of jetson_inference/artifacts to your GreengrassCore/artifacts directory so that the folder structure looks like the following.
GitHub Source
4. Next, we will upload the component versions to the AWS IoT Greengrass V2 cloud service. Run the following commands on your Jetson device in your GreengrassCore home directory (~/GreengrassCore) you copied the recipes and artifacts into:
aws greengrassv2 create-component-version --inline-recipe fileb://recipes/aws.greengrass.JetsonDLRImageClassification-1.0.0.json
aws greengrassv2 create-component-version --inline-recipe fileb://recipes/variant.Jetson.DLR-1.0.0.json
aws greengrassv2 create-component-version --inline-recipe fileb://recipes/variant.Jetson.ImageClassification.ModelStore-1.0.0.json
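create-component-version reads the component version from the recipe itself, so the version suffix in each file name should match the recipe's ComponentVersion field. A small sanity check (a hypothetical helper, not part of the repo) could verify this before uploading:

```python
import json
import pathlib
import tempfile

def recipe_matches_filename(path) -> bool:
    """Check that a recipe named like name-1.0.0.json declares that version."""
    p = pathlib.Path(path)
    version_in_name = p.stem.rsplit("-", 1)[-1]        # e.g. "1.0.0"
    with open(p) as f:
        recipe = json.load(f)
    return recipe.get("ComponentVersion") == version_in_name

if __name__ == "__main__":
    # Self-contained demo against a temporary recipe file.
    with tempfile.TemporaryDirectory() as d:
        recipe_path = pathlib.Path(d) / "com.example.Demo-1.0.0.json"
        recipe_path.write_text(json.dumps({
            "ComponentName": "com.example.Demo",
            "ComponentVersion": "1.0.0",
        }))
        print(recipe_matches_filename(recipe_path))  # True
```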
Deploy the sample components provided to Jetson module through AWS IoT Greengrass V2
Before starting this section, verify that you have a valid installation of AWS IoT Greengrass Core software v2 on your Jetson device; refer to the Pre-requisites section for help. Uploading components to the AWS IoT Greengrass V2 cloud service allows you to deploy the examples to the AWS IoT Greengrass Core software running on your device.
To deploy the components to your Jetson module:
- Navigate to the AWS IoT Core Console (https://console.aws.amazon.com/iot/home).
- Choose Greengrass.
- Choose Components – You should see the three components you created via the AWS CLI.
- Choose any one of the three components you created.
- Choose Deploy.
- Choose Create new deployment.
- Choose Next.
- For Name give the deployment a name.
- For Target type, choose Thing Group and enter the name of your device core, which can be found on the AWS IoT Greengrass Core page within the AWS Management Console: https://console.aws.amazon.com/iot/home?region=us-east-1#/greengrass/v2/cores.
- Choose Next.
- On the Select Components screen, make sure to select all three of the components you created and choose Next.
- On the Configure Components screen, choose Next.
- On the Configure advanced settings screen, choose Next.
- On the Review screen choose Deploy.
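If you prefer the CLI, the same deployment can be created with aws greengrassv2 create-deployment. The input JSON below is a sketch; the thing group ARN, account ID, and deployment name are placeholders:

```json
{
  "targetArn": "arn:aws:iot:us-east-1:123456789012:thinggroup/MyJetsonGroup",
  "deploymentName": "jetson-dlr-image-classification",
  "components": {
    "variant.Jetson.DLR": { "componentVersion": "1.0.0" },
    "variant.Jetson.ImageClassification.ModelStore": { "componentVersion": "1.0.0" },
    "aws.greengrass.JetsonDLRImageClassification": { "componentVersion": "1.0.0" }
  }
}
```

Save it as deployment.json and run aws greengrassv2 create-deployment --cli-input-json file://deployment.json.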
Verify Inference Results
If you have successfully deployed the three components, inference should start immediately.
The MQTT test client will show inference data coming from your Jetson device, resulting from a successful deployment. If you do not see any data, go back through the deployment steps above and verify successful completion of each procedure, or consult the AWS IoT Greengrass V2 Troubleshooting Guide.
To view results with the MQTT Test Client:
- On the AWS Management Console, choose AWS IoT Core
- Choose Test
- Choose MQTT Test Client
- Enter demo/topic for Subscription Topic
- Choose Subscribe to topic. You should see inference/classification messages arrive.
Section 2: Deploy NVIDIA DeepStream Application with AWS IoT Greengrass V2
NVIDIA’s Jetson product family enables customers to extend server-class compute to devices operating at the edge. NVIDIA has developed a streaming analytics toolkit called DeepStream to leverage TensorRT and CUDA to optimize AI performance at the edge. The DeepStream SDK provides an end-to-end video processing and ML inferencing analytics solution for transforming pixels and sensor data into actionable insights.
In this section, we will present how AWS can help deploy DeepStream apps and new ML models run by DeepStream apps on NVIDIA Jetson modules at scale with AWS IoT Greengrass V2.
For this demonstration, we will use the sample model and sample DeepStream application developed by NVIDIA in their DeepStream SDK as an example. You are also welcome to use your customized models and DeepStream apps.
Before starting the deployment process, first verify your DeepStream installation on your Jetson module:
1. Enter this command in your terminal to start the reference application:
$ deepstream-app -c <path_to_config_file>
where <path_to_config_file> is the path to one of the reference application's configuration files.
2. Verify the DeepStream application runs successfully in your terminal.
Note: if you are using a Jetson Nano, we recommend using /opt/nvidia/deepstream/<your deepstream version>/samples/configs/deepstream-app/source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt as your configuration file.
If the sample app starts without reporting errors, proceed to the deployment section of this article. If you have trouble running the sample app, refer to the troubleshooting section of the DeepStream documentation for more details.
To deploy the NVIDIA DeepStream Application with AWS IoT Greengrass V2:
Step 0: Create a DeepStream application package
Before completing the steps in this section, first determine if you’d like to use a sample DeepStream application package or use your own customized deployment package. We have created a sample DeepStream application package that consists of three components:
- ML model (directly sourced from DeepStream SDK provided by NVIDIA: https://developer.nvidia.com/deepstream-download)
- Modified version of sample DeepStream application configuration file.
- Modified version of sample DeepStream primary GIE configuration file.
You can use this as an example to follow along the deployment steps, or you can use your own customized version of DeepStream configuration files and your own trained ML models.
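If you assemble your own package instead, it is simply a zip archive of the model and configuration files. The following is a minimal, hypothetical packaging sketch; the file names are placeholders, not the repo's actual layout:

```python
import pathlib
import tempfile
import zipfile

def package_deployment(src_dir, zip_path) -> list:
    """Zip every file under src_dir into zip_path; return the archived names."""
    src = pathlib.Path(src_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(src.rglob("*")):
            if f.is_file():
                zf.write(f, f.relative_to(src))
    with zipfile.ZipFile(zip_path) as zf:
        return zf.namelist()

if __name__ == "__main__":
    # Self-contained demo with placeholder config files.
    with tempfile.TemporaryDirectory() as d:
        src = pathlib.Path(d) / "pkg"
        src.mkdir()
        (src / "deepstream_app_config.txt").write_text("# app config")
        (src / "config_infer_primary.txt").write_text("# primary GIE config")
        print(package_deployment(src, pathlib.Path(d) / "jetson_deployment.zip"))
```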
Step 1: Prepare local environment
1. If you have not yet cloned the GitHub repository, run the following command to clone it locally:
git clone https://github.com/aws-samples/aws-iot-greengrass-v2-deploy-nvidia-deepstream.git
2. Export the path to the GitHub repository as an environment variable by running:
cd aws-iot-greengrass-v2-deploy-nvidia-deepstream
export DEMO_PATH=${PWD}
Step 2: Upload your package into an Amazon S3 bucket
- Create an S3 bucket by running the following command (if you already have an S3 bucket, skip this step and use your existing bucket in the steps below):
aws s3 mb s3://[YOUR_S3_BUCKET_NAME]
- Change into the nvidia_deepstream_integration folder in your GitHub repository by running:
cd $DEMO_PATH/nvidia_deepstream_integration
- Upload the prepared sample deployment package to your S3 bucket:
aws s3 cp jetson_deployment.zip s3://[YOUR_S3_BUCKET_NAME]/jetson_deployment.zip
Step 3: Create an AWS IoT Greengrass V2 component
- Rename the greengrass_component.json file by adding a suffix that AWS IoT Greengrass V2 uses as the version number. For example:
mv greengrass_component.json greengrass_component-1.0.0.json
- Open the greengrass_component-1.0.0.json file with your text editor, and replace the placeholder [YOUR_S3_BUCKET_NAME] with the actual bucket name you used in Step 2.
- Upload the AWS IoT Greengrass V2 component to the AWS IoT Greengrass cloud service:
aws greengrassv2 create-component-version --inline-recipe fileb://greengrass_component-1.0.0.json
You will see a message like the following returned by the AWS CLI:
{
    "arn": "arn:aws:greengrass:us-west-2:XXXXXXXXXXXX:components:deepstream-deployment:versions:1.0.0",
    "componentName": "deepstream-deployment",
    "componentVersion": "1.0.0",
    "creationTimestamp": "2021-03-19T14:13:30.126000-07:00",
    "status": {
        "componentState": "REQUESTED",
        "message": "NONE",
        "errors": {}
    }
}
Note: the default AWS IoT Greengrass V2 role does not have Amazon S3 access. If you have not already done so, add S3 read access to your AWS IoT Greengrass V2 role, either by running the following AWS CLI command or manually in the AWS Management Console.
aws iam attach-role-policy --role-name [Your_Greengrass_V2_role_name] --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
If you encounter Invalid choice: 'greengrassv2', you need to update your AWS CLI to the latest version.
Step 4: Deploy the AWS IoT Greengrass V2 component
- Navigate to the Components page in the AWS IoT console, choose the deepstream-deployment component, and choose Deploy.
- Verify that the component ran successfully from the AWS IoT Greengrass Core software v2 runtime logs located at /greengrass/v2/logs.
- In that folder, there’s greengrass.log (for the nucleus) and <componentName>.log for each component.
- You can also verify by observing if you are receiving inferencing results on your configured DeepStream pipeline sink.
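As a small illustration of the log check above, a helper like the following (hypothetical, demonstrated here against a temporary directory standing in for /greengrass/v2/logs) reads the last lines of a component's log file:

```python
import pathlib
import tempfile

def tail_component_log(log_dir, component: str, n: int = 5) -> list:
    """Return the last n lines of <component>.log under log_dir, if present."""
    path = pathlib.Path(log_dir) / f"{component}.log"
    if not path.exists():
        return []
    return path.read_text().splitlines()[-n:]

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        log = pathlib.Path(d) / "deepstream-deployment.log"
        log.write_text("\n".join(f"line {i}" for i in range(10)))
        print(tail_component_log(d, "deepstream-deployment", n=2))  # ['line 8', 'line 9']
```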
Section 3: Additional Resources
Local deployment of AWS IoT Greengrass V2 components
You can also deploy locally without an internet connection as outlined in the AWS IoT Greengrass V2 Getting Started Guide (Create your first component section).
Change camera source
You can replace the static image input with a camera interface. Because most Jetson modules do not come with cameras, the method for interfacing will vary by camera type. Refer to inference.py for more details.
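As one common approach, a CSI camera on Jetson modules is often opened through a GStreamer pipeline. The sketch below builds such a pipeline string; the element names follow NVIDIA's nvarguscamerasrc conventions, and the actual OpenCV call is shown commented out since your camera and OpenCV build may differ:

```python
def csi_pipeline(width: int = 1280, height: int = 720, fps: int = 30) -> str:
    """Build a GStreamer pipeline string for a Jetson CSI camera capture."""
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! nvvidconv ! video/x-raw, format=BGRx ! "
        f"videoconvert ! video/x-raw, format=BGR ! appsink"
    )

if __name__ == "__main__":
    pipeline = csi_pipeline()
    print(pipeline)
    # A capture could then be opened with (requires OpenCV built with GStreamer):
    # import cv2
    # cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
```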
DeepStream IoT Test Applications (test 4 or test 5 in DeepStream application)
DeepStream applications include a plugin called Gst-nvmsgbroker. This plugin can send payload messages to AWS IoT Core using the MQTT protocol. It accepts any buffer with NvDsPayload metadata attached and uses the nvds_msgapi_* interface to send the messages to the server. If you want to use AWS IoT Core or AWS IoT Greengrass as a message broker sink for your DeepStream application, you need the shared library from this GitHub repository: AWS IoT Core Integration with NVIDIA DeepStream.
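For instance, in a deepstream-app configuration file, a message broker sink is a sink group with type=6. The values below are placeholders: the library path, connection string, and topic depend on how you built and configured the AWS protocol adapter:

```ini
[sink1]
enable=1
# Sink type 6 = message broker (Gst-nvmsgbroker) in deepstream-app configs
type=6
msg-conv-payload-type=0
msg-broker-proto-lib=/path/to/libnvds_aws_proto.so
msg-broker-conn-str=<your-iot-endpoint>;8883
topic=deepstream/messages
```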
NVIDIA DeepStream integration with AWS IoT Greengrass V1 (legacy)
To review the integration between DeepStream and AWS IoT Greengrass V1, refer to the following GitHub repository: https://github.com/aws-samples/aws-iot-greengrass-deploy-nvidia-deepstream-on-edge
Summary: Start building!
In this post we’ve shown two ways to use AWS IoT Greengrass V2 on NVIDIA Jetson devices: classify images using SageMaker Neo and deploy a DeepStream video analytics pipeline for video data. To help you evaluate, test, and develop with this new release of AWS IoT Greengrass, the first 1,000 devices in your account will not incur any AWS IoT Greengrass charges until December 31, 2021. You will still incur charges for other AWS services you use with your applications running on AWS IoT Greengrass such as AWS IoT Core. We can’t wait to see what you build!
References
Chihuahua picture is part of the Stanford ImageNet resource collection located at http://vision.stanford.edu/aditya86/ImageNetDogs.
NVIDIA DeepStream Developer Guide: https://developer.nvidia.com/deepstream-getting-started
AWS IoT Greengrass V2 Developer Guide: https://docs.aws.amazon.com/greengrass/index.html
About The Authors
Ryan Vanderwerf is a Partner Solutions Architect focusing on IoT partnerships
Yuxin Yang is an IoT Consultant in AWS Professional Services