AWS Open Source Blog

How to use InfluxDB and Grafana to visualize ML output with AWS IoT Greengrass

Machine learning (ML) algorithms are widely used for computer vision (CV) applications, such as image classification, object detection, and semantic segmentation. With the latest developments in the Industrial Internet of Things (IIoT), ML algorithms can be implemented directly on edge devices to process image data and perform anomaly detection with low latency, such as for product quality assurance tasks on the shop floor. The recently released AWS IoT Greengrass version 2 (Greengrass v2) helps developers deploy CV ML applications at the edge along with the necessary custom data pipeline components, including data ingestion and data preprocessing logic.

In this blog post, we’ll show an end-to-end workflow for using open source tools with AWS IoT Greengrass version 2 to visualize ML inference results in near real-time on an edge device.

Introduction

In many anomaly detection and object classification applications, users want to visualize the ML inference output at a local edge machine with a real-time dashboard tool so they can review, approve, or intervene in the ML inference outcomes. With a near real-time business intelligence (BI) tool at the edge, the operational technology team can consume the ML outputs in a timely manner and make appropriate business decisions.

In this article, we present a workflow showing how to implement a CV ML application with visualization at the edge using two open source tools: InfluxDB and Grafana. For demonstration purposes, this setup uses a pretrained ResNet-18 classification model published by PyTorch to classify objects across 1,000 categories. This pretrained model is then compiled with Amazon SageMaker Neo and deployed as an edge inference component with Greengrass.

A second component is developed to visualize ML inference results at the edge using InfluxDB and Grafana. These open source tools are able to achieve high-speed ingestion and visualization of time-stamped image data in near real time.

The data transfer and communication between these two custom components is accomplished via interprocess communication (IPC) in AWS IoT Greengrass v2. In this setup, the CV ML component first publishes the inference result to the Greengrass IPC broker. A subscriber listens on the same topic to which the ML inference script publishes; each received inference result is written to InfluxDB and finally displayed as images on the predictive Grafana dashboard.
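To make the message flow concrete, here is a minimal sketch of the publisher side, assuming the AWS IoT Device SDK v2 for Python and a hypothetical topic name ml/inference/results (the actual component code lives in the GitHub repository referenced later):

import json

import awsiot.greengrasscoreipc
from awsiot.greengrasscoreipc.model import (
    BinaryMessage,
    PublishMessage,
    PublishToTopicRequest,
)

TIMEOUT = 10
TOPIC = "ml/inference/results"  # hypothetical topic name for this sketch

# Connect to the Greengrass IPC broker from inside a component
ipc_client = awsiot.greengrasscoreipc.connect()

# Example inference payload; the field names mirror the dashboard columns shown in step 4
payload = {
    "Prediction": "Labrador retriever",
    "Probability": 0.93,
    "InferenceTotalTime": 42,  # milliseconds
    "Picture": "<base64-encoded image string>",
}

request = PublishToTopicRequest()
request.topic = TOPIC
publish_message = PublishMessage()
publish_message.binary_message = BinaryMessage()
publish_message.binary_message.message = bytes(json.dumps(payload), "utf-8")
request.publish_message = publish_message

# Publish and wait for the broker to acknowledge
operation = ipc_client.new_publish_to_topic()
operation.activate(request)
operation.get_response().result(TIMEOUT)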

The following walkthrough contains detailed steps to develop this image visualization workflow with open source tools at the edge. No specialized ML experience is needed to follow this example and build the described workflow.

Time to read: 20 minutes
Time to complete: 120 minutes
Cost to complete (estimated): less than $1 (at publication time)
Learning level: Advanced (300)
Services used: Amazon EC2, AWS IoT Greengrass v2, Amazon SageMaker Neo, Amazon SageMaker notebook, InfluxDB, Grafana

Solution overview

The following image shows the solution architecture for this edge workflow. Two user-defined components, ML inference and IPC message subscriber with data entry to InfluxDB, are deployed with AWS IoT Greengrass v2 at the edge. Once the time-stamped ML inference data is written to InfluxDB, a table type Grafana dashboard is built to filter and display ML inference results.

In this example, we used an Ubuntu 18.04 LTS EC2 instance running the Greengrass v2 runtime to simulate an edge device connected to the AWS IoT platform.

Figure 1: Solution architecture for this edge workflow.

Walkthrough

In the following sections, we will cover four steps:

  1. Set up InfluxDB, Grafana, and Greengrass v2 on an Amazon Elastic Compute Cloud (Amazon EC2) instance.
  2. Create and deploy the ML component.
  3. Create the subscriber component that writes data to InfluxDB.
  4. Create a Grafana dashboard to visualize ML inference results.

The source code for each component is hosted in a GitHub repository.

Prerequisites

The following prerequisites are necessary to complete the setup as described:

  • An AWS account. If you don’t have an AWS account, follow the instructions to create one, unless you have been provided Event Engine details.
  • A user role with administrator access. The service access associated with this role can be constrained further when the workflow goes to production.
  • Recent modern browser (for example, latest Firefox or Chrome).
  • No specialized knowledge is required to build this solution, but basic Linux and Python knowledge will help.

Step 1. Set up InfluxDB, Grafana, and Greengrass on an Amazon EC2 instance

EC2 instance preparation work

To start, you must set up an Amazon EC2 instance of type c5.xlarge with the Ubuntu 18.04 base image in your AWS account with the following user data:

# Install Python tooling and utilities required by the Greengrass components
apt-get update
apt-get install -y python3-pip zip jq
# Pillow/numpy for image handling; dlr for running the SageMaker Neo compiled model
pip3 install Pillow numpy dlr
# The AWS IoT Device SDK v2 provides the Greengrass IPC client used by the components
git clone https://github.com/aws/aws-iot-device-sdk-python-v2.git
python3 -m pip install ./aws-iot-device-sdk-python-v2

After the EC2 instance’s status changes to running, follow the instructions to set up AWS Systems Manager Session Manager Access role to access this EC2 instance without SSH.

Finally, you must install the AWS Command Line Interface (CLI) v2 on the Amazon EC2 instance:

wget https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip
unzip awscli-exe-linux-x86_64.zip
sudo ./aws/install

Connect with the EC2 instance via AWS Systems Manager Session Manager (SSM)

To connect with the Amazon EC2 instance via the AWS Systems Manager Session Manager, complete the following steps:

  1. Open the Amazon EC2 console.
  2. In the navigation pane, choose Instances (running).
  3. Select the Ubuntu instance you set up previously and choose Connect.
  4. For Connection method, select Session Manager.
  5. Choose Connect.

If the Connect option under Session Manager is not available, you’ll need to refer to the tutorial and update the policies to allow you to start sessions from the Amazon EC2 console.

Install Greengrass on EC2 instance

Next, copy the greengrass_install.sh script provided by the GitHub repo to the home directory on your EC2 instance.

  1. Modify the script with your own AWS Region.
  2. Run the following commands:
    chmod 777 greengrass_install.sh
    ./greengrass_install.sh
    systemctl status greengrass.service

Greengrass should show active status similar to the following example:

Figure 2: Greengrass should show active status similar to this output.
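For reference, greengrass_install.sh wraps the standard Greengrass v2 installer. The following is a hedged sketch of a typical invocation (the thing and group names match the DemoJetson resources removed during cleanup, but your script may differ):

# Download and unpack the Greengrass v2 nucleus
curl -s https://d2s8p88vqu9w66.cloudfront.net/releases/greengrass-nucleus-latest.zip -o greengrass-nucleus-latest.zip
unzip greengrass-nucleus-latest.zip -d GreengrassInstaller

# Install Greengrass as a system service and auto-provision the core device
sudo -E java -Droot="/greengrass/v2" -Dlog.store=FILE \
  -jar ./GreengrassInstaller/lib/Greengrass.jar \
  --aws-region <your-region> \
  --thing-name DemoJetson \
  --thing-group-name DemoJetsonGroup \
  --component-default-user ggc_user:ggc_group \
  --provision true \
  --setup-system-service true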

You can also check the Greengrass core device status from the AWS IoT Greengrass console in your specified AWS Region as shown in the following image:

Figure 3: Greengrass Core device status from the AWS IoT Greengrass console in your specified AWS Region.

Set up InfluxDB credentials and bucket

To start, install InfluxDB 2.0 and Grafana v8.0.4 on this Amazon EC2 instance by following the GitHub instructions.

Before writing data to InfluxDB, you must configure a user credential and a bucket. InfluxDB v2.0 stores time series data in the bucket for a given retention period and drops all points with timestamps older than that period. Because of the limited disk space on edge devices, we recommend that you do not use an infinite retention period.
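If you later need to change the retention period (for example, because disk space runs low), the influx CLI can update the bucket created in the steps below; a short sketch, assuming the influx CLI v2 flags:

influx bucket list                                      # note the bucket ID
influx bucket update --id <bucket-id> --retention 24h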

InfluxDB also requires user authentication for unified access control. You can configure InfluxDB with the InfluxDB CLI by running this command in your SSM session:

influx setup

Next, configure InfluxDB by responding to the following prompts:

  1. Enter a primary user name (for example, influxblog).
  2. Enter a password for your user.
  3. Confirm your password by entering it again.
  4. Enter a name for your primary organization (for example, ggv2demo).
  5. Enter a name for your primary bucket (for example, mloutput).
  6. Enter a retention period for your primary bucket. For this value, you should enter the number of hours the time series data should be kept at the edge device (for example, 24).
  7. Confirm the details for your primary user, organization, and bucket.
  8. Before continuing, note the token string for InfluxDB authentication, which you can list with the following command:
    influx auth list

    The token string will be shown under Token as follows:

    Figure 4: The token string will be shown under Token, as shown in this output.

After the initial setup is finished, the user profile and database configuration will be used in step 3 by the Greengrass subscriber component to write data points to InfluxDB.
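If you prefer to script this step rather than answer prompts, influx setup also accepts flags; a sketch using the example values above (the password is a placeholder, and the flags are the influx CLI v2 ones):

influx setup \
  --username influxblog \
  --password "<your-password>" \
  --org ggv2demo \
  --bucket mloutput \
  --retention 24h \
  --force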

Finally, you must set up a connection configuration profile and set it to active to use the Influx CLI to query data securely:

influx config create -n "Replace with a chosen config name" -u "http://localhost:8086" -o "Replace with your organization" -t "Replace with your token string"

Set up Grafana dashboard with port forwarding for Session Manager

In a production environment, we recommend that the operational technology team access the Grafana dashboard at the edge through an enhanced authentication mechanism, such as OAuth with Amazon Cognito, so users can easily access the dashboard through a URL or the IP address of the edge device.

To keep this blog post to a reasonable length, we only configure the port forwarding feature of AWS Systems Manager Session Manager here to prevent anonymous access to the Grafana application at the edge. As noted in New – Port Forwarding Using AWS System Manager Session Manager, port forwarding allows you to create secure tunnels to instances deployed in private subnets, without the need to start the SSH service on the server, open the SSH port in the security group, or use a bastion host.

To start, check the Grafana application status in your SSM session with the following command:

sudo systemctl status grafana-server

Then complete the following steps:

  1. Configure the Grafana server to start at boot:
    sudo systemctl enable grafana-server.service
  2. Follow the documentation to configure the AWS CLI with your AWS account user profile credentials on your laptop. Ensure that the AWS CLI version is 1.16.220 or later.
  3. Install the Session Manager plugin for AWS CLI on your laptop.
  4. Start a port forwarding session with SSM for the Amazon EC2 instance on your laptop:

    Windows:
    aws ssm start-session --target <your-EC2-instance-ID> --document-name AWS-StartPortForwardingSession --parameters portNumber="3000",localPortNumber="3000"

    Linux/macOS:
    aws ssm start-session \
        --target <your-EC2-instance-ID> \
        --document-name AWS-StartPortForwardingSession \
        --parameters '{"portNumber":["3000"], "localPortNumber":["3000"]}'

    A successful connection produces the following response:

    Starting session with SessionId: 
    Port 3000 opened for sessionId 
    Waiting for connections...
    
  5. Open a web browser on your laptop and log in to Grafana with the following address:
    http://localhost:3000

    The following Grafana login page should be shown:

    Figure 5: Grafana login page.

  6. Follow the prompts to set a new password for Grafana, and note the password for future Grafana dashboard access.

Step 2: Create and deploy the ML component

In this example, the pretrained PyTorch ResNet-18 model is compiled with Amazon SageMaker Neo for inference. SageMaker Neo automatically optimizes machine learning models for inference on cloud instances and edge devices so they run faster with no loss in accuracy.

To do this, start with a machine learning model already built with DarkNet, Keras, MXNet, PyTorch, TensorFlow, TensorFlow-Lite, ONNX, or XGBoost and trained in Amazon SageMaker or anywhere else. Then, choose your target hardware platform, which can be a SageMaker hosting instance or an edge device based on processors from Ambarella, Apple, Arm, Intel, MediaTek, Nvidia, NXP, Qualcomm, RockChip, Texas Instruments, or Xilinx.

With a single click, SageMaker Neo optimizes the trained model and compiles it into an executable. The compiler uses an ML model to apply the performance optimizations that extract the best available performance for your model on the cloud instance or edge device. We use SageMaker Neo with AWS IoT Greengrass for the following reasons:

  • Installing the PyTorch framework on the edge device is no longer required.
  • Amazon SageMaker Neo uses TensorRT + TVM technology to optimize performance. (We are able to achieve a 3x performance increase by switching from the PyTorch model to a SageMaker Neo compiled model.)
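The notebook used in the next section automates the compilation, but at its core it issues a single create_compilation_job call. Here is a condensed sketch, assuming the traced ResNet-18 model.tar.gz is already in S3; the job name, role, and bucket paths are placeholders:

import boto3

sagemaker_client = boto3.client('sagemaker', region_name='Replace with your region')

# The traced PyTorch model (model.tar.gz) must already be uploaded to S3
sagemaker_client.create_compilation_job(
    CompilationJobName='TorchVision-ResNet18-Neo-demo',  # placeholder job name
    RoleArn='arn:aws:iam::<account-id>:role/<SageMaker-execution-role>',
    InputConfig={
        'S3Uri': 's3://<your-bucket>/resnet18/model.tar.gz',
        'DataInputConfig': '{"input0": [1, 3, 224, 224]}',  # ResNet-18 input shape
        'Framework': 'PYTORCH',
    },
    OutputConfig={
        'S3OutputLocation': 's3://<your-bucket>/resnet18/output/',
        'TargetDevice': 'ml_c5',  # matches the c5 EC2 instance simulating the edge device
    },
    StoppingCondition={'MaxRuntimeInSeconds': 900},
)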

Download sample model and compile with Amazon SageMaker Neo

In this step, we will launch a lightweight SageMaker notebook instance to walk through the procedure of compiling a PyTorch ResNet-18 model with the SageMaker Neo service.

  1. Launch a small SageMaker notebook instance (t2 or t3).
  2. Create a new Jupyter notebook with the conda_pytorch_latest_p36 kernel.
  3. Upload the notebook we have prepared in GitHub, then change the SageMaker client region parameter to the same region as the EC2 instance:
    sagemaker_client = boto3.client('sagemaker', region_name='Replace with your region')
  4. Run all the steps in order to generate the SageMaker Neo compiled model.
  5. Check the Amazon Simple Storage Service (Amazon S3) bucket and make sure that the following artifacts exist in the Amazon S3 path you defined in your notebook. In our demo, we have stored them in the following:
    s3://sagemaker-us-xxxx-x-xxxxxxxx/TorchVision-ResNet18-Neo-YYYY-MM-DD-hh-mm-ss-xxx/output/
Figure 6: Amazon S3 bucket showing artifacts in the S3 path defined in the notebook.
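At the edge, the Neo-compiled artifact is loaded with the open source DLR runtime (installed via pip3 install dlr in the user data earlier). A minimal sketch of the inference call, with the model path and input preprocessing left as assumptions:

import dlr
import numpy as np

# Path where the Greengrass component decompressed the Neo-compiled model (assumption)
model = dlr.DLRModel('/path/to/compiled_model', 'cpu')

# A preprocessed image batch: 1 x 3 x 224 x 224, normalized as ResNet expects
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)

# run() returns a list of output arrays; for ResNet-18, the 1,000-class scores
outputs = model.run({'input0': input_data})
prediction = int(np.argmax(outputs[0]))
print('Predicted class index:', prediction)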

Prepare edge inferencing Greengrass component

Components are building blocks that allow easy creation of complex workflows, such as ML inference, local processing, messaging, and data management. Components running on your core device can use the AWS IoT Greengrass Core IPC library in the AWS IoT Device SDK to communicate with other Greengrass components and processes.

The following steps show how to deploy the previous packaged ML model as an edge inference component on Greengrass v2.

  1. Change directory to modules/edge-inference/aws-gg-deploy in the directory where the Git repository was cloned to the EC2 instance.
  2. Modify the deployment script deploy-edge.sh by replacing the following placeholders with your customized values in the _setEnv() section:
    • YOUR_AWS_ACCOUNT_NUMBER
    • YOUR_AWS_REGION
    • S3_BUCKET for model artifacts
    • COMPILATION_NAME (obtained as the first folder of the model artifact in S3, following the pattern TorchVision-ResNet18-Neo-YYYY-MM-DD-hh-mm-ss-xxx)

    Then press Ctrl-X and select Y to save the modified deploy-edge.sh file with the same name.

  3. Run the following script to deploy this component:
    export AWS_ACCESS_KEY_ID=<your AWS access key ID>
    export AWS_SECRET_ACCESS_KEY=<your AWS secret access key>
    export AWS_SESSION_TOKEN=<your AWS session token>
    chmod 777 deploy-edge.sh
    ./deploy-edge.sh

This Bash script takes approximately 10 seconds to finish. When it finishes, confirm that the component was created in your AWS IoT Greengrass v2 console.

Step 3: Create the subscriber component that writes data to InfluxDB

Prepare InfluxDB subscriber Greengrass component

  1. Change directory to modules/influxdb-subscriber/aws-gg-deploy in the directory where the Git repository was cloned to the Amazon EC2 instance.
  2. Modify the deploy-edge.sh script by replacing the following placeholders with your customized values:
    • YOUR_AWS_ACCOUNT_NUMBER
    • YOUR_AWS_REGION
    • S3_BUCKET for component artifacts (in this example, the S3 bucket used to store edge-inference component artifacts is also used in this influxdb-subscriber component)

    Save the deploy-edge.sh file with Ctrl-X, then select Y to save the modified file with the same name before exiting.

  3. Modify the recipe-file-template.yaml file in the same directory by adding arguments of your InfluxDB configuration in the Run command in the Manifests section as shown in the following snippet:
    Run:
      Script: python3 -u {artifacts:decompressedPath}/$artifacts_zip_file_name/$artifacts_entry_file --token="Replace with your token string" --b="Replace with your bucket name" --o="Replace with your org name" --m="Replace with a measurement name of your choice"

    Replace the default token_string, bucket_name, org, and measurement_name for InfluxDB in main.py with your own InfluxDB parameters. Save this recipe template file with Ctrl-X, then select Y to save the modified recipe file with the same name.

  4. Run the following script to deploy this component:
    export AWS_ACCESS_KEY_ID=<your AWS access key ID>
    export AWS_SECRET_ACCESS_KEY=<your AWS secret access key>
    export AWS_SESSION_TOKEN=<your AWS session token>
    chmod 777 deploy-edge.sh
    ./deploy-edge.sh

    This Bash script takes approximately 10 seconds to finish.

  5. Once the component is running, you can check the Python script's checkpoint outputs (for example, ***Write Points Finished ***) in the log to verify the component status:
    sudo tail -200f /greengrass/v2/logs/influxdb-subscriber.log

For the base64 string containing the image output from the ML inference component, a prefix is added to the data before it is written to InfluxDB:

payload_message['Picture'] = "data:image/png;base64, " + payload_message['Picture']
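Putting these pieces together, the subscriber's stream handler looks roughly like the following sketch, which uses the AWS IoT Device SDK v2 and the influxdb-client Python package; the actual component in the repository reads the connection values from the recipe arguments shown above:

import json

import awsiot.greengrasscoreipc.client as client
from awsiot.greengrasscoreipc.model import SubscriptionResponseMessage
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Connection values come from the recipe arguments (--token, --b, --o, --m)
influx = InfluxDBClient(url='http://localhost:8086', token='<token>', org='<org>')
write_api = influx.write_api(write_options=SYNCHRONOUS)

class StreamHandler(client.SubscribeToTopicStreamHandler):
    def on_stream_event(self, event: SubscriptionResponseMessage) -> None:
        payload_message = json.loads(str(event.binary_message.message, 'utf-8'))
        # Prefix the base64 image so Grafana's Image cell display mode can render it
        payload_message['Picture'] = 'data:image/png;base64, ' + payload_message['Picture']
        # Write every payload field to a time-stamped point in the measurement
        point = Point('<measurement>')
        for field, value in payload_message.items():
            point = point.field(field, value)
        write_api.write(bucket='<bucket>', record=point)
        print('***Write Points Finished ***')

    def on_stream_error(self, error: Exception) -> bool:
        return True  # return True to close the stream on error

    def on_stream_closed(self) -> None:
        pass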

Step 4: Create a Grafana dashboard to visualize ML inference results

To configure the InfluxDB bucket as a Grafana data source, complete the following steps:

  1. Choose Data sources.
  2. Choose Add data source and select InfluxDB.
  3. Set up the InfluxDB data source following the documentation reference. In this example, we chose Flux as the query language, which offers broader functionality with InfluxDB v2.0.
  4. Fill in the authentication information for InfluxDB (user name, password, org, token, and bucket).
  5. Choose Save and Test. If successful, the result shows how many buckets were found under this org.
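If you prefer to configure the data source as code instead of through the UI, Grafana also supports file-based provisioning. A sketch of a provisioning file such as /etc/grafana/provisioning/datasources/influxdb.yaml (the file name is an assumption), using the example values from step 1:

apiVersion: 1
datasources:
  - name: InfluxDB-edge
    type: influxdb
    access: proxy
    url: http://localhost:8086
    jsonData:
      version: Flux          # query InfluxDB v2.0 with the Flux language
      organization: ggv2demo
      defaultBucket: mloutput
    secureJsonData:
      token: <your InfluxDB token string>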

To build a table type dashboard for ML inference outputs, do the following:

  1. Move your cursor to the + icon on the side menu and choose Create dashboard. Then, select Add an empty panel.
  2. Under Query, enter the following Flux query:
    from(bucket: "REPLACE YOUR BUCKET")
      |> range(start: -1h)
      |> filter(fn: (r) => r._measurement == "REPLACE YOUR MEASUREMENT NAME")
      |> keep(columns: ["_time", "_field", "_value"])
      |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
  3. Next, change the visualization in the right panel to Table. The data points in InfluxDB should appear in the panel.

    In this dashboard, the statistics for each ML model inference are visualized, including InferenceStartTime, InferenceEndTime, InferenceTotalTime (in ms), Probability, and Prediction. The operational technology team can thus review the ML inference results on this dashboard in real time.
  4. In this step, you can configure the Picture field, which contains base64 string data, to display as an image. To do so, select the Overrides tab, then choose Add field override. Select the Fields with name option from the drop-down menu and choose the Picture field. Next, choose Add override property and set the Cell display mode to Image for the Picture column.
  5. Once this has been configured, select Apply, and the Picture column, originally base64 string data, will display images of dogs.

    After 10 seconds, you can refresh the Grafana dashboard, and the latest inference results will be available to view. With these steps, a simple table type Grafana dashboard is built to show time-stamped ML inference output at the edge, so users can remotely examine image outputs and approve/reject outputs.

This Grafana dashboard also helps users monitor the ML model's performance for each inference by clearly showing the model type, inference duration, probability of the inference, and final prediction result. This workflow can be further enhanced into a human-in-the-loop workflow, so the inference results can be used as future training data to improve the ML model's accuracy.

Clean up

You must also perform clean-up steps in the following areas.

AWS IoT

Open the AWS IoT Core console. Under AWS IoT, do the following:

  1. On the Greengrass Core devices tab, select the DemoJetson core device and choose Delete at the top right.
  2. Under Manage, Thing groups, delete DemoJetsonGroup.
  3. Under Manage, Things, delete DemoJetson.
  4. Under Policies, delete GreengrassV2IoTThingPolicy and GreengrassTESCertificatePolicyGreengrassV2TokenExchangeRoleAlias2.
  5. Under Secure, Role Aliases, delete GreengrassV2TokenExchangeRoleAlias.

Amazon S3

  1. Navigate to the S3 console and locate the component bucket you used previously.
  2. Empty the component bucket.
  3. Delete the component bucket.

Amazon EC2

  1. Navigate to the Amazon EC2 console.
  2. Stop the EC2 instance by selecting Stop Instance under Instance State.
  3. After the instance stops, select Terminate Instance under Instance State to shut down this EC2 instance.

Amazon SageMaker

  1. Navigate to the SageMaker notebook instance console.
  2. Stop the SageMaker notebook instance by selecting the instance you started for model preparation, and select Actions, Stop.
  3. After the instance has stopped, select Delete under Actions to remove this notebook instance.

IAM roles

  1. Navigate to the IAM console.
  2. Delete the IAM role created for the Ubuntu EC2 instance.
  3. Delete the Amazon EC2 SSM access policy.
  4. Delete the IAM user created for Greengrass v2.
  5. Delete the policy that was attached to the Greengrass user.
  6. Delete the IAM role named in the format of AmazonSageMaker-ExecutionRole-xxxxxxxxxx.

These steps complete the deletion of the resources created for this example.

Conclusion

This article shows an end-to-end workflow for using open source tools (InfluxDB and Grafana) to visualize ML inference results in near real time on an edge device. AWS IoT Greengrass v2 reduces the complexity of this IoT edge workflow by providing an IPC library that allows communication between different edge modules, so open source tools can easily be integrated with other ML components in an edge workflow.

With InfluxDB’s time series database, image files can be written as time-stamped base64 string data, queried with the Flux language, and visualized by Grafana in near real time. This workflow can significantly improve the user experience of IoT edge ML applications and help the operational technology team remotely monitor ML at the edge.

Call to action

This ML edge workflow can also be extended to different database and BI tool combinations (for example, InfluxDB and Prometheus, MySQL and Grafana, or RedisTimeSeries and Grafana). Users can also adapt the subscriber component and use different tools for their specific use cases.

In this article, we mainly focused on the improved modularity of AWS IoT Greengrass v2; however, other Greengrass features can also benefit edge workflow development. For example, Amazon SageMaker Edge Manager is now integrated with Greengrass to simplify ML fleet deployments. For more details, please refer to this document.

In the future, other custom components, such as camera data ingestion and image preprocessing and enhancement, can also be developed as individual modules and integrated with this existing edge workflow to build more robust CV ML edge applications.

Julia Hu

Julia Hu is a Sr. AI/ML Solutions Architect at Amazon Web Services. She has extensive experience in IoT architecture and applied science, and is part of both the Machine Learning and IoT Technical Field Communities. She works with customers, ranging from start-ups to enterprises, to develop AWSome IoT machine learning (ML) solutions, at the edge and in the cloud. She enjoys leveraging the latest IoT and big data technologies to scale up her ML solutions, reduce latency, and accelerate industry adoption.

Matthieu Fuzellier

Matthieu Fuzellier leads the IoT Data Global Specialty Practice for AWS Professional Services. He helps AWS customers design and implement solutions centered around IoT and machine data.

Srikanth Kodali

Srikanth Kodali is a Senior IoT data analytics architect at Amazon Web Services. He works with AWS customers to provide guidance and technical assistance on building IoT data and analytics solutions, helping them improve the value of their solutions when using AWS.

Yuxin Yang

Yuxin is an AI/ML architect at AWS, certified in the AWS Machine Learning Specialty. She enables customers to accelerate their outcomes through building end-to-end AI/ML solutions, including predictive maintenance, computer vision and reinforcement learning. Yuxin earned her MS from Stanford University, where she focused on deep learning and big data analytics.