AWS Machine Learning Blog

Amazon Lookout for Vision now supports visual inspection of product defects at the edge

Discrete and continuous manufacturing lines produce a high volume of products with cycle times ranging from milliseconds to a few seconds. To identify defects at the same throughput as production, camera streams of images must be processed at low latency. Additionally, factories may have low network bandwidth or intermittent cloud connectivity. In such scenarios, you may need to run the defect detection system on your on-premises compute infrastructure and upload the processed results to the AWS Cloud for further development and monitoring. This hybrid approach, with both local edge hardware and the cloud, can address the low-latency requirements and help reduce storage and network transfer costs to the cloud. It may also help you fulfill data privacy and other regulatory requirements.

In this post, we show you how to detect defective parts using Amazon Lookout for Vision machine learning (ML) models running on your on-premises edge appliance.

Lookout for Vision is an ML service that helps you spot product defects using computer vision, automating the quality inspection process in your manufacturing lines with no ML expertise required. The fully managed service enables you to build, train, optimize, and deploy models in the AWS Cloud or at the edge. You can use the cloud APIs or deploy Amazon Lookout for Vision models on any NVIDIA Jetson edge appliance or x86 compute platform running Linux with an NVIDIA GPU accelerator. You can use AWS IoT Greengrass to deploy and manage your edge-compatible customized models on your fleet of devices.

Solution overview

In this post, we use a printed circuit board dataset composed of normal and defective images such as scratches, solder blobs, and damaged components on the board. We train a Lookout for Vision model in the cloud to identify defective and normal printed circuit boards. We compile the model to a target ARM architecture, package the trained Lookout for Vision model as an AWS IoT Greengrass component, and deploy the model to an NVIDIA Jetson edge device using the AWS IoT Greengrass console. Finally, we demonstrate a Python-based sample application running on the NVIDIA Jetson edge device that sources the printed circuit board image from the edge device file system, runs the inference on the Lookout for Vision model using the gRPC interface, and sends the inference data to an MQTT topic in the AWS Cloud.

The following diagram illustrates the solution architecture.

The solution has the following workflow:

  1. Upload a training dataset to Amazon Simple Storage Service (Amazon S3).
  2. Train a Lookout for Vision model in the cloud.
  3. Compile the model to the target architecture (ARM) and deploy the model to the NVIDIA Jetson edge device using the AWS IoT Greengrass console.
  4. Source images from local disk.
  5. Run inferences on the deployed model via the gRPC interface.
  6. Post the inference results to an MQTT client running on the edge device.
  7. Receive the MQTT message on a topic in AWS IoT Core in the AWS Cloud for further monitoring and visualization.

Steps 4, 5, and 6 are coordinated by the sample Python application.
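As an illustration of the first step, the following minimal sketch uploads a labeled image folder to Amazon S3 with boto3. The bucket name (l4v-circuit-board-dataset) and folder layout are hypothetical placeholders; adapt them to your own dataset.

import boto3
from pathlib import Path

s3 = boto3.client("s3")
bucket = "l4v-circuit-board-dataset"  # hypothetical bucket name

# Upload normal and anomalous training images under separate prefixes
# so they can be labeled by folder in Lookout for Vision.
for label in ("normal", "anomaly"):
    for image in Path(f"dataset/{label}").glob("*.jpg"):
        s3.upload_file(str(image), bucket, f"circuitboard/train/{label}/{image.name}")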

Prerequisites

Before you get started, complete the following prerequisites:

  1. Create an AWS account.
  2. On your NVIDIA Jetson edge device, complete the following:
    1. Set up your edge device (we have set IoT THING_NAME = l4vJetsonXavierNx when installing AWS IoT Greengrass V2).
    2. Clone the sample project containing the Python-based sample application (warmup-model.py to load the model and sample-client-file-mqtt.py to run inferences), and install the required Python modules. See the following code:
git clone https://github.com/aws-samples/ds-peoplecounter-l4v-workshop.git
cd ds-peoplecounter-l4v-workshop 
pip3 install -r requirements.txt
cd lab2/inference_client  
# Replace the ENDPOINT variable in sample-client-file-mqtt.py with the
# value on the AWS console under AWS IoT -> Things -> l4vJetsonXavierNx ->
# Interact, under HTTPS. It will be of the form
# <name>-ats.iot.<region>.amazonaws.com

Dataset and model training

We use the printed circuit board dataset to demonstrate the solution. The dataset contains normal and anomalous images. Here are a few sample images from the dataset.

The following image shows a normal printed circuit board.

The following image shows a printed circuit board with scratches.

The following image shows a printed circuit board with a soldering defect.

To train a Lookout for Vision model, we follow the steps outlined in Amazon Lookout for Vision – New ML Service Simplifies Defect Detection for Manufacturing. After you complete these steps, you can navigate to the project's Models page to check the performance of the trained model. You can start exporting the model to the target edge device any time after the model is trained.
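The console steps in that post can also be scripted. The following is a minimal sketch using boto3; the project name, bucket, and manifest key are hypothetical placeholders, and it assumes a SageMaker Ground Truth-style manifest for the uploaded images.

import boto3

lfv = boto3.client("lookoutvision")

# Create the project and attach the training dataset uploaded earlier.
lfv.create_project(ProjectName="circuit-board")
lfv.create_dataset(
    ProjectName="circuit-board",
    DatasetType="train",
    DatasetSource={
        "GroundTruthManifest": {
            "S3Object": {
                "Bucket": "l4v-circuit-board-dataset",
                "Key": "circuitboard/train.manifest",
            }
        }
    },
)

# Train a model version; training artifacts land in the output location.
lfv.create_model(
    ProjectName="circuit-board",
    OutputConfig={
        "S3Location": {"Bucket": "l4v-circuit-board-dataset", "Prefix": "model/"}
    },
)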

Compile and package the model as an AWS IoT Greengrass component

In this section, we walk through the steps to compile the printed circuit board model to our target edge device and package the model as an AWS IoT Greengrass component.

  1. On the Lookout for Vision console, choose your project.
  2. In the navigation pane, choose Edge model packages.
  3. Choose Create model packaging job.

  4. For Job name, enter a name.
  5. For Job description, enter an optional description.
  6. Choose Browse models.

  7. Select the model version (the printed circuit board model built in the previous section).
  8. Choose Choose.

  9. Select Target device and enter the compiler options.

Our target device runs JetPack 4.5.1. See the Lookout for Vision documentation for additional details on supported platforms. You can find the supported compiler options, such as trt-ver and cuda-ver, in the NVIDIA JetPack 4.5.1 archive.
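For example, for a Jetson Xavier device on JetPack 4.5.1 (which ships TensorRT 7.1.3 and CUDA 10.2), the compiler options you enter would look similar to the following; verify the exact versions against your own JetPack installation:

{"gpu-code": "sm_72", "trt-ver": "7.1.3", "cuda-ver": "10.2"}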

  10. Enter the details for Component name, Component description (optional), Component version, and Component location.

Amazon Lookout for Vision stores the component recipes and artifacts in this Amazon S3 location.

  11. Choose Create model packaging job.

You can see your job name and status showing as In progress. The model packaging job may take a few minutes to complete.

When the model packaging job is complete, the status shows as Success.

  12. Choose your job name (in our case, it’s ComponentCircuitBoard) to see the job details.

The Greengrass component and model artifacts have been created in your AWS account.

  13. Choose Continue deployment to Greengrass to deploy the component to the target edge device.
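If you prefer to automate packaging, the equivalent API call is start_model_packaging_job. The following boto3 sketch reuses the hypothetical names from this post (the project, component, bucket, and compiler options are placeholders):

import boto3

lfv = boto3.client("lookoutvision")

# Compile the trained model for the Jetson target and package it as a
# Greengrass component in one job.
lfv.start_model_packaging_job(
    ProjectName="circuit-board",
    ModelVersion="1",
    JobName="ComponentCircuitBoard-job",
    Configuration={
        "Greengrass": {
            "TargetDevice": "jetson_xavier",
            "CompilerOptions": '{"gpu-code": "sm_72", "trt-ver": "7.1.3", "cuda-ver": "10.2"}',
            "ComponentName": "ComponentCircuitBoard",
            "ComponentVersion": "1.0.0",
            "S3OutputLocation": {
                "Bucket": "l4v-circuit-board-dataset",
                "Prefix": "components/",
            },
        }
    },
)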

Deploy the model

In this section, we walk through the steps to deploy the printed circuit board model to the edge device using the AWS IoT Greengrass console.

  1. Choose Deploy to initiate the deployment steps.

  2. Select Core device (because the deployment is to a single device) and enter a name for Target name.

The target name is the same name you used to name the core device during the AWS IoT Greengrass V2 installation process.

  3. Choose your component. In our case, the component name is ComponentCircuitBoard, which contains the circuit board model.
  4. Choose Next.

  5. Configure the component (optional).
  6. Choose Next.

  7. Expand Deployment policies.

  8. For Component update policy, select Notify components.

This allows an already deployed component (a prior version of the component) to defer an update until it is ready to update.

  9. For Failure handling policy, select Don’t roll back.

In case of a failure, this option allows us to investigate the errors in deployment.

  10. Choose Next.

  11. Review the list of components that will be deployed on the target (edge) device.
  12. Choose Next.

You should see the message Deployment successfully created.

  13. To validate that the model deployment was successful, run the following command on your edge device:
sudo /greengrass/v2/bin/greengrass-cli component list

You should see output similar to the following after the ComponentCircuitBoard lifecycle startup script runs:

 Components currently running in Greengrass:
 
 Component Name: aws.iot.lookoutvision.EdgeAgent
    Version: 0.1.34
    State: RUNNING
    Configuration: {"Socket":"unix:///tmp/aws.iot.lookoutvision.EdgeAgent.sock"}
 Component Name: ComponentCircuitBoard
    Version: 1.0.0
    State: RUNNING
    Configuration: {"Autostart":false}

Run inferences on the model

We’re now ready to run inferences on the model. On your edge device, run the following command to load the model:

# Run this command to load the model.
# It moves the model into the running state.
python3 warmup-model.py
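Under the hood, warmup-model.py talks to the Lookout for Vision Edge Agent over gRPC. The following sketch shows the essential call, assuming you have generated Python stubs (edge_agent_pb2 and edge_agent_pb2_grpc) from the Edge Agent's edge-agent.proto:

import grpc
from edge_agent_pb2 import StartModelRequest
from edge_agent_pb2_grpc import EdgeAgentStub

# The Edge Agent listens on the Unix socket shown in the component
# configuration earlier.
channel = grpc.insecure_channel("unix:///tmp/aws.iot.lookoutvision.EdgeAgent.sock")
stub = EdgeAgentStub(channel)

# Ask the agent to load the packaged model. Loading is asynchronous;
# poll DescribeModel until the model reports RUNNING.
stub.StartModel(StartModelRequest(model_component="ComponentCircuitBoard"))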

To generate inferences, run the following command with the source file name:

python3 sample-client-file-mqtt.py /path/to/images
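Internally, the client calls the Edge Agent's DetectAnomalies RPC for each image. A minimal sketch of that call follows, again assuming generated stubs; the image file name is a placeholder:

import grpc
from PIL import Image
from edge_agent_pb2 import Bitmap, DetectAnomaliesRequest
from edge_agent_pb2_grpc import EdgeAgentStub

channel = grpc.insecure_channel("unix:///tmp/aws.iot.lookoutvision.EdgeAgent.sock")
stub = EdgeAgentStub(channel)

# The Edge Agent expects a packed RGB bitmap of the image to inspect.
image = Image.open("circuit-board-01.jpg").convert("RGB")
response = stub.DetectAnomalies(
    DetectAnomaliesRequest(
        model_component="ComponentCircuitBoard",
        bitmap=Bitmap(width=image.width, height=image.height,
                      byte_data=image.tobytes()),
    )
)
result = response.detect_anomaly_result
print(result.is_anomalous, result.confidence)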

The following screenshot shows that the model correctly predicts the image as anomalous (bent pin) with a confidence score of 0.999766.

The following screenshot shows that the model correctly predicts the image as anomalous (solder blob) with a confidence score of 0.7701461.

The following screenshot shows that the model correctly predicts the image as normal with a confidence score of 0.9568462.

The following screenshot shows the inference data posted to an MQTT topic in AWS IoT Core.
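To publish results, the sample client uses the AWS IoT Device SDK for Python. A trimmed sketch of that step follows; the topic name and credential file paths are placeholders, and the endpoint is the value you configured in the prerequisites:

import json
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

client = AWSIoTMQTTClient("l4vJetsonXavierNx")
client.configureEndpoint("<name>-ats.iot.<region>.amazonaws.com", 8883)
client.configureCredentials("AmazonRootCA1.pem", "private.pem.key", "certificate.pem.crt")
client.connect()

# Publish one inference result so it can be monitored in AWS IoT Core.
client.publish(
    "l4v/inference",  # hypothetical topic name
    json.dumps({"source": "circuit-board-01.jpg",
                "is_anomalous": True,
                "confidence": 0.999766}),
    1,  # QoS
)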

Customer stories

With AWS IoT Greengrass and Amazon Lookout for Vision, you can now automate visual inspection with computer vision for processes like quality control and defect assessment, all on the edge and in real time. You can proactively identify problems such as part damage (like dents, scratches, or poor welding), missing product components, or defects with repeating patterns on the production line itself, saving you time and money. Customers like Tyson and Baxter are discovering the power of Amazon Lookout for Vision to increase quality and reduce operational costs by automating visual inspection.

“Operational excellence is a key priority at Tyson Foods. Predictive maintenance is an essential asset for achieving this objective by continuously improving overall equipment effectiveness (OEE). In 2021, Tyson Foods launched a machine learning based computer vision project to identify failing product carriers during production to prevent them from impacting Team Member safety, operations, or product quality.

The models trained using Amazon Lookout for Vision performed well. The pin detection model achieved 95% accuracy across both classes. The Amazon Lookout for Vision model was tuned to perform at 99.1% accuracy for failing pin detection. By far the most exciting result of this project was the speedup in development time. Although this project utilizes two models and a more complex application code, it took 12% less developer time to complete. This project for monitoring the condition of the product carriers at Tyson Foods was completed in record time using AWS managed services such as Amazon Lookout for Vision.”

Audrey Timmerman, Sr Applications Developer, Tyson Foods.

“We use Amazon Lookout for Vision to automate inspection tasks and solve complex process management problems that can’t be addressed by manual inspection or traditional machine vision alone. Lookout for Vision’s cloud and edge capabilities provide us the ability to leverage computer vision and AI/ML-based solutions at scale in a rapid and agile manner, helping us to drive efficiencies on the manufacturing shop floor and enhance our operator’s productivity and experience.”

K. Karan, Global Senior Director – Digital Transformation, Integrated Supply Chain, Baxter International Inc.

Conclusion

In this post, we described a typical scenario for industrial defect detection at the edge. We walked through the key components of the cloud and edge lifecycle with an end-to-end example using Lookout for Vision and AWS IoT Greengrass. With Lookout for Vision, we trained an anomaly detection model in the cloud using the printed circuit board dataset, compiled the model to a target architecture, and packaged the model as an AWS IoT Greengrass component. With AWS IoT Greengrass, we deployed the model to an edge device. We demonstrated a Python-based sample application that sources printed circuit board images from the edge device local file system, runs the inferences on the Lookout for Vision model at the edge using the gRPC interface, and sends the inference data to an MQTT topic in the AWS Cloud.

In a future post, we will show how to run inferences on a real-time stream of images using a GStreamer media pipeline.

Start your journey towards industrial anomaly detection and identification by visiting the Amazon Lookout for Vision and AWS IoT Greengrass resource pages.


About the Authors

Amit Gupta is an AI Services Solutions Architect at AWS. He is passionate about enabling customers with well-architected machine learning solutions at scale.

Ryan Vanderwerf is a Partner Solutions Architect at Amazon Web Services. He previously provided Java virtual machine-focused consulting and project development as a software engineer at OCI on the Grails and Micronaut team. He was chief architect/director of products at ReachForce, focusing on software and system architecture for AWS Cloud SaaS solutions for marketing data management. Since 1996, Ryan has built SaaS solutions in domains such as finance, media, telecom, and e-learning.

Prathyusha Cheruku is an AI/ML Computer Vision Product Manager at AWS. She focuses on building powerful, easy-to-use, no code/ low code deep learning-based image and video analysis services for AWS customers.