AWS Open Source Blog

Amazon Lookout for Vision Python SDK: Cross-validation and Integration with Other AWS Services

Amazon Lookout for Vision uses machine learning (ML) to understand images from any camera with a high degree of accuracy and at large scale. For example, Lookout for Vision can be used to identify missing components in products, damage to vehicles or structures, irregularities in production lines, minuscule defects in silicon wafers, and other similar problems.

The service allows customers to reduce the need for costly and inconsistent manual inspection while improving quality control, defect and damage assessment, and compliance. In minutes, you can begin using Amazon Lookout for Vision to automate inspection of images and objects — with no machine learning experience required. This previous blog post will help you get started.

Lookout for Vision lets manufacturers increase quality and reduce operational costs by quickly identifying differences in images of objects at scale. It was made available in Preview at AWS re:Invent 2020 and became generally available in February 2021, the same month we released the first version of our open source Python SDK for this service.

In this blog post, we will show you how to use the open source Python SDK for Lookout for Vision in either AWS Glue or AWS Lambda. But first, we want to introduce one of the newest capabilities of the open source SDK: cross-validation.

Run cross-validation using the Python SDK

Cross-validation is a technique used in ML to obtain a robust estimate of model performance on unseen data. Instead of relying on a single validation set, k-fold cross-validation uses k validation sets and averages the validation score over all k folds.

The newest release of the Lookout for Vision open source Python SDK adds support for k-fold cross-validation. Its benefit for Lookout for Vision is a more reliable validation score than a single holdout validation set can provide. The SDK creates k projects, which are used to train k models in parallel on the split data.

To run k-fold cross-validation with the Python SDK, the following three steps are required:

1. The training data is shuffled and split into k folds, producing k different training and validation set pairs.

The following code shuffles the data stored within the normal and anomaly directories using a random seed, splits each of the two classes into k folds, and generates k pairs of training and validation data. For details on this folder structure, please refer to this blog post. The k pairs are returned as four dictionaries, each mapping the index of the i-th split to the paths of the images that are part of that split.

from lookoutvision.image import Image

img = Image()
# n_splits, normal and anomaly are assumed to be set beforehand, e.g.:
# n_splits, normal, anomaly = 5, "normal/", "anomaly/"
training_normal, training_anomaly, validation_normal, validation_anomaly = img.kfold_split(
    n_splits=n_splits,
    normal=normal,
    anomaly=anomaly,
    seed=0)

2. Each of the k pairs is uploaded to Amazon Simple Storage Service (Amazon S3) and provides the training and validation data for k different Lookout for Vision projects.

The following line of code uploads the images of each training and validation split to the specified Amazon S3 bucket and prefix. Each split is uploaded to a different prefix of the form s3://<bucket>/<prefix>/project_name_<i-th split>, where the i-th split takes integer values between 0 and k-1. To upload the images, use the SDK:

img.kfold_upload(
    bucket,
    prefix,
    project_name,
    training_normal,
    training_anomaly,
    validation_normal,
    validation_anomaly)

3. A model is trained on each of the k training sets and validated on the corresponding validation set.

The following code trains k models based on the k splits of the data. By setting parallel_training=True, two models can be trained in parallel, reducing the execution time by half. Once the k models have been trained, the associated k-fold projects are deleted if delete_kfold_projects=True is set:

from lookoutvision.lookoutvision import LookoutForVision

# The service client; project_name follows the earlier upload step.
l4v = LookoutForVision(project_name=project_name)
kfold_summary = l4v.train_k_fold(
    input_bucket=bucket,
    output_bucket=bucket,
    s3_path=prefix,
    n_splits=n_splits,
    parallel_training=True,
    delete_kfold_projects=True)

Using the k_fold_model_summary method of the Metrics class, you can now extract all of the training and validation metrics from these projects. If you conclude that Lookout for Vision generalizes well across the k splits of your images, you can take all images, train one model in one project, and deploy it as described in this blog post.
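As a minimal sketch, assuming the Metrics class is instantiated like the other SDK classes and that k_fold_model_summary accepts the summary returned above (check the SDK documentation for the exact signature):

from lookoutvision.metrics import Metrics

# Hypothetical usage: summarize the training and validation metrics
# across the k folds of the cross-validation run.
met = Metrics(project_name=project_name)
print(met.k_fold_model_summary(kfold_summary))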

Use AWS Glue with the open source Python SDK

Now, we will walk you through batch predicting your images in AWS Glue. Images need to be stored in an S3 bucket. AWS Glue then runs a Python shell job that can be scheduled; for example, you can let the prediction job run outside business hours so that the results are ready when workers return in the morning. To get you started, we provide an AWS Cloud Development Kit (AWS CDK) stack on GitHub that deploys the architectures described in this post. The one using AWS Glue is shown in the following diagram:

An architecture diagram of Amazon S3 image storage and AWS Glue

The deployed solution requires you to have a pre-trained and hosted Lookout for Vision model in your AWS account. If you don't have one yet, please follow our official documentation or this blog post. After deploying the template, the following components will be created within your AWS account:

1. A Python shell AWS Glue job, which imports the Lookout for Vision Python library. It reads input images from Amazon S3, predicts using your trained Lookout for Vision model, and stores the predicted result in another S3 bucket.

2. An AWS Glue workflow that can be used to schedule the Glue Python shell job. After deploying the solution, the AWS Glue workflow is in “On-Demand” mode. You can schedule it by following this documentation; see the sketch after this list.

3. An AWS Identity and Access Management (IAM) role that the AWS Glue job uses to interact with other AWS services, such as Amazon S3 and Lookout for Vision. For demo purposes, this role is given full access in our example; we recommend reducing it to least-privilege permissions.
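For example, you could switch the workflow from on-demand to a time-based schedule with an AWS Glue scheduled trigger. The following is a minimal sketch using boto3; the trigger, workflow, and job names are hypothetical placeholders for the names created by your stack:

import boto3

glue = boto3.client("glue")

# Run the batch prediction job every day at 02:00 UTC
# (all names below are hypothetical placeholders).
glue.create_trigger(
    Name="l4v-batch-predict-nightly",
    WorkflowName="l4v-batch-predict-workflow",
    Type="SCHEDULED",
    Schedule="cron(0 2 * * ? *)",
    Actions=[{"JobName": "l4v-batch-predict-job"}],
    StartOnCreation=True)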

Use AWS Lambda with the open source Python SDK

At re:Invent 2020, AWS Lambda introduced support for packaging functions as Docker container images. We make use of this mechanism to run our batch_predict job within an AWS Lambda function. This is also part of the AWS CDK stack mentioned above, and the code that runs in this AWS Lambda function is almost identical to the code running in the AWS Glue job. We want to give you yet another option for using the open source Python SDK for Lookout for Vision in a serverless fashion.
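As a minimal sketch, a container image for this could look as follows, assuming the SDK is installed from PyPI and the handler code lives in app.py (the actual Dockerfile in the stack may differ):

FROM public.ecr.aws/lambda/python:3.8

# Install the open source Lookout for Vision Python SDK (assumed PyPI name).
RUN pip install lookoutvision

# Copy in the Lambda function code shown below.
COPY app.py ${LAMBDA_TASK_ROOT}

CMD ["app.lambda_handler"]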

Architecture diagram of Amazon S3 and an AWS Lambda scheduled job

The architecture also looks similar to the one with AWS Glue. The code of our AWS Lambda function is as follows:

import os

# Import all the libraries needed to get started:
from lookoutvision.lookoutvision import LookoutForVision

# Training & Inference
input_bucket = os.getenv('s3_input_data_folder')
project_name = os.getenv('l4vProjectName')
model_version = os.getenv('l4vModelVersion')
# Inference
output_bucket = os.getenv('s3_output_data_folder')
input_prefix = 'lambdapredictinputimages/'
output_prefix = 'lambdapredictedresults/'

def lambda_handler(event, context):

    l4v = LookoutForVision(project_name=project_name)

    # Run the batch prediction
    l4v.batch_predict(
        model_version=model_version,
        input_bucket=input_bucket,
        input_prefix=input_prefix,
        output_bucket=output_bucket,
        output_prefix=output_prefix,
        content_type="image/jpeg")

    return {
        "statusCode": 200,
        "body": "Success"
    }

As you can see, the function itself is short. It is configured through environment variables that are provisioned by the deployed AWS CDK stack; see the README on GitHub for further details. The batch_predict function then picks up the images from Amazon S3, predicts against the Lookout for Vision model, and stores the results in the output S3 bucket. Another way to use this functionality would be to put the AWS Lambda function behind Amazon API Gateway, send an image through the event variable, and run the predict method of the Lookout for Vision Python SDK to classify the image and send the prediction back to the front end.
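As a minimal sketch of that API Gateway variant, the handler below calls the service's DetectAnomalies API directly through boto3 (the SDK's predict functionality builds on the same call); the event shape and the reuse of the environment variable names from above are assumptions:

import base64
import json
import os

import boto3

l4v_client = boto3.client("lookoutvision")

def lambda_handler(event, context):
    # Hypothetical API Gateway integration: the request body is assumed
    # to carry a base64-encoded JPEG image.
    image_bytes = base64.b64decode(event["body"])

    # Classify the single image against the hosted model.
    response = l4v_client.detect_anomalies(
        ProjectName=os.getenv("l4vProjectName"),
        ModelVersion=os.getenv("l4vModelVersion"),
        Body=image_bytes,
        ContentType="image/jpeg")

    result = response["DetectAnomalyResult"]
    return {
        "statusCode": 200,
        "body": json.dumps({
            "is_anomalous": result["IsAnomalous"],
            "confidence": result["Confidence"],
        }),
    }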

Stop the model

When you are done, please stop the models you’ve deployed via stop_model(), and delete the AWS CloudFormation stacks that you might have created. For further instructions, please refer to our previous post, Build, train, and deploy Amazon Lookout for Vision models using the Python SDK.
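For example, reusing the l4v object from the Lambda snippet above:

# Stop the hosted model so you are no longer billed for inference capacity;
# depending on the SDK version, you may need to pass a model_version argument.
l4v.stop_model()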

Conclusion

In this blog post, we demonstrated the new cross-validation feature of Lookout for Vision and gave some ideas on how to integrate the open source Python SDK into two popular AWS services. You can now begin integrating Lookout for Vision into an MLOps pipeline using AWS Step Functions and AWS Lambda. You can also run your batch prediction workloads using AWS Glue.

We also used the open source Python SDK within AWS Lambda by hosting it in a Docker container. This concept can be applied with any other service that can use Docker images stored in Amazon Elastic Container Registry (Amazon ECR). It’s now your turn to build.

Michael Wallner

Michael Wallner is a Senior Consultant for Data & AI with AWS Professional Services and is passionate about enabling customers on their journey to become data-driven and AWSome in the AWS cloud. On top of that, he likes thinking big with customers to innovate and invent new ideas for them.

Bandana Das

Bandana Das is a Senior Data Architect at Amazon Web Services and specializes in data and analytics. She builds event-driven data architectures to support customers in data management and data-driven decision making. She is also passionate about enabling customers on their data management journey to the cloud.