AWS for Industries
AI-Driven Visual Inspection of Wind Turbines Based on Drone Imaging
Introduction
Keeping wind turbines operational means keeping them well maintained, and one of the key maintenance steps is regular visual inspection. Wind farm operation and maintenance companies have started to use camera-equipped drones to perform these inspections. According to a recent study, drone-based inspection reduces costs by up to 70% and decreases revenue lost to downtime by up to 90% compared to conventional rope-access inspection by humans. Working with drones is also much safer than sending technicians up on ropes. The next step our customers are looking for is automating the inspection process itself. In this blog post, we discuss AI/ML-based image recognition as part of an event-driven architecture that automates the inspection process.
This blog post demonstrates how to use machine learning on AWS for visual inspection, augmented with serverless technologies, to build an automated process based on custom business logic. We employ Amazon Rekognition Custom Labels to identify objects and scenes specific to your use case, and we discuss important metrics for AI/ML-based inspection. Finally, the AWS Serverless Application Model (AWS SAM) helps us deploy an event-driven solution that builds the business logic on top of the Amazon Rekognition findings.
AI/ML-Based Image Recognition
Image Recognition Types for Inspection
There are typically three types of image recognition. The first is Multi-Label Image Classification, in which the image is treated as a whole and the label applies to the entire scene. For example, Figure 1.a shows an image of a turbine that has icing, so the resulting label would be “icing,” attributed to the whole scene. This method is simple and requires the least amount of training data of the three. The second option is Object Detection with Bounding Box (Figure 1.b). This method not only identifies the object but also provides spatial information via a bounding box. Object Detection with Bounding Box requires more training data than Multi-Label Image Classification. The final method is Semantic Segmentation, which is pixel-level classification (Figure 1.c). Semantic Segmentation requires the largest amount of data of the three; in return, it provides the most granular information. As of today, Amazon Rekognition only supports Multi-Label Image Classification and Object Detection with Bounding Box. You can use Amazon SageMaker to perform Semantic Segmentation.
Figure 1. Output example of (a) multi-label image classification, (b) object detection with bounding box, and (c) semantic segmentation.
The pictures of turbines, presumably taken by drones, will always show a turbine scene. Therefore, performing image recognition on the whole scene, such as with Multi-Label Image Classification, would not be an effective approach. Instead, you want to identify and locate the issues on the turbine. In this blog post, we selected Object Detection with Bounding Box for two reasons. First, it provides a good balance between the information it yields and the amount of data required for training. Second, we aimed to employ Amazon Rekognition Custom Labels, which does not require writing a single line of code. If you need pixel-level identification, Amazon SageMaker with Semantic Segmentation would be a good solution.
Label Identification and Types of Issue Classes
This blog post considers three example issues (labels) on the wind turbines: wear, icing, and corrosion (Figure 2a-c). We used publicly available images. However, you are not limited to three issues as we are here; you can define up to 250 different types of issues (labels).
Figure 2. Example pictures of (a) a turbine blade with wear at the leading edge (picture taken from Keegan et al. 2013), (b) icing on blades (picture taken from Fakorede et al. 2016), and (c) corrosion (Image Source: by Cameron Venti on Unsplash)
You can improve the Amazon Rekognition Custom Labels model by including a baseline label class that represents the “no issue” condition. Without a baseline class, the model may be biased towards finding an issue, which can increase false positives. With a baseline class, the model also learns not to attribute the normal (no-issue) portions of the turbines to an “issue”: it learns that only the issue regions on the turbine, rather than the turbine itself, should be labeled as issues. As a result, including the baseline helps reduce false negatives as well. We also recommend a balanced training set, where the number of baseline samples is close to the number of samples with issues, to reduce bias. Please note that this discussion applies to Multi-Label Image Classification and Object Detection with Bounding Box; for Semantic Segmentation, the best practice is to use background and “other” categories instead. In Amazon SageMaker, you can use Amazon SageMaker Clarify to detect biases and assess the fairness of your dataset.
Evaluating the Training Results
Preparing a dataset, labeling, training, and deploying Amazon Rekognition are out of scope for this blog post. To perform those steps, visit Getting Started with Amazon Rekognition Custom Labels. What we will discuss is how to assess the model training results from a visual inspection perspective, which is important.
Our model had around 20–30 sample images for each label category, with the baseline having the largest set. We were limited by the number and quality of publicly available turbine pictures showing these issues. However, we assume that visual inspection companies will not face the same difficulty finding good example pictures that we did while preparing this blog post.
Figure 3 shows the results of the trained Amazon Rekognition model: the overall recall was 0.958 and the overall precision was 0.969. In plain terms, the model captured 95.8% of all labels (issues and baseline cases), while 96.9% of all predicted labels were correct. The overall F1 score, which is the harmonic average of recall and precision, was 0.961.
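For reference, the F1 score combines precision and recall into a single metric:

F1 = 2 × (precision × recall) / (precision + recall)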
Note that without the baseline class, the overall recall was 0.869, the overall precision was 0.833, and the F1 score was 0.843. Adding the baseline increased these key performance metrics by more than 10% in our case.
Inspection is part of a safety process, and missing actual problems (false negatives) has a bigger safety impact than incorrectly flagging issues (false positives). For this reason, you may want to maximize recall while tuning your model.
Figure 3. Evaluation results
In the per-label performance column in Figure 3, the recall for baseline, icing, and wear reached the perfect score of 1.00. However, the recall for the corrosion label is 0.833, which means that 16.7% (1 − 0.833 = 0.167) of the corrosion cases in our test set were not identified. In this case, we can work on improving the Amazon Rekognition Custom Labels model to enhance recall. One way to improve recall is to lower the confidence threshold, so that more cases with lower confidence are identified as corrosion. However, this typically comes at the cost of precision. The threshold can be lowered during the inference step: while analyzing images with your model, you can reduce --min-confidence.
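As an illustration, the following minimal sketch calls the DetectCustomLabels API through boto3 with a lowered MinConfidence. The project version ARN, bucket, and object key are placeholders; replace them with the values from your own deployment.

import boto3

rekognition = boto3.client("rekognition")

response = rekognition.detect_custom_labels(
    # Placeholder ARN: use the ARN of your own running model version
    ProjectVersionArn="arn:aws:rekognition:region:111111111111:project/my-recognition-project/version/my-recognition-project.{timestamp}/hash",
    Image={"S3Object": {"Bucket": "my-pictures-bucket", "Name": "example-picture.png"}},
    MinConfidence=50,  # lower threshold: higher recall, typically lower precision
)

for label in response["CustomLabels"]:
    print(label["Name"], label["Confidence"])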
Solution
Architecture
As shown in Figure 4, the images are first put into Amazon S3 (my-pictures-bucket) (step 1), which invokes an AWS Lambda function (Inference Lambda) (step 2). The Inference Lambda function calls the Amazon Rekognition Custom Labels API (step 3) to run inference on the image (object) stored in the my-pictures-bucket S3 bucket (step 4). Hence, Amazon Rekognition needs IAM permissions and bucket policies allowing actions such as GetObject on that S3 bucket. The JSON response with the labeled data is received by the Inference Lambda (step 5) and added as an item to an Amazon DynamoDB table (step 6). The DynamoDB table has a partition key of ID and a sort key of timestamp. Each item has two attributes: the object key of the image file and the inference output from Amazon Rekognition. Amazon DynamoDB Streams invokes another Lambda function called the Assessment Lambda (step 7). This Lambda function is where the decision logic comes into play: it assesses the inference results written to the DynamoDB table, and if they meet the defined conditions, it takes an action accordingly. For simplicity, our Assessment Lambda checks the list of issues to be alerted on, which is an input from the deployment (MyLabelListToBeNotified); however, you can use any business logic. If one of the alert-requiring labels is found, the Assessment Lambda first generates a presigned URL for the image object in my-pictures-bucket (step 8) and then publishes to Amazon SNS (step 9). Amazon SNS sends an email with the issue (label), details (confidence), and the presigned link to the original image object for further review (step 10).
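The following is a minimal sketch of what the Inference Lambda might look like. The environment variable names, key schema, and attribute names are assumptions based on the description above; the actual function in the sample repository may differ.

# inference_lambda.py - invoked by the S3 put event (steps 2-6)
import json
import os
import time
import uuid
from urllib.parse import unquote_plus

import boto3

rekognition = boto3.client("rekognition")
table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])  # assumed env var
MODEL_ARN = os.environ["CUSTOM_LABELS_ENDPOINT"]                    # assumed env var

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])

        # Steps 3-5: run Custom Labels inference on the uploaded image
        response = rekognition.detect_custom_labels(
            ProjectVersionArn=MODEL_ARN,
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
        )

        # Step 6: store the object key and the inference output in DynamoDB
        table.put_item(
            Item={
                "ID": str(uuid.uuid4()),          # partition key (assumed format)
                "timestamp": int(time.time()),    # sort key (assumed format)
                "objectKey": key,
                "inference": json.dumps(response["CustomLabels"]),
            }
        )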
Figure 4. Architecture; the components deployed with AWS SAM are shown inside the orange dashed box.
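Similarly, here is a minimal sketch of the Assessment Lambda. Again, the environment variable names, attribute names, and message format are assumptions for illustration, not the exact code in the repository.

# assessment_lambda.py - invoked by DynamoDB Streams (steps 7-10)
import json
import os

import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")
TOPIC_ARN = os.environ["TOPIC_ARN"]          # assumed env var
BUCKET = os.environ["PICTURES_BUCKET"]       # assumed env var
LABELS_TO_NOTIFY = os.environ["LABELS_TO_NOTIFY"].split(",")  # e.g. "wear,corrosion,icing"

def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue
        item = record["dynamodb"]["NewImage"]
        key = item["objectKey"]["S"]
        labels = json.loads(item["inference"]["S"])

        findings = [lbl for lbl in labels if lbl["Name"] in LABELS_TO_NOTIFY]
        if not findings:
            continue

        # Step 8: presigned URL so the reviewer can open the original image
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": key},
            ExpiresIn=3600,
        )

        # Steps 9-10: publish the findings to the SNS topic
        lines = [f"{f['Name']} (confidence {f['Confidence']:.1f}%)" for f in findings]
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="Wind turbine inspection alert",
            Message="\n".join(lines) + "\nImage: " + url,
        )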
In the following section, you will first deploy the Amazon S3 bucket (my-training-bucket) and then the Amazon Rekognition Custom Labels endpoint. Finally, you will use AWS SAM to deploy the remaining services.
Deployment
Prerequisites
You will use both the AWS CLI and AWS SAM, so install and configure the AWS CLI and the AWS SAM CLI. The application build process also requires Docker and Python.
As explained in the previous sections, this blog post assumes that an Amazon Rekognition Custom Labels endpoint is already deployed. Please refer to Getting Started with Amazon Rekognition Custom Labels.
Deployment Steps
First, download the example AWS SAM folder from the GitHub repo. The following commands clone the repo and then navigate to the folder where the AWS SAM template and the two Lambda functions are located:
git clone https://github.com/aws-samples/ai-based-windturbine-inspection-blogpost.git
cd SAM-repo
Once you are in that location, build and deploy with AWS SAM as follows:
sam build
sam deploy --guided
The guided deployment prompts you for several parameters:
- Enter my-stack-for-recognition-turbines as the Stack Name.
- Copy the endpoint ARN from the previous step and paste it here as CustomLabelsEndpoint.
- Enter your email address as MyEmailAdress if you would like to receive the notification emails sent by Amazon SNS.
- Enter the issue labels that you would like to be notified about in MyLabelListToBeNotified. The default is wear, corrosion, icing.
- Give names to InputBucketNamePrefix and MyDynamoDBTable.
Finally, enter “y” for all questions and wait until the deployment is complete. Once it is complete, you should receive an email from no-reply@sns.amazonaws.com asking you to confirm the subscription. Choose Confirm subscription.
Testing the Solution
You can select one of the pictures from the validation dataset; we selected the one shown in Figure 2a (example-picture.png). Download it to your local disk to simulate an external upload process, and then upload it with the following command:
aws s3api put-object --bucket my-pictures-bucket --key example-picture.png --body ./example-picture.png
After the image has been uploaded, you should receive an email with the inference result, the confidence, and a presigned link to the image object in Amazon S3 (Figure 4). You can also follow the inference results in the Amazon DynamoDB table created during the deployment.
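If you prefer to check the results programmatically, a quick scan of the table is enough. The table name below is whatever you entered for MyDynamoDBTable, and the attribute names follow the assumptions made in the Lambda sketches above.

import boto3

table = boto3.resource("dynamodb").Table("MyDynamoDBTable")  # your table name
for item in table.scan()["Items"]:
    print(item["objectKey"], item["inference"])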
Clean Up
a. Start by stopping the Amazon Rekognition Custom Labels endpoint. The CLI command can be found under API Codes in the Use your model tab of the Amazon Rekognition Custom Labels page:
aws rekognition stop-project-version --project-version-arn "arn:aws:rekognition:region:111111111111:project/my-recognition-project/version/my-recognition-project.{timestamp}/hash" --region region
b. You can now delete the Amazon Rekognition project. Note that all model versions in the project must be deleted (with aws rekognition delete-project-version) before the project itself can be deleted:
aws rekognition delete-project --project-arn "arn:aws:rekognition:region:111111111111:project/my-recognition-project/{timestamp}"
c. Before deleting the CloudFormation stack, ensure that the Amazon S3 bucket where you uploaded the pictures is empty:
aws s3 rm s3://my-pictures-bucket --recursive
d. Then, delete the CloudFormation stack:
aws cloudformation delete-stack --stack-name my-stack-for-recognition-turbines
e. You can then delete the S3 bucket containing the training pictures (the bucket must be empty before it can be deleted):
aws s3api delete-bucket --bucket my-training-bucket
Conclusion
In this blog post, we explained an event-driven serverless architecture that incorporates business logic on top of image recognition for wind turbine inspection. We discussed important machine learning concepts that make automated visual inspection more effective, and we deployed the serverless architecture using AWS SAM. To learn more, visit AWS for Energy.