How Boston Dynamics and AWS use mobility and computer vision for dynamic sensing
This blog was co-authored by Brad Bonn, Customer Application Engineer at Boston Dynamics.
Industrial facilities have massive physical instrumentation that often needs to be monitored for chronic trends or emergent issues. Attempting a digital transformation to streamline this process means deploying hundreds or thousands of discrete sensors, along with an extensive intercommunication system.
In the world of dynamic sensing, however, the same asset management task could be accomplished with a much smaller set of sensors and almost no communications infrastructure. AWS and Boston Dynamics are working together to bring this capability of dynamic sensing to life, taking sensors where they’re needed through mobile robots and using AWS services to process the data into critical insights for industrial teams.
Automation Through Agile Mobile Robots
Boston Dynamics’ Spot® is a quadruped robot that can be tasked with pre-determined missions to gather data in hazardous locations without human intervention. Operators can drive Spot with a joystick via the Spot tablet and record an Autowalk mission, a pre-programmed route that Spot navigates using its obstacle avoidance and autonomy capabilities. Operators can also instruct Spot to perform programmable tasks through an API that lets developers acquire, store, and retrieve sensor or camera data. Once an Autowalk mission has been recorded, Spot can be commanded to repeat it without an operator explicitly driving the robot along the route. Spot can serve countless use cases, from detecting hotspots in power generation facilities to finding gas leaks on oil rigs, and is ready to be integrated with various technology platforms via the Python-based Spot software development kit (SDK).
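As a concrete illustration, the sketch below uses the Python SDK (bosdyn-client) to pull a single frame from one of Spot’s built-in cameras; the robot address, credentials, and camera source are placeholders that vary by deployment.

```python
# Minimal sketch: capture one image from Spot with the bosdyn-client SDK.
# The address, credentials, and image source below are placeholders.
import bosdyn.client
from bosdyn.client.image import ImageClient

sdk = bosdyn.client.create_standard_sdk('inspection-example')
robot = sdk.create_robot('192.168.80.3')   # robot IP is site-specific
robot.authenticate('user', 'password')     # replace with real credentials

image_client = robot.ensure_client(ImageClient.default_service_name)

# Request a frame from the front-left fisheye camera and save it to disk.
response = image_client.get_image_from_sources(['frontleft_fisheye_image'])
with open('inspection_frame.jpg', 'wb') as f:
    f.write(response[0].shot.image.data)
```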
Machine Learning with AWS
One way customers are using Spot is for inspections with machine learning (ML)-powered insights in environments that are less than ideal for human workers. Spot can be sent on a mission that includes designated inspection points, such as areas with valves or meters, that would otherwise need to be monitored by people on-site. Triggered by a Data Acquisition Plugin during a mission, Spot can capture local imagery with either the standard robot cameras or an optional Spot CAM with panoramic and 30x optical pan-tilt-zoom capabilities. The captured imagery can then be processed by a computer vision (CV) ML model, for instance to detect whether valves are open or closed. Detections from the ML inference can be stored onboard the robot until Spot returns to an area with network connectivity. AWS IoT Greengrass 2.0 and Amazon SageMaker Edge Manager can automate and orchestrate the deployment of software and ML models to a Spot that operates with intermittent network connectivity.
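The store-and-forward behavior described above can be sketched as a simple spool file on Spot’s payload computer, flushed when connectivity returns. Everything here (the spool path, the connectivity check, the publish callback) is illustrative rather than part of the Spot SDK or any AWS service.

```python
# Illustrative store-and-forward spool: detections accumulate on disk while
# the robot is offline and are drained once a network link is available.
import json, os, socket

SPOOL = '/data/detections.jsonl'   # hypothetical path on the payload computer

def record_detection(detection: dict):
    """Append one ML detection to the on-robot spool file."""
    with open(SPOOL, 'a') as f:
        f.write(json.dumps(detection) + '\n')

def network_available(host='8.8.8.8', port=53, timeout=2.0) -> bool:
    """Cheap reachability probe; any failure counts as 'still offline'."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def flush_spool(publish):
    """Drain buffered detections through a caller-supplied publish callable."""
    if not (network_available() and os.path.exists(SPOOL)):
        return
    with open(SPOOL) as f:
        for line in f:
            publish(json.loads(line))
    os.remove(SPOOL)
```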
Streamlining Data Processing
Turning raw sensor data into meaningful insights requires developing ML models through lengthy training processes and iterations that leverage large quantities of data. Once these models are built, making them useful requires significant compute resources to identify patterns in the shortest possible time. This iterative process of collecting, training, testing, and retraining is necessary to ensure the highest degree of accuracy for the models in practice.
There are few shortcuts a developer can take in a computer vision workflow, but there are ways to streamline it, such as:
- Automating the upload and storage of the raw images during the collection phase to create a large selection pool for tagging (see the sketch after this list)
- Utilizing highly parallel compute resources to iterate through annotated images and train a model (e.g. object detection or image classification)
- Having an automated pipeline for testing new models with real-world data so that inaccuracies can be quickly identified and eliminated through subsequent model retraining
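As an example of the first item above, a short boto3 script could push each newly captured image into Amazon S3, where it joins the selection pool awaiting annotation; the bucket and prefix names here are hypothetical.

```python
# Hypothetical collection-phase automation: sync captured frames to S3.
import pathlib
import boto3

s3 = boto3.client('s3')
BUCKET = 'spot-inspection-raw-images'   # hypothetical bucket name

def upload_new_images(local_dir='captures', prefix='unlabeled/'):
    """Upload every captured JPEG, then delete it so the folder acts as a queue."""
    for path in pathlib.Path(local_dir).glob('*.jpg'):
        s3.upload_file(str(path), BUCKET, prefix + path.name)
        path.unlink()
```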
Endless Integration Opportunities
Spot has a unique place in the world of computer vision because it is designed to navigate the physical world. In the industrial environments where it is typically found, WiFi is often nonexistent or, at best, unreliable. How, then, can developers train, test, and retrain their models with a robot that may only be reachable intermittently? And how does a remote sensing application leverage cloud computing without continuous internet connectivity?
Overcoming these challenges to operationalize a computer vision solution means running inference at the edge while using asynchronous IoT messaging to reach cloud storage and compute resources. The ideal result is a pipeline that asynchronously collects raw images, builds models in the cloud from the tagged images, and automatically delivers each iterated model back to the edge for testing.
AWS IoT Greengrass 2.0 is an open-source edge runtime that is compatible with Spot’s compute payloads and enables delivery and execution of applications or ML models on the robot. AWS IoT Greengrass also enables Spot to send data back to the cloud with flexibility for varied use-cases.
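As a sketch of that reporting path, a component running on Spot’s payload computer could relay detections to AWS IoT Core through the Greengrass IPC client that ships in the awsiotsdk package; the topic name is illustrative.

```python
# Sketch of a Greengrass 2.0 component publishing detections to AWS IoT Core.
# Assumes it runs under the Greengrass nucleus; the topic name is made up.
import json
from awsiot.greengrasscoreipc.clientv2 import GreengrassCoreIPCClientV2
from awsiot.greengrasscoreipc.model import QOS

ipc = GreengrassCoreIPCClientV2()

def publish_detection(detection: dict):
    """Forward one detection through the nucleus to the cloud."""
    ipc.publish_to_iot_core(
        topic_name='spot/inspections/detections',   # illustrative topic
        qos=QOS.AT_LEAST_ONCE,
        payload=json.dumps(detection).encode(),
    )
```

Paired with the spool from the earlier sketch, `flush_spool(publish_detection)` would drain the backlog whenever Spot walks back into coverage.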
With AWS IoT Greengrass, developers can build custom components that use the Spot SDK to capture imagery or sensor data, perform ML inference, and report detections back to the cloud. Integrating Amazon SageMaker Edge Manager into the solution lets developers optimize and package models specifically for the Spot compute payloads and run inference without having to install the ML framework and libraries used for training (e.g. PyTorch, TensorFlow, MXNet) on the robot, giving them the flexibility to train with whichever framework meets their needs.
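One plausible shape for that inference step, assuming Python stubs (`agent_pb2`, `agent_pb2_grpc`) compiled from the Edge Manager agent’s published agent.proto; the socket path, model name, and tensor layout below are illustrative:

```python
# Hedged sketch: query a model served by the SageMaker Edge Manager agent
# over its local gRPC socket. Stubs are assumed to be generated from the
# agent's agent.proto; names, paths, and shapes below are illustrative.
import grpc
import numpy as np
import agent_pb2
import agent_pb2_grpc

channel = grpc.insecure_channel(
    'unix:///tmp/aws.greengrass.SageMakerEdgeManager.sock')
agent = agent_pb2_grpc.AgentStub(channel)

# Load the model package that Edge Manager delivered to the payload computer.
agent.LoadModel(agent_pb2.LoadModelRequest(
    url='/greengrass/models/valve-detector', name='valve-detector'))

def detect(image: np.ndarray):
    """Run one frame through the loaded model and return the raw response."""
    tensor = agent_pb2.Tensor(
        tensor_metadata=agent_pb2.TensorMetadata(
            name='input', data_type=agent_pb2.FLOAT32, shape=list(image.shape)),
        byte_data=image.astype(np.float32).tobytes(),
    )
    return agent.Predict(agent_pb2.PredictRequest(
        name='valve-detector', tensors=[tensor]))
```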
AWS IoT Greengrass 2.0 and Amazon SageMaker Edge Manager provide a single delivery system that enables all the links in this development chain on an edge computing device. Meanwhile, Spot’s agile platform enables repeatable, autonomous data collection that can then be acted upon through the robot’s API. Together, Spot and AWS form an end-to-end method for making AI literally strut its stuff.
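On the cloud side of that delivery system, rolling a new component or model revision out to the robot can be a single deployment call; the target ARN and component details below are placeholders.

```python
# Placeholder sketch: deploy a new component revision to the robot's
# Greengrass core device with the AWS SDK for Python.
import boto3

gg = boto3.client('greengrassv2')

gg.create_deployment(
    targetArn='arn:aws:iot:us-east-1:123456789012:thing/spot-payload-core',
    deploymentName='valve-inspector-rollout',
    components={
        'com.example.ValveInspector': {'componentVersion': '1.0.1'},
    },
)
```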
Learn more about Spot and AWS IoT Greengrass, or contact AWS for further information.