AWS for M&E Blog

Create a scalable workflow using the Intel Library for Video Super Resolution

Introduction

The rise of Free Ad-supported Streaming Television (FAST) channels has boosted the repurposing and distribution of archival content, including classic movies and TV shows, across modern platforms and devices. Much of this content is available only in lower-resolution, standard definition (SD) formats, and needs enhancement to meet viewer expectations. Traditionally, low-complexity methods like Lanczos and bicubic are used for upscaling. However, they often introduce image artifacts such as blurring and pixelation.

Deep learning (DL) techniques such as Super-Resolution Convolutional Neural Network (SRCNN) and Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR) have shown remarkable results in objective quality assessments, such as VMAF, SSIM, and PSNR. However, they are computationally expensive, potentially making them less suitable for channels with limited budgets, which is often the case for FAST offerings. Amazon Web Services (AWS) and Intel propose a cost-efficient solution for video super-resolution. The solution leverages AWS Batch to process video assets using the Intel Library for Video Super Resolution (VSR), balancing quality and performance for real-world use cases. In this blog post, we describe a step-by-step implementation using an AWS CloudFormation template.

Solution

Implementing the Intel Library for Video Super Resolution, which is based on the enhanced RAISR algorithm, requires specific Amazon EC2 instance types, such as c5.2xlarge, c6i.2xlarge, and c7i.2xlarge. We leverage AWS Batch to run the compute jobs and automate the entire pipeline, rather than managing the underlying infrastructure, including starting and stopping instances.

The following are the main components of the solution:

  1. Create a compute environment in AWS Batch, where CPU requirements are defined, including the EC2 instance types allowed.
  2. Create a job queue associated with the compute environment. Each job submitted to this queue will be executed on the specified EC2 instances.
  3. Create a job definition. At this point, a container image must be registered in the Amazon Elastic Container Registry (Amazon ECR). Building the Docker image is detailed in this GitHub link. The container image includes the Intel Library for VSR, the open-source FFmpeg tool, and the AWS Command Line Interface (AWS CLI) used to perform API calls to S3 buckets. Once the job definition is in place (with the image registered in Amazon ECR), jobs can be submitted to the queue.
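The job definition from step 3 can also be created programmatically. The following is a minimal boto3 sketch; the job definition name and resource values are illustrative assumptions, not values prescribed by the template:

```python
def vsr_job_definition(image_uri, vcpu="4", memory_mib="4000"):
    """Build the parameters for batch.register_job_definition
    pointing at the VSR container image in Amazon ECR."""
    return {
        "jobDefinitionName": "vsr-jobDefinition",  # illustrative name
        "type": "container",
        "containerProperties": {
            "image": image_uri,
            # Default command; overridden per submitted job (see step 8).
            "command": ["/bin/sh", "main.sh"],
            "resourceRequirements": [
                {"type": "VCPU", "value": vcpu},
                {"type": "MEMORY", "value": memory_mib},
            ],
        },
    }

# To register it (requires AWS credentials):
# import boto3
# boto3.client("batch").register_job_definition(
#     **vsr_job_definition("<your ECR image URI>"))
```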

The following diagram represents the general architecture as previously described:

Figure 1: High-level architecture.

Implementation

A CloudFormation template is available in this GitHub repository. Following are the steps to deploy the proposed solution:

  1. Download the template.yml file from the GitHub repository
  2. Go to CloudFormation from the AWS Console to create a new stack using template.yml

Figure 2: Choose an existing template.

  3. The template allows definition of the following parameters:
    • Memory: Memory associated with the job definition. This value adjusts the minimum and maximum memory required, depending on the super-resolution job (for example, 1080p, AVC, 30 fps, 15:00 duration → 4000 memory and 4 vCPUs).
    • Subnet: AWS Batch deploys the proper EC2 instance types (c5.2xlarge, c6i.2xlarge, and c7i.2xlarge) in a selected customer subnet with internet access.
    • VPCName: Existing virtual private cloud (VPC) with which the selected Subnet is associated.
    • VSRImage: This field uses an existing public image, but a customer can create their own image and insert its URL in this field. Instructions to create a custom image are found here.
    • VCPU: Virtual CPUs (vCPUs) associated with the job definition. This value can also be adjusted.

Figure 3: Input parameters.
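As an alternative to entering these parameters in the console, the stack can be created with boto3. This is a sketch only; the stack name and parameter values are placeholders, and the parameter keys mirror the template inputs listed above:

```python
def vsr_stack_request(template_body, subnet_id, vpc_id, image_uri):
    """Build the parameters for cloudformation.create_stack
    using the template inputs described in the blog post."""
    return {
        "StackName": "vsr-batch",  # illustrative name
        "TemplateBody": template_body,
        "Parameters": [
            {"ParameterKey": "Memory", "ParameterValue": "4000"},
            {"ParameterKey": "Subnet", "ParameterValue": subnet_id},
            {"ParameterKey": "VPCName", "ParameterValue": vpc_id},
            {"ParameterKey": "VSRImage", "ParameterValue": image_uri},
            {"ParameterKey": "VCPU", "ParameterValue": "4"},
        ],
        # Needed if the template creates IAM roles for Batch.
        "Capabilities": ["CAPABILITY_IAM"],
    }

# To deploy (requires AWS credentials):
# import boto3
# with open("template.yml") as f:
#     boto3.client("cloudformation").create_stack(
#         **vsr_stack_request(f.read(), "<subnet-id>", "<vpc-id>", "<image URL>"))
```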

The next step creates a CloudFormation stack using the defined parameters.

Figure 4: Create CloudFormation stack.

  4. Once the stack has been successfully created, two new Amazon S3 buckets, with names starting with vsr-input and vsr-output, should be listed.

Figure 5: Verify two S3 buckets have been created (vsr-input and vsr-output).

  5. Upload an SD file to the vsr-input-xxxx-{region-name} bucket

Figure 6: Uploading an SD file to the input bucket.

  6. Go to Batch in the AWS console (Figure 7), open the dashboard, and validate that a new job queue (queue-vsr) and compute environment (VideoSuperResolution) have been created (Figure 8).

Figure 7: AWS Batch.

Figure 8: AWS Batch Dashboard.

  7. Within the Batch dashboard, click Jobs (left-side menu). Click Submit a new job, then select the proper job definition (vsr-jobDefiniton-xxxx) and queue (queue-vsr).

Figure 9: Configuring a job using an existing job definition and job queue.

  8. In the next screen, click Load from job definition and modify the names of the input and output files. For example, a user uploads a file named input-low-resolution.ts and wants to name the super-resolution output file output-high-resolution.ts. In this case, the proper array of Linux commands to add in the next interface would be:

["/bin/sh","main.sh","s3://vsr-input-106171535299-us-east-1-f37dd060","input-low-resolution.ts","s3://vsr-output-106171535299-us-east-1-f37dd060","output-high-resolution.ts"]

Figure 10: Job Command.
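For repeated runs, the command array above can be generated programmatically instead of being edited by hand in the console. A minimal sketch (the helper function name is our own; the bucket and file names follow the example above):

```python
import json

def vsr_job_command(input_bucket, input_key, output_bucket, output_key):
    """Build the container command array for a VSR Batch job."""
    return ["/bin/sh", "main.sh",
            f"s3://{input_bucket}", input_key,
            f"s3://{output_bucket}", output_key]

cmd = vsr_job_command(
    "vsr-input-106171535299-us-east-1-f37dd060", "input-low-resolution.ts",
    "vsr-output-106171535299-us-east-1-f37dd060", "output-high-resolution.ts")
print(json.dumps(cmd))

# The same array can be passed to batch.submit_job (requires AWS credentials):
# import boto3
# boto3.client("batch").submit_job(
#     jobName="vsr-example", jobQueue="queue-vsr",
#     jobDefinition="<your job definition>",
#     containerOverrides={"command": cmd})
```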

  9. Review and submit the job. Wait until the status transitions from Submitted (Figure 11) through Runnable to Succeeded (Figure 12). The AWS console also shows additional information, such as the number of job attempts.

Figure 11: Submitted Job.

Figure 12: Succeeded Job.
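The status check can also be automated with a simple polling loop around the Batch DescribeJobs API. A sketch, assuming a boto3 Batch client is passed in:

```python
import time

def wait_for_job(batch, job_id, poll_seconds=30):
    """Poll AWS Batch until the job reaches a terminal state
    (SUCCEEDED or FAILED) and return that status."""
    while True:
        job = batch.describe_jobs(jobs=[job_id])["jobs"][0]
        if job["status"] in {"SUCCEEDED", "FAILED"}:
            return job["status"]
        time.sleep(poll_seconds)

# Usage (requires AWS credentials):
# import boto3
# status = wait_for_job(boto3.client("batch"), "<job id>")
```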

  10. Go to the output Amazon S3 bucket to validate that the super-resolution file has been created and uploaded to vsr-output automatically.

Figure 13: Validation of the Super-resolution file.

Compare subjective and objective visual quality

The open-source compare-video tool can be used to perform a subjective quality evaluation between the original and super-resolution videos. In addition, an objective evaluation can be performed using VMAF. For the objective evaluation, a traditional upscaling method, such as Lanczos or bicubic, is used to match both resolutions before executing a frame-by-frame comparison. Following are visual examples:

Figure 14: Left image: LR upscaled using Lanczos, Right image: HR using VSR library

Figure 15: Left image: LR upscaled using Lanczos, Right image: HR using VSR library

Figure 16: Left image: LR upscaled using Lanczos, Right image: HR using VSR library

Figure 17: Left image: LR upscaled using Lanczos, Right image: HR using VSR library
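The objective comparison described above can be scripted with FFmpeg's libvmaf filter. The sketch below builds the command line only; it assumes an FFmpeg build with libvmaf enabled, and the function name and 1080p defaults are our own:

```python
def vmaf_command(sr_file, sd_file, width=1920, height=1080, log_path="vmaf.json"):
    """Build an FFmpeg command that upscales the SD source with Lanczos
    to match the super-resolution output, then computes VMAF frame by frame."""
    filters = (
        # Input 1 (SD source) is upscaled with Lanczos to act as reference.
        f"[1:v]scale={width}:{height}:flags=lanczos[ref];"
        # Input 0 (super-resolution output) is compared against it.
        f"[0:v][ref]libvmaf=log_fmt=json:log_path={log_path}"
    )
    return ["ffmpeg", "-i", sr_file, "-i", sd_file,
            "-lavfi", filters, "-f", "null", "-"]

# Usage (requires FFmpeg with libvmaf on the PATH):
# import subprocess
# subprocess.run(vmaf_command("output-high-resolution.ts",
#                             "input-low-resolution.ts"), check=True)
```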

Clean up

To delete the example stack we created during this solution, go to CloudFormation, click on Delete stack, and wait until it successfully completes (Figure 18).

Figure 18: Delete Stack.
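Note that CloudFormation cannot delete an S3 bucket that still contains objects, so empty the vsr-input and vsr-output buckets before deleting the stack. A minimal boto3 sketch (bucket names in the usage comment are placeholders):

```python
def empty_bucket(s3, bucket):
    """Delete all objects in the bucket so CloudFormation can remove it."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
        if keys:
            s3.delete_objects(Bucket=bucket, Delete={"Objects": keys})

# Usage (requires AWS credentials):
# import boto3
# s3 = boto3.client("s3")
# for bucket in ("<vsr-input bucket name>", "<vsr-output bucket name>"):
#     empty_bucket(s3, bucket)
```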

Conclusion

In this post, we described a solution that enables the use of super resolution with smooth integration into existing transcoding pipelines. It is a cost-effective way to help with the adoption of future super-resolution enhancements.

By applying video super resolution using the Intel Video Super Resolution Library, you can upscale and sharpen low-resolution footage, transforming pixelated or blurry videos into crisp, high-definition content. Unlock new monetization opportunities by enabling the repurposing and distribution of archival footage across modern platforms and devices.

Special thanks to Surbhi Madan from the Intel team who contributed greatly to make this solution possible.

Contact an AWS Representative to learn how we can help accelerate your business.

Further Reading

Intel Open Omics Acceleration Framework on AWS: fast, cost-efficient, and seamless

AWS Batch Dos and Don’ts: Best Practices in a Nutshell

Create super resolution for legacy media content at scale with generative AI and AWS

Intel® Library for Video Super Resolution (Intel® Library for VSR)

Carlos Salazar


Carlos Salazar is an Edge Specialist Solutions Architect, math lover, and video compression/ML PhD. With more than 13 years of experience in the video analysis industry, spanning compression and codecs, he is passionate about topics related to video algorithms, super resolution, video restoration/curation, and AI/ML. He is also an active member of several organizations, including ACM MHV, ITU, and the DASH organization, among others.

Osmar Bento


Osmar Bento is a Senior Solution Architect specializing in Direct-to-Consumer experiences for M&E, Gaming, and Sports at AWS. Osmar collaborates with customers to innovate and create tailored solutions using the AWS platform, enhancing their media and entertainment operations.

Arturo Velasco


Arturo Velasco is a Media and Entertainment Specialist Solutions Architect with more than 12 years of experience in the industry. His background includes satellite direct-to-home, IPTV, cable HFC, and OTT video systems. His goal is to help customers understand how they can make use of best practices, and to evangelize Media and Entertainment solutions built on AWS.