LILT Fine-Tunes Multilingual Generative AI Models with NVIDIA NeMo on AWS
Deployed NVIDIA NeMo for faster model fine-tuning
Enabled real-time model fine-tuning to incorporate linguists’ edits
Increased throughput by 30X
Overview
LILT, a multilingual content translation and generation platform, helps customers use generative artificial intelligence (AI) to localize content, support go-to-market outreach, and improve customer experiences. Since its inception in 2015, LILT has run on Amazon Web Services (AWS) and used Amazon EC2 G4dn instances, powered by NVIDIA GPUs, to run its platform. Recently, LILT also deployed NVIDIA NeMo, part of AWS Partner NVIDIA’s software stack, on AWS to build its multilingual generative AI models and accelerate model fine-tuning for faster translation and higher-quality content generation.
Growing Content Production Solution Calls for Faster Model Fine-Tuning
LILT is known for bringing human-powered, technology-assisted translations to global enterprises. Its translation and content generation solution empowers product, marketing, support, ecommerce, and localization teams to deliver exceptional customer experiences to global audiences. To produce content, LILT developed its own multilingual generative AI models using NVIDIA GPUs to power model training, fine-tuning, and retraining. However, as LILT scaled, it needed the ability to fine-tune and retrain models that were five to 50 times bigger—all in real time—and capture higher-quality edits from its linguists.
“The speed and frameworks that NVIDIA provides are so important to LILT. With these, we’re able to improve our models with the terms, tone, and colloquialisms our customers use—ultimately delivering better multilingual content and translations.”
Omar Orqueda
Vice President, AI Research and Engineering, LILT
Fine-Tuning Models in Real Time with NVIDIA NeMo
By adopting NVIDIA NeMo, a cloud-native framework for building generative AI models, LILT has accelerated fine-tuning. NVIDIA NeMo is included as a part of NVIDIA AI Enterprise, a secure, cloud-native, end-to-end software platform that streamlines building and deployment of production-grade AI applications, including generative AI. “The speed and frameworks that NVIDIA provides are so important to LILT,” said Omar Orqueda, vice president, AI research and engineering at LILT. “With these, we’re able to improve our models with the terms, tone, and colloquialisms our customers use—ultimately delivering better multilingual content.”
The combination of Amazon EC2 G4dn.12xlarge instances, powered by NVIDIA GPUs, and NVIDIA NeMo allows LILT to implement a real-time human-in-the-loop approach for all verified translations. This ensures all suggested changes from LILT’s linguists are used for model fine-tuning—helping to produce more accurate content. “The NVIDIA computing power makes our linguists more efficient,” Orqueda said. “They create training data for us that we can use to make our models better.”
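To make that workflow concrete, here is a minimal sketch of fine-tuning a pretrained NeMo translation model on newly verified segment pairs. It is illustrative rather than LILT’s production pipeline: the checkpoint name, file paths, and dataset config keys are assumptions based on the open-source NeMo toolkit and may differ across NeMo versions.

```python
# A minimal sketch assuming the open-source NVIDIA NeMo toolkit
# (nemo_toolkit[nlp]); not LILT's production pipeline.
import pytorch_lightning as pl
from omegaconf import OmegaConf
from nemo.collections.nlp.models import MTEncDecModel

# Start from a public pretrained encoder-decoder translation checkpoint.
model = MTEncDecModel.from_pretrained("nmt_en_de_transformer12x2")

# Translate a batch of source sentences with the current weights.
print(model.translate(
    ["The quarterly report is ready for review."],
    source_lang="en",
    target_lang="de",
))

# Point the model at parallel files built from linguist-verified edits
# (hypothetical paths; the config keys mirror NeMo's NMT dataset configs
# and may vary by version).
train_cfg = OmegaConf.create({
    "src_file_name": "verified_segments.en",
    "tgt_file_name": "verified_segments.de",
    "tokens_in_batch": 4096,
    "shuffle": True,
})
model.setup_training_data(train_cfg)

# Run a short fine-tuning pass so verified edits show up in the model quickly.
trainer = pl.Trainer(max_steps=200, accelerator="gpu", devices=1)
trainer.fit(model)
```

The same loop can run continuously: as linguists verify segments, the new pairs are appended to the training files and another short fine-tuning pass folds them into the model.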
Providing Customers the Freedom to Choose with Amazon Bedrock
As a solution that supports many different use cases, LILT structured its platform to provide customers the flexibility to choose from a variety of models. For multilingual content creation, LILT offers Amazon Bedrock, a fully managed service that makes foundation models from leading AI startups and Amazon available via an API. “We believe in the power of choice for LILT customers, and Amazon Bedrock lets them decide which model is best suited for their needs,” Orqueda said.
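As a rough illustration of what that choice of model looks like for a caller, the sketch below uses the Amazon Bedrock Converse API through boto3. The model ID and prompt are placeholders rather than details of LILT’s integration.

```python
import boto3

# A minimal sketch of calling Amazon Bedrock with a caller-selected model;
# the model ID is an example and must be enabled in your account.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # swap in whichever model fits the use case
    messages=[{
        "role": "user",
        "content": [{"text": "Translate into German: The quarterly report is ready for review."}],
    }],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```

Because the model is just a request parameter, switching to a different provider’s foundation model is a one-line change rather than a new integration.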
The fact that Amazon Bedrock can be deployed on AWS GovCloud (US) also allows LILT’s public sector customers to access a suite of models. “Many of our customers in the public sector need to deploy very sensitive content, so they’re using LILT on AWS GovCloud,” Orqueda said. “That way they can scale their loads and translate content in real time.”
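From the SDK’s perspective, targeting those public sector workloads mostly comes down to using AWS GovCloud (US) credentials and a GovCloud Region name. The snippet below is a hedged sketch that assumes Amazon Bedrock access has been granted in the GovCloud account.

```python
import boto3

# us-gov-west-1 is one of the two AWS GovCloud (US) Regions; this assumes
# Amazon Bedrock is available and enabled in that account.
govcloud_bedrock = boto3.client("bedrock-runtime", region_name="us-gov-west-1")
```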
Translating 150K Characters Per Second with 30X Faster Throughput
Running NVIDIA NeMo on AWS has increased LILT’s throughput by 30 times, and the platform can now translate up to 150,000 characters per second. Faster fine-tuning also means that linguists’ verified edits are reflected in the models in real time, improving the quality of the translations that follow.
Ensuring Large Volume Translations Are Fast and Accurate
Not only can customers choose their model on LILT, but they can also adapt it to their own requirements. This ensures high-quality batch machine translations, so customers receive accurate content even when there is no human in the loop. “In cases where the answer is needed right away, customers can use our machine translations, based on a model that still gives excellent outputs,” Orqueda said.
In just a few years, LILT has grown its business and developed models that deliver a quality product for customers. “Looking ahead, LILT will continue delivering an extremely good experience for customers by using the best possible models and also creating the right training data for their particular use cases,” concluded Orqueda. “The next stage for us is making LILT a completely open platform where every single company can offer their multilingual services.”
About LILT
LILT is a multilingual content translation and generation platform bringing human-powered, AI-assisted language translation and localization services to global enterprises.
About AWS Partner NVIDIA
Since its founding in 1993, NVIDIA has been a pioneer in accelerated computing. The company’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI, and is fueling industrial digitalization across markets. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry.
AWS Services Used
Amazon Bedrock
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI.
AWS GovCloud (US)
AWS GovCloud (US) enables customers to adhere to ITAR regulations, FedRAMP requirements, the Defense Federal Acquisition Regulation Supplement (DFARS), the Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG) Impact Levels 2, 4, and 5, and several other security and compliance requirements.
Get Started
Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today.