AWS Insights

NinjaTech AI is powering the next generation of productivity agents with AWS

A GIF of Amazon Bedrock being used in NinjaTech AI's MyNinja user interface.

In an era where knowledge workers are drowning in everyday tasks, one AI startup is harnessing the power of Amazon Web Services (AWS) to build a new class of intelligent AI agents that can take the time-consuming busywork out of everyday work.

NinjaTech AI’s platform, MyNinja.ai, is a multi-agent, multi-model AI system that can handle a wide range of productivity-boosting functions—from drafting job postings and scheduling interviews to conducting complex research, writing code, and generating images. But what sets Ninja apart is how it orchestrates those agents and models: deep integration with AWS technologies provides the scalable infrastructure and advanced AI capabilities to deliver that orchestration in a single interface, for a single subscription price.

According to research from Goldman Sachs, the average knowledge worker can spend up to 15 hours per week on everyday time-consuming tasks, including reading and responding to emails, researching and compiling information, scheduling and preparing for meetings, and booking travel. For managers, that figure can be up to 20 hours per week.

“Our mission is to help everyone be more productive by democratizing access to the best AI agents and the best foundation models in the world,” says Babak Pahlavan, founder and CEO of NinjaTech AI. “If we can give knowledge workers back even a meaningful amount of time every week in an affordable and delightful way, that equates to a substantial increase in productivity.”

Accelerating model training and deployment with custom AWS chips

At the heart of MyNinja’s capabilities is NinjaTech AI’s own custom large language model, which the company has fine-tuned using Meta’s Llama 3 as a starting point. But training and iterating on a model of that scale requires serious computational power – something that NinjaTech AI has found in AWS Trainium and AWS Inferentia chips.

Every day, says Arash Sadrieh, NinjaTech AI’s co-founder and chief science officer, the company’s scientists work to make its custom model’s results more closely aligned with users’ expectations. NinjaTech AI uses a cluster of Trainium chips to accelerate the training of its AI model.

“The good thing is we can basically request this acceleration on demand, and we can finish the fine-tuning round in under just three hours,” Sadrieh says.
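To make the shape of such a fine-tuning round concrete, the sketch below shows what training a Llama 3 base model on a Trainium (trn1) instance can look like using Hugging Face’s optimum-neuron library. The dataset file, output directory, and hyperparameters are illustrative assumptions, not NinjaTech AI’s actual configuration.

    # Illustrative fine-tuning sketch for a Llama 3 base model on an AWS Trainium
    # (trn1) instance, using Hugging Face's optimum-neuron library. The dataset
    # path, output directory, and hyperparameters are assumptions, not NinjaTech
    # AI's actual setup.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling)
    from optimum.neuron import NeuronTrainer, NeuronTrainingArguments

    model_id = "meta-llama/Meta-Llama-3-8B"  # Llama 3 as the starting point
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Hypothetical instruction-tuning data: one JSON object per line, "text" field
    dataset = load_dataset("json", data_files="finetune_examples.jsonl", split="train")
    dataset = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
        batched=True,
        remove_columns=dataset.column_names,
    )

    args = NeuronTrainingArguments(
        output_dir="ninja-llm-checkpoint",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        learning_rate=1e-5,
        bf16=True,
    )

    trainer = NeuronTrainer(
        model=model,
        args=args,
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

In practice a script like this would be launched across the instance’s NeuronCores, with the Trainium capacity requested for the duration of the run and released when the round finishes.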

The company also uses Amazon SageMaker to quickly evaluate the model’s outputs and determine where further refinements are needed. SageMaker is a fully managed service that brings together a broad set of tools to enable high-performance, low-cost machine learning (ML) for any use case. “It’s really helpful with this fast feedback,” says Sam Naghshineh, co-founder and chief technology officer, adding that NinjaTech AI scientists can quickly evaluate the output to make sure it’s delivering what the company wants, and then, if needed, run another round of fine-tuning.
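The evaluation harness itself isn’t described here, but a fast feedback loop of this kind often amounts to hosting the latest checkpoint behind a SageMaker real-time endpoint and spot-checking it against held-out prompts. The sketch below assumes a hypothetical endpoint name and prompt set.

    # Illustrative spot-check of a fine-tuned model hosted on a SageMaker
    # real-time endpoint. The endpoint name and prompts are hypothetical.
    import json
    import boto3

    runtime = boto3.client("sagemaker-runtime")

    eval_prompts = [
        "Draft a job posting for a senior data engineer.",
        "Summarize the key risks of migrating a monolith to microservices.",
    ]

    for prompt in eval_prompts:
        response = runtime.invoke_endpoint(
            EndpointName="ninja-llm-eval",  # hypothetical endpoint name
            ContentType="application/json",
            Body=json.dumps({"inputs": prompt,
                             "parameters": {"max_new_tokens": 256}}),
        )
        output = json.loads(response["Body"].read())
        print(prompt, "->", output)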

Importantly, the price-performance benefits of Trainium and Inferentia have been a game-changer for the startup. “Being a startup, cost and flexibility are absolutely essential for us,” Pahlavan says, noting that it would be 80 percent more expensive for NinjaTech AI to use regular GPUs for training its custom model.

Tapping into leading AI models with Amazon Bedrock

While NinjaTech AI’s custom model – Ninja LLM – forms the foundation of Ninja’s capabilities, the platform also integrates directly with Amazon Bedrock, a serverless managed service that provides access to a range of leading LLMs through a single API. Bedrock also offers a broad set of capabilities for building generative AI applications with security, privacy, and responsible AI.

Bedrock lets you experiment with and evaluate top models for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources.

Critically, Bedrock gives MyNinja users access to a wide range of premium large language models, including Anthropic’s Claude models as well as models from Mistral AI, Cohere, AI21 Labs, and AWS’s own Amazon Titan family. Prior to Bedrock, NinjaTech AI also supported models from OpenAI and Google through direct integrations, but with Bedrock the company was able to add 14 models through a single integration in a span of 10 days, giving users access to 24 LLMs, with more to come.
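Part of what makes a single integration cover so many models is that Bedrock exposes them behind one runtime API. A minimal sketch of calling a Bedrock-hosted model with boto3’s Converse API looks like the following; the model ID shown is illustrative, and availability varies by AWS Region.

    # Minimal example of calling a Bedrock-hosted model through the Converse API.
    # The model ID is illustrative; availability varies by AWS Region.
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = bedrock.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        messages=[{"role": "user",
                   "content": [{"text": "Outline a 30-minute interview plan for a UX designer."}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.3},
    )

    print(response["output"]["message"]["content"][0]["text"])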

A GIF of NinjaTech AI's MyNinja image generator

Kurt Wilkinson, NinjaTech’s VP of Go-To-Market, says the company has always believed that there is “no single large language model that will rule them all,” and that “giving our users a wide range of models to work with is key. Each LLM has different strengths, and their performances vary across a variety of tasks and purposes,” he says.

“By integrating Amazon Bedrock directly into MyNinja.ai’s interface, we’re giving users the ability to access an incredible variety of premium LLMs for a wide range of potential tasks and compare their answers side-by-side without the need to jump between application tabs,” Wilkinson says. “We are determined to give users a wide choice of models for an affordable price, and to make the navigation of that choice easy.”

A key factor in the company’s decision to use Bedrock is that it puts all of the models in one place, removing the need for individual integrations with each model provider.

Another benefit is that all of the models can be accessed through a single API, which gives companies greater flexibility to use different models and upgrade to the latest model versions with minimal code changes.
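Because every model sits behind that same API, comparing models or moving to a newer version is largely a matter of changing the model ID. The sketch below runs one prompt across a few example model IDs; the exact set of models MyNinja exposes is not enumerated here.

    # Running the same prompt against several Bedrock models; only the model ID
    # changes. The IDs listed are examples and may differ by Region or over time.
    import boto3

    bedrock = boto3.client("bedrock-runtime")

    model_ids = [
        "anthropic.claude-3-sonnet-20240229-v1:0",
        "mistral.mistral-large-2402-v1:0",
        "cohere.command-r-v1:0",
    ]

    prompt = [{"role": "user",
               "content": [{"text": "Suggest three agenda items for a product design review."}]}]

    for model_id in model_ids:
        reply = bedrock.converse(
            modelId=model_id,
            messages=prompt,
            inferenceConfig={"maxTokens": 256},
        )
        print(f"--- {model_id} ---")
        print(reply["output"]["message"]["content"][0]["text"])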

“It’s not just about access to the models, it’s about it being simple to execute, and a single point of access,” Pahlavan says. “All of that made this much faster than if we had to do each of them individually.”

Delivering productivity at scale

Ultimately, NinjaTech AI’s vision is to free up knowledge workers from the time-consuming nature of repetitive tasks, giving them back valuable time in their day. And the company believes that AWS is essential to making that vision a reality.

“Our goal is that, even if we can give you one hour back each week by automatically doing those tasks like scheduling your meeting, negotiating the times, and doing everything on your behalf, that would be a great productivity boost for our users,” Pahlavan says.

Learn more:
Generative AI on AWS
NinjaTech AI simplifies everyday tasks with AWS generative AI