Meta Llama in Amazon Bedrock

Build the future of AI with Llama

Introducing Llama 3.2

Introducing Llama 3.2 from Meta, a new generation of vision and lightweight models that fit on edge devices, enabling more personalized AI experiences. Llama 3.2 includes small and medium-sized vision LLMs (11B and 90B) that support image reasoning, and lightweight, text-only models (1B and 3B) suited to on-device use cases. The new models are designed to be more accessible and efficient, with a focus on responsible innovation and system-level safety.

Llama 3.2 90B is Meta’s most advanced model and is ideal for enterprise-level applications. Llama 3.2 is the first Llama model to support vision tasks, with a new model architecture that integrates image encoder representations into the language model. This model excels at general knowledge, long-form text generation, multilingual translation, coding, math, and advanced reasoning. It also introduces image reasoning capabilities, allowing for sophisticated image understanding and visual reasoning. This model is ideal for the following use cases: image captioning, image-text retrieval, visual grounding, visual question answering and visual reasoning, and document visual question answering.

Llama 3.2 11B is well-suited for content creation, conversational AI, language understanding, and enterprise applications requiring visual reasoning. The model demonstrates strong performance in text summarization, sentiment analysis, code generation, and following instructions, with the added ability to reason about images. This model is ideal for the following use cases: image captioning, image-text retrieval, visual grounding, visual question answering and visual reasoning, and document visual question answering.

Llama 3.2 3B offers a more personalized AI experience, with on-device processing. Llama 3.2 3B is designed for applications requiring low-latency inferencing and limited computational resources. It excels at text summarization, classification, and language translation tasks. This model is ideal for the following use cases: mobile AI-powered writing assistants and customer service applications.

Llama 3.2 1B is the most lightweight model in the Llama 3.2 collection of models and is perfect for retrieval and summarization for edge devices and mobile applications. It enables on-device AI capabilities while preserving user privacy and minimizing latency. This model is ideal for the following use cases: personal information management and multilingual knowledge retrieval.

Benefits

Llama 3.2 offers a more personalized AI experience, with on-device processing. The Llama 3.2 models are designed to be more efficient, with reduced latency and improved performance, making them suitable for a wide range of applications.
128K context length allows Llama to capture even more nuanced relationships in data.
Llama models are trained on over 15 trillion tokens from publicly available online data sources to better comprehend language intricacies.
Llama 3.2 is multilingual and supports eight languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
Amazon Bedrock's managed API makes using Llama models easier than ever. Organizations of all sizes can access the power of Llama without worrying about the underlying infrastructure. Since Amazon Bedrock is serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy the generative AI capabilities of Llama into your applications using the AWS services you are already familiar with. This means you can focus on what you do best—building your AI applications.
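As a minimal sketch of what the managed API looks like in practice, the snippet below calls a Llama model through the Bedrock Converse API with the AWS SDK for Python. The model ID, region, and inference parameters shown are assumptions; check which Llama models are enabled in your own account.

```python
# Minimal sketch of calling a Llama model through Amazon Bedrock's Converse
# API via boto3 (the AWS SDK for Python). The model ID and region below are
# assumptions -- verify the identifiers available in your account and region.
MODEL_ID = "meta.llama3-1-70b-instruct-v1:0"  # assumed model identifier


def build_messages(user_text: str) -> list:
    """Build the Converse API message list for a single user turn."""
    return [{"role": "user", "content": [{"text": user_text}]}]


def ask_llama(user_text: str, region: str = "us-east-1") -> str:
    """Send one prompt to the model and return the assistant's text reply."""
    import boto3  # requires the AWS SDK for Python and valid AWS credentials

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(
        modelId=MODEL_ID,
        messages=build_messages(user_text),
        inferenceConfig={"maxTokens": 256, "temperature": 0.5},
    )
    # The reply text is the first content block of the output message.
    return response["output"]["message"]["content"][0]["text"]
```

Because Bedrock exposes the same Converse interface across providers, switching to a different Llama size is typically just a change of model ID.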

Meet Llama

For over a decade, Meta has focused on putting tools into the hands of developers and fostering collaboration and advancement among developers, researchers, and organizations. Llama models are available in a range of parameter sizes, enabling developers to select the model that best fits their needs and inference budget. Llama models in Amazon Bedrock open up a world of possibilities because developers don't need to worry about scalability or managing infrastructure. Amazon Bedrock is a simple, turnkey way for developers to get started with Llama.

Use cases

Llama models excel at image understanding and visual reasoning, language nuances, contextual understanding, and complex tasks such as visual data analysis, image captioning, dialogue generation, and translation, and can handle multi-step tasks effortlessly. Additional use cases where Llama models are a great fit include sophisticated visual reasoning and understanding, image-text retrieval, visual grounding, document visual question answering, text summarization, text classification, sentiment analysis and nuanced reasoning, language modeling, dialogue systems, code generation, and following instructions.

Model versions

Llama 3.2 90B

Multimodal model that accepts both text and image inputs and produces text output. Ideal for applications requiring sophisticated visual intelligence, such as image analysis, document processing, multimodal chatbots, and autonomous systems.

Max tokens: 128K

Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Fine-tuning supported: No

Supported use cases: Image understanding, visual reasoning, and multimodal interaction, enabling advanced applications such as image captioning, image-text retrieval, visual grounding, visual question answering, and document visual question answering, with a unique ability to reason and draw conclusions from visual and textual inputs.
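To illustrate how an image and a question are paired in a single request, the sketch below builds a multimodal Converse API message. The field names follow the Bedrock Converse API's content-block structure; the model ID in the usage comment is an assumption.

```python
# Sketch of a multimodal Converse API message pairing an image with a text
# question, as used for image captioning or visual question answering.
# Content-block field names follow the Bedrock Converse API.


def build_image_message(question: str, image_bytes: bytes, fmt: str = "png") -> dict:
    """One user turn containing an image block followed by a text block."""
    return {
        "role": "user",
        "content": [
            {"image": {"format": fmt, "source": {"bytes": image_bytes}}},
            {"text": question},
        ],
    }


# Usage with boto3 (not executed here; requires AWS credentials, and the
# model ID is an assumption):
#   client = boto3.client("bedrock-runtime")
#   with open("chart.png", "rb") as f:
#       msg = build_image_message("What trend does this chart show?", f.read())
#   client.converse(modelId="us.meta.llama3-2-90b-instruct-v1:0", messages=[msg])
```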

Read the blog

Llama 3.2 11B

Multimodal model that accepts both text and image inputs and produces text output. Ideal for applications requiring sophisticated visual intelligence, such as image analysis, document processing, and multimodal chatbots.

Max tokens: 128K

Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Fine-tuning supported: No

Supported use cases: Image understanding, visual reasoning, and multimodal interaction, enabling advanced applications such as image captioning, image-text retrieval, visual grounding, visual question answering, and document visual question answering.

Read the blog

Llama 3.2 3B

Text-only lightweight model built to deliver highly accurate and relevant results. Designed for applications requiring low-latency inferencing with limited computational resources. Ideal for query and prompt rewriting, mobile AI-powered writing assistants, and customer service chatbots, particularly on edge devices, where its efficiency and low latency enable seamless integration.

Max tokens: 128K

Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Fine-tuning supported: No

Supported use cases: Advanced text generation, summarization, sentiment analysis, emotional intelligence, contextual understanding, and common sense reasoning.
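For low-latency text tasks like these, a lightweight model can also be called through Bedrock's InvokeModel API with Llama's native request body. The sketch below shows the Llama 3 instruct chat template and the JSON body; the parameter names follow the Llama request schema on Bedrock, and the model ID in the usage comment is an assumption, so verify both against current documentation.

```python
# Sketch of the Llama 3 instruct prompt template and the native request body
# used with Bedrock's InvokeModel API. Parameter names (prompt, max_gen_len,
# temperature, top_p) follow the Llama request schema on Bedrock -- verify
# against the current documentation before relying on them.
import json


def format_llama3_prompt(user_message: str) -> str:
    """Wrap a single user turn in Llama 3's special-token chat template."""
    return (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


def build_request_body(user_message: str) -> str:
    """JSON body for InvokeModel: the formatted prompt plus generation knobs."""
    return json.dumps({
        "prompt": format_llama3_prompt(user_message),
        "max_gen_len": 256,   # keep responses short for low-latency use
        "temperature": 0.2,
        "top_p": 0.9,
    })


# Usage with boto3 (requires AWS credentials; model ID is an assumption):
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(
#       modelId="meta.llama3-2-3b-instruct-v1:0",
#       body=build_request_body("Classify the sentiment: 'Great service!'"),
#   )
#   print(json.loads(resp["body"].read())["generation"])
```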

Read the blog

Llama 3.2 1B

Text-only lightweight model built to deliver fast and accurate responses. Ideal for edge devices and mobile applications. The model enables on-device AI capabilities while preserving user privacy and minimizing latency.

Max tokens: 128K

Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Fine-tuning supported: No

Supported use cases: Multilingual dialogue use cases such as personal information management, multilingual knowledge retrieval, and rewriting tasks.

Read the blog

Llama 3.1 8B

Ideal for limited computational power and resources, faster training times, and edge devices.

Max tokens: 128K

Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Fine-tuning supported: Yes

Supported use cases: Text summarization, text classification, sentiment analysis, and language translation.

Read the blog

Llama 3.1 70B

Ideal for content creation, conversational AI, language understanding, research and development, and enterprise applications. With new latency-optimized inference capabilities available in public preview, this model sets a new performance benchmark for AI solutions that process extensive text inputs, enabling applications to respond more quickly and handle longer queries more efficiently.

Max tokens: 128K

Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Fine-tuning supported: Yes

Supported use cases: Text summarization, text classification, sentiment analysis, and language translation.

Read the blog

Llama 3.1 405B

Ideal for enterprise-level applications, research and development, synthetic data generation, and model distillation. With latency-optimized inference capabilities available in public preview, this model delivers exceptional performance and scalability, enabling organizations to accelerate their AI initiatives while maintaining high-quality outputs across diverse use cases.

Max tokens: 128K

Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Fine-tuning supported: Coming soon

Supported use cases: General knowledge, long-form text generation, machine translation, enhanced contextual understanding, advanced reasoning and decision making, better handling of ambiguity and uncertainty, increased creativity and diversity, steerability, math, tool use, multilingual translation, and coding.

Read the blog

Llama 3 8B

Ideal for limited computational power and resources, faster training times, and edge devices.

Max tokens: 8K

Languages: English

Fine-tuning supported: No

Supported use cases: Text summarization, text classification, sentiment analysis, and language translation.

Read the blog

Llama 3 70B

Ideal for content creation, conversational AI, language understanding, research and development, and enterprise applications.

Max tokens: 8K

Languages: English

Fine-tuning supported: No

Supported use cases: Text summarization and accuracy, text classification and nuance, sentiment analysis and nuance reasoning, language modeling, dialogue systems, code generation, and following instructions.

Read the blog

Llama 2 13B

Fine-tuned model with 13B parameters. Suitable for smaller-scale tasks such as text classification, sentiment analysis, and language translation.

Max tokens: 4K

Languages: English

Fine-tuning supported: Yes

Supported use cases: Assistant-like chat

Read the blog

Llama 2 70B

Fine-tuned model with 70B parameters. Suitable for larger-scale tasks such as language modeling, text generation, and dialogue systems.

Max tokens: 4K

Languages: English

Fine-tuning supported: Yes

Supported use cases: Assistant-like chat

Read the blog

Nomura uses Llama models from Meta in Amazon Bedrock to democratize generative AI

Aniruddh Singh, Nomura's Executive Director and Enterprise Architect, outlines the financial institution’s journey to democratize generative AI firm-wide using Amazon Bedrock and Llama models from Meta. Amazon Bedrock provides critical access to leading foundation models like Llama, enabling seamless integration. Llama offers key benefits to Nomura, including faster innovation, transparency, bias guardrails, and robust performance across text summarization, code generation, log analysis, and document processing. 

TaskUs revolutionizes customer experiences using Llama models from Meta in Amazon Bedrock

TaskUs, a leading provider of outsourced digital services and next-generation customer experience to the world’s most innovative companies, helps its clients represent, protect, and grow their brands. Its innovative TaskGPT platform, powered by Amazon Bedrock and Llama models from Meta, empowers teammates to deliver exceptional service. TaskUs builds tools on TaskGPT that leverage Amazon Bedrock and Llama for cost-effective paraphrasing, content generation, comprehension, and complex task handling.