Getting started with Amazon SageMaker JumpStart
Overview
Amazon SageMaker JumpStart is a machine learning (ML) hub that can help you accelerate your ML journey. Learn how to use pre-trained models, pre-trained foundation models, and pre-built solutions from the model hub, and how to get started with built-in algorithms for solving common use cases. To get started, refer to the documentation or the ready-to-run example notebooks.
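As a quick illustration of the programmatic path, the sketch below deploys a JumpStart model with the SageMaker Python SDK's JumpStartModel class. The model ID, payload shape, and prompt are illustrative assumptions; the exact values for a given model are shown on its model card in SageMaker Studio.

```python
# Minimal sketch: deploy a JumpStart foundation model with the SageMaker Python SDK.
# Assumes the SDK is installed (pip install sagemaker) and AWS credentials with
# SageMaker permissions are configured. The model ID and payload format are
# illustrative; check the model card for the exact values.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="meta-textgeneration-llama-3-8b-instruct")

# Gated models such as the Meta Llama family require accepting the end-user license agreement.
predictor = model.deploy(accept_eula=True)

# Payload schema varies by model; text-generation models generally accept an "inputs" field.
response = predictor.predict({"inputs": "What is Amazon SageMaker JumpStart?"})
print(response)

# Delete the endpoint when finished to avoid ongoing charges.
predictor.delete_endpoint()
```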
The model hub currently lists a total of 567 models.
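Because the catalog is large, it can be easier to discover model IDs programmatically than by scrolling the UI. The SDK's list_jumpstart_models utility returns the current model IDs; the substring filter below is only an illustration.

```python
# Minimal sketch: enumerate JumpStart model IDs with the SageMaker Python SDK.
from sagemaker.jumpstart.notebook_utils import list_jumpstart_models

all_model_ids = list_jumpstart_models()  # returns a list of model ID strings
llama_ids = [m for m in all_model_ids if "llama" in m]  # simple substring filter for illustration

print(f"{len(all_model_ids)} models available, {len(llama_ids)} of them Llama variants")
print(llama_ids[:5])
```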
A sample of the featured foundation models, all provided by Meta, is shown below. Models marked Fine-tunable can be fine-tuned on your own data in addition to being deployed; Deploy only models support deployment for inference only. A sketch of fine-tuning through the SDK follows the list.

- Meta-Llama-3-70B-Instruct (Text Generation, Fine-tunable): 70B instruction-tuned variant of the Llama 3 models. Llama 3 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
- Meta-Llama-3.1-405B-FP8 (Text Generation, Fine-tunable): 405B variant of the Llama 3.1 models. Llama 3.1 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
- Meta-Llama-3.1-8B-Instruct (Text Generation, Fine-tunable): 8B instruction-tuned variant of the Llama 3.1 models. Llama 3.1 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
- Meta-Llama-3.1-70B-Instruct (Text Generation, Fine-tunable): 70B instruction-tuned variant of the Llama 3.1 models. Llama 3.1 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
- Meta Llama 3.2 3B Instruct (Text Generation, Fine-tunable): 3B instruction-tuned variant of the Llama 3.2 models. Llama 3.2 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
- Meta Llama 3.2 1B Instruct (Text Generation, Fine-tunable): 1B instruction-tuned variant of the Llama 3.2 models. Llama 3.2 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
- Meta Llama 3.2 11B Vision Instruct (Vision Language, Deploy only): 11B instruction-tuned variant of the Llama 3.2 models that supports both text and image as input.
- Llama 2 13B (Text Generation, Fine-tunable): 13B variant of the Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture, is intended for commercial and research use in English, and comes in parameter sizes of 7 billion, 13 billion, and 70 billion, as well as pre-trained and fine-tuned variations.
- Llama 3 (Text Generation, Deploy only): Llama 3 from Meta comes in two parameter sizes, 8B and 70B, with an 8K context length, and can support a broad range of use cases with improvements in reasoning, code generation, and instruction following.
- Llama 2 70B (Text Generation, Fine-tunable): 70B variant of the Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture, is intended for commercial and research use in English, and comes in parameter sizes of 7 billion, 13 billion, and 70 billion, as well as pre-trained and fine-tuned variations.
- Llama 2 7B (Text Generation, Fine-tunable): 7B variant of the Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture, is intended for commercial and research use in English, and comes in parameter sizes of 7 billion, 13 billion, and 70 billion, as well as pre-trained and fine-tuned variations.
- Llama 2 70B Chat (Text Generation, Fine-tunable): 70B variant of the Llama 2 models optimized for dialogue use cases. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture, is intended for commercial and research use in English, and comes in parameter sizes of 7 billion, 13 billion, and 70 billion, as well as pre-trained and fine-tuned variations.
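For the models marked Fine-tunable, the SDK's JumpStartEstimator class wraps the corresponding built-in training recipe. The sketch below is a minimal illustration only; the model ID, hyperparameter names, and S3 path are assumptions to be replaced with values from the chosen model's card and your own account.

```python
# Minimal sketch: fine-tune a Fine-tunable JumpStart model, then deploy the result.
# The model ID, hyperparameters, and S3 URI are placeholders; take the real values
# from the model card and your own bucket.
from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-2-7b",   # illustrative Fine-tunable model
    environment={"accept_eula": "true"},         # gated Meta models require EULA acceptance
)

# Hyperparameters are model-specific; these names follow the Llama text-generation recipes.
estimator.set_hyperparameters(instruction_tuned="True", epoch="3")

# Training data location and format depend on the model's fine-tuning recipe.
estimator.fit({"training": "s3://your-bucket/path/to/training-data/"})

# Deploy the fine-tuned model to a real-time endpoint for inference.
predictor = estimator.deploy()
```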