Get Started with Amazon SageMaker JumpStart

Overview

Amazon SageMaker JumpStart is a machine learning (ML) hub that helps you accelerate your ML journey. Explore how to get started with built-in algorithms, pretrained models from the model hub, pretrained foundation models, and prebuilt solutions that address common use cases. To get started, see the documentation or the example notebooks that you can run quickly.
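For example, a model from the catalog below can be deployed to a real-time endpoint with a few lines of the SageMaker Python SDK. The following is a minimal sketch, not the official quick-start: the model ID and the inference payload shape are assumptions based on the Llama text-generation models listed on this page, so check the model's detail page and example notebook for the exact values.

```python
# Minimal sketch: deploy a JumpStart foundation model and run inference.
# Assumes the SageMaker Python SDK (pip install sagemaker) and an AWS
# environment with SageMaker permissions. The model ID below is an
# assumption; look up the exact ID on the model's detail page.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="meta-textgeneration-llama-3-70b-instruct")

# Gated models such as Llama require accepting the EULA before deployment.
predictor = model.deploy(accept_eula=True)

# Payload shape is an assumption based on the text-generation containers;
# see the model's example notebook for the exact request schema.
response = predictor.predict({
    "inputs": "What is Amazon SageMaker JumpStart?",
    "parameters": {"max_new_tokens": 128, "temperature": 0.2},
})
print(response)

# Delete the endpoint when done to stop incurring charges.
predictor.delete_endpoint()
```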

Product types
Text tasks
Vision tasks
Tabular tasks
Audio tasks
Multimodal
Reinforcement learning
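These product-type filters can also be applied programmatically. The following is a minimal sketch that assumes the list_jumpstart_models utility from the SageMaker Python SDK; the filter keys and the task value used here (task == textgeneration) are assumptions and should be confirmed against the SDK documentation.

```python
# Minimal sketch: browse the JumpStart catalog programmatically instead of
# through the UI filters above. list_jumpstart_models is part of the
# SageMaker Python SDK; the filter string is an assumption, see the SDK
# docs for the supported filter keys and values.
from sagemaker.jumpstart.notebook_utils import list_jumpstart_models

# All model IDs currently in the JumpStart catalog.
all_models = list_jumpstart_models()
print(len(all_models))

# Narrow to a single task type (filter syntax per the SDK's filter docs).
text_gen_models = list_jumpstart_models(filter="task == textgeneration")
print(text_gen_models[:10])
```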
  • Meta-Llama-3-70B-Instruct (Meta)
    Foundation model · Featured · Text Generation · Fine-tunable
    70B instruction-tuned variant of Llama 3 models. Llama 3 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
  • Meta-Llama-3.1-405B-FP8 (Meta)
    Foundation model · Featured · Text Generation · Fine-tunable
    405B variant of Llama 3.1 models. Llama 3.1 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
  • Meta-Llama-3.1-8B-Instruct (Meta)
    Foundation model · Featured · Text Generation · Fine-tunable
    8B instruction-tuned variant of Llama 3.1 models. Llama 3.1 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
  • Meta-Llama-3.1-70B-Instruct (Meta)
    Foundation model · Featured · Text Generation · Fine-tunable
    70B instruction-tuned variant of Llama 3.1 models. Llama 3.1 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
  • Meta Llama 3.2 3B Instruct (Meta)
    Foundation model · Featured · Text Generation · Fine-tunable
    3B instruction-tuned variant of Llama 3.2 models. Llama 3.2 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
  • Meta Llama 3.2 1B Instruct (Meta)
    Foundation model · Featured · Text Generation · Fine-tunable
    1B instruction-tuned variant of Llama 3.2 models. Llama 3.2 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
  • Meta Llama 3.2 11B Vision Instruct (Meta)
    Foundation model · Featured · Vision Language · Deploy only
    11B instruction-tuned variant of Llama 3.2 models that supports both text and images as input.
  • Llama 2 13B (Meta)
    Foundation model · Featured · Text Generation · Fine-tunable
    13B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes (7 billion, 13 billion, and 70 billion) as well as pre-trained and fine-tuned variations.
  • Llama 3 (Meta)
    Foundation model · Featured · Text Generation · Deploy only
    Llama 3 from Meta comes in two parameter sizes, 8B and 70B, each with an 8K context length, and supports a broad range of use cases with improvements in reasoning, code generation, and instruction following.
  • Llama 2 70B (Meta)
    Foundation model · Featured · Text Generation · Fine-tunable
    70B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes (7 billion, 13 billion, and 70 billion) as well as pre-trained and fine-tuned variations.
  • Llama 2 7B (Meta)
    Foundation model · Featured · Text Generation · Fine-tunable
    7B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes (7 billion, 13 billion, and 70 billion) as well as pre-trained and fine-tuned variations.
  • Llama 2 70B Chat (Meta)
    Foundation model · Featured · Text Generation · Fine-tunable
    70B variant of Llama 2 models optimized for dialogue use cases. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes (7 billion, 13 billion, and 70 billion) as well as pre-trained and fine-tuned variations.
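Models marked Fine-tunable above can also be trained on your own data before deployment. The following is a minimal sketch, not a verified recipe: the model ID, hyperparameter names, and S3 path are placeholders for illustration, and each model's fine-tuning notebook documents the real ones.

```python
# Minimal sketch: fine-tune a Fine-tunable JumpStart model on your own data.
# The model ID, S3 URI, and hyperparameter names below are assumptions for
# illustration; consult the model's fine-tuning notebook for the real ones.
from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-2-7b",
    environment={"accept_eula": "true"},  # EULA acceptance for gated models
)

# Hyperparameters are model-specific; these names are assumptions.
estimator.set_hyperparameters(instruction_tuned="True", epoch="1")

# "training" is the input channel name used in JumpStart fine-tuning
# examples; replace the S3 URI with your dataset (hypothetical path here).
estimator.fit({"training": "s3://your-bucket/llama-finetune/data/"})

# Deploy the fine-tuned model the same way as a pre-trained one.
predictor = estimator.deploy()
```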