Getting started with Amazon SageMaker JumpStart

Overview

Amazon SageMaker JumpStart is a machine learning (ML) hub that can help you accelerate your ML journey. Explore how you can get started with built-in algorithms backed by pretrained models from model hubs, pretrained foundation models, and prebuilt solutions for common use cases. To get started, consult the documentation or the example notebooks, which you can run quickly.
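The example notebooks mentioned above typically deploy a catalog model through the SageMaker Python SDK's `JumpStartModel` class. The sketch below outlines that flow under stated assumptions: the model ID and the request-body field names are examples to verify against the catalog entry, and the deployment calls themselves, which require AWS credentials and incur instance charges, are shown as comments rather than executed.

```python
# Sketch of deploying a JumpStart foundation model with the SageMaker Python SDK.
# The model ID below is an assumption -- look up the exact ID on the model's
# catalog page before using it.

def build_payload(prompt, max_new_tokens=256, temperature=0.6):
    """Build a text-generation request body in the shape JumpStart text
    endpoints commonly accept (field names are assumptions to verify)."""
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }

# The deployment itself needs AWS credentials and a SageMaker execution role,
# so it is shown here as an outline:
#
# from sagemaker.jumpstart.model import JumpStartModel
#
# model = JumpStartModel(model_id="meta-textgeneration-llama-3-8b-instruct")
# predictor = model.deploy(accept_eula=True)  # Llama models require EULA acceptance
# response = predictor.predict(
#     build_payload("Summarize SageMaker JumpStart in one sentence.")
# )
# predictor.delete_endpoint()  # delete the endpoint to stop instance charges

payload = build_payload("Hello")
print(payload["parameters"]["max_new_tokens"])  # → 256
```

Deleting the endpoint when finished matters because a deployed JumpStart model runs on a dedicated instance that is billed until it is torn down.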

Product type
Text tasks
Vision tasks
Tabular tasks
Audio tasks
Multimodal
Reinforcement learning
  • foundation model

    Featured
    Text Generation

    Meta-Llama-3-70B-Instruct

    Meta
    70B instruction-tuned variant of Llama 3 models. Llama 3 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Meta-Llama-3.1-405B-FP8

    Meta
    405B variant of Llama 3.1 models. Llama 3.1 uses a decoder-only transformer architecture and new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Meta-Llama-3.1-8B-Instruct

    Meta
    8B instruction-tuned variant of Llama 3.1 models. Llama 3.1 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Meta-Llama-3.1-70B-Instruct

    Meta
    70B instruction-tuned variant of Llama 3.1 models. Llama 3.1 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Meta Llama 3.2 3B Instruct

    Meta
    3B instruction-tuned variant of Llama 3.2 models. Llama 3.2 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Meta Llama 3.2 1B Instruct

    Meta
    1B instruction-tuned variant of Llama 3.2 models. Llama 3.2 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Vision Language

    Meta Llama 3.2 11B Vision Instruct

    Meta
    11B instruction-tuned variant of Llama 3.2 models that supports both text and image inputs.
    Deploy only
  • foundation model

    Featured
    Text Generation

    Llama 2 13B

    Meta
    13B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Llama 3

    Meta

    Llama 3 from Meta comes in two parameter sizes, 8B and 70B, each with an 8K context length, and supports a broad range of use cases with improvements in reasoning, code generation, and instruction following.

    Deploy only
  • foundation model

    Featured
    Text Generation

    Llama 2 70B

    Meta
    70B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Llama 2 7B

    Meta
    7B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Llama 2 70B Chat

    Meta
    70B dialogue use case optimized variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable