Getting started with Amazon SageMaker JumpStart

Overview

Amazon SageMaker JumpStart is an ML hub that can help you accelerate your machine learning (ML) journey. Learn how to get started with built-in algorithms using pretrained models from the model hub, pretrained foundation models, and prebuilt solutions that solve common use cases. To get started, see the documentation or the example notebooks that you can run quickly.
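As a quick illustration of that "run quickly" path, the following is a minimal sketch of deploying one of the foundation models listed below with the SageMaker Python SDK's JumpStartModel class. The model ID, prompt, and generation parameters are illustrative assumptions; look up the exact model ID in the JumpStart model hub before running.

    # A minimal sketch, assuming the SageMaker Python SDK (>= 2.x) is installed
    # and the executing role has SageMaker permissions. The model_id below is an
    # assumed JumpStart ID for a Llama 3 instruct model; verify it in the hub.
    from sagemaker.jumpstart.model import JumpStartModel

    model = JumpStartModel(model_id="meta-textgeneration-llama-3-8b-instruct")

    # Llama models are gated; deployment requires accepting Meta's EULA.
    predictor = model.deploy(accept_eula=True)

    # Text generation endpoints take an "inputs" prompt plus generation parameters.
    response = predictor.predict({
        "inputs": "What is Amazon SageMaker JumpStart?",
        "parameters": {"max_new_tokens": 128},
    })
    print(response)

    # Delete the endpoint when finished to stop incurring charges.
    predictor.delete_endpoint()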

Product type
Text tasks
Vision tasks
Tabular tasks
Audio tasks
Multimodal
Reinforcement learning
Showing results 1-12 of 567.
  • foundation model

    Featured
    Text Generation

    Meta-Llama-3-70B-Instruct

    Meta
70B instruction tuned variant of Llama 3 models. Llama 3 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Meta-Llama-3.1-405B-FP8

    Meta
405B variant of Llama 3.1 models. Llama 3.1 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Meta-Llama-3.1-8B-Instruct

    Meta
8B instruction tuned variant of Llama 3.1 models. Llama 3.1 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Meta-Llama-3.1-70B-Instruct

    Meta
70B instruction tuned variant of Llama 3.1 models. Llama 3.1 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Meta Llama 3.2 3B Instruct

    Meta
3B instruction tuned variant of Llama 3.2 models. Llama 3.2 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Meta Llama 3.2 1B Instruct

    Meta
1B instruction tuned variant of Llama 3.2 models. Llama 3.2 uses a decoder-only transformer architecture and a new tokenizer that provides improved model performance.
    Fine-tunable
  • foundation model

    Featured
    Vision Language

    Meta Llama 3.2 11B Vision Instruct

    Meta
11B instruction-tuned variant of Llama 3.2 models that supports both text and image as input.
    Deploy only
  • foundation model

    Featured
    Text Generation

    Llama 2 13B

    Meta
    13B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Llama 3

    Meta
    Llama 3 from Meta comes in two parameter sizes, 8B and 70B with 8K context length, that can support a broad range of use cases with improvements in reasoning, code generation, and instruction following.
    Deploy only
  • foundation model

    Featured
    Text Generation

    Llama 2 70B

    Meta
    70B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Llama 2 7B

    Meta
    7B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Llama 2 70B Chat

    Meta
    70B dialogue use case optimized variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable
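Models tagged Fine-tunable in the list above can also be fine-tuned on your own data before deployment. Below is a minimal sketch using the SageMaker Python SDK's JumpStartEstimator; the model ID, S3 path, and bucket name are illustrative assumptions, and the expected dataset format varies by model, so check the model's documentation in the hub.

    # A minimal fine-tuning sketch, assuming the SageMaker Python SDK (>= 2.x).
    # The model_id and S3 training path are illustrative assumptions.
    from sagemaker.jumpstart.estimator import JumpStartEstimator

    estimator = JumpStartEstimator(
        model_id="meta-textgeneration-llama-2-7b",
        environment={"accept_eula": "true"},  # Llama models require EULA acceptance
    )

    # "training" is the standard input channel; the S3 prefix should hold data
    # in the format documented for the chosen model.
    estimator.fit({"training": "s3://your-bucket/llama-2-finetune/"})

    # Deploy the fine-tuned model to a real-time endpoint.
    predictor = estimator.deploy()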