Overview
The Inference Server offers the full infrastructure to run fast inference on GPUs.
It includes llama.cpp inference, the latest CUDA release, and the NVIDIA Container Toolkit for Docker.
Leverage the multitude of freely available models and run inference with 8-bit or lower quantized models, which makes inference possible on GPUs with e.g. 16 GB or 24 GB of memory (a rough sizing sketch follows the feature list below).
Llama.cpp offers efficient inference of quantized models in interactive and server mode. It features:
- Plain C/C++ implementation without dependencies
- 2-bit, 3-bit, 4-bit, 5-bit, 6-bit and 8-bit integer quantization support
- Running inference on GPU and CPU simultaneously, allowing larger models to run when GPU memory alone is insufficient
- AVX, AVX2 and AVX512 support for x86 architectures
- Supported models: LLaMA, LLaMA 2, Falcon, Alpaca, GPT4All, Chinese LLaMA / Alpaca and Chinese LLaMA-2 / Alpaca-2, Vigogne (French), Vicuna, Koala, OpenBuddy (Multilingual), Pygmalion 7B / Metharme 7B, WizardLM, Baichuan-7B and its derivations (such as baichuan-7b-sft), Aquila-7B / AquilaChat-7B, Starcoder models, Mistral AI v0.1, Refact
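As a rough rule of thumb, a quantized model needs about (parameter count × bits per weight / 8) bytes of memory, plus overhead for the context and KV cache. A back-of-the-envelope sketch for the sizing claim above (the 20% overhead factor is an assumption, not a measured value):

```
# rough memory estimate for a 13B-parameter model at 5-bit quantization:
# 13e9 params * 5 bits / 8 bits-per-byte * 1.2 overhead ≈ 9.75 GB
echo "13 * 5 / 8 * 1.2" | bc -l   # ~9.75 GB, fits on a 16 GB GPU
```

If the estimate exceeds GPU memory, llama.cpp can keep the remaining layers on the CPU; the -ngl option (shown in the Quick start below) controls how many layers are offloaded to the GPU.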
Here is our guide: How to use the AI SP Inference Server (https://www.ai-sp.com/how-to-use-the-ai-sp-inference-server/).
In addition, the Inference Server supports:
- llama-cpp-python: an OpenAI-API-compatible Llama.cpp inference server (see the sketch after this list)
- Open Interpreter: lets language models run code on your computer. An open-source, locally running implementation of OpenAI's Code Interpreter.
- Tabby coding assistant: a self-hosted AI coding assistant offering an open-source alternative to GitHub Copilot
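For example, the llama-cpp-python server can be pointed at any downloaded GGUF model and queried with standard OpenAI-style requests. A minimal sketch; the model path and layer count are placeholders taken from the Quick start below, and defaults may differ between llama-cpp-python versions:

```
# start the OpenAI-API-compatible server (listens on port 8000 by default)
python3 -m llama_cpp.server --model models/xwin-lm-13b-v0.1.Q5_K_M.gguf --n_gpu_layers 52

# in another shell: query the chat completions endpoint
curl -s http://localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"messages": [{"role": "user", "content": "Write a haiku about GPUs."}]}'
```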
Includes remote desktop access via NICE DCV high-end remote desktops or via SSH (PuTTY, ...).
Highlights
- Ready to run inference. Everything is pre-installed: download a model for coding, text generation, chat, ... and start creating output
- Different options to run inference servers for text generation, coding integration for IDE support, summarizing, sentiment analysis, ...
- You own the data and inference. No data is shared with any public service for AI inference.
Details
Typical total price
$1.106/hour
Pricing
Instance type | Product cost/hour | EC2 cost/hour | Total/hour |
---|---|---|---|
g4dn.xlarge | $0.06 | $0.526 | $0.586 |
g4dn.2xlarge | $0.08 | $0.752 | $0.832 |
g4dn.4xlarge | $0.12 | $1.204 | $1.324 |
g4dn.8xlarge | $0.16 | $2.176 | $2.336 |
g4dn.12xlarge | $0.32 | $3.912 | $4.232 |
g4dn.16xlarge | $0.36 | $4.352 | $4.712 |
g4dn.metal | $0.48 | $7.824 | $8.304 |
g5.xlarge (recommended) | $0.10 | $1.006 | $1.106 |
g5.2xlarge | $0.13 | $1.212 | $1.342 |
g5.4xlarge | $0.18 | $1.624 | $1.804 |
Additional AWS infrastructure costs
Type | Cost |
---|---|
EBS General Purpose SSD (gp2) volumes | $0.10 per GB/month of provisioned storage |
Vendor refund policy
No refund. The instance is billed by the hour of actual use; terminate at any time and product charges stop.
Delivery details
64-bit (x86) Amazon Machine Image (AMI)
An AMI is a virtual image that provides the information required to launch an instance. Amazon EC2 (Elastic Compute Cloud) instances are virtual servers on which you can run your applications and workloads, offering varying combinations of CPU, memory, storage, and networking resources. You can launch as many instances from as many different AMIs as you need.
Version release notes
Includes the NVIDIA 560.35.03 driver and CUDA 12.6.1. Includes Llama.cpp, llama-cpp-python, and Open Interpreter as of Sept 21, 2024.
Additional details
Usage instructions
Make sure the instance security groups allow inbound traffic on TCP port 22 (SSH) and on TCP and UDP port 8443 (NICE DCV).
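If you prefer the CLI over the console, the rules can be added to the instance's security group; a hedged sketch (the security group ID is a placeholder, and 0.0.0.0/0 should be narrowed to your own IP range):

```
# placeholder security group ID - replace with your instance's group
SG=sg-0123456789abcdef0
# SSH (TCP 22) and NICE DCV (TCP and UDP 8443)
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol tcp --port 22   --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol tcp --port 8443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$SG" --protocol udp --port 8443 --cidr 0.0.0.0/0
```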
To connect to your Inference Server you have different options:
Option 1: Connect with the native NICE DCV Client for best performance
- Download the NICE DCV client from: https://download.nice-dcv.com/ (includes Windows portable client)
- In the DCV client connection field enter the instance public IP to connect.
- Sign in using the following credentials: User: ubuntu. Password: last 6 digits of the instance ID.
Option 2: Connect with NICE DCV Web Client for convenience
- Connect with the following URL: https://IP_OR_FQDN:8443/, e.g. https://3.70.184.235:8443/
- Sign in using the following credentials: User: ubuntu. Password: last 6 digits of the instance ID.
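The instance ID is shown in the EC2 console; if you are already logged in over SSH, you can also read it from the instance metadata service. A minimal sketch, assuming IMDSv2 (the default on current AMIs):

```
# fetch an IMDSv2 session token, then print the last 6 characters of the instance ID
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id | tail -c 6
```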
Option 3: Set your own password and connect
- Connect to your remote machine with ssh -i <your-pem-key> ubuntu@<public-dns>
- Set the password for the user "ubuntu" with sudo passwd ubuntu. This is the password you will use to log in to DCV
- Connect to your remote machine with the NICE DCV native client or web client as described above
- Enter your credentials and you are ready to rock
Please do not upgrade to a new kernel or a higher OS release, as this might disable the GPU driver.
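One way to guard against accidental kernel upgrades on Ubuntu is to hold the kernel packages; a sketch, assuming the stock Ubuntu kernel meta-packages (the exact package names may differ on this image):

```
# hold the kernel meta-packages so 'apt upgrade' skips them
sudo apt-mark hold linux-image-generic linux-headers-generic linux-generic
# list currently held packages to verify
apt-mark showhold
```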
Here is our guide: How to use the AI SP Inference Server (https://www.ai-sp.com/how-to-use-the-ai-sp-inference-server/).
Quick start
How to run neural network inference with llama.cpp for quantized models (example: Xwin-LM-13B):

```
# depending on the instance type (g4dn or g5), use one of the two 'cd' commands below
cd ~/inference/llama.cpp-g4dn
cd ~/inference/llama.cpp-g5

# download the model - the example is Xwin-LM-13B with 5-bit quantization
cd models
wget https://huggingface.co/TheBloke/Xwin-LM-13B-V0.1-GGUF/resolve/main/xwin-lm-13b-v0.1.Q5_K_M.gguf
cd ..

# start inference; -ngl 52 moves 52 layers onto the GPU
./main -m models/xwin-lm-13b-v0.1.Q5_K_M.gguf -p 'Building a website can be done in 10 simple steps:\nStep 1:' -n 600 -e -c 2700 --color --temp 0.1 --log-disable -ngl 52

# or put your prompt into the file "prompt.txt" and run
bash run.sh

# llama.cpp also supports a chat mode - add the option '-i':
./main -i -m models/xwin-lm-13b-v0.1.Q5_K_M.gguf -p 'Building a website can be done in 10 simple steps:\nStep 1:' -n 600 -e -c 2700 --color --temp 0.1 --log-disable -ngl 52
```

Have fun inferring!
(At the moment the AMI supports g4dn and g5 instances; you can clone and compile for other instance types such as p3.)
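The server mode mentioned above exposes llama.cpp's built-in HTTP API. A minimal sketch, reusing the model file from the quick start; flag names can vary between llama.cpp versions, so check ./server --help on the instance:

```
# start the llama.cpp HTTP server on port 8080, offloading 52 layers to the GPU
./server -m models/xwin-lm-13b-v0.1.Q5_K_M.gguf -c 2700 -ngl 52 --host 127.0.0.1 --port 8080

# in another shell: request a completion from the /completion endpoint
curl -s http://127.0.0.1:8080/completion \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "Building a website can be done in 10 simple steps:\nStep 1:", "n_predict": 128}'
```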
Support
Vendor support
Guide to use the Inference Server (https://www.ai-sp.com/how-to-use-the-ai-sp-inference-server/). Free support is available through the AWS forums (https://forums.aws.amazon.com/forum.jspa?forumID=366).
AWS infrastructure support
AWS Support is a one-on-one, fast-response support channel that is staffed 24x7x365 with experienced and technical support engineers. The service helps customers of all sizes and technical abilities to successfully utilize the products and features provided by Amazon Web Services.