NVIDIA NIM on AWS Supercharges AI Inference

Generative AI is rapidly transforming industries, driving demand for secure, high-performance inference solutions to scale increasingly complex models efficiently and cost-effectively.

Expanding its collaboration with NVIDIA, Amazon Web Services (AWS) revealed today at its annual AWS re:Invent conference that it has extended NVIDIA NIM microservices across key AWS AI services to support faster AI inference and lower latency for generative AI applications.

NVIDIA NIM microservices are now available directly from the AWS Marketplace, as well as Amazon Bedrock Marketplace and Amazon SageMaker JumpStart, making it even easier for developers to deploy NVIDIA-optimized inference for commonly used models at scale.

NVIDIA NIM, part of the NVIDIA AI Enterprise software platform available in the AWS Marketplace, provides developers with a set of easy-to-use microservices designed for secure, reliable deployment of high-performance, enterprise-grade AI model inference across clouds, data centers and workstations.

These prebuilt containers are built on robust inference engines, such as NVIDIA Triton Inference Server, NVIDIA TensorRT, NVIDIA TensorRT-LLM and PyTorch, and support a broad spectrum of AI models — from open-source community models to NVIDIA AI Foundation models to custom models.

NIM microservices can be deployed across various AWS services, including Amazon Elastic Compute Cloud (EC2), Amazon Elastic Kubernetes Service (EKS) and Amazon SageMaker.
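Once a NIM container is running — for example, on an EC2 GPU instance — it exposes an OpenAI-compatible HTTP API. The sketch below shows what a minimal client might look like; the endpoint URL, port and model name are illustrative assumptions for a self-hosted deployment, not values from this article.

```python
import json
import urllib.request

# Assumed address of a self-hosted NIM container (e.g., on an EC2 GPU
# instance). Host, port and model name below are illustrative only.
NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta/llama-3.1-8b-instruct"


def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completion payload for a NIM endpoint."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def ask_nim(prompt: str) -> str:
    """POST the request to the NIM microservice and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        NIM_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# Example (requires a running NIM container at NIM_URL):
#   reply = ask_nim("Summarize what NVIDIA NIM microservices are.")
```

Because the API is OpenAI-compatible, existing client libraries and tooling built against that schema can generally point at a NIM endpoint with only a base-URL change.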

On the NVIDIA API catalog, developers can preview over 100 NIM microservices built from commonly used models and model families, including Meta’s Llama 3, Mistral AI’s Mistral and Mixtral, NVIDIA’s Nemotron, Stability AI’s SDXL and many more. The most commonly used ones are available for self-hosting on AWS services and are optimized to run on NVIDIA accelerated computing instances on AWS.

NIM microservices now available directly from AWS include:

  • NVIDIA Nemotron-4, available in Amazon Bedrock Marketplace, Amazon SageMaker JumpStart and AWS Marketplace. This is a cutting-edge LLM designed to generate diverse synthetic data that closely mimics real-world data, enhancing the performance and robustness of custom LLMs across various domains.
  • Llama 3.1 8B-Instruct, available on AWS Marketplace. This 8-billion-parameter multilingual large language model is pretrained and instruction-tuned for language understanding, reasoning and text-generation use cases.
  • Llama 3.1 70B-Instruct, available on AWS Marketplace. This 70-billion-parameter pretrained, instruction-tuned model is optimized for multilingual dialogue.
  • Mixtral 8x7B Instruct v0.1, available on AWS Marketplace. This high-quality sparse mixture-of-experts model with open weights can follow instructions, complete requests and generate creative text formats.

NIM on AWS for Everyone

Customers and partners across industries are tapping NIM on AWS to get to market faster, maintain security and control of their generative AI applications and data, and lower costs.

SoftServe, an IT consulting and digital services provider, has developed six generative AI solutions fully deployed on AWS and accelerated by NVIDIA NIM and AWS services. The solutions, available on AWS Marketplace, include SoftServe Gen AI Drug Discovery, SoftServe Gen AI Industrial Assistant, Digital Concierge, Multimodal RAG System, Content Creator and Speech Recognition Platform.

They’re all based on NVIDIA AI Blueprints, comprehensive reference workflows that accelerate AI application development and deployment and feature NVIDIA acceleration libraries, software development kits and NIM microservices for AI agents, digital twins and more.

Start Now With NIM on AWS

Developers can deploy NVIDIA NIM microservices on AWS according to their unique needs and requirements, bringing high-performance AI with NVIDIA-optimized inference containers to a variety of AWS services.
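For NIM models deployed through Amazon SageMaker, inference goes through a SageMaker real-time endpoint. The sketch below is a minimal, hedged example of invoking such an endpoint with the AWS SDK for Python; the endpoint name, region and model name are hypothetical, and it assumes the container accepts the OpenAI-compatible chat schema that NIM microservices expose.

```python
import json


def build_body(prompt: str, max_tokens: int = 256) -> str:
    """JSON body in the OpenAI-compatible chat format that NIM containers accept."""
    return json.dumps({
        "model": "meta/llama-3.1-8b-instruct",  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    })


def invoke_nim_endpoint(endpoint_name: str, prompt: str,
                        region: str = "us-east-1") -> str:
    """Invoke a SageMaker real-time endpoint hosting a NIM container."""
    import boto3  # AWS SDK for Python; assumed installed and configured

    runtime = boto3.client("sagemaker-runtime", region_name=region)
    resp = runtime.invoke_endpoint(
        EndpointName=endpoint_name,          # name chosen at deployment time
        ContentType="application/json",
        Body=build_body(prompt),
    )
    data = json.loads(resp["Body"].read())
    return data["choices"][0]["message"]["content"]

# Example (requires AWS credentials and a deployed endpoint):
#   reply = invoke_nim_endpoint("my-nim-llama-endpoint", "Hello!")
```

SageMaker passes the request body through to the hosted container, so the same payload shape works whether the NIM microservice is self-hosted or behind a managed endpoint.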

Visit the NVIDIA API catalog to try out over 100 different NIM-optimized models, and request either a developer license or 90-day NVIDIA AI Enterprise trial license to get started deploying the microservices on AWS services. Developers can also explore NIM microservices in the AWS Marketplace, Amazon Bedrock Marketplace or Amazon SageMaker JumpStart.
