Deploying Your Omniverse Kit Apps at Scale


Source: https://developer.nvidia.com/blog/deploying-your-omniverse-kit-apps-at-scale/ (NVIDIA Developer Blog)

Overview

NVIDIA Omniverse Kit App Streaming is a collection of APIs and Kit extensions for streaming Universal Scene Description (OpenUSD) based industrial and physical AI applications built with the Omniverse Kit SDK. It reduces installation friction by streaming Kit-based applications directly to the browser: rendering and streaming run server-side on NVIDIA RTX GPUs, enabling low-latency interaction with demanding digital twin and simulation workloads. Users access apps through a Chromium-based browser or any web-based client, with no powerful local hardware or software installation required.

The streaming architecture is Kubernetes-native and supports both cloud and on-prem deployments, so applications can be delivered to customers anywhere, securely, and at scale, with optional on-demand GPU resources from providers such as Azure, AWS, or on-prem clusters. For high-end workloads, the server-side GPU pool can include the latest RTX Pro 6000 Blackwell Server Edition GPUs.

Key features

  • Browser-based access to streaming Kit apps via WebRTC signaling and core extensions
  • Server-side execution on NVIDIA RTX GPUs for demanding visualization and simulation workloads
  • Flexible deployment options: on-premises, cloud, and fully managed paths
  • Kubernetes-native streaming architecture with containerized microservices
  • Kit App Template embedded web viewer with built-in streaming support
  • Automated building, testing, packaging, and deployment workflows aligned with the template repository
  • Containerized packaging script that outputs a deployable Docker image
  • Registry integration with NVIDIA NGC Private Registry for image distribution
  • Declarative deployment using Kubernetes tooling and Helm charts
  • Azure Marketplace preconfigured solution template for quick setup
  • Fully managed deployment path via NVIDIA Omniverse on DGX Cloud
  • Real-world examples including Siemens Teamcenter Digital Reality Viewer, Sight Machine Operator Agent, and Hexagon HxDR Reality Cloud Studio
  • Comprehensive references and official deployment guides for up-to-date instructions

Common use cases

  • Deliver browser-based, photorealistic 3D visualizations and interactive simulations of digital twins without requiring users to install local software
  • Scale industrial AI workflows with on-demand GPU-backed streaming for architecture, engineering, and manufacturing use cases
  • Provide customers with a centralized, cloud-based or on-prem streaming platform that supports high-fidelity visualization from standard laptops
  • Deploy turnkey streaming apps via templates or Azure Marketplace to accelerate time to value
  • Run production-grade streaming workloads in a centralized GPU cluster, enabling collaboration and real-time feedback across teams
  • Use DGX Cloud to simplify provisioning, scaling, and maintenance of GPU resources while focusing on application development

Setup & installation

Note that the source material emphasizes workflows, templates, and Helm-based deployments but does not provide exact shell commands. The outline below maps to the described steps; for exact commands, consult the Kit App Template repository and the official deployment guides.

  1. Explore documentation and the Kit App Template
  • Review the Omniverse Kit App Streaming documentation to understand the containerized microservices and how they work together to deliver a Kubernetes-native streaming experience.
  • Use the Kit App Template embedded web viewer, which includes built-in streaming components such as WebRTC signaling, messaging, and core extensions. When generating a new app, enable a streaming app layer such as omni_default_streaming to ensure the required services are included.
  2. Build, test, and validate locally or in a sandbox
  • Build your Kit application using the template workflow.
  • Validate functionality and performance in a test environment, which can be local or cloud-based with GPU sandboxing. Refer to the Kit App testing documentation for specifics.
  3. Containerize the application
  • After building and testing, containerize your application using the built-in packaging script on a Linux workstation. The script outputs a deployable Docker image that includes all required dependencies and streaming extensions. Push the image to a registry accessible by your deployment environment (for example, NVIDIA NGC Private Registry).
  4. Deploy on a Kubernetes cluster
  • Deploy the core Omniverse Kit App Streaming services with NVIDIA's official Helm charts and CRDs on a GPU-enabled Kubernetes cluster of your choice (Azure, AWS, or on-prem).
  • Register your container image with your Omniverse Kit App Streaming instance using Kubernetes-native tooling to control launch, scaling, and management declaratively.
  5. Optional deployment paths
  • Azure Marketplace: Use the preconfigured solution template to provision core infrastructure and services automatically, then upload your containerized Kit application.
  • DGX Cloud: Leverage the fully managed Omniverse Kit App Streaming path, where NVIDIA handles provisioning, scaling, and GPU resource maintenance.
  6. Reference deployments and production readiness
  • Review the official deployment guides and architectural overviews for detailed instructions and best practices. See examples such as Siemens Teamcenter Digital Reality Viewer and Sight Machine Operator Agent for production-style usage.
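As a hedged sketch, the build-and-containerize steps above might look like the following on a Linux workstation. The `repo.sh` invocations follow Kit App Template conventions, but the exact subcommands and flags are assumptions that vary by template version, and the registry path is a placeholder; verify against the template repository before use.

```shell
# Build the Kit application via the template workflow
# (repo.sh is the Kit App Template wrapper script; flags may differ by version)
./repo.sh build

# Package the app as a deployable container image (Linux only).
# The --container flag is an assumption based on the template tooling.
./repo.sh package --container

# Tag and push the image to a registry your cluster can reach.
# nvcr.io/<your-org>/my-kit-app is a hypothetical NGC Private Registry path.
docker tag my-kit-app:latest nvcr.io/<your-org>/my-kit-app:1.0.0
docker push nvcr.io/<your-org>/my-kit-app:1.0.0
```

These commands are a deployment fragment rather than a runnable script; they require the template checkout, Docker, and registry credentials to be in place.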

Note: Exact commands, file paths, and Helm values are context-specific and evolve over time; always follow the up-to-date instructions in the template repository and official deployment guides.

Quick start (minimal runnable example)

  • The source provides templates and reference workflows rather than a single minimal runnable script. A minimal start would combine the Kit App Template workflow, the container packaging script, and a Helm-driven Kubernetes deployment. Refer to the official template repository and deployment guides for a concrete starting point.

Pros and cons

  • Pros
      • Browser-based access eliminates the need for local high-end hardware
      • Flexible deployment options across on-prem, cloud, and managed paths
      • Kubernetes-native streaming with containerized microservices supports scalable deployments
      • Integrated template viewer with WebRTC signaling simplifies integration
      • Azure Marketplace and DGX Cloud offer accelerated setup and managed infrastructure
      • Real-world production deployments demonstrate practical applicability
  • Cons
      • Self-managed deployments require ownership of the core streaming services and operational discipline
      • Exact commands and deployment configurations evolve; staying current requires following official templates and guides
      • Initial setup involves multiple moving parts (containers, Helm charts, registries, GPUs), which can increase complexity

Alternatives (brief comparisons)

| Path | Key characteristics | Trade-offs |
|---|---|---|
| Self-managed Kubernetes with Helm | Full control over streaming services; on-prem or cloud clusters | Higher operational overhead; must manage Helm charts, CRDs, and security |
| Azure Marketplace preconfigured template | Quick start with core infrastructure pre-provisioned | Limited to template capabilities; may involve vendor-specific constraints |
| DGX Cloud fully managed | NVIDIA handles provisioning, scaling, and GPU maintenance | Less control over infrastructure; tied to DGX Cloud availability |
| Template-based example deployments (Siemens, Sight Machine, Hexagon) | Real-world case studies showing end-to-end usage | Customization depends on the organization; examples illustrate possibilities rather than a one-size-fits-all solution |

Pricing or License

Pricing or licensing details are not explicitly provided in the source. The content references architectures, deployment paths, and templates but does not include a price table or license terms. Users should consult the Azure Marketplace listing, DGX Cloud pricing, and NVIDIA licensing terms in the referenced guides for up-to-date information.
