Deploying Your Omniverse Kit Apps at Scale
Source: https://developer.nvidia.com/blog/deploying-your-omniverse-kit-apps-at-scale
TL;DR
- Streaming enables browser-based access to Omniverse Kit apps, reducing reliance on powerful local hardware.
- Flexible deployment paths exist: self-managed, Azure Marketplace, DGX Cloud, and on-prem or cloud GPU Kubernetes.
- The Kit App Template with built-in streaming accelerates setup; apps are containerized and deployed via Kubernetes.
- Real-world deployments (e.g., Siemens, Sight Machine, Hexagon) illustrate scalable, GPU-powered digital twins and 3D visualization in enterprise settings.
- Validate, package, and deploy step by step to deliver RTX-powered experiences directly in the browser.
Context and background
NVIDIA Omniverse Kit App Streaming is a collection of APIs and Kit extensions that lets you stream OpenUSD-based industrial and physical AI applications—built with the Omniverse Kit SDK—directly to customers. Users access streaming applications through a Chromium-based browser or any web-based app. Because the applications run server-side on NVIDIA RTX GPUs, including the latest RTX PRO 6000 Blackwell Server Edition series, users can interact with demanding digital twins and simulations at low latency, without powerful local hardware or local software installs. This supports a cloud-native, scalable deployment model that meets developers where they are, whether on-premises or in the cloud.

For scalable delivery, NVIDIA provides multiple deployment options that can be combined or chosen based on the project lifecycle. The core idea is to stream the application from the server to the browser while managing the control plane with Kubernetes-native tooling.

The fastest way to scaffold and test a streaming-enabled Kit application is the Kit App Template embedded web viewer. This template includes built-in support for Omniverse Kit App Streaming and demonstrates how WebRTC signaling, messaging, and core extensions integrate into the application layer. When generating a new app with the template, you'll be prompted to enable a streaming application layer, such as omni_default_streaming, to ensure the right extensions and services are included from the start. Because build processes and options evolve, NVIDIA recommends following the up-to-date instructions in the template repository.

Once the Kit app is built, validate its functionality and performance in a test environment, either locally or in a sandbox using GPUs from a cloud provider. The packaging workflow then bundles the application, all required dependencies, and the streaming extensions into a deployable Docker container.
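The scaffold-build-validate-package loop above can be sketched with the template's repo tooling. This is a minimal sketch, not a definitive recipe: the `repo.sh` subcommands follow the Kit App Template repository's conventions, and the app name, version, and registry org are placeholders; check the template repository for the current commands and flags.

```shell
# Placeholder names -- substitute your own app and registry org.
APP_NAME="my_company.my_usd_viewer"
APP_VERSION="0.1.0"
REGISTRY="nvcr.io/mycompany"      # e.g., an NGC Private Registry org (placeholder)
IMAGE="${REGISTRY}/${APP_NAME}:${APP_VERSION}"

./repo.sh template new     # scaffold; choose a streaming layer such as omni_default_streaming
./repo.sh build            # build the Kit app and its extensions
./repo.sh launch           # validate functionality and performance locally
./repo.sh package --container --name "${APP_NAME}"   # bundle app + dependencies into a Docker image (Linux)

echo "Push target: ${IMAGE}"
```

The final image tag is what you would push to a registry reachable from your deployment environment in the next step.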
The output is a ready-to-run image that can be uploaded to a container registry, such as NVIDIA NGC's Private Registry, for deployment across cloud or self-hosted environments. With the container image available, you register your application with your Omniverse Kit App Streaming instance using Kubernetes-native tooling, gaining declarative control over launch, scale, and management, whether on-prem or in the cloud.

To deploy at scale, you can run on a GPU-enabled Kubernetes cluster in your environment of choice: Microsoft Azure, AWS, or on-prem. NVIDIA offers a one-click solution template on the Azure Marketplace that provisions core infrastructure and services automatically, yielding a GPU-enabled Kubernetes environment with the core Omniverse Kit App Streaming components. Once the core services are deployed via the provided Helm charts, you can add optional services to improve scalability, security, and production-readiness.

For teams that want fully managed infrastructure, Omniverse on DGX Cloud provides a path where NVIDIA handles provisioning, scaling, and GPU maintenance; it is designed for straightforward production deployments in which developers upload containerized Kit apps to NVIDIA NGC. If you require maximum control, a self-managed deployment of Omniverse Kit App Streaming offers the most flexibility: you deploy and maintain all core streaming services using NVIDIA's official Helm charts, CRDs, and container images, tailoring every component to your operational and security needs.

Deployment examples include Siemens Teamcenter Digital Reality Viewer, which combines real-time ray tracing powered by Omniverse libraries with Siemens' cloud infrastructure and a centralized GPU cluster, and Sight Machine's Operator Agent, which leverages Azure provisioning and Omniverse streaming to visualize complex production environments directly in the browser.
Hexagon’s HxDR Reality Cloud Studio is another example of cloud-based streaming in practice. All of these deployments illustrate delivering immersive browser experiences at scale to customers without demanding local hardware. Accessible directly from cloud and tooling ecosystems, Omniverse Kit App Streaming is designed to streamline the path to production while maintaining flexibility to meet industry and developer requirements. This includes access via NVIDIA’s cloud and registry ecosystems and deployment workflows that accommodate both rapid iteration and robust operations.
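Returning to the registration step described above: because the service is managed with Kubernetes-native tooling, registering an app amounts to applying a custom resource. The manifest below is illustrative only; the `apiVersion`, `kind`, field names, and namespace are assumptions modeled on the service's CRD-based design, and the authoritative schema ships with NVIDIA's official Helm charts and CRDs.

```shell
# Write an illustrative Application manifest; every field here is a placeholder
# assumption -- consult the CRDs installed by the official Helm charts.
cat <<'EOF' > my-usd-viewer-app.yaml
apiVersion: omniverse.nvidia.com/v1
kind: Application
metadata:
  name: my-usd-viewer
  namespace: omni-streaming
spec:
  name: my-usd-viewer
  description: Streaming-enabled Kit app packaged as a container
EOF

# Requires a cluster with the Kit App Streaming CRDs installed.
kubectl apply -f my-usd-viewer-app.yaml
```

Once applied, the streaming control plane can launch, scale, and manage sessions of the registered application declaratively.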
What’s new
This article dives into getting started with Omniverse Kit App Streaming and explores deployment options for scale. It highlights how containerized microservices work together to deliver a Kubernetes-native streaming experience, the quick-start Kit App Template with streaming support, and a workflow that takes a Kit app from local development to a production-ready containerized service registered in a Kubernetes cluster. It also showcases real-world deployment patterns and considerations for cloud providers (Azure, AWS) and managed services (DGX Cloud, NGC Private Registry). The emphasis is on delivering browser-based access to demanding 3D apps with low latency and scalable architecture.
Why it matters (impact for developers/enterprises)
- Lower friction for end users: browser-based access reduces the need for powerful local hardware and complex software installs.
- Flexible deployment models: choose between self-managed, Azure Marketplace, DGX Cloud, or cloud/on-prem Kubernetes deployments to fit budgets, SLAs, and security requirements.
- Scalable, GPU-powered rendering and simulation: server-side RTX GPUs enable immersive digital twins and physics-based visualization at scale.
- Enterprise-ready patterns: reference deployments from Siemens, Sight Machine, and Hexagon demonstrate real-world adoption and production readiness.
- Secure, declarative management: Kubernetes-native tooling enables deterministic scaling, rollout, and management of streaming services.
Technical details or Implementation
- Architecture: Omniverse Kit App Streaming is a collection of APIs and Kit extensions that stream OpenUSD-based apps from server-side GPUs to a Chromium-based browser. The streaming stack includes WebRTC signaling, messaging, and core extensions integrated into the application layer.
- Quick-start scaffolding: Use the Kit App Template embedded web viewer to quickly scaffold and test a streaming-enabled Kit app. When generating a new app, enable the streaming layer (e.g., omni_default_streaming) to include the necessary services.
- Template repository guidance: Build processes and options evolve; follow the latest instructions in the template repository for consistency with current practices.
- Testing and validation: Validate functionality and performance in local or sandbox cloud environments before going live.
- Packaging and containerization: Use the built-in packaging script on a Linux workstation to package the app, dependencies, and streaming extensions into a Docker image. Push the image to a registry accessible by your deployment environment (e.g., NVIDIA NGC’s Private Registry).
- Deployment options and tooling:
- Kubernetes-native deployment: Register the container image with your Omniverse Kit App Streaming instance using Kubernetes-native tooling for declarative control over launch, scale, and management.
- On-prem or cloud GPU clusters: Deploy on GPU-enabled Kubernetes clusters in Azure, AWS, or on-prem infrastructure.
- Azure Marketplace: A preconfigured solution template on the Azure Marketplace enables a fast, frictionless setup with just a few clicks. It provisions a GPU-enabled Kubernetes environment with all core components pre-installed.
- DGX Cloud: Omniverse on DGX Cloud provides a fully managed deployment path where NVIDIA handles provisioning, scaling, and GPU maintenance, enabling teams to focus on building and delivering applications.
- Optional enhancements: After core deployment, you can configure optional services to improve scalability, security, and production-readiness across supported environments.
- Real-world deployment patterns:
- Siemens Teamcenter Digital Reality Viewer combines Omniverse-powered real-time ray tracing with Siemens cloud infrastructure to deliver a centralized, high-performance, secure, browser-based viewer.
- Sight Machine leverages Azure automation and Omniverse Kit App Streaming to deliver immersive browser-based 3D visualizations of production environments with AI-driven recommendations in the UI.
- Hexagon’s HxDR Reality Cloud Studio demonstrates cloud-based streaming for architecture, engineering, and construction workflows.

| Deployment path | Description | Notes |
|---|---|---|
| Self-managed | Deploy core streaming services with Helm charts, CRDs, and container images; you own the infrastructure | Full control, customizable, and suitable for strict compliance needs |
| Azure Marketplace | Preconfigured solution template; quick GPU-enabled Kubernetes setup with core components | Frictionless start; just upload your containerized Kit app |
| NVIDIA DGX Cloud | Fully managed deployment; NVIDIA handles provisioning, scaling, and GPU maintenance | Easiest path for large-scale production |
| On-prem / cloud Kubernetes | Deploy on GPU-enabled Kubernetes clusters (Azure, AWS, or on-prem) | Flexible across environments |
- Real-world operational steps often follow this path: test in sandbox, containerize with the packaging script, push to a registry, then declare and run on a Kubernetes cluster with Helm-based deployment manifests.
- Accessibility: end users access streaming-enabled Kit apps through a Chromium-based browser, or any web-based application, with server-side rendering and streaming powering the experience.
- References to official deployment guides and developer quick start templates are available to ensure up-to-date practices and architecture diagrams.
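The operational path above (push to a registry, then deploy with Helm) can be sketched as follows. This is a hedged sketch under stated assumptions: the registry org, chart repository URL, chart name, release name, and namespace are all placeholders, not the published values; the real chart references are in NVIDIA's deployment guides.

```shell
# Push the packaged image to a registry reachable from the cluster.
# The image tag is a placeholder from the packaging step.
IMAGE="nvcr.io/mycompany/my_company.my_usd_viewer:0.1.0"
docker push "${IMAGE}"

# Stand up core streaming services with Helm (placeholder repo and chart names).
helm repo add omniverse https://helm.example.com/omniverse
helm upgrade --install streaming omniverse/kit-app-streaming-core \
  --namespace omni-streaming --create-namespace

# Confirm the core services came up before registering applications.
kubectl get pods -n omni-streaming
```

From here, applications are registered and scaled declaratively, and optional services can be layered on for security and production-readiness.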
Key takeaways
- Omniverse Kit App Streaming enables scalable, browser-based access to high-fidelity 3D apps powered by RTX GPUs.
- Multiple deployment paths accommodate different needs: self-managed, Azure Marketplace, DGX Cloud, or on-prem/cloud Kubernetes.
- The Kit App Template accelerates setup by including streaming components and prompting you to enable a streaming layer such as omni_default_streaming during app generation.
- Containerization and Kubernetes-native deployment are central to scaling and managing Kit apps in production.
- Real-world deployments illustrate how large organizations leverage streaming for digital twins and immersive 3D visualization in the browser.
FAQ
- What is Omniverse Kit App Streaming?
  A collection of APIs and Omniverse Kit extensions that lets you stream OpenUSD-based industrial and physical AI applications—built with the Omniverse Kit SDK—directly to customers, accessible via a Chromium-based browser.
- What deployment options are available to scale Kit App Streaming?
  Options include self-managed deployments using Helm charts and CRDs, Azure Marketplace preconfigured templates, DGX Cloud fully managed deployments, and GPU-enabled Kubernetes deployments on Azure, AWS, or on-prem.
- How do you scaffold, test, and package a Kit app for streaming?
  Start with the Kit App Template web viewer, enable the streaming layer (e.g., omni_default_streaming), build the app, validate it in a test environment, then package it with the built-in packaging script into a Docker image and push it to a registry such as NVIDIA NGC's Private Registry.
- What are example production deployments of Kit App Streaming?
  Siemens Teamcenter Digital Reality Viewer, Sight Machine's Operator Agent, and Hexagon's HxDR Reality Cloud Studio illustrate enterprise deployments of browser-based 3D visualization with streaming.