NVIDIA Unveils New RTX Neural Rendering, DLSS 4 and ACE AI Upgrades at Gamescom 2025
TL;DR
- NVIDIA announced major updates to NVIDIA RTX neural rendering, DLSS 4 and NVIDIA ACE at Gamescom 2025, expanding developer integrations and cloud availability.
- DLSS 4 now spans 175+ supported titles and is available via GeForce NOW Blackwell instances; Streamline SDK and Unreal plugins support engine integration.
- RTX Kit and the NvRTX Unreal branch add ReSTIR PT, experimental RTX Mega Geometry, and an RTX Neural Texture Compression D3D12 path with Cooperative Vectors support.
- NVIDIA ACE advances include on-device Riva ASR with 90–100 ms time to first text, and NVIGI 1.2 providing Direct3D inference support via DLLs for all GPU vendors.
- Nsight Graphics 2025.4 promotes Graphics Capture to production-ready. GeForce NOW is integrated into Discord for instant cloud play demos.
Context and background
At Gamescom 2025, NVIDIA revealed a bundle of updates to its gaming-focused neural rendering and generative AI stacks. The announcements expand the company’s RTX neural rendering ecosystem, its ACE character and speech AI suite, and the related developer tools and SDKs. These updates aim to help studios deliver higher-fidelity graphics, faster frame rates, lower latency, and richer AI-driven interactions across PC and cloud gaming. Several of the announced technologies are already in use or being showcased in upcoming titles; NVIDIA highlights that the underlying neural rendering technology powers games such as Resident Evil Requiem, Borderlands 4, and The Oversight Bureau. For developers and studios, NVIDIA is delivering both engine plugins and low-level SDKs to integrate these capabilities into custom engines as well as popular ones like Unreal Engine. Reference: full announcement on the NVIDIA blog: https://developer.nvidia.com/blog/announcing-the-latest-nvidia-gaming-ai-and-neural-rendering-technologies/
What’s new
The update covers multiple technology areas across rendering, AI inference, engine integrations, and developer tooling:
- DLSS 4: NVIDIA describes DLSS 4 as a suite of neural rendering technologies that increase FPS, reduce latency, and enhance image quality. DLSS 4 was first announced alongside the GeForce RTX 50 Series at CES and now supports over 175 titles. GeForce NOW cloud gaming with NVIDIA Blackwell also benefits from DLSS 4 features such as Multi Frame Generation.
- Engine SDKs and plugins: The Streamline SDK, with the latest updates and fixes, remains available for integrating DLSS into custom engines. For Unreal Engine developers, the DLSS 4 plugin is provided for Unreal Engine versions 5.2 through 5.6 (a minimal Streamline-style integration sketch follows this list).
- RTX Kit and NvRTX Unreal branch: RTX Kit components enable training and deploying AI inside shaders, building fully path-traced games in real time, and creating photorealistic characters. The NVIDIA RTX Branch of Unreal Engine (NvRTX) 5.6 adds or exposes advanced features, including the ReSTIR PT path tracer and experimental RTX Mega Geometry.
- DirectX 12 Agility SDK preview with Cooperative Vectors: This preview enables direct access to RTX Tensor Cores from within DirectX shaders, which NVIDIA says provides substantial performance improvements and memory savings for real-time graphics.
- RTX Neural Texture Compression SDK: Updated with a D3D12 path that supports Cooperative Vectors, offering tools for evaluation, compression, engine integration, and a decompression library to reduce memory usage of high-quality textures without sacrificing visual quality.
- ACE and Riva ASR on-device: The NVIDIA Riva Automatic Speech Recognition (ASR) model is now available for on-device inference, offering high word accuracy and real-time performance, with a 90–100 ms time to first text plus word-boosting and streaming support for English transcription. The Oversight Bureau demonstrates this capability in voice-driven gameplay. Developers can use Riva ASR on-device through the NVIDIA In-Game Inferencing (NVIGI) SDK.
- NVIGI 1.2: The NVIGI plugin-based inference manager now provides Direct3D support for running language-model inference across all GPU vendors, packaged as DLLs, simplifying integration into C++ gaming and interactive applications.
- Nsight Graphics 2025.4: The Graphics Capture activity has been promoted from beta to production-ready, enabling immediate capture to persistent disk and supporting RTX Kit features such as RTX Mega Geometry and RTX Hair, along with D3D DirectStorage and Vulkan Device-Generated Commands.
- GeForce NOW inside Discord: NVIDIA and Discord demonstrated an integrated experience allowing players to launch cloud-streamed games inside Discord with no downloads or installs. A limited-time trial of the GeForce NOW Performance experience streams up to 60 fps at 1440p. The demo at Gamescom used Epic’s Fortnite as the first showcased integration.
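For custom engines, the integration path mentioned in the engine SDKs item above goes through the Streamline SDK. As a rough illustration, the sketch below shows the general shape of initialising Streamline and querying DLSS availability; the function and type names (slInit, slIsFeatureSupported, sl::kFeatureDLSS, sl::Preferences) follow the public Streamline headers but are not taken from this announcement, and exact signatures vary between SDK versions.

```cpp
// Minimal sketch of a Streamline-style DLSS setup in a custom engine.
// Names follow the public Streamline headers, but exact signatures and enum
// values differ between SDK versions; treat this as an outline of the flow,
// not a drop-in integration.
#include <sl.h>        // Streamline core API (assumed include path)
#include <sl_dlss.h>   // DLSS feature header (assumed include path)

bool InitDlss()
{
    sl::Preferences prefs = {};
    // Initialise Streamline before the swap chain and feature resources are created.
    if (slInit(prefs, sl::kSDKVersion) != sl::Result::eOk)
        return false;

    // Check that DLSS Super Resolution is available on the active adapter.
    sl::AdapterInfo adapter = {};   // fill in with the adapter the engine renders on
    if (slIsFeatureSupported(sl::kFeatureDLSS, adapter) != sl::Result::eOk)
        return false;               // fall back to native-resolution rendering

    // Per frame, the engine tags depth, motion-vector, and colour resources with
    // slSetTag(...) and invokes the upscale via slEvaluateFeature(...).
    return true;
}
```

For Unreal Engine projects, the DLSS 4 plugin handles this wiring, so Streamline-level calls like these are only needed in custom engines.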
Why it matters (impact for developers/enterprises)
- Broader reach via cloud: DLSS 4 on GeForce NOW Blackwell instances and the Discord integration allow developers to deliver maximum graphical fidelity to players who lack high-end hardware, expanding addressable audiences and reducing friction to try games.
- Faster graphics iteration and realism: ReSTIR PT, RTX Mega Geometry, and enhanced path-tracing workflows in NvRTX provide new options for studios pursuing physically based, fully path-traced scenes with many lights, improved indirect illumination, and higher-quality reflections.
- Memory and performance benefits: Cooperative Vectors in the DirectX 12 Agility SDK and the D3D12 path in the RTX Neural Texture Compression SDK promise performance improvements and reduced memory usage for high-quality textures—critical for AAA titles and large open worlds.
- On-device, low-latency AI: Riva ASR on-device and NVIGI 1.2’s cross-vendor Direct3D DLL inference support let developers integrate speech and other AI features with low latency and simplified deployment in native C++ games.
- Improved debugging and capture workflows: Nsight Graphics’ production-ready Graphics Capture helps teams reproduce and diagnose rendering and AI-related issues with direct support for new RTX Kit features.
Technical details and implementation
Key implementation points from the announcement:
| Component | Key implementation detail |
|---|---|
| DLSS 4 | Suite of neural rendering tech; supported in UE via plugin for 5.2–5.6 and available in Streamline SDK for custom engines |
| Cooperative Vectors (DX12 Agility) | Direct access to RTX Tensor Cores from DirectX shaders; performance and memory benefits (see the feature-check sketch after this table) |
| RTX Neural Texture Compression SDK | D3D12 path with Cooperative Vectors; includes compression, evaluation, engine integration tools, and decompression library |
| Riva ASR (on-device) | 90–100 ms time to first text, word-boosting and streaming for English; available via NVIGI SDK |
| NVIGI 1.2 | Direct3D support for running language-model inference across all GPU vendors, delivered as DLLs |
Developers can download RTX Kit technologies and the NVIDIA RTX Branch of Unreal Engine 5.6, compile sample scenes (including an updated Zorah sample showcasing ReSTIR DI), and watch the technical overview webinar referenced in the announcement.
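To make the Cooperative Vectors row above concrete, a minimal capability probe might look like the sketch below. ID3D12Device::CheckFeatureSupport is the standard D3D12 query mechanism; the COOPERATIVE_VECTOR feature enum and its feature-data struct are assumed preview names that may differ in the actual Agility SDK preview headers.

```cpp
// Sketch: probe for Cooperative Vector support before selecting shader permutations.
// CheckFeatureSupport is standard D3D12; the COOPERATIVE_VECTOR feature enum and
// struct below are assumed names from the Agility SDK preview and may differ in
// the headers you install -- verify against the preview package.
#include <d3d12.h>

bool SupportsCooperativeVectors(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_COOPERATIVE_VECTOR data = {};    // assumed preview struct
    HRESULT hr = device->CheckFeatureSupport(
        D3D12_FEATURE_COOPERATIVE_VECTOR,                // assumed preview enum
        &data, sizeof(data));
    // If the query fails or reports no support, keep the non-tensor shader paths.
    return SUCCEEDED(hr);
}
```

An engine would typically gate both the neural-texture decompression path and any in-shader inference on a check like this, keeping a conventional fallback for GPUs without the capability.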
Key takeaways
- DLSS 4 adoption continues to grow (175+ titles) and is now integrated into cloud gaming via GeForce NOW Blackwell.
- The NvRTX Unreal branch exposes advanced path tracing (ReSTIR PT) and experimental RTX Mega Geometry for ray tracing full-quality Nanite geometry.
- Cooperative Vectors in the DirectX 12 Agility SDK and D3D12 support in RTX Neural Texture Compression aim to improve performance and lower memory footprint.
- On-device Riva ASR and NVIGI 1.2 lower latency and simplify AI model deployment in games.
- Nsight Graphics 2025.4 is production-ready for frame capture workflows supporting the latest RTX Kit features.
FAQ
- Which Unreal Engine versions support the DLSS 4 plugin? The DLSS 4 plugin is available for Unreal Engine 5.2 through 5.6.
- What is the reported time to first text for Riva ASR on-device? NVIDIA reports a time to first text of 90–100 ms for the on-device Riva ASR model.
- Is ReSTIR PT generally available? ReSTIR PT is currently available in the NVIDIA RTX Branch of Unreal Engine (NvRTX 5.6) and is not listed as broadly released outside that branch.
- How does NVIGI 1.2 simplify inference integration? NVIGI 1.2 provides Direct3D support for running language-model inference across all GPU vendors as DLLs, making it easier to integrate models into C++ gaming and interactive applications (see the illustrative plugin-loading sketch after this FAQ).
- Can players launch cloud games from Discord without installing GeForce NOW? The experience demonstrated at Gamescom lets players jump into cloud-streamed games inside Discord without downloads or installs and without GeForce NOW on the PC; a limited-time trial of the GeForce NOW Performance experience was used for the demos.
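As context for the NVIGI answer above, the sketch below shows the general pattern of loading a DLL-packaged inference plugin from native C++ using plain Win32 APIs. The DLL file name, exported entry point, and model path are hypothetical placeholders; the real NVIGI plugin-manager interface is defined by the SDK headers.

```cpp
// Illustrative only: loading a DLL-packaged inference plugin from a C++ game.
// LoadLibraryA/GetProcAddress are standard Win32; the DLL name, export name, and
// model path are hypothetical placeholders, not the actual NVIGI interface.
#include <windows.h>
#include <cstdio>

// Hypothetical entry-point signature exposed by the plugin DLL.
typedef int (*CreateInferenceContextFn)(const char* modelPath, void** outContext);

bool LoadInferencePlugin()
{
    HMODULE dll = LoadLibraryA("inference_plugin_d3d12.dll");   // placeholder name
    if (!dll)
    {
        std::printf("inference plugin DLL not found\n");
        return false;
    }

    auto createContext = reinterpret_cast<CreateInferenceContextFn>(
        GetProcAddress(dll, "CreateInferenceContext"));          // placeholder export
    if (!createContext)
    {
        FreeLibrary(dll);
        return false;
    }

    void* context = nullptr;
    if (createContext("models/assistant.bin", &context) != 0)    // placeholder model path
    {
        FreeLibrary(dll);
        return false;
    }
    // ... run speech or language inference through the context, then release it
    // and call FreeLibrary(dll) during shutdown.
    return true;
}
```

The point of the DLL packaging is that the same native loading pattern works regardless of GPU vendor, since NVIGI 1.2 handles Direct3D-based inference behind the plugin boundary.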
References
- Full announcement on the NVIDIA Developer Blog: https://developer.nvidia.com/blog/announcing-the-latest-nvidia-gaming-ai-and-neural-rendering-technologies/