
Getting Started with NVIDIA Isaac for Healthcare Using the Telesurgery Workflow

Source: https://developer.nvidia.com/blog/getting-started-with-nvidia-isaac-for-healthcare-using-the-telesurgery-workflow/ (NVIDIA Developer Blog)

Overview

Telesurgery is no longer a futuristic idea; it is becoming essential to how care is delivered. With a projected global shortage of 4.5 million surgeons by 2030 and rural hospitals facing limited access to specialists, remote operation by expert surgeons is shifting from an experiment to a necessary capability.

NVIDIA Isaac for Healthcare provides a production-ready, modular telesurgery workflow that you can adapt, extend, and deploy in both training and clinical settings. The workflow is built on a three-computer architecture that unifies the full development stack across NVIDIA DGX, NVIDIA OVX, and NVIDIA IGX/NVIDIA AGX, and it offers a comprehensive set of tools and building blocks for moving from simulation to clinical deployment on the same architectural foundation.

At its core, the workflow connects a surgeon's control station to a patient-side surgical robot over a high-speed network, so clinicians can perform procedures in crisis situations, remote hospitals, or across continents without compromising responsiveness. Low latency is treated as a critical requirement, and the design decisions are purpose-built to meet clinical needs across diverse environments. Because the workflow is containerized, it runs consistently across environments: both deployment modes share identical control schemes and networking protocols, ensuring that skills developed in simulation transfer directly to real procedures.
This modular approach lets institutions begin with simulation-based training and transition smoothly to live surgery when ready. From there, you can connect cameras, configure data streaming over DDS, and start experimenting with robot control within the same consistent, containerized environment.

Latency benchmarks back the architecture's central claim. Display benchmarking used a G-Sync-enabled monitor with a 240 Hz refresh rate in Vulkan exclusive display mode, and latency measurements were captured with NVIDIA LDAT (Latency and Display Analysis Tool). The setup also includes a Holoscan Sensor Bridge option sourced from ecosystem FPGA partners such as Lattice and Microchip. The headline result is end-to-end latency under 50 milliseconds, enabling remote guidance and control responsive enough for safe remote procedures.

Early pilot deployments suggest that telesurgery is more than a workflow; it is a foundational building block for a new model of healthcare delivery. Because the workflow is designed to be adaptable, you can fork the repository, experiment with new control devices, integrate novel imaging systems, or benchmark your own latency setup. Each contribution sharpens the telesurgery workflow and brings it closer to routine clinical use.
Documentation and code are part of a broader ecosystem of related projects and community resources, all contributing to a production-ready telesurgery pipeline.
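The sub-50 ms latency target above can be made concrete with a minimal round-trip measurement sketch. This is a hypothetical illustration, not part of the Isaac for Healthcare codebase: it stands in a local UDP echo server for the patient-side robot, timestamps each control packet from the "surgeon" side, and reports round-trip latency. Real deployments would measure end-to-end (sensor to display) with tools like LDAT.

```python
import socket
import statistics
import threading
import time

def run_echo_server(sock: socket.socket) -> None:
    # Patient-side stand-in: echo each control packet back immediately.
    while True:
        data, addr = sock.recvfrom(65535)
        if data == b"stop":
            break
        sock.sendto(data, addr)

# Bind the echo server to an ephemeral localhost port and run it in a thread.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

# Surgeon-side stand-in: send 100 sequenced packets and time each round trip.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rtts_ms = []
for seq in range(100):
    t0 = time.perf_counter()
    client.sendto(seq.to_bytes(4, "big"), ("127.0.0.1", port))
    client.recvfrom(65535)
    rtts_ms.append((time.perf_counter() - t0) * 1000.0)
client.sendto(b"stop", ("127.0.0.1", port))

print(f"median RTT: {statistics.median(rtts_ms):.3f} ms over {len(rtts_ms)} packets")
```

On localhost the round trip is trivially fast; over a real WAN link, the same loop would expose how much of the 50 ms budget the network alone consumes.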

Key features

  • Production-ready, modular telesurgery workflow covering video and sensor streaming, robot control, haptics, and simulation
  • Unified development stack across a three-computer architecture: NVIDIA DGX, NVIDIA OVX, and NVIDIA IGX/NVIDIA AGX
  • Containerized deployment for consistent behavior across environments; identical control schemes and networking protocols across deploy/run modes
  • Explicit focus on low latency as a core requirement, with measured sub-50 ms latency targets in tested configurations
  • Hardware-agnostic sensing options, including a Holoscan Sensor Bridge available from ecosystem FPGA partners Lattice and Microchip
  • Ability to connect cameras, configure data streams, and experiment with robot control in a seamless workflow
  • Clear path from simulation-based training to clinical deployment using the same architecture
  • Benchmarking and testing practices (e.g., LDAT) to validate latency and performance against clinical requirements
  • Community-driven, forkable repository and a pathway for contributions to advance telesurgery capabilities
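The "identical control schemes and networking protocols across deploy/run modes" feature can be sketched as a shared configuration that both the simulation and clinical backends reuse. Everything below is a hypothetical illustration (the class and field names are invented, not the workflow's actual API); it only shows the design idea that skills transfer because the control path is configured once.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ControlConfig:
    """Control/network settings shared by every deployment mode (illustrative)."""
    control_device: str   # e.g. a haptic controller
    dds_domain_id: int    # DDS domain shared by all participants
    video_codec: str

# One config object, reused verbatim by both modes.
SHARED_CONTROL = ControlConfig(
    control_device="haptic-controller", dds_domain_id=7, video_codec="h264"
)

def make_endpoint(mode: str) -> dict:
    """Only the robot backend differs between modes; the control config is reused."""
    if mode not in ("simulation", "clinical"):
        raise ValueError(f"unknown mode: {mode}")
    backend = "isaac-sim" if mode == "simulation" else "physical-robot"
    return {"mode": mode, "backend": backend, "control": SHARED_CONTROL}

sim = make_endpoint("simulation")
real = make_endpoint("clinical")
print(sim["control"] == real["control"])  # identical control scheme in both modes
```

Keeping the control and networking configuration in one shared object is what makes a sim-trained operator's muscle memory valid in the clinical mode: only the backend behind the network boundary changes.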

Common use cases

  • Training and simulation-based preparation for robotic procedures
  • Remote procedures, including crisis response, where expert surgeons operate from distant locations
  • Clinical deployment in hospitals that lack on-site subspecialists, enabling expert guidance across geographies
  • Rapid prototyping and experimentation with new control devices, imaging systems, and sensing modalities
  • Latency benchmarking and system validation to ensure safe and responsive remote operations
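The latency benchmarking use case above reduces to a simple check: does the tail of the measured latency distribution stay inside the clinical budget? The sketch below is hypothetical post-processing of latency samples (for example, exported from an LDAT capture); the function name and the sample values are illustrative only, and the 50 ms threshold comes from the target described in the article.

```python
import statistics

CLINICAL_BUDGET_MS = 50.0  # sub-50 ms target from the workflow's benchmarks

def validate_latency(samples_ms: list[float], budget_ms: float = CLINICAL_BUDGET_MS) -> dict:
    """Summarize latency samples and check the 99th percentile against the budget."""
    p99 = statistics.quantiles(samples_ms, n=100)[98]  # 99th percentile
    return {
        "mean_ms": statistics.fmean(samples_ms),
        "p99_ms": p99,
        "within_budget": p99 <= budget_ms,
    }

# Illustrative samples only; real measurements come from tools like LDAT.
samples = [32.1, 35.4, 30.8, 41.2, 38.9, 33.5, 36.7, 44.9, 31.3, 39.0]
report = validate_latency(samples)
print(report)
```

Validating against a tail percentile rather than the mean matters here: a remote procedure is only as safe as its worst-case control loop, so an occasional 80 ms spike should fail the check even if the average looks comfortable.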

Setup & installation

Setup and installation details are not provided in the source. The workflow is described as containerized and designed to run across environments, but exact installation steps, environment setup, or repository URLs are not specified in the provided material.

# Setup & installation details are not provided in the source.
echo "No setup commands available in the source."

Quick start

A minimal runnable example is not provided in the source. The article encourages forking the repository and experimenting with control devices and imaging systems, but it does not include runnable setup steps or a ready-to-run example.

# Quick start not provided in the source.
echo "Refer to the official article for repository and setup instructions."

Pros and cons

Pros:
  • Low-latency, high-bandwidth pipeline designed for real-time tele-operation
  • Modular, production-ready workflow that supports both simulation and clinical deployment
  • Unified stack across multiple NVIDIA platforms to streamline development and deployment
  • Containerized deployment improves portability and reproducibility across environments
  • Capability to plug in cameras, imaging systems, and DDS configurations with experiment-friendly interfaces

Cons:
  • Requires access to suitable hardware and network infrastructure to realize low-latency performance
  • Involves a multi-component architecture (DGX, OVX, IGX/AGX) that can increase integration complexity
  • Not all setup and deployment details are provided in the source, requiring consultation of official documentation to implement fully

Alternatives (brief comparisons)

No alternatives are explicitly discussed in the source; the article focuses on the NVIDIA Isaac for Healthcare telesurgery workflow.

Pricing or License

Not specified in the source.
