Trackio: Track experiments locally with a Gradio dashboard and Spaces sync
Sources: https://huggingface.co/blog/trackio, Hugging Face Blog
Overview
Trackio is an open-source Python library for tracking experiment metrics and visualizing them in a local Gradio dashboard. You can also sync this dashboard to Hugging Face Spaces and share it with others simply by sharing a URL. Since Spaces can be private or public, you can share a dashboard with the world or only with members of your Hugging Face organization, all for free. Trackio is designed as a drop-in replacement for experiment tracking libraries such as wandb, with an API compatible with wandb.init, wandb.log, and wandb.finish, so you can simply write import trackio as wandb in your code.
At Hugging Face, Trackio is positioned as a lightweight, open-source solution intended to improve experimentation speed, sharing, and transparency. It helps researchers log metrics, parameters, and hyperparameters during training and visualize them afterwards to understand training progress. The project emphasizes accessibility, data portability, and straightforward integration with common Hugging Face tooling.
Key features
- Local Gradio dashboard for visualizing metrics and hyperparameters.
- Sync the local dashboard to Hugging Face Spaces for easy sharing via URL.
- Open-source and free to use.
- Drop-in replacement for wandb; API compatible with wandb.init, wandb.log, and wandb.finish (import as wandb).
- Easy embedding of plots in blogs and docs via iframes; dashboards can be shared without requiring viewers to create accounts.
- Transparent data handling: log and share metrics like GPU energy usage via nvidia-smi for environmental impact tracking.
- Data accessibility: extract and analyze recorded data without proprietary lock-in.
- Lightweight design to support rapid experimentation; suitable for iterating on new tracking features.
- Ephemeral SQLite database on Spaces for logged data; converted to Parquet and backed up to a Hugging Face Dataset every 5 minutes.
- Optional dataset naming: pass dataset_id to trackio.init() to name the backing dataset.
- Native integration with Hugging Face libraries such as transformers and accelerate: when training with transformers.Trainer or accelerate, no extra setup is required to start tracking.
- Status: Beta and lightweight; some features found in other tools (e.g., artifact management, advanced visualizations) are not available yet.
- Community input welcome: issues page for features and feedback.
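The energy-tracking feature above can be sketched as follows. This is a minimal illustration rather than a built-in Trackio mechanism: it assumes nvidia-smi is on your PATH, and the metric name gpu_power_w is a made-up example. The nvidia-smi query flags used are standard options.

```python
# Minimal sketch: sample GPU power draw via nvidia-smi and log it with Trackio.
# Assumptions: nvidia-smi is on PATH; the metric name "gpu_power_w" is illustrative.
import subprocess

def parse_power_draw(csv_output: str) -> list[float]:
    """Parse output of: nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits"""
    return [float(line) for line in csv_output.strip().splitlines() if line.strip()]

def gpu_power_watts() -> list[float]:
    """Return the current power draw (in watts) of each visible GPU."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_power_draw(out)

# Inside a training loop you might then log, alongside your usual metrics:
#   import trackio as wandb
#   wandb.log({"loss": loss, "gpu_power_w": sum(gpu_power_watts())})
```

Because Trackio metrics are just key-value pairs, any number you can read from the system can be logged the same way as a loss or accuracy.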
Common use cases
- Track metrics, hyperparameters, and intermediate states during model training.
- Visualize training progress locally and share progress via a Space URL or embed in documents/blogs.
- Compare multiple runs side-by-side and make metrics accessible to teammates with links.
- Quantify energy usage, e.g., GPU energy, for model cards and environmental impact discussions.
- Experiment with new tracking features quickly due to Trackio’s lightweight design.
- Integrate smoothly with Hugging Face workflows (transformers, accelerate) for minimal setup.
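For the transformers integration above, a minimal sketch might look like the following. It assumes your installed transformers version ships the Trackio reporting integration (report_to="trackio"); the output directory and other values are illustrative placeholders.

```python
# Sketch: pointing transformers' Trainer at Trackio.
# Assumption: your transformers version supports report_to="trackio";
# output_dir and the other values below are illustrative placeholders.
training_kwargs = dict(
    output_dir="trackio-demo",
    num_train_epochs=1,
    logging_steps=10,
    report_to="trackio",  # send Trainer logs to Trackio instead of another tracker
)

# With transformers (and a model/dataset) available, this plugs straight in:
#   from transformers import TrainingArguments, Trainer
#   args = TrainingArguments(**training_kwargs)
#   trainer = Trainer(model=model, args=args, train_dataset=train_ds)
#   trainer.train()  # metrics appear in the local Trackio dashboard
```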
Setup & installation
Install Trackio with pip:
pip install trackio
Using Trackio as a drop-in replacement for wandb in Python:
import trackio as wandb
wandb.init(project="my-project")
wandb.log({"epoch": 1, "loss": 0.25})
wandb.finish()
To synchronize your local dashboard to Hugging Face Spaces, pass a space_id to init:
import trackio as wandb
wandb.init(space_id="SPACE_ID", project="my-project")
wandb.log({"accuracy": 0.92})
Launching the dashboard can be done from the terminal or within Python: Trackio provides a trackio show command and a trackio.show() function for starting the dashboard locally. Whether a synced dashboard is shared publicly or privately depends on your Spaces configuration. When syncing to Spaces, logged data goes to an ephemeral SQLite database on the Space, which Trackio converts to Parquet and backs up to a Hugging Face Dataset every 5 minutes.
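Because the logged data lives in a plain SQLite database (with Parquet backups), you can inspect it with standard tools. The helper below is a generic sketch: the database path is a hypothetical placeholder, and the actual table names depend on Trackio's internal schema, so it simply enumerates them.

```python
# Sketch: peeking at a Trackio SQLite database with the standard library.
# The path below is a hypothetical placeholder; table names depend on
# Trackio's internal schema, so we just enumerate whatever is there.
import sqlite3
from pathlib import Path

def list_tables(conn: sqlite3.Connection) -> list[str]:
    """Return the names of all tables in an open SQLite connection."""
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
    ).fetchall()
    return [name for (name,) in rows]

db_path = Path("trackio.db")  # hypothetical location of a Trackio database
if db_path.exists():
    with sqlite3.connect(db_path) as conn:
        print(list_tables(conn))
```

This kind of direct access is what the article means by data portability: there is no proprietary format between you and your metrics.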
Quick start (minimal runnable example)
A minimal example that logs a couple of metrics during a short run:
import trackio as wandb
wandb.init(project="quick-start")
for epoch in range(3):
    loss = 0.3 * (epoch + 1)
    wandb.log({"epoch": epoch + 1, "loss": loss})
wandb.finish()
If you want to publish the dashboard to Spaces:
import trackio as wandb
wandb.init(space_id="SPACE_ID", project="quick-start")
wandb.log({"accuracy": 0.88})
wandb.finish()
Note: Trackio emphasizes a drop-in experience for users already familiar with wandb: you write the same code while gaining local visualization and Space-based sharing.
Pros and cons
Pros:
- Local, self-contained dashboard with a path to sharing via Spaces.
- Free, open-source, and easy to adopt; no barrier to entry for sharing results.
- Lightweight design suitable for rapid experimentation and feature iteration.
- GPU energy metrics via nvidia-smi support for environmental impact reporting.
- Tight integration with Hugging Face tools (Transformers, Accelerate) with minimal setup.
- Data portability and accessibility; you can extract data for custom analyses.
- Space-backed backups to Parquet datasets for long-term accessibility.
Cons:
- Currently in beta; artifact management and some advanced visualizations are not available yet.
- Some features rely on Space availability and configuration (public/private spaces matter for sharing).
- As a relatively new tool, the ecosystem around Trackio (plugins, third-party integrations) is still growing.
Alternatives (brief comparisons)
| Tool | Notes |
|---|---|
| Trackio | Open-source, free; local Gradio dashboard; Spaces sync; drop-in wandb replacement; lightweight and in beta; energy metrics and data accessibility emphasized. No artifact management yet. |
| Other tracking tools (in general) | Not detailed in this article; Trackio positions itself as a lightweight, transparent alternative with a focus on local visualization and Spaces sharing. |
Pricing or License
Trackio is described as open-source and free in the Hugging Face blog post, making it a no-cost option for experiment tracking. The post invites community feedback and feature requests through the project's issues page, consistent with open-source practice.