Backends

torch.node.js supports two GPU backends behind the same unified API. The default is Dawn (Google's Chromium WebGPU engine), but you can optionally use wgpu-native (a Rust-based WebGPU implementation) for specific use cases.

[Figure: Comparison of the Dawn and wgpu-native backends]

Dawn (Default)

The default backend uses Google Dawn, the exact same engine that powers WebGPU in the Chrome browser.

import torch from '@torchjsorg/torch-node';

// Uses Dawn by default
const tensor = torch.zeros([2, 3]);

Advantages

  • Full Parity: Matches Chrome browser behavior exactly.
  • Robust: Benefits from Google's extensive testing and performance optimization.
  • Polling: Handles GPU polling automatically, with no manual intervention (see the sketch below).
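
For example, reading a value back from the GPU is just an await on the result; there is no need to pump the device event loop yourself. The sketch below assumes PyTorch-style matmul, sum, and item methods, which are not part of the API shown on this page.

import torch from '@torchjsorg/torch-node';

// Dawn polls the GPU in the background, so an async read-back
// resolves on its own once the result is ready.
const a = torch.zeros([128, 128]);
const b = torch.zeros([128, 128]);

// Hypothetical PyTorch-style ops; only torch.zeros is documented here.
const c = torch.matmul(a, b);
const value = await c.sum().item(); // no manual device polling required
console.log(value);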

wgpu-native

An alternative backend built on wgpu-native, the Rust-based WebGPU implementation used by Firefox.

import torch from '@torchjsorg/torch-node/wgpu-native';

// Uses wgpu-native backend
const tensor = torch.zeros([2, 3]);

Advantages

  • Synchronous Validation: Catch shader errors immediately during execution (see the sketch after this list).
  • Resource Control: Offers finer control over WebGPU adapter reuse.
  • Rust Speed: Extremely low overhead for high-frequency operations.
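
As an illustration of synchronous validation, an invalid operation can throw an ordinary exception at the call site instead of failing asynchronously. This is only a sketch: the matmul call and the exact error behavior are assumptions, not documented API.

import torch from '@torchjsorg/torch-node/wgpu-native';

try {
  const a = torch.zeros([2, 3]);
  const b = torch.zeros([5, 7]);
  // Hypothetical op with incompatible shapes; with synchronous
  // validation the error surfaces right here.
  const c = torch.matmul(a, b);
} catch (err) {
  console.error('Validation error caught synchronously:', err);
}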

Backend Selection

Backend        Best For                     Engine
Dawn           Production, Chrome parity    C++ / Chromium
wgpu-native    Debugging, Firefox parity    Rust / wgpu
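
If you prefer to pick the backend at startup rather than hard-coding the import path, a conditional dynamic import works with the two entry points shown above. The TORCH_BACKEND variable name in this sketch is an arbitrary, app-defined choice, not an official setting.

// Choose a backend at startup from an app-defined environment variable.
const useWgpu = process.env.TORCH_BACKEND === 'wgpu-native';

const { default: torch } = await import(
  useWgpu
    ? '@torchjsorg/torch-node/wgpu-native'
    : '@torchjsorg/torch-node'
);

const tensor = torch.zeros([2, 3]);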

Environment Configuration

You can force specific adapter behavior using environment variables:

# Request a specific hardware vendor
export WGPU_ADAPTER_NAME="NVIDIA"

# Enable extra validation layers for debugging
export WGPU_VALIDATION=1
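
You can also set these variables programmatically, as in the sketch below, which assumes they are read when the backend initializes on first import; setting them on the command line when launching Node works the same way.

// Sketch: set the variables before the backend is loaded
// (assumes they are read during backend initialization).
process.env.WGPU_ADAPTER_NAME = 'NVIDIA';
process.env.WGPU_VALIDATION = '1';

const { default: torch } = await import('@torchjsorg/torch-node');
const tensor = torch.zeros([2, 3]);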