Runtime Environments

torch.js is a "universal" machine learning library. The exact same tensor code runs in modern web browsers and on servers via Node.js, with native GPU acceleration in both environments.

Figure: The torch.js universal architecture (Browser vs. Node.js)

Overview

We provide dedicated entry points depending on your environment.

| Runtime | Package | GPU Backend | Best For |
| --- | --- | --- | --- |
| Browser | `@torchjsorg/torch.js` | Browser WebGPU | Demos, apps, visualization |
| Node.js | `@torchjsorg/torch-node` | wgpu-native | Server-side, CLI, dataset processing |
| Cloud | `@torchjsorg/dawn` | Google Dawn | Production Node.js, CI/CD |

1. Browser Environment

In the browser, torch.js leverages the device's GPU directly through the standard WebGPU API.

```ts
import torch from '@torchjsorg/torch.js';

async function run() {
  // Initialize the GPU
  await torch.init();

  const x = torch.randn(1024, 1024);
  console.log('Running on GPU:', x.device);
}

run();
```

Fallback Support: If a user's browser doesn't support WebGPU, torch.js automatically falls back to a CPU-based implementation so your code doesn't crash.
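The fallback decision boils down to a capability check: browsers expose WebGPU as `navigator.gpu`, and its absence means the CPU path must be used. The helper below is an illustrative sketch, not part of the torch.js API:

```typescript
// Illustrative sketch of the WebGPU fallback check.
// `selectBackend` is a hypothetical helper, not a torch.js API.
type Backend = 'webgpu' | 'cpu';

function selectBackend(env: { navigator?: { gpu?: unknown } }): Backend {
  // Browsers expose WebGPU as `navigator.gpu`; if it is missing,
  // fall back to the CPU implementation instead of crashing.
  return env.navigator?.gpu !== undefined ? 'webgpu' : 'cpu';
}
```

Passing the global object (or a stub of it) makes the check easy to unit-test in any environment.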

2. Node.js Environment

For server-side use, we provide torch-node. This package bundles wgpu-native, allowing the same shaders used in the browser to run directly on your server's hardware (Vulkan, Metal, or DX12).

```ts
import torch from '@torchjsorg/torch-node'; // Use the Node-specific entry
import fs from 'fs/promises';

async function trainOnServer() {
  const model = createModel(); // your model definition (see Code Sharing Patterns below)
  const data = await fs.readFile('dataset.bin'); // Node-only: filesystem access

  // ... training loop ...
}
```
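Shared code sometimes needs to know which runtime it is in before touching Node-only APIs like `fs`. A common detection idiom is checking for `process.versions.node`; the helper name below is an assumption for illustration, not a torch.js API:

```typescript
// Hypothetical runtime check (not part of torch.js): shared code can use it
// to decide between Node-only APIs (fs) and browser APIs (fetch, IndexedDB).
function isNode(): boolean {
  // `process.versions.node` is defined only under Node.js; going through
  // globalThis avoids a compile error in browser-typed codebases.
  const proc = (globalThis as any).process;
  return typeof proc !== 'undefined' && Boolean(proc?.versions?.node);
}
```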

Code Sharing Patterns

Because the API is identical, you can write your model architecture once and use it everywhere.

```ts
// shared/model.ts
import torch from '@torchjsorg/torch.js';

export function MyModel() {
  // A simple MLP classifier: 784 -> 128 -> 10
  return torch.nn.Sequential(
    torch.nn.Linear(784, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
  );
}
```
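One way to wire this up at the package level is Node's conditional exports, which let a single import specifier resolve to different builds per environment. The layout below is a hypothetical sketch (package name and file paths are assumptions); note that the `"browser"` condition is honored by bundlers rather than by Node itself:

```json
{
  "name": "my-shared-model",
  "exports": {
    ".": {
      "node": "./dist/index.node.js",
      "browser": "./dist/index.browser.js",
      "default": "./dist/index.browser.js"
    }
  }
}
```

With this in place, `import { MyModel } from 'my-shared-model'` picks the right build automatically in both runtimes.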

Feature Comparison

| Feature | Browser | Node.js |
| --- | --- | --- |
| WebGPU acceleration | Yes (built-in) | Yes (wgpu-native) |
| Filesystem access | No (use fetch / IndexedDB) | Yes (full fs access) |
| Headless support | No (needs a window) | Yes (server/CLI friendly) |
| Performance | Good | Excellent (no browser overhead) |
| Installation | Zero-install for users | Requires npm install |
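The filesystem row is the difference you hit first in practice. A minimal sketch of Node-side dataset loading (the helper name is an assumption, not a torch.js API), with a temp-file round trip for illustration:

```typescript
import { readFileSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Hypothetical helper (not a torch.js API): Node.js can read a dataset
// straight from disk; a browser build would use fetch() or IndexedDB instead.
function loadDatasetBytes(path: string): Uint8Array {
  return new Uint8Array(readFileSync(path));
}

// Round-trip a tiny buffer through a temp file to demonstrate the helper.
const demoPath = join(tmpdir(), 'torchjs-docs-demo.bin');
writeFileSync(demoPath, new Uint8Array([7, 8, 9]));
const bytes = loadDatasetBytes(demoPath);
```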

Next Steps

  • Performance Guide - Benchmarking torch.js across runtimes.
  • PyTorch Migration Guide - Side-by-side syntax comparison.