torch.allclose
function allclose(input: Tensor, other: Tensor, options?: AllcloseOptions): Promise<boolean>
function allclose(input: Tensor, other: Tensor, rtol: number, atol: number, equal_nan: boolean, options?: AllcloseOptions): Promise<boolean>
Test whether all elements are close (within tolerance); returns a single boolean result.
Async counterpart of isclose(): combines the element-wise comparison with a reduction to a single boolean. Returns true only if ALL pairs of elements are close within the specified tolerances. Essential for:
- Assertion testing in unit tests (compare model outputs to expected values)
- Gradient checking: validating analytical gradients match numerical gradients
- Convergence detection: checking if iterative optimization has converged
- Cross-validation: comparing torch.js results against PyTorch baseline
- Approximate equality: testing floating-point results with tolerance
- Batch validation: confirming all samples in a batch meet quality criteria
- Integration testing: verifying multi-step computations produce expected results
Tolerance formula: All elements must satisfy: |input[i] - other[i]| <= (atol + rtol * |other[i]|)
- Absolute tolerance (atol): Fixed threshold (default 1e-8), handles near-zero values
- Relative tolerance (rtol): Fraction of |other| (default 1e-5), scales with magnitude
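The tolerance formula above can be sketched as a plain-TypeScript reference implementation over number arrays (this is an illustration of the documented formula, not the library's internal code):

```typescript
// Reference sketch of the allclose tolerance check (not torch.js internals).
function allcloseRef(
  input: number[],
  other: number[],
  rtol = 1e-5,
  atol = 1e-8,
  equalNan = false
): boolean {
  if (input.length !== other.length) throw new Error("shape mismatch");
  return input.every((a, i) => {
    const b = other[i];
    if (Number.isNaN(a) || Number.isNaN(b)) {
      // NaN only matches NaN when equalNan is set
      return equalNan && Number.isNaN(a) && Number.isNaN(b);
    }
    return Math.abs(a - b) <= atol + rtol * Math.abs(b);
  });
}

// atol covers the near-zero pair; rtol covers the large-magnitude pair.
console.log(allcloseRef([1e-9, 100.0], [2e-9, 100.0001])); // true
```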
Async nature: Reads comparison result from GPU (requires await). Use isclose() if you need element-wise results or want to avoid GPU readback cost.
- Async operation: Must be awaited; use in async functions or with .then()
- GPU readback cost: Reading GPU result to CPU is expensive - don't check in tight loops
- Returns single boolean: Unlike isclose() which returns tensor of booleans
- Asymmetric formula: Because the rtol term scales with |other|, allclose(a, b) may differ from allclose(b, a), even though |a-b| = |b-a|
- Automatic broadcasting: Shapes automatically broadcast like isclose()
- Short-circuit possible: Implementation may return false immediately if any element differs
- NaN default: By default, NaN != NaN even in allclose (use equal_nan=true to treat as equal)
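The "must be awaited" point above deserves emphasis: a `Promise<boolean>` is an object and therefore always truthy, so a forgotten `await` silently passes every check. A minimal sketch (using a hypothetical stand-in for the async result, not the real library call):

```typescript
// Stand-in for torch.allclose's async result (hypothetical: always resolves false).
const allcloseAsync = async (): Promise<boolean> => false;

async function demo(): Promise<void> {
  const forgot = allcloseAsync();          // Promise<boolean>, not boolean
  console.log(Boolean(forgot));            // true: a Promise object is always truthy
  const awaited = await allcloseAsync();   // boolean: the actual comparison result
  console.log(awaited);                    // false
}
demo();
```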
- Async operation: Must await the result - a forgotten await yields a Promise object, which is always truthy in conditionals
- GPU performance cost: Reading GPU data to check result is slow - cache results if comparing same tensors multiple times
- Single NaN fails comparison: One NaN value makes entire result false (unless equal_nan=true)
- Shape must broadcast: Will throw error if shapes are incompatible
- Complex dtypes: May have unexpected behavior with complex numbers
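The asymmetry noted above follows directly from the tolerance formula, since rtol multiplies |other| rather than |input|. A tiny sketch of the per-element check (illustrative only, atol omitted for clarity):

```typescript
// rtol scales with |b| (the second argument), so swapping arguments
// changes the allowed difference.
const closeTo = (a: number, b: number, rtol: number): boolean =>
  Math.abs(a - b) <= rtol * Math.abs(b);

console.log(closeTo(10, 9, 0.1)); // false: 1 > 0.1 * 9
console.log(closeTo(9, 10, 0.1)); // true:  1 <= 0.1 * 10
```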
Parameters
input Tensor - First tensor (any shape and dtype)
other Tensor - Second tensor (must be broadcastable with input)
options AllcloseOptions (optional) - Tolerance options:
- rtol: Relative tolerance (default: 1e-5). Fraction of |other| allowed to differ
- atol: Absolute tolerance (default: 1e-8). Maximum absolute difference allowed
- equal_nan: If true, treat NaN as equal to NaN (default: false)
Returns
Promise<boolean> - true if ALL elements are close, false if any element differs
Examples
// Basic usage: test if tensors match within tolerance
const output = model.forward(input);
const expected = torch.tensor([0.1, 0.2, 0.3]);
const all_match = await torch.allclose(output, expected, { atol: 1e-6 });
console.assert(all_match, "Model output should match expected");

// Gradient checking: validating backpropagation correctness
const numerical_grad = computeNumericalGradient(model, input, loss_fn, 1e-5); // epsilon = 1e-5
const analytical_grad = model.computeGradient(input); // From backprop
const gradients_match = await torch.allclose(numerical_grad, analytical_grad, {
  rtol: 1e-5, // 0.001% relative tolerance
  atol: 1e-7  // plus 1e-7 absolute tolerance for small gradients
});
if (!gradients_match) {
  console.error("Gradient mismatch! Check backward() implementation");
}

// Convergence checking: has optimization converged?
let converged = false;
for (let i = 0; i < max_iterations; i++) {
  const prev_loss = computeLoss(params);
  optimizer.step();
  const new_loss = computeLoss(params);
  // Check if the loss has stabilized
  converged = await torch.allclose(prev_loss, new_loss, { rtol: 1e-4 });
  if (converged) {
    console.log(`Converged after ${i} iterations`);
    break;
  }
}

// Comparing torch.js vs PyTorch: cross-framework validation
const torchjs_result = torch.some_operation(input);
const pytorch_result = torch.tensor(python_pytorch_result); // From Python
const matches_pytorch = await torch.allclose(torchjs_result, pytorch_result, {
  rtol: 1e-5,
  atol: 1e-8
});
if (!matches_pytorch) {
  console.warn("torch.js result differs from PyTorch - possible implementation bug");
}

// Batch validation: ensure all samples meet criteria
const predictions = model.forward(batch); // Shape: [batch_size, num_classes]
const targets = expectedValues;
// Check if entire batch prediction matches expected (within tolerance)
const batch_correct = await torch.allclose(predictions, targets, { atol: 0.1 });
if (batch_correct) {
  console.log("All samples in batch predicted correctly!");
}

// Comparing absolute vs relative tolerance effects
const a = torch.tensor([1e-9, 100.0]);
const b = torch.tensor([2e-9, 100.0001]);
// Default: rtol=1e-5, atol=1e-8
const default_match = await torch.allclose(a, b); // true here: atol covers the tiny pair, rtol covers the large pair
// Looser absolute tolerance: governs near-zero values like 1e-9 vs 2e-9
const with_atol = await torch.allclose(a, b, { atol: 1e-6 });
// Looser relative tolerance: governs large values like 100.0 vs 100.0001
const with_rtol = await torch.allclose(a, b, { rtol: 1e-4 });

See Also
- PyTorch torch.allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False)
- isclose - Element-wise version (returns boolean tensor without GPU readback)
- equal - Exact equality check (no tolerance, synchronous)
- eq - Element-wise equality comparison
- logical_not - Negate boolean results