torch.autograd.backward
function backward(tensors: Tensor | Tensor[]): void
function backward(tensors: Tensor | Tensor[], grad_tensors: Tensor | Tensor[] | null, retain_graph: boolean, create_graph: boolean, inputs: Tensor | Tensor[], options: BackwardOptions): void

Computes the sum of gradients of given tensors with respect to graph leaves.
The graph is differentiated using the chain rule. If any of tensors is non-scalar (i.e. its data has more than one element) and requires gradient, the Jacobian-vector product is computed; in that case the function additionally requires grad_tensors, which supplies the vector in that product.
This function accumulates gradients in the leaves, so you may need to zero the .grad attributes or set them to null before calling it.
Overload conventions:
backward(tensors, options?) - Compute gradients with optional settings
backward(tensors, grad_tensors, options?) - Specify initial gradients (v in the Jv product)
Parameters

tensors - Tensors whose gradients will be computed.
grad_tensors - The "vector" in the Jacobian-vector product, usually gradients with respect to each element of the corresponding tensors. null can be specified for scalar tensors or tensors that don't require gradients.
retain_graph - If false, the graph used to compute the gradients is freed after the backward pass.
create_graph - If true, a graph of the derivative is constructed, allowing higher-order derivatives to be computed.
inputs - If provided, gradients are accumulated only into these tensors rather than into all leaves.
options - A BackwardOptions object bundling the settings above.
Examples
import * as torch from '@torchjsorg/torch.js';
const x = torch.randn(3, { requires_grad: true });
const y = x.pow(2).sum();
// Compute gradients
torch.autograd.backward(y);
console.log(x.grad); // dy/dx = 2x
// Using options: retain the graph so backward can be called on it again.
// (The graph of y above was freed by the first backward call, so a new
// forward pass is needed here.)
const y2 = x.pow(2).sum();
torch.autograd.backward(y2, { retain_graph: true });
torch.autograd.backward(y2); // succeeds because the graph was retained
// Multiple tensors
const a = torch.randn(2, { requires_grad: true });
const b = torch.randn(2, { requires_grad: true });
const loss1 = a.sum();
const loss2 = b.sum();
torch.autograd.backward([loss1, loss2]);