torch.nn.BackwardHook
export type BackwardHook = (
module: Module,
gradInput: (Tensor | null)[],
gradOutput: (Tensor | null)[]
) => (Tensor | null)[] | void;

Hook called during the backward pass of a module.
A BackwardHook receives the gradients flowing backward through the module. It can inspect both the input gradients (computed by the module's backward pass) and the output gradients (received from downstream modules), and it can modify the input gradients by returning a replacement array. The hook is called after the module's backward computation completes.
Signature:
(module: Module, gradInput: (Tensor | null)[], gradOutput: (Tensor | null)[]) => (Tensor | null)[] | void

Parameters:
- module: The module whose backward pass just completed
- gradInput: Gradients with respect to the module's inputs (computed by the backward pass)
- gradOutput: Gradients with respect to the module's outputs (received from downstream)
Return Value:
- Return a modified gradInput array to change the gradients propagated upstream
- Return void (no value) to leave the original gradients unchanged
- Returning modified gradients enables gradient clipping, scaling, or filtering
Use Cases:
- Debug gradient flow during training
- Implement gradient clipping or normalization
- Monitor gradient magnitudes and statistics
- Implement custom gradient transformations
- Detect gradient anomalies (NaN, Inf, explosion)
Examples
const hook: BackwardHook = (module, gradInput, gradOutput) => {
// Clip gradients to prevent explosion
const clipped = gradInput.map(g =>
g ? torch.clamp(g, -1, 1) : g
);
return clipped;
};
model.register_backward_hook(hook);
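A second example, covering the anomaly-detection use case listed above. `Tensor` is stubbed as a plain number-array wrapper so the detection logic is runnable in isolation; with the real library you would read values through its tensor API instead.

```typescript
// Stub Tensor type; the real library's Tensor is opaque.
type Tensor = { data: number[] };

// Returns true if any gradient in the list contains NaN or +/-Infinity.
function hasAnomaly(grads: (Tensor | null)[]): boolean {
  return grads.some(
    g => g !== null && g.data.some(x => !Number.isFinite(x))
  );
}

// A monitoring hook: warns on non-finite gradients and returns void,
// so the original gradients flow through unchanged.
const anomalyHook = (
  module: { name: string },
  gradInput: (Tensor | null)[],
  gradOutput: (Tensor | null)[]
): void => {
  if (hasAnomaly(gradOutput) || hasAnomaly(gradInput)) {
    console.warn(`Non-finite gradient detected in ${module.name}`);
  }
};
```

Because the hook returns void, it is safe to register purely for diagnostics: it never alters the gradient flow.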