torch.nn.register_module_full_backward_hook
function register_module_full_backward_hook(hook: BackwardHook): RemovableHandle
Registers a global backward hook that receives both input and output gradients.
Alias for register_module_backward_hook(). Installs a hook that is called after every module's backward computation, with access to both gradient types: gradInput (gradients with respect to the module's inputs, computed during backward) and gradOutput (gradients with respect to the module's outputs, received from downstream layers). This "full" variant provides complete gradient information. Useful for:
- Comprehensive gradient analysis: Seeing both computed and received gradients
- Gradient debugging: Understanding complete gradient flow through layers
- Gradient scaling: Per-layer gradient transformations based on both types
- Layer-specific diagnostics: Comparing input vs output gradients
- Advanced training techniques: Implementing custom optimization strategies
The "full" version provides more information than simpler hooks but runs after backward completes. Same functionality as register_module_backward_hook().
- Full information: Provides both gradInput and gradOutput
- Post-backward: Runs after backward computation completes
- Comprehensive: More information than simpler hook variants
- Alias: Same as register_module_backward_hook()
- Performance: More overhead due to providing complete information
- Backward only: Only executes during backward pass
- Gradient handling: Some gradients may be null for certain layer types
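The null-gradient caveat above matters in practice: a hook must not assume every entry of gradInput or gradOutput is a tensor. As a hedged sketch of null-safe gradient debugging, the helper below records layers whose received gradient has nearly vanished (makeVanishingDetector and the threshold are illustrative, not part of the library; tensor methods norm()/item() follow the examples on this page):

```javascript
// Hypothetical helper (not library API): collect names of modules whose
// received gradient norm falls below a threshold. A null first gradient is
// treated as vanished via optional chaining, since some layer types
// produce null gradients.
function makeVanishingDetector(threshold) {
  const flagged = [];
  const hook = (module, gradInput, gradOutput) => {
    const n = gradOutput[0]?.norm().item() ?? 0;
    if (n < threshold) flagged.push(module?.constructor?.name ?? 'unknown');
  };
  return { hook, flagged };
}

// Usage (requires a torch runtime):
// const { hook, flagged } = makeVanishingDetector(1e-6);
// const handle = torch.nn.register_module_full_backward_hook(hook);
// model.forward(x).backward();
// console.log('Vanishing gradients in:', flagged);
// handle.remove();
```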
Parameters
hook: BackwardHook – BackwardHook function called with (module, gradInput, gradOutput)
Returns
RemovableHandle – RemovableHandle to unregister this hook using .remove()
Examples
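Since the returned RemovableHandle is the only way to unregister a global hook, here is a hedged sketch that pairs gradient modification with .remove() for cleanup. makeClipHook is an illustrative helper, not library API; the tensor methods norm(), item(), and mul() follow the examples on this page:

```javascript
// Hypothetical helper (not library API): build a hook that clips each
// per-layer input gradient to a maximum norm. Returning an array from the
// hook replaces gradInput, as in the scaling example on this page.
function makeClipHook(maxNorm) {
  return (module, gradInput, gradOutput) => {
    return gradInput.map(g => {
      if (!g) return g;                             // some layers yield null gradients
      const n = g.norm().item();
      return n > maxNorm ? g.mul(maxNorm / n) : g;  // rescale only oversized gradients
    });
  };
}

// Usage (requires a torch runtime):
// const handle = torch.nn.register_module_full_backward_hook(makeClipHook(1.0));
// ...training...
// handle.remove();  // unregister via the returned RemovableHandle
```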
// Compare input and output gradients
const hook = (module, gradInput, gradOutput) => {
const in_norm = gradInput[0]?.norm().item() ?? 0;
const out_norm = gradOutput[0]?.norm().item() ?? 0;
  console.log(`${module.constructor.name}: in=${in_norm.toFixed(3)}, out=${out_norm.toFixed(3)}`);
};
torch.nn.register_module_full_backward_hook(hook);

// Gradient flow analysis across layers
const gradients = {};
const hook = (module, gradInput, gradOutput) => {
const name = module.constructor.name;
gradients[name] = {
in: gradInput[0]?.norm().item(),
out: gradOutput[0]?.norm().item()
};
};
torch.nn.register_module_full_backward_hook(hook);
model.forward(x).backward();
console.log('Gradient analysis:', gradients);

// Apply layer-specific gradient scaling
const hook = (module, gradInput, gradOutput) => {
// Scale gradients based on layer type
const scale = module instanceof torch.nn.Linear ? 0.5 : 1.0;
return gradInput.map(g => g ? g.mul(scale) : g);
};
torch.nn.register_module_full_backward_hook(hook);
See Also
- PyTorch torch.nn.modules.module.register_module_full_backward_hook
- register_module_backward_hook - Same function (different name)
- register_module_full_backward_pre_hook - Pre-backward variant
- register_module_forward_hook - Forward pass equivalent