torch.autograd.set_detect_anomaly
function set_detect_anomaly(enabled: boolean, options?: { check_nan?: boolean }): void

Set whether anomaly detection is enabled globally.
When enabled, the autograd engine will:
- Track stack traces for all operations (for better error messages)
- Check for NaN/Inf values in gradients during backward
- Warn about potentially problematic operations
This has a significant performance overhead and should only be used for debugging.
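Conceptually, the NaN/Inf check inspects every gradient buffer produced during backward and raises as soon as a non-finite value appears. A minimal self-contained sketch of that check in plain TypeScript (the function name and the flat-array gradient representation are illustrative, not part of the library's API):

```typescript
// Hypothetical sketch of the gradient check anomaly mode performs.
// `grad` stands in for a tensor's gradient buffer as a flat number array.
function assertFiniteGrad(name: string, grad: number[]): void {
  for (let i = 0; i < grad.length; i++) {
    if (!Number.isFinite(grad[i])) {
      // Anomaly mode throws here, identifying the offending tensor and entry.
      throw new Error(`Gradient of '${name}' contains ${grad[i]} at index ${i}`);
    }
  }
}

// A healthy gradient passes silently; a NaN gradient throws.
assertFiniteGrad('weight', [0.1, -0.5, 2.0]);
let caught = '';
try {
  assertFiniteGrad('bias', [0.3, NaN]);
} catch (e) {
  caught = (e as Error).message;
}
console.log(caught);
```

In the real engine this check runs per operation, which is why enabling it slows backward noticeably.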
Parameters
enabled: boolean - Whether to enable anomaly detection
options: { check_nan?: boolean } (optional) - Additional options
Examples
// Enable anomaly detection for debugging
torch.autograd.set_detect_anomaly(true);

try {
  // Your training code here
  loss.backward();
} catch (e) {
  console.error('Anomaly detected:', e);
}

// Disable when done debugging
torch.autograd.set_detect_anomaly(false);