torch.autograd.is_anomaly_detection_enabled
function is_anomaly_detection_enabled(): boolean

Check if anomaly detection is currently enabled.
Returns whether the global anomaly detection mode is active. When enabled, the autograd engine tracks stack traces and checks for problematic patterns like NaN/Inf gradients, in-place modifications, and other gradient computation issues.
Anomaly detection is essential for debugging gradient issues:
- Identifies which operations produced NaN/Inf gradients
- Provides stack traces showing where operations were called
- Helps detect in-place modifications that break gradients
- Warns about potentially problematic patterns
When to use:
- Debugging training divergence or NaN loss
- Investigating gradient computation errors
- Understanding gradient flow in complex models
- Identifying incorrect in-place operations
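For temporary debugging sessions like these, a small wrapper can enable anomaly detection around a suspect region and restore the previous global setting afterwards. This is a sketch, not part of the library: `withAnomalyDetection` is a hypothetical helper built only on the two calls documented on this page, with `torch.autograd` passed in explicitly.

```javascript
// Hypothetical helper (not part of the library): runs fn with anomaly
// detection enabled, then restores the previous global setting.
// Pass torch.autograd as the first argument.
function withAnomalyDetection(autograd, fn) {
  const wasEnabled = autograd.is_anomaly_detection_enabled();
  if (!wasEnabled) {
    autograd.set_detect_anomaly(true);
  }
  try {
    return fn();
  } finally {
    // Restore the original state even if fn throws.
    if (!wasEnabled) {
      autograd.set_detect_anomaly(false);
    }
  }
}
```

Usage would look like `withAnomalyDetection(torch.autograd, () => loss.backward())`, so the expensive instrumentation is scoped to the backward pass being debugged.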
Performance impact: Anomaly detection has significant overhead and should only be used while debugging, never in production training loops.

Notes:
- Global state: Returns the global anomaly detection setting, not a per-operation flag
- Stack traces: When enabled, every operation is instrumented to capture the stack trace at the point it was called
- Memory overhead: Storing stack traces for all operations increases memory usage
- NaN/Inf checking: Can be toggled separately via the set_detect_anomaly options
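One way to honor the debugging-only advice above is to gate anomaly detection on run configuration. A minimal sketch, assuming a hypothetical DEBUG_AUTOGRAD environment variable (not part of the library) and passing `torch.autograd` explicitly:

```javascript
// Hypothetical pattern: only enable anomaly detection for debug runs.
// DEBUG_AUTOGRAD is an assumed env var name, not part of the library.
function configureAnomalyDetection(autograd, env) {
  const debugRun = env.DEBUG_AUTOGRAD === '1';
  if (debugRun && !autograd.is_anomaly_detection_enabled()) {
    autograd.set_detect_anomaly(true);
  }
  // Return the effective setting so callers can log it at startup.
  return autograd.is_anomaly_detection_enabled();
}
```

Called once at startup as `configureAnomalyDetection(torch.autograd, process.env)`, this keeps production runs free of the instrumentation overhead by default.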
Returns

boolean – true if anomaly detection is currently enabled, false otherwise

Examples
// Check current anomaly detection status
if (torch.autograd.is_anomaly_detection_enabled()) {
console.log('Anomaly detection is active');
}

// Use in conditional debugging
if (!torch.autograd.is_anomaly_detection_enabled()) {
torch.autograd.set_detect_anomaly(true);
}

See Also
- PyTorch torch.autograd.is_anomaly_enabled()
- set_detect_anomaly - Enable or disable anomaly detection
- detect_anomaly - Context manager for temporary anomaly detection
- torch.autograd - Autograd module