torch.autograd.disable_saved_tensors_hooks
function disable_saved_tensors_hooks&lt;T&gt;(fn: () => T): T

Context manager to temporarily disable saved tensors hooks.
Disables any pack/unpack hooks registered via saved_tensors_hooks or save_on_cpu for a specific code block, then automatically restores them. Useful when you want to temporarily bypass custom tensor storage mechanisms for specific operations while keeping the hooks active for the rest of your code.
Use cases:
- Specific operations need default storage: Bypass custom storage for certain operations
- Nested hook control: Disable outer hooks for an inner context
- Testing/debugging: Temporarily disable hooks to isolate issues
- Mixed storage strategies: Use different storage for different parts of computation
- Performance optimization: Skip hook overhead for specific bottleneck operations
Hook system: Saved tensors hooks customize how intermediate activations are stored during forward pass for reuse in backward. Common uses include:
- save_on_cpu: Store activations on CPU to save GPU memory
- Custom compression: Store compressed versions for space efficiency
- Gradient checkpointing: Recompute instead of storing
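The pack/unpack mechanism above can be modeled in a few lines of plain TypeScript. This is a toy sketch, independent of any torch binding: `Hooks`, `toCpu`, and `saveForBackward` are hypothetical names, and the "CPU offload" is simulated by tagging the value rather than moving device memory.

```typescript
// Toy model of the saved-tensors hook mechanism (not the real torch API):
// "pack" transforms a value when autograd saves it for backward,
// "unpack" restores it when backward needs it again.

type Hooks<S, P> = { pack: (saved: S) => P; unpack: (packed: P) => S };

// Hypothetical pair mimicking save_on_cpu: tag where the value notionally
// lives instead of actually moving device memory.
const toCpu: Hooks<number[], { device: string; data: number[] }> = {
  pack: (t) => ({ device: "cpu", data: t }),
  unpack: (p) => p.data,
};

// Saving an activation runs pack immediately; backward runs unpack later.
function saveForBackward<S, P>(
  hooks: Hooks<S, P> | null,
  tensor: S
): { restore: () => S } {
  if (hooks === null) {
    // Hooks disabled: default storage, value kept as-is on its device.
    return { restore: () => tensor };
  }
  const packed = hooks.pack(tensor);
  return { restore: () => hooks.unpack(packed) };
}

const saved = saveForBackward(toCpu, [1, 2, 3]);
const roundTripped = saved.restore(); // same values come back in backward
```

Passing `null` for the hooks corresponds to running inside `disable_saved_tensors_hooks`: the value is kept untouched instead of being packed.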
When hooks are disabled, saved tensors use default storage: they stay on the device where they were created (e.g. GPU memory).
Notes:
- Restores previous state: Automatically re-enables hooks that were active before, even if fn throws
- Nested safety: Safe to nest multiple disable calls; innermost takes precedence
- Try/finally safety: Exception-safe - always restores hook state
- Affects saved tensors only: Only impacts tensors saved for backward; doesn't affect other operations
- Temporary effect: Only affects code within fn(); hooks are immediately re-enabled after
- Memory overhead: Disabling save_on_cpu uses GPU memory instead of CPU
- Performance impact: Default storage may be slower than optimized hooks
- No-op without hooks: If no hooks are currently registered, disabling has no effect
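The restore-on-exit contract described in these notes can be sketched with a plain hook stack rather than the real autograd machinery. `disableHooks` and `activeHooks` below are hypothetical names; the point is the try/finally shape that makes nesting and exceptions safe.

```typescript
// Sketch of exception-safe hook disabling: save the current hook state,
// clear it for the duration of fn, and restore it in a finally block so
// the prior state survives both nesting and thrown errors.

type PackFn = (v: unknown) => unknown;

let activeHooks: PackFn[] = [];

function disableHooks<T>(fn: () => T): T {
  const previous = activeHooks; // remember whatever was active (even none)
  activeHooks = [];             // no hooks while fn runs
  try {
    return fn();
  } finally {
    activeHooks = previous;     // always restored, even if fn throws
  }
}

activeHooks = [(v) => v];       // pretend one hook is registered
try {
  disableHooks(() => {
    disableHooks(() => {});     // nesting is safe: each level restores its own state
    throw new Error("boom");    // exception propagates out of fn...
  });
} catch {
  // ...but the hook installed before the call is active again here
}
```

The same finally-based restore is why a `disable_saved_tensors_hooks` call can be dropped into existing hook-managed code without bookkeeping by the caller.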
Parameters
fn: () => T – Function to execute with all saved tensors hooks disabled
Returns
T – The result of calling fn()

Examples
// Use hooks for most operations but disable for specific ones
torch.autograd.graph.saved_tensors_hooks(packHook, unpackHook, () => {
  const x = torch.randn(1000, 1000, { requires_grad: true });
  const y = x.matmul(x); // Uses hooks
  let z;
  torch.autograd.graph.disable_saved_tensors_hooks(() => {
    // Skip hooks for this specific operation
    z = y.sin();
  });
  const w = z.sum(); // Uses hooks again
});

// Disable CPU-saving hooks for memory-intensive operations
torch.autograd.graph.save_on_cpu(() => {
  const batch = torch.randn(32, 1000, 1000);
  const features = model.encoder(batch); // Saved on CPU
  let important;
  torch.autograd.graph.disable_saved_tensors_hooks(() => {
    // Critical operation that needs fast access
    important = features.matmul(features.t());
  });
  const output = model.decoder(important); // Saved on CPU again
});

// Nested hook management
torch.autograd.graph.saved_tensors_hooks(compressPackHook, decompressUnpackHook, () => {
  const x = compute_something();
  torch.autograd.graph.disable_saved_tensors_hooks(() => {
    // This operation doesn't use compression
    const y = x.clone();
  });
  const z = compute_more(); // Compression active again
});

See Also
- PyTorch torch.autograd.graph.disable_saved_tensors_hooks()
- saved_tensors_hooks - Register custom pack/unpack hooks
- save_on_cpu - Convenience context for CPU-based storage
- torch.autograd.graph - Autograd graph manipulation module