torch.Tensor.destroy
Tensor.destroy(): void
Manually releases GPU resources and destroys the tensor.
Releases the GPU buffer associated with this tensor, returning it to the memory pool for reuse. Also recursively destroys gradient tensors. This is called automatically via the FinalizationRegistry, but can be called manually for immediate cleanup.
When to use:
- Processing large batches of temporary tensors
- Explicit resource management in loops
- Recovering GPU memory before the next operation
- Debugging memory leaks
Notes:
- Automatic cleanup: a FinalizationRegistry auto-destroys the tensor when it is garbage collected
- GPU only: only frees GPU buffers; CPU tensors don't need explicit destruction
- Cannot reuse: the tensor becomes unusable after destroy() - create new tensors instead
- Gradient destruction: also recursively destroys the associated gradient tensor
- No double-free: safe to call multiple times - cleanup is handled internally
- Use after destroy: accessing the tensor after destroy() will fail
- Manual management: usually not needed - reserve for specific optimization needs
Returns
void (modifies tensor in-place, marks as destroyed)
Examples
// Process large batches with explicit cleanup
for (let i = 0; i < batchCount; i++) {
  const batch = loadBatch(i);
  const result = model(batch);
  processResult(result);
  // Free GPU memory immediately
  batch.destroy();
  result.destroy();
}
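If processing throws, the destroy() calls in a loop like the one above are skipped and the buffers stay allocated until the FinalizationRegistry eventually runs. A minimal exception-safe sketch of the same pattern, using try/finally; the tensor and model objects here are stand-ins (not the real torch.js API) so the sketch is runnable on its own:

```javascript
// Exception-safe variant of the batch loop: destroy() runs even when
// processing throws. makeTensor/loadBatch/model are minimal stand-ins
// for the real torch.js objects, just to make the pattern concrete.
const makeTensor = () => ({ destroyed: false, destroy() { this.destroyed = true; } });
const loadBatch = (i) => makeTensor();
const model = (batch) => makeTensor();

const created = []; // bookkeeping for the demo only
const batchCount = 3;
for (let i = 0; i < batchCount; i++) {
  const batch = loadBatch(i);
  created.push(batch);
  let result;
  try {
    result = model(batch);
    created.push(result);
    if (i === 1) throw new Error("simulated processing failure");
  } catch (e) {
    // swallowed for the demo; real code would handle or rethrow
  } finally {
    batch.destroy();               // always freed
    if (result) result.destroy();  // freed when it was created
  }
}
console.log(created.every((t) => t.destroyed)); // true
```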
// Cleanup temporary tensors
const intermediate = torch.randn([1000, 1000]);
const processed = torch.relu(intermediate);
intermediate.destroy(); // Free intermediate immediately
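Because destroy() is safe to call multiple times, cleanup can be centralized in a small scope helper instead of being scattered through the loop body. The withTensors helper below is hypothetical (not part of the library), and FakeTensor is a stand-in tensor so the sketch runs on its own:

```javascript
// Hypothetical scope helper (not part of the library): every tensor
// registered via `track` is destroyed when the scope exits, even on error.
class FakeTensor {
  constructor() { this.destroyed = false; }
  destroy() { this.destroyed = true; } // idempotent, like Tensor.destroy()
}

function withTensors(fn) {
  const tracked = [];
  const track = (t) => { tracked.push(t); return t; };
  try {
    return fn(track);
  } finally {
    // destroy() handles repeated calls, so no double-free concerns here
    for (const t of tracked) t.destroy();
  }
}

const survivors = withTensors((track) => {
  const a = track(new FakeTensor()); // temporary, freed on scope exit
  const b = track(new FakeTensor());
  return [a, b];
});
console.log(survivors.every((t) => t.destroyed)); // true
```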
// Cleanup gradients after processing
for (const param of model.parameters()) {
  if (param.grad) {
    param.grad.destroy();
    param.grad = null;
  }
}
See Also
- No direct PyTorch equivalent (torch.js specific; PyTorch relies on automatic garbage collection)
- clone - Create independent tensor copy
- detach - Detach from computation graph