torch.webgpu.reset_peak_memory_stats
function reset_peak_memory_stats(): void

Resets the peak memory statistics counter.
The peak memory statistic tracks the maximum GPU memory allocated since the last reset. Calling this function resets the peak counter to the current memory usage, so you can measure the memory consumption of a specific code section in isolation. This makes it essential for profiling and for understanding which operations use the most memory. Useful for:
- Profiling memory usage of specific operations or functions
- Comparing memory efficiency of different implementations
- Measuring peak memory for individual layers or modules
- Isolating memory impact of new features during development
- Creating reproducible benchmarks
Notes
- Peak resets to current usage: after a reset, peak_bytes reflects memory used from that point forward
- Affects peak_bytes only: Does not affect active_bytes or pooled_bytes
- Use with memory_stats(): Typically paired with memory_stats() to get profiling results
- For profiling only: This doesn't free memory, only resets the measurement counter
- Reset before measuring: peak_bytes covers everything since the last reset, so measurements of a section are meaningless without a reset immediately before it
- Peak includes pooled memory: peak_bytes includes both active and pooled allocations
- Call at major boundaries: Reset between significant phases, not per-operation (too noisy)
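The reset-to-current-usage behavior above can be sketched with a minimal counter. This is an illustrative mock of the semantics, not the library's implementation:

```javascript
// Illustrative mock of peak-counter semantics (not the library's code).
class MemStats {
  constructor() {
    this.active = 0; // bytes currently allocated
    this.peak = 0;   // high-water mark since the last reset
  }
  alloc(bytes) {
    this.active += bytes;
    this.peak = Math.max(this.peak, this.active);
  }
  free(bytes) {
    this.active -= bytes;
  }
  resetPeak() {
    this.peak = this.active; // resets to current usage, not zero
  }
}

const stats = new MemStats();
stats.alloc(100); // peak = 100
stats.free(40);   // active = 60, peak still 100
stats.resetPeak();
console.log(stats.peak); // 60 — live allocations still count after a reset
```

Note that `resetPeak()` lowers the counter only to what is still allocated; memory held by live tensors is never excluded from subsequent peaks.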
Returns
void
Examples
// Measure memory usage for a specific operation
torch.webgpu.reset_peak_memory_stats();
const result = model.forward(data);
const stats = torch.webgpu.memory_stats();
console.log(`Peak for this operation: ${stats.peak_bytes / 1024 / 1024} MB`);
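The reset/read pattern above can be factored into a small helper. This is a hypothetical utility, not part of the library; the reset and stats functions are injected (e.g. torch.webgpu.reset_peak_memory_stats and torch.webgpu.memory_stats) so the sketch stays self-contained:

```javascript
// Hypothetical helper, not a library API: measures the peak memory of fn()
// by resetting the peak counter before the call and reading it after.
function measurePeak(fn, { resetPeak, getStats }) {
  resetPeak(); // peak now equals current usage
  const result = fn();
  const { peak_bytes } = getStats();
  return { result, peak_bytes };
}

// Wiring against a toy stats object to show the mechanics:
let active = 0, peak = 0;
const toy = {
  resetPeak: () => { peak = active; },
  getStats: () => ({ peak_bytes: peak }),
};
const allocate = (bytes) => { active += bytes; peak = Math.max(peak, active); };

allocate(50); // simulate pre-existing allocations
const { peak_bytes } = measurePeak(() => {
  allocate(30); // temporary workspace
  active -= 30; // freed before returning
  return "done";
}, toy);
console.log(peak_bytes); // 80 — includes the transient 30-byte workspace
```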
// Profile memory for individual layers
for (const layer of model.layers) {
  torch.webgpu.reset_peak_memory_stats();
  const output = layer.forward(input);
  const { peak_bytes } = torch.webgpu.memory_stats();
  console.log(`${layer.name}: ${peak_bytes / 1024 / 1024} MB`);
}
// Compare implementations
// Test implementation A
torch.webgpu.reset_peak_memory_stats();
const resultA = implementationA(data);
const peakA = torch.webgpu.memory_stats().peak_bytes;
// Test implementation B
torch.webgpu.reset_peak_memory_stats();
const resultB = implementationB(data);
const peakB = torch.webgpu.memory_stats().peak_bytes;
console.log(`A: ${peakA / 1024 / 1024} MB, B: ${peakB / 1024 / 1024} MB`);
See Also
- PyTorch torch.cuda.reset_peak_memory_stats()
- memory_stats - Get current memory statistics after reset
- memory_summary - Get formatted memory summary
- empty_cache - Free pooled memory for cleaner profiling