torch.scatter_add_
function scatter_add_<S extends Shape, D extends number>(input: Tensor<S>, dim: D, index: Tensor, src: Tensor): Tensor<S>

In-place version of scatter_add(): adds src values at the specified indices directly into input.
Modifies the input tensor in-place by accumulating src values at the positions specified by index. Same semantics as scatter_add(), but avoids allocating a new tensor. Useful for:
- Memory efficiency: in-place accumulation for large tensors
- Iterative updates: repeatedly accumulating values into the same buffer
- Gradient efficiency: direct parameter updates
- Real-time processing: streaming data accumulation

Behavior to keep in mind:
- In-place modification: the input tensor is modified directly
- Returns input: the same tensor object is returned, enabling chaining
- Memory efficient: no additional tensor is allocated for the output
- Gradient safe: gradients can still flow through in-place ops
- Overwrites input: the original values in input are modified
- Side effects: any other reference to input sees the changes
- Accumulation: values written to the same index are summed
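The duplicate-index accumulation rule can be sketched in plain TypeScript. This is a hypothetical, self-contained helper over number arrays (dim 0, 2-D only), not this library's API:

```typescript
// Minimal sketch of scatter_add_ semantics along dim 0 for 2-D arrays.
// scatterAdd2dDim0 is a hypothetical helper for illustration only.
function scatterAdd2dDim0(
  input: number[][],
  index: number[][],
  src: number[][]
): number[][] {
  for (let i = 0; i < index.length; i++) {
    for (let j = 0; j < index[i].length; j++) {
      // input[index[i][j]][j] += src[i][j]; duplicate targets accumulate
      input[index[i][j]][j] += src[i][j];
    }
  }
  return input; // same object, returned for chaining
}

const acc = [[0, 0], [0, 0]];
// Both rows of index point column 0 at row 0 of acc, so those values sum.
const ret = scatterAdd2dDim0(acc, [[0, 0], [0, 1]], [[1, 2], [3, 4]]);
// acc is now [[4, 2], [0, 4]], and ret === acc
```

Note that acc is mutated in place: ret is the very same array object, which is what "returns input" means for the tensor version.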
Parameters
input: Tensor<S> - The input tensor (modified in-place)
dim: D - The dimension along which to index
index: Tensor - The indices tensor specifying where to add
src: Tensor - The source tensor with values to add
Returns
Tensor<S> – The modified input tensor (same object)

Examples
// Accumulate in batches for memory efficiency
const accumulator = torch.zeros(1000, 64);
for (let i = 0; i < num_batches; i++) {
const batch = get_batch(i);
const indices = get_indices(i);
torch.scatter_add_(accumulator, 0, indices, batch); // In-place
}
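A common 1-D use of in-place scatter-add is counting occurrences (a bincount). A self-contained sketch, assuming plain arrays rather than this library's tensors (scatterAdd1d is a hypothetical helper):

```typescript
// Hypothetical 1-D in-place scatter-add, used here as a bincount.
function scatterAdd1d(input: number[], index: number[], src: number[]): number[] {
  for (let i = 0; i < index.length; i++) {
    input[index[i]] += src[i]; // duplicate indices accumulate
  }
  return input;
}

const counts = new Array(4).fill(0);
const labels = [0, 2, 2, 3, 0, 2];
// Add 1 at each label position; repeated labels sum into the same slot.
scatterAdd1d(counts, labels, labels.map(() => 1));
// counts is now [2, 0, 3, 1]
```

Because the accumulator is updated in place, the same buffer can absorb label streams batch after batch without reallocating.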
// Gradient-based parameter update
const parameters = model.parameters[0]; // Shape [100, 50]
const grad_indices = torch.tensor([[10], [20], [30]]);
const gradients = torch.randn(3, 50).mul(learning_rate);
torch.scatter_add_(parameters, 0, grad_indices, gradients.neg()); // In-place update

See Also
- PyTorch torch.Tensor.scatter_add_()
- scatter_add - Non-in-place version
- scatter_reduce_ - In-place scatter with custom reduction
- index_add - Simpler 1D version