torch.Tensor.negative_
Tensor.negative_(): this
In-place element-wise negation.
Negates each element of the tensor in-place. Equivalent to computing -x for each element. Modifies the tensor directly without allocating new memory.
Use Cases:
- Reverse gradients in backpropagation
- Flip sign of model parameters
- Implement loss inversions
- Fast element-wise sign flipping
- In-place operation: Modifies the tensor directly and returns this.
- PyTorch alias: This is an alias for neg_() to match PyTorch naming.
- No allocation: More memory-efficient than creating a new tensor.
Returns
this – This tensor, modified in-place
Examples
const x = torch.tensor([1, -2, 3, -4]);
x.negative_(); // In-place: [1, -2, 3, -4] → [-1, 2, -3, 4]
// Reverse loss gradients
loss.backward();
for (const param of model.parameters()) {
param.grad?.negative_(); // Flip gradient signs
}
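Negation also changes forward-pass outputs in a predictable way: softmax over negated logits reverses the probability ranking. A minimal sketch in plain TypeScript (softmax here is a hypothetical standalone helper, not this library's API):

```typescript
// Hypothetical plain-TypeScript softmax, showing that negating logits
// reverses the ranking of the resulting probabilities.
function softmax(logits: number[]): number[] {
  const m = Math.max(...logits); // subtract the max for numerical stability
  const exps = logits.map((v) => Math.exp(v - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

const logits = [2, 0, -1];
const p = softmax(logits);                // largest probability at index 0
const q = softmax(logits.map((v) => -v)); // largest probability at index 2
```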
// Negate logits before softmax to invert the probability ranking
const logits = torch.randn([5, 10]);
logits.negative_();
const negated_probs = torch.softmax(logits, 1);
See Also
- PyTorch torch.Tensor.negative_()
- neg - Out-of-place version that returns a new tensor
- neg_ - Canonical method name (negative_ is alias)
- abs - Absolute value (magnitude without sign)