torch.Tensor.reciprocal_
Tensor.reciprocal_(): this
In-place reciprocal (multiplicative inverse).
Computes the reciprocal (multiplicative inverse) of each element in place, element-wise: y = 1/x. The input should not contain zeros: in floating point, 1/0 does not raise an error but yields ±Infinity. Essential for:
- Inversion operations in linear algebra
- Normalization and standardization
- Probability computations (Bayes' rule)
- Learning rate scaling and gradient normalization
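The in-place contract can be sketched with plain JavaScript numbers (no tensor library; `reciprocalInPlace` is a hypothetical helper that mirrors the documented behavior):

```javascript
// Sketch of the in-place semantics: every element is replaced by 1/x,
// and the SAME array is returned (mirroring the `this` return value).
function reciprocalInPlace(values) {
  for (let i = 0; i < values.length; i++) {
    values[i] = 1 / values[i];
  }
  return values; // same reference, not a copy
}

const data = [2, 4, 8];
const result = reciprocalInPlace(data);
console.log(result);          // [0.5, 0.25, 0.125]
console.log(result === data); // true: the original array was mutated
```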
Use Cases:
- Inverse normalization (1/std in layer norm)
- Gradient scaling for adaptive optimizers
- Inverse matrix operations
- Probability ratio computations
Notes:
- Zero handling: the reciprocal of ±0 is ±Infinity (not NaN); division by zero does not raise an error
- Numerical stability: values very close to zero produce very large reciprocals and can overflow to Infinity
- Gradient: d(1/x)/dx = -1/x², which is negative for all x ≠ 0
- In-place: modifies the tensor directly and returns it
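The zero-handling and gradient notes above can be checked with plain JavaScript numbers (IEEE-754 doubles; float32 tensors behave the same way for these cases):

```javascript
// Division by zero yields signed infinities, never NaN.
console.log(1 / 0);                // Infinity
console.log(1 / -0);               // -Infinity (the sign of zero is kept)

// Very small magnitudes overflow to Infinity under reciprocal.
console.log(1 / Number.MIN_VALUE); // Infinity

// Gradient check: d(1/x)/dx = -1/x^2, verified by a central difference.
const x = 3, h = 1e-6;
const numeric = (1 / (x + h) - 1 / (x - h)) / (2 * h);
const analytic = -1 / (x * x);
console.log(Math.abs(numeric - analytic) < 1e-6); // true
```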
Returns
this – this tensor, modified in place
Examples
// Invert scale factor
const scale = torch.tensor([2.0, 4.0, 8.0]);
scale.reciprocal_(); // [0.5, 0.25, 0.125]
// Gradient normalization
const gradients = torch.randn([100, 100]);
const norms = gradients.norm(2, 1, true); // per-row L2 norms, keepdim
gradients.div_(norms); // equivalently: gradients.mul_(norms.reciprocal_())
// Adaptive learning rates
const variance = torch.tensor([0.01, 0.1, 1.0]);
const learningRates = variance.clone().reciprocal_(); // 1/variance

See Also
- PyTorch tensor.reciprocal_()
- reciprocal - Non-inplace version
- div_ - Element-wise division
- mul_ - Element-wise multiplication
- rsqrt_ - Reciprocal square root (1/√x)