torch.Tensor.square_
Tensor.square_(): this
In-place square.
Computes the square of each element in-place. Element-wise: y = x². Useful for:
- Squaring inputs for quadratic loss functions
- Computing variance and MSE (mean squared error)
- Squared Euclidean distances
- Emphasizing larger differences in loss functions
Use Cases:
- MSE loss computation (most common loss function)
- L2 regularization and norm calculations
- Computing variance (E[x²] - E[x]²)
- Squared error terms in optimization
Notes:
- Always non-negative: Result is always ≥ 0.
- Smooth derivative: Gradient is 2x, defined everywhere.
- In-place: Modifies the tensor directly, which is more memory efficient.
- Inverse: sqrt_() is the inverse for non-negative values (up to floating-point rounding).
- Amplifies values: Magnitudes above 1 grow rapidly (1→1, 2→4, 10→100).
- Loss scaling: Squaring errors penalizes large errors more heavily.
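The variance identity above (E[x²] - E[x]²) can be checked with plain JavaScript arrays; this sketch uses no tensor library and assumes nothing about the Tensor API:

```javascript
// Plain-array check of the variance identity Var(x) = E[x²] - E[x]².
const xs = [1, -2, 3, -4];
const mean = (a) => a.reduce((s, v) => s + v, 0) / a.length;

const meanOfSquares = mean(xs.map((v) => v * v)); // E[x²] = 7.5
const squareOfMean = mean(xs) ** 2;               // E[x]² = 0.25
const variance = meanOfSquares - squareOfMean;    // 7.25

// Direct definition for comparison: E[(x - E[x])²]
const mu = mean(xs);
const varianceDirect = mean(xs.map((v) => (v - mu) ** 2)); // also 7.25

console.log(variance, varianceDirect);
```

Both forms agree; the identity form needs only one pass over the squared values, which is why square (or square_) shows up in streaming variance computations.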
Returns
this – This tensor, modified in-place.
Examples
const x = torch.tensor([1, -2, 3, -4]);
x.square_(); // [1, 4, 9, 16]
// MSE loss - squared differences
const predictions = torch.tensor([0.9, 2.1, 2.8]);
const targets = torch.tensor([1, 2, 3]);
const mse = predictions.sub(targets).square_().mean();
// L2 norm - square root of the sum of squares
const gradient = torch.tensor([-0.1, 0.05, -0.02]);
const magnitude = gradient.clone().square_().sum().sqrt();
See Also
- PyTorch tensor.square_()
- square - Out-of-place version
- sqrt_ - Inverse operation (for non-negative values)
- pow - General power function for flexible exponents
- mul - Element-wise multiplication (x * x for squaring)
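As a plain-JavaScript sketch of the relationships listed above (no tensor library assumed): squaring is equivalent to multiplying a value by itself or raising it to the power 2, and the square root recovers the magnitude but not the sign.

```javascript
// Squaring expressed three ways, plus its partial inverse.
const values = [1, 2, 10, -3];
const squared = values.map((v) => v * v);    // like mul(x, x): [1, 4, 100, 9]
const viaPow = values.map((v) => v ** 2);    // like pow(x, 2): same result
const roots = squared.map(Math.sqrt);        // sqrt loses the sign: -3 becomes 3

console.log(squared, viaPow, roots);
```

This is why sqrt_ is only an inverse of square_ for non-negative inputs: the sign information is destroyed by squaring.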