torch.linalg.vector_norm
function vector_norm<S extends Shape, D extends DType, Dev extends DeviceType>(input: Tensor<S, D, Dev>, options?: VectorNormOptions): Tensor<DynamicShape, D, Dev>
function vector_norm<S extends Shape, D extends DType, Dev extends DeviceType>(input: Tensor<S, D, Dev>, ord: number | string, dim: number | number[], keepdim: boolean, options?: VectorNormOptions): Tensor<DynamicShape, D, Dev>
Computes a vector norm (e.g., L2, L1, Frobenius norms).
Measures the "magnitude" or "length" of a vector in various ways. Essential for:
- Computing distances between points
- Regularization in machine learning (L1/L2 penalties)
- Gradient clipping in neural networks
- Normalizing data for algorithms
- Convergence criteria in iterative methods
- Signal processing and feature scaling
Supports all standard vector norms:
- Euclidean (L2): ||x||₂ = √(Σ x²) - root sum of squares
- Manhattan (L1): ||x||₁ = Σ |x| - sum of absolute values
- Infinity (L∞): ||x||∞ = max|x| - maximum absolute value
- Zero norm (L0): count of non-zero elements
- Lp norm: ||x||ₚ = (Σ |x|ᵖ)^(1/p) for p ≥ 1 (for 0 < p < 1 the same formula is computed, though it is only a quasi-norm)
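The formulas above can be sketched with a plain-array helper (an illustrative stand-in, not the library's implementation; the name `vectorNorm` is hypothetical):

```typescript
// Illustrative plain-array version of the norm formulas above.
// ord = 2 (default): Euclidean; 1: Manhattan; Infinity: max-abs; 0: non-zero count.
function vectorNorm(xs: number[], ord: number = 2): number {
  if (ord === Infinity) return Math.max(...xs.map(Math.abs));
  if (ord === 0) return xs.filter((v) => v !== 0).length;
  // General Lp: (Σ |x|^p)^(1/p)
  const sum = xs.reduce((acc, v) => acc + Math.pow(Math.abs(v), ord), 0);
  return Math.pow(sum, 1 / ord);
}

console.log(vectorNorm([3, 4]));                // 5 (L2)
console.log(vectorNorm([3, -4, 5], 1));         // 12 (L1)
console.log(vectorNorm([1, -10, 2], Infinity)); // 10 (L∞)
console.log(vectorNorm([0, 1, 0, 2, 0], 0));    // 2 (L0)
```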
- Default is L2: Most common; use for Euclidean distance
- Reduced output: reduced dimensions are removed from the result; reducing over all dimensions yields a scalar
- Broadcasting: keepdim: true keeps reduced dimensions as size 1, so the result broadcasts against the input
- Numerical stability: Implemented robustly to avoid overflow/underflow
- Non-negative: All norms are ≥ 0; equals 0 only for zero vector
- Homogeneity: ||c*x|| = |c| * ||x|| for scalar c
- Triangle inequality: ||x+y|| ≤ ||x|| + ||y||
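One common way to get the numerical stability noted above is to factor out the largest magnitude before squaring, so intermediate terms stay in range. This is a sketch of that scaling trick under the assumption that the implementation uses something like it (not the library's actual code):

```typescript
// Overflow-safe L2 norm: ||x|| = m * sqrt(Σ (x/m)²) where m = max|x|.
// Each (x/m)² is at most 1, so the sum never overflows.
function stableL2(xs: number[]): number {
  const m = Math.max(...xs.map(Math.abs));
  if (m === 0) return 0; // zero vector: norm is 0
  const sum = xs.reduce((acc, v) => acc + (v / m) ** 2, 0);
  return m * Math.sqrt(sum);
}

const big = [3e200, 4e200];
console.log(big.reduce((a, v) => a + v * v, 0)); // Infinity (naive sum of squares overflows)
console.log(stableL2(big));                      // ~5e200 (scaled version survives)
```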
Parameters
input: Tensor<S, D, Dev> - Input tensor of any shape
options: VectorNormOptions (optional) - Optional settings (e.g., ord, dim, keepdim) for the norm computation
Returns
Tensor<DynamicShape, D, Dev> - Norm as a scalar or a tensor with reduced dimensions
Examples
// L2 (Euclidean) norm - default
const x = torch.tensor([3.0, 4.0]);
torch.linalg.vector_norm(x); // 5.0 (√(3² + 4²))
// L1 (Manhattan) norm
const x = torch.tensor([3.0, -4.0, 5.0]);
torch.linalg.vector_norm(x, { ord: 1 }); // 12.0 (|3| + |-4| + |5|)
// L∞ (max absolute) norm
const x = torch.tensor([1.0, -10.0, 2.0]);
torch.linalg.vector_norm(x, { ord: Infinity }); // 10.0 (max absolute value)
// Norm along specific dimension (batched)
const x = torch.tensor([[3.0, 4.0], [5.0, 12.0]]); // [2, 2]
torch.linalg.vector_norm(x, { ord: 2, dim: 1 }); // [5.0, 13.0] - L2 norm per row
// Keep dimensions for broadcasting
const x = torch.tensor([[1.0, 2.0], [3.0, 4.0]]);
const norms = torch.linalg.vector_norm(x, { ord: 2, dim: 1, keepdim: true }); // [[√5], [5]]
const normalized = x.div(norms); // Normalize each row
// Gradient clipping (common in training)
const grad = torch.randn(100, 50);
const grad_norm = torch.linalg.vector_norm(grad);
const max_norm = 1.0;
const scale = torch.min(torch.tensor(1.0), torch.tensor(max_norm).div(grad_norm.add(1e-8)));
const clipped_grad = grad.mul(scale);
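The clipping recipe above can also be written as a plain-array sketch (the helper name `clipByGlobalNorm` is illustrative, not part of the library):

```typescript
// Clip a gradient vector so its global L2 norm is at most maxNorm.
// If the norm is already below maxNorm, the gradient is left (nearly) unchanged.
function clipByGlobalNorm(grad: number[], maxNorm: number, eps = 1e-8): number[] {
  const norm = Math.sqrt(grad.reduce((a, v) => a + v * v, 0));
  const scale = Math.min(1, maxNorm / (norm + eps)); // eps guards against division by zero
  return grad.map((v) => v * scale);
}

const g = [3, 4];                          // norm 5
const clipped = clipByGlobalNorm(g, 1.0);  // scaled by ~1/5, new norm ~1
```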
// L0 norm (sparsity measure)
const x = torch.tensor([0.0, 1.0, 0.0, 2.0, 0.0]);
torch.linalg.vector_norm(x, { ord: 0 }); // 2.0 (two non-zero elements)
See Also
- PyTorch torch.linalg.vector_norm()
- matrix_norm - Norms for matrices (Frobenius, spectral, etc.)
- normalize - Vector normalization (divide by norm)
- matmul - Often combined with vector norms for scaling