torch.linalg.matrix_norm
function matrix_norm<S extends Shape, D extends DType, Dev extends DeviceType>(input: Tensor<S, D, Dev>, options?: MatrixNormOptions): Tensor<DynamicShape, D, Dev>
function matrix_norm<S extends Shape, D extends DType, Dev extends DeviceType>(input: Tensor<S, D, Dev>, ord: number | string, dim: [number, number], keepdim: boolean, options?: MatrixNormOptions): Tensor<DynamicShape, D, Dev>
Computes a matrix norm (Frobenius, spectral, nuclear, and others).
Measures the "size" of a matrix in various ways. Essential for:
- Regularization in neural networks (weight penalties)
- Stability analysis in numerical methods
- Matrix conditioning and ill-conditioning detection
- Convergence monitoring in iterative algorithms
- Tensor analysis and multi-dimensional data
- Checking matrix stability (eigenvalue bounds via norms)
Common matrix norms:
- Frobenius norm (default): ||A||_F = √(Σ_{ij} A_{ij}²) - extends the vector L2 norm to matrices
- Spectral norm (L2): ||A||₂ = largest singular value σ₁(A)
- L1 norm: ||A||₁ = max_j Σ_i |A_{ij}| (largest column sum)
- L∞ norm: ||A||∞ = max_i Σ_j |A_{ij}| (largest row sum)
- Nuclear norm: ||A||* = Σ σᵢ(A) (sum of all singular values, trace norm)
- Frobenius is the default: the most common norm; treats the matrix like a flattened vector
- Spectral norm is expensive: requires SVD computation; use only when necessary and prefer Frobenius for speed
- Scalar result: returns a scalar, or a batch of scalars for batched input
- Non-negative: all norms are ≥ 0; equal to 0 only for the zero matrix
- Submultiplicativity: ||AB|| ≤ ||A|| ||B||
- Norm ordering: ||A||₂ ≤ ||A||_F ≤ √rank(A) · ||A||₂
- Dual norms: L1 and L∞ are dual to each other; the Frobenius norm is self-dual
- Last two dimensions: dim must specify exactly 2 dimensions for matrix norms
- SVD cost: Nuclear norm ('nuc') requires SVD; O(n³) complexity
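The entrywise definitions above can be checked directly. The sketch below computes the Frobenius, L1 (max column sum), and L∞ (max row sum) norms on plain arrays and verifies submultiplicativity numerically; the helper names are illustrative and not part of the library.

```typescript
// Hand-rolled matrix norms on plain number[][] arrays, mirroring the
// definitions above. (Illustrative helpers, not the library API.)
type Mat = number[][];

// ||A||_F = sqrt(sum of squared entries)
const frobenius = (A: Mat): number =>
  Math.sqrt(A.flat().reduce((s, x) => s + x * x, 0));

// ||A||_1 = largest absolute column sum
const maxColSum = (A: Mat): number =>
  Math.max(...A[0].map((_, j) => A.reduce((s, row) => s + Math.abs(row[j]), 0)));

// ||A||_inf = largest absolute row sum
const maxRowSum = (A: Mat): number =>
  Math.max(...A.map(row => row.reduce((s, x) => s + Math.abs(x), 0)));

const matmul = (A: Mat, B: Mat): Mat =>
  A.map(row => B[0].map((_, j) => row.reduce((s, a, k) => s + a * B[k][j], 0)));

const A: Mat = [[1, 2], [3, 4]];
console.log(maxColSum(A)); // 6  (columns sum to 4 and 6)
console.log(maxRowSum(A)); // 7  (rows sum to 3 and 7)
// Submultiplicativity: ||A·A||_F <= ||A||_F * ||A||_F
console.log(frobenius(matmul(A, A)) <= frobenius(A) * frobenius(A)); // true
```

These agree with the L1 and L∞ examples further down the page.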
Parameters
input: Tensor<S, D, Dev> - Input tensor of shape (..., m, n) or higher dimensional
options: MatrixNormOptions (optional) - Optional settings for the matrix norm computation
Returns
Tensor<DynamicShape, D, Dev> - The norm as a scalar, or a tensor with reduced dimensions
Examples
// Frobenius norm (default)
const A = torch.tensor([[3.0, 0.0], [0.0, 4.0]]);
torch.linalg.matrix_norm(A); // 5.0 (√(3² + 4²))
// Spectral norm (largest singular value)
const A = torch.tensor([[1.0, 2.0], [3.0, 4.0]]);
torch.linalg.matrix_norm(A, { ord: 2 }); // Largest singular value
// L1 norm (max column sum)
const A = torch.tensor([[1.0, 2.0], [3.0, 4.0]]);
torch.linalg.matrix_norm(A, { ord: 1 }); // max(|1|+|3|, |2|+|4|) = 6.0
// L∞ norm (max row sum)
const A = torch.tensor([[1.0, 2.0], [3.0, 4.0]]);
torch.linalg.matrix_norm(A, { ord: Infinity }); // max(|1|+|2|, |3|+|4|) = 7.0
// Weight regularization in neural networks
const W = torch.randn(100, 50); // Weight matrix
const frobenius_norm = torch.linalg.matrix_norm(W); // For L2 regularization
const loss = computeLoss() + 0.001 * frobenius_norm.pow(2); // L2 penalty (computeLoss is a placeholder)
// Batched matrix norms
const A_batch = torch.randn(32, 5, 5); // 32 matrices
const norms = torch.linalg.matrix_norm(A_batch); // [32] - Frobenius norm for each
// Keep dimensions for broadcasting
const A = torch.randn(2, 3, 4, 5);
const norms = torch.linalg.matrix_norm(A, { ord: 'fro', dim: [-2, -1], keepdim: true }); // [2, 3, 1, 1]
// Check matrix stability
const A = torch.randn(10, 10);
const spectral_norm = torch.linalg.matrix_norm(A, { ord: 2 });
console.log('Largest singular value:', spectral_norm); // upper-bounds every |λᵢ(A)|
See Also
- PyTorch torch.linalg.matrix_norm()
- vector_norm - Norms for vectors (L1, L2, L∞)
- svd - Singular value decomposition (used for spectral/nuclear norms)
- cond - Condition number (uses matrix norms)