torch.linalg.svdvals
function svdvals<S extends Shape, D extends DType, Dev extends DeviceType>(A: Tensor<S, D, Dev>): Tensor<DynamicShape, D, Dev>

Computes only the singular values of a matrix (without U or Vh).
More efficient than full SVD when only singular values are needed. Essential for:
- Computing matrix rank (count non-negligible singular values)
- Assessing numerical rank and effective dimensionality
- Computing matrix norms (Frobenius, spectral)
- Condition number estimation κ(A) = σ_max / σ_min
- Low-rank approximation truncation decisions
- Image compression and data dimensionality analysis
- Ill-conditioning assessment
Singular values σ are non-negative, ordered in descending order, and encode the "strength" of each principal direction of the matrix. Small singular values indicate near-rank-deficiency.
Relationship to svd():
- svd(): Returns U, S, Vh (O(n³) time, but gives full decomposition)
- svdvals(): Returns only S (still O(n³) asymptotically, but smaller constants)
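The equivalence of the two paths can be sanity-checked. A minimal sketch using NumPy as a stand-in for the torch-style API above (`np.linalg.svd` with `compute_uv=False` plays the role of `svdvals` here):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))

# Full decomposition vs. values-only path yield identical singular values.
_, S_full, _ = np.linalg.svd(A)
S_only = np.linalg.svd(A, compute_uv=False)

assert np.allclose(S_full, S_only)
assert np.all(S_only[:-1] >= S_only[1:])  # descending order
```

The values-only path skips the accumulation of U and Vh, which is where the constant-factor savings come from.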
Singular values vs eigenvalues:
- Singular values σ(A): Always exist, always non-negative, for any A (even rectangular)
- Eigenvalues λ(A): Exist only for square A, can be complex
- Relationship: σᵢ(A) = √λᵢ(A^T A)
Key properties:
- Always non-negative: σ ≥ 0 for all singular values
- Descending order: sorted from largest to smallest
- Rectangular matrices OK: works for any m × n, not just square
- More efficient than svd(): avoids computing U and Vh
- GPU accelerated: uses the Jacobi method for efficient GPU computation
- Batching supported: works with batched input matrices
- Rank invariant: the number of non-zero singular values equals the matrix rank
Numerical caveats:
- Numerical rank is approximate: distinguish mathematically zero from numerically negligible values
- Threshold choice matters: different tolerances give different rank estimates
- Ill-conditioned matrices: the smallest singular values can be unreliable
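The relationship σᵢ(A) = √λᵢ(A^T A) above can be checked numerically. An illustrative NumPy sketch (NumPy is used as a runnable stand-in for the torch-style API):

```python
import numpy as np

# Rectangular matrix: singular values exist even though eigenvalues of A do not.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 4.0]])

sigma = np.linalg.svd(A, compute_uv=False)   # descending singular values
lam = np.linalg.eigvalsh(A.T @ A)            # eigenvalues of A^T A, ascending
# Clip tiny negatives from rounding, sort descending, keep the top min(m, n).
sigma_from_lam = np.sqrt(np.clip(lam, 0.0, None))[::-1][:min(A.shape)]

assert np.allclose(sigma, sigma_from_lam)
```

Note that A^T A is symmetric positive semi-definite, which is why its eigenvalues are real and non-negative, matching σ ≥ 0.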
Parameters
A: Tensor<S, D, Dev> – Input matrix (m × n) or batch (..., m, n)
Returns
Tensor<DynamicShape, D, Dev> – Singular value vector, shape [k] or [..., k] where k = min(m, n), always in descending order (largest first)
Examples
// Compute singular values only (faster than full SVD)
const A = torch.tensor([[1.0, 0.0], [0.0, 2.0]]);
const S = torch.linalg.svdvals(A);
// S ≈ [2, 1] (in descending order)
// Compute matrix rank (count non-negligible singular values)
const A = torch.randn(100, 50); // Likely full rank 50
const S = torch.linalg.svdvals(A);
const tol = 1e-10 * S[0]; // Threshold relative to largest singular value
const rank = S.gt(tol).sum(); // Count singular values > tolerance
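As noted under the numerical caveats, the tolerance chosen here directly determines the rank estimate. A hedged NumPy sketch (stand-in for the torch-style API) that constructs a matrix with singular values spanning many decades and shows two tolerances reporting different ranks:

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a 50x40 matrix with singular values 1, 1e-3, 1e-6, 1e-9, 1e-12.
U, _ = np.linalg.qr(rng.standard_normal((50, 5)))
V, _ = np.linalg.qr(rng.standard_normal((40, 5)))
d = np.array([1.0, 1e-3, 1e-6, 1e-9, 1e-12])
A = (U * d) @ V.T

S = np.linalg.svd(A, compute_uv=False)
for tol in (1e-7, 1e-13):
    rank = int((S > tol * S[0]).sum())  # count values above relative threshold
    print(f"relative tol {tol:g}: numerical rank = {rank}")
# Reports rank 3 for the looser tolerance and rank 5 for the tighter one.
```

Both answers are defensible; which one is "right" depends on the noise floor of the application.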
// Assess effective dimensionality (intrinsic data dimension)
const X = torch.randn(1000, 100); // 1000 samples, 100 features
const S = torch.linalg.svdvals(X); // Singular values of the data matrix (identical for X.T)
// S encodes how many "effective directions" exist in data
// If many singular values are near zero, data has lower intrinsic dimension
// Condition number (numerical stability indicator)
const A = torch.randn(10, 10);
const S = torch.linalg.svdvals(A);
const cond = S[0].div(S[-1]); // κ(A) = σ_max / σ_min
console.log('Condition number:', cond.item());
// Low-rank approximation (keep only top k singular values)
const A = torch.randn(1000, 500);
const S = torch.linalg.svdvals(A);
const k = 50; // Keep top 50 singular values
const info_retained = S.narrow(0, 0, k).sum().div(S.sum()); // Fraction of singular-value mass (use S.pow(2) for energy)
console.log('Info retained with rank-50 approximation:', info_retained.item() * 100, '%');
// Image compression via singular values
const image = torch.randn(256, 256); // Grayscale image
const S = torch.linalg.svdvals(image);
// Singular values decay rapidly; a few values suffice for good compression
const energy = S.pow(2); // "Energy" is conventionally the sum of squared singular values
const cumsum = energy.cumsum(0);
const target_energy = 0.99;
const k = cumsum.lt(target_energy * cumsum[-1]).sum() + 1;
console.log(`Can compress to rank-${k.item()} for 99% energy`);
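The matrix norms listed at the top of this page can be read directly off the singular values. An illustrative NumPy check (stand-in for the torch-style API):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((8, 5))
S = np.linalg.svd(A, compute_uv=False)

# Spectral norm = largest singular value.
assert np.isclose(np.linalg.norm(A, 2), S[0])
# Frobenius norm = sqrt of the sum of squared singular values.
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt((S**2).sum()))
# Nuclear norm = sum of singular values.
assert np.isclose(np.linalg.norm(A, 'nuc'), S.sum())
```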
// Batched singular values
const A_batch = torch.randn(32, 50, 30); // 32 matrices, 50×30 each
const S_batch = torch.linalg.svdvals(A_batch);
// S_batch shape: [32, 30] (30 = min(50, 30))
See Also
- PyTorch torch.linalg.svdvals()
- svd - Full singular value decomposition with U and Vh
- matrix_rank - Matrix rank (uses SVD to count singular values)
- cond - Condition number (uses singular values)
- matrix_norm - Matrix norms (computed from singular values)
- pinv - Pseudoinverse (uses SVD with singular value thresholding)