torch.linalg.eigvalsh
function eigvalsh<S extends Shape, D extends DType, Dev extends DeviceType>(A: Tensor<S, D, Dev>): Tensor<DynamicShape, D, Dev>
function eigvalsh<S extends Shape, D extends DType, Dev extends DeviceType>(A: Tensor<S, D, Dev>, options: EighOptions): Tensor<DynamicShape, D, Dev>
Computes only the eigenvalues of a symmetric/Hermitian matrix (without eigenvectors).
More efficient than eigh() when only eigenvalues are needed. Essential for:
- Positive/negative definite testing (all λ > 0 means positive-definite)
- Condition number computation κ(A) = λ_max / λ_min
- Spectral radius and stability analysis
- Convexity analysis via Hessian eigenvalues
- Principal component importance (variances) from covariance matrices
- Numerical rank detection
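For stability analysis, the relevant quantity is the spectral radius, max |λ|. A minimal plain-TypeScript helper over an eigenvalue array (e.g., values read off from eigvalsh's output) illustrates the idea; the helper name is ours, not part of the library:

```typescript
// Spectral radius of a symmetric matrix: the largest eigenvalue magnitude.
// Operates on a plain number array for illustration (not a library function).
const spectralRadius = (eigenvalues: number[]): number =>
  Math.max(...eigenvalues.map(Math.abs));

// For eigenvalues [-3, 0.5, 2] the spectral radius is 3.
```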
Assumes the input is symmetric (A = A^T) or Hermitian (A = A^H); only the upper or lower triangle is read. Always returns real eigenvalues, which the spectral theorem guarantees for symmetric/Hermitian matrices. Numerically stable and efficient via the Jacobi method.
Symmetric/Hermitian input required:
- For symmetric real matrices: A = A^T
- For Hermitian complex matrices: A = A^H (conjugate transpose)
- Input matrix should be exactly symmetric within numerical precision
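A quick tolerance-based symmetry check can catch bad inputs before calling eigvalsh. This sketch works on a plain nested array rather than a tensor; the function name and tolerance are illustrative assumptions:

```typescript
// Check that a square matrix satisfies A[i][j] ≈ A[j][i] within a tolerance.
// Plain-TypeScript sketch, independent of the tensor API.
function isSymmetric(A: number[][], tol = 1e-8): boolean {
  const n = A.length;
  for (const row of A) if (row.length !== n) return false; // must be square
  for (let i = 0; i < n; i++)
    for (let j = i + 1; j < n; j++)
      if (Math.abs(A[i][j] - A[j][i]) > tol) return false;
  return true;
}
```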
Comparison to alternatives:
- eigh(): Returns both eigenvalues and eigenvectors; slightly more expensive
- eigvals(): For general (non-symmetric) matrices; less stable
- svd(): Singular values (different from eigenvalues; more general but slower)
Key properties:
- Symmetric input required: Input must satisfy A = A^T (within tolerance)
- Always real eigenvalues: Unlike eigvals for general matrices
- Usually ordered: Typically ascending (smallest to largest)
- UPLO parameter: Specify which triangle to use; other is ignored
- More efficient than eigh(): Saves computation by skipping eigenvectors
- Square matrix required: Input must be n × n
- GPU accelerated: Uses Jacobi method for efficient GPU computation
- Robust method: Jacobi method is numerically very stable
- Symmetry required: Results undefined if matrix is not symmetric
- Numerical precision: Small asymmetries due to rounding may affect results slightly
- Ties in eigenvalues: If eigenvalues are very close, ill-conditioning may occur
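The Jacobi method mentioned above repeatedly applies plane rotations that zero one off-diagonal entry at a time, driving the matrix toward diagonal form. A minimal cyclic-Jacobi sketch in plain TypeScript shows the idea; it is illustrative only, not the library's actual kernel:

```typescript
// Cyclic Jacobi eigenvalue iteration for a small symmetric matrix.
// Each rotation zeroes M[p][q]; after enough sweeps the diagonal holds the
// eigenvalues. Returned ascending, matching eigvalsh's usual order.
function jacobiEigenvalues(A: number[][], sweeps = 50): number[] {
  const n = A.length;
  const M = A.map(row => row.slice()); // work on a copy
  for (let s = 0; s < sweeps; s++) {
    for (let p = 0; p < n - 1; p++) {
      for (let q = p + 1; q < n; q++) {
        if (Math.abs(M[p][q]) < 1e-14) continue;
        // Rotation angle that annihilates M[p][q]: tan(2θ) = 2M[p][q] / (M[q][q] - M[p][p])
        const theta = 0.5 * Math.atan2(2 * M[p][q], M[q][q] - M[p][p]);
        const c = Math.cos(theta), t = Math.sin(theta);
        for (let k = 0; k < n; k++) { // M ← M·J (update columns p, q)
          const mkp = M[k][p], mkq = M[k][q];
          M[k][p] = c * mkp - t * mkq;
          M[k][q] = t * mkp + c * mkq;
        }
        for (let k = 0; k < n; k++) { // M ← Jᵀ·M (update rows p, q)
          const mpk = M[p][k], mqk = M[q][k];
          M[p][k] = c * mpk - t * mqk;
          M[q][k] = t * mpk + c * mqk;
        }
      }
    }
  }
  return M.map((row, i) => row[i]).sort((a, b) => a - b);
}
```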
Parameters
A: Tensor<S, D, Dev> – Symmetric/Hermitian matrix (n × n) or batch (..., n, n)
Returns
Tensor<DynamicShape, D, Dev> – Eigenvalue vector, shape [n] or [..., n], always real and usually sorted ascending
Examples
// Compute eigenvalues only (faster than eigh)
const A = torch.tensor([[4.0, 1.0], [1.0, 3.0]]); // Symmetric
const eigenvalues = torch.linalg.eigvalsh(A);
// eigenvalues ≈ [2.382, 4.618] (usually sorted ascending)
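For a 2×2 symmetric matrix [[a, b], [b, d]] the eigenvalues have a closed form, which makes a handy sanity check on results like the one above. Plain TypeScript, no tensor library needed:

```typescript
// Eigenvalues of [[a, b], [b, d]]: (a+d)/2 ± sqrt(((a-d)/2)² + b²).
// Returned ascending, matching eigvalsh's usual order.
function eig2x2(a: number, b: number, d: number): [number, number] {
  const mean = (a + d) / 2;
  const r = Math.sqrt(((a - d) / 2) ** 2 + b * b);
  return [mean - r, mean + r];
}

// eig2x2(4, 1, 3) → ≈ [2.3820, 4.6180]
```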
// Test for positive definiteness
const A = torch.randn(5, 5);
const A_sym = A.add(A.T).mul(0.5); // Symmetrize
const eigenvalues = torch.linalg.eigvalsh(A_sym);
const is_pd = eigenvalues.min().gt(0); // All λ > 0?
const is_psd = eigenvalues.min().ge(0); // All λ ≥ 0?
console.log(is_pd.item() ? 'Positive definite' : 'Not positive definite');
// Compute condition number
const A = torch.tensor([[10.0, 1.0], [1.0, 1.0]]); // Mildly ill-conditioned
const eigenvalues = torch.linalg.eigvalsh(A);
const cond = eigenvalues.max().div(eigenvalues.min());
console.log('Condition number:', cond.item());
// Principal component analysis (covariance eigenvalues = variances)
const X = torch.randn(1000, 10); // 1000 samples, 10 features
const mean = X.mean(0);
const X_centered = X.sub(mean);
const Cov = X_centered.T.matmul(X_centered).div(1000); // Divide by N (use N - 1 for the unbiased estimate)
const variances = torch.linalg.eigvalsh(Cov); // Eigenvalues = variances
const variance_explained = variances.div(variances.sum());
console.log('Explained variance:', variance_explained);
// Stability of covariance matrix (should be positive semi-definite)
const X = torch.randn(50, 10);
const empirical_cov = X.T.matmul(X); // Gram matrix: PSD up to rounding error
const eigs = torch.linalg.eigvalsh(empirical_cov.add(empirical_cov.T).mul(0.5));
const is_numerical_psd = eigs.min().ge(-1e-10); // Allow small numerical errors
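Eigenvalue magnitudes also give the numerical rank mentioned earlier: count the eigenvalues whose magnitude exceeds a relative tolerance. This helper works on a plain number array (read off from eigvalsh's output); the name and default tolerance are illustrative assumptions:

```typescript
// Numerical rank from a symmetric matrix's eigenvalues: eigenvalues whose
// magnitude is below rtol · max|λ| are treated as zero.
function numericalRank(eigenvalues: number[], rtol = 1e-10): number {
  const maxAbs = Math.max(...eigenvalues.map(Math.abs));
  if (maxAbs === 0) return 0;
  return eigenvalues.filter(l => Math.abs(l) > rtol * maxAbs).length;
}

// numericalRank([5, 1e-14, 2]) → 2 (the 1e-14 eigenvalue counts as zero)
```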
// Batched eigenvalues
const A_batch = torch.randn(32, 10, 10);
const A_batch_sym = A_batch.add(A_batch.transpose(-2, -1)).mul(0.5);
const eigenvalues = torch.linalg.eigvalsh(A_batch_sym);
// eigenvalues shape: [32, 10]
// Using upper triangle only
const U_matrix = torch.randn(5, 5).triu();
const eigs = torch.linalg.eigvalsh(U_matrix, { UPLO: 'U' });
// Uses only the upper triangle; the lower triangle is ignored
See Also
- PyTorch torch.linalg.eigvalsh()
- eigh - Symmetric eigendecomposition with eigenvectors
- eigvals - Eigenvalues of general (non-symmetric) matrices
- svd - Singular values (different concept, for any rectangular matrix)
- cond - Condition number (uses eigenvalues for symmetric matrices)
- matrix_rank - Matrix rank via eigenvalue count