torch.linalg.matrix_rank
function matrix_rank<S extends Shape, D extends DType, Dev extends DeviceType>(A: Tensor<S, D, Dev>, options?: MatrixRankOptions): Tensor<DynamicShape, 'int32', Dev>
function matrix_rank<S extends Shape, D extends DType, Dev extends DeviceType>(A: Tensor<S, D, Dev>, atol: number, rtol: number, hermitian: boolean, options?: MatrixRankOptions): Tensor<DynamicShape, 'int32', Dev>
Computes the numerical rank of a matrix (the number of linearly independent rows/columns).
Rank measures the dimensionality of the data that can be expressed by the matrix. Essential for:
- Detecting linearly dependent rows/columns
- Assessing whether a system of equations is solvable
- Determining effective dimensionality of data
- Rank-deficiency detection in least squares problems
- Deciding low-rank approximation strategies
- Numerical stability assessment
Rank interpretation:
- rank(A) = 0: Zero matrix (no information)
- rank(A) = min(m, n): Full rank (no linear dependencies)
- rank(A) < min(m, n): Rank-deficient (linear dependencies exist)
Numerical rank definition: Count of singular values σᵢ that satisfy σᵢ > threshold:
- threshold = max(atol, rtol × σ_max)
- If σᵢ > threshold: counts as non-zero
- If σᵢ ≤ threshold: counts as negligible
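The thresholding rule above can be sketched in plain TypeScript over precomputed singular values (numericalRank is a hypothetical helper for illustration, not part of this API):

```typescript
// Count singular values above max(atol, rtol * sigma_max).
// This mirrors the numerical-rank definition; in practice the
// singular values would come from torch.linalg.svdvals.
function numericalRank(singularValues: number[], atol: number, rtol: number): number {
  // sigma_max drives the relative part of the threshold
  const sigmaMax = Math.max(...singularValues, 0);
  const threshold = Math.max(atol, rtol * sigmaMax);
  // Singular values strictly above the threshold count as non-zero
  return singularValues.filter((s) => s > threshold).length;
}

// Singular values of a nearly rank-2 3x3 matrix
console.log(numericalRank([5.0, 1.2, 1e-14], 0, 1e-10)); // 2: the 1e-14 value is negligible
console.log(numericalRank([5.0, 1.2, 1e-14], 0, 0));     // 3: with zero tolerance everything counts
```

Note that with atol = 0 and rtol = 0 the count degenerates to the number of exactly non-zero singular values, which is rarely what you want in floating point.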
Why numerical rank?
- In floating-point arithmetic, exact rank is ill-defined: rounding errors turn mathematically zero singular values into tiny non-zero ones
- Must distinguish "mathematically zero" from "numerically negligible"
- Threshold parameters (atol, rtol) control the distinction
- Numerical vs mathematical rank: Numerical rank depends on tolerance; mathematical rank is exact
- Default tolerances: atol = 0 and rtol = max(m, n) × ε, where ε is the machine epsilon of A's dtype (the PyTorch convention); sensible for most applications
- atol vs rtol: atol sets an absolute floor on singular values; rtol scales the cutoff with the largest singular value σ_max
- Rectangular matrices OK: Works for any m × n, not just square
- Hermitian fast path: Use hermitian=true for symmetric/Hermitian matrices (computes eigenvalues via eigvalsh instead of a full SVD, which is faster)
- SVD-based: Uses singular values for robust rank computation
- Batching supported: Works with batched input matrices
- Tolerance sensitivity: Results depend on atol and rtol; choose carefully
- Ill-conditioned matrices: Small singular values can be unreliable; use larger atol
- Near-rank-deficiency: Matrices with many small singular values are numerically tricky
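The default-tolerance note above can be made concrete. Assuming this binding follows PyTorch's rule rtol = max(m, n) × ε (defaultRtol and the hard-coded float32 epsilon are illustrative, not part of the API):

```typescript
// Machine epsilon for IEEE 754 single precision (float32)
const FLOAT32_EPS = 1.1920929e-7;

// Sketch of the default relative tolerance for an m x n float32 matrix,
// following the PyTorch convention rtol = max(m, n) * eps.
function defaultRtol(m: number, n: number): number {
  return Math.max(m, n) * FLOAT32_EPS;
}

// For a 100 x 50 float32 matrix the default rtol is about 1.19e-5,
// so singular values below roughly 1.19e-5 * sigma_max are discarded.
console.log(defaultRtol(100, 50));
```

If your matrix is ill-conditioned or your data is noisy, pass an explicit atol/rtol instead of relying on this default.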
Parameters
A: Tensor<S, D, Dev> - Input matrix (m × n) or batch (..., m, n)
options: MatrixRankOptions - optional
Returns
Tensor<DynamicShape, 'int32', Dev> - Rank as a scalar integer tensor, or a batch of integers for batched input
Examples
// Full rank matrix (no linear dependencies)
const A = torch.tensor([
[1.0, 2.0],
[3.0, 4.0]
]);
const rank = torch.linalg.matrix_rank(A); // 2 (full rank)
// Rank-deficient matrix (dependent rows)
const B = torch.tensor([
[1.0, 2.0],
[2.0, 4.0] // Second row = 2 × first row
]);
const rank = torch.linalg.matrix_rank(B); // 1 (rank-deficient)
// Tall matrix with full column rank
const A = torch.randn(100, 50); // 100 rows, 50 columns
const rank = torch.linalg.matrix_rank(A); // Should be 50 (full column rank)
// Detecting when least squares is well-posed
const A = torch.randn(100, 20); // 100 equations, 20 unknowns
const b = torch.randn(100);
const rank_A = torch.linalg.matrix_rank(A);
if (rank_A.item() === 20) {
console.log('Full rank; least squares has unique minimum-norm solution');
} else {
console.log('Rank-deficient; least squares solution not unique');
}
// Effective dimensionality of data matrix
const X = torch.randn(1000, 100); // 1000 samples, 100 features
const rank_X = torch.linalg.matrix_rank(X);
console.log(`Data has effective dimensionality ${rank_X.item()}`);
// If rank < 100, data lies in lower-dimensional subspace
// Controlling tolerance for rank estimation
const A = torch.randn(10, 10);
const rank_strict = torch.linalg.matrix_rank(A, 1e-12, 0); // Very strict
const rank_loose = torch.linalg.matrix_rank(A, 1e-6, 0); // More lenient
// A smaller atol counts more singular values as non-zero, so rank_strict >= rank_loose
// Using symmetric matrix fast path
const S = torch.randn(5, 5);
const S_sym = S.add(S.T).mul(0.5); // Symmetrize
const rank = torch.linalg.matrix_rank(S_sym, undefined, undefined, true);
// true = hermitian: uses eigvalsh instead of SVD (faster)
// Batched rank computation
const A_batch = torch.randn(32, 50, 30); // 32 matrices
const ranks = torch.linalg.matrix_rank(A_batch);
// ranks shape: [32] - rank for each matrix
See Also
- PyTorch torch.linalg.matrix_rank()
- svdvals - Singular values (used internally for rank computation)
- svd - Full singular value decomposition (to inspect singular values)
- matrix_norm - Matrix norms (related to conditioning)
- cond - Condition number (relates to near-rank-deficiency)
- pinv - Pseudoinverse (uses rank-aware thresholding)