torch.Tensor.pinverse
Tensor.pinverse(): Tensor<DynamicShape, D, Dev>

Computes the Moore-Penrose pseudoinverse (generalized inverse).
Calculates the pseudoinverse of a matrix, a generalization of the matrix inverse to non-square and rank-deficient matrices. It is defined as A† = V S† U†, where A = U S V† is the SVD and S† is formed by transposing S and inverting its non-zero singular values. Essential for:
- Solving underdetermined systems: Finding minimum-norm solutions to Ax = b
- Solving overdetermined systems: Finding least-squares solutions when no exact solution exists
- Rank-deficient matrices: Handling matrices with linearly dependent rows/columns
- Generalized inverse: Computing inverse for non-square matrices
- Regression: Computing optimal solutions in linear regression with collinear features
For full-rank square matrices, the pseudoinverse equals the matrix inverse. Works for any matrix shape (m×n) and any rank.
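The SVD-based definition above can be verified directly. The following sketch uses NumPy (not this library's API) to build the pseudoinverse from the SVD by hand and compare it against `np.linalg.pinv`; the cutoff formula is one common choice, not necessarily the exact threshold this library uses.

```python
import numpy as np

# Illustration: construct A† = V S† U^T from the SVD A = U S V^T,
# where S† inverts only the singular values above a numerical cutoff.
A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # 3x2, rank 1

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = max(A.shape) * np.finfo(A.dtype).eps * s.max()  # a typical cutoff choice
safe_s = np.where(s > tol, s, 1.0)                    # avoid division by zero
s_inv = np.where(s > tol, 1.0 / safe_s, 0.0)          # invert only non-zero singular values
A_pinv_manual = Vt.T @ np.diag(s_inv) @ U.T           # shape (2, 3)

# Matches the built-in routine
assert np.allclose(A_pinv_manual, np.linalg.pinv(A))
```

Note the output shape (2, 3) for a (3, 2) input, matching the (n, m) rule below.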
- SVD-based: Computed via Singular Value Decomposition (SVD). For large matrices, SVD can be expensive.
- Singular value cutoff: Small singular values (below threshold) are treated as zero to avoid numerical instability. This threshold typically relates to machine epsilon.
- Output shape: For input shape (m, n), the output shape is (n, m). Unlike the matrix inverse, the pseudoinverse is always defined.
- Minimum norm: For underdetermined systems, pseudoinverse gives minimum Euclidean norm solution. This is the unique solution with smallest ||x||.
- Least squares: For overdetermined systems, pseudoinverse gives least-squares solution. Minimizes ||Ax - b||² over all x.
- Numerical stability: For matrices with very small singular values, results may be affected by numerical errors. Consider regularization for ill-conditioned matrices.
- Computational cost: O(min(m,n)² max(m,n)) complexity due to SVD computation. For large matrices, this can be slow.
- Not a true inverse: For rank-deficient matrices, A.matmul(A.pinverse()) is not the identity; it is an orthogonal projector onto the column space of A. The pseudoinverse is a "best effort" generalization.
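The minimum-norm, least-squares, and projector properties listed above can be checked numerically. This NumPy sketch (again, not this library's API) demonstrates each one:

```python
import numpy as np

# Underdetermined 1x3 system: pinv gives the minimum-norm solution to Ax = b.
A = np.array([[1.0, 2.0, 3.0]])
b = np.array([[5.0]])
x = np.linalg.pinv(A) @ b           # shape (3, 1)
assert np.allclose(A @ x, b)        # an exact solution exists here

# Overdetermined full-column-rank system: pinv minimizes ||Ax - b||^2,
# matching the normal-equations solution (A^T A)^{-1} A^T b.
A2 = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b2 = np.array([[1.0], [2.0], [4.0]])
x_lsq = np.linalg.pinv(A2) @ b2
x_ne = np.linalg.solve(A2.T @ A2, A2.T @ b2)
assert np.allclose(x_lsq, x_ne)

# Rank-deficient: A @ pinv(A) is a projector, not the identity.
A3 = np.array([[1.0, 2.0], [2.0, 4.0]])  # rank 1
P = A3 @ np.linalg.pinv(A3)
assert not np.allclose(P, np.eye(2))
assert np.allclose(P @ P, P)             # idempotent, as projectors are
```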
Returns

Tensor<DynamicShape, D, Dev> – Tensor of shape (n, m) containing the Moore-Penrose pseudoinverse

Examples
// Pseudoinverse of full-rank square matrix (acts like inverse)
const A = torch.tensor([[1, 2], [3, 4]]);
const A_pinv = A.pinverse();
const identity = A.matmul(A_pinv); // Approximately I

// Solve underdetermined system (infinite solutions, find minimum norm)
const A = torch.tensor([[1, 2, 3]]); // 1x3 matrix (underdetermined)
const b = torch.tensor([[5]]);
const A_pinv = A.pinverse();
const x_minnorm = A_pinv.matmul(b); // Minimum norm solution to Ax = b

// Solve overdetermined system (no exact solution, find least-squares)
const A = torch.tensor([[1, 1], [1, 2], [1, 3]]); // 3x2 matrix (overdetermined, full column rank)
const b = torch.tensor([[1], [2], [3]]);
const A_pinv = A.pinverse();
const x_lsq = A_pinv.matmul(b); // Least-squares solution

// Rank-deficient matrix (singular/noninvertible)
const A = torch.tensor([[1, 2], [2, 4], [3, 6]]); // rank 1 (rows proportional)
const A_pinv = A.pinverse(); // Still works, handles rank deficiency

// Linear regression with collinear features
const X = torch.randn([100, 10]); // Data matrix (may have collinearity)
const y = torch.randn([100, 1]); // Target
const X_pinv = X.pinverse();
const beta = X_pinv.matmul(y); // Regression coefficients (handles collinearity)

See Also
- PyTorch tensor.pinverse()
- pinv - Functional form of pseudoinverse
- solve - For solving square full-rank systems
- lstsq - Least-squares solution with more control
- svd - Singular value decomposition (used internally)