torch.Tensor.cholesky
Tensor.cholesky(options?: CholeskyOptions): Tensor<DynamicShape, D, Dev>
Tensor.cholesky(upper: boolean, options?: CholeskyOptions): Tensor<DynamicShape, D, Dev>
Computes the Cholesky decomposition of a symmetric positive-definite matrix.
Decomposes a symmetric positive-definite (SPD) matrix A into A = L L^T or A = U^T U, where L is lower triangular and U is upper triangular. Much faster than general LU decomposition and more stable for SPD matrices. Essential for:
- Solving linear systems (via forward/backward substitution)
- Computing log-determinants (sum of log diagonal)
- Sampling from Gaussian distributions (standard form: sample N(0,I), multiply by L)
- Numerical stability in optimization and inference
The decomposition is unique for positive-definite matrices. Input must be symmetric (self == self.T) and all eigenvalues must be positive.
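The uniqueness and reconstruction properties above can be sanity-checked with NumPy's analogous `np.linalg.cholesky` (used here as a stand-in for this library's `Tensor.cholesky`; it likewise returns the lower triangular factor):

```python
import numpy as np

# Symmetric positive-definite matrix (same values as the example below)
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)  # lower triangular factor

# The factor reconstructs A: L @ L.T == A (up to floating-point error)
assert np.allclose(L @ L.T, A)
# All diagonal entries of L are strictly positive
assert np.all(np.diag(L) > 0)
```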
Use Cases:
- Gaussian likelihood computation (more stable than full matrix inverse)
- Sampling from multivariate normal distributions
- Solving structured linear systems efficiently
- Computing matrix square roots and logarithms
- Variational inference with Gaussian posteriors
- Kalman filters and Gaussian processes
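The first two use cases combine naturally: a Gaussian log-density needs both the log-determinant (twice the sum of the log diagonal of L) and a quadratic form that a triangular solve computes without forming the inverse. A minimal NumPy sketch (here `np.linalg.solve` stands in for a dedicated triangular solve):

```python
import numpy as np

def gaussian_logpdf(x, mu, Sigma):
    """Log-density of N(mu, Sigma) at x, via the Cholesky factor of Sigma."""
    n = len(mu)
    L = np.linalg.cholesky(Sigma)              # Sigma = L @ L.T
    logdet = 2.0 * np.sum(np.log(np.diag(L)))  # log det(Sigma)
    y = np.linalg.solve(L, x - mu)             # solve L y = x - mu
    quad = y @ y                               # (x-mu)^T Sigma^-1 (x-mu)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

# Check against the closed form for an isotropic Gaussian N(0, 4I):
x = np.zeros(3)
Sigma = 4.0 * np.eye(3)
expected = -0.5 * (3 * np.log(2 * np.pi) + 3 * np.log(4.0))
assert np.isclose(gaussian_logpdf(x, np.zeros(3), Sigma), expected)
```

Because the quadratic form goes through a solve against L rather than an explicit inverse, this stays stable for covariance matrices where `inv(Sigma)` would lose precision.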
Properties:
- Symmetry required: Input must be symmetric (A == A.T)
- Positive-definite: All eigenvalues must be strictly positive
- Efficiency: O(n³/3) operations, 2-3x faster than LU
- Stability: More numerically stable than LU for well-conditioned matrices
- Uniqueness: The Cholesky factor is unique for positive-definite matrices
- Diagonal positivity: All diagonal elements of L are positive
Pitfalls:
- Non-symmetric input: Will fail or give incorrect results
- Non-positive-definite: Will fail (all eigenvalues must be > 0)
- Singular matrices: Zero eigenvalues cause failure
- Numerical issues: Very ill-conditioned matrices may have stability problems
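The non-positive-definite failure mode is easy to observe with NumPy's equivalent routine, which rejects such input by raising `LinAlgError` (this library's exact error type may differ):

```python
import numpy as np

# Symmetric but indefinite: eigenvalues are +1 and -1
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

try:
    np.linalg.cholesky(B)
except np.linalg.LinAlgError as e:
    print("Cholesky failed:", e)  # non-positive-definite input is rejected
```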
Parameters
upper: boolean (optional) – If true, return the upper triangular factor U (A = U^T U) instead of L
options: CholeskyOptions (optional)
Returns
Tensor<DynamicShape, D, Dev> – Cholesky factor: lower triangular L (default) or upper triangular U
Examples
// Basic Cholesky decomposition
const A = torch.tensor([[4, 2], [2, 3]]); // Symmetric positive-definite
const L = A.cholesky(); // Lower triangular factor
// Verify: L @ L.T ≈ A
// Solve a linear system using Cholesky
const A = torch.eye(3).mul(4); // Diagonal positive-definite matrix
const b = torch.ones(3);
const L = A.cholesky(); // Factor: A = L L^T
const y = torch.linalg.solve_triangular(L, b, { upper: false }); // Solve L y = b
const x = torch.linalg.solve_triangular(L.T, y, { upper: true }); // Solve L^T x = y
// Sample from multivariate normal N(mu, Sigma)
const mu = torch.zeros(5);
const Sigma = torch.eye(5); // Covariance matrix
const L = Sigma.cholesky();
const z = torch.randn(5); // Standard normal
const sample = mu.add(L.matmul(z)); // Sample from N(mu, Sigma)
// Compute log determinant efficiently
const L = A.cholesky();
const logdet = L.diagonal().log().sum().mul(2); // log(det(A)) = 2 * sum(log(diag(L)))
// Upper triangular form
const U = A.cholesky(true); // A = U^T U
// Verify decomposition
const A_reconstructed = U.T.matmul(U);
See Also
- PyTorch torch.linalg.cholesky() (the older torch.cholesky() / tensor.cholesky() is deprecated)
- cholesky_inverse - Compute inverse using Cholesky factor
- cholesky_solve - Solve linear system using Cholesky factor
- lu - General LU decomposition (works for non-SPD matrices)
- qr - QR decomposition alternative
- svd - Singular value decomposition (more general but slower)