torch.linalg.lu_factor
function lu_factor<S extends Shape, D extends DType, Dev extends DeviceType>(A: Tensor<S, D, Dev>): { LU: Tensor<S, D, Dev>; pivots: Tensor<DynamicShape, 'int32', Dev> }
function lu_factor<S extends Shape, D extends DType, Dev extends DeviceType>(A: Tensor<S, D, Dev>, options: LuFactorOptions): { LU: Tensor<S, D, Dev>; pivots: Tensor<DynamicShape, 'int32', Dev> }
Computes the LU decomposition with pivoting of a 2D matrix or batch of matrices.
Decomposes a matrix A into a lower triangular matrix (L) and an upper triangular matrix (U) such that A = PLU (with pivoting) or A = LU (without). The result is returned in packed form: L and U share a single matrix, with the strictly lower triangle holding L (its unit diagonal is implicit) and the upper triangle plus diagonal holding U. Essential for:
- Solving linear systems: Decompose once, solve multiple systems efficiently
- Computing determinants: det(A) = det(P) * det(U), since det(L) = 1 and det(U) is the product of its diagonal
- Matrix inversion: Computing A^-1 via the LU factors
- Numerical linear algebra: Foundation for many matrix algorithms
- Batch operations: Decomposing many matrices simultaneously
- Stability analysis: Computing condition numbers and ranks
When pivot=true (default), partial pivoting is used for numerical stability: rows are reordered so that the entry with the largest absolute value in the current column becomes the pivot, which improves accuracy in floating-point arithmetic. Returns both the packed LU matrix and the pivot permutation.
- Packed format: Result combines L and U in single matrix for memory efficiency
- Partial pivoting: Default uses row pivoting for numerical stability
- No in-place modification: The original matrix A is typically left unmodified; the factors are returned in new tensors
- Batch support: Works on batches of matrices via broadcasting
- GPU optimized: WebGPU backend provides accelerated computation for large matrices
- Square matrices only: Input must be square (n × n). Non-square matrices will error
- Numerical stability: Disabling pivoting (pivot=false) can lose accuracy, or fail outright when a leading minor is singular; prefer the default partial pivoting unless the matrix is known to be well conditioned
- No pivoting on GPU: pivot=false is not yet supported on the WebGPU backend. Move the tensor to CPU first if needed
- Singular/near-singular matrices: LU of singular matrices may produce NaN or inf values
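The packed format and pivot bookkeeping described above can be sketched in plain TypeScript. This is a minimal illustration on number[][] arrays, not this library's implementation; luFactor is a hypothetical helper, and the pivot encoding here (0-based row-interchange indices) may differ from the exact encoding of the pivots tensor:

```typescript
// Minimal LU with partial pivoting on plain arrays, for illustration only.
// Returns the packed matrix (strict lower triangle = L multipliers, upper
// triangle and diagonal = U) and 0-based pivot indices: pivots[k] is the
// row that was swapped with row k at elimination step k.
function luFactor(A: number[][]): { LU: number[][]; pivots: number[] } {
  const n = A.length;
  const LU = A.map(row => row.slice()); // work on a copy; input stays unmodified
  const pivots: number[] = [];
  for (let k = 0; k < n; k++) {
    // Partial pivoting: find the largest |entry| in column k, at or below row k.
    let p = k;
    for (let i = k + 1; i < n; i++) {
      if (Math.abs(LU[i][k]) > Math.abs(LU[p][k])) p = i;
    }
    pivots.push(p);
    if (p !== k) [LU[k], LU[p]] = [LU[p], LU[k]];
    // Eliminate below the pivot, storing multipliers in the lower triangle.
    for (let i = k + 1; i < n; i++) {
      LU[i][k] /= LU[k][k];
      for (let j = k + 1; j < n; j++) LU[i][j] -= LU[i][k] * LU[k][j];
    }
  }
  return { LU, pivots };
}

const { LU, pivots } = luFactor([[2, 1], [4, 3]]);
// LU → [[4, 3], [0.5, -0.5]], pivots → [1, 1] (rows 0 and 1 were swapped)
```

With the swap applied, PA = [[4, 3], [2, 1]] factors exactly as L = [[1, 0], [0.5, 1]] times U = [[4, 3], [0, -0.5]].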
Parameters
A: Tensor<S, D, Dev> - A 2D tensor with shape [n, n] or a batch with shape [..., n, n]
Returns
{ LU: Tensor<S, D, Dev>; pivots: Tensor<DynamicShape, 'int32', Dev> } - An object { LU, pivots } where:
- LU: Packed LU matrix (strictly lower triangle is L, upper triangle and diagonal are U); shape matches A
- pivots: Pivot index tensor recording the row interchanges, with shape [..., n]
Examples
// Basic LU decomposition with pivoting
const A = torch.tensor([[4, 3], [6, 3]]);
const { LU, pivots } = torch.linalg.lu_factor(A);
// LU contains packed L and U
// pivots records the row interchanges applied during factorization
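To make the packed layout concrete, here is a hedged sketch on plain arrays that splits a packed LU into explicit L and U; unpack is an illustrative helper, not part of this library:

```typescript
// Unpack a packed LU matrix: L gets the strict lower triangle plus an
// implicit unit diagonal; U gets the upper triangle and the diagonal.
function unpack(LU: number[][]): { L: number[][]; U: number[][] } {
  const L = LU.map((row, i) => row.map((v, j) => (j < i ? v : j === i ? 1 : 0)));
  const U = LU.map((row, i) => row.map((v, j) => (j >= i ? v : 0)));
  return { L, U };
}

// Packed factors of [[2, 1], [4, 3]] after one row swap:
const { L, U } = unpack([[4, 3], [0.5, -0.5]]);
// L → [[1, 0], [0.5, 1]], U → [[4, 3], [0, -0.5]]
```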
// Solving linear system using LU decomposition
const A_mat = torch.randn(4, 4);
const b = torch.randn(4);
const { LU, pivots } = torch.linalg.lu_factor(A_mat);
// Would then use forward/backward substitution to solve
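The forward/backward substitution mentioned above can be sketched on plain arrays. luSolve is an illustrative helper, not this library's API, and it assumes 0-based row-interchange pivot indices, which may differ from this library's pivot encoding:

```typescript
// Solve A x = b from a packed LU factorization and pivot indices.
function luSolve(LU: number[][], pivots: number[], b: number[]): number[] {
  const n = LU.length;
  const x = b.slice();
  // Apply the recorded row interchanges: x <- P b.
  for (let k = 0; k < n; k++) {
    if (pivots[k] !== k) [x[k], x[pivots[k]]] = [x[pivots[k]], x[k]];
  }
  // Forward substitution with unit-diagonal L (strict lower triangle of LU).
  for (let i = 1; i < n; i++) {
    for (let j = 0; j < i; j++) x[i] -= LU[i][j] * x[j];
  }
  // Backward substitution with U (upper triangle and diagonal of LU).
  for (let i = n - 1; i >= 0; i--) {
    for (let j = i + 1; j < n; j++) x[i] -= LU[i][j] * x[j];
    x[i] /= LU[i][i];
  }
  return x;
}

// Packed factors of A = [[2, 1], [4, 3]] (rows 0 and 1 swapped while pivoting):
const x = luSolve([[4, 3], [0.5, -0.5]], [1, 1], [5, 11]);
// x → [2, 1], since 2*2 + 1*1 = 5 and 4*2 + 3*1 = 11
```

Because the factorization is reused, solving additional right-hand sides costs only O(n^2) per solve rather than a fresh O(n^3) factorization.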
// Computing determinant from LU decomposition
const A_3x3 = torch.randn(3, 3);
const { LU, pivots } = torch.linalg.lu_factor(A_3x3);
// det(A) = product of diagonal of U, times sign from pivots
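That determinant formula can be checked with a hedged sketch on plain arrays; detFromLU is an illustrative helper, not this library's API, and it assumes 0-based row-interchange pivot indices:

```typescript
// det(A) = ±(product of U's diagonal); the sign flips once per actual row swap.
function detFromLU(LU: number[][], pivots: number[]): number {
  let det = 1;
  for (let k = 0; k < LU.length; k++) {
    det *= LU[k][k];                  // diagonal entry of U
    if (pivots[k] !== k) det = -det;  // each real interchange flips the sign
  }
  return det;
}

// Packed factors of A = [[2, 1], [4, 3]] (one row swap):
const det = detFromLU([[4, 3], [0.5, -0.5]], [1, 1]);
// det → 2, matching det(A) = 2*3 - 1*4 = 2
```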
// Batch processing multiple matrices
const batch_matrices = torch.randn(10, 5, 5); // 10 matrices of 5x5
const { LU: LU_batch, pivots: pivots_batch } = torch.linalg.lu_factor(batch_matrices);
// LU_batch shape: [10, 5, 5]
// pivots_batch shape: [10, 5]
// LU decomposition without pivoting (less stable but sometimes needed)
const A_no_pivot = torch.tensor([[1, 2], [3, 4]]);
const { LU: LU_np, pivots: pivots_np } = torch.linalg.lu_factor(A_no_pivot, { pivot: false });
See Also
- PyTorch torch.linalg.lu_factor(A, *, pivot=True)
- solve_triangular - Efficiently solve triangular systems from LU decomposition
- lu_factor_ex - Returns additional info tensor (with error status)
- solve - General linear system solver (uses LU internally)
- det - Determinant (can be computed from LU)