torch.linalg.solve_triangular
function solve_triangular<S extends Shape, D extends DType, Dev extends DeviceType>(A: Tensor<S, D, Dev>, B: Tensor<Shape, D, Dev>, options: SolveTriangularOptions): Tensor<DynamicShape, D, Dev>
function solve_triangular<S extends Shape, D extends DType, Dev extends DeviceType>(A: Tensor<S, D, Dev>, B: Tensor<Shape, D, Dev>, upper: boolean, left: boolean, unitriangular: boolean, options?: SolveTriangularOptions): Tensor<DynamicShape, D, Dev>
Solves the matrix equation AX = B where A is triangular.
Efficient O(n²) solver for triangular systems via forward/backward substitution. Critical for:
- Completing LU-based solvers (solve LY = B, then UX = Y)
- Solving in QR-based least squares (solve R X = Q^T B)
- Direct solution when matrix structure is known
- Cholesky-based solvers (solve LL^T X = B)
Much faster than general solvers (O(n²) vs O(n³)) since A's triangular structure is exploited by forward/backward substitution instead of Gaussian elimination.
Handles both forward substitution (lower triangular) and backward substitution (upper triangular).
- Forward substitution: For lower triangular, solves one unknown per row (top to bottom)
- Backward substitution: For upper triangular, solves one unknown per row (bottom to top)
- Unit diagonal: Setting unitriangular=true skips division (faster if diagonal is all 1s)
- Efficiency: O(n²) - much faster than general solvers O(n³)
- Multiple RHS: Can solve many RHS simultaneously (columns of B)
- GPU accelerated: Efficient parallel implementation on GPU
- Numerical stability: Stable forward/backward substitution; ill-conditioning depends on matrix condition number
- Triangular structure required: A must actually be triangular; results undefined if not
- No pivoting: Unlike general solvers, no row exchanges for stability; matrix should be well-scaled
- Diagonal zeros: If diagonal has zeros and unitriangular=false, division by zero occurs
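The forward/backward substitution loops described above can be sketched in plain TypeScript. This is an illustrative sketch over number arrays (the helper names forwardSub/backwardSub are hypothetical), not the library's actual implementation:

```typescript
// Forward substitution: solve L x = b for lower triangular L, top to bottom.
// With unitDiagonal = true the stored diagonal is treated as 1 and the
// per-row division is skipped (the unitriangular option).
function forwardSub(L: number[][], b: number[], unitDiagonal = false): number[] {
  const n = b.length;
  const x = new Array<number>(n).fill(0);
  for (let i = 0; i < n; i++) {
    let s = b[i];
    for (let j = 0; j < i; j++) s -= L[i][j] * x[j]; // subtract known terms
    x[i] = unitDiagonal ? s : s / L[i][i];
  }
  return x;
}

// Backward substitution: solve U x = b for upper triangular U, bottom to top.
function backwardSub(U: number[][], b: number[]): number[] {
  const n = b.length;
  const x = new Array<number>(n).fill(0);
  for (let i = n - 1; i >= 0; i--) {
    let s = b[i];
    for (let j = i + 1; j < n; j++) s -= U[i][j] * x[j];
    x[i] = s / U[i][i];
  }
  return x;
}

// Matches the first worked example below: U = [[2, 1], [0, 3]], b = [7, 9]
console.log(backwardSub([[2, 1], [0, 3]], [7, 9])); // x = [2, 3]
```

Each row performs at most n multiply-subtracts and one division, which is where the O(n²) total comes from.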
Parameters
- A: Tensor<S, D, Dev> – Triangular coefficient matrix (n × n) or batch (..., n, n)
- B: Tensor<Shape, D, Dev> – Right-hand side matrix (n × k) or vector (n), or a matching batch
- options: SolveTriangularOptions – Solution options:
  - upper: Whether A is upper triangular (true) or lower triangular (false)
  - left: Solve AX = B (true, default) vs XA = B (false, not yet implemented)
  - unitriangular: Whether A has a unit diagonal (all 1s; default: false)
Returns
Tensor<DynamicShape, D, Dev> – Solution matrix X with the same shape as B
Examples
// Solve upper triangular system
const A = torch.tensor([[2.0, 1.0], [0.0, 3.0]]); // Upper triangular
const B = torch.tensor([[7.0], [9.0]]);
const X = torch.linalg.solve_triangular(A, B, { upper: true });
// X ≈ [[2], [3]]
// Solve lower triangular system
const L = torch.tensor([[2.0, 0.0], [1.0, 3.0]]); // Lower triangular
const B = torch.tensor([[4.0], [15.0]]);
const Y = torch.linalg.solve_triangular(L, B, { upper: false });
// Y ≈ [[2], [4.33...]]
// Matrix equation (multiple RHS)
const U = torch.randn(5, 5).triu(); // Random upper triangular
const B = torch.randn(5, 3); // 3 RHS
const X = torch.linalg.solve_triangular(U, B, { upper: true });
// X: [5, 3] - each column solves UX[:,i] = B[:,i]
// Unit diagonal (faster, no division needed on diagonal)
const L_unit = torch.randn(4, 4).tril(); // Lower triangular; diagonal is ignored and treated as all 1s when unitriangular: true
const B = torch.randn(4);
const X = torch.linalg.solve_triangular(L_unit, B, { upper: false, unitriangular: true });
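A Cholesky-based solve (listed among the use cases above) follows the same two-stage pattern as the LU example below. This is a sketch that assumes this library exposes torch.linalg.cholesky along with matmul and transpose tensor methods with PyTorch-like semantics; those calls are not verified against this library's actual API:

```typescript
// Solve Ax = b for symmetric positive-definite A via A = L·Lᵀ
const M = torch.randn(6, 6);
const A_spd = M.matmul(M.transpose(-2, -1)).add(torch.eye(6)); // make A SPD (assumed methods)
const b = torch.randn(6, 1);
const Lc = torch.linalg.cholesky(A_spd); // lower triangular factor (hypothetical call)
const y = torch.linalg.solve_triangular(Lc, b, { upper: false }); // forward: L y = b
const x = torch.linalg.solve_triangular(Lc.transpose(-2, -1), y, { upper: true }); // backward: Lᵀ x = y
```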
// Solving after LU decomposition
const A = torch.randn(10, 10);
const b = torch.randn(10);
const [LU, pivots] = torch.linalg.lu_factor(A);
// Caution: pivots record the row exchanges from partial pivoting; for a
// correct solution, b must be permuted accordingly before the forward solve
// (omitted here for brevity)
const L = LU.tril(-1).add(torch.eye(10));
const U = LU.triu();
const y = torch.linalg.solve_triangular(L, b, { upper: false, unitriangular: true });
const x = torch.linalg.solve_triangular(U, y, { upper: true });
// x is the solution to Ax = b
See Also
- PyTorch torch.linalg.solve_triangular()
- solve - General linear solver (handles non-triangular matrices)
- lu_factor - LU decomposition (produces triangular L and U)
- cholesky - Cholesky decomposition (produces triangular L for positive-definite)
- qr - QR decomposition (produces triangular R)