torch.logcumsumexp
Returns the logarithm of the cumulative summation of the exponentiation of elements.
Computes out[i] = log(∑_{j ≤ i} exp(input[j])) along a dimension in a numerically stable way. This matters when working with probabilities and likelihoods in the log domain, where exponentiating directly would overflow. Uses the log-sum-exp trick internally.
Commonly used for:
- Computing log-probabilities in probabilistic models
- Softmax and cross-entropy loss calculations
- Numerical stability in likelihood computations
- Gaussian mixture models and sequence models
Notes
- Internally applies the numerically stable log-sum-exp trick to each prefix: max(x) + log(∑ exp(x - max(x)))
- Gradients flow through the stable formulation, so the backward pass avoids overflow as well
- Outputs stay in the log domain: out[i] = log(∑_{j ≤ i} exp(input[j]))
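As a sketch of the notes above (plain JavaScript for illustration; the library's own implementation may differ), a 1-D cumulative log-sum-exp with a running max looks like:

```javascript
// Illustrative 1-D cumulative log-sum-exp: for each prefix,
// out[i] = log(exp(out[i-1]) + exp(x[i])), computed via the max trick.
function logcumsumexp1d(x) {
  const out = new Array(x.length);
  let acc = -Infinity; // log of an empty sum
  for (let i = 0; i < x.length; i++) {
    const m = Math.max(acc, x[i]);
    // Subtracting the running max keeps every exp() argument <= 0,
    // so neither exponential can overflow.
    acc = m + Math.log(Math.exp(acc - m) + Math.exp(x[i] - m));
    out[i] = acc;
  }
  return out;
}

console.log(logcumsumexp1d([1.0, 2.0, 3.0])); // first element is 1.0
console.log(logcumsumexp1d([700.0, 710.0, 720.0])); // stays finite
```

The same recurrence is what makes the large-value example below work where a naive `Math.log(sum of Math.exp(...))` would return Infinity.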
Parameters
input Tensor – The input tensor (typically log-probabilities or other log-domain values)
dim number – The dimension along which to accumulate (0 to rank-1)
Returns
Tensor – Tensor of the same shape as input, containing the cumulative log(∑ exp(input)) along dim
Examples
// Basic usage - log cumulative sum of exponentials
const x = torch.tensor([1.0, 2.0, 3.0]);
torch.logcumsumexp(x, 0); // [log(e^1), log(e^1 + e^2), log(e^1 + e^2 + e^3)]
// Numerical stability for large values (prevents overflow)
const large = torch.tensor([700.0, 710.0, 720.0]);
torch.logcumsumexp(large, 0); // Works fine, direct exp() would overflow
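To see concretely why the stable form matters, here is a plain-JavaScript comparison (an illustration independent of the library) of naive versus max-shifted evaluation on values like these:

```javascript
// Naive evaluation: Math.exp(710) overflows an IEEE-754 double to Infinity,
// so the entire log-sum is lost.
const naive = Math.log(Math.exp(700) + Math.exp(710)); // Infinity

// Max-shifted evaluation: subtracting the max keeps every exp() argument <= 0.
const m = 710;
const stable = m + Math.log(Math.exp(700 - m) + Math.exp(710 - m)); // ~710.0000454

console.log(naive, stable);
```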
// Cumulative log-sums over class logits
const logits = torch.randn(32, 10); // [batch, classes]
const cum = torch.logcumsumexp(logits, 1); // [32, 10] - the last column equals logsumexp over classes
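The log-domain example below relies on adding values without ever leaving the log domain. A minimal plain-JavaScript sketch of that scalar operation (the helper name `logaddexp` is illustrative, not part of this library):

```javascript
// Scalar log-domain addition: logaddexp(a, b) = log(e^a + e^b),
// computed via the max trick so neither exponential can overflow.
function logaddexp(a, b) {
  const m = Math.max(a, b);
  if (m === -Infinity) return -Infinity; // both inputs are log(0)
  return m + Math.log1p(Math.exp(-Math.abs(a - b)));
}

console.log(logaddexp(1.0, 1.5)); // log(e^1.0 + e^1.5)
```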
// In log-domain computations
const log_a = torch.tensor([1.0, 2.0, 3.0]);
const log_b = torch.tensor([1.5, 2.5, 3.5]);
const log_sum = torch.logcumsumexp(torch.stack([log_a, log_b], 0), 0); // second row holds log(a + b) elementwise
See Also
- PyTorch torch.logcumsumexp()
- cumsum - Regular cumulative sum without exponential
- logsumexp - Reduces over the entire dimension at once, rather than cumulatively
- log_softmax - Softmax in log domain for probabilities