torch.distributions.logits_to_probs
function logits_to_probs(logits: Tensor, options?: LogitsToProbsOptions): Tensor
function logits_to_probs(logits: Tensor, is_binary: boolean, options?: LogitsToProbsOptions): Tensor

Converts logits (log-odds) to probabilities.
Transforms logit values into probability space using sigmoid for binary or softmax for categorical distributions. This is the inverse of probs_to_logits. Useful for:
- Probability extraction: Converting model logits to probabilities
- Distribution creation: Converting logits to probability parameters for distributions
- Inference: Getting probability estimates from neural network outputs
- Numerical stability: Sigmoid/softmax provide numerically stable conversions
- Probability interpretation: Making logits human-interpretable as probabilities
For binary distributions: probs = sigmoid(logits) = 1 / (1 + exp(-logits))
For categorical distributions: probs = softmax(logits)
Output is guaranteed to be valid probabilities: binary values lie in (0, 1), and categorical values are non-negative and sum to 1 across the last dimension.
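To make the two formulas above concrete, here is a minimal sketch in plain TypeScript, independent of the tensor API (the scalar `sigmoid` and array `softmax` helpers are illustrative, not the library's implementation):

```typescript
// Binary case: probs = sigmoid(logits) = 1 / (1 + exp(-x)).
// Branching on the sign keeps exp() applied only to non-positive
// arguments, so large-magnitude logits cannot overflow.
function sigmoid(x: number): number {
  return x >= 0 ? 1 / (1 + Math.exp(-x)) : Math.exp(x) / (1 + Math.exp(x));
}

// Categorical case: probs = softmax(logits). Subtracting the max
// logit before exponentiating is the standard stabilization; the
// shift cancels in the normalized ratio.
function softmax(logits: number[]): number[] {
  const m = Math.max(...logits);
  const exps = logits.map((x) => Math.exp(x - m));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}
```

For example, `[-2.197, 0, 2.197].map(sigmoid)` gives roughly `[0.1, 0.5, 0.9]`, matching the binary example below.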
- Sigmoid for binary: Maps (-∞, ∞) to (0, 1)
- Softmax for categorical: Returns normalized probability distribution
- Inverse operation: Inverse of probs_to_logits()
- Numerically stable: Uses stable implementations to avoid overflow
- Output range: Binary returns values in (0, 1), categorical sums to 1
- Extreme logits: Very large positive or negative logits produce probabilities saturating toward 1 or 0, respectively
- Softmax normalization: Categorical output always sums to 1 across last dimension
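The inverse relationship noted above can be checked numerically. A hedged plain-TypeScript sketch (the scalar `sigmoid` and `logit` helpers here are illustrative stand-ins for logits_to_probs and probs_to_logits in the binary case):

```typescript
// Numerically stable scalar sigmoid, as in the binary formula above.
function sigmoid(x: number): number {
  return x >= 0 ? 1 / (1 + Math.exp(-x)) : Math.exp(x) / (1 + Math.exp(x));
}

// logit (log-odds) is the inverse of sigmoid: logit(p) = log(p / (1 - p)).
function logit(p: number): number {
  return Math.log(p / (1 - p));
}
```

Round-tripping a logit through `sigmoid` and back with `logit` recovers the original value up to floating-point precision, mirroring the probs_to_logits / logits_to_probs pairing.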
Parameters
logits Tensor - Logit tensor with unconstrained values (-∞ to ∞)
is_binary boolean - If true, applies sigmoid (binary distribution); if false, applies softmax (categorical distribution)
options LogitsToProbsOptions optional
Returns
Tensor – Probability tensor with values in [0, 1]

Examples
// Binary logits to probabilities
const logits = torch.tensor([-2.197, 0, 2.197]);
const probs = torch.distributions.logits_to_probs(logits, true);
// probs ≈ [0.1, 0.5, 0.9]

// Categorical logits to probabilities
const logits = torch.tensor([1.0, 2.0, 3.0]);
const probs = torch.distributions.logits_to_probs(logits, false);
// probs is a normalized probability distribution (sums to 1)

// Neural network output to probabilities
const network_output = model.forward(x); // logits
const probs = torch.distributions.logits_to_probs(network_output, false);
// Use probs to create a Categorical distribution

See Also
- PyTorch torch.sigmoid() and torch.softmax()
- probs_to_logits - Convert probabilities to logits
- torch.sigmoid - Sigmoid activation function
- torch.softmax - Softmax normalization