torch.softplus
function softplus<S extends Shape, D extends DType = DType, Dev extends DeviceType = DeviceType>(input: Tensor<S, D, Dev>): Tensor<S, D, Dev>

Applies the Softplus function element-wise.
Softplus is a smooth approximation of ReLU that is differentiable everywhere, including at x = 0. Its derivative is the logistic sigmoid function. It is useful where smooth activations matter, or as a constraint function that maps unconstrained values to positive ones.
- Smooth everywhere: Unlike ReLU, differentiable at x=0
- Approximates ReLU: Becomes nearly identical for x ≫ 0
- Always positive: Output is always > 0, useful for variances/scales
- Computational cost: More expensive than ReLU due to exp() and log()
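The properties above follow directly from the definition softplus(x) = log(1 + eˣ). A minimal plain-TypeScript sketch of the scalar function (an illustration, not this library's implementation):

```typescript
// Scalar softplus: log(1 + e^x)
function softplusScalar(x: number): number {
  return Math.log(1 + Math.exp(x));
}

// Smooth at zero: softplus(0) = log(2) ≈ 0.693, with derivative sigmoid(0) = 0.5
console.log(softplusScalar(0));   // ≈ 0.6931
// Nearly the identity for large x, and nearly 0 for very negative x
console.log(softplusScalar(10));  // ≈ 10.00005
console.log(softplusScalar(-10)); // ≈ 0.0000454
```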
Parameters
input: Tensor<S, D, Dev> - The input tensor
Returns
Tensor<S, D, Dev> – A new tensor with Softplus applied element-wise
Examples
// Basic usage
const x = torch.tensor([-2, -1, 0, 1, 2]);
torch.softplus(x); // [0.126, 0.313, 0.693, 1.313, 2.127]
// Smooth approximation - nearly identical to ReLU for large values
const relu_out = torch.relu(x); // [0, 0, 0, 1, 2]
const softplus_out = torch.softplus(x); // [0.126, 0.313, 0.693, 1.313, 2.127]
// Use as a constraint for learned variances (to ensure positivity)
// Standard deviation must be positive; softplus guarantees this
const raw = torch.randn(batch, features); // unconstrained parameter
const variance = torch.softplus(raw); // always positive!
const std = torch.sqrt(variance);
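For large x, evaluating log(1 + exp(x)) directly overflows, since exp(x) exceeds the double-precision range around x ≈ 710. Implementations typically use the algebraically equivalent form max(x, 0) + log1p(exp(-|x|)). A plain-TypeScript sketch of this trick (illustrative, not this library's internals):

```typescript
// Naive form: overflows to Infinity once exp(x) exceeds Number.MAX_VALUE
const naiveSoftplus = (x: number): number => Math.log(1 + Math.exp(x));

// Stable form: exp is only ever applied to a non-positive argument,
// so it can underflow to 0 but never overflow
function stableSoftplus(x: number): number {
  return Math.max(x, 0) + Math.log1p(Math.exp(-Math.abs(x)));
}

console.log(naiveSoftplus(1000));  // Infinity
console.log(stableSoftplus(1000)); // 1000
```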
// Numerically stable variant of log(1 + exp(x))
// Avoids overflow for large x values
See Also
- PyTorch torch.nn.functional.softplus()
- relu - Faster, non-smooth alternative
- elu - Another smooth variant
- gelu - Modern smooth alternative from transformers