torch.Tensor.softplus
Softplus activation function.
Smooth approximation to ReLU: log(1 + e^x). Outputs are always positive, with a smooth gradient everywhere. Used less frequently than ReLU/GELU.
Use Cases:
- Smooth approximation to ReLU
- Variance and scale parameter modeling (always positive)
- Smooth gradient flow compared to ReLU
Key Properties:
- Always positive: softplus(x) > 0 for all x
- Approximates ReLU: for large x, softplus(x) ≈ x; for very negative x, softplus(x) ≈ 0
- Smooth: Continuously differentiable everywhere
Returns
Tensor<S, D, Dev> – New tensor with softplus applied element-wise
Examples
// Smooth ReLU alternative
const x = torch.randn(100);
const activated = x.softplus(); // Smooth version of relu
// Parameter modeling (always positive)
const variance = torch.randn(50).softplus().add(1e-5); // Ensure positive variance
See Also
- PyTorch tensor.softplus()
- relu - Non-smooth alternative (faster)
- elu - Another smooth alternative