torch.softsign
function softsign<S extends Shape, D extends DType = DType, Dev extends DeviceType = DeviceType>(input: Tensor<S, D, Dev>): Tensor<S, D, Dev>

Applies the Softsign function element-wise: Softsign(x) = x / (1 + |x|).
Softsign is similar to tanh but computationally simpler, producing a smooth, bounded output without computing exponentials. It maps inputs to (-1, 1), which makes it useful wherever a bounded activation is needed. It is less common than tanh or sigmoid, but can be efficient on some hardware.
- Bounded activation: Output is always in (-1, 1)
- No exp: Computationally simpler than tanh (no exponential)
- Slower saturation: Approaches its asymptotes polynomially rather than exponentially, which can slow training
- Rare in modern networks: Most prefer ReLU, tanh, or GELU
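The properties above follow directly from the definition Softsign(x) = x / (1 + |x|). A minimal element-wise sketch in plain TypeScript (independent of any tensor library, for illustration only):

```typescript
// Softsign(x) = x / (1 + |x|), applied element-wise.
// Bounded: |x| / (1 + |x|) < 1 for every finite x, so outputs lie in (-1, 1).
// No exponentials are involved -- just one division and one abs per element.
function softsign(xs: number[]): number[] {
  return xs.map((x) => x / (1 + Math.abs(x)));
}

softsign([-2, -1, 0, 1, 2]); // → [-0.667, -0.5, 0, 0.5, 0.667] (rounded)
```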
Parameters
input: Tensor<S, D, Dev> - The input tensor
Returns
Tensor<S, D, Dev> - A new tensor with Softsign applied element-wise, values in (-1, 1)

Examples
// Basic usage
const x = torch.tensor([-2, -1, 0, 1, 2]);
torch.softsign(x); // [-0.667, -0.5, 0, 0.5, 0.667]
// Comparison with tanh (similar purpose, different properties)
const tanh_out = torch.tanh(x); // [-0.964, -0.762, 0, 0.762, 0.964]
const softsign_out = torch.softsign(x); // [-0.667, -0.5, 0, 0.5, 0.667]
// Softsign decays more slowly (linear denominator vs exponential)
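To make the decay difference concrete, here is a quick standalone check at a large input (assuming the standard formula softsign(x) = x / (1 + |x|), with tanh from Math.tanh):

```typescript
// At x = 10, tanh has effectively saturated while softsign has not.
const x = 10;
const s = x / (1 + Math.abs(x)); // softsign(10) = 10/11 ≈ 0.909, still far from 1
const t = Math.tanh(x);          // tanh(10) ≈ 0.99999999, effectively 1
```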
// Normalizing unbounded outputs to (-1, 1) range
const raw_output = torch.randn(batch, features);
const normalized = torch.softsign(raw_output); // Guaranteed in (-1, 1)

See Also
- PyTorch torch.nn.functional.softsign()
- tanh - Similar bounded activation with exponential decay
- sigmoid - Similar bounded activation, but with output in (0, 1)
- relu - Unbounded alternative