torch.Tensor.sigmoid_
Tensor.sigmoid_(): this
In-place sigmoid activation.
Applies the sigmoid activation element-wise, in place. Sigmoid is defined as σ(x) = 1 / (1 + e^(-x)). Outputs are bounded to (0, 1), making it ideal for:
- Binary classification probability outputs
- Gate mechanisms in LSTMs and GRUs
- Smooth differentiable approximation to step function
- Any task requiring probability output
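The formula above can be sketched in plain TypeScript; this is an illustration of the math, not this library's implementation:

```typescript
// Scalar sigmoid following the definition σ(x) = 1 / (1 + e^(-x)).
function sigmoid(x: number): number {
  return 1 / (1 + Math.exp(-x));
}

const inputs = [-10, -1, 0, 1, 10];
const outputs = inputs.map(sigmoid);
console.log(outputs.map((v) => v.toFixed(3)));
// ["0.000", "0.269", "0.500", "0.731", "1.000"]
// σ(0) is exactly 0.5; extreme inputs approach but never reach 0 or 1.
```

Note that `sigmoid(-10)` rounds to `0.000` at three decimals but is actually about 4.5e-5, consistent with the open interval (0, 1).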
Use Cases:
- Binary classification output layers
- Gate functions in RNNs/LSTMs/GRUs
- Attention mechanisms and gating
- Bounded activation with smooth gradients
- Output range: (0, 1) - always bounded; never reaches exactly 0 or 1.
- Point symmetry: sigmoid(0) = 0.5 and sigmoid(-x) = 1 - sigmoid(x).
- In-place: Modifies tensor directly, memory efficient.
- Smooth gradients: Continuous non-zero gradient everywhere.
- Vanishing gradients: For extreme values (|x| > 10), gradients approach 0.
- Slower than ReLU: More computation but better for gating.
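The vanishing-gradient point follows from the derivative identity σ'(x) = σ(x) · (1 − σ(x)), which peaks at 0.25 and decays toward 0 at both extremes. A minimal sketch in plain TypeScript (illustrative only, not this library's API):

```typescript
// σ(x) and its derivative σ'(x) = σ(x) · (1 − σ(x)).
function sigmoid(x: number): number {
  return 1 / (1 + Math.exp(-x));
}

function sigmoidGrad(x: number): number {
  const s = sigmoid(x);
  return s * (1 - s);
}

console.log(sigmoidGrad(0));  // 0.25 - the maximum possible slope
console.log(sigmoidGrad(10)); // ~4.5e-5 - effectively zero: saturated regime
```

This is why deep stacks of sigmoid layers train slowly: gradients shrink multiplicatively through each saturated unit, whereas ReLU passes gradient 1 for positive inputs.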
Returns
this – This tensor, modified in place
Examples
const x = torch.tensor([-10, -1, 0, 1, 10]);
x.sigmoid_(); // [~0, 0.269, 0.5, 0.731, ~1]
// Binary classification - squash logits to (0, 1)
const logits = torch.randn([32, 1]);
const probs = logits.clone().sigmoid_(); // probabilities in (0, 1)
// Gate in LSTM - control information flow
const [batch, hidden_size] = [32, 128]; // example dimensions
const forget_gate = torch.randn([batch, hidden_size]);
forget_gate.sigmoid_(); // Gate values in (0, 1)
See Also
- PyTorch tensor.sigmoid_()
- sigmoid - Non-in-place version
- relu_ - Unbounded activation (faster, sparser)
- tanh_ - Similar but output in (-1, 1)
- softmax - Multi-class probability output