torch.Tensor.leaky_relu_
Tensor.leaky_relu_(options?: LeakyReluFunctionalOptions): this
Tensor.leaky_relu_(negative_slope: number, inplace: boolean, options: LeakyReluFunctionalOptions): this
In-place Leaky ReLU activation.
Applies the Leaky Rectified Linear Unit in-place. Leaky ReLU is defined as LReLU(x) = max(negative_slope * x, x) (valid for 0 <= negative_slope <= 1). It is a variant of ReLU that allows a small gradient for negative inputs, mitigating the "dead ReLU" problem. Essential for:
- Preventing dead neurons common in standard ReLU
- Allowing gradient flow for negative inputs
- Slightly better generalization than ReLU in some cases
- Competitive alternative to ReLU with minimal overhead
Use Cases:
- Replacing ReLU to prevent dead neurons
- Generative models and adversarial networks
- Neural networks with deeper architectures
- Improved gradient flow in deeper networks
Notes:
- Dead ReLU fix: A small negative slope prevents neurons from permanently dying.
- Negative slope: Defaults to 0.01; larger values allow more gradient flow for negative inputs.
- In-place: Modifies the tensor directly, saving memory.
- Not smooth at zero: Continuous everywhere but not differentiable at x = 0, where the one-sided derivatives (negative_slope and 1) disagree.
- Vanishing gradients: With a very small slope, gradients for negative inputs remain tiny and can still vanish in deep networks.
- Hyperparameter: The negative slope value affects learning dynamics and may need tuning.
Parameters
options: LeakyReluFunctionalOptions (optional)
Returns
this – This tensor, modified in-place
Examples
const x = torch.tensor([-2, -1, 0, 1, 2]);
x.leaky_relu_(); // [-0.02, -0.01, 0, 1, 2] with default slope 0.01
// Custom negative slope - steeper for negative values
const y = torch.tensor([-2, -1, 0, 1, 2]);
y.leaky_relu_(0.2); // [-0.4, -0.2, 0, 1, 2]
// Prevent dead neurons in deep network
let output = input;
for (const layer of model.layers) {
output = layer(output).leaky_relu_(0.01); // Small slope allows gradient flow
}
See Also
- PyTorch tensor.leaky_relu_()
- leaky_relu - Non-inplace version
- relu_ - Standard ReLU (always zero for negatives)
- elu_ - ELU alternative with exponential shape
- prelu - Learnable version with per-element slopes