torch.hardtanh
function hardtanh(input: unknown, options: unknown): Tensor
HardTanh activation function: clamps values to a bounded range.
Applies element-wise: HardTanh(x) = Clamp(x, min_val, max_val). A simple bounded activation that clips all values to a fixed range, typically [-1, 1]. It is a fast element-wise operation with no learnable parameters; a minimal reference sketch follows the list below. Useful for:
- Bounded activation for quantization (values in predictable range)
- Enforcing constraints on intermediate representations
- Simple non-linearity with hard boundaries
- Mobile/embedded networks (simple, fast operation)
- Layers requiring range control (e.g., normalized features)
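For reference, the element-wise rule above reduces to a plain clamp. The snippet below is a minimal standalone sketch in plain TypeScript; hardtanhScalar, minVal, and maxVal are illustrative names, not part of this library:
// Minimal sketch of HardTanh(x) = Clamp(x, min_val, max_val)
function hardtanhScalar(x: number, minVal = -1, maxVal = 1): number {
  return Math.min(Math.max(x, minVal), maxVal);
}
[-2, -0.5, 0.5, 2].map((v) => hardtanhScalar(v)); // [-1, -0.5, 0.5, 1]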
When to use HardTanh:
- Need a simple bounded activation (faster than Sigmoid/Tanh on most devices)
- Values must be in specific range for downstream processing
- Want hard bounds (not soft like sigmoid)
- Mobile/embedded where Tanh is slow
- Combining with other constraints
HardTanh vs Tanh:
- Computation: HardTanh is a simple clamp (two comparisons per element); Tanh requires exponentials (slower per element)
- Smoothness: Tanh smooth everywhere; HardTanh has sharp corners at bounds
- Gradient: Tanh non-zero everywhere; HardTanh zero outside bounds
- Speed: HardTanh much faster (simple clamp operation)
- Theory: Tanh has nice mathematical properties; HardTanh is pragmatic
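The difference is easiest to see near the bounds: Tanh approaches ±1 asymptotically, while HardTanh reaches them exactly. A small standalone comparison in plain TypeScript (illustrative only, not this library's API):
// Compare smooth Tanh with the hard clamp at a few points
const points = [-3, -1, -0.5, 0.5, 1, 3];
points.map((v) => Math.tanh(v));                 // ≈ [-0.995, -0.762, -0.462, 0.462, 0.762, 0.995]
points.map((v) => Math.min(Math.max(v, -1), 1)); // [-1, -1, -0.5, 0.5, 1, 1]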
HardTanh vs ReLU6:
- Range: HardTanh [-1, 1]; ReLU6 [0, 6]
- Symmetry: HardTanh symmetric around zero; ReLU6 asymmetric
- Negative values: HardTanh passes negatives through down to min_val; ReLU6 outputs only zero/positive values
- Usage: ReLU6 for mobile (matches uint8 quantization); HardTanh for symmetric constraints
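Since ReLU6 is essentially a clamp to [0, 6], the same behavior can be expressed with hardtanh by shifting the bounds. A conceptual sketch using this function's options as shown in the Examples below (the equivalence to a dedicated relu6 is assumed, not confirmed by this documentation):
// ReLU6-style clamp expressed via hardtanh bounds
const acts = torch.tensor([-2, 0, 3, 7]);
torch.nn.functional.hardtanh(acts, { min_val: 0, max_val: 6 }); // [0, 0, 3, 6]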
Key properties:
- Hard bounds: values at or beyond the bounds saturate at min_val/max_val
- Zero gradient outside bounds: gradients are zero for x < min_val or x > max_val (see the gradient sketch after this list)
- Fast operation: simple clamp, no exponentials or special functions
- Symmetric default: the default range [-1, 1] is symmetric around zero
- Piecewise linear: output is piecewise linear with corners at the bounds
- Memory efficient: no stored state, minimal gradient computation
Caveats:
- Dead gradients: inputs far outside the bounds receive zero gradient (similar to dying ReLU)
- Not smooth: the derivative is discontinuous at the bounds
- Linear in the middle: no non-linearity inside [min_val, max_val]
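The gradient behavior above follows from the piecewise derivative: 1 strictly inside the bounds, 0 outside. A standalone sketch in plain TypeScript (hardtanhGrad is an illustrative name, not a library function; the value returned exactly at a bound is an implementation detail):
// Derivative of HardTanh with respect to its input
function hardtanhGrad(x: number, minVal = -1, maxVal = 1): number {
  return x > minVal && x < maxVal ? 1 : 0; // zero gradient once the input saturates
}
[-5, -1, 0, 0.9, 2].map((v) => hardtanhGrad(v)); // [0, 0, 1, 1, 0]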
Parameters
input: unknown - Input tensor of any shape
options: unknown - Optional settings:
- min_val - Lower bound of the range (default: -1)
- max_val - Upper bound of the range (default: 1)
Returns
Tensor – Tensor with values clamped to [min_val, max_val]
Examples
// Default HardTanh: clamp to [-1, 1]
const x = torch.tensor([-2, -1, -0.5, 0, 0.5, 1, 2]);
torch.nn.functional.hardtanh(x); // [-1, -1, -0.5, 0, 0.5, 1, 1]

// Custom bounds: clamp to [0, 1] for normalized probabilities
const x = torch.randn([32, 256]); // Unbounded activations
const bounded = torch.nn.functional.hardtanh(x, { min_val: 0, max_val: 1 }); // [0, 1] range
// Now suitable for probability-like interpretation

// Quantization preparation: clamp to quantization range
const input = torch.randn([64, 512]);
// Assuming quantization to [-128, 127] in int8
// Clamp to [-128, 127] then scale
const clamped = torch.nn.functional.hardtanh(input, { min_val: -128, max_val: 127 });

See Also
- PyTorch torch.nn.functional.hardtanh
- tanh - Smooth alternative with range [-1, 1]
- relu6 - Similar bounded activation with range [0, 6]
- sigmoid - Smooth alternative with range [0, 1]
- torch.nn.functional.clamp - General clamping operation