torch.minimum
function minimum<S1 extends Shape, S2 extends Shape>(input: Tensor<S1>, other: Tensor<S2>, options?: BinaryOptions<BroadcastShape<S1, S2>>): Tensor<BroadcastShape<S1, S2>>

Computes the element-wise minimum of two tensors.
Element-wise minimum: for each position, returns the smaller of the two values. If either value is NaN, the result is NaN (NaN propagates). For NaN-safe minimum, use fmin instead. Essential for:
- Clipping and saturation: capping values at a specified upper limit element-wise
- Comparison operations: selecting the smaller value in paired data
- Boundary enforcement: applying upper bounds (combine with maximum for lower bounds)
- Data merging: combining datasets by taking the element-wise minimum
This is the standard element-wise minimum, which propagates NaN (any NaN input yields a NaN output); for data with missing values, use fmin, which ignores NaN. It is the counterpart of maximum and, like it, supports broadcasting under the standard NumPy rules.
- NaN propagation: any NaN input produces a NaN output; use fmin for NaN-safe behavior
- Broadcasting: input shapes must be broadcastable, following the standard NumPy rules
- Element-wise: operates on individual element pairs; this is not a reduction
- Counterpart of maximum: use maximum for the element-wise maximum
- Output tensor: the result can be written into an output tensor supplied via options
- Different from min: min reduces a tensor to its single smallest value; minimum compares two tensors per element
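The NaN behavior described above can be sketched in plain TypeScript. This is illustrative only, not the library's implementation; `minimumLike` and `fminLike` are hypothetical helper names standing in for minimum and fmin semantics:

```typescript
// Sketch of the element-wise NaN semantics (illustrative, not the library code).
// minimumLike propagates NaN; fminLike falls back to the non-NaN operand.
function minimumLike(a: number[], b: number[]): number[] {
  return a.map((v, i) =>
    Number.isNaN(v) || Number.isNaN(b[i]) ? NaN : Math.min(v, b[i])
  );
}

function fminLike(a: number[], b: number[]): number[] {
  return a.map((v, i) => {
    if (Number.isNaN(v)) return b[i]; // ignore NaN on the left
    if (Number.isNaN(b[i])) return v; // ignore NaN on the right
    return Math.min(v, b[i]);
  });
}

console.log(minimumLike([1, NaN, 3], [2, 2, 2])); // [1, NaN, 2]
console.log(fminLike([1, NaN, 3], [2, 2, 2]));    // [1, 2, 2]
```

Note that `Math.min` already returns NaN when given NaN, so `minimumLike` makes the propagation explicit; `fminLike` must check each operand to skip NaN.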
Parameters
input: Tensor<S1> - First input tensor (any shape)
other: Tensor<S2> - Second input tensor (must be broadcastable with input)
options: BinaryOptions<BroadcastShape<S1, S2>> (optional) - Binary-op options, including an optional output tensor to receive the result
Returns
Tensor<BroadcastShape<S1, S2>> - Element-wise minimum: min(input[i], other[i]) for each position

Examples
// Basic element-wise minimum
const x = torch.tensor([1, 5, 3, 2]);
const y = torch.tensor([2, 3, 4, 1]);
torch.minimum(x, y); // [1, 3, 3, 1]
// Capping values at maximum (saturation)
const values = torch.tensor([-2, 0.5, 1, 3, 2.5]);
const max_allowed = 2.0;
const saturated = torch.minimum(values, torch.full_like(values, max_allowed));
// saturated: [-2, 0.5, 1, 2, 2] - all values capped at 2.0
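Combining this capping pattern with a lower bound gives a clamp (see clamp in See Also). A minimal plain-TypeScript sketch of the numeric pattern; the `clampLike` helper here is hypothetical and unrelated to the library's clamp:

```typescript
// Clamp as a composition of element-wise max (lower bound) and min (upper bound).
const clampLike = (xs: number[], lo: number, hi: number): number[] =>
  xs.map((v) => Math.min(Math.max(v, lo), hi));

console.log(clampLike([-2, 0.5, 3], 0, 2)); // [0, 0.5, 2]
```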
// Enforcing maximum thresholds
const scores = torch.tensor([[95, 110], [88, 105]]);
const max_score = 100;
const capped = torch.minimum(scores, torch.full_like(scores, max_score));
// capped: [[95, 100], [88, 100]] - all scores <= 100
// Broadcasting with different shapes
const predictions = torch.randn(32, 10); // Batch predictions
const ceiling = torch.tensor([1.0]); // Shape [1], broadcast across the batch
const bounded = torch.minimum(predictions, ceiling);
// All predictions <= 1.0
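The broadcast shape in the example above follows the standard NumPy rule: align shapes from the trailing dimension; two sizes are compatible if they are equal or one of them is 1, and missing leading dimensions act as 1. A minimal sketch of that rule (`broadcastShape` here is a hypothetical runtime helper, not the library's BroadcastShape type):

```typescript
// Sketch of NumPy-style broadcast shape resolution (illustrative).
function broadcastShape(s1: number[], s2: number[]): number[] {
  const out: number[] = [];
  const len = Math.max(s1.length, s2.length);
  for (let i = 1; i <= len; i++) {
    const d1 = s1[s1.length - i] ?? 1; // missing leading dims act as 1
    const d2 = s2[s2.length - i] ?? 1;
    if (d1 !== d2 && d1 !== 1 && d2 !== 1) {
      throw new Error(`shapes not broadcastable at dim -${i}: ${d1} vs ${d2}`);
    }
    out.unshift(Math.max(d1, d2));
  }
  return out;
}

console.log(broadcastShape([32, 10], [1])); // [32, 10]
```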
// Merging two datasets, keeping minimum
const distance_path1 = torch.tensor([5.2, 3.8, 6.1]);
const distance_path2 = torch.tensor([4.9, 4.5, 5.8]);
const shortest_path = torch.minimum(distance_path1, distance_path2);
// shortest_path: [4.9, 3.8, 5.8] - takes the shorter distance
// Comparison with fmin (NaN handling)
const a = torch.tensor([1.0, NaN, 3.0]);
const b = torch.tensor([2.0, 2.0, 2.0]);
torch.minimum(a, b); // [1, NaN, 2] - NaN propagates
torch.fmin(a, b); // [1, 2, 2] - NaN ignored

See Also
- PyTorch torch.minimum()
- maximum - Element-wise maximum
- fmax - NaN-safe maximum
- fmin - NaN-safe minimum
- clamp - Clamp values between min and max bounds