torch.maximum
function maximum<S1 extends Shape, S2 extends Shape>(input: Tensor<S1>, other: Tensor<S2>, options?: BinaryOptions<BroadcastShape<S1, S2>>): Tensor<BroadcastShape<S1, S2>>
Computes the element-wise maximum of two tensors.
Element-wise maximum: for each position, returns the larger of the two values. If either value is NaN, the result is NaN (NaN propagates). For NaN-safe maximum, use fmax instead. Essential for:
- Clipping operations: Enforcing minimum thresholds element-wise
- ReLU-like activations: max(0, x) activation functions
- Comparison operations: Finding larger values in paired data
- Boundary enforcement: Enforcing upper/lower constraints
- Data merging: Combining datasets by taking element-wise maximum
- Physics simulations: Computing physical bounds and constraints
This is the standard element-wise maximum that propagates NaN (any NaN input → NaN output). For datasets with missing values, use fmax, which ignores NaN. The function supports broadcasting: the shapes don't need to be identical, just broadcastable per NumPy rules.
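The broadcasting rule can be illustrated on plain shape arrays. The sketch below is not the library's `BroadcastShape` implementation; it just demonstrates the NumPy rule the documentation refers to: align shapes from the trailing dimension, and each pair of sizes must be equal or one of them must be 1.

```typescript
// Sketch of NumPy-style shape broadcasting (illustration only).
// Missing leading dimensions are treated as size 1.
function broadcastShape(s1: number[], s2: number[]): number[] {
  const rank = Math.max(s1.length, s2.length);
  const out: number[] = [];
  for (let i = 1; i <= rank; i++) {
    const d1 = s1[s1.length - i] ?? 1; // absent leading dim counts as 1
    const d2 = s2[s2.length - i] ?? 1;
    if (d1 !== d2 && d1 !== 1 && d2 !== 1) {
      throw new Error(`shapes not broadcastable: [${s1}] vs [${s2}]`);
    }
    out.unshift(Math.max(d1, d2));
  }
  return out;
}

console.log(broadcastShape([32, 3, 224, 224], [1])); // [32, 3, 224, 224]
console.log(broadcastShape([4, 1], [3]));            // [4, 3]
```

This is why `torch.maximum(images, floor)` below works with an image batch and a shape-`[1]` tensor: the single element is repeated across every pixel.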
- NaN propagation: Any NaN input produces NaN output
- Broadcasting: Follows standard NumPy broadcasting rules
- Element-wise: Operates on individual elements, not reductions
- Opposite of minimum: Use minimum for element-wise minimum
- In-place option: Can write result to output tensor if provided
- NaN propagates: If you need NaN-safe behavior, use fmax
- Shape broadcasting: Input shapes must be broadcastable
- Different from max: max finds single maximum, maximum finds per-element max
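The maximum-vs-fmax NaN contrast noted above can be sketched on flat arrays. This is a semantic illustration only (broadcasting omitted), not the library's implementation; `elementwiseMaximum` and `elementwiseFmax` are hypothetical helper names.

```typescript
// maximum-style: Math.max returns NaN if either operand is NaN,
// matching torch.maximum's NaN propagation.
function elementwiseMaximum(a: number[], b: number[]): number[] {
  return a.map((v, i) => Math.max(v, b[i]));
}

// fmax-style: a NaN operand is ignored when the other value is a number;
// NaN results only when both operands are NaN.
function elementwiseFmax(a: number[], b: number[]): number[] {
  return a.map((v, i) => {
    if (Number.isNaN(v)) return b[i];
    if (Number.isNaN(b[i])) return v;
    return Math.max(v, b[i]);
  });
}

console.log(elementwiseMaximum([1, NaN, 3], [2, 2, 2])); // [2, NaN, 3]
console.log(elementwiseFmax([1, NaN, 3], [2, 2, 2]));    // [2, 2, 3]
```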
Parameters
input: Tensor<S1> - First input tensor (any shape)
other: Tensor<S2> - Second input tensor (broadcastable with input's shape)
options: BinaryOptions<BroadcastShape<S1, S2>> (optional) - Options object, including an optional output tensor for writing the result in place
Returns
Tensor<BroadcastShape<S1, S2>> - Element-wise maximum: max(input[i], other[i]) for each position
Examples
// Basic element-wise maximum
const x = torch.tensor([1, 5, 3, 2]);
const y = torch.tensor([2, 3, 4, 1]);
torch.maximum(x, y); // [2, 5, 4, 2]
// Clipping to minimum value (ReLU-like)
const values = torch.tensor([-2, 0.5, -1, 3, -0.5]);
const zeros = torch.zeros_like(values);
const activated = torch.maximum(values, zeros);
// activated: [0, 0.5, 0, 3, 0] - ReLU behavior
// Enforcing minimum thresholds
const measurements = torch.tensor([[9.2, 10.1], [8.9, 11.3]]);
const minimum_valid = 9.0;
const validated = torch.maximum(measurements, torch.full_like(measurements, minimum_valid));
// validated: [[9.2, 10.1], [9.0, 11.3]] - all values >= 9.0
// Broadcasting with different shapes
const images = torch.randn(32, 3, 224, 224); // Batch of images
const floor = torch.tensor([-1.0]); // Shape [1], broadcast across every pixel
const floored = torch.maximum(images, floor);
// All pixel values >= -1.0
// Merging two datasets, keeping maximum
const temp_sensor1 = torch.tensor([20.5, 21.2, 19.8]);
const temp_sensor2 = torch.tensor([20.1, 21.5, 20.0]);
const consensus_temp = torch.maximum(temp_sensor1, temp_sensor2);
// consensus_temp: [20.5, 21.5, 20.0] - takes the warmer reading
// Comparison with fmax (NaN handling)
const a = torch.tensor([1.0, NaN, 3.0]);
const b = torch.tensor([2.0, 2.0, 2.0]);
torch.maximum(a, b); // [2, NaN, 3] - NaN propagates
torch.fmax(a, b); // [2, 2, 3] - NaN ignored
See Also
- PyTorch torch.maximum()
- minimum - Element-wise minimum
- fmax - NaN-safe maximum
- fmin - NaN-safe minimum
- clamp - Clamp values between min and max bounds