torch.nn.functional.adaptive_max_pool1d
function adaptive_max_pool1d(input: Tensor, output_size: number | [number], options: AdaptiveMaxPoolFunctionalOptions & { return_indices: true }): PoolWithIndicesResult
function adaptive_max_pool1d(input: Tensor, output_size: number | [number], options?: AdaptiveMaxPoolFunctionalOptions): Tensor | PoolWithIndicesResult
1D Adaptive Max Pooling: pools the input to a fixed output size by taking the maximum in each automatically sized window.
Applies adaptive max pooling that automatically computes kernel and stride sizes to achieve the desired output size. Optionally returns indices of max values. Useful for:
- Network design: handle variable input sizes without manual kernel calculation
- Classification networks: pools to fixed feature size before classification layer
- Feature extraction: preserves strongest activations with automatic sizing
- Transfer learning: adapting to different input lengths
- Unpooling: can use returned indices to reconstruct upsampled features
- Size-invariant architectures: same network with varying input lengths
Unlike regular max pooling, where you specify the kernel and stride yourself, adaptive pooling computes them automatically to hit the target output size. It preserves the maximum value in each window and can return the max indices for unpooling reconstruction in deconvolutional networks.
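To make the "automatic kernel computation" concrete, here is a pure-TypeScript sketch over a single channel. The window boundaries follow the floor/ceil start-end formula that PyTorch's adaptive pooling is commonly described with; the function name `adaptiveMaxPool1dSketch` is illustrative, not part of this library's API.

```typescript
// Illustrative sketch of adaptive max pooling over one channel.
// Assumption: window i spans [floor(i*L/out), ceil((i+1)*L/out)),
// matching the formula usually cited for PyTorch's adaptive pooling.
function adaptiveMaxPool1dSketch(
  xs: number[],
  outSize: number
): { out: number[]; indices: number[] } {
  const L = xs.length;
  const out: number[] = [];
  const indices: number[] = [];
  for (let i = 0; i < outSize; i++) {
    const start = Math.floor((i * L) / outSize);
    const end = Math.ceil(((i + 1) * L) / outSize);
    // Scan the window for the position of its maximum.
    let best = start;
    for (let j = start + 1; j < end; j++) {
      if (xs[j] > xs[best]) best = j;
    }
    out.push(xs[best]);
    indices.push(best); // index into the original (flattened) input
  }
  return { out, indices };
}
```

With `xs.length = 6` and `outSize = 2`, the two windows are `[0, 3)` and `[3, 6)`; with `outSize = 1` the single window covers the whole input, giving the global maximum.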
- Automatic kernel computation: No manual kernel/stride needed
- Input invariance: Works with any input length, always produces desired output
- Index reuse: return_indices=true enables reconstructing the original shape in a decoder
- Max preservation: Keeps strongest signals, useful for peak detection
- Global max special case: output_size=1 gives global maximum
- Index semantics: Returned indices correspond to flattened input positions
- Output size constraints: Should not exceed input length
- Unpooling requirement: Must save indices during pooling if unpooling later
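The last two notes can be sketched together: since the returned indices point into the flattened input, unpooling is just scattering the pooled maxima back to those positions. The helper `maxUnpool1dSketch` below is illustrative, not part of this library's API.

```typescript
// Illustrative sketch of max unpooling for one channel: place each
// pooled value at its saved index, zeros elsewhere.
// Assumption: indices refer to flattened positions in the original input.
function maxUnpool1dSketch(
  pooled: number[],
  indices: number[],
  outputLength: number
): number[] {
  const result = new Array<number>(outputLength).fill(0);
  for (let i = 0; i < pooled.length; i++) {
    result[indices[i]] = pooled[i];
  }
  return result;
}
```

This is why the indices must be saved at pooling time: without them there is no way to know which original position each maximum came from.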
Parameters
input: Tensor - 3D input tensor of shape (batch, channels, length)
output_size: number | [number] - Target output length (single value or [length])
options?: AdaptiveMaxPoolFunctionalOptions & { return_indices?: boolean } - Pass { return_indices: true } to also receive the indices of the max values
Returns
Tensor | PoolWithIndicesResult
- If return_indices=false (default): Tensor with shape (batch, channels, output_size)
- If return_indices=true: [pooled_tensor, indices_tensor] for unpooling
Examples
// Global max pooling: extract strongest features
const seq = torch.randn(8, 64, 100);
const pooled = torch.nn.functional.adaptive_max_pool1d(seq, 1);
// Output: (8, 64, 1) - max value from each channel
// Fixed output size: handle variable input lengths
const seq1 = torch.randn(4, 128, 87);
const fixed1 = torch.nn.functional.adaptive_max_pool1d(seq1, 10); // → (4, 128, 10)
const seq2 = torch.randn(4, 128, 200); // Different input length
const fixed2 = torch.nn.functional.adaptive_max_pool1d(seq2, 10); // → (4, 128, 10)
// Both produce (4, 128, 10) regardless of input length
// Feature extraction with indices for unpooling
const embeddings = torch.randn(1, 300, 50);
const [pooled, indices] = torch.nn.functional.adaptive_max_pool1d(embeddings, 10, { return_indices: true });
// pooled: (1, 300, 10) - max values from each 5-element window
// indices: (1, 300, 10) - positions of max values (for unpooling)
// Dense prediction network: preserve spatial resolution with max
const features = torch.randn(16, 256, 14); // Feature maps
const standard = torch.nn.functional.adaptive_max_pool1d(features, 7);
// Output: (16, 256, 7) - half resolution, preserving peaks
See Also
- PyTorch torch.nn.functional.adaptive_max_pool1d
- adaptive_avg_pool1d - Average variant for smoothing instead of peak extraction
- max_pool1d - Regular 1D max pooling with explicit kernel/stride
- adaptive_max_pool2d - 2D adaptive max pooling variant