torch.nn.LPPool1d
new LPPool1d(norm_type: number, kernel_size: number, options?: LPPool1dOptions)
norm_type(number) - readonly
kernel_size(number) - readonly
stride(number) - readonly
ceil_mode(boolean)
1D Lp-norm pooling: reduces sequence length by applying the Lp norm over each window instead of the max or mean.
Applies power-average pooling: out = (sum(|x|^p))^(1/p) over each window, where p = norm_type. This generalizes common pooling operations; different norm_type values produce different behaviors:
- norm_type=1 (L1): Sum of absolute values (Manhattan distance)
- norm_type=2 (L2): Euclidean norm (square root of sum of squares)
- Larger norm_type: interpolates toward max pooling (as norm_type → ∞ the result approaches the window maximum)
Useful when robustness matters: less sensitive to outliers than max pooling, while preserving strong activations better than average pooling. Essential for:
- Robust feature extraction (between max and mean)
- Learning representations that are robust to outliers
- Domain-specific downsampling requiring norm-based pooling
- L1 vs L2: L1 = sum(|x|), L2 = sqrt(sum(x²))
- Robustness: Between max (sensitive to outliers) and mean (loses peaks)
- ceil_mode: Affects output size calculation at boundaries
Examples
// L2 (Euclidean) norm pooling
const pool = new torch.nn.LPPool1d(2, 3); // L2-norm, kernel=3
const x = torch.randn([32, 64, 100]);
const y = pool.forward(x); // [32, 64, 33] - stride defaults to kernel_size

// L1 (Manhattan) norm pooling
const pool2 = new torch.nn.LPPool1d(1, 2); // L1-norm, kernel=2
const x2 = torch.randn([16, 128, 50]);
const y2 = pool2.forward(x2); // [16, 128, 25] - sum of absolute values