torch.nn.ReflectionPad1d
class ReflectionPad1d extends Module

Constructor: new ReflectionPad1d(padding: Padding1D)

Properties:
- readonly padding: [number, number]
1D reflection padding: pads by reflecting the signal at boundaries.
Extends sequences by mirroring values at the boundaries. The boundary value itself isn't repeated; instead, the sequence is reflected as if a mirror sat on the edge sample. This produces smooth, natural boundaries without discontinuities. Useful for:
- Audio/signal processing (avoids artificial zero padding artifacts)
- Time series with natural symmetry
- CNN input preprocessing (reduces boundary artifacts)
- Preserving signal continuity at edges
- Computer vision (avoids black borders from zero padding)
Reflection padding creates seamless boundaries by mirroring: for an input starting [a, b, c, ...] with padding=2, the left padding is [c, b] (the values after the boundary, reversed; the boundary value 'a' itself is not repeated), giving [c, b, a, b, c, ...]. This prevents the sharp value discontinuities at the edges that zero padding would introduce.
Padding modes comparison:
- Reflection: Mirror at boundary → [c, b | a, b, c | b, a]
- Replication: Repeat edge → [a, a | a, b, c | c, c]
- Zero: Fill with 0 → [0, 0 | a, b, c | 0, 0]
- Circular: Wrap around → [b, c | a, b, c | a, b]
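The four modes above can be sketched over plain arrays (illustrative code only, not the torch.nn API), using [a, b, c] = [1, 2, 3] and pad = 2 on each side:

```typescript
// Plain-array sketches of the four 1D padding modes (pad on each side).
function reflectPad(xs: number[], pad: number): number[] {
  const left = xs.slice(1, pad + 1).reverse();   // boundary value excluded
  const right = xs.slice(-pad - 1, -1).reverse();
  return [...left, ...xs, ...right];
}

function replicatePad(xs: number[], pad: number): number[] {
  return [...Array(pad).fill(xs[0]), ...xs, ...Array(pad).fill(xs[xs.length - 1])];
}

function zeroPad(xs: number[], pad: number): number[] {
  return [...Array(pad).fill(0), ...xs, ...Array(pad).fill(0)];
}

function circularPad(xs: number[], pad: number): number[] {
  return [...xs.slice(-pad), ...xs, ...xs.slice(0, pad)];
}

// [a, b, c] = [1, 2, 3], pad = 2:
reflectPad([1, 2, 3], 2);   // → [3, 2, 1, 2, 3, 2, 1]
replicatePad([1, 2, 3], 2); // → [1, 1, 1, 2, 3, 3, 3]
zeroPad([1, 2, 3], 2);      // → [0, 0, 1, 2, 3, 0, 0]
circularPad([1, 2, 3], 2);  // → [2, 3, 1, 2, 3, 1, 2]
```

Note that only reflection and circular padding reuse interior values; replication and zero padding only touch the edges.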
When to use ReflectionPad1d:
- Audio/speech (natural boundary extension)
- Time series with continuity assumptions
- Avoiding artificial discontinuities
- Signal processing (filters expect smooth boundaries)
- When replication/zero padding shows boundary artifacts
Trade-offs:
- vs ZeroPad: Reflection avoids black borders; zero simpler for deep nets
- vs ReplicationPad: Reflection smoother; replication simpler
- vs CircularPad: Circular assumes a periodic signal; reflection makes no periodicity assumption
- Computation: Slightly more complex than zero padding
- Gradient: Normal backprop through reflected values
Reflection mechanics: For input [a, b, c, d, e] with padding=(2, 3):
- Left padding (2): Mirror [c, b] before 'a'
- Right padding (3): Mirror [d, c, b] after 'e' (the boundary value 'e' is not repeated)
- Result: [c, b | a, b, c, d, e | d, c, b]
- Reflection vs symmetric padding: Excludes the boundary value (true reflection; 'symmetric' modes would repeat it)
- Smooth boundaries: No discontinuities like zero padding would create
- Symmetry: Output reflects input structure near boundaries
- Deterministic: Always produces same output for same input
- No parameters: Pure transformation, no learnable parameters
- Reversible: Can compute original from padded (if padding size known)
- Reflection limit: Cannot pad more than (width - 1) on each side
- Boundary artifacts: Still creates pattern repetition (just smoother)
- Different from rolling: Not circular - true reflection at boundary
- Edge case: Very small tensors may produce unexpected patterns
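A minimal plain-array sketch of asymmetric reflection padding, including the (width - 1) limit check (illustrative only; the real module operates on tensors):

```typescript
// Reflection-pad a 1D array by [left, right], excluding the boundary values.
function reflectPad1d(xs: number[], [left, right]: [number, number]): number[] {
  const w = xs.length;
  if (left > w - 1 || right > w - 1) {
    // Reflection limit: each side can use at most the (w - 1) interior values.
    throw new RangeError(`padding must be less than input width (${w})`);
  }
  const leftPad = xs.slice(1, left + 1).reverse();       // values after the left boundary
  const rightPad = xs.slice(w - 1 - right, w - 1).reverse(); // values before the right boundary
  return [...leftPad, ...xs, ...rightPad];
}

// [a, b, c, d, e] = [1, 2, 3, 4, 5] with padding = (2, 3):
reflectPad1d([1, 2, 3, 4, 5], [2, 3]);
// → [3, 2, 1, 2, 3, 4, 5, 4, 3, 2]
```

Passing a padding of 2 for a width-2 input throws, matching the reflection limit above.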
Examples
// Basic reflection padding
const pad = new torch.nn.ReflectionPad1d(2);
const x = torch.tensor([[[1, 2, 3, 4, 5]]]); // [1, 1, 5]
const y = pad.forward(x);
// Input:  [1, 2, 3, 4, 5]
// Output: [3, 2, 1, 2, 3, 4, 5, 4, 3] (reflected at both ends)

// Asymmetric padding
const pad2 = new torch.nn.ReflectionPad1d([1, 2]);
const x2 = torch.randn([32, 64, 100]); // [batch, channels, width]
const y2 = pad2.forward(x2); // width becomes 103

// Audio processing: avoid zero-padding artifacts
const pad3 = new torch.nn.ReflectionPad1d(512); // large padding, still well under width - 1
const waveform = torch.randn([1, 1, 16000]); // [batch, channels, samples]
const padded = pad3.forward(waveform); // smooth reflection, no clicks
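Because reflection padding adds no new values, the original signal can be recovered exactly by slicing when the pad sizes are known. A plain-array sketch (not the torch API):

```typescript
// Undo 1D padding of any mode by slicing off [left, right] padded values.
function unpad1d(padded: number[], [left, right]: [number, number]): number[] {
  return padded.slice(left, padded.length - right);
}

// Recover [1, 2, 3, 4, 5] from the padding=2 example above:
unpad1d([3, 2, 1, 2, 3, 4, 5, 4, 3], [2, 2]);
// → [1, 2, 3, 4, 5]
```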