torch.nn.ReplicationPad1d
class ReplicationPad1d extends Module
new ReplicationPad1d(padding: Padding1D)
- readonly padding: [number, number]
1D replication padding: pads by repeating edge values.
Extends sequences by replicating the edge values. Unlike reflection padding, which mirrors the interior of the signal, replication simply repeats the boundary value across the entire padded region. It is computationally simpler than reflection and avoids boundary artifacts when the signal naturally "continues" past its edges. Useful for:
- Signal processing (repeated edge assumption)
- Time series with constant boundary behavior
- Simpler alternative to reflection padding
- Performance-critical applications (simpler computation)
Replication padding repeats edge values: for input [a, b, c] with padding=2, the output is [a, a | a, b, c | c, c], where the | marks separate the padded regions from the original sequence.
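The rule above can be sketched on a plain array; this is an illustrative helper, not part of the library API, and the `[left, right]` tuple mirrors the `[number, number]` shape of the `padding` property:

```javascript
// Minimal sketch of 1D replication padding on a plain array.
// `left` and `right` give how many copies of each edge value to prepend/append.
function replicationPad1d(xs, [left, right]) {
  if (xs.length === 0) throw new Error("cannot pad an empty sequence");
  return [
    ...Array(left).fill(xs[0]),              // repeat the first element
    ...xs,                                   // original sequence, unchanged
    ...Array(right).fill(xs[xs.length - 1]), // repeat the last element
  ];
}

console.log(replicationPad1d(["a", "b", "c"], [2, 2]));
// -> ["a", "a", "a", "b", "c", "c", "c"]
```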
When to use ReplicationPad1d:
- Simpler alternative to reflection
- Signal assumes edge value continues
- Performance matters (faster than reflection)
- Avoiding reflection's boundary limit constraints
Trade-offs:
- vs ReflectionPad: replication is simpler and cheaper to compute; reflection gives a smoother boundary
- vs ZeroPad: replication preserves the signal level; zero padding introduces a discontinuity at the boundary
- Gradient: straightforward backprop (gradients at padded positions accumulate into the edge elements)
- No size limit: can pad by any amount, even more than the input width (reflection is limited to width − 1)
- Boundary assumption: appropriate when the signal plausibly continues at its edges
- Visible repetition: the constant edge value creates a flat, obvious pattern at the boundary, unlike reflection's gradual mirroring
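The contrast with zero padding can be seen numerically; a minimal sketch, with illustrative helper names that are not part of the library API:

```javascript
// Compare boundary behavior of zero padding vs replication padding,
// padding one element on each side of the sequence [3, 4, 5].
const pad1dZero = (xs, n) =>
  [...Array(n).fill(0), ...xs, ...Array(n).fill(0)];
const pad1dReplicate = (xs, n) =>
  [...Array(n).fill(xs[0]), ...xs, ...Array(n).fill(xs[xs.length - 1])];

const signal = [3, 4, 5];
console.log(pad1dZero(signal, 1));      // [0, 3, 4, 5, 0] -- jump of 3 at the left edge
console.log(pad1dReplicate(signal, 1)); // [3, 3, 4, 5, 5] -- no jump at either edge
```

The zero-padded output has a sharp step from 0 to 3 that a convolution will respond to; the replicated output is flat at the boundary.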
Examples
// Basic replication
const pad = new torch.nn.ReplicationPad1d(2);
const x = torch.tensor([[[1, 2, 3, 4, 5]]]);
const y = pad.forward(x);
// Input:  [1, 2, 3, 4, 5]             (shape [1, 1, 5])
// Output: [1, 1, 1, 2, 3, 4, 5, 5, 5] (shape [1, 1, 9]; each edge value repeated twice)