torch.nn.InstanceNorm2d
class InstanceNorm2d extends _InstanceNorm
Instance Normalization for 2D inputs (images): normalizes each image independently.
Applies per-sample normalization across the spatial dimensions (height and width), so each image's channels are normalized independently of the rest of the batch. This makes it essential for style transfer (CycleGAN, MUNIT) and other generative models where per-image statistics matter.
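The per-plane computation can be sketched in plain JavaScript. This is a framework-free illustration of the statistics involved, not the library's implementation; `normalizePlane` is a hypothetical helper, and `eps` mirrors PyTorch's default of 1e-5:

```javascript
// Sketch of what InstanceNorm2d computes for ONE (sample, channel) plane.
// `plane` is an H x W nested array. Mean and variance come from that plane
// alone -- no other image or channel contributes.
function normalizePlane(plane, eps = 1e-5) {
  const flat = plane.flat();
  const n = flat.length;
  const mean = flat.reduce((a, b) => a + b, 0) / n;
  // Biased (population) variance, matching normalization-layer convention
  const variance = flat.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  const std = Math.sqrt(variance + eps);
  return plane.map(row => row.map(v => (v - mean) / std));
}
```

After normalization each plane has (approximately) zero mean and unit variance, regardless of its original scale.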
When to use InstanceNorm2d:
- Style transfer and image-to-image translation networks
- GANs and generative models sensitive to instance statistics
- Domain adaptation with unpaired images
- Single-image inference (batch size 1)
- When batch statistics are unreliable (small batches)
Key characteristics:
- 2D spatial normalization: normalizes across height and width, per image and per channel
- Batch-independent: no batch statistics are used
- Works with batch size 1: unlike BatchNorm, single-image inference behaves the same as larger batches
- Standard in style transfer: used in CycleGAN, MUNIT, and pix2pixHD
Limitations:
- Discards batch information: batch statistics are ignored entirely
- May hurt supervised learning: BatchNorm is typically better for classification
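The characteristics above follow directly from the shape of the computation. A framework-free sketch over a full [N, C, H, W] nested array makes the per-(sample, channel) independence explicit; this is an illustration of the semantics only, and it omits the optional learnable scale/shift (PyTorch's InstanceNorm2d defaults to affine=false):

```javascript
// Instance norm over a nested [N][C][H][W] array: each (sample, channel)
// plane is normalized with its own mean and variance, so no information
// flows across the batch dimension or across channels.
function instanceNorm2d(x, eps = 1e-5) {
  return x.map(sample =>            // loop over N: each image independent
    sample.map(plane => {           // loop over C: each channel independent
      const flat = plane.flat();
      const n = flat.length;
      const mean = flat.reduce((a, b) => a + b, 0) / n;
      const variance = flat.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
      const std = Math.sqrt(variance + eps);
      return plane.map(row => row.map(v => (v - mean) / std));
    })
  );
}
```

Because the inner statistics never reference the batch index, removing or adding images to the batch cannot change any other image's output.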
Examples
// Style transfer network (CycleGAN-style)
const norm = new torch.nn.InstanceNorm2d(256); // 256 channels
const x = torch.randn([16, 256, 64, 64]); // [batch, channels, height, width]
const y = norm.forward(x); // Each image normalized independently

// Image-to-image translation independent of batch size
const norm = new torch.nn.InstanceNorm2d(64);
const x_batch_1 = torch.randn([1, 64, 256, 256]); // Single image
const y1 = norm.forward(x_batch_1);
const x_batch_32 = torch.randn([32, 64, 256, 256]); // Batch of 32
const y32 = norm.forward(x_batch_32); // Same normalization behavior
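The "same normalization behavior" claim can be checked directly: an image normalized on its own produces exactly the same values as when it sits inside a larger batch. The check below uses a framework-free sketch of the layer (a hypothetical stand-in for the library call, not its actual implementation):

```javascript
// Framework-free instance norm over a nested [N][C][H][W] array.
function instanceNorm2d(x, eps = 1e-5) {
  return x.map(sample =>
    sample.map(plane => {
      const flat = plane.flat();
      const n = flat.length;
      const mean = flat.reduce((a, b) => a + b, 0) / n;
      const variance = flat.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
      const std = Math.sqrt(variance + eps);
      return plane.map(row => row.map(v => (v - mean) / std));
    })
  );
}

const img = [[[1, 2], [3, 4]]];   // one sample: C=1, H=2, W=2
const other = [[[5, 6], [7, 8]]]; // a second sample

const alone = instanceNorm2d([img]);          // batch of 1
const batched = instanceNorm2d([img, other]); // same image inside a batch of 2
// alone[0] and batched[0] are identical element-for-element
```

This is exactly why InstanceNorm2d is safe for single-image inference: there is no train/eval statistics mismatch of the kind BatchNorm introduces.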