1 Comment
Taras Tsugrii:

The validation drift you hit when scaling batch size is such a common "why is this broken" moment. BatchNorm estimates its normalization statistics from the current mini-batch, so changing the batch size changes those estimates (and the running averages used at eval time) out from under you.
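
A quick sketch of why (PyTorch; the channel count and batch sizes are illustrative, not from the post): the per-channel statistics BatchNorm uses in train mode are estimated from the current mini-batch, and they get noisier as the batch shrinks.

```python
import torch

torch.manual_seed(0)

# Per-channel batch statistics, as BatchNorm computes them in train mode.
# Smaller batches -> noisier estimates of the true channel mean (0 here),
# so the effective normalization changes with batch size.
for batch_size in (4, 64, 512):
    x = torch.randn(batch_size, 8, 16, 16)    # (N, C, H, W)
    mu = x.mean(dim=(0, 2, 3))                # one mean per channel
    print(batch_size, mu.abs().max().item())  # deviation from the true mean
```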

The reason CNNs accepted the tradeoff: BatchNorm's per-channel normalization actually matches how conv features work. Each channel is normalized independently over the batch and spatial dims, so each feature detector keeps its own scale. LayerNorm pools statistics across channels within a sample, mixing channels that a conv layer treats as independent, which feels less natural for CNNs.
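
The difference really is just which axes the statistics are pooled over. A minimal sketch (PyTorch; the shapes are illustrative):

```python
import torch

x = torch.randn(32, 64, 14, 14)  # (N, C, H, W)

# BatchNorm: one mean/var per channel, pooled over batch + spatial dims.
bn_mean = x.mean(dim=(0, 2, 3), keepdim=True)                  # (1, 64, 1, 1)
bn_var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
x_bn = (x - bn_mean) / torch.sqrt(bn_var + 1e-5)

# LayerNorm over (C, H, W): one mean/var per sample, channels mixed together.
ln_mean = x.mean(dim=(1, 2, 3), keepdim=True)                  # (32, 1, 1, 1)
ln_var = x.var(dim=(1, 2, 3), unbiased=False, keepdim=True)
x_ln = (x - ln_mean) / torch.sqrt(ln_var + 1e-5)
```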

GroupNorm is the underrated middle ground: batch-independent like LayerNorm, but it still normalizes within groups of channels, so it keeps some of BatchNorm's per-channel structure. And the trend is clear: ConvNeXt and ViT use LayerNorm, and small-batch settings like detection and segmentation often reach for GroupNorm.
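
A minimal sketch of that middle ground (PyTorch; the group count is illustrative). GroupNorm's statistics are computed per sample and per channel group, so it behaves the same at batch size 2 as at 512:

```python
import torch
import torch.nn as nn

gn = nn.GroupNorm(num_groups=8, num_channels=64)  # 8 channels per group

# Statistics are per (sample, group): no dependence on the rest of the batch,
# so there is nothing to drift between training and evaluation.
y_tiny = gn(torch.randn(2, 64, 14, 14))
y_big = gn(torch.randn(512, 64, 14, 14))

# The endpoints recover the other norms: num_groups=1 is LayerNorm-style
# normalization over (C, H, W); num_groups=num_channels is InstanceNorm-style.
```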

BatchNorm's days might be numbered, but it's a slow sunset...