torch.optim.lr_scheduler.ConstantLR
class ConstantLR extends LRScheduler

new ConstantLR(optimizer: Optimizer, options: {
  /** The number by which to multiply LR (default: 1/3) */
  factor?: number;
  /** Number of steps to apply constant LR (default: 5) */
  total_iters?: number;
  /** The index of last epoch (default: -1) */
  last_epoch?: number;
  /** Whether to print a message for each update (default: false) */
  verbose?: boolean;
} = {})
Constructor Parameters
optimizer (Optimizer) - Wrapped optimizer
options (object, optional) - Scheduler options:
factor (number, optional) - The factor by which to multiply the learning rate during the constant phase (default: 1/3)
total_iters (number, optional) - Number of steps to apply the constant LR (default: 5)
last_epoch (number, optional) - The index of the last epoch (default: -1)
verbose (boolean, optional) - Whether to print a message for each update (default: false)
ConstantLR scheduler: Constant multiplier for fixed period (typically warmup).
ConstantLR multiplies the learning rate by a constant factor for a fixed number of epochs, then returns it to the base learning rate. It is a simpler warmup alternative to LinearLR: instead of ramping the learning rate up linearly, it holds it at a constant reduced value.
Use cases:
- Constant warmup: hold the learning rate at a reduced value before ramping up (LinearLR) or applying the main schedule.
- Chaining with other schedulers: often used in SequentialLR before CosineAnnealingLR.

Algorithm: η_t = η_base * factor for epoch < total_iters; η_t = η_base afterwards.
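The update rule can be sketched as a stand-alone TypeScript function (an illustrative sketch of the math above, not the library's implementation):

```typescript
// Sketch of the ConstantLR rule: the learning rate is base_lr * factor
// while epoch < total_iters, and base_lr from total_iters onward.
function constantLR(
  baseLr: number,
  epoch: number,
  factor: number = 1 / 3,
  totalIters: number = 5
): number {
  return epoch < totalIters ? baseLr * factor : baseLr;
}

// With base_lr = 0.03 and the defaults, epochs 0-4 use roughly 0.01
// and epoch 5 onward uses 0.03.
const lrs = Array.from({ length: 8 }, (_, epoch) => constantLR(0.03, epoch));
console.log(lrs.map((lr) => lr.toFixed(2)).join(", "));
```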
Examples
// Constant warmup at 1/3 of base_lr for 5 epochs
const scheduler = new torch.optim.ConstantLR(optimizer, { factor: 1/3, total_iters: 5 });
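To see what chaining looks like numerically, the following stand-alone sketch computes the effective learning rate of a ConstantLR warmup followed by cosine annealing; the function and parameter names here are illustrative, not part of the library API:

```typescript
// Effective LR of a ConstantLR -> cosine-annealing chain, computed directly.
// For epoch < warmup the LR is base_lr * factor (ConstantLR phase); afterwards
// it follows eta_min + (base_lr - eta_min) * (1 + cos(pi * t / tMax)) / 2,
// where t counts epochs since the annealing phase began.
function chainedLR(
  baseLr: number,
  epoch: number,
  factor: number,
  warmup: number,
  tMax: number,
  etaMin: number = 0
): number {
  if (epoch < warmup) return baseLr * factor; // ConstantLR phase
  const t = epoch - warmup; // epochs since annealing started
  return etaMin + ((baseLr - etaMin) * (1 + Math.cos((Math.PI * t) / tMax))) / 2;
}

// base_lr = 0.1, 5 warmup epochs at 1/3 of base_lr, then cosine decay over 10 epochs
for (let epoch = 0; epoch < 15; epoch++) {
  console.log(epoch, chainedLR(0.1, epoch, 1 / 3, 5, 10).toFixed(4));
}
```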