torch.optim.lr_scheduler.CyclicLR
class CyclicLR
new CyclicLR(optimizer: Optimizer, options: {
/** Base learning rate (or array for each param group) */
base_lr: number | number[];
/** Maximum learning rate (or array for each param group) */
max_lr: number | number[];
/** Number of training iterations in increasing half (default: 2000) */
step_size_up?: number;
/** Number of training iterations in decreasing half (default: step_size_up) */
step_size_down?: number;
/** One of 'triangular', 'triangular2', 'exp_range' (default: 'triangular') */
mode?: ScaleMode;
/** Constant for exp_range mode (default: 1.0) */
gamma?: number;
/** Custom scale function (overrides mode) */
scale_fn?: ScaleFn;
/** 'cycle' or 'iterations' (default: 'cycle') */
scale_mode?: 'cycle' | 'iterations';
/** If true, momentum cycles inversely to the learning rate (default: true) */
cycle_momentum?: boolean;
/** The index of last epoch (default: -1) */
last_epoch?: number;
/** Whether to print a message for each update (default: false) */
verbose?: boolean;
})
Constructor Parameters
optimizer (Optimizer) - Wrapped optimizer
options (object) - Scheduler options; fields are documented in the signature above
Properties
optimizer (Optimizer) - The optimizer being scheduled
base_lrs (number[]) - Base learning rates for each param group
max_lrs (number[]) - Maximum learning rates for each param group
step_size_up (number) - Number of training iterations in the increasing half of a cycle
step_size_down (number) - Number of training iterations in the decreasing half of a cycle
mode (ScaleMode) - Scale mode ('triangular', 'triangular2', or 'exp_range')
gamma (number) - Gamma for exp_range mode
scale_fn (ScaleFn | null) - Custom scale function
scale_mode ('cycle' | 'iterations') - Whether the scale function is evaluated on the cycle number or on iterations within the cycle
cycle (number) - Current cycle number
last_epoch (number) - The index of the last epoch
CyclicLR scheduler: Cyclical learning rate policy (CLR).
CyclicLR cycles the learning rate between base_lr and max_lr at a fixed frequency. It is similar to OneCycleLR, but repeats the cycle throughout training instead of running a single cycle. Each cycle has an up phase (base_lr → max_lr) and a down phase (max_lr → base_lr).
Algorithm: Triangular or exponential cycling between boundaries
- Repeated cycles: Multiple cycles throughout training (vs OneCycleLR's single cycle).
- Empirically effective: research (Smith, 2017) shows that cycling the learning rate can improve convergence.
- Step-based: Call step() per batch, not per epoch.
- Modes: 'triangular' (linear, constant amplitude), 'triangular2' (amplitude halved each cycle), 'exp_range' (amplitude decayed by gamma each iteration).
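The cycling policy above can be sketched as follows. This is an illustration of the standard CLR formula with equal up and down phases, not this library's internals; the function name is ours.

```typescript
// Sketch of the 'triangular' CLR policy with equal up/down phases
// (illustrative; not this library's implementation).
function clrTriangular(
  iteration: number, // batches seen so far
  baseLr: number,
  maxLr: number,
  stepSize: number,  // iterations in the increasing half
): number {
  // Which cycle we are in (1-based)
  const cycle = Math.floor(1 + iteration / (2 * stepSize));
  // Position within the cycle, mapped so that x = 0 at the peak
  const x = Math.abs(iteration / stepSize - 2 * cycle + 1);
  return baseLr + (maxLr - baseLr) * Math.max(0, 1 - x);
}

clrTriangular(0, 0.001, 0.1, 2000);    // base_lr at the start of the up phase
clrTriangular(2000, 0.001, 0.1, 2000); // max_lr at the peak
clrTriangular(4000, 0.001, 0.1, 2000); // back to base_lr at the cycle boundary
```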
Examples
```typescript
// Cycle between 0.001 and 0.1 every 4000 steps
const scheduler = new torch.optim.CyclicLR(optimizer, {
  base_lr: 0.001,
  max_lr: 0.1,
  step_size_up: 2000,
  step_size_down: 2000
});
```
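The three modes differ only in how the cycle amplitude is scaled. A sketch of 'triangular2', which halves the amplitude each cycle, assuming the usual CLR semantics (the function name is ours, not the library's):

```typescript
// Sketch of 'triangular2': like 'triangular', but the amplitude is
// halved every cycle (scale evaluated per cycle). Illustrative only.
function clrTriangular2(
  iteration: number,
  baseLr: number,
  maxLr: number,
  stepSize: number,
): number {
  const cycle = Math.floor(1 + iteration / (2 * stepSize));
  const x = Math.abs(iteration / stepSize - 2 * cycle + 1);
  const scale = 1 / 2 ** (cycle - 1); // 1, 1/2, 1/4, ... per cycle
  return baseLr + (maxLr - baseLr) * Math.max(0, 1 - x) * scale;
}

// Peaks shrink each cycle: 0.1, then 0.0505, then 0.02575
clrTriangular2(2000, 0.001, 0.1, 2000);  // first peak
clrTriangular2(6000, 0.001, 0.1, 2000);  // second peak, half the amplitude
```

In 'exp_range' mode the amplitude is instead multiplied by gamma ** iterations, evaluated per iteration rather than per cycle. In all modes, call scheduler.step() after every batch, not every epoch.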