torch.optim.lr_scheduler.ReduceLROnPlateau

class ReduceLROnPlateau
new ReduceLROnPlateau(optimizer: Optimizer, options: {
  /** One of 'min', 'max' (default: 'min') */
  mode?: PlateauMode;
  /** Factor by which the learning rate will be reduced (default: 0.1) */
  factor?: number;
  /** Number of epochs with no improvement (default: 10) */
  patience?: number;
  /** Threshold for measuring the new optimum (default: 1e-4) */
  threshold?: number;
  /** One of 'rel', 'abs' (default: 'rel') */
  threshold_mode?: 'rel' | 'abs';
  /** Number of epochs to wait before resuming normal operation (default: 0) */
  cooldown?: number;
  /** Lower bound on the learning rate (default: 0) */
  min_lr?: number | number[];
  /** Minimal decay applied to lr (default: 1e-8) */
  eps?: number;
  /** Whether to print a message for each update (default: false) */
  verbose?: boolean;
} = {})

Constructor Parameters

optimizer (Optimizer)
Wrapped optimizer
options (object, optional)
Scheduler options (see the constructor signature above)
optimizer (Optimizer)
– The optimizer being scheduled
mode (PlateauMode)
– Mode: 'min' or 'max'
factor (number)
– Factor by which the learning rate will be reduced
patience (number)
– Number of epochs with no improvement after which the LR will be reduced
threshold (number)
– Threshold for measuring the new optimum
threshold_mode ('rel' | 'abs')
– Mode for threshold comparison: 'rel' or 'abs'
cooldown (number)
– Number of epochs to wait after a reduction before resuming normal operation
min_lr (number | number[])
– Lower bound on the learning rate (a scalar, or one value per parameter group)
eps (number)
– Minimal decay applied to the lr; if the difference between the new and old lr is smaller than eps, the update is skipped
verbose (boolean)
– Whether to print a message on each LR change
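To make the threshold_mode parameter concrete, here is a small stand-alone sketch (a hypothetical helper in plain TypeScript, not part of the torch.js API) of the improvement test for mode 'min': 'rel' scales the bar by the current best value, while 'abs' subtracts a fixed amount.

```typescript
// Hypothetical illustration of the threshold_mode semantics for mode='min'.
// 'rel': improvement means metric < best * (1 - threshold)
// 'abs': improvement means metric < best - threshold
function isImprovementMin(
  metric: number,
  best: number,
  threshold: number,
  thresholdMode: 'rel' | 'abs',
): boolean {
  return thresholdMode === 'rel'
    ? metric < best * (1 - threshold) // bar shrinks with best
    : metric < best - threshold;      // fixed-size bar
}
```

The two modes diverge as the metric shrinks: with threshold = 0.01 and best = 0.1, 'rel' accepts anything below 0.099, while 'abs' still demands a value below 0.09.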

ReduceLROnPlateau reduces the learning rate when a monitored metric plateaus.

ReduceLROnPlateau is a metric-based learning rate scheduler. Unlike epoch-based schedulers (StepLR, CosineAnnealingLR), it monitors a metric (e.g., validation loss) and reduces the learning rate when the metric stops improving. This is more adaptive and doesn't require knowing training duration in advance.

Key advantages:

  • Adaptive: Responds to actual training progress, not fixed epochs
  • No duration knowledge: Works without knowing total training steps
  • Metric-aware: Uses validation performance to decide when to decay
  • Flexible: Can be combined with any optimizer and other schedules

When to use ReduceLROnPlateau:

  • You don't know good decay epochs in advance
  • You want the learning rate to adapt to actual training progress
  • Metric-based adaptation is acceptable (no strict decay timing is required)
  • You are fine-tuning or doing transfer learning (where epoch counts vary)
  • You want to avoid rigid schedules that may decay at suboptimal times

Trade-offs:

  • Requires explicitly passing a metric value to step() on each call
  • Requires a monitored metric (typically validation loss)
  • Less predictable than fixed-epoch schedules
  • Can be slow to adapt if patience is large
  • step(metric) semantics differ from the step() of epoch-based schedulers

Algorithm: Monitors metric and reduces learning rate when plateau detected:

  1. Track best metric value seen so far
  2. If metric doesn't improve for 'patience' checks, reduce lr by 'factor'
  3. Reset patience counter after reduction
  4. Improvement threshold specified by 'threshold' parameter
\[
\text{Improvement} =
\begin{cases}
\text{metric} < \text{best} \cdot (1 - \text{threshold}) & \text{if mode = 'min'} \\
\text{metric} > \text{best} \cdot (1 + \text{threshold}) & \text{if mode = 'max'}
\end{cases}
\]

After no improvement for patience epochs:

\[
\eta_{\text{new}} = \eta \cdot \text{factor}
\]
  • Metric-driven: Reduces lr based on actual performance, not fixed epochs.
  • Manual passing: Must explicitly call step(metric) with validation metric.
  • Best practice: Use validation loss/accuracy, not training metrics.
  • Patience critical: Too small → aggressive decay, too large → slow adaptation.
  • Cooldown useful: Prevents rapid successive reductions from metric noise.
  • min_lr important: Prevents learning rate from becoming zero.
  • Comparison: StepLR decays on a fixed schedule; ReduceLROnPlateau adapts to the metric.
  • Combos: Can be used with warmup (LinearLR) in SequentialLR.
  • Not epoch-based: step(metric) not step(), semantically different.
  • Popular choice: Standard for transfer learning and fine-tuning.
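The plateau-detection loop described above can be sketched as a small self-contained class. This is a hypothetical illustration in plain TypeScript (not the torch.js implementation), covering mode, factor, patience, the 'rel'/'abs' threshold modes, cooldown, and min_lr:

```typescript
// Hypothetical sketch of the ReduceLROnPlateau algorithm, tracking a single
// learning rate. Not the torch.js implementation.
type PlateauMode = 'min' | 'max';

class PlateauTracker {
  private best: number;
  private numBadEpochs = 0;
  private cooldownCounter = 0;

  constructor(
    public lr: number,
    private mode: PlateauMode = 'min',
    private factor = 0.1,
    private patience = 10,
    private threshold = 1e-4,
    private thresholdMode: 'rel' | 'abs' = 'rel',
    private cooldown = 0,
    private minLr = 0,
  ) {
    // Any finite metric improves on the initial best.
    this.best = mode === 'min' ? Infinity : -Infinity;
  }

  private isImprovement(metric: number): boolean {
    if (this.mode === 'min') {
      return this.thresholdMode === 'rel'
        ? metric < this.best * (1 - this.threshold)
        : metric < this.best - this.threshold;
    }
    return this.thresholdMode === 'rel'
      ? metric > this.best * (1 + this.threshold)
      : metric > this.best + this.threshold;
  }

  step(metric: number): void {
    if (this.isImprovement(metric)) {
      this.best = metric;            // new optimum: reset the patience counter
      this.numBadEpochs = 0;
    } else if (this.cooldownCounter > 0) {
      this.cooldownCounter--;        // during cooldown, bad epochs are ignored
      this.numBadEpochs = 0;
    } else {
      this.numBadEpochs++;
    }
    if (this.numBadEpochs > this.patience) {
      // Plateau detected: multiply lr by factor, clamped to minLr.
      this.lr = Math.max(this.lr * this.factor, this.minLr);
      this.cooldownCounter = this.cooldown;
      this.numBadEpochs = 0;
    }
  }
}
```

With patience = 2 and factor = 0.1, three non-improving metrics in a row leave the lr untouched; the next one exceeds patience and multiplies the lr by 0.1.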

Examples

// Reduce lr when validation loss stops improving
const scheduler = new torch.optim.ReduceLROnPlateau(optimizer, {
  mode: 'min',        // Minimize loss
  factor: 0.1,        // Reduce by 10x
  patience: 10,       // Wait 10 epochs
  threshold: 1e-4
});

for (let epoch = 0; epoch < 100; epoch++) {
  train();
  const val_loss = validate();
  scheduler.step(val_loss);  // Pass metric to scheduler
}
// Maximize accuracy (e.g., for classification)
const scheduler = new torch.optim.ReduceLROnPlateau(optimizer, {
  mode: 'max',           // Maximize accuracy
  factor: 0.5,           // Reduce by 50%
  patience: 5,           // Less patient, decay sooner
  threshold: 0.0001      // Threshold for improvement
});
// Conservative: large patience, small decay
const scheduler = new torch.optim.ReduceLROnPlateau(optimizer, {
  mode: 'min',
  factor: 0.5,      // Reduce by 50% (not as aggressive)
  patience: 20,     // Wait 20 epochs before reducing
  min_lr: 1e-6      // Don't go below 1e-6
});
// Aggressive: small patience, large decay
const scheduler = new torch.optim.ReduceLROnPlateau(optimizer, {
  mode: 'min',
  factor: 0.1,      // Reduce by 90% (very aggressive)
  patience: 3,      // Only wait 3 epochs
  cooldown: 1       // Wait 1 epoch after reducing before monitoring again
});

See Also

  • PyTorch torch.optim.lr_scheduler.ReduceLROnPlateau