torch.nn.SoftshrinkOptions

Softshrink activation function (Soft shrinkage / soft thresholding).

Softshrink is a soft-shrinkage (soft-thresholding) activation that zeros out small-magnitude activations and shrinks larger ones toward zero. It applies Softshrink(x) = x − λ if x > λ, x + λ if x < −λ, and 0 otherwise. Unlike hard thresholding (Hardshrink), which preserves large values unchanged, soft shrinkage reduces their magnitude by λ. Softshrink is rarely used in standard deep learning but appears in sparse coding, denoising, and specific signal-processing architectures.

Core idea: Softshrink(x) applies a soft threshold at ±λ. For |x| ≤ λ, the output is zero. For |x| > λ, the output is x − λ·sign(x), i.e., the value is shrunk toward zero by λ. This yields a continuous sparse representation, in contrast to Hardshrink's binary keep-or-zero sparsity.
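
To make the piecewise rule concrete, here is a minimal standalone sketch of the element-wise formula in plain TypeScript (a hypothetical softshrink helper operating on plain numbers, not the torch.js tensor API):

// Standalone element-wise soft shrinkage: just the formula, not the library module
function softshrink(x: number, lambd: number = 0.5): number {
  if (x > lambd) return x - lambd;   // shrink positive values toward zero
  if (x < -lambd) return x + lambd;  // shrink negative values toward zero
  return 0;                          // dead zone: |x| <= lambd maps to exactly 0
}

softshrink(1.5);   // 1.0   (shrunk by λ = 0.5)
softshrink(0.3);   // 0     (inside the dead zone)
softshrink(-2.0);  // -1.5  (shrunk toward zero)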

When to use Softshrink:

  • Sparse representation learning with smooth shrinkage
  • Denoising autoencoders (soft shrinkage for noise removal)
  • Iterative shrinkage algorithms (e.g., ISTA; see the sketch after this list)
  • Sparse coding and dictionary learning
  • Rarely in standard deep networks, which generally don't use shrinkage functions
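
Soft thresholding is the proximal operator of the L1 penalty, which is why it appears inside iterative shrinkage algorithms such as ISTA. The following is a rough, hypothetical sketch of one ISTA step on plain arrays, reusing the standalone softshrink helper from above; the names D, y, z, step, and lambda are illustrative and not part of torch.js:

// One ISTA step for min_z 0.5*||y - D*z||^2 + lambda*||z||_1 (plain arrays, illustrative only)
function istaStep(
  D: number[][],   // dictionary, shape [m][n]
  y: number[],     // observed signal, length m
  z: number[],     // current sparse code, length n
  step: number,    // gradient step size
  lambda: number   // sparsity weight
): number[] {
  // residual r = D*z - y
  const r = D.map((row, i) => row.reduce((s, d, j) => s + d * z[j], 0) - y[i]);
  // gradient of the quadratic term: g = D^T * r
  const g = z.map((_, j) => D.reduce((s, row, i) => s + row[j] * r[i], 0));
  // gradient step followed by soft thresholding: exactly the Softshrink nonlinearity
  return z.map((zj, j) => softshrink(zj - step * g[j], step * lambda));
}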

Trade-offs vs Hardshrink:

  • Sparsity: Softshrink shrinks large values toward zero (continuous) vs Hardshrink preserving them unchanged (binary keep-or-zero)
  • Continuity: Softshrink is continuous everywhere (only a kink at ±λ) vs Hardshrink's jump discontinuity at ±λ
  • Interpretation: Softshrink performs continuous shrinkage vs Hardshrink's binary selection
  • Effectiveness: similar for sparse coding; the choice depends on the task
  • Gradient: identical for both: 1 where |x| > λ, 0 in the dead zone (see Algorithm below)
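
For concreteness, with λ = 0.5 (again using the standalone softshrink helper sketched above, plus a hypothetical scalar hard threshold for comparison):

const hardshrinkScalar = (x: number, lambd: number = 0.5): number =>
  Math.abs(x) > lambd ? x : 0;  // hard threshold: keep the value or zero it, nothing in between

hardshrinkScalar(0.6);  // 0.6   kept unchanged (output jumps from 0 to about 0.5 at the threshold)
softshrink(0.6);        // 0.1   shrunk by λ (continuous through the threshold)
hardshrinkScalar(0.4);  // 0
softshrink(0.4);        // 0     both are exactly zero inside the dead zone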

Trade-offs vs Softplus:

  • Range: Softshrink zeros small values and shrinks large ones; Softplus is smooth and strictly positive (unbounded above)
  • Sparsity: Softshrink produces exact zeros (sparse); Softplus never outputs exactly zero (non-sparse)
  • Use case: Softshrink for sparse coding and thresholding; Softplus as a smooth general-purpose activation
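
For reference, Softplus(x) = ln(1 + e^x) is strictly positive, so it never produces exact zeros; a quick check with plain Math calls (not the library modules) shows why it cannot yield sparse codes:

const softplus = (x: number): number => Math.log(1 + Math.exp(x));

softplus(0);    // ≈ 0.693   not zero
softplus(-5);   // ≈ 0.0067  small, but still strictly positive
softshrink(0);  // 0         exact zero inside Softshrink's dead zone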

Algorithm:

  • Forward: Softshrink(x) = x − λ if x > λ, x + λ if x < −λ, 0 if |x| ≤ λ
  • Backward: ∂Softshrink/∂x = 1 if |x| > λ, 0 if |x| ≤ λ (same as Hardshrink)

The soft shrinkage creates a dead zone [−λ, λ] where the output is exactly zero.
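
A minimal sketch of the backward rule over plain arrays (illustrative only; in torch.js this gradient flows through autograd rather than a hand-written function):

// Gradient of Softshrink with respect to its input: 1 outside the dead zone, 0 inside,
// i.e. the same gradient mask as Hardshrink.
function softshrinkBackward(xs: number[], gradOut: number[], lambd: number = 0.5): number[] {
  return xs.map((x, i) => (Math.abs(x) > lambd ? gradOut[i] : 0));
}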

Definition

export interface SoftshrinkOptions {
  /** Threshold value for shrinkage (default: 0.5) */
  lambd?: number;
}
lambd (number, optional) – Threshold value for shrinkage (default: 0.5)

Examples

// Soft shrinkage for sparse coding
class SparseCodeAutoencoder extends torch.nn.Module {
  private encode: torch.nn.Linear;
  private softshrink: torch.nn.Softshrink;
  private decode: torch.nn.Linear;

  constructor() {
    super();
    this.encode = new torch.nn.Linear(100, 50);
    this.softshrink = new torch.nn.Softshrink(0.1);  // λ = 0.1
    this.decode = new torch.nn.Linear(50, 100);
  }

  forward(x: torch.Tensor): torch.Tensor {
    x = this.encode.forward(x);
    x = this.softshrink.forward(x);  // Soft threshold for sparse codes
    return this.decode.forward(x);
  }
}

// Comparing hard vs soft shrinkage
const x = torch.linspace(-2, 2, [1000]);
const hardshrink = new torch.nn.Hardshrink(0.5);
const softshrink = new torch.nn.Softshrink(0.5);

const y_hard = hardshrink.forward(x);  // Exactly zero in [-0.5, 0.5], unchanged outside
const y_soft = softshrink.forward(x);  // Exactly zero in [-0.5, 0.5], shrunk outside
// Hard: binary selection, Soft: continuous shrinkage