
torch.distributions.constraints.multinomial

function multinomial(total_count: number): _Multinomial

Creates a constraint for multinomial distributions.

The multinomial constraint ensures that sampled values are valid counts for a multinomial distribution. Specifically, sampled values must be:

  • Non-negative integers - Each count must be ≥ 0 and have no fractional part
  • Sum to total_count - All counts across categories must sum exactly to the specified total

This constraint is essential for categorical sampling where you need to draw a fixed number of samples distributed across multiple categories. Common use cases include:

  • Text generation - Drawing multiple tokens from vocabulary at each step
  • Reinforcement learning - Sampling multiple actions from a discrete action space
  • Discrete mixture models - Ensuring valid mixture component counts
  • Multi-label classification - Constraining predicted class counts

How multinomial sampling works: Given probabilities for K categories and total_count draws, the multinomial distribution generates K counts that sum exactly to total_count, where each count represents how many times that category was sampled.
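The process above can be sketched in plain JavaScript, with no torch.js required; categoricalDraw and sampleMultinomial are illustrative helpers for intuition, not part of the library API:

```javascript
// Illustrative sketch: a multinomial sample is total_count categorical
// draws, tallied per category. Not torch.js API.
function categoricalDraw(probs) {
  let r = Math.random();
  for (let k = 0; k < probs.length; k++) {
    r -= probs[k];
    if (r <= 0) return k;
  }
  return probs.length - 1;  // guard against floating-point leftover
}

function sampleMultinomial(probs, totalCount) {
  const counts = new Array(probs.length).fill(0);
  for (let n = 0; n < totalCount; n++) {
    counts[categoricalDraw(probs)]++;
  }
  return counts;  // always sums exactly to totalCount
}

console.log(sampleMultinomial([0.1, 0.3, 0.4, 0.2], 20));
```

Because every draw increments exactly one counter, the exact-sum property holds by construction; no rounding or normalization is involved.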

Relationship to categorical distribution:

  • Categorical: sample one category per draw
  • Multinomial: draw total_count categorical samples (with replacement) and count the outcomes per category

Notes

  • Discrete constraint: The multinomial constraint only accepts integer values. Floating-point counts are rejected by the check.
  • Event dimension: The constraint operates over the last dimension (event_dim=1), so it works naturally with batched samples.
  • Sum constraint: The most important check is that counts sum exactly to total_count. No approximation or tolerance is used, so even off-by-one errors fail the check.
  • Non-negative: All counts must be ≥ 0. Negative counts are never valid.
  • PyTorch compatibility: Matches the behavior of torch.distributions.constraints.multinomial().
  • Common use case: Used internally by the Multinomial distribution to validate samples.
  • Integer only: Fractional counts like [2.5, 3.5, ...] are invalid.
  • Large total_count: With very large total_count, make sure intermediate computations have enough precision for the exact-sum check.
  • Zero counts allowed: Having a count of zero for some categories is valid and common.
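For intuition, the checks the notes above describe can be approximated on a plain array; checkMultinomial below is an illustrative stand-in, not the library implementation:

```javascript
// Illustrative stand-in for the multinomial constraint check:
// each count must be a non-negative integer, and the whole vector
// must sum exactly to totalCount (the sum check applies to every element).
function checkMultinomial(counts, totalCount) {
  const sumOk = counts.reduce((a, b) => a + b, 0) === totalCount;
  return counts.map(c => Number.isInteger(c) && c >= 0 && sumOk);
}

checkMultinomial([2, 3, 4, 5, 2, 4], 20);  // every element true
checkMultinomial([2.5, 3.5], 6);           // fractional counts fail
checkMultinomial([2, 3, 4, 5, 2, 3], 20);  // sum is 19, so every element fails
```

Note that the sum condition is shared: a single off-by-one in the total marks all elements invalid, while a fractional or negative value only fails its own element.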

Parameters

total_count: number
The number of draws for the multinomial distribution. Must be a positive integer. All sampled counts must sum to exactly this value.

Returns

_Multinomial – A constraint object that validates multinomial distribution samples

Examples

// Constraint for rolling a 6-sided die 20 times
const constraint = torch.distributions.constraints.multinomial(20);

// Valid sample: [2, 3, 4, 5, 2, 4] - sums to 20
const valid = torch.tensor([2, 3, 4, 5, 2, 4]);
constraint.check(valid);  // All elements true

// Invalid sample: [2, 3, 4, 5, 2, 3] - sums to 19
const invalid = torch.tensor([2, 3, 4, 5, 2, 3]);
constraint.check(invalid);  // Some elements false

// Sampling from multinomial distribution
const probs = torch.tensor([0.1, 0.3, 0.4, 0.2]);  // 4 categories
const constraint = torch.distributions.constraints.multinomial(100);

// Create multinomial distribution
const dist = new torch.distributions.Multinomial(probs, { total_count: 100 });
const sample = dist.sample();  // Shape: [4], sums to 100
constraint.check(sample);  // Always true

// Batch sampling with multinomial constraint
const batch_probs = torch.randn(32, 10).softmax(-1);  // 32 samples, 10 categories
const constraint = torch.distributions.constraints.multinomial(50);

for (let i = 0; i < batch_probs.shape[0]; i++) {
  const dist = new torch.distributions.Multinomial(batch_probs[i], { total_count: 50 });
  const sample = dist.sample();  // [10], sums to 50
  const is_valid = constraint.check(sample);  // Always true
}

// Text generation with vocabulary sampling
const vocab_size = 50000;
const probs = torch.ones(vocab_size).div(vocab_size);  // Uniform distribution
const constraint = torch.distributions.constraints.multinomial(1000);

// Sample 1000 tokens; count how many times each vocab item was drawn
const token_counts = new torch.distributions.Multinomial(probs, { total_count: 1000 }).sample();
constraint.check(token_counts);  // Validates the sample

// Reinforcement learning: sampling actions from action space
const num_actions = 6;
const action_logits = torch.randn(num_actions);
const action_probs = action_logits.softmax(-1);

const constraint = torch.distributions.constraints.multinomial(30);  // 30 episodes
const dist = new torch.distributions.Multinomial(action_probs, { total_count: 30 });

// Sample which actions were taken across 30 episodes
const action_histogram = dist.sample();
constraint.check(action_histogram);  // Ensures valid action counts

// Validating constraint properties
const constraint = torch.distributions.constraints.multinomial(100);

// Check constraint properties
console.log(constraint.is_discrete);  // true - counts are discrete integers
console.log(constraint.event_dim);     // 1 - operates along one dimension

// Test edge cases
const all_zeros = torch.zeros(5);
constraint.check(all_zeros);  // All elements false (sum is 0, not 100)

const valid = torch.tensor([20, 20, 20, 20, 20]);
constraint.check(valid);  // All elements true (sum is 100)

See Also

  • PyTorch torch.distributions.constraints.multinomial()
  • torch.distributions.Multinomial - The multinomial distribution using this constraint
  • torch.distributions.constraints.independent - Wrapping constraints with extra dimensions
  • torch.distributions.constraints.nonnegative_integer - Base integer constraint
  • cat - Constraint for concatenated distributions
  • stack - Constraint for stacked distributions