torch.special.multigammaln

function multigammaln<S extends Shape>(input: Tensor<S, 'float32'>, p: number, _options?: SpecialUnaryOptions<S>): Tensor<S, 'float32'>

Computes the multivariate log-gamma function with dimension p.

The multivariate log-gamma function log Γ_p(a) appears as the normalizing constant in probability densities for random matrices and multivariate distributions. Essential for:

  • Bayesian statistics: Wishart and inverse-Wishart distributions (matrix variable priors)
  • Matrix variate distributions: matrix-normal, matrix-t, matrix-F distributions with structured covariance
  • Multivariate density normalization: constant term in multivariate Gaussian, Dirichlet (via ratio)
  • Random matrix theory: eigenvalue distributions, matrix concentration bounds
  • Variational inference: ELBO computation for matrix-valued variables, structured variational inference
  • Graphical models: covariance matrix estimation, precision matrix priors (Wishart conjugate)
  • Neural networks: Bayesian deep learning with matrix parameter uncertainty, posterior approximation

Matrix Variable Context: The multivariate gamma function is Γ_p(a) = π^(p(p−1)/4) ∏_{j=1}^p Γ(a + (1−j)/2). It appears naturally in multivariate analysis when working with p×p covariance/precision matrices, and generalizes the univariate gamma Γ(a) to the matrix setting; the dimension p controls how quickly it grows.

Wishart Distribution: The Wishart(S, n) density on p×p positive definite matrices uses Γ_p(n/2) as normalization; most common prior for covariance matrices in Bayesian hierarchical models.

\begin{aligned}
\Gamma_p(a) &= \pi^{p(p-1)/4} \prod_{j=1}^{p} \Gamma\left(a + \frac{1-j}{2}\right) \\
\log \Gamma_p(a) &= \frac{p(p-1)}{4} \log(\pi) + \sum_{j=1}^{p} \log \Gamma\left(a + \frac{1-j}{2}\right) \\
\text{Special case: } & \Gamma_1(a) = \Gamma(a), \quad \log \Gamma_1(a) = \log \Gamma(a) \\
\text{Recursion: } & \Gamma_p(a) = \pi^{(p-1)/2} \, \Gamma(a) \cdot \Gamma_{p-1}(a - 1/2) \quad \text{(relates different dimensions)} \\
\text{Wishart context: } & \text{Wishart}(S, n) \text{ density} \propto |X|^{(n-p-1)/2} \, e^{-\operatorname{tr}(S^{-1}X)/2} \cdot 2^{-np/2} \, |S|^{-n/2} / \Gamma_p(n/2)
\end{aligned}
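The log-sum formula can be sketched directly in TypeScript. This is a minimal standalone reference, not the torch.js kernel: `lgamma` here is a standard Lanczos approximation, and the scalar `multigammaln` below only illustrates the computation the tensor op performs elementwise.

```typescript
// Lanczos approximation of log Γ(x) (g = 7, 9 coefficients), accurate to roughly 1e-13.
const LANCZOS_G = 7;
const LANCZOS_COEF = [
  0.99999999999980993, 676.5203681218851, -1259.1392167224028,
  771.32342877765313, -176.61502916214059, 12.507343278686905,
  -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7,
];

function lgamma(x: number): number {
  if (x < 0.5) {
    // Reflection formula: Γ(x) Γ(1 − x) = π / sin(πx)
    return Math.log(Math.PI / Math.sin(Math.PI * x)) - lgamma(1 - x);
  }
  const z = x - 1;
  let acc = LANCZOS_COEF[0];
  for (let i = 1; i < LANCZOS_COEF.length; i++) acc += LANCZOS_COEF[i] / (z + i);
  const t = z + LANCZOS_G + 0.5;
  return 0.5 * Math.log(2 * Math.PI) + (z + 0.5) * Math.log(t) - t + Math.log(acc);
}

// log Γ_p(a) = (p(p−1)/4)·log π + Σ_{j=1..p} log Γ(a + (1−j)/2)
function multigammaln(a: number, p: number): number {
  if (!Number.isInteger(p) || p < 1) throw new RangeError("p must be a positive integer");
  if (a <= (p - 1) / 2) throw new RangeError("multigammaln requires a > (p - 1)/2");
  let sum = ((p * (p - 1)) / 4) * Math.log(Math.PI);
  for (let j = 1; j <= p; j++) sum += lgamma(a + (1 - j) / 2);
  return sum;
}
```

For p = 1 the π term vanishes and the sum has a single term, so `multigammaln(a, 1)` coincides with `lgamma(a)`. As a spot check, `multigammaln(2, 2)` evaluates to log(π/2), matching Γ₂(2) = √π · Γ(2) · Γ(3/2) = π/2.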
  • Dimension scaling: Increases with p; Γ_p(a) grows much faster than Γ(a) for fixed a
  • Special case p=1: Reduces exactly to standard univariate lgamma; Γ_1(a) = Γ(a)
  • Domain requirement: a > (p−1)/2 is strictly needed for mathematical convergence and positive definiteness
  • Wishart central role: Γ_p(n/2) normalizes the Wishart(·, n) density; the most common use case
  • Matrix variate normalization: Appears whenever normalizing densities on p×p random matrices
  • Recursion: Can compute via Γ_p(a) = π^{(p−1)/2} · Γ(a) · Γ_{p−1}(a − 1/2), but the direct formula is more stable
  • Large p warning: Grows extremely rapidly; overflow risk for large p and moderate a
  • Domain boundary critical: a ≤ (p−1)/2 causes a mathematical singularity (a pole of one of the Γ(a + (1−j)/2) factors)
  • Large p numerically unstable: Γ_p grows as a product of p Γ terms (multiple exponentials); Γ_p itself overflows already for p > 10, a > 3, which is why the log form is used
  • Requires p ≥ 1: p must be a positive integer; p = 0 is not meaningful mathematically
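The recursion bullet can be checked at half-integer arguments, where Γ has exact closed forms (Γ(1/2) = √π, Γ(x+1) = x·Γ(x)). A small sketch with illustrative helper names, comparing the direct product formula against the standard recursion Γ_p(a) = π^{(p−1)/2} · Γ(a) · Γ_{p−1}(a − 1/2):

```typescript
// Exact Γ at positive integers and half-integers: Γ(1/2) = √π, Γ(1) = 1, Γ(x+1) = x·Γ(x)
function gammaExact(x: number): number {
  if (x === 0.5) return Math.sqrt(Math.PI);
  if (x === 1) return 1;
  return (x - 1) * gammaExact(x - 1); // recurse down to 1/2 or 1
}

// Product formula: Γ_p(a) = π^{p(p−1)/4} ∏_{j=1..p} Γ(a + (1−j)/2)
function multigammaProduct(a: number, p: number): number {
  let g = Math.pow(Math.PI, (p * (p - 1)) / 4);
  for (let j = 1; j <= p; j++) g *= gammaExact(a + (1 - j) / 2);
  return g;
}

// Recursion: Γ_p(a) = π^{(p−1)/2} · Γ(a) · Γ_{p−1}(a − 1/2)
function multigammaRecursive(a: number, p: number): number {
  if (p === 1) return gammaExact(a);
  return Math.pow(Math.PI, (p - 1) / 2) * gammaExact(a) * multigammaRecursive(a - 0.5, p - 1);
}

const viaProduct = multigammaProduct(3, 3);     // Γ_3(3) = π^{3/2}·Γ(3)·Γ(5/2)·Γ(2) = (3/2)·π²
const viaRecursion = multigammaRecursive(3, 3); // same value via the recursion
```

Both routes give Γ₃(3) = (3/2)·π² ≈ 14.8044; in practice the log-space product formula is preferred because Γ_p itself overflows quickly.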

Parameters

input Tensor<S, 'float32'>
Input tensor a. Must satisfy a > (p−1)/2 for positive definiteness and convergence
p number
Dimension parameter (positive integer): the dimension of the random matrices involved. Can be 1 (univariate, recovers Γ), 2, 3, ...
_options SpecialUnaryOptions<S> optional

Returns

Tensor<S, 'float32'>– Tensor with log Γ_p(a) values

Examples

// Univariate special case: p=1 reduces to standard lgamma
const a = torch.tensor([0.5, 1.0, 2.0, 3.0]);
const multigamma_p1 = torch.special.multigammaln(a, 1);  // Same as torch.lgamma(a)
const standard_lgamma = torch.lgamma(a);
// multigamma_p1 ≈ standard_lgamma (identical for p=1)

// Wishart distribution normalization (p=2 covariance matrix)
const a_wishart = torch.tensor([2.0, 3.0, 4.0]);  // n/2 values
const p_dim = 2;  // 2×2 covariance matrices
const log_Z = torch.special.multigammaln(a_wishart, p_dim);  // log Γ_2(n/2) term of the log-normalizer
// log Γ_2(n/2) = (1/2)·log(π) + log Γ(n/2) + log Γ(n/2 − 1/2)

// Higher dimensional matrices (p=3 for 3×3 covariance)
const a_3d = torch.tensor([2.5, 3.0, 3.5]);
const log_gamma_3 = torch.special.multigammaln(a_3d, 3);
// Uses Γ_3(a) = π^(3*2/4) * Γ(a) * Γ(a - 1/2) * Γ(a - 1)
// Needed for 3×3 random matrix models (larger covariance matrices in physics, geology)

// Bayesian covariance estimation: Inverse-Wishart prior
const dof = 5.0;  // Degrees of freedom
const n_features = 4;  // p = 4 (feature dimension)
const a_param = dof / 2;  // Γ_p(ν/2) appears in the Inv-Wishart(dof) normalizer
const log_prior_const = torch.special.multigammaln(torch.tensor([a_param]), n_features);
// Prior density ∝ |Σ|^{-(dof+p+1)/2} · exp(-½ tr(V Σ⁻¹)) / Z, where log Z includes multigammaln(dof/2, p)

// Batch computation: different dimensions
const a_batch = torch.tensor([1.0, 2.0, 3.0]);
const log_multigamma_all_p2 = torch.special.multigammaln(a_batch, 2);
// [log Γ_2(1), log Γ_2(2), log Γ_2(3)]

// Domain check: a must be > (p-1)/2
const a_valid = torch.tensor([2.0]);  // > (3-1)/2 = 1
const a_boundary = torch.tensor([1.0]);  // = (3-1)/2 exactly
const a_invalid = torch.tensor([0.5]);  // < (3-1)/2 (mathematically singular)
const p = 3;
// multigammaln(a_valid, 3) ✓ defined
// multigammaln(a_boundary, 3) diverges: the factor Γ(a + (1−p)/2) = Γ(0) has a pole at the boundary
// multigammaln(a_invalid, 3) undefined (domain violation)

See Also

  • PyTorch torch.special.multigammaln()
  • torch.lgamma - Univariate log-gamma function (special case p=1)
  • torch.special.gammaln - Alias for lgamma (univariate)
  • torch.special.digamma - Digamma function (derivative of log Γ); used in variational inference