torch.special.multigammaln
function multigammaln<S extends Shape>(input: Tensor<S, 'float32'>, p: number, _options?: SpecialUnaryOptions<S>): Tensor<S, 'float32'>

Computes the multivariate log-gamma function with dimension p.
The multivariate log-gamma function log Γ_p(a) appears as the normalizing constant in probability densities for random matrices and multivariate distributions. Essential for:
- Bayesian statistics: Wishart and inverse-Wishart distributions (matrix variable priors)
- Matrix variate distributions: matrix-normal, matrix-t, matrix-F distributions with structured covariance
- Multivariate density normalization: constant term in multivariate Gaussian, Dirichlet (via ratio)
- Random matrix theory: eigenvalue distributions, matrix concentration bounds
- Variational inference: ELBO computation for matrix-valued variables, structured variational inference
- Graphical models: covariance matrix estimation, precision matrix priors (Wishart conjugate)
- Neural networks: Bayesian deep learning with matrix parameter uncertainty, posterior approximation
Matrix Variable Context: The multivariate gamma function is Γ_p(a) = π^(p(p-1)/4) ∏_{j=1}^p Γ(a + (1-j)/2). It arises naturally in multivariate analysis when working with p×p covariance/precision matrices. It generalizes the univariate gamma Γ(a) to the matrix setting; the dimension p determines the number of gamma factors and the power of π.
Wishart Distribution: The Wishart(S, n) density on p×p positive definite matrices uses Γ_p(n/2) as normalization; most common prior for covariance matrices in Bayesian hierarchical models.
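As a concrete sketch of how Γ_p(n/2) enters the Wishart normalizer, the snippet below computes log Z for Wishart(S, n), where Z = 2^(np/2) · |S|^(n/2) · Γ_p(n/2), in plain TypeScript. The helper names `gammaHalf` and `wishartLogNormalizer` are illustrative, not part of this library's API; `gammaHalf` is exact only at positive integer and half-integer arguments, which is all that integer degrees of freedom require.

```typescript
// Illustrative helper, not a library export.
// Exact Γ(x) for positive integer and half-integer x via Γ(x) = (x-1)Γ(x-1).
function gammaHalf(x: number): number {
  if (x === 0.5) return Math.sqrt(Math.PI); // Γ(1/2) = √π
  if (x === 1) return 1;                    // Γ(1) = 1
  return (x - 1) * gammaHalf(x - 1);
}

// log of the Wishart(S, n) normalizing constant on p×p matrices:
// log Z = (n p / 2) log 2 + (n / 2) log|S| + log Γ_p(n/2)
function wishartLogNormalizer(n: number, p: number, logDetS: number): number {
  if (n <= p - 1) throw new RangeError("Wishart requires n > p - 1");
  // log Γ_p(n/2) = (p(p-1)/4) log π + Σ_{j=1}^p log Γ(n/2 + (1-j)/2)
  let logGammaP = (p * (p - 1) / 4) * Math.log(Math.PI);
  for (let j = 1; j <= p; j++) {
    logGammaP += Math.log(gammaHalf(n / 2 + (1 - j) / 2));
  }
  return (n * p / 2) * Math.log(2) + (n / 2) * logDetS + logGammaP;
}
```

For n = 5, p = 2 and |S| = 1 this reduces to 5 log 2 + log Γ_2(5/2), with Γ_2(5/2) = √π · Γ(5/2) · Γ(2) = 3π/4.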
- Dimension scaling: Increases with p; Γ_p(a) grows much faster than Γ(a) for fixed a
- Special case p=1: Reduces exactly to standard univariate lgamma; Γ_1(a) = Γ(a)
- Domain requirement: a > (p-1)/2 strictly required so that every gamma factor Γ(a + (1-j)/2), j = 1..p, has a positive argument
- Wishart central role: Γ_p(n/2) normalizes Wishart(·, n) density; most common use case
- Matrix variate normalization: Appears whenever normalizing densities on p×p random matrices
- Recursion: Can compute via Γ_p(a) = π^((p-1)/2) · Γ(a) · Γ_{p-1}(a - 1/2), but the direct product formula is simpler
- Large p warning: Grows extremely rapidly; overflow risk for large p and moderate a
- Domain boundary critical: a ≤ (p-1)/2 hits a pole of Γ (the factor Γ(a - (p-1)/2) diverges as its argument reaches 0)
- Large p numerically demanding: Γ_p is a product of p gamma factors (multiple exponentials), so exponentiating log Γ_p(a) overflows quickly as p grows; keep computations in log space
- Requires p ≥ 1: p must be positive integer; p=0 not meaningful mathematically
Parameters
input: Tensor<S, 'float32'> - Input tensor a. Must satisfy a > (p-1)/2 so that every gamma factor converges
p: number - Dimension parameter (positive integer). Dimension of the random matrices involved. Can be 1 (univariate, recovers Γ), 2, 3, ...
_options: SpecialUnaryOptions<S> - optional
Returns
Tensor<S, 'float32'> – Tensor with log Γ_p(a) values

Examples
// Univariate special case: p=1 reduces to standard lgamma
const a = torch.tensor([0.5, 1.0, 2.0, 3.0]);
const multigamma_p1 = torch.special.multigammaln(a, 1); // Same as torch.lgamma(a)
const standard_lgamma = torch.lgamma(a);
// multigamma_p1 ≈ standard_lgamma (identical for p=1)
// Wishart distribution normalization (p=2 covariance matrix)
const a_wishart = torch.tensor([2.0, 3.0, 4.0]); // n/2 values
const p_dim = 2; // 2×2 covariance matrices
const log_Z = torch.special.multigammaln(a_wishart, p_dim); // log Γ_2(n/2) term of the log normalizing constant
// log Γ_2(n/2) = log(√π * Γ(n/2) * Γ(n/2 - 1/2))
// Higher dimensional matrices (p=3 for 3×3 covariance)
const a_3d = torch.tensor([2.5, 3.0, 3.5]);
const log_gamma_3 = torch.special.multigammaln(a_3d, 3);
// Uses Γ_3(a) = π^(3*2/4) * Γ(a) * Γ(a - 1/2) * Γ(a - 1)
// Needed for 3×3 random matrix models (larger covariance matrices in physics, geology)
// Bayesian covariance estimation: Inverse-Wishart prior
const dof = 5.0; // Degrees of freedom
const n_features = 4; // p = 4 (feature dimension)
const a_param = dof / 2; // Inv-Wishart(dof) normalization uses Γ_p(dof/2)
const log_prior_const = torch.special.multigammaln(torch.tensor([a_param]), n_features);
// Prior density ∝ |Λ|^{-(dof+p+1)/2} * exp(-tr(V Λ^{-1})/2) / Z where Z involves multigammaln
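A hedged numeric sketch of the Inverse-Wishart log normalizing constant, log Z = (dof·p/2) log 2 + log Γ_p(dof/2) - (dof/2) log|Ψ|. The helper names are illustrative, not library exports; `gammaHalf` is exact only at positive integer and half-integer arguments, which integer degrees of freedom give.

```typescript
// Illustrative helper, not a library export.
// Exact Γ(x) at positive integer and half-integer x via Γ(x) = (x-1)Γ(x-1).
function gammaHalf(x: number): number {
  if (x === 0.5) return Math.sqrt(Math.PI);
  if (x === 1) return 1;
  return (x - 1) * gammaHalf(x - 1);
}

// Inverse-Wishart(dof, Psi) on p×p matrices:
// log Z = (dof * p / 2) log 2 + log Γ_p(dof/2) - (dof / 2) log|Psi|
function invWishartLogNormalizer(dof: number, p: number, logDetPsi: number): number {
  if (dof <= p - 1) throw new RangeError("Inverse-Wishart requires dof > p - 1");
  let logGammaP = (p * (p - 1) / 4) * Math.log(Math.PI);
  for (let j = 1; j <= p; j++) logGammaP += Math.log(gammaHalf(dof / 2 + (1 - j) / 2));
  return (dof * p / 2) * Math.log(2) + logGammaP - (dof / 2) * logDetPsi;
}
```

With dof = 5, p = 4 and |Ψ| = 1 the gamma factors are Γ(5/2)·Γ(2)·Γ(3/2)·Γ(1) = (3/8)π, so log Z = 10 log 2 + log(3/8) + 4 log π.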
// Batch computation: several a values at a fixed dimension p = 2
const a_batch = torch.tensor([1.0, 2.0, 3.0]);
const log_multigamma_all_p2 = torch.special.multigammaln(a_batch, 2);
// [log Γ_2(1), log Γ_2(2), log Γ_2(3)]
// Domain check: a must be > (p-1)/2
const a_valid = torch.tensor([2.0]); // > (3-1)/2 = 1
const a_boundary = torch.tensor([1.0]); // = (3-1)/2 exactly
const a_invalid = torch.tensor([0.5]); // < (3-1)/2 (mathematically singular)
const p = 3;
// multigammaln(a_valid, 3) ✓ defined
// multigammaln(a_boundary, 3) diverges: Γ(0) pole at the boundary (behavior undefined in practice)
// multigammaln(a_invalid, 3) undefined (domain violation)

See Also
- PyTorch torch.special.multigammaln()
- torch.lgamma - Univariate log-gamma function (special case p=1)
- torch.special.gammaln - Alias for lgamma (univariate)
- torch.special.digamma - Digamma function (derivative of log Γ); used in variational inference