torch.distributions.Kumaraswamy
class Kumaraswamy extends Distribution
new Kumaraswamy(concentration1: number | Tensor, concentration0: number | Tensor, options?: DistributionOptions)
- readonly concentration1 (Tensor) – First concentration parameter (a).
- readonly concentration0 (Tensor) – Second concentration parameter (b).
- readonly arg_constraints (unknown)
- readonly support (unknown)
- readonly has_rsample (unknown)
- readonly mean (Tensor)
- readonly mode (Tensor)
- readonly variance (Tensor)
Kumaraswamy distribution: bounded [0,1] alternative to Beta with closed-form CDF.
Parameterized by concentration parameters a and b. Support is [0, 1]. The Kumaraswamy distribution is a close relative of the Beta distribution, but with a closed-form (tractable) PDF and CDF, making it computationally attractive. Unlike Beta, which requires the incomplete beta function, Kumaraswamy uses elementary functions only. Useful for:
- Bounded continuous data on [0, 1] (probabilities, percentages, proportions)
- Variational autoencoders (VAE) with Kumaraswamy latent variables (easier inference)
- Alternative to Beta when tractable CDF/sampling is critical
- Importance sampling and inverse transform sampling (simple inverse CDF: x = (1-(1-u)^(1/b))^(1/a))
- Reliability and survival analysis (failure time models)
- Streamlined computation in Bayesian models (closed-form solutions)
- Flow-based generative models with bounded support
Tractability Advantage: Unlike Beta, Kumaraswamy has elementary-function CDF/quantile. This makes sampling and likelihood evaluation faster, crucial for variational inference.
Relationship to Beta: Similar shape-parameter behavior; often used as a Beta approximation. For many (a, b) pairs, Kumaraswamy(a, b) ≈ Beta(a, b), but computation is faster.
- Closed-form CDF: F(x) = 1 - (1-x^a)^b (no special functions needed)
- Tractable sampling: inverse CDF F⁻¹(u) = (1-(1-u)^(1/b))^(1/a) (one-liner to invert)
- VAE advantage: Easier variational inference than Beta (no incomplete beta functions)
- Bounded support: Always [0, 1], useful for probability/proportion modeling
- Parameter interpretation: a controls left tail shape, b controls right tail
- Beta approximation: Kumaraswamy often close to Beta(a, b) but faster compute
- Special case: a=1, b=1 gives Uniform(0, 1)
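The closed-form CDF and its inverse from the list above can be written directly with elementary functions. A minimal plain-TypeScript sketch (the names `kumaCdf`, `kumaIcdf`, and `kumaSample` are illustrative helpers, not part of the library API):

```typescript
// F(x) = 1 - (1 - x^a)^b for x in [0, 1] -- the closed-form CDF
function kumaCdf(x: number, a: number, b: number): number {
  return 1 - Math.pow(1 - Math.pow(x, a), b);
}

// F^{-1}(u) = (1 - (1 - u)^(1/b))^(1/a) -- the closed-form quantile function
function kumaIcdf(u: number, a: number, b: number): number {
  return Math.pow(1 - Math.pow(1 - u, 1 / b), 1 / a);
}

// Inverse transform sampling: push Uniform(0, 1) draws through F^{-1}
function kumaSample(a: number, b: number): number {
  return kumaIcdf(Math.random(), a, b);
}
```

Note that with a = 1, b = 1 the CDF reduces to F(x) = x, recovering Uniform(0, 1) as the special case listed above.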
- a, b must be positive: a ≤ 0 or b ≤ 0 causes errors
- Mode undefined for a < 1 or b < 1: Mode formula only valid for a, b ≥ 1
- Bounded support: Not suitable for unbounded data; use Beta or other distributions
- Numerical stability: Very small a or b can cause numerical issues in log_prob
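To illustrate the numerical-stability caveat, here is a hedged scalar sketch of the log-density, using `Math.log1p` to reduce cancellation when x^a is close to 1 (`kumaLogProb` is an illustrative name, not the library API; the real `log_prob` operates on tensors):

```typescript
// log f(x) = log a + log b + (a - 1) log x + (b - 1) log(1 - x^a)
// log1p(-x^a) is more stable than log(1 - x^a) when x^a is near 1.
function kumaLogProb(x: number, a: number, b: number): number {
  return (
    Math.log(a) +
    Math.log(b) +
    (a - 1) * Math.log(x) +
    (b - 1) * Math.log1p(-Math.pow(x, a))
  );
}
```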
Examples
// Uniform on [0,1]: Kumaraswamy(1, 1) = Uniform(0, 1)
const uniform = new torch.distributions.Kumaraswamy(1, 1);
const samples = uniform.sample([1000]); // uniform on [0, 1]
// U-shaped distribution: a=0.5, b=0.5
const u_shaped = new torch.distributions.Kumaraswamy(0.5, 0.5);
const u_samples = u_shaped.sample([1000]);
// Most values near 0 or 1, few in middle
// Inverted-U (bell-shaped): a=2, b=2
const bell = new torch.distributions.Kumaraswamy(2, 2);
const bell_samples = bell.sample([1000]);
// Concentrated in middle, sparse at extremes
// Skewed right: a=0.5, b=2
const right_skew = new torch.distributions.Kumaraswamy(0.5, 2);
// More probability mass toward 1
// Variational autoencoder: Kumaraswamy latent variable
// Tractable for variational inference (closed-form CDF)
const a_param = torch.tensor([2.0, 1.5, 3.0]); // 3 latent dimensions
const b_param = torch.tensor([2.0, 2.5, 1.0]);
const latent_dist = new torch.distributions.Kumaraswamy(a_param, b_param);
const batch_size = 32;
const z = latent_dist.rsample([batch_size]); // reparameterization trick works
const log_prob = latent_dist.log_prob(z); // for variational bound
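The reason `rsample` works here is that the inverse-CDF transform is a deterministic, differentiable function of uniform noise. A scalar sketch of that reparameterization (plain numbers for illustration; the real library applies this elementwise to tensors and tracks gradients through a and b):

```typescript
// z = (1 - (1 - u)^(1/b))^(1/a), with u ~ Uniform(0, 1) held fixed.
// Gradients w.r.t. a and b flow through this expression, which is
// what makes the reparameterization trick possible.
function kumaRsample(a: number, b: number, u: number): number {
  return Math.pow(1 - Math.pow(1 - u, 1 / b), 1 / a);
}
```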
// Batched Kumaraswamy with different shapes
const a_vals = torch.tensor([0.5, 1.0, 2.0, 5.0]);
const b_vals = torch.tensor([0.5, 1.0, 2.0, 5.0]);
const dist = new torch.distributions.Kumaraswamy(a_vals, b_vals);
const batch_samples = dist.sample(); // [4]-shaped samples
// a=0.5,b=0.5: U-shaped; a=1,b=1: uniform; a=2,b=2: bell; a=5,b=5: peaked
// Percentile/quantile modeling
const km = new torch.distributions.Kumaraswamy(1, 2);
const p50 = km.icdf(torch.tensor([0.5])); // median
const p90 = km.icdf(torch.tensor([0.9])); // 90th percentile
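A quick hand-check of the median above, assuming the closed-form quantile F⁻¹(u) = (1-(1-u)^(1/b))^(1/a): for a = 1, b = 2 the median is 1 - √0.5 ≈ 0.2929.

```typescript
// Median of Kumaraswamy(1, 2) via the closed-form quantile function.
const a = 1, b = 2;
const median = Math.pow(1 - Math.pow(1 - 0.5, 1 / b), 1 / a);
console.log(median); // ≈ 0.29289
```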