torch.distributions.HalfNormal
class HalfNormal extends Distribution
new HalfNormal(scale: number | Tensor, options?: DistributionOptions)
- readonly scale(Tensor) – Scale parameter (standard deviation of the underlying normal).
- readonly arg_constraints(unknown)
- readonly support(unknown)
- readonly has_rsample(unknown)
- readonly mean(Tensor)
- readonly mode(Tensor)
- readonly variance(Tensor)
Half-Normal distribution: folded normal distribution for positive-only values.
Parameterized by scale σ. The half-normal is obtained by taking the absolute value of a Normal(0, σ) distribution: if X = |Y| where Y ~ N(0, σ), then X ~ HalfNormal(σ). Support is (0, ∞) only. Critical for Bayesian modeling because it's the standard weakly-informative prior for scale parameters. Essential for:
- Prior distributions for scale, variance, and standard deviation parameters
- Weakly informative Bayesian priors (less heavy-tailed than HalfCauchy)
- Modeling absolute deviations and measurement magnitudes
- Reliability analysis and tolerance limit specifications
- Quality control and precision specifications
- Bayesian hierarchical models (shrinkage priors)
- Folded normal models and reflected distributions
Relationship to Normal: HalfNormal(σ) = |Normal(0, σ)|. Conceptually, it's like cutting a normal distribution in half at the mean and flipping one half onto the other.
Why useful as prior: More stable than HalfCauchy (lighter tails) while still being weakly informative. Good default for scale parameters when you have weak prior information.
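The folding relationship can be checked empirically without the torch API. A minimal plain-JS sketch (Box-Muller sampling; `sampleHalfNormal` is an illustrative helper, not part of the library):

```javascript
// Sketch: draw |Normal(0, σ)| directly and compare the sample mean
// to the theoretical half-normal mean σ·√(2/π) ≈ 0.798σ.
// No torch dependency; sampleHalfNormal is an illustrative helper.
function sampleHalfNormal(scale) {
  // Box-Muller transform for one standard-normal draw
  const u1 = 1 - Math.random(); // shift to (0, 1] so log(u1) is finite
  const u2 = Math.random();
  const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
  return Math.abs(scale * z); // folding: |Normal(0, σ)| ~ HalfNormal(σ)
}

const scaleParam = 1.0;
const n = 200000;
let sum = 0;
for (let i = 0; i < n; i++) sum += sampleHalfNormal(scaleParam);
const empiricalMean = sum / n;
const theoreticalMean = scaleParam * Math.sqrt(2 / Math.PI);
console.log(empiricalMean.toFixed(3), theoreticalMean.toFixed(3));
```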
- Absolute value interpretation: Equivalent to |Normal(0, σ)|
- Mean = σ√(2/π) ≈ 0.798σ: less than the scale parameter due to folding
- Variance = σ²(1 − 2/π) ≈ 0.363σ²: less than the underlying Normal(0, σ)'s σ² due to the positive-support constraint
- Mode at 0: Highest density at boundary; PDF → 0 as x → ∞
- Posterior form: Posterior for scale in Normal model has similar form (related conjugacy)
- Light tails vs HalfCauchy: Decays exponentially (HalfCauchy decays polynomially)
- Weakly informative: Good default prior for scale parameters without strong beliefs
- Scale must be positive: scale ≤ 0 causes errors
- Support is positive: no negative values; only x > 0
- Peaked at 0: the PDF attains its maximum at 0, unlike many positive distributions (e.g. LogNormal) whose density vanishes at the origin
- Not a variance/precision prior: the half-normal is a prior on the standard deviation itself, not on the variance (inverse gamma) or the precision (gamma)
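For reference, the closed-form moments behind the readonly `mean`, `mode`, and `variance` properties can be written as plain helpers (the formulas are standard; the helper names are illustrative, not library functions):

```javascript
// Closed-form moments of HalfNormal(σ) as plain JS helpers.
const halfNormalMean = (scale) => scale * Math.sqrt(2 / Math.PI);        // σ√(2/π) ≈ 0.798σ
const halfNormalVariance = (scale) => scale * scale * (1 - 2 / Math.PI); // σ²(1 − 2/π) ≈ 0.363σ²
const halfNormalMode = () => 0; // density is maximal at the boundary x = 0

console.log(halfNormalMean(1).toFixed(4));     // ≈ 0.7979
console.log(halfNormalVariance(1).toFixed(4)); // ≈ 0.3634
```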
Examples
// Standard half-normal: scale=1
const hn = new torch.distributions.HalfNormal(1);
const samples = hn.sample([1000]); // 1000 positive samples
// Typical values: 0.5-1.5, rarely > 3
// Bayesian prior for scale parameter
// Prior for standard deviation of measurement noise
const measurement_scale = 0.1; // express weak prior knowledge
const prior = new torch.distributions.HalfNormal(measurement_scale);
const prior_samples = prior.sample([10000]); // 10000 prior samples
// Use in a model: const noise_std = prior_samples
// Different scale values affect spread
const narrow = new torch.distributions.HalfNormal(0.5); // tighter, more concentrated
const wide = new torch.distributions.HalfNormal(2.0); // wider, more dispersed
// narrow.mean ≈ 0.4, wide.mean ≈ 1.6
// Batched distributions: different scale parameters
const scales = torch.tensor([0.1, 0.5, 1.0, 2.0, 5.0]);
const dist = new torch.distributions.HalfNormal(scales); // [5] batch shape
const samples = dist.sample(); // [5] shaped samples, each from different scale
const means = dist.mean; // ~[0.08, 0.4, 0.8, 1.6, 4.0]
// Hierarchical model: half-normal prior for group-level variation
// Each group has its own scale, drawn from half-normal hyperprior
const num_groups = 5;
const group_scale_prior = new torch.distributions.HalfNormal(1);
const group_scales = group_scale_prior.sample([num_groups]); // 5 group scales
// Then: X_i_j ~ Normal(0, group_scales[i]) for observations in group i
// Quantiles and CDF
const hn = new torch.distributions.HalfNormal(1);
const cdf_val = hn.cdf(torch.tensor([0.5, 1.0, 2.0])); // cumulative probability at these points
const q = hn.icdf(torch.tensor([0.5, 0.9, 0.95])); // quantiles (median, 90%, 95%)
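For reference, the half-normal CDF has the closed form CDF(x) = erf(x / (σ√2)); icdf is its inverse. A dependency-free sketch (JS has no built-in erf, so this uses the Abramowitz–Stegun 7.1.26 approximation, absolute error < 1.5e-7; `halfNormalCdf` is an illustrative helper, not part of the torch API):

```javascript
// erf approximation (Abramowitz–Stegun 7.1.26, |error| < 1.5e-7)
function erf(x) {
  const sign = x < 0 ? -1 : 1;
  x = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * x);
  const poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
    - 0.284496736) * t + 0.254829592) * t;
  return sign * (1 - poly * Math.exp(-x * x));
}

// Half-normal CDF: the mass of Normal(0, σ) within ±x, i.e. erf(x / (σ√2))
function halfNormalCdf(x, scale) {
  return erf(x / (scale * Math.SQRT2));
}

console.log(halfNormalCdf(1.0, 1).toFixed(4)); // ≈ 0.6827 (N(0,1) mass within ±1σ)
console.log(halfNormalCdf(2.0, 1).toFixed(4)); // ≈ 0.9545
```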