torch.distributions.LogNormal
class LogNormal extends Distribution

new LogNormal(loc: number | Tensor, scale: number | Tensor, options?: DistributionOptions)
- readonly loc (Tensor) – Mean of the underlying normal distribution.
- readonly scale (Tensor) – Standard deviation of the underlying normal distribution.
- readonly arg_constraints (unknown)
- readonly support (unknown)
- readonly has_rsample (unknown)
- readonly mean (Tensor)
- readonly mode (Tensor)
- readonly variance (Tensor)
Log-Normal distribution: a continuous distribution over strictly positive values.
If X ~ LogNormal(μ, σ), then log(X) ~ Normal(μ, σ); equivalently, X is the exponential of a normally distributed variable. Essential for:
- Modeling positive-valued skewed data (sizes, durations, concentrations)
- Income and wealth distributions
- Particle sizes in physical systems
- Biological measurements (enzyme concentrations, organism sizes)
- Financial returns and stock prices (under geometric Brownian motion)
- Multiplicative processes and growth phenomena
- Right-skewed data with long tail
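The defining relationship can be checked without the library: if Z ~ N(0, 1), then exp(loc + scale·Z) is exactly LogNormal(loc, scale) distributed. A minimal plain-TypeScript sketch (Box-Muller for the normal draw; no torch required), verifying that the sample median lands near exp(loc):

```typescript
// Draw one standard normal via the Box-Muller transform.
function randn(): number {
  const u1 = 1 - Math.random(); // avoid log(0)
  const u2 = Math.random();
  return Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

// Sample X ~ LogNormal(loc, scale) as exp(loc + scale * Z).
function sampleLogNormal(loc: number, scale: number): number {
  return Math.exp(loc + scale * randn());
}

// Sanity check: the median of LogNormal(loc, scale) is exp(loc).
const draws = Array.from({ length: 100_000 }, () => sampleLogNormal(0, 1));
draws.sort((a, b) => a - b);
const sampleMedian = draws[draws.length / 2];
console.log(sampleMedian); // close to exp(0) = 1
```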
If log(X) ~ Normal(μ, σ), then X ~ LogNormal(μ, σ) where μ = loc (mean of log(X)) and σ = scale (std of log(X))
- Log-transformation: log(X) has normal distribution N(loc, scale)
- Geometric mean: the median, exp(loc), equals the geometric mean of the distribution
- Right-skewed: Mean > Median > Mode (positive skew)
- Variance relationship: More spread in log-space → more spread in original
- Multiplicative process: Product of independent lognormals stays lognormal
- Heavy tail: Extreme values more likely than in normal distribution
- Positive support: Always produces strictly positive values (never zero)
- Right-skewed: Mean is larger than median; central tendency not intuitive
- Variance can be large: variance grows exponentially with scale
- Parameter interpretation: loc and scale are on log-scale, not original scale
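The mean, mode, and variance properties listed above follow from closed-form expressions in loc (μ) and scale (σ). A quick plain-TypeScript sketch of those formulas (no torch needed), which also demonstrates the mean > median > mode ordering:

```typescript
// Closed-form moments of LogNormal(mu, sigma):
const mu = 0;
const sigma = 1;

const mean = Math.exp(mu + sigma ** 2 / 2); // exp(mu + sigma^2 / 2)
const median = Math.exp(mu); // exp(mu)
const mode = Math.exp(mu - sigma ** 2); // exp(mu - sigma^2)
const variance = (Math.exp(sigma ** 2) - 1) * Math.exp(2 * mu + sigma ** 2);

console.log(mean); // ≈ 1.6487
console.log(median); // 1
console.log(mode); // ≈ 0.3679
console.log(variance); // ≈ 4.6708
console.log(mean > median && median > mode); // true: positive skew
```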
Examples
// Basic log-normal with underlying N(0, 1)
const lognorm = new torch.distributions.LogNormal(0, 1);
lognorm.sample(); // log(X) ~ N(0, 1), so X ~ LogNormal(0, 1)
// Income distribution: right-skewed with long tail
const loc = 10.5; // log-scale location
const scale = 0.8; // log-scale spread
const income_dist = new torch.distributions.LogNormal(loc, scale);
const incomes = income_dist.sample([10000]); // 10000 income samples
// Particle size distribution
const mean_log_size = 2; // log(μm)
const std_log_size = 0.5;
const size_dist = new torch.distributions.LogNormal(mean_log_size, std_log_size);
const sizes = size_dist.sample([1000]);
// Geometric growth process
// Multiplicative process: X_t = X_0 * exp(Y_1 + Y_2 + ... + Y_t)
// where Y_i are normal increments → X_t is lognormal
const initial = 100;
const t = 252; // number of daily steps
const daily_log_mean = 0.001; // ~0.1% daily growth
const daily_log_std = 0.02; // 2% daily volatility
// Sum of t normal log-increments ~ N(t * mean, sqrt(t) * std):
const growth_dist = new torch.distributions.LogNormal(t * daily_log_mean, Math.sqrt(t) * daily_log_std);
const growth_factor = growth_dist.sample(); // X_t = initial * growth_factor
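The multiplicative identity behind this example, X_t = X_0 · exp(Y_1 + … + Y_t), can be verified directly: multiplying per-step growth factors exp(Y_i) gives the same result as exponentiating the summed log-increments. A plain-TypeScript sketch with hard-coded illustrative increments (in practice these would be N(daily_log_mean, daily_log_std) draws):

```typescript
const x0 = 100;
// Hypothetical daily log-increments Y_i, chosen for illustration only.
const increments = [0.012, -0.005, 0.02, 0.001, -0.018];

// Path 1: multiply per-step growth factors exp(Y_i).
const byProduct = increments.reduce((x, y) => x * Math.exp(y), x0);

// Path 2: exponentiate the summed log-increments.
const bySum = x0 * Math.exp(increments.reduce((s, y) => s + y, 0));

console.log(byProduct); // same value either way (up to float rounding)
console.log(Math.abs(byProduct - bySum) < 1e-9); // true
```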
// Financial modeling: log-returns
// Stock return: S_t = S_0 * exp(R) where R ~ N(μ, σ)
const mu = 0.08; // 8% expected return (annual)
const sigma = 0.15; // 15% volatility
const return_dist = new torch.distributions.LogNormal(mu, sigma);
const gross_returns = return_dist.sample([252]); // gross return factors S_t / S_0 for 252 trading days
// Batched distributions with different parameters
const locs = torch.tensor([0, 1, 2]);
const scales = torch.tensor([0.5, 1, 1.5]);
const batched_dist = new torch.distributions.LogNormal(locs, scales);
const samples = batched_dist.sample(); // one sample per (loc, scale) pair
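The density underlying log-probability evaluations has the closed form f(x) = 1/(x·σ·√(2π)) · exp(−(ln x − μ)² / (2σ²)) for x > 0. A plain-TypeScript sketch (the name logNormalPdf is illustrative, not part of the library API):

```typescript
// Lognormal probability density at x, parameterized on the log scale.
function logNormalPdf(x: number, mu: number, sigma: number): number {
  if (x <= 0) return 0; // support is strictly positive
  const z = (Math.log(x) - mu) / sigma;
  return Math.exp(-0.5 * z * z) / (x * sigma * Math.sqrt(2 * Math.PI));
}

// At x = 1 with mu = 0, sigma = 1: ln(1) = 0, so f(1) = 1/sqrt(2*pi).
console.log(logNormalPdf(1, 0, 1)); // ≈ 0.39894
```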