torch.distributions.Laplace
class Laplace extends Distribution

new Laplace(loc: number | Tensor, scale: number | Tensor, options?: DistributionOptions)

Readonly properties:
- loc (Tensor) – Location (and mean) of the distribution.
- scale (Tensor) – Scale of the distribution.
- arg_constraints (unknown)
- support (unknown)
- has_rsample (unknown)
- mean (Tensor)
- mode (Tensor)
- variance (Tensor)
- stddev (Tensor)
Laplace distribution: a continuous distribution with a sharp peak and heavy tails.
Also called the double exponential distribution, it arises as the distribution of the difference between two independent, identically distributed exponential random variables. Essential for:
- Robust regression (minimizing L1 loss = maximizing a Laplace likelihood)
- Sparse modeling and feature selection
- Jump diffusion models (sharp changes)
- Modeling data with outliers
- Error distributions in robust statistics
- Approximating cusped distributions
- Variational inference with Laplace variational posteriors
- Bayesian model selection and regularization
The probability density function: f(x) = (1/(2b)) * exp(-|x - μ| / b), where μ = loc (the location, which is also the mean) and b = scale > 0 (the width parameter)
Key properties:
- Sharp peak: discontinuous derivative at x = μ (a cusp)
- L1 interpretation: the negative Laplace log-likelihood is the L1 loss (up to a constant)
- Heavy tails: more extreme values than the normal (excess kurtosis 3 vs 0)
- Robust modeling: resistant to outliers compared to the normal distribution
- Symmetry: the distribution is symmetric around the location μ
- Sparse prior: a Laplace prior encourages sparsity in Bayesian models
Caveats:
- Discontinuous derivative: not differentiable at x = loc
- Gradient at peak: a subdifferential rather than a unique derivative
- Heavy tails: more extreme values than the normal (may be an issue for some models)
- Optimization: L1 loss (from the Laplace) can be harder to optimize than L2
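As a sanity check on the density formula above, the pdf and log-pdf can be written in a few lines of plain JavaScript with no torch dependency (`laplacePdf` and `laplaceLogPdf` are hypothetical helper names, not part of the API):

```javascript
// Laplace density f(x) = (1 / (2b)) * exp(-|x - mu| / b)
// mu = location, b = scale > 0. Note the cusp: the derivative of f
// jumps at x = mu, which is why log_prob is not differentiable there.
function laplacePdf(x, mu, b) {
  return Math.exp(-Math.abs(x - mu) / b) / (2 * b);
}

function laplaceLogPdf(x, mu, b) {
  return -Math.abs(x - mu) / b - Math.log(2 * b);
}

// Peak height at the location is 1/(2b):
console.log(laplacePdf(0, 0, 1)); // 0.5
console.log(laplacePdf(5, 5, 2)); // 0.25
```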
Examples
// Standard Laplace: location=0, scale=1
const laplace = new torch.distributions.Laplace(0, 1);
laplace.sample(); // Sharp peak at 0, heavy tails
// Shifted and scaled Laplace
const shifted = new torch.distributions.Laplace(5, 2);
shifted.sample(); // Centered at 5, wider spread
// L1 regression likelihood: robust to outliers
// y ~ Laplace(f(x), scale)
const predictions = model(x); // Model predictions
const likelihood_scale = 1; // Control outlier robustness
const likelihood = new torch.distributions.Laplace(predictions, likelihood_scale);
const log_likelihood = likelihood.log_prob(y).sum();
// L1 loss (robust to outliers) vs L2 (squared loss)
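The "L1 loss = Laplace likelihood" correspondence in the comment above can be checked directly: the negative log-likelihood for one observation is |y − μ|/b plus a constant that does not depend on the prediction, so minimizing it over μ is exactly L1 regression. A plain-JavaScript sketch (no torch; `negLogLik` is a hypothetical helper):

```javascript
// Negative Laplace log-likelihood for one observation:
//   -log f(y) = |y - mu| / b + log(2b)
// The log(2b) term is constant in mu, so the minimizer is the L1 minimizer.
function negLogLik(y, mu, b) {
  return Math.abs(y - mu) / b + Math.log(2 * b);
}

const y = 3.0, pred = 2.5, b = 1.0;
const l1 = Math.abs(y - pred);     // the absolute (L1) error: 0.5
const nll = negLogLik(y, pred, b); // 0.5 plus the constant log(2b)
console.log(Math.abs((nll - Math.log(2 * b)) - l1 / b) < 1e-12); // true
```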
// Jump diffusion process
// dX_t = μ dt + σ dW_t + dJ_t
// where J_t is Poisson jump process with Laplace jump sizes
const jump_location = 0; // Centered at zero
const jump_scale = 0.5; // Size of jumps
const jump_dist = new torch.distributions.Laplace(jump_location, jump_scale);
const jumps = jump_dist.sample([100]); // 100 jump sizes
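If no Laplace sampler is at hand, jump sizes can be drawn by an inverse-CDF transform of a uniform variate (the same transform a reparameterized `rsample` would use). `laplaceQuantile` and `sampleLaplace` below are hypothetical plain-JavaScript stand-ins for `jump_dist.sample`, not part of the API:

```javascript
// Inverse CDF (quantile function) of Laplace(mu, b) for u in (0, 1):
//   Q(u) = mu - b * sign(u - 0.5) * ln(1 - 2|u - 0.5|)
function laplaceQuantile(u, mu, b) {
  const d = u - 0.5;
  return mu - b * Math.sign(d) * Math.log(1 - 2 * Math.abs(d));
}

function sampleLaplace(mu, b) {
  return laplaceQuantile(Math.random(), mu, b);
}

// 100 jump sizes, centered at 0 with scale 0.5:
const jumpSizes = Array.from({ length: 100 }, () => sampleLaplace(0, 0.5));
console.log(laplaceQuantile(0.5, 0, 0.5)); // 0: the median is the location
```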
// Bayesian sparse modeling with Laplace prior
// Prior: θ ~ Laplace(0, λ) promotes sparsity
const lambda = 0.1; // Sparsity parameter
const prior = new torch.distributions.Laplace(0, lambda);
const log_prior = prior.log_prob(theta).sum();
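The sparsity-promoting effect of the Laplace prior can be made concrete: for a quadratic data term, the MAP update is the soft-thresholding operator, which sets small coefficients exactly to zero (the lasso effect). A plain-JavaScript sketch (`softThreshold` is a hypothetical helper, not part of the torch API):

```javascript
// MAP estimate of theta under loss (z - theta)^2 / 2 plus a Laplace(0, 1/t)
// prior (log-prior penalty t * |theta|) is the soft-thresholding operator:
//   argmin_theta (z - theta)^2 / 2 + t * |theta|
// Small |z| collapses to exactly 0, which is why Laplace priors yield sparsity.
function softThreshold(z, t) {
  return Math.sign(z) * Math.max(Math.abs(z) - t, 0);
}

console.log(softThreshold(0.05, 0.1)); // 0   (below the threshold: exact zero)
console.log(softThreshold(1.5, 0.1));  // 1.4 (shrunk toward zero)
```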
// Variational inference with a Laplace variational posterior
// (note: the classic "Laplace approximation" uses a Gaussian; here the
// posterior is instead approximated by a Laplace distribution)
const posterior_mean = learned_mean;
const posterior_scale = learned_scale;
const approximate_posterior = new torch.distributions.Laplace(
posterior_mean,
posterior_scale
);
const samples = approximate_posterior.sample([1000]);
// Comparing normal vs Laplace for robust inference
// Laplace has heavier tails, more robust to outliers
const normal = new torch.distributions.Normal(0, 1);
const matchedLaplace = new torch.distributions.Laplace(0, 1 / Math.sqrt(2)); // variance 2b² = 1, matching Normal(0, 1)
// Laplace more robust to outliers due to exponential tail
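The heavier tail is easy to quantify from the closed-form densities: at x = 4, the variance-matched Laplace puts well over ten times more density than the standard normal. A plain-JavaScript check (no torch):

```javascript
// Densities at a point, from the closed forms:
//   Laplace(0, b): exp(-|x|/b) / (2b), with b = 1/sqrt(2) so variance 2b^2 = 1
//   Normal(0, 1):  exp(-x^2/2) / sqrt(2*pi)
const b = 1 / Math.sqrt(2);
const x = 4;
const laplaceDensity = Math.exp(-Math.abs(x) / b) / (2 * b);
const normalDensity = Math.exp(-x * x / 2) / Math.sqrt(2 * Math.PI);
console.log(laplaceDensity / normalDensity); // well over 10: the Laplace tail dominates
```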
// Batched distributions
const locs = torch.tensor([0, 1, 2, 3]);
const scales = torch.tensor([0.5, 1, 1.5, 2]);
const dists = new torch.distributions.Laplace(locs, scales);
const samples = dists.sample();