torch.special.log_ndtr
function log_ndtr<S extends Shape>(input: Tensor<S, 'float32'>, _options?: SpecialUnaryOptions<S>): Tensor<S, 'float32'>

Computes the log of the normal cumulative distribution function: log(Φ(x)) = log(ndtr(x)).
The normal CDF Φ(x) ranges from ~0 (for x << 0) to 1 (for x >> 0), causing numerical underflow when computing log(Φ(x)) for large negative x. log_ndtr(x) avoids this by computing log(Φ(x)) directly using specialized algorithms that maintain precision even for extreme values. Essential for:
- Probabilistic modeling: log-likelihood computations with normal distributions
- Machine learning: loss functions based on normal probabilities (probit regression, censored likelihoods, etc.)
- Bayesian inference: marginal likelihoods involving Gaussian assumptions
- Risk modeling: extreme value statistics, tail risk computations
- Quantum mechanics: quantum well probability amplitudes
- Statistics: efficient computation of log-probabilities avoiding underflow
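The underflow problem described above can be sketched numerically. The following is a minimal Python illustration, not this library's implementation: `naive_log_ndtr` and `asymptotic_log_ndtr` are hypothetical helper names, using the identity Φ(x) = ½·erfc(−x/√2) and the large-negative expansion log Φ(x) ≈ −x²/2 − log(−x) − log √(2π).

```python
import math

SQRT2 = math.sqrt(2.0)
LOG_SQRT_2PI = 0.5 * math.log(2.0 * math.pi)

def naive_log_ndtr(x: float) -> float:
    """log(Phi(x)) computed the obvious way: underflows for very negative x."""
    phi = 0.5 * math.erfc(-x / SQRT2)  # Phi(x) expressed via erfc
    return math.log(phi)               # fails once phi underflows to 0.0

def asymptotic_log_ndtr(x: float) -> float:
    """Leading-order tail expansion for x << 0 (hypothetical helper)."""
    assert x < 0
    return -0.5 * x * x - math.log(-x) - LOG_SQRT_2PI

# Moderate tail: both routes agree to leading order
print(naive_log_ndtr(-5.0))        # ~ -15.065
print(asymptotic_log_ndtr(-5.0))   # ~ -15.03 (leading order only)

# Extreme tail: Phi underflows to exactly 0.0, so log() is unusable,
# while the expansion stays finite
phi = 0.5 * math.erfc(40.0 / SQRT2)
print(phi)                          # 0.0
print(asymptotic_log_ndtr(-40.0))  # ~ -804.61
```

This is exactly the regime where a dedicated `log_ndtr` matters: the value -804.61 is perfectly representable, but the probability exp(-804.61) is not.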
Key Properties:
- log_ndtr(x) = log(Φ(x)) = log(P(Z ≤ x)) for Z ~ N(0,1)
- log_ndtr(0) = log(0.5) ≈ -0.693 (at median)
- Monotonically increasing: log_ndtr(-∞) = -∞, log_ndtr(+∞) = 0
- Numerical advantage: avoids underflow that log(ndtr(x)) would suffer
- Symmetry: Φ(-x) = 1 - Φ(x), so log_ndtr(-x) = log(1 - exp(log_ndtr(x)))
- Related: ndtri is the inverse CDF, so ndtri(exp(log_ndtr(x))) = x
- For x < 0 (small tail prob): log_ndtr(x) is essential for avoiding underflow
- Numerical stability: the primary purpose is to avoid underflow for negative arguments
- Log of CDF: log_ndtr(x) = log(Φ(x)), direct computation maintains precision
- Monotonic: Strictly increasing, ranging from -∞ to 0
- Median: log_ndtr(0) = log(0.5) ≈ -0.693147
- Tail probability: For x << 0, log_ndtr(x) ≈ log(pdf(x)) - log(|x|) (Mills' ratio)
- Asymptotic: log_ndtr(x) ~ -(x²/2 + log(|x|√(2π))) for large negative x
- Differentiable: Smooth gradients everywhere (gradient = pdf(x)/Φ(x))
- Large negative x: ndtr(x) underflows to 0, but log_ndtr(x) remains finite
- Large positive x: log_ndtr(x) → 0 from below (always negative)
- Extreme negatives: For x < -40, the asymptotic form is more accurate than direct evaluation
- Not log of ndtri: log_ndtr ≠ log(ndtri); different functions!
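These properties can be checked numerically. Below is a Python sketch; `log_ndtr_ref` is a hypothetical double-precision reference built from erfc (accurate for moderate |x|), not this library's implementation:

```python
import math

def log_ndtr_ref(x: float) -> float:
    """Reference log(Phi(x)) via erfc; accurate in float64 for moderate |x|."""
    return math.log(0.5 * math.erfc(-x / math.sqrt(2.0)))

# Median: log_ndtr(0) = log(0.5)
assert abs(log_ndtr_ref(0.0) - math.log(0.5)) < 1e-15

# Monotonically increasing, always negative
vals = [log_ndtr_ref(x) for x in (-3.0, -1.0, 0.0, 1.0, 3.0)]
assert vals == sorted(vals) and all(v < 0 for v in vals)

# Symmetry Phi(-x) = 1 - Phi(x), i.e. log_ndtr(-x) = log1p(-exp(log_ndtr(x)))
x = 1.7
assert abs(log_ndtr_ref(-x) - math.log1p(-math.exp(log_ndtr_ref(x)))) < 1e-12

# Gradient d/dx log Phi(x) = pdf(x) / Phi(x), via central finite differences
h = 1e-6  # hypothetical step size for the check
numeric = (log_ndtr_ref(x + h) - log_ndtr_ref(x - h)) / (2 * h)
analytic = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi) / math.exp(log_ndtr_ref(x))
assert abs(numeric - analytic) < 1e-5
```

The gradient identity pdf(x)/Φ(x) is the inverse Mills ratio, which is why smooth gradients survive even deep in the left tail.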
Parameters
input: Tensor<S, 'float32'> – Input tensor with real z-score values (any range; precision is critical for x < -6)
_options: SpecialUnaryOptions<S> – optional
Returns
Tensor<S, 'float32'> – Tensor with log(Φ(x)) values (unbounded below, bounded above by 0)

Examples
// Log of CDF: avoids underflow for negative z-scores
const x = torch.tensor([-2, -1, 0, 1, 2]);
const log_cdf = torch.special.log_ndtr(x); // log(Φ(x))
// x=-2: ≈-3.783, x=0: ≈-0.693, x=2: ≈-0.0230
// Direct log(ndtr(x)) would underflow for x << -5

// Probabilistic model: Gaussian likelihood
const observations = torch.randn([1000]); // Data
const mu = torch.tensor([0.5]);
const sigma = torch.tensor([1.0]);
const z = observations.sub(mu).div(sigma);
const log_likelihood = torch.special.log_ndtr(z).sum(); // Total log-likelihood
// Directly computing log(ndtr(z)) would underflow for large negative z

// Extreme value probability: rare event quantiles
const rare_z_scores = torch.tensor([-5, -6, -7, -8, -10]); // Extreme tails
const log_tail_probs = torch.special.log_ndtr(rare_z_scores); // log(Φ(z)) stable
// Computes log-probabilities down to ~10^-24 (e.g. Φ(-10) ≈ 7.6e-24) without underflow
// Direct ndtr(z) underflows to 0 (around z < -14 in float32, z < -38 in float64)

// Risk analysis: value-at-risk in Gaussian model
const confidence = 0.95;
const z_crit = torch.special.ndtri(torch.tensor([confidence])); // Critical z
const log_prob = torch.special.log_ndtr(z_crit); // log(P(Z < z_crit)) stable
// Risk metric: log-probability of not exceeding risk threshold

See Also
- PyTorch torch.special.log_ndtr()
- torch.special.ndtr - Forward CDF: Φ(x) (underflows for x << 0)
- torch.special.ndtri - Inverse CDF: Φ⁻¹(p) (complements log_ndtr)
- torch.special.erfcx - Similar scaling for error function family
- torch.special.erf - Error function (related to normal CDF)