torch.std_mean
function std_mean<D extends DType = DType, Dev extends DeviceType = DeviceType>(input: Tensor<Shape, D, Dev>, options?: StdVarMeanOptions): { std: Tensor<Shape, D, Dev>; mean: Tensor<Shape, D, Dev> }
function std_mean<D extends DType = DType, Dev extends DeviceType = DeviceType>(input: Tensor<Shape, D, Dev>, dim: number, correction: number, keepdim: boolean, options?: StdVarMeanOptions): { std: Tensor<Shape, D, Dev>; mean: Tensor<Shape, D, Dev> }
Computes both the standard deviation and mean along a dimension.
Computes the standard deviation and the mean in a single pass, returning both results as an object. Useful for:
- Batch normalization: need both μ and σ for normalization
- Standardization: computing statistics for data preprocessing
- Statistical analysis: reporting mean ± std together
- Numerical efficiency: computing both in one pass (faster than separate calls)
- Model fitting: Gaussian fitting and parameter estimation
- Data quality: checking distribution statistics
This is more efficient than calling std() and mean() separately: the mean is computed once and reused in the std calculation, avoiding redundant computation.
- Computational efficiency: Single pass computes both std and mean
- Correction parameter: 0 for population, 1 for sample statistics
- Broadcasting: keepdim=true for broadcasting with other operations
- Gradient flow: Both std and mean have gradient functions
- Return format: Object with std, mean properties for destructuring
- Efficiency comparison: std_mean() is faster than std() + mean() separately
- Correction semantics: the correction value changes the divisor — n with correction=0, n−1 with correction=1
- Numerical stability: May have precision issues with very large/small values
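To illustrate the single-pass computation and the correction semantics described above, here is a minimal standalone sketch over a plain number array using Welford's algorithm (a numerically stable one-pass method). The function name `stdMean` and its signature are illustrative only, not part of this library's API:

```typescript
// Illustrative sketch, not the library implementation.
// Single-pass mean + standard deviation via Welford's algorithm.
function stdMean(values: number[], correction: number = 1): { std: number; mean: number } {
  let count = 0;
  let mean = 0;
  let m2 = 0; // running sum of squared deviations from the current mean
  for (const x of values) {
    count += 1;
    const delta = x - mean;
    mean += delta / count;
    m2 += delta * (x - mean); // second factor uses the updated mean
  }
  // correction=1: sample std (divide by n-1); correction=0: population std (divide by n)
  const std = Math.sqrt(m2 / (count - correction));
  return { std, mean };
}

console.log(stdMean([1, 2, 3]));    // { std: 1, mean: 2 }  (sample, n-1)
console.log(stdMean([1, 2, 3], 0)); // std ≈ 0.816, mean: 2 (population, n)
```

The mean is accumulated once and reused inside the same loop for the deviation terms, which is exactly why a fused std/mean operation avoids the redundant pass that separate std() and mean() calls would require.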
Parameters
input – Tensor<Shape, D, Dev> – the input tensor
dim – number – the dimension to reduce (second overload)
correction – number – 0 for population statistics, 1 for sample statistics (second overload)
keepdim – boolean – whether the reduced dimension is retained with size 1 (second overload)
options – StdVarMeanOptions – optional
Returns
{ std: Tensor<Shape, D, Dev>; mean: Tensor<Shape, D, Dev> } – Object with std and mean tensors computed along the dimension
Examples
// Batch normalization parameters
const x = torch.tensor([[1, 2, 3], [4, 5, 6]]);
const { std, mean } = torch.std_mean(x, 1, 0, false);
// std: [0.816, 0.816] (population, correction=0), mean: [2, 5]
// Efficient standardization (z-score normalization)
const features = torch.randn(1000, 50);
const { mean: mu, std: sigma } = torch.std_mean(features, 0, 1, true);
const standardized = features.sub(mu).div(sigma.add(1e-6));
// Statistical quality check: distribution validation
const batch = torch.randn(32, 100);
const { mean: batch_means, std: batch_stds } = torch.std_mean(batch, 1, 1, false);
const valid = batch_means.abs().lt(2).all(); // Means close to 0?
// Feature scaling with statistics
const raw_data = torch.tensor([[1, 100, 1000], [2, 200, 2000]]);
const { mean: col_means, std: col_stds } = torch.std_mean(raw_data, 0, 1, false);
// Use col_means and col_stds for whitening
// Model fitting: estimating Gaussian parameters
const observations = torch.randn(10000);
const { mean: mu_est, std: sigma_est } = torch.std_mean(observations);
// mu_est ≈ 0, sigma_est ≈ 1 for standard normal
See Also
- PyTorch torch.std_mean()
- std - Compute std alone (use if you only need the std)
- mean - Compute mean alone (use if you only need the mean)
- var_mean - Get variance and mean (similar compound operation)
- normalize - Normalize using mean and std