torch.count_nonzero
function count_nonzero<S extends Shape, D extends DType, Dev extends DeviceType>(input: Tensor<S, D, Dev>, options?: CountNonzeroOptions): Tensor<DynamicShape, D, Dev>
function count_nonzero<S extends Shape, D extends DType, Dev extends DeviceType>(input: Tensor<S, D, Dev>, dim: number): Tensor<DynamicShape, D, Dev>
Counts the number of non-zero values in a tensor.
Counts how many elements are non-zero (not equal to 0). Useful for:
- Sparsity analysis: how sparse is a tensor? (fraction of zeros)
- Data validation: how many valid (non-zero) values exist?
- Feature selection: which features have non-zero values?
- Model debugging: checking weight sparsity and activation patterns
- Memory efficiency: estimating compressed storage requirements
- Data quality: finding completely empty dimensions or batches
Counts elements where value ≠ 0. Works with any dtype (int, float, bool). For floating point, small near-zero values (like 1e-10) still count as non-zero.
- Works with any dtype: Counts non-zero for int, float, bool, complex
- Floating point precision: 1e-10 is still non-zero (uses != not absolute threshold)
- Boolean tensors: true (1) counts as non-zero, false (0) does not
- Global vs per-dim: Without dim returns scalar, with dim returns tensor
- Useful for sparsity: Combined with total size gives sparsity ratio
- Not a threshold: even 1e-30 counts as non-zero; use a comparison (e.g. abs().gt(eps)) for threshold-based counting
- Performance: scans the entire tensor, O(n) where n is the total number of elements
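The strict != 0 semantics described above can be checked against NumPy, whose np.count_nonzero behaves the same way (a reference sketch in NumPy, not this library's implementation):

```python
import numpy as np

x = np.array([[0.0, 1e-10, 2.0], [3.0, 0.0, 1e-30]])

# Strict != 0 semantics: tiny near-zero values still count
print(np.count_nonzero(x))                # 4 (1e-10 and 1e-30 both count)

# Threshold-based counting instead: combine abs() with a comparison
eps = 1e-6
print(np.count_nonzero(np.abs(x) > eps))  # 2 (only 2.0 and 3.0)

# Per-dimension counts, analogous to passing dim
print(np.count_nonzero(x, axis=0))        # [1 1 2] - per column
print(np.count_nonzero(x, axis=1))        # [2 2] - per row
```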
Parameters
input: Tensor<S, D, Dev> - The input tensor (any dtype, any shape)
options: CountNonzeroOptions (optional)
Returns
Tensor<DynamicShape, D, Dev> - If dim is undefined: a scalar tensor with the total count of non-zero elements. If dim is specified: a tensor with counts per slice along that dimension.
Examples
// Count all non-zero elements
const x = torch.tensor([[0, 1, 2], [3, 0, 4]]);
torch.count_nonzero(x); // 5 (scalar)
torch.count_nonzero(x, 0); // [1, 1, 2] - non-zero per column
torch.count_nonzero(x, 1); // [2, 2] - non-zero per row
// Sparsity analysis: what fraction is non-zero?
const matrix = torch.randn(1000, 1000);
const sparse_mask = matrix.abs().lt(0.5);
const nonzero_count = torch.count_nonzero(sparse_mask.logical_not());
const sparsity = 1 - nonzero_count.item() / (1000 * 1000);
// Feature validation: which features have any non-zero values?
const batch = torch.randn(32, 256);
const feature_activity = torch.count_nonzero(batch, 0); // [256]
const dead_features = feature_activity.eq(0); // All zeros?
// Activation sparsity in neural networks
const activations = torch.relu(torch.randn(32, 512)); // ReLU sparsity
const active_units = torch.count_nonzero(activations, 0); // [512]
const sparsity_ratio = 1 - active_units.float().div(32);
// Data quality: checking for empty samples
const data = torch.randn(100, 50);
data.masked_fill_(data.lt(0), 0); // Zero out negative values
const empty_rows = torch.count_nonzero(data, 1).eq(0); // [100] - which rows are all zero?
// Compressed format efficiency: estimating storage
const sparse_tensor = torch.randn(5000, 5000);
sparse_tensor.masked_fill_(sparse_tensor.abs().lt(0.1), 0); // Zero out small values
const nonzero = torch.count_nonzero(sparse_tensor).item();
const compression_ratio = nonzero / (5000 * 5000); // ~0.92 here: |z| < 0.1 covers only ~8% of a standard normal
See Also
- PyTorch torch.count_nonzero()
- nonzero - Get indices of non-zero elements (not just count)
- sum - Sum values (different from counting non-zero)
- numel - Total number of elements (use to compute sparsity ratio)
- where - Conditional value selection based on zero/non-zero
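The sparsity-ratio pattern referenced above (count_nonzero combined with the total element count, i.e. numel) can be sketched in NumPy, where arr.size plays the role of numel (an illustrative sketch, not part of this library):

```python
import numpy as np

rng = np.random.default_rng(0)
matrix = rng.standard_normal((1000, 1000))
matrix[np.abs(matrix) < 0.1] = 0.0  # zero out small values

nonzero = np.count_nonzero(matrix)
sparsity = 1 - nonzero / matrix.size  # fraction of zeros
# For |z| < 0.1 on a standard normal, expect sparsity around 0.08
print(round(sparsity, 3))
```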