torch.cartesian_prod
Returns the Cartesian product of input tensors.
Computes all combinations by taking one element from each input tensor. Unlike combinations (which selects from a single set), the Cartesian product takes one element from each of multiple sets. The output has shape [n1*n2*...*nk, k], where ni is the length of the i-th tensor. Similar to itertools.product in Python. Essential for:
- Grid generation: Creating all combinations of parameters/coordinates
- Cross products: Computing all pairs from multiple sets
- Feature combinations: Combining features from different sources
- Hyperparameter grids: Creating search grid from parameter lists
- Nested loops: Generating all loop combinations implicitly
- Coordinate generation: Creating all (x, y, z) points from separate lists
Implementation: Recursively generates all combinations. Output is sorted in row-major order (the first input varies slowest, the last varies fastest).
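The row-major ordering is the same one itertools.product produces. A minimal sketch in plain Python (no torch dependency; the list values are illustrative stand-ins for 1-D tensors):

```python
from itertools import product

# Two input "tensors" as plain lists (illustrative stand-ins)
a = [1, 2]
b = [3, 4, 5]

# itertools.product yields tuples in row-major order:
# the first input varies slowest, the last varies fastest
rows = list(product(a, b))
print(rows)
# [(1, 3), (1, 4), (1, 5), (2, 3), (2, 4), (2, 5)]
```

Each tuple corresponds to one row of the [n1*n2, 2] output tensor.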
- Cartesian vs combinations: Cartesian product takes from each of k sets; combinations selects r items from a single set
- Order: First tensor varies slowest (outermost loop), last varies fastest
- Exponential growth: Output size is product of input sizes; can explode quickly
- 1D inputs: All inputs must be exactly 1-D
- CPU only: Currently limited to CPU device
- Array syntax: Can pass either variadic args or single array of tensors
- Output size explosion: n1*n2*...*nk rows; even moderate sizes blow up
- Example: 3 inputs of size 10 each → 1000 rows; size 20 each → 8000 rows
- 1D requirement: Will error if any input is not 1-D
- CPU only: All inputs must be on CPU device
- Memory: Product can be huge and consume significant memory/time
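Because the row count is just the product of the input lengths, it is cheap to estimate the output size before materializing anything. A minimal Python sketch (illustrative; sizes are hypothetical):

```python
from math import prod

# Row count of a Cartesian product is the product of the input lengths
sizes_small = [10, 10, 10]    # three inputs of length 10
sizes_big = [100, 100, 100]   # three inputs of length 100

rows_small = prod(sizes_small)
rows_big = prod(sizes_big)
print(rows_small)  # 1000
print(rows_big)    # 1000000 -- often too large to materialize comfortably
```

Checking this product first is a cheap guard against accidentally allocating an enormous tensor.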
Parameters
tensors: Tensor[] - Variadic list of 1-D tensors (or a single array of 1-D tensors)
Returns
AnyTensor - 2D tensor of shape [n1*n2*...*nk, k] containing all Cartesian combinations
Examples
// Simple 2-set Cartesian product
const a = torch.tensor([1, 2]);
const b = torch.tensor([3, 4, 5]);
const product = torch.cartesian_prod(a, b);
// [[1, 3], [1, 4], [1, 5], [2, 3], [2, 4], [2, 5]]
// 3-way Cartesian product
const x = torch.tensor([1, 2]);
const y = torch.tensor([3, 4]);
const z = torch.tensor([5, 6, 7]);
const product3 = torch.cartesian_prod(x, y, z);
// Shape: [2*2*3, 3] = [12, 3]
// All 12 combinations: (1,3,5), (1,3,6), (1,3,7), (1,4,5), ..., (2,4,7)
// Hyperparameter grid
const learning_rates = torch.tensor([0.001, 0.01, 0.1]);
const batch_sizes = torch.tensor([32, 64]);
const regularization = torch.tensor([0.001, 0.01]);
const grid = torch.cartesian_prod(learning_rates, batch_sizes, regularization);
// Shape: [3*2*2, 3] = [12, 3]
// All 12 hyperparameter combinations for grid search
// Coordinate grid (alternative to meshgrid)
const x_vals = torch.tensor([0, 1, 2]);
const y_vals = torch.tensor([0, 1]);
const coords = torch.cartesian_prod(x_vals, y_vals);
// [[0,0], [0,1], [1,0], [1,1], [2,0], [2,1]]
// Nested loop expansion (implicit all combinations)
const loop_coords = torch.cartesian_prod(torch.arange(3), torch.arange(2), torch.arange(4));
// 3*2*4=24 row combinations for triply-nested loop
// Bit patterns (e.g., all binary 3-bit numbers)
const bit = torch.tensor([0, 1]);
const bits3 = torch.cartesian_prod(bit, bit, bit);
// [[0,0,0], [0,0,1], [0,1,0], [0,1,1], [1,0,0], [1,0,1], [1,1,0], [1,1,1]]
// Cross-validation fold pairs
const folds = torch.tensor([0, 1, 2]);
const all_fold_pairs = torch.cartesian_prod(folds, folds);
// All (train_fold, test_fold) combinations, size 9
// Using array input (alternative syntax)
const [p, q, r] = [torch.tensor([1, 2]), torch.tensor([3]), torch.tensor([4, 5])];
const product_arr = torch.cartesian_prod([p, q, r]); // Same result as the variadic call
See Also
- PyTorch torch.cartesian_prod()
- combinations - Select r items from single set (without repetition)
- meshgrid - Broadcasting-based grid creation (more memory efficient)
- stack - Stack tensors (for simpler multi-tensor operations)
- outer - Outer product (2-set special case for vectors)