torch.promote_types
Returns the lowest common dtype that can safely hold values from both input dtypes.
Determines the appropriate output dtype for operations on tensors with different data types. Promotion follows a strict hierarchy in which broader categories subsume narrower ones: floating point > integer > boolean. Within each category, larger types dominate smaller ones (float32 > float16, int32 > int8, etc.), so mixed-dtype operations lose neither precision nor range. Useful for:
- Mixed-dtype arithmetic: Determining output dtype for operations like add, mul
- Type coercion: Automatically converting arguments to compatible types
- Type inference: Computing result dtype without explicit casting
- Numerical correctness: Ensuring results don't overflow or lose precision
- API compatibility: Matching PyTorch's automatic type promotion behavior
Properties and notes:
- Order-independent: promote_types(A, B) always equals promote_types(B, A)
- Transitive: if A promotes to B and B promotes to C, then A also promotes to C
- Bool handling: bool is the lowest-priority type and promotes to any numeric type
- Mixed signedness: mixing signed and unsigned integers promotes to int32 for safety
- No precision loss: promotion always chooses a type that can represent both inputs
- Range differences: promoting mixed int8/uint8 to int32 may differ from other systems (NumPy and PyTorch, for example, promote this pair to int16)
- Performance: promotion may trigger implicit casting in operations, adding a small overhead
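The rules above can be sketched as a small lookup over category, width, and signedness. This is an illustrative model of the documented behavior, not the library's actual implementation, and the dtype set is limited to the names mentioned on this page:

```typescript
// Illustrative sketch of the documented promotion rules; not the library's
// actual implementation. Dtype set limited to the names used on this page.
type DType = 'bool' | 'uint8' | 'int8' | 'int32' | 'uint32' | 'float16' | 'float32';

// Category hierarchy: floating point (2) > integer (1) > boolean (0)
const CATEGORY: Record<DType, number> = {
  bool: 0,
  uint8: 1, int8: 1, int32: 1, uint32: 1,
  float16: 2, float32: 2,
};

// Bit widths, used to pick the larger type within a category
const WIDTH: Record<DType, number> = {
  bool: 1, uint8: 8, int8: 8, int32: 32, uint32: 32,
  float16: 16, float32: 32,
};

const SIGNED: Record<DType, boolean> = {
  bool: false, uint8: false, uint32: false,
  int8: true, int32: true, float16: true, float32: true,
};

function promoteTypes(a: DType, b: DType): DType {
  if (a === b) return a; // same type stays the same
  const bothInt = CATEGORY[a] === 1 && CATEGORY[b] === 1;
  // Mixed integer signedness promotes to int32 for safety
  if (bothInt && SIGNED[a] !== SIGNED[b]) return 'int32';
  const cat = Math.max(CATEGORY[a], CATEGORY[b]);
  const width = Math.max(WIDTH[a], WIDTH[b]);
  // Floating point wins over integer/bool; pick a float wide enough for both
  if (cat === 2) return width <= 16 ? 'float16' : 'float32';
  // Same-signedness integers (or int/bool): larger width wins
  if (cat === 1) {
    if (!SIGNED[a] && !SIGNED[b]) return width <= 8 ? 'uint8' : 'uint32';
    return width <= 8 ? 'int8' : 'int32';
  }
  return 'bool'; // unreachable: bool/bool was handled by the a === b case
}
```

The sketch reproduces the behavior in the examples below, e.g. `promoteTypes('int8', 'uint8')` yields `'int32'` and `promoteTypes('int32', 'float16')` yields `'float32'`.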
Parameters
type1 (DType) – The first input dtype
type2 (DType) – The second input dtype
Returns
DType – The promoted dtype that can safely represent both inputs
Examples
// Integer promoted to float
torch.promote_types('float32', 'int32'); // 'float32'
torch.promote_types('int32', 'float16'); // 'float32'
// Larger float dominates smaller
torch.promote_types('float16', 'float32'); // 'float32'
torch.promote_types('float32', 'float16'); // 'float32' (order-independent)
// Same type stays the same
torch.promote_types('float32', 'float32'); // 'float32'
torch.promote_types('int8', 'int8'); // 'int8'
// Mixed integer signedness requires safe promotion
torch.promote_types('int8', 'uint8'); // 'int32' (safest common type)
torch.promote_types('int32', 'uint32'); // 'int32' (int32 is more common)
// Use in tensor operations
const a = torch.tensor([1, 2, 3], { dtype: 'int32' });
const b = torch.tensor([1.5, 2.5, 3.5], { dtype: 'float32' });
const result_dtype = torch.promote_types(a.dtype, b.dtype); // 'float32'
const result = torch.add(a.to(result_dtype), b); // Properly typed
See Also
- PyTorch torch.promote_types()
- is_floating_point_dtype - Check if dtype is floating point
- is_complex_dtype - Check if dtype is complex