torch.float_power
function float_power(input: unknown, exponent: unknown, options: unknown): Tensor

Raises input to the power of exponent using double-precision arithmetic.
The power operation is computed internally in float64 (double precision), then the result is converted back to the output dtype. This provides better accuracy than the standard power operation for critical computations. Essential for:
- Precise exponentiation: Higher accuracy than standard pow() for large exponents
- Numerical stability: Double precision reduces rounding errors
- Scientific computing: When accuracy matters more than speed
- Logarithm operations: Computing log-space exponents accurately
- Numerical methods: ODE solvers, integration requiring high precision
- Machine learning: Gradient computation requiring high-precision intermediate values
- Mathematical verification: Accurate reference implementations
Why double precision? Power operations accumulate rounding errors. Computing in float64 and then converting provides better accuracy than computing directly in float32. Cost: slightly slower.
Difference from pow():
- pow(x, y): Computes in the input dtype (float32 if input is float32)
- float_power(x, y): Computes in float64, then converts the result
- The result is usually more accurate for complex expressions
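The contrast above can be sketched in plain Python (a hypothetical illustration, not this library's API): a `struct` round-trip simulates float32 rounding after every multiplication, the way a float32 pow() would behave, while the float_power()-style path computes in float64 and rounds only once at the end. `Decimal` provides a high-precision reference.

```python
import struct
from decimal import Decimal, getcontext

def to_f32(x: float) -> float:
    """Round a Python float (float64) to the nearest float32."""
    return struct.unpack("f", struct.pack("f", x))[0]

base, n = 1.1, 100

# pow()-style: every intermediate product is rounded to float32
p32 = to_f32(base)
for _ in range(n - 1):
    p32 = to_f32(p32 * to_f32(base))

# float_power()-style: compute in float64, round to float32 once at the end
p64 = base ** n
p64_as_f32 = to_f32(p64)

# High-precision reference value of 1.1**100
getcontext().prec = 50
ref = Decimal("1.1") ** n

err_pow = abs(Decimal(p32) - ref)
err_float_power = abs(Decimal(p64_as_f32) - ref)
print(err_pow > err_float_power)  # True: a single final rounding loses less
```

The dominant error in the float32 path comes from representing 1.1 itself in float32 and then amplifying that error 100 times over; the float64 path pays only one rounding step at the very end.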
- Double precision intermediate: Computed in float64 for accuracy
- Result converted back: Output dtype matches input dtype
- Slower than pow(): Trades speed for accuracy; use pow() if speed is critical
- A negative base with a non-integer exponent produces NaN
- Large exponents can still overflow/underflow despite double precision
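Both caveats above can be illustrated with Python's stdlib (a hedged sketch, not this library's API): `math.pow` signals the negative-base case with an exception where tensor libraries typically return NaN element-wise, and a simple float64 multiplication shows that double precision only delays overflow.

```python
import math

# Negative base with a non-integer exponent: no real-valued result.
# math.pow raises ValueError here; tensor libraries typically
# return NaN element-wise instead of raising.
try:
    math.pow(-8.0, 1.0 / 3.0)
    real_result = True
except ValueError:
    real_result = False
print(real_result)  # False

# Double precision delays overflow but does not eliminate it:
# float64 tops out near 1.8e308.
big = 1e308 * 10.0
print(math.isinf(big))  # True
```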
Parameters
input: unknown - Base tensor (any shape, any dtype)
exponent: unknown - Exponent (tensor of the same shape or broadcastable, or a scalar)
options: unknown - Optional settings, including the out parameter
Returns
Tensor - Tensor with shape broadcast(input, exponent); output dtype same as input

Examples
// Basic exponentiation with higher precision
const x = torch.tensor([2, 3, 4]);
torch.float_power(x, 2); // [4, 9, 16]
// Fractional exponents: computing roots accurately
const values = torch.tensor([2, 8, 27]);
torch.float_power(values, 1/3); // Cube roots: [1.26, 2, 3]
// Large exponents: precision matters
const base = torch.tensor([1.1]);
const result = torch.float_power(base, 100); // 1.1^100 computed in double precision
// Element-wise exponents
const bases = torch.tensor([2, 3, 4, 5]);
const exponents = torch.tensor([2, 3, 4, 5]);
torch.float_power(bases, exponents); // [4, 27, 256, 3125]

See Also
- PyTorch torch.float_power()
- pow - Standard power (faster, less accurate)
- sqrt - Square root (optimized alternative to float_power(x, 0.5))
- exp - Exponential function
- log - Natural logarithm