torch.addcdiv
function addcdiv(input: unknown, tensor1: unknown, tensor2: unknown, valueOrOptions: unknown, options: unknown): Tensor
Performs element-wise division then adds to input: result = input + value * (tensor1 / tensor2).
Efficiently combines scaling-by-division with accumulation. Essential for:
- Optimizer implementations: Adam and RMSprop use addcdiv for adaptive gradient scaling
- Normalization: dividing by variance/std and then accumulating
- Adaptive methods: scaling gradients by running second-moment estimates
- Fused operation: combined division and addition in a single call for efficiency
- Memory efficiency: no intermediate tensor is allocated for the division
Note: division by zero produces Infinity/NaN values, as sketched below.
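A quick illustration of the formula and the division-by-zero behavior (a sketch, assuming standard IEEE floating-point semantics for the division):
// result = input + value * (tensor1 / tensor2); division by zero propagates Infinity/NaN
const acc = torch.tensor([1, 1, 1]);
const numerator = torch.tensor([4, 2, 0]);
const denominator = torch.tensor([2, 0, 0]);
torch.addcdiv(acc, numerator, denominator); // [3, Infinity, NaN]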
Parameters
input: unknown - The input tensor (accumulator)
tensor1: unknown - The numerator tensor
tensor2: unknown - The denominator tensor
valueOrOptions: unknown - Scaling factor (default: 1) or options object
options: unknown - Optional settings, including out and value parameters (see the sketch below)
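A hedged sketch of the two calling conventions implied by the parameter descriptions; the exact shape of the options object is assumed from the text above, not verified:
// Positional scaling factor
torch.addcdiv(torch.tensor([1, 2]), torch.tensor([2, 4]), torch.tensor([2, 2]), 2); // [3, 6]
// Options object with value and out
const result = torch.tensor([0, 0]);
torch.addcdiv(torch.tensor([1, 2]), torch.tensor([2, 4]), torch.tensor([2, 2]), { value: 2, out: result });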
Returns
Tensor – Tensor with shape = broadcast(input, tensor1 / tensor2)
Examples
// Basic usage
const input = torch.tensor([1, 2, 3]);
const num = torch.tensor([4, 6, 8]);
const den = torch.tensor([2, 3, 4]);
torch.addcdiv(input, num, den); // [3, 4, 5]
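// Broadcasting (a sketch: assumes PyTorch-style broadcasting and nested-array tensor construction)
const base = torch.tensor([10]);               // shape [1]
const nums = torch.tensor([[2, 4], [6, 8]]);   // shape [2, 2]
const dens = torch.tensor([[2, 2], [2, 2]]);   // shape [2, 2]
torch.addcdiv(base, nums, dens);               // shape [2, 2]: [[11, 12], [13, 14]]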
// Adam optimizer update (simplified)
const param = torch.randn([100]);
const m = torch.randn([100]); // First moment
const v = torch.randn([100]).abs(); // Second moment
torch.addcdiv(param, m, v.sqrt().add(1e-8), -0.001);
See Also
- PyTorch torch.addcdiv()
- addcmul - Similar operation, using multiplication instead of division
- div - Simple division