torch.autograd
Functions
- is_anomaly_detection_enabled - Check if anomaly detection is currently enabled.
- set_detect_anomaly - Set whether anomaly detection is enabled globally.
- detect_anomaly - Context manager for enabling anomaly detection.
- backward - Computes the sum of gradients of given tensors with respect to graph leaves.
- is_grad_enabled - Check if gradient computation is currently enabled globally.
- set_grad_enabled - Set whether gradient computation is enabled globally.
- no_grad - Context manager that disables gradient computation.
- enable_grad - Context manager that enables gradient computation.
- grad_mode - Context manager for explicitly setting gradient computation mode.
- is_inference_mode_enabled - Check if inference mode is currently enabled.
- inference_mode - Context manager for inference mode.
- executeBackward - Execute the backward pass using proper topological ordering.
- once_differentiable - Decorator to mark a backward function as only supporting single differentiation.
- jacobian - Computes the Jacobian of a function with respect to its inputs.
- hessian - Computes the Hessian of a scalar function with respect to its inputs.
- jvp - Computes the Jacobian-vector product (forward-mode automatic differentiation).
- vjp - Computes the vector-Jacobian product (reverse-mode automatic differentiation).
- hvp - Computes the Hessian-vector product.
- vhp - Computes the vector-Hessian product.
- grad - Computes and returns gradients of outputs with respect to inputs, without accumulating into the leaves' .grad fields.
- gradcheck - Check gradients of a function by comparing analytical and numerical gradients.
- gradgradcheck - Check gradients of gradients (second-order derivatives) of a function by comparing analytical and numerical gradients.
- saved_tensors_hooks - Context manager for registering custom hooks on tensors saved for backward.
- disable_saved_tensors_hooks - Context manager to temporarily disable saved tensors hooks.
- save_on_cpu - Context manager to save tensors on CPU during forward pass.
- allow_mutation_on_saved_tensors - Context manager under which in-place mutation of tensors saved for backward is allowed.
- set_multithreading_enabled - Enable or disable multithreading for autograd operations (no-op in JavaScript).
- profile - Creates a profiler context that records the execution time of each operation.
- emit_nvtx - Creates a context that emits NVIDIA NVTX range markers for each operation, for GPU profiling.
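These functions mirror PyTorch's torch.autograd API. A minimal sketch using PyTorch itself, assuming the semantics here match the reference implementation:

```python
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()  # y = x0^2 + x1^2

# grad: returns gradients directly instead of accumulating into x.grad
(g,) = torch.autograd.grad(y, x)
print(g)  # tensor([4., 6.]) -- dy/dx = 2x

# no_grad: operations inside the context do not record autograd history
with torch.no_grad():
    z = x * 2
print(z.requires_grad)  # False

# jacobian: full Jacobian of a function of the inputs
J = torch.autograd.functional.jacobian(lambda t: t ** 2, x)
print(J)  # elementwise square gives diag(2x): [[4., 0.], [0., 6.]]
```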
FunctionCtx
- save_for_backward - Saves given tensors for a future call to backward().
- mark_dirty - Marks given tensors as modified in an in-place operation.
- mark_non_differentiable - Marks outputs as non-differentiable.
- set_materialize_grads - Sets whether undefined incoming gradients are materialized as zero tensors before backward() is called.
Function
- apply - Apply the function to inputs.
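FunctionCtx and Function together support defining custom differentiable operations. A sketch using PyTorch's torch.autograd.Function, which this API mirrors:

```python
import torch
from torch.autograd.function import once_differentiable

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)  # stash the input for the backward pass
        return x ** 2

    @staticmethod
    @once_differentiable  # this backward does not support double backward
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x  # d(x^2)/dx = 2x

x = torch.tensor([3.0], requires_grad=True)
y = Square.apply(x)  # invoke via apply, never by calling forward directly
y.backward()
print(x.grad)  # tensor([6.])
```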
ProfilerContext
- start - Start profiling.
- stop - Stop profiling.
- key_averages - Get key averages from the profiler.
- export_chrome_trace - Export Chrome trace format.
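A typical profiler workflow, sketched with PyTorch's torch.autograd.profiler (assuming this ProfilerContext mirrors it; the "trace.json" filename is an arbitrary example):

```python
import torch

# record every operation executed inside the context
with torch.autograd.profiler.profile() as prof:
    a = torch.randn(100, 100)
    b = a @ a

# aggregate per-op statistics and render them as a text table
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))

# export in Chrome trace format, viewable at chrome://tracing
prof.export_chrome_trace("trace.json")
```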
ProfilerKeyAverages
- table - Get table representation of profiler data.
Types
- BackwardOptions - Options for backward computation.
- JacobianOptions - Options for jacobian.
- HessianOptions - Options for hessian.
- JVPOptions - Options for jvp.
- VJPOptions - Options for vjp.
- HVPOptions - Options for hvp.
- VHPOptions - Options for vhp.
- GradOptions - Options for grad computation.
- GradcheckOptions - Options for gradient checking.
- GradcheckResult - Result of gradient checking for a single input.
- PackHook - Type for a pack hook function - transforms tensors during forward for storage.
- UnpackHook - Type for an unpack hook function - reconstructs tensors during backward from packed data.
- Node - Represents a node in the autograd graph.
- ProfilerOptions - Options for the autograd profiler.
- ProfilerTableOptions - Options for ProfilerKeyAverages.table.
- EmitNvtxOptions - Options for emit_nvtx.
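These options types correspond to the keyword arguments of the matching functions. For example, fields like eps and atol in GradcheckOptions map onto the parameters of PyTorch's gradcheck (assuming the port keeps the same names and defaults):

```python
import torch

def f(x):
    return (x ** 3).sum()

# gradcheck compares analytical gradients against finite differences;
# double precision is required for the numerical estimate to be reliable
x = torch.randn(4, dtype=torch.double, requires_grad=True)
ok = torch.autograd.gradcheck(f, (x,), eps=1e-6, atol=1e-4)
print(ok)  # True
```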