torch.autograd.FunctionCtx.mark_non_differentiable
FunctionCtx.mark_non_differentiable(*tensors)
Marks outputs as non-differentiable.
This should be called at most once, from inside either the forward() or setup_context() method, and all arguments should be outputs.
This marks the outputs as not requiring gradients, which increases the efficiency of the backward computation. backward() still receives a gradient argument for each output, but for a non-differentiable output that gradient is always a zero tensor with the same shape as the output.
Parameters
tensors (Tensor...) - Output tensors that are non-differentiable.
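A minimal sketch of how this is typically used: a custom Function that returns both sorted values and the integer indices from a sort, marking the indices as non-differentiable so autograd does not try to propagate gradients through them. The class name MySort and this particular layout are illustrative, not part of the API above.

```python
import torch
from torch.autograd.function import once_differentiable

class MySort(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        sorted_vals, idx = x.sort()
        # idx is integer-valued; mark it so it does not require grad
        ctx.mark_non_differentiable(idx)
        ctx.save_for_backward(x, idx)
        return sorted_vals, idx

    @staticmethod
    @once_differentiable
    def backward(ctx, grad_sorted, grad_idx):
        # A gradient slot still exists for idx, but it is always zeros;
        # we only scatter the sorted-values gradient back to input order.
        x, idx = ctx.saved_tensors
        grad_input = torch.zeros_like(x)
        grad_input.index_put_((idx,), grad_sorted)
        return grad_input
```

After `vals, idx = MySort.apply(x)`, the returned `idx` has `requires_grad=False`, while gradients still flow normally through `vals`.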