PR #18656 adds unnecessary input in leaky_relu_grad op
Created by: sneaxiy
PR #18656 adds an unnecessary input to the leaky_relu_grad op. Grad ops of activation ops should never need both the forward input `X` and the forward output `Out` as their inputs, because `Out` can always be recalculated from `X` in the backward pass.

This change breaks the existing GPU memory optimization strategy. Please fix it ASAP. @grygielski
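To illustrate the point, here is a minimal NumPy sketch (not Paddle's actual kernel code; the function names and the `alpha` value are made up for illustration). It shows that leaky_relu's gradient can be computed from `X` alone, or equivalently from `Out` alone (since `sign(Out) == sign(X)` when `alpha > 0`), so the grad op never needs both inputs:

```python
import numpy as np

ALPHA = 0.02  # hypothetical slope for illustration only

def leaky_relu(x, alpha=ALPHA):
    """Forward: Out = X if X >= 0 else alpha * X."""
    return np.where(x >= 0, x, alpha * x)

def leaky_relu_grad_from_x(dout, x, alpha=ALPHA):
    """Backward using only the forward input X."""
    return np.where(x >= 0, dout, alpha * dout)

def leaky_relu_grad_from_out(dout, out, alpha=ALPHA):
    """Backward using only the forward output Out.

    Valid because for alpha > 0, sign(Out) == sign(X), so X is
    redundant and its buffer can be released or reused by the
    memory optimizer.
    """
    return np.where(out >= 0, dout, alpha * dout)

# Both variants agree, so keeping both X and Out alive is wasteful.
x = np.random.randn(4)
dout = np.ones_like(x)
out = leaky_relu(x)
assert np.allclose(leaky_relu_grad_from_x(dout, x),
                   leaky_relu_grad_from_out(dout, out))
```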
Related: #18707 (closed)