How to optimize the data layer in Paddle
Created by: reyoung
This question was originally asked by a Baidu user. He wants to fool a pre-trained neural network by optimizing only the data layer.
In previous versions of Paddle, the gradient of the data layer was not calculated, since it is not needed for optimizing parameters.
In PaddlePaddle Fluid, we do not separate data, parameters, and activations at all. Every variable in a neural network can be optimized.
Below I give an example of fooling a pre-trained neural network.
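To illustrate the idea independently of the Fluid API, here is a minimal NumPy sketch: the "pre-trained" network is just a fixed softmax classifier with made-up weights and shapes (all assumptions for illustration), and gradient descent is applied to the input `x` while the weights stay frozen. In Fluid the same effect is achieved by computing the gradient of the loss with respect to the data variable and updating the data instead of the parameters.

```python
# Minimal sketch (not actual Fluid code): optimize the input, not the weights.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights come from a pre-trained model; they stay frozen.
W = rng.standard_normal((784, 10)) * 0.01
b = np.zeros(10)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def loss_and_grad_wrt_x(x, target):
    """Cross-entropy loss for the target class and its gradient w.r.t. the input x."""
    probs = softmax(x @ W + b)
    loss = -np.log(probs[target])
    # d loss / d logits = probs - one_hot(target); chain rule through logits = x @ W + b.
    dlogits = probs.copy()
    dlogits[target] -= 1.0
    dx = W @ dlogits
    return loss, dx

# Start from a random "image" and push it toward the target class 3.
x = rng.standard_normal(784) * 0.1
target = 3
lr = 0.5
for step in range(200):
    loss, dx = loss_and_grad_wrt_x(x, target)
    x -= lr * dx  # update the data layer, the parameters never change

print("final loss:", loss)
print("predicted class:", int(np.argmax(softmax(x @ W + b))))
```

After a few hundred steps the classifier confidently predicts the target class, even though only the input was changed; this is exactly what optimizing the data layer in Fluid makes possible.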