# Optimizer Design

## The Problem
A PaddlePaddle program, or a block, is a sequence of operators operating on variables. A training program needs to do three kinds of work:
- the forward pass, which computes intermediate results and the cost(s),
- the backward pass, which derives gradients from intermediate results and costs, and
- the optimization pass, which updates model parameters to optimize the cost(s).
These works rely on three kinds of operators:
- forward operators,
- gradient operators, and
- optimization operators.
It's true that users should be able to create all these operators manually by calling some low-level API, but it would be much more convenient if they needed to describe only the forward pass and let PaddlePaddle create the backward and optimization operators automatically.

In this design, we propose a high-level API that automatically derives the optimization pass and operators from the forward pass.
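The three passes above can be sketched numerically for a one-parameter model (a plain-Python illustration of the concepts, not the PaddlePaddle API):

```python
# Sketch of the three passes for y = w * x with squared-error cost.
x, target = 2.0, 10.0
w = 3.0                      # model parameter

# forward pass: intermediate result and cost
y = w * x
cost = (y - target) ** 2

# backward pass: gradient of the cost w.r.t. w via the chain rule
d_cost_d_y = 2 * (y - target)
grad_w = d_cost_d_y * x

# optimization pass: plain SGD update of the parameter
learning_rate = 0.01
w -= learning_rate * grad_w
```

In the proposed API, each of these three steps becomes a group of operators in the block rather than eager Python arithmetic.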
## High-level Python API to describe the training process
Users write code to describe the network:

```python
images = layer.data("images")
labels = layer.data("labels")
w1 = pd.var("w1")
b1 = pd.var("b1")
hidden = layer.fc(images, w=w1, b=b1)
cost = layer.mse(hidden, labels)
```

The above code snippet creates the forward operators in the Block.
Users create a certain kind of Optimizer with some arguments:

```python
optimizer = AdagradOptimizer(learning_rate=0.001)
```
Users use the optimizer to `minimize` a certain `cost` by updating parameters in `parameter_list`:

```python
opt_op_list = optimizer.minimize(cost, parameter_list=[w1, b1])
```

The above code snippet creates the gradient and optimization operators in the Block. The return value of `minimize()` is a list of optimization operators that will be run by the session.

Users use a Session/Executor to run this `opt_op_list` as the target to do training:

```python
sess.run(target=opt_op_list, ...)
```
## Optimizer Python interface

```python
class Optimizer(object):
    """Optimizer base class."""

    def __init__(self):
        pass

    def create_backward_pass(self, loss, parameter_list=None):
        """Create and add gradient operators in BlockDesc to compute the
        gradients of `loss` for the parameters in `parameter_list`.

        Args:
            loss: a variable generated by the cost function.
            parameter_list: parameters whose gradients need to be computed
                and that will be updated to optimize the loss.

        Returns:
            list of (parameter, gradient) pairs.
        """
        return None

    def create_optimization_pass(self, parameters_and_grads):
        """Add optimization operators that apply gradients to variables.

        Args:
            parameters_and_grads: a list of (variable, gradient) pairs to update.

        Returns:
            optimization_op_list: a list of optimization operators that will
                update the parameters using the gradients.
        """
        return None

    def minimize(self, loss, parameter_list):
        """Add operations to minimize `loss` by updating `parameter_list`.

        This method combines the interfaces `create_backward_pass()` and
        `create_optimization_pass()` into one.
        """
        params_grads = self.create_backward_pass(loss, parameter_list)
        update_ops = self.create_optimization_pass(params_grads)
        return update_ops
```
Users can inherit from the Optimizer class above to create their own optimizers with special update logic, such as AdagradOptimizer.