Created by: guoshengCS
To support fine-tuning, add `reset_optimizer`, `layers`, and `weights` arguments to `model.load`.
Currently, a feasible way to fine-tune is shown in the YOLOv3 demo, which integrates a backbone model `backbone_model` into a full model `full_model` and calls `full_model.backbone_model.load`. However, this needs a way to ignore the saved optimizer states, so we add the `reset_optimizer` argument.
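The intended semantics of `reset_optimizer` can be sketched over plain state dicts (the `param`/`opt` layout and the `load_state` helper here are hypothetical illustrations, not the actual hapi checkpoint format):

```python
def load_state(model_state, saved, reset_optimizer=False):
    """Restore parameters; skip saved optimizer state when reset_optimizer is set.

    Both arguments are dicts with hypothetical 'param' and 'opt' sub-dicts.
    """
    model_state["param"].update(saved["param"])
    if not reset_optimizer:
        # only carry over optimizer state (momentum, LR step, ...) when not resetting
        model_state["opt"].update(saved.get("opt", {}))
    return model_state

model_state = {"param": {"fc.w": 0.0}, "opt": {"step": 0}}
saved = {"param": {"fc.w": 1.5}, "opt": {"step": 100}}
out = load_state(model_state, saved, reset_optimizer=True)
# parameters are restored from the checkpoint, optimizer state stays fresh
```

With `reset_optimizer=False` the optimizer state would be restored as well, which is the normal resume-training behavior.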
Additionally, add `layers` and `weights` arguments for partial loading. The usage is as follows:
```python
import contextlib

import paddle.fluid as fluid
from paddle.fluid.dygraph import Linear
from model import Model, Input, CrossEntropy  # import paths assume the hapi repo layout

@contextlib.contextmanager
def null_guard():  # no-op guard for static graph mode
    yield

in_size, hid_size, class_num = 16, 32, 10  # example sizes

guard = fluid.dygraph.guard() if True else null_guard()  # flip to null_guard() for static graph

inputs = [Input([None, in_size], "float32", name="x")]
labels = [Input([None, 1], "int64", name="label")]

class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self._hid = Linear(in_size, hid_size, act="sigmoid")
        self._fc = Linear(hid_size, class_num, act="softmax")

    def forward(self, x):
        x = self._hid(x)
        return self._fc(x)

with guard:
    model = MyModel()
    optim = fluid.optimizer.SGD(0.001, parameter_list=model.parameters())
    model.prepare(optim, CrossEntropy(), inputs, labels)
    # load only the parameters belonging to the _hid sub-layer
    model.load("linear_dygraph_tmp/model", layers=model._hid)
```
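The filtering that `layers` and `weights` imply can be sketched over a plain state dict (the `filter_state` helper, the prefix-matching rule, and the parameter names below are hypothetical, not the actual hapi implementation):

```python
def filter_state(saved, layers=None, weights=None):
    """Keep entries of a saved state dict selected by sub-layer prefix or exact name.

    `layers` is a list of sub-layer name prefixes, `weights` a list of
    exact parameter names; with neither given, everything is kept.
    """
    if layers is None and weights is None:
        return dict(saved)
    keep = {}
    for name, value in saved.items():
        # a parameter belongs to a sub-layer if its name starts with "<layer>."
        if layers and any(name.startswith(prefix + ".") for prefix in layers):
            keep[name] = value
        if weights and name in weights:
            keep[name] = value
    return keep

saved = {"_hid.weight": 1, "_hid.bias": 2, "_fc.weight": 3}
print(filter_state(saved, layers=["_hid"]))
# {'_hid.weight': 1, '_hid.bias': 2}
```

Passing `weights=["_fc.weight"]` instead would keep only that single parameter, which matches the partial-loading use case described above.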