Dynamic graph: after the first run, exiting the guard and starting a new guard leaves parameters unable to update
Created by: xubenben
- Version and environment info: 1) PaddlePaddle version: paddlepaddle==1.5.2 4) System environment: Python 3.7
- Training info: 1) single machine
- Problem description: in dynamic graph mode, after the first run completes, exiting the guard and starting a new guard leaves the parameters unable to update.
```python
import numpy as np
import paddle
import paddle.fluid as fluid
import paddle.fluid.dygraph as dygraph

place = fluid.CPUPlace()

def reader(num_examples):
    xs = np.random.rand(num_examples, 5).astype("float32")
    ys = np.random.rand(num_examples, 1).astype("float32")
    def reader():
        for x, y in zip(xs, ys):
            yield x, y
    return reader

def create_model():
    fc = dygraph.FC("FC", 10)
    adam = fluid.optimizer.AdamOptimizer(learning_rate=0.001)
    return fc, adam

def run_model(reader, model, optimizer):
    total_avg_loss = 0
    total_cnt = 0
    reader = paddle.batch(reader, 32)
    for batch in reader():
        x, y = zip(*batch)
        x = np.stack(x, 0)
        y = np.stack(y, 0)
        x = dygraph.to_variable(x)
        y = dygraph.to_variable(y)
        pred = model(x)
        loss = fluid.layers.square_error_cost(pred, y)
        avg_loss = fluid.layers.mean(loss)
        avg_loss.backward()
        optimizer.minimize(avg_loss)
        avg_loss = avg_loss.numpy()[0]
        total_avg_loss += avg_loss
        total_cnt += 1
        model.clear_gradients()
    print(total_avg_loss / total_cnt)

model, optimizer = create_model()

with dygraph.guard(place):
    data_reader = reader(1000)
    for epoch in range(4):
        run_model(data_reader, model, optimizer)

with dygraph.guard(place):
    for epoch in range(10):
        run_model(data_reader, model, optimizer)
```
In the last few lines of the code, I exit the current guard and then start a new one, after which the parameters no longer update. The average loss decreases during the first guard's four epochs, then stays frozen for all ten epochs under the second guard. This is the output:

```
0.4249080354347825
0.33915749890729785
0.27647794131189585
0.2311921687796712
0.2125754221342504
0.2125754221342504
0.2125754221342504
0.2125754221342504
0.2125754221342504
0.2125754221342504
0.2125754221342504
0.2125754221342504
0.2125754221342504
0.2125754221342504
```
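For reference, a possible workaround under the same setup (paddlepaddle==1.5.2) is to keep model creation and every training epoch inside a single `dygraph.guard` scope, so the parameters and optimizer never outlive the guard they were first traced under. This is a minimal sketch reusing the `create_model`, `reader`, and `run_model` helpers from the report above, not a confirmed fix for the underlying bug:

```python
import paddle.fluid as fluid
import paddle.fluid.dygraph as dygraph

place = fluid.CPUPlace()

# Sketch: create the model and run *all* epochs under one guard,
# instead of splitting the epochs across two separate guard blocks.
with dygraph.guard(place):
    model, optimizer = create_model()   # helpers defined in the report above
    data_reader = reader(1000)
    for epoch in range(4 + 10):         # the two original loops, merged
        run_model(data_reader, model, optimizer)
```

If the two-guard form is required (e.g. train, do something outside dygraph mode, then resume), the usual pattern in this version is to save the parameters before leaving the first guard and reload them after entering the second, rather than carrying live parameter objects across guard boundaries.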