Commit 2f93a9f1 authored by silingtong123, committed by lujun

test=develop,Modify the location of test_program definition (#771)

Modify the location of the test_program definition: main_program must be cloned with for_test=True before the backward and optimization ops are appended, because clone(for_test=True) does not remove any operators.
Parent 34c281a0
@@ -213,13 +213,14 @@ avg_loss = fluid.layers.mean(cost) # average the squared errors to obtain the mean loss
In the `SGD optimizer` below, `learning_rate` is the learning rate, which is related to how quickly the network training converges.
```python
-sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.001)
-sgd_optimizer.minimize(avg_loss)
#Clone main_program to get test_program
#Some operators, such as batch_norm, behave differently during training and testing; the for_test parameter marks whether the program is for training or for testing
#This api does not remove any operators, so call it before backward and optimization
test_program = main_program.clone(for_test=True)
+sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.001)
+sgd_optimizer.minimize(avg_loss)
```
### Define Training Place
...
@@ -215,13 +215,14 @@ For details, please refer to:
In the `SGD optimizer` below, `learning_rate` is the learning rate, which is related to how quickly the network training converges.
```python
-sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.001)
-sgd_optimizer.minimize(avg_loss)
#Clone main_program to get test_program
#Some operators behave differently during training and testing, e.g. batch_norm, which uses the for_test parameter to determine whether the program is for training or for testing
#This api does not remove any operators, so apply it before backward and optimization
test_program = main_program.clone(for_test=True)
+sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.001)
+sgd_optimizer.minimize(avg_loss)
```
### Define Training Place
...
@@ -255,13 +255,14 @@ avg_loss = fluid.layers.mean(cost) # average the squared errors to obtain the mean loss
In the `SGD optimizer` below, `learning_rate` is the learning rate, which is related to how quickly the network training converges.
```python
-sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.001)
-sgd_optimizer.minimize(avg_loss)
#Clone main_program to get test_program
#Some operators, such as batch_norm, behave differently during training and testing; the for_test parameter marks whether the program is for training or for testing
#This api does not remove any operators, so call it before backward and optimization
test_program = main_program.clone(for_test=True)
+sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.001)
+sgd_optimizer.minimize(avg_loss)
```
### Define Training Place
...
@@ -257,13 +257,14 @@ For details, please refer to:
In the `SGD optimizer` below, `learning_rate` is the learning rate, which is related to how quickly the network training converges.
```python
-sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.001)
-sgd_optimizer.minimize(avg_loss)
#Clone main_program to get test_program
#Some operators behave differently during training and testing, e.g. batch_norm, which uses the for_test parameter to determine whether the program is for training or for testing
#This api does not remove any operators, so apply it before backward and optimization
test_program = main_program.clone(for_test=True)
+sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.001)
+sgd_optimizer.minimize(avg_loss)
```
### Define Training Place
...
@@ -101,11 +101,11 @@ def main():
    cost = fluid.layers.square_error_cost(input=y_predict, label=y)
    avg_loss = fluid.layers.mean(cost)
+    test_program = main_program.clone(for_test=True)
    sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.001)
    sgd_optimizer.minimize(avg_loss)
-    test_program = main_program.clone(for_test=True)
    # can use CPU or GPU
    use_cuda = args.use_gpu
    place = fluid.CUDAPlace(0) if use_cuda else fluid.CPUPlace()
...
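For reference, here is a minimal, self-contained sketch of the ordering this commit enforces. It assumes the PaddlePaddle 1.x `fluid` API used in the diff; the 13-feature input shape, the `program_guard` scaffolding, and the network itself are illustrative assumptions in the style of the fit-a-line example, not part of the commit.

```python
import paddle.fluid as fluid

main_program = fluid.Program()
startup_program = fluid.Program()

with fluid.program_guard(main_program, startup_program):
    # A simple linear-regression network, mirroring the diff above.
    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
    y = fluid.layers.data(name='y', shape=[1], dtype='float32')
    y_predict = fluid.layers.fc(input=x, size=1, act=None)
    cost = fluid.layers.square_error_cost(input=y_predict, label=y)
    avg_loss = fluid.layers.mean(cost)

    # Clone BEFORE backward/optimization: clone(for_test=True) does not
    # remove operators, so cloning after minimize() would leave the
    # backward and SGD-update ops inside test_program.
    test_program = main_program.clone(for_test=True)

    sgd_optimizer = fluid.optimizer.SGD(learning_rate=0.001)
    sgd_optimizer.minimize(avg_loss)
```

If `test_program` were instead cloned after `minimize()`, it would also contain the gradient and parameter-update operators, so running it for evaluation would modify the model's parameters.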