Unverified commit 4a811fec authored by Zeng Jinle, committed by GitHub

fix 1.7 en doc, test=release/1.7 (#1916)

Parent f70c035b
......@@ -59,40 +59,41 @@ The blocks contain:
The concept of a block is the same as in general-purpose programming languages. For example, there are three blocks in the following C++ code:
``` cpp
int main(){ //block 0
    int i = 0;
    if (i<10){ //block 1
        for (int j=0;j<10;j++){ //block 2
        }
    }
    return 0;
#include <iostream>
int main() {
    int x = 5;   // block 0
    int y = 4;   // block 0
    int out;     // block 0
    if (x < y) { // block 0
        out = 1; // block 1
    } else {
        out = 0; // block 2
    }
    std::cout << out << std::endl;
    return 0;
}
```
Similarly, the following Program contains 3 blocks:
```python
import paddle.fluid as fluid # block 0
limit = fluid.layers.fill_constant_batch_size_like(
    Input=label, dtype='int64', shape=[1], value=5.0)
cond = fluid.layers.less_than(x=label, y=limit)
ie = fluid.layers.IfElse(cond)
with ie.true_block(): # block 1
    true_image = ie.input(image)
    hidden = fluid.layers.fc(input=true_image, size=100, act='tanh')
    prob = fluid.layers.fc(input=hidden, size=10, act='softmax')
    ie.output(prob)
with ie.false_block(): # block 2
    false_image = ie.input(image)
    hidden = fluid.layers.fc(
        input=false_image, size=200, act='tanh')
    prob = fluid.layers.fc(input=hidden, size=10, act='softmax')
    ie.output(prob)
prob = ie()
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[1], dtype='int64') # block 0
y = fluid.data(name='y', shape=[1], dtype='int64') # block 0

def true_block():
    return fluid.layers.fill_constant(dtype='int64', value=1, shape=[1]) # block 1

def false_block():
    return fluid.layers.fill_constant(dtype='int64', value=0, shape=[1]) # block 2

condition = fluid.layers.less_than(x, y) # block 0
out = fluid.layers.cond(condition, true_block, false_block) # block 0
```
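The block structure can also be checked from Python. The following is a small illustrative sketch, assuming the Fluid 1.7 `Program.num_blocks` property, and is meant to run right after the snippet above:
```python
# continue from the snippet above: inspect the Program that was just built
main_program = fluid.default_main_program()
print(main_program.num_blocks)  # expected: 3 (block 0 plus the two sub-blocks created by cond)
print(main_program)             # dumps every block with its variables and operators
```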
### BlockDesc and ProgramDesc
......@@ -229,8 +230,8 @@ import numpy
train_data=numpy.array([[1.0],[2.0],[3.0],[4.0]]).astype('float32')
y_true = numpy.array([[2.0],[4.0],[6.0],[8.0]]).astype('float32')
# Define the network
x = fluid.layers.data(name="x",shape=[1],dtype='float32')
y = fluid.layers.data(name="y",shape=[1],dtype='float32')
x = fluid.data(name="x",shape=[None, 1],dtype='float32')
y = fluid.data(name="y",shape=[None, 1],dtype='float32')
y_predict = fluid.layers.fc(input=x,size=1,act=None)
# define the loss function
cost = fluid.layers.square_error_cost(input=y_predict,label=y)
......@@ -299,7 +300,7 @@ As you can see from the output, the entire definition process is transformed int
BlockDesc contains defined vars and a series of ops. Take input x as an example. In the Python code, x is 1-D data of data type "float32":
```python
x = fluid.layers.data(name="x",shape=[1],dtype='float32')
x = fluid.data(name="x",shape=[None, 1],dtype='float32')
```
In BlockDesc, the variable x is described as:
```
......@@ -348,7 +349,7 @@ Since there are multiple columns of incoming and outgoing data, fluid defines tr
```python
# Start training
outs = exe.run(
feed={'x':train_data,'y':y_true},
feed={'x':train_data,'y':y_true},
fetch_list=[y_predict.name,avg_cost.name])
```
The above code specifies that train_data is passed into variable x and y_true into variable y, and that the run fetches the predicted values of y together with the cost value of the last round.
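For reference, the pieces discussed in this section can be assembled into one runnable sketch. The SGD optimizer, its learning rate, and the startup-program call below are illustrative additions rather than quotations from this guide:
```python
import numpy
import paddle.fluid as fluid

# training data for the 1-D linear regression used in this guide
train_data = numpy.array([[1.0], [2.0], [3.0], [4.0]]).astype('float32')
y_true = numpy.array([[2.0], [4.0], [6.0], [8.0]]).astype('float32')

# define the network
x = fluid.data(name="x", shape=[None, 1], dtype='float32')
y = fluid.data(name="y", shape=[None, 1], dtype='float32')
y_predict = fluid.layers.fc(input=x, size=1, act=None)

# define the loss function and an (illustrative) optimizer
cost = fluid.layers.square_error_cost(input=y_predict, label=y)
avg_cost = fluid.layers.mean(cost)
fluid.optimizer.SGD(learning_rate=0.01).minimize(avg_cost)

# initialize parameters, then run one round of training
exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
outs = exe.run(feed={'x': train_data, 'y': y_true},
               fetch_list=[y_predict.name, avg_cost.name])
print(outs)
```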
......
......@@ -193,14 +193,3 @@ def image_reader_creator(image_path, label_path, n):
reader = image_reader_creator("/path/to/image_file", "/path/to/label_file", 1024)
paddle.train(paddle.batch(reader, 128), {"image":0, "label":1}, ...)
```
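As a reminder of the reader protocol: a reader creator is an ordinary function that returns a zero-argument reader, and the reader yields one sample per iteration. The sketch below is illustrative only; the function name and the random data are hypothetical and not part of the original design:
```python
import numpy

def random_image_reader_creator(num_samples):
    # the returned reader yields one (image, label) sample at a time
    def reader():
        for _ in range(num_samples):
            image = numpy.random.uniform(-1.0, 1.0, size=(784,)).astype('float32')
            label = numpy.random.randint(0, 10)
            yield image, label
    return reader

# it can then be batched exactly like the reader above, e.g.
# paddle.batch(random_image_reader_creator(1024), batch_size=128)
```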
### How is `paddle.train` implemented
An example implementation of paddle.train is:
```python
def train(batch_reader, mapping, batch_size, total_pass):
    for pass_idx in range(total_pass):
        for mini_batch in batch_reader(): # this loop will never end in online learning.
            do_forward_backward(mini_batch, mapping)
```
......@@ -60,7 +60,7 @@
Single-card training can be performed by calling the :code:`run()` method of :code:`fluid.Executor()` to run a training
:code:`fluid.Program`. At runtime, users can pass data in through the :code:`run(feed=...)`
parameter; users can fetch persistable data through :code:`run(fetch=...)`. For example:
parameter; users can fetch output data through :code:`run(fetch=...)`. For example:
.. code-block:: python
......
......@@ -51,7 +51,7 @@ Single-card Training
#####################
Single-card training can be performed by calling :code:`run()` of :code:`fluid.Executor()` to run a training :code:`fluid.Program`.
In the runtime, feed data with :code:`run(feed=...)` and get persistable data with :code:`run(fetch=...)` . For example:
At runtime, users can feed data with :code:`run(feed=...)` and get output data with :code:`run(fetch=...)`. For example:
.. code-block:: python
......@@ -81,17 +81,12 @@ In the runtime, feed data with :code:`run(feed=...)` and get persistable data wi
loss_data, = exe.run(train_program,
                     feed={"X": x},
                     fetch_list=[loss.name])
# Or
# compiled_prog = fluid.CompiledProgram(train_program)
# loss_data, = exe.run(compiled_prog,
#                      feed={"X": x},
#                      fetch_list=[loss.name])
Notes:
# Or use CompiledProgram:
compiled_prog = fluid.CompiledProgram(train_program)
loss_data, = exe.run(compiled_prog,
                     feed={"X": x},
                     fetch_list=[loss.name])
1. For the data types supported by feed, please refer to :ref:`user_guide_feed_data_to_executor_en`.
2. The return value of :code:`Executor.run` is the value of the variables listed in :code:`fetch_list=[...]`. The fetched Variables must be persistable. :code:`fetch_list` accepts either a list of Variables or a list of variable names, and :code:`Executor.run` returns the fetch results as a list.
3. If the fetched data contains sequence information, you can set :code:`exe.run(return_numpy=False, ...)` to directly get :code:`fluid.LoDTensor` results and access the information they carry, as sketched below.
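A minimal sketch of note 3, reusing :code:`train_program`, :code:`x` and :code:`loss` from the example above (whether the fetched result actually carries LoD information depends on the network):

.. code-block:: python

    import numpy

    # ask the executor for fluid.LoDTensor results instead of numpy arrays
    loss_tensor, = exe.run(train_program,
                           feed={"X": x},
                           fetch_list=[loss.name],
                           return_numpy=False)
    print(loss_tensor.lod())         # sequence (LoD) information, if any
    print(numpy.array(loss_tensor))  # convert back to a numpy array when needed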
Multi-card Training
#######################
......
......@@ -79,7 +79,7 @@ In PaddlePaddle Fluid, the :code:`fluid.unique_name` package is used to randomly initialize
test_program = fluid.Program()
with fluid.unique_name.guard():
    with fluid.program_gurad(test_program, fluid.Program()):
    with fluid.program_guard(test_program, fluid.Program()):
        test_loss = network(is_test=True)
# fluid.default_main_program() is the train program
......
......@@ -70,7 +70,7 @@ For example:
test_program = fluid.Program()
with fluid.unique_name.guard():
    with fluid.program_gurad(test_program, fluid.Program()):
    with fluid.program_guard(test_program, fluid.Program()):
        test_loss = network(is_test=True)
# fluid.default_main_program() is the train program
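For completeness, the snippet above assumes a user-defined :code:`network` function. A purely hypothetical version, showing how the train and test programs end up sharing parameter names under :code:`fluid.unique_name.guard()`, might look like this:

.. code-block:: python

    import paddle.fluid as fluid

    def network(is_test):
        img = fluid.data(name='image', shape=[None, 784], dtype='float32')
        label = fluid.data(name='label', shape=[None, 1], dtype='int64')
        hidden = fluid.layers.fc(input=img, size=100, act='relu')
        hidden = fluid.layers.dropout(hidden, dropout_prob=0.5, is_test=is_test)
        prediction = fluid.layers.fc(input=hidden, size=10, act='softmax')
        loss = fluid.layers.cross_entropy(input=prediction, label=label)
        return fluid.layers.mean(loss)

    # the train program is built into fluid.default_main_program()
    with fluid.unique_name.guard():
        train_loss = network(is_test=False)

    # the test program reuses the same parameter names thanks to the guard
    test_program = fluid.Program()
    with fluid.unique_name.guard():
        with fluid.program_guard(test_program, fluid.Program()):
            test_loss = network(is_test=True)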
......