Commit db3cdc05 authored by lvmengsi, committed by ruri

upgrade fluid.data api (#814)

Parent 900ba733
@@ -194,8 +194,8 @@ test_reader = paddle.batch(
The purpose of the training program is to define the network structure of the model to be trained. For linear regression, it is a simple fully connected layer from input to output. More complex structures, such as convolutional neural networks and recurrent neural networks, will be introduced in later chapters. The training program must return the `average loss` as its first return value, because it is used later by the backpropagation algorithm.
```python
-x = fluid.layers.data(name='x', shape=[13], dtype='float32') # define the shape and data type of the input
-y = fluid.layers.data(name='y', shape=[1], dtype='float32') # define the shape and data type of the output
+x = fluid.data(name='x', shape=[-1, 13], dtype='float32') # define the shape and data type of the input
+y = fluid.data(name='y', shape=[-1, 1], dtype='float32') # define the shape and data type of the output
y_predict = fluid.layers.fc(input=x, size=1, act=None) # fully connected layer connecting the input and the output
main_program = fluid.default_main_program() # get the default/global main program
...
@@ -196,8 +196,8 @@ test_reader = paddle.batch(
The aim of the training program is to define the network structure of the model to be trained. For linear regression, it is a simple fully connected layer from input to output. More complex structures, such as convolutional neural networks and recurrent neural networks, will be introduced in later chapters. The training program must return the `mean error` as its first return value, because the `mean error` is used for backpropagation.
```python
-x = fluid.layers.data(name='x', shape=[13], dtype='float32') # define shape and data type of input
-y = fluid.layers.data(name='y', shape=[1], dtype='float32') # define shape and data type of output
+x = fluid.data(name='x', shape=[-1, 13], dtype='float32') # define shape and data type of input
+y = fluid.data(name='y', shape=[-1, 1], dtype='float32') # define shape and data type of output
y_predict = fluid.layers.fc(input=x, size=1, act=None) # fully connected layer connecting input and output
main_program = fluid.default_main_program() # get default/global main program
...
@@ -236,8 +236,8 @@ test_reader = paddle.batch(
The purpose of the training program is to define the network structure of the model to be trained. For linear regression, it is a simple fully connected layer from input to output. More complex structures, such as convolutional neural networks and recurrent neural networks, will be introduced in later chapters. The training program must return the `average loss` as its first return value, because it is used later by the backpropagation algorithm.
```python
-x = fluid.layers.data(name='x', shape=[13], dtype='float32') # define the shape and data type of the input
-y = fluid.layers.data(name='y', shape=[1], dtype='float32') # define the shape and data type of the output
+x = fluid.data(name='x', shape=[-1, 13], dtype='float32') # define the shape and data type of the input
+y = fluid.data(name='y', shape=[-1, 1], dtype='float32') # define the shape and data type of the output
y_predict = fluid.layers.fc(input=x, size=1, act=None) # fully connected layer connecting the input and the output
main_program = fluid.default_main_program() # get the default/global main program
...
@@ -238,8 +238,8 @@ test_reader = paddle.batch(
The aim of the training program is to define the network structure of the model to be trained. For linear regression, it is a simple fully connected layer from input to output. More complex structures, such as convolutional neural networks and recurrent neural networks, will be introduced in later chapters. The training program must return the `mean error` as its first return value, because the `mean error` is used for backpropagation.
```python
-x = fluid.layers.data(name='x', shape=[13], dtype='float32') # define shape and data type of input
-y = fluid.layers.data(name='y', shape=[1], dtype='float32') # define shape and data type of output
+x = fluid.data(name='x', shape=[-1, 13], dtype='float32') # define shape and data type of input
+y = fluid.data(name='y', shape=[-1, 1], dtype='float32') # define shape and data type of output
y_predict = fluid.layers.fc(input=x, size=1, act=None) # fully connected layer connecting input and output
main_program = fluid.default_main_program() # get default/global main program
...
@@ -87,8 +87,8 @@ def main():
        batch_size=batch_size)
    # feature vector of length 13
-    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
-    y = fluid.layers.data(name='y', shape=[1], dtype='float32')
+    x = fluid.data(name='x', shape=[-1, 13], dtype='float32')
+    y = fluid.data(name='y', shape=[-1, 1], dtype='float32')
    main_program = fluid.default_main_program()
    startup_program = fluid.default_startup_program()
...
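For reference, a minimal sketch of the API difference this commit applies across the examples above, assuming a PaddlePaddle 1.x install with the `fluid` namespace (the variable names `x_old`/`x_new` are illustrative, not from the commit): `fluid.data` expects the batch dimension to be written into `shape` explicitly (here `-1` for a variable batch size) and checks the shape and dtype of fed data, whereas the older `fluid.layers.data` prepended the batch dimension implicitly.

```python
import paddle.fluid as fluid

# Old API: fluid.layers.data prepends the batch dimension implicitly,
# so shape=[13] describes a [batch_size, 13] tensor.
x_old = fluid.layers.data(name='x_old', shape=[13], dtype='float32')

# New API: fluid.data takes the full shape, with -1 standing for a
# variable batch size, and validates the shape/dtype of the data fed in.
x_new = fluid.data(name='x_new', shape=[-1, 13], dtype='float32')

# The rest of the network is unchanged: one fully connected layer maps
# the 13-dimensional feature vector to a single predicted value.
y_predict = fluid.layers.fc(input=x_new, size=1, act=None)
```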