diff --git a/01.fit_a_line/README.cn.md b/01.fit_a_line/README.cn.md
index 412bc3c148f55c7d7248e5ebe35af5932d4ff143..715bc98e5dcbbcfb340478c5de8d448fa4d4cc55 100644
--- a/01.fit_a_line/README.cn.md
+++ b/01.fit_a_line/README.cn.md
@@ -194,8 +194,8 @@ test_reader = paddle.batch(
 训练程序的目的是定义一个训练模型的网络结构。对于线性回归来讲,它就是一个从输入到输出的简单的全连接层。更加复杂的结果,比如卷积神经网络,递归神经网络等会在随后的章节中介绍。训练程序必须返回`平均损失`作为第一个返回值,因为它会被后面反向传播算法所用到。
 
 ```python
-x = fluid.layers.data(name='x', shape=[13], dtype='float32') # 定义输入的形状和数据类型
-y = fluid.layers.data(name='y', shape=[1], dtype='float32') # 定义输出的形状和数据类型
+x = fluid.data(name='x', shape=[-1, 13], dtype='float32') # 定义输入的形状和数据类型
+y = fluid.data(name='y', shape=[-1, 1], dtype='float32') # 定义输出的形状和数据类型
 y_predict = fluid.layers.fc(input=x, size=1, act=None) # 连接输入和输出的全连接层
 
 main_program = fluid.default_main_program() # 获取默认/全局主函数
diff --git a/01.fit_a_line/README.md b/01.fit_a_line/README.md
index 6f5ed25088e9b8804441dbd6c119e8447ac02810..29fdc1a6c72c4f3da0939688359fb98b296f87f6 100644
--- a/01.fit_a_line/README.md
+++ b/01.fit_a_line/README.md
@@ -196,8 +196,8 @@ test_reader = paddle.batch(
 The aim of the program for training is to define a network structure of a training model. For linear regression, it is a simple fully connected layer from input to output. More complex result, such as Convolutional Neural Network and Recurrent Neural Network, will be introduced in later chapters. It must return `mean error` as the first return value in program for training, for that `mean error` will be used for BackPropagation.
 
 ```python
-x = fluid.layers.data(name='x', shape=[13], dtype='float32') # define shape and data type of input
-y = fluid.layers.data(name='y', shape=[1], dtype='float32') # define shape and data type of output
+x = fluid.data(name='x', shape=[-1, 13], dtype='float32') # define shape and data type of input
+y = fluid.data(name='y', shape=[-1, 1], dtype='float32') # define shape and data type of output
 y_predict = fluid.layers.fc(input=x, size=1, act=None) # fully connected layer connecting input and output
 
 main_program = fluid.default_main_program() # get default/global main function
diff --git a/01.fit_a_line/index.cn.html b/01.fit_a_line/index.cn.html
index 10447f802b8e5092d7841ce130cc93c4a7b1b9f7..40d91286ee06e375b056bdea95e5a2bbf5a43328 100644
--- a/01.fit_a_line/index.cn.html
+++ b/01.fit_a_line/index.cn.html
@@ -236,8 +236,8 @@ test_reader = paddle.batch(
 训练程序的目的是定义一个训练模型的网络结构。对于线性回归来讲,它就是一个从输入到输出的简单的全连接层。更加复杂的结果,比如卷积神经网络,递归神经网络等会在随后的章节中介绍。训练程序必须返回`平均损失`作为第一个返回值,因为它会被后面反向传播算法所用到。
 
 ```python
-x = fluid.layers.data(name='x', shape=[13], dtype='float32') # 定义输入的形状和数据类型
-y = fluid.layers.data(name='y', shape=[1], dtype='float32') # 定义输出的形状和数据类型
+x = fluid.data(name='x', shape=[-1, 13], dtype='float32') # 定义输入的形状和数据类型
+y = fluid.data(name='y', shape=[-1, 1], dtype='float32') # 定义输出的形状和数据类型
 y_predict = fluid.layers.fc(input=x, size=1, act=None) # 连接输入和输出的全连接层
 
 main_program = fluid.default_main_program() # 获取默认/全局主函数
diff --git a/01.fit_a_line/index.html b/01.fit_a_line/index.html
index 2c3823eefd74cd232ac79805874b91a78175407f..184e8253bda3711afe6f3f2d60fc21aab2bcdb1c 100644
--- a/01.fit_a_line/index.html
+++ b/01.fit_a_line/index.html
@@ -238,8 +238,8 @@ test_reader = paddle.batch(
 The aim of the program for training is to define a network structure of a training model. For linear regression, it is a simple fully connected layer from input to output. More complex result, such as Convolutional Neural Network and Recurrent Neural Network, will be introduced in later chapters. It must return `mean error` as the first return value in program for training, for that `mean error` will be used for BackPropagation.
 
 ```python
-x = fluid.layers.data(name='x', shape=[13], dtype='float32') # define shape and data type of input
-y = fluid.layers.data(name='y', shape=[1], dtype='float32') # define shape and data type of output
+x = fluid.data(name='x', shape=[-1, 13], dtype='float32') # define shape and data type of input
+y = fluid.data(name='y', shape=[-1, 1], dtype='float32') # define shape and data type of output
 y_predict = fluid.layers.fc(input=x, size=1, act=None) # fully connected layer connecting input and output
 
 main_program = fluid.default_main_program() # get default/global main function
diff --git a/01.fit_a_line/train.py b/01.fit_a_line/train.py
index b2c21574f622a8ce0403454710cd8b90948ac779..2ebe21e6fd9319c18369d177684ed62186bedd82 100644
--- a/01.fit_a_line/train.py
+++ b/01.fit_a_line/train.py
@@ -87,8 +87,8 @@ def main():
         batch_size=batch_size)
 
     # feature vector of length 13
-    x = fluid.layers.data(name='x', shape=[13], dtype='float32')
-    y = fluid.layers.data(name='y', shape=[1], dtype='float32')
+    x = fluid.data(name='x', shape=[-1, 13], dtype='float32')
+    y = fluid.data(name='y', shape=[-1, 1], dtype='float32')
 
     main_program = fluid.default_main_program()
     startup_program = fluid.default_startup_program()
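For context on what the replacement API expects, below is a minimal sketch of how the migrated input layers fit into the rest of the fit-a-line network. It assumes a PaddlePaddle release that ships `fluid.data` (1.6 or later); unlike `fluid.layers.data`, which added a leading batch dimension implicitly, `fluid.data` takes the full shape, so `-1` marks a variable batch size. The loss and executor calls mirror the chapter's `train.py` and are included only so the snippet runs on its own.

```python
import numpy
import paddle.fluid as fluid

# fluid.data takes the full tensor shape; -1 leaves the batch size variable
# (fluid.layers.data used to add this leading dimension automatically).
x = fluid.data(name='x', shape=[-1, 13], dtype='float32')
y = fluid.data(name='y', shape=[-1, 1], dtype='float32')
y_predict = fluid.layers.fc(input=x, size=1, act=None)

# same loss as the chapter's train.py: squared error averaged over the batch
cost = fluid.layers.square_error_cost(input=y_predict, label=y)
avg_loss = fluid.layers.mean(cost)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())

# one dummy batch of 4 samples, only to check that the graph executes
feed = {
    'x': numpy.random.rand(4, 13).astype('float32'),
    'y': numpy.random.rand(4, 1).astype('float32'),
}
loss_val, = exe.run(fluid.default_main_program(),
                    feed=feed,
                    fetch_list=[avg_loss])
print(loss_val)
```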