Unverified commit 16c198aa, authored by W wangchaochaohu, committed by GitHub

remove some API doc test=develop (#1923)

* remove fill_constant_batch_size_like, gaussian_random_batch_size_like, uniform_random_batch_size_like_cn API doc test=develop
Parent 748387bd
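Throughout this commit, the examples drop the removed `*_batch_size_like` creation ops in favour of `fluid.layers.shape` plus a plain creation op. A minimal sketch of that replacement pattern, using a hypothetical placeholder variable `label` rather than code taken from any specific file below:

```python
import paddle.fluid as fluid

# Placeholder input whose batch dimension is unknown at compile time.
label = fluid.data(name="label", shape=[-1, 1], dtype="int64")

# Before (removed API): tie the output's dim 0 to the batch size of `label`.
# ones = fluid.layers.fill_constant_batch_size_like(
#     input=label, shape=[-1, 1], dtype="int64", value=1)

# After: read the runtime shape and pass its first element to fill_constant.
label_shape = fluid.layers.shape(label)
ones = fluid.layers.fill_constant(shape=[label_shape[0], 1], dtype="int64", value=1)
```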
@@ -104,7 +104,6 @@ fluid.layers
 layers/eye.rst
 layers/fc.rst
 layers/fill_constant.rst
-layers/fill_constant_batch_size_like.rst
 layers/filter_by_instag.rst
 layers/flatten.rst
 layers/floor.rst
@@ -113,7 +112,6 @@ fluid.layers
 layers/gather_nd.rst
 layers/gather_tree.rst
 layers/gaussian_random.rst
-layers/gaussian_random_batch_size_like.rst
 layers/gelu.rst
 layers/generate_mask_labels.rst
 layers/generate_proposal_labels.rst
@@ -308,7 +306,6 @@ fluid.layers
 layers/unfold.rst
 layers/Uniform.rst
 layers/uniform_random.rst
-layers/uniform_random_batch_size_like.rst
 layers/unique.rst
 layers/unique_with_counts.rst
 layers/unsqueeze.rst
...
.. THIS FILE IS GENERATED BY `gen_doc.{py|sh}`
    !DO NOT EDIT THIS FILE MANUALLY!

.. _api_fluid_layers_fill_constant_batch_size_like:

fill_constant_batch_size_like
-----------------------------

.. autofunction:: paddle.fluid.layers.fill_constant_batch_size_like
    :noindex:

.. THIS FILE IS GENERATED BY `gen_doc.{py|sh}`
    !DO NOT EDIT THIS FILE MANUALLY!

.. _api_fluid_layers_gaussian_random_batch_size_like:

gaussian_random_batch_size_like
-------------------------------

.. autofunction:: paddle.fluid.layers.gaussian_random_batch_size_like
    :noindex:

.. THIS FILE IS GENERATED BY `gen_doc.{py|sh}`
    !DO NOT EDIT THIS FILE MANUALLY!

.. _api_fluid_layers_uniform_random_batch_size_like:

uniform_random_batch_size_like
------------------------------

.. autofunction:: paddle.fluid.layers.uniform_random_batch_size_like
    :noindex:
@@ -108,7 +108,6 @@ fluid.layers
 layers_cn/exponential_decay_cn.rst
 layers_cn/eye_cn.rst
 layers_cn/fc_cn.rst
-layers_cn/fill_constant_batch_size_like_cn.rst
 layers_cn/fill_constant_cn.rst
 layers_cn/filter_by_instag_cn.rst
 layers_cn/flatten_cn.rst
@@ -117,7 +116,6 @@ fluid.layers
 layers_cn/gather_cn.rst
 layers_cn/gather_nd_cn.rst
 layers_cn/gather_tree_cn.rst
-layers_cn/gaussian_random_batch_size_like_cn.rst
 layers_cn/gaussian_random_cn.rst
 layers_cn/gelu_cn.rst
 layers_cn/generate_mask_labels_cn.rst
@@ -313,7 +311,6 @@ fluid.layers
 layers_cn/unfold_cn.rst
 layers_cn/Uniform_cn.rst
 layers_cn/uniform_random_cn.rst
-layers_cn/uniform_random_batch_size_like_cn.rst
 layers_cn/unique_cn.rst
 layers_cn/unique_with_counts_cn.rst
 layers_cn/unsqueeze_cn.rst
...
@@ -31,7 +31,8 @@ continuous_value_model
     input=input,
     size=[100, 11],
     dtype='float32')
-ones = fluid.layers.fill_constant_batch_size_like(input=label, shape=[-1, 1], dtype="int64", value=1)
+label_shape = fluid.layers.shape(label)
+ones = fluid.layers.fill_constant(shape=[label_shape[0], 1], dtype="int64", value=1)
 show_clk = fluid.layers.cast(fluid.layers.concat([ones, label], axis=1), dtype='float32')
 show_clk.stop_gradient = True
 input_with_cvm = fluid.layers.continuous_value_model(embed, show_clk, True)
...
.. _cn_api_fluid_layers_fill_constant_batch_size_like:

fill_constant_batch_size_like
-------------------------------

.. py:function:: paddle.fluid.layers.fill_constant_batch_size_like(input, shape, dtype, value, input_dim_idx=0, output_dim_idx=0, force_cpu=False)

This OP creates a Tensor with shape ``shape`` and data type ``dtype``, initialized with the constant given by ``value``. When the input is a LoDTensor and ``input_dim_idx`` is 0, dimension ``output_dim_idx`` of the output is set to the batch_size of ``input``. The ``stop_gradient`` attribute of the created Tensor defaults to False.

Parameters:
    - **input** (Variable) - The input Tensor or LoDTensor. Supported data types: float32, float64, int32, int64, bool.
    - **shape** (list) - The shape of the Tensor to create; the shape of the resulting LoDTensor may be adjusted according to ``input``.
    - **dtype** (np.dtype|core.VarDesc.VarType|str) - The data type of the Tensor to create. Supported data types: float32, float64, int32, int64, bool.
    - **value** (float|int) - The constant value used to initialize the output Tensor.
    - **input_dim_idx** (int) - When this is 0 and the input is a LoDTensor, dimension ``output_dim_idx`` of the created Tensor is set to the batch_size of ``input``. Default: 0.
    - **output_dim_idx** (int) - The dimension of the created Tensor that is set to the batch_size of the input. Default: 0.
    - **force_cpu** (bool) - Whether to place the returned Tensor on the CPU. Default: False; if True, the data is kept on the CPU.

Returns: the created Tensor, with data type ``dtype``.

Return type: Variable

**Code example**:

.. code-block:: python

    import paddle.fluid as fluid

    like = fluid.layers.fill_constant(shape=[1,2], value=10, dtype='int64')  # like=[[10, 10]]
    data = fluid.layers.fill_constant_batch_size_like(
        input=like, shape=[1], value=0, dtype='int64')  # like=[[10, 10]] data=[0]
\ No newline at end of file
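For readers migrating off the API documented above, a hedged sketch of an equivalent construction using the pattern this commit adopts elsewhere, assuming `fill_constant` in this Fluid version accepts Tensor elements in `shape` (as the updated examples in this commit do):

```python
import paddle.fluid as fluid

like = fluid.layers.fill_constant(shape=[1, 2], value=10, dtype='int64')  # like=[[10, 10]]

# Replacement for fill_constant_batch_size_like(input=like, shape=[1], value=0, dtype='int64'):
# dimension output_dim_idx (0) is taken from the runtime shape of `like` instead.
like_shape = fluid.layers.shape(like)
data = fluid.layers.fill_constant(shape=[like_shape[0]], value=0, dtype='int64')  # data=[0]
```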
.. _cn_api_fluid_layers_gaussian_random_batch_size_like:

gaussian_random_batch_size_like
-------------------------------

.. py:function:: paddle.fluid.layers.gaussian_random_batch_size_like(input, shape, input_dim_idx=0, output_dim_idx=0, mean=0.0, std=1.0, seed=0, dtype='float32')

Initializes a Tensor with a Gaussian random generator. The Gaussian distribution defaults to a mean of 0 and a standard deviation (std) of 1; both can be set through the corresponding parameters.

Parameters:
    - **input** (Variable) - A Tensor whose dimension ``input_dim_idx`` specifies the batch_size.
    - **shape** (tuple|list) - The shape of the output.
    - **input_dim_idx** (int) - The index of the batch_size dimension of the input. Default: 0.
    - **output_dim_idx** (int) - The index of the batch_size dimension of the output. Default: 0.
    - **mean** (float) - The mean (or center) of the Gaussian distribution. Default: 0.0.
    - **std** (float) - The standard deviation (std, or spread) of the Gaussian distribution. Default: 1.0.
    - **seed** (int) - The random seed of the random number generator; 0 means a system-generated seed. Note that if seed is not 0, this operator generates the same random numbers every time. Default: 0.
    - **dtype** (np.dtype|core.VarDesc.VarType|str) - The data type of the output, e.g. float32, float_16, int.

Returns: a Tensor of the specified shape, filled with random numbers sampled from the Gaussian distribution.

Return type: Variable

**Code example**:

.. code-block:: python

    import paddle.fluid as fluid

    input = fluid.layers.data(name="input", shape=[13, 11], dtype='float32')
    out = fluid.layers.gaussian_random_batch_size_like(
        input, shape=[-1, 11], mean=1.0, std=2.0)
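Since this page is being removed, here is a hedged, self-contained sketch (not part of the original file) of how the operator's batch-size tracking can be observed at runtime; the batch size of 4 and the random feed data are made up for illustration:

```python
import numpy as np
import paddle.fluid as fluid

# Same graph as the example above; fluid.layers.data prepends an implicit
# batch dimension, so the static shape of `input` is (-1, 13, 11).
input = fluid.layers.data(name="input", shape=[13, 11], dtype='float32')
out = fluid.layers.gaussian_random_batch_size_like(
    input, shape=[-1, 11], mean=1.0, std=2.0)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())

# Feed a batch of 4 samples; dim 0 of `out` is expected to follow that batch size.
feed_data = np.random.random((4, 13, 11)).astype('float32')
res, = exe.run(feed={"input": feed_data}, fetch_list=[out])
print(res.shape)  # expected: (4, 11)
```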
.. _cn_api_fluid_layers_uniform_random_batch_size_like:

uniform_random_batch_size_like
-------------------------------

.. py:function:: paddle.fluid.layers.uniform_random_batch_size_like(input, shape, dtype='float32', input_dim_idx=0, output_dim_idx=0, min=-1.0, max=1.0, seed=0)

This OP initializes a Tensor with random values sampled uniformly from the range [min, max), and sets the specified dimension of the output Tensor to the same size as the specified dimension of the input Tensor.

::

    Example 1:
        Given:
            input = [[0.946741 , 0.1357001 , 0.38086128]]    # input.shape=[1,3]
            shape = [2,4]
        With output_dim_idx = 0 and input_dim_idx = 0, result.shape[0] = input.shape[0], so:
            result = [[ 0.3443427 , -0.23056602, 0.3477049 , 0.06139076]]    # result.shape=[1,4]

    Example 2:
        Given:
            input = [[0.946741 , 0.1357001 , 0.38086128]]    # input.shape=[1,3]
            shape = [2,4]
            input_dim_idx = 1
            output_dim_idx = 1
        With output_dim_idx = 1 and input_dim_idx = 1, result.shape[1] = input.shape[1], so:
            result = [[-0.23133647, -0.84195036, 0.21441269],
                      [-0.08774924, 0.25605237, -0.09403259]]    # result.shape=[2,3]

Parameters:
    - **input** (Variable) - The input Tensor. Supported data type: float32.
    - **shape** (list|tuple) - The shape of the output Tensor, given as a list or tuple of int.
    - **input_dim_idx** (int, optional) - The index of the input dimension whose size is used to resize the output Tensor. Default: 0.
    - **output_dim_idx** (int, optional) - The index of the output dimension that is set to the same size as the specified dimension of the input Tensor. Default: 0.
    - **min** (float, optional) - The lower bound of the range of random values to generate; min is included in the range. Default: -1.0.
    - **max** (float, optional) - The upper bound of the range of random values to generate; max is excluded from the range. Default: 1.0.
    - **seed** (int, optional) - The random seed used to generate samples; 0 means a system-generated seed. Note that if seed is not 0, this operator generates the same random numbers every time. Default: 0.
    - **dtype** (np.dtype|core.VarDesc.VarType|str, optional) - The data type of the output Tensor. Supported data types: float32, float64. Default: float32.

Returns: a Tensor holding the random values; its data type is set by ``dtype`` and its shape is determined jointly by ``shape`` and the specified dimension of the input Tensor.

Return type: Variable

**Code example**:

.. code-block:: python

    import paddle.fluid as fluid
    import paddle.fluid.layers as layers

    input = fluid.data(name="input", shape=[13, 11], dtype='float32')
    # example 1:
    # input_dim_idx and output_dim_idx use the default values
    out1 = layers.uniform_random_batch_size_like(input, [3, 5])
    out1_shape = layers.shape(out1)  # [13, 5]
    # example 2:
    # input_dim_idx and output_dim_idx are set explicitly
    out2 = layers.uniform_random_batch_size_like(input, [3, 5], input_dim_idx=1, output_dim_idx=1)
    out2_shape = layers.shape(out2)  # [3, 11]
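A hedged sketch (not from the original file) that runs the example above and checks the shapes claimed in the comments; the zero-valued feed is arbitrary:

```python
import numpy as np
import paddle.fluid as fluid
import paddle.fluid.layers as layers

# fluid.data uses the shape as-is, so `input` is a fixed (13, 11) placeholder.
input = fluid.data(name="input", shape=[13, 11], dtype='float32')
out1 = layers.uniform_random_batch_size_like(input, [3, 5])
out2 = layers.uniform_random_batch_size_like(input, [3, 5],
                                             input_dim_idx=1, output_dim_idx=1)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
r1, r2 = exe.run(feed={"input": np.zeros((13, 11), dtype='float32')},
                 fetch_list=[out1, out2])
print(r1.shape, r2.shape)  # expected: (13, 5) (3, 11)
```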
@@ -56,64 +56,56 @@ Fluid uses :code:`sums` to sum up the input data.
 API reference: :ref:`cn_api_fluid_layers_sums`
-7. fill_constant_batch_size_like
----------------------------------
-Fluid uses :code:`fill_constant_batch_size_like` to create a Tensor with a specific shape, type, and batch_size, and the Tensor's initial value can be set to an arbitrary constant. The batch_size information is determined by the tensor's :code:`input_dim_idx` and :code:`output_dim_idx`.
-API reference: :ref:`cn_api_fluid_layers_fill_constant_batch_size_like`
-8. fill_constant
+7. fill_constant
 -----------------
 Fluid uses :code:`fill_constant` to create a Tensor with a specific shape and type. The initial value of the variable can be set via :code:`value`.
 API reference: :ref:`cn_api_fluid_layers_fill_constant`
-9. assign
+8. assign
 ---------------
 Fluid uses :code:`assign` to duplicate a variable.
 API reference: :ref:`cn_api_fluid_layers_assign`
-10. argmin
+9. argmin
 --------------
 Fluid uses :code:`argmin` to compute the index of the smallest element along the specified axis of the input Tensor.
 API reference: :ref:`cn_api_fluid_layers_assign`
-11. argmax
+10. argmax
 -----------
 Fluid uses :code:`argmax` to compute the index of the largest element along the specified axis of the input Tensor.
 API reference: :ref:`cn_api_fluid_layers_argmax`
-12. argsort
+11. argsort
 ------------
 Fluid uses :code:`argsort` to sort the input Tensor along the specified axis and return both the sorted data and the corresponding index values.
 API reference: :ref:`cn_api_fluid_layers_argsort`
-13. ones
+12. ones
 -------------
 Fluid uses :code:`ones` to create a Tensor of the specified size and data type, initialized to 1.
 API reference: :ref:`cn_api_fluid_layers_ones`
-14. zeros
+13. zeros
 ---------------
 Fluid uses :code:`zeros` to create a Tensor of the specified size and data type, initialized to 0.
 API reference: :ref:`cn_api_fluid_layers_zeros`
-15. reverse
+14. reverse
 -------------------
 Fluid uses :code:`reverse` to reverse a Tensor along the specified axis.
@@ -146,4 +138,4 @@ API reference: :ref:`cn_api_fluid_create_random_int_lodtensor`
 Fluid uses :code:`reorder_lod_tensor_by_rank` to reorder the sequence information of the input LoD_Tensor in the specified order.
 API reference: :ref:`cn_api_fluid_layers_reorder_lod_tensor_by_rank`
\ No newline at end of file
@@ -56,64 +56,56 @@ Fluid uses :code:`sums` to sum up the input data.
 API reference : :ref:`api_fluid_layers_sums`
-7. fill_constant_batch_size_like
----------------------------------
-Fluid uses :code:`fill_constant_batch_size_like` to create a Tensor with a specific shape, type, and batch_size. And the initial value of the Tensor can be specified as an arbitrary constant. The batch_size information is determined by the tensor's :code:`input_dim_idx` and :code:`output_dim_idx`.
-API reference : :ref:`api_fluid_layers_fill_constant_batch_size_like`
-8. fill_constant
+7. fill_constant
 -----------------
 Fluid uses :code:`fill_constant` to create a Tensor with a specific shape and type. The initial value of this variable can be set via :code:`value`.
 API reference : :ref:`api_fluid_layers_fill_constant`
-9. assign
+8. assign
 ---------------
 Fluid uses :code:`assign` to duplicate a variable.
 API reference : :ref:`api_fluid_layers_assign`
-10. argmin
+9. argmin
 --------------
 Fluid uses :code:`argmin` to calculate the index of the smallest element on the specified axis of Tensor.
 API reference : :ref:`api_fluid_layers_argmin`
-11. argmax
+10. argmax
 -----------
 Fluid uses :code:`argmax` to calculate the index of the largest element on the specified axis of Tensor.
 API reference : :ref:`api_fluid_layers_argmax`
-12. argsort
+11. argsort
 ------------
 Fluid uses :code:`argsort` to sort the input Tensor on the specified axis and it will return the sorted data variables and their corresponding index values.
 API reference : :ref:`api_fluid_layers_argsort`
-13. ones
+12. ones
 -------------
 Fluid uses :code:`ones` to create a Tensor of the specified size and data type with an initial value of 1.
 API reference : :ref:`api_fluid_layers_ones`
-14. zeros
+13. zeros
 ---------------
 Fluid uses :code:`zeros` to create a Tensor of the specified size and data type with an initial value of zero.
 API reference : :ref:`api_fluid_layers_zeros`
-15. reverse
+14. reverse
 -------------------
 Fluid uses :code:`reverse` to invert Tensor along the specified axis.
@@ -146,4 +138,4 @@ API reference : :ref:`api_fluid_create_random_int_lodtensor`
 Fluid uses :code:`reorder_lod_tensor_by_rank` to reorder the sequence information of the input LoD_Tensor in the specified order.
 API reference : :ref:`api_fluid_layers_reorder_lod_tensor_by_rank`
\ No newline at end of file
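The renumbered entries above form a catalogue of tensor creation and query ops; a short hedged sketch combining a few of them (shapes and values chosen arbitrarily for illustration):

```python
import paddle.fluid as fluid

# fill_constant / ones / zeros create tensors with a fixed initial value.
c = fluid.layers.fill_constant(shape=[2, 3], dtype='float32', value=1.5)
o = fluid.layers.ones(shape=[2, 3], dtype='float32')
z = fluid.layers.zeros(shape=[2, 3], dtype='float32')

# argmax returns the index of the largest element along the given axis.
idx = fluid.layers.argmax(c + o + z, axis=1)
```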
@@ -285,10 +285,11 @@ with fluid.program_guard(dg_program):
     dg_logit = D(g_img)
     # compute the loss of the generated image being classified as a real sample
+    noise_shape = fluid.layers.shape(noise)
     dg_loss = loss(
         dg_logit,
-        fluid.layers.fill_constant_batch_size_like(
-            input=noise, dtype='float32', shape=[-1, 1], value=1.0))
+        fluid.layers.fill_constant(
+            dtype='float32', shape=[noise_shape[0], 1], value=1.0))
 ```
 Adam is used as the optimizer to optimize the loss for discriminating real images and the loss for discriminating generated images, respectively.
...
@@ -284,10 +284,11 @@ with fluid.program_guard(dg_program):
     dg_logit = D(g_img)
     # Calculate the loss of the generated image as the real sample
+    noise_shape = fluid.layers.shape(noise)
     dg_loss = loss(
         dg_logit,
-        fluid.layers.fill_constant_batch_size_like(
-            input=noise, dtype='float32', shape=[-1, 1], value=1.0))
+        fluid.layers.fill_constant(
+            dtype='float32', shape=[noise_shape[0], 1], value=1.0))
 ```
 Adam is used as the optimizer to distinguish the loss of the real picture and the loss of the generated picture.
...
@@ -74,11 +74,11 @@ def train(args):
     g_program_test = dg_program.clone(for_test=True)
     dg_logit = D(g_img)
+    noise_shape = fluid.layers.shape(noise)
     dg_loss = loss(dg_logit,
-                   fluid.layers.fill_constant_batch_size_like(
-                       input=noise,
+                   fluid.layers.fill_constant(
                        dtype='float32',
-                       shape=[-1, 1],
+                       shape=[noise_shape[0], 1],
                        value=1.0))
     opt = fluid.optimizer.Adam(learning_rate=LEARNING_RATE)
...
@@ -327,10 +327,11 @@ with fluid.program_guard(dg_program):
     dg_logit = D(g_img)
     # compute the loss of the generated image being classified as a real sample
+    noise_shape = fluid.layers.shape(noise)
     dg_loss = loss(
         dg_logit,
-        fluid.layers.fill_constant_batch_size_like(
-            input=noise, dtype='float32', shape=[-1, 1], value=1.0))
+        fluid.layers.fill_constant(
+            dtype='float32', shape=[noise_shape[0], 1], value=1.0))
 ```
 Adam is used as the optimizer to optimize the loss for discriminating real images and the loss for discriminating generated images, respectively.
...
@@ -326,10 +326,11 @@ with fluid.program_guard(dg_program):
     dg_logit = D(g_img)
     # Calculate the loss of the generated image as the real sample
+    noise_shape = fluid.layers.shape(noise)
     dg_loss = loss(
         dg_logit,
-        fluid.layers.fill_constant_batch_size_like(
-            input=noise, dtype='float32', shape=[-1, 1], value=1.0))
+        fluid.layers.fill_constant(
+            dtype='float32', shape=[noise_shape[0], 1], value=1.0))
 ```
 Adam is used as the optimizer to distinguish the loss of the real picture and the loss of the generated picture.
...
@@ -89,8 +89,9 @@ def deconv(x,
 def conv_cond_concat(x, y):
     """Concatenate conditioning vector on feature map axis."""
-    ones = fluid.layers.fill_constant_batch_size_like(
-        x, [-1, y.shape[1], x.shape[2], x.shape[3]], "float32", 1.0)
+    x_shape = fluid.layers.shape(x)
+    ones = fluid.layers.fill_constant(
+        x, [x_shape[0], y.shape[1], x.shape[2], x.shape[3]], "float32", 1.0)
     return fluid.layers.concat([x, ones * y], 1)
...
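As rendered above, the updated call still appears to pass `x` as the first positional argument of `fill_constant`, which does not match `fill_constant(shape, dtype, value)`. A hedged sketch of what the helper presumably intends, assuming `shape` may mix Tensor and int elements as in the other updated examples of this commit:

```python
import paddle.fluid as fluid

def conv_cond_concat(x, y):
    """Concatenate conditioning vector on feature map axis."""
    # Build the ones tensor from the runtime batch size of x and the static
    # channel/spatial sizes; the stray leading `x` argument is dropped.
    x_shape = fluid.layers.shape(x)
    ones = fluid.layers.fill_constant(
        [x_shape[0], y.shape[1], x.shape[2], x.shape[3]], "float32", 1.0)
    return fluid.layers.concat([x, ones * y], 1)
```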