Unverified commit fa79f486 authored by: xsrobin, committed by: GitHub

Cherrypick1.5 (#1028)

* add 1.5.1 whl and fix some bugs (#1010)

* add windows install whl

* whl and bug

* fix some bugs

* update 1.5.1 cn API

* add url
Parent d5b5f414
......@@ -19,6 +19,7 @@ WeightedAverage
.. code-block:: python
import paddle.fluid as fluid
avg = fluid.average.WeightedAverage()
avg.add(value=2.0, weight=1)
avg.add(value=4.0, weight=2)
......
......@@ -36,6 +36,7 @@ append_backward
# network configuration
# loss computation
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
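# A possible completion of this example (sketch): build a loss and let
# append_backward add the backward ops for it.
y_predict = fluid.layers.fc(input=x, size=1, act=None)
loss = fluid.layers.square_error_cost(input=y_predict, label=y)
avg_loss = fluid.layers.mean(loss)
param_grads = fluid.backward.append_backward(avg_loss)  # [(param, grad), ...]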
......
......@@ -292,7 +292,7 @@ Data Reader Interface
Returns: the new data reader
Raises: ``ComposeNotAligned`` – the outputs of the readers are inconsistent. Does not rise when check_alignment is set to False
Raises: ``ComposeNotAligned`` – the outputs of the readers are inconsistent. No exception is raised when check_alignment is set to False
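A minimal usage sketch of this behavior (assuming the ``paddle.reader.compose`` entry point this section documents; the toy readers are illustrative):

.. code-block:: python

import paddle

def r1():
    for i in range(3):
        yield i

def r2():
    for c in 'abc':
        yield c

# The composed reader yields the readers' outputs side by side as tuples.
# Per the note above, no ComposeNotAligned is raised when
# check_alignment is set to False.
combined = paddle.reader.compose(r1, r2)
for sample in combined():
    print(sample)  # (0, 'a'), (1, 'b'), (2, 'c')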
......@@ -378,22 +378,15 @@ PipeReader通过流从一个命令中读取数据,将它的stdout放到管道
.. py:method:: get_line(cut_lines=True, line_break='\n')
param cut_lines: cut buffer to lines
type cut_lines: bool
param line_break: line break of the file, like '\n' or '\r'
type line_break: string
return: one line or a buffer of bytes
rtype: string

Parameters:
    - **cut_lines** (bool) - cut the buffer into lines.
    - **line_break** (string) - the line break used in the file, such as '\\n' or '\\r'.

Returns: one line, or a buffer of bytes.

Return type: string
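A minimal sketch of ``get_line`` (assuming the ``PipeReader`` class documented here is exposed as ``paddle.reader.PipeReader``; the shell command is a placeholder):

.. code-block:: python

import paddle

def pipe_data():
    # stream whatever the command prints to stdout, one line at a time
    reader = paddle.reader.PipeReader("cat sample.txt")
    for line in reader.get_line(cut_lines=True, line_break='\n'):
        yield line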
......
......@@ -267,6 +267,7 @@ scope_guard
.. code-block:: python
import paddle.fluid as fluid
import numpy
new_scope = fluid.Scope()
......
......@@ -249,7 +249,7 @@ cpu_places
Creates a list of ``fluid.CPUPlace`` objects.

If ``device_count`` is None, the number of devices is determined by the environment variable ``CPU_NUM``; if ``CPU_NUM`` is not set, the number of devices is determined by ``multiprocessing.cpu_count()``.
If ``device_count`` is None, the number of devices is determined by the environment variable ``CPU_NUM``; if ``CPU_NUM`` is not set, the number of devices defaults to 1, i.e. ``CPU_NUM`` = 1.

Parameters:
    - **device_count** (None|int) - the number of devices (see the sketch below)
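A short sketch of how ``CPU_NUM`` and ``device_count`` interact (the environment value is illustrative):

.. code-block:: python

import os
import paddle.fluid as fluid

os.environ['CPU_NUM'] = '4'       # device count taken from CPU_NUM
four_places = fluid.cpu_places()  # a list of four fluid.CPUPlace objects
two_places = fluid.cpu_places(2)  # an explicit device_count overrides CPU_NUM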
......@@ -262,6 +262,7 @@ cpu_places
.. code-block:: python
import paddle.fluid as fluid
cpu_places = fluid.cpu_places()
......@@ -279,6 +280,7 @@ CPUPlace是设备的描述符。它代表一个CPU,可以访问CPUPlace对应
.. code-block:: python
import paddle.fluid as fluid
cpu_place = fluid.CPUPlace()
......@@ -397,6 +399,7 @@ cuda_pinned_places
.. code-block:: python
import paddle.fluid as fluid
cuda_pinned_places_cpu_num = fluid.cuda_pinned_places()
# or
cuda_pinned_places = fluid.cuda_pinned_places(1)
......@@ -428,6 +431,7 @@ cuda_places
.. code-block:: python
import paddle.fluid as fluid
cuda_places = fluid.cuda_places()
.. _cn_api_fluid_CUDAPinnedPlace:
......@@ -443,6 +447,7 @@ CUDAPinnedPlace是一个设备描述符,它所指代的存储空间可以被GP
.. code-block:: python
import paddle.fluid as fluid
place = fluid.CUDAPinnedPlace()
.. _cn_api_fluid_CUDAPlace:
......@@ -458,6 +463,7 @@ CUDAPlace是一个设备描述符,它代表一个GPU,并且每个CUDAPlace
.. code-block:: python
import paddle.fluid as fluid
gpu_place = fluid.CUDAPlace(0)
......@@ -482,6 +488,7 @@ DataFeedDesc应由来自磁盘的有效protobuf消息初始化。
.. code-block:: python
import paddle.fluid as fluid
f = open("data.proto", "w")
print >> f, 'name: "MultiSlotDataFeed"'
print >> f, 'batch_size: 2'
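The snippet above (and the similar ones in the sections below) uses the Python 2 ``print >> f`` syntax; an equivalent that also runs on Python 3 could write the proto text like this:

.. code-block:: python

import paddle.fluid as fluid

with open("data.proto", "w") as f:
    f.write('name: "MultiSlotDataFeed"\n')
    f.write('batch_size: 2\n')
data_feed = fluid.DataFeedDesc('data.proto')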
......@@ -508,6 +515,7 @@ DataFeedDesc也可以在运行时更改。一旦你熟悉了每个字段的含
.. code-block:: python
import paddle.fluid as fluid
data_feed = fluid.DataFeedDesc('data.proto')
data_feed.set_batch_size(128)
data_feed.set_dense_slots('wd') # the slot named 'wd' will be set to dense
......@@ -534,6 +542,7 @@ DataFeedDesc也可以在运行时更改。一旦你熟悉了每个字段的含
.. code-block:: python
import paddle.fluid as fluid
f = open("data.proto", "w")
print >> f, 'name: "MultiSlotDataFeed"'
print >> f, 'batch_size: 2'
......@@ -569,6 +578,7 @@ DataFeedDesc也可以在运行时更改。一旦你熟悉了每个字段的含
.. code-block:: python
import paddle.fluid as fluid
f = open("data.proto", "w")
print >> f, 'name: "MultiSlotDataFeed"'
print >> f, 'batch_size: 2'
......@@ -606,6 +616,7 @@ DataFeedDesc也可以在运行时更改。一旦你熟悉了每个字段的含
.. code-block:: python
import paddle.fluid as fluid
f = open("data.proto", "w")
print >> f, 'name: "MultiSlotDataFeed"'
print >> f, 'batch_size: 2'
......@@ -642,6 +653,7 @@ DataFeedDesc也可以在运行时更改。一旦你熟悉了每个字段的含
.. code-block:: python
import paddle.fluid as fluid
f = open("data.proto", "w")
print >> f, 'name: "MultiSlotDataFeed"'
print >> f, 'batch_size: 2'
......@@ -993,6 +1005,7 @@ DistributeTranspiler
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
y_predict = fluid.layers.fc(input=x, size=1, act=None)
......@@ -1053,6 +1066,7 @@ DistributeTranspiler
.. code-block:: python
import paddle.fluid as fluid
transpiler = fluid.DistributeTranspiler()
transpiler.transpile(
trainer_id=0,
......@@ -1162,6 +1176,7 @@ DistributeTranspiler
.. code-block:: python
import paddle.fluid as fluid
pserver_endpoints = "192.168.0.1:6174,192.168.0.2:6174"
trainer_endpoints = "192.168.0.1:6174,192.168.0.2:6174"
current_endpoint = "192.168.0.1:6174"
......@@ -1207,6 +1222,7 @@ block中分割(split)出的元素个数的最小值。
.. code-block:: python
import paddle.fluid as fluid
config = fluid.DistributeTranspilerConfig()
config.slice_var_up = True
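The hunk above also mentions ``min_block_size``, the minimum number of elements split out per block; a sketch of setting it alongside ``slice_var_up`` (8192 is, to our knowledge, the default value):

.. code-block:: python

import paddle.fluid as fluid

config = fluid.DistributeTranspilerConfig()
config.slice_var_up = True
config.min_block_size = 8192  # smallest number of elements per split block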
......@@ -1226,6 +1242,7 @@ ExecutionStrategy
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[13], dtype='float32')
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
y_predict = fluid.layers.fc(input=x, size=1, act=None)
......@@ -1578,6 +1595,7 @@ in_dygraph_mode
.. code-block:: python
import paddle.fluid as fluid
if fluid.in_dygraph_mode():
pass
......@@ -1875,6 +1893,7 @@ name_scope
.. code-block:: python
import paddle.fluid as fluid
with fluid.name_scope("s1"):
a = fluid.layers.data(name='data', shape=[1], dtype='int32')
b = a + 1
......@@ -2043,6 +2062,7 @@ ParallelExecutor
.. code-block:: python
import paddle.fluid as fluid
pe = fluid.ParallelExecutor(use_cuda=use_cuda,
loss_name=avg_cost.name,
main_program=fluid.default_main_program())
......@@ -2211,6 +2231,7 @@ Program
.. code-block:: python
import paddle.fluid as fluid
test_program = fluid.default_main_program().clone(for_test=True)
optimizer = fluid.optimizer.Momentum(learning_rate=0.01, momentum=0.9)
optimizer.minimize(loss)  # assumes a loss variable built by the (omitted) network
......@@ -2538,6 +2559,7 @@ scope_guard
.. code-block:: python
import paddle.fluid as fluid
import numpy
new_scope = fluid.Scope()
......
......@@ -26,6 +26,7 @@ BilinearInitializer
.. code-block:: python
import paddle.fluid as fluid
factor = 2
C = 2
w_attr = fluid.initializer.ParamAttr(
......@@ -77,6 +78,7 @@ ConstantInitializer
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
fc = fluid.layers.fc(input=x, size=10,
param_attr=fluid.initializer.Constant(value=2.0))
......@@ -104,6 +106,7 @@ force_init_on_cpu
.. code-block:: python
import paddle.fluid as fluid
if fluid.initializer.force_init_on_cpu():
step = fluid.layers.create_global_var(shape=[2,3], value=1.0, dtype='float32')
......@@ -130,6 +133,7 @@ init_on_cpu
.. code-block:: python
import paddle.fluid as fluid
with fluid.initializer.init_on_cpu():
step = fluid.layers.create_global_var(shape=[2,3], value=1.0, dtype='float32')
......@@ -183,6 +187,7 @@ MSRAInitializer
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
fc = fluid.layers.fc(input=x, size=10, param_attr=fluid.initializer.MSRA(uniform=False))
......@@ -219,6 +224,7 @@ NormalInitializer
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
fc = fluid.layers.fc(input=x, size=10,
param_attr=fluid.initializer.Normal(loc=0.0, scale=2.0)
......@@ -240,6 +246,7 @@ NumpyArrayInitializer
.. code-block:: python
import paddle.fluid as fluid
import numpy
x = fluid.layers.data(name="x", shape=[5], dtype='float32')
fc = fluid.layers.fc(input=x, size=10,
param_attr=fluid.initializer.NumpyArrayInitializer(numpy.array([1,2])))
......
......@@ -96,6 +96,7 @@ load_params
.. code-block:: python
import paddle.fluid as fluid
exe = fluid.Executor(fluid.CPUPlace())
param_path = "./my_paddle_model"
prog = fluid.default_main_program()
......@@ -131,6 +132,7 @@ load_persistables
.. code-block:: python
import paddle.fluid as fluid
exe = fluid.Executor(fluid.CPUPlace())
param_path = "./my_paddle_model"
prog = fluid.default_main_program()
......@@ -237,6 +239,7 @@ PyReader
.. code-block:: python
import paddle.fluid as fluid
EPOCH_NUM = 3
ITER_NUM = 5
BATCH_SIZE = 3
......@@ -278,6 +281,7 @@ PyReader
.. code-block:: python
import paddle.fluid as fluid
EPOCH_NUM = 3
ITER_NUM = 5
BATCH_SIZE = 10
......@@ -346,6 +350,7 @@ PyReader
.. code-block:: python
import paddle.fluid as fluid
BATCH_SIZE = 10
def generator():
......@@ -376,6 +381,7 @@ PyReader
.. code-block:: python
import paddle.fluid as fluid
BATCH_SIZE = 10
def generator():
......@@ -418,6 +424,7 @@ PyReader
.. code-block:: python
import paddle.fluid as fluid
EPOCH_NUM = 3
ITER_NUM = 15
BATCH_SIZE = 3
......@@ -464,6 +471,7 @@ PyReader
.. code-block:: python
import paddle.fluid as fluid
EPOCH_NUM = 3
ITER_NUM = 15
BATCH_SIZE = 3
......@@ -510,6 +518,7 @@ PyReader
.. code-block:: python
import paddle.fluid as fluid
EPOCH_NUM = 3
ITER_NUM = 15
BATCH_SIZE = 3
......@@ -636,6 +645,7 @@ save_params
.. code-block:: python
import paddle.fluid as fluid
exe = fluid.Executor(fluid.CPUPlace())
param_path = "./my_paddle_model"
prog = fluid.default_main_program()
......
......@@ -149,6 +149,7 @@ create_array
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.create_array(dtype='float32')
......@@ -391,6 +392,7 @@ greater_equal
.. code-block:: python
import paddle.fluid as fluid
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=5)
out = fluid.layers.greater_equal(x=label, y=limit)
......@@ -417,6 +419,7 @@ greater_than
.. code-block:: python
import paddle.fluid as fluid
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=5)
out = fluid.layers.greater_than(x=label, y=limit)
......@@ -564,6 +567,7 @@ less_equal
.. code-block:: python
import paddle.fluid as fluid
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=5)
out = fluid.layers.less_equal(x=label, y=limit)
......@@ -594,6 +598,7 @@ less_than
.. code-block:: python
import paddle.fluid as fluid
label = fluid.layers.data(name='y', shape=[1], dtype='int64')
limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=5)
cond = fluid.layers.less_than(x=label, y=limit)
......@@ -621,6 +626,7 @@ not_equal
.. code-block:: python
import paddle.fluid as fluid
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
limit = fluid.layers.fill_constant(shape=[1], dtype='int64', value=5)
out = fluid.layers.not_equal(x=label, y=limit)
......
......@@ -34,6 +34,7 @@ anchor_generator
.. code-block:: python
import paddle.fluid as fluid
conv1 = fluid.layers.data(name='conv1', shape=[48, 16, 16], dtype='float32')
anchor, var = fluid.layers.anchor_generator(
input=conv1,
......@@ -86,6 +87,7 @@ bipartite_match
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[4], dtype='float32')
y = fluid.layers.data(name='y', shape=[4], dtype='float32')
iou = fluid.layers.iou_similarity(x=x, y=y)
......@@ -131,6 +133,7 @@ box_clip
.. code-block:: python
import paddle.fluid as fluid
boxes = fluid.layers.data(
name='boxes', shape=[8, 4], dtype='float32', lod_level=1)
im_info = fluid.layers.data(name='im_info', shape=[3])
......@@ -210,6 +213,7 @@ Bounding Box Coder
.. code-block:: python
import paddle.fluid as fluid
prior_box = fluid.layers.data(name='prior_box',
shape=[512, 4],
dtype='float32',
......@@ -278,6 +282,7 @@ box decode过程得出decode_box,然后分配方案如下所述:
.. code-block:: python
import paddle.fluid as fluid
pb = fluid.layers.data(
name='prior_box', shape=[4], dtype='float32')
pbv = fluid.layers.data(
......@@ -322,6 +327,7 @@ collect_fpn_proposals
.. code-block:: python
import paddle.fluid as fluid
multi_rois = []
multi_scores = []
for i in range(4):
......@@ -402,6 +408,7 @@ density prior box的量由fixed_sizes and fixed_ratios决定。显然地,fixed
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(name="input", shape=[3,6,9])
images = fluid.layers.data(name="images", shape=[3,9,12])
box, var = fluid.layers.density_prior_box(
......@@ -468,6 +475,8 @@ detection_map
.. code-block:: python
import paddle.fluid as fluid
from paddle.fluid.layers import detection
detect_res = fluid.layers.data(
name='detect_res',
shape=[10, 6],
......@@ -581,6 +590,7 @@ distribute_fpn_proposals
.. code-block:: python
import paddle.fluid as fluid
fpn_rois = fluid.layers.data(
name='data', shape=[4], dtype='float32', lod_level=1)
multi_rois, restore_ind = fluid.layers.distribute_fpn_proposals(
......@@ -962,6 +972,7 @@ multiclass_nms
.. code-block:: python
import paddle.fluid as fluid
boxes = fluid.layers.data(name='bboxes', shape=[81, 4],
dtype='float32', lod_level=1)
scores = fluid.layers.data(name='scores', shape=[81],
......@@ -1047,6 +1058,7 @@ prior_box
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(name="input", shape=[3,6,9])
images = fluid.layers.data(name="images", shape=[3,9,12])
box, var = fluid.layers.prior_box(
......@@ -1422,6 +1434,7 @@ ssd_loss
.. code-block:: python
import paddle.fluid as fluid
pb = fluid.layers.data(
name='prior_box',
shape=[10, 4],
......@@ -1669,6 +1682,7 @@ yolov3_loss
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[255, 13, 13], dtype='float32')
gt_box = fluid.layers.data(name='gtbox', shape=[6, 4], dtype='float32')
gt_label = fluid.layers.data(name='gtlabel', shape=[6], dtype='int32')
......
......@@ -24,6 +24,7 @@ batch
.. code-block:: python
import paddle.fluid as fluid
raw_reader = fluid.layers.io.open_files(filenames=['./data1.recordio',
'./data2.recordio'],
shapes=[(3,224,224), (1,)],
......@@ -153,6 +154,7 @@ data
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name='x', shape=[784], dtype='float32')
......@@ -299,6 +301,7 @@ reader变量中数据预处理块。
.. code-block:: python
import paddle.fluid as fluid
reader = fluid.layers.io.open_files(
filenames=['./data1.recordio', './data2.recordio'],
shapes=[(3, 224, 224), (1, )],
......@@ -331,7 +334,7 @@ py_reader
Creates a reader whose data is fed from the Python side.

This layer returns a Reader Variable. The reader provides ``decorate_paddle_reader()`` and ``decorate_tensor_provider()`` to set a Python generator as the data source. For more detail see the asynchronous data-reading guide :ref:`user_guide_use_py_reader`. When ``Executor::Run()`` is called on the C++ side, the data from the generator is read automatically. Unlike ``DataFeeder.feed()``, with ``py_reader`` the data-reading process and the ``Executor::Run()`` process can run in parallel. The reader's ``start()`` method should be called at the start of every data pass, and ``reset()`` should be called after a pass ends and ``fluid.core.EOFException`` is raised. Note that ``Program.clone()`` cannot clone ``py_reader``.
This layer returns a Reader Variable. The reader provides ``decorate_paddle_reader()`` and ``decorate_tensor_provider()`` to set a Python generator as the data source. For more detail see :ref:`user_guides_use_py_reader`. When ``Executor::Run()`` is called on the C++ side, the data from the generator is read automatically. Unlike ``DataFeeder.feed()``, with ``py_reader`` the data-reading process and the ``Executor::Run()`` process can run in parallel. The reader's ``start()`` method should be called at the start of every data pass, and ``reset()`` should be called after a pass ends and ``fluid.core.EOFException`` is raised. Note that ``Program.clone()`` cannot clone ``py_reader``.

Parameters:
    - **capacity** (int) – the buffer capacity maintained by ``py_reader``
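A condensed sketch of the ``start()`` / ``reset()`` protocol described above (the shapes, toy reader, and one-op network are illustrative):

.. code-block:: python

import paddle
import paddle.fluid as fluid
import numpy as np

def fake_sample_reader():
    def gen():
        for _ in range(32):
            yield np.random.uniform(-1, 1, [784]).astype('float32'),
    return gen

reader = fluid.layers.py_reader(capacity=64, shapes=[(-1, 784)],
                                dtypes=['float32'])
reader.decorate_paddle_reader(paddle.batch(fake_sample_reader(), batch_size=4))
img, = fluid.layers.read_file(reader)
loss = fluid.layers.mean(img)  # stand-in for a real network

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
for pass_id in range(2):
    reader.start()      # call at the start of every data pass
    try:
        while True:
            exe.run(fetch_list=[loss.name])
    except fluid.core.EOFException:
        reader.reset()  # call once the pass ends with EOFException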
......@@ -485,6 +488,7 @@ random_data_generator
.. code-block:: python
import paddle.fluid as fluid
reader = fluid.layers.random_data_generator(
low=0.0,
high=1.0,
......@@ -563,6 +567,7 @@ shuffle
.. code-block:: python
import paddle.fluid as fluid
raw_reader = fluid.layers.io.open_files(filenames=['./data1.recordio',
'./data2.recordio'],
shapes=[(3,224,224), (1,)],
......
......@@ -29,6 +29,7 @@ cosine_decay
.. code-block:: python
import paddle.fluid as fluid
base_lr = 0.1
lr = fluid.layers.cosine_decay( learning_rate = base_lr, step_each_epoch=10000, epochs=120)
......@@ -156,6 +157,7 @@ linear_lr_warmup
.. code-block:: python
import paddle.fluid as fluid
boundaries = [100, 200]
lr_steps = [0.1, 0.01, 0.001]
warmup_steps = 50
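# A possible completion of this example (sketch; the start/end
# learning rates are illustrative values):
start_lr = 0.1 / 3.0
end_lr = 0.1
decayed_lr = fluid.layers.linear_lr_warmup(
    fluid.layers.piecewise_decay(boundaries, lr_steps),
    warmup_steps, start_lr, end_lr)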
......@@ -225,6 +227,7 @@ Noam衰减方法。noam衰减的numpy实现如下。
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
# set the hyper-parameters
d_model = 2
......
......@@ -60,6 +60,7 @@ pooling2d操作根据输入 ``input`` , ``pool_size`` , ``pool_type`` 参数
# wend = ceil((i + 1) * W / n)
# output[:, :, i, j] = avg(input[:, :, hstart: hend, wstart: wend])
#
import paddle.fluid as fluid
data = fluid.layers.data(
name='data', shape=[3, 32, 32], dtype='float32')
pool_out = fluid.layers.adaptive_pool2d(
......@@ -377,6 +378,7 @@ autoincreased_step_counter
.. code-block:: python
import paddle.fluid as fluid
global_step = fluid.layers.autoincreased_step_counter(
counter_name='@LR_DECAY_COUNTER@', begin=0, step=1)
......@@ -447,6 +449,7 @@ batch_norm
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[3, 7, 3, 7], dtype='float32', append_batch_size=False)
hidden1 = fluid.layers.fc(input=x, size=200, param_attr='fc1.w')
hidden2 = fluid.layers.batch_norm(input=hidden1)
......@@ -621,6 +624,7 @@ bilinear_tensor_product
.. code-block:: python
import paddle.fluid as fluid
layer1 = fluid.layers.data("t1", shape=[-1, 5], dtype="float32")
layer2 = fluid.layers.data("t2", shape=[-1, 4], dtype="float32")
tensor = fluid.layers.bilinear_tensor_product(x=layer1, y=layer2, size=1000)
......@@ -693,6 +697,7 @@ BRelu 激活函数
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[2,3,16,16], dtype="float32")
y = fluid.layers.brelu(x, t_min=1.0, t_max=20.0)
......@@ -876,6 +881,7 @@ ClipByNorm算子
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(
name='data', shape=[1], dtype='float32')
reward = fluid.layers.clip_by_norm(x=input, max_norm=1.0)
......@@ -912,6 +918,7 @@ continuous_value_model
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(name="input", shape=[-1, 1], lod_level=1, append_batch_size=False, dtype="int64")#, stop_gradient=False)
label = fluid.layers.data(name="label", shape=[-1, 1], append_batch_size=False, dtype="int64")
embed = fluid.layers.embedding(
......@@ -997,6 +1004,7 @@ conv2d
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
conv2d = fluid.layers.conv2d(input=data, num_filters=2, filter_size=3, act="relu")
......@@ -1094,6 +1102,7 @@ conv2d_transpose
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
conv2d_transpose = fluid.layers.conv2d_transpose(input=data, num_filters=2, filter_size=3)
......@@ -1174,6 +1183,7 @@ conv3d
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[3, 12, 32, 32], dtype='float32')
conv3d = fluid.layers.conv3d(input=data, num_filters=2, filter_size=3, act="relu")
......@@ -1278,6 +1288,7 @@ conv3d_transpose
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[3, 12, 32, 32], dtype='float32')
conv3d_transpose = fluid.layers.conv3d_transpose(input=data, num_filters=2, filter_size=3)
......@@ -1319,6 +1330,7 @@ cos_sim
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[3, 7], dtype='float32', append_batch_size=False)
y = fluid.layers.data(name='y', shape=[1, 7], dtype='float32', append_batch_size=False)
out = fluid.layers.cos_sim(x, y)
......@@ -1359,6 +1371,7 @@ crf_decoding
.. code-block:: python
import paddle.fluid as fluid
images = fluid.layers.data(name='pixel', shape=[784], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int32')
hidden = fluid.layers.fc(input=images, size=2)
......@@ -1506,6 +1519,7 @@ cross_entropy
.. code-block:: python
import paddle.fluid as fluid
classdim = 7
x = fluid.layers.data(name='x', shape=[3, 7], dtype='float32', append_batch_size=False)
label = fluid.layers.data(name='label', shape=[3, 1], dtype='float32', append_batch_size=False)
......@@ -1705,6 +1719,7 @@ deformable_conv
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[3, 32, 32], dtype='float32')
offset = fluid.layers.data(name='offset', shape=[18, 32, 32], dtype='float32')
mask = fluid.layers.data(name='mask', shape=[9, 32, 32], dtype='float32')
......@@ -1747,6 +1762,7 @@ deformable_roi_pooling
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(name="input",
shape=[2, 192, 64, 64],
dtype='float32',
......@@ -1866,6 +1882,7 @@ dropout操作符可以从程序中移除,程序变得高效。
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
droped = fluid.layers.dropout(x, dropout_prob=0.5)
......@@ -2051,6 +2068,7 @@ W 代表了权重矩阵(weight matrix),例如 :math:`W_{xi}` 是从输入门
.. code-block:: python
import paddle.fluid as fluid
emb_dim = 256
vocab_size = 10000
hidden_dim = 512
......@@ -2162,6 +2180,7 @@ LSTMP层(具有循环映射的LSTM)在LSTM层后有一个分离的映射层,
.. code-block:: python
import paddle.fluid as fluid
dict_dim, emb_dim = 128, 64
data = fluid.layers.data(name='sequence', shape=[1],
dtype='int32', lod_level=1)
......@@ -2929,31 +2948,37 @@ elementwise_pow
.. code-block:: python
# Example 1: shape(x) = (2, 3, 4, 5), shape(y) = (2, 3, 4, 5)
import paddle.fluid as fluid
x0 = fluid.layers.data(name="x0", shape=[2, 3, 4, 5], dtype='float32')
y0 = fluid.layers.data(name="y0", shape=[2, 3, 4, 5], dtype='float32')
z0 = fluid.layers.elementwise_pow(x0, y0)

# Example 2: shape(X) = (2, 3, 4, 5), shape(Y) = (5)
x1 = fluid.layers.data(name="x1", shape=[2, 3, 4, 5], dtype='float32')
y1 = fluid.layers.data(name="y1", shape=[5], dtype='float32')
z1 = fluid.layers.elementwise_pow(x1, y1)

# Example 3: shape(X) = (2, 3, 4, 5), shape(Y) = (4, 5), with axis=-1 (default) or axis=2
x2 = fluid.layers.data(name="x2", shape=[2, 3, 4, 5], dtype='float32')
y2 = fluid.layers.data(name="y2", shape=[4, 5], dtype='float32')
z2 = fluid.layers.elementwise_pow(x2, y2, axis=2)

# Example 4: shape(X) = (2, 3, 4, 5), shape(Y) = (3, 4), with axis=1
x3 = fluid.layers.data(name="x3", shape=[2, 3, 4, 5], dtype='float32')
y3 = fluid.layers.data(name="y3", shape=[3, 4], dtype='float32')
z3 = fluid.layers.elementwise_pow(x3, y3, axis=1)

# Example 5: shape(X) = (2, 3, 4, 5), shape(Y) = (2), with axis=0
x4 = fluid.layers.data(name="x4", shape=[2, 3, 4, 5], dtype='float32')
y4 = fluid.layers.data(name="y4", shape=[2], dtype='float32')
z4 = fluid.layers.elementwise_pow(x4, y4, axis=0)

# Example 6: shape(X) = (2, 3, 4, 5), shape(Y) = (2, 1), with axis=0
x5 = fluid.layers.data(name="x5", shape=[2, 3, 4, 5], dtype='float32')
y5 = fluid.layers.data(name="y5", shape=[2, 1], dtype='float32')
z5 = fluid.layers.elementwise_pow(x5, y5, axis=0)
......@@ -3079,6 +3104,7 @@ ELU激活层(ELU Activation Operator)
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[3,10,32,32], dtype="float32")
y = fluid.layers.elu(x, alpha=0.2)
......@@ -3168,6 +3194,7 @@ expand运算会按给定的次数对输入各维度进行复制(tile)运算
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[10], dtype='float32')
out = fluid.layers.expand(x=x, expand_times=[1, 2, 2])
......@@ -3253,6 +3280,7 @@ fc
.. code-block:: python
import paddle.fluid as fluid
# when the input is a single tensor
data = fluid.layers.data(name="data", shape=[32, 32], dtype="float32")
......@@ -3323,6 +3351,7 @@ flatten
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[4, 4, 3], dtype="float32")
out = fluid.layers.flatten(x=x, axis=2)
......@@ -3411,6 +3440,7 @@ gather
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[-1, 5], dtype='float32')
index = fluid.layers.data(name='index', shape=[-1, 1], dtype='int32')
output = fluid.layers.gather(x, index)
......@@ -3449,6 +3479,7 @@ gaussian_random算子。
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
out = fluid.layers.gaussian_random(shape=[20, 30])
......@@ -3488,6 +3519,7 @@ gaussian_random_batch_size_like
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(name="input", shape=[13, 11], dtype='float32')
out = fluid.layers.gaussian_random_batch_size_like(
......@@ -3636,6 +3668,7 @@ group_norm
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[8, 32, 32],
dtype='float32')
x = fluid.layers.group_norm(input=data, groups=4)
......@@ -4175,6 +4208,7 @@ https://en.wikipedia.org/wiki/Bilinear_interpolation。
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(name="input", shape=[3,6,9], dtype="float32")
out = fluid.layers.image_resize(input, out_shape=[12, 12], resample="NEAREST")
......@@ -4211,6 +4245,7 @@ image_resize_short
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(name="input", shape=[3,6,9], dtype="float32")
out = fluid.layers.image_resize_short(input, out_short_len=3)
......@@ -4253,6 +4288,7 @@ kL发散损失计算如下:
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[4,2,2], dtype='float32')
target = fluid.layers.data(name='target', shape=[4,2,2], dtype='float32')
loss = fluid.layers.kldiv_loss(x=x, target=target, reduction='batchmean')
......@@ -4294,6 +4330,7 @@ L2正则(L2 normalize Layer)
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="data",
shape=(3, 17, 13),
dtype="float32")
......@@ -4343,6 +4380,7 @@ label_smooth
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
label = fluid.layers.data(name="label", shape=[1], dtype="float32")
......@@ -4401,6 +4439,7 @@ layer_norm
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[3, 32, 32],
dtype='float32')
x = fluid.layers.layer_norm(input=data, begin_norm_axis=1)
......@@ -4433,6 +4472,7 @@ LeakyRelu 激活函数
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[2,3,16,16], dtype="float32")
y = fluid.layers.leaky_relu(x, alpha=0.01)
......@@ -4605,6 +4645,7 @@ lod_reset
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[10])
y = fluid.layers.data(name='y', shape=[10, 20], lod_level=2)
out = fluid.layers.lod_reset(x=x, y=y)
......@@ -4644,6 +4685,7 @@ log
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[3, 4], dtype="float32")
output = fluid.layers.log(x)
......@@ -4688,6 +4730,7 @@ log_loss
.. code-block:: python
import paddle.fluid as fluid
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
prob = fluid.layers.data(name='prob', shape=[10], dtype='float32')
cost = fluid.layers.log_loss(input=prob, label=label)
......@@ -4731,6 +4774,7 @@ logical_and算子
.. code-block:: python
import paddle.fluid as fluid
left = fluid.layers.data(
name='left', shape=[1], dtype='int32')
right = fluid.layers.data(
......@@ -4773,6 +4817,7 @@ logical_not算子
.. code-block:: python
import paddle.fluid as fluid
left = fluid.layers.data(
name='left', shape=[1], dtype='int32')
result = fluid.layers.logical_not(x=left)
......@@ -4814,6 +4859,7 @@ logical_or算子
.. code-block:: python
import paddle.fluid as fluid
left = fluid.layers.data(
name='left', shape=[1], dtype='int32')
right = fluid.layers.data(
......@@ -4855,6 +4901,7 @@ logical_xor算子
.. code-block:: python
import paddle.fluid as fluid
left = fluid.layers.data(
name='left', shape=[1], dtype='int32')
right = fluid.layers.data(
......@@ -4908,6 +4955,7 @@ lrn
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(
name="data", shape=[3, 112, 112], dtype="float32")
lrn = fluid.layers.lrn(input=data)
......@@ -4980,6 +5028,7 @@ sigmoid的计算公式为: :math:`sigmoid(x) = 1 / (1 + e^{-x})` 。
.. code-block:: python
import paddle.fluid as fluid
emb_dim = 256
vocab_size = 10000
data = fluid.layers.data(name='x', shape=[-1, 100, 1],
......@@ -5115,6 +5164,7 @@ margin rank loss(差距排序损失)层。在排序问题中,它可以比
.. code-block:: python
import paddle.fluid as fluid
label = fluid.layers.data(name="label", shape=[-1, 1], dtype="float32")
left = fluid.layers.data(name="left", shape=[-1, 1], dtype="float32")
right = fluid.layers.data(name="right", shape=[-1, 1], dtype="float32")
......@@ -5191,6 +5241,7 @@ matmul
# x: [M], y: [N]
# fluid.layers.matmul(x, y, True, True) # out: [M, N]
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[2, 3], dtype='float32')
y = fluid.layers.data(name='y', shape=[3, 2], dtype='float32')
out = fluid.layers.matmul(x, y, True, True)
......@@ -5237,6 +5288,7 @@ maxout
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(
name='data',
shape=[256, 32, 32],
......@@ -5273,6 +5325,7 @@ mean算子计算X中所有元素的平均值
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(
name='data', shape=[2, 3], dtype='float32')
mean = fluid.layers.mean(input)
......@@ -5370,6 +5423,7 @@ merge_selected_rows
.. code-block:: python
import paddle.fluid as fluid
b = fluid.default_main_program().global_block()
var = b.create_var(
name="X", dtype="float32", persistable=True,
......@@ -5555,6 +5609,7 @@ nce
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
window_size = 5
......@@ -5619,6 +5674,7 @@ NPair损失需要成对的数据。NPair损失分为两部分:第一部分是
.. code-block:: python
import paddle.fluid as fluid
anchor = fluid.layers.data(
name = 'anchor', shape = [18, 6], dtype = 'float32', append_batch_size=False)
positive = fluid.layers.data(
......@@ -5654,6 +5710,7 @@ one_hot
.. code-block:: python
import paddle.fluid as fluid
label = fluid.layers.data(name="label", shape=[1], dtype="int64")
one_hot_label = fluid.layers.one_hot(input=label, depth=10)
......@@ -5886,6 +5943,7 @@ pixel shuffle 层(像素重组层)
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(name="input", shape=[9,4,4])
output = fluid.layers.pixel_shuffle(x=input, upscale_factor=3)
......@@ -5975,6 +6033,7 @@ pooling2d操作符根据 ``input`` , 池化类型 ``pool_type`` , 池化核
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(
name='data', shape=[3, 32, 32], dtype='float32')
pool2d = fluid.layers.pool2d(
......@@ -6071,6 +6130,7 @@ pooling3d操作根据input,pool_type,pool_size,strides和paddings参数计
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(
name='data', shape=[3, 32, 32, 32], dtype='float32')
pool3d = fluid.layers.pool3d(
......@@ -6306,6 +6366,7 @@ random_crop
.. code-block:: python
import paddle.fluid as fluid
img = fluid.layers.data("img", [3, 256, 256])
cropped_img = fluid.layers.random_crop(img, shape=[3, 224, 224])
......@@ -6335,6 +6396,7 @@ rank
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(
    name="input", shape=[3, 100, 100], dtype="float32")
rank = fluid.layers.rank(input) # 4
......@@ -6379,6 +6441,7 @@ P 的取值可为: {0, 1} 或 {0, 0.5, 1}, 其中,0.5表示输入的两文
.. code-block:: python
import paddle.fluid as fluid
label = fluid.layers.data(name="label", shape=[-1, 1], dtype="float32")
left = fluid.layers.data(name="left", shape=[-1, 1], dtype="float32")
right = fluid.layers.data(name="right", shape=[-1, 1], dtype="float32")
......@@ -6414,6 +6477,9 @@ reduce_all
# [[True, False]
# [True, True]]
# In the examples below, the result tensor is annotated after each call.
import paddle.fluid as fluid
import paddle.fluid.layers as layers
import numpy as np
fluid.layers.reduce_all(x) # False
fluid.layers.reduce_all(x, dim=0) # [True, False]
fluid.layers.reduce_all(x, dim=-1) # [False, True]
......@@ -6448,6 +6514,9 @@ reduce_any
# [[True, False]
# [False, False]]
# In the examples below, the result tensor is annotated after each call.
import paddle.fluid as fluid
import paddle.fluid.layers as layers
import numpy as np
fluid.layers.reduce_any(x) # True
fluid.layers.reduce_any(x, dim=0) # [True, False]
fluid.layers.reduce_any(x, dim=-1) # [True, False]
......@@ -6740,6 +6809,7 @@ Relu接受一个输入数据(张量),输出一个张量。将线性函数y = m
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[3, 4], dtype="float32")
output = fluid.layers.relu(x)
......@@ -6780,6 +6850,7 @@ relu6激活算子(Relu6 Activation Operator)
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[3,10,32,32], dtype="float32")
y = fluid.layers.relu6(x, threshold=6.0)
......@@ -6833,6 +6904,7 @@ reshape
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(
name='data', shape=[2, 4, 6], dtype='float32')
reshaped = fluid.layers.reshape(
......@@ -6916,6 +6988,7 @@ align_corners和align_mode是可选参数,插值的计算方法可以由它们
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(name="input", shape=[3,6,9], dtype="float32")
out = fluid.layers.resize_bilinear(input, out_shape=[12, 12])
......@@ -6988,6 +7061,7 @@ resize_nearest
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(name="input", shape=[3,6,9], dtype="float32")
out = fluid.layers.resize_nearest(input, out_shape=[12, 12])
......@@ -7032,6 +7106,7 @@ Region of Interests align(直译:有意义、有价值选区对齐) 用于实
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(
name='data', shape=[256, 32, 32], dtype='float32')
rois = fluid.layers.data(
......@@ -7229,6 +7304,7 @@ sampling_id算子。用于从输入的多项分布中对id进行采样的图层
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(
name="X",
shape=[13, 11],
......@@ -7500,6 +7576,7 @@ sequence_enumerate
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[-1, 1], dtype='int32', lod_level=1)
out = fluid.layers.sequence_enumerate(input=x, win_size=3, pad_value=0)
......@@ -7570,6 +7647,7 @@ sequence_expand
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.data(name='y', shape=[10, 20],
......@@ -7642,9 +7720,14 @@ Sequence Expand As Layer
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
x = fluid.layers.data(name='x', shape=[10], dtype='float32')
y = fluid.layers.data(name='y', shape=[10, 20],
                      dtype='float32', lod_level=1)
out = layers.sequence_expand_as(x=x, y=y)
......@@ -7684,6 +7767,7 @@ sequence_first_step
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[7, 1],
dtype='float32', lod_level=1)
x_first_step = fluid.layers.sequence_first_step(input=x)
......@@ -7731,6 +7815,7 @@ sequence_last_step
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[7, 1],
dtype='float32', lod_level=1)
x_last_step = fluid.layers.sequence_last_step(input=x)
......@@ -7772,6 +7857,7 @@ sequence_mask
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
x = fluid.layers.data(name='x', shape=[10], dtype='float32', lod_level=1)
......@@ -7869,6 +7955,7 @@ sequence_pad
.. code-block:: python
import paddle.fluid as fluid
import numpy
x = fluid.layers.data(name='y', shape=[10, 5],
......@@ -8110,6 +8197,7 @@ sequence_scatter
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
input = layers.data( name="x", shape=[3, 6], append_batch_size=False, dtype='float32' )
......@@ -8171,6 +8259,7 @@ sequence_slice
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
seqs = fluid.layers.data(name='x', shape=[10, 5],
dtype='float32', lod_level=1)
......@@ -8220,6 +8309,7 @@ sequence_softmax
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[7, 1],
dtype='float32', lod_level=1)
x_sequence_softmax = fluid.layers.sequence_softmax(input=x)
......@@ -8277,6 +8367,7 @@ sequence_unpad
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', shape=[10, 5], dtype='float32')
len = fluid.layers.data(name='length', shape=[1], dtype='int64')
out = fluid.layers.sequence_unpad(x=x, length=len)
......@@ -8382,6 +8473,7 @@ shuffle_channel
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(name='input', shape=[4,2,2], dtype='float32')
out = fluid.layers.shuffle_channel(x=input, group=2)
......@@ -8443,6 +8535,7 @@ sigmoid_cross_entropy_with_logits
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(
name='data', shape=[10], dtype='float32')
label = fluid.layers.data(
......@@ -8481,6 +8574,7 @@ sign
.. code-block:: python
# [1, 0, -1]
import paddle.fluid as fluid
import numpy as np
data = fluid.layers.sign(np.array([3, 0, -2]))
......@@ -8571,6 +8665,7 @@ similarity_focus
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(
name='data', shape=[-1, 3, 2, 2], dtype='float32')
fluid.layers.similarity_focus(input=data, axis=1, indexes=[0])
......@@ -8673,6 +8768,7 @@ smooth_l1
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[128], dtype='float32')
label = fluid.layers.data(
name='label', shape=[100], dtype='float32')
......@@ -8824,6 +8920,7 @@ softmax_with_cross_entropy
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name='data', shape=[128], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')
fc = fluid.layers.fc(input=data, size=100)
......@@ -9028,6 +9125,7 @@ square_error_cost
.. code-block:: python
import paddle.fluid as fluid
y = fluid.layers.data(name='y', shape=[1], dtype='float32')
y_predict = fluid.layers.data(name='y_predict', shape=[1], dtype='float32')
cost = fluid.layers.square_error_cost(input=y_predict, label=y)
......@@ -9080,6 +9178,7 @@ squeeze
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
x = fluid.layers.data(name='x', shape=[5, 1, 10])
y = fluid.layers.squeeze(input=x, axes=[1])
......@@ -9159,6 +9258,7 @@ stack
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
x1 = layers.data(name='x1', shape=[1, 2], dtype='int32')
x2 = layers.data(name='x2', shape=[1, 2], dtype='int32')
......@@ -9196,6 +9296,7 @@ STanh 激活算子(STanh Activation Operator.)
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[3,10,32,32], dtype="float32")
y = fluid.layers.stanh(x, scale_a=0.67, scale_b=1.72)
......@@ -9228,6 +9329,7 @@ sum算子。
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
input0 = fluid.layers.data(name="input0", shape=[13, 11], dtype='float32')
input1 = layers.data(name="input1", shape=[13, 11], dtype='float32')
......@@ -9265,6 +9367,7 @@ Swish 激活函数
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[3,10,32,32], dtype="float32")
y = fluid.layers.swish(x, beta=2.0)
......@@ -9358,6 +9461,7 @@ temporal_shift
.. code-block:: python
import paddle.fluid as fluid
input = fluid.layers.data(name='input', shape=[4,2,2], dtype='float32')
out = fluid.layers.temporal_shift(x=input, seg_num=2, shift_ratio=0.2)
......@@ -9411,6 +9515,7 @@ topk
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
input = layers.data(name="input", shape=[13, 11], dtype='float32')
top5_values, top5_indices = fluid.layers.topk(input, k=5)
......@@ -9488,6 +9593,7 @@ tree_conv
.. code-block:: python
import paddle.fluid as fluid
# 10 is the dataset's maximum node count (max_node_size), 5 is the vector width
nodes_vector = fluid.layers.data(name='vectors', shape=[10, 5], dtype='float32')
# 10 is the dataset's maximum node count (max_node_size), 2 because each edge connects two nodes
......@@ -9542,6 +9648,7 @@ uniform_random_batch_size_like算子。
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
input = fluid.layers.data(name="input", shape=[13, 11], dtype='float32')
......@@ -9685,6 +9792,9 @@ where
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
import numpy as np
# condition is the tensor [True, False, True]
out = fluid.layers.where(condition) # [[0], [2]]
......
......@@ -26,6 +26,7 @@ abs
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.abs(data)
......@@ -51,6 +52,7 @@ arccosine激活函数。
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.acos(data)
......@@ -76,6 +78,7 @@ arcsine激活函数。
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.asin(data)
......@@ -102,6 +105,7 @@ arctanh激活函数。
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.atan(data)
......@@ -134,6 +138,7 @@ ceil
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.ceil(data)
......@@ -172,6 +177,7 @@ Cosine余弦激活函数。
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.cos(data)
......@@ -203,6 +209,7 @@ cumsum
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.cumsum(data, axis=0)
......@@ -238,6 +245,7 @@ Exp激活函数(Exp指以自然常数e为底的指数运算)。
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.exp(data)
......@@ -275,6 +283,7 @@ floor
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.floor(data)
......@@ -315,6 +324,7 @@ HardShrink激活函数(HardShrink activation operator)
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[784])
result = fluid.layers.hard_shrink(x=data, threshold=0.3)
......@@ -351,6 +361,7 @@ Logsigmoid激活函数。
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.logsigmoid(data)
......@@ -386,6 +397,7 @@ Reciprocal(取倒数)激活函数
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.reciprocal(data)
......@@ -425,6 +437,7 @@ Round取整激活函数。
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.round(data)
......@@ -456,6 +469,7 @@ rsqrt激活函数
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.rsqrt(data)
......@@ -485,6 +499,7 @@ sigmoid激活函数
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.sigmoid(data)
......@@ -524,6 +539,7 @@ sin
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.sin(data)
......@@ -561,6 +577,7 @@ softplus激活函数。
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.softplus(data)
......@@ -600,6 +617,7 @@ Softshrink激活算子
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.softshrink(data)
......@@ -638,6 +656,7 @@ softsign激活函数。
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.softsign(data)
......@@ -677,6 +696,7 @@ sqrt
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.sqrt(data)
......@@ -714,6 +734,7 @@ square
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.square(data)
......@@ -754,6 +775,7 @@ tanh 激活函数。
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.tanh(data)
......@@ -792,6 +814,7 @@ tanh_shrink激活函数。
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[32, 784])
result = fluid.layers.tanh_shrink(data)
......@@ -829,6 +852,7 @@ ThresholdedRelu激活函数
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name="input", shape=[1])
result = fluid.layers.thresholded_relu(data, threshold=0.4)
......
......@@ -26,6 +26,7 @@ argmax
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[3, 4], dtype="float32")
out = fluid.layers.argmax(x=x, axis=0)
out = fluid.layers.argmax(x=x, axis=-1)
......@@ -61,6 +62,7 @@ argmin
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[3, 4], dtype="float32")
out = fluid.layers.argmin(x=x, axis=0)
out = fluid.layers.argmin(x=x, axis=-1)
......@@ -113,6 +115,7 @@ argsort
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name="x", shape=[3, 4], dtype="float32")
out, indices = fluid.layers.argsort(input=x, axis=0)
......@@ -180,6 +183,7 @@ cast
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(name='x', shape=[13], dtype='float32')
result = fluid.layers.cast(x=data, dtype='float64')
......@@ -215,6 +219,7 @@ concat
.. code-block:: python
import paddle.fluid as fluid
a = fluid.layers.data(name='a', shape=[2, 13], dtype='float32')
b = fluid.layers.data(name='b', shape=[2, 3], dtype='float32')
c = fluid.layers.data(name='c', shape=[2, 2], dtype='float32')
......@@ -254,6 +259,7 @@ create_global_var
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
var = layers.create_global_var(shape=[2,3], value=1.0, dtype='float32',
persistable=True, force_cpu=True, name='new_var')
......@@ -290,6 +296,7 @@ create_parameter
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
W = fluid.layers.create_parameter(shape=[784, 200], dtype='float32')
......@@ -323,6 +330,7 @@ create_tensor
.. code-block:: python
import paddle.fluid as fluid
tensor = fluid.layers.create_tensor(dtype='float32')
......@@ -350,6 +358,7 @@ diag
# [3, 0, 0]
# [0, 4, 0]
# [0, 0, 5]
import paddle.fluid as fluid
import numpy as np
data = fluid.layers.diag(np.arange(3, 6))
......@@ -508,6 +517,7 @@ isfinite
.. code-block:: python
import paddle.fluid as fluid
var = fluid.layers.data(name="data",
shape=(4, 6),
dtype="float32")
......@@ -540,6 +550,7 @@ linspace
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.linspace(0, 10, 5, 'float32') # [0.0, 2.5, 5.0, 7.5, 10.0]
data = fluid.layers.linspace(0, 10, 1, 'float32') # [0.0]
......@@ -602,6 +613,7 @@ range
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.range(0, 10, 2, 'int32')
......@@ -808,6 +820,7 @@ zeros_like
.. code-block:: python
import paddle.fluid as fluid
x = fluid.layers.data(name='x', dtype='float32', shape=[3], append_batch_size=False)
data = fluid.layers.zeros_like(x) # [0.0, 0.0, 0.0]
......
......@@ -21,6 +21,7 @@ https://en.wikipedia.org/wiki/Accuracy_and_precision
.. code-block:: python
import paddle.fluid as fluid
# suppose batch_size = 128
batch_size=128
accuracy_manager = fluid.metrics.Accuracy()
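# A possible continuation (sketch): feed the minibatch accuracy computed
# elsewhere, weighted by the batch size.
accuracy_manager.update(value=0.9, weight=batch_size)
print(accuracy_manager.eval())  # accumulated accuracy so far
accuracy_manager.reset()        # clear the state between passes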
......@@ -77,6 +78,7 @@ auc函数创建四个局部变量true_positives, true_negatives, false_positives
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
# initialize the AUC metric
auc_metric = fluid.metrics.Auc("ROC")
......@@ -134,6 +136,7 @@ ChunkEvaluator
.. code-block:: python
import paddle.fluid as fluid
# initialize the chunk-level evaluation manager
metric = fluid.metrics.ChunkEvaluator()
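# A possible continuation (sketch): the three counts would normally come
# from the outputs of fluid.layers.chunk_eval.
metric.update(num_infer_chunks=10, num_label_chunks=8, num_correct_chunks=7)
precision, recall, f1 = metric.eval()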
......@@ -185,6 +188,7 @@ CompositeMetric
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
preds = [[0.1], [0.7], [0.8], [0.9], [0.2],
[0.2], [0.3], [0.5], [0.8], [0.6]]
......@@ -271,6 +275,7 @@ DetectionMAP
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
batch_size = -1 # can be any size
......@@ -339,6 +344,7 @@ EditDistance
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
# suppose batch_size is 128
......@@ -366,6 +372,7 @@ EditDistance
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
edit_distances_batch2 = np.random.randint(low=0, high=10, size=(batch_size, 1))
seq_num_batch2 = batch_size
distance_evaluator.update(edit_distances_batch2, seq_num_batch2)
......@@ -456,6 +463,7 @@ https://en.wikipedia.org/wiki/Evaluation_of_binary_classifiers
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
metric = fluid.metrics.Precision()
......@@ -499,6 +507,7 @@ https://en.wikipedia.org/wiki/Precision_and_recall
.. code-block:: python
import paddle.fluid as fluid
import numpy as np
metric = fluid.metrics.Recall()
......
......@@ -30,6 +30,7 @@ he Gated Linear Units(GLU)由切分(split),sigmoid激活函数和按元素
.. code-block:: python
import paddle.fluid as fluid
data = fluid.layers.data(
name="words", shape=[-1, 6, 3, 9], dtype="float32")
# the output shape is [-1, 3, 3, 9]
......@@ -74,6 +75,7 @@ Image Convolution Group由Convolution2d,BatchNorm,DropOut和Pool2d组成。
.. code-block:: python
import paddle.fluid as fluid
img = fluid.layers.data(name='img', shape=[1, 28, 28], dtype='float32')
conv_pool = fluid.nets.img_conv_group(input=img,
conv_padding=1,
......
......@@ -340,6 +340,7 @@ DGC还使用动量因子掩藏(momentum factor masking)和预训练(warm-up)来
.. code-block:: python
import paddle.fluid as fluid
optimizer = fluid.optimizer.DGCMomentumOptimizer(
learning_rate=0.0001,
momentum=0.9,
......@@ -349,84 +350,6 @@ DGC还使用动量因子掩藏(momentum factor masking)和预训练(warm-up)来
.. _cn_api_fluid_optimizer_PipelineOptimizer:
PipelineOptimizer
-------------------------------
.. py:class:: paddle.fluid.optimizer.PipelineOptimizer(optimizer, cut_list=None, place_list=None, concurrency_list=None, queue_size=30, sync_steps=1, start_cpu_core_id=0)
Pipeline optimizer for training. The program is split by cut_list. If the length of cut_list is k, the whole program (including the backward part) is split into 2 * k - 1 sections, so place_list and concurrency_list must also have length 2 * k - 1.

.. note::
    Although asynchronous mode is applied in pipeline training to speed it up, the final performance depends on the training progress of each pipeline. We will try synchronous mode in the future.

Parameters:
    - **optimizer** (Optimizer) - the base optimizer, such as SGD
    - **cut_list** (list of Variable list) - the cut variables of main_program
    - **place_lis** (list of Place) - the place where a given section runs
    - **concurrency_lis** (list of int) - the concurrency degree
    - **queue_size** (int) - each section consumes scopes from its input queue (in-scope queue) and produces scopes to its output queue (out-scope queue); this parameter specifies the queue size. Optional, default: 30.
    - **sync_steps** (int) - the number of synchronization steps between different GPU cards
    - **start_cpu_core_id** (int) - the id of the first CPU core to use. Optional, default: 0.

**Code example**
.. code-block:: python
x = fluid.layers.data(name='x', shape=[1], dtype='int64', lod_level=0)
y = fluid.layers.data(name='y', shape=[1], dtype='int64', lod_level=0)
emb_x = layers.embedding(input=x, param_attr=fluid.ParamAttr(name="embx"), size=[10,2], is_sparse=False)
emb_y = layers.embedding(input=y, param_attr=fluid.ParamAttr(name="emby",learning_rate=0.9), size=[10,2], is_sparse=False)
concat = layers.concat([emb_x, emb_y], axis=1)
fc = layers.fc(input=concat, name="fc", size=1, num_flatten_dims=1, bias_attr=False)
loss = layers.reduce_mean(fc)
optimizer = fluid.optimizer.SGD(learning_rate=0.5)
optimizer = fluid.optimizer.PipelineOptimizer(optimizer,
cut_list=[[emb_x, emb_y], [loss]],
place_list=[fluid.CPUPlace(), fluid.CUDAPlace(0), fluid.CPUPlace()],
concurrency_list=[1, 1, 4],
queue_size=2,
sync_steps=1,
)
optimizer.minimize(loss)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
filelist = [] # set your own filelist as needed, e.g. filelist = ["dataA.txt"]
dataset = fluid.DatasetFactory().create_dataset("FileInstantDataset")
dataset.set_use_var([x,y])
dataset.set_batch_size(batch_size)
dataset.set_filelist(filelist)
exe.train_from_dataset(
fluid.default_main_program(),
dataset,
thread=2,
debug=False,
fetch_list=[],
fetch_info=[],
print_period=1)
.. py:method:: extract_section_opt_ops(ops, cut_point_name)

Get the optimize ops (opt ops) of the specified section.

.. py:method:: extract_section_opt_ops(ops, cut_point_name)

Get the inputs and outputs of the specified section.

.. py:method:: find_persistable_vars(ops, whole_parameters)

Get the persistable input variables of the specified section.

.. py:method:: extract_section_ops(ops, cut_point_name)

Get the ops of the specified section.
.. _cn_api_fluid_optimizer_ExponentialMovingAverage:
ExponentialMovingAverage
......@@ -624,15 +547,16 @@ FTRL 原始论文: ( `https://www.eecs.tufts.edu/~dsculley/papers/ad-click-predi
LambOptimizer
-------------------------------
.. py:class:: paddle.fluid.optimizer.LambOptimizer(learning_rate=0.001, lamb_weight_decay=0.01, beta1=0.9, beta2=0.999, epsilon=1e-06, regularization=None, name=None)
.. py:class:: paddle.fluid.optimizer.LambOptimizer(learning_rate=0.001, lamb_weight_decay=0.01, beta1=0.9, beta2=0.999, epsilon=1e-06, regularization=None, exclude_from_weight_decay_fn=None, name=None)
LAMB (Layer-wise Adaptive Moments optimizer for Batching training) optimizer

The LAMB optimizer is designed to scale up the batch size of training without loss of accuracy, supporting adaptive element-wise updating and accurate layer-wise correction. For more information, see Reducing BERT Pre-Training Time from 3 Days to 76 Minutes.
The LAMB optimizer is designed to scale up the batch size of training without loss of accuracy, supporting adaptive element-wise updating and accurate layer-wise correction. For more information, see `Large Batch Optimization for
Deep Learning: Training BERT in 76 minutes <https://arxiv.org/pdf/1904.00962.pdf>`_ .

The parameter update proceeds as follows:
.. math::
\begin{align}\begin{aligned}m_t^l & = \beta_1 m_{t - 1}^l + (1 - \beta_1)g_t^l\\v_t^l & = \beta_2 v_{t - 1}^l + (1 - \beta_2)g_t^l \odot g_t^l\\\widehat{m}_t^l & = m_t^l/(1 - \beta_1^t)\\\widehat{v}_t^l & = v_t^l/(1 - \beta_2^t)\\r_1 & = \left \| w_{t-1}^l \right \|_2\\r_2 & = \left \| \frac{\widehat{m}_t^l}{\sqrt{\widehat{v}_t^l+\epsilon}} + \lambda w_{t-1}^l \right \|_2\\r & = r_1 / r_2\\\eta^l & = r \times \eta\\w_t^l & = w_{t-1}^l -\eta ^l \times (\frac{\widehat{m}_t^l}{\sqrt{\widehat{v}_t^l+\epsilon}} + \lambda w_{t-1}^l)\end{aligned}\end{align}
\begin{align}\begin{aligned}m_t &= \beta_1 m_{t - 1} + (1 - \beta_1)g_t \\ v_t &= \beta_2 v_{t - 1} + (1 - \beta_2)g_t^2 \\ r_t &= \frac{m_t}{\sqrt{v_t}+\epsilon} \\ w_t &= w_{t-1} - \eta_t \frac{\left \| w_{t-1} \right \|}{\left \| r_t + \lambda w_{t-1} \right \|} (r_t + \lambda w_{t-1})\end{aligned}\end{align}
where :math:`m` is the first moment, :math:`v` is the second moment, :math:`\eta` is the learning rate, and :math:`\lambda` is the LAMB weight decay rate.
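For intuition, a one-step numpy sketch of the update rule above (all values are illustrative):

.. code-block:: python

import numpy as np

beta1, beta2, epsilon = 0.9, 0.999, 1e-6
eta, lamb_decay = 0.001, 0.01      # learning rate, LAMB weight decay

w = np.array([0.5, -0.3])          # parameter
g = np.array([0.1, 0.2])           # its gradient
m = np.zeros_like(w)               # first moment
v = np.zeros_like(w)               # second moment

m = beta1 * m + (1 - beta1) * g
v = beta2 * v + (1 - beta2) * g * g
r = m / (np.sqrt(v) + epsilon)
update = r + lamb_decay * w
trust_ratio = np.linalg.norm(w) / np.linalg.norm(update)
w = w - eta * trust_ratio * update # layer-wise corrected step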
......@@ -642,7 +566,8 @@ LAMB优化器旨在不降低准确性的条件下扩大训练的批量大小,
- **beta1** (float) – exponential decay rate for the first moment estimates.
- **beta2** (float) – exponential decay rate for the second moment estimates.
- **epsilon** (float) – a small float value for numerical stability.
- **regularization** – a regularizer, such as fluid.regularizer.L1DecayRegularizer.
- **regularization** (Regularizer) – a regularizer, such as fluid.regularizer.L1DecayRegularizer.
- **exclude_from_weight_decay_fn** (function) – excludes a parameter from weight decay when it returns True for that parameter.
- **name** (str|None) – an optional name prefix.

**Code example**
......@@ -655,7 +580,11 @@ LAMB优化器旨在不降低准确性的条件下扩大训练的批量大小,
hidden = fluid.layers.fc(input=data, size=10)
cost = fluid.layers.mean(hidden)
optimizer = fluid.optimizer.Lamb(learning_rate=0.002)
def exclude_fn(param):
return param.name.endswith('.b_0')
optimizer = fluid.optimizer.Lamb(learning_rate=0.002,
exclude_from_weight_decay_fn=exclude_fn)
optimizer.minimize(cost)
......@@ -703,6 +632,7 @@ LARS支持的Momentum优化器
.. code-block:: python
import paddle.fluid as fluid
optimizer = fluid.optimizer.LarsMomentum(learning_rate=0.2, momentum=0.1, lars_weight_decay=0.001)
optimizer.minimize(cost)
......@@ -859,6 +789,70 @@ MomentumOptimizer
.. _cn_api_fluid_optimizer_PipelineOptimizer:
PipelineOptimizer
-------------------------------
.. py:class:: paddle.fluid.optimizer.PipelineOptimizer(optimizer, cut_list=None, place_list=None, concurrency_list=None, queue_size=30, sync_steps=1, start_cpu_core_id=0)
Train using pipeline mode.

The Program is split according to the cut list ``cut_list``. If the length of cut_list is k, the whole program (including the backward part) is split into 2*k-1 sections, so place_list and concurrency_list must also have length 2*k-1.

.. note::
    Although asynchronous updating is used in pipeline training mode for speed, the final result depends on the training progress of each pipeline. We will try synchronous mode in the future.

Parameters:
    - **optimizer** (Optimizer) - the base optimizer, such as SGD
    - **cut_list** (list of Variable list) - the list of cut variables of main_program
    - **place_list** (list of Place) - the place where the corresponding section runs
    - **concurrency_list** (list of int) - the concurrency degree of each section
    - **queue_size** (int) - each section consumes scopes from its input queue (in-scope queue) and produces scopes to its output queue (out-scope queue); this parameter specifies the queue size. Optional, default: 30
    - **sync_steps** (int) - the number of synchronization steps between different GPU cards. Optional, default: 1
    - **start_cpu_core_id** (int) - the id of the first CPU core to use. Optional, default: 0

**Code example**
.. code-block:: python
import paddle.fluid as fluid
import paddle.fluid.layers as layers
x = fluid.layers.data(name='x', shape=[1], dtype='int64', lod_level=0)
y = fluid.layers.data(name='y', shape=[1], dtype='int64', lod_level=0)
emb_x = layers.embedding(input=x, param_attr=fluid.ParamAttr(name="embx"), size=[10,2], is_sparse=False)
emb_y = layers.embedding(input=y, param_attr=fluid.ParamAttr(name="emby",learning_rate=0.9), size=[10,2], is_sparse=False)
concat = layers.concat([emb_x, emb_y], axis=1)
fc = layers.fc(input=concat, name="fc", size=1, num_flatten_dims=1, bias_attr=False)
loss = layers.reduce_mean(fc)
optimizer = fluid.optimizer.SGD(learning_rate=0.5)
optimizer = fluid.optimizer.PipelineOptimizer(optimizer,
cut_list=[[emb_x, emb_y], [loss]],
place_list=[fluid.CPUPlace(), fluid.CUDAPlace(0), fluid.CPUPlace()],
concurrency_list=[1, 1, 4],
queue_size=2,
sync_steps=1,
)
optimizer.minimize(loss)
place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
filelist = [] # you should set your own filelist, e.g. filelist = ["dataA.txt"]
dataset = fluid.DatasetFactory().create_dataset("FileInstantDataset")
dataset.set_use_var([x,y])
dataset.set_batch_size(batch_size)  # assumes batch_size was set earlier, e.g. batch_size = 32
dataset.set_filelist(filelist)
exe.train_from_dataset(
fluid.default_main_program(),
dataset,
thread=2,
debug=False,
fetch_list=[],
fetch_info=[],
print_period=1)
.. _cn_api_fluid_optimizer_RMSPropOptimizer:
......
......@@ -82,6 +82,7 @@ profile interface 。与cuda_profiler不同,此profiler可用于分析CPU和GP
.. code-block:: python

    import paddle.fluid as fluid
    import paddle.fluid.profiler as profiler
    import numpy as np
......@@ -118,6 +119,7 @@ reset_profiler
.. code-block:: python

    import paddle.fluid as fluid
    import paddle.fluid.profiler as profiler
    with profiler.profiler('CPU', 'total', '/tmp/profile'):
        for iter in range(10):
......@@ -155,6 +157,7 @@ start_profiler
.. code-block:: python

    import paddle.fluid as fluid
    import paddle.fluid.profiler as profiler
    profiler.start_profiler('GPU')
......@@ -195,6 +198,7 @@ stop_profiler
.. code-block:: python

    import paddle.fluid as fluid
    import paddle.fluid.profiler as profiler
    profiler.start_profiler('GPU')
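The profiler fragments above come from separate hunks and are cut off by the diff. For orientation, a minimal end-to-end sketch of the start/reset/stop API might look like the following; the placeholder loop body is an assumption, not part of the original examples:

.. code-block:: python

    import paddle.fluid.profiler as profiler

    profiler.start_profiler('CPU')           # begin collecting timing data
    for iter in range(10):
        if iter == 2:
            profiler.reset_profiler()        # discard warm-up iterations
        # ... run one training or inference step here ...
    profiler.stop_profiler('total', '/tmp/profile')  # sort by total time, dump to file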
......
......@@ -218,16 +218,16 @@ PaddlePaddle references the various BLAS/CUDA/cuDNN libraries through paths specified at compile time.
</thead>
<tbody>
<tr>
<td> paddlepaddle==[version code], e.g. paddlepaddle==1.5.0 </td>
<td> paddlepaddle==[version code], e.g. paddlepaddle==1.5.1 </td>
<td> Only the CPU build of PaddlePaddle for the corresponding version; see <a href=https://pypi.org/project/paddlepaddle/#history>Pypi</a> for specific versions. </td>
</tr>
<tr>
<td> paddlepaddle-gpu==1.5.0 </td>
<td> Version 1.5.0 compiled with CUDA 9.0 and cuDNN 7 </td>
<td> paddlepaddle-gpu==1.5.1 </td>
<td> Version 1.5.1 compiled with CUDA 9.0 and cuDNN 7 </td>
</tr>
<tr>
<td> paddlepaddle-gpu==1.5.0.post87 </td>
<td> Version 1.5.0 compiled with CUDA 8.0 and cuDNN 7 </td>
<td> paddlepaddle-gpu==1.5.1.post87 </td>
<td> Version 1.5.1 compiled with CUDA 8.0 and cuDNN 7 </td>
</tr>
</tbody>
</table>
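After installing any of these packages, a quick sanity check (generic Python, not part of the original page) is to print the installed version:

.. code-block:: python

    import paddle

    # should print the release just installed, e.g. 1.5.1
    print(paddle.__version__)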
......@@ -259,74 +259,110 @@ PaddlePaddle references the various BLAS/CUDA/cuDNN libraries through paths specified at compile time.
<tbody>
<tr>
<td> cpu-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mkl/paddlepaddle-1.5.0-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.5.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mkl/paddlepaddle-1.5.0-cp27-cp27m-linux_x86_64.whl">
paddlepaddle-1.5.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mkl/paddlepaddle-1.5.0-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.5.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mkl/paddlepaddle-1.5.0-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.5.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mkl/paddlepaddle-1.5.0-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.5.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mkl/paddlepaddle-1.5.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mkl/paddlepaddle-1.5.1-cp27-cp27m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mkl/paddlepaddle-1.5.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mkl/paddlepaddle-1.5.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mkl/paddlepaddle-1.5.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cpu-openblas </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-openblas/paddlepaddle-1.5.0-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.5.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-openblas/paddlepaddle-1.5.0-cp27-cp27m-linux_x86_64.whl"> paddlepaddle-1.5.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-openblas/paddlepaddle-1.5.0-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.5.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-openblas/paddlepaddle-1.5.0-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.5.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-openblas/paddlepaddle-1.5.0-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.5.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-openblas/paddlepaddle-1.5.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-openblas/paddlepaddle-1.5.1-cp27-cp27m-linux_x86_64.whl"> paddlepaddle-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-openblas/paddlepaddle-1.5.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-openblas/paddlepaddle-1.5.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-openblas/paddlepaddle-1.5.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda8-cudnn7-openblas </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.0-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.0-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.0-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.0-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.0-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.1-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.1-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.1-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.1-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda8-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post87-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post87-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post87-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post87-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post87-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post87-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post87-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post87-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post87-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post87-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda9-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post97-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post97-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post97-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post97-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post97-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post97-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post97-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post97-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post97-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post97-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda10-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post107-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post107-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post107-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post107-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.5.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post107-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.5.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post107-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post107-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post107-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post107-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post107-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> win_cpu_openblas </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle-1.5.1-cp27-cp27m-win_amd64.whl">
paddlepaddle-1.5.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle-1.5.1-cp35-cp35m-win_amd64.whl">
paddlepaddle-1.5.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle-1.5.1-cp36-cp36m-win_amd64.whl">
paddlepaddle-1.5.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle-1.5.1-cp37-cp37m-win_amd64.whl">
paddlepaddle-1.5.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cuda8_cudnn7_openblas </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post87-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post87-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post87-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post87-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cuda9_cudnn7_openblas </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post97-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post97-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post97-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post97-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> mac_cpu </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mac/paddlepaddle-1.5.0-cp27-cp27m-macosx_10_6_intel.whl">
paddlepaddle-1.5.0-cp27-cp27m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mac/paddlepaddle-1.5.0-cp35-cp35m-macosx_10_6_intel.whl">
paddlepaddle-1.5.0-cp35-cp35m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mac/paddlepaddle-1.5.0-cp36-cp36m-macosx_10_6_intel.whl">
paddlepaddle-1.5.0-cp36-cp36m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mac/paddlepaddle-1.5.0-cp37-cp37m-macosx_10_6_intel.whl">
paddlepaddle-1.5.0-cp37-cp37m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mac/paddlepaddle-1.5.1-cp27-cp27m-macosx_10_6_intel.whl">
paddlepaddle-1.5.1-cp27-cp27m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mac/paddlepaddle-1.5.1-cp35-cp35m-macosx_10_6_intel.whl">
paddlepaddle-1.5.1-cp35-cp35m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mac/paddlepaddle-1.5.1-cp36-cp36m-macosx_10_6_intel.whl">
paddlepaddle-1.5.1-cp36-cp36m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mac/paddlepaddle-1.5.1-cp37-cp37m-macosx_10_6_intel.whl">
paddlepaddle-1.5.1-cp37-cp37m-macosx_10_6_intel.whl</a></td>
</tr>
</tbody>
</table>
......
......@@ -239,16 +239,16 @@ PaddlePaddle implements references to various BLAS/CUDA/cuDNN libraries by specif
</thead>
<tbody>
<tr>
<td> paddlepaddle==[version code] such as paddlepaddle==1.5.0 </td>
<td> paddlepaddle==[version code] such as paddlepaddle==1.5.1 </td>
<td> Only supports the CPU build of PaddlePaddle for the corresponding version; please refer to <a href=https://pypi.org/project/paddlepaddle/#history>Pypi</a> for the specific version. </td>
</tr>
<tr>
<td> paddlepaddle-gpu==1.5.0 </td>
<td> Using version 1.5.0 compiled with CUDA 9.0 and cuDNN 7 </td>
<td> paddlepaddle-gpu==1.5.1 </td>
<td> Version 1.5.1 compiled with CUDA 9.0 and cuDNN 7 </td>
</tr>
<tr>
<td> paddlepaddle-gpu==1.5.0.post87 </td>
<td> Using version 1.5.0 compiled with CUDA 8.0 and cuDNN 7 </td>
<td> paddlepaddle-gpu==1.5.1.post87 </td>
<td> Version 1.5.1 compiled with CUDA 8.0 and cuDNN 7 </td>
</tr>
</tbody>
</table>
......@@ -319,74 +319,110 @@ You can find the docker image for each release of PaddlePaddle in the [DockerHub
<tbody>
<tr>
<td> cpu-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mkl/paddlepaddle-1.5.0-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.5.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mkl/paddlepaddle-1.5.0-cp27-cp27m-linux_x86_64.whl">
paddlepaddle-1.5.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mkl/paddlepaddle-1.5.0-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.5.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mkl/paddlepaddle-1.5.0-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.5.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mkl/paddlepaddle-1.5.0-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.5.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mkl/paddlepaddle-1.5.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mkl/paddlepaddle-1.5.1-cp27-cp27m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mkl/paddlepaddle-1.5.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mkl/paddlepaddle-1.5.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mkl/paddlepaddle-1.5.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cpu-openblas </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-openblas/paddlepaddle-1.5.0-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.5.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-openblas/paddlepaddle-1.5.0-cp27-cp27m-linux_x86_64.whl"> paddlepaddle-1.5.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-openblas/paddlepaddle-1.5.0-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.5.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-openblas/paddlepaddle-1.5.0-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.5.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-openblas/paddlepaddle-1.5.0-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.5.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-openblas/paddlepaddle-1.5.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-openblas/paddlepaddle-1.5.1-cp27-cp27m-linux_x86_64.whl"> paddlepaddle-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-openblas/paddlepaddle-1.5.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-openblas/paddlepaddle-1.5.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-openblas/paddlepaddle-1.5.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda8-cudnn7-openblas </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.0-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.0-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.0-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.0-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.0-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.1-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.1-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.1-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.1-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-openblas/paddlepaddle_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda8-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post87-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post87-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post87-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post87-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post87-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post87-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post87-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post87-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post87-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda8-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post87-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda9-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post97-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post97-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post97-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post97-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post97-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post97-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post97-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post97-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post97-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post97-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda10-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post107-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post107-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post107-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post107-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.5.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.0.post107-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.5.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post107-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post107-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post107-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.5.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post107-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.5.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.5.1.post107-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.5.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> win_cpu_openblas </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle-1.5.1-cp27-cp27m-win_amd64.whl">
paddlepaddle-1.5.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle-1.5.1-cp35-cp35m-win_amd64.whl">
paddlepaddle-1.5.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle-1.5.1-cp36-cp36m-win_amd64.whl">
paddlepaddle-1.5.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle-1.5.1-cp37-cp37m-win_amd64.whl">
paddlepaddle-1.5.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cuda8_cudnn7_openblas </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post87-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post87-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post87-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post87-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cuda9_cudnn7_openblas </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post97-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post97-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post97-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-win-open/paddlepaddle_gpu-1.5.1.post97-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.5.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> mac_cpu </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mac/paddlepaddle-1.5.0-cp27-cp27m-macosx_10_6_intel.whl">
paddlepaddle-1.5.0-cp27-cp27m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mac/paddlepaddle-1.5.0-cp35-cp35m-macosx_10_6_intel.whl">
paddlepaddle-1.5.0-cp35-cp35m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mac/paddlepaddle-1.5.0-cp36-cp36m-macosx_10_6_intel.whl">
paddlepaddle-1.5.0-cp36-cp36m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.0-cpu-mac/paddlepaddle-1.5.0-cp37-cp37m-macosx_10_6_intel.whl">
paddlepaddle-1.5.0-cp37-cp37m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mac/paddlepaddle-1.5.1-cp27-cp27m-macosx_10_6_intel.whl">
paddlepaddle-1.5.1-cp27-cp27m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mac/paddlepaddle-1.5.1-cp35-cp35m-macosx_10_6_intel.whl">
paddlepaddle-1.5.1-cp35-cp35m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mac/paddlepaddle-1.5.1-cp36-cp36m-macosx_10_6_intel.whl">
paddlepaddle-1.5.1-cp36-cp36m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.5.1-cpu-mac/paddlepaddle-1.5.1-cp37-cp37m-macosx_10_6_intel.whl">
paddlepaddle-1.5.1-cp37-cp37m-macosx_10_6_intel.whl</a></td>
</tr>
</tbody>
</table>
......