Commit 34ced953, authored by grasswolfs

Merge branch 'release/1.8' of https://github.com/PaddlePaddle/FluidDoc into remotes/origin/release/1.8
......@@ -59,19 +59,21 @@ result = exe.run(fluid.default_main_program(), fetch_list=[avg_cost])
```python
import paddle.fluid as fluid
import paddle.fluid.compiler as compiler
import paddle.fluid.profiler as profiler
data1 = fluid.layers.fill_constant(shape=[1, 3, 8, 8], value=0.5, dtype='float32')
data2 = fluid.layers.fill_constant(shape=[1, 3, 5, 5], value=0.5, dtype='float32')
shape = fluid.layers.shape(data2)
shape = fluid.layers.slice(shape, axes=[0], starts=[0], ends=[4])
out = fluid.layers.crop_tensor(data1, shape=shape)
place = fluid.CUDAPlace(0)
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
compiled_prog = compiler.CompiledProgram(fluid.default_main_program())
with profiler.profiler('All', 'total') as prof:
    for i in range(10):
        result = exe.run(program=compiled_prog, fetch_list=[out])
```
After the program finishes, a profile report is printed automatically. In the report below, the `GpuMemCpy Summary` section breaks down the time spent in the two kinds of data-transfer calls. A `GpuMemcpySync` happens whenever an OP's input Tensor resides on a different device than the one the OP executes on, and this is usually the item we can optimize directly. Digging further, we can see that `GpuMemcpySync` occurs during both `slice` and `crop_tensor`. Even though the program is configured to run on the GPU, some OPs in the framework, such as `shape`, place their outputs on the CPU.
......@@ -82,35 +84,34 @@ with profiler.profiler('All', 'total') as prof:
Note! This Report merge all thread info into one.
Place: All
Time unit: ms
Sorted by total time in descending order in the same thread
Total time: 26.6328
  Computation time       Total: 13.3133     Ratio: 49.9884%
  Framework overhead     Total: 13.3195     Ratio: 50.0116%

-------------------------     GpuMemCpy Summary     -------------------------

GpuMemcpy                Calls: 30          Total: 1.47508     Ratio: 5.5386%
GpuMemcpyAsync           Calls: 10          Total: 0.443514    Ratio: 1.66529%
GpuMemcpySync            Calls: 20          Total: 1.03157     Ratio: 3.87331%
------------------------- Event Summary -------------------------
Event Calls Total CPU Time (Ratio) GPU Time (Ratio) Min. Max. Ave. Ratio.
FastThreadedSSAGraphExecutorPrepare 10 9.16493 9.152509 (0.998645) 0.012417 (0.001355) 0.025192 8.85968 0.916493 0.344122
shape 10 8.33057 8.330568 (1.000000) 0.000000 (0.000000) 0.030711 7.99849 0.833057 0.312793
fill_constant 20 4.06097 4.024522 (0.991025) 0.036449 (0.008975) 0.075087 0.888959 0.203049 0.15248
slice 10 1.78033 1.750439 (0.983212) 0.029888 (0.016788) 0.148503 0.290851 0.178033 0.0668471
GpuMemcpySync:CPU->GPU 10 0.45524 0.446312 (0.980388) 0.008928 (0.019612) 0.039089 0.060694 0.045524 0.0170932
crop_tensor 10 1.67658 1.620542 (0.966578) 0.056034 (0.033422) 0.143906 0.258776 0.167658 0.0629515
GpuMemcpySync:GPU->CPU 10 0.57633 0.552906 (0.959357) 0.023424 (0.040643) 0.050657 0.076322 0.057633 0.0216398
Fetch 10 0.919361 0.895201 (0.973721) 0.024160 (0.026279) 0.082935 0.138122 0.0919361 0.0345199
GpuMemcpyAsync:GPU->CPU 10 0.443514 0.419354 (0.945526) 0.024160 (0.054474) 0.040639 0.059673 0.0443514 0.0166529
ScopeBufferedMonitor::post_local_exec_scopes_process 10 0.341999 0.341999 (1.000000) 0.000000 (0.000000) 0.028436 0.057134 0.0341999 0.0128413
eager_deletion 30 0.287236 0.287236 (1.000000) 0.000000 (0.000000) 0.005452 0.022696 0.00957453 0.010785
ScopeBufferedMonitor::pre_local_exec_scopes_process 10 0.047864 0.047864 (1.000000) 0.000000 (0.000000) 0.003668 0.011592 0.0047864 0.00179718
InitLocalVars 1 0.022981 0.022981 (1.000000) 0.000000 (0.000000) 0.022981 0.022981 0.022981 0.000862883
```
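When comparing runs before and after an optimization, it can be handy to pull the `GpuMemCpy Summary` figures out of the report text instead of reading them by eye. A minimal stdlib sketch (the `report` string is abridged from the output above; the profiler itself offers no such parsing helper, so this is just an assumed regex over its plain-text format):

```python
import re

# Abridged from the profile report above; only the GpuMemCpy summary lines.
report = """
GpuMemcpy                Calls: 30          Total: 1.47508     Ratio: 5.5386%
GpuMemcpyAsync           Calls: 10          Total: 0.443514    Ratio: 1.66529%
GpuMemcpySync            Calls: 20          Total: 1.03157     Ratio: 3.87331%
"""

pattern = re.compile(
    r"^(GpuMemcpy\w*)\s+Calls:\s*(\d+)\s+Total:\s*([\d.]+)\s+Ratio:\s*([\d.]+)%",
    re.MULTILINE,
)

# Map each event name to its call count, total time (ms) and time share (%).
summary = {
    name: {"calls": int(calls), "total_ms": float(total), "ratio": float(ratio)}
    for name, calls, total, ratio in pattern.findall(report)
}

# GpuMemcpySync is the item that device_guard can usually eliminate.
sync = summary["GpuMemcpySync"]
print(f"GpuMemcpySync: {sync['calls']} calls, {sync['total_ms']} ms "
      f"({sync['ratio']}% of total time)")
```

Tracking these three numbers across runs makes it easy to confirm that a `device_guard` change actually removed the synchronous copies.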
### Locating where data transfers happen via logs
......@@ -138,6 +139,7 @@ I0406 14:56:23.287473 17516 operator.cc:180] CUDAPlace(0) Op(crop_tensor), input
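To make the framework emit per-OP placement lines like the one above, the process has to run with glog verbose logging enabled. A sketch using glog-style environment variables; the module name and verbosity level here are assumptions, so adjust them to what your Paddle build actually honors:

```shell
# Ask glog for verbose logs from operator.cc so each OP prints its place
# and the places of its inputs/outputs (the level 3 here is an assumption).
export GLOG_vmodule=operator=3
# Or raise verbosity globally (much noisier):
# export GLOG_v=3
```

With these set, rerun the script and search its stderr output for `GpuMemcpySync` to see which OP triggered each transfer.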
```python
import paddle.fluid as fluid
import paddle.fluid.compiler as compiler
import paddle.fluid.profiler as profiler
data1 = fluid.layers.fill_constant(shape=[1, 3, 8, 8], value=0.5, dtype='float32')
......@@ -146,13 +148,13 @@ shape = fluid.layers.shape(data2)
with fluid.device_guard("cpu"):
    shape = fluid.layers.slice(shape, axes=[0], starts=[0], ends=[4])
out = fluid.layers.crop_tensor(data1, shape=shape)
place = fluid.CUDAPlace(0)
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
compiled_prog = compiler.CompiledProgram(fluid.default_main_program())
with profiler.profiler('All', 'total') as prof:
    for i in range(10):
        result = exe.run(program=compiled_prog, fetch_list=[out])
```
Looking at the `GpuMemCpy Summary` in the profile report again, we can see that `GpuMemcpySync` has been eliminated. In a real model, if `GpuMemcpySync` calls account for a sizable share of the runtime and can be avoided by setting `device_guard`, removing them brings a corresponding performance gain.
......
......@@ -6,6 +6,8 @@
append_backward
---------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.backward.append_backward
:noindex:
......@@ -6,6 +6,8 @@
gradients
---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.backward.gradients
:noindex:
......@@ -6,6 +6,8 @@
set_gradient_clip
-----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.clip.set_gradient_clip
:noindex:
......@@ -6,6 +6,8 @@
BackwardStrategy
----------------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.BackwardStrategy
:members:
:noindex:
......
......@@ -6,6 +6,8 @@
CosineDecay
-----------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.CosineDecay
:members:
:noindex:
......
......@@ -6,6 +6,8 @@
ExponentialDecay
----------------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.ExponentialDecay
:members:
:noindex:
......
......@@ -6,6 +6,8 @@
InverseTimeDecay
----------------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.InverseTimeDecay
:members:
:noindex:
......
......@@ -6,6 +6,8 @@
NaturalExpDecay
---------------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.NaturalExpDecay
:members:
:noindex:
......
......@@ -6,6 +6,8 @@
NoamDecay
---------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.NoamDecay
:members:
:noindex:
......
......@@ -6,6 +6,8 @@
PiecewiseDecay
--------------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.PiecewiseDecay
:members:
:noindex:
......
......@@ -6,6 +6,8 @@
PolynomialDecay
---------------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.PolynomialDecay
:members:
:noindex:
......
......@@ -6,6 +6,8 @@
TracedLayer
-----------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.TracedLayer
:members:
:noindex:
......
......@@ -6,6 +6,8 @@
grad
----
:api_attr: imperative programming (dynamic graph)
.. autofunction:: paddle.fluid.dygraph.grad
:noindex:
......@@ -6,6 +6,8 @@
guard
-----
:api_attr: imperative programming (dynamic graph)
.. autofunction:: paddle.fluid.dygraph.guard
:noindex:
......@@ -6,6 +6,8 @@
load_dygraph
------------
:api_attr: imperative programming (dynamic graph)
.. autofunction:: paddle.fluid.dygraph.load_dygraph
:noindex:
......@@ -6,6 +6,8 @@
no_grad
-------
:api_attr: imperative programming (dynamic graph)
.. autofunction:: paddle.fluid.dygraph.no_grad
:noindex:
......@@ -6,6 +6,8 @@
save_dygraph
------------
:api_attr: imperative programming (dynamic graph)
.. autofunction:: paddle.fluid.dygraph.save_dygraph
:noindex:
......@@ -6,6 +6,8 @@
to_variable
-----------
:api_attr: imperative programming (dynamic graph)
.. autofunction:: paddle.fluid.dygraph.to_variable
:noindex:
......@@ -6,6 +6,8 @@
Executor
--------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.executor.Executor
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
global_scope
------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.executor.global_scope
:noindex:
......@@ -6,6 +6,8 @@
scope_guard
-----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.executor.scope_guard
:noindex:
......@@ -6,6 +6,8 @@
BuildStrategy
-------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.BuildStrategy
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
CompiledProgram
---------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.CompiledProgram
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
DataFeedDesc
------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.DataFeedDesc
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
DataFeeder
----------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.DataFeeder
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
ExecutionStrategy
-----------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.ExecutionStrategy
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
Executor
--------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.Executor
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
ParallelExecutor
----------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.ParallelExecutor
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
WeightNormParamAttr
-------------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.WeightNormParamAttr
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
create_random_int_lodtensor
---------------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.create_random_int_lodtensor
:noindex:
......@@ -6,6 +6,8 @@
data
----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.data
:noindex:
......@@ -6,6 +6,8 @@
device_guard
------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.device_guard
:noindex:
......@@ -6,6 +6,8 @@
embedding
---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.embedding
:noindex:
......@@ -6,6 +6,8 @@
global_scope
------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.global_scope
:noindex:
......@@ -6,6 +6,8 @@
gradients
---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.gradients
:noindex:
......@@ -6,6 +6,8 @@
memory_optimize
---------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.memory_optimize
:noindex:
......@@ -6,6 +6,8 @@
name_scope
----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.name_scope
:noindex:
......@@ -6,6 +6,8 @@
program_guard
-------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.program_guard
:noindex:
......@@ -6,6 +6,8 @@
release_memory
--------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.release_memory
:noindex:
......@@ -6,6 +6,8 @@
save
----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.save
:noindex:
......@@ -6,6 +6,8 @@
scope_guard
-----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.scope_guard
:noindex:
......@@ -6,6 +6,8 @@
load_inference_model
--------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.load_inference_model
:noindex:
......@@ -6,6 +6,8 @@
load_params
-----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.load_params
:noindex:
......@@ -6,6 +6,8 @@
load_persistables
-----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.load_persistables
:noindex:
......@@ -6,6 +6,8 @@
save
----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.save
:noindex:
......@@ -6,6 +6,8 @@
save_inference_model
--------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.save_inference_model
:noindex:
......@@ -6,6 +6,8 @@
save_params
-----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.save_params
:noindex:
......@@ -6,6 +6,8 @@
save_persistables
-----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.save_persistables
:noindex:
......@@ -6,6 +6,8 @@
save_vars
---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.save_vars
:noindex:
......@@ -6,6 +6,8 @@
BeamSearchDecoder
-----------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.BeamSearchDecoder
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
Decoder
-------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.Decoder
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
DynamicRNN
----------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.DynamicRNN
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
GRUCell
-------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.GRUCell
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
IfElse
------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.IfElse
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
LSTMCell
--------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.LSTMCell
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
Print
-----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.Print
:noindex:
......@@ -6,6 +6,8 @@
RNNCell
-------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.RNNCell
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
StaticRNN
---------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.StaticRNN
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
Switch
------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.Switch
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
While
-----
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.While
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
autoincreased_step_counter
--------------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.autoincreased_step_counter
:noindex:
......@@ -6,6 +6,8 @@
batch_norm
----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.batch_norm
:noindex:
......@@ -6,6 +6,8 @@
bilinear_tensor_product
-----------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.bilinear_tensor_product
:noindex:
......@@ -6,6 +6,8 @@
case
----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.case
:noindex:
......@@ -6,6 +6,8 @@
center_loss
-----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.center_loss
:noindex:
......@@ -6,6 +6,8 @@
cond
----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.cond
:noindex:
......@@ -6,6 +6,8 @@
conv2d
------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.conv2d
:noindex:
......@@ -6,6 +6,8 @@
conv2d_transpose
----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.conv2d_transpose
:noindex:
......@@ -6,6 +6,8 @@
conv3d
------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.conv3d
:noindex:
......@@ -6,6 +6,8 @@
conv3d_transpose
----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.conv3d_transpose
:noindex:
......@@ -6,6 +6,8 @@
create_parameter
----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.create_parameter
:noindex:
......@@ -6,6 +6,8 @@
create_py_reader_by_data
------------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.create_py_reader_by_data
:noindex:
......@@ -6,6 +6,8 @@
crf_decoding
------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.crf_decoding
:noindex:
......@@ -6,6 +6,8 @@
data
----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.data
:noindex:
......@@ -6,6 +6,8 @@
data_norm
---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.data_norm
:noindex:
......@@ -6,6 +6,8 @@
deformable_conv
---------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.deformable_conv
:noindex:
......@@ -6,6 +6,8 @@
dynamic_decode
--------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.dynamic_decode
:noindex:
......@@ -6,6 +6,8 @@
dynamic_gru
-----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.dynamic_gru
:noindex:
......@@ -6,6 +6,8 @@
dynamic_lstm
------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.dynamic_lstm
:noindex:
......@@ -6,6 +6,8 @@
dynamic_lstmp
-------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.dynamic_lstmp
:noindex:
......@@ -6,6 +6,8 @@
embedding
---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.embedding
:noindex:
......@@ -6,6 +6,8 @@
fc
--
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.fc
:noindex:
......@@ -6,6 +6,8 @@
group_norm
----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.group_norm
:noindex:
......@@ -6,6 +6,8 @@
gru_unit
--------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.gru_unit
:noindex:
......@@ -6,6 +6,8 @@
hsigmoid
--------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.hsigmoid
:noindex:
......@@ -6,6 +6,8 @@
im2sequence
-----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.im2sequence
:noindex:
......@@ -6,6 +6,8 @@
inplace_abn
-----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.inplace_abn
:noindex:
......@@ -6,6 +6,8 @@
instance_norm
-------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.instance_norm
:noindex:
......@@ -6,6 +6,8 @@
layer_norm
----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.layer_norm
:noindex:
......@@ -6,6 +6,8 @@
linear_chain_crf
----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.linear_chain_crf
:noindex:
......@@ -6,6 +6,8 @@
lstm
----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.lstm
:noindex:
......@@ -6,6 +6,8 @@
lstm_unit
---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.lstm_unit
:noindex:
......@@ -6,6 +6,8 @@
multi_box_head
--------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.multi_box_head
:noindex:
......@@ -6,6 +6,8 @@
nce
---
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.nce
:noindex:
......@@ -6,6 +6,8 @@
prelu
-----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.prelu
:noindex:
......@@ -6,6 +6,8 @@
py_func
-------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.py_func
:noindex:
......@@ -6,6 +6,8 @@
py_reader
---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.py_reader
:noindex:
......@@ -6,6 +6,8 @@
read_file
---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.read_file
:noindex:
......@@ -6,6 +6,8 @@
rnn
---
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.rnn
:noindex:
......@@ -6,6 +6,8 @@
row_conv
--------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.row_conv
:noindex:
......@@ -6,6 +6,8 @@
sequence_concat
---------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_concat
:noindex:
......@@ -6,6 +6,8 @@
sequence_conv
-------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_conv
:noindex:
......@@ -6,6 +6,8 @@
sequence_enumerate
------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_enumerate
:noindex:
......@@ -6,6 +6,8 @@
sequence_expand
---------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_expand
:noindex:
......@@ -6,6 +6,8 @@
sequence_expand_as
------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_expand_as
:noindex:
......@@ -6,6 +6,8 @@
sequence_first_step
-------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_first_step
:noindex:
......@@ -6,6 +6,8 @@
sequence_last_step
------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_last_step
:noindex:
......@@ -6,6 +6,8 @@
sequence_pad
------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_pad
:noindex:
......@@ -6,6 +6,8 @@
sequence_pool
-------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_pool
:noindex:
......@@ -6,6 +6,8 @@
sequence_reshape
----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_reshape
:noindex:
......@@ -6,6 +6,8 @@
sequence_scatter
----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_scatter
:noindex:
......@@ -6,6 +6,8 @@
sequence_slice
--------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_slice
:noindex:
......@@ -6,6 +6,8 @@
sequence_softmax
----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_softmax
:noindex:
......@@ -6,6 +6,8 @@
sequence_unpad
--------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_unpad
:noindex:
......@@ -6,6 +6,8 @@
spectral_norm
-------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.spectral_norm
:noindex:
......@@ -6,6 +6,8 @@
switch_case
-----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.switch_case
:noindex:
......@@ -6,6 +6,8 @@
while_loop
----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.while_loop
:noindex:
......@@ -6,6 +6,8 @@
glu
---
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.nets.glu
:noindex:
......@@ -6,6 +6,8 @@
img_conv_group
--------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.nets.img_conv_group
:noindex:
......@@ -6,6 +6,8 @@
scaled_dot_product_attention
----------------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.nets.scaled_dot_product_attention
:noindex:
......@@ -6,6 +6,8 @@
sequence_conv_pool
------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.nets.sequence_conv_pool
:noindex:
......@@ -6,6 +6,8 @@
simple_img_conv_pool
--------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.nets.simple_img_conv_pool
:noindex:
......@@ -6,6 +6,8 @@
DGCMomentumOptimizer
--------------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.optimizer.DGCMomentumOptimizer
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
ExponentialMovingAverage
------------------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.optimizer.ExponentialMovingAverage
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
LookaheadOptimizer
------------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.optimizer.LookaheadOptimizer
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
ModelAverage
------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.optimizer.ModelAverage
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
PipelineOptimizer
-----------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.optimizer.PipelineOptimizer
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
RecomputeOptimizer
------------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.optimizer.RecomputeOptimizer
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
DistributeTranspiler
--------------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.transpiler.DistributeTranspiler
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
DistributeTranspilerConfig
--------------------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.transpiler.DistributeTranspilerConfig
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
HashName
--------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.transpiler.HashName
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
RoundRobin
----------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.transpiler.RoundRobin
:members:
:inherited-members:
......
......@@ -6,6 +6,8 @@
memory_optimize
---------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.transpiler.memory_optimize
:noindex:
......@@ -6,6 +6,8 @@
release_memory
--------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.transpiler.release_memory
:noindex:
......@@ -3,7 +3,7 @@
append_backward
-------------------------------
:api_attr: declarative programming (static graph)
.. py:function:: paddle.fluid.backward.append_backward(loss, parameter_list=None, no_grad_set=None, callbacks=None)
......
......@@ -3,7 +3,7 @@
gradients
-------------------------------
:api_attr: declarative programming (static graph)
.. py:function:: paddle.fluid.backward.gradients(targets, inputs, target_gradients=None, no_grad_set=None)
......
......@@ -3,7 +3,7 @@
set_gradient_clip
-------------------------------
:api_attr: declarative programming (static graph)
.. py:function:: paddle.fluid.clip.set_gradient_clip(clip, param_list=None, program=None)
......
......@@ -3,7 +3,7 @@
BackwardStrategy
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.BackwardStrategy
......
......@@ -3,7 +3,7 @@
CosineDecay
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.CosineDecay(learning_rate, step_each_epoch, epochs, begin=0, step=1, dtype='float32')
......
......@@ -3,7 +3,7 @@
ExponentialDecay
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.ExponentialDecay(learning_rate, decay_steps, decay_rate, staircase=False, begin=0, step=1, dtype='float32')
......
......@@ -3,7 +3,7 @@
FC
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.FC(name_scope, size, num_flatten_dims=1, param_attr=None, bias_attr=None, act=None, is_test=False, dtype='float32')
......
......@@ -3,7 +3,7 @@
InverseTimeDecay
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.InverseTimeDecay(learning_rate, decay_steps, decay_rate, staircase=False, begin=0, step=1, dtype='float32')
......
......@@ -3,7 +3,7 @@
NaturalExpDecay
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.NaturalExpDecay(learning_rate, decay_steps, decay_rate, staircase=False, begin=0, step=1, dtype='float32')
......
......@@ -3,7 +3,7 @@
NoamDecay
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.NoamDecay(d_model, warmup_steps, begin=1, step=1, dtype='float32', learning_rate=1.0)
......
......@@ -3,7 +3,7 @@
PiecewiseDecay
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.PiecewiseDecay(boundaries, values, begin, step=1, dtype='float32')
......
......@@ -3,7 +3,7 @@
PolynomialDecay
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.PolynomialDecay(learning_rate, decay_steps, end_learning_rate=0.0001, power=1.0, cycle=False, begin=0, step=1, dtype='float32')
......
......@@ -3,7 +3,7 @@
TracedLayer
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.TracedLayer(program, parameters, feed_names, fetch_names)
......
......@@ -3,7 +3,7 @@
grad
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:method:: paddle.fluid.dygraph.grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False, only_inputs=True, allow_unused=False, no_grad_vars=None, backward_strategy=None)
......
......@@ -3,7 +3,7 @@
guard
-------------------------------
**注意:该API仅支持【动态图】模式**
:api_attr: 命令式编程模式(动态图)
.. py:function:: paddle.fluid.dygraph.guard(place=None)
......
......@@ -3,7 +3,7 @@
load_dygraph
-------------------------------
**注意:该API仅支持【动态图】模式**
:api_attr: 命令式编程模式(动态图)
.. py:function:: paddle.fluid.dygraph.load_dygraph(model_path)
......
......@@ -3,7 +3,7 @@
no_grad
-------------------------------
**注意:该API仅支持【动态图】模式**
:api_attr: 命令式编程模式(动态图)
.. py:method:: paddle.fluid.dygraph.no_grad(func=None)
......
......@@ -3,7 +3,7 @@
save_dygraph
-------------------------------
**注意:该API仅支持【动态图】模式**
:api_attr: 命令式编程模式(动态图)
.. py:function:: paddle.fluid.dygraph.save_dygraph(state_dict, model_path)
......
......@@ -3,7 +3,7 @@
to_variable
-------------------------------
**注意:该API仅支持【动态图】模式**
:api_attr: 命令式编程模式(动态图)
.. py:function:: paddle.fluid.dygraph.to_variable(value, name=None, zero_copy=None)
......
......@@ -3,7 +3,7 @@
Executor
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.executor.Executor (place=None)
......
......@@ -3,7 +3,7 @@
global_scope
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.global_scope()
......
......@@ -3,7 +3,7 @@
scope_guard
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.executor.scope_guard (scope)
......
......@@ -3,7 +3,7 @@
BuildStrategy
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.BuildStrategy
......
......@@ -3,7 +3,7 @@
CompiledProgram
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.CompiledProgram(program_or_graph, build_strategy=None)
......
......@@ -3,7 +3,7 @@
DataFeedDesc
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.DataFeedDesc(proto_file)
......
......@@ -3,7 +3,7 @@
DataFeeder
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.DataFeeder(feed_list, place, program=None)
......
......@@ -3,7 +3,7 @@
ExecutionStrategy
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.ExecutionStrategy
......
......@@ -4,7 +4,7 @@ Executor
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.Executor (place=None)
......
......@@ -3,7 +3,7 @@
ParallelExecutor
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.ParallelExecutor(use_cuda, loss_name=None, main_program=None, share_vars_from=None, exec_strategy=None, build_strategy=None, num_trainers=1, trainer_id=0, scope=None)
......
......@@ -3,7 +3,7 @@
WeightNormParamAttr
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.WeightNormParamAttr(dim=None, name=None, initializer=None, learning_rate=1.0, regularizer=None, trainable=True, do_model_average=False)
......
......@@ -4,7 +4,7 @@
create_random_int_lodtensor
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.create_random_int_lodtensor(recursive_seq_lens, base_shape, place, low, high)
......
......@@ -3,7 +3,7 @@
data
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.data(name, shape, dtype='float32', lod_level=0)
......
......@@ -3,7 +3,7 @@
device_guard
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.device_guard(device=None)
......
......@@ -3,7 +3,7 @@
embedding
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.embedding(input, size, is_sparse=False, is_distributed=False, padding_idx=None, param_attr=None, dtype='float32')
......
......@@ -3,7 +3,7 @@
global_scope
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.global_scope()
......
......@@ -3,7 +3,7 @@
gradients
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.gradients(targets, inputs, target_gradients=None, no_grad_set=None)
......
......@@ -3,7 +3,7 @@
memory_optimize
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.memory_optimize(input_program, skip_opt_set=None, print_log=False, level=0, skip_grads=True)
......
......@@ -3,7 +3,7 @@
name_scope
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.name_scope(prefix=None)
......
......@@ -3,7 +3,7 @@
program_guard
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.program_guard(main_program, startup_program=None)
......
......@@ -3,7 +3,7 @@
release_memory
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.release_memory(input_program, skip_opt_set=None)
......
......@@ -3,7 +3,7 @@
save
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.save(program, model_path)
......
......@@ -3,7 +3,7 @@
scope_guard
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.scope_guard(scope)
......
......@@ -3,7 +3,7 @@
load_inference_model
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.io.load_inference_model(dirname, executor, model_filename=None, params_filename=None, pserver_endpoints=None)
......
......@@ -3,7 +3,7 @@
load_params
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.io.load_params(executor, dirname, main_program=None, filename=None)
......
......@@ -3,7 +3,7 @@
load_persistables
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.io.load_persistables(executor, dirname, main_program=None, filename=None)
......
......@@ -3,7 +3,7 @@
save
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.io.save(program, model_path)
......
......@@ -3,7 +3,7 @@
save_inference_model
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.io.save_inference_model(dirname, feeded_var_names, target_vars, executor, main_program=None, model_filename=None, params_filename=None, export_for_deployment=True, program_only=False)
......
......@@ -3,7 +3,7 @@
save_params
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.io.save_params(executor, dirname, main_program=None, filename=None)
......
......@@ -3,7 +3,7 @@
save_persistables
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.io.save_persistables(executor, dirname, main_program=None, filename=None)
......
......@@ -3,7 +3,7 @@
save_vars
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.io.save_vars(executor, dirname, main_program=None, vars=None, predicate=None, filename=None)
......
......@@ -4,7 +4,7 @@ BeamSearchDecoder
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.layers.BeamSearchDecoder(cell, start_token, end_token, beam_size, embedding_fn=None, output_fn=None)
......
......@@ -4,7 +4,7 @@ Decoder
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.layers.Decoder()
......
......@@ -3,7 +3,7 @@
DynamicRNN
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.layers.DynamicRNN(name=None)
......
......@@ -3,7 +3,7 @@
GRUCell
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.layers.GRUCell(hidden_size, param_attr=None, bias_attr=None, gate_activation=None, activation=None, dtype="float32", name="GRUCell")
......
......@@ -3,7 +3,7 @@
IfElse
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.layers.IfElse(cond, name=None)
......
......@@ -4,7 +4,7 @@ LSTMCell
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.layers.LSTMCell(hidden_size, param_attr=None, bias_attr=None, gate_activation=None, activation=None, forget_bias=1.0, dtype="float32", name="LSTMCell")
......
......@@ -3,7 +3,7 @@
Print
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.Print(input, first_n=-1, message=None, summarize=20, print_tensor_name=True, print_tensor_type=True, print_tensor_shape=True, print_tensor_lod=True, print_phase='both')
......
......@@ -4,7 +4,7 @@ RNNCell
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.layers.RNNCell(name=None)
RNNCell is the abstract base class representing the computation that maps inputs and states to outputs and new states; it is mainly used in RNNs.
......
......@@ -3,7 +3,7 @@
StaticRNN
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.layers.StaticRNN(name=None)
......
......@@ -3,7 +3,7 @@
Switch
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.layers.Switch (name=None)
......
......@@ -3,7 +3,7 @@
While
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.layers.While (cond, is_test=False, name=None)
......
......@@ -3,7 +3,7 @@
autoincreased_step_counter
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.autoincreased_step_counter(counter_name=None, begin=1, step=1)
......
......@@ -3,7 +3,7 @@
batch_norm
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.batch_norm(input, act=None, is_test=False, momentum=0.9, epsilon=1e-05, param_attr=None, bias_attr=None, data_layout='NCHW', in_place=False, name=None, moving_mean_name=None, moving_variance_name=None, do_model_average_for_mean_and_var=False, use_global_stats=False)
......
......@@ -3,7 +3,7 @@
bilinear_tensor_product
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.bilinear_tensor_product(x, y, size, act=None, name=None, param_attr=None, bias_attr=None)
......
......@@ -3,7 +3,7 @@
case
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.case(pred_fn_pairs, default=None, name=None)
......
......@@ -3,7 +3,7 @@
center_loss
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.center_loss(input, label, num_classes, alpha, param_attr, update_center=True)
......
......@@ -3,7 +3,7 @@
cond
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.cond(pred, true_fn=None, false_fn=None, name=None)
......
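For context on the `cond` API whose note line is updated above, its branch-selection semantics can be sketched in plain Python. This is only an illustrative analogue, not the fluid implementation (the real op builds both branches into the static graph and evaluates the predicate at run time):

```python
def cond(pred, true_fn=None, false_fn=None):
    """Plain-Python sketch of the documented `cond` semantics:
    run `true_fn` when `pred` is truthy, otherwise `false_fn`."""
    if pred:
        return true_fn() if true_fn is not None else None
    return false_fn() if false_fn is not None else None

# Example: choose a branch based on a comparison.
result = cond(1 < 2, lambda: "take true branch", lambda: "take false branch")
print(result)  # take true branch
```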
......@@ -3,7 +3,7 @@
conv2d
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.conv2d(input, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None, data_format="NCHW")
......
......@@ -3,7 +3,7 @@
conv2d_transpose
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.conv2d_transpose(input, num_filters, output_size=None, filter_size=None, padding=0, stride=1, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None, data_format='NCHW')
......
......@@ -3,7 +3,7 @@
conv3d
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.conv3d(input, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None, data_format="NCDHW")
......
......@@ -3,7 +3,7 @@
conv3d_transpose
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.conv3d_transpose(input, num_filters, output_size=None, filter_size=None, padding=0, stride=1, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None, data_format='NCDHW')
......
......@@ -3,7 +3,7 @@
create_parameter
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.create_parameter(shape,dtype,name=None,attr=None,is_bias=False,default_initializer=None)
......
......@@ -3,7 +3,7 @@
create_py_reader_by_data
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.create_py_reader_by_data(capacity,feed_list,name=None,use_double_buffer=True)
......
......@@ -3,7 +3,7 @@
crf_decoding
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.crf_decoding(input, param_attr, label=None, length=None)
......
......@@ -3,7 +3,7 @@
data
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.data(name, shape, append_batch_size=True, dtype='float32', lod_level=0, type=VarType.LOD_TENSOR, stop_gradient=True)
......
......@@ -3,7 +3,7 @@
data_norm
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.data_norm(input, act=None, epsilon=1e-05, param_attr=None, data_layout='NCHW', in_place=False, name=None, moving_mean_name=None, moving_variance_name=None, do_model_average_for_mean_and_var=False)
......
......@@ -3,7 +3,7 @@
deformable_conv
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.deformable_conv(input, offset, mask, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, deformable_groups=None, im2col_step=None, param_attr=None, bias_attr=None, modulated=True, name=None)
......
......@@ -4,7 +4,7 @@ dynamic_decode
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:method:: dynamic_decode(decoder, inits=None, max_step_num=None, output_time_major=False, **kwargs):
......
......@@ -3,7 +3,7 @@
dynamic_gru
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.dynamic_gru(input, size, param_attr=None, bias_attr=None, is_reverse=False, gate_activation='sigmoid', candidate_activation='tanh', h_0=None, origin_mode=False)
......
......@@ -3,7 +3,7 @@
dynamic_lstm
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.dynamic_lstm(input, size, h_0=None, c_0=None, param_attr=None, bias_attr=None, use_peepholes=True, is_reverse=False, gate_activation='sigmoid', cell_activation='tanh', candidate_activation='tanh', dtype='float32', name=None)
......
......@@ -2,7 +2,7 @@
dynamic_lstmp
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.dynamic_lstmp(input, size, proj_size, param_attr=None, bias_attr=None, use_peepholes=True, is_reverse=False, gate_activation='sigmoid', cell_activation='tanh', candidate_activation='tanh', proj_activation='tanh', dtype='float32', name=None, h_0=None, c_0=None, cell_clip=None, proj_clip=None)
......
......@@ -3,7 +3,7 @@
embedding
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.embedding(input, size, is_sparse=False, is_distributed=False, padding_idx=None, param_attr=None, dtype='float32')
......
......@@ -3,7 +3,7 @@
fc
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.fc(input, size, num_flatten_dims=1, param_attr=None, bias_attr=None, act=None, name=None)
......
......@@ -3,7 +3,7 @@
group_norm
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.group_norm(input, groups, epsilon=1e-05, param_attr=None, bias_attr=None, act=None, data_layout='NCHW', name=None)
......
......@@ -3,7 +3,7 @@
gru_unit
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.gru_unit(input, hidden, size, param_attr=None, bias_attr=None, activation='tanh', gate_activation='sigmoid', origin_mode=False)
......
......@@ -3,7 +3,7 @@
hsigmoid
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.hsigmoid(input, label, num_classes, param_attr=None, bias_attr=None, name=None, path_table=None, path_code=None, is_custom=False, is_sparse=False)
......
......@@ -3,7 +3,7 @@
im2sequence
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.im2sequence(input, filter_size=1, stride=1, padding=0, input_image_size=None, out_stride=1, name=None)
......
......@@ -3,7 +3,7 @@
inplace_abn
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.inplace_abn(input, act=None, is_test=False, momentum=0.9, epsilon=1e-05, param_attr=None, bias_attr=None, data_layout='NCHW', name=None, moving_mean_name=None, moving_variance_name=None, do_model_average_for_mean_and_var=False, use_global_stats=False, act_alpha=1.0)
......
......@@ -3,7 +3,7 @@
instance_norm
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.instance_norm(input, epsilon=1e-05, param_attr=None, bias_attr=None, name=None)
......
......@@ -3,7 +3,7 @@
layer_norm
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.layer_norm(input, scale=True, shift=True, begin_norm_axis=1, epsilon=1e-05, param_attr=None, bias_attr=None, act=None, name=None)
......
......@@ -3,7 +3,7 @@
linear_chain_crf
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.linear_chain_crf(input, label, param_attr=None, length=None)
......
......@@ -3,7 +3,7 @@
lstm
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.lstm(input, init_h, init_c, max_len, hidden_size, num_layers, dropout_prob=0.0, is_bidirec=False, is_test=False, name=None, default_initializer=None, seed=-1)
......
......@@ -3,7 +3,7 @@
lstm_unit
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.lstm_unit(x_t, hidden_t_prev, cell_t_prev, forget_bias=0.0, param_attr=None, bias_attr=None, name=None)
......
......@@ -3,7 +3,7 @@
multi_box_head
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.multi_box_head(inputs, image, base_size, num_classes, aspect_ratios, min_ratio=None, max_ratio=None, min_sizes=None, max_sizes=None, steps=None, step_w=None, step_h=None, offset=0.5, variance=[0.1, 0.1, 0.2, 0.2], flip=True, clip=False, kernel_size=1, pad=0, stride=1, name=None, min_max_aspect_ratios_order=False)
......
......@@ -3,7 +3,7 @@
nce
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.nce(input, label, num_total_classes, sample_weight=None, param_attr=None, bias_attr=None, num_neg_samples=None, name=None, sampler='uniform', custom_dist=None, seed=0, is_sparse=False)
......
......@@ -3,7 +3,7 @@
prelu
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.prelu(x, mode, param_attr=None, name=None)
......
......@@ -3,7 +3,7 @@
py_func
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.py_func(func, x, out, backward_func=None, skip_vars_in_backward_input=None)
......
......@@ -3,7 +3,7 @@
py_reader
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.py_reader(capacity, shapes, dtypes, lod_levels=None, name=None, use_double_buffer=True)
......
......@@ -3,7 +3,7 @@
read_file
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.read_file(reader)
......
......@@ -4,7 +4,7 @@ rnn
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:method:: paddle.fluid.layers.rnn(cell, inputs, initial_states=None, sequence_length=None, time_major=False, is_reverse=False, **kwargs)
......
......@@ -3,7 +3,7 @@
row_conv
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.row_conv(input, future_context_size, param_attr=None, act=None)
......
......@@ -3,7 +3,7 @@
sequence_concat
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.sequence_concat(input, name=None)
......
......@@ -3,7 +3,7 @@
sequence_conv
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.sequence_conv(input, num_filters, filter_size=3, filter_stride=1, padding=True, padding_start=None, bias_attr=None, param_attr=None, act=None, name=None)
......
......@@ -3,7 +3,7 @@
sequence_enumerate
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.sequence_enumerate(input, win_size, pad_value=0, name=None)
......
......@@ -3,7 +3,7 @@
sequence_expand_as
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.sequence_expand_as(x, y, name=None)
......
......@@ -3,7 +3,7 @@
sequence_expand
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.sequence_expand(x, y, ref_level=-1, name=None)
......
......@@ -3,7 +3,7 @@
sequence_first_step
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.sequence_first_step(input)
......
......@@ -3,7 +3,7 @@
sequence_last_step
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.sequence_last_step(input)
......
......@@ -3,7 +3,7 @@
sequence_pad
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.sequence_pad(x,pad_value,maxlen=None,name=None)
......
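The `sequence_pad` API touched above pads variable-length sequences to a common length and also reports the original lengths. A plain-Python sketch of that behavior (an illustrative analogue operating on lists, not the fluid op, which works on LoDTensors):

```python
def sequence_pad(sequences, pad_value, maxlen=None):
    """Plain-Python sketch of `sequence_pad` semantics: pad each sequence
    with `pad_value` up to `maxlen` (or the longest sequence) and return
    the padded batch together with the original lengths."""
    lengths = [len(s) for s in sequences]
    target = maxlen if maxlen is not None else max(lengths)
    padded = [list(s) + [pad_value] * (target - len(s)) for s in sequences]
    return padded, lengths

padded, lengths = sequence_pad([[1, 2], [3, 4, 5]], pad_value=0)
print(padded)   # [[1, 2, 0], [3, 4, 5]]
print(lengths)  # [2, 3]
```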
......@@ -3,7 +3,7 @@
sequence_pool
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.sequence_pool(input, pool_type, is_test=False, pad_value=0.0)
......
......@@ -3,7 +3,7 @@
sequence_reshape
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.sequence_reshape(input, new_dim)
......
......@@ -3,7 +3,7 @@
sequence_scatter
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.sequence_scatter(input, index, updates, name=None)
......
......@@ -3,7 +3,7 @@
sequence_slice
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.sequence_slice(input, offset, length, name=None)
......
......@@ -3,7 +3,7 @@
sequence_softmax
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.sequence_softmax(input, use_cudnn=False, name=None)
......
......@@ -3,7 +3,7 @@
sequence_unpad
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.sequence_unpad(x, length, name=None)
......
......@@ -3,7 +3,7 @@
spectral_norm
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.spectral_norm(weight, dim=0, power_iters=1, eps=1e-12, name=None)
......
......@@ -3,7 +3,7 @@
switch_case
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.switch_case(branch_index, branch_fns, default=None, name=None)
......
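`switch_case`, updated above, dispatches on an integer index to one of several branch callables. Its semantics can be sketched in plain Python (an analogue for illustration only; the fluid op embeds every branch into the static graph):

```python
def switch_case(branch_index, branch_fns, default=None):
    """Plain-Python sketch of `switch_case`: look up the branch callable
    for `branch_index`, falling back to `default` when no branch matches."""
    fn = dict(branch_fns).get(branch_index, default)
    return fn() if fn is not None else None

out = switch_case(2, {1: lambda: "one", 2: lambda: "two"}, default=lambda: "other")
print(out)  # two
```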
......@@ -4,7 +4,7 @@ while_loop
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.layers.while_loop(cond, body, loop_vars, is_test=False, name=None)
......
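The `while_loop` API above repeatedly applies `body` to the loop variables while `cond` holds. As a rough plain-Python sketch of those semantics (the fluid op records this as a control-flow node in the static graph rather than running eagerly):

```python
def while_loop(cond, body, loop_vars):
    """Plain-Python sketch of `while_loop` semantics: keep calling
    `body(*loop_vars)` (which returns the next loop_vars list) while
    `cond(*loop_vars)` is true, then return the final loop_vars."""
    while cond(*loop_vars):
        loop_vars = body(*loop_vars)
    return loop_vars

# Example: count i up from 0 to 10.
final_i, = while_loop(lambda i: i < 10, lambda i: [i + 1], [0])
print(final_i)  # 10
```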
......@@ -2,7 +2,7 @@
glu
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.nets.glu(input, dim=-1)
......
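The gated linear unit (`glu`) above splits its input in half along a dimension into `(a, b)` and returns `a * sigmoid(b)`. A minimal plain-Python sketch over a 1-D list (the fluid op does the same split-and-gate on tensors along an arbitrary `dim`):

```python
import math

def glu(vec):
    """Plain-Python sketch of GLU: split the vector in half into (a, b)
    and return a * sigmoid(b), element-wise."""
    half = len(vec) // 2
    a, b = vec[:half], vec[half:]
    return [x * (1.0 / (1.0 + math.exp(-g))) for x, g in zip(a, b)]

# sigmoid(0) = 0.5, so the gate halves each value here.
print(glu([1.0, 2.0, 0.0, 0.0]))  # [0.5, 1.0]
```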
......@@ -3,7 +3,7 @@
img_conv_group
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.nets.img_conv_group(input, conv_num_filter, pool_size, conv_padding=1, conv_filter_size=3, conv_act=None, param_attr=None, conv_with_batchnorm=False, conv_batchnorm_drop_rate=0.0, pool_stride=1, pool_type='max', use_cudnn=True)
......
......@@ -3,7 +3,7 @@
scaled_dot_product_attention
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.nets.scaled_dot_product_attention(queries, keys, values, num_heads=1, dropout_rate=0.0)
......
......@@ -3,7 +3,7 @@
sequence_conv_pool
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.nets.sequence_conv_pool(input, num_filters, filter_size, param_attr=None, act='sigmoid', pool_type='max', bias_attr=None)
......
......@@ -3,7 +3,7 @@
simple_img_conv_pool
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.nets.simple_img_conv_pool(input, num_filters, filter_size, pool_size, pool_stride, pool_padding=0, pool_type='max', global_pooling=False, conv_stride=1, conv_padding=0, conv_dilation=1, conv_groups=1, param_attr=None, bias_attr=None, act=None, use_cudnn=True)
......
......@@ -3,7 +3,7 @@
DGCMomentumOptimizer
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.optimizer.DGCMomentumOptimizer(learning_rate, momentum, rampup_begin_step, rampup_step=1, sparsity=[0.999], use_nesterov=False, local_grad_clip_norm=None, num_trainers=None, regularization=None, grad_clip=None, name=None)
......
......@@ -3,7 +3,7 @@
ExponentialMovingAverage
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.optimizer.ExponentialMovingAverage(decay=0.999, thres_steps=None, name=None)
......
......@@ -3,7 +3,7 @@
LookaheadOptimizer
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.optimizer.LookaheadOptimizer(inner_optimizer, alpha=0.5, k=5)
......
......@@ -3,7 +3,7 @@
ModelAverage
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.optimizer.ModelAverage(average_window_rate, min_average_window=10000, max_average_window=10000, regularization=None, name=None)
......
......@@ -3,7 +3,7 @@
PipelineOptimizer
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.optimizer.PipelineOptimizer(optimizer, cut_list=None, place_list=None, concurrency_list=None, queue_size=30, sync_steps=1, start_cpu_core_id=0)
......
......@@ -3,7 +3,7 @@
RecomputeOptimizer
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.optimizer.RecomputeOptimizer(optimizer)
......
......@@ -3,7 +3,7 @@
DistributeTranspilerConfig
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.transpiler.DistributeTranspilerConfig
......
......@@ -3,7 +3,7 @@
DistributeTranspiler
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.transpiler.DistributeTranspiler (config=None)
......
......@@ -3,7 +3,7 @@
HashName
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.transpiler.HashName(pserver_endpoints)
......
......@@ -3,7 +3,7 @@
RoundRobin
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.transpiler.RoundRobin(pserver_endpoints)
......
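The `HashName` and `RoundRobin` transpiler classes above are strategies for assigning variables to parameter-server endpoints. Their distribution logic can be sketched in plain Python; note this is an illustrative analogue only, and Python's built-in `hash` here is a stand-in (the real `HashName` uses its own hashing, and built-in `hash` is not stable across processes):

```python
def round_robin(var_names, pserver_endpoints):
    """Sketch of the RoundRobin strategy: assign variables to
    parameter servers in rotating order."""
    return {name: pserver_endpoints[i % len(pserver_endpoints)]
            for i, name in enumerate(var_names)}

def hash_name(var_names, pserver_endpoints):
    """Sketch of the HashName strategy: assign by hashing the variable
    name modulo the number of endpoints (illustrative hash only)."""
    return {name: pserver_endpoints[hash(name) % len(pserver_endpoints)]
            for name in var_names}

eps = ["192.168.0.1:6170", "192.168.0.2:6170"]
print(round_robin(["w1", "w2", "w3"], eps))
```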
......@@ -3,7 +3,7 @@
memory_optimize
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.transpiler.memory_optimize(input_program, skip_opt_set=None, print_log=False, level=0, skip_grads=True)
......
......@@ -3,7 +3,7 @@
release_memory
-------------------------------
**注意:该API仅支持【静态图】模式**
:api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.transpiler.release_memory(input_program, skip_opt_set=None)
......
......@@ -228,15 +228,15 @@ PaddlePaddle通过编译时指定路径来实现引用各种BLAS/CUDA/cuDNN库。
</thead>
<tbody>
<tr>
<td> paddlepaddle==[版本号] 例如 paddlepaddle==1.8.0 </td>
<td> paddlepaddle==[版本号] 例如 paddlepaddle==1.8.1 </td>
<td> 只支持CPU对应版本的PaddlePaddle,具体版本请参见<a href=https://pypi.org/project/paddlepaddle/#history>Pypi</a> </td>
</tr>
<tr>
<td> paddlepaddle-gpu==[版本号] 例如 paddlepaddle-gpu==1.8.0 </td>
<td> paddlepaddle-gpu==[版本号] 例如 paddlepaddle-gpu==1.8.1 </td>
<td> 默认安装支持CUDA 10.0和cuDNN 7的对应[版本号]的PaddlePaddle安装包 </td>
</tr>
<tr>
<td> paddlepaddle-gpu==[版本号].postXX 例如 paddlepaddle-gpu==1.8.0.post97 </td>
<td> paddlepaddle-gpu==[版本号].postXX 例如 paddlepaddle-gpu==1.8.1.post97 </td>
<td> 支持CUDA 9.0和cuDNN 7的对应PaddlePaddle版本的安装包</td>
</tr>
</tbody>
......@@ -268,126 +268,126 @@ PaddlePaddle通过编译时指定路径来实现引用各种BLAS/CUDA/cuDNN库。
<tbody>
<tr>
<td> cpu-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl">
paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cpu-openblas </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl"> paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl"> paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda9-cudnn7-openblas </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda9-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
      <td> cuda10-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> win_cpu_mkl </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl">
paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl">
paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl">
paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl">
paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl">
paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl">
paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl">
paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl">
paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cuda9_cudnn7_mkl </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cuda10_cudnn7_mkl </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cpu_openblas </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl">
paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl">
paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl">
paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl">
paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl">
paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl">
paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl">
paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl">
paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cuda9_cudnn7_openblas </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> mac_cpu </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp27-cp27m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp27-cp27m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp35-cp35m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp35-cp35m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp36-cp36m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp36-cp36m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp37-cp37m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp37-cp37m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp27-cp27m-macosx_10_6_intel.whl">
paddlepaddle-1.8.1-cp27-cp27m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp35-cp35m-macosx_10_6_intel.whl">
paddlepaddle-1.8.1-cp35-cp35m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp36-cp36m-macosx_10_6_intel.whl">
paddlepaddle-1.8.1-cp36-cp36m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp37-cp37m-macosx_10_6_intel.whl">
paddlepaddle-1.8.1-cp37-cp37m-macosx_10_6_intel.whl</a></td>
</tr>
</tbody>
</table>
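The wheel filenames listed above follow the PEP 427 naming convention, `{distribution}-{version}-{python tag}-{abi tag}-{platform tag}.whl`. A minimal sketch of splitting such a name into its components (the `parse_wheel_name` helper below is illustrative, not part of PaddlePaddle):

```python
def parse_wheel_name(filename):
    """Split a PEP 427 wheel filename into its five naming components.

    Assumes no optional build tag is present, which holds for every
    wheel in the table above (the distribution name itself uses
    underscores, never hyphens, so a plain split on '-' is safe).
    """
    stem = filename[:-len(".whl")]
    distribution, version, python_tag, abi_tag, platform_tag = stem.split("-")
    return {
        "distribution": distribution,
        "version": version,
        "python_tag": python_tag,
        "abi_tag": abi_tag,
        "platform_tag": platform_tag,
    }

info = parse_wheel_name("paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl")
print(info["platform_tag"])  # linux_x86_64
```

This makes it easy to check, for example, that a downloaded wheel matches the local Python ABI before installing it.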
......@@ -556,16 +556,16 @@ platform tag: such as 'linux_x86_64', 'any'
<tbody>
<tr>
<td> cuda10.1-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
</tbody>
</table>
......
......@@ -225,15 +225,15 @@ PaddlePaddle implements references to various BLAS/CUDA/cuDNN libraries by specif
</thead>
<tbody>
<tr>
<td> paddlepaddle==[version code] such as paddlepaddle==1.8.0 </td>
<td> paddlepaddle==[version code] such as paddlepaddle==1.8.1 </td>
      <td> Installs the CPU-only build of the specified PaddlePaddle version; see <a href=https://pypi.org/project/paddlepaddle/#history>PyPI</a> for the available versions. </td>
</tr>
<tr>
<td> paddlepaddle-gpu==[version code], such as paddlepaddle-gpu==1.8.0 </td>
<td> paddlepaddle-gpu==[version code], such as paddlepaddle-gpu==1.8.1 </td>
      <td> By default, installs the PaddlePaddle package for the given [version code] built against CUDA 10.0 and cuDNN 7 </td>
</tr>
<tr>
<td> paddlepaddle-gpu==[version code].postXX, such as paddlepaddle-gpu==1.8.0.post97 </td>
<td> paddlepaddle-gpu==[version code].postXX, such as paddlepaddle-gpu==1.8.1.post97 </td>
      <td> Installs the PaddlePaddle package of the given version built against CUDA 9.0 and cuDNN 7 </td>
</tr>
</tbody>
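The version-code scheme in the table above can be sketched as a small helper that builds the pip requirement string from the desired release and CUDA toolkit. The `paddle_pip_spec` name is illustrative only, not a PaddlePaddle API; it encodes just the three cases the table describes:

```python
def paddle_pip_spec(version, cuda=None):
    """Return the pip requirement string for a PaddlePaddle 1.8.x release.

    cuda=None   -> CPU-only package
    cuda='10.0' -> default GPU package (CUDA 10.0 + cuDNN 7)
    cuda='9.0'  -> GPU package with the .post97 suffix (CUDA 9.0 + cuDNN 7)
    """
    if cuda is None:
        return "paddlepaddle==%s" % version
    if cuda == "10.0":
        return "paddlepaddle-gpu==%s" % version
    if cuda == "9.0":
        return "paddlepaddle-gpu==%s.post97" % version
    raise ValueError("unsupported CUDA version: %s" % cuda)

print(paddle_pip_spec("1.8.1", cuda="9.0"))  # paddlepaddle-gpu==1.8.1.post97
```

The resulting string is what you would pass to `pip install`, e.g. `pip install paddlepaddle-gpu==1.8.1.post97`.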
......@@ -267,126 +267,126 @@ Please note that: in the commands, <code> paddlepaddle-gpu </code> will install
<tbody>
<tr>
<td> cpu-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl">
paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cpu-openblas </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl"> paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl"> paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda9-cudnn7-openblas </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda9-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
      <td> cuda10-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> win_cpu_mkl </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl">
paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl">
paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl">
paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl">
paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl">
paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl">
paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl">
paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl">
paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cuda9_cudnn7_mkl </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cuda10_cudnn7_mkl </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cpu_openblas </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl">
paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl">
paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl">
paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl">
paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl">
paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl">
paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl">
paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl">
paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cuda9_cudnn7_openblas </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> mac_cpu </td>
<td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp27-cp27m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp27-cp27m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp35-cp35m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp35-cp35m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp36-cp36m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp36-cp36m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp37-cp37m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp37-cp37m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp27-cp27m-macosx_10_6_intel.whl">
paddlepaddle-1.8.1-cp27-cp27m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp35-cp35m-macosx_10_6_intel.whl">
paddlepaddle-1.8.1-cp35-cp35m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp36-cp36m-macosx_10_6_intel.whl">
paddlepaddle-1.8.1-cp36-cp36m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp37-cp37m-macosx_10_6_intel.whl">
paddlepaddle-1.8.1-cp37-cp37m-macosx_10_6_intel.whl</a></td>
</tr>
</tbody>
</table>
......@@ -559,16 +559,16 @@ platform tag: similar to 'linux_x86_64', 'any'
<tbody>
<tr>
<td> cuda10.1-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
</tbody>
</table>
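The wheel filenames in the tables above follow the standard wheel naming scheme: distribution, version, Python tag, ABI tag, and a platform tag such as 'linux_x86_64' or 'win_amd64'. A small illustrative parser (a sketch for explanation only, not part of Paddle's tooling):

```python
def parse_wheel_name(filename):
    # PEP 427 wheel naming: {dist}-{version}(-{build})?-{python}-{abi}-{platform}.whl
    stem = filename[:-len(".whl")]
    parts = stem.split("-")
    dist, version = parts[0], parts[1]
    python_tag, abi_tag, platform_tag = parts[-3], parts[-2], parts[-1]
    return dist, version, python_tag, abi_tag, platform_tag

print(parse_wheel_name("paddlepaddle_gpu-1.8.1.post107-cp37-cp37m-win_amd64.whl"))
# ('paddlepaddle_gpu', '1.8.1.post107', 'cp37', 'cp37m', 'win_amd64')
```

pip uses these tags to decide whether a wheel is installable on the current interpreter and OS, which is why each table row carries one file per Python version.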
......
......@@ -214,20 +214,20 @@
If you are using Python 2 with CUDA 9 and cuDNN 7.3+, install the GPU version with:
::
python -m pip install paddlepaddle-gpu==1.8.0.post97 -i https://mirror.baidu.com/pypi/simple
python -m pip install paddlepaddle-gpu==1.8.1.post97 -i https://mirror.baidu.com/pypi/simple
python -m pip install paddlepaddle-gpu==1.8.0.post97 -i https://pypi.tuna.tsinghua.edu.cn/simple
python -m pip install paddlepaddle-gpu==1.8.1.post97 -i https://pypi.tuna.tsinghua.edu.cn/simple
If you are using Python 2 with CUDA 10.0 and cuDNN 7.3+, install the GPU version with:
::
python -m pip install paddlepaddle-gpu==1.8.0.post107 -i https://mirror.baidu.com/pypi/simple
python -m pip install paddlepaddle-gpu==1.8.1.post107 -i https://mirror.baidu.com/pypi/simple
python -m pip install paddlepaddle-gpu==1.8.0.post107 -i https://pypi.tuna.tsinghua.edu.cn/simple
python -m pip install paddlepaddle-gpu==1.8.1.post107 -i https://pypi.tuna.tsinghua.edu.cn/simple
If you are using Python 3, replace **python** in the above commands with **python3**.
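The `.post97` / `.post107` suffixes above encode the CUDA toolkit the wheel was built against (CUDA 9.0 and CUDA 10.0, respectively). A tiny illustrative helper (hypothetical, not part of pip or Paddle) that builds the matching requirement string:

```python
def gpu_requirement(version, cuda_version):
    # Map a CUDA toolkit version to the post-release suffix used by
    # the 1.8.x paddlepaddle-gpu wheels: CUDA 9.0 -> post97, 10.0 -> post107.
    suffix = {"9.0": "post97", "10.0": "post107"}[cuda_version]
    return "paddlepaddle-gpu==%s.%s" % (version, suffix)

print(gpu_requirement("1.8.1", "10.0"))  # paddlepaddle-gpu==1.8.1.post107
```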
......@@ -430,12 +430,12 @@
(2). Pull the image with PaddlePaddle preinstalled:
::
docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.0
docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.1
(3). Build and enter a Docker container from the image:
::
docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.0 /bin/bash
docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.1 /bin/bash
> --name [Name of container] sets the container's name;
......@@ -443,7 +443,7 @@
> The -v parameter shares files between the host and the container: dir1 is a host directory and dir2 is the directory mounted inside the container, and users can customize the mount by setting dir1 and dir2. For example, $PWD:/paddle mounts the host's current path (in Linux, the PWD variable expands to the absolute current path) at the /paddle directory inside the container;
> hub.baidubce.com/paddlepaddle/paddle:1.8.0 is the image name to use; /bin/bash is the command to execute inside Docker
> hub.baidubce.com/paddlepaddle/paddle:1.8.1 is the image name to use; /bin/bash is the command to execute inside Docker
2. **GPU version**
......@@ -471,12 +471,12 @@
(2). Pull the image with PaddlePaddle preinstalled, supporting CUDA 10.0 and cuDNN 7.3+:
::
nvidia-docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.0-gpu-cuda10.0-cudnn7
nvidia-docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.1-gpu-cuda10.0-cudnn7
(3). Build and enter a Docker container from the image:
::
nvidia-docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.0-gpu-cuda10.0-cudnn7 /bin/bash
nvidia-docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.1-gpu-cuda10.0-cudnn7 /bin/bash
> --name [Name of container] sets the container's name;
......@@ -484,7 +484,7 @@
> The -v parameter shares files between the host and the container: dir1 is a host directory and dir2 is the directory mounted inside the container, and users can customize the mount by setting dir1 and dir2. For example, $PWD:/paddle mounts the host's current path (in Linux, the PWD variable expands to the absolute current path) at the /paddle directory inside the container;
> hub.baidubce.com/paddlepaddle/paddle:1.8.0-gpu-cuda10.0-cudnn7 is the image name to use; /bin/bash is the command to execute inside Docker
> hub.baidubce.com/paddlepaddle/paddle:1.8.1-gpu-cuda10.0-cudnn7 is the image name to use; /bin/bash is the command to execute inside Docker
Or, if you need a version that supports **CUDA 9**, simply replace **cuda10.0** in the above commands with **cuda9.0**
......@@ -492,7 +492,7 @@
::
docker run --name paddle -it -v dir1:dir2 paddlepaddle/paddle:1.8.0 /bin/bash
docker run --name paddle -it -v dir1:dir2 paddlepaddle/paddle:1.8.1 /bin/bash
> --name [Name of container] sets the container's name;
......@@ -500,7 +500,7 @@
> The -v parameter shares files between the host and the container: dir1 is a host directory and dir2 is the directory mounted inside the container, and users can customize the mount by setting dir1 and dir2. For example, $PWD:/paddle mounts the host's current path (in Linux, the PWD variable expands to the absolute current path) at the /paddle directory inside the container;
> paddlepaddle/paddle:1.8.0 is the image name to use; /bin/bash is the command to execute inside Docker
> paddlepaddle/paddle:1.8.1 is the image name to use; /bin/bash is the command to execute inside Docker
4. Verify installation
......
......@@ -216,20 +216,20 @@ This section describes how to use pip to install.
If you are using Python 2 with CUDA 9 and cuDNN 7.3+, install the GPU version with:
::
python -m pip install paddlepaddle-gpu==1.8.0.post97 -i https://mirror.baidu.com/pypi/simple
python -m pip install paddlepaddle-gpu==1.8.1.post97 -i https://mirror.baidu.com/pypi/simple
or
python -m pip install paddlepaddle-gpu==1.8.0.post97 -i https://pypi.tuna.tsinghua.edu.cn/simple
python -m pip install paddlepaddle-gpu==1.8.1.post97 -i https://pypi.tuna.tsinghua.edu.cn/simple
If you are using Python 2 with CUDA 10.0 and cuDNN 7.3+, install the GPU version with:
::
python -m pip install paddlepaddle-gpu==1.8.0.post107 -i https://mirror.baidu.com/pypi/simple
python -m pip install paddlepaddle-gpu==1.8.1.post107 -i https://mirror.baidu.com/pypi/simple
or
python -m pip install paddlepaddle-gpu==1.8.0.post107 -i https://pypi.tuna.tsinghua.edu.cn/simple
python -m pip install paddlepaddle-gpu==1.8.1.post107 -i https://pypi.tuna.tsinghua.edu.cn/simple
If you are using Python 3, please replace **python** in the above commands with **python3**.
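As noted for the commands above, the `.post97` / `.post107` suffixes encode the CUDA toolkit the wheel was built against (CUDA 9.0 and CUDA 10.0, respectively). A tiny illustrative helper (hypothetical, not part of pip or Paddle) that builds the matching requirement string:

```python
def gpu_requirement(version, cuda_version):
    # Map a CUDA toolkit version to the post-release suffix used by
    # the 1.8.x paddlepaddle-gpu wheels: CUDA 9.0 -> post97, 10.0 -> post107.
    suffix = {"9.0": "post97", "10.0": "post107"}[cuda_version]
    return "paddlepaddle-gpu==%s.%s" % (version, suffix)

print(gpu_requirement("1.8.1", "10.0"))  # paddlepaddle-gpu==1.8.1.post107
```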
......@@ -437,12 +437,12 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y
(2). Pull the image with PaddlePaddle preinstalled:
::
docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.0
docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.1
(3). Build and enter a Docker container from the image:
::
docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.0 /bin/bash
docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.1 /bin/bash
> --name [Name of container] sets the container's name;
......@@ -450,7 +450,7 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y
> The -v parameter shares files between the host and the container: dir1 is a host directory and dir2 is the directory mounted inside the container, and users can customize the mount by setting dir1 and dir2. For example, $PWD:/paddle mounts the host's current path (in Linux, the PWD variable expands to the absolute current path) at the /paddle directory inside the container;
> hub.baidubce.com/paddlepaddle/paddle:1.8.0 is the image name to use; /bin/bash is the command to execute inside Docker
> hub.baidubce.com/paddlepaddle/paddle:1.8.1 is the image name to use; /bin/bash is the command to execute inside Docker
2. **GPU version**
......@@ -478,12 +478,12 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y
(2). Pull the image with PaddlePaddle preinstalled, supporting CUDA 10.0 and cuDNN 7.3+:
::
nvidia-docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.0-gpu-cuda10.0-cudnn7
nvidia-docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.1-gpu-cuda10.0-cudnn7
(3). Build and enter a Docker container from the image:
::
nvidia-docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.0-gpu-cuda10.0-cudnn7 /bin/bash
nvidia-docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.1-gpu-cuda10.0-cudnn7 /bin/bash
> --name [Name of container] sets the container's name;
......@@ -491,7 +491,7 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y
> The -v parameter shares files between the host and the container: dir1 is a host directory and dir2 is the directory mounted inside the container, and users can customize the mount by setting dir1 and dir2. For example, $PWD:/paddle mounts the host's current path (in Linux, the PWD variable expands to the absolute current path) at the /paddle directory inside the container;
> hub.baidubce.com/paddlepaddle/paddle:1.8.0-gpu-cuda10.0-cudnn7 is the image name to use; /bin/bash is the command to execute inside Docker
> hub.baidubce.com/paddlepaddle/paddle:1.8.1-gpu-cuda10.0-cudnn7 is the image name to use; /bin/bash is the command to execute inside Docker
Or, if you need the version supporting **CUDA 9**, replace **cuda10.0** in the above commands with **cuda9.0**
......@@ -499,7 +499,7 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y
::
docker run --name paddle -it -v dir1:dir2 paddlepaddle/paddle:1.8.0 /bin/bash
docker run --name paddle -it -v dir1:dir2 paddlepaddle/paddle:1.8.1 /bin/bash
> --name [Name of container] sets the container's name;
......@@ -507,7 +507,7 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y
> The -v parameter shares files between the host and the container: dir1 is a host directory and dir2 is the directory mounted inside the container, and users can customize the mount by setting dir1 and dir2. For example, $PWD:/paddle mounts the host's current path (in Linux, the PWD variable expands to the absolute current path) at the /paddle directory inside the container;
> paddlepaddle/paddle:1.8.0 is the image name to use; /bin/bash is the command to execute inside Docker
> paddlepaddle/paddle:1.8.1 is the image name to use; /bin/bash is the command to execute inside Docker
4. Verify installation
......
......@@ -388,8 +388,8 @@ The data introduction section mentions that the CoNLL 2005 training set is not freely available
crf_decode = fluid.layers.crf_decoding(
input=feature_out, param_attr=fluid.ParamAttr(name='crfw'))
train_data = paddle.batch(
paddle.reader.shuffle(
train_data = fluid.io.batch(
fluid.io.shuffle(
paddle.dataset.conll05.test(), buf_size=8192),
batch_size=BATCH_SIZE)
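The `fluid.io.shuffle` wrapper used above performs a buffered shuffle: it fills a buffer of `buf_size` samples, shuffles it, and yields the samples. A minimal pure-Python sketch of the idea (no Paddle required; names are illustrative):

```python
import random

def buffered_shuffle(reader, buf_size):
    # Collect up to buf_size samples, shuffle the buffer, then yield it;
    # this mirrors the buffered-shuffle behavior of fluid.io.shuffle.
    def shuffled():
        buf = []
        for sample in reader():
            buf.append(sample)
            if len(buf) >= buf_size:
                random.shuffle(buf)
                for s in buf:
                    yield s
                buf = []
        random.shuffle(buf)
        for s in buf:
            yield s
    return shuffled

random.seed(0)
data = lambda: iter(range(10))           # a toy reader of 10 samples
out = list(buffered_shuffle(data, 4)())  # every sample survives exactly once
print(sorted(out))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

A larger `buf_size` gives a more thorough shuffle at the cost of memory, which is why the example above uses `buf_size=8192` for the CoNLL data.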
......
......@@ -143,8 +143,8 @@ def train(use_cuda, save_dirname=None, is_local=True):
# define network topology
feature_out = db_lstm(**locals())
target = fluid.layers.data(
name='target', shape=[1], dtype='int64', lod_level=1)
target = fluid.data(
name='target', shape=[None, 1], dtype='int64', lod_level=1)
crf_cost = fluid.layers.linear_chain_crf(
input=feature_out,
label=target,
......@@ -165,11 +165,11 @@ def train(use_cuda, save_dirname=None, is_local=True):
input=feature_out, param_attr=fluid.ParamAttr(name='crfw'))
if args.enable_ce:
train_data = paddle.batch(
train_data = fluid.io.batch(
paddle.dataset.conll05.test(), batch_size=BATCH_SIZE)
else:
train_data = paddle.batch(
paddle.reader.shuffle(
train_data = fluid.io.batch(
fluid.io.shuffle(
paddle.dataset.conll05.test(), buf_size=8192),
batch_size=BATCH_SIZE)
......
......@@ -289,15 +289,15 @@ def optimizer_func():
- Now we can start training. This version is much simpler than before. We have ready-made training and test sets: `paddle.dataset.imikolov.train()` and `paddle.dataset.imikolov.test()`. Both return a reader. In PaddlePaddle, a reader is a Python function that reads the next piece of data each time it is called; it is a Python generator.
`paddle.batch` takes a reader and returns a batched reader. We can also print the training progress of each step and batch during training.
`fluid.io.batch` takes a reader and returns a batched reader. We can also print the training progress of each step and batch during training.
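To make the reader/batched-reader relationship concrete, here is a minimal pure-Python sketch of what the batching wrapper does (no Paddle required; names are illustrative):

```python
def batch(reader, batch_size, drop_last=False):
    # Wrap a sample-level reader into a batch-level reader,
    # mirroring the behavior of fluid.io.batch.
    def batch_reader():
        buf = []
        for sample in reader():
            buf.append(sample)
            if len(buf) == batch_size:
                yield buf
                buf = []
        if buf and not drop_last:
            yield buf
    return batch_reader

def sample_reader():
    # A reader: a function returning a generator of single samples.
    for i in range(10):
        yield [i]

batches = list(batch(sample_reader, 4)())
print(len(batches))  # 3: two full batches of 4 and one final batch of 2
```

The batched reader is itself a reader, so it can be passed anywhere a reader is expected, e.g. as the data source for an executor's feed loop.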
```python
def train(if_use_cuda, params_dirname, is_sparse=True):
place = fluid.CUDAPlace(0) if if_use_cuda else fluid.CPUPlace()
train_reader = paddle.batch(
train_reader = fluid.io.batch(
paddle.dataset.imikolov.train(word_dict, N), BATCH_SIZE)
test_reader = paddle.batch(
test_reader = fluid.io.batch(
paddle.dataset.imikolov.test(word_dict, N), BATCH_SIZE)
first_word = fluid.data(name='firstw', shape=[None, 1], dtype='int64')
......
......@@ -98,9 +98,9 @@ def optimizer_func():
def train(if_use_cuda, params_dirname, is_sparse=True):
place = fluid.CUDAPlace(0) if if_use_cuda else fluid.CPUPlace()
train_reader = paddle.batch(
train_reader = fluid.io.batch(
paddle.dataset.imikolov.train(word_dict, N), BATCH_SIZE)
test_reader = paddle.batch(
test_reader = fluid.io.batch(
paddle.dataset.imikolov.test(word_dict, N), BATCH_SIZE)
first_word = fluid.data(name='firstw', shape=[None, 1], dtype='int64')
......