Commit 34ced953 authored by grasswolfs

Merge branch 'release/1.8' of https://github.com/PaddlePaddle/FluidDoc into remotes/origin/release/1.8
@@ -59,19 +59,21 @@ result = exe.run(fluid.default_main_program(), fetch_list=[avg_cost])
```python
import paddle.fluid as fluid
import paddle.fluid.compiler as compiler
import paddle.fluid.profiler as profiler
data1 = fluid.layers.fill_constant(shape=[1, 3, 8, 8], value=0.5, dtype='float32')
data2 = fluid.layers.fill_constant(shape=[1, 3, 5, 5], value=0.5, dtype='float32')
shape = fluid.layers.shape(data2)
shape = fluid.layers.slice(shape, axes=[0], starts=[0], ends=[4])
out = fluid.layers.crop_tensor(data1, shape=shape)
place = fluid.CUDAPlace(0)
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
compiled_prog = compiler.CompiledProgram(fluid.default_main_program())
with profiler.profiler('All', 'total') as prof:
    for i in range(10):
        result = exe.run(program=compiled_prog, fetch_list=[out])
```
After the program finishes, a profile report is printed automatically. In the report below, the `GpuMemCpy Summary` lists the time spent in two kinds of data-transfer calls. During OP execution, a `GpuMemcpySync` occurs whenever an input Tensor lives on a different device from the one the OP runs on, and this is usually the item we can optimize directly. Looking further down the report, both `slice` and `crop_tensor` trigger a `GpuMemcpySync`: although the program is set to run on the GPU, some OPs in the framework, such as `shape`, place their output on the CPU.
@@ -82,35 +84,34 @@ with profiler.profiler('All', 'total') as prof:
Note! This Report merge all thread info into one.
Place: All
Time unit: ms
Sorted by total time in descending order in the same thread
Total time: 26.6328
Computation time Total: 13.3133 Ratio: 49.9884%
Framework overhead Total: 13.3195 Ratio: 50.0116%
------------------------- GpuMemCpy Summary -------------------------
GpuMemcpy Calls: 30 Total: 1.47508 Ratio: 5.5386%
GpuMemcpyAsync Calls: 10 Total: 0.443514 Ratio: 1.66529%
GpuMemcpySync Calls: 20 Total: 1.03157 Ratio: 3.87331%
------------------------- Event Summary -------------------------
Event Calls Total CPU Time (Ratio) GPU Time (Ratio) Min. Max. Ave. Ratio.
FastThreadedSSAGraphExecutorPrepare 10 9.16493 9.152509 (0.998645) 0.012417 (0.001355) 0.025192 8.85968 0.916493 0.344122
shape 10 8.33057 8.330568 (1.000000) 0.000000 (0.000000) 0.030711 7.99849 0.833057 0.312793
fill_constant 20 4.06097 4.024522 (0.991025) 0.036449 (0.008975) 0.075087 0.888959 0.203049 0.15248
slice 10 1.78033 1.750439 (0.983212) 0.029888 (0.016788) 0.148503 0.290851 0.178033 0.0668471
  GpuMemcpySync:CPU->GPU 10 0.45524 0.446312 (0.980388) 0.008928 (0.019612) 0.039089 0.060694 0.045524 0.0170932
crop_tensor 10 1.67658 1.620542 (0.966578) 0.056034 (0.033422) 0.143906 0.258776 0.167658 0.0629515
  GpuMemcpySync:GPU->CPU 10 0.57633 0.552906 (0.959357) 0.023424 (0.040643) 0.050657 0.076322 0.057633 0.0216398
Fetch 10 0.919361 0.895201 (0.973721) 0.024160 (0.026279) 0.082935 0.138122 0.0919361 0.0345199
  GpuMemcpyAsync:GPU->CPU 10 0.443514 0.419354 (0.945526) 0.024160 (0.054474) 0.040639 0.059673 0.0443514 0.0166529
ScopeBufferedMonitor::post_local_exec_scopes_process 10 0.341999 0.341999 (1.000000) 0.000000 (0.000000) 0.028436 0.057134 0.0341999 0.0128413
eager_deletion 30 0.287236 0.287236 (1.000000) 0.000000 (0.000000) 0.005452 0.022696 0.00957453 0.010785
ScopeBufferedMonitor::pre_local_exec_scopes_process 10 0.047864 0.047864 (1.000000) 0.000000 (0.000000) 0.003668 0.011592 0.0047864 0.00179718
InitLocalVars 1 0.022981 0.022981 (1.000000) 0.000000 (0.000000) 0.022981 0.022981 0.022981 0.000862883
```
### Using the log to locate exactly where data transfers happen
@@ -138,6 +139,7 @@ I0406 14:56:23.287473 17516 operator.cc:180] CUDAPlace(0) Op(crop_tensor), input
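The `operator.cc` line quoted in the hunk header above is an example of the framework's verbose operator log. A minimal sketch of enabling that logging from Python follows; the exact `GLOG_vmodule` verbosity level is an assumption and may differ between Paddle builds.

```python
# Sketch: enable verbose operator logging via standard glog environment variables.
# Assumption: the device-placement messages in operator.cc are emitted around VLOG level 3.
# The variables must be set before paddle.fluid is first imported.
import os
os.environ.setdefault("GLOG_vmodule", "operator=3")   # per-file verbosity (assumed level)
os.environ.setdefault("GLOG_logtostderr", "1")        # send the log to stderr

import paddle.fluid as fluid  # imported after the glog settings so they take effect
```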
```python
import paddle.fluid as fluid
import paddle.fluid.compiler as compiler
import paddle.fluid.profiler as profiler
data1 = fluid.layers.fill_constant(shape=[1, 3, 8, 8], value=0.5, dtype='float32')
@@ -146,13 +148,13 @@ shape = fluid.layers.shape(data2)
with fluid.device_guard("cpu"):
    shape = fluid.layers.slice(shape, axes=[0], starts=[0], ends=[4])
out = fluid.layers.crop_tensor(data1, shape=shape)
place = fluid.CUDAPlace(0)
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())
compiled_prog = compiler.CompiledProgram(fluid.default_main_program())
with profiler.profiler('All', 'total') as prof:
    for i in range(10):
        result = exe.run(program=compiled_prog, fetch_list=[out])
```
Looking at the `GpuMemCpy Summary` of the profile report again, the `GpuMemcpySync` entries are gone. In a real model, if `GpuMemcpySync` calls account for a large share of the time and can be avoided by setting `device_guard`, removing them brings a corresponding performance gain.
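If wrapping the whole loop in `profiler.profiler` produces too much output, the profiler can also be switched on for just a window of iterations. Below is a hedged sketch reusing `exe`, `compiled_prog`, and `out` from the example above; the `start_profiler`/`stop_profiler` helpers live in `paddle.fluid.profiler` in the 1.8 line, but verify the signatures against your installed version.

```python
import paddle.fluid.profiler as profiler

# Profile only iterations 10..19 instead of the whole run.
for i in range(100):
    if i == 10:
        profiler.start_profiler('All')                   # record both CPU and GPU events
    elif i == 20:
        # Print the report sorted by total time and dump raw profile data to /tmp/profile.
        profiler.stop_profiler('total', '/tmp/profile')
    result = exe.run(program=compiled_prog, fetch_list=[out])
```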
......
@@ -6,6 +6,8 @@
append_backward
---------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.backward.append_backward
    :noindex:
@@ -6,6 +6,8 @@
gradients
---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.backward.gradients
    :noindex:
@@ -6,6 +6,8 @@
set_gradient_clip
-----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.clip.set_gradient_clip
    :noindex:
@@ -6,6 +6,8 @@
BackwardStrategy
----------------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.BackwardStrategy
    :members:
    :noindex:
......
@@ -6,6 +6,8 @@
CosineDecay
-----------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.CosineDecay
    :members:
    :noindex:
......
@@ -6,6 +6,8 @@
ExponentialDecay
----------------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.ExponentialDecay
    :members:
    :noindex:
......
@@ -6,6 +6,8 @@
InverseTimeDecay
----------------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.InverseTimeDecay
    :members:
    :noindex:
......
@@ -6,6 +6,8 @@
NaturalExpDecay
---------------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.NaturalExpDecay
    :members:
    :noindex:
......
@@ -6,6 +6,8 @@
NoamDecay
---------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.NoamDecay
    :members:
    :noindex:
......
@@ -6,6 +6,8 @@
PiecewiseDecay
--------------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.PiecewiseDecay
    :members:
    :noindex:
......
@@ -6,6 +6,8 @@
PolynomialDecay
---------------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.PolynomialDecay
    :members:
    :noindex:
......
@@ -6,6 +6,8 @@
TracedLayer
-----------
:api_attr: imperative programming (dynamic graph)
.. autoclass:: paddle.fluid.dygraph.TracedLayer
    :members:
    :noindex:
......
@@ -6,6 +6,8 @@
grad
----
:api_attr: imperative programming (dynamic graph)
.. autofunction:: paddle.fluid.dygraph.grad
    :noindex:
@@ -6,6 +6,8 @@
guard
-----
:api_attr: imperative programming (dynamic graph)
.. autofunction:: paddle.fluid.dygraph.guard
    :noindex:
@@ -6,6 +6,8 @@
load_dygraph
------------
:api_attr: imperative programming (dynamic graph)
.. autofunction:: paddle.fluid.dygraph.load_dygraph
    :noindex:
@@ -6,6 +6,8 @@
no_grad
-------
:api_attr: imperative programming (dynamic graph)
.. autofunction:: paddle.fluid.dygraph.no_grad
    :noindex:
@@ -6,6 +6,8 @@
save_dygraph
------------
:api_attr: imperative programming (dynamic graph)
.. autofunction:: paddle.fluid.dygraph.save_dygraph
    :noindex:
@@ -6,6 +6,8 @@
to_variable
-----------
:api_attr: imperative programming (dynamic graph)
.. autofunction:: paddle.fluid.dygraph.to_variable
    :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
Executor Executor
-------- --------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.executor.Executor .. autoclass:: paddle.fluid.executor.Executor
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
global_scope global_scope
------------ ------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.executor.global_scope .. autofunction:: paddle.fluid.executor.global_scope
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
scope_guard scope_guard
----------- -----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.executor.scope_guard .. autofunction:: paddle.fluid.executor.scope_guard
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
BuildStrategy BuildStrategy
------------- -------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.BuildStrategy .. autoclass:: paddle.fluid.BuildStrategy
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
CompiledProgram CompiledProgram
--------------- ---------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.CompiledProgram .. autoclass:: paddle.fluid.CompiledProgram
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
DataFeedDesc DataFeedDesc
------------ ------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.DataFeedDesc .. autoclass:: paddle.fluid.DataFeedDesc
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
DataFeeder DataFeeder
---------- ----------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.DataFeeder .. autoclass:: paddle.fluid.DataFeeder
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
ExecutionStrategy ExecutionStrategy
----------------- -----------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.ExecutionStrategy .. autoclass:: paddle.fluid.ExecutionStrategy
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
Executor Executor
-------- --------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.Executor .. autoclass:: paddle.fluid.Executor
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
ParallelExecutor ParallelExecutor
---------------- ----------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.ParallelExecutor .. autoclass:: paddle.fluid.ParallelExecutor
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
WeightNormParamAttr WeightNormParamAttr
------------------- -------------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.WeightNormParamAttr .. autoclass:: paddle.fluid.WeightNormParamAttr
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
create_random_int_lodtensor create_random_int_lodtensor
--------------------------- ---------------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.create_random_int_lodtensor .. autofunction:: paddle.fluid.create_random_int_lodtensor
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
data data
---- ----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.data .. autofunction:: paddle.fluid.data
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
device_guard device_guard
------------ ------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.device_guard .. autofunction:: paddle.fluid.device_guard
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
embedding embedding
--------- ---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.embedding .. autofunction:: paddle.fluid.embedding
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
global_scope global_scope
------------ ------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.global_scope .. autofunction:: paddle.fluid.global_scope
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
gradients gradients
--------- ---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.gradients .. autofunction:: paddle.fluid.gradients
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
memory_optimize memory_optimize
--------------- ---------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.memory_optimize .. autofunction:: paddle.fluid.memory_optimize
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
name_scope name_scope
---------- ----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.name_scope .. autofunction:: paddle.fluid.name_scope
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
program_guard program_guard
------------- -------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.program_guard .. autofunction:: paddle.fluid.program_guard
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
release_memory release_memory
-------------- --------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.release_memory .. autofunction:: paddle.fluid.release_memory
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
save save
---- ----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.save .. autofunction:: paddle.fluid.save
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
scope_guard scope_guard
----------- -----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.scope_guard .. autofunction:: paddle.fluid.scope_guard
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
load_inference_model load_inference_model
-------------------- --------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.load_inference_model .. autofunction:: paddle.fluid.io.load_inference_model
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
load_params load_params
----------- -----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.load_params .. autofunction:: paddle.fluid.io.load_params
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
load_persistables load_persistables
----------------- -----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.load_persistables .. autofunction:: paddle.fluid.io.load_persistables
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
save save
---- ----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.save .. autofunction:: paddle.fluid.io.save
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
save_inference_model save_inference_model
-------------------- --------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.save_inference_model .. autofunction:: paddle.fluid.io.save_inference_model
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
save_params save_params
----------- -----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.save_params .. autofunction:: paddle.fluid.io.save_params
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
save_persistables save_persistables
----------------- -----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.save_persistables .. autofunction:: paddle.fluid.io.save_persistables
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
save_vars save_vars
--------- ---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.io.save_vars .. autofunction:: paddle.fluid.io.save_vars
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
BeamSearchDecoder BeamSearchDecoder
----------------- -----------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.BeamSearchDecoder .. autoclass:: paddle.fluid.layers.BeamSearchDecoder
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
Decoder Decoder
------- -------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.Decoder .. autoclass:: paddle.fluid.layers.Decoder
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
DynamicRNN DynamicRNN
---------- ----------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.DynamicRNN .. autoclass:: paddle.fluid.layers.DynamicRNN
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
GRUCell GRUCell
------- -------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.GRUCell .. autoclass:: paddle.fluid.layers.GRUCell
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
IfElse IfElse
------ ------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.IfElse .. autoclass:: paddle.fluid.layers.IfElse
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
LSTMCell LSTMCell
-------- --------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.LSTMCell .. autoclass:: paddle.fluid.layers.LSTMCell
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
Print Print
----- -----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.Print .. autofunction:: paddle.fluid.layers.Print
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
RNNCell RNNCell
------- -------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.RNNCell .. autoclass:: paddle.fluid.layers.RNNCell
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
StaticRNN StaticRNN
--------- ---------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.StaticRNN .. autoclass:: paddle.fluid.layers.StaticRNN
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
Switch Switch
------ ------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.Switch .. autoclass:: paddle.fluid.layers.Switch
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
While While
----- -----
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.layers.While .. autoclass:: paddle.fluid.layers.While
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
autoincreased_step_counter autoincreased_step_counter
-------------------------- --------------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.autoincreased_step_counter .. autofunction:: paddle.fluid.layers.autoincreased_step_counter
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
batch_norm batch_norm
---------- ----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.batch_norm .. autofunction:: paddle.fluid.layers.batch_norm
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
bilinear_tensor_product bilinear_tensor_product
----------------------- -----------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.bilinear_tensor_product .. autofunction:: paddle.fluid.layers.bilinear_tensor_product
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
case case
---- ----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.case .. autofunction:: paddle.fluid.layers.case
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
center_loss center_loss
----------- -----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.center_loss .. autofunction:: paddle.fluid.layers.center_loss
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
cond cond
---- ----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.cond .. autofunction:: paddle.fluid.layers.cond
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
conv2d conv2d
------ ------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.conv2d .. autofunction:: paddle.fluid.layers.conv2d
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
conv2d_transpose conv2d_transpose
---------------- ----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.conv2d_transpose .. autofunction:: paddle.fluid.layers.conv2d_transpose
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
conv3d conv3d
------ ------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.conv3d .. autofunction:: paddle.fluid.layers.conv3d
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
conv3d_transpose conv3d_transpose
---------------- ----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.conv3d_transpose .. autofunction:: paddle.fluid.layers.conv3d_transpose
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
create_parameter create_parameter
---------------- ----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.create_parameter .. autofunction:: paddle.fluid.layers.create_parameter
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
create_py_reader_by_data create_py_reader_by_data
------------------------ ------------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.create_py_reader_by_data .. autofunction:: paddle.fluid.layers.create_py_reader_by_data
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
crf_decoding crf_decoding
------------ ------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.crf_decoding .. autofunction:: paddle.fluid.layers.crf_decoding
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
data data
---- ----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.data .. autofunction:: paddle.fluid.layers.data
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
data_norm data_norm
--------- ---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.data_norm .. autofunction:: paddle.fluid.layers.data_norm
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
deformable_conv deformable_conv
--------------- ---------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.deformable_conv .. autofunction:: paddle.fluid.layers.deformable_conv
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
dynamic_decode dynamic_decode
-------------- --------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.dynamic_decode .. autofunction:: paddle.fluid.layers.dynamic_decode
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
dynamic_gru dynamic_gru
----------- -----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.dynamic_gru .. autofunction:: paddle.fluid.layers.dynamic_gru
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
dynamic_lstm dynamic_lstm
------------ ------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.dynamic_lstm .. autofunction:: paddle.fluid.layers.dynamic_lstm
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
dynamic_lstmp dynamic_lstmp
------------- -------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.dynamic_lstmp .. autofunction:: paddle.fluid.layers.dynamic_lstmp
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
embedding embedding
--------- ---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.embedding .. autofunction:: paddle.fluid.layers.embedding
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
fc fc
-- --
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.fc .. autofunction:: paddle.fluid.layers.fc
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
group_norm group_norm
---------- ----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.group_norm .. autofunction:: paddle.fluid.layers.group_norm
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
gru_unit gru_unit
-------- --------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.gru_unit .. autofunction:: paddle.fluid.layers.gru_unit
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
hsigmoid hsigmoid
-------- --------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.hsigmoid .. autofunction:: paddle.fluid.layers.hsigmoid
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
im2sequence im2sequence
----------- -----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.im2sequence .. autofunction:: paddle.fluid.layers.im2sequence
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
inplace_abn inplace_abn
----------- -----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.inplace_abn .. autofunction:: paddle.fluid.layers.inplace_abn
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
instance_norm instance_norm
------------- -------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.instance_norm .. autofunction:: paddle.fluid.layers.instance_norm
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
layer_norm layer_norm
---------- ----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.layer_norm .. autofunction:: paddle.fluid.layers.layer_norm
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
linear_chain_crf linear_chain_crf
---------------- ----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.linear_chain_crf .. autofunction:: paddle.fluid.layers.linear_chain_crf
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
lstm lstm
---- ----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.lstm .. autofunction:: paddle.fluid.layers.lstm
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
lstm_unit lstm_unit
--------- ---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.lstm_unit .. autofunction:: paddle.fluid.layers.lstm_unit
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
multi_box_head multi_box_head
-------------- --------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.multi_box_head .. autofunction:: paddle.fluid.layers.multi_box_head
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
nce nce
--- ---
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.nce .. autofunction:: paddle.fluid.layers.nce
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
prelu prelu
----- -----
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.prelu .. autofunction:: paddle.fluid.layers.prelu
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
py_func py_func
------- -------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.py_func .. autofunction:: paddle.fluid.layers.py_func
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
py_reader py_reader
--------- ---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.py_reader .. autofunction:: paddle.fluid.layers.py_reader
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
read_file read_file
--------- ---------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.read_file .. autofunction:: paddle.fluid.layers.read_file
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
rnn rnn
--- ---
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.rnn .. autofunction:: paddle.fluid.layers.rnn
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
row_conv row_conv
-------- --------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.row_conv .. autofunction:: paddle.fluid.layers.row_conv
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
sequence_concat sequence_concat
--------------- ---------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_concat .. autofunction:: paddle.fluid.layers.sequence_concat
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
sequence_conv sequence_conv
------------- -------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_conv .. autofunction:: paddle.fluid.layers.sequence_conv
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
sequence_enumerate sequence_enumerate
------------------ ------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_enumerate .. autofunction:: paddle.fluid.layers.sequence_enumerate
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
sequence_expand sequence_expand
--------------- ---------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_expand .. autofunction:: paddle.fluid.layers.sequence_expand
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
sequence_expand_as sequence_expand_as
------------------ ------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_expand_as .. autofunction:: paddle.fluid.layers.sequence_expand_as
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
sequence_first_step sequence_first_step
------------------- -------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_first_step .. autofunction:: paddle.fluid.layers.sequence_first_step
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
sequence_last_step sequence_last_step
------------------ ------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_last_step .. autofunction:: paddle.fluid.layers.sequence_last_step
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
sequence_pad sequence_pad
------------ ------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_pad .. autofunction:: paddle.fluid.layers.sequence_pad
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
sequence_pool sequence_pool
------------- -------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_pool .. autofunction:: paddle.fluid.layers.sequence_pool
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
sequence_reshape sequence_reshape
---------------- ----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_reshape .. autofunction:: paddle.fluid.layers.sequence_reshape
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
sequence_scatter sequence_scatter
---------------- ----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_scatter .. autofunction:: paddle.fluid.layers.sequence_scatter
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
sequence_slice sequence_slice
-------------- --------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_slice .. autofunction:: paddle.fluid.layers.sequence_slice
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
sequence_softmax sequence_softmax
---------------- ----------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_softmax .. autofunction:: paddle.fluid.layers.sequence_softmax
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
sequence_unpad sequence_unpad
-------------- --------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.sequence_unpad .. autofunction:: paddle.fluid.layers.sequence_unpad
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
spectral_norm spectral_norm
------------- -------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.spectral_norm .. autofunction:: paddle.fluid.layers.spectral_norm
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
switch_case switch_case
----------- -----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.switch_case .. autofunction:: paddle.fluid.layers.switch_case
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
while_loop while_loop
---------- ----------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.layers.while_loop .. autofunction:: paddle.fluid.layers.while_loop
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
glu glu
--- ---
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.nets.glu .. autofunction:: paddle.fluid.nets.glu
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
img_conv_group img_conv_group
-------------- --------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.nets.img_conv_group .. autofunction:: paddle.fluid.nets.img_conv_group
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
scaled_dot_product_attention scaled_dot_product_attention
---------------------------- ----------------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.nets.scaled_dot_product_attention .. autofunction:: paddle.fluid.nets.scaled_dot_product_attention
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
sequence_conv_pool sequence_conv_pool
------------------ ------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.nets.sequence_conv_pool .. autofunction:: paddle.fluid.nets.sequence_conv_pool
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
simple_img_conv_pool simple_img_conv_pool
-------------------- --------------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.nets.simple_img_conv_pool .. autofunction:: paddle.fluid.nets.simple_img_conv_pool
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
DGCMomentumOptimizer DGCMomentumOptimizer
-------------------- --------------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.optimizer.DGCMomentumOptimizer .. autoclass:: paddle.fluid.optimizer.DGCMomentumOptimizer
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
ExponentialMovingAverage ExponentialMovingAverage
------------------------ ------------------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.optimizer.ExponentialMovingAverage .. autoclass:: paddle.fluid.optimizer.ExponentialMovingAverage
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
LookaheadOptimizer LookaheadOptimizer
------------------ ------------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.optimizer.LookaheadOptimizer .. autoclass:: paddle.fluid.optimizer.LookaheadOptimizer
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
ModelAverage ModelAverage
------------ ------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.optimizer.ModelAverage .. autoclass:: paddle.fluid.optimizer.ModelAverage
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
PipelineOptimizer PipelineOptimizer
----------------- -----------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.optimizer.PipelineOptimizer .. autoclass:: paddle.fluid.optimizer.PipelineOptimizer
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
RecomputeOptimizer RecomputeOptimizer
------------------ ------------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.optimizer.RecomputeOptimizer .. autoclass:: paddle.fluid.optimizer.RecomputeOptimizer
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
DistributeTranspiler DistributeTranspiler
-------------------- --------------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.transpiler.DistributeTranspiler .. autoclass:: paddle.fluid.transpiler.DistributeTranspiler
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
DistributeTranspilerConfig DistributeTranspilerConfig
-------------------------- --------------------------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.transpiler.DistributeTranspilerConfig .. autoclass:: paddle.fluid.transpiler.DistributeTranspilerConfig
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
HashName HashName
-------- --------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.transpiler.HashName .. autoclass:: paddle.fluid.transpiler.HashName
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
RoundRobin RoundRobin
---------- ----------
:api_attr: declarative programming (static graph)
.. autoclass:: paddle.fluid.transpiler.RoundRobin .. autoclass:: paddle.fluid.transpiler.RoundRobin
:members: :members:
:inherited-members: :inherited-members:
......
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
memory_optimize memory_optimize
--------------- ---------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.transpiler.memory_optimize .. autofunction:: paddle.fluid.transpiler.memory_optimize
:noindex: :noindex:
...@@ -6,6 +6,8 @@ ...@@ -6,6 +6,8 @@
release_memory release_memory
-------------- --------------
:api_attr: declarative programming (static graph)
.. autofunction:: paddle.fluid.transpiler.release_memory .. autofunction:: paddle.fluid.transpiler.release_memory
:noindex: :noindex:
@@ -3,7 +3,7 @@
append_backward
-------------------------------
:api_attr: declarative programming (static graph)
.. py:function:: paddle.fluid.backward.append_backward(loss, parameter_list=None, no_grad_set=None, callbacks=None)
......
@@ -3,7 +3,7 @@
gradients
-------------------------------
:api_attr: declarative programming (static graph)
.. py:function:: paddle.fluid.backward.gradients(targets, inputs, target_gradients=None, no_grad_set=None)
......
@@ -3,7 +3,7 @@
set_gradient_clip
-------------------------------
:api_attr: declarative programming (static graph)
.. py:function:: paddle.fluid.clip.set_gradient_clip(clip, param_list=None, program=None)
......
@@ -3,7 +3,7 @@
BackwardStrategy
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.BackwardStrategy
......
@@ -3,7 +3,7 @@
CosineDecay
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.CosineDecay(learning_rate, step_each_epoch, epochs, begin=0, step=1, dtype='float32')
......
@@ -3,7 +3,7 @@
ExponentialDecay
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.ExponentialDecay(learning_rate, decay_steps, decay_rate, staircase=False, begin=0, step=1, dtype='float32')
......
@@ -3,7 +3,7 @@
FC
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.FC(name_scope, size, num_flatten_dims=1, param_attr=None, bias_attr=None, act=None, is_test=False, dtype='float32')
......
@@ -3,7 +3,7 @@
InverseTimeDecay
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.InverseTimeDecay(learning_rate, decay_steps, decay_rate, staircase=False, begin=0, step=1, dtype='float32')
......
@@ -3,7 +3,7 @@
NaturalExpDecay
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.NaturalExpDecay(learning_rate, decay_steps, decay_rate, staircase=False, begin=0, step=1, dtype='float32')
......
@@ -3,7 +3,7 @@
NoamDecay
-------------------------------
:api_attr: imperative programming (dynamic graph)
.. py:class:: paddle.fluid.dygraph.NoamDecay(d_model, warmup_steps, begin=1, step=1, dtype='float32', learning_rate=1.0)
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
PiecewiseDecay PiecewiseDecay
------------------------------- -------------------------------
**注意:该API仅支持【动态图】模式** :api_attr: 命令式编程模式(动态图)
.. py:class:: paddle.fluid.dygraph.PiecewiseDecay(boundaries, values, begin, step=1, dtype='float32') .. py:class:: paddle.fluid.dygraph.PiecewiseDecay(boundaries, values, begin, step=1, dtype='float32')
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
PolynomialDecay PolynomialDecay
------------------------------- -------------------------------
**注意:该API仅支持【动态图】模式** :api_attr: 命令式编程模式(动态图)
.. py:class:: paddle.fluid.dygraph.PolynomialDecay(learning_rate, decay_steps, end_learning_rate=0.0001, power=1.0, cycle=False, begin=0, step=1, dtype='float32') .. py:class:: paddle.fluid.dygraph.PolynomialDecay(learning_rate, decay_steps, end_learning_rate=0.0001, power=1.0, cycle=False, begin=0, step=1, dtype='float32')
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
TracedLayer TracedLayer
------------------------------- -------------------------------
**注意:该API仅支持【动态图】模式** :api_attr: 命令式编程模式(动态图)
.. py:class:: paddle.fluid.dygraph.TracedLayer(program, parameters, feed_names, fetch_names) .. py:class:: paddle.fluid.dygraph.TracedLayer(program, parameters, feed_names, fetch_names)
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
grad grad
------------------------------- -------------------------------
**注意:该API仅支持【动态图】模式** :api_attr: 命令式编程模式(动态图)
.. py:method:: paddle.fluid.dygraph.grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False, only_inputs=True, allow_unused=False, no_grad_vars=None, backward_strategy=None) .. py:method:: paddle.fluid.dygraph.grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False, only_inputs=True, allow_unused=False, no_grad_vars=None, backward_strategy=None)
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
guard guard
------------------------------- -------------------------------
**注意:该API仅支持【动态图】模式** :api_attr: 命令式编程模式(动态图)
.. py:function:: paddle.fluid.dygraph.guard(place=None) .. py:function:: paddle.fluid.dygraph.guard(place=None)
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
load_dygraph load_dygraph
------------------------------- -------------------------------
**注意:该API仅支持【动态图】模式** :api_attr: 命令式编程模式(动态图)
.. py:function:: paddle.fluid.dygraph.load_dygraph(model_path) .. py:function:: paddle.fluid.dygraph.load_dygraph(model_path)
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
no_grad no_grad
------------------------------- -------------------------------
**注意:该API仅支持【动态图】模式** :api_attr: 命令式编程模式(动态图)
.. py:method:: paddle.fluid.dygraph.no_grad(func=None) .. py:method:: paddle.fluid.dygraph.no_grad(func=None)
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
save_dygraph save_dygraph
------------------------------- -------------------------------
**注意:该API仅支持【动态图】模式** :api_attr: 命令式编程模式(动态图)
.. py:function:: paddle.fluid.dygraph.save_dygraph(state_dict, model_path) .. py:function:: paddle.fluid.dygraph.save_dygraph(state_dict, model_path)
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
to_variable to_variable
------------------------------- -------------------------------
**注意:该API仅支持【动态图】模式** :api_attr: 命令式编程模式(动态图)
.. py:function:: paddle.fluid.dygraph.to_variable(value, name=None, zero_copy=None) .. py:function:: paddle.fluid.dygraph.to_variable(value, name=None, zero_copy=None)
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
Executor Executor
------------------------------- -------------------------------
**注意:该API仅支持【静态图】模式** :api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.executor.Executor (place=None) .. py:class:: paddle.fluid.executor.Executor (place=None)
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
global_scope global_scope
------------------------------- -------------------------------
**注意:该API仅支持【静态图】模式** :api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.global_scope() .. py:function:: paddle.fluid.global_scope()
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
scope_guard scope_guard
------------------------------- -------------------------------
**注意:该API仅支持【静态图】模式** :api_attr: 声明式编程模式(静态图)
.. py:function:: paddle.fluid.executor.scope_guard (scope) .. py:function:: paddle.fluid.executor.scope_guard (scope)
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
BuildStrategy BuildStrategy
------------------------------- -------------------------------
**注意:该API仅支持【静态图】模式** :api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.BuildStrategy .. py:class:: paddle.fluid.BuildStrategy
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
CompiledProgram CompiledProgram
------------------------------- -------------------------------
**注意:该API仅支持【静态图】模式** :api_attr: 声明式编程模式(静态图)
.. py:class:: paddle.fluid.CompiledProgram(program_or_graph, build_strategy=None) .. py:class:: paddle.fluid.CompiledProgram(program_or_graph, build_strategy=None)
......
...@@ -3,7 +3,7 @@
DataFeedDesc
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.DataFeedDesc(proto_file)
......
...@@ -3,7 +3,7 @@
DataFeeder
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.DataFeeder(feed_list, place, program=None)
......
...@@ -3,7 +3,7 @@
ExecutionStrategy
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.ExecutionStrategy
......
...@@ -4,7 +4,7 @@ Executor
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.Executor (place=None)
......
...@@ -3,7 +3,7 @@
ParallelExecutor
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.ParallelExecutor(use_cuda, loss_name=None, main_program=None, share_vars_from=None, exec_strategy=None, build_strategy=None, num_trainers=1, trainer_id=0, scope=None)
......
...@@ -3,7 +3,7 @@
WeightNormParamAttr
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.WeightNormParamAttr(dim=None, name=None, initializer=None, learning_rate=1.0, regularizer=None, trainable=True, do_model_average=False)
......
...@@ -4,7 +4,7 @@
create_random_int_lodtensor
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.create_random_int_lodtensor(recursive_seq_lens, base_shape, place, low, high)
......
...@@ -3,7 +3,7 @@
data
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.data(name, shape, dtype='float32', lod_level=0)
......
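A minimal sketch of `paddle.fluid.data`, assuming a batch-size-agnostic input; the name `x` and the constant feed are illustrative only:

```python
import numpy as np
import paddle.fluid as fluid

# None in the shape leaves the batch dimension undetermined until feed time.
x = fluid.data(name="x", shape=[None, 3], dtype="float32")
y = fluid.layers.scale(x, scale=2.0)

exe = fluid.Executor(fluid.CPUPlace())
out, = exe.run(fluid.default_main_program(),
               feed={"x": np.ones((4, 3), dtype="float32")},
               fetch_list=[y])
```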
...@@ -3,7 +3,7 @@
device_guard
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.device_guard(device=None)
......
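A small sketch of `fluid.device_guard`, assuming one wants to pin an op (here `shape`) to the CPU; the surrounding constants and shapes are made up for the example:

```python
import paddle.fluid as fluid

data = fluid.layers.fill_constant(shape=[1, 3, 8, 8], value=0.5, dtype='float32')
# Ops created inside the guard are assigned to the given device.
with fluid.device_guard("cpu"):
    shape = fluid.layers.shape(data)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
result, = exe.run(fetch_list=[shape])
```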
...@@ -3,7 +3,7 @@
embedding
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.embedding(input, size, is_sparse=False, is_distributed=False, padding_idx=None, param_attr=None, dtype='float32')
......
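A hedged sketch of `paddle.fluid.embedding`; the vocabulary size of 128, the 16-dimensional table, and the random ids are assumptions for illustration:

```python
import numpy as np
import paddle.fluid as fluid

ids = fluid.data(name="ids", shape=[None, 5], dtype="int64")
# Each id indexes a row of the [128, 16] lookup table, giving a [batch, 5, 16] output.
emb = fluid.embedding(input=ids, size=[128, 16])

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
out, = exe.run(feed={"ids": np.random.randint(0, 128, (2, 5)).astype("int64")},
               fetch_list=[emb])
```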
...@@ -3,7 +3,7 @@
global_scope
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.global_scope()
......
...@@ -3,7 +3,7 @@
gradients
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.gradients(targets, inputs, target_gradients=None, no_grad_set=None)
......
...@@ -3,7 +3,7 @@
memory_optimize
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.memory_optimize(input_program, skip_opt_set=None, print_log=False, level=0, skip_grads=True)
......
...@@ -3,7 +3,7 @@
name_scope
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.name_scope(prefix=None)
......
...@@ -3,7 +3,7 @@
program_guard
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.program_guard(main_program, startup_program=None)
......
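A minimal sketch of `fluid.program_guard`; the small FC network built inside the guard is an assumption for the example:

```python
import paddle.fluid as fluid

main_prog = fluid.Program()
startup_prog = fluid.Program()
# Layers defined inside the guard are added to main_prog/startup_prog
# rather than to the global default programs.
with fluid.program_guard(main_prog, startup_prog):
    x = fluid.data(name="x", shape=[None, 784], dtype="float32")
    hidden = fluid.layers.fc(input=x, size=10, act="relu")

exe = fluid.Executor(fluid.CPUPlace())
exe.run(startup_prog)
```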
...@@ -3,7 +3,7 @@
release_memory
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.release_memory(input_program, skip_opt_set=None)
......
...@@ -3,7 +3,7 @@
save
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.save(program, model_path)
......
...@@ -3,7 +3,7 @@
scope_guard
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.scope_guard(scope)
......
...@@ -3,7 +3,7 @@
load_inference_model
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.io.load_inference_model(dirname, executor, model_filename=None, params_filename=None, pserver_endpoints=None)
......
...@@ -3,7 +3,7 @@
load_params
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.io.load_params(executor, dirname, main_program=None, filename=None)
......
...@@ -3,7 +3,7 @@
load_persistables
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.io.load_persistables(executor, dirname, main_program=None, filename=None)
......
...@@ -3,7 +3,7 @@
save
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.io.save(program, model_path)
......
...@@ -3,7 +3,7 @@
save_inference_model
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.io.save_inference_model(dirname, feeded_var_names, target_vars, executor, main_program=None, model_filename=None, params_filename=None, export_for_deployment=True, program_only=False)
......
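A hedged end-to-end sketch of `fluid.io.save_inference_model` together with `fluid.io.load_inference_model`; the toy classifier, the `./infer_model` directory, and the random input are assumptions, not part of the diff:

```python
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name="x", shape=[None, 4], dtype="float32")
pred = fluid.layers.fc(input=x, size=2, act="softmax")

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

# Export a pruned program that only computes `pred` from `x`, plus its parameters.
fluid.io.save_inference_model(dirname="./infer_model",
                              feeded_var_names=["x"],
                              target_vars=[pred],
                              executor=exe)

# Load it back (possibly in another process) and run a prediction.
infer_prog, feed_names, fetch_targets = fluid.io.load_inference_model(
    dirname="./infer_model", executor=exe)
out, = exe.run(infer_prog,
               feed={feed_names[0]: np.random.random((1, 4)).astype("float32")},
               fetch_list=fetch_targets)
```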
...@@ -3,7 +3,7 @@
save_params
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.io.save_params(executor, dirname, main_program=None, filename=None)
......
...@@ -3,7 +3,7 @@
save_persistables
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.io.save_persistables(executor, dirname, main_program=None, filename=None)
......
...@@ -3,7 +3,7 @@
save_vars
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.io.save_vars(executor, dirname, main_program=None, vars=None, predicate=None, filename=None)
......
...@@ -4,7 +4,7 @@ BeamSearchDecoder
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.layers.BeamSearchDecoder(cell, start_token, end_token, beam_size, embedding_fn=None, output_fn=None)
......
...@@ -4,7 +4,7 @@ Decoder
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.layers.Decoder()
......
...@@ -3,7 +3,7 @@
DynamicRNN
===================
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.layers.DynamicRNN(name=None)
......
...@@ -3,7 +3,7 @@
GRUCell
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.layers.GRUCell(hidden_size, param_attr=None, bias_attr=None, gate_activation=None, activation=None, dtype="float32", name="GRUCell")
......
...@@ -3,7 +3,7 @@
IfElse
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.layers.IfElse(cond, name=None)
......
...@@ -4,7 +4,7 @@ LSTMCell
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.layers.LSTMCell(hidden_size, param_attr=None, bias_attr=None, gate_activation=None, activation=None, forget_bias=1.0, dtype="float32", name="LSTMCell")
......
...@@ -3,7 +3,7 @@
Print
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.Print(input, first_n=-1, message=None, summarize=20, print_tensor_name=True, print_tensor_type=True, print_tensor_shape=True, print_tensor_lod=True, print_phase='both')
......
...@@ -4,7 +4,7 @@ RNNCell
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.layers.RNNCell(name=None)
RNNCell is an abstract base class representing the computation that maps an input and a state to an output and a new state; it is mainly used in RNNs.
......
...@@ -3,7 +3,7 @@
StaticRNN
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.layers.StaticRNN(name=None)
......
...@@ -3,7 +3,7 @@
Switch
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.layers.Switch (name=None)
......
...@@ -3,7 +3,7 @@
While
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.layers.While (cond, is_test=False, name=None)
......
...@@ -3,7 +3,7 @@
autoincreased_step_counter
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.autoincreased_step_counter(counter_name=None, begin=1, step=1)
......
...@@ -3,7 +3,7 @@
batch_norm
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.batch_norm(input, act=None, is_test=False, momentum=0.9, epsilon=1e-05, param_attr=None, bias_attr=None, data_layout='NCHW', in_place=False, name=None, moving_mean_name=None, moving_variance_name=None, do_model_average_for_mean_and_var=False, use_global_stats=False)
......
...@@ -3,7 +3,7 @@
bilinear_tensor_product
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.bilinear_tensor_product(x, y, size, act=None, name=None, param_attr=None, bias_attr=None)
......
...@@ -3,7 +3,7 @@
case
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.case(pred_fn_pairs, default=None, name=None)
......
...@@ -3,7 +3,7 @@
center_loss
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.center_loss(input, label, num_classes, alpha, param_attr, update_center=True)
......
...@@ -3,7 +3,7 @@
cond
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.cond(pred, true_fn=None, false_fn=None, name=None)
......
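A minimal sketch of `fluid.layers.cond`; the two constants and the branch values are assumptions chosen only to make the example runnable:

```python
import paddle.fluid as fluid
from paddle.fluid.layers import cond, fill_constant, less_than

a = fill_constant(shape=[1], dtype='float32', value=0.23)
b = fill_constant(shape=[1], dtype='float32', value=0.25)
# true_fn/false_fn are callables; only the branch that is taken contributes the output.
out = cond(less_than(a, b),
           lambda: fill_constant(shape=[1], dtype='float32', value=1.0),
           lambda: fill_constant(shape=[1], dtype='float32', value=-1.0))

exe = fluid.Executor(fluid.CPUPlace())
res, = exe.run(fluid.default_main_program(), fetch_list=[out])
```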
...@@ -3,7 +3,7 @@
conv2d
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.conv2d(input, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None, data_format="NCHW")
......
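A hedged sketch of `fluid.layers.conv2d`; the input shape, filter count, and random feed are made up for the example:

```python
import numpy as np
import paddle.fluid as fluid

img = fluid.data(name="img", shape=[None, 3, 32, 32], dtype="float32")
# 8 filters of size 3x3 over an NCHW input, followed by ReLU.
conv = fluid.layers.conv2d(input=img, num_filters=8, filter_size=3, act="relu")

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
out, = exe.run(feed={"img": np.random.random((1, 3, 32, 32)).astype("float32")},
               fetch_list=[conv])
```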
...@@ -3,7 +3,7 @@
conv2d_transpose
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.conv2d_transpose(input, num_filters, output_size=None, filter_size=None, padding=0, stride=1, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None, data_format='NCHW')
......
...@@ -3,7 +3,7 @@
conv3d
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.conv3d(input, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None, data_format="NCDHW")
......
...@@ -3,7 +3,7 @@
conv3d_transpose
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.conv3d_transpose(input, num_filters, output_size=None, filter_size=None, padding=0, stride=1, dilation=1, groups=None, param_attr=None, bias_attr=None, use_cudnn=True, act=None, name=None, data_format='NCDHW')
......
...@@ -3,7 +3,7 @@
create_parameter
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.create_parameter(shape,dtype,name=None,attr=None,is_bias=False,default_initializer=None)
......
...@@ -3,7 +3,7 @@
create_py_reader_by_data
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.create_py_reader_by_data(capacity,feed_list,name=None,use_double_buffer=True)
......
...@@ -3,7 +3,7 @@
crf_decoding
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.crf_decoding(input, param_attr, label=None, length=None)
......
...@@ -3,7 +3,7 @@
data
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.data(name, shape, append_batch_size=True, dtype='float32', lod_level=0, type=VarType.LOD_TENSOR, stop_gradient=True)
......
...@@ -3,7 +3,7 @@
data_norm
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.data_norm(input, act=None, epsilon=1e-05, param_attr=None, data_layout='NCHW', in_place=False, name=None, moving_mean_name=None, moving_variance_name=None, do_model_average_for_mean_and_var=False)
......
...@@ -3,7 +3,7 @@
deformable_conv
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.deformable_conv(input, offset, mask, num_filters, filter_size, stride=1, padding=0, dilation=1, groups=None, deformable_groups=None, im2col_step=None, param_attr=None, bias_attr=None, modulated=True, name=None)
......
...@@ -4,7 +4,7 @@ dynamic_decode
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:method:: dynamic_decode(decoder, inits=None, max_step_num=None, output_time_major=False, **kwargs):
......
...@@ -3,7 +3,7 @@
dynamic_gru
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.dynamic_gru(input, size, param_attr=None, bias_attr=None, is_reverse=False, gate_activation='sigmoid', candidate_activation='tanh', h_0=None, origin_mode=False)
......
...@@ -3,7 +3,7 @@
dynamic_lstm
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.dynamic_lstm(input, size, h_0=None, c_0=None, param_attr=None, bias_attr=None, use_peepholes=True, is_reverse=False, gate_activation='sigmoid', cell_activation='tanh', candidate_activation='tanh', dtype='float32', name=None)
......
...@@ -2,7 +2,7 @@
dynamic_lstmp
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.dynamic_lstmp(input, size, proj_size, param_attr=None, bias_attr=None, use_peepholes=True, is_reverse=False, gate_activation='sigmoid', cell_activation='tanh', candidate_activation='tanh', proj_activation='tanh', dtype='float32', name=None, h_0=None, c_0=None, cell_clip=None, proj_clip=None)
......
...@@ -3,7 +3,7 @@
embedding
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.embedding(input, size, is_sparse=False, is_distributed=False, padding_idx=None, param_attr=None, dtype='float32')
......
...@@ -3,7 +3,7 @@
fc
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.fc(input, size, num_flatten_dims=1, param_attr=None, bias_attr=None, act=None, name=None)
......
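A minimal sketch of `fluid.layers.fc`; the 32-dimensional input, the output size of 10, and the random feed are assumptions for the example:

```python
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name="x", shape=[None, 32], dtype="float32")
# Projects the 32-d input features to 10 outputs: out = xW + b.
y = fluid.layers.fc(input=x, size=10)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
out, = exe.run(feed={"x": np.random.random((4, 32)).astype("float32")},
               fetch_list=[y])
```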
...@@ -3,7 +3,7 @@
group_norm
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.group_norm(input, groups, epsilon=1e-05, param_attr=None, bias_attr=None, act=None, data_layout='NCHW', name=None)
......
...@@ -3,7 +3,7 @@
gru_unit
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.gru_unit(input, hidden, size, param_attr=None, bias_attr=None, activation='tanh', gate_activation='sigmoid', origin_mode=False)
......
...@@ -3,7 +3,7 @@
hsigmoid
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.hsigmoid(input, label, num_classes, param_attr=None, bias_attr=None, name=None, path_table=None, path_code=None, is_custom=False, is_sparse=False)
......
...@@ -3,7 +3,7 @@
im2sequence
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.im2sequence(input, filter_size=1, stride=1, padding=0, input_image_size=None, out_stride=1, name=None)
......
...@@ -3,7 +3,7 @@
inplace_abn
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.inplace_abn(input, act=None, is_test=False, momentum=0.9, epsilon=1e-05, param_attr=None, bias_attr=None, data_layout='NCHW', name=None, moving_mean_name=None, moving_variance_name=None, do_model_average_for_mean_and_var=False, use_global_stats=False, act_alpha=1.0)
......
...@@ -3,7 +3,7 @@
instance_norm
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.instance_norm(input, epsilon=1e-05, param_attr=None, bias_attr=None, name=None)
......
...@@ -3,7 +3,7 @@
layer_norm
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.layer_norm(input, scale=True, shift=True, begin_norm_axis=1, epsilon=1e-05, param_attr=None, bias_attr=None, act=None, name=None)
......
...@@ -3,7 +3,7 @@
linear_chain_crf
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.linear_chain_crf(input, label, param_attr=None, length=None)
......
...@@ -3,7 +3,7 @@
lstm
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.lstm(input, init_h, init_c, max_len, hidden_size, num_layers, dropout_prob=0.0, is_bidirec=False, is_test=False, name=None, default_initializer=None, seed=-1)
......
...@@ -3,7 +3,7 @@
lstm_unit
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.lstm_unit(x_t, hidden_t_prev, cell_t_prev, forget_bias=0.0, param_attr=None, bias_attr=None, name=None)
......
...@@ -3,7 +3,7 @@
multi_box_head
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.multi_box_head(inputs, image, base_size, num_classes, aspect_ratios, min_ratio=None, max_ratio=None, min_sizes=None, max_sizes=None, steps=None, step_w=None, step_h=None, offset=0.5, variance=[0.1, 0.1, 0.2, 0.2], flip=True, clip=False, kernel_size=1, pad=0, stride=1, name=None, min_max_aspect_ratios_order=False)
......
...@@ -3,7 +3,7 @@
nce
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.nce(input, label, num_total_classes, sample_weight=None, param_attr=None, bias_attr=None, num_neg_samples=None, name=None, sampler='uniform', custom_dist=None, seed=0, is_sparse=False)
......
...@@ -3,7 +3,7 @@
prelu
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.prelu(x, mode, param_attr=None, name=None)
......
...@@ -3,7 +3,7 @@
py_func
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.py_func(func, x, out, backward_func=None, skip_vars_in_backward_input=None)
......
...@@ -3,7 +3,7 @@
py_reader
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.py_reader(capacity, shapes, dtypes, lod_levels=None, name=None, use_double_buffer=True)
......
...@@ -3,7 +3,7 @@
read_file
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.read_file(reader)
......
...@@ -4,7 +4,7 @@ rnn
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:method:: paddle.fluid.layers.rnn(cell, inputs, initial_states=None, sequence_length=None, time_major=False, is_reverse=False, **kwargs)
......
...@@ -3,7 +3,7 @@
row_conv
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.row_conv(input, future_context_size, param_attr=None, act=None)
......
...@@ -3,7 +3,7 @@
sequence_concat
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.sequence_concat(input, name=None)
......
...@@ -3,7 +3,7 @@
sequence_conv
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.sequence_conv(input, num_filters, filter_size=3, filter_stride=1, padding=True, padding_start=None, bias_attr=None, param_attr=None, act=None, name=None)
......
...@@ -3,7 +3,7 @@
sequence_enumerate
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.sequence_enumerate(input, win_size, pad_value=0, name=None)
......
...@@ -3,7 +3,7 @@
sequence_expand_as
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.sequence_expand_as(x, y, name=None)
......
...@@ -3,7 +3,7 @@
sequence_expand
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.sequence_expand(x, y, ref_level=-1, name=None)
......
...@@ -3,7 +3,7 @@
sequence_first_step
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.sequence_first_step(input)
......
...@@ -3,7 +3,7 @@
sequence_last_step
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.sequence_last_step(input)
......
...@@ -3,7 +3,7 @@
sequence_pad
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.sequence_pad(x,pad_value,maxlen=None,name=None)
......
...@@ -3,7 +3,7 @@
sequence_pool
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.sequence_pool(input, pool_type, is_test=False, pad_value=0.0)
......
...@@ -3,7 +3,7 @@
sequence_reshape
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.sequence_reshape(input, new_dim)
......
...@@ -3,7 +3,7 @@
sequence_scatter
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.sequence_scatter(input, index, updates, name=None)
......
...@@ -3,7 +3,7 @@
sequence_slice
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.sequence_slice(input, offset, length, name=None)
......
...@@ -3,7 +3,7 @@
sequence_softmax
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.sequence_softmax(input, use_cudnn=False, name=None)
......
...@@ -3,7 +3,7 @@
sequence_unpad
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.sequence_unpad(x, length, name=None)
......
...@@ -3,7 +3,7 @@
spectral_norm
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.spectral_norm(weight, dim=0, power_iters=1, eps=1e-12, name=None)
......
...@@ -3,7 +3,7 @@
switch_case
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.switch_case(branch_index, branch_fns, default=None, name=None)
......
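A hedged sketch of `fluid.layers.switch_case`; the branch indices and constant outputs are made-up values for illustration:

```python
import paddle.fluid as fluid
from paddle.fluid.layers import switch_case, fill_constant

index = fill_constant(shape=[1], dtype='int32', value=1)
# branch_fns maps integer indices to callables; `default` covers all other indices.
out = switch_case(
    branch_index=index,
    branch_fns={
        0: lambda: fill_constant(shape=[1], dtype='float32', value=0.0),
        1: lambda: fill_constant(shape=[1], dtype='float32', value=10.0),
    },
    default=lambda: fill_constant(shape=[1], dtype='float32', value=-1.0))

exe = fluid.Executor(fluid.CPUPlace())
res, = exe.run(fetch_list=[out])
```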
...@@ -4,7 +4,7 @@ while_loop
____________________________________
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.layers.while_loop(cond, body, loop_vars, is_test=False, name=None)
......
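A minimal sketch of `fluid.layers.while_loop`, counting from 0 to 10; the loop variables and the upper bound are assumptions chosen for the example:

```python
import paddle.fluid as fluid
from paddle.fluid.layers import while_loop, fill_constant, less_than, increment

def cond(i, ten):
    # Keep looping while i < ten.
    return less_than(i, ten)

def body(i, ten):
    i = increment(x=i, value=1, in_place=True)
    return [i, ten]

i = fill_constant(shape=[1], dtype='int64', value=0)
ten = fill_constant(shape=[1], dtype='int64', value=10)
# loop_vars are threaded through cond/body and returned after the loop finishes.
i, ten = while_loop(cond=cond, body=body, loop_vars=[i, ten])

exe = fluid.Executor(fluid.CPUPlace())
res, = exe.run(fluid.default_main_program(), fetch_list=[i])
```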
...@@ -2,7 +2,7 @@
glu
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.nets.glu(input, dim=-1)
......
...@@ -3,7 +3,7 @@
img_conv_group
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.nets.img_conv_group(input, conv_num_filter, pool_size, conv_padding=1, conv_filter_size=3, conv_act=None, param_attr=None, conv_with_batchnorm=False, conv_batchnorm_drop_rate=0.0, pool_stride=1, pool_type='max', use_cudnn=True)
......
...@@ -3,7 +3,7 @@
scaled_dot_product_attention
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.nets.scaled_dot_product_attention(queries, keys, values, num_heads=1, dropout_rate=0.0)
......
...@@ -3,7 +3,7 @@
sequence_conv_pool
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.nets.sequence_conv_pool(input, num_filters, filter_size, param_attr=None, act='sigmoid', pool_type='max', bias_attr=None)
......
...@@ -3,7 +3,7 @@
simple_img_conv_pool
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.nets.simple_img_conv_pool(input, num_filters, filter_size, pool_size, pool_stride, pool_padding=0, pool_type='max', global_pooling=False, conv_stride=1, conv_padding=0, conv_dilation=1, conv_groups=1, param_attr=None, bias_attr=None, act=None, use_cudnn=True)
......
...@@ -3,7 +3,7 @@
DGCMomentumOptimizer
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.optimizer.DGCMomentumOptimizer(learning_rate, momentum, rampup_begin_step, rampup_step=1, sparsity=[0.999], use_nesterov=False, local_grad_clip_norm=None, num_trainers=None, regularization=None, grad_clip=None, name=None)
......
...@@ -3,7 +3,7 @@
ExponentialMovingAverage
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.optimizer.ExponentialMovingAverage(decay=0.999, thres_steps=None, name=None)
......
...@@ -3,7 +3,7 @@
LookaheadOptimizer
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.optimizer.LookaheadOptimizer(inner_optimizer, alpha=0.5, k=5)
......
...@@ -3,7 +3,7 @@
ModelAverage
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.optimizer.ModelAverage(average_window_rate, min_average_window=10000, max_average_window=10000, regularization=None, name=None)
......
...@@ -3,7 +3,7 @@
PipelineOptimizer
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.optimizer.PipelineOptimizer(optimizer, cut_list=None, place_list=None, concurrency_list=None, queue_size=30, sync_steps=1, start_cpu_core_id=0)
......
...@@ -3,7 +3,7 @@
RecomputeOptimizer
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.optimizer.RecomputeOptimizer(optimizer)
......
...@@ -3,7 +3,7 @@
DistributeTranspilerConfig
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.transpiler.DistributeTranspilerConfig
......
...@@ -3,7 +3,7 @@
DistributeTranspiler
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.transpiler.DistributeTranspiler (config=None)
......
...@@ -3,7 +3,7 @@
HashName
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.transpiler.HashName(pserver_endpoints)
......
...@@ -3,7 +3,7 @@
RoundRobin
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:class:: paddle.fluid.transpiler.RoundRobin(pserver_endpoints)
......
...@@ -3,7 +3,7 @@
memory_optimize
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.transpiler.memory_optimize(input_program, skip_opt_set=None, print_log=False, level=0, skip_grads=True)
......
...@@ -3,7 +3,7 @@
release_memory
-------------------------------
-**Note: this API only supports the [static graph] mode**
+:api_attr: Declarative programming mode (static graph)
.. py:function:: paddle.fluid.transpiler.release_memory(input_program, skip_opt_set=None)
......
...@@ -228,15 +228,15 @@ PaddlePaddle references the various BLAS/CUDA/cuDNN libraries through paths specified at compile time.
</thead>
<tbody>
<tr>
-<td> paddlepaddle==[version] e.g. paddlepaddle==1.8.0 </td>
+<td> paddlepaddle==[version] e.g. paddlepaddle==1.8.1 </td>
<td> CPU-only build of the corresponding PaddlePaddle version; for the available versions see <a href=https://pypi.org/project/paddlepaddle/#history>Pypi</a> </td>
</tr>
<tr>
-<td> paddlepaddle-gpu==[version] e.g. paddlepaddle-gpu==1.8.0 </td>
+<td> paddlepaddle-gpu==[version] e.g. paddlepaddle-gpu==1.8.1 </td>
<td> By default installs the PaddlePaddle package of the given [version] built against CUDA 10.0 and cuDNN 7 </td>
</tr>
<tr>
-<td> paddlepaddle-gpu==[version].postXX e.g. paddlepaddle-gpu==1.8.0.post97 </td>
+<td> paddlepaddle-gpu==[version].postXX e.g. paddlepaddle-gpu==1.8.1.post97 </td>
<td> PaddlePaddle package of the corresponding version built against CUDA 9.0 and cuDNN 7</td>
</tr>
</tbody>
...@@ -268,126 +268,126 @@ PaddlePaddle references the various BLAS/CUDA/cuDNN libraries through paths specified at compile time.
<tbody>
<tr>
<td> cpu-mkl </td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl">
-paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
+paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl">
-paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
+paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl">
-paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
+paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl">
-paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
+paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl">
-paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
+paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cpu-openblas </td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl">
-paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
+paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl"> paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl"> paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl">
-paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
+paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl">
-paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
+paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl">
-paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
+paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda9-cudnn7-openblas </td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda9-cudnn7-mkl </td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> cuda10_cudnn7-mkl </td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td>
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td>
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td>
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp36-cp36m-linux_x86_64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp36-cp36m-linux_x86_64.whl">
-paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td>
+paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp37-cp37m-linux_x86_64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp37-cp37m-linux_x86_64.whl">
-paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td>
+paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr>
<tr>
<td> win_cpu_mkl </td>
<td> - </td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl">
-paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl</a></td>
+paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl">
-paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl</a></td>
+paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl">
-paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl</a></td>
+paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl">
-paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl</a></td>
+paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cuda9_cudnn7_mkl </td>
<td> - </td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp27-cp27m-win_amd64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp27-cp27m-win_amd64.whl">
-paddlepaddle_gpu-1.8.0-cp27-cp27m-win_amd64.whl</a></td>
+paddlepaddle_gpu-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp35-cp35m-win_amd64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp35-cp35m-win_amd64.whl">
-paddlepaddle_gpu-1.8.0-cp35-cp35m-win_amd64.whl</a></td>
+paddlepaddle_gpu-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp36-cp36m-win_amd64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp36-cp36m-win_amd64.whl">
-paddlepaddle_gpu-1.8.0-cp36-cp36m-win_amd64.whl</a></td>
+paddlepaddle_gpu-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp37-cp37m-win_amd64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp37-cp37m-win_amd64.whl">
-paddlepaddle_gpu-1.8.0-cp37-cp37m-win_amd64.whl</a></td>
+paddlepaddle_gpu-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr>
<tr>
<td> win_cuda10_cudnn7_mkl </td>
<td> - </td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp27-cp27m-win_amd64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp27-cp27m-win_amd64.whl">
-paddlepaddle_gpu-1.8.0-cp27-cp27m-win_amd64.whl</a></td>
+paddlepaddle_gpu-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp35-cp35m-win_amd64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp35-cp35m-win_amd64.whl">
-paddlepaddle_gpu-1.8.0-cp35-cp35m-win_amd64.whl</a></td>
+paddlepaddle_gpu-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp36-cp36m-win_amd64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp36-cp36m-win_amd64.whl">
-paddlepaddle_gpu-1.8.0-cp36-cp36m-win_amd64.whl</a></td>
+paddlepaddle_gpu-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
-<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp37-cp37m-win_amd64.whl">
+<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr> </tr>
<tr> <tr>
<td> win_cpu_openblas </td> <td> win_cpu_openblas </td>
<td> - </td> <td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl">
paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl</a></td> paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl">
paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl</a></td> paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl">
paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl</a></td> paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl">
paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl</a></td> paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr> </tr>
<tr> <tr>
<td> win_cuda9_cudnn7_openblas </td> <td> win_cuda9_cudnn7_openblas </td>
<td> - </td> <td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp27-cp27m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp35-cp35m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp35-cp35m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp36-cp36m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp37-cp37m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr> </tr>
<tr> <tr>
<td> mac_cpu </td> <td> mac_cpu </td>
<td> - </td> <td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp27-cp27m-macosx_10_6_intel.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp27-cp27m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp27-cp27m-macosx_10_6_intel.whl</a></td> paddlepaddle-1.8.1-cp27-cp27m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp35-cp35m-macosx_10_6_intel.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp35-cp35m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp35-cp35m-macosx_10_6_intel.whl</a></td> paddlepaddle-1.8.1-cp35-cp35m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp36-cp36m-macosx_10_6_intel.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp36-cp36m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp36-cp36m-macosx_10_6_intel.whl</a></td> paddlepaddle-1.8.1-cp36-cp36m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp37-cp37m-macosx_10_6_intel.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp37-cp37m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp37-cp37m-macosx_10_6_intel.whl</a></td> paddlepaddle-1.8.1-cp37-cp37m-macosx_10_6_intel.whl</a></td>
</tr> </tr>
</tbody> </tbody>
</table> </table>
...@@ -556,16 +556,16 @@ platform tag: 类似 'linux_x86_64', 'any' ...@@ -556,16 +556,16 @@ platform tag: 类似 'linux_x86_64', 'any'
<tbody> <tbody>
<tr> <tr>
<td> cuda10.1-cudnn7-mkl </td> <td> cuda10.1-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td> paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td> paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td> paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td> paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td> paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr> </tr>
</tbody> </tbody>
</table> </table>
......
...@@ -225,15 +225,15 @@ PaddlePaddle implements references to various BLAS/CUDA/cuDNN libraries by specif ...@@ -225,15 +225,15 @@ PaddlePaddle implements references to various BLAS/CUDA/cuDNN libraries by specif
</thead> </thead>
<tbody> <tbody>
<tr> <tr>
<td> paddlepaddle==[version code] such as paddlepaddle==1.8.0 </td> <td> paddlepaddle==[version code] such as paddlepaddle==1.8.1 </td>
<td> Installs only the corresponding CPU version of PaddlePaddle; please refer to <a href=https://pypi.org/project/paddlepaddle/#history>PyPI</a> for the specific version. </td> <td> Installs only the corresponding CPU version of PaddlePaddle; please refer to <a href=https://pypi.org/project/paddlepaddle/#history>PyPI</a> for the specific version. </td>
</tr> </tr>
<tr> <tr>
<td> paddlepaddle-gpu==[version code], such as paddlepaddle-gpu==1.8.0 </td> <td> paddlepaddle-gpu==[version code], such as paddlepaddle-gpu==1.8.1 </td>
<td> By default, installs the PaddlePaddle package for [version code] built against CUDA 10.0 and cuDNN 7 </td> <td> By default, installs the PaddlePaddle package for [version code] built against CUDA 10.0 and cuDNN 7 </td>
</tr> </tr>
<tr> <tr>
<td> paddlepaddle-gpu==[version code].postXX, such as paddlepaddle-gpu==1.8.0.post97 </td> <td> paddlepaddle-gpu==[version code].postXX, such as paddlepaddle-gpu==1.8.1.post97 </td>
<td> Installs the PaddlePaddle package for the corresponding version, built against CUDA 9.0 and cuDNN 7 </td> <td> Installs the PaddlePaddle package for the corresponding version, built against CUDA 9.0 and cuDNN 7 </td>
</tr> </tr>
</tbody> </tbody>
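A quick illustration of how these package names map to pip commands, using the 1.8.1 release as the example (a sketch only; see the wheel table below for all supported configurations):

::

	python -m pip install paddlepaddle==1.8.1              # CPU-only build
	python -m pip install paddlepaddle-gpu==1.8.1          # GPU build for CUDA 10.0 + cuDNN 7
	python -m pip install paddlepaddle-gpu==1.8.1.post97   # GPU build for CUDA 9.0 + cuDNN 7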
...@@ -267,126 +267,126 @@ Please note that: in the commands, <code> paddlepaddle-gpu </code> will install ...@@ -267,126 +267,126 @@ Please note that: in the commands, <code> paddlepaddle-gpu </code> will install
<tbody> <tbody>
<tr> <tr>
<td> cpu-mkl </td> <td> cpu-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td> paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td> paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td> paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td> paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mkl/paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mkl/paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td> paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr> </tr>
<tr> <tr>
<td> cpu-openblas </td> <td> cpu-openblas </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td> paddlepaddle-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl"> paddlepaddle-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl"> paddlepaddle-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td> paddlepaddle-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td> paddlepaddle-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-openblas/paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-openblas/paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td> paddlepaddle-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr> </tr>
<tr> <tr>
<td> cuda9-cudnn7-openblas </td> <td> cuda9-cudnn7-openblas </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-openblas/paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr> </tr>
<tr> <tr>
<td> cuda9-cudnn7-mkl </td> <td> cuda9-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp36-cp36m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post97-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda9-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post97-cp37-cp37m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr> </tr>
<tr> <tr>
<td> cuda10_cudnn7-mkl </td> <td> cuda10_cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp27-cp27mu-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp27-cp27m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp35-cp35m-linux_x86_64.whl"> paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp36-cp36m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td> paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.0.post107-cp37-cp37m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10-cudnn7-mkl/paddlepaddle_gpu-1.8.1.post107-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td> paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr> </tr>
<tr> <tr>
<td> win_cpu_mkl </td> <td> win_cpu_mkl </td>
<td> - </td> <td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl">
paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl</a></td> paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl">
paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl</a></td> paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl">
paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl</a></td> paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl">
paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl</a></td> paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr> </tr>
<tr> <tr>
<td> win_cuda9_cudnn7_mkl </td> <td> win_cuda9_cudnn7_mkl </td>
<td> - </td> <td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp27-cp27m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp35-cp35m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp35-cp35m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp36-cp36m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post97-cp37-cp37m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post97-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr> </tr>
<tr> <tr>
<td> win_cuda10_cudnn7_mkl </td> <td> win_cuda10_cudnn7_mkl </td>
<td> - </td> <td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp27-cp27m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp35-cp35m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp35-cp35m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp36-cp36m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-mkl/paddlepaddle_gpu-1.8.0.post107-cp37-cp37m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-mkl/paddlepaddle_gpu-1.8.1.post107-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr> </tr>
<tr> <tr>
<td> win_cpu_openblas </td> <td> win_cpu_openblas </td>
<td> - </td> <td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl">
paddlepaddle-1.8.0-cp27-cp27m-win_amd64.whl</a></td> paddlepaddle-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl">
paddlepaddle-1.8.0-cp35-cp35m-win_amd64.whl</a></td> paddlepaddle-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl">
paddlepaddle-1.8.0-cp36-cp36m-win_amd64.whl</a></td> paddlepaddle-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl">
paddlepaddle-1.8.0-cp37-cp37m-win_amd64.whl</a></td> paddlepaddle-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr> </tr>
<tr> <tr>
<td> win_cuda9_cudnn7_openblas </td> <td> win_cuda9_cudnn7_openblas </td>
<td> - </td> <td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp27-cp27m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp27-cp27m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp27-cp27m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp35-cp35m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp35-cp35m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp35-cp35m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp35-cp35m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp36-cp36m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp36-cp36m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp36-cp36m-win_amd64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0/win-open/paddlepaddle_gpu-1.8.0.post97-cp37-cp37m-win_amd64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1/win-open/paddlepaddle_gpu-1.8.1.post97-cp37-cp37m-win_amd64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-win_amd64.whl</a></td> paddlepaddle_gpu-1.8.1-cp37-cp37m-win_amd64.whl</a></td>
</tr> </tr>
<tr> <tr>
<td> mac_cpu </td> <td> mac_cpu </td>
<td> - </td> <td> - </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp27-cp27m-macosx_10_6_intel.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp27-cp27m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp27-cp27m-macosx_10_6_intel.whl</a></td> paddlepaddle-1.8.1-cp27-cp27m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp35-cp35m-macosx_10_6_intel.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp35-cp35m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp35-cp35m-macosx_10_6_intel.whl</a></td> paddlepaddle-1.8.1-cp35-cp35m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp36-cp36m-macosx_10_6_intel.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp36-cp36m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp36-cp36m-macosx_10_6_intel.whl</a></td> paddlepaddle-1.8.1-cp36-cp36m-macosx_10_6_intel.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-cpu-mac/paddlepaddle-1.8.0-cp37-cp37m-macosx_10_6_intel.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-cpu-mac/paddlepaddle-1.8.1-cp37-cp37m-macosx_10_6_intel.whl">
paddlepaddle-1.8.0-cp37-cp37m-macosx_10_6_intel.whl</a></td> paddlepaddle-1.8.1-cp37-cp37m-macosx_10_6_intel.whl</a></td>
</tr> </tr>
</tbody> </tbody>
</table> </table>
...@@ -559,16 +559,16 @@ platform tag: similar to 'linux_x86_64', 'any' ...@@ -559,16 +559,16 @@ platform tag: similar to 'linux_x86_64', 'any'
<tbody> <tbody>
<tr> <tr>
<td> cuda10.1-cudnn7-mkl </td> <td> cuda10.1-cudnn7-mkl </td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27mu-linux_x86_64.whl</a></td> paddlepaddle_gpu-1.8.1-cp27-cp27mu-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp27-cp27m-linux_x86_64.whl</a></td> paddlepaddle_gpu-1.8.1-cp27-cp27m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp35-cp35m-linux_x86_64.whl</a></td> paddlepaddle_gpu-1.8.1-cp35-cp35m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp36-cp36m-linux_x86_64.whl</a></td> paddlepaddle_gpu-1.8.1-cp36-cp36m-linux_x86_64.whl</a></td>
<td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.0-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl"> <td> <a href="https://paddle-wheel.bj.bcebos.com/1.8.1-gpu-cuda10.1-cudnn7-mkl_gcc8.2/paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl">
paddlepaddle_gpu-1.8.0-cp37-cp37m-linux_x86_64.whl</a></td> paddlepaddle_gpu-1.8.1-cp37-cp37m-linux_x86_64.whl</a></td>
</tr> </tr>
</tbody> </tbody>
</table> </table>
......
...@@ -214,20 +214,20 @@ ...@@ -214,20 +214,20 @@
如果您是使用 Python 2,CUDA 9,cuDNN 7.3+,安装GPU版本的命令为: 如果您是使用 Python 2,CUDA 9,cuDNN 7.3+,安装GPU版本的命令为:
:: ::
python -m pip install paddlepaddle-gpu==1.8.0.post97 -i https://mirror.baidu.com/pypi/simple python -m pip install paddlepaddle-gpu==1.8.1.post97 -i https://mirror.baidu.com/pypi/simple
python -m pip install paddlepaddle-gpu==1.8.0.post97 -i https://pypi.tuna.tsinghua.edu.cn/simple python -m pip install paddlepaddle-gpu==1.8.1.post97 -i https://pypi.tuna.tsinghua.edu.cn/simple
如果您是使用 Python 2,CUDA 10.0,cuDNN 7.3+,安装GPU版本的命令为: 如果您是使用 Python 2,CUDA 10.0,cuDNN 7.3+,安装GPU版本的命令为:
:: ::
python -m pip install paddlepaddle-gpu==1.8.0.post107 -i https://mirror.baidu.com/pypi/simple python -m pip install paddlepaddle-gpu==1.8.1.post107 -i https://mirror.baidu.com/pypi/simple
python -m pip install paddlepaddle-gpu==1.8.0.post107 -i https://pypi.tuna.tsinghua.edu.cn/simple python -m pip install paddlepaddle-gpu==1.8.1.post107 -i https://pypi.tuna.tsinghua.edu.cn/simple
如果您是使用 Python 3,请将上述命令中的 **python** 更换为 **python3** 进行安装。 如果您是使用 Python 3,请将上述命令中的 **python** 更换为 **python3** 进行安装。
...@@ -430,12 +430,12 @@ ...@@ -430,12 +430,12 @@
(2). 拉取预安装 PaddlePaddle 的镜像: (2). 拉取预安装 PaddlePaddle 的镜像:
:: ::
docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.0 docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.1
(3). 用镜像构建并进入Docker容器: (3). 用镜像构建并进入Docker容器:
:: ::
docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.0 /bin/bash docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.1 /bin/bash
> --name [Name of container] 设定Docker的名称; > --name [Name of container] 设定Docker的名称;
...@@ -443,7 +443,7 @@ ...@@ -443,7 +443,7 @@
> -v 参数用于宿主机与容器里文件共享;其中dir1为宿主机目录,dir2为挂载到容器内部的目录,用户可以通过设定dir1和dir2自定义自己的挂载目录;例如:$PWD:/paddle 指定将宿主机的当前路径(Linux中PWD变量会展开为当前路径的绝对路径)挂载到容器内部的 /paddle 目录; > -v 参数用于宿主机与容器里文件共享;其中dir1为宿主机目录,dir2为挂载到容器内部的目录,用户可以通过设定dir1和dir2自定义自己的挂载目录;例如:$PWD:/paddle 指定将宿主机的当前路径(Linux中PWD变量会展开为当前路径的绝对路径)挂载到容器内部的 /paddle 目录;
> hub.baidubce.com/paddlepaddle/paddle:1.8.0 是需要使用的image名称;/bin/bash是在Docker中要执行的命令 > hub.baidubce.com/paddlepaddle/paddle:1.8.1 是需要使用的image名称;/bin/bash是在Docker中要执行的命令
2. **GPU 版本** 2. **GPU 版本**
...@@ -471,12 +471,12 @@ ...@@ -471,12 +471,12 @@
(2). 拉取支持 CUDA 10.0 , cuDNN 7.3+ 预安装 PaddlePaddle 的镜像: (2). 拉取支持 CUDA 10.0 , cuDNN 7.3+ 预安装 PaddlePaddle 的镜像:
:: ::
nvidia-docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.0-gpu-cuda10.0-cudnn7 nvidia-docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.1-gpu-cuda10.0-cudnn7
(3). 用镜像构建并进入Docker容器: (3). 用镜像构建并进入Docker容器:
:: ::
nvidia-docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.0-gpu-cuda10.0-cudnn7 /bin/bash nvidia-docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.1-gpu-cuda10.0-cudnn7 /bin/bash
> --name [Name of container] 设定Docker的名称; > --name [Name of container] 设定Docker的名称;
...@@ -484,7 +484,7 @@ ...@@ -484,7 +484,7 @@
> -v 参数用于宿主机与容器里文件共享;其中dir1为宿主机目录,dir2为挂载到容器内部的目录,用户可以通过设定dir1和dir2自定义自己的挂载目录;例如:$PWD:/paddle 指定将宿主机的当前路径(Linux中PWD变量会展开为当前路径的绝对路径)挂载到容器内部的 /paddle 目录; > -v 参数用于宿主机与容器里文件共享;其中dir1为宿主机目录,dir2为挂载到容器内部的目录,用户可以通过设定dir1和dir2自定义自己的挂载目录;例如:$PWD:/paddle 指定将宿主机的当前路径(Linux中PWD变量会展开为当前路径的绝对路径)挂载到容器内部的 /paddle 目录;
> hub.baidubce.com/paddlepaddle/paddle:1.8.0-gpu-cuda10.0-cudnn7 是需要使用的image名称;/bin/bash是在Docker中要执行的命令 > hub.baidubce.com/paddlepaddle/paddle:1.8.1-gpu-cuda10.0-cudnn7 是需要使用的image名称;/bin/bash是在Docker中要执行的命令
或如果您需要支持 **CUDA 9** 的版本,将上述命令的 **cuda10.0** 替换成 **cuda9.0** 即可 或如果您需要支持 **CUDA 9** 的版本,将上述命令的 **cuda10.0** 替换成 **cuda9.0** 即可
...@@ -492,7 +492,7 @@ ...@@ -492,7 +492,7 @@
:: ::
docker run --name paddle -it -v dir1:dir2 paddlepaddle/paddle:1.8.0 /bin/bash docker run --name paddle -it -v dir1:dir2 paddlepaddle/paddle:1.8.1 /bin/bash
> --name [Name of container] 设定Docker的名称; > --name [Name of container] 设定Docker的名称;
...@@ -500,7 +500,7 @@ ...@@ -500,7 +500,7 @@
> -v 参数用于宿主机与容器里文件共享;其中dir1为宿主机目录,dir2为挂载到容器内部的目录,用户可以通过设定dir1和dir2自定义自己的挂载目录;例如:$PWD:/paddle 指定将宿主机的当前路径(Linux中PWD变量会展开为当前路径的绝对路径)挂载到容器内部的 /paddle 目录; > -v 参数用于宿主机与容器里文件共享;其中dir1为宿主机目录,dir2为挂载到容器内部的目录,用户可以通过设定dir1和dir2自定义自己的挂载目录;例如:$PWD:/paddle 指定将宿主机的当前路径(Linux中PWD变量会展开为当前路径的绝对路径)挂载到容器内部的 /paddle 目录;
> paddlepaddle/paddle:1.8.0 是需要使用的image名称;/bin/bash是在Docker中要执行的命令 > paddlepaddle/paddle:1.8.1 是需要使用的image名称;/bin/bash是在Docker中要执行的命令
4. 验证安装 4. 验证安装
......
...@@ -216,20 +216,20 @@ This section describes how to use pip to install. ...@@ -216,20 +216,20 @@ This section describes how to use pip to install.
If you are using Python 2, CUDA 9, and cuDNN 7.3+, the command to install the GPU version is: If you are using Python 2, CUDA 9, and cuDNN 7.3+, the command to install the GPU version is:
:: ::
python -m pip install paddlepaddle-gpu==1.8.0.post97 -i https://mirror.baidu.com/pypi/simple python -m pip install paddlepaddle-gpu==1.8.1.post97 -i https://mirror.baidu.com/pypi/simple
or or
python -m pip install paddlepaddle-gpu==1.8.0.post97 -i https://pypi.tuna.tsinghua.edu.cn/simple python -m pip install paddlepaddle-gpu==1.8.1.post97 -i https://pypi.tuna.tsinghua.edu.cn/simple
If you are using Python 2, CUDA 10.0, and cuDNN 7.3+, the command to install the GPU version is: If you are using Python 2, CUDA 10.0, and cuDNN 7.3+, the command to install the GPU version is:
:: ::
python -m pip install paddlepaddle-gpu==1.8.0.post107 -i https://mirror.baidu.com/pypi/simple python -m pip install paddlepaddle-gpu==1.8.1.post107 -i https://mirror.baidu.com/pypi/simple
or or
python -m pip install paddlepaddle-gpu==1.8.0.post107 -i https://pypi.tuna.tsinghua.edu.cn/simple python -m pip install paddlepaddle-gpu==1.8.1.post107 -i https://pypi.tuna.tsinghua.edu.cn/simple
If you are using Python 3, replace **python** in the above commands with **python3** to install. If you are using Python 3, replace **python** in the above commands with **python3** to install.
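For example, with Python 3 the last command above becomes:

::

	python3 -m pip install paddlepaddle-gpu==1.8.1.post107 -i https://pypi.tuna.tsinghua.edu.cn/simple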
...@@ -437,12 +437,12 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y ...@@ -437,12 +437,12 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y
(2). Pull the image with PaddlePaddle preinstalled: (2). Pull the image with PaddlePaddle preinstalled:
:: ::
docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.0 docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.1
(3). Use the image to build and enter the Docker container: (3). Use the image to build and enter the Docker container:
:: ::
docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.0 /bin/bash docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.1 /bin/bash
> --name [Name of container] sets the name of the Docker container; > --name [Name of container] sets the name of the Docker container;
...@@ -450,7 +450,7 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y ...@@ -450,7 +450,7 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y
> -v is used to share files between the host and the container: dir1 is the host directory and dir2 is the directory mounted inside the container. Users can customize the mount by setting dir1 and dir2. For example, $PWD:/paddle mounts the current path of the host (the PWD variable in Linux expands to the absolute path of the current directory) to the /paddle directory inside the container; > -v is used to share files between the host and the container: dir1 is the host directory and dir2 is the directory mounted inside the container. Users can customize the mount by setting dir1 and dir2. For example, $PWD:/paddle mounts the current path of the host (the PWD variable in Linux expands to the absolute path of the current directory) to the /paddle directory inside the container;
> hub.baidubce.com/paddlepaddle/paddle:1.8.0 is the name of the image to use; /bin/bash is the command to be executed in Docker > hub.baidubce.com/paddlepaddle/paddle:1.8.1 is the name of the image to use; /bin/bash is the command to be executed in Docker
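Putting these options together, a concrete invocation that mounts the current host directory to /paddle might look like this (a sketch; adjust the mount paths and container name to your own setup):

::

	docker run --name paddle -it -v $PWD:/paddle hub.baidubce.com/paddlepaddle/paddle:1.8.1 /bin/bash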
2. **GPU version** 2. **GPU version**
...@@ -478,12 +478,12 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y ...@@ -478,12 +478,12 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y
(2). Pull the image with PaddlePaddle preinstalled for CUDA 10.0 and cuDNN 7.3+: (2). Pull the image with PaddlePaddle preinstalled for CUDA 10.0 and cuDNN 7.3+:
:: ::
nvidia-docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.0-gpu-cuda10.0-cudnn7 nvidia-docker pull hub.baidubce.com/paddlepaddle/paddle:1.8.1-gpu-cuda10.0-cudnn7
(3). Use the image to build and enter the docker container: (3). Use the image to build and enter the docker container:
:: ::
nvidia-docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.0-gpu-cuda10.0-cudnn7 /bin/bash nvidia-docker run --name paddle -it -v dir1:dir2 hub.baidubce.com/paddlepaddle/paddle:1.8.1-gpu-cuda10.0-cudnn7 /bin/bash
> --name [Name of container] sets the name of the Docker container; > --name [Name of container] sets the name of the Docker container;
...@@ -491,7 +491,7 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y ...@@ -491,7 +491,7 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y
> -v is used to share files between the host and the container: dir1 is the host directory and dir2 is the directory mounted inside the container. Users can customize the mount by setting dir1 and dir2. For example, $PWD:/paddle mounts the current path of the host (the PWD variable in Linux expands to the absolute path of the current directory) to the /paddle directory inside the container; > -v is used to share files between the host and the container: dir1 is the host directory and dir2 is the directory mounted inside the container. Users can customize the mount by setting dir1 and dir2. For example, $PWD:/paddle mounts the current path of the host (the PWD variable in Linux expands to the absolute path of the current directory) to the /paddle directory inside the container;
> hub.baidubce.com/paddlepaddle/paddle:1.8.0-gpu-cuda10.0-cudnn7 is the name of the image to use; /bin/bash is the command to be executed in Docker > hub.baidubce.com/paddlepaddle/paddle:1.8.1-gpu-cuda10.0-cudnn7 is the name of the image to use; /bin/bash is the command to be executed in Docker
Or, if you need the version supporting **CUDA 9**, replace **cuda10.0** in the above command with **cuda9.0** Or, if you need the version supporting **CUDA 9**, replace **cuda10.0** in the above command with **cuda9.0**
...@@ -499,7 +499,7 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y ...@@ -499,7 +499,7 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y
:: ::
docker run --name paddle -it -v dir1:dir2 paddlepaddle/paddle:1.8.0 /bin/bash docker run --name paddle -it -v dir1:dir2 paddlepaddle/paddle:1.8.1 /bin/bash
> --name [Name of container] sets the name of the Docker container; > --name [Name of container] sets the name of the Docker container;
...@@ -507,7 +507,7 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y ...@@ -507,7 +507,7 @@ If you want to use `docker <https://www.docker.com>`_ to install PaddlePaddle, y
> -v is used to share files between the host and the container: dir1 is the host directory and dir2 is the directory mounted inside the container. Users can customize the mount by setting dir1 and dir2. For example, $PWD:/paddle mounts the current path of the host (the PWD variable in Linux expands to the absolute path of the current directory) to the /paddle directory inside the container; > -v is used to share files between the host and the container: dir1 is the host directory and dir2 is the directory mounted inside the container. Users can customize the mount by setting dir1 and dir2. For example, $PWD:/paddle mounts the current path of the host (the PWD variable in Linux expands to the absolute path of the current directory) to the /paddle directory inside the container;
> paddlepaddle/paddle:1.8.0 is the name of the image to use; /bin/bash is the command to be executed in Docker > paddlepaddle/paddle:1.8.1 is the name of the image to use; /bin/bash is the command to be executed in Docker
4. Verify installation 4. Verify installation
......
...@@ -388,8 +388,8 @@ The data introduction section mentions the payment of the CoNLL 2005 training se ...@@ -388,8 +388,8 @@ The data introduction section mentions the payment of the CoNLL 2005 training se
crf_decode = fluid.layers.crf_decoding( crf_decode = fluid.layers.crf_decoding(
input=feature_out, param_attr=fluid.ParamAttr(name='crfw')) input=feature_out, param_attr=fluid.ParamAttr(name='crfw'))
train_data = paddle.batch( train_data = fluid.io.batch(
paddle.reader.shuffle( fluid.io.shuffle(
paddle.dataset.conll05.test(), buf_size=8192), paddle.dataset.conll05.test(), buf_size=8192),
batch_size=BATCH_SIZE) batch_size=BATCH_SIZE)
......
...@@ -143,8 +143,8 @@ def train(use_cuda, save_dirname=None, is_local=True): ...@@ -143,8 +143,8 @@ def train(use_cuda, save_dirname=None, is_local=True):
# define network topology # define network topology
feature_out = db_lstm(**locals()) feature_out = db_lstm(**locals())
target = fluid.layers.data( target = fluid.data(
name='target', shape=[1], dtype='int64', lod_level=1) name='target', shape=[None, 1], dtype='int64', lod_level=1)
crf_cost = fluid.layers.linear_chain_crf( crf_cost = fluid.layers.linear_chain_crf(
input=feature_out, input=feature_out,
label=target, label=target,
...@@ -165,11 +165,11 @@ def train(use_cuda, save_dirname=None, is_local=True): ...@@ -165,11 +165,11 @@ def train(use_cuda, save_dirname=None, is_local=True):
input=feature_out, param_attr=fluid.ParamAttr(name='crfw')) input=feature_out, param_attr=fluid.ParamAttr(name='crfw'))
if args.enable_ce: if args.enable_ce:
train_data = paddle.batch( train_data = fluid.io.batch(
paddle.dataset.conll05.test(), batch_size=BATCH_SIZE) paddle.dataset.conll05.test(), batch_size=BATCH_SIZE)
else: else:
train_data = paddle.batch( train_data = fluid.io.batch(
paddle.reader.shuffle( fluid.io.shuffle(
paddle.dataset.conll05.test(), buf_size=8192), paddle.dataset.conll05.test(), buf_size=8192),
batch_size=BATCH_SIZE) batch_size=BATCH_SIZE)
......
...@@ -289,15 +289,15 @@ def optimizer_func(): ...@@ -289,15 +289,15 @@ def optimizer_func():
- Now we can start training. This version is much simpler than before. We have ready-made training and test sets: `paddle.dataset.imikolov.train()` and `paddle.dataset.imikolov.test()`. Both return a reader. In PaddlePaddle, a reader is a Python function that returns the next piece of data each time it is called; it is a Python generator. - Now we can start training. This version is much simpler than before. We have ready-made training and test sets: `paddle.dataset.imikolov.train()` and `paddle.dataset.imikolov.test()`. Both return a reader. In PaddlePaddle, a reader is a Python function that returns the next piece of data each time it is called; it is a Python generator.
`paddle.batch` takes a reader and returns a batched reader. We can also output the training results of each step and batch during the training process. `fluid.io.batch` takes a reader and returns a batched reader. We can also output the training results of each step and batch during the training process.
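To make the reader and batching idea concrete, here is a minimal sketch with made-up data (not part of the tutorial), using the same `fluid.io.shuffle` and `fluid.io.batch` helpers that the updated code below relies on:

```python
import paddle.fluid as fluid

def toy_reader():
    # A reader creator: returns a zero-argument function whose generator
    # yields one sample (a tuple of fields) per iteration. Data is made up.
    def reader():
        for i in range(8):
            yield (i, i * 2)
    return reader

# fluid.io.shuffle buffers and reorders samples; fluid.io.batch groups
# them into mini-batches (here, lists of 4 samples).
batched_reader = fluid.io.batch(
    fluid.io.shuffle(toy_reader(), buf_size=8), batch_size=4)

for mini_batch in batched_reader():
    print(mini_batch)
```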
```python ```python
def train(if_use_cuda, params_dirname, is_sparse=True): def train(if_use_cuda, params_dirname, is_sparse=True):
place = fluid.CUDAPlace(0) if if_use_cuda else fluid.CPUPlace() place = fluid.CUDAPlace(0) if if_use_cuda else fluid.CPUPlace()
train_reader = paddle.batch( train_reader = fluid.io.batch(
paddle.dataset.imikolov.train(word_dict, N), BATCH_SIZE) paddle.dataset.imikolov.train(word_dict, N), BATCH_SIZE)
test_reader = paddle.batch( test_reader = fluid.io.batch(
paddle.dataset.imikolov.test(word_dict, N), BATCH_SIZE) paddle.dataset.imikolov.test(word_dict, N), BATCH_SIZE)
first_word = fluid.data(name='firstw', shape=[None, 1], dtype='int64') first_word = fluid.data(name='firstw', shape=[None, 1], dtype='int64')
......
...@@ -98,9 +98,9 @@ def optimizer_func(): ...@@ -98,9 +98,9 @@ def optimizer_func():
def train(if_use_cuda, params_dirname, is_sparse=True): def train(if_use_cuda, params_dirname, is_sparse=True):
place = fluid.CUDAPlace(0) if if_use_cuda else fluid.CPUPlace() place = fluid.CUDAPlace(0) if if_use_cuda else fluid.CPUPlace()
train_reader = paddle.batch( train_reader = fluid.io.batch(
paddle.dataset.imikolov.train(word_dict, N), BATCH_SIZE) paddle.dataset.imikolov.train(word_dict, N), BATCH_SIZE)
test_reader = paddle.batch( test_reader = fluid.io.batch(
paddle.dataset.imikolov.test(word_dict, N), BATCH_SIZE) paddle.dataset.imikolov.test(word_dict, N), BATCH_SIZE)
first_word = fluid.data(name='firstw', shape=[None, 1], dtype='int64') first_word = fluid.data(name='firstw', shape=[None, 1], dtype='int64')
......