Unverified · Commit 982dabe2, authored by Xin Pan, committed via GitHub

Merge pull request #11866 from panyx0718/move_func

Move some v2 code to a legacy directory.
@@ -196,7 +196,7 @@ include(inference_lib) # add paddle fluid inference libraries
include_directories("${PADDLE_SOURCE_DIR}")
include_directories("${PADDLE_SOURCE_DIR}/paddle/legacy/cuda/include")
include_directories("${CMAKE_CURRENT_BINARY_DIR}/proto")
include_directories("${CMAKE_CURRENT_BINARY_DIR}/go/pserver/client/c")
@@ -240,7 +240,7 @@ add_subdirectory(proto)
if(NOT MOBILE_INFERENCE AND NOT WITH_FLUID_ONLY)
# "add_subdirectory(go)" should be placed after the following loine, # "add_subdirectory(go)" should be placed after the following loine,
# because it depends on paddle/optimizer.
add_subdirectory(paddle/legacy/optimizer)
endif()
# "add_subdirectory(paddle)" and "add_subdirectory(python)" should be
......
@@ -159,4 +159,4 @@ This will enable VLOG messages generated by `buddy_allocator.{h,cc}` and in the
- verbose level 1: [framework](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/framework)
- verbose level 3: [operators](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/operators)
- verbose level 5: [memory](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/memory), [platform](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/platform)
- verbose level 7: [math](https://github.com/PaddlePaddle/Paddle/tree/develop/paddle/legacy/math)
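For instance, a message at the operator level (verbose level 3) can be emitted with glog's `VLOG` macro. The snippet below is a minimal illustration only; the function name and message are hypothetical and not taken from the Paddle sources:

```cpp
#include <glog/logging.h>

void runElementwiseAddKernel() {
  // Printed only when the verbose level is at least 3,
  // e.g. run the binary with GLOG_v=3 (or --v=3 if gflags is initialized).
  VLOG(3) << "launching elementwise_add kernel";
}
```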
@@ -65,7 +65,7 @@ paddle_error paddle_matrix_get_shape(paddle_matrix matrix,
The C interface is then implemented in C++, in the file `paddle_matrix.cpp`:
```cpp
#include "paddle/legacy/math/matrix.h"
extern "C"
paddle_error paddle_matrix_shape(paddle_matrix matrix,
                                 uint64_t *width,
......
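For context, a hedged sketch of how the truncated wrapper above might be completed. The body is illustrative only: the handle cast, the `kPD_*` error-code names, and the use of `getWidth()`/`getHeight()` are assumptions about the legacy API, not the actual contents of `paddle_matrix.cpp`.

```cpp
#include "paddle/legacy/math/matrix.h"

extern "C" paddle_error paddle_matrix_shape(paddle_matrix matrix,
                                            uint64_t *width,
                                            uint64_t *height) {
  if (matrix == nullptr || width == nullptr || height == nullptr) {
    return kPD_NULLPTR;  // assumed error-code name
  }
  // Assumption: the opaque C handle directly wraps a paddle::Matrix*.
  auto *mat = reinterpret_cast<paddle::Matrix *>(matrix);
  *width = mat->getWidth();
  *height = mat->getHeight();
  return kPD_NO_ERROR;  // assumed error-code name
}
```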
@@ -58,7 +58,7 @@ PaddlePaddle's base Layer class can compute the derivatives above automatically.
Implement the C++ Class
===================
The C++ class of a layer implements its initialization, forward, and backward computation. The fully connected layer is implemented in :code:`paddle/legacy/gserver/layers/FullyConnectedLayer.h` and :code:`paddle/legacy/gserver/layers/FullyConnectedLayer.cpp`. A simplified version of the code is shown here.
The class must derive from the base class :code:`paddle::Layer` and override the following virtual functions of the base class:
@@ -153,7 +153,7 @@ PaddlePaddle's base Layer class can compute the derivatives above automatically.
- Every layer must call :code:`Layer::forward(passType);` at the beginning of its :code:`forward` function.
- Next, allocate memory for the output with :code:`reserveOutput(batchSize, size);`. This step is necessary because batches may have different sizes; :code:`reserveOutput` resizes the output accordingly. For efficiency, new memory is allocated when the matrix needs to grow, and the existing memory block is reused when it needs to shrink.
- Then compute :math:`\sum_i W_i x + b` with matrix operations. :code:`getInput(i).value` returns the matrix of the i-th input. Each input is a :math:`batchSize \times dim` matrix in which every row represents a single input of the batch. For the full set of supported matrix operations, see :code:`paddle/legacy/math/Matrix.h` and :code:`paddle/legacy/math/BaseMatrix.h`.
- Finally, apply the activation with :code:`forwardActivation();`, which automatically performs the activation declared in the network configuration.
@@ -262,7 +262,7 @@ PaddlePaddle's base Layer class can compute the derivatives above automatically.
REGISTER_LAYER(fc, FullyConnectedLayer);
}
If the :code:`cpp` file is placed under :code:`paddle/legacy/gserver/layers`, it is automatically added to the compilation list.
Write a Gradient Check Unit Test
@@ -270,7 +270,7 @@ PaddlePaddle's base Layer class can compute the derivatives above automatically.
Writing a gradient check unit test is a relatively simple way to verify that a newly implemented layer is correct. The test uses the finite difference method to check a layer's gradient: apply a small perturbation :math:`\Delta x` to the input, observe the resulting change :math:`\Delta y` in the output, and compute the gradient as :math:`\frac{\Delta y}{\Delta x}`. This gradient is then compared with the gradient produced by the :code:`backward` function to confirm that the gradient computation is correct. Note that the gradient check only validates the gradient computation; it does not guarantee that the :code:`forward` and :code:`backward` implementations themselves are correct. You need more sophisticated unit tests to make sure the layer is implemented correctly.
All gradient check unit tests live in :code:`paddle/legacy/gserver/tests/test_LayerGrad.cpp`. We recommend putting the tests for a new layer into a new file. The gradient check unit test of the fully connected layer is listed below. It consists of the following steps:
+ Create the layer configuration. A layer configuration can include the following items:
- the size of the bias parameter (4096 in this example)
@@ -322,7 +322,7 @@ PaddlePaddle's base Layer class can compute the derivatives above automatically.
}
}
If you create a new file for the test, such as :code:`paddle/legacy/gserver/tests/testFCGrad.cpp`, you need to add it to :code:`paddle/legacy/gserver/tests/CMakeLists.txt`. An example is given below. All unit tests run when you execute :code:`make tests`. Note that some layers may need high precision for the gradient check to pass; set :code:`WITH_DOUBLE` to `ON` when configuring cmake in that case.
.. code-block:: bash
......
@@ -58,7 +58,7 @@ Finally we can use chain rule to calculate :math:`\frac{\partial z}{\partial x}`
Implement C++ Class
===================
The C++ class of the layer implements the initialization, forward, and backward part of the layer. The fully connected layer is at :code:`paddle/legacy/gserver/layers/FullyConnectedLayer.h` and :code:`paddle/legacy/gserver/layers/FullyConnectedLayer.cpp`. We list a simplified version of the code below.
It needs to derive from the base class :code:`paddle::Layer`, and it needs to override the following functions:
@@ -154,7 +154,7 @@ The implementation of the forward part has the following steps.
- Every layer must call :code:`Layer::forward(passType);` at the beginning of its :code:`forward` function.
- Then it allocates memory for the output using :code:`reserveOutput(batchSize, size);`. This step is necessary because we support batches with different batch sizes. :code:`reserveOutput` will change the size of the output accordingly. For the sake of efficiency, we allocate new memory if we need to expand the matrix, but we reuse the existing memory block if we need to shrink the matrix.
- Then it computes :math:`\sum_i W_i x + b` using matrix operations. :code:`getInput(i).value` retrieves the matrix of the i-th input. Each input is a :math:`batchSize \times dim` matrix, where each row represents a single input in a batch. For a complete list of supported matrix operations, please refer to :code:`paddle/legacy/math/Matrix.h` and :code:`paddle/legacy/math/BaseMatrix.h` (a standalone sketch of this computation follows this list).
- Finally it applies the activation function using :code:`forwardActivation();`. It automatically applies the activation function specified in the network configuration.
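To make the forward computation concrete, here is a small self-contained C++ sketch of the :math:`\sum_i W_i x + b` step for a single sample, written against plain arrays. It only illustrates the arithmetic; it deliberately does not use the Paddle :code:`Matrix` API, and every name in it is hypothetical.

.. code-block:: cpp

    #include <cstddef>
    #include <vector>

    // out = sum_i W_i * x_i + b for one sample. Each W_i is stored row-major
    // as a (size x dim_i) matrix, x_i has dim_i elements, and b and out have
    // `size` elements.
    std::vector<float> fcForward(const std::vector<std::vector<float>>& weights,
                                 const std::vector<std::vector<float>>& inputs,
                                 const std::vector<float>& bias) {
      std::vector<float> out(bias);  // start from the bias
      for (std::size_t i = 0; i < inputs.size(); ++i) {
        const auto& w = weights[i];
        const auto& x = inputs[i];
        const std::size_t dim = x.size();
        for (std::size_t r = 0; r < out.size(); ++r) {
          float acc = 0.f;
          for (std::size_t c = 0; c < dim; ++c) {
            acc += w[r * dim + c] * x[c];  // row r of W_i times x_i
          }
          out[r] += acc;
        }
      }
      // An activation (e.g. tanh) would be applied to `out` afterwards.
      return out;
    }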
@@ -263,7 +263,7 @@ Finally, you can use :code:`REGISTER_LAYER(fc, FullyConnectedLayer);` to registe
REGISTER_LAYER(fc, FullyConnectedLayer);
}
If the :code:`cpp` file is put into :code:`paddle/legacy/gserver/layers`, it will be automatically added to the compilation list.
Write Gradient Check Unit Test
@@ -271,7 +271,7 @@ Write Gradient Check Unit Test
An easy way to verify the correctness of a new layer's implementation is to write a gradient check unit test. A gradient check unit test uses the finite difference method to verify the gradient of a layer: it modifies the input with a small perturbation :math:`\Delta x`, observes the change of the output :math:`\Delta y`, and computes the gradient as :math:`\frac{\Delta y}{\Delta x}` (sketched generically below). This gradient is compared with the gradient computed by the :code:`backward` function of the layer to ensure the correctness of the gradient computation. Notice that the gradient check only tests the correctness of the gradient computation; it does not necessarily guarantee that the :code:`forward` and :code:`backward` functions are implemented correctly. You need to write more sophisticated unit tests to make sure your layer is implemented correctly.
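As a generic illustration of the finite-difference idea (this is not Paddle's gradient-check helper, just the bare recipe applied to an arbitrary scalar function):

.. code-block:: cpp

    #include <cmath>
    #include <cstdio>
    #include <functional>

    // Central-difference approximation of df/dx at x.
    double numericGradient(const std::function<double(double)>& f, double x,
                           double eps = 1e-5) {
      return (f(x + eps) - f(x - eps)) / (2.0 * eps);
    }

    int main() {
      auto f = [](double x) { return x * x * x; };  // analytic gradient: 3x^2
      double x = 2.0;
      double analytic = 3.0 * x * x;
      double numeric = numericGradient(f, x);
      // The two values should agree up to the finite-difference error.
      std::printf("analytic=%f numeric=%f diff=%g\n", analytic, numeric,
                  std::fabs(analytic - numeric));
      return 0;
    }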
All the gradient check unit tests are located in :code:`paddle/legacy/gserver/tests/test_LayerGrad.cpp`. You are recommended to put your test into a new test file if you are planning to write a new layer. The gradient check unit test of the fully connected layer is listed below. It has the following steps.
+ Create layer configuration. A layer configuration can include the following attributes:
- size of the bias parameter. (4096 in our example)
@@ -323,7 +323,7 @@ All the gradient check unit tests are located in :code:`paddle/legacy/gserver/tests/tes
}
}
If you are creating a new file for the test, such as :code:`paddle/legacy/gserver/tests/testFCGrad.cpp`, you need to add the file to :code:`paddle/legacy/gserver/tests/CMakeLists.txt`. An example is given below. All the unit tests will run when you execute the command :code:`make tests`. Notice that some layers might need high accuracy for the gradient check unit tests to work well. You need to set :code:`WITH_DOUBLE` to `ON` when configuring cmake in that case.
.. code-block:: bash
......
@@ -196,6 +196,6 @@ The model parameter file saved by PaddlePaddle consists of a 16-byte header and the network parameters
obj="process",
args={"src_dict_path": src_dict_path})
The complete source code is available in the `sequence_recurrent <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/legacy/gserver/tests/sequence_recurrent.py>`_ example.
@@ -50,12 +50,12 @@ GPUs, in addition, need high parallelism to reach their full capability, which is exactly why they
**nvprof** is the Nvidia profiler, and **nvvp** is the GUI-based Nvidia visual profiler.
This tutorial mainly covers nvprof and nvvp.
:code:`test_GpuProfiler` from the :code:`paddle/legacy/math/tests` directory will be used to demonstrate the profilers above.
.. literalinclude:: ../../../../paddle/legacy/math/tests/test_GpuProfiler.cpp
:language: c++
:lines: 137-151
:linenos:
@@ -83,7 +83,7 @@ program crashes when CPU version of PaddlePaddle invokes them.
1. Add the :code:`REGISTER_TIMER_INFO` and :code:`printAllStatus` calls (see the highlighted lines).
.. literalinclude:: ../../../../paddle/legacy/math/tests/test_GpuProfiler.cpp
:language: c++
:lines: 137-151
:emphasize-lines: 8-12,14
@@ -101,8 +101,8 @@ program crashes when CPU version of PaddlePaddle invokes them.
.. code-block:: bash
:emphasize-lines: 1,12-15
> ./paddle/legacy/math/tests/test_GpuProfiler
I1117 11:13:42.313065 2522362816 Util.cpp:155] commandline: ./paddle/legacy/math/tests/test_GpuProfiler
I1117 11:13:42.845065 2522362816 Util.cpp:130] Calling runInitFunctions
I1117 11:13:42.845208 2522362816 Util.cpp:143] Call runInitFunctions done.
[==========] Running 1 test from 1 test case.
@@ -130,7 +130,7 @@ The nvprof tool
1. Add the :code:`REGISTER_GPU_PROFILER` call to the code (see the highlighted lines).
.. literalinclude:: ../../../../paddle/legacy/math/tests/test_GpuProfiler.cpp
:language: c++
:lines: 137-151
:emphasize-lines: 6-7
@@ -147,13 +147,13 @@ The nvprof tool
.. code-block:: bash
nvprof ./paddle/legacy/math/tests/test_GpuProfiler
Then you will get profiling results like the following:
.. code-block:: bash
==78544== Profiling application: ./paddle/legacy/math/tests/test_GpuProfiler
==78544== Profiling result:
Time(%) Time Calls Avg Min Max Name
27.60% 9.6305ms 5 1.9261ms 3.4560us 6.4035ms [CUDA memcpy HtoD]
......
@@ -51,10 +51,10 @@ For general GPU profiling, a bunch of tools are provided from both NVIDIA and th
**nvprof** is Nvidia profiler and **nvvp** is (GUI based) Nvidia visual profiler.
In this tutorial, we will focus on nvprof and nvvp.
:code:`test_GpuProfiler` from the :code:`paddle/legacy/math/tests` directory will be used to evaluate
the above profilers.
.. literalinclude:: ../../../../paddle/legacy/math/tests/test_GpuProfiler.cpp
:language: c++
:lines: 137-151
:linenos:
@@ -80,7 +80,7 @@ As a simple example, consider the following:
1. Add :code:`REGISTER_TIMER_INFO` and :code:`printAllStatus` functions (see the emphasized lines; a hedged usage sketch follows the listing).
.. literalinclude:: ../../../../paddle/legacy/math/tests/test_GpuProfiler.cpp
:language: c++
:lines: 137-151
:emphasize-lines: 8-12,14
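For orientation, a minimal sketch of how such instrumentation is typically arranged. The macro name and the :code:`globalStat` object are taken from the step above and are assumed to live in :code:`paddle/utils/Stat.h`; the exact argument list and scoping behaviour should be checked against that header.

.. code-block:: cpp

    #include "paddle/utils/Stat.h"  // assumed location of the timer macros

    void profiledSection() {
      {
        // Assumed two-argument form: a timer name plus a free-form info string.
        REGISTER_TIMER_INFO("fcForward", "batchSize=128");
        // ... the code to be timed goes here ...
      }  // assumption: the timer stops when it goes out of scope
      globalStat.printAllStatus();  // dump the accumulated timer statistics
    }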
@@ -98,8 +98,8 @@ As a simple example, consider the following:
.. code-block:: bash
:emphasize-lines: 1,12-15
> ./paddle/legacy/math/tests/test_GpuProfiler
I1117 11:13:42.313065 2522362816 Util.cpp:155] commandline: ./paddle/legacy/math/tests/test_GpuProfiler
I1117 11:13:42.845065 2522362816 Util.cpp:130] Calling runInitFunctions
I1117 11:13:42.845208 2522362816 Util.cpp:143] Call runInitFunctions done.
[==========] Running 1 test from 1 test case.
@@ -127,7 +127,7 @@ To use this command line profiler **nvprof**, you can simply issue the following
1. Add the :code:`REGISTER_GPU_PROFILER` function (see the emphasized lines; a hedged sketch follows the listing).
.. literalinclude:: ../../../../paddle/legacy/math/tests/test_GpuProfiler.cpp
:language: c++
:lines: 137-151
:emphasize-lines: 6-7
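Again only as a hedged sketch: the macro is named in the step above, but its argument list here is an assumption and should be checked against :code:`paddle/utils/Stat.h`.

.. code-block:: cpp

    #include "paddle/utils/Stat.h"  // assumed location of the profiler macro

    void gpuProfiledSection() {
      // Assumed form: a profiler name plus a free-form info string; the GPU
      // kernels launched after this point are attributed to this region when
      // the test runs under nvprof.
      REGISTER_GPU_PROFILER("testBilinearFwdBwd", "numSamples = 10");
      // ... GPU code to be profiled goes here ...
    }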
@@ -144,13 +144,13 @@ To use this command line profiler **nvprof**, you can simply issue the following
.. code-block:: bash
nvprof ./paddle/legacy/math/tests/test_GpuProfiler
Then, you can get the following profiling result:
.. code-block:: bash
==78544== Profiling application: ./paddle/legacy/math/tests/test_GpuProfiler
==78544== Profiling result:
Time(%) Time Calls Avg Min Max Name
27.60% 9.6305ms 5 1.9261ms 3.4560us 6.4035ms [CUDA memcpy HtoD]
......
@@ -4,7 +4,7 @@
API Comparison Between Single-layer and Hierarchical RNNs
#####################
This article uses PaddlePaddle's hierarchical RNN unit tests as examples: several pairs of functionally identical models, one configured with a single-layer RNN and one with a hierarchical RNN, illustrate how to use hierarchical RNNs. All of the examples only introduce the hierarchical RNN API; they do not use hierarchical RNNs to solve real problems. For the use of hierarchical RNNs on concrete problems, see :ref:`algo_hrnn_demo`. The unit test file used by the examples is `test_RecurrentGradientMachine.cpp <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/legacy/gserver/tests/test_RecurrentGradientMachine.cpp>`_.
Example 1: Hierarchical RNN without Memory between Subsequences
================================
@@ -13,8 +13,8 @@
In this example, both the single-layer and the hierarchical RNN configuration use an LSTM as an encoder to compress each word-segmented sentence into a vector. The difference is that the hierarchical RNN uses a two-level sequence model and treats multiple sentences as a whole, compressing them with the encoder simultaneously. The two are semantically equivalent. This pair of semantically identical configurations is:
* Single-layer RNN: `sequence_layer_group.conf <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/legacy/gserver/tests/sequence_layer_group.conf>`_
* Hierarchical RNN: `sequence_nest_layer_group.conf <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/legacy/gserver/tests/sequence_nest_layer_group.conf>`_
Reading hierarchical sequence data
@@ -24,18 +24,18 @@
- The original data in this example contains 10 samples. Each sample has two parts: a label (always 2 here) and a word-segmented sentence. This data is also used directly by the single-layer RNN network.
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/Sequence/tour_train_wdseg
:language: text
- The hierarchical sequence data contains 4 samples. Samples are separated by blank lines, and the content is exactly the same as the original data. For the hierarchical LSTM, however, the first sample encodes two sentences into two vectors simultaneously. The numbers of sentences processed together in the four samples are :code:`[2, 3, 2, 3]`.
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/Sequence/tour_train_wdseg.nest
:language: text
Next, for the two different input data formats, the corresponding DataProviders compare as follows (`sequenceGen.py <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/legacy/gserver/tests/sequenceGen.py>`_):
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequenceGen.py
:language: python
:lines: 21-39
:linenos:
@@ -47,7 +47,7 @@
- words is the array of word-table indices for each sentence in the original data. Its type is integer_value_sequence, i.e. an integer array. words is the single-level time series in this data.
- label is the classification label of each sentence, of type integer_value.
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequenceGen.py
:language: python
:lines: 42-71
:linenos:
@@ -64,7 +64,7 @@
First, look at the single-layer RNN configuration. Lines 9-15 (highlighted) are the single-layer RNN sequence code. It uses PaddlePaddle's predefined RNN processing function, in which the RNN passes each time step through an LSTM network.
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequence_layer_group.conf
:language: python
:lines: 38-63
:linenos:
@@ -85,7 +85,7 @@
* At this point, :code:`lstm_last` produces the same result as :code:`lstm_last` in the single-layer RNN configuration.
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequence_nest_layer_group.conf
:language: python
:lines: 38-64
:linenos:
@@ -107,7 +107,7 @@
- Single-layer RNN: uses a simple recurrent_group. At each time step, the current input y and the previous time step's output rnn_state go through a fully connected layer.
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequence_rnn.conf
:language: python
:lines: 36-48
@@ -116,7 +116,7 @@
- The inner_step recurrent_group of the inner layer is almost the same as in the single-level sequence, except for boot_layer=outer_mem, which uses the outer layer's outer_mem as the initial state of the inner memory. In the outer outer_step, outer_mem is the last vector of a subsequence; in other words, the whole hierarchical group uses the last vector of the previous subsequence as the initial state of the next subsequence's memory.
- From the point of view of the input data, the sentences of the single-layer and hierarchical sequences are the same; the hierarchical sequence merely splits them further into subsequences. In the hierarchical configuration, therefore, the last element of the previous subsequence must be passed as boot_layer to the memory of the next subsequence, so that it stays consistent with "every time step uses the output of the previous time step" in the single-layer configuration.
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequence_nest_rnn.conf
:language: python
:lines: 39-66
@@ -134,7 +134,7 @@
**Unequal-length inputs** means that the multiple input sequences of a recurrent_group may have subsequences of different lengths at each time step. The output sequence, however, must be declared consistent with the sequence information of one particular input. :red:`targetInlink` specifies which input the output sequence information follows; by default it is the first input.
The configurations of Example 3 are the `single-layer unequal-length RNN <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/legacy/gserver/tests/sequence_rnn_multi_unequalength_inputs.py>`_ and the `hierarchical unequal-length RNN <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/legacy/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py>`_.
In Example 3 the data for the single-layer and the hierarchical RNN is exactly the same.
@@ -152,14 +152,14 @@
* Single-layer RNN:
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequence_rnn_multi_unequalength_inputs.py
:language: python
:lines: 42-59
:linenos:
* Hierarchical RNN:
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py
:language: python
:lines: 41-80
:linenos:
......
@@ -4,7 +4,7 @@
API comparison between RNN and hierarchical RNN
#####################
This article takes PaddlePaddle's hierarchical RNN unit tests as an example. We will use several examples to illustrate the usage of single-layer and hierarchical RNNs. Each example has two model configurations, one for a single-layer RNN and the other for a hierarchical RNN. Although the implementations are different, the two model configurations have the same effect. All of the examples in this article only describe the API interface of the hierarchical RNN; we do not use the hierarchical RNN to solve practical problems. If you want to understand the use of hierarchical RNNs in specific problems, please refer to :ref:`algo_hrnn_demo`. The unit test file used in this article's examples is `test_RecurrentGradientMachine.cpp <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/legacy/gserver/tests/test_RecurrentGradientMachine.cpp>`_.
Example 1: Hierarchical RNN without Memory between subsequences
================================
@@ -13,8 +13,8 @@ The classical case in the hierarchical RNN is to perform sequence operations on
In this example, the network configurations of the single-layer RNN and the hierarchical RNN both use an LSTM as an encoder to compress a word-segmented sentence into a vector. The difference is that the hierarchical RNN treats multiple sentences as a whole and compresses them with the encoder simultaneously. They are completely consistent in their semantic meanings. This pair of semantically identical example configurations is as follows:
* RNN: `sequence_layer_group.conf <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/legacy/gserver/tests/sequence_layer_group.conf>`_
* Hierarchical RNN: `sequence_nest_layer_group.conf <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/legacy/gserver/tests/sequence_nest_layer_group.conf>`_
Reading hierarchical sequence data
@@ -24,18 +24,18 @@ Firstly, the original data in this example is as follows:
- The original data in this example has 10 samples. Each sample includes two components: a label (always 2 here) and a word-segmented sentence. This data is used by the single-layer RNN as well.
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/Sequence/tour_train_wdseg
:language: text
- The data for the hierarchical RNN has 4 samples. Every sample is separated by a blank line, while the content of the data is the same as the original data. For the hierarchical LSTM, however, the first sample will encode two sentences into two vectors simultaneously. The numbers of sentences handled simultaneously by these 4 samples are :code:`[2, 3, 2, 3]` (a standalone sketch of this nesting follows the listing below).
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/Sequence/tour_train_wdseg.nest
:language: text
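To make the two-level structure concrete, here is a small self-contained C++ sketch of how a nested batch like the one above can be flattened into a token array plus two levels of start offsets. It is an illustration only and does not reproduce Paddle's actual :code:`Argument` layout; all names and numbers are hypothetical.

.. code-block:: cpp

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
      // Flattened word ids of 5 sentences, grouped into 2 samples that
      // contain 2 and 3 sentences respectively (cf. the [2, 3, ...] counts).
      std::vector<int> words = {3, 7, 7, 9, 2, 4, 4, 1, 8, 5, 6};
      // Sentence boundaries in tokens (single-level sequence information).
      std::vector<int> seqStart = {0, 3, 5, 8, 9, 11};
      // Sample boundaries in sentences (the second level of nesting).
      std::vector<int> subSeqStart = {0, 2, 5};

      for (std::size_t s = 0; s + 1 < subSeqStart.size(); ++s) {
        std::printf("sample %zu:\n", s);
        for (int q = subSeqStart[s]; q < subSeqStart[s + 1]; ++q) {
          std::printf("  sentence %d:", q);
          for (int t = seqStart[q]; t < seqStart[q + 1]; ++t) {
            std::printf(" %d", words[t]);
          }
          std::printf("\n");
        }
      }
      return 0;
    }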
Secondly, as for these two types of different input data formats, the contrast between the corresponding DataProviders is as follows (`sequenceGen.py <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/legacy/gserver/tests/sequenceGen.py>`_):
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequenceGen.py
:language: python
:lines: 21-39
:linenos:
@@ -47,7 +47,7 @@ Secondly, as for these two types of different input data formats, the contrast o
- "words" is a list of word table indices corresponding to each word in the sentence in the original data. Its data type is integer_value_sequence, that is, an integer list. So, "words" is a single-layer time series in the data.
- "label" is the categorical label of each sentence, whose data type is integer_value.
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequenceGen.py
:language: python
:lines: 42-71
:linenos:
@@ -64,7 +64,7 @@ Model configuration
Firstly, let's look at the configuration of the single-layer RNN. The highlighted part from line 9 to line 15 is the usage of the single-layer RNN. Here we use the pre-defined RNN process function in PaddlePaddle. In this function, for each time step, the RNN passes through an LSTM network.
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequence_layer_group.conf
:language: python
:lines: 38-63
:linenos:
@@ -85,7 +85,7 @@ Secondly, let's look at the model configuration of hierarchical RNN which has th
* Till now, :code:`lstm_last` has the same result as :code:`lstm_last` in the single-layer RNN configuration.
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequence_nest_layer_group.conf
:language: python
:lines: 38-64
:linenos:
@@ -107,7 +107,7 @@ We select the different parts between single-layer RNN and hierarchical RNN conf
- single-layer RNN: passes through a simple recurrent_group. For each time step, the current input y and the last time step's output rnn_state pass through a fully-connected layer.
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequence_rnn.conf
:language: python
:lines: 36-48
@@ -116,7 +116,7 @@ We select the different parts between single-layer RNN and hierarchical RNN conf
- The recurrent_group of the inner layer's inner_step is nearly the same as the single-layer sequence, except for the case of boot_layer=outer_mem, which means using the outer layer's outer_mem as the initial state for the inner layer's memory. In the outer layer's outer_step, outer_mem is the last vector of a subsequence, that is, the whole hierarchical group uses the last vector of the previous subsequence as the initial state for the next subsequence's memory.
- From the aspect of the input data, sentences from the single-layer and the hierarchical RNN are the same. The only difference is that the hierarchical RNN disassembles the sequence into subsequences. So in the hierarchical RNN configuration, we must use the last element of the previous subsequence as a boot_layer for the memory of the next subsequence, so that it stays consistent with "every time step uses the output of the last time step" in the single-layer RNN configuration.
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequence_nest_rnn.conf
:language: python
:lines: 39-66
@@ -134,7 +134,7 @@ Example 3: hierarchical RNN with unequal length inputs
**Unequal length inputs** means that in the multiple input sequences of recurrent_group, the lengths of subsequences can be unequal. The output sequence, however, needs to be consistent with one of the input sequences. Using :red:`targetInlink` you can specify which of the input sequences the output sequence is consistent with; by default it is the first input.
The configurations of Example 3 are `sequence_rnn_multi_unequalength_inputs <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/legacy/gserver/tests/sequence_rnn_multi_unequalength_inputs.py>`_ and `sequence_nest_rnn_multi_unequalength_inputs <https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/legacy/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py>`_.
The data for the configurations of Example 3's single-layer RNN and hierarchical RNN are exactly the same.
@@ -152,14 +152,14 @@ Similar to Example 2's configuration, Example 3's configuration uses single-laye
* single-layer RNN:
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequence_rnn_multi_unequalength_inputs.py
:language: python
:lines: 42-59
:linenos:
* hierarchical RNN:
.. literalinclude:: ../../../../paddle/legacy/gserver/tests/sequence_nest_rnn_multi_unequalength_inputs.py
:language: python
:lines: 41-80
:linenos:
......
@@ -16,7 +16,7 @@ package pserver
// #cgo CFLAGS: -I ../../
// #cgo LDFLAGS: ${SRCDIR}/client/c/libpaddle_go_optimizer.a -lstdc++ -lm
// #include "paddle/legacy/optimizer/optimizer.h"
// #include <stdlib.h>
// #include <string.h>
import "C"
......
if(NOT WITH_FLUID_ONLY)
add_subdirectory(legacy/cuda)
add_subdirectory(legacy/function)
add_subdirectory(utils)
add_subdirectory(legacy/math)
add_subdirectory(legacy/gserver)
add_subdirectory(legacy/parameter)
if(MOBILE_INFERENCE)
add_subdirectory(capi)
else()
add_subdirectory(legacy/pserver)
add_subdirectory(trainer)
add_subdirectory(scripts)
......
@@ -15,7 +15,7 @@ limitations under the License. */
#include "PaddleAPI.h"
#include "PaddleAPIPrivate.h"
#include "paddle/legacy/parameter/Argument.h"
size_t Arguments::getSlotNum() const { return m->outputs.size(); }
......
@@ -16,7 +16,7 @@ limitations under the License. */
#include "PaddleAPIPrivate.h"
#include "Internal.h"
#include "paddle/legacy/gserver/gradientmachines/NeuralNetwork.h"
std::vector<int> GradientMachine::defaultParamTypes = {
PARAMETER_VALUE, PARAMETER_GRADIENT, PARAMETER_MOMENTUM};
......
@@ -12,12 +12,12 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "paddle/legacy/math/Matrix.h"
#include <cstring>
#include <iostream>
#include "PaddleAPI.h"
#include "paddle/legacy/math/CpuSparseMatrix.h"
#include "paddle/legacy/math/SparseMatrix.h"
struct MatrixPrivate {
std::shared_ptr<paddle::Matrix> mat;
......
@@ -19,7 +19,7 @@ limitations under the License. */
#include <stdexcept>
#include <string>
#include <vector>
#include "paddle/legacy/gserver/gradientmachines/GradientMachine.h"
#include "paddle/utils/Common.h"
#include "paddle/utils/GlobalConstants.h"
......
@@ -14,9 +14,9 @@ limitations under the License. */
#pragma once
#include <memory>
#include "PaddleAPI.h"
#include "paddle/legacy/gserver/evaluators/Evaluator.h"
#include "paddle/legacy/gserver/gradientmachines/GradientMachine.h"
#include "paddle/legacy/parameter/ParameterUpdaterBase.h"
#include "paddle/trainer/TrainerConfigHelper.h"
struct GradientMachinePrivate {
......
@@ -12,7 +12,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "paddle/legacy/parameter/Parameter.h"
#include "PaddleAPI.h"
#include "PaddleAPIPrivate.h"
......
@@ -12,7 +12,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */
#include "paddle/legacy/parameter/ParameterOptimizer.h"
#include <algorithm>
#include "Internal.h"
#include "PaddleAPI.h"
......
@@ -17,8 +17,8 @@ limitations under the License. */
#include <sstream>
#include <vector>
#include "PaddleAPI.h"
#include "paddle/legacy/gserver/gradientmachines/GradientMachine.h"
#include "paddle/legacy/parameter/Argument.h"
#include "paddle/utils/Flags.h"
// used to represent partial sequence
......
@@ -19,7 +19,7 @@ limitations under the License. */
#include <atomic>
#include <memory>
#include "paddle/legacy/gserver/gradientmachines/NeuralNetwork.h"
#include "paddle/trainer/ParamUtil.h"
#include "paddle/trainer/Trainer.h"
#include "paddle/trainer/TrainerInternal.h"
......
@@ -14,7 +14,7 @@ limitations under the License. */
#include "PaddleAPI.h"
#include "paddle/legacy/parameter/Parameter.h"
#include "paddle/utils/Common.h"
#include "paddle/utils/Flags.h"
#include "paddle/utils/PythonUtil.h"
......
@@ -14,7 +14,7 @@ limitations under the License. */
#include "PaddleAPI.h"
#include "paddle/legacy/math/Vector.h"
#include <cstring>
......
@@ -13,10 +13,10 @@ See the License for the specific language governing permissions and
limitations under the License. */
#include "capi.h"
#include "paddle/legacy/gserver/gradientmachines/GradientMachine.h"
#include "paddle/legacy/math/Matrix.h"
#include "paddle/legacy/math/Vector.h"
#include "paddle/legacy/parameter/Argument.h"
#pragma once
namespace paddle {
......
@@ -14,7 +14,7 @@ limitations under the License. */
#include "gradient_machine.h"
#include "capi_private.h"
#include "paddle/legacy/gserver/gradientmachines/NeuralNetwork.h"
#define cast(v) paddle::capi::cast<paddle::capi::CGradientMachine>(v)
......
@@ -13,7 +13,7 @@ See the License for the specific language governing permissions and
limitations under the License. */
#include <gtest/gtest.h>
#include <paddle/legacy/gserver/gradientmachines/GradientMachine.h>
#include <paddle/trainer/TrainerConfigHelper.h>
#include <stdlib.h>
#include <string.h>
......
@@ -51,7 +51,7 @@ It can be used as a helper class that draws the modified graph after each pass.
## Utilities
There are some helper functions/classes for analysis.
- [dot.h](./dot.h) gives an easy-to-use interface for generating `DOT` code,
- [graph_traits.h](./graph_traits.h) contains the graph traversal algorithms; it uses `iterator` to make the algorithms easy to share across different passes.
@@ -17,7 +17,7 @@ limitations under the License. */
#include <immintrin.h>
#include "paddle/fluid/operators/math/detail/activation_functions.h"
// TODO(qingqing) refine this dependence
#include "paddle/legacy/cuda/src/avx_mathfun.h"
namespace paddle {
namespace operators {
......
gserver/tests/Sequence/tour_train_wdseg
gserver/tests/Sequence/tour_train_wdseg.nest
@@ -207,7 +207,7 @@ typedef struct {
#ifdef __NVCC__
#include <cuda_runtime.h>
#include "paddle/legacy/cuda/include/hl_cuda.h"
#include "paddle/utils/Logging.h"
extern __thread bool g_sync_flag;
......
(18 further file diffs are collapsed in this view.)