Commit aa2f76fd authored by Xin Pan

move trainer

Parent: da158c22
@@ -52,7 +52,7 @@ In `trainer_internal.cpp:L93 trainOneBatch`:
When doing actual network forward and backward, at the beginning of each batch, the trainer will try to download one row of data from pserver.
-In `trainer/RemoteParameterUpdater.cpp`: `parameterUpdater_->getParametersRemote();`:
+In `legacy/trainer/RemoteParameterUpdater.cpp`: `parameterUpdater_->getParametersRemote();`:
```c++
if (fullSize) {
......
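The `fullSize` branch above belongs to the per-batch parameter synchronization this design note describes: pull the latest parameters from the pserver before forward/backward, push gradients afterwards. As a purely illustrative aid (not PaddlePaddle's actual API; `PServerClient`, `pull_all`, and `push_gradients` are hypothetical names), the control flow can be sketched as:

```python
# Conceptual sketch of per-batch remote parameter synchronization.
# The class and method names are hypothetical; they only illustrate the flow
# around parameterUpdater_->getParametersRemote().

class SketchRemoteParameterUpdater(object):
    def __init__(self, pserver_client):
        self.pserver = pserver_client   # hypothetical parameter-server client
        self.params = {}

    def get_parameters_remote(self):
        # Download the latest full parameter values before the batch starts.
        self.params = self.pserver.pull_all()

    def finish_batch(self, grads):
        # Send the locally computed gradients back for the remote update.
        self.pserver.push_gradients(grads)


def train_one_batch(updater, network, batch):
    updater.get_parameters_remote()     # mirrors getParametersRemote()
    loss, grads = network.forward_backward(batch, updater.params)
    updater.finish_batch(grads)
    return loss
```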
@@ -18,20 +18,20 @@ Figure 1. PaddlePaddle on IA
The detailed completion status can be found [here](https://github.com/PaddlePaddle/Paddle/projects/21).
## Contents
- [Overview](#overview)
- [Actions](#actions)
    - [CMake](#cmake)
    - [Matrix](#matrix)
    - [Layers](#layers)
    - [Activations](#activations)
    - [Parameters](#parameters)
    - [Gradients](#gradients)
    - [Unit Tests](#unit-tests)
    - [Python API](#python-api)
    - [Benchmarking](#benchmarking)
    - [Others](#others)
- [Design Concerns](#design-concerns)
## Overview
@@ -218,20 +218,20 @@ if use_mkldnn
We summarize a few points that require special attention:
1. Use **deviceId_**. To add as few new variables or functions to the parent `Layer` class as possible,
we decided to reuse the existing `deviceId_` variable to distinguish layer properties, defining `-2` as the device ID reserved for `MKLDNNLayer`.
2. Override the parent `Layer`'s **init** function and set `deviceId_` to `-2`, indicating that the layer runs in the MKL-DNN environment.
3. Create `MKLDNNBase`, which defines the classes and functions beyond the layer- and memory-related ones,
including the `MKLDNNStream` and `CPUEngine` used by MKL-DNN, and possibly `FPGAEngine` and others in the future.
4. If an MKL-DNN layer is followed by a CPU device, `output_.value` and `extOutVal_` share memory
and the data format is `NCHW`, so the next CPU device receives correct data (a minimal sketch of this idea follows below).
When a plain CPU layer is present, the formats of `extOutVal_` and `extOutGrad_` are always `NCHW` or `NC`.
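To make point 4 concrete, here is a minimal, purely illustrative sketch: the output is reordered to `NCHW` only when the consumer is a plain CPU layer. The function names and the simple permutation-based layouts are assumptions of this note, not the real `MKLDNNLayer` code, which keeps blocked internal layouts and uses `mkldnn::reorder`.

```python
# Illustrative sketch only: a "layout" here is just an axis-permutation string,
# and reorder_to_nchw stands in for mkldnn::reorder.
import numpy as np

MKLDNN_DEVICE_ID = -2   # the special deviceId_ reserved for MKLDNNLayer


def publish_output(value, internal_layout, next_device_id):
    """Return (value, layout) as seen by the next layer."""
    if next_device_id == MKLDNN_DEVICE_ID:
        # The next layer also understands the internal layout: no reorder.
        return value, internal_layout
    # A plain CPU layer follows: hand over an NCHW view of the same data.
    return reorder_to_nchw(value, internal_layout), "NCHW"


def reorder_to_nchw(value, layout):
    # Assumes `layout` is a permutation of "NCHW", e.g. "NHWC".
    perm = [layout.index(axis) for axis in "NCHW"]
    return np.transpose(value, perm)


# Example: an output stored as NHWC is converted before a CPU layer reads it.
x = np.zeros((8, 16, 4, 3))                       # N, H, W, C
y, fmt = publish_output(x, "NHWC", next_device_id=0)
assert fmt == "NCHW" and y.shape == (8, 3, 16, 4)
```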
## References
1. The [MKL small library](https://github.com/01org/mkl-dnn#linking-your-application) is a subset of [Intel MKL](https://software.intel.com/en-us/mkl).
It mainly contains the deep-learning-related math primitives and operations, and is generally updated together with each new MKL-DNN [release](https://github.com/01org/mkl-dnn/releases).
2. [MKL-DNN System Requirements](https://github.com/01org/mkl-dnn#system-requirements).
Currently, PaddlePaddle uses MKL-DNN only on machines that support the AVX2 instruction set or above.
3. The [original proposal](https://github.com/PaddlePaddle/Paddle/pull/3096) would have introduced information about the **nextLayer**.
However, in PaddlePaddle, neither the layers before the refactoring nor the ops after it are meant to know anything about the next layer/op.
4. MKL-DNN's high-performance formats differ from PaddlePaddle's original `NCHW` (the cuDNN part of PaddlePaddle also uses `NCHW`, so this issue does not arise there).
A conversion method is therefore needed, and the format should be converted only when necessary in order to get the best performance out of MKL-DNN.
@@ -339,7 +339,7 @@ If you are creating a new file for the test, such as :code:`paddle/legacy/gserve
Implement Python Wrapper
========================
-Implementing Python wrapper allows us to use the added layer in configuration files. All the Python wrappers are in file :code:`python/paddle/trainer/config_parser.py`. An example of the Python wrapper for fully connected layer is listed below. It has the following steps:
+Implementing Python wrapper allows us to use the added layer in configuration files. All the Python wrappers are in file :code:`python/paddle/legacy/trainer/config_parser.py`. An example of the Python wrapper for fully connected layer is listed below. It has the following steps:
- Use :code:`@config_layer('fc')` at the decorator for all the Python wrapper class. :code:`fc` is the identifier of the layer.
- Implements :code:`__init__` constructor function.
......
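For orientation, the wrapper pattern those two steps describe looks roughly like the following self-contained sketch; the registry, `LayerBase`, and `FCLayerSketch` below are illustrative stand-ins, not the real classes in `config_parser.py`.

```python
# Minimal stand-alone sketch of the "decorator registers a wrapper class,
# __init__ forwards to the base class" pattern described above.

_layer_registry = {}


def config_layer(identifier):
    """Decorator that registers a wrapper class under a layer identifier."""
    def register(cls):
        _layer_registry[identifier] = cls
        return cls
    return register


class LayerBase(object):
    def __init__(self, name, layer_type, size, inputs):
        self.name, self.type, self.size, self.inputs = name, layer_type, size, inputs


@config_layer('fc')
class FCLayerSketch(LayerBase):
    def __init__(self, name, size, inputs, bias=True):
        super(FCLayerSketch, self).__init__(name, 'fc', size, inputs)
        self.bias = bias   # layer-specific settings beyond name/size/inputs


assert _layer_registry['fc'] is FCLayerSketch
```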
@@ -10,7 +10,7 @@ if(NOT WITH_FLUID_ONLY)
add_subdirectory(legacy/capi)
else()
add_subdirectory(legacy/pserver)
-add_subdirectory(trainer)
+add_subdirectory(legacy/trainer)
add_subdirectory(scripts)
if(WITH_C_API)
......
@@ -14,7 +14,7 @@ limitations under the License. */
#include "PaddleAPI.h"
#include "PaddleAPIPrivate.h"
-#include "paddle/trainer/Trainer.h"
+#include "paddle/legacy/trainer/Trainer.h"
struct ParameterConfigPrivate {
paddle::ParameterPtr parameter;
......
@@ -17,7 +17,7 @@ limitations under the License. */
#include "paddle/legacy/gserver/evaluators/Evaluator.h"
#include "paddle/legacy/gserver/gradientmachines/GradientMachine.h"
#include "paddle/legacy/parameter/ParameterUpdaterBase.h"
-#include "paddle/trainer/TrainerConfigHelper.h"
+#include "paddle/legacy/trainer/TrainerConfigHelper.h"
struct GradientMachinePrivate {
std::shared_ptr<paddle::GradientMachine> machine;
......
@@ -16,10 +16,10 @@ limitations under the License. */
#include "PaddleAPIPrivate.h"
#ifndef PADDLE_WITHOUT_GOLANG
-#include "paddle/trainer/NewRemoteParameterUpdater.h"
+#include "paddle/legacy/trainer/NewRemoteParameterUpdater.h"
#endif
-#include "paddle/trainer/RemoteParameterUpdater.h"
+#include "paddle/legacy/trainer/RemoteParameterUpdater.h"
-#include "paddle/trainer/ThreadParameterUpdater.h"
+#include "paddle/legacy/trainer/ThreadParameterUpdater.h"
ParameterUpdater::ParameterUpdater() : m(new ParameterUpdaterPrivate()) {}
......
@@ -20,9 +20,9 @@ limitations under the License. */
#include <memory>
#include "paddle/legacy/gserver/gradientmachines/NeuralNetwork.h"
-#include "paddle/trainer/ParamUtil.h"
+#include "paddle/legacy/trainer/ParamUtil.h"
-#include "paddle/trainer/Trainer.h"
+#include "paddle/legacy/trainer/Trainer.h"
-#include "paddle/trainer/TrainerInternal.h"
+#include "paddle/legacy/trainer/TrainerInternal.h"
#include "paddle/utils/Flags.h"
using paddle::real;
......
@@ -18,7 +18,7 @@ limitations under the License. */
#include <vector>
#include "capi_private.h"
#include "main.h"
-#include "paddle/trainer/TrainerConfigHelper.h"
+#include "paddle/legacy/trainer/TrainerConfigHelper.h"
#include "paddle/utils/Excepts.h"
#include "paddle/utils/PythonUtil.h"
......
@@ -14,7 +14,7 @@ limitations under the License. */
#include <gtest/gtest.h>
#include <paddle/legacy/gserver/gradientmachines/GradientMachine.h>
-#include <paddle/trainer/TrainerConfigHelper.h>
+#include <paddle/legacy/trainer/TrainerConfigHelper.h>
#include <stdlib.h>
#include <string.h>
#include <type_traits>
......
@@ -15,7 +15,7 @@ limitations under the License. */
#include "MKLDNNTester.h"
#include "paddle/legacy/gserver/layers/MKLDNNBase.h"
#include "paddle/legacy/gserver/layers/MKLDNNLayer.h"
-#include "paddle/trainer/Trainer.h"
+#include "paddle/legacy/trainer/Trainer.h"
namespace paddle {
......
@@ -14,7 +14,7 @@ limitations under the License. */
#include <paddle/utils/PythonUtil.h>
-#include "paddle/trainer/Trainer.h"
+#include "paddle/legacy/trainer/Trainer.h"
#include <gtest/gtest.h>
#include <paddle/legacy/pserver/ParameterServer2.h>
......
@@ -17,7 +17,7 @@ limitations under the License. */
#include <algorithm>
#include <cstdlib>
-#include "paddle/trainer/Trainer.h"
+#include "paddle/legacy/trainer/Trainer.h"
using namespace paddle;  // NOLINT
using namespace std;     // NOLINT
......
@@ -15,8 +15,8 @@ limitations under the License. */
#include <gtest/gtest.h>
#include <vector>
#include "ModelConfig.pb.h"
+#include "paddle/legacy/trainer/Trainer.h"
#include "paddle/testing/TestUtil.h"
-#include "paddle/trainer/Trainer.h"
using namespace paddle;  // NOLINT
using namespace std;     // NOLINT
......
@@ -18,8 +18,8 @@ limitations under the License. */
#include <algorithm>
#include <cstdlib>
+#include "paddle/legacy/trainer/Trainer.h"
#include "paddle/testing/TestUtil.h"
-#include "paddle/trainer/Trainer.h"
#include "paddle/utils/Stat.h"
using namespace paddle;  // NOLINT
......
@@ -15,8 +15,8 @@ limitations under the License. */
#include <gtest/gtest.h>
#include <paddle/legacy/gserver/gradientmachines/GradientMachine.h>
#include <paddle/legacy/parameter/ParameterUpdateFunctions.h>
-#include <paddle/trainer/Trainer.h>
+#include <paddle/legacy/trainer/Trainer.h>
-#include <paddle/trainer/TrainerInternal.h>
+#include <paddle/legacy/trainer/TrainerInternal.h>
#include <paddle/utils/PythonUtil.h>
#include <paddle/utils/Util.h>
#include <paddle/utils/Version.h>
......
@@ -5,7 +5,7 @@ add_custom_target(copy_trainer_conf ALL DEPENDS sample_trainer_config.conf)
set(PYTHON_PATH
${PADDLE_SOURCE_DIR}/paddle/.set_python_path.sh -d
-${PADDLE_BINARY_DIR}/python/:${PADDLE_BINARY_DIR}/paddle/trainer/tests)
+${PADDLE_BINARY_DIR}/python/:${PADDLE_BINARY_DIR}/paddle/legacy/trainer/tests)
function(trainer_test TARGET)
add_unittest_without_exec(${TARGET} ${TARGET}.cpp)
add_test(NAME ${TARGET}
@@ -33,5 +33,5 @@ endif()
#################### test_config_parser #########################
add_test(NAME test_config_parser
COMMAND ${PYTHON_PATH} ${PYTHON_EXECUTABLE}
-${PADDLE_SOURCE_DIR}/paddle/trainer/tests/config_parser_test.py
+${PADDLE_SOURCE_DIR}/paddle/legacy/trainer/tests/config_parser_test.py
WORKING_DIRECTORY ${PADDLE_BINARY_DIR}/paddle/)
@@ -15,9 +15,9 @@
from paddle.trainer.config_parser import parse_config_and_serialize
if __name__ == '__main__':
-parse_config_and_serialize('trainer/tests/test_config.conf', '')
+parse_config_and_serialize('legacy/trainer/tests/test_config.conf', '')
parse_config_and_serialize(
-'trainer/tests/sample_trainer_config.conf',
+'legacy/trainer/tests/sample_trainer_config.conf',
'extension_module_name=paddle.trainer.config_parser_extension')
parse_config_and_serialize(
'legacy/gserver/tests/pyDataProvider/trainer.conf', '')
legacy/trainer/tests/pydata_provider_wrapper_dir/test_pydata_provider_wrapper.data
legacy/trainer/tests/sample_data.txt
@@ -16,13 +16,13 @@
from paddle.trainer_config_helpers import *
TrainData(SimpleData(
-files = "trainer/tests/sample_filelist.txt",
+files = "legacy/trainer/tests/sample_filelist.txt",
feat_dim = 3,
context_len = 0,
buffer_capacity = 1000000))
TestData(SimpleData(
-files = "trainer/tests/sample_filelist.txt",
+files = "legacy/trainer/tests/sample_filelist.txt",
feat_dim = 3,
context_len = 0,
buffer_capacity = 1000000))
......
@@ -17,7 +17,7 @@
from paddle.trainer_config_helpers import *
TrainData(SimpleData(
-files = "trainer/tests/sample_filelist.txt",
+files = "legacy/trainer/tests/sample_filelist.txt",
feat_dim = 3,
context_len = 0,
buffer_capacity = 1000000,
......
@@ -16,13 +16,13 @@
from paddle.trainer_config_helpers import *
TrainData(SimpleData(
-files = "trainer/tests/sample_filelist.txt",
+files = "legacy/trainer/tests/sample_filelist.txt",
feat_dim = 3,
context_len = 0,
buffer_capacity = 1000000))
TestData(SimpleData(
-files = "trainer/tests/sample_filelist.txt",
+files = "legacy/trainer/tests/sample_filelist.txt",
feat_dim = 3,
context_len = 0,
buffer_capacity = 1000000))
......
@@ -63,8 +63,8 @@ beam_gen_concat = recurrent_group(name="rnn_gen_concat",
seqtext_printer_evaluator(input=beam_gen_concat,
id_input=sent_id,
-dict_file="./trainer/tests/test_gen_dict.txt",
+dict_file="./legacy/trainer/tests/test_gen_dict.txt",
-result_file="./trainer/tests/dump_text.test")
+result_file="./legacy/trainer/tests/dump_text.test")
#outputs(beam_gen_concat)
# In this config, as dummy_data_input doesn't work on beam_gen (we can find dummy_memory
# is read-only memory, and isn't used by other layers of step), we show the Inputs and Outputs
......
@@ -56,8 +56,8 @@ beam_gen = beam_search(name="rnn_gen",
seqtext_printer_evaluator(input=beam_gen,
id_input=sent_id,
-dict_file="./trainer/tests/test_gen_dict.txt",
+dict_file="./legacy/trainer/tests/test_gen_dict.txt",
-result_file="./trainer/tests/dump_text.test")
+result_file="./legacy/trainer/tests/dump_text.test")
#outputs(beam_gen)
# In this config, as dummy_data_input doesn't work on beam_gen (we can find dummy_memory
# is read-only memory, and isn't used by other layers of step), we show the Inputs and Outputs
......
@@ -16,7 +16,7 @@ from paddle.trainer_config_helpers import *
settings(batch_size=17, learning_method=AdaGradOptimizer(), learning_rate=1e-4)
-file_list = 'trainer/tests/fake_file_list.list'
+file_list = 'legacy/trainer/tests/fake_file_list.list'
define_py_data_sources2(
train_list=file_list,
......
@@ -14,7 +14,7 @@ limitations under the License. */
#include <paddle/utils/PythonUtil.h>
-#include "paddle/trainer/Trainer.h"
+#include "paddle/legacy/trainer/Trainer.h"
#include <gtest/gtest.h>
#include <cstdlib>
@@ -22,7 +22,8 @@ limitations under the License. */
using namespace paddle;  // NOLINT
using namespace std;     // NOLINT
-static const string& configFile = "trainer/tests/sample_trainer_config.conf";
+static const string& configFile =
+    "/legacy/trainer/tests/sample_trainer_config.conf";
DECLARE_int32(gpu_id);
DECLARE_bool(use_gpu);
......
@@ -26,7 +26,7 @@ limitations under the License. */
#include "picojson.h"
void checkValue(std::vector<paddle::Argument>& arguments, picojson::array& arr);
-const std::string kDir = "./trainer/tests/pydata_provider_wrapper_dir/";
+const std::string kDir = "./legacy/trainer/tests/pydata_provider_wrapper_dir/";
TEST(PyDataProviderWrapper, SequenceData) {
paddle::DataConfig conf;
......
@@ -14,18 +14,19 @@ limitations under the License. */
#include <paddle/utils/PythonUtil.h>
#include <paddle/utils/Version.h>
-#include "paddle/trainer/Trainer.h"
+#include "paddle/legacy/trainer/Trainer.h"
#include <gtest/gtest.h>
using namespace paddle;  // NOLINT
using namespace std;     // NOLINT
-static const string& configFile1 = "trainer/tests/sample_trainer_config.conf";
+static const string& configFile1 =
+    "legacy/trainer/tests/sample_trainer_config.conf";
static const string& configFile2 =
-    "trainer/tests/sample_trainer_config_hsigmoid.conf";
+    "legacy/trainer/tests/sample_trainer_config_hsigmoid.conf";
static const string& configFile4 =
-    "trainer/tests/sample_trainer_config_parallel.conf";
+    "legacy/trainer/tests/sample_trainer_config_parallel.conf";
DECLARE_bool(use_gpu);
DECLARE_string(config);
......
@@ -14,8 +14,8 @@ limitations under the License. */
#include <paddle/utils/GlobalConstants.h>
#include <paddle/utils/PythonUtil.h>
-#include "paddle/trainer/Trainer.h"
+#include "paddle/legacy/trainer/Trainer.h"
-#include "paddle/trainer/TrainerInternal.h"
+#include "paddle/legacy/trainer/TrainerInternal.h"
#include <gtest/gtest.h>
#include <paddle/legacy/pserver/ParameterServer2.h>
@@ -23,12 +23,13 @@ limitations under the License. */
using namespace paddle;  // NOLINT
using namespace std;     // NOLINT
-static const string& configFile1 = "trainer/tests/sample_trainer_config.conf";
+static const string& configFile1 =
+    "legacy/trainer/tests/sample_trainer_config.conf";
static const string& configFile2 =
-    "trainer/tests/sample_trainer_config_parallel.conf";
+    "legacy/trainer/tests/sample_trainer_config_parallel.conf";
static const string& configFileSimpleSparse =
-    "trainer/tests/simple_sparse_neural_network.py";
+    "legacy/trainer/tests/simple_sparse_neural_network.py";
DECLARE_bool(use_gpu);
DECLARE_string(config);
......
@@ -16,7 +16,7 @@
from paddle.trainer_config_helpers import *
TrainData(SimpleData(
-files = "trainer/tests/sample_filelist.txt",
+files = "legacy/trainer/tests/sample_filelist.txt",
feat_dim = 3,
context_len = 0,
buffer_capacity = 1000000,
......
@@ -14,7 +14,7 @@ limitations under the License. */
#include <fstream>
-#include <paddle/trainer/Trainer.h>
+#include <paddle/legacy/trainer/Trainer.h>
#include <paddle/utils/PythonUtil.h>
#include <gtest/gtest.h>
@@ -22,13 +22,15 @@ limitations under the License. */
using namespace paddle;  // NOLINT
using namespace std;     // NOLINT
-static const string& CONFIG_FILE = "trainer/tests/sample_trainer_rnn_gen.conf";
+static const string& CONFIG_FILE =
+    "legacy/trainer/tests/sample_trainer_rnn_gen.conf";
static const string& NEST_CONFIG_FILE =
-    "trainer/tests/sample_trainer_nest_rnn_gen.conf";
+    "legacy/trainer/tests/sample_trainer_nest_rnn_gen.conf";
-static const string& OUTPUT_DIR = "trainer/tests/dump_text.test";
+static const string& OUTPUT_DIR = "legacy/trainer/tests/dump_text.test";
-static string modelDir = "trainer/tests/rnn_gen_test_model_dir/t1";  // NOLINT
+static string modelDir =
+    "legacy/trainer/tests/rnn_gen_test_model_dir/t1";  // NOLINT
static string expectFile =  // NOLINT
-    "trainer/tests/rnn_gen_test_model_dir/r1.test";  // NOLINT
+    "legacy/trainer/tests/rnn_gen_test_model_dir/r1.test";  // NOLINT
DECLARE_string(config_args);
......
trainer/tests/pydata_provider_wrapper_dir/test_pydata_provider_wrapper.data
@@ -67,7 +67,7 @@ extension_module_name=[MODULE_NAME], then config_parser will call
MODULE_NAME.get_config_funcs(g_config)
MODULE_NAME.get_config_funcs() should return a dictionary of name to functions,
those functions will be available in the config file.
-See trainer/tests/config_parser_test.py for example
+See legacy/trainer/tests/config_parser_test.py for example
To use this from paddle_trainer, paddle_trainer should be called with
--config_args=extension_module_name=[MODULE_NAME]
......
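The extension mechanism quoted in the hunk above expects `MODULE_NAME.get_config_funcs(g_config)` to return a dictionary mapping names to functions. A minimal sketch of such an extension module might look like the following; the module name and the helper functions are made up for illustration.

```python
# my_config_extension.py -- hypothetical module name, passed to paddle_trainer
# via --config_args=extension_module_name=my_config_extension.
# config_parser calls get_config_funcs(g_config) once and exposes every
# function in the returned dict inside the configuration file.

def get_config_funcs(g_config):
    def log_config_note(msg):
        # Trivial helper that becomes callable from the config file.
        print('config note: %s' % msg)

    def scaled_size(base, factor=2):
        # Another illustrative helper, e.g. for computing layer sizes.
        return int(base * factor)

    return {
        'log_config_note': log_config_note,
        'scaled_size': scaled_size,
    }
```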
@@ -93,8 +93,8 @@ if '${CMAKE_SYSTEM_PROCESSOR}' not in ['arm', 'armv7-a', 'aarch64']:
paddle_bins = ''
if '${WITH_FLUID_ONLY}'== 'OFF':
paddle_bin_dir = 'opt/paddle/bin'
-paddle_bins = ['${PADDLE_BINARY_DIR}/paddle/trainer/paddle_trainer',
+paddle_bins = ['${PADDLE_BINARY_DIR}/paddle/legacy/trainer/paddle_trainer',
-'${PADDLE_BINARY_DIR}/paddle/trainer/paddle_merge_model',
+'${PADDLE_BINARY_DIR}/paddle/legacy/trainer/paddle_merge_model',
'${PADDLE_BINARY_DIR}/paddle/legacy/pserver/paddle_pserver_main',
'${PADDLE_BINARY_DIR}/paddle/scripts/paddle']
......
@@ -4,7 +4,7 @@ TOTAL_ERRORS=0
# The trick to remove deleted files: https://stackoverflow.com/a/2413151
for file in $(git diff --cached --name-status | awk '$1 != "D" {print $2}'); do
-if [[ $file =~ ^(paddle/api/.*|paddle/capi/.*|paddle/contrib/.*|paddle/legacy/cuda/.*|paddle/legacy/function/.*|paddle/legacy/gserver/.*|paddle/legacy/math/.*|paddle/legacy/optimizer/.*|paddle/legacy/parameter/.*|paddle/legacy/pserver/.*|paddle/trainer/.*|paddle/utils/.*|paddle/testing/TestUtil.*) ]]; then
+if [[ $file =~ ^(paddle/legacy/api/.*|paddle/legacy/capi/.*|paddle/contrib/.*|paddle/legacy/cuda/.*|paddle/legacy/function/.*|paddle/legacy/gserver/.*|paddle/legacy/math/.*|paddle/legacy/optimizer/.*|paddle/legacy/parameter/.*|paddle/legacy/pserver/.*|paddle/legacy/trainer/.*|paddle/utils/.*|paddle/testing/TestUtil.*) ]]; then
continue;
else
cpplint --filter=-readability/fn_size $file;
......